Compare commits

1707 Commits

Author SHA1 Message Date
37c2b57791 simplify 2023-06-22 14:51:27 -04:00
bcd3cb645f use BASE and TOKEN from OpenAPI if they are set 2023-06-22 14:47:55 -04:00
22c337b1aa Update UI To Use New Model Manager (#3548)
PR for the Model Manager UI work related to 3.0

[DONE]

- Update ModelType Config names to be specific so that the front end can
parse them correctly.
- Rebuild frontend schema to reflect these changes.
- Update Linear UI Text To Image and Image to Image to work with the new
model loader.
- Updated the ModelInput component in the Node Editor to work with the
new changes.

[TODO REMEMBER]

- Add proper types for ModelLoaderType in `ModelSelect.tsx`

[TODO] 

- Everything else.
2023-06-22 22:06:26 +12:00
339e7ce213 feat(ui): initial implementation of model loading
- Update model listing code to use `rtk-query`
- Update all graph generation to use new `pipeline_model_loader` node
2023-06-22 17:48:57 +10:00
2a178f5a25 chore(ui): regen api client 2023-06-22 17:48:13 +10:00
1bc170727b tidy(nodes): rename sd_model_loader to pipeline_model_loader
this is more accurate because it can also handle e.g. Kandinsky
2023-06-22 17:47:58 +10:00
3722cdf5d6 chore(ui): regen api client 2023-06-22 17:36:20 +10:00
42a59aa147 feat(nodes): add sd_model_loader node
Loads any pipeline model.

Also introduced is `PipelineModelField`, which includes a model name and base model.
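
A minimal sketch of what `PipelineModelField` might look like as a pydantic model, per the description above (the enum values are assumptions for illustration):

```
from enum import Enum
from pydantic import BaseModel, Field

class BaseModelType(str, Enum):
    # Assumed values -- the commit only says a base model is included.
    StableDiffusion1 = "sd-1"
    StableDiffusion2 = "sd-2"

class PipelineModelField(BaseModel):
    """Identifies a pipeline model by name and base model."""
    model_name: str = Field(description="Name of the model")
    base_model: BaseModelType = Field(description="Base model the pipeline was trained on")
```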
2023-06-22 17:36:05 +10:00
b937b7da01 feat(models): update model manager service & route to return list of models 2023-06-22 17:34:12 +10:00
21245a0fb2 Set model type to a const value in the openapi schema; add model format enums to the model schema (as they are not referenced in the case of a Literal definition) 2023-06-22 16:51:53 +10:00
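A sketch of the distinction this entry describes, assuming pydantic/FastAPI-style schema generation (names illustrative): a `Literal` field is emitted inline as a const value, so a format enum must be a real `Enum` class to appear as a named, referenced schema that client generators can import.

```
from enum import Enum
from typing import Literal
from pydantic import BaseModel

class ModelFormat(str, Enum):
    # A real Enum class is emitted as its own referenced schema component.
    diffusers = "diffusers"
    checkpoint = "checkpoint"

class MainModelConfig(BaseModel):
    type: Literal["main"] = "main"   # emitted inline as a const value
    model_format: ModelFormat        # emitted as a $ref to ModelFormat
```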
da566b59e8 Update model format field to use enums 2023-06-22 16:51:53 +10:00
e4dc9c5a04 Rename format to model_format (still named format when working with the config) 2023-06-22 16:51:53 +10:00
aceadacad4 Remove default model logic 2023-06-22 16:51:53 +10:00
d3dec59cc3 tweak: UI colors 2023-06-22 16:51:53 +10:00
6c98700740 fix: Adjust the Scheduler select width
So the long names do not get cut off.
2023-06-22 16:51:53 +10:00
c4c3c96062 Revert "feat: Port Schedulers to Mantine"
This reverts commit e0c105f413.
2023-06-22 16:51:35 +10:00
6256be480c fix: Remove type from Model type name 2023-06-22 16:48:35 +10:00
7033071934 fix: Deserialization key issue 2023-06-22 16:48:35 +10:00
e48528bbef revert: getModels to receivedModels 2023-06-22 16:48:35 +10:00
6bdf68dd4c feat: Port Schedulers to Mantine 2023-06-22 16:48:35 +10:00
0c3616229e cleanup: Updated model slice names to be more descriptive
Basically updated all slices to be more descriptive in their names. Did so in order to make sure there's a good naming scheme available for secondary models.
2023-06-22 16:43:14 +10:00
604cc1adcd wip: Move Model Selector to own file 2023-06-22 16:43:14 +10:00
4847212d5b feat: Enable 2.x Model Generation in Linear UI 2023-06-22 16:43:14 +10:00
727293d722 fix: 2.1 models breaking generation
Co-Authored-By: StAlKeR7779 <7768370+StAlKeR7779@users.noreply.github.com>
2023-06-22 16:42:59 +10:00
d2f3500e1b chore: Rebuild API - base_model and type added 2023-06-22 16:42:59 +10:00
ef83a2fffe Add name, base_model, type fields to model info 2023-06-22 16:42:51 +10:00
f8d7477c7a wip: Add 2.x Models to the Model List 2023-06-22 16:42:51 +10:00
e374211313 chore: Rebuild API with new Model API names 2023-06-22 16:41:31 +10:00
01d17601b8 Generate config names for openapi 2023-06-22 16:41:19 +10:00
bf0d5f4cfc fix: Update missing name types to new names 2023-06-22 16:41:02 +10:00
663f4935f5 chore: Rebuild API 2023-06-22 16:41:02 +10:00
9838dda1b7 chore: Update model config type names 2023-06-22 16:40:40 +10:00
2d889e133d chore(ui): regen api client 2023-06-22 16:25:49 +10:00
6779f1a5ad fix(db): update models for boards w/ nullable deleted_at 2023-06-22 16:25:49 +10:00
19a6e5dad8 chore(ui): regen api client 2023-06-22 16:25:49 +10:00
285195bf72 feat(api): add get_board route 2023-06-22 16:25:49 +10:00
10008859a4 tidy(ui): remove all refs to boards thunks 2023-06-22 16:25:49 +10:00
3c04340f3f tidy(ui): tidy up update image board modal 2023-06-22 16:25:49 +10:00
79f0c4d3c4 feat(ui): add remove from board to image context menu 2023-06-22 16:25:49 +10:00
37d4e05838 fix(ui): fix board's image list not updating when image removed from board 2023-06-22 16:25:49 +10:00
a00ad6ac03 feat(ui): dropping image on All Images board removes it from board 2023-06-22 16:25:49 +10:00
2ffead000c tidy(ui): remove console.log() 2023-06-22 16:25:49 +10:00
922319cb84 fix(ui): fix first added board doesn't show until refresh
Had incorrect `invalidatesTags` array for the mutation.
2023-06-22 16:25:49 +10:00
6ee0e197bb feat(db): add deleted_at to board_images 2023-06-22 16:25:49 +10:00
d3e6f0130c fix(ui): fix issue with gallery not letting you load more images
To determine whether the Load More button should be enabled, we need to keep track of how many images are left to load for a given board or category.

The Assets tab doesn't work, though. Need to figure out a better way to handle this.
2023-06-22 16:25:49 +10:00
421c23d3ea fix(ui): fix gallery image fetching for board categories 2023-06-22 16:25:49 +10:00
4545f3209f fix(ui): fix bug with image deletion not removing image from gallery 2023-06-22 16:25:49 +10:00
e2ee8102c2 tidy(db): tidy image_record_storage.py 2023-06-22 16:25:49 +10:00
083a0fc4cf tidy(ui): remove references to boardsAdapter 2023-06-22 16:25:49 +10:00
26b75b85f7 fix(ui): if deleting selected board, deselect it 2023-06-22 16:25:49 +10:00
f560a462a0 feat(ui): rudimentary categorized gallery image fetching 2023-06-22 16:25:49 +10:00
d501986610 chore(ui): regen api client 2023-06-22 16:25:49 +10:00
67a75f6895 feat(api, db): support board_id filter on images service get_many() 2023-06-22 16:25:49 +10:00
3c032c0767 feat(ui): only auto-add image to board if is not intermediate 2023-06-22 16:25:49 +10:00
abd6561140 feat(ui): just fetch all boards instead of paginating them 2023-06-22 16:25:49 +10:00
bd533426fc feat(ui): first pass at boards styling 2023-06-22 16:25:49 +10:00
2489d5459f chore(ui): regen api client 2023-06-22 16:25:49 +10:00
ac477cf5d6 fix(ui): improve image deletion handling 2023-06-22 16:25:49 +10:00
be3bdae847 fix: resolve rebase conflicts 2023-06-22 16:25:49 +10:00
3e0ee838cf fix(ui): add initial image dimensions to state
We need to access the initial image dimensions during the creation of the `ImageToImage` graph to determine if we need to resize the image.

Because the `initialImage` is now just an image name, we need to either store (easy) or dynamically retrieve its dimensions during graph creation (a bit less easy).

Took the easiest path. May need to revise this in the future.
2023-06-22 16:25:49 +10:00
8d3bec57d5 feat(ui): store only image name in parameters
Images that are used as parameters (e.g. init image, canvas images) are stored as full `ImageDTO` objects in state, separate from and duplicating any object representing those same objects in the `imagesSlice`.

We cannot store only image names as parameters, then pull the full `ImageDTO` from `imagesSlice`, because if an image is not on a loaded page, it doesn't exist in `imagesSlice`. For example, if you scroll down a few pages in the gallery and send that image to canvas, on reloading the app, the canvas will be unable to load that image.

We solved this temporarily by storing the full `ImageDTO` object wherever it was needed, but this is both inefficient and allows for stale `ImageDTO`s across the app.

One other possible solution was to just fetch the `ImageDTO` for all images at startup, and insert them into the `imagesSlice`, but then we run into an issue where we are displaying images in the gallery totally out of context.

For example, if an image from several pages into the gallery was sent to canvas, and the user refreshes, we'd display the first 20 images in gallery. Then to populate the canvas, we'd fetch that image we sent to canvas and add it to `imagesSlice`. Now we'd have 21 images in the gallery: 1 to 20 and whichever image we sent to canvas. Weird.

Using `rtk-query` solves this by allowing us to very easily fetch individual images in the components that need them, and not directly interact with `imagesSlice`.

This commit changes all references to images-as-parameters to store only the name of the image, and not the full `ImageDTO` object. Then, we use an `rtk-query` generated `useGetImageDTOQuery()` hook in each of those components to fetch the image.

We can use cache invalidation when we mutate any image to trigger automated re-running of the query and all the images are automatically kept up to date.

This also obviates the need for the convoluted URL fetching scheme for images that are used as parameters. The `imagesSlice` still needs this handling, unfortunately.
2023-06-22 16:25:49 +10:00
cfda128e06 feat(ui): wip boards via rtk-query 2023-06-22 16:25:49 +10:00
661a94b3de feat(db): add get_all() method for boards
This is needed to show the full list of boards in the update boards modal.
2023-06-22 16:25:49 +10:00
9ef64016c7 feat(db): sort board by created_at 2023-06-22 16:25:49 +10:00
21f0d0b0c1 fix(db): fix deserialize_board_record()
It was not adding `cover_image_name`
2023-06-22 16:25:49 +10:00
8bce234542 feat(db): update image-board relationships on add
Functionally, `add_image_to_board()` now moves images between boards.
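
A sketch of the move-on-add behavior with `sqlite3` (table and column names assumed): because an image belongs to at most one board, an upsert keyed on the image replaces any existing association, which is effectively a move.

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE board_images (
        image_name TEXT PRIMARY KEY,
        board_id TEXT NOT NULL
    )"""
)

def add_image_to_board(board_id: str, image_name: str) -> None:
    # Upsert: a second add with a different board replaces the first.
    conn.execute(
        """INSERT INTO board_images (image_name, board_id) VALUES (?, ?)
           ON CONFLICT(image_name) DO UPDATE SET board_id = excluded.board_id""",
        (image_name, board_id),
    )

add_image_to_board("board-a", "img-1.png")
add_image_to_board("board-b", "img-1.png")  # moves img-1.png to board-b
```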
2023-06-22 16:25:49 +10:00
daadf6ebfd feat(ui): add board image count badge 2023-06-22 16:25:49 +10:00
fe10a9f747 render cover image based on URL in image entities 2023-06-22 16:25:49 +10:00
7a2d3f628a add boardToAddTo state so that result can be added to board when generation is complete 2023-06-22 16:25:49 +10:00
4defb92105 handle long board names 2023-06-22 16:25:49 +10:00
f9f3c91a83 drag and drop to move image to board, a bit of board list UI 2023-06-22 16:25:49 +10:00
95b9c8e505 return cover_image_name since urls change, override one from db for now 2023-06-22 16:25:49 +10:00
49a02c157b feat(ui): fix UpdateImageBoardModal select 2023-06-22 16:25:49 +10:00
d604d986f9 feat(db, api): update get_board_for_image & service dependencies
- previously was `get_boards_for_image`, returning a list of `BoardDTO`, now returns a single `board_id`
2023-06-22 16:25:49 +10:00
70cc037a9c fix(ui): do not persist boards 2023-06-22 16:25:49 +10:00
e4893e4031 fix(db): return board records from CRUD methods 2023-06-22 16:25:49 +10:00
4a0a718b96 foiled by a comma 2023-06-22 16:25:49 +10:00
ca8f1a7828 (api) use most recently generated image for cover photo 2023-06-22 16:25:49 +10:00
2e41af2109 [half-baked] adding image to board modal 2023-06-22 16:25:49 +10:00
bd29e5e655 UI tweaks 2023-06-22 16:25:49 +10:00
dcfee2e1e4 add searching to boards list 2023-06-22 16:25:49 +10:00
8aac683319 can delete and rename boards 2023-06-22 16:25:49 +10:00
d306a84447 feat(ui): rough out boards UI 2023-06-22 16:25:49 +10:00
5865ecd530 feat(db): add FK for boards.cover_image_name 2023-06-22 16:25:49 +10:00
e1f9685b02 feat(db): add index for boards 2023-06-22 16:25:49 +10:00
498bf0d0ba feat(db): add indices for board_images 2023-06-22 16:25:49 +10:00
163ef2c941 feat(ui): remove refs to BoardRecord in UI
UI should only work w/ BoardDTO
2023-06-22 16:25:49 +10:00
48193b7fa7 chore(ui): regen api client 2023-06-22 16:25:49 +10:00
dd1b3c9f35 fix(api): update API models to use BoardDTOs 2023-06-22 16:25:49 +10:00
4b32322a58 feat(nodes): make board <> images a one-to-many relationship
we can extend this to many-to-many in the future if desired.
2023-06-22 16:25:49 +10:00
e06c43adc8 lint fix 2023-06-22 16:25:49 +10:00
c009f46b00 regenerate api schema 2023-06-22 16:25:49 +10:00
748016bdab routes working 2023-06-22 16:25:49 +10:00
72e9ced889 feat(nodes): add boards and board_images services 2023-06-22 16:25:49 +10:00
3833304f57 [WIP] board list endpoint w cover photos 2023-06-22 16:25:49 +10:00
4bfaae6617 fix type 2023-06-22 16:25:49 +10:00
499a174832 some more 2023-06-22 16:25:49 +10:00
6ca5ad9075 filter images by board_id 2023-06-22 16:25:49 +10:00
a121e6b3a0 add board_id association to image 2023-06-22 16:25:49 +10:00
207602f425 remove unused 2023-06-22 16:25:49 +10:00
a1671519d5 board CRUD 2023-06-22 16:25:49 +10:00
257e972599 fix failing pytest for config module 2023-06-20 13:26:01 -04:00
d339c8627f feat: Upgrade to Diffusers 0.17.1 (#3545)
Just syncing up with diffusers upstream.
2023-06-19 23:25:22 +12:00
a53e0dce6c Merge branch 'upgrade-diffusers' of https://github.com/blessedcoolant/InvokeAI into upgrade-diffusers 2023-06-19 23:21:06 +12:00
0ae6325353 chore: Add torchsde as a dependency for the SDE schedulers 2023-06-19 23:20:53 +12:00
12299120ab Merge branch 'main' into upgrade-diffusers 2023-06-19 23:16:39 +12:00
1a7fe172ca Fix inpaint node to new manager (#3550)
Inpaint node is still used by canvas, so it was updated to use the new
model manager API. Other old generation code was deleted.
2023-06-19 23:01:05 +12:00
4f5693040e Merge branch 'main' into fix/inpaint_new_manager 2023-06-19 22:55:00 +12:00
bb2df88c06 Add dpmpp_sde and dpmpp_2m_sde schedulers(with karras) (#3554)
Added SDE schedulers.
Problem: they add randomness on each step, so to get a consistent image
we need to provide a seed or generator. I've done that, but if you think
it's better done another way, feel free to change it.

Also made the ancestral schedulers reproducible, done the same way as
for the SDE schedulers.
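
A minimal sketch of the reproducibility idea using diffusers (the loop and the zero "prediction" are stand-ins, not the actual node code): pass a seeded `torch.Generator` to every `scheduler.step()` so the noise injected by ancestral/SDE schedulers is deterministic.

```
import torch
from diffusers import EulerAncestralDiscreteScheduler

scheduler = EulerAncestralDiscreteScheduler()
scheduler.set_timesteps(num_inference_steps=20)

generator = torch.Generator(device="cpu").manual_seed(1234)  # fixed seed
latents = torch.randn((1, 4, 64, 64), generator=generator)

for t in scheduler.timesteps:
    noise_pred = torch.zeros_like(latents)  # stand-in for the UNet prediction
    # The seeded generator makes the injected noise, and thus the result, repeatable.
    latents = scheduler.step(noise_pred, t, latents, generator=generator).prev_sample
```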
2023-06-19 22:52:33 +12:00
41442eb7f6 feat(ui): convert canvas txt2img & img2img to latents
- Add graph builders for canvas txt2img & img2img - they are mostly copy and paste from the linear graph builders but different in a few ways that are very tricky to work around. Just made totally new functions for them.
- Canvas txt2img and img2img support ControlNet (not inpaint/outpaint). There's no way to determine in real-time which mode the canvas is in just yet, so we cannot disable the ControlNet UI when the mode will be inpaint/outpaint - it will always display. It's possible to determine this in near-real-time, will add this at some point.
- Canvas inpaint/outpaint migrated to use model loader, though inpaint/outpaint are still using the non-latents nodes.
2023-06-19 15:57:28 +10:00
223a679ac1 chore(ui): regen api client 2023-06-19 15:57:28 +10:00
3c60616b4d feat(ui): simplify linear graph creation logic
Instead of manually creating every node and edge, we can simply copy/paste the base graph from node editor, then sub in parameters.

This is a much more intelligible process. We still need to handle seed, img2img fit and controlnet separately.
2023-06-19 15:57:28 +10:00
a01998d095 Remove more old logic 2023-06-19 15:57:28 +10:00
7b35162b9e Remove old logic except for inpaint, add support for lora and ti to inpaint node 2023-06-19 15:57:28 +10:00
c26e1a9271 Rewrite inpaint node to new model manager, remove TextToImage and ImageToImage nodes 2023-06-19 15:57:28 +10:00
9b32407744 Provide generator to all schedulers step function to make both ancestral and sde schedulers reproducible 2023-06-19 00:34:01 +03:00
f3d9797ebe Add dpmpp_sde and dpmpp_2m_sde schedulers(with karras) 2023-06-18 23:38:15 +03:00
f312e1448f Update index.md
fixed typo
2023-06-18 10:39:02 -04:00
a11946f0ad feat: Port Schedulers to Mantine (#3552)
- Ports Schedulers to use IAIMantineSelect.
- Adds ability to favorite schedulers in Settings. Favorited schedulers
show up at the top of the list.
- Adds IAIMantineMultiSelect component.
- Change SettingsSchedulers component to use IAIMantineMultiSelect
instead of Chakra Menus.
2023-06-18 22:22:03 +12:00
80a8d3ef28 style: Theme placeholder style for IAIMantineMultiSelect 2023-06-18 22:17:09 +12:00
f4ca9d0e09 Merge branch 'scheduler-select' of https://github.com/blessedcoolant/InvokeAI into scheduler-select 2023-06-18 22:05:12 +12:00
a960fa009d fix: Fix some styling issues with IAIMantineMultiSelect 2023-06-18 22:04:12 +12:00
b96b95bc95 feat(ui): enabledSchedulers -> favoriteSchedulers 2023-06-18 20:01:05 +10:00
450641c414 fix(ui): enable all schedulers by default 2023-06-18 19:39:31 +10:00
94cfcdc411 feat(ui): improve scheduler selection logic
- remove UI-specific state (the enabled schedulers) from redux, instead derive it in a selector
- simplify logic by putting schedulers in an object instead of an array
- rename `activeSchedulers` to `enabledSchedulers`
- remove need for `useEffect()` when `enabledSchedulers` changes by adding a listener for the `enabledSchedulersChanged` action/event to `generationSlice`
- increase type safety by making `enabledSchedulers` an array of `SchedulerParam`, which is created by the zod schema for scheduler
2023-06-18 19:34:37 +10:00
150059f704 fix(ui): create all scheduler constants up-front 2023-06-18 18:49:10 +10:00
f1a8b9daee fix(ui): clarify scheduler logic
- use full conditional syntax with `{}`
- do not mutate `action.payload` in a reducer
2023-06-18 18:47:59 +10:00
be8c0bb952 feat: Use Labels for Schedulers 2023-06-18 20:17:51 +12:00
dae5b9b259 fix: Minor styling fix to the IAIMantineMultiSelect component 2023-06-18 20:06:56 +12:00
06428fac67 fix: Revert scheduler back to zod validation 2023-06-18 20:02:36 +12:00
59b5dfc3e0 feat: Port Schedulers to Mantine 2023-06-18 19:47:27 +12:00
fd981a90be Add lms and dpmpp2_s karras scheduler (#3551)
Karras sigmas support added to lms and dpmpp2_s schedulers in 0.17.0
diffusers.
2023-06-18 17:36:47 +12:00
6b7cf3f3be Add lms and dpmpp2_s karras scheduler 2023-06-17 21:00:16 +03:00
9d4b84ef68 feat: Upgrade to Diffusers 0.17.1 2023-06-16 23:57:57 +12:00
4cbc802e36 Model manager fixes (#3541)
Fix lora import
Fix sd2 config - `variant` field not added
Fix list models api - `base_model` arg not provided, redundant assert
check
2023-06-16 06:43:00 +12:00
5f2d07917d Fix lora import, fix sd2 config, fix list models api 2023-06-15 21:30:15 +03:00
5c740452f6 Model Manager rewrite (#3335) 2023-06-14 08:44:04 -07:00
82c2498043 Merge branch 'main' into lstein/new-model-manager 2023-06-14 08:41:40 -07:00
0497bea264 fix: add dynamicprompts to pyproject.toml 2023-06-15 01:05:16 +10:00
b8e32fa459 chore(ui): regen api client 2023-06-15 01:05:16 +10:00
34ebee67b7 fix(nodes): fix revert conflict 2023-06-15 01:05:16 +10:00
e0c998d192 Revert "feat(ui): add warning socket event handling"
This reverts commit e7a61e631a42190e4b64e0d5e22771c669c5b30c.
2023-06-15 01:05:16 +10:00
b51e9a6bdb Revert "feat(nodes): add warning socket event"
This reverts commit cefdd9d634e515239bd85666c872a0d64bb9d772.
2023-06-15 01:05:16 +10:00
09f396ce84 feat(ui): add warning socket event handling 2023-06-15 01:05:16 +10:00
abee37eab3 feat(nodes): add warning socket event 2023-06-15 01:05:16 +10:00
42e48b2bef feat(nodes): add dynamic prompt node 2023-06-15 01:05:16 +10:00
70ece4364c refactor(minor): Image & Latent File Storage (#3538)
- `DiskImageStorage` and `DiskLatentsStorage` have now both been updated
to exclusively work with `Path` objects and not rely on the `os` lib to
handle pathing related functions.
- We now also validate the existence of the required image output
folders and latent output folders to ensure that the app does not break
in case the required folders get tampered with mid-session.
- Just overall general cleanup.

Tested it. Nothing seems to be breaking.
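
A rough sketch of the `Path`-based validation described above (the thumbnails subfolder and method bodies are assumptions; the follow-up commit below makes `__validate_storage_folders()` private):

```
from pathlib import Path

class DiskImageStorage:
    def __init__(self, output_folder: str | Path) -> None:
        self.__output_folder: Path = Path(output_folder)
        self.__thumbnails_folder: Path = self.__output_folder / "thumbnails"
        self.__validate_storage_folders()

    def __validate_storage_folders(self) -> None:
        # Re-create the required folders if they were tampered with mid-session.
        for folder in (self.__output_folder, self.__thumbnails_folder):
            folder.mkdir(parents=True, exist_ok=True)
```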
2023-06-15 02:43:27 +12:00
f9d5f9d52c fix(nodes): minor fixes for folder validation
- fix type for `__output_folder`
- prefix `validate_storage_folders()` with `__` to indicate private method
2023-06-15 00:40:39 +10:00
d0ee3558d1 Merge branch 'main' into lstein/new-model-manager 2023-06-14 17:29:01 +03:00
587297878a refactor(minor): Latent Disk Storage 2023-06-15 02:21:49 +12:00
b4c998a9ae refactor(minor): Image File Storage 2023-06-15 01:58:58 +12:00
88e8e3977b feat(ui): update UI to not use image_origin
see commit 8ad8de8 (`feat(nodes): remove image_origin from most places`) for details.
2023-06-14 23:08:27 +10:00
24b86cffe9 chore(ui): regen api client & types 2023-06-14 23:08:27 +10:00
a1773197e9 feat(nodes): remove image_origin from most places
- remove `image_origin` from most places where we interact with images
- consolidate image file storage into a single `images/` dir

Images have an `image_origin` attribute but it is not actually used when retrieving images, nor will it ever be. It is still used when creating images and helps to differentiate between internally generated images and uploads.

It was included in eg API routes and image service methods as a holdover from the previous app implementation where images were not managed in a database. Now that we have images in a db, we can do away with this and simplify basically everything that touches images.

The one potentially controversial change is to no longer separate internal and external images on disk. If we retain this separation, we have to keep `image_origin` around in a number of spots, and it makes getting image paths on disk painful.

So I have gotten rid of this organisation. Images are now all stored in `images`, regardless of their origin. As we improve the image management features, this change will hopefully become transparent.
2023-06-14 23:08:27 +10:00
1e08d865c9 chore: dummy commit to trigger actions 2023-06-14 14:14:24 +10:00
f8bb650cc1 revert: IAIScrollArea 2023-06-14 14:14:24 +10:00
2cee8bebb2 fix(ui): revert offset scrollbars
The wonky padding is too janky. Just overlay for now.
2023-06-14 14:14:24 +10:00
ade4ec5fd8 fix(ui): fix crash when toggling pinned parameters panel 2023-06-14 14:14:24 +10:00
70ffd6b03f fix(ui): fix controlnet selects data types 2023-06-14 14:14:24 +10:00
6c551df311 fix(ui): fix rebase conflicts 2023-06-14 14:14:24 +10:00
24f605629e cleanup: Remove OverlayScrollable component 2023-06-14 14:14:24 +10:00
2af1ec9d02 fix: Minor padding issue in unpinned drawer 2023-06-14 14:14:24 +10:00
79d53341de fix: Stretch scroll area so it retains parent width 2023-06-14 14:14:24 +10:00
e40b3506c4 fix: Options squishing on accordion collapse 2023-06-14 14:14:24 +10:00
33912382e3 feat: Introduce Mantine's ScrollArea 2023-06-14 14:14:24 +10:00
d282810e53 cleanup: Remove IAICustomSelect and port types 2023-06-14 14:14:24 +10:00
9df502fc77 fix(ui): fix mantine select props 2023-06-14 14:14:24 +10:00
705573f0a8 feat(ui): even more pedantic mantine select theming 2023-06-14 14:14:24 +10:00
1878ea94f6 feat: Port Canvas Layer Select to IAIMantineSelect 2023-06-14 14:14:24 +10:00
4ba5086b9a feat(ui): add tooltip to IAIMantineSelect 2023-06-14 14:14:24 +10:00
4a991b4daa feat(ui): more pedantic mantine select theming 2023-06-14 14:14:24 +10:00
80474d26f9 feat(ui): mantine scrollbar theming 2023-06-14 14:14:24 +10:00
9a77bd9140 feat: Port IAISelect's to IAIMantineSelect's
Ported everything except Model Manager selects and the Canvas Layer Select (this needs tooltip support)
2023-06-14 14:14:24 +10:00
14cdc800c3 feat(ui): pedantic mantine select theming 2023-06-14 14:14:24 +10:00
9cfbea4c25 feat: Match styling of Mantine Select with InvokeAI 2023-06-14 14:14:24 +10:00
5fe674e223 feat: Standardize IAIMantineSelect Component 2023-06-14 14:14:24 +10:00
32200efce8 feat: Change default font to Inter 2023-06-14 14:14:24 +10:00
68a02da990 feat: Use Mantine Select for Scheduler 2023-06-14 14:14:24 +10:00
5b20766ea3 chore: Move Mantine Theme Override to own file 2023-06-14 14:14:24 +10:00
9a914250a0 feat: Change Model Select To Mantine 2023-06-14 14:14:24 +10:00
0e3106f631 feat: Add Mantine Support 2023-06-14 14:14:24 +10:00
6c5954f9d1 Add controlnet to model manager, fixes 2023-06-14 04:26:21 +03:00
740c05a0bb Save models on rescan, uncache model on edit/delete, fixes 2023-06-14 03:12:12 +03:00
26090011c4 Fix conflict resolve, add model configs to type annotation 2023-06-14 00:26:37 +03:00
c9ae26a176 Merge branch 'main' into lstein/new-model-manager 2023-06-13 23:37:52 +03:00
e7db6d8120 Fix ckpt and vae conversion, migrate script, remove sd2-base 2023-06-13 18:05:12 +03:00
a6af7e8824 use format "diffusers" rather than format "folder" in models.yaml 2023-06-13 01:43:05 -04:00
87ba17a1f5 add migration script and update convert and face restoration paths 2023-06-13 01:27:51 -04:00
c7ea46a5da use latest version of transformers to avoid deprecation warnings 2023-06-12 16:07:39 -04:00
1439dc7712 Add SchedulerPredictionType and ModelVariantType enums 2023-06-12 16:07:04 -04:00
46cac6468e Upgrade to Diffusers 0.17.0 (#3514)
Diffusers is due for an update soon. #3512

Opening up a PR now with the required changes for when the new version
is live.

I've tested it out on Windows and nothing has broken from what I could
tell. I'd like someone to run some tests on Linux / Mac just to make
sure. Refer to the PR above on how to test it or install the release
branch.

```
pip install diffusers[torch]==0.17.0
```

Feel free to push any other changes to this PR you see fit.
2023-06-13 07:11:02 +12:00
2a814d886b Merge branch 'main' into diffusers-upgrade 2023-06-13 05:29:15 +12:00
60a2fbec41 feat(ui): improve controlnet-related config types 2023-06-13 00:04:21 +10:00
f15a328b80 fix(ui): allow controlnet with preprocessed control image 2023-06-13 00:04:21 +10:00
811d9ab55a fix(ui): disable shouldAutoConfig switch while processing 2023-06-13 00:04:21 +10:00
e00fed5c46 feat(ui): support disabling controlnet models & processors 2023-06-13 00:04:21 +10:00
a3fa38b353 fix(ui): revert IAICustomSelect usage to IAISelect
There are some bugs with it that I cannot figure out related to `floating-ui` and `downshift`'s handling of refs.

Will need to revisit this component in the future.
2023-06-13 00:04:21 +10:00
2e42a4bdd9 feat(ui): disable controlnets during processing 2023-06-13 00:04:21 +10:00
36f72b5a49 fix(ui): check for valid controlnets before adding to graph 2023-06-13 00:04:21 +10:00
af42d7d347 feat(ui): support negative controlnet weights 2023-06-13 00:04:21 +10:00
8607b1994c fix(ui): fix crash when controlnet enabled but no controlnets added 2023-06-13 00:04:21 +10:00
36eb1bd893 Fixes 2023-06-12 16:14:09 +03:00
9fa78443de Fixes, add sd variant detection 2023-06-12 05:52:30 +03:00
893f776f1d model_probe working; model_install incomplete 2023-06-11 19:51:53 -04:00
e051c450ed fix: git stash (#3528) 2023-06-12 08:55:36 +12:00
50135b726e fix: git stash 2023-06-12 08:53:09 +12:00
085ab54124 remove modified models.py and migrate code to models/base.py 2023-06-11 16:10:15 -04:00
8e1a56875e remove defunct code 2023-06-11 12:57:06 -04:00
000626ab2e move all installation code out of model_manager 2023-06-11 12:51:50 -04:00
694fd0c92f Fixes, first runnable version 2023-06-11 16:42:40 +03:00
c647056287 Feat/easy param (#3504)
* Testing change to TextToLatents to allow setting different cfg_scale values per diffusion step.

* Adding first attempt at float param easing node, using Penner easing functions.

* Core implementation of ControlNet and MultiControlNet.

* Added support for ControlNet and MultiControlNet to legacy non-nodal Txt2Img in backend/generator. Although backend/generator will likely disappear by v3.x, right now they are very useful for testing core ControlNet and MultiControlNet functionality while node codebase is rapidly evolving.

* Added example of using ControlNet with legacy Txt2Img generator

* Resolving rebase conflict

* Added first controlnet preprocessor node for canny edge detection.

* Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node

* Switching to ControlField for output from controlnet nodes.

* Resolving conflicts in rebase to origin/main

* Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke())

* changes to base class for controlnet nodes

* Added HED, LineArt, and OpenPose ControlNet nodes

* Added an additional "raw_processed_image" output port to controlnets, mainly so one could route an ImageField to a ShowImage node

* Added more preprocessor nodes for:
      MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.

* Prep for splitting pre-processor and controlnet nodes

* Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes.

* Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.

* More rebase repair.

* Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port  ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this...

* Fixed use of ControlNet control_weight parameter

* Fixed lint-ish formatting error

* Refactored controlnet node to output ControlField that bundles control info.

* Cleaning up TextToLatent arg testing

* Cleaning up mistakes after rebase.

* Removed last bits of dtype and device hardwiring from controlnet section

* Refactored ControlNet support to consolidate multiple parameters into a data struct. Also redid how multiple controlnets are handled.

* Added support for specifying which step iteration to start and stop using each ControlNet (specified as a fraction of total steps)
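
A sketch of the begin/end idea (names assumed): convert the current step to a fraction of total steps and only apply a given ControlNet while that fraction falls inside its window.

```
def controlnet_active(step: int, total_steps: int,
                      begin_step_percent: float, end_step_percent: float) -> bool:
    # Fraction of the way through the denoising schedule, in [0, 1].
    frac = step / max(total_steps - 1, 1)
    return begin_step_percent <= frac <= end_step_percent

# Apply this ControlNet only during the first half of 30 steps:
applied = [controlnet_active(i, 30, 0.0, 0.5) for i in range(30)]
```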

* Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input.

* Added dependency on controlnet-aux v0.0.3

* Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it.

* Switched ControlNet node model name input from free text to a default list of popular ControlNet model names.

* Fix to work with current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle.

* Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier.

* Cleaning up after ControlNet refactor in TextToLatentsInvocation

* Extended node-based ControlNet support to LatentsToLatentsInvocation.

* chore(ui): regen api client

* fix(ui): add value to conditioning field

* fix(ui): add control field type

* fix(ui): fix node ui type hints

* fix(nodes): controlnet input accepts list or single controlnet

* Moved to controlnet_aux v0.0.4, reinstated Zoe controlnet preprocessor. Also in pyproject.toml  had to specify downgrade of timm to 0.6.13 _after_ controlnet-aux installs timm >= 0.9.2, because timm >0.6.13 breaks Zoe preprocessor.

* Added Mediapipe image processor for use as ControlNet preprocessor.
Also hacked in ability to specify HF subfolder when loading ControlNet models from string.

* Fixed bug where MediapipeFaceProcessorInvocation was ignoring max_faces and min_confidence params.

* Added nodes for float params: ParamFloatInvocation and FloatCollectionOutput. Also added FloatOutput.

* Added mediapipe install requirement. Should be able to remove once controlnet_aux package adds mediapipe to its requirements.

* Added float to FIELD_TYPE_MAP in constants.ts

* Progress toward improvement in fieldTemplateBuilder.ts  getFieldType()

* Fixed controlnet preprocessors and controlnet handling in TextToLatents to work with revised Image services.

* Cleaning up from merge, re-adding cfg_scale to FIELD_TYPE_MAP

* Making sure cfg_scale of type list[float] can be used in image metadata, to support param easing for cfg_scale

* Fixed math for per-step param easing.

* Added option to show plot of param value at each step

* Just cleaning up after adding param easing plot option, removing vestigial code.

* Modified the control_weight ControlNet param to be polymorphic -- it can now be either a single float weight applied for all steps, or a list of floats of size total_steps that specifies the weight for each step.
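
A sketch of resolving the polymorphic weight per step (hypothetical helper, not the actual node code):

```
from typing import Union

def weight_for_step(control_weight: Union[float, list[float]], step: int) -> float:
    # A bare float applies to every step; a list supplies one weight per step.
    if isinstance(control_weight, list):
        return control_weight[step]
    return control_weight

assert weight_for_step(0.8, 5) == 0.8
assert weight_for_step([1.0, 0.5, 0.0], 1) == 0.5
```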

* Added a more informative error message when _validate_edge() throws an error.

* Just improving the param easing bar chart title to include the easing type.

* Added requirement for easing-functions package

* Taking out some diagnostic prints.

* Added option to use both easing function and mirror of easing function together.

* Fixed recently introduced problem (when pulled in main), triggered by num_steps in StepParamEasingInvocation not having a default value -- just added default.

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-06-11 16:27:44 +10:00
738ba40f51 Fixes 2023-06-11 06:12:21 +03:00
3ce3a7ee72 Rewrite model configs, separate models 2023-06-11 04:49:09 +03:00
74b43c9bdf fix incorrect variable/typenames in model_cache 2023-06-10 10:41:48 -04:00
3d2ff7755e resolve conflicts 2023-06-10 10:13:54 -04:00
a87d52a389 resolve conflicts between lstein & sttalker changes 2023-06-10 09:59:19 -04:00
959e64c9b3 start removing repo_id support 2023-06-10 09:57:23 -04:00
2c056ead42 New models structure draft 2023-06-10 03:14:10 +03:00
30f20b55d5 fix logger behavior so that it is initialized after command line parsed (#3509)
In some cases the logger was being initialized before the command line
was parsed, causing the logger not to pick up custom logging
instructions from `--log_handlers`. This PR fixes the issue.
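
A minimal sketch of the ordering fix (flag handling simplified): read the command line first, then install handlers on the logger.

```
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--log_handlers", nargs="*", default=["console"])
args = parser.parse_args()

logger = logging.getLogger("InvokeAI")
logger.setLevel(logging.INFO)
for spec in args.log_handlers:      # handlers installed *after* parsing argv
    if spec == "console":
        logger.addHandler(logging.StreamHandler())

logger.info("logging configured from --log_handlers")
```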
2023-06-09 08:24:47 -07:00
1bca32ed16 Merge branch 'main' into lstein/fix-logger-reconfiguration 2023-06-09 06:27:26 -07:00
7f91139e21 fix(ui): fix crash when using dropdown on certain device resolutions 2023-06-09 22:19:30 +10:00
c53b7c7389 ui: misc fixes (#3525)
[fix(ui): blur tab on
click](93f3658a4a)

Fixes issue where after clicking a tab, using the arrow keys changes tab
instead of changing selected image

[fix(ui): fix canvas not filling screen on first
load](68be95acbb)

[feat(ui): remove clear temp folder canvas
button](813f79f0f9)

This button is nonfunctional.

Soon we will introduce a different way to handle clearing out
intermediate images (likely automated).
2023-06-09 23:44:21 +12:00
93f3658a4a fix(ui): blur tab on click
Fixes issue where after clicking a tab, using the arrow keys changes tab instead of changing selected image
2023-06-09 18:20:52 +10:00
68be95acbb fix(ui): fix canvas not filling screen on first load 2023-06-09 17:55:11 +10:00
813f79f0f9 feat(ui): remove clear temp folder canvas button
This button is nonfunctional.

Soon we will introduce a different way to handle clearing out intermediate images (likely automated).
2023-06-09 17:33:17 +10:00
c3ec86bc70 feat(ui): enhance IAICustomSelect (#3523)
Now accepts an array of strings or array of `IAICustomSelectOption`s.
This supports custom labels and tooltips within the select component.
2023-06-09 18:26:20 +12:00
05a19753c6 feat(ui): remove controlnet model descriptions
These are not yet exposed on the UI - somebody who understands what they do better can add them when we have a place to expose them
2023-06-09 16:20:30 +10:00
a33327c651 feat(ui): enhance IAICustomSelect
Now accepts an array of strings or array of `IAICustomSelectOption`s. This supports custom labels and tooltips within the select component.
2023-06-09 16:00:17 +10:00
6ad7cc4f2a feat(ui): decrease delay on dnd to 150ms (#3522) 2023-06-09 17:54:24 +12:00
c506355b8b feat(ui): decrease delay on dnd to 150ms 2023-06-09 15:53:17 +10:00
d54168b8fb feat(nodes): add tests for depth-first execution 2023-06-09 14:53:45 +10:00
c91b071c47 fix(nodes): use DFS with preorder traversal 2023-06-09 14:53:45 +10:00
9c57b18008 fix(nodes): update Invoker.invoke() docstring 2023-06-09 14:53:45 +10:00
69539a0472 feat(nodes): depth-first execution
There was an issue where for graphs w/ iterations, your images were output all at once, at the very end of processing. So if you canceled halfway through an execution of 10 nodes, you wouldn't get any images - even though you'd completed 5 images' worth of inference.

## Cause

Because graphs executed breadth-first (i.e. depth-by-depth), leaf nodes were necessarily processed last. For image generation graphs, your `LatentsToImage` will be leaf nodes, and be the last depth to be executed.

For example, a `TextToLatents` graph w/ 3 iterations would execute all 3 `TextToLatents` nodes fully before moving to the next depth, where the `LatentsToImage` nodes produce output images, resulting in a node execution order like this:

1. TextToLatents
2. TextToLatents
3. TextToLatents
4. LatentsToImage
5. LatentsToImage
6. LatentsToImage

## Solution

This PR makes two changes to graph execution so that it executes as deeply as it can along each branch of the graph.

### Eager node preparation

We now prepare as many nodes as possible, instead of just a single node at a time.

We also need to change the conditions in which nodes are prepared. Previously, nodes were prepared only when all of their direct ancestors were executed.

The updated logic prepares nodes that:
- are *not* `Iterate` nodes whose inputs have *not* been executed
- do *not* have any unexecuted `Iterate` ancestor nodes

This results in graphs always being maximally prepared.

### Always execute the deepest prepared node

We now choose the next node to execute by traversing from the bottom of the graph instead of the top, choosing the first node whose inputs are all executed.

This means we always execute the deepest node possible.

## Result

Graphs now execute depth-first, so instead of an execution order like this:

1. TextToLatents
2. TextToLatents
3. TextToLatents
4. LatentsToImage
5. LatentsToImage
6. LatentsToImage

... we get an execution order like this:

1. TextToLatents
2. LatentsToImage
3. TextToLatents
4. LatentsToImage
5. TextToLatents
6. LatentsToImage

Immediately after inference, the image is decoded and sent to the gallery.

fixes #3400
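
A toy sketch of the selection rule (not the actual session/graph code): walk each branch from its root as deeply as possible, returning the first node whose inputs have all been executed. With the example graph this yields the interleaved order above.

```
graph = {  # node -> children
    "TextToLatents-1": ["LatentsToImage-1"],
    "TextToLatents-2": ["LatentsToImage-2"],
    "TextToLatents-3": ["LatentsToImage-3"],
    "LatentsToImage-1": [], "LatentsToImage-2": [], "LatentsToImage-3": [],
}
parents = {n: [p for p, cs in graph.items() if n in cs] for n in graph}
executed: set[str] = set()

def next_node() -> str | None:
    # Descend each branch fully before starting the next one.
    for root in (n for n in graph if not parents[n]):
        stack = [root]
        while stack:
            n = stack.pop()
            if n not in executed and all(p in executed for p in parents[n]):
                return n
            stack.extend(c for c in graph[n] if c not in executed)
    return None

while (n := next_node()) is not None:
    executed.add(n)
    print(n)  # TextToLatents-1, LatentsToImage-1, TextToLatents-2, ...
```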
2023-06-09 14:53:45 +10:00
7bce455d16 Merge branch 'main' into diffusers-upgrade 2023-06-09 16:27:52 +12:00
3f45294c61 feat(ui): restore reset button for init image (#3521) 2023-06-09 16:02:26 +12:00
fd03c7eebe feat(ui): restore reset button for init image 2023-06-09 14:00:23 +10:00
07c49a5726 feat(ui): skip resize on img2img if not needed (#3520) 2023-06-09 15:56:22 +12:00
8c688f8e29 feat(ui): skip resize on img2img if not needed 2023-06-09 13:54:23 +10:00
887576d217 add directory scanning for loras, controlnets and textual_inversions 2023-06-08 23:11:53 -04:00
6652f3405b merge with main 2023-06-08 21:08:43 -04:00
3d13167d32 Merge branch 'main' into lstein/fix-logger-reconfiguration 2023-06-08 13:41:24 -07:00
f2bb507ebb allow logger to be reconfigured after startup 2023-06-08 09:23:11 -04:00
fe8f3381fc create databases directory on startup (#3518)
This PR creates the databases directory at app startup time. It also
removes a couple of debugging statements that were inadvertently left in
the model manager.
2023-06-08 23:40:32 +12:00
2a6d11e645 create databases directory on startup 2023-06-08 07:17:54 -04:00
01f46d3c7d Merge branch 'main' into lstein/fix-logger-reconfiguration 2023-06-07 19:50:44 -07:00
5f76b62553 Update installer support for main (#3448)
#  Make InvokeAI package installable by mere mortals
    
This commit makes InvokeAI 3.0 installable via PyPI and/or the
installer script. The install process is now pretty much identical to
the 2.3 process, including creating launcher scripts `invoke.sh` and
`invoke.bat`.
    
Main changes:
    
1. Moved static web pages into `invokeai/frontend/web` and modified the
API to look for them there. This allows pip to copy the files into the
distribution directory so that the user no longer has to be in the repo
root to launch, and enables PyPI installation with `pip install invokeai`
    
2. Update invoke.sh and invoke.bat to launch the new web application
properly. This also changes the wording for launching the CLI from
"generate images" to "explore the InvokeAI node system," since I would
not recommend using the CLI to generate images routinely.
    
3. Fix a bug in the checkpoint converter script that was identified
during testing.
    
4. Better error reporting when checkpoint converter fails.
    
5. Rebuild front end.

# Major improvements to the model installer.

1. The text user interface for `invokeai-model-install` has been
expanded to allow the user to install controlnet, LoRA, textual
inversion, diffusers and checkpoint models. The user can install
interactively (without leaving the TUI), or in batch mode after exiting
the application.
 

![image](https://github.com/invoke-ai/InvokeAI/assets/111189/f8f7ac23-3e18-4973-b7fe-729864c703a0)

2. The `invokeai-model-install` command now lets you list, add and
delete models from the command line:

## Listing models
```
$ invokeai-model-install --list diffusers
Diffuser models:
analog-diffusion-1.0      not loaded  diffusers  An SD-1.5 model trained on diverse analog photographs (2.13 GB)
d&d-diffusion-1.0         not loaded  diffusers  Dungeons & Dragons characters (2.13 GB)
deliberate-1.0            not loaded  diffusers  Versatile model that produces detailed images up to 768px (4.27 GB)
DreamShaper               not loaded  diffusers  Imported diffusers model DreamShaper
sd-inpainting-1.5         not loaded  diffusers  RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)
sd-inpainting-2.0         not loaded  diffusers  Stable Diffusion version 2.0 inpainting model (5.21 GB)
stable-diffusion-1.5      not loaded  diffusers  Stable Diffusion version 1.5 diffusers model (4.27 GB)
stable-diffusion-2.1      not loaded  diffusers  Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)
```

```
$ invokeai-model-install --list tis
Loading Python libraries...

Installed Textual Inversion Embeddings:
   EasyNegative
   ahx-beta-453407d
```

## Installing models

(this example shows correct handling of a server side error at Civitai)
```
$ invokeai-model-install --diffusers https://civitai.com/api/download/models/46259 Linaqruf/anything-v3.0
Loading Python libraries...

[2023-06-05 22:17:23,556]::[InvokeAI]::INFO --> INSTALLING EXTERNAL MODELS
[2023-06-05 22:17:23,557]::[InvokeAI]::INFO --> Probing https://civitai.com/api/download/models/46259 for import
[2023-06-05 22:17:23,557]::[InvokeAI]::INFO --> https://civitai.com/api/download/models/46259 appears to be a URL
[2023-06-05 22:17:23,763]::[InvokeAI]::ERROR --> An error occurred during downloading /home/lstein/invokeai-test/models/ldm/stable-diffusion-v1/46259: Internal Server Error
[2023-06-05 22:17:23,763]::[InvokeAI]::ERROR --> ERROR DOWNLOADING https://civitai.com/api/download/models/46259: {"error":"Invalid database operation","cause":{"clientVersion":"4.12.0"}}
[2023-06-05 22:17:23,764]::[InvokeAI]::INFO --> Probing Linaqruf/anything-v3.0 for import
[2023-06-05 22:17:23,764]::[InvokeAI]::DEBUG --> Linaqruf/anything-v3.0 appears to be a HuggingFace diffusers repo_id
[2023-06-05 22:17:23,768]::[InvokeAI]::INFO --> Loading diffusers model from Linaqruf/anything-v3.0
[2023-06-05 22:17:23,769]::[InvokeAI]::DEBUG --> Using faster float16 precision
[2023-06-05 22:17:23,883]::[InvokeAI]::ERROR --> An unexpected error occurred while downloading the model: 404 Client Error. (Request ID: Root=1-647e9733-1b0ee3af67d6ac3456b1ebfc)

Revision Not Found for url: https://huggingface.co/Linaqruf/anything-v3.0/resolve/fp16/model_index.json.
Invalid rev id: fp16)
Downloading (…)ain/model_index.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 511/511 [00:00<00:00, 2.57MB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 472/472 [00:00<00:00, 6.13MB/s]
Downloading (…)cheduler_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 341/341 [00:00<00:00, 3.30MB/s]
Downloading (…)okenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 807/807 [00:00<00:00, 11.3MB/s]
```

## Deleting models

```
 invokeai-model-install --delete --diffusers anything-v3
Loading Python libraries...

[2023-06-05 22:19:45,927]::[InvokeAI]::INFO --> Processing requested deletions
[2023-06-05 22:19:45,927]::[InvokeAI]::INFO --> anything-v3...
[2023-06-05 22:19:45,927]::[InvokeAI]::INFO --> Deleting the cached model directory for Linaqruf/anything-v3.0
[2023-06-05 22:19:45,948]::[InvokeAI]::WARNING --> Deletion of this model is expected to free 4.3G

```
2023-06-07 19:25:07 -07:00
4bbe3b0d00 Merge branch 'main' into release/make-web-dist-startable 2023-06-07 19:21:01 -07:00
9ed86a08f1 multiple small fixes
1. Contents of autoscan directory field are restored after doing an installation.
2. Activate dialogue to choose V2 parameterization when importing from a directory.
3. Remove autoscan directory from init file when its checkbox is unselected.
4. Add widget cycling behavior to install models form.
2023-06-07 17:32:00 -04:00
68405910ba Upgrade to Diffusers 0.17.0 2023-06-08 04:42:52 +12:00
0a50e2638c fix(ui): default controlnet autoprocess to true (#3513)
I had accidentally defaulted it to false
2023-06-08 01:56:53 +12:00
fc7c5da4dd fix(ui): default controlnet autoprocess to true
I had accidentally defaulted it to false
2023-06-07 23:55:24 +10:00
a3357e073c refactor exception handling 2023-06-07 07:35:34 -04:00
d114833a12 pause after printing exception 2023-06-07 07:26:14 -04:00
96038bd075 print exception on TUI crash 2023-06-07 07:23:14 -04:00
2f383c2598 docs(nodes): update INVOCATIONS.md (#3511) 2023-06-07 20:47:57 +12:00
702a8d1f72 docs(nodes): update INVOCATIONS.md 2023-06-07 18:44:43 +10:00
0a8390356f feat(ui): enhance autoprocessing
The processor is automatically selected when model is changed.

But if the user manually changes the processor, processor settings, or disables the new `Auto configure processor` switch, auto processing is disabled.

The user can enable auto configure by turning the switch back on.

When auto configure is disabled, a small dot is overlaid on the expand button to remind the user that the system is not auto configuring the processor for them.

If auto configure is enabled, the processor settings are reset to the default for the selected model.
2023-06-07 18:25:30 +10:00
844058c0a5 feat(ui): make prompt not required
- also change the placeholder text
2023-06-07 18:25:30 +10:00
7d74cbe29c fix(ui): make progress image not draggable 2023-06-07 18:25:30 +10:00
62ac0ed2dc feat(ui): tweak cnet model change
If there is no control image, and the model does not have a default processor, set the processor to `none`.
2023-06-07 18:25:30 +10:00
ae14adec2a feat(ui): add reset button for control image 2023-06-07 18:25:30 +10:00
6c2b39d1df feat(ui): improve controlnet image style
css is terrible
2023-06-07 18:25:30 +10:00
0843028e6e fix(ui): improve dragging activation
- delay of 250ms
- prevent gallery images from accidentally activating native drag and drop
2023-06-07 18:25:30 +10:00
de0fd87035 fix(ui): when a session errors, reset controlnet processing spinner 2023-06-07 18:25:30 +10:00
8b6c0be259 feat(ui): fix IAIDndImage button styles when upload disabled 2023-06-07 18:25:30 +10:00
58fec84858 feat(ui): add upload to IAIDndImage
Add uploading to IAIDndImage
- add `postUploadAction` arg to `imageUploaded` thunk, with several current valid options (set control image, set init, set nodes image, set canvas, or toast)
- updated IAIDndImage to optionally allow click to upload
2023-06-07 18:25:30 +10:00
f223ad7776 fix(ui): only show loading indicator on processing control images 2023-06-07 18:25:30 +10:00
00eabf630d fix(ui): fix control image not used if processor type is none 2023-06-07 18:25:30 +10:00
6245a27650 feat(ui): auto-select controlnet processor
- when the controlnet model is changed, if there is a default processor for the model set, the processor is changed.
- once a control image is selected (and processed), changing the model does not change the processor - must be manually changed
2023-06-07 18:25:30 +10:00
fa1ac57c90 Graph overlay was expanding off the screen to the size of the prompt line (#3510)
sure this isn't really important at the moment

just limited the width and gave it a shadow

![image](https://github.com/invoke-ai/InvokeAI/assets/115216705/96e2db0a-9edb-48b8-9040-56ce054b5ecf)
2023-06-07 18:01:35 +12:00
0f16b1c98d Remove Shadow 2023-06-07 15:51:37 +10:00
08e66c5451 Update NodeGraphOverlay.tsx
Graph overlay was expanding off the screen to the size of the prompt
2023-06-07 14:49:03 +10:00
563bf70c95 fix CI failure in configure non-interactive mode; merged with main 2023-06-06 23:24:40 -04:00
49d29420c4 Merge branch 'main' into release/make-web-dist-startable 2023-06-06 23:24:16 -04:00
ae9d0c6c1b fix logger behavior so that it is initialized after command line parsed 2023-06-06 23:19:10 -04:00
04f9757f8d prevent crash when trying to calculate size of missing safety_checker
- Also fixed up order in which logger is created in invokeai-web
  so that handlers are installed after command-line options are
  parsed (and not before!)
2023-06-06 22:57:49 -04:00
1f9e1eb964 merge with main 2023-06-06 22:18:41 -04:00
d8d11f9bbb quench fp16 rev id not found warning 2023-06-06 22:01:05 -04:00
13fa0d3bc0 make log message textbox deeper 2023-06-06 17:23:13 -04:00
5eeb4b8e06 allow user to abort conversion of V2 models from within TUI 2023-06-06 17:21:50 -04:00
f5044c290d fix crash during model conversion 2023-06-06 17:05:29 -04:00
1b43276e5d make widget selection wrap around 2023-06-06 13:53:11 -07:00
294f086857 configure/install working correctly on windows11 2023-06-06 12:51:34 -07:00
e5024bf5e9 fix conhost launch-with args 2023-06-06 15:17:15 -04:00
79198b4bba feat(ui): fix bugs with image deletion (#3506)
- `imageUsage` object was always stale due to react component lifecycle,
fixed this
- cleaned up the deletion listener and context
2023-06-07 05:33:05 +12:00
1a2f0984db Merge branch 'main' into feat/ui/fix-stale-imageUsage 2023-06-07 04:35:16 +12:00
454683e6eb feat(ui): update image urls on connect (#3507)
* feat(ui): update image urls on connect

Add `updateImageUrlsOnConnect` RTK listener:
- requests URLs for *every* image the app knows about, on connect: gallery, selectedImage, initialImage, canvas images, nodes images, controlnet images
- only fires when `shouldUpdateImagesOnConnect` config is enabled

* remove prop

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-06-06 10:23:51 -04:00
bbb2a08e8f feat(ui): fix bugs with image deletion
- `imageUsage` object was always stale due to react component lifecycle, fixed this
- cleaned up the deletion listener and context
2023-06-06 20:01:27 +10:00
bf116927e1 feat(ui): clear features if image used by them is deleted
This handles the case when an image is deleted but is still in use as e.g. an init image on canvas, or as a control image. If we just delete the image, canvas/controlnet/etc may break (the image would just fail to load).

When an image is deleted, the app checks to see if it is in use in:
- Image to Image
- ControlNet
- Unified Canvas
- Node Editor

The delete dialog will always open if the image is in use anywhere, and the user is advised that deleting the image will reset the feature(s).

Even if the user has ticked the box to not confirm on delete, the dialog will still show if the image is in use somewhere.
2023-06-06 14:35:07 +10:00
3d249c4fa3 feat(ui): refactor image deletion
Add `DeleteImageContext`:
- provide a single function to delete an image
- opens the modal or immediately deletes, if confirm is off
2023-06-06 14:35:07 +10:00
fa338ddb6a feat(ui): add useGetIsImageInUse
Checks if an image is currently being used eg in canvas, nodes, controlnet, init image.
2023-06-06 14:35:07 +10:00
b200451330 feat(ui): add nodesSelector 2023-06-06 14:35:07 +10:00
8283d23b74 feat(ui): remove shouldTransformUrls
This is no longer used.
2023-06-06 14:35:07 +10:00
2fc0a4d53b feat(ui): improve handling for urls/metadata received
Update images everywhere when urls or metadata is received:
- control images
- init images
- canvas
- nodes
- init image

Also renamed the variable.
2023-06-06 14:35:07 +10:00
3ff732d583 feat(ui): clear controlnet image when image deleted 2023-06-06 14:35:07 +10:00
840c632c0a feat(ui): sort images by updated_at instead of created_at
fixes issue where saved staging area images were not sorted as expected in gallery.
2023-06-06 14:30:53 +10:00
40d6e4f287 fix(ui): fix canvas auto-save not working 2023-06-06 14:30:53 +10:00
fc5f9c30a6 fix(ui): fix metadata viewer not working for canvas images 2023-06-06 14:30:53 +10:00
229de2dbb8 feat(ui): fix canvas saving
- fix "bounding box region only" not being respected when saving
- add toasts for each action
- improve workflow `take()` predicates to use the requestId
2023-06-06 14:30:53 +10:00
cc22427f25 feat(ui): improve UI on smaller screens
- responsive changes were causing a lot of weird layout issues, had to remove the rest of them
- canvas (non-beta) toolbar now wraps
- reduces minH for prompt boxes a bit
2023-06-06 14:29:57 +10:00
90333c0074 merge with main 2023-06-05 22:03:44 -04:00
54e5301b35 Multiple fixes
1. Model installer works correctly under Windows 11 Terminal
2. Fixed crash when configure script hands control off to installer
3. Kill install subprocess on keyboard interrupt
4. Command-line functionality for --yes configuration and model installation
   restored.
5. New command-line features:
   - install/delete lists of diffusers, LoRAs, controlnets and textual inversions
     using repo ids, paths or URLs.

Help:

```
usage: invokeai-model-install [-h] [--diffusers [DIFFUSERS ...]] [--loras [LORAS ...]] [--controlnets [CONTROLNETS ...]] [--textual-inversions [TEXTUAL_INVERSIONS ...]] [--delete] [--full-precision | --no-full-precision]
                              [--yes] [--default_only] [--list-models {diffusers,loras,controlnets,tis}] [--config_file CONFIG_FILE] [--root_dir ROOT]

InvokeAI model downloader

options:
  -h, --help            show this help message and exit
  --diffusers [DIFFUSERS ...]
                        List of URLs or repo_ids of diffusers to install/delete
  --loras [LORAS ...]   List of URLs or repo_ids of LoRA/LyCORIS models to install/delete
  --controlnets [CONTROLNETS ...]
                        List of URLs or repo_ids of controlnet models to install/delete
  --textual-inversions [TEXTUAL_INVERSIONS ...]
                        List of URLs or repo_ids of textual inversion embeddings to install/delete
  --delete              Delete models listed on command line rather than installing them
  --full-precision, --no-full-precision
                        use 32-bit weights instead of faster 16-bit weights (default: False)
  --yes, -y             answer "yes" to all prompts
  --default_only        only install the default model
  --list-models {diffusers,loras,controlnets,tis}
                        list installed models
  --config_file CONFIG_FILE, -c CONFIG_FILE
                        path to configuration file to create
  --root_dir ROOT       path to root of install directory
```
2023-06-05 21:45:35 -04:00
b31fc43bfa Fix potential race condition in config system (#3466)
There was a potential gotcha in the config system that was previously
merged with main. The `InvokeAIAppConfig` object was configuring itself
from the command line and configuration file within its initialization
routine. However, this could cause it to read `argv` from the command
line at unexpected times. This PR fixes the object so that it only reads
from the init file and command line when its `parse_args()` method is
explicitly called, which should be done at startup time in any top level
script that uses it.

In addition, using the `get_invokeai_config()` function to get a global
version of the config object didn't feel pythonic to me, so I have
changed this to `InvokeAIAppConfig.get_config()` throughout.

## Updated Usage

In the main script, at startup time, do the following:

```
from invokeai.app.services.config import InvokeAIAppConfig
config = InvokeAIAppConfig.get_config()
config.parse_args()
```

In non-main scripts, it is not necessary (or recommended) to call
`parse_args()`:
```
from invokeai.app.services.config import InvokeAIAppConfig
config = InvokeAIAppConfig.get_config()
```

The configuration object properties can be overridden when
`get_config()` is called by passing initialization values in the usual
way. If a property is set this way, then it will not be changed by
subsequent calls to `parse_args()`, but can only be changed by
explicitly setting the property.

```
config = InvokeAIAppConfig.get_config(nsfw_checker=True)
config.parse_args(argv=['--no-nsfw_checker'])
config.nsfw_checker
# True
```

You may specify alternative argv lists and configuration files in
`parse_args()`:

```
from omegaconf import OmegaConf

config.parse_args(
    argv=['--no-nsfw_checker'],
    conf=OmegaConf.load('/tmp/test.yaml'),
)
```

For backward compatibility, the `get_invokeai_config()` function is
still available from the module, but has been removed from the rest of
the source tree.
2023-06-05 15:26:50 -07:00
9bcf0b2251 Merge branch 'main' into lstein/config-management-fixes 2023-06-05 15:10:33 -07:00
d4bc98c383 revert to conhost method 2023-06-05 11:46:01 -07:00
bc892c535c feat(ui): fix image fit (#3501)
- Prevent init, current & control images from overflowing
2023-06-05 20:48:55 +12:00
099e1e7c08 feat(ui): fix image fit
- Prevent init, current & control images from overflowing
2023-06-05 17:16:30 +10:00
b1000e30c1 feat(ui): disable keyboard dnd
Need to fix a bug w/ collision detection before enabling it. Will pursue later.
2023-06-05 15:24:24 +10:00
7bd94eac0e feat(ui): support image dnd to canvas 2023-06-05 15:24:24 +10:00
2c77563dcc feat(ui): move DropOverlay into its own IAIDropOverlay component 2023-06-05 15:24:24 +10:00
603c9a587e open Windows Terminal maximized 2023-06-05 00:24:13 -04:00
1a5a2dfda9 increased window size 2023-06-04 23:54:52 -04:00
090b7eeaf3 workaround to get adequate window size on Windows Terminal 2023-06-04 23:44:07 -04:00
117536324c the "restore" env variable in .bat launcher confuses pydantic 2023-06-04 22:53:46 -04:00
999c092b6a fix mouse and window resizing issues 2023-06-04 22:00:11 -04:00
9e31b1f387 Merge branch 'main' into lstein/config-management-fixes 2023-06-04 18:17:43 -04:00
cb157ea530 fix crash when install-models launched from config script 2023-06-04 14:55:51 -04:00
5f6f38074d merge with main 2023-06-04 13:59:31 -04:00
25b8dd340a Prompting: enable long prompts and compel's new .and() concatenating feature (#3497)
this PR adds long prompt support and enables compel's new `.and()`
concatenation feature which improves image quality especially with SD2.1

example of a long prompt:
> a moist sloppy pindlesackboy sloppy hamblin' bogomadong, Clem Fandango
is pissed-off, Wario's Woods in background, making a noise like
ga-woink-a
![000075 6dfd7adf 466129594](https://github.com/invoke-ai/InvokeAI/assets/144366/051608b6-8d52-463b-af10-04b695cda9c1)

the same prompt broken into fragments and concatenated using `.and()`
(syntax works like `.blend()`):
```
("a moist sloppy pindlesackboy sloppy hamblin' bogomadong", 
"Clem Fandango is pissed-off", 
"Wario's Woods in background", 
"making a noise like ga-woink-a").and()
```
![000076 68b1c320 466129594](https://github.com/invoke-ai/InvokeAI/assets/144366/3fee291f-5562-40f9-9c3c-a73765fc893a)


and a less silly example:

> A dream of a distant galaxy, by Caspar David Friedrich, matte
painting, trending on artstation, HQ
![000129 1b33b559 2793529321](https://github.com/invoke-ai/InvokeAI/assets/144366/d4113756-ed0d-49cd-bb2e-a2fc4a09e0af)

the same prompt broken into two fragments and concatenated:
```
("A dream of a distant galaxy, by Caspar David Friedrich, matte painting", 
"trending on artstation, HQ").and()
```
![000128 b5d5cd62 2793529321](https://github.com/invoke-ai/InvokeAI/assets/144366/c373c009-05db-4c42-8a1d-c89fbdb334ec)

as with `.blend()` you can also weight the parts eg `("a man eating an
apple", "sitting on the roof of a car", "high quality, trending on
artstation, 8K UHD").and(1, 0.5, 0.5)` which will assign weight `1` to
`a man eating an apple` and `0.5` to `sitting on the roof of a car` and
`high quality, trending on artstation, 8K UHD`.
2023-06-05 04:53:08 +12:00
fb06f5b892 Merge branch 'main' into feat_compel_longprompts_and_concat 2023-06-05 04:34:39 +12:00
1a7fb601dc ask user for v2 variant when model manager can't infer it 2023-06-04 11:27:44 -04:00
cdcfda164d enable long prompts, upgrade compel to enable .and() (concatenating prompts) 2023-06-04 15:30:54 +02:00
966b154a1f Update web README.md (#3496) 2023-06-05 00:56:00 +12:00
95fa66661c dummy commit to make github actions run 2023-06-04 22:55:35 +10:00
6247b79111 docs(ui): update API_CLIENT 2023-06-04 22:46:53 +10:00
5831364f9c Update web README.md 2023-06-04 22:44:18 +10:00
919b81cff1 fix(ui): fix rebase issue 2023-06-04 22:34:58 +10:00
065fff7db5 fix(ui): fix wonkiness with image dnd 2023-06-04 22:34:58 +10:00
a664ee30a2 feat(ui): do not change images if the dropped image is the same image 2023-06-04 22:34:58 +10:00
03f3ad435a feat(ui): updated controlnet logic/ui 2023-06-04 22:34:58 +10:00
2270c270ef feat(ui): add tooltip to IAISwitch 2023-06-04 22:34:58 +10:00
4f7820719b feat(ui): add ellipsis direction to IAICustomSelect 2023-06-04 22:34:58 +10:00
fa285883ad feat(ui): make OverlayDragImage translucent 2023-06-04 22:34:58 +10:00
474fca8e6a feat(ui): add controlNetDenylist 2023-06-04 22:34:58 +10:00
5dc0250b00 feat(ui): ControlNet layout tweaks 2023-06-04 22:34:58 +10:00
f269377a01 feat(ui): "ProcessorOptionsContainer" -> "ProcessorWrapper", organise 2023-06-04 22:34:58 +10:00
d0406024e3 feat(ui): IAICustomSelect tweak styles 2023-06-04 22:34:58 +10:00
aa3a969bd2 feat: Update ControlNet Model List & Map 2023-06-04 22:34:58 +10:00
73a95973a8 wip: Add Wrapper Container for Preprocessor Options
For fast altering of the layout across all preprocessors.
2023-06-04 22:34:58 +10:00
bf4fe3c1ac wip: Fixing layout shifts with the ControlNet tab 2023-06-04 22:34:58 +10:00
d6c08ba469 feat(ui): add mini/advanced controlnet ui 2023-06-04 22:34:58 +10:00
69f0ba65f1 chore(ui): bump react-icons 2023-06-04 22:34:58 +10:00
828c86964d feat(ui): IAICustomSelect prevent label wrap 2023-06-04 22:34:58 +10:00
54b7ddd63f feat(ui): IAIDndImage cursor: 'grab' 2023-06-04 22:34:58 +10:00
a0dde66b5d feat(ui): more work on controlnet mini 2023-06-04 22:34:58 +10:00
b6b3b9f99c feat(ui): make scrollbar less bright 2023-06-04 22:34:58 +10:00
faa69f8a47 feat(ui): add alpha colors 2023-06-04 22:34:58 +10:00
d92c7f5483 feat(ui): organize IAIDndImage component 2023-06-04 22:34:58 +10:00
6b824eb112 feat(ui): initial mini controlnet UI, dnd improvements 2023-06-04 22:34:58 +10:00
72b4371804 feat(ui): control image auto-process 2023-06-04 22:34:58 +10:00
fa290aff8d feat(ui): add defaults for all processors 2023-06-04 22:34:58 +10:00
3d99d7ae8b feat(ui): update handling of inProgess, do not allow cnet process when processing 2023-06-04 22:34:58 +10:00
2eb367969c feat(ui): do not autoprocess control if invocation in progress 2023-06-04 22:34:58 +10:00
9cdad95f48 feat(ui): add rest of controlnet processors 2023-06-04 22:34:58 +10:00
707ed39300 chore(ui): regen api client 2023-06-04 22:34:58 +10:00
6bbb5f061a feat(nodes): update controlnet names/descriptions 2023-06-04 22:34:58 +10:00
6896e69e95 fix(ui): fix multiple controlnets 2023-06-04 22:34:58 +10:00
b17f4c1650 feat(ui): more tweaking controlnet ui 2023-06-04 22:34:58 +10:00
98493ed9e2 feat(ui): reorg parameter panel to make room for controlnet 2023-06-04 22:34:58 +10:00
94c953deab feat(ui): get processed images back into controlnet ui 2023-06-04 22:34:58 +10:00
fa4d88e163 feat(ui): improve drag and drop ux 2023-06-04 22:34:58 +10:00
b1e1e3efc7 fix(ui): fix IAISelectableImage fallback 2023-06-04 22:34:58 +10:00
3b9426eb72 feat(ui): controlnet/image dnd wip
Implement `dnd-kit` for image drag and drop
- vastly simplifies logic bc we can drag and drop non-serializable data (like an `ImageDTO`)
- also much prettier
- also will fix conflicts with file upload via OS drag and drop, bc `dnd-kit` does not use native HTML drag and drop API
- Implemented for Init image, controlnet, and node editor so far

More progress on the ControlNet UI
2023-06-04 22:34:58 +10:00
e2e07696fc feat(ui): wip controlnet ui 2023-06-04 22:34:58 +10:00
d6a959b000 feat(nodes): tidy controlnet processor nodes & improve descriptions 2023-06-04 22:34:58 +10:00
c3935d3849 feat(nodes): add separate scripts to launch cli and web (#3495) 2023-06-04 08:13:14 -04:00
383e3d77cb feat(nodes): add separate scripts to launch cli and web 2023-06-04 22:02:47 +10:00
31e97ead2a move invokeai.db to ~/invokeai/databases
- The invokeai.db database file has now been moved into
  `INVOKEAIROOT/databases`. Using plural here for possible
  future with more than one database file.

- Removed a few dangling debug messages that appeared during
  testing.

- Rebuilt frontend to test web.
2023-06-03 20:25:34 -04:00
0b49995659 merge with main 2023-06-03 20:06:27 -04:00
ff204db6b2 Add logging configuration (#3460)
This PR provides a number of options for controlling how InvokeAI logs
messages, including options to log to a file, syslog and a web server.
Several logging handlers can be configured simultaneously.

## Controlling How InvokeAI Logs Status Messages

InvokeAI logs status messages using a configurable logging system. You
can log to the terminal window, to a designated file on the local
machine, to the syslog facility on a Linux or Mac, or to a properly
configured web server. You can configure several logs at the same time,
and control the level of message logged and the logging format (to a
limited extent).

Three command-line options control logging:

### `--log_handlers <handler1> <handler2> ...`

This option activates one or more log handlers. Options are "console",
"file", "syslog" and "http". To specify more than one, separate them by
spaces:

```bash
invokeai-web --log_handlers console syslog=/dev/log file=C:\Users\fred\invokeai.log
```

The format of these options is described below.

### `--log_format {plain|color|legacy|syslog}`

This controls the format of log messages written to the console. Only
the "console" log handler is currently affected by this setting.

* "plain" provides formatted messages like this:

```bash
[2023-05-24 23:18:50,352]::[InvokeAI]::DEBUG --> this is a debug message
[2023-05-24 23:18:50,352]::[InvokeAI]::INFO --> this is an informational messages
[2023-05-24 23:18:50,352]::[InvokeAI]::WARNING --> this is a warning
[2023-05-24 23:18:50,352]::[InvokeAI]::ERROR --> this is an error
[2023-05-24 23:18:50,352]::[InvokeAI]::CRITICAL --> this is a critical error
```

* "color" produces similar output, but the text will be color coded to
indicate the severity of the message.

* "legacy" produces output similar to InvokeAI versions 2.3 and earlier:

```bash
### this is a critical error
*** this is an error
** this is a warning
>> this is an informational messages
   | this is a debug message
```

* "syslog" produces messages suitable for syslog entries:

```bash
InvokeAI [2691178] <CRITICAL> this is a critical error
InvokeAI [2691178] <ERROR> this is an error
InvokeAI [2691178] <WARNING> this is a warning
InvokeAI [2691178] <INFO> this is an informational messages
InvokeAI [2691178] <DEBUG> this is a debug message
```

(note that the date, time and hostname will be added by the syslog
system)

### `--log_level {debug|info|warning|error|critical}`

Providing this command-line option will cause only messages at the
specified level or above to be emitted.

## Console logging

When "console" is provided to `--log_handlers`, messages will be written
to the command line window in which InvokeAI was launched. By default,
the color formatter will be used unless overridden by `--log_format`.

## File logging

When "file" is provided to `--log_handlers`, entries will be written to
the file indicated in the path argument. By default, the "plain" format
will be used:

```bash
invokeai-web --log_handlers file=/var/log/invokeai.log
```

## Syslog logging

When "syslog" is requested, entries will be sent to the syslog system.
There are a variety of ways to control where the log message is sent:

* Send to the local machine using the `/dev/log` socket:

```
invokeai-web --log_handlers syslog=/dev/log
```

* Send to the local machine using a UDP message:

```
invokeai-web --log_handlers syslog=localhost
```

* Send to the local machine using a UDP message on a nonstandard port:

```
invokeai-web --log_handlers syslog=localhost:512
```

* Send to a remote machine named "loghost" on the local LAN using
facility LOG_USER and UDP packets:

```
invokeai-web --log_handlers syslog=loghost,facility=LOG_USER,socktype=SOCK_DGRAM
```

This can be abbreviated `syslog=loghost`, as LOG_USER and SOCK_DGRAM are
defaults.

* Send to a remote machine named "loghost" using the facility LOCAL0 and
using a TCP socket:

```
invokeai-web --log_handlers syslog=loghost,facility=LOG_LOCAL0,socktype=SOCK_STREAM
```

If no arguments are specified (just a bare "syslog"), then the logging
system will look for a UNIX socket named `/dev/log`, and if not found
try to send a UDP message to `localhost`. The Macintosh OS used to
support logging to a socket named `/var/run/syslog`, but this feature
has since been disabled.

## Web logging

If you have access to a web server that is configured to log messages
when a particular URL is requested, you can log using the "http" method:

```
invokeai-web --log_handlers http=http://my.server/path/to/logger,method=POST
```

The optional [,method=] part can be used to specify whether the URL
accepts GET (default) or POST messages.

Currently password authentication and SSL are not supported.

## Using the configuration file

You can set and forget logging options by adding a "Logging" section to
`invokeai.yaml`:

```
InvokeAI:
  [... other settings...]
  Logging:
    log_handlers:
       - console
       - syslog=/dev/log
    log_level: info
    log_format: color
```
2023-06-03 20:03:40 -04:00
f74f3d6a3a many TUI improvements:
1. Separated the "starter models" and "more models" sections. This
   gives us room to list all installed diffusers models, not just
   those that are on the starter list.

2. Support mouse-based paste into the textboxes with either middle
   or right mouse buttons.

3. Support terminal-style cursor movement:
     ^A to move to beginning of line
     ^E to move to end of line
     ^K kill text to right and put in killring
     ^Y yank text back

4. Internal code cleanup.
2023-06-03 16:17:53 -04:00
713fb061e8 Merge branch 'main' into release/make-web-dist-startable 2023-06-02 23:19:33 -04:00
77b7680b32 slight refactoring of code; configure --yes should work now 2023-06-02 23:19:14 -04:00
ff63433591 Merge branch 'main' into lstein/config-management-fixes 2023-06-02 22:56:43 -04:00
31281d7181 Merge branch 'main' into lstein/logging-improvements 2023-06-02 22:56:13 -04:00
8285fbb0b1 Merge branch 'lstein/new-model-manager' of github.com:invoke-ai/InvokeAI into lstein/new-model-manager 2023-06-02 22:48:00 -04:00
951e6b746c remove model cache test; should be replaced with something else 2023-06-02 22:47:48 -04:00
44a6623094 Merge branch 'main' into lstein/new-model-manager 2023-06-02 22:40:51 -04:00
72d1e4e404 fix bug in model_manager that prevented import of inpainting models 2023-06-02 22:39:26 -04:00
91918e648b dynamic display of log messages now working 2023-06-02 22:24:46 -04:00
1390b65a9c new TUI is fully functional; needs some polishing 2023-06-02 17:20:50 -04:00
82231369d3 Make Invoke Button also the progress bar (#3492)
On some screens the progress bar at the top is hard to see; the bar should
only show when in progress


![Animation](https://github.com/invoke-ai/InvokeAI/assets/115216705/04f945d3-377b-4646-b125-1355e74b6b09)
2023-06-02 19:30:45 +12:00
7620bacc01 feat: Add temporary NodeInvokeButton 2023-06-02 17:55:15 +12:00
ea9cf04765 fix: Remove progress bg instead of altering button bg 2023-06-02 17:36:14 +12:00
47301e6f85 fix: Do the same without zIndex 2023-06-02 17:33:38 +12:00
f143fb7254 feat: Make Invoke Button also the progress bar 2023-06-02 17:24:40 +12:00
2bdb655375 Change to absolute 2023-06-02 14:59:10 +10:00
41f7758977 listing, downloading and deleting LoRAs working; TI support pending 2023-06-02 00:40:15 -04:00
8ae1eaaccc Add Progress bar under invoke Button
On some screens the progress bar at the top of the screen gets cut off
2023-06-02 14:19:02 +10:00
98773b20ac merge with main 2023-06-01 18:09:49 -04:00
d66979073b add optional config for settings modal 2023-06-02 00:36:45 +10:00
c9e621093e fix(ui): fix looping gallery images fetch
The gallery could get into a state where it thought it had just reached the end of the list and would endlessly fetch more images, if there were no more images to fetch (weird, I know).

Add some logic to remove the `end reached` handler when there are no more images to load.
2023-06-02 00:34:03 +10:00
e06ba40795 fix(ui): do not allow dpmpp_2s to be used ever
it doesn't work for the img2img pipelines, and the conditional display that had been implemented for it could break the scheduler selection dropdown.

simple fix until diffusers merges the fix - never use this scheduler.
2023-06-02 00:30:01 +10:00
6571e4c2fd feat(ui): refactor parameter recall
- use zod to validate parameters before recalling
- update recall params hook to handle all validation and UI feedback
2023-06-02 00:30:01 +10:00
ff9240b51d slight code cleanup 2023-06-01 00:45:07 -04:00
18466e01fd tab selection seems very natural; not wired to backend yet 2023-06-01 00:43:28 -04:00
e9821ab711 implemented tabbed model selection; not wired to backend yet 2023-06-01 00:31:46 -04:00
d6530df635 rename invokeai.backend.config to invokeai.backend.install 2023-05-31 21:34:20 -04:00
b47786e846 First working TI draft 2023-05-31 02:12:27 +03:00
062b2cf46f fix(ui): fix width and height not working on txt2img tab
I missed a spot when working on the graph logic yesterday.
2023-05-30 18:41:09 -04:00
082ecf6f25 minor formatting improvements 2023-05-30 13:59:32 -04:00
1632ac6b9f add controlnet model downloading 2023-05-30 13:49:43 -04:00
69ccd3a0b5 Fixes for checkpoint models 2023-05-30 19:12:47 +03:00
877959b413 fix(ui): ensure download image opens in new tab 2023-05-30 09:22:54 -04:00
6e60f7517b feat(ui): add model description tooltips 2023-05-30 09:06:13 -04:00
296ee6b7ea feat(ui): tidy ParamScheduler component 2023-05-30 09:06:13 -04:00
7c7ffddb2b feat(ui): upgrade IAICustomSelect to optionally display tooltips for each item 2023-05-30 09:06:13 -04:00
e1ae7842ff feat(ui): add defaultModel to config 2023-05-30 09:06:13 -04:00
9687fe7bac fix(ui): set default model to first model (alpha sort) 2023-05-30 09:06:13 -04:00
a9a2bd90c2 fix(nodes): set min and max for l2l strength 2023-05-30 09:06:13 -04:00
47ca71a7eb fix(nodes): set cfg_scale min to 1 in latents 2023-05-30 09:06:13 -04:00
a9c47237b1 fix(ui): mark img2img resize node intermediate 2023-05-30 09:06:13 -04:00
33bbae2f47 fix(ui): fix missing init image when fit disabled 2023-05-30 09:06:13 -04:00
fab7a1d337 fix(ui): fix bug with staging bbox not resetting 2023-05-30 09:06:13 -04:00
cffcf80977 fix(ui): remove w/h from canvas params, add bbox w/h 2023-05-30 09:06:13 -04:00
1a3fd05b81 fix(ui): fix canvas bbox autoscale 2023-05-30 09:06:13 -04:00
c22c6ca135 fix(ui): fix img2img fit 2023-05-30 09:06:13 -04:00
3afb6a387f chore(ui): regen api 2023-05-30 09:06:13 -04:00
33e5ed7180 fix(ui): fix edge case in nodes graph building
Inputs with explicit values are validated by pydantic even if they also
have a connection (which is the actual value that is used).

Fix this by omitting explicit values for inputs that have a connection.
2023-05-30 09:06:13 -04:00
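A minimal sketch of that fix, with hypothetical names (the actual graph building happens in the UI's TypeScript; Python is used here only for illustration): literal input values are simply omitted from the serialized node whenever the same input also has a connection.

```
# Sketch: omit literal values for inputs that are fed by a connection,
# so pydantic never validates a stale placeholder value.
def serialize_node(node_fields: dict, connected_inputs: set) -> dict:
    return {
        name: value
        for name, value in node_fields.items()
        if name not in connected_inputs
    }

# "image" arrives via an edge, so its placeholder is dropped
payload = serialize_node(
    {"id": "resize_1", "type": "img_resize", "width": 512, "image": None},
    connected_inputs={"image"},
)
assert "image" not in payload
```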
2067757fab feat(ui): enable progress images by default 2023-05-30 09:06:13 -04:00
b1b94a3d56 Fixed problem with inpainting after controlnet support added to main.
Problem was that controlnet support involved adding **kwargs to method calls down in the denoising loop, and AddsMaskLatents didn't accept **kwargs. So just changed it to accept and pass on **kwargs.
2023-05-30 08:01:21 -04:00
c9ee42450e added controlnet models to frontend; backend needs to be done 2023-05-30 00:38:37 -04:00
10fe31c2a1 Merge branch 'main' into lstein/config-management-fixes 2023-05-29 21:03:03 -04:00
420a76ecdd Add lora loader node 2023-05-30 02:12:33 +03:00
79de9047b5 First working lora implementation 2023-05-30 01:11:00 +03:00
dc54cbb1fc Merge branch 'main' into release/make-web-dist-startable 2023-05-29 14:16:10 -04:00
070218aba7 feat(ui): add progress image toggle to current image buttons 2023-05-29 09:07:46 -04:00
f1c226b171 fix(ui): remove console.log() 2023-05-29 09:07:46 -04:00
7004430380 feat(ui): gallery filter dropdown -> Images/Assets toggle 2023-05-29 09:07:46 -04:00
1ddc620192 feat(ui): only cancel on staging commit if processing 2023-05-29 09:07:46 -04:00
a7cebbd970 feat(ui): cancel session when staging image accepted 2023-05-29 09:07:46 -04:00
d97438b0b3 fix(ui): fix typo in actionsDenylist 2023-05-29 09:07:46 -04:00
4522f3f4c9 fix(ui): fix progress images in canvas 2023-05-29 09:07:46 -04:00
6fe28980b0 feat(ui): revert in-gallery progress
wasn't fully baked. will revisit in the future.
2023-05-29 09:07:46 -04:00
4aec5d8ffc fix(ui): typo 2023-05-29 09:07:46 -04:00
bbb4e8f5ef feat(nodes): add resize image and scale image nodes 2023-05-29 09:07:46 -04:00
bce33ea62e fix(ui): when session is complete, null out progress image
This may cause minor gallery jumpiness at the very end of processing, but is necessary to prevent the progress image from sticking around if the last node in a session did not have an image output.
2023-05-29 09:07:46 -04:00
e4705d5ce7 fix(ui): add additional socket event layer to gate handling socket events
Some socket events should not be handled by the slice reducers. For example, generation progress should not be handled for a canceled session.

Added another layer of socket actions.

Example:
- `socketGeneratorProgress` is dispatched when the actual socket event is received
- Listener middleware exclusively handles this event and determines if the application should also handle it
- If so, it dispatches `appSocketGeneratorProgress`, which the slices can handle

Needed to fix issues related to canceling invocations.
2023-05-29 09:07:46 -04:00
6764b2a854 fix(ui): fix save to gallery without bounding box 2023-05-29 09:07:46 -04:00
970340cf62 fix(ui): infill and scaling options label 2023-05-29 09:07:46 -04:00
043f9d9ba4 fix(ui): fix auto-switch to new images 2023-05-29 09:07:46 -04:00
6f82801d07 fix(ui): fix canvas save to gallery incorrect is_intermediate flag 2023-05-28 20:19:56 -04:00
3e3dd39ae4 fix(nodes): fix images service update() for is_intermediate 2023-05-28 20:19:56 -04:00
89aa06e014 feat(ui): consolidate images slice
Now that images are in a database and we can make filtered queries, we can do away with the cumbersome `resultsSlice` and `uploadsSlice`.

- Remove `resultsSlice` and `uploadsSlice` entirely
- Add `imagesSlice`, which fills the same role
- Convert the application to use `imagesSlice`, reducing a lot of messy logic where we had to check which category was selected
- Add a simple filter popover to the gallery, which lets you select any number of image categories
2023-05-28 20:19:56 -04:00
6cc00ef4b7 chore(ui): regen api client 2023-05-28 20:19:56 -04:00
f31e62afad feat(nodes): make list images route use offset pagination
Because we dynamically insert images into the DB and UI's images state, `page`/`per_page` pagination makes loading the images awkward.

Using `offset`/`limit` pagination lets us query for images with an offset equal to the number of images already loaded (which match the query parameters).

The result is that we always get the correct next page of images when loading more.
2023-05-28 20:19:56 -04:00
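A minimal sketch of the offset/limit query described above, assuming the `images` table and column names that appear elsewhere in this log:

```
import sqlite3

def list_images(conn: sqlite3.Connection, offset: int, limit: int) -> list:
    # offset = number of images the UI has already loaded for this query
    cur = conn.execute(
        "SELECT image_name FROM images ORDER BY created_at DESC LIMIT ? OFFSET ?;",
        (limit, offset),
    )
    return [row[0] for row in cur.fetchall()]
```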
38fd2ad45d fix(ui): fix metadata viewer crash 2023-05-28 20:19:56 -04:00
05b99b5377 fix(ui): fix erroneously displayed is_intermediate field on nodes 2023-05-28 20:19:56 -04:00
08a14ee6d5 fix(nodes): fix conflicts with controlnet 2023-05-28 20:19:56 -04:00
29fcc92da9 feat(ui): handle new image origin/category setup
- Update all thunks & network related things
- Update gallery

What I have not done yet is rename the gallery tabs and the relevant slices, but I believe the functionality is all there.

Also I fixed several bugs along the way but couldn't really commit them separately bc I was refactoring. Can't remember what they were, but related to the gallery image switching.
2023-05-28 20:19:56 -04:00
d78e3572e3 chore(ui): regen api client 2023-05-28 20:19:56 -04:00
160267c71a feat(nodes): refactor image types
- Remove `ImageType` entirely, it is confusing
- Create `ResourceOrigin`, may be `internal` or `external`
- Revamp `ImageCategory`, may be `general`, `mask`, `control`, `user`, `other`. Expect to add more as time goes on
- Update images `list` route to accept `include_categories` OR `exclude_categories` query parameters to afford finer-grained querying. All services are updated to accommodate this change.

The new setup should account for our types of images, including the combinations we couldn't really handle until now:
- Canvas init and masks
- Canvas when saved-to-gallery or merged
2023-05-28 20:19:56 -04:00
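A sketch of the resulting types; the member values follow the commit text, while the Python spellings are assumptions:

```
from enum import Enum

class ResourceOrigin(str, Enum):
    INTERNAL = "internal"
    EXTERNAL = "external"

class ImageCategory(str, Enum):
    GENERAL = "general"
    MASK = "mask"
    CONTROL = "control"
    USER = "user"
    OTHER = "other"
```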
fd47e70c92 feat(nodes): use higher precision timestamps in db 2023-05-28 20:19:56 -04:00
9317b42e5f feat(nodes, ui): wip image types 2023-05-28 20:19:56 -04:00
bdab73701f fix(ui): canvas images not added to staging 2023-05-28 20:19:56 -04:00
3ea5e78322 fix(nodes): fix list images route param descriptions 2023-05-28 20:19:56 -04:00
f609ee21a2 fix(ui): handle intermediates when fetching gallery 2023-05-28 20:19:56 -04:00
f51defeeb3 chore(ui): regen api client 2023-05-28 20:19:56 -04:00
ee0225f4ba fix(nodes): handle intermediates during images.get_many() 2023-05-28 20:19:56 -04:00
33a0af4637 feat(nodes): add nameservice
Currently only used to make names for images, but when latents, conditioning, etc are managed in DB, will do the same for them.

Intended to eventually support custom naming schemes.
2023-05-28 20:19:56 -04:00
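A minimal sketch of the name service idea, assuming a UUID-based scheme (the actual naming strategy is not specified here):

```
import uuid

class SimpleNameService:
    # creates unique names for images; latents, conditioning, etc
    # would get the same treatment once they live in the DB
    def create_image_name(self) -> str:
        return f"{uuid.uuid4()}.png"
```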
d37b08a7dd Merge branch 'main' into release/make-web-dist-startable 2023-05-28 19:46:09 -04:00
9a796364da Fixed controlnet preprocessors and controlnet handling in TextToLatents to work with revised Image services. 2023-05-26 21:44:00 -04:00
1ad4eb3a7b Progress toward improvement in fieldTemplateBuilder.ts getFieldType() 2023-05-26 21:44:00 -04:00
3767a453bb Added float to FIELD_TYPE_MAP ins constants.ts 2023-05-26 21:44:00 -04:00
b0892d30a4 Added mediapipe install requirement. Should be able to remove once controlnet_aux package adds mediapipe to its requirements. 2023-05-26 21:44:00 -04:00
d9b1e4a98c Added nodes for float params: ParamFloatInvocation and FloatCollectionOutput. Also added FloatOutput. 2023-05-26 21:44:00 -04:00
a4dec8c1d6 Fixed bug where MediapipFaceProcessorInvocation was ignoring max_faces and min_confidence params. 2023-05-26 21:44:00 -04:00
8960ceb98b Added Mediapipe image processor for use as ControlNet preprocessor.
Also hacked in ability to specify HF subfolder when loading ControlNet models from string.
2023-05-26 21:44:00 -04:00
be79d088c0 fix(nodes): controlnet input accepts list or single controlnet 2023-05-26 21:44:00 -04:00
009407ea3f fix(ui): fix node ui type hints 2023-05-26 21:44:00 -04:00
6999d28c7f chore(ui): regen api client 2023-05-26 21:44:00 -04:00
324e9eb74b Extended node-based ControlNet support to LatentsToLatentsInvocation. 2023-05-26 21:44:00 -04:00
56cff40362 Cleaning up after ControlNet refactor in TextToLatentsInvocation 2023-05-26 21:44:00 -04:00
2ba40c5e52 Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier. 2023-05-26 21:44:00 -04:00
3ab147204c Fix to work with current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle. 2023-05-26 21:44:00 -04:00
e4c89cba9c Switched ControlNet node modelname input from free text to default list of popular ControlNet model names. 2023-05-26 21:44:00 -04:00
322ea84c4e Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it. 2023-05-26 21:44:00 -04:00
f2b41c60ff Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input. 2023-05-26 21:44:00 -04:00
754acec92f Added support for specifying which step iteration to start using
each ControlNet, and which step to end using each controlnet (specified as fraction of total steps)
2023-05-26 21:44:00 -04:00
11fc7e40a5 Refactored ControlNet support to consolidate multiple parameters into data struct. Also redid how multiple controlnets are handled. 2023-05-26 21:44:00 -04:00
d15bb88eb2 Removed last bits of dtype and device hardwiring from controlnet section 2023-05-26 21:44:00 -04:00
70ba36eefc Cleaning up mistakes after rebase. 2023-05-26 21:44:00 -04:00
7e70391c2b Cleaning up TextToLatent arg testing 2023-05-26 21:44:00 -04:00
e2a94be336 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
63a86eefb4 Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
b0727b9d47 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
d96e727dd5 Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options; ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
fe480886dc changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
8031d1827b Refactored controlnet node to output ControlField that bundles control info. 2023-05-26 21:44:00 -04:00
b5acdb322d Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
a4d1fe8819 Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node 2023-05-26 21:44:00 -04:00
10b7a58887 Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
901a277959 Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
aaa093bef1 Fixed use of ControlNet control_weight parameter 2023-05-26 21:44:00 -04:00
bb96543d66 Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this... 2023-05-26 21:44:00 -04:00
a2a2cfa765 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
18e6a2b410 Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
db27263bc2 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
0e027ec3ef Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options; ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
5acbbeecaa Added HED, LineArt, and OpenPose ControlNet nodes 2023-05-26 21:44:00 -04:00
6ef2168b67 changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
6d958a214c Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke()) 2023-05-26 21:44:00 -04:00
4ae4bf4ff9 Resolving conflicts in rebase to origin/main 2023-05-26 21:44:00 -04:00
fdef53b2de Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
11bd038b9d Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
768cfe3aab Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
c4277b0662 Moved to controlnet_aux v0.0.4, reinstated Zoe controlnet preprocessor. Also in pyproject.toml had to specify downgrade of timm to 0.6.13 _after_ controlnet-aux installs timm >= 0.9.2, because timm >0.6.13 breaks Zoe preprocessor. 2023-05-26 21:44:00 -04:00
020f3ccf07 fix(nodes): controlnet input accepts list or single controlnet 2023-05-26 21:44:00 -04:00
7467fa5e57 fix(ui): fix node ui type hints 2023-05-26 21:44:00 -04:00
e19ef7ed2f fix(ui): add control field type 2023-05-26 21:44:00 -04:00
71003be6b8 fix(ui): add value to conditioning field 2023-05-26 21:44:00 -04:00
c1dbafc2df chore(ui): regen api client 2023-05-26 21:44:00 -04:00
dcebd71381 Extended node-based ControlNet support to LatentsToLatentsInvocation. 2023-05-26 21:44:00 -04:00
d855a65e73 Cleaning up after ControlNet refactor in TextToLatentsInvocation 2023-05-26 21:44:00 -04:00
a9007c7e0f Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier. 2023-05-26 21:44:00 -04:00
af60304f97 Fix to work with current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle. 2023-05-26 21:44:00 -04:00
6de241eead Switched ControlNet node modelname input from free text to default list of popular ControlNet model names. 2023-05-26 21:44:00 -04:00
51032dc0b2 Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it. 2023-05-26 21:44:00 -04:00
9ec3d2bc0c Added dependency on controlnet-aux v0.0.3 2023-05-26 21:44:00 -04:00
297931f5d9 Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input. 2023-05-26 21:44:00 -04:00
f613c073c1 Added support for specifying which step iteration to start using
each ControlNet, and which step to end using each controlnet (specified as fraction of total steps)
2023-05-26 21:44:00 -04:00
63d248622c Refactored ControlNet support to consolidate multiple parameters into data struct. Also redid how multiple controlnets are handled. 2023-05-26 21:44:00 -04:00
48485fe92f Removed last bits of dtype and device hardwiring from controlnet section 2023-05-26 21:44:00 -04:00
07726af703 Cleaning up mistakes after rebase. 2023-05-26 21:44:00 -04:00
ad1004b485 Cleaning up TextToLatent arg testing 2023-05-26 21:44:00 -04:00
0096fb2790 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
9c8c2e49d6 Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
2005a96847 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
00a8d60c1b Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options; ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
3aa182390a changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
e44f1d6d4e Refactored controlnet node to output ControlField that bundles control info. 2023-05-26 21:44:00 -04:00
dfdf8e2ead Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
3a645c4e80 Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node 2023-05-26 21:44:00 -04:00
113129daf9 Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
940e3b6635 Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
7fb29dabff Fixed lint-ish formatting error 2023-05-26 21:44:00 -04:00
714ad6dbb8 Fixed use of ControlNet control_weight parameter 2023-05-26 21:44:00 -04:00
c0863fa20f Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this... 2023-05-26 21:44:00 -04:00
78b0b37ba6 More rebase repair. 2023-05-26 21:44:00 -04:00
5d5cdc7716 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
93cd818f6a Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
598a628790 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
f3666eda63 Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options; ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
754017b59e Added an additional "raw_processed_image" output port to controlnets, mainly so an ImageField could be routed to a ShowImage node 2023-05-26 21:44:00 -04:00
21251ce12c Added HED, LineArt, and OpenPose ControlNet nodes 2023-05-26 21:44:00 -04:00
dc12fa6cd6 changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
f2f4c37f19 Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke()) 2023-05-26 21:44:00 -04:00
0864fca641 Resolving conflicts in rebase to origin/main 2023-05-26 21:44:00 -04:00
5e4c0217c7 Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
78cd106c23 Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node 2023-05-26 21:44:00 -04:00
6ed0efa938 Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
ca0669c337 Resolving rebase conflict 2023-05-26 21:44:00 -04:00
b59a749627 Added example of using ControlNet with legacy Txt2Img generator 2023-05-26 21:44:00 -04:00
a91dee87d0 Added support for ControlNet and MultiControlNet to legacy non-nodal Txt2Img in backend/generator. Although backend/generator will likely disappear by v3.x, right now they are very useful for testing core ControlNet and MultiControlNet functionality while node codebase is rapidly evolving. 2023-05-26 21:44:00 -04:00
5ff98a4179 Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
36b2f12219 Merge branch 'main' into release/make-web-dist-startable 2023-05-26 12:56:24 -04:00
5569f205ee Update CODEOWNERS 2023-05-26 08:59:10 -04:00
a76cf8aab2 Update CODEOWNERS 2023-05-26 08:59:10 -04:00
5c0f0d1808 Merge branch 'main' into lstein/logging-improvements 2023-05-26 08:57:17 -04:00
951900a86a Merge branch 'main' into lstein/config-management-fixes 2023-05-26 08:56:41 -04:00
582f516fef Merge branch 'main' into release/make-web-dist-startable 2023-05-26 18:06:38 +10:00
a25bae2545 fix(ui): tweak log levels 2023-05-26 18:06:08 +10:00
0ea35b1e3d feat(ui): improve session canceled handling 2023-05-26 18:06:08 +10:00
c6f935bf1a feat(ui): improve gallery page handling 2023-05-26 18:06:08 +10:00
96b4d35d43 fix(ui): fix uploads not loading more images correctly after generation 2023-05-26 18:06:08 +10:00
7b0938e7e4 feat(ui): add comments for weird stuff 2023-05-26 18:06:08 +10:00
249522b568 fix(ui): fix gallery not loading more images correctly after generation 2023-05-26 18:06:08 +10:00
39088e42cc fix(ui): remove console logs 2023-05-26 18:06:08 +10:00
30e0033ebe fix(ui): fix results not added to gallery 2023-05-26 18:06:08 +10:00
b599c40099 feat(ui): improve session invoked handling 2023-05-26 18:06:08 +10:00
8f190169db feat(ui): improve session creation handling 2023-05-26 18:06:08 +10:00
1d4d705795 feat(ui): improve image urls handling 2023-05-26 18:06:08 +10:00
b3f71b3078 feat(ui): improve image metadata handling 2023-05-26 18:06:08 +10:00
6059db4f15 feat(ui): improve image delete handling 2023-05-26 18:06:08 +10:00
0d5f44b153 feat(ui): improve image upload handling 2023-05-26 18:06:08 +10:00
17164a37a8 fix(ui): fix gallery auto switch 2023-05-26 18:06:08 +10:00
f88ccabe30 fix(ui): gallery not loading on page load 2023-05-26 18:06:08 +10:00
e1c85f1234 Merge branch 'main' into release/make-web-dist-startable 2023-05-26 18:04:09 +10:00
f50293920e correct typo in tiled_vae field definition 2023-05-25 23:29:16 -04:00
1e2db3a17f hook tiled_decode up to configuration 2023-05-25 23:28:15 -04:00
57a3eb3652 feat(ui): unset progress image inside invocationComplete listener 2023-05-26 13:25:50 +10:00
82a8972bde create listener for imageMetadataReceived to swap out our progressImage 2023-05-26 13:25:50 +10:00
497a885c85 Merge branch 'main' into release/make-web-dist-startable 2023-05-25 22:49:18 -04:00
4d9f55d0f6 replace deleted get_root() 2023-05-25 22:48:50 -04:00
5f8f51436a merge with main; fix conflicts 2023-05-25 22:40:45 -04:00
0c3b4bb70d chore(ui): regen api client 2023-05-25 22:17:14 -04:00
33e13820fc feat(nodes): remove meta node field; use individual is_intermediate field instead
as suggested by @Kyle0654
2023-05-25 22:17:14 -04:00
43d991cfdb fix(ui): fix incorrect comment 2023-05-25 22:17:14 -04:00
291e9cf14b fix(nodes): add is_intermediate to all image-outputting nodes 2023-05-25 22:17:14 -04:00
a2de5c9963 feat(ui): change intermediates handling
- Update the canvas graph generation to flag its uploaded init and mask images as `intermediate`.
- During canvas setup, hit the update route to associate the uploaded images with the session id.
- Organize the socketio and RTK listener middleware better. Needed to facilitate the updated canvas logic.
- Add a new action `sessionReadyToInvoke`. The `sessionInvoked` action is *only* ever run in response to this event. This lets us do whatever complicated setup (eg canvas) and then explicitly invoke. Previously, invoking was tied to the socket subscribe events.
- Some minor tidying.
2023-05-25 22:17:14 -04:00
5025f84627 chore(ui): regen api client 2023-05-25 22:17:14 -04:00
d2c8a53c55 feat(nodes): change intermediates handling
- `ImageType` is now restricted to `results` and `uploads`.
- Add a reserved `meta` field to nodes to hold the `is_intermediate` boolean. We can extend it in the future to support other node `meta`.
- Add a `is_intermediate` column to the `images` table to hold this. (When `latents`, `conditioning` etc are added to the DB, they will also have this column.)
- All nodes default to `*not* intermediate`. Nodes must explicitly be marked `intermediate` for their outputs to be `intermediate`.
- When building a graph, you can set `node.meta.is_intermediate=True` and it will be handled as an intermediate.
- Add a new `update()` method to the `ImageService`, and a route to call it. Updates have a strict model: currently only `session_id` and `image_category` may be updated.
- Add a new `update()` method to the `ImageRecordStorageService` to update the image record using the model.
2023-05-25 22:17:14 -04:00
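A sketch, assuming pydantic, of the strict update model described above; `ImageRecordChanges` is an illustrative name:

```
from typing import Optional
from pydantic import BaseModel

class ImageRecordChanges(BaseModel):
    class Config:
        extra = "forbid"  # reject updates to any other column

    session_id: Optional[str] = None
    image_category: Optional[str] = None
```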
5659d10778 remove unused function get_root() 2023-05-25 22:06:37 -04:00
46cab81d6f fix missing web_dir 2023-05-25 22:01:48 -04:00
dd157bce85 Merge branch 'main' into release/make-web-dist-startable 2023-05-25 21:52:05 -04:00
2f25dd7d0d Merge branch 'main' into lstein/config-management-fixes 2023-05-25 21:10:12 -04:00
e56965ad76 documentation tweaks; fixed initialization in a couple more places 2023-05-25 21:10:00 -04:00
2273b3a8c8 fix potential race condition in config system 2023-05-25 20:41:26 -04:00
05fb0ac2b2 Update latent.py 2023-05-26 10:27:33 +10:00
d4acd49ee3 Update generate.py 2023-05-26 10:27:33 +10:00
d98868e524 Update generationSlice.ts to change Default Scheduler 2023-05-26 10:27:33 +10:00
93bb27f2c7 fix gallery navigation 2023-05-26 10:01:06 +10:00
a4c44edf8d more use parameter fixes 2023-05-26 10:01:06 +10:00
1e94d7739a fix metadata references, add support for negative_conditioning syntax 2023-05-26 10:01:06 +10:00
9110838fe4 Merge branch 'main' into release/make-web-dist-startable 2023-05-25 19:06:09 -04:00
ca7b267326 raise error if syslogging requested and syslog lib not available 2023-05-25 10:10:46 -04:00
7f5992d6a5 Merge branch 'lstein/logging-improvements' of github.com:invoke-ai/InvokeAI into lstein/logging-improvements 2023-05-25 09:39:56 -04:00
88776fb2de get invokeai_configure working again 2023-05-25 09:39:45 -04:00
34f567abd4 Merge branch 'main' into lstein/logging-improvements 2023-05-25 08:48:47 -04:00
b87f3043ae add logging configuration 2023-05-24 23:57:15 -04:00
3829ffbe66 fix(tests): add --use_memory_db flag; use it in tests 2023-05-25 12:12:31 +10:00
ad619ae880 fix(tests): log db_location 2023-05-25 12:12:31 +10:00
d22ebe08be fix(tests): log db_location 2023-05-25 12:12:31 +10:00
ee0c6ad86e fix(cli): fix invocation services for cli 2023-05-25 12:12:31 +10:00
96adb56633 fix(tests): fix missing services in tests; fix ImageField instantiation 2023-05-25 12:12:31 +10:00
3000436121 chore(nodes): remove unused imports 2023-05-25 12:12:31 +10:00
37cdd91f5d fix(nodes): use forward declarations for InvocationServices
Also use `TYPE_CHECKING` to get IDE hints.
2023-05-25 12:12:31 +10:00
6f3c6ddf3f Update 020_INSTALL_MANUAL.md
Corrected a markdown formatting error (missing backtick).
2023-05-24 11:33:32 -04:00
0bfbda512d build(nodes): remove references to metadata service in tests 2023-05-24 11:30:47 -04:00
295b98a13c build(nodes): remove outdated metadata test
I will add tests for the new service soon
2023-05-24 11:30:47 -04:00
ff6b345d45 fix(nodes): rebase fixes 2023-05-24 11:30:47 -04:00
1fb307abf4 feat(nodes): restore canvas functionality (non-latents) 2023-05-24 11:30:47 -04:00
29c952dcf6 feat(ui): restore canvas functionality 2023-05-24 11:30:47 -04:00
010f63a50d feat(ui): misc tidy 2023-05-24 11:30:47 -04:00
068bbe3a39 fix(ui): fix uploads tab in gallery 2023-05-24 11:30:47 -04:00
ad39680feb feat(nodes): wip inpainting nodes prep 2023-05-24 11:30:47 -04:00
1e0ae8404c feat(nodes): comment out seamless
this will be a model config feature when model manager is ready
2023-05-24 11:30:47 -04:00
460d555a3d feat(nodes): add image mul, channel, convert nodes
also make img node names consistent
2023-05-24 11:30:47 -04:00
66ad04fcfc feat(nodes): add mask image category 2023-05-24 11:30:47 -04:00
c7c0836721 feat(ui): migrate linear workflows to latents 2023-05-24 11:30:47 -04:00
d2c223de8f feat(nodes): move fully* to new images service
* except i haven't rebuilt inpaint in latents
2023-05-24 11:30:47 -04:00
dd16f788ed fix(nodes): fix RangeOfSizeInvocation off-by-one error 2023-05-24 11:30:47 -04:00
b25c1af018 feat(nodes): add RangeOfSizeInvocation
The `RangeInvocation` is a simple wrapper around `range()`, but you must provide `stop > start`.

`RangeOfSizeInvocation` replaces the `stop` parameter with `size`, so that you can just provide the `start` and `step` and get a range of `size` length.
2023-05-24 11:30:47 -04:00
8f393b64b8 feat(nodes): add seed validator
If `seed>SEED_MAX`, we can still continue if we parse the seed as `seed % SEED_MAX`.
2023-05-24 11:30:47 -04:00
55b3193629 fix(nodes): add RangeInvocation validator
`stop` must be greater than `start`.
2023-05-24 11:30:47 -04:00
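Minimal sketches of the three behaviors from the commits above; the exact value of `SEED_MAX` is an assumption:

```
SEED_MAX = 2**32 - 1  # assumed bound; the commits do not state it

def valid_seed(seed: int) -> int:
    # seeds beyond the maximum wrap around instead of failing validation
    return seed % SEED_MAX

def range_of_size(start: int, size: int, step: int = 1) -> list:
    # RangeOfSizeInvocation: a range of exactly `size` items from `start`
    return list(range(start, start + size * step, step))

def validate_range(start: int, stop: int) -> None:
    # RangeInvocation: `stop` must be greater than `start`
    if stop <= start:
        raise ValueError("stop must be greater than start")
```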
6f78c073ed fix(ui): fix uploads & other bugs 2023-05-24 11:30:47 -04:00
c406be6f4f fix(ui): fix image deletion 2023-05-24 11:30:47 -04:00
aeaf3737aa fix(ui): fix gallery bugs 2023-05-24 11:30:47 -04:00
23d9d58c08 fix(nodes): fix bugs with serving images
When returning a `FileResponse`, we must provide a valid path, else an exception is raised outside the route handler.

Add the `validate_path` method back to the service so we can validate paths before returning the file.

I don't like this but apparently this is just how `starlette` and `fastapi` work with `FileResponse`.
2023-05-24 11:30:47 -04:00
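A sketch of the guard described above, using FastAPI directly; the function and service names are hypothetical:

```
from pathlib import Path
from fastapi import HTTPException
from fastapi.responses import FileResponse

def serve_image(image_path: str) -> FileResponse:
    # validate the path first so a missing file becomes a clean 404
    # rather than an exception raised outside the route handler
    if not Path(image_path).is_file():
        raise HTTPException(status_code=404, detail="Image not found")
    return FileResponse(image_path)
```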
4c331a5d7e chore(ui): regen api client 2023-05-24 11:30:47 -04:00
035425ef24 feat(nodes): address feedback
- Address database feedback:
  - Remove all the extraneous tables. Only an `images` table now:
  - `image_type` and `image_category` are unrestricted strings. When creating images, the provided values are checked to ensure they are a valid type and category.
  - Add `updated_at` and `deleted_at` columns. `deleted_at` is currently unused.
  - Use SQLite's built-in timestamp features to populate these. Add a trigger to update `updated_at` when the row is updated. Currently no way to update a row.
  - Rename the `id` column in `images` to `image_name`
- Rename `ImageCategory.IMAGE` to `ImageCategory.GENERAL`
- Move all exceptions outside their base classes to make them more portable.
- Add `width` and `height` columns to the database. These store the actual dimensions of the image file, whereas the metadata's `width` and `height` refer to the respective generation parameters and are nullable.
- Make `deserialize_image_record` take a `dict` instead of `sqlite3.Row`
- Improve comments throughout
- Tidy up unused code/files and some minor organisation
2023-05-24 11:30:47 -04:00
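A sketch of the resulting schema, with assumed column types; the trigger keeps `updated_at` current, as described in the commit above:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE images (
        image_name     TEXT PRIMARY KEY,
        image_type     TEXT NOT NULL,
        image_category TEXT NOT NULL,
        width          INTEGER NOT NULL,
        height         INTEGER NOT NULL,
        created_at     DATETIME NOT NULL
                       DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
        updated_at     DATETIME NOT NULL
                       DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
        deleted_at     DATETIME  -- currently unused
    );
    -- keep updated_at current whenever a row changes
    CREATE TRIGGER tg_images_updated_at AFTER UPDATE ON images
    FOR EACH ROW BEGIN
        UPDATE images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
        WHERE image_name = OLD.image_name;
    END;
    """
)
```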
021e5a2aa3 feat(nodes): improve metadata service comments 2023-05-24 11:30:47 -04:00
7a1de3887e feat(ui): wip update UI for migration 2023-05-24 11:30:47 -04:00
4a7a5234df fix(ui): fix image nodes losing image 2023-05-24 11:30:47 -04:00
6aebe1614d feat(ui): wip use new images service 2023-05-24 11:30:47 -04:00
74292eba28 chore(ui): regen api client 2023-05-24 11:30:47 -04:00
c31ff364ab fix(nodes): tidy images service 2023-05-24 11:30:47 -04:00
f310a39381 feat(nodes): finalize image routes 2023-05-24 11:30:47 -04:00
5a7e611e0a fix(nodes): fix image url 2023-05-24 11:30:47 -04:00
4e29a751d8 feat(ui): add POC image record fetching 2023-05-24 11:30:47 -04:00
3f94f81acd chore(ui): regen api client 2023-05-24 11:30:47 -04:00
5de3c41d19 feat(nodes): add metadata handling 2023-05-24 11:30:47 -04:00
f071b03ceb chore(ui): regen api client 2023-05-24 11:30:47 -04:00
b9375186a5 feat(nodes): consolidate image routers 2023-05-24 11:30:47 -04:00
11bd932cba feat(nodes): revert invocation_complete url hack 2023-05-24 11:30:47 -04:00
b77ccfaf32 chore(ui): regen api client 2023-05-24 11:30:47 -04:00
96653eebb6 build(ui): do not export schemas on api client generation 2023-05-24 11:30:47 -04:00
60d25f105f fix(nodes): restore metadata traverser 2023-05-24 11:30:47 -04:00
734b653a5f fix(nodes): add base images router 2023-05-24 11:30:47 -04:00
52c9e6ec91 feat(nodes): organise/tidy 2023-05-24 11:30:47 -04:00
c0f132e41a hack(nodes): hack to get image urls in the invocation complete event 2023-05-24 11:30:47 -04:00
cc1160a43a feat(nodes): streamline urlservice 2023-05-24 11:30:47 -04:00
adde8450bc fix(nodes): remove bad import 2023-05-24 11:30:47 -04:00
5bf9891553 feat(nodes): it works 2023-05-24 11:30:47 -04:00
22c34c343a feat(nodes): fix types for InvocationServices 2023-05-24 11:30:47 -04:00
f7804f6126 feat(nodes): add logger to images service 2023-05-24 11:30:47 -04:00
d14b02e93f feat(logger): fix logger type issues 2023-05-24 11:30:47 -04:00
1b75d899ae feat(nodes): wip image storage implementation 2023-05-24 11:30:47 -04:00
d4aa79acd7 fix(nodes): use save instead of set
`set` is a python builtin
2023-05-24 11:30:47 -04:00
33d199c007 feat(nodes): image records router 2023-05-24 11:30:47 -04:00
9c89d3452c feat(nodes): add high-level images service
feat(nodes): add ResultsServiceABC & SqliteResultsService

**Doesn't actually work bc of circular imports. Can't even test it.**

- add a base class for ResultsService and SQLite implementation
- use `graph_execution_manager` `on_changed` callback to keep `results` table in sync

fix(nodes): fix results service bugs

chore(ui): regen api

fix(ui): fix type guards

feat(nodes): add `result_type` to results table, fix types

fix(nodes): do not shadow `list` builtin

feat(nodes): add results router

It doesn't work due to circular imports still

fix(nodes): Result class should use outputs classes, not fields

feat(ui): crude results router

fix(ui): send to canvas in currentimagebuttons not working

feat(nodes): add core metadata builder

feat(nodes): add design doc

feat(nodes): wip latents db stuff

feat(nodes): images_db_service and resources router

feat(nodes): wip images db & router

feat(nodes): update image related names

feat(nodes): update urlservice

feat(nodes): add high-level images service
2023-05-24 11:30:47 -04:00
fb0b63c580 fix(nodes): fix seam painting
The problem was the same seed was getting used for the seam painting pass, causing the fried look.

Same issue as if you do img2img on a txt2img with the same seed/prompt.

Thanks to @hipsterusername for teaming up to debug this. We got pretty deep into the weeds.
2023-05-25 00:58:03 +10:00
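
The shape of the fix, as a hedged sketch: derive a seed for the seam pass that is deterministic but distinct from the generation seed. The exact mechanism is not shown in the commit; `seed_for_seam_pass` is a hypothetical helper.

```python
import random

def seed_for_seam_pass(generation_seed: int) -> int:
    """Return a different-but-reproducible seed for the seam-painting pass."""
    return random.Random(generation_seed).randint(0, 2**32 - 1)
```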
bb2c6e5925 Merge branch 'main' into release/make-web-dist-startable 2023-05-24 10:55:51 -04:00
928caff2a6 fix: attempt to fix actions (#3454)
i think this conditional needs to be removed.
2023-05-25 02:37:39 +12:00
670c79f2c7 fix: attempt to fix actions
i think this conditional needs to be removed.
2023-05-25 00:31:48 +10:00
d6efb98953 build: fix test-invoke-pip.yml
- Restore conditional which ensures tests are only run on `main`
- Fix `yaml` syntax error
2023-05-24 21:48:12 +10:00
19da795274 fix(ui): send to canvas in currentimagebuttons not working 2023-05-24 21:46:58 +10:00
454ba9b893 add crossOrigin = anonymous attribute to konva image 2023-05-24 10:32:41 +10:00
8e419a4f97 Revert weak references as can be done without it 2023-05-23 04:29:40 +03:00
2533209326 Rewrite cache to weak references 2023-05-23 03:48:22 +03:00
d2dc1ed26f make InvokeAI package installable
This commit makes InvokeAI 3.0 installable via PyPI and the
installer script.

Main changes:

1. Move static web pages into `invokeai/frontend/web` and modify the
API to look for them there. This allows pip to copy the files into the
distribution directory so that the user no longer has to be in the
repo root to launch.

2. Update invoke.sh and invoke.bat to launch the new web application
properly. This also changes the wording for launching the CLI from
"generate images" to "explore the InvokeAI node system," since I would
not recommend using the CLI to generate images routinely.

3. Fix a bug in the checkpoint converter script that was identified
during testing.

4. Better error reporting when checkpoint converter fails.

5. Rebuild front end.
2023-05-22 17:51:47 -04:00
d4fb16825e move static into invokeai.frontend.web directory for dist install 2023-05-22 16:48:17 -04:00
165c1adcf8 Merge branch 'main' into lstein/new-model-manager 2023-05-22 21:51:07 +03:00
650d69ef5b added optional middleware prop and new actions needed (#3437)
* added optional middleware prop and new actions needed

* accidental import

* make middleware an array

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-05-22 08:16:11 -04:00
ff0e79fa9a add id for invoke button 2023-05-19 21:44:31 +10:00
127b54f812 add some IDs 2023-05-19 21:44:31 +10:00
bdf33f13b3 fix bad merge in compel 2023-05-18 18:08:45 -04:00
27241cdde1 port more globals changes over 2023-05-18 17:17:45 -04:00
259d6ec90d fixup cachedir call 2023-05-18 14:52:16 -04:00
a77c4c87b2 fixed logic error in resolution of model path 2023-05-18 14:35:34 -04:00
d96175d127 resolve some undefined symbols in model_cache 2023-05-18 14:31:47 -04:00
7025c00581 Add configuration system, remove legacy globals, args, generate and CLI (#3340)
# Application-wide configuration service

This PR creates a new `InvokeAIAppConfig` object that reads
application-wide settings from an init file, the environment, and the
command line.

Arguments and fields are taken from the pydantic definition of the
model. Defaults can be set by creating a yaml configuration file that
has a top-level key of "InvokeAI" and subheadings for each of the
categories returned by `invokeai --help`.

The file looks like this:

[file: invokeai.yaml]
```
InvokeAI:
  Paths:
    root: /home/lstein/invokeai-main
    conf_path: configs/models.yaml
    legacy_conf_dir: configs/stable-diffusion
    outdir: outputs
    embedding_dir: embeddings
    lora_dir: loras
    autoconvert_dir: null
    gfpgan_model_dir: models/gfpgan/GFPGANv1.4.pth
  Models:
    model: stable-diffusion-1.5
    embeddings: true
  Memory/Performance:
    xformers_enabled: false
    sequential_guidance: false
    precision: float16
    max_loaded_models: 4
    always_use_cpu: false
    free_gpu_mem: false
  Features:
    nsfw_checker: true
    restore: true
    esrgan: true
    patchmatch: true
    internet_available: true
    log_tokenization: false
  Cross-Origin Resource Sharing:
    allow_origins: []
    allow_credentials: true
    allow_methods:
    - '*'
    allow_headers:
    - '*'
  Web Server:
    host: 127.0.0.1
    port: 8081

```

The default name of the configuration file is `invokeai.yaml`, located
in INVOKEAI_ROOT. You can use any OmegaConf dictionary by passing it to
the config object at initialization time:

```
 omegaconf = OmegaConf.load('/tmp/init.yaml')
 conf = InvokeAIAppConfig(conf=omegaconf)
```

By default, InvokeAIAppConfig will parse the contents of `sys.argv` at
initialization time. You may pass a list of strings in the optional
`argv` argument to use instead of the system argv:

```
conf = InvokeAIAppConfig(argv=['--xformers_enabled'])
```

It is also possible to set a value at initialization time. This value
has the highest priority.
```
conf = InvokeAIAppConfig(xformers_enabled=True)
```
Any setting can be overwritten by setting an environment variable of
the form "INVOKEAI_<setting>", as in:

```
export INVOKEAI_port=8080
```

Order of precedence (from highest):
   1) initialization options
   2) command line options
   3) environment variable options
   4) config file options
   5) pydantic defaults
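
As a quick illustration of these precedence rules, a sketch using the same `InvokeAIAppConfig` import as the examples above (expected output in the comment):

```python
import os
from invokeai.app.services.config import InvokeAIAppConfig

os.environ["INVOKEAI_port"] = "9090"   # an environment variable option...
conf = InvokeAIAppConfig(port=8080)    # ...is outranked by an initialization option
print(conf.port)  # -> 8080
```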

Typical usage:

```
from invokeai.app.services.config import InvokeAIAppConfig

# get global configuration and print its nsfw_checker value
conf = InvokeAIAppConfig()
print(conf.nsfw_checker)
```
Finally, the configuration object is able to recreate its (modified)
yaml file by calling its `to_yaml()` method:

```
conf = InvokeAIAppConfig(outdir='/tmp', port=8080)
print(conf.to_yaml())
```

# Legacy code removal and porting

This PR replaces Globals with the InvokeAIAppConfig system throughout,
and therefore removes the `globals.py` and `args.py` modules. It also
removes `generate` and the legacy CLI. ***The old CLI and web servers
are now gone.***

I have ported the functionality of the configuration script, the model
installer, and the merge and textual inversion scripts. The `invokeai`
command will now launch `invokeai-node-cli`, and `invokeai-web` will
launch the web server.

I have changed the continuous invocation tests to accommodate the new
command syntax in `invokeai-node-cli`. As a convenience, you
can also pass invocations to `invokeai-node-cli` (or its alias
`invokeai`) on the command line or as standard input:

```
invokeai-node-cli "t2i --positive_prompt 'banana sushi' --seed 42"
invokeai < invocation_commands.txt
```
2023-05-18 13:37:09 -04:00
b1a99d772c added method to convert vaes 2023-05-18 13:31:11 -04:00
7ea995149e fixes to env parsing, textual inversion & help text
- Make environment variable settings case InSenSiTive:
  INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
  environment variables will both set `max_loaded_models`

- Updated realesrgan to use new config system.

- Updated textual_inversion_training to use new config system.

- Discovered a race condition when InvokeAIAppConfig is created
  at module load time, which makes it impossible to customize
  or replace the help message produced with --help on the command
  line. To fix this, moved all instances of get_invokeai_config()
  from module load time to object initialization time. Makes code
  cleaner, too.

- Added `--from_file` argument to `invokeai-node-cli` and changed
  github action to match. CI tests will hopefully work now.
2023-05-18 10:48:23 -04:00
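
A small sketch of the case-insensitive environment variable behaviour described above, reusing the `InvokeAIAppConfig` import from the configuration PR:

```python
import os
from invokeai.app.services.config import InvokeAIAppConfig

# Either spelling should populate the same setting:
os.environ["InvokeAI_Max_Loaded_Models"] = "2"
print(InvokeAIAppConfig().max_loaded_models)  # -> 2
```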
fd82763412 Model manager draft 2023-05-18 03:56:52 +03:00
f9710dd6ed remove reference to legacy opt.hf_token, clean up whitespace in invokeai_configure 2023-05-17 20:39:00 -04:00
4e7dd7d3f6 ci: remove reference to Globals in a workflow 2023-05-17 20:26:26 -04:00
20ca9e1fc1 config: move 'CORS' settings to 'Web Server' in the docstring to match the actual category 2023-05-17 19:45:51 -04:00
8a8b09a953 api_app: rename web_config to app_config for consistency 2023-05-17 19:42:13 -04:00
9e4e386c9b web and formatting fixes
- remove non-existent import InvokeAIWebConfig
- fix workflow file formatting
- clean up whitespace
2023-05-17 19:12:03 -04:00
eca1e449a8 Merge branch 'lstein/global-configuration' of github.com:invoke-ai/InvokeAI into lstein/global-configuration 2023-05-17 15:23:21 -04:00
ffaadb9d05 reorder options in help text 2023-05-17 15:22:58 -04:00
8adff96e29 Merge branch 'main' into lstein/global-configuration 2023-05-17 14:37:09 -04:00
7593dc19d6 complete several steps needed to make 3.0 installable
- invokeai-configure updated to work with new config system
- migrate invokeai.init to invokeai.yaml during configure
- replace legacy invokeai with invokeai-node-cli
- add ability to run an invocation directly from invokeai-node-cli command line
- update CI tests to work with new invokeai syntax
2023-05-17 14:13:27 -04:00
b7c5a39685 make invokeai.yaml more hierarchical; fix list configuration bug 2023-05-17 12:19:19 -04:00
bd1b84f7d0 tell user to refresh page on image load error (#3425)
* refetch images list if error loading

* tell user to refresh instead of refetching

* unused import

* feat(ui): use `useAppToaster` to make toast

* fix(ui): clear selected/initial image on error

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-05-17 11:52:37 -04:00
eadfd239a8 update config script to work with new config system 2023-05-17 00:18:19 -04:00
e971a7f35c when migrating models.yaml, rename original models.yaml.orig 2023-05-16 22:37:53 -04:00
8d75e50435 partial port of invokeai-configure 2023-05-16 01:50:01 -04:00
6ab84741a0 fix(nodes): make ModelsList an enum-keyed dict
The `ModelsList` OpenAPI schema is generated as being keyed by plain strings. This means that API consumers do not know the shape of the dict. It _should_ be keyed by the `SDModelType` enum.

Unfortunately, `fastapi` does not actually handle this correctly yet; it still generates the schema with plain string keys.

Adding this anyway, though, in the hope that it will be resolved upstream and we can get the correct schema. Until then, I'll implement the (simple but annoying) logic on the frontend.

https://github.com/pydantic/pydantic/issues/4393
2023-05-16 15:02:58 +10:00
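
A minimal pydantic sketch of the enum-keyed shape described above; the `SDModelType` members and the per-model info model are assumptions for illustration:

```python
from enum import Enum
from typing import Dict
from pydantic import BaseModel

class SDModelType(str, Enum):  # hypothetical subset of the real enum
    diffusers = "diffusers"
    vae = "vae"

class ModelInfo(BaseModel):  # hypothetical stand-in for the real info model
    description: str = ""

class ModelsList(BaseModel):
    # Keying by the enum is what *should* constrain the OpenAPI schema;
    # per the note above, fastapi still emits plain string keys for now.
    models: Dict[SDModelType, Dict[str, ModelInfo]]
```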
cd16857f38 fix None in model_type 2023-05-16 00:13:44 -04:00
1442f1cb8d change model filter to None in second place 2023-05-16 00:03:57 -04:00
eea0d6f7bc default to no filter in list_models() 2023-05-15 23:52:29 -04:00
1d9c115225 feat(nodes): add low and high to RandomIntInvocation 2023-05-16 13:50:52 +10:00
4fe94a9315 list_models() now returns a dict of {type,{name: info}} 2023-05-15 23:44:08 -04:00
30af20a056 ui: cleanup (#3418)
- tidy up a lot of cruft
- `sampler` --> `scheduler`
2023-05-16 15:27:12 +12:00
cc21fb216c chore(ui): clean up GalleryPanel 2023-05-16 10:43:26 +10:00
6fe62a2705 feat(ui): sampler --> scheduler 2023-05-16 10:40:26 +10:00
da87378713 chore(ui): regen api client 2023-05-16 10:39:40 +10:00
b6f5267385 chore(ui): clean up generationSlice 2023-05-16 10:21:18 +10:00
f9e78d3c64 chore(ui): clean up gallerySlice 2023-05-16 10:16:36 +10:00
b7b5bd1b46 chore(ui): clean up uiSlice 2023-05-16 09:57:19 +10:00
9a3727d3ad chore(ui): clean up systemSlice 2023-05-16 09:48:58 +10:00
d68c14516c chore(ui): clean up persist denylists 2023-05-16 09:46:03 +10:00
9f4d39aa42 chore(ui): clean up modelSlice 2023-05-16 09:45:49 +10:00
84b801d88f ui: restore canvas and upload functionality (#3414)
- refactor image uploading, fix init image upload button 
- refactor toast and hotkey hooks into logical components
- restore canvas save/download/copy/merge functionality
- clean up unused files and packages
- fix canvas rendering issue resulting from fractional stage coords
2023-05-16 02:23:39 +12:00
2fc70c509b Merge branch 'main' into feat/ui/fix-uploading 2023-05-16 02:20:59 +12:00
34fb1c4b19 make conditioning.py work with compel 1.1.5 (#3383)
This PR fixes the ValueError issue that was preventing all prompts from
working.
2023-05-15 09:46:04 -04:00
80bdd550cf Merge branch 'main' into lstein/bugfix/compel 2023-05-15 09:25:21 -04:00
7ef0d2aa35 merge with main 2023-05-15 09:07:17 -04:00
2359b92b46 chore(ui): tidy unused component ref 2023-05-15 22:58:15 +10:00
a404fb2d32 docs(ui): update PACKAGE_SCRIPTS.md 2023-05-15 22:49:28 +10:00
513eb11616 chore(ui): clean up unused files/packages 2023-05-15 22:48:06 +10:00
d2c9140e69 feat(ui): restore save/copy/download/merge functionality 2023-05-15 22:21:03 +10:00
d95fe5925a feat(ui): restore image post-upload actions
eg set init image if on img2img when uploading
2023-05-15 18:52:48 +10:00
835922ea8f fix(ui): floor canvas coords to prevent partial pixel offset rendering issues 2023-05-15 18:50:34 +10:00
e1e5266fc3 feat(ui): refactor base image uploading logic 2023-05-15 17:45:05 +10:00
5e4457445f feat(ui): make toast/hotkey into logical components 2023-05-15 15:25:27 +10:00
0221ca8f49 fix(ui): use cloned canvas for retrieving dataURL/Blobs 2023-05-15 13:54:30 +10:00
c8f765cc06 improve debugging messages 2023-05-14 18:29:55 -04:00
cf36e4029e fix(ui): fix syntax error in the logo component flexbox 2023-05-15 08:24:33 +10:00
b9e9087dbe do not manage GPU for pipelines if sequential_offloading is True 2023-05-14 18:09:38 -04:00
63e465eb5c tweaks to get_model() behavior
1. If an external VAE is specified in the config file, then
   get_model(submodel=vae) will return the external VAE, not the one
   burnt into the parent diffusers pipeline.

2. The mechanism in (1) is generalized such that you can now have
   "unet:", "text_encoder:" and similar stanzas in the config file.
   Valid formats of these subsections:

       unet:
          repo_id: foo/bar

       unet:
          path: /path/to/local/folder

       unet:
          repo_id: foo/bar
          subfolder: unet

    In the near future, these will also be used to attach external
    parts to the pipeline, generalizing VAE behavior.

3. Accommodate callers (i.e. the WebUI) that are passing the
   model key ("diffusers/stable-diffusion-1.5") to get_model()
   instead of the tuple of model_name and model_type.

4. Fixed bug in VAE model attaching code.

5. Rebuilt web front end.
2023-05-14 16:50:59 -04:00
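
A rough sketch of points (1) and (2) above: look up an externally configured submodel stanza before falling back to the copy burnt into the parent pipeline. The config shape mirrors the stanzas above; the helper name is hypothetical.

```python
model_config = {
    "stable-diffusion-1.5": {
        "vae": {"repo_id": "foo/bar"},             # external VAE stanza
        "unet": {"path": "/path/to/local/folder"},
    }
}

def submodel_override(model_name: str, submodel: str):
    """Return the externally configured source for a submodel,
    or None to use the part inside the parent diffusers pipeline."""
    return model_config.get(model_name, {}).get(submodel)

print(submodel_override("stable-diffusion-1.5", "vae"))  # {'repo_id': 'foo/bar'}
```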
c8a98a9a22 Merge branch 'main' into lstein/bugfix/compel 2023-05-14 14:43:18 -04:00
38ecca9362 Logging Improvements (#3401)
This PR improves the logging module a bit, along with the
documentation.

**New Look:**


![WindowsTerminal_XaijwCqFpo](https://github.com/invoke-ai/InvokeAI/assets/54517381/49a97411-1927-4a49-80ff-f4d9665be55f)

## Usage

**General Logger**

InvokeAI has a module level logger. You can call it this way.

In the example below, you will use the default logger `InvokeAI`, and
all your messages will be logged under that name.

```python

from invokeai.backend.util.logging import logger

logger.critical("CriticalMessage")  # In Bold Red
logger.error("Error Message")  # In Red
logger.warning("Warning Message")  # In Yellow
logger.info("Info Message")  # In Grey
logger.debug("Debug Message")  # In Grey
```

Results:

```
[12-05-2023 20]::[InvokeAI]::CRITICAL --> CriticalMessage [In Bold Red]
[12-05-2023 20]::[InvokeAI]::ERROR --> Error Message [In Red]
[12-05-2023 20]::[InvokeAI]::WARNING --> Warning Message [In Yellow]
[12-05-2023 20]::[InvokeAI]::INFO --> Info Message [In Grey]
[12-05-2023 20]::[InvokeAI]::DEBUG --> Debug Message [In Grey]
```

**Custom Logger**

If you want to use a custom logger for your module, you can import it
in the following way.

```python

from invokeai.backend.util.logging import logging
logger = logging.getLogger(name='Model Manager')

logger.critical("CriticalMessage")  # In Bold Red
logger.error("Error Message")  # In Red
logger.warning("Warning Message")  # In Yellow
logger.info("Info Message")  # In Grey
logger.debug("Debug Message")  # In Grey
```

Results:

```
[12-05-2023 20]::[Model Manager]::CRITICAL --> CriticalMessage [In Bold Red]
[12-05-2023 20]::[Model Manager]::ERROR --> Error Message [In Red]
[12-05-2023 20]::[Model Manager]::WARNING --> Warning Message [In Yellow]
[12-05-2023 20]::[Model Manager]::INFO --> Info Message [In Grey]
[12-05-2023 20]::[Model Manager]::DEBUG --> Debug Message [In Grey]
```

**When to use a custom logger?**

It is recommended to use a custom logger if your module is not part of
the InvokeAI base, e.g. custom extensions or nodes.
2023-05-15 02:18:20 +12:00
c4681774a5 Merge branch 'main' into logging-facelift 2023-05-15 02:08:29 +12:00
050add58d2 fix getting conditionings 2023-05-14 12:20:54 +02:00
3d60c958c7 ui: commercial fixes (#3409)
minor commercial fixes
2023-05-14 20:44:06 +12:00
f5df150097 feat(ui): add callback to signal app is ready
needed for commercial
2023-05-14 18:42:15 +10:00
dac82adb5b fix(ui): make logo component non-selectable 2023-05-14 18:41:11 +10:00
b72c9787a9 Revert "comment out customer_attention_context"
This reverts commit 8f8cd90787.

Due to NameError: name 'options' is not defined
2023-05-14 00:37:55 -04:00
426f4eaf7e adjusted regression tests to work with new SDModelTypes 2023-05-13 22:29:33 -04:00
2623941d91 Merge branch 'main' into lstein/bugfix/compel 2023-05-13 22:23:59 -04:00
baf5451fa0 Merge branch 'main' into lstein/new-model-manager 2023-05-13 22:01:34 -04:00
d3a7fea939 Revert "fix: Rework the layout of the parameters scrollbar"
This reverts commit 6f1fc397f7.
2023-05-14 11:45:08 +10:00
5a7b687c84 fix(ui): add missing packages 2023-05-14 11:45:08 +10:00
0020457fc7 fix(ui): tweak settings scheduler styling 2023-05-14 11:45:08 +10:00
658b556544 feat(ui): IAICustomSelect v2, implement for scheduler & model 2023-05-14 11:45:08 +10:00
37da0fc075 feat(ui): IAICustomSelect v1 2023-05-14 11:45:08 +10:00
6d3e8507cc fix(ui): fix "no image" fallbacks 2023-05-14 11:45:08 +10:00
0e9470503f fix: Rework the layout of the parameters scrollbar 2023-05-14 11:45:08 +10:00
d2ebc6741b feat: Add setting to hide / display schedulers 2023-05-14 11:45:08 +10:00
026d3260b4 Add Heun Karras Scheduler 2023-05-14 11:45:08 +10:00
1103ab2844 merge with main 2023-05-13 21:35:19 -04:00
11b2076b46 implement change to web_config suggested by ebr 2023-05-13 21:33:19 -04:00
b31a6ff605 fix reversed args in _model_key() call 2023-05-13 21:11:06 -04:00
1f602e6143 Fix - apply precision to text_encoder 2023-05-14 03:46:13 +03:00
039fa73269 Change SDModelType enum to string, fixes (model unload negative locks count, scheduler load error, safetensors convert, wrong logic in del_model, wrong metadata parsing in web) 2023-05-14 03:06:26 +03:00
78533714e3 Merge branch 'main' into logging-facelift 2023-05-14 09:07:51 +12:00
691e1bf829 Make debug messages cyan/blue 2023-05-14 09:06:57 +12:00
2204e47596 allow submodels to be fetched independent of parent pipeline 2023-05-13 16:54:47 -04:00
d8b1f29066 proxy SDModelInfo so that it can be used directly as context 2023-05-13 16:29:18 -04:00
b23c9f1da5 get Tuple type hint syntax right 2023-05-13 14:59:21 -04:00
5e8e3cf464 correct typos in model_manager_service 2023-05-13 14:55:59 -04:00
72967bf118 convert add_model(), del_model(), list_models() etc to use bifurcated names 2023-05-13 14:44:44 -04:00
bc96727cbe Rewrite latent nodes to new model manager 2023-05-13 16:08:03 +03:00
3b2a054f7a Add model loader node; unet, clip, vae fields; change compel node to clip field 2023-05-13 04:37:20 +03:00
47a088d685 rehydrate selectedImage URL when results and uploads are fetched 2023-05-13 09:48:38 +10:00
63db3fc22f reduce queue check interval to 0.5s 2023-05-12 17:54:26 -04:00
ad0bb3f61a fix: queue error should not crash InvocationProcessor
1. if retrieving an item from the queue raises an exception, the
   InvocationProcessor thread crashes, but the API continues running in
   a non-functional state. This fixes the issue
2. when there are no items in the queue, sleep 1 second before checking
   again.
3. Also ensures the thread isn't crashed if an exception is raised from
   invoker, and emits the error event

Intentionally using base Exceptions because for now we don't know which
specific exception to expect.

Fixes (sort of)? #3222
2023-05-12 17:54:26 -04:00
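
A self-contained sketch of the guarded loop this commit describes; `invoke` and `emit_error_event` are stand-ins for the real invoker call and session error event:

```python
import queue
import time

def invoke(item):            # stand-in for the real invoker call
    print(f"invoking {item}")

def emit_error_event(err):   # stand-in for the session error event
    print(f"error event: {err!r}")

def process(q: queue.Queue, iterations: int = 3):
    for _ in range(iterations):
        try:
            item = q.get_nowait()
        except queue.Empty:
            time.sleep(1)    # nothing queued: back off before checking again
            continue
        except Exception as err:  # broad on purpose; failure modes unknown for now
            emit_error_event(err)
            continue
        try:
            invoke(item)
        except Exception as err:  # keep the processor thread alive
            emit_error_event(err)
```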
131145eab1 A big refactor of the model manager (IMHO) 2023-05-12 23:13:34 +03:00
4492044d29 Redo compel node to separate model loading 2023-05-12 23:09:33 +03:00
5431dd5f50 Fix event args 2023-05-12 23:08:03 +03:00
79fecba274 Fix model manager initialization in web ui 2023-05-12 23:05:08 +03:00
8f8cd90787 comment out customer_attention_context 2023-05-12 13:59:00 -04:00
d796ea7bec feat: Logging Improvements 2023-05-13 02:13:49 +12:00
e5b7dd63e9 fix(nodes): temporarily disable librarygraphs
- Do not retrieve graph from DB until we resolve the issue of changing node schemas causing application to fail to start up due to invalid graphs
2023-05-12 22:33:49 +10:00
af060188bd Merge branch 'main' into lstein/bugfix/compel 2023-05-12 08:22:18 -04:00
4270e7ae25 Feat/ui/improve-language (#3399) 2023-05-12 23:32:50 +12:00
60a565d7de feat(ui): use chakra menu for theme changer 2023-05-12 20:04:29 +10:00
78cf70eaad fix(ui): tweak lang picker style 2023-05-12 20:04:10 +10:00
eebaa50710 fix(ui): fix language picker tooltip 2023-05-12 19:52:21 +10:00
7d582553f2 feat(ui): use chakra menu for language picker 2023-05-12 19:50:34 +10:00
4d6eea7e81 feat(ui): store language in redux 2023-05-12 19:35:03 +10:00
f44593331d ui: misc fixes (#3398)
- do not show canvas intermediates in gallery
- do not show progress image in uploads gallery category
- use custom dark mode `localStorage` key (prevents collision with
commercial)
- use variable font (reduce bundle size by factor of 10)
- change how custom headers are used
- use style injection for building package
- fix tab icon sizes
2023-05-12 21:00:47 +12:00
3d9ecbf3c7 fix(ui): add missing package 2023-05-12 18:55:59 +10:00
032aa1d59c fix(ui): excise most zIndexs
our stacking contexts are accurate, `zIndex` isn't needed
2023-05-12 18:50:54 +10:00
35e0863bdb fix(ui): fix tab icon sizes 2023-05-12 17:56:18 +10:00
14070d674e build(ui): add style injection plugin
when building for package, CSS is all in JS files. when used as a package, it is then injected into the page. bit of a hack to fix missing CSS in the commercial product
2023-05-12 17:56:18 +10:00
108ce06c62 feat(ui): change custom header to be a prop instead of children 2023-05-12 17:56:18 +10:00
da364f3444 feat(ui): use variable font
reduces package build's CSS by an order of magnitude
2023-05-12 17:56:18 +10:00
df5ba75c14 feat(ui): use custom dark mode localStorage key 2023-05-12 17:56:18 +10:00
e4fb9cb33f chore(ui): regen api client 2023-05-12 17:56:18 +10:00
65b527eb20 fix(ui): do not show progress images in uploads gallery category 2023-05-12 17:56:18 +10:00
7dc9d18052 fix(ui): do not show intermediates uploads in gallery 2023-05-12 17:56:18 +10:00
2ef79b8bf3 fix bug in persistent model scheme 2023-05-12 00:14:56 -04:00
5013a4b9f3 feat(ui): expand config options (#3393)
now may disable individual SD features eg Noise, Variation, etc - stuff
which is not ready for consumption in commercial.
2023-05-12 16:10:17 +12:00
f929359322 Merge branch 'main' into feat/ui/expand-config 2023-05-12 16:06:31 +12:00
6522c71971 feat(nodes): add RandomIntInvocation (#3390)
just outputs a single random int
2023-05-12 16:06:06 +12:00
9c1e65f3a3 Merge branch 'main' into feat/nodes/add-randomintinvocation 2023-05-12 15:56:41 +12:00
ebec200ba6 Remove unused import 2023-05-12 13:56:02 +10:00
e559730b6e feat(nodes): add w/h to latents outputs (#3389)
This reduces the number of nodes needed when working with latents (ie
fewer plain integer value nodes)

Also correct a few mistakes in the fields
2023-05-12 15:40:46 +12:00
11ecf438f5 latents.py converted to use model manager service; events emitted 2023-05-11 23:33:24 -04:00
0acb8ed85d Merge branch 'main' into feat/nodes/add-w-h-latentsoutput 2023-05-12 15:23:29 +12:00
8c1c9cd702 Merge branch 'main' into feat/nodes/add-randomintinvocation 2023-05-12 15:21:49 +12:00
0ece4686aa fix(nodes): remove Optionals on ImageOutputs (#3392) 2023-05-12 15:21:42 +12:00
af95cef7f9 Merge branch 'main' into fix/nodes/fix-imageoutput-optionals 2023-05-12 15:08:19 +12:00
1eca7a918a feat(ui): make core parameters layout consistent (#3394) 2023-05-12 15:08:07 +12:00
9e6b958023 Merge branch 'main' into feat/ui/consistent-param-layout 2023-05-12 15:06:16 +12:00
f7b99d93ae docs(ui): update ui readme (#3396) 2023-05-12 15:05:55 +12:00
85d03dcd90 Merge branch 'main' into docs/ui/update-ui-readme 2023-05-12 15:04:12 +12:00
032555bcfe fix(model manager): fix string formatting error on model checksum timer (#3397)
The error occurs when loading a model for the first time (or after
removing its checksum file, probably).
2023-05-12 15:04:01 +12:00
4caa1f19b2 fix(model manager): fix string formatting error on model checksum timer 2023-05-11 19:06:02 -07:00
df5b968954 model manager now running as a service 2023-05-11 21:24:29 -04:00
95d4bd3012 Merge branch 'lstein/bugfix/compel' of github.com:invoke-ai/InvokeAI into lstein/bugfix/compel 2023-05-11 21:13:29 -04:00
037078c8ad make InvokeAIDiffuserComponent.custom_attention_control a classmethod 2023-05-11 21:13:18 -04:00
6de2f66b50 docs(ui): update ui readme 2023-05-12 11:11:59 +10:00
cd7b248eda Add UniPC / Euler Karras / DPMPP_2 Karras / DEIS / DDPM Schedulers (#3388)
**Features:**

- Add UniPC Scheduler
- Add Euler Karras Scheduler
- Add DPMPP_2 Karras Scheduler
- Add DEIS Scheduler
- Add DDPM Scheduler

**Other:**

- Renamed schedulers to their accurate names: _a = Ancestral, _k =
Karras
- Fix scheduler not defaulting correctly to DDIM.
- Code split SCHEDULER_MAP so it's consistently loaded from the same
place.

**Known Bugs:**

- dpmpp_2s not working in img2img for denoising values < 0.8. This
seems to be an upstream bug; I've disabled it in img2img and canvas
until it is fixed upstream.
https://github.com/huggingface/diffusers/issues/1866
2023-05-12 09:06:22 +12:00
6d8c077f4e Merge branch 'main' into unipc-sched 2023-05-12 05:59:13 +12:00
97127e560e Disable dpmpp_2s in img2img & unifiedCanvas
... until upstream bug is fixed.
2023-05-12 04:51:58 +12:00
27dc07d95a Set zero eta by default (fix ddim scheduler error) 2023-05-11 18:49:27 +03:00
f7dc171c4f Rename default schedulers across the app 2023-05-12 03:44:20 +12:00
4b957edfec Add DDPM Scheduler 2023-05-12 03:18:34 +12:00
46ca7718d9 Add DEIS Scheduler 2023-05-12 03:10:30 +12:00
b928d7a6e6 Change scheduler names to be accurate
_a = Ancestral
_k = Karras
2023-05-12 02:59:43 +12:00
8a836247c8 Add DPMPP Single, Euler Karras and DPMPP2 Multi Karras Schedulers 2023-05-12 02:23:33 +12:00
95c3644564 fix it again 2023-05-12 00:10:39 +10:00
799cd07174 feat(ui): make core parameters layout consistent 2023-05-11 22:45:53 +10:00
9af385468d feat(ui): expand config options
now may disable individual SD features eg Noise, Variation, etc - stuff which is not ready for consumption in commercial.
2023-05-11 22:42:13 +10:00
3487388788 Merge branch 'unipc-sched' of https://github.com/blessedcoolant/InvokeAI into unipc-sched 2023-05-12 00:40:24 +12:00
9a383e456d Codesplit SCHEDULER_MAP for reuse 2023-05-12 00:40:03 +12:00
805f9f8f4a Merge branch 'main' into unipc-sched 2023-05-12 00:24:55 +12:00
52aa0c9bbd ui: miscellaneous fixes (#3386) 2023-05-12 00:21:29 +12:00
7f5f4689cc fix(ui): clear progress image on cancel 2023-05-11 22:20:37 +10:00
a3f81f4b98 fix(ui): fix results not displaying
- fix for commercial product
2023-05-11 22:20:37 +10:00
15c59e606f feat(ui): add spinner to gallery progress images
- otherwise you may think you can click it but you cannot
2023-05-11 22:20:37 +10:00
40d4cabecd feat(ui): improve image overlay 2023-05-11 22:20:37 +10:00
3493c8119b feat(ui): improve image preview css and fallback 2023-05-11 22:20:30 +10:00
c1e7460d39 Merge branch 'main' into unipc-sched 2023-05-12 00:11:09 +12:00
3ffff023b2 Add missing key to scheduler_map
It was breaking because the sampler was not being reset, so each needs a key. Will simplify this later.
2023-05-12 00:08:50 +12:00
f9384be59b fix(ui): fix init image causing overflow 2023-05-11 20:55:30 +10:00
6cf308004a fix(nodes): remove Optionals on ImageOutputs 2023-05-11 20:54:57 +10:00
d1029138d2 Default to DDIM if scheduler is missing 2023-05-11 22:54:35 +12:00
06b5800d28 Add UniPC Scheduler 2023-05-11 22:43:18 +12:00
483f2ccb56 feat(nodes): add RandomIntInvocation
just outputs a single random int
2023-05-11 20:33:32 +10:00
93ced0bec6 feat(nodes): add w/h to latents outputs
This reduces the number of nodes needed when working with latents (ie fewer plain integer value nodes)

Also correct a few mistakes in the fields
2023-05-11 20:32:55 +10:00
4333852c37 fix(nodes): fix missing context arg in LatentsToLatents 2023-05-11 19:28:42 +10:00
3baa230077 Merge branch 'main' into lstein/bugfix/compel 2023-05-11 00:50:45 -04:00
9e594f9018 pad conditioning tensors to same length
fixes crash when prompt length is greater than 75 tokens
2023-05-11 00:34:15 -04:00
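
A sketch of the idea: pad the shorter conditioning tensor along the token axis so both match. Zero-padding is an assumption here; the real fix may pad with a pad-token embedding instead.

```python
import torch
import torch.nn.functional as F

def pad_to_same_length(a: torch.Tensor, b: torch.Tensor):
    """Pad the shorter of two (batch, tokens, embed) tensors on the token axis."""
    target = max(a.shape[1], b.shape[1])
    # pad tuple is (embed_left, embed_right, token_left, token_right)
    a = F.pad(a, (0, 0, 0, target - a.shape[1]))
    b = F.pad(b, (0, 0, 0, target - b.shape[1]))
    return a, b
```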
8ad8c5c67a resolve conflicts with main 2023-05-11 00:19:20 -04:00
590942edd7 Merge branch 'main' into lstein/new-model-manager 2023-05-11 00:16:03 -04:00
4627910c5d added a wrapper model_manager_service and model events 2023-05-11 00:09:19 -04:00
b0c41b4828 filter our websocket errors (#3382)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-05-11 01:58:40 +00:00
e0d6946b6b fix(nodes): fix metadata test
- `progress_images` is no longer a parameter
- `seamless` needs to be reworked as a model config, removed as a param
2023-05-11 11:55:51 +10:00
bf7ea8309f fix(ui): change tab to img2img when selected initial image 2023-05-11 11:55:51 +10:00
54b65f725f fix(ui): rescale canvas on gallery resize 2023-05-11 11:55:51 +10:00
8ef49c2640 fix(ui): fix canvas img2img if no init image selected 2023-05-11 11:55:51 +10:00
f488b1a7f2 fix(nodes): fix usage of Optional 2023-05-11 11:55:51 +10:00
d2edb7c402 build(ui): add yalc to gitignore 2023-05-11 11:55:51 +10:00
f0a3f07b45 feat(ui): antialias progress images 2023-05-11 11:55:51 +10:00
b42b630583 fix(ui): h/w disabled bug 2023-05-11 11:55:51 +10:00
31a78d571b feat(ui): canvas antialiasing 2023-05-11 11:55:51 +10:00
fdc2232ea0 feat(ui): progress images in gallery and viewer 2023-05-11 11:55:51 +10:00
e94d0b2d40 fix(ui): fix janky gallery image delete 2023-05-11 11:55:51 +10:00
75ccbaee9c fix(ui): disable invoke button as soon as pressed 2023-05-11 11:55:51 +10:00
2848c8397c fix(ui): fix missing images on reload issue
- Mainly an issue for commercial due to incomplete metadata handling
2023-05-11 11:55:51 +10:00
fe8b5193de feat(ui): half-baked use all parameters
until we have a better system for metadata, this will remain half-baked
2023-05-11 11:55:51 +10:00
3d1470399c fix(ui): fix metadataviewer styling 2023-05-11 11:55:51 +10:00
fcf9c63049 fix(ui): fix copying image link 2023-05-11 11:55:51 +10:00
7bfb5640ad cleanup(ui): Remove unused vars + minor bug fixes 2023-05-11 11:55:51 +10:00
15e57e3a3d fix(ui): duplicate gallery in nodes editor 2023-05-11 11:55:51 +10:00
279468c0e8 feat(ui): restore tab names 2023-05-11 11:55:51 +10:00
c565812723 feat(ui): organize parameters panels 2023-05-11 11:55:51 +10:00
ec6c8e2a38 feat(ui): wip layout 2023-05-11 11:55:51 +10:00
77f2690711 fix(ui): remove duplicate gallery 2023-05-11 11:55:51 +10:00
c4b3a24ed7 feat(ui): revert tabs to txt2img/img2img 2023-05-11 11:55:51 +10:00
33c69359c2 feat(ui): add IAICollapse for parameters 2023-05-11 11:55:51 +10:00
864f4bb4af feat(ui): wip img2img layouting 2023-05-11 11:55:51 +10:00
5365f42a04 feat(ui): wip layouting 2023-05-11 11:55:51 +10:00
3dc60254b9 feat(ui): support collect nodes 2023-05-11 11:55:51 +10:00
027a8562d7 fix(ui): default node model selection 2023-05-11 11:55:51 +10:00
34f3a0f0e3 feat(nodes): improve default model choosing output 2023-05-11 11:55:51 +10:00
d0bac1675e fix(nodes): fix ImageOutput Config 2023-05-11 11:55:51 +10:00
4e56c962f4 fix(nodes): fix infill docstrings 2023-05-11 11:55:51 +10:00
4ef0e43759 fix(nodes): remove dataURL invocation 2023-05-11 11:55:51 +10:00
6945d10297 chore(ui): regen api client 2023-05-11 11:55:51 +10:00
4d6cef7ac8 fix(ui): fix types bug 2023-05-11 11:55:51 +10:00
a7786d5ff2 fix(nodes): restore seamless to TextToLatents 2023-05-11 11:55:51 +10:00
6c1de975d9 feat(nodes): add infill nodes 2023-05-11 11:55:51 +10:00
a1079e455a feat(nodes): cleanup unused params, seed generation 2023-05-11 11:55:51 +10:00
5457c7f069 fix(ui): use lodash-es instead of lodash 2023-05-11 11:55:51 +10:00
b8c1a3f96c chore(ui): remove unused babelrc & npm script 2023-05-11 11:55:51 +10:00
cee8e85f76 chore(ui): bump redux-remember 2023-05-11 11:55:51 +10:00
09f166577e feat(ui): migrate to redux-remember 2023-05-11 11:55:51 +10:00
bcc21531fb feat(ui): update for InfillInvocation 2023-05-11 11:55:51 +10:00
da4eacdffe feat(nodes): add InfillInvocation 2023-05-11 11:55:51 +10:00
6102e560ba feat(nodes): add LatentsToImage node (VAE encode) 2023-05-11 11:55:51 +10:00
ff3aa57117 feat(ui): fix endless gallery scroll for single col layout 2023-05-11 11:55:51 +10:00
49db6f4fac fix(nodes): fix trivial typing issues 2023-05-11 11:55:51 +10:00
20f6a597ab fix(nodes): add MetadataColorField 2023-05-11 11:55:51 +10:00
04c453721c feat(ui): tweak gallery loading indicator 2023-05-11 11:55:51 +10:00
350ffecc1f feat(ui): endless gallery scroll 2023-05-11 11:55:51 +10:00
b0557aa16b fix(ui): fix currentimagepreview not working for uploads 2023-05-11 11:55:51 +10:00
1c9429a6ea feat(ui): wip canvas 2023-05-11 11:55:51 +10:00
206e6b1730 feat(nodes): wip inpaint node 2023-05-11 11:55:51 +10:00
357cee2849 fix(nodes): fix cfg scale min value 2023-05-11 11:55:51 +10:00
0b49997bb6 feat(nodes): allow uploaded images to be any ImageType (eg intermediates) 2023-05-11 11:55:51 +10:00
5e09dd380d Revert "feat(nodes): free gpu mem after invocation"
This reverts commit 99cb33f477306d5dcc455efe04053ce41b8d85bd.
2023-05-11 11:55:51 +10:00
c7303adb0d feat(ui): fix generation mode logic 2023-05-11 11:55:51 +10:00
ed1f096a6f feat(ui): wip canvas migration 4 2023-05-11 11:55:51 +10:00
6ab5d28cf3 feat(ui): wip canvas migration, createListenerMiddleware 2023-05-11 11:55:51 +10:00
a75148cb16 feat(nodes): free gpu mem after invocation 2023-05-11 11:55:51 +10:00
f7bbc4004a feat(ui): wip canvas nodes migration 3 2023-05-11 11:55:51 +10:00
cee21ca082 feat(ui): wip canvas nodes migration 2 2023-05-11 11:55:51 +10:00
08ec12b391 feat(ui): wip canvas nodes migration 2023-05-11 11:55:51 +10:00
ff5e2a9a8c chore(ui): regen api client 2023-05-11 11:55:51 +10:00
e0b9b5cc6c feat(nodes): add dataURL to image node 2023-05-11 11:55:51 +10:00
aca4770481 fixed compel.py as requested 2023-05-10 21:40:44 -04:00
5d5157fc65 make conditioning.py work with compel 1.1.5 2023-05-10 18:08:33 -04:00
fb6ef61a4d change path for locale (#3381)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-05-10 10:30:17 -04:00
ee24ad7b13 fix(nodes): fix broken docs routes 2023-05-10 08:28:17 -04:00
f8e90ba3f0 feat(nodes): add ui build static route 2023-05-10 08:28:17 -04:00
ad0b70ca23 fix(nodes): fix #3306 (#3377)
Check if the cache has the object before deleting it.
2023-05-10 17:39:45 +12:00
7dfa135b2c fix(nodes): fix #3306
Check if the cache has the object before deleting it.
2023-05-10 15:29:10 +10:00
beeaa05658 Update dependencies to get deterministic image generation behavior (main branch) (#3354)
This PR updates to `xformers ~= 0.0.19` and `torch ~= 2.0.0`, which
together seem to solve the non-deterministic image generation issue that
was previously seen with earlier versions of `xformers`.
2023-05-10 00:10:51 -04:00
fa6a580452 merge with main 2023-05-10 00:03:32 -04:00
6b6d654f60 Merge branch 'main' into enhance/update-dependencies 2023-05-09 23:56:46 -04:00
99c692f397 check that model name matches format 2023-05-09 23:46:59 -04:00
3d85e769ce clean up ckpt handling
- remove legacy ckpt loading code from model_cache
- added placeholders for lora and textual inversion model loading
2023-05-09 22:44:58 -04:00
9cb962cad7 ckpt model conversion now done in ModelCache 2023-05-08 23:39:44 -04:00
853c83d0c2 surface detail field for 403 errors 2023-05-09 12:40:19 +10:00
a108155544 added StAlKeR7779's great model size calculating routine 2023-05-08 21:47:03 -04:00
1809990ed4 if backend returns an error, show it in toast 2023-05-09 11:09:36 +10:00
79d49853d2 use websocket transport first for socket.io 2023-05-09 11:01:02 +10:00
1f608d3743 add v2.3 branch to push trigger (#3363)
Update the push trigger with the branch which should deploy the docs,
also bring over the updates to the workflow from the v2.3 branch and:

- remove main and development branch from trigger
  - they would fail without the updated toml
- cache pip environment
- update install method (`pip install ".[docs]"`)
2023-05-08 16:26:06 -04:00
df024dd982 bring changes from v2.3 branch over
- remove main and development branch from trigger
  - they would fail without the updated toml
- cache pip environment
- update install method
2023-05-08 21:50:00 +02:00
45da85765c add v2.3 branch to push trigger 2023-05-08 21:10:20 +02:00
c15b49c805 implement StALKeR7779 requested API for fetching submodels 2023-05-07 23:18:17 -04:00
fd63e36822 optimize subfolder so that it returns submodel if parent is in RAM 2023-05-07 21:39:11 -04:00
4649920074 adjust t2i to work with new model structure 2023-05-07 19:06:49 -04:00
667171ed90 cap model cache size using bytes, not # models 2023-05-07 18:07:28 -04:00
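
A minimal sketch of what byte-capped caching looks like; an LRU eviction policy is assumed here, and the real implementation may differ:

```python
from collections import OrderedDict

class ByteCappedCache:
    """Evict least-recently-used models once total bytes exceed the cap."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self._items: "OrderedDict[str, tuple[object, int]]" = OrderedDict()
        self._total = 0

    def put(self, key: str, model: object, size_bytes: int) -> None:
        if key in self._items:
            self._total -= self._items.pop(key)[1]
        self._items[key] = (model, size_bytes)
        self._total += size_bytes
        # Never evict the only (just-inserted) model.
        while self._total > self.max_bytes and len(self._items) > 1:
            _, (_, evicted_size) = self._items.popitem(last=False)
            self._total -= evicted_size

    def get(self, key: str):
        model, size = self._items.pop(key)
        self._items[key] = (model, size)  # mark as most recently used
        return model
```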
bd0ad59c27 bump compel version 2023-05-07 15:22:46 -04:00
cce40acba5 Merge branch 'enhance/update-dependencies' of github.com:invoke-ai/InvokeAI into enhance/update-dependencies 2023-05-07 15:22:31 -04:00
bc9491ab69 bump compel version 2023-05-07 15:21:24 -04:00
f28632980d Merge branch 'main' into lstein/global-configuration 2023-05-07 07:52:46 -04:00
b909bac0dc Merge branch 'main' into enhance/update-dependencies 2023-05-07 21:44:43 +12:00
8618e41b32 Deploy documentation from v2.3 branch rather than main (#3356)
This PR instructs github to deploy documentation pages from the v2.3
branch.
2023-05-07 21:43:44 +12:00
4687f94141 Merge branch 'main' into actions/mkdocs-deploy 2023-05-07 21:43:18 +12:00
440912dcff feat(ui): make base log level debug 2023-05-07 15:36:37 +10:00
8b87a26e7e feat(ui): support collect nodes 2023-05-07 15:36:37 +10:00
44ae93df3e Deploy documentation from v2.3 branch rather than main 2023-05-06 23:56:04 -04:00
42d938fda5 remove debugging statement 2023-05-06 23:54:11 -04:00
8f80ba9520 update dependencies to get deterministic image generation 2023-05-06 23:09:24 -04:00
25ce47c44f remove reference to globals in compel.py 2023-05-06 22:49:35 -04:00
647ffb2a0f defined abstract baseclass for model manager service 2023-05-06 22:41:19 -04:00
afd2e32092 Merge branch 'main' into lstein/global-configuration 2023-05-06 21:20:25 -04:00
05a27bda5e generalize model loading support, include loras/embeds 2023-05-06 15:58:44 -04:00
2b213da967 add -y to the automated install instructions (#3349)
hi there, love the project! i noticed a small typo when going over the
install process.

when copying the automated install instructions from the docs into a
terminal, the line to install the python packages failed as it was
missing the `-y` flag.
2023-05-06 13:34:37 -04:00
e91e1eb9aa Merge branch 'main' into patch-1 2023-05-06 13:34:12 -04:00
b24129fb3e Fix logger namespace clash in web server (#3344)
This PR fixes a bug that appeared in the legacy web server after the
logging PR was merged.

closes #3343
2023-05-06 08:35:13 -04:00
350b1421bb Merge branch 'main' into lstein/bugfix/logger-namespace 2023-05-06 08:14:44 -04:00
a8cfa3565c Merge branch 'lstein/new-model-manager' of github.com:invoke-ai/InvokeAI into lstein/new-model-manager 2023-05-06 08:14:15 -04:00
e0214a32bc mostly ported to new manager API; needs testing 2023-05-06 00:44:12 -04:00
f01c79a94f add -y to the automated install instructions
when copying the automated install instructions from the docs into a terminal, the line to install the python packages failed as it was missing the `-y` flag.
2023-05-05 21:28:00 -04:00
463f6352ce Add compel node and conditioning field type (#3265)
Done as I said in the title, but I need to test (and understand) how the CLI works,
as previously it used a single prompt and now it's positive and negative.
2023-05-06 13:05:04 +12:00
af8c7c7d29 model manager rewritten to use model_cache; API changed! 2023-05-05 19:32:28 -04:00
a80fe05e23 Rename compel node 2023-05-05 21:30:16 +03:00
58d7833c5c Review changes 2023-05-05 21:09:29 +03:00
5012f61599 Separate conditionings back to positive and negative 2023-05-05 15:47:51 +03:00
a4e36bc02a when model is forcibly moved into RAM update loaded_models set 2023-05-04 23:28:03 -04:00
2e9bec15e7 Merge branch 'main' into lstein/new-model-manager 2023-05-04 23:19:38 -04:00
68bc0112fa implement lazy GPU offloading and ref counting 2023-05-04 23:15:32 -04:00
85c33823c3 Merge branch 'main' into feat/compel_node 2023-05-05 14:41:45 +12:00
c83a112669 Fix inpaint node (#3284)
Seems like this is the only change needed for the existing inpaint code
to work as a node. Kyle said on Discord that inpaint shouldn't be a
node, so feel free to just reject this if this code is going to be gone
soon.
2023-05-05 14:41:13 +12:00
e04ada1319 Merge branch 'main' into patch-1 2023-05-05 10:38:45 +10:00
d866dcb3d2 close #3343 2023-05-04 20:30:59 -04:00
81ec476f3a Revert seed field addition 2023-05-04 21:50:40 +03:00
1e6adf0a06 Fix default graph and test 2023-05-04 21:14:31 +03:00
7d221e2518 Combine conditioning into one field (better fits multiple-type conditioning like perp-neg) 2023-05-04 20:14:22 +03:00
742ed19d66 add missing config module 2023-05-04 01:20:30 -04:00
29c2ada23c add test for the configuration module 2023-05-04 00:45:52 -04:00
e4196bbe5b adjust non-app modules to use new config system 2023-05-04 00:43:51 -04:00
15ffb53e59 remove globals, args, generate and the legacy CLI 2023-05-03 23:36:51 -04:00
90054ddf0d use InvokeAISettings for app-wide configuration 2023-05-03 22:30:30 -04:00
a273bdbdc1 Merge branch 'main' into lstein/new-model-manager 2023-05-03 18:09:29 -04:00
56d3cbead0 Merge branch 'main' into feat/compel_node 2023-05-04 00:28:33 +03:00
5e8c97f1ba [Enhancement] Regularize logging messages (#3176)
# Intro

This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions:

```
 ### A critical error
 *** A non-fatal error
 ** A warning
  >> Informational message
        | Debugging message
```

Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
modules' use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add InvokeAI's
informational message decorations, while the InvokeAI logger's
`error("foo")` will add the decorations.
    
# Usage:

This is a thin wrapper around the standard Python logging module. It can
be used in several ways:


## Module-level logging style
 
This style logs everything through a single default logging object and
is identical to using Python's `logging` module. The commonly-used
module-level logging functions are implemented as simple pass-thrus to
logging:
    
```
      import logging
      import invokeai.backend.util.logging as logger

      logger.debug('this is a debugging message')
      logger.info('this is an informational message')
      logger.log(logging.CRITICAL, 'get out of dodge')

      logger.disable(level=logging.INFO)
      logger.basicConfig(filename='/var/log/invokeai.log')
      logger.error('this will be logged to console and to invokeai.log')
```    

Internally these functions all go through a custom logging object named
"invokeai". You can access it to perform additional customization in
either of these ways:

```
logger = logger.getLogger()
logger = logger.getLogger('invokeai')
```
    
## Object-oriented style

For more control, the logging module's object-oriented logging style is
also supported. The API is identical to the vanilla logging usage. In
fact, the only thing that has changed is that the getLogger() method
adds a custom formatter to the log messages.
    
```
     import logging
     from invokeai.backend.util.logging import InvokeAILogger
    
     logger = InvokeAILogger.getLogger(__name__)
     fh = logging.FileHandler('/var/invokeai.log')
     logger.addHandler(fh)
     logger.critical('this will be logged to both the console and the log file')
```

## Within the nodes API

From within the nodes API, the logger module is stored in the `logger`
slot of InvocationServices during dependency initialization. For
example, in a router, the idiom is:

```
from ..dependencies import ApiDependencies
logger = ApiDependencies.invoker.services.logger
logger.warning('uh oh')
```

Currently, to change the logger used by the API, one must change the
logging module passed to `ApiDependencies.initialize()` in `api_app.py`.
However, this will eventually be replaced with a method to select the
preferred logging module using the configuration file (dependent on
merging of PR #3221)
2023-05-03 15:00:05 -04:00
4687ad4ed6 Merge branch 'main' into enhance/invokeai-logs 2023-05-03 13:36:06 -04:00
8a0ec0fa0f Merge branch 'main' into lstein/new-model-manager 2023-05-03 13:30:50 -04:00
e1fed52c66 work on model cache and its regression test finished 2023-05-03 12:38:18 -04:00
994b247f8e feat(ui): do not persist gallery images
- I've sorted out the issues that make *not* persisting troublesome; these will be rolled out with canvas
- Also realized that persisting gallery images very quickly fills up localStorage, so we can't really do it anyways
2023-05-03 23:41:48 +10:00
bb959448c1 implement hashing for local & remote models 2023-05-02 16:52:27 -04:00
0419f50ab0 chore(ui): bump react-virtuoso
- Resolves an issue with gallery not rendering all items
2023-05-02 20:15:29 +10:00
f9f40adcdc fix(nodes): fix t2i graph
Removed width and height edges.
2023-05-02 13:11:28 +10:00
2e2abf6ea6 caching of subparts working 2023-05-01 22:57:30 -04:00
3264d30b44 feat(nodes): allow multiples of 8 for dimensions 2023-05-02 12:01:52 +10:00
4d885653e9 feat(ui): tidy 2023-05-02 11:27:08 +10:00
475b6bef53 feat(ui): use windowing for gallery
vastly improves the gallery performance when many images are loaded.

- `react-virtuoso` to do the virtualized list
- `overlayscrollbars` for a scrollbar
2023-05-02 11:27:08 +10:00
d39de0ad38 fix(nodes): fix duplicate Invoker start/stop events 2023-05-01 18:24:37 -04:00
d14a7d756e nodes-api: enforce single thread for the processor
On hyperthreaded CPUs we get two threads operating on the queue by
default on each core. This causes two threads to process queue items.
This results in pytorch errors and sometimes generates garbage.

Locking this to single thread makes sense because we are bound by the
number of GPUs in the system, not by CPU cores. And to parallelize
across GPUs we should just start multiple processors (and use async
instead of threading)

Fixes #3289
2023-05-01 18:24:37 -04:00
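
The essence of the fix, as a sketch: give the processor a pool with exactly one worker so queue items are handled serially. Scaling across GPUs would come from running multiple processors, per the commit message.

```python
from concurrent.futures import ThreadPoolExecutor

# One worker: we are bound by the number of GPUs, not CPU cores, so extra
# queue threads only cause contention and the pytorch errors described above.
processor_pool = ThreadPoolExecutor(max_workers=1)
```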
b050c1bb8f use logger in ApiDependencies 2023-05-01 16:27:44 -04:00
276dfc591b feat(ui): disable w/h when img2img & not fit 2023-05-01 17:28:22 +10:00
b49d76ebee feat(nodes): fix image to image fit param
it was ignored previously.
2023-05-01 17:28:22 +10:00
a6be44789b fix(ui): progress image rerender, checkbox 2023-05-01 11:16:49 +10:00
a4313c26cb fix: Do not hide Preview button & color code it 2023-05-01 11:16:49 +10:00
d4b250d509 feat(ui): Add auto show progress previews setting 2023-05-01 11:16:49 +10:00
29743a9e02 fix(ui): next/prev image buttons 2023-05-01 11:16:49 +10:00
fecb77e344 feat(ui): dndkit --> rnd for draggable 2023-05-01 11:16:49 +10:00
779671753d feat(ui): tweak floating preview 2023-05-01 11:16:49 +10:00
d5e152b35e fix(ui): ignore events after canceling session 2023-05-01 11:16:49 +10:00
270657a62c feat(ui): gallery & progress image refactor 2023-05-01 11:16:49 +10:00
3601b9c860 feat(ui): revamp status indicator 2023-05-01 11:16:49 +10:00
c8fe12cd91 feat(ui): init image tweaks 2023-05-01 11:16:49 +10:00
deae5fbaec fix(ui): socket event types 2023-05-01 11:16:49 +10:00
5b558af2b3 fix(ui): fix metadata viewer scroll 2023-05-01 11:16:49 +10:00
4150d5306f chore(ui): regen api client 2023-05-01 11:16:49 +10:00
8c2e4700f9 feat(ui): persist gallery state 2023-05-01 11:16:49 +10:00
adaecada20 fix(ui): fix current image seed button 2023-05-01 11:16:49 +10:00
258895bcc9 feat(ui): begin dismantling old sio stuff, fix recall seed/prompt/init
- still need to fix up metadataviewer's recall features
2023-05-01 11:16:49 +10:00
2eb7c25bae feat(ui): clean up and simplify socketio middleware 2023-05-01 11:16:49 +10:00
2e4e9434c1 fix(ui): fix initial image for uploads 2023-05-01 11:16:49 +10:00
0cad204e74 feat(ui): add error handling for linear graph generation 2023-05-01 11:16:49 +10:00
0bc2edc044 Merge branch 'main' into enhance/invokeai-logs 2023-04-29 11:00:18 -04:00
16488e7db8 fix tests 2023-04-29 10:59:50 -04:00
974841926d logger is a interchangeable service 2023-04-29 10:48:50 -04:00
8db20e0d95 rename log to logger throughout 2023-04-29 09:43:40 -04:00
d00d29d6b5 feat(ui): update settings modal 2023-04-29 18:28:19 +10:00
dc976cd665 feat(ui): add switch for logging 2023-04-29 18:28:19 +10:00
6d6b986a66 feat(ui): remove Console and redux logging state 2023-04-29 18:28:19 +10:00
bffdede0fa feat(ui): improve log messages 2023-04-29 18:28:19 +10:00
a4c258e9ec feat(ui): add roarr logger 2023-04-29 18:28:19 +10:00
8d837558ac fix(ui): fix spelling of systemPersistDenylist.ts 2023-04-29 18:28:19 +10:00
e673ed08ec fix(ui): restore missing chakra-cli package
(amending to try and get the workflow to run)
2023-04-29 12:21:11 +10:00
f0e07bff5a fix bad logging path in config script 2023-04-28 15:39:00 -04:00
3ec06a1fc3 Merge branch 'main' into enhance/invokeai-logs 2023-04-28 10:10:33 -04:00
6b79e2b407 Merge branch 'main' into enhance/invokeai-logs
- resolve conflicts
- remove unused code identified by pyflakes
2023-04-28 10:09:46 -04:00
0eed9dbc44 fix(ui): fix packaging import issue (#3294)
I accidentally merged a broken #3292 (merge conflicts incorrectly
resolved). Fixing it
2023-04-29 00:39:56 +12:00
53c7832fd1 fix(ui): fix packaging import issue 2023-04-28 22:37:51 +10:00
ca1cc0e2c2 feat(ui): rerender mitigation sweep 2023-04-28 22:00:18 +10:00
5d8728c7ef feat(ui): persist socket session ids and re-sub on connect 2023-04-28 22:00:18 +10:00
a8cec4c7e6 fix(ui): improve schema parsing error handling 2023-04-28 22:00:18 +10:00
2b5ccdc55f build(ui): treeshake lodash via lodash-es 2023-04-28 21:56:43 +10:00
d92d5b5258 build(ui): fix types exports 2023-04-28 21:56:43 +10:00
a591184d2a build(ui): remove unneeded types file 2023-04-28 21:56:43 +10:00
ee881e4c78 build(ui): add react/react-dom peer deps 2023-04-28 21:56:43 +10:00
61fbb24e36 feat(ui): set up for packaging 2023-04-28 21:56:43 +10:00
d582949488 feat(ui): rename main app components 2023-04-28 21:56:43 +10:00
de574eb4d9 chore(ui): upgrade all packages 2023-04-28 21:56:43 +10:00
bfd90968f1 chore(ui): tidy npm structure 2023-04-28 21:56:43 +10:00
956ad6bcf5 add redesigned model cache for diffusers & transformers 2023-04-28 00:41:52 -04:00
4a924c9b54 feat(nodes): hardcode resize latents downsampling 2023-04-28 09:52:09 +10:00
0453d60c64 fix(nodes): fix slatents and rlatents bugs 2023-04-28 09:52:09 +10:00
c4f4f8b1b8 fix(nodes): remove unused width and height from t2l 2023-04-28 09:52:09 +10:00
3e80eaa342 feat(nodes): add resize and scale latents nodes
- this resize/scale latents is what is needed for hires fix
- also remove unused `seed` from t2l
2023-04-28 09:52:09 +10:00
00a0cb3403 fix(ui): update exported types 2023-04-28 09:20:09 +10:00
ea93cad5ff fix(ui): update to match change in route params 2023-04-28 09:19:03 +10:00
4453a0d20d feat(ui): remove toasts for network bc we have status to tell us 2023-04-28 09:18:19 +10:00
1e837e3c9d fix(ui): add formatted neg prompt for linear nodes (#3282)
* fix(ui): add formatted neg prompt for linear nodes

* remove conditional

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-04-27 15:05:35 -04:00
0f95f7cea3 Fix inpaint node
Seems like this is the only change needed for the existing inpaint node to work.
2023-04-27 11:03:07 -07:00
0b0068ab86 Merge branch 'main' into feat/compel_node 2023-04-27 14:53:10 +03:00
31c7fa833e feat(ui): simplify image display 2023-04-27 14:10:44 +10:00
db16ca0079 fix(ui): Current Image Buttons position 2023-04-27 14:10:44 +10:00
a824f47bc6 fix(nodes): use absolute path when deleting 2023-04-27 14:10:44 +10:00
99392debe8 feat(ui): refactor DeleteImageModal
- refactor the component
- use translations
- add config for systems where deleted images are not sent to bin (only changes the messaging)
2023-04-27 14:10:44 +10:00
0cc739afc8 feat(nodes): use send2trash to delete images, fix thumbnail_path 2023-04-27 14:10:44 +10:00
0ab62b0343 feat(ui): "blacklist" -> "denylist" 2023-04-27 14:10:44 +10:00
75d25dd5cc feat(ui): restore image deletion functionality 2023-04-27 14:10:44 +10:00
2e54da13d8 chore(ui): regen api client 2023-04-27 14:10:44 +10:00
f34f416bf5 fix(ui): handle floats in NumberInputFieldComponent 2023-04-27 14:10:44 +10:00
021c63891d fix(ui): fix config types and merging 2023-04-27 14:10:44 +10:00
a968862e6b feat(ui): Move img2img badge info to top right 2023-04-27 14:10:44 +10:00
a08189d457 ui: Match styling of img2img to the rest of the accordions 2023-04-27 14:10:44 +10:00
0a936696c3 feat(ui): add config slice, configuration default values 2023-04-27 14:10:44 +10:00
55e33eaf4c docs: add note on README about migration (#3277) 2023-04-27 13:17:43 +12:00
3da5fb223f docs: add note on README about migration 2023-04-27 11:05:32 +10:00
a3c5a664e5 fix(ui): update UI to handle uploads with alternate URLs (#3274) 2023-04-26 07:14:08 -07:00
b638fb2f30 fix(ui): use name in response instead of parsing out of URL to handle alternative URLs 2023-04-26 09:48:16 -04:00
c1b10b2222 feat(ui): open in new tab @ hoverable image 2023-04-26 12:40:10 +10:00
bee29714d9 fix(ui): fix templates not refreshing correctly 2023-04-26 12:40:10 +10:00
d40d5276dd feat(ui): wip img2img ui 2023-04-26 12:40:10 +10:00
568f0aad71 feat(ui): wip img2img ui 2023-04-26 12:40:10 +10:00
38474fa9d4 feat(ui): add lil spinner to loading 2023-04-26 12:17:01 +10:00
f7f974a28b fix(ui): fix inverted conditional 2023-04-26 12:17:01 +10:00
3c150b384c fix(ui): fix export of ApplicationFeature type 2023-04-26 12:17:01 +10:00
65816049ba feat(ui): add secret loading screen override button 2023-04-26 12:17:01 +10:00
c1c881ded5 feat(ui): support disabledFeatures, add nicer loading
- `disabledParametersPanels` -> `disabledFeatures`
- handle disabling `faceRestore`, `upscaling`, `lightbox`, `modelManager` and OSS header links/buttons
- wait until models are loaded to hide loading screen
- also wait until schema is parsed if `nodes` is an enabled tab
2023-04-26 12:17:01 +10:00
82c4dd8b86 fix(api): return same URL on location header 2023-04-26 06:29:30 +10:00
711d09a107 feat(nodes): add get_uri method to image storage
- gets the external URI of an image
2023-04-26 06:29:30 +10:00
74013b6611 fix(nodes): address feedback 2023-04-26 06:29:30 +10:00
790f399986 feat(nodes): tidy images routes 2023-04-26 06:29:30 +10:00
73cdd36594 feat(nodes): raise HTTPExceptions instead of returning Responses 2023-04-26 06:29:30 +10:00
50ac3eb28d feat(nodes): add delete_image & delete_images routes 2023-04-26 06:29:30 +10:00
d753cff91a Undo debug message 2023-04-25 13:18:50 +03:00
89f1909e4b Update default graph 2023-04-25 13:11:50 +03:00
37916a22ad Use textual inversion manager from pipeline, remove extra conditioning info for uc 2023-04-25 12:53:13 +03:00
76e5d0595d fix(ui): fix no progress images when gallery is empty (#3268)
When gallery was empty (and there is therefore no selected image), no
progress images were displayed.

- fix by correcting the logic in CurrentImageDisplay
- also fix app crash introduced by fixing the first bug
2023-04-25 17:48:24 +12:00
f03cb8f134 fix(ui): fix no progress images when gallery is empty 2023-04-25 15:00:54 +10:00
c2a0e8afc3 [Bugfix] prevent cli crash (#3132)
Prevent legacy CLI crash caused by removal of convert option
    
- Compensatory change to the CLI that prevents it from crashing when it
tries to import a model.
- Bug introduced when the "convert" option was removed from the model
manager.
2023-04-25 03:55:33 +01:00
31a904b903 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:28:45 +01:00
c174cab3ee [Bugfix] fixes and code cleanup to update and installation routines (#3101)
- Fix the update script to work again and fixes the ambiguity between
when a user wants to update to a tag vs updating to a branch, by making
these two operations explicitly separate.
- Remove dangling functions and arguments related to legacy checkpoint
conversion. These are no longer needed now that all legacy models are
either converted at import time, or on-the-fly in RAM.
2023-04-25 03:28:23 +01:00
fe12938c23 update to diffusers 0.15 and fix code for name changes (#3201)
- This is a port of #3184 to the main branch
2023-04-25 03:23:24 +01:00
4fa5c963a1 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:10:51 +01:00
48ce256ba2 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-25 02:49:59 +01:00
8cb2fa8600 Restore log_tokenization check 2023-04-25 04:29:17 +03:00
8f460b92f1 Make latent generation nodes use conditions instead of prompt 2023-04-25 04:21:03 +03:00
d99a08a441 Add compel node and conditioning field type 2023-04-25 03:48:44 +03:00
7555b1f876 Event service will now sleep for 100ms between polls instead of 1ms, reducing CPU usage significantly (#3256)
I noticed that the current invokeai-new.py was using almost all of a CPU
core. After a bit of profiling I noticed that there were many thousands
of calls to epoll(), which suggested to me that something wasn't sleeping
properly in asyncio's loop.

A bit of further investigation with Python profiling revealed that the
__dispatch_from_queue() method in FastAPIEventService
(app/api/events.py:33) was also being called thousands of times.

I believe the asyncio.sleep(0.001) in that method is too aggressive (it
means that the queue will be polled every 1ms) and that 0.1 (100ms) is
still entirely reasonable.
2023-04-24 19:35:27 +12:00
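For illustration, here is a minimal sketch of the polling pattern described in this PR (hypothetical names, not the actual `FastAPIEventService` code), showing the 1ms → 100ms sleep change:

```python
import asyncio
import queue

POLL_INTERVAL = 0.1  # was 0.001; polling every 1ms kept a CPU core busy with epoll()

async def dispatch_from_queue(q: queue.Queue) -> None:
    """Drain a thread-safe queue from the asyncio loop, sleeping between polls."""
    while True:
        try:
            event = q.get(block=False)
            print(f"dispatching {event!r}")
        except queue.Empty:
            # Nothing queued: yield control for 100ms instead of 1ms.
            await asyncio.sleep(POLL_INTERVAL)

q: queue.Queue = queue.Queue()
q.put({"event": "generator_progress"})
# asyncio.run(dispatch_from_queue(q))  # loops forever; shown for shape only
```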
a537231f19 Merge branch 'main' into reduce-event-polling 2023-04-24 19:14:10 +12:00
8044d1b840 translationBot(ui): update translation (Turkish)
Currently translated at 11.3% (58 of 512 strings)

translationBot(ui): added translation (Turkish)

Co-authored-by: ismail ihsan bülbül <e-ben@msn.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
2b58ce4ae4 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 75.0% (380 of 506 strings)

Co-authored-by: Patrick Tien <ivetien@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
ef605cd76c translationBot(ui): update translation (German)
Currently translated at 81.8% (414 of 506 strings)

Co-authored-by: Fabian Bahl <fabian98@bahl-netz.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
a84b5b168f translationBot(ui): update translation (Swedish)
Currently translated at 34.7% (176 of 506 strings)

translationBot(ui): added translation (Swedish)

Co-authored-by: figgefigge <qvintuz@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/sv/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
16f6ee04d0 translationBot(ui): update translation (German)
Currently translated at 81.8% (414 of 506 strings)

translationBot(ui): update translation (German)

Currently translated at 80.8% (409 of 506 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
44be057aa3 translationBot(ui): update translation (Ukrainian)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (English)

Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Ukrainian)

Currently translated at 100.0% (506 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (506 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/en/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/uk/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
422f6967b2 translationBot(ui): update translation (Ukrainian)
Currently translated at 75.8% (384 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 85.5% (433 of 506 strings)

Co-authored-by: mitien <mitien@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/uk/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
4528cc8ba6 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (511 of 511 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
87e91ebc1d translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (511 of 511 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
fd00d111ea translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
b8dc9000bd translationBot(ui): update translation (German)
Currently translated at 73.4% (370 of 504 strings)

Co-authored-by: Jaulustus <jaulustus@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
58c1066765 translationBot(ui): update translation (Finnish)
Currently translated at 18.2% (92 of 504 strings)

translationBot(ui): added translation (Finnish)

Co-authored-by: Juuso V <juuso.vantola@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fi/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
37096a697b translationBot(ui): added translation (Mongolian)
Co-authored-by: Bouncyknighter <gebifirm@gmail.com>
2023-04-24 16:05:16 +10:00
17d0920186 translationBot(ui): update translation (Japanese)
Currently translated at 73.0% (368 of 504 strings)

Co-authored-by: 唐澤 克幸 <4ranci0ne@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
1e05538364 translationBot(ui): added translation (Vietnamese)
Co-authored-by: techybrain-dev <techybrain.dev@gmail.com>
2023-04-24 16:05:16 +10:00
cf28617cd6 Event service will now sleep for 100ms between polls instead of 1ms, reducing CPU usage significantly 2023-04-23 21:27:02 +01:00
d0d8640711 feat(ui): add reload schema button (#3252) 2023-04-23 19:51:37 +12:00
e6158d1874 feat(ui): add reload schema button 2023-04-23 17:49:02 +10:00
2e9d1ea8a3 feat(ui): add support for shouldFetchImages if UI needs to re-fetch an image URL (#3250)
* if `shouldFetchImages` is passed in, UI will make an additional
request to get valid image URL when an invocation is complete
* this is necessary in order to have optional authorization for images
2023-04-23 16:00:13 +10:00
59b0153236 add to types 2023-04-23 15:59:55 +10:00
9f8ff912c4 feat(ui): add support for shouldFetchImages if UI needs to re-fetch an image URL 2023-04-23 15:59:55 +10:00
f0e4a2124a [Nodes UI] More Work (#3248)
- Style the Minimap
- Made the Node UI Legend Responsive
- Set Min Width for nodes on Spawn so resize doesn't snap.
- Initial Implementation of Node Search
- Added FuseJS to handle the node filtering
2023-04-23 17:51:40 +12:00
11ab5c7d56 fix(ui): Fix up arrow not working on unfiltered list 2023-04-23 15:18:35 +12:00
3f334d9e5e feat(ui): Add fusejs to NodeSearch 2023-04-23 15:14:44 +12:00
ff891b1ff2 feat(ui): Basic Node Search Component
Very buggy
2023-04-23 13:35:02 +12:00
2914ee10b0 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-22 20:21:59 +01:00
e29c2fb782 Merge branch 'more-nodes-work' of https://github.com/blessedcoolant/InvokeAI into more-nodes-work 2023-04-23 02:53:25 +12:00
b763f1809e feat(ui): Stylize Node Minimap 2023-04-23 02:52:32 +12:00
d26b44104a fix(ui): minor tidy 2023-04-23 00:45:03 +10:00
b73fd2a6d2 fix(ui): Set Min Width for Nodes 2023-04-23 00:55:43 +12:00
f258aba6d1 chore(ui): Make the Node UI Legend Responsive 2023-04-23 00:55:22 +12:00
2e70848aa0 Responsive Mobile Layout (#3207)
The first draft for a Responsive Mobile Layout for InvokeAI. Some basic
documentation to help contributors. // Notes from: @blessedcoolant

---

The whole rework needs to be done using the `mobile first` concept where
the base design will be catered to mobile and we add responsive changes
as we grow to larger screens.

**Added**

- Basic breakpoints have been added to the `theme.ts` file that indicate
at which values Chakra makes the responsive changes.
- A basic `useResolution` hook has been added that either returns
`mobile`, `tablet` or `desktop` based on the breakpoint. We can
customize this hook further to do more complex checks for us if need be.

**Syntax**

- Any Chakra component is directly capable of taking different values
for the different breakpoints set in our `theme.ts` file. These can be
passed in a few ways with the most descriptive being an object. For
example:

`flexDir={{ base: 'column', xl: 'row' }}` - This would set the `0em and
above` to be column for the flex direction but change to row
automatically when we hit `xl` and above resolutions which in our case
is `80em or 1280px`. This same format is applicable for any element in
Chakra.

`flexDir={['column', null, null, 'row', null]}` - The above syntax can
also be passed as an array to the property with each value in the array
corresponding to each breakpoint we have. Setting `null` just bypasses
it. This is a good shorthand, but I think we should stick to the above syntax
for readability.

**Note**: I've modified a few elements here and there to give an idea on
how the responsive syntax works for reference.

---

**Problems to be solved** @SammCheese 

- Some issues you might run into are with the Resizable components.
We've decided we will not use resizable components for smaller
resolutions, since they don't make sense there. So you'll need to make conditional
renderings around these.
- Some components that need custom layouts for different screens might
be better if ported over to `Grid` and use `gridTemplateAreas` to swap
out the design layout. I've demonstrated an example of this in a commit
I've made. I'll let you be the judge of where we might need this.
- The header will probably need to be converted to a burger menu of some
sort with the model changing being handled correctly UX wise. We'll
discuss this on discord.

---

Anyone willing to contribute to this PR can feel free to join the
discussion on discord.

https://discord.com/channels/1020123559063990373/1020839344170348605/threads/1097323866780606615
2023-04-22 22:34:30 +10:00
e973aeef0d Merge branch 'main' into responsive-ui 2023-04-22 14:31:19 +02:00
50e1ac731d fix(ui): make input/outputs renderfn callback 2023-04-22 22:25:17 +10:00
43addc1548 fix(ui): memoize everything nodes 2023-04-22 22:25:17 +10:00
4901911c1a fix(ui): improve nodes performance 2023-04-22 22:25:17 +10:00
44a653925a feat(ui): node styling, controls
- custom node controls
- fix some types
- fix badge colors via colorScheme
- style nodes
2023-04-22 22:25:17 +10:00
94a07a8da7 feat(ui): Make Nodes always spawn in center of work area 2023-04-22 22:25:17 +10:00
ad41afe65e feat(ui): Make Nodes Resizable 2023-04-22 22:25:17 +10:00
77fa7519c4 chore(ui): Cleanup Invocation Component 2023-04-22 22:25:17 +10:00
6e29148d4d delete ImageToImageContent.tsx 2023-04-22 08:43:14 +02:00
3044f3bfe5 fix(ui): adapt NodeEditor for smaller screens 2023-04-22 08:33:05 +02:00
67a8627cf6 add dev:host script 2023-04-22 08:30:09 +02:00
3fb433cb91 Merge branch 'main' of https://github.com/invoke-ai/InvokeAI into responsive-ui 2023-04-22 08:27:00 +02:00
5f498e10bd Partial migration of UI to nodes API (#3195)
* feat(ui): add axios client generator and simple example

* fix(ui): update client & nodes test code w/ new Edge type

* chore(ui): organize generated files

* chore(ui): update .eslintignore, .prettierignore

* chore(ui): update openapi.json

* feat(backend): fixes for nodes/generator

* feat(ui): generate object args for api client

* feat(ui): more nodes api prototyping

* feat(ui): nodes cancel

* chore(ui): regenerate api client

* fix(ui): disable OG web server socket connection

* fix(ui): fix scrollbar styles typing and prop

just noticed the typo, and made the types stronger.

* feat(ui): add socketio types

* feat(ui): wip nodes

- extract api client method arg types instead of manually declaring them
- update example to display images
- general tidy up

* start building out node translations from frontend state and add notes about missing features

* use reference to sampler_name

* use reference to sampler_name

* add optional apiUrl prop

* feat(ui): start hooking up dynamic txt2img node generation, create middleware for session invocation

* feat(ui): write separate nodes socket layer, txt2img generating and rendering w single node

* feat(ui): img2img implementation

* feat(ui): get intermediate images working but types are stubbed out

* chore(ui): add support for package mode

* feat(ui): add nodes mode script

* feat(ui): handle random seeds

* fix(ui): fix middleware types

* feat(ui): add rtk action type guard

* feat(ui): disable NodeAPITest

This was polluting the network/socket logs.

* feat(ui): fix parameters panel border color

This commit should be elsewhere but I don't want to break my flow

* feat(ui): make thunk types more consistent

* feat(ui): add type guards for outputs

* feat(ui): load images on socket connect

Rudimentary

* chore(ui): bump redux-toolkit

* docs(ui): update readme

* chore(ui): regenerate api client

* chore(ui): add typescript as dev dependency

I am having trouble with TS versions after vscode updated, as it now uses TS 5. `madge` has installed 3.9.10 and for whatever reason my vscode wants to use that. Manually specifying 4.9.5 and then setting vscode to use that as the workspace TS fixes the issue.

* feat(ui): begin migrating gallery to nodes

Along the way, migrate to use RTK `createEntityAdapter` for gallery images, and split `results` and `uploads` into separate slices. Much cleaner this way.

* feat(ui): clean up & comment results slice

* fix(ui): separate thunk for initial gallery load so it properly gets index 0

* feat(ui): POST upload working

* fix(ui): restore removed type

* feat(ui): patch api generation for headers access

* chore(ui): regenerate api

* feat(ui): wip gallery migration

* feat(ui): wip gallery migration

* chore(ui): regenerate api

* feat(ui): wip refactor socket events

* feat(ui): disable panels based on app props

* feat(ui): invert logic to be disabled

* disable panels when app mounts

* feat(ui): add support to disableTabs

* docs(ui): organise and update docs

* lang(ui): add toast strings

* feat(ui): wip events, comments, and general refactoring

* feat(ui): add optional token for auth

* feat(ui): export StatusIndicator and ModelSelect for header use

* feat(ui) working on making socket URL dynamic

* feat(ui): dynamic middleware loading

* feat(ui): prep for socket jwt

* feat(ui): migrate cancelation

also updated action names to be event-like instead of declaration-like

sorry, i was scattered and this commit has a lot of unrelated stuff in it.

* fix(ui): fix img2img type

* chore(ui): regenerate api client

* feat(ui): improve InvocationCompleteEvent types

* feat(ui): increase StatusIndicator font size

* fix(ui): fix middleware order for multi-node graphs

* feat(ui): add exampleGraphs object w/ iterations example

* feat(ui): generate iterations graph

* feat(ui): update ModelSelect for nodes API

* feat(ui): add hi-res functionality for txt2img generations

* feat(ui): "subscribe" to particular nodes

feels like a dirty hack but oh well it works

* feat(ui): first steps to node editor ui

* fix(ui): disable event subscription

it is not fully baked just yet

* feat(ui): wip node editor

* feat(ui): remove extraneous field types

* feat(ui): nodes before deleting stuff

* feat(ui): cleanup nodes ui stuff

* feat(ui): hook up nodes to redux

* fix(ui): fix handle

* fix(ui): add basic node edges & connection validation

* feat(ui): add connection validation styling

* feat(ui): increase edge width

* feat(ui): it blends

* feat(ui): wip model handling and graph topology validation

* feat(ui): validation connections w/ graphlib

* docs(ui): update nodes doc

* feat(ui): wip node editor

* chore(ui): rebuild api, update types

* add redux-dynamic-middlewares as a dependency

* feat(ui): add url host transformation

* feat(ui): handle already-connected fields

* feat(ui): rewrite SqliteItemStore in sqlalchemy

* fix(ui): fix sqlalchemy dynamic model instantiation

* feat(ui, nodes): metadata wip

* feat(ui, nodes): models

* feat(ui, nodes): more metadata wip

* feat(ui): wip range/iterate

* fix(nodes): fix sqlite typing

* feat(ui): export new type for invoke component

* tests(nodes): fix test instantiation of ImageField

* feat(nodes): fix LoadImageInvocation

* feat(nodes): add `title` ui hint

* feat(nodes): make ImageField attrs optional

* feat(ui): wip nodes etc

* feat(nodes): roll back sqlalchemy

* fix(nodes): partially address feedback

* fix(backend): roll back changes to pngwriter

* feat(nodes): wip address metadata feedback

* feat(nodes): add seeded rng to RandomRange

* feat(nodes): address feedback

* feat(nodes): move GET images error handling to DiskImageStorage

* feat(nodes): move GET images error handling to DiskImageStorage

* fix(nodes): fix image output schema customization

* feat(ui): img2img/txt2img -> linear

- remove txt2img and img2img tabs
- add linear tab
- add initial image selection to linear parameters accordion

* feat(ui): tidy graph builders

* feat(ui): tidy misc

* feat(ui): improve invocation union types

* feat(ui): wip metadata viewer recall

* feat(ui): move fonts to normal deps

* feat(nodes): fix broken upload

* feat(nodes): add metadata module + tests, thumbnails

- `MetadataModule` is stateless and needed in places where the `InvocationContext` is not available, so have not made it a `service`
- Handles loading/parsing/building metadata, and creating png info objects
- added tests for MetadataModule
- Lifted thumbnail stuff to util

* fix(nodes): revert change to RandomRangeInvocation

* feat(nodes): address feedback

- make metadata a service
- rip out pydantic validation, implement metadata parsing as simple functions
- update tests
- address other minor feedback items

* fix(nodes): fix other tests

* fix(nodes): add metadata service to cli

* fix(nodes): fix latents/image field parsing

* feat(nodes): customise LatentsField schema

* feat(nodes): move metadata parsing to frontend

* fix(nodes): fix metadata test

---------

Co-authored-by: maryhipp <maryhipp@gmail.com>
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-04-22 13:10:20 +10:00
fdad62e88b chore: add ".version" and ".last_model" to gitignore (#3208)
Mistakenly closed the previous pr.
2023-04-20 18:26:27 +01:00
955c81acef Merge branch 'main' into patch-1 2023-04-20 18:26:06 +01:00
e1058f3416 update CODEOWNERS for changed team composition (#3234)
Remove @mauwii and @keturn until they are able to reengage with the
development effort. @GreggHelt2 is designated co-codeowner for the
backend.
2023-04-20 17:19:10 +01:00
edf16a253d Merge branch 'main' into patch-1 2023-04-20 14:16:10 +02:00
46f5ef4100 Merge branch 'main' into dev/codeowner-fix-main 2023-04-19 22:40:56 +01:00
b843255236 update CODEOWNERS for changed team composition 2023-04-19 17:37:48 -04:00
3a968e5072 Update NSFW.md
Outdated doc said to change the '.invokeai' file, but it's now named 'invokeai.init' afaik.
2023-04-18 21:18:32 -04:00
b164330e3c replaced remaining print statements with log.*() 2023-04-18 20:49:00 -04:00
69433c9f68 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-18 19:21:53 -04:00
bd8ffd36bf bump to diffusers 0.15.1, remove dangling module 2023-04-18 19:20:38 -04:00
fd80e84ea6 Merge branch 'main' into patch-1 2023-04-18 19:14:28 -04:00
4824237a98 Added CPU instruction for README (#3225)
Since the change itself is quite straightforward, I'll just describe
the context. Tried using the automatic installer on my laptop, kept erroring
out on line 140-something of installer.py, "ERROR: Can not perform a
'--user' install. User site-packages are not visible in this
virtualenv."
Got tired of fighting with pip so moved on to a command line install.
Worked immediately, but at the time the readme lacked instructions for CPU, so
instead of opening any helpful hyperlinks in the readme, I took a few
minutes to grab the link from installer.py - thus this PR.
2023-04-18 19:07:37 -04:00
2c9a05eb59 Added CPU instruction for README 2023-04-18 18:46:55 +03:00
ecb5bdaf7e [bug] #3218 HuggingFace API off when --no-internet set (#3219)
#3218 

Huggingface API will not be queried if --no-internet flag is set
2023-04-18 14:34:34 +12:00
2feeb1f44c fix(ui): more responsive layout work 2023-04-18 04:29:31 +12:00
554f353773 fix(ui): Fix Width and Height showing 0 as input 2023-04-18 04:28:58 +12:00
f6cdff2c5b [bug] #3218 HuggingFace API off when --no-internet set
https://github.com/invoke-ai/InvokeAI/issues/3218

Huggingface API will not be queried if --no-internet flag is set
2023-04-17 16:53:31 +02:00
aee27e94c9 fix(ui): Fix site header on really small screens 2023-04-18 01:25:53 +12:00
695893e1ac fix(ui): Improve parameters panel and preview display 2023-04-18 01:09:48 +12:00
b800a8eb2e feat(ui): responsive wip
- Fixed a bunch of padding and margin issues across the app
- Fixed the Invoke logo compressing
- Disabled the visibility of the options panel pin button in tablet and mobile views
- Refined the header menu options in mobile and tablet views
- Refined other site header elements in mobile and tablet views
- Aligned Tab Icons to center in mobile and tablet views
2023-04-18 00:50:09 +12:00
9749ef34b5 layout improvements 2023-04-17 13:30:33 +02:00
9a43362127 Revert "Merge branch 'responsive-ui' of https://github.com/SammCheese/InvokeAI into pr/3207"
This reverts commit 866024ea6c, reversing
changes made to 601cc1f92c.
2023-04-17 13:51:08 +12:00
866024ea6c Merge branch 'responsive-ui' of https://github.com/SammCheese/InvokeAI into pr/3207 2023-04-17 13:50:44 +12:00
601cc1f92c help(ui): Basic responsive updates to demonstrate
Made some basic responsive changes to demonstrate how to go about making changes.

There are a bunch of problems not addressed yet, like dealing with the resizable component, etc.
2023-04-17 13:50:13 +12:00
d6a9a4464d feat(ui): Add Basic useResolution Component
This hook just classifies `base` and `sm` as mobile, `md` and `lg` as tablet, and `xl` and `2xl` as desktop.

This is a basic hook for quicker work with resolutions. Can be modified and adjusted to our needs. All resolution related work can go into this hook.
2023-04-17 13:48:42 +12:00
dac271725a feat(ui): Add Basic Breakpoints 2023-04-17 13:26:10 +12:00
e1fbecfcf7 fix(ui): Syntax issue with the HidePreview icon 2023-04-17 12:42:06 +12:00
63d10027a4 nodes: invocation queue item - make more pydantic 2023-04-16 09:39:33 -04:00
ef0773b8a3 nodes: set default for InvocationQueueItem.invoke_all 2023-04-16 09:39:33 -04:00
3daaddf15b nodes: remove duplicate LatentsToLatentsInvocation 2023-04-16 09:39:33 -04:00
570c3fe690 nodes: ensure Graph and GraphExecutionState ids are cast to str on instantiation 2023-04-16 09:39:33 -04:00
cbd1a7263a nodes: fix typing of GraphExecutionState.id 2023-04-16 09:39:33 -04:00
7fc5fbd4ce nodes: convert InvocationQueueItem to Pydantic class 2023-04-16 09:39:33 -04:00
6f6de402ad make InvocationQueueItem serializable 2023-04-16 09:39:33 -04:00
2ec4f5af10 remove unused import to pass lint & revert package.json 2023-04-15 21:53:33 +02:00
281662a6e1 chore: add ".version" and ".last_model" to gitignore
Mistakenly closed the previous pr
2023-04-15 21:46:47 +02:00
2edd032ec7 draft mobile layout 2023-04-15 21:34:03 +02:00
50eb02f68b chore(ui): build 2023-04-15 20:45:17 +10:00
d73f3adc43 moving shouldHidePreview from gallery to ui slice. 2023-04-15 20:45:17 +10:00
116107f464 chore(ui): build 2023-04-15 20:45:17 +10:00
da44bb1707 rename setter 2023-04-15 20:45:17 +10:00
f43aed677e chore(ui): build 2023-04-15 20:45:17 +10:00
0d051aaae2 rename hidden variable to something more descriptive 2023-04-15 20:45:17 +10:00
e4e48ff995 i forgot to push the locale 2023-04-15 20:45:17 +10:00
442a6bffa4 feat: add "Hide Preview" Button 2023-04-15 20:45:17 +10:00
aab262d991 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-14 20:12:38 -04:00
47b9910b48 update to diffusers 0.15 and fix code for name changes
- This is a port of #3184 to the main branch
2023-04-14 15:35:03 -04:00
0b0e6fe448 convert remainder of print() to log.info() 2023-04-14 15:15:14 -04:00
23d65e7162 [nodes] Add subgraph library, subgraph usage in CLI, and fix subgraph execution (#3180)
* Add latent to latent (img2img equivalent)
Fix a CLI bug with multiple links per node

* Using "latents" instead of "latent"

* [nodes] In-progress implementation of graph library

* Add linking to CLI for graph nodes (still broken)

* Fix subgraph execution, fix subgraph linking in CLI

* Fix LatentsToLatents
2023-04-14 06:41:06 +00:00
024fd54d0b Fixed a Typo. (#3190) 2023-04-14 14:33:31 +12:00
c44c19e911 Fixed a Typo. 2023-04-13 17:42:34 +02:00
c132dbdefa change "ialog" to "log" 2023-04-11 18:48:20 -04:00
f3081e7013 add module-level getLogger() method 2023-04-11 12:23:13 -04:00
f904f14f9e add missing module-level methods 2023-04-11 11:10:43 -04:00
8917a6d99b add logging support
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.

Examples:

   ### A critical error     (logging.CRITICAL)
   *** A non-fatal error    (logging.ERROR)
   ** A warning             (logging.WARNING)
   >> Informational message (logging.INFO)
      | Debugging message   (logging.DEBUG)

This style logs everything through a single logging object and is
identical to using Python's `logging` module. The commonly-used
module-level logging functions are implemented as simple pass-thrus
to logging:

  import invokeai.backend.util.logging as ialog

  ialog.debug('this is a debugging message')
  ialog.info('this is an informational message')
  ialog.log(logging.CRITICAL, 'get out of dodge')
  ialog.disable(level=logging.INFO)
  ialog.basicConfig(filename='/var/log/invokeai.log')

Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
modules' use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add the additional
message decorations.

For more control, the logging module's object-oriented logging style
is also supported. The API is identical to the vanilla logging
usage. In fact, the only thing that has changed is that the
getLogger() method adds a custom formatter to the log messages.

 import logging
 from invokeai.backend.util.logging import InvokeAILogger

 logger = InvokeAILogger.getLogger(__name__)
 fh = logging.FileHandler('/var/invokeai.log')
 logger.addHandler(fh)
 logger.critical('this will be logged to both the console and the log file')
2023-04-11 10:46:38 -04:00
5a4765046e add logging support
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.

Examples:

   ### A critical error     (logging.CRITICAL)
   *** A non-fatal error    (logging.ERROR)
   ** A warning             (logging.WARNING)
   >> Informational message (logging.INFO)
      | Debugging message   (logging.DEBUG)
2023-04-11 09:33:28 -04:00
d923d1d66b fix(nodes): fix naming of CvInvocationConfig 2023-04-11 12:13:53 +10:00
1f2c1e14db fix(nodes): move InvocationConfig to baseinvocation.py 2023-04-11 12:13:53 +10:00
07e3a0ec15 feat(nodes): add invocation schema customisation, add model selection
- add invocation schema customisation

done via pydantic's `Config` class and `schema_extra`. when using `Config`, inherit from `InvocationConfig` to get type hints.

where it makes sense - like for all math invocations - define a `MathInvocationConfig` class and have all invocations inherit from it.

this customisation can provide any arbitrary additional data to the UI. currently it provides tags and field type hints.

this is necessary for `model` type fields, which are actually string fields. without something like this, we can't reliably differentiate `model` fields from normal `string` fields.

can also be used for future field types.

all invocations now have tags, and all `model` fields have ui type hints.

- fix model handling for invocations

added a helper to fall back to the default model if an invalid model name is chosen. model names in graphs now work.

- fix latents progress callback

noticed this wasn't correct while working on everything else.
2023-04-11 12:13:53 +10:00
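As a rough sketch of the customisation mechanism this commit describes (pydantic v1's `Config.schema_extra`; the class name and tag values below are illustrative, not InvokeAI's actual ones):

```python
from pydantic import BaseModel, Field

class ExampleMathInvocation(BaseModel):
    """A node whose `model` field is a plain string, tagged for the UI."""

    model: str = Field(default="stable-diffusion-1.5")

    class Config:
        # Merged into the generated JSON schema; the UI reads these extras
        # to render a model picker instead of a free-text input.
        schema_extra = {
            "ui": {
                "tags": ["math"],
                "type_hints": {"model": "model"},
            }
        }

print(ExampleMathInvocation.schema()["ui"])
# {'tags': ['math'], 'type_hints': {'model': 'model'}}
```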
427db7c7e2 feat(nodes): fix typo in PasteImageInvocation 2023-04-10 21:33:08 +10:00
dad3a7f263 fix(nodes): sampler_name --> scheduler
the name of this was changed at some point. nodes still used the old name, so scheduler selection did nothing. simple fix.
2023-04-10 19:54:09 +10:00
5bd0bb637f fix(nodes): add missing type to ImageField 2023-04-10 19:33:15 +10:00
f05095770c Increase chunk size when computing diffusers SHAs (#3159)
When running this app first time in WSL2 environment, which is
notoriously slow when it comes to IO, computing the SHAs of the models
takes an eternity.

Computing shas for sd2.1
```
| Calculating sha256 hash of model files
| sha256 = 1e4ce085102fe6590d41ec1ab6623a18c07127e2eca3e94a34736b36b57b9c5e (49 files hashed in 510.87s)
```

I increased the chunk size to 16MB to reduce the number of round trips for
loading the data. New results:

```
| Calculating sha256 hash of model files
| sha256 = 1e4ce085102fe6590d41ec1ab6623a18c07127e2eca3e94a34736b36b57b9c5e (49 files hashed in 59.89s)
```

Higher values don't seem to make an impact.
2023-04-09 22:29:43 -04:00
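A minimal sketch of the chunked hashing this PR describes (a hypothetical helper, not the actual InvokeAI function), reading in 16MB chunks to cut IO round trips:

```python
import hashlib
from pathlib import Path

CHUNK_SIZE = 16 * 1024 * 1024  # 16MB; small chunks meant many slow WSL2 round trips

def sha256_of_model_dir(model_dir: Path) -> str:
    """Hash every file under a diffusers model directory in a stable order."""
    sha = hashlib.sha256()
    for path in sorted(model_dir.rglob("*")):
        if path.is_file():
            with open(path, "rb") as f:
                while chunk := f.read(CHUNK_SIZE):
                    sha.update(chunk)
    return sha.hexdigest()

# print(sha256_of_model_dir(Path("models/stable-diffusion-2.1")))
```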
de189f2db6 Increase chunk size when computing SHAs 2023-04-09 21:53:59 +02:00
cee159dfa3 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-09 12:08:09 -04:00
4463124bdd feat(nodes): mark ImageField properties required, add docs 2023-04-09 22:53:17 +10:00
34402cc46a feat(nodes): add list_images endpoint
- add `list_images` endpoint at `GET api/v1/images`
- extend `ImageStorageBase` with `list()` method, implemented it for `DiskImageStorage`
- add `ImageReponse` class for image responses, which includes urls and metadata
- add `ImageMetadata` class (basically a stub at the moment)
- uploaded images now named `"{uuid}_{timestamp}.png"`
- add `models` modules. besides separating concerns more clearly, this helps to mitigate circular dependencies
- improve thumbnail handling
2023-04-09 13:48:44 +10:00
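A hedged sketch of what such a paginated listing route could look like (stand-in models and an in-memory store, not the actual InvokeAI router or `DiskImageStorage` code):

```python
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ImageResponse(BaseModel):
    image_name: str
    image_url: str
    thumbnail_url: str

# Stand-in for ImageStorageBase.list(); real code would hit the storage service.
FAKE_STORE = [
    ImageResponse(
        image_name=f"{i:04d}_1680000000.png",
        image_url=f"/images/{i:04d}_1680000000.png",
        thumbnail_url=f"/thumbnails/{i:04d}_1680000000.webp",
    )
    for i in range(25)
]

@app.get("/api/v1/images", response_model=List[ImageResponse])
async def list_images(page: int = 0, per_page: int = 10) -> List[ImageResponse]:
    start = page * per_page
    return FAKE_STORE[start : start + per_page]
```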
54d9833db0 Else. 2023-04-08 12:08:51 -04:00
5fe8cb56fc Correct response note 2023-04-08 12:08:51 -04:00
7919d81fb1 Update to address feedback 2023-04-08 12:08:51 -04:00
9d80b28a4f Begin Convert Work 2023-04-08 12:08:51 -04:00
1fcd91bcc5 Add/Update and Delete Models 2023-04-08 12:08:51 -04:00
e456e2e63a fix typo (#3147)
fix typo.

reference:
21f79e5919/invokeai/configs/INITIAL_MODELS.yaml (L21-L25)
2023-04-08 20:25:31 +12:00
ee41b99049 Update 050_INSTALLING_MODELS.md
fix typo
2023-04-08 17:02:47 +09:00
111d674e71 fix(nodes): use correct torch device in NoiseInvocation 2023-04-08 12:32:03 +10:00
8f048cfbd9 Add python-multipart, which is needed by nodes (#3141)
I'm not quite sure why this isn't being installed by fastapi's
dependencies, but running without it installed yields:

```
root@gnubert:/srv/ssdtank/docker/invokeai/git/InvokeAI# docker run --gpus all -p 9989:9090 -v /srv/ssdtank/docker/invokeai/data:/data -v /srv/ssdtank/docker/invokeai/git/InvokeAI/static/dream_web/:/static/dream_web --rm -ti -u root --entrypoint /bin/bash ghcr.io/cmsj/invokeai-nodes@sha256:426ebc414936cb67e02f5f64d963196500a77b2f485df8122a2d462797293938
root@7a77b56a5771:/usr/src# /invoke-new.py --web
Form data requires "python-multipart" to be installed.
You can install "python-multipart" with:

pip install python-multipart

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /invoke-new.py:22 in <module>                                                                    │
│                                                                                                  │
│   19                                                                                             │
│   20                                                                                             │
│   21 if __name__ == '__main__':                                                                  │
│ ❱ 22 │   main()                                                                                  │
│   23                                                                                             │
│                                                                                                  │
│ /invoke-new.py:13 in main                                                                        │
│                                                                                                  │
│   10 │   os.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))                │
│   11 │                                                                                           │
│   12 │   if '--web' in sys.argv:                                                                 │
│ ❱ 13 │   │   from invokeai.app.api_app import invoke_api                                         │
│   14 │   │   invoke_api()                                                                        │
│   15 │   else:                                                                                   │
│   16 │   │   # TODO: Parse some top-level args here.                                             │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/invokeai/app/api_app.py:17 in <module>            │
│                                                                                                  │
│    14                                                                                            │
│    15 from ..backend import Args                                                                 │
│    16 from .api.dependencies import ApiDependencies                                              │
│ ❱  17 from .api.routers import images, sessions, models                                          │
│    18 from .api.sockets import SocketIO                                                          │
│    19 from .invocations import *                                                                 │
│    20 from .invocations.baseinvocation import BaseInvocation                                     │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/invokeai/app/api/routers/images.py:45 in <module> │
│                                                                                                  │
│   42 │   │   404: {"description": "Session not found"},                                          │
│   43 │   },                                                                                      │
│   44 )                                                                                           │
│ ❱ 45 async def upload_image(file: UploadFile, request: Request):                                 │
│   46 │   if not file.content_type.startswith("image"):                                           │
│   47 │   │   return Response(status_code=415)                                                    │
│   48                                                                                             │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/routing.py:630 in decorator               │
│                                                                                                  │
│    627 │   │   ),                                                                                │
│    628 │   ) -> Callable[[DecoratedCallable], DecoratedCallable]:                                │
│    629 │   │   def decorator(func: DecoratedCallable) -> DecoratedCallable:                      │
│ ❱  630 │   │   │   self.add_api_route(                                                           │
│    631 │   │   │   │   path,                                                                     │
│    632 │   │   │   │   func,                                                                     │
│    633 │   │   │   │   response_model=response_model,                                            │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/routing.py:569 in add_api_route           │
│                                                                                                  │
│    566 │   │   current_generate_unique_id = get_value_or_default(                                │
│    567 │   │   │   generate_unique_id_function, self.generate_unique_id_function                 │
│    568 │   │   )                                                                                 │
│ ❱  569 │   │   route = route_class(                                                              │
│    570 │   │   │   self.prefix + path,                                                           │
│    571 │   │   │   endpoint=endpoint,                                                            │
│    572 │   │   │   response_model=response_model,                                                │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/routing.py:444 in __init__                │
│                                                                                                  │
│    441 │   │   │   │   0,                                                                        │
│    442 │   │   │   │   get_parameterless_sub_dependant(depends=depends, path=self.path_format),  │
│    443 │   │   │   )                                                                             │
│ ❱  444 │   │   self.body_field = get_body_field(dependant=self.dependant, name=self.unique_id)   │
│    445 │   │   self.app = request_response(self.get_route_handler())                             │
│    446 │                                                                                         │
│    447 │   def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]:    │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/dependencies/utils.py:756 in              │
│ get_body_field                                                                                   │
│                                                                                                  │
│   753 │   │   alias="body",                                                                      │
│   754 │   │   field_info=BodyFieldInfo(**BodyFieldInfo_kwargs),                                  │
│   755 │   )                                                                                      │
│ ❱ 756 │   check_file_field(final_field)                                                          │
│   757 │   return final_field                                                                     │
│   758                                                                                            │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/dependencies/utils.py:111 in              │
│ check_file_field                                                                                 │
│                                                                                                  │
│   108 │   │   │   │   raise RuntimeError(multipart_incorrect_install_error) from None            │
│   109 │   │   except ImportError:                                                                │
│   110 │   │   │   logger.error(multipart_not_installed_error)                                    │
│ ❱ 111 │   │   │   raise RuntimeError(multipart_not_installed_error) from None                    │
│   112                                                                                            │
│   113                                                                                            │
│   114 def get_param_sub_dependant(                                                               │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Form data requires "python-multipart" to be installed.
You can install "python-multipart" with:

pip install python-multipart
```
2023-04-07 19:17:37 -04:00
cd1b350dae Merge branch 'main' into bugfix/release-updater 2023-04-07 18:56:21 -04:00
8334757af9 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-07 18:55:54 -04:00
7103ac6a32 Add python-multipart, which is needed by nodes 2023-04-07 19:43:42 +01:00
f6b131e706 remove vestiges of non-functional autoimport code for legacy checkpoints (#3076)
- the functionality to automatically import and run legacy checkpoint
files in a designated folder has been removed from the backend but there
are vestiges of the code remaining in the frontend that are causing
crashes.
- This fixes the problem.

- Closes #3075
2023-04-08 02:21:23 +12:00
d1b2b99226 Merge branch 'main' into bugfix/remove-autoimport-dead-code 2023-04-07 09:59:58 -04:00
e356f2511b chore: configure stale bot 2023-04-07 20:45:08 +10:00
e5f8b22a43 add a new method to model_manager that retrieves individual pipeline components (#3120)
This PR introduces a new set of ModelManager methods that enables you to
retrieve the individual parts of a stable diffusion pipeline model,
including the vae, text_encoder, unet, tokenizer, etc.

To use:

```
from invokeai.backend import ModelManager

manager = ModelManager('/path/to/models.yaml')

# get the VAE
vae = manager.get_model_vae('stable-diffusion-1.5')

# get the unet
unet = manager.get_model_unet('stable-diffusion-1.5')

# get the tokenizer
tokenizer = manager.get_model_tokenizer('stable-diffusion-1.5')

# etc etc
feature_extractor = manager.get_model_feature_extractor('stable-diffusion-1.5')
scheduler = manager.get_model_scheduler('stable-diffusion-1.5')
text_encoder = manager.get_model_text_encoder('stable-diffusion-1.5')

# if no model provided, then defaults to the one currently in GPU, if any
vae = manager.get_model_vae()
```
2023-04-07 01:39:57 -04:00
45b84fb4bb Merge branch 'main' into bugfix/remove-autoimport-dead-code 2023-04-07 17:07:25 +12:00
f022c89249 Merge branch 'main' into feat/return-submodels 2023-04-06 22:03:31 -04:00
ab05144716 Change where !replay looks for its infile (#3129)
!fetch puts its output file into the output directory; it may be
beneficial to have !replay look in the output directory as well.
2023-04-06 22:02:06 -04:00
aeb4914e67 Merge branch 'main' into replay-file_path 2023-04-06 21:45:23 -04:00
76bcd4d44f Fix typo (#3133)
'hotdot' to 'hotdog'; the world's least important PR :)
2023-04-07 12:38:05 +12:00
50f5e1bc83 Fix typo
'hotdot' to 'hotdog'; the world's least important PR :)
2023-04-06 16:47:57 -07:00
4c339dd4b0 refactor get_submodels() into individual methods 2023-04-06 17:08:23 -04:00
bc2b9500e3 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-06 15:38:46 -04:00
32857d81c5 prevent legacy CLI crash caused by removal of convert option
- Compensatory change to the CLI that prevents it from crashing
  when it tries to import a model.
- Bug introduced when the "convert" option was removed from the model
  manager.
2023-04-06 15:36:05 -04:00
7268131f57 change where !replay looks for its infile
!fetch puts its output file into the output directory; it may be beneficial to have !replay look in the output directory as well.
2023-04-06 08:14:11 -04:00
85b020f76c [nodes] Add latent nodes, storage, and fix iteration bugs (#3091)
* Add latents nodes.
* Fix iteration expansion.
* Add collection generator nodes, math nodes.
* Add noise node.
* Add some graph debug commands to the CLI.
* Fix negative id linking in CLI.
* Fix a CLI bug with multiple links per node.
2023-04-06 04:06:05 +00:00
a7833cc9a9 [api] Add models router and list model API. 2023-04-05 23:59:07 -04:00
28f75d80d5 Merge branch 'main' into bugfix/release-updater 2023-04-05 18:25:33 -04:00
919294e977 fix build-container.yml (#3117)
Add permission go write packages to GITHUB_TOKEN
2023-04-06 00:25:00 +02:00
b917ffa4d7 Merge branch 'main' into bugfix/release-updater 2023-04-05 17:37:27 -04:00
d44151d6ff add a new method to model_manager that retrieves individual pipeline parts
- New method is ModelManager.get_sub_model(model_name:str,model_part:SDModelComponent)

To use:

```
from invokeai.backend import ModelManager, SDModelComponent as sdmc
manager = ModelManager('/path/to/models.yaml')
vae = manager.get_sub_model('stable-diffusion-1.5', sdmc.vae)
```
2023-04-05 17:25:42 -04:00
7640acfb1f update build-container.yml
- add packages write permission
2023-04-05 15:44:26 +02:00
aed9ecef2a feat(nodes): add thumbnail generation to DiskImageStorage 2023-04-05 08:22:23 +10:00
18cddd7972 Right link on pytorch installer for linux rocm (#3084)
Right link on pytorch installer for linux rocm
2023-04-04 17:40:42 -04:00
e6b25f4ae3 Merge branch 'main' into patch-1 2023-04-04 17:40:12 -04:00
d1c0050e65 fix(nodes): fix typo in list_sessions handler (#3109)
The typo accidentally did not affect functionality; when `query==""`, it
`search()`ed but found everything due to empty query, then paginated
results, so it worked the same as `list()`.

Still, it's worth fixing.
2023-04-03 21:24:48 -04:00
ecdfa136a0 fix(nodes): fix typo in list_sessions handler 2023-04-04 00:34:32 +10:00
5cd513ee63 [deps] bump compel version to fix crash on invalid (auto111) syntax (#3107)
currently if users input eg `happy (camper:0.3)` it gets parsed
incorrectly, which causes crashes if it's in the negative prompt. bumping
to compel 1.0.5 fixes the parser to avoid this (note the weight is
parsed as plain text; it's not converted to proper invoke syntax)
2023-04-04 02:30:17 +12:00
ab45086546 Merge branch 'main' into deps_bump_compel 2023-04-04 02:05:40 +12:00
77ba7359f4 fix(nodes): commit changes to db 2023-04-03 19:09:49 +10:00
8cbe2e14d9 bump compel version to fix crash on invalid (auto111) syntax 2023-04-03 10:37:01 +02:00
f682fb8040 fix invokeai-update script
- This commit fixes the update script to work again, as well as fixing
  the ambiguity between updating to a tag and updating to a branch.
2023-04-02 11:08:12 -04:00
ee86eedf01 Right link on pytorch installer for linux rocm
Right link on pytorch installer for linux rocm
2023-03-31 17:22:00 -03:00
1f89cf3343 remove vestiges of non-functional autoimport code for legacy checkpoints
- Closes #3075
2023-03-31 04:27:03 -04:00
c4e6511a59 Add support for yet another TI embedding format (main version) (#3050)
- This PR adds support for embedding files that contain a single key
"emb_params". The only example I know of this format is the
"EasyNegative" embedding on HuggingFace, but there are certainly others.

- This PR also adds support for loading embedding files that have been
saved in safetensors format.

- It also cleans up the code so that the logic of probing for and
selecting the right format parser is clear.

- This is the same as #3045, which is on the 2.3 branch.
2023-03-31 03:57:57 -04:00
44843be4c8 Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-30 23:16:52 -04:00
054e963bef add basic autocomplete functionality to node cli (#3035)
- Commands, invocations and their parameters will now autocomplete using
introspection.
- Two types of parameter *arguments* will also autocomplete:
  - --sampler_name will autocomplete the scheduler name
  - --model will autocomplete the model name
- There don't seem to be commands for reading/writing image files yet,
so path autocompletion is not implemented
2023-03-30 08:25:36 -04:00
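Roughly, the completer pattern looks like the sketch below (stdlib `readline`; the command and option tables are hard-coded stand-ins, whereas the real implementation builds them by introspecting commands and invocations):

```python
import readline

# In the real CLI these tables come from introspection; hard-coded here.
COMMANDS = {"txt2img": ["--model", "--sampler_name", "--steps"], "history": [], "exit": []}
VALUES = {
    "--sampler_name": ["ddim", "euler", "k_lms"],
    "--model": ["stable-diffusion-1.5", "stable-diffusion-2.1"],
}

def complete(text: str, state: int):
    words = readline.get_line_buffer().split()
    if words and words[-1] in VALUES and not text:
        options = VALUES[words[-1]]         # complete an argument value
    elif len(words) >= 2 and words[-2] in VALUES:
        options = VALUES[words[-2]]         # value partially typed
    elif words and words[0] in COMMANDS and (len(words) > 1 or not text):
        options = COMMANDS[words[0]]        # complete an option flag
    else:
        options = list(COMMANDS)            # complete the command itself
    matches = [o for o in options if o.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer_delims(" ")  # keep "--flag" together as one token
readline.set_completer(complete)
readline.parse_and_bind("tab: complete")
```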
afb66a7884 Merge branch 'main' into feat/node-cli-autocompleter 2023-03-30 07:51:51 -04:00
b9df9e26f2 Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-30 07:51:23 -04:00
25ae36ceb5 I18n build mode (#3051)
Add build mode option to bundle english translation with UI
2023-03-29 22:26:45 -04:00
3ae8daedaa Merge branch 'main' into i18n-build-mode 2023-03-29 22:26:17 -04:00
e11c1d66ab handle multiple tokens and embeddings in single file 2023-03-29 22:05:06 -04:00
b913e1e11e improve importation and conversion of legacy checkpoint files (#3053)
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

## Model configuration file selection

To improve the user experience, the model manager's `heuristic_import()`
method has been enhanced as follows:

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file provided, the method looks for a .yaml file in the
same directory as the model which bears the same basename. e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
The yaml file is then used as the configuration file for importation and
conversion.

3. If no such file is found, then the method opens up the checkpoint and
probes it to determine whether it is V1, V1-inpaint or V2. If it is a V1
format, then the appropriate v1-inference.yaml config file is used.
Unfortunately there are two V2 variants that cannot be distinguished by
introspection.

4. If the probe algorithm is unable to determine the model type, then
its last-ditch effort is to execute an optional callback function that
can be provided by the caller. This callback, named
`config_file_callback` receives the path to the legacy checkpoint and
returns the path to the config file to use. The CLI uses to put up a
multiple choice prompt to the user. The WebUI **could** use this to
prompt the user to choose from a radio-button selection.

5. If the config file cannot be determined, then the import is
abandoned.

## Custom VAE Selection

The user can attach a custom VAE to the imported and converted model by
copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted models file, so the ".pt" file can
be deleted after conversion.

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.

Note that this is the same fix that was applied to the 2.3 branch in
#3043 . This applies to `main`.
2023-03-29 17:22:15 -04:00
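As a sketch of the sidecar-file lookup order described above (steps 1-2 plus the custom-VAE naming rule; the helper names are illustrative, not the actual `heuristic_import()` internals):

```python
from pathlib import Path
from typing import Optional

VAE_SUFFIXES = (".vae.pt", ".vae.ckpt", ".vae.safetensors")

def find_config_file(checkpoint: Path, explicit: Optional[Path] = None) -> Optional[Path]:
    if explicit is not None:          # 1. a caller-supplied config path wins
        return explicit
    sidecar = checkpoint.with_suffix(".yaml")
    if sidecar.exists():              # 2. same-basename .yaml next to the model
        return sidecar
    return None                       # 3+. fall through to probing / callback

def find_custom_vae(checkpoint: Path) -> Optional[Path]:
    """Look for a VAE sharing the checkpoint's basename, e.g. my-new-model.vae.pt."""
    for suffix in VAE_SUFFIXES:
        candidate = checkpoint.parent / (checkpoint.stem + suffix)
        if candidate.exists():
            return candidate
    return None
```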
3c4b6d5735 Merge branch 'main' into enhance/heuristic-import-improvements 2023-03-29 16:54:43 -04:00
e6123eac19 Merge branch 'main' into i18n-build-mode 2023-03-29 05:33:14 -07:00
30ca25897e Fix bugs in online ckpt conversion of 2.0 models (#3057)
## Enable the on-the-fly conversion of models based on SD 2.0/2.1 into
diffusers

This commit fixes bugs related to the on-the-fly conversion and loading
of legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
on-the-fly using --ckpt_convert, generation would crash with a precision
incompatibility error. This problem has been found and fixed.
2023-03-28 23:34:53 -04:00
abaee6b9ed Merge branch 'main' into feat/node-cli-autocompleter 2023-03-28 23:32:10 -04:00
4d7c9e1ab7 Merge branch 'main' into bugfix/convert-2.0-models 2023-03-28 23:01:36 -04:00
cc5687f26c [nodes] downgrade fastapi+uvicorn to fix openapi schema 2023-03-28 22:53:20 -04:00
cdb3616dca Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-28 21:03:06 -04:00
78e76f26f9 Merge branch 'main' into i18n-build-mode 2023-03-28 11:04:32 -04:00
9a7580dedd fix bugs in online ckpt conversion of 2.0 models
This commit fixes bugs related to the on-the-fly conversion and loading of
legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
  on-the-fly using --ckpt_convert, generation would crash with a
  precision incompatibility error.
2023-03-28 00:17:20 -04:00
dc2da8cff4 Doc: updating ROCm version in documentation (#3041)
The Pytorch ROCm version in the documentation is outdated (`rocm5.2`),
which leads to errors during the installation of InvokeAI.

This PR updates the documentation with the latest Pytorch ROCm `5.4.2`
version.
2023-03-27 22:37:43 -04:00
019a9f0329 address change requests in PR
1. Prompt has changed to "invoke> ".
2. Function to initialize the autocompleter has been renamed "set_autocompleter()"
2023-03-27 12:20:24 -04:00
fe5d9ad171 improve importation and conversion of legacy checkpoint files
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

To improve the user experience, the model manager's
`heuristic_import()` method has been enhanced as follows:

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file provided, the method looks for a .yaml file in the
same directory as the model which bears the same basename. e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
   The yaml file is then used as the configuration file for
   importation and conversion.

3. If no such file is found, then the method opens up the checkpoint
   and probes it to determine whether it is V1, V1-inpaint or V2.
   If it is a V1 format, then the appropriate v1-inference.yaml config
   file is used. Unfortunately there are two V2 variants that cannot be
   distinguished by introspection.

4. If the probe algorithm is unable to determine the model type, then its
   last-ditch effort is to execute an optional callback function that can
   be provided by the caller. This callback, named `config_file_callback`
   receives the path to the legacy checkpoint and returns the path to the
   config file to use. The CLI uses this to put up a multiple-choice
   prompt to the user. The WebUI **could** use it to prompt the user to
   choose from a radio-button selection.

5. If the config file cannot be determined, then the import is abandoned.

The user can attach a custom VAE to the imported and converted model
by copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted models file, so the ".pt" file
can be deleted after conversion.

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.
2023-03-27 11:27:45 -04:00
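For illustration, the resolution order described above might look like the following minimal sketch (the `resolve_config_file` helper, its signature, and the probe stub are hypothetical, not the actual `heuristic_import()` internals):

```
from pathlib import Path
from typing import Callable, Optional

def resolve_config_file(
    checkpoint: Path,
    config_file: Optional[Path] = None,
    config_file_callback: Optional[Callable[[Path], Optional[Path]]] = None,
) -> Optional[Path]:
    # 1. A config file passed by the caller is used as-is.
    if config_file is not None:
        return config_file
    # 2. Look for a .yaml file with the same basename as the model.
    sibling = checkpoint.with_suffix('.yaml')
    if sibling.exists():
        return sibling
    # 3. Probe the checkpoint (V1, V1-inpaint or V2); stubbed out here.
    probed = None
    if probed is not None:
        return probed
    # 4. Last-ditch effort: ask the caller-supplied callback to choose.
    if config_file_callback is not None:
        return config_file_callback(checkpoint)
    # 5. The config file cannot be determined; the import is abandoned.
    return None
```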
dbc0093b31 Merge remote-tracking branch 'origin' into i18n-build-mode 2023-03-27 10:57:41 -04:00
92e512b8b6 add package mode option for i18next 2023-03-27 10:49:52 -04:00
abe4dc8ac1 Add support for yet another textual inversion embedding format
- This PR adds support for embedding files that contain a single key
  "emb_params". The only example I know of this format is the
  "EasyNegative" embedding on HuggingFace, but there are certainly
  others.

- This PR also adds support for loading embedding files that have been
  saved in safetensors format.

- It also cleans up the code so that the logic of probing for and
  selecting the right format parser is clear.
2023-03-27 09:39:03 -04:00
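A minimal sketch of the probing described above (the helper name and structure are illustrative; the key check mirrors the single-key "emb_params" format):

```
from pathlib import Path

import safetensors.torch
import torch

def load_embedding(path: Path):
    # Use the safe loader for safetensors files, torch.load otherwise.
    if path.suffix == '.safetensors':
        data = safetensors.torch.load_file(path)
    else:
        data = torch.load(path, map_location='cpu')
    # Single-key "emb_params" format (e.g. the EasyNegative embedding):
    # the embedding tensor is stored directly under that key.
    if isinstance(data, dict) and list(data.keys()) == ['emb_params']:
        return data['emb_params']
    return data
```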
dc14701d20 Merge branch 'main' into feat/node-cli-autocompleter 2023-03-26 23:46:10 -04:00
737e0f3085 doc: fixing error in rocm version 2023-03-26 12:40:20 +02:00
81b7ea4362 doc: updating ROCm version for pip install 2023-03-26 12:32:12 +02:00
09dfde0ba1 fix(ui): fix viewer tooltip localisation strings (#3037)
fixes #2923
2023-03-26 20:35:52 +13:00
3ba7e966b5 Merge branch 'main' into fix/ui/viewer-localisation 2023-03-26 20:35:12 +13:00
a1cd4834d1 nodes: add cancelation, updated progress callback, typing fixes (#3036)
keeping `main` up to date with my api nodes branch:
- bd7e515290: [nodes] Add cancelation to
the API @Kyle0654
- 5fe38f7: fix(backend): simple typing fixes
  - just picking some low-hanging fruit to improve IDE hinting
- c34ac91: fix(nodes): fix cancel; fix callback for img2img, inpaint
- makes nodes cancel immediately, fixes progress images on nodes, and fixes
callbacks for img2img/inpaint
- 4221cf7: fix(nodes): fix schema generation for output classes
- did this previously for some other class; needed to not have node
outputs be optional
2023-03-26 20:34:27 +13:00
a724038dc6 fix(ui): fix viewer tooltip localisation strings
fixes #2923
2023-03-26 17:43:00 +11:00
4221cf7731 fix(nodes): fix schema generation for output classes
All output classes need to have their properties flagged as `required` for the schema generation to work as needed.
2023-03-26 17:20:10 +11:00
c34ac91ff0 fix(nodes): fix cancel; fix callback for img2img, inpaint 2023-03-26 17:07:40 +11:00
5fe38f7c88 fix(backend): simple typing fixes 2023-03-26 17:07:03 +11:00
bd7e515290 [nodes] Add cancelation to the API 2023-03-26 15:47:32 +11:00
076fac07eb feat[web]: use the predicted denoised image for previews (#2915)
Some schedulers report not only the noisy latents at the current
timestep, but also their estimate so far of what the de-noised latents
will be.

It makes for a more legible preview than the noisy latents do.

I think this is a huge improvement, but there are a few considerations:
- Need to not spook @JPPhoto by changing how previews look.
- Some schedulers (most notably **DPM Solver++**) don't provide this
data, and it falls back to the current behavior there. That's not
terrible, but seeing such a big difference in how _previews_ look from
one scheduler to the next might mislead people into thinking there's a
bigger difference in their overall effectiveness than there really is.

My fear of configuration-option-overwhelm leaves me inclined to _not_
add a configuration option for this, but we could.
2023-03-26 00:29:00 -04:00
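In `diffusers` terms, the behavior described above amounts to preferring the scheduler step output's `pred_original_sample` when it exists; a minimal sketch:

```
import torch

def preview_latents(step_output) -> torch.Tensor:
    # Some schedulers expose their estimate of the fully de-noised
    # latents as `pred_original_sample`; prefer it for previews, and
    # fall back to the noisy `prev_sample` when a scheduler (e.g.
    # DPM Solver++) does not provide it.
    pred = getattr(step_output, 'pred_original_sample', None)
    return pred if pred is not None else step_output.prev_sample
```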
9348161600 add basic autocomplete functionality to node cli
- Commands, invocations and their parameters will now autocomplete
  using introspection.
- Two types of parameter *arguments* will also autocomplete:
  - --sampler_name  will autocomplete the scheduler name
  - --model will autocomplete the model name
- There don't seem to be commands for reading/writing image files yet, so
  path autocompletion is not implemented
2023-03-26 00:24:27 -04:00
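A minimal sketch of the wiring with the standard-library `readline` module (the command list and completer logic are illustrative, not the CLI's actual introspection code):

```
import readline

COMMANDS = ['--sampler_name', '--model', 'txt2img', 'img2img']  # illustrative

def complete(text: str, state: int):
    matches = [c for c in COMMANDS if c.startswith(text)]
    return matches[state] if state < len(matches) else None

def set_autocompleter() -> None:
    readline.set_completer(complete)
    readline.parse_and_bind('tab: complete')
```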
dac3c158a5 Merge branch 'main' into feat/preview_predicted_x0
- resolve conflicts with generate.py invocation
- remove unused symbols that pyflakes complains about
- add **untested** code for passing intermediate latent image to the
  step callback in the format expected.
2023-03-25 16:07:18 -04:00
17d8bbf330 ask for escalated privileges in push workflows 2023-03-25 15:22:25 -04:00
9344687a56 installer: fix indentation in invoke.sh template (tabs -> spaces) 2023-03-25 13:57:09 -04:00
cf534d735c duplicate of PR #3016, but based on main 2023-03-25 13:57:09 -04:00
501924bc60 do not reexport PipelineIntermediateState 2023-03-25 13:57:09 -04:00
d117251747 make step_callback work again in generate() call
This PR fixes #2951 and restores the step_callback argument in the
refactored generate() method. Note that this issue states that
"something is still wrong because steps and step are zero." However,
I think this is confusion over the call signature of the callback, which
since the diffusers merge has been `callback(state:PipelineIntermediateState)`

This is the test script that I used to determine that `step` is being passed
correctly:

```

from pathlib import Path
from invokeai.backend import ModelManager, PipelineIntermediateState
from invokeai.backend.globals import global_config_dir
from invokeai.backend.generator import Txt2Img

def my_callback(state:PipelineIntermediateState, total_steps:int):
    print(f'callback(step={state.step}/{total_steps})')

def main():
    manager = ModelManager(Path(global_config_dir()) / "models.yaml")
    model = manager.get_model('stable-diffusion-1.5')
    print ('=== TXT2IMG TEST ===')
    steps=30
    output = next(Txt2Img(model).generate(prompt='banana sushi',
                                          iterations=None,
                                          steps=steps,
                                          step_callback=lambda x: my_callback(x,steps)
                                          )
                  )
    print(f'image={output.image}, seed={output.seed}, steps={output.params.steps}')

if __name__=='__main__':
    main()
```
2023-03-25 13:57:09 -04:00
6ea61a8486 fix issue with embeddings being loaded twice (#3029)
This bug was causing a bunch of annoying warnings about not overwriting
previously loaded tokens.

- as noted by JPPhoto
2023-03-26 04:45:20 +13:00
e4d903af20 Merge branch 'main' into bugfix/load-embeddings-once 2023-03-26 04:19:43 +13:00
2d9797da35 (fix)[docs] Fixed snippet/code formatting (#2918)
It was pasted as plain text, now it's a code fence.
2023-03-25 10:49:13 -04:00
07ea806553 Merge branch 'main' into patch-1 2023-03-25 10:48:25 -04:00
5ac0316c62 fix issue with embeddings being loaded twice
- as noted by JPPhoto
2023-03-25 10:45:03 -04:00
9536ba22af Convert custom VAEs during legacy checkpoint loading (#3010)
- When a legacy checkpoint model is loaded via --convert_ckpt and its
models.yaml stanza refers to a custom VAE path (using the 'vae:' key),
the custom VAE will be converted and used within the diffusers model.
Otherwise the VAE contained within the legacy model will be used.
    
- Note that the checkpoint import functions in the CLI or Web UIs
continue to default to the standard stabilityai/sd-vae-ft-mse VAE. This
can be fixed after the fact by editing VAE key using either the CLI or
Web UI.
   
- Fixes issue #2917
2023-03-25 00:37:12 -04:00
5503749085 Merge branch 'main' into feat/use-custom-vaes 2023-03-25 17:09:38 +13:00
9bfe2fa371 add github API token to mkdocs workflow (#3023)
The mkdocs-workflow has been failing over the past week due to
permission denied errors. I *think* this is the result of not passing
the GitHub API token to the workflow, and this is a speculative fix for
the issue.
2023-03-24 17:59:53 -04:00
d8ce6e4426 Merge branch 'bugfix/mkdocs-workflow' of github.com:invoke-ai/InvokeAI into bugfix/mkdocs-workflow 2023-03-24 17:58:16 -04:00
43d2d6d98c add blessedcoolant as backup to mauwii codeowner 2023-03-24 17:58:03 -04:00
64c233efd4 Merge branch 'main' into bugfix/mkdocs-workflow 2023-03-24 17:47:14 -04:00
2245a4e117 doc(readme): fix incorrect install command (#3024)
Hi, I am trying to install InvokeAI on my Linux machine; the command in
README.md cannot install the correct dependencies.
2023-03-24 17:46:58 -04:00
9ceec40b76 Merge branch 'main' into feat/use-custom-vaes 2023-03-24 17:45:02 -04:00
0f13b90059 doc(readme): fix incorrect install command 2023-03-24 23:21:51 +08:00
d91fc16ae4 add github API token to mkdocs workflow 2023-03-24 09:17:30 -04:00
bc01a96f9d re-implement model scanning when loading legacy checkpoint files (#3012)
- This PR turns on pickle scanning before a legacy checkpoint file is
loaded from disk within the checkpoint_to_diffusers module.

- Also miscellaneous diagnostic message cleanup.

- See also #3011 for a similar patch to the 2.3 branch.
2023-03-24 08:57:07 -04:00
85b2822f5e Merge branch 'main' into security/scan-ckpt-files-main 2023-03-24 08:39:59 -04:00
c33d8694bb build: do not run python tests on ui build (#2987)
`invokeai/frontend/web/dist/**` should not be triggering the full test
suite.
2023-03-25 00:54:40 +13:00
685bd027f0 Merge branch 'main' into build/no-test-on-ui-build 2023-03-25 00:51:26 +13:00
f592d620d5 ui: translations update from weblate (#3021)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-24 19:25:17 +11:00
Tom
2b127b73ac translationBot(ui): update translation (French)
Currently translated at 82.7% (417 of 504 strings)

Co-authored-by: Tom <tom.fouthier@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
8855902cfe translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (504 of 504 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (501 of 501 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
9d8ddc6a08 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
4ca5189e73 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (504 of 504 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (501 of 501 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (500 of 500 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
873597cb84 Allow loading all types of dreambooth models - Fix issue #2932 (#2933)
Allows loading models with EMA using `model_ema.diffusion_model.xxxx` or
`model_ema.xxxx` weights.

Fixes #2932
2023-03-23 23:40:04 -04:00
44d742f232 Merge branch 'main' into security/scan-ckpt-files-main 2023-03-23 23:33:49 -04:00
6e7dbf99f3 Merge branch 'main' into bugfix/dreambooth_ema 2023-03-23 23:24:15 -04:00
1ba1076888 Tidy up Tests and Provide Documentation (#2869)
Bit of basic housekeeping and documentation to explain to people how to
get a local development environment running (including the tests).
2023-03-23 23:23:20 -04:00
cafa108f69 Merge branch 'main' into tests 2023-03-23 23:22:27 -04:00
deeff36e16 Merge branch 'main' into security/scan-ckpt-files-main 2023-03-23 23:20:52 -04:00
d770b14358 [deps] upgrade compel for better .swap defaults and a bugfix (#3014) 2023-03-23 19:01:12 -04:00
20414ba4ad Merge branch 'main' into deps_upgrade_compel 2023-03-23 18:38:46 -04:00
92721a1d45 do not reexport PipelineIntermediateState 2023-03-24 09:32:47 +11:00
f329fddab9 make step_callback work again in generate() call
This PR fixes #2951 and restores the step_callback argument in the
refactored generate() method. Note that this issue states that
"something is still wrong because steps and step are zero." However,
I think this is confusion over the call signature of the callback, which
since the diffusers merge has been `callback(state:PipelineIntermediateState)`.

This is the test script that I used to determine that `step` is being passed
correctly:

```

from pathlib import Path
from invokeai.backend import ModelManager, PipelineIntermediateState
from invokeai.backend.globals import global_config_dir
from invokeai.backend.generator import Txt2Img

def my_callback(state:PipelineIntermediateState, total_steps:int):
    print(f'callback(step={state.step}/{total_steps})')

def main():
    manager = ModelManager(Path(global_config_dir()) / "models.yaml")
    model = manager.get_model('stable-diffusion-1.5')
    print ('=== TXT2IMG TEST ===')
    steps=30
    output = next(Txt2Img(model).generate(prompt='banana sushi',
                                          iterations=None,
                                          steps=steps,
                                          step_callback=lambda x: my_callback(x,steps)
                                          )
                  )
    print(f'image={output.image}, seed={output.seed}, steps={output.params.steps}')

if __name__=='__main__':
    main()
```
2023-03-24 09:32:47 +11:00
f2efde27f6 load embeddings after a ckpt legacy model is converted to diffusers (#3013)
This PR corrects a bug in which embeddings were not being applied when a
non-diffusers model was loaded.

- Fixes #2954
- Also improves diagnostic reporting during embedding loading.
2023-03-23 18:10:19 -04:00
02c58f22be upgrade compel for better .swap defaults and a bugfix 2023-03-23 22:34:54 +01:00
f751dcd245 load embeddings after a ckpt legacy model is converted to diffusers
- Fixes #2954
- Also improves diagnostic reporting during embedding loading.
2023-03-23 15:21:58 -04:00
a97107bd90 handle VAEs that do not have a "state_dict" key 2023-03-23 15:11:29 -04:00
b2ce45a417 re-implement model scanning when loading legacy checkpoint files
- This PR turns on pickle scanning before a legacy checkpoint file
  is loaded from disk within the checkpoint_to_diffusers module.

- Also miscellaneous diagnostic message cleanup.
2023-03-23 15:03:30 -04:00
4e0b5d85ba convert custom VAEs into diffusers
- When a legacy checkpoint model is loaded via --convert_ckpt and its
  models.yaml stanza refers to a custom VAE path (using the 'vae:'
  key), the custom VAE will be converted and used within the diffusers
  model. Otherwise the VAE contained within the legacy model will be
  used.

- Note that the heuristic_import() method, which imports arbitrary
  legacy files on disk and URLs, will continue to default to the
  standard stabilityai/sd-vae-ft-mse VAE. This can be fixed after
  the fact by editing the models.yaml stanza using the Web or CLI
  UIs.

- Fixes issue #2917
2023-03-23 13:14:19 -04:00
a958ae5e29 Merge branch 'main' into feat/use-custom-vaes 2023-03-23 10:32:56 -04:00
4d50fbf8dc Merge branch 'main' into build/no-test-on-ui-build 2023-03-23 01:08:24 +13:00
485f6e5954 Export more for header (#2996)
* export more items needed for dynamic header
* remove build mode that is no longer needed
2023-03-23 01:07:16 +13:00
1f6ce838ba Merge branch 'main' into export-more-for-header 2023-03-22 07:49:15 -04:00
0dc5773849 [nodes] Update fastapi packages to latest (except FastAPI, which has an annotation bug in the newest version) (#3004) 2023-03-22 19:12:45 +13:00
bc347f749c [nodes] Update fastapi packages to latest (except FastAPI, which has an annotation bug in the newest version) 2023-03-21 19:45:17 -07:00
1b215059e7 Merge branch 'main' into export-more-for-header 2023-03-21 16:29:53 -04:00
db079a2733 remove unneeded build:package code 2023-03-21 10:29:27 -04:00
26f71d3536 change back 2023-03-21 10:28:29 -04:00
eb7ae2588c unused var 2023-03-21 10:21:58 -04:00
278c14ba2e try jsx.element 2023-03-21 10:18:38 -04:00
74e83dda54 update type 2023-03-21 10:10:48 -04:00
28c1fca477 Merge branch 'main' into build/no-test-on-ui-build 2023-03-20 02:21:40 +13:00
1f0324102a chore(ui): build 2023-03-19 23:16:29 +11:00
a782ad092d feat(ui): localise iaialertdialog defaults 2023-03-19 23:16:29 +11:00
eae4eb419a fix(ui): popovers trigger on click (accessibility) 2023-03-19 23:16:29 +11:00
fb7f38f46e fix(ui): make alertdialogs centered 2023-03-19 23:16:29 +11:00
93d0cae455 fix(ui): fix alertdialogs closing immediately 2023-03-19 23:16:29 +11:00
35f6b5d562 fix(ui): make invoketabs not lazy 2023-03-19 23:16:29 +11:00
2aefa06ef1 fix(ui): Clean up manual add forms 2023-03-19 23:16:29 +11:00
5906888477 feat(ui): add current image loading fallback 2023-03-19 23:16:29 +11:00
f22c7d0da6 feat(ui): add more w/h options 2023-03-19 23:16:29 +11:00
93b38707b2 feat(ui): tidy up model manager styling
fixes #2970
2023-03-19 23:16:29 +11:00
6ecf53078f fix(ui): Misalignment of model search entries 2023-03-19 23:16:29 +11:00
9c93b7cb59 build: do not run python tests on ui build
`invokeai/frontend/web/dist/**` should not be triggering the full test suite.
2023-03-19 23:01:30 +11:00
7789e8319c Fix some text and a link (#2910)
- Fix link to `070_INSTALL_XFORMERS.md`.
- Fix some spelling.
2023-03-19 05:55:18 +13:00
7d7a28beb3 Merge branch 'main' into main-text-fixup-PR 2023-03-18 09:54:41 -07:00
27a113d872 nodes: api fixes (#2959)
- 86932469e76f1315ee18bfa2fc52b588241dace1 add image_to_dataURL util
- 0c2611059711b45bb6142d30b1d1343ac24268f3 make fast latents method
static
- this method doesn't really need `self` and should be able to be called
without instantiating `Generator`
- 2360bfb6558ea511e9c9576f3d4b5535870d84b4 fix schema gen for
GraphExecutionState
- `GraphExecutionState` uses `default_factory` in its fields; the result
is that the OpenAPI schema marks those fields as optional, which propagates
to the generated API client, which means we need a lot of unnecessary
type guards to use this data type. The [simple
fix](https://github.com/pydantic/pydantic/discussions/4577) is to add
config to explicitly say all class properties are required. It looks like
this will be resolved in a future pydantic release.
- 3cd7319cfdb0f07c6bb12d62d7d02efe1ab12675 fix step callback and fast
latent generation on nodes. have this working in UI. depends on the
small change in #2957
2023-03-16 20:24:28 +11:00
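The linked pydantic workaround amounts to marking every property as required via schema customization; a minimal sketch (the field shown is illustrative):

```
from pydantic import BaseModel, Field

class GraphExecutionState(BaseModel):
    # Fields using default_factory are otherwise marked optional in the
    # generated OpenAPI schema.
    results: dict = Field(default_factory=dict)

    class Config:
        @staticmethod
        def schema_extra(schema: dict, model) -> None:
            # Explicitly flag all class properties as required.
            schema['required'] = list(schema.get('properties', {}).keys())
```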
67f8f222d9 fix(nodes): fix step_callback + fast latents generation
this depends on the small change in #2957
2023-03-16 20:03:08 +11:00
5347c12fed fix(nodes): fix schema gen for GraphExecutionState 2023-03-16 20:03:08 +11:00
b194180f76 feat(backend): make fast latents method static 2023-03-16 20:03:08 +11:00
fb30b7d17a feat(backend): add image_to_dataURL util 2023-03-16 20:03:08 +11:00
c341dcaa3d update compel to fix black screens and use new downweighting algorithm (#2961)
Update `compel` to 1.0.0.

This fixes #2832.

It also changes the way downweighting is applied. In particular,
downweighting should now be much better and more controllable.

From the [compel
changelog](https://github.com/damian0815/compel#changelog):

> Downweighting now works by applying an attention mask to remove the
downweighted tokens, rather than literally removing them from the
sequence. This behaviour is the default, but the old behaviour can be
re-enabled by passing `downweight_mode=DownweightMode.REMOVE` on init of
the `Compel` instance.
>
> Formerly, downweighting a token worked by both multiplying the
weighting of the token's embedding, and doing an inverse-weighted blend
with a copy of the token sequence that had the downweighted tokens
removed. The intuition is that as weight approaches zero, the tokens
being downweighted should be actually removed from the sequence.
However, removing the tokens resulted in the positioning of all
downstream tokens becoming messed up. The blend ended up blending a lot
more than just the tokens in question.
> 
> As of v1.0.0, taking advice from @keturn and @bonlime
(https://github.com/damian0815/compel/issues/7) the procedure is by
default different. Downweighting still involves a blend but what is
blended is a version of the token sequence with the downweighted tokens
masked out, rather than removed. This correctly preserves positioning
embeddings of the other tokens.
2023-03-16 17:49:47 +13:00
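Per the quoted changelog, the old behavior remains available as a constructor option; a sketch (assumes `pipe` is an already-loaded pipeline exposing a tokenizer and text encoder):

```
from compel import Compel, DownweightMode

# Mask-based downweighting is the default as of compel 1.0.0; pass
# REMOVE to re-enable the old token-removal behaviour.
compel = Compel(
    tokenizer=pipe.tokenizer,          # pipe: a loaded pipeline (assumed)
    text_encoder=pipe.text_encoder,
    downweight_mode=DownweightMode.REMOVE,
)
conditioning = compel.build_conditioning_tensor('a (dog)0.5 in a field')
```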
b695a2574b bump compel version 2023-03-16 01:55:39 +01:00
aa68a326c8 update compel 2023-03-15 23:05:55 +01:00
c2922d5991 add settingsmodal 2023-03-15 16:12:51 -04:00
85888030c3 more things needed for header 2023-03-15 14:38:22 -04:00
7cf59c1e60 Merge branch 'main' into main-text-fixup-PR 2023-03-16 04:43:22 +13:00
9738b0ff69 [nodes] Add Edge data type (#2958)
Adds an `Edge` data type, replacing the current tuple used for edges.
2023-03-15 18:41:56 +11:00
3021c78390 [nodes] Add Edge data type 2023-03-14 23:09:30 -07:00
6eeaf8d9fb Allow for dynamic header (#2955)
* Update root component to allow optional children that will render as
dynamic header of UI
* Export additional components (logo & themeChanger) for use in said
dynamic header (more to come here)
2023-03-15 07:41:24 +13:00
fa9afec0c2 fix npm deps 2023-03-14 14:15:03 -04:00
d6862bf8c1 fix npm deps 2023-03-14 14:14:16 -04:00
de01c38bbe fresh build 2023-03-14 14:11:42 -04:00
7e811908e0 remove 2023-03-14 14:09:16 -04:00
5f59f24f92 cleanup 2023-03-14 14:08:42 -04:00
e414fcf3fb bump version 2023-03-14 13:26:49 -04:00
079ad8f35a fix props 2023-03-14 13:22:57 -04:00
a4d7e0c78e export other components 2023-03-14 12:37:28 -04:00
e9c2f173c5 fix(inpaint): Seam painting being broken (#2952)
After #2942, seed needs to be passed down from inpaint to seam_paint.
Not doing so breaks inpainting and outpainting. This PR fixes it.
2023-03-15 00:38:26 +13:00
44f489d581 Merge branch 'main' into fix-seampaint 2023-03-14 06:19:25 -05:00
cb48bbd806 Removed file-extension-based arbitrary code execution attack vector (#2946)
# The Problem
Pickle files (.pkl, .ckpt, etc.) are extremely unsafe, as they can be
trivially crafted to execute arbitrary code when parsed using
`torch.load`.
Right now the conventional wisdom among ML researchers and users is to
simply `not run untrusted pickle files ever` and instead only use
Safetensor files, which cannot be injected with arbitrary code. This is
very good advice.

Unfortunately, **I have discovered a vulnerability inside of InvokeAI
that allows an attacker to disguise a pickle file as a safetensor and
have the payload execute within InvokeAI.**

# How It Works
Within `model_manager.py` and `convert_ckpt_to_diffusers.py` there are
if-statements that decide which `load` method to use based on the file
extension of the model file. The logic (written in a slightly more
readable format than it exists in the codebase) is as follows:
```
if Path(file).suffix == '.safetensors':
    safetensor_load(file)
else:
    unsafe_pickle_load(file)
```

A malicious actor would only need to create an infected .ckpt file, and
then rename the extension to something that does not pass the `==
'.safetensors'` check, but still appears to a user to be a safetensors
file.
For example, this might be something like `.Safetensors`,
`.SAFETENSORS`, `SafeTensors`, etc.

InvokeAI will happily import the file in the Model Manager and execute
the payload.

# Proof of Concept
1. Create a malicious pickle file.
(https://gist.github.com/CodeZombie/27baa20710d976f45fb93928cbcfe368)
2. Rename the `.ckpt` extension to some variation of `.Safetensors`,
ensuring there is a capital letter anywhere in the extension (e.g.
`malicious_pickle.SAFETENSORS`)
3. Import the 'model' like you would normally with any other safetensors
file with the Model Manager.
4. Upon trying to select the model in the web ui, it will be loaded (or
attempt to be converted to a Diffuser) with `torch.load` and the payload
will execute.


![image](https://user-images.githubusercontent.com/466103/224835490-4cf97ff3-41b3-4a31-85df-922cc99042d2.png)


# The Fix
This pull request changes the logic InvokeAI uses to decide which model
loader to use so that the safe behavior is the default. Instead of
loading as a pickle if the extension is not exactly `.safetensors`, it
will now **always** load as a safetensors file unless the extension is
**exactly** `.ckpt`.

# Notes:
I think support for pickle files should be totally dropped ASAP as a
matter of security, but I understand that there are reasons this would
be difficult.

In the meantime, I think `RestrictedUnpickler` or something similar
should be implemented as a replacement for `torch.load`, as this
significantly reduces the number of Python methods that an attacker has
to work with when crafting malicious payloads
inside a pickle file. 
Automatic1111 already uses this with some success.
(https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/safe.py)
2023-03-15 00:09:17 +13:00
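The reversed decision logic described under "The Fix" reduces to something like this (a sketch, not the exact code in `model_manager.py`):

```
from pathlib import Path

import safetensors.torch
import torch

def load_model_weights(path: str):
    # Safe by default: only the exact '.ckpt' suffix falls back to the
    # unsafe pickle loader; every other extension is treated as safetensors.
    if Path(path).suffix == '.ckpt':
        return torch.load(path, map_location='cpu')
    return safetensors.torch.load_file(path)
```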
0a761d7c43 fix(inpaint): Seam painting being broken 2023-03-15 00:00:08 +13:00
a0f47aa72e Merge branch 'main' into main 2023-03-14 11:41:29 +01:00
f9abc6fc85 fix --png_compression command line argument (#2950)
- The value of png_compression was always 6, despite the value provided
to the --png_compression argument. This fixes the bug.
- It also fixes an inconsistency between the maximum range of
png_compression and the help text.

- Closes #2945
2023-03-14 18:20:17 +13:00
d840c597b5 fix --png_compression command line argument
- The value of png_compression was always 6, despite the value provided to the
  --png_compression argument. This fixes the bug.
- It also fixes an inconsistency between the maximum range of png_compression
  and the help text.

- Closes #2945
2023-03-14 00:24:05 -04:00
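For reference, PNG compression in Pillow is the `compress_level` save parameter, valid from 0 to 9; a sketch of passing the CLI value through (the function and argument names are illustrative):

```
from PIL import Image

def save_png(image: Image.Image, path: str, png_compression: int) -> None:
    # compress_level ranges from 0 (no compression) to 9 (maximum);
    # Pillow's default is 6, which is what the broken argument handling
    # always produced.
    image.save(path, format='PNG', compress_level=png_compression)
```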
3ca654d256 speculative fix for alternative vaes 2023-03-13 23:27:29 -04:00
e0e01f6c50 Reduced Pickle ACE attack surface
Prior to this commit, all models would be loaded with the extremely unsafe `torch.load` method, except those with the exact extension `.safetensors`. Even a change in casing (e.g. `saFetensors`, `Safetensors`, etc.) would cause the file to be loaded with `torch.load` instead of the much safer `safetensors.torch.load_file`.
If a malicious actor renamed an infected `.ckpt` to something like `.SafeTensors` or `.SAFETENSORS` an unsuspecting user would think they are loading a safe .safetensor, but would in fact be parsing an unsafe pickle file, and executing an attacker's payload. This commit fixes this vulnerability by reversing the loading-method decision logic to only use the unsafe `torch.load` when the file extension is exactly `.ckpt`.
2023-03-13 16:16:30 -04:00
d9dab1b6c7 Update BUG_REPORT.yml 2023-03-13 11:17:38 -04:00
3b2ef6e1a8 Update BUG_REPORT.yml 2023-03-13 11:14:53 -04:00
c125a3871a Update BUG_REPORT.yml 2023-03-13 11:14:04 -04:00
0996bd5acf Merge branch 'main' into patch-1 2023-03-14 03:18:58 +13:00
ea77d557da add additional build mode (#2904)
* `yarn build:package` will build the default component
* moved some devDependencies to dependencies that are needed for the
postinstall script
2023-03-14 03:15:51 +13:00
1b01161ea4 Merge branch 'main' into pr/2904 2023-03-14 03:14:35 +13:00
2230cb9562 chore(UI, accessibility): Icons. Header links & radio button (#2935)
# Overview
- Links should be parent of icon
- _Added style to link still so they still line up with sibling
components_
- Radio icon buttons
2023-03-14 03:13:19 +13:00
9e0c7c46a2 Merge branch 'main' into add-a-build-config 2023-03-13 09:58:17 -04:00
be305588d3 merged and rebuilt 2023-03-13 09:55:56 -04:00
9f994df814 Merge branch 'main' into chore/UI_more-accessibility-items 2023-03-14 02:49:47 +13:00
3062580006 Fix bug #2931 (#2942)
#2931 was caused by new code that held onto the PRNG in `get_make_image`
and used it in `make_image` for img2img and inpainting. This
functionality has been moved elsewhere so that we can generate multiple
images again.
2023-03-14 02:48:07 +13:00
596ba754b1 Removed seed from get_make_image. 2023-03-13 08:15:46 -05:00
b980e563b9 Fix bug #2931 2023-03-13 08:11:09 -05:00
7fe2606cb3 [nodes] Fixes calls into image to image and inpaint from nodes (#2940) 2023-03-13 19:05:32 +13:00
0c3b1fe3c4 [nodes] Fixes calls into image to image and inpaint from nodes 2023-03-12 22:12:42 -07:00
c9ee2e351c yarn build 2023-03-12 23:29:29 -05:00
e3aef20f42 chore(UI, accessibility): more items
- radio icon buttons
- links should be parent of icon
styled links to still line up with sibling components
2023-03-12 23:27:47 -05:00
60614badaf [nodes-api] Fix API generation to correctly reference outputs (#2939)
Correctly reference output types in node schemas
2023-03-13 17:02:55 +13:00
288cee9611 Merge remote-tracking branch 'origin/main' into feat/preview_predicted_x0
# Conflicts:
#	invokeai/app/invocations/generate.py
2023-03-12 20:56:02 -07:00
24aca37538 Just set output value in node schemas. Don't use additionalProperties, which would impact the schema. 2023-03-12 20:40:29 -07:00
b853ceea65 [nodes-api] Fix API generation to correctly reference outputs 2023-03-12 20:03:26 -07:00
3ee2798ede [fix] Get the model again if current model is empty 2023-03-12 22:26:11 -04:00
5c5106c14a Add keys when non EMA 2023-03-12 16:22:22 -05:00
c367b21c71 Fix issue #2932 2023-03-12 15:40:33 -05:00
2eef6df66a [ui]: add resizable pinnable drawer component (#2874)
wip

this is based off the branch in #2873
2023-03-12 22:46:48 +13:00
300aa8d86c chore(ui): build 2023-03-12 20:13:58 +11:00
727f1638d7 chore(ui): lint 2023-03-12 20:13:58 +11:00
ee6df5852a fix(ui): fix lightbox 2023-03-12 20:13:38 +11:00
90525b1c43 fix(ui): fix scrollable shadow 2023-03-12 20:13:38 +11:00
bbb95dbc5b fix(ui): add color mode watcher 2023-03-12 20:13:38 +11:00
f4b7f80d59 fix(ui): remove key prop 2023-03-12 20:13:38 +11:00
220f7373c8 feat(ui): Basic IAIOption Component & Fix Select Dropdown 2023-03-12 20:13:38 +11:00
4bb5785f29 fix(ui): Move Form Components to the correct folder 2023-03-12 20:13:38 +11:00
f9a7a7d161 fix(ui): set colorMode to fix native selects 2023-03-12 20:13:38 +11:00
de94c780d9 fix(ui): fix canvas status text bg 2023-03-12 20:13:38 +11:00
0b9230380c fix(ui): default gallery category buttons to icon 2023-03-12 20:13:38 +11:00
209a55b681 fix(ui): canvas rescale when toggle gallery 2023-03-12 20:13:38 +11:00
dc2f69f5d1 fix(ui): process buttons display on canvas beta 2023-03-12 20:13:38 +11:00
ad2f1b7b36 fix(ui): hack for hiding pinned panels 2023-03-12 20:13:38 +11:00
dd2d96a50f fix(ui): Bad styling on form elements 2023-03-12 20:13:38 +11:00
2bff28e305 fix(ui): Remove size limitation off the theme changer button 2023-03-12 20:13:38 +11:00
d68234d879 fix(ui): Gallery placeholder text not being centered 2023-03-12 20:13:38 +11:00
b3babf26a5 fix(ui): Fix current image buttons overflow 2023-03-12 20:13:38 +11:00
ecca0eff31 fix(ui): hotkey accordion spacing 2023-03-12 20:13:38 +11:00
28677f9621 fix(ui): process buttons display on canvas beta layout 2023-03-12 20:13:38 +11:00
caecfadf11 fix(ui): fix shadow 2023-03-12 20:13:38 +11:00
5cf8e3aa53 chore(ui): build 2023-03-12 20:13:38 +11:00
76cf2c61db feat(ui): drawer almost done
TODO:
- hide while pinned
- lightbox interaction with gallery
2023-03-12 20:13:38 +11:00
b4d976f2db fix(ui): fix flash of mini preview image
Restored the code that fixes this after having ripped it out thinking it didn't do anything. Spotted in #2915
2023-03-12 20:13:38 +11:00
777d127c74 feat(ui): wip drawer component and build 2023-03-12 20:13:38 +11:00
0678803803 lang(ui): update show canvas debug info string 2023-03-12 20:13:37 +11:00
d2fbc9f5e3 feat(ui): Add ThemeTypes & Move Grid Line Color 2023-03-12 20:13:37 +11:00
d81088dff7 feat(ui): wip resizable pinnable drawer
fix(ui): remove old scrollbar css

fix(ui): make guidepopover lazy

feat(ui): wip resizable drawer

feat(ui): wip resizable drawer

feat(ui): add scroll-linked shadow

feat(ui): organize files

Align Scrollbar next to content

Move resizable drawer underneath the progress bar

Add InvokeLogo to unpinned & align

Adds Invoke Logo to Unpinned Parameters panel and aligns to make it feel seamless.
2023-03-12 20:13:37 +11:00
1aaad9336f Remove image generation node dependencies on generate.py (#2902)
# Remove node dependencies on generate.py

This is a draft PR in which I am replacing `generate.py` with a cleaner,
more structured interface to the underlying image generation routines.
The basic code pattern to generate an image using the new API is this:

```
from invokeai.backend import ModelManager, Txt2Img, Img2Img

manager = ModelManager('/data/lstein/invokeai-main/configs/models.yaml')
model = manager.get_model('stable-diffusion-1.5')
txt2img = Txt2Img(model)
outputs = txt2img.generate(prompt='banana sushi', steps=12, scheduler='k_euler_a', iterations=5)

# generate() returns an iterator
for next_output in outputs:
    print(next_output.image, next_output.seed)

outputs = Img2Img(model).generate(prompt='strawberry sushi', init_img='./banana_sushi.png')
output = next(outputs)
output.image.save('strawberries.png')
```

### model management

The `ModelManager` handles model selection and initialization. Its
`get_model()` method will return a `dict` with the following keys:
`model`, `model_name`,`hash`, `width`, and `height`, where `model` is
the actual StableDiffusionGeneratorPipeline. If `get_model()` is called
without a model name, it will return whatever is defined as the default
in `models.yaml`, or the first entry if no default is designated.
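
A short usage sketch of the keys listed above (the config path and model
name reuse the earlier example in this PR):

```
from invokeai.backend import ModelManager

manager = ModelManager('/data/lstein/invokeai-main/configs/models.yaml')
info = manager.get_model('stable-diffusion-1.5')
pipeline = info['model']   # the StableDiffusionGeneratorPipeline itself
print(info['model_name'], info['hash'], info['width'], info['height'])
```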

### InvokeAIGenerator

The abstract base class `InvokeAIGenerator` is subclassed into
`Txt2Img`, `Img2Img`, `Inpaint` and `Embiggen`. The constructor for
these classes takes the model dict returned by
`model_manager.get_model()` and optionally an
`InvokeAIGeneratorBasicParams` object, which encapsulates all the
parameters in common among `Txt2Img`, `Img2Img` etc. If you don't
provide the basic params, a reasonable set of defaults will be chosen.
Any of these parameters can be overridden at `generate()` time.

These classes are defined in `invokeai.backend.generator`, but they are
also exported by `invokeai.backend` as shown in the example below.

```
from invokeai.backend import InvokeAIGeneratorBasicParams, Img2Img
params = InvokeAIGeneratorBasicParams(
    perlin = 0.15,
    steps = 30,
    scheduler = 'k_lms',
)
img2img = Img2Img(model, params)
outputs = img2img.generate(scheduler='k_heun')
```

Note that we were able to override the basic params in the call to
`generate()`.

The `generate()` method returns an iterator over a series of
`InvokeAIGeneratorOutput` objects. These objects contain the PIL image,
the seed, the model name and hash, and attributes for all the parameters
used to generate the object (you can also get these as a dict). The
`iterations` argument controls how many objects will be returned,
defaulting to 1. Pass `None` to get an infinite iterator.

Given the proposed use of `compel` to generate a templated series of
prompts, I thought the API would benefit from a style that lets you loop
over the output results indefinitely. I did consider returning a single
`InvokeAIGeneratorOutput` object in the event that `iterations=1`, but I
think it's dangerous for a method to return different types of result
under different circumstances.

Changing the model is as easy as this:
```
model = manager.get_model('inkspot-2.0')
txt2img = Txt2Img(model)
```

### Node and legacy support

With respect to `Nodes`, I have written `model_manager_initializer` and
`restoration_services` modules that return `model_manager` and
`restoration` services respectively. The latter is used by the face
reconstruction and upscaling nodes. There is no longer any reference to
`Generate` in the `app` tree.

I have confirmed that `txt2img` and `img2img` work in the nodes client.
I have not tested `embiggen` or `inpaint` yet. pytests are passing, with
some warnings that I don't think are related to what I did.

The legacy WebUI and CLI are still working off `Generate` (which has not
yet been removed from the source tree) and are fully functional.

I've finished all the tasks on my TODO list:

- [x] Update the pytests, which are failing due to dangling references
to `generate`
- [x] Rewrite the `reconstruct.py` and `upscale.py` nodes to call
directly into the postprocessing modules rather than going through
`Generate`
2023-03-11 21:48:23 -05:00
1f3c024d9d Merge branch 'main' into refactor/nodes-on-generator 2023-03-11 21:31:42 -05:00
74a480f94e add back static web directory 2023-03-11 21:23:41 -05:00
c6e8d3269c build: exclude ui from test-invoke-pip (#2892)
Prior to the folder restructure, the `paths` for `test-invoke-pip` did
not include the UI's path `invokeai/frontend/`:

```yaml
    paths:
      - 'pyproject.toml'
      - 'ldm/**'
      - 'invokeai/backend/**'
      - 'invokeai/configs/**'
      - 'invokeai/frontend/dist/**'
```

After the restructure, more code was moved into the `invokeai/frontend/`
folder, and `paths` was updated:

```yaml
    paths:
      - 'pyproject.toml'
      - 'invokeai/**'
      - 'invokeai/backend/**'
      - 'invokeai/configs/**'
      - 'invokeai/frontend/web/dist/**'
```

Now, the second path includes the UI. The UI now needs to be excluded,
and must be excluded prior to `invokeai/frontend/web/dist/**` being
included.

On `test-invoke-pip-skip`, we need to do a bit of logic juggling to
invert the folder selection. First, include the web folder, then exclude
everything around it, and finally exclude the `dist/` folder.
2023-03-12 14:18:51 +13:00
dcb5a3a740 Merge branch 'main' into build/exclude-ui-actions 2023-03-12 14:18:03 +13:00
c0ef546b02 Merge branch 'refactor/nodes-on-generator' of github.com:invoke-ai/InvokeAI into refactor/nodes-on-generator 2023-03-11 18:31:47 -05:00
7a78a83651 raise operations-per-run for issue workflow to 500 (#2925)
- default value is 30
- limit per hour is 1000

This should help get the count of open issues down.
2023-03-12 00:10:55 +01:00
10cbf99310 add TODO comments 2023-03-11 18:08:45 -05:00
b63aefcda9 Merge branch 'main' into refactor/nodes-on-generator 2023-03-11 16:22:29 -06:00
6a77634b34 remove unneeded generate initializer routines 2023-03-11 17:14:03 -05:00
8ca91b1774 add restoration services to nodes 2023-03-11 17:00:00 -05:00
1c9d9e79d5 raise operations-per-run to 500
- default value is 30
- limit per hour is 1000
2023-03-11 22:32:13 +01:00
3aa1ee1218 restore NSFW checker 2023-03-11 16:16:44 -05:00
06aa5a8120 Merge branch 'main' into feat/preview_predicted_x0 2023-03-11 14:50:30 -06:00
580f9ecded simplify passing of config options 2023-03-11 11:32:57 -05:00
270032670a build: exclude ui from test-invoke-pip 2023-03-12 03:27:49 +11:00
4f056cdb55 ui: translations update from weblate (#2922)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-12 03:18:23 +11:00
c14241436b move ModelManager initialization into its own module and restore embedding support 2023-03-11 10:56:53 -05:00
50b56d6088 translationBot(ui): update translation (Portuguese)
Currently translated at 99.2% (496 of 500 strings)

Co-authored-by: ssantos <ssantos@web.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt/
Translation: InvokeAI/Web UI
2023-03-11 16:56:06 +01:00
8ec2ae7954 translationBot(ui): update translation (Russian)
Currently translated at 86.3% (416 of 482 strings)

Co-authored-by: Sergey Krashevich <svk@svk.su>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-03-11 16:56:05 +01:00
40d82b29cf translationBot(ui): update translation (Chinese (Traditional))
Currently translated at 7.0% (34 of 480 strings)

Co-authored-by: wa.code <adt107118@gm.ntcu.edu.tw>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hant/
Translation: InvokeAI/Web UI
2023-03-11 16:56:05 +01:00
0b953d98f5 translationBot(ui): update translation (Portuguese (Brazil))
Currently translated at 98.1% (471 of 480 strings)

Co-authored-by: Felipe Nogueira <contato.fnog@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translation: InvokeAI/Web UI
2023-03-11 16:56:04 +01:00
8833d76709 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (500 of 500 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (500 of 500 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (482 of 482 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (480 of 480 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-03-11 16:56:04 +01:00
027b316fd2 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (500 of 500 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (482 of 482 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (480 of 480 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-03-11 16:56:03 +01:00
d612f11c11 initialize InvokeAIGenerator object with model, not manager 2023-03-11 09:06:46 -05:00
250b0ab182 add seamless tiling support 2023-03-11 08:33:23 -05:00
675dd12b6c add attention map images to output object 2023-03-11 08:07:01 -05:00
7e76eea059 add embiggen, remove complicated constructor 2023-03-11 07:50:39 -05:00
f45483e519 Merge branch 'main' into feat/preview_predicted_x0 2023-03-10 22:25:26 -06:00
65047bf976 Chore/accessibility add all aria labels to translation (#2919)
# Overview
Setting up the `aria-label` props as translations
2023-03-11 16:16:02 +13:00
d586a82a53 yarn build 2023-03-10 20:54:59 -06:00
28709961e9 add import 2023-03-10 20:53:42 -06:00
e9f237f39d chore(accessibility): add all aria-labels 2023-03-10 20:49:16 -06:00
4156bfd810 Fixed snippet/code formatting
It was pasted as plain text, now it's a code fence.
2023-03-11 02:08:59 +01:00
fe75b95464 Merge branch 'refactor/nodes-on-generator' of github.com:invoke-ai/InvokeAI into refactor/nodes-on-generator 2023-03-10 19:36:40 -05:00
95954188b2 remove factory pattern
Factory pattern is now removed. Typical usage of the InvokeAIGenerator is now:

```
from invokeai.backend.generator import (
    InvokeAIGeneratorBasicParams,
    Txt2Img,
    Img2Img,
    Inpaint,
)
params = InvokeAIGeneratorBasicParams(
    model_name = 'stable-diffusion-1.5',
    steps = 30,
    scheduler = 'k_lms',
    cfg_scale = 8.0,
    height = 640,
    width = 640,
)
print('=== TXT2IMG TEST ===')
txt2img = Txt2Img(manager, params)
outputs = txt2img.generate(prompt='banana sushi', iterations=2)

for output in outputs:
    print(f'image={output.image}, seed={output.seed}, model={output.params.model_name}, hash={output.model_hash}, steps={output.params.steps}')
```

The `params` argument is optional, so if you wish to accept default
parameters and selectively override them, just do this:

```
    outputs = Txt2Img(manager).generate(prompt='banana sushi',
                                        steps=50,
                                        scheduler='k_heun',
                                        model_name='stable-diffusion-2.1',
                                        )
```
2023-03-10 19:33:04 -05:00
63f59201f8 Merge branch 'main' into feat/preview_predicted_x0 2023-03-10 12:34:07 -06:00
370e8281b3 Merge branch 'main' into refactor/nodes-on-generator 2023-03-10 12:34:00 -06:00
685df33584 fix bug that caused black images when converting ckpts to diffusers in RAM (#2914)
Cause of the problem was inadvertent activation of the safety checker.

When conversion occurs on disk, the safety checker is disabled during loading.
However, when converting in RAM, the safety checker was not removed, resulting
in it activating even when the user specified --no-nsfw_checker.

This PR fixes the problem by detecting when the caller has requested the InvokeAI
StableDiffusionGeneratorPipeline class to be returned and setting safety checker
to None. Do not do this with diffusers models destined for disk because then they
will be incompatible with the merge script!!

Closes #2836
2023-03-10 18:11:32 +00:00
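A sketch of the described fix (the function and flag names are illustrative paraphrases of the PR text):

```
def finish_conversion(pipeline, return_generator_pipeline: bool):
    # When the caller asked for the in-RAM StableDiffusionGeneratorPipeline,
    # strip the safety checker so --no-nsfw_checker is honored. Leave
    # pipelines destined for disk untouched, or they will be incompatible
    # with the merge script.
    if return_generator_pipeline:
        pipeline.safety_checker = None
    return pipeline
```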
4332c9c7a6 add generic jsx type definition for default export 2023-03-10 12:14:49 -05:00
4a00f1cc74 Merge branch 'main' into feat/preview_predicted_x0 2023-03-10 09:20:01 -06:00
7ff77504cb Make sure command also works with Oh-my-zsh (#2905)
Many people use oh-my-zsh for their command line: https://ohmyz.sh/ 

Adding `""` should work both on ohmyzsh and native bash
2023-03-10 19:05:22 +13:00
0d1854e44a Merge branch 'main' into patch-1 2023-03-10 19:04:42 +13:00
fe6858f2d9 feat: use the predicted denoised image for previews
Some schedulers report not only the noisy latents at the current timestep,
but also their estimate so far of what the de-noised latents will be.

It makes for a more legible preview than the noisy latents do.
2023-03-09 20:28:06 -08:00
12c7db3a16 backend: more post-ldm-removal cleanup (#2911) 2023-03-09 23:11:10 -05:00
3ecdec02bf Merge branch 'main' into cleanup/more_ldm_cleanup 2023-03-09 22:49:09 -05:00
d6c24d59b0 Revert "Remove label from stale issues on comment event" (#2912)
Reverts invoke-ai/InvokeAI#2903

@mauwii has a point here. It looks like triggering on a comment results
in an action for each of the stale issues, even ones that have been
previously dealt with. I'd like to revert this back to the original
behavior of running once every time the cron job executes.

What's the original motivation for having more frequent labeling of the
issues?
2023-03-09 22:28:49 -05:00
bb3d1bb6cb Revert "Remove label from stale issues on comment event" 2023-03-09 22:24:43 -05:00
14c8738a71 fix dangling reference to _model_to_cpu and missing variable model_description 2023-03-09 21:41:45 -05:00
1a829bb998 pipeline: remove code for legacy model 2023-03-09 18:15:12 -08:00
9d339e94f2 backend..conditioning: remove code for legacy model 2023-03-09 18:15:12 -08:00
ad7b1fa6fb model_manager: model to/from CPU methods are implemented on the Pipeline 2023-03-09 18:15:12 -08:00
42355b70c2 fix(Pipeline.debug_latents): fix import for moved utility function 2023-03-09 18:15:12 -08:00
faa2558e2f chore: add new argument to overridden method to match new signature upstream 2023-03-09 18:15:12 -08:00
081397737b typo: docstring spelling fixes
looks like they've already been corrected in the upstream copy
2023-03-09 18:15:12 -08:00
55d36eaf4f fix: image_resized_to_grid_as_tensor: reconnect dropped multiple_of argument 2023-03-09 18:15:12 -08:00
26cd1728ac Fix some text and a link 2023-03-09 20:03:11 -06:00
a0065da4a4 Remove label from stale issues on comment event (#2903)
I found it to be a chore to remove labels manually in order to
"un-stale" issues. This is contrary to the bot message which says
commenting should remove "stale" status. On the current `cron` schedule,
there may be a delay of up to 24 hours before the label is removed. This
PR will trigger the workflow on issue comments in addition to the
schedule.

Also adds a condition to not run this job on PRs (Github treats issues
and PRs equivalently in this respect), and rewords the messages for
clarity.
2023-03-09 20:17:54 -05:00
c11e823ff3 remove unused _wrap_results 2023-03-09 16:30:06 -05:00
197e50a298 unstage some changes 2023-03-09 15:33:18 -05:00
507e12520e Make sure command also works with Oh-my-zsh
Many people use oh-my-zsh for their command line: https://ohmyz.sh/ 

Adding `""` should work both on ohmyzsh and native bash
2023-03-09 19:21:57 +01:00
2cc04de397 dont care about linting build 2023-03-09 11:46:20 -05:00
f4150a7829 add new build command for building package 2023-03-09 11:10:18 -05:00
5418bd3b24 (ci) unlabel stale issues when commented 2023-03-09 09:22:29 -05:00
76d5fa4694 Bypass the 77 token limit (#2896)
This ought to be working, but I don't know how it's supposed to behave so
I haven't been able to verify. At least, I know the numbers are getting
pushed all the way to the SD unet; I just have been unable to verify if
what's coming out is what is expected. Please test.

You'll need to `pip install -e .` after switching to the branch, because
it's currently pulling from a non-main `compel` branch. Once it's
verified as working as intended I'll promote the compel branch to pypi.
2023-03-09 23:52:28 +13:00
386dda8233 Merge branch 'main' into feat_longer_prompts 2023-03-09 22:37:10 +13:00
8076c1697c Merge branch 'feat_longer_prompts' of github.com:damian0815/InvokeAI into feat_longer_prompts 2023-03-09 10:28:13 +01:00
65fc9a6e0e bump compel version to address issues 2023-03-09 10:28:07 +01:00
cde0b6ae8d Merge branch 'main' into refactor/nodes-on-generator 2023-03-09 01:52:45 -05:00
b12760b976 [ui] chore(Accessibility): various additions (#2888)
# Overview
Adding a few accessibility items (I think 9 total items). Mostly
`aria-label`, but also a `<VisuallyHidden>` to the left-side nav tab
icons. Tried to match existing copy that was being used. Feedback
welcome
2023-03-09 19:14:42 +13:00
b679a6ba37 model manager defaults to consistent values of device and precision 2023-03-09 01:09:54 -05:00
2f5f08c35d yarn build 2023-03-08 23:51:46 -06:00
8f48c14ed4 Merge branch 'main' into chore/accessability_various-additions 2023-03-08 23:49:08 -06:00
5d37fa6e36 node-based txt2img working without generate 2023-03-09 00:18:29 -05:00
f51581bd1b Merge branch 'main' into feat_longer_prompts 2023-03-08 23:08:49 -06:00
50ca6b6ffc add back pytorch-lightning dependency (#2899)
- Closes #2893
2023-03-09 17:22:17 +13:00
63b9ec4c5e Merge branch 'main' into bugfix/restore-pytorch-lightning 2023-03-09 16:57:14 +13:00
b115bc4247 [cli] Execute commands in-order with nodes (#2901)
Executes piped commands in the order they were provided (instead of
executing CLI commands immediately).
2023-03-09 16:55:23 +13:00
dadc30f795 Merge branch 'main' into bugfix/restore-pytorch-lightning 2023-03-09 16:46:08 +13:00
111d8391e2 Merge branch 'main' into kyle0654/cli_execution_order 2023-03-09 16:37:15 +13:00
1157b454b2 decouple default component from react root (#2897)
Decouple default component from react root
2023-03-09 16:34:47 +13:00
8a6473610b [cli] Execute commands in-order with nodes 2023-03-08 19:25:03 -08:00
ea7911be89 Merge branch 'main' into chore/accessability_various-additions 2023-03-08 17:15:28 -06:00
9ee648e0c3 Merge branch 'main' into feat_longer_prompts 2023-03-09 00:13:01 +01:00
543682fd3b Merge branch 'feat_longer_prompts' of github.com:damian0815/InvokeAI into feat_longer_prompts 2023-03-08 23:24:41 +01:00
88cb63e4a1 pin new compel version 2023-03-08 23:24:30 +01:00
76212d1cca Merge branch 'main' into bugfix/restore-pytorch-lightning 2023-03-08 17:05:11 -05:00
a8df9e5122 Merge branch 'main' into decouple-component-from-root 2023-03-08 16:58:34 -05:00
2db180d909 Make img2img strength 1 behave the same as txt2img (#2895)
* Fix img2img and inpainting code so a strength of 1 behaves the same as txt2img.

* Make generated images identical to their txt2img counterparts when strength is 1.
2023-03-08 22:50:16 +01:00
b716fe8f06 add pytorch-lightning dependency back in
- Closes #2893
2023-03-08 16:48:39 -05:00
69e2dc0404 update for compel changes 2023-03-08 20:45:01 +01:00
a38b75572f don't log excess tokens as truncated 2023-03-08 20:00:18 +01:00
e18de761b6 Merge branch 'main' into decouple-component-from-root 2023-03-08 13:13:43 -05:00
816ea39827 decouple default component from react root 2023-03-08 12:48:49 -05:00
1cd4cdd0e5 Merge branch 'main' into tests 2023-03-08 12:19:04 -05:00
768e969c90 cleanup and fix kwarg 2023-03-08 18:00:54 +01:00
57db66634d longer prompts wip 2023-03-08 14:25:48 +01:00
87789c1de8 add InvokeAIGenerator and InvokeAIGeneratorFactory classes 2023-03-07 23:52:53 -05:00
c3c1511ec6 add accessibility to localization
only set fallback english values
implement on ModelSelect and ProgressBar
2023-03-07 21:30:51 -06:00
6b41127421 Merge branch 'main' into chore/accessability_various-additions 2023-03-07 17:44:55 -06:00
d232a439f7 build: update actions (#2883)
- Updates triggers for UI workflow `lint-frontend`
- Corrects UI paths for `test-invoke-pip` and `test-invoke-pip-skip`
2023-03-08 11:51:32 +13:00
c04f21e83e Merge branch 'main' into build/update-actions 2023-03-08 11:50:50 +13:00
8762069b37 ui: update readme & scripts (#2884)
- Update ui readme
- Update scripts to use `yarn` instead of `npm` and use `concurrently`
to watch/build the theme token types along with SPA
2023-03-08 00:20:46 +13:00
d9ebdd2684 build(ui): use concurrently to run dev 2023-03-07 21:58:46 +11:00
3e4c10ef9c docs(ui): update readme 2023-03-07 21:58:42 +11:00
17eb2ca5a2 build: update actions
- Updates triggers for UI workflow `lint-frontend`
- Corrects UI paths for `test-invoke-pip` and `test-invoke-pip-skip`
2023-03-07 21:25:43 +11:00
63725d7534 add .pytest.ini to .gitignore 2023-03-07 09:08:27 +00:00
00f30ea457 Merge branch 'main' into tests 2023-03-07 09:03:18 +00:00
1b2a3c7144 ui: translations update from weblate (#2882)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-07 21:51:09 +13:00
01a1777370 translationBot(ui): update translation (Chinese (Traditional))
Currently translated at 4.1% (20 of 480 strings)

translationBot(ui): update translation (Portuguese (Brazil))

Currently translated at 97.2% (467 of 480 strings)

translationBot(ui): update translation (Dutch)

Currently translated at 97.2% (467 of 480 strings)

translationBot(ui): update translation (French)

Currently translated at 83.1% (399 of 480 strings)

Co-authored-by: psychedelicious <mabianfu@icloud.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hant/
Translation: InvokeAI/Web UI
2023-03-07 09:09:42 +01:00
32945c7f45 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-03-07 09:09:42 +01:00
b0b8846430 Add aria-label to icon variant of IAISimpleMenu
Uses whatever the iconTooltip copy is
2023-03-06 22:43:41 -06:00
fdb146a43a add aria-label to UnifiedCanvasLayerSelect
matching tooltip copy
2023-03-06 22:42:39 -06:00
42c1f1fc9d add VisuallyHidden tab text to InvokeTabs 2023-03-06 22:42:04 -06:00
89a8ef86b5 add an aria-label to ProgressBar 2023-03-06 22:41:45 -06:00
f0fb767f57 add aria-label to ModelSelect 2023-03-06 22:39:08 -06:00
4bd93464bf [cli] Update CLI to define commands as Pydantic objects (#2861)
Updates the CLI to define CLI commands as Pydantic objects, similar to
how Invocations (nodes) work. For example:

```py
class HelpCommand(BaseCommand):
    """Shows help"""
    type: Literal['help'] = 'help'

    def run(self, context: CliContext) -> None:
        context.parser.print_help()
```
2023-03-07 13:25:06 +13:00
3d3de82ca9 Merge branch 'main' into kyle/cli_commands 2023-03-07 12:56:30 +13:00
c3ff9e6be8 Fixed startup issues with the web UI. (#2876) 2023-03-06 18:29:28 -05:00
21f79e5919 add missing package (#2878)
Added missing dependency declaration `@chakra-ui/styled-system`
2023-03-07 10:34:50 +13:00
0342e25c74 add missing package 2023-03-06 16:13:17 -05:00
91f982fb0b feat(ui): migrate theming to chakra ui (#2873)
*Looks like #2814 was reverted accidentally. Instead of trying to
revert the revert, this PR can simply be re-accepted and will fix the
UI.*

- Migrate UI from SCSS to Chakra's CSS-in-JS system 
  - better dx
  - more capable theming 
  - full RTL language support (we now have Arabic and Hebrew)
  - general cleanup of the whole UI's styling
- Tidy npm packages and update scripts; this necessitates an update to the
GitHub actions

To test this PR in dev mode, you will need to do a `yarn install` as a
lot has changed.

thanks to @blessedcoolant for helping out on this, it was a big effort.
2023-03-07 08:43:26 +13:00
b9ab43a4bb build(ui): clean build chakra migration 2023-03-07 08:39:44 +13:00
6e0e48bf8a Merge branch 'main' into pr/2873 2023-03-07 08:36:09 +13:00
dcc8313dbf support both epsilon and v-prediction v2 inference (#2870)
There are actually two Stable Diffusion v2 legacy checkpoint
configurations:

1. "epsilon" prediction type for Stable Diffusion v2 Base 
2. "v-prediction" type for Stable Diffusion v2-768

This commit adds the configuration file needed for epsilon prediction
type models as well as the UI that prompts the user to select the
appropriate configuration file when the code can't do so automatically.
2023-03-06 14:29:35 -05:00
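For context, a minimal sketch of the selection logic this implies. The file names follow stock Stable Diffusion 2.x conventions and are assumptions, not necessarily this repo's exact paths:

```py
# Hypothetical helper: map a SD v2 checkpoint's prediction type to the
# legacy .yaml config it needs. Paths follow upstream SD 2.x naming and
# are assumptions about the local layout.
def v2_config_for(prediction_type: str) -> str:
    configs = {
        "epsilon": "configs/stable-diffusion/v2-inference.yaml",        # v2 Base
        "v_prediction": "configs/stable-diffusion/v2-inference-v.yaml",  # v2-768
    }
    if prediction_type not in configs:
        raise ValueError(f"unknown prediction type: {prediction_type!r}")
    return configs[prediction_type]
```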
bf5831faa3 Merge branch 'main' into kyle/cli_commands 2023-03-06 08:52:38 -05:00
5eff035f55 Merge branch 'main' into tests 2023-03-06 08:37:07 -05:00
7c60068388 Merge branch 'main' into bugfix/fix-convert-sd-to-diffusers-error 2023-03-06 08:20:29 -05:00
d843fb078a feat(ui): remove references to dark mode 2023-03-06 20:40:59 +11:00
41b2e4633f chore(ui): remove unused scss files 2023-03-06 20:06:23 +11:00
57144ac0cf feat(ui): migrate theming to chakra ui 2023-03-06 20:03:39 +11:00
a305b6adbf fix call signature of import_diffuser_model() (#2871)
This fixes the borked #2867 PR.
2023-03-05 23:58:08 -05:00
94daaa4abf fix call signature of import_diffuser_model() 2023-03-05 23:37:59 -05:00
901337186d add .git-blame-ignore-revs file to maintain provenance (#2855)
To avoid `git blame` recording all the autoformatting changes under the
name 'lstein', this PR adds a `.git-blame-ignore-revs` that will ignore
any provenance changes that occurred during the recent refactor merge.
2023-03-05 22:58:34 -05:00
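(Note that plain `git` only honors this file when told to, e.g. via `git config blame.ignoreRevsFile .git-blame-ignore-revs` or `git blame --ignore-revs-file`; GitHub's own blame view picks it up automatically.)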
7e2f64f60b Merge branch 'main' into refactor/maintain-blame-provenance 2023-03-05 22:57:50 -05:00
126cba2324 Bugfix/reenable ckpt conversion to ram (#2868)
This fixes the crash that was occurring when trying to load a legacy
checkpoint file.

Note that this PR includes commits from #2867 to avoid diffusers files
from re-downloading at startup time.
2023-03-05 22:57:19 -05:00
2f9dcd7906 support both epsilon and v-prediction v2 inference
There are actually two Stable Diffusion v2 legacy checkpoint
configurations:

1) "epsilon" prediction type for Stable Diffusion v2 Base
2) "v-prediction" type for Stable Diffusion v2-768

This commit adds the configuration file needed for epsilon prediction
type models as well as the UI that prompts the user to select the
appropriate configuration file when the code can't do so
automatically.
2023-03-05 22:51:40 -05:00
e537b5d8e1 Revert "Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram"
This reverts commit e0e70c9222, reversing
changes made to 0b184913b9.
2023-03-06 14:29:39 +13:00
e0e70c9222 Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram 2023-03-06 14:27:30 +13:00
1b21e5df54 Migrate to new HF diffusers cache location (#2867)
# Migrate to new HF diffusers cache location

This PR adjusts the model cache directory to use the layout of
`diffusers 0.14`. This will automatically migrate any diffusers models
located in `INVOKEAI_ROOT/models/diffusers` to
`INVOKEAI_ROOT/models/hub`, and cache new downloaded diffusers files
into the same location.

As before, if environment variable `HF_HOME` is set, then both
HuggingFace `from_pretrained()` calls as well as all InvokeAI methods
will use `HF_HOME/hub` as their cache.
2023-03-06 13:05:13 +13:00
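As a rough sketch of the lookup rule described above (the function and argument names are ours, not the repo's API):

```py
import os
from pathlib import Path

# If HF_HOME is set it wins for both HuggingFace and InvokeAI caches;
# otherwise fall back to INVOKEAI_ROOT/models/hub (the new layout).
def resolve_hf_cache(invokeai_root: str) -> Path:
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        return Path(hf_home) / "hub"
    return Path(invokeai_root) / "models" / "hub"
```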
4b76af37ae Merge branch 'main' into enhance/use-new-diffusers-path 2023-03-06 12:42:30 +13:00
486c445afb fix typos and replace frontend REAMDE content 2023-03-05 21:05:09 +00:00
4547c48013 add docs for local development including tests 2023-03-05 19:59:06 +00:00
8f21201c91 [ui]: migrate all styling to chakra-ui theme (#2814)
- Migrate UI from SCSS to Chakra's CSS-in-JS system 
  - better dx
  - more capable theming 
  - full RTL language support (we now have Arabic and Hebrew)
  - general cleanup of the whole UI's styling
- Tidy npm packages and update scripts; this necessitates an update to the
GitHub actions

To test this PR in dev mode, you will need to do a `yarn install` as a
lot has changed.

thanks to @blessedcoolant for helping out on this, it was a big effort.
2023-03-06 07:23:59 +13:00
532b74a206 Merge branch 'main' into feat/ui/chakra-theme 2023-03-06 06:54:33 +13:00
0b184913b9 Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram 2023-03-05 12:37:43 -05:00
97719e40e4 fix Dockerfile after restructure (#2863)
this PR should close #2862
2023-03-05 18:33:00 +01:00
5ad3062b66 Merge branch 'main' into fix/broken-dockerfile-2862 2023-03-05 12:32:25 -05:00
92d012a92d Merge branch 'main' into enhance/use-new-diffusers-path 2023-03-05 12:30:24 -05:00
fc187f263e deal with non-directories in diffusers/ 2023-03-05 12:29:52 -05:00
fd94f85abe remove legacy ldm code (#2866)
This removes modules that appear to be no longer used by any code under
the `invokeai` package now that the `ckpt_generator` is gone.

There are a few small changes in here to code that was referencing code
in a conditional branch for ckpt, or to swap out a function for a
🤗 one, but only as much as was strictly necessary to get things to
run. We'll follow up with more clean-up of lingering `if isinstance` or
`except AttributeError` branches later.
2023-03-05 12:10:38 -05:00
4e9e1b660d respect HF_HOME setting when migrating 2023-03-05 12:08:29 -05:00
d01adedff5 give user chance to back out before migration 2023-03-05 12:04:31 -05:00
c247f430f7 combine pytest.ini with pyproject.toml 2023-03-05 17:00:08 +00:00
3d6a358042 remove .coveragerc from source control 2023-03-05 16:59:12 +00:00
4d1dcd11de Merge branch 'main' into dev/rm_legacy_deps 2023-03-05 11:50:53 -05:00
b33655b0d6 restore automatic conversion of legacy files to diffusers pipelines 2023-03-05 11:45:25 -05:00
81dee04dc9 during migration do not overwrite symlinks 2023-03-05 08:40:12 -05:00
114018e3e6 Unified spelling of Hugging Face 2023-03-05 07:30:35 -06:00
ef8cf83b28 migrate to new HF diffusers cache location 2023-03-05 08:20:24 -05:00
633857b0e3 build(ui): Migrate UI to Chakra 2023-03-05 21:50:50 +13:00
214574d11f Merge branch 'feat/ui/chakra-theme' of https://github.com/psychedelicious/InvokeAI into pr/2814 2023-03-05 21:48:08 +13:00
8584665ade feat(ui): migrate theming to chakra 2023-03-05 19:41:57 +11:00
516c56d0c5 feat(ui): Model Manager Cleanup 2023-03-05 21:41:55 +13:00
5891b43ce2 Merge branch 'feat/ui/chakra-theme' of https://github.com/psychedelicious/InvokeAI into pr/2814 2023-03-05 21:41:12 +13:00
62e75f95aa feat(ui): migrate theming to chakra 2023-03-05 19:39:51 +11:00
b07621e27e chore(ui): build frontend 2023-03-05 19:30:28 +11:00
545d8968fd feat(ui): migrated theming to chakra
build(ui): fix husky path

build(ui): fix hmr issue, remove emotion cache

build(ui): clean up package.json

build(ui): update gh action and npm scripts

feat(ui): wip port lightbox to chakra theme

feat(ui): wip use chakra theme tokens

feat(ui): Add status text to main loading spinner

feat(ui): wip chakra theme tweaking

feat(ui): simplify IAISimpleMenu button

feat(ui): wip chakra theming

feat(ui): Theme Management

feat(ui): Add Ocean Blue Theme

feat(ui): wip lightbox

fix(ui): fix lightbox mouse

feat(ui): set default theme variants

feat(ui): model manager chakra theme

chore(ui): lint

feat(ui): remove last scss

feat(ui): fix switch theme

feat(ui): Theme Cleanup

feat(ui): Stylize Search Models Found List

feat(ui): hide scrollbars

feat(ui): fix floating button position

feat(ui): Scrollbar Styling

fix broken scripts

This PR fixes the following scripts:

1) Scripts that can be executed within the repo's scripts directory.
   Note that these are for development testing and are not intended
   to be exposed to the user.

   configure_invokeai.py - configuration
   dream.py              - the legacy CLI
   images2prompt.py      - legacy "dream prompt" retriever
   invoke-new.py         - new nodes-based CLI
   invoke.py             - the legacy CLI under another name
   make_models_markdown_table.py - a utility used during the release/doc process
   pypi_helper.py        - another utility used during the release process
   sd-metadata.py        - retrieve JSON-formatted metadata from a PNG file

2) Scripts that are installed by pip install. They get placed into the venv's
   PATH and are intended to be the official entry points:

   invokeai-node-cli      - new nodes-based CLI
   invokeai-node-web      - new nodes-based web server
   invokeai               - legacy CLI
   invokeai-configure     - install time configuration script
   invokeai-merge         - model merging script
   invokeai-ti            - textual inversion script
   invokeai-model-install - model installer
   invokeai-update        - update script
   invokeai-metadata"     - retrieve JSON-formatted metadata from PNG files

protect invocations against black autoformatting

deps: upgrade to diffusers 0.14, safetensors 0.3, transformers 4.26, accelerate 0.16
2023-03-05 19:30:02 +11:00
7cf2f58513 deps: upgrade to diffusers 0.14, safetensors 0.3, transformers 4.26, accelerate 0.16 (#2865)
Things to check for in this version:

- `diffusers` cache location is now more consistent with other
huggingface-hub using code (i.e. `transformers`) as of
https://github.com/huggingface/diffusers/pull/2005. I think ultimately
this should make @damian0815 (and other folks with multiple
diffusers-using projects) happier, but it's worth taking a look to make
sure the way @lstein set things up to respect `HF_HOME` is still
functioning as intended.
- I've gone ahead and updated `transformers` to the current version
(4.26), but I have a vague memory that we were holding it back at some
point? Need to look that up and see if that's the case and why.
2023-03-05 01:53:01 -05:00
618e3e5e91 deps: add explicitly dependency to rich
was previously pulled in as a secondary dependency of something else.
2023-03-04 18:37:39 -08:00
c703b60986 remove legacy ldm code 2023-03-04 18:16:59 -08:00
7c0ce5c282 fix push expression
- make use of `github.ref_type`
2023-03-05 02:58:13 +01:00
82fe34b1f7 update build-container workflow
- switch versioning from semver to pep440
- remove unnecessary paths
- include `.dockerignore` in paths
2023-03-05 02:13:57 +01:00
65f9aae81d deps: upgrade to diffusers 0.14, safetensors 0.3, transformers 4.26, accelerate 0.16 2023-03-04 16:32:16 -08:00
2d9fac23e7 fix Dockerfile
- update broken paths after restructure
2023-03-04 23:51:07 +01:00
ebc4b52f41 [cli] Update CLI to define commands as Pydantic objects 2023-03-04 14:46:02 -08:00
c4e6d4b348 fix broken scripts (#2857)
This PR fixes the following scripts:

1) Scripts that can be executed within the repo's scripts directory.
   Note that these are for development testing and are not intended
   to be exposed to the user.
```
   configure_invokeai.py - configuration
   dream.py              - the legacy CLI
   images2prompt.py      - legacy "dream prompt" retriever
   invoke-new.py         - new nodes-based CLI
   invoke.py             - the legacy CLI under another name
   make_models_markdown_table.py - a utility used during the release/doc process
   pypi_helper.py        - another utility used during the release process
   sd-metadata.py        - retrieve JSON-formatted metadata from a PNG file
```

2) Scripts that are installed by pip install. They get placed into the
venv's
   PATH and are intended to be the official entry points:
```
   invokeai-node-cli      - new nodes-based CLI
   invokeai-node-web      - new nodes-based web server
   invokeai               - legacy CLI
   invokeai-configure     - install time configuration script
   invokeai-merge         - model merging script
   invokeai-ti            - textual inversion script
   invokeai-model-install - model installer
   invokeai-update        - update script
   invokeai-metadata"     - retrieve JSON-formatted metadata from PNG files
```
2023-03-04 16:57:45 -05:00
eab32bce6c Merge branch 'main' into bugfix/fix-scripts 2023-03-04 13:19:02 -06:00
55d2094094 Protect invocations against black autoformatting (#2854)
This places `# fmt: off` and `# fmt: on` blocks around sections of the
invocation code that shouldn't be reformatted by Black.
2023-03-04 12:26:43 -05:00
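For illustration, a minimal (hypothetical) model showing how the guards work. Black skips everything between the markers, so hand-aligned fields survive reformatting:

```py
from typing import Literal
from pydantic import BaseModel, Field

class UpscaleArgs(BaseModel):
    # fmt: off
    type:     Literal["upscale"] = "upscale"
    strength: float              = Field(default=0.75, description="The strength")
    level:    Literal[2, 4]      = Field(default=2,    description="The upscale level")
    # fmt: on
```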
a0d50a2b23 Merge branch 'main' into formatting/undo-black-formatting-of-invocations 2023-03-04 12:05:11 -05:00
9efeb1b2ec Merge branch 'main' into bugfix/fix-scripts 2023-03-03 20:36:29 -06:00
86e2cb0428 Fix for txt2img2img.py (#2856)
Fix error when using txt2img 
ModuleNotFoundError: No module named 'invokeai.backend.models'
and
ModuleNotFoundError: No module named
'invokeai.backend.generator.diffusers_pipeline'
2023-03-04 15:24:39 +13:00
53c2c0f91d Update txt2img2img.py 2023-03-04 12:58:33 +11:00
bdc7b8b75a fix broken scripts
This PR fixes the following scripts:

1) Scripts that can be executed within the repo's scripts directory.
   Note that these are for development testing and are not intended
   to be exposed to the user.

   configure_invokeai.py - configuration
   dream.py              - the legacy CLI
   images2prompt.py      - legacy "dream prompt" retriever
   invoke-new.py         - new nodes-based CLI
   invoke.py             - the legacy CLI under another name
   make_models_markdown_table.py - a utility used during the release/doc process
   pypi_helper.py        - another utility used during the release process
   sd-metadata.py        - retrieve JSON-formatted metadata from a PNG file

2) Scripts that are installed by pip install. They get placed into the venv's
   PATH and are intended to be the official entry points:

   invokeai-node-cli      - new nodes-based CLI
   invokeai-node-web      - new nodes-based web server
   invokeai               - legacy CLI
   invokeai-configure     - install time configuration script
   invokeai-merge         - model merging script
   invokeai-ti            - textual inversion script
   invokeai-model-install - model installer
   invokeai-update        - update script
   invokeai-metadata"     - retrieve JSON-formatted metadata from PNG files
2023-03-03 20:19:37 -05:00
1bfdd54810 Update txt2img2img.py 2023-03-04 11:23:21 +11:00
b4bf6c12a5 add .git-blame-ignore-revs file to maintain provenance
To avoid `git blame` recording all the autoformatting changes
under the name 'lstein', this PR adds a `.git-blame-ignore-revs`
that will ignore any provenance changes that occurred during the
recent refactor merge.
2023-03-03 16:23:48 -05:00
ab35c241c2 protect invocations against black autoformatting 2023-03-03 15:25:08 -05:00
b3dccfaeb6 Final phase of source tree restructure (#2833)
# All python code has been moved under `invokeai`. All vestiges of `ldm`
and `ldm.invoke` are now gone.

***You will need to run `pip install -e .` before the code will work
again!***

Everything seems to be functional, but extensive testing is advised.

A guide to where the files have gone is forthcoming.
2023-03-03 15:05:41 -05:00
6477e31c1e revert and disable auto-formatting of invocations 2023-03-03 14:59:17 -05:00
dd4a1c998b merge localisation files that were added in main 2023-03-03 14:47:01 -05:00
70203e6e5a CODEOWNERS coarse draft 2023-03-03 14:36:43 -05:00
d778a7c5ca ui: translations update from weblate (#2850)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-03 20:07:34 +11:00
f8e59636cd translationBot(ui): update translation (Korean)
Currently translated at 15.5% (73 of 469 strings)

translationBot(ui): added translation (Korean)

Co-authored-by: LemonDouble <lemondouble2@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ko/
Translation: InvokeAI/Web UI
2023-03-03 10:06:13 +01:00
2d1a0b0a05 translationBot(ui): update translation (Portuguese)
Currently translated at 12.7% (60 of 469 strings)

Co-authored-by: Airton Silva <airtonsilva2009@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt/
Translation: InvokeAI/Web UI
2023-03-03 10:06:13 +01:00
c9b2234d90 translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (469 of 469 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-03-03 10:06:12 +01:00
82b224539b translationBot(ui): update translation (Hebrew)
Currently translated at 100.0% (469 of 469 strings)

translationBot(ui): added translation (Hebrew)

Co-authored-by: Netz <pixi@pixelabs.net>
Co-authored-by: Netzer R <pixi@pixelabs.net>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/he/
Translation: InvokeAI/Web UI
2023-03-03 10:06:12 +01:00
0b15ffb95b translationBot(ui): update translation (Portuguese)
Currently translated at 12.5% (59 of 469 strings)

translationBot(ui): added translation (Portuguese)

Co-authored-by: Gabriel Mackievicz Telles <telles.gabriel@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt/
Translation: InvokeAI/Web UI
2023-03-03 10:06:11 +01:00
ce9aaab22f translationBot(ui): added translation (Chinese (Traditional))
Co-authored-by: psychedelicious <mabianfu@icloud.com>
2023-03-03 10:06:11 +01:00
3f53f1186d move diagnostic message to stderr; was confusing CI 2023-03-03 01:54:48 -05:00
c0aff396d2 fix ldm->invokeai pathnames in workflows 2023-03-03 01:44:55 -05:00
955900507f fix issue with invokeai.version 2023-03-03 01:34:38 -05:00
d606abc544 fix weblint call 2023-03-03 01:09:56 -05:00
44400d2a66 fix incorrect import of merge code 2023-03-03 01:07:31 -05:00
60a98cacef all vestiges of ldm.invoke removed 2023-03-03 01:02:00 -05:00
6a990565ff all files migrated; tweaks needed 2023-03-03 00:02:15 -05:00
3f0b0f3250 almost all of backend migrated; restoration next 2023-03-02 13:28:17 -05:00
1a7371ea17 remove unused embeddings code 2023-03-01 21:09:22 -05:00
850d1ee984 move models and modules under invokeai/backend/ldm 2023-03-01 18:24:18 -05:00
2c7928b163 remove pycaches from repo 2023-02-28 23:25:35 -05:00
87d1ec6a4c Merge branch 'main' into refactor/move-models-and-generators 2023-02-28 17:34:05 -05:00
53c62537f7 fix newlines causing negative prompt to be parsed incorrectly (#2837)
closes #2753
2023-02-28 17:29:46 -05:00
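A minimal sketch of one way to make the parser newline-tolerant (the actual fix in #2837 may differ): collapse newlines to spaces before parsing, so a bracketed negative prompt split across lines is still recognized.

```py
def normalize_prompt(prompt: str) -> str:
    # Hypothetical pre-processing step: join lines with single spaces.
    return " ".join(line.strip() for line in prompt.splitlines())

print(normalize_prompt("a photo of a cat\n[blurry, low quality]"))
# -> a photo of a cat [blurry, low quality]
```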
418d93fdfd fix newlines causing negative prompt to be parsed incorrectly 2023-02-28 22:37:28 +01:00
f2ce2f1778 fix import of moved model_manager module 2023-02-28 08:38:14 -05:00
5b6c61fc75 move models and generator into backend 2023-02-28 08:32:11 -05:00
1d77581d96 restore behavior of !import_model; fix initial models bug 2023-02-28 00:45:56 -05:00
3b921cf393 add more missing files 2023-02-28 00:37:13 -05:00
d334f7f1f6 add missing files 2023-02-28 00:31:15 -05:00
8c9764476c first phase of source tree restructure
This is the first phase of a big shifting of files and directories
in the source tree.

You will need to run `pip install -e .` before the code will work again!

Here's what's in the current commit:

1) Remove a lot of dead code that dealt with checkpoint and safetensor loading.
2) Entire ckpt_generator hierarchy is now gone!
3) ldm.invoke.generator.*   => invokeai.generator.*
4) ldm.model.*              => invokeai.model.*
5) ldm.invoke.model_manager => invokeai.model.model_manager

6) In addition, a number of frequently-accessed classes can be imported
   from the invokeai.model and invokeai.generator modules:

   from invokeai.generator import ( Generator, PipelineIntermediateState,
                                    StableDiffusionGeneratorPipeline, infill_methods)

   from invokeai.models import ( ModelManager, SDLegacyType,
                                 InvokeAIDiffuserComponent, AttentionMapSaver,
                                 DDIMSampler, KSampler, PLMSSampler,
                                 PostprocessingSettings )
2023-02-27 23:52:46 -05:00
b7d5a3e0b5 [nodes] Add better error handling to processor and CLI (#2828)
* [nodes] Add better error handling to processor and CLI

* [nodes] Use more explicit name for marking node execution error

* [nodes] Update the processor call to error
2023-02-27 10:01:07 -08:00
e0405031a7 add a workflow to close stale issues (#2808)
with values set as discussed in discord
2023-02-26 16:14:42 -05:00
ee24b686b3 Merge branch 'main' into dev/ci/add-close-inactive-issues 2023-02-26 16:14:03 -05:00
835eb14c79 Split requirements / pyproject installation in Dockerfile (#2815)
This should make caching way easier and therefore speed up the image
(re-)creation a lot.

Other small improvements:
- reorder .dockerignore
- rename amd flavor to rocm to align with cuda flavor
- use `user:group` for definitions
- add `--platform=${TARGETPLATFORM}` to base
2023-02-26 13:48:32 -05:00
9aadf7abc1 Merge branch 'main' into dev/ci/add-close-inactive-issues 2023-02-26 13:13:42 -05:00
243f9e8377 Merge branch 'main' into dev/docker/separate-req-inst 2023-02-26 13:13:07 -05:00
6e0c6d9cc9 perf(invoke_ai_web_server): encode intermediate result previews as jpeg (#2817)
For size savings of about 80%, and jpeg encoding is still plenty fast.
2023-02-26 18:47:51 +13:00
a3076cf951 perf(invoke_ai_web_server): encode intermediate result previews as jpeg
For size savings of about 80%, and jpeg encoding is still plenty fast.
2023-02-25 21:23:25 -08:00
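A sketch of the trade-off being made here, assuming PIL-style images (the names and quality setting are illustrative, not the server's actual code):

```py
import io
from PIL import Image

def encode_preview(image: Image.Image, quality: int = 75) -> bytes:
    # JPEG has no alpha channel, so convert first; the lossy encode is
    # what buys the ~80% size reduction over PNG for previews.
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.getvalue()
```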
6696882c71 doc(invoke_ai_web_server): put docstrings inside their functions (#2816)
Documentation strings are the first thing inside the function body.
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
2023-02-26 18:20:10 +13:00
17b039e85d doc(invoke_ai_web_server): put docstrings inside their functions
Documentation strings are the first thing inside the function body.
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
2023-02-25 20:21:47 -08:00
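A two-line reminder of the convention being enforced: the string literal must be the first statement in the body for Python to expose it as `__doc__`.

```py
def greet(name: str) -> str:
    """Return a friendly greeting."""  # first statement => becomes __doc__
    return f"Hello, {name}!"

print(greet.__doc__)  # -> Return a friendly greeting.
```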
81539e6ab4 Merge remote-tracking branch 'upstream/main' into dev/docker/separate-req-inst 2023-02-26 00:55:03 +01:00
92304b9f8a remove pip-tools, still split requirements install
- also use user:group for definitions
- add `--platform=${TARGETPLATFORM}` to base
2023-02-26 00:53:43 +01:00
ec1de5ae8b more detailed volume parameters 2023-02-26 00:51:30 +01:00
49198a61ef enable BuildKit in env.sh 2023-02-26 00:50:13 +01:00
8c5773abc1 add a workflow to close stale issues
with values set as discussed in discord
2023-02-25 13:20:05 +01:00
01f8c37bd3 rename amd flavor to rocm 2023-02-24 06:20:44 +01:00
b7718985d5 update build-container.yml
- add branches 'dev/ci/docker/*' and 'dev/docker/*'
2023-02-24 03:58:22 +01:00
90cda11868 separate installation of requirements and source
this should greatly speed up rebuilding of the image when:
- version did not change
- requirements didn't change
2023-02-24 03:51:18 +01:00
5cb877e096 reorder .dockerignore 2023-02-24 02:53:27 +01:00
1464 changed files with 114988 additions and 90204 deletions

View File

@ -1,6 +0,0 @@
[run]
omit='.env/*'
source='.'
[report]
show_missing = true

View File

@ -4,22 +4,22 @@
!ldm
!pyproject.toml
# Guard against pulling in any models that might exist in the directory tree
**/*.pt*
**/*.ckpt
# ignore frontend but whitelist dist
invokeai/frontend/
!invokeai/frontend/dist/
# ignore frontend/web but whitelist dist
invokeai/frontend/web/
!invokeai/frontend/web/dist/
# ignore invokeai/assets but whitelist invokeai/assets/web
invokeai/assets/
!invokeai/assets/web/
# Guard against pulling in any models that might exist in the directory tree
**/*.pt*
**/*.ckpt
# Byte-compiled / optimized / DLL files
**/__pycache__/
**/*.py[cod]
# Distribution / packaging
*.egg-info/
*.egg
**/*.egg-info/
**/*.egg

View File

@ -1,8 +1,5 @@
root = true
# All files
[*]
max_line_length = 80
charset = utf-8
end_of_line = lf
indent_size = 2
@ -13,18 +10,3 @@ trim_trailing_whitespace = true
# Python
[*.py]
indent_size = 4
max_line_length = 120
# css
[*.css]
indent_size = 4
# flake8
[.flake8]
indent_size = 4
# Markdown MkDocs
[docs/**/*.md]
max_line_length = 80
indent_size = 4
indent_style = unset

.flake8
View File

@ -1,37 +0,0 @@
[flake8]
max-line-length = 120
extend-ignore =
# See https://github.com/PyCQA/pycodestyle/issues/373
E203,
# use Bugbear's B950 instead
E501,
# from black repo https://github.com/psf/black/blob/main/.flake8
E266, W503, B907
extend-select =
# Bugbear line length
B950
extend-exclude =
scripts/orig_scripts/*
ldm/models/*
ldm/modules/*
ldm/data/*
ldm/generate.py
ldm/util.py
ldm/simplet2i.py
per-file-ignores =
# B950 line too long
# W605 invalid escape sequence
# F841 assigned to but never used
# F401 imported but unused
tests/test_prompt_parser.py: B950, W605, F401
tests/test_textual_inversion.py: F841, B950
# B023 Function definition does not bind loop variable
scripts/legacy_api.py: F401, B950, B023, F841
ldm/invoke/__init__.py: F401
# B010 Do not call setattr with a constant attribute value
ldm/invoke/server_legacy.py: B010
# =====================
# flake-quote settings:
# =====================
# Set this to match black style:
inline-quotes = double

.git-blame-ignore-revs Normal file
View File

@ -0,0 +1 @@
b3dccfaeb636599c02effc377cdd8a87d658256c

.github/CODEOWNERS vendored
View File

@ -2,60 +2,33 @@
/.github/workflows/ @lstein @blessedcoolant
# documentation
/docs/ @lstein @blessedcoolant
mkdocs.yml @lstein @ebr
/docs/ @lstein @blessedcoolant @hipsterusername
/mkdocs.yml @lstein @blessedcoolant
# nodes
/invokeai/app/ @Kyle0654 @blessedcoolant
# installation and configuration
/pyproject.toml @lstein @ebr
/docker/ @lstein
/scripts/ @ebr @lstein @blessedcoolant
/installer/ @ebr @lstein
ldm/invoke/config @lstein @ebr
invokeai/assets @lstein @blessedcoolant
invokeai/configs @lstein @ebr @blessedcoolant
/ldm/invoke/_version.py @lstein @blessedcoolant
/pyproject.toml @lstein @blessedcoolant
/docker/ @lstein @blessedcoolant
/scripts/ @ebr @lstein
/installer/ @lstein @ebr
/invokeai/assets @lstein @ebr
/invokeai/configs @lstein
/invokeai/version @lstein @blessedcoolant
# web ui
/invokeai/frontend @blessedcoolant @psychedelicious
/invokeai/backend @blessedcoolant @psychedelicious
/invokeai/frontend @blessedcoolant @psychedelicious @lstein @maryhipp
/invokeai/backend @blessedcoolant @psychedelicious @lstein @maryhipp
# generation and model management
/ldm/*.py @lstein @blessedcoolant
/ldm/generate.py @lstein @gregghelt2
/ldm/invoke/args.py @lstein @blessedcoolant
/ldm/invoke/ckpt* @lstein @blessedcoolant
/ldm/invoke/ckpt_generator @lstein @blessedcoolant
/ldm/invoke/CLI.py @lstein @blessedcoolant
/ldm/invoke/config @lstein @ebr @blessedcoolant
/ldm/invoke/generator @gregghelt2 @damian0815
/ldm/invoke/globals.py @lstein @blessedcoolant
/ldm/invoke/merge_diffusers.py @lstein @blessedcoolant
/ldm/invoke/model_manager.py @lstein @blessedcoolant
/ldm/invoke/txt2mask.py @lstein @blessedcoolant
/ldm/invoke/patchmatch.py @Kyle0654 @lstein
/ldm/invoke/restoration @lstein @blessedcoolant
# generation, model management, postprocessing
/invokeai/backend @damian0815 @lstein @blessedcoolant @jpphoto @gregghelt2 @StAlKeR7779
# attention, textual inversion, model configuration
/ldm/models @damian0815 @gregghelt2 @blessedcoolant
/ldm/modules/textual_inversion_manager.py @lstein @blessedcoolant
/ldm/modules/attention.py @damian0815 @gregghelt2
/ldm/modules/diffusionmodules @damian0815 @gregghelt2
/ldm/modules/distributions @damian0815 @gregghelt2
/ldm/modules/ema.py @damian0815 @gregghelt2
/ldm/modules/embedding_manager.py @lstein
/ldm/modules/encoders @damian0815 @gregghelt2
/ldm/modules/image_degradation @damian0815 @gregghelt2
/ldm/modules/losses @damian0815 @gregghelt2
/ldm/modules/x_transformer.py @damian0815 @gregghelt2
# Nodes
apps/ @Kyle0654 @jpphoto
# legacy REST API
# these are dead code
#/ldm/invoke/pngwriter.py @CapableWeb
#/ldm/invoke/server_legacy.py @CapableWeb
#/scripts/legacy_api.py @CapableWeb
#/tests/legacy_tests.sh @CapableWeb
# front ends
/invokeai/frontend/CLI @lstein
/invokeai/frontend/install @lstein @ebr
/invokeai/frontend/merge @lstein @blessedcoolant
/invokeai/frontend/training @lstein @blessedcoolant
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp

View File

@ -65,6 +65,16 @@ body:
placeholder: 8GB
validations:
required: false
- type: input
id: version-number
attributes:
label: What version did you experience this issue on?
description: |
Please share the version of Invoke AI that you experienced the issue on. If this is not the latest version, please update first to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: X.X.X
validations:
required: true
- type: textarea
id: what-happened

.github/stale.yaml vendored Normal file
View File

@ -0,0 +1,19 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 28
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 14
# Issues with these labels will never be considered stale
exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Please
update the ticket if this is still a problem on the latest release.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
Due to inactivity, this issue has been automatically closed. If this is
still a problem on the latest release, please recreate the issue.

View File

@ -5,17 +5,20 @@ on:
- 'main'
- 'update/ci/docker/*'
- 'update/docker/*'
- 'dev/ci/docker/*'
- 'dev/docker/*'
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
- '.dockerignore'
- 'invokeai/**'
- 'docker/Dockerfile'
tags:
- 'v*.*.*'
workflow_dispatch:
permissions:
contents: write
packages: write
jobs:
docker:
@ -24,11 +27,11 @@ jobs:
fail-fast: false
matrix:
flavor:
- amd
- rocm
- cuda
- cpu
include:
- flavor: amd
- flavor: rocm
pip-extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
- flavor: cuda
pip-extra-index-url: ''
@ -54,9 +57,9 @@ jobs:
tags: |
type=ref,event=branch
type=ref,event=tag
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=pep440,pattern={{version}}
type=pep440,pattern={{major}}.{{minor}}
type=pep440,pattern={{major}}
type=sha,enable=true,prefix=sha-,format=short
flavor: |
latest=${{ matrix.flavor == 'cuda' && github.ref == 'refs/heads/main' }}
@ -92,7 +95,7 @@ jobs:
context: .
file: ${{ env.DOCKERFILE }}
platforms: ${{ env.PLATFORMS }}
push: ${{ github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' }}
push: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: PIP_EXTRA_INDEX_URL=${{ matrix.pip-extra-index-url }}

View File

@ -0,0 +1,27 @@
name: Close inactive issues
on:
schedule:
- cron: "00 6 * * *"
env:
DAYS_BEFORE_ISSUE_STALE: 14
DAYS_BEFORE_ISSUE_CLOSE: 28
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
days-before-issue-stale: ${{ env.DAYS_BEFORE_ISSUE_STALE }}
days-before-issue-close: ${{ env.DAYS_BEFORE_ISSUE_CLOSE }}
stale-issue-label: "Inactive Issue"
stale-issue-message: "There has been no activity in this issue for ${{ env.DAYS_BEFORE_ISSUE_STALE }} days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release."
close-issue-message: "Due to inactivity, this issue was automatically closed. If you are still experiencing the issue, please recreate the issue."
days-before-pr-stale: -1
days-before-pr-close: -1
repo-token: ${{ secrets.GITHUB_TOKEN }}
operations-per-run: 500

View File

@ -3,14 +3,22 @@ name: Lint frontend
on:
pull_request:
paths:
- 'invokeai/frontend/**'
- 'invokeai/frontend/web/**'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
push:
branches:
- 'main'
paths:
- 'invokeai/frontend/**'
- 'invokeai/frontend/web/**'
merge_group:
workflow_dispatch:
defaults:
run:
working-directory: invokeai/frontend
working-directory: invokeai/frontend/web
jobs:
lint-frontend:
@ -23,7 +31,7 @@ jobs:
node-version: '18'
- uses: actions/checkout@v3
- run: 'yarn install --frozen-lockfile'
- run: 'yarn tsc'
- run: 'yarn run madge'
- run: 'yarn run lint --max-warnings=0'
- run: 'yarn run prettier --check'
- run: 'yarn run lint:tsc'
- run: 'yarn run lint:madge'
- run: 'yarn run lint:eslint'
- run: 'yarn run lint:prettier'

View File

@ -2,8 +2,10 @@ name: mkdocs-material
on:
push:
branches:
- 'main'
- 'development'
- 'refs/heads/v2.3'
permissions:
contents: write
jobs:
mkdocs-material:
@ -41,7 +43,7 @@ jobs:
--verbose
- name: deploy to gh-pages
if: ${{ github.ref == 'refs/heads/main' }}
if: ${{ github.ref == 'refs/heads/v2.3' }}
run: |
python -m \
mkdocs gh-deploy \

View File

@ -3,7 +3,7 @@ name: PyPI Release
on:
push:
paths:
- 'ldm/invoke/_version.py'
- 'invokeai/version/invokeai_version.py'
workflow_dispatch:
jobs:

View File

@ -1,12 +1,11 @@
name: Test invoke.py pip
on:
pull_request:
paths-ignore:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
paths:
- '**'
- '!pyproject.toml'
- '!invokeai/**'
- 'invokeai/frontend/web/**'
merge_group:
workflow_dispatch:

View File

@ -5,17 +5,13 @@ on:
- 'main'
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
- 'invokeai/**'
- '!invokeai/frontend/web/**'
pull_request:
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
- 'invokeai/**'
- '!invokeai/frontend/web/**'
types:
- 'ready_for_review'
- 'opened'
@ -84,11 +80,6 @@ jobs:
uses: actions/checkout@v3
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
- name: setup python
@ -109,12 +100,6 @@ jobs:
id: run-pytest
run: pytest
- name: set INVOKEAI_OUTDIR
run: >
python -c
"import os;from ldm.invoke.globals import Globals;OUTDIR=os.path.join(Globals.root,str('outputs'));print(f'INVOKEAI_OUTDIR={OUTDIR}')"
>> ${{ matrix.github-env }}
- name: run invokeai-configure
id: run-preload-models
env:
@ -133,15 +118,21 @@ jobs:
HF_HUB_OFFLINE: 1
HF_DATASETS_OFFLINE: 1
TRANSFORMERS_OFFLINE: 1
INVOKEAI_OUTDIR: ${{ github.workspace }}/results
run: >
invokeai
--no-patchmatch
--no-nsfw_checker
--from_file ${{ env.TEST_PROMPTS }}
--precision=float32
--always_use_cpu
--use_memory_db
--outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
--from_file ${{ env.TEST_PROMPTS }}
- name: Archive results
id: archive-results
env:
INVOKEAI_OUTDIR: ${{ github.workspace }}/results
uses: actions/upload-artifact@v3
with:
name: results

.gitignore vendored
View File

@ -9,6 +9,8 @@ models/ldm/stable-diffusion-v1/model.ckpt
configs/models.user.yaml
config/models.user.yml
invokeai.init
.version
.last_model
# ignore the Anaconda/Miniconda installer used while building Docker image
anaconda.sh
@ -63,6 +65,7 @@ pip-delete-this-directory.txt
htmlcov/
.tox/
.nox/
.coveragerc
.coverage
.coverage.*
.cache
@ -73,6 +76,7 @@ cov.xml
*.py,cover
.hypothesis/
.pytest_cache/
.pytest.ini
cover/
junit/
@ -197,8 +201,10 @@ checkpoints
# If it's a Mac
.DS_Store
invokeai/frontend/web/dist/*
# Let the frontend manage its own gitignore
!invokeai/frontend/*
!invokeai/frontend/web/*
# Scratch folder
.scratch/
@ -213,11 +219,6 @@ gfpgan/
# config file (will be created by installer)
configs/models.yaml
# weights (will be created by installer)
models/ldm/stable-diffusion-v1/*.ckpt
models/clipseg
models/gfpgan
# ignore initfile
.invokeai
@ -232,4 +233,3 @@ installer/install.bat
installer/install.sh
installer/update.bat
installer/update.sh

View File

@ -1,41 +0,0 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/psf/black
rev: 23.1.0
hooks:
- id: black
- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
- repo: https://github.com/PyCQA/flake8
rev: 6.0.0
hooks:
- id: flake8
additional_dependencies:
- flake8-black
- flake8-bugbear
- flake8-comprehensions
- flake8-simplify
- repo: https://github.com/pre-commit/mirrors-prettier
rev: 'v3.0.0-alpha.4'
hooks:
- id: prettier
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-added-large-files
- id: check-executables-have-shebangs
- id: check-shebang-scripts-are-executable
- id: check-merge-conflict
- id: check-symlinks
- id: check-toml
- id: end-of-file-fixer
- id: no-commit-to-branch
args: ['--branch', 'main']
- id: trailing-whitespace

View File

@ -1,14 +0,0 @@
invokeai/frontend/.husky
invokeai/frontend/patches
# Ignore artifacts:
build
coverage
static
invokeai/frontend/dist
# Ignore all HTML files:
*.html
# Ignore deprecated docs
docs/installation/deprecated_documentation

View File

@ -1,9 +1,9 @@
embeddedLanguageFormatting: auto
endOfLine: lf
singleQuote: true
semi: true
trailingComma: es5
tabWidth: 2
useTabs: false
singleQuote: true
quoteProps: as-needed
embeddedLanguageFormatting: auto
overrides:
- files: '*.md'
options:
@ -11,9 +11,3 @@ overrides:
printWidth: 80
parser: markdown
cursorOffset: -1
- files: docs/**/*.md
options:
tabWidth: 4
- files: 'invokeai/frontend/public/locales/*.json'
options:
tabWidth: 4

View File

@ -1,5 +0,0 @@
[pytest]
DJANGO_SETTINGS_MODULE = webtas.settings
; python_files = tests.py test_*.py *_tests.py
addopts = --cov=. --cov-config=.coveragerc --cov-report xml:cov.xml

View File

@ -33,6 +33,8 @@
</div>
_**Note: The UI is not fully functional on `main`. If you need a stable UI based on `main`, use the `pre-nodes` tag while we [migrate to a new backend](https://github.com/invoke-ai/InvokeAI/discussions/3246).**_
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
**Quick links**: [[How to Install](https://invoke-ai.github.io/InvokeAI/#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
@ -41,6 +43,23 @@ _Note: InvokeAI is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
## FOR DEVELOPERS - MIGRATING TO THE 3.0.0 MODELS FORMAT
The models directory and models.yaml have changed. To migrate to the
new layout, please follow this recipe:
1. Run `python scripts/migrate_models_to_3.0.py <path_to_root_directory>`
2. This will create a new models directory named `models-3.0` and a
new config file named `models.yaml-3.0`, both in the current
working directory. If you prefer to name them something else, pass
the `--dest-directory` and/or `--dest-yaml` arguments.
3. Check that the new models directory and yaml file look ok.
4. Replace the existing directory and file, keeping backup copies just in
case.
<div align="center">
![canvas preview](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/canvas_preview.png)
@ -84,7 +103,7 @@ installing lots of models.
6. Wait while the installer does its thing. After installing the software,
the installer will launch a script that lets you configure InvokeAI and
select a set of starting image generaiton models.
select a set of starting image generation models.
7. Find the folder that InvokeAI was installed into (it is not the
same as the unpacked zip file directory!) The default location of this
@ -139,7 +158,7 @@ not supported.
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
```
_For Linux with an AMD GPU:_
@ -148,6 +167,11 @@ not supported.
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
_For non-GPU systems:_
```terminal
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
_For Macintoshes, either Intel or M1/M2:_
```sh

coverage/.gitignore vendored Normal file
View File

@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore

View File

@ -4,15 +4,15 @@ ARG PYTHON_VERSION=3.9
##################
## base image ##
##################
FROM python:${PYTHON_VERSION}-slim AS python-base
FROM --platform=${TARGETPLATFORM} python:${PYTHON_VERSION}-slim AS python-base
LABEL org.opencontainers.image.authors="mauwii@outlook.de"
# prepare for buildkit cache
# Prepare apt for buildkit cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' >/etc/apt/apt.conf.d/keep-cache
# Install necessary packages
# Install dependencies
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
@ -23,7 +23,7 @@ RUN \
libglib2.0-0=2.66.* \
libopencv-dev=4.5.*
# set working directory and env
# Set working directory and env
ARG APPDIR=/usr/src
ARG APPNAME=InvokeAI
WORKDIR ${APPDIR}
@ -32,7 +32,7 @@ ENV PATH ${APPDIR}/${APPNAME}/bin:$PATH
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# don't fall back to legacy build system
# Don't fall back to legacy build system
ENV PIP_USE_PEP517=1
#######################
@ -40,7 +40,7 @@ ENV PIP_USE_PEP517=1
#######################
FROM python-base AS pyproject-builder
# Install dependencies
# Install build dependencies
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
@ -51,26 +51,30 @@ RUN \
gcc=4:10.2.* \
python3-dev=3.9.*
# prepare pip for buildkit cache
# Prepare pip for buildkit cache
ARG PIP_CACHE_DIR=/var/cache/buildkit/pip
ENV PIP_CACHE_DIR ${PIP_CACHE_DIR}
RUN mkdir -p ${PIP_CACHE_DIR}
# create virtual environment
RUN --mount=type=cache,target=${PIP_CACHE_DIR},sharing=locked \
# Create virtual environment
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
python3 -m venv "${APPNAME}" \
--upgrade-deps
# copy sources
COPY --link . .
# install pyproject.toml
# Install requirements
COPY --link pyproject.toml .
COPY --link invokeai/version/invokeai_version.py invokeai/version/__init__.py invokeai/version/
ARG PIP_EXTRA_INDEX_URL
ENV PIP_EXTRA_INDEX_URL ${PIP_EXTRA_INDEX_URL}
RUN --mount=type=cache,target=${PIP_CACHE_DIR},sharing=locked \
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
"${APPNAME}"/bin/pip install .
# Install pyproject.toml
COPY --link . .
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
"${APPNAME}/bin/pip" install .
# build patchmatch
# Build patchmatch
RUN python3 -c "from patchmatch import patch_match"
#####################
@ -86,14 +90,14 @@ RUN useradd \
-U \
"${UNAME}"
# create volume directory
# Create volume directory
ARG VOLUME_DIR=/data
RUN mkdir -p "${VOLUME_DIR}" \
&& chown -R "${UNAME}" "${VOLUME_DIR}"
&& chown -hR "${UNAME}:${UNAME}" "${VOLUME_DIR}"
# setup runtime environment
USER ${UNAME}
COPY --chown=${UNAME} --from=pyproject-builder ${APPDIR}/${APPNAME} ${APPNAME}
# Setup runtime environment
USER ${UNAME}:${UNAME}
COPY --chown=${UNAME}:${UNAME} --from=pyproject-builder ${APPDIR}/${APPNAME} ${APPNAME}
ENV INVOKEAI_ROOT ${VOLUME_DIR}
ENV TRANSFORMERS_CACHE ${VOLUME_DIR}/.cache
ENV INVOKE_MODEL_RECONFIGURE "--yes --default_only"

View File

@ -41,7 +41,7 @@ else
fi
# Build Container
DOCKER_BUILDKIT=1 docker build \
docker build \
--platform="${PLATFORM:-linux/amd64}" \
--tag="${CONTAINER_IMAGE:-invokeai}" \
${CONTAINER_FLAVOR:+--build-arg="CONTAINER_FLAVOR=${CONTAINER_FLAVOR}"} \

View File

@ -49,3 +49,6 @@ CONTAINER_FLAVOR="${CONTAINER_FLAVOR-cuda}"
CONTAINER_TAG="${CONTAINER_TAG-"${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}"}"
CONTAINER_IMAGE="${CONTAINER_REGISTRY}/${CONTAINER_REPOSITORY}:${CONTAINER_TAG}"
CONTAINER_IMAGE="${CONTAINER_IMAGE,,}"
# enable docker buildkit
export DOCKER_BUILDKIT=1

View File

@ -21,10 +21,10 @@ docker run \
--tty \
--rm \
--platform="${PLATFORM}" \
--name="${REPOSITORY_NAME,,}" \
--hostname="${REPOSITORY_NAME,,}" \
--mount=source="${VOLUMENAME}",target=/data \
--mount type=bind,source="$(pwd)"/outputs,target=/data/outputs \
--name="${REPOSITORY_NAME}" \
--hostname="${REPOSITORY_NAME}" \
--mount type=volume,volume-driver=local,source="${VOLUMENAME}",target=/data \
--mount type=bind,source="$(pwd)"/outputs/,target=/data/outputs/ \
${MODELSPATH:+--mount="type=bind,source=${MODELSPATH},target=/data/models"} \
${HUGGING_FACE_HUB_TOKEN:+--env="HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}"} \
--publish=9090:9090 \
@ -32,7 +32,7 @@ docker run \
${GPU_FLAGS:+--gpus="${GPU_FLAGS}"} \
"${CONTAINER_IMAGE}" ${@:+$@}
# Remove Trash folder
echo -e "\nCleaning trash folder ..."
for f in outputs/.Trash*; do
if [ -e "$f" ]; then
rm -Rf "$f"

View File

@ -1,5 +0,0 @@
{
"MD046": false,
"MD007": false,
"MD030": false
}

Binary file not shown (new image, 470 KiB).

Binary file not shown (new image, 457 KiB).

View File

@ -1,105 +1,277 @@
# Invocations
Invocations represent a single operation, its inputs, and its outputs. These operations and their outputs can be chained together to generate and modify images.
Invocations represent a single operation, its inputs, and its outputs. These
operations and their outputs can be chained together to generate and modify
images.
## Creating a new invocation
To create a new invocation, either find the appropriate module file in `/ldm/invoke/app/invocations` to add your invocation to, or create a new one in that folder. All invocations in that folder will be discovered and made available to the CLI and API automatically. Invocations make use of [typing](https://docs.python.org/3/library/typing.html) and [pydantic](https://pydantic-docs.helpmanual.io/) for validation and integration into the CLI and API.
To create a new invocation, either find the appropriate module file in
`/ldm/invoke/app/invocations` to add your invocation to, or create a new one in
that folder. All invocations in that folder will be discovered and made
available to the CLI and API automatically. Invocations make use of
[typing](https://docs.python.org/3/library/typing.html) and
[pydantic](https://pydantic-docs.helpmanual.io/) for validation and integration
into the CLI and API.
An invocation looks like this:
```py
class UpscaleInvocation(BaseInvocation):
"""Upscales an image."""
type: Literal['upscale'] = 'upscale'
# fmt: off
type: Literal["upscale"] = "upscale"
# Inputs
image: Union[ImageField,None] = Field(description="The input image")
strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
level: Literal[2,4] = Field(default=2, description = "The upscale level")
image: Union[ImageField, None] = Field(description="The input image", default=None)
strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
level: Literal[2, 4] = Field(default=2, description="The upscale level")
# fmt: on
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["upscaling", "image"],
},
}
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get(self.image.image_type, self.image.image_name)
results = context.services.generate.upscale_and_reconstruct(
image_list = [[image, 0]],
upscale = (self.level, self.strength),
strength = 0.0, # GFPGAN strength
save_original = False,
image_callback = None,
image = context.services.images.get_pil_image(
self.image.image_origin, self.image.image_name
)
results = context.services.restoration.upscale_and_reconstruct(
image_list=[[image, 0]],
upscale=(self.level, self.strength),
strength=0.0, # GFPGAN strength
save_original=False,
image_callback=None,
)
# Results are image and seed, unwrap for now
# TODO: can this return multiple results?
image_type = ImageType.RESULT
image_name = context.services.images.create_name(context.graph_execution_state_id, self.id)
context.services.images.save(image_type, image_name, results[0][0])
return ImageOutput(
image = ImageField(image_type = image_type, image_name = image_name)
image_dto = context.services.images.create(
image=results[0][0],
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(
image_name=image_dto.image_name,
image_origin=image_dto.image_origin,
),
width=image_dto.width,
height=image_dto.height,
)
```
Each portion is important to implement correctly.
### Class definition and type
```py
class UpscaleInvocation(BaseInvocation):
"""Upscales an image."""
type: Literal['upscale'] = 'upscale'
```
All invocations must derive from `BaseInvocation`. They should have a docstring that declares what they do in a single, short line. They should also have a `type` with a type hint that's `Literal["command_name"]`, where `command_name` is what the user will type on the CLI or use in the API to create this invocation. The `command_name` must be unique. The `type` must be assigned to the value of the literal in the type hint.
All invocations must derive from `BaseInvocation`. They should have a docstring
that declares what they do in a single, short line. They should also have a
`type` with a type hint that's `Literal["command_name"]`, where `command_name`
is what the user will type on the CLI or use in the API to create this
invocation. The `command_name` must be unique. The `type` must be assigned to
the value of the literal in the type hint.
### Inputs
```py
# Inputs
image: Union[ImageField,None] = Field(description="The input image")
strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
level: Literal[2,4] = Field(default=2, description="The upscale level")
```
Inputs consist of three parts: a name, a type hint, and a `Field` with default, description, and validation information. For example:
| Part | Value | Description |
| ---- | ----- | ----------- |
| Name | `strength` | This field is referred to as `strength` |
| Type Hint | `float` | This field must be of type `float` |
| Field | `Field(default=0.75, gt=0, le=1, description="The strength")` | The default value is `0.75`, the value must be in the range (0,1], and help text will show "The strength" for this field. |
Notice that `image` has type `Union[ImageField,None]`. The `Union` allows this field to be parsed with `None` as a value, which enables linking to previous invocations. All fields should either provide a default value or allow `None` as a value, so that they can be overwritten with a linked output from another invocation.
Inputs consist of three parts: a name, a type hint, and a `Field` with default,
description, and validation information. For example:
The special type `ImageField` is also used here. All images are passed as `ImageField`, which protects them from pydantic validation errors (since images only ever come from links).
| Part | Value | Description |
| --------- | ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| Name | `strength` | This field is referred to as `strength` |
| Type Hint | `float` | This field must be of type `float` |
| Field | `Field(default=0.75, gt=0, le=1, description="The strength")` | The default value is `0.75`, the value must be in the range (0,1], and help text will show "The strength" for this field. |
Finally, note that for all linking, the `type` of the linked fields must match. If the `name` also matches, then the field can be **automatically linked** to a previous invocation by name and matching.
Notice that `image` has type `Union[ImageField,None]`. The `Union` allows this
field to be parsed with `None` as a value, which enables linking to previous
invocations. All fields should either provide a default value or allow `None` as
a value, so that they can be overwritten with a linked output from another
invocation.
The special type `ImageField` is also used here. All images are passed as
`ImageField`, which protects them from pydantic validation errors (since images
only ever come from links).
Finally, note that for all linking, the `type` of the linked fields must match.
If the `name` also matches, then the field can be **automatically linked** to a
previous invocation by name and matching.
### Config
```py
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["upscaling", "image"],
},
}
```
This is an optional configuration for the invocation. It inherits from
pydantic's model `Config` class, and is used primarily to customize the
autogenerated OpenAPI schema.
The UI relies on the OpenAPI schema in two ways:
- An API client & Typescript types are generated from it. This happens at build
time.
- The node editor parses the schema into a template used by the UI to create the
node editor UI. This parsing happens at runtime.
In this example, a `ui` key has been added to the `schema_extra` dict to provide
some tags for the UI, to facilitate filtering nodes.
See the Schema Generation section below for more information.
### Invoke Function
```py
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get(self.image.image_type, self.image.image_name)
results = context.services.generate.upscale_and_reconstruct(
image_list = [[image, 0]],
upscale = (self.level, self.strength),
strength = 0.0, # GFPGAN strength
save_original = False,
image_callback = None,
image = context.services.images.get_pil_image(
self.image.image_origin, self.image.image_name
)
results = context.services.restoration.upscale_and_reconstruct(
image_list=[[image, 0]],
upscale=(self.level, self.strength),
strength=0.0, # GFPGAN strength
save_original=False,
image_callback=None,
)
# Results are image and seed, unwrap for now
image_type = ImageType.RESULT
image_name = context.services.images.create_name(context.graph_execution_state_id, self.id)
context.services.images.save(image_type, image_name, results[0][0])
# TODO: can this return multiple results?
image_dto = context.services.images.create(
image=results[0][0],
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image = ImageField(image_type = image_type, image_name = image_name)
image=ImageField(
image_name=image_dto.image_name,
image_origin=image_dto.image_origin,
),
width=image_dto.width,
height=image_dto.height,
)
```
The `invoke` function is the last portion of an invocation. It is provided an
`InvocationContext` which contains services to perform work as well as a
`session_id` for use as needed. It should return a class with output values that
derives from `BaseInvocationOutput`.

Before being called, the invocation will have all of its fields set from
defaults, inputs, and finally links (overriding in that order).

Assume that this invocation may be running simultaneously with other
invocations, may be running on another machine, or in other interesting
scenarios. If you need functionality, please provide it as a service in the
`InvocationServices` class, and make sure it can be overridden.
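Putting these pieces together, here is a sketch of what a complete, minimal
custom invocation might look like. The class, its `type` literal and the use of
`PIL.ImageOps` are illustrative assumptions rather than code from the
repository; the service calls mirror the upscale example above:

```py
from typing import Literal

from PIL import ImageOps
from pydantic import Field

# Illustrative sketch only -- adjust the imports to the actual module layout.

class InvertImageInvocation(BaseInvocation):
    """Inverts the colors of an image (hypothetical example)"""

    type: Literal["invert_image"] = "invert_image"

    # Inputs
    image: ImageField = Field(default=None, description="The image to invert")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        # Fetch the input image through the images service
        image = context.services.images.get_pil_image(
            self.image.image_origin, self.image.image_name
        )

        # Do the actual work
        inverted = ImageOps.invert(image.convert("RGB"))

        # Save the result through the same service so that it is tracked
        image_dto = context.services.images.create(
            image=inverted,
            image_origin=ResourceOrigin.INTERNAL,
            image_category=ImageCategory.GENERAL,
            node_id=self.id,
            session_id=context.graph_execution_state_id,
            is_intermediate=self.is_intermediate,
        )

        return ImageOutput(
            image=ImageField(
                image_name=image_dto.image_name,
                image_origin=image_dto.image_origin,
            ),
            width=image_dto.width,
            height=image_dto.height,
        )
```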
### Outputs
```py
class ImageOutput(BaseInvocationOutput):
    """Base class for invocations that output an image"""

    # fmt: off
    type: Literal["image_output"] = "image_output"
    image: ImageField = Field(default=None, description="The output image")
    width: int = Field(description="The width of the image in pixels")
    height: int = Field(description="The height of the image in pixels")
    # fmt: on

    class Config:
        schema_extra = {"required": ["type", "image", "width", "height"]}
```
Output classes look like an invocation class without the invoke method. Prefer
to use an existing output class if available, and prefer to name inputs the same
as outputs when possible, to promote automatic invocation linking.
## Schema Generation
Invocation, output and related classes are used to generate an OpenAPI schema.
### Required Properties
The schema generation treats all properties with default values as optional.
This makes sense internally, but when using these classes via the generated
schema, we end up with e.g. the `ImageOutput` class having its `image` property
marked as optional.
We know that this property will always be present, so the additional logic
needed to always check if the property exists adds a lot of extraneous cruft.
To fix this, we can leverage `pydantic`'s
[schema customisation](https://docs.pydantic.dev/usage/schema/#schema-customization)
to mark properties that we know will always be present as required.
Here's that `ImageOutput` class, without the needed schema customisation:
```python
class ImageOutput(BaseInvocationOutput):
    """Base class for invocations that output an image"""

    # fmt: off
    type: Literal["image_output"] = "image_output"
    image: ImageField = Field(default=None, description="The output image")
    width: int = Field(description="The width of the image in pixels")
    height: int = Field(description="The height of the image in pixels")
    # fmt: on
```
The OpenAPI schema that results from this `ImageOutput` will have the `type`,
`image`, `width` and `height` properties marked as optional, even though we know
they will always have a value.
```python
class ImageOutput(BaseInvocationOutput):
    """Base class for invocations that output an image"""

    # fmt: off
    type: Literal["image_output"] = "image_output"
    image: ImageField = Field(default=None, description="The output image")
    width: int = Field(description="The width of the image in pixels")
    height: int = Field(description="The height of the image in pixels")
    # fmt: on

    # Add schema customization
    class Config:
        schema_extra = {"required": ["type", "image", "width", "height"]}
```
With the customization in place, the schema will now show these properties as
required, obviating the need for extensive null checks in client code.
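You can check the effect of the customisation directly from Python. With
pydantic v1 (the version these docs assume), the generated schema should list
all four properties as required:

```python
# Quick sanity check of the generated schema (pydantic v1 API)
print(ImageOutput.schema()["required"])
# ['type', 'image', 'width', 'height']
```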
See this `pydantic` issue for discussion on this solution:
<https://github.com/pydantic/pydantic/discussions/4577>

View File

@ -0,0 +1,83 @@
# Local Development
If you are looking to contribute you will need to have a local development
environment. See the
[Developer Install](../installation/020_INSTALL_MANUAL.md#developer-install) for
full details.
Broadly this involves cloning the repository, installing the prerequisites, and
installing InvokeAI itself (in editable form). Assuming this is working, choose
your area of focus.
## Documentation
We use [mkdocs](https://www.mkdocs.org) for our documentation with the
[material theme](https://squidfunk.github.io/mkdocs-material/). Documentation is
written in markdown files under the `./docs` folder and then built into a static
website for hosting with GitHub Pages at
[invoke-ai.github.io/InvokeAI](https://invoke-ai.github.io/InvokeAI).
To contribute to the documentation you'll need to install the dependencies.
Note the use of quotes (`"`).
```zsh
pip install ".[docs]"
```
Now run the documentation locally, with hot-reloading of any changes you make:
```zsh
mkdocs serve
```
You can then view the documentation by connecting to `http://127.0.0.1:8080` in
your browser.
## Backend
The backend is contained within the `./invokeai/backend` folder structure. To
get started, please install the development dependencies.
From the root of the repository run the following command. Note the use of `"`.
```zsh
pip install ".[test]"
```
This is an optional group of packages which is defined within the
`pyproject.toml` and will be required for testing the changes you make to the
code.
### Running Tests
We use [pytest](https://docs.pytest.org/en/7.2.x/) for our test suite. Tests can
be found under the `./tests` folder and can be run with a single `pytest`
command. Optionally, to review test coverage you can append `--cov`.
```zsh
pytest --cov
```
Test outcomes and coverage will be reported in the terminal. In addition, a
more detailed report is created in both XML and HTML format in the `./coverage`
folder. The HTML report in particular can help identify missing statements
requiring tests to ensure coverage. View it by opening
`./coverage/html/index.html`.

For example:
```zsh
pytest --cov; open ./coverage/html/index.html
```
??? info "HTML coverage report output"
![html-overview](../assets/contributing/html-overview.png)
![html-detail](../assets/contributing/html-detail.png)
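While developing, you can also run a subset of the suite by pointing `pytest`
at a single file or filtering by test name with `-k`; the path below is just an
example:

```zsh
pytest tests/test_config.py -k "test_name"
```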
## Front End
<!--#TODO: get input from blessedcoolant here, for the moment inserted the frontend README via snippets extension.-->
--8<-- "invokeai/frontend/web/README.md"

View File

@ -1,5 +1,5 @@
---
title: Concepts Library
---
# :material-library-shelves: The Hugging Face Concepts Library and Importing Textual Inversion files
@ -25,14 +25,10 @@ library which downloads and merges TI files automatically upon request. You can
also install your own or others' TI files by placing them in a designated
directory.
### An Example

Here are a few examples to illustrate how it works. All these images were
generated using the command-line client and the Stable Diffusion 1.5 model:
| Japanese gardener | Japanese gardener &lt;ghibli-face&gt; | Japanese gardener &lt;hoi4-leaders&gt; | Japanese gardener &lt;cartoona-animals&gt; |
| :--------------------------------: | :-----------------------------------: | :------------------------------------: | :----------------------------------------: |
@ -113,50 +109,21 @@ For example, TI files generated by the Hugging Face toolkit share the name
`learned_embedding.bin`. You can use subdirectories to keep them distinct.
At startup time, InvokeAI will scan the `embeddings` directory and load any TI
files it finds there. At startup you will see a message similar to this one:
```bash
>> Loading embeddings from /data/lstein/invokeai-2.3/embeddings
| Loading v1 embedding file: style-hamunaptra
| Loading v4 embedding file: embeddings/learned_embeds-steps-500.bin
| Loading v2 embedding file: lfa
| Loading v3 embedding file: easynegative
| Loading v1 embedding file: rem_rezero
| Loading v2 embedding file: midj-strong
| Loading v4 embedding file: anime-background-style-v2/learned_embeds.bin
| Loading v4 embedding file: kamon-style/learned_embeds.bin
** Notice: kamon-style/learned_embeds.bin was trained on a model with an incompatible token dimension: 768 vs 1024.
>> Textual inversion triggers: <anime-background-style-v2>, <easynegative>, <lfa>, <midj-strong>, <milo>, Rem3-2600, Style-Hamunaptra
>> Current embedding manager terms: *, <HOI4-Leader>, <princess-knight>
```
Textual Inversion embeddings trained on version 1.X stable diffusion
models are incompatible with version 2.X models and vice-versa.
Note the `*` trigger term. This is a placeholder term that many early TI
tutorials taught people to use rather than a more descriptive term.
Unfortunately, if you have multiple TI files that all use this term, only the
first one loaded will be triggered by use of the term.
After the embeddings load, InvokeAI will print out a list of all the
recognized trigger terms. To trigger the term, include it in the
prompt exactly as written, including angle brackets if any and
respecting the capitalization.
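For example, given the triggers listed above, a prompt that activates the
`<midj-strong>` embedding might look like this:

```bash
a moody portrait of a renaissance astronomer <midj-strong>
```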
There are at least four different embedding file formats, and each uses
a different convention for the trigger terms. In some cases, the
trigger term is specified in the file contents and may or may not be
surrounded by angle brackets. In the example above, `Rem3-2600`,
`Style-Hamunaptra`, and `<midj-strong>` were specified this way and
there is no easy way to change the term.
In other cases the trigger term is not contained within the embedding
file. In this case, InvokeAI constructs a trigger term consisting of
the base name of the file (without the file extension) surrounded by
angle brackets. In the example above `<easynegative>` is such a file
(the filename was `easynegative.safetensors`). In such cases, you can
change the trigger term simply by renaming the file.
## Training your own Textual Inversion models
InvokeAI provides a script that lets you train your own Textual
Inversion embeddings using a small number (about a half-dozen) images
of your desired style or subject. Please see [Textual
Inversion](TEXTUAL_INVERSION.md) for details.
To avoid the trigger term collision problem described above, you can use the
`merge_embeddings.py` script to merge two or more TI files together. If it
encounters a collision of terms, the script will prompt you to select new terms
that do not collide. See [Textual Inversion](TEXTUAL_INVERSION.md) for details.
## Further Reading

View File

@ -168,11 +168,15 @@ used by Stable Diffusion 1.4 and 1.5.
After installation, your `models.yaml` should contain an entry that looks like
this one:
```yml
inpainting-1.5:
weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
description: SD inpainting v1.5
config: configs/stable-diffusion/v1-inpainting-inference.yaml
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
width: 512
height: 512
```
As shown in the example, you may include a VAE fine-tuning weights file as well.
This is strongly recommended.

docs/features/LOGGING.md Normal file
View File

@ -0,0 +1,171 @@
---
title: Controlling Logging
---
# :material-image-off: Controlling Logging
## Controlling How InvokeAI Logs Status Messages
InvokeAI logs status messages using a configurable logging system. You
can log to the terminal window, to a designated file on the local
machine, to the syslog facility on a Linux or Mac, or to a properly
configured web server. You can configure several logs at the same
time, and control the level of message logged and the logging format
(to a limited extent).
Three command-line options control logging:
### `--log_handlers <handler1> <handler2> ...`
This option activates one or more log handlers. Options are "console",
"file", "syslog" and "http". To specify more than one, separate them
by spaces:
```bash
invokeai-web --log_handlers console syslog=/dev/log file=C:\Users\fred\invokeai.log
```
The format of these options is described below.
### `--log_format {plain|color|legacy|syslog}`
This controls the format of log messages written to the console. Only
the "console" log handler is currently affected by this setting.
* "plain" provides formatted messages like this:
```bash
[2023-05-24 23:18:50,352]::[InvokeAI]::DEBUG --> this is a debug message
[2023-05-24 23:18:50,352]::[InvokeAI]::INFO --> this is an informational message
[2023-05-24 23:18:50,352]::[InvokeAI]::WARNING --> this is a warning
[2023-05-24 23:18:50,352]::[InvokeAI]::ERROR --> this is an error
[2023-05-24 23:18:50,352]::[InvokeAI]::CRITICAL --> this is a critical error
```
* "color" produces similar output, but the text will be color coded to
indicate the severity of the message.
* "legacy" produces output similar to InvokeAI versions 2.3 and earlier:
```bash
### this is a critical error
*** this is an error
** this is a warning
>> this is an informational message
| this is a debug message
```
* "syslog" produces messages suitable for syslog entries:
```bash
InvokeAI [2691178] <CRITICAL> this is a critical error
InvokeAI [2691178] <ERROR> this is an error
InvokeAI [2691178] <WARNING> this is a warning
InvokeAI [2691178] <INFO> this is an informational message
InvokeAI [2691178] <DEBUG> this is a debug message
```
(note that the date, time and hostname will be added by the syslog
system)
### `--log_level {debug|info|warning|error|critical}`
Providing this command-line option will cause only messages at the
specified level or above to be emitted.
## Console logging
When "console" is provided to `--log_handlers`, messages will be
written to the command line window in which InvokeAI was launched. By
default, the color formatter will be used unless overridden by
`--log_format`.
## File logging
When "file" is provided to `--log_handlers`, entries will be written
to the file indicated in the path argument. By default, the "plain"
format will be used:
```bash
invokeai-web --log_handlers file=/var/log/invokeai.log
```
## Syslog logging
When "syslog" is requested, entries will be sent to the syslog
system. There are a variety of ways to control where the log message
is sent:
* Send to the local machine using the `/dev/log` socket:
```bash
invokeai-web --log_handlers syslog=/dev/log
```
* Send to the local machine using a UDP message:
```bash
invokeai-web --log_handlers syslog=localhost
```
* Send to the local machine using a UDP message on a nonstandard
port:
```bash
invokeai-web --log_handlers syslog=localhost:512
```
* Send to a remote machine named "loghost" on the local LAN using
facility LOG_USER and UDP packets:
```bash
invokeai-web --log_handlers syslog=loghost,facility=LOG_USER,socktype=SOCK_DGRAM
```
This can be abbreviated `syslog=loghost`, as LOG_USER and SOCK_DGRAM
are defaults.
* Send to a remote machine named "loghost" using the facility LOCAL0
and using a TCP socket:
```bash
invokeai-web --log_handlers syslog=loghost,facility=LOG_LOCAL0,socktype=SOCK_STREAM
```
If no arguments are specified (just a bare "syslog"), then the logging
system will look for a UNIX socket named `/dev/log`, and if not found
try to send a UDP message to `localhost`. The Macintosh OS used to
support logging to a socket named `/var/run/syslog`, but this feature
has since been disabled.
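Under the hood these options map onto Python's standard `logging` handlers. As
a rough illustration (not InvokeAI's actual wiring), the `syslog=/dev/log` case
corresponds to something like:

```python
import logging
import logging.handlers

# Illustration only: roughly what "--log_handlers syslog=/dev/log" sets up
logger = logging.getLogger("InvokeAI")
logger.setLevel(logging.INFO)

handler = logging.handlers.SysLogHandler(address="/dev/log")
handler.setFormatter(
    logging.Formatter("InvokeAI [%(process)d] <%(levelname)s> %(message)s")
)
logger.addHandler(handler)

logger.info("this is an informational message")
```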
## Web logging
If you have access to a web server that is configured to log messages
when a particular URL is requested, you can log using the "http"
method:
```bash
invokeai-web --log_handlers http=http://my.server/path/to/logger,method=POST
```
The optional [,method=] part can be used to specify whether the URL
accepts GET (default) or POST messages.
Currently password authentication and SSL are not supported.
## Using the configuration file
You can set and forget logging options by adding a "Logging" section
to `invokeai.yaml`:
```yaml
InvokeAI:
[... other settings...]
Logging:
log_handlers:
- console
- syslog=/dev/log
log_level: info
log_format: color
```

View File

@ -1,100 +0,0 @@
---
title: Low-Rank Adaptation (LoRA) Models
---
# :material-library-shelves: Using Low-Rank Adaptation (LoRA) Models
## Introduction
LoRA is a technique for fine-tuning Stable Diffusion models using much
less time and memory than traditional training techniques. The
resulting model files are much smaller than full model files, and can
be used to generate specialized styles and subjects.
LoRAs are built on top of Stable Diffusion v1.x or 2.x checkpoint or
diffusers models. To load a LoRA, you include its name in the text
prompt using a simple syntax described below. While you will generally
get the best results when you use the same model the LoRA was trained
on, they will work to a greater or lesser extent with other models.
The major caveat is that a LoRA built on top of an SD v1.x model cannot
be used with a v2.x model, and vice-versa. If you try, you will get an
error! You may refer to multiple LoRAs in your prompt.
When you apply a LoRA in a prompt you can specify a weight. The higher
the weight, the more influence it will have on the image. Useful
ranges for weights are usually in the 0.0 to 1.0 range (with ranges
between 0.5 and 1.0 being most typical). However you can specify a
higher weight if you wish. Like models, each LoRA has a slightly
different useful weight range and will interact with other generation
parameters such as the CFG, step count and sampler. The author of the
LoRA will often provide guidance on the best settings, but feel free
to experiment. Be aware that it often helps to reduce the CFG value
when using LoRAs.
## Installing LoRAs
This is very easy! Download a LoRA model file from your favorite site
(e.g. [CIVITAI](https://civitai.com)) and place it in the `loras`
folder in the InvokeAI root directory (usually `~invokeai/loras` on
Linux/Macintosh machines, and `C:\Users\your-name\invokeai\loras` on
Windows systems). If the `loras` folder does not already exist, just
create it. The vast majority of LoRA models use the Kohya file format,
which is a type of `.safetensors` file.
You may change where InvokeAI looks for the `loras` folder by passing the
`--lora_directory` option to the `invoke.sh`/`invoke.bat` launcher, or
by placing the option in `invokeai.init`. For example:
```
invoke.sh --lora_directory=C:\Users\your-name\SDModels\lora
```
## Using a LoRA in your prompt
To activate a LoRA use the syntax `withLora(my-lora-name,weight)`
somewhere in the text of the prompt. The position doesn't matter; use
whatever is most comfortable for you.
For example, if you have a LoRA named `parchment_people.safetensors`
in your `loras` directory, you can load it with a weight of 0.9 with a
prompt like this one:
```
family sitting at dinner table withLora(parchment_people,0.9)
```
Add additional `withLora()` phrases to load more LoRAs.
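For example, to stack the LoRA above with a second, purely hypothetical
`soft_lighting` LoRA at a lower weight:

```
family sitting at dinner table withLora(parchment_people,0.9) withLora(soft_lighting,0.5)
```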
You may omit the weight entirely to default to a weight of 1.0:
```
family sitting at dinner table withLora(parchment_people)
```
If you watch the console as your prompt executes, you will see
messages relating to the loading and execution of the LoRA. If things
don't work as expected, note down the console messages and report them
on the InvokeAI Issues pages or Discord channel.
That's pretty much all you need to know!
## Training Kohya Models
InvokeAI cannot currently train LoRA models, but it can load and use
existing ones to generate images. While there are several LoRA
model file formats, the predominant one is ["Kohya"
format](https://github.com/kohya-ss/sd-scripts), written by [Kohya
S.](https://github.com/kohya-ss). InvokeAI provides support for this
format. For creating your own Kohya models, we recommend the Windows
GUI written by former InvokeAI-team member
[bmaltais](https://github.com/bmaltais), which can be found at
[kohya_ss](https://github.com/bmaltais/kohya_ss).
We can also recommend the [HuggingFace DreamBooth Training
UI](https://huggingface.co/spaces/lora-library/LoRA-DreamBooth-Training-UI),
a paid service that supports both Textual Inversion and LoRA training.
You may also be interested in [Textual
Inversion](TEXTUAL_INVERSION.md) training, which is supported by
InvokeAI as a text console and command-line tool.

View File

@ -32,7 +32,7 @@ turned on and off on the command line using `--nsfw_checker` and
At installation time, InvokeAI will ask whether the checker should be
activated by default (neither argument given on the command line). The
response is stored in the InvokeAI initialization file (usually
`invokeai.init` in your home directory). You can change the default at any
time by opening this file in a text editor and commenting or
uncommenting the line `--nsfw_checker`.

View File

@ -268,7 +268,7 @@ model is so good at inpainting, a good substitute is to use the `clipseg` text
masking option:
```bash
invoke> a fluffy cat eating a hotdog
Outputs:
[1010] outputs/000025.2182095108.png: a fluffy cat eating a hotdog
invoke> a smiling dog eating a hotdog -I 000025.2182095108.png -tm cat

View File

@ -17,7 +17,7 @@ notebooks.
You will need a GPU to perform training in a reasonable length of
time, and at least 12 GB of VRAM. We recommend using the [`xformers`
library](../installation/070_INSTALL_XFORMERS.md) to accelerate the
training process further. During training, about 8 GB is temporarily
needed in order to store intermediate models, checkpoints and logs.
@ -154,11 +154,8 @@ training sets will converge with 2000-3000 steps.
This adjusts how many training images are processed simultaneously in
each step. Higher values will cause the training process to run more
quickly, but use more memory. The default size will run with GPUs with
as little as 12 GB.
### Learning rate
@ -175,10 +172,8 @@ learning rate to improve performance.
### Use xformers acceleration
This will activate XFormers memory-efficient attention. You need to
have XFormers installed for this to have an effect.
### Learning rate scheduler
@ -255,49 +250,6 @@ invokeai-ti \
--only_save_embeds
```
## Using Distributed Training
If you have multiple GPUs on one machine, or a cluster of GPU-enabled
machines, you can activate distributed training. See the [HuggingFace
Accelerate pages](https://huggingface.co/docs/accelerate/index) for
full information, but the basic recipe is:
1. Enter the InvokeAI developer's console command line by selecting
option [8] from the `invoke.sh`/`invoke.bat` script.
2. Configure Accelerate using `accelerate config`:
```sh
accelerate config
```
This will guide you through the configuration process, including
specifying how many machines you will run training on and the number
of GPUs per machine.
You only need to do this once.
3. Launch training from the command line using `accelerate launch`. Be sure
that your current working directory is the InvokeAI root directory (usually
named `invokeai` in your home directory):
```sh
accelerate launch .venv/bin/invokeai-ti \
--model=stable-diffusion-1.5 \
--resolution=512 \
--learnable_property=object \
--initializer_token='*' \
--placeholder_token='<shraddha>' \
--train_data_dir=/home/lstein/invokeai/text-inversion-training-data/shraddha \
--output_dir=/home/lstein/invokeai/text-inversion-training/shraddha \
--scale_lr \
--train_batch_size=10 \
--gradient_accumulation_steps=4 \
--max_train_steps=2000 \
--learning_rate=0.0005 \
--lr_scheduler=constant \
--mixed_precision=fp16 \
--only_save_embeds
```
## Using Embeddings
After training completes, the resultant embeddings will be saved into your `$INVOKEAI_ROOT/embeddings/<trigger word>/learned_embeds.bin`.
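You can then trigger the new concept by including the placeholder token from
the training command in a prompt, for example:

```
a portrait of <shraddha> in the style of an oil painting
```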

View File

@ -2,84 +2,65 @@
title: Overview
---
Here you can find the documentation for InvokeAI's various features.

## The Basics

### * The [Web User Interface](WEB.md)

Guide to the Web interface. Also see the
[WebUI Hotkeys Reference Guide](WEBUIHOTKEYS.md)

### * The [Unified Canvas](UNIFIED_CANVAS.md)

Build complex scenes by combining and modifying multiple images in a stepwise
fashion. This feature combines img2img, inpainting and outpainting in a single
convenient digital artist-optimized user interface.

### * The [Command Line Interface (CLI)](CLI.md)

Scriptable access to InvokeAI's features.

## Image Generation

### * [Prompt Engineering](PROMPTS.md)

Get the images you want with the InvokeAI prompt engineering language.

### * [Post-Processing](POSTPROCESS.md)

Restore mangled faces and make images larger with upscaling. Also see the
[Embiggen Upscaling Guide](EMBIGGEN.md).

### * The [Concepts Library](CONCEPTS.md)

Add custom subjects and styles using HuggingFace's repository of embeddings.

### * [Image-to-Image Guide for the CLI](IMG2IMG.md)

Use a seed image to build new creations in the CLI.

### * [Inpainting Guide for the CLI](INPAINTING.md)

Selectively erase and replace portions of an existing image in the CLI.

### * [Outpainting Guide for the CLI](OUTPAINTING.md)

Extend the borders of the image with an "outcrop" function within the CLI.

### * [Generating Variations](VARIATIONS.md)

Have an image you like and want to generate many more like it? Variations
are the ticket.

## Model Management

### * [Model Installation](../installation/050_INSTALLING_MODELS.md)

Learn how to import third-party models and switch among them. This
guide also covers optimizing models to load quickly.

### * [Merging Models](MODEL_MERGING.md)

Teach an old model new tricks. Merge 2-3 models together to create a
new model that combines characteristics of the originals.

### * [Textual Inversion](TEXTUAL_INVERSION.md)

Personalize models by adding your own style or subjects.

## Other Features

### * [The NSFW Checker](NSFW.md)

Prevent InvokeAI from displaying unwanted racy images.

### * [Controlling Logging](LOGGING.md)

Control how InvokeAI logs status messages.

### * [Miscellaneous](OTHER.md)

Run InvokeAI on Google Colab, generate images with repeating patterns,
batch process a file of prompts, increase the "creativity" of image
generation by adding initial noise, and more!

View File

@ -1,4 +0,0 @@
# :octicons-file-code-16: IDE-Settings
Here we will share settings for IDEs used by our developers, maybe you can find
something interesting which will help to boost your development efficiency 🔥

View File

@ -1,250 +0,0 @@
---
title: Visual Studio Code
---
# :material-microsoft-visual-studio-code:Visual Studio Code
The Workspace Settings are stored in the project (repository) root and take
higher priority than your user settings.
This helps to have different settings for different projects, while the user
settings get used as a default value if no workspace settings are provided.
## tasks.json
First we will create a task configuration which will create a virtual
environment and update the deps (pip, setuptools and wheel).

Into this venv we will then install the project from `pyproject.toml` in
editable mode with the dev, docs and test dependencies.
```json title=".vscode/tasks.json"
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "Create virtual environment",
"detail": "Create .venv and upgrade pip, setuptools and wheel",
"command": "python3",
"args": [
"-m",
"venv",
".venv",
"--prompt",
"InvokeAI",
"--upgrade-deps"
],
"runOptions": {
"instanceLimit": 1,
"reevaluateOnRerun": true
},
"group": {
"kind": "build"
},
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "shared",
"showReuseMessage": true,
"clear": false
}
},
{
"label": "build InvokeAI",
"detail": "Build pyproject.toml with extras dev, docs and test",
"command": "${workspaceFolder}/.venv/bin/python3",
"args": [
"-m",
"pip",
"install",
"--use-pep517",
"--editable",
".[dev,docs,test]"
],
"dependsOn": "Create virtual environment",
"dependsOrder": "sequence",
"group": {
"kind": "build",
"isDefault": true
},
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "shared",
"showReuseMessage": true,
"clear": false
}
}
]
}
```
The fastest way to build InvokeAI now is ++cmd+shift+b++
## launch.json
This file is used to define debugger configurations, so that you can one-click
launch and monitor the application, and set breakpoints to inspect specific
states.
```json title=".vscode/launch.json"
{
"version": "0.2.0",
"configurations": [
{
"name": "invokeai web",
"type": "python",
"request": "launch",
"program": ".venv/bin/invokeai",
"justMyCode": true
},
{
"name": "invokeai cli",
"type": "python",
"request": "launch",
"program": ".venv/bin/invokeai",
"justMyCode": true
},
{
"name": "mkdocs serve",
"type": "python",
"request": "launch",
"program": ".venv/bin/mkdocs",
"args": ["serve"],
"justMyCode": true
}
]
}
```
Then you only need to hit ++f5++ and the fun begins :nerd: (It is assumed that
you have created a virtual environment via the [tasks](#tasksjson) from the
previous step.)
## extensions.json
A list of recommended vscode-extensions to make your life easier:
```json title=".vscode/extensions.json"
{
"recommendations": [
"editorconfig.editorconfig",
"github.vscode-pull-request-github",
"ms-python.black-formatter",
"ms-python.flake8",
"ms-python.isort",
"ms-python.python",
"ms-python.vscode-pylance",
"redhat.vscode-yaml",
"tamasfe.even-better-toml",
"eamodio.gitlens",
"foxundermoon.shell-format",
"timonwong.shellcheck",
"esbenp.prettier-vscode",
"davidanson.vscode-markdownlint",
"yzhang.markdown-all-in-one",
"bierner.github-markdown-preview",
"ms-azuretools.vscode-docker",
"mads-hartmann.bash-ide-vscode"
]
}
```
## settings.json
With the settings below, your files will already get formatted when you save
them (only your modifications if available), which will help you to not run
into trouble with the pre-commit hooks. If the hooks fail, they will prevent
you from committing, but most hooks directly add a fixed version, so that you
just need to stage and commit them:
```json title=".vscode/settings.json"
{
"[json]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.quickSuggestions": {
"comments": false,
"strings": true,
"other": true
},
"editor.suggest.insertMode": "replace",
"gitlens.codeLens.scopes": ["document"]
},
"[jsonc]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "modificationsIfAvailable"
},
"[python]": {
"editor.defaultFormatter": "ms-python.black-formatter",
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "file"
},
"[toml]": {
"editor.defaultFormatter": "tamasfe.even-better-toml",
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "modificationsIfAvailable"
},
"[yaml]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "modificationsIfAvailable"
},
"[markdown]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.rulers": [80],
"editor.unicodeHighlight.ambiguousCharacters": false,
"editor.unicodeHighlight.invisibleCharacters": false,
"diffEditor.ignoreTrimWhitespace": false,
"editor.wordWrap": "on",
"editor.quickSuggestions": {
"comments": "off",
"strings": "off",
"other": "off"
},
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "modificationsIfAvailable"
},
"[shellscript]": {
"editor.defaultFormatter": "foxundermoon.shell-format"
},
"[ignore]": {
"editor.defaultFormatter": "foxundermoon.shell-format"
},
"editor.rulers": [88],
"evenBetterToml.formatter.alignEntries": false,
"evenBetterToml.formatter.allowedBlankLines": 1,
"evenBetterToml.formatter.arrayAutoExpand": true,
"evenBetterToml.formatter.arrayTrailingComma": true,
"evenBetterToml.formatter.arrayAutoCollapse": true,
"evenBetterToml.formatter.columnWidth": 88,
"evenBetterToml.formatter.compactArrays": true,
"evenBetterToml.formatter.compactInlineTables": true,
"evenBetterToml.formatter.indentEntries": false,
"evenBetterToml.formatter.inlineTableExpand": true,
"evenBetterToml.formatter.reorderArrays": true,
"evenBetterToml.formatter.reorderKeys": true,
"evenBetterToml.formatter.compactEntries": false,
"evenBetterToml.schema.enabled": true,
"python.analysis.typeCheckingMode": "basic",
"python.formatting.provider": "black",
"python.languageServer": "Pylance",
"python.linting.enabled": true,
"python.linting.flake8Enabled": true,
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.testing.pytestArgs": [
"tests",
"--cov=ldm",
"--cov-branch",
"--cov-report=term:skip-covered"
],
"yaml.schemas": {
"https://json.schemastore.org/prettierrc.json": "${workspaceFolder}/.prettierrc.yaml"
}
}
```

View File

@ -1,135 +0,0 @@
---
title: Pull-Request
---
# :octicons-git-pull-request-16: Pull-Request
## Prerequisites
To follow the steps in this tutorial you will need:
- A [GitHub](https://github.com) account
- [git](https://git-scm.com/downloads) source control
- A text / code editor (personally I prefer
  [Visual Studio Code](https://code.visualstudio.com/Download))
- A terminal:
  - If you are on Linux/macOS you can use bash or zsh
  - For Windows users the commands are written for PowerShell
## Fork Repository
The first step to be done if you want to contribute to InvokeAI is to fork the
repository.

Since you are already reading this doc, the easiest way to do so is by clicking
[here](https://github.com/invoke-ai/InvokeAI/fork). You could also open
[InvokeAI](https://github.com/invoke-ai/InvokeAI) and click on the "Fork" button
in the top right.
## Clone your fork
After you forked the Repository, you should clone it to your dev machine:
=== ":fontawesome-brands-linux:Linux / :simple-apple:macOS"
``` sh
git clone https://github.com/<github username>/InvokeAI \
&& cd InvokeAI
```
=== ":fontawesome-brands-windows:Windows"
``` powershell
git clone https://github.com/<github username>/InvokeAI `
&& cd InvokeAI
```
## Install in Editable Mode
To install InvokeAI in editable mode, we recommend (as always) to create and
activate a venv first. Afterwards you can install the InvokeAI package,
including the dev and docs extras, in editable mode, followed by the
installation of the pre-commit hook:
=== ":fontawesome-brands-linux:Linux / :simple-apple:macOS"
``` sh
python -m venv .venv \
--prompt InvokeAI \
--upgrade-deps \
&& source .venv/bin/activate \
&& pip install \
--upgrade \
--use-pep517 \
--editable=".[dev,docs]" \
&& pre-commit install
```
=== ":fontawesome-brands-windows:Windows"
``` powershell
python -m venv .venv `
--prompt InvokeAI `
--upgrade-deps `
&& .venv/scripts/activate.ps1 `
&& pip install `
--upgrade `
--use-pep517 `
--editable=".[dev,docs]" `
&& pre-commit install
```
## Create a branch
Make sure you are on the main branch; from there, create your feature branch:
=== ":fontawesome-brands-linux:Linux / :simple-apple:macOS"
``` sh
git checkout main \
&& git pull \
&& git checkout -B <branch name>
```
=== ":fontawesome-brands-windows:Windows"
``` powershell
git checkout main `
&& git pull `
&& git checkout -B <branch name>
```
## Commit your changes
When you are done with adding / updating content, you need to commit those
changes to your repository before you can actually open a PR:
```{ .sh .annotate }
git add <files you have changed> # (1)!
git commit -m "A commit message which describes your change"
git push
```
1. Replace this with a space-separated list of the files you changed, like:
   `README.md foo.sh bar.json baz`
## Create a Pull Request
After pushing your changes, you are ready to create a Pull Request. Just head
over to your fork on [GitHub](https://github.com), which should already show you
a message that there have been recent changes on your feature branch, along with
a green button which you can use to create the PR.
The default target for your PRs would be the main branch of
[invoke-ai/InvokeAI](https://github.com/invoke-ai/InvokeAI)
Another way would be to create it in VS-Code or via the GitHub CLI (or even via
the GitHub CLI in a VS-Code Terminal Window 🤭):
```sh
gh pr create
```
The CLI will inform you if there are still unpushed commits on your branch. It
will also prompt you for things like the title and the body (description) if
you did not already pass them as arguments.

View File

@ -1,26 +0,0 @@
---
title: Issues
---
# :octicons-issue-opened-16: Issues
## :fontawesome-solid-bug: Report a bug
If you stumbled over a bug while using InvokeAI, we would appreciate it a lot if
you
[open an issue](https://github.com/invoke-ai/InvokeAI/issues/new?assignees=&labels=bug&template=BUG_REPORT.yml&title=%5Bbug%5D%3A+)
to inform us about the details so that our developers can look into it.
If you also know how to fix the bug, take a look [here](010_PULL_REQUEST.md) to
find out how to create a Pull Request.
## Request a feature
If you have an idea for a new feature which you would like to see in
InvokeAI, there is a
[feature request](https://github.com/invoke-ai/InvokeAI/issues/new?assignees=&labels=bug&template=BUG_REPORT.yml&title=%5Bbug%5D%3A+)
available in the issues section of the repository.
If you are just curious which features have already been requested, you can
find an overview of open requests
[here](https://github.com/invoke-ai/InvokeAI/labels/enhancement).

View File

@ -1,32 +0,0 @@
---
title: docs
---
# :simple-readthedocs: MkDocs-Material
If you want to contribute to the docs, there is an easy way to verify the
results of your changes before committing them.
Just follow the steps in the [Pull-Requests](010_PULL_REQUEST.md) docs, where we
already
[create a venv and install the docs extras](010_PULL_REQUEST.md#install-in-editable-mode).
Once installed, it's as simple as:
```sh
mkdocs serve
```
This will build the docs locally and serve them on your local host;
auto-refresh is included, so you can just update a doc, save it and tab over to
the browser, without needing to restart `mkdocs serve`.
More information about the "mkdocs flavored markdown syntax" can be found
[here](https://squidfunk.github.io/mkdocs-material/reference/).
## :material-microsoft-visual-studio-code:VS-Code
We also provide a
[launch configuration for VS-Code](../IDE-Settings/vs-code.md#launchjson) which
includes a `mkdocs serve` entrypoint as well. You also don't have to worry about
the formatting since this is automated via prettier, but this is of course not
limited to VS-Code.

View File

@ -1,76 +0,0 @@
# Transformation to nodes
## Current state
```mermaid
flowchart TD
web[WebUI];
cli[CLI];
web --> |img2img| generate(generate);
web --> |txt2img| generate(generate);
cli --> |txt2img| generate(generate);
cli --> |img2img| generate(generate);
generate --> model_manager;
generate --> generators;
generate --> ti_manager[TI Manager];
generate --> etc;
```
## Transitional Architecture
### first step
```mermaid
flowchart TD
web[WebUI];
cli[CLI];
web --> |img2img| img2img_node(Img2img node);
web --> |txt2img| generate(generate);
img2img_node --> model_manager;
img2img_node --> generators;
cli --> |txt2img| generate;
cli --> |img2img| generate;
generate --> model_manager;
generate --> generators;
generate --> ti_manager[TI Manager];
generate --> etc;
```
### second step
```mermaid
flowchart TD
web[WebUI];
cli[CLI];
web --> |img2img| img2img_node(img2img node);
img2img_node --> model_manager;
img2img_node --> generators;
web --> |txt2img| txt2img_node(txt2img node);
cli --> |txt2img| txt2img_node;
cli --> |img2img| generate(generate);
generate --> model_manager;
generate --> generators;
generate --> ti_manager[TI Manager];
generate --> etc;
txt2img_node --> model_manager;
txt2img_node --> generators;
txt2img_node --> ti_manager[TI Manager];
```
## Final Architecture
```mermaid
flowchart TD
web[WebUI];
cli[CLI];
web --> |img2img|img2img_node(img2img node);
cli --> |img2img|img2img_node;
web --> |txt2img|txt2img_node(txt2img node);
cli --> |txt2img|txt2img_node;
img2img_node --> model_manager;
txt2img_node --> model_manager;
img2img_node --> generators;
txt2img_node --> generators;
img2img_node --> ti_manager[TI Manager];
txt2img_node --> ti_manager[TI Manager];
```

View File

@ -1,16 +0,0 @@
---
title: Contributing
---
# :fontawesome-solid-code-commit: Contributing
There are different ways you can contribute to
[InvokeAI](https://github.com/invoke-ai/InvokeAI), like translations, or opening
issues for bugs or ideas for improvement.

This section of the docs will explain some of the different ways you can
contribute, to make it easier for newcomers as well as advanced users :nerd:
If you want to contribute code, but you do not have an exact idea yet, take a
look at the currently open
[:fontawesome-solid-bug: Bug Reports](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen+label%3Abug)

View File

@ -1,12 +0,0 @@
# :material-help:Help

If you are looking for help with the installation of InvokeAI, please take a
look into the [Installation](../installation/index.md) section of the docs.

Here you will find help on topics like

- how to contribute
- configuration recommendations for IDEs

If you have an idea about what's missing and aren't scared of contributing, just
take a look at [DOCS](./contributing/030_DOCS.md) to find out how to do so.

View File

@ -2,8 +2,6 @@
title: Home
---
# :octicons-home-16: Home
<!--
The Docs you find here (/docs/*) are built and deployed via mkdocs. If you want to run a local version to verify your changes, it's as simple as::
@ -31,36 +29,36 @@ title: Home
[![github open prs badge]][github open prs link]
[ci checks on dev badge]:
  https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[ci checks on dev link]:
  https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[ci checks on main badge]:
  https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[ci checks on main link]:
  https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]:
  https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
[github forks link]:
  https://useful-forks.github.io/?repo=lstein%2Fstable-diffusion
[github open issues badge]:
  https://flat.badgen.net/github/open-issues/invoke-ai/InvokeAI?icon=github
[github open issues link]:
  https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen
[github open prs badge]:
  https://flat.badgen.net/github/open-prs/invoke-ai/InvokeAI?icon=github
[github open prs link]:
  https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
[github stars badge]:
  https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
[latest commit to dev badge]:
  https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]:
  https://github.com/invoke-ai/InvokeAI/commits/development
[latest release badge]:
  https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
</div>
@ -69,7 +67,7 @@ title: Home
implementation of Stable Diffusion, the open source text-to-image and
image-to-image generator. It provides a streamlined process with various new
features and options to aid the image generation process. It runs on Windows,
Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
**Quick links**: [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>]
[<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a
@ -89,24 +87,24 @@ Q&A</a>]
You will need one of the following:
- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux
  only)
- :fontawesome-brands-apple: An Apple computer with an M1 chip.
We do **not recommend** the following video cards due to issues with their
running in half-precision mode and having insufficient VRAM to render 512x512
images in full-precision mode:
- NVIDIA 10xx series cards such as the 1080ti
- GTX 1650 series cards
- GTX 1660 series cards
### :fontawesome-solid-memory: Memory and Disk
- At least 12 GB Main Memory RAM.
- At least 18 GB of free disk space for the machine learning model, Python, and
  all its dependencies.
## :octicons-package-dependencies-24: Installation
@ -115,407 +113,133 @@ either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
### [Installation Getting Started Guide](installation)
#### [Automated Installer](installation/010_INSTALL_AUTOMATED.md)

This method is recommended for first-time users.

#### [Manual Installation](installation/020_INSTALL_MANUAL.md)

This method is recommended for experienced users and developers.

#### [Docker Installation](installation/040_INSTALL_DOCKER.md)

This method is recommended for those familiar with running Docker containers.
### Other Installation Guides
- [PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md)
- [XFormers](installation/070_INSTALL_XFORMERS.md)
- [CUDA and ROCm Drivers](installation/030_INSTALL_CUDA_AND_ROCM.md)
- [Installing New Models](installation/050_INSTALLING_MODELS.md)
## :octicons-gift-24: InvokeAI Features
### The InvokeAI Web Interface
- [WebUI overview](features/WEB.md)
- [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)
- [WebUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md)

<!-- separator -->
### The InvokeAI Command Line Interface
- [Command Line Interface Reference Guide](features/CLI.md)
<!-- separator -->
### Image Management
- [Image2Image](features/IMG2IMG.md)
- [Inpainting](features/INPAINTING.md)
- [Outpainting](features/OUTPAINTING.md)
- [Adding custom styles and subjects](features/CONCEPTS.md)
- [Upscaling and Face Reconstruction](features/POSTPROCESS.md)
- [Embiggen upscaling](features/EMBIGGEN.md)
- [Other Features](features/OTHER.md)
<!-- separator -->
### Model Management
- [Installing](installation/050_INSTALLING_MODELS.md)
- [Model Merging](features/MODEL_MERGING.md)
- [Style/Subject Concepts and Embeddings](features/CONCEPTS.md)
- [Textual Inversion](features/TEXTUAL_INVERSION.md)
- [Not Safe for Work (NSFW) Checker](features/NSFW.md)
<!-- separator -->
### Prompt Engineering
- [Prompt Syntax](features/PROMPTS.md)
- [Generating Variations](features/VARIATIONS.md)
## :octicons-log-16: Latest Changes
### v2.3.3 <small>(29 March 2023)</small>
#### Bug Fixes
1. When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
2. Textual inversion will select an appropriate batchsize based on whether `xformers` is active, and will default to `xformers` enabled if the library is detected.
3. The batch script log file names have been fixed to be compatible with Windows.
4. Occasional corruption of the `.next_prefix` file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
5. An infinite loop when opening the developer's console from within the `invoke.sh` script has been corrected.
#### Enhancements
1. It is now possible to load and run several community-contributed SD-2.0 based models, including the infamous "Illuminati" model.
2. The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI `embeddings` directory.
3. If no `--model` is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
4. On Linux systems, the `invoke.sh` launcher now uses a prettier console-based interface. To take advantage of it, install the `dialog` package using your package manager (e.g. `sudo apt install dialog`).
5. When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model following this example:
```
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt # or my-favorite-model.vae.safetensors
```
### v2.3.2 <small>(13 March 2023)</small>
#### Bugfixes
Since version 2.3.1 the following bugs have been fixed:
1. Black images appearing for potential NSFW images when generating with legacy checkpoint models and both `--no-nsfw_checker` and `--ckpt_convert` turned on.
2. Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
3. The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI
4. When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
5. Crashes that occurred during model merging.
6. Restore previous naming of Stable Diffusion base and 768 models.
7. Upgraded to latest versions of `diffusers`, `transformers`, `safetensors` and `accelerate` libraries upstream. We hope that this will fix the `assertion NDArray > 2**32` issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.
As part of the upgrade to `diffusers`, the location of the diffusers-based models has changed from `models/diffusers` to `models/hub`. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your `models/diffusers` directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.
#### New "Invokeai-batch" script
2.3.2 introduces a new command-line only script called
`invokeai-batch` that can be used to generate hundreds of images from
prompts and settings that vary systematically. This can be used to try
the same prompt across multiple combinations of models, steps, CFG
settings and so forth. It also allows you to template prompts and
generate a combinatorial list like:

```
a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
```
If you have a system with multiple GPUs, or a single GPU with lots of
VRAM, you can parallelize generation across the combinatorial set,
reducing wait times and using your system's resources efficiently
(make sure you have good GPU cooling).
To try `invokeai-batch` out, launch the "developer's console" using
the `invoke` launcher script, or activate the invokeai virtual
environment manually. From the console, give the command
`invokeai-batch --help` in order to learn how the script works and
create your first template file for dynamic prompt generation.
### v2.3.1 <small>(26 February 2023)</small>
This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
#### Enhanced support for model management
InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
There are three ways of accessing the model management features:
1. ***From the WebUI***, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.
![image](https://user-images.githubusercontent.com/111189/220638091-918492cc-0719-4194-b033-3741e8289b30.png)
2. **Using the Model Installer App**
Choose option (5) _download and install models_ from the `invoke` launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.
Command-line users can start this app using the command `invokeai-model-install`.
![image](https://user-images.githubusercontent.com/111189/220660363-22ff3a2e-8082-410e-a818-d2b3a0529bac.png)
3. **Using the Command Line Client (CLI)**
The `!install_model` and `!convert_model` commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.
Internally InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do **not** need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time they are used.
Please see [INSTALLING MODELS](https://invoke-ai.github.io/InvokeAI/installation/050_INSTALLING_MODELS/) for more information on model management.
#### An Improved Installer Experience
The installer now launches a console-based UI for setting and changing commonly-used startup options:
![image](https://user-images.githubusercontent.com/111189/220644777-3d3a90ca-f9e2-4e6d-93da-cbdd66bf12f3.png)
After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching `invoke.sh`/`invoke.bat` and entering option (6) _change InvokeAI startup options_.
Command-line users can launch the new configure app using `invokeai-configure`.
This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch `invoke.sh` or `invoke.bat` and choose option (9) _update InvokeAI_ . This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or any released or unreleased version you choose by selecting the tag or branch of the desired version.
![image](https://user-images.githubusercontent.com/111189/220650124-30a77137-d9cd-406e-a87d-d8283f99a4b3.png)
Command-line users can run this interface by typing `invokeai-update`.
#### Image Symmetry Options
There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting _Symmetry_ from the image generation settings, or within the CLI by using the options `--h_symmetry_time_pct` and `--v_symmetry_time_pct` (these can be abbreviated to `--h_sym` and `--v_sym` like all other options).
![image](https://user-images.githubusercontent.com/111189/220658687-47fd0f2c-7069-4d95-aec9-7196fceb360d.png)
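For example, from the CLI (the prompt text is illustrative, and the 0.5 value assumes the option takes a fraction of the generation steps):
```bash
# Turn on horizontal mirroring halfway through the denoising steps.
invoke> "a stained-glass rose window" --h_symmetry_time_pct 0.5
```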
#### A New Unified Canvas Look
This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select _Use Canvas Beta Layout_:
![image](https://user-images.githubusercontent.com/111189/220646958-b7eca95e-dc39-4cd2-b277-63eac98ed446.png)
Refresh the screen and go to the Unified Canvas (left side of the screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
![image](https://user-images.githubusercontent.com/111189/220647560-4a9265a1-6926-44f9-9d08-e1ef2ce61ff8.png)
#### Model conversion and merging within the WebUI
The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the `invoke.sh`/`invoke.bat` scripts.
#### An easier way to contribute translations to the WebUI
We have migrated our translation efforts to [Weblate](https://hosted.weblate.org/engage/invokeai/), a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief [translation guide](https://github.com/invoke-ai/InvokeAI/blob/v2.3.1/docs/other/TRANSLATION.md) for more information on how to contribute.
#### Numerous internal bugfixes and performance issues
This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to `diffusers 0.13.0` and using the `compel` library for prompt parsing. See [Detailed Change Log](#full-change-log) for a detailed list of bugs caught and squished.
#### Summary of InvokeAI command line scripts (all accessible via the launcher menu)
| Command | Description |
|--------------------------|---------------------------------------------------------------------|
| `invokeai` | Command line interface |
| `invokeai --web` | Web interface |
| `invokeai-model-install` | Model installer with console forms-based front end |
| `invokeai-ti --gui` | Textual inversion, with a console forms-based front end |
| `invokeai-merge --gui` | Model merging, with a console forms-based front end |
| `invokeai-configure` | Startup configuration; can also be used to reinstall support models |
| `invokeai-update` | InvokeAI software updater |
### v2.3.0 <small>(9 February 2023)</small>
#### Migration to Stable Diffusion `diffusers` models
Previous versions of InvokeAI supported the original model file format introduced with Stable Diffusion 1.4. In the original format, known variously as "checkpoint", or "legacy" format, there is a single large weights file ending with `.ckpt` or `.safetensors`. Though this format has served the community well, it has a number of disadvantages, including file size, slow loading times, and a variety of non-standard variants that require special-case code to handle. In addition, because checkpoint files are actually a bundle of multiple machine learning sub-models, it is hard to swap different sub-models in and out, or to share common sub-models. A new format, introduced by the StabilityAI company in collaboration with HuggingFace, is called `diffusers` and consists of a directory of individual models. The most immediate benefit of `diffusers` is that they load from disk very quickly. A longer term benefit is that in the near future `diffusers` models will be able to share common sub-models, dramatically reducing disk space when you have multiple fine-tune models derived from the same base.
When you perform a new install of version 2.3.0, you will be offered the option to install the `diffusers` versions of a number of popular SD models, including Stable Diffusion versions 1.5 and 2.1 (including the 768x768 pixel version of 2.1). These will act and work just like the checkpoint versions. Do not be concerned if you already have a lot of ".ckpt" or ".safetensors" models on disk! InvokeAI 2.3.0 can still load these and generate images from them without any extra intervention on your part.
To take advantage of the optimized loading times of `diffusers` models, InvokeAI offers options to convert legacy checkpoint models into optimized `diffusers` models. If you use the `invokeai` command line interface, the relevant commands are:
* `!convert_model` -- Take the path to a local checkpoint file or a URL that is pointing to one, convert it into a `diffusers` model, and import it into InvokeAI's models registry file.
* `!optimize_model` -- If you already have a checkpoint model in your InvokeAI models file, this command will accept its short name and convert it into a like-named `diffusers` model, optionally deleting the original checkpoint file.
* `!import_model` -- Take the local path of either a checkpoint file or a `diffusers` model directory and import it into InvokeAI's registry file. You may also provide the ID of any diffusers model that has been published on the [HuggingFace models repository](https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads) and it will be downloaded and installed automatically.
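For instance, a short CLI session exercising all three commands might look like this (the URL and model names are illustrative):
```bash
invoke> !import_model runwayml/stable-diffusion-v1-5
invoke> !convert_model https://example.org/sd_models/martians.safetensors
invoke> !optimize_model martians-v1.0
```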
The WebGUI offers similar functionality for model management.
For advanced users, new command-line options provide additional functionality. Launching `invokeai` with the argument `--autoconvert <path to directory>` takes the path to a directory of checkpoint files, automatically converts them into `diffusers` models and imports them. Each time the script is launched, the directory will be scanned for new checkpoint files to be loaded. Alternatively, the `--ckpt_convert` argument will cause any checkpoint or safetensors model that is already registered with InvokeAI to be converted into a `diffusers` model on the fly, allowing you to take advantage of future diffusers-only features without explicitly converting the model and saving it to disk.
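For example (the checkpoint directory is hypothetical):
```bash
# Scan a folder for new checkpoints at each launch and auto-convert them:
invokeai --autoconvert /home/fred/stable-diffusion-checkpoints

# Or convert already-registered checkpoint models to diffusers on the fly:
invokeai --ckpt_convert
```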
Please see [INSTALLING MODELS](https://invoke-ai.github.io/InvokeAI/installation/050_INSTALLING_MODELS/) for more information on model management in both the command-line and Web interfaces.
#### Support for the `XFormers` Memory-Efficient Crossattention Package
On CUDA (Nvidia) systems, version 2.3.0 supports the `XFormers` library. Once installed, the `xformers` package dramatically reduces the memory footprint of loaded Stable Diffusion model files and modestly increases image generation speed. `xformers` will be installed and activated automatically if you specify a CUDA system at install time.
The caveat with using `xformers` is that it introduces slightly non-deterministic behavior, and images generated using the same seed and other settings will be subtly different between invocations. Generally the changes are unnoticeable unless you rapidly shift back and forth between images, but to disable `xformers` and restore fully deterministic behavior, you may launch InvokeAI using the `--no-xformers` option. This is most conveniently done by opening the file `invokeai/invokeai.init` with a text editor, and adding the line `--no-xformers` at the bottom.
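For example, the tail of `invokeai/invokeai.init` might end up looking like this (the other switch shown is just an example of a line that may already be present):
```
--no-nsfw_checker
--no-xformers
```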
#### A Negative Prompt Box in the WebUI
There is now a separate text input box for negative prompts in the WebUI. This is convenient for stashing frequently-used negative prompts ("mangled limbs, bad anatomy"). The `[negative prompt]` syntax continues to work in the main prompt box as well.
To see exactly how your prompts are being parsed, launch `invokeai` with the `--log_tokenization` option. The console window will then display the tokenization process for both positive and negative prompts.
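A brief sketch of both features together (the prompt text is illustrative):
```bash
# Launch with tokenization logging enabled:
invokeai --log_tokenization

# At the CLI prompt, bracketed text in the main prompt acts as a negative prompt:
invoke> "portrait of a knight [mangled limbs, bad anatomy]"
```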
#### Model Merging
Version 2.3.0 offers an intuitive user interface for merging up to three Stable Diffusion models. Model merging allows you to mix the behavior of models to achieve very interesting effects. To use it, each of the models must already be imported into InvokeAI and saved in `diffusers` format. Launch the merger using a new menu item in the InvokeAI launcher script (`invoke.sh`, `invoke.bat`) or directly from the command line with `invokeai-merge --gui`. You will be prompted to select the models to merge, the proportions in which to mix them, and the mixing algorithm. The script will create a new merged `diffusers` model and import it into InvokeAI for your use.
See [MODEL MERGING](https://invoke-ai.github.io/InvokeAI/features/MODEL_MERGING/) for more details.
#### Textual Inversion Training
Textual Inversion (TI) is a technique for training a Stable Diffusion model to emit a particular subject or style when triggered by a keyword phrase. You can perform TI training by placing a small number of images of the subject or style in a directory, and choosing a distinctive trigger phrase, such as "pointillist-style". After successful training, the subject or style will be activated by including `<pointillist-style>` in your prompt.
Previous versions of InvokeAI were able to perform TI, but it required using a command-line script with dozens of obscure command-line arguments. Version 2.3.0 features an intuitive TI frontend that will build a TI model on top of any `diffusers` model. To access training you can launch from a new item in the launcher script or from the command line using `invokeai-ti --gui`.
See [TEXTUAL INVERSION](https://invoke-ai.github.io/InvokeAI/features/TEXTUAL_INVERSION/) for further details.
#### A New Installer Experience
The InvokeAI installer has been upgraded in order to provide a smoother and hopefully more glitch-free experience. In addition, InvokeAI is now packaged as a PyPi project, allowing developers and power-users to install InvokeAI with the command `pip install InvokeAI --use-pep517`. Please see [Installation](#installation) for details.
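For example, CUDA users can install the package together with the `xformers` extra and the matching PyTorch wheel index (this is the same command given in the installation documentation):
```bash
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
```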
Developers should be aware that the `pip` installation procedure has been simplified and that the `conda` method is no longer supported at all. Accordingly, the `environments_and_requirements` directory has been deleted from the repository.
#### Command-line name changes
All of InvokeAI's functionality, including the WebUI, command-line interface, textual inversion training and model merging, can all be accessed from the `invoke.sh` and `invoke.bat` launcher scripts. The menu of options has been expanded to add the new functionality. For the convenience of developers and power users, we have normalized the names of the InvokeAI command-line scripts:
* `invokeai` -- Command-line client
* `invokeai --web` -- Web GUI
* `invokeai-merge --gui` -- Model merging script with graphical front end
* `invokeai-ti --gui` -- Textual inversion script with graphical front end
* `invokeai-configure` -- Configuration tool for initializing the `invokeai` directory and selecting popular starter models.
For backward compatibility, the old command names are also recognized, including `invoke.py` and `configure-invokeai.py`. However, these are deprecated and will eventually be removed.
Developers should be aware that the scripts' source code has been relocated. The new locations are:
* `invokeai` => `ldm/invoke/CLI.py`
* `invokeai-configure` => `ldm/invoke/config/configure_invokeai.py`
* `invokeai-ti`=> `ldm/invoke/training/textual_inversion.py`
* `invokeai-merge` => `ldm/invoke/merge_diffusers`
Developers are strongly encouraged to perform an "editable" install of InvokeAI using `pip install -e . --use-pep517` in the Git repository, and then to call the scripts using their 2.3.0 names, rather than executing the scripts directly. Developers should also be aware that several important data files have been relocated into a new directory named `invokeai`. This includes the WebGUI's `frontend` and `backend` directories, and the `INITIAL_MODELS.yaml` files used by the installer to select starter models. Eventually all InvokeAI modules will be in subdirectories of `invokeai`.
Please see [2.3.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v2.3.0) for further details.
For older changelogs, please visit the
**[CHANGELOG](CHANGELOG/#v223-2-december-2022)**.
## :material-target: Troubleshooting
Please check out our
**[:material-frequently-asked-questions: Troubleshooting Guide](installation/010_INSTALL_AUTOMATED.md#troubleshooting)**
to get solutions for common installation problems and other issues.
## :octicons-repo-push-24: Contributing
@ -541,8 +265,8 @@ thank them for their time, hard work and effort.
For support, please use this repository's GitHub Issues tracking service. Feel
free to send me an email if you use and like the script.
Original portions of the software are Copyright (c) 2022-23 by
[The InvokeAI Team](https://github.com/invoke-ai).
## :octicons-book-24: Further Reading
View File
@ -89,7 +89,7 @@ experimental versions later.
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt install python3.10 python3-pip python3.10-venv
sudo apt install -y python3.10 python3-pip python3.10-venv
sudo update-alternatives --install /usr/local/bin/python python /usr/bin/python3.10 3
```
View File
@ -148,13 +148,13 @@ manager, please follow these steps:
=== "CUDA (NVidia)"
```bash
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
```
=== "ROCm (AMD)"
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.2
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
=== "CPU (Intel Macs & non-GPU systems)"
@ -216,7 +216,7 @@ manager, please follow these steps:
9. Run the command-line- or the web- interface:
From within INVOKEAI_ROOT, activate the environment
(with `source .venv/bin/activate` or `.venv\scripts\activate), and then run
(with `source .venv/bin/activate` or `.venv\scripts\activate`), and then run
the script `invokeai`. If the virtual environment you selected is NOT inside
INVOKEAI_ROOT, then you must specify the path to the root directory by adding
`--root_dir \path\to\invokeai` to the commands below:
@ -315,7 +315,7 @@ installation protocol (important!)
=== "ROCm (AMD)"
```bash
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.2
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
=== "CPU (Intel Macs & non-GPU systems)"
View File
@ -77,7 +77,7 @@ machine. To test, open up a terminal window and issue the following
command:
```
rocminfo
rocm-smi
```
If you get a table labeled "ROCm System Management Interface" the
@ -95,17 +95,9 @@ recent version of Ubuntu, 22.04. However, this [community-contributed
recipe](https://novaspirit.github.io/amdgpu-rocm-ubu22/) is reported
to work well.
After installation, please run `rocminfo` a second time to confirm
After installation, please run `rocm-smi` a second time to confirm
that the driver is present and the GPU is recognized. You may need to
do a reboot in order to load the driver. In addition, if you see
errors relating to your username not being a member of the `render`
group, you may fix this by adding yourself to this group with the command:
```
sudo usermod -a -G render myUserName
```
(Thanks to @EgoringKosmos for the usermod recipe.)
do a reboot in order to load the driver.
### Linux Install with a ROCm-docker Container
View File
@ -11,7 +11,7 @@ The model checkpoint files ('\*.ckpt') are the Stable Diffusion
captioned images gathered from multiple sources.
Originally there was only a single Stable Diffusion weights file,
which many people named `model.ckpt`. Now there are hundreds
which many people named `model.ckpt`. Now there are dozens or more
that have been fine-tuned to provide particular styles, genres, or
other features. In addition, there are several new formats that
improve on the original checkpoint format: a `.safetensors` format
@ -29,10 +29,9 @@ and performance are being made at a rapid pace. Among other features
is the ability to download and install a `diffusers` model just by
providing its HuggingFace repository ID.
While InvokeAI will continue to support legacy `.ckpt` and `.safetensors`
While InvokeAI will continue to support `.ckpt` and `.safetensors`
models for the near future, these are deprecated and support will
be withdrawn in version 3.0, after which all legacy models will be
converted into diffusers at the time they are loaded.
likely be withdrawn at some point in the not-too-distant future.
This manual will guide you through installing and configuring model
weight files and converting legacy `.ckpt` and `.safetensors` files
@ -51,7 +50,7 @@ subset that are currently installed are found in
|stable-diffusion-1.5|runwayml/stable-diffusion-v1-5|Stable Diffusion version 1.5 diffusers model (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-v1-5 |
|sd-inpainting-1.5|runwayml/stable-diffusion-inpainting|RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-inpainting |
|stable-diffusion-2.1|stabilityai/stable-diffusion-2-1|Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-1 |
|sd-inpainting-2.0|stabilityai/stable-diffusion-2-1|Stable Diffusion version 2.0 inpainting model (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-1 |
|sd-inpainting-2.0|stabilityai/stable-diffusion-2-inpainting|Stable Diffusion version 2.0 inpainting model (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-inpainting |
|analog-diffusion-1.0|wavymulder/Analog-Diffusion|An SD-1.5 model trained on diverse analog photographs (2.13 GB)|https://huggingface.co/wavymulder/Analog-Diffusion |
|deliberate-1.0|XpucT/Deliberate|Versatile model that produces detailed images up to 768px (4.27 GB)|https://huggingface.co/XpucT/Deliberate |
|d&d-diffusion-1.0|0xJustin/Dungeons-and-Diffusion|Dungeons & Dragons characters (2.13 GB)|https://huggingface.co/0xJustin/Dungeons-and-Diffusion |
@ -90,18 +89,15 @@ aware that CIVITAI hosts many models that generate NSFW content.
!!! note
InvokeAI 2.3.x does not support directly importing and
running Stable Diffusion version 2 checkpoint models. If you
try to import them, they will be automatically
converted into `diffusers` models on the fly. This adds about 20s
to loading time. To avoid this overhead, you are encouraged to
use one of the conversion methods described below to convert them
permanently.
running Stable Diffusion version 2 checkpoint models. You may instead
convert them into `diffusers` models using the conversion methods
described below.
## Installation
There are multiple ways to install and manage models:
1. The `invokeai-model-install` script which will download and install them for you.
1. The `invokeai-configure` script which will download and install them for you.
2. The command-line tool (CLI) has commands that allow you to import, configure and modify
model files.
@ -109,41 +105,14 @@ There are multiple ways to install and manage models:
3. The web interface (WebUI) has a GUI for importing and managing
models.
### Installation via `invokeai-model-install`
### Installation via `invokeai-configure`
From the `invoke` launcher, choose option (5) "Download and install
models." This will launch the same script that prompted you to select
models at install time. You can use this to add models that you
skipped the first time around. It is all right to specify a model that
was previously downloaded; the script will just confirm that the files
are complete.
This script allows you to load third-party models. Look for a large text
entry box labeled "IMPORT LOCAL AND REMOTE MODELS." In this box, you
can cut and paste one or more of any of the following:
1. A URL that points to a downloadable .ckpt or .safetensors file.
2. A file path pointing to a .ckpt or .safetensors file.
3. A diffusers model repo_id (from HuggingFace) in the format
"owner/repo_name".
4. A directory path pointing to a diffusers model directory.
5. A directory path pointing to a directory containing a bunch of
.ckpt and .safetensors files. All will be imported.
You can enter multiple items into the textbox, each one on a separate
line. You can paste into the textbox using ctrl-shift-V or by dragging
and dropping a file/directory from the desktop into the box.
The script also lets you designate a directory that will be scanned
for new model files each time InvokeAI starts up. These models will be
added automatically.
Lastly, the script gives you a checkbox option to convert legacy models
into diffusers, or to run the legacy model directly. If you choose to
convert, the original .ckpt/.safetensors file will **not** be deleted,
but a new diffusers directory will be created, using twice your disk
space. However, the diffusers version will load faster, and will be
compatible with InvokeAI 3.0.
From the `invoke` launcher, choose option (6) "re-run the configure
script to download new models." This will launch the same script that
prompted you to select models at install time. You can use this to add
models that you skipped the first time around. It is all right to
specify a model that was previously downloaded; the script will just
confirm that the files are complete.
### Installation via the CLI
@ -175,15 +144,19 @@ invoke> !import_model https://example.org/sd_models/martians.safetensors
For this to work, the URL must not be password-protected. Otherwise
you will receive a 404 error.
When you import a legacy model, the CLI will try to figure out what
type of model it is and select the correct load configuration file.
However, one thing it can't do is to distinguish between Stable
Diffusion 2.x models trained on 512x512 vs 768x768 images. In this
case, the CLI will pop up a menu of choices, asking you to select
which type of model it is. Please consult the model documentation to
identify the correct answer, as loading with the wrong configuration
will lead to black images. You can correct the model type after the
fact using the `!edit_model` command.
When you import a legacy model, the CLI will first ask you what type
of model this is. You can indicate whether it is a model based on
Stable Diffusion 1.x (1.4 or 1.5), one based on Stable Diffusion 2.x,
or a 1.x inpainting model. Be careful to indicate the correct model
type, or it will not load correctly. You can correct the model type
after the fact using the `!edit_model` command.
The system will then ask you a few other questions about the model,
including what size image it was trained on (usually 512x512), what
name and description you wish to use for it, and whether you would
like to install a custom VAE (variable autoencoder) file for the
model. For recent models, the answer to the VAE question is usually
"no," but it won't hurt to answer "yes".
After importing, the model will load. If this is successful, you will
be asked if you want to keep the model loaded in memory to start
@ -238,6 +211,109 @@ description for the model, whether to make this the default model that
is loaded at InvokeAI startup time, and whether to replace its
VAE. Generally the answer to the latter question is "no".
### Converting legacy models into `diffusers`
The CLI `!convert_model` command will convert a `.safetensors` or `.ckpt`
model file into `diffusers` and install it. This will enable the model
to load and run faster without loss of image quality.
The usage is identical to `!import_model`. You may point the command
to either a downloaded model file on disk, or to a (non-password
protected) URL:
```bash
invoke> !convert_model C:/Users/fred/Downloads/martians.safetensors
```
After a successful conversion, the CLI will offer you the option of
deleting the original `.ckpt` or `.safetensors` file.
### Optimizing a previously-installed model
Lastly, if you have previously installed a `.ckpt` or `.safetensors`
file and wish to convert it into a `diffusers` model, you can do this
without re-downloading and converting the original file using the
`!optimize_model` command. Simply pass the short name of an existing
installed model:
```bash
invoke> !optimize_model martians-v1.0
```
The model will be converted into `diffusers` format and replace the
previously installed version. You will again be offered the
opportunity to delete the original `.ckpt` or `.safetensors` file.
### Related CLI Commands
There are a whole series of additional model management commands in
the CLI that you can read about in [Command-Line
Interface](../features/CLI.md). These include:
* `!models` - List all installed models
* `!switch <model name>` - Switch to the indicated model
* `!edit_model <model name>` - Edit the indicated model to change its name, description or other properties
* `!del_model <model name>` - Delete the indicated model
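A typical management session might look like this (the model names are examples used elsewhere in this guide):
```bash
invoke> !models
invoke> !switch stable-diffusion-1.5
invoke> !edit_model arabian-nights-1.0
invoke> !del_model arabian-nights-1.0
```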
### Manually editing `configs/models.yaml`
If you are comfortable with a text editor then you may simply edit `models.yaml`
directly.
You will need to download the desired `.ckpt/.safetensors` file and
place it somewhere on your machine's filesystem. Alternatively, for a
`diffusers` model, record the repo_id or download the whole model
directory. Then using a **text** editor (e.g. the Windows Notepad
application), open the file `configs/models.yaml`, and add a new
stanza that follows this model:
#### A legacy model
A legacy `.ckpt` or `.safetensors` entry will look like this:
```yaml
arabian-nights-1.0:
description: A great fine-tune in Arabian Nights style
weights: ./path/to/arabian-nights-1.0.ckpt
config: ./configs/stable-diffusion/v1-inference.yaml
format: ckpt
width: 512
height: 512
default: false
```
Note that `format` is `ckpt` for both `.ckpt` and `.safetensors` files.
#### A diffusers model
A stanza for a `diffusers` model will look like this for a HuggingFace
model with a repository ID:
```yaml
arabian-nights-1.1:
description: An even better fine-tune of the Arabian Nights
repo_id: captahab/arabian-nights-1.1
format: diffusers
default: true
```
And for a downloaded directory:
```yaml
arabian-nights-1.1:
description: An even better fine-tune of the Arabian Nights
path: /path/to/captahab-arabian-nights-1.1
format: diffusers
default: true
```
There is additional syntax for indicating an external VAE to use with
this model. See `INITIAL_MODELS.yaml` and `models.yaml` for examples.
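As a hedged sketch, such a stanza might look like the following; the `vae:` key and the VAE path below are assumptions, so check `INITIAL_MODELS.yaml` and `models.yaml` for the authoritative syntax:
```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./path/to/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  format: ckpt
  width: 512
  height: 512
```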
After you save the modified `models.yaml` file, relaunch
`invokeai`. The new model will now be available for your use.
### Installation via the WebUI
To access the WebUI Model Manager, click on the button that looks like
@ -317,143 +393,3 @@ And here is what the same argument looks like in `invokeai.init`:
--no-nsfw_checker
--autoconvert /home/fred/stable-diffusion-checkpoints
```
### Specifying a configuration file for legacy checkpoints
Some checkpoint files come with instructions to use a specific .yaml
configuration file. For InvokeAI to load this file correctly, please put
the config file in the same directory as the corresponding `.ckpt` or
`.safetensors` file and make sure the file has the same basename as
the model file. Here is an example:
```bash
wonderful-model-v2.ckpt
wonderful-model-v2.yaml
```
This is not needed for `diffusers` models, which come with their own
pre-packaged configuration.
### Specifying a custom VAE file for legacy checkpoints
To associate a custom VAE with a legacy file, place the VAE file in
the same directory as the corresponding `.ckpt` or
`.safetensors` file and make sure the file has the same basename as
the model file. Use the suffix `.vae.pt` for VAE checkpoint files, and
`.vae.safetensors` for VAE safetensors files. There is no requirement
that both the model and the VAE follow the same format.
Example:
```bash
wonderful-model-v2.ckpt
wonderful-model-v2.vae.safetensors
```
As an alternative to the `!optimize_model` command described above,
you can use the WebUI's model manager to handle diffusers
optimization. Select the legacy model you wish to convert, and then
look for a button labeled "Convert to Diffusers" in the upper right of
the window.
View File
@ -24,7 +24,7 @@ You need to have opencv installed so that pypatchmatch can be built:
brew install opencv
```
The next time you start `invoke`, after sucesfully installing opencv, pypatchmatch will be built.
The next time you start `invoke`, after successfully installing opencv, pypatchmatch will be built.
## Linux
@ -56,7 +56,7 @@ Prior to installing PyPatchMatch, you need to take the following steps:
5. Confirm that pypatchmatch is installed. At the command-line prompt enter
`python`, and then at the `>>>` line type
`from patchmatch import patch_match`: It should look like the follwing:
`from patchmatch import patch_match`: It should look like the following:
```py
Python 3.9.5 (default, Nov 23 2021, 15:27:38)
@ -108,4 +108,4 @@ Prior to installing PyPatchMatch, you need to take the following steps:
[**Next, Follow Steps 4-6 from the Debian Section above**](#linux)
If you see no errors, then you're ready to go!
If you see no errors you're ready to go!
View File
@ -23,16 +23,14 @@ We thank them for all of their time and hard work.
* @damian0815 - Attention Systems and Gameplay Engineer
* @mauwii (Matthias Wild) - Continuous integration and product maintenance engineer
* @Netsvetaev (Artur Netsvetaev) - UI/UX Developer
* @tildebyte - General gadfly and resident (self-appointed) know-it-all
* @keturn - Lead for Diffusers port
* @ebr (Eugene Brodsky) - Cloud/DevOps/Sofware engineer; your friendly neighbourhood cluster-autoscaler
* @jpphoto (Jonathan Pollack) - Inference and rendering engine optimization
* @genomancer (Gregg Helt) - Model training and merging
* @gogurtenjoyer - User support and testing
* @whosawwhatsis - User support and testing
## **Contributions by**
- [tildebyte](https://github.com/tildebyte)
- [Sean McLellan](https://github.com/Oceanswave)
- [Kevin Gibbons](https://github.com/bakkot)
- [Tesseract Cat](https://github.com/TesseractCat)
@ -80,7 +78,6 @@ We thank them for all of their time and hard work.
- [psychedelicious](https://github.com/psychedelicious)
- [damian0815](https://github.com/damian0815)
- [Eugene Brodsky](https://github.com/ebr)
- [Statcomm](https://github.com/statcomm)
## **Original CompVis Authors**
View File
@ -0,0 +1,5 @@
mkdocs
mkdocs-material>=8, <9
mkdocs-git-revision-date-localized-plugin
mkdocs-redirects==1.2.0
View File
@ -11,10 +11,10 @@ if [[ -v "VIRTUAL_ENV" ]]; then
exit -1
fi
VERSION=$(cd ..; python -c "from ldm.invoke import __version__ as version; print(version)")
VERSION=$(cd ..; python -c "from invokeai.version import __version__ as version; print(version)")
PATCH=""
VERSION="v${VERSION}${PATCH}"
LATEST_TAG="v2.3-latest"
LATEST_TAG="v3.0-latest"
echo Building installer for version $VERSION
echo "Be certain that you're in the 'installer' directory before continuing."
View File
@ -144,8 +144,8 @@ class Installer:
from plumbum import FG, local
python = local[get_python_from_venv(venv_dir)]
python[ "-m", "pip", "install", "--upgrade", "pip"] & FG
pip = local[get_pip_from_venv(venv_dir)]
pip[ "install", "--upgrade", "pip"] & FG
return venv_dir
@ -241,18 +241,14 @@ class InvokeAiInstance:
from plumbum import FG, local
# Note that we're installing pinned versions of torch and
# torchvision here, which *should* correspond to what is
# in pyproject.toml. This is to prevent torch 2.0 from
# being installed and immediately uninstalled and replaced with 1.13
pip = local[self.pip]
(
pip[
"install",
"--require-virtualenv",
"torch~=1.13.1",
"torchvision~=0.14.1",
"torch~=2.0.0",
"torchvision>=0.14.1",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,
@ -295,7 +291,7 @@ class InvokeAiInstance:
src = Path(__file__).parents[1].expanduser().resolve()
# if the above directory contains one of these files, we'll do a source install
next(src.glob("pyproject.toml"))
next(src.glob("ldm"))
next(src.glob("invokeai"))
except StopIteration:
print("Unable to find a wheel or perform a source install. Giving up.")
@ -346,14 +342,14 @@ class InvokeAiInstance:
introduction()
from ldm.invoke.config import invokeai_configure
from invokeai.frontend.install import invokeai_configure
# NOTE: currently the config script does its own arg parsing! this means the command-line switches
# from the installer will also automatically propagate down to the config script.
# this may change in the future with config refactoring!
succeeded = False
try:
invokeai_configure.main()
invokeai_configure()
succeeded = True
except requests.exceptions.ConnectionError as e:
print(f'\nA network error was encountered during configuration and download: {str(e)}')
@ -383,9 +379,6 @@ class InvokeAiInstance:
shutil.copy(src, dest)
os.chmod(dest, 0o0755)
if OS == "Linux":
shutil.copy(Path(__file__).parents[1] / "templates" / "dialogrc", self.runtime / '.dialogrc')
def update(self):
pass
@ -412,22 +405,6 @@ def get_pip_from_venv(venv_path: Path) -> str:
return str(venv_path.expanduser().resolve() / pip)
def get_python_from_venv(venv_path: Path) -> str:
"""
Given a path to a virtual environment, get the absolute path to the `python` executable
in a cross-platform fashion. Does not validate that the python executable
actually exists in the virtualenv.
:param venv_path: Path to the virtual environment
:type venv_path: Path
:return: Absolute path to the python executable
:rtype: str
"""
python = "Scripts\python.exe" if OS == "Windows" else "bin/python"
return str(venv_path.expanduser().resolve() / python)
def set_sys_path(venv_path: Path) -> None:
"""
Given a path to a virtual environment, set the sys.path, in a cross-platform fashion,
@ -479,7 +456,7 @@ def get_torch_source() -> (Union[str, None],str):
optional_modules = None
if OS == "Linux":
if device == "rocm":
url = "https://download.pytorch.org/whl/rocm5.2"
url = "https://download.pytorch.org/whl/rocm5.4.2"
elif device == "cpu":
url = "https://download.pytorch.org/whl/cpu"
View File
@ -1,27 +0,0 @@
# Screen
use_shadow = OFF
use_colors = ON
screen_color = (BLACK, BLACK, ON)
# Box
dialog_color = (YELLOW, BLACK , ON)
title_color = (YELLOW, BLACK, ON)
border_color = (YELLOW, BLACK, OFF)
border2_color = (YELLOW, BLACK, OFF)
# Button
button_active_color = (RED, BLACK, OFF)
button_inactive_color = (YELLOW, BLACK, OFF)
button_label_active_color = (YELLOW,BLACK,ON)
button_label_inactive_color = (YELLOW,BLACK,ON)
# Menu box
menubox_color = (BLACK, BLACK, ON)
menubox_border_color = (YELLOW, BLACK, OFF)
menubox_border2_color = (YELLOW, BLACK, OFF)
# Menu window
item_color = (YELLOW, BLACK, OFF)
item_selected_color = (BLACK, YELLOW, OFF)
tag_key_color = (YELLOW, BLACK, OFF)
tag_key_selected_color = (BLACK, YELLOW, OFF)
View File
@ -7,42 +7,42 @@ call .venv\Scripts\activate.bat
set INVOKEAI_ROOT=.
:start
echo Do you want to generate images using the
echo 1. command-line interface
echo 2. browser-based UI
echo 3. run textual inversion training
echo 4. merge models (diffusers type only)
echo 5. download and install models
echo 6. change InvokeAI startup options
echo 7. re-run the configure script to fix a broken install
echo 8. open the developer console
echo 9. update InvokeAI
echo 10. command-line help
echo Q - quit
set /P restore="Please enter 1-10, Q: [2] "
if not defined restore set restore=2
IF /I "%restore%" == "1" (
echo Desired action:
echo 1. Generate images with the browser-based interface
echo 2. Explore InvokeAI nodes using a command-line interface
echo 3. Run textual inversion training
echo 4. Merge models (diffusers type only)
echo 5. Download and install models
echo 6. Change InvokeAI startup options
echo 7. Re-run the configure script to fix a broken install
echo 8. Open the developer console
echo 9. Update InvokeAI
echo 10. Command-line help
echo Q - Quit
set /P choice="Please enter 1-10, Q: [2] "
if not defined choice set choice=2
IF /I "%choice%" == "1" (
echo Starting the InvokeAI browser-based UI..
python .venv\Scripts\invokeai-web.exe %*
) ELSE IF /I "%choice%" == "2" (
echo Starting the InvokeAI command-line..
python .venv\Scripts\invokeai.exe %*
) ELSE IF /I "%restore%" == "2" (
echo Starting the InvokeAI browser-based UI..
python .venv\Scripts\invokeai.exe --web %*
) ELSE IF /I "%restore%" == "3" (
) ELSE IF /I "%choice%" == "3" (
echo Starting textual inversion training..
python .venv\Scripts\invokeai-ti.exe --gui
) ELSE IF /I "%restore%" == "4" (
) ELSE IF /I "%choice%" == "4" (
echo Starting model merging script..
python .venv\Scripts\invokeai-merge.exe --gui
) ELSE IF /I "%restore%" == "5" (
) ELSE IF /I "%choice%" == "5" (
echo Running invokeai-model-install...
python .venv\Scripts\invokeai-model-install.exe
) ELSE IF /I "%restore%" == "6" (
) ELSE IF /I "%choice%" == "6" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --skip-sd-weight --skip-support-models
) ELSE IF /I "%restore%" == "7" (
) ELSE IF /I "%choice%" == "7" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --yes --default_only
) ELSE IF /I "%restore%" == "8" (
) ELSE IF /I "%choice%" == "8" (
echo Developer Console
echo Python command is:
where python
@ -54,15 +54,15 @@ IF /I "%restore%" == "1" (
echo *************************
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) ELSE IF /I "%restore%" == "9" (
) ELSE IF /I "%choice%" == "9" (
echo Running invokeai-update...
python .venv\Scripts\invokeai-update.exe %*
) ELSE IF /I "%restore%" == "10" (
) ELSE IF /I "%choice%" == "10" (
echo Displaying command line help...
python .venv\Scripts\invokeai.exe --help %*
pause
exit /b
) ELSE IF /I "%restore%" == "q" (
) ELSE IF /I "%choice%" == "q" (
echo Goodbye!
goto ending
) ELSE (
View File
@ -52,11 +52,11 @@ do_choice() {
1)
clear
printf "Generate images with a browser-based interface\n"
invokeai --web $PARAMS
invokeai-web $PARAMS
;;
2)
clear
printf "Generate images using a command-line interface\n"
printf "Explore InvokeAI nodes using a command-line interface\n"
invokeai $PARAMS
;;
3)
@ -130,7 +130,7 @@ do_dialog() {
choice=$(dialog --clear \
--backtitle "\Zb\Zu\Z3InvokeAI" \
--colors \
--title "What would you like to run?" \
--title "What would you like to do?" \
--ok-label "Run" \
--cancel-label "Exit" \
--help-button \
@ -147,9 +147,9 @@ do_dialog() {
do_line_input() {
clear
printf " ** For a more attractive experience, please install the 'dialog' utility using your package manager. **\n\n"
printf "Do you want to generate images using the\n"
printf "1: Browser-based UI\n"
printf "2: Command-line interface\n"
printf "What would you like to do?\n"
printf "1: Generate images using the browser-based interface\n"
printf "2: Explore InvokeAI nodes using the command-line interface\n"
printf "3: Run textual inversion training\n"
printf "4: Merge models (diffusers type only)\n"
printf "5: Download and install models\n"
View File
@ -1,3 +1,11 @@
After version 2.3 is released, the ldm/invoke modules will be migrated to this location
so that we have a proper invokeai distribution. Currently it is only being used for
data files.

Organization of the source tree:

app -- Home of node invocations and services
assets -- Images and other data files used by InvokeAI
backend -- Non-user facing libraries, including the rendering core
configs -- Configuration files used at install and run times
frontend -- User-facing scripts, including the CLI and the WebUI
version -- Current InvokeAI version string, stored in version/invokeai_version.py

View File

@ -0,0 +1,145 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from logging import Logger
import os
from invokeai.app.services.board_image_record_storage import (
SqliteBoardImageRecordStorage,
)
from invokeai.app.services.board_images import (
BoardImagesService,
BoardImagesServiceDependencies,
)
from invokeai.app.services.board_record_storage import SqliteBoardRecordStorage
from invokeai.app.services.boards import BoardService, BoardServiceDependencies
from invokeai.app.services.image_record_storage import SqliteImageRecordStorage
from invokeai.app.services.images import ImageService, ImageServiceDependencies
from invokeai.app.services.metadata import CoreMetadataService
from invokeai.app.services.resource_name import SimpleNameService
from invokeai.app.services.urls import LocalUrlService
from invokeai.backend.util.logging import InvokeAILogger
from ..services.default_graphs import create_system_graphs
from ..services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
from ..services.restoration_services import RestorationServices
from ..services.graph import GraphExecutionState, LibraryGraph
from ..services.image_file_storage import DiskImageFileStorage
from ..services.invocation_queue import MemoryInvocationQueue
from ..services.invocation_services import InvocationServices
from ..services.invoker import Invoker
from ..services.processor import DefaultInvocationProcessor
from ..services.sqlite import SqliteItemStorage
from ..services.model_manager_service import ModelManagerService
from .events import FastAPIEventService
# TODO: is there a better way to achieve this?
def check_internet() -> bool:
"""
Return true if the internet is reachable.
It does this by attempting an HTTP request to huggingface.co.
"""
import urllib.request
host = "http://huggingface.co"
try:
urllib.request.urlopen(host, timeout=1)
return True
except:
return False
logger = InvokeAILogger.getLogger()
class ApiDependencies:
"""Contains and initializes all dependencies for the API"""
invoker: Invoker = None
@staticmethod
def initialize(config, event_handler_id: int, logger: Logger = logger):
logger.info(f"Internet connectivity is {config.internet_available}")
events = FastAPIEventService(event_handler_id)
output_folder = config.output_path
# TODO: build a file/path manager?
db_location = config.db_path
db_location.parent.mkdir(parents=True, exist_ok=True)
graph_execution_manager = SqliteItemStorage[GraphExecutionState](
filename=db_location, table_name="graph_executions"
)
urls = LocalUrlService()
metadata = CoreMetadataService()
image_record_storage = SqliteImageRecordStorage(db_location)
image_file_storage = DiskImageFileStorage(f"{output_folder}/images")
names = SimpleNameService()
latents = ForwardCacheLatentsStorage(
DiskLatentsStorage(f"{output_folder}/latents")
)
board_record_storage = SqliteBoardRecordStorage(db_location)
board_image_record_storage = SqliteBoardImageRecordStorage(db_location)
boards = BoardService(
services=BoardServiceDependencies(
board_image_record_storage=board_image_record_storage,
board_record_storage=board_record_storage,
image_record_storage=image_record_storage,
url=urls,
logger=logger,
)
)
board_images = BoardImagesService(
services=BoardImagesServiceDependencies(
board_image_record_storage=board_image_record_storage,
board_record_storage=board_record_storage,
image_record_storage=image_record_storage,
url=urls,
logger=logger,
)
)
images = ImageService(
services=ImageServiceDependencies(
board_image_record_storage=board_image_record_storage,
image_record_storage=image_record_storage,
image_file_storage=image_file_storage,
metadata=metadata,
url=urls,
logger=logger,
names=names,
graph_execution_manager=graph_execution_manager,
)
)
services = InvocationServices(
model_manager=ModelManagerService(config,logger),
events=events,
latents=latents,
images=images,
boards=boards,
board_images=board_images,
queue=MemoryInvocationQueue(),
graph_library=SqliteItemStorage[LibraryGraph](
filename=db_location, table_name="graphs"
),
graph_execution_manager=graph_execution_manager,
processor=DefaultInvocationProcessor(),
restoration=RestorationServices(config, logger),
configuration=config,
logger=logger,
)
create_system_graphs(services.graph_library)
ApiDependencies.invoker = Invoker(services)
@staticmethod
def shutdown():
if ApiDependencies.invoker:
ApiDependencies.invoker.stop()
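
For orientation, ApiDependencies is a one-shot container: initialize() wires every service into a single Invoker, and shutdown() tears it down. A minimal illustrative driver, not part of the commit, might look like the sketch below; the handler id is a placeholder, and initialize() has to run inside an event loop because the FastAPI event service it builds calls asyncio.create_task().

import asyncio

from invokeai.app.services.config import InvokeAIAppConfig

async def main() -> None:
    config = InvokeAIAppConfig.get_config()
    ApiDependencies.initialize(config=config, event_handler_id=1234)  # placeholder id
    try:
        services = ApiDependencies.invoker.services
        # every service wired above is now reachable from one place
        print(services.model_manager.list_models())
    finally:
        ApiDependencies.shutdown()

asyncio.run(main())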

View File

@ -0,0 +1,52 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
import asyncio
import threading
from queue import Empty, Queue
from typing import Any
from fastapi_events.dispatcher import dispatch
from ..services.events import EventServiceBase
class FastAPIEventService(EventServiceBase):
event_handler_id: int
__queue: Queue
__stop_event: threading.Event
def __init__(self, event_handler_id: int) -> None:
self.event_handler_id = event_handler_id
self.__queue = Queue()
self.__stop_event = threading.Event()
asyncio.create_task(self.__dispatch_from_queue(stop_event=self.__stop_event))
super().__init__()
def stop(self, *args, **kwargs):
self.__stop_event.set()
self.__queue.put(None)
def dispatch(self, event_name: str, payload: Any) -> None:
self.__queue.put(dict(event_name=event_name, payload=payload))
async def __dispatch_from_queue(self, stop_event: threading.Event):
"""Get events on from the queue and dispatch them, from the correct thread"""
while not stop_event.is_set():
try:
event = self.__queue.get(block=False)
if not event: # Probably stopping
continue
dispatch(
event.get("event_name"),
payload=event.get("payload"),
middleware_id=self.event_handler_id,
)
except Empty:
await asyncio.sleep(0.1)
pass
except asyncio.CancelledError as e:
raise e # Raise a proper error
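
A short usage sketch (illustrative, not part of the commit): dispatch() is safe to call from any thread because it only touches the queue, while __dispatch_from_queue drains the queue on the event loop and forwards each event to the handler registered under this middleware id, as api_app.py does later in this diff. The event name and payload here are made up.

import asyncio

async def demo() -> None:
    # must be constructed inside a running loop (create_task in __init__)
    events = FastAPIEventService(event_handler_id=1234)  # placeholder id
    events.dispatch("session_started", {"session_id": "abc123"})  # illustrative event
    await asyncio.sleep(0.2)  # give the drain loop a chance to forward it
    events.stop()

asyncio.run(demo())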

View File

@ -0,0 +1,69 @@
from fastapi import Body, HTTPException, Path, Query
from fastapi.routing import APIRouter
from invokeai.app.services.board_record_storage import BoardRecord, BoardChanges
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.board_record import BoardDTO
from invokeai.app.services.models.image_record import ImageDTO
from ..dependencies import ApiDependencies
board_images_router = APIRouter(prefix="/v1/board_images", tags=["boards"])
@board_images_router.post(
"/",
operation_id="create_board_image",
responses={
201: {"description": "The image was added to a board successfully"},
},
status_code=201,
)
async def create_board_image(
board_id: str = Body(description="The id of the board to add to"),
image_name: str = Body(description="The name of the image to add"),
):
"""Creates a board_image"""
try:
result = ApiDependencies.invoker.services.board_images.add_image_to_board(board_id=board_id, image_name=image_name)
return result
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to add to board")
@board_images_router.delete(
"/",
operation_id="remove_board_image",
responses={
201: {"description": "The image was removed from the board successfully"},
},
status_code=201,
)
async def remove_board_image(
board_id: str = Body(description="The id of the board"),
image_name: str = Body(description="The name of the image to remove"),
):
"""Deletes a board_image"""
try:
result = ApiDependencies.invoker.services.board_images.remove_image_from_board(board_id=board_id, image_name=image_name)
return result
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to update board")
@board_images_router.get(
"/{board_id}",
operation_id="list_board_images",
response_model=OffsetPaginatedResults[ImageDTO],
)
async def list_board_images(
board_id: str = Path(description="The id of the board"),
offset: int = Query(default=0, description="The page offset"),
limit: int = Query(default=10, description="The number of boards per page"),
) -> OffsetPaginatedResults[ImageDTO]:
"""Gets a list of images for a board"""
results = ApiDependencies.invoker.services.board_images.get_images_for_board(
board_id,
)
return results
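
For reference, these endpoints can be exercised with any HTTP client. A hypothetical example with requests follows; host, port, and ids are placeholders, and the router is assumed to be mounted under /api as done later in this diff.

import requests

BASE = "http://localhost:9090/api/v1/board_images"  # host/port are assumptions

# attach an image to a board (both ids are placeholders)
requests.post(f"{BASE}/", json={"board_id": "board-123", "image_name": "img-abc.png"})

# page through the board's images; the shape follows OffsetPaginatedResults
page = requests.get(f"{BASE}/board-123", params={"offset": 0, "limit": 10}).json()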

View File

@ -0,0 +1,108 @@
from typing import Optional, Union
from fastapi import Body, HTTPException, Path, Query
from fastapi.routing import APIRouter
from invokeai.app.services.board_record_storage import BoardChanges
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.board_record import BoardDTO
from ..dependencies import ApiDependencies
boards_router = APIRouter(prefix="/v1/boards", tags=["boards"])
@boards_router.post(
"/",
operation_id="create_board",
responses={
201: {"description": "The board was created successfully"},
},
status_code=201,
response_model=BoardDTO,
)
async def create_board(
board_name: str = Query(description="The name of the board to create"),
) -> BoardDTO:
"""Creates a board"""
try:
result = ApiDependencies.invoker.services.boards.create(board_name=board_name)
return result
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to create board")
@boards_router.get("/{board_id}", operation_id="get_board", response_model=BoardDTO)
async def get_board(
board_id: str = Path(description="The id of board to get"),
) -> BoardDTO:
"""Gets a board"""
try:
result = ApiDependencies.invoker.services.boards.get_dto(board_id=board_id)
return result
except Exception as e:
raise HTTPException(status_code=404, detail="Board not found")
@boards_router.patch(
"/{board_id}",
operation_id="update_board",
responses={
201: {
"description": "The board was updated successfully",
},
},
status_code=201,
response_model=BoardDTO,
)
async def update_board(
board_id: str = Path(description="The id of board to update"),
changes: BoardChanges = Body(description="The changes to apply to the board"),
) -> BoardDTO:
"""Updates a board"""
try:
result = ApiDependencies.invoker.services.boards.update(
board_id=board_id, changes=changes
)
return result
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to update board")
@boards_router.delete("/{board_id}", operation_id="delete_board")
async def delete_board(
board_id: str = Path(description="The id of board to delete"),
) -> None:
"""Deletes a board"""
try:
ApiDependencies.invoker.services.boards.delete(board_id=board_id)
except Exception as e:
# TODO: Does this need any exception handling at all?
pass
@boards_router.get(
"/",
operation_id="list_boards",
response_model=Union[OffsetPaginatedResults[BoardDTO], list[BoardDTO]],
)
async def list_boards(
all: Optional[bool] = Query(default=None, description="Whether to list all boards"),
offset: Optional[int] = Query(default=None, description="The page offset"),
limit: Optional[int] = Query(
default=None, description="The number of boards per page"
),
) -> Union[OffsetPaginatedResults[BoardDTO], list[BoardDTO]]:
"""Gets a list of boards"""
if all:
return ApiDependencies.invoker.services.boards.get_all()
elif offset is not None and limit is not None:
return ApiDependencies.invoker.services.boards.get_many(
offset,
limit,
)
else:
raise HTTPException(
status_code=400,
detail="Invalid request: Must provide either 'all' or both 'offset' and 'limit'",
)
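
The listing contract above is worth noting: callers must pass either all=true or both offset and limit; anything else is a 400. A hypothetical client session (placeholder host and port):

import requests

BASE = "http://localhost:9090/api/v1/boards"  # host/port are assumptions

board = requests.post(f"{BASE}/", params={"board_name": "portraits"}).json()

everything = requests.get(f"{BASE}/", params={"all": "true"}).json()       # list all
page = requests.get(f"{BASE}/", params={"offset": 0, "limit": 10}).json()  # or page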

View File

@ -0,0 +1,241 @@
import io
from typing import Optional
from fastapi import Body, HTTPException, Path, Query, Request, Response, UploadFile
from fastapi.routing import APIRouter
from fastapi.responses import FileResponse
from PIL import Image
from invokeai.app.models.image import (
ImageCategory,
ResourceOrigin,
)
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.image_record import (
ImageDTO,
ImageRecordChanges,
ImageUrlsDTO,
)
from invokeai.app.services.item_storage import PaginatedResults
from ..dependencies import ApiDependencies
images_router = APIRouter(prefix="/v1/images", tags=["images"])
@images_router.post(
"/",
operation_id="upload_image",
responses={
201: {"description": "The image was uploaded successfully"},
415: {"description": "Image upload failed"},
},
status_code=201,
response_model=ImageDTO,
)
async def upload_image(
file: UploadFile,
request: Request,
response: Response,
image_category: ImageCategory = Query(description="The category of the image"),
is_intermediate: bool = Query(description="Whether this is an intermediate image"),
session_id: Optional[str] = Query(
default=None, description="The session ID associated with this upload, if any"
),
) -> ImageDTO:
"""Uploads an image"""
if not file.content_type.startswith("image"):
raise HTTPException(status_code=415, detail="Not an image")
contents = await file.read()
try:
pil_image = Image.open(io.BytesIO(contents))
except:
# Error opening the image
raise HTTPException(status_code=415, detail="Failed to read image")
try:
image_dto = ApiDependencies.invoker.services.images.create(
image=pil_image,
image_origin=ResourceOrigin.EXTERNAL,
image_category=image_category,
session_id=session_id,
is_intermediate=is_intermediate,
)
response.status_code = 201
response.headers["Location"] = image_dto.image_url
return image_dto
except Exception as e:
raise HTTPException(status_code=500, detail="Failed to create image")
@images_router.delete("/{image_name}", operation_id="delete_image")
async def delete_image(
image_name: str = Path(description="The name of the image to delete"),
) -> None:
"""Deletes an image"""
try:
ApiDependencies.invoker.services.images.delete(image_name)
except Exception as e:
# TODO: Does this need any exception handling at all?
pass
@images_router.patch(
"/{image_name}",
operation_id="update_image",
response_model=ImageDTO,
)
async def update_image(
image_name: str = Path(description="The name of the image to update"),
image_changes: ImageRecordChanges = Body(
description="The changes to apply to the image"
),
) -> ImageDTO:
"""Updates an image"""
try:
return ApiDependencies.invoker.services.images.update(image_name, image_changes)
except Exception as e:
raise HTTPException(status_code=400, detail="Failed to update image")
@images_router.get(
"/{image_name}/metadata",
operation_id="get_image_metadata",
response_model=ImageDTO,
)
async def get_image_metadata(
image_name: str = Path(description="The name of image to get"),
) -> ImageDTO:
"""Gets an image's metadata"""
try:
return ApiDependencies.invoker.services.images.get_dto(image_name)
except Exception as e:
raise HTTPException(status_code=404)
@images_router.get(
"/{image_name}",
operation_id="get_image_full",
response_class=Response,
responses={
200: {
"description": "Return the full-resolution image",
"content": {"image/png": {}},
},
404: {"description": "Image not found"},
},
)
async def get_image_full(
image_name: str = Path(description="The name of full-resolution image file to get"),
) -> FileResponse:
"""Gets a full-resolution image file"""
try:
path = ApiDependencies.invoker.services.images.get_path(image_name)
if not ApiDependencies.invoker.services.images.validate_path(path):
raise HTTPException(status_code=404)
return FileResponse(
path,
media_type="image/png",
filename=image_name,
content_disposition_type="inline",
)
except Exception as e:
raise HTTPException(status_code=404)
@images_router.get(
"/{image_name}/thumbnail",
operation_id="get_image_thumbnail",
response_class=Response,
responses={
200: {
"description": "Return the image thumbnail",
"content": {"image/webp": {}},
},
404: {"description": "Image not found"},
},
)
async def get_image_thumbnail(
image_name: str = Path(description="The name of thumbnail image file to get"),
) -> FileResponse:
"""Gets a thumbnail image file"""
try:
path = ApiDependencies.invoker.services.images.get_path(
image_name, thumbnail=True
)
if not ApiDependencies.invoker.services.images.validate_path(path):
raise HTTPException(status_code=404)
return FileResponse(
path, media_type="image/webp", content_disposition_type="inline"
)
except Exception as e:
raise HTTPException(status_code=404)
@images_router.get(
"/{image_name}/urls",
operation_id="get_image_urls",
response_model=ImageUrlsDTO,
)
async def get_image_urls(
image_name: str = Path(description="The name of the image whose URL to get"),
) -> ImageUrlsDTO:
"""Gets an image and thumbnail URL"""
try:
image_url = ApiDependencies.invoker.services.images.get_url(image_name)
thumbnail_url = ApiDependencies.invoker.services.images.get_url(
image_name, thumbnail=True
)
return ImageUrlsDTO(
image_name=image_name,
image_url=image_url,
thumbnail_url=thumbnail_url,
)
except Exception as e:
raise HTTPException(status_code=404)
@images_router.get(
"/",
operation_id="list_images_with_metadata",
response_model=OffsetPaginatedResults[ImageDTO],
)
async def list_images_with_metadata(
image_origin: Optional[ResourceOrigin] = Query(
default=None, description="The origin of images to list"
),
categories: Optional[list[ImageCategory]] = Query(
default=None, description="The categories of image to include"
),
is_intermediate: Optional[bool] = Query(
default=None, description="Whether to list intermediate images"
),
board_id: Optional[str] = Query(
default=None, description="The board id to filter by"
),
offset: int = Query(default=0, description="The page offset"),
limit: int = Query(default=10, description="The number of images per page"),
) -> OffsetPaginatedResults[ImageDTO]:
"""Gets a list of images"""
image_dtos = ApiDependencies.invoker.services.images.get_many(
offset,
limit,
image_origin,
categories,
is_intermediate,
board_id,
)
return image_dtos
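
A hypothetical upload against this router (placeholder host, port, and filename; the category value is illustrative): the content type must start with "image", and the returned ImageDTO carries the URL that the 201 Location header also points at.

import requests

BASE = "http://localhost:9090/api/v1/images"  # host/port are assumptions

with open("photo.png", "rb") as f:
    dto = requests.post(
        f"{BASE}/",
        params={"image_category": "user", "is_intermediate": "false"},
        files={"file": ("photo.png", f, "image/png")},
    ).json()

print(dto["image_url"])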

View File

@ -0,0 +1,261 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) and 2023 Kent Keirsey (https://github.com/hipsterusername)
from typing import Annotated, Literal, Optional, Union, Dict
from fastapi import HTTPException, Query, Response
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field, parse_obj_as
from ..dependencies import ApiDependencies
from invokeai.backend import BaseModelType, ModelType
from invokeai.backend.model_management.models import OPENAPI_MODEL_CONFIGS
MODEL_CONFIGS = Union[tuple(OPENAPI_MODEL_CONFIGS)]
models_router = APIRouter(prefix="/v1/models", tags=["models"])
class VaeRepo(BaseModel):
repo_id: str = Field(description="The repo ID to use for this VAE")
path: Optional[str] = Field(description="The path to the VAE")
subfolder: Optional[str] = Field(description="The subfolder to use for this VAE")
class ModelInfo(BaseModel):
description: Optional[str] = Field(description="A description of the model")
model_name: str = Field(description="The name of the model")
model_type: str = Field(description="The type of the model")
class DiffusersModelInfo(ModelInfo):
format: Literal['folder'] = 'folder'
vae: Optional[VaeRepo] = Field(description="The VAE repo to use for this model")
repo_id: Optional[str] = Field(description="The repo ID to use for this model")
path: Optional[str] = Field(description="The path to the model")
class CkptModelInfo(ModelInfo):
format: Literal['ckpt'] = 'ckpt'
config: str = Field(description="The path to the model config")
weights: str = Field(description="The path to the model weights")
vae: str = Field(description="The path to the model VAE")
width: Optional[int] = Field(description="The width of the model")
height: Optional[int] = Field(description="The height of the model")
class SafetensorsModelInfo(CkptModelInfo):
format: Literal['safetensors'] = 'safetensors'
class CreateModelRequest(BaseModel):
name: str = Field(description="The name of the model")
info: Union[CkptModelInfo, DiffusersModelInfo] = Field(discriminator="format", description="The model info")
class CreateModelResponse(BaseModel):
name: str = Field(description="The name of the new model")
info: Union[CkptModelInfo, DiffusersModelInfo] = Field(discriminator="format", description="The model info")
status: str = Field(description="The status of the API response")
class ConversionRequest(BaseModel):
name: str = Field(description="The name of the new model")
info: CkptModelInfo = Field(description="The converted model info")
save_location: str = Field(description="The path to save the converted model weights")
class ConvertedModelResponse(BaseModel):
name: str = Field(description="The name of the new model")
info: DiffusersModelInfo = Field(description="The converted model info")
class ModelsList(BaseModel):
models: list[MODEL_CONFIGS]
@models_router.get(
"/",
operation_id="list_models",
responses={200: {"model": ModelsList }},
)
async def list_models(
base_model: Optional[BaseModelType] = Query(
default=None, description="Base model"
),
model_type: Optional[ModelType] = Query(
default=None, description="The type of model to get"
),
) -> ModelsList:
"""Gets a list of models"""
models_raw = ApiDependencies.invoker.services.model_manager.list_models(base_model, model_type)
models = parse_obj_as(ModelsList, { "models": models_raw })
return models
@models_router.post(
"/",
operation_id="update_model",
responses={200: {"status": "success"}},
)
async def update_model(
model_request: CreateModelRequest
) -> CreateModelResponse:
""" Add Model """
model_request_info = model_request.info
info_dict = model_request_info.dict()
model_response = CreateModelResponse(name=model_request.name, info=model_request.info, status="success")
ApiDependencies.invoker.services.model_manager.add_model(
model_name=model_request.name,
model_attributes=info_dict,
clobber=True,
)
return model_response
@models_router.delete(
"/{model_name}",
operation_id="del_model",
responses={
204: {
"description": "Model deleted successfully"
},
404: {
"description": "Model not found"
}
},
)
async def delete_model(model_name: str) -> None:
"""Delete Model"""
model_names = ApiDependencies.invoker.services.model_manager.model_names()
logger = ApiDependencies.invoker.services.logger
model_exists = model_name in model_names
# check if model exists
logger.info(f"Checking for model {model_name}...")
if model_exists:
logger.info(f"Deleting Model: {model_name}")
ApiDependencies.invoker.services.model_manager.del_model(model_name, delete_files=True)
logger.info(f"Model Deleted: {model_name}")
# a 204 reply must not carry a body, so return an empty Response rather than raising
return Response(status_code=204)
else:
logger.error("Model not found")
raise HTTPException(status_code=404, detail=f"Model '{model_name}' not found")
# @socketio.on("convertToDiffusers")
# def convert_to_diffusers(model_to_convert: dict):
# try:
# if model_info := self.generate.model_manager.model_info(
# model_name=model_to_convert["model_name"]
# ):
# if "weights" in model_info:
# ckpt_path = Path(model_info["weights"])
# original_config_file = Path(model_info["config"])
# model_name = model_to_convert["model_name"]
# model_description = model_info["description"]
# else:
# self.socketio.emit(
# "error", {"message": "Model is not a valid checkpoint file"}
# )
# else:
# self.socketio.emit(
# "error", {"message": "Could not retrieve model info."}
# )
# if not ckpt_path.is_absolute():
# ckpt_path = Path(Globals.root, ckpt_path)
# if original_config_file and not original_config_file.is_absolute():
# original_config_file = Path(Globals.root, original_config_file)
# diffusers_path = Path(
# ckpt_path.parent.absolute(), f"{model_name}_diffusers"
# )
# if model_to_convert["save_location"] == "root":
# diffusers_path = Path(
# global_converted_ckpts_dir(), f"{model_name}_diffusers"
# )
# if (
# model_to_convert["save_location"] == "custom"
# and model_to_convert["custom_location"] is not None
# ):
# diffusers_path = Path(
# model_to_convert["custom_location"], f"{model_name}_diffusers"
# )
# if diffusers_path.exists():
# shutil.rmtree(diffusers_path)
# self.generate.model_manager.convert_and_import(
# ckpt_path,
# diffusers_path,
# model_name=model_name,
# model_description=model_description,
# vae=None,
# original_config_file=original_config_file,
# commit_to_conf=opt.conf,
# )
# new_model_list = self.generate.model_manager.list_models()
# socketio.emit(
# "modelConverted",
# {
# "new_model_name": model_name,
# "model_list": new_model_list,
# "update": True,
# },
# )
# print(f">> Model Converted: {model_name}")
# except Exception as e:
# self.handle_exceptions(e)
# @socketio.on("mergeDiffusersModels")
# def merge_diffusers_models(model_merge_info: dict):
# try:
# models_to_merge = model_merge_info["models_to_merge"]
# model_ids_or_paths = [
# self.generate.model_manager.model_name_or_path(x)
# for x in models_to_merge
# ]
# merged_pipe = merge_diffusion_models(
# model_ids_or_paths,
# model_merge_info["alpha"],
# model_merge_info["interp"],
# model_merge_info["force"],
# )
# dump_path = global_models_dir() / "merged_models"
# if model_merge_info["model_merge_save_path"] is not None:
# dump_path = Path(model_merge_info["model_merge_save_path"])
# os.makedirs(dump_path, exist_ok=True)
# dump_path = dump_path / model_merge_info["merged_model_name"]
# merged_pipe.save_pretrained(dump_path, safe_serialization=1)
# merged_model_config = dict(
# model_name=model_merge_info["merged_model_name"],
# description=f'Merge of models {", ".join(models_to_merge)}',
# commit_to_conf=opt.conf,
# )
# if vae := self.generate.model_manager.config[models_to_merge[0]].get(
# "vae", None
# ):
# print(f">> Using configured VAE assigned to {models_to_merge[0]}")
# merged_model_config.update(vae=vae)
# self.generate.model_manager.import_diffuser_model(
# dump_path, **merged_model_config
# )
# new_model_list = self.generate.model_manager.list_models()
# socketio.emit(
# "modelsMerged",
# {
# "merged_models": models_to_merge,
# "merged_model_name": model_merge_info["merged_model_name"],
# "model_list": new_model_list,
# "update": True,
# },
# )
# print(f">> Models Merged: {models_to_merge}")
# print(f">> New Model Added: {model_merge_info['merged_model_name']}")
# except Exception as e:
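
Setting aside the commented-out socket.io handlers kept above for reference, the live endpoints can be driven as below (hypothetical values; "sd-1" matches the BaseModelType enum example later in this diff):

import requests

BASE = "http://localhost:9090/api/v1/models"  # host/port are assumptions

# optionally filter the listing by base model and/or model type
models = requests.get(f"{BASE}/", params={"base_model": "sd-1"}).json()["models"]

# delete a model and its files; an unknown name produces the 404 above
requests.delete(f"{BASE}/some-model-name")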

View File

@ -0,0 +1,286 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Annotated, List, Optional, Union
from fastapi import Body, HTTPException, Path, Query, Response
from fastapi.routing import APIRouter
from pydantic.fields import Field
from ...invocations import *
from ...invocations.baseinvocation import BaseInvocation
from ...services.graph import (
Edge,
EdgeConnection,
Graph,
GraphExecutionState,
NodeAlreadyExecutedError,
)
from ...services.item_storage import PaginatedResults
from ..dependencies import ApiDependencies
session_router = APIRouter(prefix="/v1/sessions", tags=["sessions"])
@session_router.post(
"/",
operation_id="create_session",
responses={
200: {"model": GraphExecutionState},
400: {"description": "Invalid json"},
},
)
async def create_session(
graph: Optional[Graph] = Body(
default=None, description="The graph to initialize the session with"
)
) -> GraphExecutionState:
"""Creates a new session, optionally initializing it with an invocation graph"""
session = ApiDependencies.invoker.create_execution_state(graph)
return session
@session_router.get(
"/",
operation_id="list_sessions",
responses={200: {"model": PaginatedResults[GraphExecutionState]}},
)
async def list_sessions(
page: int = Query(default=0, description="The page of results to get"),
per_page: int = Query(default=10, description="The number of results per page"),
query: str = Query(default="", description="The query string to search for"),
) -> PaginatedResults[GraphExecutionState]:
"""Gets a list of sessions, optionally searching"""
if query == "":
result = ApiDependencies.invoker.services.graph_execution_manager.list(
page, per_page
)
else:
result = ApiDependencies.invoker.services.graph_execution_manager.search(
query, page, per_page
)
return result
@session_router.get(
"/{session_id}",
operation_id="get_session",
responses={
200: {"model": GraphExecutionState},
404: {"description": "Session not found"},
},
)
async def get_session(
session_id: str = Path(description="The id of the session to get"),
) -> GraphExecutionState:
"""Gets a session"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
else:
return session
@session_router.post(
"/{session_id}/nodes",
operation_id="add_node",
responses={
200: {"model": str},
400: {"description": "Invalid node or link"},
404: {"description": "Session not found"},
},
)
async def add_node(
session_id: str = Path(description="The id of the session"),
node: Annotated[
Union[BaseInvocation.get_invocations()], Field(discriminator="type") # type: ignore
] = Body(description="The node to add"),
) -> str:
"""Adds a node to the graph"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
try:
session.add_node(node)
ApiDependencies.invoker.services.graph_execution_manager.set(
session
) # TODO: can this be done automatically, or add node through an API?
return session.id
except NodeAlreadyExecutedError:
raise HTTPException(status_code=400)
except IndexError:
raise HTTPException(status_code=400)
@session_router.put(
"/{session_id}/nodes/{node_path}",
operation_id="update_node",
responses={
200: {"model": GraphExecutionState},
400: {"description": "Invalid node or link"},
404: {"description": "Session not found"},
},
)
async def update_node(
session_id: str = Path(description="The id of the session"),
node_path: str = Path(description="The path to the node in the graph"),
node: Annotated[
Union[BaseInvocation.get_invocations()], Field(discriminator="type") # type: ignore
] = Body(description="The new node"),
) -> GraphExecutionState:
"""Updates a node in the graph and removes all linked edges"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
try:
session.update_node(node_path, node)
ApiDependencies.invoker.services.graph_execution_manager.set(
session
) # TODO: can this be done automatically, or add node through an API?
return session
except NodeAlreadyExecutedError:
raise HTTPException(status_code=400)
except IndexError:
raise HTTPException(status_code=400)
@session_router.delete(
"/{session_id}/nodes/{node_path}",
operation_id="delete_node",
responses={
200: {"model": GraphExecutionState},
400: {"description": "Invalid node or link"},
404: {"description": "Session not found"},
},
)
async def delete_node(
session_id: str = Path(description="The id of the session"),
node_path: str = Path(description="The path to the node to delete"),
) -> GraphExecutionState:
"""Deletes a node in the graph and removes all linked edges"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
try:
session.delete_node(node_path)
ApiDependencies.invoker.services.graph_execution_manager.set(
session
) # TODO: can this be done automatically, or add node through an API?
return session
except NodeAlreadyExecutedError:
raise HTTPException(status_code=400)
except IndexError:
raise HTTPException(status_code=400)
@session_router.post(
"/{session_id}/edges",
operation_id="add_edge",
responses={
200: {"model": GraphExecutionState},
400: {"description": "Invalid node or link"},
404: {"description": "Session not found"},
},
)
async def add_edge(
session_id: str = Path(description="The id of the session"),
edge: Edge = Body(description="The edge to add"),
) -> GraphExecutionState:
"""Adds an edge to the graph"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
try:
session.add_edge(edge)
ApiDependencies.invoker.services.graph_execution_manager.set(
session
) # TODO: can this be done automatically, or add node through an API?
return session
except NodeAlreadyExecutedError:
raise HTTPException(status_code=400)
except IndexError:
raise HTTPException(status_code=400)
# TODO: the edge being in the path here is really ugly, find a better solution
@session_router.delete(
"/{session_id}/edges/{from_node_id}/{from_field}/{to_node_id}/{to_field}",
operation_id="delete_edge",
responses={
200: {"model": GraphExecutionState},
400: {"description": "Invalid node or link"},
404: {"description": "Session not found"},
},
)
async def delete_edge(
session_id: str = Path(description="The id of the session"),
from_node_id: str = Path(description="The id of the node the edge is coming from"),
from_field: str = Path(description="The field of the node the edge is coming from"),
to_node_id: str = Path(description="The id of the node the edge is going to"),
to_field: str = Path(description="The field of the node the edge is going to"),
) -> GraphExecutionState:
"""Deletes an edge from the graph"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
try:
edge = Edge(
source=EdgeConnection(node_id=from_node_id, field=from_field),
destination=EdgeConnection(node_id=to_node_id, field=to_field)
)
session.delete_edge(edge)
ApiDependencies.invoker.services.graph_execution_manager.set(
session
) # TODO: can this be done automatically, or add node through an API?
return session
except NodeAlreadyExecutedError:
raise HTTPException(status_code=400)
except IndexError:
raise HTTPException(status_code=400)
@session_router.put(
"/{session_id}/invoke",
operation_id="invoke_session",
responses={
200: {"model": None},
202: {"description": "The invocation is queued"},
400: {"description": "The session has no invocations ready to invoke"},
404: {"description": "Session not found"},
},
)
async def invoke_session(
session_id: str = Path(description="The id of the session to invoke"),
all: bool = Query(
default=False, description="Whether or not to invoke all remaining invocations"
),
) -> Response:
"""Invokes a session"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
if session.is_complete():
raise HTTPException(status_code=400)
ApiDependencies.invoker.invoke(session, invoke_all=all)
return Response(status_code=202)
@session_router.delete(
"/{session_id}/invoke",
operation_id="cancel_session_invoke",
responses={
202: {"description": "The invocation is canceled"}
},
)
async def cancel_session_invoke(
session_id: str = Path(description="The id of the session to cancel"),
) -> Response:
"""Invokes a session"""
ApiDependencies.invoker.cancel(session_id)
return Response(status_code=202)
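
Putting the session endpoints together, a typical round trip looks roughly like this (illustrative only; node bodies are discriminated by their "type" field, and the node type shown is a placeholder):

import requests

BASE = "http://localhost:9090/api/v1/sessions"  # host/port are assumptions

session = requests.post(f"{BASE}/").json()  # new session with an empty graph
sid = session["id"]

# the body must match some invocation's pydantic model; this type is a placeholder
requests.post(f"{BASE}/{sid}/nodes", json={"id": "n1", "type": "some_invocation"})

requests.put(f"{BASE}/{sid}/invoke", params={"all": "true"})  # server answers 202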

View File

@ -0,0 +1,38 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from fastapi import FastAPI
from fastapi_events.handlers.local import local_handler
from fastapi_events.typing import Event
from fastapi_socketio import SocketManager
from ..services.events import EventServiceBase
class SocketIO:
__sio: SocketManager
def __init__(self, app: FastAPI):
self.__sio = SocketManager(app=app)
self.__sio.on("subscribe", handler=self._handle_sub)
self.__sio.on("unsubscribe", handler=self._handle_unsub)
local_handler.register(
event_name=EventServiceBase.session_event, _func=self._handle_session_event
)
async def _handle_session_event(self, event: Event):
await self.__sio.emit(
event=event[1]["event"],
data=event[1]["data"],
room=event[1]["data"]["graph_execution_state_id"],
)
async def _handle_sub(self, sid, data, *args, **kwargs):
if "session" in data:
self.__sio.enter_room(sid, data["session"])
# @app.sio.on('unsubscribe')
async def _handle_unsub(self, sid, data, *args, **kwargs):
if "session" in data:
self.__sio.leave_room(sid, data["session"])

invokeai/app/api_app.py (new file, 182 lines)
View File

@ -0,0 +1,182 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
import asyncio
from inspect import signature
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi
from fastapi.staticfiles import StaticFiles
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
from pathlib import Path
from pydantic.schema import schema
# This should come early so that modules can log their initialization properly
from .services.config import InvokeAIAppConfig
from ..backend.util.logging import InvokeAILogger
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()
logger = InvokeAILogger.getLogger(config=app_config)
import invokeai.frontend.web as web_dir
from .api.dependencies import ApiDependencies
from .api.routers import sessions, models, images, boards, board_images
from .api.sockets import SocketIO
from .invocations.baseinvocation import BaseInvocation
# Create the app
# TODO: create this all in a method so configuration/etc. can be passed in?
app = FastAPI(title="Invoke AI", docs_url=None, redoc_url=None)
# Add event handler
event_handler_id: int = id(app)
app.add_middleware(
EventHandlerASGIMiddleware,
handlers=[
local_handler
], # TODO: consider doing this in services to support different configurations
middleware_id=event_handler_id,
)
socket_io = SocketIO(app)
# Add startup event to load dependencies
@app.on_event("startup")
async def startup_event():
app.add_middleware(
CORSMiddleware,
allow_origins=app_config.allow_origins,
allow_credentials=app_config.allow_credentials,
allow_methods=app_config.allow_methods,
allow_headers=app_config.allow_headers,
)
ApiDependencies.initialize(
config=app_config, event_handler_id=event_handler_id, logger=logger
)
# Shut down threads
@app.on_event("shutdown")
async def shutdown_event():
ApiDependencies.shutdown()
# Include all routers
# TODO: REMOVE
# app.include_router(
# invocation.invocation_router,
# prefix = '/api')
app.include_router(sessions.session_router, prefix="/api")
app.include_router(models.models_router, prefix="/api")
app.include_router(images.images_router, prefix="/api")
app.include_router(boards.boards_router, prefix="/api")
app.include_router(board_images.board_images_router, prefix="/api")
# Build a custom OpenAPI to include all outputs
# TODO: can outputs be included on metadata of invocation schemas somehow?
def custom_openapi():
if app.openapi_schema:
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
description="An API for invoking AI image operations",
version="1.0.0",
routes=app.routes,
)
# Add all outputs
all_invocations = BaseInvocation.get_invocations()
output_types = set()
output_type_titles = dict()
for invoker in all_invocations:
output_type = signature(invoker.invoke).return_annotation
output_types.add(output_type)
output_schemas = schema(output_types, ref_prefix="#/components/schemas/")
for schema_key, output_schema in output_schemas["definitions"].items():
openapi_schema["components"]["schemas"][schema_key] = output_schema
# TODO: note that we assume the schema_key here is the TYPE.__name__
# This could break in some cases, figure out a better way to do it
output_type_titles[schema_key] = output_schema["title"]
# Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations:
invoker_name = invoker.__name__
output_type = signature(invoker.invoke).return_annotation
output_type_title = output_type_titles[output_type.__name__]
invoker_schema = openapi_schema["components"]["schemas"][invoker_name]
outputs_ref = {"$ref": f"#/components/schemas/{output_type_title}"}
invoker_schema["output"] = outputs_ref
from invokeai.backend.model_management.models import get_model_config_enums
for model_config_format_enum in set(get_model_config_enums()):
name = model_config_format_enum.__qualname__
if name in openapi_schema["components"]["schemas"]:
# print(f"Config with name {name} already defined")
continue
# "BaseModelType":{"title":"BaseModelType","description":"An enumeration.","enum":["sd-1","sd-2"],"type":"string"}
openapi_schema["components"]["schemas"][name] = dict(
title=name,
description="An enumeration.",
type="string",
enum=list(v.value for v in model_config_format_enum),
)
app.openapi_schema = openapi_schema
return app.openapi_schema
app.openapi = custom_openapi
# Override API doc favicons
app.mount("/static", StaticFiles(directory=Path(web_dir.__path__[0], 'static/dream_web')), name="static")
@app.get("/docs", include_in_schema=False)
def overridden_swagger():
return get_swagger_ui_html(
openapi_url=app.openapi_url,
title=app.title,
swagger_favicon_url="/static/favicon.ico",
)
@app.get("/redoc", include_in_schema=False)
def overridden_redoc():
return get_redoc_html(
openapi_url=app.openapi_url,
title=app.title,
redoc_favicon_url="/static/favicon.ico",
)
# Must mount *after* the other routes, otherwise it breaks them
app.mount("/",
StaticFiles(directory=Path(web_dir.__path__[0],"dist"),
html=True
), name="ui"
)
def invoke_api():
# Start our own event loop for eventing usage
loop = asyncio.new_event_loop()
config = uvicorn.Config(app=app, host=app_config.host, port=app_config.port, loop=loop)
# Use access_log to turn off logging
server = uvicorn.Server(config)
loop.run_until_complete(server.serve())
if __name__ == "__main__":
invoke_api()
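
The effect of custom_openapi() is easiest to see by fetching the generated spec from a running server: invocation schemas gain an "output" reference and the model-config enums appear as components. Host and port below are assumptions.

import requests

spec = requests.get("http://localhost:9090/openapi.json").json()
schemas = spec["components"]["schemas"]
print(schemas["BaseModelType"])  # enum injected by custom_openapi()
print([name for name in schemas if name.endswith("Output")][:5])  # attached output types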

View File

@ -0,0 +1,303 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
from abc import ABC, abstractmethod
import argparse
from typing import Any, Callable, Iterable, Literal, Union, get_args, get_origin, get_type_hints
from pydantic import BaseModel, Field
import networkx as nx
import matplotlib.pyplot as plt
import invokeai.backend.util.logging as logger
from ..invocations.baseinvocation import BaseInvocation
from ..invocations.image import ImageField
from ..services.graph import GraphExecutionState, LibraryGraph, Edge
from ..services.invoker import Invoker
def add_field_argument(command_parser, name: str, field, default_override = None):
default = default_override if default_override is not None else field.default if field.default_factory is None else field.default_factory()
if get_origin(field.type_) == Literal:
allowed_values = get_args(field.type_)
allowed_types = set()
for val in allowed_values:
allowed_types.add(type(val))
allowed_types_list = list(allowed_types)
field_type = allowed_types_list[0] if len(allowed_types) == 1 else Union[allowed_types_list] # type: ignore
command_parser.add_argument(
f"--{name}",
dest=name,
type=field_type,
default=default,
choices=allowed_values,
help=field.field_info.description,
)
else:
command_parser.add_argument(
f"--{name}",
dest=name,
type=field.type_,
default=default,
help=field.field_info.description,
)
def add_parsers(
subparsers,
commands: list[type],
command_field: str = "type",
exclude_fields: list[str] = ["id", "type"],
add_arguments: Callable[[argparse.ArgumentParser], None]|None = None
):
"""Adds parsers for each command to the subparsers"""
# Create subparsers for each command
for command in commands:
hints = get_type_hints(command)
cmd_name = get_args(hints[command_field])[0]
command_parser = subparsers.add_parser(cmd_name, help=command.__doc__)
if add_arguments is not None:
add_arguments(command_parser)
# Convert all fields to arguments
fields = command.__fields__ # type: ignore
for name, field in fields.items():
if name in exclude_fields:
continue
add_field_argument(command_parser, name, field)
def add_graph_parsers(
subparsers,
graphs: list[LibraryGraph],
add_arguments: Callable[[argparse.ArgumentParser], None]|None = None
):
for graph in graphs:
command_parser = subparsers.add_parser(graph.name, help=graph.description)
if add_arguments is not None:
add_arguments(command_parser)
# Add arguments for inputs
for exposed_input in graph.exposed_inputs:
node = graph.graph.get_node(exposed_input.node_path)
field = node.__fields__[exposed_input.field]
default_override = getattr(node, exposed_input.field)
add_field_argument(command_parser, exposed_input.alias, field, default_override)
class CliContext:
invoker: Invoker
session: GraphExecutionState
parser: argparse.ArgumentParser
defaults: dict[str, Any]
graph_nodes: dict[str, str]
nodes_added: list[str]
def __init__(self, invoker: Invoker, session: GraphExecutionState, parser: argparse.ArgumentParser):
self.invoker = invoker
self.session = session
self.parser = parser
self.defaults = dict()
self.graph_nodes = dict()
self.nodes_added = list()
def get_session(self):
self.session = self.invoker.services.graph_execution_manager.get(self.session.id)
return self.session
def reset(self):
self.session = self.invoker.create_execution_state()
self.graph_nodes = dict()
self.nodes_added = list()
# Leave defaults unchanged
def add_node(self, node: BaseInvocation):
self.get_session()
self.session.graph.add_node(node)
self.nodes_added.append(node.id)
self.invoker.services.graph_execution_manager.set(self.session)
def add_edge(self, edge: Edge):
self.get_session()
self.session.add_edge(edge)
self.invoker.services.graph_execution_manager.set(self.session)
class ExitCli(Exception):
"""Exception to exit the CLI"""
pass
class BaseCommand(ABC, BaseModel):
"""A CLI command"""
# All commands must include a type name like this:
# type: Literal['your_command_name'] = 'your_command_name'
@classmethod
def get_all_subclasses(cls):
subclasses = []
toprocess = [cls]
while len(toprocess) > 0:
next = toprocess.pop(0)
next_subclasses = next.__subclasses__()
subclasses.extend(next_subclasses)
toprocess.extend(next_subclasses)
return subclasses
@classmethod
def get_commands(cls):
return tuple(BaseCommand.get_all_subclasses())
@classmethod
def get_commands_map(cls):
# Get the type strings out of the literals and into a dictionary
return dict(map(lambda t: (get_args(get_type_hints(t)['type'])[0], t),BaseCommand.get_all_subclasses()))
@abstractmethod
def run(self, context: CliContext) -> None:
"""Run the command. Raise ExitCli to exit."""
pass
class ExitCommand(BaseCommand):
"""Exits the CLI"""
type: Literal['exit'] = 'exit'
def run(self, context: CliContext) -> None:
raise ExitCli()
class HelpCommand(BaseCommand):
"""Shows help"""
type: Literal['help'] = 'help'
def run(self, context: CliContext) -> None:
context.parser.print_help()
def get_graph_execution_history(
graph_execution_state: GraphExecutionState,
) -> Iterable[str]:
"""Gets the history of fully-executed invocations for a graph execution"""
return (
n
for n in reversed(graph_execution_state.executed_history)
if n in graph_execution_state.graph.nodes
)
def get_invocation_command(invocation) -> str:
fields = invocation.__fields__.items()
type_hints = get_type_hints(type(invocation))
command = [invocation.type]
for name, field in fields:
if name in ["id", "type"]:
continue
# TODO: add links
# Skip image fields when serializing command
type_hint = type_hints.get(name) or None
if type_hint is ImageField or ImageField in get_args(type_hint):
continue
field_value = getattr(invocation, name)
field_default = field.default
if field_value != field_default:
if type_hint is str or str in get_args(type_hint):
command.append(f'--{name} "{field_value}"')
else:
command.append(f"--{name} {field_value}")
return " ".join(command)
class HistoryCommand(BaseCommand):
"""Shows the invocation history"""
type: Literal['history'] = 'history'
# Inputs
# fmt: off
count: int = Field(default=5, gt=0, description="The number of history entries to show")
# fmt: on
def run(self, context: CliContext) -> None:
history = list(get_graph_execution_history(context.get_session()))
for i in range(min(self.count, len(history))):
entry_id = history[-1 - i]
entry = context.get_session().graph.get_node(entry_id)
logger.info(f"{entry_id}: {get_invocation_command(entry)}")
class SetDefaultCommand(BaseCommand):
"""Sets a default value for a field"""
type: Literal['default'] = 'default'
# Inputs
# fmt: off
field: str = Field(description="The field to set the default for")
value: str = Field(description="The value to set the default to, or None to clear the default")
# fmt: on
def run(self, context: CliContext) -> None:
if self.value is None:
if self.field in context.defaults:
del context.defaults[self.field]
else:
context.defaults[self.field] = self.value
class DrawGraphCommand(BaseCommand):
"""Debugs a graph"""
type: Literal['draw_graph'] = 'draw_graph'
def run(self, context: CliContext) -> None:
session: GraphExecutionState = context.invoker.services.graph_execution_manager.get(context.session.id)
nxgraph = session.graph.nx_graph_flat()
# Draw the networkx graph
plt.figure(figsize=(20, 20))
pos = nx.spectral_layout(nxgraph)
nx.draw_networkx_nodes(nxgraph, pos, node_size=1000)
nx.draw_networkx_edges(nxgraph, pos, width=2)
nx.draw_networkx_labels(nxgraph, pos, font_size=20, font_family="sans-serif")
plt.axis("off")
plt.show()
class DrawExecutionGraphCommand(BaseCommand):
"""Debugs an execution graph"""
type: Literal['draw_xgraph'] = 'draw_xgraph'
def run(self, context: CliContext) -> None:
session: GraphExecutionState = context.invoker.services.graph_execution_manager.get(context.session.id)
nxgraph = session.execution_graph.nx_graph_flat()
# Draw the networkx graph
plt.figure(figsize=(20, 20))
pos = nx.spectral_layout(nxgraph)
nx.draw_networkx_nodes(nxgraph, pos, node_size=1000)
nx.draw_networkx_edges(nxgraph, pos, width=2)
nx.draw_networkx_labels(nxgraph, pos, font_size=20, font_family="sans-serif")
plt.axis("off")
plt.show()
class SortedHelpFormatter(argparse.HelpFormatter):
def _iter_indented_subactions(self, action):
try:
get_subactions = action._get_subactions
except AttributeError:
pass
else:
self._indent()
if isinstance(action, argparse._SubParsersAction):
for subaction in sorted(get_subactions(), key=lambda x: x.dest):
yield subaction
else:
for subaction in get_subactions():
yield subaction
self._dedent()
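
Because BaseCommand discovers its subclasses reflectively through get_all_subclasses(), adding a CLI command is just a matter of declaring a pydantic model with a Literal type tag; the parser and completer pick it up automatically. A minimal hypothetical command:

class ClearDefaultsCommand(BaseCommand):
    """Clears all session defaults (hypothetical example command)"""
    type: Literal['clear_defaults'] = 'clear_defaults'

    def run(self, context: CliContext) -> None:
        context.defaults.clear()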

View File

@ -0,0 +1,169 @@
"""
Readline helper functions for cli_app.py
You may import the global singleton `completer` to get access to the
completer object.
"""
import atexit
import readline
import shlex
from pathlib import Path
from typing import List, Dict, Literal, get_args, get_type_hints, get_origin
import invokeai.backend.util.logging as logger
from ...backend import ModelManager
from ..invocations.baseinvocation import BaseInvocation
from .commands import BaseCommand
from ..services.invocation_services import InvocationServices
# singleton object, class variable
completer = None
class Completer(object):
def __init__(self, model_manager: ModelManager):
self.commands = self.get_commands()
self.matches = None
self.linebuffer = None
self.manager = model_manager
return
def complete(self, text, state):
"""
Complete commands and switches from the node CLI command line.
Switches are determined in a context-specific manner.
"""
buffer = readline.get_line_buffer()
if state == 0:
options = None
try:
current_command, current_switch = self.get_current_command(buffer)
options = self.get_command_options(current_command, current_switch)
except IndexError:
pass
options = options or list(self.parse_commands().keys())
if not text: # first time
self.matches = options
else:
self.matches = [s for s in options if s and s.startswith(text)]
try:
match = self.matches[state]
except IndexError:
match = None
return match
@classmethod
def get_commands(self)->List[object]:
"""
Return a list of all the client commands and invocations.
"""
return BaseCommand.get_commands() + BaseInvocation.get_invocations()
def get_current_command(self, buffer: str)->tuple[str, str]:
"""
Parse the readline buffer to find the most recent command and its switch.
"""
if len(buffer)==0:
return None, None
tokens = shlex.split(buffer)
command = None
switch = None
for t in tokens:
if t[0].isalpha():
if switch is None:
command = t
else:
switch = t
# don't try to autocomplete switches that are already complete
if switch and buffer.endswith(' '):
switch=None
return command or '', switch or ''
def parse_commands(self)->Dict[str, List[str]]:
"""
Return a dict in which the keys are the command name
and the values are the parameters the command takes.
"""
result = dict()
for command in self.commands:
hints = get_type_hints(command)
name = get_args(hints['type'])[0]
result.update({name:hints})
return result
def get_command_options(self, command: str, switch: str)->List[str]:
"""
Return all the parameters that can be passed to the command as
command-line switches. Returns None if the command is unrecognized.
"""
parsed_commands = self.parse_commands()
if command not in parsed_commands:
return None
# handle switches in the format "-foo=bar"
argument = None
if switch and '=' in switch:
switch, argument = switch.split('=')
parameter = switch.strip('-')
if parameter in parsed_commands[command]:
if argument is None:
return self.get_parameter_options(parameter, parsed_commands[command][parameter])
else:
return [f"--{parameter}={x}" for x in self.get_parameter_options(parameter, parsed_commands[command][parameter])]
else:
return [f"--{x}" for x in parsed_commands[command].keys()]
def get_parameter_options(self, parameter: str, typehint)->List[str]:
"""
Given a parameter type (such as Literal), offers autocompletions.
"""
if get_origin(typehint) == Literal:
return get_args(typehint)
if parameter == 'model':
return self.manager.model_names()
def _pre_input_hook(self):
if self.linebuffer:
readline.insert_text(self.linebuffer)
readline.redisplay()
self.linebuffer = None
def set_autocompleter(services: InvocationServices) -> Completer:
global completer
if completer:
return completer
completer = Completer(services.model_manager)
readline.set_completer(completer.complete)
# pyreadline3 does not have a set_auto_history() method
try:
readline.set_auto_history(True)
except:
pass
readline.set_pre_input_hook(completer._pre_input_hook)
readline.set_completer_delims(" ")
readline.parse_and_bind("tab: complete")
readline.parse_and_bind("set print-completions-horizontally off")
readline.parse_and_bind("set page-completions on")
readline.parse_and_bind("set skip-completed-text on")
readline.parse_and_bind("set show-all-if-ambiguous on")
histfile = Path(services.configuration.root_dir / ".invoke_history")
try:
readline.read_history_file(histfile)
readline.set_history_length(1000)
except FileNotFoundError:
pass
except OSError: # file likely corrupted
newname = f"{histfile}.old"
logger.error(
f"Your history file {histfile} couldn't be loaded and may be corrupted. Renaming it to {newname}"
)
histfile.replace(Path(newname))
atexit.register(readline.write_history_file, histfile)
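
Wiring this in is a single call once the services exist; afterwards a plain input() gains TAB completion and persistent history. A sketch (the prompt string is illustrative):

completer = set_autocompleter(services)  # services: an InvocationServices instance
line = input("invoke> ")  # TAB now completes commands, switches, and model names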

invokeai/app/cli_app.py (new file, 427 lines)
View File

@ -0,0 +1,427 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
import argparse
import os
import re
import shlex
import sys
import time
from typing import Union, get_type_hints
from pydantic import BaseModel, ValidationError
from pydantic.fields import Field
# This should come early so that the logger can pick up its configuration options
from .services.config import InvokeAIAppConfig
from invokeai.backend.util.logging import InvokeAILogger
config = InvokeAIAppConfig.get_config()
config.parse_args()
logger = InvokeAILogger.getLogger(config=config)
from invokeai.app.services.image_record_storage import SqliteImageRecordStorage
from invokeai.app.services.images import ImageService
from invokeai.app.services.metadata import CoreMetadataService
from invokeai.app.services.resource_name import SimpleNameService
from invokeai.app.services.urls import LocalUrlService
from .services.default_graphs import (default_text_to_image_graph_id,
create_system_graphs)
from .services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
from .cli.commands import (BaseCommand, CliContext, ExitCli,
SortedHelpFormatter, add_graph_parsers, add_parsers)
from .cli.completer import set_autocompleter
from .invocations.baseinvocation import BaseInvocation
from .services.events import EventServiceBase
from .services.graph import (Edge, EdgeConnection, GraphExecutionState,
GraphInvocation, LibraryGraph,
are_connection_types_compatible)
from .services.image_file_storage import DiskImageFileStorage
from .services.invocation_queue import MemoryInvocationQueue
from .services.invocation_services import InvocationServices
from .services.invoker import Invoker
from .services.model_manager_service import ModelManagerService
from .services.processor import DefaultInvocationProcessor
from .services.restoration_services import RestorationServices
from .services.sqlite import SqliteItemStorage
class CliCommand(BaseModel):
command: Union[BaseCommand.get_commands() + BaseInvocation.get_invocations()] = Field(discriminator="type") # type: ignore
class InvalidArgs(Exception):
pass
def add_invocation_args(command_parser):
# Add linking capability
command_parser.add_argument(
"--link",
"-l",
action="append",
nargs=3,
help="A link in the format 'source_node source_field dest_field'. source_node can be relative to history (e.g. -1)",
)
command_parser.add_argument(
"--link_node",
"-ln",
action="append",
help="A link from all fields in the specified node. Node can be relative to history (e.g. -1)",
)
def get_command_parser(services: InvocationServices) -> argparse.ArgumentParser:
# Create invocation parser
parser = argparse.ArgumentParser(formatter_class=SortedHelpFormatter)
def exit(*args, **kwargs):
raise InvalidArgs
parser.exit = exit
subparsers = parser.add_subparsers(dest="type")
# Create subparsers for each invocation
invocations = BaseInvocation.get_all_subclasses()
add_parsers(subparsers, invocations, add_arguments=add_invocation_args)
# Create subparsers for each command
commands = BaseCommand.get_all_subclasses()
add_parsers(subparsers, commands, exclude_fields=["type"])
# Create subparsers for exposed CLI graphs
# TODO: add a way to identify these graphs
text_to_image = services.graph_library.get(default_text_to_image_graph_id)
add_graph_parsers(subparsers, [text_to_image], add_arguments=add_invocation_args)
return parser
class NodeField:
alias: str
node_path: str
field: str
field_type: type
def __init__(self, alias: str, node_path: str, field: str, field_type: type):
self.alias = alias
self.node_path = node_path
self.field = field
self.field_type = field_type
def fields_from_type_hints(hints: dict[str, type], node_path: str) -> dict[str,NodeField]:
return {k:NodeField(alias=k, node_path=node_path, field=k, field_type=v) for k, v in hints.items()}
def get_node_input_field(graph: LibraryGraph, field_alias: str, node_id: str) -> NodeField:
"""Gets the node field for the specified field alias"""
exposed_input = next(e for e in graph.exposed_inputs if e.alias == field_alias)
node_type = type(graph.graph.get_node(exposed_input.node_path))
return NodeField(alias=exposed_input.alias, node_path=f'{node_id}.{exposed_input.node_path}', field=exposed_input.field, field_type=get_type_hints(node_type)[exposed_input.field])
def get_node_output_field(graph: LibraryGraph, field_alias: str, node_id: str) -> NodeField:
"""Gets the node field for the specified field alias"""
exposed_output = next(e for e in graph.exposed_outputs if e.alias == field_alias)
node_type = type(graph.graph.get_node(exposed_output.node_path))
node_output_type = node_type.get_output_type()
return NodeField(alias=exposed_output.alias, node_path=f'{node_id}.{exposed_output.node_path}', field=exposed_output.field, field_type=get_type_hints(node_output_type)[exposed_output.field])
def get_node_inputs(invocation: BaseInvocation, context: CliContext) -> dict[str, NodeField]:
"""Gets the inputs for the specified invocation from the context"""
node_type = type(invocation)
if node_type is not GraphInvocation:
return fields_from_type_hints(get_type_hints(node_type), invocation.id)
else:
graph: LibraryGraph = context.invoker.services.graph_library.get(context.graph_nodes[invocation.id])
return {e.alias: get_node_input_field(graph, e.alias, invocation.id) for e in graph.exposed_inputs}
def get_node_outputs(invocation: BaseInvocation, context: CliContext) -> dict[str, NodeField]:
"""Gets the outputs for the specified invocation from the context"""
node_type = type(invocation)
if node_type is not GraphInvocation:
return fields_from_type_hints(get_type_hints(node_type.get_output_type()), invocation.id)
else:
graph: LibraryGraph = context.invoker.services.graph_library.get(context.graph_nodes[invocation.id])
return {e.alias: get_node_output_field(graph, e.alias, invocation.id) for e in graph.exposed_outputs}
def generate_matching_edges(
a: BaseInvocation, b: BaseInvocation, context: CliContext
) -> list[Edge]:
"""Generates all possible edges between two invocations"""
afields = get_node_outputs(a, context)
bfields = get_node_inputs(b, context)
matching_fields = set(afields.keys()).intersection(bfields.keys())
# Remove invalid fields
invalid_fields = set(["type", "id"])
matching_fields = matching_fields.difference(invalid_fields)
# Validate types
matching_fields = [f for f in matching_fields if are_connection_types_compatible(afields[f].field_type, bfields[f].field_type)]
edges = [
Edge(
source=EdgeConnection(node_id=afields[alias].node_path, field=afields[alias].field),
destination=EdgeConnection(node_id=bfields[alias].node_path, field=bfields[alias].field)
)
for alias in matching_fields
]
return edges
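The matching rule above reduces to a name intersection plus a type-compatibility check. A minimal standalone sketch, with plain strings standing in both for real field types and for are_connection_types_compatible (all names here are illustrative, not taken from this diff):

# Hypothetical field maps for two nodes a and b.
outputs_of_a = {"image": "ImageField", "width": "int", "type": "str", "id": "str"}
inputs_of_b = {"image": "ImageField", "mask": "ImageField", "type": "str", "id": "str"}
matching = (set(outputs_of_a) & set(inputs_of_b)) - {"type", "id"}
# Stand-in for are_connection_types_compatible:
compatible = [f for f in matching if outputs_of_a[f] == inputs_of_b[f]]
assert compatible == ["image"]  # one edge would be built: a.image -> b.image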
class SessionError(Exception):
"""Raised when a session error has occurred"""
pass
def invoke_all(context: CliContext):
"""Runs all invocations in the specified session"""
context.invoker.invoke(context.session, invoke_all=True)
while not context.get_session().is_complete():
# Wait some time
time.sleep(0.1)
# Print any errors
if context.session.has_error():
for n in context.session.errors:
context.invoker.services.logger.error(
f"Error in node {n} (source node {context.session.prepared_source_mapping[n]}): {context.session.errors[n]}"
)
raise SessionError()
def invoke_cli():
# get the optional list of invocations to execute on the command line
parser = config.get_parser()
parser.add_argument('commands',nargs='*')
invocation_commands = parser.parse_args().commands
# get the optional file to read commands from.
# Simplest approach is to substitute it for STDIN.
if infile := config.from_file:
sys.stdin = open(infile,"r")
model_manager = ModelManagerService(config,logger)
events = EventServiceBase()
output_folder = config.output_path
# TODO: build a file/path manager?
if config.use_memory_db:
db_location = ":memory:"
else:
db_location = config.db_path
db_location.parent.mkdir(parents=True,exist_ok=True)
logger.info(f'InvokeAI database location is "{db_location}"')
graph_execution_manager = SqliteItemStorage[GraphExecutionState](
filename=db_location, table_name="graph_executions"
)
urls = LocalUrlService()
metadata = CoreMetadataService()
image_record_storage = SqliteImageRecordStorage(db_location)
image_file_storage = DiskImageFileStorage(f"{output_folder}/images")
names = SimpleNameService()
images = ImageService(
image_record_storage=image_record_storage,
image_file_storage=image_file_storage,
metadata=metadata,
url=urls,
logger=logger,
names=names,
graph_execution_manager=graph_execution_manager,
)
services = InvocationServices(
model_manager=model_manager,
events=events,
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f'{output_folder}/latents')),
images=images,
queue=MemoryInvocationQueue(),
graph_library=SqliteItemStorage[LibraryGraph](
filename=db_location, table_name="graphs"
),
graph_execution_manager=graph_execution_manager,
processor=DefaultInvocationProcessor(),
restoration=RestorationServices(config,logger=logger),
logger=logger,
configuration=config,
)
system_graphs = create_system_graphs(services.graph_library)
system_graph_names = set([g.name for g in system_graphs])
set_autocompleter(services)
invoker = Invoker(services)
session: GraphExecutionState = invoker.create_execution_state()
parser = get_command_parser(services)
re_negid = re.compile('^-[0-9]+$')
# Uncomment to print out previous sessions at startup
# print(services.session_manager.list())
context = CliContext(invoker, session, parser)
command_line_args_exist = len(invocation_commands) > 0
done = False
while not done:
try:
if command_line_args_exist:
cmd_input = invocation_commands.pop(0)
done = len(invocation_commands) == 0
else:
cmd_input = input("invoke> ")
except (KeyboardInterrupt, EOFError):
# Ctrl-c exits
break
try:
# Refresh the state of the session
#history = list(get_graph_execution_history(context.session))
history = list(reversed(context.nodes_added))
# Split the command for piping
cmds = cmd_input.split("|")
start_id = len(context.nodes_added)
current_id = start_id
new_invocations = list()
for cmd in cmds:
if cmd is None or cmd.strip() == "":
raise InvalidArgs("Empty command")
# Parse args to create invocation
args = vars(context.parser.parse_args(shlex.split(cmd.strip())))
# Override defaults
for field_name, field_default in context.defaults.items():
if field_name in args:
args[field_name] = field_default
# Parse invocation
command: CliCommand = None # type:ignore
system_graph: LibraryGraph|None = None
if args['type'] in system_graph_names:
system_graph = next(filter(lambda g: g.name == args['type'], system_graphs))
invocation = GraphInvocation(graph=system_graph.graph, id=str(current_id))
for exposed_input in system_graph.exposed_inputs:
if exposed_input.alias in args:
node = invocation.graph.get_node(exposed_input.node_path)
field = exposed_input.field
setattr(node, field, args[exposed_input.alias])
command = CliCommand(command = invocation)
context.graph_nodes[invocation.id] = system_graph.id
else:
args["id"] = current_id
command = CliCommand(command=args)
if command is None:
continue
# Run any CLI commands immediately
if isinstance(command.command, BaseCommand):
# Invoke all current nodes to preserve operation order
invoke_all(context)
# Run the command
command.command.run(context)
continue
# TODO: handle linking with library graphs
# Pipe previous command output (if there was a previous command)
edges: list[Edge] = list()
if len(history) > 0 or current_id != start_id:
from_id = (
history[0] if current_id == start_id else str(current_id - 1)
)
from_node = (
next(filter(lambda n: n[0].id == from_id, new_invocations))[0]
if current_id != start_id
else context.session.graph.get_node(from_id)
)
matching_edges = generate_matching_edges(
from_node, command.command, context
)
edges.extend(matching_edges)
# Parse provided links
if "link_node" in args and args["link_node"]:
for link in args["link_node"]:
node_id = link
if re_negid.match(node_id):
node_id = str(current_id + int(node_id))
link_node = context.session.graph.get_node(node_id)
matching_edges = generate_matching_edges(
link_node, command.command, context
)
matching_destinations = [e.destination for e in matching_edges]
edges = [e for e in edges if e.destination not in matching_destinations]
edges.extend(matching_edges)
if "link" in args and args["link"]:
for link in args["link"]:
edges = [e for e in edges if e.destination.node_id != command.command.id or e.destination.field != link[2]]
node_id = link[0]
if re_negid.match(node_id):
node_id = str(current_id + int(node_id))
# TODO: handle missing input/output
node_output = get_node_outputs(context.session.graph.get_node(node_id), context)[link[1]]
node_input = get_node_inputs(command.command, context)[link[2]]
edges.append(
Edge(
source=EdgeConnection(node_id=node_output.node_path, field=node_output.field),
destination=EdgeConnection(node_id=node_input.node_path, field=node_input.field)
)
)
new_invocations.append((command.command, edges))
current_id = current_id + 1
# Add the node to the session
context.add_node(command.command)
for edge in edges:
print(edge)
context.add_edge(edge)
# Execute all remaining nodes
invoke_all(context)
except InvalidArgs:
invoker.services.logger.warning('Invalid command, use "help" to list commands')
continue
except ValidationError:
invoker.services.logger.warning('Invalid command arguments, run "<command> --help" for summary')
except SessionError:
# Start a new session
invoker.services.logger.warning("Session error: creating a new session")
context.reset()
except ExitCli:
break
except SystemExit:
continue
invoker.stop()
if __name__ == "__main__":
invoke_cli()
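The --link and --link_node arguments accept negative node ids, which the loop above resolves relative to the node currently being added via re_negid. A minimal sketch of that resolution, extracted for clarity:

import re

re_negid = re.compile("^-[0-9]+$")

def resolve_node_id(link_id: str, current_id: int) -> str:
    """'-1' resolves to the previous node; absolute ids pass through unchanged."""
    if re_negid.match(link_id):
        return str(current_id + int(link_id))
    return link_id

assert resolve_node_id("-1", 5) == "4"
assert resolve_node_id("3", 5) == "3"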

@@ -0,0 +1,12 @@
import os
__all__ = []
dirname = os.path.dirname(os.path.abspath(__file__))
for f in os.listdir(dirname):
if (
f != "__init__.py"
and os.path.isfile("%s/%s" % (dirname, f))
and f[-3:] == ".py"
):
__all__.append(f[:-3])

@@ -0,0 +1,136 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from __future__ import annotations
from abc import ABC, abstractmethod
from inspect import signature
from typing import get_args, get_type_hints, Dict, List, Literal, TypedDict, TYPE_CHECKING
from pydantic import BaseModel, Field
if TYPE_CHECKING:
from ..services.invocation_services import InvocationServices
class InvocationContext:
services: InvocationServices
graph_execution_state_id: str
def __init__(self, services: InvocationServices, graph_execution_state_id: str):
self.services = services
self.graph_execution_state_id = graph_execution_state_id
class BaseInvocationOutput(BaseModel):
"""Base class for all invocation outputs"""
# All outputs must include a type name like this:
# type: Literal['your_output_name']
@classmethod
def get_all_subclasses_tuple(cls):
subclasses = []
toprocess = [cls]
while len(toprocess) > 0:
next = toprocess.pop(0)
next_subclasses = next.__subclasses__()
subclasses.extend(next_subclasses)
toprocess.extend(next_subclasses)
return tuple(subclasses)
class BaseInvocation(ABC, BaseModel):
"""A node to process inputs and produce outputs.
May use dependency injection in __init__ to receive providers.
"""
# All invocations must include a type name like this:
# type: Literal['your_invocation_name']
@classmethod
def get_all_subclasses(cls):
subclasses = []
toprocess = [cls]
while len(toprocess) > 0:
next = toprocess.pop(0)
next_subclasses = next.__subclasses__()
subclasses.extend(next_subclasses)
toprocess.extend(next_subclasses)
return subclasses
@classmethod
def get_invocations(cls):
return tuple(BaseInvocation.get_all_subclasses())
@classmethod
def get_invocations_map(cls):
# Get the type strings out of the literals and into a dictionary
return {get_args(get_type_hints(t)["type"])[0]: t for t in BaseInvocation.get_all_subclasses()}
@classmethod
def get_output_type(cls):
return signature(cls.invoke).return_annotation
@abstractmethod
def invoke(self, context: InvocationContext) -> BaseInvocationOutput:
"""Invoke with provided context and return outputs."""
pass
#fmt: off
id: str = Field(description="The id of this node. Must be unique among all nodes.")
is_intermediate: bool = Field(default=False, description="Whether or not this node is an intermediate node.")
#fmt: on
# TODO: figure out a better way to provide these hints
# TODO: when we can upgrade to python 3.11, we can use the `NotRequired` type instead of `total=False`
class UIConfig(TypedDict, total=False):
type_hints: Dict[
str,
Literal[
"integer",
"float",
"boolean",
"string",
"enum",
"image",
"latents",
"model",
"control",
],
]
tags: List[str]
title: str
class CustomisedSchemaExtra(TypedDict):
ui: UIConfig
class InvocationConfig(BaseModel.Config):
"""Customizes pydantic's BaseModel.Config class for use by Invocations.
Provide a `ui` dict in `schema_extra` to add hints for generated UIs.
`tags`
- A list of strings, used to categorise invocations.
`type_hints`
- A dict of field types which override the types in the invocation definition.
- Each key should be the name of one of the invocation's fields.
- Each value should be one of the valid types:
- `integer`, `float`, `boolean`, `string`, `enum`, `image`, `latents`, `model`, `control`
```python
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["stable-diffusion", "image"],
"type_hints": {
"initial_image": "image",
},
},
}
```
"""
schema_extra: CustomisedSchemaExtra
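A hedged sketch of what a custom node built on these base classes might look like (the uppercase node is hypothetical and assumed to live in the same module as the classes above):

from typing import Literal
from pydantic import Field

class UppercaseOutput(BaseInvocationOutput):
    """Example output carrying a single string"""
    type: Literal["uppercase_output"] = "uppercase_output"
    text: str = Field(default="", description="The uppercased text")

class UppercaseInvocation(BaseInvocation):
    """Uppercases a string (illustrative only)"""
    type: Literal["uppercase"] = "uppercase"
    text: str = Field(default="", description="The text to uppercase")

    def invoke(self, context: InvocationContext) -> UppercaseOutput:
        return UppercaseOutput(text=self.text.upper())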

@@ -0,0 +1,94 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team
from typing import Literal
import numpy as np
from pydantic import Field, validator
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from .baseinvocation import (
BaseInvocation,
InvocationContext,
BaseInvocationOutput,
)
class IntCollectionOutput(BaseInvocationOutput):
"""A collection of integers"""
type: Literal["int_collection"] = "int_collection"
# Outputs
collection: list[int] = Field(default=[], description="The int collection")
class FloatCollectionOutput(BaseInvocationOutput):
"""A collection of floats"""
type: Literal["float_collection"] = "float_collection"
# Outputs
collection: list[float] = Field(default=[], description="The float collection")
class RangeInvocation(BaseInvocation):
"""Creates a range of numbers from start to stop with step"""
type: Literal["range"] = "range"
# Inputs
start: int = Field(default=0, description="The start of the range")
stop: int = Field(default=10, description="The stop of the range")
step: int = Field(default=1, description="The step of the range")
@validator("stop")
def stop_gt_start(cls, v, values):
if "start" in values and v <= values["start"]:
raise ValueError("stop must be greater than start")
return v
def invoke(self, context: InvocationContext) -> IntCollectionOutput:
return IntCollectionOutput(
collection=list(range(self.start, self.stop, self.step))
)
class RangeOfSizeInvocation(BaseInvocation):
"""Creates a range from start to start + size with step"""
type: Literal["range_of_size"] = "range_of_size"
# Inputs
start: int = Field(default=0, description="The start of the range")
size: int = Field(default=1, description="The number of values")
step: int = Field(default=1, description="The step of the range")
def invoke(self, context: InvocationContext) -> IntCollectionOutput:
return IntCollectionOutput(
collection=list(range(self.start, self.start + self.size, self.step))
)
class RandomRangeInvocation(BaseInvocation):
"""Creates a collection of random numbers"""
type: Literal["random_range"] = "random_range"
# Inputs
low: int = Field(default=0, description="The inclusive low value")
high: int = Field(
default=np.iinfo(np.int32).max, description="The exclusive high value"
)
size: int = Field(default=1, description="The number of values to generate")
seed: int = Field(
ge=0,
le=SEED_MAX,
description="The seed for the RNG (omit for random)",
default_factory=get_random_seed,
)
def invoke(self, context: InvocationContext) -> IntCollectionOutput:
rng = np.random.default_rng(self.seed)
return IntCollectionOutput(
collection=list(rng.integers(low=self.low, high=self.high, size=self.size))
)
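Because these invocations never touch the services on the context, they can be exercised in isolation; a quick sketch (passing None for the unused context is an assumption made only for illustration):

out = RangeInvocation(id="1", start=0, stop=10, step=2).invoke(None)
assert out.collection == [0, 2, 4, 6, 8]
sized = RangeOfSizeInvocation(id="2", start=5, size=3).invoke(None)
assert sized.collection == [5, 6, 7]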

@@ -0,0 +1,269 @@
from typing import Literal, Optional, Union
from pydantic import BaseModel, Field
from contextlib import ExitStack
import re
from .baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationContext, InvocationConfig
from .model import ClipField
from ...backend.util.devices import torch_dtype
from ...backend.stable_diffusion.diffusion import InvokeAIDiffuserComponent
from ...backend.model_management import BaseModelType, ModelType, SubModelType
from ...backend.model_management.lora import ModelPatcher
from compel import Compel
from compel.prompt_parser import (
Blend,
CrossAttentionControlSubstitute,
FlattenedPrompt,
Fragment, Conjunction,
)
class ConditioningField(BaseModel):
conditioning_name: Optional[str] = Field(default=None, description="The name of conditioning data")
class Config:
schema_extra = {"required": ["conditioning_name"]}
class CompelOutput(BaseInvocationOutput):
"""Compel parser output"""
#fmt: off
type: Literal["compel_output"] = "compel_output"
conditioning: ConditioningField = Field(default=None, description="Conditioning")
#fmt: on
class CompelInvocation(BaseInvocation):
"""Parse prompt using compel package to conditioning."""
type: Literal["compel"] = "compel"
prompt: str = Field(default="", description="Prompt")
clip: ClipField = Field(None, description="Clip to use")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"title": "Prompt (Compel)",
"tags": ["prompt", "compel"],
"type_hints": {
"model": "model"
}
},
}
def invoke(self, context: InvocationContext) -> CompelOutput:
tokenizer_info = context.services.model_manager.get_model(
**self.clip.tokenizer.dict(),
)
text_encoder_info = context.services.model_manager.get_model(
**self.clip.text_encoder.dict(),
)
with tokenizer_info as orig_tokenizer,\
text_encoder_info as text_encoder,\
ExitStack() as stack:
loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.clip.loras]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
name = trigger[1:-1]
try:
ti_list.append(
stack.enter_context(
context.services.model_manager.get_model(
model_name=name,
base_model=self.clip.text_encoder.base_model,
model_type=ModelType.TextualInversion,
)
)
)
except Exception:
#print(e)
#import traceback
#print(traceback.format_exc())
print(f"Warn: trigger: \"{trigger}\" not found")
with ModelPatcher.apply_lora_text_encoder(text_encoder, loras),\
ModelPatcher.apply_ti(orig_tokenizer, text_encoder, ti_list) as (tokenizer, ti_manager):
compel = Compel(
tokenizer=tokenizer,
text_encoder=text_encoder,
textual_inversion_manager=ti_manager,
dtype_for_device_getter=torch_dtype,
truncate_long_prompts=True, # TODO:
)
conjunction = Compel.parse_prompt_string(self.prompt)
prompt: Union[FlattenedPrompt, Blend] = conjunction.prompts[0]
if context.services.configuration.log_tokenization:
log_tokenization_for_prompt_object(prompt, tokenizer)
c, options = compel.build_conditioning_tensor_for_prompt_object(prompt)
# TODO: long prompt support
#if not self.truncate_long_prompts:
# [c, uc] = compel.pad_conditioning_tensors_to_same_length([c, uc])
ec = InvokeAIDiffuserComponent.ExtraConditioningInfo(
tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction),
cross_attention_control_args=options.get("cross_attention_control", None),
)
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
# TODO: hacky but works ;D maybe rename latents somehow?
context.services.latents.save(conditioning_name, (c, ec))
return CompelOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
def get_max_token_count(
tokenizer, prompt: Union[FlattenedPrompt, Blend, Conjunction], truncate_if_too_long=False
) -> int:
if type(prompt) is Blend:
blend: Blend = prompt
return max(
[
get_max_token_count(tokenizer, p, truncate_if_too_long)
for p in blend.prompts
]
)
elif type(prompt) is Conjunction:
conjunction: Conjunction = prompt
return sum(
[
get_max_token_count(tokenizer, p, truncate_if_too_long)
for p in conjunction.prompts
]
)
else:
return len(
get_tokens_for_prompt_object(tokenizer, prompt, truncate_if_too_long)
)
def get_tokens_for_prompt_object(
tokenizer, parsed_prompt: FlattenedPrompt, truncate_if_too_long=True
) -> list[str]:
if type(parsed_prompt) is Blend:
raise ValueError(
"Blend is not supported here - you need to get tokens for each of its .children"
)
text_fragments = [
x.text
if type(x) is Fragment
else (
" ".join([f.text for f in x.original])
if type(x) is CrossAttentionControlSubstitute
else str(x)
)
for x in parsed_prompt.children
]
text = " ".join(text_fragments)
tokens = tokenizer.tokenize(text)
if truncate_if_too_long:
max_tokens_length = tokenizer.model_max_length - 2 # typically 75
tokens = tokens[0:max_tokens_length]
return tokens
def log_tokenization_for_conjunction(
c: Conjunction, tokenizer, display_label_prefix=None
):
display_label_prefix = display_label_prefix or ""
for i, p in enumerate(c.prompts):
if len(c.prompts)>1:
this_display_label_prefix = f"{display_label_prefix}(conjunction part {i + 1}, weight={c.weights[i]})"
else:
this_display_label_prefix = display_label_prefix
log_tokenization_for_prompt_object(
p,
tokenizer,
display_label_prefix=this_display_label_prefix
)
def log_tokenization_for_prompt_object(
p: Union[Blend, FlattenedPrompt], tokenizer, display_label_prefix=None
):
display_label_prefix = display_label_prefix or ""
if type(p) is Blend:
blend: Blend = p
for i, c in enumerate(blend.prompts):
log_tokenization_for_prompt_object(
c,
tokenizer,
display_label_prefix=f"{display_label_prefix}(blend part {i + 1}, weight={blend.weights[i]})",
)
elif type(p) is FlattenedPrompt:
flattened_prompt: FlattenedPrompt = p
if flattened_prompt.wants_cross_attention_control:
original_fragments = []
edited_fragments = []
for f in flattened_prompt.children:
if type(f) is CrossAttentionControlSubstitute:
original_fragments += f.original
edited_fragments += f.edited
else:
original_fragments.append(f)
edited_fragments.append(f)
original_text = " ".join([x.text for x in original_fragments])
log_tokenization_for_text(
original_text,
tokenizer,
display_label=f"{display_label_prefix}(.swap originals)",
)
edited_text = " ".join([x.text for x in edited_fragments])
log_tokenization_for_text(
edited_text,
tokenizer,
display_label=f"{display_label_prefix}(.swap replacements)",
)
else:
text = " ".join([x.text for x in flattened_prompt.children])
log_tokenization_for_text(
text, tokenizer, display_label=display_label_prefix
)
def log_tokenization_for_text(text, tokenizer, display_label=None, truncate_if_too_long=False):
"""shows how the prompt is tokenized
# usually tokens have '</w>' to indicate end-of-word,
# but for readability it has been replaced with ' '
"""
tokens = tokenizer.tokenize(text)
tokenized = ""
discarded = ""
usedTokens = 0
totalTokens = len(tokens)
for i in range(0, totalTokens):
token = tokens[i].replace("</w>", " ")
# alternate color
s = (usedTokens % 6) + 1
if truncate_if_too_long and i >= tokenizer.model_max_length:
discarded = discarded + f"\x1b[0;3{s};40m{token}"
else:
tokenized = tokenized + f"\x1b[0;3{s};40m{token}"
usedTokens += 1
if usedTokens > 0:
print(f'\n>> [TOKENLOG] Tokens {display_label or ""} ({usedTokens}):')
print(f"{tokenized}\x1b[0m")
if discarded != "":
print(f"\n>> [TOKENLOG] Tokens Discarded ({totalTokens - usedTokens}):")
print(f"{discarded}\x1b[0m")

@@ -0,0 +1,454 @@
# InvokeAI nodes for ControlNet image preprocessors
# initial implementation by Gregg Helt, 2023
# heavily leverages controlnet_aux package: https://github.com/patrickvonplaten/controlnet_aux
from builtins import float
import numpy as np
from typing import Literal, Optional, Union, List
from PIL import Image, ImageFilter, ImageOps
from pydantic import BaseModel, Field, validator
from ..models.image import ImageField, ImageCategory, ResourceOrigin
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InvocationContext,
InvocationConfig,
)
from controlnet_aux import (
CannyDetector,
HEDdetector,
LineartDetector,
LineartAnimeDetector,
MidasDetector,
MLSDdetector,
NormalBaeDetector,
OpenposeDetector,
PidiNetDetector,
ContentShuffleDetector,
ZoeDetector,
MediapipeFaceDetector,
)
from .image import ImageOutput, PILInvocationConfig
CONTROLNET_DEFAULT_MODELS = [
###########################################
# lllyasviel sd v1.5, ControlNet v1.0 models
##############################################
"lllyasviel/sd-controlnet-canny",
"lllyasviel/sd-controlnet-depth",
"lllyasviel/sd-controlnet-hed",
"lllyasviel/sd-controlnet-seg",
"lllyasviel/sd-controlnet-openpose",
"lllyasviel/sd-controlnet-scribble",
"lllyasviel/sd-controlnet-normal",
"lllyasviel/sd-controlnet-mlsd",
#############################################
# lllyasviel sd v1.5, ControlNet v1.1 models
#############################################
"lllyasviel/control_v11p_sd15_canny",
"lllyasviel/control_v11p_sd15_openpose",
"lllyasviel/control_v11p_sd15_seg",
# "lllyasviel/control_v11p_sd15_depth", # broken
"lllyasviel/control_v11f1p_sd15_depth",
"lllyasviel/control_v11p_sd15_normalbae",
"lllyasviel/control_v11p_sd15_scribble",
"lllyasviel/control_v11p_sd15_mlsd",
"lllyasviel/control_v11p_sd15_softedge",
"lllyasviel/control_v11p_sd15s2_lineart_anime",
"lllyasviel/control_v11p_sd15_lineart",
"lllyasviel/control_v11p_sd15_inpaint",
# "lllyasviel/control_v11u_sd15_tile",
# problem (temporary?) with huggingface "lllyasviel/control_v11u_sd15_tile",
# so for now replaced by "lllyasviel/control_v11f1e_sd15_tile",
"lllyasviel/control_v11e_sd15_shuffle",
"lllyasviel/control_v11e_sd15_ip2p",
"lllyasviel/control_v11f1e_sd15_tile",
#################################################
# thibaud sd v2.1 models (ControlNet v1.0? or v1.1?)
##################################################
"thibaud/controlnet-sd21-openpose-diffusers",
"thibaud/controlnet-sd21-canny-diffusers",
"thibaud/controlnet-sd21-depth-diffusers",
"thibaud/controlnet-sd21-scribble-diffusers",
"thibaud/controlnet-sd21-hed-diffusers",
"thibaud/controlnet-sd21-zoedepth-diffusers",
"thibaud/controlnet-sd21-color-diffusers",
"thibaud/controlnet-sd21-openposev2-diffusers",
"thibaud/controlnet-sd21-lineart-diffusers",
"thibaud/controlnet-sd21-normalbae-diffusers",
"thibaud/controlnet-sd21-ade20k-diffusers",
##############################################
# ControlNetMediaPipeface, ControlNet v1.1
##############################################
# ["CrucibleAI/ControlNetMediaPipeFace", "diffusion_sd15"], # SD 1.5
# diffusion_sd15 needs to be passed to from_pretrained() as subfolder arg
# hacked t2l to split to model & subfolder if format is "model,subfolder"
"CrucibleAI/ControlNetMediaPipeFace,diffusion_sd15", # SD 1.5
"CrucibleAI/ControlNetMediaPipeFace", # SD 2.1?
]
CONTROLNET_NAME_VALUES = Literal[tuple(CONTROLNET_DEFAULT_MODELS)]
class ControlField(BaseModel):
image: ImageField = Field(default=None, description="The control image")
control_model: Optional[str] = Field(default=None, description="The ControlNet model to use")
# control_weight: Optional[float] = Field(default=1, description="weight given to controlnet")
control_weight: Union[float, List[float]] = Field(default=1, description="The weight given to the ControlNet")
begin_step_percent: float = Field(default=0, ge=0, le=1,
description="When the ControlNet is first applied (% of total steps)")
end_step_percent: float = Field(default=1, ge=0, le=1,
description="When the ControlNet is last applied (% of total steps)")
@validator("control_weight")
def abs_le_one(cls, v):
"""validate that all abs(values) are <=1"""
if isinstance(v, list):
for i in v:
if abs(i) > 1:
raise ValueError('all abs(control_weight) must be <= 1')
else:
if abs(v) > 1:
raise ValueError('abs(control_weight) must be <= 1')
return v
class Config:
schema_extra = {
"required": ["image", "control_model", "control_weight", "begin_step_percent", "end_step_percent"],
"ui": {
"type_hints": {
"control_weight": "float",
# "control_weight": "number",
}
}
}
class ControlOutput(BaseInvocationOutput):
"""node output for ControlNet info"""
# fmt: off
type: Literal["control_output"] = "control_output"
control: ControlField = Field(default=None, description="The control info")
# fmt: on
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
# fmt: off
type: Literal["controlnet"] = "controlnet"
# Inputs
image: ImageField = Field(default=None, description="The control image")
control_model: CONTROLNET_NAME_VALUES = Field(default="lllyasviel/sd-controlnet-canny",
description="control model used")
control_weight: Union[float, List[float]] = Field(default=1.0, description="The weight given to the ControlNet")
# TODO: add support in backend core for begin_step_percent, end_step_percent, guess_mode
begin_step_percent: float = Field(default=0, ge=0, le=1,
description="When the ControlNet is first applied (% of total steps)")
end_step_percent: float = Field(default=1, ge=0, le=1,
description="When the ControlNet is last applied (% of total steps)")
# fmt: on
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["latents"],
"type_hints": {
"model": "model",
"control": "control",
# "cfg_scale": "float",
"cfg_scale": "number",
"control_weight": "float",
}
},
}
def invoke(self, context: InvocationContext) -> ControlOutput:
return ControlOutput(
control=ControlField(
image=self.image,
control_model=self.control_model,
control_weight=self.control_weight,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
),
)
# TODO: move image processors to separate file (image_analysis.py)
class ImageProcessorInvocation(BaseInvocation, PILInvocationConfig):
"""Base class for invocations that preprocess images for ControlNet"""
# fmt: off
type: Literal["image_processor"] = "image_processor"
# Inputs
image: ImageField = Field(default=None, description="The image to process")
# fmt: on
def run_processor(self, image):
# superclass just passes through image without processing
return image
def invoke(self, context: InvocationContext) -> ImageOutput:
raw_image = context.services.images.get_pil_image(self.image.image_name)
# image type should be PIL.PngImagePlugin.PngImageFile ?
processed_image = self.run_processor(raw_image)
# FIXME: what happened to image metadata?
# metadata = context.services.metadata.build_metadata(
# session_id=context.graph_execution_state_id, node=self
# )
# currently can't see processed image in node UI without a showImage node,
# so for now setting image_type to RESULT instead of INTERMEDIATE so will get saved in gallery
image_dto = context.services.images.create(
image=processed_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.CONTROL,
session_id=context.graph_execution_state_id,
node_id=self.id,
is_intermediate=self.is_intermediate
)
"""Builds an ImageOutput and its ImageField"""
processed_image_field = ImageField(image_name=image_dto.image_name)
return ImageOutput(
image=processed_image_field,
# width=processed_image.width,
width = image_dto.width,
# height=processed_image.height,
height = image_dto.height,
# mode=processed_image.mode,
)
class CannyImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Canny edge detection for ControlNet"""
# fmt: off
type: Literal["canny_image_processor"] = "canny_image_processor"
# Input
low_threshold: int = Field(default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)")
high_threshold: int = Field(default=200, ge=0, le=255, description="The high threshold of the Canny pixel gradient (0-255)")
# fmt: on
def run_processor(self, image):
canny_processor = CannyDetector()
processed_image = canny_processor(image, self.low_threshold, self.high_threshold)
return processed_image
class HedImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies HED edge detection to image"""
# fmt: off
type: Literal["hed_image_processor"] = "hed_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
# safe not supported in controlnet_aux v0.0.3
# safe: bool = Field(default=False, description="whether to use safe mode")
scribble: bool = Field(default=False, description="Whether to use scribble mode")
# fmt: on
def run_processor(self, image):
hed_processor = HEDdetector.from_pretrained("lllyasviel/Annotators")
processed_image = hed_processor(image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
# safe not supported in controlnet_aux v0.0.3
# safe=self.safe,
scribble=self.scribble,
)
return processed_image
class LineartImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies line art processing to image"""
# fmt: off
type: Literal["lineart_image_processor"] = "lineart_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
coarse: bool = Field(default=False, description="Whether to use coarse mode")
# fmt: on
def run_processor(self, image):
lineart_processor = LineartDetector.from_pretrained("lllyasviel/Annotators")
processed_image = lineart_processor(image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
coarse=self.coarse)
return processed_image
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies line art anime processing to image"""
# fmt: off
type: Literal["lineart_anime_image_processor"] = "lineart_anime_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
# fmt: on
def run_processor(self, image):
processor = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = processor(image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
)
return processed_image
class OpenposeImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies Openpose processing to image"""
# fmt: off
type: Literal["openpose_image_processor"] = "openpose_image_processor"
# Inputs
hand_and_face: bool = Field(default=False, description="Whether to use hands and face mode")
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
# fmt: on
def run_processor(self, image):
openpose_processor = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = openpose_processor(image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
hand_and_face=self.hand_and_face,
)
return processed_image
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies Midas depth processing to image"""
# fmt: off
type: Literal["midas_depth_image_processor"] = "midas_depth_image_processor"
# Inputs
a_mult: float = Field(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
bg_th: float = Field(default=0.1, ge=0, description="Midas parameter `bg_th`")
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal: bool = Field(default=False, description="whether to use depth and normal mode")
# fmt: on
def run_processor(self, image):
midas_processor = MidasDetector.from_pretrained("lllyasviel/Annotators")
processed_image = midas_processor(image,
a=np.pi * self.a_mult,
bg_th=self.bg_th,
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal=self.depth_and_normal,
)
return processed_image
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies NormalBae processing to image"""
# fmt: off
type: Literal["normalbae_image_processor"] = "normalbae_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
# fmt: on
def run_processor(self, image):
normalbae_processor = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = normalbae_processor(image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution)
return processed_image
class MlsdImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies MLSD processing to image"""
# fmt: off
type: Literal["mlsd_image_processor"] = "mlsd_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
thr_v: float = Field(default=0.1, ge=0, description="MLSD parameter `thr_v`")
thr_d: float = Field(default=0.1, ge=0, description="MLSD parameter `thr_d`")
# fmt: on
def run_processor(self, image):
mlsd_processor = MLSDdetector.from_pretrained("lllyasviel/Annotators")
processed_image = mlsd_processor(image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
thr_v=self.thr_v,
thr_d=self.thr_d)
return processed_image
class PidiImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies PIDI processing to image"""
# fmt: off
type: Literal["pidi_image_processor"] = "pidi_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
safe: bool = Field(default=False, description="Whether to use safe mode")
scribble: bool = Field(default=False, description="Whether to use scribble mode")
# fmt: on
def run_processor(self, image):
pidi_processor = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
processed_image = pidi_processor(image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
safe=self.safe,
scribble=self.scribble)
return processed_image
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies content shuffle processing to image"""
# fmt: off
type: Literal["content_shuffle_image_processor"] = "content_shuffle_image_processor"
# Inputs
detect_resolution: int = Field(default=512, ge=0, description="The pixel resolution for detection")
image_resolution: int = Field(default=512, ge=0, description="The pixel resolution for the output image")
h: Union[int, None] = Field(default=512, ge=0, description="Content shuffle `h` parameter")
w: Union[int, None] = Field(default=512, ge=0, description="Content shuffle `w` parameter")
f: Union[int, None] = Field(default=256, ge=0, description="Content shuffle `f` parameter")
# fmt: on
def run_processor(self, image):
content_shuffle_processor = ContentShuffleDetector()
processed_image = content_shuffle_processor(image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
h=self.h,
w=self.w,
f=self.f
)
return processed_image
# should work with controlnet_aux >= 0.0.4 and timm <= 0.6.13
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies Zoe depth processing to image"""
# fmt: off
type: Literal["zoe_depth_image_processor"] = "zoe_depth_image_processor"
# fmt: on
def run_processor(self, image):
zoe_depth_processor = ZoeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = zoe_depth_processor(image)
return processed_image
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation, PILInvocationConfig):
"""Applies mediapipe face processing to image"""
# fmt: off
type: Literal["mediapipe_face_processor"] = "mediapipe_face_processor"
# Inputs
max_faces: int = Field(default=1, ge=1, description="Maximum number of faces to detect")
min_confidence: float = Field(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
# fmt: on
def run_processor(self, image):
mediapipe_face_processor = MediapipeFaceDetector()
processed_image = mediapipe_face_processor(image, max_faces=self.max_faces, min_confidence=self.min_confidence)
return processed_image
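Since run_processor() is a plain PIL-in/PIL-out method, each processor can be smoke-tested outside a session; a sketch (the file paths are hypothetical):

from PIL import Image

src = Image.open("control_input.png")  # hypothetical input path
canny = CannyImageProcessorInvocation(id="1", low_threshold=100, high_threshold=200)
# PIL image in, PIL image out (with controlnet_aux >= 0.0.4, per the note above)
canny.run_processor(src).save("control_canny.png")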

@@ -0,0 +1,67 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Literal
import cv2 as cv
import numpy
from PIL import Image, ImageOps
from pydantic import BaseModel, Field
from invokeai.app.models.image import ImageCategory, ImageField, ResourceOrigin
from .baseinvocation import BaseInvocation, InvocationContext, InvocationConfig
from .image import ImageOutput
class CvInvocationConfig(BaseModel):
"""Helper class to provide all OpenCV invocations with additional config"""
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["cv", "image"],
},
}
class CvInpaintInvocation(BaseInvocation, CvInvocationConfig):
"""Simple inpaint using opencv."""
# fmt: off
type: Literal["cv_inpaint"] = "cv_inpaint"
# Inputs
image: ImageField = Field(default=None, description="The image to inpaint")
mask: ImageField = Field(default=None, description="The mask to use when inpainting")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
mask = context.services.images.get_pil_image(self.mask.image_name)
# Convert to cv image/mask
# TODO: consider making these utility functions
cv_image = cv.cvtColor(numpy.array(image.convert("RGB")), cv.COLOR_RGB2BGR)
cv_mask = numpy.array(ImageOps.invert(mask.convert("L")))
# Inpaint
cv_inpainted = cv.inpaint(cv_image, cv_mask, 3, cv.INPAINT_TELEA)
# Convert back to Pillow
# TODO: consider making a utility function
image_inpainted = Image.fromarray(cv.cvtColor(cv_inpainted, cv.COLOR_BGR2RGB))
image_dto = context.services.images.create(
image=image_inpainted,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
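The TODOs above suggest factoring the conversions into utilities; a minimal sketch of what such helpers could look like (the function names are hypothetical):

import cv2 as cv
import numpy
from PIL import Image

def pil_to_cv(image: Image.Image) -> numpy.ndarray:
    """RGB PIL image -> BGR OpenCV array."""
    return cv.cvtColor(numpy.array(image.convert("RGB")), cv.COLOR_RGB2BGR)

def cv_to_pil(array: numpy.ndarray) -> Image.Image:
    """BGR OpenCV array -> RGB PIL image."""
    return Image.fromarray(cv.cvtColor(array, cv.COLOR_BGR2RGB))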

@@ -0,0 +1,248 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from functools import partial
from typing import Literal, Optional, Union, get_args
import torch
from diffusers import ControlNetModel
from pydantic import BaseModel, Field
from invokeai.app.models.image import (ColorField, ImageCategory, ImageField,
ResourceOrigin)
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from invokeai.backend.generator.inpaint import infill_methods
from ...backend.generator import Inpaint, InvokeAIGenerator
from ...backend.stable_diffusion import PipelineIntermediateState
from ..util.step_callback import stable_diffusion_step_callback
from .baseinvocation import BaseInvocation, InvocationConfig, InvocationContext
from .image import ImageOutput
import re
from ...backend.model_management.lora import ModelPatcher
from ...backend.stable_diffusion.diffusers_pipeline import StableDiffusionGeneratorPipeline
from .model import UNetField, VaeField
from .compel import ConditioningField
from contextlib import contextmanager, ExitStack, ContextDecorator
SAMPLER_NAME_VALUES = Literal[tuple(InvokeAIGenerator.schedulers())]
INFILL_METHODS = Literal[tuple(infill_methods())]
DEFAULT_INFILL_METHOD = (
"patchmatch" if "patchmatch" in get_args(INFILL_METHODS) else "tile"
)
from .latent import get_scheduler
class OldModelContext(ContextDecorator):
model: StableDiffusionGeneratorPipeline
def __init__(self, model):
self.model = model
def __enter__(self):
return self.model
def __exit__(self, *exc):
return False
class OldModelInfo:
name: str
hash: str
context: OldModelContext
def __init__(self, name: str, hash: str, model: StableDiffusionGeneratorPipeline):
self.name = name
self.hash = hash
self.context = OldModelContext(
model=model,
)
class InpaintInvocation(BaseInvocation):
"""Generates an image using inpaint."""
type: Literal["inpaint"] = "inpaint"
positive_conditioning: Optional[ConditioningField] = Field(description="Positive conditioning for generation")
negative_conditioning: Optional[ConditioningField] = Field(description="Negative conditioning for generation")
seed: int = Field(ge=0, le=SEED_MAX, description="The seed to use (omit for random)", default_factory=get_random_seed)
steps: int = Field(default=30, gt=0, description="The number of steps to use to generate the image")
width: int = Field(default=512, multiple_of=8, gt=0, description="The width of the resulting image", )
height: int = Field(default=512, multiple_of=8, gt=0, description="The height of the resulting image", )
cfg_scale: float = Field(default=7.5, ge=1, description="The Classifier-Free Guidance scale; higher values keep the result closer to the prompt", )
scheduler: SAMPLER_NAME_VALUES = Field(default="euler", description="The scheduler to use" )
unet: UNetField = Field(default=None, description="UNet model")
vae: VaeField = Field(default=None, description="Vae model")
# Inputs
image: Union[ImageField, None] = Field(description="The input image")
strength: float = Field(
default=0.75, gt=0, le=1, description="The strength of the original image"
)
fit: bool = Field(
default=True,
description="Whether or not the result should be fit to the aspect ratio of the input image",
)
# Inputs
mask: Union[ImageField, None] = Field(description="The mask")
seam_size: int = Field(default=96, ge=1, description="The seam inpaint size (px)")
seam_blur: int = Field(
default=16, ge=0, description="The seam inpaint blur radius (px)"
)
seam_strength: float = Field(
default=0.75, gt=0, le=1, description="The seam inpaint strength"
)
seam_steps: int = Field(
default=30, ge=1, description="The number of steps to use for seam inpaint"
)
tile_size: int = Field(
default=32, ge=1, description="The tile infill method size (px)"
)
infill_method: INFILL_METHODS = Field(
default=DEFAULT_INFILL_METHOD,
description="The method used to infill empty regions (px)",
)
inpaint_width: Optional[int] = Field(
default=None,
multiple_of=8,
gt=0,
description="The width of the inpaint region (px)",
)
inpaint_height: Optional[int] = Field(
default=None,
multiple_of=8,
gt=0,
description="The height of the inpaint region (px)",
)
inpaint_fill: Optional[ColorField] = Field(
default=ColorField(r=127, g=127, b=127, a=255),
description="The solid infill method color",
)
inpaint_replace: float = Field(
default=0.0,
ge=0.0,
le=1.0,
description="The amount by which to replace masked areas with latent noise",
)
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["stable-diffusion", "image"],
},
}
def dispatch_progress(
self,
context: InvocationContext,
source_node_id: str,
intermediate_state: PipelineIntermediateState,
) -> None:
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
node=self.dict(),
source_node_id=source_node_id,
)
def get_conditioning(self, context):
c, extra_conditioning_info = context.services.latents.get(self.positive_conditioning.conditioning_name)
uc, _ = context.services.latents.get(self.negative_conditioning.conditioning_name)
return (uc, c, extra_conditioning_info)
@contextmanager
def load_model_old_way(self, context, scheduler):
unet_info = context.services.model_manager.get_model(**self.unet.unet.dict())
vae_info = context.services.model_manager.get_model(**self.vae.vae.dict())
#unet = unet_info.context.model
#vae = vae_info.context.model
with ExitStack() as stack:
loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
with vae_info as vae,\
unet_info as unet,\
ModelPatcher.apply_lora_unet(unet, loras):
device = context.services.model_manager.mgr.cache.execution_device
dtype = context.services.model_manager.mgr.cache.precision
pipeline = StableDiffusionGeneratorPipeline(
vae=vae,
text_encoder=None,
tokenizer=None,
unet=unet,
scheduler=scheduler,
safety_checker=None,
feature_extractor=None,
requires_safety_checker=False,
precision="float16" if dtype == torch.float16 else "float32",
execution_device=device,
)
yield OldModelInfo(
name=self.unet.unet.model_name,
hash="<NO-HASH>",
model=pipeline,
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = (
None
if self.image is None
else context.services.images.get_pil_image(self.image.image_name)
)
mask = (
None
if self.mask is None
else context.services.images.get_pil_image(self.mask.image_name)
)
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(
context.graph_execution_state_id
)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
conditioning = self.get_conditioning(context)
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
)
with self.load_model_old_way(context, scheduler) as model:
outputs = Inpaint(model).generate(
conditioning=conditioning,
scheduler=scheduler,
init_image=image,
mask_image=mask,
step_callback=partial(self.dispatch_progress, context, source_node_id),
**self.dict(
exclude={"positive_conditioning", "negative_conditioning", "scheduler", "image", "mask"}
), # Shorthand for passing all of the parameters above manually
)
# Outputs is an infinite iterator that will return a new InvokeAIGeneratorOutput object
# each time it is called. We only need the first one.
generator_output = next(outputs)
image_dto = context.services.images.create(
image=generator_output.image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
session_id=context.graph_execution_state_id,
node_id=self.id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
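The **self.dict(exclude={...}) call above is pydantic v1's shorthand for forwarding every remaining field as a keyword argument; a toy sketch of the pattern (all names here are illustrative):

from typing import Optional
from pydantic import BaseModel

class Params(BaseModel):
    steps: int = 30
    strength: float = 0.75
    image: Optional[str] = None

def generate(steps: int, strength: float, **kwargs) -> str:
    return f"{steps} steps at strength {strength}"

p = Params()
assert generate(**p.dict(exclude={"image"})) == "30 steps at strength 0.75"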

@@ -0,0 +1,547 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
import io
from typing import Literal, Optional, Union
import numpy
from PIL import Image, ImageFilter, ImageOps, ImageChops
from pydantic import BaseModel, Field
from ..models.image import ImageCategory, ImageField, ResourceOrigin
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InvocationContext,
InvocationConfig,
)
class PILInvocationConfig(BaseModel):
"""Helper class to provide all PIL invocations with additional config"""
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["PIL", "image"],
},
}
class ImageOutput(BaseInvocationOutput):
"""Base class for invocations that output an image"""
# fmt: off
type: Literal["image_output"] = "image_output"
image: ImageField = Field(default=None, description="The output image")
width: int = Field(description="The width of the image in pixels")
height: int = Field(description="The height of the image in pixels")
# fmt: on
class Config:
schema_extra = {"required": ["type", "image", "width", "height"]}
class MaskOutput(BaseInvocationOutput):
"""Base class for invocations that output a mask"""
# fmt: off
type: Literal["mask"] = "mask"
mask: ImageField = Field(default=None, description="The output mask")
width: int = Field(description="The width of the mask in pixels")
height: int = Field(description="The height of the mask in pixels")
# fmt: on
class Config:
schema_extra = {
"required": [
"type",
"mask",
]
}
class LoadImageInvocation(BaseInvocation):
"""Load an image and provide it as output."""
# fmt: off
type: Literal["load_image"] = "load_image"
# Inputs
image: Union[ImageField, None] = Field(
default=None, description="The image to load"
)
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
return ImageOutput(
image=ImageField(image_name=self.image.image_name),
width=image.width,
height=image.height,
)
class ShowImageInvocation(BaseInvocation):
"""Displays a provided image, and passes it forward in the pipeline."""
type: Literal["show_image"] = "show_image"
# Inputs
image: Union[ImageField, None] = Field(
default=None, description="The image to show"
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
if image:
image.show()
# TODO: how to handle failure?
return ImageOutput(
image=ImageField(image_name=self.image.image_name),
width=image.width,
height=image.height,
)
class ImageCropInvocation(BaseInvocation, PILInvocationConfig):
"""Crops an image to a specified box. The box can be outside of the image."""
# fmt: off
type: Literal["img_crop"] = "img_crop"
# Inputs
image: Union[ImageField, None] = Field(default=None, description="The image to crop")
x: int = Field(default=0, description="The left x coordinate of the crop rectangle")
y: int = Field(default=0, description="The top y coordinate of the crop rectangle")
width: int = Field(default=512, gt=0, description="The width of the crop rectangle")
height: int = Field(default=512, gt=0, description="The height of the crop rectangle")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image_crop = Image.new(
mode="RGBA", size=(self.width, self.height), color=(0, 0, 0, 0)
)
image_crop.paste(image, (-self.x, -self.y))
image_dto = context.services.images.create(
image=image_crop,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
class ImagePasteInvocation(BaseInvocation, PILInvocationConfig):
"""Pastes an image into another image."""
# fmt: off
type: Literal["img_paste"] = "img_paste"
# Inputs
base_image: Union[ImageField, None] = Field(default=None, description="The base image")
image: Union[ImageField, None] = Field(default=None, description="The image to paste")
mask: Optional[ImageField] = Field(default=None, description="The mask to use when pasting")
x: int = Field(default=0, description="The left x coordinate at which to paste the image")
y: int = Field(default=0, description="The top y coordinate at which to paste the image")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
base_image = context.services.images.get_pil_image(self.base_image.image_name)
image = context.services.images.get_pil_image(self.image.image_name)
mask = (
None
if self.mask is None
else ImageOps.invert(
context.services.images.get_pil_image(self.mask.image_name)
)
)
# TODO: probably shouldn't invert mask here... should user be required to do it?
min_x = min(0, self.x)
min_y = min(0, self.y)
max_x = max(base_image.width, image.width + self.x)
max_y = max(base_image.height, image.height + self.y)
new_image = Image.new(
mode="RGBA", size=(max_x - min_x, max_y - min_y), color=(0, 0, 0, 0)
)
new_image.paste(base_image, (abs(min_x), abs(min_y)))
new_image.paste(image, (max(0, self.x), max(0, self.y)), mask=mask)
image_dto = context.services.images.create(
image=new_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
class MaskFromAlphaInvocation(BaseInvocation, PILInvocationConfig):
"""Extracts the alpha channel of an image as a mask."""
# fmt: off
type: Literal["tomask"] = "tomask"
# Inputs
image: Union[ImageField, None] = Field(default=None, description="The image to create the mask from")
invert: bool = Field(default=False, description="Whether or not to invert the mask")
# fmt: on
def invoke(self, context: InvocationContext) -> MaskOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image_mask = image.split()[-1]
if self.invert:
image_mask = ImageOps.invert(image_mask)
image_dto = context.services.images.create(
image=image_mask,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.MASK,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return MaskOutput(
mask=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
class ImageMultiplyInvocation(BaseInvocation, PILInvocationConfig):
"""Multiplies two images together using `PIL.ImageChops.multiply()`."""
# fmt: off
type: Literal["img_mul"] = "img_mul"
# Inputs
image1: Union[ImageField, None] = Field(default=None, description="The first image to multiply")
image2: Union[ImageField, None] = Field(default=None, description="The second image to multiply")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image1 = context.services.images.get_pil_image(self.image1.image_name)
image2 = context.services.images.get_pil_image(self.image2.image_name)
multiply_image = ImageChops.multiply(image1, image2)
image_dto = context.services.images.create(
image=multiply_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
class ImageChannelInvocation(BaseInvocation, PILInvocationConfig):
"""Gets a channel from an image."""
# fmt: off
type: Literal["img_chan"] = "img_chan"
# Inputs
image: Union[ImageField, None] = Field(default=None, description="The image to get the channel from")
channel: IMAGE_CHANNELS = Field(default="A", description="The channel to get")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
channel_image = image.getchannel(self.channel)
image_dto = context.services.images.create(
image=channel_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
class ImageConvertInvocation(BaseInvocation, PILInvocationConfig):
"""Converts an image to a different mode."""
# fmt: off
type: Literal["img_conv"] = "img_conv"
# Inputs
image: Union[ImageField, None] = Field(default=None, description="The image to convert")
mode: IMAGE_MODES = Field(default="L", description="The mode to convert to")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
converted_image = image.convert(self.mode)
image_dto = context.services.images.create(
image=converted_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
class ImageBlurInvocation(BaseInvocation, PILInvocationConfig):
"""Blurs an image"""
# fmt: off
type: Literal["img_blur"] = "img_blur"
# Inputs
image: Union[ImageField, None] = Field(default=None, description="The image to blur")
radius: float = Field(default=8.0, ge=0, description="The blur radius")
blur_type: Literal["gaussian", "box"] = Field(default="gaussian", description="The type of blur")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
blur = (
ImageFilter.GaussianBlur(self.radius)
if self.blur_type == "gaussian"
else ImageFilter.BoxBlur(self.radius)
)
blur_image = image.filter(blur)
image_dto = context.services.images.create(
image=blur_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
PIL_RESAMPLING_MODES = Literal[
"nearest",
"box",
"bilinear",
"hamming",
"bicubic",
"lanczos",
]
PIL_RESAMPLING_MAP = {
"nearest": Image.Resampling.NEAREST,
"box": Image.Resampling.BOX,
"bilinear": Image.Resampling.BILINEAR,
"hamming": Image.Resampling.HAMMING,
"bicubic": Image.Resampling.BICUBIC,
"lanczos": Image.Resampling.LANCZOS,
}
class ImageResizeInvocation(BaseInvocation, PILInvocationConfig):
"""Resizes an image to specific dimensions"""
# fmt: off
type: Literal["img_resize"] = "img_resize"
# Inputs
image: Union[ImageField, None] = Field(default=None, description="The image to resize")
width: int = Field(ge=64, multiple_of=8, description="The width to resize to (px)")
height: int = Field(ge=64, multiple_of=8, description="The height to resize to (px)")
resample_mode: PIL_RESAMPLING_MODES = Field(default="bicubic", description="The resampling mode")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
resample_mode = PIL_RESAMPLING_MAP[self.resample_mode]
resize_image = image.resize(
(self.width, self.height),
resample=resample_mode,
)
image_dto = context.services.images.create(
image=resize_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
class ImageScaleInvocation(BaseInvocation, PILInvocationConfig):
"""Scales an image by a factor"""
# fmt: off
type: Literal["img_scale"] = "img_scale"
# Inputs
image: Union[ImageField, None] = Field(default=None, description="The image to scale")
scale_factor: float = Field(gt=0, description="The factor by which to scale the image")
resample_mode: PIL_RESAMPLING_MODES = Field(default="bicubic", description="The resampling mode")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
resample_mode = PIL_RESAMPLING_MAP[self.resample_mode]
width = int(image.width * self.scale_factor)
height = int(image.height * self.scale_factor)
resize_image = image.resize(
(width, height),
resample=resample_mode,
)
image_dto = context.services.images.create(
image=resize_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
class ImageLerpInvocation(BaseInvocation, PILInvocationConfig):
"""Linear interpolation of all pixels of an image"""
# fmt: off
type: Literal["img_lerp"] = "img_lerp"
# Inputs
image: Union[ImageField, None] = Field(default=None, description="The image to lerp")
min: int = Field(default=0, ge=0, le=255, description="The minimum output value")
max: int = Field(default=255, ge=0, le=255, description="The maximum output value")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image_arr = numpy.asarray(image, dtype=numpy.float32) / 255
image_arr = image_arr * (self.max - self.min) + self.min
lerp_image = Image.fromarray(numpy.uint8(image_arr))
image_dto = context.services.images.create(
image=lerp_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
class ImageInverseLerpInvocation(BaseInvocation, PILInvocationConfig):
"""Inverse linear interpolation of all pixels of an image"""
# fmt: off
type: Literal["img_ilerp"] = "img_ilerp"
# Inputs
image: Union[ImageField, None] = Field(default=None, description="The image to lerp")
min: int = Field(default=0, ge=0, le=255, description="The minimum input value")
max: int = Field(default=255, ge=0, le=255, description="The maximum input value")
# fmt: on
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image_arr = numpy.asarray(image, dtype=numpy.float32)
image_arr = (
numpy.minimum(
numpy.maximum(image_arr - self.min, 0) / float(self.max - self.min), 1
)
* 255
)
ilerp_image = Image.fromarray(numpy.uint8(image_arr))
image_dto = context.services.images.create(
image=ilerp_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
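
A quick standalone check of the lerp / inverse-lerp pixel math used by the two nodes above (a sketch using plain numpy; no InvokeAI services are involved and the values are illustrative):

import numpy

arr = numpy.array([0.0, 64.0, 128.0, 255.0], dtype=numpy.float32)

# ImageLerpInvocation: map [0, 255] onto [min, max]
lo, hi = 32, 224
lerped = (arr / 255) * (hi - lo) + lo  # approximately [32.0, 80.2, 128.4, 224.0]

# ImageInverseLerpInvocation: map [min, max] back onto [0, 255], clamped
recovered = numpy.minimum(numpy.maximum(lerped - lo, 0) / float(hi - lo), 1) * 255
print(recovered)  # approximately [0.0, 64.0, 128.0, 255.0] -- the round trip recovers the input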


@@ -0,0 +1,230 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team
from typing import Literal, Union, get_args
import numpy as np
import math
from PIL import Image, ImageOps
from pydantic import Field
from invokeai.app.invocations.image import ImageOutput
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from invokeai.backend.image_util.patchmatch import PatchMatch
from ..models.image import ColorField, ImageCategory, ImageField, ResourceOrigin
from .baseinvocation import (
BaseInvocation,
InvocationContext,
)
def infill_methods() -> list[str]:
methods = [
"tile",
"solid",
]
if PatchMatch.patchmatch_available():
methods.insert(0, "patchmatch")
return methods
INFILL_METHODS = Literal[tuple(infill_methods())]
DEFAULT_INFILL_METHOD = (
"patchmatch" if "patchmatch" in get_args(INFILL_METHODS) else "tile"
)
def infill_patchmatch(im: Image.Image) -> Image.Image:
if im.mode != "RGBA":
return im
# Skip patchmatch if patchmatch isn't available
if not PatchMatch.patchmatch_available():
return im
# Patchmatch (note, we may want to expose patch_size? Increasing it significantly impacts performance though)
im_patched_np = PatchMatch.inpaint(
im.convert("RGB"), ImageOps.invert(im.split()[-1]), patch_size=3
)
im_patched = Image.fromarray(im_patched_np, mode="RGB")
return im_patched
def get_tile_images(image: np.ndarray, width=8, height=8):
_nrows, _ncols, depth = image.shape
_strides = image.strides
nrows, _m = divmod(_nrows, height)
ncols, _n = divmod(_ncols, width)
if _m != 0 or _n != 0:
return None
return np.lib.stride_tricks.as_strided(
np.ravel(image),
shape=(nrows, ncols, height, width, depth),
strides=(height * _strides[0], width * _strides[1], *_strides),
writeable=False,
)
def tile_fill_missing(
im: Image.Image, tile_size: int = 16, seed: Union[int, None] = None
) -> Image.Image:
# Only fill if there's an alpha layer
if im.mode != "RGBA":
return im
a = np.asarray(im, dtype=np.uint8)
tile_size_tuple = (tile_size, tile_size)
# Get the image as tiles of a specified size
tiles = get_tile_images(a, *tile_size_tuple).copy()
# Get the mask as tiles
tiles_mask = tiles[:, :, :, :, 3]
# Build a validity mask over tiles: True where every pixel has nonzero alpha (tiles containing transparent pixels are replaced below)
tmask_shape = tiles_mask.shape
tiles_mask = tiles_mask.reshape(math.prod(tiles_mask.shape))
n, ny = (math.prod(tmask_shape[0:2])), math.prod(tmask_shape[2:])
tiles_mask = tiles_mask > 0
tiles_mask = tiles_mask.reshape((n, ny)).all(axis=1)
# Get RGB tiles in single array and filter by the mask
tshape = tiles.shape
tiles_all = tiles.reshape((math.prod(tiles.shape[0:2]), *tiles.shape[2:]))
filtered_tiles = tiles_all[tiles_mask]
if len(filtered_tiles) == 0:
return im
# Find all invalid tiles and replace with a random valid tile
replace_count = np.logical_not(tiles_mask).sum()
rng = np.random.default_rng(seed=seed)
tiles_all[np.logical_not(tiles_mask)] = filtered_tiles[
rng.choice(filtered_tiles.shape[0], replace_count), :, :, :
]
# Convert back to an image
tiles_all = tiles_all.reshape(tshape)
tiles_all = tiles_all.swapaxes(1, 2)
st = tiles_all.reshape(
(
math.prod(tiles_all.shape[0:2]),
math.prod(tiles_all.shape[2:4]),
tiles_all.shape[4],
)
)
si = Image.fromarray(st, mode="RGBA")
return si
class InfillColorInvocation(BaseInvocation):
"""Infills transparent areas of an image with a solid color"""
type: Literal["infill_rgba"] = "infill_rgba"
image: Union[ImageField, None] = Field(
default=None, description="The image to infill"
)
color: ColorField = Field(
default=ColorField(r=127, g=127, b=127, a=255),
description="The color to use to infill",
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
solid_bg = Image.new("RGBA", image.size, self.color.tuple())
infilled = Image.alpha_composite(solid_bg, image.convert("RGBA"))
infilled.paste(image, (0, 0), image.split()[-1])
image_dto = context.services.images.create(
image=infilled,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
class InfillTileInvocation(BaseInvocation):
"""Infills transparent areas of an image with tiles of the image"""
type: Literal["infill_tile"] = "infill_tile"
image: Union[ImageField, None] = Field(
default=None, description="The image to infill"
)
tile_size: int = Field(default=32, ge=1, description="The tile size (px)")
seed: int = Field(
ge=0,
le=SEED_MAX,
description="The seed to use for tile generation (omit for random)",
default_factory=get_random_seed,
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
infilled = tile_fill_missing(
image.copy(), seed=self.seed, tile_size=self.tile_size
)
infilled.paste(image, (0, 0), image.split()[-1])
image_dto = context.services.images.create(
image=infilled,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
class InfillPatchMatchInvocation(BaseInvocation):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
type: Literal["infill_patchmatch"] = "infill_patchmatch"
image: Union[ImageField, None] = Field(
default=None, description="The image to infill"
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
if PatchMatch.patchmatch_available():
infilled = infill_patchmatch(image.copy())
else:
raise ValueError("PatchMatch is not available on this system")
image_dto = context.services.images.create(
image=infilled,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
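
The tile infill path above can be exercised directly on a synthetic image for a quick local check (a sketch assuming only PIL and numpy are available; PatchMatch is not required for the tile method):

from PIL import Image

im = Image.new("RGBA", (64, 64), (255, 0, 0, 255))
im.paste(Image.new("RGBA", (16, 16), (0, 0, 0, 0)), (16, 16))  # punch a fully transparent hole
filled = tile_fill_missing(im, tile_size=16, seed=0)
print(filled.mode, filled.size)  # RGBA (64, 64) -- the hole tile is filled from a valid tile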


@@ -0,0 +1,674 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
from contextlib import ExitStack
from typing import List, Literal, Optional, Union
import einops
from pydantic import BaseModel, Field, validator
import torch
from diffusers import ControlNetModel, DPMSolverMultistepScheduler
from diffusers.image_processor import VaeImageProcessor
from diffusers.schedulers import SchedulerMixin as Scheduler
from invokeai.app.util.misc import SEED_MAX, get_random_seed
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from ..models.image import ImageCategory, ImageField, ResourceOrigin
from ...backend.image_util.seamless import configure_model_padding
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.stable_diffusion.diffusers_pipeline import (
ConditioningData, ControlNetData, StableDiffusionGeneratorPipeline,
image_resized_to_grid_as_tensor)
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import \
PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_torch_device, torch_dtype
from ...backend.model_management.lora import ModelPatcher
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
InvocationConfig, InvocationContext)
from .compel import ConditioningField
from .controlnet_image_processors import ControlField
from .image import ImageOutput
from .model import ModelInfo, UNetField, VaeField
class LatentsField(BaseModel):
"""A latents field used for passing latents between invocations"""
latents_name: Optional[str] = Field(default=None, description="The name of the latents")
class Config:
schema_extra = {"required": ["latents_name"]}
class LatentsOutput(BaseInvocationOutput):
"""Base class for invocations that output latents"""
#fmt: off
type: Literal["latents_output"] = "latents_output"
# Inputs
latents: LatentsField = Field(default=None, description="The output latents")
width: int = Field(description="The width of the latents in pixels")
height: int = Field(description="The height of the latents in pixels")
#fmt: on
def build_latents_output(latents_name: str, latents: torch.Tensor):
return LatentsOutput(
latents=LatentsField(latents_name=latents_name),
width=latents.size()[3] * 8,
height=latents.size()[2] * 8,
)
class NoiseOutput(BaseInvocationOutput):
"""Invocation noise output"""
#fmt: off
type: Literal["noise_output"] = "noise_output"
# Inputs
noise: LatentsField = Field(default=None, description="The output noise")
width: int = Field(description="The width of the noise in pixels")
height: int = Field(description="The height of the noise in pixels")
#fmt: on
def build_noise_output(latents_name: str, latents: torch.Tensor):
return NoiseOutput(
noise=LatentsField(latents_name=latents_name),
width=latents.size()[3] * 8,
height=latents.size()[2] * 8,
)
SAMPLER_NAME_VALUES = Literal[
tuple(list(SCHEDULER_MAP.keys()))
]
def get_scheduler(
context: InvocationContext,
scheduler_info: ModelInfo,
scheduler_name: str,
) -> Scheduler:
scheduler_class, scheduler_extra_config = SCHEDULER_MAP.get(scheduler_name, SCHEDULER_MAP['ddim'])
orig_scheduler_info = context.services.model_manager.get_model(**scheduler_info.dict())
with orig_scheduler_info as orig_scheduler:
scheduler_config = orig_scheduler.config
if "_backup" in scheduler_config:
scheduler_config = scheduler_config["_backup"]
scheduler_config = {**scheduler_config, **scheduler_extra_config, "_backup": scheduler_config}
scheduler = scheduler_class.from_config(scheduler_config)
# hack copied over from generate.py
if not hasattr(scheduler, 'uses_inpainting_model'):
scheduler.uses_inpainting_model = lambda: False
return scheduler
def get_noise(width:int, height:int, device:torch.device, seed:int = 0, latent_channels:int=4, use_mps_noise:bool=False, downsampling_factor:int = 8):
# limit noise to only the diffusion image channels, not the mask channels
input_channels = min(latent_channels, 4)
use_device = "cpu" if (use_mps_noise or device.type == "mps") else device
generator = torch.Generator(device=use_device).manual_seed(seed)
x = torch.randn(
[
1,
input_channels,
height // downsampling_factor,
width // downsampling_factor,
],
dtype=torch_dtype(device),
device=use_device,
generator=generator,
).to(device)
# if self.perlin > 0.0:
# perlin_noise = self.get_perlin_noise(
# width // self.downsampling_factor, height // self.downsampling_factor
# )
# x = (1 - self.perlin) * x + self.perlin * perlin_noise
return x
class NoiseInvocation(BaseInvocation):
"""Generates latent noise."""
type: Literal["noise"] = "noise"
# Inputs
seed: int = Field(ge=0, le=SEED_MAX, description="The seed to use", default_factory=get_random_seed)
width: int = Field(default=512, multiple_of=8, gt=0, description="The width of the resulting noise", )
height: int = Field(default=512, multiple_of=8, gt=0, description="The height of the resulting noise", )
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["latents", "noise"],
},
}
@validator("seed", pre=True)
def modulo_seed(cls, v):
"""Returns the seed modulo SEED_MAX to ensure it is within the valid range."""
return v % SEED_MAX
def invoke(self, context: InvocationContext) -> NoiseOutput:
device = torch.device(choose_torch_device())
noise = get_noise(self.width, self.height, device, self.seed)
name = f'{context.graph_execution_state_id}__{self.id}'
context.services.latents.save(name, noise)
return build_noise_output(latents_name=name, latents=noise)
# Text to image
class TextToLatentsInvocation(BaseInvocation):
"""Generates latents from conditionings."""
type: Literal["t2l"] = "t2l"
# Inputs
# fmt: off
positive_conditioning: Optional[ConditioningField] = Field(description="Positive conditioning for generation")
negative_conditioning: Optional[ConditioningField] = Field(description="Negative conditioning for generation")
noise: Optional[LatentsField] = Field(description="The noise to use")
steps: int = Field(default=10, gt=0, description="The number of steps to use to generate the image")
cfg_scale: Union[float, List[float]] = Field(default=7.5, ge=1, description="The Classifier-Free Guidance, higher values may result in a result closer to the prompt", )
scheduler: SAMPLER_NAME_VALUES = Field(default="euler", description="The scheduler to use" )
unet: UNetField = Field(default=None, description="UNet submodel")
control: Union[ControlField, list[ControlField]] = Field(default=None, description="The control to use")
#seamless: bool = Field(default=False, description="Whether or not to generate an image that can tile without seams", )
#seamless_axes: str = Field(default="", description="The axes to tile the image on, 'x' and/or 'y'")
# fmt: on
@validator("cfg_scale")
def ge_one(cls, v):
"""validate that all cfg_scale values are >= 1"""
if isinstance(v, list):
for i in v:
if i < 1:
raise ValueError('cfg_scale must be greater than or equal to 1')
else:
if v < 1:
raise ValueError('cfg_scale must be greater than or equal to 1')
return v
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["latents"],
"type_hints": {
"model": "model",
"control": "control",
# "cfg_scale": "float",
"cfg_scale": "number"
}
},
}
# TODO: pass this an emitter method or something? or a session for dispatching?
def dispatch_progress(
self, context: InvocationContext, source_node_id: str, intermediate_state: PipelineIntermediateState
) -> None:
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
node=self.dict(),
source_node_id=source_node_id,
)
def get_conditioning_data(self, context: InvocationContext, scheduler) -> ConditioningData:
c, extra_conditioning_info = context.services.latents.get(self.positive_conditioning.conditioning_name)
uc, _ = context.services.latents.get(self.negative_conditioning.conditioning_name)
conditioning_data = ConditioningData(
unconditioned_embeddings=uc,
text_embeddings=c,
guidance_scale=self.cfg_scale,
extra=extra_conditioning_info,
postprocessing_settings=PostprocessingSettings(
threshold=0.0,#threshold,
warmup=0.2,#warmup,
h_symmetry_time_pct=None,#h_symmetry_time_pct,
v_symmetry_time_pct=None#v_symmetry_time_pct,
),
)
conditioning_data = conditioning_data.add_scheduler_args_if_applicable(
scheduler,
# for ddim scheduler
eta=0.0, #ddim_eta
# for ancestral and sde schedulers
generator=torch.Generator(device=uc.device).manual_seed(0),
)
return conditioning_data
def create_pipeline(self, unet, scheduler) -> StableDiffusionGeneratorPipeline:
# TODO:
#configure_model_padding(
# unet,
# self.seamless,
# self.seamless_axes,
#)
class FakeVae:
class FakeVaeConfig:
def __init__(self):
self.block_out_channels = [0]
def __init__(self):
self.config = FakeVae.FakeVaeConfig()
return StableDiffusionGeneratorPipeline(
vae=FakeVae(), # TODO: oh...
text_encoder=None,
tokenizer=None,
unet=unet,
scheduler=scheduler,
safety_checker=None,
feature_extractor=None,
requires_safety_checker=False,
precision="float16" if unet.dtype == torch.float16 else "float32",
)
def prep_control_data(
self,
context: InvocationContext,
model: StableDiffusionGeneratorPipeline, # really only need model for dtype and device
control_input: List[ControlField],
latents_shape: List[int],
do_classifier_free_guidance: bool = True,
) -> List[ControlNetData]:
# assuming fixed dimensional scaling of 8:1 for image:latents
control_height_resize = latents_shape[2] * 8
control_width_resize = latents_shape[3] * 8
if control_input is None:
# print("control input is None")
control_list = None
elif isinstance(control_input, list) and len(control_input) == 0:
# print("control input is empty list")
control_list = None
elif isinstance(control_input, ControlField):
# print("control input is ControlField")
control_list = [control_input]
elif isinstance(control_input, list) and len(control_input) > 0 and isinstance(control_input[0], ControlField):
# print("control input is list[ControlField]")
control_list = control_input
else:
# print("input control is unrecognized:", type(self.control))
control_list = None
if (control_list is None):
control_data = None
# from above handling, any control that is not None should now be of type list[ControlField]
else:
# FIXME: add checks to skip entry if model or image is None
# and if weight is None, populate with default 1.0?
control_data = []
control_models = []
for control_info in control_list:
# handle control models
if ("," in control_info.control_model):
control_model_split = control_info.control_model.split(",")
control_name = control_model_split[0]
control_subfolder = control_model_split[1]
print("Using HF model subfolders")
print(" control_name: ", control_name)
print(" control_subfolder: ", control_subfolder)
control_model = ControlNetModel.from_pretrained(control_name,
subfolder=control_subfolder,
torch_dtype=model.unet.dtype).to(model.device)
else:
control_model = ControlNetModel.from_pretrained(control_info.control_model,
torch_dtype=model.unet.dtype).to(model.device)
control_models.append(control_model)
control_image_field = control_info.image
input_image = context.services.images.get_pil_image(control_image_field.image_name)
# self.image.image_type, self.image.image_name
# FIXME: still need to test with different widths, heights, devices, dtypes
# and add in batch_size, num_images_per_prompt?
# and do real check for classifier_free_guidance?
# prepare_control_image should return torch.Tensor of shape(batch_size, 3, height, width)
control_image = model.prepare_control_image(
image=input_image,
do_classifier_free_guidance=do_classifier_free_guidance,
width=control_width_resize,
height=control_height_resize,
# batch_size=batch_size * num_images_per_prompt,
# num_images_per_prompt=num_images_per_prompt,
device=control_model.device,
dtype=control_model.dtype,
)
control_item = ControlNetData(model=control_model,
image_tensor=control_image,
weight=control_info.control_weight,
begin_step_percent=control_info.begin_step_percent,
end_step_percent=control_info.end_step_percent)
control_data.append(control_item)
# MultiControlNetModel has been refactored out, just need list[ControlNetData]
return control_data
def invoke(self, context: InvocationContext) -> LatentsOutput:
noise = context.services.latents.get(self.noise.latents_name)
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
def step_callback(state: PipelineIntermediateState):
self.dispatch_progress(context, source_node_id, state)
unet_info = context.services.model_manager.get_model(**self.unet.unet.dict())
with unet_info as unet,\
ExitStack() as stack:
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
)
pipeline = self.create_pipeline(unet, scheduler)
conditioning_data = self.get_conditioning_data(context, scheduler)
loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
control_data = self.prep_control_data(
model=pipeline, context=context, control_input=self.control,
latents_shape=noise.shape,
# do_classifier_free_guidance=(self.cfg_scale >= 1.0))
do_classifier_free_guidance=True,
)
with ModelPatcher.apply_lora_unet(pipeline.unet, loras):
# TODO: Verify the noise is the right size
result_latents, result_attention_map_saver = pipeline.latents_from_embeddings(
latents=torch.zeros_like(noise, dtype=torch_dtype(unet.device)),
noise=noise,
num_inference_steps=self.steps,
conditioning_data=conditioning_data,
control_data=control_data, # list[ControlNetData]
callback=step_callback,
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
torch.cuda.empty_cache()
name = f'{context.graph_execution_state_id}__{self.id}'
context.services.latents.save(name, result_latents)
return build_latents_output(latents_name=name, latents=result_latents)
class LatentsToLatentsInvocation(TextToLatentsInvocation):
"""Generates latents using latents as base image."""
type: Literal["l2l"] = "l2l"
# Inputs
latents: Optional[LatentsField] = Field(description="The latents to use as a base image")
strength: float = Field(default=0.7, ge=0, le=1, description="The strength of the latents to use")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["latents"],
"type_hints": {
"model": "model",
"control": "control",
"cfg_scale": "number",
}
},
}
def invoke(self, context: InvocationContext) -> LatentsOutput:
noise = context.services.latents.get(self.noise.latents_name)
latent = context.services.latents.get(self.latents.latents_name)
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
def step_callback(state: PipelineIntermediateState):
self.dispatch_progress(context, source_node_id, state)
unet_info = context.services.model_manager.get_model(
**self.unet.unet.dict(),
)
with unet_info as unet,\
ExitStack() as stack:
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
)
pipeline = self.create_pipeline(unet, scheduler)
conditioning_data = self.get_conditioning_data(context, scheduler)
control_data = self.prep_control_data(
model=pipeline, context=context, control_input=self.control,
latents_shape=noise.shape,
# do_classifier_free_guidance=(self.cfg_scale >= 1.0))
do_classifier_free_guidance=True,
)
# TODO: Verify the noise is the right size
initial_latents = latent if self.strength < 1.0 else torch.zeros_like(
latent, device=unet.device, dtype=latent.dtype
)
timesteps, _ = pipeline.get_img2img_timesteps(
self.steps,
self.strength,
device=unet.device,
)
loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
with ModelPatcher.apply_lora_unet(pipeline.unet, loras):
result_latents, result_attention_map_saver = pipeline.latents_from_embeddings(
latents=initial_latents,
timesteps=timesteps,
noise=noise,
num_inference_steps=self.steps,
conditioning_data=conditioning_data,
control_data=control_data, # list[ControlNetData]
callback=step_callback
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
torch.cuda.empty_cache()
name = f'{context.graph_execution_state_id}__{self.id}'
context.services.latents.save(name, result_latents)
return build_latents_output(latents_name=name, latents=result_latents)
# Latent to image
class LatentsToImageInvocation(BaseInvocation):
"""Generates an image from latents."""
type: Literal["l2i"] = "l2i"
# Inputs
latents: Optional[LatentsField] = Field(description="The latents to generate an image from")
vae: VaeField = Field(default=None, description="Vae submodel")
tiled: bool = Field(default=False, description="Decode latents by overlaping tiles(less memory consumption)")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["latents", "image"],
},
}
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.services.latents.get(self.latents.latents_name)
vae_info = context.services.model_manager.get_model(
**self.vae.vae.dict(),
)
with vae_info as vae:
if self.tiled or context.services.configuration.tiled_decode:
vae.enable_tiling()
else:
vae.disable_tiling()
# clear memory as vae decode can request a lot
torch.cuda.empty_cache()
with torch.inference_mode():
# copied from diffusers pipeline
latents = latents / vae.config.scaling_factor
image = vae.decode(latents, return_dict=False)[0]
image = (image / 2 + 0.5).clamp(0, 1) # denormalize
# we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
np_image = image.cpu().permute(0, 2, 3, 1).float().numpy()
image = VaeImageProcessor.numpy_to_pil(np_image)[0]
torch.cuda.empty_cache()
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
LATENTS_INTERPOLATION_MODE = Literal[
"nearest", "linear", "bilinear", "bicubic", "trilinear", "area", "nearest-exact"
]
class ResizeLatentsInvocation(BaseInvocation):
"""Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8."""
type: Literal["lresize"] = "lresize"
# Inputs
latents: Optional[LatentsField] = Field(description="The latents to resize")
width: int = Field(ge=64, multiple_of=8, description="The width to resize to (px)")
height: int = Field(ge=64, multiple_of=8, description="The height to resize to (px)")
mode: LATENTS_INTERPOLATION_MODE = Field(default="bilinear", description="The interpolation mode")
antialias: bool = Field(default=False, description="Whether or not to antialias (applied in bilinear and bicubic modes only)")
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.services.latents.get(self.latents.latents_name)
resized_latents = torch.nn.functional.interpolate(
latents,
size=(self.height // 8, self.width // 8),
mode=self.mode,
antialias=self.antialias if self.mode in ["bilinear", "bicubic"] else False,
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
torch.cuda.empty_cache()
name = f"{context.graph_execution_state_id}__{self.id}"
# context.services.latents.set(name, resized_latents)
context.services.latents.save(name, resized_latents)
return build_latents_output(latents_name=name, latents=resized_latents)
class ScaleLatentsInvocation(BaseInvocation):
"""Scales latents by a given factor."""
type: Literal["lscale"] = "lscale"
# Inputs
latents: Optional[LatentsField] = Field(description="The latents to scale")
scale_factor: float = Field(gt=0, description="The factor by which to scale the latents")
mode: LATENTS_INTERPOLATION_MODE = Field(default="bilinear", description="The interpolation mode")
antialias: bool = Field(default=False, description="Whether or not to antialias (applied in bilinear and bicubic modes only)")
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.services.latents.get(self.latents.latents_name)
# resizing
resized_latents = torch.nn.functional.interpolate(
latents,
scale_factor=self.scale_factor,
mode=self.mode,
antialias=self.antialias if self.mode in ["bilinear", "bicubic"] else False,
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
torch.cuda.empty_cache()
name = f"{context.graph_execution_state_id}__{self.id}"
# context.services.latents.set(name, resized_latents)
context.services.latents.save(name, resized_latents)
return build_latents_output(latents_name=name, latents=resized_latents)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
type: Literal["i2l"] = "i2l"
# Inputs
image: Union[ImageField, None] = Field(description="The image to encode")
vae: VaeField = Field(default=None, description="Vae submodel")
tiled: bool = Field(default=False, description="Encode latents by overlaping tiles(less memory consumption)")
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["latents", "image"],
},
}
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
# image = context.services.images.get(
# self.image.image_type, self.image.image_name
# )
image = context.services.images.get_pil_image(self.image.image_name)
#vae_info = context.services.model_manager.get_model(**self.vae.vae.dict())
vae_info = context.services.model_manager.get_model(
**self.vae.vae.dict(),
)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
with vae_info as vae:
if self.tiled:
vae.enable_tiling()
else:
vae.disable_tiling()
# non_noised_latents_from_image
image_tensor = image_tensor.to(device=vae.device, dtype=vae.dtype)
with torch.inference_mode():
image_tensor_dist = vae.encode(image_tensor).latent_dist
latents = image_tensor_dist.sample().to(
dtype=vae.dtype
) # FIXME: uses torch.randn. make reproducible!
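# 0.18215 is the SD v1 VAE scaling factor; the decode path in LatentsToImageInvocation reads it from vae.config.scaling_factor instead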
latents = 0.18215 * latents
name = f"{context.graph_execution_state_id}__{self.id}"
# context.services.latents.set(name, latents)
context.services.latents.save(name, latents)
return build_latents_output(latents_name=name, latents=latents)
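
The noise helper and output builders above run standalone on CPU, which makes the 8x latent-to-pixel scaling and seeded reproducibility easy to verify (a sketch; no models or services needed, and torch_dtype resolves to float32 on CPU):

import torch

a = get_noise(width=512, height=512, device=torch.device("cpu"), seed=42)
b = get_noise(width=512, height=512, device=torch.device("cpu"), seed=42)
print(a.shape)            # torch.Size([1, 4, 64, 64]) -- 512 px / 8 per side
print(torch.equal(a, b))  # True: the same seed produces identical noise
out = build_noise_output(latents_name="demo", latents=a)
print(out.width, out.height)  # 512 512 -- latent dims scaled back up by 8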


@@ -0,0 +1,109 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
from typing import Literal
from pydantic import BaseModel, Field
import numpy as np
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InvocationContext,
InvocationConfig,
)
class MathInvocationConfig(BaseModel):
"""Helper class to provide all math invocations with additional config"""
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["math"],
}
}
class IntOutput(BaseInvocationOutput):
"""An integer output"""
# fmt: off
type: Literal["int_output"] = "int_output"
a: int = Field(default=None, description="The output integer")
# fmt: on
class FloatOutput(BaseInvocationOutput):
"""A float output"""
# fmt: off
type: Literal["float_output"] = "float_output"
param: float = Field(default=None, description="The output float")
# fmt: on
class AddInvocation(BaseInvocation, MathInvocationConfig):
"""Adds two numbers"""
# fmt: off
type: Literal["add"] = "add"
a: int = Field(default=0, description="The first number")
b: int = Field(default=0, description="The second number")
# fmt: on
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=self.a + self.b)
class SubtractInvocation(BaseInvocation, MathInvocationConfig):
"""Subtracts two numbers"""
# fmt: off
type: Literal["sub"] = "sub"
a: int = Field(default=0, description="The first number")
b: int = Field(default=0, description="The second number")
# fmt: on
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=self.a - self.b)
class MultiplyInvocation(BaseInvocation, MathInvocationConfig):
"""Multiplies two numbers"""
# fmt: off
type: Literal["mul"] = "mul"
a: int = Field(default=0, description="The first number")
b: int = Field(default=0, description="The second number")
# fmt: on
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=self.a * self.b)
class DivideInvocation(BaseInvocation, MathInvocationConfig):
"""Divides two numbers"""
# fmt: off
type: Literal["div"] = "div"
a: int = Field(default=0, description="The first number")
b: int = Field(default=0, description="The second number")
# fmt: on
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=int(self.a / self.b))
class RandomIntInvocation(BaseInvocation):
"""Outputs a single random integer."""
# fmt: off
type: Literal["rand_int"] = "rand_int"
low: int = Field(default=0, description="The inclusive low value")
high: int = Field(
default=np.iinfo(np.int32).max, description="The exclusive high value"
)
# fmt: on
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=np.random.randint(self.low, self.high))
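
One subtlety worth noting: DivideInvocation truncates toward zero via int(a / b), which differs from Python's floor division for negative operands. A minimal sketch (assuming a BaseInvocation only needs an id to construct, and that the unused context may be passed as None):

print(int(-7 / 2), -7 // 2)  # -3 -4: truncation vs. flooring

result = AddInvocation(id="demo", a=2, b=3).invoke(None)
print(result.a)  # 5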


@@ -0,0 +1,217 @@
from typing import Literal, Optional, Union, List
from pydantic import BaseModel, Field
import copy
from .baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationContext, InvocationConfig
from ...backend.util.devices import choose_torch_device, torch_dtype
from ...backend.model_management import BaseModelType, ModelType, SubModelType
class ModelInfo(BaseModel):
model_name: str = Field(description="Name of the model")
base_model: BaseModelType = Field(description="Base model")
model_type: ModelType = Field(description="Type of the model")
submodel: Optional[SubModelType] = Field(description="The submodel to load, if any")
class LoraInfo(ModelInfo):
weight: float = Field(description="Lora's weight which to use when apply to model")
class UNetField(BaseModel):
unet: ModelInfo = Field(description="Info to load unet submodel")
scheduler: ModelInfo = Field(description="Info to load scheduler submodel")
loras: List[LoraInfo] = Field(description="Loras to apply on model loading")
class ClipField(BaseModel):
tokenizer: ModelInfo = Field(description="Info to load tokenizer submodel")
text_encoder: ModelInfo = Field(description="Info to load text_encoder submodel")
loras: List[LoraInfo] = Field(description="Loras to apply on model loading")
class VaeField(BaseModel):
# TODO: better naming?
vae: ModelInfo = Field(description="Info to load vae submodel")
class ModelLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
#fmt: off
type: Literal["model_loader_output"] = "model_loader_output"
unet: UNetField = Field(default=None, description="UNet submodel")
clip: ClipField = Field(default=None, description="Tokenizer and text_encoder submodels")
vae: VaeField = Field(default=None, description="Vae submodel")
#fmt: on
class PipelineModelField(BaseModel):
"""Pipeline model field"""
model_name: str = Field(description="Name of the model")
base_model: BaseModelType = Field(description="Base model")
class PipelineModelLoaderInvocation(BaseInvocation):
"""Loads a pipeline model, outputting its submodels."""
type: Literal["pipeline_model_loader"] = "pipeline_model_loader"
model: PipelineModelField = Field(description="The model to load")
# TODO: precision?
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["model", "loader"],
"type_hints": {
"model": "model"
}
},
}
def invoke(self, context: InvocationContext) -> ModelLoaderOutput:
base_model = self.model.base_model
model_name = self.model.model_name
model_type = ModelType.Pipeline
# TODO: not found exceptions
if not context.services.model_manager.model_exists(
model_name=model_name,
base_model=base_model,
model_type=model_type,
):
raise Exception(f"Unknown {base_model} {model_type} model: {model_name}")
"""
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.Tokenizer,
):
raise Exception(
f"Failed to find tokenizer submodel in {self.model_name}! Check if model corrupted"
)
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.TextEncoder,
):
raise Exception(
f"Failed to find text_encoder submodel in {self.model_name}! Check if model corrupted"
)
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.UNet,
):
raise Exception(
f"Failed to find unet submodel from {self.model_name}! Check if model corrupted"
)
"""
return ModelLoaderOutput(
unet=UNetField(
unet=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.UNet,
),
scheduler=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Scheduler,
),
loras=[],
),
clip=ClipField(
tokenizer=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Tokenizer,
),
text_encoder=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.TextEncoder,
),
loras=[],
),
vae=VaeField(
vae=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Vae,
),
)
)
class LoraLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
#fmt: off
type: Literal["lora_loader_output"] = "lora_loader_output"
unet: Optional[UNetField] = Field(default=None, description="UNet submodel")
clip: Optional[ClipField] = Field(default=None, description="Tokenizer and text_encoder submodels")
#fmt: on
class LoraLoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
type: Literal["lora_loader"] = "lora_loader"
lora_name: str = Field(description="Lora model name")
weight: float = Field(default=0.75, description="The weight with which to apply the lora")
unet: Optional[UNetField] = Field(description="UNet model for applying lora")
clip: Optional[ClipField] = Field(description="Clip model for applying lora")
def invoke(self, context: InvocationContext) -> LoraLoaderOutput:
if not context.services.model_manager.model_exists(
model_name=self.lora_name,
model_type=ModelType.Lora,
):
raise Exception(f"Unknown lora name: {self.lora_name}!")
if self.unet is not None and any(lora.model_name == self.lora_name for lora in self.unet.loras):
raise Exception(f"Lora \"{self.lora_name}\" already applied to unet")
if self.clip is not None and any(lora.model_name == self.lora_name for lora in self.clip.loras):
raise Exception(f"Lora \"{self.lora_name}\" already applied to clip")
output = LoraLoaderOutput()
if self.unet is not None:
output.unet = copy.deepcopy(self.unet)
output.unet.loras.append(
LoraInfo(
model_name=self.lora_name,
model_type=ModelType.Lora,
submodel=None,
weight=self.weight,
)
)
if self.clip is not None:
output.clip = copy.deepcopy(self.clip)
output.clip.loras.append(
LoraInfo(
model_name=self.lora_name,
model_type=ModelType.Lora,
submodel=None,
weight=self.weight,
)
)
return output
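
The loaders above splat ModelInfo fields into model_manager.get_model(**info.dict()); a small sketch of that round trip (the model name is hypothetical, for illustration only):

info = ModelInfo(
    model_name="example-model",  # hypothetical name
    base_model=BaseModelType.StableDiffusion1,
    model_type=ModelType.Pipeline,
    submodel=SubModelType.UNet,
)
print(info.dict())  # a plain dict, ready to be splatted into get_model(**info.dict())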


@@ -0,0 +1,237 @@
import io
from typing import Literal, Optional, Any
# from PIL.Image import Image
import PIL.Image
from matplotlib.ticker import MaxNLocator
from matplotlib.figure import Figure
from pydantic import BaseModel, Field
import numpy as np
import matplotlib.pyplot as plt
from easing_functions import (
LinearInOut,
QuadEaseInOut, QuadEaseIn, QuadEaseOut,
CubicEaseInOut, CubicEaseIn, CubicEaseOut,
QuarticEaseInOut, QuarticEaseIn, QuarticEaseOut,
QuinticEaseInOut, QuinticEaseIn, QuinticEaseOut,
SineEaseInOut, SineEaseIn, SineEaseOut,
CircularEaseIn, CircularEaseInOut, CircularEaseOut,
ExponentialEaseInOut, ExponentialEaseIn, ExponentialEaseOut,
ElasticEaseIn, ElasticEaseInOut, ElasticEaseOut,
BackEaseIn, BackEaseInOut, BackEaseOut,
BounceEaseIn, BounceEaseInOut, BounceEaseOut)
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InvocationContext,
InvocationConfig,
)
from ...backend.util.logging import InvokeAILogger
from .collections import FloatCollectionOutput
class FloatLinearRangeInvocation(BaseInvocation):
"""Creates a range"""
type: Literal["float_range"] = "float_range"
# Inputs
start: float = Field(default=5, description="The first value of the range")
stop: float = Field(default=10, description="The last value of the range")
steps: int = Field(default=30, description="number of values to interpolate over (including start and stop)")
def invoke(self, context: InvocationContext) -> FloatCollectionOutput:
param_list = list(np.linspace(self.start, self.stop, self.steps))
return FloatCollectionOutput(
collection=param_list
)
EASING_FUNCTIONS_MAP = {
"Linear": LinearInOut,
"QuadIn": QuadEaseIn,
"QuadOut": QuadEaseOut,
"QuadInOut": QuadEaseInOut,
"CubicIn": CubicEaseIn,
"CubicOut": CubicEaseOut,
"CubicInOut": CubicEaseInOut,
"QuarticIn": QuarticEaseIn,
"QuarticOut": QuarticEaseOut,
"QuarticInOut": QuarticEaseInOut,
"QuinticIn": QuinticEaseIn,
"QuinticOut": QuinticEaseOut,
"QuinticInOut": QuinticEaseInOut,
"SineIn": SineEaseIn,
"SineOut": SineEaseOut,
"SineInOut": SineEaseInOut,
"CircularIn": CircularEaseIn,
"CircularOut": CircularEaseOut,
"CircularInOut": CircularEaseInOut,
"ExponentialIn": ExponentialEaseIn,
"ExponentialOut": ExponentialEaseOut,
"ExponentialInOut": ExponentialEaseInOut,
"ElasticIn": ElasticEaseIn,
"ElasticOut": ElasticEaseOut,
"ElasticInOut": ElasticEaseInOut,
"BackIn": BackEaseIn,
"BackOut": BackEaseOut,
"BackInOut": BackEaseInOut,
"BounceIn": BounceEaseIn,
"BounceOut": BounceEaseOut,
"BounceInOut": BounceEaseInOut,
}
EASING_FUNCTION_KEYS: Any = Literal[
tuple(list(EASING_FUNCTIONS_MAP.keys()))
]
# actually I think for now could just use CollectionOutput (which is list[Any])
class StepParamEasingInvocation(BaseInvocation):
"""Experimental per-step parameter easing for denoising steps"""
type: Literal["step_param_easing"] = "step_param_easing"
# Inputs
# fmt: off
easing: EASING_FUNCTION_KEYS = Field(default="Linear", description="The easing function to use")
num_steps: int = Field(default=20, description="number of denoising steps")
start_value: float = Field(default=0.0, description="easing starting value")
end_value: float = Field(default=1.0, description="easing ending value")
start_step_percent: float = Field(default=0.0, description="fraction of steps at which to start easing")
end_step_percent: float = Field(default=1.0, description="fraction of steps after which to end easing")
# if None, then start_value is used prior to easing start
pre_start_value: Optional[float] = Field(default=None, description="value before easing start")
# if None, then end value is used prior to easing end
post_end_value: Optional[float] = Field(default=None, description="value after easing end")
mirror: bool = Field(default=False, description="include mirror of easing function")
# FIXME: add alt_mirror option (alternative to default or mirror), or remove entirely
# alt_mirror: bool = Field(default=False, description="alternative mirroring by dual easing")
show_easing_plot: bool = Field(default=False, description="show easing plot")
# fmt: on
def invoke(self, context: InvocationContext) -> FloatCollectionOutput:
log_diagnostics = False
# convert from start_step_percent to nearest step <= (steps * start_step_percent)
# start_step = int(np.floor(self.num_steps * self.start_step_percent))
start_step = int(np.round(self.num_steps * self.start_step_percent))
# convert from end_step_percent to nearest step >= (steps * end_step_percent)
# end_step = int(np.ceil((self.num_steps - 1) * self.end_step_percent))
end_step = int(np.round((self.num_steps - 1) * self.end_step_percent))
# end_step = int(np.ceil(self.num_steps * self.end_step_percent))
num_easing_steps = end_step - start_step + 1
# num_presteps = max(start_step - 1, 0)
num_presteps = start_step
num_poststeps = self.num_steps - (num_presteps + num_easing_steps)
prelist = list(num_presteps * [self.pre_start_value])
postlist = list(num_poststeps * [self.post_end_value])
if log_diagnostics:
logger = InvokeAILogger.getLogger(name="StepParamEasing")
logger.debug("start_step: " + str(start_step))
logger.debug("end_step: " + str(end_step))
logger.debug("num_easing_steps: " + str(num_easing_steps))
logger.debug("num_presteps: " + str(num_presteps))
logger.debug("num_poststeps: " + str(num_poststeps))
logger.debug("prelist size: " + str(len(prelist)))
logger.debug("postlist size: " + str(len(postlist)))
logger.debug("prelist: " + str(prelist))
logger.debug("postlist: " + str(postlist))
easing_class = EASING_FUNCTIONS_MAP[self.easing]
if log_diagnostics:
logger.debug("easing class: " + str(easing_class))
easing_list = list()
if self.mirror: # "expected" mirroring
# if number of steps is even, squeeze duration down to (number_of_steps)/2
# and create reverse copy of list to append
# if number of steps is odd, squeeze duration down to ceil(number_of_steps/2)
# and create reverse copy of list[1:end-1]
# but if even then number_of_steps/2 === ceil(number_of_steps/2), so can just use ceil always
base_easing_duration = int(np.ceil(num_easing_steps/2.0))
if log_diagnostics: logger.debug("base easing duration: " + str(base_easing_duration))
even_num_steps = (num_easing_steps % 2 == 0) # even number of steps
easing_function = easing_class(start=self.start_value,
end=self.end_value,
duration=base_easing_duration - 1)
base_easing_vals = list()
for step_index in range(base_easing_duration):
easing_val = easing_function.ease(step_index)
base_easing_vals.append(easing_val)
if log_diagnostics:
logger.debug("step_index: " + str(step_index) + ", easing_val: " + str(easing_val))
if even_num_steps:
mirror_easing_vals = list(reversed(base_easing_vals))
else:
mirror_easing_vals = list(reversed(base_easing_vals[0:-1]))
if log_diagnostics:
logger.debug("base easing vals: " + str(base_easing_vals))
logger.debug("mirror easing vals: " + str(mirror_easing_vals))
easing_list = base_easing_vals + mirror_easing_vals
# FIXME: add alt_mirror option (alternative to default or mirror), or remove entirely
# elif self.alt_mirror: # function mirroring (unintuitive behavior (at least to me))
# # half_ease_duration = round(num_easing_steps - 1 / 2)
# half_ease_duration = round((num_easing_steps - 1) / 2)
# easing_function = easing_class(start=self.start_value,
# end=self.end_value,
# duration=half_ease_duration,
# )
#
# mirror_function = easing_class(start=self.end_value,
# end=self.start_value,
# duration=half_ease_duration,
# )
# for step_index in range(num_easing_steps):
# if step_index <= half_ease_duration:
# step_val = easing_function.ease(step_index)
# else:
# step_val = mirror_function.ease(step_index - half_ease_duration)
# easing_list.append(step_val)
# if log_diagnostics: logger.debug(step_index, step_val)
#
else: # no mirroring (default)
easing_function = easing_class(start=self.start_value,
end=self.end_value,
duration=num_easing_steps - 1)
for step_index in range(num_easing_steps):
step_val = easing_function.ease(step_index)
easing_list.append(step_val)
if log_diagnostics:
logger.debug("step_index: " + str(step_index) + ", easing_val: " + str(step_val))
if log_diagnostics:
logger.debug("prelist size: " + str(len(prelist)))
logger.debug("easing_list size: " + str(len(easing_list)))
logger.debug("postlist size: " + str(len(postlist)))
param_list = prelist + easing_list + postlist
if self.show_easing_plot:
plt.figure()
plt.xlabel("Step")
plt.ylabel("Param Value")
plt.title("Per-Step Values Based On Easing: " + self.easing)
plt.bar(range(len(param_list)), param_list)
# plt.plot(param_list)
ax = plt.gca()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
buf = io.BytesIO()
plt.savefig(buf, format='png')
buf.seek(0)
im = PIL.Image.open(buf)
im.show()
buf.close()
# output array of size steps, each entry list[i] is param value for step i
return FloatCollectionOutput(
collection=param_list
)
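
The same schedules can be reproduced directly with the easing_functions package, which is handy when tuning values without the plot (a minimal sketch):

from easing_functions import QuadEaseInOut

num_steps = 5
fn = QuadEaseInOut(start=0.0, end=1.0, duration=num_steps - 1)
print([fn.ease(i) for i in range(num_steps)])  # eased values from 0.0 to 1.0 over 5 steps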


@@ -0,0 +1,28 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
from typing import Literal
from pydantic import Field
from .baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationContext
from .math import IntOutput, FloatOutput
# Pass-through parameter nodes - used by subgraphs
class ParamIntInvocation(BaseInvocation):
"""An integer parameter"""
#fmt: off
type: Literal["param_int"] = "param_int"
a: int = Field(default=0, description="The integer value")
#fmt: on
def invoke(self, context: InvocationContext) -> IntOutput:
return IntOutput(a=self.a)
class ParamFloatInvocation(BaseInvocation):
"""A float parameter"""
#fmt: off
type: Literal["param_float"] = "param_float"
param: float = Field(default=0.0, description="The float value")
#fmt: on
def invoke(self, context: InvocationContext) -> FloatOutput:
return FloatOutput(param=self.param)
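
These pass-through nodes are normally evaluated by the graph executor; a hypothetical direct call, just to show the shape of the outputs (the id values are arbitrary, and context is unused by these nodes):

int_node = ParamIntInvocation(id="1", a=42)
assert int_node.invoke(context=None).a == 42  # context is ignored here

float_node = ParamFloatInvocation(id="2", param=0.5)
assert float_node.invoke(context=None).param == 0.5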

View File

@@ -0,0 +1,57 @@
from typing import Literal, Optional
from pydantic.fields import Field
from .baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationContext
from dynamicprompts.generators import RandomPromptGenerator, CombinatorialPromptGenerator
class PromptOutput(BaseInvocationOutput):
"""Base class for invocations that output a prompt"""
#fmt: off
type: Literal["prompt"] = "prompt"
prompt: Optional[str] = Field(default=None, description="The output prompt")
#fmt: on
class Config:
schema_extra = {
'required': [
'type',
'prompt',
]
}
class PromptCollectionOutput(BaseInvocationOutput):
"""Base class for invocations that output a collection of prompts"""
# fmt: off
type: Literal["prompt_collection_output"] = "prompt_collection_output"
prompt_collection: list[str] = Field(description="The output prompt collection")
count: int = Field(description="The size of the prompt collection")
# fmt: on
class Config:
schema_extra = {"required": ["type", "prompt_collection", "count"]}
class DynamicPromptInvocation(BaseInvocation):
"""Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator"""
type: Literal["dynamic_prompt"] = "dynamic_prompt"
prompt: str = Field(description="The prompt to parse with dynamicprompts")
max_prompts: int = Field(default=1, description="The number of prompts to generate")
combinatorial: bool = Field(
default=False, description="Whether to use the combinatorial generator"
)
def invoke(self, context: InvocationContext) -> PromptCollectionOutput:
if self.combinatorial:
generator = CombinatorialPromptGenerator()
prompts = generator.generate(self.prompt, max_prompts=self.max_prompts)
else:
generator = RandomPromptGenerator()
prompts = generator.generate(self.prompt, num_images=self.max_prompts)
return PromptCollectionOutput(prompt_collection=prompts, count=len(prompts))
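
The two branches map directly onto the dynamicprompts generator API used above; a small sketch with an illustrative template (the {a|b} syntax picks one variant per prompt):

from dynamicprompts.generators import (
    CombinatorialPromptGenerator,
    RandomPromptGenerator,
)

template = "a {red|blue|green} ball"

random_prompts = RandomPromptGenerator().generate(template, num_images=2)
all_prompts = CombinatorialPromptGenerator().generate(template, max_prompts=10)

print(random_prompts)  # e.g. ['a blue ball', 'a red ball']
print(all_prompts)     # all three variants, capped at max_prompts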

View File

@@ -0,0 +1,55 @@
from typing import Literal, Union
from pydantic import Field
from invokeai.app.models.image import ImageCategory, ImageField, ResourceOrigin
from .baseinvocation import BaseInvocation, InvocationContext, InvocationConfig
from .image import ImageOutput
class RestoreFaceInvocation(BaseInvocation):
"""Restores faces in an image."""
# fmt: off
type: Literal["restore_face"] = "restore_face"
# Inputs
image: Union[ImageField, None] = Field(description="The input image")
strength: float = Field(default=0.75, gt=0, le=1, description="The strength of the restoration" )
# fmt: on
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["restoration", "image"],
},
}
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
results = context.services.restoration.upscale_and_reconstruct(
image_list=[[image, 0]],
upscale=None,
strength=self.strength, # GFPGAN strength
save_original=False,
image_callback=None,
)
# Results are image and seed, unwrap for now
# TODO: can this return multiple results?
image_dto = context.services.images.create(
image=results[0][0],
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)

View File

@@ -0,0 +1,57 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Literal, Union
from pydantic import Field
from invokeai.app.models.image import ImageCategory, ImageField, ResourceOrigin
from .baseinvocation import BaseInvocation, InvocationContext, InvocationConfig
from .image import ImageOutput
class UpscaleInvocation(BaseInvocation):
"""Upscales an image."""
# fmt: off
type: Literal["upscale"] = "upscale"
# Inputs
image: Union[ImageField, None] = Field(description="The input image", default=None)
strength: float = Field(default=0.75, gt=0, le=1, description="The strength")
level: Literal[2, 4] = Field(default=2, description="The upscale level")
# fmt: on
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {
"tags": ["upscaling", "image"],
},
}
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
results = context.services.restoration.upscale_and_reconstruct(
image_list=[[image, 0]],
upscale=(self.level, self.strength),
strength=0.0, # GFPGAN strength
save_original=False,
image_callback=None,
)
# Results are image and seed, unwrap for now
# TODO: can this return multiple results?
image_dto = context.services.images.create(
image=results[0][0],
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)

View File

@@ -0,0 +1,3 @@
class CanceledException(Exception):
"""Execution canceled by user."""
pass

View File

@@ -0,0 +1,90 @@
from enum import Enum
from typing import Optional, Tuple
from pydantic import BaseModel, Field
from invokeai.app.util.metaenum import MetaEnum
class ResourceOrigin(str, Enum, metaclass=MetaEnum):
"""The origin of a resource (eg image).
- INTERNAL: The resource was created by the application.
- EXTERNAL: The resource was not created by the application.
This may be a user-initiated upload, or an internal application upload (eg Canvas init image).
"""
INTERNAL = "internal"
"""The resource was created by the application."""
EXTERNAL = "external"
"""The resource was not created by the application.
This may be a user-initiated upload, or an internal application upload (eg Canvas init image).
"""
class InvalidOriginException(ValueError):
"""Raised when a provided value is not a valid ResourceOrigin.
Subclasses `ValueError`.
"""
def __init__(self, message="Invalid resource origin."):
super().__init__(message)
class ImageCategory(str, Enum, metaclass=MetaEnum):
"""The category of an image.
- GENERAL: The image is an output, init image, or otherwise an image without a specialized purpose.
- MASK: The image is a mask image.
- CONTROL: The image is a ControlNet control image.
- USER: The image is a user-provided image.
- OTHER: The image is some other type of image with a specialized purpose. To be used by external nodes.
"""
GENERAL = "general"
"""GENERAL: The image is an output, init image, or otherwise an image without a specialized purpose."""
MASK = "mask"
"""MASK: The image is a mask image."""
CONTROL = "control"
"""CONTROL: The image is a ControlNet control image."""
USER = "user"
"""USER: The image is a user-provide image."""
OTHER = "other"
"""OTHER: The image is some other type of image with a specialized purpose. To be used by external nodes."""
class InvalidImageCategoryException(ValueError):
"""Raised when a provided value is not a valid ImageCategory.
Subclasses `ValueError`.
"""
def __init__(self, message="Invalid image category."):
super().__init__(message)
class ImageField(BaseModel):
"""An image field used for passing image objects between invocations"""
image_name: Optional[str] = Field(default=None, description="The name of the image")
class Config:
schema_extra = {"required": ["image_name"]}
class ColorField(BaseModel):
r: int = Field(ge=0, le=255, description="The red component")
g: int = Field(ge=0, le=255, description="The green component")
b: int = Field(ge=0, le=255, description="The blue component")
a: int = Field(ge=0, le=255, description="The alpha component")
def tuple(self) -> Tuple[int, int, int, int]:
return (self.r, self.g, self.b, self.a)
class ProgressImage(BaseModel):
"""The progress image sent intermittently during processing"""
width: int = Field(description="The effective width of the image in pixels")
height: int = Field(description="The effective height of the image in pixels")
dataURL: str = Field(description="The image data as a b64 data URL")
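A quick illustration of how these field models are constructed and validated (the values and image name are arbitrary):

color = ColorField(r=255, g=128, b=0, a=255)
assert color.tuple() == (255, 128, 0, 255)

image_ref = ImageField(image_name="example.png")  # hypothetical image name
origin = ResourceOrigin("internal")               # str-valued enum round-trips
assert origin is ResourceOrigin.INTERNAL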

View File

@@ -0,0 +1,93 @@
from typing import Optional, Union, List
from pydantic import BaseModel, Extra, Field, StrictFloat, StrictInt, StrictStr
class ImageMetadata(BaseModel):
"""
Core generation metadata for an image/tensor generated in InvokeAI.
Also includes any metadata from the image's PNG tEXt chunks.
Generated by traversing the execution graph, collecting the parameters of the nearest ancestors
of a given node.
Full metadata may be accessed by querying for the session in the `graph_executions` table.
"""
class Config:
extra = Extra.allow
"""
This lets the ImageMetadata class accept arbitrary additional fields. The CoreMetadataService
won't add any fields that are not already defined, but a different metadata service
implementation might.
"""
type: Optional[StrictStr] = Field(
default=None,
description="The type of the ancestor node of the image output node.",
)
"""The type of the ancestor node of the image output node."""
positive_conditioning: Optional[StrictStr] = Field(
default=None, description="The positive conditioning."
)
"""The positive conditioning"""
negative_conditioning: Optional[StrictStr] = Field(
default=None, description="The negative conditioning."
)
"""The negative conditioning"""
width: Optional[StrictInt] = Field(
default=None, description="Width of the image/latents in pixels."
)
"""Width of the image/latents in pixels"""
height: Optional[StrictInt] = Field(
default=None, description="Height of the image/latents in pixels."
)
"""Height of the image/latents in pixels"""
seed: Optional[StrictInt] = Field(
default=None, description="The seed used for noise generation."
)
"""The seed used for noise generation"""
# cfg_scale: Optional[StrictFloat] = Field(
# cfg_scale: Union[float, list[float]] = Field(
cfg_scale: Optional[Union[StrictFloat, List[StrictFloat]]] = Field(
default=None, description="The classifier-free guidance scale."
)
"""The classifier-free guidance scale"""
steps: Optional[StrictInt] = Field(
default=None, description="The number of steps used for inference."
)
"""The number of steps used for inference"""
scheduler: Optional[StrictStr] = Field(
default=None, description="The scheduler used for inference."
)
"""The scheduler used for inference"""
model: Optional[StrictStr] = Field(
default=None, description="The model used for inference."
)
"""The model used for inference"""
strength: Optional[StrictFloat] = Field(
default=None,
description="The strength used for image-to-image/latents-to-latents.",
)
"""The strength used for image-to-image/latents-to-latents."""
latents: Optional[StrictStr] = Field(
default=None, description="The ID of the initial latents."
)
"""The ID of the initial latents"""
vae: Optional[StrictStr] = Field(
default=None, description="The VAE used for decoding."
)
"""The VAE used for decoding"""
unet: Optional[StrictStr] = Field(
default=None, description="The UNet used dor inference."
)
"""The UNet used dor inference"""
clip: Optional[StrictStr] = Field(
default=None, description="The CLIP Encoder used for conditioning."
)
"""The CLIP Encoder used for conditioning"""
extra: Optional[StrictStr] = Field(
default=None,
description="Uploaded image metadata, extracted from the PNG tEXt chunk.",
)
"""Uploaded image metadata, extracted from the PNG tEXt chunk."""

View File

@@ -0,0 +1,254 @@
from abc import ABC, abstractmethod
import sqlite3
import threading
from typing import Union, cast
from invokeai.app.services.board_record_storage import BoardRecord
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.image_record import (
ImageRecord,
deserialize_image_record,
)
class BoardImageRecordStorageBase(ABC):
"""Abstract base class for the one-to-many board-image relationship record storage."""
@abstractmethod
def add_image_to_board(
self,
board_id: str,
image_name: str,
) -> None:
"""Adds an image to a board."""
pass
@abstractmethod
def remove_image_from_board(
self,
board_id: str,
image_name: str,
) -> None:
"""Removes an image from a board."""
pass
@abstractmethod
def get_images_for_board(
self,
board_id: str,
) -> OffsetPaginatedResults[ImageRecord]:
"""Gets images for a board."""
pass
@abstractmethod
def get_board_for_image(
self,
image_name: str,
) -> Union[str, None]:
"""Gets an image's board id, if it has one."""
pass
@abstractmethod
def get_image_count_for_board(
self,
board_id: str,
) -> int:
"""Gets the number of images for a board."""
pass
class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
_filename: str
_conn: sqlite3.Connection
_cursor: sqlite3.Cursor
_lock: threading.Lock
def __init__(self, filename: str) -> None:
super().__init__()
self._filename = filename
self._conn = sqlite3.connect(filename, check_same_thread=False)
# Enable row factory to get rows as dictionaries (must be done before making the cursor!)
self._conn.row_factory = sqlite3.Row
self._cursor = self._conn.cursor()
self._lock = threading.Lock()
try:
self._lock.acquire()
# Enable foreign keys
self._conn.execute("PRAGMA foreign_keys = ON;")
self._create_tables()
self._conn.commit()
finally:
self._lock.release()
def _create_tables(self) -> None:
"""Creates the `board_images` junction table."""
# Create the `board_images` junction table.
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS board_images (
board_id TEXT NOT NULL,
image_name TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
-- enforce one-to-many relationship between boards and images using PK
-- (we can extend this to many-to-many later)
PRIMARY KEY (image_name),
FOREIGN KEY (board_id) REFERENCES boards (board_id) ON DELETE CASCADE,
FOREIGN KEY (image_name) REFERENCES images (image_name) ON DELETE CASCADE
);
"""
)
# Add index for board id
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_board_images_board_id ON board_images (board_id);
"""
)
# Add index for board id, sorted by created_at
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_board_images_board_id_created_at ON board_images (board_id, created_at);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_board_images_updated_at
AFTER UPDATE
ON board_images FOR EACH ROW
BEGIN
UPDATE board_images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE board_id = old.board_id AND image_name = old.image_name;
END;
"""
)
def add_image_to_board(
self,
board_id: str,
image_name: str,
) -> None:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
INSERT INTO board_images (board_id, image_name)
VALUES (?, ?)
ON CONFLICT (image_name) DO UPDATE SET board_id = ?;
""",
(board_id, image_name, board_id),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise e
finally:
self._lock.release()
def remove_image_from_board(
self,
board_id: str,
image_name: str,
) -> None:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
DELETE FROM board_images
WHERE board_id = ? AND image_name = ?;
""",
(board_id, image_name),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise e
finally:
self._lock.release()
def get_images_for_board(
self,
board_id: str,
offset: int = 0,
limit: int = 10,
) -> OffsetPaginatedResults[ImageRecord]:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT images.*
FROM board_images
INNER JOIN images ON board_images.image_name = images.image_name
WHERE board_images.board_id = ?
ORDER BY board_images.updated_at DESC
LIMIT ? OFFSET ?;
""",
(board_id, limit, offset),
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
images = list(map(lambda r: deserialize_image_record(dict(r)), result))
# Count only this board's images so the pagination total is correct
self._cursor.execute(
"""--sql
SELECT COUNT(*) FROM board_images WHERE board_id = ?;
""",
(board_id,),
)
count = cast(int, self._cursor.fetchone()[0])
except sqlite3.Error as e:
self._conn.rollback()
raise e
finally:
self._lock.release()
return OffsetPaginatedResults(
items=images, offset=offset, limit=limit, total=count
)
def get_board_for_image(
self,
image_name: str,
) -> Union[str, None]:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT board_id
FROM board_images
WHERE image_name = ?;
""",
(image_name,),
)
result = self._cursor.fetchone()
if result is None:
return None
return cast(str, result[0])
except sqlite3.Error as e:
self._conn.rollback()
raise e
finally:
self._lock.release()
def get_image_count_for_board(self, board_id: str) -> int:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT COUNT(*) FROM board_images WHERE board_id = ?;
""",
(board_id,),
)
count = cast(int, self._cursor.fetchone()[0])
return count
except sqlite3.Error as e:
self._conn.rollback()
raise e
finally:
self._lock.release()
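
The ON CONFLICT clause in add_image_to_board is what enforces the one-to-many relationship: re-adding an image moves it to the new board instead of failing on the primary key. A self-contained sketch of just that upsert (requires SQLite >= 3.24; the toy table omits the timestamps and foreign keys):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE board_images (board_id TEXT, image_name TEXT PRIMARY KEY)")
upsert = """
    INSERT INTO board_images (board_id, image_name) VALUES (?, ?)
    ON CONFLICT (image_name) DO UPDATE SET board_id = ?;
"""
conn.execute(upsert, ("board-a", "img-1", "board-a"))
conn.execute(upsert, ("board-b", "img-1", "board-b"))  # moves img-1 to board-b
print(conn.execute("SELECT board_id FROM board_images").fetchall())  # [('board-b',)]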

View File

@@ -0,0 +1,142 @@
from abc import ABC, abstractmethod
from logging import Logger
from typing import List, Union
from invokeai.app.services.board_image_record_storage import BoardImageRecordStorageBase
from invokeai.app.services.board_record_storage import (
BoardRecord,
BoardRecordStorageBase,
)
from invokeai.app.services.image_record_storage import (
ImageRecordStorageBase,
OffsetPaginatedResults,
)
from invokeai.app.services.models.board_record import BoardDTO
from invokeai.app.services.models.image_record import ImageDTO, image_record_to_dto
from invokeai.app.services.urls import UrlServiceBase
class BoardImagesServiceABC(ABC):
"""High-level service for board-image relationship management."""
@abstractmethod
def add_image_to_board(
self,
board_id: str,
image_name: str,
) -> None:
"""Adds an image to a board."""
pass
@abstractmethod
def remove_image_from_board(
self,
board_id: str,
image_name: str,
) -> None:
"""Removes an image from a board."""
pass
@abstractmethod
def get_images_for_board(
self,
board_id: str,
) -> OffsetPaginatedResults[ImageDTO]:
"""Gets images for a board."""
pass
@abstractmethod
def get_board_for_image(
self,
image_name: str,
) -> Union[str, None]:
"""Gets an image's board id, if it has one."""
pass
class BoardImagesServiceDependencies:
"""Service dependencies for the BoardImagesService."""
board_image_records: BoardImageRecordStorageBase
board_records: BoardRecordStorageBase
image_records: ImageRecordStorageBase
urls: UrlServiceBase
logger: Logger
def __init__(
self,
board_image_record_storage: BoardImageRecordStorageBase,
image_record_storage: ImageRecordStorageBase,
board_record_storage: BoardRecordStorageBase,
url: UrlServiceBase,
logger: Logger,
):
self.board_image_records = board_image_record_storage
self.image_records = image_record_storage
self.board_records = board_record_storage
self.urls = url
self.logger = logger
class BoardImagesService(BoardImagesServiceABC):
_services: BoardImagesServiceDependencies
def __init__(self, services: BoardImagesServiceDependencies):
self._services = services
def add_image_to_board(
self,
board_id: str,
image_name: str,
) -> None:
self._services.board_image_records.add_image_to_board(board_id, image_name)
def remove_image_from_board(
self,
board_id: str,
image_name: str,
) -> None:
self._services.board_image_records.remove_image_from_board(board_id, image_name)
def get_images_for_board(
self,
board_id: str,
) -> OffsetPaginatedResults[ImageDTO]:
image_records = self._services.board_image_records.get_images_for_board(
board_id
)
image_dtos = list(
map(
lambda r: image_record_to_dto(
r,
self._services.urls.get_image_url(r.image_name),
self._services.urls.get_image_url(r.image_name, True),
board_id,
),
image_records.items,
)
)
return OffsetPaginatedResults[ImageDTO](
items=image_dtos,
offset=image_records.offset,
limit=image_records.limit,
total=image_records.total,
)
def get_board_for_image(
self,
image_name: str,
) -> Union[str, None]:
board_id = self._services.board_image_records.get_board_for_image(image_name)
return board_id
def board_record_to_dto(
board_record: BoardRecord, cover_image_name: Union[str, None], image_count: int
) -> BoardDTO:
"""Converts a board record to a board DTO."""
return BoardDTO(
**board_record.dict(exclude={'cover_image_name'}),
cover_image_name=cover_image_name,
image_count=image_count,
)

View File

@@ -0,0 +1,329 @@
from abc import ABC, abstractmethod
from typing import Optional, cast
import sqlite3
import threading
from typing import Optional, Union
import uuid
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.board_record import (
BoardRecord,
deserialize_board_record,
)
from pydantic import BaseModel, Field, Extra
class BoardChanges(BaseModel, extra=Extra.forbid):
board_name: Optional[str] = Field(description="The board's new name.")
cover_image_name: Optional[str] = Field(
description="The name of the board's new cover image."
)
class BoardRecordNotFoundException(Exception):
"""Raised when an board record is not found."""
def __init__(self, message="Board record not found"):
super().__init__(message)
class BoardRecordSaveException(Exception):
"""Raised when an board record cannot be saved."""
def __init__(self, message="Board record not saved"):
super().__init__(message)
class BoardRecordDeleteException(Exception):
"""Raised when an board record cannot be deleted."""
def __init__(self, message="Board record not deleted"):
super().__init__(message)
class BoardRecordStorageBase(ABC):
"""Low-level service responsible for interfacing with the board record store."""
@abstractmethod
def delete(self, board_id: str) -> None:
"""Deletes a board record."""
pass
@abstractmethod
def save(
self,
board_name: str,
) -> BoardRecord:
"""Saves a board record."""
pass
@abstractmethod
def get(
self,
board_id: str,
) -> BoardRecord:
"""Gets a board record."""
pass
@abstractmethod
def update(
self,
board_id: str,
changes: BoardChanges,
) -> BoardRecord:
"""Updates a board record."""
pass
@abstractmethod
def get_many(
self,
offset: int = 0,
limit: int = 10,
) -> OffsetPaginatedResults[BoardRecord]:
"""Gets many board records."""
pass
@abstractmethod
def get_all(
self,
) -> list[BoardRecord]:
"""Gets all board records."""
pass
class SqliteBoardRecordStorage(BoardRecordStorageBase):
_filename: str
_conn: sqlite3.Connection
_cursor: sqlite3.Cursor
_lock: threading.Lock
def __init__(self, filename: str) -> None:
super().__init__()
self._filename = filename
self._conn = sqlite3.connect(filename, check_same_thread=False)
# Enable row factory to get rows as dictionaries (must be done before making the cursor!)
self._conn.row_factory = sqlite3.Row
self._cursor = self._conn.cursor()
self._lock = threading.Lock()
try:
self._lock.acquire()
# Enable foreign keys
self._conn.execute("PRAGMA foreign_keys = ON;")
self._create_tables()
self._conn.commit()
finally:
self._lock.release()
def _create_tables(self) -> None:
"""Creates the `boards` table and `board_images` junction table."""
# Create the `boards` table.
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS boards (
board_id TEXT NOT NULL PRIMARY KEY,
board_name TEXT NOT NULL,
cover_image_name TEXT,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
FOREIGN KEY (cover_image_name) REFERENCES images (image_name) ON DELETE SET NULL
);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_boards_created_at ON boards (created_at);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_boards_updated_at
AFTER UPDATE
ON boards FOR EACH ROW
BEGIN
UPDATE boards SET updated_at = current_timestamp
WHERE board_id = old.board_id;
END;
"""
)
def delete(self, board_id: str) -> None:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
DELETE FROM boards
WHERE board_id = ?;
""",
(board_id,),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise BoardRecordDeleteException from e
except Exception as e:
self._conn.rollback()
raise BoardRecordDeleteException from e
finally:
self._lock.release()
def save(
self,
board_name: str,
) -> BoardRecord:
try:
board_id = str(uuid.uuid4())
self._lock.acquire()
self._cursor.execute(
"""--sql
INSERT OR IGNORE INTO boards (board_id, board_name)
VALUES (?, ?);
""",
(board_id, board_name),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise BoardRecordSaveException from e
finally:
self._lock.release()
return self.get(board_id)
def get(
self,
board_id: str,
) -> BoardRecord:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT *
FROM boards
WHERE board_id = ?;
""",
(board_id,),
)
result = cast(Union[sqlite3.Row, None], self._cursor.fetchone())
except sqlite3.Error as e:
self._conn.rollback()
raise BoardRecordNotFoundException from e
finally:
self._lock.release()
if result is None:
raise BoardRecordNotFoundException
return BoardRecord(**dict(result))
def update(
self,
board_id: str,
changes: BoardChanges,
) -> BoardRecord:
try:
self._lock.acquire()
# Change the name of a board
if changes.board_name is not None:
self._cursor.execute(
f"""--sql
UPDATE boards
SET board_name = ?
WHERE board_id = ?;
""",
(changes.board_name, board_id),
)
# Change the cover image of a board
if changes.cover_image_name is not None:
self._cursor.execute(
f"""--sql
UPDATE boards
SET cover_image_name = ?
WHERE board_id = ?;
""",
(changes.cover_image_name, board_id),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise BoardRecordSaveException from e
finally:
self._lock.release()
return self.get(board_id)
def get_many(
self,
offset: int = 0,
limit: int = 10,
) -> OffsetPaginatedResults[BoardRecord]:
try:
self._lock.acquire()
# Get all the boards
self._cursor.execute(
"""--sql
SELECT *
FROM boards
ORDER BY created_at DESC
LIMIT ? OFFSET ?;
""",
(limit, offset),
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
boards = list(map(lambda r: deserialize_board_record(dict(r)), result))
# Get the total number of boards
self._cursor.execute(
"""--sql
SELECT COUNT(*)
FROM boards
WHERE 1=1;
"""
)
count = cast(int, self._cursor.fetchone()[0])
return OffsetPaginatedResults[BoardRecord](
items=boards, offset=offset, limit=limit, total=count
)
except sqlite3.Error as e:
self._conn.rollback()
raise e
finally:
self._lock.release()
def get_all(
self,
) -> list[BoardRecord]:
try:
self._lock.acquire()
# Get all the boards
self._cursor.execute(
"""--sql
SELECT *
FROM boards
ORDER BY created_at DESC
"""
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
boards = list(map(lambda r: deserialize_board_record(dict(r)), result))
return boards
except sqlite3.Error as e:
self._conn.rollback()
raise e
finally:
self._lock.release()
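
Every method in this class follows the same acquire/execute/commit shape, rolling back on sqlite3.Error and releasing the lock in a finally block. A condensed sketch of that pattern on a throwaway table:

import sqlite3
import threading

conn = sqlite3.connect(":memory:", check_same_thread=False)
lock = threading.Lock()

def guarded_write(sql: str, params: tuple = ()) -> None:
    try:
        lock.acquire()
        conn.execute(sql, params)
        conn.commit()
    except sqlite3.Error:
        conn.rollback()
        raise
    finally:
        lock.release()

guarded_write("CREATE TABLE IF NOT EXISTS t (x TEXT)")
guarded_write("INSERT INTO t VALUES (?)", ("hello",))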

View File

@@ -0,0 +1,185 @@
from abc import ABC, abstractmethod
from logging import Logger
from invokeai.app.services.board_image_record_storage import BoardImageRecordStorageBase
from invokeai.app.services.board_images import board_record_to_dto
from invokeai.app.services.board_record_storage import (
BoardChanges,
BoardRecordStorageBase,
)
from invokeai.app.services.image_record_storage import (
ImageRecordStorageBase,
OffsetPaginatedResults,
)
from invokeai.app.services.models.board_record import BoardDTO
from invokeai.app.services.urls import UrlServiceBase
class BoardServiceABC(ABC):
"""High-level service for board management."""
@abstractmethod
def create(
self,
board_name: str,
) -> BoardDTO:
"""Creates a board."""
pass
@abstractmethod
def get_dto(
self,
board_id: str,
) -> BoardDTO:
"""Gets a board."""
pass
@abstractmethod
def update(
self,
board_id: str,
changes: BoardChanges,
) -> BoardDTO:
"""Updates a board."""
pass
@abstractmethod
def delete(
self,
board_id: str,
) -> None:
"""Deletes a board."""
pass
@abstractmethod
def get_many(
self,
offset: int = 0,
limit: int = 10,
) -> OffsetPaginatedResults[BoardDTO]:
"""Gets many boards."""
pass
@abstractmethod
def get_all(
self,
) -> list[BoardDTO]:
"""Gets all boards."""
pass
class BoardServiceDependencies:
"""Service dependencies for the BoardService."""
board_image_records: BoardImageRecordStorageBase
board_records: BoardRecordStorageBase
image_records: ImageRecordStorageBase
urls: UrlServiceBase
logger: Logger
def __init__(
self,
board_image_record_storage: BoardImageRecordStorageBase,
image_record_storage: ImageRecordStorageBase,
board_record_storage: BoardRecordStorageBase,
url: UrlServiceBase,
logger: Logger,
):
self.board_image_records = board_image_record_storage
self.image_records = image_record_storage
self.board_records = board_record_storage
self.urls = url
self.logger = logger
class BoardService(BoardServiceABC):
_services: BoardServiceDependencies
def __init__(self, services: BoardServiceDependencies):
self._services = services
def create(
self,
board_name: str,
) -> BoardDTO:
board_record = self._services.board_records.save(board_name)
return board_record_to_dto(board_record, None, 0)
def get_dto(self, board_id: str) -> BoardDTO:
board_record = self._services.board_records.get(board_id)
cover_image = self._services.image_records.get_most_recent_image_for_board(
board_record.board_id
)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self._services.board_image_records.get_image_count_for_board(
board_id
)
return board_record_to_dto(board_record, cover_image_name, image_count)
def update(
self,
board_id: str,
changes: BoardChanges,
) -> BoardDTO:
board_record = self._services.board_records.update(board_id, changes)
cover_image = self._services.image_records.get_most_recent_image_for_board(
board_record.board_id
)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self._services.board_image_records.get_image_count_for_board(
board_id
)
return board_record_to_dto(board_record, cover_image_name, image_count)
def delete(self, board_id: str) -> None:
self._services.board_records.delete(board_id)
def get_many(
self, offset: int = 0, limit: int = 10
) -> OffsetPaginatedResults[BoardDTO]:
board_records = self._services.board_records.get_many(offset, limit)
board_dtos = []
for r in board_records.items:
cover_image = self._services.image_records.get_most_recent_image_for_board(
r.board_id
)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self._services.board_image_records.get_image_count_for_board(
r.board_id
)
board_dtos.append(board_record_to_dto(r, cover_image_name, image_count))
return OffsetPaginatedResults[BoardDTO](
items=board_dtos, offset=offset, limit=limit, total=board_records.total
)
def get_all(self) -> list[BoardDTO]:
board_records = self._services.board_records.get_all()
board_dtos = []
for r in board_records:
cover_image = self._services.image_records.get_most_recent_image_for_board(
r.board_id
)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self._services.board_image_records.get_image_count_for_board(
r.board_id
)
board_dtos.append(board_record_to_dto(r, cover_image_name, image_count))
return board_dtos
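
Callers that want every board one page at a time can walk get_many via its offset/limit parameters; a hypothetical helper (the page size is arbitrary, and board_service stands in for however the instance is obtained):

def iter_all_boards(board_service: BoardService, page_size: int = 10):
    offset = 0
    while True:
        page = board_service.get_many(offset=offset, limit=page_size)
        yield from page.items
        if offset + page_size >= page.total:
            break
        offset += page_size

# usage: for board in iter_all_boards(service): print(board.board_name)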

Some files were not shown because too many files have changed in this diff.