- Do not _merge_ prompt and style prompt when concat is enabled - either use the prompt as style, or use the style directly.
- Set style prompt metadata correctly.
- Add metadata recall for style prompt.
`react-select` has some weird behaviour: if the value is `undefined`, it shows the last-selected value instead of nothing. We must fall back to `null`.
Ensure workflow editor model selector component gets a value
This introduced some funky type issues related to ONNX models. ONNX doesn't work anyway (it's unmaintained), so instead of fixing the types for a non-working feature, ONNX is now removed entirely from the UI.
- Remove all refs to ONNX (and Olives)
- Fix some type issues
- Add ONNX nodes to the nodes denylist (so they are not visible in UI)
- Update VAE graph helper, which still had some ONNX logic. It's a very simple change that just removes some ONNX-only conditions and doesn't change any behaviour. I tested it and nothing broke.
- Regenerate types
- Fix prettier and eslint ignores for generated types
- Lint
* Updater suggests DB backup when installing RC
* Update invokeai_update.py to be more specific
- Prompt must have an open curly brace followed by a close curly brace to enable dynamic prompts processing (see the sketch after this list)
- If the given prompt already has a dynamic prompt cached, do not re-process
- If processing is not needed, the user may invoke immediately
- Invoke button shows loading state when dynamic prompts are processing, tooltip says generating
- Dynamic prompts preview icon in prompt box shows loading state when processing, tooltip says generating
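A minimal sketch of the gating check, in Python purely for illustration (the actual check lives in the web UI, and the function name here is made up):

```python
import re

# Dynamic prompt processing is enabled only when the prompt contains an
# opening curly brace that is later followed by a closing curly brace.
# Illustrative helper; the real check is implemented in the frontend.
def should_process_dynamic_prompt(prompt: str) -> bool:
    return re.search(r"\{.*\}", prompt, flags=re.DOTALL) is not None
```

When this returns false, no processing request is made and the user may invoke immediately.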
- Support grid size of 8 on canvas
- Internal canvas math works on 8
- Update gridlines rendering to show 64 spaced lines and 32/16/8 when zoomed in
- Bbox manipulation defaults to grid of 64 - hold shift to get grid of 8
Besides being something we support internally, supporting 8 on canvas avoids a lot of hacky logic needed to work well with aspect ratios.
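A sketch of the snapping rule, written in Python only to illustrate the math (the real implementation lives in the canvas code; the helper name is made up):

```python
# Snap a bbox coordinate to the active grid: 64 by default, 8 while shift is held.
def snap_to_grid(value: float, shift_held: bool = False) -> int:
    grid = 8 if shift_held else 64
    return round(value / grid) * grid
```

For example, `snap_to_grid(100)` snaps to 128 on the coarse grid, while `snap_to_grid(100, shift_held=True)` snaps to 96.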
Canvas and non-canvas have separate width and height and need their own separate aspect ratios. In order to not duplicate a lot of aspect ratio logic, the components relating to image size have been modularized.
- Fix `weight` and `begin_step_percent`; the constraints were mixed up
- Add a model validator to ensure `begin_step_percent < end_step_percent` (sketched below)
- Bump version
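A minimal sketch of the corrected fields plus the cross-field check, assuming pydantic v2; the class name and the exact numeric ranges are illustrative:

```python
from pydantic import BaseModel, Field, model_validator


class AdapterStepSettings(BaseModel):
    # Illustrative field constraints; the point is that weight and
    # begin_step_percent each get the range intended for them.
    weight: float = Field(default=1.0, ge=-1.0, le=2.0)
    begin_step_percent: float = Field(default=0.0, ge=0.0, le=1.0)
    end_step_percent: float = Field(default=1.0, ge=0.0, le=1.0)

    @model_validator(mode="after")
    def check_step_range(self):
        if not self.begin_step_percent < self.end_step_percent:
            raise ValueError("begin_step_percent must be less than end_step_percent")
        return self
```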
- Store workflow in nanostore as singleton instead of building for each consumer
- Debounce the build (already was indirectly debounced)
- When the workflow is needed, imperatively grab it from the nanostores, instead of letting react handle it via reactivity
This drastically reduces the computation needed when moving the cursor. It also correctly separates ephemeral interaction state from redux, where it is not needed.
Also removed some unused canvas state.
This uses the previous implementation of the memoization function in reselect. It's possible for the new weakmap-based memoization to cause memory leaks in certain scenarios, so we will avoid it for now.
If the user specifies `torch-sdp` as the attention type in `config.yaml`, we can go ahead and use it (if available) rather than always throwing an exception.
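A minimal sketch of that check; the helper name and error text are made up for illustration:

```python
import torch


def resolve_attention_type(configured: str) -> str:
    """Use the configured attention type when it is actually available."""
    if configured == "torch-sdp":
        if hasattr(torch.nn.functional, "scaled_dot_product_attention"):
            return "torch-sdp"
        raise RuntimeError(
            "torch-sdp attention was requested, but this torch build does not "
            "provide scaled_dot_product_attention"
        )
    return configured
```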
* add base definition of download manager
* basic functionality working
* add unit tests for download queue
* add documentation and FastAPI route
* fix docs
* add missing test dependency; fix import ordering
* fix file path length checking on windows
* fix ruff check error
* move release() into the __del__ method
* disable testing of stderr messages due to issues with pytest capsys fixture
* fix unsorted imports
* harmonized implementation of start() and stop() calls in the download & install modules
* Update invokeai/app/services/download/download_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* replace test datadir fixture with tmp_path
* replace DownloadJobBase->DownloadJob in download manager documentation
* make source and dest arguments to download_queue.download() an AnyHttpURL and Path respectively
* fix pydantic typecheck errors in the download unit test
* ruff formatting
* add "job cancelled" as an event rather than an exception
* fix ruff errors
* Update invokeai/app/services/download/download_default.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
* use threading.Event to stop service worker threads; handle unfinished job edge cases (sketched below)
* remove dangling STOP job definition
* fix ruff complaint
* fix ruff check again
* avoid race condition when start() and stop() are called simultaneously from different threads
* avoid race condition in stop() when a job becomes active while shutting down
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
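A minimal sketch of the shutdown pattern described in the commits above (a `threading.Event` for the workers plus a lock so `start()` and `stop()` cannot race); class and attribute names are illustrative, not the actual download service:

```python
import threading
from queue import Empty, Queue


class DownloadWorkerPool:
    """Illustrative sketch only, not the real download service."""

    def __init__(self, n_workers: int = 4) -> None:
        self._queue: Queue = Queue()
        self._stop_event = threading.Event()
        self._lock = threading.Lock()  # guards start()/stop() racing each other
        self._workers: list[threading.Thread] = []
        self._n_workers = n_workers

    def start(self) -> None:
        with self._lock:
            if self._workers:
                return
            self._stop_event.clear()
            for _ in range(self._n_workers):
                t = threading.Thread(target=self._worker, daemon=True)
                t.start()
                self._workers.append(t)

    def stop(self) -> None:
        with self._lock:
            self._stop_event.set()
            for t in self._workers:
                t.join()
            self._workers.clear()

    def _worker(self) -> None:
        # Wake periodically so the stop flag is noticed even when the queue is idle.
        while not self._stop_event.is_set():
            try:
                job = self._queue.get(timeout=0.1)
            except Empty:
                continue
            try:
                job()  # in this sketch a job is just a callable
            finally:
                self._queue.task_done()
```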
The graph library occasionally causes issues when the default graph changes substantially between versions and pydantic validation fails. See #5289 for an example.
We are not currently using the graph library, so we can disable it until we are ready to use it. It's possible that the workflow library will supersede it anyways.
* remove MacOS Sonoma check in devices.py
As of pytorch 2.1.0, float16 works with our MPS fixes on Sonoma, so the check is no longer needed.
* remove unused platform import
* add code to repopulate model config records after schema update
* reformat for ruff
* migrate model records using a db cursor rather than the ModelRecordConfigService (sketched below)
* ruff fixes
* tweak exception reporting
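A heavily simplified sketch of migrating records with a raw cursor; the table, column, and field names here are assumptions for illustration, not the real schema:

```python
import json
import sqlite3


def repopulate_model_configs(cursor: sqlite3.Cursor) -> None:
    # Hypothetical table and column names.
    cursor.execute("SELECT id, config FROM model_config;")
    for model_id, raw_config in cursor.fetchall():
        config = json.loads(raw_config)
        config.setdefault("config_version", 2)  # hypothetical new field
        cursor.execute(
            "UPDATE model_config SET config = ? WHERE id = ?;",
            (json.dumps(config), model_id),
        )
```

Working directly with the cursor keeps the migration independent of the evolving service-layer code.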
* fix: build frontend in pypi-release workflow
This was missing, resulting in the 3.5.0rc1 having no frontend.
* fix: use node 18, set working directory
- Node 20 has a problem with `pnpm`; set it to Node 18
- Set the working directory for the frontend commands
* Don't copy extraneous paths into installer .zip
* feat(installer): delete frontend build after creating installer
This prevents an empty `dist/` from breaking the app on startup.
* feat: add python dist as release artifact, as input to enable publish to pypi
- The release workflow never runs automatically. It must be manually kicked off.
- The release workflow has an input. When running it from the GH actions UI, you will see a "Publish build on PyPi" prompt. If this value is "true", the workflow will upload the build to PyPI, releasing it. If this is anything else (e.g. "false", the default), the workflow will build but not upload to PyPI.
- The `dist/` folder (where the python package is built) is uploaded as a workflow artifact as a zip file. This can be downloaded and inspected. This allows "dry" runs of the workflow.
- The workflow job and some steps have been renamed to clarify what they do
* translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
* freeze yaml migration logic at upgrade to 3.5
* moved migration code to migration_3
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
The VAE decode on linear graphs was getting cached. This caused some unexpected behaviour around image outputs.
For example, say you ran the exact same graph twice. The first time, you get an image written to disk and added to the gallery. The second time, the VAE decode is cached and no image file is created, but the UI still gets the graph-complete event and selects the first image in the gallery. The second run does not add an image to the gallery.
There are probably edge cases related to this - the UI does not expect this to happen. I'm not sure how to handle it any better in the UI.
The solution is to not cache VAE decode on the linear graphs, ever. If you run a graph twice in linear, you expect two images.
This simple change disables the node cache for terminal VAE decode nodes in all linear graphs, ensuring you always get images. If the graph was fully cached, all images after the first will be created very quickly, of course.
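A sketch of what this amounts to in the graph that gets submitted; the node id is made up, and the field names (`type`, `use_cache`) follow the usual graph-node shape as an assumption:

```python
# Illustrative fragment of a linear graph: the terminal latents-to-image
# (VAE decode) node opts out of the node cache so an image file is written
# on every run, even when every upstream node is cached.
l2i_node = {
    "id": "latents_to_image",
    "type": "l2i",
    "use_cache": False,
}
```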
- "Reset Workflow Editor" -> "New Workflow"
- "New Workflow" gets nodes icon & is no longer danger coloured
- When creating a new workflow, if the current workflow has unsaved changes, you get a dialog asking for confirmation. If the current workflow is saved, it immediately creates a new workflow.
- "Download Workflow" -> "Save to File"
- "Upload Workflow" -> "Load from File"
- Moved "Load from File" up 1 in the menu
This model was a bit too strict, and raised validation errors when workflows that we expect to *not* have an ID (e.g. an embedded workflow) have one.
Now it strips unknown attributes, allowing those workflows to load.
- Handle an image file not existing despite being in the database.
- Add a simple pydantic model that tests only for the existence of a workflow's version.
- Check against this new model when migrating workflows, skipping if the workflow fails validation. If it succeeds, the frontend should be able to handle the workflow.
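A minimal sketch of such a version-only check, assuming pydantic v2; the class names and the exact location of the version field are illustrative:

```python
from pydantic import BaseModel, ConfigDict


class _WorkflowMeta(BaseModel):
    model_config = ConfigDict(extra="ignore")
    version: str  # the only thing we assert exists


class WorkflowHasVersion(BaseModel):
    """Illustrative: validates only that a workflow declares a version."""

    model_config = ConfigDict(extra="ignore")
    meta: _WorkflowMeta
```

A workflow that fails this check is skipped during migration; one that passes is assumed to be something the frontend can handle.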
Currently translated at 98.1% (1340 of 1365 strings)
translationBot(ui): update translation (Russian)
Currently translated at 84.2% (1150 of 1365 strings)
translationBot(ui): update translation (Russian)
Currently translated at 83.1% (1135 of 1365 strings)
Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
- use simpler pattern for migration dependencies
- move SqliteDatabase & migration to utility method `init_db`, use this in both the app and tests, ensuring the same db schema is used in both
This fixes a problem with `Annotated` which prevented us from using pydantic's `Field` to specify a discriminator for a union. We had to use FastAPI's `Body` as a workaround.
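For context, a minimal sketch of the pattern this unblocks (the union discriminator declared with pydantic's `Field` inside `Annotated`); the model and field names are made up:

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field


class ImageBatchItem(BaseModel):
    kind: Literal["image"] = "image"
    image_name: str


class LatentsBatchItem(BaseModel):
    kind: Literal["latents"] = "latents"
    latents_name: str


# With the fix, the discriminator can be declared with pydantic's Field
# directly, instead of going through FastAPI's Body as a workaround.
BatchItem = Annotated[Union[ImageBatchItem, LatentsBatchItem], Field(discriminator="kind")]


class BatchRequest(BaseModel):
    items: list[BatchItem]
```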
* selector added
* ref and useEffect added
* scrolling done using useEffect
* fixed scroll and changed the ref name
* fixed scroll again
* created hook for scroll logic
* feat(ui): debounce metadata fetch by 300ms
This vastly reduces the network requests when using the arrow keys to quickly skim through images.
* feat(ui): extract logic to determine virtuoso scrollToIndex align
This needs to be used in `useNextPrevImage()` to ensure the scrolling puts the image at the top or bottom appropriately
* feat(ui): add debounce to image workflow hook
This was spamming network requests like the metadata query
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Invocations now have a classification:
- Stable: LTS
- Beta: LTS planned, API may change
- Prototype: No LTS planned, API may change, may be removed entirely
The `@invocation` decorator has a new arg `classification`, and an enum `Classification` is added to `baseinvocation.py`.
The default is Stable; this is a non-breaking change.
The classification is presented in the node header as a hammer icon (Beta) or a flask icon (Prototype).
The icon has a tooltip briefly describing the classification.
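A sketch of how a node opts into a classification; the import paths and enum member casing are assumptions based on the description above, and the node itself is hypothetical:

```python
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    Classification,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import StringOutput


@invocation(
    "example_prototype_node",
    title="Example Prototype Node",
    version="1.0.0",
    classification=Classification.Prototype,  # omit to get the default, Stable
)
class ExamplePrototypeInvocation(BaseInvocation):
    """A hypothetical node marked as a prototype."""

    def invoke(self, context: InvocationContext) -> StringOutput:
        return StringOutput(value="hello")
```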