* chore: bump pydantic to 2.5.2
This release fixes pydantic/pydantic#8175 and allows us to use `JsonValue`
* fix(ui): exclude public/en.json from prettier config
* fix(workflow_records): fix SQLite workflow insertion to ignore duplicates
* feat(backend): update workflows handling
Update workflows handling for Workflow Library.
**Updated Workflow Storage**
"Embedded Workflows" are workflows associated with images, and are now only stored in the image files. "Library Workflows" are not associated with images, and are stored only in DB.
This works out nicely. We have always saved workflows to files, but recently began saving them to the DB in addition to in image files. When that happened, we stopped reading workflows from files, so all the workflows that only existed in images were inaccessible. With this change, access to those workflows is restored, and no workflows are lost.
**Updated Workflow Handling in Nodes**
Prior to this change, workflows were embedded in images by passing the whole workflow JSON to a special workflow field on a node. In the node's `invoke()` function, the node was able to access this workflow and save it with the image. This (inaccurately) models workflows as a property of an image and is rather awkward technically.
A workflow is now a property of a batch/session queue item. It is available in the InvocationContext and therefore available to all nodes during `invoke()`.
**Database Migrations**
Added a `SQLiteMigrator` class to handle database migrations. Migrations were needed to accommodate the DB-related changes in this PR. See the code for details.
The `images`, `workflows` and `session_queue` tables required migrations for this PR, and are using the new migrator. Other tables/services are still creating tables themselves. A followup PR will adapt them to use the migrator.
**Other/Support Changes**
- Add a `has_workflow` column to `images` table to indicate that the image has an embedded workflow.
- Add handling for retrieving the workflow from an image in python. The image file is fetched, the workflow extracted, and then sent to the client, so the browser never has to parse the image file (see the sketch after this list). With the `has_workflow` column, the UI knows whether there is a workflow to fetch, and only fetches it when the user requests to load the workflow.
- Add route to get the workflow from an image
- Add CRUD service/routes for the library workflows
- `workflow_images` table and services removed (no longer needed now that embedded workflows are not in the DB)
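As a rough sketch of the python side of this (not the actual implementation; the metadata key name is an assumption), reading an embedded workflow from a PNG looks something like:
```py
from PIL import Image


def get_workflow_from_image(image_path: str) -> str | None:
    """Return the embedded workflow JSON from a PNG, if present."""
    with Image.open(image_path) as image:
        # PIL exposes PNG text chunks via the `info` dict; the key name
        # "invokeai_workflow" is an assumption for illustration.
        workflow = image.info.get("invokeai_workflow")
    return workflow
```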
* feat(ui): updated workflow handling (WIP)
Clientside updates for the backend workflow changes.
Includes roughed-out workflow library UI.
* feat: revert SQLiteMigrator class
Will pursue this in a separate PR.
* feat(nodes): do not overwrite custom node module names
Use a different, simpler method to detect if a node is custom.
* feat(nodes): restore WithWorkflow as no-op class
This class is deprecated and no longer needed. Its `workflow` attr value is set to `None` (making it a no-op), and a warning is issued when an invocation subclasses it.
* fix(nodes): fix get_workflow from queue item dict func
* feat(backend): add WorkflowRecordListItemDTO
This holds the workflow's id, name, description, created-at and updated-at columns/attrs. It is used to display lists of workflows.
* chore(ui): typegen
* feat(ui): add workflow loading, deleting to workflow library UI
* feat(ui): workflow library pagination button styles
* wip
* feat: workflow library WIP
- Save to library
- Duplicate
- Filter/sort
- UI/queries
* feat: workflow library - system graphs - wip
* feat(backend): sync system workflows to db
* fix: merge conflicts
* feat: simplify default workflows
- Rename "system" -> "default"
- Simplify syncing logic
- Update UI to match
* feat(workflows): update default workflows
- Update TextToImage_SD15
- Add TextToImage_SDXL
- Add README
* feat(ui): refine workflow list UI
* fix(workflow_records): typo
* fix(tests): fix tests
* feat(ui): clean up workflow library hooks
* fix(db): fix mis-ordered db cleanup step
It was happening before queue items were pruned - it should happen afterwards, else you have to restart the app again to free the disk space made available by the pruning.
* feat(ui): tweak reset workflow editor translations
* feat(ui): split out workflow redux state
The `nodes` slice is a rather complicated slice. Removing `workflow` makes it a bit more reasonable.
Also helps to flatten state out a bit.
* docs: update default workflows README
* fix: tidy up unused files, unrelated changes
* fix(backend): revert unrelated service organisational changes
* feat(backend): workflow_records.get_many arg "filter_text" -> "query"
* feat(ui): use custom hook in current image buttons
Already in use elsewhere, forgot to use it here.
* fix(ui): remove commented out property
* fix(ui): fix workflow loading
- Different handling for loading from library vs external
- Fix bug where only nodes and edges loaded
* fix(ui): fix save/save-as workflow naming
* fix(ui): fix circular dependency
* fix(db): fix bug with releasing without lock in db.clean()
* fix(db): remove extraneous lock
* chore: bump ruff
* fix(workflow_records): default `category` to `WorkflowCategory.User`
This allows old workflows to validate when reading them from the db or image files.
* hide workflow library buttons if feature is disabled
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
- Refactor how metadata is handled to support a user-defined metadata in graphs
- Update workflow embed handling
- Update UI to work with these changes
- Update tests to support metadata/workflow changes
Upgrade pydantic and fastapi to latest.
- pydantic~=2.4.2
- fastapi~=0.103.2
- fastapi-events~=0.9.1
**Big Changes**
There are a number of logic changes needed to support pydantic v2. Most changes are very simple, like using the new methods to serialize and deserialize models, but there are a few more complex changes.
**Invocations**
The biggest change relates to invocation creation, instantiation and validation.
Because pydantic v2 moves all validation logic into the rust pydantic-core, we may no longer directly stick our fingers into the validation pie.
Previously, we (ab)used models and fields to allow invocation fields to be optional at instantiation, but required when `invoke()` is called. We directly manipulated the fields and invocation models when calling `invoke()`.
With pydantic v2, this is much more involved. Changes to the python wrapper do not propagate down to the rust validation logic - you have to rebuild the model. This causes problems with concurrent access to the invocation classes and is not a free operation.
This logic has been totally refactored and we do not need to change the model any more. The details are in `baseinvocation.py`, in the `InputField` function and `BaseInvocation.invoke_internal()` method.
In the end, this implementation is cleaner.
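A rough sketch of the idea (not the actual code in `baseinvocation.py`; the `orig_required` marker and exception name are hypothetical): fields that must be present at invoke time are checked imperatively, without rebuilding the pydantic model.
```py
from pydantic import BaseModel


class MissingInputError(Exception):
    """Hypothetical error raised when a required-at-invoke-time field is empty."""


def check_invoke_time_inputs(invocation: BaseModel) -> None:
    # Inspect metadata that InputField would have stashed on each field,
    # instead of mutating the model and forcing a pydantic-core rebuild.
    for name, field in type(invocation).model_fields.items():
        extra = field.json_schema_extra
        if isinstance(extra, dict) and extra.get("orig_required") and getattr(invocation, name) is None:
            raise MissingInputError(f"Missing required input: {name}")
```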
**Invocation Fields**
In pydantic v2, you can no longer directly add or remove fields from a model.
Previously, we did this to add the `type` field to invocations.
**Invocation Decorators**
With pydantic v2, we instead use the imperative `create_model()` API to create a new model with the additional field. This is done in `baseinvocation.py` in the `invocation()` wrapper.
A similar technique is used for `invocation_output()`.
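A minimal sketch of the `create_model()` technique (simplified; the real decorators do more):
```py
from typing import Literal

from pydantic import BaseModel, create_model


def add_type_field(cls: type[BaseModel], invocation_type: str) -> type[BaseModel]:
    # Build a new subclass carrying a `type: Literal[...]` field, since fields
    # can no longer be added to an existing pydantic v2 model in place.
    return create_model(
        cls.__name__,
        __base__=cls,
        type=(Literal[invocation_type], invocation_type),  # type: ignore[valid-type]
    )
```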
**Minor Changes**
There are a number of minor changes around the pydantic v2 models API.
**Protected `model_` Namespace**
All models' pydantic-provided methods and attributes are prefixed with `model_` and this is considered a protected namespace. This causes some conflict, because "model" means something to us, and we have a ton of pydantic models with attributes starting with "model_".
Fortunately, there are no direct conflicts. However, in any pydantic model where we define an attribute or method that starts with "model_", we must set the protected namespaces to an empty tuple.
```py
class IPAdapterModelField(BaseModel):
    model_name: str = Field(description="Name of the IP-Adapter model")
    base_model: BaseModelType = Field(description="Base model")

    model_config = ConfigDict(protected_namespaces=())
```
**Model Serialization**
Pydantic models no longer have `Model.dict()` or `Model.json()`.
Instead, we use `Model.model_dump()` or `Model.model_dump_json()`.
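For example, with `graph` being any `Graph` instance:
```py
graph_dict = graph.model_dump()       # was graph.dict()
graph_json = graph.model_dump_json()  # was graph.json()
```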
**Model Deserialization**
Pydantic models no longer have `Model.parse_obj()` or `Model.parse_raw()`, and there are no `parse_raw_as()` or `parse_obj_as()` functions.
Instead, you need to create a `TypeAdapter` object to parse python objects or JSON into a model.
```py
adapter_graph = TypeAdapter(Graph)
deserialized_graph_from_json = adapter_graph.validate_json(graph_json)
deserialized_graph_from_dict = adapter_graph.validate_python(graph_dict)
```
**Field Customisation**
Pydantic `Field`s no longer accept arbitrary args.
Now, you must put all additional arbitrary args in a `json_schema_extra` arg on the field.
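A small sketch of the pattern (the extra keys shown are illustrative, not the actual UI metadata keys):
```py
from pydantic import BaseModel, Field


class ExampleField(BaseModel):
    # Arbitrary kwargs that used to be passed straight to Field() now go
    # into json_schema_extra.
    image_name: str = Field(
        description="Name of the image",
        json_schema_extra={"ui_order": 1, "ui_hidden": False},
    )
```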
**Schema Customisation**
FastAPI and pydantic schema generation now follows the OpenAPI version 3.1 spec.
This necessitates two changes:
- Our schema customization logic has been revised
- Schema parsing to build node templates has been revised
The specifics aren't important, but this does present additional surface area for bugs.
**Performance Improvements**
Pydantic v2 is a full rewrite with a rust backend. This offers a substantial performance improvement (pydantic claims 5x to 50x depending on the task). We'll notice this the most during serialization and deserialization of sessions/graphs, which happens very very often - a couple times per node.
I haven't done any benchmarks, but anecdotally, graph execution is much faster. Also, very large graphs - like with massive iterators - are much, much faster.
`mallinfo2` is not available on `glibc` < 2.33.
On these systems, we successfully load the library but get an `AttributeError` on attempting to access `mallinfo2`.
I'm not sure if the old `mallinfo` will work, and not sure how to install it safely to test, so for now we just handle the `AttributeError`.
This means the enhanced memory snapshot logic will be skipped for these systems, which isn't a big deal.
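A minimal sketch of that guard, assuming the library is loaded via `ctypes` (which may differ from the actual snapshot code):
```py
import ctypes

libc = ctypes.CDLL("libc.so.6")
try:
    # On glibc < 2.33 the symbol is absent, so this attribute lookup raises
    # AttributeError even though the library itself loaded fine.
    mallinfo2 = libc.mallinfo2
except AttributeError:
    mallinfo2 = None  # fall back: skip the enhanced memory snapshot logic
```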
Refactor services folder/module structure.
**Motivation**
While working on our services I've repeatedly encountered circular imports and a general lack of clarity regarding where to put things. The structure introduced goes a long way towards resolving those issues, setting us up for a clean structure going forward.
**Services**
Services are now in their own folder with a few files:
- `services/{service_name}/__init__.py`: init as needed, mostly empty now
- `services/{service_name}/{service_name}_base.py`: the base class for the service
- `services/{service_name}/{service_name}_{impl_type}.py`: the default concrete implementation of the service - typically one of `sqlite`, `default`, or `memory`
- `services/{service_name}/{service_name}_common.py`: any common items - models, exceptions, utilities, etc
Though it's a bit verbose to have the service name both as the folder name and the prefix for files, I found it is _extremely_ confusing to have all of the base classes just be named `base.py`. So, at the cost of some verbosity when importing things, I've included the service name in the filename.
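For illustration, imports under this layout look like the following, using a hypothetical `foo` service (names are examples of the pattern, not real modules):
```py
from invokeai.app.services.foo.foo_base import FooServiceBase
from invokeai.app.services.foo.foo_common import FooNotFoundError
from invokeai.app.services.foo.foo_sqlite import SqliteFooService
```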
There are some minor logic changes. For example, in `InvocationProcessor`, instead of assigning the model manager service to a variable to be used later in the file, the service is used directly via the `Invoker`.
**Shared**
Things that are used across disparate services are in `services/shared/`:
- `default_graphs.py`: previously in `services/`
- `graphs.py`: previously in `services/`
- `pagination`: generic pagination models used in a few services
- `sqlite`: the `SqliteDatabase` class, other sqlite-specific things
* Bump diffusers to 0.21.2.
* Add T2IAdapterInvocation boilerplate.
* Add T2I-Adapter model to model-management.
* (minor) Tidy prepare_control_image(...).
* Add logic to run the T2I-Adapter models at the start of the DenoiseLatentsInvocation.
* Add logic for applying T2I-Adapter weights and accumulating.
* Add T2IAdapter to MODEL_CLASSES map.
* yarn typegen
* Add model probes for T2I-Adapter models.
* Add all of the frontend boilerplate required to use T2I-Adapter in the nodes editor.
* Add T2IAdapterModel.convert_if_required(...).
* Fix errors in T2I-Adapter input image sizing logic.
* Fix bug with handling of multiple T2I-Adapters.
* black / flake8
* Fix typo
* yarn build
* Add num_channels param to prepare_control_image(...).
* Link to upstream diffusers bugfix PR that currently requires a workaround.
* feat: Add Color Map Preprocessor
Needed for the color T2I Adapter
* feat: Add Color Map Preprocessor to Linear UI
* Revert "feat: Add Color Map Preprocessor"
This reverts commit a1119a00bf.
* Revert "feat: Add Color Map Preprocessor to Linear UI"
This reverts commit bd8a9b82d8.
* Fix T2I-Adapter field rendering in workflow editor.
* yarn build, yarn typegen
---------
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Add `batch_id` to outbound events. This necessitates adding it to both `InvocationContext` and `InvocationQueueItem`. This allows the canvas to receive images.
When the user enqueues a batch on the canvas, it is expected that all images from that batch are directed to the canvas.
The simplest, most flexible solution is to add the `batch_id` to the invocation context-y stuff. Then everything knows what batch it came from, and we can have the canvas pick up images associated with its list of canvas `batch_id`s.
* fix(config): fix typing issues in `config/`
`config/invokeai_config.py`:
- use `Optional` for things that are optional
- fix typing of `ram_cache_size()` and `vram_cache_size()`
- remove unused and incorrectly typed method `autoconvert_path`
- fix types and logic for `parse_args()`, in which `InvokeAIAppConfig.initconf` *must* be a `DictConfig`, but function would allow it to be set as a `ListConfig`, which presumably would cause issues elsewhere
`config/base.py`:
- use `cls` for first arg of class methods
- use `Optional` for things that are optional
- fix minor type issue related to setting of `env_prefix`
- remove unused `add_subparser()` method, which calls `add_parser()` on an `ArgumentParser` (a method only available on the `_SubParsersAction` object, which is returned from `ArgumentParser.add_subparsers()`)
* feat: queued generation and batches
Due to a very messy branch, alongside the broad addition of `isort` on `main`, some git surgery was needed to get an agreeable git history. This commit represents all of the work on queued generation. See the PR for notes.
* chore: flake8, isort, black
* fix(nodes): fix incorrect service stop() method
* fix(nodes): improve names of a few variables
* fix(tests): fix up tests after changes to batches/queue
* feat(tests): add unit tests for session queue helper functions
* feat(ui): dynamic prompts is always enabled
* feat(queue): add queue_status_changed event
* feat(ui): wip queue graphs
* feat(nodes): move cleanup til after invoker startup
* feat(nodes): add cancel_by_batch_ids
* feat(ui): wip batch graphs & UI
* fix(nodes): remove `Batch.batch_id` from required
* fix(ui): cleanup and use fixedCacheKey for all mutations
* fix(ui): remove orphaned nodes from canvas graphs
* fix(nodes): fix cancel_by_batch_ids result count
* fix(ui): only show cancel batch tooltip when batches were canceled
* chore: isort
* fix(api): return `[""]` when dynamic prompts generates no prompts
Just a simple fallback so we always have a prompt.
* feat(ui): dynamicPrompts.combinatorial is always on
There seems to be little purpose in exposing the combinatorial toggle for dynamic prompts. I've removed it by hiding it from the UI and defaulting combinatorial to true. If we want to expose the toggle again in the future it's straightforward to do so.
* feat: add queue_id & support logic
* feat(ui): fix upscale button
It prepends the upscale operation to the queue
* feat(nodes): return queue item when enqueuing a single graph
This facilitates one-off graph async workflows in the client.
* feat(ui): move controlnet autoprocess to queue
* fix(ui): fix non-serializable DOMRect in redux state
* feat(ui): QueueTable performance tweaks
* feat(ui): update queue list
Queue items expand to show the full queue item. Just as JSON for now.
* wip threaded session_processor
* feat(nodes,ui): fully migrate queue to session_processor
* feat(nodes,ui): add processor events
* feat(ui): ui tweaks
* feat(nodes,ui): consolidate events, reduce network requests
* feat(ui): cleanup & abstract queue hooks
* feat(nodes): optimize batch permutation
Use a generator to do only as much work as is needed.
Previously, though we only ended up creating exactly as many queue items as was needed, there was still some intermediary work that calculated *all* permutations. When that number was very high, the system had a very hard time and used a lot of memory.
The logic has been refactored to use a generator. Additionally, the batch validators are optimized to return early and use less memory.
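As a rough illustration of the approach (not the actual batch code), permutations are yielded lazily instead of being materialized up front:
```py
from itertools import islice, product


def iter_permutations(zones: list[list[dict]]):
    """Lazily yield one merged dict per permutation of the batch data zones."""
    for combo in product(*zones):
        merged: dict = {}
        for item in combo:
            merged.update(item)
        yield merged


# Only as much work as needed is done - here, just the first 10 of 2000 permutations.
seeds = [{"seed": s} for s in range(1000)]
prompts = [{"prompt": p} for p in ("a cat", "a dog")]
first_ten = list(islice(iter_permutations([seeds, prompts]), 10))
```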
* feat(ui): add seed behaviour parameter
This dynamic prompts parameter allows the seed to be randomized per prompt or per iteration:
- Per iteration: Use the same seed for all prompts in a single dynamic prompt expansion
- Per prompt: Use a different seed for every single prompt
"Per iteration" is appropriate for exploring a the latents space with a stable starting noise, while "Per prompt" provides more variation.
* fix(ui): remove extraneous random seed nodes from linear graphs
* fix(ui): fix controlnet autoprocess not working when queue is running
* feat(queue): add timestamps to queue status updates
Also show execution time in queue list
* feat(queue): change all execution-related events to use the `queue_id` as the room, also include `queue_item_id` in InvocationQueueItem
This allows for much simpler handling of queue items.
* feat(api): deprecate sessions router
* chore(backend): tidy logging in `dependencies.py`
* fix(backend): respect `use_memory_db`
* feat(backend): add `config.log_sql` (enables sql trace logging)
* feat: add invocation cache
Supersedes #4574
The invocation cache provides simple node memoization functionality. Nodes that use the cache are memoized and not re-executed if their inputs haven't changed. Instead, the stored output is returned.
## Results
This feature provides anywhere from a significant to a massive performance improvement.
The improvement is most marked on large batches of generations where you only change a couple of things (e.g. a different seed or prompt for each iteration) and on low-VRAM systems, where skipping an extraneous model load is a big deal.
## Overview
A new `invocation_cache` service is added to handle the caching. There's not much to it.
All nodes now inherit a boolean `use_cache` field from `BaseInvocation`. This is a node field and not a class attribute, because specific instances of nodes may want to opt in or out of caching.
The recently-added `invoke_internal()` method on `BaseInvocation` is used as an entrypoint for the cache logic.
To create a cache key, the invocation is first serialized using pydantic's provided `json()` method, skipping the unique `id` field. Then python's very fast builtin `hash()` is used to create an integer key. All implementations of `InvocationCacheBase` must provide a class method `create_key()` which accepts an invocation and outputs a string or integer key.
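A minimal sketch of such a `create_key()` (class name illustrative; uses the pydantic `json()` API as described):
```py
class MemoryInvocationCache(InvocationCacheBase):
    @classmethod
    def create_key(cls, invocation: BaseInvocation) -> int:
        # Serialize everything except the unique `id`, then hash the JSON
        # string with python's builtin hash().
        return hash(invocation.json(exclude={"id"}))
```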
## In-Memory Implementation
An in-memory implementation is provided. In this implementation, the node outputs are stored in memory as python classes. The in-memory cache does not persist across application restarts.
Max node cache size is added as `node_cache_size` under the `Generation` config category.
It defaults to 512 - this number is up for discussion, but given that these are relatively lightweight pydantic models, I think it's safe to up this even higher.
Note that the cache isn't storing the big stuff - tensors and images are stored on disk, and outputs include only references to them.
## Node Definition
The default for all nodes is to use the cache. The `@invocation` decorator now accepts an optional `use_cache: bool` argument to override the default of `True`.
Non-deterministic nodes, however, should set this to `False`. Currently, all random-stuff nodes, including `dynamic_prompt`, are set to `False`.
The field name `use_cache` is now effectively a reserved field name and possibly a breaking change if any community nodes use this as a field name. In hindsight, all our reserved field names should have been prefixed with underscores or something.
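For example, a random node might opt out like this (title/tags/category here are illustrative):
```py
@invocation(
    "dynamic_prompt",
    title="Dynamic Prompt",
    tags=["prompt"],
    category="prompt",
    use_cache=False,  # non-deterministic output, so never serve a cached result
)
class DynamicPromptInvocation(BaseInvocation):
    ...
```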
## One Gotcha
Leaf nodes probably want to opt out of the cache, because if they are cached, their outputs are not saved again.
If you run the same graph multiple times, you only end up with a single image output, because the image storage side-effects are in the `invoke()` method, which is bypassed if we have a cache hit.
## Linear UI
The linear graphs _almost_ just work, but due to the gotcha, we need to be careful about the final image-outputting node. To resolve this, a `SaveImageInvocation` node is added and used in the linear graphs.
This node is similar to `ImagePrimitive`, except it saves a copy of its input image, and has `use_cache` set to `False` by default.
This is now the leaf node in all linear graphs, and is the only node in those graphs with `use_cache == False` _and_ the only node with `is_intermediate == False`.
## Workflow Editor
All nodes now have a footer with a new `Use Cache [ ]` checkbox. It defaults to the value set by the invocation in its python definition, but can be changed by the user.
The workflow/node validation logic has been updated to migrate old workflows to use the new default values for `use_cache`. Users may still want to review the settings that have been chosen. In the event of catastrophic failure when running this migration, the default value of `True` is applied, as this is correct for most nodes.
Users should consider saving their workflows after loading them in and having them updated.
## Future Enhancements - Callback
A future enhancement would be to provide a callback to the `use_cache` flag that would be run as the node is executed to determine, based on its own internal state, if the cache should be used or not.
This would be useful for `DynamicPromptInvocation`, where the deterministic behaviour is determined by the `combinatorial: bool` field.
## Future Enhancements - Persisted Cache
Similar to how the latents storage is backed by disk, the invocation cache could be persisted to the database or disk. We'd need to be very careful about deserializing outputs, but it's perhaps worth exploring in the future.
* fix(ui): fix queue list item width
* feat(nodes): do not send the whole node on every generator progress
* feat(ui): strip out old logic related to sessions
Things like `isProcessing` are no longer relevant with queue. Removed them all & updated everything to be appropriate for the queue. May be a few little quirks I've missed...
* feat(ui): fix up param collapse labels
* feat(ui): click queue count to go to queue tab
* tidy(queue): update comment, query format
* feat(ui): fix progress bar when canceling
* fix(ui): fix circular dependency
* feat(nodes): bail on node caching logic if `node_cache_size == 0`
* feat(nodes): handle KeyError on node cache pop
* feat(nodes): bypass cache codepath if cache is disabled
more better no do thing
* fix(ui): reset api cache on connect/disconnect
* feat(ui): prevent enqueue when no prompts generated
* feat(ui): add queue controls to workflow editor
* feat(ui): update floating buttons & other incidental UI tweaks
* fix(ui): fix missing/incorrect translation keys
* fix(tests): add config service to mock invocation services
invocation needs access to `node_cache_size` in order to run
* optionally remove pause/resume buttons from queue UI
* option to disable prepending
* chore(ui): remove unused file
* feat(queue): remove `order_id` entirely, `item_id` is now an autoinc pk
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
We need to parse the config before doing anything related to invocations to ensure that the invocations union picks up on denied nodes.
- Move that to the top of api_app and cli_app
- Wrap subsequent imports in `if True:`, as a hack to satisfy flake8 and not have to noqa every line or the whole file (see the sketch after this list)
- Add tests to ensure graph validation fails when using a denied node, and that the invocations union does not have denied nodes (this indirectly provides confidence that the generated OpenAPI schema will not include denied nodes)
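A sketch of what the top of `api_app.py` looks like with this ordering (simplified; only the shape matters):
```py
from invokeai.app.services.config import InvokeAIAppConfig

# Parse the config first, so the node denylist is known before the
# invocations union is assembled at import time.
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()

if True:  # hack: keeps flake8 happy about imports-not-at-top without noqa on every line
    import uvicorn
    from fastapi import FastAPI
```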
All invocation metadata (type, title, tags and category) are now defined in decorators.
The decorators add the `type: Literal["invocation_type"] = "invocation_type"` field to the invocation.
Category is a new invocation metadata, but it is not used by the frontend just yet.
- `@invocation()` decorator for invocations
```py
@invocation(
    "sdxl_compel_prompt",
    title="SDXL Prompt",
    tags=["sdxl", "compel", "prompt"],
    category="conditioning",
)
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
    ...
```
- `@invocation_output()` decorator for invocation outputs
```py
@invocation_output("clip_skip_output")
class ClipSkipInvocationOutput(BaseInvocationOutput):
    ...
```
- update invocation docs
- add category to decorator
- regen frontend types
Refine concept of "parameter" nodes to "primitives":
- integer
- float
- string
- boolean
- image
- latents
- conditioning
- color
Each primitive has:
- A field definition, if it is not already a python primitive value. The field is how this primitive value is passed between nodes. Collections are lists of the field in node definitions. ex: `ImageField` & `list[ImageField]`
- A single output class. ex: `ImageOutput`
- A collection output class. ex: `ImageCollectionOutput`
- A node, which functions to load or pass on the primitive value. ex: `ImageInvocation` (in this case, `ImageInvocation` replaces `LoadImage`)
Plus a number of related changes:
- Reorganize these into `primitives.py`
- Update all nodes and logic to use primitives
- Consolidate "prompt" outputs into "string" & "mask" into "image" (there's no reason for these to be different; they function identically)
- Update default graphs & tests
- Regen frontend types & minor frontend tidy related to changes
fixes the test suite generally, but some tests needed to be
skipped/xfailed due to recent refactor
- ignore three test suites that broke following the model manager
refactor
- move InvocationServices fixture to conftest.py
- add `boards` InvocationServices to the fixture
- Make environment variable settings case InSenSiTive:
INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
environment variables will both set `max_loaded_models`
- Updated realesrgan to use new config system.
- Updated textual_inversion_training to use new config system.
- Discovered a race condition when InvokeAIAppConfig is created
at module load time, which makes it impossible to customize
or replace the help message produced with --help on the command
line. To fix this, moved all instances of get_invokeai_config()
from module load time to object initialization time. Makes code
cleaner, too.
- Added `--from_file` argument to `invokeai-node-cli` and changed
github action to match. CI tests will hopefully work now.
- invokeai-configure updated to work with new config system
- migrate invokeai.init to invokeai.yaml during configure
- replace legacy invokeai with invokeai-node-cli
- add ability to run an invocation directly from invokeai-node-cli command line
- update CI tests to work with new invokeai syntax
* feat(ui): add axios client generator and simple example
* fix(ui): update client & nodes test code w/ new Edge type
* chore(ui): organize generated files
* chore(ui): update .eslintignore, .prettierignore
* chore(ui): update openapi.json
* feat(backend): fixes for nodes/generator
* feat(ui): generate object args for api client
* feat(ui): more nodes api prototyping
* feat(ui): nodes cancel
* chore(ui): regenerate api client
* fix(ui): disable OG web server socket connection
* fix(ui): fix scrollbar styles typing and prop
just noticed the typo, and made the types stronger.
* feat(ui): add socketio types
* feat(ui): wip nodes
- extract api client method arg types instead of manually declaring them
- update example to display images
- general tidy up
* start building out node translations from frontend state and add notes about missing features
* use reference to sampler_name
* use reference to sampler_name
* add optional apiUrl prop
* feat(ui): start hooking up dynamic txt2img node generation, create middleware for session invocation
* feat(ui): write separate nodes socket layer, txt2img generating and rendering w single node
* feat(ui): img2img implementation
* feat(ui): get intermediate images working but types are stubbed out
* chore(ui): add support for package mode
* feat(ui): add nodes mode script
* feat(ui): handle random seeds
* fix(ui): fix middleware types
* feat(ui): add rtk action type guard
* feat(ui): disable NodeAPITest
This was polluting the network/socket logs.
* feat(ui): fix parameters panel border color
This commit should be elsewhere but I don't want to break my flow
* feat(ui): make thunk types more consistent
* feat(ui): add type guards for outputs
* feat(ui): load images on socket connect
Rudimentary
* chore(ui): bump redux-toolkit
* docs(ui): update readme
* chore(ui): regenerate api client
* chore(ui): add typescript as dev dependency
I am having trouble with TS versions after vscode updated and started using TS 5. `madge` has installed 3.9.10 and for whatever reason my vscode wants to use that. Manually specifying 4.9.5 and then setting vscode to use that as the workspace TS fixes the issue.
* feat(ui): begin migrating gallery to nodes
Along the way, migrate to use RTK `createEntityAdapter` for gallery images, and separate `results` and `uploads` into separate slices. Much cleaner this way.
* feat(ui): clean up & comment results slice
* fix(ui): separate thunk for initial gallery load so it properly gets index 0
* feat(ui): POST upload working
* fix(ui): restore removed type
* feat(ui): patch api generation for headers access
* chore(ui): regenerate api
* feat(ui): wip gallery migration
* feat(ui): wip gallery migration
* chore(ui): regenerate api
* feat(ui): wip refactor socket events
* feat(ui): disable panels based on app props
* feat(ui): invert logic to be disabled
* disable panels when app mounts
* feat(ui): add support to disableTabs
* docs(ui): organise and update docs
* lang(ui): add toast strings
* feat(ui): wip events, comments, and general refactoring
* feat(ui): add optional token for auth
* feat(ui): export StatusIndicator and ModelSelect for header use
* feat(ui): working on making socket URL dynamic
* feat(ui): dynamic middleware loading
* feat(ui): prep for socket jwt
* feat(ui): migrate cancelation
also updated action names to be event-like instead of declaration-like
sorry, i was scattered and this commit has a lot of unrelated stuff in it.
* fix(ui): fix img2img type
* chore(ui): regenerate api client
* feat(ui): improve InvocationCompleteEvent types
* feat(ui): increase StatusIndicator font size
* fix(ui): fix middleware order for multi-node graphs
* feat(ui): add exampleGraphs object w/ iterations example
* feat(ui): generate iterations graph
* feat(ui): update ModelSelect for nodes API
* feat(ui): add hi-res functionality for txt2img generations
* feat(ui): "subscribe" to particular nodes
feels like a dirty hack but oh well it works
* feat(ui): first steps to node editor ui
* fix(ui): disable event subscription
it is not fully baked just yet
* feat(ui): wip node editor
* feat(ui): remove extraneous field types
* feat(ui): nodes before deleting stuff
* feat(ui): cleanup nodes ui stuff
* feat(ui): hook up nodes to redux
* fix(ui): fix handle
* fix(ui): add basic node edges & connection validation
* feat(ui): add connection validation styling
* feat(ui): increase edge width
* feat(ui): it blends
* feat(ui): wip model handling and graph topology validation
* feat(ui): validate connections w/ graphlib
* docs(ui): update nodes doc
* feat(ui): wip node editor
* chore(ui): rebuild api, update types
* add redux-dynamic-middlewares as a dependency
* feat(ui): add url host transformation
* feat(ui): handle already-connected fields
* feat(ui): rewrite SqliteItemStore in sqlalchemy
* fix(ui): fix sqlalchemy dynamic model instantiation
* feat(ui, nodes): metadata wip
* feat(ui, nodes): models
* feat(ui, nodes): more metadata wip
* feat(ui): wip range/iterate
* fix(nodes): fix sqlite typing
* feat(ui): export new type for invoke component
* tests(nodes): fix test instantiation of ImageField
* feat(nodes): fix LoadImageInvocation
* feat(nodes): add `title` ui hint
* feat(nodes): make ImageField attrs optional
* feat(ui): wip nodes etc
* feat(nodes): roll back sqlalchemy
* fix(nodes): partially address feedback
* fix(backend): roll back changes to pngwriter
* feat(nodes): wip address metadata feedback
* feat(nodes): add seeded rng to RandomRange
* feat(nodes): address feedback
* feat(nodes): move GET images error handling to DiskImageStorage
* feat(nodes): move GET images error handling to DiskImageStorage
* fix(nodes): fix image output schema customization
* feat(ui): img2img/txt2img -> linear
- remove txt2img and img2img tabs
- add linear tab
- add initial image selection to linear parameters accordion
* feat(ui): tidy graph builders
* feat(ui): tidy misc
* feat(ui): improve invocation union types
* feat(ui): wip metadata viewer recall
* feat(ui): move fonts to normal deps
* feat(nodes): fix broken upload
* feat(nodes): add metadata module + tests, thumbnails
- `MetadataModule` is stateless and needed in places where the `InvocationContext` is not available, so it has not been made a `service`
- Handles loading/parsing/building metadata, and creating png info objects (see the sketch after this list)
- added tests for MetadataModule
- Lifted thumbnail stuff to util
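The png info part of that boils down to something like this (a sketch using Pillow; the metadata key name is a placeholder):
```py
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_metadata(image: Image.Image, path: str, metadata_json: str) -> None:
    # Attach the serialized metadata to the PNG as a text chunk.
    pnginfo = PngInfo()
    pnginfo.add_text("invokeai_metadata", metadata_json)  # key name is a placeholder
    image.save(path, pnginfo=pnginfo)
```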
* fix(nodes): revert change to RandomRangeInvocation
* feat(nodes): address feedback
- make metadata a service
- rip out pydantic validation, implement metadata parsing as simple functions
- update tests
- address other minor feedback items
* fix(nodes): fix other tests
* fix(nodes): add metadata service to cli
* fix(nodes): fix latents/image field parsing
* feat(nodes): customise LatentsField schema
* feat(nodes): move metadata parsing to frontend
* fix(nodes): fix metadata test
---------
Co-authored-by: maryhipp <maryhipp@gmail.com>
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
* Add latents nodes.
* Fix iteration expansion.
* Add collection generator nodes, math nodes.
* Add noise node.
* Add some graph debug commands to the CLI.
* Fix negative id linking in CLI.
* Fix a CLI bug with multiple links per node.
* [nodes] Add better error handling to processor and CLI
* [nodes] Use more explicit name for marking node execution error
* [nodes] Update the processor call to error
author Kyle Schouviller <kyle0654@hotmail.com> 1669872800 -0800
committer Kyle Schouviller <kyle0654@hotmail.com> 1676240900 -0800
Adding base node architecture
Fix type annotation errors
Runs and generates, but breaks in saving session
Fix default model value setting. Fix deprecation warning.
Fixed node api
Adding markdown docs
Simplifying Generate construction in apps
[nodes] A few minor changes (#2510)
* Pin api-related requirements
* Remove confusing extra CORS origins list
* Adds response models for HTTP 200
[nodes] Adding graph_execution_state to soon replace session. Adding tests with pytest.
Minor typing fixes
[nodes] Fix some small output query hookups
[node] Fixing some additional typing issues
[nodes] Move and expand graph code. Add base item storage and sqlite implementation.
Update startup to match new code
[nodes] Add callbacks to item storage
[nodes] Adding an InvocationContext object to use for invocations to provide easier extensibility
[nodes] New execution model that handles iteration
[nodes] Fixing the CLI
[nodes] Adding a note to the CLI
[nodes] Split processing thread into separate service
[node] Add error message on node processing failure
Removing old files and duplicated packages
Adding python-multipart
* initial commit of DiffusionPipeline class
* spike: proof of concept using diffusers for txt2img
* doc: type hints for Generator
* refactor(model_cache): factor out load_ckpt
* model_cache: add ability to load a diffusers model pipeline
and update associated things in Generate & Generator to not instantly fail when that happens
* model_cache: fix model default image dimensions
* txt2img: support switching diffusers schedulers
* diffusers: let the scheduler do its scaling of the initial latents
Remove IPNDM scheduler; it is not behaving.
* web server: update image_progress callback for diffusers data
* diffusers: restore prompt weighting feature
* diffusers: fix set-sampler error following model switch
* diffusers: use InvokeAIDiffuserComponent for conditioning
* cross_attention_control: stub (no-op) implementations for diffusers
* model_cache: let offload_model work with DiffusionPipeline, sorta.
* models.yaml.example: add diffusers-format model, set as default
* test-invoke-conda: use diffusers-format model
test-invoke-conda: put huggingface-token where the library can use it
* environment-mac: upgrade to diffusers 0.7 (from 0.6)
this was already done for linux; mac must have been lost in the merge.
* preload_models: explicitly load diffusers models
In non-interactive mode too, as long as you're logged in.
* fix(model_cache): don't check `model.config` in diffusers format
clean-up from recent merge.
* diffusers integration: support img2img
* dev: upgrade to diffusers 0.8 (from 0.7.1)
We get to remove some code by using methods that were factored out in the base class.
* refactor: remove backported img2img.get_timesteps
now that we can use it directly from diffusers 0.8.1
* ci: use diffusers model
* dev: upgrade to diffusers 0.9 (from 0.8.1)
* lint: correct annotations for Python 3.9.
* lint: correct AttributeError.name reference for Python 3.9.
* CI: prefer diffusers-1.4 because it no longer requires a token
The RunwayML models still do.
* build: there's yet another place to update requirements?
* configure: try to download models even without token
Models in the CompVis and stabilityai repos no longer require them. (But runwayml still does.)
* configure: add troubleshooting info for config-not-found
* fix(configure): prepend root to config path
* fix(configure): remove second `default: true` from models example
* CI: simplify test-on-push logic now that we don't need secrets
The "test on push but only in forks" logic was only necessary when tests didn't work for PRs-from-forks.
* create an embedding_manager for diffusers
* internal: avoid importing diffusers DummyObject
see https://github.com/huggingface/diffusers/issues/1479
* fix "config attributes…not expected" diffusers warnings.
* fix deprecated scheduler construction
* work around an apparent MPS torch bug that causes conditioning to have no effect
* 🚧 post-rebase repair
* preliminary support for outpainting (no masking yet)
* monkey-patch diffusers.attention and use Invoke lowvram code
* add always_use_cpu arg to bypass MPS
* add cross-attention control support to diffusers (fails on MPS)
For unknown reasons MPS produces garbage output with .swap(). Use
--always_use_cpu arg to invoke.py for now to test this code on MPS.
* diffusers support for the inpainting model
* fix debug_image to not crash with non-RGB images.
* inpainting for the normal model [WIP]
This seems to be performing well until the LAST STEP, at which point it dissolves to confetti.
* fix off-by-one bug in cross-attention-control (#1774)
prompt token sequences begin with a "beginning-of-sequence" marker <bos> and end with a repeated "end-of-sequence" marker <eos> - to make a default prompt length of <bos> + 75 prompt tokens + <eos>. the .swap() code was failing to take the column for <bos> at index 0 into account. the changes here do that, and also add extra handling for a single <eos> (which may be redundant but which is included for completeness).
based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is because over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. a change to token n-1 has flow-on effects to how the [1x4x64x64] latent tensor is modified by all the tokens after it, - and as the next step is processed, all the tokens before it as well. intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. so even though the token at n-1 was being edited when what the user actually wanted was to edit the token at n, it nevertheless still had some non-negligible effect, in roughly the right direction, often enough that it seemed like it was working properly.
* refactor common CrossAttention stuff into a mixin so that the old ldm code can still work if necessary
* inpainting for the normal model. I think it works this time.
* diffusers: reset num_vectors_per_token
sync with 44a0055571
* diffusers: txt2img2img (hires_fix)
with so much slicing and dicing of pipeline methods to stitch them together
* refactor(diffusers): reduce some code duplication amongst the different tasks
* fixup! refactor(diffusers): reduce some code duplication amongst the different tasks
* diffusers: enable DPMSolver++ scheduler
* diffusers: upgrade to diffusers 0.10, add Heun scheduler
* diffusers(ModelCache): stopgap to make from_cpu compatible with diffusers
* CI: default to diffusers-1.5 now that runwayml token requirement is gone
* diffusers: update to 0.10 (and transformers to 4.25)
* diffusers: use xformers when available
diffusers no longer auto-enables this as of 0.10.2.
* diffusers: make masked img2img behave better with multi-step schedulers
re-randomizing the noise each step was confusing them.
* diffusers: work more better with more models.
fixed relative path problem with local models.
fixed models on hub not always having a `fp16` branch.
* diffusers: stopgap fix for attention_maps_callback crash after recent merge
* fixup import merge conflicts
correction for 061c5369a2
* test: add tests/inpainting inputs for masked img2img
* diffusers(AddsMaskedGuidance): partial fix for k-schedulers
Prevents them from crashing, but results are still hot garbage.
* fix --safety_checker arg parsing
and add note to diffusers loader about where safety checker gets called
* generate: fix import error
* CI: don't try to read the old init location
* diffusers: support loading an alternate VAE
* CI: remove sh-syntax if-statement so it doesn't crash powershell
* CI: fold strings in yaml because backslash is not line-continuation in powershell
* attention maps callback stuff for diffusers
* build: fix syntax error in environment-mac
* diffusers: add INITIAL_MODELS with diffusers-compatible repos
* re-enable the embedding manager; closes #1778
* Squashed commit of the following:
commit e4a956abc37fcb5cf188388b76b617bc5c8fda7d
Author: Damian Stewart <d@damianstewart.com>
Date: Sun Dec 18 15:43:07 2022 +0100
import new load handling from EmbeddingManager and cleanup
commit c4abe91a5ba0d415b45bf734068385668b7a66e6
Merge: 032e856e 1efc6397
Author: Damian Stewart <d@damianstewart.com>
Date: Sun Dec 18 15:09:53 2022 +0100
Merge branch 'feature_textual_inversion_mgr' into dev/diffusers_with_textual_inversion_manager
commit 032e856eefb3bbc39534f5daafd25764bcfcef8b
Merge: 8b4f0fe9 bc515e24
Author: Damian Stewart <d@damianstewart.com>
Date: Sun Dec 18 15:08:01 2022 +0100
Merge remote-tracking branch 'upstream/dev/diffusers' into dev/diffusers_with_textual_inversion_manager
commit 1efc6397fc6e61c1aff4b0258b93089d61de5955
Author: Damian Stewart <d@damianstewart.com>
Date: Sun Dec 18 15:04:28 2022 +0100
cleanup and add performance notes
commit e400f804ac471a0ca2ba432fd658778b20c7bdab
Author: Damian Stewart <d@damianstewart.com>
Date: Sun Dec 18 14:45:07 2022 +0100
fix bug and update unit tests
commit deb9ae0ae1016750e93ce8275734061f7285a231
Author: Damian Stewart <d@damianstewart.com>
Date: Sun Dec 18 14:28:29 2022 +0100
textual inversion manager seems to work
commit 162e02505dec777e91a983c4d0fb52e950d25ff0
Merge: cbad4583 12769b3d
Author: Damian Stewart <d@damianstewart.com>
Date: Sun Dec 18 11:58:03 2022 +0100
Merge branch 'main' into feature_textual_inversion_mgr
commit cbad45836c6aace6871a90f2621a953f49433131
Author: Damian Stewart <d@damianstewart.com>
Date: Sun Dec 18 11:54:10 2022 +0100
use position embeddings
commit 070344c69b0e0db340a183857d0a787b348681d3
Author: Damian Stewart <d@damianstewart.com>
Date: Sun Dec 18 11:53:47 2022 +0100
Don't crash CLI on exceptions
commit b035ac8c6772dfd9ba41b8eeb9103181cda028f8
Author: Damian Stewart <d@damianstewart.com>
Date: Sun Dec 18 11:11:55 2022 +0100
add missing position_embeddings
commit 12769b3d3562ef71e0f54946b532ad077e10043c
Author: Damian Stewart <d@damianstewart.com>
Date: Fri Dec 16 13:33:25 2022 +0100
debugging why it don't work
commit bafb7215eabe1515ca5e8388fd3bb2f3ac5362cf
Author: Damian Stewart <d@damianstewart.com>
Date: Fri Dec 16 13:21:33 2022 +0100
debugging why it don't work
commit 664a6e9e14
Author: Damian Stewart <d@damianstewart.com>
Date: Fri Dec 16 12:48:38 2022 +0100
use TextualInversionManager in place of embeddings (wip, doesn't work)
commit 8b4f0fe9d6e4e2643b36dfa27864294785d7ba4e
Author: Damian Stewart <d@damianstewart.com>
Date: Fri Dec 16 12:48:38 2022 +0100
use TextualInversionManager in place of embeddings (wip, doesn't work)
commit ffbe1ab11163ba712e353d89404e301d0e0c6cdf
Merge: 6e4dad60 023df37e
Author: Damian Stewart <d@damianstewart.com>
Date: Fri Dec 16 02:37:31 2022 +0100
Merge branch 'feature_textual_inversion_mgr' into dev/diffusers
commit 023df37eff
Author: Damian Stewart <d@damianstewart.com>
Date: Fri Dec 16 02:36:54 2022 +0100
cleanup
commit 05fac594ea
Author: Damian Stewart <d@damianstewart.com>
Date: Fri Dec 16 02:07:49 2022 +0100
tweak error checking
commit 009f32ed39
Author: damian <null@damianstewart.com>
Date: Thu Dec 15 21:29:47 2022 +0100
unit tests passing for embeddings with vector length >1
commit beb1b08d9a
Author: Damian Stewart <d@damianstewart.com>
Date: Thu Dec 15 13:39:09 2022 +0100
more explicit equality tests when overwriting
commit 44d8a5a7c8
Author: Damian Stewart <d@damianstewart.com>
Date: Thu Dec 15 13:30:13 2022 +0100
wip textual inversion manager (unit tests passing for 1v embedding overwriting)
commit 417c2b57d9
Author: Damian Stewart <d@damianstewart.com>
Date: Thu Dec 15 12:30:55 2022 +0100
wip textual inversion manager (unit tests passing for base stuff + padding)
commit 2e80872e3b
Author: Damian Stewart <d@damianstewart.com>
Date: Thu Dec 15 10:57:57 2022 +0100
wip new TextualInversionManager
* stop using WeightedFrozenCLIPEmbedder
* store diffusion models locally
- configure_invokeai.py reconfigured to store diffusion models rather than
CompVis models
- the hugging face caching system is used, but the cache is set to ~/invokeai/models/repo_id
- models.yaml does **NOT** use path, just repo_id
- "repo_name" changed to "repo_id" to following hugging face conventions
- Models are loaded with full precision pending further work.
* allow non-local files during development
* path takes priority over repo_id
* MVP for model_cache and configure_invokeai
- Feature complete (almost)
- configure_invokeai.py downloads both .ckpt and diffuser models,
along with their VAEs. Both types of download are controlled by
a unified INITIAL_MODELS.yaml file.
- model_cache can load both type of model and switches back and forth
in CPU. No memory leaks detected
TO DO:
1. I have not yet turned on the LocalOnly flag for diffuser models, so
the code will check the Hugging Face repo for updates before using the
locally cached models. This will break firewalled systems. I am thinking
of putting in a global check for internet connectivity at startup time
and setting the LocalOnly flag based on this. It would be good to check
updates if there is connectivity.
2. I have not gone completely through INITIAL_MODELS.yaml to check which
models are available as diffusers and which are not. So models like
PaperCut and VoxelArt may not load properly. The runway and stability
models are checked, as well as the Trinart models.
3. Add stanzas for SD 2.0 and 2.1 in INITIAL_MODELS.yaml
REMAINING PROBLEMS NOT DIRECTLY RELATED TO MODEL_CACHE:
1. When loading a .ckpt file there are lots of messages like this:
Warning! ldm.modules.attention.CrossAttention is no longer being
maintained. Please use InvokeAICrossAttention instead.
I'm not sure how to address this.
2. The ckpt models ***don't actually run*** due to the lack of special-case
support for them in the generator objects. For example, here's the hard
crash you get when you run txt2img against the legacy waifu-diffusion-1.3
model:
```
>> An error occurred:
Traceback (most recent call last):
File "/data/lstein/InvokeAI/ldm/invoke/CLI.py", line 140, in main
main_loop(gen, opt)
File "/data/lstein/InvokeAI/ldm/invoke/CLI.py", line 371, in main_loop
gen.prompt2image(
File "/data/lstein/InvokeAI/ldm/generate.py", line 496, in prompt2image
results = generator.generate(
File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 108, in generate
image = make_image(x_T)
File "/data/lstein/InvokeAI/ldm/invoke/generator/txt2img.py", line 33, in make_image
pipeline_output = pipeline.image_from_embeddings(
File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1265, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LatentDiffusion' object has no attribute 'image_from_embeddings'
```
3. The inpainting diffusion model isn't working. Here's the output of "banana
sushi" when inpainting-1.5 is loaded:
```
Traceback (most recent call last):
File "/data/lstein/InvokeAI/ldm/generate.py", line 496, in prompt2image
results = generator.generate(
File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 108, in generate
image = make_image(x_T)
File "/data/lstein/InvokeAI/ldm/invoke/generator/txt2img.py", line 33, in make_image
pipeline_output = pipeline.image_from_embeddings(
File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 301, in image_from_embeddings
result_latents, result_attention_map_saver = self.latents_from_embeddings(
File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 330, in latents_from_embeddings
result: PipelineIntermediateState = infer_latents_from_embeddings(
File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 185, in __call__
for result in self.generator_method(*args, **kwargs):
File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 367, in generate_latents_from_embeddings
step_output = self.step(batched_t, latents, guidance_scale,
File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 409, in step
step_output = self.scheduler.step(noise_pred, timestep, latents, **extra_step_kwargs)
File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/diffusers/schedulers/scheduling_lms_discrete.py", line 223, in step
pred_original_sample = sample - sigma * model_output
RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1
```
* proper support for float32/float16
- configure script now correctly detects user's preference for
fp16/32 and downloads the correct diffuser version. If fp16
version not available, falls back to fp32 version.
- misc code cleanup and simplification in model_cache
* add on-the-fly conversion of .ckpt to diffusers models
1. On-the-fly conversion code can be found in the file ldm/invoke/ckpt_to_diffusers.py.
2. A new !optimize command has been added to the CLI. Should be ported to Web GUI.
User experience on the CLI is this:
```
invoke> !optimize /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
INFO: Converting legacy weights file /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt to optimized diffuser model.
This operation will take 30-60s to complete.
Success. Optimized model is now located at /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
Writing new config file entry for sd-v1-4...
>> New configuration:
sd-v1-4:
description: Optimized version of sd-v1-4
format: diffusers
path: /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
OK to import [n]? y
>> Verifying that new model loads...
>> Current VRAM usage: 2.60G
>> Offloading stable-diffusion-2.1 to CPU
>> Loading diffusers model from /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
| Using faster float16 precision
You have disabled the safety checker for <class 'ldm.invoke.generator.diffusers_pipeline.StableDiffusionGeneratorPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion \
license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances,\
disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
| training width x height = (512 x 512)
>> Model loaded in 3.48s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage:2.17G
>> Textual inversions available:
>> Setting Sampler to k_lms (LMSDiscreteScheduler)
Keep model loaded? [y]
```
* add parallel set of generator files for ckpt legacy generation
* generation using legacy ckpt models now working
* diffusers: fix missing attention_maps_callback
fix for 23eb80b404
* associate legacy CrossAttention with .ckpt models
* enable autoconvert
New --autoconvert CLI option will scan a designated directory for
new .ckpt files, convert them into diffuser models, and import
them into models.yaml.
Works like this:
invoke.py --autoconvert /path/to/weights/directory
In ModelCache added two new methods:
autoconvert_weights(config_path, weights_directory_path, models_directory_path)
convert_and_import(ckpt_path, diffuser_path)
* diffusers: update to diffusers 0.11 (from 0.10.2)
* fix vae loading & width/height calculation
* refactor: encapsulate these conditioning data into one container
* diffusers: fix some noise-scaling issues by pushing the noise-mixing down to the common function
* add support for safetensors and accelerate
* set local_files_only when internet unreachable
* diffusers: fix error-handling path when model repo has no fp16 branch
* fix generatorinpaint error
Fixes:
"ModuleNotFoundError: No module named 'ldm.invoke.generatorinpaint'"
https://github.com/invoke-ai/InvokeAI/pull/1583#issuecomment-1363634318
* quench diffuser safety-checker warning
* diffusers: support stochastic DDIM eta parameter
* fix conda env creation on macos
* fix cross-attention with diffusers 0.11
* diffusers: the VAE needs to be tiling as well as the U-Net
* diffusers: comment on subfolders
* diffusers: embiggen!
* diffusers: make model_cache.list_models serializable
* diffusers(inpaint): restore scaling functionality
* fix requirements clash between numba and numpy 1.24
* diffusers: allow inpainting model to do non-inpainting tasks
* start expanding model_cache functionality
* add import_ckpt_model() and import_diffuser_model() methods to model_manager
- in addition, model_cache.py is now renamed to model_manager.py
* allow "recommended" flag to be optional in INITIAL_MODELS.yaml
* configure_invokeai now downloads VAE diffusers in advance
* rename ModelCache to ModelManager
* remove support for `repo_name` in models.yaml
* check for and refuse to load embeddings trained on incompatible models
* models.yaml.example: s/repo_name/repo_id
and remove extra INITIAL_MODELS now that the main one has diffusers models in it.
* add MVP textual inversion script
* refactor(InvokeAIDiffuserComponent): factor out _combine()
* InvokeAIDiffuserComponent: implement threshold
* InvokeAIDiffuserComponent: diagnostic logs for threshold
...this does not look right
* add a curses-based frontend to textual inversion
- not quite working yet
- requires npyscreen installed
- on windows will also have the windows-curses requirement, but not added
to requirements yet
* add curses-based interface for textual inversion
* fix crash in convert_and_import()
- This corrects a "local variable referenced before assignment" error
in model_manager.convert_and_import()
* potential workaround for no 'state_dict' key error
- As reported in https://github.com/huggingface/diffusers/issues/1876
* create TI output dir if needed
* Update environment-lin-cuda.yml (#2159)
Fix line 42 to use the operator in the proper order for the transformers requirement: ~= instead of =~
* diffusers: update sampler-to-scheduler mapping
based on https://github.com/huggingface/diffusers/issues/277#issuecomment-1371428672
* improve user experience for ckpt to diffusers conversion
- !optimize_models command now operates on an existing ckpt file entry in models.yaml
- replaces existing entry, rather than adding a new one
- offers to delete the ckpt file after conversion
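A sketch of the replace-in-place flow (the YAML keys, conversion step, and prompt below are assumptions for illustration, not the actual command implementation):
```
from pathlib import Path
from omegaconf import OmegaConf

def optimize_model(model_name, config_path="configs/models.yaml"):
    """Convert an existing ckpt entry to diffusers and swap the entry in place."""
    config = OmegaConf.load(config_path)
    entry = config[model_name]
    ckpt_path = Path(entry["weights"])
    diffusers_path = ckpt_path.with_suffix("")  # e.g. foo.ckpt -> foo/
    # conversion step (assumed): run the ckpt-to-diffusers conversion here
    # to populate diffusers_path before rewriting the config entry
    config[model_name] = {
        "format": "diffusers",
        "path": str(diffusers_path),
        "description": entry.get("description", model_name),
    }
    OmegaConf.save(config, config_path)
    if input(f"Delete original {ckpt_path}? [y/N] ").lower().startswith("y"):
        ckpt_path.unlink()
```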
* web: adapt progress callback to deal with old generator or new diffusers pipeline
* clean-up model_manager code
- add_model() verified to work for .ckpt local paths,
.ckpt remote URLs, diffusers local paths, and
diffusers repo_ids
- convert_and_import() verified to work for local and
remote .ckpt files
* handle edge cases for import_model() and convert_model()
* add support for safetensor .ckpt files
* fix name error
* code cleanup with pyflake
* improve model setting behavior
- If the user enters an invalid model name at startup, do not
try to load it; warn and fall back to the default model
- CLI UI enhancement: include currently active model in the command
line prompt.
* update test-invoke-pip.yml
- fix model cache path to point to runwayml/stable-diffusion-v1-5
- remove `skip-sd-weights` from configure_invokeai.py args
* exclude dev/diffusers from "fail for draft PRs"
* disable "fail on PR jobs"
* re-add `--skip-sd-weights` since no space
* update workflow environments
- include `INVOKE_MODEL_RECONFIGURE: '--yes'`
* clean up model load failure handling
- Allow CLI to run even when no model is defined or loadable.
- Inhibit stack trace when model load fails - only show last error
- Give user *option* to run configure_invokeai.py when no models
successfully load.
- Restart invokeai after reconfiguration.
* further edge-case handling
1) only one model in models.yaml file, and that model is broken
2) no models in models.yaml
3) models.yaml doesn't exist at all
* fix incorrect model status listing
- "cached" was not being returned from list_models()
- normalize handling of exceptions during model loading:
- Passing an invalid model name to generate.set_model() will return
a KeyError
- All other exceptions are returned as the appropriate Exception
* CI: do download weights (if not already cached)
* diffusers: fix scheduler loading in offline mode
* CI: fix model name (no longer has `diffusers-` prefix)
* Update txt2img2img.py (#2256)
* fixes to share models with HuggingFace cache system
- If HF_HOME environment variable is defined, then all huggingface models
are stored in that directory following the standard conventions.
- For seamless interoperability, set HF_HOME to ~/.cache/huggingface
- If HF_HOME not defined, then models are stored in ~/invokeai/models.
This is equivalent to setting HF_HOME to ~/invokeai/models
A future commit will add a migration mechanism so that this change doesn't
break previous installs.
* feat - make model storage compatible with hugging face caching system
This commit alters the InvokeAI model directory to be compatible with
hugging face, making it easier to share diffusers (and other models)
across different programs.
- If the HF_HOME environment variable is not set, then models are
cached in ~/invokeai/models in a format that is identical to the
HuggingFace cache.
- If HF_HOME is set, then models are cached wherever HF_HOME points.
- To enable sharing with other HuggingFace library clients, set
HF_HOME to ~/.cache/huggingface to set the default cache location
or to ~/invokeai/models to have huggingface cache inside InvokeAI.
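A minimal sketch of the resolution logic described above (the helper name is illustrative):
```
import os
from pathlib import Path

def models_cache_dir():
    """Resolve where diffusers models are cached.

    If HF_HOME is set, models are shared with other HuggingFace clients
    under that directory; otherwise they live in ~/invokeai/models, which
    uses the same on-disk layout as the HuggingFace cache.
    """
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        return Path(hf_home)
    return Path.home() / "invokeai" / "models"
```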
* fix error "no attribute CkptInpaint"
* model_manager.list_models() returns entire model config stanza+status
* Initial Draft - Model Manager Diffusers
* added hash function to diffusers
* implement sha256 hashes on diffusers models
* Add Model Manager Support for Diffusers
* fix various problems with model manager
- in CLI import functions, fix "not enough values to unpack" error from
_get_name_and_desc()
- fix crash when using old-style vae: value with new-style diffuser
* rebuild frontend
* fix dictconfig-not-serializable issue
* fix 'NoneType' object is not subscriptable crash in model_manager
* fix "str has no attribute get" error in model_manager list_models()
* Add path and repo_id support for Diffusers Model Manager
Also fixes bugs
* Fix tooltip IT localization not working
* Add Version Number To WebUI
* Optimize Model Search
* Fix incorrect font on the Model Manager UI
* Fix image degradation on merge fixes - [Experimental]
This change should effectively fix a couple of things.
- Fix image degradation on subsequent merges of the canvas layers.
- Fix the slight transparent border that is left behind when filling the bounding box with a color.
- Fix the left over line of color when filling a bounding box with color.
So far no side effects have been observed. If you find any, please report them.
* Add local model filtering for Diffusers / Checkpoints
* Go to home on modal close for the Add Modal UI
* Styling Fixes
* Model Manager Diffusers Localization Update
* Add Safe Tensor scanning to Model Manager
* Fix model edit form dispatching string values instead of numbers.
* Resolve VAE handling / edge cases for supplied repos
* defer injecting tokens for textual inversions until they're used for the first time
* squash a console warning
* implement model migration check
* add_model() overwrites previous config rather than merges
* fix model config file attribute merging
* fix precision handling in textual inversion script
* allow ckpt conversion script to work with safetensors .ckpts
Applied patch here:
beb932c5d1
* fix name "args" is not defined crash in textual_inversion_training
* fix a second NameError: name 'args' is not defined crash
* fix loading of the safety checker from the global cache dir
* add installation step to textual inversion frontend
- After a successful training run, the script will copy learned_embeds.bin
to a subfolder of the embeddings directory.
- User given the option to delete the logs and intermediate checkpoints
(which together use 7-8G of space)
- If textual inversion training fails, reports the error gracefully.
* don't crash out on incompatible embeddings
- put try: blocks around places where the system tries to load an embedding
which is incompatible with the currently loaded model
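Roughly, the guard looks like this (the manager method name is an assumption):
```
def load_embedding_safely(manager, embedding_path):
    """Load a textual-inversion embedding without crashing the CLI.

    If the embedding was trained on a different base model (e.g. wrong
    token dimension), warn and continue instead of raising.
    """
    try:
        manager.load_textual_inversion(embedding_path)  # assumed method name
    except Exception as e:
        print(f">> Notice: skipping incompatible embedding {embedding_path}: {e}")
```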
* add support for checkpoint resuming
* textual inversion preferences are saved and restored between sessions
- Preferences are stored in a file named text-inversion-training/preferences.conf
- Currently the resume-from-checkpoint option is not working correctly. Possible
bug in textual_inversion_training.py?
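A sketch of the save/restore round trip, assuming a simple key=value file format (the real layout of preferences.conf may differ):
```
import configparser
from pathlib import Path

PREFS_FILE = Path("text-inversion-training") / "preferences.conf"

def save_preferences(prefs):
    """Persist the last-used training options between sessions."""
    parser = configparser.ConfigParser()
    parser["textual_inversion"] = {k: str(v) for k, v in prefs.items()}
    PREFS_FILE.parent.mkdir(parents=True, exist_ok=True)
    with PREFS_FILE.open("w") as f:
        parser.write(f)

def load_preferences():
    """Restore previously saved options, or return an empty dict."""
    parser = configparser.ConfigParser()
    if PREFS_FILE.exists():
        parser.read(PREFS_FILE)
        return dict(parser["textual_inversion"])
    return {}
```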
* copy learned_embeddings.bin into right location
* add front end for diffusers model merging
- Front end doesn't do anything yet!!!!
- Made a change to model name parsing in the CLI to support merged models
with the "+" character in their names.
* improve inpainting experience
- recommend ckpt version of inpainting-1.5 to user
- fix get_noise() bug in ckpt version of omnibus.py
* update environment*yml
* tweak instructions to install HuggingFace token
* bump version number
* enhance update scripts
- update scripts will now fetch new INITIAL_MODELS.yaml so that
configure_invokeai.py will know about the diffusers versions.
* enhance invoke.sh/invoke.bat launchers
- added configure_invokeai.py to menu
- menu defaults to browser-based invoke
* remove conda workflow (#2321)
* fix `token_ids has shape torch.Size([79]) - expected [77]`
* update CHANGELOG.md with 2.3.* info
- Add information on how formats have changed and the upgrade process.
- Add short bug list.
Co-authored-by: Damian Stewart <d@damianstewart.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Co-authored-by: Wybartel-luxmc <37852506+Wybartel-luxmc@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>
Co-authored-by: mickr777 <115216705+mickr777@users.noreply.github.com>
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
- remove tests already performed in PR
- remove tests pointing to non existing files
- reduce steps to 20
This should decrease test time a lot and also "fix" failing mac tests.
I still recommend investigating why mac invoke takes so much longer!
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Squashed commit of the following:
commit 9a1fe8e7fb
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Sat Nov 12 17:07:40 2022 +0000
swap in release URLs for installers
commit ff56f5251b
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Sat Nov 12 17:03:21 2022 +0000
fix up bad unicode chars in invoke.py
commit ed943bd6c7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Sat Nov 12 16:05:45 2022 +0000
outcrop improvements, hand-added
commit 7ad2355b1d
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Sat Nov 12 15:14:33 2022 +0000
documentation fixes
commit 66c920fc19
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 16:49:25 2022 -0500
Revert "Resize hires as an image"
This reverts commit d05b1b3544.
commit 3fc5cb09f8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Sat Nov 12 12:43:17 2022 +0000
fix incorrect link in install
commit 1345ec77ab
Author: tildebyte <337875+tildebyte@users.noreply.github.com>
Date: Sun Nov 6 19:07:31 2022 -0500
toil(repo): add tildebyte as owner of installer/ directory
commit b116715490
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date: Thu Nov 10 21:43:56 2022 -0800
Fix performance issue introduced by torch cuda cache clear during generation
commit fa3670270e
Author: mauwii <Mauwii@outlook.de>
Date: Fri Nov 11 12:42:03 2022 +0100
small update to dockers huggingface section
commit c304250ef6
Author: mauwii <Mauwii@outlook.de>
Date: Fri Nov 11 12:19:27 2022 +0100
fix format and Link in INSTALL_INVOKE.md
commit 802ce5dde5
Author: mauwii <Mauwii@outlook.de>
Date: Fri Nov 11 11:17:49 2022 +0100
small fixes to format and a link in INSTALL_MANUAL
commit 311ee320ec
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Fri Nov 11 07:23:35 2022 +0000
ignore installer intermediate files
commit e9df17b374
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Fri Nov 11 07:19:25 2022 +0000
fix backslash-related syntax error
commit 061fb4ef00
Merge: 52be0d23 4095acd1
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Fri Nov 11 06:50:04 2022 +0000
Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3
commit 52be0d2396
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Fri Nov 11 06:49:45 2022 +0000
add WindowsLongFileName batfile to source installer
commit 4095acd10e
Author: mauwii <Mauwii@outlook.de>
Date: Fri Nov 11 07:05:17 2022 +0100
Doc Updates
A lot of re-formatting of the new Installation Docs
also some content updates/corrections
commit 201eb22d76
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Fri Nov 11 04:41:02 2022 +0000
prevent two models from being marked default in models.yaml
commit 17ab982200
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Fri Nov 11 03:56:54 2022 +0000
installers download branch HEAD not tag
commit a04965b0e9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Fri Nov 11 03:48:21 2022 +0000
improve messaging during installation process
commit 0b529f0c57
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 15:22:32 2022 +0000
enable outcropping of random JPG/PNG images
- Works best with runwayML inpainting model
- Numerous code changes required to propagate seed to final metadata.
Original code predicated on the image being generated within InvokeAI.
commit 6f9f848345
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 17:27:42 2022 +0000
enhance outcropping with ability to direct contents of new regions
- When outcropping an image you can now add a `--new_prompt` option, to specify
a new prompt to be used instead of the original one used to generate the image.
- Similarly you can provide a new seed using `--seed` (or `-S`). A seed of zero
will pick one randomly.
- This PR also fixes the crash that happened when trying to outcrop an image
that does not contain InvokeAI metadata.
commit 918c1589ef
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 20:16:47 2022 +0000
fix #1402
commit 116415b3fc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 21:27:25 2022 +0000
fix invoke.py crash if no models.yaml file present
- Script will now offer the user the ability to create a
minimal models.yaml and then gracefully exit.
- Closes #1420
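The offer-and-exit flow is roughly this (the minimal stanza below is illustrative, not the exact file the script writes):
```
import sys
from pathlib import Path

MINIMAL_MODELS_YAML = """\
# placeholder config; run configure_invokeai.py to install models properly
stable-diffusion-1.5:
  description: Placeholder entry
  weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  config: configs/stable-diffusion/v1-inference.yaml
  default: true
"""

def ensure_models_yaml(path="configs/models.yaml"):
    """If models.yaml is missing, offer to write a minimal one and exit."""
    yaml_path = Path(path)
    if yaml_path.exists():
        return
    answer = input(f"{path} not found. Create a minimal one and exit? [y/N] ")
    if answer.lower().startswith("y"):
        yaml_path.parent.mkdir(parents=True, exist_ok=True)
        yaml_path.write_text(MINIMAL_MODELS_YAML)
    sys.exit(0)
```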
commit b4b6eabaac
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 16:49:25 2022 -0500
Revert "Log strength with hires"
This reverts commit 82d4904c07.
commit 4ef1f4a854
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 20:01:49 2022 +0000
remove temporary directory from repo
commit 510fc4ebaa
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 19:59:03 2022 +0000
remove -e from clipseg load in installer
commit a20914434b
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 19:37:07 2022 +0000
change clipseg repo branch to avoid clipseg not found error
commit 0d134195fd
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 18:39:29 2022 +0000
update repo URL to point to rc
commit 649d8c8573
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 18:13:28 2022 +0000
integrate tildebyte installer
commit a358d370a0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 17:48:14 2022 +0000
add @tildebyte compiled pip installer
commit 94a9033c4f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 14:52:00 2022 +0000
ignore source installer zip files
commit 18a947c503
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 14:46:36 2022 +0000
documentation and environment file fixes
- Have clarified the relationship between the @tildebyte and @cmdr2 installers;
However, @tildebyte installer merge is still a WIP due to conflicts over
such things as `invoke.sh`.
- Rechristened 1click installer as "source" installer. @tildebyte installer will be
"the" installer. (We'll see which one generates the least support requests and
maintenance work.)
- Sync'd `environment-mac.yml` with `development`. The former was failing with a
taming-transformers error as per https://discord.com/channels/@me/1037201214154231899/1040060947378749460
commit a23b031895
Author: Mike DiGiovanni <vinblau@gmail.com>
Date: Wed Nov 9 16:44:59 2022 -0500
Fixes typos in README.md
commit 23af68c7d7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 07:02:27 2022 -0500
downgrade win installs to basicsr==1.4.1
commit e258beeb51
Merge: 7460c069 e481bfac
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 06:37:45 2022 -0500
Merge branch 'release-candidate-2-1-3' of github.com:invoke-ai/InvokeAI into release-candidate-2-1-3
commit 7460c069b8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 06:36:48 2022 -0500
remove --prefer-binary from requirements-base.txt
It appears that some versions of pip do not recognize this option
when it appears in the requirements file. Did not explore this further
but recommend --prefer-binary in the manual install instructions on
the command line.
commit e481bfac61
Merge: 5040747c d1ab65a4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 11:21:56 2022 +0000
Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3
commit 5040747c67
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 10 11:21:43 2022 +0000
fix windows install instructions & bat file
commit d1ab65a431
Author: mauwii <Mauwii@outlook.de>
Date: Thu Nov 10 07:18:59 2022 +0100
update WEBUIHOTKEYS.md
commit af4ee7feb8
Author: mauwii <Mauwii@outlook.de>
Date: Thu Nov 10 06:33:49 2022 +0100
update INSTALL_DOCKER.md
commit 764fb29ade
Author: mauwii <Mauwii@outlook.de>
Date: Thu Nov 10 06:30:15 2022 +0100
fix formatting in INSTALL.md
commit 1014d3ba44
Author: mauwii <Mauwii@outlook.de>
Date: Thu Nov 10 06:29:14 2022 +0100
fix build.sh invokeai_conda_env_file default value
commit 40a48aca88
Author: mauwii <Mauwii@outlook.de>
Date: Thu Nov 10 05:25:30 2022 +0100
fix environment-mac.yml
moved taming-transformers-rom1504 to pip dependencies
commit 92abc00f16
Author: mauwii <Mauwii@outlook.de>
Date: Thu Nov 10 05:19:52 2022 +0100
fix test-invoke-conda
- copy required conda environment yaml
- use environment.yml
- I use cp instead of ln since it is compatible with Windows runners
commit a5719aabf8
Author: mauwii <Mauwii@outlook.de>
Date: Thu Nov 10 04:14:35 2022 +0100
update Dockerfile
- link environment.yml from new environments path
- change default conda_env_file
- quote all variables to avoid splitting
- also remove paths from conda-env-files in build-container.yml
commit 44a18511fa
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 9 20:51:06 2022 +0000
update paths in container build workflow
commit b850dbadaf
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 9 20:16:57 2022 +0000
finished reorganization of install docs
commit 9ef8b944d5
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 9 18:50:58 2022 +0000
tweaks to manual install documentation
--prefer-binary is an iffy option in the requirements file. It isn't
supported by some versions of pip, so I removed it from
requirements-base.txt and inserted it into the manual install
instructions where it seems to do what it is supposed to.
commit efc5a98488
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 9 18:20:03 2022 +0000
manual installation documentation tested on Linux
commit 1417c87928
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 9 17:37:06 2022 +0000
change name of requirements.txt to avoid confusion
commit 2dd6fc2b93
Merge: 22213612 71ee44a8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 9 17:26:24 2022 +0000
Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3
commit 22213612a0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 9 17:25:59 2022 +0000
directory cleanup; working on install docs
commit 71ee44a827
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 9 02:07:13 2022 +0000
prevent crash when switching to an invalid model
commit b17ca0a5e7
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 14:28:38 2022 +0100
don't suppress exceptions when doing cross-attention control
commit 71bbfe4a1a
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 12:59:34 2022 +0100
Fix#1362 by improving VRAM usage patterns when doing .swap()
commit ef3f7a26e242b73c2beb0195c7fd8f654ef47f55
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 12:18:37 2022 +0100
remove log spam
commit 7189d649622d4668b120b0dd278388ad672142c4
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 12:10:28 2022 +0100
change the way saved slicing strategy is applied
commit 01c40f751ab72955140165c16f95ae411732265b
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 12:04:43 2022 +0100
fix slicing_strategy_getter callsite
commit f8cfe25150a346958903316bc710737d99839923
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 11:56:22 2022 +0100
cleanup, consistent dim=0 also tested
commit 5bf9b1e890d48e962afd4a668a219b68271e5dc1
Author: damian0815 <null@damianstewart.com>
Date: Tue Nov 8 11:34:09 2022 +0100
refactored context, tested with non-sliced cross attention control
commit d58a46e39bf562e7459290d2444256e8c08ad0b6
Author: damian0815 <null@damianstewart.com>
Date: Sun Nov 6 00:41:52 2022 +0100
cleanup
commit 7e2c658b4c06fe239311b65b9bb16fa3adec7fd7
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:57:31 2022 +0100
disable logs
commit 20ee89d93841b070738b3d8a4385c93b097d92eb
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:36:58 2022 +0100
slice saved attention if necessary
commit 0a7684a22c880ec0f48cc22bfed4526358f71546
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:32:38 2022 +0100
raise instead of asserting
commit 7083104c7f3a0d8fd96e94a2f391de50a3c942e4
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:31:00 2022 +0100
store dim when saving slices
commit f7c0808ed383ec1dc70645288a798ed2aa4fa85c
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:27:16 2022 +0100
don't retry on exception
commit 749a721e939b3fe7c1741e7998dab6bd2c85a0cb
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:24:50 2022 +0100
stuff
commit 032ab90e9533be8726301ec91b97137e2aadef9a
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:20:17 2022 +0100
more logging
commit 3dc34b387f033482305360e605809d95a40bf6f8
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:16:47 2022 +0100
logs
commit 901c4c1aa4b9bcef695a6551867ec8149e6e6a93
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:12:39 2022 +0100
actually set save_slicing_strategy to True
commit f780e0a0a7c6b6a3db320891064da82589358c8a
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 22:10:35 2022 +0100
store slicing strategy
commit 93bb6d566fd18c5c69ef7dacc8f74ba2cf671cb7
Author: damian <git@damianstewart.com>
Date: Sat Nov 5 20:43:48 2022 +0100
still not it
commit 5e3a9541f8ae00bde524046963910323e20c40b7
Author: damian <git@damianstewart.com>
Date: Sat Nov 5 17:20:02 2022 +0100
wip offloading attention slices on-demand
commit 4c2966aa856b6f3b446216da3619ae931552ef08
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 15:47:40 2022 +0100
pre-emptive offloading, idk if it works
commit 572576755e9f0a878d38e8173e485126c0efbefb
Author: root <you@example.com>
Date: Sat Nov 5 11:25:32 2022 +0000
push attention slices to cpu. slow but saves memory.
commit b57c83a68f2ac03976ebc89ce2ff03812d6d185f
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 12:04:22 2022 +0100
verbose logging
commit 3a5dae116f110a96585d9eb71d713b5ed2bc3d2b
Author: damian0815 <null@damianstewart.com>
Date: Sat Nov 5 11:50:48 2022 +0100
wip fixing mem strategy crash (4 test on runpod)
commit 3cf237db5fae0c7b0b4cc3c47c81830bdb2ae7de
Author: damian0815 <null@damianstewart.com>
Date: Fri Nov 4 09:02:40 2022 +0100
wip, only works on cuda
commit 5702271991
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 9 14:09:36 2022 +0000
speculative reorganization of the requirements & environment files
- This is only a test!
- The various environment*.yml and requirements*.txt files have all
been moved into a directory named "environments-and-requirements".
- The idea is to clean up our root directory so that the github home
page is tidy.
- The manual install instructions will start with the instructions to
create a symbolic link from environment.yml to the appropriate file
for OS and GPU.
- The 1-click installers have been updated to accommodate this change.
commit 10781e7dc4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 9 01:59:45 2022 +0000
refactoring requirements
commit 099d1157c5
Author: mauwii <Mauwii@outlook.de>
Date: Wed Nov 9 00:16:18 2022 +0100
better way to make sure if conda is useable
commit ab825bf7ee
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 22:05:33 2022 +0000
add back --prefer-binaries to requirements
commit 10cfeb5ada
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 8 22:27:19 2022 +0100
add quotes to set and use `$environment_file`
commit e97515d045
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 8 22:24:21 2022 +0100
set environment file for conda update
commit 0f04bc5789
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 8 22:21:25 2022 +0100
use conda env update
commit 3f74aabecd
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 8 22:20:44 2022 +0100
use command instead of hash
commit b1a99a51b7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 14:44:44 2022 -0500
remove --global git config from 1-click installers
commit 8004f8a6d9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Mon Nov 7 09:07:20 2022 -0500
Revert "Use array slicing to calc ddim timesteps"
This reverts commit 1f0c5b4cf1.
commit ff8ff2212a
Merge: 8e5363cd 636620b1
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 14:01:40 2022 +0000
add initfile support from PR #1386
commit 8e5363cd83
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 13:26:18 2022 +0000
move 'installer/' to '1-click-installer' to make room for tildebyte installer
commit 1450779146
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 12:56:36 2022 +0000
update branch for installer to pull against
commit 8cd5d95b8a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 05:30:20 2022 +0000
move all models into subdirectories of ./models
- this required an update to the invoke-ai fork of gfpgan
- simultaneously reverted consolidation of environment and
requirements files, as their presence in a directory
triggered setup.py to try to install a sub-package.
commit abd6407394
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 03:52:46 2022 +0000
leave a copy of environment-cuda.yml at top level
- named it environment.yml
- need to avoid a big change for users and breaking older support
instructions.
commit 734dacfbe9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 03:50:07 2022 +0000
consolidate environment files
- starting to remove unneeded entries and pins
- no longer require -e in front of github dependencies
- update setup.py with release number
- update manual installation instructions
commit 636620b1d5
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 8 03:26:16 2022 +0000
change initfile to ~/.invokeai
- adjust documentation
- also fix 'clipseg_models' to 'clipseg', which seems to be working now
commit 1fe41146f0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Sat Nov 5 14:28:01 2022 -0400
add support for an initialization file, invokeai.init
- Place preferred startup command switches in a file named
"invokeai.init". The file can consist of a single line of switches
such as "--web --steps=28", a series of switches on each
line, or any combination of the two.
Example:
```
--web
--host=0.0.0.0
--steps=28
--grid
-f 0.6 -C 11.0 -A k_euler_a
```
- The following options, which were previously only available within
the CLI, are now available on the command line as well:
--steps
--strength
--cfg_scale
--width
--height
--fit
commit 2ad6ef355a
Merge: 865502ee 8b47c829
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Sun Nov 6 18:08:36 2022 +0000
update discord link
commit 865502ee4f
Author: mauwii <Mauwii@outlook.de>
Date: Sun Nov 6 18:00:16 2022 +0100
update changelog
commit c7984f3299
Author: mauwii <Mauwii@outlook.de>
Date: Sun Nov 6 17:07:27 2022 +0100
update TROUBLESHOOT.md
commit 7f150ed833
Author: mauwii <Mauwii@outlook.de>
Date: Sun Nov 6 16:56:58 2022 +0100
remove `:` from headlines in CONTRIBUTORS.md
commit badf4e256c
Author: mauwii <Mauwii@outlook.de>
Date: Sun Nov 6 16:56:37 2022 +0100
enable navigation tabs
Since the docs are growing, this way they look cleaner
commit e64c60bbb3
Author: mauwii <Mauwii@outlook.de>
Date: Sun Nov 6 16:18:59 2022 +0100
remove preflight checks from assets
seems like somebody executed tests and committed them
commit 1780618543
Author: mauwii <Mauwii@outlook.de>
Date: Sun Nov 6 16:15:06 2022 +0100
update INSTALLING_MODELS.md
commit f91fd27624
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date: Sat Nov 5 14:47:53 2022 -0700
Bug fix for inpaint size
commit 09e41e8f76
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date: Sat Nov 5 14:34:52 2022 -0700
Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination
commit 6eeb2107b3
Author: mauwii <Mauwii@outlook.de>
Date: Sat Nov 5 21:01:14 2022 +0100
remove create-caches.yml since not used anywhere
commit 17053ad8b7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Sat Nov 5 16:01:55 2022 -0400
fix duplicated argument introduced by conflict resolution
commit fefb4dc1f8
Merge: 762ca60a d05b1b35
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Sat Nov 5 12:47:35 2022 -0700
Merge branch 'development' into fix_generate.py
commit d05b1b3544
Author: Craig <cwallen@users.noreply.github.com>
Date: Sat Oct 29 20:40:30 2022 -0400
Resize hires as an image
commit 82d4904c07
Author: Craig <cwallen@users.noreply.github.com>
Date: Sat Oct 29 20:37:40 2022 -0400
Log strength with hires
commit 1cdcf33cfa
Merge: 6616fa83 cbc029c6
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Sat Nov 5 09:57:38 2022 -0400
Merge branch 'main' into development
- this synchronizes recent document fixes by mauwii
commit 6616fa835a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Fri Nov 4 00:47:03 2022 -0400
fix Windows library dependency issues
This commit addresses two bugs:
1) invokeai.py crashes immediately with a message about an undefined
attribute sigKILL (closes #1288). The fix is to pin torch at 1.12.1.
2) Version 1.4.2 of basicsr fails to load properly on Windows, and is
a requirement of realesrgan, however 1.4.1 works. Pinning basicsr
in our requirements file resulted in a dependency conflict, so I
ended up cloning realesrgan into the invoke-ai Git space and changing
the requirements file there.
If there is a more elegant solution, please advise.
commit 7b9a4564b1
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date: Sat Nov 5 14:36:45 2022 +0100
Update-docs (#1382)
* update IMG2IMG.md
* update INPAINTING.md
* update WEBUIHOTKEYS.md
* more doc updates (mostly fix formatting):
- OUTPAINTING.md
- POSTPROCESS.md
- PROMPTS.md
- VARIATIONS.md
- WEB.md
- WEBUIHOTKEYS.md
commit fcdefa0620
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date: Fri Nov 4 20:47:31 2022 +0100
Hotifx docs (#1376) (#1377)
commit ef8b3ce639
Merge: b7042095 36870a8f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Fri Nov 4 12:08:44 2022 -0400
Merge-main-into-development (#1373)
To get rid of the difference between main and development.
Since otherwise it will be a pain to start fixing the documentation
(when the state between main and development is not the same ...)
Also this should fix the problem of all tests failing since environment
yamls get updated.
commit 36870a8f53
Merge: 6b89adfa b7042095
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date: Fri Nov 4 16:25:00 2022 +0100
Merge branch 'development' into merge-main-into-development
commit b70420951d
Author: damian0815 <null@damianstewart.com>
Date: Thu Nov 3 12:39:45 2022 +0100
fix parsing error doing eg `forest ().swap(in winter)`
commit 1f0c5b4cf1
Author: wfng92 <43742196+wfng92@users.noreply.github.com>
Date: Thu Nov 3 17:13:52 2022 +0800
Use array slicing to calc ddim timesteps
commit 8648da8111
Author: mauwii <Mauwii@outlook.de>
Date: Fri Nov 4 00:06:19 2022 +0100
update environment-linux-aarch64 to use python 3.9
commit 45b4593563
Author: mauwii <Mauwii@outlook.de>
Date: Thu Nov 3 22:31:46 2022 +0100
update environment-linux-aarch64.yml
- move getpass_asterisk to pip
commit 41b04316cf
Author: mauwii <Mauwii@outlook.de>
Date: Thu Nov 3 20:40:08 2022 +0100
rename job, remove debug branch from triggers
commit e97c6db2a3
Author: mauwii <Mauwii@outlook.de>
Date: Thu Nov 3 20:34:01 2022 +0100
include build matrix to build x86_64 and aarch64
commit 896820a349
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 1 05:01:15 2022 +0100
disable caching
commit 06c8f468bf
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 1 04:26:39 2022 +0100
disable PR-Validation
since there are no files passed from context this is unnecessary
commit 61920e2701
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 1 04:09:39 2022 +0100
update action to use current branch
also update build-args of dockerfile and build.sh
commit f34ba7ca70
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 1 02:30:24 2022 +0100
remove unnecessary mkdir command again
commit c30ef0895d
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 1 01:51:12 2022 +0100
remove symlink to GFPGANv1.4
also re-add mkdir to prevent action from failing
commit aa3a774f73
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 1 01:48:59 2022 +0100
update build-container.yml to use cachev3
commit 2c30555b84
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 1 01:34:20 2022 +0100
update Dockerfile
- create models.yaml from models.yaml.example
- run preload_models.py with --no-interactive
commit 743f605773
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 1 01:21:15 2022 +0100
update build.sh to download sd-v1.5 model
commit 519c661abb
Author: mauwii <Mauwii@outlook.de>
Date: Tue Oct 25 01:26:50 2022 +0200
replace old-fashioned markdown templates with forms
this will help the readability of issues a lot 🤓
commit 22c956c75f
Merge: 13696adc 0196571a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 3 10:20:21 2022 -0400
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
commit 13696adc3a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Thu Nov 3 10:20:10 2022 -0400
speculative change to solve windows esrgan issues
commit 0196571a12
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 2 22:39:35 2022 -0400
remove merge markers from preload_models.py
commit 9666f466ab
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 2 18:29:34 2022 -0400
use refined model by default
commit 240e5486c8
Merge: 8164b6b9 aa247e68
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 2 18:35:00 2022 -0400
Merge branch 'spezialspezial-patch-9' into development
commit 8164b6b9cf
Merge: 4fc82d55 dd5a88dc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Wed Nov 2 17:06:46 2022 -0400
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
commit 4fc82d554f
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Thu Nov 3 04:17:28 2022 +1300
[WebUI] Final 2.1 Release Build
commit 96b34c0f85
Author: damian0815 <null@damianstewart.com>
Date: Wed Nov 2 09:08:11 2022 +0100
Final WebUI build for Release 2.1
- squashed commit of 52 commits from PR #1327
don't log base64 progress images
Fresh Build For WebUI
[WebUI] Loopback Default False
Fixes bugs/styling
- Fixes missing web app state on new version:
Adds stateReconciler to redux-persist.
When we add more values to the state and then release the updated app, they will be automatically merged in.
Resetting the web UI will be needed far less.
7159ec
- Fixes console z-index
- Moves reset web UI button to visible area
Decreases gallery width on inpainting
Increases workarea split padding to 1rem
Adds missing tooltips to site header
Changes inpainting controls settings to hover
Fixes hotkeys and settings buttons not working
Improves bounding box interactions
- Bounding box can now be moved by dragging any of its edges
- Bounding box does not affect drawing if already drawing a stroke
- Can lock bounding box to draw directly on the bounding box edges
- Removes spacebar-hold behaviour due to technical issues
Fixes silent crash when init image too large
To send the mask to the server, the UI rendered the mask onto the init image and sent the whole image. The mask was then cropped by the server.
If the image was too large, the app silently failed. Maybe it exceeds the websocket size limit.
Fixed by cropping the mask in the UI layer, sending only bounding-box-sized mask image data.
Disabled bounding box settings when locked
Styles image uploader
Builds fresh bundle
Improves bounding box interaction
Added spacebar-hold-to-transform back.
Address bounding box feedback
- Adds back toggle to hide bounding box
- Box quick toggle = q, normal toggle = shift + q
- Styles canvas alert icons
Adds hints when unable to invoke
- Popover on Invoke button indicates why exactly it is disabled, e.g. prompt is empty, something else is processing, etc.
- There may be more than one reason; all are displayed.
Fix Inpainting Alerts Styling
Preventing unnecessary re-renders across the app
Code Split Inpaint Options
Isolate features to their own components so they don't re-render the other stuff each time.
[TESTING] Remove global isReady checking
I don't believe this is needed at all because the isReady state is constantly updated when needed and tracked in real time in the Redux store. This causes massive re-renders. @psychedelicious If this is absolutely essential for a reason that I do not see, please hit me up on Discord.
Fresh Bundle
Fix Bounding Box Settings re-rendering on brush stroke
[Code Splitting] Bounding Box Options
Isolated all bounding box components that were triggering unnecessary re-renders. Still need to fix the bounding box triggering re-renders on the control panel inside the canvas itself. But the options panel should be good to go with this change.
Inpainting Controls Code Splitting and Performance
Codesplit the entirety of the inpainting controls. Created new selectors for each and every component to ensure there are no unnecessary re-renders. App feels a lot smoother.
Fixes rerenders on ClearBrushHistory
Fixes crash when requesting post-generation upscale/face restoration
- Moves the inpainting paste to before the postprocessing.
Removes unused isReady state
Changes Report Bug icon to a bug
Restores shift+q bounding box shortcut
Adds alert for bounding box size to status icons
Adds asCheckbox to IAIIconButton
Rough draft of this. Not happy with the styling but it's clearer than having them look just like buttons.
Fixes crash related to old value of progress_latents in state
Styling changes and settings modal minor refactor
Fixes: uploaded JPG images not loading
Reworks CurrentImageButtons.tsx
- Change all icons to FA iconset for consistency
- Refactors IAIIconButton, IAIButton, IAIPopover to handle ref forwarding
- Redesigns buttons into group
Only generate 1 iteration when seed fixed & variations disabled
Fixes progress images select
Fixes edge case: upload over gets stuck while alt tabbing
- Press esc to close it now
Fixes display progress images select typing
Fixes current image button rerenders
Adds min width to ImageUploader
Makes fast-latents in progress default
Update Icon Button Checkbox Style Styling
Fixes next/prev image buttons
Refactor canvas buttons + more
Add Save Intermediates Step Count
For accurate mode only.
Co-Authored-By: Richard Macarthy <richardmacarthy@protonmail.com>
Restores "initial image" text
Address feedback
- moves mask clear button
- fixes intermediates
- shrinks inpainting icons by 10%
Fix Loopback Styling
Adds escape hotkey to close floating panels
Readd Hotkey for Dual Display
Updated Current Image Button Styling
commit dd5a88dcee
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Thu Nov 3 04:17:28 2022 +1300
[WebUI] Final 2.1 Release Build
commit 95ed56bf82
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Thu Nov 3 04:16:31 2022 +1300
Updated Current Image Button Styling
commit 1ae80f5ab9
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Thu Nov 3 04:07:57 2022 +1300
Readd Hotkey for Dual Display
commit 1f0bd3ca6c
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Thu Nov 3 02:07:00 2022 +1100
Adds escape hotkey to close floating panels
commit a1971f6830
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Thu Nov 3 03:38:15 2022 +1300
Fix Loopback Styling
commit c6118e8898
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Thu Nov 3 01:29:51 2022 +1100
Address feedback
- moves mask clear button
- fixes intermediates
- shrinks inpainting icons by 10%
commit 7ba958cf7f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Thu Nov 3 01:10:38 2022 +1100
Restores "initial image" text
commit 383905d5d2
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Thu Nov 3 02:59:11 2022 +1300
Add Save Intermediates Step Count
For accurate mode only.
Co-Authored-By: Richard Macarthy <richardmacarthy@protonmail.com>
commit 6173e3e9ca
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Thu Nov 3 00:53:53 2022 +1100
Refactor canvas buttons + more
commit 3feb7d8922
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Thu Nov 3 00:49:23 2022 +1100
Fixes next/prev image buttons
commit 1d9edbd0dd
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Thu Nov 3 00:50:44 2022 +1300
Update Icon Button Checkbox Style Styling
commit d439abdb89
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 21:37:24 2022 +1100
Makes fast-latents in progress default
commit ee47ea0c89
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 21:37:09 2022 +1100
Adds min width to ImageUploader
commit 300bb2e627
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 21:28:22 2022 +1100
Fixes current image button rerenders
commit ccf8593501
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 21:27:43 2022 +1100
Fixes display progress images select typing
commit 0fda612f3f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 21:02:01 2022 +1100
Fixes edge case: upload over gets stuck while alt tabbing
- Press esc to close it now
commit 5afff65b71
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 20:33:19 2022 +1100
Fixes progress images select
commit 7e55bdefce
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 20:27:47 2022 +1100
Only generate 1 iteration when seed fixed & variations disabled
commit 620cf84d3d
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 19:51:38 2022 +1100
Reworks CurrentImageButtons.tsx
- Change all icons to FA iconset for consistency
- Refactors IAIIconButton, IAIButton, IAIPopover to handle ref forwarding
- Redesigns buttons into group
commit cfe567c62a
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 16:14:50 2022 +1100
Fixes: uploaded JPG images not loading
commit cefe12f1df
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 15:31:18 2022 +1100
Styling changes and settings modal minor refactor
commit 1e51c39928
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 15:27:46 2022 +1100
Fixes crash related to old value of progress_latents in state
commit 42a02bbb80
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 13:15:06 2022 +1100
Adds asCheckbox to IAIIconButton
Rough draft of this. Not happy with the styling but it's clearer than having them look just like buttons.
commit f1ae6dae4c
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 13:13:56 2022 +1100
Adds alert for bounding box size to status icons
commit 6195579910
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 11:52:19 2022 +1100
Restores shift+q bounding box shortcut
commit 16c8b23b34
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 11:32:07 2022 +1100
Changes Report Bug icon to a bug
commit 07ae626b22
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 11:17:16 2022 +1100
Removes unused isReady state
commit 8d171bb044
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 11:13:26 2022 +1100
Fixes crash when requesting post-generation upscale/face restoration
- Moves the inpainting paste to before the postprocessing.
commit 6e33ca7e9e
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Wed Nov 2 10:59:01 2022 +1100
Fixes rerenders on ClearBrushHistory
commit db46e12f2b
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Wed Nov 2 11:36:28 2022 +1300
Inpainting Controls Code Splitting and Performance
Codesplit the entirety of the inpainting controls. Created new selectors for each and every component to ensure there are no unnecessary re-renders. App feels a lot smoother.
commit 868e4b2db8
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Wed Nov 2 07:40:31 2022 +1300
[Code Splitting] Bounding Box Options
Isolated all bounding box components that were triggering unnecessary re-renders. Still need to fix the bounding box triggering re-renders on the control panel inside the canvas itself. But the options panel should be good to go with this change.
commit 2e562742c1
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Wed Nov 2 06:40:27 2022 +1300
Fix Bounding Box Settings re-rendering on brush stroke
commit 68e6958009
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Wed Nov 2 06:28:34 2022 +1300
Fresh Bundle
commit ea6e3a7949
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Wed Nov 2 06:26:56 2022 +1300
[TESTING] Remove global isReady checking
I don't believe this is needed at all because the isReady state is constantly updated when needed and tracked in real time in the Redux store. This causes massive re-renders. @psychedelicious If this is absolutely essential for a reason that I do not see, please hit me up on Discord.
commit b2879ca99f
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Wed Nov 2 06:08:59 2022 +1300
Code Split Inpaint Options
Isolate features to their own components so they don't re-render the other stuff each time.
commit 4e911566c3
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Wed Nov 2 03:50:56 2022 +1300
Preventing unnecessary re-renders across the app
commit 9bafda6a15
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Wed Nov 2 03:02:35 2022 +1300
Fix Inpainting Alerts Styling
commit 871a8a5375
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 23:52:07 2022 +1100
Adds hints when unable to invoke
- Popover on Invoke button indicates why exactly it is disabled, e.g. prompt is empty, something else is processing, etc.
- There may be more than one reason; all are displayed.
commit 0eef74bc00
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 23:40:11 2022 +1100
Address bounding box feedback
- Adds back toggle to hide bounding box
- Box quick toggle = q, normal toggle = shift + q
- Styles canvas alert icons
commit 423ae32097
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 17:06:07 2022 +1100
Improves bounding box interaction
Added spacebar-hold-to-transform back.
commit 8282e5d045
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 16:57:07 2022 +1100
Builds fresh bundle
commit 19305cdbdf
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 16:51:11 2022 +1100
Styles image uploader
commit eb9028ab30
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 16:51:03 2022 +1100
Disabled bounding box settings when locked
commit 21483f5d07
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 16:50:24 2022 +1100
Fixes silent crash when init image too large
To send the mask to the server, the UI rendered the mask onto the init image and sent the whole image. The mask was then cropped by the server.
If the image was too large, the app silently failed. Maybe it exceeds the websocket size limit.
Fixed by cropping the mask in the UI layer, sending only bounding-box-sized mask image data.
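The actual fix lives in the TypeScript UI layer; as a language-agnostic illustration of the idea, cropping the mask to the bounding box before sending looks roughly like this in Python/PIL:
```
from PIL import Image

def crop_mask_to_bbox(mask, bbox):
    """Crop a full-canvas mask down to the bounding-box region.

    bbox is (left, top, width, height); only this region is sent to the
    server, keeping the payload well under any websocket size limit.
    """
    left, top, width, height = bbox
    return mask.crop((left, top, left + width, top + height))
```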
commit 82dcbac28f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 16:28:30 2022 +1100
Improves bounding box interactions
- Bounding box can now be moved by dragging any of its edges
- Bounding box does not affect drawing if already drawing a stroke
- Can lock bounding box to draw directly on the bounding box edges
- Removes spacebar-hold behaviour due to technical issues
commit d43bd4625d
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 15:10:49 2022 +1100
Fixes hotkeys and settings buttons not working
commit ea891324a2
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 11:04:02 2022 +1100
Changes inpainting controls settings to hover
commit 8fd9ea2193
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 11:03:41 2022 +1100
Adds missing tooltips to site header
commit fb02666856
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 11:03:25 2022 +1100
Increases workarea split padding to 1rem
commit f6f5c2731b
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 11:03:10 2022 +1100
Decreases gallery width on inpainting
commit b4e3f771e0
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 10:54:59 2022 +1100
Fixes bugs/styling
- Fixes missing web app state on new version:
Adds stateReconciler to redux-persist.
When we add more values to the state and then release the updated app, they will be automatically merged in.
Resetting the web UI will be needed far less.
7159ec
- Fixes console z-index
- Moves reset web UI button to visible area
commit 99bb9491ac
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Tue Nov 1 08:35:45 2022 +1300
[WebUI] Loopback Default False
commit 0453f21127
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Wed Nov 2 23:23:51 2022 +1300
Fresh Build For WebUI
commit 9fc09aa4bd
Author: damian0815 <null@damianstewart.com>
Date: Wed Nov 2 09:08:11 2022 +0100
don't log base64 progress images
commit 5e87062cf8
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date: Wed Nov 2 00:21:27 2022 +0100
Option to directly invert the grayscale heatmap - fix
commit 3e7a459990
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date: Tue Nov 1 21:37:33 2022 +0100
Update txt2mask.py
commit bbf4c03e50
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date: Tue Nov 1 21:11:19 2022 +0100
Option to directly invert the grayscale heatmap
Theoretically less work inverting the image while it's small but I can't measure a significant difference. Though, handy option to have in some cases.
commit 611a3a9753
Author: mauwii <Mauwii@outlook.de>
Date: Wed Nov 2 02:23:09 2022 +0100
fix name of caching step
commit 1611f0d181
Author: mauwii <Mauwii@outlook.de>
Date: Wed Nov 2 02:18:46 2022 +0100
readd caching of sd-models
- this would remove the necessity of having the secret available in PRs
commit 08835115e4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 1 22:10:12 2022 -0400
pin pytorch_lightning to 1.7.7, issue #1331
commit 2d84e28d32
Merge: 533fd04e ef17aae8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 1 22:11:04 2022 -0400
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
commit ef17aae8ab
Author: damian0815 <null@damianstewart.com>
Date: Wed Nov 2 00:39:48 2022 +0100
add damian0815 to contributors list
commit 0cc39f01a3
Author: damian0815 <null@damianstewart.com>
Date: Wed Nov 2 01:18:50 2022 +0100
report full size for fast latents and update conversion matrix for v1.5
commit 688d7258f1
Author: damian0815 <null@damianstewart.com>
Date: Wed Nov 2 00:33:00 2022 +0100
fix a bug that broke cross attention control index mapping
commit 4513320bf1
Author: damian0815 <null@damianstewart.com>
Date: Wed Nov 2 00:31:58 2022 +0100
save VRAM by not recombining tensors that have been sliced to save VRAM
commit 533fd04ef0
Merge: 6215592b dff5681c
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 1 17:40:36 2022 -0400
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
commit dff5681cf0
Author: damian0815 <d@d.com>
Date: Tue Nov 1 13:56:03 2022 +0100
shorter strings
commit 5a2790a69b
Author: damian0815 <d@d.com>
Date: Tue Nov 1 13:19:20 2022 +0100
convert progress display to a drop-down
commit 7c5305ccba
Author: damian0815 <d@d.com>
Date: Tue Nov 1 12:54:46 2022 +0100
do not try to save base64 intermediates in gallery on cancellation
commit 4013e8ad6f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Tue Nov 1 21:54:35 2022 +1100
Fixes b64 image sending and displaying
commit d1dfd257f9
Author: damian <d@d.com>
Date: Tue Nov 1 11:40:40 2022 +0100
wip base64
commit 5322d735ee
Author: damian <d@d.com>
Date: Tue Nov 1 11:31:42 2022 +0100
update frontend
commit cdb107dcda
Author: damian <d@d.com>
Date: Tue Nov 1 11:17:43 2022 +0100
add option to show intermediate latent space
commit be1393a41c
Author: damian <d@d.com>
Date: Tue Nov 1 10:16:55 2022 +0100
ensure existing exception handling code also handles new exception class
commit e554c2607f
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Tue Nov 1 10:08:42 2022 +0100
Rebuilt prompt parsing logic
Complete re-write of the prompt parsing logic to be more readable and
logical, and therefore also hopefully easier to debug, maintain, and
augment.
In the process it has also become more robust to badly-formed prompts.
Squashed commit of the following:
commit 8fcfa88a16e1390d41717e940d72aed64712171c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 17:05:57 2022 +0100
further cleanup
commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 16:07:57 2022 +0100
cleanup and document
commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 15:54:58 2022 +0100
works fully
commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 15:24:31 2022 +0100
further...
commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Sun Oct 30 14:08:57 2022 +0100
getting there...
commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 14:29:03 2022 +0200
wip doesn't compile
commit 5e533f731cfd20cd435330eeb0012e5689e87e81
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 13:21:43 2022 +0200
working with CrossAttentionControl but no Attention support yet
commit 9678348773431e500e110e8aede99086bb7b5955
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date: Fri Oct 28 13:04:52 2022 +0200
wip rebuilding prompt parser
commit 6215592b12
Merge: ef24d76a 349cc254
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 1 17:34:55 2022 -0400
Merge branch 'development' of github.com:invoke-ai/InvokeAI into development
commit 349cc25433
Author: damian0815 <d@d.com>
Date: Tue Nov 1 20:08:52 2022 +0100
fix crash (be a little less aggressive clearing out the attention slice)
commit 214d276379
Author: damian0815 <d@d.com>
Date: Tue Nov 1 19:57:55 2022 +0100
be more aggressive at clearing out saved_attn_slice
commit ef24d76adc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 1 14:34:23 2022 -0400
fix library problems in preload_modules
commit ab2b5a691d
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Nov 1 17:22:48 2022 -0400
fix model_cache memory management issues
commit c7de2b2801
Author: mauwii <Mauwii@outlook.de>
Date: Tue Nov 1 02:02:14 2022 +0100
disable checks with sd-V1.4 model...
...to save some resources, since V1.5 is the default now
commit e8075658ac
Author: mauwii <Mauwii@outlook.de>
Date: Mon Oct 31 22:20:51 2022 +0100
update test-invoke-conda.yml
- fix model dl path for sd-v1-4.ckpt
- copy configs/models.yaml.example to configs/models.yaml
commit 4202dabee1
Author: mauwii <Mauwii@outlook.de>
Date: Mon Oct 31 22:17:21 2022 +0100
fix models example weights for sd-v1.4
commit d67db2bcf1
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date: Tue Nov 1 08:35:45 2022 +1300
[WebUI] Loopback Default False
commit 7159ec885f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Mon Oct 31 11:33:05 2022 -0400
further improvements to preload_models.py
- Faster startup for command line switch processing
- Specify configuration file to modify using --config option:
./scripts/preload_models.py --config models/my-models-file.yaml
commit b5cf734ba9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Mon Oct 31 11:08:19 2022 -0400
improve behavior of preload_models.py
- NEVER overwrite user's existing models.yaml
- Instead, merge its contents into new config file,
and rename original to models.yaml.orig (with
message)
- models.yaml has been removed from repository and renamed
models.yaml.example
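A sketch of the merge-and-preserve step (the merge semantics here are simplified):
```
import os
import yaml

def merge_models_yaml(existing_path, incoming_path):
    """Fold the user's models.yaml into the new config, never clobbering it."""
    with open(existing_path) as f:
        user_config = yaml.safe_load(f) or {}
    with open(incoming_path) as f:
        new_config = yaml.safe_load(f) or {}
    new_config.update(user_config)                       # user entries take precedence
    os.replace(existing_path, existing_path + ".orig")   # keep the original around
    with open(existing_path, "w") as f:
        yaml.safe_dump(new_config, f)
    print(f">> Your previous {existing_path} was saved as {existing_path}.orig")
```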
commit f7dc8eafee
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Mon Oct 31 10:47:35 2022 -0400
restore models.yaml to virgin state
commit 762ca60a30
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date: Tue Oct 4 22:55:10 2022 -0400
Update INPAINTING.md
commit e7fb9f342c
Author: Hideyuki Katsushiro <h.katsushiro@qualia.tokyo.jp>
Date: Wed Oct 5 10:08:53 2022 +0900
add argument --outdir
- Due to misuse of rebase command, main was transiently
in an inconsistent state.
- This repairs the damage, and adds a few post-release
patches that ensure stable conda installs on Mac and Windows.
attention is parsed but ignored, the old blend syntax doesn't work,
conjunctions are parsed but ignored; the only parts that are used
here are the new .blend() syntax and cross-attention control
using .swap()
This commit "reverts" the new API changes by extracting the old
functionality into new files.
The work is based on the commit `803a51d5adca7e6e28491fc414fd3937bee7cb79`
PngWriter regained PromptFormatter as old server used that.
`server_legacy.py` is the old server that `dream.py` used.
Finally `legacy_api.py` is what `dream.py` used to be at the mentioned
commit.
One manually run test has been added in order to be able to test
compatibility with the old API; currently it just checks that the API
endpoint works the same way and that the image hash is the same as it
used to be before.
* Added linux to the workflows
- rename workflow files
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
* fixes: run on merge to 'main', 'dev';
- reduce dev merge test cases to 1 (1 takes 11 minutes 😯)
- fix model cache name
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
* add test prompts to workflows
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
Co-authored-by: James Reynolds <magnsuviri@me.com>
Co-authored-by: Ben Alkov <ben.alkov@gmail.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
* macOS CI workflow, dream.py exits with an error, but the workflow completes.
* Files for testing
Co-authored-by: James Reynolds <magnsuviri@me.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>