Lincoln Stein
e57809e1c6
Merge branch 'main' into lstein/feat/multi-gpu
2024-05-03 00:05:04 -04:00
psychedelicious
33a9f9a4dc
fix(nodes): fix constraints in cnet processors
...
Some of the processors had invalid constraints: a minimum of 0 for resolution, or a multiple-of-64 requirement on resolution.
Made the minimum 1px and removed the multiple-of constraints.
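The fix described above can be sketched with pydantic field constraints; the model and field names below are illustrative, not the actual Invoke processor fields:

```python
from pydantic import BaseModel, Field, ValidationError

class ProcessorSettings(BaseModel):
    # Before the fix: constraints like ge=0 and multiple_of=64 were invalid
    # (0px is never a usable resolution, and non-multiple-of-64 sizes are fine).
    # After: any resolution of at least 1px is accepted.
    detect_resolution: int = Field(default=512, ge=1)
    image_resolution: int = Field(default=512, ge=1)

# 1px is now valid; 0px is rejected by validation.
settings = ProcessorSettings(detect_resolution=1)
try:
    ProcessorSettings(detect_resolution=0)
except ValidationError:
    pass  # 0px correctly rejected
```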
2024-05-02 12:24:04 +10:00
blessedcoolant
dce8b88aaf
fix: change eta only for TCD Scheduler
2024-05-01 12:47:46 +05:30
blessedcoolant
1bdcbe3284
cleanup: use dict update to actually update the scheduler keyword args
2024-05-01 12:22:39 +05:30
Lincoln Stein
1c0067f931
Merge branch 'main' into lstein/feat/multi-gpu
2024-04-30 18:14:03 -04:00
blessedcoolant
2ddb82200c
fix: Manually update eta(gamma) to 1.0 for TCDScheduler
...
Seems to work best with Invoke at 4 steps.
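A minimal sketch of the idea, with stand-in scheduler classes (the real code builds step kwargs for diffusers' `TCDScheduler`; the names and structure here are illustrative):

```python
class DPMSolverScheduler:
    """Stand-in for any non-TCD scheduler."""

class TCDScheduler:
    """Stand-in for diffusers' TCDScheduler."""

def build_step_kwargs(scheduler: object) -> dict:
    step_kwargs: dict = {}
    # TCD's eta (gamma) controls stochasticity; per the commit message,
    # 1.0 seems to work best with Invoke at 4 steps. Other schedulers
    # keep their defaults, so update the kwargs only for TCD.
    if isinstance(scheduler, TCDScheduler):
        step_kwargs.update({"eta": 1.0})
    return step_kwargs
```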
2024-05-01 01:20:53 +05:30
psychedelicious
e822897b1c
feat(nodes): add prototype heuristic image resize node
...
Uses the fancy cnet resize that retains edges.
2024-04-30 08:10:59 -04:00
psychedelicious
d861bc690e
feat(mm): handle PC_PATH_MAX on external drives on macOS
...
`PC_PATH_MAX` doesn't exist for (some?) external drives on macOS, so we need error handling when retrieving this value.
Also added error handling for `PC_NAME_MAX`, just in case. With this change, external drives on macOS work for me.
Closes #6277
2024-04-30 07:57:03 -04:00
dunkeroni
f262b9032d
fix: changed validation to not error on connection
2024-04-28 12:48:56 -04:00
dunkeroni
71c3197eab
fix: denoise latents accepts CFG lists as input
2024-04-28 12:48:56 -04:00
psychedelicious
398f37c0ed
tidy(backend): clean up controlnet_utils
...
- Use our adaptation of the HWC3 function, which has better types
- Extract some of the util functions, name them better, add comments
- Improve type annotations
- Remove unreachable codepaths
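For reference, the widely used ControlNet `HWC3` helper that this commit adapts normalizes any uint8 image to 3-channel HWC; a sketch of the standard version:

```python
import numpy as np

def hwc3(image: np.ndarray) -> np.ndarray:
    """Normalize a uint8 image to 3-channel HWC (RGBA blends over white)."""
    assert image.dtype == np.uint8
    if image.ndim == 2:
        image = image[:, :, None]
    assert image.ndim == 3
    h, w, c = image.shape
    assert c in (1, 3, 4)
    if c == 3:
        return image
    if c == 1:
        return np.concatenate([image] * 3, axis=2)
    # c == 4: composite the color channels over a white background
    color = image[:, :, :3].astype(np.float32)
    alpha = image[:, :, 3:].astype(np.float32) / 255.0
    blended = color * alpha + 255.0 * (1.0 - alpha)
    return blended.clip(0, 255).astype(np.uint8)
```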
2024-04-25 13:20:09 +10:00
psychedelicious
5b8f77f990
tidy(nodes): move cnet mode literals to utils
...
Now they can be used in type signatures without circular imports.
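The pattern is to keep the `Literal` aliases in a leaf utils module that imports nothing from the node modules; the alias name and literal values below are illustrative:

```python
from typing import Literal, get_args

# Leaf module (e.g. a controlnet utils module) with no imports from the
# nodes package, so both nodes and backend code can import this alias.
ControlMode = Literal["balanced", "more_prompt", "more_control", "unbalanced"]

# A node module can now use the alias in a type signature without
# creating a circular import back into the nodes package.
def describe(mode: ControlMode) -> str:
    return f"mode={mode}"
```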
2024-04-25 13:20:09 +10:00
psychedelicious
1bef13db37
feat(nodes): restore unet check on CreateGradientMaskInvocation
...
Special handling for inpainting models
2024-04-23 07:32:53 -04:00
dunkeroni
bc12d6654e
chore: comments and ruff
2024-04-23 07:32:53 -04:00
dunkeroni
6d7c8d5f57
remove unet test
2024-04-23 07:32:53 -04:00
dunkeroni
781de914f4
fix threshold
2024-04-23 07:32:53 -04:00
dunkeroni
c094bad233
add unet check in gradient mask node
2024-04-23 07:32:53 -04:00
dunkeroni
0063014f2b
gradient mask node test for inpaint
2024-04-23 07:32:53 -04:00
psychedelicious
2cee436ecf
tidy(app): remove unused class
2024-04-23 17:12:14 +10:00
psychedelicious
e6386d969f
fix(app): only clear tempdirs if ephemeral and before creating tempdir
...
Also, this cleanup needs to run during init, before the instance's own temp dir is created; otherwise it would delete that temp dir.
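The ordering constraint can be sketched like this (the class, directory layout, and `tmp*` prefix are illustrative, not the exact Invoke code):

```python
import shutil
import tempfile
from pathlib import Path

class DiskStorage:
    """Illustrative stand-in for the disk object serializer."""

    def __init__(self, output_dir: Path, ephemeral: bool):
        self._output_dir = output_dir
        self._output_dir.mkdir(parents=True, exist_ok=True)
        # Only ephemeral storage owns temp dirs, and stale ones must be
        # cleared *before* this instance creates its own temp dir below;
        # otherwise the cleanup would delete it.
        if ephemeral:
            self._delete_dangling_tempdirs()
        self._tempdir = (
            Path(tempfile.mkdtemp(dir=self._output_dir, prefix="tmp"))
            if ephemeral
            else None
        )

    def _delete_dangling_tempdirs(self) -> None:
        for d in self._output_dir.glob("tmp*"):
            if d.is_dir():
                shutil.rmtree(d, ignore_errors=True)
```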
2024-04-23 17:12:14 +10:00
psychedelicious
4b2b983646
tidy(api): reverted unnecessary changes in dependencies.py
2024-04-23 17:12:14 +10:00
Lincoln Stein
53808149fb
moved cleanup routine into object_serializer_disk.py
2024-04-23 17:12:14 +10:00
Lincoln Stein
21ba55d0a6
add an initialization function that removes dangling tmpdirs from outputs/tensors
2024-04-23 17:12:14 +10:00
psychedelicious
a00e703144
feat(nodes): image mask to tensor invocation
...
Thanks @JPPhoto!
2024-04-20 11:32:08 -04:00
psychedelicious
ea527f5fe1
feat(nodes): add beta classification to mask tensor nodes
2024-04-19 09:32:56 -04:00
psychedelicious
aace364677
feat(nodes): add InvertTensorMaskInvocation
2024-04-19 09:32:56 -04:00
psychedelicious
602a59066e
fix(nodes): handle invert in alpha_mask_to_tensor
2024-04-19 09:32:56 -04:00
psychedelicious
8911017bd1
feat(ui): selectable & draggable layers
2024-04-19 09:32:56 -04:00
psychedelicious
fc26f3e430
feat(nodes): add alpha mask to tensor invocation
2024-04-19 09:32:56 -04:00
Lincoln Stein
2b9f06dc4c
Re-enable app shutdown actions (#6244)
...
* closes #6242
* only override sigINT during slow model scanning
* fix ruff formatting
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
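The second bullet, restricting the SIGINT override to the slow model scan, can be sketched as a context manager that defers the signal and restores the previous handler afterward (illustrative, not the exact Invoke code; `signal.signal` must be called from the main thread):

```python
import signal
from contextlib import contextmanager

@contextmanager
def sigint_deferred():
    """Swap in a recording SIGINT handler, restoring the old one on exit."""
    received: list[int] = []
    previous = signal.getsignal(signal.SIGINT)
    signal.signal(signal.SIGINT, lambda signum, frame: received.append(signum))
    try:
        yield received  # caller may inspect/replay deferred interrupts
    finally:
        signal.signal(signal.SIGINT, previous)
```

Because the previous handler is restored in `finally`, normal shutdown actions work again as soon as the slow section exits, even if it raises.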
2024-04-19 06:45:42 -04:00
Lincoln Stein
c3d1252892
revert to old system for doing RAM <-> VRAM transfers; new way leaks memory
2024-04-17 09:51:57 -04:00
Lincoln Stein
84f5cbdd97
make choose_torch_dtype() usable outside an invocation context
2024-04-16 19:19:19 -04:00
Lincoln Stein
edac01d4fb
reverse stupid hack
2024-04-16 18:13:59 -04:00
Lincoln Stein
eaadc55c7d
make pause/resume work in multithreaded environment
2024-04-16 16:55:56 -04:00
Lincoln Stein
89f8326c0b
Merge branch 'lstein/feat/multi-gpu' of github.com:invoke-ai/InvokeAI into lstein/feat/multi-gpu
2024-04-16 16:27:08 -04:00
Lincoln Stein
99558de178
device selection calls go through TorchDevice
2024-04-16 16:26:58 -04:00
Lincoln Stein
77130f108d
Merge branch 'main' into lstein/feat/multi-gpu
2024-04-16 16:14:27 -04:00
Lincoln Stein
371f5bc782
simplify logic for retrieving execution devices
2024-04-16 15:52:03 -04:00
Lincoln Stein
fb9b7fb63a
make object_serializer._new_name() thread-safe; add max_threads config
2024-04-16 15:23:49 -04:00
Lincoln Stein
bd833900a3
add tid to cache name to avoid non-safe uuid4 on windows
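A sketch of the naming scheme: prefixing the thread id keeps generated names unique per thread, guarding against the reported uuid4 issue on Windows (the exact format is illustrative):

```python
import threading
import uuid

def new_cache_name() -> str:
    # The thread id prefix keeps names distinct across threads even if
    # uuid4 generation misbehaves under concurrency, as reported on Windows.
    return f"{threading.get_ident()}-{uuid.uuid4()}"
```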
2024-04-16 15:02:06 -04:00
blessedcoolant
6bab040d24
Merge branch 'main' into ip-adapter-style-comp
2024-04-16 21:14:06 +05:30
Lincoln Stein
fce6b3e44c
maybe solve race issue
2024-04-16 13:09:26 +10:00
Lincoln Stein
a84f3058e2
revert object_serializer_forward_cache.py
2024-04-15 22:28:48 -04:00
Lincoln Stein
f7436f3bae
fixup config_default; patch TorchDevice to work dynamically
2024-04-15 22:15:50 -04:00
Lincoln Stein
7dd93cb810
fix merge issues; likely nonfunctional
2024-04-15 21:16:21 -04:00
blessedcoolant
b39ce642b6
cleanup: raise ValueErrors when target_blocks don't match the base model
2024-04-16 04:12:30 +05:30
Lincoln Stein
e93f4d632d
[util] Add generic torch device class ( #6174 )
...
* introduce new abstraction layer for GPU devices
* add unit test for device abstraction
* fix ruff
* convert TorchDeviceSelect into a stateless class
* move logic to select context-specific execution device into context API
* add mock hardware environments to pytest
* remove dangling mocker fixture
* fix unit test for running on non-CUDA systems
* remove unimplemented get_execution_device() call
* remove autocast precision
* Multiple changes:
1. Remove TorchDeviceSelect.get_execution_device(), as well as calls to
context.models.get_execution_device().
2. Rename TorchDeviceSelect to TorchDevice
3. Added back the legacy public API defined in `invocation_api`, including
choose_precision().
4. Added a config file migration script to accommodate removal of precision=autocast.
* add deprecation warnings to choose_torch_device() and choose_precision()
* fix test crash
* remove app_config argument from choose_torch_device() and choose_torch_dtype()
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-15 13:12:49 +00:00
blessedcoolant
2d5786d3bb
fix: Incorrect composition blocks for SD1.5
2024-04-13 13:52:10 +05:30
blessedcoolant
27466ffa1a
chore: update the ip adapter node version
2024-04-13 13:39:08 +05:30
blessedcoolant
9fc73743b2
feat: support SD1.5
2024-04-13 12:30:39 +05:30