Commit Graph

599 Commits

SHA1 Message Date
2ad5b5cc2e Flux Kontext UI support (#8111)
* add support for flux-kontext models in nodes

* flux kontext in canvas

* add aspect ratio support

* lint

* restore aspect ratio logic

* more linting

* typegen

* fix typegen

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-Air.lan>
2025-06-25 09:39:57 -04:00
2832ca300f Formatting 2025-06-24 07:26:42 +10:00
de5f413440 Filter bundle_emb for all LoRAs 2025-06-24 07:12:11 +10:00
61b049ad35 Fix to config 2025-06-23 09:52:47 -04:00
150a876c73 Formatting 2025-06-23 13:52:19 +10:00
62c3b01e4f Merge branch 'main' into OMI 2025-06-23 13:52:07 +10:00
e1157f343b Support for Flux and SDXL 2025-06-23 13:51:16 +10:00
4ee54eac1d Another attempt 2025-06-20 14:10:06 +10:00
5851c46c81 Hard code source 2025-06-19 11:05:43 +10:00
a296559e79 Ignore 2025-06-19 11:02:18 +10:00
1fd83f5e68 Import 2025-06-19 11:01:50 +10:00
637487c573 Convert FROM OMI to diffusers 2025-06-19 11:00:27 +10:00
4e98e7d0a2 Typo: dot should be comma 2025-06-19 10:47:24 +10:00
12f65d800d Formatting 2025-06-19 09:40:58 +10:00
45d09f8f51 Use OMI conversion utils 2025-06-19 09:40:49 +10:00
9b4fdb493e Loader 2025-06-18 10:53:54 +10:00
47e21d6e04 Formatting 2025-06-17 13:56:38 +10:00
84ab4a1c30 Convert from OMI to default LoRA state dict 2025-06-17 13:56:22 +10:00
85c4304efd Add OMI LoRA config 2025-06-17 13:34:03 +10:00
8f152f162b Add OMI to model format taxonomy 2025-06-17 13:33:40 +10:00
2981591c36 test: add some aitoolkit lora tests 2025-06-16 19:08:11 +10:00
ab8c739cd8 fix(LoRA): add ai-toolkit to lora loader 2025-06-16 19:08:11 +10:00
5c5108c28a feat(LoRA): support AI Toolkit LoRA for FLUX [WIP] 2025-06-16 19:08:11 +10:00
d3ecaa740f Add Precise Reference to Starter Models 2025-06-09 22:02:11 +10:00
e81579f752 fix(mm): handle invoke syntax for HF repo ids when fetching HF model metadata
Closes #8074
2025-06-04 22:27:15 +10:00
2f35d74902 backend updates 2025-05-22 13:50:15 +10:00
af41dc83f7 Make ruff happy 2025-05-20 11:19:59 +10:00
a17e771eba Re-enable classification API as fallback 2025-05-20 11:19:59 +10:00
6c0bd7d150 fix import ordering, remove code I reverted that the resync added back 2025-05-19 11:16:23 +10:00
8abcc99ced add check for state_dict, required to load TI's 2025-05-19 11:16:23 +10:00
73ab4b8895 fix offload device 2025-05-19 11:16:23 +10:00
86719f2065 revert to overload due to failing tests, use Torch futures instead 2025-05-19 11:16:23 +10:00
5f12b9185f feat(mm): add cache_snapshot to model cache clear callback 2025-05-15 16:06:47 +10:00
d958d2e5a0 feat(mm): iterate on cache callbacks API 2025-05-15 14:37:22 +10:00
823ca214e6 feat(mm): iterate on cache callbacks API 2025-05-15 13:28:51 +10:00
a33da450fd feat(mm): support cache callbacks 2025-05-15 11:23:58 +10:00
518a896521 feat(mm): add usage_info to model config 2025-05-06 09:07:52 -04:00
1f63b60021 Implementing support for Non-Standard LoRA Format (#7985)
* integrate loRA

* idk anymore tbh

* enable fused matrix for quantized models

* integrate loRA

* idk anymore tbh

* enable fused matrix for quantized models

* ruff fix

---------

Co-authored-by: Sam <bhaskarmdutt@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2025-05-05 09:40:38 -04:00
fb91f48722 change base model for chatGPT 4o 2025-04-29 09:12:49 +10:00
04c005284c add gpt-image to possible base model types 2025-04-28 15:39:11 -04:00
14944872c4 feat(mm): add model taxonomy for API models & Imagen3 as base model type 2025-04-28 13:31:26 -04:00
814406d98a feat(mm): siglip model loading supports partial loading
In the previous commit, the LLaVA model was updated to support partial loading.

In this commit, the SigLIP model is updated in the same way.

This model is used for FLUX Redux. It's <4GB and only ever run in isolation, so it won't benefit from partial loading for the vast majority of users. Regardless, I think it is best if we make _all_ models work with partial loading.

PS: I also fixed the initial load dtype issue, described in the prev commit. It's probably a non-issue for this model, but we may as well fix it (a rough sketch of the idea follows this commit message).
2025-04-18 10:12:03 +10:00
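A rough sketch of the initial-load dtype fix referenced in the PS above, assuming a `transformers` LLaVA OneVision checkpoint (the repo id is a hypothetical example; the actual Invoke loader wiring differs):

```python
import torch
from transformers import LlavaOnevisionForConditionalGeneration

# The initial load lands in CPU RAM. Passing the target dtype keeps an fp16
# checkpoint at fp16 instead of materializing it as fp32, roughly halving the
# CPU RAM needed for the first load.
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    "llava-hf/llava-onevision-qwen2-7b-ov-hf",  # hypothetical example repo id
    torch_dtype=torch.float16,  # use the loader's target dtype, not the fp32 default
)
```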
c054501103 feat(mm): llava model loading supports partial loading; fix OOM crash on initial load
The model manager has two types of model cache entries:
- `CachedModelOnlyFullLoad`: The model may only ever be loaded and unloaded as a single object.
- `CachedModelWithPartialLoad`: The model may be partially loaded and unloaded.

Partial loading is enabled by overwriting certain torch layer classes, adding the ability to autocast the layer to a device on the fly. See `CustomLinear` for an example (a minimal sketch follows this commit message).

So, to take advantage of partial loading and be cached as a `CachedModelWithPartialLoad`, the model must inherit from `torch.nn.Module`.

The LLaVA classes provided by `transformers` do inherit from `torch.nn.Module`, but we wrap those classes in a separate class called `LlavaOnevisionModel`. The wrapper encapsulates both the LLaVA model and its "processor" - a lightweight class that prepares model inputs like text and images.

While it is more elegant to encapsulate both model and processor classes in a single entity, this prevents the model cache from enabling partial loading for the chunky vLLM model.

Fixing this involved a few changes.
- Update the `LlavaOnevisionModelLoader` class to operate on the vLLM model directly, instead of the `LlavaOnevisionModel` wrapper class.
- Instantiate the processor directly in the node. The processor is lightweight and does its business on the CPU. We don't need to worry about caching in the model manager.
- Remove caching support code from the `LlavaOnevisionModel` wrapper class. It's not needed, because we do not cache this class. The class now only handles running the models provided to it.
- Rename `LlavaOnevisionModel` to `LlavaOnevisionPipeline` to better represent its purpose.

These changes have a bonus effect of fixing an OOM crash when initially loading the models. This was most apparent when loading LLaVA 7B, which is pretty chunky.

The initial load is onto CPU RAM. In the old version of the loaders, we ignored the loader's target dtype for the initial load. Instead, we loaded the model at `transformers`'s "default" dtype of fp32.

LLaVA 7B is fp16 and weighs ~17GB. Loading as fp32 means we need double that amount (~34GB) of CPU RAM. Many users only have 32GB RAM, so this causes a _CPU_ OOM - which is a hard crash of the whole process.

With the updated loaders, the initial load logic now uses the target dtype for the initial load. LLaVA now needs the expected ~17GB RAM for its initial load.

PS: If we didn't make the accompanying partial loading changes, we still could have solved this OOM. We'd just need to pass the initial load dtype to the wrapper class and have it load on that dtype. But we may as well fix both issues.

PPS: There are other models whose model classes are wrappers around a torch module class, and thus cannot be partially loaded. However, these models are typically fairly small and/or are run only on their own, so they don't benefit as much from partial loading. It's the really big models (like LLaVA 7B) that benefit most from the partial loading.
2025-04-18 10:12:03 +10:00
9846229e52 build graph for cogview4 2025-04-10 10:50:13 +10:00
46316e43f0 typegen 2025-04-10 10:50:13 +10:00
321c2d358c Add CogView4 model loader. And various other fixes to get a CogView4 workflow running (though quality is still below expectations). 2025-04-10 10:50:13 +10:00
0338983895 Update CogView4 starter model entry with approximate bundle size. 2025-04-10 10:50:13 +10:00
e2c4ea8e89 Add CogView4 model probing. 2025-04-10 10:50:13 +10:00
52a8ad1c18 chore: rename model.size to model.file_size
to disambiguate from RAM size or pixel size
2025-04-10 09:53:03 +10:00
f09aacf992 fix: ModelProbe.probe needs to return a size field 2025-04-10 09:53:03 +10:00