maryhipp
25482c031b
fix(worker): fix T5 type
2024-08-21 14:45:46 -04:00
maryhipp
68a917aed5
update default workflow
2024-08-21 14:45:46 -04:00
maryhipp
2c72295b1c
update flux_model_loader node to take a T5 encoder from a node field instead of a hardcoded list; assume all models have been downloaded
2024-08-21 14:27:34 -04:00
Brandon Rising
ada483f65e
Various styling and exception type updates
2024-08-21 11:59:04 -04:00
Brandon Rising
d4872253a1
Update doc string for import_local_model and remove access_token since it's only usable for local file paths
2024-08-21 11:18:07 -04:00
Ryan Dick
e680cf76f6
Address minor review comments.
2024-08-21 13:45:22 +00:00
Ryan Dick
253b2b1dc6
Rename t5Encoder -> t5_encoder.
2024-08-21 13:27:54 +00:00
Mary Hipp
5edec7f105
add default workflow for flux t2i
2024-08-21 09:11:17 -04:00
Brandon Rising
c819da8859
Some cleanup of the tags and description of flux nodes
2024-08-21 09:11:15 -04:00
Brandon Rising
dd24f83d43
Fix styling/lint
2024-08-21 09:10:22 -04:00
Brandon Rising
da766f5a7e
Fix support for 8-bit quantized T5 encoders; update exception messages in the flux loaders
2024-08-21 09:10:22 -04:00
Ryan Dick
5e2351f3bf
Fix FLUX output image clamping, plus a few other minor fixes to make inference work with the full bfloat16 FLUX transformer model.
2024-08-21 09:10:22 -04:00
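The clamping fix above corresponds to a common pattern when converting decoded VAE output to an image. The sketch below is illustrative only; the function name and scaling constants are assumptions, not the actual InvokeAI code:

```python
import torch


def decoded_latents_to_uint8(decoded: torch.Tensor) -> torch.Tensor:
    """Illustrative only: clamp decoded VAE output before rescaling to [0, 255]."""
    # The decoder nominally emits values in [-1, 1], but inference in reduced
    # precision can overshoot slightly; clamping first avoids wrap-around artifacts.
    decoded = decoded.clamp(-1.0, 1.0)
    return ((decoded + 1.0) * 127.5).round().to(torch.uint8)
```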
Brandon Rising
d705c3cf0e
Select dev/schnell based on the state dict, use the correct max seq len and shift for dev/schnell during inference, and move the FLUX VAE params into a separate config
2024-08-21 09:10:20 -04:00
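A minimal sketch of the kind of dev/schnell dispatch described in the commit above. The 512/256 sequence lengths match the reference FLUX configs, but the detection key and function names here are assumptions, not the InvokeAI implementation:

```python
def infer_flux_variant(state_dict: dict) -> str:
    # FLUX dev has a guidance embedder; schnell does not, so its presence in
    # the checkpoint can distinguish the two variants. (Assumed key prefix.)
    return "dev" if any(k.startswith("guidance_in.") for k in state_dict) else "schnell"


def t5_max_seq_len(variant: str) -> int:
    # dev encodes prompts with a 512-token T5 context; schnell uses 256.
    return 512 if variant == "dev" else 256
```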
Brandon Rising
115f350f6f
Install sub-directories with folders correctly; ensure a consistent dtype for tensors in the FLUX pipeline and VAE
2024-08-21 09:09:39 -04:00
Brandon Rising
be6cb2c07c
Working inference node with quantized bnb nf4 checkpoint
2024-08-21 09:09:39 -04:00
Brandon Rising
4fb5529493
Remove unused param on _run_vae_decoding in flux text to image
2024-08-21 09:09:39 -04:00
Brandon Rising
b43ee0b837
Add nf4 bnb quantized format
2024-08-21 09:09:39 -04:00
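For context on the NF4 format mentioned above, the sketch below shows one common way to swap `nn.Linear` layers for bitsandbytes 4-bit NF4 layers. It is a hedged illustration of the general technique, not the checkpoint-format handling these commits actually add:

```python
import bitsandbytes as bnb
import torch.nn as nn


def replace_linear_with_nf4(module: nn.Module) -> None:
    """Recursively swap nn.Linear layers for bitsandbytes 4-bit NF4 layers (sketch)."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            quantized = bnb.nn.Linear4bit(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                quant_type="nf4",  # normal-float-4 quantization
            )
            setattr(module, name, quantized)
        else:
            replace_linear_with_nf4(child)
```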
Brandon Rising
3312fe8fc4
Run ruff, set up initial text-to-image node
2024-08-21 09:09:39 -04:00
Brandon Rising
01a2449dae
Add backend functions and classes for the Flux implementation; update the way Flux encoders/tokenizers are loaded for prompt encoding; update the way the Flux VAE is loaded
2024-08-21 09:09:37 -04:00
Brandon Rising
cfe9d0ce0a
Some UI cleanup, regenerate schema
2024-08-21 09:08:22 -04:00
Brandon Rising
46b6314482
Run Ruff
2024-08-21 09:06:38 -04:00
Brandon Rising
46d5107ff1
Run Ruff
2024-08-21 09:06:38 -04:00
Brandon Rising
6ea1278d22
Manage quantization of models within the loader
2024-08-21 09:06:34 -04:00
Brandon Rising
f425d3aa3c
Setup flux model loading in the UI
2024-08-21 09:04:37 -04:00
Ryan Dick
d7a39a4d67
WIP on moving from diffusers to FLUX
2024-08-21 08:59:19 -04:00
Ryan Dick
0e96794c6e
LLM.int8() quantization is working, but there are still some rough edges to solve.
2024-08-21 08:59:19 -04:00
Ryan Dick
23a7328a66
Clean up NF4 implementation.
2024-08-21 08:59:19 -04:00
Ryan Dick
c3cf8c3b6b
NF4 inference working
2024-08-21 08:59:19 -04:00
Ryan Dick
3ba60e1656
Split a FluxTextEncoderInvocation out from the FluxTextToImageInvocation. This has the advantage that we benefit from automatic caching when the prompt isn't changed.
2024-08-21 08:59:19 -04:00
Ryan Dick
cdd47b657b
Make quantized loading fast for both T5XXL and FLUX transformer.
2024-08-21 08:59:19 -04:00
Ryan Dick
e8fb8f4d12
Make float16 inference work with FLUX on 24GB GPU.
2024-08-21 08:59:19 -04:00
Ryan Dick
9381211508
Add support for 8-bit quantization of the FLUX T5XXL text encoder.
2024-08-21 08:59:19 -04:00
Ryan Dick
8cce4a40d4
Make 8-bit quantization save/reload work for the FLUX transformer. Reload is still very slow with the current optimum.quanto implementation.
2024-08-21 08:59:19 -04:00
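The two commits above reference optimum.quanto; a minimal sketch of 8-bit weight quantization with that library looks like the following (how the T5XXL model is loaded, cached, and saved in InvokeAI is not shown and is assumed):

```python
from optimum.quanto import freeze, qint8, quantize


def quantize_text_encoder(model):
    # Replace module weights with 8-bit quantized tensors, then freeze to
    # materialize them so the model can be saved and reloaded in quantized form.
    quantize(model, weights=qint8)
    freeze(model)
    return model
```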
Ryan Dick
4833746698
Minor improvements to FLUX workflow.
2024-08-21 08:59:19 -04:00
Ryan Dick
8b9bf55bba
Got FLUX schnell working with 8-bit quantization. Still lots of rough edges to clean up.
2024-08-21 08:59:19 -04:00
Ryan Dick
7b199fed4f
Use the FluxPipeline.encode_prompt() api rather than trying to run the two text encoders separately.
2024-08-21 08:59:18 -04:00
Ryan Dick
13513465c8
First draft of FluxTextToImageInvocation.
2024-08-21 08:59:18 -04:00
Mary Hipp
3e7923d072
fix(api): allow updating of type for style preset
2024-08-19 16:12:39 -04:00
psychedelicious
5a24b89e54
fix(app): include style preset defaults in build
2024-08-16 21:47:06 +10:00
psychedelicious
7a3eaa8da9
feat(api): save file as prompt_templates.csv
2024-08-16 09:51:46 +10:00
Mary Hipp
599db7296f
export only user style presets
2024-08-15 16:07:32 -04:00
Mary Hipp
24f298283f
clean up, add context menu to import/download templates
2024-08-15 12:39:55 -04:00
Mary Hipp
68dac6349d
Merge remote-tracking branch 'origin/main' into maryhipp/export-presets
2024-08-15 11:21:56 -04:00
psychedelicious
60d754d1df
feat(api): tidy style presets import logic
...
- Extract parsing into utility function
- Log import errors
- Forbid extra properties on the imported data
2024-08-15 09:47:49 -04:00
psychedelicious
bcbf8b6bd8
feat(ui): revert to using {prompt} for prompt template placeholder
2024-08-15 09:47:49 -04:00
psychedelicious
356661459b
feat(api): support JSON for preset imports
...
This allows us to support Fooocus format presets.
2024-08-15 09:47:49 -04:00
psychedelicious
deb917825e
feat(api): use pydantic validation during style preset import
...
- Enforce name is present and not an empty string
- Provide empty string as default for positive and negative prompt
- Add `positive_prompt` as validation alias for `prompt` field
- Strip whitespace automatically
- Create `TypeAdapter` to validate the whole list in one go
2024-08-15 09:47:49 -04:00
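A minimal pydantic sketch of the validation rules listed above; the model and field names (`StylePresetImport`, `positive_prompt`, etc.) are illustrative assumptions, not the actual InvokeAI schema:

```python
from pydantic import AliasChoices, BaseModel, ConfigDict, Field, TypeAdapter


class StylePresetImport(BaseModel):
    # Strip whitespace automatically and reject unknown keys on import.
    model_config = ConfigDict(str_strip_whitespace=True, extra="forbid")

    # Name must be present and non-empty.
    name: str = Field(min_length=1)
    # Prompts default to empty strings; "positive_prompt" is accepted as an
    # alternative key for the prompt field.
    prompt: str = Field(default="", validation_alias=AliasChoices("prompt", "positive_prompt"))
    negative_prompt: str = ""


# Validate the whole imported list in one go.
StylePresetImportList = TypeAdapter(list[StylePresetImport])
presets = StylePresetImportList.validate_python(
    [{"name": " Cinematic ", "positive_prompt": "cinematic photo of {prompt}"}]
)
```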
Mary Hipp
2d58754789
feat(api): add endpoint to take a CSV, parse it, validate it, and create many style preset entries
2024-08-15 09:47:49 -04:00
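A hypothetical sketch of the CSV endpoint described above (the route path, function name, and response shape are assumptions); each parsed row would then be validated with the `TypeAdapter` sketched earlier and stored as a style preset entry:

```python
import csv
import io

from fastapi import APIRouter, UploadFile

router = APIRouter(prefix="/style_presets")


@router.post("/import")
async def import_style_presets(file: UploadFile) -> dict[str, int]:
    raw = await file.read()
    # DictReader keys each row by the CSV header (e.g. name, prompt, negative_prompt).
    rows = list(csv.DictReader(io.StringIO(raw.decode("utf-8"))))
    return {"parsed": len(rows)}
```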
Mary Hipp
a9014673a0
wip export
2024-08-15 09:00:11 -04:00
psychedelicious
982c266073
tidy: remove extra characters in prompt templates
2024-08-14 12:31:57 +10:00