psychedelicious
576f1cbb75
build: remove broken scripts
...
These two scripts are broken and can cause data loss. Remove them.
They are not in the launcher script, but _are_ available to users in the terminal/file browser.
Hopefully, when we remove them here, `pip` will delete them on the next installation of the package...
2024-08-27 22:01:45 +10:00
Ryan Dick
50085b40bb
Update starter model size estimates.
2024-08-26 20:17:50 -04:00
Mary Hipp
cff382715a
default workflow: add steps to exposed fields, add more notes
2024-08-26 20:17:50 -04:00
Brandon Rising
54d54d1bf2
Run ruff
2024-08-26 20:17:50 -04:00
Mary Hipp
e84ea68282
remove prompt
2024-08-26 20:17:50 -04:00
Mary Hipp
160dd36782
update default workflow for flux
2024-08-26 20:17:50 -04:00
Brandon Rising
65bb46bcca
Rename params for flux and flux vae, add comments explaining use of the config_path in model config
2024-08-26 20:17:50 -04:00
Brandon Rising
2d185fb766
Run ruff
2024-08-26 20:17:50 -04:00
Brandon Rising
2ba9b02932
Fix type error in tsc
2024-08-26 20:17:50 -04:00
Brandon Rising
849da67cc7
Remove no longer used code in the flux denoise function
2024-08-26 20:17:50 -04:00
Brandon Rising
3ea6c9666e
Remove in-progress images until we're able to make them valuable
2024-08-26 20:17:50 -04:00
Brandon Rising
cf633e4ef2
Only install starter models if not already installed
2024-08-26 20:17:50 -04:00
Ryan Dick
bbf934d980
Remove outdated TODO.
2024-08-26 20:17:50 -04:00
Ryan Dick
620f733110
ruff format
2024-08-26 20:17:50 -04:00
Ryan Dick
67928609a3
Downgrade accelerate and huggingface-hub deps to original versions.
2024-08-26 20:17:50 -04:00
Ryan Dick
5f15afb7db
Remove flux repo dependency
2024-08-26 20:17:50 -04:00
Ryan Dick
635d2f480d
ruff
2024-08-26 20:17:50 -04:00
Brandon Rising
70c278c810
Remove dependency on flux config files
2024-08-26 20:17:50 -04:00
Brandon Rising
56b9906e2e
Set up scaffolding for in-progress images and add the ability to cancel the flux node
2024-08-26 20:17:50 -04:00
Ryan Dick
a808ce81fd
Replace swish() with torch.nn.functional.silu(h). They are functionally equivalent, but in my test VAE decoding was ~8% faster after the change.
2024-08-26 20:17:50 -04:00
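A minimal sketch of the equivalence this commit relies on (illustrative only, not the actual VAE code); `swish` here stands in for the helper that was replaced:

```python
import torch
import torch.nn.functional as F

def swish(x: torch.Tensor) -> torch.Tensor:
    # Classic swish/SiLU definition: x * sigmoid(x).
    return x * torch.sigmoid(x)

x = torch.randn(4, 8)
# F.silu computes the same function with a fused kernel, which is why
# swapping it in can speed up VAE decoding without changing the output.
assert torch.allclose(swish(x), F.silu(x))
```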
Ryan Dick
83f82c5ddf
Switch the CLIP-L starter model to use our hosted version, which is much smaller.
2024-08-26 20:17:50 -04:00
Brandon Rising
101de8c25d
Update T5 encoder formats to accurately reflect the quantization strategy and data type
2024-08-26 20:17:50 -04:00
Ryan Dick
3339a4baf0
Downgrade torch back to its original version after removing optimum-quanto, and other minor version-related fixes.
2024-08-26 20:17:50 -04:00
Ryan Dick
dff4a88baa
Move quantization scripts to a scripts/ subdir.
2024-08-26 20:17:50 -04:00
Ryan Dick
a21f6c4964
Update docs for T5 quantization script.
2024-08-26 20:17:50 -04:00
Ryan Dick
97562504b7
Remove all references to optimum-quanto and downgrade diffusers.
2024-08-26 20:17:50 -04:00
Ryan Dick
75d8ac378c
Update the T5 8-bit quantized starter model to use the BnB LLM.int8() variant.
2024-08-26 20:17:50 -04:00
Ryan Dick
b9dd354e2b
Fixes to the T5XXL quantization script.
2024-08-26 20:17:50 -04:00
Ryan Dick
33c2fbd201
Add script for quantizing a T5 model.
2024-08-26 20:17:50 -04:00
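For context, a generic way to produce an 8-bit T5 encoder with bitsandbytes via transformers; this is an assumption-laden sketch, not the repository's quantization script, and the model id and output path are illustrative:

```python
from transformers import BitsAndBytesConfig, T5EncoderModel

# Load the encoder with bitsandbytes LLM.int8() quantization applied on load.
model = T5EncoderModel.from_pretrained(
    "google/t5-v1_1-xxl",  # illustrative model id
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
# Recent transformers releases can serialize LLM.int8() weights directly.
model.save_pretrained("t5-xxl-llm-int8")  # illustrative output path
```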
Brandon Rising
5063be92bf
Switch flux to using its own conditioning field
2024-08-26 20:17:50 -04:00
Brandon Rising
1047584b3e
Only import bnb quantize file if bitsandbytes is installed
2024-08-26 20:17:50 -04:00
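The guarded-import pattern behind this commit, sketched generically rather than with the codebase's actual module paths:

```python
import importlib.util

# Only import the bitsandbytes-backed code path when the optional dependency
# is actually installed (it is skipped on macOS, for example).
if importlib.util.find_spec("bitsandbytes") is not None:
    import bitsandbytes as bnb  # safe: the spec check above guarantees it exists
else:
    bnb = None  # callers must fall back to a non-quantized path
```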
Brandon Rising
6764dcfdaa
Load and unload clip/t5 encoders and run inference separately in text encoding
2024-08-26 20:17:50 -04:00
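A hypothetical sketch of the sequential-loading idea (the `load_model` context manager and model keys are placeholders, not the actual invocation API): loading CLIP and T5 one at a time keeps only one encoder in VRAM at any moment.

```python
def encode_prompt(prompt: str, load_model, clip_key: str, t5_key: str):
    # Load CLIP, run it, and release it before the much larger T5 is loaded.
    with load_model(clip_key) as clip_encoder:
        clip_embeds = clip_encoder(prompt)
    # Only now bring T5 into memory; CLIP has already been unloaded.
    with load_model(t5_key) as t5_encoder:
        t5_embeds = t5_encoder(prompt)
    return clip_embeds, t5_embeds
```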
Brandon Rising
012864ceb1
Update macOS test VM to macOS-14
2024-08-26 20:17:50 -04:00
Ryan Dick
a0bf20bcee
Run FLUX VAE decoding in the user's preferred dtype rather than float32. Tested, and seems to work well at float16.
2024-08-26 20:17:50 -04:00
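Roughly what "decode in the user's preferred dtype" amounts to; a sketch assuming a diffusers-style VAE object that exposes a `decode` method:

```python
import torch

def decode_latents(vae, latents: torch.Tensor, dtype: torch.dtype = torch.float16) -> torch.Tensor:
    # Cast the VAE and the latents to the preferred dtype instead of forcing
    # float32; float16 worked well in the commit author's testing.
    vae = vae.to(dtype=dtype)
    with torch.no_grad():
        return vae.decode(latents.to(dtype=dtype))
```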
Ryan Dick
14ab339b33
Move prepare_latent_image_patches(...) to sampling.py with all of the related FLUX inference code.
2024-08-26 20:17:50 -04:00
Ryan Dick
25c91efbb6
Rename field positive_prompt -> prompt.
2024-08-26 20:17:50 -04:00
Ryan Dick
1c1f2c6664
Add comment about incorrect T5 Tokenizer size calculation.
2024-08-26 20:17:50 -04:00
Ryan Dick
d7c22b3bf7
Tidy is_schnell detection logic.
2024-08-26 20:17:50 -04:00
Ryan Dick
185f2a395f
Make FLUX get_noise(...) consistent across devices/dtypes.
2024-08-26 20:17:50 -04:00
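One common way to make seeded noise identical across devices and dtypes (an assumption about the approach, not a copy of the FLUX implementation): always sample on CPU in float32 with a seeded generator, then move and cast.

```python
import torch

def get_noise(shape, seed: int, device: torch.device, dtype: torch.dtype) -> torch.Tensor:
    generator = torch.Generator(device="cpu").manual_seed(seed)
    # Sampling on CPU in float32 makes the values depend only on the seed,
    # regardless of which device/dtype the caller ultimately wants.
    noise = torch.randn(shape, generator=generator, device="cpu", dtype=torch.float32)
    return noise.to(device=device, dtype=dtype)
```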
Ryan Dick
0c5649491e
Mark FLUX nodes as prototypes.
2024-08-26 20:17:50 -04:00
Brandon Rising
94aba5892a
Attribute black-forest-labs/flux for much of the flux code
2024-08-26 20:17:50 -04:00
Brandon Rising
ef093dde29
Don't install bitsandbytes on macOS
2024-08-26 20:17:50 -04:00
maryhipp
34451e5f27
added FLUX dev to starter models
2024-08-26 20:17:50 -04:00
Brandon Rising
1f9bdd1a9a
Undo changes to the v2 dir of frontend types
2024-08-26 20:17:50 -04:00
Brandon Rising
c27d59baf7
Run ruff
2024-08-26 20:17:50 -04:00
Brandon Rising
f130ddec7c
Remove automatic install of models in the flux model loader; remove the no-longer-used import function on the context
2024-08-26 20:17:50 -04:00
Ryan Dick
a0a259eef1
Fix max_seq_len field description.
2024-08-26 20:17:50 -04:00
Ryan Dick
b66f19d4d1
Add docs to the quantization scripts.
2024-08-26 20:17:50 -04:00
Ryan Dick
4105a78b83
Update load_flux_model_bnb_llm_int8.py to work with a single-file FLUX transformer checkpoint.
2024-08-26 20:17:50 -04:00
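A single-file FLUX checkpoint is just one `.safetensors` state dict, so the quantization script can read it directly instead of a diffusers folder; a minimal sketch (the path is illustrative):

```python
from safetensors.torch import load_file

# Load the whole transformer state dict from a single-file checkpoint.
state_dict = load_file("flux1-dev.safetensors")  # illustrative path
print(f"loaded {len(state_dict)} tensors")
```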
Ryan Dick
19a68afb3a
Fix bug in InvokeInt8Params that was causing it to use double the necessary VRAM.
2024-08-26 20:17:50 -04:00