Mirrored_Repos / InvokeAI
Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00
InvokeAI / invokeai / backend / quantization
Latest commit: 4bd7fda694 by Brandon Rising, 2024-08-26 20:17:50 -04:00: Install sub directories with folders correctly, ensure consistent dtype of tensors in flux pipeline and vae
bnb_llm_int8.py: More improvements for LLM.int8() - not fully tested. (2024-08-26 20:17:50 -04:00)
bnb_nf4.py: Install sub directories with folders correctly, ensure consistent dtype of tensors in flux pipeline and vae (2024-08-26 20:17:50 -04:00)
fast_quantized_diffusion_model.py: Run Ruff (2024-08-26 20:17:50 -04:00)
fast_quantized_transformers_model.py: Run Ruff (2024-08-26 20:17:50 -04:00)
load_flux_model_bnb_llm_int8.py: LLM.int8() quantization is working, but still some rough edges to solve. (2024-08-26 20:17:50 -04:00)
load_flux_model_bnb_nf4.py: WIP on moving from diffusers to FLUX (2024-08-26 20:17:50 -04:00)
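
The bnb_ prefix in these module names points to bitsandbytes-based quantization (LLM.int8() and NF4) for the FLUX model. As a rough illustration only, not InvokeAI's actual implementation, the sketch below shows the common bitsandbytes pattern of swapping a model's nn.Linear layers for NF4-quantized bnb.nn.Linear4bit layers; the function name and usage are hypothetical.

# Illustrative sketch of bitsandbytes NF4 quantization, not InvokeAI's API.
import torch
import torch.nn as nn
import bitsandbytes as bnb


def replace_linears_with_nf4(module: nn.Module, compute_dtype: torch.dtype = torch.bfloat16) -> nn.Module:
    """Recursively swap nn.Linear layers for NF4-quantized bnb.nn.Linear4bit layers."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            nf4_linear = bnb.nn.Linear4bit(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                compute_dtype=compute_dtype,
                quant_type="nf4",
            )
            # Carry over the original full-precision weights; bitsandbytes
            # quantizes them to NF4 when the module is moved to a CUDA device.
            nf4_linear.weight = bnb.nn.Params4bit(
                child.weight.data, requires_grad=False, quant_type="nf4"
            )
            if child.bias is not None:
                nf4_linear.bias = child.bias
            setattr(module, name, nf4_linear)
        else:
            replace_linears_with_nf4(child, compute_dtype)
    return module


# Usage sketch: quantize a model and move it to the GPU, which is when the
# NF4 quantization actually happens. The LLM.int8() variant would instead use
# bnb.nn.Linear8bitLt with has_fp16_weights=False.
# model = replace_linears_with_nf4(model).to("cuda")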