Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)
InvokeAI / invokeai / backend / quantization

Latest commit: Ryan Dick, 4105a78b83: Update load_flux_model_bnb_llm_int8.py to work with a single-file FLUX transformer checkpoint. (2024-08-26 20:17:50 -04:00)
__init__.py: Move requantize.py to the quantization/ dir. (2024-08-26 20:17:50 -04:00)
bnb_llm_int8.py: Fix bug in InvokeInt8Params that was causing it to use double the necessary VRAM; see the LLM.int8 sketch after this listing. (2024-08-26 20:17:50 -04:00)
bnb_nf4.py: Install subdirectories with folders correctly; ensure consistent dtype of tensors in the FLUX pipeline and VAE; see the NF4 sketch after this listing. (2024-08-26 20:17:50 -04:00)
fast_quantized_diffusion_model.py: Move requantize.py to the quantization/ dir. (2024-08-26 20:17:50 -04:00)
fast_quantized_transformers_model.py: Move requantize.py to the quantization/ dir. (2024-08-26 20:17:50 -04:00)
load_flux_model_bnb_llm_int8.py: Update load_flux_model_bnb_llm_int8.py to work with a single-file FLUX transformer checkpoint. (2024-08-26 20:17:50 -04:00)
load_flux_model_bnb_nf4.py: WIP on moving from diffusers to FLUX. (2024-08-26 20:17:50 -04:00)
requantize.py: Move requantize.py to the quantization/ dir. (2024-08-26 20:17:50 -04:00)
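
bnb_llm_int8.py and load_flux_model_bnb_llm_int8.py apply bitsandbytes LLM.int8 quantization to the FLUX transformer's linear layers. Below is a minimal sketch of that general technique using the public bitsandbytes API; it is not the InvokeAI code (which adds its own InvokeInt8Params handling), and the loader named in the usage comment is a placeholder, not a real InvokeAI function.

```python
# Minimal sketch of LLM.int8 linear-layer quantization with bitsandbytes.
# Illustrative only; not the InvokeAI implementation.
import bitsandbytes as bnb
from torch import nn


def quantize_linear_layers_llm_int8(module: nn.Module, outlier_threshold: float = 6.0) -> nn.Module:
    """Recursively swap nn.Linear layers for bitsandbytes Linear8bitLt layers.

    The weights are actually converted to int8 when the model is moved to a CUDA device.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            int8_linear = bnb.nn.Linear8bitLt(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                has_fp16_weights=False,       # keep pure int8 weights (inference mode)
                threshold=outlier_threshold,  # LLM.int8 outlier threshold
            )
            # Copy the original weights; they are quantized on .to("cuda").
            int8_linear.weight = bnb.nn.Int8Params(
                child.weight.data, requires_grad=False, has_fp16_weights=False
            )
            if child.bias is not None:
                int8_linear.bias = child.bias
            setattr(module, name, int8_linear)
        else:
            quantize_linear_layers_llm_int8(child, outlier_threshold)
    return module


# Hypothetical usage (the loader name is an assumption, not InvokeAI API):
# transformer = load_flux_transformer("flux-transformer.safetensors")
# transformer = quantize_linear_layers_llm_int8(transformer).to("cuda")
```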
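
bnb_nf4.py and load_flux_model_bnb_nf4.py do the analogous wrapping with 4-bit NF4 layers. Again, this is only a sketch of the underlying bitsandbytes technique under assumed defaults (bfloat16 compute dtype), not the InvokeAI implementation.

```python
# Minimal sketch of NF4 (4-bit NormalFloat) linear-layer quantization with bitsandbytes.
# Illustrative only; not the InvokeAI implementation.
import bitsandbytes as bnb
import torch
from torch import nn


def quantize_linear_layers_nf4(module: nn.Module, compute_dtype: torch.dtype = torch.bfloat16) -> nn.Module:
    """Recursively swap nn.Linear layers for bitsandbytes LinearNF4 layers.

    Using one compute dtype throughout helps keep tensor dtypes consistent across
    the pipeline; the 4-bit packing happens when the model is moved to the device.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            nf4_linear = bnb.nn.LinearNF4(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                compute_dtype=compute_dtype,  # dtype used for the matmul after dequantization
            )
            nf4_linear.weight = bnb.nn.Params4bit(
                child.weight.data, requires_grad=False, quant_type="nf4"
            )
            if child.bias is not None:
                nf4_linear.bias = child.bias
            setattr(module, name, nf4_linear)
        else:
            quantize_linear_layers_nf4(child, compute_dtype)
    return module
```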