InvokeAI/invokeai/backend/quantization
Latest commit: Brandon Rising, 2d9042fb93 ("Run Ruff"), 2024-08-26 20:17:50 -04:00
bnb_llm_int8.py                        More improvements for LLM.int8() - not fully tested.
bnb_nf4.py                             LLM.int8() quantization is working, but still some rough edges to solve.
fast_quantized_diffusion_model.py      Run Ruff
fast_quantized_transformers_model.py   Run Ruff
load_flux_model_bnb_llm_int8.py        LLM.int8() quantization is working, but still some rough edges to solve.
load_flux_model_bnb_nf4.py             WIP on moving from diffusers to FLUX
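The `bnb_*` modules in this package wrap bitsandbytes' LLM.int8() and NF4 quantizers for FLUX model weights. As an illustration of the core idea behind such quantizers (this is a minimal NumPy sketch, not InvokeAI's or bitsandbytes' actual implementation), blockwise absmax quantization scales each fixed-size block of weights by its own maximum absolute value, so that outlier values only degrade precision within their block:

```python
import numpy as np

def quantize_blockwise_absmax(x: np.ndarray, block_size: int = 64):
    """Quantize to int8 blockwise: each block is scaled by its own
    max absolute value, limiting the impact of outlier weights."""
    flat = x.ravel().astype(np.float32)
    pad = (-len(flat)) % block_size          # pad so length divides evenly
    flat = np.pad(flat, (0, pad))
    blocks = flat.reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True)
    scales[scales == 0] = 1.0                # avoid divide-by-zero for all-zero blocks
    q = np.round(blocks / scales * 127).astype(np.int8)
    return q, scales.squeeze(1), x.shape, pad

def dequantize_blockwise_absmax(q, scales, shape, pad):
    """Invert the blockwise quantization back to float32."""
    flat = (q.astype(np.float32) / 127) * scales[:, None]
    flat = flat.ravel()
    if pad:
        flat = flat[:-pad]
    return flat.reshape(shape)

weights = np.random.randn(4, 96).astype(np.float32)
q, scales, shape, pad = quantize_blockwise_absmax(weights, block_size=64)
restored = dequantize_blockwise_absmax(q, scales, shape, pad)
```

The real NF4 format goes further, mapping each block to a 4-bit codebook fitted to a normal distribution, but the per-block absmax scaling shown here is the shared foundation.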