InvokeAI/invokeai/backend/quantization (last updated 2024-08-15 19:34:34 +00:00)
- bnb_llm_int8.py: "LLM.int8() quantization is working, but still some rough edges to solve." (2024-08-15 19:34:34 +00:00)
- bnb_nf4.py: "LLM.int8() quantization is working, but still some rough edges to solve." (2024-08-15 19:34:34 +00:00)
- fast_quantized_diffusion_model.py: "Make quantized loading fast for both T5XXL and FLUX transformer." (2024-08-09 19:54:09 +00:00)
- fast_quantized_transformers_model.py: "Make quantized loading fast for both T5XXL and FLUX transformer." (2024-08-09 19:54:09 +00:00)
- load_flux_model_bnb_llm_int8.py: "LLM.int8() quantization is working, but still some rough edges to solve." (2024-08-15 19:34:34 +00:00)
- load_flux_model_bnb_nf4.py: "LLM.int8() quantization is working, but still some rough edges to solve." (2024-08-15 19:34:34 +00:00)
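The bnb_llm_int8.py and bnb_nf4.py modules wrap bitsandbytes quantization schemes. As an illustrative sketch only (this is not InvokeAI's or bitsandbytes' code), the per-tensor absmax scaling at the core of int8 weight quantization methods such as LLM.int8() can be written as:

```python
def absmax_quantize_int8(weights):
    """Quantize float weights to int8 via absmax scaling.

    Illustrative sketch of the per-tensor absmax scheme used by int8
    quantization methods; real libraries operate on tensors per-block
    or per-row, handle outlier columns separately (LLM.int8()), and
    run on GPU.
    """
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0.0:
        return [0] * len(weights), 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Recover approximate float weights from int8 values and a scale."""
    return [v * scale for v in q]


weights = [0.5, -1.27, 0.01, 0.0]
q, scale = absmax_quantize_int8(weights)
restored = dequantize(q, scale)
```

Here the largest-magnitude weight maps to ±127 and everything else is rounded onto the same grid, so dequantization recovers the originals up to one half-step of the scale.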