Mirrored_Repos/InvokeAI
Mirror of https://github.com/invoke-ai/InvokeAI, synced 2025-07-25 12:55:55 +00:00
Path: InvokeAI/invokeai/backend/quantization
Commit: 3e8a550fabff6dfc6db064d75d4c195eb2e6911c
Latest commit: Ryan Dick, 3e8a550fab, "More improvements for LLM.int8() - not fully tested." (2024-08-21 08:59:19 -04:00)

Files:
bnb_llm_int8.py                         More improvements for LLM.int8() - not fully tested.                      2024-08-21 08:59:19 -04:00
bnb_nf4.py                              LLM.int8() quantization is working, but still some rough edges to solve.  2024-08-21 08:59:19 -04:00
fast_quantized_diffusion_model.py       Make quantized loading fast for both T5XXL and FLUX transformer.          2024-08-21 08:59:19 -04:00
fast_quantized_transformers_model.py    Make quantized loading fast for both T5XXL and FLUX transformer.          2024-08-21 08:59:19 -04:00
load_flux_model_bnb_llm_int8.py         LLM.int8() quantization is working, but still some rough edges to solve.  2024-08-21 08:59:19 -04:00
load_flux_model_bnb_nf4.py              LLM.int8() quantization is working, but still some rough edges to solve.  2024-08-21 08:59:19 -04:00
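
The bnb_llm_int8.py and bnb_nf4.py modules listed above build on bitsandbytes quantization for the FLUX transformer. As a rough, hypothetical sketch of the general pattern (not the repository's actual code), the snippet below replaces a model's nn.Linear layers with bitsandbytes Linear8bitLt (LLM.int8()) or Linear4bit NF4 layers; the helper name quantize_linear_layers and the commented loader call are made up for illustration.

# Hypothetical sketch, not InvokeAI's actual implementation: swap a model's
# nn.Linear layers for bitsandbytes quantized layers (LLM.int8() or NF4).
import torch
import torch.nn as nn
import bitsandbytes as bnb

def quantize_linear_layers(model: nn.Module, mode: str = "llm_int8") -> nn.Module:
    """Replace every nn.Linear with a bitsandbytes equivalent.

    mode="llm_int8" -> Linear8bitLt (LLM.int8()); mode="nf4" -> Linear4bit with NF4.
    The weights are actually quantized when the module is later moved to CUDA.
    """
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            if mode == "llm_int8":
                new_layer = bnb.nn.Linear8bitLt(
                    child.in_features,
                    child.out_features,
                    bias=child.bias is not None,
                    has_fp16_weights=False,  # keep int8 weights; no fp16 copy
                    threshold=6.0,           # LLM.int8() outlier threshold
                )
            else:  # mode == "nf4"
                new_layer = bnb.nn.Linear4bit(
                    child.in_features,
                    child.out_features,
                    bias=child.bias is not None,
                    compute_dtype=torch.bfloat16,
                    quant_type="nf4",
                )
            # Copy the original full-precision weights; quantization happens on .cuda().
            new_layer.weight.data = child.weight.data
            if child.bias is not None:
                new_layer.bias.data = child.bias.data
            setattr(model, name, new_layer)
        else:
            quantize_linear_layers(child, mode)
    return model

# Usage (loader call is hypothetical):
# model = quantize_linear_layers(load_flux_transformer(), mode="nf4")
# model.cuda()  # weights are quantized as they move to the GPU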