Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00
InvokeAI/invokeai/backend/quantization
Latest commit: 6ea1278d22 by Brandon Rising, "Manage quantization of models within the loader" (2024-08-21 09:06:34 -04:00)
File | Last commit message | Date
bnb_llm_int8.py | More improvements for LLM.int8() - not fully tested. | 2024-08-21 08:59:19 -04:00
bnb_nf4.py | LLM.int8() quantization is working, but still some rough edges to solve. | 2024-08-21 08:59:19 -04:00
fast_quantized_diffusion_model.py | Manage quantization of models within the loader | 2024-08-21 09:06:34 -04:00
fast_quantized_transformers_model.py | Manage quantization of models within the loader | 2024-08-21 09:06:34 -04:00
load_flux_model_bnb_llm_int8.py | LLM.int8() quantization is working, but still some rough edges to solve. | 2024-08-21 08:59:19 -04:00
load_flux_model_bnb_nf4.py | WIP on moving from diffusers to FLUX | 2024-08-21 08:59:19 -04:00