Mirrored_Repos/InvokeAI
mirror of https://github.com/invoke-ai/InvokeAI synced 2024-08-30 20:32:17 +00:00
12,384 Commits 274 Branches 239 Tags
Commit Graph

9 Commits

Author     SHA1        Message                                                      Date
Ryan Dick  f0baf880b5  Split a FluxTextEncoderInvocation out from the FluxTextToImageInvocation. This has the advantage that we benefit from automatic caching when the prompt isn't changed.  2024-08-12 18:23:02 +00:00
Ryan Dick  a8a2fc106d  Make quantized loading fast for both T5XXL and FLUX transformer.  2024-08-09 19:54:09 +00:00
Ryan Dick  1c97360f9f  Make float16 inference work with FLUX on 24GB GPU.  2024-08-08 18:12:04 -04:00
Ryan Dick  74d6fceeb6  Add support for 8-bit quantization of the FLUX T5XXL text encoder.  2024-08-08 18:23:20 +00:00
Ryan Dick  766ddc18dc  Make 8-bit quantization save/reload work for the FLUX transformer. Reload is still very slow with the current optimum.quanto implementation.  2024-08-08 16:40:11 +00:00
Ryan Dick  e6ff7488a1  Minor improvements to FLUX workflow.  2024-08-07 22:10:09 +00:00
Ryan Dick  89a652cfcd  Got FLUX schnell working with 8-bit quantization. Still lots of rough edges to clean up.  2024-08-07 19:50:03 +00:00
Ryan Dick  b227b9059d  Use the FluxPipeline.encode_prompt() api rather than trying to run the two text encoders separately.  2024-08-07 15:12:01 +00:00
Ryan Dick  5dd619e137  First draft of FluxTextToImageInvocation.  2024-08-06 21:51:22 +00:00
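Several of the commits above (a8a2fc106d, 74d6fceeb6, 766ddc18dc, 89a652cfcd) use optimum.quanto to quantize FLUX model weights to 8 bits. As a rough illustration of the idea behind int8 weight quantization — not InvokeAI's actual code, and all names below are hypothetical — here is a minimal sketch of symmetric per-tensor quantization in pure Python:

```python
# Minimal sketch of symmetric 8-bit weight quantization, the general idea
# behind storing model weights as int8 plus a scale factor. Illustrative
# only; the commits above rely on optimum.quanto, not this code.

def quantize_int8(weights):
    """Map a list of float weights to int8 values plus a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid a zero scale
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
```

Each weight is stored in one byte instead of two (float16) or four (float32), at the cost of a round-trip error of at most half a quantization step — which is why the FLUX transformer and the large T5XXL encoder fit on consumer GPUs after this treatment.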