InvokeAI/invokeai/backend
Mirror of https://github.com/invoke-ai/InvokeAI (last synced 2024-08-30 20:32:17 +00:00)

Latest commit: 03b9d17d0b by Lincoln Stein, "draft sd3 loading; probable VRAM leak when using quantized submodels" (2024-06-13 00:51:00 -04:00)

Name                  Last commit date              Last commit message
image_util/           2024-05-17 22:54:03 -04:00    Merge branch 'main' into lstein/feat/simple-mm2-api
ip_adapter/           2024-04-09 08:12:12 -04:00    Create a UNetAttentionPatcher for patching UNet models with CustomAttnProcessor2_0 modules.
model_hash/           2024-03-22 08:26:36 +11:00    feat(mm): rename "blake3" to "blake3_multi"
model_manager/        2024-06-13 00:51:00 -04:00    draft sd3 loading; probable VRAM leak when using quantized submodels
onnx/                 2024-03-01 10:42:33 +11:00    final tidying before marking PR as ready for review
stable_diffusion/     2024-05-13 08:11:08 +10:00    cleanup: seamless unused older code cleanup
tiles/                2024-03-01 10:42:33 +11:00    feat(nodes): extract LATENT_SCALE_FACTOR to constants.py
util/                 2024-06-12 22:44:34 -04:00    add draft SD3 probing; there is an issue with FromOriginalControlNetMixin in backend.util.hotfixes due to new diffusers
__init__.py           2024-03-01 10:42:33 +11:00    consolidate model manager parts into a single class
lora.py               2024-03-01 10:42:33 +11:00    final tidying before marking PR as ready for review
model_patcher.py      2024-06-06 13:53:35 +00:00    LoRA patching optimization (#6439)
raw_model.py          2024-03-01 10:42:33 +11:00    final tidying before marking PR as ready for review
textual_inversion.py  2024-05-28 05:11:54 -07:00    Add a callout about the hackiness of dropping tokens in the TextualInversionManager.
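
Note: the latest commit in model_hash/ renames the hashing algorithm key from "blake3" to "blake3_multi", which suggests a multi-threaded BLAKE3 variant for hashing model files. As a rough, hypothetical sketch only (not InvokeAI's actual model_hash code; the function name, chunk size, and file path below are made up for illustration), multi-threaded BLAKE3 hashing with the blake3-py package could look like this:

from pathlib import Path

from blake3 import blake3  # pip install blake3


def hash_model_file(path: Path, chunk_size: int = 2**20) -> str:
    """Return the hex BLAKE3 digest of a file.

    max_threads=blake3.AUTO lets the library pick a thread count; the
    threaded backend mainly pays off when it is fed large buffers.
    """
    hasher = blake3(max_threads=blake3.AUTO)
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            hasher.update(chunk)
    return hasher.hexdigest()


if __name__ == "__main__":
    # Illustrative path; point this at any local model checkpoint.
    print(hash_model_file(Path("model.safetensors")))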