InvokeAI/ldm/invoke/generator
Latest commit 9eed1919c2 by Jonathan, 2023-02-12 18:24:15 +00:00
Strategize slicing based on free [V]RAM (#2572)
Strategize slicing based on free [V]RAM when not using xformers. Free [V]RAM is evaluated at every generation; when there is enough memory, the entire generation runs without slicing, and when there is not, we fall back to diffusers' sliced attention (sketched below).
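The strategy described in this commit is, roughly: measure free VRAM immediately before each generation and only enable diffusers' sliced attention when a full-attention pass would not fit. The following is a minimal sketch of that idea, not the InvokeAI implementation; it assumes a diffusers pipeline exposing the standard enable_attention_slicing/disable_attention_slicing helpers, and the estimate_attention_bytes heuristic is hypothetical.

```python
import torch
from diffusers import StableDiffusionPipeline  # assumption: any pipeline with attention-slicing helpers


def estimate_attention_bytes(width: int, height: int, batch_size: int = 1) -> int:
    # Hypothetical heuristic: size of the largest self-attention matrix,
    # (batch * heads) x tokens x tokens in fp16, where tokens is the number
    # of latent positions (latents are 1/8 of the image resolution).
    tokens = (width // 8) * (height // 8)
    heads = 8
    bytes_per_element = 2  # fp16
    return batch_size * heads * tokens * tokens * bytes_per_element


def choose_attention_strategy(pipe: StableDiffusionPipeline, width: int, height: int) -> None:
    """Enable or disable sliced attention for this generation based on free VRAM."""
    if not torch.cuda.is_available():
        # No CUDA device to query; keep the pipeline's current behaviour.
        return

    free_vram, _total_vram = torch.cuda.mem_get_info()  # bytes free on the current device

    if free_vram > estimate_attention_bytes(width, height):
        # Enough headroom: run the attention computation in a single pass.
        pipe.disable_attention_slicing()
    else:
        # Not enough free memory: let diffusers slice the attention computation.
        pipe.enable_attention_slicing()
```

Calling a check like this right before each pipeline invocation mirrors the "evaluated at every generation" behaviour described above; when xformers memory-efficient attention is in use, the check would be skipped entirely.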
File                    Last commit                                                            Date
__init__.py             rename all modules from ldm.dream to ldm.invoke                        2022-10-08 11:37:23 -04:00
base.py                 Refactor CUDA cache clearing to add statistical reporting. (#2553)     2023-02-06 12:53:30 -05:00
diffusers_pipeline.py   Strategize slicing based on free [V]RAM (#2572)                        2023-02-12 18:24:15 +00:00
embiggen.py             use 🧨diffusers model (#1583)                                          2023-01-15 09:22:46 -05:00
img2img.py              fix crash in txt2img and img2img w/ inpainting models and perlin > 0  2023-02-05 22:55:08 -05:00
inpaint.py              fix crash when text mask applied to img2img (#2526)                    2023-02-05 15:26:40 +00:00
omnibus.py              use 🧨diffusers model (#1583)                                          2023-01-15 09:22:46 -05:00
txt2img2img.py          Refactor CUDA cache clearing to add statistical reporting. (#2553)     2023-02-06 12:53:30 -05:00
txt2img.py              fix crash in txt2img and img2img w/ inpainting models and perlin > 0  2023-02-05 22:55:08 -05:00