- supports gfpgan, esrgan, codeformer and embiggen
- To use:
dream> !fix ./outputs/img-samples/000056.292144555.png -ft gfpgan -U2 -G0.8
dream> !fix ./outputs/img-samples/000056.292144555.png -ft codeformer -G 0.8
dream> !fix ./outputs/img-samples/000056.292144555.png -U4
dream> !fix ./outputs/img-samples/000056.292144555.png -embiggen 1.5
The first example invokes gfpgan to fix faces and esrgan to upscale 2X.
The second example invokes codeformer to fix faces, with no upscaling.
The third example uses esrgan to upscale 4X.
The fourth example runs embiggen to enlarge the image 1.5X.
- This is very preliminary work. There are some anomalies to note:
1. The syntax is non-obvious. I would prefer something like:
!fix esrgan,gfpgan
!fix esrgan
!fix embiggen,codeformer
However, this will require refactoring the gfpgan and embiggen
code.
2. Images generated using gfpgan, esrgan, or codeformer are all named
"xxxxxx.xxxxxx.postprocessed.png" and the original is saved.
However, the prefix is a new one that is not related to the
original.
3. Images generated using embiggen are named "xxxxx.xxxxxxx.png",
and once again the prefix is new. I'm not sure whether the
prefix should be aligned with the original file's prefix or not.
Probably not, but opinions welcome.
* Support color correction for img2img and inpainting, avoiding the shift to magenta seen when running images through img2img repeatedly.
* Fix docs for color correction
* add --init_color to prompt reconstruction
* For best results, the --init_color option should point to the *very first* image used in the sequence of img2img operations. Otherwise color correction will skew towards cyan.
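For reference, here is a minimal sketch of this style of color correction,
using histogram matching from scikit-image (match_histograms with
channel_axis assumes scikit-image >= 0.19); it is an illustration, not the
repository's actual implementation:

    import numpy as np
    from PIL import Image
    from skimage.exposure import match_histograms

    def correct_colors(result_path, init_color_path, out_path):
        # Match the generated image's per-channel histograms to the
        # reference given via --init_color, countering the cumulative
        # magenta/cyan drift of repeated img2img passes.
        result = np.asarray(Image.open(result_path).convert('RGB'))
        reference = np.asarray(Image.open(init_color_path).convert('RGB'))
        corrected = match_histograms(result, reference, channel_axis=-1)
        Image.fromarray(corrected.astype(np.uint8)).save(out_path)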
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Fixes:
File "stable-diffusion/ldm/modules/diffusionmodules/model.py", line 37, in nonlinearity
return x*torch.sigmoid(x)
RuntimeError: CUDA out of memory. Tried to allocate 1.56 GiB [..]
Now up to 1536x1280 is possible on 8GB VRAM.
Also remove unused SiLU class.
Apply a ~6% speedup by moving the * self.scale multiplication earlier, onto a smaller tensor.
When there is enough VRAM, don't allocate a useless zeros tensor.
Switch between cuda/mps/cpu based on q.device.type to allow cleaner per-architecture optimizations in the future.
For cuda and cpu, keep VRAM usage and the faster slicing consistent.
For cpu, use smaller slices; tested ~20% faster on an i7 (9.8 down to 7.7 s/it).
Fix an = typo in einsum_op_mps_v2: the memory check should read self.mem_total >= 8, per the #582 discussion.
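Taken together, a hypothetical sketch of the sliced, device-aware attention
described above; the function name, slice sizes, and dispatch here are
illustrative assumptions rather than the exact code:

    import torch

    def sliced_attention(q, k, v, scale):
        # Scale q up front: q is far smaller than the q @ k^T score
        # matrix, so multiplying here is the cheaper place to do it.
        q = q * scale
        # Dispatch on q.device.type; CPU benefits from smaller slices.
        if q.device.type == 'cpu':
            slice_size = max(1, q.shape[1] // 32)
        else:
            slice_size = max(1, q.shape[1] // 4)
        # Allocate the output once, rather than a throwaway zeros tensor.
        out = torch.empty(q.shape[0], q.shape[1], v.shape[2],
                          device=q.device, dtype=q.dtype)
        for i in range(0, q.shape[1], slice_size):
            end = min(i + slice_size, q.shape[1])
            attn = torch.einsum('b i d, b j d -> b i j', q[:, i:end], k)
            attn = attn.softmax(dim=-1)
            out[:, i:end] = torch.einsum('b i j, b j d -> b i d', attn, v)
        return out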
- fixes missing closing quote in the pretty-printed dream_prompt string
- removes unnecessary -f switch when txt2img is used
In addition, this commit experimentally comments out the random.seed()
call in the variation-generating part of ldm.dream.generator.base.
This fixes the problem of two calls that use the same seed and -v0.1
generating different images (#641). However, it does not fix the issue
of two images generated using the same seed and -VXXXXXX being
different.
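To illustrate the underlying issue, a private random.Random instance seeded
per request stays reproducible without mutating global state; this is a
hypothetical sketch, not the code that was changed:

    import random

    def variation_subseeds(seed, count):
        # A local generator is deterministic for a given seed and leaves
        # the module-level random state untouched for other callers.
        rng = random.Random(seed)
        return [rng.randint(0, 2**32 - 1) for _ in range(count)]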
- switch badge service to badgen, as I couldn't figure out shields.io
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
* Added linux to the workflows
- rename workflow files
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
* fixes: run on merge to 'main', 'dev';
- reduce dev merge test cases to 1 (a single case takes 11 minutes 😯)
- fix model cache name
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
* add test prompts to workflows
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
Co-authored-by: James Reynolds <magnsuviri@me.com>
Co-authored-by: Ben Alkov <ben.alkov@gmail.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
* due to changes in the metadata written to PNG files, the web server cannot
display images
* the issue has been identified and will be fixed within the next 24h
* Python 3.9 is required for the flask/react web server; the environment must
be updated.