Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)
Refactoring simplet2i (#387)
* start refactoring - not yet functional
* first phase of refactor done - not sure weighted prompts working
* Second phase of refactoring. Everything mostly working.
* The refactoring has moved all the hard-core inference work into ldm.dream.generator.*, where there are submodules for txt2img and img2img. inpaint will go in there as well.
* Some additional refactoring will be done soon, but relatively minor work.
* fix -save_orig flag to actually work
* add @neonsecret attention.py memory optimization
* remove unneeded imports
* move token logging into conditioning.py
* add placeholder version of inpaint; porting in progress
* fix crash in img2img
* inpainting working; not tested on variations
* fix crashes in img2img
* ported attention.py memory optimization #117 from basujindal branch
* added @torch.no_grad() decorators to img2img, txt2img, inpaint closures
* Final commit prior to PR against development
* fixup crash when generating intermediate images in web UI
* rename ldm.simplet2i to ldm.generate
* add backward-compatibility simplet2i shell with deprecation warning
* add back in mps exception, addresses @vargol comment in #354
* replaced Conditioning class with exported functions
* fix wrong type of with_variations attribute during initialization
* changed "image_iterator()" to "get_make_image()"
* raise NotImplementedError for calling get_make_image() in parent class
* Update ldm/generate.py better error message
  Co-authored-by: Kevin Gibbons <bakkot@gmail.com>
* minor stylistic fixes and assertion checks from code review
* moved get_noise() method into img2img class
* break get_noise() into two methods, one for txt2img and the other for img2img
* inpainting works on non-square images now
* make get_noise() an abstract method in base class
* much improved inpainting

Co-authored-by: Kevin Gibbons <bakkot@gmail.com>
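Several bullets above describe the new generator layout: a parent class in ldm.dream.generator whose get_make_image() raises NotImplementedError and whose get_noise() is abstract, with txt2img and img2img subclasses supplying their own versions and the returned closure running under torch.no_grad(). The snippet below is a minimal sketch of that pattern only; the class names (GeneratorBase, Txt2Img), the generate() driver, and all parameters are illustrative and are not taken from the actual InvokeAI code.

# Minimal sketch of the base-class pattern described in the commit message
# (illustrative only; names other than get_make_image()/get_noise() are invented).
import torch

class GeneratorBase:
    def get_make_image(self, prompt, **kwargs):
        # The parent class raises NotImplementedError, per the commit message.
        raise NotImplementedError('get_make_image() must be implemented in a subclass')

    def get_noise(self, width, height):
        # Abstract in the base class; txt2img and img2img provide their own versions.
        raise NotImplementedError('get_noise() must be implemented in a subclass')

    def generate(self, prompt, iterations=1, width=512, height=512, **kwargs):
        make_image = self.get_make_image(prompt, **kwargs)
        results = []
        for _ in range(iterations):
            noise = self.get_noise(width, height)
            results.append(make_image(noise))
        return results

class Txt2Img(GeneratorBase):
    def get_noise(self, width, height):
        # txt2img starts from pure latent noise (latents are 1/8 the pixel size)
        return torch.randn(1, 4, height // 8, width // 8)

    def get_make_image(self, prompt, **kwargs):
        @torch.no_grad()  # per the commit message, the closures run under no_grad
        def make_image(x_T):
            # the real code would run the sampler on x_T conditioned on `prompt`
            return x_T
        return make_image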
@@ -40,7 +40,7 @@ def main():
     print('* Initializing, be patient...\n')
     sys.path.append('.')
     from pytorch_lightning import logging
-    from ldm.simplet2i import T2I
+    from ldm.generate import Generate

     # these two lines prevent a horrible warning message from appearing
     # when the frozen CLIP tokenizer is imported
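This import switch is what the "backward-compatibility simplet2i shell with deprecation warning" bullet refers to: existing code importing ldm.simplet2i keeps working while new code uses ldm.generate. A possible shape for such a shim is sketched below as a guess at the idea; it is not the actual contents of ldm/simplet2i.py.

# Hypothetical sketch of a backward-compatible ldm/simplet2i.py shim
# (the real file in the repository may differ).
import warnings

from ldm.generate import Generate

class T2I(Generate):
    def __init__(self, *args, **kwargs):
        # Warn callers once, then behave exactly like the new Generate class.
        warnings.warn(
            'ldm.simplet2i.T2I is deprecated; use ldm.generate.Generate instead',
            DeprecationWarning,
        )
        super().__init__(*args, **kwargs)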
@@ -52,19 +52,18 @@ def main():
     # defaults passed on the command line.
     # additional parameters will be added (or overriden) during
     # the user input loop
-    t2i = T2I(
-        width=width,
-        height=height,
-        sampler_name=opt.sampler_name,
-        weights=weights,
-        full_precision=opt.full_precision,
-        config=config,
-        grid = opt.grid,
+    t2i = Generate(
+        width = width,
+        height = height,
+        sampler_name = opt.sampler_name,
+        weights = weights,
+        full_precision = opt.full_precision,
+        config = config,
+        grid = opt.grid,
         # this is solely for recreating the prompt
         latent_diffusion_weights=opt.laion400m,
-        seamless=opt.seamless,
-        embedding_path=opt.embedding_path,
-        device_type=opt.device
+        seamless = opt.seamless,
+        embedding_path = opt.embedding_path,
+        device_type = opt.device
     )

     # make sure the output directory exists
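For context, the keyword arguments shown in the new call are the session-wide defaults that the interactive loop later overrides per prompt. A rough usage sketch follows; it assumes Generate keeps the prompt2image()-style entry point of the old T2I class (returning (image, seed) pairs), and the argument values here are illustrative rather than taken from this diff.

# Rough usage sketch (assumes a prompt2image()-style entry point carried over
# from T2I; values below are illustrative only).
from ldm.generate import Generate

t2i = Generate(
    width=512,
    height=512,
    sampler_name='k_lms',
    full_precision=False,
)
results = t2i.prompt2image(prompt='a sunlit forest clearing', iterations=1)
for image, seed in results:
    image.save(f'outputs/{seed}.png')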
@@ -567,6 +566,17 @@ def create_cmd_parser():
         type=str,
         help='Path to input image for img2img mode (supersedes width and height)',
     )
+    parser.add_argument(
+        '-M',
+        '--mask',
+        type=str,
+        help='Path to inpainting mask; transparent areas will be painted over',
+    )
+    parser.add_argument(
+        '--invert_mask',
+        action='store_true',
+        help='Invert the inpainting mask; opaque areas will be painted over',
+    )
     parser.add_argument(
         '-T',
         '-fit',
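The two added arguments wire inpainting into the per-prompt command parser. The snippet below is a stand-alone illustration of how these flags parse; it uses a throwaway parser rather than dream.py's real create_cmd_parser().

# Stand-alone illustration of the new inpainting flags
# (throwaway parser, not dream.py's actual create_cmd_parser()).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-M', '--mask', type=str,
                    help='Path to inpainting mask; transparent areas will be painted over')
parser.add_argument('--invert_mask', action='store_true',
                    help='Invert the inpainting mask; opaque areas will be painted over')

opt = parser.parse_args(['-M', 'mask.png', '--invert_mask'])
print(opt.mask)         # mask.png
print(opt.invert_mask)  # True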