Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)
720e5cd651
* start refactoring - not yet functional
* first phase of refactor done - not sure weighted prompts working
* Second phase of refactoring. Everything mostly working.
* The refactoring has moved all the hard-core inference work into ldm.dream.generator.*, where there are submodules for txt2img and img2img. inpaint will go in there as well.
* Some additional refactoring will be done soon, but relatively minor work.
* fix -save_orig flag to actually work
* add @neonsecret attention.py memory optimization
* remove unneeded imports
* move token logging into conditioning.py
* add placeholder version of inpaint; porting in progress
* fix crash in img2img
* inpainting working; not tested on variations
* fix crashes in img2img
* ported attention.py memory optimization #117 from basujindal branch
* added @torch.no_grad() decorators to img2img, txt2img, inpaint closures
* Final commit prior to PR against development
* fixup crash when generating intermediate images in web UI
* rename ldm.simplet2i to ldm.generate
* add backward-compatibility simplet2i shell with deprecation warning
* add back in mps exception, addresses @vargol comment in #354
* replaced Conditioning class with exported functions
* fix wrong type of with_variations attribute during initialization
* changed "image_iterator()" to "get_make_image()"
* raise NotImplementedError for calling get_make_image() in parent class
* Update ldm/generate.py better error message

Co-authored-by: Kevin Gibbons <bakkot@gmail.com>

* minor stylistic fixes and assertion checks from code review
* moved get_noise() method into img2img class
* break get_noise() into two methods, one for txt2img and the other for img2img
* inpainting works on non-square images now
* make get_noise() an abstract method in base class
* much improved inpainting

Co-authored-by: Kevin Gibbons <bakkot@gmail.com>
21 lines · 693 B · Python
import torch
from torch import autocast
from contextlib import contextmanager, nullcontext


def choose_torch_device() -> str:
    '''Convenience routine for guessing which GPU device to run model on'''
    if torch.cuda.is_available():
        return 'cuda'
    if hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
        return 'mps'
    return 'cpu'


def choose_autocast_device(device):
    '''Returns an autocast compatible device from a torch device'''
    device_type = device.type  # this returns 'mps' on M1
    # autocast only supports cuda or cpu
    if device_type in ('cuda', 'cpu'):
        return device_type, autocast
    else:
        return 'cpu', nullcontext
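For context, a minimal usage sketch of how these two helpers compose: pick a device, then run inference under the matching precision context. The import path ldm.dream.devices is an assumption (the file's path isn't shown on this page), and the model call is a placeholder.

# Minimal usage sketch (hypothetical caller, not part of this file).
# Assumes this module is importable as ldm.dream.devices; adjust the
# import to the actual path if it differs.
import torch
from ldm.dream.devices import choose_torch_device, choose_autocast_device

device = torch.device(choose_torch_device())
device_type, precision_scope = choose_autocast_device(device)

# On cuda/cpu, precision_scope is torch.autocast, enabling mixed
# precision; on MPS it is nullcontext, so the block runs unchanged.
with precision_scope(device_type):
    pass  # placeholder: run the model forward pass here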