mirror of https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00

Merge branch 'development' into create-invokeai-run-directory

commit 9200b26f21

CODE_OF_CONDUCT.md (new file, 128 lines)
@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
  overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
  advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
  address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to the community leaders responsible for enforcement
at https://github.com/invoke-ai/InvokeAI/issues. All complaints will
be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
@@ -85,7 +85,7 @@ increasing size, every tile after the first in a row or column
 effectively only covers an extra `1 - overlap_ratio` on each axis. If
 the input/`--init_img` is same size as a tile, the ideal (for time)
 scaling factors with the default overlap (0.25) are 1.75, 2.5, 3.25,
-4.0 etc..
+4.0, etc.
 
 `-embiggen_tiles <spaced list of tiles>`
 
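The arithmetic behind those "ideal" factors can be sketched in a few lines of Python (a reader's illustration, not InvokeAI code): the first tile contributes a full tile width, and each subsequent tile in a row or column adds only `1 - overlap_ratio` of one.

```python
# Illustration only (not part of InvokeAI): derive the "ideal" embiggen
# scaling factors. The first tile covers a full tile width; each later
# tile along an axis adds only (1 - overlap_ratio) of a tile.

def ideal_scale(tiles_per_axis: int, overlap_ratio: float = 0.25) -> float:
    """Scale factor exactly covered by `tiles_per_axis` tiles along one axis."""
    return 1 + (tiles_per_axis - 1) * (1 - overlap_ratio)

print([ideal_scale(n) for n in range(2, 6)])  # [1.75, 2.5, 3.25, 4.0]
```

Any scale in between wastes part of a tile's coverage on extra overlap, which is why those values are "ideal (for time)".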
@@ -100,6 +100,15 @@ Tiles are numbered starting with one, and left-to-right,
 top-to-bottom. So, if you are generating a 3x3 tiled image, the
 middle row would be `4 5 6`.
 
+`-embiggen_strength <strength>`
+
+Another advanced option if you want to experiment with the strength parameter
+that embiggen uses when it calls Img2Img. Values range from 0.0 to 1.0
+and lower values preserve more of the character of the initial image.
+Values that are too high will result in a completely different end image,
+while values that are too low will result in an image not dissimilar to one
+you would get with ESRGAN upscaling alone. The default value is 0.4.
+
 ### Examples
 
 !!! example ""
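The tile-numbering convention above can be illustrated with a small helper (hypothetical, not part of InvokeAI): row `r` of a grid with `cols` columns starts at tile `(r - 1) * cols + 1`.

```python
# Illustration only: embiggen tiles are numbered from 1, left-to-right,
# top-to-bottom, so row r of a grid with `cols` columns starts at
# (r - 1) * cols + 1.

def row_tiles(row: int, cols: int) -> list:
    """1-based tile numbers making up 1-based row `row`."""
    start = (row - 1) * cols + 1
    return list(range(start, start + cols))

print(row_tiles(2, 3))  # middle row of a 3x3 grid -> [4, 5, 6]
```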
@@ -33,6 +33,7 @@ dependencies:
 - dependency_injector==4.40.0
 - getpass_asterisk
 - omegaconf==2.1.1
+- picklescan
 - pyreadline3
 - realesrgan
 - taming-transformers-rom1504
@@ -23,6 +23,7 @@ dependencies:
 - kornia==0.6.0
 - omegaconf==2.2.3
 - opencv-python==4.5.5.64
+- picklescan
 - pillow==9.2.0
 - pudb==2019.2
 - pyreadline3
@@ -26,6 +26,7 @@ dependencies:
 - kornia==0.6.0
 - omegaconf==2.2.3
 - opencv-python==4.5.5.64
+- picklescan
 - pillow==9.2.0
 - pudb==2019.2
 - pyreadline3
@@ -52,6 +52,7 @@ dependencies:
 - transformers=4.23
 - pip:
   - getpass_asterisk
+  - picklescan
   - taming-transformers-rom1504
   - test-tube==0.7.5
   - git+https://github.com/openai/CLIP.git@main#egg=clip
@@ -27,6 +27,7 @@ dependencies:
 - kornia==0.6.0
 - omegaconf==2.2.3
 - opencv-python==4.5.5.64
+- picklescan
 - pillow==9.2.0
 - pudb==2019.2
 - pyreadline3
@@ -30,6 +30,7 @@ test-tube>=0.7.5
 torch-fidelity
 torchmetrics
 transformers==4.21.*
+picklescan
 git+https://github.com/openai/CLIP.git@main#egg=clip
 git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
 git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
@@ -19,6 +19,7 @@ torch-fidelity
 torchvision==0.13.1 ; platform_system == 'Darwin'
 torchvision==0.13.1+cu116 ; platform_system == 'Linux' or platform_system == 'Windows'
 transformers
+picklescan
 https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip
 https://github.com/TencentARC/GFPGAN/archive/2eac2033893ca7f427f4035d80fe95b92649ac56.zip
 https://github.com/invoke-ai/k-diffusion/archive/7f16b2c33411f26b3eae78d10648d625cb0c1095.zip
@@ -295,8 +295,9 @@ class Generate:
 strength = None,
 init_color = None,
 # these are specific to embiggen (which also relies on img2img args)
 embiggen = None,
 embiggen_tiles = None,
+embiggen_strength = None,
 # these are specific to GFPGAN/ESRGAN
 gfpgan_strength = 0,
 facetool = None,
@@ -351,6 +352,7 @@ class Generate:
 perlin // optional 0-1 value to add a percentage of perlin noise to the initial noise
 embiggen // scale factor relative to the size of the --init_img (-I), followed by ESRGAN upscaling strength (0-1.0), followed by minimum amount of overlap between tiles as a decimal ratio (0 - 1.0) or number of pixels
 embiggen_tiles // list of tiles by number in order to process and replace onto the image e.g. `0 2 4`
+embiggen_strength // strength for embiggen. 0.0 preserves image exactly, 1.0 replaces it completely
 
 To use the step callback, define a function that receives two arguments:
 - Image GPU data
@@ -492,6 +494,7 @@ class Generate:
 perlin=perlin,
 embiggen=embiggen,
 embiggen_tiles=embiggen_tiles,
+embiggen_strength=embiggen_strength,
 inpaint_replace=inpaint_replace,
 mask_blur_radius=mask_blur_radius,
 safety_checker=checker,
@@ -640,7 +643,7 @@ class Generate:
 elif tool == 'embiggen':
     # fetch the metadata from the image
     generator = self.select_generator(embiggen=True)
-    opt.strength = 0.40
+    opt.strength = opt.embiggen_strength or 0.40
     print(f'>> Setting img2img strength to {opt.strength} for happy embiggening')
     generator.generate(
         prompt,
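One detail of the `opt.embiggen_strength or 0.40` fallback in that hunk is worth noting: `or` treats `0.0` as falsy, so an explicit strength of zero would also be replaced by the default. A small sketch (illustration, not InvokeAI code) of the difference:

```python
# Illustration of the `x or default` pattern. Because 0.0 is falsy in
# Python, `or` cannot distinguish "unset" (None) from an explicit 0.0;
# an `is None` check can, if that distinction ever matters.

def strength_with_or(value):
    return value or 0.40

def strength_with_none_check(value):
    return value if value is not None else 0.40

print(strength_with_or(None))         # 0.4
print(strength_with_or(0.0))          # 0.4 -- explicit zero is overridden
print(strength_with_none_check(0.0))  # 0.0
```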
@@ -656,6 +659,7 @@ class Generate:
 height = opt.height,
 embiggen = opt.embiggen,
 embiggen_tiles = opt.embiggen_tiles,
+embiggen_strength = opt.embiggen_strength,
 image_callback = callback,
 )
 elif tool == 'outpaint':
@@ -287,6 +287,8 @@ class Args(object):
 switches.append(f'--embiggen {" ".join([str(u) for u in a["embiggen"]])}')
 if a['embiggen_tiles']:
     switches.append(f'--embiggen_tiles {" ".join([str(u) for u in a["embiggen_tiles"]])}')
+if a['embiggen_strength']:
+    switches.append(f'--embiggen_strength {a["embiggen_strength"]}')
 
 # outpainting parameters
 if a['out_direction']:
@@ -921,6 +923,13 @@ class Args(object):
     help='For embiggen, provide list of tiles to process and replace onto the image e.g. `1 3 5`.',
     default=None,
 )
+postprocessing_group.add_argument(
+    '--embiggen_strength',
+    '-embiggen_strength',
+    type=float,
+    help='The strength of the embiggen img2img step, defaults to 0.4',
+    default=0.4,
+)
 special_effects_group.add_argument(
     '--seamless',
     action='store_true',
@@ -12,14 +12,15 @@ import time
 import gc
 import hashlib
 import psutil
+import sys
 import transformers
 import traceback
 import os
-from sys import getrefcount
 from omegaconf import OmegaConf
 from omegaconf.errors import ConfigAttributeError
 from ldm.util import instantiate_from_config
 from ldm.invoke.globals import Globals
+from picklescan.scanner import scan_file_path
 
 DEFAULT_MAX_MODELS=2
 
@@ -203,6 +204,8 @@ class ModelCache(object):
 
 if not os.path.isabs(weights):
     weights = os.path.normpath(os.path.join(Globals.root,weights))
+# scan model
+self._scan_model(model_name, weights)
 
 print(f'>> Loading {model_name} from {weights}')
 
@@ -283,6 +286,30 @@ class ModelCache(object):
         gc.collect()
         if self._has_cuda():
             torch.cuda.empty_cache()
 
+    def _scan_model(self, model_name, checkpoint):
+        # scan model
+        print(f'>> Scanning Model: {model_name}')
+        scan_result = scan_file_path(checkpoint)
+        if scan_result.infected_files != 0:
+            if scan_result.infected_files == 1:
+                print(f'\n### Issues Found In Model: {scan_result.issues_count}')
+                print('### WARNING: The model you are trying to load seems to be infected.')
+                print('### For your safety, InvokeAI will not load this model.')
+                print('### Please use checkpoints from trusted sources.')
+                print("### Exiting InvokeAI")
+                sys.exit()
+            else:
+                print('\n### WARNING: InvokeAI was unable to scan the model you are using.')
+                from ldm.util import ask_user
+                model_safe_check_fail = ask_user('Do you want to continue loading the model?', ['y', 'n'])
+                if model_safe_check_fail.lower() == 'y':
+                    pass
+                else:
+                    print("### Exiting InvokeAI")
+                    sys.exit()
+        else:
+            print('>> Model Scanned. OK!!')
+
     def _make_cache_room(self):
         num_loaded_models = len(self.models)
@@ -30,6 +30,7 @@ def build_opt(post_data, seed, gfpgan_model_exists):
 # however, this code is here against that eventuality
 setattr(opt, 'embiggen', None)
 setattr(opt, 'embiggen_tiles', None)
+setattr(opt, 'embiggen_strength', None)
 
 setattr(opt, 'facetool_strength', float(post_data['facetool_strength']) if gfpgan_model_exists else 0)
 setattr(opt, 'upscale', [int(post_data['upscale_level']), float(post_data['upscale_strength'])] if post_data['upscale_level'] != '' else None)
@@ -235,3 +235,12 @@ def rand_perlin_2d(shape, res, device, fade = lambda t: 6*t**5 - 15*t**4 + 10*t*
 n11 = dot(tile_grads([1, None], [1, None]), [-1,-1]).to(device)
 t = fade(grid[:shape[0], :shape[1]])
 return math.sqrt(2) * torch.lerp(torch.lerp(n00, n10, t[..., 0]), torch.lerp(n01, n11, t[..., 0]), t[..., 1]).to(device)
+
+def ask_user(question: str, answers: list):
+    from itertools import chain, repeat
+    user_prompt = f'\n>> {question} {answers}: '
+    invalid_answer_msg = 'Invalid answer. Please try again.'
+    pose_question = chain([user_prompt], repeat('\n'.join([invalid_answer_msg, user_prompt])))
+    user_answers = map(input, pose_question)
+    valid_response = next(filter(answers.__contains__, user_answers))
+    return valid_response