Relax Huggingface login requirement during setup (#2046)

* (config) handle huggingface token more gracefully

* (docs) document HuggingFace token requirement for Concepts

* (cli) deprecate the --(no)-interactive CLI flag

It was previously only used to skip the SD weights download, and therefore
the prompt for a Huggingface token (the "interactive" part).

Now that we don't need a Huggingface token
to download the SD weights at all, we can replace this flag with
`--skip-sd-weights`, to clearly describe its purpose.

The `--(no)-interactive` flag still functions the same, but shows a deprecation message.

* (cli) fix emergency_model_reconfigure argument parsing

* (config) fix installation issues on systems with non-UTF8 locale

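The flag migration described above can be sketched with `argparse`: the deprecated `--(no-)interactive` spelling stays parseable but is mapped onto the new `--skip-sd-weights` behavior. This is a minimal stand-alone sketch of the pattern, not the full configuration script (requires Python 3.9+ for `BooleanOptionalAction`).

```python
import argparse

# Sketch of the deprecation pattern: both flags are registered, and the
# deprecated one is translated into the new one after parsing.
parser = argparse.ArgumentParser()
parser.add_argument('--interactive',
                    dest='interactive',
                    action=argparse.BooleanOptionalAction,
                    default=True,
                    help='run in interactive mode (default) - DEPRECATED')
parser.add_argument('--skip-sd-weights',
                    dest='skip_sd_weights',
                    action=argparse.BooleanOptionalAction,
                    default=False,
                    help='skip downloading the large Stable Diffusion weight files')

opt = parser.parse_args(['--no-interactive'])
if not opt.interactive:
    # the deprecated spelling still works, but maps to the new option
    print('WARNING: --(no)-interactive is deprecated; use --skip-sd-weights')
    opt.skip_sd_weights = True
```

Because `BooleanOptionalAction` auto-generates the negated form, `--no-skip-sd-weights` also exists for free, mirroring the old `--no-interactive`.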
Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Author: Eugene Brodsky
Date: 2022-12-18 04:44:50 -05:00
Committed by: GitHub
Parent: 5c5454e4a5
Commit: f41da11d66
7 changed files with 148 additions and 77 deletions


@@ -116,7 +116,7 @@ jobs:
       - name: run configure_invokeai.py
         id: run-preload-models
         run: |
-          python scripts/configure_invokeai.py --no-interactive --yes
+          python scripts/configure_invokeai.py --skip-sd-weights --yes
       - name: cat invokeai.init
         id: cat-invokeai


@@ -118,7 +118,7 @@ jobs:
       - name: run configure_invokeai.py
         id: run-preload-models
-        run: python3 scripts/configure_invokeai.py --no-interactive --yes
+        run: python3 scripts/configure_invokeai.py --skip-sd-weights --yes
       - name: Run the tests
         id: run-tests


@@ -43,6 +43,22 @@ You can also combine styles and concepts:
 </figure>

 ## Using a Hugging Face Concept

+!!! warning "Authenticating to HuggingFace"
+
+    Some concepts require valid authentication to HuggingFace. Without it, they will not be downloaded
+    and will be silently ignored.
+
+    If you used an installer to install InvokeAI, you may have already set a HuggingFace token.
+    If you skipped this step, you can:
+
+    - run the InvokeAI configuration script again (if you used a manual installer): `scripts/configure_invokeai.py`
+    - set one of the `HUGGINGFACE_TOKEN` or `HUGGING_FACE_HUB_TOKEN` environment variables to contain your token
+
+    Finally, if you already used any HuggingFace library on your computer, you might already have a token
+    in your local cache. Check for a hidden `.huggingface` directory in your home folder. If it
+    contains a `token` file, then you are all set.
+
 Hugging Face TI concepts are downloaded and installed automatically as you
 require them. This requires your machine to be connected to the Internet. To
 find out what each concept is for, you can browse the
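The token lookup order this warning describes (the cached `~/.huggingface/token` file first, then the environment variables) can be sketched roughly as follows. `find_hf_token` is an illustrative helper written for this example, not part of InvokeAI or huggingface_hub; the `home` parameter exists only to make the sketch testable.

```python
import os
from pathlib import Path

def find_hf_token(home: Path = None):
    """Rough sketch of the token lookup order described above (illustrative only)."""
    home = home or Path.home()
    # 1. a token cached by a previous huggingface_hub login
    cached = home / '.huggingface' / 'token'
    if cached.exists():
        return cached.read_text().strip()
    # 2. either of the supported environment variables
    for ev in ('HUGGINGFACE_TOKEN', 'HUGGING_FACE_HUB_TOKEN'):
        if os.getenv(ev):
            return os.environ[ev]
    # 3. no token anywhere - concepts requiring authentication will be skipped
    return None
```

In practice huggingface_hub's own `HfFolder.get_token()` handles the cached-file case, so this sketch is only meant to show why either the cache or the environment variables is sufficient.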


@@ -459,12 +459,12 @@ greatest version, launch the Anaconda window, enter `InvokeAI` and type:

 ```bash
 git pull
 conda env update
-python scripts/configure_invokeai.py --no-interactive #optional
+python scripts/configure_invokeai.py --skip-sd-weights #optional
 ```

 This will bring your local copy into sync with the remote one. The last step may
 be needed to take advantage of new features or released models. The
-`--no-interactive` flag will prevent the script from prompting you to download
+`--skip-sd-weights` flag will prevent the script from prompting you to download
 the big Stable Diffusion weights files.

 ## Troubleshooting


@@ -110,7 +110,7 @@ def main():
             max_loaded_models=opt.max_loaded_models,
         )
     except (FileNotFoundError, TypeError, AssertionError):
-        emergency_model_reconfigure()
+        emergency_model_reconfigure(opt)
         sys.exit(-1)
     except (IOError, KeyError) as e:
         print(f'{e}. Aborting.')
@@ -123,7 +123,7 @@ def main():
     try:
         gen.load_model()
     except AssertionError:
-        emergency_model_reconfigure()
+        emergency_model_reconfigure(opt)
         sys.exit(-1)

     # web server loops forever
@@ -939,7 +939,7 @@ def write_commands(opt, file_path:str, outfilepath:str):
         f.write('\n'.join(commands))
     print(f'>> File {outfilepath} with commands created')

-def emergency_model_reconfigure():
+def emergency_model_reconfigure(opt):
     print()
     print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
     print('   You appear to have a missing or misconfigured model file(s).                   ')
@@ -948,11 +948,17 @@ def emergency_model_reconfigure():
     print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
     print('configure_invokeai is launching....\n')

-    sys.argv = [
-        'configure_invokeai',
-        os.environ.get(
-            'INVOKE_MODEL_RECONFIGURE',
-            '--interactive')]
+    # Match arguments that were set on the CLI
+    # only the arguments accepted by the configuration script are parsed
+    root_dir = ["--root", opt.root_dir] if opt.root_dir is not None else []
+    config = ["--config", opt.conf] if opt.conf is not None else []
+    yes_to_all = os.environ.get('INVOKE_MODEL_RECONFIGURE')
+
+    sys.argv = [ 'configure_invokeai' ]
+    sys.argv.extend(root_dir)
+    sys.argv.extend(config)
+    if yes_to_all is not None:
+        sys.argv.append(yes_to_all)

     import configure_invokeai
     configure_invokeai.main()
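The hunk above uses a common pattern for handing selected CLI options to another script's entry point: build each argument fragment conditionally, then splice them into a fresh `argv`. A minimal stand-alone sketch, using a hypothetical `Opt` namespace in place of InvokeAI's real options object:

```python
# Hypothetical options object standing in for the parsed CLI namespace.
class Opt:
    root_dir = '/tmp/invokeai'   # set on the CLI
    conf = None                  # not set, so it must not be forwarded

opt = Opt()

# Each fragment is an empty list when the option was not given,
# so extending with it is a no-op.
root_dir = ['--root', opt.root_dir] if opt.root_dir is not None else []
config = ['--config', opt.conf] if opt.conf is not None else []

argv = ['configure_invokeai'] + root_dir + config
# argv now contains only the options that were actually set
```

Only arguments the downstream parser accepts are forwarded this way, which avoids the downstream `argparse` rejecting unknown flags.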

scripts/configure_invokeai.py (176 lines changed; Normal file → Executable file)

@@ -10,13 +10,14 @@ print('Loading Python libraries...\n')
 import argparse
 import sys
 import os
+import io
 import re
 import warnings
 import shutil
 from urllib import request
 from tqdm import tqdm
 from omegaconf import OmegaConf
-from huggingface_hub import HfFolder, hf_hub_url
+from huggingface_hub import HfFolder, hf_hub_url, login as hf_hub_login
 from pathlib import Path
 from typing import Union
 from getpass_asterisk import getpass_asterisk
@@ -191,61 +192,110 @@ def all_datasets()->dict:
             datasets[ds]=True
     return datasets

+#---------------------------------------------
+def HfLogin(access_token) -> str:
+    """
+    Helper for logging in to Huggingface
+    The stdout capture is needed to hide the irrelevant "git credential helper" warning
+    """
+
+    capture = io.StringIO()
+    sys.stdout = capture
+    try:
+        hf_hub_login(token = access_token, add_to_git_credential=False)
+        sys.stdout = sys.__stdout__
+    except Exception as exc:
+        sys.stdout = sys.__stdout__
+        print(exc)
+        raise exc
+
 #-------------------------------Authenticate against Hugging Face
-def authenticate():
-    print('** LICENSE AGREEMENT FOR WEIGHT FILES **')
+def authenticate(yes_to_all=False):
+    print("=" * os.get_terminal_size()[0])
     print('''
-To download the Stable Diffusion weight files from the official Hugging Face
-repository, you need to read and accept the CreativeML Responsible AI license.
-
-This involves a few easy steps.
-
-1. If you have not already done so, create an account on Hugging Face's web site
-   using the "Sign Up" button:
-
-   https://huggingface.co/join
-
-   You will need to verify your email address as part of the HuggingFace
-   registration process.
-
-2. Log into your Hugging Face account:
-
-   https://huggingface.co/login
-
-3. Accept the license terms located here:
-
-   https://huggingface.co/runwayml/stable-diffusion-v1-5
-
-   and here:
-
-   https://huggingface.co/runwayml/stable-diffusion-inpainting
-
-   (Yes, you have to accept two slightly different license agreements)
-'''
-    )
-    input('Press <enter> when you are ready to continue:')
-    print('(Fetching Hugging Face token from cache...',end='')
-    access_token = HfFolder.get_token()
-    if access_token is not None:
-        print('found')
-    else:
-        print('not found')
-        print('''
-4. Thank you! The last step is to enter your HuggingFace access token so that
-   this script is authorized to initiate the download. Go to the access tokens
-   page of your Hugging Face account and create a token by clicking the
-   "New token" button:
-
-   https://huggingface.co/settings/tokens
-
-   (You can enter anything you like in the token creation field marked "Name".
-   "Role" should be "read").
-
-   Now copy the token to your clipboard and paste it at the prompt. Windows
-   users can paste with right-click or Ctrl-Shift-V.
-   Token: '''
-    )
-        access_token = getpass_asterisk.getpass_asterisk()
-        HfFolder.save_token(access_token)
+By downloading the Stable Diffusion weight files from the official Hugging Face
+repository, you agree to have read and accepted the CreativeML Responsible AI License.
+The license terms are located here:
+
+   https://huggingface.co/spaces/CompVis/stable-diffusion-license
+''')
+    print("=" * os.get_terminal_size()[0])
+
+    if not yes_to_all:
+        accepted = False
+        while not accepted:
+            accepted = yes_or_no('Accept the above License terms?')
+            if not accepted:
+                print('Please accept the License or Ctrl+C to exit.')
+            else:
+                print('Thank you!')
+    else:
+        print("The program was started with a '--yes' flag, which indicates user's acceptance of the above License terms.")
+
+    # Authenticate to Huggingface using environment variables.
+    # If successful, authentication will persist for either interactive or non-interactive use.
+    # Default env var expected by HuggingFace is HUGGING_FACE_HUB_TOKEN.
+    print("=" * os.get_terminal_size()[0])
+    print('Authenticating to Huggingface')
+    hf_envvars = [ "HUGGING_FACE_HUB_TOKEN", "HUGGINGFACE_TOKEN" ]
+    if not (access_token := HfFolder.get_token()):
+        print(f"Huggingface token not found in cache.")
+
+        for ev in hf_envvars:
+            if (access_token := os.getenv(ev)):
+                print(f"Token was found in the {ev} environment variable.... Logging in.")
+                try:
+                    HfLogin(access_token)
+                    continue
+                except ValueError:
+                    print(f"Login failed due to invalid token found in {ev}")
+            else:
+                print(f"Token was not found in the environment variable {ev}.")
+    else:
+        print(f"Huggingface token found in cache.")
+        try:
+            HfLogin(access_token)
+        except ValueError:
+            print(f"Login failed due to invalid token found in cache")
+
+    if not yes_to_all:
+        print('''
+You may optionally enter your Huggingface token now. InvokeAI *will* work without it, but some functionality may be limited.
+See https://invoke-ai.github.io/InvokeAI/features/CONCEPTS/#using-a-hugging-face-concept for more information.
+
+Visit https://huggingface.co/settings/tokens to generate a token. (Sign up for an account if needed).
+
+Paste the token below using Ctrl-Shift-V (macOS/Linux) or right-click (Windows), and/or 'Enter' to continue.
+You may re-run the configuration script again in the future if you do not wish to set the token right now.
+''')
+
+        again = True
+        while again:
+            try:
+                access_token = getpass_asterisk.getpass_asterisk(prompt="HF Token ⮞ ")
+                HfLogin(access_token)
+                access_token = HfFolder.get_token()
+                again = False
+            except ValueError:
+                again = yes_or_no('Failed to log in to Huggingface. Would you like to try again?')
+                if not again:
+                    print('\nRe-run the configuration script whenever you wish to set the token.')
+                    print('...Continuing...')
+            except EOFError:
+                # this happens if the user pressed Enter on the prompt without any input;
+                # assume this means they don't want to input a token
+                print("None provided - continuing")
+                again = False
+
+    elif access_token is None:
+        print()
+        print("HuggingFace login did not succeed. Some functionality may be limited; see https://invoke-ai.github.io/InvokeAI/features/CONCEPTS/#using-a-hugging-face-concept for more information")
+        print()
+        print(f"Re-run the configuration script without '--yes' to set the HuggingFace token interactively, or use one of the environment variables: {', '.join(hf_envvars)}")
+
+    print("=" * os.get_terminal_size()[0])
+
     return access_token

 #---------------------------------------------
@@ -537,25 +587,14 @@ def download_safety_checker():

 #-------------------------------------
 def download_weights(opt:dict) -> Union[str, None]:

-    # Authenticate to Huggingface using environment variables.
-    # If successful, authentication will persist for either interactive or non-interactive use.
-    # Default env var expected by HuggingFace is HUGGING_FACE_HUB_TOKEN.
-    if not (access_token := HfFolder.get_token()):
-        # If unable to find an existing token or expected environment, try the non-canonical environment variable (widely used in the community and supported as per docs)
-        if (access_token := os.getenv("HUGGINGFACE_TOKEN")):
-            # set the environment variable here instead of simply calling huggingface_hub.login(token), to maintain consistent behaviour.
-            # when calling the .login() method, the token is cached in the user's home directory. When the env var is used, the token is NOT cached.
-            os.environ['HUGGING_FACE_HUB_TOKEN'] = access_token

     if opt.yes_to_all:
         models = recommended_datasets()
-        if len(models)>0 and access_token is not None:
+        access_token = authenticate(opt.yes_to_all)
+        if len(models)>0:
             successfully_downloaded = download_weight_datasets(models, access_token)
             update_config_file(successfully_downloaded,opt)
             return
-        else:
-            print('** Cannot download models because no Hugging Face access token could be found. Please re-run without --yes')
-            return "could not download model weights from Huggingface due to missing or invalid access token"

     else:
         choice = user_wants_to_download_weights()
@@ -571,11 +610,12 @@ def download_weights(opt:dict) -> Union[str, None]:
         else: # 'skip'
             return

-    print('** LICENSE AGREEMENT FOR WEIGHT FILES **')
+    # We are either already authenticated, or will be asked to provide the token interactively
     access_token = authenticate()
     print('\n** DOWNLOADING WEIGHTS **')
     successfully_downloaded = download_weight_datasets(models, access_token)
     update_config_file(successfully_downloaded,opt)
     if len(successfully_downloaded) < len(models):
         return "some of the model weights downloads were not successful"
@@ -695,7 +735,12 @@ def main():
                         dest='interactive',
                         action=argparse.BooleanOptionalAction,
                         default=True,
-                        help='run in interactive mode (default)')
+                        help='run in interactive mode (default) - DEPRECATED')
+    parser.add_argument('--skip-sd-weights',
+                        dest='skip_sd_weights',
+                        action=argparse.BooleanOptionalAction,
+                        default=False,
+                        help='skip downloading the large Stable Diffusion weight files')
     parser.add_argument('--yes','-y',
                         dest='yes_to_all',
                         action='store_true',
@@ -728,7 +773,12 @@ def main():
         # Optimistically try to download all required assets. If any errors occur, add them and proceed anyway.
         errors=set()

-        if opt.interactive:
+        if not opt.interactive:
+            print("WARNING: The --(no)-interactive argument is deprecated and will be removed. Use --skip-sd-weights.")
+            opt.skip_sd_weights=True
+
+        if opt.skip_sd_weights:
+            print('** SKIPPING DIFFUSION WEIGHTS DOWNLOAD PER USER REQUEST **')
+        else:
             print('** DOWNLOADING DIFFUSION WEIGHTS **')
             errors.add(download_weights(opt))

         print('\n** DOWNLOADING SUPPORT MODELS **')


@@ -1,6 +1,5 @@
 import os
 import re
 from setuptools import setup, find_packages

 def list_files(directory):