* add whole <style token> to vocab for concept library embeddings
* add ability to load multiple concept .bin files
* make --log_tokenization respect custom tokens
* start working on concept downloading system
* preliminary support for dynamic loading and merging of multiple embedded models
- The embedding_manager is now enhanced with ldm.invoke.concepts_lib,
which handles dynamic downloading and caching of embedded models from
the Hugging Face concepts library (https://huggingface.co/sd-concepts-library)
- Downloading of an embedded model is triggered by the presence of one or more
<concept> tags in the prompt.
- Once the embedded model is downloaded, its trigger phrase will be loaded
into the embedding manager and the prompt's <concept> tag will be replaced
with the <trigger_phrase> (a rough sketch of this flow follows the list).
- The downloaded model stays on disk for fast loading later.
- The CLI autocomplete will complete partial <concept> tags for you. Type a
'<' and hit tab to get all ~700 concepts.
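A rough sketch of the tag-substitution flow above; `get_concept_model_path`,
`concept_to_trigger`, and the `load()` call are hypothetical stand-ins rather
than the actual ldm.invoke.concepts_lib interface:
```python
# Sketch only -- the helper names below are hypothetical, not the real
# ldm.invoke.concepts_lib API.
import re

CONCEPT_TAG = re.compile(r"<([\w.-]+)>")

def substitute_concepts(prompt: str, concepts_lib, embedding_manager) -> str:
    """Replace each <concept> tag with the trigger phrase of its embedding."""
    def _swap(match):
        name = match.group(1)
        # Download (or reuse a cached copy of) the embedding from
        # https://huggingface.co/sd-concepts-library/<name>
        path = concepts_lib.get_concept_model_path(name)      # hypothetical
        if path is None:
            return "<None>"            # model not found -- see the bug note below
        embedding_manager.load(path)                           # hypothetical
        return concepts_lib.concept_to_trigger(name)           # e.g. "<HOI4-Leader>"

    return CONCEPT_TAG.sub(_swap, prompt)
```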
BUGS AND LIMITATIONS:
- MODEL NAME VS TRIGGER PHRASE
You must use the name of the concept's embedding model in the
sd-concepts-library, not the trigger phrase itself. Usually these are
the same, but not always. For example, the model named "hoi4-leaders"
corresponds to the trigger "<HOI4-Leader>".
One reason for this design choice is that there is no apparent
constraint on the uniqueness of trigger phrases, and one trigger
phrase may map onto multiple models. So we use the model name
instead.
The second reason is that there is no way I know of to search
Hugging Face for models with certain trigger phrases. So we'd have
to download all 700 models to index the phrases.
The problem this presents is that it may confuse users who want to
reuse prompts from distributions that use the trigger phrase directly.
Usually this will work, but not always. (A short illustration of the
distinction appears after this list.)
- WON'T WORK ON A FIREWALLED SYSTEM
If the host running InvokeAI has no internet connection, it can't
download models from the concepts library. I will add a script that
allows users to preload a list of concept models.
- BUG IN PROMPT REPLACEMENT WHEN MODEL NOT FOUND
There's a small bug that occurs when the user provides an invalid
model name. The <concept> gets replaced with <None> in the prompt.
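For illustration, using the example model named above (the prompt text is invented):
```python
# Illustration only: the prompt must use the model's library name, not its trigger.
prompt_with_model_name = "a portrait in the style of <hoi4-leaders>"   # what you type
prompt_after_expansion = "a portrait in the style of <HOI4-Leader>"    # what the model sees
```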
* fix loading .pt embeddings; allow multi-vector embeddings; warn on dupes
* simplify replacement logic and remove cuda assumption
* download list of concepts from hugging face
* remove misleading customization of '*' placeholder
the existing code as-is did not do anything; it is unclear what it was supposed to do.
the obvious alternative -- using 'placeholder_strings' instead of
'placeholder_tokens' to match model.params.personalization_config.params.placeholder_strings --
caused a crash. i think this is because the passed string also needed to be handed over
on init of PersonalizedBase as the 'placeholder_token' argument.
this is weird config dict magic and i don't want to touch it. put a
breakpoint in personalized.py line 116 (top of PersonalizedBase.__init__) if
you want to have a crack at it yourself.
* address all the issues raised by damian0815 in review of PR #1526
* actually resize the token_embeddings
* multiple improvements to the concept loader based on code reviews
1. Activated the --embedding_directory option (alias --embedding_path)
to load a single embedding or an entire directory of embeddings at
startup time (a sketch of this flow follows the list).
2. Can turn off automatic loading of embeddings using --no-embeddings.
3. Embedding checkpoints are scanned with the pickle scanner.
4. More informative error messages when a concept can't be loaded due
either to a 404 not found error or a network error.
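A hedged sketch of the startup loading described in items 1-3; `passes_pickle_scan`
and the loader call are placeholders for the actual InvokeAI code:
```python
# Sketch only: names are placeholders, not the actual InvokeAI startup code.
from pathlib import Path

def passes_pickle_scan(path: Path) -> bool:
    # Placeholder for the real malicious-pickle scanner.
    return True

def load_embeddings(embedding_manager, embedding_path: str, embeddings_enabled: bool = True):
    """Load a single embedding file, or every embedding in a directory, at startup."""
    if not embeddings_enabled:          # corresponds to the --no-embeddings switch
        return
    root = Path(embedding_path).expanduser()
    files = [root] if root.is_file() else sorted(root.glob("*.bin")) + sorted(root.glob("*.pt"))
    for ckpt in files:
        if not passes_pickle_scan(ckpt):
            print(f"** {ckpt} failed the pickle scan; skipping")
            continue
        embedding_manager.load(str(ckpt))   # hypothetical loader call
```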
* autocomplete terms end with ">" now
* fix startup error and network unreachable
1. If the .invokeai file does not contain the --root and --outdir options,
invoke.py will now fix it.
2. Catch and handle network problems when downloading Hugging Face textual
inversion concepts (sketched below).
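A sketch of the intent of both fixes; the init-file format and the use of
`requests` are assumptions, only the option names come from the text above:
```python
# Sketch only: illustrates the intent of the two fixes, not the shipped code.
from pathlib import Path
import requests  # assumption: the download goes through requests

INIT_FILE = Path.home() / ".invokeai"

def ensure_root_and_outdir(root: str, outdir: str) -> None:
    """Add --root/--outdir lines to ~/.invokeai if they are missing."""
    text = INIT_FILE.read_text() if INIT_FILE.exists() else ""
    if "--root" not in text:
        text += f"\n--root={root}"
    if "--outdir" not in text:
        text += f"\n--outdir={outdir}"
    INIT_FILE.write_text(text)

def download_concept(url: str, dest: Path) -> bool:
    """Fetch a textual inversion concept; report network trouble instead of crashing."""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
    except requests.exceptions.RequestException as err:
        print(f"** network problem while fetching {url}: {err}")
        return False
    dest.write_bytes(resp.content)
    return True
```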
* fix misformatted error string
Co-authored-by: Damian Stewart <d@damianstewart.com>
- If the initial model fails to load, invoke.py will inform the user that
something is wrong with models.yaml or the models themselves and
drop the user into configure_invokeai.py to repair the problem.
- The model caching system will no longer try to reload the current model
if there is none.
remove duplicate import: os
ldm.util.ask_user is imported only once now
introduce textwrap and contextlib packages to clean up the code
a bare return returns None implicitly, so it is omitted
a function returns None by default, so the explicit 'return None' is omitted
dict.get returns None by default if the key is not found, so the explicit default is omitted
the type of True is bool; if the module only ever returns True, it should not return anything at all
added some indentation and line breaks to further improve readability (see the invented before/after example below)
Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
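An invented before/after example of the cleanup pattern described above (not actual InvokeAI code):
```python
# Invented example only, showing the pattern behind the cleanup notes above.

# before: the explicit default and the bare return are redundant
def get_model_name_before(config: dict):
    name = config.get("model", None)
    if name is None:
        return None
    return name

# after: dict.get already returns None when the key is missing
def get_model_name(config: dict):
    return config.get("model")
```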
- dangling debug messages in several files, introduced during
testing of the external root directory
- these need to be removed before they are interpreted as errors by users
- Loader is renamed `configure_invokeai.py`, but `preload_models.py` is retained
(as a shell) for backward compatibility
- At startup, if no runtime root directory exists and no `.invokeai` startup file is
present, user will be prompted to select the runtime and outputs directories.
- Also expanded the number of initial models offered to the user to include the
most "liked" ones from HuggingFace, including the two trinart models, the
PaperCut model, and the VoxelArt model.
- Created a configuration file for initial models to be offered to the user, at
configs/INITIAL_MODELS.yaml
- The first time invoke.py runs, if there is not already a `.invokeai` file
in the user's home directory, it will create an empty one with comments
showing how to customize it (see the sketch after this list).
- This fixes an issue in which generated images were not being saved
into the ~/invokeai/outputs directory, but were instead being stored
to a relative './outputs/img_samples' path as before.
- Note that if you specify a relative directory in the --outdir argument,
it will now be interpreted as relative to the invokeai run directory.
You will need to provide an absolute pathname in order to save the
outputs outside this directory.
- Also found and fixed a minor problem in which commands with syntax
errors were not being stored to the CLI command history.
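A sketch of the intended path handling described above; the init-file comment
text is illustrative, and only --root and --outdir come from this changelog:
```python
# Sketch only: shows the intent of the .invokeai bootstrap and the --outdir rule.
from pathlib import Path

INIT_FILE = Path.home() / ".invokeai"

def bootstrap_init_file() -> None:
    """On first run, create an empty init file with comments showing how to customize it."""
    if not INIT_FILE.exists():
        INIT_FILE.write_text(
            "# InvokeAI init file: put one command-line option per line, e.g.\n"
            "# --root=/home/me/invokeai\n"
            "# --outdir=/home/me/invokeai/outputs\n"
        )

def resolve_outdir(outdir: str, run_dir: Path) -> Path:
    """A relative --outdir is interpreted relative to the invokeai run directory."""
    out = Path(outdir).expanduser()
    return out if out.is_absolute() else run_dir / out
```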
- This fixes the clipseg loading code so that it looks in the root directory
for the model.
- It also adds several __init__.py files needed to allow InvokeAI to be
installed without the -e (editable) flag. This lets you delete the
source code directory after installation.
- preload_models.py has been renamed load_models.py. I've left a
shell legacy version with the previous name to avoid breaking any
code.
- The load_models.py script now takes an optional --root argument,
which points to an install directory for the models, scripts, config
files, and the default outputs directory. In the future, the
embeddings manager directory will also be stored here.
- If no --root is provided, and no init file or environment variable
is present, load_models.py will install to '.' by default, which is
the current behavior. (This has *not* been tested thoroughly.)
- The location of the root directory is stored in the file .invokeai
in the user's home directory ($HOME on Linux/Mac, or HOMEPATH on
Windows). The load_models.py script creates this file if it
does not already exist.
- invoke.py and load_models.py use the following search path to find
the install directory (a sketch of this lookup follows the list):
1. Contents of the environment variable INVOKEAI_ROOT
2. The --root=XXXXX option in ~/.invokeai
3. The --root option passed on the script command line.
4. As a last resort, the current working directory (".")
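A sketch of that lookup, assuming the first location found wins; the parsing
of ~/.invokeai is simplified and the function name is illustrative:
```python
# Sketch only: first match wins, following the documented search order.
import os
from pathlib import Path
from typing import Optional

def find_root(cli_root: Optional[str] = None) -> Path:
    # 1. contents of the INVOKEAI_ROOT environment variable
    env_root = os.environ.get("INVOKEAI_ROOT")
    if env_root:
        return Path(env_root).expanduser()
    # 2. a --root=XXXXX line inside ~/.invokeai
    init_file = Path.home() / ".invokeai"
    if init_file.exists():
        for line in init_file.read_text().splitlines():
            if line.strip().startswith("--root="):
                return Path(line.split("=", 1)[1].strip()).expanduser()
    # 3. the --root option passed on the script command line
    if cli_root:
        return Path(cli_root).expanduser()
    # 4. as a last resort, the current working directory
    return Path(".")
```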
Running `python scripts/load_models.py --root ~/invokeai` will
create a directory structured like this (shortened for clarity):
~/invokeai
├── configs
│   ├── models.yaml
│   └── stable-diffusion
│       ├── v1-finetune.yaml
│       ├── v1-finetune_style.yaml
│       ├── v1-inference.yaml
│       ├── v1-inpainting-inference.yaml
│       └── v1-m1-finetune.yaml
├── models
│   ├── CompVis
│   ├── bert-base-uncased
│   ├── clipseg
│   ├── codeformer
│   ├── gfpgan
│   ├── ldm
│   │   └── stable-diffusion-v1
│   │       ├── sd-v1-5-inpainting.ckpt
│   │       └── vae-ft-mse-840000-ema-pruned.ckpt
│   └── openai
├── outputs
└── scripts
    ├── dream.py
    ├── images2prompt.py
    ├── invoke.py
    ├── legacy_api.py
    ├── load_models.py
    ├── merge_embeddings.py
    ├── orig_scripts
    │   ├── download_first_stages.sh
    │   ├── train_searcher.py
    │   └── txt2img.py
    ├── preload_models.py
    └── sd-metadata.py
1. You can now run invoke.py anywhere! Just copy it to one of your
bin directories, or put ~/invokeai/scripts onto your PATH.
2. git pulls will no longer fight with you over models.yaml
3. It keeps end users out of the source code repo and will create
a path for us to do installs from invokeai.tar.gz.
- Under some circumstances, the image resizer was fitting
the wrong dimension to the user-provided bounding box
when an init image was provided.
- Closes #1470.