add ability to import and edit alternative models online

- !import_model <path/to/model/weights> will import a new model,
  prompt the user for its name and description, write it to the
  models.yaml file, and load it.

- !edit_model <model_name> will bring up a previously-defined model
  and prompt the user to edit its descriptive fields.

Example of !import_model

<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:

Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu Diffusion v1.3
  height: 512
  weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
  width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
</pre>

Example of !edit_model

<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512

>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu diffusion v1.4beta
  weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
  height: 512
  width: 512

OK to modify [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
Lincoln Stein 2022-10-13 23:48:07 -04:00
parent 916f5bfbb2
commit 6afc0f9b38
7 changed files with 380 additions and 75 deletions

View File

@@ -157,7 +157,8 @@ Here are the invoke> commands that apply to txt2img:
| --gfpgan_strength <float> | -G <float> | -G0 | Fix faces using the GFPGAN algorithm; argument indicates how hard the algorithm should try (0.0-1.0) |
| --save_original | -save_orig| False | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
| --variation <float> |-v<float>| 0.0 | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with -S<seed> and -n<int> to generate a series of riffs on a starting image. See [Variations](./VARIATIONS.md). |
| --with_variations <pattern> | -V<pattern>| None | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
| --with_variations <pattern> | | None | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
| --save_intermediates <n> | | None | Save the image from every nth step into an "intermediates" folder inside the output directory |
Note that the width and height of the image must be multiples of
64. You can provide different values, but they will be rounded down to
@@ -206,10 +207,10 @@ well as the --mask (-M) argument:
| --init_mask <path> | -M<path> | None |Path to an image the same size as the initial_image, with areas for inpainting made transparent.|
# Convenience commands
# Postprocessing
In addition to the standard image generation arguments, there are a
series of convenience commands that begin with !:
To postprocess a file using face restoration or upscaling, use the
`!fix` command.
## !fix
@@ -243,21 +244,156 @@ Outputs:
[2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix "outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512 -H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25
~~~
## !fetch
# Model selection and importation
This command retrieves the generation parameters from a previously
generated image and loads them into the command line. You may
provide either the name of a file in the current output directory, or
a full file path.
The CLI allows you to add new models on the fly, as well as to switch
among them rapidly without leaving the script.
~~~
invoke> !fetch 0000015.8929913.png
# the script returns the next line, ready for editing and running:
invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
~~~
## !models
Note that this command may behave unexpectedly if given a PNG file that
was not generated by InvokeAI.
This prints out a list of the models defined in `config/models.yaml`.
The active model is bold-faced.
Example:
<pre>
laion400m             not loaded  <no description>
<b>stable-diffusion-1.4      active  Stable Diffusion v1.4</b>
waifu-diffusion       not loaded  Waifu Diffusion v1.3
</pre>
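For reference, the listing above corresponds to stanzas in `config/models.yaml` of roughly the following form (a sketch: the weights paths, and the absence of a description for `laion400m`, are illustrative):

~~~
stable-diffusion-1.4:
  config: configs/stable-diffusion/v1-inference.yaml
  weights: models/ldm/stable-diffusion-v1/model.ckpt
  description: Stable Diffusion v1.4
  width: 512
  height: 512
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
  description: Waifu Diffusion v1.3
  width: 512
  height: 512
~~~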
## !switch <model>
This quickly switches from one model to another without leaving the
CLI script. `invoke.py` uses a memory caching system; once a model
has been loaded, switching back and forth is quick. The following
example shows this in action. Note how the second column of the
`!models` table changes to `cached` after a model is first loaded,
and that the long initialization step is not needed when loading
a cached model.
<pre>
invoke> !models
laion400m             not loaded  <no description>
<b>stable-diffusion-1.4      active  Stable Diffusion v1.4</b>
waifu-diffusion       not loaded  Waifu Diffusion v1.3
invoke> !switch waifu-diffusion
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
>> Model loaded in 18.24s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage: 2.17G
>> Setting Sampler to k_lms
invoke> !models
laion400m             not loaded  <no description>
stable-diffusion-1.4      cached  Stable Diffusion v1.4
<b>waifu-diffusion           active  Waifu Diffusion v1.3</b>
invoke> !switch stable-diffusion-1.4
>> Caching model waifu-diffusion in system RAM
>> Retrieving model stable-diffusion-1.4 from system RAM cache
>> Setting Sampler to k_lms
invoke> !models
laion400m             not loaded  <no description>
<b>stable-diffusion-1.4      active  Stable Diffusion v1.4</b>
waifu-diffusion           cached  Waifu Diffusion v1.3
</pre>
## !import_model <path/to/model/weights>
This command imports a new model weights file into InvokeAI, makes it
available for image generation within the script, and writes out the
configuration for the model into `config/models.yaml` for use in
subsequent sessions.
Provide `!import_model` with the path to a weights file ending in
`.ckpt`. If you type a partial path and press tab, the CLI will
autocomplete. Although it will also autocomplete to `.vae` files,
these are not currently supported (but will be soon).
When you hit return, the CLI will prompt you to fill in additional
information about the model, including the short name you wish to use
for it with the `!switch` command, a brief description of the model,
the default image width and height to use with this model, and the
model's configuration file. The width, height and configuration-file
fields are pre-filled with reasonable defaults, which you can accept
by hitting return. In the example below, the bold-faced text shows
what the user typed in, with the exception of the width, height and
configuration file path, which were filled in automatically.
Example:
<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:
Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu Diffusion v1.3
  height: 512
  weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
  width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
invoke>
</pre>
## !edit_model <name_of_model>
The `!edit_model` command can be used to modify a model that is
already defined in `config/models.yaml`. Call it with the short
name of the model you wish to modify, and it will allow you to
modify the model's `description`, `weights` and other fields.
Example:
<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu diffusion v1.4beta
  weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
  height: 512
  width: 512
OK to modify [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
# History processing
The CLI provides a series of convenient commands for reviewing previous
actions, retrieving them, modifying them, and re-running them.
## !history
@@ -284,6 +420,22 @@ invoke> !20
invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
~~~
## !fetch
This command retrieves the generation parameters from a previously
generated image and loads them into the command line. You may
provide either the name of a file in the current output directory, or
a full file path.
~~~
invoke> !fetch 0000015.8929913.png
# the script returns the next line, ready for editing and running:
invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
~~~
Note that this command may behave unexpectedly if given a PNG file that
was not generated by InvokeAI.
## !search <search string>
This is similar to !history but it only returns lines that contain

View File

@@ -121,6 +121,26 @@ class ModelCache(object):
        else:
            print(line)

    def add_model(self, model_name:str, model_attributes:dict, clobber=False) ->str:
        '''
        Update the named model with a dictionary of attributes. Will fail with an
        assertion error if the name already exists. Pass clobber=True to overwrite.
        On a successful update, the config will be changed in memory and a YAML
        string will be returned.
        '''
        omega = self.config
        # check that all the required fields are present
        for field in ('description','weights','height','width','config'):
            assert field in model_attributes, f'required field {field} is missing'
        assert (clobber or model_name not in omega), f'attempt to overwrite existing model definition "{model_name}"'

        config = omega[model_name] if model_name in omega else {}
        for field in model_attributes:
            config[field] = model_attributes[field]

        omega[model_name] = config
        return OmegaConf.to_yaml(omega)

    def _check_memory(self):
        avail_memory = psutil.virtual_memory()[1]
        if AVG_MODEL_SIZE + self.min_avail_mem > avail_memory:

@@ -163,10 +183,10 @@ class ModelCache(object):
        m, u = model.load_state_dict(sd, strict=False)
        if self.precision == 'float16':
            print('>> Using faster float16 precision')
            print('   | Using faster float16 precision')
            model.to(torch.float16)
        else:
            print('>> Using more accurate float32 precision')
            print('   | Using more accurate float32 precision')
        model.to(self.device)
        # model.to doesn't change the cond_stage_model.device used to move the tokenizer output, so set it here
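A brief usage sketch of the new method (`cache` stands in for an already-constructed `ModelCache` instance, and the attribute values are illustrative):

~~~
# Sketch only: assumes `cache` is an existing ModelCache instance.
attributes = {
    'description': 'Waifu Diffusion v1.3',
    'weights':     'models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt',
    'config':      'configs/stable-diffusion/v1-inference.yaml',
    'width':       512,
    'height':      512,
}
# add_model() checks the required fields, updates the in-memory config, and
# returns the updated configuration as a YAML string; with clobber=False it
# raises an AssertionError if the model name is already defined.
yaml_str = cache.add_model('waifu-diffusion', attributes, clobber=False)
~~~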

View File

@@ -21,6 +21,8 @@ except (ImportError,ModuleNotFoundError):
    readline_available = False

IMG_EXTENSIONS    = ('.png','.jpg','.jpeg','.PNG','.JPG','.JPEG','.gif','.GIF')
WEIGHT_EXTENSIONS = ('.ckpt','.vae')
CONFIG_EXTENSIONS = ('.yaml','.yml')
COMMANDS = (
    '--steps','-s',
    '--seed','-S',

@@ -47,10 +49,15 @@ COMMANDS = (
    '--skip_normalize','-x',
    '--log_tokenization','-t',
    '--hires_fix',
    '!fix','!fetch','!history','!search','!clear','!models','!switch',
    '!fix','!fetch','!history','!search','!clear',
    '!models','!switch','!import_model','!edit_model'
)
MODEL_COMMANDS = (
    '!switch',
    '!edit_model',
)
WEIGHT_COMMANDS = (
    '!import_model',
)
IMG_PATH_COMMANDS = (
    '--outdir[=\s]',

@@ -64,6 +71,7 @@ IMG_FILE_COMMANDS=(
    '--embedding_path[=\s]',
)
path_regexp   = '('+'|'.join(IMG_PATH_COMMANDS+IMG_FILE_COMMANDS) + ')\s*\S*$'
weight_regexp = '('+'|'.join(WEIGHT_COMMANDS) + ')\s*\S*$'

class Completer(object):
    def __init__(self, options, models=[]):

@@ -74,6 +82,7 @@ class Completer(object):
        self.default_dir = None
        self.linebuffer = None
        self.auto_history_active = True
        self.extensions = None
        return

    def complete(self, text, state):

@@ -84,7 +93,13 @@ class Completer(object):
        buffer = readline.get_line_buffer()

        if state == 0:
            if re.search(path_regexp,buffer):

            # extensions defined, so go directly into path completion mode
            if self.extensions is not None:
                self.matches = self._path_completions(text, state, self.extensions)

            # looking for an image file
            elif re.search(path_regexp,buffer):
                do_shortcut = re.search('^'+'|'.join(IMG_FILE_COMMANDS),buffer)
                self.matches = self._path_completions(text, state, IMG_EXTENSIONS,shortcut_ok=do_shortcut)

@@ -92,8 +107,12 @@ class Completer(object):
            elif re.search('(-S\s*|--seed[=\s])\d*$',buffer):
                self.matches= self._seed_completions(text,state)

            # looking for a model
            elif re.match('^'+'|'.join(MODEL_COMMANDS),buffer):
                self.matches= self._model_completions(text,state)
                self.matches= self._model_completions(text, state)

            # looking for a weights file to import
            elif re.search(weight_regexp,buffer):
                self.matches = self._path_completions(text, state, WEIGHT_EXTENSIONS)

            # This is the first time for this text, so build a match list.
            elif text:

@@ -111,6 +130,13 @@ class Completer(object):
                response = None
        return response

    def complete_extensions(self, extensions:list):
        '''
        If called with a list of extensions, will force completer
        to do file path completions.
        '''
        self.extensions=extensions

    def add_history(self,line):
        '''
        Pass thru to readline
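The CLI uses this hook to temporarily narrow tab-completion to particular file types while prompting for a path; a minimal sketch of the pattern, mirroring its use in the script changes below:

~~~
# Limit tab-completion to YAML files while asking for a config path,
# then restore the completer's normal behavior.
completer.complete_extensions(('.yaml', '.yml'))
config_path = input('Configuration file for this model: ')
completer.complete_extensions(None)
~~~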

View File

@@ -106,7 +106,7 @@ class DDPM(pl.LightningModule):
        ], 'currently only supporting "eps" and "x0"'
        self.parameterization = parameterization
        print(
            f' >> {self.__class__.__name__}: Running in {self.parameterization}-prediction mode'
            f'   | {self.__class__.__name__}: Running in {self.parameterization}-prediction mode'
        )
        self.cond_stage_model = None
        self.clip_denoised = clip_denoised

View File

@@ -245,7 +245,7 @@ class AttnBlock(nn.Module):

def make_attn(in_channels, attn_type="vanilla"):
    assert attn_type in ["vanilla", "linear", "none"], f'attn_type {attn_type} unknown'
    print(f" >> Making attention of type '{attn_type}' with {in_channels} in_channels")
    print(f"   | Making attention of type '{attn_type}' with {in_channels} in_channels")
    if attn_type == "vanilla":
        return AttnBlock(in_channels)
    elif attn_type == "none":

@@ -521,7 +521,7 @@ class Decoder(nn.Module):
        block_in = ch*ch_mult[self.num_resolutions-1]
        curr_res = resolution // 2**(self.num_resolutions-1)
        self.z_shape = (1,z_channels,curr_res,curr_res)
        print(" >> Working with z of shape {} = {} dimensions.".format(
        print("   | Working with z of shape {} = {} dimensions.".format(
            self.z_shape, np.prod(self.z_shape)))

        # z to block_in

View File

@@ -75,7 +75,7 @@ def count_params(model, verbose=False):
    total_params = sum(p.numel() for p in model.parameters())
    if verbose:
        print(
            f' >> {model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.'
            f'   | {model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.'
        )
    return total_params

View File

@@ -9,6 +9,7 @@ import copy
import warnings
import time
import traceback
import yaml

sys.path.append('.')    # corrects a weird problem on Macs
from ldm.invoke.readline import get_completer
from ldm.invoke.args import Args, metadata_dumps, metadata_from_png, dream_cmd_from_png
@@ -108,6 +109,7 @@ def main_loop(gen, opt, infile):
    # output directory specified at the time of script launch. We do not currently support
    # changing the history file midstream when the output directory is changed.
    completer   = get_completer(opt, models=list(model_config.keys()))
    completer.set_default_dir(opt.outdir)
    output_cntr = completer.get_current_history_length()+1

    # os.pathconf is not available on Windows
@@ -119,10 +121,8 @@ def main_loop(gen, opt, infile):
        name_max = 255

    while not done:
        operation = 'generate'   # default operation, alternative is 'postprocess'
        if completer:
            completer.set_default_dir(opt.outdir)
        operation = 'generate'

        try:
            command = get_next_command(infile)
@@ -142,53 +142,11 @@ def main_loop(gen, opt, infile):
            break

        if command.startswith('!'):
            subcommand = command[1:]
            command, operation = do_command(command, gen, opt, completer)

            if subcommand.startswith('dream'):   # in case a stored prompt still contains the !dream command
                command = command.replace('!dream ','',1)

            elif subcommand.startswith('fix'):
                command = command.replace('!fix ','',1)
                operation = 'postprocess'

            elif subcommand.startswith('switch'):
                model_name = command.replace('!switch ','',1)
                gen.set_model(model_name)
                completer.add_history(command)

        if operation is None:
            continue

            elif subcommand.startswith('models'):
                model_name = command.replace('!models ','',1)
                gen.model_cache.print_models()
                continue

            elif subcommand.startswith('fetch'):
                file_path = command.replace('!fetch ','',1)
                retrieve_dream_command(opt,file_path,completer)
                continue

            elif subcommand.startswith('history'):
                completer.show_history()
                continue

            elif subcommand.startswith('search'):
                search_str = command.replace('!search ','',1)
                completer.show_history(search_str)
                continue

            elif subcommand.startswith('clear'):
                completer.clear_history()
                continue

            elif re.match('^(\d+)',subcommand):
                command_no = re.match('^(\d+)',subcommand).groups()[0]
                command    = completer.get_line(int(command_no))
                completer.set_line(command)
                continue

            else:   # not a recognized subcommand, so give the --help text
                command = '-h'

        if opt.parse_cmd(command) is None:
            continue
@@ -381,6 +339,155 @@ def main_loop(gen, opt, infile):
    print('goodbye!')

def do_command(command:str, gen, opt:Args, completer) -> tuple:
    operation = 'generate'   # default operation, alternative is 'postprocess'

    if command.startswith('!dream'):   # in case a stored prompt still contains the !dream command
        command = command.replace('!dream ','',1)

    elif command.startswith('!fix'):
        command = command.replace('!fix ','',1)
        operation = 'postprocess'

    elif command.startswith('!switch'):
        model_name = command.replace('!switch ','',1)
        gen.set_model(model_name)
        completer.add_history(command)
        operation = None

    elif command.startswith('!models'):
        gen.model_cache.print_models()
        operation = None

    elif command.startswith('!import'):
        path = shlex.split(command)
        if len(path) < 2:
            print('** please provide a path to a .ckpt or .vae model file')
        elif not os.path.exists(path[1]):
            print(f'** {path[1]}: file not found')
        else:
            add_weights_to_config(path[1], gen, opt, completer)
        completer.add_history(command)
        operation = None

    elif command.startswith('!edit'):
        path = shlex.split(command)
        if len(path) < 2:
            print('** please provide the name of a model')
        else:
            edit_config(path[1], gen, opt, completer)
        completer.add_history(command)
        operation = None

    elif command.startswith('!fetch'):
        file_path = command.replace('!fetch ','',1)
        retrieve_dream_command(opt,file_path,completer)
        operation = None

    elif command.startswith('!history'):
        completer.show_history()
        operation = None

    elif command.startswith('!search'):
        search_str = command.replace('!search ','',1)
        completer.show_history(search_str)
        operation = None

    elif command.startswith('!clear'):
        completer.clear_history()
        operation = None

    elif re.match('^!(\d+)',command):
        command_no = re.match('^!(\d+)',command).groups()[0]
        command    = completer.get_line(int(command_no))
        completer.set_line(command)
        operation = None

    else:   # not a recognized command, so give the --help text
        command = '-h'

    return command, operation
def add_weights_to_config(model_path:str, gen, opt, completer):
    print(f'>> Model import in process. Please enter the values needed to configure this model:')
    print()

    new_config = {}
    new_config['weights'] = model_path

    done = False
    while not done:
        model_name = input('Name for this model: ')
        if not re.match('^[\w._-]+$',model_name):
            print('** model name must contain only words, digits and the characters [._-] **')
        else:
            done = True

    new_config['description'] = input('Description of this model: ')

    completer.complete_extensions(('.yaml','.yml'))
    completer.linebuffer = 'configs/stable-diffusion/v1-inference.yaml'
    done = False
    while not done:
        new_config['config'] = input('Configuration file for this model: ')
        done = os.path.exists(new_config['config'])
    completer.complete_extensions(None)

    for field in ('width','height'):
        done = False
        while not done:
            try:
                completer.linebuffer = '512'
                value = int(input(f'Default image {field}: '))
                assert value >= 64 and value <= 2048
                new_config[field] = value
                done = True
            except:
                print('** Please enter a valid integer between 64 and 2048')

    if write_config_file(opt.conf, gen, model_name, new_config):
        gen.set_model(model_name)
def edit_config(model_name:str, gen, opt, completer):
    config = gen.model_cache.config

    if model_name not in config:
        print(f'** Unknown model {model_name}')
        return

    print(f'\n>> Editing model {model_name} from configuration file {opt.conf}')
    conf = config[model_name]
    new_config = {}

    completer.complete_extensions(('.yaml','.yml','.ckpt','.vae'))
    for field in ('description', 'weights', 'config', 'width','height'):
        completer.linebuffer = str(conf[field]) if field in conf else ''
        new_value = input(f'{field}: ')
        new_config[field] = int(new_value) if field in ('width','height') else new_value
    completer.complete_extensions(None)

    if write_config_file(opt.conf, gen, model_name, new_config, clobber=True):
        gen.set_model(model_name)
def write_config_file(conf_path, gen, model_name, new_config, clobber=False):
    op = 'modify' if clobber else 'import'
    print('\n>> New configuration:')
    print(yaml.dump({model_name:new_config}))
    if input(f'OK to {op} [n]? ') not in ('y','Y'):
        return False

    try:
        yaml_str = gen.model_cache.add_model(model_name, new_config, clobber)
    except AssertionError as e:
        print(f'** configuration failed: {str(e)}')
        return False

    tmpfile = os.path.join(os.path.dirname(conf_path),'new_config.tmp')
    with open(tmpfile, 'w') as outfile:
        outfile.write(yaml_str)
    os.rename(tmpfile,conf_path)
    return True
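A design note on the function above: `write_config_file` stages the updated YAML in a temporary file next to `models.yaml` and then moves it into place with `os.rename`. On POSIX filesystems a same-directory rename is atomic, so an interrupted write cannot leave a truncated configuration file behind.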
def do_postprocess (gen, opt, callback):
    file_path = opt.prompt      # treat the prompt as the file pathname
    if os.path.dirname(file_path) == '':   # basename given