Commit Graph

16 Commits

Author SHA1 Message Date
Martin Kristiansen
537ae2f901 Resolving merge conflicts for flake8 2023-08-18 15:52:04 +10:00
Lincoln Stein
15ffb53e59 remove globals, args, generate and the legacy CLI 2023-05-03 23:36:51 -04:00
Lincoln Stein
8db20e0d95 rename log to logger throughout 2023-04-29 09:43:40 -04:00
Lincoln Stein
c132dbdefa change "ialog" to "log" 2023-04-11 18:48:20 -04:00
Lincoln Stein
8917a6d99b add logging support
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.

Examples:

   ### A critical error     (logging.CRITICAL)
   *** A non-fatal error    (logging.ERROR)
   ** A warning             (logging.WARNING)
   >> Informational message (logging.INFO)
      | Debugging message   (logging.DEBUG)

This style logs everything through a single logging object and is
identical to using Python's `logging` module. The commonly-used
module-level logging functions are implemented as simple pass-throughs
to logging:

  import invokeai.backend.util.logging as ialog

  ialog.debug('this is a debugging message')
  ialog.info('this is an informational message')
  ialog.log(logging.CRITICAL, 'get out of dodge')
  ialog.disable(level=logging.INFO)
  ialog.basicConfig(filename='/var/log/invokeai.log')

Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
modules' use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add the additional
message decorations.
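
For example, a minimal sketch of this separation (assuming the
module-level pass-throughs include `error()`):

  import logging
  import invokeai.backend.util.logging as ialog

  logging.basicConfig(level=logging.INFO)
  logging.error('plain message via the root logger')  # no decorations
  ialog.error("decorated message via the 'invokeai' logger")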

For more control, the logging module's object-oriented logging style
is also supported. The API is identical to the vanilla logging
usage. In fact, the only thing that has changed is that the
getLogger() method adds a custom formatter to the log messages.

 import logging
 from invokeai.backend.util.logging import InvokeAILogger

 logger = InvokeAILogger.getLogger(__name__)
 fh = logging.FileHandler('/var/invokeai.log')
 logger.addHandler(fh)
 logger.critical('this will be logged to both the console and the log file')
2023-04-11 10:46:38 -04:00
blessedcoolant
45b84fb4bb
Merge branch 'main' into bugfix/remove-autoimport-dead-code 2023-04-07 17:07:25 +12:00
Thomas
7268131f57
change where !replay looks for its infile
!fetch puts its output file into the output directory; it may be beneficial to have !replay look in the output directory as well.
2023-04-06 08:14:11 -04:00
Lincoln Stein
1f89cf3343 remove vestiges of non-functional autoimport code for legacy checkpoints
- Closes #3075
2023-03-31 04:27:03 -04:00
Lincoln Stein
fe5d9ad171 improve importation and conversion of legacy checkpoint files
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

To improve the user experience, the model manager's
`heuristic_import()` method has been enhanced as follows (a sketch of
the resolution order appears after the list):

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file is provided, the method looks for a .yaml file in
the same directory as the model that bears the same basename, e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
   The yaml file is then used as the configuration file for
   importation and conversion.

3. If no such file is found, then the method opens up the checkpoint
   and probes it to determine whether it is V1, V1-inpaint or V2.
   If it is a V1 format, then the appropriate v1-inference.yaml config
   file is used. Unfortunately there are two V2 variants that cannot be
   distinguished by introspection.

4. If the probe algorithm is unable to determine the model type, then its
   last-ditch effort is to execute an optional callback function that can
   be provided by the caller. This callback, named `config_file_callback`,
   receives the path to the legacy checkpoint and returns the path to the
   config file to use. The CLI uses this to put up a multiple-choice
   prompt to the user. The WebUI **could** use this to prompt the user to
   choose from a radio-button selection.

5. If the config file cannot be determined, then the import is abandoned.
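
A minimal sketch of this resolution order (the probe helper and config
paths below are illustrative stand-ins, not InvokeAI's actual code):

```
from pathlib import Path
from typing import Callable, Optional

def probe_model_type(model_path: Path) -> Optional[str]:
    """Stand-in for the real checkpoint probe: returns 'v1' or
    'v1-inpaint' when identifiable, None for the two V2 variants that
    introspection cannot tell apart."""
    return None

def resolve_config(
    model_path: Path,
    config_file: Optional[Path] = None,
    config_file_callback: Optional[Callable[[Path], Optional[Path]]] = None,
) -> Optional[Path]:
    # 1. an explicitly supplied config file wins
    if config_file:
        return config_file
    # 2. a .yaml file sharing the model's directory and basename
    sibling = model_path.with_suffix('.yaml')
    if sibling.exists():
        return sibling
    # 3. probe the checkpoint; identifiable V1 formats map to a config
    kind = probe_model_type(model_path)
    if kind is not None:
        return Path(f'configs/{kind}-inference.yaml')  # illustrative path
    # 4. last-ditch: ask the caller (e.g. the CLI's multiple-choice prompt)
    if config_file_callback:
        return config_file_callback(model_path)
    # 5. the config cannot be determined; the import is abandoned
    return None
```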

The user can attach a custom VAE to the imported and converted model
by copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted model files, so the ".pt" file
can be deleted after conversion.
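
A minimal sketch of this sibling-VAE lookup (the helper name is
illustrative; the suffix list comes from the text above):

```
from pathlib import Path
from typing import Optional

VAE_SUFFIXES = ('.vae.pt', '.vae.ckpt', '.vae.safetensors')

def find_custom_vae(model_path: Path) -> Optional[Path]:
    # e.g. my-new-model.safetensors -> my-new-model.vae.pt
    for suffix in VAE_SUFFIXES:
        candidate = model_path.with_name(model_path.stem + suffix)
        if candidate.exists():
            return candidate
    return None
```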

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.
2023-03-27 11:27:45 -04:00
Lincoln Stein
4e0b5d85ba convert custom VAEs into diffusers
- When a legacy checkpoint model is loaded via --convert_ckpt and its
  models.yaml stanza refers to a custom VAE path (using the 'vae:'
  key), the custom VAE will be converted and used within the diffusers
  model. Otherwise the VAE contained within the legacy model will be
  used.

- Note that the heuristic_import() method, which imports arbitrary
  legacy files on disk and URLs, will continue to default to the
  standard stabilityai/sd-vae-ft-mse VAE. This can be fixed after
  the fact by editing the models.yaml stanza (sketched below) using
  the Web or CLI UIs.

- Fixes issue #2917
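
A minimal sketch of a models.yaml stanza carrying a custom 'vae:' key
and how a loader might honor it (the stanza contents and helper logic
are hypothetical):

```
import yaml  # requires PyYAML

# hypothetical stanza; real models.yaml entries carry more keys
stanza = yaml.safe_load("""
my-model:
  config: configs/stable-diffusion/v1-inference.yaml
  weights: models/ldm/stable-diffusion-v1/my-model.ckpt
  vae: models/vae/my-custom-vae.ckpt
""")

vae_path = stanza['my-model'].get('vae')
if vae_path:
    print(f'converting custom VAE at {vae_path} into diffusers format')
else:
    print('using the VAE bundled inside the legacy checkpoint')
```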
2023-03-23 13:14:19 -04:00
Jonathan
c3ff9e6be8
Fixed startup issues with the web UI. (#2876) 2023-03-06 18:29:28 -05:00
Lincoln Stein
2f9dcd7906 support both epsilon and v-prediction v2 inference
There are actually two Stable Diffusion v2 legacy checkpoint
configurations:

1) "epsilon" prediction type for Stable Diffusion v2 Base
2) "v-prediction" type for Stable Diffusion v2-768

This commit adds the configuration file needed for epsilon prediction
type models as well as the UI that prompts the user to select the
appropriate configuration file when the code can't do so
automatically.
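
A minimal sketch of the mapping (the config filenames follow upstream
Stable Diffusion v2 naming and are assumptions here):

```
V2_CONFIGS = {
    'epsilon': 'v2-inference.yaml',         # Stable Diffusion v2 Base
    'v_prediction': 'v2-inference-v.yaml',  # Stable Diffusion v2-768
}

def config_for_v2(prediction_type: str) -> str:
    config = V2_CONFIGS.get(prediction_type)
    if config is None:
        # introspection failed; fall back to prompting the user
        raise ValueError(f'cannot determine config for {prediction_type!r}')
    return config
```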
2023-03-05 22:51:40 -05:00
Lincoln Stein
bdc7b8b75a fix broken scripts
This PR fixes the following scripts:

1) Scripts that can be executed within the repo's scripts directory.
   Note that these are for development testing and are not intended
   to be exposed to the user.

   configure_invokeai.py - configuration
   dream.py              - the legacy CLI
   images2prompt.py      - legacy "dream prompt" retriever
   invoke-new.py         - new nodes-based CLI
   invoke.py             - the legacy CLI under another name
   make_models_markdown_table.py - a utility used during the release/doc process
   pypi_helper.py        - another utility used during the release process
   sd-metadata.py        - retrieve JSON-formatted metadata from a PNG file

2) Scripts that are installed by pip install. They get placed into the venv's
   PATH and are intended to be the official entry points (see the sketch
   after this list):

   invokeai-node-cli      - new nodes-based CLI
   invokeai-node-web      - new nodes-based web server
   invokeai               - legacy CLI
   invokeai-configure     - install time configuration script
   invokeai-merge         - model merging script
   invokeai-ti            - textual inversion script
   invokeai-model-install - model installer
   invokeai-update        - update script
   invokeai-metadata"     - retrieve JSON-formatted metadata from PNG files
2023-03-03 20:19:37 -05:00
Lincoln Stein
955900507f fix issue with invokeai.version 2023-03-03 01:34:38 -05:00
Lincoln Stein
60a98cacef all vestiges of ldm.invoke removed 2023-03-03 01:02:00 -05:00
Lincoln Stein
6a990565ff all files migrated; tweaks needed 2023-03-03 00:02:15 -05:00