Commit Graph

247 Commits

Lincoln Stein
5659d10778 remove unused function get_root() 2023-05-25 22:06:37 -04:00
Lincoln Stein
e56965ad76 documentation tweaks; fixed initialization in a couple more places 2023-05-25 21:10:00 -04:00
Lincoln Stein
2273b3a8c8 fix potential race condition in config system 2023-05-25 20:41:26 -04:00
Lincoln Stein
9110838fe4 Merge branch 'main' into release/make-web-dist-startable 2023-05-25 19:06:09 -04:00
Lincoln Stein
ca7b267326 raise error if syslogging requested and syslog lib not available 2023-05-25 10:10:46 -04:00
Lincoln Stein
88776fb2de get invokeai_configure working again 2023-05-25 09:39:45 -04:00
Lincoln Stein
b87f3043ae add logging configuration 2023-05-24 23:57:15 -04:00
psychedelicious
d14b02e93f feat(logger): fix logger type issues 2023-05-24 11:30:47 -04:00
psychedelicious
fb0b63c580 fix(nodes): fix seam painting
The problem was the same seed was getting used for the seam painting pass, causing the fried look.

Same issue as if you do img2img on a txt2img with the same seed/prompt.

Thanks to @hipsterusername for teaming up to debug this. We got pretty deep into the weeds.
2023-05-25 00:58:03 +10:00
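
The fix amounts to not reusing the generation seed for the seam pass. A minimal sketch of the idea, with a hypothetical `seam_pass_seed` helper (illustrative, not the actual InvokeAI code):

```
import torch

# Hypothetical helper: any deterministic derivation works, so long as the
# seam pass does not reuse the exact seed of the first denoising pass.
def seam_pass_seed(base_seed: int) -> int:
    return (base_seed + 1) % (2**32)

seam_generator = torch.Generator(device="cpu").manual_seed(seam_pass_seed(1234))
```
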
Lincoln Stein
d2dc1ed26f make InvokeAI package installable
This commit makes InvokeAI 3.0 installable via PyPI and the
installer script.

Main changes:

1. Move static web pages into `invokeai/frontend/web` and modify the
API to look for them there. This allows pip to copy the files into the
distribution directory so that the user no longer has to be in the repo
root to launch.

2. Update invoke.sh and invoke.bat to launch the new web application
properly. This also changes the wording for launching the CLI from
"generate images" to "explore the InvokeAI node system," since I would
not recommend using the CLI to generate images routinely.

3. Fix a bug in the checkpoint converter script that was identified
during testing.

4. Better error reporting when checkpoint converter fails.

5. Rebuild front end.
2023-05-22 17:51:47 -04:00
Lincoln Stein
7ea995149e fixes to env parsing, textual inversion & help text
- Make environment variable settings case InSenSiTive:
  INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
  environment variables will both set `max_loaded_models`

- Updated realesrgan to use new config system.

- Updated textual_inversion_training to use new config system.

- Discovered a race condition when InvokeAIAppConfig is created
  at module load time, which makes it impossible to customize
  or replace the help message produced with --help on the command
  line. To fix this, moved all instances of get_invokeai_config()
  from module load time to object initialization time. Makes code
  cleaner, too.

- Added `--from_file` argument to `invokeai-node-cli` and changed
  GitHub action to match. CI tests will hopefully work now.
2023-05-18 10:48:23 -04:00
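
A minimal sketch of the case-insensitive matching described above, assuming the `INVOKEAI_` prefix; the `env_setting` helper is hypothetical, not the actual InvokeAIAppConfig implementation:

```
import os
from typing import Optional

# Case-insensitive environment lookup (illustrative): INVOKEAI_MAX_LOADED_MODELS
# and InvokeAI_Max_Loaded_Models both resolve to the same setting.
def env_setting(name: str, prefix: str = "INVOKEAI_") -> Optional[str]:
    wanted = (prefix + name).upper()
    for key, value in os.environ.items():
        if key.upper() == wanted:
            return value
    return None

print(env_setting("max_loaded_models"))
```
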
Eugene
f9710dd6ed remove reference to legacy opt.hf_token, clean up whitespace in invokeai_configure 2023-05-17 20:39:00 -04:00
Lincoln Stein
8adff96e29 Merge branch 'main' into lstein/global-configuration 2023-05-17 14:37:09 -04:00
Lincoln Stein
7593dc19d6 complete several steps needed to make 3.0 installable
- invokeai-configure updated to work with new config system
- migrate invokeai.init to invokeai.yaml during configure
- replace legacy invokeai with invokeai-node-cli
- add ability to run an invocation directly from invokeai-node-cli command line
- update CI tests to work with new invokeai syntax
2023-05-17 14:13:27 -04:00
Lincoln Stein
b7c5a39685 make invokeai.yaml more hierarchical; fix list configuration bug 2023-05-17 12:19:19 -04:00
Lincoln Stein
eadfd239a8 update config script to work with new config system 2023-05-17 00:18:19 -04:00
Lincoln Stein
8d75e50435 partial port of invokeai-configure 2023-05-16 01:50:01 -04:00
Lincoln Stein
7ef0d2aa35 merge with main 2023-05-15 09:07:17 -04:00
Eugene Brodsky
c8a98a9a22 Merge branch 'main' into lstein/bugfix/compel 2023-05-14 14:43:18 -04:00
blessedcoolant
c4681774a5 Merge branch 'main' into logging-facelift 2023-05-15 02:08:29 +12:00
Damian Stewart
050add58d2 fix getting conditionings 2023-05-14 12:20:54 +02:00
Eugene
b72c9787a9 Revert "comment out customer_attention_context"
This reverts commit 8f8cd90787.

Due to NameError: name 'options' is not defined
2023-05-14 00:37:55 -04:00
Eugene Brodsky
2623941d91 Merge branch 'main' into lstein/bugfix/compel 2023-05-13 22:23:59 -04:00
blessedcoolant
026d3260b4 Add Heun Karras Scheduler 2023-05-14 11:45:08 +10:00
Lincoln Stein
1103ab2844 merge with main 2023-05-13 21:35:19 -04:00
blessedcoolant
691e1bf829 Make debug messages cyan/blue 2023-05-14 09:06:57 +12:00
Kent Keirsey
8f8cd90787 comment out customer_attention_context 2023-05-12 13:59:00 -04:00
blessedcoolant
d796ea7bec feat: Logging Improvements 2023-05-13 02:13:49 +12:00
Eugene Brodsky
af060188bd Merge branch 'main' into lstein/bugfix/compel 2023-05-12 08:22:18 -04:00
Kevin Turner
4caa1f19b2 fix(model manager): fix string formatting error on model checksum timer 2023-05-11 19:06:02 -07:00
Lincoln Stein
95d4bd3012 Merge branch 'lstein/bugfix/compel' of github.com:invoke-ai/InvokeAI into lstein/bugfix/compel 2023-05-11 21:13:29 -04:00
Lincoln Stein
037078c8ad make InvokeAIDiffuserComponent.custom_attention_control a classmethod 2023-05-11 21:13:18 -04:00
blessedcoolant
f7dc171c4f Rename default schedulers across the app 2023-05-12 03:44:20 +12:00
blessedcoolant
4b957edfec Add DDPM Scheduler 2023-05-12 03:18:34 +12:00
blessedcoolant
46ca7718d9 Add DEIS Scheduler 2023-05-12 03:10:30 +12:00
blessedcoolant
b928d7a6e6 Change scheduler names to be accurate
_a = Ancestral
_k = Karras
2023-05-12 02:59:43 +12:00
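
For illustration, a hypothetical slice of the naming convention (not the actual SCHEDULER_MAP contents):

```
# Hypothetical examples of the suffix convention (_a = Ancestral, _k = Karras):
SCHEDULER_LABELS = {
    "euler": "Euler",
    "euler_a": "Euler Ancestral",
    "euler_k": "Euler (Karras)",
    "dpmpp_2m": "DPM++ 2M",
    "dpmpp_2m_k": "DPM++ 2M (Karras)",
}
```
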
blessedcoolant
8a836247c8 Add DPMPP Single, Euler Karras and DPMPP2 Multi Karras Schedulers 2023-05-12 02:23:33 +12:00
blessedcoolant
9a383e456d Codesplit SCHEDULER_MAP for reuse 2023-05-12 00:40:03 +12:00
blessedcoolant
3ffff023b2 Add missing key to scheduler_map
It was breaking because the sampler was not being reset, so each entry needs its own key. Will simplify this later.
2023-05-12 00:08:50 +12:00
blessedcoolant
d1029138d2 Default to DDIM if scheduler is missing 2023-05-11 22:54:35 +12:00
blessedcoolant
06b5800d28 Add UniPC Scheduler 2023-05-11 22:43:18 +12:00
Eugene Brodsky
3baa230077 Merge branch 'main' into lstein/bugfix/compel 2023-05-11 00:50:45 -04:00
Eugene
9e594f9018 pad conditioning tensors to same length
fixes crash when prompt length is greater than 75 tokens
2023-05-11 00:34:15 -04:00
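
A minimal sketch of the padding fix, assuming (batch, tokens, dim) conditioning tensors; `pad_to_same_length` is a hypothetical name, not the actual compel integration code:

```
import torch
import torch.nn.functional as F

# Pad the shorter conditioning tensor along the token axis so the positive
# and negative conditionings can be stacked for classifier-free guidance.
def pad_to_same_length(cond: torch.Tensor, uncond: torch.Tensor):
    max_len = max(cond.shape[1], uncond.shape[1])
    cond = F.pad(cond, (0, 0, 0, max_len - cond.shape[1]))
    uncond = F.pad(uncond, (0, 0, 0, max_len - uncond.shape[1]))
    return cond, uncond

# A >75-token prompt yields a 154-token embedding; the 77-token negative
# conditioning is padded to match instead of crashing.
c, u = pad_to_same_length(torch.rand(1, 154, 768), torch.rand(1, 77, 768))
assert c.shape == u.shape
```
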
psychedelicious
49db6f4fac fix(nodes): fix trivial typing issues 2023-05-11 11:55:51 +10:00
psychedelicious
206e6b1730 feat(nodes): wip inpaint node 2023-05-11 11:55:51 +10:00
Lincoln Stein
5d5157fc65 make conditioning.py work with compel 1.1.5 2023-05-10 18:08:33 -04:00
Lincoln Stein
afd2e32092 Merge branch 'main' into lstein/global-configuration 2023-05-06 21:20:25 -04:00
Lincoln Stein
d866dcb3d2 close #3343 2023-05-04 20:30:59 -04:00
Lincoln Stein
e4196bbe5b adjust non-app modules to use new config system 2023-05-04 00:43:51 -04:00
Lincoln Stein
15ffb53e59 remove globals, args, generate and the legacy CLI 2023-05-03 23:36:51 -04:00
Lincoln Stein
90054ddf0d use InvokeAISettings for app-wide configuration 2023-05-03 22:30:30 -04:00
Lincoln Stein
974841926d logger is an interchangeable service 2023-04-29 10:48:50 -04:00
Lincoln Stein
8db20e0d95 rename log to logger throughout 2023-04-29 09:43:40 -04:00
Lincoln Stein
6b79e2b407 Merge branch 'main' into enhance/invokeai-logs
- resolve conflicts
- remove unused code identified by pyflakes
2023-04-28 10:09:46 -04:00
Lincoln Stein
31a904b903 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:28:45 +01:00
Lincoln Stein
4fa5c963a1 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:10:51 +01:00
Lincoln Stein
b164330e3c replaced remaining print statements with log.*() 2023-04-18 20:49:00 -04:00
Lincoln Stein
69433c9f68 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-18 19:21:53 -04:00
Lincoln Stein
bd8ffd36bf bump to diffusers 0.15.1, remove dangling module 2023-04-18 19:20:38 -04:00
Tim Cabbage
f6cdff2c5b [bug] #3218 HuggingFace API off when --no-internet set
https://github.com/invoke-ai/InvokeAI/issues/3218

The HuggingFace API will not be queried if the --no-internet flag is set.
2023-04-17 16:53:31 +02:00
Lincoln Stein
aab262d991 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-14 20:12:38 -04:00
Lincoln Stein
47b9910b48 update to diffusers 0.15 and fix code for name changes
- This is a port of #3184 to the main branch
2023-04-14 15:35:03 -04:00
Lincoln Stein
0b0e6fe448 convert remainder of print() to log.info() 2023-04-14 15:15:14 -04:00
Lincoln Stein
c132dbdefa change "ialog" to "log" 2023-04-11 18:48:20 -04:00
Lincoln Stein
f3081e7013 add module-level getLogger() method 2023-04-11 12:23:13 -04:00
Lincoln Stein
f904f14f9e add missing module-level methods 2023-04-11 11:10:43 -04:00
Lincoln Stein
8917a6d99b add logging support
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.

Examples:

   ### A critical error     (logging.CRITICAL)
   *** A non-fatal error    (logging.ERROR)
   ** A warning             (logging.WARNING)
   >> Informational message (logging.INFO)
      | Debugging message   (logging.DEBUG)

This style logs everything through a single logging object and is
identical to using Python's `logging` module. The commonly-used
module-level logging functions are implemented as simple pass-thrus
to logging:

  import logging
  import invokeai.backend.util.logging as ialog

  ialog.debug('this is a debugging message')
  ialog.info('this is an informational message')
  ialog.log(logging.CRITICAL, 'get out of dodge')
  ialog.disable(level=logging.INFO)
  ialog.basicConfig(filename='/var/log/invokeai.log')

Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
module's use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add the additional
message decorations.

For more control, the logging module's object-oriented logging style
is also supported. The API is identical to the vanilla logging
usage. In fact, the only thing that has changed is that the
getLogger() method adds a custom formatter to the log messages.

 import logging
 from invokeai.backend.util.logging import InvokeAILogger

 logger = InvokeAILogger.getLogger(__name__)
 fh = logging.FileHandler('/var/invokeai.log')
 logger.addHandler(fh)
 logger.critical('this will be logged to both the console and the log file')
2023-04-11 10:46:38 -04:00
Lincoln Stein
5a4765046e add logging support
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.

Examples:

   ### A critical error     (logging.CRITICAL)
   *** A non-fatal error    (logging.ERROR)
   ** A warning             (logging.WARNING)
   >> Informational message (logging.INFO)
      | Debugging message   (logging.DEBUG)
2023-04-11 09:33:28 -04:00
AbdBarho
de189f2db6 Increase chunk size when computing SHAs 2023-04-09 21:53:59 +02:00
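
Hashing in larger blocks reduces per-call overhead on multi-gigabyte model files. A sketch of chunked hashing; the 1 MiB chunk size is an illustrative assumption, not necessarily the value this commit chose:

```
import hashlib

# Chunked SHA-256 of a large file; the buffer size here is illustrative.
def sha256_of(path: str, chunk_size: int = 2**20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```
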
Lincoln Stein
8334757af9 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-07 18:55:54 -04:00
Lincoln Stein
d1b2b99226 Merge branch 'main' into bugfix/remove-autoimport-dead-code 2023-04-07 09:59:58 -04:00
Lincoln Stein
4c339dd4b0 refactor get_submodels() into individual methods 2023-04-06 17:08:23 -04:00
Lincoln Stein
d44151d6ff add a new method to model_manager that retrieves individual pipeline parts
- New method is ModelManager.get_sub_model(model_name:str,model_part:SDModelComponent)

To use:

```
from invokeai.backend import ModelManager, SDModelComponent as sdmc
manager = ModelManager('/path/to/models.yaml')
vae = manager.get_sub_model('stable-diffusion-1.5', sdmc.vae)
```
2023-04-05 17:25:42 -04:00
Lincoln Stein
1f89cf3343 remove vestiges of non-functional autoimport code for legacy checkpoints
- Closes #3075
2023-03-31 04:27:03 -04:00
Lincoln Stein
b9df9e26f2 Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-30 07:51:23 -04:00
Lincoln Stein
e11c1d66ab handle multiple tokens and embeddings in single file 2023-03-29 22:05:06 -04:00
Lincoln Stein
3c4b6d5735 Merge branch 'main' into enhance/heuristic-import-improvements 2023-03-29 16:54:43 -04:00
Lincoln Stein
9a7580dedd fix bugs in online ckpt conversion of 2.0 models
This commit fixes bugs related to the on-the-fly conversion and loading of
legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
  on-the-fly using --ckpt_convert, generation would crash with a
  precision incompatibility error.
2023-03-28 00:17:20 -04:00
Lincoln Stein
fe5d9ad171 improve importation and conversion of legacy checkpoint files
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

To improve the user experience, the model manager's
`heuristic_import()` method has been enhanced as follows:

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file is provided, the method looks for a .yaml file in the
same directory as the model which bears the same basename. e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
   The yaml file is then used as the configuration file for
   importation and conversion.

3. If no such file is found, then the method opens up the checkpoint
   and probes it to determine whether it is V1, V1-inpaint or V2.
   If it is a V1 format, then the appropriate v1-inference.yaml config
   file is used. Unfortunately there are two V2 variants that cannot be
   distinguished by introspection.

4. If the probe algorithm is unable to determine the model type, then its
   last-ditch effort is to execute an optional callback function that can
   be provided by the caller. This callback, named `config_file_callback`,
   receives the path to the legacy checkpoint and returns the path to the
   config file to use. The CLI uses this to put up a multiple-choice prompt
   to the user. The WebUI **could** use this to prompt the user to choose
   from a radio-button selection.

5. If the config file cannot be determined, then the import is abandoned.

The user can attach a custom VAE to the imported and converted model
by copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted models file, so the ".pt" file
can be deleted after conversion.

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.
2023-03-27 11:27:45 -04:00
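
A sketch of the sidecar-file convention described above; `find_sidecar` is a hypothetical helper, not the actual `heuristic_import()` code:

```
from pathlib import Path
from typing import Optional

def find_sidecar(ckpt: Path, suffixes) -> Optional[Path]:
    # Strip the checkpoint extension to get the shared basename.
    base = ckpt.name
    for ext in (".safetensors", ".ckpt", ".pt"):
        if base.endswith(ext):
            base = base[: -len(ext)]
            break
    # Return the first sidecar that exists alongside the checkpoint.
    for suffix in suffixes:
        candidate = ckpt.with_name(base + suffix)
        if candidate.exists():
            return candidate
    return None

ckpt = Path("my-new-model.safetensors")
config = find_sidecar(ckpt, (".yaml",))                                 # step 2
vae = find_sidecar(ckpt, (".vae.pt", ".vae.ckpt", ".vae.safetensors"))  # custom VAE
```
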
Lincoln Stein
abe4dc8ac1 Add support for yet another textual inversion embedding format
- This PR adds support for embedding files that contain a single key
  "emb_params". The only example I know of this format is the
  "EasyNegative" embedding on HuggingFace, but there are certainly
  others.

- This PR also adds support for loading embedding files that have been
  saved in safetensors format.

- It also cleans up the code so that the logic of probing for and
  selecting the right format parser is clear.
2023-03-27 09:39:03 -04:00
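
A hedged sketch of the format probing described above; the key names come from the formats the commit mentions, but the function itself is hypothetical:

```
import torch

def extract_embedding(state_dict: dict) -> torch.Tensor:
    # Single-key format, e.g. the "EasyNegative" embedding.
    if "emb_params" in state_dict:
        return state_dict["emb_params"]
    # Classic textual-inversion format: a token -> tensor mapping.
    if "string_to_param" in state_dict:
        return next(iter(state_dict["string_to_param"].values()))
    raise ValueError("unrecognized embedding file format")
```
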
psychedelicious
5fe38f7c88 fix(backend): simple typing fixes 2023-03-26 17:07:03 +11:00
Lincoln Stein
dac3c158a5 Merge branch 'main' into feat/preview_predicted_x0
- resolve conflicts with generate.py invocation
- remove unused symbols that pyflakes complains about
- add **untested** code for passing the intermediate latent image to the
  step callback in the expected format.
2023-03-25 16:07:18 -04:00
Lincoln Stein
501924bc60 do not reexport PipelineIntermediateState 2023-03-25 13:57:09 -04:00
Lincoln Stein
d117251747 make step_callback work again in generate() call
This PR fixes #2951 and restores the step_callback argument in the
refactored generate() method. Note that this issue states that
"something is still wrong because steps and step are zero." However,
I think this is confusion over the call signature of the callback, which
since the diffusers merge has been `callback(state: PipelineIntermediateState)`.

This is the test script that I used to determine that `step` is being passed
correctly:

```
from pathlib import Path
from invokeai.backend import ModelManager, PipelineIntermediateState
from invokeai.backend.globals import global_config_dir
from invokeai.backend.generator import Txt2Img

def my_callback(state:PipelineIntermediateState, total_steps:int):
    print(f'callback(step={state.step}/{total_steps})')

def main():
    manager = ModelManager(Path(global_config_dir()) / "models.yaml")
    model = manager.get_model('stable-diffusion-1.5')
    print('=== TXT2IMG TEST ===')
    steps=30
    output = next(Txt2Img(model).generate(prompt='banana sushi',
                                          iterations=None,
                                          steps=steps,
                                          step_callback=lambda x: my_callback(x,steps)
                                          )
                  )
    print(f'image={output.image}, seed={output.seed}, steps={output.params.steps}')

if __name__ == '__main__':
    main()
```
2023-03-25 13:57:09 -04:00
Lincoln Stein
5ac0316c62 fix issue with embeddings being loaded twice
- as noted by JPPhoto
2023-03-25 10:45:03 -04:00
Lincoln Stein
9ceec40b76 Merge branch 'main' into feat/use-custom-vaes 2023-03-24 17:45:02 -04:00
Lincoln Stein
85b2822f5e Merge branch 'main' into security/scan-ckpt-files-main 2023-03-24 08:39:59 -04:00
Lincoln Stein
6e7dbf99f3 Merge branch 'main' into bugfix/dreambooth_ema 2023-03-23 23:24:15 -04:00
Lincoln Stein
deeff36e16 Merge branch 'main' into security/scan-ckpt-files-main 2023-03-23 23:20:52 -04:00
Lincoln Stein
92721a1d45 do not reexport PipelineIntermediateState 2023-03-24 09:32:47 +11:00
Lincoln Stein
f329fddab9 make step_callback work again in generate() call
This PR fixes #2951 and restores the step_callback argument in the
refactored generate() method. Note that this issue states that
"something is still wrong because steps and step are zero." However,
I think this is confusion over the call signature of the callback, which
since the diffusers merge has been `callback(state: PipelineIntermediateState)`.

This is the test script that I used to determine that `step` is being passed
correctly:

```
from pathlib import Path
from invokeai.backend import ModelManager, PipelineIntermediateState
from invokeai.backend.globals import global_config_dir
from invokeai.backend.generator import Txt2Img

def my_callback(state:PipelineIntermediateState, total_steps:int):
    print(f'callback(step={state.step}/{total_steps})')

def main():
    manager = ModelManager(Path(global_config_dir()) / "models.yaml")
    model = manager.get_model('stable-diffusion-1.5')
    print('=== TXT2IMG TEST ===')
    steps=30
    output = next(Txt2Img(model).generate(prompt='banana sushi',
                                          iterations=None,
                                          steps=steps,
                                          step_callback=lambda x: my_callback(x,steps)
                                          )
                  )
    print(f'image={output.image}, seed={output.seed}, steps={output.params.steps}')

if __name__ == '__main__':
    main()
```
2023-03-24 09:32:47 +11:00
Lincoln Stein
f751dcd245 load embeddings after a ckpt legacy model is converted to diffusers
- Fixes #2954
- Also improves diagnostic reporting during embedding loading.
2023-03-23 15:21:58 -04:00
Lincoln Stein
a97107bd90 handle VAEs that do not have a "state_dict" key 2023-03-23 15:11:29 -04:00
Lincoln Stein
b2ce45a417 re-implement model scanning when loading legacy checkpoint files
- This PR turns on pickle scanning before a legacy checkpoint file
  is loaded from disk within the checkpoint_to_diffusers module.

- Also miscellaneous diagnostic message cleanup.
2023-03-23 15:03:30 -04:00
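
A sketch of pre-load scanning using the picklescan package; whether this commit uses that exact library and call site is an assumption, so treat the snippet as illustrative:

```
from picklescan.scanner import scan_file_path

# Scan the legacy checkpoint's pickle payload before ever calling torch.load().
result = scan_file_path("/path/to/legacy-model.ckpt")
if result.infected_files != 0:
    raise RuntimeError("checkpoint failed the pickle scan; refusing to load it")
```
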
Lincoln Stein
4e0b5d85ba convert custom VAEs into diffusers
- When a legacy checkpoint model is loaded via --convert_ckpt and its
  models.yaml stanza refers to a custom VAE path (using the 'vae:'
  key), the custom VAE will be converted and used within the diffusers
  model. Otherwise the VAE contained within the legacy model will be
  used.

- Note that the heuristic_import() method, which imports arbitrary
  legacy files on disk and URLs, will continue to default to the
  standard stabilityai/sd-vae-ft-mse VAE. This can be fixed after
  the fact by editing the models.yaml stanza using the Web or CLI
  UIs.

- Fixes issue #2917
2023-03-23 13:14:19 -04:00
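
For reference, a hypothetical models.yaml stanza with the 'vae:' key described above; all paths and names are placeholders:

```
stable-diffusion-1.5:
  description: SD-1.5 checkpoint with a custom VAE
  format: ckpt
  weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/my-custom-vae.ckpt
```
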
Lincoln Stein
a958ae5e29 Merge branch 'main' into feat/use-custom-vaes 2023-03-23 10:32:56 -04:00
psychedelicious
b194180f76 feat(backend): make fast latents method static 2023-03-16 20:03:08 +11:00
psychedelicious
fb30b7d17a feat(backend): add image_to_dataURL util 2023-03-16 20:03:08 +11:00
Jonathan
44f489d581 Merge branch 'main' into fix-seampaint 2023-03-14 06:19:25 -05:00
blessedcoolant
0a761d7c43 fix(inpaint): Seam painting being broken 2023-03-15 00:00:08 +13:00