Commit Graph

3428 Commits

Author SHA1 Message Date
Sergey Krashevich
ee2c0ab51b
translationBot(ui): update translation (Russian)
Currently translated at 81.4% (382 of 469 strings)

translationBot(ui): update translation (Russian)

Currently translated at 81.6% (382 of 468 strings)

Co-authored-by: Sergey Krashevich <svk@svk.su>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-02-22 21:25:08 +01:00
Riccardo Giovanetti
ca5f129902
translationBot(ui): update translation (Italian)
Currently translated at 100.0% (469 of 469 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (468 of 468 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-02-22 21:25:08 +01:00
Lincoln Stein
cf2eca7c60
Add new console frontend to initial model selection, and other model mgmt improvements (#2644)
## Major Changes
The invokeai-configure script has now been refactored. The work of
selecting and downloading initial models at install time is now done by
a script named `invokeai-model-install` (module name is
`ldm.invoke.config.model_install`).

Screen 1 - adjust startup options:

![screenshot1](https://user-images.githubusercontent.com/111189/219976468-b642df78-a6fe-44a2-bf97-54ccf34e9656.png)

Screen 2 - select SD models:

![screenshot2](https://user-images.githubusercontent.com/111189/219976494-13c7d257-cc8d-4dae-9521-3b352aab010b.png)

The calling arguments for `invokeai-configure` have not changed, so
nothing should break. After initializing the root directory, the script
calls `invokeai-model-install` to let the user select the starting
models to install.

`invokeai-model-install` puts up a console GUI with checkboxes to
indicate which models to install. It respects the `--default_only` and
`--yes` arguments so that CI will continue to work. Here are the various
effects you can achieve:

`invokeai-configure`
       This will use a console-based UI to initialize invokeai.init,
       download support models, and choose and download SD models

`invokeai-configure --yes`
       Without activating the GUI, populate invokeai.init with default
       values, download support models, and download the "recommended"
       SD models

`invokeai-configure --default_only`
       Activate the GUI for changing init options, but don't show the SD
       download form, and automatically download the default SD model
       (currently SD-1.5)

`invokeai-model-install`
       Select and install models. This can be used to download arbitrary
       models from the Internet, install HuggingFace models using their
       repo_id, or watch a directory for models to load at startup time

`invokeai-model-install --yes`
       Import the recommended SD models without a GUI

`invokeai-model-install --default_only`
       As above, but only import the default model

## Flexible Model Imports

The console GUI allows the user to import arbitrary models into InvokeAI
using:

1. A HuggingFace Repo_id
2. A URL (http/https/ftp) that points to a checkpoint or safetensors file
3. A local path on disk pointing to a checkpoint/safetensors file or diffusers directory
4. A directory to be scanned for all checkpoint/safetensors files to be imported

The UI allows the user to specify multiple models to bulk import. The
user can specify whether to import the ckpt/safetensors as-is, or
convert to `diffusers`. The user can also designate a directory to be
scanned at startup time for checkpoint/safetensors files.

## Backend Changes

To support the model selection GUI, this PR introduces a new method in
`ldm.invoke.model_manager` called `heuristic_import()`. This accepts a
string-like object which can be a repo_id, URL, local path, or directory.
It will figure out what the object is and import it. It interrogates the
contents of checkpoint and safetensors files to determine what type of
SD model they are -- v1.x, v2.x, or v1.x inpainting.
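
A minimal sketch of what such a heuristic dispatcher looks like (the function below is illustrative, not the actual `model_manager` code; the `model_index.json` probe is an assumption about how a diffusers directory might be detected):

```python
from pathlib import Path
from urllib.parse import urlparse

def classify_model_source(thing: str) -> str:
    """Guess what kind of model source `thing` is and report the import route.

    Illustrative only -- the real heuristic_import() also downloads files,
    probes checkpoint/safetensors contents to classify v1.x / v2.x /
    v1.x-inpainting, and registers the result in models.yaml.
    """
    path = Path(thing)
    if urlparse(thing).scheme in ("http", "https", "ftp"):
        return "download checkpoint/safetensors from URL"
    if path.is_dir():
        if (path / "model_index.json").exists():
            return "import as a diffusers directory"
        return "scan directory for checkpoint/safetensors files"
    if path.is_file() and path.suffix in (".ckpt", ".safetensors"):
        return "import local checkpoint/safetensors file"
    # Anything else is treated as a HuggingFace repo_id, e.g. "runwayml/stable-diffusion-v1-5"
    return "import from HuggingFace by repo_id"
```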

## Installer

I am attaching a zip file of the installer if you would like to try the
process from end to end.

[InvokeAI-installer-v2.3.0.zip](https://github.com/invoke-ai/InvokeAI/files/10785474/InvokeAI-installer-v2.3.0.zip)
2023-02-22 15:24:59 -05:00
Lincoln Stein
16aea1e869
Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-22 14:22:52 -05:00
blessedcoolant
75ff6cd3c3
Refactor prompting code paths to use the compel library (#2729)
Motivation: I want to be doing future prompting development work in the
`compel` lib (https://github.com/damian0815/compel), which is currently
pip-installable with `pip install compel`.
2023-02-23 08:09:52 +13:00
blessedcoolant
7b7b31637c
Merge branch 'main' into refactor_use_compel 2023-02-23 07:43:30 +13:00
blessedcoolant
fca564c18a
ui: fix use prompt when prompt has colon (#2760)
- Fixes wonky "Use Prompt" behavior when the prompt contains a colon
2023-02-23 07:41:38 +13:00
Lincoln Stein
eb8d87e185
Merge branch 'main' into refactor_use_compel 2023-02-22 12:34:16 -05:00
Lincoln Stein
dbadb1d7b5
Merge branch 'main' into fix/ui/prompt-metadata 2023-02-22 12:33:54 -05:00
Lincoln Stein
a4afb69615
fix crash in textual inversion with "num_samples=0" error (#2762)
- At some point pathlib was added to the list of imported modules and
this broke the os.path code that assembled the sample data set.

- Now fixed by replacing os.path calls with Path methods
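
A hedged sketch of the kind of replacement described (names are hypothetical; the actual textual inversion training script differs):

```python
from pathlib import Path

def list_sample_images(data_root: str) -> list[Path]:
    # Build the sample data set with Path methods instead of os.path string
    # juggling, so the listing can no longer silently come back empty and
    # trigger the "num_samples=0" crash.
    root = Path(data_root)
    exts = {".png", ".jpg", ".jpeg"}
    return sorted(p for p in root.iterdir() if p.is_file() and p.suffix.lower() in exts)
```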
2023-02-22 12:31:28 -05:00
Lincoln Stein
8b7925edf3 fix crash in textual inversion with "num_samples=0" error
- At some point pathlib was added to the list of imported modules and this
broke the os.path code that assembled the sample data set.

- Now fixed by replacing os.path calls with Path methods
2023-02-22 11:29:30 -05:00
Lincoln Stein
168a51c5a6 fix textual inversion output directory path
- The configure script was misnaming the directory for text-inversion-output.
- Now fixed.
2023-02-22 10:06:04 -05:00
Damian Stewart
3f5d8c3e44 remove inaccurate docstring 2023-02-22 13:18:39 +01:00
Lincoln Stein
609bb19573 fixes to resizing and init file editing
- Disable responsive resizing below starting dimensions (you can make the
  form larger, but not smaller than it was at startup)

- Fix bug that caused multiple --ckpt_convert entries (and similar) to
  be written to the init file.
2023-02-22 07:05:51 -05:00
psychedelicious
d561d6d3dd chore(ui): build frontend 2023-02-22 22:09:11 +11:00
psychedelicious
7ffaa17551 fix(ui): use prompt bug when prompt has colon
This bug is related to the format in which we stored prompts for some time: an array of weighted subprompts.

This caused some strife when recalling a prompt if the prompt had colons in it, due to our recently introduced handling of negative prompts.

Currently there is no need to store a prompt as anything other than a string, so we revert to doing that.

Compatibility with structured prompts is maintained via a helper hook.
2023-02-22 20:33:58 +11:00
Damian Stewart
97eac58a50 fix blend tokenization reporting; fix LDM checkpoint support 2023-02-22 10:29:42 +01:00
Damian Stewart
cedbe8fcd7 fix .blend 2023-02-22 09:04:23 +01:00
Jonathan
a461875abd
Merge branch 'main' into refactor_use_compel 2023-02-21 21:14:28 -06:00
Lincoln Stein
ab018ccdfe
Fallback to using filename to trigger embeddings (#2752)
Lots of earlier embeddings use a common trigger token such as * or the
Hebrew letter shin. Previously, the textual inversion manager would
refuse to load the second and subsequent embeddings that used a
previously-claimed trigger. Now, when this case is encountered, the
trigger token is replaced by <filename> and the user is informed of the
fact.
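
A minimal sketch of the fallback behavior described above (hypothetical helper; the real textual inversion manager code is more involved):

```python
from pathlib import Path

def resolve_trigger(trigger: str, source_file: str, claimed: set) -> str:
    # If another embedding already claimed this trigger (e.g. "*"), fall back
    # to a <filename>-style trigger and tell the user which one was chosen.
    if trigger in claimed:
        fallback = f"<{Path(source_file).stem}>"
        print(f"Trigger {trigger!r} already in use; loading this embedding as {fallback!r}")
        trigger = fallback
    claimed.add(trigger)
    return trigger
```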
2023-02-21 21:58:11 -05:00
Lincoln Stein
d41dcdfc46 move trigger_str registration into try block 2023-02-21 21:38:42 -05:00
Lincoln Stein
972aecc4c5 fix responsive resizing 2023-02-21 21:33:44 -05:00
Lincoln Stein
6b7be4e5dc remove dangling debug statement 2023-02-21 20:09:34 -05:00
Lincoln Stein
9b1a7b553f add "hit any key to exit" pause at end of install 2023-02-21 20:03:08 -05:00
Lincoln Stein
7f99efc5df require diffusers 0.13 2023-02-21 17:28:07 -05:00
Lincoln Stein
0a6d8b4855
Merge branch 'main' into refactor_use_compel 2023-02-21 17:19:48 -05:00
Lincoln Stein
5e41811fb5 move trigger text munging to upper level per review 2023-02-21 17:04:42 -05:00
Lincoln Stein
5a4967582e reformat with black and isort 2023-02-21 14:12:57 -05:00
Jonathan
1d0ba4a1a7
Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 13:12:34 -06:00
Lincoln Stein
4878c7a2d5
Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-21 14:09:38 -05:00
blessedcoolant
9e5aa645a7
Fix crashing when using 2.1 model (#2757)
We now require more free memory to avoid attention slicing. 17.5% free
was not sufficient headroom in all cases, so now we require 25%.
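
A sketch of the kind of check implied (the 25% figure comes from the PR text; the function name and default below are illustrative, not the shipped code):

```python
import torch

def should_slice_attention(device: torch.device, required_free_fraction: float = 0.25) -> bool:
    # Enable attention slicing unless at least 25% of the device's VRAM is free.
    if device.type != "cuda":
        return False
    free_bytes, total_bytes = torch.cuda.mem_get_info(device)
    return (free_bytes / total_bytes) < required_free_fraction
```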
2023-02-22 08:03:51 +13:00
Lincoln Stein
d01e23973e fix problem that was causing CI failures 2023-02-21 13:44:32 -05:00
Jonathan
71bbd78574
Fix crashing when using 2.1 model
We now require more free memory to avoid attention slicing. 17.5% free was not sufficient headroom, so now we require 25%.
2023-02-21 12:35:03 -06:00
Lincoln Stein
fff41a7349 merged with main 2023-02-21 12:20:59 -05:00
blessedcoolant
d5f524a156
Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-22 06:13:41 +13:00
Jonathan
3ab9d02883
Fixed embiggening crash due to clear_cuda_cache not being passed on and bad cuda stats initialization. (#2756) 2023-02-22 06:12:24 +13:00
Lincoln Stein
27a2e27c3a fix crash when installed models < number columns
1. Fixed display crash when the number of installed models is less than
   the number of desired columns to display them.

2. Added --ckpt_convert option to init file.
2023-02-21 12:09:34 -05:00
Jonathan
da04b11a31
Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 10:52:13 -06:00
Lincoln Stein
3795b40f63 implemented the following fixes:
Enhancements:
1. Directory-based imports will not attempt to import components of diffusers models.
2. Diffusers directory imports are now supported.
3. Files that end with .ckpt that are not Stable Diffusion models (such as VAEs) are
   skipped during import.

Bugs identified in Psychedelicious's review:
1. The invokeai-configure form now tracks the current contents of `invokeai.init` correctly.
2. The autoencoders are no longer treated like installable models, but instead are
   mandatory support models. They will no longer appear in `models.yaml`

Bugs identified in Damian's review:
1. If invokeai-model-install is started before the root directory is initialized, it will
   call invokeai-configure to fix the matter.
2. Fix bug that was causing empty `models.yaml` under certain conditions.
3. Made import textbox smaller
4. Hide the "convert to diffusers" options if there is nothing to import.
2023-02-21 11:47:41 -05:00
Lincoln Stein
9436f2e3d1 alphabetize trigger strings 2023-02-21 06:23:34 -05:00
Lincoln Stein
7fadd5e5c4
performance: low-memory option for calculating guidance sequentially (#2732)
In theory, this reduces peak memory consumption by doing the conditioned
and un-conditioned predictions one after the other instead of in a
single mini-batch.

In practice, it doesn't reduce the reported "Max VRAM used for this
generation" for me, even without xformers. (But it does slow things down
by a good 18%.)

That suggests to me that the peak memory usage is during VAE decoding,
not the diffusion unet, but ymmv. It does
[improve things for gogurt's 16 GB M1](https://github.com/invoke-ai/InvokeAI/pull/2732#issuecomment-1436187407),
so it seems worthwhile.

To try it out, use the `--sequential_guidance` option:
2dded68267/ldm/invoke/args.py (L487-L492)
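
A minimal sketch of the batched vs. sequential trade-off (illustrative; assumes a diffusers-style UNet whose output exposes a `.sample` attribute, not the project's actual generation code):

```python
import torch

def guided_noise_pred(unet, latents, t, uncond, cond, scale, sequential=False):
    # Classifier-free guidance: run the unconditioned and conditioned
    # predictions either in one mini-batch (faster, higher peak memory) or
    # one after the other (--sequential_guidance: slower, lower peak memory).
    if sequential:
        noise_uncond = unet(latents, t, encoder_hidden_states=uncond).sample
        noise_cond = unet(latents, t, encoder_hidden_states=cond).sample
    else:
        both = unet(torch.cat([latents, latents]), t,
                    encoder_hidden_states=torch.cat([uncond, cond])).sample
        noise_uncond, noise_cond = both.chunk(2)
    return noise_uncond + scale * (noise_cond - noise_uncond)
```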
2023-02-20 23:00:54 -05:00
Lincoln Stein
4c2a588e1f
Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 22:40:31 -05:00
Lincoln Stein
5f9de762ff
update installation docs for 2.3.1 installer screens (#2749)
This PR updates the manual page for automatic installation, and contains
screenshots of the new installer screens.
2023-02-20 22:40:02 -05:00
Lincoln Stein
91f7abb398 replace repeated triggers with <filename> 2023-02-20 22:33:13 -05:00
Damian Stewart
6420b81a5d Merge remote-tracking branch 'upstream/main' into refactor_use_compel 2023-02-20 23:34:38 +01:00
Lincoln Stein
b6ed5eafd6 update installation docs for 2.3.1 installer screens 2023-02-20 17:24:52 -05:00
blessedcoolant
694d5aa2e8
Add 'update' action to launcher script (#2636)
- Adds an update action to the launcher script
- This action calls a new python script, `invokeai-update`, which prompts
the user to update to the latest release version, the main development
version, or an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag or branch was specified (a
rough sketch of this step follows below).

The user interface (such as it is) looks like this:

![updater-screenshot](https://user-images.githubusercontent.com/111189/218291539-e5542662-6bfd-46ef-8ea9-655ca77392b7.png)
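
A rough sketch of the pip step the updater performs (the package spec below is an illustrative assumption, not the exact command the script runs):

```python
import subprocess
import sys

def update_to(tag_or_branch: str) -> None:
    # Reinstall InvokeAI from GitHub at the requested tag or branch.
    # The real invokeai-update script also offers "latest release" and "main".
    spec = f"invokeai @ git+https://github.com/invoke-ai/InvokeAI@{tag_or_branch}"
    subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade", spec])
```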
2023-02-21 11:17:22 +13:00
Lincoln Stein
833079140b
Merge branch 'main' into enhance/update-menu 2023-02-20 17:16:20 -05:00
Lincoln Stein
fd27948c36
Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 17:15:33 -05:00
Damian Stewart
1dfaaa2a57 fix web ui issues 2023-02-20 22:58:07 +01:00