Commit Graph

3039 Commits

Author SHA1 Message Date
blessedcoolant
5f0848bf7d feat(ui): add all-sliders option 2023-02-17 19:53:44 +11:00
Lincoln Stein
6551527fe2
Update 050_INSTALLING_MODELS.md (#2690)
Fix typo; "cute" to "cube"
2023-02-16 23:03:30 -05:00
Steven Frank
3715570d17
Update 050_INSTALLING_MODELS.md
Fix typo; "cute" to "cube"
2023-02-16 19:53:01 -08:00
Lincoln Stein
557e28f460
Fix workflow path filters (#2689)
remove leading Slash from paths
2023-02-16 22:15:31 -05:00
Lincoln Stein
62a7f252f5
Merge branch 'main' into fix/ci/workflow-path-filters 2023-02-16 22:14:45 -05:00
mauwii
0605cf94f0
remove leading Slash from paths 2023-02-17 04:10:40 +01:00
Kevin Turner
8a0d45ac5a
new OffloadingDevice loads one model at a time, on demand (#2596)
* new OffloadingDevice loads one model at a time, on demand

* fixup! new OffloadingDevice loads one model at a time, on demand

* fix(prompt_to_embeddings): call the text encoder directly instead of its forward method

allowing any associated hooks to run with it.

* more attempts to get things on the right device from the offloader

* more attempts to get things on the right device from the offloader

* make offloading methods an explicit part of the pipeline interface

* inlining some calls where device is only used once

* ensure model group is ready after pipeline.to is called

* fixup! Strategize slicing based on free [V]RAM (#2572)

* doc(offloading): docstrings for offloading.ModelGroup

* doc(offloading): docstrings for offloading-related pipeline methods

* refactor(offloading): s/SimpleModelGroup/FullyLoadedModelGroup

* refactor(offloading): s/HotSeatModelGroup/LazilyLoadedModelGroup

to frame it in the same terms as "FullyLoadedModelGroup"

---------

Co-authored-by: Damian Stewart <null@damianstewart.com>
2023-02-16 23:48:27 +00:00
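The commit above describes a model group that keeps only one model on the execution device at a time, loading each model on demand and offloading the previous one. A minimal sketch of that pattern is shown below; the class and method names follow the terminology in the commit message (`LazilyLoadedModelGroup`, `install`, `load`, `ready`), but the implementation details are assumptions rather than the actual InvokeAI code.

```python
from __future__ import annotations

import torch


class LazilyLoadedModelGroup:
    """Keep at most one model on the execution device; the rest stay offloaded."""

    def __init__(self, execution_device: torch.device,
                 offload_device: torch.device = torch.device("cpu")):
        self.execution_device = execution_device
        self.offload_device = offload_device
        self.models: set[torch.nn.Module] = set()
        self._current: torch.nn.Module | None = None

    def install(self, *models: torch.nn.Module) -> None:
        # Register models with the group; they start out on the offload device.
        for model in models:
            self.models.add(model)
            model.to(self.offload_device)

    def load(self, model: torch.nn.Module) -> None:
        # Bring the requested model onto the execution device on demand,
        # offloading whichever model currently occupies it.
        if model is self._current:
            return
        if self._current is not None:
            self._current.to(self.offload_device)
        model.to(self.execution_device)
        self._current = model

    def ready(self) -> None:
        # Ensure the group is usable again, e.g. after the pipeline's device changes.
        if self._current is not None:
            self._current.to(self.execution_device)
```

In use, a pipeline would call something like `group.load(text_encoder)` just before encoding the prompt and `group.load(unet)` just before denoising, so only one component occupies VRAM at any moment.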
Matthias Wild
2468ba7445
skip huge workflows if not needed (#2688)
- filter paths for `build-container.yml` and `test-invoke-pip.yml`
  - add workflow to pass required checks on PRs with `paths-ignore`
  - this triggers if `test-invoke-pip.yml` does not
- fix "CI checks on main link" in `/README.md`
2023-02-16 22:57:36 +01:00
mauwii
65b7d2db47 skip huge workflows if not needed
- filter paths for `build-container.yml` and `test-invoke-pip.yml`
  - add workflow to pass required checks on PRs with `paths-ignore`
  - this triggers if `test-invoke-pip.yml` does not
- fix "CI checks on main link" in `/README.md`
2023-02-16 22:56:39 +01:00
Ryan Cao
e07f1bb89c build frontend 2023-02-16 21:33:47 +01:00
Ryan Cao
f4f813d108 design: smooth progress bar animations 2023-02-16 21:33:47 +01:00
blessedcoolant
a76038bac4
[WebUI] Even off JSX string syntax (#2058)
Assuming that mixing `"literal strings"` and `{'JSX expressions'}`
throughout the code is not for an explicit reason but just a result of IDE
autocompletion, I changed all props to be consistent with the
conventional style of using simple string literals where they are
sufficient.

This is a somewhat trivial change, but it makes the code a little more
readable and uniform.
2023-02-17 01:22:17 +13:00
blessedcoolant
ff4942f9b4 Merge branch 'main' into pr/2058 2023-02-17 01:05:20 +13:00
blessedcoolant
1ccad64871
build: lint/format ignores stats.html (#2681) 2023-02-17 00:42:51 +13:00
psychedelicious
19f0022bbe build: lint/format ignores stats.html 2023-02-16 20:02:52 +11:00
psychedelicious
ecc7b7a700 builds frontend 2023-02-16 19:54:38 +11:00
David Regla
e46102124e [WebUI] Even off JSX string props
Increased consistency and readability by replacing any unnecessary JSX expressions in places where string literals are sufficient
2023-02-16 19:54:25 +11:00
Eugene Brodsky
caf07479ec fix spelling mistake 2023-02-16 00:19:08 -05:00
Johnathon Selstad
508780d07f Also fix .bat file to point at correct configurer 2023-02-16 00:19:08 -05:00
Johnathon Selstad
05e67e924c Make configure_invokeai.py call invokeai_configure 2023-02-16 00:19:08 -05:00
blessedcoolant
fb2488314f
fix minor typos (#2666)
Very, very minor typos I noticed.
2023-02-16 10:14:30 +13:00
blessedcoolant
062f58209b
Merge branch 'main' into fix_typos 2023-02-16 10:01:28 +13:00
Matthias Wild
7cb9d6b1a6
[WebUI] Model Conversion (#2616)
### WebUI Model Conversion

**Model Search Updates**

- Model Search now has a radio group that allows users to pick the type
of model they are importing. If they know their model has a custom
config file, they can assign it right here. Based on their pick, the
model config data is automatically populated, and this same information
is used when converting the model to `diffusers`.


![firefox_q8b4Iog73A](https://user-images.githubusercontent.com/54517381/218283322-6bf31fd5-349a-410f-991a-2aa50ee8b6e1.png)

- Files named `model.safetensors` and
`diffusion_pytorch_model.safetensors` are excluded from the search
because these are naming conventions used by diffusers models; they
would otherwise show up in the list, since our conversion saves
safetensors rather than bin files. (See the sketch at the end of this
entry.)

**Model Conversion UI**

- The **Convert To Diffusers** button can be found on the Edit page of
any **Checkpoint Model**.


![firefox_VUzv10CZ7m](https://user-images.githubusercontent.com/54517381/218283424-d9864406-ebb3-44a4-9e00-b6adda72d817.png)

- When converting the model, the entire process is handled
automatically. The config file assigned at the time of the ckpt
addition is used in the process.
- Users are presented with the choice of where to save the converted
diffusers model: the same location as the ckpt, the InvokeAI models root
folder, or a completely custom location.


![firefox_HJlR97KY0u](https://user-images.githubusercontent.com/54517381/218283443-b9136edd-b432-4569-a8cc-50961544f31f.png)

- When the model is converted, the checkpoint entry is replaced with the
diffusers model entry. A user can re-add the ckpt if they wish to.

--- 

More or less done. Might make some minor UX improvements as I refine
things.
2023-02-15 21:58:29 +01:00
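One detail from the model-conversion notes above is that the model search skips files named `model.safetensors` and `diffusion_pytorch_model.safetensors`, because those names belong to diffusers-format models. A hedged sketch of such a filter is below; the function name and directory-walking logic are illustrative assumptions, not the project's actual search code.

```python
from pathlib import Path

# Filenames used inside diffusers-format model folders; standalone checkpoints
# do not use these names, so matching files are skipped during the search.
DIFFUSERS_INTERNAL_NAMES = {"model.safetensors", "diffusion_pytorch_model.safetensors"}


def find_checkpoint_candidates(search_root: Path) -> list[Path]:
    """Return importable checkpoint files, excluding diffusers' internal weight files."""
    candidates = []
    for path in search_root.rglob("*"):
        if path.suffix not in {".ckpt", ".safetensors"}:
            continue
        if path.name in DIFFUSERS_INTERNAL_NAMES:
            continue  # part of an already-converted diffusers model, not a checkpoint
        candidates.append(path)
    return candidates
```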
blessedcoolant
fb721234ec final build (webui-model-conversion) 2023-02-16 09:32:54 +13:00
blessedcoolant
92906aeb08 Merge branch 'main' into webui-model-conversion 2023-02-16 09:31:28 +13:00
Jonathan
cab41f0538
Fix perlin noise generator for diffusers tensors (#2678)
With diffusers, tensor dimensions no longer have to be multiples of 8. This broke Perlin noise generation. We now generate noise for the next largest multiple of 8 and return a cropped result. Fixes #2674.
2023-02-15 19:37:42 +01:00
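The fix described above rounds each dimension up to the next multiple of 8, generates Perlin noise at that padded size, and crops the result back down. A minimal sketch of that round-up-and-crop idea, with a placeholder `perlin_noise` generator standing in for the real one, might look like this:

```python
import math

import torch


def perlin_for_any_size(width: int, height: int, perlin_noise) -> torch.Tensor:
    """Generate Perlin noise for dimensions that need not be multiples of 8.

    `perlin_noise(w, h)` is assumed to be a generator that only supports
    dimensions that are multiples of 8 and returns an (h, w) tensor; we call
    it at the next largest multiples and crop back to the requested size.
    """
    padded_w = math.ceil(width / 8) * 8
    padded_h = math.ceil(height / 8) * 8
    noise = perlin_noise(padded_w, padded_h)
    return noise[:height, :width]  # crop back to the requested shape
```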
Kent Keirsey
5d0dcaf81e Fix typo and Hi-Res Bug 2023-02-15 13:06:31 +01:00
psychedelicious
9591c8d4e0 builds frontend 2023-02-15 22:30:47 +11:00
psychedelicious
bcb1fbe031 add tooltips & status messages to model conversion 2023-02-15 22:28:36 +11:00
fattire
b08a514594 missed one. 2023-02-14 17:49:01 -08:00
fattire
7aa6c827f7 fix minor typos 2023-02-14 17:38:21 -08:00
Jonathan
093174942b
Add thresholding for all diffusers types (#2479)
`generator` now asks `InvokeAIDiffuserComponent` to do postprocessing work on latents after every step. Thresholding - now implemented as replacing latents outside of the threshold with random noise - is called at this point. This postprocessing step is also where we can hook up symmetry and other image latent manipulations in the future.

Note: code at this layer doesn't need to worry about MPS as relevant torch functions are wrapped and made MPS-safe by `generator.py`.
2023-02-14 18:00:34 -06:00
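As described above, thresholding runs as a per-step postprocessing pass over the latents, replacing values outside the threshold with random noise. A hedged sketch of that operation is below; the function name is illustrative and this is not the actual `InvokeAIDiffuserComponent` API.

```python
import torch


def threshold_latents(latents: torch.Tensor, threshold: float) -> torch.Tensor:
    """Replace latent values whose magnitude exceeds `threshold` with random noise."""
    if threshold <= 0:
        return latents  # thresholding disabled
    noise = torch.randn_like(latents)
    out_of_range = latents.abs() > threshold
    return torch.where(out_of_range, noise, latents)
```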
Lincoln Stein
0bc55a0d55 Fix link to the installation documentation
Broken link in the README. Now pointing to correct mkdocs file.
2023-02-14 04:15:23 +01:00
Lincoln Stein
628e82fa79
Added arabic locale files (#2561)
I have added the Arabic locale files. Some modifications to the code are
needed in order to detect the language direction and add it to the
current document body properties.

For example, we can use this:

import { appWithTranslation, useTranslation } from "next-i18next";
import React, { useEffect } from "react";

function MyApp({ Component, pageProps }) {
  const { t, i18n } = useTranslation();
  const direction = i18n.dir();
  // keep the document's text direction (ltr/rtl) in sync with the active language
  useEffect(() => {
    document.body.dir = direction;
  }, [direction]);
  return <Component {...pageProps} />;
}

export default appWithTranslation(MyApp);

This should be added to the app file. It uses next-i18next to
automatically get the current language and set the body text direction
(ltr or rtl) depending on the selected language.
2023-02-13 07:45:16 -05:00
blessedcoolant
9cbf9d52b4 Merge branch 'main' into pr/2561 2023-02-13 23:48:18 +13:00
blessedcoolant
fb35fe1a41 Merge branch 'main' into pr/2561 2023-02-13 23:47:21 +13:00
psychedelicious
b60b5750af builds frontend 2023-02-13 21:23:26 +11:00
psychedelicious
3ff40114fa adds arabic to language picker 2023-02-13 21:22:39 +11:00
psychedelicious
71c6ae8789 fixes mislocated language file 2023-02-13 21:22:18 +11:00
psychedelicious
d9a7536fa8 moves languages to fallback lang (en) 2023-02-13 21:21:46 +11:00
Lincoln Stein
99f4417cd7
Improve error messages from Textual Inversion and Merge scripts (#2641)
## Provide informative error messages when TI and Merge scripts have
insufficient space for console UI

- The invokeai-ti and invokeai-merge scripts will crash if there is not
enough space in the console to fit the user interface (even after
responsive formatting).

- This PR intercepts the errors and prints a useful error message
advising the user to make the window larger.
2023-02-13 00:12:32 -05:00
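The PR above turns the crash that occurs when the console is too small for the TI/merge text UI into a helpful message. One way to express that kind of guard, using only the standard library and a hypothetical minimum size, is sketched below; it illustrates the idea rather than reproducing the scripts' actual code.

```python
import shutil
import sys

# Hypothetical minimum console size required by the text UI.
MIN_COLS, MIN_LINES = 120, 45


def check_console_size() -> None:
    """Exit with a friendly message if the terminal is too small for the console UI."""
    size = shutil.get_terminal_size()
    if size.columns < MIN_COLS or size.lines < MIN_LINES:
        sys.exit(
            f"This script requires a console of at least {MIN_COLS}x{MIN_LINES} "
            f"characters (current size: {size.columns}x{size.lines}). "
            "Please make your window larger and try again."
        )
```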
Lincoln Stein
8e47ca8d57
Merge branch 'main' into bugfix/prevent-ti-frontend-crash 2023-02-12 23:56:41 -05:00
Lincoln Stein
d1ad46d6f1 ask user to make window larger if not enough space for textual inversion/merge gui
- The invokeai-ti and invokeai-merge scripts will crash if there is not enough space
  in the console to fit the user interface (even after responsive formatting).

- This PR intercepts the errors and prints a useful error message advising the user to
  make the window larger.
2023-02-12 17:38:46 -05:00
Lincoln Stein
6ae7560f66
Merge branch 'main' into webui-model-conversion 2023-02-12 17:22:32 -05:00
Lincoln Stein
e561d19206 a few adjustments
- fix unused variables and f-strings found by pyflakes
- use global_converted_ckpts_dir() to find location of diffusers
- fix bug in model_manager that was causing the description of converted
  models to read "Optimized version of {model_name}"
2023-02-12 17:20:13 -05:00
Jonathan
9eed1919c2
Strategize slicing based on free [V]RAM (#2572)
Strategize slicing based on free [V]RAM when not using xformers. Free [V]RAM is evaluated at every generation. When there's enough memory, the entire generation occurs without slicing. If there is not enough free memory, we use diffusers' sliced attention.
2023-02-12 18:24:15 +00:00
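The strategy above checks free [V]RAM before each generation and only falls back to diffusers' sliced attention when the unsliced computation would not fit. A hedged sketch of that decision is below; the `estimated_full_attention_bytes` figure and the CPU/MPS fallback are assumptions, not the actual InvokeAI logic.

```python
import torch


def configure_attention_slicing(pipeline, estimated_full_attention_bytes: int) -> None:
    """Enable diffusers' sliced attention only when free [V]RAM looks insufficient."""
    if torch.cuda.is_available():
        free_bytes, _total = torch.cuda.mem_get_info()
    else:
        # On CPU/MPS a different probe would be needed; assume enough memory here.
        free_bytes = estimated_full_attention_bytes
    if free_bytes >= estimated_full_attention_bytes:
        pipeline.disable_attention_slicing()  # enough room: run attention unsliced
    else:
        pipeline.enable_attention_slicing()   # otherwise use diffusers' sliced attention
```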
blessedcoolant
b87f7b1129 Update Model Conversion Help Text 2023-02-13 00:30:50 +13:00
blessedcoolant
7410a60208 Merge branch 'main' into webui-model-conversion 2023-02-12 23:35:49 +13:00
Matthias Wild
7c86130a3d
add merge_group trigger to test-invoke-pip.yml (#2590) 2023-02-12 05:00:04 +01:00
Lincoln Stein
58a1d9aae0
Merge branch 'main' into update/ci/prepare-test-invoke-pip-for-queue 2023-02-11 22:38:55 -05:00