Commit Graph

931 Commits

Author SHA1 Message Date
Lincoln Stein
d96175d127 resolve some undefined symbols in model_cache 2023-05-18 14:31:47 -04:00
Lincoln Stein
b1a99d772c added method to convert vaes 2023-05-18 13:31:11 -04:00
Lincoln Stein
7ea995149e fixes to env parsing, textual inversion & help text
- Make environment variable settings case InSenSiTive:
  INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
  environment variables will both set `max_loaded_models`

- Updated realesrgan to use new config system.

- Updated textual_inversion_training to use new config system.

- Discovered a race condition when InvokeAIAppConfig is created
  at module load time, which makes it impossible to customize
  or replace the help message produced with --help on the command
  line. To fix this, moved all instances of get_invokeai_config()
  from module load time to object initialization time. Makes code
  cleaner, too.

- Added `--from_file` argument to `invokeai-node-cli` and changed
  github action to match. CI tests will hopefully work now.
2023-05-18 10:48:23 -04:00
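A minimal sketch of the case-insensitive settings behavior and the deferred config lookup described above, assuming a pydantic BaseSettings-style class (names here are illustrative, not the actual InvokeAI definitions):

    # With case_sensitive=False, pydantic matches environment variables to
    # fields regardless of case, so INVOKEAI_MAX_LOADED_MODELS and
    # InvokeAI_Max_Loaded_Models both populate max_loaded_models.
    from pydantic import BaseSettings

    class AppConfig(BaseSettings):
        max_loaded_models: int = 2

        class Config:
            env_prefix = "INVOKEAI_"
            case_sensitive = False

    # Reading the config when an object is initialized, rather than at module
    # import time, avoids the load-order problem that broke --help customization.
    class TextualInversionTrainer:
        def __init__(self, config=None):
            self.config = config or AppConfig()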
Sergey Borisov
fd82763412 Model manager draft 2023-05-18 03:56:52 +03:00
Eugene
f9710dd6ed remove reference to legacy opt.hf_token, clean up whitespace in invokeai_configure 2023-05-17 20:39:00 -04:00
Eugene
20ca9e1fc1 config: move 'CORS' settings to 'Web Server' in the docstring to match the actual category 2023-05-17 19:45:51 -04:00
Eugene
8a8b09a953 api_app: rename web_config to app_config for consistency 2023-05-17 19:42:13 -04:00
Eugene
9e4e386c9b web and formatting fixes
- remove non-existent import InvokeAIWebConfig
- fix workflow file formatting
- clean up whitespace
2023-05-17 19:12:03 -04:00
Lincoln Stein
eca1e449a8 Merge branch 'lstein/global-configuration' of github.com:invoke-ai/InvokeAI into lstein/global-configuration 2023-05-17 15:23:21 -04:00
Lincoln Stein
ffaadb9d05 reorder options in help text 2023-05-17 15:22:58 -04:00
Lincoln Stein
8adff96e29 Merge branch 'main' into lstein/global-configuration 2023-05-17 14:37:09 -04:00
Lincoln Stein
7593dc19d6 complete several steps needed to make 3.0 installable
- invokeai-configure updated to work with new config system
- migrate invokeai.init to invokeai.yaml during configure
- replace legacy invokeai with invokeai-node-cli
- add ability to run an invocation directly from invokeai-node-cli command line
- update CI tests to work with new invokeai syntax
2023-05-17 14:13:27 -04:00
Lincoln Stein
b7c5a39685 make invokeai.yaml more hierarchical; fix list configuration bug 2023-05-17 12:19:19 -04:00
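A hypothetical illustration of the more hierarchical layout (section and field names are examples only, not a verbatim copy of the generated file):

    InvokeAI:
      Web Server:
        host: 127.0.0.1
        port: 9090
      Memory/Performance:
        max_loaded_models: 2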
Mary Hipp Rogers
bd1b84f7d0 tell user to refresh page on image load error (#3425)
* refetch images list if error loading

* tell user to refresh instead of refetching

* unused import

* feat(ui): use `useAppToaster` to make toast

* fix(ui): clear selected/initial image on error

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-05-17 11:52:37 -04:00
Lincoln Stein
eadfd239a8 update config script to work with new config system 2023-05-17 00:18:19 -04:00
Lincoln Stein
e971a7f35c when migrating models.yaml, rename the original to models.yaml.orig 2023-05-16 22:37:53 -04:00
Lincoln Stein
8d75e50435 partial port of invokeai-configure 2023-05-16 01:50:01 -04:00
psychedelicious
6ab84741a0 fix(nodes): make ModelsList an enum-keyed dict
The `ModelsList` OpenAPI schema is generated as being keyed by plain strings. This means that API consumers do not know the shape of the dict. It _should_ be keyed by the `SDModelType` enum.

Unfortunately, `fastapi` does not actually handle this correctly yet; it still generates the schema with plain string keys.

Adding this anyway, though, in hopes that it will be resolved upstream and we can get the correct schema. Until then, I'll implement the (simple but annoying) logic on the frontend.

https://github.com/pydantic/pydantic/issues/4393
2023-05-16 15:02:58 +10:00
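A rough sketch of the intended shape (class and member names are assumptions, not the exact InvokeAI definitions):

    # Keying the mapping by an enum lets the OpenAPI schema advertise the set
    # of allowed keys; today fastapi/pydantic still emit plain string keys.
    from enum import Enum
    from typing import Dict
    from pydantic import BaseModel

    class SDModelType(str, Enum):
        diffusers = "diffusers"
        vae = "vae"
        lora = "lora"

    class ModelInfo(BaseModel):
        description: str = ""

    class ModelsList(BaseModel):
        models: Dict[SDModelType, Dict[str, ModelInfo]]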
Lincoln Stein
cd16857f38 fix None in model_type 2023-05-16 00:13:44 -04:00
Lincoln Stein
1442f1cb8d change model filter to None in second place 2023-05-16 00:03:57 -04:00
Lincoln Stein
eea0d6f7bc default to no filter in list_models() 2023-05-15 23:52:29 -04:00
psychedelicious
1d9c115225 feat(nodes): add low and high to RandomIntInvocation 2023-05-16 13:50:52 +10:00
Lincoln Stein
4fe94a9315 list_models() now returns a dict of {type: {name: info}} 2023-05-15 23:44:08 -04:00
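A short usage sketch of that nested return shape (the manager variable and the printed fields are assumptions):

    # Outer keys are model types, inner keys are model names.
    models = manager.list_models()
    for model_type, entries in models.items():
        for name, info in entries.items():
            print(f"{model_type}/{name}: {info}")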
psychedelicious
cc21fb216c chore(ui): clean up GalleryPanel 2023-05-16 10:43:26 +10:00
psychedelicious
6fe62a2705 feat(ui): sampler --> scheduler 2023-05-16 10:40:26 +10:00
psychedelicious
da87378713 chore(ui): regen api client 2023-05-16 10:39:40 +10:00
psychedelicious
b6f5267385 chore(ui): clean up generationSlice 2023-05-16 10:21:18 +10:00
psychedelicious
f9e78d3c64 chore(ui): clean up gallerySlice 2023-05-16 10:16:36 +10:00
psychedelicious
b7b5bd1b46 chore(ui): clean up uiSlice 2023-05-16 09:57:19 +10:00
psychedelicious
9a3727d3ad chore(ui): clean up systemSlice 2023-05-16 09:48:58 +10:00
psychedelicious
d68c14516c chore(ui): clean up persist denylists 2023-05-16 09:46:03 +10:00
psychedelicious
9f4d39aa42 chore(ui): clean up modelSlice 2023-05-16 09:45:49 +10:00
blessedcoolant
2fc70c509b Merge branch 'main' into feat/ui/fix-uploading 2023-05-16 02:20:59 +12:00
Lincoln Stein
80bdd550cf Merge branch 'main' into lstein/bugfix/compel 2023-05-15 09:25:21 -04:00
Lincoln Stein
7ef0d2aa35 merge with main 2023-05-15 09:07:17 -04:00
psychedelicious
2359b92b46 chore(ui): tidy unused component ref 2023-05-15 22:58:15 +10:00
psychedelicious
a404fb2d32 docs(ui): update PACKAGE_SCRIPTS.md 2023-05-15 22:49:28 +10:00
psychedelicious
513eb11616 chore(ui): clean up unused files/packages 2023-05-15 22:48:06 +10:00
psychedelicious
d2c9140e69 feat(ui): restore save/copy/download/merge functionality 2023-05-15 22:21:03 +10:00
psychedelicious
d95fe5925a feat(ui): restore image post-upload actions
e.g. set the init image if on img2img when uploading
2023-05-15 18:52:48 +10:00
psychedelicious
835922ea8f fix(ui): floor canvas coords to prevent partial pixel offset rendering issues 2023-05-15 18:50:34 +10:00
psychedelicious
e1e5266fc3 feat(ui): refactor base image uploading logic 2023-05-15 17:45:05 +10:00
psychedelicious
5e4457445f feat(ui): make toast/hotkey into logical components 2023-05-15 15:25:27 +10:00
psychedelicious
0221ca8f49 fix(ui): use cloned canvas for retrieving dataURL/Blobs 2023-05-15 13:54:30 +10:00
Lincoln Stein
c8f765cc06 improve debugging messages 2023-05-14 18:29:55 -04:00
Eugene Brodsky
cf36e4029e fix(ui): fix syntax error in the logo component flexbox 2023-05-15 08:24:33 +10:00
Lincoln Stein
b9e9087dbe do not manage GPU for pipelines if sequential_offloading is True 2023-05-14 18:09:38 -04:00
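A sketch of the idea, assuming diffusers' sequential CPU offload (the helper name is made up):

    # With sequential offloading, accelerate hooks move each submodule to the
    # GPU on demand, so the cache should not force the whole pipeline onto the
    # device itself.
    import torch
    from diffusers import StableDiffusionPipeline

    def place_pipeline(pipe: StableDiffusionPipeline,
                       device: torch.device,
                       sequential_offload: bool) -> None:
        if sequential_offload:
            pipe.enable_sequential_cpu_offload()  # manages device placement itself
        else:
            pipe.to(device)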
Lincoln Stein
63e465eb5c tweaks to get_model() behavior
1. If an external VAE is specified in the config file, then
   get_model(submodel=vae) will return the external VAE, not the one
   burnt into the parent diffusers pipeline.

2. The mechanism in (1) is generalized such that you can now have
   "unet:", "text_encoder:" and similar stanzas in the config file.
   Valid formats of these subsections:

       unet:
          repo_id: foo/bar

       unet:
          path: /path/to/local/folder

       unet:
          repo_id: foo/bar
          subfolder: unet

    In the near future, these will also be used to attach external
    parts to the pipeline, generalizing VAE behavior.

3. Accommodate callers (i.e. the WebUI) that are passing the
   model key ("diffusers/stable-diffusion-1.5") to get_model()
   instead of the tuple of model_name and model_type.

4. Fixed bug in VAE model attaching code.

5. Rebuilt web front end.
2023-05-14 16:50:59 -04:00
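A simplified sketch of how such a stanza might be resolved for a VAE (function and variable names are assumptions, not the actual model-manager code):

    # A "vae:"/"unet:" stanza in the model's config entry takes precedence over
    # the submodel baked into the parent pipeline.
    from diffusers import AutoencoderKL

    def load_vae_override(stanza: dict) -> AutoencoderKL:
        if "path" in stanza:  # local folder form
            return AutoencoderKL.from_pretrained(stanza["path"])
        # repo_id form, with an optional subfolder
        return AutoencoderKL.from_pretrained(
            stanza["repo_id"], subfolder=stanza.get("subfolder")
        )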
Eugene Brodsky
c8a98a9a22 Merge branch 'main' into lstein/bugfix/compel 2023-05-14 14:43:18 -04:00
blessedcoolant
c4681774a5 Merge branch 'main' into logging-facelift 2023-05-15 02:08:29 +12:00