Compare commits


538 Commits

971f5c5ab1 Configure the NSFW checker at install time with default on (#1624)
* configure the NSFW checker at install time with default on

1. Changes the --safety_checker argument to --nsfw_checker and
--no-nsfw_checker. The original argument is recognized for backward
compatibility.

2. The configure script asks users whether to enable the checker
(default yes). Also offers users ability to select default sampler and
number of generation steps.

3. Enables the pasting of the caution icon on blurred images when
InvokeAI is installed into the package directory.

4. Adds documentation for the NSFW checker, including caveats about
accuracy, memory requirements, and intermediate image display.

* use better fitting icon

* NSFW defaults false for testing

* set default back to nsfw active

Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>
2022-11-30 14:50:57 -05:00
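
The backward-compatible flag rename described above maps onto a standard argparse pattern. A minimal sketch, assuming a single boolean destination (this is not the actual InvokeAI argument code):

```python
import argparse

parser = argparse.ArgumentParser()
# New spelling plus the old --safety_checker as an alias; both write to the
# same destination, so existing invocations keep working.
parser.add_argument(
    "--nsfw_checker", "--safety_checker",
    dest="nsfw_checker", action="store_true", default=True,
    help="Enable the NSFW checker (default: on).",
)
parser.add_argument(
    "--no-nsfw_checker",
    dest="nsfw_checker", action="store_false",
    help="Disable the NSFW checker.",
)

args = parser.parse_args(["--safety_checker"])  # legacy flag still accepted
assert args.nsfw_checker is True
```
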
22133392b2 disable NSFW checker loading during the CI tests (#1641)
* disable NSFW checker loading during the CI tests

The NSFW filter apparently causes invoke.py to crash during CI testing,
possibly due to out of memory errors. This workaround disables NSFW
model loading.

* doc change

* fix formatting errors in yml files
2022-11-30 14:11:51 -05:00
5e81f51f6a Web UI 2.2 bugfixes (#1572)
* Fixes bug preventing multiple images from being generated

* Fixes valid seam strength value range

* Update Delete Alert Text

Indicates to the user that images are not permanently deleted.

* Fixes left/right arrows not working on gallery

* Fixes initial image on load erroneously set to a user uploaded image

Should be a result gallery image.

* Lightbox Fixes

- Lightbox is now a button in the current image buttons
- Lightbox is also now available in the gallery context menu
- Lightbox zoom issues fixed
- Lightbox has a fade in animation.

* Fix image display wrapper in current preview not overflow bounds

* Revert "Fix image display wrapper in current preview not overflow bounds"

This reverts commit 5511c82714dbf1d1999d64e8bc357bafa34ddf37.

* Change Staging Area discard icon from Bin to X

* Expose Snap Threshold and Move Snap Settings to BBox Panel

* Changes img2img strength default to 0.75

* Fixes drawing triggering when mouse enters canvas w/ button down

When we only supported inpainting and no zoom, this was useful. It allowed the cursor to leave the canvas (which was easy to do given the limited canvas dimensions) without losing the "I am drawing" state.

With a zoomable canvas this is no longer as useful.

Additionally, we have more popovers and tools (like the color pickers) which result in unexpected brush strokes. This fixes that issue.

* Revert "Expose Snap Threshold and Move Snap Settings to BBox Panel"

We will handle this a bit differently - by allowing the grid origin to be moved. I will dig in at some point.

This reverts commit 33c92ecf4da724c2f17d9d91c7ea31a43a2f6deb.

* Adds Limit Strokes to Box

* Adds fill bounding box button

* Adds erase bounding box button

* Changes Staging area discard icon to match others

* Fixes right click breaking move tool

* Fixes brush preview visibility issue with "darken outside box"

* Fixes history bugs with addFillRect, addEraseRect, and other actions

* Adds missing `key`

* Fixes postprocessing being applied to canvas generations

* Fixes bbox not getting scaled in various situations

* Fixes staging area show image toggle not resetting on accept/discard

* Locks down canvas while generating/staging

* Fixes move tool breaking when canvas loses focus during move/transform

* Hides cursor when restrict strokes is on and mouse outside bbox

* Lints

* Builds fresh bundle

* Fix overlapping hotkey for Fill Bounding Box

* Build Fresh Bundle

* Fixes bug with mask and bbox overlay

* Builds fresh bundle

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-11-30 12:08:06 -05:00
9fae65ed69 add .editorconfig (#1636) 2022-11-30 08:35:15 -05:00
2443e5dc01 add k_dpmpp_2_a and k_dpmpp_2 solvers options (#1389)
* add k_dpmpp_2_a and k_dpmpp_2 solvers options

* update frontend

Co-authored-by: Victor <victorca25@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-11-30 08:32:52 -05:00
a9aa4e45aa fix error when inpainting using runwayml inpainting model (#1634)
- error was "Omnibus object has no attribute pil_image"
- closes #1596
2022-11-30 08:32:33 -05:00
9b6b27a156 Fix inpainting with iterations (#1635) 2022-11-30 08:28:20 -05:00
b68074bb8f Removes symlinked environment.yaml (#1631)
Was unintentionally added in #1621
2022-11-30 00:24:21 -05:00
1f8e56672c Fix installer script for macOS. (#1630)
* refer to the platform as 'osx' instead of 'mac', otherwise the
composed URL to micromamba is wrong.
* move the `-O` option to `tar` to be grouped with the other tar flags
to avoid the `-O` being interpreted as something to unarchive.
2022-11-29 23:49:29 -05:00
f8708f5dbe disable patchmatch in CI actions (#1626)
* disable patchmatch in CI actions

* fix indentation

* replace tab with spaces

Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>
2022-11-30 05:03:54 +01:00
103efea641 include a step after config to cat ~/.invokeai (#1629) 2022-11-30 03:34:24 +01:00
b60edab0fa 2.2 Doc Updates (#1589)
* Unified Canvas Docs & Assets

Unified Canvas draft

Advanced Tools Updates

Doc Updates (lstein feedback)

* copy edits to Unified Canvas docs

- consistent capitalisation and feature naming
- more intimate address (replace "the user" with "you") for improved User
  Engagement(tm)
- grammatical massaging and *poesie*

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Co-authored-by: damian <git@damianstewart.com>
2022-11-29 16:09:54 -05:00
6bc11bfd3f Test installer (#1618)
* test linux install

* try removing http from parsed requirements

* pip install confirmed working on linux

* ready for linux testing

- rebuilt py3.10-linux-x86_64-cuda-reqs.txt to include pypatchmatch
  dependency.
- point install.sh and install.bat to test-installer branch.

* Updates MPS reqs

* detect broken readline history files

* fix download.pytorch.org URL

* Test installer (Win 11) (#1620)

Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

* Test installer (MacOS 13.0.1 w/ torch==1.12.0) (#1621)

* Test installer (Win 11)

* Test installer (MacOS 13.0.1 w/ torch==1.12.0)

Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

* change sourceball to development for testing

* Test installer (MacOS 13.0.1 w/ torch==1.12.1 & torchvision==1.13.1) (#1622)

* Test installer (Win 11)

* Test installer (MacOS 13.0.1 w/ torch==1.12.0)

* Test installer (MacOS 13.0.1 w/ torch==1.12.1 & torchvision==1.13.1)

Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Cyrus Chan <82143712+cyruschan360@users.noreply.github.com>
Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>
2022-11-29 15:05:43 -05:00
5897e511f1 Debloat-docker (#1612)
* debloat Dockerfile
- fewer options but more user-friendly
- better Entrypoint to simulate CLI usage
- without a command, the container still starts the web host

* debloat build.sh

* better syntax in run.sh

* update Docker docs
- fix description of VOLUMENAME
- update run script example to reflect new entrypoint
2022-11-29 10:04:03 -05:00
f43b767b87 update index.md (#1609)
- comment out non-existent link
- fix indentation
- add separator between feature categories
2022-11-29 10:02:55 -05:00
61cc41aa3f Fixes for #1604 (#1605)
* Converts ESRGAN image input to RGB

- Also adds typing for image input.
- Partially resolves #1604

* ensure there are unmasked pixels before color matching

Co-authored-by: Kyle Schouviller <kyle0654@hotmail.com>
2022-11-29 09:34:38 -05:00
40c3ab0181 revert to older version of list_models() (#1611)
This restores the correct behavior of list_models() and quenches
the bug of list_models() returning a single model entry named "name".

I have not investigated what was wrong with the new version, but I
think it may have to do with changes to the behavior of dict.update().
2022-11-29 08:54:39 -05:00
8999a5564b make concepts library work with Web UI (#1608)
## The concepts library now works with the Web UI

This PR makes it possible to include a Hugging Face concepts library
<style-or-subject-trigger> in the WebUI prompts. The metadata seems to
be correctly handled.
2022-11-29 12:20:05 +01:00
8423be539b tweak setup and environment files for linux & pypatchmatch (#1580)
* tweak setup and environment files for linux & pypatchmatch

- Downgrade python requirements to 3.9 because 3.10 is not supported
  on Ubuntu 20.04 LTS (widely-used distro)
- Use our github pypatchmatch 0.1.3 in order to install Makefile
  where it needs to be.
- Restored "-e ." as the last install step on pip installs. Hopefully
  this will not trigger the high-CPU hang we've previously experienced.

* keep windows on basicsr 1.4.1

* keep windows on basicsr 1.4.1

* bump pypatchmatch requirement to 0.1.4

- This brings in a version of pypatchmatch that will gracefully
  handle the internet connection being unavailable at startup time.
- Also refactors and simplifies the handling of gfpgan's basicsr requirement
  across various platforms.
2022-11-28 20:22:31 -05:00
6cc56043e2 documentation enhancements (#1603)
- Add documentation for the Hugging Face concepts library and TI embedding.

- Fixup index.md to point to each of the feature documentation files,
  including ones that are pending.
2022-11-28 18:48:56 -05:00
62cda009dd make concepts library work with Web UI
This PR makes it possible to include a Hugging Face concepts library
<style-or-subject-trigger> in the WebUI prompt. The metadata seems
to be correctly handled.
2022-11-28 23:44:41 +00:00
45e51bac9a Fix #1599 by relaxing the match_trigger regex (#1601)
* Fix #1599 by relaxing the `match_trigger` regex

Also simplify logic and reduce duplication.

* restrict trigger regex again (but not so far)
2022-11-28 17:58:52 -05:00
a514f9b236 add a --no-patchmatch option to disable patchmatch loading (#1598)
This feature was added to prevent the CI Macintosh tests from erroring
out when patchmatch is unable to retrieve its shared library from
github assets.
2022-11-28 16:29:52 -05:00
90b21db86c Adds psychedelicious to statement of values signature (#1602) 2022-11-28 15:49:04 -05:00
bc44ab786c Bug Fix: Model import fixes (#1566)
These bug fixes address issues #1546 and #1547 .
2022-11-28 21:46:27 +01:00
281a2e3ecb make the docstring more readable and improve the list_models logic (fixes #1539) (#1594) 2022-11-28 21:20:56 +01:00
84cd96decf set readline root to ROOTDIR for model import 2022-11-28 18:34:42 +00:00
a3121b8137 !model_import autocompletes in ROOTDIR 2022-11-28 17:44:32 +00:00
3f6d0fb7da update dockerfile (#1551)
* update dockerfile

* remove not existing file from .dockerignore

* remove bloat and unnecessary step
also use --no-cache-dir for pip install
image is now close to 2GB

* make Dockerfile a variable

* set base image to `ubuntu:22.10`

* add build-essential

* link outputs folder for persistence

* update tag variable

* update docs

* fix non-customizable build args, add reqs output
2022-11-28 18:20:25 +01:00
08ef4d62e9 add statement of values (#1584)
* this adds the Statement of Values

Google doc source = https://docs.google.com/document/d/1-PrUKDJcxy8OyNGc8CyiHhv2VgLvjt7LRGlEpbg1nmQ/edit?usp=sharing

* Fix heading

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* add keturn and mauwii to the team member list

* Fix punctuation

* this adds the Statement of Values

Google doc source = https://docs.google.com/document/d/1-PrUKDJcxy8OyNGc8CyiHhv2VgLvjt7LRGlEpbg1nmQ/edit?usp=sharing

* add keturn and mauwii to the team member list

* fix formatting
- make sub-bullets use * (decide whether to use - or * everywhere)
- indent sub-bullets
Sorry, I first only looked at the code version and found this only after
looking at the rendered markdown version

* use multiparagraph numbered sections

* Break up Statement Of Values as per comments on #1584

* remove duplicated word, reduce vagueness

it's important not to overstate how many artists we are consulting.

* fix typo (sorry blessedcoolant)

Co-authored-by: mauwii <Mauwii@outlook.de>
Co-authored-by: damian <git@damianstewart.com>
2022-11-28 11:00:43 -05:00
81cb7fd1b7 Merge branch 'development' into model-import-fixes 2022-11-28 09:28:14 -05:00
7c658c6d76 model_cache.py: fix list_models
Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
2022-11-28 19:20:38 +05:30
495104e941 Merge branch 'invoke-ai:development' into development 2022-11-28 19:14:31 +05:30
1e1f871ee1 Embedding merging (#1526)
* add whole <style token> to vocab for concept library embeddings

* add ability to load multiple concept .bin files

* make --log_tokenization respect custom tokens

* start working on concept downloading system

* preliminary support for dynamic loading and merging of multiple embedded models

- The embedding_manager is now enhanced with ldm.invoke.concepts_lib,
  which handles dynamic downloading and caching of embedded models from
  the Hugging Face concepts library (https://huggingface.co/sd-concepts-library)

- Downloading of an embedded model is triggered by the presence of one or more
  <concept> tags in the prompt.

- Once the embedded model is downloaded, its trigger phrase will be loaded
  into the embedding manager and the prompt's <concept> tag will be replaced
  with the <trigger_phrase>

- The downloaded model stays on disk for fast loading later.

- The CLI autocomplete will complete partial <concept> tags for you. Type a
  '<' and hit tab to get all ~700 concepts.

BUGS AND LIMITATIONS:

- MODEL NAME VS TRIGGER PHRASE

  You must use the name of the concept embed model from the SD
  library, and not the trigger phrase itself. Usually these are the
  same, but not always. For example, the model named "hoi4-leaders"
  corresponds to the trigger "<HOI4-Leader>"

  One reason for this design choice is that there is no apparent
  constraint on the uniqueness of the trigger phrases and one trigger
  phrase may map onto multiple models. So we use the model name
  instead.

  The second reason is that there is no way I know of to search
  Hugging Face for models with certain trigger phrases. So we'd have
  to download all 700 models to index the phrases.

  The problem this presents is that this may confuse users, who will
  want to reuse prompts from distributions that use the trigger phrase
  directly. Usually this will work, but not always.

- WON'T WORK ON A FIREWALLED SYSTEM

  If the host running IAI has no internet connection, it can't
  download the concept libraries. I will add a script that allows
  users to preload a list of concept models.

- BUG IN PROMPT REPLACEMENT WHEN MODEL NOT FOUND

  There's a small bug that occurs when the user provides an invalid
  model name. The <concept> gets replaced with <None> in the prompt.

* fix loading .pt embeddings; allow multi-vector embeddings; warn on dupes

* simplify replacement logic and remove cuda assumption

* download list of concepts from hugging face

* remove misleading customization of '*' placeholder

The existing code as-is did not do anything; it's unclear what it was supposed to do.

the obvious alternative -- using 'placeholder_strings' instead of
'placeholder_tokens' to match model.params.personalization_config.params.placeholder_strings --
caused a crash. I think this is because the passed string also needed to be handed over
on init of the PersonalizedBase as the 'placeholder_token' argument.
This is weird config dict magic and I don't want to touch it. Put a
breakpoint in personalized.py line 116 (top of PersonalizedBase.__init__) if
you want to have a crack at it yourself.

* address all the issues raised by damian0815 in review of PR #1526

* actually resize the token_embeddings

* multiple improvements to the concept loader based on code reviews

1. Activated the --embedding_directory option (alias --embedding_path)
   to load a single embedding or an entire directory of embeddings at
   startup time.

2. Can turn off automatic loading of embeddings using --no-embeddings.

3. Embedding checkpoints are scanned with the pickle scanner.

4. More informative error messages when a concept can't be loaded due
   either to a 404 not found error or a network error.

* autocomplete terms end with ">" now

* fix startup error and network unreachable

1. If the .invokeai file does not contain the --root and --outdir options,
  invoke.py will now fix it.

2. Catch and handle network problems when downloading hugging face textual
   inversion concepts.

* fix misformatted error string

Co-authored-by: Damian Stewart <d@damianstewart.com>
2022-11-28 02:40:24 -05:00
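
The prompt-expansion flow described in the commit above reduces to a tag scan and substitution. A hypothetical sketch, not the actual ldm.invoke.concepts_lib API; `load_concept` and its lookup table are stand-ins for the real download-and-cache step:

```python
import re
from typing import Optional

CONCEPT_TAG = re.compile(r"<([\w-]+)>")

def load_concept(name: str) -> Optional[str]:
    # Stand-in for downloading/caching the embedding from the concepts
    # library; returns the model's trigger phrase, or None if not found.
    known = {"hoi4-leaders": "<HOI4-Leader>"}
    return known.get(name)

def expand_concepts(prompt: str) -> str:
    # Replace each <concept> tag with its trigger phrase; an unknown model
    # name becomes <None>, matching the bug noted in the commit message.
    def substitute(match: re.Match) -> str:
        trigger = load_concept(match.group(1))
        return trigger if trigger is not None else "<None>"
    return CONCEPT_TAG.sub(substitute, prompt)

print(expand_concepts("a portrait, <hoi4-leaders> style"))
# -> a portrait, <HOI4-Leader> style
```
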
30c5a0b067 Interactive configuration (#1517)
* Update scripts/configure_invokeai.py

prevent crash if output exists

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* implement changes requested by reviews

* default to correct root and output directory on Windows systems

- Previously the script was relying on the readline buffer editing
  feature to set up the correct default. But this feature doesn't
  exist on Windows.

- This commit detects when user typed return with an empty directory
  value and replaces with the default directory.

* improved readability of directory choices

* Update scripts/configure_invokeai.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* better error reporting at startup

- If the user tries to run the script outside of the repo or runtime directory,
  a more informative message will appear explaining the problem.

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2022-11-27 21:29:56 -05:00
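
The Windows fix described above (readline's buffer pre-fill is unavailable, so an empty response is treated as accepting the default) comes down to a few lines. A sketch, not the script's actual code:

```python
def prompt_directory(prompt: str, default: str) -> str:
    # Without readline buffer editing (absent on Windows), show the default
    # in brackets and substitute it when the user just presses return.
    response = input(f"{prompt} [{default}]: ").strip()
    return response or default

# e.g. root = prompt_directory("Runtime directory", "~/invokeai")
```
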
9f02595ef2 Fixes outpainting with resized inpaint size 2022-11-27 22:24:15 +13:00
b101334b4e Merge branch 'development' into model-import-fixes 2022-11-27 08:26:05 +01:00
0608d259dd move requirements-mkdocs.txt to docs folder (#1575)
* move requirements-mkdocs.txt to docs folder

* update copyright
2022-11-27 07:59:56 +01:00
12eff0dd42 Update-requirements and test-invoke-pip workflow (#1574)
* update requirements files

* update test-invoke-pip workflow
2022-11-27 03:43:04 +01:00
939164eaa7 Merge branch 'development' into model-import-fixes 2022-11-27 01:30:43 +01:00
f2d2a49977 disable checks for python 3.9 2022-11-26 17:43:09 -05:00
37535f5897 fix output path for Archive results 2022-11-26 17:43:09 -05:00
e5646dee27 also set fail-fast to its default (true)
this way, the whole action fails if one job fails
this should unblock the runners!!!
2022-11-26 17:43:09 -05:00
d5011efaa1 fix model cache path 2022-11-26 17:43:09 -05:00
74487a95a9 Merge branch 'development' into model-import-fixes 2022-11-26 16:54:21 -05:00
a50f4da9d1 install frontend/dist into package directory (#1554)
- When invokeai is installed with `pip install .`, the frontend will be in
the venv directory under invokeai.
- When invokeai is installed with `pip install -e .`, the frontend will be
in the source repo.
- invoke_ai_web_server.py will look in both places using relative
  addressing.
2022-11-26 16:21:00 -05:00
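
The two-location lookup described above can be sketched as a relative-path probe; the exact paths below are assumptions, not the ones invoke_ai_web_server.py actually uses:

```python
from pathlib import Path

def find_frontend_dist() -> Path:
    here = Path(__file__).resolve().parent
    candidates = (
        here / "frontend" / "dist",         # `pip install .`: inside the package
        here.parent / "frontend" / "dist",  # `pip install -e .`: in the source repo
    )
    for candidate in candidates:
        if candidate.is_dir():
            return candidate
    raise FileNotFoundError("frontend/dist not found in package or source tree")
```
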
4ae1df5b5e Merge branch 'development' into backend-can-find-frontend 2022-11-26 14:01:23 -05:00
7f3ba16cd2 Revert "make the docstring more readable and improve the list_models logic"
This reverts commit 248068fe5d.
2022-11-26 13:38:20 -05:00
7a0438586c Merge branch 'development' into model-import-fixes 2022-11-26 10:18:49 -05:00
9adaf8f8ad prevent "!switch state gets confused if model switching fails"
- If !switch were to fail on a particular model, then generate got
  confused and wouldn't try again until you switched to a different working
  model and back again.

- This commit fixes and closes #1547
2022-11-26 15:15:10 +00:00
a341297b0c stop crash on !import_models call on model inside rootdir
- addresses bug report #1546
2022-11-26 14:58:22 +00:00
9e0504abe5 Builds fresh bundle 2022-11-27 03:35:49 +13:00
3131edb255 Fixes canvas dimensions not setting on first load 2022-11-27 03:35:49 +13:00
b0697bc4ff Fix desktop mode being broken with new versions of flaskwebgui 2022-11-27 03:35:49 +13:00
1e9121c8d6 Builds fresh bundle 2022-11-27 03:35:49 +13:00
916e795c26 Adds gallery drag and drop to img2img/canvas 2022-11-27 03:35:49 +13:00
3aebe754fa Fixes unnecessary canvas scaling 2022-11-27 03:35:49 +13:00
3f0cfaac4a Builds fresh bundle 2022-11-27 03:35:49 +13:00
a3a0a87f55 Fixes canvas failing to scale on first run 2022-11-27 03:35:49 +13:00
f5e8ffe7b4 Builds fresh bundle 2022-11-27 03:35:49 +13:00
404d81f6fd Fixes shouldShowStagingImage not resetting to true on commit 2022-11-27 03:35:49 +13:00
c7864f8a6d Fixes bug with clear mask and history 2022-11-27 03:35:49 +13:00
9568ac66e0 Improves scaled bbox display logic 2022-11-27 03:35:49 +13:00
d4280bbaaa Adds auto-scaling for inpaint size 2022-11-27 03:35:49 +13:00
46a5fd67ed Adds inpaint size (as scale bounding box) to UI 2022-11-27 03:35:49 +13:00
b93336dbf9 Bug fix for inpaint size 2022-11-27 03:35:49 +13:00
9fe9301762 Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination 2022-11-27 03:35:49 +13:00
7f1b95fbda Removes force_outpaint param 2022-11-27 03:35:49 +13:00
52c79fa097 Lints 2022-11-27 03:35:49 +13:00
62ac725ba9 Adds brush color alpha hotkey 2022-11-27 03:35:49 +13:00
db188cd3c3 Color picker does not overwrite user-selected alpha 2022-11-27 03:35:49 +13:00
e67ef4aec2 Committing color picker color changes tool to brush 2022-11-27 03:35:49 +13:00
473869b8ed Fixes mask brush preview color 2022-11-27 03:35:49 +13:00
c8c1b3e217 Simplifies Accordion
Prep for adding reset buttons for each section
2022-11-27 03:35:49 +13:00
fcd3ef1f98 Fixes invoke hotkey not working in input fields 2022-11-27 03:35:49 +13:00
a7f11a8c09 Fixes variation params not set correctly when recalled 2022-11-27 03:35:49 +13:00
318426b67a Changes color picker preview to circles 2022-11-27 03:35:49 +13:00
6f3e99efc3 Un-floors cursor position 2022-11-27 03:35:49 +13:00
7515bcfe78 Fixes iterations being disabled when seed random & variations are off 2022-11-27 03:35:49 +13:00
8d0ef022eb Lints & builds fresh bundle 2022-11-27 03:35:49 +13:00
9f1c1cf2e6 Adds color picker 2022-11-27 03:35:49 +13:00
d44112c209 Builds fresh bundle 2022-11-27 03:35:49 +13:00
b31f90c0bd Fixes postprocessing not being disabled when clicking use all 2022-11-27 03:35:49 +13:00
344cdf0ade Renames "Threshold" > "Noise Threshold" 2022-11-27 03:35:49 +13:00
500bde5b0e Fixes missing threshold and perlin parameters in metadata viewer 2022-11-27 03:35:49 +13:00
df03927ec6 Fixes img2img attempting inpaint when init image has transparency 2022-11-27 03:35:49 +13:00
419f670f86 Updates npm dependencies 2022-11-27 03:35:49 +13:00
30dc9220c1 Fixes crash on cancel with intermediates enabled, fixes #1416 2022-11-27 03:35:49 +13:00
941d427302 Adds single-column gallery layout 2022-11-27 03:35:49 +13:00
876ae7f70f Move full screen hotkey to floating to prevent tab rerenders 2022-11-27 03:35:49 +13:00
a86049f822 Adds Training icon 2022-11-27 03:35:49 +13:00
ec3d25d778 Add Training WIP Tab 2022-11-27 03:35:49 +13:00
69a4a6fec5 Simplify fullscreen hotkey selector 2022-11-27 03:35:49 +13:00
7b76b79887 Floating panel re-render fix 2022-11-27 03:35:49 +13:00
3ea732365c Fix rerenders on model select 2022-11-27 03:35:49 +13:00
dc5d696ed2 Builds fresh bundle 2022-11-27 03:35:49 +13:00
0060551490 Fixes missing postprocessed image metadata before refresh 2022-11-27 03:35:49 +13:00
2fcc7d9b36 Isolate Cursor Pos debug text on canvas to prevent rerenders 2022-11-27 03:35:49 +13:00
78217f5ef9 Fix unnecessary gallery re-renders 2022-11-27 03:35:49 +13:00
c6112e3295 memoize outpainting options 2022-11-27 03:35:49 +13:00
8b08af714d Tab Styling Fixes 2022-11-27 03:35:49 +13:00
723dcf4236 Adds infill method 2022-11-27 03:35:49 +13:00
ddfd82559f Styling Updates 2022-11-27 03:35:49 +13:00
8488575e5c Minor styling fixes to new options panel layout 2022-11-27 03:35:49 +13:00
7e4e51b224 Removes Advanced checkbox, cleans up options panel for unified canvas 2022-11-27 03:35:49 +13:00
f3b7316683 Fix to gallery resizing 2022-11-27 03:35:49 +13:00
25b19b9ab8 Add loopback to just img2img. Remove from settings. 2022-11-27 03:35:49 +13:00
9a6a970771 Fix gallery not resizing correctly on open and close 2022-11-27 03:35:49 +13:00
93de78b6e8 Highlight mask icon when on mask layer 2022-11-27 03:35:49 +13:00
00da042dab Update feature tooltip text 2022-11-27 03:35:49 +13:00
6445e802f6 Fix Lightbox images of different res not centering 2022-11-27 03:35:49 +13:00
7caf20aad3 Builds fresh bundle 2022-11-27 03:35:49 +13:00
11969c2e2e Fixes gallery width on lightbox, fixes gallery button expansion 2022-11-27 03:35:49 +13:00
e821b97cfc Linting 2022-11-27 03:35:49 +13:00
ef1dbdb33d Adds outpainting specific options 2022-11-27 03:35:49 +13:00
0cdb7bb0cd Fixes metadata viewer not showing metadata after refresh
Also adds Dream-style prompt to metadata
2022-11-27 03:35:49 +13:00
306ed44e19 Moves Loopback to app settings 2022-11-27 03:35:49 +13:00
b0810e1ed7 Adds IAIAlertDialog component 2022-11-27 03:35:49 +13:00
089c85a017 Fixes bug when postprocessing image with no metadata 2022-11-27 03:35:49 +13:00
a1d80fd106 Cap gallery size on canvas tab so it doesn't overflow 2022-11-27 03:35:49 +13:00
d9c7a28c90 Improves gallery resize behaviour 2022-11-27 03:35:49 +13:00
c787a3a801 Styling fixes 2022-11-27 03:35:49 +13:00
1f772e4bdc Fix input checkbox styling being incorrect on light theme 2022-11-27 03:35:49 +13:00
cb7458db77 Fix styling on alert modals 2022-11-27 03:35:49 +13:00
ef482b4d3e Builds fresh bundle 2022-11-27 03:35:49 +13:00
3e22160462 Updates mask options popover behavior 2022-11-27 03:35:49 +13:00
6a3d725dbb Adds clear temp folder 2022-11-27 03:35:49 +13:00
8a16c8a196 Crop to Bounding Box > Save Box Region Only 2022-11-27 03:35:49 +13:00
90eaac5134 Masking option tweaks 2022-11-27 03:35:49 +13:00
896c2532c7 Adds option to crop to bounding box on save 2022-11-27 03:35:49 +13:00
f68702520b Update Layer hotkey display to UI 2022-11-27 03:35:49 +13:00
286e46aaa3 Fix gallery maxwidth on unified canvas 2022-11-27 03:35:49 +13:00
088fd97418 Rearrange some canvas toolbar icons
Put brush stuff together and canvas movement stuff together
2022-11-27 03:35:49 +13:00
e1e978b423 Adds Save to Gallery button to staging toolbar 2022-11-27 03:35:49 +13:00
d27d92325d Fixes bug where discarding staged images results in loss of history 2022-11-27 03:35:49 +13:00
80f6f9a931 First pass on Canvas options panel 2022-11-27 03:35:49 +13:00
7dff8ccd31 Styles buttons for clearing canvas history and mask 2022-11-27 03:35:49 +13:00
3f6b275bec Image gallery resize/style tweaks 2022-11-27 03:35:49 +13:00
5ed6a31b97 Removes reasonsWhyNotReady
The popover doesn't play well with the button being disabled, and I don't think it adds any value.
2022-11-27 03:35:49 +13:00
b72b61b790 Styling updates 2022-11-27 03:35:49 +13:00
b81231823e Builds fresh bundle 2022-11-27 03:35:49 +13:00
c7c6940e1a Fixes repo root .gitignore ignoring frontend things 2022-11-27 03:35:49 +13:00
6c33d1356d Removes unused imports 2022-11-27 03:35:49 +13:00
f08c78a043 Minor bugfixes
- When doing long-running canvas image exporting actions, display indeterminate progress bar
- Fix staging area image outline not displaying after committing/discarding results
2022-11-27 03:35:49 +13:00
b6dd5b664c Fixes bug causing gallery to close on context menu open 2022-11-27 03:35:49 +13:00
76e7e82f5e Removes stray console.log() 2022-11-27 03:35:49 +13:00
d4376ed240 Adds hotkey to reset canvas interaction state
If the canvas' interaction state (e.g. isMovingBoundingBox, isDrawing, etc.) gets stuck somehow, the user can press Escape to reset the state.
2022-11-27 03:35:49 +13:00
9d34213b4c Gracefully handles corrupted images; fixes #1486
- App does not crash if corrupted image loaded
- Error is displayed in the UI console and CLI output if an image cannot be loaded
2022-11-27 03:35:49 +13:00
b908f2b4bc Improves metadata handling, fixes #1450
- Removes model list from metadata
- Adds generation's specific model to metadata
- Displays full metadata in JSON viewer
2022-11-27 03:35:49 +13:00
9418324030 Cleans up IAICanvasStatusText 2022-11-27 03:35:49 +13:00
0f6856b719 Fixes canvas toolbar upload button 2022-11-27 03:35:49 +13:00
83d8e69219 Reworks canvas toolbar 2022-11-27 03:35:49 +13:00
7f999e9dfc Fixes another similar index error, simplifies logic 2022-11-27 03:35:49 +13:00
0c3ae232af WIP - Lightbox Fixes
Still need to fix the images not being centered on load when the image res changes
2022-11-27 03:35:49 +13:00
9950790f4c Fix index error on going past last image in Gallery 2022-11-27 03:35:49 +13:00
b50a1eb63f Disables canvas image saving functions when processing 2022-11-27 03:35:49 +13:00
d55b1e169c Fix Lightbox Issues 2022-11-27 03:35:49 +13:00
1071a12777 Thumbnail size = 256px 2022-11-27 03:35:49 +13:00
d987d0a336 Saves thumbnails to separate thumbnails directory 2022-11-27 03:35:49 +13:00
50a67a7172 Implements thumbnails for gallery
- Thumbnails are saved whenever an image is saved, and when gallery requests images from server
- Thumbnails saved at original image aspect ratio with width of 128px as WEBP
- If the thumbnail property of an image is unavailable for whatever reason, the image's full size URL is used instead
2022-11-27 03:35:49 +13:00
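
The thumbnail scheme in the commit above (fixed 128px width, original aspect ratio, WEBP, separate directory; a later commit bumps the width to 256px) can be sketched with Pillow. Function and path names are illustrative:

```python
from pathlib import Path
from PIL import Image

def save_thumbnail(image_path: Path, thumb_dir: Path, width: int = 128) -> Path:
    # Preserve the aspect ratio: scale height by the same factor as width.
    thumb_dir.mkdir(parents=True, exist_ok=True)
    with Image.open(image_path) as im:
        height = round(im.height * width / im.width)
        thumb_path = thumb_dir / f"{image_path.stem}.webp"
        im.resize((width, height)).save(thumb_path, format="WEBP")
    return thumb_path
```
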
a3308c853d Fix canvas resizing when both options and gallery are unpinned 2022-11-27 03:35:49 +13:00
cde395e02f Hotkey Cleanup
- Viewer is now Z
- Canvas Move tool is V - sync with PS
- Removed some unused hotkeys
2022-11-27 03:35:49 +13:00
e7f670a5b6 Fixes stage position changing on zoom 2022-11-27 03:35:49 +13:00
917c576ddb Fix missing key on ThemeChanger map 2022-11-27 03:35:49 +13:00
dfc0c587b1 Adds theme changer popover 2022-11-27 03:35:49 +13:00
548bcaceb2 Adds model drop-down to site header 2022-11-27 03:35:49 +13:00
5fd43fca13 Fixes paste image to upload 2022-11-27 03:35:49 +13:00
37a356d377 Improves canvas status text and adds option to toggle debug info 2022-11-27 03:35:49 +13:00
cccbfb12aa Removes stale code 2022-11-27 03:35:49 +13:00
d018b2d7a7 Fixes intermediate images being tiny in txt2img/img2img 2022-11-27 03:35:49 +13:00
e358adecdd Fix metadata viewer image url length when viewing intermediate 2022-11-27 03:35:49 +13:00
cdc5f66592 Fixes Use All Parameters 2022-11-27 03:35:49 +13:00
b8cebf29f2 Adds staging area hotkeys, disables gallery left/right when staging 2022-11-27 03:35:49 +13:00
68aebad7ad Fixes staging area outline 2022-11-27 03:35:49 +13:00
ae4a44de3e Fixes Canvas Auto Save to Gallery 2022-11-27 03:35:49 +13:00
2ab868314f Reorganises app file structure 2022-11-27 03:35:49 +13:00
bc46c46835 Refactors upload-related async thunks
- Now standard thunks instead of RTK createAsyncThunk()
- Adds toasts for all canvas upload-related actions
2022-11-27 03:35:49 +13:00
d82a21cfb2 Integrates #1487 - touch events
Need to add:
- Pinch zoom
- Touch-specific handling (some things aren't quite right)
2022-11-27 03:35:49 +13:00
87439feeb2 Add arguments to use SSL to webserver 2022-11-27 03:35:49 +13:00
e5951ad098 Revert "Fix theme changer not displaying current theme on page refresh"
This reverts commit 903edfb803e743500242589ff093a8a8a0912726.
2022-11-27 03:35:49 +13:00
4f51680307 Staging Area delete button is now red
So it doesn't feel blended in with the rest of them.
2022-11-27 03:35:49 +13:00
d0ceabd372 Fix staging area display toggle not working 2022-11-27 03:35:49 +13:00
2bda3d6d2f Unify Brush and Eraser Sizes 2022-11-27 03:35:49 +13:00
a96af7a15d Fix tab count in hotkeys panel 2022-11-27 03:35:49 +13:00
93192b90f4 Fix theme changer not displaying current theme on page refresh 2022-11-27 03:35:49 +13:00
024acf42af Update Hotkey Info
Add missing tooltip hotkeys and update the hotkeys modal to reflect the new hotkeys for the Unified Canvas.
2022-11-27 03:35:49 +13:00
04cb2d39cb Adds useToastWatcher hook
- Dispatch an `addToast` action with standard Chakra toast options object to add a toast to the toastQueue
- The hook is called in App.tsx and just useEffect's w/ toastQueue as dependency to create the toasts
- So now you can add toasts anywhere you have access to `dispatch`, which includes middleware and thunks
- Adds first usage of this for the save image buttons in canvas
2022-11-27 03:35:49 +13:00
c69573e65d Disables canvas actions which cannot be done during processing 2022-11-27 03:35:49 +13:00
84f702b6d0 Resets bounding box coords/dims when no image present 2022-11-27 03:35:49 +13:00
bb70c32ad5 Improves behaviour when setting init canvas image/reset view 2022-11-27 03:35:49 +13:00
425a1713ab Fixes possible hang on MaskCompositer 2022-11-27 03:35:49 +13:00
70e67c45dd Fixes canvas showing spinner on first load
Also adds good default canvas scale and positioning when no image is on it
2022-11-27 03:35:49 +13:00
07ca0876ec Updates hotkeys 2022-11-27 03:35:49 +13:00
aa96a457b6 Adds hotkeys and refactors sharing of konva instances
Adds hotkeys to canvas. As part of this change, the access to konva instance objects was refactored:

Previously closure'd refs were used to indirectly get access to the konva instances outside of react components.

Now, getter and setter functions are used to provide access directly to the konva objects.
2022-11-27 03:35:49 +13:00
e28599cadb Sets status immediately when clicking Invoke 2022-11-27 03:35:49 +13:00
ae6dd219d9 Fix Current Image display background going over image bounds 2022-11-27 03:35:49 +13:00
19322fc1ec Fixes save to gallery including empty area, adds download and copy image 2022-11-27 03:35:49 +13:00
635e7da05d Abandons "inpainting" canvas lock 2022-11-27 03:35:49 +13:00
c0005eb063 Fixes bounding box not being rounded to 64 2022-11-27 03:35:49 +13:00
74485411a8 Fixes send to buttons 2022-11-27 03:35:49 +13:00
ed70fc683c Fixes reset canvas view when locked 2022-11-27 03:35:49 +13:00
425d3bc95d Clips lines drawn while canvas locked
When drawing with the canvas locked, if a brush stroke gets too close to the edge of the canvas and would extend past it, the part of the stroke beyond the edge will be seen after unlocking the canvas.

This could cause a problem if you unlock the canvas and now have a bunch of strokes just outside the init image area, which are far back in undo history and you cannot easily erase.

With this change, lines drawn while the canvas is locked get clipped to the initial image bbox, fixing this issue.

Additionally, the merge and save to gallery functions have been updated to respect the initial image bbox so they function how you'd expect.
2022-11-27 03:35:49 +13:00
3994c28b77 Fixes 2px layout shift on toggle canvas lock 2022-11-27 03:35:49 +13:00
0100a63b59 Stops unnecessary canvas rescales on gallery state change 2022-11-27 03:35:49 +13:00
432dc704a6 Organises features/canvas 2022-11-27 03:35:49 +13:00
1d540219fa Fixes bounding box ending up offscreen 2022-11-27 03:35:49 +13:00
827f516baf Organises features/canvas 2022-11-27 03:35:49 +13:00
48ad0c289c Rebases on dev, updates new env files w/ patchmatch 2022-11-27 03:35:49 +13:00
223e0529ba Fixes app after removing in/out-painting refs 2022-11-27 03:35:49 +13:00
98e3bbb3bd Add patchmatch and infill_method parameter to prompt2image (options are 'patchmatch' or 'tile'). 2022-11-27 03:35:49 +13:00
e3efcc620c Removes all references to split inpainting/outpainting canvas 2022-11-27 03:35:49 +13:00
15dd1339d2 Initial unification of canvas 2022-11-27 03:35:49 +13:00
caf8f0ae35 Removes console.log from redux-persist patch 2022-11-27 03:35:49 +13:00
cfb87bc116 WIP refactor to unified canvas 2022-11-27 03:35:49 +13:00
c0ad1b3469 Fixes: outpainting temp images show in gallery 2022-11-27 03:35:49 +13:00
4382cd0b91 Moves image uploading to HTTP
- It all seems to work fine
- A lot of cleanup is still needed
- Logging needs to be added
- May need types to be reviewed
2022-11-27 03:35:49 +13:00
b049bbc64e Fix iterative outpainting by restoring original images 2022-11-27 03:35:49 +13:00
34395ff490 Fixes crashes during iterative outpaint. Still doesn't work correctly though. 2022-11-27 03:35:49 +13:00
1bc1085542 Fixes bbox not resizing in outpainting if partially off screen 2022-11-27 03:35:49 +13:00
d7884432c9 Fixes inpainting not doing img2img when no mask 2022-11-27 03:35:49 +13:00
82f6402d04 Hides staging area outline on mouseover prev/next 2022-11-27 03:35:49 +13:00
0e7b735611 Fixes error on inpainting paste back
`TypeError: 'float' object cannot be interpreted as an integer`
2022-11-27 03:35:49 +13:00
5304ef504c Fixes wonky canvas layer ordering & compositing 2022-11-27 03:35:49 +13:00
17b295871f Outpainting tab loads to empty canvas instead of upload 2022-11-27 03:35:49 +13:00
70dcfa1684 Builds fresh bundle 2022-11-27 03:35:49 +13:00
5d484273ed Fixes "use all" not setting variationAmount
Now sets to 0 when the image had variations.
2022-11-27 03:35:49 +13:00
179656d541 Adds staging area 2022-11-27 03:35:49 +13:00
73099af6ec Fixes disappearing canvas grid lines 2022-11-27 03:35:49 +13:00
c223d93b4d Fix gallery width size for Outpainting
Also fixes the canvas resizing failing on fast pushes
2022-11-27 03:35:49 +13:00
4e34194479 Increases CFG Scale max to 200 2022-11-27 03:35:49 +13:00
00e2674076 Add Metadata To Viewer 2022-11-27 03:35:49 +13:00
0a2e67df1a Hotkeys improvement 2022-11-27 03:35:49 +13:00
7831468304 Canvas styling 2022-11-27 03:35:49 +13:00
88d02585e7 Limits history to 256 for each of undo and redo 2022-11-27 03:35:49 +13:00
f82e82f1bb Debounce > 300ms 2022-11-27 03:35:49 +13:00
317762861f Fixes invert mask 2022-11-27 03:35:49 +13:00
3f1360368d Fixes undo/redo 2022-11-27 03:35:49 +13:00
d5467e7db5 Attempts to fix redux-persist debounce patch 2022-11-27 03:35:49 +13:00
9284983429 Updates package.json to use redux-persist patches 2022-11-27 03:35:49 +13:00
bb79c78fe8 Fixes AttributeError: 'dict' object has no attribute 'invert_mask' 2022-11-27 03:35:49 +13:00
e3735ebb45 Adds debouncing 2022-11-27 03:35:49 +13:00
eb17dfdeaa Patches redux-persist and redux-deep-persist with debounced persists
Our app changes redux state very, very often. As our undo/redo history grows, the calls to persist state start to take on the order of 100ms, due to the deep cloning of the history. This causes very noticeable performance lag.

The deep cloning is required because we need to blacklist certain items in redux from being persisted (e.g. the app's connection status).

Debouncing the whole process of persistence is a simple and effective solution. Unfortunately, `redux-persist` dropped `debounce` between v4 and v5, replacing it with `throttle`. `throttle`, instead of delaying the expensive action until a period of X ms of inactivity, simply ensures the action is executed at least every X ms. Of course, this does not fix our performance issue. 

The patch is very simple. It adds a `debounce` argument - a number of milliseconds - and debounces `redux-persist`'s `update()` method (provided by `createPersistoid`) by that many ms.

Before this, I also tried writing a custom storage adapter for `redux-persist` to debounce the calls to `localStorage.setItem()`. While this worked and was far less invasive, it doesn't actually address the issue. It turns out `setItem()` is a very fast part of the process.

We use `redux-deep-persist` to simplify the `redux-persist` configuration, which can get complicated when you need to blacklist or whitelist deeply nested state. There is also a patch here for that library because it uses the same types as `redux-persist`.

Unfortunately, the last release of `redux-persist` used a package `flat-stream` which was malicious and has been removed from npm. The latest commits to `redux-persist` (about 1 year ago) do not build; we cannot use the master branch. And between the last release and last commit, the changes have all been breaking.

Patching this last release (about 3 years old at this point) directly is far simpler than attempting to fix the upstream library's master branch or figuring out an alternative to the malicious and now non-existent dependency.
2022-11-27 03:35:49 +13:00
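
The patch itself is JavaScript, but the debounce-vs-throttle distinction the message draws is language-agnostic. A minimal Python illustration of debounce, for illustration only (each call cancels the pending one, so the expensive work runs only after a quiet period, whereas throttle still fires at least once every interval):

```python
import threading

def debounce(wait: float):
    def decorator(fn):
        timer = None
        def wrapped(*args, **kwargs):
            nonlocal timer
            if timer is not None:
                timer.cancel()          # restart the quiet-period countdown
            timer = threading.Timer(wait, fn, args, kwargs)
            timer.start()
        return wrapped
    return decorator

@debounce(0.3)
def persist(state):
    print("persisting", state)          # stand-in for the expensive serialization
```
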
1114ac97e2 Fixes (?) spacebar issues 2022-11-27 03:35:49 +13:00
c7ef41af54 Changes "Invert Mask" to "Preserve Masked Areas" 2022-11-27 03:35:49 +13:00
7075a17091 Implements invert mask 2022-11-27 03:35:49 +13:00
7f0fb47cf3 Remove save button from Canvas Controls (cleanup) 2022-11-27 03:35:49 +13:00
775f032c56 Mask Brush Preview now always at 0.5 opacity
The new mask is only visible properly at max opacity, but at max opacity the brush preview becomes fully opaque, blocking the view. So the mask brush preview now remains at 0.5 no matter what the Brush opacity is.
2022-11-27 03:35:49 +13:00
5410d42da0 Disable stage info in Inpainting Tab 2022-11-27 03:35:49 +13:00
e21e901fa2 Fixes inpainting + code cleanup 2022-11-27 03:35:49 +13:00
00385240e7 Adds mask design file 2022-11-27 03:35:49 +13:00
0a96d2a888 Fixes mask for FF 2022-11-27 03:35:49 +13:00
016551e036 Fixes warning about NaN? 2022-11-27 03:35:49 +13:00
b8bb46042c SVG mask 2022-11-27 03:35:49 +13:00
b44e9c7752 Changes mask to diagonal line pattern 2022-11-27 03:35:49 +13:00
8ed10c732b Revert "Fix Inpainting Canvas Rendering"
This reverts commit 114a74982944fbcd0feb3ce79e81fade4d3da147.
2022-11-27 03:35:49 +13:00
82a53782d0 Fix Inpainting Canvas Rendering 2022-11-27 03:35:49 +13:00
6adebf065f Fixes bad import 2022-11-27 03:35:49 +13:00
83f369053f Fixes issue with intermediates size
Sorry @lstein !
2022-11-27 03:35:49 +13:00
77d3839860 Do not show progress images in the viewer 2022-11-27 03:35:49 +13:00
c02a0da837 Builds fresh bundle 2022-11-27 03:35:49 +13:00
4f4c6bbe33 Fix delete hotkey not working 2022-11-27 03:35:49 +13:00
72ea5453ce Fix broken styling on the Clear Mask button 2022-11-27 03:35:49 +13:00
458081f9c9 Builds fresh bundle 2022-11-27 03:35:49 +13:00
d1fbe81a60 Pins react-hotkeys-hook to v4.0.2
See: https://github.com/JohannesKlauss/react-hotkeys-hook/issues/835
2022-11-27 03:35:49 +13:00
6c7191712f Rebases against development 2022-11-27 03:35:49 +13:00
248068fe5d make the docstring more readable and improve the list_models logic
Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
2022-11-26 08:49:41 -05:00
9b281856ee Merge branch 'development' into development 2022-11-26 17:37:57 +05:30
fdf41cc739 Installer final tweaks (#1550)
This is the same as PR #1537 except that it removes a redundant
`scripts` argument from `setup.py` that appeared at some point.

I also had to unpin the github dependencies in `requirements.in` in
order to get conda CI tests to pass. However, dependencies are still
pinned in `requirements-base.txt` and the environment files, and install
itself is working. So I think we are good.
2022-11-25 11:24:30 -05:00
d079445943 install frontend/dist into package directory
- When invokeai is installed with `pip install .`, the frontend
  will be in the venv directory under invokeai.
- When invokeai is installed with `pip install -e .`, the frontend
  will be in the source repo.
- invoke_ai_web_server.py will look in both places using relative
  addressing.
2022-11-25 06:42:00 +00:00
a997ab2cf6 Merge branch 'development' into development 2022-11-25 11:06:06 +05:30
e98068a546 unpinned clip, clipseg, gfpgan and k-diffusion
- conflicts with their counterparts in the environment files were
  causing the CI conda-based tests to fail.
- installer seems to work still
2022-11-25 04:49:41 +00:00
b945ae4e01 two more fixups
1. removed redundant `data_files` argument from setup.py
2. upped requirement to Python >= 3.9. This is due to a feature
   used in `argparse` that is only available in 3.9 or higher.
2022-11-25 03:50:52 +00:00
b23c471cf0 make mauwii code owner for docker build (#1549) 2022-11-25 04:46:10 +01:00
964e584bd3 remove redundant scripts arg from setup.py 2022-11-25 03:03:24 +00:00
461358bdde Merge branch 'development' into feat-install-unify-setup-requirements-pip 2022-11-24 21:55:39 -05:00
2433cc344a add test-invoke-pip.yml (#1521)
* add test-invoke-pip.yml

* update requirements-base.txt to fix tests

* install requirements-base.txt separate
since it requires torch to already be installed
also restore the original requirements-base.txt after a successful test in my fork

* restore original requirements
add `basicsr>=1.4.2` to requirements-base.txt
remove second installation step

* re-add previously overlooked req in lin-cuda

* fix typo in setup.py - `scripts/preload_models.py`

* use GFPGAN from branch `basicsr-1.4.2`

* remove `basicsr>=1.4.2` from base reqs

* add INVOKEAI_ROOT to env

* disable upgrade of `pip`, `setuptools` and `wheel`

* try to use a venv which should not contain `wheel`

* add relative path to pip command

* use `configure_invokeai.py --no-interactive --yes`

* set grpcio to `<1.51.0`

* revert changes to use venv

* remove `--prefer-binary`

* disable step to create models.yaml
since this will not be used anymore with new `configure_invokeai.py`

* use `pip install --no-binary=":all:"`

* another try to use venv

* try uninstalling wheel before installing reqs

* don't use requirements.txt as filename

* update cache-dependency-path

* add facexlib to requirements-base.txt

* first install requirements-base.txt

* first install `-e .`, then install requirements
I know that this is obviously the wrong order, but I still have a feeling

* add facexlib to requirements.in

* remove `-e .` from reqs and install after reqs

* unpin torch and torchvision in requirements.in

* fix model dl path

* fix curl output path

* create directory before downloading model

* set INVOKEAI_ROOT_PATH
https://docs.github.com/en/actions/learn-github-actions/environment-variables#naming-conventions-for-environment-variables

* INVOKEAI_ROOT ${{ env.GITHUB_WORKSPACE }}/invokeai

* fix matrix stable-diffusion-model-dl-path

* fix INVOKEAI_ROOT

* fix INVOKEAI_ROOT

* add --root and --outdir to run-tests step

* create models.yaml from example

* fix scripts variable in setup.py
by removing unused scripts

* fix archive-results path

* fix workflow to reflect latest code changes

* fix copy paste error

* fix job name

* fix matrix.stable-diffusion-model

* restructure matrix

* fix `activate conda env` step

* update the environment yamls
use same 4 git packages as for pip

* rename job in test-invoke-conda

* add tqdm to environment-lin-amd.yml

* fix python commands in test-invoke-conda.yml

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-11-25 01:24:24 +01:00
bd2eea1c70 Merge branch 'development' into development 2022-11-24 10:30:48 +05:30
16df759499 fix non-interactive behavior of config_invokeai.py
This corrects the behavior of --no-interactive, which was in fact
asking for interaction!

New behavior:

If you pass --no-interactive it will behave exactly as it did before
and completely skip the downloading of SD models.

If you pass --yes it will do almost the same, but download the
recommended models. The combination of the two arguments is the same
as --no-interactive.
2022-11-23 23:07:47 -05:00
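
The flag behavior described above can be summarized as a three-way branch; the function and names below are hypothetical, not the script's actual API:

```python
RECOMMENDED_MODELS = ["stable-diffusion-1.5"]  # hypothetical default list

def ask_user_for_models() -> list:
    ...  # interactive selection flow, elided

def select_models(no_interactive: bool, yes_to_all: bool) -> list:
    if no_interactive:
        return []                        # skip downloading SD models entirely
    if yes_to_all:
        return list(RECOMMENDED_MODELS)  # download defaults without prompting
    return ask_user_for_models()         # otherwise ask interactively
```
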
5a1a36ec29 feat(install): unify setup.py, requirements.in, pip
This allows populating setup.py's 'install_requires' directly from 'requirements.in'

- setup.py:
  - read 'requirements.in' instead of 'requirements.txt'
  - add correct upstream pytorch repo to "dependency_links"
- requirements.in:
  - append "name @" to git packages
  - fix torch repo URL -> 'download.pytorch.org/whl/torch_stable.html'

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-23 13:34:42 -05:00
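
Populating 'install_requires' from 'requirements.in' typically looks like the sketch below; this is an illustration, not the project's actual setup.py:

```python
from pathlib import Path
from setuptools import setup

def read_requirements(path: str = "requirements.in") -> list:
    # Keep requirement lines; skip blanks, comments, and pip directives.
    lines = Path(path).read_text().splitlines()
    return [
        line.strip()
        for line in lines
        if line.strip() and not line.strip().startswith(("#", "-"))
    ]

setup(name="InvokeAI", install_requires=read_requirements())
```
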
c76badfb08 make the docstring more readable and improve the list_models logic
Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
2022-11-23 10:40:27 +05:30
71c4f401b0 Merge branch 'fix-model-load-error-handling' of github.com:/invoke-ai/InvokeAI into fix-model-load-error-handling 2022-11-22 19:15:06 +00:00
c59b9897d9 remove file that shouldn't have been in PR 2022-11-22 19:14:52 +00:00
4cf1c856ed Merge branch 'development' into fix-model-load-error-handling 2022-11-22 14:11:26 -05:00
a78a1020be grpcio 1.51.0 is broken on M1 Macs. limit it to last good version til fixed 2022-11-22 12:05:45 -05:00
90cb7a6442 fix behavior when models.yaml missing entirely 2022-11-22 16:56:38 +00:00
8f5cded86e fix regression in ldm.invoke.model_cache.list_models()
- this was introduced in PR #1525 and not caught during my
  code review
2022-11-22 16:46:26 +00:00
02d02a86b1 gracefully handle broken or missing models at initial load time
- If the initial model fails to load, invoke.py will inform the user that
  something is wrong with models.yaml or the models themselves and
  drop the user into configure_invokeai.py to repair the problem.

- The model caching system will no longer try to reload the current model
  if there is none.
2022-11-22 16:36:11 +00:00
ba9c695463 Merge branch 'development' into fix-model-load-error-reporting 2022-11-22 16:24:00 +00:00
8202f34f38 Merge remote-tracking branch 'origin' into fix-model-load-error-reporting 2022-11-22 16:22:29 +00:00
40a7f47d22 change typehint "a|b" operation to Union[a,b] to run on Python < 3.10
- this incompatibility was introduced by #1525 and missed during
  code review
2022-11-22 11:21:04 -05:00
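
The incompatibility above in a nutshell: PEP 604's `a | b` annotation syntax raises a TypeError at import time on Python 3.9 and earlier, so the typing module's spellings are used instead (the function is illustrative):

```python
from typing import Optional, Union

def get_model(key: Union[int, str]) -> Optional[dict]:  # works on Python 3.9
    ...

# def get_model(key: int | str) -> dict | None: ...     # 3.10+ only
```
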
37bcf9cc47 this small fix adds back the load_models.py script
- fixes broken setup.py in current dev
- it is just an alias for configure_invokeai.py
- preload_models.py will be deprecated, but for now
  it is a second alias
2022-11-22 11:21:04 -05:00
0340d9ad53 fix(install): more fixes
- install scripts:
   - allow EN-abling pip cache (use 'use-cache' as an arg to the install script)
   - debug message showing which sourceball we're downloading
   - add 'wheel' to pip update, so we can speed up installs from source (and quiet deprecations)
- install.sh: use absolute path for micromamba
- setup.py:
  - fill 'install_requires' using 'requirements.in'
  - fix 'load_models' script name
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-22 11:06:50 -05:00
0d35a67e9c fix run-breaking typo (#1532) 2022-11-22 14:27:23 +01:00
1260e28d94 fix typo 2022-11-22 14:21:15 +01:00
229f782e3b check the function signatures and add some easy annotations
Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
2022-11-22 08:14:58 -05:00
c15b839dd4 remove additional newline from the textwrap.dedent string
Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
2022-11-22 08:14:58 -05:00
a095214e52 cleanup ldm/invoke/model_cache.py
remove duplicate import: os
ldm.util.ask_user is imported only once now
introduce the textwrap and contextlib packages to clean up the code
a bare return returns None implicitly, so it is omitted
a function returns None by default, so it is omitted
dict.get returns None by default if the value is not found, so it is omitted
the type of True is bool, and if the function only ever returns True then it should not return anything in the first place
added some indentation and line breaks to further improve readability

Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
2022-11-22 08:14:58 -05:00
8e81425e89 fix outcropping crash when png has no InvokeAI metadata
- Closes #1461
2022-11-21 16:35:00 -05:00
c5cbe8f87d When doing -t/--log_tokenization, also log prompt parser output (#1529)
The log was deleted at some point; this brings it back when the user passes
`--log_tokenization`/`-t`
2022-11-21 19:37:55 +01:00
e0581a2c37 when doing --log_tokenization/-t also log parsed prompt 2022-11-21 19:27:44 +01:00
32f538bf3a fix another place where rename() should be replace() 2022-11-21 08:44:26 -05:00
3c5a14a814 fixes configure_invokeai.py crash on Windows systems
The step in which the new models.yaml file replaces the old one was
crashing on Windows because the os.rename() function there refuses
to replace an existing file, unlike the behavior on Linux and Mac.
The os.replace() function, which was introduced in Python 3,
supposedly fixes this.
2022-11-21 08:44:26 -05:00
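
A sketch of the portable pattern the fix relies on: write the new models.yaml to a temporary file, then swap it into place with os.replace(), which overwrites the target on all platforms (os.rename() raises FileExistsError on Windows when the target exists):

```python
import os
import tempfile

def rewrite_config(path: str, contents: str) -> None:
    # Write to a temp file in the same directory, then atomically replace.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(contents)
    os.replace(tmp, path)  # unlike os.rename(), overwrites on Windows too
```
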
0661256b61 Merge branch 'interactive-configuration' into development 2022-11-20 23:32:28 +00:00
602e35db65 Fix issues with '.' not being consistent when run using the web GUI. 2022-11-20 18:22:13 -05:00
bc7ece771d instead of linking the model file, use a custom models.yaml 2022-11-20 18:21:34 -05:00
38bdb440d0 remove several debugging messages
- dangling debug messages in several files, introduced during
  testing of the external root directory
- these need to be removed before they are interpreted as errors by users
2022-11-20 18:20:40 -05:00
ce8c2bea2f fix(install): load_models needs to be absolutely last
setup.py: Put in the name of the *product*, not the project

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-20 12:23:20 -05:00
3ac0f11e97 toil(invoke): more meaningful messaging
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-20 12:23:20 -05:00
98fe49cb55 hotfix(unified install): last minute changes missing from PR #1506
'requirements.in':
  - add picklescan
  - finally find a good compromise for torch (==1.12.0) and
    torchvision (==0.13.0) across all platforms
'invoke.sh: hotfix for MacOS - add `export PYTORCH_ENABLE_MPS_FALLBACK=1`

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-20 12:23:20 -05:00
2b7e3abe57 fix(args): fix INITFILE spelling (#1518)
fixes #1516
2022-11-20 01:45:27 +01:00
150c4a5d2d fix(args): fix INITFILE spelling 2022-11-19 12:01:02 -08:00
0381a853b5 add interactive configuration to the model loader
- Loader is renamed `configure_invokeai.py`, but `preload_models.py` is retained
  (as a shell) for backward compatibility

- At startup, if no runtime root directory exists and no `.invokeai` startup file is
  present, the user will be prompted to select the runtime and outputs directories.

- Also expanded the number of initial models offered to the user to include the
  most "liked" ones from HuggingFace, including the two trinart models, the
  PaperCut model, and the VoxelArt model.

- Created a configuration file for initial models to be offered to the user, at
  configs/INITIAL_MODELS.yaml
2022-11-19 19:20:28 +00:00
c79ec204ec Fixed default --embiggen_strength to None to avoid it being printed on every run (#1515) 2022-11-19 13:24:11 +01:00
8d3b1582a5 Fixed default to None 2022-11-19 11:50:26 +00:00
5fd7d71a7a remove several debugging messages
- dangling debug messages in several files, introduced during
  testing of the external root directory
- these need to be removed before they are interpreted as errors by users
2022-11-18 21:14:28 +00:00
1f0220697b Fix micromamba tar command for macOS
Moved the -O from after the file to after the tar command for compatibility with macOS

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2022-11-18 15:57:06 -05:00
18ae3949ef fix typo in error message 2022-11-18 20:53:49 +00:00
aa95510444 Merge branch 'development' into create-invokeai-run-directory 2022-11-18 15:27:51 -05:00
f33df25830 address all review comments; needs testing 2022-11-18 15:25:23 -05:00
3a5a8ceba5 Merge branch 'create-invokeai-run-directory' of github.com:/invoke-ai/InvokeAI into create-invokeai-run-directory 2022-11-18 19:35:45 +00:00
a1e5f17d1e realesrgan and facexlib now download models to correct directory
- fix issue in which both realesrgan and facexlib were downloading
  weight files to source directory

- cleaned up status reporting in load_models.py
2022-11-18 19:35:13 +00:00
303431be89 move CLI into its own module 2022-11-18 19:35:10 +00:00
8e9f80cc97 web server runs off runtime directory now 2022-11-18 19:34:28 +00:00
3ad598761c support for wheel building; webserver broken 2022-11-18 19:34:28 +00:00
b4eaf8b751 fix (unified installer): various fixes (#1506)
This list makes it look like there's a lot going on for a single commit,
but the changes are actually pretty small

- 'install'/'invoke' scripts:
  - use venv's 'activate' script instead of hacking PATH
- 'deactivate' before exiting, so we don't leave a confusing environment
hanging around
- 'setup.py':
- make 'install_requires' an accurate list of our direct dependencies,
as it should be
  - add more info/details for eventual use in pypi
- 'invoke' scripts: "developer console" invocation simplified/better
logging (it's now *much* more obvious from inspection what the
"developer console" actually *is*)
- 'requirements.in':
- move 'clipseg' package out of installer and into requirements where it
should be
- bump/pin 'accelerate' package to 0.14.0 to bypass torch 1.13 SIGKILL
issue on Windows (prep for when we decide to upgrade)
- pin 'torch' as well as 'torchvision', to reduce pip-compile's
confusion
- notebooks: delete unused/deprecated notebook installer

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-18 16:54:00 +01:00
fa608efa11 move CLI into its own module 2022-11-18 06:48:42 +00:00
e9d319bfde ensure web server works with legacy runtime directory 2022-11-18 06:48:14 +00:00
561721aef7 web server runs off runtime directory now 2022-11-18 06:10:52 +00:00
891c0f21d5 web server now observes --root option 2022-11-18 05:00:02 +00:00
8973ce7d47 support for wheel building; webserver broken 2022-11-18 03:21:07 +00:00
51c283ba56 fix(install): various fixes
- 'install'/'invoke' scripts: use venv 'activate' script
- 'setup.py':
  - make 'install_requires' accurate
  - add more details for eventual use in pypi
- 'invoke' scripts: "developer" console invocation simplified/better logging
- requirements:
  - move 'clipseg' package out of installer and into requirements where it should be
  - bump/pin 'accelerate' package to 0.14.0 to bypass torch 1.13 SIGKILL issue on Windows (prep for when we decide to upgrade)
- 'requirements.in': pin torch as well to reduce pip-compile's confusion
- notebooks: delete unused/deprecated notebook installer

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-11-17 16:35:24 -05:00
7d262fc158 Fix macOS install.sh by patching sysconfig (#1488)
On macOS, patch python sysconfig just before creating the venv so that
extensions (greenlet & grpcio) can build.

Ref https://github.com/indygreg/python-build-standalone/issues/103, in
particular the solution from @alecthomas posted here:
https://github.com/indygreg/python-build-standalone/issues/103#issuecomment-1234942425

To use: checkout, cd into `installer`, run `create_installers.sh`, copy
`InvokeAI-mac.zip` into an empty folder outside of your existing
invokeAI install, unzip it and run `install.sh`.
2022-11-17 22:34:23 +01:00
fdb16000ab add module __init__ files for backend 2022-11-17 19:54:28 +00:00
f62cc7db9d add a ~/.invokeai file the first time we load
- If there is not already a `.invokeai` file in the user's home directory
  the first time invoke.py runs, it will create an empty one with comments
  showing how to customize it.
2022-11-17 10:15:05 -05:00
9fa3e28dd4 Fixed Google Colab requirements URL
Signed-off-by: slashtechno <77907286+slashtechno@users.noreply.github.com>
2022-11-17 10:14:43 -05:00
9200b26f21 Merge branch 'development' into create-invokeai-run-directory 2022-11-16 23:10:46 -05:00
d998b2f806 Add picklescan to env files 2022-11-16 23:02:35 -05:00
ac8a7ff70b Unpin picklescan req and cleanup 2022-11-16 23:02:35 -05:00
2d6e0baa87 Add Model Scanning 2022-11-16 23:02:35 -05:00
c212b74990 add code of conduct 2022-11-16 21:40:36 -05:00
0352979a8b Added documentation about --embiggen_strength 2022-11-16 11:55:45 -05:00
70bd61d616 Fixed opt.embiggen_strength again 2022-11-16 11:55:45 -05:00
f2a6985c78 Added --embiggen_strength option 2022-11-16 11:55:45 -05:00
fe5a581313 allow images to be saved into invokeai run directory
- This fixes an issue in which generated images were not being saved
  into the ~/invokeai/outputs directory, but were instead being stored
  to a relative './outputs/img_samples' path as before.

- Note that if you specify a relative directory in the --outdir argument,
  it will now be interpreted as relative to the invokeai run directory.
  You will need to provide an absolute pathname in order to save the
  outputs outside this directory.

- Also found and fixed a minor problem in which commands with syntax
  errors were not being stored to the CLI command history.
2022-11-15 20:33:58 +00:00
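
A minimal sketch of the --outdir resolution described above (INVOKEAI_ROOT and resolve_outdir are hypothetical names written for illustration, not the actual InvokeAI code):

```python
from pathlib import Path

# Hypothetical constant standing in for the runtime root discovered at startup.
INVOKEAI_ROOT = Path.home() / "invokeai"

def resolve_outdir(outdir: str) -> Path:
    """Interpret a relative --outdir as relative to the runtime root."""
    path = Path(outdir).expanduser()
    if not path.is_absolute():
        # './outputs/img_samples' now lands under ~/invokeai, not the CWD
        path = INVOKEAI_ROOT / path
    return path

print(resolve_outdir("outputs/img_samples"))  # ~/invokeai/outputs/img_samples
print(resolve_outdir("/tmp/renders"))         # absolute paths are used as-is
```
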
2ec9792f50 fix clipseg model loading
- This fixes the clipseg loading code so that it looks in the root directory
  for the model.

- It also adds several __init__.py files needed to allow InvokeAI to be
  installed without the -e (editable) flag. This lets you delete the
  source code directory after installation.
2022-11-15 19:17:14 +00:00
a4204abfce This commit separates the InvokeAI source code from end-user files
- preload_models.py has been renamed load_models.py. I've left a
  shell legacy version with the previous name to avoid breaking any
  code.

- The load_models.py script now takes an optional --root argument,
  which points to an install directory for the models, scripts, config
  files, and the default outputs directory. In the future, the
  embeddings manager directory will also be stored here.

- If no --root is provided, and no init file or environment variable
  is present, load_models.py will install to '.' by default, which is
  the current behavior. (This has *not* been tested thoroughly.)

- The location of the root directory is stored in the file .invokeai
  in the user's home directory ($HOME on Linux/Mac, or HOMEPATH on
  Windows). The load_models.py script creates this file if it
  does not already exist.

- invoke.py and load_models.py use the following search path to find
  the install directory:

  1. Contents of the environment variable INVOKEAI_ROOT
  2. The --root=XXXXX option in ~/.invokeai
  3. The --root option passed on the script command line.
  4. As a last gasp, the current working directory (".")

    Running `python scripts/load_models.py --root ~/invokeai`  will
    create a directory structured like this (shortened for clarity):

    ~/invokeai
    ├── configs
    │   ├── models.yaml
    │   └── stable-diffusion
    │       ├── v1-finetune.yaml
    │       ├── v1-finetune_style.yaml
    │       ├── v1-inference.yaml
    │       ├── v1-inpainting-inference.yaml
    │       └── v1-m1-finetune.yaml
    ├── models
    │   ├── CompVis
    │   ├── bert-base-uncased
    │   ├── clipseg
    │   ├── codeformer
    │   ├── gfpgan
    │   ├── ldm
    │   │   └── stable-diffusion-v1
    │   │       ├── sd-v1-5-inpainting.ckpt
    │   │       └── vae-ft-mse-840000-ema-pruned.ckpt
    │   └── openai
    ├── outputs
    └── scripts
	├── dream.py
	├── images2prompt.py
	├── invoke.py
	├── legacy_api.py
	├── load_models.py
	├── merge_embeddings.py
	├── orig_scripts
	│   ├── download_first_stages.sh
	│   ├── train_searcher.py
	│   └── txt2img.py
	├── preload_models.py
	└── sd-metadata.py

1. You can now run invoke.py anywhere! Just copy it to one of your
   bin directories, or put the ~/invokeai/scripts onto your PATH.

2. git pulls will no longer fight with you over models.yaml

3. It keeps end users out of the source code repo and will create
   a path for us to do installs from invokeai.tar.gz.
2022-11-15 18:39:31 +00:00
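
A minimal sketch of that four-step search order (find_invokeai_root and the parsing details are assumptions for illustration, not the project's actual code):

```python
import os
from pathlib import Path

def find_invokeai_root(cli_root: str | None = None) -> Path:
    """Locate the install directory using the search order listed above."""
    # 1. contents of the environment variable INVOKEAI_ROOT
    env_root = os.environ.get("INVOKEAI_ROOT")
    if env_root:
        return Path(env_root).expanduser()
    # 2. a --root=XXXXX option recorded in ~/.invokeai
    init_file = Path.home() / ".invokeai"
    if init_file.exists():
        for token in init_file.read_text().split():
            if token.startswith("--root="):
                return Path(token.split("=", 1)[1]).expanduser()
    # 3. the --root option passed on the script command line
    if cli_root:
        return Path(cli_root).expanduser()
    # 4. as a last gasp, the current working directory
    return Path(".")
```
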
274b276133 model paths fixed, codeformer needs attention 2022-11-15 18:39:31 +00:00
7707bc7818 patch python sysconfig so that extensions (greenlet & grpcio) can build 2022-11-15 18:41:58 +01:00
4c035ad4ae update test-tube version requirements to match yaml files, partially fixes pip install build on macOS 2022-11-14 17:34:10 -05:00
e9090bca8f make @mauwii codeowner for the CI workflows 2022-11-14 12:45:15 -05:00
398a9bc0c6 fix incorrect bounding-box calculation in ImageResizer
- Under some circumstances, the image resizer was fitting
  the wrong dimension to the user-provided bounding box
  when an init image was provided.
- Closes #1470.
2022-11-14 17:41:02 +00:00
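
The fix boils down to choosing the scale factor from the correct axis. A toy sketch of the intended behavior (fit_within_box is a hypothetical helper, not the actual ImageResizer code):

```python
def fit_within_box(width: int, height: int, box_w: int, box_h: int) -> tuple[int, int]:
    """Scale (width, height) to fit inside (box_w, box_h), preserving aspect ratio."""
    # Taking the smaller of the two scale factors guarantees that *both*
    # dimensions fit; scaling by the wrong axis is the bug class fixed above.
    scale = min(box_w / width, box_h / height)
    return round(width * scale), round(height * scale)

assert fit_within_box(1024, 512, 512, 512) == (512, 256)
```
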
38b9658c15 adding troubleshooting tips to the newer doc 2022-11-14 15:32:22 +00:00
f04d1bab21 Merge branch 'development' into sync-dev-with-main 2022-11-13 21:51:17 +00:00
c23efb8e2b change installer download repo to main.tar.gz 2022-11-13 21:49:09 +00:00
5604d3c447 documentation hot fixes
- changes pointers to installation instructions from README
- Adds the changelog for the 2.1.3 release
2022-11-13 21:46:54 +00:00
206101f59d revert initializer words for embeddings 2022-11-13 15:47:28 -05:00
23348dcd3f sync dev to main 2022-11-13 13:47:26 +00:00
9bf6013fdd refactor(cross_attention_control): remove outer CrossAttentionControl class (#1459)
I was working on attention control in #1384, started making a few
changes to improve the typing and make it easier to work with. Then the
whitespace changes touched so many lines it seemed worth separating out
these refactoring operations to this PR so they don't get mixed up with
other functional changes.

It would be helpful to merge this to `development` before continuing
work on attention control in #1384

The github diff isn't good at showing these together since they changed
whitespace on so many lines. It may be easier to review by looking at
the individual commits, and/or toggling the "hide whitespace
differences" option in the view.
2022-11-13 14:20:18 +01:00
1d11e06e6f Remove gfpgan_dir
+ Update GFPGAN Model Path Defaults
>  Update them to match the new file hierarchy
2022-11-12 19:24:11 -05:00
47e6f94111 refactor(cross_attention_control): type hints and other lint 🚮 2022-11-12 11:25:39 -08:00
810fad9e06 refactor(cross_attention_control): re-order enum class for easier reference 2022-11-12 11:05:33 -08:00
853c6af623 refactor(cross_attention_control): remove outer CrossAttentionControl class
Python has modules. We don't need to use a class to provide a namespace.
2022-11-12 11:01:10 -08:00
1b6bbfb4db Merge branch 'lstein-outcrop-improvements' of github.com:/invoke-ai/InvokeAI into lstein-outcrop-improvements 2022-11-12 15:41:16 +00:00
67e25624b9 simplify logic around negative seeds 2022-11-12 15:41:01 +00:00
9c218788e2 Merge branch 'development' into lstein-outcrop-improvements 2022-11-12 10:39:57 -05:00
bb084a844b simplify logic around negative seeds 2022-11-12 15:39:03 +00:00
0a88243911 Revert "Outcrop improvements" (#1449)
Reverts invoke-ai/InvokeAI#1414

- missed review comments from @Kyle0654
2022-11-11 15:38:02 -05:00
8a0a90d0f3 Merge branch 'lstein-outcrop-improvements' of github.com:/invoke-ai/InvokeAI into lstein-outcrop-improvements 2022-11-11 20:37:13 +00:00
9141132a5c enhance outcropping with ability to direct contents of new regions
This commit does several things that improve the customizability of the CLI `outcrop` command:

1. When outcropping an image you can now add a `--new_prompt` option, to specify a new prompt to be applied to the outpainted region instead of the prompt used to generate the image.
2. Similarly you can provide a new seed using `--seed` (or `-S`). A seed less than zero will pick one randomly.
3. The metadata written into the outcropped file is now more informative about what was previously stored.
4. This PR also fixes the crash that happened when trying to outcrop an image  that does not contain InvokeAI metadata.

Other changes:

- add error checking suggested by @Kyle0654
- add special case in invoke.py to allow -1 to be passed as seed.
  This now only occurs for postprocessing commands. Previously, -1
  caused the previous seed to be used, and this still applies to generate
  operations.
2022-11-11 20:34:21 +00:00
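
A sketch of the seed rules just described (resolve_seed is an assumed helper written for illustration, not the invoke.py code):

```python
import random

def resolve_seed(seed: int | None, previous_seed: int, postprocessing: bool) -> int:
    """Apply the seed semantics described above.

    For postprocessing commands such as outcrop, a negative seed means
    'pick one at random'; for generate operations, -1 keeps its older
    meaning of 'reuse the previous seed'.
    """
    if seed is None:
        return random.randrange(2**32)
    if seed < 0:
        return random.randrange(2**32) if postprocessing else previous_seed
    return seed
```
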
78f7bef1a3 Revert "enable outcropping of random JPG/PNG images"
This reverts commit 48aa6416dc.
2022-11-11 10:30:44 -05:00
1fb7b50be7 Revert "enhance outcropping with ability to direct contents of new regions"
This reverts commit 8aa94d5774.
2022-11-11 10:30:44 -05:00
b57c81ab38 Remove editable flag from clipseg in requirements 2022-11-11 09:32:07 -05:00
af040e97af prevent two models from being marked default in models.yaml 2022-11-11 09:28:17 -05:00
8dc7f119e5 Fix performance issue introduced by torch cuda cache clear during generation 2022-11-10 23:01:32 -08:00
4b4111a802 fix invoke.py crash if no models.yaml file present
- Script will now offer the user the ability to create a
  minimal models.yaml and then gracefully exit.
- Closes #1420
2022-11-10 21:54:26 -05:00
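
A sketch of that graceful-exit behavior (the prompt wording and the minimal stanza below are illustrative assumptions, not the shipped file):

```python
import sys
from pathlib import Path

MINIMAL_MODELS_YAML = """\
# minimal models.yaml -- adjust the weights path to your checkpoint
stable-diffusion-1.5:
  description: Stable Diffusion v1.5
  weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  config: configs/stable-diffusion/v1-inference.yaml
  default: true
"""

def ensure_models_yaml(path: Path) -> None:
    """Offer to create a minimal models.yaml, then exit gracefully."""
    if path.exists():
        return
    answer = input(f"{path} not found. Create a minimal one? [y/N] ")
    if answer.strip().lower().startswith("y"):
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(MINIMAL_MODELS_YAML)
        print(f"Wrote {path}; review it, then restart invoke.py.")
    sys.exit(0)
```
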
832f183320 fix #1402 2022-11-10 21:54:13 -05:00
8aa94d5774 enhance outcropping with ability to direct contents of new regions
- When outcropping an image you can now add a `--new_prompt` option, to specify
  a new prompt to be used instead of the original one used to generate the image.

- Similarly you can provide a new seed using `--seed` (or `-S`). A seed of zero
  will pick one randomly.

- This PR also fixes the crash that happened when trying to outcrop an image
  that does not contain InvokeAI metadata.
2022-11-10 21:53:52 -05:00
48aa6416dc enable outcropping of random JPG/PNG images
- Works best with runwayML inpainting model
- Numerous code changes required to propagate seed to final metadata.
  Original code predicated on the image being generated within InvokeAI.
2022-11-10 21:53:52 -05:00
47ddda1f64 Revert "Log strength with hires"
This reverts commit 82d4904c07.
2022-11-10 16:50:00 -05:00
c248ae44d4 Revert "Resize hires as an image"
This reverts commit d05b1b3544.
2022-11-10 16:50:00 -05:00
9e4545b2fc Fixes typos in README.md 2022-11-10 09:15:29 -05:00
8cf3883adc re-change TencentARC/GFPGAN to invoke-ai/GFPGAN 2022-11-09 12:53:36 -05:00
e06a6ed4c8 add changes required by @tildebyte 2022-11-09 12:53:36 -05:00
12a33f6e2d fix conflict in environment-linux-aarch64.yml 2022-11-09 12:53:36 -05:00
6d9638ba31 remove PIP_EXISTS_ACTION from env 2022-11-09 12:53:36 -05:00
c54eb00055 update python version 2022-11-09 12:53:36 -05:00
72338506ed update environment.yml 2022-11-09 12:53:36 -05:00
78c1d07c4b update environment-linux-aarch64.yml 2022-11-09 12:53:36 -05:00
143b18af8a update pip dependencies
- remove realesrgan
- add git+https://github.com/invoke-ai/Real-ESRGAN.git
- remove git+https://github.com/CompVis/taming-transformers.git
- add taming-transformers-rom1504
- change TencentARC/GFPGAN to invoke-ai/GFPGAN
2022-11-09 12:53:36 -05:00
9d39d6ecb3 add PIP_EXISTS_ACTION=w to test-invoke-conda`s env 2022-11-09 12:53:36 -05:00
9686bf0ea8 switch back to getpass_asterisk... ... until preload_models.py is ready 2022-11-09 12:53:36 -05:00
7aa7be6b24 use taming-transformers-rom1504, remove -e ...
... to address required changes
2022-11-09 12:53:36 -05:00
443c9110f1 remove push triggers, since pr trigger is enough 2022-11-09 12:53:36 -05:00
ae0ce82609 add 2 missed versions
unpinned them for testing purposes with the Linux container, forgot to re-pin
2022-11-09 12:53:36 -05:00
f1982cb6d8 update push triggers in test-invoke-conda.yml 2022-11-09 12:53:36 -05:00
af62958323 update environment-mac.yml 2022-11-09 12:53:36 -05:00
9342ad8d97 prevent crash when switching to an invalid model 2022-11-09 10:07:15 -05:00
5214742d02 don't suppress exceptions when doing cross-attention control 2022-11-09 07:21:21 -05:00
178f0c78d8 Fix #1362 by improving VRAM usage patterns when doing .swap()
commit ef3f7a26e242b73c2beb0195c7fd8f654ef47f55
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:18:37 2022 +0100

    remove log spam

commit 7189d649622d4668b120b0dd278388ad672142c4
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:10:28 2022 +0100

    change the way saved slicing strategy is applied

commit 01c40f751ab72955140165c16f95ae411732265b
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:04:43 2022 +0100

    fix slicing_strategy_getter callsite

commit f8cfe25150a346958903316bc710737d99839923
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 11:56:22 2022 +0100

    cleanup, consistent dim=0 also tested

commit 5bf9b1e890d48e962afd4a668a219b68271e5dc1
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 11:34:09 2022 +0100

    refactored context, tested with non-sliced cross attention control

commit d58a46e39bf562e7459290d2444256e8c08ad0b6
Author: damian0815 <null@damianstewart.com>
Date:   Sun Nov 6 00:41:52 2022 +0100

    cleanup

commit 7e2c658b4c06fe239311b65b9bb16fa3adec7fd7
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:57:31 2022 +0100

    disable logs

commit 20ee89d93841b070738b3d8a4385c93b097d92eb
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:36:58 2022 +0100

    slice saved attention if necessary

commit 0a7684a22c880ec0f48cc22bfed4526358f71546
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:32:38 2022 +0100

    raise instead of asserting

commit 7083104c7f3a0d8fd96e94a2f391de50a3c942e4
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:31:00 2022 +0100

    store dim when saving slices

commit f7c0808ed383ec1dc70645288a798ed2aa4fa85c
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:27:16 2022 +0100

    don't retry on exception

commit 749a721e939b3fe7c1741e7998dab6bd2c85a0cb
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:24:50 2022 +0100

    stuff

commit 032ab90e9533be8726301ec91b97137e2aadef9a
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:20:17 2022 +0100

    more logging

commit 3dc34b387f033482305360e605809d95a40bf6f8
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:16:47 2022 +0100

    logs

commit 901c4c1aa4b9bcef695a6551867ec8149e6e6a93
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:12:39 2022 +0100

    actually set save_slicing_strategy to True

commit f780e0a0a7c6b6a3db320891064da82589358c8a
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 22:10:35 2022 +0100

    store slicing strategy

commit 93bb6d566fd18c5c69ef7dacc8f74ba2cf671cb7
Author: damian <git@damianstewart.com>
Date:   Sat Nov 5 20:43:48 2022 +0100

    still not it

commit 5e3a9541f8ae00bde524046963910323e20c40b7
Author: damian <git@damianstewart.com>
Date:   Sat Nov 5 17:20:02 2022 +0100

    wip offloading attention slices on-demand

commit 4c2966aa856b6f3b446216da3619ae931552ef08
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 15:47:40 2022 +0100

    pre-emptive offloading, idk if it works

commit 572576755e9f0a878d38e8173e485126c0efbefb
Author: root <you@example.com>
Date:   Sat Nov 5 11:25:32 2022 +0000

    push attention slices to cpu. slow but saves memory.

commit b57c83a68f2ac03976ebc89ce2ff03812d6d185f
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 12:04:22 2022 +0100

    verbose logging

commit 3a5dae116f110a96585d9eb71d713b5ed2bc3d2b
Author: damian0815 <null@damianstewart.com>
Date:   Sat Nov 5 11:50:48 2022 +0100

    wip fixing mem strategy crash (4 test on runpod)

commit 3cf237db5fae0c7b0b4cc3c47c81830bdb2ae7de
Author: damian0815 <null@damianstewart.com>
Date:   Fri Nov 4 09:02:40 2022 +0100

    wip, only works on cuda
2022-11-09 07:21:21 -05:00
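
The recurring idea in the commits above is to keep saved attention slices off the GPU until they are needed. A minimal sketch of that pattern (class and method names are assumptions, not the actual implementation):

```python
import torch

class SlicedAttentionStore:
    """Save cross-attention slices on the CPU; restore them on demand."""

    def __init__(self) -> None:
        self.slices: dict[int, torch.Tensor] = {}
        self.dim: int | None = None  # remember which dim the slices were cut along

    def save_slice(self, index: int, attn: torch.Tensor, dim: int) -> None:
        self.dim = dim
        # pushing to CPU is slow but frees VRAM between steps
        self.slices[index] = attn.detach().to("cpu")

    def get_slice(self, index: int, device: torch.device) -> torch.Tensor:
        # copy back to the GPU only when cross-attention control needs it
        return self.slices[index].to(device)
```
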
2487040ae3 enhance outcropping with ability to direct contents of new regions
- When outcropping an image you can now add a `--new_prompt` option, to specify
  a new prompt to be used instead of the original one used to generate the image.

- Similarly you can provide a new seed using `--seed` (or `-S`). A seed of zero
  will pick one randomly.

- This PR also fixes the crash that happened when trying to outcrop an image
  that does not contain InvokeAI metadata.
2022-11-08 17:27:42 +00:00
5606af5083 enable outcropping of random JPG/PNG images
- Works best with runwayML inpainting model
- Numerous code changes required to propagate seed to final metadata.
  Original code predicated on the image being generated within InvokeAI.
2022-11-08 15:22:32 +00:00
4b5a96501d load favorite options from ~/.invokeai init file 2022-11-08 13:55:42 +00:00
ededeaed86 Merge branch 'add-invokeai-initfile' into development 2022-11-08 13:41:11 +00:00
636620b1d5 change initfile to ~/.invokeai
- adjust documentation
- also fix 'clipseg_models' to 'clipseg', which seems to be working now
2022-11-08 03:26:16 +00:00
21961f0c32 Revert "Use array slicing to calc ddim timesteps"
This reverts commit 1f0c5b4cf1.
2022-11-07 15:37:53 -05:00
1fe41146f0 add support for an initialization file, invokeai.init
- Place preferred startup command switches in a file named
  "invokeai.init". The file can consist of a single line of switches
  such as "--web --steps=28", a series of switches on each
  line, or any combination of the two.

 Example:
 ```
   --web
   --host=0.0.0.0
   --steps=28
   --grid
   -f 0.6 -C 11.0 -A k_euler_a
```

- The following options, which were previously only available within
  the CLI, are now available on the command line as well:

  --steps
  --strength
  --cfg_scale
  --width
  --height
  --fit
2022-11-06 22:02:45 -05:00
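
A sketch of how such an init file can be folded into the argument list (load_init_switches is a hypothetical helper; the real parser may differ):

```python
import shlex
import sys
from pathlib import Path

def load_init_switches(path: Path = Path("invokeai.init")) -> list[str]:
    """Read startup switches from the init file, one or more per line."""
    if not path.exists():
        return []
    switches: list[str] = []
    for line in path.read_text().splitlines():
        # shell-style splitting handles '--web --steps=28' on one line
        switches.extend(shlex.split(line, comments=True))
    return switches

# file switches come first, so explicit command-line arguments win
argv = load_init_switches() + sys.argv[1:]
```
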
2ad6ef355a update discord link 2022-11-06 18:08:36 +00:00
865502ee4f update changelog 2022-11-06 09:27:59 -08:00
c7984f3299 update TROUBLESHOOT.md 2022-11-06 09:27:59 -08:00
7f150ed833 remove :from headlines in CONTRIBUTORS.md 2022-11-06 09:27:59 -08:00
badf4e256c enable navigation tabs
Since the docs are growing, this way they look cleaner
2022-11-06 09:27:59 -08:00
e64c60bbb3 remove preflight checks from assets
seems like somebody executed tests and committed them
2022-11-06 09:27:59 -08:00
1780618543 update INSTALLING_MODELS.md 2022-11-06 09:27:59 -08:00
f91fd27624 Bug fix for inpaint size 2022-11-06 09:25:50 -08:00
09e41e8f76 Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination 2022-11-06 09:25:50 -08:00
6eeb2107b3 remove create-caches.yml since not used anywhere 2022-11-06 09:21:43 -08:00
17053ad8b7 fix duplicated argument introduced by conflict resolution 2022-11-05 16:01:55 -04:00
fefb4dc1f8 Merge branch 'development' into fix_generate.py 2022-11-05 12:47:35 -07:00
d05b1b3544 Resize hires as an image 2022-11-05 11:54:23 -07:00
82d4904c07 Log strength with hires 2022-11-05 11:54:23 -07:00
1cdcf33cfa Merge branch 'main' into development
- this synchronizes recent document fixes by mauwii
2022-11-05 09:57:38 -04:00
6616fa835a fix Windows library dependency issues
This commit addresses two bugs:

1) invokeai.py crashes immediately with a message about an undefined
   attribute sigKILL (closes #1288). The fix is to pin torch at 1.12.1.

2) Version 1.4.2 of basicsr fails to load properly on Windows, and is
   a requirement of realesrgan, however 1.4.1 works. Pinning basicsr
   in our requirements file resulted in a dependency conflict, so I
   ended up cloning realesrgan into the invoke-ai Git space and changing
   the requirements file there.

If there is a more elegant solution, please advise.
2022-11-05 09:46:29 -04:00
7b9a4564b1 Update-docs (#1382)
* update IMG2IMG.md

* update INPAINTING.md

* update WEBUIHOTKEYS.md

* more doc updates (mostly fix formatting):
- OUTPAINTING.md
- POSTPROCESS.md
- PROMPTS.md
- VARIATIONS.md
- WEB.md
- WEBUIHOTKEYS.md
2022-11-05 09:36:45 -04:00
fcdefa0620 Hotifx docs (#1376) (#1377) 2022-11-04 12:47:31 -07:00
ef8b3ce639 Merge-main-into-development (#1373)
To get rid of the difference between main and development.

Since otherwise it will be a pain to start fixing the documentation
(when the state between main and development is not the same ...)

Also this should fix the problem of all tests failing since environment
yamls get updated.
2022-11-04 12:08:44 -04:00
36870a8f53 Merge branch 'development' into merge-main-into-development 2022-11-04 16:25:00 +01:00
b70420951d fix parsing error doing eg forest ().swap(in winter) 2022-11-03 20:15:23 -04:00
1f0c5b4cf1 Use array slicing to calc ddim timesteps 2022-11-03 20:11:04 -04:00
8648da8111 update environment-linux-aarch64 to use python 3.9 2022-11-03 20:06:26 -04:00
45b4593563 update environment-linux-aarch64.yml
- move getpass_asterisk to pip
2022-11-03 20:06:26 -04:00
41b04316cf rename job, remove debug branch from triggers 2022-11-03 20:06:26 -04:00
e97c6db2a3 include build matrix to build x86_64 and aarch64 2022-11-03 20:06:26 -04:00
896820a349 disable caching 2022-11-03 20:06:26 -04:00
06c8f468bf disable PR-Validation
since there are no files passed from context, this is unnecessary
2022-11-03 20:06:26 -04:00
61920e2701 update action to use current branch
also update build-args of dockerfile and build.sh
2022-11-03 20:06:26 -04:00
f34ba7ca70 remove unnecessary mkdir command again 2022-11-03 20:06:26 -04:00
c30ef0895d remove symlink to GFPGANv1.4
also re-add mkdir to prevent action from failing
2022-11-03 20:06:26 -04:00
aa3a774f73 update build-container.yml to use cachev3 2022-11-03 20:06:26 -04:00
2c30555b84 update Dockerfile
- create models.yaml from models.yaml.example
- run preload_models.py with --no-interactive
2022-11-03 20:06:26 -04:00
743f605773 update build.sh to download sd-v1.5 model 2022-11-03 20:06:26 -04:00
519c661abb replace old fashined markdown templates with forms
this will help the readability of issues a lot 🤓
2022-11-03 21:21:43 +01:00
22c956c75f Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-03 10:20:21 -04:00
13696adc3a speculative change to solve windows esrgan issues 2022-11-03 10:20:10 -04:00
0196571a12 remove merge markers from preload_models.py 2022-11-02 22:39:35 -04:00
9666f466ab use refined model by default 2022-11-02 18:35:35 -04:00
240e5486c8 Merge branch 'spezialspezial-patch-9' into development 2022-11-02 18:35:00 -04:00
8164b6b9cf Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-02 17:06:46 -04:00
4fc82d554f [WebUI] Final 2.1 Release Build 2022-11-02 16:46:07 -04:00
96b34c0f85 Final WebUI build for Release 2.1
- squashed commit of 52 commits from PR #1327

don't log base64 progress images

Fresh Build For WebUI

[WebUI] Loopback Default False

Fixes bugs/styling

- Fixes missing web app state on new version:
Adds stateReconciler to redux-persist.

When we add more values to the state and then release the updated app, they will be automatically merged in.

Resetting the web UI will be needed far less.
7159ec

- Fixes console z-index
- Moves reset web UI button to visible area

Decreases gallery width on inpainting

Increases workarea split padding to 1rem

Adds missing tooltips to site header

Changes inpainting controls settings to hover

Fixes hotkeys and settings buttons not working

Improves bounding box interactions

- Bounding box can now be moved by dragging any of its edges
- Bounding box does not affect drawing if already drawing a stroke
- Can lock bounding box to draw directly on the bounding box edges
- Removes spacebar-hold behaviour due to technical issues

Fixes silent crash when init image too large

To send the mask to the server, the UI rendered the mask onto the init image and sent the whole image. The mask was then cropped by the server.

If the image was too large, the app silently failed. Maybe it exceeds the websocket size limit.

Fixed by cropping the mask in the UI layer, sending only bounding-box-sized mask image data.

Disabled bounding box settings when locked

Styles image uploader

Builds fresh bundle

Improves bounding box interaction

Added spacebar-hold-to-transform back.

Address bounding box feedback

- Adds back toggle to hide bounding box
- Box quick toggle = q, normal toggle = shift + q
- Styles canvas alert icons

Adds hints when unable to invoke

- Popover on Invoke button indicates why exactly it is disabled, e.g. prompt is empty, something else is processing, etc.
- There may be more than one reason; all are displayed.

Fix Inpainting Alerts Styling

Preventing unnecessary re-renders across the app

Code Split Inpaint Options

Isolate features to their own components so they don't re-render the other stuff each time.

[TESTING] Remove global isReady checking

I don't believe this is needed at all because the isReady state is constantly updated when needed and tracked in real time in the Redux store. This causes massive re-renders. @psychedelicious If this is absolutely essential for a reason that I do not see, please hit me up on Discord.

Fresh Bundle

Fix Bounding Box Settings re-rendering on brush stroke

[Code Splitting] Bounding Box Options

Isolated all bounding box components so they don't trigger unnecessary re-renders. Still need to fix the bounding box triggering re-renders on the control panel inside the canvas itself. But the options panel should be good to go with this change.

Inpainting Controls Code Splitting and Performance

Codesplit the entirety of the inpainting controls. Created new selectors for each and every component to ensure there are no unnecessary re-renders. App feels a lot smoother.

Fixes rerenders on ClearBrushHistory

Fixes crash when requesting post-generation upscale/face restoration

- Moves the inpainting paste to before the postprocessing.

Removes unused isReady state

Changes Report Bug icon to a bug

Restores shift+q bounding box shortcut

Adds alert for bounding box size to status icons

Adds asCheckbox to IAIIconButton

Rough draft of this. Not happy with the styling but it's clearer than having them look just like buttons.

Fixes crash related to old value of progress_latents in state

Styling changes and settings modal minor refactor

Fixes: uploaded JPG images not loading

Reworks CurrentImageButtons.tsx

- Change all icons to FA iconset for consistency
- Refactors IAIIconButton, IAIButton, IAIPopover to handle ref forwarding
- Redesigns buttons into group

Only generate 1 iteration when seed fixed & variations disabled

Fixes progress images select

Fixes edge case: upload over gets stuck while alt tabbing

- Press esc to close it now

Fixes display progress images select typing

Fixes current image button rerenders

Adds min width to ImageUploader

Makes fast-latents in progress default

Update Icon Button Checkbox Style Styling

Fixes next/prev image buttons

Refactor canvas buttons + more

Add Save Intermediates Step Count

For accurate mode only.

Co-Authored-By: Richard Macarthy <richardmacarthy@protonmail.com>

Restores "initial image" text

Address feedback

- moves mask clear button
- fixes intermediates
- shrinks inpainting icons by 10%

Fix Loopback Styling

Adds escape hotkey to close floating panels

Readd Hotkey for Dual Display

Updated Current Image Button Styling
2022-11-02 16:46:18 -04:00
dd5a88dcee [WebUI] Final 2.1 Release Build 2022-11-02 16:40:47 -04:00
95ed56bf82 Updated Current Image Button Styling 2022-11-02 16:40:47 -04:00
1ae80f5ab9 Readd Hotkey for Dual Display 2022-11-02 16:40:47 -04:00
1f0bd3ca6c Adds escape hotkey to close floating panels 2022-11-02 16:40:47 -04:00
a1971f6830 Fix Loopback Styling 2022-11-02 16:40:47 -04:00
c6118e8898 Address feedback
- moves mask clear button
- fixes intermediates
- shrinks inpainting icons by 10%
2022-11-02 16:40:47 -04:00
7ba958cf7f Restores "initial image" text 2022-11-02 16:40:47 -04:00
383905d5d2 Add Save Intermediates Step Count
For accurate mode only.

Co-Authored-By: Richard Macarthy <richardmacarthy@protonmail.com>
2022-11-02 16:40:47 -04:00
6173e3e9ca Refactor canvas buttons + more 2022-11-02 16:40:47 -04:00
3feb7d8922 Fixes next/prev image buttons 2022-11-02 16:40:47 -04:00
1d9edbd0dd Update Icon Button Checkbox Style Styling 2022-11-02 16:40:47 -04:00
d439abdb89 Makes fast-latents in progress default 2022-11-02 16:40:47 -04:00
ee47ea0c89 Adds min width to ImageUploader 2022-11-02 16:40:47 -04:00
300bb2e627 Fixes current image button rerenders 2022-11-02 16:40:47 -04:00
ccf8593501 Fixes display progress images select typing 2022-11-02 16:40:47 -04:00
0fda612f3f Fixes edge case: upload over gets stuck while alt tabbing
- Press esc to close it now
2022-11-02 16:40:47 -04:00
5afff65b71 Fixes progress images select 2022-11-02 16:40:47 -04:00
7e55bdefce Only generate 1 iteration when seed fixed & variations disabled 2022-11-02 16:40:47 -04:00
620cf84d3d Reworks CurrentImageButtons.tsx
- Change all icons to FA iconset for consistency
- Refactors IAIIconButton, IAIButton, IAIPopover to handle ref forwarding
- Redesigns buttons into group
2022-11-02 16:40:47 -04:00
cfe567c62a Fixes: uploaded JPG images not loading 2022-11-02 16:40:47 -04:00
cefe12f1df Styling changes and settings modal minor refactor 2022-11-02 16:40:47 -04:00
1e51c39928 Fixes crash related to old value of progress_latents in state 2022-11-02 16:40:47 -04:00
42a02bbb80 Adds asCheckbox to IAIIconButton
Rough draft of this. Not happy with the styling but it's clearer than having them look just like buttons.
2022-11-02 16:40:47 -04:00
f1ae6dae4c Adds alert for bounding box size to status icons 2022-11-02 16:40:47 -04:00
6195579910 Restores shift+q bounding box shortcut 2022-11-02 16:40:47 -04:00
16c8b23b34 Changes Report Bug icon to a bug 2022-11-02 16:40:47 -04:00
07ae626b22 Removes unused isReady state 2022-11-02 16:40:47 -04:00
8d171bb044 Fixes crash when requesting post-generation upscale/face restoration
- Moves the inpainting paste to before the postprocessing.
2022-11-02 16:40:47 -04:00
6e33ca7e9e Fixes rerenders on ClearBrushHistory 2022-11-02 16:40:47 -04:00
db46e12f2b Inpainting Controls Code Splitting and Performance
Codesplit the entirety of the inpainting controls. Created new selectors for each and every component to ensure there are no unnecessary re-renders. App feels a lot smoother.
2022-11-02 16:40:47 -04:00
868e4b2db8 [Code Splitting] Bounding Box Options
Isolated all bounding box components so they don't trigger unnecessary re-renders. Still need to fix the bounding box triggering re-renders on the control panel inside the canvas itself. But the options panel should be good to go with this change.
2022-11-02 16:40:47 -04:00
2e562742c1 Fix Bounding Box Settings re-rendering on brush stroke 2022-11-02 16:40:47 -04:00
68e6958009 Fresh Bundle 2022-11-02 16:40:47 -04:00
ea6e3a7949 [TESTING] Remove global isReady checking
I don't believe this is needed at all because the isReady state is constantly updated when needed and tracked in real time in the Redux store. This causes massive re-renders. @psychedelicious If this is absolutely essential for a reason that I do not see, please hit me up on Discord.
2022-11-02 16:40:47 -04:00
b2879ca99f Code Split Inpaint Options
Isolate features to their own components so they don't re-render the other stuff each time.
2022-11-02 16:40:47 -04:00
4e911566c3 Preventing unnecessary re-renders across the app 2022-11-02 16:40:47 -04:00
9bafda6a15 Fix Inpainting Alerts Styling 2022-11-02 16:40:47 -04:00
871a8a5375 Adds hints when unable to invoke
- Popover on Invoke button indicates why exactly it is disabled, e.g. prompt is empty, something else is processing, etc. 
- There may be more than one reason; all are displayed.
2022-11-02 16:40:47 -04:00
0eef74bc00 Address bounding box feedback
- Adds back toggle to hide bounding box
- Box quick toggle = q, normal toggle = shift + q
- Styles canvas alert icons
2022-11-02 16:40:47 -04:00
423ae32097 Improves bounding box interaction
Added spacebar-hold-to-transform back.
2022-11-02 16:40:47 -04:00
8282e5d045 Builds fresh bundle 2022-11-02 16:40:47 -04:00
19305cdbdf Styles image uploader 2022-11-02 16:40:47 -04:00
eb9028ab30 Disabled bounding box settings when locked 2022-11-02 16:40:47 -04:00
21483f5d07 Fixes silent crash when init image too large
To send the mask to the server, the UI rendered the mask onto the init image and sent the whole image. The mask was then cropped by the server.

If the image was too large, the app silently failed. Maybe it exceeds the websocket size limit.

Fixed by cropping the mask in the UI layer, sending only bounding-box-sized mask image data.
2022-11-02 16:40:47 -04:00
82dcbac28f Improves bounding box interactions
- Bounding box can now be moved by dragging any of its edges
- Bounding box does not affect drawing if already drawing a stroke
- Can lock bounding box to draw directly on the bounding box edges
- Removes spacebar-hold behaviour due to technical issues
2022-11-02 16:40:47 -04:00
d43bd4625d Fixes hotkeys and settings buttons not working 2022-11-02 16:40:47 -04:00
ea891324a2 Changes inpainting controls settings to hover 2022-11-02 16:40:47 -04:00
8fd9ea2193 Adds missing tooltips to site header 2022-11-02 16:40:47 -04:00
fb02666856 Increases workarea split padding to 1rem 2022-11-02 16:40:47 -04:00
f6f5c2731b Decreases gallery width on inpainting 2022-11-02 16:40:47 -04:00
b4e3f771e0 Fixes bugs/styling
- Fixes missing web app state on new version:
Adds stateReconciler to redux-persist.

When we add more values to the state and then release the updated app, they will be automatically merged in.

Resetting the web UI will be needed far less.
7159ec

- Fixes console z-index
- Moves reset web UI button to visible area
2022-11-02 16:40:47 -04:00
99bb9491ac [WebUI] Loopback Default False 2022-11-02 16:40:47 -04:00
0453f21127 Fresh Build For WebUI 2022-11-02 23:26:49 +13:00
9fc09aa4bd don't log base64 progress images 2022-11-02 22:32:31 +13:00
5e87062cf8 Option to directly invert the grayscale heatmap - fix 2022-11-01 22:24:31 -04:00
3e7a459990 Update txt2mask.py 2022-11-01 22:24:31 -04:00
bbf4c03e50 Option to directly invert the grayscale heatmap
Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, a handy option to have in some cases.
2022-11-01 22:24:31 -04:00
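
A sketch of the idea, assuming a Pillow-based pipeline (heatmap_to_mask is an illustrative helper, not the txt2mask.py code):

```python
from PIL import Image, ImageOps

def heatmap_to_mask(heatmap: Image.Image, size: tuple[int, int], invert: bool = False) -> Image.Image:
    """Optionally invert the small grayscale heatmap before scaling it up."""
    gray = heatmap.convert("L")
    if invert:
        gray = ImageOps.invert(gray)  # cheaper while the heatmap is still small
    return gray.resize(size, Image.Resampling.BICUBIC)
```
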
611a3a9753 fix name of caching step 2022-11-01 22:17:23 -04:00
1611f0d181 readd caching of sd-models
- this would remove the necessity of having the secret available in PRs
2022-11-01 22:17:23 -04:00
08835115e4 pin pytorch_lightning to 1.7.7, issue #1331 2022-11-01 22:11:44 -04:00
2d84e28d32 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-01 22:11:04 -04:00
ef17aae8ab add damian0815 to contributors list 2022-11-02 13:55:52 +13:00
0cc39f01a3 report full size for fast latents and update conversion matrix for v1.5 2022-11-02 13:55:29 +13:00
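
For context, "fast latents" previews skip the VAE decoder and project the 4-channel latent straight to RGB with a small linear map. A sketch of that technique (the matrix values below are illustrative placeholders, not the v1.5 coefficients from this commit):

```python
import torch

# Illustrative 4x3 latent-channel -> RGB matrix; NOT the shipped v1.5 values.
LATENT_TO_RGB = torch.tensor([
    [ 0.30,  0.21,  0.21],
    [ 0.19,  0.29,  0.17],
    [-0.16,  0.19,  0.26],
    [-0.18, -0.27, -0.47],
])

def fast_latents_preview(latents: torch.Tensor) -> torch.Tensor:
    """Project latents of shape (4, H, W) to a uint8 RGB tensor (H, W, 3)."""
    rgb = torch.einsum("chw,cr->rhw", latents, LATENT_TO_RGB)
    rgb = ((rgb + 1.0) * 127.5).clamp(0, 255).to(torch.uint8)
    return rgb.permute(1, 2, 0)
```
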
688d7258f1 fix a bug that broke cross attention control index mapping 2022-11-02 13:54:54 +13:00
4513320bf1 save VRAM by not recombining tensors that have been sliced to save VRAM 2022-11-02 13:54:54 +13:00
533fd04ef0 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-01 17:40:36 -04:00
dff5681cf0 shorter strings 2022-11-01 17:39:08 -04:00
5a2790a69b convert progress display to a drop-down 2022-11-01 17:39:08 -04:00
7c5305ccba do not try to save base64 intermediates in gallery on cancellation 2022-11-01 17:39:08 -04:00
4013e8ad6f Fixes b64 image sending and displaying 2022-11-01 17:39:08 -04:00
d1dfd257f9 wip base64 2022-11-01 17:39:08 -04:00
5322d735ee update frontend 2022-11-01 17:39:08 -04:00
cdb107dcda add option to show intermediate latent space 2022-11-01 17:39:08 -04:00
be1393a41c ensure existing exception handling code also handles new exception class 2022-11-01 17:37:26 -04:00
e554c2607f Rebuilt prompt parsing logic
Complete re-write of the prompt parsing logic to be more readable and
logical, and therefore also hopefully easier to debug, maintain, and
augment.

In the process it has also become more robust to badly-formed prompts.

Squashed commit of the following:

commit 8fcfa88a16e1390d41717e940d72aed64712171c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 17:05:57 2022 +0100

    further cleanup

commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 16:07:57 2022 +0100

    cleanup and document

commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 15:54:58 2022 +0100

    works fully

commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 15:24:31 2022 +0100

    further...

commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 30 14:08:57 2022 +0100

    getting there...

commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 28 14:29:03 2022 +0200

    wip doesn't compile

commit 5e533f731cfd20cd435330eeb0012e5689e87e81
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 28 13:21:43 2022 +0200

    working with CrossAttentionControl but no Attention support yet

commit 9678348773431e500e110e8aede99086bb7b5955
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 28 13:04:52 2022 +0200

    wip rebuilding prompt parser
2022-11-01 17:37:26 -04:00
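
To make the shape of such a parser concrete, here is a toy sketch (this is NOT the InvokeAI grammar; the syntax and default weight are invented for illustration). It reduces a prompt to (fragment, weight) pairs and simply skips malformed parentheses instead of crashing on them:

```python
import re

FRAGMENT = re.compile(r"\(([^)]*)\)(\d+(?:\.\d+)?)?|([^()]+)")

def parse_prompt(prompt: str) -> list[tuple[str, float]]:
    """Reduce a prompt to (text, weight) pairs; unmatched parens are dropped."""
    fragments = []
    for paren_text, weight, plain in FRAGMENT.findall(prompt):
        if plain.strip():
            fragments.append((plain.strip(), 1.0))
        elif paren_text.strip():
            fragments.append((paren_text.strip(), float(weight) if weight else 1.1))
    return fragments

print(parse_prompt("a forest (in winter)1.3 at dawn"))
# [('a forest', 1.0), ('in winter', 1.3), ('at dawn', 1.0)]
```
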
6215592b12 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-11-01 17:34:55 -04:00
349cc25433 fix crash (be a little less aggressive clearing out the attention slice) 2022-11-01 17:34:28 -04:00
214d276379 be more aggressive at clearing out saved_attn_slice 2022-11-01 17:34:28 -04:00
ef24d76adc fix library problems in preload_modules 2022-11-01 17:23:27 -04:00
ab2b5a691d fix model_cache memory management issues 2022-11-01 17:23:20 -04:00
c7de2b2801 disable checks with sd-V1.4 model...
...to save some resources, since V1.5 is the default now
2022-10-31 21:19:53 -04:00
e8075658ac update test-invoke-conda.yml
- fix model dl path for sd-v1-4.ckpt
- copy configs/models.yaml.example to configs/models.yaml
2022-10-31 21:19:53 -04:00
4202dabee1 fix models example weights for sd-v1.4 2022-10-31 21:19:53 -04:00
d67db2bcf1 [WebUI] Loopback Default False 2022-10-31 21:18:03 -04:00
7159ec885f further improvements to preload_models.py
- Faster startup for command line switch processing
- Specify configuration file to modify using --config option:

  ./scripts/preload_models.py --config models/my-models-file.yaml
2022-10-31 11:33:05 -04:00
b5cf734ba9 improve behavior of preload_models.py
- NEVER overwrite user's existing models.yaml
- Instead, merge its contents into new config file,
  and rename original to models.yaml.orig (with
  message)
- models.yaml has been removed from repository and renamed
  models.yaml.example
2022-10-31 11:08:19 -04:00
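
A sketch of the merge-don't-clobber behavior (helper name and details assumed): user stanzas always win, and the original file is kept as models.yaml.orig.

```python
import shutil
from pathlib import Path

import yaml

def merge_models_config(existing: Path, example: Path) -> dict:
    """Merge the user's models.yaml over the example, preserving the original."""
    merged = yaml.safe_load(example.read_text()) or {}
    if existing.exists():
        user_config = yaml.safe_load(existing.read_text()) or {}
        # never overwrite: keep a copy of the user's file as models.yaml.orig
        shutil.copy(existing, existing.parent / (existing.name + ".orig"))
        merged.update(user_config)  # user stanzas take precedence
    return merged
```
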
f7dc8eafee restore models.yaml to virgin state 2022-10-31 10:47:35 -04:00
762ca60a30 Update INPAINTING.md 2022-10-04 22:55:10 -04:00
e7fb9f342c add argument --outdir 2022-10-05 10:08:53 +09:00
1828 changed files with 73580 additions and 382632 deletions


@ -20,13 +20,13 @@ def calc_images_mean_L1(image1_path, image2_path):
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("image1_path")
parser.add_argument("image2_path")
parser.add_argument('image1_path')
parser.add_argument('image2_path')
args = parser.parse_args()
return args
if __name__ == "__main__":
if __name__ == '__main__':
args = parse_args()
mean_L1 = calc_images_mean_L1(args.image1_path, args.image2_path)
print(mean_L1)


@ -1,9 +1,12 @@
*
!invokeai
!pyproject.toml
!docker/docker-entrypoint.sh
!LICENSE
**/node_modules
**/__pycache__
**/*.egg-info
!backend
!configs
!environments-and-requirements
!frontend
!installer
!ldm
!main.py
!scripts
!server
!static
!setup.py


@ -1,2 +0,0 @@
b3dccfaeb636599c02effc377cdd8a87d658256c
218b6d0546b990fc449c876fb99f44b50c4daa35

.gitattributes

@ -1,5 +1,4 @@
# Auto normalizes line endings on commit so devs don't need to change local settings.
# Only affects text files and ignores other file types.
# Only affects text files and ignores other file types.
# For more info see: https://www.aleksandrhovhannisyan.com/blog/crlf-vs-lf-normalizing-line-endings-in-git/
* text=auto
docker/** text eol=lf

.github/CODEOWNERS

@ -1,34 +1,7 @@
# continuous integration
/.github/workflows/ @lstein @blessedcoolant @hipsterusername
# documentation
/docs/ @lstein @blessedcoolant @hipsterusername @Millu
/mkdocs.yml @lstein @blessedcoolant @hipsterusername @Millu
# nodes
/invokeai/app/ @Kyle0654 @blessedcoolant @psychedelicious @brandonrising @hipsterusername
# installation and configuration
/pyproject.toml @lstein @blessedcoolant @hipsterusername
/docker/ @lstein @blessedcoolant @hipsterusername
/scripts/ @ebr @lstein @hipsterusername
/installer/ @lstein @ebr @hipsterusername
/invokeai/assets @lstein @ebr @hipsterusername
/invokeai/configs @lstein @hipsterusername
/invokeai/version @lstein @blessedcoolant @hipsterusername
# web ui
/invokeai/frontend @blessedcoolant @psychedelicious @lstein @maryhipp @hipsterusername
/invokeai/backend @blessedcoolant @psychedelicious @lstein @maryhipp @hipsterusername
# generation, model management, postprocessing
/invokeai/backend @damian0815 @lstein @blessedcoolant @gregghelt2 @StAlKeR7779 @brandonrising @ryanjdick @hipsterusername
# front ends
/invokeai/frontend/CLI @lstein @hipsterusername
/invokeai/frontend/install @lstein @ebr @hipsterusername
/invokeai/frontend/merge @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/training @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp @hipsterusername
ldm/invoke/pngwriter.py @CapableWeb
ldm/invoke/server_legacy.py @CapableWeb
scripts/legacy_api.py @CapableWeb
tests/legacy_tests.sh @CapableWeb
installer/ @tildebyte
.github/workflows/ @mauwii
docker_build/ @mauwii


@ -65,16 +65,6 @@ body:
placeholder: 8GB
validations:
required: false
- type: input
id: version-number
attributes:
label: What version did you experience this issue on?
description: |
Please share the version of Invoke AI that you experienced the issue on. If this is not the latest version, please update first to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: X.X.X
validations:
required: true
- type: textarea
id: what-happened


@ -1,5 +1,5 @@
name: Feature Request
description: Contribute a idea or request a new feature
description: Commit a idea or Request a new feature
title: '[enhancement]: '
labels: ['enhancement']
# assignees:
@ -9,14 +9,14 @@ body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this feature request!
Thanks for taking the time to fill out this Feature request!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: |
Please make use of the [search function](https://github.com/invoke-ai/InvokeAI/labels/enhancement)
to see if a similar issue already exists for the feature you want to request
to see if a simmilar issue already exists for the feature you want to request
options:
- label: I have searched the existing issues
required: true
@ -34,9 +34,12 @@ body:
id: whatisexpected
attributes:
label: What should this feature add?
description: Explain the functionality this feature should add. Feature requests should be for single features. Please create multiple requests if you want to request multiple features.
description: Please try to explain the functionality this feature should add
placeholder: |
I'd like a button that creates an image of banana sushi every time I press it. Each image should be different. There should be a toggle next to the button that enables strawberry mode, in which the images are of strawberry sushi instead.
Instead of one huge textfield, it would be nice to have forms for bug-reports, feature-requests, ...
Great benefits with automatic labeling, assigning and other functionalitys not available in that form
via old-fashioned markdown-templates. I would also love to see the use of a moderator bot 🤖 like
https://github.com/marketplace/actions/issue-moderator-with-commands to auto close old issues and other things
validations:
required: true
@ -48,6 +51,6 @@ body:
- type: textarea
attributes:
label: Additional Content
label: Aditional Content
description: Add any other context or screenshots about the feature request here.
placeholder: This is a mockup of the design how I imagine it <screenshot>
placeholder: This is a Mockup of the design how I imagine it <screenshot>


@ -1,51 +0,0 @@
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?

.github/stale.yaml

@ -1,19 +0,0 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 28
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 14
# Issues with these labels will never be considered stale
exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Please
update the ticket if this is still a problem on the latest release.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
Due to inactivity, this issue has been automatically closed. If this is
still a problem on the latest release, please recreate the issue.


@ -1,115 +1,43 @@
# Building the Image without pushing to confirm it is still buildable
# confirum functionality would unfortunately need way more resources
name: build container image
on:
push:
branches:
- 'main'
paths:
- 'pyproject.toml'
- '.dockerignore'
- 'invokeai/**'
- 'docker/Dockerfile'
- 'docker/docker-entrypoint.sh'
- 'workflows/build-container.yml'
tags:
- 'v*'
workflow_dispatch:
permissions:
contents: write
packages: write
- 'development'
- 'update-dockerfile'
jobs:
docker:
if: github.event.pull_request.draft == false
strategy:
fail-fast: false
matrix:
gpu-driver:
- cuda
- cpu
- rocm
arch:
- x86_64
- aarch64
pip-requirements:
- requirements-lin-amd.txt
- requirements-lin-cuda.txt
runs-on: ubuntu-latest
name: ${{ matrix.gpu-driver }}
env:
# torch/arm64 does not support GPU currently, so arm64 builds
# would not be GPU-accelerated.
# re-enable arm64 if there is sufficient demand.
# PLATFORMS: 'linux/amd64,linux/arm64'
PLATFORMS: 'linux/amd64'
name: ${{ matrix.pip-requirements }} ${{ matrix.arch }}
steps:
- name: Free up more disk space on the runner
# https://github.com/actions/runner-images/issues/2840#issuecomment-1284059930
run: |
sudo rm -rf /usr/share/dotnet
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
sudo swapoff /mnt/swapfile
sudo rm -rf /mnt/swapfile
- name: prepare docker-tag
env:
repository: ${{ github.repository }}
run: echo "dockertag=${repository,,}" >> $GITHUB_ENV
- name: Checkout
uses: actions/checkout@v3
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
images: |
ghcr.io/${{ github.repository }}
${{ env.DOCKERHUB_REPOSITORY }}
tags: |
type=ref,event=branch
type=ref,event=tag
type=pep440,pattern={{version}}
type=pep440,pattern={{major}}.{{minor}}
type=pep440,pattern={{major}}
type=sha,enable=true,prefix=sha-,format=short
flavor: |
latest=${{ matrix.gpu-driver == 'cuda' && github.ref == 'refs/heads/main' }}
suffix=-${{ matrix.gpu-driver }},onlatest=false
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
platforms: ${{ env.PLATFORMS }}
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
# - name: Login to Docker Hub
# if: github.event_name != 'pull_request' && vars.DOCKERHUB_REPOSITORY != ''
# uses: docker/login-action@v2
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build container
id: docker_build
uses: docker/build-push-action@v4
uses: docker/build-push-action@v3
with:
context: .
file: docker/Dockerfile
platforms: ${{ env.PLATFORMS }}
push: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: |
type=gha,scope=${{ github.ref_name }}-${{ matrix.gpu-driver }}
type=gha,scope=main-${{ matrix.gpu-driver }}
cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-${{ matrix.gpu-driver }}
# - name: Docker Hub Description
# if: github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' && vars.DOCKERHUB_REPOSITORY != ''
# uses: peter-evans/dockerhub-description@v3
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
# repository: ${{ vars.DOCKERHUB_REPOSITORY }}
# short-description: ${{ github.event.repository.description }}
file: docker-build/Dockerfile
platforms: Linux/${{ matrix.arch }}
push: false
tags: ${{ env.dockertag }}:${{ matrix.pip-requirements }}-${{ matrix.arch }}
build-args: pip_requirements=${{ matrix.pip-requirements }}


@ -1,34 +0,0 @@
name: cleanup caches by a branch
on:
pull_request:
types:
- closed
workflow_dispatch:
jobs:
cleanup:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Cleanup
run: |
gh extension install actions/gh-actions-cache
REPO=${{ github.repository }}
BRANCH=${{ github.ref }}
echo "Fetching list of cache key"
cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 )
## Setting this to not fail the workflow while deleting cache keys.
set +e
echo "Deleting caches..."
for cacheKey in $cacheKeysForPR
do
gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
done
echo "Done"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@ -1,28 +0,0 @@
name: Close inactive issues
on:
schedule:
- cron: "00 4 * * *"
env:
DAYS_BEFORE_ISSUE_STALE: 30
DAYS_BEFORE_ISSUE_CLOSE: 14
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v8
with:
days-before-issue-stale: ${{ env.DAYS_BEFORE_ISSUE_STALE }}
days-before-issue-close: ${{ env.DAYS_BEFORE_ISSUE_CLOSE }}
stale-issue-label: "Inactive Issue"
stale-issue-message: "There has been no activity in this issue for ${{ env.DAYS_BEFORE_ISSUE_STALE }} days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release."
close-issue-message: "Due to inactivity, this issue was automatically closed. If you are still experiencing the issue, please recreate the issue."
days-before-pr-stale: -1
days-before-pr-close: -1
exempt-issue-labels: "Active Issue"
repo-token: ${{ secrets.GITHUB_TOKEN }}
operations-per-run: 500


@ -1,33 +0,0 @@
name: Lint frontend
on:
pull_request:
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
push:
branches:
- 'main'
merge_group:
workflow_dispatch:
defaults:
run:
working-directory: invokeai/frontend/web
jobs:
lint-frontend:
if: github.event.pull_request.draft == false
runs-on: ubuntu-22.04
steps:
- name: Setup Node 18
uses: actions/setup-node@v3
with:
node-version: '18'
- uses: actions/checkout@v3
- run: 'yarn install --frozen-lockfile'
- run: 'yarn run lint:tsc'
- run: 'yarn run lint:madge'
- run: 'yarn run lint:eslint'
- run: 'yarn run lint:prettier'


@ -2,19 +2,12 @@ name: mkdocs-material
on:
push:
branches:
- 'refs/heads/main'
permissions:
contents: write
- 'main'
- 'development'
jobs:
mkdocs-material:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
env:
REPO_URL: '${{ github.server_url }}/${{ github.repository }}'
REPO_NAME: '${{ github.repository }}'
SITE_URL: 'https://${{ github.repository_owner }}.github.io/InvokeAI'
steps:
- name: checkout sources
uses: actions/checkout@v3
@ -25,15 +18,11 @@ jobs:
uses: actions/setup-python@v4
with:
python-version: '3.10'
cache: pip
cache-dependency-path: pyproject.toml
- name: install requirements
env:
PIP_USE_PEP517: 1
run: |
python -m \
pip install ".[docs]"
pip install -r docs/requirements-mkdocs.txt
- name: confirm buildability
run: |


@ -1,41 +0,0 @@
name: PyPI Release
on:
push:
paths:
- 'invokeai/version/invokeai_version.py'
workflow_dispatch:
jobs:
release:
if: github.repository == 'invoke-ai/InvokeAI'
runs-on: ubuntu-22.04
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
TWINE_NON_INTERACTIVE: 1
steps:
- name: checkout sources
uses: actions/checkout@v3
- name: install deps
run: pip install --upgrade build twine
- name: build package
run: python3 -m build
- name: check distribution
run: twine check dist/*
- name: check PyPI versions
if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/heads/release/')
run: |
pip install --upgrade requests
python -c "\
import scripts.pypi_helper; \
EXISTS=scripts.pypi_helper.local_on_pypi(); \
print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV
- name: upload package
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
run: twine upload dist/*


@ -1,24 +0,0 @@
name: style checks
on:
pull_request:
push:
branches: main
jobs:
ruff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install dependencies with pip
run: |
pip install ruff
- run: ruff check --output-format=github .
- run: ruff format --check .
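The same two checks run locally with:

```bash
pip install ruff
ruff check --output-format=github .   # lint; the github format annotates CI logs
ruff format --check .                 # formatting check only, no rewrites
```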

.github/workflows/test-invoke-conda.yml
View File

@@ -0,0 +1,135 @@
name: Test invoke.py
on:
push:
branches:
- 'main'
- 'development'
- 'fix-gh-actions-fork'
pull_request:
branches:
- 'main'
- 'development'
jobs:
matrix:
strategy:
matrix:
stable-diffusion-model:
- 'stable-diffusion-1.5'
environment-yaml:
- environment-lin-amd.yml
- environment-lin-cuda.yml
- environment-mac.yml
include:
- environment-yaml: environment-lin-amd.yml
os: ubuntu-latest
default-shell: bash -l {0}
- environment-yaml: environment-lin-cuda.yml
os: ubuntu-latest
default-shell: bash -l {0}
- environment-yaml: environment-mac.yml
os: macos-12
default-shell: bash -l {0}
- stable-diffusion-model: stable-diffusion-1.5
stable-diffusion-model-url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1
stable-diffusion-model-dl-name: v1-5-pruned-emaonly.ckpt
name: ${{ matrix.environment-yaml }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
env:
CONDA_ENV_NAME: invokeai
INVOKEAI_ROOT: '${{ github.workspace }}/invokeai'
defaults:
run:
shell: ${{ matrix.default-shell }}
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: create models.yaml from example
run: |
mkdir -p ${{ env.INVOKEAI_ROOT }}/configs
cp configs/models.yaml.example ${{ env.INVOKEAI_ROOT }}/configs/models.yaml
- name: create environment.yml
run: cp "environments-and-requirements/${{ matrix.environment-yaml }}" environment.yml
- name: Use cached conda packages
id: use-cached-conda-packages
uses: actions/cache@v3
with:
path: ~/conda_pkgs_dir
key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-yaml) }}
- name: Activate Conda Env
id: activate-conda-env
uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: ${{ env.CONDA_ENV_NAME }}
environment-file: environment.yml
miniconda-version: latest
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV
- name: set test prompt to development branch validation
if: ${{ github.ref == 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> $GITHUB_ENV
- name: Use Cached Stable Diffusion Model
id: cache-sd-model
uses: actions/cache@v3
env:
cache-name: cache-${{ matrix.stable-diffusion-model }}
with:
path: ${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}
key: ${{ env.cache-name }}
- name: Download ${{ matrix.stable-diffusion-model }}
id: download-stable-diffusion-model
if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
run: |
mkdir -p "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}"
curl \
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
-o "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}/${{ matrix.stable-diffusion-model-dl-name }}" \
-L ${{ matrix.stable-diffusion-model-url }}
- name: run configure_invokeai.py
id: run-preload-models
run: |
python scripts/configure_invokeai.py --no-interactive --yes
- name: cat ~/.invokeai
id: cat-invokeai
run: cat ~/.invokeai
- name: Run the tests
id: run-tests
run: |
time python scripts/invoke.py \
--no-patchmatch \
--no-nsfw_checker \
--model ${{ matrix.stable-diffusion-model }} \
--from_file ${{ env.TEST_PROMPTS }} \
--root="${{ env.INVOKEAI_ROOT }}" \
--outdir="${{ env.INVOKEAI_ROOT }}/outputs"
- name: export conda env
id: export-conda-env
run: |
mkdir -p outputs/img-samples
conda env export --name ${{ env.CONDA_ENV_NAME }} > outputs/img-samples/environment-${{ runner.os }}-${{ runner.arch }}.yml
- name: Archive results
id: archive-results
uses: actions/upload-artifact@v3
with:
name: results_${{ matrix.environment-yaml }}
path: ${{ env.INVOKEAI_ROOT }}/outputs
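For debugging this job without CI round-trips, the test steps reduce to the following local sequence; a sketch, assuming the conda environment from the chosen environment.yml is active and the model checkpoint is already downloaded:

```bash
# local replay of the CI smoke test (model weights assumed in place)
export INVOKEAI_ROOT="$PWD/invokeai"
mkdir -p "$INVOKEAI_ROOT/configs"
cp configs/models.yaml.example "$INVOKEAI_ROOT/configs/models.yaml"
python scripts/configure_invokeai.py --no-interactive --yes
time python scripts/invoke.py \
  --no-patchmatch \
  --no-nsfw_checker \
  --model stable-diffusion-1.5 \
  --from_file tests/validate_pr_prompt.txt \
  --root="$INVOKEAI_ROOT" \
  --outdir="$INVOKEAI_ROOT/outputs"
```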

View File

@@ -3,127 +3,126 @@ on:
push:
branches:
- 'main'
- 'development'
pull_request:
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
branches:
- 'main'
- 'development'
jobs:
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
stable-diffusion-model:
- stable-diffusion-1.5
requirements-file:
- requirements-lin-cuda.txt
- requirements-lin-amd.txt
- requirements-mac-mps-cpu.txt
python-version:
# - '3.9'
- '3.10'
pytorch:
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
include:
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
- requirements-file: requirements-lin-cuda.txt
os: ubuntu-latest
default-shell: bash -l {0}
- requirements-file: requirements-lin-amd.txt
os: ubuntu-latest
default-shell: bash -l {0}
- requirements-file: requirements-mac-mps-cpu.txt
os: macOS-12
github-env: $GITHUB_ENV
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
default-shell: bash -l {0}
- stable-diffusion-model: stable-diffusion-1.5
stable-diffusion-model-url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1
stable-diffusion-model-dl-name: v1-5-pruned-emaonly.ckpt
name: ${{ matrix.requirements-file }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
defaults:
run:
shell: ${{ matrix.default-shell }}
env:
PIP_USE_PEP517: '1'
INVOKEAI_ROOT: '${{ github.workspace }}/invokeai'
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: Check for changed python files
id: changed-files
uses: tj-actions/changed-files@v37
with:
files_yaml: |
python:
- 'pyproject.toml'
- 'invokeai/**'
- '!invokeai/frontend/web/**'
- 'tests/**'
- name: create models.yaml from example
run: |
mkdir -p ${{ env.INVOKEAI_ROOT }}/configs
cp configs/models.yaml.example ${{ env.INVOKEAI_ROOT }}/configs/models.yaml
- name: set test prompt to main branch validation
if: steps.changed-files.outputs.python_any_changed == 'true'
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV
- name: set test prompt to development branch validation
if: ${{ github.ref == 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> $GITHUB_ENV
- name: create requirements.txt
run: cp 'environments-and-requirements/${{ matrix.requirements-file }}' '${{ matrix.requirements-file }}'
- name: setup python
if: steps.changed-files.outputs.python_any_changed == 'true'
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: pyproject.toml
cache: 'pip'
cache-dependency-path: ${{ matrix.requirements-file }}
- name: install invokeai
if: steps.changed-files.outputs.python_any_changed == 'true'
# - name: install dependencies
# run: ${{ env.pythonLocation }}/bin/pip install --upgrade pip setuptools wheel
- name: install requirements
run: ${{ env.pythonLocation }}/bin/pip install -r '${{ matrix.requirements-file }}'
- name: Use Cached Stable Diffusion Model
id: cache-sd-model
uses: actions/cache@v3
env:
PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
run: >
pip3 install
--editable=".[test]"
cache-name: cache-${{ matrix.stable-diffusion-model }}
with:
path: ${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}
key: ${{ env.cache-name }}
- name: run pytest
if: steps.changed-files.outputs.python_any_changed == 'true'
id: run-pytest
run: pytest
- name: Download ${{ matrix.stable-diffusion-model }}
id: download-stable-diffusion-model
if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
run: |
mkdir -p "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}"
curl \
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
-o "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}/${{ matrix.stable-diffusion-model-dl-name }}" \
-L ${{ matrix.stable-diffusion-model-url }}
# - name: run invokeai-configure
# env:
# HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
# run: >
# invokeai-configure
# --yes
# --default_only
# --full-precision
# # can't use fp16 weights without a GPU
- name: run configure_invokeai.py
id: run-preload-models
run: |
${{ env.pythonLocation }}/bin/python scripts/configure_invokeai.py --no-interactive --yes
# - name: run invokeai
# id: run-invokeai
# env:
# # Set offline mode to make sure configure preloaded successfully.
# HF_HUB_OFFLINE: 1
# HF_DATASETS_OFFLINE: 1
# TRANSFORMERS_OFFLINE: 1
# INVOKEAI_OUTDIR: ${{ github.workspace }}/results
# run: >
# invokeai
# --no-patchmatch
# --no-nsfw_checker
# --precision=float32
# --always_use_cpu
# --use_memory_db
# --outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
# --from_file ${{ env.TEST_PROMPTS }}
- name: cat ~/.invokeai
id: cat-invokeai
run: cat ~/.invokeai
# - name: Archive results
# env:
# INVOKEAI_OUTDIR: ${{ github.workspace }}/results
# uses: actions/upload-artifact@v3
# with:
# name: results
# path: ${{ env.INVOKEAI_OUTDIR }}
- name: Run the tests
id: run-tests
run: |
time ${{ env.pythonLocation }}/bin/python scripts/invoke.py \
--no-patchmatch \
--no-nsfw_checker \
--model ${{ matrix.stable-diffusion-model }} \
--from_file ${{ env.TEST_PROMPTS }} \
--root="${{ env.INVOKEAI_ROOT }}" \
--outdir="${{ env.INVOKEAI_ROOT }}/outputs"
- name: Archive results
id: archive-results
uses: actions/upload-artifact@v3
with:
name: results_${{ matrix.requirements-file }}_${{ matrix.python-version }}
path: ${{ env.INVOKEAI_ROOT }}/outputs
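The pytest path of the updated workflow is straightforward to reproduce locally; a sketch, assuming a supported Python (3.10) and a checkout of the repository root:

```bash
# local equivalent of the workflow's install-and-test steps
pip install --editable ".[test]"
pytest
```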

.gitignore
View File

@@ -1,4 +1,17 @@
.idea/
# ignore default image save location and model symbolic link
outputs/
models/ldm/stable-diffusion-v1/model.ckpt
**/restoration/codeformer/weights
# ignore user models config
configs/models.user.yaml
config/models.user.yml
# ignore the Anaconda/Miniconda installer used while building Docker image
anaconda.sh
# ignore a directory which serves as a place for initial images
inputs/
# Byte-compiled / optimized / DLL files
__pycache__/
@@ -16,10 +29,11 @@ __pycache__/
.Python
build/
develop-eggs/
# dist/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
@@ -46,21 +60,16 @@ pip-delete-this-directory.txt
htmlcov/
.tox/
.nox/
.coveragerc
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
cov.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.pytest.ini
cover/
junit/
notes/
# Translations
*.mo
@@ -133,10 +142,12 @@ celerybeat.pid
# Environments
.env
.venv*
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
@@ -169,21 +180,57 @@ cython_debug/
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
src
**/__pycache__/
outputs
# Logs and associated folders
# created from generated embeddings.
logs
testtube
checkpoints
# If it's a Mac
.DS_Store
# Let the frontend manage its own gitignore
!invokeai/frontend/web/*
!frontend/*
# Scratch folder
.scratch/
.vscode/
gfpgan/
models/ldm/stable-diffusion-v1/*.sha256
# GFPGAN model files
gfpgan/
# config file (will be created by installer)
configs/models.yaml
# weights (will be created by installer)
models/ldm/stable-diffusion-v1/*.ckpt
models/clipseg
models/gfpgan
# ignore initfile
.invokeai
# ignore environment.yml and requirements.txt
# these are links to the real files in environments-and-requirements
environment.yml
requirements.txt
# source installer files
installer/*zip
installer/install.bat
installer/install.sh
installer/update.bat
installer/update.sh
source_installer/*zip
source_installer/invokeAI
install.bat
install.sh
update.bat
update.sh
# this may be present if the user created a venv
invokeai
# no longer stored in source directory
models

View File

@@ -1,24 +0,0 @@
# See https://pre-commit.com/ for usage and config
repos:
- repo: local
hooks:
- id: black
name: black
stages: [commit]
language: system
entry: black
types: [python]
- id: flake8
name: flake8
stages: [commit]
language: system
entry: flake8
types: [python]
- id: isort
name: isort
stages: [commit]
language: system
entry: isort
types: [python]

View File

@@ -1,4 +1,4 @@
<img src="docs/assets/invoke_ai_banner.png" align="center">
<img src="docs/assets/invoke_ai_banner.png" align="center">
Invoke-AI is a community of software developers, researchers, and user
interface experts who have come together on a voluntary basis to build
@@ -81,4 +81,5 @@ area. Disputes are resolved by open and honest communication.
## Signature
This document has been collectively crafted and approved by the current InvokeAI team members, as of 28 Nov 2022: **lstein** (Lincoln Stein), **blessedcoolant**, **hipsterusername** (Kent Keirsey), **Kyle0654** (Kyle Schouviller), **damian0815**, **mauwii** (Matthias Wild), **Netsvetaev** (Artur Netsvetaev), **psychedelicious**, **tildebyte**, **keturn**, and **ebr** (Eugene Brodsky). Although individuals within the group may hold differing views on particular details and/or their implications, we are all in agreement about its fundamental statements, as well as their significance and importance to this project moving forward.
This document has been collectively crafted and approved by the current InvokeAI team members, as of 28 Nov 2022: **lstein** (Lincoln Stein), **blessedcoolant**, **hipsterusername** (Kent Keirsey), **Kyle0654** (Kyle Schouviller), **damian0815**, **mauwii** (Matthias Wild), **Netsvetaev** (Artur Netsvetaev), **psychedelicious**, **tildebyte**, and **keturn**. Although individuals within the group may hold differing views on particular details and/or their implications, we are all in agreement about its fundamental statements, as well as their significance and importance to this project moving forward.

LICENSE
View File

@@ -1,176 +1,21 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
MIT License
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
Copyright (c) 2022 InvokeAI Team
1. Definitions.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,290 +0,0 @@
Copyright (c) 2023 Stability AI
CreativeML Open RAIL++-M License dated July 26, 2023
Section I: PREAMBLE
Multimodal generative models are being widely adopted and used, and
have the potential to transform the way artists, among other
individuals, conceive and benefit from AI or ML technologies as a tool
for content creation.
Notwithstanding the current and potential benefits that these
artifacts can bring to society at large, there are also concerns about
potential misuses of them, either due to their technical limitations
or ethical considerations.
In short, this license strives for both the open and responsible
downstream use of the accompanying model. When it comes to the open
character, we took inspiration from open source permissive licenses
regarding the grant of IP rights. Referring to the downstream
responsible use, we added use-based restrictions not permitting the
use of the model in very specific scenarios, in order for the licensor
to be able to enforce the license in case potential misuses of the
Model may occur. At the same time, we strive to promote open and
responsible research on generative models for art and content
generation.
Even though downstream derivative versions of the model could be
released under different licensing terms, the latter will always have
to include - at minimum - the same use-based restrictions as the ones
in the original license (this license). We believe in the intersection
between open and responsible AI development; thus, this agreement aims
to strike a balance between both in order to enable responsible
open-science in the field of AI.
This CreativeML Open RAIL++-M License governs the use of the model
(and its derivatives) and is informed by the model card associated
with the model.
NOW THEREFORE, You and Licensor agree as follows:
Definitions
"License" means the terms and conditions for use, reproduction, and
Distribution as defined in this document.
"Data" means a collection of information and/or content extracted from
the dataset used with the Model, including to train, pretrain, or
otherwise evaluate the Model. The Data is not licensed under this
License.
"Output" means the results of operating a Model as embodied in
informational content resulting therefrom.
"Model" means any accompanying machine-learning based assemblies
(including checkpoints), consisting of learnt weights, parameters
(including optimizer states), corresponding to the model architecture
as embodied in the Complementary Material, that have been trained or
tuned, in whole or in part on the Data, using the Complementary
Material.
"Derivatives of the Model" means all modifications to the Model, works
based on the Model, or any other model which is created or initialized
by transfer of patterns of the weights, parameters, activations or
output of the Model, to the other model, in order to cause the other
model to perform similarly to the Model, including - but not limited
to - distillation methods entailing the use of intermediate data
representations or methods based on the generation of synthetic data
by the Model for training the other model.
"Complementary Material" means the accompanying source code and
scripts used to define, run, load, benchmark or evaluate the Model,
and used to prepare data for training or evaluation, if any. This
includes any accompanying documentation, tutorials, examples, etc, if
any.
"Distribution" means any transmission, reproduction, publication or
other sharing of the Model or Derivatives of the Model to a third
party, including providing the Model as a hosted service made
available by electronic or other remote means - e.g. API-based or web
access.
"Licensor" means the copyright owner or entity authorized by the
copyright owner that is granting the License, including the persons or
entities that may have rights in the Model and/or distributing the
Model.
"You" (or "Your") means an individual or Legal Entity exercising
permissions granted by this License and/or making use of the Model for
whichever purpose and in any field of use, including usage of the
Model in an end-use application - e.g. chatbot, translator, image
generator.
"Third Parties" means individuals or legal entities that are not under
common control with Licensor or You.
"Contribution" means any work of authorship, including the original
version of the Model and any modifications or additions to that Model
or Derivatives of the Model thereof, that is intentionally submitted
to Licensor for inclusion in the Model by the copyright owner or by an
individual or Legal Entity authorized to submit on behalf of the
copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent to
the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control
systems, and issue tracking systems that are managed by, or on behalf
of, the Licensor for the purpose of discussing and improving the
Model, but excluding communication that is conspicuously marked or
otherwise designated in writing by the copyright owner as "Not a
Contribution."
"Contributor" means Licensor and any individual or Legal Entity on
behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Model.
Section II: INTELLECTUAL PROPERTY RIGHTS
Both copyright and patent grants apply to the Model, Derivatives of
the Model and Complementary Material. The Model and Derivatives of the
Model are subject to additional terms as described in
Section III.
Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare, publicly display, publicly
perform, sublicense, and distribute the Complementary Material, the
Model, and Derivatives of the Model.
Grant of Patent License. Subject to the terms and conditions of this
License and where and as applicable, each Contributor hereby grants to
You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this paragraph) patent license to
make, have made, use, offer to sell, sell, import, and otherwise
transfer the Model and the Complementary Material, where such license
applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by
combination of their Contribution(s) with the Model to which such
Contribution(s) was submitted. If You institute patent litigation
against any entity (including a cross-claim or counterclaim in a
lawsuit) alleging that the Model and/or Complementary Material or a
Contribution incorporated within the Model and/or Complementary
Material constitutes direct or contributory patent infringement, then
any patent licenses granted to You under this License for the Model
and/or Work shall terminate as of the date such litigation is asserted
or filed.
Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION
Distribution and Redistribution. You may host for Third Party remote
access purposes (e.g. software-as-a-service), reproduce and distribute
copies of the Model or Derivatives of the Model thereof in any medium,
with or without modifications, provided that You meet the following
conditions: Use-based restrictions as referenced in paragraph 5 MUST
be included as an enforceable provision by You in any type of legal
agreement (e.g. a license) governing the use and/or distribution of
the Model or Derivatives of the Model, and You shall give notice to
subsequent users You Distribute to, that the Model or Derivatives of
the Model are subject to paragraph 5. This provision does not apply to
the use of Complementary Material. You must give any Third Party
recipients of the Model or Derivatives of the Model a copy of this
License; You must cause any modified files to carry prominent notices
stating that You changed the files; You must retain all copyright,
patent, trademark, and attribution notices excluding those notices
that do not pertain to any part of the Model, Derivatives of the
Model. You may add Your own copyright statement to Your modifications
and may provide additional or different license terms and conditions -
respecting paragraph 4.a. - for use, reproduction, or Distribution of
Your modifications, or for any such Derivatives of the Model as a
whole, provided Your use, reproduction, and Distribution of the Model
otherwise complies with the conditions stated in this License.
Use-based restrictions. The restrictions set forth in Attachment A are
considered Use-based restrictions. Therefore You cannot use the Model
and the Derivatives of the Model for the specified restricted
uses. You may use the Model subject to this License, including only
for lawful purposes and in accordance with the License. Use may
include creating any content with, finetuning, updating, running,
training, evaluating and/or reparametrizing the Model. You shall
require all of Your users who use the Model or a Derivative of the
Model to comply with the terms of this paragraph (paragraph 5).
The Output You Generate. Except as set forth herein, Licensor claims
no rights in the Output You generate using the Model. You are
accountable for the Output you generate and its subsequent uses. No
use of the output can contravene any provision as stated in the
License.
Section IV: OTHER PROVISIONS
Updates and Runtime Restrictions. To the maximum extent permitted by
law, Licensor reserves the right to restrict (remotely or otherwise)
usage of the Model in violation of this License.
Trademarks and related. Nothing in this License permits You to make
use of Licensors' trademarks, trade names, logos or to otherwise
suggest endorsement or misrepresent the relationship between the
parties; and any rights not expressly granted herein are reserved by
the Licensors.
Disclaimer of Warranty. Unless required by applicable law or agreed to
in writing, Licensor provides the Model and the Complementary Material
(and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Model, Derivatives of
the Model, and the Complementary Material and assume any risks
associated with Your exercise of permissions under this License.
Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise, unless
required by applicable law (such as deliberate and grossly negligent
acts) or agreed to in writing, shall any Contributor be liable to You
for damages, including any direct, indirect, special, incidental, or
consequential damages of any character arising as a result of this
License or out of the use or inability to use the Model and the
Complementary Material (including but not limited to damages for loss
of goodwill, work stoppage, computer failure or malfunction, or any
and all other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
Accepting Warranty or Additional Liability. While redistributing the
Model, Derivatives of the Model and the Complementary Material
thereof, You may choose to offer, and charge a fee for, acceptance of
support, warranty, indemnity, or other liability obligations and/or
rights consistent with this License. However, in accepting such
obligations, You may act only on Your own behalf and on Your sole
responsibility, not on behalf of any other Contributor, and only if
You agree to indemnify, defend, and hold each Contributor harmless for
any liability incurred by, or claims asserted against, such
Contributor by reason of your accepting any such warranty or
additional liability.
If any provision of this License is held to be invalid, illegal or
unenforceable, the remaining provisions shall be unaffected thereby
and remain valid as if such provision had not been set forth herein.
END OF TERMS AND CONDITIONS
Attachment A
Use Restrictions
You agree not to use the Model or Derivatives of the Model:
* In any way that violates any applicable national, federal, state,
local or international law or regulation;
* For the purpose of exploiting, harming or attempting to exploit or
harm minors in any way;
* To generate or disseminate verifiably false information and/or
content with the purpose of harming others;
* To generate or disseminate personal identifiable information that
can be used to harm an individual;
* To defame, disparage or otherwise harass others;
* For fully automated decision making that adversely impacts an
individual's legal rights or otherwise creates or modifies a
binding, enforceable obligation;
* For any use intended to or which has the effect of discriminating
against or harming individuals or groups based on online or offline
social behavior or known or predicted personal or personality
characteristics;
* To exploit any of the vulnerabilities of a specific group of persons
based on their age, social, physical or mental characteristics, in
order to materially distort the behavior of a person pertaining to
that group in a manner that causes or is likely to cause that person
or another person physical or psychological harm;
* For any use intended to or which has the effect of discriminating
against individuals or groups based on legally protected
characteristics or categories;
* To provide medical advice and medical results interpretation;
* To generate or disseminate information for the purpose to be used
for administration of justice, law enforcement, immigration or
asylum processes, such as predicting an individual will commit
fraud/crime commitment (e.g. by text profiling, drawing causal
relationships between assertions made in documents, indiscriminate
and arbitrarily-targeted use).

README.md
View File

@@ -1,22 +1,23 @@
<div align="center">
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/1a917d94-e099-4fa1-a70f-7dd8d0691018)
# InvokeAI: A Stable Diffusion Toolkit
# Invoke AI - Generative AI for Professional Creatives
## Professional Creative Tools for Stable Diffusion, Custom-Trained Models, and more.
To learn more about Invoke AI, get started instantly, or implement our Business solutions, visit [invoke.ai](https://invoke.ai)
_Formerly known as lstein/stable-diffusion_
![project logo](docs/assets/logo.png)
[![discord badge]][discord link]
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
[![CI checks on main badge]][CI checks on main link] [![CI checks on dev badge]][CI checks on dev link] [![latest commit to dev badge]][latest commit to dev link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
@@ -27,387 +28,159 @@
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]: https://github.com/invoke-ai/InvokeAI/commits/development
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/
</div>
InvokeAI is a leading creative engine built to empower professionals
and enthusiasts alike. Generate and create stunning visual media using
the latest AI-driven technologies. InvokeAI offers an industry-leading
Web Interface, an interactive Command Line Interface, and also serves as
the foundation for multiple commercial products.
This is a fork of
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
the open source text-to-image generator. It provides a streamlined
process with various new features and options to aid the image
generation process. It runs on Windows, Mac and Linux machines, with
GPU cards with as little as 4 GB of RAM. It provides both a polished
Web interface (see below), and an easy-to-use command-line interface.
**Quick links**: [[How to
Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [<a
href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a
href="https://invoke-ai.github.io/InvokeAI/">Documentation and
Tutorials</a>]
[<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>]
[<a
href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion,
Ideas & Q&A</a>]
[<a
href="https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/">Contributing</a>]
**Quick links**: [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
<div align="center">
<div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>
![canvas preview](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/canvas_preview.png)
</div>
_Note: This fork is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
## Table of Contents
Table of Contents 📝
1. [Installation](#installation)
2. [Hardware Requirements](#hardware-requirements)
3. [Features](#features)
4. [Latest Changes](#latest-changes)
5. [Troubleshooting](#troubleshooting)
6. [Contributing](#contributing)
7. [Contributors](#contributors)
8. [Support](#support)
9. [Further Reading](#further-reading)
**Getting Started**
1. 🏁 [Quick Start](#quick-start)
2. 🖥️ [Hardware Requirements](#hardware-requirements)
**More About Invoke**
1. 🌟 [Features](#features)
2. 📣 [Latest Changes](#latest-changes)
3. 🛠️ [Troubleshooting](#troubleshooting)
**Supporting the Project**
1. 🤝 [Contributing](#contributing)
2. 👥 [Contributors](#contributors)
3. 💕 [Support](#support)
## Quick Start
For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)
If upgrading from version 2.3, please read [Migrating a 2.3 root
directory to 3.0](#migrating-to-3) first.
### Automatic Installer (suggested for 1st time users)
1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
2. Download the .zip file for your OS (Windows/macOS/Linux).
3. Unzip the file.
4. **Windows:** double-click on the `install.bat` script. **macOS:** Open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. **Linux:** run `install.sh`.
5. You'll be asked to confirm the location of the folder in which
to install InvokeAI and its image generation model files. Pick a
location with at least 15 GB of free disk space, and more if you plan on
installing lots of models.
6. Wait while the installer does its thing. After installing the software,
the installer will launch a script that lets you configure InvokeAI and
select a set of starting image generation models.
7. Find the folder that InvokeAI was installed into (it is not the
same as the unpacked zip file directory!) The default location of this
folder (if you didn't change it in step 5) is `~/invokeai` on
Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.
8. On Windows systems, double-click on the `invoke.bat` file. On
macOS, open a Terminal window, drag `invoke.sh` from the folder into
the Terminal, and press return. On Linux, run `invoke.sh`
9. Press 2 to open the "browser-based UI", press enter/return, wait a
minute or two for Stable Diffusion to start up, then open your browser
and go to http://localhost:9090.
10. Type `banana sushi` in the box on the top left and click `Invoke`
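On Linux, the whole sequence condenses to roughly the following; a sketch, since the installer zip name varies by release and `~/invokeai` is only the default install location:

```terminal
./install.sh            # run from the unzipped installer folder (step 4)
~/invokeai/invoke.sh    # launch; press 2 for the browser-based UI (steps 8-9)
# then open http://localhost:9090 and try the prompt "banana sushi"
```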
### Command-Line Installation (for developers and users familiar with Terminals)
You must have Python 3.10 through 3.11 installed on your machine. Earlier or
later versions are not supported.
Node.js also needs to be installed along with yarn (can be installed with
the command `npm install -g yarn` if needed)
1. Open a command-line window on your machine. PowerShell is recommended on Windows.
2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
```terminal
mkdir invokeai
```
3. Create a virtual environment named `.venv` inside this directory and activate it:
```terminal
cd invokeai
python -m venv .venv --prompt InvokeAI
```
4. Activate the virtual environment (do it every time you run InvokeAI)
_For Linux/Mac users:_
```sh
source .venv/bin/activate
```
_For Windows users:_
```ps
.venv\Scripts\activate
```
5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
_For Linux with an AMD GPU:_
```sh
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
_For non-GPU systems:_
```terminal
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
_For Macintoshes, either Intel or M1/M2/M3:_
```sh
pip install InvokeAI --use-pep517
```
6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):
```terminal
invokeai-configure --root .
```
Don't miss the dot at the end!
7. Launch the web server (do it every time you run InvokeAI):
```terminal
invokeai-web
```
8. Point your browser to http://localhost:9090 to bring up the web interface.
9. Type `banana sushi` in the box on the top left and click `Invoke`.
Be sure to activate the virtual environment each time before re-launching InvokeAI,
using `source .venv/bin/activate` or `.venv\Scripts\activate`.
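Put together, a typical relaunch on Linux/Mac looks like this:

```terminal
cd invokeai
source .venv/bin/activate   # Windows: .venv\Scripts\activate
invokeai-web
```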
## Detailed Installation Instructions
### Installation
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
<a name="migrating-to-3"></a>
### Migrating a v2.3 InvokeAI root directory
### Hardware Requirements
The InvokeAI root directory is where the InvokeAI startup file,
installed models, and generated images are stored. It is ordinarily
named `invokeai` and located in your home directory. The contents and
layout of this directory has changed between versions 2.3 and 3.0 and
cannot be used directly.
#### System
We currently recommend that you use the installer to create a new root
directory named differently from the 2.3 one, e.g. `invokeai-3` and
then use a migration script to copy your 2.3 models into the new
location. However, if you choose, you can upgrade this directory in
place. This section gives both recipes.
You will need one of the following:
#### Creating a new root directory and migrating old models
- An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- An Apple computer with an M1 chip.
This is the safer recipe because it leaves your old root directory in
place to fall back on.
#### Memory
1. Follow the instructions above to create and install InvokeAI in a
directory that has a different name from the 2.3 invokeai directory.
In this example, we will use "invokeai-3"
- At least 12 GB Main Memory RAM.
2. When you are prompted to select models to install, select a minimal
set of models, such as stable-diffusion-v1.5 only.
#### Disk
3. After installation is complete, launch `invoke.sh` (Linux/Mac) or
`invoke.bat` (Windows) and select option 8 "Open the developers console". This
will take you to the command line.
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
4. Issue the command `invokeai-migrate3 --from /path/to/v2.3-root --to
/path/to/invokeai-3-root`. Provide the correct `--from` and `--to`
paths for your v2.3 and v3.0 root directories respectively.
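As a copy-and-paste block (the paths are examples; substitute your own v2.3 and v3.0 root directories):

```terminal
invokeai-migrate3 \
  --from ~/invokeai \
  --to ~/invokeai-3
```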
**Note**
This will copy and convert your old models from 2.3 format to 3.0
format and create a new `models` directory in the 3.0 directory. The
old models directory (which contains the models selected at install
time) will be renamed `models.orig` and can be deleted once you have
confirmed that the migration was successful.
If you have an Nvidia 10xx series card (e.g. the 1080ti), please
run the dream script in full-precision mode as shown below.
If you wish, you can pass the 2.3 root directory to both `--from` and
`--to` in order to update in place. Warning: this directory will no
longer be usable with InvokeAI 2.3.
Similarly, specify full-precision mode on Apple M1 hardware.
#### Migrating in place
For the adventurous, you may do an in-place upgrade from 2.3 to 3.0
without touching the command line. **This recipe does not work on
Windows platforms due to a bug in the Windows version of the 2.3
upgrade script.** See the next section for a Windows recipe.
##### For Mac and Linux Users:
1. Launch the InvokeAI launcher script in your current v2.3 root directory.
2. Select option [9] "Update InvokeAI" to bring up the updater dialog.
3. Select option [1] to upgrade to the latest release.
4. Once the upgrade is finished you will be returned to the launcher
menu. Select option [7] "Re-run the configure script to fix a broken
install or to complete a major upgrade".
This will run the configure script against the v2.3 directory and
update it to the 3.0 format. The following files will be replaced:
- The invokeai.init file, replaced by invokeai.yaml
- The models directory
- The configs/models.yaml model index
The original versions of these files will be saved with the suffix
".orig" appended to the end. Once you have confirmed that the upgrade
worked, you can safely remove these files. Alternatively you can
restore a working v2.3 directory by removing the new files and
restoring the ".orig" files' original names.
##### For Windows Users:
Windows users can upgrade with the following steps:
1. Enter the 2.3 root directory you wish to upgrade
2. Launch `invoke.sh` or `invoke.bat`
3. Select the "Developer's console" option [8]
4. Type the following commands
```
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0" --use-pep517 --upgrade
invokeai-configure --root .
```
(Replace `v3.0.0` with the current release number if this document is out of date).
The first command will install and upgrade new software to run
InvokeAI. The second will prepare the 2.3 directory for use with 3.0.
You may now launch the WebUI in the usual way, by selecting option [1]
from the launcher script
#### Migrating Images
The migration script will migrate your invokeai settings and models,
including textual inversion models, LoRAs and merges that you may have
installed previously. However, it does **not** migrate the generated
images stored in your 2.3-format outputs directory. To do this, you
need to run an additional step:
1. From a working InvokeAI 3.0 root directory, start the launcher and
enter menu option [8] to open the "developer's console".
2. At the developer's console command line, type the command:
Precision is auto-configured based on the device. If, however, you encounter
errors like 'expected type Float but found Half' or 'not implemented for Half'
you can try starting `invoke.py` with the `--precision=float32` flag:
```bash
invokeai-import-images
(invokeai) ~/InvokeAI$ python scripts/invoke.py --precision=float32
```
3. This will lead you through the process of confirming the desired
source and destination for the imported images. The images will
appear in the gallery board of your choice, and contain the
original prompt, model name, and other parameters used to generate
the image.
(Many kudos to **techjedi** for contributing this script.)
### Features
## Hardware Requirements
#### Major Features
InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).
- [Web Server](https://invoke-ai.github.io/InvokeAI/features/WEB/)
- [Interactive Command Line Interface](https://invoke-ai.github.io/InvokeAI/features/CLI/)
- [Image To Image](https://invoke-ai.github.io/InvokeAI/features/IMG2IMG/)
- [Inpainting Support](https://invoke-ai.github.io/InvokeAI/features/INPAINTING/)
- [Outpainting Support](https://invoke-ai.github.io/InvokeAI/features/OUTPAINTING/)
- [Upscaling, face-restoration and outpainting](https://invoke-ai.github.io/InvokeAI/features/POSTPROCESS/)
- [Reading Prompts From File](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#reading-prompts-from-a-file)
- [Prompt Blending](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#prompt-blending)
- [Thresholding and Perlin Noise Initialization Options](https://invoke-ai.github.io/InvokeAI/features/OTHER/#thresholding-and-perlin-noise-initialization-options)
- [Negative/Unconditioned Prompts](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#negative-and-unconditioned-prompts)
- [Variations](https://invoke-ai.github.io/InvokeAI/features/VARIATIONS/)
- [Personalizing Text-to-Image Generation](https://invoke-ai.github.io/InvokeAI/features/TEXTUAL_INVERSION/)
- [Simplified API for text to image generation](https://invoke-ai.github.io/InvokeAI/features/OTHER/#simplified-api)
### System
You will need one of the following:
- An NVIDIA-based graphics card with 4 GB or more of VRAM. 6-8 GB of
VRAM is highly recommended for rendering with the Stable Diffusion
XL models.
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4 GB or more of VRAM (Linux only);
6-8 GB for XL rendering.

We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.

**Memory** - At least 12 GB of main memory (RAM).

**Disk** - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
## Features

Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/).

#### Major Features

- [Web Server](https://invoke-ai.github.io/InvokeAI/features/WEB/)
- [Interactive Command Line Interface](https://invoke-ai.github.io/InvokeAI/features/CLI/)
- [Image To Image](https://invoke-ai.github.io/InvokeAI/features/IMG2IMG/)
- [Inpainting Support](https://invoke-ai.github.io/InvokeAI/features/INPAINTING/)
- [Outpainting Support](https://invoke-ai.github.io/InvokeAI/features/OUTPAINTING/)
- [Upscaling, face-restoration and outpainting](https://invoke-ai.github.io/InvokeAI/features/POSTPROCESS/)
- [Reading Prompts From File](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#reading-prompts-from-a-file)
- [Prompt Blending](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#prompt-blending)
- [Thresholding and Perlin Noise Initialization Options](https://invoke-ai.github.io/InvokeAI/features/OTHER/#thresholding-and-perlin-noise-initialization-options)
- [Negative/Unconditioned Prompts](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#negative-and-unconditioned-prompts)
- [Variations](https://invoke-ai.github.io/InvokeAI/features/VARIATIONS/)
- [Personalizing Text-to-Image Generation](https://invoke-ai.github.io/InvokeAI/features/TEXTUAL_INVERSION/)
- [Simplified API for text to image generation](https://invoke-ai.github.io/InvokeAI/features/OTHER/#simplified-api)
### *Web Server & UI*
InvokeAI offers a locally hosted Web Server & React Frontend, with an industry-leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
### *Unified Canvas*
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Workflows & Nodes*
InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.
### *Board & Gallery Management*
Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.
### Other features
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1, XL support*
- *Upscaling Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
- *Workflow creation & management*
- *Node-Based Architecture*
- [Google Colab](https://invoke-ai.github.io/InvokeAI/features/OTHER/#google-colab)
- [Seamless Tiling](https://invoke-ai.github.io/InvokeAI/features/OTHER/#seamless-tiling)
- [Shortcut: Reusing Seeds](https://invoke-ai.github.io/InvokeAI/features/OTHER/#shortcuts-reusing-seeds)
- [Preload Models](https://invoke-ai.github.io/InvokeAI/features/OTHER/#preload-models)
### Latest Changes
For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).
- v2.0.1 (13 October 2022)
- fix noisy images at high step count when using k* samplers
- dream.py script now calls invoke.py module directly rather than
via a new python process (which could break the environment)
- v2.0.0 (9 October 2022)
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains
for backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for <a href="https://invoke-ai.github.io/InvokeAI/features/INPAINTING/">inpainting</a> and <a href="https://invoke-ai.github.io/InvokeAI/features/OUTPAINTING/">outpainting</a>
- img2img runs on all k* samplers
- Support for <a href="https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#negative-and-unconditioned-prompts">negative prompts</a>
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for <a href="https://invoke-ai.github.io/InvokeAI/features/POSTPROCESS/">post-processing of previously-generated images</a>
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas),
and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows <a href="https://invoke-ai.github.io/InvokeAI/features/CLI/#txt2img">larger images to be created without duplicating elements</a>, at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control variation
during image generation (see <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options">Thresholding and Perlin Noise Initialization</a>).
- Extensive metadata now written into PNG files, allowing reliable regeneration of images
and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
- Improved <a href="https://invoke-ai.github.io/InvokeAI/features/CLI/">command-line completion behavior</a>.
New commands added:
- List command-line history with `!history`
- Search command-line history with `!search`
- Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto-configure.
To override the automatic choice, use the new flag, e.g. `--precision=float32`.
For older changelogs, please visit the **[CHANGELOG](https://invoke-ai.github.io/InvokeAI/CHANGELOG#v114-11-september-2022)**.
### Troubleshooting
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues. For more help, please join our [Discord][discord link].
## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.
If you are unfamiliar with how to contribute to GitHub projects, we have a
[New Contributor Checklist](https://invoke-ai.github.io/InvokeAI/contributing/contribution_guides/newContributorChecklist/) you can follow to get started, as well as a general
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github) to the pull-request workflow. For now, the most
important thing is to **make your pull request against the "development" branch**, and not against
"main". This will help keep public breakage to a minimum and will allow you to propose more radical
changes.
We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
@ -423,7 +196,13 @@ their time, hard work and effort.
### Support
For support, please use this repository's GitHub Issues tracking service, or join the [Discord][discord link].
Original portions of the software are Copyright (c) 2023 by respective contributors; earlier portions are Copyright (c) 2020 by
[Lincoln D. Stein](https://github.com/lstein).
### Further Reading
Please see the original README for more information on this software and underlying algorithm,
located in the file [README-CompViz.md](https://invoke-ai.github.io/InvokeAI/other/README-CompViz/).

View File

@ -21,7 +21,7 @@ This model card focuses on the model associated with the Stable Diffusion model,
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
@ -68,11 +68,11 @@ Using the model to generate content that is cruel to individuals is a misuse of
considerations.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
@ -84,7 +84,7 @@ The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
@ -108,12 +108,12 @@ filtered to images with an original size `>= 512x512`, estimated aesthetics scor
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:
![pareto](assets/v1-variants-scores.jpg)
Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact

View File

@ -0,0 +1,55 @@
import argparse
import os
from ldm.invoke.args import PRECISION_CHOICES
def create_cmd_parser():
parser = argparse.ArgumentParser(description="InvokeAI web UI")
parser.add_argument(
"--host",
type=str,
help="The host to serve on",
default="localhost",
)
parser.add_argument("--port", type=int, help="The port to serve on", default=9090)
parser.add_argument(
"--cors",
nargs="*",
type=str,
help="Additional allowed origins, comma-separated",
)
parser.add_argument(
"--embedding_path",
type=str,
help="Path to a pre-trained embedding manager checkpoint - can only be set on command line",
)
# TODO: Can't get flask to serve images from any dir (saving to the dir does work when specified)
# parser.add_argument(
# "--output_dir",
# default="outputs/",
# type=str,
# help="Directory for output images",
# )
parser.add_argument(
"-v",
"--verbose",
action="store_true",
help="Enables verbose logging",
)
parser.add_argument(
"--precision",
dest="precision",
type=str,
choices=PRECISION_CHOICES,
metavar="PRECISION",
help=f'Set model precision. Defaults to auto selected based on device. Options: {", ".join(PRECISION_CHOICES)}',
default="auto",
)
parser.add_argument(
'--free_gpu_mem',
dest='free_gpu_mem',
action='store_true',
help='Force free gpu memory before final decoding',
)
return parser
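
A minimal usage sketch for the parser above, assuming the `ldm` package that provides `PRECISION_CHOICES` is importable; the argument values are illustrative only:
```python
# Hypothetical usage of create_cmd_parser(); values are examples, not defaults.
parser = create_cmd_parser()
opts = parser.parse_args(["--host", "0.0.0.0", "--port", "9090", "--verbose"])
print(opts.host, opts.port, opts.precision)  # -> 0.0.0.0 9090 auto
```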

View File

@ -0,0 +1,117 @@
from PIL import Image, ImageChops
from PIL.Image import Image as ImageType
from typing import Union, Literal
# https://stackoverflow.com/questions/43864101/python-pil-check-if-image-is-transparent
def check_for_any_transparency(img: Union[ImageType, str]) -> bool:
    if type(img) is str:
        img = Image.open(img)
if img.info.get("transparency", None) is not None:
return True
if img.mode == "P":
transparent = img.info.get("transparency", -1)
for _, index in img.getcolors():
if index == transparent:
return True
elif img.mode == "RGBA":
extrema = img.getextrema()
if extrema[3][0] < 255:
return True
return False
def get_canvas_generation_mode(
init_img: Union[ImageType, str], init_mask: Union[ImageType, str]
) -> Literal["txt2img", "outpainting", "inpainting", "img2img",]:
if type(init_img) is str:
init_img = Image.open(init_img)
if type(init_mask) is str:
init_mask = Image.open(init_mask)
init_img = init_img.convert("RGBA")
# Get alpha from init_img
init_img_alpha = init_img.split()[-1]
init_img_alpha_mask = init_img_alpha.convert("L")
init_img_has_transparency = check_for_any_transparency(init_img)
if init_img_has_transparency:
init_img_is_fully_transparent = (
True if init_img_alpha_mask.getbbox() is None else False
)
"""
Mask images are white in areas where no change should be made, black where changes
should be made.
"""
# Fit the mask to init_img's size and convert it to greyscale
init_mask = init_mask.resize(init_img.size).convert("L")
"""
PIL.Image.getbbox() returns the bounding box of non-zero areas of the image, so we first
invert the mask image so that masked areas are white and other areas black == zero.
    getbbox() now tells us if there are any masked areas.
"""
init_mask_bbox = ImageChops.invert(init_mask).getbbox()
init_mask_exists = False if init_mask_bbox is None else True
if init_img_has_transparency:
if init_img_is_fully_transparent:
return "txt2img"
else:
return "outpainting"
else:
if init_mask_exists:
return "inpainting"
else:
return "img2img"
def main():
# Testing
init_img_opaque = "test_images/init-img_opaque.png"
init_img_partial_transparency = "test_images/init-img_partial_transparency.png"
init_img_full_transparency = "test_images/init-img_full_transparency.png"
init_mask_no_mask = "test_images/init-mask_no_mask.png"
init_mask_has_mask = "test_images/init-mask_has_mask.png"
print(
"OPAQUE IMAGE, NO MASK, expect img2img, got ",
get_canvas_generation_mode(init_img_opaque, init_mask_no_mask),
)
print(
"IMAGE WITH TRANSPARENCY, NO MASK, expect outpainting, got ",
get_canvas_generation_mode(
init_img_partial_transparency, init_mask_no_mask
),
)
print(
"FULLY TRANSPARENT IMAGE NO MASK, expect txt2img, got ",
get_canvas_generation_mode(init_img_full_transparency, init_mask_no_mask),
)
print(
"OPAQUE IMAGE, WITH MASK, expect inpainting, got ",
get_canvas_generation_mode(init_img_opaque, init_mask_has_mask),
)
print(
"IMAGE WITH TRANSPARENCY, WITH MASK, expect outpainting, got ",
get_canvas_generation_mode(
init_img_partial_transparency, init_mask_has_mask
),
)
print(
"FULLY TRANSPARENT IMAGE WITH MASK, expect txt2img, got ",
get_canvas_generation_mode(init_img_full_transparency, init_mask_has_mask),
)
if __name__ == "__main__":
main()

View File

@ -0,0 +1,71 @@
from backend.modules.parse_seed_weights import parse_seed_weights
import argparse
SAMPLER_CHOICES = [
"ddim",
"k_dpm_2_a",
"k_dpm_2",
"k_dpmpp_2_a",
"k_dpmpp_2",
"k_euler_a",
"k_euler",
"k_heun",
"k_lms",
"plms",
]
def parameters_to_command(params):
"""
    Converts a dict of parameters into an `invoke.py` REPL command.
"""
switches = list()
if "prompt" in params:
switches.append(f'"{params["prompt"]}"')
if "steps" in params:
switches.append(f'-s {params["steps"]}')
if "seed" in params:
switches.append(f'-S {params["seed"]}')
if "width" in params:
switches.append(f'-W {params["width"]}')
if "height" in params:
switches.append(f'-H {params["height"]}')
if "cfg_scale" in params:
switches.append(f'-C {params["cfg_scale"]}')
if "sampler_name" in params:
switches.append(f'-A {params["sampler_name"]}')
if "seamless" in params and params["seamless"] == True:
switches.append(f"--seamless")
if "hires_fix" in params and params["hires_fix"] == True:
switches.append(f"--hires")
if "init_img" in params and len(params["init_img"]) > 0:
switches.append(f'-I {params["init_img"]}')
if "init_mask" in params and len(params["init_mask"]) > 0:
switches.append(f'-M {params["init_mask"]}')
if "init_color" in params and len(params["init_color"]) > 0:
switches.append(f'--init_color {params["init_color"]}')
if "strength" in params and "init_img" in params:
switches.append(f'-f {params["strength"]}')
if "fit" in params and params["fit"] == True:
switches.append(f"--fit")
if "facetool" in params:
switches.append(f'-ft {params["facetool"]}')
if "facetool_strength" in params and params["facetool_strength"]:
switches.append(f'-G {params["facetool_strength"]}')
elif "gfpgan_strength" in params and params["gfpgan_strength"]:
switches.append(f'-G {params["gfpgan_strength"]}')
if "codeformer_fidelity" in params:
switches.append(f'-cf {params["codeformer_fidelity"]}')
if "upscale" in params and params["upscale"]:
switches.append(f'-U {params["upscale"][0]} {params["upscale"][1]}')
if "variation_amount" in params and params["variation_amount"] > 0:
switches.append(f'-v {params["variation_amount"]}')
if "with_variations" in params:
seed_weight_pairs = ",".join(
f"{seed}:{weight}" for seed, weight in params["with_variations"]
)
switches.append(f"-V {seed_weight_pairs}")
return " ".join(switches)

View File

@ -0,0 +1,47 @@
def parse_seed_weights(seed_weights):
"""
Accepts seed weights as string in "12345:0.1,23456:0.2,3456:0.3" format
Validates them
If valid: returns as [[12345, 0.1], [23456, 0.2], [3456, 0.3]]
If invalid: returns False
"""
# Must be a string
if not isinstance(seed_weights, str):
return False
# String must not be empty
if len(seed_weights) == 0:
return False
pairs = []
for pair in seed_weights.split(","):
split_values = pair.split(":")
# Seed and weight are required
if len(split_values) != 2:
return False
        if len(split_values[0]) == 0 or len(split_values[1]) == 0:
return False
# Try casting the seed to int and weight to float
try:
seed = int(split_values[0])
weight = float(split_values[1])
except ValueError:
return False
# Seed must be 0 or above
if not seed >= 0:
return False
# Weight must be between 0 and 1
if not (weight >= 0 and weight <= 1):
return False
# This pair is valid
pairs.append([seed, weight])
# All pairs are valid
return pairs
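
A few illustrative calls showing the accept/reject behavior (inputs are made up):
```python
print(parse_seed_weights("12345:0.1,23456:0.2"))  # [[12345, 0.1], [23456, 0.2]]
print(parse_seed_weights("12345:1.5"))            # False: weight outside 0..1
print(parse_seed_weights("abc:0.5"))              # False: seed is not an integer
```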

View File

@ -0,0 +1,80 @@
stable-diffusion-1.5:
description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
repo_id: runwayml/stable-diffusion-v1-5
config: v1-inference.yaml
file: v1-5-pruned-emaonly.ckpt
recommended: true
width: 512
height: 512
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
config: v1-inpainting-inference.yaml
file: sd-v1-5-inpainting.ckpt
recommended: True
width: 512
height: 512
ft-mse-improved-autoencoder-840000:
description: StabilityAI improved autoencoder fine-tuned for human faces (recommended; 335 MB)
repo_id: stabilityai/sd-vae-ft-mse-original
config: VAE/default
file: vae-ft-mse-840000-ema-pruned.ckpt
recommended: True
width: 512
height: 512
stable-diffusion-1.4:
description: The original Stable Diffusion version 1.4 weight file (4.27 GB)
repo_id: CompVis/stable-diffusion-v-1-4-original
config: v1-inference.yaml
file: sd-v1-4.ckpt
recommended: False
width: 512
height: 512
waifu-diffusion-1.3:
  description: Stable Diffusion 1.4 fine tuned on anime-styled images (4.27 GB)
repo_id: hakurei/waifu-diffusion-v1-3
config: v1-inference.yaml
file: model-epoch09-float32.ckpt
recommended: False
width: 512
height: 512
trinart-2.0:
description: An SD model finetuned with ~40,000 assorted high resolution manga/anime-style pictures (2.13 GB)
repo_id: naclbit/trinart_stable_diffusion_v2
config: v1-inference.yaml
file: trinart2_step95000.ckpt
recommended: False
width: 512
height: 512
trinart_characters-1.0:
description: An SD model finetuned with 19.2M anime/manga style images (2.13 GB)
repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
config: v1-inference.yaml
file: trinart_characters_it4_v1.ckpt
recommended: False
width: 512
height: 512
trinart_vae:
description: Custom autoencoder for trinart_characters
repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
config: VAE/trinart
file: autoencoder_fix_kl-f8-trinart_characters.ckpt
recommended: False
width: 512
height: 512
papercut-1.0:
description: SD 1.5 fine-tuned for papercut art (use "PaperCut" in your prompts) (2.13 GB)
repo_id: Fictiverse/Stable_Diffusion_PaperCut_Model
config: v1-inference.yaml
file: PaperCut_v1.ckpt
recommended: False
width: 512
height: 512
voxel_art-1.0:
description: Stable Diffusion trained on voxel art (use "VoxelArt" in your prompts) (4.27 GB)
repo_id: Fictiverse/Stable_Diffusion_VoxelArt_Model
config: v1-inference.yaml
file: VoxelArt_v1.ckpt
recommended: False
width: 512
height: 512

View File

@ -0,0 +1,27 @@
# This file describes the alternative machine learning models
# available to the InvokeAI script.
#
# To add a new model, follow the examples below. Each
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
stable-diffusion-1.5:
description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
default: true
stable-diffusion-1.4:
description: Stable Diffusion inference model version 1.4
config: configs/stable-diffusion/v1-inference.yaml
weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
width: 512
height: 512
inpainting-1.5:
weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
config: configs/stable-diffusion/v1-inpainting-inference.yaml
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
description: RunwayML SD 1.5 model optimized for inpainting
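
As a rough sketch of how a script could read this file with PyYAML (illustrative only, not InvokeAI's actual model loader):
```python
import yaml  # assumption: PyYAML is installed

# Load the model registry and print a one-line summary per model.
with open("configs/models.yaml") as f:
    models = yaml.safe_load(f)

for name, spec in models.items():
    marker = " (default)" if spec.get("default") else ""
    print(f"{name}{marker}: {spec.get('description', '')}")
```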

configs/sd-concepts.txt

@ -0,0 +1,803 @@
sd-concepts-library/001glitch-core
sd-concepts-library/2814-roth
sd-concepts-library/3d-female-cyborgs
sd-concepts-library/4tnght
sd-concepts-library/80s-anime-ai
sd-concepts-library/80s-anime-ai-being
sd-concepts-library/852style-girl
sd-concepts-library/8bit
sd-concepts-library/8sconception
sd-concepts-library/Aflac-duck
sd-concepts-library/Akitsuki
sd-concepts-library/Atako
sd-concepts-library/Exodus-Styling
sd-concepts-library/RINGAO
sd-concepts-library/a-female-hero-from-the-legend-of-mir
sd-concepts-library/a-hat-kid
sd-concepts-library/a-tale-of-two-empires
sd-concepts-library/aadhav-face
sd-concepts-library/aavegotchi
sd-concepts-library/abby-face
sd-concepts-library/abstract-concepts
sd-concepts-library/accurate-angel
sd-concepts-library/agm-style-nao
sd-concepts-library/aj-fosik
sd-concepts-library/alberto-mielgo
sd-concepts-library/alex-portugal
sd-concepts-library/alex-thumbnail-object-2000-steps
sd-concepts-library/aleyna-tilki
sd-concepts-library/alf
sd-concepts-library/alicebeta
sd-concepts-library/alien-avatar
sd-concepts-library/alisa
sd-concepts-library/all-rings-albuns
sd-concepts-library/altvent
sd-concepts-library/altyn-helmet
sd-concepts-library/amine
sd-concepts-library/amogus
sd-concepts-library/anders-zorn
sd-concepts-library/angus-mcbride-style
sd-concepts-library/animalve3-1500seq
sd-concepts-library/anime-background-style
sd-concepts-library/anime-background-style-v2
sd-concepts-library/anime-boy
sd-concepts-library/anime-girl
sd-concepts-library/anyXtronXredshift
sd-concepts-library/anya-forger
sd-concepts-library/apex-wingman
sd-concepts-library/apulian-rooster-v0-1
sd-concepts-library/arcane-face
sd-concepts-library/arcane-style-jv
sd-concepts-library/arcimboldo-style
sd-concepts-library/armando-reveron-style
sd-concepts-library/armor-concept
sd-concepts-library/arq-render
sd-concepts-library/art-brut
sd-concepts-library/arthur1
sd-concepts-library/artist-yukiko-kanagai
sd-concepts-library/arwijn
sd-concepts-library/ashiok
sd-concepts-library/at-wolf-boy-object
sd-concepts-library/atm-ant
sd-concepts-library/atm-ant-2
sd-concepts-library/axe-tattoo
sd-concepts-library/ayush-spider-spr
sd-concepts-library/azura-from-vibrant-venture
sd-concepts-library/ba-shiroko
sd-concepts-library/babau
sd-concepts-library/babs-bunny
sd-concepts-library/babushork
sd-concepts-library/backrooms
sd-concepts-library/bad_Hub_Hugh
sd-concepts-library/bada-club
sd-concepts-library/baldi
sd-concepts-library/baluchitherian
sd-concepts-library/bamse
sd-concepts-library/bamse-og-kylling
sd-concepts-library/bee
sd-concepts-library/beholder
sd-concepts-library/beldam
sd-concepts-library/belen
sd-concepts-library/bella-goth
sd-concepts-library/belle-delphine
sd-concepts-library/bert-muppet
sd-concepts-library/better-collage3
sd-concepts-library/between2-mt-fade
sd-concepts-library/birb-style
sd-concepts-library/black-and-white-design
sd-concepts-library/black-waifu
sd-concepts-library/bloo
sd-concepts-library/blue-haired-boy
sd-concepts-library/blue-zombie
sd-concepts-library/blue-zombiee
sd-concepts-library/bluebey
sd-concepts-library/bluebey-2
sd-concepts-library/bobs-burgers
sd-concepts-library/boissonnard
sd-concepts-library/bonzi-monkey
sd-concepts-library/borderlands
sd-concepts-library/bored-ape-textual-inversion
sd-concepts-library/boris-anderson
sd-concepts-library/bozo-22
sd-concepts-library/breakcore
sd-concepts-library/brittney-williams-art
sd-concepts-library/bruma
sd-concepts-library/brunnya
sd-concepts-library/buddha-statue
sd-concepts-library/bullvbear
sd-concepts-library/button-eyes
sd-concepts-library/canadian-goose
sd-concepts-library/canary-cap
sd-concepts-library/cancer_style
sd-concepts-library/captain-haddock
sd-concepts-library/captainkirb
sd-concepts-library/car-toy-rk
sd-concepts-library/carasibana
sd-concepts-library/carlitos-el-mago
sd-concepts-library/carrascharacter
sd-concepts-library/cartoona-animals
sd-concepts-library/cat-toy
sd-concepts-library/centaur
sd-concepts-library/cgdonny1
sd-concepts-library/cham
sd-concepts-library/chandra-nalaar
sd-concepts-library/char-con
sd-concepts-library/character-pingu
sd-concepts-library/cheburashka
sd-concepts-library/chen-1
sd-concepts-library/child-zombie
sd-concepts-library/chillpill
sd-concepts-library/chonkfrog
sd-concepts-library/chop
sd-concepts-library/christo-person
sd-concepts-library/chuck-walton
sd-concepts-library/chucky
sd-concepts-library/chungus-poodl-pet
sd-concepts-library/cindlop
sd-concepts-library/collage-cutouts
sd-concepts-library/collage14
sd-concepts-library/collage3
sd-concepts-library/collage3-hubcity
sd-concepts-library/cologne
sd-concepts-library/color-page
sd-concepts-library/colossus
sd-concepts-library/command-and-conquer-remastered-cameos
sd-concepts-library/concept-art
sd-concepts-library/conner-fawcett-style
sd-concepts-library/conway-pirate
sd-concepts-library/coop-himmelblau
sd-concepts-library/coraline
sd-concepts-library/cornell-box
sd-concepts-library/cortana
sd-concepts-library/covid-19-rapid-test
sd-concepts-library/cow-uwu
sd-concepts-library/cowboy
sd-concepts-library/crazy-1
sd-concepts-library/crazy-2
sd-concepts-library/crb-portraits
sd-concepts-library/crb-surrealz
sd-concepts-library/crbart
sd-concepts-library/crested-gecko
sd-concepts-library/crinos-form-garou
sd-concepts-library/cry-baby-style
sd-concepts-library/crybaby-style-2-0
sd-concepts-library/csgo-awp-object
sd-concepts-library/csgo-awp-texture-map
sd-concepts-library/cubex
sd-concepts-library/cumbia-peruana
sd-concepts-library/cute-bear
sd-concepts-library/cute-cat
sd-concepts-library/cute-game-style
sd-concepts-library/cyberpunk-lucy
sd-concepts-library/dabotap
sd-concepts-library/dan-mumford
sd-concepts-library/dan-seagrave-art-style
sd-concepts-library/dark-penguin-pinguinanimations
sd-concepts-library/darkpenguinanimatronic
sd-concepts-library/darkplane
sd-concepts-library/david-firth-artstyle
sd-concepts-library/david-martinez-cyberpunk
sd-concepts-library/david-martinez-edgerunners
sd-concepts-library/david-moreno-architecture
sd-concepts-library/daycare-attendant-sun-fnaf
sd-concepts-library/ddattender
sd-concepts-library/degods
sd-concepts-library/degodsheavy
sd-concepts-library/depthmap
sd-concepts-library/depthmap-style
sd-concepts-library/design
sd-concepts-library/detectivedinosaur1
sd-concepts-library/diaosu-toy
sd-concepts-library/dicoo
sd-concepts-library/dicoo2
sd-concepts-library/dishonored-portrait-styles
sd-concepts-library/disquieting-muses
sd-concepts-library/ditko
sd-concepts-library/dlooak
sd-concepts-library/doc
sd-concepts-library/doener-red-line-art
sd-concepts-library/dog
sd-concepts-library/dog-django
sd-concepts-library/doge-pound
sd-concepts-library/dong-ho
sd-concepts-library/dong-ho2
sd-concepts-library/doose-s-realistic-art-style
sd-concepts-library/dq10-anrushia
sd-concepts-library/dr-livesey
sd-concepts-library/dr-strange
sd-concepts-library/dragonborn
sd-concepts-library/dreamcore
sd-concepts-library/dreamy-painting
sd-concepts-library/drive-scorpion-jacket
sd-concepts-library/dsmuses
sd-concepts-library/dtv-pkmn
sd-concepts-library/dullboy-caricature
sd-concepts-library/duranduran
sd-concepts-library/durer-style
sd-concepts-library/dyoudim-style
sd-concepts-library/early-mishima-kurone
sd-concepts-library/eastward
sd-concepts-library/eddie
sd-concepts-library/edgerunners-style
sd-concepts-library/edgerunners-style-v2
sd-concepts-library/el-salvador-style-style
sd-concepts-library/elegant-flower
sd-concepts-library/elspeth-tirel
sd-concepts-library/eru-chitanda-casual
sd-concepts-library/erwin-olaf-style
sd-concepts-library/ettblackteapot
sd-concepts-library/explosions-cat
sd-concepts-library/eye-of-agamotto
sd-concepts-library/f-22
sd-concepts-library/facadeplace
sd-concepts-library/fairy-tale-painting-style
sd-concepts-library/fairytale
sd-concepts-library/fang-yuan-001
sd-concepts-library/faraon-love-shady
sd-concepts-library/fasina
sd-concepts-library/felps
sd-concepts-library/female-kpop-singer
sd-concepts-library/fergal-cat
sd-concepts-library/filename-2
sd-concepts-library/fileteado-porteno
sd-concepts-library/final-fantasy-logo
sd-concepts-library/fireworks-over-water
sd-concepts-library/fish
sd-concepts-library/flag-ussr
sd-concepts-library/flatic
sd-concepts-library/floral
sd-concepts-library/fluid-acrylic-jellyfish-creatures-style-of-carl-ingram-art
sd-concepts-library/fnf-boyfriend
sd-concepts-library/fold-structure
sd-concepts-library/fox-purple
sd-concepts-library/fractal
sd-concepts-library/fractal-flame
sd-concepts-library/fractal-temple-style
sd-concepts-library/frank-frazetta
sd-concepts-library/franz-unterberger
sd-concepts-library/freddy-fazbear
sd-concepts-library/freefonix-style
sd-concepts-library/furrpopasthetic
sd-concepts-library/fursona
sd-concepts-library/fzk
sd-concepts-library/galaxy-explorer
sd-concepts-library/ganyu-genshin-impact
sd-concepts-library/garcon-the-cat
sd-concepts-library/garfield-pizza-plush
sd-concepts-library/garfield-pizza-plush-v2
sd-concepts-library/gba-fe-class-cards
sd-concepts-library/gba-pokemon-sprites
sd-concepts-library/geggin
sd-concepts-library/ggplot2
sd-concepts-library/ghost-style
sd-concepts-library/ghostproject-men
sd-concepts-library/gibasachan-v0
sd-concepts-library/gim
sd-concepts-library/gio
sd-concepts-library/giygas
sd-concepts-library/glass-pipe
sd-concepts-library/glass-prism-cube
sd-concepts-library/glow-forest
sd-concepts-library/goku
sd-concepts-library/gram-tops
sd-concepts-library/green-blue-shanshui
sd-concepts-library/green-tent
sd-concepts-library/grifter
sd-concepts-library/grisstyle
sd-concepts-library/grit-toy
sd-concepts-library/gt-color-paint-2
sd-concepts-library/gta5-artwork
sd-concepts-library/guttestreker
sd-concepts-library/gymnastics-leotard-v2
sd-concepts-library/half-life-2-dog
sd-concepts-library/handstand
sd-concepts-library/hanfu-anime-style
sd-concepts-library/happy-chaos
sd-concepts-library/happy-person12345
sd-concepts-library/happy-person12345-assets
sd-concepts-library/harley-quinn
sd-concepts-library/harmless-ai-1
sd-concepts-library/harmless-ai-house-style-1
sd-concepts-library/hd-emoji
sd-concepts-library/heather
sd-concepts-library/henjo-techno-show
sd-concepts-library/herge-style
sd-concepts-library/hiten-style-nao
sd-concepts-library/hitokomoru-style-nao
sd-concepts-library/hiyuki-chan
sd-concepts-library/hk-bamboo
sd-concepts-library/hk-betweenislands
sd-concepts-library/hk-bicycle
sd-concepts-library/hk-blackandwhite
sd-concepts-library/hk-breakfast
sd-concepts-library/hk-buses
sd-concepts-library/hk-clouds
sd-concepts-library/hk-goldbuddha
sd-concepts-library/hk-goldenlantern
sd-concepts-library/hk-hkisland
sd-concepts-library/hk-leaves
sd-concepts-library/hk-market
sd-concepts-library/hk-oldcamera
sd-concepts-library/hk-opencamera
sd-concepts-library/hk-peach
sd-concepts-library/hk-phonevax
sd-concepts-library/hk-streetpeople
sd-concepts-library/hk-vintage
sd-concepts-library/hoi4
sd-concepts-library/hoi4-leaders
sd-concepts-library/homestuck-sprite
sd-concepts-library/homestuck-troll
sd-concepts-library/hours-sentry-fade
sd-concepts-library/hours-style
sd-concepts-library/hrgiger-drmacabre
sd-concepts-library/huang-guang-jian
sd-concepts-library/huatli
sd-concepts-library/huayecai820-greyscale
sd-concepts-library/hub-city
sd-concepts-library/hubris-oshri
sd-concepts-library/huckleberry
sd-concepts-library/hydrasuit
sd-concepts-library/i-love-chaos
sd-concepts-library/ibere-thenorio
sd-concepts-library/ic0n
sd-concepts-library/ie-gravestone
sd-concepts-library/ikea-fabler
sd-concepts-library/illustration-style
sd-concepts-library/ilo-kunst
sd-concepts-library/ilya-shkipin
sd-concepts-library/im-poppy
sd-concepts-library/ina-art
sd-concepts-library/indian-watercolor-portraits
sd-concepts-library/indiana
sd-concepts-library/ingmar-bergman
sd-concepts-library/insidewhale
sd-concepts-library/interchanges
sd-concepts-library/inuyama-muneto-style-nao
sd-concepts-library/irasutoya
sd-concepts-library/iridescent-illustration-style
sd-concepts-library/iridescent-photo-style
sd-concepts-library/isabell-schulte-pv-pvii-3000steps
sd-concepts-library/isabell-schulte-pviii-1-image-style
sd-concepts-library/isabell-schulte-pviii-1024px-1500-steps-style
sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style
sd-concepts-library/isabell-schulte-pviii-4-tiles-1-lr-3000-steps-style
sd-concepts-library/isabell-schulte-pviii-4-tiles-3-lr-5000-steps-style
sd-concepts-library/isabell-schulte-pviii-4tiles-500steps
sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps
sd-concepts-library/isabell-schulte-pviii-style
sd-concepts-library/isometric-tile-test
sd-concepts-library/jacqueline-the-unicorn
sd-concepts-library/james-web-space-telescope
sd-concepts-library/jamie-hewlett-style
sd-concepts-library/jamiels
sd-concepts-library/jang-sung-rak-style
sd-concepts-library/jetsetdreamcastcovers
sd-concepts-library/jin-kisaragi
sd-concepts-library/jinjoon-lee-they
sd-concepts-library/jm-bergling-monogram
sd-concepts-library/joe-mad
sd-concepts-library/joe-whiteford-art-style
sd-concepts-library/joemad
sd-concepts-library/john-blanche
sd-concepts-library/johnny-silverhand
sd-concepts-library/jojo-bizzare-adventure-manga-lineart
sd-concepts-library/jos-de-kat
sd-concepts-library/junji-ito-artstyle
sd-concepts-library/kaleido
sd-concepts-library/kaneoya-sachiko
sd-concepts-library/kanovt
sd-concepts-library/kanv1
sd-concepts-library/karan-gloomy
sd-concepts-library/karl-s-lzx-1
sd-concepts-library/kasumin
sd-concepts-library/kawaii-colors
sd-concepts-library/kawaii-girl-plus-object
sd-concepts-library/kawaii-girl-plus-style
sd-concepts-library/kawaii-girl-plus-style-v1-1
sd-concepts-library/kay
sd-concepts-library/kaya-ghost-assasin
sd-concepts-library/ki
sd-concepts-library/kinda-sus
sd-concepts-library/kings-quest-agd
sd-concepts-library/kiora
sd-concepts-library/kira-sensei
sd-concepts-library/kirby
sd-concepts-library/klance
sd-concepts-library/kodakvision500t
sd-concepts-library/kogatan-shiny
sd-concepts-library/kogecha
sd-concepts-library/kojima-ayami
sd-concepts-library/koko-dog
sd-concepts-library/kuvshinov
sd-concepts-library/kysa-v-style
sd-concepts-library/laala-character
sd-concepts-library/larrette
sd-concepts-library/lavko
sd-concepts-library/lazytown-stephanie
sd-concepts-library/ldr
sd-concepts-library/ldrs
sd-concepts-library/led-toy
sd-concepts-library/lego-astronaut
sd-concepts-library/leica
sd-concepts-library/leif-jones
sd-concepts-library/lex
sd-concepts-library/liliana
sd-concepts-library/liliana-vess
sd-concepts-library/liminal-spaces-2-0
sd-concepts-library/liminalspaces
sd-concepts-library/line-art
sd-concepts-library/line-style
sd-concepts-library/linnopoke
sd-concepts-library/liquid-light
sd-concepts-library/liqwid-aquafarmer
sd-concepts-library/lizardman
sd-concepts-library/loab-character
sd-concepts-library/loab-style
sd-concepts-library/lofa
sd-concepts-library/logo-with-face-on-shield
sd-concepts-library/lolo
sd-concepts-library/looney-anime
sd-concepts-library/lost-rapper
sd-concepts-library/lphr-style
sd-concepts-library/lucario
sd-concepts-library/lucky-luke
sd-concepts-library/lugal-ki-en
sd-concepts-library/luinv2
sd-concepts-library/lula-13
sd-concepts-library/lumio
sd-concepts-library/lxj-o4
sd-concepts-library/m-geo
sd-concepts-library/m-geoo
sd-concepts-library/madhubani-art
sd-concepts-library/mafalda-character
sd-concepts-library/magic-pengel
sd-concepts-library/malika-favre-art-style
sd-concepts-library/manga-style
sd-concepts-library/marbling-art
sd-concepts-library/margo
sd-concepts-library/marty
sd-concepts-library/marty6
sd-concepts-library/mass
sd-concepts-library/masyanya
sd-concepts-library/masyunya
sd-concepts-library/mate
sd-concepts-library/matthew-stone
sd-concepts-library/mattvidpro
sd-concepts-library/maurice-quentin-de-la-tour-style
sd-concepts-library/maus
sd-concepts-library/max-foley
sd-concepts-library/mayor-richard-irvin
sd-concepts-library/mechasoulall
sd-concepts-library/medazzaland
sd-concepts-library/memnarch-mtg
sd-concepts-library/metagabe
sd-concepts-library/meyoco
sd-concepts-library/meze-audio-elite-headphones
sd-concepts-library/midjourney-style
sd-concepts-library/mikako-method
sd-concepts-library/mikako-methodi2i
sd-concepts-library/miko-3-robot
sd-concepts-library/milady
sd-concepts-library/mildemelwe-style
sd-concepts-library/million-live-akane-15k
sd-concepts-library/million-live-akane-3k
sd-concepts-library/million-live-akane-shifuku-3k
sd-concepts-library/million-live-spade-q-object-3k
sd-concepts-library/million-live-spade-q-style-3k
sd-concepts-library/minecraft-concept-art
sd-concepts-library/mishima-kurone
sd-concepts-library/mizkif
sd-concepts-library/moeb-style
sd-concepts-library/moebius
sd-concepts-library/mokoko
sd-concepts-library/mokoko-seed
sd-concepts-library/monster-girl
sd-concepts-library/monster-toy
sd-concepts-library/monte-novo
sd-concepts-library/moo-moo
sd-concepts-library/morino-hon-style
sd-concepts-library/moxxi
sd-concepts-library/msg
sd-concepts-library/mtg-card
sd-concepts-library/mtl-longsky
sd-concepts-library/mu-sadr
sd-concepts-library/munch-leaks-style
sd-concepts-library/museum-by-coop-himmelblau
sd-concepts-library/muxoyara
sd-concepts-library/my-hero-academia-style
sd-concepts-library/my-mug
sd-concepts-library/mycat
sd-concepts-library/mystical-nature
sd-concepts-library/naf
sd-concepts-library/nahiri
sd-concepts-library/namine-ritsu
sd-concepts-library/naoki-saito
sd-concepts-library/nard-style
sd-concepts-library/naruto
sd-concepts-library/natasha-johnston
sd-concepts-library/nathan-wyatt
sd-concepts-library/naval-portrait
sd-concepts-library/nazuna
sd-concepts-library/nebula
sd-concepts-library/ned-flanders
sd-concepts-library/neon-pastel
sd-concepts-library/new-priests
sd-concepts-library/nic-papercuts
sd-concepts-library/nikodim
sd-concepts-library/nissa-revane
sd-concepts-library/nixeu
sd-concepts-library/noggles
sd-concepts-library/nomad
sd-concepts-library/nouns-glasses
sd-concepts-library/obama-based-on-xi
sd-concepts-library/obama-self-2
sd-concepts-library/og-mox-style
sd-concepts-library/ohisashiburi-style
sd-concepts-library/oleg-kuvaev
sd-concepts-library/olli-olli
sd-concepts-library/on-kawara
sd-concepts-library/one-line-drawing
sd-concepts-library/onepunchman
sd-concepts-library/onzpo
sd-concepts-library/orangejacket
sd-concepts-library/ori
sd-concepts-library/ori-toor
sd-concepts-library/orientalist-art
sd-concepts-library/osaka-jyo
sd-concepts-library/osaka-jyo2
sd-concepts-library/osrsmini2
sd-concepts-library/osrstiny
sd-concepts-library/other-mother
sd-concepts-library/ouroboros
sd-concepts-library/outfit-items
sd-concepts-library/overprettified
sd-concepts-library/owl-house
sd-concepts-library/painted-by-silver-of-999
sd-concepts-library/painted-by-silver-of-999-2
sd-concepts-library/painted-student
sd-concepts-library/painting
sd-concepts-library/pantone-milk
sd-concepts-library/paolo-bonolis
sd-concepts-library/party-girl
sd-concepts-library/pascalsibertin
sd-concepts-library/pastelartstyle
sd-concepts-library/paul-noir
sd-concepts-library/pen-ink-portraits-bennorthen
sd-concepts-library/phan
sd-concepts-library/phan-s-collage
sd-concepts-library/phc
sd-concepts-library/phoenix-01
sd-concepts-library/pineda-david
sd-concepts-library/pink-beast-pastelae-style
sd-concepts-library/pintu
sd-concepts-library/pion-by-august-semionov
sd-concepts-library/piotr-jablonski
sd-concepts-library/pixel-mania
sd-concepts-library/pixel-toy
sd-concepts-library/pjablonski-style
sd-concepts-library/plant-style
sd-concepts-library/plen-ki-mun
sd-concepts-library/pokemon-conquest-sprites
sd-concepts-library/pool-test
sd-concepts-library/poolrooms
sd-concepts-library/poring-ragnarok-online
sd-concepts-library/poutine-dish
sd-concepts-library/princess-knight-art
sd-concepts-library/progress-chip
sd-concepts-library/puerquis-toy
sd-concepts-library/purplefishli
sd-concepts-library/pyramidheadcosplay
sd-concepts-library/qpt-atrium
sd-concepts-library/quiesel
sd-concepts-library/r-crumb-style
sd-concepts-library/rahkshi-bionicle
sd-concepts-library/raichu
sd-concepts-library/rail-scene
sd-concepts-library/rail-scene-style
sd-concepts-library/ralph-mcquarrie
sd-concepts-library/ransom
sd-concepts-library/rayne-weynolds
sd-concepts-library/rcrumb-portraits-style
sd-concepts-library/rd-chaos
sd-concepts-library/rd-paintings
sd-concepts-library/red-glasses
sd-concepts-library/reeducation-camp
sd-concepts-library/reksio-dog
sd-concepts-library/rektguy
sd-concepts-library/remert
sd-concepts-library/renalla
sd-concepts-library/repeat
sd-concepts-library/retro-girl
sd-concepts-library/retro-mecha-rangers
sd-concepts-library/retropixelart-pinguin
sd-concepts-library/rex-deno
sd-concepts-library/rhizomuse-machine-bionic-sculpture
sd-concepts-library/ricar
sd-concepts-library/rickyart
sd-concepts-library/rico-face
sd-concepts-library/riker-doll
sd-concepts-library/rikiart
sd-concepts-library/rikiboy-art
sd-concepts-library/rilakkuma
sd-concepts-library/rishusei-style
sd-concepts-library/rj-palmer
sd-concepts-library/rl-pkmn-test
sd-concepts-library/road-to-ruin
sd-concepts-library/robertnava
sd-concepts-library/roblox-avatar
sd-concepts-library/roy-lichtenstein
sd-concepts-library/ruan-jia
sd-concepts-library/russian
sd-concepts-library/s1m-naoto-ohshima
sd-concepts-library/saheeli-rai
sd-concepts-library/sakimi-style
sd-concepts-library/salmonid
sd-concepts-library/sam-yang
sd-concepts-library/sanguo-guanyu
sd-concepts-library/sas-style
sd-concepts-library/scarlet-witch
sd-concepts-library/schloss-mosigkau
sd-concepts-library/scrap-style
sd-concepts-library/scratch-project
sd-concepts-library/sculptural-style
sd-concepts-library/sd-concepts-library-uma-meme
sd-concepts-library/seamless-ground
sd-concepts-library/selezneva-alisa
sd-concepts-library/sem-mac2n
sd-concepts-library/senneca
sd-concepts-library/seraphimmoonshadow-art
sd-concepts-library/sewerslvt
sd-concepts-library/she-hulk-law-art
sd-concepts-library/she-mask
sd-concepts-library/sherhook-painting
sd-concepts-library/sherhook-painting-v2
sd-concepts-library/shev-linocut
sd-concepts-library/shigure-ui-style
sd-concepts-library/shiny-polyman
sd-concepts-library/shrunken-head
sd-concepts-library/shu-doll
sd-concepts-library/shvoren-style
sd-concepts-library/sims-2-portrait
sd-concepts-library/singsing
sd-concepts-library/singsing-doll
sd-concepts-library/sintez-ico
sd-concepts-library/skyfalls
sd-concepts-library/slm
sd-concepts-library/smarties
sd-concepts-library/smiling-friend-style
sd-concepts-library/smooth-pencils
sd-concepts-library/smurf-style
sd-concepts-library/smw-map
sd-concepts-library/society-finch
sd-concepts-library/sorami-style
sd-concepts-library/spider-gwen
sd-concepts-library/spritual-monsters
sd-concepts-library/stable-diffusion-conceptualizer
sd-concepts-library/star-tours-posters
sd-concepts-library/stardew-valley-pixel-art
sd-concepts-library/starhavenmachinegods
sd-concepts-library/sterling-archer
sd-concepts-library/stretch-re1-robot
sd-concepts-library/stuffed-penguin-toy
sd-concepts-library/style-of-marc-allante
sd-concepts-library/summie-style
sd-concepts-library/sunfish
sd-concepts-library/super-nintendo-cartridge
sd-concepts-library/supitcha-mask
sd-concepts-library/sushi-pixel
sd-concepts-library/swamp-choe-2
sd-concepts-library/t-skrang
sd-concepts-library/takuji-kawano
sd-concepts-library/tamiyo
sd-concepts-library/tangles
sd-concepts-library/tb303
sd-concepts-library/tcirle
sd-concepts-library/teelip-ir-landscape
sd-concepts-library/teferi
sd-concepts-library/tela-lenca
sd-concepts-library/tela-lenca2
sd-concepts-library/terraria-style
sd-concepts-library/tesla-bot
sd-concepts-library/test
sd-concepts-library/test-epson
sd-concepts-library/test2
sd-concepts-library/testing
sd-concepts-library/thalasin
sd-concepts-library/thegeneral
sd-concepts-library/thorneworks
sd-concepts-library/threestooges
sd-concepts-library/thunderdome-cover
sd-concepts-library/thunderdome-covers
sd-concepts-library/ti-junglepunk-v0
sd-concepts-library/tili-concept
sd-concepts-library/titan-robot
sd-concepts-library/tnj
sd-concepts-library/toho-pixel
sd-concepts-library/tomcat
sd-concepts-library/tonal1
sd-concepts-library/tony-diterlizzi-s-planescape-art
sd-concepts-library/towerplace
sd-concepts-library/toy
sd-concepts-library/toy-bonnie-plush
sd-concepts-library/toyota-sera
sd-concepts-library/transmutation-circles
sd-concepts-library/trash-polka-artstyle
sd-concepts-library/travis-bedel
sd-concepts-library/trigger-studio
sd-concepts-library/trust-support
sd-concepts-library/trypophobia
sd-concepts-library/ttte
sd-concepts-library/tubby
sd-concepts-library/tubby-cats
sd-concepts-library/tudisco
sd-concepts-library/turtlepics
sd-concepts-library/type
sd-concepts-library/ugly-sonic
sd-concepts-library/uliana-kudinova
sd-concepts-library/uma
sd-concepts-library/uma-clean-object
sd-concepts-library/uma-meme
sd-concepts-library/uma-meme-style
sd-concepts-library/uma-style-classic
sd-concepts-library/unfinished-building
sd-concepts-library/urivoldemort
sd-concepts-library/uzumaki
sd-concepts-library/valorantstyle
sd-concepts-library/vb-mox
sd-concepts-library/vcr-classique
sd-concepts-library/venice
sd-concepts-library/vespertine
sd-concepts-library/victor-narm
sd-concepts-library/vietstoneking
sd-concepts-library/vivien-reid
sd-concepts-library/vkuoo1
sd-concepts-library/vraska
sd-concepts-library/w3u
sd-concepts-library/walter-wick-photography
sd-concepts-library/warhammer-40k-drawing-style
sd-concepts-library/waterfallshadow
sd-concepts-library/wayne-reynolds-character
sd-concepts-library/wedding
sd-concepts-library/wedding-HandPainted
sd-concepts-library/werebloops
sd-concepts-library/wheatland
sd-concepts-library/wheatland-arknight
sd-concepts-library/wheelchair
sd-concepts-library/wildkat
sd-concepts-library/willy-hd
sd-concepts-library/wire-angels
sd-concepts-library/wish-artist-stile
sd-concepts-library/wlop-style
sd-concepts-library/wojak
sd-concepts-library/wojaks-now
sd-concepts-library/wojaks-now-now-now
sd-concepts-library/xatu
sd-concepts-library/xatu2
sd-concepts-library/xbh
sd-concepts-library/xi
sd-concepts-library/xidiversity
sd-concepts-library/xioboma
sd-concepts-library/xuna
sd-concepts-library/xyz
sd-concepts-library/yb-anime
sd-concepts-library/yerba-mate
sd-concepts-library/yesdelete
sd-concepts-library/yf21
sd-concepts-library/yilanov2
sd-concepts-library/yinit
sd-concepts-library/yoji-shinkawa-style
sd-concepts-library/yolandi-visser
sd-concepts-library/yoshi
sd-concepts-library/youpi2
sd-concepts-library/youtooz-candy
sd-concepts-library/yuji-himukai-style
sd-concepts-library/zaney
sd-concepts-library/zaneypixelz
sd-concepts-library/zdenek-art
sd-concepts-library/zero
sd-concepts-library/zero-bottle
sd-concepts-library/zero-suit-samus
sd-concepts-library/zillertal-can
sd-concepts-library/zizigooloo
sd-concepts-library/zk
sd-concepts-library/zoroark

View File

@ -0,0 +1,110 @@
model:
base_learning_rate: 5.0e-03
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
cond_stage_key: caption
image_size: 64
channels: 4
cond_stage_trainable: true # Note: different from the one we trained before
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False
embedding_reg_weight: 0.0
personalization_config:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ["sculpture"]
per_image_tokens: false
num_vectors_per_token: 1
progressive_words: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
data:
target: main.DataModuleFromConfig
params:
batch_size: 1
num_workers: 2
wrap: false
train:
target: ldm.data.personalized.PersonalizedBase
params:
size: 512
set: train
per_image_tokens: false
repeats: 100
validation:
target: ldm.data.personalized.PersonalizedBase
params:
size: 512
set: val
per_image_tokens: false
repeats: 10
lightning:
modelcheckpoint:
params:
every_n_train_steps: 500
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 500
max_images: 8
increase_log_steps: False
trainer:
benchmark: True
max_steps: 4000000
# max_steps: 4000
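
These configs follow the ldm convention of `target`/`params` blocks, where `target` is a dotted import path and `params` holds the constructor keyword arguments. A sketch of how such a block is typically resolved, modeled on the common `instantiate_from_config` helper (treat the details as assumptions rather than this repository's exact code):
```python
import importlib

from omegaconf import OmegaConf  # assumption: omegaconf is installed


def instantiate_from_config(config):
    # Split "ldm.models.diffusion.ddpm.LatentDiffusion" into module and class.
    module_path, cls_name = config["target"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), cls_name)
    # Pass the params block (if any) as keyword arguments to the constructor.
    return cls(**config.get("params", {}))


cfg = OmegaConf.load("configs/stable-diffusion/v1-finetune.yaml")  # hypothetical path
model = instantiate_from_config(cfg.model)
```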

View File

@ -0,0 +1,103 @@
model:
base_learning_rate: 5.0e-03
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
cond_stage_key: caption
image_size: 64
channels: 4
cond_stage_trainable: true # Note: different from the one we trained before
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False
embedding_reg_weight: 0.0
personalization_config:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ["painting"]
per_image_tokens: false
num_vectors_per_token: 1
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
data:
target: main.DataModuleFromConfig
params:
batch_size: 2
num_workers: 16
wrap: false
train:
target: ldm.data.personalized_style.PersonalizedBase
params:
size: 512
set: train
per_image_tokens: false
repeats: 100
validation:
target: ldm.data.personalized_style.PersonalizedBase
params:
size: 512
set: val
per_image_tokens: false
repeats: 10
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 500
max_images: 8
increase_log_steps: False
trainer:
benchmark: True

View File

@ -0,0 +1,79 @@
model:
  base_learning_rate: 1.0e-04
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: "jpg"
    cond_stage_key: "txt"
    image_size: 64
    channels: 4
    cond_stage_trainable: false # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 10000 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]
        initializer_words: ['sculpture']
        per_image_tokens: false
        num_vectors_per_token: 8
        progressive_words: False

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
            - 1
            - 2
            - 4
            - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.WeightedFrozenCLIPEmbedder
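The `scheduler_config` block scales `base_learning_rate` by a per-step factor: a linear ramp from `f_start` to `f_max` over `warm_up_steps`, then a linear drift toward `f_min` across `cycle_lengths`. A standalone re-derivation, assuming the linear warmup/decay behavior of `ldm.lr_scheduler.LambdaLinearScheduler` (check the actual class before relying on exact values); note that with `f_max == f_min == 1.0`, as configured here, the schedule is flat after warmup:

```python
# Re-derivation of the LR multiplier described by scheduler_config above.
def lr_factor(step: int,
              warm_up_steps: int = 10_000,
              cycle_length: int = 10_000_000_000_000,
              f_start: float = 1e-6,
              f_max: float = 1.0,
              f_min: float = 1.0) -> float:
    if step < warm_up_steps:
        # linear ramp from f_start up to f_max
        return f_start + (f_max - f_start) * step / warm_up_steps
    # linear interpolation from f_max down to f_min over the cycle
    return f_min + (f_max - f_min) * (cycle_length - step) / cycle_length

# effective_lr = base_learning_rate * lr_factor(step)
```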


@ -0,0 +1,79 @@
model:
  base_learning_rate: 7.5e-05
  target: ldm.models.diffusion.ddpm.LatentInpaintDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: "jpg"
    cond_stage_key: "txt"
    image_size: 64
    channels: 4
    cond_stage_trainable: false # Note: different from the one we trained before
    conditioning_key: hybrid # important
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    finetune_keys: null

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 2500 ] # NOTE for resuming. use 10000 if starting from scratch
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]
        initializer_words: ['sculpture']
        per_image_tokens: false
        num_vectors_per_token: 8
        progressive_words: False

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 9 # 4 data + 4 downscaled image + 1 mask
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
            - 1
            - 2
            - 4
            - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.WeightedFrozenCLIPEmbedder
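The `in_channels: 9` comment marks the key difference from the text-to-image configs: with `conditioning_key: hybrid`, the inpainting UNet sees the noised latents concatenated channel-wise with the latents of the masked source image and the downscaled mask. An illustrative shape check; names and shapes are assumptions based on the comment above, not code lifted from the repo:

```python
import torch

# Illustrative assembly of the 9-channel UNet input for inpainting.
b, h, w = 1, 64, 64
z_noisy = torch.randn(b, 4, h, w)   # 4: noised image latents
z_masked = torch.randn(b, 4, h, w)  # 4: latents of the masked source image
mask = torch.zeros(b, 1, h, w)      # 1: downscaled inpainting mask
unet_in = torch.cat([z_noisy, z_masked, mask], dim=1)
assert unet_in.shape[1] == 9        # matches in_channels: 9
```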


@ -0,0 +1,110 @@
model:
  base_learning_rate: 5.0e-03
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: image
    cond_stage_key: caption
    image_size: 64
    channels: 4
    cond_stage_trainable: true # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False
    embedding_reg_weight: 0.0

    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]
        initializer_words: ['sculpture']
        per_image_tokens: false
        num_vectors_per_token: 6
        progressive_words: False

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
            - 1
            - 2
            - 4
            - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 1
    num_workers: 2
    wrap: false
    train:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: train
        per_image_tokens: false
        repeats: 100
    validation:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: val
        per_image_tokens: false
        repeats: 10

lightning:
  modelcheckpoint:
    params:
      every_n_train_steps: 500
  callbacks:
    image_logger:
      target: main.ImageLogger
      params:
        batch_frequency: 500
        max_images: 5
        increase_log_steps: False

  trainer:
    benchmark: False
    max_steps: 6200
    # max_steps: 4000
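The `lightning:` section maps more or less directly onto PyTorch Lightning objects; the repo's `main.py` (referenced by the `main.DataModuleFromConfig` and `main.ImageLogger` targets) does the actual wiring. A hedged sketch of that mapping, not the wiring code itself:

```python
# Sketch of how the lightning: block above maps onto PyTorch Lightning.
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(every_n_train_steps=500)  # lightning.modelcheckpoint.params
trainer = pl.Trainer(
    benchmark=False,            # lightning.trainer.benchmark
    max_steps=6200,             # lightning.trainer.max_steps
    callbacks=[checkpoint_cb],  # plus the ImageLogger built from callbacks:
)
# trainer.fit(model, datamodule)  # both built via instantiate_from_config
```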

coverage/.gitignore vendored

@ -1,4 +0,0 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore

docker-build/Dockerfile Normal file

@ -0,0 +1,34 @@
FROM ubuntu:22.10

# use bash
SHELL [ "/bin/bash", "-c" ]

# Install necessary packages
RUN apt-get update \
  && apt-get install -y \
    --no-install-recommends \
    build-essential \
    gcc \
    git \
    libgl1-mesa-glx \
    libglib2.0-0 \
    pip \
    python3 \
    python3-dev \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/*

# set workdir and copy sources
WORKDIR /invokeai
ARG PIP_REQUIREMENTS=requirements-lin-cuda.txt
COPY . ./environments-and-requirements/${PIP_REQUIREMENTS} ./

# install requirements and link outputs folder
RUN pip install \
  --no-cache-dir \
  -r ${PIP_REQUIREMENTS}

# set Environment, Entrypoint and default CMD
ENV INVOKEAI_ROOT /data
ENTRYPOINT [ "python3", "scripts/invoke.py", "--outdir=/data/outputs" ]
CMD [ "--web", "--host=0.0.0.0" ]

docker-build/build.sh Executable file

@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -e

# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoints!!!
# configure values by using env when executing build.sh, e.g. `env ARCH=aarch64 ./build.sh`

source ./docker-build/env.sh \
  || { echo "please execute docker-build/build.sh from repository root"; exit 1; }

pip_requirements=${PIP_REQUIREMENTS:-requirements-lin-cuda.txt}
dockerfile=${INVOKE_DOCKERFILE:-docker-build/Dockerfile}

# print the settings
echo "You are using these values:"
echo -e "Dockerfile:\t\t ${dockerfile}"
echo -e "requirements:\t\t ${pip_requirements}"
echo -e "volumename:\t\t ${volumename}"
echo -e "arch:\t\t\t ${arch}"
echo -e "platform:\t\t ${platform}"
echo -e "invokeai_tag:\t\t ${invokeai_tag}\n"

if [[ -n "$(docker volume ls -f name="${volumename}" -q)" ]]; then
  echo "Volume already exists"
  echo
else
  echo -n "creating docker volume "
  docker volume create "${volumename}"
fi

# Build Container
docker build \
  --platform="${platform}" \
  --tag="${invokeai_tag}" \
  --build-arg="PIP_REQUIREMENTS=${pip_requirements}" \
  --file="${dockerfile}" \
  .

docker run \
  --rm \
  --platform="$platform" \
  --name="$project_name" \
  --hostname="$project_name" \
  --mount="source=$volumename,target=/data" \
  --mount="type=bind,source=$HOME/.huggingface,target=/root/.huggingface" \
  --env="HUGGINGFACE_TOKEN=${HUGGINGFACE_TOKEN}" \
  --entrypoint="python3" \
  "${invokeai_tag}" \
  scripts/configure_invokeai.py --yes

docker-build/env.sh Normal file

@ -0,0 +1,13 @@
#!/usr/bin/env bash
project_name=${PROJECT_NAME:-invokeai}
volumename=${VOLUMENAME:-${project_name}_data}
arch=${ARCH:-x86_64}
platform=${PLATFORM:-Linux/${arch}}
invokeai_tag=${INVOKEAI_TAG:-${project_name}:${arch}}
export project_name
export volumename
export arch
export platform
export invokeai_tag

docker-build/run.sh Executable file

@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -e

source ./docker-build/env.sh \
  || { echo "please run from repository root"; exit 1; }

docker run \
  --interactive \
  --tty \
  --rm \
  --platform="$platform" \
  --name="$project_name" \
  --hostname="$project_name" \
  --mount="source=$volumename,target=/data" \
  --publish=9090:9090 \
  "$invokeai_tag" "$@"


@ -1,15 +0,0 @@
## Make a copy of this file named `.env` and fill in the values below.
## Any environment variables supported by InvokeAI can be specified here,
## in addition to the examples below.

# INVOKEAI_ROOT is the path to a location on the local filesystem where InvokeAI will store data.
# Outputs will also be stored here by default.
# This **must** be an absolute path.
INVOKEAI_ROOT=

# Get this value from your HuggingFace account settings page.
# HUGGING_FACE_HUB_TOKEN=

## optional variables specific to the docker setup.
# GPU_DRIVER=cuda # or rocm
# CONTAINER_UID=1000


@ -1,124 +0,0 @@
# syntax=docker/dockerfile:1.4

## Builder stage

FROM library/ubuntu:23.04 AS builder
ARG DEBIAN_FRONTEND=noninteractive
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt update && apt-get install -y \
        git \
        python3-venv \
        python3-pip \
        build-essential

ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ARG TORCH_VERSION=2.1.0
ARG TORCHVISION_VERSION=0.16
ARG GPU_DRIVER=cuda
ARG TARGETPLATFORM="linux/amd64"
# unused but available
ARG BUILDPLATFORM

WORKDIR ${INVOKEAI_SRC}

# Install pytorch before all other pip packages
# NOTE: there are no pytorch builds for arm64 + cuda, only cpu
# x86_64/CUDA is default
RUN --mount=type=cache,target=/root/.cache/pip \
    python3 -m venv ${VIRTUAL_ENV} &&\
    if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
        extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu"; \
    elif [ "$GPU_DRIVER" = "rocm" ]; then \
        extra_index_url_arg="--index-url https://download.pytorch.org/whl/rocm5.6"; \
    else \
        extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121"; \
    fi &&\
    pip install $extra_index_url_arg \
        torch==$TORCH_VERSION \
        torchvision==$TORCHVISION_VERSION

# Install the local package.
# Editable mode helps use the same image for development:
# the local working copy can be bind-mounted into the image
# at path defined by ${INVOKEAI_SRC}
COPY invokeai ./invokeai
COPY pyproject.toml ./
RUN --mount=type=cache,target=/root/.cache/pip \
    # xformers + triton fails to install on arm64
    if [ "$GPU_DRIVER" = "cuda" ] && [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
        pip install -e ".[xformers]"; \
    else \
        pip install -e "."; \
    fi

#### Build the Web UI ------------------------------------

FROM node:18 AS web-builder
WORKDIR /build
COPY invokeai/frontend/web/ ./
RUN --mount=type=cache,target=/usr/lib/node_modules \
    npm install --include dev
RUN --mount=type=cache,target=/usr/lib/node_modules \
    yarn vite build

#### Runtime stage ---------------------------------------

FROM library/ubuntu:23.04 AS runtime
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1

RUN apt update && apt install -y --no-install-recommends \
        git \
        curl \
        vim \
        tmux \
        ncdu \
        iotop \
        bzip2 \
        gosu \
        magic-wormhole \
        libglib2.0-0 \
        libgl1-mesa-glx \
        python3-venv \
        python3-pip \
        build-essential \
        libopencv-dev \
        libstdc++-10-dev &&\
    apt-get clean && apt-get autoclean

ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV INVOKEAI_ROOT=/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$INVOKEAI_SRC:$PATH"

# --link requires buildkit w/ dockerfile syntax 1.4
COPY --link --from=builder ${INVOKEAI_SRC} ${INVOKEAI_SRC}
COPY --link --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY --link --from=web-builder /build/dist ${INVOKEAI_SRC}/invokeai/frontend/web/dist

# Link amdgpu.ids for ROCm builds
# contributed by https://github.com/Rubonnek
RUN mkdir -p "/opt/amdgpu/share/libdrm" &&\
    ln -s "/usr/share/libdrm/amdgpu.ids" "/opt/amdgpu/share/libdrm/amdgpu.ids"

WORKDIR ${INVOKEAI_SRC}

# build patchmatch
RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
RUN python3 -c "from patchmatch import patch_match"

RUN mkdir -p ${INVOKEAI_ROOT} && chown -R 1000:1000 ${INVOKEAI_ROOT}

COPY docker/docker-entrypoint.sh ./
ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]
CMD ["invokeai-web", "--host", "0.0.0.0"]


@ -1,78 +0,0 @@
# InvokeAI Containerized

All commands are to be run from the `docker` directory: `cd docker`

#### Linux

1. Ensure buildkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
    - The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
3. Ensure the docker daemon is able to access the GPU.
    - You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)

#### macOS

1. Ensure Docker has at least 16GB RAM
2. Enable VirtioFS for file sharing
3. Enable `docker compose` V2 support

This is done via Docker Desktop preferences.
## Quickstart

1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy .env.sample .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
    a. the desired location of the InvokeAI runtime directory, or
    b. an existing, v3.0.0-compatible runtime directory.
2. `docker compose up`

The image will be built automatically if needed.

The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. It will be populated with the base configs and models necessary to start generating.
### Use a GPU

- Linux is *recommended* for GPU support in Docker.
- WSL2 is *required* for Windows.
- Only the `x86_64` architecture is supported.

The Docker daemon on the system must already be set up to use the GPU. On Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as the default. Steps will be different for AMD. Please see the Docker documentation for the most up-to-date instructions on using your GPU with Docker.
## Customize

Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.

You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.

Example (values are optional, but setting `INVOKEAI_ROOT` is highly recommended):

```bash
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=cuda
```

Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
## Even Moar Customizing!

See the `docker-compose.yml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.

### Reconfigure the runtime directory

Can be used to download additional models from the supported model list.

In conjunction with `INVOKEAI_ROOT`, this can also be used to initialize a runtime directory.

```yaml
command:
  - invokeai-configure
  - --yes
```

Or install models:

```yaml
command:
  - invokeai-model-install
```

Some files were not shown because too many files have changed in this diff.