mirror of
https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00
commit 00839d02ab
Merge branch 'main' into lstein-improve-ti-frontend
77
docs/features/MODEL_MERGING.md
Normal file
@@ -0,0 +1,77 @@
---
title: Model Merging
---

# :material-image-off: Model Merging

## How to Merge Models

As of version 2.3, InvokeAI comes with a script that allows you to
merge two or three diffusers-type models into a new merged model. The
resulting model combines characteristics of the originals and can be
used to teach an old model new tricks.

You may run the merge script by starting the invoke launcher
(`invoke.sh` or `invoke.bat`) and choosing the option for _merge
models_. This will launch a text-based interactive user interface that
prompts you to select the models to merge, how to merge them, and the
name of the merged model.

Alternatively, you may activate InvokeAI's virtual environment from
the command line and call the script via `merge_models_fe.py` (the
"fe" stands for "front end"). There is also a version that accepts
command-line arguments, which you can run with the command
`merge_models.py`.

The user interface for the text-based interactive script is
straightforward. It shows you a series of setting fields. Use
control-N (^N) to move to the next field, and control-P (^P) to move
to the previous one. You can also use TAB and shift-TAB to move
forward and backward. Once you are in a multiple-choice field, use the
up and down cursor arrows to move to your desired selection, and press
SPACE or ENTER to select it. Change text fields by typing in them, and
adjust scrollbars using the left and right arrow keys.

Once you are happy with your settings, press the OK button. Note that
there may be two pages of settings, depending on the height of your
screen, and the OK button may be on the second page. Advance past the
last field of the first page to get to the second page, and reverse
this to get back.

If the merge runs successfully, it will create a new diffusers model
under the selected name and register it with InvokeAI.

## The Settings

* Model Selection -- there are three multiple-choice fields that
  display all the diffusers-style models that InvokeAI knows about.
  If you do not see the model you are looking for, it is probably a
  legacy checkpoint model that needs to be converted using the
  `invoke.py` command-line client and its `!optimize` command. You
  must select at least two models to merge. The third can be left at
  "None" if you desire.

* Alpha -- the ratio to use when combining models. It ranges from 0
  to 1. The higher the value, the more weight is given to the second
  and (optionally) third models. So if you have two models named "A"
  and "B", an alpha value of 0.25 will give you a merged model that is
  75% A and 25% B.
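The Alpha setting amounts to a plain linear interpolation of corresponding model weights. A minimal sketch of the idea in Python, operating on single numbers rather than real weight tensors (`weighted_sum` is an illustrative name, not InvokeAI's actual API):

```python
def weighted_sum(a: float, b: float, alpha: float) -> float:
    """Linearly interpolate two corresponding weights.

    alpha = 0 returns model A's weight unchanged; alpha = 1 returns
    model B's weight; alpha = 0.25 yields a 75% A / 25% B blend.
    """
    return (1 - alpha) * a + alpha * b

# Example: a weight of 1.0 in model A and 5.0 in model B, alpha 0.25
print(weighted_sum(1.0, 5.0, 0.25))  # 2.0  (0.75*1.0 + 0.25*5.0)
```

In the real merge, this interpolation is applied element-wise to every weight tensor shared by the two models.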

* Interpolation Method -- the method used to combine weights. The
  options are "weighted_sum" (the default), "sigmoid", "inv_sigmoid"
  and "add_difference". Each produces slightly different results. When
  three models are in use, only "add_difference" is available. (TODO:
  cite a reference that describes what these interpolation methods
  actually do and how to decide among them.)
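The "add_difference" method is conventionally understood as adding the difference between the second and third models to the first, scaled by alpha -- which is why it is the method offered when three models are selected. A hedged sketch under that assumption (single numbers standing in for weight tensors; the function name is illustrative):

```python
def add_difference(a: float, b: float, c: float, alpha: float) -> float:
    """Graft the (B - C) delta onto A, scaled by alpha.

    Intuition: if C is a base model and B is a fine-tune of C,
    (B - C) isolates what the fine-tune learned, and that delta
    is added to model A.
    """
    return a + alpha * (b - c)

# Example: A=1.0, fine-tune B=3.0, base C=2.0, alpha=0.5
print(add_difference(1.0, 3.0, 2.0, 0.5))  # 1.5
```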

* Force -- not all models are compatible with each other. The merge
  script will check for compatibility and refuse to merge ones that
  are incompatible. Set this checkbox to try merging anyway.

* Name for merged model -- the name for the new model. Please use
  InvokeAI conventions: only alphanumeric characters and the
  characters ".+-".

## Caveats

This is a new script and may contain bugs.
@@ -93,9 +93,15 @@ getting InvokeAI up and running on your system. For alternative installation and
upgrade instructions, please see:
[InvokeAI Installation Overview](installation/)

Linux users who wish to make use of the PyPatchMatch inpainting functions will
need to perform a bit of extra work to enable this module. Instructions can be
found at [Installing PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md).
Users who wish to make use of the **PyPatchMatch** inpainting functions
will need to perform a bit of extra work to enable this
module. Instructions can be found at [Installing
PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md).

If you have an NVIDIA card, you can benefit from the significant
memory savings and performance benefits provided by Facebook Lab's
**xFormers** module. Instructions for Linux and Windows users can be found
at [Installing xFormers](installation/070_INSTALL_XFORMERS.md).

## :fontawesome-solid-computer: Hardware Requirements

@@ -151,6 +157,8 @@ images in full-precision mode:
<!-- separator -->
- [Prompt Engineering](features/PROMPTS.md)
<!-- separator -->
- [Model Merging](features/MODEL_MERGING.md)
<!-- separator -->
- Miscellaneous
  - [NSFW Checker](features/NSFW.md)
  - [Embiggen upscaling](features/EMBIGGEN.md)
206
docs/installation/070_INSTALL_XFORMERS.md
Normal file
@@ -0,0 +1,206 @@
---
title: Installing xFormers
---

# :material-image-size-select-large: Installing xformers

xFormers is a toolbox that integrates with the pyTorch and CUDA
libraries to provide accelerated performance and reduced memory
consumption for applications using the transformers machine learning
architecture. After installing xFormers, InvokeAI users who have
CUDA GPUs will see a noticeable decrease in GPU memory consumption and
an increase in speed.

xFormers can be installed into a working InvokeAI installation without
any code changes or other updates. This document explains how to
install xFormers.

## Pip Install

For both Windows and Linux, you can install `xformers` in just a
couple of steps from the command line.

If you are used to launching `invoke.sh` or `invoke.bat` to start
InvokeAI, then run the launcher and select the "developer's console"
to get to the command line. If you run `invoke.py` directly from the
command line, then just be sure to activate its virtual environment.

Then run the following three commands:

```sh
pip install xformers==0.0.16rc425
pip install triton
python -m xformers.info
```

The first command installs `xformers`, the second installs the
`triton` training accelerator, and the third prints out the `xformers`
installation status. If all goes well, you'll see a report like the
following:

```sh
xFormers 0.0.16rc425
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
memory_efficient_attention.flshattB: available
memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: available
memory_efficient_attention.tritonflashattB: available
swiglu.fused.p.cpp: available
is_triton_available: True
is_functorch_available: False
pytorch.version: 1.13.1+cu117
pytorch.cuda: available
gpu.compute_capability: 8.6
gpu.name: NVIDIA RTX A2000 12GB
build.info: available
build.cuda_version: 1107
build.python_version: 3.10.9
build.torch_version: 1.13.1+cu117
build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0 8.6
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.16rc425
source.privacy: open source
```

## Source Builds

`xformers` is currently under active development and at some point you
may wish to build it from source to get the latest features and
bugfixes.

### Source Build on Linux

Note that xFormers only works with true NVIDIA GPUs and will not work
properly with the ROCm driver for AMD acceleration.

xFormers is not currently available as a pip binary wheel and must be
installed from source. These instructions were written for a system
running Ubuntu 22.04, but other Linux distributions should be able to
adapt this recipe.

#### 1. Install CUDA Toolkit 11.7

You will need the CUDA developer's toolkit in order to compile and
install xFormers. **Do not try to install Ubuntu's nvidia-cuda-toolkit
package.** It is out of date and will cause conflicts among the NVIDIA
driver and binaries. Instead install the CUDA Toolkit package provided
by NVIDIA itself. Go to [CUDA Toolkit 11.7
Downloads](https://developer.nvidia.com/cuda-11-7-0-download-archive)
and use the target selection wizard to choose your platform and Linux
distribution. Select an installer type of "runfile (local)" at the
last step.

This will provide you with a recipe for downloading and running an
install shell script that will install the toolkit and drivers. For
example, the install script recipe for Ubuntu 22.04 running on an
x86_64 system is:

```sh
wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda_11.7.0_515.43.04_linux.run
sudo sh cuda_11.7.0_515.43.04_linux.run
```

Rather than cut-and-paste this example, we recommend that you walk
through the toolkit wizard in order to get the most up-to-date
installer for your system.

#### 2. Confirm/Install pyTorch 1.13 with CUDA 11.7 support

If you are using InvokeAI 2.3 or higher, these will already be
installed. If not, you can check whether you have the needed libraries
using a quick command. Activate the invokeai virtual environment,
either by entering the "developer's console", or manually with a
command similar to `source ~/invokeai/.venv/bin/activate` (depending
on where your `invokeai` directory is).

Then run the command:

```sh
python -c 'exec("import torch\nprint(torch.__version__)")'
```

If it prints __1.13.1+cu117__ you're good. If not, you can install the
most up-to-date libraries with this command:

```sh
pip install --upgrade --force-reinstall torch torchvision
```

#### 3. Install the triton module

This module isn't necessary for xFormers image inference optimization,
but avoids a startup warning.

```sh
pip install triton
```

#### 4. Install source code build prerequisites

To build xFormers from source, you will need the `build-essential`
package. If you don't have it installed already, run:

```sh
sudo apt install build-essential
```

#### 5. Build xFormers

There is no pip wheel package for xFormers at this time (January
2023). Although there is a conda package, InvokeAI no longer
officially supports conda installations and you're on your own if you
wish to try this route.

Following the recipe provided at the [xFormers GitHub
page](https://github.com/facebookresearch/xformers), and with the
InvokeAI virtual environment active (see step 1) run the following
commands:

```sh
pip install ninja
export TORCH_CUDA_ARCH_LIST="6.0;6.1;6.2;7.0;7.2;7.5;8.0;8.6"
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
```

The TORCH_CUDA_ARCH_LIST is a list of GPU architectures to compile
xFormer support for. You can speed up compilation by selecting only
the architecture specific to your system. You'll find the list of
GPUs and their architectures at NVIDIA's [GPU Compute
Capability](https://developer.nvidia.com/cuda-gpus) table.

If the compile and install completes successfully, you can check that
xFormers is installed with this command:

```sh
python -m xformers.info
```

If successful, the top of the listing should indicate "available" for
each of the `memory_efficient_attention` modules, as shown here:

```sh
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
memory_efficient_attention.flshattB: available
memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: available
memory_efficient_attention.tritonflashattB: available
[...]
```

You can now launch InvokeAI and enjoy the benefits of xFormers.

### Windows

To come

---
(c) Copyright 2023 Lincoln Stein and the InvokeAI Development Team
@@ -19,6 +19,8 @@ experience and preferences.
those who prefer the `conda` tool, and one suited to those who prefer
`pip` and Python virtual environments. In our hands the pip install
is faster and more reliable, but your mileage may vary.
Note that the conda installation method is currently deprecated and
will no longer be supported at some point in the future.

This method is recommended for users who have used `conda`
or `pip` in the past, developers, and anyone who wishes to remain on
File diff suppressed because one or more lines are too long
1
frontend/dist/assets/index.0dadf5d0.css
vendored
File diff suppressed because one or more lines are too long
625
frontend/dist/assets/index.1b59e83a.js
vendored
File diff suppressed because one or more lines are too long
1
frontend/dist/assets/index.8badc8b4.css
vendored
Normal file
File diff suppressed because one or more lines are too long
625
frontend/dist/assets/index.dd470915.js
vendored
Normal file
File diff suppressed because one or more lines are too long
6
frontend/dist/index.html
vendored
@@ -7,8 +7,8 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="./assets/favicon.0d253ced.ico" />
<script type="module" crossorigin src="./assets/index.1b59e83a.js"></script>
<link rel="stylesheet" href="./assets/index.0dadf5d0.css">
<script type="module" crossorigin src="./assets/index.dd470915.js"></script>
<link rel="stylesheet" href="./assets/index.8badc8b4.css">
<script type="module">try{import.meta.url;import("_").catch(()=>1);}catch(e){}window.__vite_is_modern_browser=true;</script>
<script type="module">!function(){if(window.__vite_is_modern_browser)return;console.warn("vite: loading legacy build because dynamic import or import.meta.url is unsupported, syntax error above should be ignored");var e=document.getElementById("vite-legacy-polyfill"),n=document.createElement("script");n.src=e.src,n.onload=function(){System.import(document.getElementById('vite-legacy-entry').getAttribute('data-src'))},document.body.appendChild(n)}();</script>
</head>
@@ -18,6 +18,6 @@

<script nomodule>!function(){var e=document,t=e.createElement("script");if(!("noModule"in t)&&"onbeforeload"in t){var n=!1;e.addEventListener("beforeload",(function(e){if(e.target===t)n=!0;else if(!e.target.hasAttribute("nomodule")||!n)return;e.preventDefault()}),!0),t.type="module",t.src=".",e.head.appendChild(t),t.remove()}}();</script>
<script nomodule crossorigin id="vite-legacy-polyfill" src="./assets/polyfills-legacy-dde3a68a.js"></script>
<script nomodule crossorigin id="vite-legacy-entry" data-src="./assets/index-legacy-474a75fe.js">System.import(document.getElementById('vite-legacy-entry').getAttribute('data-src'))</script>
<script nomodule crossorigin id="vite-legacy-entry" data-src="./assets/index-legacy-6edbec57.js">System.import(document.getElementById('vite-legacy-entry').getAttribute('data-src'))</script>
</body>
</html>
4
frontend/dist/locales/common/en-US.json
vendored
@@ -17,6 +17,9 @@
"langPortuguese": "Portuguese",
"langFrench": "French",
"langPolish": "Polish",
"langSimplifiedChinese": "Simplified Chinese",
"langSpanish": "Spanish",
"langJapanese": "Japanese",
"text2img": "Text To Image",
"img2img": "Image To Image",
"unifiedCanvas": "Unified Canvas",
@@ -32,6 +35,7 @@
"upload": "Upload",
"close": "Close",
"load": "Load",
"back": "Back",
"statusConnected": "Connected",
"statusDisconnected": "Disconnected",
"statusError": "Error",
2
frontend/dist/locales/common/en.json
vendored
@@ -19,6 +19,7 @@
"langPolish": "Polish",
"langSimplifiedChinese": "Simplified Chinese",
"langSpanish": "Spanish",
"langJapanese": "Japanese",
"text2img": "Text To Image",
"img2img": "Image To Image",
"unifiedCanvas": "Unified Canvas",
@@ -34,6 +35,7 @@
"upload": "Upload",
"close": "Close",
"load": "Load",
"back": "Back",
"statusConnected": "Connected",
"statusDisconnected": "Disconnected",
"statusError": "Error",
60
frontend/dist/locales/common/ja.json
vendored
Normal file
@@ -0,0 +1,60 @@
{
"hotkeysLabel": "Hotkeys",
"themeLabel": "テーマ",
"languagePickerLabel": "言語選択",
"reportBugLabel": "バグ報告",
"githubLabel": "Github",
"discordLabel": "Discord",
"settingsLabel": "設定",
"darkTheme": "ダーク",
"lightTheme": "ライト",
"greenTheme": "緑",
"langEnglish": "English",
"langRussian": "Russian",
"langItalian": "Italian",
"langBrPortuguese": "Portuguese (Brazilian)",
"langGerman": "German",
"langPortuguese": "Portuguese",
"langFrench": "French",
"langPolish": "Polish",
"langSimplifiedChinese": "Simplified Chinese",
"langSpanish": "Spanish",
"text2img": "Text To Image",
"img2img": "Image To Image",
"unifiedCanvas": "Unified Canvas",
"nodes": "Nodes",
"nodesDesc": "現在、画像生成のためのノードベースシステムを開発中です。機能についてのアップデートにご期待ください。",
"postProcessing": "後処理",
"postProcessDesc1": "Invoke AIは、多彩な後処理の機能を備えています。アップスケーリングと顔修復は、すでにWebUI上で利用可能です。これらは、[Text To Image]および[Image To Image]タブの[詳細オプション]メニューからアクセスできます。また、現在の画像表示の上やビューア内の画像アクションボタンを使って、画像を直接処理することもできます。",
"postProcessDesc2": "より高度な後処理の機能を実現するための専用UIを近日中にリリース予定です。",
"postProcessDesc3": "Invoke AI CLIでは、この他にもEmbiggenをはじめとする様々な機能を利用することができます。",
"training": "追加学習",
"trainingDesc1": "Textual InversionとDreamboothを使って、WebUIから独自のEmbeddingとチェックポイントを追加学習するための専用ワークフローです。",
"trainingDesc2": "InvokeAIは、すでにメインスクリプトを使ったTextual Inversionによるカスタム埋め込み追加学習にも対応しています。",
"upload": "アップロード",
"close": "閉じる",
"load": "ロード",
"back": "戻る",
"statusConnected": "接続済",
"statusDisconnected": "切断済",
"statusError": "エラー",
"statusPreparing": "準備中",
"statusProcessingCanceled": "処理をキャンセル",
"statusProcessingComplete": "処理完了",
"statusGenerating": "生成中",
"statusGeneratingTextToImage": "Text To Imageで生成中",
"statusGeneratingImageToImage": "Image To Imageで生成中",
"statusGeneratingInpainting": "Generating Inpainting",
"statusGeneratingOutpainting": "Generating Outpainting",
"statusGenerationComplete": "生成完了",
"statusIterationComplete": "Iteration Complete",
"statusSavingImage": "画像を保存",
"statusRestoringFaces": "顔の修復",
"statusRestoringFacesGFPGAN": "顔の修復 (GFPGAN)",
"statusRestoringFacesCodeFormer": "顔の修復 (CodeFormer)",
"statusUpscaling": "アップスケーリング",
"statusUpscalingESRGAN": "アップスケーリング (ESRGAN)",
"statusLoadingModel": "モデルを読み込む",
"statusModelChanged": "モデルを変更"
}
17
frontend/dist/locales/gallery/ja.json
vendored
Normal file
@@ -0,0 +1,17 @@
{
"generations": "Generations",
"showGenerations": "Show Generations",
"uploads": "アップロード",
"showUploads": "アップロードした画像を見る",
"galleryImageSize": "画像のサイズ",
"galleryImageResetSize": "サイズをリセット",
"gallerySettings": "ギャラリーの設定",
"maintainAspectRatio": "アスペクト比を維持",
"autoSwitchNewImages": "Auto-Switch to New Images",
"singleColumnLayout": "シングルカラムレイアウト",
"pinGallery": "ギャラリーにピン留め",
"allImagesLoaded": "すべての画像を読み込む",
"loadMore": "さらに読み込む",
"noImagesInGallery": "ギャラリーに画像がありません"
}
208
frontend/dist/locales/hotkeys/ja.json
vendored
Normal file
@@ -0,0 +1,208 @@
{
"keyboardShortcuts": "キーボードショートカット",
"appHotkeys": "アプリのホットキー",
"generalHotkeys": "Generalのホットキー",
"galleryHotkeys": "ギャラリーのホットキー",
"unifiedCanvasHotkeys": "Unified Canvasのホットキー",
"invoke": {
"title": "Invoke",
"desc": "画像を生成"
},
"cancel": {
"title": "キャンセル",
"desc": "画像の生成をキャンセル"
},
"focusPrompt": {
"title": "Focus Prompt",
"desc": "プロンプトテキストボックスにフォーカス"
},
"toggleOptions": {
"title": "オプションパネルのトグル",
"desc": "オプションパネルの開閉"
},
"pinOptions": {
"title": "ピン",
"desc": "オプションパネルを固定"
},
"toggleViewer": {
"title": "ビュワーのトグル",
"desc": "ビュワーを開閉"
},
"toggleGallery": {
"title": "ギャラリーのトグル",
"desc": "ギャラリードロワーの開閉"
},
"maximizeWorkSpace": {
"title": "作業領域の最大化",
"desc": "パネルを閉じて、作業領域を最大に"
},
"changeTabs": {
"title": "タブの切替",
"desc": "他の作業領域と切替"
},
"consoleToggle": {
"title": "コンソールのトグル",
"desc": "コンソールの開閉"
},
"setPrompt": {
"title": "プロンプトをセット",
"desc": "現在の画像のプロンプトを使用"
},
"setSeed": {
"title": "シード値をセット",
"desc": "現在の画像のシード値を使用"
},
"setParameters": {
"title": "パラメータをセット",
"desc": "現在の画像のすべてのパラメータを使用"
},
"restoreFaces": {
"title": "顔の修復",
"desc": "現在の画像を修復"
},
"upscale": {
"title": "アップスケール",
"desc": "現在の画像をアップスケール"
},
"showInfo": {
"title": "情報を見る",
"desc": "現在の画像のメタデータ情報を表示"
},
"sendToImageToImage": {
"title": "Image To Imageに転送",
"desc": "現在の画像をImage to Imageに転送"
},
"deleteImage": {
"title": "画像を削除",
"desc": "現在の画像を削除"
},
"closePanels": {
"title": "パネルを閉じる",
"desc": "開いているパネルを閉じる"
},
"previousImage": {
"title": "前の画像",
"desc": "ギャラリー内の1つ前の画像を表示"
},
"nextImage": {
"title": "次の画像",
"desc": "ギャラリー内の1つ後の画像を表示"
},
"toggleGalleryPin": {
"title": "ギャラリードロワーの固定",
"desc": "ギャラリーをUIにピン留め/解除"
},
"increaseGalleryThumbSize": {
"title": "ギャラリーの画像を拡大",
"desc": "ギャラリーのサムネイル画像を拡大"
},
"decreaseGalleryThumbSize": {
"title": "ギャラリーの画像サイズを縮小",
"desc": "ギャラリーのサムネイル画像を縮小"
},
"selectBrush": {
"title": "ブラシを選択",
"desc": "ブラシを選択"
},
"selectEraser": {
"title": "消しゴムを選択",
"desc": "消しゴムを選択"
},
"decreaseBrushSize": {
"title": "ブラシサイズを縮小",
"desc": "ブラシ/消しゴムのサイズを縮小"
},
"increaseBrushSize": {
"title": "ブラシサイズを拡大",
"desc": "ブラシ/消しゴムのサイズを拡大"
},
"decreaseBrushOpacity": {
"title": "ブラシの不透明度を下げる",
"desc": "キャンバスブラシの不透明度を下げる"
},
"increaseBrushOpacity": {
"title": "ブラシの不透明度を上げる",
"desc": "キャンバスブラシの不透明度を上げる"
},
"moveTool": {
"title": "Move Tool",
"desc": "Allows canvas navigation"
},
"fillBoundingBox": {
"title": "バウンディングボックスを塗りつぶす",
"desc": "ブラシの色でバウンディングボックス領域を塗りつぶす"
},
"eraseBoundingBox": {
"title": "バウンディングボックスを消す",
"desc": "バウンディングボックス領域を消す"
},
"colorPicker": {
"title": "カラーピッカーを選択",
"desc": "カラーピッカーを選択"
},
"toggleSnap": {
"title": "Toggle Snap",
"desc": "Toggles Snap to Grid"
},
"quickToggleMove": {
"title": "Quick Toggle Move",
"desc": "Temporarily toggles Move mode"
},
"toggleLayer": {
"title": "レイヤーを切替",
"desc": "マスク/ベースレイヤの選択を切替"
},
"clearMask": {
"title": "マスクを消す",
"desc": "マスク全体を消す"
},
"hideMask": {
"title": "マスクを非表示",
"desc": "マスクを表示/非表示"
},
"showHideBoundingBox": {
"title": "バウンディングボックスを表示/非表示",
"desc": "バウンディングボックスの表示/非表示を切替"
},
"mergeVisible": {
"title": "Merge Visible",
"desc": "Merge all visible layers of canvas"
},
"saveToGallery": {
"title": "ギャラリーに保存",
"desc": "現在のキャンバスをギャラリーに保存"
},
"copyToClipboard": {
"title": "クリップボードにコピー",
"desc": "現在のキャンバスをクリップボードにコピー"
},
"downloadImage": {
"title": "画像をダウンロード",
"desc": "現在の画像をダウンロード"
},
"undoStroke": {
"title": "Undo Stroke",
"desc": "Undo a brush stroke"
},
"redoStroke": {
"title": "Redo Stroke",
"desc": "Redo a brush stroke"
},
"resetView": {
"title": "キャンバスをリセット",
"desc": "キャンバスをリセット"
},
"previousStagingImage": {
"title": "Previous Staging Image",
"desc": "Previous Staging Area Image"
},
"nextStagingImage": {
"title": "Next Staging Image",
"desc": "Next Staging Area Image"
},
"acceptStagingImage": {
"title": "Accept Staging Image",
"desc": "Accept Current Staging Area Image"
}
}
19
frontend/dist/locales/modelmanager/en-US.json
vendored
@@ -1,12 +1,18 @@
{
"modelManager": "Model Manager",
"model": "Model",
"allModels": "All Models",
"checkpointModels": "Checkpoints",
"diffusersModels": "Diffusers",
"safetensorModels": "SafeTensors",
"modelAdded": "Model Added",
"modelUpdated": "Model Updated",
"modelEntryDeleted": "Model Entry Deleted",
"cannotUseSpaces": "Cannot Use Spaces",
"addNew": "Add New",
"addNewModel": "Add New Model",
"addCheckpointModel": "Add Checkpoint / Safetensor Model",
"addDiffuserModel": "Add Diffusers",
"addManually": "Add Manually",
"manual": "Manual",
"name": "Name",
@@ -17,8 +23,12 @@
"configValidationMsg": "Path to the config file of your model.",
"modelLocation": "Model Location",
"modelLocationValidationMsg": "Path to where your model is located.",
"repo_id": "Repo ID",
"repoIDValidationMsg": "Online repository of your model",
"vaeLocation": "VAE Location",
"vaeLocationValidationMsg": "Path to where your VAE is located.",
"vaeRepoID": "VAE Repo ID",
"vaeRepoIDValidationMsg": "Online repository of your VAE",
"width": "Width",
"widthValidationMsg": "Default width of your model.",
"height": "Height",
@@ -34,6 +44,7 @@
"checkpointFolder": "Checkpoint Folder",
"clearCheckpointFolder": "Clear Checkpoint Folder",
"findModels": "Find Models",
"scanAgain": "Scan Again",
"modelsFound": "Models Found",
"selectFolder": "Select Folder",
"selected": "Selected",
@@ -42,9 +53,15 @@
"showExisting": "Show Existing",
"addSelected": "Add Selected",
"modelExists": "Model Exists",
"selectAndAdd": "Select and Add Models Listed Below",
"noModelsFound": "No Models Found",
"delete": "Delete",
"deleteModel": "Delete Model",
"deleteConfig": "Delete Config",
"deleteMsg1": "Are you sure you want to delete this model entry from InvokeAI?",
"deleteMsg2": "This will not delete the model checkpoint file from your disk. You can readd them if you wish to."
"deleteMsg2": "This will not delete the model checkpoint file from your disk. You can readd them if you wish to.",
"formMessageDiffusersModelLocation": "Diffusers Model Location",
"formMessageDiffusersModelLocationDesc": "Please enter at least one.",
"formMessageDiffusersVAELocation": "VAE Location",
"formMessageDiffusersVAELocationDesc": "If not provided, InvokeAI will look for the VAE file inside the model location given above."
}
16
frontend/dist/locales/modelmanager/en.json
vendored
@@ -1,12 +1,18 @@
{
"modelManager": "Model Manager",
"model": "Model",
"allModels": "All Models",
"checkpointModels": "Checkpoints",
"diffusersModels": "Diffusers",
"safetensorModels": "SafeTensors",
"modelAdded": "Model Added",
"modelUpdated": "Model Updated",
"modelEntryDeleted": "Model Entry Deleted",
"cannotUseSpaces": "Cannot Use Spaces",
"addNew": "Add New",
"addNewModel": "Add New Model",
"addCheckpointModel": "Add Checkpoint / Safetensor Model",
"addDiffuserModel": "Add Diffusers",
"addManually": "Add Manually",
"manual": "Manual",
"name": "Name",
@@ -17,8 +23,12 @@
"configValidationMsg": "Path to the config file of your model.",
"modelLocation": "Model Location",
"modelLocationValidationMsg": "Path to where your model is located.",
"repo_id": "Repo ID",
"repoIDValidationMsg": "Online repository of your model",
"vaeLocation": "VAE Location",
"vaeLocationValidationMsg": "Path to where your VAE is located.",
"vaeRepoID": "VAE Repo ID",
"vaeRepoIDValidationMsg": "Online repository of your VAE",
"width": "Width",
"widthValidationMsg": "Default width of your model.",
"height": "Height",
@@ -49,5 +59,9 @@
"deleteModel": "Delete Model",
"deleteConfig": "Delete Config",
"deleteMsg1": "Are you sure you want to delete this model entry from InvokeAI?",
"deleteMsg2": "This will not delete the model checkpoint file from your disk. You can readd them if you wish to."
"deleteMsg2": "This will not delete the model checkpoint file from your disk. You can readd them if you wish to.",
"formMessageDiffusersModelLocation": "Diffusers Model Location",
"formMessageDiffusersModelLocationDesc": "Please enter at least one.",
"formMessageDiffusersVAELocation": "VAE Location",
"formMessageDiffusersVAELocationDesc": "If not provided, InvokeAI will look for the VAE file inside the model location given above."
}

68
frontend/dist/locales/modelmanager/ja.json
vendored
Normal file
@@ -0,0 +1,68 @@
{
  "modelManager": "モデルマネージャ",
  "model": "モデル",
  "allModels": "すべてのモデル",
  "checkpointModels": "Checkpoints",
  "diffusersModels": "Diffusers",
  "safetensorModels": "SafeTensors",
  "modelAdded": "モデルを追加",
  "modelUpdated": "モデルをアップデート",
  "modelEntryDeleted": "Model Entry Deleted",
  "cannotUseSpaces": "Cannot Use Spaces",
  "addNew": "新規に追加",
  "addNewModel": "新規モデル追加",
  "addCheckpointModel": "Checkpointを追加 / Safetensorモデル",
  "addDiffuserModel": "Diffusersを追加",
  "addManually": "手動で追加",
  "manual": "手動",
  "name": "名前",
  "nameValidationMsg": "モデルの名前を入力",
  "description": "概要",
  "descriptionValidationMsg": "モデルの概要を入力",
  "config": "Config",
  "configValidationMsg": "モデルの設定ファイルへのパス",
  "modelLocation": "モデルの場所",
  "modelLocationValidationMsg": "モデルが配置されている場所へのパス。",
  "repo_id": "Repo ID",
  "repoIDValidationMsg": "モデルのリモートリポジトリ",
  "vaeLocation": "VAEの場所",
  "vaeLocationValidationMsg": "Vaeが配置されている場所へのパス",
  "vaeRepoID": "VAE Repo ID",
  "vaeRepoIDValidationMsg": "Vaeのリモートリポジトリ",
  "width": "幅",
  "widthValidationMsg": "モデルのデフォルトの幅",
  "height": "高さ",
  "heightValidationMsg": "モデルのデフォルトの高さ",
  "addModel": "モデルを追加",
  "updateModel": "モデルをアップデート",
  "availableModels": "モデルを有効化",
  "search": "検索",
  "load": "Load",
  "active": "active",
  "notLoaded": "読み込まれていません",
  "cached": "キャッシュ済",
  "checkpointFolder": "Checkpointフォルダ",
  "clearCheckpointFolder": "Checkpointフォルダ内を削除",
  "findModels": "モデルを見つける",
  "scanAgain": "再度スキャン",
  "modelsFound": "モデルを発見",
  "selectFolder": "フォルダを選択",
  "selected": "選択済",
  "selectAll": "すべて選択",
  "deselectAll": "すべて選択解除",
  "showExisting": "既存を表示",
  "addSelected": "選択済を追加",
  "modelExists": "モデルの有無",
  "selectAndAdd": "以下のモデルを選択し、追加できます。",
  "noModelsFound": "モデルが見つかりません。",
  "delete": "削除",
  "deleteModel": "モデルを削除",
  "deleteConfig": "設定を削除",
  "deleteMsg1": "InvokeAIからこのモデルエントリーを削除してよろしいですか?",
  "deleteMsg2": "これは、ドライブからモデルのCheckpointファイルを削除するものではありません。必要であればそれらを読み込むことができます。",
  "formMessageDiffusersModelLocation": "Diffusersモデルの場所",
  "formMessageDiffusersModelLocationDesc": "最低でも1つは入力してください。",
  "formMessageDiffusersVAELocation": "VAEの場所s",
  "formMessageDiffusersVAELocationDesc": "指定しない場合、InvokeAIは上記のモデルの場所にあるVAEファイルを探します。"
}

63
frontend/dist/locales/options/ja.json
vendored
Normal file
@@ -0,0 +1,63 @@
{
  "images": "画像",
  "steps": "ステップ数",
  "cfgScale": "CFG Scale",
  "width": "幅",
  "height": "高さ",
  "sampler": "Sampler",
  "seed": "シード値",
  "randomizeSeed": "ランダムなシード値",
  "shuffle": "シャッフル",
  "noiseThreshold": "Noise Threshold",
  "perlinNoise": "Perlin Noise",
  "variations": "Variations",
  "variationAmount": "Variation Amount",
  "seedWeights": "シード値の重み",
  "faceRestoration": "顔の修復",
  "restoreFaces": "顔の修復",
  "type": "Type",
  "strength": "強度",
  "upscaling": "アップスケーリング",
  "upscale": "アップスケール",
  "upscaleImage": "画像をアップスケール",
  "scale": "Scale",
  "otherOptions": "その他のオプション",
  "seamlessTiling": "Seamless Tiling",
  "hiresOptim": "High Res Optimization",
  "imageFit": "Fit Initial Image To Output Size",
  "codeformerFidelity": "Fidelity",
  "seamSize": "Seam Size",
  "seamBlur": "Seam Blur",
  "seamStrength": "Seam Strength",
  "seamSteps": "Seam Steps",
  "inpaintReplace": "Inpaint Replace",
  "scaleBeforeProcessing": "処理前のスケール",
  "scaledWidth": "幅のスケール",
  "scaledHeight": "高さのスケール",
  "infillMethod": "Infill Method",
  "tileSize": "Tile Size",
  "boundingBoxHeader": "バウンディングボックス",
  "seamCorrectionHeader": "Seam Correction",
  "infillScalingHeader": "Infill and Scaling",
  "img2imgStrength": "Image To Imageの強度",
  "toggleLoopback": "Toggle Loopback",
  "invoke": "Invoke",
  "cancel": "キャンセル",
  "promptPlaceholder": "Type prompt here. [negative tokens], (upweight)++, (downweight)--, swap and blend are available (see docs)",
  "sendTo": "転送",
  "sendToImg2Img": "Image to Imageに転送",
  "sendToUnifiedCanvas": "Unified Canvasに転送",
  "copyImageToLink": "Copy Image To Link",
  "downloadImage": "画像をダウンロード",
  "openInViewer": "ビュワーを開く",
  "closeViewer": "ビュワーを閉じる",
  "usePrompt": "プロンプトを使用",
  "useSeed": "シード値を使用",
  "useAll": "すべてを使用",
  "useInitImg": "Use Initial Image",
  "info": "情報",
  "deleteImage": "画像を削除",
  "initialImage": "Inital Image",
  "showOptionsPanel": "オプションパネルを表示"
}

14
frontend/dist/locales/settings/ja.json
vendored
Normal file
@@ -0,0 +1,14 @@
{
  "models": "モデル",
  "displayInProgress": "生成中の画像を表示する",
  "saveSteps": "nステップごとに画像を保存",
  "confirmOnDelete": "削除時に確認",
  "displayHelpIcons": "ヘルプアイコンを表示",
  "useCanvasBeta": "キャンバスレイアウト(Beta)を使用する",
  "enableImageDebugging": "画像のデバッグを有効化",
  "resetWebUI": "WebUIをリセット",
  "resetWebUIDesc1": "WebUIのリセットは、画像と保存された設定のキャッシュをリセットするだけです。画像を削除するわけではありません。",
  "resetWebUIDesc2": "もしギャラリーに画像が表示されないなど、何か問題が発生した場合はGitHubにissueを提出する前にリセットを試してください。",
  "resetComplete": "WebUIはリセットされました。F5を押して再読み込みしてください。"
}

32
frontend/dist/locales/toast/ja.json
vendored
Normal file
@@ -0,0 +1,32 @@
{
  "tempFoldersEmptied": "Temp Folder Emptied",
  "uploadFailed": "アップロード失敗",
  "uploadFailedMultipleImagesDesc": "一度にアップロードできる画像は1枚のみです。",
  "uploadFailedUnableToLoadDesc": "ファイルを読み込むことができません。",
  "downloadImageStarted": "画像ダウンロード開始",
  "imageCopied": "画像をコピー",
  "imageLinkCopied": "画像のURLをコピー",
  "imageNotLoaded": "画像を読み込めません。",
  "imageNotLoadedDesc": "Image To Imageに転送する画像が見つかりません。",
  "imageSavedToGallery": "画像をギャラリーに保存する",
  "canvasMerged": "Canvas Merged",
  "sentToImageToImage": "Image To Imageに転送",
  "sentToUnifiedCanvas": "Unified Canvasに転送",
  "parametersSet": "Parameters Set",
  "parametersNotSet": "Parameters Not Set",
  "parametersNotSetDesc": "この画像にはメタデータがありません。",
  "parametersFailed": "パラメータ読み込みの不具合",
  "parametersFailedDesc": "initイメージを読み込めません。",
  "seedSet": "Seed Set",
  "seedNotSet": "Seed Not Set",
  "seedNotSetDesc": "この画像のシード値が見つかりません。",
  "promptSet": "Prompt Set",
  "promptNotSet": "Prompt Not Set",
  "promptNotSetDesc": "この画像のプロンプトが見つかりませんでした。",
  "upscalingFailed": "アップスケーリング失敗",
  "faceRestoreFailed": "顔の修復に失敗",
  "metadataLoadFailed": "メタデータの読み込みに失敗。",
  "initialImageSet": "Initial Image Set",
  "initialImageNotSet": "Initial Image Not Set",
  "initialImageNotSetDesc": "Could not load initial image"
}

16
frontend/dist/locales/tooltip/it.json
vendored
@@ -1 +1,15 @@
{}
{
  "feature": {
    "prompt": "Questo è il campo del prompt. Il prompt include oggetti di generazione e termini stilistici. Puoi anche aggiungere il peso (importanza del token) nel prompt, ma i comandi e i parametri dell'interfaccia a linea di comando non funzioneranno.",
    "gallery": "Galleria visualizza le generazioni dalla cartella degli output man mano che vengono create. Le impostazioni sono memorizzate all'interno di file e accessibili dal menu contestuale.",
    "other": "Queste opzioni abiliteranno modalità di elaborazione alternative per Invoke. 'Piastrella senza cuciture' creerà modelli ripetuti nell'output. 'Ottimizzzazione Alta risoluzione' è la generazione in due passaggi con 'Immagine a Immagine': usa questa impostazione quando vuoi un'immagine più grande e più coerente senza artefatti. Ci vorrà più tempo del solito 'Testo a Immagine'.",
    "seed": "Il valore del Seme influenza il rumore iniziale da cui è formata l'immagine. Puoi usare i semi già esistenti dalle immagini precedenti. 'Soglia del rumore' viene utilizzato per mitigare gli artefatti a valori CFG elevati (provare l'intervallo 0-10) e Perlin per aggiungere il rumore Perlin durante la generazione: entrambi servono per aggiungere variazioni ai risultati.",
    "variations": "Prova una variazione con un valore compreso tra 0.1 e 1.0 per modificare il risultato per un dato seme. Variazioni interessanti del seme sono comprese tra 0.1 e 0.3.",
    "upscale": "Utilizza ESRGAN per ingrandire l'immagine subito dopo la generazione.",
    "faceCorrection": "Correzione del volto con GFPGAN o Codeformer: l'algoritmo rileva i volti nell'immagine e corregge eventuali difetti. Un valore alto cambierà maggiormente l'immagine, dando luogo a volti più attraenti. Codeformer con una maggiore fedeltà preserva l'immagine originale a scapito di una correzione facciale più forte.",
    "imageToImage": "Da Immagine a Immagine carica qualsiasi immagine come iniziale, che viene quindi utilizzata per generarne una nuova in base al prompt. Più alto è il valore, più cambierà l'immagine risultante. Sono possibili valori da 0.0 a 1.0, l'intervallo consigliato è 0.25-0.75",
    "boundingBox": "Il riquadro di selezione è lo stesso delle impostazioni Larghezza e Altezza per da Testo a Immagine o da Immagine a Immagine. Verrà elaborata solo l'area nella casella.",
    "seamCorrection": "Controlla la gestione delle giunzioni visibili che si verificano tra le immagini generate sulla tela.",
    "infillAndScaling": "Gestisce i metodi di riempimento (utilizzati su aree mascherate o cancellate dell'area di disegno) e il ridimensionamento (utile per i riquadri di selezione di piccole dimensioni)."
  }
}

16
frontend/dist/locales/tooltip/ja.json
vendored
Normal file
@@ -0,0 +1,16 @@
{
  "feature": {
    "prompt": "これはプロンプトフィールドです。プロンプトには生成オブジェクトや文法用語が含まれます。プロンプトにも重み(Tokenの重要度)を付けることができますが、CLIコマンドやパラメータは機能しません。",
    "gallery": "ギャラリーは、出力先フォルダから生成物を表示します。設定はファイル内に保存され、コンテキストメニューからアクセスできます。.",
    "other": "These options will enable alternative processing modes for Invoke. 'Seamless tiling' will create repeating patterns in the output. 'High resolution' is generation in two steps with img2img: use this setting when you want a larger and more coherent image without artifacts. It will take longer that usual txt2img.",
    "seed": "シード値は、画像が形成される際の初期ノイズに影響します。以前の画像から既に存在するシードを使用することができます。ノイズしきい値は高いCFG値でのアーティファクトを軽減するために使用され、Perlinは生成中にPerlinノイズを追加します(0-10の範囲を試してみてください): どちらも出力にバリエーションを追加するのに役立ちます。",
    "variations": "0.1から1.0の間の値で試し、付与されたシードに対する結果を変えてみてください。面白いバリュエーションは0.1〜0.3の間です。",
    "upscale": "生成直後の画像をアップスケールするには、ESRGANを使用します。",
    "faceCorrection": "GFPGANまたはCodeformerによる顔の修復: 画像内の顔を検出し不具合を修正するアルゴリズムです。高い値を設定すると画像がより変化し、より魅力的な顔になります。Codeformerは顔の修復を犠牲にして、元の画像をできる限り保持します。",
    "imageToImage": "Image To Imageは任意の画像を初期値として読み込み、プロンプトとともに新しい画像を生成するために使用されます。値が高いほど結果画像はより変化します。0.0から1.0までの値が可能で、推奨範囲は0.25から0.75です。",
    "boundingBox": "バウンディングボックスは、Text To ImageまたはImage To Imageの幅/高さの設定と同じです。ボックス内の領域のみが処理されます。",
    "seamCorrection": "キャンバス上の生成された画像間に発生する可視可能な境界の処理を制御します。",
    "infillAndScaling": "Manage infill methods (used on masked or erased areas of the canvas) and scaling (useful for small bounding box sizes)."
  }
}

15
frontend/dist/locales/tooltips/it.json
vendored
@@ -1,15 +0,0 @@
{
  "feature": {
    "prompt": "Questo è il campo del prompt. Il prompt include oggetti di generazione e termini stilistici. Puoi anche aggiungere il peso (importanza del token) nel prompt, ma i comandi e i parametri dell'interfaccia a linea di comando non funzioneranno.",
    "gallery": "Galleria visualizza le generazioni dalla cartella degli output man mano che vengono create. Le impostazioni sono memorizzate all'interno di file e accessibili dal menu contestuale.",
    "other": "Queste opzioni abiliteranno modalità di elaborazione alternative per Invoke. 'Piastrella senza cuciture' creerà modelli ripetuti nell'output. 'Ottimizzzazione Alta risoluzione' è la generazione in due passaggi con 'Immagine a Immagine': usa questa impostazione quando vuoi un'immagine più grande e più coerente senza artefatti. Ci vorrà più tempo del solito 'Testo a Immagine'.",
    "seed": "Il valore del Seme influenza il rumore iniziale da cui è formata l'immagine. Puoi usare i semi già esistenti dalle immagini precedenti. 'Soglia del rumore' viene utilizzato per mitigare gli artefatti a valori CFG elevati (provare l'intervallo 0-10) e Perlin per aggiungere il rumore Perlin durante la generazione: entrambi servono per aggiungere variazioni ai risultati.",
    "variations": "Prova una variazione con un valore compreso tra 0.1 e 1.0 per modificare il risultato per un dato seme. Variazioni interessanti del seme sono comprese tra 0.1 e 0.3.",
    "upscale": "Utilizza ESRGAN per ingrandire l'immagine subito dopo la generazione.",
    "faceCorrection": "Correzione del volto con GFPGAN o Codeformer: l'algoritmo rileva i volti nell'immagine e corregge eventuali difetti. Un valore alto cambierà maggiormente l'immagine, dando luogo a volti più attraenti. Codeformer con una maggiore fedeltà preserva l'immagine originale a scapito di una correzione facciale più forte.",
    "imageToImage": "Da Immagine a Immagine carica qualsiasi immagine come iniziale, che viene quindi utilizzata per generarne una nuova in base al prompt. Più alto è il valore, più cambierà l'immagine risultante. Sono possibili valori da 0.0 a 1.0, l'intervallo consigliato è 0.25-0.75",
    "boundingBox": "Il riquadro di selezione è lo stesso delle impostazioni Larghezza e Altezza per dat Testo a Immagine o da Immagine a Immagine. Verrà elaborata solo l'area nella casella.",
    "seamCorrection": "Controlla la gestione delle giunzioni visibili che si verificano tra le immagini generate sulla tela.",
    "infillAndScaling": "Gestisce i metodi di riempimento (utilizzati su aree mascherate o cancellate dell'area di disegno) e il ridimensionamento (utile per i riquadri di selezione di piccole dimensioni)."
  }
}

60
frontend/dist/locales/unifiedcanvas/ja.json
vendored
Normal file
@@ -0,0 +1,60 @@
{
  "layer": "Layer",
  "base": "Base",
  "mask": "マスク",
  "maskingOptions": "マスクのオプション",
  "enableMask": "マスクを有効化",
  "preserveMaskedArea": "マスク領域の保存",
  "clearMask": "マスクを解除",
  "brush": "ブラシ",
  "eraser": "消しゴム",
  "fillBoundingBox": "バウンディングボックスの塗りつぶし",
  "eraseBoundingBox": "バウンディングボックスの消去",
  "colorPicker": "カラーピッカー",
  "brushOptions": "ブラシオプション",
  "brushSize": "サイズ",
  "move": "Move",
  "resetView": "Reset View",
  "mergeVisible": "Merge Visible",
  "saveToGallery": "ギャラリーに保存",
  "copyToClipboard": "クリップボードにコピー",
  "downloadAsImage": "画像としてダウンロード",
  "undo": "取り消し",
  "redo": "やり直し",
  "clearCanvas": "キャンバスを片付ける",
  "canvasSettings": "キャンバスの設定",
  "showIntermediates": "Show Intermediates",
  "showGrid": "グリッドを表示",
  "snapToGrid": "Snap to Grid",
  "darkenOutsideSelection": "外周を暗くする",
  "autoSaveToGallery": "ギャラリーに自動保存",
  "saveBoxRegionOnly": "ボックス領域のみ保存",
  "limitStrokesToBox": "Limit Strokes to Box",
  "showCanvasDebugInfo": "キャンバスのデバッグ情報を表示",
  "clearCanvasHistory": "キャンバスの履歴を削除",
  "clearHistory": "履歴を削除",
  "clearCanvasHistoryMessage": "履歴を消去すると現在のキャンバスは残りますが、取り消しややり直しの履歴は不可逆的に消去されます。",
  "clearCanvasHistoryConfirm": "履歴を削除しますか?",
  "emptyTempImageFolder": "Empty Temp Image Folde",
  "emptyFolder": "空のフォルダ",
  "emptyTempImagesFolderMessage": "一時フォルダを空にすると、Unified Canvasも完全にリセットされます。これには、すべての取り消し/やり直しの履歴、ステージング領域の画像、およびキャンバスのベースレイヤーが含まれます。",
  "emptyTempImagesFolderConfirm": "一時フォルダを削除しますか?",
  "activeLayer": "Active Layer",
  "canvasScale": "Canvas Scale",
  "boundingBox": "バウンディングボックス",
  "scaledBoundingBox": "Scaled Bounding Box",
  "boundingBoxPosition": "バウンディングボックスの位置",
  "canvasDimensions": "キャンバスの大きさ",
  "canvasPosition": "キャンバスの位置",
  "cursorPosition": "カーソルの位置",
  "previous": "前",
  "next": "次",
  "accept": "同意",
  "showHide": "表示/非表示",
  "discardAll": "すべて破棄",
  "betaClear": "Clear",
  "betaDarkenOutside": "Darken Outside",
  "betaLimitToBox": "Limit To Box",
  "betaPreserveMasked": "Preserve Masked"
}

@@ -19,6 +19,7 @@
  "langPolish": "Polish",
  "langSimplifiedChinese": "Simplified Chinese",
  "langSpanish": "Spanish",
  "langJapanese": "Japanese",
  "text2img": "Text To Image",
  "img2img": "Image To Image",
  "unifiedCanvas": "Unified Canvas",

@@ -19,6 +19,7 @@
  "langPolish": "Polish",
  "langSimplifiedChinese": "Simplified Chinese",
  "langSpanish": "Spanish",
  "langJapanese": "Japanese",
  "text2img": "Text To Image",
  "img2img": "Image To Image",
  "unifiedCanvas": "Unified Canvas",

60
frontend/public/locales/common/ja.json
Normal file
@@ -0,0 +1,60 @@
{
  "hotkeysLabel": "Hotkeys",
  "themeLabel": "テーマ",
  "languagePickerLabel": "言語選択",
  "reportBugLabel": "バグ報告",
  "githubLabel": "Github",
  "discordLabel": "Discord",
  "settingsLabel": "設定",
  "darkTheme": "ダーク",
  "lightTheme": "ライト",
  "greenTheme": "緑",
  "langEnglish": "English",
  "langRussian": "Russian",
  "langItalian": "Italian",
  "langBrPortuguese": "Portuguese (Brazilian)",
  "langGerman": "German",
  "langPortuguese": "Portuguese",
  "langFrench": "French",
  "langPolish": "Polish",
  "langSimplifiedChinese": "Simplified Chinese",
  "langSpanish": "Spanish",
  "text2img": "Text To Image",
  "img2img": "Image To Image",
  "unifiedCanvas": "Unified Canvas",
  "nodes": "Nodes",
  "nodesDesc": "現在、画像生成のためのノードベースシステムを開発中です。機能についてのアップデートにご期待ください。",
  "postProcessing": "後処理",
  "postProcessDesc1": "Invoke AIは、多彩な後処理の機能を備えています。アップスケーリングと顔修復は、すでにWebUI上で利用可能です。これらは、[Text To Image]および[Image To Image]タブの[詳細オプション]メニューからアクセスできます。また、現在の画像表示の上やビューア内の画像アクションボタンを使って、画像を直接処理することもできます。",
  "postProcessDesc2": "より高度な後処理の機能を実現するための専用UIを近日中にリリース予定です。",
  "postProcessDesc3": "Invoke AI CLIでは、この他にもEmbiggenをはじめとする様々な機能を利用することができます。",
  "training": "追加学習",
  "trainingDesc1": "Textual InversionとDreamboothを使って、WebUIから独自のEmbeddingとチェックポイントを追加学習するための専用ワークフローです。",
  "trainingDesc2": "InvokeAIは、すでにメインスクリプトを使ったTextual Inversionによるカスタム埋め込み追加学習にも対応しています。",
  "upload": "アップロード",
  "close": "閉じる",
  "load": "ロード",
  "back": "戻る",
  "statusConnected": "接続済",
  "statusDisconnected": "切断済",
  "statusError": "エラー",
  "statusPreparing": "準備中",
  "statusProcessingCanceled": "処理をキャンセル",
  "statusProcessingComplete": "処理完了",
  "statusGenerating": "生成中",
  "statusGeneratingTextToImage": "Text To Imageで生成中",
  "statusGeneratingImageToImage": "Image To Imageで生成中",
  "statusGeneratingInpainting": "Generating Inpainting",
  "statusGeneratingOutpainting": "Generating Outpainting",
  "statusGenerationComplete": "生成完了",
  "statusIterationComplete": "Iteration Complete",
  "statusSavingImage": "画像を保存",
  "statusRestoringFaces": "顔の修復",
  "statusRestoringFacesGFPGAN": "顔の修復 (GFPGAN)",
  "statusRestoringFacesCodeFormer": "顔の修復 (CodeFormer)",
  "statusUpscaling": "アップスケーリング",
  "statusUpscalingESRGAN": "アップスケーリング (ESRGAN)",
  "statusLoadingModel": "モデルを読み込む",
  "statusModelChanged": "モデルを変更"
}

17
frontend/public/locales/gallery/ja.json
Normal file
@@ -0,0 +1,17 @@
{
  "generations": "Generations",
  "showGenerations": "Show Generations",
  "uploads": "アップロード",
  "showUploads": "アップロードした画像を見る",
  "galleryImageSize": "画像のサイズ",
  "galleryImageResetSize": "サイズをリセット",
  "gallerySettings": "ギャラリーの設定",
  "maintainAspectRatio": "アスペクト比を維持",
  "autoSwitchNewImages": "Auto-Switch to New Images",
  "singleColumnLayout": "シングルカラムレイアウト",
  "pinGallery": "ギャラリーにピン留め",
  "allImagesLoaded": "すべての画像を読み込む",
  "loadMore": "さらに読み込む",
  "noImagesInGallery": "ギャラリーに画像がありません"
}

208
frontend/public/locales/hotkeys/ja.json
Normal file
@@ -0,0 +1,208 @@
{
  "keyboardShortcuts": "キーボードショートカット",
  "appHotkeys": "アプリのホットキー",
  "generalHotkeys": "Generalのホットキー",
  "galleryHotkeys": "ギャラリーのホットキー",
  "unifiedCanvasHotkeys": "Unified Canvasのホットキー",
  "invoke": {
    "title": "Invoke",
    "desc": "画像を生成"
  },
  "cancel": {
    "title": "キャンセル",
    "desc": "画像の生成をキャンセル"
  },
  "focusPrompt": {
    "title": "Focus Prompt",
    "desc": "プロンプトテキストボックスにフォーカス"
  },
  "toggleOptions": {
    "title": "オプションパネルのトグル",
    "desc": "オプションパネルの開閉"
  },
  "pinOptions": {
    "title": "ピン",
    "desc": "オプションパネルを固定"
  },
  "toggleViewer": {
    "title": "ビュワーのトグル",
    "desc": "ビュワーを開閉"
  },
  "toggleGallery": {
    "title": "ギャラリーのトグル",
    "desc": "ギャラリードロワーの開閉"
  },
  "maximizeWorkSpace": {
    "title": "作業領域の最大化",
    "desc": "パネルを閉じて、作業領域を最大に"
  },
  "changeTabs": {
    "title": "タブの切替",
    "desc": "他の作業領域と切替"
  },
  "consoleToggle": {
    "title": "コンソールのトグル",
    "desc": "コンソールの開閉"
  },
  "setPrompt": {
    "title": "プロンプトをセット",
    "desc": "現在の画像のプロンプトを使用"
  },
  "setSeed": {
    "title": "シード値をセット",
    "desc": "現在の画像のシード値を使用"
  },
  "setParameters": {
    "title": "パラメータをセット",
    "desc": "現在の画像のすべてのパラメータを使用"
  },
  "restoreFaces": {
    "title": "顔の修復",
    "desc": "現在の画像を修復"
  },
  "upscale": {
    "title": "アップスケール",
    "desc": "現在の画像をアップスケール"
  },
  "showInfo": {
    "title": "情報を見る",
    "desc": "現在の画像のメタデータ情報を表示"
  },
  "sendToImageToImage": {
    "title": "Image To Imageに転送",
    "desc": "現在の画像をImage to Imageに転送"
  },
  "deleteImage": {
    "title": "画像を削除",
    "desc": "現在の画像を削除"
  },
  "closePanels": {
    "title": "パネルを閉じる",
    "desc": "開いているパネルを閉じる"
  },
  "previousImage": {
    "title": "前の画像",
    "desc": "ギャラリー内の1つ前の画像を表示"
  },
  "nextImage": {
    "title": "次の画像",
    "desc": "ギャラリー内の1つ後の画像を表示"
  },
  "toggleGalleryPin": {
    "title": "ギャラリードロワーの固定",
    "desc": "ギャラリーをUIにピン留め/解除"
  },
  "increaseGalleryThumbSize": {
    "title": "ギャラリーの画像を拡大",
    "desc": "ギャラリーのサムネイル画像を拡大"
  },
  "decreaseGalleryThumbSize": {
    "title": "ギャラリーの画像サイズを縮小",
    "desc": "ギャラリーのサムネイル画像を縮小"
  },
  "selectBrush": {
    "title": "ブラシを選択",
    "desc": "ブラシを選択"
  },
  "selectEraser": {
    "title": "消しゴムを選択",
    "desc": "消しゴムを選択"
  },
  "decreaseBrushSize": {
    "title": "ブラシサイズを縮小",
    "desc": "ブラシ/消しゴムのサイズを縮小"
  },
  "increaseBrushSize": {
    "title": "ブラシサイズを拡大",
    "desc": "ブラシ/消しゴムのサイズを拡大"
  },
  "decreaseBrushOpacity": {
    "title": "ブラシの不透明度を下げる",
    "desc": "キャンバスブラシの不透明度を下げる"
  },
  "increaseBrushOpacity": {
    "title": "ブラシの不透明度を上げる",
    "desc": "キャンバスブラシの不透明度を上げる"
  },
  "moveTool": {
    "title": "Move Tool",
    "desc": "Allows canvas navigation"
  },
  "fillBoundingBox": {
    "title": "バウンディングボックスを塗りつぶす",
    "desc": "ブラシの色でバウンディングボックス領域を塗りつぶす"
  },
  "eraseBoundingBox": {
    "title": "バウンディングボックスを消す",
    "desc": "バウンディングボックス領域を消す"
  },
  "colorPicker": {
    "title": "カラーピッカーを選択",
    "desc": "カラーピッカーを選択"
  },
  "toggleSnap": {
    "title": "Toggle Snap",
    "desc": "Toggles Snap to Grid"
  },
  "quickToggleMove": {
    "title": "Quick Toggle Move",
    "desc": "Temporarily toggles Move mode"
  },
  "toggleLayer": {
    "title": "レイヤーを切替",
    "desc": "マスク/ベースレイヤの選択を切替"
  },
  "clearMask": {
    "title": "マスクを消す",
    "desc": "マスク全体を消す"
  },
  "hideMask": {
    "title": "マスクを非表示",
    "desc": "マスクを表示/非表示"
  },
  "showHideBoundingBox": {
    "title": "バウンディングボックスを表示/非表示",
    "desc": "バウンディングボックスの表示/非表示を切替"
  },
  "mergeVisible": {
    "title": "Merge Visible",
    "desc": "Merge all visible layers of canvas"
  },
  "saveToGallery": {
    "title": "ギャラリーに保存",
    "desc": "現在のキャンバスをギャラリーに保存"
  },
  "copyToClipboard": {
    "title": "クリップボードにコピー",
    "desc": "現在のキャンバスをクリップボードにコピー"
  },
  "downloadImage": {
    "title": "画像をダウンロード",
    "desc": "現在の画像をダウンロード"
  },
  "undoStroke": {
    "title": "Undo Stroke",
    "desc": "Undo a brush stroke"
  },
  "redoStroke": {
    "title": "Redo Stroke",
    "desc": "Redo a brush stroke"
  },
  "resetView": {
    "title": "キャンバスをリセット",
    "desc": "キャンバスをリセット"
  },
  "previousStagingImage": {
    "title": "Previous Staging Image",
    "desc": "Previous Staging Area Image"
  },
  "nextStagingImage": {
    "title": "Next Staging Image",
    "desc": "Next Staging Area Image"
  },
  "acceptStagingImage": {
    "title": "Accept Staging Image",
    "desc": "Accept Current Staging Area Image"
  }
}

68
frontend/public/locales/modelmanager/ja.json
Normal file
@@ -0,0 +1,68 @@
{
  "modelManager": "モデルマネージャ",
  "model": "モデル",
  "allModels": "すべてのモデル",
  "checkpointModels": "Checkpoints",
  "diffusersModels": "Diffusers",
  "safetensorModels": "SafeTensors",
  "modelAdded": "モデルを追加",
  "modelUpdated": "モデルをアップデート",
  "modelEntryDeleted": "Model Entry Deleted",
  "cannotUseSpaces": "Cannot Use Spaces",
  "addNew": "新規に追加",
  "addNewModel": "新規モデル追加",
  "addCheckpointModel": "Checkpointを追加 / Safetensorモデル",
  "addDiffuserModel": "Diffusersを追加",
  "addManually": "手動で追加",
  "manual": "手動",
  "name": "名前",
  "nameValidationMsg": "モデルの名前を入力",
  "description": "概要",
  "descriptionValidationMsg": "モデルの概要を入力",
  "config": "Config",
  "configValidationMsg": "モデルの設定ファイルへのパス",
  "modelLocation": "モデルの場所",
  "modelLocationValidationMsg": "モデルが配置されている場所へのパス。",
  "repo_id": "Repo ID",
  "repoIDValidationMsg": "モデルのリモートリポジトリ",
  "vaeLocation": "VAEの場所",
  "vaeLocationValidationMsg": "Vaeが配置されている場所へのパス",
  "vaeRepoID": "VAE Repo ID",
  "vaeRepoIDValidationMsg": "Vaeのリモートリポジトリ",
  "width": "幅",
  "widthValidationMsg": "モデルのデフォルトの幅",
  "height": "高さ",
  "heightValidationMsg": "モデルのデフォルトの高さ",
  "addModel": "モデルを追加",
  "updateModel": "モデルをアップデート",
  "availableModels": "モデルを有効化",
  "search": "検索",
  "load": "Load",
  "active": "active",
  "notLoaded": "読み込まれていません",
  "cached": "キャッシュ済",
  "checkpointFolder": "Checkpointフォルダ",
  "clearCheckpointFolder": "Checkpointフォルダ内を削除",
  "findModels": "モデルを見つける",
  "scanAgain": "再度スキャン",
  "modelsFound": "モデルを発見",
  "selectFolder": "フォルダを選択",
  "selected": "選択済",
  "selectAll": "すべて選択",
  "deselectAll": "すべて選択解除",
  "showExisting": "既存を表示",
  "addSelected": "選択済を追加",
  "modelExists": "モデルの有無",
  "selectAndAdd": "以下のモデルを選択し、追加できます。",
  "noModelsFound": "モデルが見つかりません。",
  "delete": "削除",
  "deleteModel": "モデルを削除",
  "deleteConfig": "設定を削除",
  "deleteMsg1": "InvokeAIからこのモデルエントリーを削除してよろしいですか?",
  "deleteMsg2": "これは、ドライブからモデルのCheckpointファイルを削除するものではありません。必要であればそれらを読み込むことができます。",
  "formMessageDiffusersModelLocation": "Diffusersモデルの場所",
  "formMessageDiffusersModelLocationDesc": "最低でも1つは入力してください。",
  "formMessageDiffusersVAELocation": "VAEの場所s",
  "formMessageDiffusersVAELocationDesc": "指定しない場合、InvokeAIは上記のモデルの場所にあるVAEファイルを探します。"
}

63
frontend/public/locales/options/ja.json
Normal file
@@ -0,0 +1,63 @@
{
  "images": "画像",
  "steps": "ステップ数",
  "cfgScale": "CFG Scale",
  "width": "幅",
  "height": "高さ",
  "sampler": "Sampler",
  "seed": "シード値",
  "randomizeSeed": "ランダムなシード値",
  "shuffle": "シャッフル",
  "noiseThreshold": "Noise Threshold",
  "perlinNoise": "Perlin Noise",
  "variations": "Variations",
  "variationAmount": "Variation Amount",
  "seedWeights": "シード値の重み",
  "faceRestoration": "顔の修復",
  "restoreFaces": "顔の修復",
  "type": "Type",
  "strength": "強度",
  "upscaling": "アップスケーリング",
  "upscale": "アップスケール",
  "upscaleImage": "画像をアップスケール",
  "scale": "Scale",
  "otherOptions": "その他のオプション",
  "seamlessTiling": "Seamless Tiling",
  "hiresOptim": "High Res Optimization",
  "imageFit": "Fit Initial Image To Output Size",
  "codeformerFidelity": "Fidelity",
  "seamSize": "Seam Size",
  "seamBlur": "Seam Blur",
  "seamStrength": "Seam Strength",
  "seamSteps": "Seam Steps",
  "inpaintReplace": "Inpaint Replace",
  "scaleBeforeProcessing": "処理前のスケール",
  "scaledWidth": "幅のスケール",
  "scaledHeight": "高さのスケール",
  "infillMethod": "Infill Method",
  "tileSize": "Tile Size",
  "boundingBoxHeader": "バウンディングボックス",
  "seamCorrectionHeader": "Seam Correction",
  "infillScalingHeader": "Infill and Scaling",
  "img2imgStrength": "Image To Imageの強度",
  "toggleLoopback": "Toggle Loopback",
  "invoke": "Invoke",
  "cancel": "キャンセル",
  "promptPlaceholder": "Type prompt here. [negative tokens], (upweight)++, (downweight)--, swap and blend are available (see docs)",
  "sendTo": "転送",
  "sendToImg2Img": "Image to Imageに転送",
  "sendToUnifiedCanvas": "Unified Canvasに転送",
  "copyImageToLink": "Copy Image To Link",
  "downloadImage": "画像をダウンロード",
  "openInViewer": "ビュワーを開く",
  "closeViewer": "ビュワーを閉じる",
  "usePrompt": "プロンプトを使用",
  "useSeed": "シード値を使用",
  "useAll": "すべてを使用",
  "useInitImg": "Use Initial Image",
  "info": "情報",
  "deleteImage": "画像を削除",
  "initialImage": "Inital Image",
  "showOptionsPanel": "オプションパネルを表示"
}
14
frontend/public/locales/settings/ja.json
Normal file
@ -0,0 +1,14 @@
{
  "models": "モデル",
  "displayInProgress": "生成中の画像を表示する",
  "saveSteps": "nステップごとに画像を保存",
  "confirmOnDelete": "削除時に確認",
  "displayHelpIcons": "ヘルプアイコンを表示",
  "useCanvasBeta": "キャンバスレイアウト(Beta)を使用する",
  "enableImageDebugging": "画像のデバッグを有効化",
  "resetWebUI": "WebUIをリセット",
  "resetWebUIDesc1": "WebUIのリセットは、画像と保存された設定のキャッシュをリセットするだけです。画像を削除するわけではありません。",
  "resetWebUIDesc2": "もしギャラリーに画像が表示されないなど、何か問題が発生した場合はGitHubにissueを提出する前にリセットを試してください。",
  "resetComplete": "WebUIはリセットされました。F5を押して再読み込みしてください。"
}
32
frontend/public/locales/toast/ja.json
Normal file
@ -0,0 +1,32 @@
{
  "tempFoldersEmptied": "Temp Folder Emptied",
  "uploadFailed": "アップロード失敗",
  "uploadFailedMultipleImagesDesc": "一度にアップロードできる画像は1枚のみです。",
  "uploadFailedUnableToLoadDesc": "ファイルを読み込むことができません。",
  "downloadImageStarted": "画像ダウンロード開始",
  "imageCopied": "画像をコピー",
  "imageLinkCopied": "画像のURLをコピー",
  "imageNotLoaded": "画像を読み込めません。",
  "imageNotLoadedDesc": "Image To Imageに転送する画像が見つかりません。",
  "imageSavedToGallery": "画像をギャラリーに保存する",
  "canvasMerged": "Canvas Merged",
  "sentToImageToImage": "Image To Imageに転送",
  "sentToUnifiedCanvas": "Unified Canvasに転送",
  "parametersSet": "Parameters Set",
  "parametersNotSet": "Parameters Not Set",
  "parametersNotSetDesc": "この画像にはメタデータがありません。",
  "parametersFailed": "パラメータ読み込みの不具合",
  "parametersFailedDesc": "initイメージを読み込めません。",
  "seedSet": "Seed Set",
  "seedNotSet": "Seed Not Set",
  "seedNotSetDesc": "この画像のシード値が見つかりません。",
  "promptSet": "Prompt Set",
  "promptNotSet": "Prompt Not Set",
  "promptNotSetDesc": "この画像のプロンプトが見つかりませんでした。",
  "upscalingFailed": "アップスケーリング失敗",
  "faceRestoreFailed": "顔の修復に失敗",
  "metadataLoadFailed": "メタデータの読み込みに失敗。",
  "initialImageSet": "Initial Image Set",
  "initialImageNotSet": "Initial Image Not Set",
  "initialImageNotSetDesc": "Could not load initial image"
}
16
frontend/public/locales/tooltip/ja.json
Normal file
@ -0,0 +1,16 @@
{
  "feature": {
    "prompt": "これはプロンプトフィールドです。プロンプトには生成オブジェクトや文法用語が含まれます。プロンプトにも重み(Tokenの重要度)を付けることができますが、CLIコマンドやパラメータは機能しません。",
    "gallery": "ギャラリーは、出力先フォルダから生成物を表示します。設定はファイル内に保存され、コンテキストメニューからアクセスできます。",
    "other": "These options will enable alternative processing modes for Invoke. 'Seamless tiling' will create repeating patterns in the output. 'High resolution' is generation in two steps with img2img: use this setting when you want a larger and more coherent image without artifacts. It will take longer than usual txt2img.",
    "seed": "シード値は、画像が形成される際の初期ノイズに影響します。以前の画像から既に存在するシードを使用することができます。ノイズしきい値は高いCFG値でのアーティファクトを軽減するために使用され、Perlinは生成中にPerlinノイズを追加します(0-10の範囲を試してみてください): どちらも出力にバリエーションを追加するのに役立ちます。",
    "variations": "0.1から1.0の間の値で試し、付与されたシードに対する結果を変えてみてください。面白いバリエーションは0.1〜0.3の間です。",
    "upscale": "生成直後の画像をアップスケールするには、ESRGANを使用します。",
    "faceCorrection": "GFPGANまたはCodeformerによる顔の修復: 画像内の顔を検出し不具合を修正するアルゴリズムです。高い値を設定すると画像がより変化し、より魅力的な顔になります。Codeformerは顔の修復を犠牲にして、元の画像をできる限り保持します。",
    "imageToImage": "Image To Imageは任意の画像を初期値として読み込み、プロンプトとともに新しい画像を生成するために使用されます。値が高いほど結果画像はより変化します。0.0から1.0までの値が可能で、推奨範囲は0.25から0.75です。",
    "boundingBox": "バウンディングボックスは、Text To ImageまたはImage To Imageの幅/高さの設定と同じです。ボックス内の領域のみが処理されます。",
    "seamCorrection": "キャンバス上の生成された画像間に発生する目に見える境界の処理を制御します。",
    "infillAndScaling": "Manage infill methods (used on masked or erased areas of the canvas) and scaling (useful for small bounding box sizes)."
  }
}
60
frontend/public/locales/unifiedcanvas/ja.json
Normal file
@ -0,0 +1,60 @@
{
  "layer": "Layer",
  "base": "Base",
  "mask": "マスク",
  "maskingOptions": "マスクのオプション",
  "enableMask": "マスクを有効化",
  "preserveMaskedArea": "マスク領域の保存",
  "clearMask": "マスクを解除",
  "brush": "ブラシ",
  "eraser": "消しゴム",
  "fillBoundingBox": "バウンディングボックスの塗りつぶし",
  "eraseBoundingBox": "バウンディングボックスの消去",
  "colorPicker": "カラーピッカー",
  "brushOptions": "ブラシオプション",
  "brushSize": "サイズ",
  "move": "Move",
  "resetView": "Reset View",
  "mergeVisible": "Merge Visible",
  "saveToGallery": "ギャラリーに保存",
  "copyToClipboard": "クリップボードにコピー",
  "downloadAsImage": "画像としてダウンロード",
  "undo": "取り消し",
  "redo": "やり直し",
  "clearCanvas": "キャンバスを片付ける",
  "canvasSettings": "キャンバスの設定",
  "showIntermediates": "Show Intermediates",
  "showGrid": "グリッドを表示",
  "snapToGrid": "Snap to Grid",
  "darkenOutsideSelection": "外周を暗くする",
  "autoSaveToGallery": "ギャラリーに自動保存",
  "saveBoxRegionOnly": "ボックス領域のみ保存",
  "limitStrokesToBox": "Limit Strokes to Box",
  "showCanvasDebugInfo": "キャンバスのデバッグ情報を表示",
  "clearCanvasHistory": "キャンバスの履歴を削除",
  "clearHistory": "履歴を削除",
  "clearCanvasHistoryMessage": "履歴を消去すると現在のキャンバスは残りますが、取り消しややり直しの履歴は不可逆的に消去されます。",
  "clearCanvasHistoryConfirm": "履歴を削除しますか?",
  "emptyTempImageFolder": "Empty Temp Image Folder",
  "emptyFolder": "空のフォルダ",
  "emptyTempImagesFolderMessage": "一時フォルダを空にすると、Unified Canvasも完全にリセットされます。これには、すべての取り消し/やり直しの履歴、ステージング領域の画像、およびキャンバスのベースレイヤーが含まれます。",
  "emptyTempImagesFolderConfirm": "一時フォルダを削除しますか?",
  "activeLayer": "Active Layer",
  "canvasScale": "Canvas Scale",
  "boundingBox": "バウンディングボックス",
  "scaledBoundingBox": "Scaled Bounding Box",
  "boundingBoxPosition": "バウンディングボックスの位置",
  "canvasDimensions": "キャンバスの大きさ",
  "canvasPosition": "キャンバスの位置",
  "cursorPosition": "カーソルの位置",
  "previous": "前",
  "next": "次",
  "accept": "同意",
  "showHide": "表示/非表示",
  "discardAll": "すべて破棄",
  "betaClear": "Clear",
  "betaDarkenOutside": "Darken Outside",
  "betaLimitToBox": "Limit To Box",
  "betaPreserveMasked": "Preserve Masked"
}
@ -20,6 +20,7 @@ export default function LanguagePicker() {
     pl: t('common:langPolish'),
     zh_cn: t('common:langSimplifiedChinese'),
     es: t('common:langSpanish'),
+    ja: t('common:langJapanese'),
   };

   const renderLanguagePicker = () => {
@ -10,8 +10,9 @@ echo Do you want to generate images using the
 echo 1. command-line
 echo 2. browser-based UI
 echo 3. run textual inversion training
-echo 4. open the developer console
-echo 5. re-run the configure script to download new models
+echo 4. merge models (diffusers type only)
+echo 5. open the developer console
+echo 6. re-run the configure script to download new models
 set /P restore="Please enter 1, 2, 3, 4 or 5: [5] "
 if not defined restore set restore=2
 IF /I "%restore%" == "1" (
@ -24,6 +25,9 @@ IF /I "%restore%" == "1" (
     echo Starting textual inversion training..
     python .venv\Scripts\textual_inversion_fe.py --web %*
 ) ELSE IF /I "%restore%" == "4" (
+    echo Starting model merging script..
+    python .venv\Scripts\merge_models_fe.py --web %*
+) ELSE IF /I "%restore%" == "5" (
     echo Developer Console
     echo Python command is:
     where python
@ -35,7 +39,7 @@ IF /I "%restore%" == "1" (
     echo *************************
     echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
     call cmd /k
-) ELSE IF /I "%restore%" == "5" (
+) ELSE IF /I "%restore%" == "6" (
     echo Running configure_invokeai.py...
     python .venv\Scripts\configure_invokeai.py --web %*
 ) ELSE (
@ -20,16 +20,18 @@ if [ "$0" != "bash" ]; then
     echo "1. command-line"
     echo "2. browser-based UI"
     echo "3. run textual inversion training"
-    echo "4. open the developer console"
+    echo "4. merge models (diffusers type only)"
     echo "5. re-run the configure script to download new models"
+    echo "6. open the developer console"
     read -p "Please enter 1, 2, 3, 4 or 5: [1] " yn
     choice=${yn:='2'}
     case $choice in
         1 ) printf "\nStarting the InvokeAI command-line..\n"; .venv/bin/python .venv/bin/invoke.py $*;;
         2 ) printf "\nStarting the InvokeAI browser-based UI..\n"; .venv/bin/python .venv/bin/invoke.py --web $*;;
         3 ) printf "\nStarting Textual Inversion:\n"; .venv/bin/python .venv/bin/textual_inversion_fe.py $*;;
-        4 ) printf "\nDeveloper Console:\n"; file_name=$(basename "${BASH_SOURCE[0]}"); bash --init-file "$file_name";;
-        5 ) printf "\nRunning configure_invokeai.py:\n"; .venv/bin/python .venv/bin/configure_invokeai.py $*;;
+        4 ) printf "\nMerging Models:\n"; .venv/bin/python .venv/bin/merge_models_fe.py $*;;
+        5 ) printf "\nDeveloper Console:\n"; file_name=$(basename "${BASH_SOURCE[0]}"); bash --init-file "$file_name";;
+        6 ) printf "\nRunning configure_invokeai.py:\n"; .venv/bin/python .venv/bin/configure_invokeai.py $*;;
         * ) echo "Invalid selection"; exit;;
     esac
 else # in developer console
@ -146,7 +146,7 @@ class Generate:
         gfpgan=None,
         codeformer=None,
         esrgan=None,
-        free_gpu_mem=False,
+        free_gpu_mem: bool=False,
         safety_checker:bool=False,
         max_loaded_models:int=2,
         # these are deprecated; if present they override values in the conf file
@ -445,7 +445,11 @@ class Generate:
             self._set_sampler()

         # apply the concepts library to the prompt
-        prompt = self.huggingface_concepts_library.replace_concepts_with_triggers(prompt, lambda concepts: self.load_huggingface_concepts(concepts))
+        prompt = self.huggingface_concepts_library.replace_concepts_with_triggers(
+            prompt,
+            lambda concepts: self.load_huggingface_concepts(concepts),
+            self.model.textual_inversion_manager.get_all_trigger_strings()
+        )

         # bit of a hack to change the cached sampler's karras threshold to
         # whatever the user asked for
@ -460,10 +464,13 @@ class Generate:
         init_image = None
         mask_image = None

+        try:
             if self.free_gpu_mem and self.model.cond_stage_model.device != self.model.device:
-                self.model.cond_stage_model.device = self.model.device
+                self.model.cond_stage_model.to(self.model.device)
+        except AttributeError:
+            print(">> Warning: '--free_gpu_mem' is not yet supported when generating image using model based on HuggingFace Diffuser.")
+            pass

         try:
             uc, c, extra_conditioning_info = get_uc_and_c_and_ec(
@ -531,6 +538,7 @@ class Generate:
                 inpaint_height = inpaint_height,
                 inpaint_width = inpaint_width,
                 enable_image_debugging = enable_image_debugging,
+                free_gpu_mem=self.free_gpu_mem,
             )

             if init_color:
@ -573,7 +573,7 @@ def import_model(model_path:str, gen, opt, completer):
     if model_path.startswith(('http:','https:','ftp:')):
         model_name = import_ckpt_model(model_path, gen, opt, completer)
-    elif os.path.exists(model_path) and model_path.endswith('.ckpt') and os.path.isfile(model_path):
+    elif os.path.exists(model_path) and model_path.endswith(('.ckpt','.safetensors')) and os.path.isfile(model_path):
         model_name = import_ckpt_model(model_path, gen, opt, completer)
     elif re.match('^[\w.+-]+/[\w.+-]+$',model_path):
         model_name = import_diffuser_model(model_path, gen, opt, completer)
@ -627,9 +627,9 @@ def import_ckpt_model(path_or_url:str, gen, opt, completer)->str:
         model_description=default_description
     )
     config_file = None

+    default = Path(Globals.root,'configs/stable-diffusion/v1-inference.yaml')
     completer.complete_extensions(('.yaml','.yml'))
-    completer.set_line('configs/stable-diffusion/v1-inference.yaml')
+    completer.set_line(str(default))
     done = False
     while not done:
         config_file = input('Configuration file for this model: ').strip()
@ -56,9 +56,11 @@ class CkptGenerator():
                  image_callback=None, step_callback=None, threshold=0.0, perlin=0.0,
                  safety_checker:dict=None,
                  attention_maps_callback = None,
+                 free_gpu_mem: bool=False,
                  **kwargs):
         scope = choose_autocast(self.precision)
         self.safety_checker = safety_checker
+        self.free_gpu_mem = free_gpu_mem
         attention_maps_images = []
         attention_maps_callback = lambda saver: attention_maps_images.append(saver.get_stacked_maps_image())
         make_image = self.get_make_image(
@ -21,7 +21,7 @@ import os
 import re
 import torch
 from pathlib import Path
-from ldm.invoke.globals import Globals
+from ldm.invoke.globals import Globals, global_cache_dir
 from safetensors.torch import load_file

 try:
@ -637,7 +637,7 @@ def convert_ldm_bert_checkpoint(checkpoint, config):


 def convert_ldm_clip_checkpoint(checkpoint):
-    text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
+    text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14",cache_dir=global_cache_dir('hub'))

     keys = list(checkpoint.keys())

@ -677,7 +677,8 @@ textenc_pattern = re.compile("|".join(protected.keys()))


 def convert_paint_by_example_checkpoint(checkpoint):
-    config = CLIPVisionConfig.from_pretrained("openai/clip-vit-large-patch14")
+    cache_dir = global_cache_dir('hub')
+    config = CLIPVisionConfig.from_pretrained("openai/clip-vit-large-patch14",cache_dir=cache_dir)
     model = PaintByExampleImageEncoder(config)

     keys = list(checkpoint.keys())
@ -744,7 +745,8 @@ def convert_paint_by_example_checkpoint(checkpoint):


 def convert_open_clip_checkpoint(checkpoint):
-    text_model = CLIPTextModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="text_encoder")
+    cache_dir=global_cache_dir('hub')
+    text_model = CLIPTextModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="text_encoder", cache_dir=cache_dir)

     keys = list(checkpoint.keys())

@ -795,6 +797,7 @@ def convert_ckpt_to_diffuser(checkpoint_path:str,
                  ):

     checkpoint = load_file(checkpoint_path) if Path(checkpoint_path).suffix == '.safetensors' else torch.load(checkpoint_path)
+    cache_dir = global_cache_dir('hub')

     # Sometimes models don't have the global_step item
     if "global_step" in checkpoint:
@ -904,7 +907,7 @@ def convert_ckpt_to_diffuser(checkpoint_path:str,

     if model_type == "FrozenOpenCLIPEmbedder":
         text_model = convert_open_clip_checkpoint(checkpoint)
-        tokenizer = CLIPTokenizer.from_pretrained("stabilityai/stable-diffusion-2", subfolder="tokenizer")
+        tokenizer = CLIPTokenizer.from_pretrained("stabilityai/stable-diffusion-2", subfolder="tokenizer",cache_dir=global_cache_dir('diffusers'))
         pipe = StableDiffusionPipeline(
             vae=vae,
             text_encoder=text_model,
@ -917,8 +920,8 @@ def convert_ckpt_to_diffuser(checkpoint_path:str,
         )
     elif model_type == "PaintByExample":
         vision_model = convert_paint_by_example_checkpoint(checkpoint)
-        tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
-        feature_extractor = AutoFeatureExtractor.from_pretrained("CompVis/stable-diffusion-safety-checker")
+        tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14",cache_dir=cache_dir)
+        feature_extractor = AutoFeatureExtractor.from_pretrained("CompVis/stable-diffusion-safety-checker",cache_dir=cache_dir)
         pipe = PaintByExamplePipeline(
             vae=vae,
             image_encoder=vision_model,
@ -929,9 +932,9 @@ def convert_ckpt_to_diffuser(checkpoint_path:str,
         )
     elif model_type in ['FrozenCLIPEmbedder','WeightedFrozenCLIPEmbedder']:
         text_model = convert_ldm_clip_checkpoint(checkpoint)
-        tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
-        safety_checker = StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker")
-        feature_extractor = AutoFeatureExtractor.from_pretrained("CompVis/stable-diffusion-safety-checker")
+        tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14",cache_dir=cache_dir)
+        safety_checker = StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker",cache_dir=cache_dir)
+        feature_extractor = AutoFeatureExtractor.from_pretrained("CompVis/stable-diffusion-safety-checker",cache_dir=cache_dir)
         pipe = StableDiffusionPipeline(
             vae=vae,
             text_encoder=text_model,
@ -944,7 +947,7 @@ def convert_ckpt_to_diffuser(checkpoint_path:str,
     else:
         text_config = create_ldm_bert_config(original_config)
         text_model = convert_ldm_bert_checkpoint(checkpoint, text_config)
-        tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
+        tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased",cache_dir=cache_dir)
         pipe = LDMTextToImagePipeline(vqvae=vae, bert=text_model, tokenizer=tokenizer, unet=unet, scheduler=scheduler)

     pipe.save_pretrained(
@ -59,7 +59,7 @@ class HuggingFaceConceptsLibrary(object):
         be downloaded.
         '''
         if not concept_name in self.list_concepts():
-            print(f'This concept is not known to the Hugging Face library. Generation will continue without the concept.')
+            print(f'This concept is not a local embedding trigger, nor is it a HuggingFace concept. Generation will continue without the concept.')
             return None
         return self.get_concept_file(concept_name.lower(),'learned_embeds.bin')

@ -115,13 +115,19 @@ class HuggingFaceConceptsLibrary(object):
             return self.trigger_to_concept(match.group(1)) or f'<{match.group(1)}>'
         return self.match_trigger.sub(do_replace, prompt)

-    def replace_concepts_with_triggers(self, prompt:str, load_concepts_callback: Callable[[list], any])->str:
+    def replace_concepts_with_triggers(self,
+                                       prompt:str,
+                                       load_concepts_callback: Callable[[list], any],
+                                       excluded_tokens:list[str])->str:
         '''
         Given a prompt string that contains `<concept_name>` tags, replace
         these tags with the appropriate trigger.

         If any `<concept_name>` tags are found, `load_concepts_callback()` is called with a list
         of `concepts_name` strings.

+        `excluded_tokens` are any tokens that should not be replaced, typically because they
+        are trigger tokens from a locally-loaded embedding.
         '''
         concepts = self.match_concept.findall(prompt)
         if not concepts:
@ -129,6 +135,8 @@ class HuggingFaceConceptsLibrary(object):
         load_concepts_callback(concepts)

         def do_replace(match)->str:
+            if excluded_tokens and f'<{match.group(1)}>' in excluded_tokens:
+                return f'<{match.group(1)}>'
             return self.concept_to_trigger(match.group(1)) or f'<{match.group(1)}>'
         return self.match_concept.sub(do_replace, prompt)
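The exclusion logic in `replace_concepts_with_triggers` above can be illustrated in isolation: `<concept_name>` tags are swapped for their trigger strings unless they appear in `excluded_tokens` (i.e. they already belong to a locally loaded embedding). A minimal standalone sketch, not the InvokeAI implementation; the trigger table and regex are illustrative stand-ins:

```python
import re

# Hypothetical concept -> trigger table; in InvokeAI the triggers come from
# the Hugging Face concepts library.
TRIGGERS = {"my-style": "*my-style-token*"}

def replace_concepts(prompt, excluded_tokens):
    def do_replace(match):
        tag = f'<{match.group(1)}>'
        if excluded_tokens and tag in excluded_tokens:
            return tag  # locally-loaded embedding trigger: leave untouched
        # fall back to the literal tag when no trigger is known
        return TRIGGERS.get(match.group(1), tag)
    return re.sub(r'<([\w-]+)>', do_replace, prompt)

print(replace_concepts("a photo in <my-style>", []))              # a photo in *my-style-token*
print(replace_concepts("a photo in <my-style>", ["<my-style>"]))  # a photo in <my-style>
```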
@ -62,9 +62,11 @@ class Generator:
     def generate(self,prompt,init_image,width,height,sampler, iterations=1,seed=None,
                  image_callback=None, step_callback=None, threshold=0.0, perlin=0.0,
                  safety_checker:dict=None,
+                 free_gpu_mem: bool=False,
                  **kwargs):
         scope = nullcontext
         self.safety_checker = safety_checker
+        self.free_gpu_mem = free_gpu_mem
         attention_maps_images = []
         attention_maps_callback = lambda saver: attention_maps_images.append(saver.get_stacked_maps_image())
         make_image = self.get_make_image(
@ -29,6 +29,7 @@ else:
 # Where to look for the initialization file
 Globals.initfile = 'invokeai.init'
+Globals.models_file = 'models.yaml'
 Globals.models_dir = 'models'
 Globals.config_dir = 'configs'
 Globals.autoscan_dir = 'weights'
@ -49,6 +50,9 @@ Globals.disable_xformers = False
 # whether we are forcing full precision
 Globals.full_precision = False

+def global_config_file()->Path:
+    return Path(Globals.root, Globals.config_dir, Globals.models_file)
+
 def global_config_dir()->Path:
     return Path(Globals.root, Globals.config_dir)
62
ldm/invoke/merge_diffusers.py
Normal file
@ -0,0 +1,62 @@
'''
ldm.invoke.merge_diffusers exports a single function call merge_diffusion_models()
used to merge 2-3 models together and create a new InvokeAI-registered diffusion model.
'''
import os
from typing import List

from diffusers import DiffusionPipeline
from ldm.invoke.globals import global_config_file, global_models_dir, global_cache_dir
from ldm.invoke.model_manager import ModelManager
from omegaconf import OmegaConf

def merge_diffusion_models(models:List['str'],
                           merged_model_name:str,
                           alpha:float=0.5,
                           interp:str=None,
                           force:bool=False,
                           **kwargs):
    '''
    models - up to three models, designated by their InvokeAI models.yaml model name
    merged_model_name - name for the new model
    alpha - The interpolation parameter. Ranges from 0 to 1. It affects the ratio in which the checkpoints are merged. A 0.8 alpha
            would mean that the first model checkpoints would affect the final result far less than an alpha of 0.2
    interp - The interpolation method to use for the merging. Supports "sigmoid", "inv_sigmoid", "add_difference" and None.
             Passing None uses the default interpolation which is weighted sum interpolation. For merging three checkpoints, only "add_difference" is supported.
    force - Whether to ignore mismatch in model_config.json for the current models. Defaults to False.

    **kwargs - the default DiffusionPipeline.get_config_dict kwargs:
         cache_dir, resume_download, force_download, proxies, local_files_only, use_auth_token, revision, torch_dtype, device_map
    '''
    config_file = global_config_file()
    model_manager = ModelManager(OmegaConf.load(config_file))
    for mod in models:
        assert (mod in model_manager.model_names()), f'** Unknown model "{mod}"'
        assert (model_manager.model_info(mod).get('format',None) == 'diffusers'), f'** {mod} is not a diffusers model. It must be optimized before merging.'
    model_ids_or_paths = [model_manager.model_name_or_path(x) for x in models]

    pipe = DiffusionPipeline.from_pretrained(model_ids_or_paths[0],
                                             cache_dir=kwargs.get('cache_dir',global_cache_dir()),
                                             custom_pipeline='checkpoint_merger')
    merged_pipe = pipe.merge(pretrained_model_name_or_path_list=model_ids_or_paths,
                             alpha=alpha,
                             interp=interp,
                             force=force,
                             **kwargs)
    dump_path = global_models_dir() / 'merged_diffusers'
    os.makedirs(dump_path, exist_ok=True)
    dump_path = dump_path / merged_model_name
    merged_pipe.save_pretrained(
        dump_path,
        safe_serialization=1
    )
    model_manager.import_diffuser_model(
        dump_path,
        model_name = merged_model_name,
        description = f'Merge of models {", ".join(models)}'
    )
    print('REMINDER: When PR 2369 is merged, replace merge_diffusers.py line 56 with vae= argument to import_model()')
    if vae := model_manager.config[models[0]].get('vae',None):
        print(f'>> Using configured VAE assigned to {models[0]}')
        model_manager.config[merged_model_name]['vae'] = vae

    model_manager.commit(config_file)
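The `alpha` parameter documented above controls a weighted-sum interpolation of the checkpoints: per the docstring, a higher alpha gives the first model *less* influence. A minimal standalone sketch of that arithmetic (not InvokeAI or diffusers code; parameters are represented as plain lists for illustration):

```python
# Weighted-sum merge of two flat parameter vectors:
#   merged = (1 - alpha) * theta0 + alpha * theta1
# so alpha=0.8 leaves the first model contributing only 20%.
def weighted_sum(theta0, theta1, alpha):
    return [(1 - alpha) * a + alpha * b for a, b in zip(theta0, theta1)]

print(weighted_sum([1.0, 0.0], [0.0, 1.0], 0.8))  # first model weighted ~0.2, second ~0.8
```

The "sigmoid" and "inv_sigmoid" options reshape how alpha is applied, and "add_difference" (the only three-model mode) adds the delta between the second and third models to the first.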
@ -37,7 +37,11 @@ from ldm.util import instantiate_from_config, ask_user
 DEFAULT_MAX_MODELS=2

 class ModelManager(object):
-    def __init__(self, config:OmegaConf, device_type:str, precision:str, max_loaded_models=DEFAULT_MAX_MODELS):
+    def __init__(self,
+                 config:OmegaConf,
+                 device_type:str='cpu',
+                 precision:str='float16',
+                 max_loaded_models=DEFAULT_MAX_MODELS):
         '''
         Initialize with the path to the models.yaml config file,
         the torch device type, and precision. The optional
@ -143,7 +147,7 @@ class ModelManager(object):
         Return true if this is a legacy (.ckpt) model
         '''
         info = self.model_info(model_name)
-        if 'weights' in info and info['weights'].endswith('.ckpt'):
+        if 'weights' in info and info['weights'].endswith(('.ckpt','.safetensors')):
             return True
         return False

@ -362,8 +366,14 @@ class ModelManager(object):
             vae = os.path.normpath(os.path.join(Globals.root,vae))
             if os.path.exists(vae):
                 print(f' | Loading VAE weights from: {vae}')
+                vae_ckpt = None
+                vae_dict = None
+                if vae.endswith('.safetensors'):
+                    vae_ckpt = safetensors.torch.load_file(vae)
+                    vae_dict = {k: v for k, v in vae_ckpt.items() if k[0:4] != "loss"}
+                else:
                     vae_ckpt = torch.load(vae, map_location="cpu")
-                    vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items() if k[0:4] != "loss"}
+                    vae_dict = {k: v for k, v in vae_ckpt['state_dict'].items() if k[0:4] != "loss"}
                 model.first_stage_model.load_state_dict(vae_dict, strict=False)
             else:
                 print(f' | VAE file {vae} not found. Skipping.')
@ -536,7 +546,7 @@ class ModelManager(object):
             format='diffusers',
         )
         if isinstance(repo_or_path,Path) and repo_or_path.exists():
-            new_config.update(path=repo_or_path)
+            new_config.update(path=str(repo_or_path))
         else:
             new_config.update(repo_id=repo_or_path)
@ -1,18 +1,16 @@
 import math
 import os.path
+from functools import partial
+from typing import Optional
+
+import clip
+import kornia
 import torch
 import torch.nn as nn
-from functools import partial
-import clip
-from einops import rearrange, repeat
+from einops import repeat
 from transformers import CLIPTokenizer, CLIPTextModel
-import kornia
+from ldm.invoke.devices import choose_torch_device
+from ldm.invoke.globals import Globals, global_cache_dir
+#from ldm.modules.textual_inversion_manager import TextualInversionManager

-from ldm.invoke.devices import choose_torch_device
-from ldm.invoke.globals import global_cache_dir
 from ldm.modules.x_transformer import (
     Encoder,
     TransformerWrapper,
@ -654,21 +652,22 @@ class WeightedFrozenCLIPEmbedder(FrozenCLIPEmbedder):
             per_token_weights += [weight] * len(this_fragment_token_ids)

         # leave room for bos/eos
-        if len(all_token_ids) > self.max_length - 2:
-            excess_token_count = len(all_token_ids) - self.max_length - 2
+        max_token_count_without_bos_eos_markers = self.max_length - 2
+        if len(all_token_ids) > max_token_count_without_bos_eos_markers:
+            excess_token_count = len(all_token_ids) - max_token_count_without_bos_eos_markers
             # TODO build nice description string of how the truncation was applied
             # this should be done by calling self.tokenizer.convert_ids_to_tokens() then passing the result to
             # self.tokenizer.convert_tokens_to_string() for the token_ids on each side of the truncation limit.
             print(f">> Prompt is {excess_token_count} token(s) too long and has been truncated")
-            all_token_ids = all_token_ids[0:self.max_length]
-            per_token_weights = per_token_weights[0:self.max_length]
+            all_token_ids = all_token_ids[0:max_token_count_without_bos_eos_markers]
+            per_token_weights = per_token_weights[0:max_token_count_without_bos_eos_markers]

-        # pad out to a 77-entry array: [eos_token, <prompt tokens>, eos_token, ..., eos_token]
+        # pad out to a 77-entry array: [bos_token, <prompt tokens>, eos_token, pad_token…]
         # (77 = self.max_length)
         all_token_ids = [self.tokenizer.bos_token_id] + all_token_ids + [self.tokenizer.eos_token_id]
         per_token_weights = [1.0] + per_token_weights + [1.0]
         pad_length = self.max_length - len(all_token_ids)
-        all_token_ids += [self.tokenizer.eos_token_id] * pad_length
+        all_token_ids += [self.tokenizer.pad_token_id] * pad_length
         per_token_weights += [1.0] * pad_length

         all_token_ids_tensor = torch.tensor(all_token_ids, dtype=torch.long).to(self.device)
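The two changes above fix an off-by-one in the truncation bound and switch the tail padding from `eos_token` to `pad_token`: truncate the prompt to `max_length - 2`, wrap it in bos/eos, then pad to `max_length`. A standalone sketch of that truncate-wrap-pad sequence (not InvokeAI code; the token ids and the length of 8 are illustrative stand-ins for the real tokenizer ids and CLIP's 77):

```python
# Illustrative stand-ins for tokenizer.bos_token_id / eos_token_id / pad_token_id.
BOS, EOS, PAD = 101, 102, 0
MAX_LENGTH = 8  # stands in for CLIP's 77

def wrap_and_pad(token_ids):
    keep = MAX_LENGTH - 2          # leave room for the bos/eos markers
    token_ids = token_ids[:keep]   # truncate overlong prompts
    # layout: [bos_token, <prompt tokens>, eos_token, pad_token, ...]
    padded = [BOS] + token_ids + [EOS]
    padded += [PAD] * (MAX_LENGTH - len(padded))
    return padded

print(wrap_and_pad([5, 6, 7]))  # [101, 5, 6, 7, 102, 0, 0, 0]
```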
@ -3,8 +3,9 @@ import math
 import torch
 from transformers import CLIPTokenizer, CLIPTextModel

-from ldm.modules.textual_inversion_manager import TextualInversionManager
+from ldm.invoke.devices import torch_dtype
+from ldm.modules.textual_inversion_manager import TextualInversionManager


 class WeightedPromptFragmentsToEmbeddingsConverter():

@ -22,8 +23,8 @@ class WeightedPromptFragmentsToEmbeddingsConverter():
         return self.tokenizer.model_max_length

     def get_embeddings_for_weighted_prompt_fragments(self,
-                 text: list[str],
-                 fragment_weights: list[float],
+                 text: list[list[str]],
+                 fragment_weights: list[list[float]],
                  should_return_tokens: bool = False,
                  device='cpu'
                 ) -> torch.Tensor:
@ -198,12 +199,12 @@ class WeightedPromptFragmentsToEmbeddingsConverter():
             all_token_ids = all_token_ids[0:max_token_count_without_bos_eos_markers]
             per_token_weights = per_token_weights[0:max_token_count_without_bos_eos_markers]

-        # pad out to a self.max_length-entry array: [eos_token, <prompt tokens>, eos_token, ..., eos_token]
+        # pad out to a self.max_length-entry array: [bos_token, <prompt tokens>, eos_token, pad_token…]
         # (typically self.max_length == 77)
         all_token_ids = [self.tokenizer.bos_token_id] + all_token_ids + [self.tokenizer.eos_token_id]
         per_token_weights = [1.0] + per_token_weights + [1.0]
         pad_length = self.max_length - len(all_token_ids)
-        all_token_ids += [self.tokenizer.eos_token_id] * pad_length
+        all_token_ids += [self.tokenizer.pad_token_id] * pad_length
         per_token_weights += [1.0] * pad_length

         all_token_ids_tensor = torch.tensor(all_token_ids, dtype=torch.long, device=device)
@@ -38,11 +38,15 @@ class TextualInversionManager():
             if concept_name in self.hf_concepts_library.concepts_loaded:
                 continue
             trigger = self.hf_concepts_library.concept_to_trigger(concept_name)
-            if self.has_textual_inversion_for_trigger_string(trigger):
+            if self.has_textual_inversion_for_trigger_string(trigger) \
+               or self.has_textual_inversion_for_trigger_string(concept_name) \
+               or self.has_textual_inversion_for_trigger_string(f'<{concept_name}>'): # in case a token with literal angle brackets encountered
+                print(f'>> Loaded local embedding for trigger {concept_name}')
                 continue
             bin_file = self.hf_concepts_library.get_concept_model_path(concept_name)
             if not bin_file:
                 continue
+            print(f'>> Loaded remote embedding for trigger {concept_name}')
             self.load_textual_inversion(bin_file)
             self.hf_concepts_library.concepts_loaded[concept_name]=True
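The broadened check in this hunk looks for a local embedding under three spellings of a concept before falling back to a remote download. A standalone sketch, with a hypothetical `loaded_triggers` set standing in for `has_textual_inversion_for_trigger_string`:

```python
def has_local_embedding(loaded_triggers: set, trigger: str, concept_name: str) -> bool:
    # Accept the trigger string, the bare concept name, or the concept name
    # wrapped in literal angle brackets, mirroring the three-way check above.
    return any(
        t in loaded_triggers
        for t in (trigger, concept_name, f'<{concept_name}>')
    )

loaded = {'<sd-concept>', 'my-style'}
print(has_local_embedding(loaded, trigger='<sd-concept>', concept_name='sd-concept'))  # True
```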
0
scripts/load_models.py
Normal file → Executable file
92
scripts/merge_models.py
Executable file
@@ -0,0 +1,92 @@
+#!/usr/bin/env python
+
+import argparse
+import os
+import sys
+import traceback
+from pathlib import Path
+
+from omegaconf import OmegaConf
+
+from ldm.invoke.globals import (Globals, global_cache_dir, global_config_file,
+                                global_set_root)
+from ldm.invoke.model_manager import ModelManager
+
+parser = argparse.ArgumentParser(description="InvokeAI textual inversion training")
+parser.add_argument(
+    "--root_dir",
+    "--root-dir",
+    type=Path,
+    default=Globals.root,
+    help="Path to the invokeai runtime directory",
+)
+parser.add_argument(
+    "--models",
+    required=True,
+    type=str,
+    nargs="+",
+    help="Two to three model names to be merged",
+)
+parser.add_argument(
+    "--merged_model_name",
+    "--destination",
+    dest="merged_model_name",
+    type=str,
+    help="Name of the output model. If not specified, will be the concatenation of the input model names.",
+)
+parser.add_argument(
+    "--alpha",
+    type=float,
+    default=0.5,
+    help="The interpolation parameter, ranging from 0 to 1. It affects the ratio in which the checkpoints are merged. Higher values give more weight to the 2d and 3d models",
+)
+parser.add_argument(
+    "--interpolation",
+    dest="interp",
+    type=str,
+    choices=["weighted_sum", "sigmoid", "inv_sigmoid", "add_difference"],
+    default="weighted_sum",
+    help='Interpolation method to use. If three models are present, only "add_difference" will work.',
+)
+parser.add_argument(
+    "--force",
+    action="store_true",
+    help="Try to merge models even if they are incompatible with each other",
+)
+parser.add_argument(
+    "--clobber",
+    "--overwrite",
+    dest='clobber',
+    action="store_true",
+    help="Overwrite the merged model if --merged_model_name already exists",
+)
+
+args = parser.parse_args()
+global_set_root(args.root_dir)
+
+assert args.alpha >= 0 and args.alpha <= 1.0, "alpha must be between 0 and 1"
+assert len(args.models) >= 1 and len(args.models) <= 3, "provide 2 or 3 models to merge"
+
+if not args.merged_model_name:
+    args.merged_model_name = "+".join(args.models)
+    print(
+        f'>> No --merged_model_name provided. Defaulting to "{args.merged_model_name}"'
+    )
+
+model_manager = ModelManager(OmegaConf.load(global_config_file()))
+assert (args.clobber or args.merged_model_name not in model_manager.model_names()), f'A model named "{args.merged_model_name}" already exists. Use --clobber to overwrite.'
+
+# It seems that the merge pipeline is not honoring cache_dir, so we set the
+# HF_HOME environment variable here *before* we load diffusers.
+cache_dir = str(global_cache_dir("diffusers"))
+os.environ["HF_HOME"] = cache_dir
+from ldm.invoke.merge_diffusers import merge_diffusion_models
+
+try:
+    merge_diffusion_models(**vars(args))
+    print(f'>> Models merged into new model: "{args.merged_model_name}".')
+except Exception as e:
+    print(f"** An error occurred while merging the pipelines: {str(e)}")
+    print("** DETAILS:")
+    print(traceback.format_exc())
+    sys.exit(-1)
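The `--alpha` parameter of the script above controls a linear interpolation between model weights. As a rough sketch of what `weighted_sum` interpolation means (this is not the diffusers implementation itself, which operates on full checkpoint state dicts), each parameter is blended as `(1 - alpha) * a + alpha * b`, so higher alpha values give more weight to the second model:

```python
def weighted_sum(theta_a, theta_b, alpha=0.5):
    """Blend two flattened parameter lists; alpha=0 returns model A, alpha=1 model B."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(theta_a, theta_b)]

# Hypothetical flattened weights from two models:
merged = weighted_sum([0.0, 2.0, 4.0], [1.0, 0.0, 4.0], alpha=0.25)
print(merged)  # → [0.25, 1.5, 4.0]
```

The other interpolation choices (`sigmoid`, `inv_sigmoid`, `add_difference`) reshape or replace this blend; `add_difference` is the only one documented above as working with three models.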
87
scripts/merge_fe.py → scripts/merge_models_fe.py
Normal file → Executable file
@@ -3,11 +3,10 @@
 import npyscreen
 import os
 import sys
-import re
-import shutil
 import traceback
 import argparse
-from ldm.invoke.globals import Globals, global_set_root
+from ldm.invoke.globals import Globals, global_set_root, global_cache_dir, global_config_file
+from ldm.invoke.model_manager import ModelManager
 from omegaconf import OmegaConf
 from pathlib import Path
 from typing import List
@@ -30,6 +29,14 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):
                       'inv_sigmoid',
                       'add_difference']

+    def __init__(self, parentApp, name):
+        self.parentApp = parentApp
+        super().__init__(parentApp, name)
+
+    @property
+    def model_manager(self):
+        return self.parentApp.model_manager
+
     def afterEditing(self):
         self.parentApp.setNextForm(None)
@@ -83,6 +90,11 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):
             lowest=0,
             value=0.5,
         )
+        self.force = self.add_widget_intelligent(
+            npyscreen.Checkbox,
+            name='Force merge of incompatible models',
+            value=False,
+        )
         self.merged_model_name = self.add_widget_intelligent(
             npyscreen.TitleText,
             name='Name for merged model',
@@ -105,20 +117,51 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):
             self.merge_method.value=0

     def on_ok(self):
-        if self.validate_field_values():
+        if self.validate_field_values() and self.check_for_overwrite():
             self.parentApp.setNextForm(None)
             self.editing = False
+            self.parentApp.merge_arguments = self.marshall_arguments()
+            npyscreen.notify('Starting the merge...')
+            import ldm.invoke.merge_diffusers # this keeps the message up while diffusers loads
         else:
             self.editing = True

-    def ok_cancel(self):
+    def on_cancel(self):
         sys.exit(0)

+    def marshall_arguments(self)->dict:
+        model_names = self.model_names
+        models = [
+            model_names[self.model1.value[0]],
+            model_names[self.model2.value[0]],
+        ]
+        if self.model3.value[0] > 0:
+            models.append(model_names[self.model3.value[0]-1])
+
+        args = dict(
+            models=models,
+            alpha = self.alpha.value,
+            interp = self.interpolations[self.merge_method.value[0]],
+            force = self.force.value,
+            merged_model_name = self.merged_model_name.value,
+        )
+        return args
+
+    def check_for_overwrite(self)->bool:
+        model_out = self.merged_model_name.value
+        if model_out not in self.model_names:
+            return True
+        else:
+            return npyscreen.notify_yes_no(f'The chosen merged model destination, {model_out}, is already in use. Overwrite?')
+
     def validate_field_values(self)->bool:
         bad_fields = []
-        selected_models = set((self.model1.value[0],self.model2.value[0],self.model3.value[0]))
-        if len(selected_models) < 3:
-            bad_fields.append('Please select two or three DIFFERENT models to compare')
+        model_names = self.model_names
+        selected_models = set((model_names[self.model1.value[0]],model_names[self.model2.value[0]]))
+        if self.model3.value[0] > 0:
+            selected_models.add(model_names[self.model3.value[0]-1])
+        if len(selected_models) < 2:
+            bad_fields.append(f'Please select two or three DIFFERENT models to compare. You selected {selected_models}')
         if len(bad_fields) > 0:
             message = 'The following problems were detected and must be corrected:'
             for problem in bad_fields:
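In `marshall_arguments` and `validate_field_values` above, the third-model widget reserves index 0 for "no third model", so a real third selection is offset by one. A sketch of that selection and dedup-validation logic (the function name and assertion are mine, not from the script):

```python
def selected_model_names(model_names, idx1, idx2, idx3):
    # model1/model2 index directly into model_names; model3 uses 0 for
    # "no third model" and offsets real selections by one.
    models = [model_names[idx1], model_names[idx2]]
    if idx3 > 0:
        models.append(model_names[idx3 - 1])
    # a valid merge needs two or three *different* models
    assert len(set(models)) == len(models) >= 2, "select two or three DIFFERENT models"
    return models

names = ['modelA', 'modelB', 'modelC']
print(selected_model_names(names, 0, 1, 3))  # → ['modelA', 'modelB', 'modelC']
```

The commit's validation fix matters here: the old code compared raw widget indices, which could reject a valid pair or accept duplicates; the new code compares the resolved model names.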
@@ -129,13 +172,15 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):
         return True

     def get_model_names(self)->List[str]:
-        conf = OmegaConf.load(os.path.join(Globals.root,'configs/models.yaml'))
-        model_names = [name for name in conf.keys() if conf[name].get('format',None)=='diffusers']
+        model_names = [name for name in self.model_manager.model_names() if self.model_manager.model_info(name).get('format') == 'diffusers']
+        print(model_names)
         return sorted(model_names)

-class MyApplication(npyscreen.NPSAppManaged):
+class Mergeapp(npyscreen.NPSAppManaged):
+    def __init__(self):
+        super().__init__()
+        conf = OmegaConf.load(global_config_file())
+        self.model_manager = ModelManager(conf,'cpu','float16') # precision doesn't really matter here
+
     def onStart(self):
         npyscreen.setTheme(npyscreen.Themes.DefaultTheme)
@@ -152,5 +197,21 @@ if __name__ == '__main__':
     args = parser.parse_args()
     global_set_root(args.root_dir)

-    myapplication = MyApplication()
-    myapplication.run()
+    cache_dir = str(global_cache_dir('diffusers')) # because not clear the merge pipeline is honoring cache_dir
+    os.environ['HF_HOME'] = cache_dir
+
+    mergeapp = Mergeapp()
+    mergeapp.run()
+
+    args = mergeapp.merge_arguments
+    args.update(cache_dir = cache_dir)
+    from ldm.invoke.merge_diffusers import merge_diffusion_models
+
+    try:
+        merge_diffusion_models(**args)
+        print(f'>> Models merged into new model: "{args["merged_model_name"]}".')
+    except Exception as e:
+        print(f'** An error occurred while merging the pipelines: {str(e)}')
+        print('** DETAILS:')
+        print(traceback.format_exc())
+        sys.exit(-1)
0
scripts/merge_embeddings.py → scripts/orig_scripts/merge_embeddings.py
Normal file → Executable file
5
setup.py
@@ -92,8 +92,9 @@ setup(
         'Topic :: Scientific/Engineering :: Image Processing',
     ],
     scripts = ['scripts/invoke.py','scripts/configure_invokeai.py', 'scripts/sd-metadata.py',
-               'scripts/preload_models.py', 'scripts/images2prompt.py','scripts/merge_embeddings.py',
-               'scripts/textual_inversion_fe.py','scripts/textual_inversion.py'
+               'scripts/preload_models.py', 'scripts/images2prompt.py',
+               'scripts/textual_inversion_fe.py','scripts/textual_inversion.py',
+               'scripts/merge_models_fe.py', 'scripts/merge_models.py',
     ],
     data_files=FRONTEND_FILES,
 )