Merge branch 'main' into main

commit eec5c3bbb1
Author: Lincoln Stein
Date: 2023-02-20 07:38:08 -05:00 (committed by GitHub)
74 changed files with 198038 additions and 3210 deletions

View File

@ -10,7 +10,7 @@
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
@ -28,6 +28,8 @@
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/
</div>
@ -257,6 +259,8 @@ cleanup, testing, or code reviews, is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
If you'd like to help with localization, please register on [Weblate][translation status link]. If you want to add a new language, please let us know which language and we will add it to the Weblate project.
If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, is in progress. You can **make your pull request against the "main" branch**.

View File

@ -214,6 +214,8 @@ Here are the invoke> commands that apply to txt2img:
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series of riffs on a starting image. See [Variations](./VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
| `--h_symmetry_time_pct <float>` | | `None` | Create symmetry along the X axis once the given fraction of the generation process is complete. (Must be between 0.0 and 1.0; set to a very small value such as 0.0001 to apply symmetry just after the first step.) |
| `--v_symmetry_time_pct <float>` | | `None` | Create symmetry along the Y axis once the given fraction of the generation process is complete. (Must be between 0.0 and 1.0; set to a very small value such as 0.0001 to apply symmetry just after the first step.) |
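For example, a session that mirrors the image horizontally one third of the way through sampling might look like this (the prompt and option values here are illustrative, not prescriptive):

```bash
invoke> stained glass window, art nouveau style -s50 -W512 -H512 --h_symmetry_time_pct 0.33
```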
!!! note

View File

@ -40,7 +40,7 @@ for adj in adjectives:
print(f'a {adj} day -A{samp} -C{cg}')
```
It's output looks like this (abbreviated):
Its output looks like this (abbreviated):
```bash
a sunny day -Aklms -C7.5

View File

@ -250,6 +250,24 @@ invokeai-ti \
--only_save_embeds
```
## Using Embeddings
After training completes, the resultant embeddings will be saved into your `$INVOKEAI_ROOT/embeddings/<trigger word>/learned_embeds.bin`.
These will be automatically loaded when you start InvokeAI.
Add the trigger word, surrounded by angle brackets, to use that embedding. For example, if your trigger word was `terence`, use `<terence>` in prompts. This is the same syntax used by the HuggingFace concepts library.
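For instance, a CLI prompt using the hypothetical `terence` embedding above might read:

```bash
invoke> a watercolor portrait of <terence> hiking in the mountains -s50 -C7.5
```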
**Note:** `.pt` embeddings do not require the angle brackets.
## Troubleshooting
### `Cannot load embedding for <trigger>. It was trained on a model with token dimension 1024, but the current model has token dimension 768`
Messages like this indicate you trained the embedding on a different base model than the currently selected one.
For example, in the error above, the training was done on SD2.1 (768x768) but it was used on SD1.5 (512x512).
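If you are unsure which base model an embedding was trained on, one way to check (a sketch that assumes PyTorch is installed and that the path matches your install) is to print the shapes of the stored tensors:

```bash
# Token dimension 768 indicates an SD-1.x base model; 1024 indicates SD-2.x.
python -c "import torch; e = torch.load('embeddings/terence/learned_embeds.bin', map_location='cpu'); print({k: tuple(v.shape) for k, v in e.items()})"
```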
## Reading
For more information on textual inversion, please see the following

View File

@ -25,4 +25,13 @@ dist-ssr
*.sw?
# build stats
stats.html
stats.html
# Yarn - https://yarnpkg.com/getting-started/qa#which-files-should-be-gitignored
.pnp.*
.yarn/*
!.yarn/patches
!.yarn/plugins
!.yarn/releases
!.yarn/sdks
!.yarn/versions

View File

@ -1,4 +1,4 @@
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"
cd invokeai/frontend/ && npx run lint
cd invokeai/frontend/ && npm run lint-staged

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,5 @@
# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
# yarn lockfile v1
yarn-path ".yarn/releases/yarn-1.22.19.cjs"

View File

@ -0,0 +1 @@
yarnPath: .yarn/releases/yarn-1.22.19.cjs

View File

@ -7,7 +7,7 @@ The UI is in `invokeai/frontend`.
Install [node](https://nodejs.org/en/download/) (includes npm) and
[yarn](https://yarnpkg.com/getting-started/install).
From `invokeai/frontend/` run `yarn install` to get everything set up.
From `invokeai/frontend/` run `yarn install --immutable` to get everything set up.
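For a fresh checkout, setup amounts to something like the following (run from the repository root):

```bash
cd invokeai/frontend/
yarn install --immutable
```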
## Dev

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -5,7 +5,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="./assets/favicon-0d253ced.ico" />
<script type="module" crossorigin src="./assets/index-1e76002e.js"></script>
<script type="module" crossorigin src="./assets/index-53ecf883.js"></script>
<link rel="stylesheet" href="./assets/index-14cb2922.css">
</head>

View File

@ -66,7 +66,7 @@
"hotkeys": {
"keyboardShortcuts": "مفاتيح الأزرار المختصرة",
"appHotkeys": "مفاتيح التطبيق",
"GeneralHotkeys": "مفاتيح عامة",
"generalHotkeys": "مفاتيح عامة",
"galleryHotkeys": "مفاتيح المعرض",
"unifiedCanvasHotkeys": "مفاتيح اللوحةالموحدة ",
"invoke": {
@ -380,7 +380,6 @@
"img2imgStrength": "قوة صورة إلى صورة",
"toggleLoopback": "تبديل الإعادة",
"invoke": "إطلاق",
"cancel": "إلغاء",
"promptPlaceholder": "اكتب المحث هنا. [العلامات السلبية], (زيادة الوزن) ++, (نقص الوزن)--, التبديل و الخلط متاحة (انظر الوثائق)",
"sendTo": "أرسل إلى",
"sendToImg2Img": "أرسل إلى صورة إلى صورة",
@ -452,10 +451,10 @@
"seed": "يؤثر قيمة البذور على الضوضاء الأولي الذي يتم تكوين الصورة منه. يمكنك استخدام البذور الخاصة بالصور السابقة. 'عتبة الضوضاء' يتم استخدامها لتخفيف العناصر الخللية في قيم CFG العالية (جرب مدى 0-10), و Perlin لإضافة ضوضاء Perlin أثناء الإنتاج: كلا منهما يعملان على إضافة التنوع إلى النتائج الخاصة بك.",
"variations": "جرب التغيير مع قيمة بين 0.1 و 1.0 لتغيير النتائج لبذور معينة. التغييرات المثيرة للاهتمام للبذور تكون بين 0.1 و 0.3.",
"upscale": "استخدم إي إس آر جان لتكبير الصورة على الفور بعد الإنتاج.",
"face Correction": "تصحيح الوجه باستخدام جي إف بي جان أو كود فورمر: يكتشف الخوارزمية الوجوه في الصورة وتصحح أي عيوب. قيمة عالية ستغير الصورة أكثر، مما يؤدي إلى وجوه أكثر جمالا. كود فورمر بدقة أعلى يحتفظ بالصورة الأصلية على حساب تصحيح وجه أكثر قوة.",
"faceCorrection": "تصحيح الوجه باستخدام جي إف بي جان أو كود فورمر: يكتشف الخوارزمية الوجوه في الصورة وتصحح أي عيوب. قيمة عالية ستغير الصورة أكثر، مما يؤدي إلى وجوه أكثر جمالا. كود فورمر بدقة أعلى يحتفظ بالصورة الأصلية على حساب تصحيح وجه أكثر قوة.",
"imageToImage": "تحميل صورة إلى صورة أي صورة كأولية، والتي يتم استخدامها لإنشاء صورة جديدة مع التشعيب. كلما كانت القيمة أعلى، كلما تغيرت نتيجة الصورة. من الممكن أن تكون القيم بين 0.0 و 1.0، وتوصي النطاق الموصى به هو .25-.75",
"boundingBox": "مربع الحدود هو نفس الإعدادات العرض والارتفاع لنص إلى صورة أو صورة إلى صورة. فقط المنطقة في المربع سيتم معالجتها.",
"seam Correction": "يتحكم بالتعامل مع الخطوط المرئية التي تحدث بين الصور المولدة في سطح اللوحة.",
"seamCorrection": "يتحكم بالتعامل مع الخطوط المرئية التي تحدث بين الصور المولدة في سطح اللوحة.",
"infillAndScaling": "إدارة أساليب التعبئة (المستخدمة على المناطق المخفية أو الممحوة في سطح اللوحة) والزيادة في الحجم (مفيدة لحجوزات الإطارات الصغيرة)."
}
},

View File

@ -357,7 +357,6 @@
"img2imgStrength": "Bild-zu-Bild-Stärke",
"toggleLoopback": "Toggle Loopback",
"invoke": "Invoke",
"cancel": "Abbrechen",
"promptPlaceholder": "Prompt hier eingeben. [negative Token], (mehr Gewicht)++, (geringeres Gewicht)--, Tausch und Überblendung sind verfügbar (siehe Dokumente)",
"sendTo": "Senden an",
"sendToImg2Img": "Senden an Bild zu Bild",

View File

@ -390,7 +390,10 @@
"modelMergeHeaderHelp1": "You can merge upto three different models to create a blend that suits your needs.",
"modelMergeHeaderHelp2": "Only Diffusers are available for merging. If you want to merge a checkpoint model, please convert it to Diffusers first.",
"modelMergeAlphaHelp": "Alpha controls blend strength for the models. Lower alpha values lead to lower influence of the second model.",
"modelMergeInterpAddDifferenceHelp": "In this mode, Model 3 is first subtracted from Model 2. The resulting version is blended with Model 1 with the alpha rate set above."
"modelMergeInterpAddDifferenceHelp": "In this mode, Model 3 is first subtracted from Model 2. The resulting version is blended with Model 1 with the alpha rate set above.",
"inverseSigmoid": "Inverse Sigmoid",
"sigmoid": "Sigmoid",
"weightedSum": "Weighted Sum"
},
"parameters": {
"general": "General",
@ -401,6 +404,7 @@
"height": "Height",
"sampler": "Sampler",
"seed": "Seed",
"imageToImage": "Image to Image",
"randomizeSeed": "Randomize Seed",
"shuffle": "Shuffle",
"noiseThreshold": "Noise Threshold",
@ -438,7 +442,12 @@
"img2imgStrength": "Image To Image Strength",
"toggleLoopback": "Toggle Loopback",
"invoke": "Invoke",
"cancel": "Cancel",
"cancel": {
"immediate": "Cancel immediately",
"schedule": "Cancel after current iteration",
"isScheduled": "Canceling",
"setType": "Set cancel type"
},
"promptPlaceholder": "Type prompt here. [negative tokens], (upweight)++, (downweight)--, swap and blend are available (see docs)",
"negativePrompts": "Negative Prompts",
"sendTo": "Send to",
@ -465,8 +474,8 @@
"confirmOnDelete": "Confirm On Delete",
"displayHelpIcons": "Display Help Icons",
"useCanvasBeta": "Use Canvas Beta Layout",
"useSlidersForAll": "Use Sliders For All Options",
"enableImageDebugging": "Enable Image Debugging",
"useSlidersForAll": "Use Sliders For All Options",
"resetWebUI": "Reset Web UI",
"resetWebUIDesc1": "Resetting the web UI only resets the browser's local cache of your images and remembered settings. It does not delete any images from disk.",
"resetWebUIDesc2": "If images aren't showing up in the gallery or something else isn't working, please try resetting before submitting an issue on GitHub.",
@ -508,7 +517,7 @@
"feature": {
"prompt": "This is the prompt field. Prompt includes generation objects and stylistic terms. You can add weight (token importance) in the prompt as well, but CLI commands and parameters will not work.",
"gallery": "Gallery displays generations from the outputs folder as they're created. Settings are stored within files and accesed by context menu.",
"other": "These options will enable alternative processing modes for Invoke. 'Seamless tiling' will create repeating patterns in the output. 'High resolution' is generation in two steps with img2img: use this setting when you want a larger and more coherent image without artifacts. It will take longer that usual txt2img.",
"other": "These options will enable alternative processing modes for Invoke. 'Seamless tiling' will create repeating patterns in the output. 'High resolution' is generation in two steps with img2img: use this setting when you want a larger and more coherent image without artifacts. It will take longer than usual txt2img.",
"seed": "Seed value affects the initial noise from which the image is formed. You can use the already existing seeds from previous images. 'Noise Threshold' is used to mitigate artifacts at high CFG values (try the 0-10 range), and Perlin to add Perlin noise during generation: both serve to add variation to your outputs.",
"variations": "Try a variation with a value between 0.1 and 1.0 to change the result for a given seed. Interesting variations of the seed are between 0.1 and 0.3.",
"upscale": "Use ESRGAN to enlarge the image immediately after generation.",

View File

@ -365,7 +365,6 @@
"img2imgStrength": "Peso de Imagen a Imagen",
"toggleLoopback": "Alternar Retroalimentación",
"invoke": "Invocar",
"cancel": "Cancelar",
"promptPlaceholder": "Ingrese la entrada aquí. [símbolos negativos], (subir peso)++, (bajar peso)--, también disponible alternado y mezclado (ver documentación)",
"sendTo": "Enviar a",
"sendToImg2Img": "Enviar a Imagen a Imagen",

View File

@ -15,8 +15,8 @@
"langFrench": "Français",
"nodesDesc": "Un système basé sur les nœuds pour la génération d'images est actuellement en développement. Restez à l'écoute pour des mises à jour à ce sujet.",
"postProcessing": "Post-traitement",
"postProcessDesc1": "Invoke AI offre une grande variété de fonctionnalités de post-traitement. Le redimensionnement d'images et la restauration de visages sont déjà disponibles dans la WebUI. Vous pouvez y accéder à partir du menu Options avancées des onglets Texte en image et Image en image. Vous pouvez également traiter les images directement en utilisant les boutons d'action d'image ci-dessus l'affichage d'image actuel ou dans le visualiseur.",
"postProcessDesc2": "Une interface utilisateur dédiée sera bientôt disponible pour faciliter les workflows de post-traitement plus avancés.",
"postProcessDesc1": "Invoke AI offre une grande variété de fonctionnalités de post-traitement. Le redimensionnement d'images et la restauration de visages sont déjà disponibles dans la WebUI. Vous pouvez y accéder à partir du menu 'Options avancées' des onglets 'Texte vers image' et 'Image vers image'. Vous pouvez également traiter les images directement en utilisant les boutons d'action d'image au-dessus de l'affichage d'image actuel ou dans le visualiseur.",
"postProcessDesc2": "Une interface dédiée sera bientôt disponible pour faciliter les workflows de post-traitement plus avancés.",
"postProcessDesc3": "L'interface en ligne de commande d'Invoke AI offre diverses autres fonctionnalités, notamment Embiggen.",
"training": "Formation",
"trainingDesc1": "Un workflow dédié pour former vos propres embeddings et checkpoints en utilisant Textual Inversion et Dreambooth depuis l'interface web.",
@ -25,27 +25,27 @@
"close": "Fermer",
"load": "Charger",
"back": "Retour",
"statusConnected": "Connecté",
"statusDisconnected": "Déconnecté",
"statusConnected": "En ligne",
"statusDisconnected": "Hors ligne",
"statusError": "Erreur",
"statusPreparing": "Préparation",
"statusProcessingCanceled": "Traitement Annulé",
"statusProcessingComplete": "Traitement Terminé",
"statusProcessingCanceled": "Traitement annulé",
"statusProcessingComplete": "Traitement terminé",
"statusGenerating": "Génération",
"statusGeneratingTextToImage": "Génération Texte vers Image",
"statusGeneratingImageToImage": "Génération Image vers Image",
"statusGeneratingInpainting": "Génération de Réparation",
"statusGeneratingOutpainting": "Génération de Completion",
"statusGenerationComplete": "Génération Terminée",
"statusIterationComplete": "Itération Terminée",
"statusSavingImage": "Sauvegarde de l'Image",
"statusRestoringFaces": "Restauration des Visages",
"statusRestoringFacesGFPGAN": "Restauration des Visages (GFPGAN)",
"statusRestoringFacesCodeFormer": "Restauration des Visages (CodeFormer)",
"statusUpscaling": "Mise à Échelle",
"statusUpscalingESRGAN": "Mise à Échelle (ESRGAN)",
"statusLoadingModel": "Chargement du Modèle",
"statusModelChanged": "Modèle Changé"
"statusGeneratingInpainting": "Génération de réparation",
"statusGeneratingOutpainting": "Génération de complétion",
"statusGenerationComplete": "Génération terminée",
"statusIterationComplete": "Itération terminée",
"statusSavingImage": "Sauvegarde de l'image",
"statusRestoringFaces": "Restauration des visages",
"statusRestoringFacesGFPGAN": "Restauration des visages (GFPGAN)",
"statusRestoringFacesCodeFormer": "Restauration des visages (CodeFormer)",
"statusUpscaling": "Mise à échelle",
"statusUpscalingESRGAN": "Mise à échelle (ESRGAN)",
"statusLoadingModel": "Chargement du modèle",
"statusModelChanged": "Modèle changé"
},
"gallery": {
"generations": "Générations",
@ -66,9 +66,9 @@
"hotkeys": {
"keyboardShortcuts": "Raccourcis clavier",
"appHotkeys": "Raccourcis de l'application",
"GeneralHotkeys": "Raccourcis généraux",
"generalHotkeys": "Raccourcis généraux",
"galleryHotkeys": "Raccourcis de la galerie",
"unifiedCanvasHotkeys": "Raccourcis du Canvas unifié",
"unifiedCanvasHotkeys": "Raccourcis du canvas unifié",
"invoke": {
"title": "Invoquer",
"desc": "Générer une image"
@ -78,36 +78,36 @@
"desc": "Annuler la génération d'image"
},
"focusPrompt": {
"title": "Prompt de Focus",
"title": "Prompt de focus",
"desc": "Mettre en focus la zone de saisie de la commande"
},
"toggleOptions": {
"title": "Basculer Options",
"desc": "Ouvrir et fermer le panneau d'options"
"title": "Affichage des options",
"desc": "Afficher et masquer le panneau d'options"
},
"pinOptions": {
"title": "Epingler Options",
"title": "Epinglage des options",
"desc": "Epingler le panneau d'options"
},
"toggleViewer": {
"title": "Basculer Visionneuse",
"desc": "Ouvrir et fermer la visionneuse d'image"
"title": "Affichage de la visionneuse",
"desc": "Afficher et masquer la visionneuse d'image"
},
"toggleGallery": {
"title": "Basculer Galerie",
"desc": "Ouvrir et fermer le tiroir de galerie"
"title": "Affichage de la galerie",
"desc": "Afficher et masquer la galerie"
},
"maximizeWorkSpace": {
"title": "Maximiser Espace de travail",
"title": "Maximiser la zone de travail",
"desc": "Fermer les panneaux et maximiser la zone de travail"
},
"changeTabs": {
"title": "Changer d'onglets",
"title": "Changer d'onglet",
"desc": "Passer à un autre espace de travail"
},
"consoleToggle": {
"title": "Bascule de la console",
"desc": "Ouvrir et fermer la console"
"title": "Affichage de la console",
"desc": "Afficher et masquer la console"
},
"setPrompt": {
"title": "Définir le prompt",
@ -122,7 +122,7 @@
"desc": "Utiliser tous les paramètres de l'image actuelle"
},
"restoreFaces": {
"title": "Restaurer les faces",
"title": "Restaurer les visages",
"desc": "Restaurer l'image actuelle"
},
"upscale": {
@ -155,7 +155,7 @@
},
"toggleGalleryPin": {
"title": "Activer/désactiver l'épinglage de la galerie",
"desc": "Épingle ou dépingle la galerie à l'interface utilisateur"
"desc": "Épingle ou dépingle la galerie à l'interface"
},
"increaseGalleryThumbSize": {
"title": "Augmenter la taille des miniatures de la galerie",
@ -330,7 +330,7 @@
"delete": "Supprimer",
"deleteModel": "Supprimer le modèle",
"deleteConfig": "Supprimer la configuration",
"deleteMsg1": "Êtes-vous sûr de vouloir supprimer cette entrée de modèle dans InvokeAI?",
"deleteMsg1": "Voulez-vous vraiment supprimer cette entrée de modèle dans InvokeAI ?",
"deleteMsg2": "Cela n'effacera pas le fichier de point de contrôle du modèle de votre disque. Vous pouvez les réajouter si vous le souhaitez.",
"formMessageDiffusersModelLocation": "Emplacement du modèle de diffuseurs",
"formMessageDiffusersModelLocationDesc": "Veuillez en entrer au moins un.",
@ -380,7 +380,6 @@
"img2imgStrength": "Force de l'Image à l'Image",
"toggleLoopback": "Activer/Désactiver la Boucle",
"invoke": "Invoker",
"cancel": "Annuler",
"promptPlaceholder": "Tapez le prompt ici. [tokens négatifs], (poids positif)++, (poids négatif)--, swap et blend sont disponibles (voir les docs)",
"sendTo": "Envoyer à",
"sendToImg2Img": "Envoyer à Image à Image",
@ -448,11 +447,11 @@
"feature": {
"prompt": "Ceci est le champ prompt. Le prompt inclut des objets de génération et des termes stylistiques. Vous pouvez également ajouter un poids (importance du jeton) dans le prompt, mais les commandes CLI et les paramètres ne fonctionneront pas.",
"gallery": "La galerie affiche les générations à partir du dossier de sortie à mesure qu'elles sont créées. Les paramètres sont stockés dans des fichiers et accessibles via le menu contextuel.",
"other": "Ces options activent des modes de traitement alternatifs pour Invoke. 'Tuilage seamless' créera des motifs répétitifs dans la sortie. 'Haute résolution' est la génération en deux étapes avec img2img: utilisez ce paramètre lorsque vous souhaitez une image plus grande et plus cohérente sans artefacts. Cela prendra plus de temps que d'habitude txt2img.",
"seed": "La valeur de grain affecte le bruit initial à partir duquel l'image est formée. Vous pouvez utiliser les graines déjà existantes provenant d'images précédentes. 'Seuil de bruit' est utilisé pour atténuer les artefacts à des valeurs CFG élevées (essayez la plage de 0 à 10), et Perlin pour ajouter du bruit Perlin pendant la génération: les deux servent à ajouter de la variété à vos sorties.",
"other": "Ces options activent des modes de traitement alternatifs pour Invoke. 'Tuilage seamless' créera des motifs répétitifs dans la sortie. 'Haute résolution' est la génération en deux étapes avec img2img : utilisez ce paramètre lorsque vous souhaitez une image plus grande et plus cohérente sans artefacts. Cela prendra plus de temps que d'habitude txt2img.",
"seed": "La valeur de grain affecte le bruit initial à partir duquel l'image est formée. Vous pouvez utiliser les graines déjà existantes provenant d'images précédentes. 'Seuil de bruit' est utilisé pour atténuer les artefacts à des valeurs CFG élevées (essayez la plage de 0 à 10), et Perlin pour ajouter du bruit Perlin pendant la génération : les deux servent à ajouter de la variété à vos sorties.",
"variations": "Essayez une variation avec une valeur comprise entre 0,1 et 1,0 pour changer le résultat pour une graine donnée. Des variations intéressantes de la graine sont entre 0,1 et 0,3.",
"upscale": "Utilisez ESRGAN pour agrandir l'image immédiatement après la génération.",
"faceCorrection": "Correction de visage avec GFPGAN ou Codeformer: l'algorithme détecte les visages dans l'image et corrige tout défaut. La valeur élevée changera plus l'image, ce qui donnera des visages plus attirants. Codeformer avec une fidélité plus élevée préserve l'image originale au prix d'une correction de visage plus forte.",
"faceCorrection": "Correction de visage avec GFPGAN ou Codeformer : l'algorithme détecte les visages dans l'image et corrige tout défaut. La valeur élevée changera plus l'image, ce qui donnera des visages plus attirants. Codeformer avec une fidélité plus élevée préserve l'image originale au prix d'une correction de visage plus forte.",
"imageToImage": "Image to Image charge n'importe quelle image en tant qu'initiale, qui est ensuite utilisée pour générer une nouvelle avec le prompt. Plus la valeur est élevée, plus l'image de résultat changera. Des valeurs de 0,0 à 1,0 sont possibles, la plage recommandée est de 0,25 à 0,75",
"boundingBox": "La boîte englobante est la même que les paramètres Largeur et Hauteur pour Texte à Image ou Image à Image. Seulement la zone dans la boîte sera traitée.",
"seamCorrection": "Contrôle la gestion des coutures visibles qui se produisent entre les images générées sur la toile.",
@ -495,11 +494,11 @@
"clearCanvasHistory": "Effacer l'historique du canvas",
"clearHistory": "Effacer l'historique",
"clearCanvasHistoryMessage": "Effacer l'historique du canvas laisse votre canvas actuel intact, mais efface de manière irréversible l'historique annuler et refaire.",
"clearCanvasHistoryConfirm": "Êtes-vous sûr de vouloir effacer l'historique du canvas?",
"clearCanvasHistoryConfirm": "Voulez-vous vraiment effacer l'historique du canvas ?",
"emptyTempImageFolder": "Vider le dossier d'images temporaires",
"emptyFolder": "Vider le dossier",
"emptyTempImagesFolderMessage": "Vider le dossier d'images temporaires réinitialise également complètement le canvas unifié. Cela inclut tout l'historique annuler/refaire, les images dans la zone de mise en attente et la couche de base du canvas.",
"emptyTempImagesFolderConfirm": "Êtes-vous sûr de vouloir vider le dossier temporaire?",
"emptyTempImagesFolderConfirm": "Voulez-vous vraiment vider le dossier temporaire ?",
"activeLayer": "Calque actif",
"canvasScale": "Échelle du canevas",
"boundingBox": "Boîte englobante",

View File

@ -15,11 +15,11 @@
"langItalian": "Italiano",
"nodesDesc": "Attualmente è in fase di sviluppo un sistema basato su nodi per la generazione di immagini. Resta sintonizzato per gli aggiornamenti su questa fantastica funzionalità.",
"postProcessing": "Post-elaborazione",
"postProcessDesc1": "Invoke AI offre un'ampia varietà di funzionalità di post-elaborazione. Ampiamento Immagine e Restaura i Volti sono già disponibili nell'interfaccia Web. È possibile accedervi dal menu 'Opzioni avanzate' delle schede 'Testo a Immagine' e 'Immagine a Immagine'. È inoltre possibile elaborare le immagini direttamente, utilizzando i pulsanti di azione dell'immagine sopra la visualizzazione dell'immagine corrente o nel visualizzatore.",
"postProcessDesc1": "Invoke AI offre un'ampia varietà di funzionalità di post-elaborazione. Ampliamento Immagine e Restaura Volti sono già disponibili nell'interfaccia Web. È possibile accedervi dal menu 'Opzioni avanzate' delle schede 'Testo a Immagine' e 'Immagine a Immagine'. È inoltre possibile elaborare le immagini direttamente, utilizzando i pulsanti di azione dell'immagine sopra la visualizzazione dell'immagine corrente o nel visualizzatore.",
"postProcessDesc2": "Presto verrà rilasciata un'interfaccia utente dedicata per facilitare flussi di lavoro di post-elaborazione più avanzati.",
"postProcessDesc3": "L'interfaccia da riga di comando di 'Invoke AI' offre varie altre funzionalità tra cui Embiggen.",
"training": "Addestramento",
"trainingDesc1": "Un flusso di lavoro dedicato per addestrare i tuoi incorporamenti e checkpoint utilizzando Inversione Testuale e Dreambooth dall'interfaccia web.",
"trainingDesc1": "Un flusso di lavoro dedicato per addestrare i tuoi Incorporamenti e Checkpoint utilizzando Inversione Testuale e Dreambooth dall'interfaccia web.",
"trainingDesc2": "InvokeAI supporta già l'addestramento di incorporamenti personalizzati utilizzando l'inversione testuale utilizzando lo script principale.",
"upload": "Caricamento",
"close": "Chiudi",
@ -45,7 +45,25 @@
"statusUpscaling": "Ampliamento",
"statusUpscalingESRGAN": "Ampliamento (ESRGAN)",
"statusLoadingModel": "Caricamento del modello",
"statusModelChanged": "Modello cambiato"
"statusModelChanged": "Modello cambiato",
"githubLabel": "GitHub",
"discordLabel": "Discord",
"langArabic": "Arabo",
"langEnglish": "Inglese",
"langFrench": "Francese",
"langGerman": "Tedesco",
"langJapanese": "Giapponese",
"langPolish": "Polacco",
"langBrPortuguese": "Portoghese Basiliano",
"langRussian": "Russo",
"langUkranian": "Ucraino",
"langSpanish": "Spagnolo",
"statusMergingModels": "Fusione Modelli",
"statusMergedModels": "Modelli fusi",
"langSimplifiedChinese": "Cinese semplificato",
"langDutch": "Olandese",
"statusModelConverted": "Modello Convertito",
"statusConvertingModel": "Conversione Modello"
},
"gallery": {
"generations": "Generazioni",
@ -70,7 +88,7 @@
"galleryHotkeys": "Tasti di scelta rapida della galleria",
"unifiedCanvasHotkeys": "Tasti di scelta rapida Tela Unificata",
"invoke": {
"title": "Invoca",
"title": "Invoke",
"desc": "Genera un'immagine"
},
"cancel": {
@ -335,7 +353,47 @@
"formMessageDiffusersModelLocation": "Ubicazione modelli diffusori",
"formMessageDiffusersModelLocationDesc": "Inseriscine almeno uno.",
"formMessageDiffusersVAELocation": "Ubicazione file VAE",
"formMessageDiffusersVAELocationDesc": "Se non fornito, InvokeAI cercherà il file VAE all'interno dell'ubicazione del modello sopra indicata."
"formMessageDiffusersVAELocationDesc": "Se non fornito, InvokeAI cercherà il file VAE all'interno dell'ubicazione del modello sopra indicata.",
"convert": "Converti",
"convertToDiffusers": "Converti in Diffusori",
"convertToDiffusersHelpText2": "Questo processo sostituirà la voce in Gestione Modelli con la versione Diffusori dello stesso modello.",
"convertToDiffusersHelpText4": "Questo è un processo una tantum. Potrebbero essere necessari circa 30-60 secondi a seconda delle specifiche del tuo computer.",
"convertToDiffusersHelpText5": "Assicurati di avere spazio su disco sufficiente. I modelli generalmente variano tra 4 GB e 7 GB di dimensioni.",
"convertToDiffusersHelpText6": "Vuoi convertire questo modello?",
"convertToDiffusersSaveLocation": "Ubicazione salvataggio",
"v2": "v2",
"inpainting": "v1 Inpainting",
"customConfig": "Configurazione personalizzata",
"statusConverting": "Conversione in corso",
"modelConverted": "Modello convertito",
"sameFolder": "Stessa cartella",
"invokeRoot": "Cartella InvokeAI",
"merge": "Fondere",
"modelsMerged": "Modelli fusi",
"mergeModels": "Fondi Modelli",
"modelOne": "Modello 1",
"modelTwo": "Modello 2",
"mergedModelName": "Nome del modello fuso",
"alpha": "Alpha",
"interpolationType": "Tipo di interpolazione",
"mergedModelCustomSaveLocation": "Percorso personalizzato",
"invokeAIFolder": "Cartella Invoke AI",
"ignoreMismatch": "Ignora le discrepanze tra i modelli selezionati",
"modelMergeHeaderHelp2": "Solo i diffusori sono disponibili per l'unione. Se desideri unire un modello Checkpoint, convertilo prima in Diffusori.",
"modelMergeInterpAddDifferenceHelp": "In questa modalità, il Modello 3 viene prima sottratto dal Modello 2. La versione risultante viene unita al Modello 1 con il tasso Alpha impostato sopra.",
"mergedModelSaveLocation": "Ubicazione salvataggio",
"convertToDiffusersHelpText1": "Questo modello verrà convertito nel formato 🧨 Diffusore.",
"custom": "Personalizzata",
"convertToDiffusersHelpText3": "Il tuo file checkpoint sul disco NON verrà comunque cancellato o modificato. Se lo desideri, puoi aggiungerlo di nuovo in Gestione Modelli.",
"v1": "v1",
"pathToCustomConfig": "Percorso alla configurazione personalizzata",
"modelThree": "Modello 3",
"modelMergeHeaderHelp1": "Puoi unire fino a tre diversi modelli per creare una miscela adatta alle tue esigenze.",
"modelMergeAlphaHelp": "Il valore Alpha controlla la forza di miscelazione dei modelli. Valori Alpha più bassi attenuano l'influenza del secondo modello.",
"customSaveLocation": "Ubicazione salvataggio personalizzata",
"weightedSum": "Somma pesata",
"sigmoid": "Sigmoide",
"inverseSigmoid": "Sigmoide inverso"
},
"parameters": {
"images": "Immagini",
@ -352,7 +410,7 @@
"variations": "Variazioni",
"variationAmount": "Quantità di variazione",
"seedWeights": "Pesi dei semi",
"faceRestoration": "Restaura volti",
"faceRestoration": "Restauro volti",
"restoreFaces": "Restaura volti",
"type": "Tipo",
"strength": "Forza",
@ -380,7 +438,6 @@
"img2imgStrength": "Forza da Immagine a Immagine",
"toggleLoopback": "Attiva/disattiva elaborazione ricorsiva",
"invoke": "Invoke",
"cancel": "Annulla",
"promptPlaceholder": "Digita qui il prompt usando termini in lingua inglese. [token negativi], (aumenta il peso)++, (diminuisci il peso)--, scambia e fondi sono disponibili (consulta la documentazione)",
"sendTo": "Invia a",
"sendToImg2Img": "Invia a da Immagine a Immagine",
@ -396,7 +453,19 @@
"info": "Informazioni",
"deleteImage": "Elimina immagine",
"initialImage": "Immagine iniziale",
"showOptionsPanel": "Mostra pannello opzioni"
"showOptionsPanel": "Mostra pannello opzioni",
"general": "Generale",
"denoisingStrength": "Forza riduzione rumore",
"copyImage": "Copia immagine",
"hiresStrength": "Forza Alta Risoluzione",
"negativePrompts": "Prompt Negativi",
"imageToImage": "Immagine a Immagine",
"cancel": {
"schedule": "Annulla dopo l'iterazione corrente",
"isScheduled": "Annullamento",
"setType": "Imposta il tipo di annullamento",
"immediate": "Annulla immediatamente"
}
},
"settings": {
"models": "Modelli",
@ -409,7 +478,8 @@
"resetWebUI": "Reimposta l'interfaccia utente Web",
"resetWebUIDesc1": "Il ripristino dell'interfaccia utente Web reimposta solo la cache locale del browser delle immagini e le impostazioni memorizzate. Non cancella alcuna immagine dal disco.",
"resetWebUIDesc2": "Se le immagini non vengono visualizzate nella galleria o qualcos'altro non funziona, prova a reimpostare prima di segnalare un problema su GitHub.",
"resetComplete": "L'interfaccia utente Web è stata reimpostata. Aggiorna la pagina per ricaricarla."
"resetComplete": "L'interfaccia utente Web è stata reimpostata. Aggiorna la pagina per ricaricarla.",
"useSlidersForAll": "Usa i cursori per tutte le opzioni"
},
"toast": {
"tempFoldersEmptied": "Cartella temporanea svuotata",
@ -447,7 +517,7 @@
"feature": {
"prompt": "Questo è il campo del prompt. Il prompt include oggetti di generazione e termini stilistici. Puoi anche aggiungere il peso (importanza del token) nel prompt, ma i comandi e i parametri dell'interfaccia a linea di comando non funzioneranno.",
"gallery": "Galleria visualizza le generazioni dalla cartella degli output man mano che vengono create. Le impostazioni sono memorizzate all'interno di file e accessibili dal menu contestuale.",
"other": "Queste opzioni abiliteranno modalità di elaborazione alternative per Invoke. 'Piastrella senza cuciture' creerà modelli ripetuti nell'output. 'Ottimizzzazione Alta risoluzione' è la generazione in due passaggi con 'Immagine a Immagine': usa questa impostazione quando vuoi un'immagine più grande e più coerente senza artefatti. Ci vorrà più tempo del solito 'Testo a Immagine'.",
"other": "Queste opzioni abiliteranno modalità di elaborazione alternative per Invoke. 'Piastrella senza cuciture' creerà modelli ripetuti nell'output. 'Ottimizzazione Alta risoluzione' è la generazione in due passaggi con 'Immagine a Immagine': usa questa impostazione quando vuoi un'immagine più grande e più coerente senza artefatti. Ci vorrà più tempo del solito 'Testo a Immagine'.",
"seed": "Il valore del Seme influenza il rumore iniziale da cui è formata l'immagine. Puoi usare i semi già esistenti dalle immagini precedenti. 'Soglia del rumore' viene utilizzato per mitigare gli artefatti a valori CFG elevati (provare l'intervallo 0-10) e Perlin per aggiungere il rumore Perlin durante la generazione: entrambi servono per aggiungere variazioni ai risultati.",
"variations": "Prova una variazione con un valore compreso tra 0.1 e 1.0 per modificare il risultato per un dato seme. Variazioni interessanti del seme sono comprese tra 0.1 e 0.3.",
"upscale": "Utilizza ESRGAN per ingrandire l'immagine subito dopo la generazione.",
@ -515,6 +585,6 @@
"betaClear": "Svuota",
"betaDarkenOutside": "Oscura all'esterno",
"betaLimitToBox": "Limita al rettangolo",
"betaPreserveMasked": "Conserva quanto mascheato"
"betaPreserveMasked": "Conserva quanto mascherato"
}
}

View File

@ -304,7 +304,6 @@
"scaledHeight": "高さのスケール",
"boundingBoxHeader": "バウンディングボックス",
"img2imgStrength": "Image To Imageの強度",
"cancel": "キャンセル",
"sendTo": "転送",
"sendToImg2Img": "Image to Imageに転送",
"sendToUnifiedCanvas": "Unified Canvasに転送",

View File

@ -364,7 +364,6 @@
"img2imgStrength": "Sterkte Afbeelding naar afbeelding",
"toggleLoopback": "Zet recursieve verwerking aan/uit",
"invoke": "Genereer",
"cancel": "Annuleer",
"promptPlaceholder": "Voer invoertekst hier in. [negatieve trefwoorden], (verhoogdgewicht)++, (verlaagdgewicht)--, swap (wisselen) en blend (mengen) zijn beschikbaar (zie documentatie)",
"sendTo": "Stuur naar",
"sendToImg2Img": "Stuur naar Afbeelding naar afbeelding",

View File

@ -269,7 +269,6 @@
"desc": "Akceptuje aktualnie wybrany obraz tymczasowy"
}
},
"modelManager": {},
"parameters": {
"images": "L. obrazów",
"steps": "L. kroków",
@ -313,7 +312,6 @@
"img2imgStrength": "Wpływ sugestii na obraz",
"toggleLoopback": "Wł/wył sprzężenie zwrotne",
"invoke": "Wywołaj",
"cancel": "Anuluj",
"promptPlaceholder": "W tym miejscu wprowadź swoje sugestie. [negatywne sugestie], (wzmocnienie), (osłabienie)--, po więcej opcji (np. swap lub blend) zajrzyj do dokumentacji",
"sendTo": "Wyślij do",
"sendToImg2Img": "Użyj w trybie \"Obraz na obraz\"",

View File

@ -362,7 +362,6 @@
"img2imgStrength": "Força de Imagem Para Imagem",
"toggleLoopback": "Ativar Loopback",
"invoke": "Invoke",
"cancel": "Cancelar",
"promptPlaceholder": "Digite o prompt aqui. [tokens negativos], (upweight)++, (downweight)--, trocar e misturar estão disponíveis (veja docs)",
"sendTo": "Mandar para",
"sendToImg2Img": "Mandar para Imagem Para Imagem",
@ -425,7 +424,6 @@
"initialImageNotSet": "Imagem Inicial Não Definida",
"initialImageNotSetDesc": "Não foi possível carregar imagem incial"
},
"tooltip": {},
"unifiedCanvas": {
"layer": "Camada",
"base": "Base",

View File

@ -160,7 +160,7 @@
"title": "Увеличить размер миниатюр галереи",
"desc": "Увеличивает размер миниатюр галереи"
},
"reduceGalleryThumbSize": {
"decreaseGalleryThumbSize": {
"title": "Уменьшает размер миниатюр галереи",
"desc": "Уменьшает размер миниатюр галереи"
},
@ -172,7 +172,7 @@
"title": "Выбрать ластик",
"desc": "Выбирает ластик для холста"
},
"reduceBrushSize": {
"decreaseBrushSize": {
"title": "Уменьшить размер кисти",
"desc": "Уменьшает размер кисти/ластика холста"
},
@ -180,7 +180,7 @@
"title": "Увеличить размер кисти",
"desc": "Увеличивает размер кисти/ластика холста"
},
"reduceBrushOpacity": {
"decreaseBrushOpacity": {
"title": "Уменьшить непрозрачность кисти",
"desc": "Уменьшает непрозрачность кисти холста"
},
@ -365,7 +365,6 @@
"img2imgStrength": "Сила обработки img2img",
"toggleLoopback": "Зациклить обработку",
"invoke": "Вызвать",
"cancel": "Отменить",
"promptPlaceholder": "Введите запрос здесь (на английском). [исключенные токены], (более значимые)++, (менее значимые)--, swap и blend тоже доступны (смотрите Github)",
"sendTo": "Отправить",
"sendToImg2Img": "Отправить в img2img",
@ -494,7 +493,7 @@
"cursorPosition": "Положение курсора",
"previous": "Предыдущее",
"next": "Следующее",
"принять": "Принять",
"accept": "Принять",
"showHide": "Показать/Скрыть",
"discardAll": "Отменить все",
"betaClear": "Очистить",

View File

@ -160,7 +160,7 @@
"title": "Збільшити розмір мініатюр галереї",
"desc": "Збільшує розмір мініатюр галереї"
},
"reduceGalleryThumbSize": {
"decreaseGalleryThumbSize": {
"title": "Зменшує розмір мініатюр галереї",
"desc": "Зменшує розмір мініатюр галереї"
},
@ -172,7 +172,7 @@
"title": "Вибрати ластик",
"desc": "Вибирає ластик для полотна"
},
"reduceBrushSize": {
"decreaseBrushSize": {
"title": "Зменшити розмір пензля",
"desc": "Зменшує розмір пензля/ластика полотна"
},
@ -180,7 +180,7 @@
"title": "Збільшити розмір пензля",
"desc": "Збільшує розмір пензля/ластика полотна"
},
"reduceBrushOpacity": {
"decreaseBrushOpacity": {
"title": "Зменшити непрозорість пензля",
"desc": "Зменшує непрозорість пензля полотна"
},
@ -354,7 +354,6 @@
"seamBlur": "Розмиття шву",
"seamStrength": "Сила шву",
"seamSteps": "Кроки шву",
"inpaintReplace": "Inpaint-заміна",
"scaleBeforeProcessing": "Масштабувати",
"scaledWidth": "Масштаб Ш",
"scaledHeight": "Масштаб В",
@ -366,7 +365,6 @@
"img2imgStrength": "Сила обробки img2img",
"toggleLoopback": "Зациклити обробку",
"invoke": "Викликати",
"cancel": "Скасувати",
"promptPlaceholder": "Введіть запит тут (англійською). [видалені токени], (більш вагомі)++, (менш вагомі)--, swap и blend також доступні (дивіться Github)",
"sendTo": "Надіслати",
"sendToImg2Img": "Надіслати у img2img",
@ -495,7 +493,7 @@
"cursorPosition": "Розташування курсора",
"previous": "Попереднє",
"next": "Наступне",
"принять": "Приняти",
"accept": "Приняти",
"showHide": "Показати/Сховати",
"discardAll": "Відмінити все",
"betaClear": "Очистити",

View File

@ -362,7 +362,6 @@
"img2imgStrength": "图像到图像强度",
"toggleLoopback": "切换环回",
"invoke": "Invoke",
"cancel": "取消",
"promptPlaceholder": "在这里输入提示。可以使用[反提示]、(加权)++、(减权)--、交换和混合(见文档)",
"sendTo": "发送到",
"sendToImg2Img": "发送到图像到图像",
@ -425,7 +424,6 @@
"initialImageNotSet": "初始图像未设定",
"initialImageNotSetDesc": "无法加载初始图像"
},
"tooltip": {},
"unifiedCanvas": {
"layer": "图层",
"base": "基础层",

View File

@ -1 +1,41 @@
export {};
declare module 'redux-socket.io-middleware';
declare global {
/* eslint-disable @typescript-eslint/no-explicit-any */
interface Array<T> {
/**
* Returns the value of the last element in the array where predicate is true, and undefined
* otherwise.
* @param predicate findLast calls predicate once for each element of the array, in descending
* order, until it finds one where predicate returns true. If such an element is found, findLast
* immediately returns that element value. Otherwise, findLast returns undefined.
* @param thisArg If provided, it will be used as the this value for each invocation of
* predicate. If it is not provided, undefined is used instead.
*/
findLast<S extends T>(
predicate: (value: T, index: number, array: T[]) => value is S,
thisArg?: any
): S | undefined;
findLast(
predicate: (value: T, index: number, array: T[]) => unknown,
thisArg?: any
): T | undefined;
/**
* Returns the index of the last element in the array where predicate is true, and -1
* otherwise.
* @param predicate findLastIndex calls predicate once for each element of the array, in descending
* order, until it finds one where predicate returns true. If such an element is found,
* findLastIndex immediately returns that element index. Otherwise, findLastIndex returns -1.
* @param thisArg If provided, it will be used as the this value for each invocation of
* predicate. If it is not provided, undefined is used instead.
*/
findLastIndex(
predicate: (value: T, index: number, array: T[]) => unknown,
thisArg?: any
): number;
}
/* eslint-enable @typescript-eslint/no-explicit-any */
}

View File

@ -15,72 +15,70 @@
"postinstall": "patch-package"
},
"dependencies": {
"@chakra-ui/icons": "^2.0.10",
"@chakra-ui/react": "^2.3.1",
"@chakra-ui/icons": "^2.0.17",
"@chakra-ui/react": "^2.5.1",
"@emotion/cache": "^11.10.5",
"@emotion/react": "^11.10.4",
"@emotion/styled": "^11.10.4",
"@radix-ui/react-context-menu": "^2.0.1",
"@emotion/react": "^11.10.6",
"@emotion/styled": "^11.10.6",
"@radix-ui/react-context-menu": "^2.1.1",
"@radix-ui/react-slider": "^1.1.0",
"@radix-ui/react-tooltip": "^1.0.2",
"@reduxjs/toolkit": "^1.8.5",
"@types/uuid": "^8.3.4",
"@vitejs/plugin-react-swc": "^3.1.0",
"@radix-ui/react-tooltip": "^1.0.3",
"@reduxjs/toolkit": "^1.9.2",
"@types/uuid": "^9.0.0",
"@vitejs/plugin-react-swc": "^3.2.0",
"add": "^2.0.6",
"dateformat": "^5.0.3",
"formik": "^2.2.9",
"framer-motion": "^7.2.1",
"i18next": "^22.4.5",
"framer-motion": "^9.0.4",
"i18next": "^22.4.10",
"i18next-browser-languagedetector": "^7.0.1",
"i18next-http-backend": "^2.1.0",
"konva": "^8.3.13",
"i18next-http-backend": "^2.1.1",
"konva": "^8.4.2",
"lodash": "^4.17.21",
"re-resizable": "^6.9.9",
"react": "^18.2.0",
"react-colorful": "^5.6.1",
"react-dom": "^18.2.0",
"react-dropzone": "^14.2.2",
"react-hotkeys-hook": "4.0.2",
"react-i18next": "^12.1.1",
"react-icons": "^4.4.0",
"react-konva": "^18.2.3",
"react-konva-utils": "^0.3.0",
"react-redux": "^8.0.2",
"react-dropzone": "^14.2.3",
"react-hotkeys-hook": "4.3.5",
"react-i18next": "^12.1.5",
"react-icons": "^4.7.1",
"react-konva": "^18.2.4",
"react-konva-utils": "^0.3.2",
"react-redux": "^8.0.5",
"react-transition-group": "^4.4.5",
"react-zoom-pan-pinch": "^2.1.3",
"redux-deep-persist": "^1.0.6",
"react-zoom-pan-pinch": "^2.6.1",
"redux-deep-persist": "^1.0.7",
"redux-persist": "^6.0.0",
"socket.io": "^4.5.2",
"socket.io-client": "^4.5.2",
"socket.io": "^4.6.0",
"socket.io-client": "^4.6.0",
"use-image": "^1.1.0",
"uuid": "^9.0.0",
"yarn": "^1.22.19"
},
"devDependencies": {
"@types/dateformat": "^5.0.0",
"@types/react": "^18.0.17",
"@types/react-dom": "^18.0.6",
"@types/react": "^18.0.28",
"@types/react-dom": "^18.0.11",
"@types/react-transition-group": "^4.4.5",
"@typescript-eslint/eslint-plugin": "^5.36.2",
"@typescript-eslint/parser": "^5.36.2",
"@typescript-eslint/eslint-plugin": "^5.52.0",
"@typescript-eslint/parser": "^5.52.0",
"babel-plugin-transform-imports": "^2.0.0",
"eslint": "^8.23.0",
"eslint": "^8.34.0",
"eslint-config-prettier": "^8.6.0",
"eslint-plugin-prettier": "^4.2.1",
"eslint-plugin-react": "^7.32.2",
"eslint-plugin-react-hooks": "^4.6.0",
"husky": "^8.0.3",
"lint-staged": "^13.1.0",
"madge": "^5.0.1",
"patch-package": "^6.5.0",
"lint-staged": "^13.1.2",
"madge": "^6.0.0",
"patch-package": "^6.5.1",
"postinstall-postinstall": "^2.1.0",
"prettier": "^2.8.3",
"prettier": "^2.8.4",
"rollup-plugin-visualizer": "^5.9.0",
"sass": "^1.55.0",
"terser": "^5.16.1",
"tsc-watch": "^5.0.3",
"typescript": "^5.0.0-beta",
"vite": "^4.1.1",
"sass": "^1.58.3",
"terser": "^5.16.4",
"vite": "^4.1.2",
"vite-plugin-eslint": "^1.8.1",
"vite-tsconfig-paths": "^4.0.5"
},
@ -95,9 +93,9 @@
}
},
"lint-staged": {
"**/*.{js,jsx,ts,tsx,cjs}": [
"npx prettier --write",
"npx eslint --fix"
"**/*.{js,jsx,ts,tsx,cjs,json,html,scss}": [
"npm run prettier",
"npm run lint"
]
}
}

"seed": "La valeur de grain affecte le bruit initial à partir duquel l'image est formée. Vous pouvez utiliser les graines déjà existantes provenant d'images précédentes. 'Seuil de bruit' est utilisé pour atténuer les artefacts à des valeurs CFG élevées (essayez la plage de 0 à 10), et Perlin pour ajouter du bruit Perlin pendant la génération : les deux servent à ajouter de la variété à vos sorties.",
"variations": "Essayez une variation avec une valeur comprise entre 0,1 et 1,0 pour changer le résultat pour une graine donnée. Des variations intéressantes de la graine sont entre 0,1 et 0,3.",
"upscale": "Utilisez ESRGAN pour agrandir l'image immédiatement après la génération.",
"faceCorrection": "Correction de visage avec GFPGAN ou Codeformer: l'algorithme détecte les visages dans l'image et corrige tout défaut. La valeur élevée changera plus l'image, ce qui donnera des visages plus attirants. Codeformer avec une fidélité plus élevée préserve l'image originale au prix d'une correction de visage plus forte.",
"faceCorrection": "Correction de visage avec GFPGAN ou Codeformer : l'algorithme détecte les visages dans l'image et corrige tout défaut. La valeur élevée changera plus l'image, ce qui donnera des visages plus attirants. Codeformer avec une fidélité plus élevée préserve l'image originale au prix d'une correction de visage plus forte.",
"imageToImage": "Image to Image charge n'importe quelle image en tant qu'initiale, qui est ensuite utilisée pour générer une nouvelle avec le prompt. Plus la valeur est élevée, plus l'image de résultat changera. Des valeurs de 0,0 à 1,0 sont possibles, la plage recommandée est de 0,25 à 0,75",
"boundingBox": "La boîte englobante est la même que les paramètres Largeur et Hauteur pour Texte à Image ou Image à Image. Seulement la zone dans la boîte sera traitée.",
"seamCorrection": "Contrôle la gestion des coutures visibles qui se produisent entre les images générées sur la toile.",
@ -495,11 +494,11 @@
"clearCanvasHistory": "Effacer l'historique du canvas",
"clearHistory": "Effacer l'historique",
"clearCanvasHistoryMessage": "Effacer l'historique du canvas laisse votre canvas actuel intact, mais efface de manière irréversible l'historique annuler et refaire.",
"clearCanvasHistoryConfirm": "Êtes-vous sûr de vouloir effacer l'historique du canvas?",
"clearCanvasHistoryConfirm": "Voulez-vous vraiment effacer l'historique du canvas ?",
"emptyTempImageFolder": "Vider le dossier d'images temporaires",
"emptyFolder": "Vider le dossier",
"emptyTempImagesFolderMessage": "Vider le dossier d'images temporaires réinitialise également complètement le canvas unifié. Cela inclut tout l'historique annuler/refaire, les images dans la zone de mise en attente et la couche de base du canvas.",
"emptyTempImagesFolderConfirm": "Êtes-vous sûr de vouloir vider le dossier temporaire?",
"emptyTempImagesFolderConfirm": "Voulez-vous vraiment vider le dossier temporaire ?",
"activeLayer": "Calque actif",
"canvasScale": "Échelle du canevas",
"boundingBox": "Boîte englobante",

View File

@ -15,11 +15,11 @@
"langItalian": "Italiano",
"nodesDesc": "Attualmente è in fase di sviluppo un sistema basato su nodi per la generazione di immagini. Resta sintonizzato per gli aggiornamenti su questa fantastica funzionalità.",
"postProcessing": "Post-elaborazione",
"postProcessDesc1": "Invoke AI offre un'ampia varietà di funzionalità di post-elaborazione. Ampiamento Immagine e Restaura i Volti sono già disponibili nell'interfaccia Web. È possibile accedervi dal menu 'Opzioni avanzate' delle schede 'Testo a Immagine' e 'Immagine a Immagine'. È inoltre possibile elaborare le immagini direttamente, utilizzando i pulsanti di azione dell'immagine sopra la visualizzazione dell'immagine corrente o nel visualizzatore.",
"postProcessDesc1": "Invoke AI offre un'ampia varietà di funzionalità di post-elaborazione. Ampliamento Immagine e Restaura Volti sono già disponibili nell'interfaccia Web. È possibile accedervi dal menu 'Opzioni avanzate' delle schede 'Testo a Immagine' e 'Immagine a Immagine'. È inoltre possibile elaborare le immagini direttamente, utilizzando i pulsanti di azione dell'immagine sopra la visualizzazione dell'immagine corrente o nel visualizzatore.",
"postProcessDesc2": "Presto verrà rilasciata un'interfaccia utente dedicata per facilitare flussi di lavoro di post-elaborazione più avanzati.",
"postProcessDesc3": "L'interfaccia da riga di comando di 'Invoke AI' offre varie altre funzionalità tra cui Embiggen.",
"training": "Addestramento",
"trainingDesc1": "Un flusso di lavoro dedicato per addestrare i tuoi incorporamenti e checkpoint utilizzando Inversione Testuale e Dreambooth dall'interfaccia web.",
"trainingDesc1": "Un flusso di lavoro dedicato per addestrare i tuoi Incorporamenti e Checkpoint utilizzando Inversione Testuale e Dreambooth dall'interfaccia web.",
"trainingDesc2": "InvokeAI supporta già l'addestramento di incorporamenti personalizzati utilizzando l'inversione testuale utilizzando lo script principale.",
"upload": "Caricamento",
"close": "Chiudi",
@ -45,7 +45,25 @@
"statusUpscaling": "Ampliamento",
"statusUpscalingESRGAN": "Ampliamento (ESRGAN)",
"statusLoadingModel": "Caricamento del modello",
"statusModelChanged": "Modello cambiato"
"statusModelChanged": "Modello cambiato",
"githubLabel": "GitHub",
"discordLabel": "Discord",
"langArabic": "Arabo",
"langEnglish": "Inglese",
"langFrench": "Francese",
"langGerman": "Tedesco",
"langJapanese": "Giapponese",
"langPolish": "Polacco",
"langBrPortuguese": "Portoghese Basiliano",
"langRussian": "Russo",
"langUkranian": "Ucraino",
"langSpanish": "Spagnolo",
"statusMergingModels": "Fusione Modelli",
"statusMergedModels": "Modelli fusi",
"langSimplifiedChinese": "Cinese semplificato",
"langDutch": "Olandese",
"statusModelConverted": "Modello Convertito",
"statusConvertingModel": "Conversione Modello"
},
"gallery": {
"generations": "Generazioni",
@ -70,7 +88,7 @@
"galleryHotkeys": "Tasti di scelta rapida della galleria",
"unifiedCanvasHotkeys": "Tasti di scelta rapida Tela Unificata",
"invoke": {
"title": "Invoca",
"title": "Invoke",
"desc": "Genera un'immagine"
},
"cancel": {
@ -335,7 +353,47 @@
"formMessageDiffusersModelLocation": "Ubicazione modelli diffusori",
"formMessageDiffusersModelLocationDesc": "Inseriscine almeno uno.",
"formMessageDiffusersVAELocation": "Ubicazione file VAE",
"formMessageDiffusersVAELocationDesc": "Se non fornito, InvokeAI cercherà il file VAE all'interno dell'ubicazione del modello sopra indicata."
"formMessageDiffusersVAELocationDesc": "Se non fornito, InvokeAI cercherà il file VAE all'interno dell'ubicazione del modello sopra indicata.",
"convert": "Converti",
"convertToDiffusers": "Converti in Diffusori",
"convertToDiffusersHelpText2": "Questo processo sostituirà la voce in Gestione Modelli con la versione Diffusori dello stesso modello.",
"convertToDiffusersHelpText4": "Questo è un processo una tantum. Potrebbero essere necessari circa 30-60 secondi a seconda delle specifiche del tuo computer.",
"convertToDiffusersHelpText5": "Assicurati di avere spazio su disco sufficiente. I modelli generalmente variano tra 4 GB e 7 GB di dimensioni.",
"convertToDiffusersHelpText6": "Vuoi convertire questo modello?",
"convertToDiffusersSaveLocation": "Ubicazione salvataggio",
"v2": "v2",
"inpainting": "v1 Inpainting",
"customConfig": "Configurazione personalizzata",
"statusConverting": "Conversione in corso",
"modelConverted": "Modello convertito",
"sameFolder": "Stessa cartella",
"invokeRoot": "Cartella InvokeAI",
"merge": "Fondere",
"modelsMerged": "Modelli fusi",
"mergeModels": "Fondi Modelli",
"modelOne": "Modello 1",
"modelTwo": "Modello 2",
"mergedModelName": "Nome del modello fuso",
"alpha": "Alpha",
"interpolationType": "Tipo di interpolazione",
"mergedModelCustomSaveLocation": "Percorso personalizzato",
"invokeAIFolder": "Cartella Invoke AI",
"ignoreMismatch": "Ignora le discrepanze tra i modelli selezionati",
"modelMergeHeaderHelp2": "Solo i diffusori sono disponibili per l'unione. Se desideri unire un modello Checkpoint, convertilo prima in Diffusori.",
"modelMergeInterpAddDifferenceHelp": "In questa modalità, il Modello 3 viene prima sottratto dal Modello 2. La versione risultante viene unita al Modello 1 con il tasso Alpha impostato sopra.",
"mergedModelSaveLocation": "Ubicazione salvataggio",
"convertToDiffusersHelpText1": "Questo modello verrà convertito nel formato 🧨 Diffusore.",
"custom": "Personalizzata",
"convertToDiffusersHelpText3": "Il tuo file checkpoint sul disco NON verrà comunque cancellato o modificato. Se lo desideri, puoi aggiungerlo di nuovo in Gestione Modelli.",
"v1": "v1",
"pathToCustomConfig": "Percorso alla configurazione personalizzata",
"modelThree": "Modello 3",
"modelMergeHeaderHelp1": "Puoi unire fino a tre diversi modelli per creare una miscela adatta alle tue esigenze.",
"modelMergeAlphaHelp": "Il valore Alpha controlla la forza di miscelazione dei modelli. Valori Alpha più bassi attenuano l'influenza del secondo modello.",
"customSaveLocation": "Ubicazione salvataggio personalizzata",
"weightedSum": "Somma pesata",
"sigmoid": "Sigmoide",
"inverseSigmoid": "Sigmoide inverso"
},
"parameters": {
"images": "Immagini",
@ -352,7 +410,7 @@
"variations": "Variazioni",
"variationAmount": "Quantità di variazione",
"seedWeights": "Pesi dei semi",
"faceRestoration": "Restaura volti",
"faceRestoration": "Restauro volti",
"restoreFaces": "Restaura volti",
"type": "Tipo",
"strength": "Forza",
@ -380,7 +438,6 @@
"img2imgStrength": "Forza da Immagine a Immagine",
"toggleLoopback": "Attiva/disattiva elaborazione ricorsiva",
"invoke": "Invoke",
"cancel": "Annulla",
"promptPlaceholder": "Digita qui il prompt usando termini in lingua inglese. [token negativi], (aumenta il peso)++, (diminuisci il peso)--, scambia e fondi sono disponibili (consulta la documentazione)",
"sendTo": "Invia a",
"sendToImg2Img": "Invia a da Immagine a Immagine",
@ -396,7 +453,19 @@
"info": "Informazioni",
"deleteImage": "Elimina immagine",
"initialImage": "Immagine iniziale",
"showOptionsPanel": "Mostra pannello opzioni"
"showOptionsPanel": "Mostra pannello opzioni",
"general": "Generale",
"denoisingStrength": "Forza riduzione rumore",
"copyImage": "Copia immagine",
"hiresStrength": "Forza Alta Risoluzione",
"negativePrompts": "Prompt Negativi",
"imageToImage": "Immagine a Immagine",
"cancel": {
"schedule": "Annulla dopo l'iterazione corrente",
"isScheduled": "Annullamento",
"setType": "Imposta il tipo di annullamento",
"immediate": "Annulla immediatamente"
}
},
"settings": {
"models": "Modelli",
@ -409,7 +478,8 @@
"resetWebUI": "Reimposta l'interfaccia utente Web",
"resetWebUIDesc1": "Il ripristino dell'interfaccia utente Web reimposta solo la cache locale del browser delle immagini e le impostazioni memorizzate. Non cancella alcuna immagine dal disco.",
"resetWebUIDesc2": "Se le immagini non vengono visualizzate nella galleria o qualcos'altro non funziona, prova a reimpostare prima di segnalare un problema su GitHub.",
"resetComplete": "L'interfaccia utente Web è stata reimpostata. Aggiorna la pagina per ricaricarla."
"resetComplete": "L'interfaccia utente Web è stata reimpostata. Aggiorna la pagina per ricaricarla.",
"useSlidersForAll": "Usa i cursori per tutte le opzioni"
},
"toast": {
"tempFoldersEmptied": "Cartella temporanea svuotata",
@ -447,7 +517,7 @@
"feature": {
"prompt": "Questo è il campo del prompt. Il prompt include oggetti di generazione e termini stilistici. Puoi anche aggiungere il peso (importanza del token) nel prompt, ma i comandi e i parametri dell'interfaccia a linea di comando non funzioneranno.",
"gallery": "Galleria visualizza le generazioni dalla cartella degli output man mano che vengono create. Le impostazioni sono memorizzate all'interno di file e accessibili dal menu contestuale.",
"other": "Queste opzioni abiliteranno modalità di elaborazione alternative per Invoke. 'Piastrella senza cuciture' creerà modelli ripetuti nell'output. 'Ottimizzzazione Alta risoluzione' è la generazione in due passaggi con 'Immagine a Immagine': usa questa impostazione quando vuoi un'immagine più grande e più coerente senza artefatti. Ci vorrà più tempo del solito 'Testo a Immagine'.",
"other": "Queste opzioni abiliteranno modalità di elaborazione alternative per Invoke. 'Piastrella senza cuciture' creerà modelli ripetuti nell'output. 'Ottimizzazione Alta risoluzione' è la generazione in due passaggi con 'Immagine a Immagine': usa questa impostazione quando vuoi un'immagine più grande e più coerente senza artefatti. Ci vorrà più tempo del solito 'Testo a Immagine'.",
"seed": "Il valore del Seme influenza il rumore iniziale da cui è formata l'immagine. Puoi usare i semi già esistenti dalle immagini precedenti. 'Soglia del rumore' viene utilizzato per mitigare gli artefatti a valori CFG elevati (provare l'intervallo 0-10) e Perlin per aggiungere il rumore Perlin durante la generazione: entrambi servono per aggiungere variazioni ai risultati.",
"variations": "Prova una variazione con un valore compreso tra 0.1 e 1.0 per modificare il risultato per un dato seme. Variazioni interessanti del seme sono comprese tra 0.1 e 0.3.",
"upscale": "Utilizza ESRGAN per ingrandire l'immagine subito dopo la generazione.",
@ -515,6 +585,6 @@
"betaClear": "Svuota",
"betaDarkenOutside": "Oscura all'esterno",
"betaLimitToBox": "Limita al rettangolo",
"betaPreserveMasked": "Conserva quanto mascheato"
"betaPreserveMasked": "Conserva quanto mascherato"
}
}

View File

@ -304,7 +304,6 @@
"scaledHeight": "高さのスケール",
"boundingBoxHeader": "バウンディングボックス",
"img2imgStrength": "Image To Imageの強度",
"cancel": "キャンセル",
"sendTo": "転送",
"sendToImg2Img": "Image to Imageに転送",
"sendToUnifiedCanvas": "Unified Canvasに転送",

View File

@ -364,7 +364,6 @@
"img2imgStrength": "Sterkte Afbeelding naar afbeelding",
"toggleLoopback": "Zet recursieve verwerking aan/uit",
"invoke": "Genereer",
"cancel": "Annuleer",
"promptPlaceholder": "Voer invoertekst hier in. [negatieve trefwoorden], (verhoogdgewicht)++, (verlaagdgewicht)--, swap (wisselen) en blend (mengen) zijn beschikbaar (zie documentatie)",
"sendTo": "Stuur naar",
"sendToImg2Img": "Stuur naar Afbeelding naar afbeelding",

View File

@ -269,7 +269,6 @@
"desc": "Akceptuje aktualnie wybrany obraz tymczasowy"
}
},
"modelManager": {},
"parameters": {
"images": "L. obrazów",
"steps": "L. kroków",
@ -313,7 +312,6 @@
"img2imgStrength": "Wpływ sugestii na obraz",
"toggleLoopback": "Wł/wył sprzężenie zwrotne",
"invoke": "Wywołaj",
"cancel": "Anuluj",
"promptPlaceholder": "W tym miejscu wprowadź swoje sugestie. [negatywne sugestie], (wzmocnienie), (osłabienie)--, po więcej opcji (np. swap lub blend) zajrzyj do dokumentacji",
"sendTo": "Wyślij do",
"sendToImg2Img": "Użyj w trybie \"Obraz na obraz\"",

View File

@ -362,7 +362,6 @@
"img2imgStrength": "Força de Imagem Para Imagem",
"toggleLoopback": "Ativar Loopback",
"invoke": "Invoke",
"cancel": "Cancelar",
"promptPlaceholder": "Digite o prompt aqui. [tokens negativos], (upweight)++, (downweight)--, trocar e misturar estão disponíveis (veja docs)",
"sendTo": "Mandar para",
"sendToImg2Img": "Mandar para Imagem Para Imagem",
@ -425,7 +424,6 @@
"initialImageNotSet": "Imagem Inicial Não Definida",
"initialImageNotSetDesc": "Não foi possível carregar imagem incial"
},
"tooltip": {},
"unifiedCanvas": {
"layer": "Camada",
"base": "Base",

View File

@ -160,7 +160,7 @@
"title": "Увеличить размер миниатюр галереи",
"desc": "Увеличивает размер миниатюр галереи"
},
"reduceGalleryThumbSize": {
"decreaseGalleryThumbSize": {
"title": "Уменьшает размер миниатюр галереи",
"desc": "Уменьшает размер миниатюр галереи"
},
@ -172,7 +172,7 @@
"title": "Выбрать ластик",
"desc": "Выбирает ластик для холста"
},
"reduceBrushSize": {
"decreaseBrushSize": {
"title": "Уменьшить размер кисти",
"desc": "Уменьшает размер кисти/ластика холста"
},
@ -180,7 +180,7 @@
"title": "Увеличить размер кисти",
"desc": "Увеличивает размер кисти/ластика холста"
},
"reduceBrushOpacity": {
"decreaseBrushOpacity": {
"title": "Уменьшить непрозрачность кисти",
"desc": "Уменьшает непрозрачность кисти холста"
},
@ -365,7 +365,6 @@
"img2imgStrength": "Сила обработки img2img",
"toggleLoopback": "Зациклить обработку",
"invoke": "Вызвать",
"cancel": "Отменить",
"promptPlaceholder": "Введите запрос здесь (на английском). [исключенные токены], (более значимые)++, (менее значимые)--, swap и blend тоже доступны (смотрите Github)",
"sendTo": "Отправить",
"sendToImg2Img": "Отправить в img2img",
@ -494,7 +493,7 @@
"cursorPosition": "Положение курсора",
"previous": "Предыдущее",
"next": "Следующее",
"принять": "Принять",
"accept": "Принять",
"showHide": "Показать/Скрыть",
"discardAll": "Отменить все",
"betaClear": "Очистить",

View File

@ -160,7 +160,7 @@
"title": "Збільшити розмір мініатюр галереї",
"desc": "Збільшує розмір мініатюр галереї"
},
"reduceGalleryThumbSize": {
"decreaseGalleryThumbSize": {
"title": "Зменшує розмір мініатюр галереї",
"desc": "Зменшує розмір мініатюр галереї"
},
@ -172,7 +172,7 @@
"title": "Вибрати ластик",
"desc": "Вибирає ластик для полотна"
},
"reduceBrushSize": {
"decreaseBrushSize": {
"title": "Зменшити розмір пензля",
"desc": "Зменшує розмір пензля/ластика полотна"
},
@ -180,7 +180,7 @@
"title": "Збільшити розмір пензля",
"desc": "Збільшує розмір пензля/ластика полотна"
},
"reduceBrushOpacity": {
"decreaseBrushOpacity": {
"title": "Зменшити непрозорість пензля",
"desc": "Зменшує непрозорість пензля полотна"
},
@ -354,7 +354,6 @@
"seamBlur": "Розмиття шву",
"seamStrength": "Сила шву",
"seamSteps": "Кроки шву",
"inpaintReplace": "Inpaint-заміна",
"scaleBeforeProcessing": "Масштабувати",
"scaledWidth": "Масштаб Ш",
"scaledHeight": "Масштаб В",
@ -366,7 +365,6 @@
"img2imgStrength": "Сила обробки img2img",
"toggleLoopback": "Зациклити обробку",
"invoke": "Викликати",
"cancel": "Скасувати",
"promptPlaceholder": "Введіть запит тут (англійською). [видалені токени], (більш вагомі)++, (менш вагомі)--, swap и blend також доступні (дивіться Github)",
"sendTo": "Надіслати",
"sendToImg2Img": "Надіслати у img2img",
@ -495,7 +493,7 @@
"cursorPosition": "Розташування курсора",
"previous": "Попереднє",
"next": "Наступне",
"принять": "Приняти",
"accept": "Приняти",
"showHide": "Показати/Сховати",
"discardAll": "Відмінити все",
"betaClear": "Очистити",

View File

@ -362,7 +362,6 @@
"img2imgStrength": "图像到图像强度",
"toggleLoopback": "切换环回",
"invoke": "Invoke",
"cancel": "取消",
"promptPlaceholder": "在这里输入提示。可以使用[反提示]、(加权)++、(减权)--、交换和混合(见文档)",
"sendTo": "发送到",
"sendToImg2Img": "发送到图像到图像",
@ -425,7 +424,6 @@
"initialImageNotSet": "初始图像未设定",
"initialImageNotSetDesc": "无法加载初始图像"
},
"tooltip": {},
"unifiedCanvas": {
"layer": "图层",
"base": "基础层",

View File

@ -48,6 +48,7 @@ const systemBlacklist = [
'totalIterations',
'totalSteps',
'openModel',
'cancelOptions.cancelAfter',
].map((blacklistItem) => `system.${blacklistItem}`);
const galleryBlacklist = [

View File

@ -0,0 +1,102 @@
import {
Menu,
MenuButton,
MenuItem,
MenuList,
MenuProps,
MenuButtonProps,
MenuListProps,
MenuItemProps,
} from '@chakra-ui/react';
import { MouseEventHandler, ReactNode } from 'react';
import { MdArrowDropDown, MdArrowDropUp } from 'react-icons/md';
import IAIButton from './IAIButton';
import IAIIconButton from './IAIIconButton';
interface IAIMenuItem {
item: ReactNode | string;
onClick: MouseEventHandler<HTMLButtonElement> | undefined;
}
interface IAIMenuProps {
menuType?: 'icon' | 'regular';
buttonText?: string;
iconTooltip?: string;
menuItems: IAIMenuItem[];
menuProps?: MenuProps;
menuButtonProps?: MenuButtonProps;
menuListProps?: MenuListProps;
menuItemProps?: MenuItemProps;
}
export default function IAISimpleMenu(props: IAIMenuProps) {
const {
menuType = 'icon',
iconTooltip,
buttonText,
menuItems,
menuProps,
menuButtonProps,
menuListProps,
menuItemProps,
} = props;
const renderMenuItems = () => {
const menuItemsToRender: ReactNode[] = [];
menuItems.forEach((menuItem, index) => {
menuItemsToRender.push(
<MenuItem
key={index}
onClick={menuItem.onClick}
fontSize="0.9rem"
color="var(--text-color-secondary)"
backgroundColor="var(--background-color-secondary)"
_focus={{
color: 'var(--text-color)',
backgroundColor: 'var(--border-color)',
}}
{...menuItemProps}
>
{menuItem.item}
</MenuItem>
);
});
return menuItemsToRender;
};
return (
<Menu {...menuProps}>
{({ isOpen }) => (
<>
<MenuButton
as={menuType === 'icon' ? IAIIconButton : IAIButton}
tooltip={iconTooltip}
icon={isOpen ? <MdArrowDropUp /> : <MdArrowDropDown />}
padding={menuType === 'regular' ? '0 0.5rem' : 0}
backgroundColor="var(--btn-base-color)"
_hover={{
backgroundColor: 'var(--btn-base-color-hover)',
}}
minWidth="1rem"
minHeight="1rem"
fontSize="1.5rem"
{...menuButtonProps}
>
{menuType === 'regular' && buttonText}
</MenuButton>
<MenuList
zIndex={15}
padding={0}
borderRadius="0.5rem"
backgroundColor="var(--background-color-secondary)"
color="var(--text-color-secondary)"
borderColor="var(--border-color)"
{...menuListProps}
>
{renderMenuItems()}
</MenuList>
</>
)}
</Menu>
);
}

View File

@ -1,12 +1,12 @@
import { FACETOOL_TYPES } from 'app/constants';
import { type RootState } from 'app/store';
import { RootState } from 'app/store';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import IAISelect from 'common/components/IAISelect';
import {
type FacetoolType,
FacetoolType,
setFacetoolType,
} from 'features/parameters/store/postprocessingSlice';
import { type ChangeEvent } from 'react';
import { ChangeEvent } from 'react';
import { useTranslation } from 'react-i18next';
export default function FaceRestoreType() {

View File

@ -4,7 +4,7 @@ import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import IAISelect from 'common/components/IAISelect';
import {
setUpscalingLevel,
type UpscalingLevel,
UpscalingLevel,
} from 'features/parameters/store/postprocessingSlice';
import type { ChangeEvent } from 'react';
import { useTranslation } from 'react-i18next';

View File

@ -1,5 +1,5 @@
import { Flex } from '@chakra-ui/react';
import { type RootState } from 'app/store';
import { RootState } from 'app/store';
import { useAppSelector } from 'app/storeHooks';
import { useTranslation } from 'react-i18next';
import ParametersAccordion from '../ParametersAccordion';

View File

@ -5,12 +5,20 @@ import IAIIconButton, {
IAIIconButtonProps,
} from 'common/components/IAIIconButton';
import { systemSelector } from 'features/system/store/systemSelectors';
import { SystemState } from 'features/system/store/systemSlice';
import {
SystemState,
setCancelAfter,
setCancelType,
} from 'features/system/store/systemSlice';
import { isEqual } from 'lodash';
import { useEffect, useCallback } from 'react';
import { ButtonSpinner, ButtonGroup } from '@chakra-ui/react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
import { MdCancel } from 'react-icons/md';
import { MdCancel, MdCancelScheduleSend } from 'react-icons/md';
import IAISimpleMenu from 'common/components/IAISimpleMenu';
const cancelButtonSelector = createSelector(
systemSelector,
@ -19,6 +27,10 @@ const cancelButtonSelector = createSelector(
isProcessing: system.isProcessing,
isConnected: system.isConnected,
isCancelable: system.isCancelable,
currentIteration: system.currentIteration,
totalIterations: system.totalIterations,
cancelType: system.cancelOptions.cancelType,
cancelAfter: system.cancelOptions.cancelAfter,
};
},
{
@ -28,17 +40,33 @@ const cancelButtonSelector = createSelector(
}
);
interface CancelButtonProps {
btnGroupWidth?: string | number;
}
export default function CancelButton(
props: Omit<IAIIconButtonProps, 'aria-label'>
props: CancelButtonProps & Omit<IAIIconButtonProps, 'aria-label'>
) {
const { ...rest } = props;
const dispatch = useAppDispatch();
const { isProcessing, isConnected, isCancelable } =
useAppSelector(cancelButtonSelector);
const handleClickCancel = () => dispatch(cancelProcessing());
const { btnGroupWidth = 'auto', ...rest } = props;
const {
isProcessing,
isConnected,
isCancelable,
currentIteration,
totalIterations,
cancelType,
cancelAfter,
} = useAppSelector(cancelButtonSelector);
const handleClickCancel = useCallback(() => {
dispatch(cancelProcessing());
dispatch(setCancelAfter(null));
}, [dispatch]);
const { t } = useTranslation();
const isCancelScheduled = cancelAfter !== null;
useHotkeys(
'shift+x',
() => {
@ -49,15 +77,87 @@ export default function CancelButton(
[isConnected, isProcessing, isCancelable]
);
useEffect(() => {
if (cancelAfter !== null && cancelAfter < currentIteration) {
handleClickCancel();
}
}, [cancelAfter, currentIteration, handleClickCancel]);
const cancelMenuItems = [
{
item: t('parameters.cancel.immediate'),
onClick: () => dispatch(setCancelType('immediate')),
},
{
item: t('parameters.cancel.schedule'),
onClick: () => dispatch(setCancelType('scheduled')),
},
];
return (
<IAIIconButton
icon={<MdCancel />}
tooltip={t('parameters.cancel')}
aria-label={t('parameters.cancel')}
isDisabled={!isConnected || !isProcessing || !isCancelable}
onClick={handleClickCancel}
styleClass="cancel-btn"
{...rest}
/>
<ButtonGroup
isAttached
variant="link"
minHeight="2.5rem"
width={btnGroupWidth}
>
{cancelType === 'immediate' ? (
<IAIIconButton
icon={<MdCancel />}
tooltip={t('parameters.cancel.immediate')}
aria-label={t('parameters.cancel.immediate')}
isDisabled={!isConnected || !isProcessing || !isCancelable}
onClick={handleClickCancel}
className="cancel-btn"
{...rest}
/>
) : (
<IAIIconButton
icon={
isCancelScheduled ? (
<ButtonSpinner color="var(--text-color)" />
) : (
<MdCancelScheduleSend />
)
}
tooltip={
isCancelScheduled
? t('parameters.cancel.isScheduled')
: t('parameters.cancel.schedule')
}
aria-label={
isCancelScheduled
? t('parameters.cancel.isScheduled')
: t('parameters.cancel.schedule')
}
isDisabled={
!isConnected ||
!isProcessing ||
!isCancelable ||
currentIteration === totalIterations
}
onClick={() => {
// If a cancel request has already been made, and the user clicks again before the next iteration has been processed, stop the request.
if (isCancelScheduled) dispatch(setCancelAfter(null));
else dispatch(setCancelAfter(currentIteration));
}}
className="cancel-btn"
{...rest}
/>
)}
<IAISimpleMenu
menuItems={cancelMenuItems}
iconTooltip={t('parameters.cancel.setType')}
menuButtonProps={{
backgroundColor: 'var(--destructive-color)',
color: 'var(--text-color)',
minWidth: '1.5rem',
minHeight: '1.5rem',
_hover: {
backgroundColor: 'var(--destructive-color-hover)',
},
}}
/>
</ButtonGroup>
);
}

View File

@ -12,7 +12,7 @@ import {
useDisclosure,
} from '@chakra-ui/react';
import { mergeDiffusersModels } from 'app/socketio/actions';
import { type RootState } from 'app/store';
import { RootState } from 'app/store';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import IAIButton from 'common/components/IAIButton';
import IAIInput from 'common/components/IAIInput';

View File

@ -1,20 +1,21 @@
import { Box, Flex, Text } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { Box, Flex, Spinner, Text } from '@chakra-ui/react';
import IAIInput from 'common/components/IAIInput';
import { useMemo, useState, useTransition } from 'react';
import IAIButton from 'common/components/IAIButton';
import AddModel from './AddModel';
import ModelListItem from './ModelListItem';
import MergeModels from './MergeModels';
import { useAppSelector } from 'app/storeHooks';
import { useTranslation } from 'react-i18next';
import IAIButton from 'common/components/IAIButton';
import { createSelector } from '@reduxjs/toolkit';
import { systemSelector } from 'features/system/store/systemSelectors';
import type { SystemState } from 'features/system/store/systemSlice';
import { isEqual, map } from 'lodash';
import React, { useMemo, useState, useTransition } from 'react';
import type { ChangeEvent, ReactNode } from 'react';
import MergeModels from './MergeModels';
const modelListSelector = createSelector(
systemSelector,
@ -58,6 +59,16 @@ function ModelFilterButton({
const ModelList = () => {
const models = useAppSelector(modelListSelector);
const [renderModelList, setRenderModelList] = React.useState<boolean>(false);
React.useEffect(() => {
const timer = setTimeout(() => {
setRenderModelList(true);
}, 200);
return () => clearTimeout(timer);
}, []);
const [searchText, setSearchText] = useState<string>('');
const [isSelectedFilter, setIsSelectedFilter] = useState<
'all' | 'ckpt' | 'diffusers'
@ -217,7 +228,19 @@ const ModelList = () => {
isActive={isSelectedFilter === 'diffusers'}
/>
</Flex>
{renderModelListItems}
{renderModelList ? (
renderModelListItems
) : (
<Flex
width="100%"
minHeight="30rem"
justifyContent="center"
alignItems="center"
>
<Spinner />
</Flex>
)}
</Flex>
</Flex>
);

View File

@ -14,7 +14,7 @@ import {
} from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { IN_PROGRESS_IMAGE_TYPES } from 'app/constants';
import { type RootState } from 'app/store';
import { RootState } from 'app/store';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import IAINumberInput from 'common/components/IAINumberInput';
import IAISelect from 'common/components/IAISelect';
@ -27,14 +27,14 @@ import {
setShouldConfirmOnDelete,
setShouldDisplayGuides,
setShouldDisplayInProgressType,
type SystemState,
SystemState,
} from 'features/system/store/systemSlice';
import { uiSelector } from 'features/ui/store/uiSelectors';
import {
setShouldUseCanvasBetaLayout,
setShouldUseSliders,
} from 'features/ui/store/uiSlice';
import { type UIState } from 'features/ui/store/uiTypes';
import { UIState } from 'features/ui/store/uiTypes';
import { isEqual, map } from 'lodash';
import { persistor } from 'persistor';
import { ChangeEvent, cloneElement, ReactElement } from 'react';

View File

@ -23,6 +23,8 @@ export type ReadinessPayload = {
export type InProgressImageType = 'none' | 'full-res' | 'latents';
export type CancelType = 'immediate' | 'scheduled';
export interface SystemState
extends InvokeAI.SystemStatus,
InvokeAI.SystemConfig {
@ -50,6 +52,10 @@ export interface SystemState
searchFolder: string | null;
foundModels: InvokeAI.FoundModel[] | null;
openModel: string | null;
cancelOptions: {
cancelType: CancelType;
cancelAfter: number | null;
};
}
const initialSystemState: SystemState = {
@ -63,7 +69,7 @@ const initialSystemState: SystemState = {
isESRGANAvailable: true,
socketId: '',
shouldConfirmOnDelete: true,
openAccordions: [],
openAccordions: [0],
currentStep: 0,
totalSteps: 0,
currentIteration: 0,
@ -88,6 +94,10 @@ const initialSystemState: SystemState = {
searchFolder: null,
foundModels: null,
openModel: null,
cancelOptions: {
cancelType: 'immediate',
cancelAfter: null,
},
};
export const systemSlice = createSlice({
@ -255,6 +265,12 @@ export const systemSlice = createSlice({
setOpenModel: (state, action: PayloadAction<string | null>) => {
state.openModel = action.payload;
},
setCancelType: (state, action: PayloadAction<CancelType>) => {
state.cancelOptions.cancelType = action.payload;
},
setCancelAfter: (state, action: PayloadAction<number | null>) => {
state.cancelOptions.cancelAfter = action.payload;
},
},
});
@ -288,6 +304,8 @@ export const {
setSearchFolder,
setFoundModels,
setOpenModel,
setCancelType,
setCancelAfter,
} = systemSlice.actions;
export default systemSlice.reducer;

View File

@ -2,20 +2,13 @@ import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import { setDoesCanvasNeedScaling } from 'features/canvas/store/canvasSlice';
import { setShouldShowGallery } from 'features/gallery/store/gallerySlice';
import { setShouldShowParametersPanel } from 'features/ui/store/uiSlice';
import { useHotkeys } from 'react-hotkeys-hook';
import { MdPhotoLibrary } from 'react-icons/md';
import { floatingSelector } from './FloatingParametersPanelButtons';
const FloatingGalleryButton = () => {
const dispatch = useAppDispatch();
const {
shouldShowGallery,
shouldShowGalleryButton,
shouldPinGallery,
shouldShowParametersPanel,
shouldPinParametersPanel,
} = useAppSelector(floatingSelector);
const { shouldShowGalleryButton, shouldPinGallery } =
useAppSelector(floatingSelector);
const handleShowGallery = () => {
dispatch(setShouldShowGallery(true));
@ -24,22 +17,6 @@ const FloatingGalleryButton = () => {
}
};
useHotkeys(
'f',
() => {
if (shouldShowGallery || shouldShowParametersPanel) {
dispatch(setShouldShowParametersPanel(false));
dispatch(setShouldShowGallery(false));
} else {
dispatch(setShouldShowParametersPanel(true));
dispatch(setShouldShowGallery(true));
}
if (shouldPinGallery || shouldPinParametersPanel)
setTimeout(() => dispatch(setDoesCanvasNeedScaling(true)), 400);
},
[shouldShowGallery, shouldShowParametersPanel]
);
return shouldShowGalleryButton ? (
<IAIIconButton
tooltip="Show Gallery (G)"

View File

@ -3,10 +3,7 @@ import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import { setDoesCanvasNeedScaling } from 'features/canvas/store/canvasSlice';
import { gallerySelector } from 'features/gallery/store/gallerySelectors';
import {
GalleryState,
setShouldShowGallery,
} from 'features/gallery/store/gallerySlice';
import { GalleryState } from 'features/gallery/store/gallerySlice';
import CancelButton from 'features/parameters/components/ProcessButtons/CancelButton';
import InvokeButton from 'features/parameters/components/ProcessButtons/InvokeButton';
import {
@ -16,7 +13,6 @@ import {
import { setShouldShowParametersPanel } from 'features/ui/store/uiSlice';
import { isEqual } from 'lodash';
import { useHotkeys } from 'react-hotkeys-hook';
import { FaSlidersH } from 'react-icons/fa';
export const floatingSelector = createSelector(
@ -67,12 +63,9 @@ export const floatingSelector = createSelector(
const FloatingParametersPanelButtons = () => {
const dispatch = useAppDispatch();
const {
shouldShowParametersPanel,
shouldShowParametersPanelButton,
shouldShowProcessButtons,
shouldPinParametersPanel,
shouldShowGallery,
shouldPinGallery,
} = useAppSelector(floatingSelector);
const handleShowOptionsPanel = () => {
@ -82,22 +75,6 @@ const FloatingParametersPanelButtons = () => {
}
};
useHotkeys(
'f',
() => {
if (shouldShowGallery || shouldShowParametersPanel) {
dispatch(setShouldShowParametersPanel(false));
dispatch(setShouldShowGallery(false));
} else {
dispatch(setShouldShowParametersPanel(true));
dispatch(setShouldShowGallery(true));
}
if (shouldPinGallery || shouldPinParametersPanel)
setTimeout(() => dispatch(setDoesCanvasNeedScaling(true)), 400);
},
[shouldShowGallery, shouldShowParametersPanel]
);
return shouldShowParametersPanelButton ? (
<div className="show-hide-button-options">
<IAIIconButton

View File

@ -11,14 +11,20 @@ import PostprocessingIcon from 'common/icons/PostprocessingIcon';
import TextToImageIcon from 'common/icons/TextToImageIcon';
import TrainingIcon from 'common/icons/TrainingIcon';
import UnifiedCanvasIcon from 'common/icons/UnifiedCanvasIcon';
import { setDoesCanvasNeedScaling } from 'features/canvas/store/canvasSlice';
import { setShouldShowGallery } from 'features/gallery/store/gallerySlice';
import Lightbox from 'features/lightbox/components/Lightbox';
import { setIsLightboxOpen } from 'features/lightbox/store/lightboxSlice';
import { InvokeTabName } from 'features/ui/store/tabMap';
import { setActiveTab } from 'features/ui/store/uiSlice';
import {
setActiveTab,
setShouldShowParametersPanel,
} from 'features/ui/store/uiSlice';
import i18n from 'i18n';
import { ReactElement } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { activeTabIndexSelector } from '../store/uiSelectors';
import { floatingSelector } from './FloatingParametersPanelButtons';
import ImageToImageWorkarea from './ImageToImage';
import TextToImageWorkarea from './TextToImage';
import UnifiedCanvasWorkarea from './UnifiedCanvas/UnifiedCanvasWorkarea';
@ -73,10 +79,18 @@ function updateTabTranslations() {
export default function InvokeTabs() {
const activeTab = useAppSelector(activeTabIndexSelector);
const isLightBoxOpen = useAppSelector(
(state: RootState) => state.lightbox.isLightboxOpen
);
const {
shouldShowGallery,
shouldShowParametersPanel,
shouldPinGallery,
shouldPinParametersPanel,
} = useAppSelector(floatingSelector);
useUpdateTranslations(updateTabTranslations);
const dispatch = useAppDispatch();
@ -114,6 +128,22 @@ export default function InvokeTabs() {
[isLightBoxOpen]
);
useHotkeys(
'f',
() => {
if (shouldShowGallery || shouldShowParametersPanel) {
dispatch(setShouldShowParametersPanel(false));
dispatch(setShouldShowGallery(false));
} else {
dispatch(setShouldShowParametersPanel(true));
dispatch(setShouldShowGallery(true));
}
if (shouldPinGallery || shouldPinParametersPanel)
setTimeout(() => dispatch(setDoesCanvasNeedScaling(true)), 400);
},
[shouldShowGallery, shouldShowParametersPanel]
);
const renderTabs = () => {
const tabsToRender: ReactElement[] = [];
Object.keys(tabDict).forEach((key) => {

View File

@ -38,7 +38,7 @@ export default function UnifiedCanvasProcessingButtons() {
<InvokeButton iconButton />
</Flex>
<Flex>
<CancelButton width="100%" height="40px" />
<CancelButton width="100%" height="40px" btnGroupWidth="100%" />
</Flex>
</Flex>
);

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -272,6 +272,10 @@ class Args(object):
switches.append('--seamless')
if a['hires_fix']:
switches.append('--hires_fix')
if a['h_symmetry_time_pct']:
switches.append(f'--h_symmetry_time_pct {a["h_symmetry_time_pct"]}')
if a['v_symmetry_time_pct']:
switches.append(f'--v_symmetry_time_pct {a["v_symmetry_time_pct"]}')
# img2img generations have parameters relevant only to them and have special handling
if a['init_img'] and len(a['init_img'])>0:
@ -751,6 +755,9 @@ class Args(object):
!fix applies upscaling/facefixing to a previously-generated image.
invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
*embeddings*
invoke> !triggers -- return all trigger phrases contained in loaded embedding files
*History manipulation*
!fetch retrieves the command used to generate an earlier image. Provide
a directory wildcard and the name of a file to write and all the commands
@ -842,6 +849,18 @@ class Args(object):
type=float,
help='Perlin noise scale (0.0 - 1.0) - add perlin noise to the initialization instead of the usual gaussian noise.',
)
render_group.add_argument(
'--h_symmetry_time_pct',
default=None,
type=float,
help='Horizontal symmetry point (0.0 - 1.0) - apply horizontal symmetry at this point in image generation.',
)
render_group.add_argument(
'--v_symmetry_time_pct',
default=None,
type=float,
help='Vertical symmetry point (0.0 - 1.0) - apply vertical symmetry at this point in image generation.',
)
render_group.add_argument(
'--fnformat',
default='{prefix}.{seed}.png',
@ -1148,7 +1167,8 @@ def metadata_dumps(opt,
# remove any image keys not mentioned in RFC #266
rfc266_img_fields = ['type','postprocessing','sampler','prompt','seed','variations','steps',
'cfg_scale','threshold','perlin','step_number','width','height','extra','strength','seamless',
'init_img','init_mask','facetool','facetool_strength','upscale']
'init_img','init_mask','facetool','facetool_strength','upscale','h_symmetry_time_pct',
'v_symmetry_time_pct']
rfc_dict ={}
for item in image_dict.items():
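For orientation, here is a minimal, self-contained sketch (not the project's code) of how the two new symmetry switches round-trip through the command-line serialization in this hunk. The dict shape and flag names mirror the `Args` handling above; the prompt and values are made up.

```python
# Illustrative only: mimics how args.py appends the new symmetry
# switches when rebuilding a reproducible command line.
a = {
    'prompt': 'a sunny day',
    'h_symmetry_time_pct': 0.25,  # horizontal symmetry 25% of the way through generation
    'v_symmetry_time_pct': None,  # vertical symmetry disabled
}

switches = [a['prompt']]
for flag in ('h_symmetry_time_pct', 'v_symmetry_time_pct'):
    if a[flag]:
        switches.append(f'--{flag} {a[flag]}')

print(' '.join(switches))
# -> a sunny day --h_symmetry_time_pct 0.25
```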

View File

@ -53,6 +53,7 @@ from diffusers import (
)
from diffusers.pipelines.latent_diffusion.pipeline_latent_diffusion import LDMBertConfig, LDMBertModel
from diffusers.pipelines.paint_by_example import PaintByExampleImageEncoder, PaintByExamplePipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from diffusers.utils import is_safetensors_available
from transformers import AutoFeatureExtractor, BertTokenizerFast, CLIPTextModel, CLIPTokenizer, CLIPVisionConfig
@ -984,6 +985,7 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
elif model_type in ['FrozenCLIPEmbedder','WeightedFrozenCLIPEmbedder']:
text_model = convert_ldm_clip_checkpoint(checkpoint)
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14",cache_dir=cache_dir)
safety_checker = StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker',cache_dir=global_cache_dir("hub"))
feature_extractor = AutoFeatureExtractor.from_pretrained("CompVis/stable-diffusion-safety-checker",cache_dir=cache_dir)
pipe = pipeline_class(
vae=vae,
@ -991,7 +993,7 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
tokenizer=tokenizer,
unet=unet,
scheduler=scheduler,
safety_checker=None,
safety_checker=safety_checker,
feature_extractor=feature_extractor,
)
else:

View File

@ -40,7 +40,6 @@ from ldm.invoke.globals import Globals, global_cache_dir, global_config_dir
from ldm.invoke.readline import generic_completer
warnings.filterwarnings("ignore")
import torch
transformers.logging.set_verbosity_error()
@ -764,7 +763,7 @@ def download_weights(opt: dict) -> Union[str, None]:
precision = (
"float32"
if opt.full_precision
else choose_precision(torch.device(choose_torch_device()))
else choose_precision(choose_torch_device())
)
if opt.yes_to_all:

View File

@ -1,19 +1,25 @@
from __future__ import annotations
from contextlib import nullcontext
import torch
from torch import autocast
from contextlib import nullcontext
from ldm.invoke.globals import Globals
def choose_torch_device() -> str:
CPU_DEVICE = torch.device("cpu")
def choose_torch_device() -> torch.device:
'''Convenience routine for guessing which GPU device to run model on'''
if Globals.always_use_cpu:
return "cpu"
return CPU_DEVICE
if torch.cuda.is_available():
return 'cuda'
return torch.device('cuda')
if hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
return 'mps'
return 'cpu'
return torch.device('mps')
return CPU_DEVICE
def choose_precision(device) -> str:
def choose_precision(device: torch.device) -> str:
'''Returns an appropriate precision for the given torch device'''
if device.type == 'cuda':
device_name = torch.cuda.get_device_name(device)
@ -21,7 +27,7 @@ def choose_precision(device) -> str:
return 'float16'
return 'float32'
def torch_dtype(device) -> torch.dtype:
def torch_dtype(device: torch.device) -> torch.dtype:
if Globals.full_precision:
return torch.float32
if choose_precision(device) == 'float16':
@ -36,3 +42,13 @@ def choose_autocast(precision):
if precision == 'autocast' or precision == 'float16':
return autocast
return nullcontext
def normalize_device(device: str | torch.device) -> torch.device:
"""Ensure device has a device index defined, if appropriate."""
device = torch.device(device)
if device.index is None:
# cuda might be the only torch backend that currently uses the device index?
# I don't see anything like `current_device` for cpu or mps.
if device.type == 'cuda':
device = torch.device(device.type, torch.cuda.current_device())
return device
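A short usage sketch for the reworked helpers, assuming the imports resolve as defined in this file. `torch.cuda.mem_get_info` expects an indexed CUDA device, which is exactly what `normalize_device` guarantees.

```python
import torch
from ldm.invoke.devices import (
    choose_torch_device, choose_precision, torch_dtype, normalize_device,
)

device = choose_torch_device()        # now a torch.device, no longer a plain str
precision = choose_precision(device)  # 'float16' on recent CUDA GPUs, else 'float32'
dtype = torch_dtype(device)

if device.type == 'cuda':
    # mem_get_info wants an explicit device index; normalize_device supplies one.
    free_bytes, total_bytes = torch.cuda.mem_get_info(normalize_device(device))
    print(f'{device} ({precision}, {dtype}): {free_bytes / 2**30:.1f} GiB free')
```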

View File

@ -64,6 +64,7 @@ class Generator:
def generate(self,prompt,init_image,width,height,sampler, iterations=1,seed=None,
image_callback=None, step_callback=None, threshold=0.0, perlin=0.0,
h_symmetry_time_pct=None, v_symmetry_time_pct=None,
safety_checker:dict=None,
free_gpu_mem: bool=False,
**kwargs):
@ -81,6 +82,8 @@ class Generator:
step_callback = step_callback,
threshold = threshold,
perlin = perlin,
h_symmetry_time_pct = h_symmetry_time_pct,
v_symmetry_time_pct = v_symmetry_time_pct,
attention_maps_callback = attention_maps_callback,
**kwargs
)
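The base class only threads the two new keyword arguments through to the concrete generators. A generic, runnable sketch of that forwarding pattern follows; all names here are illustrative, not the project's API.

```python
def generate(prompt, h_symmetry_time_pct=None, v_symmetry_time_pct=None, **kwargs):
    # Base-class layer: pass the symmetry settings through untouched.
    return get_make_image(
        prompt,
        h_symmetry_time_pct=h_symmetry_time_pct,
        v_symmetry_time_pct=v_symmetry_time_pct,
        **kwargs,
    )

def get_make_image(prompt, **settings):
    # Concrete-generator layer: the settings arrive intact.
    return {'prompt': prompt, **settings}

print(generate('stone arches', h_symmetry_time_pct=0.25))
# -> {'prompt': 'stone arches', 'h_symmetry_time_pct': 0.25, 'v_symmetry_time_pct': None}
```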

View File

@ -28,6 +28,7 @@ from typing_extensions import ParamSpec
from ldm.invoke.globals import Globals
from ldm.models.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent, PostprocessingSettings
from ldm.modules.textual_inversion_manager import TextualInversionManager
from ..devices import normalize_device, CPU_DEVICE
from ..offloading import LazilyLoadedModelGroup, FullyLoadedModelGroup, ModelGroup
from ...models.diffusion.cross_attention_map_saving import AttentionMapSaver
from ...modules.prompt_to_embeddings_converter import WeightedPromptFragmentsToEmbeddingsConverter
@ -319,7 +320,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
if self.device.type == 'cpu' or self.device.type == 'mps':
mem_free = psutil.virtual_memory().free
elif self.device.type == 'cuda':
mem_free, _ = torch.cuda.mem_get_info(self.device)
mem_free, _ = torch.cuda.mem_get_info(normalize_device(self.device))
else:
raise ValueError(f"unrecognized device {self.device}")
# input tensor of [1, 4, h/8, w/8]
@ -380,9 +381,10 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
self._model_group.ready()
def to(self, torch_device: Optional[Union[str, torch.device]] = None):
# overridden method; types match the superclass.
if torch_device is None:
return self
self._model_group.set_device(torch_device)
self._model_group.set_device(torch.device(torch_device))
self._model_group.ready()
@property
@ -689,8 +691,8 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
if device.type == 'mps':
# workaround for torch MPS bug that has been fixed in https://github.com/kulinseth/pytorch/pull/222
# TODO remove this workaround once kulinseth#222 is merged to pytorch mainline
self.vae.to('cpu')
init_image = init_image.to('cpu')
self.vae.to(CPU_DEVICE)
init_image = init_image.to(CPU_DEVICE)
else:
self._model_group.load(self.vae)
init_latent_dist = self.vae.encode(init_image).latent_dist

View File

@ -16,8 +16,8 @@ class Img2Img(Generator):
self.init_latent = None # by get_noise()
def get_make_image(self,prompt,sampler,steps,cfg_scale,ddim_eta,
conditioning,init_image,strength,step_callback=None,threshold=0.0,perlin=0.0,
attention_maps_callback=None,
conditioning,init_image,strength,step_callback=None,threshold=0.0,warmup=0.2,perlin=0.0,
h_symmetry_time_pct=None,v_symmetry_time_pct=None,attention_maps_callback=None,
**kwargs):
"""
Returns a function returning an image derived from the prompt and the initial image
@ -33,8 +33,13 @@ class Img2Img(Generator):
conditioning_data = (
ConditioningData(
uc, c, cfg_scale, extra_conditioning_info,
postprocessing_settings = PostprocessingSettings(threshold, warmup=0.2) if threshold else None)
.add_scheduler_args_if_applicable(pipeline.scheduler, eta=ddim_eta))
postprocessing_settings=PostprocessingSettings(
threshold=threshold,
warmup=warmup,
h_symmetry_time_pct=h_symmetry_time_pct,
v_symmetry_time_pct=v_symmetry_time_pct
)
).add_scheduler_args_if_applicable(pipeline.scheduler, eta=ddim_eta))
def make_image(x_T):

View File

@ -15,8 +15,8 @@ class Txt2Img(Generator):
@torch.no_grad()
def get_make_image(self,prompt,sampler,steps,cfg_scale,ddim_eta,
conditioning,width,height,step_callback=None,threshold=0.0,perlin=0.0,
attention_maps_callback=None,
conditioning,width,height,step_callback=None,threshold=0.0,warmup=0.2,perlin=0.0,
h_symmetry_time_pct=None,v_symmetry_time_pct=None,attention_maps_callback=None,
**kwargs):
"""
Returns a function returning an image derived from the prompt and the initial image
@ -33,8 +33,13 @@ class Txt2Img(Generator):
conditioning_data = (
ConditioningData(
uc, c, cfg_scale, extra_conditioning_info,
postprocessing_settings = PostprocessingSettings(threshold, warmup=0.2) if threshold else None)
.add_scheduler_args_if_applicable(pipeline.scheduler, eta=ddim_eta))
postprocessing_settings=PostprocessingSettings(
threshold=threshold,
warmup=warmup,
h_symmetry_time_pct=h_symmetry_time_pct,
v_symmetry_time_pct=v_symmetry_time_pct
)
).add_scheduler_args_if_applicable(pipeline.scheduler, eta=ddim_eta))
def make_image(x_T) -> PIL.Image.Image:
pipeline_output = pipeline.image_from_embeddings(
@ -44,8 +49,10 @@ class Txt2Img(Generator):
conditioning_data=conditioning_data,
callback=step_callback,
)
if pipeline_output.attention_map_saver is not None and attention_maps_callback is not None:
attention_maps_callback(pipeline_output.attention_map_saver)
return pipeline.numpy_to_pil(pipeline_output.images)[0]
return make_image
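For orientation, a hedged sketch of the `PostprocessingSettings` payload the generators now construct. The field list mirrors the dataclass extended in `shared_invokeai_diffusion.py` later in this commit; the decorator and concrete values are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PostprocessingSettings:  # illustrative stand-in for the real class
    threshold: float
    warmup: float
    h_symmetry_time_pct: Optional[float]
    v_symmetry_time_pct: Optional[float]

settings = PostprocessingSettings(
    threshold=0.0,            # latent thresholding off
    warmup=0.2,               # threshold ramps in over the first 20% of steps
    h_symmetry_time_pct=0.3,  # apply horizontal symmetry 30% of the way through
    v_symmetry_time_pct=None, # vertical symmetry disabled
)
print(settings)
```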

View File

@ -21,12 +21,14 @@ class Txt2Img2Img(Generator):
def get_make_image(self, prompt:str, sampler, steps:int, cfg_scale:float, ddim_eta,
conditioning, width:int, height:int, strength:float,
step_callback:Optional[Callable]=None, threshold=0.0, **kwargs):
step_callback:Optional[Callable]=None, threshold=0.0, warmup=0.2, perlin=0.0,
h_symmetry_time_pct=None, v_symmetry_time_pct=None, attention_maps_callback=None, **kwargs):
"""
Returns a function returning an image derived from the prompt and the initial image
Return value depends on the seed at the time you call it
kwargs are 'width' and 'height'
"""
self.perlin = perlin
# noinspection PyTypeChecker
pipeline: StableDiffusionGeneratorPipeline = self.model
@ -36,8 +38,13 @@ class Txt2Img2Img(Generator):
conditioning_data = (
ConditioningData(
uc, c, cfg_scale, extra_conditioning_info,
postprocessing_settings = PostprocessingSettings(threshold=threshold, warmup=0.2) if threshold else None)
.add_scheduler_args_if_applicable(pipeline.scheduler, eta=ddim_eta))
postprocessing_settings = PostprocessingSettings(
threshold=threshold,
warmup=0.2,
h_symmetry_time_pct=h_symmetry_time_pct,
v_symmetry_time_pct=v_symmetry_time_pct
)
).add_scheduler_args_if_applicable(pipeline.scheduler, eta=ddim_eta))
def make_image(x_T):
@ -69,19 +76,28 @@ class Txt2Img2Img(Generator):
if clear_cuda_cache is not None:
clear_cuda_cache()
second_pass_noise = self.get_noise_like(resized_latents)
second_pass_noise = self.get_noise_like(resized_latents, override_perlin=True)
# Clear symmetry for the second pass
from dataclasses import replace
new_postprocessing_settings = replace(conditioning_data.postprocessing_settings, h_symmetry_time_pct=None)
new_postprocessing_settings = replace(new_postprocessing_settings, v_symmetry_time_pct=None)
new_conditioning_data = replace(conditioning_data, postprocessing_settings=new_postprocessing_settings)
verbosity = get_verbosity()
set_verbosity_error()
pipeline_output = pipeline.img2img_from_latents_and_embeddings(
resized_latents,
num_inference_steps=steps,
conditioning_data=conditioning_data,
conditioning_data=new_conditioning_data,
strength=strength,
noise=second_pass_noise,
callback=step_callback)
set_verbosity(verbosity)
if pipeline_output.attention_map_saver is not None and attention_maps_callback is not None:
attention_maps_callback(pipeline_output.attention_map_saver)
return pipeline.numpy_to_pil(pipeline_output.images)[0]
@ -95,13 +111,13 @@ class Txt2Img2Img(Generator):
return make_image
def get_noise_like(self, like: torch.Tensor):
def get_noise_like(self, like: torch.Tensor, override_perlin: bool=False):
device = like.device
if device.type == 'mps':
x = torch.randn_like(like, device='cpu', dtype=self.torch_dtype()).to(device)
else:
x = torch.randn_like(like, device=device, dtype=self.torch_dtype())
if self.perlin > 0.0:
if self.perlin > 0.0 and not override_perlin:
shape = like.shape
x = (1-self.perlin)*x + self.perlin*self.get_perlin_noise(shape[3], shape[2])
return x
@ -139,6 +155,9 @@ class Txt2Img2Img(Generator):
shape = (1, channels,
scaled_height // self.downsampling_factor, scaled_width // self.downsampling_factor)
if self.use_mps_noise or device.type == 'mps':
return torch.randn(shape, dtype=self.torch_dtype(), device='cpu').to(device)
tensor = torch.empty(size=shape, device='cpu')
tensor = self.get_noise_like(like=tensor).to(device)
else:
return torch.randn(shape, dtype=self.torch_dtype(), device=device)
tensor = torch.empty(size=shape, device=device)
tensor = self.get_noise_like(like=tensor)
return tensor
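The second-pass reset above relies on `dataclasses.replace`, which copies a dataclass instance with selected fields overridden, so the high-res pass does not apply symmetry a second time. A tiny self-contained demonstration with generic names, not the project's class:

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class Settings:  # generic stand-in
    h_symmetry_time_pct: Optional[float] = 0.3
    v_symmetry_time_pct: Optional[float] = None

first_pass = Settings()
# Clear both symmetry fields for the second (high-res) pass.
second_pass = replace(first_pass, h_symmetry_time_pct=None, v_symmetry_time_pct=None)
print(first_pass)   # Settings(h_symmetry_time_pct=0.3, v_symmetry_time_pct=None)
print(second_pass)  # Settings(h_symmetry_time_pct=None, v_symmetry_time_pct=None)
```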

View File

@ -419,8 +419,7 @@ def run_gui(args: Namespace):
mergeapp.run()
args = mergeapp.merge_arguments
print(f'DEBUG: {args}')
#merge_diffusion_models_and_commit(**args)
merge_diffusion_models_and_commit(**args)
print(f'>> Models merged into new model: "{args["merged_model_name"]}".')

View File

@ -30,6 +30,7 @@ from omegaconf import OmegaConf
from omegaconf.dictconfig import DictConfig
from picklescan.scanner import scan_file_path
from ldm.invoke.devices import CPU_DEVICE
from ldm.invoke.generator.diffusers_pipeline import \
StableDiffusionGeneratorPipeline
from ldm.invoke.globals import (Globals, global_autoscan_dir, global_cache_dir,
@ -47,7 +48,7 @@ class ModelManager(object):
def __init__(
self,
config: OmegaConf,
device_type: str | torch.device = "cpu",
device_type: torch.device = CPU_DEVICE,
precision: str = "float16",
max_loaded_models=DEFAULT_MAX_MODELS,
sequential_offload = False
@ -675,7 +676,7 @@ class ModelManager(object):
"""
if str(weights).startswith(("http:", "https:")):
model_name = model_name or url_attachment_name(weights)
weights_path = self._resolve_path(weights, "models/ldm/stable-diffusion-v1")
config_path = self._resolve_path(config, "configs/stable-diffusion")
@ -996,25 +997,25 @@ class ModelManager(object):
self.models.pop(model_name, None)
def _model_to_cpu(self, model):
if self.device == "cpu":
if self.device == CPU_DEVICE:
return model
if isinstance(model, StableDiffusionGeneratorPipeline):
model.offload_all()
return model
model.cond_stage_model.device = "cpu"
model.to("cpu")
model.cond_stage_model.device = CPU_DEVICE
model.to(CPU_DEVICE)
for submodel in ("first_stage_model", "cond_stage_model", "model"):
try:
getattr(model, submodel).to("cpu")
getattr(model, submodel).to(CPU_DEVICE)
except AttributeError:
pass
return model
def _model_from_cpu(self, model):
if self.device == "cpu":
if self.device == CPU_DEVICE:
return model
if isinstance(model, StableDiffusionGeneratorPipeline):
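The offloading above simply moves the pipeline (or each submodel, in the legacy ckpt case) between CPU and GPU. A generic sketch of the same idea, assuming a plain `torch.nn.Module` rather than InvokeAI's pipeline classes:

```python
# Generic sketch of the offload pattern: park a model on CPU to free
# GPU memory, move it back before use. Not InvokeAI's actual classes.
import torch

CPU_DEVICE = torch.device("cpu")

def to_cpu(model: torch.nn.Module) -> torch.nn.Module:
    return model.to(CPU_DEVICE)

def to_gpu(model: torch.nn.Module, device: torch.device) -> torch.nn.Module:
    return model.to(device)

model = torch.nn.Linear(8, 8)
if torch.cuda.is_available():
    model = to_gpu(model, torch.device("cuda"))
model = to_cpu(model)  # offloaded; weights now live in host RAM
```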

View File

@ -58,9 +58,11 @@ COMMANDS = (
'--inpaint_replace','-r',
'--png_compression','-z',
'--text_mask','-tm',
'--h_symmetry_time_pct',
'--v_symmetry_time_pct',
'!fix','!fetch','!replay','!history','!search','!clear',
'!models','!switch','!import_model','!optimize_model','!convert_model','!edit_model','!del_model',
'!mask',
'!mask','!triggers',
)
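Adding the symmetry flags and `!triggers` to this tuple makes them tab-completable. The completer works by prefix-matching the partial word against these tuples; a toy sketch of that matching, independent of readline and of InvokeAI's `Completer` class:

```python
# Toy prefix-matcher (not the InvokeAI Completer): complete a partial
# flag against the COMMANDS tuple, as readline tab-completion does.
COMMANDS = ('--h_symmetry_time_pct', '--v_symmetry_time_pct',
            '!mask', '!triggers')

def complete(text: str) -> list[str]:
    return [c for c in COMMANDS if c.startswith(text)]

print(complete('--h'))  # ['--h_symmetry_time_pct']
print(complete('!'))    # ['!mask', '!triggers']
```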
MODEL_COMMANDS = (
'!switch',
@ -138,7 +140,7 @@ class Completer(object):
elif re.match('^'+'|'.join(MODEL_COMMANDS),buffer):
self.matches= self._model_completions(text, state)
# looking for a ckpt model
elif re.match('^'+'|'.join(CKPT_MODEL_COMMANDS),buffer):
self.matches= self._model_completions(text, state, ckpt_only=True)
@ -255,7 +257,7 @@ class Completer(object):
update our list of models
'''
self.models = models
def _seed_completions(self, text, state):
m = re.search('(-S\s?|--seed[=\s]?)(\d*)',text)
if m:

View File

@ -18,6 +18,8 @@ from ldm.models.diffusion.cross_attention_map_saving import AttentionMapSaver
class PostprocessingSettings:
threshold: float
warmup: float
h_symmetry_time_pct: Optional[float]
v_symmetry_time_pct: Optional[float]
class InvokeAIDiffuserComponent:
@ -30,7 +32,7 @@ class InvokeAIDiffuserComponent:
* Hybrid conditioning (used for inpainting)
'''
debug_thresholding = False
last_percent_through = 0.0
@dataclass
class ExtraConditioningInfo:
@ -56,6 +58,7 @@ class InvokeAIDiffuserComponent:
self.is_running_diffusers = is_running_diffusers
self.model_forward_callback = model_forward_callback
self.cross_attention_control_context = None
self.last_percent_through = 0.0
@contextmanager
def custom_attention_context(self,
@ -164,6 +167,7 @@ class InvokeAIDiffuserComponent:
if postprocessing_settings is not None:
percent_through = self.calculate_percent_through(sigma, step_index, total_step_count)
latents = self.apply_threshold(postprocessing_settings, latents, percent_through)
latents = self.apply_symmetry(postprocessing_settings, latents, percent_through)
return latents
def calculate_percent_through(self, sigma, step_index, total_step_count):
@ -292,8 +296,12 @@ class InvokeAIDiffuserComponent:
self,
postprocessing_settings: PostprocessingSettings,
latents: torch.Tensor,
percent_through
percent_through: float
) -> torch.Tensor:
if postprocessing_settings.threshold is None or postprocessing_settings.threshold == 0.0:
return latents
threshold = postprocessing_settings.threshold
warmup = postprocessing_settings.warmup
@ -342,6 +350,56 @@ class InvokeAIDiffuserComponent:
return latents
def apply_symmetry(
self,
postprocessing_settings: PostprocessingSettings,
latents: torch.Tensor,
percent_through: float
) -> torch.Tensor:
# Reset our last percent through if this is our first step.
if percent_through == 0.0:
self.last_percent_through = 0.0
if postprocessing_settings is None:
return latents
# Check for out of bounds
h_symmetry_time_pct = postprocessing_settings.h_symmetry_time_pct
if (h_symmetry_time_pct is not None and (h_symmetry_time_pct <= 0.0 or h_symmetry_time_pct > 1.0)):
h_symmetry_time_pct = None
v_symmetry_time_pct = postprocessing_settings.v_symmetry_time_pct
if (v_symmetry_time_pct is not None and (v_symmetry_time_pct <= 0.0 or v_symmetry_time_pct > 1.0)):
v_symmetry_time_pct = None
dev = latents.device.type
latents = latents.to(device='cpu')
if (
h_symmetry_time_pct is not None and
self.last_percent_through < h_symmetry_time_pct and
percent_through >= h_symmetry_time_pct
):
# Horizontal symmetry occurs on the 3rd dimension of the latent
width = latents.shape[3]
x_flipped = torch.flip(latents, dims=[3])
latents = torch.cat([latents[:, :, :, 0:int(width/2)], x_flipped[:, :, :, int(width/2):int(width)]], dim=3)
if (
v_symmetry_time_pct is not None and
self.last_percent_through < v_symmetry_time_pct and
percent_through >= v_symmetry_time_pct
):
# Vertical symmetry occurs on the 2nd dimension of the latent
height = latents.shape[2]
y_flipped = torch.flip(latents, dims=[2])
latents = torch.cat([latents[:, :, 0:int(height / 2)], y_flipped[:, :, int(height / 2):int(height)]], dim=2)
self.last_percent_through = percent_through
return latents.to(device=dev)
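The mirroring itself is just `torch.flip` plus `torch.cat`: keep the left (or top) half of the latent and replace the other half with its reflection. The same operation on a tiny tensor, outside the diffusion loop:

```python
# Horizontal mirroring as in apply_symmetry, on a 1x1x2x4 toy latent:
# keep the left half, append the flipped right half.
import torch

latents = torch.arange(8.0).reshape(1, 1, 2, 4)
width = latents.shape[3]
x_flipped = torch.flip(latents, dims=[3])
mirrored = torch.cat(
    [latents[:, :, :, : width // 2], x_flipped[:, :, :, width // 2 :]],
    dim=3,
)
print(mirrored[0, 0])
# tensor([[0., 1., 1., 0.],
#         [4., 5., 5., 4.]])
```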
def estimate_percent_through(self, step_index, sigma):
if step_index is not None and self.cross_attention_control_context is not None:
# percent_through will never reach 1.0 (but this is intended)

View File

@ -1,11 +1,12 @@
import os
import traceback
from typing import Optional
from dataclasses import dataclass
from pathlib import Path
from typing import Optional, Union
import torch
from dataclasses import dataclass
from picklescan.scanner import scan_file_path
from transformers import CLIPTokenizer, CLIPTextModel
from transformers import CLIPTextModel, CLIPTokenizer
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
@ -21,11 +22,14 @@ class TextualInversion:
def embedding_vector_length(self) -> int:
return self.embedding.shape[0]
class TextualInversionManager():
def __init__(self,
tokenizer: CLIPTokenizer,
text_encoder: CLIPTextModel,
full_precision: bool=True):
class TextualInversionManager:
def __init__(
self,
tokenizer: CLIPTokenizer,
text_encoder: CLIPTextModel,
full_precision: bool = True,
):
self.tokenizer = tokenizer
self.text_encoder = text_encoder
self.full_precision = full_precision
@ -38,47 +42,73 @@ class TextualInversionManager():
if concept_name in self.hf_concepts_library.concepts_loaded:
continue
trigger = self.hf_concepts_library.concept_to_trigger(concept_name)
if self.has_textual_inversion_for_trigger_string(trigger) \
or self.has_textual_inversion_for_trigger_string(concept_name) \
or self.has_textual_inversion_for_trigger_string(f'<{concept_name}>'): # in case a token with literal angle brackets encountered
print(f'>> Loaded local embedding for trigger {concept_name}')
if (
self.has_textual_inversion_for_trigger_string(trigger)
or self.has_textual_inversion_for_trigger_string(concept_name)
or self.has_textual_inversion_for_trigger_string(f"<{concept_name}>")
): # in case a token with literal angle brackets is encountered
print(f">> Loaded local embedding for trigger {concept_name}")
continue
bin_file = self.hf_concepts_library.get_concept_model_path(concept_name)
if not bin_file:
continue
print(f'>> Loaded remote embedding for trigger {concept_name}')
print(f">> Loaded remote embedding for trigger {concept_name}")
self.load_textual_inversion(bin_file)
self.hf_concepts_library.concepts_loaded[concept_name]=True
self.hf_concepts_library.concepts_loaded[concept_name] = True
def get_all_trigger_strings(self) -> list[str]:
return [ti.trigger_string for ti in self.textual_inversions]
def load_textual_inversion(self, ckpt_path, defer_injecting_tokens: bool=False):
if str(ckpt_path).endswith('.DS_Store'):
def load_textual_inversion(self, ckpt_path: Union[str, Path], defer_injecting_tokens: bool = False):
ckpt_path = Path(ckpt_path)
if str(ckpt_path).endswith(".DS_Store"):
return
try:
scan_result = scan_file_path(ckpt_path)
scan_result = scan_file_path(str(ckpt_path))
if scan_result.infected_files == 1:
print(f'\n### Security Issues Found in Model: {scan_result.issues_count}')
print('### For your safety, InvokeAI will not load this embed.')
print(
f"\n### Security Issues Found in Model: {scan_result.issues_count}"
)
print("### For your safety, InvokeAI will not load this embed.")
return
except Exception:
print(f"### WARNING::: Invalid or corrupt embeddings found. Ignoring: {ckpt_path}")
print(
f"### {ckpt_path.parents[0].name}/{ckpt_path.name} is damaged or corrupt."
)
return
embedding_info = self._parse_embedding(str(ckpt_path))
if embedding_info is None:
# We've already put out an error message about the bad embedding in _parse_embedding, so just return.
return
elif (
self.text_encoder.get_input_embeddings().weight.data[0].shape[0]
!= embedding_info["embedding"].shape[0]
):
print(
f"** Notice: {ckpt_path.parents[0].name}/{ckpt_path.name} was trained on a model with a different token dimension. It can't be used with this model."
)
return
embedding_info = self._parse_embedding(ckpt_path)
if embedding_info:
try:
self._add_textual_inversion(embedding_info['name'],
embedding_info['embedding'],
defer_injecting_tokens=defer_injecting_tokens)
self._add_textual_inversion(
embedding_info["name"],
embedding_info["embedding"],
defer_injecting_tokens=defer_injecting_tokens,
)
except ValueError as e:
print(f' | Ignoring incompatible embedding {embedding_info["name"]}')
print(f' | The error was {str(e)}')
print(f" | The error was {str(e)}")
else:
print(f'>> Failed to load embedding located at {ckpt_path}. Unsupported file.')
print(
f">> Failed to load embedding located at {str(ckpt_path)}. Unsupported file."
)
def _add_textual_inversion(self, trigger_str, embedding, defer_injecting_tokens=False) -> TextualInversion:
def _add_textual_inversion(
self, trigger_str, embedding, defer_injecting_tokens=False
) -> TextualInversion:
"""
Add a textual inversion to be recognised.
:param trigger_str: The trigger text in the prompt that activates this textual inversion. If unknown to the embedder's tokenizer, will be added.
@ -86,46 +116,59 @@ class TextualInversionManager():
:return: The token id for the added embedding, either existing or newly-added.
"""
if trigger_str in [ti.trigger_string for ti in self.textual_inversions]:
print(f">> TextualInversionManager refusing to overwrite already-loaded token '{trigger_str}'")
print(
f">> TextualInversionManager refusing to overwrite already-loaded token '{trigger_str}'"
)
return
if not self.full_precision:
embedding = embedding.half()
if len(embedding.shape) == 1:
embedding = embedding.unsqueeze(0)
elif len(embedding.shape) > 2:
raise ValueError(f"TextualInversionManager cannot add {trigger_str} because the embedding shape {embedding.shape} is incorrect. The embedding must have shape [token_dim] or [V, token_dim] where V is vector length and token_dim is 768 for SD1 or 1280 for SD2.")
raise ValueError(
f"TextualInversionManager cannot add {trigger_str} because the embedding shape {embedding.shape} is incorrect. The embedding must have shape [token_dim] or [V, token_dim] where V is vector length and token_dim is 768 for SD1 or 1280 for SD2."
)
try:
ti = TextualInversion(
trigger_string=trigger_str,
embedding=embedding
)
ti = TextualInversion(trigger_string=trigger_str, embedding=embedding)
if not defer_injecting_tokens:
self._inject_tokens_and_assign_embeddings(ti)
self.textual_inversions.append(ti)
return ti
except ValueError as e:
if str(e).startswith('Warning'):
if str(e).startswith("Warning"):
print(f">> {str(e)}")
else:
traceback.print_exc()
print(f">> TextualInversionManager was unable to add a textual inversion with trigger string {trigger_str}.")
print(
f">> TextualInversionManager was unable to add a textual inversion with trigger string {trigger_str}."
)
raise
def _inject_tokens_and_assign_embeddings(self, ti: TextualInversion) -> int:
if ti.trigger_token_id is not None:
raise ValueError(f"Tokens already injected for textual inversion with trigger '{ti.trigger_string}'")
raise ValueError(
f"Tokens already injected for textual inversion with trigger '{ti.trigger_string}'"
)
trigger_token_id = self._get_or_create_token_id_and_assign_embedding(ti.trigger_string, ti.embedding[0])
trigger_token_id = self._get_or_create_token_id_and_assign_embedding(
ti.trigger_string, ti.embedding[0]
)
if ti.embedding_vector_length > 1:
# for embeddings with vector length > 1
pad_token_strings = [ti.trigger_string + "-!pad-" + str(pad_index) for pad_index in range(1, ti.embedding_vector_length)]
pad_token_strings = [
ti.trigger_string + "-!pad-" + str(pad_index)
for pad_index in range(1, ti.embedding_vector_length)
]
# todo: batched UI for faster loading when vector length >2
pad_token_ids = [self._get_or_create_token_id_and_assign_embedding(pad_token_str, ti.embedding[1 + i]) \
for (i, pad_token_str) in enumerate(pad_token_strings)]
pad_token_ids = [
self._get_or_create_token_id_and_assign_embedding(
pad_token_str, ti.embedding[1 + i]
)
for (i, pad_token_str) in enumerate(pad_token_strings)
]
else:
pad_token_ids = []
@ -133,7 +176,6 @@ class TextualInversionManager():
ti.pad_token_ids = pad_token_ids
return ti.trigger_token_id
def has_textual_inversion_for_trigger_string(self, trigger_string: str) -> bool:
try:
ti = self.get_textual_inversion_for_trigger_string(trigger_string)
@ -141,32 +183,43 @@ class TextualInversionManager():
except StopIteration:
return False
def get_textual_inversion_for_trigger_string(self, trigger_string: str) -> TextualInversion:
return next(ti for ti in self.textual_inversions if ti.trigger_string == trigger_string)
def get_textual_inversion_for_trigger_string(
self, trigger_string: str
) -> TextualInversion:
return next(
ti for ti in self.textual_inversions if ti.trigger_string == trigger_string
)
def get_textual_inversion_for_token_id(self, token_id: int) -> TextualInversion:
return next(ti for ti in self.textual_inversions if ti.trigger_token_id == token_id)
return next(
ti for ti in self.textual_inversions if ti.trigger_token_id == token_id
)
def create_deferred_token_ids_for_any_trigger_terms(self, prompt_string: str) -> list[int]:
def create_deferred_token_ids_for_any_trigger_terms(
self, prompt_string: str
) -> list[int]:
injected_token_ids = []
for ti in self.textual_inversions:
if ti.trigger_token_id is None and ti.trigger_string in prompt_string:
if ti.embedding_vector_length > 1:
print(f">> Preparing tokens for textual inversion {ti.trigger_string}...")
print(
f">> Preparing tokens for textual inversion {ti.trigger_string}..."
)
try:
self._inject_tokens_and_assign_embeddings(ti)
except ValueError as e:
print(f' | Ignoring incompatible embedding trigger {ti.trigger_string}')
print(f' | The error was {str(e)}')
print(
f" | Ignoring incompatible embedding trigger {ti.trigger_string}"
)
print(f" | The error was {str(e)}")
continue
injected_token_ids.append(ti.trigger_token_id)
injected_token_ids.extend(ti.pad_token_ids)
return injected_token_ids
def expand_textual_inversion_token_ids_if_necessary(self, prompt_token_ids: list[int]) -> list[int]:
def expand_textual_inversion_token_ids_if_necessary(
self, prompt_token_ids: list[int]
) -> list[int]:
"""
Insert padding tokens as necessary into the passed-in list of token ids to match any textual inversions it includes.
@ -181,20 +234,31 @@ class TextualInversionManager():
raise ValueError("prompt_token_ids must not start with bos_token_id")
if prompt_token_ids[-1] == self.tokenizer.eos_token_id:
raise ValueError("prompt_token_ids must not end with eos_token_id")
textual_inversion_trigger_token_ids = [ti.trigger_token_id for ti in self.textual_inversions]
textual_inversion_trigger_token_ids = [
ti.trigger_token_id for ti in self.textual_inversions
]
prompt_token_ids = prompt_token_ids.copy()
for i, token_id in reversed(list(enumerate(prompt_token_ids))):
if token_id in textual_inversion_trigger_token_ids:
textual_inversion = next(ti for ti in self.textual_inversions if ti.trigger_token_id == token_id)
for pad_idx in range(0, textual_inversion.embedding_vector_length-1):
prompt_token_ids.insert(i+pad_idx+1, textual_inversion.pad_token_ids[pad_idx])
textual_inversion = next(
ti
for ti in self.textual_inversions
if ti.trigger_token_id == token_id
)
for pad_idx in range(0, textual_inversion.embedding_vector_length - 1):
prompt_token_ids.insert(
i + pad_idx + 1, textual_inversion.pad_token_ids[pad_idx]
)
return prompt_token_ids
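Concretely: if a trigger token has vector length 3, its id in the prompt must be followed by its two pad-token ids so the text encoder sees all three learned vectors. A self-contained illustration with made-up token ids:

```python
# Toy version of the pad-token expansion with invented ids:
# trigger id 500 carries pad ids 501 and 502 (vector length 3).
prompt_token_ids = [11, 500, 42]
trigger_id, pad_ids = 500, [501, 502]

expanded = prompt_token_ids.copy()
# Walk in reverse so insertions don't shift unprocessed indices.
for i, token_id in reversed(list(enumerate(expanded))):
    if token_id == trigger_id:
        for pad_idx, pad_id in enumerate(pad_ids):
            expanded.insert(i + pad_idx + 1, pad_id)
print(expanded)  # [11, 500, 501, 502, 42]
```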
def _get_or_create_token_id_and_assign_embedding(self, token_str: str, embedding: torch.Tensor) -> int:
def _get_or_create_token_id_and_assign_embedding(
self, token_str: str, embedding: torch.Tensor
) -> int:
if len(embedding.shape) != 1:
raise ValueError("Embedding has incorrect shape - must be [token_dim] where token_dim is 768 for SD1 or 1280 for SD2")
raise ValueError(
"Embedding has incorrect shape - must be [token_dim] where token_dim is 768 for SD1 or 1280 for SD2"
)
existing_token_id = self.tokenizer.convert_tokens_to_ids(token_str)
if existing_token_id == self.tokenizer.unk_token_id:
num_tokens_added = self.tokenizer.add_tokens(token_str)
@ -207,66 +271,79 @@ class TextualInversionManager():
token_id = self.tokenizer.convert_tokens_to_ids(token_str)
if token_id == self.tokenizer.unk_token_id:
raise RuntimeError(f"Unable to find token id for token '{token_str}'")
if self.text_encoder.get_input_embeddings().weight.data[token_id].shape != embedding.shape:
raise ValueError(f"Warning. Cannot load embedding for {token_str}. It was trained on a model with token dimension {embedding.shape[0]}, but the current model has token dimension {self.text_encoder.get_input_embeddings().weight.data[token_id].shape[0]}.")
if (
self.text_encoder.get_input_embeddings().weight.data[token_id].shape
!= embedding.shape
):
raise ValueError(
f"Warning. Cannot load embedding for {token_str}. It was trained on a model with token dimension {embedding.shape[0]}, but the current model has token dimension {self.text_encoder.get_input_embeddings().weight.data[token_id].shape[0]}."
)
self.text_encoder.get_input_embeddings().weight.data[token_id] = embedding
return token_id
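The pattern above is the standard Hugging Face recipe for injecting a new token: add it to the tokenizer, grow the text encoder's embedding matrix, then overwrite the new row. A condensed sketch using the `transformers` API (the checkpoint name is illustrative and the embedding is random rather than trained):

```python
# Condensed sketch of token injection with transformers; the random
# tensor stands in for a trained textual-inversion vector.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

tokenizer.add_tokens("<my-trigger>")
text_encoder.resize_token_embeddings(len(tokenizer))

token_id = tokenizer.convert_tokens_to_ids("<my-trigger>")
embedding = torch.randn(768)  # token_dim for SD1-family encoders
text_encoder.get_input_embeddings().weight.data[token_id] = embedding
```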
def _parse_embedding(self, embedding_file: str):
file_type = embedding_file.split('.')[-1]
if file_type == 'pt':
file_type = embedding_file.split(".")[-1]
if file_type == "pt":
return self._parse_embedding_pt(embedding_file)
elif file_type == 'bin':
elif file_type == "bin":
return self._parse_embedding_bin(embedding_file)
else:
print(f'>> Not a recognized embedding file: {embedding_file}')
print(f">> Not a recognized embedding file: {embedding_file}")
return None
def _parse_embedding_pt(self, embedding_file):
embedding_ckpt = torch.load(embedding_file, map_location='cpu')
embedding_ckpt = torch.load(embedding_file, map_location="cpu")
embedding_info = {}
# Check if valid embedding file
if 'string_to_token' and 'string_to_param' in embedding_ckpt:
if "string_to_token" in embedding_ckpt and "string_to_param" in embedding_ckpt:
# Catch variants that do not have the expected keys or values.
try:
embedding_info['name'] = embedding_ckpt['name'] or os.path.basename(os.path.splitext(embedding_file)[0])
embedding_info["name"] = embedding_ckpt["name"] or os.path.basename(
os.path.splitext(embedding_file)[0]
)
# Check the number of embeddings and warn the user that only the first will be used
embedding_info['num_of_embeddings'] = len(embedding_ckpt["string_to_token"])
if embedding_info['num_of_embeddings'] > 1:
print('>> More than 1 embedding found. Will use the first one')
embedding_info["num_of_embeddings"] = len(
embedding_ckpt["string_to_token"]
)
if embedding_info["num_of_embeddings"] > 1:
print(">> More than 1 embedding found. Will use the first one")
embedding = list(embedding_ckpt['string_to_param'].values())[0]
except (AttributeError,KeyError):
embedding = list(embedding_ckpt["string_to_param"].values())[0]
except (AttributeError, KeyError):
return self._handle_broken_pt_variants(embedding_ckpt, embedding_file)
embedding_info['embedding'] = embedding
embedding_info['num_vectors_per_token'] = embedding.size()[0]
embedding_info['token_dim'] = embedding.size()[1]
embedding_info["embedding"] = embedding
embedding_info["num_vectors_per_token"] = embedding.size()[0]
embedding_info["token_dim"] = embedding.size()[1]
try:
embedding_info['trained_steps'] = embedding_ckpt['step']
embedding_info['trained_model_name'] = embedding_ckpt['sd_checkpoint_name']
embedding_info['trained_model_checksum'] = embedding_ckpt['sd_checkpoint']
embedding_info["trained_steps"] = embedding_ckpt["step"]
embedding_info["trained_model_name"] = embedding_ckpt[
"sd_checkpoint_name"
]
embedding_info["trained_model_checksum"] = embedding_ckpt[
"sd_checkpoint"
]
except (AttributeError, KeyError):
print(">> No Training Details Found. Passing ...")
# .pt files found at https://cyberes.github.io/stable-diffusion-textual-inversion-models/
# They are actually .bin files
elif len(embedding_ckpt.keys())==1:
print('>> Detected .bin file masquerading as .pt file')
elif len(embedding_ckpt.keys()) == 1:
print(">> Detected .bin file masquerading as .pt file")
embedding_info = self._parse_embedding_bin(embedding_file)
else:
print('>> Invalid embedding format')
print(">> Invalid embedding format")
embedding_info = None
return embedding_info
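For reference, the `.pt` layout this parser expects looks roughly like the following (an illustrative mock-up with random placeholder values, not a real trained file):

```python
# Illustrative mock-up of a .pt textual-inversion checkpoint with the
# keys _parse_embedding_pt looks for; values here are placeholders.
import torch

ckpt = {
    "name": "my-style",
    "string_to_token": {"*": torch.tensor(265)},
    "string_to_param": {"*": torch.randn(2, 768)},  # 2 vectors x token_dim 768
    "step": 3000,
    "sd_checkpoint_name": "v1-5-pruned",
    "sd_checkpoint": "aa1b2c3d",
}
torch.save(ckpt, "my-style.pt")
```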
def _parse_embedding_bin(self, embedding_file):
embedding_ckpt = torch.load(embedding_file, map_location='cpu')
embedding_ckpt = torch.load(embedding_file, map_location="cpu")
embedding_info = {}
if len(embedding_ckpt.keys()) == 0:
@ -274,27 +351,45 @@ class TextualInversionManager():
embedding_info = None
else:
for token in list(embedding_ckpt.keys()):
embedding_info['name'] = token or os.path.basename(os.path.splitext(embedding_file)[0])
embedding_info['embedding'] = embedding_ckpt[token]
embedding_info['num_vectors_per_token'] = 1 # All Concepts seem to default to 1
embedding_info['token_dim'] = embedding_info['embedding'].size()[0]
embedding_info["name"] = token or os.path.basename(
os.path.splitext(embedding_file)[0]
)
embedding_info["embedding"] = embedding_ckpt[token]
embedding_info[
"num_vectors_per_token"
] = 1 # All Concepts seem to default to 1
embedding_info["token_dim"] = embedding_info["embedding"].size()[0]
return embedding_info
def _handle_broken_pt_variants(self, embedding_ckpt:dict, embedding_file:str)->dict:
'''
def _handle_broken_pt_variants(
self, embedding_ckpt: dict, embedding_file: str
) -> dict:
"""
This handles the broken .pt file variants. We only know of one at present.
'''
"""
embedding_info = {}
if isinstance(list(embedding_ckpt['string_to_token'].values())[0],torch.Tensor):
print('>> Detected .pt file variant 1') # example at https://github.com/invoke-ai/InvokeAI/issues/1829
for token in list(embedding_ckpt['string_to_token'].keys()):
embedding_info['name'] = token if token != '*' else os.path.basename(os.path.splitext(embedding_file)[0])
embedding_info['embedding'] = embedding_ckpt['string_to_param'].state_dict()[token]
embedding_info['num_vectors_per_token'] = embedding_info['embedding'].shape[0]
embedding_info['token_dim'] = embedding_info['embedding'].size()[0]
if isinstance(
list(embedding_ckpt["string_to_token"].values())[0], torch.Tensor
):
print(
">> Detected .pt file variant 1"
) # example at https://github.com/invoke-ai/InvokeAI/issues/1829
for token in list(embedding_ckpt["string_to_token"].keys()):
embedding_info["name"] = (
token
if token != "*"
else os.path.basename(os.path.splitext(embedding_file)[0])
)
embedding_info["embedding"] = embedding_ckpt[
"string_to_param"
].state_dict()[token]
embedding_info["num_vectors_per_token"] = embedding_info[
"embedding"
].shape[0]
embedding_info["token_dim"] = embedding_info["embedding"].size()[0]
else:
print('>> Invalid embedding format')
print(">> Invalid embedding format")
embedding_info = None
return embedding_info