Merge branch 'development' into fix-disabled-prompt

Lincoln Stein 2022-10-22 22:46:34 -04:00 committed by GitHub
commit a956bf9fda
29 changed files with 1671 additions and 325 deletions

View File

@@ -6,15 +6,16 @@
 # and the width and height of the images it
 # was trained on.
-laion400m:
-    config: configs/latent-diffusion/txt2img-1p4B-eval.yaml
-    weights: models/ldm/text2img-large/model.ckpt
-    description: Latent Diffusion LAION400M model
-    width: 256
-    height: 256
 stable-diffusion-1.4:
     config: configs/stable-diffusion/v1-inference.yaml
     weights: models/ldm/stable-diffusion-v1/model.ckpt
+    vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
     description: Stable Diffusion inference model version 1.4
     width: 512
     height: 512
+stable-diffusion-1.5:
+    config: configs/stable-diffusion/v1-inference.yaml
+    weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
+    description: Stable Diffusion inference model version 1.5
+    width: 512
+    height: 512
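Each stanza above maps a short model name to its config file, weights, optional VAE, and native resolution; the `vae` key and the `stable-diffusion-1.5` stanza are what this commit adds. As a hedged illustration only (paths assumed to exist, not InvokeAI's own loader), the file can be read the same way the backend does later in this diff, with OmegaConf:

```python
from omegaconf import OmegaConf

# Minimal sketch: load configs/models.yaml and inspect a stanza.
models = OmegaConf.load('configs/models.yaml')

for name, stanza in models.items():
    print(name, '-', stanza.get('description', '<no description>'))

sd15 = models['stable-diffusion-1.5']
print(sd15.config, sd15.weights, sd15.width, sd15.height)
print(sd15.get('vae', None))   # optional per-model VAE introduced in this commit
```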

View File

@@ -8,7 +8,7 @@ hide:
 ## **Interactive Command Line Interface**
-The `invoke.py` script, located in `scripts/dream.py`, provides an interactive
+The `invoke.py` script, located in `scripts/`, provides an interactive
 interface to image generation similar to the "invoke mothership" bot that Stable
 AI provided on its Discord server.
@@ -283,12 +283,20 @@ Some examples:
 Outputs:
 [1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
-# Model selection and importation
+### !mask
+This command takes an image, a text prompt, and uses the `clipseg`
+algorithm to automatically generate a mask of the area that matches
+the text prompt. It is useful for debugging the text masking process
+prior to inpainting with the `--text_mask` argument. See
+[INPAINTING.md] for details.
+## Model selection and importation
 The CLI allows you to add new models on the fly, as well as to switch
 among them rapidly without leaving the script.
-## !models
+### !models
 This prints out a list of the models defined in `config/models.yaml'.
 The active model is bold-faced
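The `!mask` section added above relies on clipseg to turn a text prompt into a mask. Below is a minimal, stand-alone sketch of that idea using the Hugging Face `transformers` CLIPSeg classes; it is not InvokeAI's implementation, and the file names are placeholders:

```python
import numpy as np
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# Text-prompted masking in the spirit of `!mask` (illustrative only).
processor = CLIPSegProcessor.from_pretrained('CIDAS/clipseg-rd64-refined')
model = CLIPSegForImageSegmentation.from_pretrained('CIDAS/clipseg-rd64-refined')

image = Image.open('outputs/img-samples/000017.4829112.png').convert('RGB')
inputs = processor(text=['a red hat'], images=[image], padding=True, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits              # low-resolution mask logits

mask = torch.sigmoid(logits).squeeze().numpy()   # values in [0, 1]
mask_img = Image.fromarray((mask * 255).astype(np.uint8)).resize(image.size)
mask_img.save('mask.png')                        # white = area matching the prompt
```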
@@ -300,7 +308,7 @@ laion400m not loaded <no description>
 waifu-diffusion not loaded Waifu Diffusion v1.3
 </pre>
-## !switch <model>
+### !switch <model>
 This quickly switches from one model to another without leaving the
 CLI script. `invoke.py` uses a memory caching system; once a model
@@ -346,7 +354,7 @@ laion400m not loaded <no description>
 waifu-diffusion cached Waifu Diffusion v1.3
 </pre>
-## !import_model <path/to/model/weights>
+### !import_model <path/to/model/weights>
 This command imports a new model weights file into InvokeAI, makes it
 available for image generation within the script, and writes out the
@@ -398,7 +406,7 @@ OK to import [n]? <b>y</b>
 invoke>
 </pre>
-##!edit_model <name_of_model>
+###!edit_model <name_of_model>
 The `!edit_model` command can be used to modify a model that is
 already defined in `config/models.yaml`. Call it with the short
@@ -434,20 +442,12 @@ OK to import [n]? y
 Outputs:
 [2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix "outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512 -H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25
 ```
-# History processing
+## History processing
 The CLI provides a series of convenient commands for reviewing previous
 actions, retrieving them, modifying them, and re-running them.
-```bash
-invoke> !fetch 0000015.8929913.png
-# the script returns the next line, ready for editing and running:
-invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
-```
-Note that this command may behave unexpectedly if given a PNG file that
-was not generated by InvokeAI.
-### `!history`
+### !history
 The invoke script keeps track of all the commands you issue during a
 session, allowing you to re-run them. On Mac and Linux systems, it
@@ -472,20 +472,41 @@ invoke> !20
 invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
 ```
-## !fetch
+### !fetch
 This command retrieves the generation parameters from a previously
-generated image and either loads them into the command line. You may
-provide either the name of a file in the current output directory, or
-a full file path.
+generated image and either loads them into the command line
+(Linux|Mac), or prints them out in a comment for copy-and-paste
+(Windows). You may provide either the name of a file in the current
+output directory, or a full file path. Specify path to a folder with
+image png files, and wildcard *.png to retrieve the dream command used
+to generate the images, and save them to a file commands.txt for
+further processing.
-~~~
+This example loads the generation command for a single png file:
+```bash
 invoke> !fetch 0000015.8929913.png
 # the script returns the next line, ready for editing and running:
 invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
-~~~
+```
+This one fetches the generation commands from a batch of files and
+stores them into `selected.txt`:
+```bash
+invoke> !fetch outputs\selected-imgs\*.png selected.txt
+```
+### !replay
+This command replays a text file generated by !fetch or created manually
+~~~
+invoke> !replay outputs\selected-imgs\selected.txt
+~~~
-Note that this command may behave unexpectedly if given a PNG file that
+Note that these commands may behave unexpectedly if given a PNG file that
 was not generated by InvokeAI.
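`!fetch` and `!replay` work because the generation command is stored in the image's PNG text metadata. A hedged sketch of reading that metadata directly (the file name is a placeholder; legacy images use the `Dream` key checked by `legacy_metadata_load` further down in this diff, while newer images may use a different key):

```python
from PIL import Image

png = Image.open('outputs/img-samples/0000015.8929913.png')
meta = png.info                      # PNG tEXt chunks end up in this dict

if 'Dream' in meta and len(meta['Dream']) > 0:
    # e.g. "a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5"
    print(meta['Dream'])
else:
    print('No InvokeAI metadata found; !fetch may behave unexpectedly.')
```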
 ### !search <search string>
@@ -503,16 +524,6 @@ invoke> !search surreal
 This clears the search history from memory and disk. Be advised that
 this operation is irreversible and does not issue any warnings!
-Other ! Commands
-### !mask
-This command takes an image, a text prompt, and uses the `clipseg`
-algorithm to automatically generate a mask of the area that matches
-the text prompt. It is useful for debugging the text masking process
-prior to inpainting with the `--text_mask` argument. See
-[INPAINTING.md] for details.
 ## Command-line editing and completion
 The command-line offers convenient history tracking, editing, and

690
frontend/dist/assets/index.2d646c45.js vendored Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -14,6 +14,7 @@
     "@chakra-ui/react": "^2.3.1",
     "@emotion/react": "^11.10.4",
     "@emotion/styled": "^11.10.4",
+    "@radix-ui/react-context-menu": "^2.0.1",
     "@reduxjs/toolkit": "^1.8.5",
     "@types/uuid": "^8.3.4",
     "dateformat": "^5.0.3",
@@ -25,7 +26,6 @@
     "react-dropzone": "^14.2.2",
     "react-hotkeys-hook": "^3.4.7",
     "react-icons": "^4.4.0",
-    "react-masonry-css": "^1.0.16",
     "react-redux": "^8.0.2",
     "redux-persist": "^6.0.0",
     "socket.io": "^4.5.2",

View File

@@ -26,7 +26,7 @@ const makeSocketIOEmitters = (
     const options = { ...getState().options };
-    if (tabMap[options.activeTab] === 'txt2img') {
+    if (tabMap[options.activeTab] !== 'img2img') {
       options.shouldUseInitImage = false;
     }

View File

@@ -7,12 +7,17 @@ export const PostProcessingWIP = () => {
       <p>
         Invoke AI offers a wide variety of post processing features. Image
         Upscaling and Face Restoration are already available in the WebUI. You
-        can access them from the Advanced Options menu of the Text To Image tab.
-        A dedicated UI will be released soon.
+        can access them from the Advanced Options menu of the Text To Image and
+        Image To Image tabs. You can also process images directly, using the
+        image action buttons above the main image display.
+      </p>
+      <p>
+        A dedicated UI will be released soon to facilitate more advanced post
+        processing workflows.
       </p>
       <p>
         The Invoke AI Command Line Interface offers various other features
-        including Embiggen, High Resolution Fixing and more.
+        including Embiggen.
       </p>
     </div>
   );

View File

@@ -12,6 +12,7 @@ import {
   FormControl,
   FormLabel,
   Flex,
+  useToast,
 } from '@chakra-ui/react';
 import { createSelector } from '@reduxjs/toolkit';
 import {
@@ -57,6 +58,7 @@ const DeleteImageModal = forwardRef(
     const dispatch = useAppDispatch();
     const shouldConfirmOnDelete = useAppSelector(systemSelector);
     const cancelRef = useRef<HTMLButtonElement>(null);
+    const toast = useToast();
     const handleClickDelete = (e: SyntheticEvent) => {
       e.stopPropagation();
@@ -65,6 +67,12 @@
     const handleDelete = () => {
       dispatch(deleteImage(image));
+      toast({
+        title: 'Image Deleted',
+        status: 'success',
+        duration: 2500,
+        isClosable: true,
+      });
       onClose();
     };

View File

@@ -17,6 +17,12 @@
   max-height: 100%;
 }
+.hoverable-image-delete-button {
+  position: absolute;
+  top: 0.25rem;
+  right: 0.25rem;
+}
 .hoverable-image-content {
   display: flex;
   position: absolute;
@ -57,3 +63,39 @@
} }
} }
} }
.hoverable-image-context-menu {
z-index: 999;
padding: 0.4rem;
border-radius: 0.25rem;
background-color: var(--context-menu-bg-color);
box-shadow: var(--context-menu-box-shadow);
[role='menuitem'] {
font-size: 0.8rem;
line-height: 1rem;
border-radius: 3px;
display: flex;
align-items: center;
height: 1.75rem;
padding: 0 0.5rem;
position: relative;
user-select: none;
cursor: pointer;
outline: none;
&[data-disabled] {
color: grey;
pointer-events: none;
cursor: not-allowed;
}
&[data-warning] {
color: var(--status-bad-color);
}
&[data-highlighted] {
background-color: var(--context-menu-bg-color-hover);
}
}
}

View File

@ -1,17 +1,27 @@
import { Box, Icon, IconButton, Image, Tooltip } from '@chakra-ui/react'; import {
Box,
Icon,
IconButton,
Image,
Tooltip,
useToast,
} from '@chakra-ui/react';
import { RootState, useAppDispatch, useAppSelector } from '../../app/store'; import { RootState, useAppDispatch, useAppSelector } from '../../app/store';
import { setCurrentImage } from './gallerySlice'; import { setCurrentImage } from './gallerySlice';
import { FaCheck, FaImage, FaSeedling, FaTrashAlt } from 'react-icons/fa'; import { FaCheck, FaTrashAlt } from 'react-icons/fa';
import DeleteImageModal from './DeleteImageModal'; import DeleteImageModal from './DeleteImageModal';
import { memo, SyntheticEvent, useState } from 'react'; import { memo, useState } from 'react';
import { import {
setActiveTab, setActiveTab,
setAllParameters, setAllImageToImageParameters,
setAllTextToImageParameters,
setInitialImagePath, setInitialImagePath,
setPrompt,
setSeed, setSeed,
} from '../options/optionsSlice'; } from '../options/optionsSlice';
import * as InvokeAI from '../../app/invokeai'; import * as InvokeAI from '../../app/invokeai';
import { IoArrowUndoCircleOutline } from 'react-icons/io5'; import * as ContextMenu from '@radix-ui/react-context-menu';
import { tabMap } from '../tabs/InvokeTabs';
interface HoverableImageProps { interface HoverableImageProps {
image: InvokeAI.Image; image: InvokeAI.Image;
@ -27,40 +37,95 @@ const memoEqualityCheck = (
* Gallery image component with delete/use all/use seed buttons on hover. * Gallery image component with delete/use all/use seed buttons on hover.
*/ */
const HoverableImage = memo((props: HoverableImageProps) => { const HoverableImage = memo((props: HoverableImageProps) => {
const [isHovered, setIsHovered] = useState<boolean>(false);
const dispatch = useAppDispatch(); const dispatch = useAppDispatch();
const activeTab = useAppSelector( const activeTab = useAppSelector(
(state: RootState) => state.options.activeTab (state: RootState) => state.options.activeTab
); );
const [isHovered, setIsHovered] = useState<boolean>(false);
const toast = useToast();
const { image, isSelected } = props; const { image, isSelected } = props;
const { url, uuid, metadata } = image; const { url, uuid, metadata } = image;
const handleMouseOver = () => setIsHovered(true); const handleMouseOver = () => setIsHovered(true);
const handleMouseOut = () => setIsHovered(false); const handleMouseOut = () => setIsHovered(false);
const handleClickSetAllParameters = (e: SyntheticEvent) => { const handleUsePrompt = () => {
e.stopPropagation(); dispatch(setPrompt(image.metadata.image.prompt));
dispatch(setAllParameters(metadata)); toast({
title: 'Prompt Set',
status: 'success',
duration: 2500,
isClosable: true,
});
}; };
const handleClickSetSeed = (e: SyntheticEvent) => { const handleUseSeed = () => {
e.stopPropagation();
dispatch(setSeed(image.metadata.image.seed)); dispatch(setSeed(image.metadata.image.seed));
toast({
title: 'Seed Set',
status: 'success',
duration: 2500,
isClosable: true,
});
}; };
const handleSetInitImage = (e: SyntheticEvent) => { const handleSendToImageToImage = () => {
e.stopPropagation();
dispatch(setInitialImagePath(image.url)); dispatch(setInitialImagePath(image.url));
if (activeTab !== 1) { if (activeTab !== 1) {
dispatch(setActiveTab(1)); dispatch(setActiveTab(1));
} }
toast({
title: 'Sent to Image To Image',
status: 'success',
duration: 2500,
isClosable: true,
});
}; };
const handleClickImage = () => dispatch(setCurrentImage(image)); const handleUseAllParameters = () => {
dispatch(setAllTextToImageParameters(metadata));
toast({
title: 'Parameters Set',
status: 'success',
duration: 2500,
isClosable: true,
});
};
const handleUseInitialImage = async () => {
// check if the image exists before setting it as initial image
if (metadata?.image?.init_image_path) {
const response = await fetch(metadata.image.init_image_path);
if (response.ok) {
dispatch(setActiveTab(tabMap.indexOf('img2img')));
dispatch(setAllImageToImageParameters(metadata));
toast({
title: 'Initial Image Set',
status: 'success',
duration: 2500,
isClosable: true,
});
return;
}
}
toast({
title: 'Initial Image Not Set',
description: 'Could not load initial image.',
status: 'error',
duration: 2500,
isClosable: true,
});
};
const handleSelectImage = () => dispatch(setCurrentImage(image));
return ( return (
<ContextMenu.Root>
<ContextMenu.Trigger>
<Box <Box
position={'relative'} position={'relative'}
key={uuid} key={uuid}
@ -69,13 +134,13 @@ const HoverableImage = memo((props: HoverableImageProps) => {
onMouseOut={handleMouseOut} onMouseOut={handleMouseOut}
> >
<Image <Image
className="hoverable-image-image"
objectFit="cover" objectFit="cover"
rounded={'md'} rounded={'md'}
src={url} src={url}
loading={'lazy'} loading={'lazy'}
className="hoverable-image-image"
/> />
<div className="hoverable-image-content" onClick={handleClickImage}> <div className="hoverable-image-content" onClick={handleSelectImage}>
{isSelected && ( {isSelected && (
<Icon <Icon
width={'50%'} width={'50%'}
@ -86,11 +151,10 @@ const HoverableImage = memo((props: HoverableImageProps) => {
)} )}
</div> </div>
{isHovered && ( {isHovered && (
<div className="hoverable-image-icons"> <div className="hoverable-image-delete-button">
<Tooltip label={'Delete image'} hasArrow> <Tooltip label={'Delete image'} hasArrow>
<DeleteImageModal image={image}> <DeleteImageModal image={image}>
<IconButton <IconButton
colorScheme="red"
aria-label="Delete image" aria-label="Delete image"
icon={<FaTrashAlt />} icon={<FaTrashAlt />}
size="xs" size="xs"
@ -99,43 +163,48 @@ const HoverableImage = memo((props: HoverableImageProps) => {
/> />
</DeleteImageModal> </DeleteImageModal>
</Tooltip> </Tooltip>
{['txt2img', 'img2img'].includes(image?.metadata?.image?.type) && (
<Tooltip label="Use All Parameters" hasArrow>
<IconButton
aria-label="Use All Parameters"
icon={<IoArrowUndoCircleOutline />}
size="xs"
fontSize={18}
variant={'imageHoverIconButton'}
onClickCapture={handleClickSetAllParameters}
/>
</Tooltip>
)}
{image?.metadata?.image?.seed !== undefined && (
<Tooltip label="Use Seed" hasArrow>
<IconButton
aria-label="Use Seed"
icon={<FaSeedling />}
size="xs"
fontSize={16}
variant={'imageHoverIconButton'}
onClickCapture={handleClickSetSeed}
/>
</Tooltip>
)}
<Tooltip label="Send To Image To Image" hasArrow>
<IconButton
aria-label="Send To Image To Image"
icon={<FaImage />}
size="xs"
fontSize={16}
variant={'imageHoverIconButton'}
onClickCapture={handleSetInitImage}
/>
</Tooltip>
</div> </div>
)} )}
</Box> </Box>
</ContextMenu.Trigger>
<ContextMenu.Content className="hoverable-image-context-menu">
<ContextMenu.Item
onClickCapture={handleUsePrompt}
disabled={image?.metadata?.image?.prompt === undefined}
>
Use Prompt
</ContextMenu.Item>
<ContextMenu.Item
onClickCapture={handleUseSeed}
disabled={image?.metadata?.image?.seed === undefined}
>
Use Seed
</ContextMenu.Item>
<ContextMenu.Item
onClickCapture={handleUseAllParameters}
disabled={
!['txt2img', 'img2img'].includes(image?.metadata?.image?.type)
}
>
Use All Parameters
</ContextMenu.Item>
<Tooltip label="Load initial image used for this generation">
<ContextMenu.Item
onClickCapture={handleUseInitialImage}
disabled={image?.metadata?.image?.type !== 'img2img'}
>
Use Initial Image
</ContextMenu.Item>
</Tooltip>
<ContextMenu.Item onClickCapture={handleSendToImageToImage}>
Send to Image To Image
</ContextMenu.Item>
<DeleteImageModal image={image}>
<ContextMenu.Item data-warning>Delete Image</ContextMenu.Item>
</DeleteImageModal>
</ContextMenu.Content>
</ContextMenu.Root>
); );
}, memoEqualityCheck); }, memoEqualityCheck);

View File

@ -55,31 +55,37 @@
@include HideScrollbar; @include HideScrollbar;
} }
.masonry-grid { // from https://css-tricks.com/a-grid-of-logos-in-squares/
display: -webkit-box; /* Not needed if autoprefixing */ .image-gallery {
display: -ms-flexbox; /* Not needed if autoprefixing */ display: grid;
display: flex; grid-template-columns: repeat(auto-fill, minmax(80px, auto));
margin-left: 0.5rem; /* gutter size offset */ grid-gap: 0.5rem;
width: auto; .hoverable-image {
} padding: 0.5rem;
.masonry-grid_column { position: relative;
padding-left: 0.5rem; /* gutter size */ &::before {
background-clip: padding-box; // for apsect ratio
} content: '';
display: block;
padding-bottom: 100%;
}
.hoverable-image-image {
position: absolute;
max-width: 100%;
/* Style your items */ // Alternate Version
.masonry-grid_column > .hoverable-image { // top: 0;
/* change div to reference your elements you put in <Masonry> */ // bottom: 0;
background: var(--tab-color); // right: 0;
margin-bottom: 0.5rem; // left: 0;
} // margin: auto;
// .image-gallery { top: 50%;
// display: flex; left: 50%;
// grid-template-columns: repeat(auto-fill, minmax(80px, auto)); transform: translate(-50%, -50%);
// gap: 0.5rem; }
// justify-items: center; }
// } }
.image-gallery-load-more-btn { .image-gallery-load-more-btn {
background-color: var(--btn-load-more) !important; background-color: var(--btn-load-more) !important;

View File

@ -1,10 +1,9 @@
import { Button, IconButton } from '@chakra-ui/button'; import { Button, IconButton } from '@chakra-ui/button';
import { Resizable } from 're-resizable'; import { Resizable } from 're-resizable';
import React, { useState } from 'react'; import React from 'react';
import { useHotkeys } from 'react-hotkeys-hook'; import { useHotkeys } from 'react-hotkeys-hook';
import { MdClear, MdPhotoLibrary } from 'react-icons/md'; import { MdClear, MdPhotoLibrary } from 'react-icons/md';
import Masonry from 'react-masonry-css';
import { requestImages } from '../../app/socketio/actions'; import { requestImages } from '../../app/socketio/actions';
import { RootState, useAppDispatch, useAppSelector } from '../../app/store'; import { RootState, useAppDispatch, useAppSelector } from '../../app/store';
import IAIIconButton from '../../common/components/IAIIconButton'; import IAIIconButton from '../../common/components/IAIIconButton';
@ -27,12 +26,6 @@ export default function ImageGallery() {
const dispatch = useAppDispatch(); const dispatch = useAppDispatch();
const [column, setColumn] = useState<number | undefined>();
const handleResize = (event: MouseEvent | TouchEvent | any) => {
setColumn(Math.floor((window.innerWidth - event.x) / 120));
};
const handleShowGalleryToggle = () => { const handleShowGalleryToggle = () => {
dispatch(setShouldShowGallery(!shouldShowGallery)); dispatch(setShouldShowGallery(!shouldShowGallery));
}; };
@ -89,9 +82,7 @@ export default function ImageGallery() {
minWidth={'300'} minWidth={'300'}
maxWidth={activeTab == 1 ? '300' : '600'} maxWidth={activeTab == 1 ? '300' : '600'}
className="image-gallery-popup" className="image-gallery-popup"
onResize={handleResize}
> >
{/* <div className="image-gallery-popup"></div> */}
<div className="image-gallery-header"> <div className="image-gallery-header">
<h1>Your Invocations</h1> <h1>Your Invocations</h1>
<IconButton <IconButton
@ -104,12 +95,7 @@ export default function ImageGallery() {
</div> </div>
<div className="image-gallery-container"> <div className="image-gallery-container">
{images.length ? ( {images.length ? (
<Masonry <div className="image-gallery">
className="masonry-grid"
columnClassName="masonry-grid_column"
breakpointCols={column}
>
{/* <div className="image-gallery"> */}
{images.map((image) => { {images.map((image) => {
const { uuid } = image; const { uuid } = image;
const isSelected = currentImageUuid === uuid; const isSelected = currentImageUuid === uuid;
@ -121,8 +107,7 @@ export default function ImageGallery() {
/> />
); );
})} })}
{/* </div> */} </div>
</Masonry>
) : ( ) : (
<div className="image-gallery-container-placeholder"> <div className="image-gallery-container-placeholder">
<MdPhotoLibrary /> <MdPhotoLibrary />

View File

@@ -72,7 +72,13 @@ export const gallerySlice = createSlice({
     },
     addImage: (state, action: PayloadAction<InvokeAI.Image>) => {
      const newImage = action.payload;
-      const { uuid, mtime } = newImage;
+      const { uuid, url, mtime } = newImage;
+
+      // Do not add duplicate images
+      if (state.images.find((i) => i.url === url && i.mtime === mtime)) {
+        return;
+      }
       state.images.unshift(newImage);
       state.currentImageUuid = uuid;
       state.intermediateImage = undefined;
@@ -120,8 +126,15 @@
     ) => {
       const { images, areMoreImagesAvailable } = action.payload;
       if (images.length > 0) {
+        // Filter images that already exist in the gallery
+        const newImages = images.filter(
+          (newImage) =>
+            !state.images.find(
+              (i) => i.url === newImage.url && i.mtime === newImage.mtime
+            )
+        );
         state.images = state.images
-          .concat(images)
+          .concat(newImages)
           .sort((a, b) => b.mtime - a.mtime);
         if (!state.currentImage) {

View File

@@ -15,7 +15,7 @@ export default function MainCFGScale()
       label="CFG Scale"
       step={0.5}
       min={1}
-      max={200}
+      max={30}
       onChange={handleChangeCfgScale}
       value={cfgScale}
       width={inputWidth}

View File

@ -183,6 +183,67 @@ export const optionsSlice = createSlice({
setSeedWeights: (state, action: PayloadAction<string>) => { setSeedWeights: (state, action: PayloadAction<string>) => {
state.seedWeights = action.payload; state.seedWeights = action.payload;
}, },
setAllTextToImageParameters: (
state,
action: PayloadAction<InvokeAI.Metadata>
) => {
const {
sampler,
prompt,
seed,
variations,
steps,
cfg_scale,
threshold,
perlin,
seamless,
hires_fix,
width,
height,
} = action.payload.image;
if (variations && variations.length > 0) {
state.seedWeights = seedWeightsToString(variations);
state.shouldGenerateVariations = true;
} else {
state.shouldGenerateVariations = false;
}
if (seed) {
state.seed = seed;
state.shouldRandomizeSeed = false;
}
if (prompt) state.prompt = promptToString(prompt);
if (sampler) state.sampler = sampler;
if (steps) state.steps = steps;
if (cfg_scale) state.cfgScale = cfg_scale;
if (threshold) state.threshold = threshold;
if (typeof threshold === 'undefined') state.threshold = 0;
if (perlin) state.perlin = perlin;
if (typeof perlin === 'undefined') state.perlin = 0;
if (typeof seamless === 'boolean') state.seamless = seamless;
if (typeof hires_fix === 'boolean') state.hiresFix = hires_fix;
if (width) state.width = width;
if (height) state.height = height;
},
setAllImageToImageParameters: (
state,
action: PayloadAction<InvokeAI.Metadata>
) => {
const { type, strength, fit, init_image_path, mask_image_path } =
action.payload.image;
if (type === 'img2img') {
if (init_image_path) state.initialImagePath = init_image_path;
if (mask_image_path) state.maskPath = mask_image_path;
if (strength) state.img2imgStrength = strength;
if (typeof fit === 'boolean') state.shouldFitToWidthHeight = fit;
state.shouldUseInitImage = true;
} else {
state.shouldUseInitImage = false;
}
},
setAllParameters: (state, action: PayloadAction<InvokeAI.Metadata>) => { setAllParameters: (state, action: PayloadAction<InvokeAI.Metadata>) => {
const { const {
type, type,
@ -226,43 +287,6 @@ export const optionsSlice = createSlice({
state.shouldRandomizeSeed = false; state.shouldRandomizeSeed = false;
} }
/**
* We support arbitrary numbers of postprocessing steps, so it
* doesnt make sense to be include postprocessing metadata when
* we use all parameters. Because this code needed a bit of braining
* to figure out, I am leaving it, in case it is needed again.
*/
// let postprocessingNotDone = ['gfpgan', 'esrgan'];
// if (postprocessing && postprocessing.length > 0) {
// postprocessing.forEach(
// (postprocess: InvokeAI.PostProcessedImageMetadata) => {
// if (postprocess.type === 'gfpgan') {
// const { strength } = postprocess;
// if (strength) state.facetoolStrength = strength;
// state.shouldRunFacetool = true;
// postprocessingNotDone = postprocessingNotDone.filter(
// (p) => p !== 'gfpgan'
// );
// }
// if (postprocess.type === 'esrgan') {
// const { scale, strength } = postprocess;
// if (scale) state.upscalingLevel = scale;
// if (strength) state.upscalingStrength = strength;
// state.shouldRunESRGAN = true;
// postprocessingNotDone = postprocessingNotDone.filter(
// (p) => p !== 'esrgan'
// );
// }
// }
// );
// }
// postprocessingNotDone.forEach((p) => {
// if (p === 'esrgan') state.shouldRunESRGAN = false;
// if (p === 'gfpgan') state.shouldRunFacetool = false;
// });
if (prompt) state.prompt = promptToString(prompt); if (prompt) state.prompt = promptToString(prompt);
if (sampler) state.sampler = sampler; if (sampler) state.sampler = sampler;
if (steps) state.steps = steps; if (steps) state.steps = steps;
@ -346,6 +370,8 @@ export const {
setActiveTab, setActiveTab,
setShouldShowImageDetails, setShouldShowImageDetails,
setShouldShowGallery, setShouldShowGallery,
setAllTextToImageParameters,
setAllImageToImageParameters,
} = optionsSlice.actions; } = optionsSlice.actions;
export default optionsSlice.reducer; export default optionsSlice.reducer;

View File

@ -1,4 +1,4 @@
import { IconButton, Image } from '@chakra-ui/react'; import { IconButton, Image, useToast } from '@chakra-ui/react';
import React, { SyntheticEvent } from 'react'; import React, { SyntheticEvent } from 'react';
import { MdClear } from 'react-icons/md'; import { MdClear } from 'react-icons/md';
import { RootState, useAppDispatch, useAppSelector } from '../../../app/store'; import { RootState, useAppDispatch, useAppSelector } from '../../../app/store';
@ -11,10 +11,23 @@ export default function InitImagePreview() {
const dispatch = useAppDispatch(); const dispatch = useAppDispatch();
const toast = useToast();
const handleClickResetInitialImage = (e: SyntheticEvent) => { const handleClickResetInitialImage = (e: SyntheticEvent) => {
e.stopPropagation(); e.stopPropagation();
dispatch(setInitialImagePath(null)); dispatch(setInitialImagePath(null));
}; };
const alertMissingInitImage = () => {
toast({
title: 'Problem loading parameters',
description: 'Unable to load init image.',
status: 'error',
isClosable: true,
});
dispatch(setInitialImagePath(null));
};
return ( return (
<div className="init-image-preview"> <div className="init-image-preview">
<div className="init-image-preview-header"> <div className="init-image-preview-header">
@ -29,7 +42,12 @@ export default function InitImagePreview() {
</div> </div>
{initialImagePath && ( {initialImagePath && (
<div className="init-image-image"> <div className="init-image-image">
<Image fit={'contain'} src={initialImagePath} rounded={'md'} /> <Image
fit={'contain'}
src={initialImagePath}
rounded={'md'}
onError={alertMissingInitImage}
/>
</div> </div>
)} )}
</div> </div>

View File

@@ -50,8 +50,13 @@ export const tab_dict = {
   },
 };
+// Array where index maps to the key of tab_dict
 export const tabMap = _.map(tab_dict, (tab, key) => key);
+// Use tabMap to generate a union type of tab names
+const tabMapTypes = [...tabMap] as const;
+export type InvokeTabName = typeof tabMapTypes[number];
 export default function InvokeTabs() {
   const activeTab = useAppSelector(
     (state: RootState) => state.options.activeTab

View File

@@ -95,4 +95,9 @@
   // Gallery
   --gallery-resizeable-color: rgb(36, 38, 48);
+  // Context Menus
+  --context-menu-bg-color: rgb(46, 48, 58);
+  --context-menu-box-shadow: none;
+  --context-menu-bg-color-hover: rgb(30, 32, 42);
 }

View File

@@ -94,4 +94,11 @@
   // Gallery
   --gallery-resizeable-color: rgb(192, 194, 196);
+  // Context Menus
+  --context-menu-bg-color: var(--background-color);
+  --context-menu-box-shadow: 0px 10px 38px -10px rgba(22, 23, 24, 0.35),
+    0px 10px 20px -15px rgba(22, 23, 24, 0.2);
+  --context-menu-bg-color-hover: var(--background-color-secondary);
 }

View File

@ -213,6 +213,13 @@
dependencies: dependencies:
regenerator-runtime "^0.13.4" regenerator-runtime "^0.13.4"
"@babel/runtime@^7.13.10":
version "7.19.4"
resolved "https://registry.yarnpkg.com/@babel/runtime/-/runtime-7.19.4.tgz#a42f814502ee467d55b38dd1c256f53a7b885c78"
integrity sha512-EXpLCrk55f+cYqmHsSR+yD/0gAIMxxA9QK9lnQWzhMCvt+YmoBN7Zx94s++Kv0+unHk39vxNO8t+CMA2WSS3wA==
dependencies:
regenerator-runtime "^0.13.4"
"@babel/template@^7.18.10": "@babel/template@^7.18.10":
version "7.18.10" version "7.18.10"
resolved "https://registry.yarnpkg.com/@babel/template/-/template-7.18.10.tgz#6f9134835970d1dbf0835c0d100c9f38de0c5e71" resolved "https://registry.yarnpkg.com/@babel/template/-/template-7.18.10.tgz#6f9134835970d1dbf0835c0d100c9f38de0c5e71"
@ -1122,6 +1129,26 @@
minimatch "^3.1.2" minimatch "^3.1.2"
strip-json-comments "^3.1.1" strip-json-comments "^3.1.1"
"@floating-ui/core@^0.7.3":
version "0.7.3"
resolved "https://registry.yarnpkg.com/@floating-ui/core/-/core-0.7.3.tgz#d274116678ffae87f6b60e90f88cc4083eefab86"
integrity sha512-buc8BXHmG9l82+OQXOFU3Kr2XQx9ys01U/Q9HMIrZ300iLc8HLMgh7dcCqgYzAzf4BkoQvDcXf5Y+CuEZ5JBYg==
"@floating-ui/dom@^0.5.3":
version "0.5.4"
resolved "https://registry.yarnpkg.com/@floating-ui/dom/-/dom-0.5.4.tgz#4eae73f78bcd4bd553ae2ade30e6f1f9c73fe3f1"
integrity sha512-419BMceRLq0RrmTSDxn8hf9R3VCJv2K9PUfugh5JyEFmdjzDo+e8U5EdR8nzKq8Yj1htzLm3b6eQEEam3/rrtg==
dependencies:
"@floating-ui/core" "^0.7.3"
"@floating-ui/react-dom@0.7.2":
version "0.7.2"
resolved "https://registry.yarnpkg.com/@floating-ui/react-dom/-/react-dom-0.7.2.tgz#0bf4ceccb777a140fc535c87eb5d6241c8e89864"
integrity sha512-1T0sJcpHgX/u4I1OzIEhlcrvkUN8ln39nz7fMoE/2HDHrPiMFoOGR7++GYyfUmIQHkkrTinaeQsO3XWubjSvGg==
dependencies:
"@floating-ui/dom" "^0.5.3"
use-isomorphic-layout-effect "^1.1.1"
"@humanwhocodes/config-array@^0.10.4": "@humanwhocodes/config-array@^0.10.4":
version "0.10.4" version "0.10.4"
resolved "https://registry.yarnpkg.com/@humanwhocodes/config-array/-/config-array-0.10.4.tgz#01e7366e57d2ad104feea63e72248f22015c520c" resolved "https://registry.yarnpkg.com/@humanwhocodes/config-array/-/config-array-0.10.4.tgz#01e7366e57d2ad104feea63e72248f22015c520c"
@ -1265,6 +1292,246 @@
resolved "https://registry.yarnpkg.com/@popperjs/core/-/core-2.11.6.tgz#cee20bd55e68a1720bdab363ecf0c821ded4cd45" resolved "https://registry.yarnpkg.com/@popperjs/core/-/core-2.11.6.tgz#cee20bd55e68a1720bdab363ecf0c821ded4cd45"
integrity sha512-50/17A98tWUfQ176raKiOGXuYpLyyVMkxxG6oylzL3BPOlA6ADGdK7EYunSa4I064xerltq9TGXs8HmOk5E+vw== integrity sha512-50/17A98tWUfQ176raKiOGXuYpLyyVMkxxG6oylzL3BPOlA6ADGdK7EYunSa4I064xerltq9TGXs8HmOk5E+vw==
"@radix-ui/primitive@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/primitive/-/primitive-1.0.0.tgz#e1d8ef30b10ea10e69c76e896f608d9276352253"
integrity sha512-3e7rn8FDMin4CgeL7Z/49smCA3rFYY3Ha2rUQ7HRWFadS5iCRw08ZgVT1LaNTCNqgvrUiyczLflrVrF0SRQtNA==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-arrow@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@radix-ui/react-arrow/-/react-arrow-1.0.1.tgz#5246adf79e97f89e819af68da51ddcf349ecf1c4"
integrity sha512-1yientwXqXcErDHEv8av9ZVNEBldH8L9scVR3is20lL+jOCfcJyMFZFEY5cgIrgexsq1qggSXqiEL/d/4f+QXA==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-primitive" "1.0.1"
"@radix-ui/react-collection@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@radix-ui/react-collection/-/react-collection-1.0.1.tgz#259506f97c6703b36291826768d3c1337edd1de5"
integrity sha512-uuiFbs+YCKjn3X1DTSx9G7BHApu4GHbi3kgiwsnFUbOKCrwejAJv4eE4Vc8C0Oaxt9T0aV4ox0WCOdx+39Xo+g==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-compose-refs" "1.0.0"
"@radix-ui/react-context" "1.0.0"
"@radix-ui/react-primitive" "1.0.1"
"@radix-ui/react-slot" "1.0.1"
"@radix-ui/react-compose-refs@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-compose-refs/-/react-compose-refs-1.0.0.tgz#37595b1f16ec7f228d698590e78eeed18ff218ae"
integrity sha512-0KaSv6sx787/hK3eF53iOkiSLwAGlFMx5lotrqD2pTjB18KbybKoEIgkNZTKC60YECDQTKGTRcDBILwZVqVKvA==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-context-menu@^2.0.1":
version "2.0.1"
resolved "https://registry.yarnpkg.com/@radix-ui/react-context-menu/-/react-context-menu-2.0.1.tgz#aee7c81bac9983b3748284bf3925dd63796c90b4"
integrity sha512-7DuhU4xDcUk3AMJUlb5tHHOvJZ1GF4+snDIpjtWGlTvO0VktNKgbvBuGLlirdkYoUSI0mJXwOUcUXQapgIyefw==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/primitive" "1.0.0"
"@radix-ui/react-context" "1.0.0"
"@radix-ui/react-menu" "2.0.1"
"@radix-ui/react-primitive" "1.0.1"
"@radix-ui/react-use-callback-ref" "1.0.0"
"@radix-ui/react-use-controllable-state" "1.0.0"
"@radix-ui/react-context@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-context/-/react-context-1.0.0.tgz#f38e30c5859a9fb5e9aa9a9da452ee3ed9e0aee0"
integrity sha512-1pVM9RfOQ+n/N5PJK33kRSKsr1glNxomxONs5c49MliinBY6Yw2Q995qfBUUo0/Mbg05B/sGA0gkgPI7kmSHBg==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-direction@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-direction/-/react-direction-1.0.0.tgz#a2e0b552352459ecf96342c79949dd833c1e6e45"
integrity sha512-2HV05lGUgYcA6xgLQ4BKPDmtL+QbIZYH5fCOTAOOcJ5O0QbWS3i9lKaurLzliYUDhORI2Qr3pyjhJh44lKA3rQ==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-dismissable-layer@1.0.2":
version "1.0.2"
resolved "https://registry.yarnpkg.com/@radix-ui/react-dismissable-layer/-/react-dismissable-layer-1.0.2.tgz#f04d1061bddf00b1ca304148516b9ddc62e45fb2"
integrity sha512-WjJzMrTWROozDqLB0uRWYvj4UuXsM/2L19EmQ3Au+IJWqwvwq9Bwd+P8ivo0Deg9JDPArR1I6MbWNi1CmXsskg==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/primitive" "1.0.0"
"@radix-ui/react-compose-refs" "1.0.0"
"@radix-ui/react-primitive" "1.0.1"
"@radix-ui/react-use-callback-ref" "1.0.0"
"@radix-ui/react-use-escape-keydown" "1.0.2"
"@radix-ui/react-focus-guards@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-focus-guards/-/react-focus-guards-1.0.0.tgz#339c1c69c41628c1a5e655f15f7020bf11aa01fa"
integrity sha512-UagjDk4ijOAnGu4WMUPj9ahi7/zJJqNZ9ZAiGPp7waUWJO0O1aWXi/udPphI0IUjvrhBsZJGSN66dR2dsueLWQ==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-focus-scope@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@radix-ui/react-focus-scope/-/react-focus-scope-1.0.1.tgz#faea8c25f537c5a5c38c50914b63722db0e7f951"
integrity sha512-Ej2MQTit8IWJiS2uuujGUmxXjF/y5xZptIIQnyd2JHLwtV0R2j9NRVoRj/1j/gJ7e3REdaBw4Hjf4a1ImhkZcQ==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-compose-refs" "1.0.0"
"@radix-ui/react-primitive" "1.0.1"
"@radix-ui/react-use-callback-ref" "1.0.0"
"@radix-ui/react-id@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-id/-/react-id-1.0.0.tgz#8d43224910741870a45a8c9d092f25887bb6d11e"
integrity sha512-Q6iAB/U7Tq3NTolBBQbHTgclPmGWE3OlktGGqrClPozSw4vkQ1DfQAOtzgRPecKsMdJINE05iaoDUG8tRzCBjw==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-use-layout-effect" "1.0.0"
"@radix-ui/react-menu@2.0.1":
version "2.0.1"
resolved "https://registry.yarnpkg.com/@radix-ui/react-menu/-/react-menu-2.0.1.tgz#44ebfd45d8482db678b935c0b9d1102d683372d8"
integrity sha512-I5FFZQxCl2fHoJ7R0m5/oWA9EX8/ttH4AbgneoCH7DAXQioFeb0XMAYnOVSp1GgJZ1Nx/mohxNQSeTMcaF1YPw==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/primitive" "1.0.0"
"@radix-ui/react-collection" "1.0.1"
"@radix-ui/react-compose-refs" "1.0.0"
"@radix-ui/react-context" "1.0.0"
"@radix-ui/react-direction" "1.0.0"
"@radix-ui/react-dismissable-layer" "1.0.2"
"@radix-ui/react-focus-guards" "1.0.0"
"@radix-ui/react-focus-scope" "1.0.1"
"@radix-ui/react-id" "1.0.0"
"@radix-ui/react-popper" "1.0.1"
"@radix-ui/react-portal" "1.0.1"
"@radix-ui/react-presence" "1.0.0"
"@radix-ui/react-primitive" "1.0.1"
"@radix-ui/react-roving-focus" "1.0.1"
"@radix-ui/react-slot" "1.0.1"
"@radix-ui/react-use-callback-ref" "1.0.0"
aria-hidden "^1.1.1"
react-remove-scroll "2.5.5"
"@radix-ui/react-popper@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@radix-ui/react-popper/-/react-popper-1.0.1.tgz#9fa8a6a493404afa225866a5cd75af23d141baa0"
integrity sha512-J4Vj7k3k+EHNWgcKrE+BLlQfpewxA7Zd76h5I0bIa+/EqaIZ3DuwrbPj49O3wqN+STnXsBuxiHLiF0iU3yfovw==
dependencies:
"@babel/runtime" "^7.13.10"
"@floating-ui/react-dom" "0.7.2"
"@radix-ui/react-arrow" "1.0.1"
"@radix-ui/react-compose-refs" "1.0.0"
"@radix-ui/react-context" "1.0.0"
"@radix-ui/react-primitive" "1.0.1"
"@radix-ui/react-use-layout-effect" "1.0.0"
"@radix-ui/react-use-rect" "1.0.0"
"@radix-ui/react-use-size" "1.0.0"
"@radix-ui/rect" "1.0.0"
"@radix-ui/react-portal@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@radix-ui/react-portal/-/react-portal-1.0.1.tgz#169c5a50719c2bb0079cf4c91a27aa6d37e5dd33"
integrity sha512-NY2vUWI5WENgAT1nfC6JS7RU5xRYBfjZVLq0HmgEN1Ezy3rk/UruMV4+Rd0F40PEaFC5SrLS1ixYvcYIQrb4Ig==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-primitive" "1.0.1"
"@radix-ui/react-presence@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-presence/-/react-presence-1.0.0.tgz#814fe46df11f9a468808a6010e3f3ca7e0b2e84a"
integrity sha512-A+6XEvN01NfVWiKu38ybawfHsBjWum42MRPnEuqPsBZ4eV7e/7K321B5VgYMPv3Xx5An6o1/l9ZuDBgmcmWK3w==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-compose-refs" "1.0.0"
"@radix-ui/react-use-layout-effect" "1.0.0"
"@radix-ui/react-primitive@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@radix-ui/react-primitive/-/react-primitive-1.0.1.tgz#c1ebcce283dd2f02e4fbefdaa49d1cb13dbc990a"
integrity sha512-fHbmislWVkZaIdeF6GZxF0A/NH/3BjrGIYj+Ae6eTmTCr7EB0RQAAVEiqsXK6p3/JcRqVSBQoceZroj30Jj3XA==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-slot" "1.0.1"
"@radix-ui/react-roving-focus@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@radix-ui/react-roving-focus/-/react-roving-focus-1.0.1.tgz#475621f63aee43faa183a5270f35d49e530de3d7"
integrity sha512-TB76u5TIxKpqMpUAuYH2VqMhHYKa+4Vs1NHygo/llLvlffN6mLVsFhz0AnSFlSBAvTBYVHYAkHAyEt7x1gPJOA==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/primitive" "1.0.0"
"@radix-ui/react-collection" "1.0.1"
"@radix-ui/react-compose-refs" "1.0.0"
"@radix-ui/react-context" "1.0.0"
"@radix-ui/react-direction" "1.0.0"
"@radix-ui/react-id" "1.0.0"
"@radix-ui/react-primitive" "1.0.1"
"@radix-ui/react-use-callback-ref" "1.0.0"
"@radix-ui/react-use-controllable-state" "1.0.0"
"@radix-ui/react-slot@1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@radix-ui/react-slot/-/react-slot-1.0.1.tgz#e7868c669c974d649070e9ecbec0b367ee0b4d81"
integrity sha512-avutXAFL1ehGvAXtPquu0YK5oz6ctS474iM3vNGQIkswrVhdrS52e3uoMQBzZhNRAIE0jBnUyXWNmSjGHhCFcw==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-compose-refs" "1.0.0"
"@radix-ui/react-use-callback-ref@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-use-callback-ref/-/react-use-callback-ref-1.0.0.tgz#9e7b8b6b4946fe3cbe8f748c82a2cce54e7b6a90"
integrity sha512-GZtyzoHz95Rhs6S63D2t/eqvdFCm7I+yHMLVQheKM7nBD8mbZIt+ct1jz4536MDnaOGKIxynJ8eHTkVGVVkoTg==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-use-controllable-state@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-use-controllable-state/-/react-use-controllable-state-1.0.0.tgz#a64deaafbbc52d5d407afaa22d493d687c538b7f"
integrity sha512-FohDoZvk3mEXh9AWAVyRTYR4Sq7/gavuofglmiXB2g1aKyboUD4YtgWxKj8O5n+Uak52gXQ4wKz5IFST4vtJHg==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-use-callback-ref" "1.0.0"
"@radix-ui/react-use-escape-keydown@1.0.2":
version "1.0.2"
resolved "https://registry.yarnpkg.com/@radix-ui/react-use-escape-keydown/-/react-use-escape-keydown-1.0.2.tgz#09ab6455ab240b4f0a61faf06d4e5132c4d639f6"
integrity sha512-DXGim3x74WgUv+iMNCF+cAo8xUHHeqvjx8zs7trKf+FkQKPQXLk2sX7Gx1ysH7Q76xCpZuxIJE7HLPxRE+Q+GA==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-use-callback-ref" "1.0.0"
"@radix-ui/react-use-layout-effect@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-use-layout-effect/-/react-use-layout-effect-1.0.0.tgz#2fc19e97223a81de64cd3ba1dc42ceffd82374dc"
integrity sha512-6Tpkq+R6LOlmQb1R5NNETLG0B4YP0wc+klfXafpUCj6JGyaUc8il7/kUZ7m59rGbXGczE9Bs+iz2qloqsZBduQ==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-use-rect@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-use-rect/-/react-use-rect-1.0.0.tgz#b040cc88a4906b78696cd3a32b075ed5b1423b3e"
integrity sha512-TB7pID8NRMEHxb/qQJpvSt3hQU4sqNPM1VCTjTRjEOa7cEop/QMuq8S6fb/5Tsz64kqSvB9WnwsDHtjnrM9qew==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/rect" "1.0.0"
"@radix-ui/react-use-size@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/react-use-size/-/react-use-size-1.0.0.tgz#a0b455ac826749419f6354dc733e2ca465054771"
integrity sha512-imZ3aYcoYCKhhgNpkNDh/aTiU05qw9hX+HHI1QDBTyIlcFjgeFlKKySNGMwTp7nYFLQg/j0VA2FmCY4WPDDHMg==
dependencies:
"@babel/runtime" "^7.13.10"
"@radix-ui/react-use-layout-effect" "1.0.0"
"@radix-ui/rect@1.0.0":
version "1.0.0"
resolved "https://registry.yarnpkg.com/@radix-ui/rect/-/rect-1.0.0.tgz#0dc8e6a829ea2828d53cbc94b81793ba6383bf3c"
integrity sha512-d0O68AYy/9oeEy1DdC07bz1/ZXX+DqCskRd3i4JzLSTXwefzaepQrKjXC7aNM8lTHjFLDO0pDgaEiQ7jEk+HVg==
dependencies:
"@babel/runtime" "^7.13.10"
"@reduxjs/toolkit@^1.8.5": "@reduxjs/toolkit@^1.8.5":
version "1.8.5" version "1.8.5"
resolved "https://registry.yarnpkg.com/@reduxjs/toolkit/-/toolkit-1.8.5.tgz#c14bece03ee08be88467f22dc0ecf9cf875527cd" resolved "https://registry.yarnpkg.com/@reduxjs/toolkit/-/toolkit-1.8.5.tgz#c14bece03ee08be88467f22dc0ecf9cf875527cd"
@ -2850,11 +3117,6 @@ react-is@^18.0.0:
resolved "https://registry.yarnpkg.com/react-is/-/react-is-18.2.0.tgz#199431eeaaa2e09f86427efbb4f1473edb47609b" resolved "https://registry.yarnpkg.com/react-is/-/react-is-18.2.0.tgz#199431eeaaa2e09f86427efbb4f1473edb47609b"
integrity sha512-xWGDIW6x921xtzPkhiULtthJHoJvBbF3q26fzloPCK0hsvxtPVelvftw3zjbHWSkR2km9Z+4uxbDDK/6Zw9B8w== integrity sha512-xWGDIW6x921xtzPkhiULtthJHoJvBbF3q26fzloPCK0hsvxtPVelvftw3zjbHWSkR2km9Z+4uxbDDK/6Zw9B8w==
react-masonry-css@^1.0.16:
version "1.0.16"
resolved "https://registry.yarnpkg.com/react-masonry-css/-/react-masonry-css-1.0.16.tgz#72b28b4ae3484e250534700860597553a10f1a2c"
integrity sha512-KSW0hR2VQmltt/qAa3eXOctQDyOu7+ZBevtKgpNDSzT7k5LA/0XntNa9z9HKCdz3QlxmJHglTZ18e4sX4V8zZQ==
react-redux@^8.0.2: react-redux@^8.0.2:
version "8.0.2" version "8.0.2"
resolved "https://registry.yarnpkg.com/react-redux/-/react-redux-8.0.2.tgz#bc2a304bb21e79c6808e3e47c50fe1caf62f7aad" resolved "https://registry.yarnpkg.com/react-redux/-/react-redux-8.0.2.tgz#bc2a304bb21e79c6808e3e47c50fe1caf62f7aad"
@ -2880,7 +3142,7 @@ react-remove-scroll-bar@^2.3.3:
react-style-singleton "^2.2.1" react-style-singleton "^2.2.1"
tslib "^2.0.0" tslib "^2.0.0"
react-remove-scroll@^2.5.4: react-remove-scroll@2.5.5, react-remove-scroll@^2.5.4:
version "2.5.5" version "2.5.5"
resolved "https://registry.yarnpkg.com/react-remove-scroll/-/react-remove-scroll-2.5.5.tgz#1e31a1260df08887a8a0e46d09271b52b3a37e77" resolved "https://registry.yarnpkg.com/react-remove-scroll/-/react-remove-scroll-2.5.5.tgz#1e31a1260df08887a8a0e46d09271b52b3a37e77"
integrity sha512-ImKhrzJJsyXJfBZ4bzu8Bwpka14c/fQt0k+cyFp/PBhTfyDnU5hjOtM4AG/0AMyy8oKzOTR0lDgJIM7pYXI0kw== integrity sha512-ImKhrzJJsyXJfBZ4bzu8Bwpka14c/fQt0k+cyFp/PBhTfyDnU5hjOtM4AG/0AMyy8oKzOTR0lDgJIM7pYXI0kw==
@ -3255,6 +3517,11 @@ use-callback-ref@^1.3.0:
dependencies: dependencies:
tslib "^2.0.0" tslib "^2.0.0"
use-isomorphic-layout-effect@^1.1.1:
version "1.1.2"
resolved "https://registry.yarnpkg.com/use-isomorphic-layout-effect/-/use-isomorphic-layout-effect-1.1.2.tgz#497cefb13d863d687b08477d9e5a164ad8c1a6fb"
integrity sha512-49L8yCO3iGT/ZF9QttjwLF/ZD9Iwto5LnH5LmEdk/6cFmXddqi2ulF0edxTwjj+7mqvpVVGQWvbXZdn32wRSHA==
use-sidecar@^1.1.2: use-sidecar@^1.1.2:
version "1.1.2" version "1.1.2"
resolved "https://registry.yarnpkg.com/use-sidecar/-/use-sidecar-1.1.2.tgz#2f43126ba2d7d7e117aa5855e5d8f0276dfe73c2" resolved "https://registry.yarnpkg.com/use-sidecar/-/use-sidecar-1.1.2.tgz#2f43126ba2d7d7e117aa5855e5d8f0276dfe73c2"

View File

@@ -55,23 +55,8 @@ torch.randint_like = fix_func(torch.randint_like)
 torch.bernoulli = fix_func(torch.bernoulli)
 torch.multinomial = fix_func(torch.multinomial)
-def fix_func(orig):
-    if hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
-        def new_func(*args, **kw):
-            device = kw.get("device", "mps")
-            kw["device"]="cpu"
-            return orig(*args, **kw).to(device)
-        return new_func
-    return orig
-torch.rand = fix_func(torch.rand)
-torch.rand_like = fix_func(torch.rand_like)
-torch.randn = fix_func(torch.randn)
-torch.randn_like = fix_func(torch.randn_like)
-torch.randint = fix_func(torch.randint)
-torch.randint_like = fix_func(torch.randint_like)
-torch.bernoulli = fix_func(torch.bernoulli)
-torch.multinomial = fix_func(torch.multinomial)
+# this is fallback model in case no default is defined
+FALLBACK_MODEL_NAME='stable-diffusion-1.4'
"""Simplified text to image API for stable diffusion/latent diffusion """Simplified text to image API for stable diffusion/latent diffusion
@@ -147,7 +132,7 @@ class Generate:
     def __init__(
             self,
-            model = 'stable-diffusion-1.4',
+            model = None,
            conf = 'configs/models.yaml',
            embedding_path = None,
            sampler_name = 'k_lms',
@@ -163,7 +148,6 @@
            free_gpu_mem=False,
     ):
         mconfig = OmegaConf.load(conf)
-        self.model_name = model
         self.height = None
         self.width = None
         self.model_cache = None
@@ -210,6 +194,7 @@
         # model caching system for fast switching
         self.model_cache = ModelCache(mconfig,self.device,self.precision)
+        self.model_name = model or self.model_cache.default_model() or FALLBACK_MODEL_NAME
         # for VRAM usage statistics
         self.session_peakmem = torch.cuda.max_memory_allocated() if self._has_cuda else None
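The `__init__` hunk above replaces the hard-coded `'stable-diffusion-1.4'` default with a three-step lookup: an explicit `model` argument wins, then the stanza marked `default` in `configs/models.yaml` (via `ModelCache.default_model()`, added later in this diff), then `FALLBACK_MODEL_NAME`. A minimal sketch of that resolution order, with the helper name chosen here for illustration:

```python
FALLBACK_MODEL_NAME = 'stable-diffusion-1.4'

def resolve_model_name(requested, model_cache):
    # 1. explicit --model argument, 2. `default: true` stanza, 3. hard-coded fallback
    return requested or model_cache.default_model() or FALLBACK_MODEL_NAME
```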
@@ -570,8 +555,11 @@
         from ldm.invoke.restoration.outcrop import Outcrop
         extend_instructions = {}
         for direction,pixels in _pairwise(opt.outcrop):
+            try:
                 extend_instructions[direction]=int(pixels)
+            except ValueError:
+                print(f'** invalid extension instruction. Use <directions> <pixels>..., as in "top 64 left 128 right 64 bottom 64"')
+        if len(extend_instructions)>0:
             restorer = Outcrop(image,self,)
             return restorer.process (
                 extend_instructions,
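The outcrop change wraps the pixel conversion in a `try/except` so a malformed direction/pixel pair no longer aborts the command. A self-contained sketch of the same parsing; the `_pairwise` helper below is a stand-in for the one in `ldm.invoke`, assumed to yield non-overlapping pairs:

```python
def _pairwise(seq):
    # stand-in helper: ['top', '64', 'left', '128'] -> ('top', '64'), ('left', '128')
    it = iter(seq)
    return zip(it, it)

def parse_outcrop(args):
    extend_instructions = {}
    for direction, pixels in _pairwise(args):
        try:
            extend_instructions[direction] = int(pixels)
        except ValueError:
            print('** invalid extension instruction. Use <directions> <pixels>..., '
                  'as in "top 64 left 128 right 64 bottom 64"')
    return extend_instructions

print(parse_outcrop(['top', '64', 'left', '128', 'right', 'sixty']))
# {'top': 64, 'left': 128} -- the malformed pair is reported and skipped
```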
@@ -715,8 +703,7 @@
         model_data = self.model_cache.get_model(model_name)
         if model_data is None or len(model_data) == 0:
-            print(f'** Model switch failed **')
-            return self.model
+            return None
         self.model = model_data['model']
         self.width = model_data['width']
@@ -728,7 +715,7 @@
         seed_everything(random.randrange(0, np.iinfo(np.uint32).max))
         if self.embedding_path is not None:
-            model.embedding_manager.load(
+            self.model.embedding_manager.load(
                 self.embedding_path, self.precision == 'float32' or self.precision == 'autocast'
             )

View File

@@ -366,17 +366,16 @@ class Args(object):
         deprecated_group.add_argument('--laion400m')
         deprecated_group.add_argument('--weights') # deprecated
         model_group.add_argument(
-            '--conf',
+            '--config',
             '-c',
-            '-conf',
+            '-config',
             dest='conf',
             default='./configs/models.yaml',
             help='Path to configuration file for alternate models.',
         )
         model_group.add_argument(
             '--model',
-            default='stable-diffusion-1.4',
-            help='Indicates which diffusion model to load. (currently "stable-diffusion-1.4" (default) or "laion400m")',
+            help='Indicates which diffusion model to load (defaults to "default" stanza in configs/models.yaml)',
         )
         model_group.add_argument(
             '--png_compression','-z',
@@ -529,7 +528,7 @@
             formatter_class=ArgFormatter,
             description=
             """
-            *Image generation:*
+            *Image generation*
             invoke> a fantastic alien landscape -W576 -H512 -s60 -n4
             *postprocessing*
@@ -544,6 +543,13 @@
             !history lists all the commands issued during the current session.
             !NN retrieves the NNth command from the history
+            *Model manipulation*
+            !models -- list models in configs/models.yaml
+            !switch <model_name> -- switch to model named <model_name>
+            !import_model path/to/weights/file.ckpt -- adds a model to your config
+            !edit_model <model_name> -- edit a model's description
+            !del_model <model_name> -- delete a model
             """
         )
         render_group = parser.add_argument_group('General rendering')
@@ -967,17 +973,17 @@ def sha256(path):
     return sha.hexdigest()
 def legacy_metadata_load(meta,pathname) -> Args:
+    opt = Args()
     if 'Dream' in meta and len(meta['Dream']) > 0:
         dream_prompt = meta['Dream']
-        opt = Args()
         opt.parse_cmd(dream_prompt)
-        return opt
     else: # if nothing else, we can get the seed
         match = re.search('\d+\.(\d+)',pathname)
         if match:
             seed = match.groups()[0]
-            opt = Args()
             opt.seed = seed
+        else:
+            opt.prompt = ''
+            opt.seed = 0
     return opt
-    return None

View File

@@ -13,6 +13,7 @@ import gc
 import hashlib
 import psutil
 import transformers
+import os
 from sys import getrefcount
 from omegaconf import OmegaConf
 from omegaconf.errors import ConfigAttributeError
@@ -73,7 +74,8 @@
         except Exception as e:
             print(f'** model {model_name} could not be loaded: {str(e)}')
             print(f'** restoring {self.current_model}')
-            return self.get_model(self.current_model)
+            self.get_model(self.current_model)
+            return None
         self.current_model = model_name
         self._push_newest_model(model_name)
@ -84,6 +86,26 @@ class ModelCache(object):
'hash': hash 'hash': hash
} }
def default_model(self) -> str:
'''
Returns the name of the default model, or None
if none is defined.
'''
for model_name in self.config:
if self.config[model_name].get('default',False):
return model_name
return None
def set_default_model(self,model_name:str):
'''
Set the default model. The change will not take
effect until you call model_cache.commit()
'''
assert model_name in self.models,f"unknown model '{model_name}'"
for model in self.models:
self.models[model].pop('default',None)
self.models[model_name]['default'] = True
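A hedged sketch of how the two helpers above might be combined; `cache` stands in for a ModelCache instance and the model name is illustrative:

if cache.default_model() is None:
    cache.set_default_model('stable-diffusion-1.4')
cache.commit('./configs/models.yaml')   # persist the default flag (see commit() below)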
def list_models(self) -> dict: def list_models(self) -> dict:
''' '''
Return a dict of models in the format: Return a dict of models in the format:
@ -121,12 +143,23 @@ class ModelCache(object):
else: else:
print(line) print(line)
def add_model(self, model_name:str, model_attributes:dict, clobber=False) ->str: def del_model(self, model_name:str) ->bool:
'''
Delete the named model.
'''
omega = self.config
del omega[model_name]
if model_name in self.stack:
self.stack.remove(model_name)
return True
def add_model(self, model_name:str, model_attributes:dict, clobber=False) ->True:
''' '''
Update the named model with a dictionary of attributes. Will fail with an Update the named model with a dictionary of attributes. Will fail with an
assertion error if the name already exists. Pass clobber=True to overwrite. assertion error if the name already exists. Pass clobber=True to overwrite.
On a successful update, the config will be changed in memory and a YAML On a successful update, the config will be changed in memory and the
string will be returned. method will return True. Will fail with an assertion error if provided
attributes are incorrect or the model name is missing.
''' '''
omega = self.config omega = self.config
# check that all the required fields are present # check that all the required fields are present
@ -139,7 +172,9 @@ class ModelCache(object):
config[field] = model_attributes[field] config[field] = model_attributes[field]
omega[model_name] = config omega[model_name] = config
return OmegaConf.to_yaml(omega) if clobber:
self._invalidate_cached_model(model_name)
return True
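A sketch of registering a new checkpoint through add_model(); the attribute keys are assumed to mirror the models.yaml stanza fields, and the paths are hypothetical:

ok = cache.add_model(
    'waifu-diffusion-1.3',
    {
        'config':      'configs/stable-diffusion/v1-inference.yaml',
        'weights':     'models/ldm/waifu-diffusion-v1-3/model.ckpt',   # hypothetical path
        'description': 'Waifu Diffusion v1.3',
        'width':  512,
        'height': 512,
    },
    clobber=False,
)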
def _check_memory(self): def _check_memory(self):
avail_memory = psutil.virtual_memory()[1] avail_memory = psutil.virtual_memory()[1]
@ -159,6 +194,7 @@ class ModelCache(object):
mconfig = self.config[model_name] mconfig = self.config[model_name]
config = mconfig.config config = mconfig.config
weights = mconfig.weights weights = mconfig.weights
vae = mconfig.get('vae',None)
width = mconfig.width width = mconfig.width
height = mconfig.height height = mconfig.height
@ -188,9 +224,17 @@ class ModelCache(object):
else: else:
print(' | Using more accurate float32 precision') print(' | Using more accurate float32 precision')
# look and load a matching vae file. Code borrowed from AUTOMATIC1111 modules/sd_models.py
if vae and os.path.exists(vae):
print(f' | Loading VAE weights from: {vae}')
vae_ckpt = torch.load(vae, map_location="cpu")
vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items() if k[0:4] != "loss"}
model.first_stage_model.load_state_dict(vae_dict, strict=False)
model.to(self.device) model.to(self.device)
# model.to doesn't change the cond_stage_model.device used to move the tokenizer output, so set it here # model.to doesn't change the cond_stage_model.device used to move the tokenizer output, so set it here
model.cond_stage_model.device = self.device model.cond_stage_model.device = self.device
model.eval() model.eval()
for m in model.modules(): for m in model.modules():
@ -219,6 +263,36 @@ class ModelCache(object):
if self._has_cuda(): if self._has_cuda():
torch.cuda.empty_cache() torch.cuda.empty_cache()
def commit(self,config_file_path:str):
'''
Write current configuration out to the indicated file.
'''
yaml_str = OmegaConf.to_yaml(self.config)
tmpfile = os.path.join(os.path.dirname(config_file_path),'new_config.tmp')
with open(tmpfile, 'w') as outfile:
outfile.write(self.preamble())
outfile.write(yaml_str)
os.rename(tmpfile,config_file_path)
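The write-then-rename pattern commit() relies on, shown as a standalone sketch (the helper name is illustrative); staging the temp file in the target's own directory keeps the final rename on a single filesystem:

import os

def atomic_write(path: str, text: str):
    tmp = os.path.join(os.path.dirname(path), 'new_config.tmp')
    with open(tmp, 'w') as f:
        f.write(text)
    os.rename(tmp, path)   # atomic on POSIX when tmp and path share a filesystem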
def preamble(self):
'''
Returns the preamble for the config file.
'''
return '''# This file describes the alternative machine learning models
# available to the dream script.
#
# To add a new model, follow the examples below. Each
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
'''
def _invalidate_cached_model(self,model_name:str):
self.unload_model(model_name)
if model_name in self.stack:
self.stack.remove(model_name)
self.models.pop(model_name,None)
def _model_to_cpu(self,model): def _model_to_cpu(self,model):
if self.device != 'cpu': if self.device != 'cpu':
model.cond_stage_model.device = 'cpu' model.cond_stage_model.device = 'cpu'

View File

@ -22,6 +22,7 @@ except (ImportError,ModuleNotFoundError):
IMG_EXTENSIONS = ('.png','.jpg','.jpeg','.PNG','.JPG','.JPEG','.gif','.GIF') IMG_EXTENSIONS = ('.png','.jpg','.jpeg','.PNG','.JPG','.JPEG','.gif','.GIF')
WEIGHT_EXTENSIONS = ('.ckpt','.bae') WEIGHT_EXTENSIONS = ('.ckpt','.bae')
TEXT_EXTENSIONS = ('.txt','.TXT')
CONFIG_EXTENSIONS = ('.yaml','.yml') CONFIG_EXTENSIONS = ('.yaml','.yml')
COMMANDS = ( COMMANDS = (
'--steps','-s', '--steps','-s',
@ -55,13 +56,14 @@ COMMANDS = (
'--inpaint_replace','-r', '--inpaint_replace','-r',
'--png_compression','-z', '--png_compression','-z',
'--text_mask','-tm', '--text_mask','-tm',
'!fix','!fetch','!history','!search','!clear', '!fix','!fetch','!replay','!history','!search','!clear',
'!models','!switch','!import_model','!edit_model','!del_model',
'!mask', '!mask',
'!models','!switch','!import_model','!edit_model'
) )
MODEL_COMMANDS = ( MODEL_COMMANDS = (
'!switch', '!switch',
'!edit_model', '!edit_model',
'!del_model',
) )
WEIGHT_COMMANDS = ( WEIGHT_COMMANDS = (
'!import_model', '!import_model',
@ -69,6 +71,9 @@ WEIGHT_COMMANDS = (
IMG_PATH_COMMANDS = ( IMG_PATH_COMMANDS = (
'--outdir[=\s]', '--outdir[=\s]',
) )
TEXT_PATH_COMMANDS=(
'!replay',
)
IMG_FILE_COMMANDS=( IMG_FILE_COMMANDS=(
'!fix', '!fix',
'!fetch', '!fetch',
@ -78,8 +83,9 @@ IMG_FILE_COMMANDS=(
'--init_color[=\s]', '--init_color[=\s]',
'--embedding_path[=\s]', '--embedding_path[=\s]',
) )
path_regexp = '('+'|'.join(IMG_PATH_COMMANDS+IMG_FILE_COMMANDS) + ')\s*\S*$' path_regexp = '(' + '|'.join(IMG_PATH_COMMANDS+IMG_FILE_COMMANDS) + ')\s*\S*$'
weight_regexp = '('+'|'.join(WEIGHT_COMMANDS) + ')\s*\S*$' weight_regexp = '(' + '|'.join(WEIGHT_COMMANDS) + ')\s*\S*$'
text_regexp = '(' + '|'.join(TEXT_PATH_COMMANDS) + ')\s*\S*$'
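A small sketch of how these patterns steer completion, assuming the readline buffer holds a partially typed !replay command:

import re

TEXT_PATH_COMMANDS = ('!replay',)
text_regexp = '(' + '|'.join(TEXT_PATH_COMMANDS) + r')\s*\S*$'
buffer = '!replay outputs/commands'
if re.search(text_regexp, buffer):
    pass   # would fall through to path completion filtered to .txt/.TXT files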
class Completer(object): class Completer(object):
def __init__(self, options, models=[]): def __init__(self, options, models=[]):
@ -122,6 +128,9 @@ class Completer(object):
elif re.search(weight_regexp,buffer): elif re.search(weight_regexp,buffer):
self.matches = self._path_completions(text, state, WEIGHT_EXTENSIONS) self.matches = self._path_completions(text, state, WEIGHT_EXTENSIONS)
elif re.search(text_regexp,buffer):
self.matches = self._path_completions(text, state, TEXT_EXTENSIONS)
# This is the first time for this text, so build a match list. # This is the first time for this text, so build a match list.
elif text: elif text:
self.matches = [ self.matches = [
@ -210,9 +219,24 @@ class Completer(object):
pydoc.pager('\n'.join(lines)) pydoc.pager('\n'.join(lines))
def set_line(self,line)->None: def set_line(self,line)->None:
'''
Set the default string displayed in the next line of input.
'''
self.linebuffer = line self.linebuffer = line
readline.redisplay() readline.redisplay()
def add_model(self,model_name:str)->None:
'''
add a model name to the completion list
'''
self.models.append(model_name)
def del_model(self,model_name:str)->None:
'''
removes a model name from the completion list
'''
self.models.remove(model_name)
def _seed_completions(self, text, state): def _seed_completions(self, text, state):
m = re.search('(-S\s?|--seed[=\s]?)(\d*)',text) m = re.search('(-S\s?|--seed[=\s]?)(\d*)',text)
if m: if m:

View File

@ -61,13 +61,17 @@ class ESRGAN():
f'>> Real-ESRGAN Upscaling seed:{seed} : scale:{upsampler_scale}x' f'>> Real-ESRGAN Upscaling seed:{seed} : scale:{upsampler_scale}x'
) )
# Real-ESRGAN expects a BGR np array; make array and flip channels
bgr_image_array = np.array(image, dtype=np.uint8)[...,::-1]
output, _ = upsampler.enhance( output, _ = upsampler.enhance(
np.array(image, dtype=np.uint8), bgr_image_array,
outscale=upsampler_scale, outscale=upsampler_scale,
alpha_upsampler='realesrgan', alpha_upsampler='realesrgan',
) )
res = Image.fromarray(output) # Flip the channels back to RGB
res = Image.fromarray(output[...,::-1])
if strength < 1.0: if strength < 1.0:
# Resize the image to the new image if the sizes have changed # Resize the image to the new image if the sizes have changed
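A minimal sketch of the channel handling above, assuming an RGB PIL image (the filename is hypothetical): Real-ESRGAN follows the OpenCV BGR convention, so the channel axis is reversed on the way in and again on the way out.

import numpy as np
from PIL import Image

rgb = np.asarray(Image.open('sample.png').convert('RGB'), dtype=np.uint8)  # hypothetical file
bgr = rgb[..., ::-1]            # RGB -> BGR for the upsampler
restored_rgb = bgr[..., ::-1]   # BGR -> RGB after enhancement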

View File

@ -64,7 +64,8 @@ def make_ddim_timesteps(
): ):
if ddim_discr_method == 'uniform': if ddim_discr_method == 'uniform':
c = num_ddpm_timesteps // num_ddim_timesteps c = num_ddpm_timesteps // num_ddim_timesteps
# ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) if c < 1:
c = 1
ddim_timesteps = (np.arange(0, num_ddim_timesteps) * c).astype(int) ddim_timesteps = (np.arange(0, num_ddim_timesteps) * c).astype(int)
elif ddim_discr_method == 'quad': elif ddim_discr_method == 'quad':
ddim_timesteps = ( ddim_timesteps = (
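Why the c < 1 guard matters, as a small numeric sketch: when more DDIM steps are requested than the model has DDPM steps, the integer division would otherwise yield c == 0 and collapse every timestep to zero.

import numpy as np

num_ddpm_timesteps, num_ddim_timesteps = 20, 50
c = num_ddpm_timesteps // num_ddim_timesteps   # 0 without the guard
if c < 1:
    c = 1
ddim_timesteps = (np.arange(0, num_ddim_timesteps) * c).astype(int)   # 0, 1, 2, ..., 49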

main.py
View File

@ -439,7 +439,7 @@ class ImageLogger(Callback):
self.rescale = rescale self.rescale = rescale
self.batch_freq = batch_frequency self.batch_freq = batch_frequency
self.max_images = max_images self.max_images = max_images
self.logger_log_images = { pl.loggers.TestTubeLogger: self._testtube, } if torch.cuda.is_available() else { } self.logger_log_images = { }
self.log_steps = [ self.log_steps = [
2**n for n in range(int(np.log2(self.batch_freq)) + 1) 2**n for n in range(int(np.log2(self.batch_freq)) + 1)
] ]
@ -451,17 +451,6 @@ class ImageLogger(Callback):
self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {} self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {}
self.log_first_step = log_first_step self.log_first_step = log_first_step
@rank_zero_only
def _testtube(self, pl_module, images, batch_idx, split):
for k in images:
grid = torchvision.utils.make_grid(images[k])
grid = (grid + 1.0) / 2.0 # -1,1 -> 0,1; c,h,w
tag = f'{split}/{k}'
pl_module.logger.experiment.add_image(
tag, grid, global_step=pl_module.global_step
)
@rank_zero_only @rank_zero_only
def log_local( def log_local(
self, save_dir, split, images, global_step, current_epoch, batch_idx self, save_dir, split, images, global_step, current_epoch, batch_idx
@ -714,7 +703,7 @@ if __name__ == '__main__':
# merge trainer cli with config # merge trainer cli with config
trainer_config = lightning_config.get('trainer', OmegaConf.create()) trainer_config = lightning_config.get('trainer', OmegaConf.create())
# default to ddp # default to ddp
trainer_config['accelerator'] = 'ddp' trainer_config['accelerator'] = 'auto'
for k in nondefault_trainer_args(opt): for k in nondefault_trainer_args(opt):
trainer_config[k] = getattr(opt, k) trainer_config[k] = getattr(opt, k)
if not 'gpus' in trainer_config: if not 'gpus' in trainer_config:
@ -751,10 +740,6 @@ if __name__ == '__main__':
trainer_kwargs = dict() trainer_kwargs = dict()
# default logger configs # default logger configs
if torch.cuda.is_available():
def_logger = 'testtube'
def_logger_target = 'TestTubeLogger'
else:
def_logger = 'csv' def_logger = 'csv'
def_logger_target = 'CSVLogger' def_logger_target = 'CSVLogger'
default_logger_cfgs = { default_logger_cfgs = {
@ -918,7 +903,8 @@ if __name__ == '__main__':
config.model.base_learning_rate, config.model.base_learning_rate,
) )
if not cpu: if not cpu:
ngpu = len(lightning_config.trainer.gpus.strip(',').split(',')) gpus = str(lightning_config.trainer.gpus).strip(', ').split(',')
ngpu = len(gpus)
else: else:
ngpu = 1 ngpu = 1
if 'accumulate_grad_batches' in lightning_config.trainer: if 'accumulate_grad_batches' in lightning_config.trainer:
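A sketch of what the gpu-count fix above tolerates; the helper name is illustrative. The trainer's gpus field may arrive as an int, "0," or "0,1", and the old .strip(',') call assumed a string:

def count_gpus(gpus_field) -> int:
    gpus = str(gpus_field).strip(', ').split(',')
    return len(gpus)

assert count_gpus('0,') == 1
assert count_gpus('0,1') == 2
assert count_gpus(0) == 1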

View File

@ -17,9 +17,15 @@ from ldm.invoke.pngwriter import PngWriter, retrieve_metadata, write_metadata
from ldm.invoke.image_util import make_grid from ldm.invoke.image_util import make_grid
from ldm.invoke.log import write_log from ldm.invoke.log import write_log
from omegaconf import OmegaConf from omegaconf import OmegaConf
from pathlib import Path
# global used in multiple functions (fix)
infile = None
def main(): def main():
"""Initialize command-line parsers and the diffusion model""" """Initialize command-line parsers and the diffusion model"""
global infile
opt = Args() opt = Args()
args = opt.parse_args() args = opt.parse_args()
if not args: if not args:
@ -48,7 +54,6 @@ def main():
os.makedirs(opt.outdir) os.makedirs(opt.outdir)
# load the infile as a list of lines # load the infile as a list of lines
infile = None
if opt.infile: if opt.infile:
try: try:
if os.path.isfile(opt.infile): if os.path.isfile(opt.infile):
@ -96,14 +101,16 @@ def main():
) )
try: try:
main_loop(gen, opt, infile) main_loop(gen, opt)
except KeyboardInterrupt: except KeyboardInterrupt:
print("\ngoodbye!") print("\ngoodbye!")
# TODO: main_loop() has gotten busy. Needs to be refactored. # TODO: main_loop() has gotten busy. Needs to be refactored.
def main_loop(gen, opt, infile): def main_loop(gen, opt):
"""prompt/read/execute loop""" """prompt/read/execute loop"""
global infile
done = False done = False
doneAfterInFile = infile is not None
path_filter = re.compile(r'[<>:"/\\|?*]') path_filter = re.compile(r'[<>:"/\\|?*]')
last_results = list() last_results = list()
model_config = OmegaConf.load(opt.conf) model_config = OmegaConf.load(opt.conf)
@ -130,7 +137,8 @@ def main_loop(gen, opt, infile):
try: try:
command = get_next_command(infile) command = get_next_command(infile)
except EOFError: except EOFError:
done = True done = infile is None or doneAfterInFile
infile = None
continue continue
# skip empty lines # skip empty lines
@ -368,7 +376,10 @@ def main_loop(gen, opt, infile):
print('goodbye!') print('goodbye!')
# TO DO: remove repetitive code and the awkward command.replace() trope
# Just do a simple parse of the command!
def do_command(command:str, gen, opt:Args, completer) -> tuple: def do_command(command:str, gen, opt:Args, completer) -> tuple:
global infile
operation = 'generate' # default operation, alternative is 'postprocess' operation = 'generate' # default operation, alternative is 'postprocess'
if command.startswith('!dream'): # in case a stored prompt still contains the !dream command if command.startswith('!dream'): # in case a stored prompt still contains the !dream command
@ -413,9 +424,26 @@ def do_command(command:str, gen, opt:Args, completer) -> tuple:
completer.add_history(command) completer.add_history(command)
operation = None operation = None
elif command.startswith('!del'):
path = shlex.split(command)
if len(path) < 2:
print('** please provide the name of a model')
else:
del_config(path[1], gen, opt, completer)
completer.add_history(command)
operation = None
elif command.startswith('!fetch'): elif command.startswith('!fetch'):
file_path = command.replace('!fetch ','',1) file_path = command.replace('!fetch','',1).strip()
retrieve_dream_command(opt,file_path,completer) retrieve_dream_command(opt,file_path,completer)
completer.add_history(command)
operation = None
elif command.startswith('!replay'):
file_path = command.replace('!replay','',1).strip()
if infile is None and os.path.isfile(file_path):
infile = open(file_path, 'r', encoding='utf-8')
completer.add_history(command)
operation = None operation = None
elif command.startswith('!history'): elif command.startswith('!history'):
@ -423,7 +451,7 @@ def do_command(command:str, gen, opt:Args, completer) -> tuple:
operation = None operation = None
elif command.startswith('!search'): elif command.startswith('!search'):
search_str = command.replace('!search ','',1) search_str = command.replace('!search','',1).strip()
completer.show_history(search_str) completer.show_history(search_str)
operation = None operation = None
@ -465,6 +493,16 @@ def add_weights_to_config(model_path:str, gen, opt, completer):
new_config['config'] = input('Configuration file for this model: ') new_config['config'] = input('Configuration file for this model: ')
done = os.path.exists(new_config['config']) done = os.path.exists(new_config['config'])
done = False
completer.complete_extensions(('.vae.pt','.vae','.ckpt'))
while not done:
vae = input('VAE autoencoder file for this model [None]: ')
if os.path.exists(vae):
new_config['vae'] = vae
done = True
else:
done = len(vae)==0
completer.complete_extensions(None) completer.complete_extensions(None)
for field in ('width','height'): for field in ('width','height'):
@ -479,8 +517,24 @@ def add_weights_to_config(model_path:str, gen, opt, completer):
except: except:
print('** Please enter a valid integer between 64 and 2048') print('** Please enter a valid integer between 64 and 2048')
if write_config_file(opt.conf, gen, model_name, new_config): make_default = input('Make this the default model? [n] ') in ('y','Y')
gen.set_model(model_name)
if write_config_file(opt.conf, gen, model_name, new_config, make_default=make_default):
completer.add_model(model_name)
def del_config(model_name:str, gen, opt, completer):
current_model = gen.model_name
if model_name == current_model:
print("** Can't delete active model. !switch to another model first. **")
return
gen.model_cache.del_model(model_name)
gen.model_cache.commit(opt.conf)
print(f'** {model_name} deleted')
completer.del_model(model_name)
def edit_config(model_name:str, gen, opt, completer): def edit_config(model_name:str, gen, opt, completer):
config = gen.model_cache.config config = gen.model_cache.config
@ -493,33 +547,46 @@ def edit_config(model_name:str, gen, opt, completer):
conf = config[model_name] conf = config[model_name]
new_config = {} new_config = {}
completer.complete_extensions(('.yaml','.yml','.ckpt','.vae')) completer.complete_extensions(('.yaml','.yml','.ckpt','.vae.pt'))
for field in ('description', 'weights', 'config', 'width','height'): for field in ('description', 'weights', 'vae', 'config', 'width','height'):
completer.linebuffer = str(conf[field]) if field in conf else '' completer.linebuffer = str(conf[field]) if field in conf else ''
new_value = input(f'{field}: ') new_value = input(f'{field}: ')
new_config[field] = int(new_value) if field in ('width','height') else new_value new_config[field] = int(new_value) if field in ('width','height') else new_value
make_default = input('Make this the default model? [n] ') in ('y','Y')
completer.complete_extensions(None) completer.complete_extensions(None)
write_config_file(opt.conf, gen, model_name, new_config, clobber=True, make_default=make_default)
if write_config_file(opt.conf, gen, model_name, new_config, clobber=True): def write_config_file(conf_path, gen, model_name, new_config, clobber=False, make_default=False):
gen.set_model(model_name) current_model = gen.model_name
def write_config_file(conf_path, gen, model_name, new_config, clobber=False):
op = 'modify' if clobber else 'import' op = 'modify' if clobber else 'import'
print('\n>> New configuration:') print('\n>> New configuration:')
if make_default:
new_config['default'] = True
print(yaml.dump({model_name:new_config})) print(yaml.dump({model_name:new_config}))
if input(f'OK to {op} [n]? ') not in ('y','Y'): if input(f'OK to {op} [n]? ') not in ('y','Y'):
return False return False
try: try:
print('>> Verifying that new model loads...')
yaml_str = gen.model_cache.add_model(model_name, new_config, clobber) yaml_str = gen.model_cache.add_model(model_name, new_config, clobber)
assert gen.set_model(model_name) is not None, 'model failed to load'
except AssertionError as e: except AssertionError as e:
print(f'** configuration failed: {str(e)}') print(f'** aborting **')
gen.model_cache.del_model(model_name)
return False return False
tmpfile = os.path.join(os.path.dirname(conf_path),'new_config.tmp') if make_default:
with open(tmpfile, 'w') as outfile: print('making this default')
outfile.write(yaml_str) gen.model_cache.set_default_model(model_name)
os.rename(tmpfile,conf_path)
gen.model_cache.commit(conf_path)
do_switch = input(f'Keep model loaded? [y]')
if len(do_switch)==0 or do_switch[0] in ('y','Y'):
pass
else:
gen.set_model(current_model)
return True return True
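Putting the pieces of write_config_file() together as a hedged sketch with hypothetical names: add the stanza, verify it actually loads, optionally flag it as the default, then persist the whole configuration.

gen.model_cache.add_model('my-model', new_config, clobber=False)
assert gen.set_model('my-model') is not None, 'model failed to load'
gen.model_cache.set_default_model('my-model')      # only if the user asked for it
gen.model_cache.commit('./configs/models.yaml')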
def do_textmask(gen, opt, callback): def do_textmask(gen, opt, callback):
@ -579,7 +646,10 @@ def add_postprocessing_to_metadata(opt,original_file,new_file,tool,command):
original_file = original_file if os.path.exists(original_file) else os.path.join(opt.outdir,original_file) original_file = original_file if os.path.exists(original_file) else os.path.join(opt.outdir,original_file)
new_file = new_file if os.path.exists(new_file) else os.path.join(opt.outdir,new_file) new_file = new_file if os.path.exists(new_file) else os.path.join(opt.outdir,new_file)
meta = retrieve_metadata(original_file)['sd-metadata'] meta = retrieve_metadata(original_file)['sd-metadata']
img_data = meta['image'] if 'image' not in meta:
meta = metadata_dumps(opt,seeds=[opt.seed])['image']
meta['image'] = {}
img_data = meta.get('image')
pp = img_data.get('postprocessing',[]) or [] pp = img_data.get('postprocessing',[]) or []
pp.append( pp.append(
{ {
@ -723,27 +793,63 @@ def make_step_callback(gen, opt, prefix):
image.save(filename,'PNG') image.save(filename,'PNG')
return callback return callback
def retrieve_dream_command(opt,file_path,completer): def retrieve_dream_command(opt,command,completer):
''' '''
Given a full or partial path to a previously-generated image file, Given a full or partial path to a previously-generated image file,
will retrieve and format the dream command used to generate the image, will retrieve and format the dream command used to generate the image,
and pop it into the readline buffer (linux, Mac), or print out a comment and pop it into the readline buffer (linux, Mac), or print out a comment
for cut-and-paste (windows) for cut-and-paste (windows)
Given a wildcard path to a folder with image png files,
will retrieve and format the dream command used to generate the images,
and save them to a file commands.txt for further processing
''' '''
if len(command) == 0:
return
tokens = command.split()
if len(tokens) > 1:
outfilepath = tokens[1]
else:
outfilepath = "commands.txt"
file_path = tokens[0]
dir,basename = os.path.split(file_path) dir,basename = os.path.split(file_path)
if len(dir) == 0: if len(dir) == 0:
path = os.path.join(opt.outdir,basename) dir = opt.outdir
else:
path = file_path outdir,outname = os.path.split(outfilepath)
if len(outdir) == 0:
outfilepath = os.path.join(dir,outname)
try:
paths = list(Path(dir).glob(basename))
except ValueError:
print(f'## "{basename}": unacceptable pattern')
return
commands = []
for path in paths:
try: try:
cmd = dream_cmd_from_png(path) cmd = dream_cmd_from_png(path)
except OSError: except OSError:
print(f'** {path}: file could not be read') print(f'## {path}: file could not be read')
return continue
except (KeyError, AttributeError): except (KeyError, AttributeError, IndexError):
print(f'** {path}: file has no metadata') print(f'## {path}: file has no metadata')
return continue
completer.set_line(cmd) except:
print(f'## {path}: file could not be processed')
continue
commands.append(f'# {path}')
commands.append(cmd)
with open(outfilepath, 'w', encoding='utf-8') as f:
f.write('\n'.join(commands))
print(f'>> File {outfilepath} with commands created')
if len(commands) == 2:
completer.set_line(commands[1])
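A sketch of the commands.txt layout the rewritten !fetch produces and !replay can read back: each image contributes a comment naming the source file followed by its dream command (the entries here are hypothetical).

commands = [
    '# outputs/img-samples/000011.455191133.png',
    '"a fantastic alien landscape" -s60 -W576 -H512 -C7.5 -Ak_lms -S455191133',
]
with open('commands.txt', 'w', encoding='utf-8') as f:
    f.write('\n'.join(commands))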
######################################
if __name__ == '__main__': if __name__ == '__main__':
main() main()