Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)
Add configuration system, remove legacy globals, args, generate and CLI (#3340)
# Application-wide configuration service

This PR creates a new `InvokeAIAppConfig` object that reads application-wide settings from an init file, the environment, and the command line.

Arguments and fields are taken from the pydantic definition of the model. Defaults can be set by creating a yaml configuration file that has a top-level key of "InvokeAI" and subheadings for each of the categories returned by `invokeai --help`. The file looks like this:

[file: invokeai.yaml]
```
InvokeAI:
  Paths:
    root: /home/lstein/invokeai-main
    conf_path: configs/models.yaml
    legacy_conf_dir: configs/stable-diffusion
    outdir: outputs
    embedding_dir: embeddings
    lora_dir: loras
    autoconvert_dir: null
    gfpgan_model_dir: models/gfpgan/GFPGANv1.4.pth
  Models:
    model: stable-diffusion-1.5
    embeddings: true
  Memory/Performance:
    xformers_enabled: false
    sequential_guidance: false
    precision: float16
    max_loaded_models: 4
    always_use_cpu: false
    free_gpu_mem: false
  Features:
    nsfw_checker: true
    restore: true
    esrgan: true
    patchmatch: true
    internet_available: true
    log_tokenization: false
  Cross-Origin Resource Sharing:
    allow_origins: []
    allow_credentials: true
    allow_methods:
    - '*'
    allow_headers:
    - '*'
  Web Server:
    host: 127.0.0.1
    port: 8081
```

The default name of the configuration file is `invokeai.yaml`, located in INVOKEAI_ROOT. You can supersede it by passing any OmegaConf dictionary to the config object at initialization time:

```
omegaconf = OmegaConf.load('/tmp/init.yaml')
conf = InvokeAIAppConfig(conf=omegaconf)
```

By default, InvokeAIAppConfig will parse the contents of `sys.argv` at initialization time. You may pass a list of strings in the optional `argv` argument to use instead of the system argv:

```
conf = InvokeAIAppConfig(argv=['--xformers_enabled'])
```

It is also possible to set a value at initialization time. This value has the highest priority:

```
conf = InvokeAIAppConfig(xformers_enabled=True)
```

Any setting can be overwritten by setting an environment variable of the form "INVOKEAI_<setting>", as in:

```
export INVOKEAI_port=8080
```

Order of precedence (from highest):
1. initialization options
2. command line options
3. environment variable options
4. config file options
5. pydantic defaults

Typical usage:

```
from invokeai.app.services.config import InvokeAIAppConfig

# get global configuration and print its nsfw_checker value
conf = InvokeAIAppConfig()
print(conf.nsfw_checker)
```

Finally, the configuration object is able to recreate its (modified) yaml file by calling its `to_yaml()` method:

```
conf = InvokeAIAppConfig(outdir='/tmp', port=8080)
print(conf.to_yaml())
```

# Legacy code removal and porting

This PR replaces Globals with the InvokeAIAppConfig system throughout, and therefore removes the `globals.py` and `args.py` modules. It also removes `generate` and the legacy CLI. ***The old CLI and web servers are now gone.***

I have ported the functionality of the configuration script, the model installer, and the merge and textual inversion scripts. The `invokeai` command will now launch `invokeai-node-cli`, and `invokeai-web` will launch the web server. I have changed the continuous invocation tests to accommodate the new command syntax in `invokeai-node-cli`.

As a convenience, you can also pass invocations to `invokeai-node-cli` (or its alias `invokeai`) on the command line or as standard input:

```
invokeai-node-cli "t2i --positive_prompt 'banana sushi' --seed 42"
invokeai < invocation_commands.txt
```
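To make the precedence rules concrete, here is a small sketch (untested, with assumed values; it presumes an `invokeai.yaml` whose Web Server section sets `port: 8081`):

```
import os
from invokeai.app.services.config import InvokeAIAppConfig

# config-file layer: invokeai.yaml (assumed) sets port: 8081
conf = InvokeAIAppConfig(argv=[])   # argv=[] keeps the real sys.argv out of the test

# an environment variable overrides the config file
os.environ["INVOKEAI_port"] = "8080"
conf = InvokeAIAppConfig(argv=[])   # port now comes from the environment

# a command-line switch overrides the environment variable
conf = InvokeAIAppConfig(argv=["--port", "9090"])             # port == 9090

# an initialization argument overrides everything
conf = InvokeAIAppConfig(argv=["--port", "9090"], port=9091)  # port == 9091
```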
This commit is contained in: commit 7025c00581
.github/workflows/test-invoke-pip.yml (20 changes; vendored)

```
@@ -80,12 +80,7 @@ jobs:
       - uses: actions/checkout@v3

       - name: set test prompt to main branch validation
-        if: ${{ github.ref == 'refs/heads/main' }}
-        run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}
-
-      - name: set test prompt to Pull Request validation
-        if: ${{ github.ref != 'refs/heads/main' }}
-        run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
+        run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}

       - name: setup python
         uses: actions/setup-python@v4
@@ -105,12 +100,6 @@ jobs:
         id: run-pytest
         run: pytest

-      - name: set INVOKEAI_OUTDIR
-        run: >
-          python -c
-          "import os;from invokeai.backend.globals import Globals;OUTDIR=os.path.join(Globals.root,str('outputs'));print(f'INVOKEAI_OUTDIR={OUTDIR}')"
-          >> ${{ matrix.github-env }}
-
       - name: run invokeai-configure
         id: run-preload-models
         env:
@@ -129,15 +118,20 @@ jobs:
           HF_HUB_OFFLINE: 1
           HF_DATASETS_OFFLINE: 1
           TRANSFORMERS_OFFLINE: 1
+          INVOKEAI_OUTDIR: ${{ github.workspace }}/results
         run: >
           invokeai
           --no-patchmatch
          --no-nsfw_checker
-          --from_file ${{ env.TEST_PROMPTS }}
+          --precision=float32
+          --always_use_cpu
           --outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
+          --from_file ${{ env.TEST_PROMPTS }}

       - name: Archive results
         id: archive-results
+        env:
+          INVOKEAI_OUTDIR: ${{ github.workspace }}/results
         uses: actions/upload-artifact@v3
         with:
           name: results
```
.gitignore (2 changes; vendored)

```
@@ -201,6 +201,8 @@ checkpoints
 # If it's a Mac
 .DS_Store

+invokeai/frontend/web/dist/*
+
 # Let the frontend manage its own gitignore
 !invokeai/frontend/web/*
```
```
@@ -7,7 +7,6 @@ from typing import types

 from ..services.default_graphs import create_system_graphs
 from ..services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
-from ...backend import Globals
 from ..services.model_manager_initializer import get_model_manager
 from ..services.restoration_services import RestorationServices
 from ..services.graph import GraphExecutionState, LibraryGraph
@@ -42,17 +41,8 @@ class ApiDependencies:

     invoker: Invoker = None

-    @staticmethod
     def initialize(config, event_handler_id: int, logger: types.ModuleType=logger):
-        Globals.try_patchmatch = config.patchmatch
-        Globals.always_use_cpu = config.always_use_cpu
-        Globals.internet_available = config.internet_available and check_internet()
-        Globals.disable_xformers = not config.xformers
-        Globals.ckpt_convert = config.ckpt_convert
-
-        # TO DO: Use the config to select the logger rather than use the default
-        # invokeai logging module
-        logger.info(f"Internet connectivity is {Globals.internet_available}")
+        logger.info(f"Internet connectivity is {config.internet_available}")

         events = FastAPIEventService(event_handler_id)

@@ -72,7 +62,6 @@ class ApiDependencies:
         services = InvocationServices(
             model_manager=get_model_manager(config,logger),
             events=events,
-            logger=logger,
             latents=latents,
             images=images,
             metadata=metadata,
@@ -85,6 +74,8 @@ class ApiDependencies:
             ),
             processor=DefaultInvocationProcessor(),
             restoration=RestorationServices(config,logger),
+            configuration=config,
+            logger=logger,
         )

         create_system_graphs(services.graph_library)
```
```
@@ -13,11 +13,11 @@ from fastapi_events.handlers.local import local_handler
 from fastapi_events.middleware import EventHandlerASGIMiddleware
 from pydantic.schema import schema

-from ..backend import Args
 from .api.dependencies import ApiDependencies
 from .api.routers import images, sessions, models
 from .api.sockets import SocketIO
 from .invocations.baseinvocation import BaseInvocation
+from .services.config import InvokeAIAppConfig

 # Create the app
 # TODO: create this all in a method so configuration/etc. can be passed in?
@@ -33,30 +33,25 @@ app.add_middleware(
     middleware_id=event_handler_id,
 )

-# Add CORS
-# TODO: use configuration for this
-origins = []
-app.add_middleware(
-    CORSMiddleware,
-    allow_origins=origins,
-    allow_credentials=True,
-    allow_methods=["*"],
-    allow_headers=["*"],
-)
-
 socket_io = SocketIO(app)

-config = {}
+# initialize config
+# this is a module global
+app_config = InvokeAIAppConfig()

 # Add startup event to load dependencies
 @app.on_event("startup")
 async def startup_event():
-    config = Args()
-    config.parse_args()
+    app.add_middleware(
+        CORSMiddleware,
+        allow_origins=app_config.allow_origins,
+        allow_credentials=app_config.allow_credentials,
+        allow_methods=app_config.allow_methods,
+        allow_headers=app_config.allow_headers,
+    )

     ApiDependencies.initialize(
-        config=config, event_handler_id=event_handler_id, logger=logger
+        config=app_config, event_handler_id=event_handler_id, logger=logger
     )

@@ -148,14 +143,11 @@ app.mount("/", StaticFiles(directory="invokeai/frontend/web/dist", html=True), n

 def invoke_api():
     # Start our own event loop for eventing usage
-    # TODO: determine if there's a better way to do this
     loop = asyncio.new_event_loop()
-    config = uvicorn.Config(app=app, host="0.0.0.0", port=9090, loop=loop)
+    config = uvicorn.Config(app=app, host=app_config.host, port=app_config.port, loop=loop)
     # Use access_log to turn off logging

     server = uvicorn.Server(config)
     loop.run_until_complete(server.serve())


 if __name__ == "__main__":
     invoke_api()
```
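Because every pydantic field, including the new CORS lists, is surfaced as a command-line switch, the web-server settings above can also be overridden without editing `invokeai.yaml`. A sketch with assumed values:

```
from invokeai.app.services.config import InvokeAIAppConfig

app_config = InvokeAIAppConfig(argv=[
    "--host", "0.0.0.0",
    "--port", "8080",
    "--allow_origins", "http://localhost:5173", "https://example.com",
])
# uvicorn then binds app_config.host / app_config.port, and the CORS
# middleware receives app_config.allow_origins and friends, as in the
# startup_event() shown above.
```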
```
@@ -285,3 +285,19 @@ class DrawExecutionGraphCommand(BaseCommand):
     nx.draw_networkx_labels(nxgraph, pos, font_size=20, font_family="sans-serif")
     plt.axis("off")
     plt.show()
+
+class SortedHelpFormatter(argparse.HelpFormatter):
+    def _iter_indented_subactions(self, action):
+        try:
+            get_subactions = action._get_subactions
+        except AttributeError:
+            pass
+        else:
+            self._indent()
+            if isinstance(action, argparse._SubParsersAction):
+                for subaction in sorted(get_subactions(), key=lambda x: x.dest):
+                    yield subaction
+            else:
+                for subaction in get_subactions():
+                    yield subaction
+            self._dedent()
```
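For reference, the new formatter can be attached to any parser; it sorts subcommand entries alphabetically by destination name when help is rendered. A minimal sketch (the two subcommands are hypothetical):

```
import argparse
from invokeai.app.cli.commands import SortedHelpFormatter

parser = argparse.ArgumentParser(formatter_class=SortedHelpFormatter)
sub = parser.add_subparsers()
sub.add_parser("t2i", help="text-to-image invocation")   # hypothetical
sub.add_parser("banana", help="demo command")            # hypothetical

parser.parse_args(["--help"])   # lists "banana" before "t2i", then exits
```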
```
@@ -11,9 +11,10 @@ from pathlib import Path
 from typing import List, Dict, Literal, get_args, get_type_hints, get_origin

 import invokeai.backend.util.logging as logger
-from ...backend import ModelManager, Globals
+from ...backend import ModelManager
 from ..invocations.baseinvocation import BaseInvocation
 from .commands import BaseCommand
+from ..services.invocation_services import InvocationServices

 # singleton object, class variable
 completer = None
@@ -131,13 +132,13 @@ class Completer(object):
         readline.redisplay()
         self.linebuffer = None

-def set_autocompleter(model_manager: ModelManager) -> Completer:
+def set_autocompleter(services: InvocationServices) -> Completer:
     global completer

     if completer:
         return completer

-    completer = Completer(model_manager)
+    completer = Completer(services.model_manager)

     readline.set_completer(completer.complete)
     # pyreadline3 does not have a set_auto_history() method
@@ -153,7 +154,7 @@ def set_autocompleter(model_manager: ModelManager) -> Completer:
     readline.parse_and_bind("set skip-completed-text on")
     readline.parse_and_bind("set show-all-if-ambiguous on")

-    histfile = Path(Globals.root, ".invoke_history")
+    histfile = Path(services.configuration.root_dir / ".invoke_history")
     try:
         readline.read_history_file(histfile)
         readline.set_history_length(1000)
```
```
@@ -4,13 +4,14 @@ import argparse
 import os
 import re
 import shlex
+import sys
 import time
 from typing import (
     Union,
     get_type_hints,
 )

-from pydantic import BaseModel
+from pydantic import BaseModel, ValidationError
 from pydantic.fields import Field

@@ -19,8 +20,7 @@ from invokeai.app.services.metadata import PngMetadataService
 from .services.default_graphs import create_system_graphs
 from .services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage

-from ..backend import Args
-from .cli.commands import BaseCommand, CliContext, ExitCli, add_graph_parsers, add_parsers
+from .cli.commands import BaseCommand, CliContext, ExitCli, add_graph_parsers, add_parsers, SortedHelpFormatter
 from .cli.completer import set_autocompleter
 from .invocations.baseinvocation import BaseInvocation
 from .services.events import EventServiceBase
@@ -34,7 +34,7 @@ from .services.invocation_services import InvocationServices
 from .services.invoker import Invoker
 from .services.processor import DefaultInvocationProcessor
 from .services.sqlite import SqliteItemStorage
+from .services.config import get_invokeai_config

 class CliCommand(BaseModel):
     command: Union[BaseCommand.get_commands() + BaseInvocation.get_invocations()] = Field(discriminator="type")  # type: ignore
@@ -64,7 +64,7 @@ def add_invocation_args(command_parser):

 def get_command_parser(services: InvocationServices) -> argparse.ArgumentParser:
     # Create invocation parser
-    parser = argparse.ArgumentParser()
+    parser = argparse.ArgumentParser(formatter_class=SortedHelpFormatter)

     def exit(*args, **kwargs):
         raise InvalidArgs
@@ -189,24 +189,25 @@ def invoke_all(context: CliContext):


 def invoke_cli():
-    config = Args()
-    config.parse_args()
+    # this gets the basic configuration
+    config = get_invokeai_config()

+    # get the optional list of invocations to execute on the command line
+    parser = config.get_parser()
+    parser.add_argument('commands',nargs='*')
+    invocation_commands = parser.parse_args().commands
+
+    # get the optional file to read commands from.
+    # Simplest is to use it for STDIN
+    if infile := config.from_file:
+        sys.stdin = open(infile,"r")
+
     model_manager = get_model_manager(config,logger=logger)

-    # This initializes the autocompleter and returns it.
-    # Currently nothing is done with the returned Completer
-    # object, but the object can be used to change autocompletion
-    # behavior on the fly, if desired.
-    set_autocompleter(model_manager)
-
     events = EventServiceBase()
+    output_folder = config.output_path
     metadata = PngMetadataService()

-    output_folder = os.path.abspath(
-        os.path.join(os.path.dirname(__file__), "../../../outputs")
-    )
-
     # TODO: build a file/path manager?
     db_location = os.path.join(output_folder, "invokeai.db")

@@ -226,6 +227,7 @@ def invoke_cli():
         processor=DefaultInvocationProcessor(),
         restoration=RestorationServices(config,logger=logger),
         logger=logger,
+        configuration=config,
     )

     system_graphs = create_system_graphs(services.graph_library)
@@ -241,10 +243,18 @@ def invoke_cli():
     # print(services.session_manager.list())

     context = CliContext(invoker, session, parser)
+    set_autocompleter(services)

-    while True:
+    command_line_args_exist = len(invocation_commands) > 0
+    done = False
+
+    while not done:
         try:
-            cmd_input = input("invoke> ")
+            if command_line_args_exist:
+                cmd_input = invocation_commands.pop(0)
+                done = len(invocation_commands) == 0
+            else:
+                cmd_input = input("invoke> ")
         except (KeyboardInterrupt, EOFError):
             # Ctrl-c exits
             break
@@ -368,6 +378,9 @@ def invoke_cli():
             invoker.services.logger.warning('Invalid command, use "help" to list commands')
             continue

+        except ValidationError:
+            invoker.services.logger.warning('Invalid command arguments, run "<command> --help" for summary')
+
         except SessionError:
             # Start a new session
             invoker.services.logger.warning("Session error: creating a new session")
```
```
@@ -16,8 +16,6 @@ from compel.prompt_parser import (
     Fragment,
 )

-from invokeai.backend.globals import Globals
-

 class ConditioningField(BaseModel):
     conditioning_name: Optional[str] = Field(default=None, description="The name of conditioning data")
@@ -103,7 +101,7 @@ class CompelInvocation(BaseInvocation):
         conjunction = Compel.parse_prompt_string(prompt_str)
         prompt: Union[FlattenedPrompt, Blend] = conjunction.prompts[0]

-        if getattr(Globals, "log_tokenization", False):
+        if context.services.configuration.log_tokenization:
             log_tokenization_for_prompt_object(prompt, tokenizer)

         c, options = compel.build_conditioning_tensor_for_prompt_object(prompt)
```
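This is the pattern invocations are expected to follow after this PR: settings are read through `context.services.configuration` instead of the old `Globals`. A minimal sketch of the idiom:

```
def tokenization_logging_enabled(context) -> bool:
    # Sketch: `context` is the InvocationContext handed to every
    # invocation's invoke(); services.configuration is the config object
    # attached in ApiDependencies.initialize() and invoke_cli().
    # Replaces getattr(Globals, "log_tokenization", False).
    return context.services.configuration.log_tokenization
```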
invokeai/app/services/config.py (521 lines; new file)

```
# Copyright (c) 2023 Lincoln Stein (https://github.com/lstein) and the InvokeAI Development Team

'''InvokeAI configuration system.

Arguments and fields are taken from the pydantic definition of the
model. Defaults can be set by creating a yaml configuration file that
has a top-level key of "InvokeAI" and subheadings for each of the
categories returned by `invokeai --help`. The file looks like this:

[file: invokeai.yaml]

InvokeAI:
  Paths:
    root: /home/lstein/invokeai-main
    conf_path: configs/models.yaml
    legacy_conf_dir: configs/stable-diffusion
    outdir: outputs
    embedding_dir: embeddings
    lora_dir: loras
    autoconvert_dir: null
    gfpgan_model_dir: models/gfpgan/GFPGANv1.4.pth
  Models:
    model: stable-diffusion-1.5
    embeddings: true
  Memory/Performance:
    xformers_enabled: false
    sequential_guidance: false
    precision: float16
    max_loaded_models: 4
    always_use_cpu: false
    free_gpu_mem: false
  Features:
    nsfw_checker: true
    restore: true
    esrgan: true
    patchmatch: true
    internet_available: true
    log_tokenization: false
  Web Server:
    host: 127.0.0.1
    port: 8081
    allow_origins: []
    allow_credentials: true
    allow_methods:
    - '*'
    allow_headers:
    - '*'

The default name of the configuration file is `invokeai.yaml`, located
in INVOKEAI_ROOT. You can supersede this by providing any
OmegaConf dictionary object at initialization time:

 omegaconf = OmegaConf.load('/tmp/init.yaml')
 conf = InvokeAIAppConfig(conf=omegaconf)

By default, InvokeAIAppConfig will parse the contents of `sys.argv` at
initialization time. You may pass a list of strings in the optional
`argv` argument to use instead of the system argv:

 conf = InvokeAIAppConfig(argv=['--xformers_enabled'])

It is also possible to set a value at initialization time. This value
has highest priority.

 conf = InvokeAIAppConfig(xformers_enabled=True)

Any setting can be overwritten by setting an environment variable of
form: "INVOKEAI_<setting>", as in:

 export INVOKEAI_port=8080

Order of precedence (from highest):
   1) initialization options
   2) command line options
   3) environment variable options
   4) config file options
   5) pydantic defaults

Typical usage:

 from invokeai.app.services.config import InvokeAIAppConfig
 from invokeai.invocations.generate import TextToImageInvocation

 # get global configuration and print its nsfw_checker value
 conf = InvokeAIAppConfig()
 print(conf.nsfw_checker)

 # get the text2image invocation and print its step value
 text2image = TextToImageInvocation()
 print(text2image.steps)

Computed properties:

The InvokeAIAppConfig object has a series of properties that
resolve paths relative to the runtime root directory. They each return
a Path object:

 root_path          - path to InvokeAI root
 output_path        - path to default outputs directory
 model_conf_path    - path to models.yaml
 conf               - alias for the above
 embedding_path     - path to the embeddings directory
 lora_path          - path to the LoRA directory

In most cases, you will want to create a single InvokeAIAppConfig
object for the entire application. The get_invokeai_config() function
does this:

 config = get_invokeai_config()
 print(config.root)

# Subclassing

If you wish to create a similar class, please subclass the
`InvokeAISettings` class and define a Literal field named "type",
which is set to the desired top-level name. For example, to create a
"InvokeBatch" configuration, define like this:

  class InvokeBatch(InvokeAISettings):
     type: Literal["InvokeBatch"] = "InvokeBatch"
     node_count : int = Field(default=1, description="Number of nodes to run on", category='Resources')
     cpu_count  : int = Field(default=8, description="Number of CPUs to run on per node", category='Resources')

This will now read and write from the "InvokeBatch" section of the
config file, look for environment variables named INVOKEBATCH_*, and
accept the command-line arguments `--node_count` and `--cpu_count`. The
two configs are kept in separate sections of the config file:

  # invokeai.yaml

  InvokeBatch:
     Resources:
        node_count: 1
        cpu_count: 8

  InvokeAI:
     Paths:
        root: /home/lstein/invokeai-main
        conf_path: configs/models.yaml
        legacy_conf_dir: configs/stable-diffusion
        outdir: outputs
        ...
'''
import argparse
import pydoc
import typing
import os
import sys
from argparse import ArgumentParser
from omegaconf import OmegaConf, DictConfig
from pathlib import Path
from pydantic import BaseSettings, Field, parse_obj_as
from typing import Any, ClassVar, Dict, List, Literal, Type, Union, get_origin, get_type_hints, get_args

INIT_FILE = Path('invokeai.yaml')
LEGACY_INIT_FILE = Path('invokeai.init')

# This global stores a singleton InvokeAIAppConfig configuration object
global_config = None

class InvokeAISettings(BaseSettings):
    '''
    Runtime configuration settings in which default values are
    read from an omegaconf .yaml file.
    '''
    initconf        : ClassVar[DictConfig] = None
    argparse_groups : ClassVar[Dict] = {}

    def parse_args(self, argv: list=sys.argv[1:]):
        parser = self.get_parser()
        opt, _ = parser.parse_known_args(argv)
        for name in self.__fields__:
            if name not in self._excluded():
                setattr(self, name, getattr(opt,name))

    def to_yaml(self)->str:
        """
        Return a YAML string representing our settings. This can be used
        as the contents of `invokeai.yaml` to restore settings later.
        """
        cls = self.__class__
        type = get_args(get_type_hints(cls)['type'])[0]
        field_dict = dict({type:dict()})
        for name,field in self.__fields__.items():
            if name in cls._excluded():
                continue
            category = field.field_info.extra.get("category") or "Uncategorized"
            value = getattr(self,name)
            if category not in field_dict[type]:
                field_dict[type][category] = dict()
            # keep paths as strings to make it easier to read
            field_dict[type][category][name] = str(value) if isinstance(value,Path) else value
        conf = OmegaConf.create(field_dict)
        return OmegaConf.to_yaml(conf)

    @classmethod
    def add_parser_arguments(cls, parser):
        if 'type' in get_type_hints(cls):
            settings_stanza = get_args(get_type_hints(cls)['type'])[0]
        else:
            settings_stanza = "Uncategorized"

        env_prefix = cls.Config.env_prefix if hasattr(cls.Config,'env_prefix') else settings_stanza.upper()

        initconf = cls.initconf.get(settings_stanza) \
            if cls.initconf and settings_stanza in cls.initconf \
            else OmegaConf.create()

        # create an upcase version of the environment in
        # order to achieve case-insensitive environment
        # variables (the way Windows does)
        upcase_environ = dict()
        for key,value in os.environ.items():
            upcase_environ[key.upper()] = value

        fields = cls.__fields__
        cls.argparse_groups = {}

        for name, field in fields.items():
            if name not in cls._excluded():
                current_default = field.default

                category = field.field_info.extra.get("category","Uncategorized")
                env_name = env_prefix + '_' + name
                if category in initconf and name in initconf.get(category):
                    field.default = initconf.get(category).get(name)
                if env_name.upper() in upcase_environ:
                    field.default = upcase_environ[env_name.upper()]
                cls.add_field_argument(parser, name, field)

                field.default = current_default

    @classmethod
    def cmd_name(self, command_field: str='type')->str:
        hints = get_type_hints(self)
        if command_field in hints:
            return get_args(hints[command_field])[0]
        else:
            return 'Uncategorized'

    @classmethod
    def get_parser(cls)->ArgumentParser:
        parser = PagingArgumentParser(
            prog=cls.cmd_name(),
            description=cls.__doc__,
        )
        cls.add_parser_arguments(parser)
        return parser

    @classmethod
    def add_subparser(cls, parser: argparse.ArgumentParser):
        parser.add_parser(cls.cmd_name(), help=cls.__doc__)

    @classmethod
    def _excluded(self)->List[str]:
        return ['type','initconf']

    class Config:
        env_file_encoding = 'utf-8'
        arbitrary_types_allowed = True
        case_sensitive = True

    @classmethod
    def add_field_argument(cls, command_parser, name: str, field, default_override = None):
        field_type = get_type_hints(cls).get(name)
        default = default_override if default_override is not None else field.default if field.default_factory is None else field.default_factory()
        if category := field.field_info.extra.get("category"):
            if category not in cls.argparse_groups:
                cls.argparse_groups[category] = command_parser.add_argument_group(category)
            argparse_group = cls.argparse_groups[category]
        else:
            argparse_group = command_parser

        if get_origin(field_type) == Literal:
            allowed_values = get_args(field.type_)
            allowed_types = set()
            for val in allowed_values:
                allowed_types.add(type(val))
            allowed_types_list = list(allowed_types)
            field_type = allowed_types_list[0] if len(allowed_types) == 1 else Union[allowed_types_list]  # type: ignore

            argparse_group.add_argument(
                f"--{name}",
                dest=name,
                type=field_type,
                default=default,
                choices=allowed_values,
                help=field.field_info.description,
            )

        elif get_origin(field_type) == list:
            argparse_group.add_argument(
                f"--{name}",
                dest=name,
                nargs='*',
                type=field.type_,
                default=default,
                action=argparse.BooleanOptionalAction if field.type_==bool else 'store',
                help=field.field_info.description,
            )
        else:
            argparse_group.add_argument(
                f"--{name}",
                dest=name,
                type=field.type_,
                default=default,
                action=argparse.BooleanOptionalAction if field.type_==bool else 'store',
                help=field.field_info.description,
            )

def _find_root()->Path:
    if os.environ.get("INVOKEAI_ROOT"):
        root = Path(os.environ.get("INVOKEAI_ROOT")).resolve()
    elif (
        os.environ.get("VIRTUAL_ENV")
        and (Path(os.environ.get("VIRTUAL_ENV"), "..", INIT_FILE).exists()
             or
             Path(os.environ.get("VIRTUAL_ENV"), "..", LEGACY_INIT_FILE).exists()
        )
    ):
        root = Path(os.environ.get("VIRTUAL_ENV"), "..").resolve()
    else:
        root = Path("~/invokeai").expanduser().resolve()
    return root

class InvokeAIAppConfig(InvokeAISettings):
    '''
    Generate images using Stable Diffusion. Use "invokeai" to launch
    the command-line client (recommended for experts only), or
    "invokeai-web" to launch the web server. Global options
    can be changed by editing the file "INVOKEAI_ROOT/invokeai.yaml" or by
    setting environment variables INVOKEAI_<setting>.
    '''
    #fmt: off
    type: Literal["InvokeAI"] = "InvokeAI"
    host                : str = Field(default="127.0.0.1", description="IP address to bind to", category='Web Server')
    port                : int = Field(default=9090, description="Port to bind to", category='Web Server')
    allow_origins       : List[str] = Field(default=[], description="Allowed CORS origins", category='Web Server')
    allow_credentials   : bool = Field(default=True, description="Allow CORS credentials", category='Web Server')
    allow_methods       : List[str] = Field(default=["*"], description="Methods allowed for CORS", category='Web Server')
    allow_headers       : List[str] = Field(default=["*"], description="Headers allowed for CORS", category='Web Server')

    esrgan              : bool = Field(default=True, description="Enable/disable upscaling code", category='Features')
    internet_available  : bool = Field(default=True, description="If true, attempt to download models on the fly; otherwise only use local models", category='Features')
    log_tokenization    : bool = Field(default=False, description="Enable logging of parsed prompt tokens.", category='Features')
    nsfw_checker        : bool = Field(default=True, description="Enable/disable the NSFW checker", category='Features')
    patchmatch          : bool = Field(default=True, description="Enable/disable patchmatch inpaint code", category='Features')
    restore             : bool = Field(default=True, description="Enable/disable face restoration code", category='Features')

    always_use_cpu      : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", category='Memory/Performance')
    free_gpu_mem        : bool = Field(default=False, description="If true, purge model from GPU after each generation.", category='Memory/Performance')
    max_loaded_models   : int = Field(default=2, gt=0, description="Maximum number of models to keep in memory for rapid switching", category='Memory/Performance')
    precision           : Literal[tuple(['auto','float16','float32','autocast'])] = Field(default='float16', description='Floating point precision', category='Memory/Performance')
    sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", category='Memory/Performance')
    xformers_enabled    : bool = Field(default=True, description="Enable/disable memory-efficient attention", category='Memory/Performance')

    root                : Path = Field(default=_find_root(), description='InvokeAI runtime root directory', category='Paths')
    autoconvert_dir     : Path = Field(default=None, description='Path to a directory of ckpt files to be converted into diffusers and imported on startup.', category='Paths')
    conf_path           : Path = Field(default='configs/models.yaml', description='Path to models definition file', category='Paths')
    embedding_dir       : Path = Field(default='embeddings', description='Path to InvokeAI textual inversion embeddings directory', category='Paths')
    gfpgan_model_dir    : Path = Field(default="./models/gfpgan/GFPGANv1.4.pth", description='Path to GFPGAN models directory.', category='Paths')
    legacy_conf_dir     : Path = Field(default='configs/stable-diffusion', description='Path to directory of legacy checkpoint config files', category='Paths')
    lora_dir            : Path = Field(default='loras', description='Path to InvokeAI LoRA model directory', category='Paths')
    outdir              : Path = Field(default='outputs', description='Default folder for output images', category='Paths')
    from_file           : Path = Field(default=None, description='Take command input from the indicated file (command-line client only)', category='Paths')

    model               : str = Field(default='stable-diffusion-1.5', description='Initial model name', category='Models')
    embeddings          : bool = Field(default=True, description='Load contents of embeddings directory', category='Models')
    #fmt: on

    def __init__(self, conf: DictConfig = None, argv: List[str]=None, **kwargs):
        '''
        Initialize InvokeAIAppConfig.
        :param conf: alternate Omegaconf dictionary object
        :param argv: alternate sys.argv list
        :param **kwargs: attributes to initialize with
        '''
        super().__init__(**kwargs)

        # Set the runtime root directory. We parse command-line switches here
        # in order to pick up the --root_dir option.
        self.parse_args(argv)
        if conf is None:
            try:
                conf = OmegaConf.load(self.root_dir / INIT_FILE)
            except:
                pass
        InvokeAISettings.initconf = conf

        # parse args again in order to pick up settings in configuration file
        self.parse_args(argv)

        # restore initialization values
        hints = get_type_hints(self)
        for k in kwargs:
            setattr(self,k,parse_obj_as(hints[k],kwargs[k]))

    @property
    def root_path(self)->Path:
        '''
        Path to the runtime root directory
        '''
        if self.root:
            return Path(self.root).expanduser()
        else:
            return self.find_root()

    @property
    def root_dir(self)->Path:
        '''
        Alias for above.
        '''
        return self.root_path

    def _resolve(self,partial_path:Path)->Path:
        return (self.root_path / partial_path).resolve()

    @property
    def output_path(self)->Path:
        '''
        Path to defaults outputs directory.
        '''
        return self._resolve(self.outdir)

    @property
    def model_conf_path(self)->Path:
        '''
        Path to models configuration file.
        '''
        return self._resolve(self.conf_path)

    @property
    def legacy_conf_path(self)->Path:
        '''
        Path to directory of legacy configuration files (e.g. v1-inference.yaml)
        '''
        return self._resolve(self.legacy_conf_dir)

    @property
    def cache_dir(self)->Path:
        '''
        Path to the global cache directory for HuggingFace hub-managed models
        '''
        return self.models_dir / "hub"

    @property
    def models_dir(self)->Path:
        '''
        Path to the models directory
        '''
        return self._resolve("models")

    @property
    def embedding_path(self)->Path:
        '''
        Path to the textual inversion embeddings directory.
        '''
        return self._resolve(self.embedding_dir) if self.embedding_dir else None

    @property
    def lora_path(self)->Path:
        '''
        Path to the LoRA models directory.
        '''
        return self._resolve(self.lora_dir) if self.lora_dir else None

    @property
    def autoconvert_path(self)->Path:
        '''
        Path to the directory containing models to be imported automatically at startup.
        '''
        return self._resolve(self.autoconvert_dir) if self.autoconvert_dir else None

    @property
    def gfpgan_model_path(self)->Path:
        '''
        Path to the GFPGAN model.
        '''
        return self._resolve(self.gfpgan_model_dir) if self.gfpgan_model_dir else None

    # the following methods support legacy calls leftover from the Globals era
    @property
    def full_precision(self)->bool:
        """Return true if precision set to float32"""
        return self.precision=='float32'

    @property
    def disable_xformers(self)->bool:
        """Return true if xformers_enabled is false"""
        return not self.xformers_enabled

    @property
    def try_patchmatch(self)->bool:
        """Return true if patchmatch true"""
        return self.patchmatch

    @staticmethod
    def find_root()->Path:
        '''
        Choose the runtime root directory when not specified on command line or
        init file.
        '''
        return _find_root()


class PagingArgumentParser(argparse.ArgumentParser):
    '''
    A custom ArgumentParser that uses pydoc to page its output.
    It also supports reading defaults from an init file.
    '''
    def print_help(self, file=None):
        text = self.format_help()
        pydoc.pager(text)

def get_invokeai_config(cls:Type[InvokeAISettings]=InvokeAIAppConfig,**kwargs)->InvokeAISettings:
    '''
    This returns a singleton InvokeAIAppConfig configuration object.
    '''
    global global_config
    if global_config is None or type(global_config)!=cls:
        global_config = cls(**kwargs)
    return global_config
```
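Since `to_yaml()` emits the same layout the loader consumes, a configuration can be round-tripped through YAML. A sketch (untested, with assumed values):

```
from omegaconf import OmegaConf
from invokeai.app.services.config import InvokeAIAppConfig

conf = InvokeAIAppConfig(argv=[], outdir='/tmp/outputs', port=8080)
yaml_text = conf.to_yaml()    # suitable as the contents of invokeai.yaml

# feed the dump straight back in as the defaults layer
restored = InvokeAIAppConfig(conf=OmegaConf.create(yaml_text), argv=[])
assert restored.port == 8080
```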
```
@@ -135,6 +135,7 @@ class GraphInvocationOutput(BaseInvocationOutput):

 # TODO: Fill this out and move to invocations
 class GraphInvocation(BaseInvocation):
+    """Execute a graph"""
     type: Literal["graph"] = "graph"

     # TODO: figure out how to create a default here
@@ -162,6 +163,7 @@ class IterateInvocationOutput(BaseInvocationOutput):

 # TODO: Fill this out and move to invocations
 class IterateInvocation(BaseInvocation):
+    """Iterates over a list of items"""
    type: Literal["iterate"] = "iterate"

     collection: list[Any] = Field(
```
```
@@ -10,6 +10,7 @@ from .image_storage import ImageStorageBase
 from .restoration_services import RestorationServices
 from .invocation_queue import InvocationQueueABC
 from .item_storage import ItemStorageABC
+from .config import InvokeAISettings

 class InvocationServices:
     """Services that can be used by invocations"""
@@ -21,6 +22,7 @@ class InvocationServices:
     queue: InvocationQueueABC
     model_manager: ModelManager
     restoration: RestorationServices
+    configuration: InvokeAISettings

     # NOTE: we must forward-declare any types that include invocations, since invocations can use services
     graph_library: ItemStorageABC["LibraryGraph"]
@@ -40,6 +42,7 @@ class InvocationServices:
         graph_execution_manager: ItemStorageABC["GraphExecutionState"],
         processor: "InvocationProcessorABC",
         restoration: RestorationServices,
+        configuration: InvokeAISettings=None,
     ):
         self.model_manager = model_manager
         self.events = events
@@ -52,3 +55,4 @@ class InvocationServices:
         self.graph_execution_manager = graph_execution_manager
         self.processor = processor
         self.restoration = restoration
+        self.configuration = configuration
```
```
@@ -2,27 +2,25 @@ import os
 import sys
 import torch
 from argparse import Namespace
-from invokeai.backend import Args
 from omegaconf import OmegaConf
 from pathlib import Path
 from typing import types

 import invokeai.version
+from .config import InvokeAISettings
 from ...backend import ModelManager
 from ...backend.util import choose_precision, choose_torch_device
-from ...backend import Globals

 # TODO: Replace with an abstract class base ModelManagerBase
-def get_model_manager(config: Args, logger: types.ModuleType) -> ModelManager:
-    if not config.conf:
-        config_file = os.path.join(Globals.root, "configs", "models.yaml")
-        if not os.path.exists(config_file):
-            report_model_error(
-                config, FileNotFoundError(f"The file {config_file} could not be found."), logger
-            )
+def get_model_manager(config: InvokeAISettings, logger: types.ModuleType) -> ModelManager:
+    model_config = config.model_conf_path
+    if not model_config.exists():
+        report_model_error(
+            config, FileNotFoundError(f"The file {model_config} could not be found."), logger
+        )

     logger.info(f"{invokeai.version.__app_name__}, version {invokeai.version.__version__}")
-    logger.info(f'InvokeAI runtime directory is "{Globals.root}"')
+    logger.info(f'InvokeAI runtime directory is "{config.root}"')

     # these two lines prevent a horrible warning message from appearing
     # when the frozen CLIP tokenizer is imported
@@ -32,20 +30,7 @@ def get_model_manager(config: Args, logger: types.ModuleType) -> ModelManager:
     import diffusers

     diffusers.logging.set_verbosity_error()
+    embedding_path = config.embedding_path

-    # normalize the config directory relative to root
-    if not os.path.isabs(config.conf):
-        config.conf = os.path.normpath(os.path.join(Globals.root, config.conf))
-
-    if config.embeddings:
-        if not os.path.isabs(config.embedding_path):
-            embedding_path = os.path.normpath(
-                os.path.join(Globals.root, config.embedding_path)
-            )
-        else:
-            embedding_path = config.embedding_path
-    else:
-        embedding_path = None
-
     # migrate legacy models
     ModelManager.migrate_models()
@@ -58,11 +43,11 @@ def get_model_manager(config: Args, logger: types.ModuleType) -> ModelManager:
             else choose_precision(device)

         model_manager = ModelManager(
-            OmegaConf.load(config.conf),
+            OmegaConf.load(config.model_conf_path),
             precision=precision,
             device_type=device,
             max_loaded_models=config.max_loaded_models,
-            embedding_path = Path(embedding_path),
+            embedding_path = embedding_path,
             logger = logger,
         )
     except (FileNotFoundError, TypeError, AssertionError) as e:
@@ -73,12 +58,10 @@ def get_model_manager(config: Args, logger: types.ModuleType) -> ModelManager:

     # try to autoconvert new models
     # autoimport new .ckpt files
-    if path := config.autoconvert:
-        model_manager.autoconvert_weights(
-            conf_path=config.conf,
-            weights_directory=path,
+    if config.autoconvert_path:
+        model_manager.heuristic_import(
+            config.autoconvert_path,
         )
-    logger.info('Model manager initialized')
     return model_manager

 def report_model_error(opt: Namespace, e: Exception, logger: types.ModuleType):
```
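The rewrite above leans on the config object's computed path properties: relative settings such as `conf_path` and `embedding_dir` now resolve against the runtime root instead of being normalized by hand with `Globals.root`. A sketch of the behavior (root value assumed):

```
from invokeai.app.services.config import InvokeAIAppConfig

conf = InvokeAIAppConfig(argv=["--root", "/home/user/invokeai"])
conf.model_conf_path   # -> /home/user/invokeai/configs/models.yaml
conf.output_path       # -> /home/user/invokeai/outputs
conf.embedding_path    # -> /home/user/invokeai/embeddings
```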
```
@@ -1,7 +1,6 @@
 """
 Initialization file for invokeai.backend
 """
-from .generate import Generate
 from .generator import (
     InvokeAIGeneratorBasicParams,
     InvokeAIGenerator,
@@ -12,5 +11,3 @@ from .generator import (
 )
 from .model_management import ModelManager, SDModelComponent
 from .safety_checker import SafetyChecker
-from .args import Args
-from .globals import Globals
```
(File diff suppressed because it is too large.)
@@ -19,10 +19,10 @@ import warnings
 from argparse import Namespace
 from pathlib import Path
 from shutil import get_terminal_size
+from typing import get_type_hints
 from urllib import request

 import npyscreen
-import torch
 import transformers
 from diffusers import AutoencoderKL
 from huggingface_hub import HfFolder
@@ -38,34 +38,40 @@ from transformers import (

 import invokeai.configs as configs

-from ...frontend.install.model_install import addModelsForm, process_and_execute
-from ...frontend.install.widgets import (
+from invokeai.frontend.install.model_install import addModelsForm, process_and_execute
+from invokeai.frontend.install.widgets import (
     CenteredButtonPress,
     IntTitleSlider,
     set_min_terminal_size,
 )
-from ..args import PRECISION_CHOICES, Args
-from ..globals import Globals, global_cache_dir, global_config_dir, global_config_file
-from .model_install_backend import (
+from invokeai.backend.config.legacy_arg_parsing import legacy_parser
+from invokeai.backend.config.model_install_backend import (
     default_dataset,
     download_from_hf,
     hf_download_with_resume,
     recommended_datasets,
 )
+from invokeai.app.services.config import (
+    get_invokeai_config,
+    InvokeAIAppConfig,
+)

 warnings.filterwarnings("ignore")

 transformers.logging.set_verbosity_error()


 # --------------------------globals-----------------------
+config = get_invokeai_config()

 Model_dir = "models"
 Weights_dir = "ldm/stable-diffusion-v1/"

 # the initial "configs" dir is now bundled in the `invokeai.configs` package
 Dataset_path = Path(configs.__path__[0]) / "INITIAL_MODELS.yaml"

-Default_config_file = Path(global_config_dir()) / "models.yaml"
-SD_Configs = Path(global_config_dir()) / "stable-diffusion"
+Default_config_file = config.model_conf_path
+SD_Configs = config.legacy_conf_path

 Datasets = OmegaConf.load(Dataset_path)

@@ -73,17 +79,12 @@ Datasets = OmegaConf.load(Dataset_path)
 MIN_COLS = 135
 MIN_LINES = 45

+PRECISION_CHOICES = ['auto','float16','float32','autocast']

 INIT_FILE_PREAMBLE = """# InvokeAI initialization file
 # This is the InvokeAI initialization file, which contains command-line default values.
 # Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
 # or renaming it and then running invokeai-configure again.
-# Place frequently-used startup commands here, one or more per line.
-# Examples:
-# --outdir=D:\data\images
-# --no-nsfw_checker
-# --web --host=0.0.0.0
-# --steps=20
-# -Ak_euler_a -C10.0
 """

@@ -96,14 +97,13 @@ If you installed manually from source or with 'pip install': activate the virtua
 then run one of the following commands to start InvokeAI.

 Web UI:
-   invokeai --web # (connect to http://localhost:9090)
-   invokeai --web --host 0.0.0.0 # (connect to http://your-lan-ip:9090 from another computer on the local network)
+   invokeai-web

-Command-line interface:
+Command-line client:
    invokeai

 If you installed using an installation script, run:
-  {Globals.root}/invoke.{"bat" if sys.platform == "win32" else "sh"}
+  {config.root}/invoke.{"bat" if sys.platform == "win32" else "sh"}

 Add the '--help' argument to see all of the command-line switches available for use.
 """
@@ -216,11 +216,11 @@ def download_realesrgan():
     wdn_model_url = "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth"

     model_dest = os.path.join(
-        Globals.root, "models/realesrgan/realesr-general-x4v3.pth"
+        config.root, "models/realesrgan/realesr-general-x4v3.pth"
     )

     wdn_model_dest = os.path.join(
-        Globals.root, "models/realesrgan/realesr-general-wdn-x4v3.pth"
+        config.root, "models/realesrgan/realesr-general-wdn-x4v3.pth"
     )

     download_with_progress_bar(model_url, model_dest, "RealESRGAN")
@@ -243,7 +243,7 @@ def download_gfpgan():
             "./models/gfpgan/weights/parsing_parsenet.pth",
         ],
     ):
-        model_url, model_dest = model[0], os.path.join(Globals.root, model[1])
+        model_url, model_dest = model[0], os.path.join(config.root, model[1])
         download_with_progress_bar(model_url, model_dest, "GFPGAN weights")


@@ -253,7 +253,7 @@ def download_codeformer():
     model_url = (
         "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth"
     )
-    model_dest = os.path.join(Globals.root, "models/codeformer/codeformer.pth")
+    model_dest = os.path.join(config.root, "models/codeformer/codeformer.pth")
     download_with_progress_bar(model_url, model_dest, "CodeFormer")


@@ -295,7 +295,7 @@ def download_vaes():
     # first the diffusers version
     repo_id = "stabilityai/sd-vae-ft-mse"
     args = dict(
-        cache_dir=global_cache_dir("hub"),
+        cache_dir=config.cache_dir,
     )
     if not AutoencoderKL.from_pretrained(repo_id, **args):
         raise Exception(f"download of {repo_id} failed")
@@ -306,7 +306,7 @@ def download_vaes():
         if not hf_download_with_resume(
             repo_id=repo_id,
             model_name=model_name,
-            model_dir=str(Globals.root / Model_dir / Weights_dir),
+            model_dir=str(config.root / Model_dir / Weights_dir),
         ):
             raise Exception(f"download of {model_name} failed")
     except Exception as e:
@@ -321,8 +321,7 @@ def get_root(root: str = None) -> str:
     elif os.environ.get("INVOKEAI_ROOT"):
         return os.environ.get("INVOKEAI_ROOT")
     else:
-        return Globals.root
+        return config.root


 # -------------------------------------
 class editOptsForm(npyscreen.FormMultiPage):
@@ -332,7 +331,7 @@ class editOptsForm(npyscreen.FormMultiPage):
     def create(self):
         program_opts = self.parentApp.program_opts
         old_opts = self.parentApp.invokeai_opts
-        first_time = not (Globals.root / Globals.initfile).exists()
+        first_time = not (config.root / 'invokeai.yaml').exists()
         access_token = HfFolder.get_token()
         window_width, window_height = get_terminal_size()
         for i in [
@@ -366,7 +365,7 @@ class editOptsForm(npyscreen.FormMultiPage):
         self.outdir = self.add_widget_intelligent(
             npyscreen.TitleFilename,
             name="(<tab> autocompletes, ctrl-N advances):",
-            value=old_opts.outdir or str(default_output_dir()),
+            value=str(old_opts.outdir) or str(default_output_dir()),
             select_dir=True,
             must_exist=False,
             use_two_lines=False,
@@ -381,17 +380,17 @@ class editOptsForm(npyscreen.FormMultiPage):
             editable=False,
             color="CONTROL",
         )
-        self.safety_checker = self.add_widget_intelligent(
+        self.nsfw_checker = self.add_widget_intelligent(
             npyscreen.Checkbox,
             name="NSFW checker",
-            value=old_opts.safety_checker,
+            value=old_opts.nsfw_checker,
             relx=5,
             scroll_exit=True,
         )
         self.nextrely += 1
         for i in [
-            "If you have an account at HuggingFace you may paste your access token here",
-            'to allow InvokeAI to download styles & subjects from the "Concept Library".',
+            "If you have an account at HuggingFace you may optionally paste your access token here",
+            'to allow InvokeAI to download restricted styles & subjects from the "Concept Library".',
             "See https://huggingface.co/settings/tokens",
         ]:
             self.add_widget_intelligent(
@@ -435,17 +434,10 @@ class editOptsForm(npyscreen.FormMultiPage):
             relx=5,
             scroll_exit=True,
         )
-        self.xformers = self.add_widget_intelligent(
+        self.xformers_enabled = self.add_widget_intelligent(
             npyscreen.Checkbox,
             name="Enable xformers support if available",
-            value=old_opts.xformers,
-            relx=5,
-            scroll_exit=True,
-        )
-        self.ckpt_convert = self.add_widget_intelligent(
-            npyscreen.Checkbox,
-            name="Load legacy checkpoint models into memory as diffusers models",
-            value=old_opts.ckpt_convert,
+            value=old_opts.xformers_enabled,
             relx=5,
             scroll_exit=True,
         )
@@ -480,19 +472,30 @@ class editOptsForm(npyscreen.FormMultiPage):
         self.nextrely += 1
         self.add_widget_intelligent(
             npyscreen.FixedText,
-            value="Directory containing embedding/textual inversion files:",
+            value="Directories containing textual inversion and LoRA models (<tab> autocompletes, ctrl-N advances):",
             editable=False,
             color="CONTROL",
         )
-        self.embedding_path = self.add_widget_intelligent(
+        self.embedding_dir = self.add_widget_intelligent(
             npyscreen.TitleFilename,
-            name="(<tab> autocompletes, ctrl-N advances):",
+            name=" Textual Inversion Embeddings:",
             value=str(default_embedding_dir()),
             select_dir=True,
             must_exist=False,
             use_two_lines=False,
             labelColor="GOOD",
-            begin_entry_at=40,
+            begin_entry_at=32,
+            scroll_exit=True,
+        )
+        self.lora_dir = self.add_widget_intelligent(
+            npyscreen.TitleFilename,
+            name=" LoRA and LyCORIS:",
+            value=str(default_lora_dir()),
+            select_dir=True,
+            must_exist=False,
+            use_two_lines=False,
+            labelColor="GOOD",
+            begin_entry_at=32,
             scroll_exit=True,
         )
         self.nextrely += 1
@@ -559,9 +562,9 @@ class editOptsForm(npyscreen.FormMultiPage):
             bad_fields.append(
                 f"The output directory does not seem to be valid. Please check that {str(Path(opt.outdir).parent)} is an existing directory."
             )
-        if not Path(opt.embedding_path).parent.exists():
+        if not Path(opt.embedding_dir).parent.exists():
             bad_fields.append(
-                f"The embedding directory does not seem to be valid. Please check that {str(Path(opt.embedding_path).parent)} is an existing directory."
+                f"The embedding directory does not seem to be valid. Please check that {str(Path(opt.embedding_dir).parent)} is an existing directory."
             )
         if len(bad_fields) > 0:
             message = "The following problems were detected and must be corrected:\n"
@@ -576,14 +579,14 @@ class editOptsForm(npyscreen.FormMultiPage):
         new_opts = Namespace()

         for attr in [
             "outdir",
-            "safety_checker",
+            "nsfw_checker",
             "free_gpu_mem",
             "max_loaded_models",
-            "xformers",
+            "xformers_enabled",
             "always_use_cpu",
-            "embedding_path",
-            "ckpt_convert",
+            "embedding_dir",
+            "lora_dir",
         ]:
             setattr(new_opts, attr, getattr(self, attr).value)

@@ -591,6 +594,9 @@ class editOptsForm(npyscreen.FormMultiPage):
         new_opts.license_acceptance = self.license_acceptance.value
         new_opts.precision = PRECISION_CHOICES[self.precision.value[0]]

+        # widget library workaround to make max_loaded_models an int rather than a float
+        new_opts.max_loaded_models = int(new_opts.max_loaded_models)
+
         return new_opts


@@ -628,15 +634,14 @@ def edit_opts(program_opts: Namespace, invokeai_opts: Namespace) -> argparse.Nam


 def default_startup_options(init_file: Path) -> Namespace:
-    opts = Args().parse_args([])
+    opts = InvokeAIAppConfig(argv=[])
     outdir = Path(opts.outdir)
     if not outdir.is_absolute():
-        opts.outdir = str(Globals.root / opts.outdir)
+        opts.outdir = str(config.root / opts.outdir)
     if not init_file.exists():
-        opts.safety_checker = True
+        opts.nsfw_checker = True
     return opts


 def default_user_selections(program_opts: Namespace) -> Namespace:
     return Namespace(
         starter_models=default_dataset()
@@ -690,70 +695,61 @@ def run_console_ui(
 # -------------------------------------
 def write_opts(opts: Namespace, init_file: Path):
     """
-    Update the invokeai.init file with values from opts Namespace
+    Update the invokeai.yaml file with values from current settings.
     """
-    # touch file if it doesn't exist
-    if not init_file.exists():
-        with open(init_file, "w") as f:
-            f.write(INIT_FILE_PREAMBLE)
-
-    # We want to write in the changed arguments without clobbering
-    # any other initialization values the user has entered. There is
-    # no good way to do this because of the one-way nature of
-    # argparse: i.e. --outdir could be --outdir, --out, or -o
-    # initfile needs to be replaced with a fully structured format
-    # such as yaml; this is a hack that will work much of the time
-    args_to_skip = re.compile(
-        "^--?(o|out|no-xformer|xformer|no-ckpt|ckpt|free|no-nsfw|nsfw|prec|max_load|embed|always|ckpt|free_gpu)"
-    )
-    # fix windows paths
-    opts.outdir = opts.outdir.replace("\\", "/")
-    opts.embedding_path = opts.embedding_path.replace("\\", "/")
-    new_file = f"{init_file}.new"
-    try:
-        lines = [x.strip() for x in open(init_file, "r").readlines()]
-        with open(new_file, "w") as out_file:
-            for line in lines:
-                if len(line) > 0 and not args_to_skip.match(line):
-                    out_file.write(line + "\n")
-            out_file.write(
-                f"""
---outdir={opts.outdir}
---embedding_path={opts.embedding_path}
---precision={opts.precision}
---max_loaded_models={int(opts.max_loaded_models)}
---{'no-' if not opts.safety_checker else ''}nsfw_checker
---{'no-' if not opts.xformers else ''}xformers
---{'no-' if not opts.ckpt_convert else ''}ckpt_convert
-{'--free_gpu_mem' if opts.free_gpu_mem else ''}
-{'--always_use_cpu' if opts.always_use_cpu else ''}
-"""
-            )
-    except OSError as e:
-        print(f"** An error occurred while writing the init file: {str(e)}")
-
-    os.replace(new_file, init_file)
-
-    if opts.hf_token:
-        HfLogin(opts.hf_token)
+    # this will load current settings
+    config = InvokeAIAppConfig()
+    for key,value in opts.__dict__.items():
+        if hasattr(config,key):
+            setattr(config,key,value)

+    with open(init_file,'w', encoding='utf-8') as file:
+        file.write(config.to_yaml())

 # -------------------------------------
 def default_output_dir() -> Path:
-    return Globals.root / "outputs"
+    return config.root / "outputs"


 # -------------------------------------
 def default_embedding_dir() -> Path:
-    return Globals.root / "embeddings"
+    return config.root / "embeddings"


+# -------------------------------------
+def default_lora_dir() -> Path:
+    return config.root / "loras"
+
+
 # -------------------------------------
 def write_default_options(program_opts: Namespace, initfile: Path):
     opt = default_startup_options(initfile)
-    opt.hf_token = HfFolder.get_token()
     write_opts(opt, initfile)

+# -------------------------------------
+# Here we bring in
+# the legacy Args object in order to parse
+# the old init file and write out the new
+# yaml format.
+def migrate_init_file(legacy_format:Path):
+    old = legacy_parser.parse_args([f'@{str(legacy_format)}'])
+    new = InvokeAIAppConfig(conf={})
+
+    fields = list(get_type_hints(InvokeAIAppConfig).keys())
+    for attr in fields:
+        if hasattr(old,attr):
+            setattr(new,attr,getattr(old,attr))
+
+    # a few places where the field names have changed and we have to
+    # manually add in the new names/values
+    new.nsfw_checker = old.safety_checker
+    new.xformers_enabled = old.xformers
+    new.conf_path = old.conf
+    new.embedding_dir = old.embedding_path
+
+    invokeai_yaml = legacy_format.parent / 'invokeai.yaml'
+    with open(invokeai_yaml,"w", encoding="utf-8") as outfile:
+        outfile.write(new.to_yaml())
+
+    legacy_format.replace(legacy_format.parent / 'invokeai.init.old')
+
 # -------------------------------------
 def main():
@@ -810,7 +806,8 @@ def main():
     opt = parser.parse_args()

     # setting a global here
-    Globals.root = Path(os.path.expanduser(get_root(opt.root) or ""))
+    global config
+    config.root = Path(os.path.expanduser(get_root(opt.root) or ""))

     errors = set()

@@ -818,19 +815,26 @@ def main():
         models_to_download = default_user_selections(opt)

         # We check for to see if the runtime directory is correctly initialized.
-        init_file = Path(Globals.root, Globals.initfile)
-        if not init_file.exists() or not global_config_file().exists():
-            initialize_rootdir(Globals.root, opt.yes_to_all)
+        old_init_file = Path(config.root, 'invokeai.init')
+        new_init_file = Path(config.root, 'invokeai.yaml')
+        if old_init_file.exists() and not new_init_file.exists():
+            print('** Migrating invokeai.init to invokeai.yaml')
+            migrate_init_file(old_init_file)
+            config = get_invokeai_config() # reread defaults
+
+
+        if not config.model_conf_path.exists():
+            initialize_rootdir(config.root, opt.yes_to_all)

         if opt.yes_to_all:
-            write_default_options(opt, init_file)
+            write_default_options(opt, new_init_file)
             init_options = Namespace(
                 precision="float32" if opt.full_precision else "float16"
             )
         else:
-            init_options, models_to_download = run_console_ui(opt, init_file)
+            init_options, models_to_download = run_console_ui(opt, new_init_file)
             if init_options:
-                write_opts(init_options, init_file)
+                write_opts(init_options, new_init_file)
             else:
                 print(
                     '\n** CANCELLED AT USER\'S REQUEST. USE THE "invoke.sh" LAUNCHER TO RUN LATER **\n'
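The `migrate_init_file()` helper above is the one-time bridge from the flag-style `invokeai.init` to the structured `invokeai.yaml`. A hedged sketch of how `main()` exercises it; the `~/invokeai` root is an assumed example location (the real code uses `config.root`):

```
from pathlib import Path

root = Path("~/invokeai").expanduser()     # example root for illustration
old_init = root / "invokeai.init"
if old_init.exists() and not (root / "invokeai.yaml").exists():
    migrate_init_file(old_init)            # writes invokeai.yaml, then renames
                                           # the old file to invokeai.init.old
```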
invokeai/backend/config/legacy_arg_parsing.py (new file, 390 lines)
@@ -0,0 +1,390 @@
+# Copyright 2023 Lincoln D. Stein and the InvokeAI Team
+
+import argparse
+import shlex
+from argparse import ArgumentParser
+
+SAMPLER_CHOICES = [
+    "ddim",
+    "ddpm",
+    "deis",
+    "lms",
+    "pndm",
+    "heun",
+    "heun_k",
+    "euler",
+    "euler_k",
+    "euler_a",
+    "kdpm_2",
+    "kdpm_2_a",
+    "dpmpp_2s",
+    "dpmpp_2m",
+    "dpmpp_2m_k",
+    "unipc",
+]
+
+PRECISION_CHOICES = [
+    "auto",
+    "float32",
+    "autocast",
+    "float16",
+]
+
+class FileArgumentParser(ArgumentParser):
+    """
+    Supports reading defaults from an init file.
+    """
+    def convert_arg_line_to_args(self, arg_line):
+        return shlex.split(arg_line, comments=True)
+
+
+legacy_parser = FileArgumentParser(
+    description=
+    """
+Generate images using Stable Diffusion.
+    Use --web to launch the web interface.
+    Use --from_file to load prompts from a file path or standard input ("-").
+    Otherwise you will be dropped into an interactive command prompt (type -h for help.)
+    Other command-line arguments are defaults that can usually be overridden
+    prompt the command prompt.
+""",
+    fromfile_prefix_chars='@',
+)
+general_group = legacy_parser.add_argument_group('General')
+model_group = legacy_parser.add_argument_group('Model selection')
+file_group = legacy_parser.add_argument_group('Input/output')
+web_server_group = legacy_parser.add_argument_group('Web server')
+render_group = legacy_parser.add_argument_group('Rendering')
+postprocessing_group = legacy_parser.add_argument_group('Postprocessing')
+deprecated_group = legacy_parser.add_argument_group('Deprecated options')
+
+deprecated_group.add_argument('--laion400m')
+deprecated_group.add_argument('--weights') # deprecated
+general_group.add_argument(
+    '--version','-V',
+    action='store_true',
+    help='Print InvokeAI version number'
+)
+model_group.add_argument(
+    '--root_dir',
+    default=None,
+    help='Path to directory containing "models", "outputs" and "configs". If not present will read from environment variable INVOKEAI_ROOT. Defaults to ~/invokeai.',
+)
+model_group.add_argument(
+    '--config',
+    '-c',
+    '-config',
+    dest='conf',
+    default='./configs/models.yaml',
+    help='Path to configuration file for alternate models.',
+)
+model_group.add_argument(
+    '--model',
+    help='Indicates which diffusion model to load (defaults to "default" stanza in configs/models.yaml)',
+)
+model_group.add_argument(
+    '--weight_dirs',
+    nargs='+',
+    type=str,
+    help='List of one or more directories that will be auto-scanned for new model weights to import',
+)
+model_group.add_argument(
+    '--png_compression','-z',
+    type=int,
+    default=6,
+    choices=range(0,9),
+    dest='png_compression',
+    help='level of PNG compression, from 0 (none) to 9 (maximum). Default is 6.'
+)
+model_group.add_argument(
+    '-F',
+    '--full_precision',
+    dest='full_precision',
+    action='store_true',
+    help='Deprecated way to set --precision=float32',
+)
+model_group.add_argument(
+    '--max_loaded_models',
+    dest='max_loaded_models',
+    type=int,
+    default=2,
+    help='Maximum number of models to keep in memory for fast switching, including the one in GPU',
+)
+model_group.add_argument(
+    '--free_gpu_mem',
+    dest='free_gpu_mem',
+    action='store_true',
+    help='Force free gpu memory before final decoding',
+)
+model_group.add_argument(
+    '--sequential_guidance',
+    dest='sequential_guidance',
+    action='store_true',
+    help="Calculate guidance in serial instead of in parallel, lowering memory requirement "
+    "at the expense of speed",
+)
+model_group.add_argument(
+    '--xformers',
+    action=argparse.BooleanOptionalAction,
+    default=True,
+    help='Enable/disable xformers support (default enabled if installed)',
+)
+model_group.add_argument(
+    "--always_use_cpu",
+    dest="always_use_cpu",
+    action="store_true",
+    help="Force use of CPU even if GPU is available"
+)
+model_group.add_argument(
+    '--precision',
+    dest='precision',
+    type=str,
+    choices=PRECISION_CHOICES,
+    metavar='PRECISION',
+    help=f'Set model precision. Defaults to auto selected based on device. Options: {", ".join(PRECISION_CHOICES)}',
+    default='auto',
+)
+model_group.add_argument(
+    '--ckpt_convert',
+    action=argparse.BooleanOptionalAction,
+    dest='ckpt_convert',
+    default=True,
+    help='Deprecated option. Legacy ckpt files are now always converted to diffusers when loaded.'
+)
+model_group.add_argument(
+    '--internet',
+    action=argparse.BooleanOptionalAction,
+    dest='internet_available',
+    default=True,
+    help='Indicate whether internet is available for just-in-time model downloading (default: probe automatically).',
+)
+model_group.add_argument(
+    '--nsfw_checker',
+    '--safety_checker',
+    action=argparse.BooleanOptionalAction,
+    dest='safety_checker',
+    default=False,
+    help='Check for and blur potentially NSFW images. Use --no-nsfw_checker to disable.',
+)
+model_group.add_argument(
+    '--autoimport',
+    default=None,
+    type=str,
+    help='Check the indicated directory for .ckpt/.safetensors weights files at startup and import directly',
+)
+model_group.add_argument(
+    '--autoconvert',
+    default=None,
+    type=str,
+    help='Check the indicated directory for .ckpt/.safetensors weights files at startup and import as optimized diffuser models',
+)
+model_group.add_argument(
+    '--patchmatch',
+    action=argparse.BooleanOptionalAction,
+    default=True,
+    help='Load the patchmatch extension for outpainting. Use --no-patchmatch to disable.',
+)
+file_group.add_argument(
+    '--from_file',
+    dest='infile',
+    type=str,
+    help='If specified, load prompts from this file',
+)
+file_group.add_argument(
+    '--outdir',
+    '-o',
+    type=str,
+    help='Directory to save generated images and a log of prompts and seeds. Default: ROOTDIR/outputs',
+    default='outputs',
+)
+file_group.add_argument(
+    '--prompt_as_dir',
+    '-p',
+    action='store_true',
+    help='Place images in subdirectories named after the prompt.',
+)
+render_group.add_argument(
+    '--fnformat',
+    default='{prefix}.{seed}.png',
+    type=str,
+    help='Overwrite the filename format. You can use any argument as wildcard enclosed in curly braces. Default is {prefix}.{seed}.png',
+)
+render_group.add_argument(
+    '-s',
+    '--steps',
+    type=int,
+    default=50,
+    help='Number of steps'
+)
+render_group.add_argument(
+    '-W',
+    '--width',
+    type=int,
+    help='Image width, multiple of 64',
+)
+render_group.add_argument(
+    '-H',
+    '--height',
+    type=int,
+    help='Image height, multiple of 64',
+)
+render_group.add_argument(
+    '-C',
+    '--cfg_scale',
+    default=7.5,
+    type=float,
+    help='Classifier free guidance (CFG) scale - higher numbers cause generator to "try" harder.',
+)
+render_group.add_argument(
+    '--sampler',
+    '-A',
+    '-m',
+    dest='sampler_name',
+    type=str,
+    choices=SAMPLER_CHOICES,
+    metavar='SAMPLER_NAME',
+    help=f'Set the default sampler. Supported samplers: {", ".join(SAMPLER_CHOICES)}',
+    default='k_lms',
+)
+render_group.add_argument(
+    '--log_tokenization',
+    '-t',
+    action='store_true',
+    help='shows how the prompt is split into tokens'
+)
+render_group.add_argument(
+    '-f',
+    '--strength',
+    type=float,
+    help='img2img strength for noising/unnoising. 0.0 preserves image exactly, 1.0 replaces it completely',
+)
+render_group.add_argument(
+    '-T',
+    '-fit',
+    '--fit',
+    action=argparse.BooleanOptionalAction,
+    help='If specified, will resize the input image to fit within the dimensions of width x height (512x512 default)',
+)
+
+render_group.add_argument(
+    '--grid',
+    '-g',
+    action=argparse.BooleanOptionalAction,
+    help='generate a grid'
+)
+render_group.add_argument(
+    '--embedding_directory',
+    '--embedding_path',
+    dest='embedding_path',
+    default='embeddings',
+    type=str,
+    help='Path to a directory containing .bin and/or .pt files, or a single .bin/.pt file. You may use subdirectories. (default is ROOTDIR/embeddings)'
+)
+render_group.add_argument(
+    '--lora_directory',
+    dest='lora_path',
+    default='loras',
+    type=str,
+    help='Path to a directory containing LoRA files; subdirectories are not supported. (default is ROOTDIR/loras)'
+)
+render_group.add_argument(
+    '--embeddings',
+    action=argparse.BooleanOptionalAction,
+    default=True,
+    help='Enable embedding directory (default). Use --no-embeddings to disable.',
+)
+render_group.add_argument(
+    '--enable_image_debugging',
+    action='store_true',
+    help='Generates debugging image to display'
+)
+render_group.add_argument(
+    '--karras_max',
+    type=int,
+    default=None,
+    help="control the point at which the K* samplers will shift from using the Karras noise schedule (good for low step counts) to the LatentDiffusion noise schedule (good for high step counts). Set to 0 to use LatentDiffusion for all step values, and to a high value (e.g. 1000) to use Karras for all step values. [29]."
+)
+# Restoration related args
+postprocessing_group.add_argument(
+    '--no_restore',
+    dest='restore',
+    action='store_false',
+    help='Disable face restoration with GFPGAN or codeformer',
+)
+postprocessing_group.add_argument(
+    '--no_upscale',
+    dest='esrgan',
+    action='store_false',
+    help='Disable upscaling with ESRGAN',
+)
+postprocessing_group.add_argument(
+    '--esrgan_bg_tile',
+    type=int,
+    default=400,
+    help='Tile size for background sampler, 0 for no tile during testing. Default: 400.',
+)
+postprocessing_group.add_argument(
+    '--esrgan_denoise_str',
+    type=float,
+    default=0.75,
+    help='esrgan denoise str. 0 is no denoise, 1 is max denoise. Default: 0.75',
+)
+postprocessing_group.add_argument(
+    '--gfpgan_model_path',
+    type=str,
+    default='./models/gfpgan/GFPGANv1.4.pth',
+    help='Indicates the path to the GFPGAN model',
+)
+web_server_group.add_argument(
+    '--web',
+    dest='web',
+    action='store_true',
+    help='Start in web server mode.',
+)
+web_server_group.add_argument(
+    '--web_develop',
+    dest='web_develop',
+    action='store_true',
+    help='Start in web server development mode.',
+)
+web_server_group.add_argument(
+    "--web_verbose",
+    action="store_true",
+    help="Enables verbose logging",
+)
+web_server_group.add_argument(
+    "--cors",
+    nargs="*",
+    type=str,
+    help="Additional allowed origins, comma-separated",
+)
+web_server_group.add_argument(
+    '--host',
+    type=str,
+    default='127.0.0.1',
+    help='Web server: Host or IP to listen on. Set to 0.0.0.0 to accept traffic from other devices on your network.'
+)
+web_server_group.add_argument(
+    '--port',
+    type=int,
+    default='9090',
+    help='Web server: Port to listen on'
+)
+web_server_group.add_argument(
+    '--certfile',
+    type=str,
+    default=None,
+    help='Web server: Path to certificate file to use for SSL. Use together with --keyfile'
+)
+web_server_group.add_argument(
+    '--keyfile',
+    type=str,
+    default=None,
+    help='Web server: Path to private key file to use for SSL. Use together with --certfile'
+)
+web_server_group.add_argument(
+    '--gui',
+    dest='gui',
+    action='store_true',
+    help='Start InvokeAI GUI',
+)
@@ -19,13 +19,15 @@ from tqdm import tqdm

 import invokeai.configs as configs

-from ..globals import Globals, global_cache_dir, global_config_dir
+from invokeai.app.services.config import get_invokeai_config
 from ..model_management import ModelManager
 from ..stable_diffusion import StableDiffusionGeneratorPipeline


 warnings.filterwarnings("ignore")

 # --------------------------globals-----------------------
+config = get_invokeai_config()
 Model_dir = "models"
 Weights_dir = "ldm/stable-diffusion-v1/"

@@ -47,12 +49,11 @@ Config_preamble = """


 def default_config_file():
-    return Path(global_config_dir()) / "models.yaml"
+    return config.model_conf_path


 def sd_configs():
-    return Path(global_config_dir()) / "stable-diffusion"
+    return config.legacy_conf_path


 def initial_models():
     global Datasets
@@ -121,8 +122,9 @@ def install_requested_models(

     if scan_at_startup and scan_directory.is_dir():
         argument = "--autoconvert"
-        initfile = Path(Globals.root, Globals.initfile)
-        replacement = Path(Globals.root, f"{Globals.initfile}.new")
+        print('** The global initfile is no longer supported; rewrite to support new yaml format **')
+        initfile = Path(config.root, 'invokeai.init')
+        replacement = Path(config.root, f"invokeai.init.new")
         directory = str(scan_directory).replace("\\", "/")
         with open(initfile, "r") as input:
             with open(replacement, "w") as output:
@@ -150,7 +152,7 @@ def get_root(root: str = None) -> str:
     elif os.environ.get("INVOKEAI_ROOT"):
         return os.environ.get("INVOKEAI_ROOT")
     else:
-        return Globals.root
+        return config.root


 # ---------------------------------------------
@@ -183,7 +185,7 @@ def all_datasets() -> dict:
 # look for legacy model.ckpt in models directory and offer to
 # normalize its name
 def migrate_models_ckpt():
-    model_path = os.path.join(Globals.root, Model_dir, Weights_dir)
+    model_path = os.path.join(config.root, Model_dir, Weights_dir)
     if not os.path.exists(os.path.join(model_path, "model.ckpt")):
         return
     new_name = initial_models()["stable-diffusion-1.4"]["file"]
@@ -228,7 +230,7 @@ def _download_repo_or_file(
 def _download_ckpt_weights(mconfig: DictConfig, access_token: str) -> Path:
     repo_id = mconfig["repo_id"]
     filename = mconfig["file"]
-    cache_dir = os.path.join(Globals.root, Model_dir, Weights_dir)
+    cache_dir = os.path.join(config.root, Model_dir, Weights_dir)
     return hf_download_with_resume(
         repo_id=repo_id,
         model_dir=cache_dir,
@@ -239,9 +241,9 @@ def _download_ckpt_weights(mconfig: DictConfig, access_token: str) -> Path:

 # ---------------------------------------------
 def download_from_hf(
-    model_class: object, model_name: str, cache_subdir: Path = Path("hub"), **kwargs
+    model_class: object, model_name: str, **kwargs
 ):
-    path = global_cache_dir(cache_subdir)
+    path = config.cache_dir
     model = model_class.from_pretrained(
         model_name,
         cache_dir=path,
@@ -417,7 +419,7 @@ def new_config_file_contents(
             stanza["height"] = mod["height"]
         if "file" in mod:
             stanza["weights"] = os.path.relpath(
-                successfully_downloaded[model], start=Globals.root
+                successfully_downloaded[model], start=config.root
             )
             stanza["config"] = os.path.normpath(
                 os.path.join(sd_configs(), mod["config"])
@@ -456,7 +458,7 @@ def delete_weights(model_name: str, conf_stanza: dict):

     weights = Path(weights)
     if not weights.is_absolute():
-        weights = Path(Globals.root) / weights
+        weights = Path(config.root) / weights
     try:
         weights.unlink()
     except OSError as e:
[File diff suppressed because it is too large]
@@ -1,122 +0,0 @@
-"""
-invokeai.backend.globals defines a small number of global variables that would
-otherwise have to be passed through long and complex call chains.
-
-It defines a Namespace object named "Globals" that contains
-the attributes:
-
-  - root           - the root directory under which "models" and "outputs" can be found
-  - initfile       - path to the initialization file
-  - try_patchmatch - option to globally disable loading of 'patchmatch' module
-  - always_use_cpu - force use of CPU even if GPU is available
-"""
-
-import os
-import os.path as osp
-from argparse import Namespace
-from pathlib import Path
-from typing import Union
-
-Globals = Namespace()
-
-# Where to look for the initialization file and other key components
-Globals.initfile = "invokeai.init"
-Globals.models_file = "models.yaml"
-Globals.models_dir = "models"
-Globals.config_dir = "configs"
-Globals.autoscan_dir = "weights"
-Globals.converted_ckpts_dir = "converted_ckpts"
-
-# Set the default root directory. This can be overwritten by explicitly
-# passing the `--root <directory>` argument on the command line.
-# logic is:
-# 1) use INVOKEAI_ROOT environment variable (no check for this being a valid directory)
-# 2) use VIRTUAL_ENV environment variable, with a check for initfile being there
-# 3) use ~/invokeai
-
-if os.environ.get("INVOKEAI_ROOT"):
-    Globals.root = osp.abspath(os.environ.get("INVOKEAI_ROOT"))
-elif (
-    os.environ.get("VIRTUAL_ENV")
-    and Path(os.environ.get("VIRTUAL_ENV"), "..", Globals.initfile).exists()
-):
-    Globals.root = osp.abspath(osp.join(os.environ.get("VIRTUAL_ENV"), ".."))
-else:
-    Globals.root = osp.abspath(osp.expanduser("~/invokeai"))
-
-# Try loading patchmatch
-Globals.try_patchmatch = True
-
-# Use CPU even if GPU is available (main use case is for debugging MPS issues)
-Globals.always_use_cpu = False
-
-# Whether the internet is reachable for dynamic downloads
-# The CLI will test connectivity at startup time.
-Globals.internet_available = True
-
-# Whether to disable xformers
-Globals.disable_xformers = False
-
-# Low-memory tradeoff for guidance calculations.
-Globals.sequential_guidance = False
-
-# whether we are forcing full precision
-Globals.full_precision = False
-
-# whether we should convert ckpt files into diffusers models on the fly
-Globals.ckpt_convert = True
-
-# logging tokenization everywhere
-Globals.log_tokenization = False
-
-
-def global_config_file() -> Path:
-    return Path(Globals.root, Globals.config_dir, Globals.models_file)
-
-
-def global_config_dir() -> Path:
-    return Path(Globals.root, Globals.config_dir)
-
-
-def global_models_dir() -> Path:
-    return Path(Globals.root, Globals.models_dir)
-
-
-def global_autoscan_dir() -> Path:
-    return Path(Globals.root, Globals.autoscan_dir)
-
-
-def global_converted_ckpts_dir() -> Path:
-    return Path(global_models_dir(), Globals.converted_ckpts_dir)
-
-
-def global_set_root(root_dir: Union[str, Path]):
-    Globals.root = root_dir
-
-
-def global_cache_dir(subdir: Union[str, Path] = "") -> Path:
-    """
-    Returns Path to the model cache directory. If a subdirectory
-    is provided, it will be appended to the end of the path, allowing
-    for Hugging Face-style conventions. Currently, Hugging Face has
-    moved all models into the "hub" subfolder, so for any pretrained
-    HF model, use:
-        global_cache_dir('hub')
-
-    The legacy location for transformers used to be global_cache_dir('transformers')
-    and global_cache_dir('diffusers') for diffusers.
-    """
-    home: str = os.getenv("HF_HOME")
-
-    if home is None:
-        home = os.getenv("XDG_CACHE_HOME")
-
-        if home is not None:
-            # Set `home` to $XDG_CACHE_HOME/huggingface, which is the default location mentioned in Hugging Face Hub Client Library.
-            # See: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#xdgcachehome
-            home += os.sep + "huggingface"
-
-    if home is not None:
-        return Path(home, subdir)
-    else:
-        return Path(Globals.root, "models", subdir)
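With `globals.py` deleted, each of its helpers maps onto a property of the config object. A rough equivalence sketch, using only the property names that appear in the diffs of this PR (treat it as a guide, not an exhaustive mapping):

```
from invokeai.app.services.config import get_invokeai_config

config = get_invokeai_config()
config.root               # was Globals.root
config.model_conf_path    # was global_config_file()
config.legacy_conf_path   # was global_config_dir() / "stable-diffusion"
config.cache_dir          # was global_cache_dir("hub")
```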
@@ -6,7 +6,7 @@ be suppressed or deferred
 """
 import numpy as np
 import invokeai.backend.util.logging as logger
-from invokeai.backend.globals import Globals
+from invokeai.app.services.config import get_invokeai_config

 class PatchMatch:
     """
@@ -21,9 +21,10 @@ class PatchMatch:

     @classmethod
     def _load_patch_match(self):
+        config = get_invokeai_config()
         if self.tried_load:
             return
-        if Globals.try_patchmatch:
+        if config.try_patchmatch:
             from patchmatch import patch_match as pm

             if pm.patchmatch_available:
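The patchmatch change illustrates a pattern repeated across these modules: settings are fetched inside the method at call time rather than read from a module-level `Globals` at import time. A minimal sketch of the idea (the class here is illustrative, not part of the PR):

```
from invokeai.app.services.config import get_invokeai_config

class LazyLoader:
    tried_load = False

    @classmethod
    def load(cls):
        # read settings at call time, not import time, so importing this
        # module never requires an initialized InvokeAI root
        config = get_invokeai_config()
        if not cls.tried_load and config.try_patchmatch:
            cls.tried_load = True
```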
@@ -33,12 +33,11 @@ from PIL import Image, ImageOps
 from transformers import AutoProcessor, CLIPSegForImageSegmentation

 import invokeai.backend.util.logging as logger
-from invokeai.backend.globals import global_cache_dir
+from invokeai.app.services.config import get_invokeai_config

 CLIPSEG_MODEL = "CIDAS/clipseg-rd64-refined"
 CLIPSEG_SIZE = 352


 class SegmentedGrayscale(object):
     def __init__(self, image: Image, heatmap: torch.Tensor):
         self.heatmap = heatmap
@@ -84,14 +83,15 @@ class Txt2Mask(object):

     def __init__(self, device="cpu", refined=False):
         logger.info("Initializing clipseg model for text to mask inference")
+        config = get_invokeai_config()

         # BUG: we are not doing anything with the device option at this time
         self.device = device
         self.processor = AutoProcessor.from_pretrained(
-            CLIPSEG_MODEL, cache_dir=global_cache_dir("hub")
+            CLIPSEG_MODEL, cache_dir=config.cache_dir
         )
         self.model = CLIPSegForImageSegmentation.from_pretrained(
-            CLIPSEG_MODEL, cache_dir=global_cache_dir("hub")
+            CLIPSEG_MODEL, cache_dir=config.cache_dir
         )

     @torch.no_grad()
@ -26,7 +26,7 @@ import torch
|
|||||||
from safetensors.torch import load_file
|
from safetensors.torch import load_file
|
||||||
|
|
||||||
import invokeai.backend.util.logging as logger
|
import invokeai.backend.util.logging as logger
|
||||||
from invokeai.backend.globals import global_cache_dir, global_config_dir
|
from invokeai.app.services.config import get_invokeai_config
|
||||||
|
|
||||||
from .model_manager import ModelManager, SDLegacyType
|
from .model_manager import ModelManager, SDLegacyType
|
||||||
|
|
||||||
@ -74,7 +74,6 @@ from transformers import (
|
|||||||
|
|
||||||
from ..stable_diffusion import StableDiffusionGeneratorPipeline
|
from ..stable_diffusion import StableDiffusionGeneratorPipeline
|
||||||
|
|
||||||
|
|
||||||
def shave_segments(path, n_shave_prefix_segments=1):
|
def shave_segments(path, n_shave_prefix_segments=1):
|
||||||
"""
|
"""
|
||||||
Removes segments. Positive values shave the first segments, negative shave the last segments.
|
Removes segments. Positive values shave the first segments, negative shave the last segments.
|
||||||
@ -843,7 +842,7 @@ def convert_ldm_bert_checkpoint(checkpoint, config):
|
|||||||
|
|
||||||
def convert_ldm_clip_checkpoint(checkpoint):
|
def convert_ldm_clip_checkpoint(checkpoint):
|
||||||
text_model = CLIPTextModel.from_pretrained(
|
text_model = CLIPTextModel.from_pretrained(
|
||||||
"openai/clip-vit-large-patch14", cache_dir=global_cache_dir("hub")
|
"openai/clip-vit-large-patch14", cache_dir=get_invokeai_config().cache_dir
|
||||||
)
|
)
|
||||||
|
|
||||||
keys = list(checkpoint.keys())
|
keys = list(checkpoint.keys())
|
||||||
@ -898,7 +897,7 @@ textenc_pattern = re.compile("|".join(protected.keys()))
|
|||||||
|
|
||||||
|
|
||||||
def convert_paint_by_example_checkpoint(checkpoint):
|
def convert_paint_by_example_checkpoint(checkpoint):
|
||||||
cache_dir = global_cache_dir("hub")
|
cache_dir = get_invokeai_config().cache_dir
|
||||||
config = CLIPVisionConfig.from_pretrained(
|
config = CLIPVisionConfig.from_pretrained(
|
||||||
"openai/clip-vit-large-patch14", cache_dir=cache_dir
|
"openai/clip-vit-large-patch14", cache_dir=cache_dir
|
||||||
)
|
)
|
||||||
@ -970,7 +969,7 @@ def convert_paint_by_example_checkpoint(checkpoint):
|
|||||||
|
|
||||||
|
|
||||||
def convert_open_clip_checkpoint(checkpoint):
|
def convert_open_clip_checkpoint(checkpoint):
|
||||||
cache_dir = global_cache_dir("hub")
|
cache_dir = get_invokeai_config().cache_dir
|
||||||
text_model = CLIPTextModel.from_pretrained(
|
text_model = CLIPTextModel.from_pretrained(
|
||||||
"stabilityai/stable-diffusion-2", subfolder="text_encoder", cache_dir=cache_dir
|
"stabilityai/stable-diffusion-2", subfolder="text_encoder", cache_dir=cache_dir
|
||||||
)
|
)
|
||||||
@@ -1093,7 +1092,7 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
     :param vae: A diffusers VAE to load into the pipeline.
     :param vae_path: Path to a checkpoint VAE that will be converted into diffusers and loaded into the pipeline.
     """
-
+    config = get_invokeai_config()
     with warnings.catch_warnings():
         warnings.simplefilter("ignore")
         verbosity = dlogging.get_verbosity()
@@ -1106,7 +1105,7 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
     else:
         checkpoint = load_file(checkpoint_path)

-    cache_dir = global_cache_dir("hub")
+    cache_dir = config.cache_dir
     pipeline_class = (
         StableDiffusionGeneratorPipeline
         if return_generator_pipeline
@@ -1130,25 +1129,23 @@ def load_pipeline_from_original_stable_diffusion_ckpt(

         if model_type == SDLegacyType.V2_v:
             original_config_file = (
-                global_config_dir() / "stable-diffusion" / "v2-inference-v.yaml"
+                config.legacy_conf_path / "v2-inference-v.yaml"
             )
             if global_step == 110000:
                 # v2.1 needs to upcast attention
                 upcast_attention = True
         elif model_type == SDLegacyType.V2_e:
             original_config_file = (
-                global_config_dir() / "stable-diffusion" / "v2-inference.yaml"
+                config.legacy_conf_path / "v2-inference.yaml"
             )
         elif model_type == SDLegacyType.V1_INPAINT:
             original_config_file = (
-                global_config_dir()
-                / "stable-diffusion"
-                / "v1-inpainting-inference.yaml"
+                config.legacy_conf_path / "v1-inpainting-inference.yaml"
             )

         elif model_type == SDLegacyType.V1:
             original_config_file = (
-                global_config_dir() / "stable-diffusion" / "v1-inference.yaml"
+                config.legacy_conf_path / "v1-inference.yaml"
             )

         else:
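`legacy_conf_path` folds the old `global_config_dir() / "stable-diffusion"` prefix into a single config attribute, so every branch above reduces to one path join; a hedged sketch:

```
config = get_invokeai_config()
# legacy_conf_path already points at the stable-diffusion config directory,
# so only the YAML file name varies by model type:
original_config_file = config.legacy_conf_path / "v2-inference-v.yaml"
```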
@@ -1300,7 +1297,7 @@ def load_pipeline_from_original_stable_diffusion_ckpt(
         )
         safety_checker = StableDiffusionSafetyChecker.from_pretrained(
             "CompVis/stable-diffusion-safety-checker",
-            cache_dir=global_cache_dir("hub"),
+            cache_dir=config.cache_dir,
         )
         feature_extractor = AutoFeatureExtractor.from_pretrained(
             "CompVis/stable-diffusion-safety-checker", cache_dir=cache_dir
@@ -36,8 +36,6 @@ from omegaconf import OmegaConf
 from omegaconf.dictconfig import DictConfig
 from picklescan.scanner import scan_file_path

-from invokeai.backend.globals import Globals, global_cache_dir
-
 from transformers import (
     CLIPTextModel,
     CLIPTokenizer,
@@ -49,9 +47,9 @@ from diffusers.pipelines.stable_diffusion.safety_checker import (
 from ..stable_diffusion import (
     StableDiffusionGeneratorPipeline,
 )
-
+from invokeai.app.services.config import get_invokeai_config
 from ..util import CUDA_DEVICE, ask_user, download_with_resume


 class SDLegacyType(Enum):
     V1 = auto()
     V1_INPAINT = auto()
@@ -100,6 +98,7 @@ class ModelManager(object):
         if not isinstance(config, DictConfig):
             config = OmegaConf.load(config)
         self.config = config
+        self.globals = get_invokeai_config()
         self.precision = precision
         self.device = torch.device(device_type)
         self.max_loaded_models = max_loaded_models
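`ModelManager` now grabs the config once in its constructor and stores it as `self.globals`; later methods read the instance attribute instead of the removed module-level `Globals`. A sketch of the idea (the class name here is illustrative, not from the PR):

```
class Manager:
    def __init__(self):
        # one lookup at construction time; attributes are read later
        self.globals = get_invokeai_config()

    def legacy_ckpts_are_disabled(self) -> bool:
        # mirrors the ckpt_convert test in the hunk that follows
        return self.globals.ckpt_convert
```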
@@ -292,7 +291,7 @@ class ModelManager(object):
         """
         # if we are converting legacy files automatically, then
         # there are no legacy ckpts!
-        if Globals.ckpt_convert:
+        if self.globals.ckpt_convert:
             return False
         info = self.model_info(model_name)
         if "weights" in info and info["weights"].endswith((".ckpt", ".safetensors")):
@@ -502,13 +501,13 @@ class ModelManager(object):

         # TODO: scan weights maybe?
         pipeline_args: dict[str, Any] = dict(
-            safety_checker=None, local_files_only=not Globals.internet_available
+            safety_checker=None, local_files_only=not self.globals.internet_available
         )
         if "vae" in mconfig and mconfig["vae"] is not None:
             if vae := self._load_vae(mconfig["vae"]):
                 pipeline_args.update(vae=vae)
         if not isinstance(name_or_path, Path):
-            pipeline_args.update(cache_dir=global_cache_dir("hub"))
+            pipeline_args.update(cache_dir=self.globals.cache_dir)
         if using_fp16:
             pipeline_args.update(torch_dtype=torch.float16)
             fp_args_list = [{"revision": "fp16"}, {}]
@@ -560,10 +559,9 @@ class ModelManager(object):
         width = mconfig.width
         height = mconfig.height

-        if not os.path.isabs(config):
-            config = os.path.join(Globals.root, config)
-        if not os.path.isabs(weights):
-            weights = os.path.normpath(os.path.join(Globals.root, weights))
+        root_dir = self.globals.root_dir
+        config = str(root_dir / config)
+        weights = str(root_dir / weights)

         # Convert to diffusers and return a diffusers pipeline
         self.logger.info(f"Converting legacy checkpoint {model_name} into a diffusers model...")
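The explicit `os.path.isabs()` checks could be dropped because `pathlib` discards the left operand when the right-hand side is already absolute:

```
from pathlib import Path

root = Path("/home/user/invokeai")
assert root / "configs/models.yaml" == Path("/home/user/invokeai/configs/models.yaml")
# an absolute right-hand operand wins, so absolute inputs pass through unchanged
assert root / "/tmp/weights.ckpt" == Path("/tmp/weights.ckpt")
```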
@@ -578,11 +576,7 @@ class ModelManager(object):

         vae_path = None
         if vae:
-            vae_path = (
-                vae
-                if os.path.isabs(vae)
-                else os.path.normpath(os.path.join(Globals.root, vae))
-            )
+            vae_path = str(root_dir / vae)
         if self._has_cuda():
             torch.cuda.empty_cache()
         pipeline = load_pipeline_from_original_stable_diffusion_ckpt(
@@ -614,9 +608,7 @@ class ModelManager(object):
         )

         if "path" in mconfig and mconfig["path"] is not None:
-            path = Path(mconfig["path"])
-            if not path.is_absolute():
-                path = Path(Globals.root, path).resolve()
+            path = self.globals.root_dir / Path(mconfig["path"])
             return path
         elif "repo_id" in mconfig:
             return mconfig["repo_id"]
@@ -864,25 +856,16 @@ class ModelManager(object):
         model_type = self.probe_model_type(checkpoint)
         if model_type == SDLegacyType.V1:
             self.logger.debug("SD-v1 model detected")
-            model_config_file = Path(
-                Globals.root, "configs/stable-diffusion/v1-inference.yaml"
-            )
+            model_config_file = self.globals.legacy_conf_path / "v1-inference.yaml"
         elif model_type == SDLegacyType.V1_INPAINT:
             self.logger.debug("SD-v1 inpainting model detected")
-            model_config_file = Path(
-                Globals.root,
-                "configs/stable-diffusion/v1-inpainting-inference.yaml",
-            )
+            model_config_file = self.globals.legacy_conf_path / "v1-inpainting-inference.yaml",
         elif model_type == SDLegacyType.V2_v:
             self.logger.debug("SD-v2-v model detected")
-            model_config_file = Path(
-                Globals.root, "configs/stable-diffusion/v2-inference-v.yaml"
-            )
+            model_config_file = self.globals.legacy_conf_path / "v2-inference-v.yaml"
         elif model_type == SDLegacyType.V2_e:
             self.logger.debug("SD-v2-e model detected")
-            model_config_file = Path(
-                Globals.root, "configs/stable-diffusion/v2-inference.yaml"
-            )
+            model_config_file = self.globals.legacy_conf_path / "v2-inference.yaml"
         elif model_type == SDLegacyType.V2:
             self.logger.warning(
                 f"{thing} is a V2 checkpoint file, but its parameterization cannot be determined. Please provide configuration file path."
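One detail worth flagging in the hunk above: the `v1-inpainting-inference.yaml` branch carries over a trailing comma from the old `Path(...)` call, which makes `model_config_file` a one-element tuple rather than a `Path`:

```
from pathlib import Path

p = Path("/configs") / "v1-inpainting-inference.yaml",  # note the trailing comma
assert isinstance(p, tuple)  # a tuple, not a Path
```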
@@ -909,9 +892,7 @@ class ModelManager(object):
             self.logger.debug(f"Using VAE file {vae_path.name}")
         vae = None if vae_path else dict(repo_id="stabilityai/sd-vae-ft-mse")

-        diffuser_path = Path(
-            Globals.root, "models", Globals.converted_ckpts_dir, model_path.stem
-        )
+        diffuser_path = self.globals.root_dir / "models/converted_ckpts" / model_path.stem
         model_name = self.convert_and_import(
             model_path,
             diffusers_path=diffuser_path,
@@ -1044,9 +1025,7 @@ class ModelManager(object):
         """
         yaml_str = OmegaConf.to_yaml(self.config)
         if not os.path.isabs(config_file_path):
-            config_file_path = os.path.normpath(
-                os.path.join(Globals.root, config_file_path)
-            )
+            config_file_path = self.globals.model_conf_path
         tmpfile = os.path.join(os.path.dirname(config_file_path), "new_config.tmp")
         with open(tmpfile, "w", encoding="utf-8") as outfile:
             outfile.write(self.preamble())
@@ -1078,7 +1057,8 @@ class ModelManager(object):
         """
         # Three transformer models to check: bert, clip and safety checker, and
         # the diffusers as well
-        models_dir = Path(Globals.root, "models")
+        config = get_invokeai_config()
+        models_dir = config.root_dir / "models"
         legacy_locations = [
             Path(
                 models_dir,
@@ -1090,8 +1070,8 @@ class ModelManager(object):
                 "openai/clip-vit-large-patch14/models--openai--clip-vit-large-patch14",
             ),
         ]
-        legacy_locations.extend(list(global_cache_dir("diffusers").glob("*")))
-
+        legacy_cache_dir = config.cache_dir / "../diffusers"
+        legacy_locations.extend(list(legacy_cache_dir.glob("*")))
         legacy_layout = False
         for model in legacy_locations:
             legacy_layout = legacy_layout or model.exists()
@@ -1113,7 +1093,7 @@ class ModelManager(object):

         # transformer files get moved into the hub directory
         if cls._is_huggingface_hub_directory_present():
-            hub = global_cache_dir("hub")
+            hub = config.cache_dir
         else:
             hub = models_dir / "hub"

@@ -1152,13 +1132,12 @@ class ModelManager(object):
         if str(source).startswith(("http:", "https:", "ftp:")):
             dest_directory = Path(dest_directory)
             if not dest_directory.is_absolute():
-                dest_directory = Globals.root / dest_directory
+                dest_directory = self.globals.root_dir / dest_directory
             dest_directory.mkdir(parents=True, exist_ok=True)
             resolved_path = download_with_resume(str(source), dest_directory)
         else:
-            if not os.path.isabs(source):
-                source = os.path.join(Globals.root, source)
-            resolved_path = Path(source)
+            source = self.globals.root_dir / source
+            resolved_path = source
         return resolved_path

     def _invalidate_cached_model(self, model_name: str) -> None:
@@ -1208,7 +1187,7 @@ class ModelManager(object):
             path = name_or_path
         else:
             owner, repo = name_or_path.split("/")
-            path = Path(global_cache_dir("hub") / f"models--{owner}--{repo}")
+            path = self.globals.cache_dir / f"models--{owner}--{repo}"
         if not path.exists():
             return None
         hashpath = path / "checksum.sha256"
@@ -1269,8 +1248,8 @@ class ModelManager(object):
         using_fp16 = self.precision == "float16"

         vae_args.update(
-            cache_dir=global_cache_dir("hub"),
-            local_files_only=not Globals.internet_available,
+            cache_dir=self.globals.cache_dir,
+            local_files_only=not self.globals.internet_available,
         )

         self.logger.debug(f"Loading diffusers VAE from {name_or_path}")
@@ -1308,7 +1287,7 @@ class ModelManager(object):

     @classmethod
     def _delete_model_from_cache(cls,repo_id):
-        cache_info = scan_cache_dir(global_cache_dir("hub"))
+        cache_info = scan_cache_dir(get_invokeai_config().cache_dir)

         # I'm sure there is a way to do this with comprehensions
         # but the code quickly became incomprehensible!
@@ -1325,9 +1304,10 @@ class ModelManager(object):

     @staticmethod
     def _abs_path(path: str | Path) -> Path:
+        globals = get_invokeai_config()
         if path is None or Path(path).is_absolute():
             return path
-        return Path(Globals.root, path).resolve()
+        return Path(globals.root_dir, path).resolve()

     @staticmethod
     def _is_huggingface_hub_directory_present() -> bool:
@@ -20,12 +20,11 @@ from compel.prompt_parser import (
 )

 import invokeai.backend.util.logging as logger
-from invokeai.backend.globals import Globals
-
+from invokeai.app.services.config import get_invokeai_config
 from ..stable_diffusion import InvokeAIDiffuserComponent
 from ..util import torch_dtype


 def get_uc_and_c_and_ec(prompt_string,
                         model: InvokeAIDiffuserComponent,
                         log_tokens=False, skip_normalize_legacy_blend=False):
@@ -40,6 +39,8 @@ def get_uc_and_c_and_ec(prompt_string,
         truncate_long_prompts=False,
     )

+    config = get_invokeai_config()
+
     # get rid of any newline characters
     prompt_string = prompt_string.replace("\n", " ")
     positive_prompt_string, negative_prompt_string = split_prompt_to_positive_and_negative(prompt_string)
@@ -56,7 +57,7 @@ def get_uc_and_c_and_ec(prompt_string,
     negative_prompt: FlattenedPrompt | Blend = negative_conjunction.prompts[0]

     tokens_count = get_max_token_count(model.tokenizer, positive_prompt)
-    if log_tokens or getattr(Globals, "log_tokenization", False):
+    if log_tokens or config.log_tokenization:
         log_tokenization(positive_prompt, negative_prompt, tokenizer=model.tokenizer)

     c, options = compel.build_conditioning_tensor_for_prompt_object(positive_prompt)
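Because the setting is a regular attribute on the config object, the prompt code can read it directly; the defensive `getattr(Globals, "log_tokenization", False)` fallback is no longer needed. Roughly:

```
config = get_invokeai_config()
# the field always exists on the config object, so a plain attribute read is safe
should_log = log_tokens or config.log_tokenization
```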
@@ -6,7 +6,7 @@ import numpy as np
 import torch

 import invokeai.backend.util.logging as logger
-from ..globals import Globals
+from invokeai.app.services.config import get_invokeai_config

 pretrained_model_url = (
     "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth"
@@ -17,11 +17,11 @@ class CodeFormerRestoration:
     def __init__(
         self, codeformer_dir="models/codeformer", codeformer_model_path="codeformer.pth"
     ) -> None:
-        if not os.path.isabs(codeformer_dir):
-            codeformer_dir = os.path.join(Globals.root, codeformer_dir)
-
-        self.model_path = os.path.join(codeformer_dir, codeformer_model_path)
-        self.codeformer_model_exists = os.path.isfile(self.model_path)
+        self.globals = get_invokeai_config()
+        codeformer_dir = self.globals.root_dir / codeformer_dir
+        self.model_path = codeformer_dir / codeformer_model_path
+        self.codeformer_model_exists = self.model_path.exists()

         if not self.codeformer_model_exists:
             logger.error("NOT FOUND: CodeFormer model not found at " + self.model_path)
@@ -71,9 +71,7 @@ class CodeFormerRestoration:
             upscale_factor=1,
             use_parse=True,
             device=device,
-            model_rootpath=os.path.join(
-                Globals.root, "models", "gfpgan", "weights"
-            ),
+            model_rootpath=self.globals.root_dir / "gfpgan" / "weights",
         )
         face_helper.clean_all()
         face_helper.read_image(bgr_image_array)
@@ -7,14 +7,13 @@ import torch
 from PIL import Image

 import invokeai.backend.util.logging as logger
-from invokeai.backend.globals import Globals
+from invokeai.app.services.config import get_invokeai_config

 class GFPGAN:
     def __init__(self, gfpgan_model_path="models/gfpgan/GFPGANv1.4.pth") -> None:
+        self.globals = get_invokeai_config()
         if not os.path.isabs(gfpgan_model_path):
-            gfpgan_model_path = os.path.abspath(
-                os.path.join(Globals.root, gfpgan_model_path)
-            )
+            gfpgan_model_path = self.globals.root_dir / gfpgan_model_path
         self.model_path = gfpgan_model_path
         self.gfpgan_model_exists = os.path.isfile(self.model_path)

@@ -33,7 +32,7 @@ class GFPGAN:
         warnings.filterwarnings("ignore", category=DeprecationWarning)
         warnings.filterwarnings("ignore", category=UserWarning)
         cwd = os.getcwd()
-        os.chdir(os.path.join(Globals.root, "models"))
+        os.chdir(self.globals.root_dir / 'models')
         try:
             from gfpgan import GFPGANer

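Both restoration modules now resolve their bundled weights against the runtime root with `pathlib`; a hedged sketch of the shared pattern, using the same file names as the hunks above:

```
config = get_invokeai_config()
gfpgan_weights = config.root_dir / "models/gfpgan/GFPGANv1.4.pth"
codeformer_weights = config.root_dir / "models/codeformer" / "codeformer.pth"
```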
@@ -1,4 +1,3 @@
-import os
 import warnings

 import numpy as np
@@ -7,7 +6,8 @@ from PIL import Image
 from PIL.Image import Image as ImageType

 import invokeai.backend.util.logging as logger
-from invokeai.backend.globals import Globals
+from invokeai.app.services.config import get_invokeai_config
+config = get_invokeai_config()

 class ESRGAN:
     def __init__(self, bg_tile_size=400) -> None:
@@ -30,12 +30,8 @@ class ESRGAN:
             upscale=4,
             act_type="prelu",
         )
-        model_path = os.path.join(
-            Globals.root, "models/realesrgan/realesr-general-x4v3.pth"
-        )
-        wdn_model_path = os.path.join(
-            Globals.root, "models/realesrgan/realesr-general-wdn-x4v3.pth"
-        )
+        model_path = config.root_dir / "models/realesrgan/realesr-general-x4v3.pth"
+        wdn_model_path = config.root_dir / "models/realesrgan/realesr-general-wdn-x4v3.pth"
         scale = 4

         bg_upsampler = RealESRGANer(
@@ -15,7 +15,7 @@ from transformers import AutoFeatureExtractor

 import invokeai.assets.web as web_assets
 import invokeai.backend.util.logging as logger
-from .globals import global_cache_dir
+from invokeai.app.services.config import get_invokeai_config
 from .util import CPU_DEVICE

 class SafetyChecker(object):
@@ -26,10 +26,11 @@ class SafetyChecker(object):
         caution = Image.open(path)
         self.caution_img = caution.resize((caution.width // 2, caution.height // 2))
         self.device = device
+        config = get_invokeai_config()

         try:
             safety_model_id = "CompVis/stable-diffusion-safety-checker"
-            safety_model_path = global_cache_dir("hub")
+            safety_model_path = config.cache_dir
             self.safety_checker = StableDiffusionSafetyChecker.from_pretrained(
                 safety_model_id,
                 local_files_only=True,
@@ -18,15 +18,15 @@ from huggingface_hub import (
 )

 import invokeai.backend.util.logging as logger
-from invokeai.backend.globals import Globals
+from invokeai.app.services.config import get_invokeai_config


 class HuggingFaceConceptsLibrary(object):
     def __init__(self, root=None):
         """
         Initialize the Concepts object. May optionally pass a root directory.
         """
-        self.root = root or Globals.root
+        self.config = get_invokeai_config()
+        self.root = root or self.config.root
         self.hf_api = HfApi()
         self.local_concepts = dict()
         self.concept_list = None
@@ -58,7 +58,7 @@ class HuggingFaceConceptsLibrary(object):
             self.concept_list.extend(list(local_concepts_to_add))
             return self.concept_list
             return self.concept_list
-        elif Globals.internet_available is True:
+        elif self.config.internet_available is True:
             try:
                 models = self.hf_api.list_models(
                     filter=ModelFilter(model_name="sd-concepts-library/")
@@ -33,8 +33,7 @@ from torchvision.transforms.functional import resize as tv_resize
 from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
 from typing_extensions import ParamSpec

-from invokeai.backend.globals import Globals
-
+from invokeai.app.services.config import get_invokeai_config
 from ..util import CPU_DEVICE, normalize_device
 from .diffusion import (
     AttentionMapSaver,
@@ -44,7 +43,6 @@ from .diffusion import (
 from .offloading import FullyLoadedModelGroup, LazilyLoadedModelGroup, ModelGroup
 from .textual_inversion_manager import TextualInversionManager

-
 @dataclass
 class PipelineIntermediateState:
     run_id: str
@@ -348,10 +346,11 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
         """
         if xformers is available, use it, otherwise use sliced attention.
         """
+        config = get_invokeai_config()
         if (
             torch.cuda.is_available()
             and is_xformers_available()
-            and not Globals.disable_xformers
+            and not config.disable_xformers
         ):
             self.enable_xformers_memory_efficient_attention()
         else:
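The attention setup consults the config at call time, so xformers can be disabled per-run without touching module state; a condensed sketch of the gate above, assuming `torch` and diffusers' `is_xformers_available` are imported as in the module:

```
config = get_invokeai_config()
use_xformers = (
    torch.cuda.is_available()
    and is_xformers_available()
    and not config.disable_xformers
)
```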
@@ -10,7 +10,7 @@ from diffusers.models.attention_processor import AttentionProcessor
 from typing_extensions import TypeAlias

 import invokeai.backend.util.logging as logger
-from invokeai.backend.globals import Globals
+from invokeai.app.services.config import get_invokeai_config

 from .cross_attention_control import (
     Arguments,
@@ -32,7 +32,6 @@ ModelForwardCallback: TypeAlias = Union[
     Callable[[torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor],
 ]

-
 @dataclass(frozen=True)
 class PostprocessingSettings:
     threshold: float
@@ -73,12 +72,13 @@ class InvokeAIDiffuserComponent:
         :param model: the unet model to pass through to cross attention control
         :param model_forward_callback: a lambda with arguments (x, sigma, conditioning_to_apply). will be called repeatedly. most likely, this should simply call model.forward(x, sigma, conditioning)
         """
+        config = get_invokeai_config()
         self.conditioning = None
         self.model = model
         self.is_running_diffusers = is_running_diffusers
         self.model_forward_callback = model_forward_callback
         self.cross_attention_control_context = None
-        self.sequential_guidance = Globals.sequential_guidance
+        self.sequential_guidance = config.sequential_guidance

     @classmethod
     @contextmanager
@@ -7,7 +7,6 @@
 This is the backend to "textual_inversion.py"
 """

-import argparse
 import logging
 import math
 import os
@@ -47,8 +46,7 @@ from tqdm.auto import tqdm
 from transformers import CLIPTextModel, CLIPTokenizer

 # invokeai stuff
-from ..args import ArgFormatter, PagingArgumentParser
-from ..globals import Globals, global_cache_dir
+from invokeai.app.services.config import InvokeAIAppConfig,PagingArgumentParser

 if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
     PIL_INTERPOLATION = {
@@ -90,8 +88,9 @@ def save_progress(


 def parse_args():
+    config = InvokeAIAppConfig(argv=[])
     parser = PagingArgumentParser(
-        description="Textual inversion training", formatter_class=ArgFormatter
+        description="Textual inversion training"
     )
     general_group = parser.add_argument_group("General")
     model_group = parser.add_argument_group("Models and Paths")
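Passing `argv=[]` keeps the config object from consuming the training script's own command line, so `PagingArgumentParser` still sees all of `sys.argv`; the config only supplies defaults:

```
config = InvokeAIAppConfig(argv=[])  # defaults only; does not parse sys.argv
parser = PagingArgumentParser(description="Textual inversion training")
parser.add_argument("--root_dir", "--root", type=Path, default=config.root)
```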
@@ -112,7 +111,7 @@ def parse_args():
         "--root_dir",
         "--root",
         type=Path,
-        default=Globals.root,
+        default=config.root,
         help="Path to the invokeai runtime directory",
     )
     general_group.add_argument(
@@ -127,7 +126,7 @@ def parse_args():
     general_group.add_argument(
         "--output_dir",
         type=Path,
-        default=f"{Globals.root}/text-inversion-model",
+        default=f"{config.root}/text-inversion-model",
         help="The output directory where the model predictions and checkpoints will be written.",
     )
     model_group.add_argument(
@@ -528,6 +527,7 @@ def get_full_repo_name(


 def do_textual_inversion_training(
+    config: InvokeAIAppConfig,
     model: str,
     train_data_dir: Path,
     output_dir: Path,
@@ -580,7 +580,7 @@ def do_textual_inversion_training(

     # setting up things the way invokeai expects them
     if not os.path.isabs(output_dir):
-        output_dir = os.path.join(Globals.root, output_dir)
+        output_dir = os.path.join(config.root, output_dir)

     logging_dir = output_dir / logging_dir

@@ -628,7 +628,7 @@ def do_textual_inversion_training(
     elif output_dir is not None:
         os.makedirs(output_dir, exist_ok=True)

-    models_conf = OmegaConf.load(os.path.join(Globals.root, "configs/models.yaml"))
+    models_conf = OmegaConf.load(config.model_conf_path)
     model_conf = models_conf.get(model, None)
     assert model_conf is not None, f"Unknown model: {model}"
     assert (
@@ -640,7 +640,7 @@ def do_textual_inversion_training(
     assert (
         pretrained_model_name_or_path
     ), f"models.yaml error: neither 'repo_id' nor 'path' is defined for {model}"
-    pipeline_args = dict(cache_dir=global_cache_dir("hub"))
+    pipeline_args = dict(cache_dir=config.cache_dir)

     # Load tokenizer
     if tokenizer_name:
@@ -4,17 +4,16 @@ from contextlib import nullcontext

 import torch
 from torch import autocast

-from invokeai.backend.globals import Globals
+from invokeai.app.services.config import get_invokeai_config

 CPU_DEVICE = torch.device("cpu")
 CUDA_DEVICE = torch.device("cuda")
 MPS_DEVICE = torch.device("mps")


 def choose_torch_device() -> torch.device:
     """Convenience routine for guessing which GPU device to run model on"""
-    if Globals.always_use_cpu:
+    config = get_invokeai_config()
+    if config.always_use_cpu:
         return CPU_DEVICE
     if torch.cuda.is_available():
         return torch.device("cuda")
@@ -33,7 +32,8 @@ def choose_precision(device: torch.device) -> str:


 def torch_dtype(device: torch.device) -> torch.dtype:
-    if Globals.full_precision:
+    config = get_invokeai_config()
+    if config.full_precision:
         return torch.float32
     if choose_precision(device) == "float16":
         return torch.float16
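A short usage sketch of the two device helpers after this change, assuming they remain exported from `invokeai.backend.util` as elsewhere in the diff:

```
from invokeai.backend.util import choose_torch_device, torch_dtype

device = choose_torch_device()  # returns CPU_DEVICE when config.always_use_cpu is set
dtype = torch_dtype(device)     # float32 when config.full_precision is set
```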
@@ -1,497 +0,0 @@
-"""
-Readline helper functions for invoke.py.
-You may import the global singleton `completer` to get access to the
-completer object itself. This is useful when you want to autocomplete
-seeds:
-
-from invokeai.frontend.CLI.readline import completer
-completer.add_seed(18247566)
-completer.add_seed(9281839)
-"""
-import atexit
-import os
-import re
-
-from ...backend.args import Args
-from ...backend.globals import Globals
-from ...backend.stable_diffusion import HuggingFaceConceptsLibrary
-
-# ---------------readline utilities---------------------
-try:
-    import readline
-
-    readline_available = True
-except (ImportError, ModuleNotFoundError) as e:
-    print(f"** An error occurred when loading the readline module: {str(e)}")
-    readline_available = False
-
-IMG_EXTENSIONS = (".png", ".jpg", ".jpeg", ".PNG", ".JPG", ".JPEG", ".gif", ".GIF")
-WEIGHT_EXTENSIONS = (".ckpt", ".vae", ".safetensors")
-TEXT_EXTENSIONS = (".txt", ".TXT")
-CONFIG_EXTENSIONS = (".yaml", ".yml")
-COMMANDS = (
-    "--steps",
-    "-s",
-    "--seed",
-    "-S",
-    "--iterations",
-    "-n",
-    "--width",
-    "-W",
-    "--height",
-    "-H",
-    "--cfg_scale",
-    "-C",
-    "--threshold",
-    "--perlin",
-    "--grid",
-    "-g",
-    "--individual",
-    "-i",
-    "--save_intermediates",
-    "--init_img",
-    "-I",
-    "--init_mask",
-    "-M",
-    "--init_color",
-    "--strength",
-    "-f",
-    "--variants",
-    "-v",
-    "--outdir",
-    "-o",
-    "--sampler",
-    "-A",
-    "-m",
-    "--embedding_path",
-    "--device",
-    "--grid",
-    "-g",
-    "--facetool",
-    "-ft",
-    "--facetool_strength",
-    "-G",
-    "--codeformer_fidelity",
-    "-cf",
-    "--upscale",
-    "-U",
-    "-save_orig",
-    "--save_original",
-    "--log_tokenization",
-    "-t",
-    "--hires_fix",
-    "--inpaint_replace",
-    "-r",
-    "--png_compression",
-    "-z",
-    "--text_mask",
-    "-tm",
-    "--h_symmetry_time_pct",
-    "--v_symmetry_time_pct",
-    "!fix",
-    "!fetch",
-    "!replay",
-    "!history",
-    "!search",
-    "!clear",
-    "!models",
-    "!switch",
-    "!import_model",
-    "!optimize_model",
-    "!convert_model",
-    "!edit_model",
-    "!del_model",
-    "!mask",
-    "!triggers",
-)
-MODEL_COMMANDS = (
-    "!switch",
-    "!edit_model",
-    "!del_model",
-)
-CKPT_MODEL_COMMANDS = ("!optimize_model",)
-WEIGHT_COMMANDS = (
-    "!import_model",
-    "!convert_model",
-)
-IMG_PATH_COMMANDS = ("--outdir[=\s]",)
-TEXT_PATH_COMMANDS = ("!replay",)
-IMG_FILE_COMMANDS = (
-    "!fix",
-    "!fetch",
-    "!mask",
-    "--init_img[=\s]",
-    "-I",
-    "--init_mask[=\s]",
-    "-M",
-    "--init_color[=\s]",
-    "--embedding_path[=\s]",
-)
-
-path_regexp = "(" + "|".join(IMG_PATH_COMMANDS + IMG_FILE_COMMANDS) + ")\s*\S*$"
-weight_regexp = "(" + "|".join(WEIGHT_COMMANDS) + ")\s*\S*$"
-text_regexp = "(" + "|".join(TEXT_PATH_COMMANDS) + ")\s*\S*$"
-
-
-class Completer(object):
-    def __init__(self, options, models={}):
-        self.options = sorted(options)
-        self.models = models
-        self.seeds = set()
-        self.matches = list()
-        self.default_dir = None
-        self.linebuffer = None
-        self.auto_history_active = True
-        self.extensions = None
-        self.concepts = None
-        self.embedding_terms = set()
-        return
-
-    def complete(self, text, state):
-        """
-        Completes invoke command line.
-        BUG: it doesn't correctly complete files that have spaces in the name.
-        """
-        buffer = readline.get_line_buffer()
-
-        if state == 0:
-            # extensions defined, so go directly into path completion mode
-            if self.extensions is not None:
-                self.matches = self._path_completions(text, state, self.extensions)
-
-            # looking for an image file
-            elif re.search(path_regexp, buffer):
-                do_shortcut = re.search("^" + "|".join(IMG_FILE_COMMANDS), buffer)
-                self.matches = self._path_completions(
-                    text, state, IMG_EXTENSIONS, shortcut_ok=do_shortcut
-                )
-
-            # looking for a seed
-            elif re.search("(-S\s*|--seed[=\s])\d*$", buffer):
-                self.matches = self._seed_completions(text, state)
-
-            # looking for an embedding concept
-            elif re.search("<[\w-]*$", buffer):
-                self.matches = self._concept_completions(text, state)
-
-            # looking for a model
-            elif re.match("^" + "|".join(MODEL_COMMANDS), buffer):
-                self.matches = self._model_completions(text, state)
-
-            # looking for a ckpt model
-            elif re.match("^" + "|".join(CKPT_MODEL_COMMANDS), buffer):
-                self.matches = self._model_completions(text, state, ckpt_only=True)
-
-            elif re.search(weight_regexp, buffer):
-                self.matches = self._path_completions(
-                    text,
-                    state,
-                    WEIGHT_EXTENSIONS,
-                    default_dir=Globals.root,
-                )
-
-            elif re.search(text_regexp, buffer):
-                self.matches = self._path_completions(text, state, TEXT_EXTENSIONS)
-
-            # This is the first time for this text, so build a match list.
-            elif text:
-                self.matches = [s for s in self.options if s and s.startswith(text)]
-            else:
-                self.matches = self.options[:]
-
-        # Return the state'th item from the match list,
-        # if we have that many.
-        try:
-            response = self.matches[state]
-        except IndexError:
-            response = None
-        return response
-
-    def complete_extensions(self, extensions: list):
-        """
-        If called with a list of extensions, will force completer
-        to do file path completions.
-        """
-        self.extensions = extensions
-
-    def add_history(self, line):
-        """
-        Pass thru to readline
-        """
-        if not self.auto_history_active:
-            readline.add_history(line)
-
-    def clear_history(self):
-        """
-        Pass clear_history() thru to readline
-        """
-        readline.clear_history()
-
-    def search_history(self, match: str):
-        """
-        Like show_history() but only shows items that
-        contain the match string.
-        """
-        self.show_history(match)
-
-    def remove_history_item(self, pos):
-        readline.remove_history_item(pos)
-
-    def add_seed(self, seed):
-        """
-        Add a seed to the autocomplete list for display when -S is autocompleted.
-        """
-        if seed is not None:
-            self.seeds.add(str(seed))
-
-    def set_default_dir(self, path):
-        self.default_dir = path
-
-    def set_options(self, options):
-        self.options = options
-
-    def get_line(self, index):
-        try:
-            line = self.get_history_item(index)
-        except IndexError:
-            return None
-        return line
-
-    def get_current_history_length(self):
-        return readline.get_current_history_length()
-
-    def get_history_item(self, index):
-        return readline.get_history_item(index)
-
-    def show_history(self, match=None):
-        """
-        Print the session history using the pydoc pager
-        """
-        import pydoc
-
-        lines = list()
-        h_len = self.get_current_history_length()
-        if h_len < 1:
-            print("<empty history>")
-            return
-
-        for i in range(0, h_len):
-            line = self.get_history_item(i + 1)
-            if match and match not in line:
-                continue
-            lines.append(f"[{i+1}] {line}")
-        pydoc.pager("\n".join(lines))
-
-    def set_line(self, line) -> None:
-        """
-        Set the default string displayed in the next line of input.
-        """
-        self.linebuffer = line
-        readline.redisplay()
-
-    def update_models(self, models: dict) -> None:
-        """
-        update our list of models
-        """
-        self.models = models
-
-    def _seed_completions(self, text, state):
-        m = re.search("(-S\s?|--seed[=\s]?)(\d*)", text)
-        if m:
-            switch = m.groups()[0]
-            partial = m.groups()[1]
-        else:
-            switch = ""
-            partial = text
-
-        matches = list()
-        for s in self.seeds:
-            if s.startswith(partial):
-                matches.append(switch + s)
-        matches.sort()
-        return matches
-
-    def add_embedding_terms(self, terms: list[str]):
-        self.embedding_terms = set(terms)
-        if self.concepts:
-            self.embedding_terms.update(set(self.concepts.list_concepts()))
-
-    def _concept_completions(self, text, state):
-        if self.concepts is None:
-            # cache Concepts() instance so we can check for updates in concepts_list during runtime.
-            self.concepts = HuggingFaceConceptsLibrary()
-            self.embedding_terms.update(set(self.concepts.list_concepts()))
-        else:
-            self.embedding_terms.update(set(self.concepts.list_concepts()))
-
-        partial = text[1:]  # this removes the leading '<'
-        if len(partial) == 0:
-            return list(self.embedding_terms)  # whole dump - think if user wants this!
-
-        matches = list()
-        for concept in self.embedding_terms:
-            if concept.startswith(partial):
-                matches.append(f"<{concept}>")
-        matches.sort()
-        return matches
-
-    def _model_completions(self, text, state, ckpt_only=False):
-        m = re.search("(!switch\s+)(\w*)", text)
-        if m:
-            switch = m.groups()[0]
-            partial = m.groups()[1]
-        else:
-            switch = ""
-            partial = text
-        matches = list()
-        for s in self.models:
-            format = self.models[s]["format"]
-            if format == "vae":
-                continue
-            if ckpt_only and format != "ckpt":
-                continue
-            if s.startswith(partial):
-                matches.append(switch + s)
-        matches.sort()
-        return matches
-
-    def _pre_input_hook(self):
-        if self.linebuffer:
-            readline.insert_text(self.linebuffer)
-            readline.redisplay()
-            self.linebuffer = None
-
-    def _path_completions(
-        self, text, state, extensions, shortcut_ok=True, default_dir: str = ""
-    ):
-        # separate the switch from the partial path
-        match = re.search("^(-\w|--\w+=?)(.*)", text)
-        if match is None:
-            switch = None
-            partial_path = text
-        else:
-            switch, partial_path = match.groups()
-
-        partial_path = partial_path.lstrip()
-
-        matches = list()
-        path = os.path.expanduser(partial_path)
-
-        if os.path.isdir(path):
-            dir = path
-        elif os.path.dirname(path) != "":
-            dir = os.path.dirname(path)
-        else:
-            dir = default_dir if os.path.exists(default_dir) else ""
-            path = os.path.join(dir, path)
-
-        dir_list = os.listdir(dir or ".")
-        if shortcut_ok and os.path.exists(self.default_dir) and dir == "":
-            dir_list += os.listdir(self.default_dir)
-
-        for node in dir_list:
-            if node.startswith(".") and len(node) > 1:
-                continue
-            full_path = os.path.join(dir, node)
-
-            if not (node.endswith(extensions) or os.path.isdir(full_path)):
-                continue
-
-            if path and not full_path.startswith(path):
-                continue
-
-            if switch is None:
-                match_path = os.path.join(dir, node)
-                matches.append(
-                    match_path + "/" if os.path.isdir(full_path) else match_path
-                )
-            elif os.path.isdir(full_path):
-                matches.append(
-                    switch + os.path.join(os.path.dirname(full_path), node) + "/"
-                )
-            elif node.endswith(extensions):
-                matches.append(switch + os.path.join(os.path.dirname(full_path), node))
-
-        return matches
-
-
-class DummyCompleter(Completer):
-    def __init__(self, options):
-        super().__init__(options)
-        self.history = list()
-
-    def add_history(self, line):
-        self.history.append(line)
-
-    def clear_history(self):
-        self.history = list()
-
-    def get_current_history_length(self):
-        return len(self.history)
-
-    def get_history_item(self, index):
-        return self.history[index - 1]
-
-    def remove_history_item(self, index):
-        return self.history.pop(index - 1)
-
-    def set_line(self, line):
-        print(f"# {line}")
-
-
-def generic_completer(commands: list) -> Completer:
-    if readline_available:
-        completer = Completer(commands, [])
-        readline.set_completer(completer.complete)
-        readline.set_pre_input_hook(completer._pre_input_hook)
-        readline.set_completer_delims(" ")
-        readline.parse_and_bind("tab: complete")
-        readline.parse_and_bind("set print-completions-horizontally off")
-        readline.parse_and_bind("set page-completions on")
-        readline.parse_and_bind("set skip-completed-text on")
-        readline.parse_and_bind("set show-all-if-ambiguous on")
-    else:
-        completer = DummyCompleter(commands)
-    return completer
-
-
-def get_completer(opt: Args, models=[]) -> Completer:
-    if readline_available:
-        completer = Completer(COMMANDS, models)
-
-        readline.set_completer(completer.complete)
-        # pyreadline3 does not have a set_auto_history() method
-        try:
-            readline.set_auto_history(False)
-            completer.auto_history_active = False
-        except:
-            completer.auto_history_active = True
-        readline.set_pre_input_hook(completer._pre_input_hook)
-        readline.set_completer_delims(" ")
-        readline.parse_and_bind("tab: complete")
-        readline.parse_and_bind("set print-completions-horizontally off")
-        readline.parse_and_bind("set page-completions on")
-        readline.parse_and_bind("set skip-completed-text on")
-        readline.parse_and_bind("set show-all-if-ambiguous on")
-
-        outdir = os.path.expanduser(opt.outdir)
-        if os.path.isabs(outdir):
-            histfile = os.path.join(outdir, ".invoke_history")
-        else:
-            histfile = os.path.join(Globals.root, outdir, ".invoke_history")
-        try:
-            readline.read_history_file(histfile)
-            readline.set_history_length(1000)
-        except FileNotFoundError:
-            pass
-        except OSError:  # file likely corrupted
-            newname = f"{histfile}.old"
-            print(
-                f"## Your history file {histfile} couldn't be loaded and may be corrupted. Renaming it to {newname}"
-            )
-            os.replace(histfile, newname)
-        atexit.register(readline.write_history_file, histfile)
-
-    else:
-        completer = DummyCompleter(COMMANDS)
-    return completer
@@ -1,30 +0,0 @@
-'''
-This is a modularized version of the sd-metadata.py script,
-which retrieves and prints the metadata from a series of generated png files.
-'''
-import sys
-import json
-from invokeai.backend.image_util import retrieve_metadata
-
-
-def print_metadata():
-    if len(sys.argv) < 2:
-        print("Usage: file2prompt.py <file1.png> <file2.png> <file3.png>...")
-        print("This script opens up the indicated invoke.py-generated PNG file(s) and prints out their metadata.")
-        exit(-1)
-
-    filenames = sys.argv[1:]
-    for f in filenames:
-        try:
-            metadata = retrieve_metadata(f)
-            print(f'{f}:\n',json.dumps(metadata['sd-metadata'], indent=4))
-        except FileNotFoundError:
-            sys.stderr.write(f'{f} not found\n')
-            continue
-        except PermissionError:
-            sys.stderr.write(f'{f} could not be opened due to inadequate permissions\n')
-            continue
-
-if __name__== '__main__':
-    print_metadata()
@@ -23,7 +23,6 @@ from npyscreen import widget
 from omegaconf import OmegaConf

 import invokeai.backend.util.logging as logger
-from invokeai.backend.globals import Globals, global_config_dir

 from ...backend.config.model_install_backend import (
     Dataset_path,
@@ -41,11 +40,13 @@ from .widgets import (
     TextBox,
     set_min_terminal_size,
 )
+from invokeai.app.services.config import get_invokeai_config

 # minimum size for the UI
 MIN_COLS = 120
 MIN_LINES = 45

+config = get_invokeai_config()

 class addModelsForm(npyscreen.FormMultiPage):
     # for responsive resizing - disabled
@@ -453,9 +454,9 @@ def main():
     opt = parser.parse_args()

     # setting a global here
-    Globals.root = os.path.expanduser(get_root(opt.root) or "")
+    config.root = os.path.expanduser(get_root(opt.root) or "")

-    if not global_config_dir().exists():
+    if not (config.conf_path / '..' ).exists():
         logger.info(
             "Your InvokeAI root directory is not set up. Calling invokeai-configure."
         )
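Note that the installer mutates the shared config in place before testing for a configured root, and that `config.conf_path / '..'` is the parent of the models config file, i.e. the runtime `configs/` directory; resolving it makes the intent explicit:

```
# hypothetical clean-up of the test above; .resolve() normalizes the ".." segment
configs_dir = (config.conf_path / "..").resolve()
if not configs_dir.exists():
    ...
```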
@@ -8,7 +8,6 @@ import argparse
 import curses
 import os
 import sys
-import traceback
 import warnings
 from argparse import Namespace
 from pathlib import Path
@@ -20,20 +19,13 @@ from diffusers import logging as dlogging
 from npyscreen import widget
 from omegaconf import OmegaConf
 
-from ...backend.globals import (
-    Globals,
-    global_cache_dir,
-    global_config_file,
-    global_models_dir,
-    global_set_root,
-)
-
 import invokeai.backend.util.logging as logger
+from invokeai.app.services.config import get_invokeai_config
 from ...backend.model_management import ModelManager
 from ...frontend.install.widgets import FloatTitleSlider
 
 DEST_MERGED_MODEL_DIR = "merged_models"
+config = get_invokeai_config()
 
 def merge_diffusion_models(
     model_ids_or_paths: List[Union[str, Path]],
@@ -60,7 +52,7 @@ def merge_diffusion_models(
 
     pipe = DiffusionPipeline.from_pretrained(
         model_ids_or_paths[0],
-        cache_dir=kwargs.get("cache_dir", global_cache_dir()),
+        cache_dir=kwargs.get("cache_dir", config.cache_dir),
         custom_pipeline="checkpoint_merger",
     )
     merged_pipe = pipe.merge(
@@ -94,7 +86,7 @@ def merge_diffusion_models_and_commit(
     **kwargs - the default DiffusionPipeline.get_config_dict kwargs:
          cache_dir, resume_download, force_download, proxies, local_files_only, use_auth_token, revision, torch_dtype, device_map
     """
-    config_file = global_config_file()
+    config_file = config.model_conf_path
     model_manager = ModelManager(OmegaConf.load(config_file))
     for mod in models:
         assert mod in model_manager.model_names(), f'** Unknown model "{mod}"'
@@ -106,7 +98,7 @@ def merge_diffusion_models_and_commit(
     merged_pipe = merge_diffusion_models(
         model_ids_or_paths, alpha, interp, force, **kwargs
     )
-    dump_path = global_models_dir() / DEST_MERGED_MODEL_DIR
+    dump_path = config.models_dir / DEST_MERGED_MODEL_DIR
 
     os.makedirs(dump_path, exist_ok=True)
     dump_path = dump_path / merged_model_name
@@ -126,7 +118,7 @@ def _parse_args() -> Namespace:
     parser.add_argument(
         "--root_dir",
         type=Path,
-        default=Globals.root,
+        default=config.root,
         help="Path to the invokeai runtime directory",
     )
     parser.add_argument(
@@ -398,7 +390,7 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):
 class Mergeapp(npyscreen.NPSAppManaged):
     def __init__(self):
         super().__init__()
-        conf = OmegaConf.load(global_config_file())
+        conf = OmegaConf.load(config.model_conf_path)
         self.model_manager = ModelManager(
             conf, "cpu", "float16"
         )  # precision doesn't really matter here
@@ -429,7 +421,7 @@ def run_cli(args: Namespace):
             f'No --merged_model_name provided. Defaulting to "{args.merged_model_name}"'
         )
 
-    model_manager = ModelManager(OmegaConf.load(global_config_file()))
+    model_manager = ModelManager(OmegaConf.load(config.model_conf_path))
     assert (
         args.clobber or args.merged_model_name not in model_manager.model_names()
     ), f'A model named "{args.merged_model_name}" already exists. Use --clobber to overwrite.'
@@ -440,9 +432,9 @@ def run_cli(args: Namespace):
 
 def main():
     args = _parse_args()
-    global_set_root(args.root_dir)
+    config.root = args.root_dir
 
-    cache_dir = str(global_cache_dir("hub"))
+    cache_dir = config.cache_dir
     os.environ[
         "HF_HOME"
     ] = cache_dir  # because not clear the merge pipeline is honoring cache_dir
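Taken together, the merge-script hunks map the old path helpers onto config attributes one for one. A short sketch of the correspondence, under the assumption that these fields behave as the diff suggests:

```
from omegaconf import OmegaConf
from invokeai.app.services.config import get_invokeai_config

config = get_invokeai_config()

models_config = OmegaConf.load(config.model_conf_path)  # was global_config_file()
merged_dir = config.models_dir / "merged_models"        # was global_models_dir() / ...
hf_cache = config.cache_dir                             # was global_cache_dir("hub")
```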
@@ -21,14 +21,17 @@ from npyscreen import widget
 from omegaconf import OmegaConf
 
 import invokeai.backend.util.logging as logger
-from invokeai.backend.globals import Globals, global_set_root
 
-from ...backend.training import do_textual_inversion_training, parse_args
+from invokeai.app.services.config import get_invokeai_config
+from ...backend.training import (
+    do_textual_inversion_training,
+    parse_args
+)
 
 TRAINING_DATA = "text-inversion-training-data"
 TRAINING_DIR = "text-inversion-output"
 CONF_FILE = "preferences.conf"
+config = None
 
 
 class textualInversionForm(npyscreen.FormMultiPageAction):
     resolutions = [512, 768, 1024]
@@ -122,7 +125,7 @@ class textualInversionForm(npyscreen.FormMultiPageAction):
             value=str(
                 saved_args.get(
                     "train_data_dir",
-                    Path(Globals.root) / TRAINING_DATA / default_placeholder_token,
+                    config.root_dir / TRAINING_DATA / default_placeholder_token,
                 )
             ),
             scroll_exit=True,
@@ -135,7 +138,7 @@ class textualInversionForm(npyscreen.FormMultiPageAction):
             value=str(
                 saved_args.get(
                     "output_dir",
-                    Path(Globals.root) / TRAINING_DIR / default_placeholder_token,
+                    config.root_dir / TRAINING_DIR / default_placeholder_token,
                 )
             ),
             scroll_exit=True,
@@ -241,9 +244,9 @@ class textualInversionForm(npyscreen.FormMultiPageAction):
         placeholder = self.placeholder_token.value
         self.prompt_token.value = f"(Trigger by using <{placeholder}> in your prompts)"
         self.train_data_dir.value = str(
-            Path(Globals.root) / TRAINING_DATA / placeholder
+            config.root_dir / TRAINING_DATA / placeholder
         )
-        self.output_dir.value = str(Path(Globals.root) / TRAINING_DIR / placeholder)
+        self.output_dir.value = str(config.root_dir / TRAINING_DIR / placeholder)
         self.resume_from_checkpoint.value = Path(self.output_dir.value).exists()
 
     def on_ok(self):
@@ -284,7 +287,7 @@ class textualInversionForm(npyscreen.FormMultiPageAction):
         return True
 
     def get_model_names(self) -> Tuple[List[str], int]:
-        conf = OmegaConf.load(os.path.join(Globals.root, "configs/models.yaml"))
+        conf = OmegaConf.load(config.root_dir / "configs/models.yaml")
         model_names = [
             idx
             for idx in sorted(list(conf.keys()))
@@ -367,7 +370,7 @@ def copy_to_embeddings_folder(args: dict):
     """
     source = Path(args["output_dir"], "learned_embeds.bin")
     dest_dir_name = args["placeholder_token"].strip("<>")
-    destination = Path(Globals.root, "embeddings", dest_dir_name)
+    destination = config.root_dir / "embeddings" / dest_dir_name
     os.makedirs(destination, exist_ok=True)
     logger.info(f"Training completed. Copying learned_embeds.bin into {str(destination)}")
     shutil.copy(source, destination)
@@ -383,7 +386,7 @@ def save_args(args: dict):
     """
     Save the current argument values to an omegaconf file
     """
-    dest_dir = Path(Globals.root) / TRAINING_DIR
+    dest_dir = config.root_dir / TRAINING_DIR
     os.makedirs(dest_dir, exist_ok=True)
     conf_file = dest_dir / CONF_FILE
     conf = OmegaConf.create(args)
@@ -394,7 +397,7 @@ def previous_args() -> dict:
     """
     Get the previous arguments used.
     """
-    conf_file = Path(Globals.root) / TRAINING_DIR / CONF_FILE
+    conf_file = config.root_dir / TRAINING_DIR / CONF_FILE
     try:
         conf = OmegaConf.load(conf_file)
         conf["placeholder_token"] = conf["placeholder_token"].strip("<>")
@@ -420,7 +423,7 @@ def do_front_end(args: Namespace):
     save_args(args)
 
     try:
-        do_textual_inversion_training(**args)
+        do_textual_inversion_training(get_invokeai_config(), **args)
         copy_to_embeddings_folder(args)
     except Exception as e:
         logger.error("An exception occurred during training. The exception was:")
@@ -430,13 +433,20 @@ def do_front_end(args: Namespace):
 
 
 def main():
+    global config
+
     args = parse_args()
-    global_set_root(args.root_dir or Globals.root)
+    config = get_invokeai_config(argv=[])
+
+    # change root if needed
+    if args.root_dir:
+        config.root = args.root_dir
+
     try:
         if args.front_end:
             do_front_end(args)
         else:
-            do_textual_inversion_training(**vars(args))
+            do_textual_inversion_training(config, **vars(args))
     except AssertionError as e:
         logger.error(e)
         sys.exit(-1)
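The training script's `main()` shows the intended initialization order: parse the script's own arguments first, build the config without letting it re-read `sys.argv` (hence `argv=[]`), then apply any explicit overrides. A condensed sketch of that flow; `init_config` is a hypothetical helper, not part of the PR:

```
from invokeai.app.services.config import get_invokeai_config

def init_config(root_dir=None):
    # argv=[] keeps the config system from re-parsing the script's own flags
    config = get_invokeai_config(argv=[])
    if root_dir:
        config.root = root_dir  # explicit override from the command line
    return config
```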
@@ -99,12 +99,14 @@ dependencies = [
 [project.scripts]
 
 # legacy entrypoints; provided for backwards compatibility
-"invoke.py" = "invokeai.frontend.CLI:invokeai_command_line_interface"
 "configure_invokeai.py" = "invokeai.frontend.install:invokeai_configure"
 "textual_inversion.py" = "invokeai.frontend.training:invokeai_textual_inversion"
 
-# modern entrypoints
-"invokeai" = "invokeai.frontend.CLI:invokeai_command_line_interface"
+# shortcut commands to start cli and web
+"invokeai" = "invokeai.app.cli_app:invoke_cli"
+"invokeai-web" = "invokeai.app.api_app:invoke_api"
+
+# full commands
 "invokeai-configure" = "invokeai.frontend.install:invokeai_configure"
 "invokeai-merge" = "invokeai.frontend.merge:invokeai_merge_diffusers"
 "invokeai-ti" = "invokeai.frontend.training:invokeai_textual_inversion"
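Because the console scripts are ordinary entry points, the node CLI and the web server can also be launched programmatically. A minimal sketch using the module paths declared above:

```
# what the `invokeai` and `invokeai-web` commands resolve to
from invokeai.app.cli_app import invoke_cli
from invokeai.app.api_app import invoke_api

if __name__ == "__main__":
    invoke_api()  # serves on the host/port configured in invokeai.yaml
```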
@@ -1,4 +0,0 @@
-banana sushi -Ak_lms -W640 -H480 -S42 -s20
-banana sushi -Ak_lms -S42 -G1 -U 2 0.5 -s20
-banana sushi -Ak_lms -S42 -v0.2 -n3 -s20
-banana sushi -Ak_lms -S42 -V1349749425:0.1,4145759947:0.1 -s20
tests/test_config.py (new file, 79 lines)
@@ -0,0 +1,79 @@
+import os
+import pytest
+
+from omegaconf import OmegaConf
+from pathlib import Path
+
+os.environ['INVOKEAI_ROOT'] = '/tmp'
+from invokeai.app.services.config import InvokeAIAppConfig, InvokeAISettings
+from invokeai.app.invocations.generate import TextToImageInvocation
+
+init1 = OmegaConf.create(
+    '''
+InvokeAI:
+  Features:
+    nsfw_checker: False
+  Memory/Performance:
+    max_loaded_models: 5
+'''
+)
+
+init2 = OmegaConf.create(
+    '''
+InvokeAI:
+  Features:
+    nsfw_checker: true
+  Memory/Performance:
+    max_loaded_models: 2
+'''
+)
+
+
+def test_use_init():
+    # note that we explicitly set omegaconf dict and argv here
+    # so that the values aren't read from ~invokeai/invokeai.yaml and
+    # sys.argv respectively.
+    conf1 = InvokeAIAppConfig(init1, [])
+    assert conf1
+    assert conf1.max_loaded_models == 5
+    assert not conf1.nsfw_checker
+
+    conf2 = InvokeAIAppConfig(init2, [])
+    assert conf2
+    assert conf2.nsfw_checker
+    assert conf2.max_loaded_models == 2
+    assert not hasattr(conf2, 'invalid_attribute')
+
+
+def test_argv_override():
+    conf = InvokeAIAppConfig(init1, ['--nsfw_checker', '--max_loaded=10'])
+    assert conf.nsfw_checker
+    assert conf.max_loaded_models == 10
+    assert conf.outdir == Path('outputs')  # this is the default
+
+
+def test_env_override():
+    # argv overrides
+    conf = InvokeAIAppConfig(conf=init1, argv=['--max_loaded=10'])
+    assert conf.nsfw_checker == False
+
+    os.environ['INVOKEAI_nsfw_checker'] = 'True'
+    conf = InvokeAIAppConfig(conf=init1, argv=['--max_loaded=10'])
+    assert conf.nsfw_checker == True
+
+    # environment variables should be case insensitive
+    os.environ['InvokeAI_Max_Loaded_Models'] = '15'
+    conf = InvokeAIAppConfig(conf=init1)
+    assert conf.max_loaded_models == 15
+
+    conf = InvokeAIAppConfig(conf=init1, argv=['--no-nsfw_checker', '--max_loaded=10'])
+    assert conf.nsfw_checker == False
+    assert conf.max_loaded_models == 10
+
+    conf = InvokeAIAppConfig(conf=init1, argv=[], max_loaded_models=20)
+    assert conf.max_loaded_models == 20
+
+
+def test_type_coercion():
+    conf = InvokeAIAppConfig(argv=['--root=/tmp/foobar'])
+    assert conf.root == Path('/tmp/foobar')
+    assert isinstance(conf.root, Path)
+    conf = InvokeAIAppConfig(argv=['--root=/tmp/foobar'], root='/tmp/different')
+    assert conf.root == Path('/tmp/different')
+    assert isinstance(conf.root, Path)
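A small usage sketch, not part of the PR, of the precedence order these tests exercise: an `INVOKEAI_<setting>` environment variable overrides the config file, and an explicit command-line flag wins over the environment:

```
import os
from invokeai.app.services.config import InvokeAIAppConfig

os.environ['INVOKEAI_max_loaded_models'] = '8'            # environment beats the config file
conf = InvokeAIAppConfig(argv=['--max_loaded_models=3'])  # command line beats the environment
assert conf.max_loaded_models == 3
```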
@@ -1,3 +1,3 @@
-banana sushi -Ak_lms -S42 -s5
-banana sushi -Ak_heun -S42 -s5
-banana sushi -Addim -S42 -s5
+t2i --positive_prompt 'banana sushi' --seed 42
+compel --prompt 'strawberry sushi' | compel | noise | t2l --scheduler heun --steps 3 --scheduler ddim --link -3 conditioning positive_conditioning --link -2 conditioning negative_conditioning | l2i
+compel --prompt 'banana sushi' | compel | noise | t2l --scheduler heun --steps 3 --scheduler euler_a --link -3 conditioning positive_conditioning --link -2 conditioning negative_conditioning