InvokeAI/scripts/preload_models.py


This commit separates the InvokeAI source code from end-user files.

- preload_models.py has been renamed load_models.py. I've left a legacy shell version with the previous name to avoid breaking any code.
- The load_models.py script now takes an optional --root argument, which points to an install directory for the models, scripts, config files, and the default outputs directory. In the future, the embeddings manager directory will also be stored here.
- If no --root is provided, and no init file or environment variable is present, load_models.py will install to '.' by default, which is the current behavior. (This has *not* been tested thoroughly.)
- The location of the root directory is stored in the file .invokeai in the user's home directory ($HOME on Linux/Mac, or HOMEPATH on Windows). The load_models.py script creates this file if it does not already exist.
- invoke.py and load_models.py use the following search path to find the install directory:
  1. Contents of the environment variable INVOKEAI_ROOT
  2. The --root=XXXXX option in ~/.invokeai
  3. The --root option passed on the script command line
  4. As a last gasp, the current working directory (".")

Running `python scripts/load_models.py --root ~/invokeai` will create a directory structured like this (shortened for clarity):

~/invokeai
├── configs
│   ├── models.yaml
│   └── stable-diffusion
│       ├── v1-finetune.yaml
│       ├── v1-finetune_style.yaml
│       ├── v1-inference.yaml
│       ├── v1-inpainting-inference.yaml
│       └── v1-m1-finetune.yaml
├── models
│   ├── CompVis
│   ├── bert-base-uncased
│   ├── clipseg
│   ├── codeformer
│   ├── gfpgan
│   ├── ldm
│   │   └── stable-diffusion-v1
│   │       ├── sd-v1-5-inpainting.ckpt
│   │       └── vae-ft-mse-840000-ema-pruned.ckpt
│   └── openai
├── outputs
└── scripts
    ├── dream.py
    ├── images2prompt.py
    ├── invoke.py
    ├── legacy_api.py
    ├── load_models.py
    ├── merge_embeddings.py
    ├── orig_scripts
    │   ├── download_first_stages.sh
    │   ├── train_searcher.py
    │   └── txt2img.py
    ├── preload_models.py
    └── sd-metadata.py

1. You can now run invoke.py anywhere! Just copy it to one of your bin directories, or put ~/invokeai/scripts onto your PATH.
2. git pulls will no longer fight with you over models.yaml.
3. It keeps end users out of the source code repo and will create a path for us to do installs from invokeai.tar.gz.
2022-11-15 17:59:00 +00:00
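The four-step search path described in the commit message can be sketched as a small resolver. This is a hypothetical illustration, not the actual InvokeAI implementation: the function name `find_root` and the `cli_root` parameter are invented here, while `INVOKEAI_ROOT` and the `~/.invokeai` init file with its `--root=` line come from the commit message above.

```python
import os
from pathlib import Path

def find_root(cli_root=None):
    """Resolve the install directory using the search order from the
    commit message: env var, then init file, then --root, then cwd."""
    # 1. Contents of the environment variable INVOKEAI_ROOT.
    env_root = os.environ.get('INVOKEAI_ROOT')
    if env_root:
        return Path(env_root).expanduser()
    # 2. A --root=XXXXX line stored in ~/.invokeai.
    init_file = Path.home() / '.invokeai'
    if init_file.exists():
        for line in init_file.read_text().splitlines():
            if line.startswith('--root='):
                return Path(line.split('=', 1)[1].strip()).expanduser()
    # 3. The --root option passed on the script command line.
    if cli_root:
        return Path(cli_root).expanduser()
    # 4. As a last gasp, the current working directory.
    return Path('.')
```

Note that under this ordering the environment variable wins even over an explicit command-line `--root`, which matches the 1-4 list in the commit message.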
#!/usr/bin/env python
# Copyright (c) 2022 Lincoln D. Stein (https://github.com/lstein)
# Before running stable-diffusion on an internet-isolated machine,
# run this script from one with internet connectivity. The
# two machines must share a common .cache directory.
import warnings
import load_models
if __name__ == '__main__':
    load_models.main()