diff --git a/docs/features/CHANGELOG.md b/docs/features/CHANGELOG.md
index bc5731fd7b..17cf06b4dd 100644
--- a/docs/features/CHANGELOG.md
+++ b/docs/features/CHANGELOG.md
@@ -1,52 +1,72 @@
---
-title: **Changelog**
+title: Changelog
---

## v1.13 (in process)

-- Supports a Google Colab notebook for a standalone server running on Google hardware [Arturo Mendivil](https://github.com/artmen1516)
-- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling [Kevin Gibbons](https://github.com/bakkot)
-- WebUI supports incremental display of in-progress images during generation [Kevin Gibbons](https://github.com/bakkot)
+- Supports a Google Colab notebook for a standalone server running on Google
+  hardware [Arturo Mendivil](https://github.com/artmen1516)
+- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling
+  [Kevin Gibbons](https://github.com/bakkot)
+- WebUI supports incremental display of in-progress images during generation
+  [Kevin Gibbons](https://github.com/bakkot)
- Output directory can be specified on the dream> command line.
-- The grid was displaying duplicated images when not enough images to fill the final row [Muhammad Usama](https://github.com/SMUsamaShah)
+- The grid was displaying duplicated images when there were not enough images to
+  fill the final row [Muhammad Usama](https://github.com/SMUsamaShah)
- Can specify --grid on dream.py command line as the default.
- Miscellaneous internal bug and stability fixes.

+---
+
## v1.12 (28 August 2022)

- Improved file handling, including ability to read prompts from standard input.
  (kudos to [Yunsaki](https://github.com/yunsaki)
-- The web server is now integrated with the dream.py script. Invoke by adding --web to
-  the dream.py command arguments.
+- The web server is now integrated with the dream.py script. Invoke by adding
+  --web to the dream.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESGAN are now automatically
  enabled if the GFPGAN directory is located as a sibling to Stable Diffusion.
-  VRAM requirements are modestly reduced. Thanks to both [Blessedcoolant](https://github.com/blessedcoolant) and
+  VRAM requirements are modestly reduced. Thanks to both
+  [Blessedcoolant](https://github.com/blessedcoolant) and
  [Oceanswave](https://github.com/oceanswave) for their work on this.
-- You can now swap samplers on the dream> command line. [Blessedcoolant](https://github.com/blessedcoolant)
+- You can now swap samplers on the dream> command line.
+  [Blessedcoolant](https://github.com/blessedcoolant)
+
+---

## v1.11 (26 August 2022)

-- NEW FEATURE: Support upscaling and face enhancement using the GFPGAN module. (kudos to [Oceanswave](https://github.com/Oceanswave)
-- You now can specify a seed of -1 to use the previous image's seed, -2 to use the seed for the image generated before that, etc.
-  Seed memory only extends back to the previous command, but will work on all images generated with the -n# switch.
+- NEW FEATURE: Support upscaling and face enhancement using the GFPGAN module.
+  (kudos to [Oceanswave](https://github.com/Oceanswave))
+- You now can specify a seed of -1 to use the previous image's seed, -2 to use
+  the seed for the image generated before that, etc. Seed memory only extends
+  back to the previous command, but will work on all images generated with the
+  -n# switch.
- Variant generation support temporarily disabled pending more general solution.
-- Created a feature branch named **yunsaki-morphing-dream** which adds experimental support for
-  iteratively modifying the prompt and its parameters. Please see[ Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86)
-  for a synopsis of how this works. Note that when this feature is eventually added to the main branch, it will may be modified
-  significantly.
+- Created a feature branch named **yunsaki-morphing-dream** which adds
+  experimental support for iteratively modifying the prompt and its parameters.
+  Please see
+  [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86) for a
+  synopsis of how this works. Note that when this feature is eventually added
+  to the main branch, it may be modified significantly.
+
+---

## v1.10 (25 August 2022)

-- A barebones but fully functional interactive web server for online generation of txt2img and img2img.
+- A barebones but fully functional interactive web server for online generation
+  of txt2img and img2img.

---

## v1.09 (24 August 2022)

- A new -v option allows you to generate multiple variants of an initial image
-  in img2img mode. (kudos to [Oceanswave](https://github.com/Oceanswave). [
-  See this discussion in the PR for examples and details on use](https://github.com/lstein/stable-diffusion/pull/71#issuecomment-1226700810))
-- Added ability to personalize text to image generation (kudos to [Oceanswave](https://github.com/Oceanswave) and [nicolai256](https://github.com/nicolai256))
+  in img2img mode. (kudos to [Oceanswave](https://github.com/Oceanswave);
+  [see this discussion in the PR for examples and details on use](https://github.com/lstein/stable-diffusion/pull/71#issuecomment-1226700810))
+- Added ability to personalize text to image generation (kudos to
+  [Oceanswave](https://github.com/Oceanswave) and
+  [nicolai256](https://github.com/nicolai256))
- Enabled all of the samplers from k_diffusion

---

@@ -64,34 +84,34 @@ title: **Changelog**

## v1.07 (23 August 2022)

-- Image filenames will now never fill gaps in the sequence, but will be assigned the
-  next higher name in the chosen directory. This ensures that the alphabetic and chronological
-  sort orders are the same.
+- Image filenames will now never fill gaps in the sequence, but will be assigned
+  the next higher name in the chosen directory. This ensures that the alphabetic
+  and chronological sort orders are the same.

---

## v1.06 (23 August 2022)

-- Added weighted prompt support contributed by [xraxra](https://github.com/xraxra)
-- Example of using weighted prompts to tweak a demonic figure contributed by [bmaltais](https://github.com/bmaltais)
+- Added weighted prompt support contributed by
+  [xraxra](https://github.com/xraxra)
+- Example of using weighted prompts to tweak a demonic figure contributed by
+  [bmaltais](https://github.com/bmaltais)

---

## v1.05 (22 August 2022 - after the drop)

-- Filenames now use the following formats:
-     000010.95183149.png -- Two files produced by the same command (e.g. -n2),
-     000010.26742632.png -- distinguished by a different seed.
-
-     000011.455191342.01.png -- Two files produced by the same command using
-     000011.455191342.02.png -- a batch size>1 (e.g. -b2). They have the same seed.
-
-     000011.4160627868.grid#1-4.png -- a grid of four images (-g); the whole grid can
-     be regenerated with the indicated key
+- Filenames now use the following formats:
+
+      000010.95183149.png -- Two files produced by the same command (e.g. -n2),
+      000010.26742632.png -- distinguished by a different seed.
+
+      000011.455191342.01.png -- Two files produced by the same command using
+      000011.455191342.02.png -- a batch size>1 (e.g. -b2). They have the same seed.
+
+      000011.4160627868.grid#1-4.png -- a grid of four images (-g); the whole
+      grid can be regenerated with the indicated key
- It should no longer be possible for one image to overwrite another
-- You can use the "cd" and "pwd" commands at the dream> prompt to set and retrieve
-  the path of the output directory.
+- You can use the "cd" and "pwd" commands at the dream> prompt to set and
+  retrieve the path of the output directory.

## v1.04 (22 August 2022 - after the drop)

@@ -101,19 +121,21 @@ title: **Changelog**

## v1.03 (22 August 2022)

-- The original txt2img and img2img scripts from the CompViz repository have been moved into
-  a subfolder named "orig_scripts", to reduce confusion.
+- The original txt2img and img2img scripts from the CompViz repository have been
+  moved into a subfolder named "orig_scripts", to reduce confusion.

## v1.02 (21 August 2022)

-- A copy of the prompt and all of its switches and options is now stored in the corresponding
-  image in a tEXt metadata field named "Dream". You can read the prompt using scripts/images2prompt.py,
-  or an image editor that allows you to explore the full metadata.
-  **Please run "conda env update -f environment.yaml" to load the k_lms dependencies!!**
+- A copy of the prompt and all of its switches and options is now stored in the
+  corresponding image in a tEXt metadata field named "Dream". You can read the
+  prompt using scripts/images2prompt.py, or an image editor that allows you to
+  explore the full metadata. **Please run "conda env update -f environment.yaml"
+  to load the k_lms dependencies!!**

## v1.01 (21 August 2022)

-- added k_lms sampling.
-  **Please run "conda env update -f environment.yaml" to load the k_lms dependencies!!**
-- use half precision arithmetic by default, resulting in faster execution and lower memory requirements
-  Pass argument --full_precision to dream.py to get slower but more accurate image generation
+- added k_lms sampling. **Please run "conda env update -f environment.yaml" to
+  load the k_lms dependencies!!**
+- use half precision arithmetic by default, resulting in faster execution and
+  lower memory requirements. Pass argument --full_precision to dream.py to get
+  slower but more accurate image generation
diff --git a/docs/features/CLI.md b/docs/features/CLI.md
index b44184a5af..c6013d9ff8 100644
--- a/docs/features/CLI.md
+++ b/docs/features/CLI.md
@@ -1,21 +1,27 @@
---
-title: "CLI"
+title: CLI
---

-**Interactive Command Line Interface**
+## **Interactive Command Line Interface**

-The `dream.py` script, located in `scripts/dream.py`, provides an interactive interface to image generation similar to the "dream mothership" bot that Stable AI provided on its Discord server.
+The `dream.py` script, located in `scripts/dream.py`, provides an interactive interface to image
+generation similar to the "dream mothership" bot that Stable AI provided on its Discord server.

-Unlike the txt2img.py and img2img.py scripts provided in the original CompViz/stable-diffusion source code repository, the time-consuming initialization of the AI model initialization only happens once. After that image generation
-from the command-line interface is very fast.
+Unlike the txt2img.py and img2img.py scripts provided in the original CompViz/stable-diffusion
+source code repository, the time-consuming initialization of the AI model only happens once. After
+that, image generation from the command-line interface is very fast.

-The script uses the readline library to allow for in-line editing, command history (up and down arrows), autocompletion, and more. To help keep track of which prompts generated which images, the script writes a log file of image names and prompts to the selected output directory.
+The script uses the readline library to allow for in-line editing, command history (up and down
+arrows), autocompletion, and more. To help keep track of which prompts generated which images, the
+script writes a log file of image names and prompts to the selected output directory.

-In addition, as of version 1.02, it also writes the prompt into the PNG file's metadata where it can be retrieved using scripts/images2prompt.py
+In addition, as of version 1.02, it also writes the prompt into the PNG file's metadata where it can
+be retrieved using scripts/images2prompt.py.

The script is confirmed to work on Linux, Windows and Mac systems.

-_Note:_ This script runs from the command-line or can be used as a Web application. The Web GUI is currently rudimentary, but a much better replacement is on its way.
+_Note:_ This script runs from the command-line or can be used as a Web application. The Web GUI is
+currently rudimentary, but a much better replacement is on its way.

```bash
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py
@@ -45,184 +51,174 @@ dream> q

-The `dream>` prompt's arguments are pretty much identical to those
-used in the Discord bot, except you don't need to type "!dream" (it
-doesn't hurt if you do). A significant change is that creation of
-individual images is now the default unless --grid (-g) is given. A
-full list is given in [List of prompt arguments]
-(#list-of-prompt-arguments).
+The `dream>` prompt's arguments are pretty much identical to those used in the Discord bot, except
+you don't need to type "!dream" (it doesn't hurt if you do). A significant change is that creation
+of individual images is now the default unless --grid (-g) is given. A full list is given in
+[List of prompt arguments](#list-of-prompt-arguments).

-# Arguments
+## Arguments

-The script itself also recognizes a series of command-line switches
-that will change important global defaults, such as the directory for
-image outputs and the location of the model weight files.
+The script itself also recognizes a series of command-line switches that will change important
+global defaults, such as the directory for image outputs and the location of the model weight files.

## List of arguments recognized at the command line

-These command-line arguments can be passed to dream.py when you first
-run it from the Windows, Mac or Linux command line. Some set defaults
-that can be overridden on a per-prompt basis (see [List of prompt
-arguments] (#list-of-prompt-arguments). Others
+These command-line arguments can be passed to dream.py when you first run it from the Windows, Mac
+or Linux command line. Some set defaults that can be overridden on a per-prompt basis (see
+[List of prompt arguments](#list-of-prompt-arguments)). Others

| Argument | Shortcut | Default | Description |
|--------------------|------------|---------------------|--------------|
| --help | -h | | Print a concise help message. |
| --outdir+ +
@@ -16,17 +18,14 @@ title: Home
---
-This is a fork of
-[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
-the open source text-to-image generator. It provides a streamlined
-process with various new features and options to aid the image
-generation process. It runs on Windows, Mac and Linux machines,
-and runs on GPU cards with as little as 4 GB or RAM.
+This is a fork of [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion), the open
+source text-to-image generator. It provides a streamlined process with various new features and
+options to aid image generation. It runs on Windows, Mac and Linux machines, and runs on GPU
+cards with as little as 4 GB of RAM.
_Note: This fork is rapidly evolving. Please use the
-[Issues](https://github.com/lstein/stable-diffusion/issues) tab to
-report bugs and make feature requests. Be sure to use the provided
-templates. They will help aid diagnose issues faster._
+[Issues](https://github.com/lstein/stable-diffusion/issues) tab to report bugs and make feature
+requests. Be sure to use the provided templates. They will help diagnose issues faster._
## **Table of Contents**
@@ -49,7 +48,8 @@ templates. They will help aid diagnose issues faster._
## Installation
-This fork is supported across multiple platforms. You can find individual installation instructions below.
+This fork is supported across multiple platforms. You can find individual installation instructions
+below.
- [Linux](./installation/INSTALL_LINUX.md)
- [Windows](./installation/INSTALL_WINDOWS.md)
@@ -74,13 +74,12 @@ You wil need one of the following:
### **Note**
-If you are have a Nvidia 10xx series card (e.g. the 1080ti), please
-run the dream script in full-precision mode as shown below.
+If you have an Nvidia 10xx series card (e.g. the 1080ti), please run the dream script in
+full-precision mode as shown below.
Similarly, specify full-precision mode on Apple M1 hardware.
-To run in full-precision mode, start `dream.py` with the
-`--full_precision` flag:
+To run in full-precision mode, start `dream.py` with the `--full_precision` flag:
```bash
(ldm) ~/stable-diffusion$ python scripts/dream.py --full_precision
@@ -113,20 +112,27 @@ To run in full-precision mode, start `dream.py` with the
## Latest Changes
- v1.14 (11 September 2022)
+
- Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
- Full support for Apple hardware with M1 or M2 chips.
- - Add "seamless mode" for circular tiling of image. Generates beautiful effects. ([prixt](https://github.com/prixt)).
+ - Add "seamless mode" for circular tiling of image. Generates beautiful effects.
+ ([prixt](https://github.com/prixt)).
- Inpainting support.
- Improved web server GUI.
- Lots of code and documentation cleanups.
- v1.13 (3 September 2022)
- - Support image variations (see [VARIATIONS](./features/VARIATIONS.md) ([Kevin Gibbons](https://github.com/bakkot) and many contributors and reviewers)
- - Supports a Google Colab notebook for a standalone server running on Google hardware [Arturo Mendivil](https://github.com/artmen1516)
- - WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling [Kevin Gibbons](https://github.com/bakkot)
- - WebUI supports incremental display of in-progress images during generation [Kevin Gibbons](https://github.com/bakkot)
- - A new configuration file scheme that allows new models (including upcoming stable-diffusion-v1.5)
- to be added without altering the code. ([David Wager](https://github.com/maddavid12))
+ - Support image variations (see [VARIATIONS](./features/VARIATIONS.md))
+ ([Kevin Gibbons](https://github.com/bakkot) and many contributors and reviewers)
+ - Supports a Google Colab notebook for a standalone server running on Google hardware
+ [Arturo Mendivil](https://github.com/artmen1516)
+ - WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling
+ [Kevin Gibbons](https://github.com/bakkot)
+ - WebUI supports incremental display of in-progress images during generation
+ [Kevin Gibbons](https://github.com/bakkot)
+ - A new configuration file scheme that allows new models (including upcoming
+ stable-diffusion-v1.5) to be added without altering the code.
+ ([David Wager](https://github.com/maddavid12))
- Can specify --grid on dream.py command line as the default.
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.
@@ -136,28 +142,36 @@ For older changelogs, please visit **[CHANGELOGS](./CHANGELOG.md)**.
## Troubleshooting
-Please check out our **[Q&A](./help/TROUBLESHOOT.md)** to get solutions for common installation problems and other issues.
+Please check out our **[Q&A](./help/TROUBLESHOOT.md)** to get solutions for common installation
+problems and other issues.
## Contributing
-Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with
-how to contribute to GitHub projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
+Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
+cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
+to contribute to GitHub projects, here is a
+[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
-A full set of contribution guidelines, along with templates, are in progress, but for now the most important thing is to **make your pull request against the "development" branch**, and not against "main". This will help keep public breakage to a minimum and will allow you to propose more radical changes.
+A full set of contribution guidelines, along with templates, are in progress, but for now the most
+important thing is to **make your pull request against the "development" branch**, and not against
+"main". This will help keep public breakage to a minimum and will allow you to propose more radical
+changes.
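+
+A minimal sketch of that workflow (assuming `origin` points at your fork or clone; the branch name
+is just an example):
+
+```bash
+git checkout -b my-fix origin/development # branch from "development", not "main"
+# ... commit your changes ...
+git push origin my-fix # then open the pull request against "development"
+```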
### **Contributors**
-This fork is a combined effort of various people from across the world. [Check out the list of all these amazing people](./CONTRIBUTORS.md). We thank them for their time, hard work and effort.
+This fork is a combined effort of various people from across the world.
+[Check out the list of all these amazing people](./CONTRIBUTORS.md). We thank them for their time,
+hard work and effort.
## Support
-For support,
-please use this repository's GitHub Issues tracking service. Feel free
-to send me an email if you use and like the script.
+For support, please use this repository's GitHub Issues tracking service. Feel free to send me an
+email if you use and like the script.
-Original portions of the software are Copyright (c) 2020 Lincoln D. Stein (https://github.com/lstein)
+Original portions of the software are Copyright (c) 2020
+[Lincoln D. Stein](https://github.com/lstein)
## Further Reading
-Please see the original README for more information on this software
-and underlying algorithm, located in the file [README-CompViz.md](./README-CompViz.md).
+Please see the original README for more information on this software and underlying algorithm,
+located in the file [README-CompViz.md](./README-CompViz.md).
diff --git a/docs/installation/INSTALL_LINUX.md b/docs/installation/INSTALL_LINUX.md
index b7a6cd8ff0..312ab60482 100644
--- a/docs/installation/INSTALL_LINUX.md
+++ b/docs/installation/INSTALL_LINUX.md
@@ -1,89 +1,110 @@
-# **Linux Installation**
+---
+title: Linux
+---
-1. You will need to install the following prerequisites if they are not already available. Use your operating system's preferred installer
+1. You will need to install the following prerequisites if they are not already
+ available. Use your operating system's preferred installer.
-- Python (version 3.8.5 recommended; higher may work)
-- git
+ - Python (version 3.8.5 recommended; higher may work)
+ - git
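+
+ For example, on Debian/Ubuntu systems this might look like the following (one
+ possibility; use whatever your distribution provides):
+
+ ```bash
+ sudo apt update && sudo apt install -y python3 git
+ ```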
2. Install the Python Anaconda environment manager.
-```
-~$ wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
-~$ chmod +x Anaconda3-2022.05-Linux-x86_64.sh
-~$ ./Anaconda3-2022.05-Linux-x86_64.sh
-```
+ ```bash
+ ~$ wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
+ ~$ chmod +x Anaconda3-2022.05-Linux-x86_64.sh
+ ~$ ./Anaconda3-2022.05-Linux-x86_64.sh
+ ```
-After installing anaconda, you should log out of your system and log back in. If the installation
-worked, your command prompt will be prefixed by the name of the current anaconda environment - `(base)`.
+ After installing anaconda, you should log out of your system and log back in. If
+ the installation worked, your command prompt will be prefixed by the name of the
+ current anaconda environment - `(base)`.
3. Copy the stable-diffusion source code from GitHub:
-```
-(base) ~$ git clone https://github.com/lstein/stable-diffusion.git
-```
+ ```bash
+ (base) ~$ git clone https://github.com/lstein/stable-diffusion.git
+ ```
-This will create stable-diffusion folder where you will follow the rest of the steps.
+ This will create the stable-diffusion folder where you will follow the rest of the
+ steps.
-4. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
+4. Enter the newly-created stable-diffusion folder. From this step forward make
+ sure that you are working in the stable-diffusion directory!
-```
-(base) ~$ cd stable-diffusion
-(base) ~/stable-diffusion$
-```
+ ```bash
+ (base) ~$ cd stable-diffusion
+ (base) ~/stable-diffusion$
+ ```
-5. Use anaconda to copy necessary python packages, create a new python environment named `ldm` and activate the environment.
+5. Use anaconda to copy necessary python packages, create a new python
+ environment named `ldm` and activate the environment.
-```
-(base) ~/stable-diffusion$ conda env create -f environment.yaml
-(base) ~/stable-diffusion$ conda activate ldm
-(ldm) ~/stable-diffusion$
-```
+ ```bash
+ (base) ~/stable-diffusion$ conda env create -f environment.yaml
+ (base) ~/stable-diffusion$ conda activate ldm
+ (ldm) ~/stable-diffusion$
+ ```
-After these steps, your command prompt will be prefixed by `(ldm)` as shown above.
+ After these steps, your command prompt will be prefixed by `(ldm)` as shown
+ above.
6. Load a couple of small machine-learning models required by stable diffusion:
-```
-(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
-```
+ ```bash
+ (ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
+ ```
-Note that this step is necessary because I modified the original just-in-time model loading scheme to allow the script to work on GPU machines that are not internet connected. See [Preload Models](../features/OTHER.md#preload-models)
+ Note that this step is necessary because I modified the original just-in-time
+ model loading scheme to allow the script to work on GPU machines that are not
+ internet connected. See [Preload Models](../features/OTHER.md#preload-models).
7. Now you need to install the weights for the stable diffusion model.
-- For running with the released weights, you will first need to set up an acount with Hugging Face (https://huggingface.co).
-- Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
-- You may be asked to sign a license agreement at this point.
-- Click on "Files and versions" near the top of the page, and then click on the file named "sd-v1-4.ckpt". You'll be taken to a page that prompts you to click the "download" link. Save the file somewhere safe on your local machine.
+ - For running with the released weights, you will first need to set up an account
+ with [Hugging Face](https://huggingface.co).
+ - Use your credentials to log in, and then point your browser [here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
+ - You may be asked to sign a license agreement at this point.
+ - Click on "Files and versions" near the top of the page, and then click on the
+ file named "sd-v1-4.ckpt". You'll be taken to a page that prompts you to click
+ the "download" link. Save the file somewhere safe on your local machine.
-Now run the following commands from within the stable-diffusion directory. This will create a symbolic link from the stable-diffusion model.ckpt file, to the true location of the sd-v1-4.ckpt file.
+ Now run the following commands from within the stable-diffusion directory.
+ This will create a symbolic link from the stable-diffusion model.ckpt file to
+ the true location of the `sd-v1-4.ckpt` file.
-```
-(ldm) ~/stable-diffusion$ mkdir -p models/ldm/stable-diffusion-v1
-(ldm) ~/stable-diffusion$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
-```
+ ```bash
+ (ldm) ~/stable-diffusion$ mkdir -p models/ldm/stable-diffusion-v1
+ (ldm) ~/stable-diffusion$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
+ ```
8. Start generating images!
-```
-# for the pre-release weights use the -l or --liaon400m switch
-(ldm) ~/stable-diffusion$ python3 scripts/dream.py -l
+ ```bash
+ # for the pre-release weights use the -l or --laion400m switch
+ (ldm) ~/stable-diffusion$ python3 scripts/dream.py -l
-# for the post-release weights do not use the switch
-(ldm) ~/stable-diffusion$ python3 scripts/dream.py
+ # for the post-release weights do not use the switch
+ (ldm) ~/stable-diffusion$ python3 scripts/dream.py
-# for additional configuration switches and arguments, use -h or --help
-(ldm) ~/stable-diffusion$ python3 scripts/dream.py -h
-```
+ # for additional configuration switches and arguments, use -h or --help
+ (ldm) ~/stable-diffusion$ python3 scripts/dream.py -h
+ ```
-9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the `stable-diffusion` directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple `ModuleNotFound` errors.
+9. Subsequently, to relaunch the script, be sure to run "conda activate ldm"
+ (step 5, second command), enter the `stable-diffusion` directory, and then
+ launch the dream script (step 8). If you forget to activate the ldm
+ environment, the script will fail with multiple `ModuleNotFound` errors.
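+
+ A typical relaunch therefore looks like this (a sketch; adjust the path to
+ wherever you cloned the repository):
+
+ ```bash
+ (base) ~$ conda activate ldm
+ (ldm) ~$ cd stable-diffusion
+ (ldm) ~/stable-diffusion$ python3 scripts/dream.py
+ ```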
-### Updating to newer versions of the script
+ ### Updating to newer versions of the script
-This distribution is changing rapidly. If you used the `git clone` method (step 5) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter `stable-diffusion` and type:
+ This distribution is changing rapidly. If you used the `git clone` method
+ (step 3) to download the stable-diffusion directory, then to update to the
+ latest and greatest version, launch the Anaconda window, enter
+ `stable-diffusion` and type:
-```
-(ldm) ~/stable-diffusion$ git pull
-```
+ ```bash
+ (ldm) ~/stable-diffusion$ git pull
+ ```
-This will bring your local copy into sync with the remote one.
+ This will bring your local copy into sync with the remote one.
diff --git a/docs/installation/INSTALL_MAC.md b/docs/installation/INSTALL_MAC.md
index c000e818bb..39398c36ac 100644
--- a/docs/installation/INSTALL_MAC.md
+++ b/docs/installation/INSTALL_MAC.md
@@ -1,37 +1,44 @@
-# **macOS Instructions**
+---
+title: macOS
+---
-Requirements
+## Requirements
- macOS 12.3 Monterey or later
- Python
- Patience
- Apple Silicon\*
-\*I haven't tested any of this on Intel Macs but I have read that one person got it to work, so Apple Silicon might not be requried.
+\*I haven't tested any of this on Intel Macs but I have read that one person got
+it to work, so Apple Silicon might not be required.
-Things have moved really fast and so these instructions change often
-and are often out-of-date. One of the problems is that there are so
-many different ways to run this.
+Things have moved really fast and so these instructions change often and are
+often out-of-date. One of the problems is that there are so many different ways
+to run this.
-We are trying to build a testing setup so that when we make changes it
-doesn't always break.
+We are trying to build a testing setup so that when we make changes it doesn't
+always break.
How to (this hasn't been 100% tested yet):
First get the weights checkpoint download started - it's big:
1. Sign up at https://huggingface.co
-2. Go to the [Stable diffusion diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
+2. Go to the
+ [Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
3. Accept the terms and click Access Repository:
-4. Download [sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt) and note where you have saved it (probably the Downloads folder)
+4. Download
+ [sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt)
+ and note where you have saved it (probably the Downloads folder)
-While that is downloading, open Terminal and run the following commands one at a time.
+ While that is downloading, open Terminal and run the following commands one
+ at a time.
```bash
# install brew (and Xcode command line tools):
+
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
-#
# Now there are two different routes to get the Python (miniconda) environment up and running:
# 1. Alongside pyenv
# 2. No pyenv
@@ -41,11 +48,11 @@ While that is downloading, open Terminal and run the following commands one at a
# NOW EITHER DO
# 1. Installing alongside pyenv
-brew install pyenv-virtualenv # you might have this from before, no problem
-pyenv install anaconda3-2022.05
-pyenv virtualenv anaconda3-2022.05
-eval "$(pyenv init -)"
-pyenv activate anaconda3-2022.05
+ brew install pyenv-virtualenv # you might have this from before, no problem
+ pyenv install anaconda3-2022.05
+ pyenv virtualenv anaconda3-2022.05
+ eval "$(pyenv init -)"
+ pyenv activate anaconda3-2022.05
# OR,
# 2. Installing standalone
@@ -53,31 +60,31 @@ pyenv activate anaconda3-2022.05
brew install cmake protobuf rust
# install miniconda (M1 arm64 version):
-curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o Miniconda3-latest-MacOSX-arm64.sh
-/bin/bash Miniconda3-latest-MacOSX-arm64.sh
+ curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o Miniconda3-latest-MacOSX-arm64.sh
+ /bin/bash Miniconda3-latest-MacOSX-arm64.sh
# EITHER WAY,
# continue from here
# clone the repo
-git clone https://github.com/lstein/stable-diffusion.git
-cd stable-diffusion
+ git clone https://github.com/lstein/stable-diffusion.git
+ cd stable-diffusion
#
# wait until the checkpoint file has downloaded, then proceed
#
# create symlink to checkpoint
-mkdir -p models/ldm/stable-diffusion-v1/
+ mkdir -p models/ldm/stable-diffusion-v1/
-PATH_TO_CKPT="$HOME/Downloads" # or wherever you saved sd-v1-4.ckpt
+ PATH_TO_CKPT="$HOME/Downloads" # or wherever you saved sd-v1-4.ckpt
-ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
+ ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
# install packages
-PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yaml
-conda activate ldm
+ PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yaml
+ conda activate ldm
# only need to do this once
python scripts/preload_models.py
@@ -88,117 +95,172 @@ python scripts/dream.py --full_precision # half-precision requires autocast and
The original scripts should work as well.
-```
+```bash
python scripts/orig_scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
```
-Note, `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
-create -f environment-mac.yaml` never finishing in some situations. So
-it isn't required but wont hurt.
+Note:
-After you follow all the instructions and run dream.py you might get several errors. Here's the errors I've seen and found solutions for.
+```bash
+export PIP_EXISTS_ACTION=w
+```
+
+is a precaution against
+
+```bash
+conda env create -f environment-mac.yaml
+```
+
+never finishing in some situations, so it isn't required but won't hurt.
+
+After you follow all the instructions and run dream.py you might get several
+errors. Here's the errors I've seen and found solutions for.
+
+---
### Is it slow?
Be sure to specify 1 sample and 1 iteration.
- python ./scripts/orig_scripts/txt2img.py --prompt "ocean" --ddim_steps 5 --n_samples 1 --n_iter 1
+```bash
+python ./scripts/orig_scripts/txt2img.py \
+ --prompt "ocean" \
+ --ddim_steps 5 \
+ --n_samples 1 \
+ --n_iter 1
+```
+
+---
### Doesn't work anymore?
-PyTorch nightly includes support for MPS. Because of this, this setup is inherently unstable. One morning I woke up and it no longer worked no matter what I did until I switched to miniforge. However, I have another Mac that works just fine with Anaconda. If you can't get it to work, please search a little first because many of the errors will get posted and solved. If you can't find a solution please [create an issue](https://github.com/lstein/stable-diffusion/issues).
+PyTorch nightly includes support for MPS. Because of this, this setup is
+inherently unstable. One morning I woke up and it no longer worked no matter
+what I did until I switched to miniforge. However, I have another Mac that works
+just fine with Anaconda. If you can't get it to work, please search a little
+first because many of the errors will get posted and solved. If you can't find a
+solution please
+[create an issue](https://github.com/lstein/stable-diffusion/issues).
One debugging step is to update to the latest version of PyTorch nightly.
- conda install pytorch torchvision torchaudio -c pytorch-nightly
+```bash
+conda install pytorch torchvision torchaudio -c pytorch-nightly
+```
-If `conda env create -f environment-mac.yaml` takes forever run this.
+If it takes forever to run
- git clean -f
+```bash
+conda env create -f environment-mac.yaml
+```
-And run this.
+you could try to run `git clean -f` followed by:
- conda clean --yes --all
+`conda clean --yes --all`
-Or you could reset Anaconda.
+Or you could try to completley reset Anaconda:
- conda update --force-reinstall -y -n base -c defaults conda
+```bash
+conda update --force-reinstall -y -n base -c defaults conda
+```
-### "No module named cv2", torch, 'ldm', 'transformers', 'taming', etc.
+---
+
+### "No module named cv2", torch, 'ldm', 'transformers', 'taming', etc
There are several causes of these errors.
-First, did you remember to `conda activate ldm`? If your terminal prompt
-begins with "(ldm)" then you activated it. If it begins with "(base)"
-or something else you haven't.
+- First, did you remember to `conda activate ldm`? If your terminal prompt
+ begins with "(ldm)" then you activated it. If it begins with "(base)" or
+ something else you haven't.
-Second, you might've run `./scripts/preload_models.py` or `./scripts/dream.py`
-instead of `python ./scripts/preload_models.py` or `python ./scripts/dream.py`.
-The cause of this error is long so it's below.
+- Second, you might've run `./scripts/preload_models.py` or `./scripts/dream.py`
+ instead of `python ./scripts/preload_models.py` or
+ `python ./scripts/dream.py`. The cause of this error is long so it's below.
-Third, if it says you're missing taming you need to rebuild your virtual
-environment.
+- Third, if it says you're missing taming you need to rebuild your virtual
+ environment.
- conda env remove -n ldm
- conda env create -f environment-mac.yaml
+`conda env remove -n ldm` followed by `conda env create -f environment-mac.yaml`
-Fourth, If you have activated the ldm virtual environment and tried rebuilding it, maybe the problem could be that I have something installed that you don't and you'll just need to manually install it. Make sure you activate the virtual environment so it installs there instead of
-globally.
+Fourth, If you have activated the ldm virtual environment and tried rebuilding
+it, maybe the problem could be that I have something installed that you don't
+and you'll just need to manually install it. Make sure you activate the virtual
+environment so it installs there instead of globally.
- conda activate ldm
- pip install *name*
+`conda activate ldm` and then `pip install _name_`
You might also need to install Rust (I mention this again below).
+---
+
### How many snakes are living in your computer?
You might have multiple Python installations on your system, in which case it's
-important to be explicit and consistent about which one to use for a given project.
-This is because virtual environments are coupled to the Python that created it (and all
-the associated 'system-level' modules).
+important to be explicit and consistent about which one to use for a given
+project. This is because virtual environments are coupled to the Python that
+created it (and all the associated 'system-level' modules).
-When you run `python` or `python3`, your shell searches the colon-delimited locations
-in the `PATH` environment variable (`echo $PATH` to see that list) in that order - first match wins.
-You can ask for the location of the first `python3` found in your `PATH` with the `which` command like this:
+When you run `python` or `python3`, your shell searches the colon-delimited
+locations in the `PATH` environment variable (`echo $PATH` to see that list) in
+that order - first match wins. You can ask for the location of the first
+`python3` found in your `PATH` with the `which` command like this:
- % which python3
- /usr/bin/python3
+```bash
+% which python3
+/usr/bin/python3
+```
-Anything in `/usr/bin` is [part of the OS](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html#//apple_ref/doc/uid/TP40010672-CH2-SW6). However, `/usr/bin/python3` is not actually python3, but
-rather a stub that offers to install Xcode (which includes python 3). If you have Xcode installed already,
-`/usr/bin/python3` will execute `/Library/Developer/CommandLineTools/usr/bin/python3` or
+Anything in `/usr/bin` is
+[part of the OS](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html#//apple_ref/doc/uid/TP40010672-CH2-SW6).
+However, `/usr/bin/python3` is not actually python3, but rather a stub that
+offers to install Xcode (which includes python 3). If you have Xcode installed
+already, `/usr/bin/python3` will execute
+`/Library/Developer/CommandLineTools/usr/bin/python3` or
`/Applications/Xcode.app/Contents/Developer/usr/bin/python3` (depending on which
Xcode you've selected with `xcode-select`).
-Note that `/usr/bin/python` is an entirely different python - specifically, python 2. Note: starting in
-macOS 12.3, `/usr/bin/python` no longer exists.
+Note that `/usr/bin/python` is an entirely different python - specifically,
+python 2. Note: starting in macOS 12.3, `/usr/bin/python` no longer exists.
- % which python3
- /opt/homebrew/bin/python3
+```bash
+% which python3
+/opt/homebrew/bin/python3
+```
If you installed python3 with Homebrew and you've modified your path to search
for Homebrew binaries before system ones, you'll see the above path.
- % which python
- /opt/anaconda3/bin/python
+```bash
+% which python
+/opt/anaconda3/bin/python
+```
If you have Anaconda installed, you will see the above path. There is a
-`/opt/anaconda3/bin/python3` also. We expect that `/opt/anaconda3/bin/python`
-and `/opt/anaconda3/bin/python3` should actually be the *same python*, which you can
-verify by comparing the output of `python3 -V` and `python -V`.
+`/opt/anaconda3/bin/python3` also.
- (ldm) % which python
- /Users/name/miniforge3/envs/ldm/bin/python
+We expect that `/opt/anaconda3/bin/python` and `/opt/anaconda3/bin/python3`
+should actually be the _same python_, which you can verify by comparing the
+output of `python3 -V` and `python -V`.
-The above is what you'll see if you have miniforge and you've correctly activated
-the ldm environment, and you used option 2 in the setup instructions above ("no pyenv").
+```bash
+(ldm) % which python
+/Users/name/miniforge3/envs/ldm/bin/python
+```
+
+The above is what you'll see if you have miniforge and you've correctly
+activated the ldm environment, and you used option 2 in the setup instructions
+above ("no pyenv").
+
+```bash
+(anaconda3-2022.05) % which python
+/Users/name/.pyenv/shims/python
+```
- (anaconda3-2022.05) % which python
- /Users/name/.pyenv/shims/python
-
... and the above is what you'll see if you used option 1 ("Alongside pyenv").
-It's all a mess and you should know [how to modify the path environment variable](https://support.apple.com/guide/terminal/use-environment-variables-apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac)
+It's all a mess and you should know
+[how to modify the path environment variable](https://support.apple.com/guide/terminal/use-environment-variables-apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac)
if you want to fix it. Here's a brief hint of all the ways you can modify it
(don't really have the time to explain it all here).
@@ -211,18 +273,19 @@ if you want to fix it. Here's a brief hint of all the ways you can modify it
Which one you use will depend on what you have installed except putting a file
in /etc/paths.d is what I prefer to do.
-Finally, to answer the question posed by this section's title, it may help to list
-all of the `python` / `python3` things found in `$PATH` instead of just the one that
-will be executed by default. To do that, add the `-a` switch to `which`:
+Finally, to answer the question posed by this section's title, it may help to
+list all of the `python` / `python3` things found in `$PATH` instead of just the
+one that will be executed by default. To do that, add the `-a` switch to
+`which`:
% which -a python3
...
### Debugging?
-Tired of waiting for your renders to finish before you can see if it
-works? Reduce the steps! The image quality will be horrible but at least you'll
-get quick feedback.
+Tired of waiting for your renders to finish before you can see if it works?
+Reduce the steps! The image quality will be horrible but at least you'll get
+quick feedback.
python ./scripts/txt2img.py --prompt "ocean" --ddim_steps 5 --n_samples 1 --n_iter 1
@@ -235,15 +298,24 @@ get quick feedback.
Example error.
```
-...
-NotImplementedError: The operator 'aten::_index_put_impl_' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on [https://github.com/pytorch/pytorch/issues/77764](https://github.com/pytorch/pytorch/issues/77764). As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
+
+... NotImplementedError: The operator 'aten::_index_put_impl_' is not current
+implemented for the MPS device. If you want this op to be added in priority
+during the prototype phase of this feature, please comment on
+https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can
+set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a
+fallback for this op. WARNING: this will be slower than running natively on MPS.
+
```
-The lstein branch includes this fix in [environment-mac.yaml](https://github.com/lstein/stable-diffusion/blob/main/environment-mac.yaml).
+The lstein branch includes this fix in
+[environment-mac.yaml](https://github.com/lstein/stable-diffusion/blob/main/environment-mac.yaml).
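+
+If you want to try the temporary workaround the message suggests before
+updating, you can set the variable for a single run (a sketch; expect it to be
+slower, as the warning says):
+
+```bash
+PYTORCH_ENABLE_MPS_FALLBACK=1 python scripts/dream.py --full_precision
+```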
### "Could not build wheels for tokenizers"
-I have not seen this error because I had Rust installed on my computer before I started playing with Stable Diffusion. The fix is to install Rust.
+I have not seen this error because I had Rust installed on my computer before I
+started playing with Stable Diffusion. The fix is to install Rust.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
@@ -251,10 +323,9 @@ I have not seen this error because I had Rust installed on my computer before I
First this:
-> Completely reproducible results are not guaranteed across PyTorch
-> releases, individual commits, or different platforms. Furthermore,
-> results may not be reproducible between CPU and GPU executions, even
-> when using identical seeds.
+> Completely reproducible results are not guaranteed across PyTorch releases,
+> individual commits, or different platforms. Furthermore, results may not be
+> reproducible between CPU and GPU executions, even when using identical seeds.
[PyTorch docs](https://pytorch.org/docs/stable/notes/randomness.html)
@@ -265,53 +336,56 @@ still working on it.
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
-You are likely using an Intel package by mistake. Be sure to run conda with
-the environment variable `CONDA_SUBDIR=osx-arm64`, like so:
+You are likely using an Intel package by mistake. Be sure to run conda with the
+environment variable `CONDA_SUBDIR=osx-arm64`, like so:
`CONDA_SUBDIR=osx-arm64 conda install ...`
-This error happens with Anaconda on Macs when the Intel-only `mkl` is pulled in by
-a dependency. [nomkl](https://stackoverflow.com/questions/66224879/what-is-the-nomkl-python-package-used-for)
+This error happens with Anaconda on Macs when the Intel-only `mkl` is pulled in
+by a dependency.
+[nomkl](https://stackoverflow.com/questions/66224879/what-is-the-nomkl-python-package-used-for)
is a metapackage designed to prevent this, by making it impossible to install
`mkl`, but if your environment is already broken it may not work.
Do _not_ use `os.environ['KMP_DUPLICATE_LIB_OK']='True'` or equivalents as this
masks the underlying issue of using Intel packages.
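+
+If your `ldm` environment is already broken this way, one option (a sketch that
+simply rebuilds it with the override in place) is:
+
+```bash
+conda env remove -n ldm
+PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yaml
+```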
-### Not enough memory.
+### Not enough memory
-This seems to be a common problem and is probably the underlying
-problem for a lot of symptoms (listed below). The fix is to lower your
-image size or to add `model.half()` right after the model is loaded. I
-should probably test it out. I've read that the reason this fixes
-problems is because it converts the model from 32-bit to 16-bit and
-that leaves more RAM for other things. I have no idea how that would
-affect the quality of the images though.
+This seems to be a common problem and is probably the underlying problem for a
+lot of symptoms (listed below). The fix is to lower your image size or to add
+`model.half()` right after the model is loaded. I should probably test it out.
+I've read that the reason this fixes problems is because it converts the model
+from 32-bit to 16-bit and that leaves more RAM for other things. I have no idea
+how that would affect the quality of the images though.
See [this issue](https://github.com/CompVis/stable-diffusion/issues/71).
### "Error: product of dimension sizes > 2\*\*31'"
-This error happens with img2img, which I haven't played with too much
-yet. But I know it's because your image is too big or the resolution
-isn't a multiple of 32x32. Because the stable-diffusion model was
-trained on images that were 512 x 512, it's always best to use that
-output size (which is the default). However, if you're using that size
-and you get the above error, try 256 x 256 or 512 x 256 or something
-as the source image.
+This error happens with img2img, which I haven't played with too much yet. But I
+know it's because your image is too big or the resolution isn't a multiple of
+32x32. Because the stable-diffusion model was trained on images that were 512 x
+512, it's always best to use that output size (which is the default). However,
+if you're using that size and you get the above error, try 256 x 256 or 512 x
+256 or something as the source image.
-BTW, 2\*\*31-1 = [2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which is also 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types) in C.
+BTW, 2\*\*31-1 =
+[2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which
+is also 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types) in
+C.
### I just got Rickrolled! Do I have a virus?
You don't have a virus. It's part of the project. Here's
[Rick](https://github.com/lstein/stable-diffusion/blob/main/assets/rick.jpeg)
-and here's [the
-code](https://github.com/lstein/stable-diffusion/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
-that swaps him in. It's a NSFW filter, which IMO, doesn't work very
-good (and we call this "computer vision", sheesh).
+and here's
+[the code](https://github.com/lstein/stable-diffusion/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
+that swaps him in. It's an NSFW filter which, IMO, doesn't work very well (and
+we call this "computer vision", sheesh).
-Actually, this could be happening because there's not enough RAM. You could try the `model.half()` suggestion or specify smaller output images.
+Actually, this could be happening because there's not enough RAM. You could try
+the `model.half()` suggestion or specify smaller output images.
### My images come out black
@@ -319,31 +393,29 @@ We might have this fixed, we are still testing.
There's a [similar issue](https://github.com/CompVis/stable-diffusion/issues/69)
on CUDA GPU's where the images come out green. Maybe it's the same issue?
-Someone in that issue says to use "--precision full", but this fork
-actually disables that flag. I don't know why, someone else provided
-that code and I don't know what it does. Maybe the `model.half()`
-suggestion above would fix this issue too. I should probably test it.
+Someone in that issue says to use "--precision full", but this fork actually
+disables that flag. I don't know why, someone else provided that code and I
+don't know what it does. Maybe the `model.half()` suggestion above would fix
+this issue too. I should probably test it.
### "view size is not compatible with input tensor's size and stride"
-```
- File "/opt/anaconda3/envs/ldm/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
- return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
+```bash
+  File "/opt/anaconda3/envs/ldm/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
+    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
-Update to the latest version of lstein/stable-diffusion. We were
-patching pytorch but we found a file in stable-diffusion that we could
-change instead. This is a 32-bit vs 16-bit problem.
+Update to the latest version of lstein/stable-diffusion. We were patching
+pytorch but we found a file in stable-diffusion that we could change instead.
+This is a 32-bit vs 16-bit problem.
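+
+To update, run the following from inside your stable-diffusion checkout (the
+same update step the Linux instructions use):
+
+```bash
+(ldm) ~/stable-diffusion$ git pull
+```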
+
+---
### The processor must support the Intel bla bla bla
What? Intel? On an Apple Silicon?
-
- Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library.
- The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions.
- The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions.
- The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
+```bash
+Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library.
+The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions.
+The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions.
+The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
+```
This is due to the Intel `mkl` package getting picked up when you try to install
something that depends on it-- Rosetta can translate some Intel instructions but
@@ -351,11 +423,13 @@ not the specialized ones here. To avoid this, make sure to use the environment
variable `CONDA_SUBDIR=osx-arm64`, which restricts the Conda environment to only
use ARM packages, and use `nomkl` as described above.
+---
+
### input types 'tensor<2x1280xf32>' and 'tensor<\*xf16>' are not broadcast compatible
May appear when just starting to generate, e.g.:
-```
+```bash
dream> clouds
Generating: 0%| | 0/1 [00:00, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
placeholder_idx = torch.where(
@@ -366,4 +440,5 @@ Abort trap: 6
warnings.warn('resource_tracker: There appear to be %d '
```
-Macs do not support autocast/mixed-precision. Supply `--full_precision` to use float32 everywhere.
+Macs do not support autocast/mixed-precision, so you need to supply
+`--full_precision` to use float32 everywhere.
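+
+For example (assuming the `ldm` environment is active):
+
+```bash
+(ldm) ~/stable-diffusion$ python scripts/dream.py --full_precision
+```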
diff --git a/docs/installation/INSTALL_WINDOWS.md b/docs/installation/INSTALL_WINDOWS.md
index 238988a15a..8119449717 100644
--- a/docs/installation/INSTALL_WINDOWS.md
+++ b/docs/installation/INSTALL_WINDOWS.md
@@ -1,110 +1,135 @@
-# **Windows Installation**
+---
+title: Windows
+---
## **Notebook install (semi-automated)**
-We have a [Jupyter
-notebook](https://github.com/lstein/stable-diffusion/blob/main/notebooks/Stable-Diffusion-local-Windows.ipynb)
-with cell-by-cell installation steps. It will download the code in
-this repo as one of the steps, so instead of cloning this repo, simply
-download the notebook from the link above and load it up in VSCode
-(with the appropriate extensions installed)/Jupyter/JupyterLab and
-start running the cells one-by-one.
+We have a
+[Jupyter notebook](https://github.com/lstein/stable-diffusion/blob/main/notebooks/Stable-Diffusion-local-Windows.ipynb)
+with cell-by-cell installation steps. It will download the code in this repo as
+one of the steps, so instead of cloning this repo, simply download the notebook
+from the link above and load it up in VSCode (with the appropriate extensions
+installed)/Jupyter/JupyterLab and start running the cells one-by-one.
Note that you will need NVIDIA drivers, Python 3.10, and Git installed
-beforehand - simplified [step-by-step
-instructions](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install)
+beforehand - simplified
+[step-by-step instructions](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install)
are available in the wiki (you'll only need steps 1, 2, & 3).
## **Manual Install**
### **pip**
-See [Easy-peasy Windows install](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install)
+See
+[Easy-peasy Windows install](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install)
in the wiki.
+---
+
### **Conda**
-1. Install Anaconda3 (miniconda3 version) from here: https://docs.anaconda.com/anaconda/install/windows/
+1. Install Anaconda3 (miniconda3 version) from here:
+ https://docs.anaconda.com/anaconda/install/windows/
2. Install Git from here: https://git-scm.com/download/win
-3. Launch Anaconda from the Windows Start menu. This will bring up a command window. Type all the remaining commands in this window.
+3. Launch Anaconda from the Windows Start menu. This will bring up a command
+ window. Type all the remaining commands in this window.
4. Run the command:
-```
-git clone https://github.com/lstein/stable-diffusion.git
-```
+ ```bash
+ git clone https://github.com/lstein/stable-diffusion.git
+ ```
-This will create stable-diffusion folder where you will follow the rest of the steps.
+ This will create the stable-diffusion folder, where you will follow the rest
+ of the steps.
-5. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
+5. Enter the newly-created stable-diffusion folder. From this step forward, make
+ sure that you are working in the stable-diffusion directory!
-```
-cd stable-diffusion
-```
+ ```bash
+ cd stable-diffusion
+ ```
6. Run the following two commands:
-```
-conda env create -f environment.yaml (step 6a)
-conda activate ldm (step 6b)
-```
+ ```bash
+ conda env create -f environment.yaml  # step 6a
+ conda activate ldm                    # step 6b
+ ```
-This will install all python requirements and activate the "ldm"
-environment which sets PATH and other environment variables properly.
+ This will install all Python requirements and activate the "ldm" environment,
+ which sets PATH and other environment variables properly.
7. Run the command:
-```
-python scripts\preload_models.py
-```
+ ```bash
+ python scripts\preload_models.py
+ ```
-This installs several machine learning models that stable diffusion requires.
+ This installs several machine learning models that Stable Diffusion requires.
-Note: This step is required. This was done because some users may might be blocked by firewalls or have limited internet connectivity for the models to be downloaded just-in-time.
+ Note: This step is required. It is done up front because some users may be
+ blocked by firewalls or have limited internet connectivity, which would keep
+ the models from being downloaded just-in-time.
8. Now you need to install the weights for the big Stable Diffusion model.
-- For running with the released weights, you will first need to set up an acount with Hugging Face (https://huggingface.co).
-- Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
-- You may be asked to sign a license agreement at this point.
-- Click on "Files and versions" near the top of the page, and then click on the file named `sd-v1-4.ckpt`. You'll be taken to a page that
- prompts you to click the "download" link. Now save the file somewhere safe on your local machine.
-- The weight file is >4 GB in size, so
- downloading may take a while.
+ - For running with the released weights, you will first need to set up an
+ account with Hugging Face (https://huggingface.co).
+ - Use your credentials to log in, and then point your browser at
+ https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
+ - You may be asked to sign a license agreement at this point.
+ - Click on "Files and versions" near the top of the page, and then click on
+ the file named `sd-v1-4.ckpt`. You'll be taken to a page that prompts you
+ to click the "download" link. Now save the file somewhere safe on your
+ local machine.
+ - The weight file is >4 GB in size, so downloading may take a while.
-Now run the following commands from **within the stable-diffusion directory** to copy the weights file to the right place:
+ Now run the following commands from **within the stable-diffusion directory**
+ to copy the weights file to the right place:
-```
-mkdir -p models\ldm\stable-diffusion-v1
-copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
-```
+ ```bash
+ mkdir -p models\ldm\stable-diffusion-v1
+ copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
+ ```
-Please replace `C:\path\to\sd-v1.4.ckpt` with the correct path to wherever you stashed this file. If you prefer not to copy or move the .ckpt file,
-you may instead create a shortcut to it from within `models\ldm\stable-diffusion-v1\`.
+ Please replace `C:\path\to\sd-v1-4.ckpt` with the correct path to wherever
+ you stashed this file. If you prefer not to copy or move the .ckpt file, you
+ may instead create a symbolic link to it from within
+ `models\ldm\stable-diffusion-v1\`, as sketched below.
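+
+ One way to do that is with `mklink`, a built-in `cmd` command (a sketch;
+ `mklink` needs Administrator rights or Developer Mode, and the source path
+ below is illustrative):
+
+ ```bash
+ mklink models\ldm\stable-diffusion-v1\model.ckpt C:\path\to\sd-v1-4.ckpt
+ ```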
9. Start generating images!
-```
-# for the pre-release weights
-python scripts\dream.py -l
+ ```bash
+ # for the pre-release weights
+ python scripts\dream.py -l
-# for the post-release weights
-python scripts\dream.py
-```
+ # for the post-release weights
+ python scripts\dream.py
+ ```
-10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3),enter the stable-diffusion directory (step 5, `cd \path\to\stable-diffusion`), run `conda activate ldm` (step 6b), and then launch the dream script (step 9).
+10. Subsequently, to relaunch the script, first activate the Anaconda command
+ window (step 3), enter the stable-diffusion directory (step 5,
+ `cd \path\to\stable-diffusion`), run `conda activate ldm` (step 6b), and
+ then launch the dream script (step 9), as in the sketch below.
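+
+ For example, a typical relaunch sequence looks like this (a sketch; replace
+ the path with wherever you cloned the repo):
+
+ ```bash
+ cd \path\to\stable-diffusion
+ conda activate ldm
+ python scripts\dream.py
+ ```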
-**Note:** Tildebyte has written an alternative ["Easy peasy Windows
-install"](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install)
-which uses the Windows Powershell and pew. If you are having trouble with Anaconda on Windows, give this a try (or try it first!)
+ **Note:** Tildebyte has written an alternative
+ ["Easy peasy Windows install"](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install)
+ which uses Windows PowerShell and pew. If you are having trouble with
+ Anaconda on Windows, give this a try (or try it first!).
+
+---
### Updating to newer versions of the script
-This distribution is changing rapidly. If you used the `git clone` method (step 5) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter `stable-diffusion`, and type:
+This distribution is changing rapidly. If you used the `git clone` method
+(step 5) to download the stable-diffusion directory, then to update to the
+latest and greatest version, launch the Anaconda window, enter
+`stable-diffusion`, and type:
-```
+```bash
git pull
conda env update -f environment.yaml
```
diff --git a/docs/other/CONTRIBUTORS.md b/docs/other/CONTRIBUTORS.md
index a6410aa018..36b12f4064 100644
--- a/docs/other/CONTRIBUTORS.md
+++ b/docs/other/CONTRIBUTORS.md
@@ -1,5 +1,5 @@
---
-title: **Contributors**
+title: Contributors
---
The list of all the amazing people who have contributed to the various features that you get to experience in this fork.
diff --git a/docs/other/README-CompViz.md b/docs/other/README-CompViz.md
index a4eac77e60..395612092f 100644
--- a/docs/other/README-CompViz.md
+++ b/docs/other/README-CompViz.md
@@ -2,9 +2,11 @@
title: CompViz-Readme
---
-# *README from [CompViz/stable-diffusion](https://github.com/CompVis/stable-diffusion)*
+# _README from [CompViz/stable-diffusion](https://github.com/CompVis/stable-diffusion)_
-_Stable Diffusion was made possible thanks to a collaboration with [Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and builds upon our previous work:_
+_Stable Diffusion was made possible thanks to a collaboration with
+[Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and
+builds upon our previous work:_
[**High-Resolution Image Synthesis with Latent Diffusion Models**](https://ommer-lab.com/research/latent-diffusion-models/)
[Robin Rombach](https://github.com/rromb)\*,
@@ -15,28 +17,36 @@ _Stable Diffusion was made possible thanks to a collaboration with [Stability AI
## **CVPR '22 Oral**
-which is available on [GitHub](https://github.com/CompVis/latent-diffusion). PDF at [arXiv](https://arxiv.org/abs/2112.10752). Please also visit our [Project page](https://ommer-lab.com/research/latent-diffusion-models/).
+which is available on [GitHub](https://github.com/CompVis/latent-diffusion). PDF
+at [arXiv](https://arxiv.org/abs/2112.10752). Please also visit our
+[Project page](https://ommer-lab.com/research/latent-diffusion-models/).
![txt2img-stable2](../assets/stable-samples/txt2img/merged-0006.png)
[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion
-model.
-Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database.
-Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487),
-this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.
-With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.
-See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion).
+model. Thanks to a generous compute donation from
+[Stability AI](https://stability.ai/) and support from
+[LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on
+512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/)
+database. Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487), this
+model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text
+prompts. With its 860M UNet and 123M text encoder, the model is relatively
+lightweight and runs on a GPU with at least 10GB VRAM. See
+[this section](#stable-diffusion-v1) below and the
+[model card](https://huggingface.co/CompVis/stable-diffusion).
## Requirements
-A suitable [conda](https://conda.io/) environment named `ldm` can be created
-and activated with:
+A suitable [conda](https://conda.io/) environment named `ldm` can be created and
+activated with:
```bash
conda env create -f environment.yaml
conda activate ldm
```
-You can also update an existing [latent diffusion](https://github.com/CompVis/latent-diffusion) environment by running
+You can also update an existing
+[latent diffusion](https://github.com/CompVis/latent-diffusion) environment by
+running
```bash
conda install pytorch torchvision -c pytorch
@@ -46,42 +56,57 @@ pip install -e .
## Stable Diffusion v1
-Stable Diffusion v1 refers to a specific configuration of the model
-architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet
-and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and
-then finetuned on 512x512 images.
+Stable Diffusion v1 refers to a specific configuration of the model architecture
+that uses a downsampling-factor 8 autoencoder with an 860M UNet and CLIP
+ViT-L/14 text encoder for the diffusion model. The model was pretrained on
+256x256 images and then finetuned on 512x512 images.
-\*Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present
-in its training data.
-Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding [model card](https://huggingface.co/CompVis/stable-diffusion).
-Research into the safe deployment of general text-to-image models is an ongoing effort. To prevent misuse and harm, we currently provide access to the checkpoints only for [academic research purposes upon request](https://stability.ai/academia-access-form).
-**This is an experiment in safe and community-driven publication of a capable and general text-to-image model. We are working on a public release with a more permissive license that also incorporates ethical considerations.\***
+\*Note: Stable Diffusion v1 is a general text-to-image diffusion model and
+therefore mirrors biases and (mis-)conceptions that are present in its training
+data. Details on the training procedure and data, as well as the intended use of
+the model can be found in the corresponding
+[model card](https://huggingface.co/CompVis/stable-diffusion). Research into the
+safe deployment of general text-to-image models is an ongoing effort. To prevent
+misuse and harm, we currently provide access to the checkpoints only for
+[academic research purposes upon request](https://stability.ai/academia-access-form).
+**This is an experiment in safe and community-driven publication of a capable
+and general text-to-image model. We are working on a public release with a more
+permissive license that also incorporates ethical considerations.\***
[Request access to Stable Diffusion v1 checkpoints for academic research](https://stability.ai/academia-access-form)
### Weights
-We currently provide three checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and `sd-v1-3.ckpt`,
-which were trained as follows,
+We currently provide three checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and
+`sd-v1-3.ckpt`, which were trained as follows:
-- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
- 194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
-- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`.
- 515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
- filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
-- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
+- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on
+ [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194k steps at
+ resolution `512x512` on
+ [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution)
+ (170M examples from LAION-5B with resolution `>= 1024x1024`).
+- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`. 515k steps at resolution
+ `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to
+ images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`,
+ and an estimated watermark probability `< 0.5`. The watermark estimate is from
+ the LAION-5B metadata, the aesthetics score is estimated using an
+ [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
+- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution
+ `512x512` on "laion-improved-aesthetics" and 10\% dropping of the
+ text-conditioning to improve
+ [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
-5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
-steps show the relative improvements of the checkpoints:
-![sd evaluation results](../assets/v1-variants-scores.jpg)
+5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of
+the checkpoints: ![sd evaluation results](../assets/v1-variants-scores.jpg)
### Text-to-Image with Stable Diffusion
![txt2img-stable2](../assets/stable-samples/txt2img/merged-0005.png)
![txt2img-stable2](../assets/stable-samples/txt2img/merged-0007.png)
-Stable Diffusion is a latent diffusion model conditioned on the (non-pooled) text embeddings of a CLIP ViT-L/14 text encoder.
+Stable Diffusion is a latent diffusion model conditioned on the (non-pooled)
+text embeddings of a CLIP ViT-L/14 text encoder.
#### Sampling Script
@@ -98,8 +123,11 @@ and sample with
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
```
-By default, this uses a guidance scale of `--scale 7.5`, [Katherine Crowson's implementation](https://github.com/CompVis/latent-diffusion/pull/51) of the [PLMS](https://arxiv.org/abs/2202.09778) sampler,
-and renders images of size 512x512 (which it was trained on) in 50 steps. All supported arguments are listed below (type `python scripts/txt2img.py --help`).
+By default, this uses a guidance scale of `--scale 7.5`,
+[Katherine Crowson's implementation](https://github.com/CompVis/latent-diffusion/pull/51)
+of the [PLMS](https://arxiv.org/abs/2202.09778) sampler, and renders images of
+size 512x512 (which it was trained on) in 50 steps. All supported arguments are
+listed below (type `python scripts/txt2img.py --help`).
```commandline
usage: txt2img.py [-h] [--prompt [PROMPT]] [--outdir [OUTDIR]] [--skip_grid] [--skip_save] [--ddim_steps DDIM_STEPS] [--plms] [--laion400m] [--fixed_code] [--ddim_eta DDIM_ETA] [--n_iter N_ITER] [--H H] [--W W] [--C C] [--f F] [--n_samples N_SAMPLES] [--n_rows N_ROWS]
@@ -137,14 +165,17 @@ optional arguments:
```
-Note: The inference config for all v1 versions is designed to be used with EMA-only checkpoints.
-For this reason `use_ema=False` is set in the configuration, otherwise the code will try to switch from
-non-EMA to EMA weights. If you want to examine the effect of EMA vs no EMA, we provide "full" checkpoints
-which contain both types of weights. For these, `use_ema=False` will load and use the non-EMA weights.
+Note: The inference config for all v1 versions is designed to be used with
+EMA-only checkpoints. For this reason `use_ema=False` is set in the
+configuration, otherwise the code will try to switch from non-EMA to EMA
+weights. If you want to examine the effect of EMA vs no EMA, we provide "full"
+checkpoints which contain both types of weights. For these, `use_ema=False` will
+load and use the non-EMA weights.
#### Diffusers Integration
-Another way to download and sample Stable Diffusion is by using the [diffusers library](https://github.com/huggingface/diffusers/tree/main#new--stable-diffusion-is-now-fully-compatible-with-diffusers)
+Another way to download and sample Stable Diffusion is by using the
+[diffusers library](https://github.com/huggingface/diffusers/tree/main#new--stable-diffusion-is-now-fully-compatible-with-diffusers)
```py
# make sure you're logged in with `huggingface-cli login`
@@ -165,18 +196,23 @@ image.save("astronaut_rides_horse.png")
### Image Modification with Stable Diffusion
-By using a diffusion-denoising mechanism as first proposed by [SDEdit](https://arxiv.org/abs/2108.01073), the model can be used for different
-tasks such as text-guided image-to-image translation and upscaling. Similar to the txt2img sampling script,
-we provide a script to perform image modification with Stable Diffusion.
+By using a diffusion-denoising mechanism as first proposed by
+[SDEdit](https://arxiv.org/abs/2108.01073), the model can be used for different
+tasks such as text-guided image-to-image translation and upscaling. Similar to
+the txt2img sampling script, we provide a script to perform image modification
+with Stable Diffusion.
-The following describes an example where a rough sketch made in [Pinta](https://www.pinta-project.com/) is converted into a detailed artwork.
+The following describes an example where a rough sketch made in
+[Pinta](https://www.pinta-project.com/) is converted into a detailed artwork.
```
python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img