+
| Sampler | it/s (3-sample average; M1 Max 64 GB, 512x512) |
|---|---|
| `DDIM` | 1.89 |
@@ -32,6 +34,8 @@ Looking for a short version? Here's a TL;DR in 3 tables.
| `K_DPM_2_A` | 0.95 (slower) |
| `K_EULER_A` | 1.86 |
+
+
| Suggestions |
|:---|
| For most use cases, `K_LMS`, `K_HEUN` and `K_DPM_2` are the best choices (the latter two run at half the speed, but tend to converge twice as fast as `K_LMS`). At very low step counts (≤ `-s8`), `K_HEUN` and `K_DPM_2` are not recommended; use `K_LMS` instead. |
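
For instance, assuming the CLI's `-A`/`--sampler` switch from this era of the project, a sampler can be chosen at launch or per prompt (the prompt and step count below are illustrative):

```bash
(invokeai) ~/InvokeAI$ python scripts/invoke.py --sampler k_lms
invoke> "a photograph of an astronaut riding a horse" -A k_heun -s16
```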
diff --git a/docs/index.md b/docs/index.md
index f053df25af..c9a19a18ce 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,6 +1,5 @@
---
title: Home
-template: main.html
---
-# :material-script-text-outline: InvokeAI: A Stable Diffusion Toolkit
+# ^^**InvokeAI: A Stable Diffusion Toolkit**^^ :tools:
Formerly known as lstein/stable-diffusion
![project logo](assets/logo.png)
@@ -29,8 +28,8 @@ template: main.html
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
-[discord badge]: https://flat.badgen.net/discord/members/htRgbc7e?icon=discord
-[discord link]: https://discord.com/invite/htRgbc7e
+[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
+[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
[github forks link]: https://useful-forks.github.io/?repo=lstein%2Fstable-diffusion
[github open issues badge]: https://flat.badgen.net/github/open-issues/invoke-ai/InvokeAI?icon=github
@@ -53,14 +52,13 @@ various new features and options to aid the image generation
process. It runs on Windows, Mac and Linux machines, and runs on GPU
cards with as little as 4 GB of RAM.
-Quick links:
-
+**Quick links**: [
Discord Server] [
Code and Downloads] [
Bug Reports] [
Discussion, Ideas & Q&A]
+
+
+
+!!! note
+
+    This fork is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates. They will help us diagnose issues faster.
## :octicons-package-dependencies-24: Installation
@@ -98,7 +96,7 @@ You wil need one of the following:
To run in full-precision mode, start `invoke.py` with the `--full_precision` flag:
```bash
- (ldm) ~/stable-diffusion$ python scripts/invoke.py --full_precision
+ (invokeai) ~/InvokeAI$ python scripts/invoke.py --full_precision
```
## :octicons-log-16: Latest Changes
diff --git a/docs/installation/INSTALL_DOCKER.md b/docs/installation/INSTALL_DOCKER.md
index 880b216f3c..eb2e2ab39f 100644
--- a/docs/installation/INSTALL_DOCKER.md
+++ b/docs/installation/INSTALL_DOCKER.md
@@ -1,4 +1,10 @@
-# Before you begin
+---
+title: Docker
+---
+
+# :fontawesome-brands-docker: Docker
+
+## Before you begin
- For end users: Install Stable Diffusion locally using the instructions for
your OS.
@@ -6,7 +12,7 @@
deployment to other environments (on-premises or cloud), follow these
instructions. For general use, install locally to leverage your machine's GPU.
-# Why containers?
+## Why containers?
They provide a flexible, reliable way to build and deploy Stable Diffusion.
You'll also use a Docker volume to store the largest model files and image
@@ -26,11 +32,11 @@ development purposes it's fine. Once you're done with development tasks on your
laptop you can build for the target platform and architecture and deploy to
another environment with NVIDIA GPUs on-premises or in the cloud.
-# Installation on a Linux container
+## Installation on a Linux container
-## Prerequisites
+### Prerequisites
-### Get the data files
+#### Get the data files
Go to
[Hugging Face](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original),
@@ -44,14 +50,14 @@ cd ~/Downloads
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth
```
-### Install [Docker](https://github.com/santisbon/guides#docker)
+#### Install [Docker](https://github.com/santisbon/guides#docker)
On the Docker Desktop app, go to Preferences, Resources, Advanced. Increase the
CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.
-## Setup
+### Setup
Set the fork you want to use and other variables.
@@ -132,9 +138,9 @@ docker run -it \
$TAG_STABLE_DIFFUSION
```
-# Usage (time to have fun)
+## Usage (time to have fun)
-## Startup
+### Startup
If you're on a **Linux container** the `invoke` script is **automatically
started** and the output dir set to the Docker volume you created earlier.
@@ -158,7 +164,7 @@ invoke> -h
invoke> q
```
-## Text to Image
+### Text to Image
For quick (but bad) image results test with 5 steps (default 50) and 1 sample
image. This will let you know that everything is set up correctly.
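
For example (the prompt is only an illustration):

```bash
invoke> "a red apple on a wooden table" -s5 -n1
```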
@@ -188,7 +194,7 @@ volume):
docker cp dummy:/data/000001.928403745.png /Users/
/Pictures
```
-## Image to Image
+### Image to Image
You can also do text-guided image-to-image translation. For example, turning a
sketch into a detailed drawing.
@@ -225,7 +231,7 @@ If you're on a Linux container on your Mac
invoke> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
```
-## Web Interface
+### Web Interface
You can use the `invoke` script with a graphical web interface. Start the web
server with:
@@ -238,7 +244,7 @@ If it's running on your Mac point your Mac web browser to http://127.0.0.1:9090
Press Control-C at the command line to stop the web server.
-## Notes
+### Notes
Some text you can add at the end of the prompt to make it very pretty:
diff --git a/docs/installation/INSTALL_LINUX.md b/docs/installation/INSTALL_LINUX.md
index ca406bf0a5..629175c3fa 100644
--- a/docs/installation/INSTALL_LINUX.md
+++ b/docs/installation/INSTALL_LINUX.md
@@ -26,38 +26,36 @@ title: Linux
3. Copy the InvokeAI source code from GitHub:
-```
-(base) ~$ git clone https://github.com/invoke-ai/InvokeAI.git
-```
+ ```bash
+ (base) ~$ git clone https://github.com/invoke-ai/InvokeAI.git
+ ```
-This will create InvokeAI folder where you will follow the rest of the steps.
+    This will create an InvokeAI folder where you will follow the rest of the steps.
4. Enter the newly-created InvokeAI folder. From this step forward make sure that you are working in the InvokeAI directory!
-```
-(base) ~$ cd InvokeAI
-(base) ~/InvokeAI$
-```
+ ```bash
+ (base) ~$ cd InvokeAI
+ (base) ~/InvokeAI$
+ ```
5. Use anaconda to copy necessary python packages, create a new python
environment named `invokeai` and activate the environment.
-
-```
-(base) ~/InvokeAI$ conda env create
-(base) ~/InvokeAI$ conda activate invokeai
-(invokeai) ~/InvokeAI$
-```
+ ```bash
+ (base) ~/InvokeAI$ conda env create
+ (base) ~/InvokeAI$ conda activate invokeai
+ (invokeai) ~/InvokeAI$
+ ```
After these steps, your command prompt will be prefixed by `(invokeai)` as shown
above.
6. Load a couple of small machine-learning models required by stable diffusion:
-
-```
-(invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
-```
+ ```bash
+ (invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
+ ```
!!! note
@@ -79,33 +77,31 @@ This will create InvokeAI folder where you will follow the rest of the steps.
This will create a symbolic link from the stable-diffusion model.ckpt file, to
the true location of the `sd-v1-4.ckpt` file.
-
-```
-(invokeai) ~/InvokeAI$ mkdir -p models/ldm/stable-diffusion-v1
-(invokeai) ~/InvokeAI$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
-```
+ ```bash
+ (invokeai) ~/InvokeAI$ mkdir -p models/ldm/stable-diffusion-v1
+ (invokeai) ~/InvokeAI$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
+ ```
8. Start generating images!
-```
-# for the pre-release weights use the -l or --liaon400m switch
-(invokeai) ~/InvokeAI$ python3 scripts/invoke.py -l
+ ```bash
+    # for the pre-release weights use the -l or --laion400m switch
+ (invokeai) ~/InvokeAI$ python3 scripts/invoke.py -l
-# for the post-release weights do not use the switch
-(invokeai) ~/InvokeAI$ python3 scripts/invoke.py
+ # for the post-release weights do not use the switch
+ (invokeai) ~/InvokeAI$ python3 scripts/invoke.py
-# for additional configuration switches and arguments, use -h or --help
-(invokeai) ~/InvokeAI$ python3 scripts/invoke.py -h
-```
+ # for additional configuration switches and arguments, use -h or --help
+ (invokeai) ~/InvokeAI$ python3 scripts/invoke.py -h
+ ```
9. Subsequently, to relaunch the script, be sure to run "conda activate invokeai" (step 5, second command), enter the `InvokeAI` directory, and then launch the invoke script (step 8). If you forget to activate the 'invokeai' environment, the script will fail with multiple `ModuleNotFound` errors.
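
    The relaunch sequence, sketched as a transcript (assuming you cloned into your home directory):

    ```bash
    (base) ~$ conda activate invokeai
    (invokeai) ~$ cd InvokeAI
    (invokeai) ~/InvokeAI$ python3 scripts/invoke.py
    ```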
## Updating to newer versions of the script
-
This distribution is changing rapidly. If you used the `git clone` method (step 5) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter `InvokeAI` and type:
-```
+```bash
(invokeai) ~/InvokeAI$ git pull
(invokeai) ~/InvokeAI$ conda env update -f environment.yml
```
diff --git a/docs/installation/INSTALL_MAC.md b/docs/installation/INSTALL_MAC.md
index 75e8f22785..086739e59e 100644
--- a/docs/installation/INSTALL_MAC.md
+++ b/docs/installation/INSTALL_MAC.md
@@ -2,6 +2,8 @@
title: macOS
---
+# :fontawesome-brands-apple: macOS
+
Invoke AI runs quite well on M1 Macs and we have a number of M1 users
in the community.
@@ -26,98 +28,120 @@ First you need to download a large checkpoint file.
While that is downloading, open Terminal and run the following commands one at a time, reading the comments and taking care to run the appropriate command for your Mac's architecture (Intel or M1).
-Do not just copy and paste the whole thing into your terminal!
+!!! todo "Homebrew"
-```bash
-# Install brew (and Xcode command line tools):
-/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+ If you have no brew installation yet (otherwise skip):
-# Now there are two options to get the Python (miniconda) environment up and running:
-# 1. Alongside pyenv
-# 2. Standalone
-#
-# If you don't know what we are talking about, choose 2.
-#
-# If you are familiar with python environments, you'll know there are other options
-# for setting up the environment - you are on your own if you go one of those routes.
+ ```bash title="install brew (and Xcode command line tools)"
+ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+ ```
-##### BEGIN TWO DIFFERENT OPTIONS #####
+!!! todo "Conda Installation"
-### BEGIN OPTION 1: Installing alongside pyenv ###
-brew install pyenv-virtualenv # you might have this from before, no problem
-pyenv install anaconda3-2022.05
-pyenv virtualenv anaconda3-2022.05
-eval "$(pyenv init -)"
-pyenv activate anaconda3-2022.05
-### END OPTION 1 ###
+ Now there are two different ways to set up the Python (miniconda) environment:
-### BEGIN OPTION 2: Installing standalone ###
-# Install cmake, protobuf, and rust:
-brew install cmake protobuf rust
+ 1. Standalone
+ 2. with pyenv
-# BEGIN ARCHITECTURE-DEPENDENT STEP #
-# For M1: install miniconda (M1 arm64 version):
-curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o Miniconda3-latest-MacOSX-arm64.sh
-/bin/bash Miniconda3-latest-MacOSX-arm64.sh
+    If you don't know what we are talking about, choose "Standalone". If you are familiar with Python environments, choose "with pyenv".
-# For Intel: install miniconda (Intel x86-64 version):
-curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -o Miniconda3-latest-MacOSX-x86_64.sh
-/bin/bash Miniconda3-latest-MacOSX-x86_64.sh
-# END ARCHITECTURE-DEPENDENT STEP #
+ === "Standalone"
-### END OPTION 2 ###
+ ```bash title="Install cmake, protobuf, and rust"
+ brew install cmake protobuf rust
+ ```
-##### END TWO DIFFERENT OPTIONS #####
+ Then choose the kind of your Mac and install miniconda:
-# Clone the Invoke AI repo
-git clone https://github.com/invoke-ai/InvokeAI.git
-cd InvokeAI
+ === "M1 arm64"
-### WAIT FOR THE CHECKPOINT FILE TO DOWNLOAD, THEN PROCEED ###
+ ```bash title="Install miniconda for M1 arm64"
+ curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh \
+ -o Miniconda3-latest-MacOSX-arm64.sh
+ /bin/bash Miniconda3-latest-MacOSX-arm64.sh
+ ```
-# We will leave the big checkpoint wherever you stashed it for long-term storage,
-# and make a link to it from the repo's folder. This allows you to use it for
-# other repos, and if you need to delete Invoke AI, you won't have to download it again.
+ === "Intel x86_64"
-# Make the directory in the repo for the symlink
-mkdir -p models/ldm/stable-diffusion-v1/
+ ```bash title="Install miniconda for Intel"
+ curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh \
+ -o Miniconda3-latest-MacOSX-x86_64.sh
+ /bin/bash Miniconda3-latest-MacOSX-x86_64.sh
+ ```
-# This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
-PATH_TO_CKPT="$HOME/Downloads"
+ === "with pyenv"
-# Create a link to the checkpoint
-ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
+ ```bash
+ brew install pyenv-virtualenv
+ pyenv install anaconda3-2022.05
+ pyenv virtualenv anaconda3-2022.05
+ eval "$(pyenv init -)"
+ pyenv activate anaconda3-2022.05
+ ```
-# BEGIN ARCHITECTURE-DEPENDENT STEP #
-# For M1: Create the environment & install packages
-PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
+!!! todo "Clone the Invoke AI repo"
-# For Intel: Create the environment & install packages
-PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
+ ```bash
+ git clone https://github.com/invoke-ai/InvokeAI.git
+ cd InvokeAI
+ ```
-# END ARCHITECTURE-DEPENDENT STEP #
+!!! todo "Wait until the checkpoint-file download finished, then proceed"
-# Activate the environment (you need to do this every time you want to run SD)
-conda activate invokeai
+ We will leave the big checkpoint wherever you stashed it for long-term storage,
+ and make a link to it from the repo's folder. This allows you to use it for
+    other repos, and if you need to delete Invoke AI, you won't have to download it again.
-# This will download some bits and pieces and make take a while
-python scripts/preload_models.py
+ ```{.bash .annotate}
+ # Make the directory in the repo for the symlink
+ mkdir -p models/ldm/stable-diffusion-v1/
-# Run SD!
-python scripts/dream.py
-```
-# or run the web interface!
-python scripts/invoke.py --web
+ # This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
+ PATH_TO_CKPT="$HOME/Downloads" # (1)!
-# The original scripts should work as well.
-python scripts/orig_scripts/txt2img.py \
- --prompt "a photograph of an astronaut riding a horse" \
- --plms
-```
+ # Create a link to the checkpoint
+ ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
+ ```
-Note, `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
-create -f environment-mac.yml` never finishing in some situations. So
-it isn't required but wont hurt.
+    1. Replace `$HOME/Downloads` with the location where you actually stored the checkpoint (`sd-v1-4.ckpt`).
+
+!!! todo "Create the environment & install packages"
+
+ === "M1 Mac"
+
+ ```bash
+ PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
+ ```
+
+ === "Intel x86_64 Mac"
+
+ ```bash
+ PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
+ ```
+
+ ```bash
+ # Activate the environment (you need to do this every time you want to run SD)
+ conda activate invokeai
+
+    # This will download some bits and pieces and may take a while
+ python scripts/preload_models.py
+
+ # Run SD!
+ python scripts/dream.py
+
+ # or run the web interface!
+ python scripts/invoke.py --web
+
+ # The original scripts should work as well.
+ python scripts/orig_scripts/txt2img.py \
+ --prompt "a photograph of an astronaut riding a horse" \
+ --plms
+ ```
+ !!! info
+
+    `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
+    create -f environment-mac.yml` never finishing in some situations. So
+    it isn't required but won't hurt.
---
## Common problems
@@ -157,7 +181,6 @@ conda install \
-n ldm
```
-
If it takes forever to run `conda env create -f environment-mac.yml`, try this:
```bash
@@ -169,12 +192,12 @@ conda clean \
Or you could try to completely reset Anaconda:
- ```bash
- conda update \
- --force-reinstall \
- -y \
- -n base \
- -c defaults conda
+```bash
+conda update \
+ --force-reinstall \
+ -y \
+ -n base \
+ -c defaults conda
```
---
diff --git a/docs/installation/INSTALL_WINDOWS.md b/docs/installation/INSTALL_WINDOWS.md
index 343fcde9a4..c7dc9065ea 100644
--- a/docs/installation/INSTALL_WINDOWS.md
+++ b/docs/installation/INSTALL_WINDOWS.md
@@ -39,7 +39,7 @@ in the wiki
4. Run the command:
- ```bash
+ ```batch
git clone https://github.com/invoke-ai/InvokeAI.git
```
@@ -48,16 +48,20 @@ in the wiki
5. Enter the newly-created InvokeAI folder. From this step forward make sure that you are working in the InvokeAI directory!
-```
-cd InvokeAI
-```
+ ```batch
+ cd InvokeAI
+ ```
6. Run the following two commands:
-```
-conda env create (step 6a)
-conda activate invokeai (step 6b)
-```
+ ```batch title="step 6a"
+ conda env create
+ ```
+
+ ```batch title="step 6b"
+ conda activate invokeai
+ ```
+
This will install all python requirements and activate the "invokeai" environment
which sets PATH and other environment variables properly.
@@ -67,7 +71,7 @@ conda activate invokeai (step 6b)
7. Run the command:
- ```bash
+ ```batch
python scripts\preload_models.py
```
@@ -79,45 +83,44 @@ conda activate invokeai (step 6b)
8. Now you need to install the weights for the big stable diffusion model.
-- For running with the released weights, you will first need to set up an acount with Hugging Face (https://huggingface.co).
-- Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
-- You may be asked to sign a license agreement at this point.
-- Click on "Files and versions" near the top of the page, and then click on the file named `sd-v1-4.ckpt`. You'll be taken to a page that
- prompts you to click the "download" link. Now save the file somewhere safe on your local machine.
-- The weight file is >4 GB in size, so
- downloading may take a while.
+    1. For running with the released weights, you will first need to set up an account with Hugging Face (https://huggingface.co).
+ 2. Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
+ 3. You may be asked to sign a license agreement at this point.
+ 4. Click on "Files and versions" near the top of the page, and then click on the file named `sd-v1-4.ckpt`. You'll be taken to a page that
+ prompts you to click the "download" link. Now save the file somewhere safe on your local machine.
+ 5. The weight file is >4 GB in size, so
+ downloading may take a while.
-Now run the following commands from **within the InvokeAI directory** to copy the weights file to the right place:
+ Now run the following commands from **within the InvokeAI directory** to copy the weights file to the right place:
-```
-mkdir -p models\ldm\stable-diffusion-v1
-copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
-```
+ ```batch
+ mkdir -p models\ldm\stable-diffusion-v1
+ copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
+ ```
-Please replace `C:\path\to\sd-v1.4.ckpt` with the correct path to wherever you stashed this file. If you prefer not to copy or move the .ckpt file,
-you may instead create a shortcut to it from within `models\ldm\stable-diffusion-v1\`.
+    Please replace `C:\path\to\sd-v1-4.ckpt` with the correct path to wherever you stashed this file. If you prefer not to copy or move the .ckpt file,
+    you may instead create a shortcut to it from within `models\ldm\stable-diffusion-v1\`.
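+    One way to create such a shortcut is an NTFS symbolic link, made from a command prompt running as Administrator (`mklink` ships with Windows; the source path below is a placeholder):
+
+    ```batch
+    mklink models\ldm\stable-diffusion-v1\model.ckpt C:\path\to\sd-v1-4.ckpt
+    ```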
9. Start generating images!
- ```bash
- # for the pre-release weights
+ ```batch title="for the pre-release weights"
python scripts\invoke.py -l
+ ```
- # for the post-release weights
+ ```batch title="for the post-release weights"
python scripts\invoke.py
```
10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3), enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate invokeai` (step 6b), and then launch the invoke script (step 9).
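
    As a quick sketch (replace `\path\to\InvokeAI` with your actual location):

    ```batch
    cd \path\to\InvokeAI
    conda activate invokeai
    python scripts\invoke.py
    ```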
- **Note:** Tildebyte has written an alternative
+!!! tip "Tildebyte has written an alternative"
+
["Easy peasy Windows install"](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)
which uses the Windows Powershell and pew. If you are having trouble with
Anaconda on Windows, give this a try (or try it first!)
---
-This distribution is changing rapidly. If you used the `git clone` method (step 5) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter `InvokeAI`, and type:
-
This distribution is changing rapidly. If you used the `git clone` method
(step 5) to download the stable-diffusion directory, then to update to the
latest and greatest version, launch the Anaconda window, enter
diff --git a/mkdocs.yml b/mkdocs.yml
index 209372562e..86713729d5 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -1,12 +1,12 @@
# General
-site_name: Dream Script Docs
-site_url: https://invoke-ai.github.io/InvokeAI/
+site_name: Stable Diffusion Toolkit Docs
+site_url: https://invoke-ai.github.io/InvokeAI
site_author: mauwii
dev_addr: "127.0.0.1:8080"
# Repository
-repo_name: invoke-ai/InvokeAI
-repo_url: https://invoke-ai.github.io/InvokeAI/
+repo_name: 'invoke-ai/InvokeAI'
+repo_url: 'https://github.com/invoke-ai/InvokeAI'
edit_uri: edit/main/docs/
# Copyright