mirror of
https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00
documentation and usability fixes
@@ -385,7 +385,7 @@ automatically.
 Example:
 <pre>
-invoke> <b>!import_model models/ldm/stable-diffusion-v1/ model-epoch08-float16.ckpt</b>
+invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
 >> Model import in process. Please enter the values needed to configure this model:

 Name for this model: <b>waifu-diffusion</b>

@@ -4,6 +4,258 @@ title: Installing Models

 # :octicons-paintbrush-16: Installing Models

-## TO COME
+## Model Weight Files
+
+The model weight files ('*.ckpt') are the Stable Diffusion "secret
+sauce". They are the product of training the AI on millions of
+captioned images gathered from multiple sources.
+
+Originally there was only a single Stable Diffusion weights file,
+which many people named `model.ckpt`. Now there are dozens or more
+that have been "fine tuned" to provide particular styles, genres, or
+other features. InvokeAI allows you to install and run multiple model
+weight files and switch between them quickly in the command-line and
+web interfaces.
+
+This manual will guide you through installing and configuring model
+weight files.
+
+## Base Models
+
+InvokeAI comes with support for a good initial set of models listed in
+the model configuration file `configs/models.yaml`. They are:
+
+| Model                | Weight File                       | Description                                            | Download From                                                  |
+| -------------------- | --------------------------------- | ------------------------------------------------------ | -------------------------------------------------------------- |
+| stable-diffusion-1.5 | v1-5-pruned-emaonly.ckpt          | Most recent version of the base Stable Diffusion model | https://huggingface.co/runwayml/stable-diffusion-v1-5          |
+| stable-diffusion-1.4 | sd-v1-4.ckpt                      | Previous version of the base Stable Diffusion model    | https://huggingface.co/CompVis/stable-diffusion-v-1-4-original |
+| inpainting-1.5       | sd-v1-5-inpainting.ckpt           | Stable Diffusion 1.5 model specialized for inpainting  | https://huggingface.co/runwayml/stable-diffusion-inpainting    |
+| waifu-diffusion-1.3  | model-epoch09-float32.ckpt        | Stable Diffusion 1.4 trained to produce anime images   | https://huggingface.co/hakurei/waifu-diffusion-v1-3            |
+| <all models>         | vae-ft-mse-840000-ema-pruned.ckpt | A fine-tune add-on file that improves face generation  | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/     |
+
+Note that these files are covered by an "Ethical AI" license which
+forbids certain uses. You will need to create an account on the
+Hugging Face website and accept the license terms before you can
+access the files.
+
+The predefined configuration file for InvokeAI (located at
+`configs/models.yaml`) provides entries for each of these weights
+files. `stable-diffusion-1.5` is the default model, and we
+strongly recommend that you install this weights file if nothing else.
+
+## Community-Contributed Models
+
+There are too many to list here and more are being contributed every
+day. [This page](https://rentry.org/sdmodels) hosts an updated list of
+Stable Diffusion models and where they can be obtained.
+
+## Installation
+
+There are three ways to install weights files:
+
+1. During InvokeAI installation, the `preload_models.py` script can
+   download them for you.
+
+2. You can use the command-line interface (CLI) to import, configure
+   and modify new model files.
+
+3. You can download the files manually and add the appropriate entries
+   to `models.yaml`.
+
+### Installation via `preload_models.py`
+
+This is the most automatic way. Run `scripts/preload_models.py` from
+the console. It will ask you to select which models to download and
+lead you through the steps of setting up a Hugging Face account if you
+haven't done so already.
+
+To start, from within the InvokeAI directory run the command `python
+scripts/preload_models.py` (Linux/MacOS) or `python
+scripts\preload_models.py` (Windows):
+
+```
+Loading Python libraries...
+
+** INTRODUCTION **
+Welcome to InvokeAI. This script will help download the Stable Diffusion weight files
+and other large models that are needed for text to image generation. At any point you may interrupt
+this program and resume later.
+
+** WEIGHT SELECTION **
+Would you like to download the Stable Diffusion model weights now? [y]
+
+Choose the weight file(s) you wish to download. Before downloading you
+will be given the option to view and change your selections.
+
+[1] stable-diffusion-1.5:
+    The newest Stable Diffusion version 1.5 weight file (4.27 GB) (recommended)
+    Download? [y]
+[2] inpainting-1.5:
+    RunwayML SD 1.5 model optimized for inpainting (4.27 GB) (recommended)
+    Download? [y]
+[3] stable-diffusion-1.4:
+    The original Stable Diffusion version 1.4 weight file (4.27 GB)
+    Download? [n] n
+[4] waifu-diffusion-1.3:
+    Stable Diffusion 1.4 fine tuned on anime-styled images (4.27 GB)
+    Download? [n] y
+[5] ft-mse-improved-autoencoder-840000:
+    StabilityAI improved autoencoder fine-tuned for human faces (335 MB) (recommended)
+    Download? [y] y
+
+The following weight files will be downloaded:
+   [1] stable-diffusion-1.5*
+   [2] inpainting-1.5
+   [4] waifu-diffusion-1.3
+   [5] ft-mse-improved-autoencoder-840000
+   *default
+Ok to download? [y]
+
+** LICENSE AGREEMENT FOR WEIGHT FILES **
+
+1. To download the Stable Diffusion weight files you need to read and accept the
+   CreativeML Responsible AI license. If you have not already done so, please
+   create an account using the "Sign Up" button:
+
+   https://huggingface.co
+
+   You will need to verify your email address as part of the HuggingFace
+   registration process.
+
+2. After creating the account, log in under your account and accept
+   the license terms located here:
+
+   https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
+
+Press <enter> when you are ready to continue:
+...
+```
+
+When the script is complete, you will find the downloaded weights
+files in `models/ldm/stable-diffusion-v1` and a matching configuration
+file in `configs/models.yaml`.
+
+You can run the script again to add any models you didn't select the
+first time. Note that as a safety measure the script will _never_
+remove a previously-installed weights file. You will have to do this
+manually.
+
+### Installation via the CLI
+
+You can install a new model, including any of the community-supported
+ones, via the command-line client's `!import_model` command.
+
+1. First download the desired model weights file and place it under `models/ldm/stable-diffusion-v1/`.
+   You may rename the weights file to something more memorable if you wish. Record the path of the
+   weights file (e.g. `models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`).
+
+2. Launch the `invoke.py` CLI with `python scripts/invoke.py`.
+
+3. At the `invoke>` command line, enter the command `!import_model <path to model>`.
+   For example:
+
+   `invoke> !import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`
+
+   (Hint: the CLI supports file path autocompletion. Type a bit of the path
+   name and hit <tab> to get a choice of possible completions.)
+
+4. Follow the wizard's instructions to complete installation as shown in the example
+   here:
+
+```
+invoke> <b>!import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt</b>
+>> Model import in process. Please enter the values needed to configure this model:
+
+Name for this model: <b>arabian-nights</b>
+Description of this model: <b>Arabian Nights Fine Tune v1.0</b>
+Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
+Default image width: <b>512</b>
+Default image height: <b>512</b>
+>> New configuration:
+arabian-nights:
+  config: configs/stable-diffusion/v1-inference.yaml
+  description: Arabian Nights Fine Tune v1.0
+  height: 512
+  weights: models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
+  width: 512
+OK to import [n]? <b>y</b>
+>> Caching model stable-diffusion-1.4 in system RAM
+>> Loading arabian-nights from models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
+   | LatentDiffusion: Running in eps-prediction mode
+   | DiffusionWrapper has 859.52 M params.
+   | Making attention of type 'vanilla' with 512 in_channels
+   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
+   | Making attention of type 'vanilla' with 512 in_channels
+   | Using faster float16 precision
+```
+
+If you've previously installed the fine-tune VAE file `vae-ft-mse-840000-ema-pruned.ckpt`,
+the wizard will also ask you if you want to add this VAE to the model.
+
+The appropriate entry for this model will be added to `configs/models.yaml` and it will
+be available to use in the CLI immediately.
+
+The CLI has additional commands for switching among, viewing, editing, and
+deleting the available models. These are described in [Command Line
+Client](CLI.md#model-selection-and-importation), but the two most
+frequently used are `!models` and `!switch <name of model>`. The first
+prints a table of models that InvokeAI knows about and their load
+status. The second loads the requested model and lets you switch
+back and forth quickly among loaded models.
+
+### Manual editing of `configs/models.yaml`
+
+If you are comfortable with a text editor then you may simply edit
+`models.yaml` directly.
+
+First you need to download the desired .ckpt file and place it in
+`models/ldm/stable-diffusion-v1` as described in step #1 in the
+previous section. Record the path to the weights file,
+e.g. `models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`.
+
+Then using a **text** editor (e.g. the Windows Notepad application),
+open the file `configs/models.yaml`, and add a new stanza that follows
+this example:
+
+```
+arabian-nights-1.0:
+  description: A great fine-tune in Arabian Nights style
+  weights: ./models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
+  config: ./configs/stable-diffusion/v1-inference.yaml
+  width: 512
+  height: 512
+  vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
+  default: false
+```
+
+* arabian-nights-1.0
+  - This is the name of the model that you will refer to from within the
+    CLI and the WebGUI when you need to load and use the model.
+
+* description
+  - Any description that you want to add to the model to remind you what
+    it is.
+
+* weights
+  - Relative path to the .ckpt weights file for this model.
+
+* config
+  - This is the confusingly-named configuration file for the model itself.
+    Use `./configs/stable-diffusion/v1-inference.yaml` unless the model happens
+    to need a custom configuration, in which case the place you downloaded it
+    from will tell you what to use instead. For example, the RunwayML custom
+    inpainting model requires the file `configs/stable-diffusion/v1-inpainting-inference.yaml`.
+    (This is already included in the InvokeAI distribution and configured automatically
+    for you by the `preload_models.py` script.)
+
+* vae
+  - If you want to add a VAE file to the model, then enter its path here.
+
+* width, height
+  - This is the width and height of the images used to train the model.
+    Currently they are always 512 and 512.
+
+Save `models.yaml` and relaunch InvokeAI. The new model should now be
+available for your use.

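A stanza like the one documented above is easy to get subtly wrong (a missing key, or a mistyped weights path). As a rough illustration only, and not part of InvokeAI, a small helper could sanity-check a stanza before relaunching; the function name and required-key list here are assumptions based on the field descriptions above:

```python
import os

# Keys every models.yaml stanza is expected to carry, per the field
# descriptions above; "vae" and "default" are optional.
REQUIRED = ("description", "weights", "config", "width", "height")

def validate_stanza(name: str, stanza: dict) -> list:
    """Return a list of problems found in a single models.yaml stanza."""
    problems = []
    for key in REQUIRED:
        if key not in stanza:
            problems.append(f"{name}: missing required key '{key}'")
    # The path-valued keys should point at files that actually exist.
    for key in ("weights", "config", "vae"):
        path = stanza.get(key)
        if path and not os.path.exists(path):
            problems.append(f"{name}: {key} file not found: {path}")
    return problems

# Example: the arabian-nights stanza expressed as a Python dict.
stanza = {
    "description": "A great fine-tune in Arabian Nights style",
    "weights": "./models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt",
    "config": "./configs/stable-diffusion/v1-inference.yaml",
    "width": 512,
    "height": 512,
}
print(validate_stanza("arabian-nights-1.0", stanza))
```

An empty result means the stanza at least has the expected shape; any reported problem is worth fixing before restarting InvokeAI.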
@@ -1,5 +1,5 @@
 ---
-title: Linux
+title: Manual Installation, Linux
 ---

 # :fontawesome-brands-linux: Linux

@@ -63,24 +63,16 @@ title: Linux
 model loading scheme to allow the script to work on GPU machines that are not
 internet connected. See [Preload Models](../features/OTHER.md#preload-models)

-7. Now you need to install the weights for the stable diffusion model.
+7. Install the weights for the stable diffusion model.

-   - For running with the released weights, you will first need to set up an acount
-     with [Hugging Face](https://huggingface.co).
-   - Use your credentials to log in, and then point your browser [here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
-   - You may be asked to sign a license agreement at this point.
-   - Click on "Files and versions" near the top of the page, and then click on the
-     file named "sd-v1-4.ckpt". You'll be taken to a page that prompts you to click
-     the "download" link. Save the file somewhere safe on your local machine.
-
-   Now run the following commands from within the stable-diffusion directory.
-   This will create a symbolic link from the stable-diffusion model.ckpt file, to
-   the true location of the `sd-v1-4.ckpt` file.
-
-   ```bash
-   (invokeai) ~/InvokeAI$ mkdir -p models/ldm/stable-diffusion-v1
-   (invokeai) ~/InvokeAI$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
-   ```
+   - Sign up at https://huggingface.co
+   - Go to the [Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
+   - Accept the terms and click Access Repository
+   - Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
+     and move it into this directory under `models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt`
+
+   There are many other models that you can use. Please see
+   [INSTALLING_MODELS.md](../features/INSTALLING_MODELS.md) for details.

 8. Start generating images!

@@ -1,5 +1,5 @@
 ---
-title: macOS
+title: Manual Installation, macOS
 ---

 # :fontawesome-brands-apple: macOS

@@ -24,9 +24,15 @@ First you need to download a large checkpoint file.
 1. Sign up at https://huggingface.co
 2. Go to the [Stable diffusion diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
 3. Accept the terms and click Access Repository
-4. Download [sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt) and note where you have saved it (probably the Downloads folder). You may want to move it somewhere else for longer term storage - SD needs this file to run.
+4. Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
+   and move it into this directory under `models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt`

-While that is downloading, open Terminal and run the following commands one at a time, reading the comments and taking care to run the appropriate command for your Mac's architecture (Intel or M1).
+There are many other models that you can try. Please see
+[INSTALLING_MODELS.md](../features/INSTALLING_MODELS.md) for details.
+
+While that is downloading, open Terminal and run the following
+commands one at a time, reading the comments and taking care to run
+the appropriate command for your Mac's architecture (Intel or M1).

 !!! todo "Homebrew"

@@ -1,5 +1,5 @@
 ---
-title: Windows
+title: Manual Installation, Windows
 ---

 # :fontawesome-brands-windows: Windows

@@ -83,23 +83,14 @@ in the wiki

 8. Now you need to install the weights for the big stable diffusion model.

-   1. For running with the released weights, you will first need to set up an acount with Hugging Face (https://huggingface.co).
-   2. Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
-   3. You may be asked to sign a license agreement at this point.
-   4. Click on "Files and versions" near the top of the page, and then click on the file named `sd-v1-4.ckpt`. You'll be taken to a page that
-      prompts you to click the "download" link. Now save the file somewhere safe on your local machine.
-   5. The weight file is >4 GB in size, so
-      downloading may take a while.
-
-   Now run the following commands from **within the InvokeAI directory** to copy the weights file to the right place:
-
-   ```batch
-   mkdir -p models\ldm\stable-diffusion-v1
-   copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
-   ```
-
-   Please replace `C:\path\to\sd-v1.4.ckpt` with the correct path to wherever you stashed this file. If you prefer not to copy or move the .ckpt file,
-   you may instead create a shortcut to it from within `models\ldm\stable-diffusion-v1\`.
+   - Sign up at https://huggingface.co
+   - Go to the [Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
+   - Accept the terms and click Access Repository
+   - Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
+     and move it into this directory under `models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt`
+
+   There are many other models that you can use. Please see
+   [INSTALLING_MODELS.md](../features/INSTALLING_MODELS.md) for details.

 9. Start generating images!

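The "download and move into `models/ldm/stable-diffusion-v1`" step documented for Linux, macOS and Windows is the same operation on every platform. A minimal cross-platform sketch of that move (a hypothetical helper, not part of InvokeAI):

```python
import os
import shutil

def install_ckpt(downloaded: str, repo_root: str) -> str:
    """Move a downloaded .ckpt file into InvokeAI's weights directory."""
    dest_dir = os.path.join(repo_root, "models", "ldm", "stable-diffusion-v1")
    # Create the weights directory if this is a fresh checkout.
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, os.path.basename(downloaded))
    # shutil.move works across filesystems, unlike a bare os.rename.
    shutil.move(downloaded, dest)
    return dest
```

For example, `install_ckpt("~/Downloads/v1-5-pruned-emaonly.ckpt", ".")` (with the path expanded) would place the file where `models.yaml` expects to find it.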
@@ -227,11 +227,14 @@ class ModelCache(object):
             print('   | Using more accurate float32 precision')

         # look and load a matching vae file. Code borrowed from AUTOMATIC1111 modules/sd_models.py
-        if vae and os.path.exists(vae):
-            print(f'   | Loading VAE weights from: {vae}')
-            vae_ckpt = torch.load(vae, map_location="cpu")
-            vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items() if k[0:4] != "loss"}
-            model.first_stage_model.load_state_dict(vae_dict, strict=False)
+        if vae:
+            if os.path.exists(vae):
+                print(f'   | Loading VAE weights from: {vae}')
+                vae_ckpt = torch.load(vae, map_location="cpu")
+                vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items() if k[0:4] != "loss"}
+                model.first_stage_model.load_state_dict(vae_dict, strict=False)
+            else:
+                print(f'   | VAE file {vae} not found. Skipping.')

         model.to(self.device)
         # model.to doesn't change the cond_stage_model.device used to move the tokenizer output, so set it here

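The change above splits the existence check out of the load condition so that a configured-but-missing `vae:` path prints a warning instead of being silently skipped. The key-filtering step it preserves can be illustrated standalone (a toy dict stands in for a real `torch.load(vae)["state_dict"]` checkpoint):

```python
def strip_loss_keys(state_dict: dict) -> dict:
    # Keep only the autoencoder weights; entries whose key starts with
    # "loss" belong to the training objective and are not loaded into
    # the model's first_stage_model.
    return {k: v for k, v in state_dict.items() if k[0:4] != "loss"}

# Toy stand-in for a VAE checkpoint's state dict.
ckpt = {"encoder.conv_in.weight": 0.1, "loss.discriminator.weight": 0.2}
print(strip_loss_keys(ckpt))  # → {'encoder.conv_in.weight': 0.1}
```

Loading the filtered dict with `strict=False`, as the patched code does, tolerates any remaining key mismatches between the checkpoint and the model.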
 models/ldm/stable-diffusion-v1/place-ckpt-files-here.txt (new file, 2 lines)
@@ -0,0 +1,2 @@
+See docs/features/INSTALLING_MODELS.md for how to populate this
+directory with one or more Stable Diffusion model weight files.