Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00
[Documentation] Why containers?
commit b20f2bcd7e (parent 3f8a289e9b)
@@ -5,6 +5,7 @@ Table of Contents
* [Step 1 - Get the Model](#step-1---get-the-model)
* [Step 2 - Installation](#step-2---installation)
* [Option A - On a Linux container](#option-a---on-a-linux-container)
* [Why containers?](#why-containers)
* [Prerequisites](#prerequisites)
* [Setup](#setup)
* [Option B - Directly on Apple silicon](#option-b---directly-on-apple-silicon)
@@ -17,6 +18,7 @@ Table of Contents
* [Web Interface](#web-interface)
* [Notes](#notes)
# Step 1 - Get the Model
Go to [Hugging Face](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original), and click "Access repository" to download `sd-v1-4.ckpt` (~4 GB) to `~/Downloads`.
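If you prefer the command line, a hedged sketch of the same download: the `HF_TOKEN` variable (a Hugging Face access token) and the `/resolve/main/` direct-download URL form are assumptions about your setup, not part of the original instructions.

```shell
# Hypothetical alternative to the browser download.
# Assumes a Hugging Face access token is exported as HF_TOKEN
# and that the repo serves the file at the standard /resolve/main/ path.
curl -L -H "Authorization: Bearer ${HF_TOKEN}" \
  -o ~/Downloads/sd-v1-4.ckpt \
  https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
```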
You'll need to create an account but it's quick and free.
@@ -24,9 +26,13 @@ You'll need to create an account but it's quick and free.
# Step 2 - Installation
## Option A - On a Linux container
This example uses a Mac M1/M2 (arm64) but you can specify the platform and architecture as parameters when building the image and running the container. It provides a reliable way to generate a build and deploy it. It also uses a Docker volume to store the largest model file as a first step in decoupling storage and compute. Future enhancements will do this for other model files and assets. The steps would be the same on an amd64 machine with NVIDIA GPUs as for an arm64 Mac; the platform is configurable.
You [can't access the MacBook M1/M2 GPU cores from the Docker containers](https://github.com/pytorch/pytorch/issues/81224), so performance is reduced compared with running it directly on macOS, but for development purposes it's fine.
### Why containers?
They provide a flexible, reliable way to build and deploy Stable Diffusion. We also use a Docker volume to store the largest model file and image outputs as a first step in decoupling storage and compute. Future enhancements will do this for other model files and assets. See [Processes](https://12factor.net/processes) under the Twelve-Factor App methodology for details on why running applications in such a stateless fashion is important.
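A minimal sketch of the volume step described above; the volume name `invokeai-data` and the `/data` mount point are assumptions for illustration, not names the project prescribes. Run these on a machine with Docker installed.

```shell
# Create a named volume so the model file outlives any single container:
docker volume create invokeai-data

# Copy the downloaded checkpoint into the volume via a throwaway container
# (bind-mounts ~/Downloads read-only alongside the named volume):
docker run --rm -v invokeai-data:/data -v ~/Downloads:/host:ro busybox \
  cp /host/sd-v1-4.ckpt /data/
```

Keeping the large checkpoint in a volume means containers stay disposable, which is exactly the stateless-process property the Twelve-Factor reference argues for.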
This example uses a Mac M1/M2 (arm64) but you can specify the platform and architecture as parameters when building the image and running the container.
The steps would be the same on an amd64 machine with NVIDIA GPUs as for an arm64 Mac; the platform is configurable. You [can't access the Mac M1/M2 GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224), and performance is reduced compared with running it directly on macOS, but for development purposes it's fine. Once you're done with development tasks on your laptop you can build for the target platform and architecture and deploy to an environment with NVIDIA GPUs on-premises or in the cloud.
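The platform parameter mentioned above can be sketched like this; the image tag `invokeai` is a hypothetical name, and the commands assume a Dockerfile in the current directory and a Docker daemon that supports cross-platform builds.

```shell
# Build for the local arm64 Mac:
docker build --platform linux/arm64 -t invokeai .

# Build the same Dockerfile for an amd64 host with NVIDIA GPUs:
docker build --platform linux/amd64 -t invokeai:amd64 .

# The same flag applies at run time, e.g.:
docker run --rm --platform linux/arm64 invokeai
```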
### Prerequisites
[Install Docker](https://gist.github.com/santisbon/2165fd1c9aaa1f7974f424535d3756f7#docker)
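Once Docker is installed, a quick sanity check that both the CLI and the daemon are working (the `hello-world` image is Docker's standard smoke test):

```shell
# Confirm the CLI is on PATH and report its version:
docker --version

# Confirm the daemon is running by pulling and running a trivial image:
docker run --rm hello-world
```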