Small cleanups.

- Silenced tokenizer warnings during model initialization.
- Changed "batch" to "iterations" for generating multiple images; images are now
  produced one at a time, which conserves VRAM (see the sketch after this list).
- Updated README.
- Moved the static folder from under scripts to the top level, so other static
  content can be stored there in the future.
- Added screenshot of web server in action (to static folder).
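
The thinking behind the "batch" to "iterations" change, sketched below with hypothetical helper names (this is not the T2I API from this repository): generating N images one at a time keeps peak VRAM near that of a single image, whereas a batch of N must hold all N in memory at once.

~~~~
# Hypothetical sketch only: `sample_one` stands in for whatever produces a
# single image; it is not a function defined in this repository.
def generate_images(prompt, iterations, sample_one):
    """Generate `iterations` images sequentially; peak VRAM stays near one image."""
    results = []
    for _ in range(iterations):
        results.append(sample_one(prompt))  # each pass reuses the same memory
    return results
~~~~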
Lincoln Stein 2022-08-25 15:03:40 -04:00
parent 79add5f0b6
commit 2ada3288e7
4 changed files with 42 additions and 12 deletions

README.md

@@ -88,6 +88,23 @@ You may also pass a -v<count> option to generate count variants on the original
passing the first generated image back into img2img the requested number of times. It generates interesting
variants.
## Barebones Web Server
As of version 1.10, this distribution comes with a barebones web server (see screenshot). To use it,
run the command:
~~~~
(ldm) ~/stable-diffusion$ python3 scripts/dream_web.py
~~~~
You can then connect to the server by pointing your web browser at
http://localhost:9090, or at the network name or IP address of the machine running it.
Kudos to [Tesseract Cat](https://github.com/TesseractCat) for
contributing this code.
![Dream Web Server](static/dream_web_server.png)
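
You can also drive the server programmatically rather than through the browser form. The sketch below uses only the Python standard library and sends the JSON fields the web form submits (prompt, initimg, iterations, steps, width, height); the cfgscale and seed field names are inferred from the handler in scripts/dream_web.py and may differ, and the response format is not documented here.

~~~~
import json
import urllib.request

# Sketch of the request shape only; cfgscale and seed are assumed field names
# and the response body is not parsed here.
payload = {
    "prompt": "a sea otter floating in a mountain lake",
    "initimg": None,        # base64 image data for img2img; None means txt2img
    "iterations": 1,
    "steps": 50,
    "width": 512,
    "height": 512,
    "cfgscale": 7.5,
    "seed": 42,
}
request = urllib.request.Request(
    "http://localhost:9090",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print("server replied with HTTP", response.status)
~~~~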
## Weighted Prompts
You may weight different sections of the prompt to tell the sampler to attach different levels of
@@ -171,6 +188,7 @@ repository and associated paper for details and limitations.
## Changes
* v1.09 (24 August 2022)
* A barebones web server for interactive online generation of txt2img and img2img.
* A new -v option allows you to generate multiple variants of an initial image
in img2img mode. (kudos to [Oceanswave](https://github.com/Oceanswave). [See this discussion in the PR for examples and details on use](https://github.com/lstein/stable-diffusion/pull/71#issuecomment-1226700810))
* Added ability to personalize text to image generation (kudos to [Oceanswave](https://github.com/Oceanswave) and [nicolai256](https://github.com/nicolai256))
@@ -459,7 +477,9 @@ to send me an email if you use and like the script.
[Peter Kowalczyk](https://github.com/slix), [Henry Harrison](https://github.com/hwharrison),
[xraxra](https://github.com/xraxra), [bmaltais](https://github.com/bmaltais), [Sean McLellan](https://github.com/Oceanswave),
[nicolai256](https://github.com/nicolai256), [Benjamin Warner](https://github.com/warner-benjamin),
and [tildebyte](https://github.com/tildebyte)
[tildebyte](https://github.com/tildebyte),
and [Tesseract Cat](https://github.com/TesseractCat)
Original portions of the software are Copyright (c) 2020 Lincoln D. Stein (https://github.com/lstein)

scripts/dream_web.py

@@ -1,11 +1,20 @@
import json
import base64
import os
from pytorch_lightning import logging
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
print("Loading model...")
from ldm.simplet2i import T2I
model = T2I()
model = T2I(sampler_name='k_lms')
# to get rid of annoying warning messages from pytorch
import transformers
transformers.logging.set_verbosity_error()
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)
print("Initializing model, be patient...")
model.load_model()
class DreamServer(BaseHTTPRequestHandler):
def do_GET(self):
@@ -13,7 +22,7 @@ class DreamServer(BaseHTTPRequestHandler):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
with open("./scripts/static/index.html", "rb") as content:
with open("./static/index.html", "rb") as content:
self.wfile.write(content.read())
elif os.path.exists("." + self.path):
self.send_response(200)
@@ -33,7 +42,7 @@ class DreamServer(BaseHTTPRequestHandler):
post_data = json.loads(self.rfile.read(content_length))
prompt = post_data['prompt']
initimg = post_data['initimg']
batch = int(post_data['batch'])
iterations = int(post_data['iterations'])
steps = int(post_data['steps'])
width = int(post_data['width'])
height = int(post_data['height'])
@@ -46,7 +55,7 @@ class DreamServer(BaseHTTPRequestHandler):
if initimg is None:
# Run txt2img
outputs = model.txt2img(prompt,
batch_size = batch,
iterations=iterations,
cfg_scale = cfgscale,
width = width,
height = height,
@@ -61,7 +70,7 @@ class DreamServer(BaseHTTPRequestHandler):
# Run img2img
outputs = model.img2img(prompt,
init_img = "./img2img-tmp.png",
batch_size = batch,
iterations = iterations,
cfg_scale = cfgscale,
seed = seed,
steps = steps)
@@ -77,7 +86,7 @@ class DreamServer(BaseHTTPRequestHandler):
if __name__ == "__main__":
dream_server = ThreadingHTTPServer(("0.0.0.0", 9090), DreamServer)
print("Started Stable Diffusion dream server!")
print("\n\n* Started Stable Diffusion dream server! Point your browser at http://localhost:9090 or use the host's DNS name or IP address. *")
try:
dream_server.serve_forever()

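One consequence of moving the static folder to the top level: the GET handler above opens ./static/index.html relative to the current working directory, so dream_web.py should be launched from the repository root, as the README command does. A minimal sanity check, offered only as a sketch and not part of this commit:

~~~~
import os

# The GET handler reads ./static/index.html relative to the working directory,
# so confirm we are at the repository root before starting the server.
assert os.path.exists("./static/index.html"), \
    "run scripts/dream_web.py from the top of the repository"
~~~~
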
@@ -1 +1 @@
Subproject commit ef1bf07627c9a10ba9137e68a0206b844544a7d9
Subproject commit db5799068749bf3a6d5845120ed32df16b7d883b

static/index.html

@@ -1,6 +1,6 @@
<html>
<head>
<title>Stable Diffusion WebUI</title>
<title>Stable Diffusion Dream Server</title>
<link rel="icon" href="data:,">
<style>
* {
@@ -156,7 +156,7 @@
</head>
<body>
<div id="search">
<h2 id="header">Stable Diffusion</h2>
<h2 id="header">Stable Diffusion Dream Server</h2>
<form id="generate-form" method="post" action="#">
<fieldset>
@@ -164,8 +164,8 @@
<input type="submit" id="submit" value="Generate">
</fieldset>
<fieldset id="generate-config">
<label for="batch">Batch Size:</label>
<input value="1" type="number" id="batch" name="batch">
<label for="iterations">Images to generate:</label>
<input value="1" type="number" id="iterations" name="iterations">
<label for="steps">Steps:</label>
<input value="50" type="number" id="steps" name="steps">
<label for="cfgscale">Cfg Scale:</label>
@@ -183,6 +183,7 @@
<button type="button" id="reset">&olarr;</button>
</fieldset>
</form>
<div id="about">For news and support for this web service, visit our <a href="http://github.com/lstein/stable-diffusion">GitHub site</a></div>
</div>
<hr style="width: 200px">
<div id="results">