InvokeAI/invokeai
Latest commit dab03fb646 by Lincoln Stein (2023-07-11 15:25:39 -04:00): rename gpu_mem_reserved to max_vram_cache_size

To be consistent with max_cache_size, the amount of memory held in
VRAM for model caching is now controlled by the max_vram_cache_size
configuration parameter.
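As a sketch of how the renamed parameter might be set, a fragment of an invokeai.yaml configuration file is shown below; the section names and values here are assumptions for illustration, not taken from this commit:

```yaml
# Hypothetical invokeai.yaml fragment -- section layout is an assumption.
InvokeAI:
  Memory/Performance:
    max_cache_size: 6.0        # GB of RAM reserved for the model cache
    max_vram_cache_size: 2.75  # GB of VRAM held for model caching
                               # (formerly gpu_mem_reserved)
```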

Organization of the source tree:

app -- Home of node invocations and services
assets -- Images and other data files used by InvokeAI
backend -- Non-user-facing libraries, including the rendering
	core
configs -- Configuration files used at install and run times
frontend -- User-facing scripts, including the CLI and the WebUI
version -- Current InvokeAI version string, stored
	in version/invokeai_version.py