Lincoln Stein (dab03fb646): rename gpu_mem_reserved to max_vram_cache_size
To be consistent with max_cache_size, the amount of memory to hold in
VRAM for model caching is now controlled by the max_vram_cache_size
configuration parameter.
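
Below is a minimal sketch of how the two settings sit side by side in a YAML
configuration file. The field names max_cache_size and max_vram_cache_size come
from the commit message above; the loader, the dataclass, the default values,
and the GB units are illustrative assumptions, not InvokeAI's actual
configuration API.

    # Illustrative sketch only -- not InvokeAI's configuration API.
    # The two field names come from the commit message above; the loader,
    # defaults, and GB units are assumptions made for this example.
    from dataclasses import dataclass

    import yaml  # PyYAML

    @dataclass
    class CacheSettings:
        max_cache_size: float = 6.0        # RAM (GB, assumed) for the model cache
        max_vram_cache_size: float = 2.75  # VRAM (GB, assumed) held for cached models

    def load_cache_settings(path: str = "invokeai.yaml") -> CacheSettings:
        """Read both cache limits from a YAML file, falling back to defaults."""
        with open(path) as f:
            raw = yaml.safe_load(f) or {}
        return CacheSettings(
            max_cache_size=float(raw.get("max_cache_size", 6.0)),
            max_vram_cache_size=float(raw.get("max_vram_cache_size", 2.75)),
        )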

Organization of the source tree:

app -- Home of node invocations and services
assets -- Images and other data files used by InvokeAI
backend -- Non-user-facing libraries, including the rendering core
configs -- Configuration files used at install time and run time
frontend -- User-facing scripts, including the CLI and the WebUI
version -- Current InvokeAI version string, stored in
	version/invokeai_version.py (see the sketch after this list)
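
As a quick usage note, the version string can be read directly from that
module. This is a minimal sketch that assumes the module exposes a __version__
attribute; only the file path is stated above, so the attribute name is an
assumption.

    # Minimal sketch: read the version string that the list above says lives
    # in version/invokeai_version.py. The __version__ attribute name is an
    # assumption; only the file path is given above.
    from invokeai.version.invokeai_version import __version__

    print(f"InvokeAI version: {__version__}")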