## What type of PR is this? (check all applicable)
- [X] Refactor
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] Yes
## Have you updated all relevant documentation?
- [X] Yes
## Description
### Refactoring
This PR refactors `invokeai.app.services.config` to be easier to
maintain by splitting off the argument, environment and init file
parsing code from the InvokeAIAppConfig object. This will hopefully make
it easier for people to find the place where the various settings are
defined.
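As a rough, purely illustrative sketch of the shape of this split (the function names below are hypothetical and do not reflect the actual layout of `invokeai.app.services.config`):

```python
# Hypothetical sketch only: the settings object stays a plain declaration,
# while argument, environment, and init-file parsing live in separate helpers.
import os
from argparse import ArgumentParser
from pathlib import Path
from typing import Any, Dict

import yaml


def load_init_file(path: Path) -> Dict[str, Any]:
    """Read an invokeai.yaml-style file and flatten its sections into one dict."""
    sections = yaml.safe_load(path.read_text()) or {}
    flat: Dict[str, Any] = {}
    for value in sections.values():
        if isinstance(value, dict):
            flat.update(value)
    return flat


def load_env_overrides(prefix: str = "INVOKEAI_") -> Dict[str, str]:
    """Collect environment-variable overrides such as INVOKEAI_RAM=14.5."""
    return {k[len(prefix):].lower(): v for k, v in os.environ.items() if k.startswith(prefix)}


def parse_cli_overrides(argv=None) -> Dict[str, Any]:
    """Parse command-line flags that take precedence over file and environment values."""
    parser = ArgumentParser(prog="invokeai")
    parser.add_argument("--ram", type=float)
    parser.add_argument("--device", type=str)
    args, _ = parser.parse_known_args(argv)
    return {k: v for k, v in vars(args).items() if v is not None}
```

Settings gathered this way can then be merged (init file, then environment, then CLI) before the settings object is constructed, which keeps the precedence rules in one place.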
### New Features
In collaboration with @StAlKeR7779, I have renamed and reorganized the
settings controlling image generation and model management to be more
intuitive. The relevant portion of the init file now looks like this:
```
Model Cache:
    ram: 14.5
    vram: 0.5
    lazy_offload: true
Device:
    precision: auto
    device: auto
Generation:
    sequential_guidance: false
    attention_type: auto
    attention_slice_size: auto
    force_tiled_decode: false
```
Key differences are:
1. Split `Performance/Memory` into `Device`, `Generation` and `Model
Cache`
2. Added the ability to force the `device`. The value of this option is
one of {`auto`, `cpu`, `cuda`, `cuda:1`, `mps`}
3. Added the ability to force the `attention_type`. Possible values are
{`auto`, `normal`, `xformers`, `sliced`, `torch-sdp`}
4. Added the ability to force the `attention_slice_size` when `sliced`
attention is in use. The value of this option is one of {`auto`, `max`}
or an integer between 1 and 8 (see the sketch after this list for how
these constrained values can be expressed).
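To make the allowed values above concrete, here is a minimal pydantic sketch of how such constrained settings could be declared. It is illustrative only and does not reproduce the actual `InvokeAIAppConfig` field definitions (in particular, the `precision` values other than `auto` are an assumption):

```python
from typing import Literal, Union

from pydantic import BaseModel, conint


class GenerationSettings(BaseModel):
    """Illustrative model mirroring the Device and Generation sections above."""

    # Device
    device: Literal["auto", "cpu", "cuda", "cuda:1", "mps"] = "auto"
    precision: Literal["auto", "float16", "float32"] = "auto"  # non-auto values are an assumption

    # Generation
    sequential_guidance: bool = False
    attention_type: Literal["auto", "normal", "xformers", "sliced", "torch-sdp"] = "auto"
    # Either a keyword or an explicit slice count between 1 and 8
    attention_slice_size: Union[Literal["auto", "max"], conint(ge=1, le=8)] = "auto"
    force_tiled_decode: bool = False


settings = GenerationSettings(attention_type="sliced", attention_slice_size=4)
print(settings.attention_slice_size)  # 4
# GenerationSettings(attention_slice_size=12)  # would raise a ValidationError
```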
@StAlKeR7779 Please confirm that I wired the `attention_type` and
`attention_slice_size` configuration options to the diffusers backend
correctly.
In addition, I have exposed the generation-related configuration options
to the TUI:
![image](https://github.com/invoke-ai/InvokeAI/assets/111189/8c0235d4-c3b0-494e-a1ab-ff45cdbfd9af)
### Backward Compatibility
This refactor should be backward compatible with earlier versions of
`invokeai.yaml`. If the user re-runs the `invokeai-configure` script,
`invokeai.yaml` will be upgraded to the current format. Several
configuration attributes were renamed while preserving backward
compatibility, and the code has been updated to use the new names where
appropriate. For the record:
| Old Name | Preferred New Name | Comment |
| ------------ | --------------- | ------------ |
| `max_cache_size` | `ram_cache_size` | |
| `max_vram_cache` | `vram_cache_size` | |
| `always_use_cpu` | `use_cpu` | Better to check `conf.device == "cpu"` |
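For illustration, one common way to keep the old attribute names working while steering callers toward the new ones is a thin property alias. This is only a sketch of the pattern (the default values are placeholders), not necessarily how the PR implements it:

```python
import warnings


class AppConfig:
    """Sketch of legacy-name aliases; the names follow the table above."""

    def __init__(self, ram_cache_size: float = 7.5, vram_cache_size: float = 0.25, device: str = "auto"):
        self.ram_cache_size = ram_cache_size
        self.vram_cache_size = vram_cache_size
        self.device = device

    @property
    def max_cache_size(self) -> float:
        warnings.warn("max_cache_size is deprecated; use ram_cache_size", DeprecationWarning)
        return self.ram_cache_size

    @property
    def max_vram_cache(self) -> float:
        warnings.warn("max_vram_cache is deprecated; use vram_cache_size", DeprecationWarning)
        return self.vram_cache_size

    @property
    def always_use_cpu(self) -> bool:
        # Per the table: prefer checking conf.device == "cpu" directly.
        warnings.warn("always_use_cpu is deprecated; check device == 'cpu'", DeprecationWarning)
        return self.device == "cpu"


conf = AppConfig()
print(conf.max_cache_size)  # 7.5, via the legacy alias
print(conf.always_use_cpu)  # False
```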