- User can choose to download just the recommended models, customize the
  list of models to download, or skip downloading altogether.
- Downloads directly to the models directory instead of to the HuggingFace cache
  (see the download sketch after this list)
- Able to resume interrupted downloads
- User can select which weight files to download using the HuggingFace cache
- User must log in to HuggingFace, generate an access token, and accept the
  license terms the very first time this is run. After that, everything
  works automatically.
- Added a placeholder for the model-installation docs
- Also removed unused config files. Hopefully they weren't needed for
  textual inversion, but I don't believe they were.
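
A minimal sketch of the direct-download-and-resume flow described above, assuming `huggingface_hub` is used under the hood; the repo id, filename, models path, and token below are illustrative placeholders, not InvokeAI's actual values:

```python
from pathlib import Path

from huggingface_hub import hf_hub_download

# Hypothetical models directory; InvokeAI's actual layout may differ.
MODELS_DIR = Path("models/ldm/stable-diffusion-v1")
MODELS_DIR.mkdir(parents=True, exist_ok=True)

# hf_hub_download writes into local_dir instead of the HF cache, and an
# interrupted transfer can be picked up again on the next call (older
# versions of huggingface_hub expose this via a resume_download flag).
weights_path = hf_hub_download(
    repo_id="CompVis/stable-diffusion-v-1-4-original",  # example repo
    filename="sd-v1-4.ckpt",                            # example weight file
    local_dir=MODELS_DIR,
    token="hf_xxxxxxxxxxxxxxxx",  # user access token; the license must be accepted on the site first
)
print(f"weights saved to {weights_path}")
```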
- In this way, the action is used from the base repository
- Also use the new secret HUGGINGFACE_TOKEN (format: username:token); see the sketch after this list
- e.g. `noreply@github.com:hf_lkaugfklagwrjglaslzfgkjzzf`
- Change the PR prompt file to validate_pr_prompt.txt
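
Since the secret packs the username and token into a single string, a consumer presumably splits it on the first colon; a rough Python sketch (the environment-variable handling here is an assumption, not the workflow's actual code):

```python
import os

# Split a "username:token" secret, e.g. noreply@github.com:hf_xxxx,
# on the first colon only.
secret = os.environ.get("HUGGINGFACE_TOKEN", "")
username, _, token = secret.partition(":")
if not token:
    raise SystemExit("HUGGINGFACE_TOKEN must be in username:token form")
print(f"logging in to HuggingFace as {username!r}")
```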
The Args object would fail when trying to retrieve metadata from
an image file that did not contain InvokeAI-generated metadata, such
as a JPG. This corrects that by returning dummy values (seed of zero,
prompt of '') to avoid downstream breakage.
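
A hedged sketch of this fallback behavior; the helper name and the metadata lookup are illustrative rather than the actual Args implementation:

```python
import json

from PIL import Image


def read_generation_metadata(path: str) -> dict:
    """Return generation metadata for an image, falling back to dummy
    values (seed 0, empty prompt) when the file has no InvokeAI-generated
    metadata, e.g. a plain JPG."""
    defaults = {"seed": 0, "prompt": ""}
    try:
        with Image.open(path) as img:
            raw = img.info.get("sd-metadata")  # text chunk InvokeAI embeds in its PNGs
    except OSError:
        return defaults
    if not raw:
        return defaults
    try:
        return {**defaults, **json.loads(raw)}
    except (TypeError, ValueError):
        return defaults
```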
The k_samplers come with a "karras" noise schedule which performs
very well at low step counts but becomes noisy at higher ones.
This commit introduces a threshold (currently 30 steps) at which the
k_samplers switch from the Karras schedule to the older model
noise schedule.
- This sets a step switchover point at which the k-samplers stop using the
  Karras noise schedule and start using the LatentDiffusion noise schedule.
  The rationale is that the Karras schedule produces excellent results at
  low step counts but becomes unstable at higher ones.
- A new command argument, --karras_max, lets the user set where the
  switchover occurs. The default is 29 steps (steps 1-29 use Karras;
  30 or greater use LDM). See the sketch after this list.
- Tildebyte: sorry to do a fast-forward three-way merge for this,
  but rebasing was just too painful due to extensive recent
  changes to the diffuser code.
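
A hedged sketch of the switchover logic described above, following k-diffusion conventions (`get_sigmas_karras`, a `CompVisDenoiser`-style wrapper with a `sigmas` buffer); the function name and exact wiring are assumptions, not InvokeAI's actual code:

```python
import k_diffusion as K


def choose_sigmas(model_wrap, steps: int, karras_max: int = 29):
    """Use the Karras noise schedule for short runs (steps <= karras_max),
    otherwise fall back to the model's own (LDM) schedule."""
    if steps <= karras_max:
        sigma_min = model_wrap.sigmas[0].item()
        sigma_max = model_wrap.sigmas[-1].item()
        return K.sampling.get_sigmas_karras(
            n=steps, sigma_min=sigma_min, sigma_max=sigma_max, device="cpu"
        )
    return model_wrap.get_sigmas(steps)
```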