Fix handling of init_timestep in StableDiffusionGeneratorPipeline and improve its documentation.

Ryan Dick 2024-06-25 18:38:13 -04:00
parent bd74b84cc5
commit 9a3b8c6fcb
2 changed files with 4 additions and 9 deletions


@ -299,9 +299,8 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
              HACK(ryand): seed is only used in a particular case when `noise` is None, but we need to re-generate the
                  same noise used earlier in the pipeline. This should really be handled in a clearer way.
              timesteps: The timestep schedule for the denoising process.
-             init_timestep: The first timestep in the schedule.
-                 TODO(ryand): I'm pretty sure this should always be the same as timesteps[0:1]. Confirm that that is the
-                 case, and remove this duplicate param.
+             init_timestep: The first timestep in the schedule. This is used to determine the initial noise level, so
+                 should be populated if you want noise applied *even* if timesteps is empty.
              callback: A callback function that is called to report progress during the denoising process.
              control_data: ControlNet data.
              ip_adapter_data: IP-Adapter data.
@ -316,9 +315,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
                  SD UNet model.
              is_gradient_mask: A flag indicating whether `mask` is a gradient mask or not.
          """
-         # TODO(ryand): Figure out why this condition is necessary, and document it. My guess is that it's to handle
-         # cases where densoisings_start and denoising_end are set such that there are no timesteps.
-         if init_timestep.shape[0] == 0 or timesteps.shape[0] == 0:
+         if init_timestep.shape[0] == 0:
              return latents

          orig_latents = latents.clone()


@ -49,9 +49,7 @@ class MultiDiffusionPipeline(StableDiffusionGeneratorPipeline):
      ) -> torch.Tensor:
          self._check_regional_prompting(multi_diffusion_conditioning)

-         # TODO(ryand): Figure out why this condition is necessary, and document it. My guess is that it's to handle
-         # cases where densoisings_start and denoising_end are set such that there are no timesteps.
-         if init_timestep.shape[0] == 0 or timesteps.shape[0] == 0:
+         if init_timestep.shape[0] == 0:
              return latents

          batch_size, _, latent_height, latent_width = latents.shape
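The behavior this commit settles on can be illustrated with a minimal sketch. The `maybe_denoise` helper below is hypothetical (it is not the pipeline's actual method), and the noise line is a stand-in for the scheduler's `add_noise`; the point is only the guard: an empty `init_timestep` short-circuits everything, while a populated `init_timestep` still applies noise even when `timesteps` is empty.

```python
import torch

def maybe_denoise(
    latents: torch.Tensor,
    init_timestep: torch.Tensor,
    timesteps: torch.Tensor,
) -> torch.Tensor:
    # Guard on init_timestep alone, as in the commit: an empty init_timestep
    # means no noise is added and no denoising runs, so the input latents
    # pass through unchanged.
    if init_timestep.shape[0] == 0:
        return latents
    # With a non-empty init_timestep, noise is applied at that level even if
    # `timesteps` itself is empty. Stand-in for scheduler.add_noise(...):
    noised = latents + 0.1 * torch.randn_like(latents)
    for _t in timesteps:  # zero iterations when timesteps is empty
        pass  # stand-in for a UNet denoising step
    return noised

latents = torch.zeros(1, 4, 8, 8)
# Empty init_timestep: latents come back untouched, even with no timesteps.
out = maybe_denoise(latents, torch.empty(0), torch.empty(0))
```

This is why the earlier `or timesteps.shape[0] == 0` clause was wrong to include: it skipped the noising step in exactly the case the new docstring describes, where `init_timestep` is populated but the denoising schedule is empty.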