How much can a video generated by the same diffusion model differ across GPU architectures if the initial noise latent is fixed? [D]
Hi! I am trying to sanity-check an assumption about diffusion video generation reproducibility. Suppose I run the same video diffusion model on two different GPU architectures, with:

- identical model weights and implementation (same attention backend, etc.)
- identical prompt and parameters (same number of denoising steps, etc.)
- a deterministic sampler (no extra noise is injected during inference)
- the exact same starting noise latent

Could I expect more or less the same generated video? I understand t...
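For context on why the answer is usually "close but not bit-identical": even with everything above fixed, different GPU architectures (and different kernel implementations) can perform reductions such as attention softmax sums in a different order, and floating-point addition is not associative. A minimal pure-Python sketch of the underlying effect (the values here are illustrative, not from any actual model):

```python
# Floating-point addition is not associative: summing the same three
# numbers in a different order gives a different result. Different GPU
# kernels pick different reduction orders, so tiny discrepancies like
# this appear per-layer and can compound over many denoising steps.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # cancellation happens first, c survives -> 1.0
right = a + (b + c)  # c is absorbed into b's rounding -> 0.0

print(left, right)   # 1.0 0.0
```

In a deep network run for dozens of denoising steps, these per-operation rounding differences can accumulate into visibly different (though usually semantically similar) frames, which is why bit-exact cross-architecture reproduction generally requires deterministic kernels with a fixed reduction order, not just a fixed seed.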