Whether the sampling method is DDIM or PLMS makes some difference, but not one that is obvious to the human eye.
The effect of image size is far greater: change the size and you get a completely different image.
Assuming the composition would stay the same even at a small size, I tried the approach of "generate many images quickly at a small size, then pick the best ones and regenerate them at higher resolution", but it didn't work at all (a sketch of the workflow is below).
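A minimal sketch of that workflow, written with the Hugging Face diffusers library (an assumption on my part; the prompt, model name, and seed are placeholders). It shows the "cheap small pass, then same seed at full size" idea, and in practice the 512x512 result has a different composition from the 256x256 one, so selecting the best small image tells you little about the large one.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat sitting on a bench"  # hypothetical prompt
seed = 42

# Cheap pass: generate a small candidate with a fixed seed.
small = pipe(
    prompt,
    height=256, width=256,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]

# Full-size pass with the same seed: the composition comes out different,
# because the denoising runs on a latent of a different spatial size.
large = pipe(
    prompt,
    height=512, width=512,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]
```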
The LDM in Stable Diffusion runs the diffusion model in latent space
- It is not an approach that runs the diffusion model in the space of small images and then applies super-resolution (as with, e.g., video) afterwards
- Perhaps that is why (see the sketch after this list)
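A minimal sketch (my own illustration, not from the original post) of why size changes everything: Stable Diffusion's VAE downsamples by a factor of 8, so the diffusion process runs on a latent whose spatial size follows the requested image size. A 256x256 request and a 512x512 request start from differently shaped noise tensors, not from the same small picture that later gets upscaled.

```python
import torch

VAE_SCALE_FACTOR = 8   # SD v1 encoder downsampling factor
LATENT_CHANNELS = 4    # SD v1 latent channels

def initial_latent(height: int, width: int, seed: int) -> torch.Tensor:
    """Return the starting noise tensor the denoiser would operate on."""
    gen = torch.Generator().manual_seed(seed)
    shape = (1, LATENT_CHANNELS,
             height // VAE_SCALE_FACTOR, width // VAE_SCALE_FACTOR)
    return torch.randn(shape, generator=gen)

print(initial_latent(256, 256, 0).shape)  # torch.Size([1, 4, 32, 32])
print(initial_latent(512, 512, 0).shape)  # torch.Size([1, 4, 64, 64])
```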
This page is auto-translated from [/nishio/Stable Diffusionの画像サイズの影響](https://scrapbox.io/nishio/Stable Diffusionの画像サイズの影響) using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.