Despite previous attempts to accelerate image generation through diffusion distillation, such as InstaFlow and LCM, MIT's DMD method appears to strike a better balance between generation speed and image detail. Another model, Stable Diffusion Turbo, can also generate images in a single step, though its results tend to improve with additional steps. Unlike other AI groups that closely guard their technology, MIT has made its findings publicly available.
Key takeaways:
- AI image generators typically work through a process known as 'diffusion', a time-consuming technique that requires many iterative steps. MIT researchers have found a way to reduce this to a single step using a new approach called 'distribution matching distillation' (DMD).
- The new method, developed by MIT's Computer Science and Artificial Intelligence Laboratory, is faster than typical image diffusion because it cuts the number of refinement iterations per image from 30-50 down to just one (see the sketch after this list).
- While other models such as InstaFlow and LCM have used diffusion distillation to accelerate image generation with varying results, MIT's new method appears to offer a better balance between speed and image detail.
- Stability AI has also created a model called Stable Diffusion Turbo that can generate 1-megapixel images in a single diffusion step, working in a similar manner to MIT's 'DMD' approach.
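To illustrate where the speedup comes from, here is a minimal Python sketch contrasting the usual iterative diffusion sampling loop with a one-step distilled generator. This is not MIT's implementation: `denoise_step` and `one_step_generator` are hypothetical placeholders standing in for real trained networks, and the "image" is just a small array, but the structure shows why reducing 30-50 iterations to one matters.

```python
# Conceptual sketch only (hypothetical stand-ins, not MIT's DMD code):
# compares multi-step diffusion sampling with a single-step distilled generator.
import numpy as np

def denoise_step(x, t, num_steps):
    """Hypothetical denoiser: nudges the sample toward a clean target.
    In a real model this would be one forward pass of a large neural network."""
    target = np.zeros_like(x)          # stand-in for the "clean image" estimate
    alpha = 1.0 / (num_steps - t)      # step size grows as sampling finishes
    return x + alpha * (target - x)

def sample_iteratively(shape, num_steps=50, seed=0):
    """Standard diffusion sampling: refine pure noise over many steps."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)     # start from Gaussian noise
    for t in range(num_steps):         # 30-50 network evaluations per image
        x = denoise_step(x, t, num_steps)
    return x

def one_step_generator(noise):
    """Hypothetical distilled student: maps noise to an image in one call.
    A distillation objective (e.g. distribution matching) would train this
    network so its single pass matches the multi-step teacher's outputs."""
    return np.zeros_like(noise)        # placeholder for the student's output

def sample_single_step(shape, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    return one_step_generator(noise)   # one network evaluation per image

if __name__ == "__main__":
    shape = (8, 8)                          # tiny "image" for illustration
    slow = sample_iteratively(shape)        # ~50 denoiser calls
    fast = sample_single_step(shape)        # 1 generator call
    print("iterative sample matches target:", np.allclose(slow, 0.0))
    print("single-step sample matches target:", np.allclose(fast, 0.0))
```

In practice each denoiser call is a full forward pass through a large network, so collapsing 30-50 calls into a single pass is what produces the reported speedup.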