Based on user studies, Gen-2 results are preferred over existing methods for Image to Image and Video to Video translation, with 73.53% of participants preferring them over Stable Diffusion 1.5 and 88.24% preferring them over Text2Live. Runway Research is dedicated to building multimodal AI systems that enable new forms of creativity and is at the forefront of developments in image and video synthesis.
Key takeaways:
- Gen-2 is a multimodal AI system developed by Runway Research that can generate novel videos using text, images, or video clips.
- The system can synthesize new videos realistically and consistently, either by applying the composition and style of an image or text prompt to the structure of a source video, or by using nothing but words.
- In user studies, Gen-2 results are preferred over existing methods for Image to Image and Video to Video translation, building on its predecessor, Gen-1.
- Runway Research is dedicated to building multimodal AI systems that enable new forms of creativity, with a focus on making the future of creativity accessible, controllable, and empowering for all.