Keyframer is powered by a large language model, GPT-4, which generates CSS animation code from a static SVG image and a natural-language prompt. Users can then refine the resulting animation either by editing the generated CSS directly or by issuing follow-up prompts, supporting both exploration and iterative refinement. The researchers hope that Keyframer will inspire future animation design tools that combine the generative capabilities of LLMs with dynamic editors that let creators maintain creative control.
Key takeaways:
- Apple researchers have developed a new AI tool called 'Keyframer' that uses large language models (LLMs) to animate static images based on natural language prompts.
- 'Keyframer' generates CSS animation code from a static SVG image and a text prompt, allowing users to create animations by simply typing a description of the desired motion.
- The tool supports iterative design and allows users to refine animations through direct editing of the generated output or by adding new prompts in natural language.
- 'Keyframer' has the potential to democratize the animation process, making it more accessible to non-experts and marking a significant shift in the use of AI in the creative process.
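To make the workflow concrete, here is a minimal sketch of the kind of CSS a tool like Keyframer might emit for a prompt such as "make the sun rise." This is an illustrative assumption, not output from the actual system; the `#sun` selector presumes the input SVG contains an element with that id.

```css
/* Hypothetical generated animation for the prompt "make the sun rise".
   Assumes the input SVG has an element with id="sun". */
#sun {
  animation: rise 3s ease-out forwards;
}

@keyframes rise {
  from {
    transform: translateY(80px); /* start below the horizon */
    opacity: 0.3;
  }
  to {
    transform: translateY(0);    /* settle at the original position */
    opacity: 1;
  }
}
```

Because the output is ordinary CSS, a user can tweak values like the duration or easing by hand, or ask the model to adjust them with a follow-up prompt.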