Google defended the video, saying it showed real outputs from Gemini and that the company had been transparent about editing the demo. Critics counter that the video misrepresents how the model actually performs and that Google may have damaged its credibility. The incident raises broader questions about the veracity of AI demos and the temptation to exaggerate capabilities.
Key takeaways:
- Google's new Gemini AI model's demo video has been criticized for misrepresenting the model's capabilities and interactions.
- The video, which shows Gemini responding to various inputs, was produced using carefully tuned text prompts paired with still images, not the live, real-time interactions the footage implies.
- Although the video shows real Gemini outputs, the actual interactions were more involved than, and different from, those depicted, leading to accusations that the video was 'faked'.
- This misrepresentation has fueled skepticism and eroded trust in Google's claims about the model's capabilities.