HyunJeong Choe, director of engineering for the Gemini app, acknowledges the difficulty of ensuring factual accuracy in AI-generated summaries; the company's focus, she says, is on training the model to use the information it finds correctly. Jules Walter, product lead for international markets, points to testing programs that bring native-speaker perspectives into quality checks and to local teams that review datasets. A recent TechCrunch report noted that contractors working on Gemini had been instructed not to skip prompts, with Google responding that contractors evaluate responses for content, style, and format.
Key takeaways:
- Google is expanding Gemini's in-depth research mode to 40 more languages, giving Google One AI Premium plan subscribers access to an AI-powered research assistant.
- The in-depth research function follows a multi-step process: creating a research plan, finding relevant information, and generating a report (see the sketch after this list).
- Google faces challenges in ensuring the accuracy of AI-generated summaries in users' native languages, and says it focuses on using clean data and trustworthy sources.
- Google runs testing programs and local teams to provide quality checks from native-speaker perspectives, and contractors are required to evaluate responses for content, style, and format.
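
To make the multi-step takeaway concrete, here is a minimal sketch of a plan-then-search-then-report loop. It is an illustration only: the `ResearchAssistant` class, the `llm_complete` and `web_search` callables, and the prompts are hypothetical stand-ins supplied by the caller, not Google's Deep Research implementation or any Gemini API.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ResearchAssistant:
    llm_complete: Callable[[str], str]      # hypothetical: prompt in, text out
    web_search: Callable[[str], List[str]]  # hypothetical: query in, snippets out
    notes: List[str] = field(default_factory=list)

    def plan(self, topic: str) -> List[str]:
        # Step 1: ask the model for a short research plan (a list of questions).
        raw = self.llm_complete(f"List 3-5 research questions for: {topic}")
        return [line.strip("- ").strip() for line in raw.splitlines() if line.strip()]

    def gather(self, questions: List[str]) -> None:
        # Step 2: search for each question and keep the retrieved snippets as notes.
        for question in questions:
            for snippet in self.web_search(question):
                self.notes.append(f"{question}: {snippet}")

    def report(self, topic: str) -> str:
        # Step 3: ask the model to write a report grounded only in the collected notes.
        context = "\n".join(self.notes)
        return self.llm_complete(
            f"Using ONLY these notes, write a structured report on {topic}:\n{context}"
        )


def run_research(assistant: ResearchAssistant, topic: str) -> str:
    questions = assistant.plan(topic)  # create a research plan
    assistant.gather(questions)        # find relevant information
    return assistant.report(topic)     # generate the final report
```

Keeping the three steps separate mirrors the takeaway above: the plan exists as its own artifact before any searching happens, and the report is generated only from what was gathered along the way.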