
How Google taught AI to doubt itself

Sep 20, 2023 - theverge.com
Google's chatbot, Bard, has been updated to help users verify the accuracy of its responses. The new feature allows users to "double-check" Bard's responses by clicking a Google button, which then highlights sentences in green or brown. Green indicates that the information can be backed up by cited web pages, while brown suggests that Bard doesn't know the source of the information, indicating a potential error. This feature aims to address the issue of chatbots being "confidently wrong," as they often generate text based on probabilistic guesses rather than established facts.

Bard can also now connect to various Google products, including Gmail, Docs, Drive, YouTube, and Maps, allowing users to search, summarize, and ask questions about documents stored in their Google account in real time. This integration is currently limited to personal accounts. Despite these advancements, the article notes that the responsibility for verifying the accuracy of chatbot responses still largely falls on the user, and it expresses hope for future improvements that would let AI more effectively check its own work.

Key takeaways:

  • Google's chatbot Bard has a new feature that allows users to double-check its responses using Google Search, highlighting statements that can be substantiated with web content.
  • The double-checking feature turns sentences green if they can be linked to cited web pages and brown if Bard doesn't know where the information came from, indicating a likely mistake.
  • Bard can now connect to personal Gmail, Docs, Drive, YouTube, and Maps accounts, allowing users to search, summarize, and ask questions about their stored documents in real time.
  • Despite these advancements, the task of steering chatbots towards the right answer still heavily relies on the user, highlighting the need for AI tools to be able to check their own work more effectively.
