
Google, Anthropic Working to Address Limitations of GenAI

Feb 14, 2024 - pymnts.com
Google and Anthropic are addressing the limitations of their generative AI systems, including hallucinations, copyright, and sensitive data, according to The Wall Street Journal. Eli Collins, VP of product management at Google DeepMind, proposed letting users identify the sources of information provided by AI systems, improving transparency and reliability. Anthropic, an AI startup, is working to reduce hallucinations and improve accuracy, with co-founder Jared Kaplan discussing the development of data sets that train the model to respond with "I don't know" when it lacks sufficient information.

The issue of model training data provenance has also been highlighted, with The New York Times filing a lawsuit alleging unauthorized use of its content by Microsoft and OpenAI to train their AI models. Both Google and Anthropic are focusing on the importance of hardware in building powerful AI models, working on improving the availability, capacity, and cost-effectiveness of AI chips used for training. Google’s in-house chips, Tensor Processing Units (TPUs), have been developed to enhance efficiency and reduce costs.

Key takeaways:

  • Google and Anthropic are working to address the limitations of generative AI systems, including issues around hallucinations, copyright, and sensitive data.
  • One major concern is AI systems confidently producing incorrect statements, or "hallucinations". Google and Anthropic are working to reduce these occurrences and improve accuracy.
  • Eli Collins of Google DeepMind proposed a solution for transparency, enabling users to easily identify the sources of information provided by AI systems, while Anthropic is developing data sets where the AI model responds with “I don’t know” when it lacks sufficient information.
  • Both Google and Anthropic are focusing on the importance of hardware in building powerful AI models, working on improving the availability, capacity, and cost-effectiveness of AI chips used for training.