The provenance of model training data has also come under scrutiny, with The New York Times filing a lawsuit alleging that Microsoft and OpenAI used its content without authorization to train their AI models. Google and Anthropic are also emphasizing the role of hardware in building powerful AI models, and both are working to improve the availability, capacity, and cost-effectiveness of the AI chips used for training. Google’s in-house chips, Tensor Processing Units (TPUs), were developed to improve efficiency and reduce costs.
Key takeaways:
- Google and Anthropic are working to address the limitations of generative AI systems, including issues around hallucinations, copyright, and sensitive data.
- One major concern is AI systems confidently producing incorrect statements, or 'hallucinations'; Google and Anthropic are working to reduce these occurrences and improve accuracy.
- Eli Collins of Google DeepMind proposed improving transparency by letting users easily identify the sources of information an AI system provides, while Anthropic is developing data sets that train its model to respond with “I don’t know” when it lacks sufficient information (a hypothetical sketch of such data follows this list).
- Both Google and Anthropic stress the importance of hardware in building powerful AI models and are working to improve the availability, capacity, and cost-effectiveness of the AI chips used for training.
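
To make the refusal-dataset idea above concrete, here is a minimal, hypothetical Python sketch of how such training records might be assembled. The file name, field names, and example content are assumptions for illustration only, not Anthropic's actual data format or pipeline.

```python
# Hypothetical sketch: building refusal-style fine-tuning records where the
# target answer is "I don't know" whenever the supporting context is missing.
# Field names and examples are illustrative assumptions.

import json


def build_example(question: str, context: str, answer: str | None) -> dict:
    """Return one training record; fall back to an explicit refusal when
    no grounded answer is available."""
    target = answer if answer is not None else "I don't know."
    return {"question": question, "context": context, "target": target}


examples = [
    build_example(
        "When were TPUs first announced?",
        "Google announced Tensor Processing Units at Google I/O 2016.",
        "They were first announced at Google I/O 2016.",
    ),
    # No supporting context, so the target is a refusal rather than a
    # confident guess (a 'hallucination').
    build_example(
        "How many TPU v5 chips exist today?",
        "",
        None,
    ),
]

with open("refusal_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The design choice here is simply that records lacking grounding map to a refusal string, so a model fine-tuned on such data learns to decline rather than fabricate an answer.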