Contractors working to improve Gemini evaluate each of its responses against criteria such as accuracy, safety, verbosity, and honesty. Internal communications at Google reference Claude, suggesting that contractors are comparing Gemini's outputs with those of Anthropic's rival model. This raises concerns about potential violations of Anthropic's terms of service, which prohibit using Claude to build competing products or to train rival AI models without Anthropic's approval. Although Google is a major investor in Anthropic, that investment does not exempt it from needing Anthropic's consent to use Claude in this way.
Key takeaways:
- Contractors working on Google's Gemini AI have compared its responses with those from Anthropic's Claude, raising legal and ethical concerns.
- Google has not said whether it obtained Anthropic's permission to use Claude in this testing; doing so without consent could breach Anthropic's terms of service.
- Contractors rate Gemini's responses on criteria such as accuracy and safety, and have noted that Gemini sometimes produces unsafe replies.
- Anthropic's terms of service prohibit using Claude to develop competing products or train rival models without permission.