An update to Anthropic's terms of service permits its AI tools to be used for identifying covert campaigns and providing advance warning of potential military activities. The partnership grants Claude access to Palantir's Impact Level 6 (IL6) environment, a US Defense Department accreditation covering data classified up to "secret" that is critical to national security. The move places Anthropic in ethically fraught territory and deepens concerns about the growing ties between the AI industry and the US military-industrial complex.
Key takeaways:
- AI company Anthropic, known for prioritizing safety, has partnered with defense contractor Palantir and Amazon Web Services to bring its AI chatbot Claude to US intelligence and defense agencies.
- The partnership aims to help US defense and intelligence agencies process complex data rapidly, identify patterns and trends, and make informed decisions in time-sensitive situations.
- Anthropic's AI tools can be used for identifying covert influence or sabotage campaigns and providing advance warning of potential military activities, according to its recently expanded terms of service.
- The partnership allows Claude to access information held in Palantir's Impact Level 6 (IL6) environment, a US Defense Department accreditation covering data classified up to "secret" that can be critical to national security. This raises ethical concerns and potential risks given AI systems' inherent flaws.