
The AI Startup Anthropic, Which Is Always Talking About How Ethical It Is, Just Partnered With Palantir

Nov 08, 2024 - futurism.com
AI company Anthropic, known for prioritizing safety, has partnered with defense contractor Palantir and Amazon Web Services to bring its AI chatbot Claude to US intelligence and defense agencies. The move contradicts Anthropic's safety-first positioning, as AI chatbots are prone to leaking sensitive information. The partnership aims to support the US military-industrial complex by processing complex data rapidly, identifying patterns, streamlining document review, and aiding decision-making in time-sensitive situations.

A recent expansion of Anthropic's terms of service allows its AI tools to be used for identifying covert influence or sabotage campaigns and providing advance warning of potential military activities. The partnership also gives Claude access to Palantir's Impact Level 6 (IL6) environment, a Defense Department accreditation covering data classified up to "secret," including information critical to national security. This places Anthropic in ethically questionable territory and raises concerns about the growing ties between the AI industry and the US military-industrial complex.

Key takeaways:

  • AI company Anthropic, known for prioritizing safety, has partnered with defense contractor Palantir and Amazon Web Services to bring its AI chatbot Claude to US intelligence and defense agencies.
  • The partnership aims to support the US military-industrial complex by processing complex data rapidly, identifying patterns and trends, and helping US officials make informed decisions in time-sensitive situations.
  • Anthropic's AI tools can be used for identifying covert influence or sabotage campaigns and providing advance warning of potential military activities, according to its recently expanded terms of service.
  • The partnership gives Claude access to data handled in Palantir's Impact Level 6 (IL6) environment, which covers information classified up to "secret" and critical to national security, raising ethical concerns and potential risks given AI's inherent flaws.