The company is adapting its service agreements to meet the unique needs of governments, with contractual exceptions allowing Claude to be used for legally authorized foreign intelligence analysis. However, restrictions on disinformation campaigns, the design or use of weapons, censorship, and malicious cyber operations remain in place. Anthropic is committed to ensuring AI serves the public interest while mitigating potential risks, and is working with governments to develop effective AI testing and measurement regimes.
Key takeaways:
- Anthropic's AI models Claude 3 Haiku and Claude 3 Sonnet are now available in the AWS Marketplace for the US Intelligence Community and in AWS GovCloud, offering a wide range of potential applications for government agencies.
- The company is adapting its service agreements to meet the unique needs and legal authorities of governments, including crafting contractual exceptions to enable beneficial uses by selected government agencies.
- Anthropic's policy currently applies only to models at AI Safety Level 2 (ASL-2) under its Responsible Scaling Policy, and the company commits to regularly evaluating its partnerships and their impacts.
- Anthropic has been committed to supporting effective government policies about AI since its founding, and believes working with governments is essential to ensuring the world safely transitions toward transformative AI.