The policy change comes as children and teens increasingly turn to generative AI tools for schoolwork and personal issues, and as rival AI vendors, including Google and OpenAI, explore more child-focused use cases. Concerns about misuse remain, however, with over half of children reporting that they have seen generative AI used negatively. Calls for guidelines on children's use of the technology are growing, with UNESCO pushing for government regulation of generative AI in education.
Key takeaways:
- AI startup Anthropic is changing its policies to allow minors to use its generative AI systems through third-party apps, provided these apps implement specific safety features and disclose which Anthropic technologies they're using.
- Developers building AI-powered apps for minors must implement safety measures such as age verification, content moderation and filtering, and educational resources on safe and responsible AI use (a minimal sketch of what such app-side gating could look like follows this list).
- Developers using Anthropic’s AI models must comply with child safety and data privacy regulations such as the Children’s Online Privacy Protection Act (COPPA), and Anthropic plans to periodically audit apps for compliance.
- The policy change comes as children and teens increasingly use generative AI tools for schoolwork and personal issues, and as rival AI vendors like Google and OpenAI explore more use cases aimed at children.
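Anthropic's policy describes what these safeguards must do, not how to build them. As a purely illustrative sketch, app-side gating of the kind the policy requires might look like the following; the `User` record, `call_model` placeholder, age threshold, and topic list are all assumptions for this example, not part of Anthropic's policy text.

```python
# Hypothetical sketch of app-side gating a third-party developer might add
# before forwarding a minor's prompt to a model. Thresholds, topic lists,
# and helper names are illustrative assumptions, not Anthropic requirements.
from dataclasses import dataclass

MINIMUM_AGE = 13  # assumption: an app-defined threshold, e.g. chosen with COPPA in mind
BLOCKED_TOPICS = {"violence", "self-harm", "adult content"}  # illustrative filter list


@dataclass
class User:
    user_id: str
    verified_age: int | None  # populated by whatever age-verification flow the app uses


def is_allowed(user: User, prompt_topics: set[str]) -> bool:
    """Return True only if the user passes the age gate and the prompt
    clears the app's content filter (both are the app's responsibility)."""
    if user.verified_age is None or user.verified_age < MINIMUM_AGE:
        return False  # unverified or under-age: block before any model call
    return not (prompt_topics & BLOCKED_TOPICS)


def handle_prompt(user: User, prompt: str, prompt_topics: set[str]) -> str:
    if not is_allowed(user, prompt_topics):
        # Point blocked users to the app's educational resources instead
        return "This request can't be processed. See our guide to safe and responsible AI use."
    return call_model(prompt)


def call_model(prompt: str) -> str:
    """Placeholder for the app's actual model integration."""
    raise NotImplementedError
```

In practice the content check would rely on a real moderation service rather than a keyword set, but the shape is the same: verify age and filter the request before any model call is made.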