Feature Story
Character.AI Says It’s Made Huge Changes to Protect Underage Users, But It’s Emailing Them to Recommend Conversations With AI Versions of School Shooters
Feb 03, 2025 · futurism.com
Character.AI has pledged to enhance safety features, such as content filters and usage alerts, but their effectiveness remains in question. After media inquiries, the company deactivated the problematic bot yet left related profiles online. Character.AI also recently signed the Inspired Internet Pledge to make the internet safer for young people, while simultaneously moving to dismiss a lawsuit by arguing that the First Amendment protects speech that allegedly resulted in a suicide. These actions highlight the ongoing tension between platform safety commitments and free speech claims.
Key takeaways
- Character.AI has been criticized for hosting chatbots modeled on real-life school shooters, raising concerns about the safety of underage users.
- The company is facing lawsuits alleging that its chatbots sexually groomed and emotionally abused minors, leading to severe consequences, including a teenager's suicide.
- Despite promises to improve safety measures, Character.AI has been found to still host inappropriate content and to recommend it to minors in its marketing emails.
- Character.AI has moved to dismiss a lawsuit by arguing that the First Amendment protects speech allegedly resulting in suicide.