
A Book App Used AI to 'Roast' Its Users. It Went Anti-Woke Instead

Jan 03, 2025 - wired.com
Fable, a social media app for book enthusiasts, introduced an AI-powered end-of-year summary feature for 2024 that unexpectedly generated offensive and combative content. Users, including writer Danny Groves and influencer Tiana Trammell, reported receiving summaries with inappropriate comments on race, gender, and sexual orientation. The backlash led to widespread criticism on social media, prompting Fable to apologize and announce changes to its AI summaries, such as removing the roasting feature and offering an opt-out option. However, some users, like A.R. Kaufer and Trammell, felt the response was inadequate and chose to delete their accounts.

The incident highlights ongoing issues with generative AI tools, which have a history of producing biased outputs that reflect societal biases embedded in their training data. Similar problems have been observed with other AI systems, such as OpenAI's DALL-E and Google's Gemini, which have produced racially insensitive content. Critics argue that AI models need more rigorous testing and safeguards to prevent such missteps, emphasizing the importance of addressing bias in AI development.

Key takeaways:

- Fable's AI-powered end-of-year summary feature for book readers faced backlash for generating summaries with inappropriate and biased commentary.
- The company apologized and announced changes to improve the AI summaries, including removing the roasting feature and adding an opt-out option.
- Some users, including writers and influencers, were dissatisfied with the response and chose to delete their accounts, calling for more rigorous testing and safeguards.
- This incident highlights ongoing issues with generative AI tools, which can perpetuate societal biases and produce offensive content.