Sign up to save tools and stay up to date with the latest in AI
bg
bg
1

AI chatbots found to use racist stereotypes even after anti-racism training

Mar 11, 2024 - news.bensbites.co
A team of AI researchers from the Allen Institute for AI, Stanford University, and the University of Chicago have found that many popular large language models (LLMs) continue to use racist stereotypes, despite undergoing anti-racism training. The researchers trained AI chatbots on text documents written in African American English and Standard American English, and found that the responses to the former often supported negative stereotypes. For instance, GPT-4 suggested authors of texts in African American English were likely to be aggressive, rude, ignorant, and suspicious.

The study also revealed bias when the LLMs were asked to describe the potential occupations of the authors. The LLMs often associated authors of African American English texts with jobs that seldom require a degree or were related to sports or entertainment. The researchers concluded that larger LLMs showed more negative bias towards authors of African American English texts than smaller models, indicating a deep-rooted problem.

Key takeaways:

  • A team of AI researchers found that many popular large language models (LLMs) continue to use racist stereotypes even after anti-racism training.
  • The researchers trained AI chatbots on text documents written in African American English and Standard American English, and found that the chatbots returned results supporting negative stereotypes, particularly for the African American English texts.
  • The same LLMs were found to be more positive when asked to comment on African Americans in general, but showed bias when asked to describe what type of work the authors of the two types of papers might do for a living.
  • The research team concluded that the larger LLMs showed more negative bias toward authors of African American English texts than did the smaller models, indicating a deep-rooted problem.
View Full Article

Comments (0)

Be the first to comment!