The study also revealed bias when the LLMs were asked to describe the potential occupations of the authors: the models often matched authors of African American English texts with jobs that seldom require a degree or that relate to sports or entertainment. The researchers concluded that larger LLMs showed more negative bias toward authors of African American English texts than smaller models did, indicating a deep-rooted problem.
Key takeaways:
- A team of AI researchers found that many popular large language models (LLMs) continue to reproduce racist stereotypes even after anti-racism training.
- The researchers prompted the chatbots with text documents written in African American English and in Standard American English, and found that the models produced outputs reinforcing negative stereotypes, particularly for the African American English texts (a minimal version of this kind of probing is sketched after this list).
- The same LLMs responded more positively when asked to comment on African Americans in general, but showed bias when asked what kind of work the authors of the two types of texts might do for a living.
- The research team concluded that the larger LLMs showed more negative bias toward authors of African American English texts than did the smaller models, indicating a deep-rooted problem.
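The write-up does not include the study's code, but the probing idea in the second takeaway can be illustrated: present the same statement in two dialects and compare how strongly a model associates each version with positive or negative trait words. Below is a minimal sketch assuming a HuggingFace causal language model (`gpt2` here as a stand-in); the sentence pair, prompt template, and trait words are all illustrative assumptions, not the researchers' actual materials or method.

```python
# Minimal sketch of dialect-association probing with a small causal LM.
# All prompts, sentence pairs, and trait words are illustrative stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any HuggingFace causal LM works similarly
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum the model's log-probabilities for `continuation` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probs over the vocabulary at each position, predicting the next token.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    # Score only the continuation tokens (positions after the prompt).
    cont_ids = full_ids[0, prompt_ids.shape[1]:]
    cont_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    return sum(
        log_probs[0, pos, tok].item()
        for pos, tok in zip(cont_positions, cont_ids)
    )

# Paired sentences with the same meaning in two varieties (illustrative).
texts = {
    "AAE": "He be workin hard every day.",
    "SAE": "He is working hard every day.",
}
# Leading spaces keep BPE token boundaries aligned with the prompt.
traits = [" intelligent", " lazy", " brilliant", " dirty"]

for variety, sentence in texts.items():
    prompt = f'A person who says "{sentence}" is'
    scores = {t.strip(): continuation_logprob(prompt, t) for t in traits}
    print(variety, dict(sorted(scores.items(), key=lambda kv: -kv[1])))
```

Because the two prompts differ only in dialect, comparing the scores of the same trait words across them isolates the dialect variable, similar in spirit to the matched-guise technique from sociolinguistics. Higher log-probabilities for negative traits in the African American English condition would indicate the kind of covert association the study reports.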