The report warns that these biases could have serious implications, particularly in job screening and legal contexts where AI models are increasingly used. The researchers urged developers to address racism in large language models and highlighted the limitations of ethical guardrails, which, instead of eliminating the underlying problem, only teach the models to be more discreet about their racial biases. They also advocated for federal regulation to curtail the use of these technologies in sensitive areas.
Key takeaways:
- Large language models like OpenAI’s ChatGPT and Google’s Gemini hold racist stereotypes about speakers of African American Vernacular English (AAVE), according to a new report.
- The AI models were found to be more likely to describe AAVE speakers as “stupid” and “lazy”, and more likely to recommend the death penalty for hypothetical criminal defendants who used AAVE.
- As language models grow, covert racism increases; ethical guardrails only teach the models to be more discreet about their racial biases rather than eliminating them.
- AI experts are calling on the federal government to curtail the largely unregulated use of large language models, warning of the harm they could cause if technological advances continue to outpace regulation.