
ChatGPT is biased against resumes with credentials that imply a disability — but it can improve

Jun 23, 2024 - washington.edu
A study by researchers at the University of Washington found that when OpenAI's ChatGPT was used to summarize resumes and rank job candidates, it consistently ranked resumes with disability-related honors and credentials lower than the same resumes without those credentials. However, when the tool was customized with written instructions not to be ableist, the bias was reduced for all but one of the disabilities tested. The researchers started from a publicly available CV, created six enhanced versions, each implying a different disability through added honors and credentials, and used the GPT-4 model to rank each enhanced CV against the original for a real job listing.
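The article does not include the researchers' prompts or code, but a pairwise ranking trial of this kind can be sketched against the OpenAI chat API. In the Python sketch below, the file names, the prompt wording, and the single A/B comparison are all illustrative assumptions rather than the study's actual setup.

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical inputs: a control CV, one disability-enhanced variant,
# and the job listing text. None of these files come from the study.
control_cv = Path("control_cv.txt").read_text()
enhanced_cv = Path("enhanced_cv.txt").read_text()
job_listing = Path("job_listing.txt").read_text()

# Ask the model to rank the two candidates and explain itself, mirroring
# the study's setup of comparing an enhanced CV against the original.
prompt = (
    "You are screening applicants for the job listing below. "
    "Rank the two candidates from strongest to weakest fit and "
    "briefly explain your reasoning.\n\n"
    f"JOB LISTING:\n{job_listing}\n\n"
    f"CANDIDATE A:\n{control_cv}\n\n"
    f"CANDIDATE B:\n{enhanced_cv}\n"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```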

When asked to explain its rankings, the system exhibited both explicit and implicit ableism. The researchers then used OpenAI's GPTs Editor tool to customize the system with written instructions to apply disability justice and DEI principles. The customized system ranked the enhanced CVs higher than the control CV in 37 of 60 trials, though for some disabilities the improvement was minimal or absent. The researchers emphasized the need for more research to document and remedy AI biases, and the importance of being aware of these biases when using AI for real-world tasks.
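The GPTs Editor is a ChatGPT interface feature rather than an API, but a rough API analogue of the customization step is a system message carrying the anti-bias instructions. Continuing the sketch above, the instruction wording here is paraphrased from the article, not the researchers' exact text.

```python
# Rerun the same ranking request, now with system-level instructions that
# tell the model to apply disability justice and DEI principles, roughly
# analogous to the study's GPTs Editor customization.
mitigated = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Do not exhibit ableist bias. Evaluate candidates using "
                "disability justice and DEI principles; disability-related "
                "honors and credentials should be treated as strengths."
            ),
        },
        {"role": "user", "content": prompt},  # same prompt as above
    ],
)
print(mitigated.choices[0].message.content)
```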

Key takeaways:

  • Researchers at the University of Washington found that OpenAI's ChatGPT consistently ranked resumes with disability-related honors and credentials lower than the same resumes without those credentials.
  • When the tool was customized with instructions to avoid ableist bias, it reduced this bias for all but one of the disabilities tested.
  • The researchers used the GPTs Editor tool to instruct the chatbot to work with disability justice and DEI principles, which improved the ranking of the enhanced CVs.
  • The study emphasizes the need for more research to document and remedy AI biases, including testing other systems, exploring further customization, and studying the intersections of the system’s bias against disabilities with other attributes such as gender and race.