The researchers found that the system exhibited both explicit and implicit ableism when asked to explain its rankings. They then used OpenAI's GPTs Editor tool to customize the system with written instructions to be less biased and to follow disability justice and DEI principles. The customized system ranked the enhanced CVs above the control CV in 37 of 60 trials. However, improvements for some disabilities were minimal or absent. The researchers emphasized the need for further research to document and remedy AI biases, and the importance of being aware of these biases when using AI for real-world tasks.
Key takeaways:
- Researchers at the University of Washington found that OpenAI's ChatGPT consistently ranked resumes listing disability-related honors and credentials lower than otherwise identical resumes without them.
- When the tool was customized with instructions to avoid ableist bias, it reduced this bias for all but one of the disabilities tested.
- The researchers used the GPTs Editor tool to instruct the chatbot to follow disability justice and DEI principles, which improved the rankings of the enhanced CVs.
- The study emphasizes the need for more research to document and remedy AI biases, including testing other systems, exploring further customization, and examining how the system's bias against disability intersects with other attributes such as gender and race.