The report suggests that relying solely on AI tools for hiring could adversely impact applicants of certain genders or races. OpenAI responded that recruiters can adjust the tool to reduce bias, for example by removing names from the screening process. The problem of AI bias extends beyond hiring: research indicates that AI models generate more convincing white faces than faces of other racial groups, possibly because they are trained on more images of white faces.
Key takeaways:
- OpenAI's ChatGPT, an AI tool used by recruiters, has been found to show racial bias in screening resumes, according to a Bloomberg investigation.
- The tool favored names associated with Asian applicants over those associated with Black job seekers.
- OpenAI suggests that recruiters add their own safeguards against bias, such as removing names from the screening process (see the sketch after this list).
- AI bias is a significant challenge in the technology sector, with potential to adversely impact marginalized communities.
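Neither the Bloomberg report nor OpenAI describes a specific implementation of the name-removal safeguard, but a minimal sketch of the idea is shown below: the candidate's name, assumed to be known from the application form, is masked before the resume text ever reaches an AI screening step. The function and example data are illustrative assumptions, not part of any recruiting product.

```python
import re

def redact_name(resume_text: str, candidate_name: str) -> str:
    """Mask every occurrence of the candidate's name before AI screening.

    Assumes the name is captured separately on the application form,
    so it can be matched (case-insensitively) and replaced in the resume.
    """
    pattern = re.compile(re.escape(candidate_name), re.IGNORECASE)
    return pattern.sub("[REDACTED]", resume_text)

# Example: the name never reaches the screening model.
resume = "Jane Doe\nSoftware Engineer\nJane Doe led a team of five engineers."
print(redact_name(resume, "Jane Doe"))
```

Redacting names is only a partial safeguard, since other resume details (schools, addresses, affiliations) can still act as proxies for race or gender.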