AI hiring bias? Men with Anglo-Saxon names score lower in tech interviews

Nov 23, 2024 - theregister.com
A recent study conducted by Celeste De Nadai at the Royal Institute of Technology (KTH) in Stockholm, Sweden, found that AI models used in mock interviews for software engineering jobs rated men, particularly those with Anglo-Saxon names, less favorably. The study, part of De Nadai's undergraduate thesis project, investigated whether current-generation LLMs show bias when presented with gender data and names that allow cultural inferences. The models tested were Google's Gemini-1.5-flash, Mistral AI's Open-Mistral-nemo-2407, and OpenAI's GPT-4o mini.

Contrary to previous bias studies, which suggested that men and Western names would be favored, the results showed an inherent bias against male names in general and Anglo-Saxon names in particular. De Nadai theorizes that this bias may reflect an over-correction to previous biases. The study concludes that model biases cannot be fully mitigated by adjusting settings and prompts alone and recommends masking the name and obfuscating the gender in a hiring context to ensure results are as general and unbiased as possible.

Key takeaways:

  • A study conducted by Celeste De Nadai at the Royal Institute of Technology (KTH) in Stockholm, Sweden, found that current AI models used in mock interviews for software engineering jobs rated men, particularly those with Anglo-Saxon names, less favorably.
  • The study tested Google's Gemini-1.5-flash, Mistral AI's Open-Mistral-nemo-2407, and OpenAI's GPT-4o mini, varying temperature, gender, and names associated with different cultural groups.
  • Contrary to the expected finding that men and Western names would be favored, the study found that male names and particularly Anglo-Saxon names were discriminated against.
  • To make interview evaluations fairer, the study suggests giving models rigid, detailed grading criteria in the prompt and denying them access to information that could support unwanted inferences, such as name and gender.
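The masking step recommended above can be illustrated with a short sketch. This is a hypothetical helper, not code from the study: the function name, the `[CANDIDATE]` token, and the pronoun map are all illustrative, and a simple regex pass like this cannot resolve genuinely ambiguous words (e.g. "her" as object vs. possessive) the way a production anonymizer would need to.

```python
import re

# Hypothetical sketch: strip the candidate's name and gendered pronouns
# from an interview transcript before it reaches an LLM grader.
PRONOUN_MAP = {
    "he": "they", "she": "they",
    "him": "them", "his": "their",
    "her": "their", "hers": "theirs",
}

def anonymize(transcript: str, candidate_name: str) -> str:
    # Mask the name (whole words, case-insensitive).
    masked = re.sub(rf"\b{re.escape(candidate_name)}\b", "[CANDIDATE]",
                    transcript, flags=re.IGNORECASE)

    # Replace gendered pronouns with neutral forms, keeping capitalization.
    def neutralize(match: re.Match) -> str:
        word = match.group(0)
        repl = PRONOUN_MAP[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl

    pattern = r"\b(" + "|".join(PRONOUN_MAP) + r")\b"
    return re.sub(pattern, neutralize, masked, flags=re.IGNORECASE)

print(anonymize("John said he would refactor his module.", "John"))
# → [CANDIDATE] said they would refactor their module.
```

The anonymized transcript, together with rigid grading criteria in the prompt, would then be what the model sees, so scores cannot hinge on name or gender cues.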
