Law designed to stop AI bias in hiring decisions is so ineffective it's slowing similar initiatives

Jan 23, 2024 - theregister.com
A study by researchers at Cornell University, Consumer Reports, and the Data & Society Research Institute has found that New York City Local Law 144 (LL144), aimed at reducing bias in AI hiring algorithms, is largely ineffective. The law, which requires employers using automated employment decision tools (AEDTs) to conduct annual audits for race and gender bias and publish the results, has seen compliance from only 18 out of 391 employers sampled. The study also found that the law does not require companies to take action if an audit reveals discriminatory outcomes.

The researchers attribute the law's ineffectiveness to its abstract language and to the discretion it grants employers in deciding whether their systems fall within the law's scope. They recommend that future laws targeting AEDT discrimination broaden the definition of what counts as use of AEDT software and remove the qualifying language that lets employers decide for themselves whether the law applies. The researchers believe these changes would improve the accountability and effectiveness of such laws.

Key takeaways:

  • A study has found that New York City Local Law 144 (LL144), which requires employers to audit AI hiring algorithms for bias, is largely ineffective.
  • Out of 391 employers sampled, only 18 had published the required audit reports and just 13 included the necessary transparency notices.
  • The law does not require companies to take action if an audit shows discriminatory outcomes, but companies found to be using biased algorithms could face civil lawsuits.
  • The researchers suggest that future laws should broaden the definition of what constitutes the use of automated employment decision tools (AEDTs) and remove qualifying language that lets employers decide whether AI hiring algorithms fall under the law.
