However, the legislation has had a slow start, with a bill in Washington state already failing in committee. Critics argue that the required impact assessments are vaguely defined and that it is unclear whether they can actually detect bias. Some suggest a more effective approach would be to mandate independent bias audits and make the results public, but the industry opposes this, claiming it would expose trade secrets. Despite these challenges, the proposed legislation marks the beginning of a long-term debate over how to balance the benefits and risks of AI technology.
Key takeaways:
- Lawmakers in at least seven states are working to regulate bias in artificial intelligence, addressing a lack of oversight from the federal government.
- AI systems are pervasive in everyday life, often used in hiring, rental applications, and medical care, but they can reproduce bias present in the data they are trained on.
- Proposed legislation would require companies using AI decision tools to conduct 'impact assessments' to analyze the risk of discrimination and explain their safeguards. Some bills would also require companies to inform customers when AI is used in decision-making.
- Despite these efforts, the legislation has had a slow start, with bills in Washington state and California failing. The industry also resists requirements for routine bias audits, arguing they would expose trade secrets.