The use of AI for automated decision-making in high-risk areas has drawn regulatory attention due to concerns about potential bias and discrimination. The EU's AI Act imposes stringent requirements on high-risk AI systems, including registration, risk management, and human oversight. In the U.S., jurisdictions such as Colorado and New York City have enacted laws to ensure transparency and fairness in AI applications, for example by requiring bias audits of automated employment screening tools. These measures reflect growing scrutiny of AI's impact on individual rights and societal outcomes.
Key takeaways:
- Google has updated its terms to allow the use of its generative AI for automated decisions in high-risk areas, such as healthcare, with human supervision.
- Google's policy change clarifies that automated decisions can be made in domains like employment, housing, and insurance, provided a human supervises the decision.
- Google's competitors, OpenAI and Anthropic, have stricter rules for using AI in high-risk decision-making, with OpenAI prohibiting it in several areas and Anthropic requiring professional supervision and disclosure.
- Regulatory scrutiny is increasing globally, with the EU and U.S. jurisdictions such as Colorado and New York City implementing laws to manage the risks of high-risk AI systems.