
Google says customers can use its AI in 'high-risk' domains, so long as there's human supervision | TechCrunch

Dec 17, 2024 - techcrunch.com
Google has updated its terms to clarify that its generative AI tools can be used for automated decision-making in high-risk areas, such as healthcare, employment, and housing, provided there is human supervision. This change highlights the company's stance that human oversight is essential in these applications, a policy that was always in place but is now more explicitly stated. In contrast, competitors like OpenAI and Anthropic have stricter regulations, with OpenAI prohibiting its AI's use in high-risk decisions entirely, and Anthropic requiring supervision by a qualified professional and disclosure of AI usage.

The use of AI for automated decision-making in high-risk areas has drawn regulatory attention due to concerns about potential biases and discrimination. The EU's AI Act imposes stringent requirements on high-risk AI systems, including registration, risk management, and human supervision. In the U.S., states like Colorado and New York City have enacted laws to ensure transparency and fairness in AI applications, such as requiring bias audits for employment screening tools. These measures reflect growing scrutiny and regulatory efforts to manage the impact of AI on individual rights and societal outcomes.

Key takeaways:

  • Google has updated its terms to allow the use of its generative AI for automated decisions in high-risk areas, such as healthcare, with human supervision.
  • Google's policy change clarifies that automated decisions can be made in domains like employment, housing, and insurance, provided a human is involved.
  • Google's competitors, OpenAI and Anthropic, have stricter rules for using AI in high-risk decision-making, with OpenAI prohibiting it in several areas and Anthropic requiring professional supervision and disclosure.
  • Regulatory scrutiny is increasing globally, with the EU and U.S. jurisdictions like Colorado and New York City implementing laws to manage the risks associated with high-risk AI systems.