AI Safety Is Hard to Steer With Science in Flux, US Official Says

Dec 11, 2024 - insurancejournal.com
Policymakers are struggling to recommend effective safeguards for artificial intelligence because the technology is evolving so rapidly. Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute, highlighted these challenges at the Reuters NEXT conference, noting that AI developers are still figuring out how to prevent abuse of their systems, leaving government authorities without clear solutions. Cybersecurity is a significant concern, particularly the ease with which AI security measures can be bypassed through so-called "jailbreaks." The ease of tampering with digital watermarks on AI-generated content further complicates the creation of industry guidance. The U.S. AI Safety Institute, established under the Biden administration, is addressing these issues through collaborations with academia, industry, and civil society.

Kelly emphasized that AI safety is a bipartisan issue, expressing confidence in the institute's continuity even with a change in administration. She recently led the first global meeting of AI safety institutes in San Francisco, where representatives from 10 countries worked on developing interoperable safety tests. The meeting focused on technical discussions, involving experts from various fields to enhance AI safety protocols.

Key takeaways:

  • Policymakers face challenges in recommending AI safeguards due to the evolving nature of the technology.
  • Cybersecurity and AI system abuse, such as "jailbreaks," are significant concerns for AI safety.
  • The U.S. AI Safety Institute collaborates with academic, industry, and civil society partners to address AI safety issues.
  • AI safety is considered a bipartisan issue, with international efforts focusing on creating interoperable safety tests.