
India Drops Plan To Require Approval For AI Model Launches - Slashdot

Mar 15, 2024 - yro.slashdot.org
India's Ministry of Electronics and IT has revised its AI advisory, no longer requiring companies to seek government approval before launching or deploying an AI model in the South Asian market. Instead, the updated advisory suggests that firms label under-tested and unreliable AI models to inform users of their potential fallibility. This reverses the March 1 advisory, which had itself marked a shift away from the country's previously hands-off approach to AI regulation.

The advisory also emphasizes that AI models should not be used to share unlawful content or permit bias, discrimination, or threats to the electoral process. It suggests the use of "consent popups" to inform users about the unreliability of AI-generated output and insists on labeling deepfakes and misinformation for easy identification. However, it no longer requires firms to identify the "originator" of any particular message.

Key takeaways:

  • India's Ministry of Electronics and IT has revised its AI advisory, no longer requiring government approval before launching or deploying an AI model in the South Asian market.
  • Instead, the revised guidelines advise firms to label under-tested and unreliable AI models to inform users of their potential fallibility.
  • The advisory also emphasizes that AI models should not be used to share unlawful content under Indian law, should not permit bias or discrimination, and should not threaten the integrity of the electoral process.
  • Intermediaries are advised to use 'consent popups' or similar mechanisms to inform users about the unreliability of AI-generated output, and to label or embed content with unique metadata or identifiers to ensure that deepfakes and misinformation are easily identifiable.
