The advisory also emphasizes that AI models should not be used to share content that is unlawful under Indian law, and should not permit bias, discrimination, or threats to the integrity of the electoral process. It suggests the use of "consent popups" to warn users that AI-generated output may be unreliable, and calls for labeling deepfakes and misinformation so they are easy to identify. However, it no longer requires firms to identify the "originator" of any particular message.
Key takeaways:
- India's Ministry of Electronics and IT has revised its AI advisory, no longer requiring government approval before launching or deploying an AI model in the South Asian market.
- Instead, the revised guidelines advise firms to label under-tested or unreliable AI models so that users are informed of the risks before relying on their output.
- The advisory also emphasizes that AI models should not be used to share unlawful content under Indian law, should not permit bias or discrimination, and should not threaten the integrity of the electoral process.
- Intermediaries are advised to use "consent popups" or similar mechanisms to inform users that AI-generated output may be unreliable, and to label or embed content with unique metadata or identifiers so that deepfakes and misinformation are easy to identify (a rough sketch of such labeling appears below).
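The advisory does not prescribe a particular labeling scheme, so the following is only a minimal sketch of what embedding a unique identifier into generated content could look like: a JSON envelope carrying a UUID, a content hash, and a reliability notice. All field names here (`content_id`, `reliability_notice`, and so on) are illustrative assumptions, not anything mandated by the guidelines.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone


def label_ai_output(content: str, model_name: str, under_tested: bool) -> dict:
    """Wrap AI-generated content in a provenance envelope with a unique
    identifier. A hypothetical sketch, not a scheme from the advisory."""
    return {
        "content": content,
        "provenance": {
            # Unique identifier for this specific piece of generated content.
            "content_id": str(uuid.uuid4()),
            # Hash ties the identifier to the exact content bytes,
            # so tampering with the text invalidates the label.
            "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
            "generated_by": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # Flag output from under-tested models, per the advisory's
            # labeling recommendation.
            "reliability_notice": (
                "Output of an under-tested AI model; may be unreliable."
                if under_tested
                else None
            ),
        },
    }


if __name__ == "__main__":
    labeled = label_ai_output("Example model output.", "example-model-v0", True)
    print(json.dumps(labeled, indent=2))
```

In practice, intermediaries would more likely adopt an interoperable provenance standard such as C2PA rather than an ad hoc envelope like this one.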