The new advisory, which has not been published online, emphasizes that AI models should not be used to share unlawful content and should not permit bias, discrimination, or threats to electoral integrity. It also suggests the use of "consent popups" to explicitly inform users about the potential unreliability of AI-generated output, and advises intermediaries to label content or embed it with unique metadata so that deepfakes and misinformation are easily identifiable. However, it no longer requires firms to identify the "originator" of any particular message.
Key takeaways:
- India's Ministry of Electronics and IT has revised its AI advisory, no longer requiring government approval before launching or deploying an AI model in the South Asian market.
- The revised guidelines instead advise firms to label under-tested and unreliable AI models so that users are aware of their potential fallibility.
- The advisory emphasizes that AI models should not be used to share content that is unlawful under Indian law and should not permit bias, discrimination, or threats to the integrity of the electoral process.
- Intermediaries are advised to use consent popups or similar mechanisms to inform users about the unreliability of AI-generated output, and to label or embed content with unique metadata or identifiers to ensure that deepfakes and misinformation are easily identifiable.