The government issued a late-night advisory to social media intermediaries and platforms offering generative AI tools and large language models, emphasizing compliance with the IT Rules and the need to safeguard electoral integrity. Platforms deploying under-tested or unreliable AI models must obtain explicit government permission before making them available to users in India and must carry disclaimers flagging their testing status.
Additionally, platforms and AI models are required to implement a "consent pop-up mechanism" informing users that generated output may be unreliable. The advisory also extends to platforms that enable the creation of deepfake content, including images and videos.
Minister of State for Electronics and Information Technology Rajeev Chandrasekhar emphasized that platforms must take full accountability for the content their models generate. The advisory follows the controversy over responses produced by Google's Gemini AI model to questions about prominent global leaders.
Chandrasekhar highlighted concerns over the potential misuse of AI models for generating biased or misleading content. The advisory aims to ensure transparency, accountability, and integrity in online content dissemination.
Platforms must prioritize user consent and transparency, particularly about the reliability of AI-generated responses. The government's proactive stance reflects its commitment to regulating digital platforms and protecting electoral processes.
The advisory underscores the evolving landscape of digital governance and the need for responsible AI use, and it seeks to address emerging challenges in online content moderation and deepfake detection.