To address the growing misuse of AI-generated and deepfake content, the Ministry of Electronics and Information Technology (MeitY) has proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Key Provisions of the Draft Rules
- Mandatory Labelling of AI Content: All synthetically generated information, including AI-created videos, images, and audio, must carry clear and permanent labels identifying it as artificial.
- For example: An AI-generated video on YouTube must display both:
  - a label embedded within the video itself at the time of creation (a minimal sketch of such embedding appears below), and
  - a platform-based disclosure on the YouTube page hosting the content.
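The sketch below shows how a creation tool or platform might burn a visible label directly into a video, using Python to drive ffmpeg's drawtext filter. It assumes an ffmpeg build with drawtext support is on the PATH; the file names and label text are hypothetical.

```python
# Hedged sketch: overlay a permanent "AI-generated" caption on a video
# with ffmpeg's drawtext filter. Assumes ffmpeg (built with drawtext)
# is installed and on PATH; input.mp4 / labelled.mp4 are hypothetical.
import subprocess

def embed_ai_label(src: str, dst: str, text: str = "AI-generated") -> None:
    """Burn a semi-transparent boxed label into the top-left corner."""
    drawtext = (
        f"drawtext=text='{text}':x=10:y=10:fontsize=28:"
        "fontcolor=white:box=1:boxcolor=black@0.5"
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", drawtext, "-codec:a", "copy", dst],
        check=True,  # raise if ffmpeg exits with an error
    )

if __name__ == "__main__":
    embed_ai_label("input.mp4", "labelled.mp4")
```

Because such a label is rendered into the pixels themselves, it survives re-uploads and format conversions, unlike a metadata-only disclosure.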
 
 
- Definition of “Synthetically Generated Information”: “Information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that appears reasonably authentic or true.”
  - This includes deepfakes, AI-modified images, AI-cloned voices, and algorithmically generated videos that mimic real persons or events.
 
- User Declaration Requirement:
  - Platforms must require users uploading content to declare whether it is synthetically generated.
  - Platforms must deploy technical verification tools (e.g., AI detectors, metadata checks) to verify the accuracy of such declarations; a minimal sketch of one such metadata check follows this list.
  - If content is verified as synthetic, the platform must ensure it carries a prominent and visible label.
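The sketch below illustrates one of the simpler “technical verification tools” the draft contemplates: a heuristic metadata check on an uploaded image. It assumes the Pillow library; the generator keyword list and file name are illustrative, and since metadata can be stripped or forged, a real platform would treat this as only one weak signal among several.

```python
# Hedged sketch: heuristic metadata check for AI-generated images.
# Assumes Pillow is installed; GENERATOR_HINTS is an illustrative,
# non-exhaustive list. Metadata can be stripped, so a miss proves nothing.
from PIL import Image

GENERATOR_HINTS = ("midjourney", "stable diffusion", "dall-e", "firefly")

def looks_ai_generated(path: str) -> bool:
    img = Image.open(path)
    # EXIF tag 0x0131 ("Software") sometimes names the generating tool.
    software = str(img.getexif().get(0x0131, "")).lower()
    # PNG text chunks (exposed via img.info) may carry generation parameters.
    text_chunks = " ".join(str(v) for v in img.info.values()).lower()
    haystack = software + " " + text_chunks
    return any(hint in haystack for hint in GENERATOR_HINTS)

if __name__ == "__main__":
    print(looks_ai_generated("upload.png"))  # hypothetical uploaded file
```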
 
- Compliance and Liability: Platforms failing to comply may lose legal immunity under Section 79 of the IT Act, 2000, which protects intermediaries from liability for third-party content.
Global Context
- China introduced similar AI labelling norms in September 2025, requiring:
  - visible AI symbols for chatbots, AI writing, synthetic voices, face swaps, and virtual scene editing; and
  - hidden watermarks for other AI-generated content.
- Voluntary Labelling Initiatives:
  - Meta began labelling AI-generated content on its platforms in 2024.
  - The Coalition for Content Provenance and Authenticity (C2PA) has been developing standards for digital provenance, i.e., tracing the origin and modification history of content (a conceptual sketch follows).
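To make the provenance idea concrete, here is a conceptual sketch of a hash-chained edit history, in which each record binds the current content hash to the previous record. This illustrates the principle behind C2PA only; it is not the actual C2PA manifest format, signing scheme, or API, and the field names are invented for illustration.

```python
# Conceptual sketch of content provenance (the idea behind C2PA):
# each edit appends a record chaining the new content hash to the
# previous record, so origin and modifications can be traced.
# Not the real C2PA manifest format; field names are illustrative.
import hashlib
import json

def provenance_record(content: bytes, action: str, prev_hash: str = "") -> dict:
    entry = {
        "action": action,                                    # e.g. "created", "ai-modified"
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev_hash": prev_hash,                              # links to the earlier record
    }
    entry["record_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

original = provenance_record(b"raw image bytes", "created")
edited = provenance_record(b"edited image bytes", "ai-modified",
                           original["record_hash"])
print(json.dumps([original, edited], indent=2))
```

Real C2PA manifests additionally carry cryptographic signatures, so tampering with any record in the chain is detectable.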
 
Need for the Proposed Amendments on AI Content Labelling
- To Curb Deepfake Misuse: Deepfakes are being used for identity theft, defamation, and misinformation, often targeting public figures and private citizens alike.
  - Example: A deepfake video of actor Rashmika Mandanna went viral in 2023, sparking national concern over privacy violations and reputational harm.
- To Ensure User Awareness and Transparency: Viewers must be able to distinguish between real and AI-generated content, enabling informed decisions in a democracy.
- To Prevent Manipulation and Electoral Misuse: Generative AI can be weaponised to influence public opinion or elections through fabricated speeches or videos of political leaders.
- To Establish Platform Responsibility: Social media intermediaries currently enjoy safe harbour under Section 79 of the IT Act; these rules introduce shared accountability for verifying and flagging synthetic content.