The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 require social media platforms to “prominently” label “synthetically generated” content, or AI-generated images and videos. Image used for representation only.
Photo Credit: Reuters

The story so far: The Ministry of Electronics and Information Technology (MeitY) earlier this week notified an amendment to the IT Rules, 2021, that requires users and social media platforms to label AI-generated content and tightens the takedown timelines for all content, not just AI-generated posts, from 24-36 hours to two to three hours. The rules come into effect on February 20.

What about AI-generated content?

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 require social media platforms to “prominently” label “synthetically generated” content, or AI-generated images and videos. The requirement was first proposed in October last year, and was notified this week. Social media platforms with more than five million users are required to obtain a “user declaration [for AI-generated content] and [conduct] technical verification before publishing [AI-generated content]”.

MeitY said in an explanatory document that this requirement was introduced to counter “deepfakes, misinformation, and other unlawful content capable of misleading users, causing users harm, violating privacy, or threatening national integrity,” and that it was important that users be aware whether what they are viewing is inauthentic.

The October draft definition of “Synthetically Generated Information” (SGI) was wider, encompassing any audiovisual content that was AI-generated or AI-modified. The final rules carve out some exemptions: for instance, smartphone photos that are retouched automatically by the camera app will not be considered SGI, and special effects in films will not need to be labelled. The rules also prohibit certain types of SGI outright: child sexual exploitation and abuse material, forged documents, information on developing explosives, and deepfakes falsely representing a real person.

How can AI-generated content be detected?

The government has asked large platforms to “deploy reasonable and appropriate technical measures to prevent unlawful SGI, and to ensure labelling/provenance/identifier requirements for permissible SGI”. A senior official at the IT Ministry argued that large platforms already have sophisticated tools to detect SGI, and that the requirement merely formalises detection they are already performing. Additionally, some AI firms and platforms participate in the Coalition for Content Provenance and Authenticity (C2PA), which offers technical standards for invisibly labelling AI-generated content in a way that other platforms can read, in case AI-based detection fails. The rules allude to this effort by requiring “provenance/identifier requirements,” but the official said the government does not want to endorse any single effort and would instead like to formalise the aims of such collaborations.

How have time limits changed?

The IT Rules enable some government authorities and police officials to issue takedown notices under Rule 3(1)(b), and allow users to file grievances against the “illegal” categories of content enumerated in the Rules, which include misinformation, nudity, and threats to sovereignty. For both government- and court-issued takedown notices, the timelines have been reduced to two to three hours, while for all other categories of user complaints (such as defamation and misinformation), the response timelines have been reduced from two weeks to one week. The timeline for responding to user reports of “sensitive” content under Rule 3(2)(b) has also been slashed from 72 hours to 36 hours. The government reasoned that considerable damage could be done even within the earlier windows, necessitating a revisit of the time platforms have to act.

What other changes have been made?

Users will now receive reminders of platforms’ terms and conditions more often. “The amendments include revisions to Rule 3(1)(c) of the Intermediary Rules, increasing the frequency of user notifications from once every year to at least once every 3 (three) months, and expanding the content of such notifications to clarify potential consequences of non-compliance and reporting obligations,” JSA Advocates and Solicitors said in an analysis.

The rules also require platforms to specifically warn users that harmful deepfakes and other illegal AI-generated content could expose them to legal action, including the disclosure of their identity to law enforcement agencies and “immediate disabling of access or removal of such content, suspension or termination of user accounts”, JSA said in its analysis.

