AI, Anonymity and Accountability: What the New IT Rules Change

The amendments formally define synthetic media, shorten takedown timelines, and expand compliance requirements for social media platforms.

Summary
  • Deepfakes and other synthetic media are now formally covered under the amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules.

  • Platforms must remove unlawful content within three hours (two hours for sexual/impersonation cases).

  • Mandatory labelling of AI content and possible user identity disclosure raise concerns.

India’s amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, notified on 10 February 2026, introduce significant changes for social media platforms and other intermediaries hosting audio-visual and AI-generated content.

The amendments formally define synthetic media, shorten takedown timelines, and expand compliance requirements for social media platforms.

For the first time, the rules define what they term “synthetically generated information” (SGI). This includes audio-visual content that is “artificially or algorithmically created, generated, modified or altered” in a manner that makes it appear real or authentic and capable of depicting individuals or events.

In effect, the definition covers deepfakes, AI-generated videos, cloned voices, and hyper-realistic manipulated images.

At the same time, the rules clarify that routine editing, such as colour correction, noise reduction, translation, formatting or transcription, will not be treated as synthetic content so long as it does not “materially alter, distort, or misrepresent” the original material.

The amendments also broaden the definition of “audio, visual or audio-visual information” to include any image, video, recording or graphic “whether created, generated, modified or altered through any computer resource”, bringing most forms of digital content within scope.

The rules go further by clarifying that whenever they refer to “information” being used to commit an unlawful act, that term includes synthetically generated information. This means AI-generated videos or images are treated no differently from any other form of content if they violate the law. Platforms cannot argue that AI-generated material falls outside the traditional framework of intermediary liability.

Three- to Two-Hour Takedown Window

One of the most consequential changes is the reduction in the takedown timeline. Earlier, intermediaries were required to remove unlawful content within 36 hours of receiving a court order or government notification. Under the amended rules, platforms must now remove or disable access within three hours of receiving “actual knowledge”, either through a court order or a written, reasoned communication from an authorised government officer.

The three-hour requirement applies across categories of unlawful content, not only to AI-generated material. 

Prateek Waghre, Head of Programmes at the Tech Global Institute, said this provision “extends the scope well beyond just synthetic media.”

“This is a part of the rule that applies to everything,” he said. 

Expecting platforms to determine what is “unlawful” within two or three hours, he argued, sets an unrealistic standard.

He added that platforms may, in practice, rely more heavily on automated systems to comply within the compressed timeframe.

Independent internet researcher Srinivas Kodali said the move should also be seen in the broader context of the government’s stated aim of limiting the rapid spread of harmful material online. “The state is not afraid of one article or two articles. The state is afraid of an article going viral,” he said, suggesting that the shorter timeline is designed to curb virality at an early stage.

Government officials have reportedly indicated that the timeline was reduced after feedback from individuals who felt platforms were not acting quickly enough to limit the spread of harmful content.

The rules retain a separate, even shorter timeline for certain categories of user complaints. In cases involving content that exposes private body parts, depicts sexual acts, or involves impersonation, including artificially morphed images, intermediaries must take action within two hours of receiving a complaint.

Given the rise in deepfake pornography and impersonation content, this provision is expected to have particular relevance for AI-generated harms affecting individuals.

Labelling AI-Generated Content 

The earlier draft rules had proposed 10% visibility standards for labelling AI-generated content. The final version drops numerical prescriptions and instead requires that such content be labelled “prominently”.

Waghre said a blanket percentage-based requirement may have been impractical, particularly for audio-only formats. However, he cautioned that vagueness brings its own problems: platforms will interpret “prominently” differently, leading to inconsistency.

He also pointed out that labels can be bypassed. Referring to watermarking on AI-generated videos, he noted that tools to remove such watermarks appeared almost immediately. Even with labels, “people will find ways to remove them or crop them,” he said.

Identity Disclosure

The amendments also expand intermediary obligations beyond merely removing content. In cases where a user is found to have violated the rules, particularly in matters involving impersonation, synthetic content, or other unlawful material, platforms may be required to identify such users.

The rules provide that, in accordance with applicable law, intermediaries can be required to disclose the identity of a user to a complainant who claims to be a victim. This means that, subject to legal safeguards and due process, anonymity may not shield individuals who misuse AI tools or publish harmful audio-visual content.

Digital rights group Internet Freedom Foundation (IFF) has criticised the move, calling it a “troubling addition” in a statement on social media platform X. It also expressed concern that the changes could create risks if not accompanied by appropriate safeguards and judicial oversight.

IFF further argued that the rules increase compliance burdens on intermediaries and may affect how safe harbour protections under Section 79 of the IT Act are interpreted in practice. Section 79 grants “safe harbour” protection to internet intermediaries, shielding them from liability for user-generated content, provided they comply with due diligence requirements.

Waghre also questioned how this provision would play out in practice. “If someone publishes satire about a politician, could a complaint trigger identity disclosure?” he asked. The rule, he suggested, appears not to clearly anchor such disclosure to a court order.
