
WATERMARK OR VANISH: INDIA’S PROACTIVE TURN IN THE AGE OF DEEPFAKES

  • Mar 24
  • 7 min read

Updated: Mar 28

-Rahul Agrawal and Muskaan Goyal*

Opening thought 

Deepfakes have moved from being mere technological curiosities to destabilising forces in India’s democratic landscape. The recent election cycles made this painfully clear. Synthetic videos impersonating political leaders, fabricated speeches designed to polarise communities, and AI-generated misinformation travelling across platforms at impossible speeds have exposed a sharp asymmetry: while technology enables rapid deception, the law struggles to keep pace. In October 2025, that asymmetry narrowed dramatically as two distinct but converging legal developments reshaped India’s regulatory approach almost overnight, pushing the country towards one of the world’s most ambitious experiments in proactive AI governance.


From Synthetic Content to Proactive Liability

On 22 October 2025, the Government released the Draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, introducing a new category of Synthetically Generated Information. The category is deliberately wide, covering content produced or materially altered by artificial intelligence, including audio, visuals, and composite media. The proposed rules require such content to carry a permanent watermark or embedded metadata revealing its artificial origin. They also oblige intermediaries (such as social media platforms) to secure user disclosure at the time of upload and to maintain verifiable compliance records.
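For readers curious how such a machine-readable disclosure might work in practice, the sketch below is a hypothetical illustration only: the draft rules do not prescribe any format, and the record structure, field names, and signing key here are invented for demonstration. It shows a platform attaching a signed provenance record to a piece of synthetic content and later verifying that neither the content nor the disclosure has been tampered with.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key; a real deployment would use proper
# key management, never a hard-coded secret.
PLATFORM_KEY = b"demo-signing-key"


def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a signed provenance record declaring content as AI-generated."""
    record = {
        "synthetic": True,          # the disclosure the draft rules envisage
        "generator": generator,     # tool that produced the content (illustrative field)
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the hash matches the content and the signature is authentic."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was altered after the disclosure was attached
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))


if __name__ == "__main__":
    media = b"...synthetic image bytes..."
    tag = attach_provenance(media, generator="hypothetical-model-v1")
    print(verify_provenance(media, tag))          # intact disclosure
    print(verify_provenance(media + b"x", tag))   # tampered content
```

Note that stripping or replacing the record destroys the disclosure entirely, which is precisely the fragility of metadata-based approaches that the discussion of watermark robustness below takes up.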


This marks a major change in India’s content-regulation structure. Historically, internet law intervened only after damage had materialised, through defamation, obscenity, hate speech, or threats to public order, and therefore grounded liability in effect. The draft rules, in contrast, regulate content at the point of generation, irrespective of harm. By rooting regulation in origin rather than effect, the State reframes the regulatory question: it is no longer only a matter of misinformation, but of the erosion of trust in digital authenticity. Watermarking is, on this view, a transparency mechanism rather than a censorship tool.


Further, on 28 October 2025, the Hon’ble Delhi High Court gave legal force to this regulatory intent by holding that intermediaries who fail to watermark AI-generated content may lose the safe harbour protection of Section 79 of the IT Act even in the absence of a user complaint. This stands in stark contrast to the notice-based liability regime affirmed in Shreya Singhal v. Union of India, wherein intermediary immunity was lost only upon failure to comply with court orders or good-faith takedown notices. The decision is significant because it recasts the role of intermediaries. Social media platforms are no longer treated as neutral channels responding to flagged illegality, but as bodies charged with proactive identification and verification. Immunity is now conditioned on ex-ante compliance rather than ex-post response.


This shift generates further structural concerns: compelled disclosure implicating Article 19(1)(a), and the tethering of statutory immunity to compliance with evolving technological requirements, which effectively turns private platforms into regulatory actors. Together, the High Court’s decision and the draft rules signal a radical change in the Indian intermediary liability regime, shifting from reactive moderation to a proactive regulatory framework.


Democratic Urgency and Constitutional Limits

The urgency behind this shift towards proactive intermediary obligations becomes clearer when deepfakes are compared with traditional misinformation. Deepfakes imitate voice, facial appearance, and markers of authenticity so convincingly that appearance and reality become hard to distinguish, corroding the very signals on which political trust relies. In the 2024-25 election cycle, deepfakes spread faster than fact-checking tools could respond, posing a serious threat to democracy. This temporal asymmetry explains the State’s turn to proactive regulation: democratic harm is often irreparable by the time a complaint is made or a takedown order has been issued.


Although this democratic urgency is understandable, it does not supersede constitutional restraints. Mandatory watermarking raises concerns under Article 19(1)(a), which guarantees freedom of speech, since users are compelled to declare the artificiality of their content; in this respect, it is a form of compelled disclosure. Under the proportionality test laid down in Modern Dental College and Research Centre v. State of Madhya Pradesh, the State must demonstrate that a restriction on rights pursues a legitimate aim, is suitable and necessary, and is not excessive. Although transparency may be a valid goal, a blanket rule that sweeps in even harmless or artistic material raises concerns of overbreadth.


These concerns are heightened by the requirement of active detection of synthetic content. This strategy departs from Shreya Singhal v. Union of India, which cautioned against imposing general monitoring obligations on intermediaries. If compliance entails pre-screening or algorithmic scanning of everything uploaded, the regime risks reintroducing constant monitoring and implicating the right to informational privacy recognised in Puttaswamy.


Lastly, the automatic loss of safe harbour under Section 79 poses proportionality issues at the level of statutory design. Proportionality applies here because any legal burden, particularly one touching speech and platform liability, must balance the State’s objectives against the weight of the burden imposed. Safe harbour is meant to preserve the neutrality of intermediaries; if it is made conditional on near-perfect obedience to imperfect technology, it becomes a punishment. Where compliance is practically improbable, strict withdrawal may exceed constitutional boundaries.


Together, these tensions capture the central contradiction in India’s deepfake response: regulatory urgency may be democratically necessary, but constitutional discipline remains a red line. How this balance is struck will determine whether India achieves a calibrated, evolutionary approach or slides into regulatory excess.


Administrative Proportionality and Comparative Caution

Extending the constitutional concerns discussed above into the domain of administrative law, judicial review of GST limitation-extension notifications and income-tax compliance cases reflects an established administrative-law rule: the State may not create obligations beyond its statutory powers or demand performance that is impossible in practice. Courts have consistently held that delegated legislation cannot override judicially established boundaries and that implementation cannot disregard real-world constraints. Compliance, the courts have stressed, must be practical, not merely formalistic.


This doctrine applies to India’s newly framed deepfake regime. Where watermarking tools are underdeveloped and detection systems remain flawed, conditioning intermediary safe harbour on near-perfect detection places an undue burden on platforms. Indian courts have repeatedly resisted such regulatory overreach in other domains, and there is no reason for a different approach merely because the subject matter is artificial intelligence.


A short comparative perspective contextualises this approach. The EU AI Act adopts a risk-based model, imposing stronger obligations on high-risk applications of AI, while enforcing transparency measures, including deepfake disclosure, without requiring constant monitoring or the loss of intermediary liability. The comparison is instructive because it highlights a difference in regulatory philosophy. Whereas the EU scales obligations to risk and technological readiness, the Indian approach is broader and more urgent, treating synthetic content as inherently destabilising and therefore subject to transparency requirements by default. This is a deliberate policy choice: in a large and heterogeneous democracy, the perceived costs of inaction may outweigh the costs of over-regulation. Whether this method withstands scrutiny on the principles of administrative proportionality, constitutional limits, and technological feasibility remains an open question, as India’s deepfake framework is still developing.


The Technology Gap: Weaknesses and the Case for Moderate Compliance

The deepfake framework that India has adopted is constrained by a technological shortcoming inherent in watermarking and AI detection: these methods remain unreliable and unstandardised. AI models use diverse watermarking approaches with no shared protocol, and watermarks may degrade under compression, editing, or cross-model transfers. Even the developers of major AI systems admit that existing solutions remain experimental. These limitations weigh especially heavily on smaller intermediaries, which lack the computational capacity to detect high-resolution synthetic content at scale.

Conditioning intermediary immunity on near-perfect detection therefore risks imposing obligations that platforms may not reasonably be able to fulfil. Indian jurisprudence has long recognised that liability must not rest on impracticable or unreasonable requirements, particularly in a regulatory setting where the capacity to comply must be realistic.


This disconnect calls for recalibration, not retreat. India should prioritise the standardisation of watermarking protocols by cooperating with major AI developers and standard-setting organisations to build compatibility across models. Without such coordination, detection efforts will remain fragmented and unreliable. Compliance requirements should likewise be progressive and capacity-sensitive: larger intermediaries with developed infrastructure can reasonably bear greater responsibilities earlier, while smaller platforms need phase-in periods and technical assistance to avoid disproportionate burdens.


Lastly, automatic penalties should give way to audit-based assessment. Rather than an immediate loss of safe harbour, regulators should examine whether platforms exercised reasonable diligence in detecting, logging, and promptly responding to flagged material. A model based on proportional effort rather than technological perfection would sit better with constitutional proportionality, administrative-law principles, and the uneven capabilities across India’s digital ecosystem.


Concluding thought

India’s strategy on deepfakes is bold and visionary, directed at protecting democratic integrity at a time when the distinction between fact and fiction is becoming increasingly indistinct. By recognising synthetic content by name and imposing proactive obligations on intermediaries, India signals to the world that it will no longer wait until misinformation causes irreversible damage. Nevertheless, ambition must be grounded in constitutional discipline and technological reality: what technology cannot consistently provide, the law cannot fairly require, and what urgency cannot override, the Constitution guarantees. Whether this balance has been achieved is ultimately for the Supreme Court to decide. For now, however, India is leading the pack in AI regulation, running a high-stakes experiment in how to regulate the future of truth.


*Rahul Agrawal and Muskaan Goyal are 3rd-year undergraduate students pursuing B.A., LL.B. (Hons.) at the Hidayatullah National Law University, Raipur.


The views expressed above are the authors’ alone and do not represent the beliefs of Pith & Substance: The CCAL Blog.

 
 
 


Copyright Policy: The Centre for Comparative Constitutional Law and Administrative Law and National Law University Jodhpur reserve all copyrights for works published herein.

© National Law University, Jodhpur
