Empowerment, Due Process and Dignity: Governing Synthetic Media in India’s Digital Public Sphere
S. Krishnan
India’s digital public sphere is at a defining moment. Advances in artificial intelligence, particularly in the generation and alteration of audio, visual, and audio-visual content, have fundamentally reshaped how information is created, consumed, and trusted. While these technologies expand the possibilities of expression, creativity, and accessibility, they also introduce new risks that touch directly upon individual dignity, social harmony, and constitutional values.
Recognising the distinctive challenges posed by synthetically generated information, the Government of India has strengthened the legal and policy architecture governing digital intermediaries. Recent amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, read alongside the India AI Governance Guidelines, 2025, released under the IndiaAI Mission in November 2025, reflect a coherent and calibrated approach: binding legal obligations to address concrete harms, supported by policy principles to guide responsible AI adoption.
Together, these instruments, the outcome of extensive consultation with stakeholders, signal a clear governing intent: technological advancement must proceed within a framework that preserves transparency, accountability, and the dignity of the citizen.
A Statutory Response to Synthetic Media Risks
The amended Intermediary Rules introduce, for the first time, a precise and operational definition of “synthetically generated information.” The definition is carefully calibrated. It captures content that is artificially or algorithmically created or altered in a manner that appears authentic or indistinguishable from reality, while expressly excluding routine, good-faith activities such as technical editing, accessibility enhancements, educational or research material, and legitimate creative use.
This definitional clarity performs an important legal function. By expressly bringing synthetically generated information within the scope of “information” for the purposes of due diligence, grievance redressal, and intermediary responsibility, the Rules ensure that emerging forms of digital harm are addressed within the existing statutory framework, rather than left to informal moderation practices.
Equally significant is the explicit clarification that good-faith actions taken by intermediaries, whether through automated tools or other reasonable technical measures, do not deprive them of their statutory safe-harbour protections. This reinforces a compliance-enabling environment while preserving accountability.
Labelling, Provenance, and Ex-Ante Safeguards
The most consequential shift in the amended framework lies in its movement from reactive moderation to ex-ante governance. Intermediaries that enable or facilitate the creation or dissemination of synthetically generated information are now required to deploy reasonable and appropriate technical measures to prevent the circulation of unlawful content at the point of creation or dissemination.
Where synthetically generated content is lawful, the Rules mandate clear and prominent labelling, supported by persistent metadata or other provenance mechanisms, to the extent technically feasible. The modification, suppression, or removal of such labels or identifiers is expressly prohibited.
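To make this techno-legal mechanism concrete, the sketch below illustrates, in Python, one way a platform might bind a persistent synthetic-media label to content so that suppressing or altering the label becomes detectable. It is a minimal illustration under stated assumptions: the manifest fields, the HMAC-based seal, and every name here are hypothetical, not a format prescribed by the Rules (industry provenance standards such as C2PA address the same problem at production scale).

# Illustrative sketch only: the manifest fields and HMAC scheme are
# assumptions for demonstration, not a format prescribed by the amended Rules.
import hashlib
import hmac
import json
from datetime import datetime, timezone

PLATFORM_KEY = b"demo-signing-key"  # hypothetical platform-held secret

def attach_provenance(content: bytes, generator: str) -> dict:
    """Create a manifest binding a synthetic-media label to the content."""
    manifest = {
        "label": "synthetically-generated",
        "generator": generator,
        "issued_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Tamper-evident seal: altering or stripping any field breaks verification.
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["seal"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that the label is intact and still bound to this exact content."""
    unsealed = {k: v for k, v in manifest.items() if k != "seal"}
    payload = json.dumps(unsealed, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(manifest.get("seal", ""), expected)
        and unsealed.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )

if __name__ == "__main__":
    media = b"...synthetic media bytes..."
    record = attach_provenance(media, generator="example-model-v1")
    assert verify_provenance(media, record)       # intact label verifies
    record["label"] = "authentic"                 # attempted label alteration
    assert not verify_provenance(media, record)   # tampering is detectable

A production system would use asymmetric signatures and embed the manifest in the media container itself, but the simplification preserves the design point: the prohibition on removing labels is backed by making removal technically detectable.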
This techno-legal approach reflects a nuanced regulatory understanding. Rather than relying solely on takedowns after harm has occurred, the framework treats transparency as a safeguard of dignity and trust. Citizens are empowered not only through remedies, but through the ability to distinguish authentic from synthetic content in real time. It is, in effect, a recognition of the citizen's right to know.
The amended Rules also require intermediaries to periodically inform users, in clear and accessible language, of their rights, obligations, and the consequences of non-compliance.
Heightened Responsibilities for Significant Platforms
For significant social media intermediaries, the regulatory framework imposes additional obligations commensurate with scale and influence. Such platforms are required to obtain user declarations regarding synthetically generated content, deploy proportionate technical measures to assess the accuracy of those declarations, and ensure that synthetically generated information is not published without appropriate identification.
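These layered obligations can be pictured as a gating step in an upload pipeline: collect the user's declaration, apply a proportionate technical check, and publish only with the appropriate identification. The sketch below is hypothetical throughout; the detector, the dataclass fields, and the review flag are illustrative assumptions, not a design prescribed by the Rules or drawn from any actual platform.

# Hypothetical upload-gating sketch; the Rules impose obligations,
# not this (or any particular) implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    content: bytes
    declared_synthetic: bool  # user declaration collected at upload time

def looks_synthetic(content: bytes) -> bool:
    # Placeholder for a proportionate technical measure, such as a
    # classifier or watermark detector; stand-in logic for demonstration.
    return b"synthetic" in content

@dataclass
class Decision:
    publish: bool
    label: Optional[str]   # identification applied before publication
    needs_review: bool     # declaration and detection disagree

def gate_publication(upload: Upload) -> Decision:
    detected = looks_synthetic(upload.content)
    is_synthetic = upload.declared_synthetic or detected
    return Decision(
        publish=True,
        label="synthetically-generated" if is_synthetic else None,
        # Any mismatch between declaration and detection is flagged
        # for human review rather than passed through silently.
        needs_review=(upload.declared_synthetic != detected),
    )

The essential design point is sequencing: identification is attached before publication, consistent with the ex-ante character of the framework.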
Failure to comply with these requirements may constitute a lapse in due diligence under the Rules, with attendant statutory consequences. This approach reflects a calibrated allocation of responsibility, recognising that platforms with greater systemic impact must shoulder higher governance obligations.
Policy Guidance for Responsible AI Adoption
Complementing these enforceable legal rules, the India AI Governance Guidelines, 2025 articulate a policy framework for responsible, safe, and inclusive AI adoption. The Guidelines emphasise transparency, accountability, human-centric design, and risk awareness, while expressly operating within the boundaries of existing law.
Importantly, the Guidelines do not displace statutory obligations under the IT Act or the Intermediary Rules. Rather, they provide directional guidance to developers, deployers, and institutions, reinforcing the expectation that AI systems, particularly those capable of generating synthetic content, must be designed and deployed with due regard to societal impact and public trust.
Legislative Competence and Democratic Stewardship
India’s approach to governing emerging technologies is anchored in constitutional legitimacy and institutional capacity. While the present framework relies on calibrated rule-making and policy guidance, Parliament retains full legislative competence to respond to evolving technological realities where required in the public interest. This is not an assertion of regulatory excess, but a reaffirmation of democratic stewardship: ensuring that innovation remains aligned with constitutional values, due process, and the protection of individual dignity. The existing framework therefore reflects not regulatory finality, but regulatory readiness: adaptive, proportionate, and firmly grounded in the rule of law.
A Citizen-Centric Digital Governance Model
India’s approach to digital regulation has consistently avoided rigid or reactionary responses. Instead, it has favoured principle-based and proportionate governance built on extensive consultation that preserves innovation while safeguarding rights.
The evolving framework for synthetic media exemplifies this approach. By combining definitional precision, ex-ante safeguards, mandatory transparency, time-bound remedies, appellate oversight, and policy guidance for responsible AI, India is strengthening confidence in its digital public sphere. The challenge posed by synthetic media is ultimately one of trust: trust that technology will not outpace rights, that platforms will remain accountable, and that institutions will respond effectively to citizen harm. Empowering users through enforceable procedures, embedding transparency by design, and reinforcing institutional oversight are not competing objectives. They are the foundations of a resilient digital democracy.
Anchored in these principles, India’s response to synthetic media offers not only a domestic governance solution, but a globally relevant model for democratic, rights-respecting regulation in an AI-mediated world.
(The author is Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India)