Recently, the Supreme Court of India called for a strong regulatory framework for harmful User-Generated Content, stating self-regulation has failed. It recommended an independent oversight body and encrypted Aadhaar/Permanent Account Number age verification to curb fast-spreading, irreversible online harm.
Concerns Raised by the Supreme Court

- Rapid Spread of Harmful Content: Harmful digital material spreads at a speed that outpaces takedown procedures, creating lasting reputational and psychological damage that reactive systems cannot effectively reverse.
- Rise of Unregulated Content Creators: Large online influencers operate without meaningful accountability, enabling widespread circulation of misleading, abusive, or socially disruptive content across platforms.
- Weak Protection for Minors: Basic “18+ disclaimers” fail to ensure child safety, allowing minors easy access to explicit, violent, or psychologically harmful material due to poor verification mechanisms.
- Ineffectiveness of Self-Regulation: Current platform-driven self-regulatory systems remain inconsistent, lack enforcement power, and fail to curb harmful content in a timely manner.
- Deepfakes and AI-Driven Misuse: The growth of AI-generated deepfakes and manipulated content threatens individual dignity, democratic discourse, and public order by making truth indistinguishable from fabrication.
Key Directives Issued by the Supreme Court

- Autonomous Digital Regulator: Establish a statutory, independent, and influence-free authority to monitor digital content and ensure impartial regulation beyond political or commercial pressure.
- Robust Age-Verification Systems: Implement privacy-preserving age-verification mechanisms—potentially using encrypted Aadhaar/PAN tokens—to prevent minors from accessing sensitive content.
- Shift Toward Proactive Mitigation: Encourage platforms to adopt pre-emptive detection systems instead of reactive complaints, particularly for content that poses deep, irreversible, or dignity-related harms.
- Strengthened Dignity Protections: Reinforce Article 21 by ensuring that online abuse, humiliation, and deepfake attacks are addressed through timely and effective State-backed safeguards.
- Balanced Enforcement of Free Speech: Craft regulatory standards that preserve Article 19(1)(a) freedoms while upholding Article 19(2) safeguards relating to morality, public order, and defamation.
Evolution of India’s Online Regulation
- IT Act 2000 as the First Digital Framework: The IT Act introduced India’s early standards for intermediary liability, though it now struggles to address modern challenges like deepfakes and AI manipulation.
- Shreya Singhal (2015) and Free Speech Protections: The judgment struck down Section 66A, protecting free expression but leaving a regulatory vacuum for addressing targeted online abuse.
- IT Rules 2021 and Platform Accountability: The Rules mandated due diligence, traceability, and grievance redressal, though litigation has slowed uniform implementation.
- Digital Shifts of the 2020s: Rapid growth of social media, misinformation, cyberbullying, and deepfake technology has exposed major gaps in existing legal safeguards.
- Judicial Push for Modern Regulation: The Court’s intervention reflects a shift towards structured, preventive, and proportionate digital governance.
- Kaushal Kishor v. State of U.P. (2023) Judgment: The 5-judge bench held that Article 21 includes protection from online abuse by non-State actors, observing, “Freedom of speech of one cannot become freedom to humiliate another.”
- Telecommunications Act, 2023 (Section 20): It empowers the Centre to take temporary possession of, or suspend, telecom services on public-emergency or public-safety grounds; its possible extension to OTT and internet services risks overlap and conflict with the IT Act.
Constitutional Context
- Free Speech Under Article 19(1)(a): Regulation must not create a chilling effect, discouraging citizens, journalists or artists from expressing legitimate opinions.
- Reasonable Limits Under Article 19(2): Restrictions must be narrowly tailored, precisely drafted, and linked to constitutional grounds like decency, morality, public order, and security of the State.
- Right to Dignity Under Article 21: Online harassment, deepfakes, and cyberbullying undermine the constitutional guarantee of dignified life, requiring active State intervention.
- Proportionality as a Regulatory Standard: Any framework must meet the constitutional test of suitability, necessity, and least-restrictiveness.
Significance of Strengthening India’s Digital Governance
- Enhanced Citizen Protection: Strong regulation helps curb cyberbullying, doxxing, deepfake attacks, and hate campaigns, especially against vulnerable groups.
- Strengthening Democratic Institutions: Tackling disinformation and online intimidation supports healthier public discourse and reinforces democratic trust.
- Expanding the Digital Rule of Law: A structured framework helps embed constitutional values—dignity, equality, accountability—within the digital ecosystem.
- Positioning India as a Digital Leader: Rights-based regulation allows India to present a globally credible model for democratic digital governance.
- Improved Child Safety Standards: Stronger age controls, content filters, and verification systems improve protection for minors.
Key Challenges in Regulating Online Content in India

- Over-Regulation Risks and Constitutional Concerns:
- Excessive Censorship: Vague or overbroad rules may suppress dissent, satire, and political criticism, producing a chilling effect on legitimate expression.
- Proportionality Violations: Restrictions that fail the constitutional tests of suitability, necessity, and least-restrictiveness risk violating the Shreya Singhal standard.
- Threat to Democratic Debate: Ambiguous thresholds for what qualifies as “harmful” or “offensive” content may weaponise regulation against journalists, activists, and students.
- BNS 2023 Gap: The Bharatiya Nyaya Sanhita 2023 covers online defamation and harassment but lacks specific provisions for deepfake pornography, online stalking, doxxing, and brigading, warranting a dedicated chapter on technology-facilitated abuse.
- Technological and Moderation Limitations:
- AI Misclassification: Automated tools often mislabel legitimate speech, fail to detect regional nuances, and struggle with India’s linguistic diversity.
- Algorithmic Bias: Content moderation systems may reproduce gender, caste, or religious biases, unfairly targeting marginalised voices.
- Deepfake Complexity: Rising volumes of AI-generated deepfakes complicate detection, magnifying risks to dignity, trust, and public discourse.
- Institutional, Judicial, and Enforcement Gaps:
- State Capacity Constraints: Shortages of cyber-forensics experts, specialised regulators, and advanced technical infrastructure hinder effective oversight.
- Judicial Delays: Lack of specialised mechanisms leads to slow adjudication of cyber offences, prolonging harm.
- Fragmented Regulation: Overlapping jurisdiction among multiple agencies results in inconsistent enforcement and regulatory confusion.
- Centre–State Friction and Federal Governance Issues:
- Union–State Tensions: Communications fall under Entry 31 of the Union List (Posts and Telegraphs), yet enforcement touches State subjects such as police and public order; States like Tamil Nadu and Kerala have argued that the IT Rules 2021 reflect central overreach.
- Uneven State Enforcement: Differing political priorities and capacities across States create regulatory inconsistency.
- Weak Local Response Systems: Absence of State-level digital safety mechanisms limits ground-level responsiveness to region-specific harms.
- Transparency Deficits and Accountability Concerns:
- Opaque Blocking Orders: Takedowns issued under Section 69A and Section 70B by MeitY and CERT-In often lack public disclosure, reducing democratic oversight.
- Low Public Trust: Secrecy surrounding takedown reasons fuels perceptions of arbitrariness or political bias.
- Insufficient Platform Disclosure: Platforms provide minimal or generic data on moderation actions, concealing systemic failures.
- Economic and Innovation Trade-Offs:
- Compliance Burdens on Startups: Over-regulation may raise costs for smaller companies, undermining the Creator Economy and digital entrepreneurship.
- Capital Flight Risks: Stringent regulatory expectations may prompt global platforms to consider scaling down operations, as seen in the 2021 standoff between Twitter and the government over compliance with the IT Rules.
- Under-Regulation Harms Trust: Weak safeguards diminish user confidence and advertiser trust, as highlighted by the Cambridge Analytica fallout.
- Unequal Compliance Impact: Uniform standards burden smaller firms while failing to address disproportionate risks posed by dominant platforms.
- Identity-Based Harms and Societal Vulnerabilities:
- Disproportionate Harassment: Studies show 59% of urban women and 38% of rural women face online harassment; Dalit, Adivasi, and minority activists face heightened casteist abuse and doxxing.
- Invisible Structural Patterns: Absence of identity-specific reporting hides systemic discrimination and mutes recognition of high-risk groups.
- Enhanced Vulnerability: Marginalised communities face greater risks of targeted misinformation, hate campaigns, and coordinated trolling.
- WEF Global Risks 2024: The WEF Global Risks Report 2024 ranks misinformation and disinformation as India’s top short-term risk, ahead of climate, debt, or war, underscoring the urgency of regulating harmful online content.
- Election Integrity and Public Order Threats:
- Coordinated Influence Operations: Deepfakes, mass-forwarded misinformation, and targeted propaganda threaten electoral integrity.
- Rapid Viral Harm: False political content spreads faster than authorities can respond, jeopardising free and fair elections.
- Public Order Disruptions: Misinformation can incite panic, communal tension, or violence before corrective messaging reaches citizens.
Way Forward
- Legal, Constitutional & Institutional Reforms:
- Clear Definition of Online Abuse: Enact a precise statutory definition of targeted online abuse, ensuring it cannot be misused against satire, dissent, or political criticism, drawing on lessons from the UK Online Safety Act.
- Proportionality-Based Restrictions: Ensure that all regulatory actions meet constitutional standards of suitability, necessity, and least-restrictiveness, reinforcing the proportionality doctrine affirmed in K.S. Puttaswamy (2017) and the overbreadth standard of Shreya Singhal (2015).
- Statutory Digital Standards Authority: Convert the current Grievance Appellate Committee (GAC) into a Digital Standards Authority with judicial and technical expertise from law, technology, psychology, child rights, and civil society.
- Risk-Based Tiered Regulation: Adopt a risk-based model, imposing the strictest obligations on Very Large Online Platforms (VLOPs) while protecting innovation among smaller Indian platforms and the Creator Economy.
- Inter-Agency Coordination: Build seamless cooperation between MeitY, MIB, law enforcement, the DPDP Authority, and child protection bodies to ensure coherent and uniform digital governance.
- Federalism, Decentralisation & Transparency:
- Centre–State Harmonisation: Although Entry 31 (Posts and Telegraphs) sits in the Union List, effective content governance touches State subjects such as police and public order, warranting shared Centre–State responsibility.
- Reduce Centre–State Friction: Address tensions arising from challenges to the IT Rules 2021 by States such as Tamil Nadu and Kerala, ensuring policy clarity and cooperative oversight.
- Cooperative Federalism Architecture: Create a national regulatory framework supported by State Internet Safety Officers to enable localised enforcement, culturally sensitive implementation, and timely grievance handling.
- Mandatory Transparency Reporting: Require MeitY to release an annual public report on all Section 69A and 70B blocking orders, following European Union-style democratic transparency norms.
- Platform Accountability & Duty of Care:
- Conditional Safe-Harbour Immunity: Withdraw Section 79 protections if platforms fail to remove clearly illegal content within 24–36 hours of orders from courts or the regulator, consistent with the EU Digital Services Act.
- Graded Liability Mechanism: Introduce escalating penalties for platforms showing repeated non-compliance, systemic moderation failures, or negligence in addressing harmful content.
- Safety by Identity Reporting: Mandate annual identity-sensitive moderation reports detailing how abuse affects different groups—gender, caste, tribe, ethnicity—to ensure equity and prevent discriminatory harms.
- Technology, Cyber Safety & AI Governance:
- Mandatory Proactive AI Tools: Require deployment of AI/ML-based pre-filtering, deepfake detection systems, and early-warning mechanisms, backed by transparency norms and independent audits.
- Human-in-the-Loop Oversight: Combine AI systems with trained human moderators to reduce algorithmic biases, misclassification errors, and linguistic inaccuracies across Indian languages.
- Sustainable Digital Moderation: Introduce energy audits and green computing standards for high-risk AI systems used in moderation, following the EU Artificial Intelligence Act 2024.
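The human-in-the-loop idea above can be illustrated with a minimal sketch. The names, thresholds, and the notion of a single “AI harm score” are all hypothetical simplifications, not a description of any platform’s actual system: the point is only that near-certain cases may be auto-actioned while the ambiguous middle band is always routed to a trained human moderator.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationDecision:
    action: str  # "remove", "keep", or "escalate"
    reason: str

@dataclass
class HumanReviewQueue:
    """Posts awaiting review by trained human moderators."""
    items: list = field(default_factory=list)

    def enqueue(self, post_id: str) -> None:
        self.items.append(post_id)

def triage(post_id: str, ai_harm_score: float,
           queue: HumanReviewQueue,
           remove_above: float = 0.95,
           keep_below: float = 0.20) -> ModerationDecision:
    """Route a post based on a hypothetical AI harm score in [0, 1].

    Only near-certain cases are auto-actioned; everything in the
    ambiguous middle band is escalated to a human reviewer, which is
    where linguistic nuance and bias correction happen.
    """
    if ai_harm_score >= remove_above:
        return ModerationDecision("remove", "high-confidence AI detection")
    if ai_harm_score <= keep_below:
        return ModerationDecision("keep", "low harm score")
    queue.enqueue(post_id)
    return ModerationDecision("escalate", "ambiguous; human review required")
```

The design choice worth noting is that the thresholds are parameters: regulators or auditors could tighten `remove_above` for categories like child-safety content and widen the human-review band for politically sensitive speech.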
- Privacy, Encryption & User Protection:
- Preservation of End-to-End Encryption: Ensure that regulatory powers do not weaken E2E encryption, consistent with the fundamental right to privacy recognised in Justice K.S. Puttaswamy (2017).
- Privacy-Protective Age Verification: Use Zero-Knowledge Proof systems, encrypted age tokens, or Aadhaar/PAN-based blind tokens to verify age without storing personal data, fully compliant with the DPDP Act 2023.
- No Traceability Violations: Avoid identity or age-verification mechanisms that create traceability loopholes, surveillance risks, or mass monitoring concerns flagged by the Justice Srikrishna Committee.
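The data-minimisation idea behind encrypted age tokens can be sketched briefly. A real deployment would use blind signatures or zero-knowledge proofs so that even the issuer cannot link tokens to users; the HMAC-based sketch below is a deliberately simplified, hypothetical illustration of one property only: the token the platform sees carries the claim “age ≥ 18” and a random nonce, never the user’s Aadhaar/PAN details.

```python
import hmac
import hashlib
import secrets

# Symmetric key held only by the trusted issuer (a hypothetical stand-in
# for a proper public-key or blind-signature scheme).
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(user_is_adult: bool):
    """Issuer verifies age against identity records (not shown) and, if the
    user is 18+, returns (nonce, tag). No identity data enters the token."""
    if not user_is_adult:
        return None
    nonce = secrets.token_bytes(16)
    tag = hmac.new(ISSUER_KEY, nonce + b"age>=18", hashlib.sha256).digest()
    return nonce, tag

def verify_age_token(nonce: bytes, tag: bytes, issuer_key: bytes) -> bool:
    """Platform checks that the token was issued for the 'age>=18' claim.
    It learns nothing about who the user is, only that an issuer vouched
    for the claim."""
    expected = hmac.new(issuer_key, nonce + b"age>=18", hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

In this toy version the platform needs the issuer’s key (or an online verification call), which is why production systems favour asymmetric blind signatures or ZKPs; the sketch only demonstrates that age assurance does not require storing personal data, the property the DPDP Act 2023 compliance argument rests on.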
- Digital Literacy, Judicial Support & Ethical Ecosystems:
- Nationwide Digital Literacy Programme: Integrate online safety, information literacy, fact-checking, and digital citizenship into school curricula and community programmes to build informed users.
- Specialised Cyber Courts: Establish fast-track Cyber Courts modelled on POCSO courts to ensure timely adjudication of online abuse, harassment, and deepfake-related offences.
- Promotion of Digital Empathy: Foster ethical and empathetic digital behaviour through public campaigns, reducing online toxicity and building societal resilience against misinformation.
- Equity, Democracy & Economic Balance:
- Protection for Marginalised Groups: Recognise evidence showing higher vulnerability of women, Dalits, Adivasis, and activists to online abuse, and ensure policies explicitly address these identity-based harms.
- Identity-Sensitive Moderation Metrics: Require platforms to publish identity-disaggregated data on complaints, takedowns, and risk assessments to ensure accountability and equity.
- Election Integrity Mechanisms: Form a permanent Election Content Oversight Board comprising the ECI, MeitY, and the proposed Digital Standards Authority for real-time monitoring of election-period misinformation and deepfakes.
- Economic-Innovation Balance: Align digital regulation with India’s target of building a USD 1 trillion digital economy, avoiding both over-regulation that stifles start-ups and under-regulation that erodes user trust.
- Global Alignment with Indian Context: Incorporate best practices from the EU DSA and UNESCO frameworks while tailoring implementation to India’s socio-legal environment.
Conclusion
India needs a “Goldilocks” solution: neither the lawlessness of an unregulated internet nor the chill of over-censorship. A regulatory framework must be calibrated, transparent, and rights-respecting. By designing a system that punishes abusers without silencing honest critics, India can reconcile the freedoms of Article 19(1)(a) with the dignity and protection guaranteed by Article 21 in the digital age.