Summary of this article
A digital restriction originating from Bengaluru, India’s IT capital, carries a symbolism that travels beyond state boundaries
Platforms currently rely on self-declared age at sign-up, which teenagers routinely circumvent
A regulatory framework that targets social media while leaving the online gaming sphere untouched is not a coherent child protection policy; it is a selective intervention at best
Earlier this month, Karnataka Chief Minister Siddaramaiah announced, in the course of presenting the state's budget for 2026–27, that social media access would be banned for children under the age of 16, making Karnataka the first Indian state to formally adopt such a measure. The stated objective is to counter the "adverse effects of increasing mobile usage on children." The announcement follows Australia's landmark nationwide ban, and sits alongside a growing wave of similar proposals across France, Spain, Indonesia, and several other geographies. Goa and Andhra Pradesh are reportedly considering restrictions on similar lines.
The concern around the ban is not without basis. Indian adolescents reportedly spend between three and seven hours daily on social media. Research has consistently linked heavy platform use to anxiety, depression, cyberbullying, and eating disorders among teenagers. The recent Grok scandal, where an AI system generated sexual deepfakes of minors, has sharply accelerated legislative urgency on this front globally, and India's own Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified as recently as February 20, 2026, already introduce a dedicated regulatory framework for AI-generated harmful content, including deepfakes. That these two policy conversations are running in parallel is not coincidental: both are expressions of the same underlying concern that digital platforms, as currently designed and governed, offer inadequate protection to younger users.
The symbolic weight of Karnataka's announcement is also real. Bengaluru is India's technology capital, home to the regional offices of tech giants such as Meta, Google, Microsoft, and Amazon. A digital restriction originating from this geography carries a signal that travels beyond state boundaries. The Union IT Minister of India has confirmed active discussions with platforms on age-based restrictions, and India's Chief Economic Adviser had already publicly backed such measures. The direction of travel, at least rhetorically, is clear. The difficulty is that the ban, as presently announced, is constitutionally precarious.
It is also operationally hollow. No draft legislation, no subordinate regulation, no age verification mechanism, and no implementation date have followed the announcement. A measure introduced through a budget speech, without any accompanying framework for how it will actually function, raises an obvious question: what, precisely, has been banned? The announcement tells platforms and users what the government intends, but nothing about how that intention translates into enforceable obligation. Until these questions are answered, the ban exists as a political statement rather than a regulatory instrument. Moreover, the choice of 16 years of age follows Australia's lead rather than any independent reasoning. Australia picked it as a workable middle ground between the existing platform minimum of 13 and the legally fraught territory of 18. Karnataka has simply borrowed the number, without pausing to ask whether it sits comfortably alongside India's own data protection regulation, which already treats anyone under 18 as a child.
Platforms currently rely on self-declared age at sign-up, which teenagers routinely circumvent; the TikTok ban of 2020 demonstrated as much domestically. Meaningful enforcement would require hard identity verification, likely Aadhaar-linked, for every user accessing a platform. The consequence is a surveillance architecture that centralises sensitive biometric data across a billion-user base, destroying online anonymity in the process and still failing to stop determined minors with access to VPNs or false identification. Australia resolved this by placing the compliance burden on platforms rather than users, with fines of up to A$49.5 million for violations.
Whether India's eventual framework (if one materialises) follows that model or opts for user-side verification is entirely unclear. Then there is the question of scope, and it is one that proponents of the ban have not satisfactorily answered. The harms attributed to social media (addictive design, exposure to harmful content, predatory behaviour, and unrestricted peer interaction) are not exclusive to Instagram or Facebook. Online gaming platforms present an identical, and in some respects more acute, risk profile. Roblox, which is among the most widely used platforms by children globally, operates as an immersive social environment where children interact with strangers in real time, often without meaningful parental oversight. Discord, technically a communication platform but functionally a gaming-adjacent social network, hosts thousands of unmoderated communities where minors are present and where harmful content and predatory contact are documented concerns. A regulatory framework that targets social media while leaving the online gaming sphere untouched is not a coherent child protection policy; it is a selective intervention that addresses the most visible part of the problem while ignoring comparable risks that operate in adjacent spaces. If the legislative intent is genuinely to protect children from digital harm, the scope of regulation needs to follow the harm, not the label.
There is also a quieter problem that the ban does not address at all: India already has a child data protection framework, and it has not been used. The Digital Personal Data Protection Act, 2023 (DPDPA) mandates verifiable parental consent before platforms process a child's data, prohibits behavioural tracking and targeted advertising directed at minors, and prescribes penalties of up to ₹200 crore for violations. These provisions have been notified but not enforced. The debate over a social media ban is proceeding as though this framework does not exist.
Beyond the legal infirmities, there is a substantive policy concern worth sitting with. A ban that successfully displaces teenagers off mainstream platforms does not reduce their digital activity; it simply redirects it toward encrypted messaging groups, anonymous forums, and unmonitored spaces where moderation is absent and harmful content operates without any institutional check. The platforms being banned are, whatever their failings, accountable to regulators and maintain reporting mechanisms. Age-gating also risks giving platforms a convenient compliance shield: they can point to access restrictions while leaving untouched the algorithmic architectures, infinite scroll designs, and engagement-maximising notification systems that generate the harms in the first place.
Karnataka's announcement is a legitimate opening to a necessary conversation. The harms motivating it are real, and the global direction of policy is clear. But a questionable state announcement, with no implementation mechanism and no engagement with the existing DPDPA framework, is a considerable distance from the calibrated, enforceable policy that the problem actually demands.
Gaurav Mahajan is a co-founding Partner at The Precept Law Offices, India. He has extensive experience in commercial law, intellectual property and technology law, data privacy, AI Ethics, dispute resolution, white-collar crime and general litigation.
Views expressed are personal