Whether India's eventual framework (if one materialises) follows that model or opts for user-side verification is entirely unclear.

Then there is the question of scope, one that proponents of the ban have not satisfactorily answered. The harms attributed to social media (addictive design, exposure to harmful content, predatory behaviour, and unrestricted peer interaction) are not exclusive to Instagram or Facebook. Online gaming platforms present an identical, and in some respects more acute, risk profile. Roblox, among the platforms most widely used by children globally, operates as an immersive social environment where children interact with strangers in real time, often without meaningful parental oversight. Discord, technically a communication platform but functionally a gaming-adjacent social network, hosts thousands of unmoderated communities where minors are present and where harmful content and predatory contact are documented concerns. A regulatory framework that targets social media while leaving the online gaming sphere untouched is not a coherent child protection policy; it is a selective intervention that addresses the most visible part of the problem while ignoring comparable risks in adjacent spaces. If the legislative intent is genuinely to protect children from digital harm, the scope of regulation must follow the harm, not the label.