
AI India Impact Summit: 300 Million Children Faced Tech-Facilitated Abuse In 2024, Warn Experts

Speaking at a session titled “Child-Centric AI Policy: Safeguarding India’s AI Future”, experts examined gaps in India’s current AI policy frameworks, from child rights and data privacy to algorithmic harm, and called for a shift from “child safety” to “child well-being by design.”

New Delhi, Feb 16 (ANI): Prime Minister Narendra Modi during the inaugural event of AI Impact Expo 2026 at Bharat Mandapam in New Delhi on Monday. | Source: IMAGO / ANI News | Representational
Summary
  • 300 million children faced tech-facilitated abuse in 2024; AI-generated sexual abuse material surged by 1,325%.

  • Experts called for “well-being by design,” stronger legal clarity, monitoring systems, and child-focused AI governance.

  • Amended IT Rules regulate deepfakes and mandate rapid takedowns, but India still lacks a comprehensive AI law.

At a time when India is positioning itself as a global AI leader, child rights experts and technology policy advocates have warned that the country’s AI future could place millions of children at risk unless safeguards move beyond rhetoric to enforcement.

Speaking at a session titled “Child-Centric AI Policy: Safeguarding India’s AI Future” at the AI India Impact Summit, experts examined gaps in India’s current AI policy frameworks, from child rights and data privacy to algorithmic harm, and called for a shift from “child safety” to “child well-being by design.”

Zoe Lambourne of Childlight warned that AI systems are amplifying risks for children globally. AI tools, she said, are capable of generating misinformation, offering dangerous advice, providing offenders guidance on how to abuse children, and creating illegal AI-generated synthetic child sexual abuse material. Citing findings from Childlight’s latest report, Lambourne said that in 2024 an estimated 300 million children worldwide were victims of technology-facilitated abuse or exploitation. Particularly alarming, she noted, was a 1,325 per cent increase in AI-generated sexual abuse material over the past year.

“Young people in India see AI as powerful and beneficial,” she said, “but not safe by default.”

She emphasised that safety cannot end at the product design stage. It must extend to sustained monitoring, rapid response systems, strengthened child helplines, and compensation frameworks for survivors. Without these layered mechanisms, she argued, policy interventions risk remaining superficial.

Expert Recommendations

Gaurav Aggarwal of iSPIRT Foundation suggested that even the language of regulation needs rethinking. Rather than “child safety”, he proposed that policymakers adopt the broader framing of “child well-being”, arguing that governance should focus not only on preventing harm but on enabling children to flourish in digital ecosystems. He said parents must be meaningfully included in policy processes and that well-being by design should be embedded into law, rather than leaving responsibility entirely to companies and platforms.

Chitra Iyer of Space2Grow said an expert group has submitted detailed recommendations to the Ministry of Electronics and IT (MeitY), and that a child safety working group within the ministry is reviewing them. If adopted, she said, these recommendations could help India articulate a clear stance on AI and child protection and position the country to set the narrative for the Global South.


Among the proposals under consideration is the creation of a national child safety observatory that would consolidate innovations, best practices and research in one institutional space. The idea is not only to coordinate domestic efforts but also to establish leadership in emerging global conversations. The group has also proposed building a Global South working network, something MeitY is already engaging with, to foster cross-border collaboration on child protection in AI systems.

Another recommendation is the creation of a child safety innovation sandbox to pilot solutions aimed at combating digital and AI-driven harms affecting children. A youth advisory council would also be set up to ensure that young voices inform policymaking. The experts further stressed the need to strengthen the legal framework, particularly by clarifying how the law distinguishes between AI-generated abuse material and content generated by individuals. Investment in digital resilience and AI literacy, they argued, should be treated as preventive infrastructure rather than as an afterthought.


India has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, notified on 10 February 2026, bringing AI-generated content under sharper regulation.

For the first time, the rules define “synthetically generated information” (SGI), formally covering deepfakes, AI-generated videos, cloned voices and other hyper-realistic manipulated content. Platforms must remove unlawful content within three hours, or within two hours in cases involving sexual content or impersonation, and must ensure mandatory labelling of AI-generated material.

Advocate N.S. Nappinai, speaking at the session, addressed these new guidelines, noting that their primary purpose is preventive and protective, placing accountability on platforms and service providers. However, she raised critical questions about regulatory balance.

“When we speak about preventive measures, there is a lot beyond what meets the obvious,” she said. “Who decides what is essential in terms of preventive and protective measures? Where do you place the guardrails? To what extent should the law step in, and to what extent should technology be given the space to evolve?”


India currently has AI-related regulations and intermediary guidelines, but it does not yet have a comprehensive AI law comparable to the European Union’s framework, she noted. The Digital Personal Data Protection Act has been notified but is still awaiting full implementation. “We have a very long road ahead,” she said. “It’s not enough, but we have started somewhere and we hope that we land safely eventually.”
