India's AI Legal Crisis: Governing Tomorrow's Technology With Yesterday's Laws

As India hosts the AI Summit 2026, the gap between what AI can do, what it is doing, and what the legal system is equipped to address is widening. That gap carries real consequences for citizens, businesses, courts, and for India's credibility as a responsible AI power on the world stage.

India AI Law 2026
Artificial intelligence framework
Photo: PTI
Summary
  • The IT Act contains no provisions addressing artificial intelligence, machine learning, algorithmic decision-making, or the legal status of AI-generated content.

  • Yet it is this legislation that courts, regulators, and litigants are being asked to stretch, interpret, and apply to AI-related disputes in 2026.

  • As AI penetrates banking, healthcare, defence, and public services, the absence of AI-specific security standards and enforceable cybersecurity obligations is becoming a national security concern.

India's artificial intelligence ecosystem is growing at a pace that few predicted even five years ago. The country has emerged as one of the world's leading hubs for AI talent, AI-powered startups, and AI adoption across sectors ranging from healthcare and agriculture to finance and public administration. The government has committed billions of dollars to AI infrastructure through the IndiaAI Mission, and the AI Action Summit hosted at Bharat Mandapam in February 2026, still ongoing at the time of writing, has drawn global attention as the largest AI gathering ever organised, signalling that India is determined to play a defining role in shaping the future of this technology. And yet, beneath this impressive momentum lies a legal reality that is far less comfortable: India is attempting to govern one of the most consequential technologies in human history using laws that were never designed for it, in the absence of frameworks that the rest of the world is already building.

The core problem is not that India lacks thoughtful people working on AI governance. It is that thinking has not yet translated into law. The result is a widening gap between what AI can do, what it is doing, and what the legal system is equipped to address. That gap carries real consequences for citizens, businesses, courts, and for India's credibility as a responsible AI power on the world stage.

The IT Act 2000: A 26-year-old law

The primary legislation governing India's digital landscape remains the Information Technology Act of 2000. This law was enacted at a time when the internet was still a novelty for most Indians, when smartphones did not exist, and when the most sophisticated algorithms in widespread use were basic search engines. The IT Act was never intended to be an AI law. It contains no provisions specifically addressing artificial intelligence, machine learning, algorithmic decision-making, or the legal status of AI-generated content. Yet it is this legislation that courts, regulators, and litigants are being asked to stretch, interpret, and apply to AI-related disputes in 2026.

The limitations are not minor. Questions about who bears liability when an AI system causes harm, whether an AI company qualifies as an intermediary under the Act, what due diligence obligations apply to platforms that deploy generative AI, and how existing criminal provisions apply to AI-facilitated offences are all being answered through judicial interpretation rather than clear statutory guidance. This is an inherently unstable foundation. It creates legal uncertainty for businesses, inconsistent outcomes for individuals who suffer harm, and an enforcement environment where regulators lack the tools they need to act decisively.

The AI-cybersecurity legal vacuum

Dozens of countries across the world have enacted national cybersecurity laws that establish clear obligations for organisations operating critical digital infrastructure, set minimum security standards, and create accountability mechanisms when those standards are breached. India has none of this. There is no binding national cybersecurity standard beyond the Information Technology (Reasonable Security Practices and Procedures) Rules of 2011, which essentially point organisations toward ISO 27001 compliance as the benchmark for reasonable security. And India has no law on AI security at all.

ISO 27001, whatever its merits as a general information security standard, was not designed for AI systems. It does not address the specific vulnerabilities introduced by machine learning models, the risks of adversarial attacks on AI systems, the security implications of training data poisoning, or the accountability questions that arise when an AI system makes a security-relevant decision autonomously. As AI becomes embedded in critical systems across banking, healthcare, defence, and public services, the absence of AI-specific security standards and enforceable cybersecurity obligations is not merely a regulatory inconvenience. It is a national security concern.

In the absence of hard law, India's approach to AI governance has leaned heavily on voluntary guidelines and self-regulatory frameworks. The underlying logic is that India must not stifle innovation by imposing premature or overly prescriptive rules on a rapidly evolving technology. This is a reasonable concern, and it reflects a genuine tension that every major AI nation is navigating. However, self-regulation has a structural weakness that no voluntary framework can overcome.

Cybersecurity is expensive. Transparency mechanisms cost money to build and maintain. Algorithmic auditing requires significant investment. When these measures are voluntary, the organisations most likely to adopt them are those already inclined toward responsible practices, while those most likely to cause harm face no meaningful pressure to change their behaviour. The result is not a self-regulating ecosystem but a compliance gap that falls precisely where the risks are highest. India's particular business culture, with its tradition of creative improvisation and regulatory workarounds, makes voluntary frameworks even less effective than they might be elsewhere. Without the binding force of law, compliance remains aspirational.

The new 2026 rules: progress, but not enough

In February 2026, the government amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to introduce a significant new obligation: service providers offering AI-generated or synthetically generated content must label that content as AI-generated. Failure to do so results in the loss of statutory immunity from legal liability. The rules come into effect on 20 February 2026 and represent a genuine step forward, particularly in addressing deepfakes and the spread of synthetic misinformation.

However, the rules fall well short of a comprehensive AI governance framework. They focus on labelling and due diligence by platforms rather than on the broader questions of accountability, security, bias, liability, and transparency that an AI law would need to address. They do not resolve the fundamental question of how AI companies should be classified under Indian law. Moreover, they remain grounded in an intermediary liability framework that was designed for platforms hosting user-generated content, not for companies whose core product is an AI system capable of autonomous generation, reasoning, and decision-making.

The global context: India falling behind on AI law

The international landscape makes India's situation more urgent. Six jurisdictions now have dedicated AI-cyber legislation: the European Union's AI Act, which establishes a comprehensive risk-based framework with significant penalties for non-compliance; China's regulations on generative AI and algorithmic recommendations; South Korea's AI Basic Act; Japan's approach to AI governance; Hungary's legislation; and El Salvador's distinctive framework. Each of these is imperfect, and each reflects the priorities and limitations of its own legal culture. But each of them provides a foundation that India currently lacks: a clear signal to businesses about what is required, a framework for courts to apply when disputes arise, and an accountability structure that goes beyond voluntary compliance.

The questions Indian law has not answered

Several foundational legal questions remain entirely unresolved in the Indian context. The question of AI legal personhood, whether AI systems can bear rights, obligations, or liability, has not been addressed despite being central to any workable accountability framework. If an AI system causes harm autonomously, and neither the developer, the deployer, nor the user can be clearly identified as responsible, the victim has no meaningful recourse under current law. The black box problem, the inability of affected parties to understand or challenge how an AI system reached a decision, has not been addressed through any transparency or explainability requirement. I recently released an AI Accountability Framework 2026, which contains the essential legal principles and doctrines governing AI accountability.

Copyright law does not yet address AI-generated works, leaving authorship, ownership, and the legality of training on copyrighted material unresolved. The legal status of AI agents, systems capable of taking autonomous actions in the digital world, including entering contracts, conducting transactions, and interacting with other systems, is entirely undefined. Also, the question of data privacy in the context of AI training, particularly whether the Digital Personal Data Protection Act of 2023 provides adequate protections given that AI companies often treat user data as effectively public for training purposes, remains deeply contested.

The path forward

India needs a dedicated AI law. It needs a national cybersecurity framework updated for the AI era and a dedicated authority that consolidates governance across ministries and provides the coherent regulatory oversight that a technology of this consequence demands. And it needs to move from the instinct of watching how others regulate before acting, to actively contributing to the development of AI governance norms that reflect the interests and values of the Global South.

None of this means abandoning the commitment to innovation. It means recognising that legal clarity is not the enemy of innovation but one of its essential preconditions. Businesses invest more confidently when liability is clear. Developers build more responsibly when standards are defined. Citizens participate more willingly in an AI-powered society when they know their rights are protected. India's AI ambition is real and deserved. The legal infrastructure to match that ambition is overdue.

Dr. Pavan Duggal is Architect, Global AI Accountability. He is a global authority on AI law. With over 37 years as a Supreme Court of India advocate, he specialises in AI ethics, liability frameworks, data privacy, cybercrime, blockchain, metaverse law, quantum computing, and Global South perspectives on AI governance.

Views expressed are personal
