ISO 27001, whatever its merits as a general information security standard, was not designed for AI systems. It does not address the specific vulnerabilities introduced by machine learning models, the risk of adversarial attacks, the security implications of training data poisoning, or the accountability questions that arise when an AI system makes a security-relevant decision autonomously. As AI becomes embedded in critical systems across banking, healthcare, defence, and public services, the absence of AI-specific security standards and enforceable cybersecurity obligations is not merely a regulatory inconvenience. It is a national security concern.

In the absence of hard law, India's approach to AI governance has leaned heavily on voluntary guidelines and self-regulatory frameworks. The underlying logic is that India must not stifle innovation by imposing premature or overly prescriptive rules on a rapidly evolving technology. This is a reasonable concern, and it reflects a genuine tension that every major AI nation is navigating. However, self-regulation has a structural weakness that no amount of voluntary commitment can overcome.