Summary of this article
In the ongoing conflict in West Asia, AI is not a background feature; it is front and centre.
When the tempo of warfare accelerates to machine speed, the role of human judgement begins to blur.
An algorithm trained on past data might misinterpret a civilian vehicle as a military convoy or misidentify a radar signature.
Artificial intelligence (AI) is no longer a speculative tool for future wars; it is already embedded in contemporary conflict, and today's geopolitical developments make that reality impossible to ignore.
Recently, the Indian government convened an all-party meeting in New Delhi to address the escalating West Asia crisis following the US-Israeli strikes on Iran. Senior ministers including External Affairs Minister S. Jaishankar, Home Minister Amit Shah, and Finance Minister Nirmala Sitharaman briefed cross-party leaders on maritime disruptions through the Strait of Hormuz, the safety of the Indian diaspora, and the domestic impact on energy supply. What went largely undiscussed in that chamber, however, is the dimension of this conflict that will define all future wars: the central role of AI in making targeting decisions at machine speed.
In the ongoing US-Iran conflict, AI is not a background feature; it is front and centre. Brad Cooper, head of the US Central Command, publicly confirmed that American war-fighters are "leveraging a variety of advanced AI tools" to sift through vast amounts of battlefield data in seconds, enabling commanders to "make smarter decisions faster than the enemy can react". The US 2026 Worldwide Threat Assessment, released by the Office of the Director of National Intelligence, has now formally designated AI a "defining technology for the 21st century", noting it "has been used in recent conflicts to influence targeting and streamline decision-making, marking a significant shift in the nature of modern warfare".
The Pentagon has simultaneously expanded its AI footprint by formalising Palantir's Maven AI system into a long-term military programme, a move that defence officials describe as part of a broader shift toward "data-centric warfare". At the same time, the War Department has launched an AI Acceleration Strategy explicitly aimed at making the United States the world's "undisputed AI-enabled fighting force".
Across militaries today, AI is increasingly used as what strategists call a 'decision advantage' engine. Algorithms scan thousands of images to identify changes in terrain or movement patterns; fuse signals intelligence, drone feeds and intercepted communications into a single operational picture; and simulate millions of battlefield scenarios to predict enemy responses and optimise strike routes. What once took teams of analysts days or weeks can now be produced in minutes. The result is what military planners describe as 'decision compression': the time between detecting a threat and responding to it shrinks dramatically. But when the tempo of warfare accelerates to machine speed, the role of human judgement begins to blur.
In theory, the doctrine of human-in-the-loop (HITL) protects against that risk. Technology companies and defence establishments insist that AI merely advises, and that the human commander still presses the button. In practice, the situation is far more complicated.
Private AI firms have tried to draw ethical red lines around military use of their technologies. Anthropic, for instance, built explicit guardrails into its AI system Claude, prohibiting two applications outright: mass domestic surveillance and fully autonomous weapons operating without human oversight. These restrictions were designed to signal that the firm would not build technologies that could independently decide who lives or dies.
As things stand today, those boundaries have already collided with geopolitics. Because of these restrictions, Anthropic reportedly fell out of favour with the US administration and was temporarily blacklisted from federal use. Earlier this month, the Pentagon formally called for Anthropic's AI to be removed from military operations within six months, the result of an escalating feud with the company's leadership. An internal Pentagon memo revealed that Claude had been the only large-scale AI system operational on the Defence Department's classified systems, used in areas including nuclear weapons, ballistic missile defence, and cyber warfare. This exposed a fundamental tension in the emerging AI arms race: governments want technological dominance, and technology companies want ethical distance.
Where one company withdraws, another often steps in
When Anthropic’s technology became politically contentious, OpenAI moved to fill the gap by signing a deal with the US Department of War. Critics quickly pointed out that the agreement contained vague safeguards and potential legal loopholes. Public backlash followed almost immediately, including a surge in consumer distrust and a dramatic spike in users uninstalling ChatGPT. Facing reputational pressure, OpenAI later amended its policy to mirror Anthropic’s guardrails by banning mass surveillance and unmonitored use in autonomous weapons systems. Meanwhile, Palantir's expanded Maven contract signals that the void will not remain unfilled for long.
This reveals something important about the governance of military AI: for now, corporate ethics policies are functioning as the first line of regulation. But companies are not regulators. Their priorities shift with political pressure, shareholder expectations and national security demands.
The risks become clearer when we examine how AI actually behaves in complex environments. Unlike humans, AI systems do not perceive the world; they infer it statistically. One demonstration used in AI safety research illustrates the point through what engineers call the 'chicken test'. In the experiment, an AI system is shown an image of a chicken that appears to have three legs because of shadows or unusual camera angles. Since its training data overwhelmingly suggests chickens have two legs, the model often dismisses the visual evidence as an illusion. While some systems insist the bird has two legs despite seeing three, others confidently claim four.
The problem is not merely that the model is wrong; it is that it is confidently wrong.
Now translate that dynamic into a battlefield environment filled with smoke, damaged infrastructure and ambiguous signals. An algorithm trained on past data might misinterpret a civilian vehicle as a military convoy or misidentify a radar signature. When systems generate hundreds of potential targets a day, it becomes almost impossible for a human analyst to independently verify each recommendation. This is where the idea of human oversight begins to erode, and where the human-in-the-loop risks becoming a human rubber stamp.
From a governance perspective, concerns about AI escalation are not theoretical. Research from King's College London found that in simulations of international crisis scenarios, AI models escalated to nuclear signalling in 95% of cases, frequently selecting aggressive options when placed under time pressure. The systems were not programmed to provoke conflict. They simply followed statistical pathways that appeared strategically rational within their parameters. Notably, China's Defence Ministry this month warned against "unrestricted application of AI by the military", stating that "giving algorithms the power to determine life and death not only erodes ethical restraints and accountability in wars, but also risks technological runaway". The warning is significant, not least because it comes from one of the United States' primary AI competitors.
International law has not caught up with this reality. The legal framework governing weapons reviews, particularly Article 36 of Additional Protocol I to the Geneva Conventions, was written for physical systems like missiles or artillery platforms. It assumes a weapon remains largely unchanged after deployment. AI does not work that way: AI systems evolve continuously through software updates, retraining and new datasets. Their behaviour can change without altering their hardware, making traditional weapons review processes insufficient.
What the world needs instead is lifecycle accountability for military AI. First, oversight must extend beyond procurement to include how systems are trained, updated and deployed in real time. Second, algorithms used in targeting should produce auditable decision logs explaining how conclusions were reached. Third, command structures must preserve genuine human veto authority rather than symbolic approval. These are governance, risk and compliance questions. And they are becoming geopolitical questions as well.
Most international discussions about military AI have taken place under the UN Convention on Certain Conventional Weapons. After nearly a decade of negotiations, the process has produced guiding principles but no binding treaty. Major powers remain reluctant to restrict technologies they believe will define future military advantage.
Here, history offers a warning.
Chemical weapons were widely used before the world agreed to ban them. Nuclear weapons were deployed before international treaties attempted to regulate them. The pattern is clear: governance often arrives after catastrophe. AI could follow the same path if policymakers continue to treat it as a niche technical issue rather than a systemic security challenge.
This is precisely why the all-party meeting in New Delhi matters beyond its immediate agenda. India is navigating the West Asia crisis as a pressing practical emergency: protecting its diaspora, securing its energy corridors, managing the Strait of Hormuz disruption. But India is simultaneously navigating something far longer in duration: its positioning in a world where AI is reshaping the rules of conflict itself.
India sits at the intersection of two worlds, a rapidly expanding digital economy and a complex regional security environment. A recent analysis from Chatham House argues that middle powers must pursue "sovereign AI" strategies to preserve their political and economic independence in an era increasingly shaped by algorithmic power. Failure to secure influence over AI ecosystems risks forfeiting control over not just technology, but also economic competitiveness, governance systems, and geopolitical standing.
India, in particular, has an opportunity to bridge the widening gap between major military powers and developing nations concerned about technological inequality. By advocating for binding international norms such as lifecycle accountability, auditable AI systems and enforceable human control over lethal decisions, middle powers can transform regulation from a constraint on sovereignty into a stabilising mechanism that reduces the risks of algorithmic opacity and unintended escalation.
Without such leadership, the trajectory is clear. AI systems will become more deeply embedded in warfare while legal frameworks struggle to keep pace. Conflicts will be conducted at machine speed but governed by diplomatic processes designed for a slower era.
The question is no longer whether AI will shape war; it already does, in real time, in West Asia, right now. The real question is whether humanity can still shape the rules before the algorithms shape them for us, and whether India, having met in New Delhi about ships and gas supplies, will also raise its voice about the deeper battle being fought in lines of code.
Vidhi Sharma works at the intersection of global digital technology policy and responsible AI governance and is currently working as 'Head of Responsible AI' at Future Shift Labs.
(Views expressed are personal)