
From AI-Generated Attacks To Fake Sightings Of Khamenei, Misinformation Surges Amid West Asia War

Deepfakes are now a popular wartime tool because they are cheap to produce and fast to spread. They can be used to fake surrender messages, fake atrocities, and even fake speeches by leaders.

Credibility often takes a back seat, especially when such posts are shared by verified blue-tick accounts. Video screen grab from X
Summary
  • Following reports of Ayatollah Ali Khamenei’s death, social media saw a surge of AI-generated images and false videos, amplifying misinformation in an already tense environment.

  • Fact-checkers face major challenges, with tools like Grok providing inconsistent results, and verifying content can take up to two days, requiring contextual inquiry and official quotes.

  • Fast-spreading deepfakes are increasingly used in wartime to fake speeches or events, amplified through hacked material and coordinated bot networks to outpace verification.

After the announcement of Ayatollah Ali Khamenei’s death and the escalation of the war in West Asia, social media was inundated with images purportedly showing the Supreme Leader’s death.

In one image, he appeared trapped beneath the debris of a collapsed building, blood streaming from his forehead. Another showed an elderly man with his head covered in dust and rubble. These images, claimed to capture Khamenei’s last moments, turned out to be AI-generated.

Similarly, another post on X claimed that Khamenei was alive, sharing an image purportedly showing him in the Sahara Desert, “indicating he may still be alive and in hiding.” The post attracted 7.5 million views, but Grok flagged the photo as actually being from 2014.

These posts were just a fraction of the flood of fake visuals and AI-generated images circulating on social media, further fuelling misinformation in an already panicked atmosphere. 

Misinformation spreads—sometimes deliberately, sometimes inadvertently—far beyond the realities on the ground. It includes recycled videos, hyper-realistic clips lifted from video games, and at times, even fake accounts posing as diplomats issuing what appear to be official statements. 

In the barrage of content, for a person not trained in detecting AI or finding the primary source of the information, it becomes increasingly difficult to discern fact from fiction. Credibility often takes a back seat, especially when such posts are shared by verified blue-tick accounts.

On March 2, one piece of trending misinformation claimed that Israeli Prime Minister Benjamin Netanyahu had been killed in an Iranian strike. A verified account with 21.2k followers was among those who posted it; the post drew 50k views and 1.1k likes. The claim was further amplified by an account claiming to be a news portal. “If the rumour is truth then Israel’s PM Benjamin Netanyahu has been killed by the Iranian Army,” the caption read. 

The same account circulated a video purportedly showing the Burj Khalifa on fire, originally posted by a commentary account. The clip depicts smoke billowing from the building, with claims of missile strikes and people running for cover. The video, convincing as it was, was later flagged by Grok as AI-generated after several viewers in the thread brought it to the chatbot’s attention.


“This video is AI generated. The audio quality itself is a dead giveaway,” Grok — a truth-seeking AI chatbot with real-time monitoring on X — stated. The post, which has still not been deleted, has racked up as many as 3.6 million views.

“Deepfakes are now an easy wartime tool because they are cheap to produce and fast to spread. They can be used to fake surrender messages, fake atrocities, or fake speeches by leaders,” says Apar Gupta, founding director of the Internet Freedom Foundation. A common pattern is hacking or data theft, then releasing selected material at a chosen moment, and pushing it through bot-like accounts and coordinated inauthentic behaviour. “The goal is to make a claim look popular before verification catches up,” he said.

Gupta cited an example of the 2022 deepfake video that appeared to show Ukraine’s President Volodymyr Zelenskyy asking soldiers to surrender, which was debunked but still travelled widely. 


Microsoft’s Digital Defense Report 2025 warned that synthetic video and voice can be paired with hacks of real accounts, so that false content arrives through trusted channels. The report also stated that traditional cybersecurity threats are now being amplified by AI, alongside direct attacks on AI systems themselves.

The report mentioned that a significant development is the rise of AI-first actors—groups that prioritise AI-generated content and tools over conventional methods of manipulation. These actors are moving from creating isolated spectacles to achieving saturation, inundating the information space with synthetic media to desensitise audiences and overwhelm detection systems.

There is a whole range of motivations for why people post fake content, said Prateek Waghre, Head of Programmes at Tech Global Institute. Accounts tend to benefit from posting trending content, even if it is untrue, since engagement translates into currency and can be monetised, as is the case on X.


“There’s also often a deliberate element to sowing confusion intentionally and one might be acting on someone’s behalf, or they could simply be ideologically aligned with the parties involved,” he added. 

However, assigning blame becomes complicated, as determining the intent behind posting misinformation is often difficult, making it challenging to hold anyone accountable. “It is challenging to frame laws since it is very hard to criminalise disinformation without also criminalising information,” Waghre said. 

The situation gets trickier in conflict situations, as there is an information void and people are trying to make sense of what is happening, he noted, citing Kate Starbird’s idea of “collective sensemaking.” 

The University of Washington professor, who studied how social media is used in times of crisis, explained that collective sensemaking is thought to be a natural response to the uncertainty and anxiety that accompany crisis events, with researchers theorising that sensemaking serves both informational and psychological benefits. “Some rumors turn out to be true. But many do not. And so collective sensemaking can lead to misinformation,” the website stated.  


Swasti Chatterjee, News Editor at Boom FactCheck, said periods of war and conflict are particularly demanding for fact-checkers, with dedicated teams closely monitoring television broadcasts, X feeds and other social media platforms. She added that such times require extreme caution, as the urgency to debunk misinformation must not come at the cost of viewers’ trust.

Mentioning Grok, she said the AI tool gives “bizarre” and varying answers to the same queries raised by users. Giving an example, she said that at one point, “it claimed the video was from Pakistan. At another, it responded to a user saying the footage was from Afghanistan.” She added that a proper fact-check could take up to two days, requiring contextual inquiry and gathering official quotes.

Referring to the ongoing war in West Asia, Chatterjee noted that while the volume of AI-generated videos and fake news was relatively limited in the initial days, it saw a sharp spike on March 4.

“Repeated exposure to graphic images can reduce emotional response over time,” Gupta said. When timelines are flooded with graphic clips, recycled footage, and AI-generated scenes, “it can dull outrage when verified evidence appears later. It can also create general doubt that undermines documentation, reporting, and justice processes,” he added. 

Some Indian news channels have also misreported events, airing old or unrelated footage as visuals of fresh attacks. In one instance flagged by Mohammed Zubair on X, channels played a video from Bahrain, claiming it was from Dubai.

Asked whether there is a solution to the circulation of false information, Gupta said, “the government is not powerless, and we can even state that it has wide and extensive powers even during periods of normalcy which are used for political censorship.”

Under IT Act Section 69A and the Blocking Rules 2009, the Union government can direct intermediaries to block content. Under IT Act Section 79 and the IT Rules 2021, platforms have due-diligence duties and risk losing safe harbour if they do not comply. “The gap, if any, is effective and legitimate use. Overbroad or secret blocking can also suppress lawful speech and make verification harder. Hence often, ‘regulation’ becomes censorship without improving safety.”
