AI-generated videos impersonating news anchors, politicians, and public figures are being used to spread misinformation and influence voter perception ahead of key state polls.
Parties are using voice cloning, chatbots, and AI-generated messaging in local dialects to reach millions of voters quickly and at significantly lower cost.
Fact-checkers report a sharp rise in AI-generated misinformation, raising concerns about electoral integrity, public trust, and voters’ ability to distinguish truth from fabrication.
A video purporting to show an Aaj Tak news bulletin has been widely shared on social media. In the clip, an anchor appears to say that a leaked intelligence report has sounded a red alert for the ruling Bharatiya Janata Party (BJP) ahead of the Assam assembly elections, warning that the party could face a major setback.
The footage appeared authentic: the anchor’s voice was clear, his delivery familiar, and the presentation consistent with the channel’s format. But it was fake. Aaj Tak never aired such a report. Fact-checkers later confirmed that the video was a deepfake generated using artificial intelligence. It replicated the channel’s branding, format, and the anchor’s likeness to create the impression of a legitimate broadcast.
The case highlights the growing use of AI-generated deepfakes to spread misinformation and influence public opinion during elections.
Assembly elections in Assam, West Bengal, Tamil Nadu, Kerala, and Puducherry are scheduled later this year. Yet the misuse of AI has already begun, with deepfakes circulating online in an attempt to shape political narratives and voter perceptions.
More broadly, AI-generated misinformation has increased sharply over the past year. According to BOOM’s 2025 annual report, artificial intelligence-generated disinformation has become significantly more widespread and sophisticated, marking a structural shift in how misinformation is produced and circulated. The organisation published 1,067 fact-checks in 2025, of which 219—or 20.5 percent—involved AI-generated content, more than double the share recorded in 2024.
The report identified two major trends: the rise of low-quality synthetic content known as “AI slop,” designed to maximise social media engagement, and coordinated deepfake campaigns targeting public figures and institutions.
AI-driven misinformation has also played a significant role in elections and communal polarisation. Assembly elections in Bihar and Delhi saw extensive use of deepfakes, manipulated videos, and fabricated endorsements aimed at misleading voters and shaping political narratives. BOOM concluded that AI-generated synthetic media had evolved from a marginal novelty into a central tool of disinformation, enabling both mass-produced viral content and coordinated influence campaigns.
During the Bihar assembly elections, fact-checkers documented deepfakes involving senior politicians such as Nitish Kumar and Yogi Adityanath, as well as fabricated videos showing public figures endorsing parties they had never supported. In one instance, a manipulated video falsely showed actor Manoj Bajpayee endorsing the Rashtriya Janata Dal (RJD), illustrating how AI was used to manufacture political endorsements and influence voter perception.
The scale of election-related misinformation has grown significantly. In 2025, assembly elections accounted for more than 100 debunked false claims, with Bihar alone contributing 77 fact-checks involving deepfakes and AI-generated political content. The Delhi elections also saw widespread manipulation of videos to distort political messaging and inflame communal tensions.
What distinguishes this new phase of misinformation is not just its sophistication but its strategic intent. Earlier forms of electoral misinformation relied on edited footage, misleading captions, or rumours. AI now allows political narratives to be manufactured entirely from scratch. Videos can be generated showing politicians endorsing rivals, retracting positions, or making inflammatory remarks—without any original footage. This has made it increasingly difficult for voters to distinguish between authentic content and fabricated narratives.
At the same time, political campaigns themselves have begun using AI tools extensively. Digital marketing firms working for candidates used platforms such as ElevenLabs, ChatGPT, and Claude to generate speeches, campaign videos, and personalised messages in local dialects, enabling parties to reach millions of voters across linguistically diverse constituencies.
Voice cloning proved particularly effective, allowing politicians to address voters in hyperlocal dialects without visiting those regions in person. Campaign teams also deployed AI chatbots on WhatsApp and Telegram to answer voter queries and deliver targeted messaging, even during the legally mandated campaign silence period. These tools significantly reduced campaign costs while expanding outreach.
However, the same technologies have also enabled large-scale misinformation. Deepfake videos circulated showing politicians, journalists, and celebrities falsely endorsing candidates or making statements they never made. Synthetic videos also created the illusion of politicians campaigning in locations they had never visited.
Fact-checkers reported that distinguishing authentic content from AI-generated material has become increasingly difficult as voice clones and deepfakes grow more realistic.
Experts warn that the constant flow of AI-generated political messaging risks overwhelming voters’ ability to evaluate information critically, while also giving an advantage to parties with greater technological and financial resources.
India’s scale and digital penetration make it particularly vulnerable. With hundreds of millions of voters and widespread social media usage, elections provide fertile ground for synthetic propaganda. AI tools, once restricted to specialised researchers, are now widely accessible, lowering the barriers to producing convincing deepfakes.
The problem is not limited to misinformation alone. Meta, which owns WhatsApp and Facebook, approved several AI-generated political advertisements containing extremist and inflammatory language during the 2024 general election cycle. Political parties also used AI to generate propaganda images, clone voices, and produce personalised campaign messaging, often with limited transparency or oversight.
According to Anadi, a researcher studying AI and elections, artificial intelligence is reshaping how political campaigns operate.
“India’s political parties have used AI to create fake audio, propaganda images, and parodies,” she writes in her research paper, Deep Fakes, Deeper Impacts: AI’s Role in the 2024 Indian General Election and Beyond. “At the same time, AI has enabled campaigns to reach wider audiences by generating content in multiple languages and dialects, spreading both accurate information and misinformation.”
AI has also significantly reduced campaign costs. AI-generated voice calls were estimated to cost one-eighth as much as traditional call centres. In the two months before the 2024 general election, more than 50 million AI-generated calls were made to voters using cloned voices of political leaders.
“The rapid growth of generative AI is creating new challenges for electoral democracy,” Anadi said. “Deepfakes can mislead voters, erode trust in digital information, and make it harder for citizens to evaluate candidates and policies.”
Experts warn that AI-powered chatbots, micro-targeted messaging, and synthetic media could further polarise voters or manipulate political behaviour. There is also concern that extremist groups could exploit these tools for recruitment and propaganda.
In response, major technology companies—including Google, Meta, OpenAI, X, and TikTok—have pledged to detect and label AI-generated political content. These measures include watermarking synthetic media and adding labels to indicate when content has been created or altered using artificial intelligence.
Whether such safeguards will be sufficient remains uncertain. What is clear is that artificial intelligence has fundamentally altered the informational landscape of Indian elections, making misinformation cheaper to produce, harder to detect, and more persuasive than ever before.