
The Narrative As A Weapon

The weaponization of narratives has been going on for thousands of years, but it has recently been sharpened by social media, which serves as a highly efficient tool for amplifying those narratives.

For thousands of years, "fake news" has been used in diplomacy, military strategy, politics, and business. The spread of "rumour" is well documented in books of war strategy as a critical tactic. Many a war has been won on the basis of rumours, or as we now call them, "fake news". The bumbling reporting of BBC Radio during the 1971 Bangladesh Liberation War played a key role in a few battles, when it broadcast that a brigade of Indian soldiers was leading an attack, instead of a battalion. It played havoc with the minds of the adversaries and led to the peculiar situation of a handful of Indian soldiers managing a disproportionate number of POWs.

The advent of social media has made false narratives an even more efficient and devastatingly effective tool for political and geopolitical purposes. This is playing out with a vengeance across the globe, as we have already seen in the accusations of alleged Russian interference in the last US elections and in the flurry of charges of potential influencing by Cambridge Analytica in the previous Indian elections, using data from Facebook. The power of digital media to influence minds was exemplified a few years ago by the Blue Whale digital "game", which could influence its targets to the point of convincing them to self-destruct. Such power of mind control will continue to be misused with increasingly military precision for political purposes, both within national boundaries and, as we now see with China, across national boundaries on a global scale. I do not see this trend ebbing anytime soon, as humans are susceptible to consuming narratives that reinforce their existing biases. That is what makes them gullible, and hence targets for such precision fake news.

Before the advent of social media, influencing the narrative was also passed off as "soft power", with messages subtly placed in movies to shape global and national narratives. That power is no longer "soft" in nature. It is a deadly accurate technological weapon.

Such strategies were pushed by propaganda professionals, who are now euphemistically referred to as PR (public relations) professionals. PR is a complex science. Under the Nazi regime, the evil genius of Goebbels pressed into service the technological breakthroughs of the day for disseminating narratives: radio and television. Those two media had such a deep impact that many governments thereafter made them the sole purview of the state, so that the government could push its narratives in a monopolistic manner. That is what we did in India for decades after independence.

Traditional propaganda is not any different from the current social media-fuelled propaganda enhanced by digital technologies. Both push their narrative in a mercenary manner, providing their services to the highest bidder and walking away from the potential damage or destruction without an iota of responsibility, accountability, or even guilt. Didn't we have PR campaigns using radio in the '60s to spread "awareness" of the harmful effects of "bacteria-laden" mother's milk, and of why mothers across developing nations must switch to the "sanitized" tinned milk marketed by multinationals? PR will latch on to any medium available to send out a specific narrative, as any professional would. The issue is that digital technology, and specifically social media, acts as a massive amplifier of narratives and is also able to dig out and address the long tail of target groups. It is akin to a nuclear bomb in its ability to completely swamp target groups and decimate them with the crafted narrative. Social media is only another weapon in the arsenal of propagandists, but then again, it is a weapon of mass destruction.

This year alone, we have seen social media being used to "construct" riots in Bangalore and Delhi. We also saw earlier how a clip from Karachi, made to create awareness about child-lifting, was manipulated to trigger the lynching of innocent people across India on the suspicion of being child-lifters.

Given such severe implications, and given that the power of narrative amplification provided by social media has, as we have seen again and again, led to the weaponization of narratives and unleashed terrible losses on society, there need to be legal provisions to bring accountability to such practices. The IT Act 2000, along with its amendments, does attempt to provide such a regulatory framework. Unfortunately, there is a thin line between regulation for accountability on social media and regulation that muffles free speech.

There is a thin line between stopping the spread of hatred and stopping free speech. Anonymity is a key reason why the Internet has grown and why people shed their inhibitions online and put their views out there. Some of those views may be false and detrimental to the nation. There may even be views that a person believes are true, but which have been formed on the basis of partial facts or plain falsehood. But how does one decide which narrative is true and which is not? And how does one handle the billions of such messages floating around at blindingly high frequency? Of course, making citizens identify themselves on the Internet, in the manner one identifies oneself on the telephone, seems an easy way to reduce the velocity of fake news. But if we do manage to put such a regulation in place, how would the Balochis, Tibetans, Uighurs, and other oppressed peoples globally be able to vent their views against their oppressors? It would be the end of the Internet as we know it today. Does that mean we should do nothing? No. There is a consensus that when the tenets of freedom of speech were framed, it was never anticipated that a medium such as the Internet would one day emerge and become a highly efficient engine for weaponizing narratives.

We have reached a stage where protecting an individual's venom on social media under the garb of the fundamental right to free speech seems a stretch, as such individuals are not using free speech to express themselves but to harm others. This was not the basis on which free speech was enshrined as a fundamental right. There is a need for a solution that protects society from false narratives, yet does not take away what made the Internet so popular and such a democratizing force.

However, it is extremely challenging to create regulations that will be acceptable to wider society, much of which does not share the threat perception of the weaponization of narratives that the security apparatus and the more aware segments of society have. Hence, such regulations can come in only when society is mature enough and ready to accept them.

But can social media platforms actively prevent some of the harmful narratives from passing through their platforms? There have been enough advances in AI-based sentiment analysis to effectively filter out hate speech. In the process, there might be some collateral damage in the form of legitimate narratives getting blocked, but over time the learning algorithms get better, and we should be able to have social media that self-polices and removes hate campaigns and fake narratives. However, we do not see that happening with the large, monopolistic social media giants. If they do not take the first step, we will not be able to go down the path of weeding out what is evil in social media. It is, however, heartening to see some Indian platforms like Sharechat proactively focusing on reducing hate campaigns and fake narratives, which goes to show that it is possible to reduce such unacceptable conversations on social media.
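For illustration, here is a minimal sketch of what such AI-based filtering might look like, assuming an off-the-shelf toxicity classifier from the Hugging Face Hub; the model name, label, and threshold below are illustrative assumptions, not a description of what any platform actually deploys.

```python
# Minimal sketch of AI-based hate-speech filtering (illustrative only).
# Assumes the open-source "transformers" library and a publicly available
# toxicity classifier; real platforms train their own multilingual models.
from transformers import pipeline

# Hypothetical model choice; any text-classification model that emits a
# toxicity label and a confidence score would fit this sketch.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def moderate(posts, threshold=0.8):
    """Split posts into (flagged, allowed) based on predicted toxicity."""
    flagged, allowed = [], []
    for post in posts:
        prediction = classifier(post)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
        if prediction["label"].lower() == "toxic" and prediction["score"] >= threshold:
            flagged.append((post, round(prediction["score"], 2)))
        else:
            allowed.append(post)
    return flagged, allowed

if __name__ == "__main__":
    sample_posts = [
        "Looking forward to the festival this weekend!",
        "People from that community should be driven out of the city.",
    ]
    flagged, allowed = moderate(sample_posts)
    print("Flagged for human review:", flagged)
    print("Allowed:", allowed)
```

The threshold captures the trade-off described above: set it lower and more hate speech is caught but more legitimate posts become collateral damage; set it higher and the reverse holds.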

However, if the power of censorship is given to a private entity, the challenge would be to ensure that such power is not wielded unjustly or in a biased manner, as I had captured in one of my previous articles in this column. Should we, therefore, have a government arbitrator or regulator, perhaps similar to the office of the Lokpal, to ensure that private platforms do not transgress in their power to censor?

(Dr Jaijit Bhattacharya is President of the Centre for Digital Economy Policy Research. Views are personal.)
