May 30, 2020

Opinion: Deepfake Will Make Fake News Realistic

Malicious hands can use the technology to spread propaganda, spark conflicts


As India grapples with how to stop the spread of misinformation online, a new generation of digital forgery technologies threatens to upend whatever progress has been made thus far. Deepfake, a new suite of technologies, uses powerful machine learning to copy and superimpose the voice, face and speaking style from one person onto another. Early versions produced choppy and unconvincing results, but they’ve come a long way since. Across the internet, hobbyists have used the technology to create entertainment, pornography and art—sometimes with devastating personal consequences.

The media has focused on how deepfake can be used to spread propaganda or spark military conflicts, but the larger threat is that these technologies can be weaponised against individuals and businesses. Exaggeration, taking things out of context and outright lies existed long before the internet. What deepfake changes is that it is now far cheaper and easier for anybody to create synthetic evidence for whatever purpose they want.

In the context of India’s ecosystem, this can be especially dangerous. Inflammatory statements, doctored videos and conspiracy theories already spread rampantly, causing social turmoil. With deepfake, even small groups of people without much technical knowledge will be able to develop powerful tools of division.

Take, for example, the June 2018 attacks on migrant workers in Rainpada. A group of people circulated grainy images of individuals allegedly committing wrongdoings, which were used to justify mob violence. With deepfake, a person seeking to cause trouble can now insert a specific person’s face and body movements into a video of a crime. Similarly, a hacker can download a CEO’s speeches to create a model of their voice, and use it to damage a brand or give fraudulent orders to an executive. Security firm Symantec found that this has already happened at least three times, causing millions of dollars in losses around the world.

Videos of PM Modi and Rahul Gandhi saying things they didn’t say can now be created.

With social media and messaging apps, we’ve seen the risks of adopting a technology without upgrading the security infrastructure around it. The Indian government recently implicated WhatsApp in the wave of communal violence that has rocked the country over the past years. The company has prioritised growth over user safety, and as a result, society as a whole has suffered. In the case of deepfake, India can’t afford to repeat that mistake. These technologies are available for anyone to download, and it’s too late to reverse course—instead we need to rebuild our defences. The stakes couldn’t be higher.

Countering deepfake will necessitate the creation of new tools, systems and behaviours. Rising to the occasion will require a massive effort by the government, media and software companies to help users become more sceptical about what they see and more thoughtful about what they share. This is no easy task, but there are many places where they can start today. These groups can work together to create mechanisms to report and remove fake content, determine the acceptable limits of speech, and establish verified lines of communication for officials to disseminate news.

It is necessary to give users a way to verify whether a piece of content is real. To do this, manufacturers and developers can harness cryptography to create a shared basis of truth. That involves having our devices digitally sign each photo or video, so we have the ability to prove when and by whom it was taken.
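As a rough illustration of the idea, a device could attach a signed record—a fingerprint of the file plus capture metadata—at the moment of capture, so that any later edit breaks the proof. The sketch below is a simplified stand-in, not a production design: it uses a symmetric HMAC from Python’s standard library, whereas a real provenance system would use an asymmetric (public-key) signature held in secure hardware so anyone can verify without the secret key. The key, device ID and field names are all illustrative assumptions.

```python
import hashlib
import hmac
import json

def sign_capture(media_bytes: bytes, device_id: str, key: bytes) -> dict:
    """Build a signed provenance record for a captured photo/video."""
    record = {
        # Fingerprint of the exact bytes produced by the camera
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "device": device_id,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # Illustrative: real devices would use an asymmetric signature here
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return dict(record, signature=signature)

def verify_capture(media_bytes: bytes, record: dict, key: bytes) -> bool:
    """Check that the file matches the record and the record is untampered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(media_bytes).hexdigest() != claimed.get("sha256"):
        return False  # the media itself was altered
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

key = b"per-device secret (illustrative only)"
video = b"\x00\x01 raw video bytes"
rec = sign_capture(video, "camera-42", key)
assert verify_capture(video, rec, key)             # untouched file verifies
assert not verify_capture(video + b"x", rec, key)  # any edit breaks the proof
```

The point of the design is that authenticity is established at capture time, shifting the question from “does this look fake?” to “does this carry a valid signature from a known device?”.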

These measures alone don’t offer a full solution, but rather a starting point. It will require many rounds of incremental improvements to know what’s effective. While it might be tempting to short-circuit the process by declaring the technology illegal, that won’t stop bad actors from causing harm anyway. At the end of the day, technology or regulation alone can’t solve what’s fundamentally a cultural problem.

(The writer is a visiting instructor at Carnegie Mellon University’s College of Engineering. He is also the founder and CEO of Day One Insights, a strategy and advisory firm.)
