
From Deepfakes To Rogue Superintelligence, Here Is The Dark Side Of Artificial Intelligence

Deepfakes are just the tip of the iceberg. From spreading misinformation to reflecting real-world bias, there are several darker aspects of artificial intelligence (AI).



Much of India learnt about 'deepfakes' earlier this month when a video went viral that showed actor Rashmika Mandanna entering a lift. Except the woman in the video was not her: she was a British woman, and someone had superimposed Mandanna's face over hers.

Deepfakes are videos made with a form of artificial intelligence (AI) called deep learning. While Mandanna's case was not the first, it certainly popularised the concept of deepfakes. Over the years, several deepfake videos have gone viral, some of them mundane or even comedic, such as a 2020 video featuring Prime Minister Narendra Modi, Defence Minister Rajnath Singh, and Chinese President Xi Jinping singing a Hindi song, made in the context of the India-China stand-off in Eastern Ladakh.
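
For readers curious about the mechanics, the classic face-swap technique behind many deepfakes uses a single shared encoder that learns general facial features, plus a separate decoder for each person. The Python sketch below shows the idea; the network sizes, the random stand-in data, and the single training step are illustrative assumptions, not the code of any real tool.

```python
# Minimal sketch of the classic face-swap idea: one shared encoder,
# two decoders (one per identity). Sizes and data are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512),  # assumes 64x64 RGB face crops
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(512, 3 * 64 * 64),
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training (sketched): reconstruct each person's faces through the
# shared encoder and their own decoder.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for person B's face crops
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
loss.backward()  # a real pipeline would loop this with an optimiser

# The swap: encode person A's face, decode with person B's decoder,
# producing B's likeness with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The last two lines are the whole trick: features extracted from one person's face are rendered through the other person's decoder.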


While the 2020 video might have been funny, that's not the case with all deepfakes. In a world where videos go viral within hours and authorities struggle to keep up with digital abuse, deepfakes have been weaponised by abusers. The abuse ranges from fake quotes attributed to politicians or corporate leaders to non-consensual pornography, in which exes or stalkers superimpose a person's face (the targets are overwhelmingly women) onto a porn clip and circulate it as genuine. The latter makes the digital space even more hostile for women, who already face a plethora of abuse online and offline.

Data from Sensity AI, a research company that has tracked deepfake videos since December 2018, showed in 2021 that 90-95 per cent of deepfakes were non-consensual porn and that about 90 per cent of that porn targeted women. MIT Technology Review noted in an article that apps and code for making deepfakes have been easily available for years.


"It’s become far too easy to make deepfake nudes of any woman. Apps for this express purpose have emerged repeatedly even though they have quickly been banned: there was DeepNude in 2019, for example, and a Telegram bot in 2020. The underlying code for “stripping” the clothes off photos of women continues to exist in open-source repositories," reported MIT Technology Review.

Even with such grave consequences, deepfakes are just the tip of the iceberg. From fake photos to biased AI and rogue 'superintelligence', AI has much darker aspects. To get an idea of the stakes, consider this: more than 350 top AI scientists and executives signed the following statement earlier this year: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The signatories include AI's modern godfathers Geoffrey Hinton and Yoshua Bengio; OpenAI CEO Sam Altman; Google DeepMind CEO Demis Hassabis; and Anthropic CEO Dario Amodei.

What are the dark sides of AI?

The dark sides of artificial intelligence (AI) refer to unethical and dangerous applications of the technology. Commentators have often drawn comparisons with nuclear technology: it, too, holds great promise, yet a disaster would be catastrophic. With AI, they argue, the stakes are even higher.

While nuclear technology led to weapons that can potentially destroy the world, nuclear bombs and launch consoles don't have minds of their own. They still need humans and a lengthy chain of approvals to be launched. AI, however, may well be on its way to developing a mind of its own, and much faster than you would expect. An AI with a mind of its own could amount to a nuclear bomb that makes its own decisions, not unlike the rogue AI robot Ultron in the Marvel Cinematic Universe's (MCU) 'Avengers: Age of Ultron' and 'What If'.


While this is, of course, the worst-case scenario, experts say we need to talk more about such scenarios so that we are prepared if AI ever reaches 'superintelligence' or artificial general intelligence (AGI): systems that surpass human intelligence and have developed to the point where they need no human intervention at all to evolve.

For these reasons, computer scientist and Unanimous AI CEO Dr. Louis B. Rosenberg compares the realisation of superintelligence and AGI to the arrival of aliens on Earth. In an article for Big Think, he calls it a "dangerous new lifeform" that would know everything about humans from the start, having been fed and trained on humongous datasets about us, while humans would have no comparable insight into it. He writes, "After all, the aliens we build here will know everything about us from the moment they arrive, having been trained on our wants, needs, and motivations, and able to sense our emotions, predict our reactions, and influence our opinions. If a species heading toward us in flying saucers had such abilities, we’d be terrified."


AI tools with dark potential 

Here are some of the ways in which artificial intelligence (AI) may have dark potential, ranging from rogue superintelligent systems to AI becoming a tool for misinformation and political propaganda.

1. Superintelligent AI we don't know anything about

Imagine this: an AI platform that surpasses human intelligence many times over, can develop on its own, and works in ways humans do not understand.

In the best-case scenario, it sounds like another species similar to humans. In the worst-case scenario, it poses an extinction-level threat: it is much smarter than us and knows us inside out, while we know nothing about it. Bring robots with metal bodies into the equation and you get the kind of situation seen in the film 'I, Robot' or several storylines of Marvel's 'Agents of S.H.I.E.L.D.'. But this is no longer purely the domain of science fiction. Even today, the makers of AI platforms don't fully know how they work. They know 'what' result the AI will produce but not 'how' it is produced.


Even the makers of AI don't know how the neural networks fuelling AI platforms such as ChatGPT work. AI scientist Sam Bowman told Vox's 'Unexplainable' podcast that there is no concise explanation of how these networks work, unlike a 'regular' program such as MS Word or Paint, whose workings we can trace. He further said that their development has been largely autonomous, so humans have not exactly 'built' AI platforms; the human role, he says, has been more that of a facilitator.

"I think the important piece here is that we really didn’t build it in any deep sense. We built the computers, but then we just gave the faintest outline of a blueprint and kind of let these systems develop on their own. I think an analogy here might be that we’re trying to grow a decorative topiary, a decorative hedge that we’re trying to shape. We plant the seed and we know what shape we want and we can sort of take some clippers and clip it into that shape. But that doesn’t mean we understand anything about the biology of that tree. We just kind of started the process, let it go, and try to nudge it around a little bit at the end," said Bowman.


In an article for Scientific American, Tamlyn Hunt noted that humans would be outsmarted by superintelligent AGI because every mechanism or trick we might devise to contain it would already have been figured out by the AI, which works hundreds of times faster than us.

"Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace...We won’t be able to control them because anything we think of, they will have already thought of, a million times faster than us. Any defences we’ve built in will be undone, like Gulliver throwing off the tiny strands the Lilliputians used to try and restrain him," wrote Hunt.


2. Misinformation and fake news

For years, videos and graphics of politicians like Barack Obama and corporate leaders like Mark Zuckerberg have circulated with false quotes. They are, of course, fake and have been made with AI tools. While misinformation has been around for centuries, AI takes it to a new level.

With AI, one can morph photos, create deepfakes, and have bots flood social media platforms with propaganda that spreads across borders. This has been observed during elections and turbulent times such as wars and conflicts.

In the ongoing war in the Middle East, photos made with AI tools have been circulated as showing scenes from Israel or the Gaza Strip. Even media platforms have used AI-generated images in stories without specifying that the images are not real. At times, such images give the impression that a bombing or another incident has happened when it never actually did, because the photo is artificially generated.


3. Plagiarism and IP theft

With ChatGPT clearing bar and other professional exams, more and more students and even professionals have started using such tools to write essays, blog posts, and even articles for submission to media platforms. While this raises clear questions about ethics, there is also the aspect of intellectual property (IP).

ChatGPT does not write anything original; it works on a predictive-text premise, stringing together the words that make the most likely response to a user's instructions. It does this on the basis of the tomes of data it has been trained on from across the internet. So ChatGPT rehashes what has already been published, making such essays or write-ups stale. When AI tools scrape data or quote excessively from a source, plagiarism also comes into the picture.
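
To see how far this is from original writing, consider a deliberately tiny version of the predictive-text idea. The Python sketch below is an illustrative simplification: the toy corpus and the one-word-of-context model stand in for the vastly larger data and context windows real systems use.

```python
# Toy 'predictive text': pick the most likely next word given the last
# word, based purely on counts from training text. Corpus is made up.
from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def next_word(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

# Generate by repeatedly appending the likeliest next word: a rehash of
# patterns already in the data, never new information.
text = ["the"]
for _ in range(4):
    text.append(next_word(text[-1]))
print(" ".join(text))  # e.g. "the cat sat on the"
```

Scaled up to billions of parameters and trained on much of the internet, the same basic move of predicting the likeliest continuation is why the output reads fluently yet contains nothing the training data did not already imply.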


There is also the question of who gets to own the copyright for such work: the user instructing ChatGPT, or ChatGPT itself.

4. Biased AI

In line with the long-running computing principle of 'garbage in, garbage out' (GIGO), AI trained on biased datasets created by biased individuals will itself be a biased platform.

Reports in the West show that facial-recognition tools used by law enforcement are more likely to misidentify or wrongly flag Black individuals than White ones. This is just one example of how real-world bias and discrimination can seep into AI tools, since the platforms train on data that carries those biases.
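
A toy example makes the GIGO point concrete. In the hedged Python sketch below, where the groups, the numbers, and the "model" are invented purely for illustration, a system fit to skewed historical decisions simply reproduces the skew as its prediction.

```python
# Toy 'garbage in, garbage out': a model fit to skewed historical
# decisions reproduces that skew. All numbers here are invented.
records = (
    [("group_a", 1)] * 30 + [("group_a", 0)] * 70 +  # 30% flagged
    [("group_b", 1)] * 60 + [("group_b", 0)] * 40    # 60% flagged
)

# 'Training': learn each group's historical flag rate.
rates = {}
for group, flagged in records:
    total, hits = rates.get(group, (0, 0))
    rates[group] = (total + 1, hits + flagged)

def predicted_flag_rate(group):
    total, hits = rates[group]
    return hits / total

for g in ("group_a", "group_b"):
    print(g, predicted_flag_rate(g))
# group_a 0.3, group_b 0.6: the bias in past decisions becomes the
# model's prediction, with no new evidence about any individual.
```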

There is also the issue of AI tools profiling people, with such data compromising a person's identity if it is leaked. While any data leak is damaging, leaks of AI-built profiles can be even more so. For example, AI platforms study user behaviour and may infer a person's gender, sexual orientation, or purchasing patterns. If these inferences are made public in a leak, a queer person may be outed without consent.


5. Art and culture issues

AI is not just writing essays; it is also generating photographs now. While much of this output is mundane, ethical questions arise when people and even media platforms use AI to create photos of sensitive events like wars and conflicts without specifying that the images are AI-generated.

Then there is the issue of scraping: AI models ingest paintings and artworks as part of the datasets they train on. This raises the question of whether art produced by AI is genuine work or a copyright violation, since the model produces nothing from scratch but only after 'feeding' on existing art.


There are also free speech aspects here. In both text and image generation, AI companies act as gatekeepers. They decide what their AI feeds on, and that in turn decides the work it produces. For example, Baidu's AI tool does not return results for Tiananmen Square because the Communist Party of China (CPC) has scrubbed the massacre from the public domain in China. While this is one example of AI gatekeeping that we know of, there may be others that we don't know of yet. Under the GIGO principle described above, an AI tool barred from accessing socially or politically inconvenient datasets would produce flawed results.
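
The gatekeeping itself can be mechanically simple. The hedged Python sketch below, in which the blocklist, topics, and documents are hypothetical placeholders rather than any company's actual pipeline, shows how quietly a subject can be excluded before training even begins.

```python
# Sketch of dataset gatekeeping: a blocklist filter applied before
# training. Topics and documents are hypothetical placeholders.
BLOCKED_TOPICS = {"topic_x"}  # chosen by the platform owner

documents = [
    {"text": "article about topic_x", "topics": {"topic_x"}},
    {"text": "article about sports", "topics": {"sports"}},
]

# Anything touching a blocked topic is silently dropped.
training_set = [
    doc for doc in documents
    if not (doc["topics"] & BLOCKED_TOPICS)
]

print([doc["text"] for doc in training_set])  # the topic_x article is gone
# The model never sees the excluded material, so it cannot discuss it,
# and users have no way to tell what was filtered out.
```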
