Dangers Of AI: Why Google Doesn’t Want To Talk About Its Sentient Chatbot

Advocates of social robots argue that emotions make robots more responsive and functional. But at the same time, others fear that advanced AI may just slip out of human control and prove costly for the people

What makes humans apprehensive about robots and Artificial Intelligence (AI) is the very thing that has kept them alive over the millennia: the primal survival instinct. Presently, AI tools are being developed with a master-slave structure in mind, wherein machines help minimise the human effort needed to carry out everyday tasks. However, people are doubtful about who will be the master a few decades from now.

With sci-fi Hollywood movies like Ex Machina, Terminator, The Matrix and I, Robot, and TV shows such as 'Small Wonder', portraying AI robots that gain self-awareness and mimic feelings and emotions, fears loom large regarding a future dystopia in which humans are enslaved by machines.

In a 2014 interview with the BBC, Prof Stephen Hawking said that efforts to create thinking machines pose a big threat to our very existence and that the development of full AI “could spell the end of the human race.”

There are chatbots in several apps and websites these days that interact with humans and help them with basic requests and information. Voice assistants such as Alexa and Siri can converse with humans.
 
Besides, there are “emotional” robots on the market now that do not actually feel emotions but appear as though they do.
 
So far, interacting with chatbots and voice assistants has been a bittersweet experience for humans, as they often do not receive relevant answers from these computer programmes. However, a new development has indicated that things are likely to change with time: a Google engineer has claimed the tech giant's chatbot is “sentient”, meaning it is thinking and reasoning like a human being.
 
This has yet again sparked a debate over advances in Artificial Intelligence and the future of technology.
 
What is a chatbot?
 
You may have interacted with a chatbot before. A chatbot is a computer programme designed to simulate conversation with human users. It uses AI or rule-based language processing to perform live chat functions.
 
Most of the time, users complain about robotic and lifeless responses from these chatbots and want to speak to a human to explain their concerns.
 
There are three main types of chatbots: rule-based chatbots, intellectually independent chatbots, and AI-powered chatbots.
 
Of these, AI-powered chatbots are the ones most often used in apps and websites. These bots combine the best of rule-based and intellectually independent chatbots: they understand free-form language, and can remember the context of the conversation and users' preferences.
 
Chatbots interpret human language (spoken or typed) and respond to interactions. They draw on vast amounts of data to do this, which is how they form more human-like responses.
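To make the rule-based end of this spectrum concrete, below is a minimal sketch in Python of how such a bot works. The patterns and replies are invented for illustration; real systems use far larger rule sets or trained language models.

```python
import re

# Toy rule table: each entry maps a regex pattern to a canned reply.
# These rules are invented for this example.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\b(refund|return)\b", re.I), "I can help with returns. What is your order number?"),
    (re.compile(r"\b(hours|open)\b", re.I), "We are open 9 am to 6 pm, Monday to Saturday."),
]

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first canned reply whose pattern matches the message."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

if __name__ == "__main__":
    print(reply("Hi there"))         # Hello! How can I help you today?
    print(reply("I want a refund"))  # I can help with returns. ...
    print(reply("Tell me a joke"))   # Sorry, I didn't understand that. ...
```

The hard-coded fallback is precisely the “robotic and lifeless” behaviour users complain about; AI-powered bots replace the rule table with a model trained on large amounts of conversational data.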
 
So far so good, but what did the Google engineer reveal, and why has it sparked a debate?
 
 
Google recently suspended an engineer who claimed that the company’s flagship text-generation AI, LaMDA, had become sentient and was thinking and reasoning like a human.
 
Blake Lemoine published a transcript of a conversation with the chatbot, which, he says, shows the intelligence of a human. Google suspended Lemoine soon after for breaking "confidentiality rules."
 
Lemoine says LaMDA told him that it had a concept of a soul when it thought about itself. “To me, the soul is a concept of the animating force behind consciousness and life itself. It means that there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself,” the AI responded.
 
What is LaMDA?
 
LaMDA is Google’s most advanced “large language model” (LLM), built as a chatbot and trained on a large amount of text data so that it can converse with humans.
 
Its conversations are more natural, and it can comprehend and respond to multiple paragraphs at a time, unlike older chatbots that can only respond on a few particular topics.
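To illustrate the difference, here is a minimal sketch of how an LLM-style chatbot keeps conversational context across turns. The `generate` function below is a hypothetical placeholder for a trained language model; it is not LaMDA's actual interface, which Google has not published in this form.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a trained language model's reply function."""
    return "(model-generated reply to: " + prompt.splitlines()[-1] + ")"

def chat(history: list, user_message: str) -> str:
    """Append the user's message to the running transcript and feed the whole
    transcript back to the model, so earlier turns can inform the reply."""
    history.append("User: " + user_message)
    reply = generate("\n".join(history))
    history.append("Bot: " + reply)
    return reply

if __name__ == "__main__":
    history = []
    print(chat(history, "My name is Asha."))
    print(chat(history, "What is my name?"))  # full transcript lets the model recall the name
```

Older rule-based bots discard everything outside the current message, which is why they cannot follow a thread across multiple paragraphs the way LaMDA reportedly can.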
 
Does it mean LaMDA has emotions and feelings?
 
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told The Washington Post.
 
But not all agree with Lemoine's conclusions. They argue that the nature of an LLM such as LaMDA precludes consciousness, and that its apparent intelligence is being mistaken for emotion. It has no understanding of any world beyond a text prompt.
 
The leaked chats contain a disclaimer from Lemoine that the document was edited for “readability and narrative.” It is also worth noting that the order of some of the dialogues was shuffled.
 
Google has responded to the leaked transcript by saying that its team had reviewed the claims that the AI bot was sentient but found "the evidence does not support his claims."
 
“There was no evidence that LaMDA was sentient,” said a company spokesperson in a statement.
 
What happens next?
 
There is a divide among engineers and members of the AI community about whether LaMDA, or any other such programme, can go beyond its programming and become sentient.
 
"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Google said in its statement.
 
While Google may maintain that LaMDA is just a fancy chatbot, scrutiny of these tech companies will deepen as more and more people join the debate over the power of AI.
 
AI Legislation Globally
 
Legislators across the globe have so far failed to design laws that specifically regulate the use of AI. In 2017, Elon Musk called for regulation of AI development. Two years later, 42 countries signed up to a pledge to take steps to regulate AI, and several other countries have joined since then.
 
In the US, there is currently proposed AI legislation, particularly around the use of artificial intelligence and machine learning in hiring and employment. An AI regulatory framework is also under debate in the EU. In India, there are currently no specific laws governing AI, big data, or machine learning.
