
If Artificial Intelligence Is Taught To Think Like Humans, Then Are Machines Going To Be Sexist, Racist And Discriminatory?

Artificial intelligence gives us a second chance at our future, offering the opportunity to wipe out human bias in decision-making.


We live in a fractured world, increasingly polarised, with shrill voices intent on drowning out dissent. We have divided ourselves, created inequalities and are steeped in prejudices we have carried for years. It is not a perfect world, and how can it be? We are, after all, human: intolerant and unfair.

However, when it comes to machines, the expectations change. The first words that usually come to mind are cold, calculating and unbiased. But are they really?

It is a question that is becoming ever more relevant as Artificial Intelligence is no longer confined to the pages of a sci-fi novel. From the realms of fantasy, it has crept into our lives. Our devices are connected, personal digital assistants answer our queries, algorithms track our habits and make recommendations, AI is sparking advancements in medicine, cars will soon drive themselves, and robots may soon deliver our pizza. AI is growing fast; what was once considered a distant possibility is now being tested and rolled out. Just imagine our lives five, ten or twenty years from now.

But will the developments and the benefits suit us all? Will they be equal? The answer is perhaps ‘no’. AI is flawed, just like the rest of us.

Do you remember Google’s photo app that automatically classified dark skin tones as gorillas, or Nikon’s camera that insisted all Asian faces were blinking? An AI-judged beauty contest went through thousands of selfies and chose 44 fair-skinned faces and only one dark face as winners. Microsoft’s Twitter-based chatbot ‘Tay’ was designed to learn from its interactions with users; within 24 hours it had to be shut down. The user community taught it some seriously offensive language, and it regurgitated it faithfully. The very public experiment ended in disaster, with the bot spewing racist and sexist remarks. These are not the only examples. Sexism, racism and all kinds of discrimination are built into the algorithms that drive these ‘intelligent’ machines, for a simple reason: they are built by humans. Machines reflect the biases we have.
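The mechanics are easy to illustrate with a deliberately toy sketch: a trivial "learner" that simply memorises the majority outcome for each group in skewed historical records will faithfully reproduce the skew in its predictions. The group names and counts below are invented purely for illustration; real machine-learning models are far more complex, but the underlying dynamic of learning from biased data is the same.

```python
from collections import defaultdict

# Invented historical records: one group was mostly accepted,
# the other mostly rejected, for reasons unrelated to merit.
historical_decisions = [
    ("group_a", "hired"), ("group_a", "hired"),
    ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"),
    ("group_b", "rejected"), ("group_b", "hired"),
]

def train(records):
    """Memorise the most frequent past outcome for each group."""
    counts = defaultdict(lambda: defaultdict(int))
    for group, outcome in records:
        counts[group][outcome] += 1
    return {g: max(o, key=o.get) for g, o in counts.items()}

model = train(historical_decisions)
print(model)  # {'group_a': 'hired', 'group_b': 'rejected'}
```

The "model" has learned nothing about the candidates themselves; it has learned the prejudice baked into the data, and it will apply that prejudice to every future case.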

This is not new, and certainly not limited to Artificial Intelligence. Tools are usually designed for men, women’s clothing often has no pockets, and seat belts were until recently tested only on male crash-test dummies, putting women at greater risk in a crash. Prejudice and stereotyping in product design have been around for a long time, but what is worrying is that some of it is now creeping into the development of AI.

Deep learning algorithms are all around us, tracking us, prompting us, shaping our preferences and our behaviour. This is just the beginning. Artificial intelligence is going to become an even more integral part of our lives than it already is, so it is absolutely critical that we mould it in a way that makes it truly neutral. It is our chance to build our own future. The present may be imperfect; the future need not be. Since the technology is still developing and has not yet entrenched itself in our lives, this is the time to begin talking about it.

Our conversations around this have so far been largely limited to the number of jobs that will be lost; perhaps now we should start asking other questions too, like those of its purpose and its accountability. Currently it is the tech companies, primarily in the West, who are leading the discussion. But there need to be more participants from across the world: governments, social institutions, corporations, academics, research bodies and so on. They must come together to talk, think and figure out a way to make it equitable, to make it work for everyone. If not, the development of AI will be lopsided, and this will not be limited to a social or cultural issue; it can mean the difference between life and death.

Take self-driving cars: will they give preference to one racial group over another? Will they choose whom to hit or whom to save based on colour, height or gender? I hope not.

In the coming years, for society to be equal, technology must also serve us all equally. Artificial intelligence gives us the incredible opportunity to wipe out human bias in decision-making. This is possible only now, at the development stage, where diversity and inclusion should drive innovation. We need to involve all kinds of minds in the research laboratories, conference rooms and workshops where decisions about our future are taken. A homogeneous group will subconsciously carry forward its own prejudiced way of looking at life; its biases can taint the output, and its assumptions can tilt the direction of science, sharpening inequalities, marginalising people or putting certain sections of the population at higher risk.

We are at a crucial stage of technological evolution. A better world awaits us, but we need all kinds of people to imagine our tomorrow, design it, engineer it and finally make it real.



