But will these developments and their benefits suit us all? Will they be shared equally? The answer is perhaps ‘no’. AI is flawed, just like the rest of us.
Do you remember Google’s photo app that automatically classified dark-skinned faces as gorillas, or Nikon’s camera that insisted all Asian faces were blinking? An AI-judged beauty contest sifted through thousands of selfies and chose 44 fair-skinned faces, and only one dark one, as the winners. Microsoft’s Twitter-based chatbot ‘Tay’ was designed to learn from its interactions with users; within 24 hours it had to be shut down. The user community taught it some seriously offensive language, and it regurgitated it faithfully. The very public experiment ended in disaster, with the bot spewing racist and sexist remarks. These are not the only examples. Sexism, racism and all kinds of discrimination are built into the algorithms that drive these ‘intelligent’ machines, for a simple reason: they are built by humans. Machines reflect the biases we have.
This is not new, and certainly not limited to Artificial Intelligence.
Tools are usually designed for men, women’s clothing often has no pockets, and seat belts were until recently tested only on male dummies, putting women at greater risk in a crash. Yes, prejudice and stereotyping in product design have been around for a long time, but what is worrying is that some of it is now creeping into the development of AI.
Deep learning algorithms are all around us, tracking us, prompting us, shaping our preferences and our behavior. This is just the beginning. Artificial intelligence is going to be an even more integral part of our lives than it already is, and thus it is absolutely critical that we mould it in a way that makes it truly neutral. It is our chance to build our own future. The present may be imperfect; the future need not be. While AI is still developing, and has not yet entrenched itself in our lives, this is the time to start talking about it.
Our conversations around this have so far been limited largely to the number of jobs that are going to be lost; perhaps now we should start asking other questions too, like those of AI’s purpose and its accountability. Currently it is the tech companies, primarily in the West, that are leading the discussion. But there need to be more participants from across the world: governments, social institutions, corporations, academics, research bodies and so on. They must come together to talk, think and figure out a way to make AI equitable, to make it work for everyone. If not, its development is going to be lopsided, and that will not be limited to a social or cultural issue; it can mean the difference between life and death.