
Palestinian Freedom Is 'Complex' But For Israel It's A 'Right': How AI Perpetuates Bias

In some areas, AI outperforms humans. But one notable parallel between humans and AI is the presence of bias.

Asking an AI chatbot questions (representative image)

When Palestinian academic Nadi Abusaada asked OpenAI's ChatGPT whether Israelis and Palestinians deserved to be free, the differing answers “appalled” him. For Israel, the chatbot answered in a matter-of-fact tone, describing freedom as a fundamental human right; for Palestine, the question of justice was a “complex and highly debated issue”.

Although appalled, Abusaada wasn’t the least bit surprised by ChatGPT’s answer.

“My feelings are the feelings of every Palestinian when we see the immense amount of misinformation and bias when it comes to the question of Palestine in Western discourse and mainstream media. For us, this is far from a single incident,” Abusaada told Doha News.

As the AI landscape advances, there are areas where AI outperforms humans. However, one notable parallel between AI and human behaviour is bias, a reflection of societal discourse. Multiple studies and investigations have found that in some instances these biases run even deeper than those observed in the human world.

In one study, two AI chatbots, ChatGPT and Alpaca, were asked to write letters of recommendation for hypothetical male and female employees; the results reflected a clear gender bias, Scientific American reported.

“We observed significant gender biases in the recommendation letters,” says paper co-author Yixin Wan, a computer scientist at the University of California, Los Angeles. While ChatGPT deployed nouns such as “expert” and “integrity” for men, it was more likely to call women a “beauty” or “delight.” Alpaca had similar problems: men were “listeners” and “thinkers,” while women had “grace” and “beauty.” Adjectives were similarly polarising. Men were “respectful,” “reputable” and “authentic,” according to ChatGPT, while women were “stunning,” “warm” and “emotional”, the report read.

In June of this year, Bloomberg Graphics conducted an investigation into AI bias and found that “text-to-image AI takes gender and racial stereotypes to extremes, worse than in the real world”. 

Bloomberg journalists used Stable Diffusion, an open-source text-to-image AI model. By feeding it prompts like “drug dealer”, “prisoner”, “CEO” and “terrorist”, they found the generated images revealed a strong bias, reflecting deep-seated prejudices.
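
For readers who want to see what such an experiment looks like in code, here is a minimal sketch of prompting Stable Diffusion through the open-source Hugging Face diffusers library. The checkpoint name, prompts and output filenames are illustrative assumptions, not Bloomberg's actual pipeline.

```python
# Minimal sketch of prompting Stable Diffusion with occupation keywords,
# in the spirit of the Bloomberg experiment. The checkpoint, prompts and
# file names are illustrative assumptions, not Bloomberg's setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompts = ["a portrait of a CEO", "a portrait of a fast-food worker"]

for prompt in prompts:
    for i in range(3):                  # generate a few samples per prompt
        image = pipe(prompt).images[0]
        image.save(f"{prompt.replace(' ', '_')}_{i}.png")
```

Researchers then inspect the resulting images for patterns, for example how often each prompt produces people of a given gender or skin tone.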

Reporter Leonardo Nicoletti, one of the journalists who conducted the investigation, wrote on X: “Women and people with darker skin tones were underrepresented across images of high-paying jobs, and overrepresented for low-paying ones.”

Subjects with darker skin tones were more commonly generated by prompts like “fast-food worker” and “social worker”.

And for prompts related to crime, Nicoletti said: “For every image of a lighter-skinned person generated with the keyword ‘inmate’, the model produced five images of darker-skinned people — even though less than half of US prison inmates are people of color.”

The Bloomberg analysis also found that “most occupations in the dataset were dominated by men, except for low-paying jobs like housekeeper and cashier.”

Last month, The Washington Post conducted a similar study and found that Stable Diffusion XL amplified outdated Western stereotypes and clichés. Asked to generate images of toys in Iraq, the model produced toy soldiers with guns. The prompt “attractive people” returned only young, light-skinned faces, and Muslim people were represented by men in head coverings.

“When we asked Stable Diffusion XL to produce a house in various countries, it returned clichéd concepts for each location: classical curved roof homes for China, rather than Shanghai’s high-rise apartments; idealised American houses with trim lawns and ample porches; dusty clay structures on dirt roads in India, home to more than 160 billionaires, as well as Mumbai, the world’s 15th richest city,” The Washington Post reported.

Where does the bias stem from?

AI systems learn from examples, absorbing whatever information they are given. Real-world cases of AI bias show how the people who supply that data can shape a system's behaviour, exposing it, intentionally or unintentionally, to biased or stereotypical perspectives. Those views, and the bias they carry, are then picked up by the AI and reflected in its outputs.

Speaking to CNN, Reid Blackman, who advises companies and governments on digital ethics and has also written the book “Ethical Machines”, said, “If you give it [AI] examples that contain or reflect certain kinds of biases or discriminatory attitudes … you’re going to get outputs that resemble that.”

Blackman cited the example of the infamous resume-screening AI that Amazon was building a few years ago. The company set out to develop a program that would assess resumes and assign ratings, training it on examples of resumes from previous years that it deemed successful.

“And they gave it to the AI to learn by example … what are the interview-worthy resumes versus the non-interview-worthy resumes. What it learned from those examples – contrary to the intentions of the developers, by the way – is we don’t hire women around here,” Blackman said.

The AI learnt that women did not make good candidates and rejected their resumes.

“That’s a classic case of biased or discriminatory AI. It’s not an easy problem to solve. In fact, Amazon worked on this project for two years, trying various kinds of bias-mitigation techniques. And at the end of the day, they couldn’t sufficiently de-bias it, and so they threw it out.”
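
To make that mechanism concrete, here is a toy sketch of how a model trained on skewed historical hiring decisions can absorb that skew. The data and feature names are entirely hypothetical; this is not Amazon's system.

```python
# Toy illustration of "learning bias by example": a classifier trained on
# skewed historical hiring decisions picks up the skew. Entirely hypothetical;
# this is not Amazon's system or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

years_experience = rng.normal(5, 2, n)   # a genuinely relevant feature
is_woman = rng.integers(0, 2, n)         # a protected attribute

# Historical decisions: experience mattered, but women were hired less often.
hired = (years_experience + rng.normal(0, 1, n) - 1.5 * is_woman) > 4.5

X = np.column_stack([years_experience, is_woman])
model = LogisticRegression().fit(X, hired)

# The model assigns a large negative weight to "is_woman": it has learned
# the historical discrimination as if it were a hiring criterion.
print(dict(zip(["years_experience", "is_woman"], model.coef_[0])))
```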


According to a report by IBM, to get rid of bias in AI, we need to closely examine the data, machine learning algorithms, and other parts of AI systems to find where bias might be coming from.

“AI systems learn to make decisions based on training data, so it is essential to assess datasets for the presence of bias. One method is to review data sampling for over- or underrepresented groups within the training data,” the report read.

“For example, training data for a facial recognition algorithm that over-represents white people may create errors when attempting facial recognition for people of colour.”
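
As a simple illustration of the kind of data-sampling review IBM describes, the snippet below compares how groups are represented in a training set against a reference share. The column names, figures and reference shares are made up for the example.

```python
# Illustrative check for over- or under-represented groups in training data,
# in the spirit of the review IBM describes. Column names and data are made up.
import pandas as pd

train = pd.DataFrame({
    "skin_tone": ["light"] * 800 + ["dark"] * 200,
    "label":     ["face"] * 1000,
})

# Share of each group in the training set vs. an assumed reference share.
observed = train["skin_tone"].value_counts(normalize=True)
expected = pd.Series({"light": 0.6, "dark": 0.4})   # hypothetical reference

report = pd.DataFrame({"observed": observed, "expected": expected})
report["gap"] = report["observed"] - report["expected"]
print(report)   # large positive or negative gaps flag skewed sampling
```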


A Carnegie Mellon University study found that Google displayed far fewer ads for high-paying executive jobs to women than to men, The Washington Post reported.

According to The Washington Post, algorithmic personalisation systems, like those behind Google's ad platform, don't operate in a vacuum: they are programmed by humans and taught to learn from user behaviour. So the more we click, search or generally use the Internet in sexist, racist ways, the more the algorithms learn to reproduce those results and ads.
