White stereotypes in intelligent machines: too much whiteness in AI

Two scientists from the University of Cambridge, Stephen Cave and Kanta Dihal, have published a paper entitled "The Whiteness of AI" that shows how white stereotypes shape the image of artificially intelligent systems. The article deals with both real and fictional intelligent machines.

The two scientists divide the machines into four categories: humanoid robots, chatbots, virtual assistants and stock images. They offer three interpretations of this white stereotyping of AI systems.

Racial classification of machines

Machines can acquire a racial classification through anthropomorphism, that is, the transfer of human attributes to artificial systems. Beyond the obvious visual features, which tend to follow white models, a human-like voice and the style of interaction also play a role.

Even decidedly machine-like robots such as Nao, Pepper and PR2 are made largely of white materials. The attribution is even more striking in Sophia, presented by Hanson Robotics and granted citizenship of Saudi Arabia in 2017: although the robot woman was built in Hong Kong, she has clearly Caucasian features.

Sophia can clearly be classified as white, with or without a torso.

Reasons and consequences

The paper examines three possible reasons for the predominantly white image. First, the racial classification often reflects the largely white milieus in which the machines are created. Second, the authors point out that, from the perspective of many white people, machines that are intelligent, professional and powerful are attributed to their own race. The third reason concerns visions of superior machines, captured in the quotation below.

As a consequence, the authors warn that the white stereotypes erase people of colour from this white utopia. Overall, such portrayals reinforce prejudice and bias in machine-learning systems, which can lead to decisions that disadvantage ethnic groups.

One sentence puts it in a nutshell: if white people imagine being outdone by superior beings, those beings will not resemble the races they previously regarded as inferior. A white audience, the authors argue, cannot imagine being surpassed by Black machines.

Self-knowledge is the first step towards improvement

As an important first step towards breaking up these structures, the article calls for recognizing and acknowledging that they exist: the white stereotypes must not remain invisible. The majority of white viewers are unlikely to perceive a racial classification in human-like machines, since such machines merely confirm their notion of what 'human-like' means. For people of colour, however, the white stereotypes are never invisible.
