A man has taken to social media to share his views on how he believes racial prejudice is present in artificial intelligence.
Joris Lechêne, who educates people on racial bias for a living, posted a video explaining that his passport photo upload had failed because the software did not recognize him properly.
The London-based model and artist said in the video: “Don’t you love it when you train people to spot racist biases for a living and then it happens to you?
“In the process of applying for a British passport, I had to upload a photo, so I followed every guideline to the T and submitted this melanated hotness.”
In the video, which has been liked over 39,000 times, Lechêne revealed the photograph in question, which showed him wearing a black T-shirt and standing before a light grey background.
He continued: “Lo and behold, that photo was rejected because the artificial intelligence software wasn’t designed with people of my phenotype in mind.
“It tends to get very confused by hairlines that don’t fall along the face and somehow mistakes them for the background, and it has a habit of thinking that people like me keep our mouths open.”
Lechêne shared a screenshot of the rejection, which stated that the photo “doesn’t meet all the rules and is unlikely to be suitable for a new passport.”
The software suggested that his mouth “may be open” and the “image and background are difficult to tell apart.”
The social media star said he knew about these types of issues surrounding artificial intelligence as he has highlighted similar examples in his prejudice training courses.
Lechêne believes his experience proves that the current software is inadequate, stating: “This is just a reminder that, if you believe that automation and artificial intelligence can help us build a society without biases, you are terribly mistaken.”
He went on to argue that achieving a fairer and more equal system would require “political actions at every level of society,” because, in his words, “robots are just as racist as society is.”
A BBC investigation in December 2020 found that in a study of 1,000 people, 22 percent of dark-skinned women were told their online passport photos were of poor quality, in contrast to just 14 percent of light-skinned women.
Dark-skinned men were also disproportionately affected: 15 percent of their photos were deemed poor quality, compared with only 9 percent for light-skinned men.
There have been other examples of artificial intelligence displaying inadequate recognition techniques in the past.
A similar failure reportedly affected a pair of Chinese friends who looked dissimilar, prompting many to believe the technology had not been tested as stringently across all races.
In September 2020, the chief design officer of Twitter acknowledged that there were racial biases in how the platform generated photo previews using a neural network.
The company launched an investigation into the algorithm that decides which sections of an image are cropped and displayed in tweet previews, and the results found it did favor white faces over Black ones.
AI has crept into many different sectors; in 2019 it was revealed that predictive-policing software had been used by 50 different police departments with varying degrees of success.
Many critics have argued that this could lead to low-income communities of color being disproportionately targeted if the algorithms dictate where law enforcement patrols.