Why Artificial Intelligence is Racist

By: Kai Lockwood, on Behalf of FTX Vertex

Increasingly, we live in a world shaped more by algorithms than by people. For instance, recommendation algorithms on Netflix or Instagram shape how we consume media and even what media is deemed fit to be consumed. Financial trading algorithms on Wall Street, for their part, are getting closer to predicting the stock market and taking the guesswork out of our global marketplace. Diagnostic algorithms in healthcare mean we often turn to robots, rather than humans, for first opinions.

Algorithms are already ubiquitous, so the real question comes down to whether they end up choosing the best outcomes.

In part, they do not. This is because we, globally, have never been good at choosing what is best for us. We see this manifested in our many failures to build an egalitarian society, even though such a world would undoubtedly be better for the human race and its prospects of happiness. Even today, our societies harbor disdain for those who are different from us. Enter artificial intelligence and other technologies that promise to be great equalizers, inventions that will make everyone’s lives so much better and improve us as a species.

Such proclamations avoid a fundamental question—can a systematically unequal and biased society create technology, specifically artificial intelligence, that will equalize us?

I don’t believe we can. But first, let’s define our terms. Artificial intelligence refers to technology built to solve problems more efficiently, or simply better, than humans currently can. The majority of AI systems are what we call “black-box” algorithms: the human codes the inputs and some restrictions on what the output can look like, but the algorithm finds its own most efficient way to bridge input and output.

The main problem with black-box algorithms is that the human coder cannot see or manipulate the machine’s decision making, the route it actually takes from input to output.
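As a rough illustration of what that looks like in practice, here is a minimal sketch of the black-box pattern; the skin-lesion features, labels and numbers are all invented, and the model is a generic off-the-shelf classifier rather than any real diagnostic system.

```python
# A minimal sketch of the "black-box" pattern: the coder chooses the inputs
# (features) and the allowed outputs (labels); the mapping between them is
# learned by the model rather than written by hand.
from sklearn.ensemble import RandomForestClassifier

# Inputs the coder controls: each row describes one skin lesion
# (size in mm, irregularity score), purely illustrative features.
X = [[2.0, 0.1], [6.5, 0.8], [1.5, 0.2], [7.0, 0.9], [3.0, 0.3], [8.2, 0.7]]
# Outputs the coder controls: 1 = "refer to a doctor", 0 = "benign".
y = [0, 1, 0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)                      # the model finds its own route from input to output

print(model.predict([[5.5, 0.75]]))  # we can read the answer...
# ...but the many internal decision rules that produced it are not something
# the coder wrote, and they are hard to inspect or correct directly.
```

The only levers left to the coder here are the rows of X and y, which is exactly why the quality of that data matters so much.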

As a result, one can often only control the input, and that input comes mostly in the form of data. Whether it is a database of skin lesions for diagnosing cancer or a database of candidates to help an AI identify the strongest of the bunch, this data always comes from the coder.

Since these databases are products of an imperfect and biased society, they are, at their core, just as imperfect and biased—and we have plenty of real-world cases proving it.

We can see this in Amazon’s now-scrapped hiring AI, which was trained to understand what a “good resume” was by observing accepted resumes from the previous 10 years. Since many of those resumes came from men, the AI learned to treat male applicants as preferable.
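A toy sketch shows how little it takes for that failure mode to appear. The data below is entirely synthetic and this is not Amazon’s actual system; it only illustrates that a model fit to skewed historical decisions reproduces the skew.

```python
# Synthetic, hypothetical data: a qualification score plus a group flag.
# In this invented history, applicants from group 1 were rejected even when
# highly qualified, so the model learns group membership as a signal.
from sklearn.linear_model import LogisticRegression

X = [[90, 0], [85, 0], [40, 0], [88, 1], [86, 1], [35, 1]]
y = [1,       1,       0,       0,       0,       0]   # past hire / reject decisions

model = LogisticRegression(max_iter=1000).fit(X, y)

# Two equally qualified applicants who differ only in group membership:
print(model.predict_proba([[87, 0]])[0][1])   # estimated chance of a "hire" label
print(model.predict_proba([[87, 1]])[0][1])   # noticeably lower for group 1
# The model never "chooses" to discriminate; it simply repeats the pattern
# it was handed.
```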

We can also look at Tay, Microsoft’s attempt at a chatbot. Microsoft trained Tay to respond coherently to people on Twitter, using millions of the site’s tweets as its training data. Within twenty-four hours, Tay began to churn out misogynistic, racist and anti-Semitic tweets.

Or we can look at how, in July 2018, the ACLU and NAACP (along with about 100 other civil rights and justice-based organizations) signed a statement urging states to stop adopting AI-based risk assessment tools. These tools, built with the intention of reducing incarceration without increasing crime, are meant to move defendants through the legal system efficiently and safely. A risk assessment tool is fed the defendant’s profile in a criminal case, along with historical crime data, to calculate the likelihood that the defendant will reoffend. Judges often use this score to inform decisions about pre-trial detention, access to rehabilitation services and even sentence length. The unfortunate truth is that AI in our legal system has been trained on historically racist and classist data.
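To make the mechanics plain, here is a deliberately simplified sketch of the kind of pipeline the statement objects to. The features, records and neighborhood codes are invented; this is not the actual tool used in any jurisdiction.

```python
# Hypothetical sketch: fit a model to historical re-arrest records, then hand
# its output to a judge as a single "risk" number. If past policing
# concentrated arrests in certain neighborhoods, that disparity is baked into
# both the features and the labels the model learns from.
from sklearn.linear_model import LogisticRegression

# Invented history: [prior_arrests, age, neighborhood_code] and whether the
# person was re-arrested before trial.
X_history = [[3, 22, 1], [0, 45, 0], [5, 19, 1], [1, 30, 0], [4, 25, 1], [0, 50, 0]]
rearrested = [1, 0, 1, 0, 1, 0]

risk_model = LogisticRegression(max_iter=1000).fit(X_history, rearrested)

def risk_score(defendant):
    """Return the 0-100 style number a judge might be shown."""
    return round(100 * risk_model.predict_proba([defendant])[0][1])

# The score looks objective, but it can only echo the historical data behind it.
print(risk_score([2, 24, 1]))
```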

A final case study against the usefulness of AI comes from the medical industry and affects millions of Americans. At face value, medical AI seems like a good thing: in the future, it will allow people in rural areas, or people who cannot leave their homes, to gain access to medical diagnostic tools more cheaply and quickly. This could, in turn, improve early diagnosis and lead to a higher likelihood of survival.

The problem, however, is that these systems are set up to fail people of color. Due to multiple issues—the historical underdiagnosis of people of color, disparities in insurance coverage and differences in existing healthcare outcomes—we have a staggering underrepresentation of data from people of color in health indexes and other databases.
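A toy sketch (with invented marker values and group differences) shows how that underrepresentation plays out: when one group supplies nearly all of the training data, the model fits that group’s presentation of a condition and misses the other’s.

```python
# Synthetic example: suppose a condition appears above a marker value of 5 in
# group A but above 3 in group B (purely illustrative thresholds).
from sklearn.linear_model import LogisticRegression

group_a = [([x], int(x > 5)) for x in range(11)] * 20   # heavily represented
group_b = [([x], int(x > 3)) for x in range(11)]        # barely represented

X = [features for features, _ in group_a + group_b]
y = [label for _, label in group_a + group_b]

model = LogisticRegression().fit(X, y)

# A patient from group B with a marker value of 4 does have the condition,
# but the model, dominated by group A's pattern, is likely to call it negative:
print(model.predict([[4]]))   # a missed early diagnosis
```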

Currently, we fail to make the connection between our artificial intelligence and the culture it perpetuates. If we are going to create technology that is at the cutting edge, that paves our way to the future, I believe that we must ask ourselves: does AI include everyone? Is it really paving a way to a brighter future, or is it just amplifying existing systems of prejudice?
