Technology always involves people. People build it. People use it. People are impacted by it.
Few know that better than Joy Buolamwini, a Ghanaian-American, MIT-trained AI researcher who discovered that the face-tracking software she needed for a school project couldn’t see her face unless she put on a white mask.
“The white mask demonstration is an entry point to larger conversations about bias in artificial intelligence and the people who can be harmed by these systems,” Buolamwini writes in her book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines (Random House, 2023).
The book recounts Buolamwini’s journey from unabashed technology enthusiast to technology activist, blending art and science to uncover the very human fallibilities of the technologies we build.
Along the way, she has exposed the bias baked into the products of some of the largest tech companies, starred in an Emmy-nominated documentary, and advised governments and world leaders, including Joe Biden.
Inside the Black Box
“Any sufficiently advanced technology is indistinguishable from magic,” Arthur C. Clarke wrote in 1973. Along with that sense of magic comes a tendency to ascribe superhuman infallibility to admittedly wondrous technology.
But as technology is a reflection of its creators, it often comes with all-too-human flaws. And unlike a mechanical engine that can be deconstructed to find the cause of a malfunction, AI-driven tools run the risk of operating as black boxes with their inner workings hidden from view, making problems difficult to diagnose and correct.
For example, as Buolamwini discovered in her quest to uncover why face-tracking algorithms failed to detect her face, even software that functions as designed may be running on faulty data. It turns out the off-the-shelf software she was using relied on images of light-skinned people to learn how to track faces.
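To see why that matters, consider how such a tool is typically used. The sketch below (a minimal illustration in Python, not the specific software Buolamwini used; the image filename is hypothetical) loads a pretrained detector from OpenCV and runs it as a black box. The caller never sees the training images, yet they entirely determine which faces the model can find.

```python
import cv2  # pip install opencv-python

# Load a pretrained face detector that ships with OpenCV.
# The caller never sees the images this model was trained on.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# "portrait.jpg" is a hypothetical input image.
image = cv2.imread("portrait.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns bounding boxes for detected faces --
# possibly none at all, if the face in the photo doesn't resemble
# the faces the model was trained on.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"faces detected: {len(faces)}")
```

If the model’s training set skews toward light-skinned faces, this code can run flawlessly and still return zero detections for a dark-skinned face, with no error message hinting at why.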
“Access to the training data is crucial when we want to have a deeper understanding of the risks posed by an AI system,” Buolamwini writes in Unmasking AI.
As she documents in her book, it wasn’t just this one tool: facial recognition systems from the likes of IBM, Microsoft, and Google also misidentified the faces of Black people far more often than those of white people because of skewed training data.
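The remedy she championed, in spirit, is disaggregated evaluation: measuring a system’s accuracy separately for each demographic group rather than as a single pooled number. The sketch below illustrates the idea with hypothetical audit records; it is not the methodology of Buolamwini’s actual studies.

```python
from collections import defaultdict

# Hypothetical audit log: (demographic group, was the prediction correct?)
results = [
    ("lighter-skinned male", True),
    ("lighter-skinned female", True),
    ("darker-skinned male", True),
    ("darker-skinned female", False),
    # ... many more labeled test cases ...
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok  # True counts as 1

# Report accuracy per group; a single aggregate number would hide the gap.
for group, n in totals.items():
    print(f"{group}: {correct[group] / n:.0%} accuracy ({n} samples)")
```

An aggregate score of, say, 90 percent can conceal a subgroup whose error rate is many times higher, which is exactly the kind of disparity a pooled benchmark hides.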
She wanted to know what other biases lurked in unexamined AI tools and what harm they could cause.
Through Inaction
“A robot may not injure a human being or, through inaction, allow a human being to come to harm,” states the first of Isaac Asimov’s Three Laws of Robotics.
The laws have influenced technology developers as well as science fiction writers and filmmakers since 1942. But what about unintended or indirect harms? What about harm caused by people acting on flawed recommendations from AI-powered systems?
- A man dies in a crash after placing too much trust in self-driving car technology.
- An innocent man is arrested in front of his children after a facial recognition system falsely IDs him as the suspect in a theft.
- A writer is fired after his real-world research exposes a pre-approved, AI-generated outline as nonsense (not as dramatic, I know, but this one happened to me).
Like it or not, AI increasingly impacts everyone who lives in the modern world, sometimes in harmful ways. As Buolamwini points out, AI doesn’t have to take the form of killer robots or civilization-ending Skynet to cause harm.
The work of reminding the world of the human fallibility behind lines of code and AI training data — so technologists, users, bystanders, and policymakers can correct it — has become a mission for Buolamwini, who runs a nonprofit called the Algorithmic Justice League (AJL).
“I imagined AJL becoming a network of individuals from different backgrounds working together to uncover what ailed artificial intelligence so we could create better systems that prevented harms instead of perpetuating them,” Buolamwini writes in her book.
She advises technologists, users, and policymakers to view information technology, including AI, as essential infrastructure and to treat it accordingly.
“When people point out potholes that can lead to dangerous accidents or show the damage done to their vehicles as a result of a pothole, we don’t ask them to stop using roads,” Buolamwini writes in Unmasking AI. “We also do not ask individuals to fix the potholes themselves. Instead, we reach out to groups established to safeguard the public interest and to maintain infrastructure.”
Opening the black box is the first step.