Joy Buolamwini: Facial Recognition Failed to See Dark-Skinned Women Until She Forced Big Tech to Fix It

by Gee NY

Facial recognition once failed to recognize dark-skinned women nearly half the time. It took the persistence of one woman, Dr. Joy Buolamwini, to push Big Tech to finally confront the bias coded into its algorithms.

In 2015, while working on an art project at the MIT Media Lab, Buolamwini discovered that the facial analysis software she was using couldn’t detect her face. To be seen, she had to wear a white mask. That moment of erasure sparked a question that would define her career: Why did artificial intelligence struggle to see her?

Her groundbreaking research project, Gender Shades, revealed the staggering truth. Facial recognition systems from IBM, Microsoft, and Face++ identified lighter-skinned men with 99.2% accuracy, but error rates for darker-skinned women soared as high as 47%. The core issue was training data: overwhelmingly male (75%) and overwhelmingly lighter-skinned (80%).

Buolamwini responded with the Pilot Parliaments Benchmark, a dataset designed to ensure diverse representation by gender and skin tone. It became a model for auditing AI systems. Her findings forced Microsoft and IBM to improve their algorithms, while Amazon initially tried to discredit her work.
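The kind of disaggregated audit Gender Shades popularized can be illustrated with a minimal sketch: instead of reporting one overall accuracy number, error rates are computed per intersectional subgroup (gender × skin tone). The records below are invented for illustration, not the Pilot Parliaments data, and the function is a hypothetical helper, not the actual audit code.

```python
# Minimal sketch of a disaggregated accuracy audit in the spirit of
# Gender Shades: compute error rates per intersectional subgroup.
from collections import defaultdict

def error_rates(records):
    """records: iterable of (true_gender, skin_tone, predicted_gender).

    Returns a dict mapping each (gender, skin_tone) subgroup to its
    classification error rate.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for gender, tone, predicted in records:
        group = (gender, tone)
        totals[group] += 1
        if predicted != gender:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Illustrative toy data only — real audits use a balanced benchmark.
sample = [
    ("female", "darker", "male"),     # misclassified
    ("female", "darker", "female"),
    ("male", "lighter", "male"),
    ("male", "lighter", "male"),
]
print(error_rates(sample))
```

The key design point is reporting per-subgroup rates rather than a single aggregate: a system can score well overall while failing badly on one subgroup, which is exactly the disparity Gender Shades surfaced.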

Unfazed, she founded the Algorithmic Justice League in 2016, describing algorithmic bias as the “Coded Gaze” — the embedded prejudices of developers that silently shape AI outcomes. Her spoken-word film AI, Ain’t I A Woman?, which documents facial recognition misidentifying figures like Michelle Obama and Serena Williams, has been screened worldwide.

Buolamwini’s advocacy reached Congress in 2019, where she testified about the dangers of unchecked facial recognition, warning that beyond accuracy, the technology could be weaponized for surveillance, racial profiling, and discrimination in critical areas such as hiring, housing, and criminal justice.

To address these risks, she co-founded the Safe Face Pledge, calling for ethical restrictions on facial recognition: no weaponization, no secret law enforcement use, and full accountability. Following years of public pressure, IBM, Microsoft, and Amazon all paused law enforcement sales of facial recognition technology.

Her influence continued to grow. In 2023, she published her bestselling book Unmasking AI: My Mission to Protect What Is Human in a World of Machines, calling for inclusive datasets, independent audits, and laws protecting marginalized groups. She also consulted with the White House ahead of Executive Order 14110 on “Safe, Secure, and Trustworthy AI.”

Beyond facial recognition, her project Voicing Erasure exposed bias in voice assistants like Siri and Alexa, highlighting how they often fail to recognize African-American Vernacular English.

As Fortune put it, Joy Buolamwini is “the conscience of the AI revolution.”

Her message is clear: AI doesn’t just reflect society — it amplifies its flaws. And unless those flaws are addressed, technology will continue to harm the very people it should serve.
