Why AI should move slow and fix things

By Eliza Strickland

Joy Buolamwini's AI research was attracting attention years before she received her PhD from the MIT Media Lab in 2022. As a graduate student, she made waves with a 2016 TED talk about algorithmic bias that has received more than 1.6 million views to date. In the talk, Buolamwini, who is Black, showed that standard facial detection systems didn't recognize her face unless she put on a white mask. During the talk, she also brandished a shield emblazoned with the logo of her new organization, the Algorithmic Justice League, which she said would fight for people harmed by AI systems, people she would later come to call the excoded.

In her new book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Buolamwini describes her own awakenings to the clear and present dangers of today’s AI. She explains her research on facial recognition systems and the Gender Shades research project, in which she showed that commercial gender classification systems consistently misclassified dark-skinned women. She also narrates her stratospheric rise—in the years since her TED talk, she has presented at the World Economic Forum, testified before Congress, and participated in President Biden’s roundtable on AI.

While the book is an interesting read on an autobiographical level, it also contains useful prompts for AI researchers who are ready to question their assumptions. She reminds engineers that default settings are not neutral, that convenient datasets may be rife with ethical and legal problems, and that benchmarks aren't always assessing the right things. Via email, she answered IEEE Spectrum's questions about how to be a principled AI researcher and how to change the status quo.