
How AI Is Enabling Racism & Sexism: Algorithmic Justice League’s Joy Buolamwini on Meeting with Biden


Image Credit: Coded Bias

We speak with Dr. Joy Buolamwini, founder of the Algorithmic Justice League, who met this week with President Biden in a closed-door discussion with other artificial intelligence experts and critics about the need to explore the promise and risk of AI. The computer scientist and coding expert has long raised alarm about how AI and algorithms are enabling racist and sexist bias. We discuss examples, and she lays out what should be included in the White House’s “Vision for Protecting Our Civil Rights in the Algorithmic Age.”

Transcript
This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman, with Nermeen Shaikh.

Amidst the boom in artificial intelligence and growing awareness of its potential risks, President Biden met Tuesday with critics of the technology. He spoke before the closed-door meeting in San Francisco.

PRESIDENT JOE BIDEN: The same here today: I want to hear directly from the experts. And these are the world’s — some of the world’s leading experts on this issue and the intersection of technology and society, who we — who we can provide a range — who can provide a range of perspectives for us and — on AI’s enormous promise and its risks.

AMY GOODMAN: For years, groups like the Algorithmic Justice League have raised alarm about how AI and algorithms can spread racist and sexist biases. The group’s founder, Dr. Joy Buolamwini, was among those who met with Biden Tuesday. She’s going to join us in a minute. The group recently honored Robert Williams, who’s African American. In 2020, he became the first person known to be wrongfully arrested in the United States based on a false facial recognition match, when Detroit police arrested him at his home as his wife and two young daughters watched. He was held overnight in jail, interrogated the next day. Police told him, quote, “The computer must have gotten it wrong,” and finally released him. This is part of the acceptance speech by Robert Williams when he received the Gender Shades Justice Award.

ROBERT WILLIAMS: I just want to say to anybody who’s listening at this point, I guess, just to have the opportunity to let my story be a forewarning to the rest of the world that, as it happened to me, it could happen to you. Right? I just was a regular regular. I was at work and was trying to get home, and I got arrested for something that had nothing to do with me. And I wasn’t even in the vicinity of the crime when it happened. Right? So, it’s just that I guess the way the technology is set up, everybody with a driver’s license or a state ID is essentially in a photo lineup.

AMY GOODMAN: For more, we’re joined in Boston by Dr. Joy Buolamwini, founder of the Algorithmic Justice League, just back from that meeting with President Biden on artificial intelligence in San Francisco. She’s also featured in the documentary Coded Bias.

Dr. Joy Buolamwini, welcome back to Democracy Now! You posted on Twitter, before meeting with President Biden, that you were looking forward to the meeting to talk about “the dangers of AI and what we can do to prevent harms already impacting everyday people,” like “mortgages and housing, in need of medical treatment, encountering workplace surveillance, & more.” I assume, in “more,” you’re talking about issues like this one, racially biased false matches from AI-based facial recognition. Can you talk about the Williams case and so much more, what you discussed with President Biden?

JOY BUOLAMWINI: Absolutely. Thank you so much for having me.

I am actually hopeful after this roundtable with President Biden, because we started the conversation really focused not just on what AI can do, which we’ve heard a lot about, but centering how it’s impacting real people, like we saw with Robert Williams.

With the Robert Williams case, what we saw was a case of AI-powered biometrics leading to a wrongful arrest. So, the research that I’ve done, and many others have done, as well, has shown documented racial bias, gender bias and other types of biases in facial recognition systems. And when these systems are used in the real world, like we saw with the Robert Williams case, you actually have consequences. So, for Robert to be arrested in front of his wife and in front of his two young daughters, you cannot erase those sorts of experiences, and then to be sleeping on a cold slab for 30 hours with just a filthy faucet as a water source. So these are the types of real-world harms that are concerning.

And it’s also not just on race, right? We have examples of hiring algorithms that have been showing sexist hiring practices then being automated in a way that appears to be neutral. You have people being denied life-saving healthcare because of biased and inaccurate algorithms. And so, I was very excited to see the Biden administration putting the real-world harms in the center of this conversation.

NERMEEN SHAIKH: Joy, if you could just explain, you know, how is it that AI has been — has these kinds of biases? Because, of course, AI can only reflect what already exists; it’s not coming up with something itself. So, who are the programmers? How is it that these biases, as you say, not just on race, although particularly on race, but also gender and other issues — how are they embedded within AI systems?

JOY BUOLAMWINI: Well, the AI systems that we are seeing on the rise are increasingly pattern recognition systems. And so, to teach a machine how to recognize a face or how to produce human-like text, like we’re seeing with some of the large language models, what you have are large data sets of examples. Here’s a face, here’s a sentence, here’s a whole book, right? And based on that, you have these systems that can begin to learn different patterns.
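[To make the pattern-recognition point above concrete, here is a minimal illustrative sketch, using synthetic placeholder data rather than any real face or text dataset: a model is fit to labeled examples, and whatever regularities, including any biases, those examples contain are what it learns. The feature vectors, labels, and classifier choice here are hypothetical, not a description of any particular system discussed in the interview.]

```python
# Minimal sketch of "learning patterns from labeled examples" (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training set: each row stands in for a feature vector extracted
# from an example (e.g., a face image or a sentence); each label is the target.
X = rng.normal(size=(1000, 32))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # pattern hidden in the data

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # the model absorbs whatever regularities the data holds

print("held-out accuracy:", model.score(X_test, y_test))
```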

But if the data itself is biased or if it contains stereotypes or if it has toxic content, what you’re going to learn is the good, the bad and the ugly, as well, when it comes to large language models, for example. And then, on the facial recognition side, if you have the underrepresentation of certain populations — it could be people with darker skin; it could be children, for good reason, who we don’t want their faces in those data sets — then, when they’re used in the real world, you have several risks. One is misidentification — right? — what we saw with Robert Williams’ case.
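[As an illustration of how underrepresentation in training data surfaces as unequal error rates, the following sketch, built on synthetic data and not the audit methodology Buolamwini or the Algorithmic Justice League actually used, reports accuracy separately for each demographic subgroup rather than as a single overall number. The subgroup names and error rates are invented for the example.]

```python
# Minimal sketch of a disaggregated accuracy audit (synthetic data only).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical audit set: a subgroup tag, a true label, and a model prediction per sample.
groups = rng.choice(["majority subgroup", "underrepresented subgroup"], size=n, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=n)

# Simulate a model that errs far more often on the underrepresented subgroup.
error_rate = np.where(groups == "underrepresented subgroup", 0.30, 0.05)
flip = rng.random(n) < error_rate
y_pred = np.where(flip, 1 - y_true, y_true)

# Overall accuracy hides the disparity; per-subgroup accuracy reveals it.
print("overall accuracy:", np.mean(y_pred == y_true))
for g in np.unique(groups):
    mask = groups == g
    acc = np.mean(y_pred[mask] == y_true[mask])
    print(f"{g}: accuracy = {acc:.2%} (n = {mask.sum()})")
```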

But even if these systems were perfectly accurate, now we have to ask: Do we want the face to be the last frontier of privacy? Because we’re then creating a surveillance state apparatus.

NERMEEN SHAIKH: Well, Joy, let’s go to a clip from Coded Bias, the film, documentary film, that you’re featured in. This is Safiya Umoja Noble, the author of the book Algorithms of Oppression.

SAFIYA UMOJA NOBLE: The way we know about algorithmic impact is by looking at the outcomes. For example, when Americans are bet against and selected and optimized for failure. So it’s like looking for a particular profile of people who can get a subprime mortgage, and kind of betting against their failure, and then foreclosing on them and wiping out their wealth. That was an algorithmic game that came out of Wall Street. During the mortgage crisis, you had the largest wipeout of Black wealth in the history of the United States. Just like that. This is what I mean by algorithmic oppression. The tyranny of these types of practices of discrimination have just become opaque.

NERMEEN SHAIKH: So, that’s a clip from Coded Bias, a documentary by Shalini Kantayya, which you’re featured in. Your comments, Joy?

JOY BUOLAMWINI: I think this is a great clip, because it’s showing that while we have all of these conversations about the possibilities of AI, the reality shows the perils. And what’s even more concerning to me right now is, in this rush to adopt algorithmic systems, there is a narrative that says we want to have trustworthy AI, or we have to have responsible AI, but so many of the popular AI systems that have been built have been built on a foundation of oppression or a foundation of unconsented data — some would say stolen data.

And so, something that was concerning to me at the roundtable was there was expressed excitement about using AI for education. But then, when you looked at the models in the AI systems that were being integrated, these are known models where the companies aren’t sharing the training data. Those who have labeled the toxic aspects of that data — right? — have spoken out about the exploitative working conditions that they face, being paid, you know, one or two dollars an hour for doing really traumatic work. And so, we can’t build responsible AI or expect people to trust in AI systems, when we have all of these terrible practices that are undergirding these foundation models. So the foundations themselves need to be excavated, and we need a start-over.

AMY GOODMAN: Dr. Buolamwini, can you talk about the project of Algorithmic Justice League that was just launched, called a TSA checkpoint scorecard, fly.ajl.org, and how people can share their experiences dealing with a new facial recognition program that’s being used at several airports across the country?

JOY BUOLAMWINI: Absolutely. So, the TSA is starting to roll out facial recognition at domestic checkpoints. They’re now at 25 airports. And this is concerning, because the United States needs to start leading on biometric rights. Just last week, we had EU lawmakers push forward the EU AI Act, which explicitly bans the use of biometric technologies, like facial recognition, in public spaces, the live use of this technology. We are flying in exactly the opposite direction, where people don’t even know — right? — that they have a choice to opt out.

And so, what we’re doing with the Algorithmic Justice League is we’ve released the scorecard. So, if you have traveled this summer, if you’re traveling this summer, please share your experience, so we understand: Did you give consent? What was your experience if you tried to opt out? Did the technology work for you?

I also think that this is a great opportunity for the U.S. government to put into place the Blueprint for a Bill of Rights for AI. And so, this blueprint came out last year, and it highlights so many of the issues that we’ve been talking about, which is the need for notice, and we need consent, as well, but also we need protections from algorithmic discrimination. We need to know that these systems are safe and effective. We need data privacy, so that you can’t just snatch people’s faces, right? And we need human fallbacks, as well. So I think it’s a great opportunity for the Biden administration to make true on their promise to make what was put in the blueprint binding through the Office of Management and Budget, and then to push to make the blueprint federal law.

AMY GOODMAN: Can I ask you — as it becomes harder to travel, longer and longer and longer lines, the other day I was at the airport. Guy comes up — I was on an endless line — says, “Hey, do you want to do CLEAR? I will get your information, and then I’ll walk you right to the front of the line.” It’s very hard to say no to that — right? — when you’re missing your plane. But can you explain what these iris scans are used for, and also fingerprints?

JOY BUOLAMWINI: Yeah. So, when you have systems like CLEAR, I want to make a distinction between you electing to use biometrics when you sign up for CLEAR or TSA PreCheck, where they might be looking at biometrics like your fingerprint, your iris or your face. This is different from what the TSA has stated in their roadmap, which is to go from pilot to requirement, so that the default option when you go to an airport is that you have to submit your face. This is what’s in their roadmap.

So, agency is absolutely important. The right to refusal is absolutely important. And you just pointed out a dynamic that so many people face. You just made it to the airport. Your flight’s about to go. And you’re given, I don’t know, the red pill or the blue pill, and you make a snap decision. And I’m really cautious about these snap decisions, because I worry about what I call convenient shackles. So, for the few seconds that you might save, or maybe minutes, etc., you now have given up very valuable face data. And we already have examples of data breaches with the government of travelers’ face data, so it’s not even hypothetical when we’re talking about the privacy risks that are here.

And the roadmap that the TSA has laid out also talks about then using that face data potentially with other government agencies. So we have to understand that it doesn’t just stop at the checkpoint. This is a pilot and a starting point that is going to move us towards more mass surveillance, if we don’t resist now, which is why we launched the website fly.ajl.org. That way, you can let your voice be heard. You can let your experiences be documented.

And it’s also not the case, for example, if your face has already been scanned, that there’s nothing that can be done. Meta, Facebook, deleted over 1 billion faceprints after a $650 million settlement for violating the Biometric Information Privacy Act of Illinois. This is to say, laws do make a difference.

And again, I do think the U.S. has an opportunity to lead when it comes to biometric protections, but we are going in the opposite direction right now. So I would call for the federal government to halt TSA’s pilot of domestic facial recognition technology at checkpoints. And if you’ve been subjected to it already, let us hear your story, fly.ajl.org.

NERMEEN SHAIKH: And, Joy, finally, we just have 30 seconds, but if a Bill of Rights is put in place with the stipulations that you outlined, do you see any benefits of artificial intelligence?

JOY BUOLAMWINI: Oh, absolutely. So, if we can have ethical AI systems that do help us, for example, with medical breakthroughs, I think that is something that is worth developing. So, I am not opposed to beneficial uses of AI, but we don’t have to build it in a harmful way. We can enjoy the promise while mitigating the perils.

AMY GOODMAN: Joy Buolamwini, we want to thank you so much for being with us, computer scientist, coding expert, founder of the Algorithmic Justice League. To see all our interviews on artificial intelligence, you can go to democracynow.org.

Democracy Now! is produced with Renée Feltz, Mike Burke, Deena Guzder, Messiah Rhodes, María Taracena, Tami Woronoff, Charina Nadura, Sam Alcoff, Tey-Marie Astudillo, John Hamilton, Robby Karran, Hany Massoud, Sonyi Lopez. Our executive director is Julie Crosby. I’m Amy Goodman, with Nermeen Shaikh.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
