
Artificial Intelligence Can Make Our Lives Easier, But It Can Also Go Awry

AILSA CHANG, HOST:

Even if you can advance technology - say, create the next greatest app or build a robot that fights wars - the question is, should you? In this month's All Tech Considered, we're looking at some of the ethical dilemmas raised by technological innovation. If technology is going to be used for some of the most sensitive purposes, what are the guardrails to make sure that technology does no harm?

(SOUNDBITE OF ULRICH SCHNAUSS' "NOTHING HAPPENS IN JUNE")

CHANG: There is a group thinking about this very issue as it relates to the spread of artificial intelligence. It's called the AI Now Institute. It's dedicated to understanding how artificial intelligence can reshape our everyday lives. One of the group's co-founders is Kate Crawford. Welcome.

KATE CRAWFORD: Hi, Ailsa. Lovely to talk with you.

CHANG: Lovely to talk to you. So what are some real-world examples of how AI has gone badly wrong?

CRAWFORD: I mean, most people know, of course, the issues around Facebook with race, gender and age discrimination, particularly in terms of everything from job ads to housing ads. But it goes even deeper than that. We've seen it in facial recognition systems that work less well for darker-skinned women. We've seen it in voice recognition systems that have been trained to detect male-sounding voices better than female-sounding voices.

And there are even emotion detection tools now that assign more negative emotions to black men's faces than to white men's faces, for example. So there are really quite substantial problems across the systems that we sort of fit under this loose heading of AI.
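
One way disparities like these are typically surfaced - sketched here in Python with invented numbers, not results from any real system - is to compute an error metric separately for each demographic group and compare:

# Hypothetical sketch: measure a recognition system's accuracy per group.
# The groups, labels and predictions below are invented for illustration only.
from collections import defaultdict

# (group, true_match, predicted_match) for a handful of test images
results = [
    ("lighter-skinned men",  1, 1), ("lighter-skinned men",  1, 1),
    ("lighter-skinned men",  0, 0), ("lighter-skinned men",  1, 1),
    ("darker-skinned women", 1, 0), ("darker-skinned women", 1, 1),
    ("darker-skinned women", 0, 1), ("darker-skinned women", 1, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")

# A large gap between the per-group accuracies is exactly the kind of
# disparity that audits of commercial systems have reported.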

CHANG: And how are companies addressing these concerns now? I mean, do you think companies are doing enough?

CRAWFORD: Well, it's a really hard problem. What's interesting is that, just in the last couple of years, companies are really starting to take the scale of these issues seriously. We're starting to see groups focused on responsibility in AI. These groups are very much focused on technical solutions. So can they tweak the data sets? Can they tweak the algorithms to try and produce less biased results? But in some cases, that just isn't enough.

I mean, we have a really chilling example from Amazon, when they tried to create an AI hiring tool that was designed to scan resumes and decide who should get an interview. And what they found was that, ultimately, even if you had the word "women" on your CV, let alone if you were actually a woman, your CV was getting downrated.

CHANG: No way.

CRAWFORD: This is exactly true. It has been widely reported. And engineers tried to fix this, of course, by tweaking the algorithm and the data set. But the data set itself was very skewed towards male engineers. Just have a look at Amazon's workforce and you'll see how bad that skew is. So in the end, they found that they could not fix this problem, and they ended up abandoning the tool. They do not use it. So these problems, in some ways, are very hard to fix technically. And that's why we have to look at them much more broadly.
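
A minimal, invented sketch of the mechanism Crawford describes - a toy resume classifier trained on skewed historical outcomes, not Amazon's actual system or data - shows how a model can learn to penalize the token "women", and why deleting that one word doesn't remove the underlying skew:

# Hypothetical illustration only: all resumes and hiring labels are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Skewed "historical" outcomes: in this invented data, resumes mentioning
# women's organizations were never marked as hires.
resumes = [
    "software engineer java distributed systems",          # hired
    "software engineer python machine learning",           # hired
    "chess club captain software engineer systems",        # hired
    "software engineer women in tech mentor python",       # rejected
    "women's coding society lead software engineer java",  # rejected
    "software engineer systems programming python",        # hired
]
hired = [1, 1, 1, 0, 0, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for "women" comes out negative: the model has
# encoded the skew in its training data, not anything about candidate quality.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", round(weights["women"], 3))

# Stripping the word "women" from future resumes does not fix this: correlated
# proxy tokens (here, "society" and "mentor" appear only in rejected resumes)
# still carry the same signal, which is why purely technical tweaks fall short.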

CHANG: So do you think the government should step in to try to help companies avoid injecting AI with these biases?

CRAWFORD: So I think certainly there are some technologies that really do need very stringent regulation. In the AI Now Report for 2018, which came out in December, we specifically looked at facial recognition and so-called affect recognition. That's kind of like a subclass of facial recognition that claims to detect things like your true personality, your inner feelings and even your mental health based on images or video of your face.

Now, I would say that some of these claims are just not backed by very robust scientific evidence. You know, the science has really been questioned. And so I think linking these types of emotion and affect recognition tools to things like hiring or access to insurance or policing, for example, creates really concerning risks.

CHANG: You know, for years, the government, at least here in the United States, has kind of taken this hands-off approach towards Silicon Valley because there was this fear that overregulation would suppress innovation. So could you take government regulation too far, where you end up tipping the balance towards suppressing innovation?

CRAWFORD: In some ways, I think it's a false choice. We can actually have innovation and safety at the same time. And certainly we don't want to be rolling out systems that discriminate. The philosophy of moving fast and breaking things has really had its day. I mean, all of these systems are so profound that I think it's really legitimate that people want to know that they're safe and non-discriminatory.

CHANG: Kate Crawford is the co-founder of the AI Now Institute at New York University. Thank you very much for joining us.

CRAWFORD: It's a pleasure.

Transcript provided by NPR, Copyright NPR.