Tech Giants Team Up To Tackle The Ethics Of Artificial Intelligence

(Image credit: Science Photo Library RM/Getty Images)

Artificial intelligence is one of those tech terms that seems to inevitably conjure up images (and jokes) of computer overlords running sci-fi dystopias — or, more recently, robots taking over human jobs.

But AI is already here: It's powering your voice-activated digital personal assistants and Web searches, guiding automated features on your car and translating foreign texts, detecting your friends in photos you post on social media and filtering your spam.

Yet as practical uses of AI have exploded in recent years, one critical element remains missing: an industrywide set of ethics standards or best practices to guide the growing field.

Now, the industry heavyweights are partnering to fill that gap. Called the Partnership on Artificial Intelligence to Benefit People and Society, the group consists of Amazon, Facebook, Google, Microsoft and IBM. Apple is also in talks to join.

Executives from four of the five founding members of the Partnership on AI (from left): Eric Horvitz of Microsoft, Francesca Rossi of IBM, Yann LeCun of Facebook and Mustafa Suleyman of Google's DeepMind. (Jon Simon/Feature Photo Service for IBM)

"We've been talking about this for many years — informally," says IBM's Vice President of Cognitive Computing Guruduth Banavar. "Finally we have this opportunity to formalize (the conversation)."

The group's goal is to create the first industry-led consortium, one that would also include academic and nonprofit researchers, to lead the effort to ensure AI's trustworthiness: driving research toward technologies that are ethical, secure and reliable — that help rather than hurt — while also helping to defuse fears and misperceptions about the technology.

"We plan to discuss, we plan to publish, we plan to also potentially sponsor some research projects that dive into specific issues," Banavar says, "but foremost, this is a platform for open discussion across industry."

In a way, the speed with which AI has taken off over the past few years snuck up on many of us. AI scientists have long predicted the surge, but its timing was a moving target. Now, machines are besting humans at translation and texting. Competition is growing among voice-activated assistants: Apple's Siri, Amazon's Alexa, Microsoft's Cortana. IBM's Watson supercomputer is writing recipes and helping doctors treat cancer. Google's DeepMind has defeated a human champion at the complex Chinese board game Go. Algorithms are getting into art.

"There have been situations already where companies had to essentially make up their own best practices," says Subbarao Kambhampati, a computer science professor at Arizona State University and president of the Association for the Advancement of Artificial Intelligence.

Researchers talk, consult and confer, and individual companies working on AI have formed their own ethics boards and committees. But industrywide coordination has not existed.

"These days, the AI technologies are touching so many aspects of our lives that people are making their own decisions and going forward," Kambhampati says. "And most of the times, things work out well, but sometimes they haven't worked out all that well."

Like the time Microsoft made a Twitter chatbot that learned from its conversations and quickly derailed into offensive chaos. Or the time Tesla's Autopilot failed to recognize a white tractor-trailer against a bright sky, resulting in the driver's death. Or when Google's ad algorithms faced accusations of racism and sexism. The list goes on.

"When you train a learning algorithm on a bunch of data, then it will find a pattern that is in that data. This has been known, obviously, understood by everybody within AI," Kambhampati says.

"But the fact that the impact of that may be unintended stereotyping, unintended discrimination is something that has become much more of an issue right now," he says, "because these technologies are actually making very important decisions in our day-to-day life."

Kambhampati, whose scientific society AAAI will also have a seat at the Partnership on AI, hopes the new group will focus on exactly that — current, short-term practical concerns, rather than the distant doomsday scenarios typically explored by ethicists.

IBM's Banavar says the hope for the group is certainly to stick around for the long haul. But its goals are indeed near-term. He expects the Partnership on AI to create an education forum — online and in real life — for resources on AI. A specific plan for the group is expected to be published in the next few weeks, followed later by an event to kick-start the collaboration.

Of course, he acknowledges that the group's work won't stop any rogue uses of AI — its purpose, in effect, is advisory. But Banavar hopes that work will make its way into educational curricula around the world and inspire new generations of AI researchers.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Alina Selyukh is a business correspondent at NPR, where she follows the path of the retail and tech industries, tracking how America's biggest companies are influencing the way we spend our time, money, and energy.