How AI-generated content increased disinformation after Maduro's removal

SCOTT SIMON, HOST:

Immediately after the U.S. seized Nicolás Maduro, photos appeared on social media showing him in custody. Many of those photos were fake, generated by artificial intelligence. And as NPR's Geoff Brumfiel reports, they were just the start of a flood of AI content around Venezuela.

GEOFF BRUMFIEL, BYLINE: On the social media platform X, a video of crowds flooding the streets of Caracas got a million views.

(CHEERING)

BRUMFIEL: Another video reposted by Elon Musk himself showed Venezuelans celebrating. It got 5 million eyeballs.

(SOUNDBITE OF ARCHIVED RECORDING)

AI-GENERATED VOICE: (Speaking Spanish).

BRUMFIEL: Why aren't the media showing you these huge celebrations? asked one user. The answer? They didn't happen. The first video was generated by OpenAI's new platform, Sora, which makes fake videos from simple text prompts. The second came from a TikTok account that previously generated AI fan art for the cartoon "Powerpuff Girls," among other things. These videos are just the tip of the iceberg.

HANY FARID: I've never seen anything like this in terms of scale, speed and sophistication.

BRUMFIEL: Hany Farid is at the University of California, Berkeley, and studies AI-generated content. Such content has been a growing part of the social media ecosystem for years, but he says the speed and volume following the Venezuela strike are unprecedented.

FARID: It's hard to quantify, but I think there's probably more fake content coming out of this than real content.

BRUMFIEL: The source of that content is messy. Some of it comes from trolls, some from influencers seeking attention, clicks and money. Some is amplified by political figures. Farid says a lot of it wasn't really meant to deceive, like this video depicting the capture of Maduro.

(SOUNDBITE OF ARCHIVED RECORDING)

AI-GENERATED VOICE: Target secured. Copy. Heading home.

FARID: A lot of what you're seeing now is taking real-world events and then essentially, like, animating them, right? Nobody's got an image of the thing that happened? No problem. Make an image and then tell the story.

BRUMFIEL: But these AI-generated reenactments aren't telling the story - at least not the full story. Farid worries they're muddying the waters, and Darren Linvill agrees. He co-directs the Media Forensics Hub at Clemson University.

DARREN LINVILL: To create content that looks, at least without careful analysis, perfectly credible, perfectly reliable can warp real people's and even real voters' understanding of this incredibly important political moment.

BRUMFIEL: Despite these worries, Linvill says he thinks AI content will only accelerate because when news happens...

LINVILL: People want more content, and so I think the people that create this kind of AI slop are going to create more of it to fuel that beast.

BRUMFIEL: Users on social media want to see world events immediately, he says, even if the images aren't real.

Geoff Brumfiel, NPR News. Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Geoff Brumfiel works as a senior editor and correspondent on NPR's science desk. His editing duties include science and space, while his reporting focuses on the intersection of science and national security.