MARY REICHARD, HOST: Coming up next on The World and Everything in It: Manipulated videos.
By now, most Americans have heard the term “fake news.” That is, fictional stories posted by what seem to be legitimate sources. They gain credibility because of wide circulation on the internet.
NICK EICHER, HOST: Faking a news story is one thing, but what about faking a video?
Artificial intelligence is making it possible to create realistic videos in which famous people appear to be saying things they actually never said.
The potential to turn these videos into weapons has the U.S. Defense Department concerned. DOD calls them “deepfake” videos.
REICHARD: This summer, the Defense Advanced Research Projects Agency is sponsoring a competition to develop tools for detecting and countering these realistic, deepfake videos. We’re going to talk about that agency quite a bit, so when you hear me say “DARPA,” just know that’s the acronym for Defense Advanced Research Projects Agency. DARPA’s just easier on the ears.
WORLD Radio technology reporter Michael Cochrane is here now to talk to us about the technology behind fake videos and some of its implications.
Michael, why is the Pentagon so concerned about fake videos?
MICHAEL COCHRANE, REPORTER: Defense officials are realizing that the artificial intelligence technology that makes deepfakes possible is rapidly becoming more accessible and easier to use, even for relatively unskilled users. While many video manipulations are done for fun or artistic purposes, they recognize that these media products can be used for adversarial purposes, including propaganda or misinformation campaigns.
Tell us about this competition that DARPA is running this summer.
COCHRANE: DARPA is inviting leading experts in digital forensics to compete to produce the most convincing computer-generated fake video, audio, and imagery, and then try to develop ways to identify such counterfeit media automatically. DARPA is really concerned about an emerging AI technique that uses what are called generative adversarial networks. They call them GANs for short. And GANs could make fake videos almost impossible to detect.
OK, can you give us a layperson’s explanation of GANs, again, generative adversarial networks, and how they work?
COCHRANE: Sure. Most artificial intelligence, or machine learning, algorithms learn to recognize patterns in a data set, such as a huge set of images or videos. For example, last year a research team at the University of Washington used a neural network to analyze millions of frames of video of President Obama, and it learned which mouth shapes were linked to various sounds. They then used the network to create a fake video of him saying things he may have said years before. In a GAN, however, there’s a second network called the “critic,” which tries to learn the difference between real and faked examples. Feedback from the critic to the first network helps it produce even more realistic examples.
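That two-network tug-of-war can be sketched in a few lines of code. This is a minimal, illustrative toy, not anything from DARPA or the research described above: the “generator” is just a shift-and-scale of random noise trying to mimic one-dimensional data, and the “critic” is a simple logistic classifier. All the names, numbers, and the setup here are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "GAN": real data comes from N(4, 1). The generator g(z) = a*z + b
# maps standard-normal noise toward the data, while the critic
# D(x) = sigmoid(w*x + c) tries to tell real samples from generated ones.
# This is a hand-rolled sketch for illustration only.

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

a, b = 1.0, 0.0        # generator parameters (starts far from the data)
w, c = 0.1, 0.0        # critic parameters
lr, batch = 0.02, 64

for step in range(3000):
    # --- critic update: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * -np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c -= lr * -np.mean((1 - d_real) - d_fake)

    # --- generator update: use the critic's feedback to look more real ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * -np.mean((1 - d_fake) * w * z)
    b -= lr * -np.mean((1 - d_fake) * w)

# After training, the generator's samples should cluster near the real mean.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(round(fake_mean, 2))
```

The point of the sketch is the feedback loop: the generator never sees the real data directly, only the critic’s verdicts, yet its output drifts toward the real distribution, which is exactly why GAN-made fakes get harder to distinguish as training continues.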
So, is it becoming impossible to detect fake videos?
COCHRANE: It’s still possible to detect forgeries. Experts typically examine digital files for signs that someone has spliced together images or videos. They can also look at the lighting and other physical aspects of the images to see if something doesn’t look right. The most difficult thing to do automatically is to catch logical inconsistencies, such as an incorrect background for the supposed location. But experts are saying it’s increasingly difficult to know if a video has been machine generated.
But even if you could eventually determine a video to be a deepfake, if it’s been out on the internet, the damage has already been done. Right?
COCHRANE: That’s absolutely true! Recently Florida Senator Marco Rubio expressed concerns about this. He’s a member of the Senate Intelligence Committee, and he worried that a terrorist group like Hezbollah could produce deepfake videos of Israeli soldiers committing atrocities against Palestinians with the intent of sparking riots, violence, even a war. Or imagine a fake video of a politician saying something that was never spoken, publicized on the eve of an election, with no time to correct the record. Here’s Rubio at an intelligence hearing last month (audio courtesy of C-SPAN):
RUBIO: I believe that this is the next wave of attacks against America and western democracies: the ability to produce fake videos that can only be determined to be fake after extensive analytical analysis, and by then the election’s over and millions of Americans have seen an image that they want to believe anyway, because of their preconceived bias against that individual.
Are there any other steps that can be taken to deal with this rise in deepfake videos?
COCHRANE: Senator Rubio has stated that an important first step would be to raise public awareness of the problem, especially among media outlets, so that hard-to-believe or sensational videos are carefully checked before being disseminated. But many commentators believe the tech companies themselves bear some responsibility by too frequently developing technologies without considering the ethical implications. Sort of the “Can we?” versus “Should we?” approach. For example, Facebook initiated a project that would animate the profile photos of its users because they thought it was a cool idea. They never considered the possibility that their face-manipulation software could be misused.
Unintended consequences. Michael Cochrane covers science and technology for WORLD. Thank you, Michael!
COCHRANE: You’re very welcome, Mary.