By late 2018, Ali Bongo, the president of Gabon in Central Africa, hadn't appeared in public for many months after reportedly suffering a stroke.
So it came as a surprise when a seemingly official video appeared online on New Year's Day depicting Bongo, looking a bit disoriented, addressing the Gabonese people.
Bongo's rivals declared the video to be a deepfake, a sophisticated digital forgery. Bongo was dead or incapacitated, the opposition insisted. On Jan. 7, army officers launched an abortive coup.
Bongo wasn't dead. And the video wasn't a deepfake, just a weirdly staged recording of an unwell man struggling to appear healthier than he apparently was. But the coup plotters were right about one thing. It's entirely possible to fake a convincing video of a world leader… or anyone else, for that matter.
Deepfakes are a fairly new feature of social media. And they've already contributed to violence and instability on a national scale. Experts say the problem could get worse before it gets better.
The first deepfakes appeared in late 2017 on Reddit. An anonymous user calling themselves "deepfakes," a portmanteau of the artificial-intelligence technique "deep learning" and "fakes," imposed celebrities' faces on pornography.
Reddit eventually banned deepfakes, but the cat was out of the bag. "A wave of copycats replaced him on other platforms," Tom Van de Weghe, a Stanford University researcher, wrote in a May essay.
A deepfake video, still image, or audio recording is the product of a clever bit of coding called a generative adversarial network, or GAN. A GAN has two components: a discriminator and a generator. The discriminator is trying to tell fake media from real media. The generator is trying to fool the discriminator with increasingly realistic-seeming fakes.
The two components go back and forth at the speed of computer code. "The generator goes first and generates an image," Matt Guzdial, an assistant professor of computer science at the University of Alberta, told The Daily Beast. "This is tricky because initially the generator is going to produce random noise."
Now it's the discriminator's turn. It's sitting on a trove of authentic media plus the initial fakes from the generator. "It will guess which is which," Guzdial explained. "After guessing, it will be told which ones are real ones." Then the process starts all over again.
The trick for the generator is getting over the hump of just coughing up random noise. If the generator can break through the noise barrier and grok what the discriminator is seeing or hearing in its catalogue of authentic media, then the generator can really start learning from the back-and-forth.
After potentially hundreds of thousands of iterations, one of two things happens. "Eventually the generator starts generating things that look like real images," Guzdial said. Or it doesn't work. The same process applies to video, stills and audio.
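That back-and-forth can be sketched in miniature. The toy below is a hand-rolled illustration, not code from any actual deepfake tool: it uses one-dimensional numbers as stand-in "media," a generator that learns a single shift applied to random noise, and a two-parameter logistic-regression discriminator. Thousands of alternating updates pull the generator's fakes toward the real data, which is centered at 4.0.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Authentic media": numbers clustered around 4.0.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: shifts random noise by a learned offset b.
# Discriminator: D(x) = sigmoid(a*x + c), its guess that x is real.
a, c, b = 0.0, 0.0, 0.0
lr, n = 0.1, 128

for step in range(2000):
    xr = real_batch(n)            # real samples
    z = rng.normal(0.0, 1.0, n)   # random noise ("initially ... random noise")
    xf = z + b                    # the generator's current fakes

    # Discriminator turn: push D(real) toward 1 and D(fake) toward 0.
    dr, df = sigmoid(a * xr + c), sigmoid(a * xf + c)
    a += lr * np.mean((1 - dr) * xr - df * xf)
    c += lr * np.mean((1 - dr) - df)

    # Generator turn: nudge the shift toward where D says "real".
    df = sigmoid(a * xf + c)
    b += lr * np.mean(1 - df) * a

print(f"learned shift: {b:.2f} (real data centered at 4.0)")
```

Starting from a shift of zero, the generator's samples drift up until the discriminator can no longer reliably tell the two apart, which is the "things that look like real images" endpoint Guzdial describes, scaled down to a single number.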
More and more, it does work. Type "deepfake" into the search bar on Facebook and you can watch Tom Cruise starring in a Marvel superhero movie or Barack Obama ranting about, well, a Marvel superhero movie.
It's not hard to imagine political parties, intelligence agencies, militant groups and online trolls deploying deepfakes to discredit, frame or embarrass their rivals. The risk is such that computer scientists already are scrambling to devise deepfake countermeasures. "There's an arms race," Mark Riedl, an associate professor of computer science at Georgia Tech, told The Daily Beast.
That said, deploying a GAN is easier said than done. The underlying code isn't hard to find. But you need more than code. For starters, you need lots and lots of authentic media as grist for the generator. To fake a photo or video of a person, you need plenty of clear images of that person's face. To fake that person's voice, you'd need lots of clean recordings of them really speaking.
"Social media makes it easier to find good media as a starting point," Van de Weghe pointed out. But gathering all that raw material still takes time and effort.
Of course, in this hypothetical scenario you're a hacker or a shady political operative or a committed troll, so you're willing to put in the hours. You've gathered plenty of authentic media as a starting point, fired up your GAN and, after a few noisy failures, produced a convincing deepfake.
Now you need to do it again. And again. "It is likely that a malevolent entity would need more than a single image, audio or video to effectively sway opinion at scale," Polo Chau, who is also a Georgia Tech computer scientist, told The Daily Beast. After all, any one deepfake is likely to disappear in the churn of social media.
So you've produced a bunch of decent deepfakes, all targeting the same person in the same way. It's enough to really make a dent in your subject's reputation. Assuming, that is, no one flags your fakes for what they really are.
Sure, GANs are getting better all the time. But for now it's still easier to detect a fake than it is to produce a convincing one. "Today's GANs are good at faces," Riedl explained. But they get sloppy around complex, moving materials. Look closely at the subject's hair in a deepfake video. You should be able to spot telltale distortions.
It's possible to automate this scrutiny. Social-media companies and users could deploy discriminators to sift through media on a network, looking for the pixelation and other digital fingerprints of GAN-produced fakes. In September, Google released a trove of 3,000 old deepfakes, like targets at a shooting range, all in order to boost efforts to identify newer fakes.
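As a toy version of that kind of automated sifting (a simplified stand-in of my own, not Google's or anyone's real detector), the sketch below manufactures a "fake" carrying a blocky upsampling fingerprint, then flags it by measuring how much of the image's energy sits at high spatial frequencies, where a smooth image has almost none.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

# A smooth, "natural-looking" stand-in image: random noise with
# its high spatial frequencies filtered out.
noise = rng.normal(size=(N, N))
spec = np.fft.fft2(noise)
freqs = np.fft.fftfreq(N) * N          # integer frequency indices
kx, ky = np.meshgrid(freqs, freqs)
lowpass = (np.abs(kx) <= 6) & (np.abs(ky) <= 6)
real_img = np.real(np.fft.ifft2(spec * lowpass))

# A "fake" with a pixelation fingerprint: a low-res image blown up
# 4x by naive pixel replication, leaving blocky periodic artifacts.
fake_img = np.kron(rng.normal(size=(N // 4, N // 4)), np.ones((4, 4)))

def high_freq_fraction(img):
    """Share of the image's spectral energy above a frequency cutoff."""
    mag = np.abs(np.fft.fft2(img)) ** 2
    high = (np.abs(kx) > 10) | (np.abs(ky) > 10)
    return mag[high].sum() / mag.sum()

score_real = high_freq_fraction(real_img)
score_fake = high_freq_fraction(fake_img)
print(f"real: {score_real:.3f}  fake: {score_fake:.3f}")
```

Real detectors are far more sophisticated, and trained rather than hand-built, but they hunt for statistical fingerprints in much the same spirit: artifacts the forging process leaves behind that honest capture pipelines don't.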
Plus, there are methods of countering deepfakes that don't solely rely on code. Purveyors of deepfakes need social media accounts and unscrupulous media outlets to help spread the malicious content, Chau explained.
And that exposes the purveyor of the deepfake to social-media analysis, Riedl said. "You can analyze that and find out where these things originate from, look at the network around this person and find out if they're an artificial entity or linked to known groups."
"You can counter-program against that," Riedl added. Methods could include blocking or reporting the account pushing the deepfake. "That's a very promising way of not dealing with the tech directly."
Ultimately, experts said, the best defense against GANs and the deepfakes they produce is an educated and skeptical public that can view media in an informed context, considering its source, its proponents and detractors, and its potential for weaponization.
Obama chattering away about some Marvel villain? Certainly useful to fringe media outlets eager to portray the former president as silly. And thus probably fake. A smart social-media user should know that.
Skepticism is the key. But in a hyper-partisan media environment, where everyone is grasping for any confirmation of their existing biases, skepticism could be in short supply.
That, more than the GANs and deepfakes themselves, worries some experts. "The reason deepfakes are such a threat is not that the tech is so good," Riedl said. "It's that people want to be fooled."