What is a deepfake?
A deepfake is a digital rendition of someone doing something they never actually did.
You used to need a whole special-effects studio to mock up something like that. Soon, thanks to artificial intelligence (AI), all you’ll need will be an app.
There are already thousands of deepfakes on the internet; so many that, in January, Facebook banned them.
A lot of deepfakes are celebrity porn videos. You can also find deepfakes in which Boris Johnson endorses Jeremy Corbyn, and vice versa. In one deepfake, Barack Obama says Donald Trump is a ‘complete and utter dip-shit’. Oscar-winning director Jordan Peele and BuzzFeed CEO Jonah Peretti created that one as satire.
You can see how that technology might be abused: a single police mugshot being turned into a full-blown confession, say; or a video of a politician feasting on children.
California assemblyman Marc Berman proposed banning the technology after a deepfake emerged of House Speaker Nancy Pelosi, apparently drunk.
‘Deepfakes are a powerful and dangerous new technology that can be weaponised to sow misinformation and discord among an already hyperpartisan electorate,’ he said.
A British government White Paper on online harms says, ‘It is becoming even easier to create and disseminate false content and narratives.’
One of the biggest deepfakes appeared last year. A Russian AI team, led by a young man named Egor Zakharov, uploaded three ‘living portraits’ of Leonardo da Vinci’s iconic Mona Lisa. In the first animation, Mona narrows her eyes and moves her lips, as if divulging something she shouldn’t. In the second, she turns her head and widens her eyes as if relating an outrageous anecdote. In the third, she’s coy, confessional – and wholly convincing.
When I encountered the images on a morning stroll around the internet, my first reaction was ‘Mona Lisa is hot’. But then I felt a little queasy. If that’s what they can do with a single image of a dead countess … imagine what they could do with someone living now.
Getting rid of deepfakes won’t be easy. They are made possible by a relatively new development in AI known as a ‘generative adversarial network’, or GAN.
You set up two algorithms to play a game against each other: the ‘generative network’ comes up with a candidate – a synthetic face, say – and the ‘discriminative network’ judges whether it could pass for the real thing. The more rounds the two networks play between themselves, the more training data they create and the better the results become.
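The two-network game can be sketched in a toy one-dimensional example. Everything here is my own simplification for illustration – the ‘real data’ is just numbers drawn from a bell curve, the generator is a linear map, and the discriminator is a logistic classifier – not the architecture behind any actual deepfake system:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0      # generator parameters: fake = a*z + b
w, c = 0.0, 0.0      # discriminator parameters: score = sigmoid(w*x + c)
lr, batch = 0.03, 32

for step in range(3000):
    real = rng.normal(4.0, 0.5, batch)   # the data the generator must imitate
    z = rng.normal(0.0, 1.0, batch)      # random noise fed to the generator
    fake = a * z + b

    # Discriminator's turn: learn to score real samples high, fakes low.
    s_r = sigmoid(w * real + c)
    s_f = sigmoid(w * fake + c)
    dw = np.mean(-(1 - s_r) * real + s_f * fake)
    dc = np.mean(-(1 - s_r) + s_f)
    w -= lr * dw
    c -= lr * dc

    # Generator's turn: nudge its output so the discriminator is fooled
    # (the 'non-saturating' generator loss, -log D(fake)).
    s_f = sigmoid(w * fake + c)
    dx = -(1 - s_f) * w
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# After many rounds, generated samples should drift toward the real
# data's mean of 4.0 -- the generator has learned to 'pass'.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 1000) + b))
```

Real deepfake GANs play exactly this game, only with millions of parameters and images instead of single numbers – which is why each round of the contest makes both forger and detective stronger.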
GANs are what power FaceApp, a popular ‘game’ in which users upload pictures of themselves and see what they will look like in 20 years’ time (and meanwhile feed more data into the machine).
One of the scary things about deepfakes is that as soon as you come up with a reliable deepfake detection system, a GAN can work out how to get round it.
How worried should we be? Mr Zakharov seems relaxed, arguing (not particularly convincingly) that humanity managed to adapt to the initial problems caused by radio transmitters and smartphones: ‘The net effect of democratisation on the world has been positive, and mechanisms for stemming the negative effects have been developed.’ Have they, though?
As deepfake technology becomes more accessible, it’s likely that the membrane between the private imagination and the public image will become more porous. We will all have to become that bit more cynical and mistrustful about what we see.