
Calm down, ‘deepfake’ news is not here yet

September 13, 2019 - 5:25pm

It’s easy to sensationalise the threat of “deepfake” images. The fear is that computer-generated audio and video will soon be so convincing and so ubiquitous that the distinction between online truth and fiction will collapse altogether.

Thank goodness, then, for a cool-headed new report from the Centre for Data Ethics and Innovation (CDEI). While taking the issue seriously, the authors inject a note of, um, reality:

Deepfakes are likely to become more sophisticated over time. For now, however, high quality content remains difficult to create, requiring specialist skills and professional software that is not yet widely available. We are yet to see a convincing deepfake of a politician that could distort public discourse.
- CDEI Snapshot Paper

Of course, the technology is moving forward. We may be OK at the moment, but in ten years’ time will the news be overwhelmed with fake photographs and footage? Should we be legislating now to outlaw the impending flood of bogus pixels?

Legislative moves have already been made in the US Congress, but the CDEI report is sceptical:

Attempts to legislate against deepfakes may prove ineffective, given it is very difficult to identify where doctored content originates. Legislation could also threaten beneficial uses of visual and audio manipulation.
- CDEI Snapshot Paper

In fact, the whole issue of provenance is likely to be our first and most effective line of defence. If no one credible is willing to put their name to an image, then it won’t be seen as authentic.

It’s worth remembering that most news consists not of images, but words – and rendered as text these are supremely easy to fake and disseminate online. See, I only have to type the following words – “in a statement issued earlier today, the Prime Minister, Boris Johnson, said: ‘Wibble, wibble, wibble. I’m a little teapot. Wibble'” – and I’ve faked a quote, using a style cribbed from legitimate reports. I could tweet it out too, but it wouldn’t be believed because it’s obviously unbelievable. Similarly, a deepfake image of, say, the Prime Minister sticking up two fingers behind Angela Merkel’s back is not going to be believed (I hope).

But what about something much closer to the bounds of possibility – for example, a clip of a politician swearing at a junior member of staff?

Again, we need to remember that stories of this kind can be easily concocted using words – and yet the news isn’t full of superficially plausible but completely made-up reports. That’s because credible journalists rely on checks like double sourcing to ensure they’re not constantly being played by fraudsters and fantasists.

Admittedly, it’s not an absolutely foolproof system – horrible mistakes are sometimes made. But, leaving aside the more subtle distortions like bias, spin and poor analysis, our public discourse mostly proceeds on the basis of things that have actually happened. As for deepfake images, these may make a bigger immediate splash than mere words, but they’ll generally provide more in the way of contextual detail for factual verification (and, perhaps, digitally detectable traces of image manipulation).

Of course, if we lose the authentication service that quality journalism provides, then we’re in trouble – but that would be the case with or without deepfake imagery.


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.

