2019: The Beginning of the Era of the Deepfake?

How Digital Manipulation can be a Threat to Democracy and Human Rights

By Peggy Whitfield

In early November, the prestigious English dictionary Collins named “deepfake” one of its new words of the year for 2019; it’s certainly one of the buzzwords of the IGF conference in Berlin. But what exactly are deepfakes, and what potential do they have to impact society and democracy? The NATO Strategic Communications Centre of Excellence released a report entitled “The Role of Deepfakes in Malign Influence Campaigns” earlier this month, in which the term was defined as “ … a portmanteau of ‘deep learning’ and ‘fake’, referring to the method used and the result obtained. Although most commonly associated in popular consciousness with video, a deepfake is any digital product – audio, video, or still image – created artificially with machine learning methods.” At the most basic level, deepfakes are created by deep learning algorithms made up of two artificial neural networks: one produces data, while the other judges whether it is realistic and believable.
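For readers curious what that two-network setup looks like in practice, the sketch below is a minimal, purely illustrative example (not taken from the NATO report) of a generative adversarial pair written in PyTorch. It trains on toy numbers rather than faces or voices; every name, layer size, and parameter here is an assumption chosen for brevity, not a recipe from any real deepfake tool.

```python
# Minimal sketch of the two-network idea: a "generator" fabricates samples,
# a "discriminator" judges whether they look real. Toy 1-D data stands in
# for the images or audio a real deepfake system would use.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples near 3.0
    noise = torch.randn(64, 8)
    fake = generator(noise)                 # fabricated samples

    # Discriminator: learn to label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to make the discriminator call its output "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should drift towards the real data's mean.
print(generator(torch.randn(1000, 8)).mean().item())
```

The adversarial back-and-forth is the point: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones, which is why the output of mature systems can fool a casual viewer.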

Whilst deepfakes have been on the radar of governments and the tech industry for some time, the concept only really hit public consciousness when a YouTube channel run by a mysterious, anonymous creator called Ctrl Shift Face – who claims to have worked in Hollywood VFX for many years – started releasing deepfake videos of famous faces transplanted onto other famous bodies. One of the most widely shared videos is the infamous “Here’s Johnny” scene from the classic horror film ‘The Shining’. The film stars Jack Nicholson, but in this deepfake the actor’s face has been replaced with that of Jim Carrey. To a casual observer who is not familiar with the film, it is very convincing.

Ctrl Shift Face’s videos are clearly labelled as deepfakes, but the concern is that malicious actors – be they state-sponsored or just nefarious individuals – may use the technology in more harmful ways than entertaining people on social media. This has caused many people to raise worrying questions, as Marianne Franklin of the Internet Rights and Principles Coalition explains:

   “All this technology has been deployed and sold and downloaded without any kind of consideration about whether it is even acceptable to have in the first place. The trouble is the technology and what can be done technically is galloping way ahead of legislation, political representatives, high courts, middle management and universities. That is why the use of deepfakes in revenge porn is so disturbing.”

Franklin’s view is supported by legal realities; in a world where revenge porn is ubiquitous and often goes unpunished – it is not a criminal offence in most of Europe, let alone the rest of the world – technology has already moved on. Reddit hosts many communities where people create deepfakes, sometimes for innocent purposes, but after public pressure the platform has also taken down subreddits devoted to creating pornography using the faces of celebrities and ex-partners without their consent.

But what about the use of deepfakes in a political context? It is possible, of course, that they have already been used and we simply did not notice. Thus far, there has been only one verified use of a deepfake in political campaigning: in Belgium in 2018, the Flemish socialist party sp.a published a deepfake video of US President Donald Trump jeering at Belgium for staying in the Paris Climate Accord. It seems to have been a deliberately poor attempt, as ‘Trump’s’ final words are “We all know that climate change is fake, just like this video.”

More worryingly, there are as yet unsubstantiated claims that a deepfake was used in Gabon late last year and may have triggered an attempted coup. President Ali Bongo had not been seen in public for a while when a video of a New Year address was suddenly broadcast; many people were convinced it was a fake, and the military staged an unsuccessful attempt to overthrow Ali Bongo a week later. These two examples clearly demonstrate that deepfakes are not the technology of the future, but today’s reality.

Whilst the aforementioned examples may seem to have had dramatic consequences, video deepfakes are the least of our worries, according to Sebastian Bay, a Senior Expert at NATO StratCom and project manager of the organisation’s recent report on deepfakes. Apparently, audio is more concerning:

   “Imagine a world where you pick up the phone and the voice and the video call that you see, you cannot 100 percent know that that is really the person you are talking to. So you are talking to your mum, it sounds like your mum, it looks like your mum, but it could be a person just using the ‘mum filter’. That is a very scary future.”

Bay goes on to explain that people could use this technology to mimic journalists and extort sensitive information from sources, or to defraud individuals on a massive scale. The deepfake report also examines the effect that deepfake text may have on democracy, with trolling, especially by authoritarian states, now able to achieve mass saturation:

   “With the development of ‘fake text’, trolls could finally be replaced by automated systems for all but the most sophisticated interactions with real humans. This new automated astroturfing could result in a massive scaling up of operations, generating an entire fake public to influence political decisions.”

If we think Russian troll farms are an issue, it seems we may be at the very beginning of a terrifying journey down the disinformation rabbit hole. Damian Tambini, Associate Professor at the Department of Media and Communications at the London School of Economics, echoed these warnings at a disinformation seminar at the IGF earlier this week. He stated:

   “This is a war on democracies by non-democracies. It is asymmetrical. If one of the participants in this conflict has genuine popular sovereignty, then it matters if the opinions of its citizens can be manipulated.”

Such developments in the disinformation war could take fake news to a whole different level. How could any of us believe anything we see on the Internet? But some argue that, whether or not deepfakes are used, effective deception can still be carried out in more technologically basic ways without AI, as Ctrl Shift Face argued in a recent interview with the publication ‘Final Boss’:

   “You don’t need the cutting edge deepfake tech to fool people. All it takes is a Facebook post and [the] less intelligent majority will spread it like a wildfire because it confirms their bias … it’s happening all over the world and huge companies, mainly Facebook (which is the main platform for this) should find a solution to this and filter this stuff out.”

The recent viral footage of US politician Nancy Pelosi, seemingly drunk but actually edited and slowed down so that her answers seemed slurred and disjointed – which Facebook refused to remove – seems to bear this argument out. Civil society was quick to condemn the video and the platform for allowing it to be widely shared; one Instagram user, bill_posters_uk, was so incensed that he created his own deepfake video of Facebook CEO Mark Zuckerberg talking about his plans for global domination, which proved to be wildly popular.

So it seems that deepfakes may just be part of the disinformation deluge that big tech – and thus the world – faces. Until platforms can be held accountable for the content they allow to be published and shared, we all remain at risk of being duped by or falling victim to deepfakes, as the NATO StratCom report states:

   “The main objective of entities like Facebook and Twitter is generating profits rather than defending Western political systems … the platforms remain unwilling or unable to address the fundamental problem of malign influence overall, even in its most simplistic characterisation as ‘fake news’.”

#deepfakes #metamanipulation #democracy #disinformation #bigtech #digitalrights #humanrights
