Abstract

Interestingly, art is full of reproductions. Some are replicas, like copies of the Mona Lisa, and others are fakes or forgeries, like the ‘Vermeers’ painted by Han van Meegeren that sold for $60 million (Kreuger and van Meegeren 2010). The distinction between real and fake rests on the concept of authenticity: is this artefact what it claims to be? The answer seems simple, but in reality things are complicated. Today, the paintings of the forger John Myatt are so famous that they are valued at up to $40,000 each as ‘genuine fakes’ (Furlong 1986). So technically, they are not what they claim to be, but they were authentically painted by him and not by any other forger. And they are beautiful, “a bit as if one were to utter a beautiful lie, not any ordinary lie.” According to research by the cybersecurity company Deeptrace, the number of deepfake videos on the internet doubled in just nine months, from 7,964 in December 2018 to 14,698. Of these deepfakes, 96% were pornographic, often with the face of a celebrity morphed onto the body of an adult actor engaged in sexual activity. Accordingly, Facebook has invested $10M in a research effort to produce a database and benchmark for detecting deepfakes and is partnering with top research institutions such as MIT, UC Berkeley, and Cornell Tech. It is clear that deepfakes are alarming and that firms like Facebook are doing something about them, but what are deepfakes, and why are they alarming? With users increasingly concentrated on social media and the means of producing deepfakes democratized, the web is seeing a growing propagation of hyper-realistic deepfakes made by people with no technical understanding of machine learning. Their increased realism and scale are largely due to improvements in the organization of the datasets fed into machine learning algorithms, as well as the introduction of Generative Adversarial Networks (GANs).
When truths are indistinguishable from falsehoods, we put our democracy, our national security, and public safety at risk. In the world of the ‘perfect’ deepfake, the waters of fact and fiction are muddied, creating a fog in which we question what is real and what is fake. How might deepfakes make us question our national security in times of war? Deepfakes disseminated by adversaries could show our soldiers killing civilians, invoking an environment of distrust and instability.

Introduction

Origin of Deepfake

Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Deep learning architectures such as deep neural networks, deep belief networks, recurrent neural networks, and convolutional neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection, and board game programs, where they have produced results comparable to, and in some cases surpassing, human expert performance. Deep learning has been successfully adopted and applied to solve complex problems ranging from big data analytics to computer vision and human-level control. It has also been used to create software that threatens privacy, democracy, and national security. One such deep-learning-powered application to emerge recently is the ‘deepfake’. Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. The term ‘deepfake’, a blend of “deep learning” and “fake”, refers to a technique that superimposes face images of a source person onto a video of a target person, creating a video of the target doing or saying the things the source does. Deep learning models such as autoencoders and generative adversarial networks have been widely applied in the computer vision domain to solve various problems. These same models and algorithms are used by deepfakes to examine the facial expressions and movements of one person and then synthesize facial images of another person making analogous expressions and movements. Creating a deepfake normally requires a large amount of image and video data to train models to produce photo-realistic images and videos.
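The autoencoder-based face swap described above can be sketched in a toy, untrained form. The idea is that two autoencoders share one encoder but keep separate decoders, one per identity; after training, encoding a frame of person A and decoding it with person B's decoder produces the swap. All array sizes and weight names below are illustrative assumptions, not taken from any particular deepfake tool, and no training is performed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale frames for persons A and B.
faces_a = rng.random((32, 64))
faces_b = rng.random((32, 64))

# One shared encoder plus one decoder per identity (hypothetical sizes).
# Training both reconstruction tasks through the shared encoder forces a
# common latent space of expression and pose; the decoders learn identity.
W_enc = rng.normal(0.0, 0.1, (64, 16))
W_dec_a = rng.normal(0.0, 0.1, (16, 64))
W_dec_b = rng.normal(0.0, 0.1, (16, 64))

def encode(x):
    # Shared latent representation (expression/pose), squashed by tanh.
    return np.tanh(x @ W_enc)

def decode(z, w_dec):
    # Identity-specific reconstruction back to image space.
    return z @ w_dec

# The core swap operation: encode a frame of person A, then decode it
# with person B's decoder, yielding B's face with A's expression.
latent = encode(faces_a[:1])
swapped = decode(latent, W_dec_b)
print(swapped.shape)  # (1, 64) -- one reconstructed 8x8 frame, flattened
```

In a real pipeline the weights would be multi-layer convolutional networks trained by minimizing reconstruction error on each identity's footage; the decoder swap at inference time is what makes it a deepfake rather than an ordinary autoencoder.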
Celebrities and public figures are easy targets for deepfakes, as many videos and images of them are readily available online. Deepfakes were first used to swap the faces of celebrities or politicians onto bodies in pornographic images and videos; the first deepfake video circulated in 2017, with the face of a celebrity swapped onto the body of a porn star. Deepfake methods are a threat to world security when employed to create videos of world leaders giving fake speeches for falsification purposes: deepfakes can be used to exacerbate political or religious tensions, create governmental rifts, fool the public, affect the results of election campaigns, or create chaos in financial markets. They can also be used to generate fake satellite images of the Earth containing objects that do not exist, such as a fake bridge across a river, to confuse military analysts; such imagery could mislead troops guided to cross the bridge in battle. Deepfakes have positive uses too, such as recreating the voices of those who have lost theirs or updating episodes of films without reshooting them. However, malicious uses of deepfakes largely dominate the positives. To address the threat of face-swapping technology, the United States Defense Advanced Research Projects Agency (DARPA) initiated a research scheme in media forensics (named Media Forensics, or MediFor) to accelerate the development of methods for detecting fake digital visual media.