As artificial intelligence has developed over the years, it was perhaps inevitable that someone would find nefarious use cases for the technology. DeepFakes are one of the ways AI can be used against individuals, or against society as a whole.
Though photo manipulation has been around since the 19th century, the introduction and rapid advancement of digital video technology has, in turn, fuelled the development of video manipulation.
So What Exactly Are DeepFakes?
DeepFake videos, as they have come to be known in their more sinister form, are very well crafted fake videos of people saying and/or doing things that are uncharacteristic of them or would otherwise discredit them. In most instances, these DeepFakes have taken the form of celebrities (mostly female) in pornographic videos. However, manipulated videos of politicians have also surfaced.
According to AI firm Deeptrace, no fewer than 15,000 DeepFakes were circulating the internet by September 2019, a nearly twofold increase over the preceding nine months.
How Do They Work?
DeepFakes use a form of machine learning known as deep learning to produce astonishingly believable images, sounds and videos of events that never happened.
The first step is to feed a great number of images of the two people through an AI algorithm known as an encoder. The encoder learns the physical similarities and differences between the two faces and compresses each image down to a shared set of common features.
The next step involves another AI algorithm called a decoder, which is trained to reconstruct faces from the compressed representation produced by the encoder. Because two individuals’ faces are involved, two decoders are needed, each trained specifically to recover one person’s face.
To superimpose one subject’s face onto the other’s, you simply feed the encoded images into the opposite decoder. That is how the most basic DeepFakes are made.
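The shared-encoder, per-identity-decoder pipeline described above can be sketched in plain NumPy. Real DeepFake tools use deep convolutional networks trained on thousands of photographs; here, purely for illustration, random vectors stand in for face images and linear maps stand in for the networks, so every name and dimension below is an assumption, not a real API.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM, CODE_DIM, N_IMAGES = 64, 16, 200

# Stand-in "face images" for two people (flattened pixel vectors).
faces_a = rng.normal(size=(N_IMAGES, FACE_DIM))
faces_b = rng.normal(size=(N_IMAGES, FACE_DIM))

# One shared encoder compresses both identities into the same code space.
encoder = rng.normal(size=(FACE_DIM, CODE_DIM))

def encode(faces):
    return faces @ encoder

def fit_decoder(faces):
    # One decoder per identity, fitted by least squares to reconstruct
    # that person's faces from the shared codes.
    codes = encode(faces)
    decoder, *_ = np.linalg.lstsq(codes, faces, rcond=None)
    return decoder

decoder_a = fit_decoder(faces_a)
decoder_b = fit_decoder(faces_b)

# The swap: encode person A's face, then decode with B's decoder,
# producing "A's expression rendered as B".
fake = encode(faces_a[:1]) @ decoder_b
print(fake.shape)  # → (1, 64)
```

The key design point the sketch preserves is that the encoder is shared while the decoders are not: the code space captures expression and pose common to both people, and each decoder paints one identity back onto it.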
The more advanced and harder to spot DeepFake videos use a GAN (Generative Adversarial Network). This works by putting two algorithms up against each other. The first is the generator, which is trained to create a synthetic image of whoever is being faked. The second is the discriminator, which is trained to scrutinise images made by the generator. This process is then repeated until the generator produces realistic copies.
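The adversarial loop can be sketched with stand-ins as well. Real GANs train two deep networks against each other by gradient descent; in this toy version, assumed purely for illustration, the “generator” is a single mean parameter updated by random search and the “discriminator” is a threshold refitted each round.

```python
import numpy as np

rng = np.random.default_rng(1)
REAL_MEAN = 4.0  # the "real data" distribution the generator must mimic

def real_batch(n=256):
    return rng.normal(REAL_MEAN, 1.0, n)

def fake_batch(mu, n=256):
    return rng.normal(mu, 1.0, n)

def fit_discriminator(real, fake):
    # Discriminator step: place a threshold halfway between the two
    # batch means; scores above zero mean "looks real".
    t = (real.mean() + fake.mean()) / 2.0
    side = 1.0 if real.mean() > fake.mean() else -1.0
    return lambda x: side * (x - t)  # realness score

def fooling_score(disc, fake):
    # Average realness the discriminator assigns to fakes.
    return disc(fake).mean()

mu = -3.0  # generator starts far from the real distribution
for _ in range(200):
    disc = fit_discriminator(real_batch(), fake_batch(mu))
    # Generator step: keep a perturbed mu only if it fools the
    # (frozen) discriminator more convincingly.
    candidate = mu + rng.normal(0.0, 0.3)
    if fooling_score(disc, fake_batch(candidate)) > fooling_score(disc, fake_batch(mu)):
        mu = candidate

print(round(mu, 1))  # mu drifts close to REAL_MEAN
```

Even in this crude form, the adversarial dynamic described above appears: each side’s improvement forces the other to improve, until the generator’s output is statistically hard to tell apart from the real data.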
How Is This a Danger?
Until recently, the biggest danger posed by DeepFakes was mainly something that celebrities and people with vindictive ex-lovers lost sleep over. Though DeepFakes are unlikely to trigger a third world war, they can be used as tools for harassment, intimidation, fraud, misinformation and destabilisation.
One particularly disturbing consequence of DeepFakes is the rise of what has come to be known as the “virtual defence”. Because high-quality fakes now exist, it is possible to question the authenticity of genuine footage. In countries where computer-generated illicit images carry a lesser punishment than real ones, claiming that material was generated may be very attractive to offenders.
“The problem may not be so much the faked reality as the fact that real reality becomes plausibly deniable,” according to Newcastle University’s Prof Lilian Edwards, an expert in internet law.
How To Spot One?
As is the case with many other things, quality plays a key role: weaker fakes are easier to spot. For example, researchers in the United States noticed that the vast majority of DeepFake subjects did not blink, because the vast majority of training images show people with their eyes open. However, that was back in 2018, and it did not take the fakers long to iron out that crease.
Besides checking whether the person in the video blinks, one can also look out for flickering around the edges of the superimposed face, poor lip synching, and finer details such as the hair.
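The blink check mentioned above is often implemented with the eye aspect ratio (EAR), a simple landmark-based blink measure. A real detector would obtain the six eye landmarks per video frame from a library such as dlib or MediaPipe; the coordinates below are made-up stand-ins for one open and one closed eye.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye, as in the
    common 68-point facial landmark scheme."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, float) for p in eye)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Illustrative landmark sets (not from any real frame).
open_eye   = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]

ear_open = eye_aspect_ratio(open_eye)
ear_closed = eye_aspect_ratio(closed_eye)

# Frames whose EAR dips below a threshold (around 0.2 is a commonly
# cited value) for a few consecutive frames count as a blink; a long
# clip with no such dips is suspicious.
BLINK_THRESHOLD = 0.2
print(ear_open > BLINK_THRESHOLD, ear_closed < BLINK_THRESHOLD)  # → True True
```

In practice one would track the EAR over every frame of a clip and flag videos whose blink rate is far below the human norm of roughly 15 to 20 blinks per minute.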
As the technology develops and becomes more widely available, DeepFakes, and those who create them, could start to unjustly influence decisions in our daily lives. To truly police this potential threat, AI itself seems the most viable tool to deploy in the endless cat-and-mouse game that is developing.