
The Deepfake Detection Arms Race

Can deepfake detection technology keep up with malicious creators?

The manipulation of images is nothing new. From the faked fairy photographs of the 1920s to the erasure of inconvenient figures from historical photographs, image editing has been used for hoaxes and attempts to manipulate a story for well over a century. Now, deepfakes created using AI have taken things to the next level, manipulating video and sound in a way that threatens the credibility of visual media.

Artificial intelligence powers the phenomenon of falsified video that is slowly creeping into mainstream media, and the same technology can be used to detect it. Rest assured, the big industry players are scrambling to find a way to, if not stop the phenomenon, then at least prevent it from reaching its full damage potential. But while companies big and small work on deepfake detection as you read this, the technology used to generate deepfakes keeps getting more accessible and more convincing. It’s fair to say we’re witnessing a deepfake arms race.

The Danger of Deepfakes

Researchers at University College London have called deepfakes the most dangerous crime of the future, and with good reason. A ‘deepfake’ is a video created using AI and deep-learning algorithms, typically to make a person appear to say or do something they never did, or to be somewhere they never were. A well-known example is the widely shared video of Barack Obama apparently insulting Donald Trump, created to demonstrate that we shouldn’t believe everything we see.

The authors of the UCL report say that deepfakes are dangerous for several reasons, the most significant being that they are hard to spot without the right technology. They also cite the variety of crimes deepfakes could facilitate as a serious worry: the discrediting of public figures, they say, could cause legal issues and societal disorder. Furthermore, they claim that the long-term effect of deepfakes could be a widespread distrust of audio and video media. That distrust could cause social harm, right down to this type of media no longer being accepted as reliable evidence in trials. It sounds like something out of a dystopian film, but it’s a potential reality, and it is cause for great concern.

Making and Breaking Reputations

The most likely danger, according to the researchers, is the ruining of individual reputations and the effects this may have on society. In an age when many of us conduct, or at least record, our lives online, word spreads faster than ever before. A single tweet containing a single video could make or break a person’s reputation in seconds; after all, this is a world where data and visual information are among the most powerful and persuasive forces there are. It’s the ideal scenario for someone with malevolent objectives to manipulate our perceptions.

Another scenario could see a convincing deepfake of a public figure in a compromising situation being sold to the media for thousands, if not millions. It wouldn’t be the first time the press paid an eye-watering sum for media relating to celebrities or public figures. It would be an unusual, unprecedented form of fraud, and one that media outlets can, and arguably should, protect themselves from by employing detection technology rather than taking such videos and images at face value.

Detecting Deepfakes With AI

In September 2020, Microsoft released a tool that analyzes videos and images and gives them a confidence score to help users determine whether they are real or fake. The company describes this as its bid to “combat disinformation.”

The tech giant’s Video Authenticator tool uses artificial intelligence to detect signs of manipulation that might be invisible to the human eye. Examples of these tell-tale signs include grayscale pixels at the boundary where the artificially generated component blends into the rest of the frame. Microsoft trained the tool on a public dataset of deepfake videos and tested it against an even bigger dataset that Facebook had created in its own attempt to understand deepfakes.
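Microsoft hasn’t published Video Authenticator’s internals, but the boundary-artifact idea can be sketched in a few lines. The toy heuristic below is a hypothetical illustration, not Microsoft’s method; the function names, band width, and score mapping are all assumptions. It measures how desaturated, i.e. how close to grayscale, the pixels are in a thin band around a suspected face region, and turns that into a crude 0-to-1 ‘fake’ confidence:

```python
# Toy illustration of boundary-artifact scoring. NOT Microsoft's actual
# (unpublished) method; the band width and score mapping are assumptions.
import cv2
import numpy as np

def boundary_desaturation_score(frame_bgr, face_box, band=8):
    """Mean colour saturation in a thin band straddling the edges of
    face_box = (x, y, w, h). Blended deepfake faces can leave subtly
    desaturated (grayscale-like) pixels along the blend seam."""
    x, y, w, h = face_box
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1].astype(np.float32) / 255.0  # 0 = gray, 1 = vivid

    # Build a hollow rectangular mask covering only the boundary band.
    mask = np.zeros(sat.shape, dtype=bool)
    mask[max(y - band, 0):y + h + band, max(x - band, 0):x + w + band] = True
    mask[y + band:y + h - band, x + band:x + w - band] = False
    return float(sat[mask].mean())

def fake_confidence(frame_bgr, face_box):
    """Crude 0-to-1 score: the grayer the boundary band, the more 'fake'."""
    score = boundary_desaturation_score(frame_bgr, face_box)
    return float(np.clip(1.0 - 2.0 * score, 0.0, 1.0))

# Example usage (the face box would normally come from a face detector):
# frame = cv2.VideoCapture("suspect.mp4").read()[1]
# print(fake_confidence(frame, face_box=(120, 80, 200, 200)))
```

A production detector would learn such cues from labelled data rather than hard-coding them, but the principle of scoring blending artifacts the human eye misses is the same.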

Facebook, in fact, led the way in the social media industry when it at least attempted to pull the plug on fake videos. Seeing the benefit of fighting fire with fire, Facebook used visual artificial intelligence to build a dataset from which detection algorithms can learn to spot a falsified video. TikTok and Twitter weren’t far behind in following suit. A number of smaller enterprises are also building technology in the hope of preventing catastrophic fallout from deepfakes in the future.
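That learning step is standard supervised classification: show a model labelled real and fake frames and let it discover the distinguishing artifacts on its own. Here is a minimal sketch of the idea; the tiny network, names, and hyperparameters are illustrative assumptions, not Facebook’s actual system:

```python
# Minimal sketch of learning real-vs-fake from a labelled corpus.
# The tiny CNN below is an illustrative stand-in, not a real detector.
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy frame-level classifier
    nn.Conv2d(3, 16, kernel_size=3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                       # logit: > 0 leans "fake"
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(frames, labels):
    """One optimisation step. frames: (N, 3, H, W) float tensor;
    labels: (N, 1) float tensor, 0.0 = real, 1.0 = fake."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random stand-in data; real training would iterate over labelled
# frames extracted from a deepfake corpus.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()
print(training_step(frames, labels))
```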

It’s clear there is work being done, but you have to wonder: is it enough to stop deepfakes from having the worrying effects the UCL researchers are predicting, and is it happening fast enough? For deepfake detectors to work, they must correctly classify videos as real or fake 100% of the time, often after training on only hundreds of examples. Those creating fake videos with malicious intent only need to be successful once. As Wired says, the technology creating deepfakes isn’t great right now, but neither are the tools built to detect them. A lot has been done, but there is much more to do. Can those trying to stop deepfakes in their tracks do it before deepfakes become ubiquitous?
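That asymmetry is worth putting in rough numbers. Both figures in the snippet below are illustrative assumptions, not measurements:

```python
# Defender/attacker asymmetry in rough numbers. Both inputs are
# illustrative assumptions, not real measurements.
detection_rate = 0.99      # assume the detector flags 99% of fakes
fakes_uploaded = 10_000    # assume this many malicious uploads

slipped_through = fakes_uploaded * (1 - detection_rate)
print(f"{slipped_through:.0f} fakes evade detection")  # prints: 100 fakes evade detection
```

Even a detector that is right 99% of the time leaves a hundred convincing fakes in circulation, and each of them only has to fool viewers once.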


Is Deepfake Detection Technology Moving Fast Enough?

Many of us have had fun with deepfake apps on our phones, placing our faces semi-convincingly on Leonardo DiCaprio’s body to see what it would have looked like had we been the one to clinch that role in Titanic. This is arguably the first step toward deepfakes becoming ubiquitous. While some deepfakes have appeared online, such as the earlier Barack Obama example, they are still rare, and the only widespread use of the technology is the non-consensual use of female celebrities’ images in deepfaked pornography.

However, Nina Schick, author of “Deep Fakes and the Infocalypse,” says that we need to further develop tools such as Microsoft’s, and fast. “…Synthetic media is expected to become ubiquitous in about three to five years,” she says. “However, as detection capabilities get better, so too will the generation capability; it’s never going to be the case that Microsoft can release one tool that detects all kinds of video manipulation.” There we have it: three to five years to stop deepfakes from taking over, and detection technology isn’t moving fast enough right now. Time to up the game, it seems.

Much like in the anti-phishing industry, developers need to get to a place where they stay one step ahead of the generation technology to prevent deepfakes from corroding our information ecosystem. One thing is for sure: Visual-AI will play a part in creating what could very well become a monster, but it will play an even bigger role in keeping it at bay.
