The manipulation of images is nothing new. From making fairies appear real in the 1920s to erasing inconvenient figures from historical photographs, image editing has been used for hoaxes and attempts to manipulate the narrative for as long as photography has existed. Now, deepfakes created using AI have taken things to the next level, manipulating video and sound in a way that threatens the credibility of visual media.
Artificial intelligence powers the phenomenon of falsified videos, which is slowly creeping into mainstream media, but it can also be used to detect them. Rest assured, the big industry players are scrambling to find a way, perhaps not to stop the phenomenon, but certainly to prevent it from reaching its full damage potential. While companies big and small work on deepfake detection as you read this, the technology used to generate deepfakes keeps getting more accessible and more convincing. It’s fair to say we’re witnessing a deepfake arms race.
Researchers at University College London called deepfakes the most dangerous crime of the future, and with good reason. A ‘deepfake’ is a video created using AI and a deep-learning algorithm, typically to make a person appear to say or do something they haven’t, or to be somewhere they never were. A well-known example is the video that did the rounds of Barack Obama apparently insulting Donald Trump, made to show that we shouldn’t believe everything we see.
The most likely danger, according to the researchers, is the wrecking and ruining of individual reputations and the effects this may have in society. In an age when many of us are conducting, or at least recording, our lives online, word spreads faster than ever before. One tweet containing one video could make or break a person’s reputation in seconds; this is a world where data and visual information are two of the most powerful and persuasive things, after all. It’s the ideal scenario for someone with malevolent objectives to manipulate our perceptions.
Another scenario could see a convincing deepfake of a public figure in a compromising situation being sold to the media for thousands, if not millions. It wouldn’t be the first time the press paid a dubious source handsomely for media relating to celebrities or public figures. It would be an unusual, unprecedented form of fraud, one that media outlets can, and arguably should, protect themselves from by employing detection technology rather than taking such videos and images at face value.
In September 2020, Microsoft released a product that analyzes videos and images and gives them a confidence score to help users determine whether they are real or fake. The company says this is its bid to “combat disinformation”.
The tech giant’s Video Authenticator tool uses artificial intelligence to detect signs that a video is fake, signs which may be invisible to the human eye. One such tell-tale sign is subtle grayscale pixels at the boundary where the artificially generated component blends into the original footage. Microsoft trained the tool on a public dataset of deepfake videos and tested it against an even bigger dataset that Facebook created in its own effort to understand deepfakes.
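Microsoft hasn’t published the internals of Video Authenticator, but the grayscale-boundary idea can be illustrated with a toy heuristic. The sketch below (a hypothetical `grayscale_boundary_score` helper, not Microsoft’s actual method) simply measures what fraction of sharp-edge pixels are near-grayscale, on the assumption that a blended face swap slightly desaturates pixels along its seam:

```python
import numpy as np

def grayscale_boundary_score(frame: np.ndarray, sat_thresh: int = 8) -> float:
    """Fraction of edge pixels that are near-grayscale.

    frame: H x W x 3 uint8 RGB image.
    A pixel is 'near-grayscale' when its channel spread (max - min)
    is below sat_thresh; blended regions around a face swap often
    desaturate slightly, so a high score along edges is a weak,
    illustrative signal of manipulation.
    """
    frame = frame.astype(np.int16)
    spread = frame.max(axis=2) - frame.min(axis=2)  # per-pixel saturation proxy
    near_gray = spread < sat_thresh

    # Crude edge mask: gradient magnitude on the mean (luma-like) channel.
    luma = frame.mean(axis=2)
    gy, gx = np.gradient(luma)
    edges = np.hypot(gx, gy) > 15

    if edges.sum() == 0:
        return 0.0
    return float((near_gray & edges).sum() / edges.sum())
```

A real detector would of course be a trained model, not a hand-set threshold; this only shows the kind of low-level cue such a model can latch onto.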
Facebook, in fact, led the way in the social media industry when it at least attempted to pull the plug on fake videos. Seeing the benefit of fighting fire with fire, Facebook used visual artificial intelligence to build a database from which its own algorithm can learn to spot a falsified video. TikTok and Twitter weren’t far behind in following suit. A number of smaller enterprises are also building technology in the hope of preventing catastrophic fallout from deepfakes in the future.
It’s clear there is work being done, but you have to wonder: is it enough to stop deepfakes from having the worrying effects the UCL researchers predict, and is it moving fast enough? For deepfake detectors to work, they must correctly classify videos as real or fake 100% of the time, having trained on only hundreds of examples; those creating fake videos with malicious intent only need to succeed once. As Wired notes, the technology creating deepfakes isn’t great right now, but neither are the tools built to detect them. A lot has been done, but there is far more to do. Can those trying to stop deepfakes in their tracks do it before the fakes become ubiquitous?
Many of us have had fun with deepfake apps on our phones, placing our faces on Leonardo DiCaprio’s body, semi-convincingly, to see what it would have looked like had we been the one to clinch that role in Titanic. This is arguably the first step toward deepfakes becoming ubiquitous. While some deepfakes have appeared online, such as the earlier Barack Obama example, they are still rare; the only widespread use of the technology so far is the non-consensual use of female celebrities’ images in deepfake pornography.
However, Nina Schick, author of “Deep Fakes and The Infocalypse”, says we need to further develop tools such as Microsoft’s, and fast. “…Synthetic media is expected to become ubiquitous in about three to five years,” she says. “However, as detection capabilities get better, so too will the generation capability. It’s never going to be the case that Microsoft can release one tool that detects all kinds of video manipulation.” There we have it: three years to stop deepfakes from taking over. And detection technology isn’t moving fast enough right now. Time to up the game, it seems.
Much like in the anti-phishing industry, developers need to reach a point where they stay one step ahead of the technology, preventing deepfakes from corroding our information ecosystem. One thing is for sure: Visual-AI will play a part in creating what could very well become a monster, but it will play an even bigger role in keeping it at bay.
Seamlessly integrating our API is quick and easy, and if you have questions, there are real people here to help. So start today; complete the contact form and our team will get straight back to you.