Can AI Combat Deepfakes, the New Threat?
Deepfakes are the next big national threat. In this post, learn what deepfakes are, how to spot an AI-generated deepfake video, and how AI can be used to combat deepfakes.
AI is always in the news, always in the limelight. Sometimes it comes as a blessing, transforming the field of medical science. And sometimes it creates a lot of buzz by producing “deepfakes”.
It is feared that AI-generated “deepfake” videos could be the next big source of viral misinformation. With the next US presidential election knocking on the door, these “deepfake” videos could bring potentially catastrophic consequences for that big event next year.
No, no! I’m not going to talk about anything “political” here, but the election is a good point of reference to understand how severe the impact can be.
Deeply fake “deepfake” - what is that?
The term “deepfake” is a blend of “deep learning” and “fake”; deep learning is a subset of Artificial Intelligence (AI).
Deepfakes are falsified videos made using deep learning. Generative Adversarial Networks (GANs) use neural networks to combine and superimpose existing images and videos onto source images or videos.
According to Peter Singer, a senior fellow and strategist focused on cybersecurity and defense at New America, “The technology can be used to make people believe something is real when it is not”, which is undoubtedly very dangerous.
According to lawmakers, deepfakes could be the next big national security threat!
Sadly, we’ve already seen this technology used to create fake celebrity porn that goes viral around the web. You might also have come across the trending video where Nicolas Cage and David Schwimmer look nearly identical.
Just look at the image! Even if we share it as a meme, deep down we all know how terrifying this is.
How does deepfake AI work?
According to Vox, the GAN algorithm involves two separate AIs. The first is the generator, which synthesises new samples from scratch. The second is the discriminator, which takes samples from both the generator’s output and the training data and predicts whether each one is “fake” or “real”.
Initially, the generator has zero idea of what people look like, so its input is a random vector (noise) and its initial output is noise as well. Over time, however, each network gets progressively better, and eventually the generator starts producing more “realistic” images and frames, which results in a deepfake video.
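The adversarial loop described above can be sketched in a toy 1-D setting. This is a minimal illustration, not a real image model: the generator is just a linear map g(z) = w_g·z + b_g, the discriminator a logistic classifier, and the “real data” a Gaussian centred at 3. All parameter values and the data distribution are assumptions for the sake of the sketch.

```python
import numpy as np

# Toy 1-D GAN: generator g(z) = w_g*z + b_g, discriminator
# D(x) = sigmoid(w_d*x + b_d). "Real" data comes from N(3, 0.5);
# the generator learns to imitate it through the adversarial loop.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_g, b_g = 1.0, 0.0        # generator parameters (starts producing noise)
w_d, b_d = 0.1, 0.0        # discriminator parameters
lr, steps, batch = 0.05, 2000, 64

initial_fake_mean = float(np.mean(w_g * rng.standard_normal(1000) + b_g))

for _ in range(steps):
    real = 3.0 + 0.5 * rng.standard_normal(batch)
    z = rng.standard_normal(batch)
    fake = w_g * z + b_g

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    gw = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    gb = np.mean(d_real - 1.0) + np.mean(d_fake)
    w_d -= lr * gw
    b_d -= lr * gb

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    d_fake = sigmoid(w_d * fake + b_d)
    grad_fake = (d_fake - 1.0) * w_d   # gradient of loss w.r.t. each fake x
    w_g -= lr * np.mean(grad_fake * z)
    b_g -= lr * np.mean(grad_fake)

final_fake_mean = float(np.mean(w_g * rng.standard_normal(1000) + b_g))
print(initial_fake_mean, final_fake_mean)  # generated mean drifts toward 3
```

The same two-player dynamic, scaled up to deep convolutional networks and image data, is what produces deepfake frames.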
How to detect deepfake videos using AI?
Well, we shouldn’t underestimate the power of AI; it has a lot to offer. While it is being used to tamper with videos and create deepfakes, tech experts are trying to utilise the same advanced technology to detect them.
Here is a list of AI techniques to combat a deepfake video.
Eye blinking speed
Last year, computer scientist Siwei Lyu and his research team from the University at Albany, SUNY published a research paper on detecting deepfake videos using AI. According to them, such tampered videos can be detected with AI by examining the eye blinking patterns of the subject in the video.
Siwei Lyu and his team tested their methodology against benchmark eye-blinking data sets and were able to expose significant flaws in the fake videos. They found that blinking, a physiological signal, is not well replicated in the tampered videos.
Sadly, soon after publishing their research, new deepfakes began to emerge where the subject in the video is blinking more normally!
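A blink-based check of this kind can be sketched with the widely used eye aspect ratio (EAR): the ratio of vertical to horizontal eye-landmark distances drops sharply when the eye closes, so runs of low-EAR frames can be counted as blinks. This is an illustrative heuristic, not the researchers’ actual pipeline; the landmark coordinates and thresholds below are made up for the example.

```python
import math

# Eye aspect ratio from the common 6-point eye-landmark convention:
# p1/p4 are the horizontal corners, p2/p3 the upper lid, p5/p6 the lower.
def eye_aspect_ratio(landmarks):
    p1, p2, p3, p4, p5, p6 = landmarks
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count runs of at least `min_frames` consecutive low-EAR frames."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

open_eye = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]
print(eye_aspect_ratio(open_eye))    # ~0.67 for the open eye
print(eye_aspect_ratio(closed_eye))  # ~0.1 for the closed eye

# A synthetic per-frame EAR trace with two closed-eye episodes:
frames = [0.3, 0.3, 0.1, 0.1, 0.3, 0.3, 0.15, 0.12, 0.3]
print(count_blinks(frames))  # → 2
```

A subject who blinks far less often than the human baseline (roughly every 2–10 seconds) would then be flagged as suspicious.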
Head and face gestures
A graduate student at UC Berkeley and her thesis adviser are working on another deepfake detection approach: an AI algorithm that can detect such deeply fake videos based on behavioral characteristics, especially the face and head quirks of the subjects in the videos.
Every individual has unique face and head gestures. For instance, someone may smirk while making a point, or bite their lip while listening or concentrating. If a neural network is trained on video data capturing an individual’s behavioral patterns, it can flag deeply fake videos containing gestures that do not belong to that particular individual.
According to the research team, they have so far been able to identify tampered videos with 92% accuracy. The algorithm picks up changing patterns in posture, facial expressions, and hand gestures, along with other characteristics, in relation to the message the speaker is conveying. The team expects the accuracy rate to reach 99% by 2020.
Image resolution artifacts
Deepfake videos are usually built from still images. To make the tampered video look realistic, all of these images must be processed at a common fixed resolution. This process nevertheless leaves behind artifacts that a trained neural network can detect.
A research team from the University of California, Riverside (UCR) is presently working on this algorithm. They were able to show that the pixels around the boundaries of the objects had unnatural feathering and smoothing in a tampered video.
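A crude proxy for the feathering effect described above can be sketched by measuring the peak intensity gradient inside a band around a known boundary: a natural, sharp edge produces a large peak gradient, while a feathered edge spreads the same intensity change over several pixels and lowers the peak. The band width, image sizes, and boundary location below are assumptions for the demo; this is not the UCR team’s actual detector.

```python
import numpy as np

def peak_boundary_gradient(img, col, band=3):
    """Largest horizontal intensity step inside a band around column `col`."""
    region = img[:, col - band: col + band + 1].astype(float)
    grad = np.abs(np.diff(region, axis=1))
    return float(grad.max())

# Synthetic 20x20 grayscale images with a vertical boundary at column 10.
sharp = np.zeros((20, 20))
sharp[:, 10:] = 1.0                   # abrupt 0 -> 1 transition

feathered = np.zeros((20, 20))
for i, c in enumerate(range(8, 13)):  # same transition ramped over 5 columns
    feathered[:, c] = (i + 1) / 5.0
feathered[:, 13:] = 1.0

print(peak_boundary_gradient(sharp, 10))      # → 1.0 (natural edge)
print(peak_boundary_gradient(feathered, 10))  # → 0.2 (smoothed edge)
```

A neural network trained on many such boundary patches can learn far subtler versions of this cue than a hand-set threshold could.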
Are you watching a deepfake video?
Not sure whether you’re watching a real video or a deeply faked one? Well, here are some ways to identify it.
According to deep learning writer Jonathan Hui, just slow the video down and look for the following:
- Changed skin tone near the edge of the face
- Blurring evident in the face but not elsewhere in the video
- The face getting a bit blurry when it is partially obscured by another object or a hand
- Double edges to the face, etc.
With time, though, it is becoming much harder to identify a finely curated deepfake video, as deepfake technology is evolving rapidly.
Do we have any proven method yet to prevent deepfakes?
Sadly, no, not yet!
However, tech giants like Google, Facebook, and Microsoft are already getting ready to head off a disinformation disaster.
Facebook is investing in making lots of deepfakes of its own; these will help its R&D team build and refine deepfake detection tools. Google, meanwhile, has come up with a dataset of synthetic speech in support of an international challenge to develop high-performing fake-audio detectors, and is also contributing a large data set to deepfake detection research.
In June 2019, the prestigious Cornell University released a study focused on a unique approach to preventing the creation of deepfake videos. According to the study, inserting digital “noise” into photographs can disrupt facial recognition, and this noise is undetectable to the human eye.
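The core trick behind such a defence can be sketched with a bounded, targeted perturbation: instead of random noise, the pixels are nudged in whichever direction most shifts a recognizer’s score, while no pixel changes by more than a tiny epsilon. The linear “recognizer” below is a made-up stand-in for a real face-recognition model, and the FGSM-style sign step is one standard way to build such perturbations, not necessarily the study’s exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(64)           # toy flattened "photo", pixel values in [0, 1]
w = rng.standard_normal(64)  # hypothetical linear recognizer: score = w . x
eps = 0.01                   # perturbation budget, invisible to the eye

# Nudge every pixel by at most eps in the direction that lowers the score,
# keeping the result a valid image.
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(np.max(np.abs(x_adv - x)) <= eps + 1e-12)  # True: change is bounded
print(float(w @ x), float(w @ x_adv))            # recognizer score drops
```

Against a real deep recognizer the gradient direction replaces `np.sign(w)`, but the principle is the same: a change too small for the eye can still derail the model.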
While we are worried about the altered and tampered videos, it should be kept in mind that deepfakes can alter audio content as well.
However, for now, we all have to wait patiently to see where AI takes us to combat deepfakes.