How AI is Used to Detect Deep-Fake Videos

According to CSO, a deep-fake video is simply audio and video that look and sound real but are actually the result of manipulation. As technology continues to advance, deep-fakes will become more of a problem. Fortunately, the technology to detect them is also advancing. There are several aspects of deep-fake videos you should know about.


What is a Deep-Fake Video?

A deep-fake video is one in which the creator intentionally changes the words a person is speaking or places another person’s face or head onto the video. There are camera and phone apps that can remove facial blemishes, add cat ears, or lengthen arms and legs. While these manipulations are normally harmless, much of the technology now available for creating fake videos is not. Deep learning systems produce fakes after studying real videos or photos and then copying a person’s speech and behavior patterns. The software then places the fake face over another, almost like a mask.

CNBC describes deep-fake videos as falsified videos created with a form of artificial intelligence; the videos are ultimately the result of deep learning. Deep-fake videos can put someone’s face onto footage of someone else committing a crime. They can make it sound like politicians are saying things they really aren’t. Specific examples of deep-fakes include President Obama’s fake public service announcement, Mark Zuckerberg appearing to brag about owning the users of Facebook, and a Bill Hader impression clip, made with DeepFaceLab, in which his face morphs into the actors he’s impersonating. These are just a few examples of how potentially dangerous and detrimental to society these videos can be.


How Does a Deep-Fake Video Work?

Deep-fake videos were once something you only saw as part of Hollywood special effects. Even then, it often took weeks or months for experts to seamlessly add someone to a film or photo. There is now software available that gives almost anyone with a computer and an internet connection the ability to create a fake video. The software maps and studies specific features within the content. Using the information from the old content, a person can create new content. The software can alter the new content so it matches the old as closely as possible.
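The mapping-and-copying step described above is often built on a pair of autoencoders that share one encoder: the shared encoder learns features common to both faces, each identity gets its own decoder, and swapping decoders at inference time renders one face over the other's footage. The sketch below is a hypothetical, untrained illustration of that architecture using random linear maps; every name and dimension is an assumption for illustration, not any specific tool's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a flattened face crop and a small latent code.
DIM_FACE, DIM_LATENT = 64, 16

# One encoder shared by both identities; one decoder per identity.
# (Real systems train these deep networks on many frames; here the
# weights are random, so only the data flow is being illustrated.)
encoder = rng.normal(size=(DIM_LATENT, DIM_FACE))
decoder_a = rng.normal(size=(DIM_FACE, DIM_LATENT))  # reconstructs face A
decoder_b = rng.normal(size=(DIM_FACE, DIM_LATENT))  # reconstructs face B

def swap(face_a_frame):
    """Encode a frame of face A, then decode with B's decoder —
    the decoder swap is what produces the face-over-face effect."""
    latent = encoder @ face_a_frame   # map the frame to shared features
    return decoder_b @ latent         # render those features as face B

frame = rng.normal(size=DIM_FACE)
fake = swap(frame)
print(fake.shape)  # (64,)
```

Training the shared encoder on both identities is what lets the swapped output track the source frame's pose and expression.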

Text-based editing is a specific type of technology that can add and delete what a person is saying in a video. Unfortunately, deep-fake technology isn’t just for movies or harmless entertainment. There are several specific ways people or organizations may use deep-fakes for nefarious purposes.

  • Political Attacks

    Political groups can use fake videos to influence voters or even encourage members of specific groups to incite violence.

  • Business Attacks

    Deep-fakes can impersonate business leaders to infiltrate a company. A fake CEO can tell employees to send money, trade secrets, etc., to rival businesses.

  • Financial Attacks

    These attacks can focus on businesses as well as individuals. Criminals can use deep fakes to engage in phishing attacks and other scams to convince people to divulge financial information.

  • Personal Revenge

    People can now superimpose one person’s head on another’s body in a video. This is sometimes what happens in revenge pornography videos.


How Can AI Spot Deep-Fakes?

Low-grade or amateur videos can’t always fool the naked eye. Sometimes you can spot these videos if you look closely enough. A face that never blinks, or lighting and shadows that are off, are signs of a fake. Sometimes the words obviously don’t match the way a person’s mouth or face is moving. Sophisticated deep-fake videos, however, usually require more advanced methods to spot. Artificial intelligence can now use a variety of methods to detect them.
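One of the telltale signs above, a face that never blinks, can be checked automatically. A common heuristic is the eye aspect ratio (EAR), computed from six landmarks around each eye: it stays roughly constant while the eye is open and drops toward zero during a blink. The sketch below assumes landmarks come from some external face-landmark detector, and the 0.2 threshold is an illustrative assumption, not a calibrated value.

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks p1..p6 around one eye.
    Ratio of the two vertical gaps to the horizontal width;
    it collapses toward zero when the eyelid closes."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def blink_count(ear_series, threshold=0.2):
    """Count dips of the per-frame EAR below the threshold.
    A long clip with zero blinks is a red flag for a fake."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks

series = [0.30, 0.31, 0.12, 0.10, 0.29, 0.30, 0.11, 0.28]
print(blink_count(series))  # 2
```

Humans typically blink every few seconds, so a per-minute blink count far below that range over a long clip is suspicious on its own.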

  • Analyzing Datasets

When studying deep-fakes, one of the first steps is analyzing large datasets of footage, using audio-to-video and text-to-video methods to compare suspect videos against known-authentic ones.

  • Analyzing Phonemes and Visemes

Phonemes are units of sound in language, while visemes are their visual counterparts, such as the mouth movements that correlate with those sounds. AI can study these to see whether mouth movements match the actual sounds and words. A primary takeaway from studies of deep-fakes is that the mouth must close completely when making the phoneme sounds B, M, and P. Deep-fake videos often get this detail wrong, even when created with sophisticated technology.
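As a rough illustration of this check, the hypothetical sketch below flags bilabial phonemes (B, M, P) during which the lips never come together, assuming a per-frame mouth-openness signal from some face tracker and a phoneme timeline from some speech aligner; the 0.1 "closed" cutoff is an assumption for illustration.

```python
# Bilabial consonants require the lips to close completely.
BILABIALS = {"B", "M", "P"}

def mismatches(phonemes, mouth_openness, closed_max=0.1):
    """phonemes: list of (label, start_frame, end_frame) tuples.
    mouth_openness: per-frame lip-gap measure in [0, 1] (assumed
    to come from an external face tracker).
    Returns bilabial phonemes whose frames never close the mouth."""
    flagged = []
    for label, start, end in phonemes:
        if label not in BILABIALS:
            continue
        segment = mouth_openness[start:end + 1]
        if min(segment) > closed_max:   # lips never came together
            flagged.append((label, start, end))
    return flagged

openness = [0.5, 0.4, 0.05, 0.3, 0.6, 0.55, 0.5, 0.2]
timeline = [("M", 1, 3), ("A", 4, 5), ("P", 5, 7)]
print(mismatches(timeline, openness))  # [('P', 5, 7)]
```

The "M" passes because the mouth closes mid-phoneme (openness 0.05), while the "P" is flagged because its frames never dip near zero, the kind of inconsistency the research described above exploits.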

There are new companies developing software that can more easily detect digital manipulation. They use a variety of advancing technologies that can examine several aspects of a video, including its origin and its pixel-level content.

  • Deeptrace

This is a startup company from Amsterdam that is developing tools that do background scans on different types of audiovisual media.

  • Truepic

Truepic is using a combination of emerging technologies, such as artificial intelligence and cryptography, to fight deep-fakes, taking a holistic approach to a problem as complex as deep-fake videos.


Who Needs This Technology?

With the increasing emergence of deep-fake technology, nearly every large business and government agency needs to watch for deep-fakes. The following are several specific types of organizations that can benefit from this technology with examples of where each industry is most vulnerable.

  • Financial Institutions

According to iproov, payment transfers and personal banking are the areas most at risk from deep-fake fraud.

  • Healthcare Industry

    Deep-fakes can burden the healthcare system by spreading misinformation about various conditions including Covid-19. Fake videos can convince people that dangerous home remedies, such as drinking bleach, will kill the virus.

  • Educational Organizations

    Getting Smart states that educating students regarding AI and deep-fakes is an important part of learning critical consumption.

  • Government Agencies

    If people believe that government agencies or elected officials are saying something or promoting information they’re not, this can have dire implications for a democracy.

  • Social Media

    Social media is perhaps the avenue that people will most often take to promote deep-fake videos and spread misinformation. All major media platforms, including Facebook and Twitter, are extremely susceptible to deep-fake manipulation.

Eventually, almost every type of business or company may need to invest in the technology to distinguish between fake and real videos.


Is it Possible to Win This War?

The biggest problem when fighting deep-fakes is that the tools that are used to detect deep-fakes are often the same ones people are using to create these types of videos. It is often a battle of AI versus AI. In general, it’s easier to detect a fake video when someone’s entire face or head is on the video. However, it is much harder to detect lip-synching and speech that has been added into authentic videos.

It’s as important to find out what is true as it is to find out what’s not. While fake videos can inflict serious damage, so can real videos that millions of people believe are fake. The ability to weed out deep-fakes and validate real videos is crucial to the stability of financial institutions, the media, and even governments.


