April 20, 2023

With the growing popularity and constant improvement of artificial intelligence, it’s tough to tell what’s real and what’s fake online. We’ve seen it — and fact-checked it — for ourselves.

One video claimed to show Ukraine President Volodymyr Zelenskyy with cocaine on his desk. Another clip said it showed President Joe Biden railing against transgender people. And a third purported to show Elon Musk declaring that people can now unlock Teslas with their — ahem, body parts.

None of that actually happened. But on social media, deepfake videos like those are spreading.

A deepfake is a machine-generated image or video that changes faces, bodies or voices, making people appear to do and say things that they never did or said. Items can also be added to a video to distort the truth (we’re looking at you, Zelenskyy cocaine video).

Our team at PolitiFact regularly flags and debunks these viral claims. You may not realize that you have some of the same tools at your fingertips that we do. Here’s how we do it, and how you can, too.

Clues in a video can indicate when something’s off

When watching videos, we look for clues that can tell us whether they’re deepfakes. For starters, does the audio sync to the person’s mouth movements? In a video of Florida Gov. Ron DeSantis that we rated Pants on Fire, it didn’t.

The video falsely claimed to show DeSantis talking about his political record and admitting to signing into law things like state-required vaccinations, because “that’s where the money is.”

Audio synchronization — or lack thereof — is a key marker of a deepfake video, said Siwei Lyu, a University at Buffalo computer science and engineering professor and co-director of the university’s Center for Information Integrity.

Other parts of the audio can be clues, too. We fact-checked a clip that claimed to be a recording of Biden talking about the Silicon Valley Bank collapse. In the clip, the voice never took a breath between sentences and used words that made no sense. An expert forensically analyzed the clip and told us the voice was machine generated — not human at all.

Programs used to create deepfake videos are “very powerful, but they’re not perfect yet,” Lyu said.

Eyes can be another telltale sign.

“I always look into the eyes” of the person in the video, Lyu said. “In some cases, the reflections are not consistent, almost as if one eye is looking at one thing and the other is looking at the other.” Normally, he said, a person’s eyes will show a reflection of their surroundings or environment.

For example, one deepfake we fact-checked claimed to show actor Morgan Freeman criticizing Biden for comments he made at a summit while discussing the March 27 mass shooting at a school in Nashville, Tennessee.

The person pretending to be Freeman sounded frustrated, but the eyes did not move or blink. The only part of the face that did move was the mouth.

As the person spoke and moved his head, the image of Freeman’s face shifted constantly, showing it was just an artificial intelligence-generated filter. AI filters are computerized masks that are popular on platforms like TikTok and Snapchat.

Lyu has studied deepfakes for more than 20 years and operates a program called the DeepFake-o-meter, which tests videos for signs of alteration. The DeepFake-o-meter used to be available to the public, but it no longer is because of malicious use, he said. His group is talking with partners to make the program publicly available again.

Dig deeper — and outward — to find out what’s real

Looking at clues within the piece of media is a starting point, but it’s not enough. We also recommend running a lateral search to confirm or debunk a video’s accuracy — something you can do at home, too. Lateral searching means reading “across many connected sites instead of digging deep into the site at hand,” according to a fact-checking guide by Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public. Open multiple tabs on a web browser to find out more information about what the claim is, who is sharing it and what other sources are saying about it.

Caulfield advises, “Get off the page and see what other authoritative sources have said about the site,” and piece together “different bits of information from across the web to get a better picture of the site.”

If the Biden audio about the bank collapse were real, news reports almost certainly would have included information about it. But when we searched, the results included only other social media posts sharing the clip, or news articles that debunked it. Nothing confirmed it to be true.

Likewise, when PolitiFact found a video claiming to show DeSantis announcing his 2024 presidential run, no credible news sources backed that up — something that would have happened if DeSantis really had announced.

“It’s important to note first of all, who is sharing this video, you know, look for a little provenance, where this video is from originally,” Lyu said. “If the message truly matters to the audience, they should look for cross-verifications.”

Fact-checkers also use reverse-image searches, which social media users can do, too. Take screenshots of videos and upload them to sites like Google Images or TinEye. The results can reveal the video’s original source, whether it has been shared in the past and if it has been edited.
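For the technically curious: reverse-image engines generally match pictures by perceptual fingerprints rather than exact bytes, which is why a lightly edited or recompressed frame can often still be found. The internals of services like TinEye aren’t public, so the following is only an illustrative Python sketch of one well-known fingerprint, the average hash, using a made-up 8-by-8 grayscale frame:

```python
# Toy average-hash (aHash) sketch: each pixel becomes one bit depending on
# whether it is brighter than the image's average. Similar images produce
# similar bit strings, so a small Hamming distance suggests a match.
# Assumes a flat list of 64 grayscale values (an 8x8 thumbnail) for brevity.

def average_hash(pixels):
    """pixels: flat list of grayscale values (0-255). Returns a bit string."""
    avg = sum(pixels) / len(pixels)
    return ''.join('1' if p > avg else '0' for p in pixels)

def hamming(h1, h2):
    """Count the bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical "original frame" and a lightly edited copy of it.
original = [10 * i % 256 for i in range(64)]
edited = original[:]
edited[0] = 255  # small edit, e.g. an overlay pasted into one corner

h1, h2 = average_hash(original), average_hash(edited)
print(hamming(h1, h2))  # small distance: likely the same underlying image
```

Real systems refine this idea considerably, but the principle — compare compact fingerprints, tolerate small differences — is what lets a screenshot of an altered video lead back to its source.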

Deepfakes are getting more accessible

Most of the time, anyone can debunk a deepfake. Part of the problem, though, is that anyone also can create a deepfake — which can be dangerous, Lyu said.

The more accessible deepfake technology has become, the more popular it is to use, said Lyu. For example, people have been using programs that allow users to place a computerized mouth on top of someone’s face in a video — something that we’ve seen used repeatedly for people including DeSantis and Twitter CEO Elon Musk.

“In my opinion, those lip-syncing deepfakes are the most dangerous forms of deepfake videos,” Lyu said. “Lip-syncing is particularly devious because they only alter a small portion of the face.”

One solution to stop deepfake videos from spreading is simple — don’t share them when you see them.

“As (a user), I don’t want to be part of the problem, I want to be part of the solution,” Lyu said. “I would not spread this out to my friends or colleagues even though I tell them this is deepfake, because that’s only adding to the virality of that video.”

If you see a deepfake video you’d like us to fact-check, email us at truthometer@politifact.com.

This fact check was originally published by PolitiFact, which is part of the Poynter Institute.

Gabrielle Settles is a reporter covering misinformation for PolitiFact.
