This is an AI-generated image by Freepik.
On April 11, 2023, a video appeared on X of Hillary Clinton endorsing Ron DeSantis for Governor of Florida. In the video, Clinton says:
“People might be surprised to hear me say this but I actually like Ron DeSantis—a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs….If Ron DeSantis got installed as president, I’d be fine with that…”
The video shows an image of Clinton from a three-quarter angle, looking as though she is speaking to someone off-camera, possibly an interviewer. She sounds quite emphatic in her support of DeSantis. The post received nearly 900,000 views.
There’s just one problem: it wasn’t actually Hillary Clinton in the video. The clip was generated by AI.
This particular deepfake was flagged by the MIT Affective Computing lab. Deepfakes, which use AI and deep learning techniques to mimic real human speech and behavior, are increasingly used in entertainment, scams, and blackmail, and, most worrying at the present moment, to influence elections.
The MIT lab has studied ways to detect deepfakes, presenting their findings in a paper that highlights different types of clues to look for: anatomical (such as facial inconsistencies), stylistic (like unnatural textures), functional (such as strange behavior of eyes), violations of physics, and sociocultural indicators.
They’ve also developed a set of questions to help spot deepfakes. For example, look carefully at the face and you might notice odd features like unnatural smoothness or wrinkles, inconsistent signs of age between the face and hands, hair that doesn’t look natural, or strange blinking patterns. In the Clinton video, for instance, the image blinked too frequently, which was a red flag for me. You can find the full set of questions here.
Why do so many people fall for deepfakes like the one of Hillary Clinton? One reason is that people are more likely to perceive events as real when they are portrayed in video than when they are portrayed in text or audio. A study published in the Journal of Computer-Mediated Communication found that people were more likely to judge a fake story as credible when it was presented in video rather than audio or text. Background knowledge also makes a difference: people who know less about an issue are more susceptible to believing deepfake videos about it. In other words, videos can feel convincing and real, which makes them seem credible even when they are not.
Seeing is believing. Videos provide visual cues that map onto what we observe and know in our natural environment. There is then less mental translation involved to interpret them. As a result, we're less likely to question whether what we’re seeing is real.
The Kellogg School of Management at Northwestern has teamed up with MIT to develop a test that helps people practice distinguishing between real and fake images. I gave it a go with 12 different images and got 9 correct, which is the average score. For some deepfakes, subtle details seemed a bit off: the lighting on some faces, or freckles that looked a little too perfect. Even so, it was very hard to tell the difference. The discrepancies were so minimal that if these images were posted on social media or shown in a news report, most people probably wouldn’t question their authenticity.
A key reason people fall for fake content is confirmation bias. People tend to accept information that aligns with their beliefs and are less likely to notice evidence that contradicts them. If someone is firmly rooted in their beliefs, they are likely to accept a deepfake that supports their views without questioning its authenticity. Take the case of a deepfake video of Ron DeSantis that surfaced in September 2023 claiming that he was dropping out of the presidential primary (he actually dropped out in January 2024). People hoping that he would drop out might have easily believed this video to be credible.
Currently, there is no federal law in the U.S. banning deepfakes, though legislation is in the works. The NO FAKES Act of 2024, introduced in the Senate, aims to protect individuals from unauthorized AI-generated depictions. Several states, including Texas, Oregon, New Mexico, and Indiana, have passed laws banning or regulating deepfakes in elections. California also took action when Governor Gavin Newsom signed three bills on September 17 addressing this issue. First, large online platforms must take down or label fake election-related content. Second, these platforms must develop mechanisms for users to report fake content. Finally, an injunction can be brought against any large online platform that doesn’t comply with the legislation. Mississippi has a law that goes even further, penalizing deepfakes regardless of whether they are election-related.
Enforcing these laws may prove difficult. In the case of the California legislation, large social media companies, many of them based in California, can be expected to resist the regulations. Even Elon Musk, the owner of X, posted a deepfake video of Kamala Harris. When social media owners themselves participate in disseminating deepfakes, we know it will be a battle.
The bottom line? In this heated election season, the threat of deepfakes is concerning. While a report from the Alan Turing Institute indicates that deepfakes didn’t significantly impact this year’s European elections, there is no guarantee that they won’t influence U.S. elections as the technology becomes more sophisticated and widespread.
For now, the best defense is to learn to recognize deepfakes. Take the MIT lab’s test to hone your ability to detect them. Get your news from a trusted source, not from social media. Always question what you see on social media and think critically about it. If you have doubts, check the information against a trusted source. If content doesn’t seem right, it probably isn’t. Don’t believe everything you see.
You can learn more about how technology is impacting society in my book Attention Span.