
Viral celebrity deepfake warns that use of artificial intelligence will ‘cause you not to vote’

Election interference increasingly relies on artificial intelligence and deepfakes. That’s why one viral PSA uses them as a warning sign.

“This election, attackers will use artificial intelligence to trick you into not voting,” the ad says. “Don’t give in to it. This threat is very real.”

The video “Don’t Let AI Steal Your Voice” features Hollywood stars such as Rosario Dawson, Amy Schumer, Chris Rock and Michael Douglas. But several of them aren’t real: Douglas, Rock and Schumer, for example, are deepfakes.

“The artists involved were very enthusiastic about it,” Joshua Graham Lynn, CEO and co-founder of RepresentUs, the national nonpartisan anti-corruption organization behind the video, told Scripps News.

“Everyone you see there either donated their image to us or volunteered in person. They were all very excited to do this to help get out the vote because they know this is a really important election,” Lynn added.

RELATED STORY | Scripps News created a deepfake to see how AI could influence elections

The video, which has received more than 6 million views on YouTube, urges voters to pay more attention to what they see and hear online.

“If something doesn’t seem right, it probably isn’t,” the real-life Rosario Dawson says in the video.

“It’s very difficult to tell the difference online now between what’s real and what’s fake,” Lynn said. “You just watch any new video and sometimes you can’t tell if it was completely created by AI.”

“Technology is evolving quickly, and more importantly, attackers will always be at the forefront,” he added.

Disinformation experts and community leaders said AI-generated content is being used to sow chaos and confusion around the election. The Department of Homeland Security, as ABC News previously reported, warned state election officials that artificial intelligence tools could be used to “create false election records; impersonate election officials to gain access to confidential information; generate fake calls to voters to overload call centers; and more convincingly spread false information online.”

“And so we want voters to use their brains,” Lynn said. “Be skeptical if you see something that tells you not to participate. If you see something about a candidate you support, question it. Check again.”

While deepfakes can be used to spread misinformation about elections, experts warn they can also be used to undermine the public’s trust in official sources, facts or their own instincts.

“We have situations where we all start to doubt the information we encounter, especially information related to politics,” Purdue University professor Kayleen Jackson Schiff told Scripps News. “And then, given the election environment that we’re in, we’ve seen examples of claims that real images are deepfakes.”

Schiff said this phenomenon, this widespread uncertainty, is part of a concept called the “liar’s dividend.”

“Widespread awareness of deepfakes and media manipulation makes it possible to credibly claim that real images or videos are fake,” she said.

RELATED STORY | San Francisco sues websites that created fake nudes of women and girls

Schiff, who is also co-director of Purdue’s Governance and Responsible Artificial Intelligence Lab, and Purdue University PhD candidate Christina Walker have been tracking political deepfakes since June 2023, documenting more than 500 cases in their database of political deepfake incidents.

“A lot of the things we capture in the database are actually meant to be satirical, so they’re more like political cartoons,” Walker told Scripps News. “It’s not always the case that everything is malicious and intended to cause harm.”

However, Walker and Schiff say some deepfakes cause “reputational harm,” and even parody videos intended for entertainment can take on new meaning if shared out of context.

“It remains a concern that some of these deepfakes, which are initially shared for entertainment, could mislead people who don’t know the original context if the post is later reposted,” Schiff said.

While the deepfakes in the “Don’t Let AI Steal Your Voice” video are difficult to detect, Scripps News took a closer look and found visual artifacts and disappearing shadows. Deepfake technology has improved, but Walker says there are still telltale signs.

“These could include extra or missing fingers, blurry faces, or writing in the image that is garbled or doesn’t match. All of these can indicate that something is a deepfake,” Walker said. “As these models improve, it becomes increasingly difficult to tell. But there are still ways to check.”

Fact-checking a deepfake, or any video that evokes an emotional response around the election, should begin with official sources such as secretaries of state or vote.gov.

“We encourage people to seek additional sources of information, especially as it relates to politics and close to the election,” Schiff said. “And also just thinking about who the source of the information is and what their motivations might be for sharing that information.”

“If anything tells you as a voter, ‘Don’t go to the polls. Everything has changed. Riots broke out. Everything is postponed. You can come back tomorrow,’ double-check your sources. That’s the most important thing to do right now.”