
Utah Valley University students explore how deepfakes influence viewers and voters

UTAH VALLEY UNIVERSITY, Utah — Three Utah Valley University student organizations set out to study how deepfakes affect viewers, whether viewers can recognize them and how viewers interact with them.

This comes after several deepfakes related to political candidates or campaigns circulated online in the lead-up to Election Day, including one of Gov. Spencer Cox that surfaced earlier this year.

A deepfake is video or audio in which a person’s face, body or voice has been digitally altered with artificial intelligence to imitate someone else.

The school’s neuromarketing lab, SMARTLab, tracked the microexpressions of 40 participants using iMotions technology.

Participants were tested in front of a computer equipped with hardware and software for eye tracking and facial emotion analysis.

Laboratory results showed that participants’ engagement and confusion, as reflected in their microexpressions, were higher when they were exposed to deepfake content, yet they did not report those feelings in post-test interviews.

Real video and audio evoked more traditional emotional responses.

Another 200 participants took part in an online study that assessed their ability to recognize deepfakes in video and audio formats.

At the beginning of the test, participants were divided into groups and shown content without being told that some of it had been created by artificial intelligence.

Subjects then rated the video or audio speaker based on factors such as credibility and reliability.

Only then were they informed that the purpose of the study was to measure the impact of deepfakes and that some of the content may have been created by artificial intelligence.

They were then asked to judge whether they believed the media was real or fake and to rate their confidence in those judgments.

But even after being told they might have encountered a deepfake, they struggled to consistently identify doctored content.

Of all deepfake videos and audio, as well as real video and audio, at least 50 percent of participants thought the media was “probably real.”

And 57 percent or more were confident in their assessments, suggesting that detecting a deepfake was essentially a coin flip.

“If you just want their image, if you just want their voice… everyone’s image and voice is online and freely available,” said Hope Fager, a national security student who helped conduct the research. “It’s a little scary how easy it is to find the information you need to pretend to be someone else.”

That’s why some campaigns have taken extra steps to limit voters’ exposure to fake media.

Michael Kaiser is president and CEO of Defending Digital Campaigns, based in Washington, D.C.

“We talk a lot with campaigns about the issue of deepfakes and impostors, and this is another thing that happens in the space where someone fakes a campaign or a candidate, and we work with them,” Kaiser says. “We have some tools that we provide to federal campaigns to help them track what’s happening online, identify instances of deepfakes, identify places where people are trying to spoof a real campaign, and we help them take it down.”

As deepfake technology becomes more sophisticated, is there a way to distinguish the real from the fake?

“I think it’s really difficult,” Kaiser says. “I think there might be some technical things that might cause people to see some blur here or there, maybe the background looks wrong, but it’s really hard to train people to see those things, so I think you need to kind of trust your own instincts.”

Some social media sites are aware of this and flag content that may seem suspicious.

“Those checks flagging that something could be disinformation — things like that can be significant,” Fager says. “The mere thought that something could be a deepfake is enough to push people over that threshold and get them thinking critically. Nobody really does this on their own, but if we could reach more people and say, ‘Hey, just be aware of what you’re looking at. Trust your intuition.’”

Until more protections are put in place to weed out deepfakes, the responsibility rests with voters as they view content in the lead-up to Election Day.

Stay alert, and watch for what isn’t real.

“The problem is that this is happening today. This is no longer a distant idea, and we need to be prepared,” says Fager. “We need to be aware of what we’re looking at and we need to think critically, which is a little more difficult in a situation where we’re looking at social media and everyone’s brain is off.”