
Efforts to combat disinformation recede as voters head to the polls

Elon Musk speaks to Republican presidential candidate Donald Trump at a campaign rally in Madison Square Garden on October 27. (Jabin Botsford/The Washington Post)


During the chaos following the 2020 election, tech companies created unprecedented protections to prevent the spread of misinformation on their platforms.

Twitter’s Trust and Safety team added fact-checking labels to false election claims and blocked some tweets from then-President Donald Trump about voter fraud from being distributed. Facebook appended links to its voter information center to election posts, offering reliable information about the legitimacy of mail-in ballots and voting in general. Within weeks of the race being called, YouTube began removing videos that alleged widespread election fraud.

Four years later, all of these platforms are in retreat.

Under Elon Musk, Twitter (now X) fired most of its content moderation staff, replacing them with a flawed crowdsourced fact-checking experiment. Facebook, now Meta, has scaled back its voter information center and reduced the visibility of political posts on Facebook and Instagram. And YouTube now allows claims of election fraud to remain online.

Faced with legal threats and political pressure, the social media giants have let their programs for combating misinformation wane, and most have declined to update their policies ahead of the 2024 election. A once-thriving ecosystem of academic and government programs designed to monitor the spread of hoaxes and foreign interference online has also shrunk, opening the door to threats against election workers and viral, unsubstantiated claims of voting irregularities.

This new environment has given rise to a flood of exaggerated claims of voting irregularities, which researchers say are intensifying as Election Day approaches. Some election officials and researchers warn that this ecosystem muddies voters’ access to accurate information, distorting their perception of the results and potentially contributing to political instability.

“After 2020, platforms really felt like ‘mission accomplished,’” said Color of Change President Rashad Robinson, whose digital civil rights group has pushed tech companies to adopt tougher rules against voter suppression. “And so [now] when you talk to them, they have very much come to believe that they know how to deal with the problem.”

Some experts argue that a saturation of online conspiracy theories can lead to dangerous actions in real life. A flood of voter fraud propaganda could make the public distrustful of the election results, for example, laying the political groundwork for GOP leaders to challenge the outcome.

Individual pieces of election disinformation can incite physical or digital attacks on election workers, election officials or immigrant communities.

“In 2020, we saw [election officials] chased home and attacked, [facing] death threats online, [and] photos of them circulated on Facebook,” said Nora Benavidez, senior counsel at the digital rights group Free Press.

Meta spokesperson Corey Chambliss said in a statement that protecting the 2024 U.S. elections remains a top priority for the social media giant and that “no technology company is doing more to protect its platforms – not just during elections, but always.”

“We have approximately 40,000 people around the world working on safety and security—more than during the 2020 cycle—and we have invested more than $20 billion in teams and technology in this area since 2016,” added Chambliss.

YouTube spokeswoman Audrey Lopez said in a statement that the company “will support the election using a multi-layered approach to effectively bring people high-quality, authoritative news and information.” A representative for X did not respond to requests for comment.

The corporate retreat is being driven by several factors. A conservative legal and political campaign over allegations of censorship has successfully pressured government agencies, technology companies and outside researchers to stop working together to detect election falsehoods. Musk, who has sharply scaled back X’s disinformation programs, has inspired other companies to roll back their own anti-propaganda measures.

Some platforms don’t just allow false claims of election fraud to spread; they actively encourage them. America PAC, the pro-Trump super PAC founded by X owner Elon Musk, last month launched an Election Integrity community page on X that encourages its more than 58,000 members to post examples of potential fraud in the 2024 election, creating a database of claims that includes many unverified and unsubstantiated accusations.

“What’s happening goes far beyond a business decision to have fewer curated news sources and less content moderation,” said Eddie Perez, who once led Twitter’s civic integrity team and is now a board member at the nonprofit OSET Institute. “Musk, by supporting Trump, is actually going to the other extreme, which is actively leveraging the power of the platform in favor of very specific anti-democratic viewpoints.”

The spate of election denial comes as the baseless claim that the 2020 election was rigged against former President Donald Trump has become a major talking point among conservatives. In the months leading up to the vote, these claims have swelled into a hodgepodge of conspiracy theories.

Since 2021, tech companies have opened the door to politicians challenging election results. Trump returned to Meta’s platforms, YouTube and X after the companies suspended his accounts following the Jan. 6 riot at the U.S. Capitol. Meta began allowing politicians to claim fraud in the 2020 election in political advertisements, although claims of fraud about the 2024 vote remain prohibited.

Twitter once banned misleading statements that could undermine public confidence in elections, “including false information about election results.” By 2023, a year after Musk took over the platform, that prohibition had disappeared from the company’s civic integrity policy, according to its website.

“They’re all getting to the point where they’re saying, ‘We can’t do this for every election in the past,’” said Katie Harbath, CEO of the technology consulting firm Anchor Change and a former public policy director at Facebook. “They might be more willing to take action on the 2024 election than spend a ton of time constantly rehashing the 2020 election and other past elections.”

Internet companies have also radically changed the way they disseminate accurate election information.

Four years ago, Meta operated a voter information center fed with ongoing election news from outside groups, including the Bipartisan Policy Center, a Washington think tank. The voter information center now directs users to static government websites, a change made after Meta lobbyists complained that relying on the think tank could make the company appear biased, according to two people familiar with the matter who spoke on the condition of anonymity to discuss private conversations.

Twitter’s curation team, which included several seasoned journalists, posted election-related articles from news outlets in Spanish and English to a dedicated elections page in the platform’s Explore tab. Today that program no longer exists; the company instead directs users to state voter registration pages.

Both X and Meta demote news stories in users’ feeds, limiting the reach of mainstream journalists sharing accurate election news. Meta removed Facebook’s news tab, which promoted credible stories about the election, and reduced the visibility of accounts that discuss politics and social issues.

While Meta argues that limiting news and politics spares users vitriolic content they don’t want, experts and activists say the move could reduce the quality and variety of information online, especially for people who don’t actively seek out quality journalism from other sources.

“I think in many ways the solution for companies in the election context is to simply eliminate the possibility of liability,” Benavidez said. “And one way to do that is to depoliticize the channels.”

Tech companies are also getting less support from federal agencies in combating disinformation this year, as the White House has been mired in legal battles with Republican state attorneys general. Their lawsuit, Murthy v. Missouri, argued that the Biden administration’s coordination with tech companies to combat election and vaccine misinformation amounted to censorship. The Supreme Court ultimately rejected the conservatives’ efforts in June, but communication between internet platforms and government watchdogs is now more limited.

The Department of Homeland Security has abandoned direct contact with companies such as Meta, Google and X after years of holding joint meetings with them to discuss election threats, including foreign influence campaigns, according to two people familiar with the matter who spoke on the condition of anonymity to discuss sensitive issues.

In a statement, the FBI said it shares information with social media companies and recently updated its procedures to let the platforms know they are “free to decide for themselves” whether to take action.

Meanwhile, federal programs to combat foreign disinformation are under threat. The Global Engagement Center, which was founded in 2016 to counter propaganda campaigns that undermine the United States, is expected to close in December unless Congress votes to renew its mandate. Sen. Chris Murphy (D-Conn.) and Sen. John Cornyn (R-Texas) co-sponsored an amendment to allow the program to continue, but it faces opposition from House Republicans who accuse the agency of “mission creep” and say its work may violate the First Amendment.

Secretary of State Antony Blinken “has made it clear publicly that continuing this vital work abroad is a priority,” the State Department said in a statement.

Some disinformation research programs have also scaled back or changed strategies to avoid investigations by House Republicans and conservative activists pursuing allegations of digital censorship. Others are simply having trouble conducting research after Twitter and Meta reduced or shut down access to tools widely used to track viral misinformation on their platforms.

Now researchers are waiting to see how tech companies’ reduced protections against disinformation and the rise of political propaganda will affect voters heading to the polls.

“The world of misinformation and disinformation is much broader than it was in 2020,” said Tim Harper, who leads election work at the Center for Democracy and Technology, a Washington nonprofit that advocates for digital rights and freedom of expression.

“How this plays out will be difficult to determine until after the election.”