
Disinformation is ruining our elections. This is how we can save them.

As the 2024 US presidential election approaches, along with other key elections around the world, the spread of misinformation online is reaching new heights. Manipulative content created by foreign adversaries, cybercriminals, and even artificial intelligence systems is flooding social media platforms. Algorithms that most users barely understand amplify this misinformation, making it a critical threat to our democratic process and to the very concept of a shared, fact-based reality.

Disinformation is dangerous because it erodes trust, corrodes public discourse, and distorts decision-making. When lies masquerade as truth, a society's ability to make informed choices is compromised and the democratic process itself becomes corrupted. At its worst, disinformation can fuel polarization, incite violence, and destabilize societies, making it nearly impossible to separate fact from fiction.

Who is behind this wave of false information? State actors such as Russia, China, and Iran are notorious for attempting to manipulate social media to achieve political and strategic goals. These foreign operations seek to sow division, disrupt elections, and sway public opinion, often through targeted disinformation campaigns. The problem also involves insiders and financially motivated cybercriminals who use disinformation for personal, political, or financial gain. These groups are becoming increasingly sophisticated, using artificial intelligence to create content that is more convincing and harder to detect.

Worse still, social media companies are compounding the problem. These platforms amplify misinformation with algorithms designed to maximize engagement, often favoring sensational or divisive content because it gets users to click. The more attention a post gets, the more likely it is to be shown to others, regardless of its accuracy. In other words, the very mechanisms that make these platforms profitable also make them breeding grounds for disinformation. Their incentive structures prioritize growth, user retention, and advertising revenue over content integrity.
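To make the mechanism concrete, here is a minimal toy sketch of engagement-first feed ranking. It is not any platform's actual code; the posts, scores, and field names are invented for illustration. The structural point is simply that when the sort key is predicted engagement alone, accuracy never influences what surfaces first.

```python
# Toy sketch of engagement-based feed ranking (illustrative only;
# not any real platform's algorithm). Each post carries an invented
# predicted-engagement score and an accuracy score.

posts = [
    {"id": 1, "text": "Measured policy analysis", "engagement": 0.2, "accuracy": 0.95},
    {"id": 2, "text": "Outrageous viral claim",   "engagement": 0.9, "accuracy": 0.10},
    {"id": 3, "text": "Local election reminder",  "engagement": 0.4, "accuracy": 0.90},
]

def rank_feed(posts):
    # Accuracy never enters the sort key: the feed optimizes for clicks,
    # so the least accurate, most sensational post surfaces first.
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

for post in rank_feed(posts):
    print(post["id"], post["text"], f"(accuracy={post['accuracy']:.2f})")
```

Running the sketch puts the "outrageous viral claim" at the top of the feed despite its 0.10 accuracy score, which is the incentive problem in miniature.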

Why aren't these platforms doing more to stop the spread of misinformation? The answer lies in their incentives. Social media companies benefit financially from high levels of engagement, which controversial or misleading content reliably generates. Moreover, they have little legal obligation to intervene. Section 230 of the Communications Decency Act of 1996, written long before the rise of modern social media, protects platforms from liability for content posted by their users. This outdated law treats these tech giants as neutral message boards, even though their algorithms actively shape what users see. Until the law is reformed, social media companies will have no real incentive to address this issue directly.

In the absence of strict regulatory oversight, we have defaulted to relying on individual users to combat misinformation. People are expected to recognize fake news, filter it out, and make informed decisions about what to believe. But this approach is fundamentally flawed.

Many struggle to identify misinformation

Relying on individuals to detect and counter the growing tide of misinformation on social media is a strategy destined to fail. Simply put, people are bad at recognizing manipulated content. Decades of cybersecurity experience have taught us that even the most educated and cautious people fall for phishing scams, click on fraudulent links, and unwittingly approve malicious access requests. The same goes for misinformation. No matter how much media literacy or security training people receive, many will still struggle to recognize sophisticated misinformation, especially as AI-generated content becomes more realistic and harder to distinguish from the truth.

So what can we do about it? First and foremost, we need stronger regulation to hold social media companies accountable. Amending Section 230 to make platforms liable not for individual user posts but for how their algorithms amplify and propagate false or harmful content could finally push social media companies to take meaningful action. This would not suppress freedom of speech or expression, as some fear, but it would force these companies to bear greater responsibility for the consequences of their systems. In short, this change could push platforms to develop more powerful tools to detect and curb the spread of false information.

Additionally, AI tools themselves should be subject to stricter oversight. Despite well-publicized efforts by industry coalitions to set their own rules and boundaries around the ethical use of AI, cybersecurity experts know all too well that bad actors aren't waiting for legislation to catch up. These voluntary rules, while positive, are not sufficient to protect the average person in real time.

More sophisticated tools

Finally, we need to lift this burden from individuals and give users more sophisticated tools to help them navigate the digital world safely. Just as filtering and security monitoring tools help prevent phishing attacks in enterprise environments, we need similar solutions to combat disinformation: technological tools that work in real time and help users assess the credibility of the information they encounter.
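As a rough illustration of what such a tool might look like, here is a hypothetical sketch of a client-side credibility checker, modeled loosely on how enterprise mail filters flag suspicious links. The domain lists, scores, and thresholds are all invented for illustration; a real system would draw on curated, continuously updated intelligence feeds rather than hard-coded sets.

```python
# Hypothetical sketch of a real-time link credibility checker
# (illustrative only; all domains, scores, and thresholds are invented).

from urllib.parse import urlparse

KNOWN_OUTLETS = {"apnews.com", "reuters.com"}        # assumed allowlist
FLAGGED_DOMAINS = {"totally-real-news.example"}      # assumed blocklist

def credibility_score(url: str) -> float:
    """Return a rough 0..1 credibility estimate for a shared link."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in FLAGGED_DOMAINS:
        return 0.0
    if domain in KNOWN_OUTLETS:
        return 0.9
    return 0.5  # unknown source: prompt the user to verify before trusting

def annotate(url: str) -> str:
    # Translate the score into the kind of inline warning a user
    # would see next to a shared link, phishing-banner style.
    score = credibility_score(url)
    if score < 0.3:
        return f"WARNING: {url} matches known disinformation sources"
    if score < 0.7:
        return f"CAUTION: {url} is unverified; check before sharing"
    return f"OK: {url} comes from a widely recognized outlet"

print(annotate("https://www.totally-real-news.example/shock-story"))
print(annotate("https://apnews.com/article/election-2024"))
```

The design point is that the warning appears at the moment of exposure, in the user's feed, rather than expecting the user to go verify claims on their own afterward.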

The 2024 elections will be a stark reminder that we are at a critical crossroads. Foreign actors and cybercriminals will continue to use social media to undermine our democratic processes, and the threat will extend well beyond November. The real question is whether we will confront this problem head-on or continue to place the burden on individuals, hoping they can solve a crisis that should never have been their responsibility in the first place.