Social Media Bans of Scientific Misinformation Aren't Helpful, Researchers Say

There are more effective ways to fight scientific misinformation than banning and removing content, according to the report.

Graffiti on telecom equipment in Batley, the UK, promoting conspiracy theories about 5G connectivity technology in July 2021.
Photo: Daniel Harvey Gonzalez / In Pictures via Getty Images (Getty Images)

Just let the anti-vaxxers, climate deniers, and 5G conspiracy theorists live without the constant threat of content removal and bans, lest they flee to even more radical hubs on niche sites and other obscure parts of the internet, the Royal Society has concluded.

The Royal Society is the UK’s national academy of sciences. On Wednesday, it published a report on what it calls the “online information environment,” challenging some key assumptions behind the movement to de-platform conspiracy theorists spreading hoax info on topics like climate change, 5G, and the coronavirus.


Based on literature reviews, workshops and roundtables with academic experts and fact-checking groups, and two surveys in the UK, the Royal Society reached several conclusions. The first is that while online misinformation is rampant, its influence may be exaggerated, at least as far as the UK goes: “the vast majority of respondents believe the COVID-19 vaccines are safe, that human activity is responsible for climate change, and that 5G technology is not harmful.” The second is that the impact of so-called echo chambers may be similarly exaggerated and there’s little evidence to support the “filter bubble” hypothesis (basically, algorithm-fueled extremist rabbit holes). The researchers also highlighted that many debates about what constitutes misinformation are rooted in disputes within the scientific community and that the anti-vax movement is far broader than any one set of beliefs or motivations.


One of the main takeaways: The government and social media companies should not rely on “constant removal” of misleading content, which the report says is not a “solution to online scientific misinformation.” The report also warns that if conspiracy theorists are driven out of places like Facebook, they could retreat into parts of the web where they are unreachable. Importantly, it draws a distinction between removing scientific misinformation and removing other content, like hate speech or illegal media, where takedowns may be more effective:

... Whilst this approach may be effective and essential for illegal content (eg hate speech, terrorist content, child sexual abuse material) there is little evidence to support the effectiveness of this approach for scientific misinformation, and approaches to addressing the amplification of misinformation may be more effective.

In addition, demonstrating a causal link between online misinformation and offline harm is difficult to achieve, and there is a risk that content removal may cause more harm than good by driving misinformation content (and people who may act upon it) towards harder-to-address corners of the internet.


(Advocates of banning neo-Nazis and hate groups, in other words, have nothing to fear from the Royal Society’s conclusions in this report.)

Instead of removal, the Royal Society researchers advocate developing what they call “collective resilience.” Pushing back on scientific misinformation may be more effective via other tactics, such as demonetization, systems that prevent the amplification of such content, and fact-checking labels. The report encourages the UK government to keep fighting scientific misinformation but to emphasize the society-wide harms that can arise from issues like climate change rather than the potential risks to individuals who take the bait. Other strategies the Royal Society suggests include continuing to develop independent, well-financed fact-checking organizations; fighting misinformation “beyond high-risk, high-reach social media platforms”; and promoting transparency and collaboration between platforms and scientists. Finally, the report notes that regulating recommendation algorithms may be effective.


“Clamping down on claims outside the consensus may seem desirable, but it can hamper the scientific process and force genuinely malicious content underground,” Frank Kelly, professor of the mathematics of systems at the University of Cambridge’s Statistical Laboratory and chair of the report, told Politico Europe.

“Science stands on the edge of error and the nature of the scientific endeavour at the frontiers means there is always uncertainty,” Kelly separately told Computer Weekly. “In the early days of the pandemic, science was too often painted as absolute and somehow not to be trusted when it corrects itself, but that prodding and testing of received wisdom is integral to the advancement of science and society.”


“Our polling showed that people have complex reasons for sharing misinformation, and we won’t change this by giving them more facts,” Gina Neff, professor of technology and society at the Oxford Internet Institute and a contributor to the report, told Computer Weekly. “... We need new strategies to ensure high-quality information can compete in the online attention economy. This means investing in lifelong information literacy programmes, provenance enhancing technologies, and mechanisms for data sharing between platforms and researchers.”

The idea that driving conspiracy theorists off mainstream platforms and deeper into the web only makes them stronger has also, in some cases, been spread by bullshit artists themselves as a last-ditch defense of their access to lucrative audiences on Facebook, Twitter, and YouTube. Often, they’ve been wrong. InfoWars kingpin Alex Jones, for example, claimed that the wave of bans that hit him on virtually every major social media site would only fuel his audience’s persecution complex. Instead, his web traffic plummeted, he lost access to key revenue sources, and he’s seemingly spent much of his dwindling resources fighting a crushing series of defamation lawsuits. Donald Trump faced a similar situation, at least as far as his ability to spread conspiracies about the 2020 elections went.


Research has shown that while de-platforming might not be a long-term solution to misinformation and hate speech, it does act as a constraint on the relatively small but disproportionately influential individuals and groups most committed to spreading it—like the handful of anti-vaxxers responsible for much of the deluge of hoax vaccine claims on the biggest social media sites. Some of the solutions advocated in the Royal Society report are backed by evidence, like demonetization, but others don’t appear to have been very effective in practice: Facebook’s fact-checking labels are questionably useful, for example.

The Royal Society report only briefly addresses demonetization, referring to direct methods like stripping misinformation promoters of their ability to collect ad revenue. But there are many ways for those who accrue vast audiences via mainstream social media sites to make money off-site, such as donations and crowdfunding campaigns, supplement and alternative-medicine sales, and constant Fox News appearances to promote their books. Allowing misleading scientific claims to remain up, albeit demonetized, thus doesn’t remove the financial incentive to post them in the first place.


It’s true, though, that bans on mainstream sites have driven large numbers of believers to alternative platforms where more openly extreme beliefs thrive, like messaging app Telegram. To some degree, the dispute comes down to whether this sort of self-imposed online quarantine is preferable to giving these web users access to audiences on major platforms, or whether it works at all; the report warns that these niche venues need to be subjected to “higher scrutiny.” The fear expressed in the Royal Society report seems akin to what has happened with apps like WhatsApp, which is mostly unmoderated and has become a prime vehicle for hate speech and conspiracy theories in places like India and Brazil.

Context is also important—the findings of the report, especially those tied to surveys, focus on the UK and may not be universally applicable. For example, vaccination rates are far higher in the UK than in the U.S., and a mob of conspiracy theorists hasn’t recently stormed Parliament.


As Computer Weekly noted, researchers at the Election Integrity Partnership reached similar conclusions to the Royal Society report on the separate issue of hoax claims about 2020 elections in the U.S., finding that “widespread suppression” wasn’t necessary to reduce their spread. However, those researchers also advocated stronger sanctions against social media accounts and media organizations that are “repeat offenders,” including bans.
