A pro-Trump mob breaches the Capitol on 6 January, days before Trump was banned from Twitter. Photograph: Carol Guzy/Zuma Wire/Rex/Shutterstock

Banning Trump won't fix social media: 10 ideas to rebuild our broken internet – by experts


Away from the vitriol, researchers are investigating concrete steps companies, officials and the rest of us can take to tackle the crisis

It was nearing midnight on Tuesday, 12 January when the final plank of Donald Trump’s social media platform fell away. Facebook, Instagram, Twitter, Twitch, Snapchat and, finally, YouTube had all come to the same conclusion: that their platforms – multibillion-dollar American companies that dominate American political discourse – could not be safely used by the president of the United States.

In less than a week, a new president will take office. But considering the role social media played in elevating Trump to the presidency and its part in spreading misinformation, conspiracy theories and calls for violence, it is clear that the end of the Trump presidency won’t provide an immediate fix. There is something fundamentally broken in social media that has allowed us to reach this violent juncture, and the de-platforming of Trump is not going to address those deeper pathologies.

Much of the ensuing debate about the Trump bans has played out along predictable and unproductive lines, with free speech absolutists refusing to acknowledge that harassment and hate lead to the silencing of marginalized groups and those who support tougher crackdowns tending to elide good-faith concerns about overly aggressive censorship. It’s a fight we’ve been having about the internet for so long that people on either side can probably recite the lines of the other from memory.

Still, away from the clamor of social media, meaningful and innovative work is being done by researchers and activists who have been living with and studying the brokenness of social media for years now. We asked a dozen of them to help us move the debate forward by sharing their proposals for concrete actions that can and should be taken now to prevent social media platforms from damaging democracy, spreading hate, or inciting violence.

The banning of Donald Trump from social media will not heal a broken system. Photograph: Denis Charlet/AFP via Getty Images

The result is this list of 10 ideas to address our broken internet. Some of these proposals are for the tech companies, some would need to be enacted by governments, and some are challenges for us all. None would be a panacea, but any of them would be a good start. At the very least, they can help us move beyond the strictures of the ossified free speech debate and start having better, more nuanced discussions about the path forward.

Some responses have been lightly edited for length and clarity.

Hire 10,000 librarians for the internet

We need 10,000 librarians hired with the mandate of fixing our information ecosystem. This workforce would be global in size but local in scope, focused on building systems for curating timely, local, relevant and accurate information across social media platforms. Their work would be similar to the public interest obligation already applied to radio, where broadcasters are legally required to “serve the public interest, convenience and necessity” as a condition of having access to the airwaves. Social media companies should tune into the frequency of democracy, rather than the static of disinformation.

  • Joan Donovan, research director at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy

Fund training for teachers, our ‘informational first responders’

Social media feeds people lies, hate, and radicalizing messages. But people also bring lies, hate, and radicalizing messages to social media – or at least carry those interests with them when they go online, in turn triggering recommendation algorithms to serve up false, bigoted, and radicalized offerings. Something needs to be done about all those beliefs and choices too – because otherwise, even the most sweeping technological or regulatory fixes will be treating the symptoms, not the underlying causes.

We need funding for empirical, longitudinal research into media literacy education, and funding for the development of collaborative media literacy curricula. We also need funding for research to assess what media literacy training K-12 teachers themselves need, and funding to ensure that teachers across subject areas receive that training, so that they are equipped for their role as informational first responders. We do a great disservice to our teachers when the message is: here’s nothing, now go save democracy.

To be clear, educational responses aren’t more important than technological responses. They’re mutually reinforcing. When more people have the appropriate media literacy training, there will be less pollution on social media that needs cleaning up. That will make it easier to see where the breaks are, and how to target regulatory and platform policy solutions as accurately as possible.

  • Whitney Phillips, co-author of You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape

Understand the limitations of the first amendment …

Considering remedies for the proliferation of white supremacist harassment and organizing online requires that society de-colonize our assumptions about racism, speech, dissidence and violence. Content management rules, the rule of law and big tech profits must all be grounded in three underlying assumptions:

Demonstrators in Brooklyn protest against the death of George Floyd, in June. Photograph: Caitlin Ochs/Reuters

1. First amendment protections are not equally shared across all members of American society. Black, indigenous and other people of color in the US have been historically and structurally censored in law and de facto social norms. The suppression of Black dissent by police violence, for example, has been on increasingly visible display. It is not enough to have the right to free speech; one must have the power to exercise it as well. As a result, simply relying on free speech protections is insufficient to protect everyone’s free speech online.

2. It isn’t simply speech that is being acted upon as the litmus test for social media bans. It’s the real-world impact of that speech. White supremacists aren’t being kicked off platforms because of words; they are being denied the ability to amplify violent racism and organize real-world harm.

3. Protecting speech and protecting democracy are equally important but not always aligned. We cannot understand or deal with freedom without also dealing with power and governance.

  • Malkia Devich-Cyril, founding director and senior fellow at MediaJustice

Americans are attached to the myth that the United States is uniquely protective of free speech. But the American view of free speech, exported to the world through the domination of US-based tech companies, is simpleminded and elitist: free speech means ensuring that the most violent, racist, misogynist individuals feel free to speak without consequences, no matter the death, destruction and silencing of other people’s speech that results. Even in the midst of an insurrectionist uprising openly fueled and organized on social media platforms, the loudest concerns are about what those platforms are finally taking down, not what they have been leaving up. The tech industry’s claimed commitment to free speech, reinforced through the sweeping immunity it receives under Section 230, contributes to the American public’s confusion between government action restricting speech, which implicates the first amendment, and private action, which does not. It also erases the distinction between speech and conduct, and between protected and unprotected speech.

  • Mary Anne Franks, author of The Cult of the Constitution: Our Deadly Devotion to Guns and Free Speech

… and think beyond the US and Europe

From my perspective, the first step is to broaden the conversation. For too long, we’ve viewed platform issues from a US- and Euro-centric framework, when the worst implications of bad regulation and governance occur in the so-called global south. Companies should immediately expand their consultations to be truly inclusive of a more diverse range of civil society. Furthermore, it’s high time that companies adopt the Santa Clara Principles on Transparency and Accountability in Content Moderation, which are supported by dozens of digital and human rights groups around the world and present a set of baseline minimum standards that include public-facing transparency about how rules are implemented and content is amplified, notice to users, and a path for remedy when content moderation errors are made.

  • Jillian York, director for international freedom of expression at the Electronic Frontier Foundation

Protect the journalists and researchers who study platforms

We cannot fix what we do not understand. The social media companies are shaping public discourse by algorithmically amplifying the speech that is most likely to maximize user “engagement”. This decision to prioritize “engagement” appears to be fueling a destructive feedback loop that promotes the spread of hate, misinformation, and propaganda.

Unfortunately, we do not fully understand this phenomenon, or how we might fix it, in large part because the social media companies generally forbid independent research into their platforms that relies on the basic tools of digital journalism and research. Facebook’s terms of service, for example, forbid journalists and researchers from collecting information from the platform through “automated means”, even if that research scrupulously respects user privacy and would be in the public interest.

Donald Trump addresses the violence at the US Capitol, as seen on a screen in the White House’s Brady press briefing room, on 6 January. Photograph: Shawn Thew/EPA

There are at least three ways to overcome this barrier to independent research. First, the companies should amend their terms of service to create a safe harbor for privacy-respecting research in the public interest. Second, if they do not, Congress and other legislative bodies should consider legislation immunizing bona fide investigations of the platforms. And third, courts should refuse to enforce the social media companies’ terms of service against research projects that respect user privacy and would be manifestly in the public interest.

  • Alex Abdo, litigation director at the Knight First Amendment Institute
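Abdo does not spell out what “privacy-respecting” collection looks like in practice, but one common-sense reading is a research pipeline that discards identifiers and keeps only aggregates. The sketch below is purely illustrative: the field names are invented for the example and do not describe any platform’s real data format or API.

```python
import hashlib
from collections import Counter

def summarise_public_posts(posts):
    """Reduce collected public posts to aggregate, non-identifying statistics.

    Assumes each post is a dict with hypothetical keys 'author_id', 'domain'
    (of any shared link) and 'shares'; only aggregates leave this function,
    and raw author identifiers are never stored.
    """
    domain_shares = Counter()
    hashed_authors = set()
    for post in posts:
        domain_shares[post.get("domain", "(no link)")] += post.get("shares", 0)
        # Hash author identifiers immediately so individuals cannot be re-identified.
        hashed_authors.add(hashlib.sha256(str(post["author_id"]).encode()).hexdigest())
    return {
        "total_posts": len(posts),
        "distinct_authors": len(hashed_authors),
        "top_domains_by_shares": domain_shares.most_common(10),
    }
```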

Change recommendation algorithms to promote accurate information – and reward those who fight online harms

Companies should evaluate and make changes to how they recommend content and their entire advertising system. Indoctrination into hate ideologies online is often propelled by the companies themselves recommending ever more content geared towards fortifying users’ existing world view, rather than opening them up to accurate information.

Companies should provide algorithmic rewards for people and groups actively working to combat disinformation and misinformation online. There are people who are already actively engaging in “technological placekeeping” – the practice of active care and maintenance of digital places, both as a defense mechanism against manipulation and mis/disinformation and in the service of preserving the health of their respective communities’ uses of information and communication. Just as they seek out and work with influencers producing pop culture content, these companies should think about how to incentivize and reward those on the front line already doing the work to create safe spaces on their platforms.

  • Brandi Collins-Dexter, visiting fellow at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy and senior fellow at Color Of Change
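Collins-Dexter does not propose a specific algorithm, but the idea of letting accuracy and “placekeeping” work outrank raw engagement can be made concrete with a toy example. The re-ranking sketch below is an illustration only; the accuracy signal and placekeeper flag are hypothetical inputs invented for this example, not real platform features.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float      # predicted likelihood of clicks/shares, 0..1
    accuracy_score: float        # hypothetical credibility/fact-check signal, 0..1
    author_is_placekeeper: bool  # account recognised for counter-disinformation work

def rank_feed(posts, accuracy_weight=2.0, placekeeper_boost=1.25):
    """Order candidate posts so accuracy and 'placekeeping' outweigh raw engagement."""
    def score(p):
        # Raising the accuracy signal to a power > 1 penalises low-accuracy content
        # far more sharply than it rewards high-accuracy content.
        base = p.engagement_score * (p.accuracy_score ** accuracy_weight)
        return base * (placekeeper_boost if p.author_is_placekeeper else 1.0)
    return sorted(posts, key=score, reverse=True)

# Example: a highly engaging but inaccurate post ranks below an accurate debunk.
feed = rank_feed([
    Post("viral-rumour", engagement_score=0.9, accuracy_score=0.2, author_is_placekeeper=False),
    Post("debunk-thread", engagement_score=0.5, accuracy_score=0.9, author_is_placekeeper=True),
])
```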

Supporters of Donald Trump at the US Capitol on 6 January. Photograph: Mike Theiler/Reuters

Implement strong rules against harassment, hate, and harm

1. Content protections: Stop amplifying hate and harm. Define harassment and legislate its ban from social media. At Reddit in 2015, we defined it as “systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that Reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them”.

2. Privacy protections: Ban unauthorized nude photos and revenge porn. Ban the posting of personally identifiable information.

3. For people who are repeat offenders: Ban users who promote hate and harm. Track them to make sure they don’t continue.

4. For groups of people whose content has been banned: Don’t let them re-create the same problem that has been eliminated in 1 or 2. No new accounts if they’ve been banned. No new pages or subreddits or other sections for the same groups of people.

5. For cross-platform harassment and harm: Take a page from the fraud-fighting sector and create a shared database of really problematic accounts that should not get the benefit of the doubt. This should cover child pornography, domestic terrorism, other terrorism, online harassment and the sharing of personal information.

  • Ellen Pao, CEO of Project Include and former CEO of Reddit
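Pao’s fifth point borrows from fraud prevention, where institutions compare notes without exchanging raw personal data. As a rough illustration of that mechanism only (nothing here describes an existing system, and every name is invented for the example), such a shared registry might store salted hashes of the contact details attached to banned accounts:

```python
import hashlib

# Hypothetical cross-platform registry, loosely modelled on fraud and abuse
# hash-sharing: member platforms contribute salted hashes of the contact details
# (email, phone) attached to accounts banned for the categories listed above,
# rather than raw identifiers.
SHARED_SALT = "example-consortium-salt"   # in practice, agreed privately by members
shared_ban_hashes = set()

def fingerprint(contact: str) -> str:
    """One-way fingerprint so members can compare bans without exchanging raw data."""
    normalised = contact.strip().lower()
    return hashlib.sha256(f"{SHARED_SALT}:{normalised}".encode()).hexdigest()

def report_ban(contact: str) -> None:
    """Called by a member platform when it bans an account for a covered harm."""
    shared_ban_hashes.add(fingerprint(contact))

def needs_extra_scrutiny(contact: str) -> bool:
    """At signup or reinstatement, flag accounts whose details match a ban elsewhere."""
    return fingerprint(contact) in shared_ban_hashes
```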

Enforce the rules platforms already have

In recent days, the various platforms that Trump has used so effectively to cultivate and directly reach his “base” over the past four years have, at long last, begun enforcing their extant rules and applying them to him.

This effort has not required a drastic shift in platform terms of service (although internal accepted policy and practice may be another matter). Nor has it required the invention of a powerful new machine-learning algorithm or the onboarding of thousands of new commercial content moderators. It has simply been a matter of Trump no longer receiving carte blanche to do and say whatever he likes without consequence, and of the rules of engagement that apply to all users of Twitter, of YouTube, of Facebook and of Snapchat applying, likewise, to Trump.

Trump was banned from Twitter on 8 January. Photograph: Pavlo Gonchar/SOPA Images/REX/Shutterstock

What, then, is my grand solution going forward? That platforms enforce the rules, transparently and with immediacy for all users, and hold them accountable for their behavior online. Do this even more so, and not less, when the person on the other end of the tweet holds the world hostage to his mercurial, dangerous whims. To have offered Trump a lesser standard, to have refused to hold him to account until the 11th hour, has put American democracy and the stability of the globe on the line and made social media firms complicit in the destabilization from which we have yet to emerge.

  • Sarah T Roberts, PhD, co-director of the UCLA Center for Critical Internet Inquiry and author of Behind the Screen: Content Moderation in the Shadows of Social Media

Social media platforms must stop giving deference to world leaders, government officials, and elected officials who exploit their platforms to incite hate, harm, and harassment against their political opposition. Not holding them to the same standard as every other user on the platform gives already powerful people another weapon to wield because they know their content won’t be removed. Donald Trump understood this advantage and used it to great effect over the course of his presidency to spread hateful rhetoric, smear his enemies, and call his supporters to DC for what became a violent attempted coup. This is a teachable moment for the tech companies, and concrete action on changing world leaders’ policies has the potential to save the lives of activists, dissidents, and citizens across the globe.

  • Melissa Ryan, CEO of Card Strategies and editor of Ctrl Alt-Right Delete

Address the ‘architectural exclusion’ of marginalized communities from platforms

Some of our content moderation problems result from the architectural exclusion of communities from using platforms and taking part in discourse on equal terms. Many communities are affected by this, but by way of one example, there is much more that platforms can do to be accessible to users with disabilities: requiring authoring tools for closed captions and image descriptions; developing interfaces that nudge users toward making their own content accessible; better developing and tuning general-purpose artificial intelligence techniques such as speech-to-text and text-to-speech for accessibility purposes; including and hiring people with disabilities in the testing and development of new services; and enforcing compatibility with web and app store accessibility standards. Enforcing and extending existing regulatory mandates for accessible design and functionality under both disability and telecommunications law, and providing broad exemptions in copyright law for making user-generated content available in accessible formats, are critical avenues for addressing these problems.

  • Blake Reid, director of the Samuelson-Glushko Technology Law & Policy Clinic at Colorado Law School
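Reid’s mention of interfaces that nudge users toward making their own content accessible is concrete enough to sketch. The example below is a hypothetical pre-publish check, not any platform’s actual feature; its names and fields are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DraftPost:
    text: str
    image_alt_texts: list = field(default_factory=list)  # one entry per attached image; None if missing

def accessibility_nudges(draft: DraftPost) -> list:
    """Return prompts to show before publishing, nudging the author to describe images."""
    nudges = []
    for i, alt in enumerate(draft.image_alt_texts, start=1):
        if not alt or not alt.strip():
            nudges.append(
                f"Image {i} has no description. Add alt text so screen-reader users aren't excluded."
            )
    return nudges

# Example: a draft with two images, one missing alt text, produces one nudge.
draft = DraftPost(text="Our new office!", image_alt_texts=["Front of the building at dusk", None])
print(accessibility_nudges(draft))
```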

Facebook’s campus in Menlo Park, California. Photograph: Josh Edelson/AFP/Getty Images

Reform tech’s liability shield to create accountability for the conduct – not speech – of users

Section 230 [the US law that shields tech platforms from liability for third-party content] allows powerful tech companies to invoke the laissez-faire principles of the first amendment to absolve themselves of responsibility for abuse and extremism that flourish on their platforms, undermining the concept of collective responsibility necessary for a functioning society, both online and off. Section 230 should be amended so that online platforms are no longer immunized from liability for the conduct, as opposed to speech, of their users, or when these platforms encourage, profit from, or demonstrate deliberate indifference to harmful content.

  • Mary Anne Franks, author of The Cult of the Constitution: Our Deadly Devotion to Guns and Free Speech

CEOs and senior executives should be directly fined by the Federal Trade Commission for harms that stem directly from their platforms and that could reasonably have been predicted and addressed.

  • Brandi Collins-Dexter, visiting fellow at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy and senior fellow at Color Of Change
