Facebook's under-fire algorithms led conservatives to QAnon theories and liberals to left-wing 'Moscow Mitch' claims that McConnell was a Russian asset, new document drop reveals

  • Facebook's algorithms flooded users with extremist content and conspiracy theories based on their political beliefs
  • Researchers created dummy profiles in 2019 to study how the platform recommended content to Americans on opposite ends of the political spectrum 
  • Conservatives were led to far-right theories from QAnon, while liberals saw memes disparaging Trump
  • A third dummy profile created for an Indian user saw a slew of posts against Muslims and Pakistan amid the border crisis between the two countries
  • The revelation comes as a group of roughly two-dozen news outlets broke an embargo on documents leaked by former Facebook staffer Frances Haugen 

Facebook's algorithms inundated conservative users with QAnon conspiracies and other far-right content, while flooding liberal users' news feeds with far-left posts and memes like 'Moscow Mitch' that claimed the Senate majority leader was a Russian asset, newly released internal documents reveal.

Researchers created dummy profiles in 2019 for two fictitious female users - a liberal 41-year-old woman from Illinois called Karen, and a conservative from North Carolina called Carol, who was the same age.

Within days of activating the conservative account, named Carol Smith, the Facebook researcher started seeing posts supporting QAnon and other far-right groups. The liberal account, named Karen Jones, began seeing posts about collusion with Russia. And a third account, created for a fictitious user in India, saw graphic content depicting violence against Muslims.

All the material was fed to the dummy profiles through recommended groups, pages, videos and posts.  

Facebook's algorithms flooded users with extremist content and conspiracy theories based on their political beliefs. Liberal users were flooded with far-left posts and memes like 'Moscow Mitch' that claimed the Senate majority leader was a Russian asset  (file photo)

Facebook conducted the experiment to study how the platform recommended content to Americans on opposite ends of the political spectrum. The accounts only clicked on content recommended by Facebook's algorithms and found themselves locked in an echo chamber of extremist beliefs and inflammatory misinformation.

Facebook's research backs up whistleblower Frances Haugen's claims that the website's algorithm favored divisive content because it kept users coming back. 

The documents reveal that Facebook was aware of the power its algorithms held in leading users 'down the path to conspiracy theories' at least a year before the January 6 riot at the Capitol, which the tech giant is also accused of not doing enough to prevent.

Meanwhile, the documents also reveal how the platform stoked violence in countries in conflict in a similar way. A researcher created a third dummy account for a Facebook user in India, the social network's biggest market, and saw a slew of posts against Muslims and Pakistan amid the border crisis between the two countries.

Facebook's algorithms inundated conservative users with QAnon conspiracies and other far-right content (file photo)

Researchers created dummy profiles in 2019 to study how the platform recommended content to Americans on opposite ends of the political spectrum, according to newly released internal documents

The findings from the three dummy accounts were detailed among a trove of documents shared by whistleblower Frances Haugen, which were disclosed to the U.S. Securities and Exchange Commission and provided to Congress in redacted form by Haugen's legal counsel. 

Facebook's issues in India show that the platform can do even more damage in countries where it has fewer resources and insufficient expertise in local languages to gauge what constitutes hate speech or misinformation.

India is the company's largest market, but the platform said it only trained its A.I. systems in five of the country's languages, adding that it has human reviewers for some others, the New York Times reported. Even so, the Facebook report said that material targeting Muslims 'is never flagged or actioned.'

The researcher running the Facebook dummy profile in India wrote in a report that year, 'I've seen more images of dead people in the past 3 weeks than I've seen in my entire life total.'

She described how bots and fake accounts fanned the flames during the country's 2019 election. She saw a number of graphic posts as violence raged in Kashmir, the site of a long-running territorial dispute between India and Pakistan.

A third dummy profile created for an Indian user saw a slew of posts against Muslims and Pakistan amid the border crisis between the two countries. Above is an Indian fighter jet that was reportedly shot down by Pakistan in 2019

The researcher running the Facebook dummy profile in India wrote in a report that year, 'I've seen more images of dead people in the past 3 weeks than I've seen in my entire life total'

One post circulating in the groups she joined depicted a beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground.

The posts were unprompted, and the account was flooded with propaganda and anti-Muslim hate speech following the retaliatory airstrikes that Indian Prime Minister Narendra Modi, campaigning for re-election as a nationalist strongman, unleashed against Pakistan.

The company's global budget for time spent on classifying misinformation dedicates 87% to the United States and only 13% to the rest of the world, the New York Times reported. However, North American users comprise only 10% of the social network's daily active users, according to a document describing Facebook's allocation of resources.

Andy Stone, a Facebook spokesman, told the New York Times that the figures were incomplete and don't include the company's third-party fact-checking partners, most of whom are outside the United States.

Stone added that Facebook has invested significantly in technology to find hate speech in various languages like Hindi and Bengali, two of the most widely used in India, and has cut the amount of hate speech that people see globally in half this year.

'Hate speech against marginalized groups, including Muslims, is on the rise in India and globally,' Stone said. 'So we are improving enforcement and are committed to updating our policies as hate speech evolves online.'

In India, 'there is definitely a question about resourcing' for Facebook, but the answer is not 'just throwing more money at the problem,' Katie Harbath, who spent 10 years at Facebook as a director of public policy, told the New York Times. Harbath, who also worked directly on securing India's national elections, added that Facebook needs to find a solution that can be applied to countries around the world. 

A Facebook spokesperson told CBS News that the dummy profile experiment was instrumental in making changes to the platform, citing the conservative profile as 'a perfect example of research the company does to improve our systems and helped inform our decision to remove QAnon from the platform.'

Facebook made sweeping changes to its algorithms in 2018 to center news feeds on what it calls 'Meaningful Social Interactions,' but the internal research done the following year found that engaging with a post doesn't mean a user wants to see more of it.

The revelation comes as a group of roughly two-dozen news outlets broke an embargo on documents leaked by former Facebook staffer Frances Haugen

A Facebook spokesperson defended the platform by arguing that extremism was rampant before social media

In other words, a user could comment on a post to express dissatisfaction, and the algorithm would interpret it as a 'meaningful interaction' and show that user more related content.

'A state[d] goal of the move toward meaningful social interactions was to increase well-being by connecting people. However, we know that many things that generate engagement on our platform leave users divided and depressed,' a Facebook researcher wrote in a December 2019 report.

Dubbed 'We are Responsible for Viral Content,' the document noted that users had indicated what content they wanted to see, but the company ignored such requests for 'business reasons,' CBS News reported.

The report also alleges that users are twice as likely to see content reshared by others as content from the pages they choose to like and follow. The algorithm considers several metrics, each carrying a different weight, to determine what content will go viral.

For example, in 2018, the 'Like' button gave a post one point and reaction buttons – 'Love,' 'Care,' 'Haha,' 'Sad' and 'Angry' – gave a post five points. Resharing a post also gave it five points. Comments on posts, messages in Groups, and RSVPs to public events granted the content 15 points, and any that included photos, videos or links were awarded 30 points.
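
Based on those reported point values, a minimal sketch can illustrate how such a weighted engagement score might be computed; the function and signal names below are illustrative assumptions, not Facebook's actual code.

```python
# Minimal illustrative sketch of the weighted-engagement idea described above.
# Point values are the 2018 figures reported in the documents; all names and
# structure here are hypothetical, not Facebook's real code.

WEIGHTS_2018 = {
    "like": 1,                  # 'Like' button
    "reaction": 5,              # 'Love', 'Care', 'Haha', 'Sad', 'Angry'
    "reshare": 5,
    "comment": 15,              # also messages in Groups and RSVPs to public events
    "comment_with_media": 30,   # comments/messages including photos, videos or links
}

def engagement_score(interactions: dict) -> int:
    """Sum each interaction count multiplied by its reported weight."""
    return sum(WEIGHTS_2018.get(kind, 0) * count
               for kind, count in interactions.items())

# Example: a provocative post drawing 200 'Angry' reactions and 50 comments
# (200*5 + 50*15 = 1,750) outscores one with 1,000 plain likes (1,000).
print(engagement_score({"reaction": 200, "comment": 50}))  # 1750
print(engagement_score({"like": 1000}))                    # 1000
```

Under a scheme like this, outrage-provoking posts that draw reactions and comments can easily outrank posts that merely collect likes, which is the dynamic Facebook's own researchers went on to describe.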

Facebook researchers noted quickly that users were taking advantage of the algorithm by 'posting ever more outrageous things to get comments and reactions that our algorithms interpret as signs we should let things go viral,' reads a December 2019 report from a Facebook researcher.

'We consistently find that shares, angrys, and hahas are much more frequent on civic low-quality news, civic misinfo, civic toxicity, health misinfo, and health antivax content,' a Facebook researcher wrote in an internal note from November 2019.

A Facebook spokesperson defended the platform by arguing that extremism was rampant before social media and the platform isn't 'the source of the world's divisions,' regardless of how it could perpetuate them.

'Partisan divisions in our society have been growing for many decades, long before platforms like Facebook ever existed,' the spokesperson told CBS News.

Facebook has argued that its researchers constantly strive to fix the algorithm and consider thousands of metrics before content is shown to users. Anna Stepanov, Facebook's head of app integrity, told CBS News the rankings powering the News Feed evolve based on new data from direct user surveys.

Facebook has adjusted its algorithm rankings since introducing them in 2018. The platform reduced the weight of 'Angry' reactions from five points to 1.5 in January 2020, then lowered it to zero in September 2020.

Facebook announced in February that it is beginning tests to reduce the distribution of political content in the News Feed for a small group of users in the U.S. and Canada. Earlier this month, the program expanded to include other countries.
