better late than never —

Facebook promises to beef up “election integrity” efforts heading into 2020

2016 is behind us, but foreign interference looks just as likely in 2020.

Facebook's election "War Room" on Wednesday, Oct. 17, 2018.

With 379 long, long days to go until the 2020 US presidential election, Facebook is promising to do a better job than it did in 2016 of preventing bad actors, both foreign and domestic, from abusing its platform to potentially affect the outcome.

The company unveiled a slew of "election integrity efforts" today, saying the measures will "help protect the democratic process" by identifying threats, closing vulnerabilities, and reducing "the spread of viral misinformation and fake accounts."

The sheer scope of the problem is admittedly mind-boggling but perhaps unsurprising, given that Facebook's most recent investor report claimed more than 2.4 billion monthly active users on the platform (Instagram also boasts more than 1 billion MAUs). Company CEO Mark Zuckerberg said in a call with reporters that the company spends "billions" on security annually, totaling more in a given year than Facebook's annual revenue at the time it went public. (For context, Facebook went public in 2012; it posted total revenues of just about $5 billion for that fiscal year. Its total revenue for 2018 was about $55.8 billion.)

All that money goes to a combination of projects that aim to reduce both fake news and voter-suppression efforts, Zuckerberg said. "I'm confident we are more prepared now" than in 2016, he added.

Foreign interference

Facebook's biggest target is "coordinated inauthentic behavior"—big bunches of super-fake accounts posting super-fake stuff—that originates from a certain geographic area. Today, in tandem with its announcement about election security, the company updated its policy on how it will handle different kinds of coordinated efforts.

"We're constantly working to detect and stop this type of activity because we don't want our services to be used to manipulate people," Facebook said. To that end, the company works with the intelligence community and law enforcement to identify and take down disinformation campaigns, particularly around elections.

Facebook has for many months now posted regular updates when it takes down campaigns of coordinated inauthentic behavior. While some nations do pop up repeatedly, attempts to spread disinformation occur worldwide. The most recent, today, involved campaigns originating in Russia and Iran. On October 3, Facebook pulled down accounts associated with Egypt, Indonesia, Nigeria, and the United Arab Emirates. On September 20, it identified a cluster in Spain. The September 16 announcement involved Iraq and Ukraine. A late August announcement targeted accounts that originated in Myanmar, and so on.

Facebook also said it plans to label pages and accounts more clearly so that users can understand who is behind the information they see on their feeds:

We're adding more information about who is behind a Page, including a new "Organizations That Manage This Page" tab that will feature the Page's "Confirmed Page Owner," including the organization's legal name and verified city, phone number or website.

Any page that wants to run ads about "social issues, elections, or politics in the US" will be required to add that information, Facebook said. The company also plans to label content originating from "media outlets that are wholly or partially under the editorial control of their government" as state-controlled media.

Not a hypothetical problem

The question for Facebook is not if state actors are attempting to sway elections in a variety of countries, but how many are trying—and how subtly.

Russia's use of social media to influence the outcome of the 2016 US presidential election is at this point extremely well-documented. The Senate Intelligence Committee issued a report (PDF) outlining the methods that Russia's Internet Research Agency (IRA) used to "conduct an information warfare campaign designed to spread disinformation and societal division in the United States," including targeted ads, fake news articles, and other tactics. While the IRA used and uses several different platforms—including Twitter, YouTube, and Reddit—its primary vectors for outreach are Facebook and Instagram.

Facebook said today it removed 50 Instagram accounts and one Facebook account that "showed some links to the Internet Research Agency (IRA) and had the hallmarks of a well-resourced operation that took consistent operational security steps to conceal their identity and location." The accounts primarily shared and amplified memes, and they purported to be basically all things divisive—pro-Trump, anti-Trump, progressive, conservative, etc. They boasted about 246,000 Instagram followers.

In addition to trying to keep networks of foreign actors out of the fray, Facebook said it is beefing up attempts to protect actual, valid accounts belonging to campaigns and individuals affiliated with them. The "Facebook Protect" program not only includes basic security measures—like requiring two-factor authentication—but it also links participating, registered pages and accounts together. That way, if Facebook detects an attack against one account in the cluster, it can proactively move to review activity and protect other accounts affiliated with the same organization.

Authentic—but bad—behavior

Foreign interference is far from the only problem in US politics at the moment. Plenty of homegrown organizations are also spreading disinformation on Facebook.

The company hopes to combat fake news by labeling it as such, very clearly. Stories that have flunked an independent fact-check will be prominently flagged, Facebook said, with a "false news" label covering the content by default. Users who share stories that have been rated false will also see a pop-up explaining why the post is considered false before they choose to cancel or share anyway.

"In many countries, including in the US, if we have signals that a piece of content is false, we temporarily reduce its distribution pending review by a third-party fact-checker," the company added.

Facebook also addressed attempts at voter suppression. Organizations have been barred since 2018 from placing advertisements that contain deliberate misinformation about voting, such as encouraging residents to go to the polls on a Thursday (US federal elections always take place on a Tuesday) or claiming you can vote by text message (not a real thing in the United States).

Facebook said it is expanding those policies to include banning paid advertising that "suggests voting is useless or meaningless, or advises people not to vote." Citing its hate-speech policy, the company also said it prohibits ads that "exclude people from political participation on the basis of things like race, ethnicity, or religion." That includes ads saying not to vote for a candidate because of their race or that threaten violence or intimidation.

Facebook also said it will upgrade its ad library, a database that documents political advertising, to include an overall US presidential-candidate spending tracker. The database will also show what type of audience was shown the ads, where that audience lives, and which platforms it was using.

That said, the social media giant has confirmed several times that its community guidelines—which prohibit, among other things, certain kinds of hate speech—do not apply to politicians. Politicians' posts are not subject to fact-checking, and political ads are exempt from the ad guidelines that apply to other kinds of advertisements. That means a candidate or sitting public servant can lie in a political ad and Facebook will still accept their money and run the promotion. (The politicians just can't include fake buttons or profanity.)

So what happens if ads from a verified campaign are considered to cross the line into voter suppression? We've got 379 days left to find out.
