Tech giants pledge unprecedented action to tackle terrorist content

By Jennifer Duke

US technology behemoths Google, Facebook, Microsoft, Amazon and Twitter will join forces as part of an unprecedented push against the sharing of terrorist content in the aftermath of the Christchurch massacre.

In a nine-point agreement released overnight, the first of its kind, the competing multibillion-dollar businesses pledge to reconsider livestreaming safeguards and to work together on tools that improve the detection of violent extremist content.

The move comes amid intensifying global pressure on the companies to be more accountable for stopping violent content on their platforms after Facebook's live technology was used to share footage of the New Zealand terror attack in February.

A joint statement from the companies, released to The Sydney Morning Herald and The Age, said the horrifying nature of the attacks in New Zealand meant it was "right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence".

"Terrorism and violent extremism are complex societal problems that require an all-of-society response," the statement said.

"For our part, the commitments we are making today will further strengthen the partnership that governments, society and the technology industry must have to address this threat."

Under the agreement, all the tech giants have agreed to identify "appropriate checks on livestreaming, aimed at reducing the risk of disseminating terrorist and violent extremist content online".

This could include additional vetting measures, the moderation of specific events and other checks on live broadcasts.


The competing companies said they would develop new technology and collaborate with governments around the world, including by sharing data, in an effort to improve machine learning and artificial intelligence, and to develop open-source and shared digital tools.

A "crisis protocol" would also be put into place to respond to new urgent events, with information to be shared among the companies, governments and non-government organisations. Each company has agreed to create an incident management team to coordinate and share information.

After the Christchurch massacre was livestreamed, the tech giants struggled to keep the video off their services after different versions were uploaded millions of times across platforms including Facebook, Twitter and Google's YouTube.

New Zealand Prime Minister Jacinda Ardern, as part of a "Christchurch Call" pledge supported by a swathe of countries, has asked the social media giants to take a closer look at any software directing people to violent content and has pushed for examination of their algorithms. British Prime Minister Theresa May has also called for action from the social media giants.

The Australian government pushed through tough legislation in the wake of the attacks that could see tech companies face billions of dollars in fines, and their executives jailed, if they fail to quickly remove objectionable content.

The tech companies have also promised to update their terms of use to explicitly prohibit terrorist content, to ensure there are specific ways for users to report or flag it, and to publish reports about their enforcement efforts.

The digital giants have agreed to work together on tech tools to stop the spread of terrorist content. Credit: AP

Facebook on Wednesday also independently introduced a new "one strike" policy for livestreaming for its 2.3 billion users after widespread calls for limits on the technology.

Facebook vice president of integrity Guy Rosen said, in a post uploaded to the social media company's blog on Wednesday afternoon (AEST), that those who broke the social network's "most serious policies" on one occasion would now be blocked from using its livestreaming technology for 30 days.

The policy includes a zero-tolerance approach to those who link to or share terrorist or violent content.

These restrictions will soon be extended to stop the same users from creating advertisements.

Previously, content that broke Facebook's rules was removed by moderators, and users who repeatedly broke them were blocked for a period of time or, in extreme cases, banned.

"Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate," he said.
