Australia’s new cyber abuse laws would give the e-safety commissioner powers to order the removal of harmful content. Photograph: Pavlo Gonchar/SOPA Images/REX/Shutterstock

Australian cyber abuse laws won't address Coalition MPs’ concerns about deplatforming


E-safety commissioner says laws will be applied fairly but won’t police social media companies’ bans on users

New cyber abuse laws will help ensure moderation of social media is applied “fairly and consistently” but do not address concerns from some in the Coalition about deplatforming, the e-safety commissioner has said.

Julie Inman Grant made the comments to Guardian Australia, confirming that even with new powers to take down harmful material, Australia’s cyber watchdog will have no role policing social media companies’ decisions to remove content or ban users.

Debate within the Coalition was sparked by Nationals MP George Christensen calling for new laws to “stop social media platforms from censoring any and all lawful content created by their users”.

On Monday the communications minister, Paul Fletcher, argued that additional measures were not required after the widespread deplatforming of outgoing US president Donald Trump.

Although the treasurer, Josh Frydenberg, and the acting prime minister, Michael McCormack, expressed disquiet over the decisions of Twitter and other social media companies, neither senior Coalition figure called for further reform.

The disquiet was not universal. The prominent moderate MP Trent Zimmerman told Guardian Australia that Twitter and other social media companies were “within their rights” to remove Trump, whom he accused of “stoking the flames” of a threat to the peaceful transition of power in the US.

“It’s not an academic debate, it’s a real threat,” he said. “It’s hard for Australians who called on Twitter to censor the Chinese foreign ministry to argue that Twitter shouldn’t censor someone who is potentially engaged in a far more serious exercise of trying to stop the peaceful transition of government in his own country.”

The government already has two processes in train to improve regulation: a voluntary code on disinformation, to be devised by the social media giants and enforced by the Australian Communications and Media Authority; and a draft online safety bill proposing to give the e-safety commissioner powers to order the take-down of harmful content.

Neither process envisages the government preventing social media companies from applying community standards or from removing content that falls short of being unlawful. Both are aimed at increasing their responsibilities as publishers.

Inman Grant told Guardian Australia the legislation would be the first of its kind to tackle not just illegal content “but also serious online harms, including image-based abuse, youth cyberbullying, and … serious adult cyber abuse with the intent to harm”.

“But our powers do not extend to regulating political speech on either end of the spectrum, or for that matter, how individual social media companies choose to enforce their own terms of service,” she said.

“As private companies, these platforms have the right to ban users or pull down content they deem violates these terms.”

The e-safety commissioner said the platforms “aren’t always transparent in how they enforce and apply these policies and it’s not always clear why they may remove one piece of content and not another”.

Transparency would be improved by the online safety bill’s basic online safety expectations which would “set out the expectation on platforms to reflect community standards, as well as fairly and consistently implementing appropriate reporting and moderation on their sites”, she said.

“This could include, for example, the rules that platforms currently apply to ensuring the safety of their users online, including from threats of violence.”

The industry, science and technology minister, Karen Andrews, has said there “needs to be consistency to the way in which these tech companies moderate content, with transparent rules and guidelines for everyone”.

Andrews told the Sydney Morning Herald “it concerns me that an outcry by prominent people or certain groups on social media may dictate their decision making, when there is incredibly vile, hateful and dangerous content that frequently goes unchecked”.

Zimmerman agreed there were “inconsistent standards being applied” by social media companies, calling on them to “collectively or individually be more transparent about the circumstances they will moderate or suspend any individual”.

A spokesperson for Acma told Guardian Australia it “strongly encourages platforms to strengthen their transparency and accountability to their users”.

“Platforms should be clearer about their misinformation policies and how and when they apply them.”
