AI deemed ‘too dangerous to release’ makes it out into the world

Extremists could generate 'synthetic propaganda', automatically creating white supremacist screeds, researchers warn

Andrew Griffin
Thursday 07 November 2019 12:21 GMT

An AI that was deemed too dangerous to be released has now made its way out into the world.

Researchers had feared that the model, known as "GPT-2", was so powerful that it could be maliciously misused by everyone from politicians to scammers.

GPT-2 was created for a simple purpose: fed a piece of text, it predicts the words that will come next. By repeating that prediction, it can produce long passages of writing that are largely indistinguishable from those written by a human being.
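To make that prediction loop concrete, here is a minimal sketch of GPT-2-style text generation. It uses the open-source Hugging Face transformers library and its small public "gpt2" checkpoint; the library, prompt and sampling settings are illustrative assumptions, not details from the article.

```python
# A minimal sketch of next-word prediction with GPT-2, assuming the
# Hugging Face "transformers" library and its small "gpt2" checkpoint
# (roughly the 124-million-parameter model discussed below).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Scientists announced today that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# generate() repeatedly predicts the next token and appends it,
# extending the prompt into a longer passage.
output = model.generate(
    input_ids,
    max_length=60,    # total length in tokens, prompt included
    do_sample=True,   # sample from the distribution rather than taking the top word
    top_k=50,         # consider only the 50 likeliest next tokens at each step
    temperature=0.9,  # slightly flatten the distribution for variety
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Each pass over the text yields a probability for every possible next token; sampling from those probabilities, rather than always taking the likeliest word, is what keeps the output from turning repetitive.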

But it became clear that the model was worryingly good at that job: its text generation was so convincing that it could be used to scam people, and might undermine trust in the things we read.

What's more, the model can be abused by extremist groups to create "synthetic propaganda", automatically generating long texts that promote white supremacy or jihadist Islamism, for instance.

"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," wrote OpenAI in a February blog post, released when it made the announcement. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

At the time, the organisation released only a very limited version of the tool, which used 124 million parameters. It has since released progressively larger versions, and has now made the full model available.
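For context, the staged releases correspond to progressively larger checkpoints. The sketch below loads the full model and counts its parameters; the checkpoint names are those used by the Hugging Face transformers distribution, an assumption beyond the article's own figures.

```python
# Staged GPT-2 releases, by checkpoint name as distributed through the
# Hugging Face "transformers" library (naming is an assumption; the
# article itself only cites the 124-million-parameter figure):
#   "gpt2"        ~124M parameters (the initial, limited release)
#   "gpt2-medium" ~355M
#   "gpt2-large"  ~774M
#   "gpt2-xl"     ~1.5B (the full model)
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```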

The full version is more convincing than the smaller one, but only "marginally". The relatively limited increase in credibility was part of what encouraged the researchers to make it available, they said.

OpenAI hopes the release will partly help the public understand how such a tool could be misused, and help inform discussions among experts about how that danger can be mitigated.

In February, researchers said there were a variety of ways that malicious actors could misuse the programme. The generated text could be used to create misleading news articles, impersonate other people, automatically produce abusive or fake content for social media, or churn out spam – along with a variety of possible uses that might not even have been imagined yet, they noted.

Such misuses would require the public to become more critical of the text they read online, since it could have been generated by artificial intelligence, they said.

"These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns," they wrote. "The public at large will need to become more skeptical of text they find online, just as the “deep fakes” phenomenon calls for more skepticism about images."

The researchers said that experts needed to work to consider "how research into the generation of synthetic images, videos, audio, and text may further combine to unlock new as-yet-unanticipated capabilities for these actors, and should seek to create better technical and non-technical countermeasures".
