AI poses ‘risk of extinction’ on par with nuclear war and pandemics, tech leaders warn

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the signatories wrote.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the signatories wrote. Copyright Canva
By Euronews with AP

OpenAI CEO Sam Altman and "AI godfather" Geoffrey Hinton are among hundreds of tech experts warning of the threat AI poses to humanity.

Scientists and tech industry leaders have issued a fresh warning about the existential threats artificial intelligence (AI) poses to humankind.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” they said in a short statement on the website of the Center for AI Safety.

Geoffrey Hinton, a computer scientist known as the "godfather of AI" who quit his job at Google last month to voice his concerns about the unchecked development of new AI tools, was among the hundreds of signatories.

So were Sam Altman, CEO of ChatGPT maker OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic.

Worries about AI systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT.

More than 1,000 researchers and technologists, including Elon Musk, signed a letter earlier this year calling for a six-month pause on AI development, arguing it poses “profound risks to society and humanity”.

The concerns include AI's potential to dramatically accelerate the spread of online misinformation, the prospect of it displacing humans from their jobs entirely, and the question of whether anyone could stop a government from using the technology to dominate its neighbours or its own citizens.

Countries around the world are scrambling to come up with regulations for the developing technology, with the European Union blazing the trail with its AI Act expected to be approved later this year.

In February, OpenAI's Altman said the world may not be “that far away from potentially scary” AI tools, and that regulation would be critical but would take time to figure out.

Last week, however, Altman said his company might consider leaving Europe if it could not comply with the bloc's new AI regulations.

"The current draft of the EU AI Act would be over-regulating, but we have heard it's going to get pulled back," he told Reuters. "They are still talking about it."
