AI could be as dangerous as 'pandemics or nuclear war,' industry leaders say

A new petition is warning of the existential risks posed by artificial intelligence. Some see it as a diversion amid discussions on regulating the sector.

Published on May 30, 2023, at 11:05 pm (Paris), updated on May 31, 2023, at 9:39 am

Reading time: 3 min.



[Photo: The ChatGPT app displayed on an iPhone in New York, on Thursday, May 18, 2023.]

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." This single – and alarmist – sentence constitutes the entire content of a petition launched on Tuesday, May 30, by 350 leading figures in the AI sector.

The initiative, spearheaded by the San Francisco-based non-governmental organization Center for AI Safety, is reminiscent of the March 28 open letter calling for a "pause" in advanced research in the field, signed by over a thousand personalities, including Tesla chief executive Elon Musk. But the text published on Tuesday was also endorsed by industry leaders: Sam Altman, CEO of OpenAI, creator of the ChatGPT chatbot; Demis Hassabis, CEO of Google DeepMind; James Manyika, Google's senior vice president in charge of AI regulatory and ethical issues; Eric Horvitz, chief scientific officer of Microsoft; and Dario Amodei, an OpenAI alumnus and founder of Anthropic, a Google-backed start-up.

Among the other signatories are many who promoted the letter calling for a six-month pause, including Max Tegmark, from the NGO Future of Life Institute, and Stuart Russell, from the Center for Human-Compatible AI, a laboratory at the University of California, Berkeley. They were joined by leading researchers who have recently come around to the idea that AI poses an existential risk to humanity: Geoffrey Hinton, who recently resigned from Google, and Yoshua Bengio, from the University of Montreal. Both are considered "fathers" of modern AI and, alongside Yann LeCun, have received the prestigious Turing Award. LeCun, who heads AI research at Meta, Facebook's parent company, is far more reassuring and optimistic: he does not see why artificial intelligence software would attack humans.

Why would the leaders of a booming industry call on the world's governments to consider their technology a major threat and regulate it accordingly? The initiative seems counter-intuitive, but it can be explained by going back to the beginnings of OpenAI. At the time, in 2015, Musk, one of its co-founders, had already been warning for several months about the risks of AI, which he deemed "potentially more dangerous than nuclear bombs." Some argued that an "artificial general intelligence," superior to that of humans, could become hostile by design or by mistake. But that did not stop Musk from co-founding OpenAI, whose original aim was to bring about such an AI "in a way that benefits humanity."

