"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." This single – and alarmist – sentence constitutes the entire content of a petition launched on Tuesday, May 30, by 350 leading figures in the AI sector.
The initiative, spearheaded by the San Francisco-based non-governmental organization Center for AI Safety, is reminiscent of the March 28 open letter calling for a "pause" in advanced research in the field, signed by over a thousand personalities, including Tesla chief executive Elon Musk. But the text published on Tuesday was also endorsed by industry leaders: Sam Altman, CEO of OpenAI, the creator of the ChatGPT chatbot; Demis Hassabis, CEO of Google DeepMind; James Manyika, the senior vice president in charge of AI regulatory and ethical issues at Google; Eric Horvitz, chief scientific officer of Microsoft; and Dario Amodei, OpenAI alumnus and founder of Anthropic, a Google-backed start-up.
Among the other signatories are many who promoted the letter calling for a six-month pause, including Max Tegmark, of the NGO Future of Life Institute, and Stuart Russell, of the Center for Human-Compatible AI, a laboratory at the University of California, Berkeley. They were joined by a number of leading researchers who have recently come around to the idea that AI poses an existential risk to humanity: Geoffrey Hinton, who recently resigned from Google, and Yoshua Bengio, of the University of Montreal. The two are considered "fathers" of modern AI and have received the prestigious Turing Award, alongside Yann LeCun. LeCun, who heads AI research at Meta, the parent company of Facebook, is far more reassuring and optimistic: he does not see why artificial intelligence software would attack humans.
Why would the leaders of a booming industry call on the world's governments to treat their technology as a major threat and regulate it accordingly? The initiative seems counter-intuitive, but it can be explained by going back to the beginnings of OpenAI. At the time, in 2015, Musk, one of the co-founders, had already been warning for several months about the risks of AI, which he deemed "potentially more dangerous than nuclear bombs." Some argued that an "artificial general intelligence," superior to that of humans, could become hostile by design or by mistake. But that did not stop Musk from co-founding OpenAI, whose original aim was to bring about such an AI "in a way that benefits humanity."