By John Ikani
Top executives in the field of artificial intelligence (AI), including OpenAI CEO Sam Altman, have joined forces with renowned experts and professors to draw attention to the potential “risk of extinction from AI.”
They are urging policymakers to treat this risk on par with the dangers posed by pandemics and nuclear warfare.
More than 350 signatories, which include CEOs from prominent AI companies such as DeepMind and Anthropic, as well as executives from Microsoft and Google, expressed their concerns in a letter published by the nonprofit Center for AI Safety (CAIS).
The letter stresses the need for global prioritization in mitigating the risks associated with AI, highlighting the scale at which such risks can impact society.
Among the signatories are Geoffrey Hinton and Yoshua Bengio, both distinguished as “godfathers of AI” and recipients of the prestigious 2018 Turing Award for their groundbreaking contributions to deep learning.
The letter also includes professors from renowned institutions like Harvard and China’s Tsinghua University.
Notably absent from the list of signatories is anyone from Meta, where Yann LeCun, another esteemed AI expert, works.
CAIS director Dan Hendrycks expressed disappointment, stating, “We asked many Meta employees to sign.”
At the time of filing this report, Meta had not responded to requests for comment.
The call for action coincides with the U.S.-EU Trade and Technology Council meeting in Sweden, where politicians are expected to discuss the regulation of AI.
Elon Musk, along with a group of AI experts and industry executives, initially highlighted the potential risks to society back in April.
CAIS director Hendrycks hopes that Musk will also sign the letter, saying, “We’ve extended an invitation, and hopefully, he’ll sign it this week.”
While recent advancements in AI have presented promising applications in areas such as medical diagnostics and legal brief writing, concerns have arisen regarding potential privacy violations, the spread of misinformation, and the development of “smart machines” capable of independent thought.
The warning comes just two months after the nonprofit Future of Life Institute (FLI) issued an open letter, signed by Musk and hundreds of others, calling for an urgent pause in advanced AI research due to risks posed to humanity.
Max Tegmark, president of FLI and a signatory of the recent letter, expressed optimism, stating, “Our letter mainstreamed pausing; this mainstreams extinction. Now, a constructive open conversation can finally start.”
AI pioneer Geoffrey Hinton previously told Reuters that AI could pose a more urgent threat to humanity than climate change.
Recently, OpenAI CEO Sam Altman found himself at the forefront of the AI discussion after the global success of the company’s ChatGPT chatbot.
Altman initially criticized EU AI regulation efforts as over-regulation and threatened to withdraw from Europe.
However, after facing backlash from politicians, he swiftly reversed his stance within days.
Altman’s role in AI has garnered significant attention, prompting European Commission President Ursula von der Leyen to schedule a meeting with him on Thursday.
Also, EU industry chief Thierry Breton is set to meet with Altman in San Francisco next month.