350 industry leaders and leading experts in artificial intelligence (AI) have issued a statement warning that humanity is at risk of extinction because of AI.
"Reducing the risk of extinction due to AI must be a global priority alongside other societal-scale risks such as pandemics or nuclear war" - announcement by the Center for AI Safety (CAIS) in San Francisco – America posted on the website on May 30 (US time).
Bloomberg reports that the statement was signed by 350 leading figures and experts in the AI industry, including Sam Altman, CEO of OpenAI, the "father" of ChatGPT.
Signatories also include leaders from Google DeepMind and Anthropic, though no one from Meta, another company pursuing AI development.
The sight of so many figures regarded as the "tech elite" coming together to warn that AI is as dangerous as a pandemic or nuclear war comes amid mounting concern about the technology.
In late March, more than 1,000 leading technology experts, including billionaire Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling on companies and organizations worldwide to pause the race toward super-powerful AI for six months so that a common set of rules for the technology could be drawn up.
About a month later, Google CEO Sundar Pichai admitted that AI gave him sleepless nights "because it can be more dangerous than anything humans have ever seen."
"One day, AI will have capabilities far beyond human imagination and we cannot yet imagine all the worst things that can happen" - Mr. Sundar Pichai said.
Billionaire Elon Musk (left) and OpenAI CEO Sam Altman. Photo: Vanity Fair
The OpenAI CEO has likewise said that AI can bring great benefits but raises concerns about misinformation, economic shocks, or something "at a level far beyond anything humanity is prepared for." Altman himself admits that he constantly worries about AI and was shocked by how popular ChatGPT has become.
Experts are also worried about a more advanced class of systems: artificial general intelligence (AGI). AGI is considered far more sophisticated than today's generative AI models because, in theory, it would be aware of what it says and does, a prospect that many find frightening.