Is AI going to kill everyone? Top experts say yes, warning about “risk of extinction” similar to nuclear weapons, pandemics
In a joint statement, OpenAI head Sam Altman and “Godfather of AI” Geoffrey Hinton warned that the existential threat of artificial intelligence (AI) to humanity is real.
Even though Altman, whose firm created ChatGPT, and Hinton are both profiting from AI, they admit, along with more than 350 other prominent figures, that AI could end up killing off most of humanity in the coming years.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the experts wrote in a single, 22-word sentence put together by the nonprofit Center for AI Safety.
Recently, a number of similar statements have been made by promoters of AI, including billionaire electric vehicle (EV) magnate Elon Musk, about the dangers of the technology.
The one-sentence statement is meant to cover the threat of AI destroying the world through all sorts of calamities, including the increased spread of misinformation and the economic upheaval that will inevitably come as AI and humanoid robots eliminate jobs.
AI-generated Pentagon explosion photo triggers mass stock market selloff
The world has been getting a steady dose of AI propaganda ever since the release of OpenAI’s ChatGPT product, which allows users to ask all sorts of questions or request proofreading and receive instant answers or revisions.
ChatGPT is effectively conditioning the general public to accept AI as a normal part of everyday life. Once AI becomes fully normalized, increasingly dystopian products are sure to come down the pike.
AI-generated photos are also becoming a problem, at least for the establishment. One such photo depicting a fake explosion at the Pentagon triggered a stock market selloff that ended up erasing billions in value from the markets at large.
The Center for AI Safety recognizes that these and other issues threaten to destabilize the planet in many ways, which is why it issued the one-sentence statement in an effort to “open up discussion” about the topic, especially given the “broad spectrum of important and urgent risks from AI.”
Other notable signatories of the letter besides Altman and Hinton include Google DeepMind boss Demis Hassabis and Anthropic CEO Dario Amodei.
Altman, Hassabis and Amodei joined a select group of experts that met with President Biden to discuss the risks and regulation of AI.
In 2018, Hinton and Yoshua Bengio, another letter signatory, won the Turing Award, the highest honor in the computing world, for their work on advancements in neural networks, which were described at the time as “major breakthroughs in artificial intelligence.”
“As we grapple with immediate AI risks like malicious use, misinformation, and disempowerment, the AI industry and governments around the world need to also seriously confront the risk that future AIs could pose a threat to human existence,” commented Center for AI Safety director Dan Hendrycks.
“Mitigating the risk of extinction from AI will require global action. The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems.”
Altman, meanwhile, has been speaking out in favor of more government regulation to keep AI in check, warning that AI could “cause significant harm to the world.” And Hinton, who devoted his life’s work to AI development, now says he regrets it because the technology could allow “bad actors” to do “bad deeds.”
yogaesoteric
December 19, 2023