AI could exceed human intelligence in 3 years, top scientist warns
If the many shortcomings of today’s artificial intelligence tools such as ChatGPT have left you confident that AI surpassing human intelligence is still a long way off, think again. A top scientist has warned that this nightmare scenario could arrive decades sooner than previously predicted, and may even be just a few years away.
The warning comes from PhD mathematician and futurist Ben Goertzel, who is known for popularizing the term “artificial general intelligence” (AGI). At a recent summit, he said: “It seems quite plausible we could get to human-level AGI within, let’s say, the next three to eight years.”
He added that once human-level artificial general intelligence is achieved, it will only be a few years before a radically superhuman version emerges. Although he conceded that his prediction may not be correct, he said the only thing that could stop an AI from becoming vastly more intelligent than its human creators would be the AI’s “own conservatism” compelling it to proceed with caution. He believes an exponential escalation of artificial intelligence technology is inevitable.
Goertzel has been investigating artificial superintelligence, a term for an AI that can match all of the computing and brain power of human civilization. He pointed out that a predictive model developed by Google computer scientist and futurist Ray Kurzweil suggests this type of intelligence will be possible by 2029.
The notion is further supported by the huge advancements that have been made in large language models in the last couple of years. This technology has evolved so quickly that much of the world is now all too aware of its potential.
The futurist’s latest warning is just one of several he has made in recent years about this technology. Last May, he cautioned that artificial intelligence could replace 80 percent of human jobs within the next few years, saying that virtually any job involving paperwork could be automated.
AI industry leaders warn risk is on par with pandemics and nuclear war
Last year, a group of industry leaders warned that AI technology could one day pose an existential threat to humanity and should be regarded as just as dangerous as nuclear war and deadly pandemics.
The nonprofit Center for AI Safety released a one-sentence statement reading: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
It was signed by more than 350 experts in the field, such as engineers, researchers and executives involved in artificial intelligence. Among those who signed it were the chief executive of Google’s DeepMind, Demis Hassabis, Anthropic’s Chief Executive Dario Amodei, and OpenAI Chief Executive Sam Altman.
It was also signed by two of the researchers who are considered the “godfathers” of modern AI, Yoshua Bengio and Geoffrey Hinton.
The world should be very worried that the same people who are deeply involved in this industry, and who stand to profit most from it, are the ones pushing governments to regulate the technology given its potential harms.
Center for AI Safety Executive Director Dan Hendrycks said that many insiders are scared of where this is headed, telling the New York Times: “There’s a very common misconception, even in the AI community, that there are only a handful of doomers. But, in fact, many people privately would express concerns about these elements.”
It may not be long before the types of biased answers and hallucinations that tools like Google Gemini have been making headlines for lately are the least of our worries.
AI pioneer Eliezer Yudkowsky also believes that an apocalypse driven by machines is a few years away. He said: “If you forced me to put probabilities on what I see, I have a sense that our current remaining timeline looks more like five years than 50 years.”
yogaesoteric
March 27, 2024