Five Days of Chaos at OpenAI and the Powerful AI Discovery that Could Threaten Humanity
Months before OpenAI board member Ilya Sutskever would gain notoriety for his key role in the ouster of CEO Sam Altman, Sutskever co-authored a little-noticed but apocalyptic warning about the threat posed by artificial intelligence. Superintelligent AI, Sutskever co-wrote on a company blog, could lead to “the disempowerment of humanity or even human extinction,” since engineers are unable to prevent AI from “going rogue.” The message echoed OpenAI’s charter, which calls for avoiding AI uses if they “harm humanity.”
The cry for caution from Sutskever, however, arrived during a period of breakneck growth for OpenAI. A $10 billion investment from Microsoft at the outset of this year helped fuel the development of GPT-4, the model behind ChatGPT, the viral conversation bot that the company says now boasts 100 million weekly users. The forced exit of Altman arose in part from friction between him and Sutskever over a tension at the heart of the company: rapid commercialization versus heightened awareness of the risks posed by AI.
Samuel Harris Altman is a leftist American entrepreneur and investor who has served as chief executive officer of OpenAI since 2019. In 2022, he wrote: “I’m concerned about the political landscape in the USA, some elements of the republican party are becoming increasingly anti-democratic.”
And if you go to Sam Altman’s Twitter account, you find a multitude of seemingly cheerful photos showing various world leaders of the New World Disorder engaging in friendly conversations with him. For a moment, one might think that the G20 summit has started early and that OpenAI is the host.
OpenAI is an American artificial intelligence (AI) research organization consisting of the non-profit OpenAI, Inc., conveniently registered in Joe Biden’s leftist paradise of Delaware, and its for-profit subsidiary OpenAI Global, LLC. OpenAI conducts artificial intelligence research with the declared intention of developing “safe and beneficial” artificial general intelligence, but something disturbing occurred recently, and it is not the brief ousting of Sam Altman that made the news all over the world.
Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity. This shocking news reached Reuters within 24 hours, from two people familiar with the matter. The previously unreported letter and the allegedly dangerous AI algorithm were key developments before the board’s ouster of Altman, the poster child of generative AI, the two sources said.
Prior to his triumphant return, more than 700 employees threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader. However, the sources in question cited the letter as just one factor in a longer list of board grievances leading to Altman’s firing, including concerns over commercializing advances before understanding the consequences. Unfortunately, neither Reuters nor anybody else was able to review a copy of the letter to better understand the implications of this sudden AI danger that might threaten humanity, and the staff who wrote the letter did not respond to requests for comment.
Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and drew from Microsoft the investment – and computing resources – necessary to get closer to AGI. In addition to announcing a slew of new tools in a recent demonstration, Altman teased at a summit of world leaders in San Francisco that he believed major advances were in sight.
“Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward and getting to do that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit that featured the controversial meeting between Joe Biden and Xi Jinping. A day later, the board fired Altman.
For the moment, the way globalists hype AI is very tactical: they present it as a new technology whose potential dangers will nevertheless be contained for the benefit of humanity.
However, “the potential risk is very high,” warned Mo Gawdat, who once led the Silicon Valley behemoth’s Google X “moonshot” division and is very stressed about AI’s future. Speaking on The Diary of a CEO, a podcast hosted by Steven Bartlett, Gawdat said the situation “is beyond an emergency.”
Morgan Meaker at Wired wrote in an article entitled “Sam Altman’s Second Coming Sparks New Fears of the AI Apocalypse”:
“Open AI’s new boss is the same as the old boss. But the company—and the artificial intelligence industry—may have been profoundly changed by the past days of high-stakes soap opera.”
“What occurred with this drama around Sam Altman shows us we cannot rely on visionary CEOs or ambassadors of these companies, but instead, we need to have regulation,” says Brando Benifei, one of two European Parliament lawmakers leading negotiations on the new rules. “These events show us there is unreliability and unpredictability in the governance of these enterprises.”
The high-profile failure of OpenAI’s governance structure is likely to amplify calls for stronger public oversight, but that is exactly what the elite wants in order to control the rise of AI, aka Cyber Satan. We don’t have much time left, so please don’t underestimate what has occurred at OpenAI, and ask your elected officials to enquire into Sam Altman’s dubious activities before it’s too late.
yogaesoteric
December 8, 2023