{"id":141505,"date":"2023-12-08T19:37:13","date_gmt":"2023-12-08T19:37:13","guid":{"rendered":"https:\/\/yogaesoteric.net\/?p=141505"},"modified":"2023-12-08T19:37:13","modified_gmt":"2023-12-08T19:37:13","slug":"five-days-of-chaos-at-openai-and-the-powerful-ai-discovery-that-could-threaten-humanity","status":"publish","type":"post","link":"https:\/\/yogaesoteric.net\/en\/five-days-of-chaos-at-openai-and-the-powerful-ai-discovery-that-could-threaten-humanity\/","title":{"rendered":"Five Days of Chaos at OpenAI and the Powerful AI Discovery that Could Threaten Humanity"},"content":{"rendered":"<p>Months before OpenAI board member Ilya Sutskever would gain notoriety for his key role in the ouster of CEO Sam Altman, Sutskever co-authored a little-noticed but apocalyptic warning about the threat posed by artificial intelligence. Superintelligent AI, Sutskever co-wrote on a company blog, could lead to \u201c<em>the disempowerment of humanity or even human extinction<\/em>,\u201d since engineers are unable to prevent AI from \u201c<em>going rogue<\/em>.\u201d The message echoed OpenAI\u2019s charter, which calls for avoiding AI uses if they \u201c<em>harm humanity<\/em>.\u201d<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-141506\" src=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/12\/days.jpg\" alt=\"\" width=\"560\" height=\"317\" srcset=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/12\/days.jpg 510w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/12\/days-300x170.jpg 300w\" sizes=\"auto, (max-width: 560px) 100vw, 560px\" \/><\/p>\n<p>The cry for caution from Sutskever, however, arrived during a period of breakneck growth for OpenAI. A $10 billion investment from Microsoft at the outset of this year helped fuel the development of ChatGPT, a viral conversation bot that the company says now boasts 100 million weekly users. 
The forced exit of Altman arose in part from frustration between him and Sutskever over a tension at the heart of the company: the push to commercialize AI set against a heightened awareness of the risks it poses.<\/p>\n<p>Samuel Harris Altman is a leftist American entrepreneur and investor and the Chief Executive Officer of OpenAI since 2019. In 2022, he wrote: \u201c<em>I\u2019m concerned about the political landscape in the USA, some elements of the republican party are becoming increasingly anti-democratic<\/em>.\u201d<\/p>\n<p>And if you go to Sam Altman\u2019s Twitter account, you find a multitude of seemingly cheerful photos showing various world leaders of the New World Disorder engaging in friendly conversations with him. For a moment, one might think that the G20 summit has started early and that OpenAI is the host.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-141509\" src=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/12\/days2.png\" alt=\"\" width=\"560\" height=\"327\" srcset=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/12\/days2.png 750w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/12\/days2-300x175.png 300w\" sizes=\"auto, (max-width: 560px) 100vw, 560px\" \/><\/p>\n<p>OpenAI is an American artificial intelligence (AI) research organization consisting of the non-profit OpenAI, Inc., conveniently registered in Joe Biden\u2019s leftist paradise of Delaware, and its for-profit subsidiary OpenAI Global, LLC. OpenAI conducts artificial intelligence research with the declared intention of developing \u201c<em>safe and beneficial<\/em>\u201d artificial general intelligence, but something disturbing occurred lately, and it\u2019s not the brief ousting of Sam Altman that made the news all over the world.<\/p>\n<p>Ahead of OpenAI CEO Sam Altman\u2019s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity. 
This shocking news was provided to <em>Reuters<\/em> within 24 hours by two people familiar with the matter. The previously unreported letter and the allegedly dangerous AI algorithm were key developments before the board\u2019s ouster of Altman, the poster child of generative AI, the two sources said.<\/p>\n<p>Prior to his triumphant return, more than 700 employees threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader. However, the sources in question cited the letter as only one factor in a longer list of grievances by the board that led to Altman\u2019s firing, including concerns over commercializing advances before understanding the consequences. Unfortunately, neither <em>Reuters<\/em> nor anybody else was able to review a copy of the letter to better understand the implications of this sudden AI danger that might threaten humanity, and the staff who wrote the letter did not respond to various requests for comment.<\/p>\n<p>Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and drew from Microsoft the investment \u2013 and computing resources \u2013 necessary to get closer to AGI. In addition to announcing a slew of new tools in a recent demonstration, Altman teased at a summit of world leaders in San Francisco that he believed major advances were in sight.<\/p>\n<p>\u201c<em>Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I\u2019ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward and getting to do that is the professional honor of a lifetime<\/em>,\u201d he said at the Asia-Pacific Economic Cooperation summit that featured the controversial meeting between Joe Biden and Xi Jinping. 
A day later, the board fired Altman.<\/p>\n<p>For the moment, the way globalists hype AI is very tactical: they present it as a new technology with potential dangers which will, however, be contained for the benefit of humanity.<\/p>\n<p>However, Mo Gawdat, who once led the Silicon Valley behemoth\u2019s Google X \u201c<em>moonshot<\/em>\u201d division and is very stressed about AI\u2019s future, told host Stephen Bartlett on the podcast <em>The Diary of a CEO<\/em> that \u201c<em>the potential risk is very high<\/em>\u201d and that the situation \u201c<em>is beyond an emergency<\/em>.\u201d<\/p>\n<p>Morgan Meaker at Wired wrote in an article entitled <em>Sam Altman\u2019s Second Coming Sparks New Fears of the AI Apocalypse<\/em>:<\/p>\n<p>\u201c<em>Open AI\u2019s new boss is the same as the old boss. But the company\u2014and the artificial intelligence industry\u2014may have been profoundly changed by the past days of high-stakes soap opera<\/em>.\u201d<\/p>\n<p>\u201c<em>What occurred with this drama around Sam Altman shows us we cannot rely on visionary CEOs or ambassadors of these companies, but instead, we need to have regulation<\/em>,\u201d says Brando Benifei, one of two European Parliament lawmakers leading negotiations on the new rules. \u201c<em>These events show us there is unreliability and unpredictability in the governance of these enterprises<\/em>.\u201d<\/p>\n<p>The high-profile failure of OpenAI\u2019s governance structure is likely to amplify calls for stronger public oversight, but that is exactly what the elite wants in order to control the rise of AI, aka Cyber Satan. 
We don\u2019t have much time left, so please don\u2019t underestimate what has occurred at OpenAI and ask your elected officials to enquire about Sam Altman\u2019s dubious activities before it\u2019s too late.<\/p>\n<p><strong>yogaesoteric<br \/>\nDecember 8, 2023<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Months before OpenAI board member Ilya Sutskever would gain notoriety for his key role in the ouster of CEO Sam Altman, Sutskever co-authored a little-noticed but apocalyptic warning about the threat posed by artificial intelligence. Superintelligent AI, Sutskever co-wrote on a company blog, could lead to \u201cthe disempowerment of humanity or even human extinction,\u201d since [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[1374],"tags":[],"class_list":["post-141505","post","type-post","status-publish","format-standard","hentry","category-the-threat-of-artificial-intelligence-3480-en"],"_links":{"self":[{"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/posts\/141505","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/comments?post=141505"}],"version-history":[{"count":1,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/posts\/141505\/revisions"}],"predecessor-version":[{"id":141512,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/posts\/141505\/revisions\/141512"}],"wp:attachment":[{"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/media?paren
t=141505"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/categories?post=141505"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/tags?post=141505"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}