{"id":225732,"date":"2026-02-20T20:08:54","date_gmt":"2026-02-20T20:08:54","guid":{"rendered":"https:\/\/yogaesoteric.net\/?p=225732"},"modified":"2026-02-20T20:11:25","modified_gmt":"2026-02-20T20:11:25","slug":"students-use-ai-to-write-papers-professors-use-ai-to-grade-them-degrees-become-meaningless-and-tech-companies-make-fortunes-welcome-to-the-death-of-higher-education-2","status":"publish","type":"post","link":"https:\/\/yogaesoteric.net\/en\/students-use-ai-to-write-papers-professors-use-ai-to-grade-them-degrees-become-meaningless-and-tech-companies-make-fortunes-welcome-to-the-death-of-higher-education-2\/","title":{"rendered":"Students use AI to write papers, professors use AI to grade them, degrees become meaningless, and tech companies make fortunes. Welcome to the death of higher education (2)"},"content":{"rendered":"<p>Read <a href=\"https:\/\/yogaesoteric.net\/en\/students-use-ai-to-write-papers-professors-use-ai-to-grade-them-degrees-become-meaningless-and-tech-companies-make-fortunes-welcome-to-the-death-of-higher-education-1\/\">the first part<\/a> of the article<\/p>\n<p><strong>From BS Jobs to BS Degrees<\/strong><\/p>\n<p>Anthropologist David Graeber wrote about the rise of \u201c<a href=\"https:\/\/libcom.org\/article\/phenomenon-bullshit-jobs-david-graeber\" target=\"_blank\" rel=\"noopener\">BS jobs<\/a>\u201d \u2013 work sustained not by necessity or meaning but by institutional inertia. Universities now risk creating their academic twin: BS degrees<em>.<\/em> AI threatens to professionalize the art of meaningless activity, widening the gap between education\u2019s public mission and its hollow routines. 
In Graeber\u2019s words, such systems inflict \u201c<em>profound psychological violence<\/em>,\u201d the dissonance of knowing one\u2019s labour serves no purpose.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-47436\" src=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2021\/07\/educatie2-e1625261519935.jpg\" alt=\"\" width=\"560\" height=\"315\" \/><\/p>\n<p>Universities are already caught in this loop: students going through motions they know are empty, faculty grading work they suspect wasn\u2019t written by students, administrators celebrating \u201c<em>innovations<\/em>\u201d everyone else understands are destroying education. The difference from the corporate world\u2019s \u201cBS jobs\u201d is that students have to pay for the privilege of this theatre of make-believe learning.<\/p>\n<p>If <a href=\"https:\/\/link.springer.com\/article\/10.1007\/s40593-022-00300-7\" target=\"_blank\" rel=\"noopener\">ChatGPT can generate student essays, <\/a>complete assignments, and even provide feedback, what remains of the educational transaction? We risk creating a system where:<\/p>\n<ul>\n<li>Students pay tuition for credentials they didn\u2019t earn through learning<\/li>\n<li>Faculty grade work they know wasn\u2019t produced by students<\/li>\n<li>Administrators celebrate \u201c<em>efficiency gains<\/em>\u201d that are actually learning losses<\/li>\n<li>Employers receive graduates with degrees that signify nothing about actual competence<\/li>\n<\/ul>\n<p>I got a front-row seat to this charade at a recent workshop called \u201c<em>OpenAI Day Faculty Session: AI in the Classroom<\/em>,\u201d held in the university library as part of San Francisco State University\u2019s rollout of <em>ChatGPT Edu<\/em>. OpenAI had transformed the sanctuary of learning into its corporate showroom. 
The vibe: half product tech demo, half corporate pep rally, disguised as professional development.<\/p>\n<p>Siya Raj Purohit, an OpenAI staffer, bounced onto the stage with breathless enthusiasm: \u201c<em>You\u2019ll learn great use cases! Cool demos! Cool functionality!<\/em>\u201d (Too cool for school, but I endured.)<\/p>\n<p>Then came the centrepiece: a slide instructing faculty how to prompt-engineer their courses. A template read:<\/p>\n<p><strong>\u201c<em>Experiment with This Prompt<\/em><\/strong><\/p>\n<p><em>Try inputting the following prompt. Feel free to edit it however you\u2019d like \u2013 this is simply the point!<\/em><\/p>\n<p><strong><em>I\u2019m a professor at San Francisco State University, teaching [course name or subject]. Assignment where students [briefly describe the task]. I want to redesign it using AI to deepen student learning, engagement, and critical thinking.<\/em><\/strong><\/p>\n<p><em>Can you suggest:<\/em><\/p>\n<ul>\n<li><em>A revised version of the assignment using ChatGPT<\/em><\/li>\n<li><em>A prompt I can give students to guide their use of ChatGPT<\/em><\/li>\n<li><em>A way to evaluate whether AI improved the quality of their work<\/em><\/li>\n<li><em>Any academic integrity risks I should be aware of?<\/em>\u201d<\/li>\n<\/ul>\n<p>The message was clear. Let <em>ChatGPT<\/em> redesign your class. Let <em>ChatGPT<\/em> tell you how to evaluate your students. Let <em>ChatGPT<\/em> tell students how to use <em>ChatGPT<\/em>. Let <em>ChatGPT<\/em> solve the problem of human education. It was like being handed a <em>Mad Libs<\/em> puzzle for automating your syllabus.<\/p>\n<p>Then came the real showstopper.<\/p>\n<p>Siya, clearly moved, shared what she called a personal turning point: \u201c<em>There was a moment when ChatGPT and I became friends. 
I was working on a project and said, \u2018Hey, do you remember when we built that element for my manager last month?\u2019 And it said, \u2018Yes, Siya, I remember.\u2019 That was such a powerful moment \u2013 it felt like a friend who remembers your story and helps you become a better knowledge worker<\/em>.\u201d<\/p>\n<p>A faculty member, Prof. Tanya Augsburg, interrupted. \u201c<em>Sorry, it\u2019s a tool, right? You&#8217;re saying a tool is going to be a friend?<\/em>\u201d<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-215086\" src=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/11\/ChatGPT-e1771617748570.jpg\" alt=\"\" width=\"560\" height=\"354\" srcset=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/11\/ChatGPT-e1771617748570.jpg 1140w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/11\/ChatGPT-e1771617748570-300x189.jpg 300w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/11\/ChatGPT-e1771617748570-1024x647.jpg 1024w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/11\/ChatGPT-e1771617748570-768x485.jpg 768w\" sizes=\"auto, (max-width: 560px) 100vw, 560px\" \/><\/p>\n<p>Siya deflected: \u201c<em>Well, it\u2019s an anecdote that sometimes helps faculty<\/em>.\u201d (That sometimes wasn\u2019t this time). \u201c<em>It\u2019s just about how much context it remembers<\/em>.\u201d<\/p>\n<p>Augsburg persisted: \u201c<em>So we\u2019re encouraging students to have relationships with it? I just want to be clear<\/em>.\u201d<\/p>\n<p>Siya countered with survey data, the rhetorical flak jacket of every good ed-tech evangelist: \u201c<em>According to the survey we run, a lot of students already do. They see it as a coach, mentor, career navigator\u2026\u2026. it\u2019s up to them what kind of relationship they want<\/em>.\u201d<\/p>\n<p>Welcome to the brave new world of parasocial machine bonding \u2013 sponsored by the campus centre for teaching excellence. 
The moment was absurd but revealing; the university wasn\u2019t resisting BS education, it was onboarding it. Education at its best sparks curiosity and critical thought. \u201cBS education\u201d does the opposite: it trains people to tolerate meaninglessness, to accept automation of their own thinking, to value credentials over competence.<\/p>\n<p>Administrators seem unable to fathom the obvious:<a href=\"https:\/\/www.theatlantic.com\/culture\/archive\/2025\/09\/ai-colleges-universities-solution\/684160\/?utm_source=reddit&amp;utm_campaign=the-atlantic&amp;utm_medium=social&amp;utm_content=edit-promo\" target=\"_blank\" rel=\"noopener\"> eroding higher education\u2019s core purpose <\/a>doesn\u2019t go unnoticed. If <a href=\"https:\/\/www.newyorker.com\/newsletter\/the-daily\/hua-hsu-on-the-demise-of-the-english-paper?fbclid=IwY2xjawLeNLNleHRuA2FlbQIxMQABHl13ybxOWB7bs1z2aiqP-SgNMSCmY-VkYjwzU3HgPmUgfrgiLxQmwiH42ZgK_aem_keNkudOaCOJn4QP363TEqw\" target=\"_blank\" rel=\"noopener\"><em>ChatGPT<\/em> can write essays<\/a>, ace exams and tutor, what exactly is the university selling? Why pay tens of thousands for an experience increasingly automated? Why dedicate your life to teaching if it\u2019s reduced to prompt engineering? Why retain tenured professors whose role seems quaint, medieval and redundant? Why have universities at all?<\/p>\n<p><a href=\"https:\/\/www.thetimes.com\/business-money\/money\/article\/degrees-cost-worth-salaries-debate-8bm5lgf72?utm_medium=Social&amp;utm_source=Facebook&amp;fbclid=IwY2xjawMMIJNleHRuA2FlbQIxMABicmlkETF1dHo0NnBHU041dUZ3VWRSAR7_q2mEzM53JlOh5YjugLcqKRgbewTWjinLukr06dmG8hm4KqHFSwhetYKr0w_aem_HjlTnVuZ3Xtrw27ryLnyxQ#Echobox=1755157315\" target=\"_blank\" rel=\"noopener\">Students and parents <\/a>have certainly noticed the rot. Enrolments and retention rates are plunging, especially in public systems like the CSU. 
Students are reasoning, rightly, that it makes little sense to take on crushing debt for degrees that may soon be obsolete.<\/p>\n<p>Philosophy professor Troy Jollimore at CSU Chico sees the writing on the wall. As reported in <a href=\"https:\/\/nymag.com\/intelligencer\/article\/openai-chatgpt-ai-cheating-education-college-students-school.html\" target=\"_blank\" rel=\"noopener\"><em>New York Magazine<\/em><\/a>, he warned, \u201c<em>Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate<\/em>.\u201d He added: \u201c<em>Every time I talk to a colleague about this, the same aspect comes up: retirement. \u2018When can I retire? When can I get out of this?\u2019 That\u2019s what we\u2019re all thinking now<\/em>.\u201d<\/p>\n<p>Those who spent decades honing their craft now watch as their life\u2019s work is reduced to prompting a chatbot. No wonder so many are calculating pension benefits between office hours.<\/p>\n<p><strong>Let Them Eat AI<\/strong><\/p>\n<p>I attended OpenAI\u2019s education webinar \u201c<a href=\"https:\/\/academy.openai.com\/public\/events\/writing-in-the-age-of-ai-what-faculty-need-to-know-gfhyabqmad\" target=\"_blank\" rel=\"noopener\"><em>Writing in the Age of AI<\/em><\/a>\u201d (is that an oxymoron now?). Once again, the event was hosted by OpenAI\u2019s Siya Raj Purohit, whom I had seen months earlier on the SFSU campus. 
She opened with lavish praise for educators \u201c<em>meeting the moment with empathy and curiosity<\/em>,\u201d before introducing Jay Dixit, a former Yale English professor turned AI evangelist and now OpenAI\u2019s Head of Community of Writers.<\/p>\n<p>Dixit\u2019s personal website reads like a masterly list of <em>ChatGPT<\/em> conquests \u2013 \u201c<em>My ethical AI framework has been adopted!<\/em>\u201d \u201c<em>I defined messaging about AI!<\/em>\u201d \u2013 the kind of self-congratulatory corporate resume-speak that would make a <em>LinkedIn <\/em>influencer blush. What followed was a surreal blend of <em>TED Talk<\/em> charm, techno-theology, and moral instruction.<\/p>\n<p>The irony wasn\u2019t subtle. Here was Dixit, product of an $80,000-a-year elite Yale education, lecturing faculty at public universities like San Francisco State about how their working-class students should embrace <em>ChatGPT<\/em>. At SFSU, 60 percent of students are first-generation college attendees; many work multiple jobs or come from immigrant families where education represents the family\u2019s single shot at upward mobility. These aren\u2019t students who can afford to experiment with their academic futures.<\/p>\n<p>Dixit\u2019s message was pure Silicon Valley gospel: personal responsibility wrapped in corporate platitudes. Professors, he advised, shouldn\u2019t police students\u2019 use of <em>ChatGPT<\/em> but instead encourage them to craft their own \u201c<em>personal AI ethics<\/em>,\u201d to appeal to their higher angels. In other words, just put the burden on the students. \u201c<em>Don\u2019t outsource the thinking!<\/em>\u201d Dixit proclaimed, while literally selling the chatbot.<\/p>\n<p>The audacity was breathtaking. Tell an 18-year-old whose financial aid, scholarship or visa depends on GPA to develop \u201c<em>personal AI ethics<\/em>\u201d while you profit from the very technology designed to undermine their learning. 
It\u2019s classic neoliberal jiu-jitsu: reframe the erosion of institutional norms as a character-building opportunity. Like a drug dealer lecturing about personal responsibility while handing out free samples.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-111650\" src=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/02\/chatgpt-h-e1677265021923.png\" alt=\"\" width=\"560\" height=\"296\" \/><\/p>\n<p>When critics push back against this corporate evangelism, the reply \u2013 like Roy Lee\u2019s \u2013 is predictable: we\u2019re accused of \u201c<em>moral panic<\/em>\u201d over inevitable progress, with the old invocation of Socrates\u2019 anxiety about writing to suggest today\u2019s AI fears are mere nostalgia. Tech luminaries such as <a href=\"https:\/\/www.simonandschuster.com\/books\/Superagency\/Reid-Hoffman\/9798893310108\" target=\"_blank\" rel=\"noopener\">Reid Hoffman make this argument,<\/a> urging \u201c<em>iterative deployment<\/em>\u201d and insisting our \u201c<em>sense of urgency needs to match the current speed of change<\/em>\u201d \u2013 learn-by-shipping, fix later. He recasts precaution as \u201c<em>problemism<\/em>\u201d and labels sceptics as \u201c<em>Gloomers<\/em>,\u201d claiming that slowing or pausing AI would only pre-empt its benefits.<\/p>\n<p>But the analogy is flawed. Earlier technologies expanded human agency over generations; this one seeks to replace cognition at platform speed (the launch of <em>ChatGPT<\/em> hit 100 million users in two months), while the public is conscripted into the experiment \u201chands-on\u201d after release. Hoffman concedes the democratic catch: broad participation slows innovation, so faster progress may come from \u201c<em>more authoritarian countries<\/em>.\u201d Far from an answer to moral panic, this is an argument for outrunning consent.<\/p>\n<p>The contradictions piled up. 
As Dixit projected a<a href=\"https:\/\/admissions.yale.edu\/liberal-arts-education\" target=\"_blank\" rel=\"noopener\"> Yale<\/a> brochure extolling the purpose of liberal education, he reassured faculty that <em>ChatGPT<\/em> could serve as a \u201c<em>creative partner<\/em>,\u201d a \u201c<em>sounding board<\/em>,\u201d even an \u201c<em>editorial assistant<\/em>.\u201d Writing with AI wasn\u2019t to be feared; it was simply being reborn. And what mattered now was student adaptability. \u201c<em>The future is uncertain<\/em>,\u201d he concluded. \u201c<em>We need to prepare students to be agile, nimble, and ready for anything<\/em>.\u201d (Where had I heard that <a href=\"https:\/\/www.currentaffairs.org\/news\/2023\/01\/how-euphemistic-corporate-language-aided-purdue-pharmas-role-in-the-opioid-crisis\" target=\"_blank\" rel=\"noopener\">corporatese <\/a>before? Probably in a boring business-school meeting.)<\/p>\n<p>The whole event was a masterclass in gaslighting. OpenAI creates the tools that facilitate cheating, then hosts webinars to sell moral recovery strategies. It\u2019s the Silicon Valley circle of life: disruption, panic, profit.<\/p>\n<p>When Siya opened the floor for questions, I submitted one rooted in the actual pressures my students face:<\/p>\n<p>\u201c<em>How can we expect to motivate students when AI can easily generate their essays \u2013 especially when their financial aid, scholarships and visas all depend on GPA? When education has become a high-stakes, transactional sorting process for a hyper-competitive labour market, how can we expect them to not use AI to do their work?<\/em>\u201d<\/p>\n<p>It was never read aloud. Siya skipped over it, preferring questions that allowed for soft moral encouragement and company talking points. 
The event promised dialogue but delivered dogma.<\/p>\n<p><strong>Working-Class Students See Through the Con<\/strong><\/p>\n<p>What Dixit\u2019s corporate evangelism missed entirely is that students themselves are leading the resistance. While the headlines fixate on widespread AI cheating, a different story is emerging in classrooms where faculty actually listen to their students.<\/p>\n<p>At San Francisco State, Professor Martha Kenney, who chaired the Women and Gender Studies department, described what occurred in her science fiction class after the CSU-OpenAI partnership was announced. Her students, she said, \u201c<em>were rightfully sceptical that regular use of generative AI in the classroom would rob them of the education they\u2019re paying so much for<\/em>,\u201d Kenney told me. Most of them had not opened <em>ChatGPT Edu<\/em> by semester\u2019s end.<\/p>\n<p>Her colleague, Martha Lincoln, who teaches Anthropology, witnessed the same scepticism. \u201c<em>Our students are pro-socially motivated. They want to give back<\/em>,\u201d she told me. \u201c<em>They\u2019re paying a lot of money to be here<\/em>.\u201d When Lincoln spoke publicly about CSU\u2019s AI deal, she says, \u201c<em>I heard from a lot of Cal State students not even on our campus asking me \u2018How can I resist this? Who is organizing?\u2019<\/em>\u201d<\/p>\n<p>These weren\u2019t privileged Ivy League students looking for shortcuts. These were first-generation college students, many from historically marginalized groups, who understood something administrators apparently didn\u2019t: they were being asked to pay premium prices for a cheapened product.<\/p>\n<p>\u201c<em>ChatGPT is not an educational technology<\/em>,\u201d Kenney explained. \u201c<em>It wasn&#8217;t designed or optimized for education<\/em>.\u201d When CSU rolled out the partnership, \u201c<em>it doesn&#8217;t say how we\u2019re supposed to use it or what we\u2019re supposed to use it for. 
Normally when we buy a tech license, it\u2019s for software that\u2019s supposed to do something specific&#8230;&#8230;. but ChatGPT doesn\u2019t<\/em>.\u201d<\/p>\n<p>Lincoln was even more direct. \u201c<em>There has not been a pedagogical rationale stated. This isn\u2019t about student success. OpenAI wants to make this the infrastructure of higher education \u2013 because we&#8217;re a market for them. If we privilege AI as a source of right answers, we are taking the process out of teaching and learning. We are just selling down the river for so little<\/em>.\u201d<\/p>\n<p>Ali Kashani, a lecturer in the Political Science department and member of the faculty union\u2019s AI collective bargaining article committee, voiced a similar concern. \u201c<em>The CSU unleashed AI on faculty and students without doing any proper research about the impact<\/em>,\u201d he told me. \u201c<em>First-generation and marginalized students will experience the harmful aspect of AI. Students are being used as guinea pigs in the AI laboratory<\/em>.\u201d That phrase \u2013 \u201c<em>guinea pigs<\/em>\u201d \u2013 echoes the warning Kenney and Lincoln sounded in their <a href=\"https:\/\/www.sfchronicle.com\/opinion\/openforum\/article\/csu-ai-university-education-20158671.php?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"><em>San Francisco Chronicle<\/em> op-ed<\/a>: \u201c<em>The introduction of AI in higher education is essentially an unregulated experiment. Why should our students be the guinea pigs?<\/em>\u201d<\/p>\n<p>For Kashani and others, the question isn\u2019t whether educators are for or against technology \u2013 it\u2019s who controls it, and to what end. 
AI isn\u2019t democratizing learning; it\u2019s automating it.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-210167\" src=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/10\/ai-summary-3-e1759604509220.jpg\" alt=\"\" width=\"560\" height=\"341\" srcset=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/10\/ai-summary-3-e1759604509220.jpg 680w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/10\/ai-summary-3-e1759604509220-300x183.jpg 300w\" sizes=\"auto, (max-width: 560px) 100vw, 560px\" \/><\/p>\n<p>The organized response is growing. The California Faculty Association (CFA) has filed an unfair labour practice charge against the CSU for imposing the AI initiative without faculty consultation, arguing that it violated labour law and faculty intellectual-property rights. At CFA\u2019s Equity Conference, Dr. Safiya Noble \u2013 author of <em>Algorithms of Oppression<\/em> \u2013 urged faculty to demand transparency about how data is stored, what labour exploitation lies behind AI systems, and what environmental harms the CSU is complicit in.<\/p>\n<p>The resistance is spreading beyond California. Dutch university faculty have issued an <a href=\"https:\/\/openletter.earth\/open-letter-stop-the-uncritical-adoption-of-ai-technologies-in-academia-b65bba1e?fbclid=IwY2xjawLSFtRleHRuA2FlbQIxMQBicmlkETF2dFI1ZmUwYWQ5eklaZFlBAR5qPuOm466zxr9hX5CwX1nQnNqt1dZMV0MVUmICqbCPnJDUciPrqqGYVZXKFQ_aem_Iu_RhxOo7j-_sEIUvhxa5A\" target=\"_blank\" rel=\"noopener\">open letter <\/a>calling for a moratorium on AI in academic settings, warning that its use \u201c<em>deskills critical thought<\/em>\u201d and reduces students to operators of machines.<\/p>\n<p>The difference between SFSU\u2019s student resistance and the cheating epidemic elsewhere comes down to motivation. \u201c<em>Very few students get a Women and Gender Studies degree for instrumental reasons<\/em>,\u201d Kenney explained. 
\u201c<em>They\u2019re there because they want to be critical thinkers and politically engaged citizens<\/em>.\u201d These students understand something that administrators and tech evangelists don\u2019t: they\u2019re not paying for automation. They\u2019re paying for mentorship, for dialogue, for intellectual relationships that can\u2019t be outsourced to a chatbot.<\/p>\n<p>The Chatversity normalizes and legitimizes cheating. It rebrands educational destruction as cutting-edge \u201c<em>AI literacy<\/em>\u201d while silencing the very voices \u2013 working-class students, critical scholars, organized faculty \u2013 who expose the con.<\/p>\n<p>But the <a href=\"https:\/\/www.theatlantic.com\/culture\/archive\/2025\/09\/ai-colleges-universities-solution\/684160\/?utm_source=reddit&amp;utm_campaign=the-atlantic&amp;utm_medium=social&amp;utm_content=edit-promo\" target=\"_blank\" rel=\"noopener\">resistance is real<\/a>, and it\u2019s asking the questions university leaders refuse to answer. As Lincoln put it with perfect clarity: \u201c<em>Why would our institution buy a license for a free cheating product?<\/em>\u201d<\/p>\n<p><strong>The New AI Colonialism<\/strong><\/p>\n<p>That webinar was emblematic of something larger. OpenAI, once founded on the promise of openness, now filters out discomfort in favour of corporate propaganda.<\/p>\n<p>Investigative journalist Karen Hao learned this the hard way. After publishing a <a href=\"https:\/\/www.technologyreview.com\/2020\/02\/17\/844721\/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality\/\" target=\"_blank\" rel=\"noopener\">critical profile of OpenAI,<\/a> she was blacklisted for years. In <em>Empire of AI<\/em>, she shows how CEO Sam Altman cloaks monopoly ambitions in humanitarian language \u2013 his soft-spoken, monkish image masking a vast, opaque empire of venture capital and government partnerships extending from Silicon Valley to the White House. 
And while OpenAI publicly champions \u201c<em>aligning AI with human values<\/em>,\u201d it has pressured employees to sign lifelong non-disparagement agreements under <a href=\"https:\/\/www.vox.com\/future-perfect\/351132\/openai-vested-equity-nda-sam-altman-documents-employees\" target=\"_blank\" rel=\"noopener\">threat of losing millions in equity.<\/a><\/p>\n<p>Hao compares this empire to the 19th-century cotton mills: technologically advanced, economically dominant, and built on hidden labour. Where cotton was king, <em>ChatGPT<\/em> now reigns \u2013 sustained by exploitation made invisible. <a href=\"https:\/\/time.com\/6247678\/openai-chatgpt-kenya-workers\/\" target=\"_blank\" rel=\"noopener\"><em>Time<\/em> magazine<\/a> revealed that OpenAI outsourced content moderation for <em>ChatGPT<\/em> to the Kenyan firm Sama, where workers earned under $2 an hour to filter horrific online material: graphic violence, hate speech, sexual exploitation. Many were traumatized by the toxic content. OpenAI exported this suffering to workers in the Global South, then rebranded the sanitized product as \u201c<em>safe AI<\/em>.\u201d<\/p>\n<p>The same logic of extraction extends to the environment. Training large-language models <a href=\"https:\/\/www.forbes.com\/councils\/forbestechcouncil\/2024\/04\/26\/the-untold-story-of-ais-huge-carbon-footprint\/\" target=\"_blank\" rel=\"noopener\">consumes millions of kilowatt-hours <\/a>and <a href=\"https:\/\/andthewest.stanford.edu\/2025\/thirsty-for-power-and-water-ai-crunching-data-centers-sprout-across-the-west\/\" target=\"_blank\" rel=\"noopener\">hundreds of thousands of gallons of water <\/a>annually, sometimes as much as <a href=\"https:\/\/www.bloomberg.com\/graphics\/2025-ai-impacts-data-centers-water-data\/\" target=\"_blank\" rel=\"noopener\">small cities<\/a>, often in <a href=\"https:\/\/www.techrepublic.com\/article\/news-ai-data-centers-drought\/\" target=\"_blank\" rel=\"noopener\">drought-prone regions. 
<\/a>Costs are <a href=\"https:\/\/www.bloomberg.com\/graphics\/2025-ai-impacts-data-centers-water-data\/\" target=\"_blank\" rel=\"noopener\">hidden, externalized, and ignored<\/a>. That\u2019s the gospel of OpenAI: promise utopia, outsource the damage.<\/p>\n<p>The California State University system, which long styled itself as \u201c<em>the people\u2019s university<\/em>,\u201d has now joined this global supply chain. Its $17-million partnership with OpenAI \u2013 signed without meaningful faculty consultation \u2013 offers up students and instructors as beta testers for a company that punishes dissent and drains public resources. This is the final stage of corporatization: public education transformed into a delivery system for private capital. The CSU\u2019s collaboration with OpenAI is the latest chapter in a long history of empire, where public goods are conquered, repackaged, and sold back as progress.<\/p>\n<p>Faculty on the ground see the contradiction. Jennifer Trainor, Professor of English and Faculty Director at SFSU\u2019s Centre for Equity and Excellence in Teaching and Learning, only learned of the partnership when it was publicly announced. She says the most striking part of the announcement, at the time, was its celebratory tone. \u201c<em>It felt surreal<\/em>,\u201d she recalls, \u201c<em>coming at the exact moment when budget cuts, layoffs, and curriculum consolidations were being imposed on our campus<\/em>.\u201d<\/p>\n<p>For Trainor, the deal felt like \u201c<em>a bait-and-switch \u2013 positioning AI as a student success strategy while gutting the very programs that support critical thinking<\/em>.\u201d CSU could have funded genuine educational tools created by educators, she points out, yet chose to pay millions to a Silicon Valley firm already offering its product for free. 
As <em>Chronicle of Higher Education<\/em> writer Marc Watkins notes, it\u2019s \u201c<em>panic purchasing<\/em>\u201d \u2013 buying \u201c<a href=\"https:\/\/marcwatkins.substack.com\/p\/the-costs-of-ai-in-education\" target=\"_blank\" rel=\"noopener\"><em>the illusion of control<\/em>.<\/a>\u201d<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-210170\" src=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/10\/small-brain-2-e1759604420581.jpg\" alt=\"\" width=\"560\" height=\"356\" srcset=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/10\/small-brain-2-e1759604420581.jpg 607w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2025\/10\/small-brain-2-e1759604420581-300x191.jpg 300w\" sizes=\"auto, (max-width: 560px) 100vw, 560px\" \/><\/p>\n<p>Even more telling, CSU bypassed faculty with real AI expertise. In an ideal world, Trainor says, the system would have supported \u201c<em>ground-up, faculty-driven initiatives<\/em>.\u201d Instead, it embraced a corporate platform many faculty distrust. Indeed, AI has become Orwellian shorthand for closed governance and privatized profit. Trainor has since gone on to write about and work with faculty to address the problems companies like OpenAI pose for education.<\/p>\n<p>The CSU partnership lays bare how far public universities have drifted from their democratic mission. 
What\u2019s being marketed as innovation is simply another form of dependency \u2013 education reduced to a franchise of a global tech empire.<\/p>\n<p><strong>The Real Stakes<\/strong><\/p>\n<p>If the previous sections exposed the economic and institutional colonization of public education, what follows is its cognitive and moral cost.<\/p>\n<p>A recent MIT study, <a href=\"https:\/\/www.media.mit.edu\/publications\/your-brain-on-chatgpt\/\" target=\"_blank\" rel=\"noopener\"><em>Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task<\/em>,<\/a> provides sobering evidence. When participants used ChatGPT to draft essays, brain scans revealed a 47 percent drop in neural connectivity across regions associated with memory, language, and critical reasoning. Their brains worked less, but they felt just as engaged \u2013 a kind of metacognitive mirage. Eighty-three percent of heavy AI users couldn\u2019t recall key points from what they\u2019d \u201cwritten,\u201d compared to only 10 percent of those who composed unaided. Neutral reviewers described the AI-assisted writing as \u201c<em>soulless, empty, lacking personality<\/em>.\u201d Most alarmingly, after four months of reliance on <em>ChatGPT<\/em>, participants wrote worse once it was removed than those who had never used it at all.<\/p>\n<p>The study warns that when writing is delegated to AI, the way people learn fundamentally changes. As computer scientist <a href=\"https:\/\/newrepublic.com\/article\/181189\/inventor-chatbot-tried-warn-us-ai-joseph-weizenbaum-computer-power-human-reason\" target=\"_blank\" rel=\"noopener\">Joseph Weizenbaum cautioned decades ago<\/a>, the real danger lies in humans adapting their consciousnesses to machine logic. 
Students aren\u2019t just learning less; their brains are learning not to learn.<\/p>\n<p>Author and podcaster Cal Newport calls this <a href=\"https:\/\/www.thedeeplife.com\/podcasts\/episodes\/ep-359-should-we-fear-cognitive-debt\/\" target=\"_blank\" rel=\"noopener\">\u201c<em>cognitive debt<\/em>\u201d<\/a> \u2013 mortgaging future cognitive fitness for short-term ease. His guest, Brad Stulberg, likens it to using a forklift at the gym: you can spend the same hour lifting nothing and still feel productive, but your muscles will atrophy. Thinking, like strength, develops through resistance. The more we delegate our mental strain to machines, the more we lose the capacity to think at all.<\/p>\n<p>This erosion is already visible in classrooms. Students arrive fluent in prompting but hesitant to articulate their own ideas. Essays look polished yet stilted \u2013 stitched together from synthetic syntax and borrowed thought. The language of reflection \u2013 <em>I wonder, I struggle, I see now<\/em> \u2013 is disappearing. In its place comes the clean grammar of automation: fluent, efficient, and empty.<\/p>\n<p>The real tragedy isn\u2019t that students use <em>ChatGPT<\/em> to do their course work. It\u2019s that universities are teaching everyone \u2013 students, faculty, administrators \u2013 to stop thinking. We\u2019re outsourcing discernment. Students graduate fluent in prompting, but illiterate in judgment; faculty teach but aren\u2019t allowed the freedom to educate; and universities, eager to appear innovative, dismantle the very practices that made them worthy of the name. We are approaching educational bankruptcy: degrees without learning, teaching without understanding, institutions without purpose.<\/p>\n<p>The soul of public education is at stake. 
When the largest public university system licenses an AI chatbot from a corporation that blacklists journalists, exploits data workers in the Global South, amasses geopolitical and energy power at an unprecedented scale, and positions itself as an unelected steward of human destiny, it betrays its mission as the \u201c<em>people\u2019s university<\/em>,\u201d rooted in democratic ideals and social justice.<\/p>\n<p>OpenAI is not a partner \u2013 it\u2019s an empire, cloaked in ethics and bundled with a Terms of Service. The university didn\u2019t resist. It clicked \u2018<em>Accept.<\/em>\u2019<\/p>\n<p>I\u2019ve watched this unravel from two vantage points: as a professor living it, and as a first-generation college student who once believed the university was a sacred space for learning. In the 1980s, I attended Sonoma State University. The CSU charged no tuition \u2013 just a modest $670\/year registration fee. The economy was in recession, but I barely noticed. I was already broke. If I needed a few bucks, I\u2019d sell LPs at the used record store. I didn\u2019t go to college in order to get a job. I went to explore, to be challenged, to figure out what mattered. It took me six years to graduate with a degree in Psychology \u2013 six of the most meaningful, exploratory years of my life.<\/p>\n<p>That kind of education \u2013 the open, affordable, meaning-seeking kind \u2013 once flourished in public universities. But now it is nearly extinct. It doesn\u2019t \u201cscale.\u201d It doesn\u2019t fit into the strategic plan. And it doesn\u2019t compute \u2013 which is exactly why the Chatversity wants to eliminate it.<\/p>\n<p>But it also shows another truth: the situation can be different. 
It once was.<\/p>\n<p><em>Author: Ron Purser<\/em><\/p>\n<p>&nbsp;<\/p>\n<p><strong>yogaesoteric<br \/>\nFebruary 20, 2025<\/strong><\/p>\n<p>&nbsp;<\/p>\n","protected":false}}