Students use AI to write papers, professors use AI to grade them, degrees become meaningless, and tech companies make fortunes. Welcome to the death of higher education (2)
Read the first part of the article
From BS Jobs to BS Degrees
Anthropologist David Graeber wrote about the rise of “BS jobs” – work sustained not by necessity or meaning but by institutional inertia. Universities now risk creating their academic twin: BS degrees. AI threatens to professionalize the art of meaningless activity, widening the gap between education’s public mission and its hollow routines. In Graeber’s words, such systems inflict “profound psychological violence,” the dissonance of knowing one’s labour serves no purpose.

Universities are already caught in this loop: students going through the motions of assignments they know are empty, faculty grading work they suspect wasn’t written by students, administrators celebrating “innovations” everyone else understands are destroying education. The difference from the corporate world’s “BS jobs” is that students have to pay for the privilege of this theatre of make-believe learning.
If ChatGPT can generate student essays, complete assignments, and even provide feedback, what remains of the educational transaction? We risk creating a system where:
- Students pay tuition for credentials they didn’t earn through learning
- Faculty grade work they know wasn’t produced by students
- Administrators celebrate “efficiency gains” that are actually learning losses
- Employers receive graduates with degrees that signify nothing about actual competence
I got a front-row seat to this charade at a recent workshop called “OpenAI Day Faculty Session: AI in the Classroom,” held in the university library as part of San Francisco State University’s rollout of ChatGPT Edu. OpenAI had transformed the sanctuary of learning into its corporate showroom. The vibe: half product demo, half corporate pep rally, disguised as professional development.
Siya Raj Purohit, an OpenAI staffer, bounced onto the stage with breathless enthusiasm: “You’ll learn great use cases! Cool demos! Cool functionality!” (Too cool for school, but I endured.)
Then came the centrepiece: a slide instructing faculty how to prompt-engineer their courses. A template read:
“Experiment with This Prompt
Try inputting the following prompt. Feel free to edit it however you’d like – this is simply a starting point!
I’m a professor at San Francisco State University, teaching [course name or subject]. I have an assignment where students [briefly describe the task]. I want to redesign it using AI to deepen student learning, engagement, and critical thinking.
Can you suggest:
- A revised version of the assignment using ChatGPT
- A prompt I can give students to guide their use of ChatGPT
- A way to evaluate whether AI improved the quality of their work
- Any academic integrity risks I should be aware of?”
The message was clear. Let ChatGPT redesign your class. Let ChatGPT tell you how to evaluate your students. Let ChatGPT tell students how to use ChatGPT. Let ChatGPT solve the problem of human education. It was like being handed a Mad Libs puzzle for automating your syllabus.
Then came the real showstopper.
Siya, clearly moved, shared what she called a personal turning point: “There was a moment when ChatGPT and I became friends. I was working on a project and said, ‘Hey, do you remember when we built that element for my manager last month?’ And it said, ‘Yes, Siya, I remember.’ That was such a powerful moment – it felt like a friend who remembers your story and helps you become a better knowledge worker.”
A faculty member, Prof. Tanya Augsburg, interrupted. “Sorry, it’s a tool, right? You’re saying a tool is going to be a friend?”

Siya deflected: “Well, it’s an anecdote that sometimes helps faculty.” (That sometimes wasn’t this time.) “It’s just about how much context it remembers.”
Augsburg persisted: “So we’re encouraging students to have relationships with it? I just want to be clear.”
Siya countered with survey data, the rhetorical flak jacket of every good ed-tech evangelist: “According to the survey we run, a lot of students already do. They see it as a coach, mentor, career navigator… it’s up to them what kind of relationship they want.”
Welcome to the brave new world of parasocial machine bonding – sponsored by the campus centre for teaching excellence. The moment was absurd but revealing: the university wasn’t resisting BS education; it was onboarding it. Education at its best sparks curiosity and critical thought. “BS education” does the opposite: it trains people to tolerate meaninglessness, to accept the automation of their own thinking, to value credentials over competence.
Administrators seem unable to fathom the obvious: eroding higher education’s core purpose doesn’t go unnoticed. If ChatGPT can write essays, ace exams and tutor, what exactly is the university selling? Why pay tens of thousands for an experience increasingly automated? Why dedicate your life to teaching if it’s reduced to prompt engineering? Why retain tenured professors whose role seems quaint, medieval and redundant? Why have universities at all?
Students and parents have certainly noticed the rot. Enrolments and retention rates are plunging, especially in public systems like the CSU. Students are reasoning, rightly, that it makes little sense to take on crushing debt for degrees that may soon be obsolete.
Philosophy professor Troy Jollimore at CSU Chico sees the writing on the wall. As reported in New York Magazine, he warned, “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.” He added: “Every time I talk to a colleague about this, the same aspect comes up: retirement. ‘When can I retire? When can I get out of this?’ That’s what we’re all thinking now.”
Those who spent decades honing their craft now watch as their life’s work is reduced to prompting a chatbot. No wonder so many are calculating pension benefits between office hours.
Let Them Eat AI
I attended OpenAI’s education webinar “Writing in the Age of AI” (is that an oxymoron now?). Once again, the event was hosted by OpenAI’s Siya Raj Purohit, whom I had seen months earlier on the SFSU campus. She opened with lavish praise for educators “meeting the moment with empathy and curiosity,” before introducing Jay Dixit, a former Yale English professor turned AI evangelist and now OpenAI’s Head of Community of Writers.
Dixit’s personal website reads like a master list of ChatGPT conquests – “My ethical AI framework has been adopted!” “I defined messaging about AI!” – the kind of self-congratulatory corporate resume-speak that would make a LinkedIn influencer blush. What followed was a surreal blend of TED Talk charm, techno-theology, and moral instruction.
The irony wasn’t subtle. Here was Dixit, product of an $80,000-a-year elite Yale education, lecturing faculty at public universities like San Francisco State about how their working-class students should embrace ChatGPT. At SFSU, 60 percent of students are first-generation college attendees; many work multiple jobs or come from immigrant families where education represents the family’s single shot at upward mobility. These aren’t students who can afford to experiment with their academic futures.
Dixit’s message was pure Silicon Valley gospel: personal responsibility wrapped in corporate platitudes. Professors, he advised, shouldn’t police students’ use of ChatGPT but instead encourage them to craft their own “personal AI ethics” – an appeal to their better angels. In other words, just put the burden on the students. “Don’t outsource the thinking!” Dixit proclaimed, while literally selling the chatbot.
The audacity was breathtaking. Tell an 18-year-old whose financial aid, scholarship or visa depends on GPA to develop “personal AI ethics” while you profit from the very technology designed to undermine their learning. It’s classic neoliberal jiu-jitsu: reframe the erosion of institutional norms as a character-building opportunity. Like a drug dealer lecturing about personal responsibility while handing out free samples.

When critics push back against this corporate evangelism, the reply – like Roy Lee’s – is predictable: we’re accused of “moral panic” over inevitable progress, and Socrates’ old anxiety about writing is invoked to suggest today’s AI fears are mere nostalgia. Tech luminaries such as Reid Hoffman make this argument, urging “iterative deployment” and insisting our “sense of urgency needs to match the current speed of change” – learn by shipping, fix later. He recasts precaution as “problemism” and labels sceptics “Gloomers,” claiming that slowing or pausing AI would only pre-empt its benefits.
But the analogy is flawed. Earlier technologies expanded human agency over generations; this one seeks to replace cognition at platform speed – ChatGPT reached 100 million users within two months of launch – while the public is conscripted into the experiment “hands-on” after release. Hoffman concedes the democratic catch: broad participation slows innovation, so faster progress may come from “more authoritarian countries.” Far from an answer to moral panic, this is an argument for outrunning consent.
The contradictions piled up. As Dixit projected a Yale brochure extolling the purpose of liberal education, he reassured faculty that ChatGPT could serve as a “creative partner,” a “sounding board,” even an “editorial assistant.” Writing with AI wasn’t to be feared; it was simply being reborn. And what mattered now was student adaptability. “The future is uncertain,” he concluded. “We need to prepare students to be agile, nimble, and ready for anything.” (Where had I heard that corporatese before? Probably in a boring business-school meeting.)
The whole event was a masterclass in gaslighting. OpenAI creates the tools that facilitate cheating, then hosts webinars to sell moral recovery strategies. It’s the Silicon Valley circle of life: disruption, panic, profit.
When Siya opened the floor for questions, I submitted one rooted in the actual pressures my students face:
“How can we expect to motivate students when AI can easily generate their essays – especially when their financial aid, scholarships and visas all depend on GPA? When education has become a high-stakes, transactional sorting process for a hyper-competitive labour market, how can we expect them to not use AI to do their work?”
It was never read aloud. Siya skipped over it, preferring questions that allowed for soft moral encouragement and company talking points. The event promised dialogue but delivered dogma.
Working-Class Students See Through the Con
What Dixit’s corporate evangelism missed entirely is that students themselves are leading the resistance. While the headlines fixate on widespread AI cheating, a different story is emerging in classrooms where faculty actually listen to their students.
At San Francisco State, Professor Martha Kenney, who chaired the Women and Gender Studies department, described what occurred in her science fiction class after the CSU-OpenAI partnership was announced. Her students, she told me, “were rightfully sceptical that regular use of generative AI in the classroom would rob them of the education they’re paying so much for.” Most of them had not opened ChatGPT Edu by semester’s end.
Her colleague, Martha Lincoln, who teaches Anthropology, witnessed the same scepticism. “Our students are pro-socially motivated. They want to give back,” she told me. “They’re paying a lot of money to be here.” When Lincoln spoke publicly about CSU’s AI deal, she says, “I heard from a lot of Cal State students not even on our campus asking me ‘How can I resist this? Who is organizing?’”
These weren’t privileged Ivy League students looking for shortcuts. These were first-generation college students, many from historically marginalized groups, who understood something administrators apparently didn’t: they were being asked to pay premium prices for a cheapened product.
“ChatGPT is not an educational technology,” Kenney explained. “It wasn’t designed or optimized for education.” When CSU rolled out the partnership, “it doesn’t say how we’re supposed to use it or what we’re supposed to use it for. Normally when we buy a tech license, it’s for software that’s supposed to do something specific… but ChatGPT doesn’t.”
Lincoln was even more direct. “There has not been a pedagogical rationale stated. This isn’t about student success. OpenAI wants to make this the infrastructure of higher education – because we’re a market for them. If we privilege AI as a source of right answers, we are taking the process out of teaching and learning. We are just selling down the river for so little.”
Ali Kashani, a lecturer in the Political Science department and member of the faculty union’s AI collective bargaining article committee, voiced a similar concern. “The CSU unleashed AI on faculty and students without doing any proper research about the impact,” he told me. “First-generation and marginalized students will experience the harmful aspect of AI. Students are being used as guinea pigs in the AI laboratory.” That phrase – “guinea pigs” – echoes the warning Kenney and Lincoln sounded in their San Francisco Chronicle op-ed: “The introduction of AI in higher education is essentially an unregulated experiment. Why should our students be the guinea pigs?”
For Kashani and others, the question isn’t whether educators are for or against technology – it’s who controls it, and to what end. AI isn’t democratizing learning; it’s automating it.

The organized response is growing. The California Faculty Association (CFA) has filed an unfair labour practice charge against the CSU for imposing the AI initiative without faculty consultation, arguing that it violated labour law and faculty intellectual-property rights. At CFA’s Equity Conference, Dr. Safiya Noble – author of Algorithms of Oppression – urged faculty to demand transparency about how data is stored, what labour exploitation lies behind AI systems, and what environmental harms the CSU is complicit in.
The resistance is spreading beyond California. Dutch university faculty have issued an open letter calling for a moratorium on AI in academic settings, warning that its use “deskills critical thought” and reduces students to operators of machines.
The difference between SFSU’s student resistance and the cheating epidemic elsewhere comes down to motivation. “Very few students get a Women and Gender Studies degree for instrumental reasons,” Kenney explained. “They’re there because they want to be critical thinkers and politically engaged citizens.” These students understand something that administrators and tech evangelists don’t: they’re not paying for automation. They’re paying for mentorship, for dialogue, for intellectual relationships that can’t be outsourced to a chatbot.
The Chatversity normalizes and legitimizes cheating. It rebrands educational destruction as cutting-edge “AI literacy” while silencing the very voices – working-class students, critical scholars, organized faculty – who expose the con.
But the resistance is real, and it’s asking the questions university leaders refuse to answer. As Lincoln put it with perfect clarity: “Why would our institution buy a license for a free cheating product?”
The New AI Colonialism
That webinar was emblematic of something larger. OpenAI, once founded on the promise of openness, now filters out discomfort in favour of corporate propaganda.
Investigative journalist Karen Hao learned this the hard way. After publishing a critical profile of OpenAI, she was blacklisted for years. In Empire of AI, she shows how CEO Sam Altman cloaks monopoly ambitions in humanitarian language – his soft-spoken, monkish image masking a vast, opaque empire of venture capital and government partnerships extending from Silicon Valley to the White House. And while OpenAI publicly champions “aligning AI with human values,” it has pressured employees to sign lifelong non-disparagement agreements under threat of losing millions in equity.
Hao compares this empire to the 19th-century cotton mills: technologically advanced, economically dominant, and built on hidden labour. Where cotton was king, ChatGPT now reigns – sustained by exploitation made invisible. Time magazine revealed that OpenAI outsourced content moderation for ChatGPT through the firm Sama, whose workers in Kenya earned under $2 an hour to filter horrific online material: graphic violence, hate speech, sexual exploitation. Many were traumatized by the toxic content. OpenAI exported this suffering to workers in the Global South, then rebranded the sanitized product as “safe AI.”
The same logic of extraction extends to the environment. Training large language models consumes millions of kilowatt-hours of electricity and hundreds of thousands of gallons of water annually – sometimes as much as a small city – often in drought-prone regions. The costs are hidden, externalized, and ignored. That’s the gospel of OpenAI: promise utopia, outsource the damage.
The California State University system, which long styled itself as “the people’s university,” has now joined this global supply chain. Its $17-million partnership with OpenAI – signed without meaningful faculty consultation – offers up students and instructors as beta testers for a company that punishes dissent and drains public resources. This is the final stage of corporatization: public education transformed into a delivery system for private capital. The CSU’s collaboration with OpenAI is the latest chapter in a long history of empire, where public goods are conquered, repackaged, and sold back as progress.
Faculty on the ground see the contradiction. Jennifer Trainor, Professor of English and Faculty Director at SFSU’s Centre for Equity and Excellence in Teaching and Learning, only learned of the partnership when it was publicly announced. She says the most striking part of the announcement, at the time, was its celebratory tone. “It felt surreal,” she recalls, “coming at the exact moment when budget cuts, layoffs, and curriculum consolidations were being imposed on our campus.”
For Trainor, the deal felt like “a bait-and-switch – positioning AI as a student success strategy while gutting the very programs that support critical thinking.” CSU could have funded genuine educational tools created by educators, she points out, yet chose to pay millions to a Silicon Valley firm already offering its product for free. As Chronicle of Higher Education writer Marc Watkins notes, it’s “panic purchasing” – buying “the illusion of control.”

Even more telling, CSU bypassed faculty with real AI expertise. In an ideal world, Trainor says, the system would have supported “ground-up, faculty-driven initiatives.” Instead, it embraced a corporate platform many faculty distrust. Indeed, “AI” has become Orwellian shorthand for closed governance and privatized profit. Trainor has since written about these issues and worked with faculty to address the problems that companies like OpenAI pose for education.
The CSU partnership lays bare how far public universities have drifted from their democratic mission. What’s being marketed as innovation is simply another form of dependency – education reduced to a franchise of a global tech empire.
The Real Stakes
If the previous sections exposed the economic and institutional colonization of public education, what follows is its cognitive and moral cost.
A recent MIT study, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, provides sobering evidence. When participants used ChatGPT to draft essays, brain scans revealed a 47 percent drop in neural connectivity across regions associated with memory, language, and critical reasoning. Their brains worked less, but they felt just as engaged – a kind of metacognitive mirage. Eighty-three percent of heavy AI users couldn’t recall key points from what they’d “written,” compared to only 10 percent of those who composed unaided. Neutral reviewers described the AI-assisted writing as “soulless, empty, lacking personality.” Most alarmingly, after four months of reliance on ChatGPT, participants wrote worse once it was removed than those who had never used it at all.
The study warns that when writing is delegated to AI, the way people learn fundamentally changes. As computer scientist Joseph Weizenbaum cautioned decades ago, the real danger lies in humans adapting their consciousness to machine logic. Students aren’t just learning less; their brains are learning not to learn.
Author and podcaster Cal Newport calls this “cognitive debt” – mortgaging future cognitive fitness for short-term ease. His guest, Brad Stulberg, likens it to using a forklift at the gym: you can spend the same hour lifting nothing and still feel productive, but your muscles will atrophy. Thinking, like strength, develops through resistance. The more we delegate our mental strain to machines, the more we lose the capacity to think at all.
This erosion is already visible in classrooms. Students arrive fluent in prompting but hesitant to articulate their own ideas. Essays look polished yet stilted – stitched together from synthetic syntax and borrowed thought. The language of reflection – I wonder, I struggle, I see now – is disappearing. In its place comes the clean grammar of automation: fluent, efficient, and empty.
The real tragedy isn’t that students use ChatGPT to do their course work. It’s that universities are teaching everyone – students, faculty, administrators – to stop thinking. We’re outsourcing discernment. Students graduate fluent in prompting, but illiterate in judgment; faculty teach but aren’t allowed the freedom to educate; and universities, eager to appear innovative, dismantle the very practices that made them worthy of the name. We are approaching educational bankruptcy: degrees without learning, teaching without understanding, institutions without purpose.
The soul of public education is at stake. When the largest public university system licenses an AI chatbot from a corporation that blacklists journalists, exploits data workers in the Global South, amasses geopolitical and energy power at an unprecedented scale, and positions itself as an unelected steward of human destiny, it betrays its mission as the “people’s university,” rooted in democratic ideals and social justice.
OpenAI is not a partner – it’s an empire, cloaked in ethics and bundled with a Terms of Service. The university didn’t resist. It clicked ‘Accept.’
I’ve watched this unravel from two vantage points: as a professor living it, and as a first-generation college student who once believed the university was a sacred space for learning. In the 1980s, I attended Sonoma State University. The CSU charged no tuition – just a modest $670/year registration fee. The economy was in recession, but I barely noticed. I was already broke. If I needed a few bucks, I’d sell LPs at the used record store. I didn’t go to college in order to get a job. I went to explore, to be challenged, to figure out what mattered. It took me six years to graduate with a degree in Psychology – six of the most meaningful, exploratory years of my life.
That kind of education – the open, affordable, meaning-seeking kind – once flourished in public universities. But now it is nearly extinct. It doesn’t “scale.” It doesn’t fit into the strategic plan. And it doesn’t compute – which is exactly why the Chatversity wants to eliminate it.
But my story also points to another truth: things can be different. They once were.
Author: Ron Purser
yogaesoteric
February 20, 2025