Students use AI to write papers, professors use AI to grade them, degrees become meaningless, and tech companies make fortunes. Welcome to the death of higher education (1)

I used to think that the hype surrounding artificial intelligence was just that – hype. I was sceptical when ChatGPT made its debut. The media frenzy, the breathless proclamations of a new era – it all felt familiar. I assumed it would blow over like every tech fad before it. I was wrong. But not in the way you might think.

The panic came first. Faculty meetings erupted in dread: “How will we detect plagiarism now?” “Is this the end of the college essay?” “Should we go back to blue books and proctored exams?” My business school colleagues suddenly behaved as if cheating had just been invented.

Then, almost overnight, the hand-wringing turned into hand-rubbing. The same professors who had been forecasting academic doom were now giddily rebranding themselves as “AI-ready educators.” Across campus, workshops like Building AI Skills and Knowledge in the Classroom and AI Literacy Essentials popped up like mushrooms after rain. The initial panic about plagiarism gave way to a resigned embrace: “If you can’t beat ‘em, join ‘em.”

This about-face wasn’t unique to my campus. The California State University (CSU) system – America’s largest public university system with 23 campuses and nearly half a million students – went all-in, announcing a $17 million partnership with OpenAI. CSU would become the nation’s first “AI-empowered” university system, offering free ChatGPT Edu (a campus-branded version designed for educational institutions) to every student and employee. The press release gushed about “personalized, future-focused learning tools” and preparing students for an “AI-driven economy.”

The timing was surreal. CSU unveiled its grand technological gesture just as it proposed slashing $375 million from its budget. While administrators cut ribbons on their AI initiative, they were also cutting faculty positions, entire academic programs, and student services. At CSU East Bay, general layoff notices were issued twice within a year, hitting departments like General Studies and Modern Languages. My own alma mater, Sonoma State, faced a $24 million deficit and announced plans to eliminate 23 academic programs – including philosophy, economics, and physics – and to cut over 130 faculty positions, more than a quarter of its teaching staff.

At San Francisco State University, the provost’s office formally notified our union, the California Faculty Association (CFA), of potential layoffs – an announcement that sent shockwaves through campus as faculty tried to reconcile budget cuts with the administration’s AI enthusiasm. The irony was hard to miss: the same month our union received layoff threats, OpenAI’s education evangelists set up shop in the university library to recruit faculty into the dark gospel of automated learning.

The math is brutal and the juxtaposition stark: millions for OpenAI while pink slips go out to longtime lecturers. The CSU isn’t investing in education – it’s outsourcing it, paying premium prices for a chatbot many students were already using for free.

For Sale: Critical Education

Public education has been for sale for decades. Cultural theorist Henry Giroux was among the first to see how public universities were being remade as vocational feeders for private markets. Academic departments now have to justify themselves in the language of revenue, “deliverables,” and “learning outcomes.” CSU’s new partnership with OpenAI is the latest turn of that screw.

Others have traced the same drift. Sheila Slaughter and Gary Rhoades called it academic capitalism: knowledge refashioned as commodity and students as consumers. In Unmaking the Public University, Christopher Newfield showed how privatization actually impoverishes public universities, turning them into debt-financed shells of themselves. Benjamin Ginsberg chronicled the rise of the “all-administrative campus,” where managerial layers and administrative blight multiplied even as faculty shrank. And Martha Nussbaum warned of what’s lost when the humanities – those spaces for imagination and civic reflection – are treated as expendable in a democracy. Together they describe a university that no longer asks what education is for, only what it can earn.

The California State University system has now written the next chapter of that story. Facing deficits and enrolment declines, administrators embraced the rhetoric of AI innovation as if it were salvation. When CSU Chancellor Mildred Garcia announced the $17 million partnership with OpenAI, the press release promised a “highly collaborative public-private initiative” that would “elevate our students’ educational experience” and “drive California’s AI-powered economy.” This corporate-speak reads like a press release ChatGPT could have written.

Meanwhile, at San Francisco State, entire graduate programs devoted to critical inquiry – Women and Gender Studies and Anthropology – were being suspended due to lack of funding. But not to worry: everyone got a free ChatGPT Edu license!

Professor Martha Kenney, Chair of the Women and Gender Studies department and Principal Investigator on a National Science Foundation grant examining AI’s social justice impacts, saw the contradiction firsthand. Shortly after the CSU announcement, she co-authored a San Francisco Chronicle op-ed with Anthropology Professor Martha Lincoln, warning that the new initiative risked short-changing students and undermining critical thinking.

“I’m not a Luddite,” Kenney wrote. “But we need to be asking critical questions about what AI is doing to education, labour, and democracy – questions that my department is uniquely qualified to explore.”

The irony couldn’t be starker: the very programs best equipped to study the social and ethical implications of AI were being defunded, even as the university promoted the use of OpenAI’s products across campus.

This isn’t innovation – it’s institutional auto-cannibalism.

The new mission statement? Optimization. Inside the institution, the corporate idiom trickles down through administrative memos and patronizing emails. Under the guise of “fiscal sustainability” (a friendlier way of saying “cuts”), administrators sharpen their scalpels to restructure the university in accordance with efficiency metrics instead of educational purpose.

The messaging from administrators would be comical if it weren’t so cynical. Before summer break at San Francisco State, a university administrator warned faculty in an email of potential layoffs, hedging with the lines: “We hope to avoid layoffs,” and “No decisions have been made.” Weeks later came her chirpy summer send-off: “I hope you are enjoying the last day to turn in grades. You may even be reading the novel you never finished from winter break.”

Right, because nothing says leisure reading like looming unemployment. Then came the kicker: “If we continue doing the work above to reduce expenses while still maintaining access for students, we do not anticipate having to do layoffs.” Translation: Sacrifice your workloads, your job security, even your colleagues, and maybe we’ll let you keep your job. No promises. Now go enjoy that novel.

Technopoly Comes to Campus

When my business school colleagues insist that ChatGPT is “just another tool in the toolbox,” I’m tempted to remind them that Facebook was once “just a way to connect with friends.” But there’s a difference between tools and technologies. Tools help us accomplish tasks; technologies reshape the very environments in which we think, work, and relate. As philosopher Peter Hershock observes, we don’t merely use technologies; we participate in them. With tools, we retain agency – we can choose when and how to use them. With technologies, the choice is subtler: they remake the conditions of choice itself. A pen extends communication without redefining it; virtual communication networks changed what we mean by privacy, friendship, even truth.

Media theorist Neil Postman warned that a “technopoly” arises when societies surrender judgment to technological imperatives – when efficiency and innovation become moral goods in themselves. Once metrics like speed and optimization replace reflection and dialogue, education mutates into logistics: grading automated, essays generated in seconds. Knowledge becomes data; teaching becomes delivery. What disappears are precious human capacities – curiosity, discernment, presence. The result isn’t augmented intelligence but simulated learning: a paint-by-numbers approach to thought.

Political theorist Langdon Winner once asked whether artifacts can have politics. They can, and AI systems are no exception. They encode assumptions about what counts as intelligence and whose labour counts as valuable. The more we rely on algorithms, the more we normalize their values: automation, prediction, standardization, and corporate dependency. Eventually these priorities fade from view and come to seem natural – “just the way the situation is.”

In classrooms today, the technopoly is thriving. Universities are being retrofitted as fulfilment centres of cognitive convenience. Students aren’t being taught to think more deeply but to prompt more effectively. We are outsourcing the very labour of teaching and learning – the slow work of wrestling with ideas, the enduring of discomfort, doubt, and confusion, the struggle of finding one’s own voice. Critical pedagogy is out; productivity hacks are in. What’s sold as innovation is really surrender. As the university trades its teaching mission for “AI-tech integration,” it doesn’t just risk irrelevance – it risks becoming mechanically soulless. Genuine intellectual struggle has become too expensive a value proposition.

The scandal is not one of ignorance but indifference. University administrators understand exactly what’s going on, and proceed anyway. As long as enrolment numbers hold and tuition checks clear, they turn a blind eye to the learning crisis while faculty are left to manage the educational carnage in their classrooms.

The future of education has already arrived: a liquidation sale of everything that once made it matter.

The Cheating-AI Technology Complex

Before AI arrived, I used to joke with colleagues about plagiarism. “Too bad there isn’t an AI app that can grade their plagiarized essays for us,” I’d say, half in jest. Students have always found ways to cheat – scribbling answers on their palms, sending exams to Chegg.com, hiring ghostwriters – but ChatGPT took it to another level. Suddenly they had access to a writing assistant that never slept, never charged, and never said no.

Universities scrambled to fight back with AI detectors like Turnitin – despite high rates of false positives, documented bias against ESL and Black students, and the absurdity of fighting robots with robots. It’s a twisted ouroboros: universities partner with AI companies; students use AI to cheat; schools panic about cheating and then partner with more AI companies to detect the cheating. It’s surveillance capitalism meeting institutional malpractice, with students trapped in an arms race they never asked to join.

The ouroboros just got darker. In October 2025, Perplexity AI launched a Facebook ad for its new Comet browser featuring a teenage influencer bragging about how he’ll use the app to cheat on every quiz and assignment – and it wasn’t parody. The company literally paid to broadcast academic dishonesty as a selling point. Marc Watkins, writing on his Substack, called it “a new low,” noting that Perplexity’s own CEO seemed unaware his marketing team was glamorizing fraud.

If this sounds like satire, it isn’t: the same week that ad dropped, a faculty member in our College of Business emailed all professors and students, enthusiastically promoting a free one-year Perplexity Pro account “with some additional interesting features!” Yes – even more effective ways to cheat. It’s hard to script a clearer emblem of what I’ve called education’s auto-cannibalism: universities consuming their own purpose while cheerfully marketing the tools of their undoing.

Then there is the Chungin “Roy” Lee saga. Lee arrived as a freshman at Columbia University with ambition – and an OpenAI tab permanently open. By his own admission, he cheated on nearly every assignment. “I’d just dump the prompt into ChatGPT and hand in whatever it spat out,” he told New York Magazine. “AI wrote 80 percent of every essay I turned in.” Asked why he even bothered applying to an Ivy League school, Lee was disarmingly honest: “To find a wife and a startup partner.”

It would be hilarious if it weren’t so telling. Conservative economist Tyler Cowen has offered an even bleaker take on the modern university’s “value proposition.” “Higher education will persist as a dating service, a way of leaving the house, and a chance to party and go see some football games,” he wrote in Everyone’s Using AI to Cheat at School. And That’s a Good Thing. In this view, the university’s intellectual mission is already dead, replaced by credentialism, consumption, and convenience.

Lee’s first venture was an AI app called Interview Coder, designed to cheat on Amazon’s job interviews. He filmed himself using it; his video post went viral. Columbia suspended him for “advertising a link to a cheating tool.” Ironically, this came just as the university – like the CSU – announced a partnership with OpenAI, the same company powering the software that Lee used to cheat his way through its courses.

Unfazed, Lee posted his disciplinary hearing online, gaining more followers. He and his business partner Neel Shanmugam, also disciplined, argued their app violated no rules. “I didn’t learn anything in any class at Columbia,” Shanmugam told KTVU News. “And I think that applies to most of my friends.”

After their suspension, the dynamic duo dropped out, raised $5.3 million in seed funding, and relocated to San Francisco. Of course – because nothing says “tech visionary” like getting suspended for cheating.

Their new company? Cluely. Its mission: “We want to cheat on everything. To help you cheat – smarter.” Its tagline: “We built Cluely so you never have to think alone again.”

Cluely isn’t hiding its purpose; it’s flaunting it. Its manifesto spells out the logic:

Why memorize facts, write code, research anything – when a model can do it in seconds? The future won’t reward effort. It’ll reward leverage. So start cheating. Because when everyone does, no one is.

When challenged on ethics, Lee resorts to the standard Silicon Valley defence: “any technology in the past – whether that’s calculators, Google search – they were all met with an initial pushback of, ‘hey, this is cheating,’” he told KTVU. It’s a glib analogy that sounds profound at a startup pitch but crumbles under scrutiny. Calculators expanded reasoning; the printing press spread knowledge. ChatGPT, by contrast, doesn’t extend cognition – it automates it, turning thinking itself into a service. Rather than democratizing learning, it privatizes the act of thinking under corporate control.

When a 21-year-old college dropout suspended for cheating lectures us about technological inevitability, the response shouldn’t be moral panic but moral clarity – about whose interests are being served. Cheating has ceased to be a subculture; it’s become a brand identity and venture-capital ideology. And why not? In the Chatversity, cheating is no longer deviant – it’s the default. Students openly swap jailbreak prompts to make ChatGPT sound dumber, insert typos, and train models on their own mediocre essays to “humanize” the output.

What’s unfolding now is more than dishonesty – it’s the unravelling of any shared understanding of what education is for. And students aren’t irrational. Many are under immense pressure to maintain GPAs for scholarships, financial aid, or visa eligibility. Education has become transactional; cheating has become a survival strategy.

Some institutions have simply given up. Ohio State University announced that using AI would no longer count as an academic integrity violation. “All cases of using AI in classes will not be an academic integrity question going forward,” Provost Ravi Bellamkonda told WOSU public radio. In an op-ed, OSU alum Christian Collins asked the obvious question: “Why would a student pay full tuition, along with exposing themselves to the economically ruinous trap of student debt, to potentially not even be taught by a human being?”

The irony only deepens.

The New York Times reported on Ella Stapleton, a senior at Northeastern University who discovered her business professor had quietly used ChatGPT to generate lecture slides – even though the syllabus explicitly forbade students from doing the same. While reviewing the slides on leadership theory, she found a leftover prompt embedded in one of them: “Expand on all areas. Be more detailed and specific.” The PowerPoints were full of giveaways: mangled AI images of office workers with extra limbs, garbled text, and spelling errors. “He’s telling us not to use it,” Stapleton said, “and then he’s using it himself.”

Furious, she filed a complaint demanding an $8,000 refund, her share of that semester’s tuition. The professor, Dr. Rick Arrowood, admitted using ChatGPT for his slides to “give them a fresh look,” then conceded, “In hindsight, I wish I would have looked at it more closely.”

One might think this hypocrisy is anecdotal, but it’s institutional. Faculty who once panicked over AI plagiarism are now being “empowered” by universities like CSU, Columbia, and Ohio State to embrace the very “tools” they feared. As corporatization increases class sizes and faculty workloads, the temptation is obvious: let ChatGPT write lectures and journal articles, grade essays, redesign syllabi.

All this pretending reminds me of an old Soviet joke from the factory floor: “They pretend to pay us, and we pretend to work.” In the Chatversity, the roles are just as scripted and cynical. Faculty: “They pretend to support us, and we pretend to teach.” Students: “They pretend to educate us, and we pretend to learn.”

(to be continued)

Author: Ron Purser

yogaesoteric
February 13, 2026