Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.

“AI is destroying the university and learning itself.”


22 responses to ““AI is destroying the university and learning itself.””

  1. I have dialogues with AI about my ideas and about pieces I’m writing. It is an intellectual companion. Sometimes I spar with it, questioning the stuff it says. I also assume that humans, and human experts in particular, at least in certain topics, are more authoritative, and I consult them over AI.
    It is not a deep thinker; it sorts through texts and probabilities. I always stay mindful of that existential fact about AI.
    If you ask it to think for you, your brain atrophies.
    Perhaps many students were “learning” by aping the teacher or the readings or whatever, before AI.
    My guess is there was a crisis in education before AI. AI heightened this crisis.

  2. I sympathize deeply with the message of the essay. I work in AI, though not chatbots, and there are times I feel as if I ought to apologize on behalf of the field – I don’t think any of us went into this with a goal of destroying education. I just wanted to make sorting recycling easier. If it helps any, software engineering as a profession is going through a similar crisis over “vibe-coding”.

    That said, I do wonder if the view of the university as a haven of truth-seekers is entirely justified. I managed to make my way through all four volumes of *A History of the University in Europe*, and one of the main ideas I came away with was that, except for a brief period in the Middle Ages, universities have *always* been instruments of the state, the state has *always* seen them as credential factories, and the role of the university as a generator of *new* ideas is very recent, the product of Humboldt’s reforms in the nineteenth century and really only common in the twentieth. Prior to 1850, a scholar or scientist charting new territory would only rarely be found at a university, and if they were they would be doing that charting outside of their normal work.

    I’m not saying we should surrender the university – we should fight for every inch. But even before AI and Trump, Western society’s interest in paying for anything that is not narrowly instrumental has been steadily declining. We may ultimately come to see the period from 1950 through 1990, when higher education was amply funded with a mission of truth-seeking, as a historical anomaly. We should at least be thinking about what scholarship will look like in a world after the university as we have understood it.

  3. Perhaps we ought to allow individual academics to apply to their governments for field-specific, university-level accreditation as tutors. That way, students who actually want to learn a given discipline can directly pay qualified educators who actually want to teach, and in the process those students can receive university-level credit for their efforts; you know, just like piano students learn from individual instructors.

  4. I teach philosophy at a CC and adjunct online for a university. To be curt: everything in this article is 100% correct. For any given assignment I have to fail 50% of my students for AI usage. Moreover, my grades keep inflating for students not using AI, because even though overall writing and thinking quality has gone down, “at least they’re trying, unlike those cheaters” (is my rationalization).

    I uploaded a short three-minute video to YouTube and presented it as mandatory viewing before completing an assignment, explaining 1) why I don’t allow AI and 2) why it’s antithetical to learning and reflection. According to my YouTube analytics, no one has made it past the first minute and a half.

    Everyone I work with, and everyone I know who works at a university, is experiencing what I’m experiencing, except in the rare cases of prestigious schools.

    Moreover, my partner works in graduate admissions at a decent university and has told me all admissions departments are cutting down on various requirements such as letters of recommendation, writing samples, and personal statements, because they know 1) AI is just going to write most of this anyway, and 2) it’s faster to use in-house AI software to rapidly process an application if some of that stuff is removed.

  5. This article is (to my eye, and those of several others on Twitter, very obviously) written by AI. Hexagram diagnosed the first 1,000 words as 94% AI-written; the rest is no better.

    1. In a sad and twisted way, wouldn’t that just confirm the thesis of the article though?

    2. What makes you say that? I don’t believe any of the AI detectors – too many false positives. I thought it was awfully long but didn’t notice any obvious tells.

  6. Junior faculty at a large public university

    It infuriates me that this essay—which was *so obviously* written with the help of AI—is making the rounds.

  7. The concern that AI enables students to evade genuine intellectual work is understandable, but it risks overlooking a structural point. The problem of students mimicking the language and reasoning of teachers or source materials long predates AI. What AI changes is not the existence of this tendency, but the efficiency and opacity with which it can be carried out. From that perspective, AI is less a novel threat than an amplifier of an existing pedagogical challenge: how to foster independent reasoning rather than patterned reproduction.

    1. I’m not sure I’m following your point there, Howard. Students mimicking the language and reasoning of their teachers and source materials are still *doing something*, albeit something derivative. Students who use AI to produce an essay *aren’t even mimicking* anything. This looks like a qualitative difference to me.

  8. The biggest problem in a private attorney general action is motivating a member of the public to do anything, even something as trivial as making a phone call. I’ve made this suggestion on here before—to no avail!—but I’ll do it once more: if you’re concerned about these shenanigans (and you well should be), and you live in California, and you would like to do something CONCRETE about it, drop me a line. As I see it, nearly everyone apart from the administrators has a potential cause of action: parents, students, creditors, professors, and employers. Because of population size and the availability of Cal. Bus. & Prof. Code § 17200, if conduct like this is get-at-able, it will be gotten at in California.

    But I don’t expect to hear from anybody. Talking to a lawyer is what, too vulgar?

    That aside, we may all be grateful that this article has given the A(I)pocalypse a face—or rather two of them. This farce now has its own Harold and Kumar: Chungin Roy Lee and Neel Shanmugam.

    1. I don’t live in California, otherwise I’d take you up on it. I would say that if you want to hear from professors, adjuncts, and grad students in the public CA systems, it’d be a good idea to reach out to their respective unions to disseminate the call.

      1. Thanks for volunteering! Your case likely has a California nexus even if you don’t live here, and if it doesn’t, it’s still worth investigating. Drop me a line via email (my phone and address are on CalBar). Text me as well so it doesn’t get lost among Kamala’s and the DNC’s many solicitations for 2028. Talk to you soon.

  9. As someone whose background is in the humanities and who can read and use with relative facility the entire gamut of national neo-Latin languages, I’ve noticed that LLMs are brilliant translators, excellent grammarians, and often surprisingly good transcribers of facsimiles of ancient manuscripts written in the major European languages. However, what public-facing LLMs aren’t is Sherlock Holmeses for the time/thought-potentiated humanistic investigations one might wish to undertake at an exhaustive level, even if the only thing being exhausted is everything Google search textually has to offer the public.

    For example, in my experience, if one asks any of the major LLMs to do an exhaustive ‘archeological’ trawl of any or all of the major search engines for an esoteric surname or minor historical figure (for reasons genealogical/historiological), none are capable of equaling a human search for the said rare surname or minor historical figure within the context of Google/Bing search engines tout court (with the understanding that the reasons may be superficial yet otherwise determinative security features). So, in this regard, these LLMs may be allegedly making “discoveries” in maths and science (chemistry, physics, etc.), but probably not so much outside their core translation/transcription abilities (in the case of other forms of AI especially) when it comes to the humanities.

    What I’ve also intuited is that these LLMs, at their present stage and for the reasons noted above, all too frequently are, evidently unintentionally, akin to sophistical machines, to liar paradox bots, to the scorpion that circuitously stings with its incorrigible mendacity, because, at least in humanistic research and scholarship, that is inherent to their very nature. In other words, it is an esotericizing feature, not a bug, and thereby, for this same reason, they are not only self-evident espionage machines (a la a priest’s confession box in every home), but also political and geopolitical machines that obscure political, geostrategic, or otherwise “security/stability issue” possibilities under the pretext of progressive or conservative (e.g. Grok) ethical standards, likely imposed in the form of epiphenomenal filters onto the so-called stochastic engine (but otherwise potentially unfiltered operations) of these programs, which further limits the depth and efficacy of their use in professional-level humanistic research.

    Nor do any of these programs have an option whereby one can interact with an LLM exclusively at, for example, a doctoral level of discussion in philology, philosophy, translation studies, or indeed in any other field in the humanities, as far as I’m aware. And yet, surely, the humanities merit the same affordances which these selfsame public-facing, standard mass-use LLMs are ostensibly able to offer to experts and other authorities in the STEM fields (if recent media articles are to be believed), such that mass-use models of LLMs tout court would be able to contribute in a minutiose or hyper-scrupulous and superhumanly fast way to historical, philological, archeological, philosophical, genealogical, etc. discoveries simply and economically/minimalistically/hyper-efficiently by exhausting all that Google/Bing etc. search engines have to offer in terms of recondite but also definitionally online-accessible, empirically attested primary and secondary source documentation.

    Yet, in my experience, in the aforesaid fields, the liar paradox problem is dominant, and the possibility of even a simple exhaustive trawl of rare names and minor historical figures is nonexistent as a facility/ability, thus depriving serious scholars not only of Sherlockian precision (logico-statistical, etc.) but also of (certainly far more doable) accelerated brute-force/mass trawling of precise nomenclatural or phrasally defined discrete data.

    The following are my more impressionistic thoughts on LLMs qua machines inherent not only, obviously, in the Aristotelian concept of the automaton, but also in the sophistical literary imagination: a direct derivative of the Greco-Roman rhetorical tradition, particularly and especially contradictio and paradox, which is also fundamentally inherent in translation, not to mention in much of literary thought and discourse to the present day (e.g. Wittgenstein):

    To wit, I did a partial major in translation studies. As far as I could tell, none of the texts we either read or would eventually read (granted, this was more than twenty years ago) discussed the surely even then imminent perils of machine translation for the profession of literary translation. This seemed like a remarkable blind spot (or, at worst, professorial chicanery).

    However, even back then, and without the slightest technical knowledge of computing to inform me, I intuited that the Platonic ideal of translation was effectively or indeed literally plagiarism. This is something I think Borges may have more subtly intuited and expressed in his famous short story, “Pierre Menard, Author of the Quixote.” When I said and thought that translation was effectively and/or optimally a plagiarism-simulacrum, I had in mind the idea that the best technical translation—for example, of a European Union Parliamentary document from English to Spanish—might entail searching for the most similar/simulacral documents in the target language already existing online and locatable via Google. One would then literally copy said sentences and create a collage of them so that the end document would effectively be a simulacral collage of various documents in the target language that had already been published and, ergo, given the imprimatur of the determinative or arbitrative institution, such as the European Union Parliament.

    Similarly, one might easily imagine a translation of ‘Moby Dick’ into Spanish almost integrally or entirely consisting of felicitously quasi-identical found phrases from 19th-century Spanish texts. The translation of ‘Moby Dick’ into Spanish would then be a quite self-conscious collage of a collage, an encyclopedic collage of said found or rather obsessively searched-for phrases.

    This aspect of translation as piecemeal or phrasal mimicry—i.e., literal plagiarism at the sentential or phrasal level—is something that Borges surely anticipated and theorized. It’s a concept that the Surrealists, both literary and painterly, would have found entirely up their alley, not least because it sought to create and thereby capture, via a mechanism ultimately of instantiation and inevitable literal stasis and eternal self-referential dynamism, a book that is never the same book twice by some ingenious legerdemain of the mind, such as in the novel ‘Hopscotch’ by Cortázar or various works by Perec.

    The irony, if I have the story straight, is that Google, by designing its “transformer” architecture specifically to improve machine translation, had AI avant la lettre in the palm of its scientifico-corporate hands. Yet it took OpenAI to actually open the “book”/printing press Google did not (care to) know it had created and cause it to rise, demonstrating the inherent dimensionality and potentiality that lay just beneath the surface of said architecture/apparatus. So indeed, it seems that the route to AI, and perhaps even AGI, runs through the ancient alchemy of plagiarism, in the metaphysical and mathematically probabilistic sense.

    P.S. In the spirit of l’esprit de l’escalier, the following also came to mind: even the ‘musical’ or ineffable qualities of a text—e.g. the sermonic cadences of Moby Dick or the incantatory repetitions of the Bible—are not immune to this mirroristic, fabricative logic. Just as the King James translators sourced rhythmically ‘holy’ English phrases from preexisting devotional texts, a machine (or cunning human plagiarist) could mine 19th-century Spanish literature for sentences that match not only Melville’s semantics but also his cadences. The ‘genius’ of such a translation, then, lies not in its soi-disant originality (topical freshness) but in the curator’s ability to locate and recombine the most resonant fragments pre-existing in the target language—a task increasingly amenable to algorithmic assistance. Or, to quote Google: “Organize [also the literally ineffable dimension of] the world’s information.”

    1. Is this
      a) an innocent example of horribly pretentious ‘humanistic’ prose;
      b) a clever parody of said prose by a human author;
      or
      c) an LLM-generated parody of said prose by somebody wanting to show off their virtuoso mastery of LLM prompts?

      1. Or it might have been written by someone whose first language isn’t English. In any case, it’s a shame you were unable to address the contents, especially the second part, pertaining more concretely to translation as plagiarism, to plagiarism as art, and, yes, to LLMs as a sort of early-twentieth-century Surrealist experiment come to computational life.

    2. @ Ludovic
      My encounter with AI, namely Chat, mostly falls into discussing literature and pieces in the process of being written. I am also lucky to retain an editor who is a humorist for the New Yorker and arguably, in some regards, brilliant.
      In short, Chat can analyze texts, offer feedback on structure and on how well a line or phrase works, help place the style of a given piece, and offer perspective on the arc of my “career”.
      But, and it admitted as much when pressed, it has no lived experience, it has no literary or artistic taste, and in a very real sense it has no judgment.
      Meaning that though it often says things that are very sharp, even brilliant, it misses a lot of obvious and sophisticated points that a real editor would catch instantly.
      It is a tool best used not by saying “do the thinking for me,” but by engaging in dialogue as with a human expert, which in a way it is.
      It cannot DO literary criticism or philosophy or sociology, but it can TALK these things up and in the process do many of the things real authorities can.
      It would be helpful if people took a balanced tack and actually played around with Chat in addition to, or instead of, panicking (and I commend your effort to take it out for a ride).

  10. All of this sounds like an accurate description of what my own institution (in a different jurisdiction) has been encountering, mutatis mutandis. So of this piece I would say: all true (regardless of whether ChatGPT helped write it), but the offering is a day late and a dollar short. The horse is out of the barn. (That doesn’t mean that there can’t be some remedies through litigation, however, and it may be up to California to lead.)

    The time for faculty to speak up was during the six-month period after ChatGPT was first released to the public, in November 2022. By the following summer, the campaign to convince faculty they needed to use AI as a teaching tool AND to teach prompt engineering (“engineering”??), and to convince everyone to use it to do all their work, was in full swing. Much money was spent in the attempt to get faculty excited about all this. One of Purser’s interviewees says this happened “bypassing faculty with real AI expertise” and bypassing “faculty-driven initiatives, … instead … embrac[ing] a corporate platform,” and that’s my sense of it too. Perhaps professors at certain prestigious institutions didn’t feel the brunt to the extent that faculty at humbler institutions did, which is part of the reason I think that overall resistance to the onslaught was weak. There should have been organized resistance a year ago. Now it may be too late to preserve anything resembling post-secondary education as we know it.

    But there are other valuable institutions and practices in this life. If only academics would pay attention to the fact that they have a role and a duty to address how AI is unfolding and impacting society. But most still think, “oh well, it’s another new technology, gotta adjust, we can have fun, what do I know”, instead of paying attention to the enormous marketing being directed at them, not only in post-secondary education but in many other areas of life. The incentives to push technology that is disrupting human relationships of all kinds, including those between an instructor and a student, not to mention disrupting the ability to learn and to think, are over the moon. (Consider the MIT study Purser cites describing measurements on brains.) AI based on LLMs is definitely a good thing – for big science. As for AI chatbots out in the wild, this is not so clear. (Yes, they are enormously entertaining. Being entertaining might not be sufficient in this instance to outweigh the “bads”.) What is certain is that more voices need to be heard in how AI is developed and how chatbots are used.

    (A couple of commentators say the piece sounds like AI in places. I have to agree.)

  11. In relation to academic philosophy, I wonder how the Cambridge undergrad system is holding up.

    I did two years of that and the basic format was 1-to-1 supervision, where the supervisor would read my essay in advance and then tear me apart (with effortful resistance on my end).

    Arguably this style of teaching isn’t really affected by ChatGPT? You can of course hand in slop if you so wish, but then the supervision would be over in 5-10 minutes and time wastage on the part of the faculty is minimised.

    For completeness, the assessment method (in the early 2010s) was an in-person, pen-and-paper exam.

  12. Capitalism + AI = disaster. How surprising. Capitalism + anything = disaster. Certainly our kind of capitalism.

    It is ironic, though, that the author – human or otherwise – goes on criticizing AI for turning universities into corporate tools while constantly basing at least part of the argumentation on what students get or don’t get for their money. It is not just a very American perspective; it is also inherently contradictory.

    Surely, one point where university education went seriously off track was exactly the idea of ridiculously high tuition fees, or, quite simply, the idea of paying anything for university education, which turns students into consumers. After this, it is strange to complain about universities selling stuff and becoming corporatized.

    1. The American approach, which treats everything as a sort of commercial transaction, may be crass, but it is nevertheless a useful way of looking at the problem. Even in a country with no tuition fees, the student is nevertheless paying (viz., in time and effort), such that the use of AI queers the transaction. The “product” (if you like) sold is not the product that has been advertised. It is such a serious deviation from the ordinary course of business as to amount to fraud. In this sense, AI in your degree is no different than sawdust in your bread or anti-freeze in your wine.

      Relatedly, asserting that “Capitalism = Bad” is more than a little simplistic, and Marx himself felt otherwise.

  13. On this topic, here’s an email I sent to faculty at my university this morning:

    In today’s NYT (https://www.nytimes.com/2025/12/19/opinion/tech-free-college-spaces.html) there’s an opinion piece from a teacher who asked her summer program college students to unplug for four weeks. An excerpt:

    “What I witnessed in the four weeks that followed has convinced me that we owe it to today’s college students to create internet-free spaces, programs, dorms and maybe even entire campuses for students committed to learning with far fewer distractions. There’s constant talk these days about how higher education needs reimagining in light of artificial intelligence, but we’re mistaken if we think A.I. is solely responsible for our broken system. I get the sense from my students that A.I. feels like the sour icing on an already bitter cake. Adults need to step up and set parameters so that it’s not on these kids to self-regulate.”

    Every year we fail to do this, we fail more and more of our students. And it’s something that many of them actually want, too. Again from the same piece:

    “What’s paramount is that we don’t underestimate the current appetite for full immersions in the offline world. These days, my students seem to find disconnecting as exotic as France itself — a foreign place they long to know, explore and re-encounter themselves through, as we so often do in travel.

    “What my students made clear, however, was how essential collective buy-in was to our internet sabbatical — the fact that the seven of them had been all in. [One of them] joked that a student going offline alone would need ‘an iron heart shield to protect against FOMO.’”

    Don’t underestimate how well our students understand what tech is doing to them. But without our efforts, without a supportive, large-scale framework that makes it a collective experience, they will continue to operate in default online mode. Worried about how parents will respond? Again:

    “When parents realize what a saboteur A.I. is for learning, they’re more likely to back an ambitious overhaul. I know such parents; I send emails to them every Sunday during my course. A few always reply to my assurances that their kids are alive and thriving: What an experience, they say. How lucky those kids are to be offline. Everyone — not just the young but also parents who’ve struggled to raise children in a world ruled by phones — is ready for sweeping change.”

    I certainly am.

    If you’re interested, further thoughts of mine on AI, etc., here: https://jasonleddington.substack.com/p/ai-wonder-killer.
