That’s political theorist Paul Sagar’s not implausible assessment; an excerpt:
I…teach political philosophy in a British university, so I have had to wrestle with the impact of large language models (LLMs) in one small domain: higher education. And here, my conclusion is simple. The threat they pose is existential…. Specifically, students who use LLMs to complete their coursework assignments [are the core of the problem]. Ask anybody lecturing in a university today and they will tell you the same: the impact has been dramatic.
The most obvious change is that whereas plagiarism software was previously very good at catching students passing off copied work as their own, LLMs evade this entirely. Programs like ChatGPT generate wholly original text based on the prompts you feed them, making plagiarism software useless. (It’s cheating, Jim, but not as we knew it.)
Likewise, software which claims to be able to detect AI-generated text is of no help, yielding false results in both directions. Students know this. I can only speak directly to the effect on essay-based subjects, but I can’t imagine the situation is any better in the sciences. In turn, we academics are painfully aware that at least some students are using the technology, and hence we are sometimes giving out grades for work spat out by machines, but fraudulently presented as human.
Shouldn’t we, the certified experts, be able to tell the difference between undergraduate work and LLM slop? Well kind of — but it’s tricky. A little while ago I was confidently pronouncing on my ability to spot AI-generated work in my field. When it comes to political philosophy, there is a certain tone of arch confidence — a panoramic control of the wider discipline — that it takes years of reading, writing, and thinking to pull off. No undergraduate has been studying the subject anywhere near long enough to be able to write with that level of control and authority. Hence when I find this tone in student work, I’m pretty sure that I’ve got an LLM-cheat….[But] those who are effective at using the technology to cheat are precisely the ones getting away with it….
My gut instinct that coursework feels like AI is, reasonably enough, insufficient proof to fail a submission. Unless the student is daft enough to have included a hallucinated bibliography of made-up AI references (and yes, this does happen), they always have plausible deniability. Their word, against my gut instinct. Unprincipled offenders win every time. After all, for a student willing to cheat on submitted assignments, a bit of extra lying isn’t much of a leap.
Yet it’s not just the problem of brazen cheating. In some ways, the more insidious threat LLMs pose to undergraduate learning is the promise of instant shortcuts. Why struggle through that difficult article, why read that complicated book, why force yourself through the problem set, when the internet can just summarize it for you?
The answer to which is: because it is only through the struggle, the forcing, the wrestling with ideas for yourself, over the course of years, that you can truly train and develop your mind. Indeed, this is the reason university humanities degrees put such a high premium on writing. Writing is thinking. Until you have tried to put your ideas on the page, you never really know if you understand them and have them under control.
Unfortunately, the truth of these facts only becomes apparent with experience — which is exactly what undergraduates lack….
What I would give to demolish the Silicon Valley cartels foisting this corrosive digital narcotic upon my students! Move fast and break things? How I wish I could return the favor.
By this point, it is abundantly clear that the only pedagogically robust response to LLMs in universities is at least a partial return to traditional methods. Reliance on online coursework has to be reduced; a significant return to paper and pen is required. This is the only way we can guarantee that students are not cheating in (all) their submissions. It is only by demanding that they prove their knowledge directly, in person, that we can incentivize them to go away and learn properly in their own time. Everybody in higher education knows this already.
For thirty years, I gave take-home essay exams in my jurisprudence classes. No longer: now it’s a 4-hour in-class essay exam, no Internet access. Colleagues convinced me it was too risky to continue with a take-home exam.
Thoughts from readers on Professor Sagar’s piece? Signed comments (full name and valid email address; the latter will not appear) will be preferred.