Apropos of Sagar’s wish to hoist the A.I. industry by its own petard, this article appeared in print in yesterday’s…
I teach both large courses, like Jurisprudence and Critical Legal Thinking (a.k.a. Legal Argumentation), and small seminar-based courses at Edinburgh…
Surely there is an answer to the problem of AI cheating which averts the existential threat. It’s not great,…
I’d like to pose a question. Let’s be pessimistic for the moment, and assume AI *does* destroy the university, at…
Hear, hear.
I agree with all of this. The threat is really that stark. The only solution is indeed in-class essay exams,…
I’m not sure I’d yet go so far as to call LLMs an existential threat to universities. But I do…
The public choice framing is illuminating, but I wonder if it concedes too much to the policies on their own terms. Even setting aside rent-seeking motivations, disclosure requirements can’t do what they’re supposed to do. Peer review exists precisely to admit strangers, including barbarians at the gates, on the strength of the work alone. But pledges don’t work that way. A pledge from someone you know carries weight. A pledge from a stranger carries none. That transforms an institution designed to include outsiders into one that systematically screens them out.
Thanks, Eli. One of my drafts had a paragraph casting doubt on the efficacy of the policy, but I thought it distracted from the main point. Moreover, the fact that what is basically an honor system is easy to game is not unique to AI policies.
But I do agree that disclosure requirements may reinforce prestige hierarchies. In fact, I read someone (sorry, can’t remember who) who speculated that, with increasing suspicion directed toward AI-assisted work, people who distrust AI will tend to trust well-known scholars over unknown scholars even more than they already do, on the assumption that the former surely won’t use AI but the latter, who knows! That seems right and unfortunate. As for the Ethics policy specifically, one of my concerns was that disclosing AI use during peer review would automatically affect editors’ and reviewers’ perception of the paper for the worse, which creates a perverse incentive not to disclose.
Thanks again for reading and commenting. And thanks to Brian for linking!