…from philosopher Nicolas Delon, a former Law & Philosophy Fellow here.
News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.
The public choice framing is illuminating, but I wonder if it concedes too much to the policies on their own terms. Even setting aside rent-seeking motivations, disclosure requirements can’t do what they’re supposed to do. Peer review exists precisely to admit strangers, including barbarians at the gates, on the strength of the work alone. But pledges don’t work that way. A pledge from someone you know carries weight. A pledge from a stranger carries none. That transforms an institution designed to include outsiders into one that systematically screens them out.
Thanks, Eli. One of my drafts had a paragraph casting doubt on the efficacy of the policy, but I thought it distracted from the main point. Besides, the ease of gaming what is basically an honor system is not unique to AI policies.
But I do agree that disclosure requirements may reinforce prestige hierarchies. In fact, I read someone (sorry, can’t remember who) who speculated that, as suspicion of AI-assisted work grows, people who distrust AI will trust well-known scholars over unknown scholars even more than they already do, on the assumption that the former surely won’t use AI, but the latter, who knows! That seems right and unfortunate. As for the Ethics policy specifically, one of my concerns was that disclosing AI use during peer review would automatically color editors’ and reviewers’ perception of the paper for the worse, which creates a perverse incentive not to disclose.
Thanks again for reading and commenting. And thanks to Brian for linking!