This is going to become a very serious issue. PhD and MA admissions depend heavily on the writing sample. What do programs do when they later come to suspect that an admitted student used AI to produce the writing sample? I think all graduate programs need to adopt an absolutely draconian rule: automatic expulsion if it turns out the writing sample was written with AI. Of course there should be due process: the evidence should be presented to the student, who should have an opportunity to explain or rebut it. But absent such a rule, I would expect this to become a recurring problem.
I have heard from some journal editors that they, too, are increasingly receiving articles they suspect are AI-generated in significant part (or entirely). So journals will have to think about imposing similar rules (e.g., a lifetime ban on submissions from the author?).
What do readers think? Other proposals welcome.