Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


Which AI-writing detector is best?

A reader calls my attention to this article about Pangram. Curious to hear from readers about their experiences with AI-writing detection programs, whether Pangram, or others.


4 responses to “Which AI-writing detector is best?”

  1. Pangram’s outputs match my judgment about whether something is AI more reliably than any other tool’s. It is the most widely respected “is this AI?” tool I know of, and my first choice when I want a second opinion (or to evaluate something I haven’t read).

    My job is primarily working with AI, so I pay a lot of attention to this area, but I have no special expertise in making or evaluating this specific kind of tool. Others might know more.

  2. Samuel Murray

     I just tried Pangram out as a test. I uploaded two chunks of text that were almost entirely AI-generated using ChatGPT 5.4 (279 words and 506 words), and it rendered a “100% human-written” judgment.

    I uploaded another chunk of text that was written with a relatively careless prompt (basically “Write a 500-word essay on the relationship between partisanship and ideology in the United States”). I pasted the result into Pangram and got a “100% AI-generated” judgment. I tried going back to ChatGPT to ask for a revision in a more human register, but the subsequent two checks were also flagged as 100% AI-generated.

    One difference might be that the two initial versions were generated based on text I had already written. I loaded text from my papers, asked ChatGPT to rewrite them around a new idea, and pasted the output. So, perhaps the “complete revision” of human text still retains some human signature. It is promising that text generated from scratch gets flagged.

  3. Alejandro Esteban Camacho

    I have no expertise in this, but I have been looking for such a program for student papers. My understanding based on my research (including conversations with experts on this) is that, to date, there is no software that can detect AI-generated writing with sufficient accuracy to serve as evidence for honor code violations. FWIW.

    1. Alejandro Esteban Camacho

      I should clarify that this was the conclusion of university counsel as a matter of policy as well. But of course, especially in this space, this conclusion needs to be monitored and reevaluated as technologies evolve.
