Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


AI cheating and pangram redux

Philosopher Stefan Sciaraffa at McMaster in Canada writes:

I read the Unherd piece…I align with its general spirit. However, there is one key claim that I'm not so sure about. I've been using Pangram. It has a vanishingly small false-positive rate; folks at the business school at your university have verified this. I've run hundreds of 2022-and-earlier essays from my students without a single hit, corroborating the claims of virtually zero false positives. The detector does show AI usage in my students' papers for this year and last. So I'm not sure about the linked article's claim that the false-positive rate is too high for all AI detectors. One possibility is that Pangram can read the date of the paper's authorship; hence, no false positives for the 2022 and earlier essays. I can't read Pangram's code, so I can't rule that out.

Pangram came up on an earlier thread. I'm curious whether any readers know if Pangram's code allows it to detect the date of a piece of writing.



4 responses to “AI cheating and pangram redux”

  1. There's a simple way to test: open a pre-2022 essay and copy-and-paste it into a new file. The new file's metadata will carry today's creation date, so if Pangram still reports no AI, it isn't relying on the date.

  2. I see this question as a bit naïve. There is metadata on every document created by a modern word processor and on every PDF (unless it's expressly stripped), which makes it trivial to determine such a document's creation date. But it is also possible to spoof this data, so if Sciaraffa's concern that Pangram reads this metadata to game its analysis were true, it would likewise be trivial to trick the system.

    An interesting test case might be to take both pre-2022 documents and post-2022 documents, some of the latter specifically known to be written by GenAI, then strip the metadata and upload them all to Pangram.
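    To make the commenter's point concrete: a .docx file's "creation date" is just an asserted string in `docProps/core.xml` inside the zip container, and rewriting it is trivial. The sketch below (standard-library Python only; the file is a minimal stand-in for a real Word document, not one Pangram has been shown to parse) reads and then "spoofs" that date.

    ```python
    # Sketch: the asserted creation date of a .docx lives in
    # docProps/core.xml inside the zip; reading and rewriting it
    # takes a few lines of standard-library code.
    import io
    import re
    import zipfile

    CORE_XML = (
        '<?xml version="1.0"?>'
        '<cp:coreProperties '
        'xmlns:cp="http://schemas.openxmlformats.org/package/2006/'
        'metadata/core-properties" '
        'xmlns:dcterms="http://purl.org/dc/terms/">'
        '<dcterms:created>2021-03-15T09:00:00Z</dcterms:created>'
        '</cp:coreProperties>'
    )

    def make_docx(core_xml: str) -> bytes:
        """Build a minimal docx-like zip containing only core.xml."""
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as z:
            z.writestr("docProps/core.xml", core_xml)
        return buf.getvalue()

    def read_created(docx_bytes: bytes) -> str:
        """Return the asserted dcterms:created value, or ''."""
        with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
            xml = z.read("docProps/core.xml").decode()
        m = re.search(r"<dcterms:created>(.*?)</dcterms:created>", xml)
        return m.group(1) if m else ""

    original = make_docx(CORE_XML)
    print(read_created(original))  # prints 2021-03-15T09:00:00Z

    # "Spoofing" is just rewriting the XML before re-zipping:
    spoofed = make_docx(CORE_XML.replace("2021-03-15", "2019-01-01"))
    print(read_created(spoofed))
    ```

    The same asymmetry the cybersecurity commenter describes holds here: reading the asserted date is easy, but nothing about it is trustworthy, since any party can write whatever date it likes.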

    1. Cyber security professional here: reliably determining when a computational artifact (a file, etc.) was created is *hard*. This is roughly why, when your computer's clock battery dies, websites (which use TLS, the "s" in HTTPS) start going haywire.

      It is trivial, however, to read the *asserted* date, but as we say in my profession, don't rely on it for any security-related judgments. Document provenance (e.g., authorship) is effectively a form of access control, so the same caution applies.

  3. Agreed with the other commentator. It is extremely unlikely that Pangram’s success is due to its cheating by reading metadata.
