Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.

Annals of “bullshit” rankings

Rankings are fun, sure, but it's good to figure out whether the metric means something (anything!) lest one produce nonsense. Case in point: ranking law reviews by Google Scholar h-indices. The problem (we've encountered it in philosophy in the past, but by now everyone there knows Google Scholar is worthless for measuring journal impact) is that there is no control for the volume of publishing by each journal, so any journal that publishes more pages and articles per year will score better than a peer journal with the same actual impact that publishes fewer articles and pages.

UPDATE: In the case of philosophy, Synthese was the number 1 journal in "impact" according to the nonsense Google number, which was obviously ludicrous, as everyone in academic philosophy knew. But Synthese also publishes five to ten times as many articles per year as the actual leading journals in the field. One philosopher adjusted the results for volume of publication, and lo and behold, Synthese's rank fell dramatically.
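To see why volume inflates a journal-level h-index, here is a minimal sketch (with made-up, purely illustrative citation numbers, not real data for Synthese or any journal): two journals with identical per-article impact, where the one that publishes five times as many articles ends up with double the h-index, while a volume-adjusted measure (mean citations per article) comes out the same for both.

```python
def h_index(citations):
    """h-index: the largest h such that h articles have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical per-article impact profile: every article is cited 20, 8, or 3
# times, in equal proportion. Journal B simply publishes five times as many
# articles per year as Journal A.
per_article_profile = [20, 8, 3]
journal_a = per_article_profile * 10   # 30 articles/year
journal_b = per_article_profile * 50   # 150 articles/year

for name, cites in [("Journal A", journal_a), ("Journal B", journal_b)]:
    print(f"{name}: {len(cites)} articles, h-index = {h_index(cites)}, "
          f"mean citations per article = {sum(cites)/len(cites):.1f}")

# Journal A: h-index = 10; Journal B: h-index = 20, even though the mean
# citations per article (about 10.3) is identical. The raw h-index rewards
# sheer publication volume, not per-article impact.
```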
