So with over 250 votes cast, our earlier poll is now complete; herewith the results:
1. There is no reliable method (Condorcet winner: wins its contests with all other choices)
2. Impact/citation studies (loses to "There is no reliable method" 125–115)
3. Reputational surveys (loses to "There is no reliable method" 130–110; loses to "Impact/citation studies" 118–107)
4. SSRN downloads (loses to "There is no reliable method" 163–65; loses to "Reputational surveys" 173–51)
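For readers unfamiliar with the method behind the ranking above: a Condorcet winner is the choice that beats every other choice in head-to-head contests. A minimal sketch of that check, using the poll's reported pairwise tallies (the Impact-vs-SSRN pairing was not reported, so that entry is a hypothetical placeholder consistent with the final ordering):

```python
# Identify a Condorcet winner from pairwise vote tallies.
# Keys are (A, B) pairs; values are (votes for A, votes for B).
pairwise = {
    ("No reliable method", "Impact/citation studies"): (125, 115),
    ("No reliable method", "Reputational surveys"): (130, 110),
    ("No reliable method", "SSRN downloads"): (163, 65),
    ("Impact/citation studies", "Reputational surveys"): (118, 107),
    ("Impact/citation studies", "SSRN downloads"): (150, 70),  # hypothetical: not reported
    ("Reputational surveys", "SSRN downloads"): (173, 51),
}

candidates = {"No reliable method", "Impact/citation studies",
              "Reputational surveys", "SSRN downloads"}

def condorcet_winner(candidates, pairwise):
    """Return the candidate that wins every head-to-head contest, or None."""
    for c in candidates:
        beats_all = True
        for d in candidates - {c}:
            # Look up the tally in whichever orientation it was recorded.
            if (c, d) in pairwise:
                for_c, for_d = pairwise[(c, d)]
            else:
                for_d, for_c = pairwise[(d, c)]
            if for_c <= for_d:
                beats_all = False
                break
        if beats_all:
            return c
    return None  # a voting cycle: no Condorcet winner exists

print(condorcet_winner(candidates, pairwise))  # → No reliable method
```

Note that a Condorcet winner need not exist at all: when the head-to-head results form a cycle, the function above returns None.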
I'm a bit puzzled by the victory of "there is no reliable method," though at least some readers told me they chose it as a proxy for "none of the above." That would make more sense, since I assume all those who voted for "no reliable method" are in the habit of adjudging some faculties better than others, so they must actually believe there is some rational basis for those judgments. Alternatively, perhaps some readers took "reliable" to mean wholly accurate or infallible, in which case, of course, one would have to agree.
I personally ranked reputational surveys first: not the kind U.S. News conducts, of course, but well-designed surveys of scholarly experts who are given real information, which seem to me the best gauge. Certainly, that is how good schools make appointments: on the basis of evaluations by experts, either within the school or from outside. But, interestingly, impact/citation studies slightly beat out reputational surveys. If there was any real consensus here, it was that SSRN downloads are not a very good measure, which certainly seems right.
Thoughts from readers who might care to explain their own votes or comment on the results? Signed comments only.


