Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


Letters for students applying to US programs and those “top 1%,” “top 5%” comparisons

A philosopher in Europe writes:

I wonder if I could persuade you to write a blog post that would help European letter writers do well by their students? (Please don't identify me though – not least because I don't want my student to be able to identify herself.)

I recently wrote my first round of letters for the US PhD applications market, for a student whom I rate very highly. Since I work at a major European university, the standard of students I teach is outstanding, and among this cohort the student for whom I wrote is one of the best. I thought I had represented this fact accurately when, in the sections of the reference letters asking me to rank students relative to their cohort, I rated her in the top 10-25% for most of the areas asked (and in the top 1% for one or two others). Given that our graduate students are likely as good as those at most major US universities, I thought this was high praise indeed. However, an American colleague recently told me that any ranking outside the top 5% is generally likely to kill a student's application.

This strikes me as both crazy and unfair, but since I want to do the best by my students, I'll reluctantly play whatever games it takes to see that they get the chances they deserve. For the benefit of non-US letter writers, though, perhaps it would be good to canvass opinions here. How highly must students be ranked to be considered by strong programs? And what percentage of competitive applicants are described as being in the top 1% of even very strong MA or undergraduate programs?

Some guidance here would be very much appreciated!

What do readers think? If you post anonymously, at least indicate something about your experience in these matters (e.g., faculty member at a PhD program, recommender, etc.).


9 responses to “Letters for students applying to US programs and those ‘top 1%,’ ‘top 5%’ comparisons”

  1. A couple of things. First, it seems to me that recommendations are a terrible way to judge applicants for anything: jobs, PhD programs, or anything else. But given the volume of applications, people are going to take short cuts where they can. One way to make a quick judgement is to look for outliers in places that are easy to spot. If you say top 5%, nobody's going to notice, but if you say top 10-25%, it'll stick out like a sore thumb. Indeed, it might be taken as a coded dis-recommendation. Second, and perhaps more contentiously, are your students really so outstanding that top 25% is as good as top 5% at a less prominent university? That's not obvious a priori, and it seems to me understandable that an admissions committee might be sceptical.

  2. I was the person who asked this question, and I'd add one clarification: the student in question is an MA student.

    So to answer Mohan Matthen's question: you're right that it's certainly not obvious a priori. But since our MA students come from all over Europe, I'd be reasonably confident that the best 25% really are as good as the top 5% at many universities. Maybe not the very best ones, but many of the middle-tier ones, at least.

    Many thanks to Brian for posting this.

  3. I wouldn't at all try to epistemically justify setting aside files on the basis of those rankings. But the practical problem that causes people to use them as a negative indicator is real. There are SO many files for SO few slots. As a reader, you simply MUST find some way to quickly get the flood of applications down to a more manageable number; those are the files that get your full attention. Now I think the background assumption is that everybody knows this situation, and just about everybody knows that, as a consequence, ANY early and easily read signal that can be taken to the detriment of the candidate will be taken to the candidate's detriment. Being ranked in the top 10-25% is such a signal. If you are on the receiving end, and you presume that the recommender knows the score, then you tend to take the negative signal as intentionally given, perhaps a rare expression of brutal honesty. Of course, the almost inevitable consequence is that, as this pattern becomes common knowledge, everybody defensively uses only the top one or two slots of the scale. As a consequence, the ratings lose almost all of their value.

    Because of that, I'm in favor of dropping the ratings altogether. I note that there are letter writers who systematically refuse to use them. They fill them out because they must, but they append a note saying they rate everybody as highly as possible, in a sort of protest against the usefulness of the signal.

  4. Many schools ask for comparisons but don't specify a standardized comparison class. Sometimes (though not always) these schools will ask whom you're comparing the student to. In these circumstances, would putting 10-25% still kill someone's candidacy? I'm thinking about the following types of cases:

    – You taught Sally when she was an MA student at a top philosophy department. Compared to the other grad students you've taught recently (PhD and MA students), she's in the top 25%. Given that you're largely comparing her to PhD students at a top school, this is a marker of your thinking extremely highly of her.

    – You teach at a terminal MA program with a strong track record of placing people in top programs. Sally is in the top 30%, which is comparable to past students who got into top-20 schools.

    Or should one simply always use undergraduate philosophy majors as a comparison class?

  5. I too wish we could get rid of the ratings, in part because I think it's likely that anyone who is using them as a quick way to sort piles into 'read more closely' vs. 'do not' will also miss the crucial bit of those ratings: the comparison class you indicate. To be in the top 10-25% of students in our MA program would indicate that one should be (at least) considered (read more closely) by all, or at least most, PhD programs. But such a rating for an undergraduate here (or most places) would indicate to most, or all, PhD programs that you need not look more closely. And then there's the difficulty of figuring out what the ratings mean for non-US students (ranked by non-US faculty). I suspect that the ratings are there because they are part of university-wide application forms/systems, but it'd be nice if philosophy departments made it clear to everyone that they will not be using them.

  6. I don't know how you could know that your 75th percentile are at the 95th percentile of most middle-tier universities. More importantly, I don't know how an admissions committee member could know it, or know that this is what you mean to say. They will take it as semaphore: "She's brilliant as hell." (Cancel previous signal.)

    I think Ken Taylor's read is spot on: just tick the left-most column right down the line, and everybody will ignore that part of your letter.

  7. Reading this thread (multiple times) is disheartening and has made me angry, then sad, then angry. First of all, we have to be really clear on exactly what the questions are. I just filled out a slew of these a few months ago and my memory is not perfect, but I think the questions are not at all standard and vary from school to school. Sometimes there is a possible ranking of top 1%; sometimes top 5% is the highest possible. Sometimes no explicit reference class is given, sometimes there is a dropdown menu, and sometimes something else is mentioned (like 'cohort'). But I am certain I was at least once asked 'of students going on to graduate school, how does this candidate compare…' or whatever. I am certain because I stopped to count how many students of mine I could think of who actually went on to graduate school (very few).

    If we think seriously about these numbers, it is obvious that this system is complete lunacy. Imagine that I am comparing my master's students to all graduate students that I have ever taught. Now maybe I am supposed to ignore those who are ALREADY in PhD programs, but read literally: I have taught about 35 graduate students at Texas Tech, Cornell, and Stanford, so at most one can be in the top 1% of such a group, and I have taught multiple students who now have (well-deserved) tenure-track jobs at top-ten Leiter-ranked PhD programs. I have taught students who have well-deserved tenure-track jobs whom I would judge not to be in the top 5% of graduate students I have taught.

    If we are talking about undergraduates who want to go on to graduate school, there are not that many. Even someone who has been at Texas Tech for a long time could not reasonably say 'top 5% of cohort' more than once or twice in their life (depending on order effects). Regardless of what they ask for, the relevant reference class is probably 'master's students here at Texas Tech', where we have about 10 a year. I think multiple students in each cohort belong in good PhD programs. Yet by frequency alone, I only have a 'top 5%' student once every two years, and I often have deserving graduate students who don't crack the top 25%. Are commenters seriously proposing that I should rank any student I think a program should accept as in the top category?

    Now some commenters are clearly considering a broader class of students – perhaps I should be making judgments about how good students are at other institutions where I have never taught. But this makes matters worse. When I think of other students, those who come to mind invariably stand out as excellent. I think about the graduate students I meet at conferences or who were students with me at Wisconsin (and typically, the better students come to mind). We all (I think) have had experiences where we read a paper, interviewed a job candidate, or heard an APA talk and were blown away by a graduate student. The top 1% of graduate students now sounds absurd. (I should point out that these rankings are sometimes requested when recommending someone for a tenure-track job!!) When I make a judgment about a very broad reference class, I have very incomplete data and very biased intuitions.

    Just to bring to mind some anecdotes: I once wrote a recommendation for an undergraduate student at Caltech who was applying for summer research with the US Navy. I thought he was terrific. He got an A in a 12-person philosophy of probability class. He was clearly talented and I had no doubt that he would do a great job. However, he received the third-highest grade in the class. In that class, I had one student who already had a professional publication in a physics journal and a different student who was on some kind of international computer programming olympics team thing. Both of those students received slightly higher As. As I filled out the recommendation, I lied and grudgingly put 'top 10% of his class' because I knew that the truth (25%) might be interpreted as a negative signal. But I did it hesitantly and felt that I was doing something dirty that I should not have done. I now gather that others would have put the very highest possible recommendation. But surely not everyone; many philosophers have more scruples. A second anecdote: at Wisconsin I taught an introduction to logic class with something like 20 students. Several students got As, and one such student asked me for a recommendation for law school. I knew that he would do well, he was a strong student, etc. I don't remember if I had to say where in his class he was, but if I did, I am sure he was not in the top 10%. In that class I had two amazing students: one went on to chemical engineering at MIT, the other to physics at Caltech. We think of these things as outliers because they are, but even if I add up all of the students I have ever taught, cracking the top 5% of such students is quite a feat.

    What makes me angry (or sad or both) is that I think surely everyone knows this [THEY DON'T!!] and then I see how people behave. What people know (at best) is how the statistics work and how biased our judgments are. Obviously people don't know how we are supposed to answer the questions, as evidenced by the initial inquiry. I myself simply assumed that they would OF COURSE not be taken seriously and probably not even looked at. This made me much less likely to lie (though I admit I did a bit, I think – but I didn't simply put the top category for everything, thus apparently hurting my students!!). But Ken Taylor's reaction is that they are very bad data, yet they get his full attention anyway. I would guess that others consider them as well. I can't understand that reaction. My reaction is that virtually anyone who says top 1% is not to be taken seriously. Perhaps they are trying to send an 'amazing' signal when they say 5%. But I can assure you that not everyone who writes 10-25% is trying to send a bad signal. Perhaps (gasp!) they are just being honest. Some recommenders would not even consider this to be a bad signal and would not realize it should be taken that way. Ken mentions this possibility as though the recommender might just be 'brutally honest', but then he still proceeds to use this data to dismiss that student! Perhaps this is not terrible at Stanford, where the graduate students are all terrific and you really do only want the best student Prof X has ever seen. But it is just irresponsible (and counterproductive) at most institutions. Very little reflection should make it clear that the best thing to do is to simply ignore these kinds of rankings. It is the only morally and epistemically defensible position, not to mention it saves a lot of time (which, as Ken points out, is quite important given the number of applicants). Perhaps we should think of it this way: if your computer system didn't give you this data, would you want to ask your recommenders to provide it? I can't believe you would. So why are you using it?

  8. Joel,

    You seem to have misinterpreted my post as carrying far more normative implications than I intended — or at least quite different normative implications from the ones I intend. My bottom line is that these rankings are, on the whole, epistemically useless because they contain no real information. That's because, as a matter of fact, evaluators mostly use the highest ends of the scale, I presume primarily for purely "defensive" reasons. Writers who take the rankings seriously and try to give an honest ranking are, I predict, probably doing a disservice to their students, given how files are likely to be read. Again, it's just a fact that in the early readings anything that can be taken negatively will be taken negatively. That's not a normative statement, just a factual prediction. My normative advice to those who write recommendations is that, given the current practices, you shouldn't try to convey genuine information using those rankings.

    If I could in one fell swoop reform our collective behavior, I might keep those rankings, I don't know, but if I did, I would insist that people use them honestly rather than defensively. But I don't have the power to reform our practices.

    Something similar is happening, by the way, with letters of recommendation. The evaluations of candidates are often so overblown that they don't really tell you all that much about the candidates. In the sort of noisy environment in which these decisions have to be made, negative letters — especially letters that seem intended to convey, however indirectly, negative information — stand out and are likely to be over-interpreted to the candidate's detriment, unless the letter writer is very good at framing things of that sort so that they can be wrongly taken.

    The one part of the file that absolutely doesn't lie is the writing sample. That's real information. Thank god. The rest is so mixed with noise. It is our collective fault, I think. Not sure what should be done about it. I was just trying to project the OP into the mind of a committee member trying to sort through hundreds of files. No doubt such a person will resort to some sort of triage. I wasn't saying they will necessarily take the negative rating to the absolute detriment of the candidate, and I certainly wasn't trying to justify the practice of doing so. The point is that it is both an informationally impoverished and morally imperfect world. One problem with morality is that it doesn't provide a whole heck of a lot of guidance in such a world.

  9. That should be 'can't' be wrongly taken. If you are going to include negative things in a letter, then they have to be framed in the right way. The point is that you have to guard against the hurried, overwhelmed reader taking things in a way you don't intend them to be taken. If you can do that, then your letter might actually gain more credibility, I think.
