Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


The Rise of Assessment and Non-Academic Administrators (updated)

When I was in the Boy Scouts, there was a song that we sang at the start of meetings and around campfires: The Announcements Song.   It’s a really long song, but to this day, I still remember how it starts:

Announcements, announcements, announcements.
A horrible way to die, a horrible way to die,
A horrible way to start the day,
A horrible way to die… 

All you have to do is replace “announcements” with “assessment” and you will begin to understand how I feel about the outcome assessment movement (okay, I have evolved in my thinking on assessment, but only for the practical reasons I explain below). I know I am not the only one who dislikes outcome assessments.  But as much as I dislike writing them for general education and for the philosophy major, assessment reports are a modern reality on college campuses.  If you haven’t read the APA statement on outcome assessments, you should.  Thoughtful philosophers have done recent work on the topic (2008), and everyone in the discipline should understand what is being asked of colleges and universities with respect to assessment.  Sometimes assessment is crucial to accreditation (see my last post), and sometimes it rests on the misguided assumption of OA proponents that learning outcomes can only improve with change.  Ultimately, we have to play along with our assessment overlords, or we risk losing even more resources and standing in the academy than we already have.  This line from the APA statement on assessment is especially revealing:

“Currently, however, most philosophy courses and programs do not address or formulate student learning outcomes in ways that satisfy all of the expectations typical of the OA movement. Consider what are perhaps the main three expectations of OA.”  The three expectations are: evaluating levels of (content and skill) mastery, identical measures for courses independent of the instructor, and the outcome should link to or “map” or track (not the truth, but) the program.

You might be thinking: I do assess my students; it’s called assigning grades.  If so, you are living in the pre-80s assessment world.  Grades are one measure of assessment, but as the former director of assessment at my university (an Associate Provost) told me in an assessment meeting: grades don’t show that a student learned anything in your class; outcome assessments do.  Perplexed, I asked why.  She replied that lots of faculty calculate 50% of a student’s grade by attendance and offer lots of extra credit to increase grades.  When I replied that I don’t do either of those things (those who are interested can read my paper in Teaching Philosophy to see why), she asked: “So a student couldn’t get a satisfactory grade in your class if they did poorly on homework and exams?”  “Exactly,” I replied.  She commented with something like: so there is a real connection between what students learn in your class and the grades they earn, I guess.  I thought that was the point of grading (assessment), but that isn’t the norm in many classes; hence the rise of the outcome assessment movement.  Thanks, grade inflation.

A few lessons I’ve learned about assessment so far:  (a) Assessment is NOT going away.  Administrators and government agencies that make funding decisions want to see results in student learning.  Measurable. Provable. Traceable. Results.  (b) If philosophers have to do outcome assessments, we ought to do it well and use it to our advantage—something philosophers haven’t been too good at lately.  (c) Reevaluating our stance on assessment could be good for everyone: faculty, administration, and students.

I am opening comments for people to share some of their outcome assessment experiences (both positive and negative) so that the rest of the profession can learn from them.  Assessment is something we should work together on as a profession, but as the APA statement acknowledges, there are real concerns with assessment.  It often turns into measuring what is easy, and even worse, there isn’t much rigorous research showing that outcome assessment is useful.  Whether or not that’s true, if administrators think assessment is useful, then philosophers need to provide assessment data and craft assessment narratives that show it is useful for students to study philosophy.  And let’s be honest, we have testing results from several standardized tests to support claims about the value of philosophy and we should use that data when we can. 

The Rise of the Non-Academic Administrator
One reason to clearly articulate the benefits of philosophy through assessment is so philosophers can show them to administrators who aren’t academics and don’t know the value of philosophy. There are two kinds of non-academic administrators.  No, not competent and incompetent, but those who work in academic affairs and those who do not.  The number of administrators has increased significantly over the years in areas like student services, admissions, development, and athletics, and it is one of the main causes of increased costs in higher education. Administrators who do not work in academic affairs compete with the academic side of the university for scarce university resources.  But the academic side of the university has to do its job of keeping the university focused on the academic mission while supporting the other, non-academic priorities of the modern university.

There is, however, a second kind of non-academic administrator: the one who works in academic affairs. This kind is more of a concern for philosophy and other humanities disciplines. Many of these administrators have never held faculty positions, having come up through Ed.D. programs in educational leadership or something similar, and so tend not to understand the full academic mission of a university, much less philosophy. More importantly, these non-academic administrators are making decisions about how universities are run, how scarce resources are allocated, and the value of academic programs. As good as the APA Statement on the Major is, there are non-academic administrators who simply do not understand the value of philosophy, both instrumental and intrinsic, to the mission of a university.  This means philosophers must do a better job of explaining the instrumental and intrinsic value of philosophy (through assessment?), or philosophy is going to continue to lose ground to other, more “career ready” majors.

Let me be clear: I am not opposed to running a college or university with some business principles, but I don’t think colleges and universities are businesses, and as such they shouldn’t be run merely as businesses.  Administrators who only look at credit-hour production and the number of majors graduated, and who say things like “students can take a history class rather than a philosophy class,” really do their students, their institution, and society a disservice.  Since academic affairs usually has the biggest budget, when budget cuts are called for, that’s where the cuts most often occur.  So when these non-academic administrators need to cut, we have to make it clear that philosophy is not the place to balance the books.

Finally, I want to recommend to all my philosopher colleagues a book titled Provost, by Larry Nielsen.  He was a longtime faculty member before he became an administrator and, eventually, provost.  I read his book last summer because I was to chair our university’s provost search (it’s been delayed a year).  Seeing how academic administrators conceive of and articulate problems in the university will change your outlook on the running of a college or university.  It is my sincerest hope that support for philosophy grows on college campuses nationwide.  But we can’t expect it to happen ex nihilo.  We have to do the hard work of showing our value and promoting philosophy’s virtues.  I’ll write more on my ideas for this later this week.

Update: I've been asked to split this post into two. So, if you want to comment on assessment, stay here.  If you want to comment on non-academic administrators, go here.


11 responses to “The Rise of Assessment and Non-Academic Administrators (updated)”

  1. Well… neither grades NOR outcome assessments show that students learned anything in your class, which is why we should be satisfied with neither, and should be trying, as a discipline, to develop measures of learning, which, at some point, we'll be pressed to use. The best way of getting an A in a course is by having mastered the material before you started the course; in which case you will get a high grade, and satisfy any measure of outcomes, without learning anything. Reasonably good measures of learning, though, are very hard to develop, and would have to be developed by people within the discipline in collaboration with experts. I am amazed in discussions about this how satisfied some people seem to be with completely impressionistic beliefs about whether their students learn, and when I hear people say (as I sometimes do, though not in my department), "well, I aim my teaching at the 10% of students who "get" it", I wonder who is learning anything in those courses.

    You say:
    "And let’s be honest, we have testing results from several standardized tests to support claims about the value of philosophy and we should use that data when we can."

    Is that true? We have evidence that philosophy majors who take the MCAT and LSAT, etc., do better than most other majors. But this could easily be a selection effect (philosophy majors might be more socio-economically advantaged than the average LSAT/MCAT taker, or might be better prepared before entering the major, or whatever). I don't know of anything like a rigorous study about this.

    I read the APA statement, and thought, "wow this is good". Then saw who headed the drafting committee (Randy Curren) and understood why!

    Since I am the first commenter, if there is still time, can I suggest an update: divide the two topics into separate posts so that any discussion is easier to follow? (I have things to say about the rise of non-academic administrators, but am more interested in assessment, and don't want to introduce anything else into that discussion).
    Anyway, great series of posts, I'm enjoying these and they're a service to the profession. Thanks.

  2. According to the administrator you mention, at that institution anyway "lots of faculty calculate 50% of a student’s grade by attendance and offer lots of extra credit to increase grades." I find this hard to believe: it sounds mythic, like typical right-wing images of welfare recipients or the like. Did you find out if anything like that is true? I know grade inflation is a problem all over, but the idea that anyone would calculate 50% of a grade by attendance is outrageous to a degree I would not have anticipated. In any case, I'm very curious to know if you followed up on that in any way — confirming or disconfirming.

  3. Christopher Pynes

    Gene — I didn't do much digging to look at my colleagues' syllabi, but the administrator at the time was from communications. And as strange as it sounded, I didn't have a reason to doubt her. She was a former chair and saw all that stuff. I can say that I chaired a college grade appeal committee last year and the faculty member, not from the humanities, assigned 40% of the course grade based on attendance.

  4. I've seen many more than a thousand course syllabi in the past couple of years, all of which needed to go through my committee for approval. Maybe 5 or 6 have had this sort of grading scheme (lots of credit for attendance and opportunities for extra credit); and that held them all up. So it probably is a myth (at least here). Of course, once the course is approved, we have no control over what people actually do.

  5. Again, thank you for these important remarks about a feature of academic life most of us now have to grapple with if we hope to continue a successful program. The main problem I find with assessment in philosophy is that some of the most important things we want to teach are seeds that might not come to full flower until later in life. This makes it incredibly difficult to measure the impact that a good philosophy course might have on a person. For instance, one of the things I hope to teach in any course is how a person goes about evaluating and reasoning about evidence, for the purposes of almost any sort of decision making. I would feel I had succeeded if my students were more carefully deliberative and made better-informed and reasoned decisions in whatever careers they ended up pursuing. How do I measure that, exactly? In three months, in a form easily quantifiable on a report for administrators?

    Consequently, we turn to the low-hanging fruit. I think we could do reasonably effective assessments of "value added" (an unfortunate term to apply to the human beings we teach, but part of the native language of administrative assessment) by comparing papers an individual student wrote over the course of a whole undergraduate career. The comments on this overall body of work might be quite valuable for students to receive, too, but it's quite labor intensive. Right now the best we can do is a pre- and post-test of concepts we hope all our students have learned to understand in the course of a major, and this has the advantage of being easier to administer and grade and report. We will doubtless learn some important things from this. However, it would be quite unfortunate if this sort of assessment came to represent the most important goals of our discipline, simply because those are the ones we feel reasonably equipped, in terms of time and methods, to measure.

    As a partial antidote, I would suggest that philosophers consider contributing further to the conversations about assessment both at their own institutions and in the regional accrediting bodies. Others need to know the challenges we face in this enterprise (many of them will share similar ones), and we need to articulate the goals and value of what we do even when it cannot be easily translated into "measurable SLOs". Those goals don't disappear just because they are more subtle, and they should still be declared as essential to our teaching.

    As a practical aid, I would like to see some mechanism for departments to share information about their assessments and about measures of student success that might be used as references (perhaps via the APA). The more sources of data and mutual support we could find with other departments in our circumstances, the stronger a case we might be able to build when called on to make such reports or determine where our limited time is best focused.

    I am also interested in the possibility of critical reasoning assessments along the lines of the CLA+ which might help to demonstrate the value of what we offer to any students, including those who simply take one or two courses and never choose to major or minor in philosophy. I am glad to see that my own university is invested in building the kinds of evidence assessment and problem solving required on the CLA into the broader curriculum. However, this alone is no substitute for offering the kinds of courses dedicated to improving reasoning, to which Philosophy lays a special claim. If we could demonstrate more effectively how students in our courses learn precisely the sorts of things required by exams like the CLA, perhaps the need for devoting more resources to it would become clearer. [As an aside, I am familiar with many complaints about the CLA and its methodology, and other similar exams, but I mention it because my university has committed to investing in this measure and it does emphasize a performance task that is almost exactly the sort of thing we teach in our lower level Critical Reasoning courses.]

    I would love to hear more about how other departments are dealing with these challenges, or how we might collaborate to strengthen common efforts.

  6. I found that hard to believe, as well. (Not that that means it's false.) At my institution (R1 public in the South), attendance is prohibited from counting for more than 15% of a course grade.

    Thanks, Christopher, for these posts. You're raising exceptionally important issues for academic philosophy and higher ed in general.

    Laura says: As a practical aid, I would like to see some mechanism for departments to share information about their assessments and about measures of student success that might be used as references (perhaps via the APA). The more sources of data and mutual support we could find with other departments in our circumstances, the stronger a case we might be able to build when called on to make such reports or determine where our limited time is best focused.

    Me: Agreed. In fact the APA might usefully coordinate a consortium of departments (of different kinds, R1, R2, SLAC, Regional, 2-year) and try to develop assessments of learning that would be useful for all departments. This would be a major undertaking, and you'd want pretty good commitments from enough departments that they would actually use the results (or at least seriously consider adopting or adapting them). Interesting thing is that most of the assessment pressure is on the major; whereas even in departments with many majors, most credits are taught to non-majors (and, of course, if your department does not have a major, all credits are taught to non-majors). But it should be possible to develop useful values-added learning measures for 101; applied ethics/contemporary issues; logic; and History of Philosophy courses. Developing high quality assessments is loads of work, and it seems silly for depts all over the country to be putting lots of resources into doing it badly, when we have a pretty unified discipline. APA could also give guidance (which the APA statement gives, but could be elaborated) on how to frame assessments.

  8. Wow, so much I could say. Let me start with the disclaimer: I'm the assessment guy on campus (Associate Dean of Academic Affairs for Institutional Effectiveness, though that's too long to fit on my business card). I've got an MA in Philosophy and a PhD in Religious Studies. I'm staff, not faculty, though I have been a VAP and adjunct at previous stops. I normally teach a 1-1 load (this semester a freshman seminar plus a course in ancient political philosophy). No Ed.D., though I guess I have gone over to the dark side of administration.

    The purpose of assessment is to improve student learning, everywhere from the classroom level to the university level. The reason grades aren't useful for assessment is because students get evaluated (from an assessment perspective) in order to provide data about instruction and curriculum, not to reflect on individual learning or the lack thereof; because every instructor has different grading standards, so the resulting grades are pretty much incommensurate; and because students just get one grade, but if you want useful data for assessment, you want to look individually at particular skills like how well students develop arguments, how well they use evidence, how well they represent the ideas of others, etc. If you notice grades are dropping in your course, you don't know what to work on to improve learning if you don't have more fine-grained data.

    Assessment done to serve outside agents like accreditors, state legislators, etc. is generally not worth doing, because it's more focused on producing paperwork than on doing good assessment. But it's necessary to keep the dollars flowing. Being kind to your assessment folks goes a long way: if they're any good, they know how much silliness is involved in what they're asking you to do, and they feel guilty about it. Still, they do help keep the dollars flowing, so they aren't all evil.

    Good assessment is difficult to do, and expensive.

    Most assessment never gets reported to anyone. Every time you scan the room for looks of incomprehension, or think about why students didn't do better on that exam, you're doing assessment. Never confuse doing assessment with reporting on assessment. Assessment reporting generally relies on a naive form of the (social) scientific method, and is of limited value because it serves to satisfy outside agents as much as it helps to improve instruction. Hard to serve two masters at once.

    Consequently, if you're trying to use it for your own (political, rhetorical) purposes, you're giving in to the Ed.D.s. If you're using it for your own pedagogical purposes, then it'll take more time but it'll actually be worth the effort. Rarely if ever do schools use assessment data for purposes of resource allocation (they should, but they don't), so it's a waste of time to try to game the system anyway.

    Colleges are businesses. If you think they aren't, you'd be willing to teach for no pay. If you want a paycheck and electricity in your office and chalk in your classroom, someone has to pay for it, and that would be students/parents and taxpayers. Colleges shouldn't be run like GE or Ford or Google, any more than Ford should be run like a web search firm. But they are businesses. Furthermore, non-profit colleges have fiduciary responsibilities to taxpayers, their local communities, and of course students and alumni. That means they need to demonstrate a certain level of accountability. This is an ethical issue as well as a practical one. The only real accountability they have is of a fiscal sort (and that's about as substandard as it is in the for-profit world of banks and car manufacturers), but in principle there's no reason colleges shouldn't be held accountable for the learning of their students. In practice that's hard to do, of course. It's hard to have the same assessment process serve the dual goals of improving learning and assuring accountability; probably separate systems would work better, but most schools try to bootstrap one onto the other for efficiency's sake. Once again, good assessment is difficult and expensive, and one reason assessment protocols try to serve too many disparate goals with one process is to limit faculty resistance.

  9. This article called "Does Assessment Make Colleges Better? Who Knows?" came out in the Chronicle recently and is worth a look. A short excerpt:

    "Assessment is one of those things that we keep telling ourselves will pay off if we could just get it right, but we never seem to get there. It’s time for us to demand that the accreditors who are driving assessment provide evidence that it offers benefits commensurate with the expense that goes into it."

    http://chronicle.com/article/Does-Assessment-Make-Colleges/232371/

  10. The popular mantra that "colleges are businesses," generally delivered in the tone of The Practical Man Speaking To the Benighted, is repugnant. Colleges are businesses in the same way that the Catholic Church is a business, or the National Park Service is. Sure, there are business aspects, like paying salaries and keeping the lights on. But the crabbed and narrow vision of all human interaction as nothing but economic transactions best understood on a business model is one of the very things that scholarship exists to examine and criticize. No doubt we can look forward to demands for outcomes assessment of the Tridentine Mass, replete with pieties about fiduciary responsibilities and accountability. Salvation will be attained through spreadsheets and bullet points.

  11. There is no evidence that any of the assessment regimes I've seen have helped any student ever. There is no evidence that any of the assessment regimes I've seen have improved learning at any educational institution ever.

    The assessment religionists for the most part seem to honestly believe that their assessment 'tools' tend to improve learning, but they have zero evidence. The more likely sociological movement in play is making administrators look good and feel satisfied.

    These people always argue and try to get people to think that those who oppose this 'assessment' nonsense are bad guys who don't care about student learning. That might be reasonable if they had a case. But they do not have a case, because there is zero evidence that any of these assessment instruments improve student learning.

    It's not that the evidence is weak and questionable. It's that the evidence is zero.
