Friday, April 27, 2012

How Do We Measure Quality Teaching?

One of the hardest nuts to crack in academia is measuring the effectiveness and quality of faculty. This week's decision in Northern Iowa will probably only add fuel to the fire of that debate. Though the decision to disallow the use of student evaluations as the basis for merit raises rested on the legal particulars of the union contract rather than on the merits of the question, I think it's probably a better outcome anyway. And it opens up broader questions of how we measure the quality and productivity of faculty work.

There is one aspect of what faculty do that can be measured well: research and scholarship. It's easy to develop metrics for both the quantity and quality of scholarly productivity - number of articles, books, or papers, citation counts, and so on. These can be adapted discipline by discipline while remaining rigorous and fair. Even the softer notion of "reputation" within the field is fairly easy to measure, which is why, come promotion & tenure time, we call on outside experts in the candidate's field to tell us where this particular person fits in their chosen pantheon.

Quantity of service is easy to measure - how many committees do you serve on? Given that most university committees produce little, and what they do produce is often at the expense of a great deal of time, quantity doesn't tell us much here. Quality of service is a much more challenging thing to measure. Over time, the true stars in the service field - those who can get things done, run efficient and effective meetings, and are critical to moving the institution forward - do emerge. Some of these get pulled into administration, where P&T measures don't matter as much. As for the rest, most faculty reward systems seem content to rely on a pretty minimal standard of "service" - so long as you're above that bar, there's no penalty, but there's not much reward for being far above it, either.

The real Gordian knot, of course, is teaching. This is what professors, in the public imagination, are primarily paid to do. At many universities, it is the primary mission - scholarly research being a distant afterthought (I have taught at such institutions). In today's economic climate, the public and politicians are demanding to see a "return on investment" in universities, and what they mean is this: are we doing a good job of educating students for jobs and careers, and of preparing them to be productive members of society?

At the level of the individual faculty member, student teaching evaluations - especially quantitative measures - are a terrible means of getting at this. I don't mean that they lack value entirely; as a device to weed out the truly awful, they're great. But most professors, even those whose students may not be learning all that much, can manage to get at least passable marks on student evaluations. Some of them manage to get really good ratings, because they are charming and charismatic and popular (yes, there is a "popularity contest" component to student evals). 

This is not to say that student evaluations should be done away with - they can pick out the really bad teachers, and as a formative tool the comments that students write are very useful. I've used many student comments over the years to hone my own teaching, I think for the better. But when they are the only measure of teaching - as they usually are - they tell you next to nothing about what you really want to know: who deserves the merit money this year and who doesn't?

Some years ago I was pulled into a debate, at both the department and university level, about how to measure faculty quality and productivity. Across the university there was very little consensus (in part because of deep philosophical and political divisions). Within the department, we all agreed that we had to have some additional measures beyond student evaluations. The only ones we could come up with were:

• Quantity measures - how many students did you teach? (funny how that doesn't come up much - but as a measure of productivity, surely it matters)

• Peer evaluation - which is time-consuming, and dependent on the peers doing the evaluating themselves knowing what good teaching looks like, but which adds a component of validity when done well.

To this we might have added interviews of graduating seniors, to ask them about specific faculty and their impact on that student's formation. That, too, takes time and effort to organize and perform - but I suspect that it would tell us a lot.

The fact that the productivity of teaching is so hard to measure is actually much to the liking of many faculty, because it is a means of escaping accountability. So long as I am teaching my classes and my students are passing, if you can't tell how well I'm doing it I am free to put as much or as little energy into it as I like. This is not universal, but it is more common in higher education than we faculty would like to admit publicly.

Where we have dedicated, energetic teachers - and there are plenty of those - it is because of their own internal motivation, not because their universities reward them for it. And that, it has always seemed to me, is a terrible shame. Systems tend to produce more of what they reward, and less of what they ignore. If we really want higher-quality and more productive teaching, we need to find a way to seriously reward it.
