Monday, June 24, 2013

Sometimes, Power Resides Outside of Washington

I was struck by this article in last week's Chronicle of Higher Education:
Crusader for Better Science Teaching Finds Colleges Slow to Change
There are a lot of easy narratives to fit this into: status quo-defending professors vs. an upstart innovator, arguments about "outcomes-based" vs. "process-oriented" teaching, and so on. The debate about what's the best way for students to learn science is an important one, and I will leave it to those who are best suited to conduct it.

As usual, I'm drawn to the intersection between higher education and politics. The part of the article that struck me was this:
In 2010, while at British Columbia, he got a visit from another Nobel laureate, Steven Chu, U.S. energy secretary under Mr. Obama. Mr. Chu urged Mr. Wieman to join the White House science office. Mr. Chu suggested that Mr. Wieman could do a lot more to improve undergraduate science education from Washington than from Vancouver, Mr. Wieman says. (emphasis added)
This is a typical notion - that if you want to really get something done, you need to go where the "action" is, the center of power: Washington. Steven Chu's suggestion to Wieman was, in terms of conventional wisdom, entirely unremarkable.

The rest of the article goes on to talk about some of the mechanisms that have prevented Wieman's ideas and research on science pedagogy from gaining widespread traction once he took Chu up on his offer. Many would tell that story as one of failure despite his position as a science advisor to the White House. I want to suggest that the failure of Wieman's ideas to be adopted more widely and quickly is because of that position.

Over the past 60 years (some might argue for the past 100), we have seen an increasing centralization of power and authority, both in the Federal government in general and in the executive branch in particular. There are certainly instances in which this was a good thing; the Civil Rights Movement would have struggled for far longer if not for the Voting Rights Act of 1965, which overrode the express wishes of a number of states (largely in the South), and for the willingness of the Federal Executive to intervene with military force to protect school integration in Arkansas, Mississippi, and Alabama.

But that centralization has its downsides. In every debate in which the Federal government decides to tip the scales, there are winners and losers. Since every issue creates different cleavages, the chances of landing on the losing side approach certainty over time. Eventually, everyone has been ticked off by the government at some point - a phenomenon long ago uncovered by John Mueller, among others.

The past two administrations (Bush and Obama) have exacerbated this phenomenon. Each administration was deeply and abidingly disappointing to partisans of the other party, with the second-order impact that many liberals ticked off by Bush have not been restored to full faith in the government by Obama (the recent revelations about NSA spying being a case in point).

In this context, the phrase "I'm from the government, I'm here to help" has become more ironic and maligned than ever. It is difficult to find folks these days who have faith in both the motives and the capacity of government to do good things - but much easier to find instances in which people fear government intrusion into areas it may not belong.

This is especially true when it comes to fields of expertise. In the case of the article above, science faculty are supposed to be experts in how to teach science to college students (whether they actually are experts or not is another question). Nobody who fancies himself an expert in something likes outsiders telling him what to do - especially when that outsider is wearing the badge of the Federal government, and therefore the implicit threat of funding withheld.

There are other things contributing to the reluctance to change, of course - fear of change itself, fear of MOOCs and online learning, fear that universities are being undermined by market forces, fear of anti-intellectualism, and so on. But the fact remains that in the current climate, anybody trying to induce widespread change may be better off doing so from outside the government than from within it.

How could such change come about? If the object is to get people (in this case, tenured faculty) to change their habits and try something new, the answer has to be through persuasion. You cannot force these folks to teach differently, you have to persuade them that it's a good idea. And persuasion is inherently a relational tool that comes only through conversation. That conversation is much more effective in an environment where there are no threats, even implicit ones, in the air.

In this sense, it's possible that Wieman would have effected more change had he stayed anchored in Vancouver. From that position, armed with his research, he would have been harmless - and therefore may have had an easier time engaging in the necessary conversations. Give some TED talks; hold symposia; travel from university to university spreading ideas and debating skeptics. That takes resources, it's true - but relatively modest ones, easily procured by a Nobel Prize winner.

As long as we accept the logic that "if you want anything done, you have to go through Washington," a great deal less will get done than otherwise could be. In this day and age, change may not always require power - in fact, power may inhibit change. Let's instead fashion our own power and change our own world, rather than waiting for Washington's broken system to do it for us.

Thursday, May 30, 2013

The Real Mission of Higher Education: Learning or Customer Satisfaction?

There's a lot of talk these days in higher education about a "revolution" or a "coming avalanche" of change. Many of these predictions are based around the allegedly radical transformative power of technology, with particular emphasis on MOOCs.

Some institutions have begun to adopt the MOOC model not as a substitute for their classrooms, but as a supplement - a sort of internet-delivered textbook for a tech generation that would rather watch a video on a screen than read a traditional text. A part of the argument for doing so has a certain appeal: why not expose our students to the best lecturers from the best institutions (Harvard, MIT, etc.)?

While this logic makes a lot of faculty nervous, it's a natural extension of something we've been doing for decades: measuring "teaching performance" largely on the basis of student evaluations. Professors who are better lecturers - more dynamic, energetic, and engaging - consistently score well on these, and are therefore regarded in our annual and P&T evaluation systems as "better teachers". I suspect that what bothers many faculty about MOOCs is that they don't want to compete with faculty from Harvard and MIT who, we fear, are better lecturers than we are.

But it turns out that "best lecturer" may not be a particularly useful thing if what we're interested in is actual student learning. A fascinating study has come out suggesting that the dynamism and charisma of the lecturer may not in fact have any impact on whether students actually learn the material. As one Harvard professor (not directly involved in the study) put it:

"The hard work has to be done by the learner -- there's not much the instructor can do to make the neuroconnections necessary for learning."

What this suggests is that what we have valued for many years now - "dynamism" in the classroom - isn't really related to our stated mission (student learning). Instead, what we've been measuring is customer satisfaction - are the students happy? And since you tend to get what you measure, we have gotten pretty good at keeping our students happy with their experience in college. Hopefully they learn something on the way as well, but we're not really tracking that as much - especially across the "broad skills" like critical thinking that we talk a lot about.

I've written before (here and here) about how we don't do a good job of measuring what we really should be, and about how I am less and less convinced that what I do in the classroom is really the right approach. This latest study is another step down that road. I think we (or, at least, I) need to fundamentally rethink how students learn, and adopt classroom models that may be very different from what we've done in the past. This is almost certainly going to be hard work - but then, we didn't get into this business because we want to produce satisfied customers. We want to help people learn.

Thursday, November 15, 2012

Am I Teaching the Wrong Way?

I've been teaching in higher education now for about 15 years. Like most academics, I'm pretty comfortable with my "style", and since I am now teaching a rotation of courses I've taught before I tend to take the easy road and do what I did before. That's not to say that I don't put a good bit of passion into my teaching - I still find the material fascinating, and I hope that comes across in class. But like many of my mid-career tenured colleagues, it's easier to stick with the well-worn grooves.

Exhortations to "keep things fresh" and "keep adapting" are important, but often not enough. Re-thinking what and how we teach is a very time-consuming exercise, and unless you're gunning for Professor of the Year the rewards for doing so are pretty thin. In most departments, as long as you're meeting your teaching obligations and your students are happy, you can keep doing the same thing for years.

In my field the emphasis has tended to be on learning ideas. We make students read things (textbooks or, better still, original intellectual works), we make them write about the ideas therein, we lecture in class about those ideas, we discuss them. Although we don't like to think of it in these terms, at its core there's a rote-learning heart to this approach. I gauge the success of an Intro to International Relations class, for example, by seeing whether students understand what anarchy is, can identify sovereign states, or can write an essay using concepts like levels of analysis or realist theory. Success is defined, on exams and in papers, by answering the questions "What did they learn?" or "What do they know?"

I am increasingly wondering whether this is the right approach - or, at least, whether it is the only right approach. This is partly in response to a growing conversation about critical thinking and learning outcomes (see this article from today's Chronicle for example). This is one of the responses we in higher education have put forward to the "is college worth it?" question - that we teach students "how to think", even if we don't have a really clear notion of how to measure that.

But a big part of my shift in thinking has come from my experience teaching in the completely different field of martial arts. When you teach a karate class, the focus ultimately is not on what students know; it's on what they can do. We do ask them to learn some facts along the way - usually, foreign terminology or particular traditions. But, at least in the tradition I've come up through, we don't teach these things by telling and then quizzing; we simply use the terms and do the traditions and over time, students pick them up. We expect students to know these things as they advance, but memorizing isn't the focus. We don't test them to see if they can count to ten in Japanese or Korean.

There's an element of this that can't apply in higher education - time. In studying martial arts, everyone learns at their own pace. Someone else may take a year to master something it takes me three years to be able to do. That's fine, since ultimately the emphasis is on the journey and how long it takes to get through the ranks can be different from person to person. That doesn't work in higher education, where we have defined time frames (quarters or semesters) and a lot of expectations about everybody finishing within (more or less) the same amount of time.

But maybe the focus on what you can do would be a useful shift. If I taught with an eye on skills and abilities rather than knowledge and facts, what might that look like? Maybe I'd play more games in class. Maybe I'd have to invent drills. Maybe I should go watch some math teachers, who may know more about this than I do (no one asks, "Do you know the math?" It's always, "Can you do the math?").

I hope I can find the time to experiment with this in my next class (which just so happens to be Diplomacy & Negotiation). Will I be able to figure out how to assess students' abilities, or even what skills I want them to learn? I don't know. But for all our talk about helping our students become "critical thinkers" and "problem solvers", maybe we should start rethinking the way we teach. Or, at least, maybe I should.

Friday, April 27, 2012

How Do We Measure Quality Teaching?

One of the hardest nuts to crack in academia is measuring the effectiveness and quality of faculty. This week's decision at the University of Northern Iowa will probably only add fuel to the fire of that debate. Though the decision to disallow the use of student evaluations as the basis for merit raises rested on the legal issues of the union contract, not the merits of the issue, I think it's probably a better outcome anyway. And it opens up broader questions of how we measure the quality and productivity of faculty work.

There is one aspect of what faculty do that can be measured well: research and scholarship. It's very easy to develop metrics for both the quantity and quality of scholarly productivity - number of articles, books, or papers, citation counts, etc. These can be easily adapted discipline by discipline, while remaining rigorous and fair. Even the softer notion of "reputation" within the field is fairly easy to measure, which is why, come promotion & tenure time, we call on outside experts in the candidate's field to tell us where this particular person stands in their chosen pantheon.

Quantity of service is easy to measure - how many committees do you serve on? Given that most university committees produce little, and what they do produce is often at the expense of a great deal of time, quantity doesn't tell us much here. Quality of service is a much more challenging thing to measure. Over time, the true stars in the service field - those who can get things done, run efficient and effective meetings, and are critical to moving the institution forward - do emerge. Some of these get pulled into administration, where P&T measures don't matter as much. As for the rest, most faculty reward systems seem content to rely on a pretty minimal standard of "service" - so long as you're above that bar, there's no penalty, but there's not much reward for being far above it, either.

The real Gordian knot, of course, is teaching. This is what professors, in the public imagination, are primarily paid to do. At many universities, it is the primary mission - scholarly research being a somewhat distant afterthought (I have taught at such institutions). In today's economic climate, with the public and politicians demanding to see a "return on investment" in universities, what they mean is this: are we doing a good job educating students for jobs and careers, and preparing them to be productive members of society?

At the level of the individual faculty member, student teaching evaluations - especially quantitative measures - are a terrible means of getting at this. I don't mean that they lack value entirely; as a device to weed out the truly awful, they're great. But most professors, even those whose students may not be learning all that much, can manage to get at least passable marks on student evaluations. Some of them manage to get really good ratings, because they are charming and charismatic and popular (yes, there is a "popularity contest" component to student evals). 

This is not to say that student evaluations should be done away with - they can pick out the really bad teachers, and as a formative tool the comments that students write are very useful. I've used many student comments over the years to hone my own teaching, I think for the better. But when they are the only measure of teaching - as they usually are - they tell you next to nothing about what you want to know - who deserves the merit money this year and who doesn't?

Some years ago I was pulled into a debate, at both the department and university level, about how to measure faculty quality and productivity. Across the university there was very little consensus (in part because of deep philosophical and political divisions). Within the department, we all agreed that we had to have some additional measures beyond student evaluations. The only ones we could come up with were:

• Quantity measures - how many students did you teach? (funny how that doesn't come up much - but as a measure of productivity, surely it matters)

• Peer evaluation - which is time-consuming, and dependent on the peers doing the evaluating themselves knowing what good teaching looks like, but which adds a component of validity when done well.

To this we might have added interviews of graduating seniors, to ask them about specific faculty and their impact on that student's formation. That, too, takes time and effort to organize and perform - but I suspect that it would tell us a lot.

The fact that the productivity of teaching is so hard to measure is actually much to the liking of many faculty, because it is a means of escaping accountability. So long as I am teaching my classes and my students are passing, if you can't tell how well I'm doing it I am free to put as much or as little energy into it as I like. This is not universal, but it is more common in higher education than we faculty would like to admit publicly.

Where we have dedicated, energetic teachers - and there are plenty of those - it is because of their own internal motivation, not because their universities reward them for it. And that, it has always seemed to me, is a terrible shame. Systems tend to produce more of what they reward, and less of what they ignore. If we really want higher-quality and more productive teaching, we need to find a way to seriously reward it.