An Insider's Take on Assessment: It May Be Worse Than You Think

The article itself is a commentary on another article, written by a higher ed assessment professional and published here. This latter piece is a very interesting and genuinely deep dive into a recurrent challenge: how to actually measure student learning in a way that is useful. This is much harder than it seems at first blush, and it is therefore often done poorly on university and college campuses.
The commentary piece takes this discussion in a somewhat more cynical direction, suggesting (if only indirectly) that efforts to assess student learning may be a waste of time, money, and energy. This is certainly where my colleagues took the discussion, along with an added dose of attacks on the people who work at accrediting agencies as "true believers" focused exclusively on "noncompliance" and engaged in "groupthink".
I suspect many faculty feel the same way - that the people who work in regional accrediting agencies are out-of-touch busybodies and that "assessment" is a colossal waste of time and energy. Conclusion: we (colleges & universities, and especially faculty) would all be better off if we just chucked the whole thing and went back to the "way things were" - before the "assessment craze" came along.
I understand this point of view. I spent much of my career as a teaching faculty member, and I have taught many students in many classes. I understand the sometimes perplexing challenge of being told to do new things, or to do things a different way, especially if it involves (or appears to involve) more work than one is currently doing. I have a deep appreciation for the culture of independence which drives many faculty, and which is one of the chief attractions of the profession.
The only problem is that the cynical take on assessment is wrong. Yes, there are measurement problems (of both the reliability and validity kind). And yes, assessment is often done poorly. But the conclusion that the people "pushing" assessment are evil, and that the whole thing should be scrapped, does not follow from these observations.
First, let me dispense with the rote ad hominem attacks on accrediting bodies and the people in them. I have significant experience with only one such organization - the Higher Learning Commission, which oversees 19 states in (broadly speaking) the American Midwest. HLC is the largest of the regional accrediting bodies, and probably the most influential.
I have been through HLC accreditation processes with universities I worked for, and I have been through HLC Peer Reviewer Corps training myself. I have met a great many people employed by the HLC, as well as those who have dedicated years to working as volunteers in the accreditation process (most accreditation work is in fact done by volunteers, as is the work of reevaluating and assessing standards). These people are universally dedicated to trying to make higher education work better, always and especially for the students. Many of them have been faculty themselves, and they have a deep appreciation for the importance of higher education and the impact it has on our students' lives. They care deeply about the same things my colleagues and friends do.
So the accreditation process is not populated by people wearing horns and tails. What of the other conclusion - that assessment is not being done right, therefore it can't be done right, therefore we should just chuck the whole thing? This is not only wrong - it's dangerous. And it's really a betrayal of what we say we believe as faculty.
At its core, the assessment process is an attempt to answer a simple question: are our students learning what we say we want them to learn? If we deliver an educational experience and have no idea if anybody learned anything, then what are we doing? Not assessing learning turns higher education into a circus, another form of entertainment: pay your money at the door and we'll let you into the tent. What you get out of it - well, that's your problem.
That's clearly not a viable answer. We have to try to assess whether students are learning. And if that assessment effort tells us - reliably and validly - that some students aren't learning what we want them to learn, then we need to make changes. Either we need to conclude that some students cannot learn what we're trying to teach, and stop admitting those students to our programs, or we need to do a better job of teaching.
That's what assessment is all about. It's about finding out how well we're doing at the very core of what we say we're about, so we can do it better next time.
Note that there's a dimension often left out of these conversations: what learning are we trying to measure? "Learning outcomes" may be a catchphrase, but it also captures something really important. We can't measure something if we don't know what it is in the first place. This is where a great many assessment efforts fail, because the outcomes we're trying to measure aren't well defined in the first place.
Learning outcomes can be developed well. There are folks who know a lot more about this than I do; many of them work on college and university campuses as consultants to faculty. Unfortunately, many faculty have never had the benefit of their help (on smaller campuses, there may be no such people, and on larger ones they are often ignored). I wish I had had access to such support earlier in my career, because looking back I realize that I had only the vaguest notion of what my students were supposed to be learning, beyond a working familiarity with some specific content in my field (a gussied-up form of memorization).
If you've got good, active learning outcomes (the kinds of things that map onto Bloom's taxonomy), then you need to figure out how to measure them. This is where "assessment as an extra chore" becomes a problem, because measures can be invented just to check a box rather than to actually gauge learning. If your goal is to put as little effort as possible into assessment, that pretty much guarantees it won't be any good.
Many faculty suggest (as my colleagues have) that assignments and course grades should be the relevant assessments. I agree - course grades can serve as an assessment of student learning, if the course has good learning outcomes and if the tools for measuring those outcomes (quizzes, tests, papers, presentations, etc.) are genuinely good measures of those LOs. If all of that is true, then the grades should indeed reflect the level of student learning.
But saying that "grades are enough" does not get faculty off the hook from doing the work of designing good student learning outcomes, building the course and its assessment tools around them, and then demonstrating their work to their employing institution. It's this last point that, I suspect, secretly angers many faculty - the need to demonstrate to someone else (often, an administrator outside their department) that they've done good work.
This is where we (and I count myself in this crowd, because I've been guilty of the same mistake) can get ourselves into trouble. We want to fall back on the defense of "they can't possibly understand my course, because they have no background in <insert field here>." That logic is flawed, however. If a well-educated individual with a PhD in a different field can't understand what you're doing, how can a student with no such background and no degree be expected to get it?
One of the root issues here is accountability. Faculty are understandably reluctant to make themselves accountable for their work to people outside their field or their department. Academic units often serve as protective shelters in which we insulate ourselves from having to explain what we do to others.
This goes well beyond having to explain oneself to a Dean or a Provost. Faculty are quick to blame accrediting bodies, but the demand for accountability and transparency of student learning comes from a wide range of sources: state legislatures (which provide our funding, often to private as well as public institutions), businesses (which employ our graduates) and the general public (who send us their sons and daughters and, often, become students themselves). These people all want to know: what are students going to learn at your institution? It's a reasonable question.
The answers can be broad, and can include citizenship, civic perspective, values, and a great many other non-economic things beyond "job-ready" skills. For those who believe (as I do) that higher education isn't merely job training, there's plenty of room for students to learn higher and more important things. We just have to articulate what those things actually are, and demonstrate that students actually learn them.
So by all means, let's critique the methods of assessment. Most administrations I know (mine included) would welcome robust faculty involvement in helping us design better, more reliable, more valid ways to measure what our students are learning, just as we would also welcome robust discussions about what our students are supposed to learn (a topic almost entirely in the hands of the faculty). Where systems are broken, let's fix them. The only thing we can't do is give up and quit.