Off the bat, I concede some academic clickbait in this title. In calling learning a fiction, I obviously don’t mean that the experience of gaining knowledge or acquiring skills and abilities doesn’t happen. I mean fiction in the sense Adam Mastroianni (drawing on Yuval Noah Harari) uses to characterize psychology’s fundamental conundrum: studying inherently abstract concepts, like attitudes, norms, the self, leadership, and creativity, to comprehend “the things people do and the stuff that happens in their minds.”
This “pick a noun and study it” paradigm, as Mastroianni puts it, is central to psychological inquiry but also fundamentally vexing, because we cannot directly study fictions. Instead, we transform them into measurable “nonfictions” and study those, which gives us plenty of concrete data (and fodder for publications and grants). But things get tricky when we try to draw meaningful connections between those measurable results and our understanding of the broader concept those results are meant to illuminate. Take the study of leadership:
“You could be watching little league soccer teams, or corporate board meetings, or subway conductors, and all this can get crammed under the heading of ‘studying leadership,’ even though it’s possible that none of it has anything to do with anything else.”
This does not mean studying abstract concepts like leadership is pointless or that the nonfictions people measure have no meaningful connection to the abstract concepts we want to understand. It does mean that this enterprise is messy and often confounding and that we should strive not to conflate abstract fictions with measurable nonfictions. It definitely means people pursuing this research should proceed with caution and humility (traits, one might argue, lacking in some social scientists who touted simple ways to enhance success or happiness on the TED Talk/public lecture circuit only to become enmeshed in the field’s replication crisis).
Learning assessment is similarly vexing. Teachers cannot gather direct knowledge of what is happening in students’ minds as they strive to learn a concept or skill. And the things we in higher ed say we most want students to cultivate (critical thinking, discernment, cognitive flexibility, metacognition, and the like) are themselves abstract concepts (it’s abstractions all the way down, folks). So, we also seek nonfictions we can measure. And, as I have been exploring in this series of posts on the wicked problem of assessment, we also tend to conflate measurable nonfictions with the abstract learning we seek to understand. This conflation is what I call the taming of assessment.
For the record, I believe students acquire knowledge and skills in college that are substantive and beneficial for their intellectual and professional journeys, but I also believe we can only obtain, at best, indirect evidence of this learning. I find this gap between what we want students to learn and the highly imperfect tools available to measure learning to be intellectually invigorating. This problem, like any good teaching problem, is worthy of ongoing inquiry and research, which is why I am likely nowhere near the end of this series on wicked assessment; it is also why I am fascinated by the science of learning even as this field exemplifies the challenges of picking a noun and studying it.
Although the wicked problem of learning is ultimately irresolvable, the effort to wrestle conscientiously with this problem is nevertheless generative and worthwhile; it leads to compelling, intriguing, and sometimes playful approaches and models that will all fail, but some will fail in interesting ways, leading to alternative (and intellectually invigorating) variations on how we articulate, understand, and pursue more (failed) solutions to the problem. And that process itself will lead to learning for all who take part (learning we also will not be able to measure directly, of course).
But from administrative, economic, and (increasingly) cultural perspectives, this gap is generally not viewed as a compelling problem that merits ongoing inquiry and playful exploration; from these perspectives, the gap is more of an existential problem—meaning, it puts the future of higher education at risk.
This other view of the gap between learning goals and our limited capacity to measure learning helps explain the continuing prevalence of “students don’t really learn anything in college” critiques like Academically Adrift. It is very hard to dispute arguments like that (not for a lack of trying)—at least, it is hard for rebuttals to get the attention the headline-garnering critiques get—because we cannot easily circumnavigate the ambiguity of learning itself.
Note: There is a big difference between acknowledging and exploring that gap in good faith and using that gap to undermine the institution of higher education (and formal education as a whole). These critiques are contemporary versions of decades-long, often bad-faith arguments that formal schooling fails to teach wide swaths of students (I see you, Why Johnny Can’t Read).
Administrators who share the existential view (or who, by dint of their institutional position, have little choice but to act as if they share it) operate as though the gap either does not exist or can be, like tax reformists’ dream government, drowned in the bathtub. I feel some sympathy for administrators here—at least those who privately believe that assessment is a wicked problem. At the end of the day, they cannot fix a problem built into learning itself.
What they can do, and have been doing for the past couple of decades, is create elaborate architectures of bureaucracy around the work of assessment, including assessment offices, remediation plans, and endless PowerPoint presentations designed to “prove” (to accrediting agencies, policy makers, parents, and business leaders) that students are in fact academically … what is a good antonym for adrift—anchored? moored?
In other words, administrators have directed significant institutional infrastructure toward taming assessment, and as far as teachers and students are concerned, perhaps the most salient manifestations of this infrastructure are the mega-syllabus and learning outcomes.
Most college students today do not know a world before the mega-syllabus, but when I was an undergraduate in the 90s, a typical college syllabus consisted of a one-page, loosely laid-out agenda: basically a short course description and a list of readings. Nowadays, a typical syllabus can be 15 to 20 pages or longer, swollen with learning outcomes (more on those in a moment), detailed course schedules, and university-mandated, department-mandated, and individual instructor-mandated policies, in addition to detailed grading and assignment breakdowns and lists of resources (also university-mandated) that might help students navigate myriad challenges related to academics, mental and physical health, and financial need. (That last part is perhaps the most compassionate aspect of the mega-syllabus, at least in theory, though in my experience most students are not aware of these resources—at least in part because the lists tend to appear around page 17.)
Syllabi are sometimes treated as contracts between teachers (and the institutions they represent) and students, whether or not these “contracts” are legally enforceable. (Readers interested in how syllabi evolved from off-the-cuff, almost literally throwaway sheets of paper to bloated, pseudo-legal documents might enjoy Rebecca Schuman’s brief history of the mega-syllabus. You will not be surprised to learn that neoliberalism and the corporatization of higher education played a big part in this story. You might also find Dana Lloyd and Vincent Lloyd’s metaphor of the carceral syllabus interesting.)
Then there are learning outcomes, which were supposed to be the theme of this post and, for that matter, the one before it. I keep promising to talk about learning outcomes, and I keep kicking that can down the assessment road, because it is the wickedness of learning assessment that interests me, whereas learning outcomes are bureaucratic tools designed to make assessment (seem) tame.
Every time I start writing that post, I get entangled in semantics, like the differences between learning goals, objectives, and outcomes. This problem is endemic to the world of learning outcomes, which after all exist because of the technocratic imperative to make the beautiful and fascinating fiction of learning appear to be nonfiction. This is why education designers are often adamant that learning outcomes follow a formulaic structure—to the point of diagramming sentences for us—and that outcomes use appropriate verbs. (Some of this may be unavoidable: educational designer Paul Hanstedt, whose book Creating Wicked Students is unusually forthright about the wickedness of assessment—as discussed in Part Two of this series—also likes to geek out about verbs.)
Despite the skepticism in that last paragraph, I wholeheartedly support the basic aims of (most?) education designers: ensuring that students have meaningful and intellectually rich experiences in college that help set them on their way to whatever comes next on their path. Many education designers are motivated by values of inclusivity and access, and they believe education should be a force for social mobility and equity—values I share as well.
I imagine that most education designers also recognize that, in practice, learning is messy and non-linear. But they also tend to advocate practices like course mapping—the idea that learning outcomes can be coherently and meaningfully mapped onto the instructional methods and assessment tools used in a course—as if curriculum development and assessment could be wrapped up in a neat little package.
And this is where our pathways to meaningful learning start to diverge.