Several years ago, I attended a six-session training offered by the teaching and learning center at my then-university. I wanted to get certified to teach hybrid courses, meeting students in person once a week and having them work asynchronously on the other class day. Here is the text of a slide presented early in the training:
Course goals, which lead to
Course learning objectives, which lead to
Sequence of modules, which lead to
Module Objectives, which lead to
Assessments, which lead to
Sequence of activities, which lead to
Lesson designs, which lead to
Course Materials, which lead to
Syllabus and communication
This logic model reflects what I call course mapping on steroids. The course learning objectives (what I call learning outcomes, given their “by the end of the course, students will be able to” language) were broken down into module objectives, which bred copious acronym-speak: you had your COs and your MOs, each enumerated as CO1, CO2, CO3 and MO1, MO2, MO3, and so on, and every MO was supposed to link back to at least one CO. Having an MO that wasn’t tied back to a CO was a no-no. And (of course) each assignment was supposed to connect to at least one MO (and, by natural extension, a CO); otherwise, why would you assign it?
The educational designers facilitating this training were conscientious colleagues whom I had worked with in other capacities; I respected them and their commitment to supporting faculty and students, and I would gladly have a beer with them outside work. I also appreciated the challenge they faced in designing and facilitating certification trainings for interdisciplinary arrays of faculty with very different backgrounds in pedagogy and orientations toward teaching.
But what quickly became clear was that my colleagues had adopted a one-size-fits-all approach to negotiating this challenge, which meant subjecting everyone to an extended, Pedagogy 101-adjacent tutorial and workshop that drew heavily on L. Dee Fink’s taxonomy of significant learning and that framed course design as a highly linear and straightforward process of working “backward” from goals to activities.
As models of course development, I think Fink’s taxonomy of significant learning and Wiggins’s backward course design have their uses; they can potentially support this blog’s purpose of cultivating learning environments that challenge (and hopefully inspire) students to grapple with complex issues. The certification process was itself part of the institution’s broader effort to incorporate evidence-based teaching into curriculum development, a goal I supported (and still support) in principle.
In principle, that is, course mapping can make curriculum development more intentional and thoughtful for the many teachers in higher ed who never had the chance to take a pedagogy course or practicum as they began their teaching careers and who often end up replicating what they experienced as students (practices that may have worked well enough for them but that often don’t work for the many students who do not intend to spend their careers in the academy). I am sure the prospect of following a highly scaffolded and prescribed process of determining learning goals and outcomes and then devising assignments and lesson plans that all map onto those outcomes is appealing to some teachers.
And seriously: Wouldn’t it be great if we could determine what we wanted every student to get out of our course ahead of time, if we could align those goals neatly with the assignments and activities we asked students to do, if the students in our classes could all follow that map and learn what they needed to at the properly scaffolded times, and if, by the end of the course, they had all acquired roughly the same knowledge and skills?
Well, would it be great, even if it were remotely possible in this very diverse human world of flesh-and-blood teachers and students? Maybe for some courses, keeping everything as close as possible to the original plan makes sense. But it is hard to see how this would be possible or even desirable in any course where students are expected to inquire into intellectually interesting problems to which they (and their teachers) genuinely do not know the answers ahead of time. In other words, the kinds of courses that writing teachers like me like to teach.
Creative teachers can preserve some room for student agency even amid all those COs and MOs, but as others have pointed out, the pressure to pre-map a course’s trajectory makes it very difficult to allow for organic evolution based on the real-world, dynamic experiences of the people taking the course. Course mapping is, in essence, largely an exercise in rooting out uncertainty and unpredictability. But uncertainty and unpredictability are inherent to wicked problems.
My brain simply does not process the task of establishing and aligning goals and assignments in such a linear way, though I have tried. Instead, I generally build courses the way I go about other creative endeavors, including writing these blog posts. I feel my way around, starting with rough ideas of experiences I think students would benefit from having, then reading and reflecting and (if there is time) batting around ideas with colleagues, usually realizing the idea isn’t quite right or is really about something else, and circuitously working my way to what I want the course to actually be about. The question, “What would students of this class find engaging?”—like the question, “What would readers of this post find engaging?”—is always on my mind, which can cause me to blow up what I have been working on and go in a different direction.
I don’t believe educational designers themselves believe that developing a course (or, for that matter, taking the course) is really this straightforward. Nevertheless, these sessions allotted minimal time to discussing the messiness of real-life teaching and learning, let alone the tenuous connections between the assignments and activities we require students to perform and our capacity to assess what students learn from these performances. Time was limited, and the training sessions, including their goals, outcomes, and activities, had (of course) all been pre-mapped. In other words, the certification process had succumbed to institutional pressures to standardize, quantify, and homogenize—i.e., to tame assessment and learning.
I knew my colleagues were doing the best they could, and they were professional and collegial. I didn’t want to make their jobs harder (you know, being “that guy” who complains endlessly and questions the entire premise of the training they had put all that effort into preparing). So throughout the sessions, I muzzled my frustration. I submitted the materials they required (sprinkling in some MOs among my COs), I taught my course, and I got my certification (and the small stipend that came with it, a nice carrot to go along with sitting through those sessions, which certainly felt like a stick).
I suspect there was nothing unusual about this certification process; I imagine it was typical of how contemporary teaching and learning centers run course development workshops and trainings. If anything, I had it easy. After all, once my syllabus had been submitted and “approved,” oversight ceased, and I could essentially teach the course as I wanted.
But at that institution, pressures to tame assessment (and to capture evidence-based teaching in the name of the neoliberal university) were even heavier for fully online courses. Any time a new online course was proposed, or an in-person course was to be given an online counterpart, instructors were expected to work with an online course designer to obtain either the Quality Matters (QM) or the slightly less laborious High Quality (HQ) designation, an extended review process even more rigidly built around course mapping. Acquiring the QM/HQ designation (pick your acronym) entailed meeting with Reviewers, Master Reviewers, and ostensible “Subject Matter Expert” Reviewers, and obtaining at least 85 out of 100 points on the QM rubric … you get the idea. (Yes, there were stipends involved here, too.) Once all that time and effort had been expended on obtaining the quality designation, a teacher would need considerable gumption (and fortitude against the sunk cost fallacy) to stray from the approved design.
But all this pales in comparison to what many K-12 instructors endure, still mired as they are in the aftermath of No Child Left Behind, Race to the Top, the Common Core standards, and other components of public education’s never-ending (if regularly morphing) accountability movement. If you want to get an idea of what this means, check out the Florida Department of Education’s Standards & Instructional Support website. These standards were derived from the Collaborative Planning, Assessment, and Learning Management System—aka CPALMS—Florida’s “official source for standards” (the logo of which features a smiley-faced sun wearing large square glasses, beaming educational joy in front of a palm tree and assuring us it is cool to look smart).
If you play around, you will find that each standard embeds even more granular standards in a fractal of education-speak and codes like ELA.10.R.1.4. A colleague who taught for years in a Florida public high school explained that annual reviews depended in large part on how explicitly teachers tied activities and assignments to these standards, down to individual questions on individual quizzes.
This is the world of tame assessment we occupy, one in which learning goals and the measurement of learning are considered roughly the same thing, and teaching and learning are both treated as linear, mappable, and predictable.
To bring this sun-drenched foray into course mapping full circle: What would have been more useful for me to do in those hybrid training sessions? At the least, I would like to have seen every faculty participant given multiple opportunities to discuss what they were thinking about doing and, over time, to explain their rationale for how they were designing and sequencing course material. This would have enabled interdisciplinary feedback and (ideally) the mutual discovery of resonances—both pedagogical and intellectual—across our disciplines, as well as sparks of ideas we likely would not have experienced working in isolation. This is what I have experienced in other faculty communities of practice I have participated in or facilitated myself, which is why they make up a big part of my vision of teaching as communal intellectual work.
I believe communities of practice like these, by enhancing faculty engagement in the process of course design, can lead to more vibrant and engaging learning environments for students, which in turn feed back into even more faculty engagement. Granted, this “outcome” of creating vibrant learning environments is fairly intangible (and should remain intangible), but it can produce “metricizable” downstream effects: more students graduating from college with compelling stories to tell employers and graduate committees about what they did and learned in college, leading to more meaningful jobs and to satisfied alumni donating to the alma maters that provided these rich (dare I say at times transformative?) learning experiences.
Now that is a logic model I can get behind, and there is evidence for its logic-ness in how college alumni highlight engagement and connections with faculty as key to their professional success.
To be sure, the virtuous circle this model envisions requires years to build, and it relies on trusting people for the non-metricizable parts of the process—trust being a value our institutions have become decidedly terrible at upholding. I also recognize the limits of trust, and I note that some teachers (probably more than I want to believe) couldn’t care less about creating vibrant learning environments. In other words, I do not advocate an alternative assessment system that simply does away with learning goals, metrics, and evidence-based teaching and “just lets teachers be teachers,” but that is a post for a different time.
A truly wicked assessment system is not the answer (there is no singular answer to the wicked problem of assessment), but we can try to cultivate a more intentional balance between trust and accountability, tangible and intangible, predictable and unpredictable, mapping and curious (if errant) wandering, autonomy and standardization. Unpacking what this might look like is where this series is headed next.