Skating to an erratic puck

Why transformation needs to be ongoing, and one way of approaching it

By Teddy Svoronos

As I get the ball rolling on my course redesign, I’ve been reflecting on the feeling of whiplash that many educators (including myself) are experiencing. It goes something like: “I spent so much of 2020-21 redesigning my course for Zoom; do I seriously need to do this again?” What’s worse, the prescriptions of these two redesigns feel at odds - after pivoting to online learning during a pandemic in which flexibility and asynchronous work were the order of the day, many now feel that anything other than in-person assessment is not feasible.

Even if these parallels resonate, though, the two scenarios are fundamentally different. Transitioning to Zoom teaching was about figuring out new ways to deliver the same underlying content - enabling active learning in an online format, making use of synchronous chat and asynchronous materials. While some of us took the opportunity to rethink the content of our courses, that felt like a “nice to have”. In the reality we’re in now, rethinking content is absolutely essential.

When analogies fail

So how should I approach the question of what remains relevant, what is newly relevant, and what is now irrelevant in what I teach? The idea that I should be “skating to where the puck is heading” resonates; after all, we’re in a period of major technological change, and I want whatever changes I make to last more than a few months. The version of our generative AI course that we taught in Spring 2024 is quite different from the version that we taught in Fall 2025, and for good reason. In my statistics course, my “tell me why AI got this statistical concept wrong” assignments are well out of date, and have shifted to “do this analysis yourself without AI, then use AI to replicate your result and extend it beyond your own capabilities”.

The problem with the skating analogy, though, is that it’s extremely unclear where the puck is heading! While the capabilities of AI models are increasing quickly, they do so at an inconsistent rate. As a result, when using a new frontier model I never know whether I’ll leave the experience thinking “why on earth did I do that manually for so long?” or “why on earth did I just waste an hour watching AI fail so miserably?”

For me, the punchline of all this is that redesigning our courses during this period of technological change will have to be an ongoing adaptation, not a one-time thing. My colleagues who don’t spend most of their professional life thinking about teaching (can’t relate) might read that and shudder. But I think we can do this in a way that doesn’t feel like we’re starting from square one every time.

A three-step approach

To continuously adapt my course to new AI advancements, my plan is to take the time to explicitly articulate three things. I’ll use my course’s final exercise as an illustrative example throughout:

  1. Step 1: What are the learning objectives for my course? Specifically, what do I want students to know, think about, and be able to do by the end of my course? I’m distinguishing this list from the question of whether I can actually measure these things.
    • Example: Students should be able to understand when a policy topic could be informed by empirics, investigate that question using real-world data, and interpret the results in a way that takes into account the limitations of their analysis.
  2. Step 2: What outputs do I use as proxies for those objectives? I am not preparing my students to enter a workforce where they do weekly problem sets. Rather, the work embedded in those problem sets develops the kinds of skills that I’m trying to get at in #1.
    • Example: My course has a final exercise where we give students real datasets, which they then analyze to produce a final memo, technical appendix, and presentation. It is worth saying: I do not believe the vast majority of my students will produce a memo of this kind in their professional careers. But the act of going through this process develops, refines, and showcases the skills listed above.
  3. Step 3: Is it possible to reliably assess those proxies? This is the source of much hand-wringing these days; if a student can produce an excellent memo but I have no way of verifying that they wrote it themselves, the link between #1 and #2 is severed.
    • Example: As I said in my previous post, Claude Code can complete a high-quality final exercise, soup to nuts, with no intervention from me. This means that evaluating a group’s final output may not be sufficient to assess my students’ work. We currently have groups make weekly progress toward the final exercise through their problem sets, an approach I may want to expand as I think through my redesign.

I believe that we as faculty should fully articulate these three things for our courses, in writing. Then whenever the next massive leap (or regression!) in AI capabilities takes place, we can return to this document and decide what, if anything, we need to change.

It’s certainly not easy to do that. In the final exercise example that I lay out above, I’ll need to assess (a) whether those underlying skills remain relevant, (b) whether I have chosen the right proxy for those skills, and (c) whether that proxy can be meaningfully treated as a signal of student learning. I’ll also have to resist the urge to decide that what I’ve been doing has luckily been exactly right all along, and that I should therefore keep doing it. But having this information laid out in a way that I can continually revisit feels like a way to resist that siren call, since it breaks a bigger, more daunting transformation into lots of manageable, iterative changes.

What’s next

So that’s the plan, at least for now. I’m going to spend the next few days articulating Step 1 for my course - not just laying out the learning objectives, but mapping the specific skills and capabilities that I currently associate with each objective. Then I can go through each one and decide which remain important, which should be dropped, and what new things should be added. Thanks for reading, and please share your thoughts!