**I.**

If you give a quiz covering lots of different topics, you’re going to get a lot of different mistakes. Which leaves you with a dilemma: how do you address those mistakes?

Yesterday’s quiz in geometry was a review quiz, so the topics were from all over the place:

- angles in isosceles triangles
- inscribed angles in a circle
- area of triangles, parallelograms and trapezoids
- congruence proofs

As expected, kids distributed their not-quite-there work fairly evenly across these topics. (OK so that’s not true, there were a lot of issues with the congruence proofs. There always are and always will be. Sigh.)

Here were two bad options for returning the quiz:

- Try to address all the issues with individual comments. First, it’s a game of whack-a-mole that is guaranteed to drive me insane. Second, what should I do? Try to leave perfect hints? Say nothing, and let kids figure out on their own what they did wrong? Show them the correct way to answer the question, and thereby eliminate anything for the kids to actually *think about* when I return the quizzes?
- Pick just one thing to focus on. Reteach that one thing in a careful way, then return the quizzes and ask kids to revise.

The second of the two options is great when the mistakes are all in the same galaxy. (I wrote about this in a post, Feedbackless Feedback.) But, I’m realizing now, this isn’t a terrific move when the mistakes are distributed across many topics. On what basis should I pick something to focus on reteaching? Any choice would be equally bad.

**II.**

While reviewing the class’ quizzes, I found myself falling into written comments, at least until I figured out what else to do with the quizzes.

I used to write long, wordy comments that were essentially hints on the margins of the page. (*“Great start! Have you tried multiplying both sides of the equation by 3?”*) I came to dislike those sorts of comments, as they just focus, focus, focus attention all on THIS problem. But I don’t particularly care whether a student gets *this* problem correct; I care about the generalization.

What I’ve fallen into is, whenever possible, writing a quick example that’s related (but not identical) to the trouble-problem (the problem-problem) on the page. I do this below on the second question:

Then, I ask kids to revise the original on the basis of the example (or anything else they realized).

After writing a few of these example-comments, I realized I was taking a lot of time doing this, and repeating myself somewhat. I also realized that I don’t know if I could repeat this on every page for the congruence proofs, as the problem itself was reasonably complex:

I wasn’t sure what to do. Then, I remembered something I had read from Dylan Wiliam — I think it’s in *Embedded Formative Assessment*. His idea there was that you can give *all* the class’ comments to everyone, and then kids have to decide which comments apply to them.

I thought, OK, I can work with this. So I quickly (quickly!) made a page of examples, one for every mistake I saw on the quiz:

My routine in class went like this:

- Hand out the examples for revision.
- Hand back the quizzes with comments.
- Search for an example that’s relevant to your mistake.
- Call for revision on the basis of the examples. Work with friends, neighbors. Of course, I’m available to help.
- Then, try the extension task.

This was my first time trying this, but I thought it went well. Solid engagement, really good questions, no unproductively stuck students.

When you do something good in teaching, you never really know if it’ll work again, but I’ve got a good feeling about this one. It feels like a lot of what has already worked for me, but in a better order.

**III.**

Harry Fletcher-Wood is very nice and has a lot of interesting thoughts about feedback. As such, Harry and I very nicely disagree about a pretty interesting question about feedback: *how can you teach people how to give better feedback?*

The usual caveats apply: I am not a teacher teacher, but Harry is involved in teacher education, and I have no idea if I’m right on this.

In any event, Harry recently published a really cool post where he tried to synthesize a lot of the research on feedback into a decision tree:

Now, this is awesome as a synthesis. But just because something is a good description of feedback doesn’t mean that it’s useful prescriptive advice. My favorite example of this comes from Pólya’s strategies for mathematical problem solving. Alan Schoenfeld has a nice way of putting it in *Learning to Think Mathematically* — the strategies have descriptive, but not prescriptive, validity:

In short, the critique of the strategies listed in *How to Solve It* and its successors is that the characterizations of them were *descriptive* rather than *prescriptive*. That is, the characterizations allowed one to recognize the strategies when they were being used. However, Pólya’s characterizations did not provide the amount of detail that would enable people who were not already familiar with the strategies to be able to implement them.

In other words, just because a heuristic is a good description of practice doesn’t mean that it is an effective pedagogical tool. And that’s precisely my concern with Harry’s decision tree.

Feedback is a high-level concept that describes a TON of what happens in teaching. And any guidelines for how to give feedback effectively are also going to be high-level in a way that reminds me of Pólya’s moves like “find a simpler problem” or “draw a picture.”

And just as Pólya’s moves struggle because they aim to guide problem solving in geometry, algebra, topology, etc., *all* areas of math, Harry’s decision tree seems to me an attempt to guide feedback in *all* areas of teaching — math, history, medical school, etc.

Of course, Harry doesn’t intend for this to be the *only* thing guiding students, but neither did Pólya. My question is whether these generalizations themselves are helpful, beyond whatever ways that teacher educators can make them concrete and specific for teachers.

But what’s the alternative?

I don’t know yet. I can say a few things now that I couldn’t a few years ago:

- I think domain-specific — math-specific, history-specific — generalizations will be more useful than domain-general ones.
- I think that the generalizations can productively come in the form of *instructional routines*.

And, with this post and the other one, I now have two generalizations I can make about giving feedback in math class.

First: if there’s a problem that a lot of students have trouble with, consider a reteaching/revising cycle like the one in this image:

Second: if mistakes are sprinkled across too many topics, consider something like the revision routine I described in this post.

**IV.**

My bet is that a lot of knowledge about teaching looks like this. It’s not that there isn’t knowledge about teaching that accrues, but that we look for ways to scale things out of their contexts. Then we call those things myths and talk about how we have to kill ’em.

In general, generalizations about teaching are hard to come by. But nobody teaches in general. All teaching is intensely particular. *These *kids. *These *schools. *This *idea.

Some people are skeptical of the possibility of making generalizations about teaching, and the vast majority of people are cheery about making sky-high generalizations that cross every context. There’s a middle position that I want to find. There’s a sweet spot for knowledge about teaching, though I don’t know if we’ve all found it yet.

Here’s a thought about Harry’s diagram.

1. Provide examples of teaching that novices can follow. Instructional routines are a good example of this as they reduce the number of decisions novices need to make while still maintaining important aspects of the complexity of teaching.

2. Work with novice teachers to practice and refine their use of instructional routines so that they are fairly fluent with the routines.

3. Unpack the routines again but for a different element of teaching you want to focus on, like feedback for example. What you described, “provide examples of feedback, learners decide which are important for themselves, and then discuss the feedback with a partner before applying it to a new task” seems like it could easily be routinized, and such a routine would focus on a common area of practice – providing different types of feedback to a room full of individual learners.

This reminds me a little bit of Dylan’s recent post on Task Propensity. Often when introducing the instructional routines, we’ve focused on helping teachers learn the routine and accomplish the task at hand, forgetting sometimes that the goal is to learn ideas that they can generalize to teaching.


I used Google Forms in the last part of the lesson (5 mins) to see who had got the basics. Then I showed them a couple of pie charts to indicate how many got the answers correct.


There is so much in this post to think about. I’m going to have an initial go, by parts:

II

I love this approach. It seems to balance clarity about how to improve, efficiency and scope for student metacognition and responsibility. It’s beautiful.

My question about it would lie further upstream. Is there a way that you can narrow the test to avoid having to respond on issues which are all over the map? I don’t know enough about the choices that went into the test to judge and it feels like you’ve found an effective way to respond, so maybe it’s not an issue: but if a teacher came to me with the problem you faced and asked for advice for next time, I would suggest designing a test with the feedback in mind. Maybe that’s not practical for what you were trying to do on this test though.

III

I guess I would say that the decision tree is a tool to guide thinking, whereas in teaching teachers to give feedback I would use other supports: lots of models, and maybe instructional routines too.

Fundamentally though, I think you’re right about the analogy to Pólya – each question is bound up with lots of variables and requires an understanding of what the question means in your context: it’s reminiscent of Polanyi’s argument that maxims are useful only to people who already know what they’re doing.

That said, I think your two generalisations are strong ones and I suspect there are others that we can make – subject to your caveat that the more subject-specific the generalisation, the more useful it is likely to be… maybe not many, but a handful.

IV

This is a beautiful articulation of the problem. In history we talk about ‘lumpers’ (who lump everything together in broad brushstrokes) and ‘splitters’ (who insist everything is unique). We don’t have a term for that middle ground you describe, but consciousness of the value of generalisations, and their limits, is crucial.


I think I may have shared this post with you before, https://elsdunbar.wordpress.com/2015/09/21/thinking-about-feedback/, but I BRIEFLY describe a process that I used for students to make corrections on their assessments. It seems similar to your “comments for the whole” idea and it seemed to work well. I didn’t get a lot of time to perfect this idea because I came out of the classroom. But, I’d love to hear if you think it seems in line with what you are describing.
