We teachers learn early on that exams should reflect what students have learned. They should attempt to measure what was taught, to capture student understanding in a way that helps drive future instruction.

But lately, I’ve been asking myself, *what if I included material on exams that students haven’t explicitly learned? What if I expected them to stretch what they did learn to apply it in a new way?*

Specifically, I’m thinking that 10% of each exam would be stuff that students have never seen in class or homework. This 10% would push them to expand and enrich what they did learn. It would allow me to bridge pre- and post-exam content and possibly preassess things to come. It would trigger meaningful reflection afterward which, I hope, would cause students to genuinely learn something new. It would also help me measure how far their understanding of the mathematics will take them into uncharted territory, which is probably worth it in and of itself. And besides, the oh-so-high-stakes Regents exam in June is filled with problems that neither they nor I could have predicted…so why not prepare them for this all throughout the year?

All that sounds great. But what scares me is the ethics of it all. This is where my preservice days haunt me. How could I possibly hold my kids accountable for material they’ve never interacted with? Is that fair? The unpredictability this creates for students is making me second-guess myself.

Then again, I’m only thinking about what’s expected now, which is that exams will follow suit with the problems students have already done. But what if this unknown 10% was a norm baked into our classroom culture from the jump? What if it was something students understood and acknowledged going into every exam, an inherent challenge I placed on them to demonstrate their mathematical abilities in new ways?

bp

Brian – I like the idea of asking kids to expand on what they know and apply it on a regular basis, and to have that as an expectation in my classroom. And I think it’s a great formative pre/post assessment tool. But I don’t know if I could feel comfortable grading a student on something they hadn’t learned, given the way grades and report cards work in most (including my) high schools.

My thoughts exactly. I’m in a bind.

I’m not sure what your grading structure is – can it be some kind of ‘extend your thinking’/extra credit, or is assigning any kind of point value patently unfair?

Maybe. Generally, I’m not a huge fan of extra credit, but you got me thinking.

I have done this as a regular part of assessment for a few years now. I think that it depends on exactly what you are assessing. If you are assessing conceptual and procedural understandings, then you need to support students ahead of time and give them problems very similar to what they’ve seen before. It is not fair to assess students on a procedure that they have not seen before and expect them to discover it during a test. However, I think that we need to assess problem solving and argumentation in addition to conceptual and procedural understanding. If we value these things as much (or more) than the ability to repeat a process that they have seen before, we need to reflect this in what we are assessing.

I have received pushback on this, especially from students who have had great success repeating back procedures, and who have not had to do this before. This is definitely a shift from traditional math assessment, and is not easy. We certainly need to make low stakes opportunities for students to practice this before the unit test, and give them feedback and tools to succeed. The classroom needs to prioritize problem solving and a culture of arguing and critique. This is a tall order. For me this was one of the important shifts in buying into the CCSS math practices.

Thanks for your thoughtful reply, Nat! You’re right. And providing students ‘low stakes opportunities to practice’ is certainly a weakness of mine.

Hey Brian,

I like this idea!! What if you added in the 10% and asked students to do as much as they can and then write about where/why they got stuck? For example, if I was testing solving basic equations and my 10% question had a square root in it, my students might take the inverse of the operations they already know and then circle the square root and tell me that they don’t know the inverse of a square root.

Further, I think I’d grade them on a rubric about the power of their thinking/justification, not whether or not they got it right!