10%

We teachers learn early on that exams should reflect what students have learned. They should attempt to measure what was taught, to capture student understanding in a way that helps drive future instruction.

But lately, I’ve been asking myself, what if I included material on exams that students haven’t explicitly learned? What if I expected them to stretch what they did learn to apply it in a new way?

Specifically, I’m thinking that 10% of each exam would be material that students have never seen in class or on homework, completely unknown to them until exam day. This 10% would push students to expand and enrich what they did learn. It would allow me to bridge pre- and post-exam content and possibly preassess things to come. It would trigger meaningful reflection afterward which, I hope, would lead students to genuinely learn something new. It would also help me measure how far their understanding of the mathematics can take them into uncharted territory, which is probably worth knowing in and of itself. And besides, the oh-so-high-stakes Regents exam in June is filled with problems that neither they nor I could have predicted…so why not prepare them for that all year long?

All that sounds great. But what scares me is the ethics of it all. This is where my preservice days haunt me. How could I possibly hold my kids accountable for material they’ve never interacted with? Is that fair? The unpredictability for the students is making me second-guess myself.

Then again, I’m only thinking about what’s expected now: that exams mirror the problems students have already done. But what if this unknown 10% was a norm baked into our classroom culture from the jump? What if it was something students understood and acknowledged going into every exam, an inherent challenge I placed on them to demonstrate their mathematical abilities in new ways?

 

bp

 

The day after


I don’t like review days before exams. I’d much rather spend the day after an exam analyzing mistakes and relearning. I find this crucial in promoting a growth mindset in my students. My struggle has been how to structure these post-exam days. Here’s a formative assessment idea that I’ve used a few times this year.

The day after an exam, I set up the room with 3-5 stations. Each serves as a place to study a particular concept that was on the exam.

My bell ringer asks students to check their exam performance on the bulletin board in the back of the room. It lets them know for which concepts they earned proficiency. I also email the kids their performance immediately after assessing the exams, but many don’t check.

I hand back the exams and, based on their performance, students move to the station for a concept they need help with. If they have earned credit for every concept on the exam, I ask them to float and help others. At each station they use notes, each other, and the feedback I provided on the exam to analyze and learn from their mistakes. I also have practice problems at each station so they can make sure they understand the concept. I float around the room and help. Of course, the SBG data allows me to sit with the students who need me most.

After a student feels they have successfully relearned a concept, and I usually check in to confirm, they can retake that concept. The retakes are in folders in the corner – students grab one and do it anywhere in the room. They submit it and begin working on another concept, if necessary. It doesn’t matter how many concepts a student retakes during the period, but it usually works out to be 1-2.

Before I did this activity, I was concerned that since each station would be full of students who struggled with a concept, they would all sit together and get nowhere. This hasn’t been the case. The kids are diligent about relearning. This may be because they like retaking exams and earning proficiency during class time, as I usually make them come after school to do this. It helps that the relearning is targeted and individualized to each student. Plus, it’s all formative. They go wherever they feel they need to. They assess themselves, but use one another in the process.

It can look and feel chaotic. But that’s the point. Improvement is messy. It’s also amazing – especially when it happens amongst your students.

bp

Knowledge Audits

How can I know what my kids know?

I’ve been asking myself that question for a long time. In my Regents-obsessed school, I’m forced to ensure my students can perform well on end-of-year state exams. The typical learning flow in my class usually looks like this:

  1. Student learns X.
  2. Student demonstrates understanding of X.
  3. Student learns Y and forgets X.
  4. Student demonstrates understanding of Y and has no idea what X is.

Compile this over the course of a school year and you have students who understand nothing other than what they just learned. What does this mean for a comprehensive standardized exam? Disaster!

Sure, a lot of this has to do with pacing and with students not diving deep enough into what they learn to make connections. That is a sad reality for too many teachers, including me. So given these constraints, how can I help kids build long-lasting understanding of what they learn instead of forgetting everything other than what we’re doing right now?

In the past, I’ve “spiraled” homework and even put review questions on exams, but this never helped. There was no system to it and I never followed up. This year, I’m lagging both homework and exams, which does seem to be making a difference. But with the ginormous number of standards that students are supposed to learn each year, I still feel this isn’t enough.

So, last week I began implementing Audits. These are exams that do not assess concepts from the current unit. The plan is to administer about one a month, and because I lag my unit exams, I should have no trouble fitting them into the regular flow of things.

I’m choosing not to call them “Review Exams” or some other straightforward name in order to put a fresh spin on them and increase buy-in. So far, so good.

The hope is to continually and systematically revisit older content to keep students actively recalling these standards. This should reinforce their learning and help to make it stick. On the teacher side of things, I get an updated snapshot of where they are and can plan accordingly. The SBG aspect is simple: the results from the Audit supersede any previous level of understanding.

  • If a student has not previously earned proficiency on a standard that is assessed on an Audit, they can earn proficiency on it now. This removes the need to retest on their own.
  • If a student has previously earned proficiency on a standard, they must earn proficiency again or else lose credit for that standard. This would then require them to retest.

The first Audit resulted in a mix of students earning and losing credit on a set of standards. It was great. The proof is in the pudding: knowledge isn’t static, and my assessment practices must reflect this.


bp