One Step, Crumple, Toss

The other day I did an activity that reminded me of both Kate Nowak’s Solve, Crumple, Toss game and Jon Orr’s Commit & Crumple activity, but it was slightly different.

I grouped students in twos and threes and gave each group one problem on a full sheet of paper. They had struggled with a few concepts we recently tested, so the problems stemmed from those concepts. Each group completed the first step of their problem. That’s it. Then they crumpled the paper into a ball. After all the groups had crumpled, I had them throw the ball at/to another group in the room. The receiving group would uncrumple the paper, check the work that had already been done (correcting it if necessary), and complete the next step of the problem. They then crumpled and tossed the paper to another group. This process continued until every problem was completed.

I like this activity for several reasons:

  • Firstly, students must put focused effort into starting a problem. Teachers, and math teachers specifically, know that the first step of a problem can often make or break a student.
  • Secondly, the bite-size chunks that they work on after each throw make long, multi-step problems easily digestible and accessible. They’re not stuck, sometimes haphazardly, on a single problem for extended periods of time. The students, without even knowing it, scaffold one another.
  • From a problem solving perspective, the idea of emphasizing the completion of one step at a time could be useful. The students themselves must decipher the procedural “steps” of a problem and also relate them to a peer’s work. This may help to develop the skill of breaking down a large problem into a series of smaller ones. I’m not completely sold on my reasoning here, but I feel there’s something meaningful on this front.
  • This activity affords kids the time to analyze and challenge each other’s work. It’s weird, but I’ve noticed, even with other activities, that students are highly engaged when analyzing a peer’s work. Maybe this is because teenagers are so judgmental of each other already, who knows.
  • On the teacher side of things, it’s never a bad thing when an activity gives you the opportunity to walk around and assess all period. It was so helpful to provide loads of individualized feedback on concepts they’d previously struggled with.
  • Lastly: who doesn’t like to throw things?! This was by far the best aspect of the lesson.

I’m still wondering about how things ended. The exit slip did show improved understanding of the concepts, which was good, but the conclusion of the activity could have been stronger. I posted the solutions for each question (they were numbered) and groups checked to see if the problem they ended with arrived at the correct answer. Most did. I also opened up a class discussion about common mistakes that were found as they checked work. That said, there still may be something better I could have done to wrap it up. Hmm.

 
bp

Review Days?

I was having a discussion with a few of my colleagues today about our upcoming quarterly exams. They’re basically our midterms – we just give them a fancier name. We were talking about the number of review days (or class periods) necessary for students to prepare for these types of summative assessments.

It made me think. I don’t really “review” before big exams. I don’t even regularly review before a unit exam. I definitely have days where students aren’t learning new material. Maybe they’re reinforcing things they’ve learned previously. Or maybe it’s an extension of a previous lesson. Those aren’t the days I’m thinking of. I’m thinking of traditional review days that aim to refresh students’ minds before a significant assessment.

Whenever it’s necessary or if I feel the moment is right, sure, I’ll whip out a game to review or have the students speed date to catch each other up. I do these sorts of activities from time to time, but this is infrequent and rarely happens right before an exam.

So why don’t I review before exams?

I want raw, unfiltered data on my students’ understanding. By reviewing just before an exam, I am giving my kids a mental cheat sheet that they can use on the day of the exam. Did they understand the content before we reviewed or because we reviewed? My data is inherently skewed because of our review. But if I test them on a concept that we studied two months ago, yes, I want data on their understanding of that concept without refreshing their memories. I want them to forget a step or two. Or for that matter, I don’t mind if they completely forgot how to start a problem. This is precisely the information that will drive my future planning. Also, those misconceptions are exactly the sorts of things that will help my students do better in the future.

But don’t students want review? I mean, they want to increase the odds of performing well on the assessment and thereby increase their overall grade in the class, right? I answer that question in two ways. First, there are plenty of instances where I don’t even count an exam towards their overall grade. Yes, I said that. I treat it as a diagnostic and the kids buy into it. I find out where they’re at and they get a (potentially poor) exam that doesn’t count against them. We all win. Second, students will adapt to the no-review-before-an-exam policy. They will meet our expectations. If they know I don’t review before exams, then over time they will prepare/study accordingly. And if at first they don’t, they may complain, but eventually they will come around.

So what to do with the extra time not spent reviewing? I spend the day or two (or three) after a big exam reteaching and reassessing what my students had trouble with. It just makes sense. Because I’ve looked at the data, I have a pretty good understanding of what they know and don’t know, and I can pinpoint how I reteach a particular set of concepts. Oftentimes, I even immediately reassess them on concepts they struggled with. This almost always results in improvement, which only helps to establish a growth mindset. It also helps them understand the method behind the madness of no review days.

I guess all this may count as “review after the test” and I’m good with that. Reviewing before an exam is overrated. Intuitively, as teachers, it makes sense to review before. But I think the more effective strategy is to do so after.

I’m really curious about what others believe about this. How do you incorporate review into your class structure?

bp

Quick Key

To help me collect data, I’ve been using a tool for the last couple of months. It’s called Quick Key and it’s used to quickly and easily collect responses to multiple-choice questions.

For a long, long time, my school utilized the Apperson Datalink scanner to score the multiple-choice portions of exams. It not only scores exams quickly and efficiently, but its accompanying software provides insightful data analysis that I use to modify my teaching. On the downside, these machines are pricey (almost $1000) and require you to purchase proprietary scanning sheets that work only with their machine. Each department in my school had a machine.

Because of my push towards standards-based grading, I find myself giving smaller, bite-size assessments that target fewer concepts. Consequently, I am assessing more frequently and I need the scanning machine at least once a week. The machine was constantly changing hands and I was always running around the building trying to track it down.

I decided that I didn’t want to be a slave to the scanner – and its arbitrary sheets. It’s not sustainable. Especially when we have mobile technology that can perform the same task and provide similar results.

Enter Quick Key.

Quick Key has allowed me to score MC items and analyze my students’ responses in a much more convenient and cost-effective way. Like, free. Hello. You simply set up your classes, print out sheets, and start scanning with your mobile device. (You don’t even need to have wifi or cellular data when scanning.) The interface is pretty clean and easy to use. Plus, it was created and designed by a teacher. Props there too.

Data is synced between my phone and the web, which allows me to download CSV files to use with my standards-based grading spreadsheets.

My SBG tracking spreadsheet

That is the big Quick Key buy-in for me: exporting data for use with SBG. As I have mentioned before, SBG has completely changed my teaching and my approach to student learning. At some point, I hope to write in-depth about the specifics of this process and the structure I use.
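
In the meantime, here’s a rough sense of what that export-to-spreadsheet step can look like in code. This is a minimal sketch only, not Quick Key’s actual export format: the column names (Student, Q1, …), the filename, and the item-to-standard mapping are my own assumptions that you’d swap for whatever your CSV really contains.

```python
import csv
from collections import defaultdict

# Hypothetical mapping from item number to the standard it assesses.
# Quick Key's real export columns may differ; adjust to your file.
ITEM_TO_STANDARD = {1: "A.REI.4", 2: "A.REI.4", 3: "F.IF.8", 4: "F.IF.8"}

def sbg_scores(path):
    """Aggregate per-student, per-standard percent correct from a CSV export.

    Assumes one row per student, with a 'Student' column and columns
    'Q1', 'Q2', ... holding 1 (correct) or 0 (incorrect).
    """
    totals = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for item, standard in ITEM_TO_STANDARD.items():
                pair = totals[row["Student"]][standard]
                pair[0] += int(row[f"Q{item}"])  # correct responses
                pair[1] += 1                     # items attempted
    return {
        student: {std: c / t for std, (c, t) in standards.items()}
        for student, standards in totals.items()
    }

scores = sbg_scores("quickkey_export.csv")  # hypothetical filename
for student, standards in scores.items():
    print(student, standards)
```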

Though the Quick Key data analysis isn’t as rigorous as what I would get from Datalink, it suffices for my purposes. I sort of wish Quick Key would improve the analysis they provide, but for now, if I need more detailed analytics, it usually requires only a simple formula that I can quickly insert.

Sample data analysis from Quick Key
Sample data analysis from Datalink
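
To give one example of the kind of simple formula I mean: item difficulty, the fraction of the class that got an item right, is just an average of 0/1 marks (in a spreadsheet, something like =AVERAGE(B2:B31) down a column of marks). The same idea in a quick sketch, reusing the assumed 0/1 response rows from above:

```python
# Item difficulty: fraction of students who answered an item correctly.
# `responses` mimics the assumed CSV rows above (0/1 marks per question).

def item_difficulty(responses, item):
    marks = [int(r[item]) for r in responses]
    return sum(marks) / len(marks)

responses = [
    {"Q1": 1, "Q2": 0},
    {"Q1": 1, "Q2": 1},
    {"Q1": 0, "Q2": 1},
]
print(item_difficulty(responses, "Q1"))  # 0.666...: two of three correct
```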

Through all this, I don’t overlook the obvious: MC questions provide minimal insight into what students actually know, especially in math. That being said, my students’ graduation exams still require them to answer a relatively large number of MC items. For that reason alone I feel somewhat obligated to use MC questions on unit exams. Also, when assessing student knowledge via MC questions, I do my best to design them as hinge questions. TMC14 (specifically Nik Doran) formally introduced me to the idea of a hinge question: an MC question consciously engineered to categorize and target student misconceptions based on the answer a student chooses. In this way, students’ responses to MC questions, though less powerful than short-response questions, can provide me an intuitive understanding of student abilities.
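
To make that mechanism concrete, here’s a toy sketch (the question, choices, and misconception labels are entirely invented, not Nik Doran’s): each distractor is deliberately tied to one misconception, so tallying wrong answers amounts to tallying misconceptions.

```python
from collections import Counter

# Toy hinge question: every distractor encodes a specific misconception.
HINGE = {
    "question": "Solve: x^2 = 49",
    "key": "B",  # B: x = 7 or x = -7
    "misconceptions": {
        "A": "halved 49 instead of taking the square root",
        "C": "forgot the negative root",
        "D": "divided both sides by 2",
    },
}

def diagnose(answers):
    """Tally which misconception each incorrect answer points to."""
    tally = Counter()
    for choice in answers:
        if choice != HINGE["key"]:
            tally[HINGE["misconceptions"].get(choice, "unclassified")] += 1
    return tally

print(diagnose(["B", "C", "C", "A", "B"]))
# Counter({'forgot the negative root': 2,
#          'halved 49 instead of taking the square root': 1})
```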

Quick Key recently introduced a Pro plan ($30/year) that now places limitations on those who sign up for free accounts. Their free plan still offers plenty for the average teacher.

Either way, Quick Key still beats a $1000 scanner + cost of sheets.

bp
