Speed dating, revised & reloaded

I love speed dating.

With students, of course. The type of speed dating that reinforces content, promotes peer tutoring, and doubles as review.

But I always found the initial assigning of problems to be tricky and inefficient. Desks would be set up facing each other. I would give kids handouts with the practice problems on them as they walked into class. I would then walk around and assign each student a specific problem from the handout. I always did this strategically; I would assign higher-level questions to higher-level students. This way, the higher-level concepts would get facilitated effectively to other students who might not initially understand them. Because the kids needed to wait for me to “give” them their question, this caused some off-task downtime…which was not good. They would also spend less time speed dating (and doing math) because they were dependent on me at the beginning. Bottom line: I wasted a bunch of class time assigning questions.

Today, I tried another technique and I liked it. I placed numbers on the desks and wrote numbers on the handouts. I gave the students the handouts (randomly) as they entered, and instead of wandering around assigning questions to students, the kids just sat at the desk whose number matched the number on their handout. That number was also their assigned question, the one they needed to master. They didn’t need me to assign them a problem because they already had one. Here are the desks with the numbers.

The result: a much more fluid, efficient start to the activity. The students came in and knew exactly where to sit and which question they were responsible for. They ultimately spent more time working on math – which is the whole point.

I randomly assigned the questions as they walked in – so I got away from assigning higher-level questions to certain students. It was fine. I just walked around and helped students with their problems – which I would have done anyway. Even if I wanted a specific student (or students) to be in charge of a specific problem, I could simply make sure I handed them the number of that problem when they walked into the room. I would only need to do this for one or two questions, so it wouldn’t be terribly difficult. It is a lot better than students sitting around while I manually assign problems at the beginning of class.

3/18/15 UPDATE:

Love it.

bp

Review Days?

I was having a discussion with a few of my colleagues today about our upcoming quarterly exams. They’re basically our midterms – we just give them a fancier name. We were talking about the number of review days (or class periods) that are necessary for students to prepare for these types of summative assessments.

It made me think. I don’t really “review” before big exams. I don’t even regularly review before a unit exam. I definitely have days where students aren’t learning new material. Maybe they’re reinforcing things they learned previously. Or maybe it’s an extension of a previous lesson. But those aren’t the days I’m thinking of. I’m thinking of traditional review days that aim to refresh students’ minds before a significant assessment.

Whenever it’s necessary or if I feel the moment is right, sure, I’ll whip out a game to review or have the students speed date to catch each other up. I do these sorts of activities from time to time, but this is infrequent and rarely happens right before an exam.

So why don’t I review before exams?

I want raw, unfiltered data on my students’ understanding. By reviewing just before an exam, I am giving my kids a mental cheat sheet to use on the day of the exam. Did they understand the content before we reviewed, or because we reviewed? My data is inherently skewed because of the review. If I test them on a concept we studied two months ago, I want data on their understanding of that concept without first refreshing their memories. I want them to forget a step or two. For that matter, I don’t mind if they completely forget how to start a problem. This is precisely the information that will drive my future planning, and those misconceptions are exactly the sorts of things that will help my students do better in the future.

But don’t students want review? I mean, they want to increase the odds of performing well on the assessment and therefore increasing their overall grade in the class, right? I answer that question in two ways. First, there are plenty of instances where I don’t even count an exam towards their overall grade. Yes, I said that. I treat it as a diagnostic and the kids buy into it. I find out where they’re at and they get a (potentially poor) exam that doesn’t count against them. We all win. Second, students will adapt to the no-review-before-an-exam policy. They will meet our expectations. If they know I don’t review before exams, then over time they will prepare/study accordingly. And if at first they don’t, they may complain, but eventually they will come around.

So what to do with that extra time not spent reviewing? I spend the day or two (or three) after a big exam reteaching and reassessing what my students had trouble with. It just makes sense. Because I’ve looked at the data, I have a pretty good understanding of what they know and don’t know, so I can pinpoint how I reteach a particular set of concepts. Oftentimes, I immediately reassess them on concepts they struggled with. This almost always results in improvement, which only helps to establish a growth mindset. It also helps them understand the method behind the madness of no review days.

I guess all this may count as “review after the test” and I’m good with that. Reviewing before an exam is overrated. Intuitively, as teachers, it makes sense to review before. But I think the more effective strategy is to do so after.

I’m really curious about what others believe about this. How do you incorporate review into your class structure?

bp

Quick Key

To help me collect data, I’ve been using a tool for the last couple of months. It’s called Quick Key, and it quickly and easily collects responses to multiple-choice questions.

For a long, long time, my school utilized the Apperson Datalink scanner to aid in scoring the multiple-choice portions of exams. It not only scores exams quickly and efficiently, but its accompanying software provides insightful data analysis that I use to modify my teaching. On the downside, these machines are pricey (almost $1000) and require you to purchase proprietary scanning sheets that work only with the machine. Each department in my school had one machine.

Because of my push towards standards-based grading, I find myself giving smaller, bite-size assessments that target fewer concepts. Consequently, I am assessing more frequently and I need the scanning machine at least once a week. The machine was constantly changing hands and I was always running around the building trying to track it down.

I decided that I didn’t want to be a slave to the scanner – and its proprietary sheets. It’s not sustainable, especially when we have mobile technology that can perform the same task and provide similar results.

Enter Quick Key.

Quick Key has allowed me to score MC items and analyze my students’ responses in a much more convenient and cost-effective way. Like, free. Hello. You simply set up your classes, print out sheets, and start scanning with your mobile device. (You don’t even need to have wifi or cellular data when scanning.) The interface is pretty clean and easy to use. Plus, it was created and designed by a teacher. Props there too.

Data is synced between my phone and the web, which allows me to download CSV files to use with my standards-based grading spreadsheets.

My SBG tracking spreadsheet

That is the big Quick Key buy-in for me: exporting data for use with SBG. As I have mentioned before, SBG has completely changed my teaching and my approach to student learning. At some point, I hope to write in-depth about the specifics of this process and the structure I use.
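In the meantime, here’s a rough sketch of what I do with the export. This is just an illustration, in Python: the “Student” column, the “correct”/“incorrect” cell values, and the item-to-standard mapping are all made up for the example, since the actual Quick Key CSV layout (and your standards) will differ.

```python
import csv
from collections import defaultdict

# Hypothetical item-to-standard mapping; the real one lives in my spreadsheet.
ITEM_TO_STANDARD = {"1": "A.REI.4", "2": "A.REI.4", "3": "F.IF.7"}

def scores_by_standard(path):
    """Tally each student's correct answers per standard.

    Assumes (hypothetically) one row per student, a "Student" column,
    and one column per item number containing "correct" or "incorrect".
    """
    totals = defaultdict(lambda: defaultdict(int))
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for item, standard in ITEM_TO_STANDARD.items():
                if (row.get(item) or "").strip().lower() == "correct":
                    totals[row["Student"]][standard] += 1
    return totals

for student, tallies in scores_by_standard("quickkey_export.csv").items():
    print(student, dict(tallies))
```

Once the responses are tallied by standard like this, getting them into my SBG tracking spreadsheet is a quick copy-paste or import.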

Though the Quick Key data analysis isn’t as rigorous as what I would get from Datalink, it suffices for my purposes. I sort of wish Quick Key would improve the analysis they provide, but for now, if I need more detailed analytics, it usually requires only a simple formula that I can quickly insert (see the sketch below the samples).

Sample data analysis from Quick Key
Sample data analysis from Datalink
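If I want a stat the built-in analysis doesn’t surface, it’s usually a one-liner. For instance, percent correct per item (a rough proxy for item difficulty) looks something like this; again, the “Q1”, “Q2”, … column names and the “1”-for-correct convention are hypothetical stand-ins for whatever the actual export uses.

```python
import csv
from collections import Counter

def item_difficulty(path):
    """Percent correct per question, assuming hypothetical columns
    "Q1", "Q2", ... that hold "1" for a correct answer."""
    correct, attempts = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for col, val in row.items():
                if col.startswith("Q"):
                    attempts[col] += 1
                    correct[col] += (val or "").strip() == "1"
    return {q: round(100 * correct[q] / attempts[q], 1) for q in attempts}

print(item_difficulty("quickkey_export.csv"))
```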

Through all this, I don’t overlook the obvious: MC questions provide minimal insight into what students actually know, especially in math. That being said, my students’ graduation exams still require them to answer a relatively large number of MC items. For that reason alone, I feel somewhat obligated to use MC questions on unit exams. Also, when assessing student knowledge via MC questions, I do my best to design them as hinge questions. TMC14 (specifically Nik Doran) formally introduced me to the idea of a hinge question: an MC question consciously engineered so that each answer choice categorizes and targets a specific student misconception. In this way, students’ responses to MC questions, though less powerful than short-response questions, can give me an intuitive understanding of student abilities.

Quick Key recently introduced a Pro plan ($30/year) and now places limitations on free accounts. Still, their free plan offers plenty for the average teacher.

Either way, Quick Key still beats a $1000 scanner + cost of sheets.

bp

Traffic Light


I’ve seen and read about many “traffic light” strategies used in the classroom. In most instances, it’s a label we use for a strategy that helps us gauge student understanding or receive feedback. Here’s another twist on it.

I’m using it as a formative assessment strategy that I fittingly call Traffic Light. (Very creative, I know.) I’ve laminated red, yellow, and green pieces of paper and slid them into another laminated piece of paper that I half-taped to the top of each desk.

During any given lesson, I announce “Traffic Light!” and my students hold up the color corresponding to their level of understanding at that moment. Sometimes I see a sea of green, sometimes a mix, and sometimes I see so much red that I turn red myself. Either way, I have found the cards to be an indispensable tool for keeping a pulse on how things are going and, if need be, changing things up on the fly. There have been plenty of instances where I needed to re-explain something, regroup students, or change the approach to a concept. And without this in-the-moment feedback from the kids, I probably would not have been aware that a change was necessary.

I must put out a disclaimer. When I first started using the cards, I found that some of the quieter students would hold up a green to avoid me eyeing their yellow or red card – essentially making them “stick out” to me. I had a talk with my classes about how their learning is dependent on their integrity. We also discussed honesty as it relates to their understanding and how this is a driving force of everything we do. I did find that all this helped encourage the kids to provide more accurate responses.

Besides the obvious benefit for me, their teacher, the students actually enjoy using Traffic Light. At the end of the first semester, I asked each student to tell me one thing they thought went well in our class and one thing they felt needed improvement. To my surprise, several students mentioned the Traffic Light cards.

(“the new grading system for exams” refers to my shift to standards-based grading)

It could be the interactivity. Students get to, essentially, voice their opinion…and teens love to do that. It could also be the message it sends: that I’m willing to adjust any lesson based on how they’re learning – and then to actually adjust it. Who knows. I’m just glad they’ve taken to it.

bp
