Checkpoints and homework, circa 2016

Here’s my current structure for exams, checkpoints, and homework. Everything is a work in progress.

Checkpoints

  • First off, terminology: I now call these summative assessments, formerly known as exams, ‘checkpoints’ to further establish a low-stakes classroom culture. The term feels much less formal, though I still slip and say ‘exams’ when in a rush. Plus, my frustration with the Regents exams is at an all-time high, so distancing myself and my students from any term that references them is a good thing.
  • I really liked how I lagged things last year, so I’m going to continue with this routine. This means that each checkpoint will only assess learning from a previous unit. In most instances this will be the previous unit, but once a month there will be a checkpoint that only assesses learning from material learned at least two units back. With my standards-based grading, students can lose proficiency on a standard at any time during the course of the year. The hope is to interweave what has been learned with what is currently being learned to help improve retention.
  • Speaking of SBG, I’m reinstituting mastery level achievement in 2016-17. I have yet to work out the kinks regarding how this will impact report card grades.
  • I will not review before any checkpoint, which is what I started last year. Instead, that time will be spent afterwards to reflect and relearn.
  • I make these assessments relatively short; they take students roughly 25-30 minutes to complete, but my class period is 45 minutes. I’m still trying to figure out how best to use that first 15 minutes. Last year I didn’t have this problem because my checkpoints always fell on a shortened, 35-minute period. Right now I’m debating some sort of reflection or peer review time.
  • I have begun requiring advanced reservation for every after school tutoring or retake session. I learned very quickly at my new school that if I don’t limit the attendance, it is far too hectic to give thoughtful attention to attendees. Right now, I’m capping attendance at 15 students per day with priority given to those who need the most help.

Homework

  • Disclaimer: developing a respectable system for homework is a goal of mine this year.
  • Homework assignments are two-fold. First, students will have daily assignments from our unit packet that are checked for completion the next day. Second, they will have a DeltaMath assignment that is due at the end of the unit, again, checked for completion.
  • Homework is never accepted late.
  • Homework is not collected.
  • To check the daily homework, I walk around with my clipboard during the bell ringer. While checking, I attempt to address individual questions students may have. This serves as a formative assessment that lets me gauge where they are on the homework. After the bell ringer, but before any new material, I hope to have student-led discussion around representative problems, depending on the homework that day (I haven’t gotten here yet). The goal is to have students write on the board the numbers of the problems that gave them a headache, so we know which ones to discuss.
  • I’m going to do everything I can to check it this year. It sounds simple, but over time things can slip away from any teacher.
  • I’m posting worked-out homework solutions on our class website. I used to include the solutions in the back of the unit packet. This is an improvement on that, but it also requires students to take an extra step. Students must check their thinking, assess themselves against the solutions, and indicate next to each problem whether or not they arrived at the solution.


bp

The day after


I don’t like review days before exams. I’d much rather spend that day after an exam analyzing mistakes and relearning. I find this to be crucial in promoting a growth mindset in my students. My struggle has been how to structure these post-exam days. Here’s a formative assessment idea that I’ve used a few times this year.

The day after an exam, I set up the room in 3-5 stations. Each serves as a place to study a particular concept that was on the exam.

My bell ringer asks students to check their exam performance on the bulletin board in the back of the room. It lets them know for which concepts they earned proficiency. I also email the kids their performance immediately after assessing the exams, but many don’t check.

I hand back the exams and they move to a concept that they need help with based on their performance. If they have earned credit for every concept on the exam then I ask them to float and help others. At each station they use notes, each other, and the feedback I provided on the exam to analyze and learn from their mistakes. I also have practice problems at each station so they can make sure they understand the concept. I float around the room and help. Of course, the SBG data allows me to sit with students who need me most.

After a student feels they have successfully relearned a concept, and I usually check in to confirm, they can retake that concept. The retakes are in folders in the corner – students grab one and do it anywhere in the room. They submit it and begin working on another concept, if necessary. It doesn’t matter how many concepts a student retakes during the period, but it usually works out to be 1-2.

Before I did this activity, I was concerned that the stations would be full of students who struggled on a concept, and that they would all sit together and get nowhere. This hasn’t been the case. The kids are diligent about relearning. This may be because they like retaking exams and earning proficiency during class time, as I usually make them come after school to do this. It helps that the relearning is targeted and individualized for each student. Plus, it’s all formative. They go wherever they feel they need to. They assess themselves, but use one another in the process.

It can look and feel chaotic. But that’s the point. Improvement is messy. It’s also amazing – especially when it happens amongst your students.
bp

Knowledge Audits

How can I know what my kids know?

I’ve been asking myself that question for a long time. In my Regents-obsessed school, I’m forced to ensure my students can perform well on end-of-year state exams. The typical learning flow in my class usually looks like this:

  1. Student learns X.
  2. Student demonstrates understanding of X.
  3. Student learns Y and forgets X.
  4. Student demonstrates understanding of Y and has no idea what X is.

Compile this over the course of a school year and you have students that understand nothing other than what they just learned. What does this mean for a comprehensive standardized exam? Disaster!

Sure, a lot of this has to do with pacing and students not diving deep into things they learn to make connections. That is a sad reality of too many teachers, including me. So given these constraints, how can I help kids build long-lasting understanding of things they learn and not forget everything other than what we’re doing right now?

In the past, I’ve “spiraled” homework and even put review questions on exams, but this never helped. There was no system to it and I never followed up. This year, I’m lagging both homework and exams, which does seem to be making a difference. But with the ginormous number of standards that students are supposed to learn each year, I still feel this isn’t enough.

So, last week I began implementing Audits. These are exams that do not assess concepts from the current unit. The plan is to administer about one a month and because I lag my unit exams, I should have no trouble fitting them into the regular flow of things.

I’m choosing not to call them “Review Exams” or some other straightforward name in order to put a fresh spin on them and increase buy in. So far, so good.

The hope is to continually and systematically revisit older content to keep students actively recalling these standards. This should reinforce their learning and help to make it stick. On the teacher side of things, I get an updated snapshot of where they are and can plan accordingly. The SBG aspect is simple: the results from the Audit supersede any previous level of understanding.

  • If a student has not previously earned proficiency on a standard that is assessed on an Audit, they can earn proficiency. This alleviates the need for them to retest on their own.
  • If a student has previously earned proficiency on a standard, they must earn proficiency again or else lose credit for that standard. Losing credit would then require them to retest.
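To make the supersede rule concrete, here’s a minimal sketch in Python (hypothetical code, not anything I actually run; the standard codes are just example names):

```python
# Hypothetical sketch of the Audit supersede rule: whatever a student earns
# on the Audit overwrites any previously recorded level of understanding.

def apply_audit(gradebook, audit_results):
    """gradebook and audit_results map standard -> 0 (no credit) or 1 (credit)."""
    for standard, score in audit_results.items():
        gradebook[standard] = score  # the Audit result supersedes the old one
    return gradebook

gradebook = {"A.REI.4": 1, "F.IF.7": 0, "G.GPE.5": 1}
audit = {"A.REI.4": 0, "F.IF.7": 1}  # one standard lost, one earned
apply_audit(gradebook, audit)
# gradebook is now {"A.REI.4": 0, "F.IF.7": 1, "G.GPE.5": 1}
```

Note that standards not on the Audit (like G.GPE.5 here) simply keep their old status.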

The first Audit resulted in a mix of students earning credit and losing credit for a set of standards. It was great. The proof is in the pudding. Knowledge isn’t static and my assessment practices must reflect this.


bp

Internalizing feedback without seeing it

Feedback on an exam, in Spanish

I’ve found that students all too often overlook, or simply ignore, the feedback I give them on assessments. For whatever reason they just don’t use it. This is a problem.

I value reassessment and see feedback as crucial in that process. If a student struggles on an exam, I need them to learn from their misconceptions. My feedback reflects this. I’ve always provided fairly detailed feedback, but this year I’ve stepped it up significantly. In fact, all I do now is give feedback. I provide no scores or grades on exams. This helps, but I still need to find ways for students to grow from the feedback they receive.

I have experimented with kids relearning concepts the day after an exam without seeing their actual exam. The day after the exam, I give a copy of the SBG results to each group. Each student uses the data to identify the specific concepts that they need to relearn or review. The groups are a mix of proficiency levels (based on the exam results) so if a student needs help with a particular standard, there’s someone in their group that understands it and can help them. I also give them blank copies of the exam to work on and discuss.

After about 15-20 minutes of peer tutoring, I give them their exams back. Based on their newfound understanding, at least some of their misconceptions should be alleviated. They now spend 15-20 minutes correcting their mistakes on a separate sheet of paper while directly responding to the feedback I’ve given them on the exam.

Ideally, this means that they are using feedback from their peers to understand and respond to the feedback I’ve given them. It serves as relearning/remediation before they retake the exam. What I’m missing, though, is a reflection piece that ties into the feedback as well.

A colleague conjured up a different spin on this. After an exam, he informs students which standards they didn’t earn proficiency on. (He doesn’t hand back their actual exams either.) He allows one week (more or less) of relearning/remediation on those standards, either on their own or with him. He actually uses an online resource for this. Then, when they feel ready to retake, he returns their exams and asks them to self-assess and correct their original mistakes. If they can, he allows them to retake. If not, they continue relearning. It may not focus on feedback, but I like this.

Closing thoughts: what if I do get my students to internalize my feedback? Are they just going to be doing it to satisfy the requirements that I ask of them? When they leave my classroom, will they view feedback as a necessary component of success? Will my feedback really make a difference? How else could I get them to value it?


bp

SBG updates

I’ve made some tweaks to my standards-based grading.

Last year I used a common four-point scale for each standard/concept. There are tons of other teachers using this structure, but it just didn’t have an impact on learning in my room. Two problems: I didn’t use the scale, and my system for it was too complex.

With the 1-4 scale, I found myself most concerned with students earning at least a 3 (proficient) on each standard. If they did, they earned “credit” for the standard. To calculate their final content grade, I divided the number of standards they earned credit on by the total number of standards assessed.

SBG Fraction
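As a rough sketch, the calculation behind that fraction looks something like this (hypothetical Python, not my actual spreadsheet formula):

```python
def content_grade(credits):
    """credits maps each assessed standard to True (earned credit, i.e. at
    least a 3 on the four-point scale) or False (no credit).
    Returns standards earned / standards assessed."""
    return sum(credits.values()) / len(credits)

# A student with credit on 3 of the 4 standards assessed so far:
print(content_grade({"s1": True, "s2": False, "s3": True, "s4": True}))  # 0.75
```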

My SBG tracker (an excel spreadsheet) used the four-point scale, but because of how I calculated their final letter grade, my actual gradebook incorporated a two-point scale: 0 (no credit) or 1 (credit). This means that I was entering students’ progress twice: once for the SBG tracker and once for my actual gradebook.

Add to this the tedious process of converting multiple choice responses (from scanned sheets) to scaled scores and averaging them with free response scores, and my SBG was, well, daunting. Not to mention overly cumbersome.

SBG screenshot

I didn’t think about all this last year because I was primarily concerned with implementing SBG for the first time. I wanted it to be sound. I wanted it to be thorough. It was both of these things, but it was also far more complex than I needed it to be. I spent so much time implementing the system that I barely made use of all the SBG data I was collecting. I never strategized around my SBG data. I never harnessed it to better my students’ understanding of the concepts we studied. SBG is meaningless if teachers, students, and their parents don’t actively interact with, and grow from, its product.

This was reiterated this fall when a colleague new to SBG looked at my old SBG spreadsheets from last year and gasped in trepidation. I had already adapted my structure at that point, but his reaction reassured me that sometimes less is more. (He’s also uber-efficient – which subconsciously pushed me to create a more competent SBG system. Thanks R!)

With all that said, I’m no longer using a four-point scale. I’m now on a 0-or-1 system. You got it or you don’t. If a solution has no more than two computational errors and no conceptual errors, it’s a 1. If it has one or more conceptual errors, it’s a 0. I’m using this for both my tracker and gradebook. Plus, I’m using Google Sheets now instead of Excel. I finally get to email SBG progress reports to both students and parents.
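The scoring rule is simple enough to write down as a few lines of Python (just an illustration of the rule, not something I automate; I’m treating more than two computational errors as a 0, which the rule implies):

```python
def score_standard(conceptual_errors, computational_errors):
    """0-or-1 scale: any conceptual error means a 0; otherwise a solution
    can carry up to two computational errors and still earn a 1."""
    if conceptual_errors >= 1:
        return 0
    # No conceptual errors: allow at most two computational slips.
    return 1 if computational_errors <= 2 else 0

print(score_standard(conceptual_errors=0, computational_errors=2))  # 1
print(score_standard(conceptual_errors=1, computational_errors=0))  # 0
```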

New SBG Screenshot

I know this all-or-nothing scale eliminates the possibility of measuring those in-between areas, but by definition SBG provides a highly precise way of gauging student understanding since I’m measuring against individual standards. To me, it’s worth the slight sacrifice in precision if there’s more time and effort to act upon the results. And besides, how significant is the difference between a 2 and a 2.5? Or even a 1 and a 2.5? Either way the student has not attained proficiency, which is the ultimate goal.

Since my course terminates in a high-stakes standardized exam, unit exams are my primary means of measuring attainment of standards. My exams are short (not new). There are at most two questions related to any given standard (also not new). This makes it even simpler to average out final scores using 0s and 1s. And since I’m providing answers to every question, I’m not scanning multiple-choice responses and don’t need to do any crazy data manipulation to convert scores. I only grade work and explanations now, so after I examine the entire exam, I determine whether each standard is a 0 or 1 and record it.

Next steps?

  • I have started, but I must continue, to use the SBG data in effective ways (reteaching, flexible grouping, etc.).
  • I must be steadfast in getting students (and their parents) accustomed to retake exams. More importantly, they must learn to value retakes as a means of growth.
  • There is now another teacher in my department using SBG. This will be a great resource to help make each other’s system better. Plus, now I can have regular conversations with someone about SBG face-to-face. Invaluable.
  • Get students to take ownership of their results. Part of this will come from retakes and self-tracking, but another piece is dissecting their SBG grades in terms of computational and conceptual errors.