SBG updates

I’ve made some tweaks to my standards-based grading.

Last year I used a common four-point scale for each standard/concept. Tons of other teachers use this structure, but it just didn’t have an impact on learning in my room. Two problems: I didn’t actually use the scale, and my system for it was too complex.

With the 1-4 scale, I found myself most concerned with students earning at least a 3 (proficient) on each standard. If they did, they earned “credit” for the standard. To calculate their final content grade, I divided the number of standards they earned credit on by the total number of standards assessed.
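A minimal sketch of that old calculation (the standard names, scores, and the `cutoff` parameter are hypothetical, just to illustrate the arithmetic):

```python
# Sketch of last year's final-grade calculation: a student earns credit on a
# standard by scoring at least 3 (proficient) on the four-point scale, and the
# final content grade is credits earned divided by total standards assessed.
# The standards and scores below are made up for illustration.

def final_content_grade(scores, cutoff=3):
    """scores maps each standard to its 1-4 score."""
    credits = sum(1 for s in scores.values() if s >= cutoff)
    return credits / len(scores)

scores = {"linear equations": 4, "factoring": 3, "exponents": 2, "radicals": 1}
print(final_content_grade(scores))  # 2 of 4 standards earn credit -> 0.5
```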

My SBG tracker (an Excel spreadsheet) used the four-point scale, but because of how I calculated their final letter grade, my actual gradebook incorporated a two-point scale: 0 (no credit) or 1 (credit). This means that I was entering students’ progress twice: once for the SBG tracker and once for my actual gradebook.

Add to this the tedious process of converting multiple choice responses (from scanned sheets) to scaled scores and averaging them with free response scores, and my SBG was, well, daunting. Not to mention overly cumbersome.

I didn’t think about all this last year because I was primarily concerned with implementing SBG for the first time. I wanted it to be sound. I wanted it to be thorough. It was both of these things, but it was also far more complex than I needed it to be. I spent so much time implementing the system that I barely made use of all the SBG data I was collecting. I never strategized around my SBG data. I never harnessed it to better my students’ understanding of the concepts we studied. SBG is meaningless if teachers, students, and their parents don’t actively interact with, and grow from, its product.

This was reiterated this fall when a colleague new to SBG looked at my old SBG spreadsheets from last year and gasped in trepidation. I had already adapted my structure at that point, but his reaction reassured me that sometimes less is more. (He’s also uber-efficient – which subconsciously pushed me to create a more competent SBG system. Thanks R!)

With all that said, I’m no longer using a four-point scale. I’m now on a 0-or-1 system: you’ve got it or you don’t. If a solution has no conceptual errors and no more than two computational errors, it’s a 1. If it has one or more conceptual errors, it’s a 0. I’m using this scale for both my tracker and my gradebook. Plus, I’m now using Google Sheets instead of Excel, so I finally get to email SBG progress reports to both students and parents.
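The new rule is simple enough to state as a function. This is just a hypothetical sketch: the error counts come from hand-grading the written work, and I’m assuming more than two computational errors also means no credit, since only the two explicit cases are spelled out above.

```python
def standard_score(computational_errors, conceptual_errors):
    """0/1 scale: a 1 only if the solution has no conceptual errors
    and no more than two computational errors; otherwise a 0."""
    if conceptual_errors >= 1:
        return 0
    return 1 if computational_errors <= 2 else 0

print(standard_score(2, 0))  # 1: two computational slips, concepts sound
print(standard_score(0, 1))  # 0: any conceptual error means no credit
```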

I know this all-or-nothing scale eliminates the possibility of measuring those in-between areas, but SBG already provides a precise way of gauging student understanding since I’m measuring against individual standards. To me, it’s worth the slight sacrifice in precision if it leaves more time and energy to act on the results. Besides, how significant is the difference between a 2 and a 2.5? Or even between a 1 and a 2.5? Either way the student has not attained proficiency, which is the ultimate goal.

Since my course terminates in a high-stakes standardized exam, unit exams are my primary means of measuring attainment of standards. My exams are short (not new), with at most two questions tied to any given standard (also not new). That makes it even simpler to average out final scores using 0s and 1s. And since I’m providing the answer to every question, I’m no longer scanning multiple-choice sheets or wrangling the data to convert scores. I only grade work and explanations now: after I examine the entire exam, I decide whether each standard is a 0 or a 1 and record it.

Next steps?

  • I have started, but must continue, to use the SBG data in effective ways (reteaching, flexible grouping, etc.).
  • I must be steadfast in getting students (and their parents) accustomed to retaking exams. More importantly, they must learn to value retakes as a means of growth.
  • There is now another teacher in my department using SBG. We’ll be a great resource for each other, helping to make each other’s systems better. Plus, now I can have regular face-to-face conversations with someone about SBG. Invaluable.
  • Get students to take ownership of their results. Part of this will come from retakes and self-tracking, but another piece is dissecting their SBG grades in terms of computational and conceptual errors.

Exams: tools for feedback, answers provided, and lagged

I’ve made three fundamental changes to my unit exams this year.

Part One: Exams as tools for feedback

After a student gets an exam back, what’s the first thing they notice? Easy: their overall score. That’s why I’m not putting scores on exams.

All too often a student sees their score and, especially if it’s low or mediocre, views the score as the only thing that matters. Even with standards-based grading, students sometimes get caught up in whether they’ve earned proficiency on a particular concept (which isn’t a bad thing). What they forget is how to correct mistakes and improve based on how they did. This attitude is more common among struggling learners than high achievers, but it’s present throughout.

This is why this year I’ve decided not to put scores or grades on exams, only feedback. That feedback comes in the form of highlighting incorrect work, asking clarifying questions, inserting direct how-tos, and cheering correct responses. Never will my students find an overwhelming (or underwhelming) score on their exam. When they critique their performance, I want them to focus on their work, not lament their grade. My next challenge is to get them to actually internalize and grow from the feedback.

Part Two: Exams that focus on why

On exams, I’m providing the answer to every question.

I know this is ridiculous and unheard of, but here’s my thing: I want to build a classroom culture that hinges on questions, not answers. In fact, I fear my kids being answer-driven. I want students to focus on the how and why rather than the what. In addition to simply talking to them and encouraging this frame of mind on an ongoing basis, I wanted to add a structural aspect that can help accomplish this. Providing every answer is what I came up with.

I know this doesn’t simulate the standardized exams outside of my room and is fairly impractical, but I hope I’m helping them see the bigger picture. Besides, I already include answers in classwork and homework assignments, so I figured, why not exams too?

Part Three: Exams that lag

After reading so much from the MTBoS about the power of lagging homework, this summer I decided to incorporate it. I’ve also decided to lag my unit exams.

It just makes sense to lag both. In fact, once I made the choice to lag my homework, lagging unit exams felt like a direct corollary. Summative assessments (e.g. exams) should always align with what and how I teach. If I lag homework and 80% of what students do every night is review content, how can I administer an exam that’s 100% new content?
This all may backfire completely. But at least then I’ll be able to add it to the long list of things I’ve failed at implementing.


Improving my questioning

I’ve realized that it has become a goal of mine to improve my questioning. Here’s some of what I’ve been pondering (and doing) as of late.

1. Asking “what if…” questions. This usually comes into play after we finish a problem. I try to change the conceptual nature of the problem, which provokes students to examine relationships and see the problem in a new context. I also really like giving students a minute or two to generate their own “what if…” questions about a problem after we’ve found a solution.

2. Asking students to find errors within student work samples. I really started focusing on this last year with my exit tickets, but now I’m doing it just about every class. I usually pick up someone’s paper and slide it under the document camera for the class to assess. Quick, easy, authentic. Plus, it creates a culture of identifying and accepting mistakes on a regular basis.

3. I’ve also begun asking students to identify potential errors within problems before examining any sample work. The result is always rich classroom discussion over creatively wrong solutions. The goal is for them to identify both subtle and more serious mistakes that could occur.

4. Having students construct their own (good) questions. I really need to get better at this. I’ve had some success in the past, but usually when I least expect it. I’m thinking of looking into the RQI to find some useful strategies.

5. The other day, out of the blue, I posed a “convince me” statement to a student during a class discussion. We were factoring and I proposed a (wrong) solution to him. I essentially asked “why is my solution wrong?”, but in a way that felt more like a challenge than a question. I felt the power when I uttered it. It probably bled through from a workshop by Chris Luzniak a couple years ago on using debate in math class. He has great stuff.

6. Using questions as a foundation of my class. I want my classroom culture to be one that emphasizes the why behind the answer instead of the answer itself. As a math teacher, I’ve always emphasized showing work and how critical it is. But I’ve never lived out that creed in how I teach my kids. I’m trying to change that this year. More to come on this.

7. If student X makes a statement about something we’re studying, I’ll sometimes turn to student Y and ask them to “interpret what X just said…”

8. During an intervisitation, the teacher I was visiting posed a question to the class and no one responded or seemed to have a clue. He said “Alright, take 30 seconds and brainstorm with a neighbor about the question.” He waited and asked the question again and there were several responses. This was awesome.

9. The questioning doesn’t begin and end while I’m teaching. I’ve started questioning more of what I plan and structure for my students, including things that I’ve done for years. I’ve put my teaching philosophy under a microscope too. It’s changing. This will have repercussions far greater than any question I could ever pose to a student.



Starting the year with letters


Last year I really started getting into writing more with my students. (This probably started because of my first year of blogging.) Specifically, I did Friday letters and notes that I wrote students while they took exams. I also had my students write themselves a letter mid-year that I held onto and gave back to them at the end of the school year. 

To culminate all this writing, on the last day of school I had each student write me a letter that I didn’t open until the first day of classes this year. I asked them to give me some inspiration for the new school year, as well as simply capture the moment at the end of a long, hard-fought school year. I locked the letters away for the summer.

When I opened my closet a few weeks ago upon my return to school, the letters were staring directly at me. I strategically placed them in front of all my crap so I wouldn’t forget. 

What I read convinced me that I have to do this again in June. Some letters provided fresh perspective and advice on how to teach more effectively; there was some really good advice, like being tougher and expecting more. Others served as reminders of exactly why I became a teacher. One kid informed me that I was head and shoulders their favorite teacher of all time. Others proved more serious, like the one that shared insight into the world of living with divorced parents.

They were heartfelt, real, and unadulterated. The letters allowed me to reconnect, at least in spirit, with those kids and all we experienced in room 516. I learned a lot too. They were exactly what I needed to start the year. 
