Knowledge Audits

Audit 1

How can I know what my kids know?

I’ve been asking myself that question for a long time. In my Regents-obsessed school, I’m forced to ensure my students can perform well on end-of-year state exams. The typical learning flow in my class usually looks like this:

  1. Student learns X.
  2. Student demonstrates understanding of X.
  3. Student learns Y and forgets X.
  4. Student demonstrates understanding of Y and has no idea what X is.

Compile this over the course of a school year and you have students who understand nothing other than what they just learned. What does this mean for a comprehensive standardized exam? Disaster!

Sure, a lot of this has to do with pacing and with students not diving deeply into what they learn to make connections. That’s a sad reality for too many teachers, me included. So given these constraints, how can I help kids build long-lasting understanding of what they learn instead of forgetting everything other than what we’re doing right now?

In the past, I’ve “spiraled” homework and even put review questions on exams, but this never helped. There was no system to it and I never followed up. This year, I’m lagging both homework and exams, which does seem to be making a difference. But with the ginormous number of standards that students are supposed to learn each year, I still feel this isn’t enough.

So, last week I began implementing Audits. These are exams that do not assess concepts from the current unit. The plan is to administer about one a month, and because I lag my unit exams, I should have no trouble fitting them into the regular flow of things.

I’m choosing not to call them “Review Exams” or some other straightforward name in order to put a fresh spin on them and increase buy-in. So far, so good.

The hope is to continually and systematically revisit older content to keep students actively recalling these standards. This should reinforce their learning and help to make it stick. On the teacher side of things, I get an updated snapshot of where they are and can plan accordingly. The SBG aspect is simple: the results from the Audit supersede any previous level of understanding.

  • If a student has not previously earned proficiency on a standard assessed on an Audit, they can earn proficiency. This eliminates the need for them to retest on their own.
  • If a student has previously earned proficiency on a standard, they must earn proficiency again or else lose credit for that standard. This would then require them to retest.
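The supersede rule above is simple enough to sketch in a few lines of Python. This is just my illustration of the logic, not the author’s actual tracker; the dictionary format and function name are assumptions.

```python
def apply_audit(gradebook, audit_results):
    """Merge Audit results into a gradebook, letting them supersede
    any previous level of understanding.

    Both arguments map a standard's name to 1 (proficient) or 0 (not).
    This is an illustrative sketch, not the author's real system.
    """
    updated = dict(gradebook)
    for standard, level in audit_results.items():
        # The Audit result wins either way: a student can newly earn
        # credit, or lose credit they previously held.
        updated[standard] = level
    return updated
```

For example, a student who previously held credit on one standard but missed it on the Audit loses that credit, while a standard newly demonstrated on the Audit earns credit.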

The first Audit resulted in a mix of students earning credit and losing credit for a set of standards. It was great. The proof is in the pudding. Knowledge isn’t static and my assessment practices must reflect this.


bp

Internalizing feedback without seeing it

Feedback on exam in Spanish

I’ve found that students all too often overlook, or simply ignore, the feedback I give them on assessments. For whatever reason, they just don’t use it. This is a problem.

I value reassessment and see feedback as crucial in that process. If a student struggles on an exam, I need them to learn from their misconceptions. My feedback reflects this. I’ve always provided fairly detailed feedback, but this year I’ve stepped it up significantly. In fact, all I do now is give feedback. I provide no scores or grades on exams. This helps, but I still need to find ways for students to grow from the feedback they receive.

I have experimented with kids relearning concepts the day after an exam without seeing their actual exam. The day after the exam, I give a copy of the SBG results to each group. Each student uses the data to identify the specific concepts that they need to relearn or review. The groups are a mix of proficiency levels (based on the exam results), so if a student needs help with a particular standard, there’s someone in their group who understands it and can help them. I also give them blank copies of the exam to work on and discuss.

After about 15-20 minutes of peer tutoring, I give them their exams back. Based on their newfound understanding, at least some of their misconceptions should be cleared up. They now spend 15-20 minutes correcting their mistakes on a separate sheet of paper while directly responding to the feedback I’ve given them on the exam.

Ideally, this means that they are using feedback from their peers to understand and respond to the feedback I’ve given them. It serves as relearning/remediation before they retake the exam. What I’m missing, though, is a reflection piece that ties into the feedback as well.

A colleague conjured up a different spin on this. After an exam, he informs students which standards they didn’t earn proficiency on. (He doesn’t hand back their actual exam either.) He allows one week (more or less) of relearning/remediation on those standards – either on their own or with him. He actually uses an online resource for this. Then, when they feel ready to retake, he returns their exam and asks them to self-assess and correct their original mistakes. If they can, he allows them to retake it. If not, they continue relearning. It may not focus on feedback, but I like this.

Closing thoughts: what if I do get my students to internalize my feedback? Are they just going to be doing it to satisfy the requirements that I ask of them? When they leave my classroom, will they view feedback as a necessary component of success? Will my feedback really make a difference? How else could I get them to value it?

 

bp

SBG updates

I’ve made some tweaks to my standards-based grading.

Last year I used a common four-point scale for each standard/concept. There are tons of other teachers using this structure, but it just didn’t have an impact on learning in my room. Two problems: I didn’t use the scale, and my system for it was too complex.

With the 1-4 scale, I found myself most concerned with students earning at least a 3 (proficient) on each standard. If they did, they earned “credit” for the standard. To calculate their final content grade, I divided the number of standards they earned credit on by the total number of standards assessed.

SBG Fraction
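The final-grade calculation described above is just a fraction, which can be sketched in Python. The function name and the sample numbers are mine, chosen only for illustration.

```python
def final_content_grade(standards_with_credit, standards_assessed):
    """Final content grade: the fraction of assessed standards on which
    the student earned credit (i.e., at least a 3, "proficient", on the
    old four-point scale). Illustrative sketch, not the author's spreadsheet.
    """
    return standards_with_credit / standards_assessed

# Hypothetical example: credit on 14 of 20 assessed standards -> 0.7 (70%)
grade = final_content_grade(14, 20)
```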

My SBG tracker (an excel spreadsheet) used the four-point scale, but because of how I calculated their final letter grade, my actual gradebook incorporated a two-point scale: 0 (no credit) or 1 (credit). This means that I was entering students’ progress twice: once for the SBG tracker and once for my actual gradebook.

Add to this the tedious process of converting multiple choice responses (from scanned sheets) to scaled scores and averaging them with free response scores, and my SBG was, well, daunting. Not to mention overly cumbersome.

SBG screenshot

I didn’t think about all this last year because I was primarily concerned with implementing SBG for the first time. I wanted it to be sound. I wanted it to be thorough. It was both of these things, but it was also far more complex than I needed it to be. I spent so much time implementing the system that I barely made use of all the SBG data I was collecting. I never strategized around my SBG data. I never harnessed it to better my students’ understanding of the concepts we studied. SBG is meaningless if teachers, students, and their parents don’t actively interact with, and grow from, its product.

This was reiterated this fall when a colleague new to SBG looked at my old SBG spreadsheets from last year and gasped in trepidation. I had already adapted my structure at that point, but his reaction reassured me that sometimes less is more. (He’s also uber-efficient – which subconsciously pushed me to create a more competent SBG system. Thanks R!)

With all that said, I’m no longer using a four-point scale. I’m now on a 0 or 1 system. You got it or you don’t. If there are no more than two computational errors and no conceptual errors in the solution, it’s a 1. If there are one or more conceptual errors in the solution, it’s a 0. I’m using this for both my tracker and gradebook. Plus, I’m using Google Sheets now instead of Excel. I finally get to email SBG progress reports to both students and parents.
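The 0-or-1 rule above can be written as a tiny decision function. A minimal sketch, assuming we already have per-solution counts of conceptual and computational errors (the function name is my own):

```python
def score_standard(conceptual_errors, computational_errors):
    """Score a standard on the all-or-nothing scale:
    1 if the solution has no conceptual errors and at most two
    computational errors; otherwise 0. Sketch of the rule as stated,
    not the author's actual grading code.
    """
    if conceptual_errors >= 1:
        return 0  # any conceptual error means no credit
    return 1 if computational_errors <= 2 else 0
```

One design note: the rule is conceptual-error-first, so even a solution with zero computational errors earns a 0 if a single conceptual error appears.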

New SBG Screenshot

I know this all-or-nothing scale eliminates the possibility of measuring those in-between areas, but by definition SBG provides a highly precise way of gauging student understanding since I’m measuring against individual standards. To me, it’s worth the slight sacrifice in precision if there’s more time and effort to act upon the results. And besides, how significant is the difference between a 2 and a 2.5? Or even a 1 and a 2.5? Either way, the student has not attained proficiency, which is the ultimate goal.

Since my course terminates in a high-stakes standardized exam, unit exams are my primary means of measuring attainment of standards. My exams are short (not new). There are at most two questions related to any given standard (also not new). So this makes it even simpler to average out final scores using 0s and 1s. And since I’m providing answers to every question, I’m not scanning multiple-choice questions and don’t need to do any convoluted data manipulation to convert scores. I only grade work and explanations now, so after I examine the entire exam I determine whether each standard is a 0 or 1 and record it.

Next steps?

  • I have started, but I must continue, to use the SBG data in effective ways (reteaching, flexible grouping, etc.).
  • I must be steadfast in getting students (and their parents) accustomed to retake exams. More importantly, they must learn to value retakes as a means of growth.
  • There is now another teacher in my department using SBG. This will be a great resource to help make each other’s system better. Plus, now I can have regular conversations with someone about SBG face-to-face. Invaluable.
  • Get students to take ownership of their results. Part of this will come from retakes and self-tracking, but another piece is dissecting their SBG grades in terms of computational and conceptual errors.

Exams: tools for feedback, answers provided, and lagged

I’ve made three fundamental changes to my unit exams this year.

Part One: Exams as tools for feedback

After a student gets an exam back, what’s the first thing they notice? Easy: their overall score. That’s why I’m not putting scores on exams.

All too often a student sees their score and, especially if it’s low or mediocre, views the score as the only thing that matters. Even with standards-based grading, sometimes students will get caught up in whether they’ve earned proficiency on a particular concept (which isn’t a bad thing). What they forget about is how to correct mistakes and improve based on how they did. This attitude is more present in struggling learners than it is in high achievers, but it is present throughout.

This is why this year I’ve decided to not put scores or grades on exams. I am only putting feedback. That feedback comes in the form of highlighting incorrect work, asking clarifying questions, inserting direct how-to, and cheers for correct responses. Never will my students find an overwhelming (or underwhelming) score on their exam. When they critique their performance, I want them to focus on their work – not lament their grade. My next challenge is to get them to actually internalize and grow from the feedback.

Part Two: Exams that focus on why

On exams, I’m providing the answer to every question.

I know this is ridiculous and unheard of, but here’s my thing: I want to build a classroom culture that hinges on questions, not answers. In fact, I fear my kids being answer-driven. I want students to focus on the how and why rather than the what. In addition to simply talking to them and encouraging this frame of mind on an ongoing basis, I wanted to add a structural aspect that can help accomplish this. Providing every answer is what I came up with.

I know this doesn’t simulate standardized exams outside of my room and is fairly impractical, but I hope that I’m helping them see the bigger picture. Besides, I already include answers in classwork and homework assignments, so I figured why not exams too?

Part Three: Exams that lag

After reading much about the power of lagging homework from the MTBoS, this summer I decided to incorporate it. In addition, I’ve decided to lag my unit exams.

It just makes sense to lag both. In fact, when I made the choice to lag my homework, I found lagging unit exams to be a direct corollary. Summative assessments (e.g. exams) should always align with what and how I teach. If I lag homework and 80% of what students are doing every night focuses on review content, how can I administer an exam of 100% new content?

This all may backfire completely. But at least then I’ll be able to add them to the extensively long list of things that I’ve failed at implementing.



bp