SBG updates

I’ve made some tweaks to my standards-based grading.

Last year I used a common four-point scale for each standard/concept. There are tons of other teachers using this structure, but it just didn’t have an impact on learning in my room. Two problems: I didn’t actually use the scale, and my system for it was too complex.

With the 1-4 scale, I found myself most concerned with students earning at least a 3 (proficient) on each standard. If they did, they earned “credit” for the standard. To calculate their final content grade, I divided the number of standards they earned credit on by the total number of standards assessed.
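
In spreadsheet terms, that final grade boiled down to a simple ratio. Here’s a rough sketch of the calculation (the function name and sample data are made up for illustration, not pulled from my actual tracker):

```python
# Sketch of last year's content-grade arithmetic: credited standards / total standards assessed.
def content_grade(scores_by_standard, proficiency_cutoff=3):
    """scores_by_standard maps each standard to its 1-4 score; credit means scoring at or above the cutoff."""
    credited = sum(1 for score in scores_by_standard.values() if score >= proficiency_cutoff)
    return credited / len(scores_by_standard)

# Proficient (3 or 4) on 7 of 10 standards -> 0.70
print(content_grade({"std-1": 4, "std-2": 3, "std-3": 3, "std-4": 2, "std-5": 3,
                     "std-6": 1, "std-7": 4, "std-8": 3, "std-9": 2, "std-10": 3}))
```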

My SBG tracker (an Excel spreadsheet) used the four-point scale, but because of how I calculated their final letter grade, my actual gradebook incorporated a two-point scale: 0 (no credit) or 1 (credit). This meant I was entering students’ progress twice: once in the SBG tracker and once in my actual gradebook.

Add to this the tedious process of converting multiple choice responses (from scanned sheets) to scaled scores and averaging them with free response scores, and my SBG was, well, daunting. Not to mention overly cumbersome.

I didn’t think about all this last year because I was primarily concerned with implementing SBG for the first time. I wanted it to be sound. I wanted it to be thorough. It was both of those things, but it was also far more complex than I needed it to be. I spent so much time implementing the system that I barely made use of all the SBG data I was collecting. I never strategized around my SBG data. I never harnessed it to better my students’ understanding of the concepts we studied. SBG is meaningless if teachers, students, and their parents don’t actively interact with, and grow from, its product.

This was reiterated this fall when a colleague new to SBG looked at my old SBG spreadsheets from last year and gasped in trepidation. I had already adapted my structure at that point, but his reaction reassured me that sometimes less is more. (He’s also uber-efficient – which subconsciously pushed me to create a more competent SBG system. Thanks R!)

With all that said, I’m no longer using a four-point scale. I’m now on a 0-or-1 system. You’ve got it or you don’t. If there are no more than two computational errors and no conceptual errors in the solution, it’s a 1. If there are one or more conceptual errors in the solution, it’s a 0. I’m using this for both my tracker and my gradebook. Plus, I’m using Google Sheets now instead of Excel, so I finally get to email SBG progress reports to both students and parents.
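
In code terms, the per-standard call is something like this minimal sketch (the function is my own illustration, and it reads "more than two computational errors" as no credit as well):

```python
# Sketch of the 0-or-1 decision for a single standard.
def standard_score(conceptual_errors, computational_errors):
    """Credit (1) only if the solution has no conceptual errors and at most two computational errors."""
    if conceptual_errors == 0 and computational_errors <= 2:
        return 1
    return 0

print(standard_score(conceptual_errors=0, computational_errors=1))  # minor slip -> 1 (credit)
print(standard_score(conceptual_errors=1, computational_errors=0))  # conceptual error -> 0 (no credit)
```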

I know this all-or-nothing scale eliminates the possibility of measuring those in-between areas, but by definition SBG already provides a precise way of gauging student understanding since I’m measuring against individual standards. To me, the slight sacrifice in precision is worth it if it frees up time and effort to act on the results. And besides, how significant is the difference between a 2 and a 2.5? Or even between a 1 and a 2.5? Either way, the student has not reached proficiency, which is the ultimate goal.

Since my course terminates in a high-stakes standardized exam, unit exams are my primary means of measuring attainment of standards. My exams are short (not new), and there are at most two questions related to any given standard (also not new), which makes it even simpler to average out final scores using 0s and 1s. And since I’m providing the answers to every question, I’m not scanning multiple-choice questions and don’t need to wrangle the data to convert scores. I only grade work and explanations now, so after I examine the entire exam I determine whether each standard is a 0 or a 1 and record it.

Next steps?

  • I have started, but I must continue to use the SBG data in effective ways (reteaching, flexible grouping, etc.).
  • I must be steadfast in getting students (and their parents) accustomed to retaking exams. More importantly, they must learn to value retakes as a means of growth.
  • There is now another teacher in my department using SBG. He’ll be a great resource as we make each other’s systems better. Plus, now I can have regular face-to-face conversations with someone about SBG. Invaluable.
  • Get students to take ownership of their results. Part of this will come from retakes and self-tracking, but another piece is dissecting their SBG grades in terms of computational and conceptual errors.

Exams: tools for feedback, answers provided, and lagged

I’ve made three fundamental changes to my unit exams this year.

Part One: Exams as tools for feedback

After a student gets an exam back, what’s the first thing they notice? Easy: their overall score. That’s why I’m not putting scores on exams.

All too often a student sees their score and, especially if it’s low or mediocre, views the score as the only thing that matters. Even with standards-based grading, students sometimes get caught up in whether they’ve earned proficiency on a particular concept (which isn’t a bad thing). What they forget is how to correct mistakes and improve based on how they did. This attitude is more present in struggling learners than in high achievers, but it is present throughout.

This is why I’ve decided not to put scores or grades on exams this year. I am only putting feedback. That feedback comes in the form of highlighting incorrect work, asking clarifying questions, inserting direct how-tos, and cheering correct responses. Never will my students find an overwhelming (or underwhelming) score on their exam. When they critique their performance, I want them to focus on their work – not lament their grade. My next challenge is to get them to actually internalize and grow from the feedback.

Part Two: Exams that focus on why

On exams, I’m providing the answer to every question.

I know this is ridiculous and unheard of, but here’s my thing: I want to build a classroom culture that hinges on questions, not answers. In fact, I fear my kids being answer-driven. I want students to focus on the how and why rather than the what. In addition to simply talking to them and encouraging this frame of mind on an ongoing basis, I wanted to add a structural element that could help accomplish this. Providing every answer is what I came up with.

I know this doesn’t simulate the standardized exams outside of my room and is fairly impractical, but I hope I’m helping them see the bigger picture. Besides, I already include answers on classwork and homework assignments, so I figured why not on exams too?

Part Three: Exams that lag

After reading much from the MTBOS about the power of lagging homework, I decided this summer to incorporate it. In addition, I’ve decided to lag my unit exams.

It just makes sense to lag both. In fact, when I made the choice to lag my homework, I found lagging unit exams to be a direct corollary. Summative assessments (e.g. exams) should always align with what and how I teach. If I lag homework and 80% of what students are doing every night focuses on review content, how can I administer an exam of 100% new content?

This all may backfire completely. But at least then I’ll be able to add them to the extensively long list of things that I’ve failed at implementing.

bp

Better feedback through structure?

When assessing, I do my absolute best to provide detailed remarks and comments on student papers. The problem I run into is doing this for every student. If it’s an exam I’m marking, I’ll usually pick and choose the length and depth of my feedback depending on the particular student and the work they displayed on the exam. This is fine because different students need varying feedback, but, looking back, I find that I shortchange some students.

If I chose to be meticulous with the work of every student, I’d spend an overwhelming amount of time assessing. So I don’t. The result is that some kids get feedback that is robust and thorough while others receive relatively minimal feedback. In addition, how I indicate a specific error may vary slightly from one exam to the next, which I think could be more unified. I also want a systematic approach that keeps feedback consistent from one student to the next. That way, when kids are analyzing and assessing work, there is uniformity among us all in how specific errors are indicated.

What I’ve thought about doing next year is using a set of abbreviations or symbols that would indicate certain errors. For lack of a better term, let’s call them “indicators.” I would use these indicators on exams and other assessments to highlight mistakes.

For example, if a student didn’t know they needed to factor on a given problem, I could indicate this by writing “FCT” next to the error, instead of writing an explanation or setting up the factoring for them. On the same problem, if another student attempted to factor, but committed a computational error in the process, I could write “FCT” with a circle around it. The subtlety of the circle would differentiate between the two errors.

Another simple example could be when a student commits a computational error involving addition, subtraction, multiplication, or division on any problem. Near the error I could indicate this by drawing a star, say. When a student sees a star, they will know to look for a computational error involving an operation to find their mistake.

Those are three pretty sad examples, but I can’t come up with others at the moment.

My goal would be for students to easily identify an error on an assessment by calling up the indicator. The indicators would be commonplace throughout the class and we’d build on them over time. I would create a key for all the indicators, post it in my classroom, and give them a copy. I could even include them in my lessons for reinforcement.

Since there are endless combinations of errors that can be made on any given problem, I couldn’t have an indicator for every possible error – only common ones or those that are conceptual in nature. These would form a “database” of errors that would be used throughout the year. For errors that don’t align with one of the indicators, I could combine the indicators with regular comments to clarify the mistake(s).
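
As a rough illustration, the key might start out as something like this (the codes and descriptions below are only placeholders built from the examples above):

```python
# A possible starting point for the indicator key.
INDICATOR_KEY = {
    "FCT": "needed to factor and didn't (conceptual error)",
    "FCT (circled)": "attempted to factor but slipped on the computation",
    "* (star)": "computational error with addition, subtraction, multiplication, or division nearby",
}

def lookup(code):
    # Errors without an indicator fall back to a regular written comment.
    return INDICATOR_KEY.get(code, "see the written comment next to the work")

print(lookup("FCT"))
```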

Using these indicators could allow me to quickly and easily provide precise, detailed, and consistent feedback to every student.

Based on the type of error, these indicators would also help students distinguish between SBG scores. For example, if a student gets an FCT indicator they may earn a score of 2 (a conceptual error), but if they get an FCT with a circle, they could earn a 3 (a computational error).

All these are just ideas at this point. There’s still a lot of work I need to do to actually implement a systematic approach to feedback. I don’t know if it’s feasible or even useful compared to my traditional feedback. But I do see the need to improve the qualitative nature and efficiency of the feedback given in my class – either by me or my students.

bp

P.S. Another non-traditional way to implement feedback would be to use different colored highlighters to represent the different types of errors. I remember John Scammell mentioning something about this during his formative assessment talk at TMC14.

Quick Key

To help me collect data, I’ve been using a tool for the last couple of months. It’s called Quick Key, and it quickly and easily collects responses to multiple-choice questions.

For a long, long time, my school utilized the Apperson Datalink scanner to aid in scoring the multiple-choice portions of exams. It not only scores exams quickly and efficiently, but its accompanying software provides insightful data analysis that I use to modify my teaching. On the downside, these machines are pricey (almost $1000) and require you to purchase proprietary scanning sheets that work only with their machine. Each department in my school had a machine.

Because of my push towards standards-based grading, I find myself giving smaller, bite-size assessments that target fewer concepts. Consequently, I am assessing more frequently and I need the scanning machine at least once a week. The machine was constantly changing hands and I was always running around the building trying to track it down.

I decided that I didn’t want to be a slave to the scanner – or its proprietary sheets. It’s not sustainable, especially when we have mobile technology that can perform the same task and provide similar results.

Enter Quick Key.

Quick Key has allowed me to score MC items and analyze my students’ responses in a much more convenient and cost-effective way. Like, free. Hello. You simply set up your classes, print out sheets, and start scanning with your mobile device. (You don’t even need to have wifi or cellular data when scanning.) The interface is pretty clean and easy to use. Plus, it was created and designed by a teacher. Props there too.

Data is synced between my phone and the web, which allows me to download CSV files to use with my standards-based grading spreadsheets.

My SBG tracking spreadsheet

That is the big Quick Key buy-in for me: exporting data for use with SBG. As I have mentioned before, SBG has completely changed my teaching and my approach to student learning. At some point, I hope to write in-depth about the specifics of this process and the structure I use.
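
As a rough sketch of what that export step looks like, something like the snippet below could fold a downloaded CSV into per-standard 0/1 scores. The file name, column names, question-to-standard mapping, and the "every linked item correct" rule are all assumptions for illustration, not Quick Key’s actual export format or my exact workflow.

```python
# Sketch: read an exported CSV of item scores and collapse it to per-standard 0/1 results.
import csv
from collections import defaultdict

QUESTION_TO_STANDARD = {"Q1": "A1", "Q2": "A1", "Q3": "A2"}  # hypothetical mapping

def per_standard_scores(path):
    by_student = defaultdict(lambda: defaultdict(list))  # student -> standard -> [0/1 marks]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for question, standard in QUESTION_TO_STANDARD.items():
                by_student[row["Student"]][standard].append(int(row[question]))
    # Collapse each standard to a single 0 or 1; requiring every linked item
    # to be correct is just one possible rule.
    return {student: {standard: min(marks) for standard, marks in standards.items()}
            for student, standards in by_student.items()}

# e.g. per_standard_scores("quickkey_export.csv")
```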

Though the Quick Key data analysis isn’t as rigorous as what I would get from Datalink, it suffices for my purposes. I sort of wish Quick Key would improve the analysis they provide, but for now, if I need more detailed analytics, it usually requires a simple formula that I can quickly insert.
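
For example, percent correct per item is the kind of quick addition I mean. A minimal sketch, with made-up data:

```python
# Percent of students answering each item correctly (1 = correct, 0 = incorrect, one entry per student).
responses = {
    "Q1": [1, 1, 0, 1, 1],
    "Q2": [0, 1, 0, 0, 1],
}
difficulty = {item: sum(marks) / len(marks) for item, marks in responses.items()}
print(difficulty)  # {'Q1': 0.8, 'Q2': 0.4}
```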

Sample data analysis from Quick Key
Sample data analysis from Datalink

Through all this, I don’t overlook the obvious: MC questions provide minimal insight into what students actually know, especially in math. That being said, my students’ graduation exams still require them to answer a relatively large number of MC items. For that reason alone I feel somewhat obligated to use MC questions on unit exams. Also, when assessing student knowledge via MC questions, I do my best to design them as hinge questions. TMC14 (specifically Nik Doran) formally introduced me to hinge questions, which are MC questions consciously engineered to categorize and target student misconceptions based on the answer a student chooses. In this way, students’ responses to MC questions, though less powerful than short-response questions, can give me an intuitive understanding of their abilities.

Quick Key recently introduced a Pro plan ($30/year), which places new limitations on free accounts. Still, the free plan offers plenty for the average teacher.

Either way, Quick Key still beats a $1000 scanner + cost of sheets.

bp
