Better feedback through structure?


When assessing, I do my absolute best to provide detailed remarks and comments on student papers. The problem I run into is doing this for every student. If I'm marking an exam, I'll usually adjust the length and depth of my feedback based on the particular student and the work they showed. This is fine because different students need different feedback, but, looking back, I find that I shortchange some students.

If I chose to be meticulous with the work of every student, I'd spend an overwhelming amount of time assessing. So I don't. The result is that some kids get robust, thorough feedback while others receive relatively minimal feedback. In addition, how I indicate a specific error may vary slightly from one exam to the next, and I'd like to make that more uniform. I also want a systematic approach that keeps feedback consistent across different students. That way, when kids are analyzing and assessing work, we all share a common understanding of how specific errors are indicated.

What I’ve thought about doing next year is using a set of abbreviations or symbols that would indicate certain errors. For lack of a better term, let’s call them “indicators.” I would use these indicators on exams and other assessments to highlight mistakes.

For example, if a student didn’t know they needed to factor on a given problem, I could indicate this by writing “FCT” next to the error, instead of writing an explanation or setting up the factoring for them. On the same problem, if another student attempted to factor, but committed a computational error in the process, I could write “FCT” with a circle around it. The subtlety of the circle would differentiate between the two errors.

Another simple example could be when a student commits a computational error involving addition, subtraction, multiplication, or division on any problem. I could indicate this by drawing, say, a star near the error. When a student sees a star, they would know that their mistake is a computational error involving one of those operations.

Those are three pretty sad examples, but I can't come up with others at the moment.

My goal would be for students to easily identify an error on an assessment by referring to the indicator. The indicators would be commonplace throughout the class, and we'd build on them over time. I would create a key for all the indicators, post it in my classroom, and give each student a copy. I could even include the indicators in my lessons for reinforcement.

Since there are endless combinations of errors that can be made on any given problem, I couldn't have an indicator for every possible error – only common ones or those that are conceptual in nature. These would form a “database” of errors that would be used throughout the year. For errors that don't align with one of the indicators, I could combine the indicators with regular comments to clarify the mistake(s).

By using these indicators, I could quickly and easily provide precise, detailed, and consistent feedback to every student.

Based on the type of error, these indicators would also help students distinguish between standards-based grading (SBG) scores. For example, if a student gets an FCT indicator, they may earn a score of 2 (a conceptual error), but if they get an FCT with a circle, they could earn a 3 (a computational error).

All these are just ideas at this point. There’s still a lot of work I need to do to actually implement a systematic approach to feedback. I don’t know if it’s feasible or even useful compared to my traditional feedback. But I do see the need to improve the qualitative nature and efficiency of the feedback given in my class – either by me or my students.

bp

P.S. Another way to implement feedback in a non-traditional way would be to use different color highlighters to represent the different types of errors. I remember John Scammell mentioning something about this during his formative assessment talk at TMC14.