Exams: tools for feedback, answers provided, and lagged

I’ve made three fundamental changes to my unit exams this year.

Part One: Exams as tools for feedback

After a student gets an exam back, what’s the first thing they notice? Easy: their overall score. That’s why I’m not putting scores on exams.

All too often a student sees their score and, especially if it’s low or mediocre, views it as the only thing that matters. Even with standards-based grading, students sometimes get caught up in whether they’ve earned proficiency on a particular concept (which isn’t a bad thing). What they lose sight of is correcting mistakes and improving based on how they did. This attitude is more common among struggling learners than high achievers, but it is present throughout.

This is why this year I’ve decided to not put scores or grades on exams. I am only putting feedback. That feedback comes in the form of highlighting incorrect work, asking clarifying questions, inserting direct how-to, and cheering correct responses. Never will my students find an overwhelming (or underwhelming) score on their exam. When they critique their performance, I want them to focus on their work, not lament their grade. My next challenge is to get them to actually internalize and grow from the feedback.

Part Two: Exams that focus on why

On exams, I’m providing the answer to every question.

I know this is ridiculous and unheard of, but here’s my thing: I want to build a classroom culture that hinges on questions, not answers. In fact, I fear my kids becoming answer-driven. I want students to focus on the how and why rather than the what. In addition to simply talking to them and encouraging this frame of mind on an ongoing basis, I wanted to add a structural element that could help accomplish this. Providing every answer is what I came up with.

I know this doesn’t simulate standardized exams outside of my room and is fairly impractical, but I hope that I’m helping them see the bigger picture. Besides, I already include answers in classwork and homework assignments, so I figured, why not exams too?

Part Three: Exams that lag

After reading much about the power of lagging homework from the MTBOS, this summer I decided to incorporate it. In addition, I’ve decided to lag my unit exams.

It just makes sense to lag both. In fact, when I made the choice to lag my homework, I found lagging unit exams to be a direct corollary. Summative assessments (e.g., exams) should always align with what and how I teach. If I lag homework and 80% of what students are doing every night focuses on review content, how can I administer an exam of 100% new content?

This all may backfire completely. But at least then I’ll be able to add these changes to the extensive list of things that I’ve failed at implementing.



bp

Better feedback through structure?

Feedback exists for one reason: to improve.

When assessing, I do my absolute best to provide detailed remarks and comments on student papers. The problem I run into is doing this for every student. If it’s an exam I’m marking, I’ll usually vary the length and depth of my feedback depending on the particular student and the work they showed on the exam. This is fine because different students need different feedback, but, looking back, I find that I shortchange some students.

If I chose to be meticulous with the work of every student, I’d spend an overwhelming amount of time assessing. So I don’t. The result is that some kids get feedback that is robust and thorough while others receive relatively minimal feedback. In addition, how I indicate a specific error may vary slightly from one exam to the next, which could be more uniform. I also want a systematic approach that keeps feedback consistent among different students. That way, when kids are analyzing and assessing work, we all share one understanding of how specific errors are indicated.

What I’ve thought about doing next year is using a set of abbreviations or symbols that would indicate certain errors. For lack of a better term, let’s call them “indicators.” I would use these indicators on exams and other assessments to highlight mistakes.

For example, if a student didn’t know they needed to factor on a given problem, I could indicate this by writing “FCT” next to the error, instead of writing an explanation or setting up the factoring for them. On the same problem, if another student attempted to factor, but committed a computational error in the process, I could write “FCT” with a circle around it. The subtlety of the circle would differentiate between the two errors.

Another simple example could be when a student commits a computational error involving addition, subtraction, multiplication, or division on any problem. Near the error I could indicate this by drawing a star, say. When a student sees a star, they will know to look for a computational error involving an operation to find their mistake.

Those are three pretty sad examples, but I can’t think of others at the moment.

My goal would be for students to easily identify an error on an assessment by calling up the indicator. The indicators would be commonplace throughout the class and we’d build on them over time. I would create a key for all the indicators, post it in my classroom, and give each student a copy. I could even include them in my lessons for reinforcement.

Since there are endless combinations of errors that can be made on any given problem, I couldn’t have an indicator for every possible error, only for common ones or those that are conceptual in nature. These would form a “database” of errors that would be used throughout the year. For errors that don’t align with one of the indicators, I could combine the indicators with regular comments to clarify the mistake(s).

Using these indicators could allow me to quickly and easily provide precise, detailed, and consistent feedback to every student.

Based on the type of error, these indicators would also help students distinguish between SBG scores. For example, a student who gets an FCT indicator may earn a score of 2 (a conceptual error), but one who gets an FCT with a circle could earn a 3 (a computational error).
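
To make the idea concrete, here’s a minimal sketch of what such a key might look like, written in Python. To be clear, the symbols, descriptions, and score mappings below are hypothetical placeholders, not a finished system; a real key would grow out of the errors my students actually make.

    # A hypothetical indicator key: each symbol maps to the error it flags
    # and a provisional SBG score. These entries are illustrative only.
    INDICATOR_KEY = {
        "FCT":         ("needed to factor but didn't", 2),                     # conceptual
        "FCT-circled": ("attempted to factor, slipped on the computation", 3), # computational
        "STAR":        ("computational error with +, -, x, or /", 3),          # computational
    }

    def explain(indicator):
        """Return a student-facing explanation and provisional SBG score."""
        if indicator not in INDICATOR_KEY:
            return "No indicator on file; see the written comment instead."
        error, score = INDICATOR_KEY[indicator]
        return "{}: {} (provisional SBG score: {})".format(indicator, error, score)

    print(explain("FCT"))  # FCT: needed to factor but didn't (provisional SBG score: 2)

The code itself isn’t the point; the point is that the key is a single lookup table, so the same symbol always flags the same error and suggests the same provisional score, no matter whose exam I’m marking.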

All these are just ideas at this point. There’s still a lot of work I need to do to actually implement a systematic approach to feedback. I don’t know if it’s feasible or even useful compared to my traditional feedback. But I do see the need to improve the qualitative nature and efficiency of the feedback given in my class – either by me or my students.

bp

P.S. Another way to implement feedback in a non-traditional way would be to use different color highlighters to represent the different types of errors. I remember John Scammell mentioning something about this during his formative assessment talk at TMC14.

Two-Stage Exam


My kids have been struggling this spring and their exam scores have been pretty sad. It’s been one of those years. To help matters, I began adjusting my pace, but I also wanted to implement some sort of structure for collaborative learning. Idea: group exams.

Sadly, I’ve never really used group exams. To be honest, the collaboration aspect of my lessons is usually pretty lackluster as a whole. I may have used group exams once or twice before, but the experience wasn’t significant enough for me to remember it. So I had no idea how I was going to structure one now. Brian Vancil suggested that I try a two-stage exam.

It was amazing.

During a two-stage exam, you first have students take an exam independently, like they normally would (this is stage one). Immediately after you collect it, you put them in groups and give them the exact same exam (this is stage two). They collaborate and submit one document with everyone’s name on it. Their final grade: 80% stage one and 20% stage two. These percentages can certainly be adjusted.
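
To make the weighting concrete with some made-up numbers: a student who scores 70 on stage one and whose group scores 95 on stage two would earn 0.80(70) + 0.20(95) = 56 + 19 = 75. The group stage can lift a grade, but the independent work still carries most of the weight.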

Student discussion during stage two was rich and completely focused on the mathematics. The kids were consumed with sharing their ideas, strategies, and misconceptions. Even my more introverted students were voluntarily sharing their thoughts in the groups. As I was walking around observing, part of me felt like I was dreaming. It was that good.

Their scores didn’t disappoint, either. I’ve given these exams a few times over the course of this spring and, overall, the results have been better than my traditional exams. But their scores are the least of my concerns. And two-stage exams do way more than merely inform me about how well my students understand something.

Students actually LEARN from these exams.

They’re driven by the students, reduce anxiety, and afford the kids a great opportunity to communicate their thoughts in a meaningful way. I’ve polled my kids after each of the exams and their attitudes towards the experience were overwhelmingly positive. The kids loved the immediate feedback and the ability to learn what they did wrong (and right). They were teaching and learning from each other in ways I’ve never seen. There were more “ah-ha!” moments during stage two than I could count. The groups were reflecting on what they did and didn’t do and unifying these thoughts to really learn from each other.

My kids are looking forward to the next exam. I’ve never heard that before.


bp


P.S. There’s also some introductory research on two-stage exams conducted by Carl E. Wieman, Georg W. Rieger, and Cynthia E. Heiner. A good read!

Plickers

Last summer at Twitter Math Camp I learned about an incredible formative assessment tool. I’ve actually started using it fairly regularly now, so I figured I would get out a quick post about it.

It’s called Plickers. It’s essentially a poor man’s Clickers (think Turning Point Technologies). They’re pieces of paper that you print off for free online and distribute to your class. Each student gets one Plicker. The teacher puts up a question, and the orientation in which a student holds their Plicker determines their answer choice. Here’s where the magic happens: download the Plickers app to your mobile device, “scan” the room with your camera, and the app picks up all the student responses. Think exit slips, class polls, checks for understanding, and the like. It is remarkable. The first time you see it, you can hardly believe your eyes. Here’s a video.

Pros:

  • Allows me to collect assessment data relatively easily
  • The kids seem to love using it
  • Easy to replace in case one goes missing
  • No software to install; it’s all web-based and the app is user-friendly
  • Free

Cons:

  • Requires preparing prompts ahead of time
  • Cannot export data (or maybe I just don’t know how to)
  • Requires laminating for long-term use

There are many things in educational technology that are impractical and overdone. This is not one. Plickers leverage technology in a way that’s simple, accessible, and useful.

In short, Plickers are game changers.

If you haven’t tried them yet and are interested in a slick formative assessment strategy, I would definitely check them out.


bp