The day after


I don’t like review days before exams. I’d much rather spend the day after an exam analyzing mistakes and relearning. I find this to be crucial in promoting a growth mindset in my students. My struggle has been how to structure these post-exam days. Here’s a formative assessment idea that I’ve used a few times this year.

The day after an exam, I set up the room with 3-5 stations. Each serves as a place to study a particular concept that was on the exam.

My bell ringer asks students to check their exam performance on the bulletin board in the back of the room. It lets them know which concepts they’ve earned proficiency on. I also email the kids their performance immediately after assessing the exams, but many don’t check.

I hand back the exams, and students move to a station for a concept they need help with based on their performance. If they’ve earned credit for every concept on the exam, I ask them to float and help others. At each station they use notes, each other, and the feedback I provided on the exam to analyze and learn from their mistakes. I also have practice problems at each station so they can make sure they understand the concept. I float around the room and help. Of course, the SBG data allows me to sit with the students who need me most.

After a student feels they have successfully relearned a concept (I usually check in to confirm), they can retake that concept. The retakes are in folders in the corner – students grab one and do it anywhere in the room. They submit it and begin working on another concept, if necessary. It doesn’t matter how many concepts a student retakes during the period, but it usually works out to be 1-2.

Before I tried this activity, I was concerned that since the stations would be full of students who struggled with a concept, they would all sit together and get nowhere. This hasn’t been the case. The kids are diligent about relearning. This may be because they like retaking exams and earning proficiency during class time, as I usually make them come after school to do this. It helps that the relearning is targeted and individualized to each student. Plus, it’s all formative. They go wherever they feel they need to. They assess themselves, but use one another in the process.

It can look and feel chaotic. But that’s the point. Improvement is messy. It’s also amazing – especially when it happens amongst your students.
bp

Internalizing feedback without seeing it

[Image: feedback on an exam, written in Spanish]

I’ve found that students all too often overlook, or simply ignore, the feedback I give them on assessments. For whatever reason they just don’t use it. This is a problem.

I value reassessment and see feedback as crucial in that process. If a student struggles on an exam, I need them to learn from their misconceptions. My feedback reflects this. I’ve always provided fairly detailed feedback, but this year I’ve stepped it up significantly. In fact, all I do now is give feedback. I provide no scores or grades on exams. This helps, but I still need to find ways for students to grow from the feedback they receive.

I have experimented with kids relearning concepts the day after an exam without seeing their actual exam. The day after the exam, I give a copy of the SBG results to each group. Each student uses the data to identify the specific concepts that they need to relearn or review. The groups are a mix of proficiency levels (based on the exam results) so if a student needs help with a particular standard, there’s someone in their group that understands it and can help them. I also give them blank copies of the exam to work on and discuss.

After about 15-20 minutes of peer tutoring, I give them their exams back. Based on their newfound understanding, at least some of their misconceptions should be alleviated. They now spend 15-20 minutes correcting their mistakes on a separate sheet of paper while directly responding to the feedback I’ve given them on the exam.

Ideally, this means that they are using feedback from their peers to understand and respond to the feedback I’ve given them. It serves as relearning/remediation before they retake the exam. What I’m missing, though, is a reflection piece that ties into the feedback as well.

A colleague conjured up a different spin on this. After an exam, he informs students which standards they didn’t earn proficiency on. (He doesn’t hand back their actual exam either.) He allows one week (more or less) of relearning/remediation on those standards – either on their own or with him. He actually uses an online resource for this. Then, when they feel ready to retake, he returns their exam and asks them to self-assess and correct their original mistakes. If they can, he allows them to retake. If not, they continue relearning. It may not focus on feedback, but I like it.

Closing thoughts: what if I do get my students to internalize my feedback? Are they just going to be doing it to satisfy the requirements that I ask of them? When they leave my classroom, will they view feedback as a necessary component of success? Will my feedback really make a difference? How else could I get them to value it?

 

bp

Exams: tools for feedback, answers provided, and lagged

I’ve made three fundamental changes to my unit exams this year.

Part One: Exams as tools for feedback

After a student gets an exam back, what’s the first thing they notice? Easy: their overall score. That’s why I’m not putting scores on exams.

All too often a student sees their score and, especially if it’s low or mediocre, views the score as the only thing that matters. Even with standards-based grading, sometimes students will get caught up in whether they’ve earned proficiency on a particular concept (which isn’t a bad thing). What they forget about is how to correct mistakes and improve based on how they did. This attitude is more present in struggling learners than it is in high achievers, but it is present throughout.

This is why this year I’ve decided to not put scores or grades on exams. I am only putting feedback. That feedback comes in the form of highlighting incorrect work, asking clarifying questions, inserting direct how-to, and cheering correct responses. Never will my students find an overwhelming (or underwhelming) score on their exam. When they critique their performance, I want them to focus on their work – not lament their grade. My next challenge is to get them to actually internalize and grow from the feedback.

Part Two: Exams that focus on why

On exams, I’m providing the answer to every question.

I know this is ridiculous and unheard of, but here’s my thing: I want to build a classroom culture that hinges on questions, not answers. In fact, I fear my kids becoming answer-driven. I want students to focus on the how and why rather than the what. In addition to simply talking to them and encouraging this frame of mind on an ongoing basis, I wanted to add a structural element that could help accomplish this. Providing every answer is what I came up with.

I know this doesn’t simulate standardized exams outside of my room and is fairly impractical, but I hope that I’m helping them see the bigger picture. Besides, I already include answers in classwork and homework assignments, so I figured why not exams too?

Part Three: Exams that lag

After reading much about the power of lagging homework from the MTBoS, this summer I decided to incorporate it. In addition, I’ve decided to lag my unit exams.

It just makes sense to lag both. In fact, when I made the choice to lag my homework, I found lagging unit exams to be a direct corollary. Summative assessments (e.g., exams) should always align with what and how I teach. If I lag homework and 80% of what students are doing every night focuses on review content, how can I administer an exam of 100% new content?

This all may backfire completely. But at least then I’ll be able to add these changes to the already long list of things that I’ve failed at implementing.



bp

Better feedback through structure?


When assessing, I do my absolute best to provide detailed remarks and comments on student papers. The problem I run into is doing this for every student. If it’s an exam I’m marking, I’ll usually vary the length and depth of my feedback depending on the particular student and the work they displayed on the exam. This is fine because different students need varying feedback, but, looking back, I find that I shortchange some students.

If I chose to be meticulous with the work of every student, I’d spend an overwhelming amount of time assessing. So I don’t. The result is that some kids get feedback that is robust and thorough while others receive relatively minimal feedback. In addition, how I indicate a specific error may vary slightly from one exam to the next; I’d like to make this more uniform. I also want a systematic approach that keeps feedback consistent amongst different students. This way, when kids are analyzing and assessing work, there is uniformity amongst us all in how specific errors are indicated.

What I’ve thought about doing next year is using a set of abbreviations or symbols that would indicate certain errors. For lack of a better term, let’s call them “indicators.” I would use these indicators on exams and other assessments to highlight mistakes.

For example, if a student didn’t know they needed to factor on a given problem, I could indicate this by writing “FCT” next to the error, instead of writing an explanation or setting up the factoring for them. On the same problem, if another student attempted to factor, but committed a computational error in the process, I could write “FCT” with a circle around it. The subtlety of the circle would differentiate between the two errors.

Another simple example could be when a student commits a computational error involving addition, subtraction, multiplication, or division on any problem. Near the error I could indicate this by drawing a star, say. When a student sees a star, they will know to look for a computational error involving an operation to find their mistake.

Those are three pretty sad examples, but I can’t think of others at the moment.

My goal would be for students to easily identify an error on an assessment by calling up the indicator. The indicators would be commonplace throughout the class and we’d build on them over time. I would create a key for all the indicators, post it in my classroom, and give them a copy. I could even include them in my lessons for reinforcement.

Since there are endless combinations of errors that can be made on any given problem, I couldn’t have an indicator for every possible error – only for common ones or those that are conceptual in nature. These would form a “database” of errors that would be used throughout the year. For those errors that don’t align with one of the indicators, I could combine the indicators with regular comments to clarify the mistake(s).

Using these indicators could allow me to quickly and easily provide precise, detailed, and consistent feedback to every student.

Based on the type of error, these indicators would also help students distinguish between SBG scores. For example, if a student gets an FCT indicator, they may earn a score of 2 (a conceptual error), but if they get an FCT with a circle, they could earn a 3 (a computational error).
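To make the idea concrete, here’s a minimal sketch of the indicator key as a lookup table. The symbols, descriptions, and score mappings below are hypothetical placeholders, not a finished system; since a circle can’t be typed, the circled FCT is written as “FCT-circled”.

```python
# Hypothetical indicator key -- symbols, descriptions, and SBG score
# mappings are illustrative placeholders, not a finished system.
INDICATORS = {
    "FCT":         ("Needed to factor here, but didn't", 2),                   # conceptual error
    "FCT-circled": ("Attempted to factor, but made a computational slip", 3),  # computational error
    "STAR":        ("Computational error with +, -, x, or / near this step", 3),
}

def explain(indicator):
    """Look up an indicator; fall back to the written comment if it's not in the key."""
    if indicator not in INDICATORS:
        return "Not in the key -- see the written comment on the exam."
    description, sbg_score = INDICATORS[indicator]
    return f"{description} (suggested SBG score: {sbg_score})"

print(explain("FCT"))  # Needed to factor here, but didn't (suggested SBG score: 2)
```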

All these are just ideas at this point. There’s still a lot of work I need to do to actually implement a systematic approach to feedback. I don’t know if it’s feasible or even useful compared to my traditional feedback. But I do see the need to improve the qualitative nature and efficiency of the feedback given in my class – either by me or my students.

bp

P.S. Another way to implement feedback in a non-traditional way would be to use different color highlighters to represent the different types of errors. I remember John Scammell mentioning something about this during his formative assessment talk at TMC14.

Plickers

Last summer at Twitter Math Camp I learned about an incredible formative assessment tool. I’ve actually started using it fairly regularly now, so I figured I would get out a quick post about it.

It’s called Plickers. It’s essentially a poor man’s clickers (think Turning Point Technologies). They’re pieces of paper that you print off for free online and distribute to your class. Each student gets one Plicker. The teacher puts up a question, and the orientation in which a student holds their Plicker determines their answer choice. Here’s where the magic happens: download the Plickers app to your mobile device and you can “scan” the room with your camera; the app picks up all the student responses. Think exit slips, class polls, checks for understanding, and the like. It is remarkable. The first time you see it, you literally can’t believe your eyes. Here’s a video.
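As a rough mental model (an assumption about the idea, not Plickers’ actual implementation), you can think of the decoding like this: each card carries a unique pattern that identifies the student, and the rotation at which the card is held selects the answer.

```python
# Rough mental model of Plickers-style decoding -- illustrative only,
# not the app's actual code. Assumes detected rotations are snapped
# to a right angle (0, 90, 180, or 270 degrees).
ROTATION_TO_ANSWER = {0: "A", 90: "B", 180: "C", 270: "D"}

def decode(card_id: int, rotation_degrees: int) -> tuple[int, str]:
    """Map a detected card and its orientation to (card ID, answer choice)."""
    answer = ROTATION_TO_ANSWER[rotation_degrees % 360]
    return card_id, answer

print(decode(card_id=17, rotation_degrees=90))  # (17, 'B')
```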

Pros:

  • Allows me to collect assessment data relatively easily
  • The kids seem to love using it
  • Easy to replace in case one comes up missing
  • No software to install; it’s all web based and the app is user-friendly
  • Free

Cons:

  • Requires preparing prompts ahead of time
  • Cannot export data (or maybe I just don’t know how)
  • Requires laminating for long-term use

There are many things in educational technology that are impractical and overdone. This is not one of them. Plickers leverage technology in a way that’s simple, accessible, and useful.

In short, Plickers are game changers.

If you haven’t tried them yet and are interested in a slick formative assessment strategy, I would definitely check them out.

 

bp