Reading back through my blog posts about assessment and student engagement in the reading/writing classroom, I realized I wanted to explore a few questions in-depth.
1) Is there more to the term motivation than we take for granted?
2) Can grading be formulated to enhance student motivation in a learning-oriented classroom?
3) Portfolio grading sounds great, but is it effective for both teachers and students?
Ultimately, I’m trying to find out if there are reasonable, practicable (at the university level) grading systems that do not impinge upon students’ motivation or quash the very idea of learning for the sake of learning. I’ve found, if not an abundance of information, a few articles whose authors attempt to reconcile grading with progressive, rather than traditional, teaching. After a long stint in the stacks of JSTOR and Google Scholar, where specific search terms do not always yield the desired results, I was relieved to find articles representing all sides of the spectrum. In this final blog, I’m invoking a variety of perspectives: from anti-assessment educators like Alfie Kohn, to those who believe assessment practices, such as rubrics, need to be changed to fit progressive curricula, and finally to those who see assessment as contributing to student motivation (however extrinsic).
As I was reading about assessment and student engagement, I realized I needed to conduct a more thorough examination of the term motivation. Aside from distinguishing extrinsic from intrinsic, many authors who address issues of student motivation do not offer a specific definition. To my mind, the word implies self-propulsion toward a goal of some sort (I always think of running to, rather than running from, something). All three Dictionary.com definitions include variations of the word “motivate,” which must mean it is difficult, even for lexicographers, to pen a specific meaning. Perhaps, then, there is more to the idea of motivation than we suppose.
For instance, motivation does not always come with positive implications: “It’s remarkable how often educators use the word motivation when what they mean is compliance. Indeed, one of the fundamental myths in this area is that it’s possible to motivate somebody else. Whenever you see an article or a seminar called ‘How to Motivate Your Students,’ I recommend that you ignore it. You can’t motivate another person, so framing the issue that way virtually guarantees the use of controlling devices.” This perspective, offered up by Alfie Kohn in an interview with Ron Brandt (for the Association for Supervision and Curriculum Development), speaks to the vagueness of motivation and the many ways in which we throw the word around. Can’t we be responsible for motivating others? Kohn’s most likely referring to intrinsic motivation, and neglecting to make the distinction. Or perhaps he’s leaving out the distinction with the purpose of implying that extrinsic motivation doesn’t really count for anything (and it really doesn’t, in his book).
But is extrinsic motivation really synonymous with compliance, and is the current institutional focus on assessment ultimately responsible for the dissipation of students’ natural curiosity in the learning process?
Some say no way. In his article “Intrinsic Versus Extrinsic Motivation in Schools: A Reconciliation,” Martin V. Covington “examine(s) critically the assertion that these (assessment) processes are necessarily antagonistic, such that the will to learn for its own sake is inhibited or even destroyed by the offering of extrinsic rewards.” His goals are realistic: he wants to debunk anti-assessment theories, like those propagated by Kohn, and to help teachers figure out how to grade effectively and relevantly without compromising student engagement with learning.
However, Covington acknowledges that certain grading systems don’t work: “In many classrooms, an inadequate supply of rewards (e.g., good grades) is distributed by teachers unequally, with the greatest number of rewards going to the best performers or to the fastest learners. This arrangement is based on the false assumption that achievement is maximized when students compete for a limited number of rewards.” Yet he does not put much stock in the “overjustification effect” disseminated by theorists like Kohn: the idea that rewards from teachers stifle students’ natural curiosity and intrinsic motivation, promoting performance-oriented task fulfillment rather than learning for its own sake. He believes assessment does not have to be damaging but can actually motivate students, which means that not all extrinsic motivation has a negative impact.
Hmm. That’s going against the grain. I tend to think of Kohn as the radical version of the teacher/philosopher I want to become, so it is difficult to side with the assessors on this point. However, in the territory underlying my dreams and ambitions, where I can see clearly the real challenges and obligations of teaching in a community college or university system, Covington’s ideas make more sense.
Motivation, fuzzy term though it may be, is crucial to both students and teachers in college-level reading and writing classes. Nobody wants to teach a group of bored, distant, back-of-the-room eye-rollers, and no one wants to sit in class day after day with a frustrated teacher. So, if there are ways to give feedback and grades and keep students interested, engaged, and (at least) extrinsically motivated, what are they and how can we get on board?
In my search for student-friendly grading methods, I came across articles touting new and improved rubrics, which led me to wonder whether the rubric (that dirty word!) could actually be part of the answer. Because much of what I’ve read about them emphasizes their ties to directive instruction, I’ve always thought rubrics were basically a reinforcement of the banking method. For a long time, I’ve been under the impression that they work like a narrow list of instructions that lead students to write in prefabricated, unimaginative ways; even worse, teachers like me end up with a stack of near-identical papers that threaten to bore us to death over the weekend. But how well-founded are these assumptions, really? If instruction methods can be changed, why can’t methods of assessment? Perhaps even rubrics can be altered to reflect student-centered assessment principles.
Some educators believe rubrics can be altered to make assessment more student-centered. Heidi Goodrich Andrade’s views on the subject are worth noting: “Research has shown that feedback can improve learning, especially when it gives students specific information about the strengths and weaknesses of their work. The problem is that giving focused feedback is wildly time consuming. A good rubric allows me to provide individualized, constructive critique in a manageable time frame.” Moreover, rubrics, she claims, keep her honest and fair in the grading process and provide students with clear assignment guidelines that encourage responsibility and self-efficacy. Also, she says, “Instructional rubrics allow me to assign more challenging work than I otherwise could.” However, she warns, good rubrics are not a substitute for good teaching.
I believe that in order for assessment to be “good,” or “effective,” it must coincide with student-centered instruction. Good teaching involves caring about students’ success—not just in class but in the world at large—and helping students overcome feelings of failure, even when they are confronted with bad grades. It means not treating students like numbers, and not making them feel as if they’re being judged by the grades they’ve made in the past. Looking toward the future, we have to see our students as capable learners so that they can start to see themselves that way. We cannot achieve that goal using assessment methods that are based on arbitrary standards, just as we cannot (given the choice) perpetuate archaic teaching models that do not reflect our informed beliefs. Right now, there is no assessment system in place that works for all students (if there were, we’d have heard about it!), but it is nice to hear that some teachers have managed to give grades and keep students engaged.
The most student-centered approach to assessment I can think of (besides student self-assessment, which is effective in its own right but not generally used to determine students’ final grades) is portfolio grading. Portfolios, many experts claim, provide teachers and students with a unique opportunity for collaboration and responsibility-sharing. In the article “Portfolio Assessment: Some Questions, Some Answers, Some Recommendations,” Cindy S. Gillespie asserts, “The major advantage of portfolio assessment is that it allows students to actively participate with teachers in the evaluation process.” Though portfolios mainly have to do with writing, there are ways to use them to assess reading as well. Reading logs and journals can be collected and graded as a portfolio, for instance. In a very memorable literature education class I took, we took that idea a step further and wrote reflection papers on how our reading and writing processes had changed over the course of the semester.
Gillespie’s study also notes many other salient advantages of using portfolios:
1) They allow students to see how they’ve grown as readers/writers through the semester.
2) They allow students to reflect on reading, writing, and thinking as interrelated processes.
3) Work on portfolios can be done collaboratively through peer review.
4) They promote students’ responsibility for their own learning.
5) They increase student self-awareness and self-esteem.
Sounds good, right? But what are the drawbacks, and how much extra work is it for teachers?
After listening to the opinions of educators who likewise believe that this method benefits students, I can honestly see why very few busy teachers end up implementing it. “According to many of the authors, the greatest weakness of portfolio assessment is the increased workload for the teacher.” Another weakness, Gillespie notes, is that “Portfolios may encourage teachers toward a ‘one assessment tool fits all’ mentality.”
So how much work do we really want to do? Rubrics are quick, easy, and relatively painless, although they do generate a lot of controversy, considering their traditional alignment with the banking method rather than progressive methods of instruction and assessment. Portfolios evaluate the whole student, his or her progress throughout the semester, and increase metacognition. However, according to Gillespie, they may “present unique data that may be ignored or criticized by school-related constituencies.”
Ultimately, I think the best way to come up with a rock-solid assessment plan is for teachers to realize that they don’t have to stick with one method if it isn’t working! We should be willing to change our methods to fit our instructional principles and beliefs, and not give up because we think we have no options. We do have options. It’s just that there is no one-size-fits-all approach (not that there is for anything), and that can seem daunting. But there are ways to help students learn—and stay motivated—by giving thorough feedback, holding conferences, and grading as fairly as possible within the extant grading systems, imperfect though they may be. The important thing to remember is that good assessment practices are nothing without good teaching.