Author Archives: amyglasenapp

Blog # 8: Portfolios and Rubrics: An Assessment of the Good, the Bad, and the Ugly

Reading back through my blog posts about assessment and student engagement in the reading/writing classroom, I realized I wanted to explore a few questions in-depth.

1) Is there more to the term motivation than we take for granted?

2) Can grading be formulated to enhance student motivation in a learning-oriented classroom?

3) Portfolio grading sounds great, but is it effective for both teachers and students?

Ultimately, I’m trying to find out if there are reasonable, practicable (at the university level) grading systems that do not impinge upon students’ motivation or quash the very idea of learning for the sake of learning. I’ve found, if not an abundance of information, a few articles whose authors attempt to reconcile grading with progressive, rather than traditional, teaching. After a long stint in the stacks of JSTOR and Google Scholar, where specific search terms do not always yield the desired results, I was relieved to find articles representing all points on the spectrum. In this final blog, I’m invoking a variety of perspectives, from those of anti-assessment educators like Alfie Kohn, to those who believe assessment practices, such as rubrics, need to be changed to fit progressive curricula, and finally to those who see assessment as contributing to student motivation (however extrinsic).

As I was reading about assessment and student engagement, I realized I needed to conduct a more thorough examination of the term motivation. Aside from distinguishing extrinsic from intrinsic, many authors who address issues of student motivation do not offer a specific definition. To my mind, the word implies self-propulsion toward a goal of some sort (I always think of running to, rather than running from, something). The dictionary definitions I consulted all include variations of the word “motivate,” which must mean it is difficult, even for lexicographers, to pen a specific meaning. Perhaps, then, there is more to the idea of motivation than we suppose.

For instance, motivation does not always come with positive implications: “It’s remarkable how often educators use the word motivation when what they mean is compliance. Indeed, one of the fundamental myths in this area is that it’s possible to motivate somebody else. Whenever you see an article or a seminar called ‘How to Motivate Your Students,’ I recommend that you ignore it. You can’t motivate another person, so framing the issue that way virtually guarantees the use of controlling devices.” This perspective, offered up by Alfie Kohn in an interview with Ron Brandt (for the Association for Supervision and Curriculum Development), speaks to the vagueness of motivation and the many ways in which we throw the word around. Can’t we be responsible for motivating others? Kohn’s most likely referring to intrinsic motivation, and neglecting to make the distinction. Or perhaps he’s leaving out the distinction with the purpose of implying that extrinsic motivation doesn’t really count for anything (and it really doesn’t, in his book).

But is extrinsic motivation really synonymous with compliance, and is the current institutional focus on assessment ultimately responsible for the dissipation of students’ natural curiosity in the learning process?


Some say no way. In his article, “Intrinsic Versus Extrinsic Motivation in Schools: A Reconciliation,” Martin V. Covington “examine[s] critically the assertion that these [assessment] processes are necessarily antagonistic, such that the will to learn for its own sake is inhibited or even destroyed by the offering of extrinsic rewards.” His goals are realistic: he wants to debunk anti-assessment theories, like those propagated by Kohn, and to help teachers figure out how to grade effectively and relevantly without compromising student engagement with learning.

However, Covington acknowledges that certain grading systems don’t work: “In many classrooms, an inadequate supply of rewards (e.g., good grades) is distributed by teachers unequally, with the greatest number of rewards going to the best performers or to the fastest learners. This arrangement is based on the false assumption that achievement is maximized when students compete for a limited number of rewards.” Yet he does not put much stock in the “overjustification effect” disseminated by theorists like Kohn, which is the idea that rewards from teachers stifle students’ natural curiosity and intrinsic motivation, promoting performance-oriented task fulfillment rather than learning for its own sake. He believes assessment does not have to be damaging, but can actually motivate students, which means that not all extrinsic motivation has a negative impact.

Hmm. That’s going against the grain. I tend to think of Kohn as the radical version of the teacher/philosopher I want to become, so it is difficult to side with the assessors on this point. However, in the territory underlying my dreams and ambitions, where I can see clearly the real challenges and obligations of teaching in a community college or university system, Covington’s ideas make more sense.

Motivation, fuzzy term though it may be, is crucial to both students and teachers in college-level reading and writing classes. Nobody wants to teach a group of bored, distant, back-of-the-room eye-rollers, and no one wants to sit in class day after day with a frustrated teacher. So, if there are ways to give feedback and grades and keep students interested, engaged, and (at least) extrinsically motivated, what are they and how can we get on board?

In my search for student-friendly grading methods, I came across articles touting new and improved rubrics, which led me to wonder whether the rubric (that dirty word!) could actually be part of the answer. Because much of what I’ve read about them emphasizes their ties to directive instruction, I’ve always thought rubrics were basically a reinforcement of the banking method. For a long time, I’ve been under the impression that they work like a narrow list of instructions that lead students to write in prefabricated, unimaginative ways; even worse, teachers like me end up with a stack of near-identical papers that threaten to bore us to death over the weekend. But how well-founded are these assumptions, really? If instruction methods can be changed, why can’t methods of assessment? Perhaps even rubrics can be altered to reflect student-centered assessment principles.

Some educators believe rubrics can be altered to make assessment more student-centered. Heidi Goodrich Andrade’s views on the subject are worth noting: “Research has shown that feedback can improve learning, especially when it gives students specific information about the strengths and weaknesses of their work. The problem is that giving focused feedback is wildly time consuming. A good rubric allows me to provide individualized, constructive critique in a manageable time frame.” Moreover, rubrics, she claims, keep her honest and fair in the grading process and provide students with clear assignment guidelines that encourage responsibility and self-efficacy. Also, she says, “Instructional rubrics allow me to assign more challenging work than I otherwise could.” However, she warns, good rubrics are not a substitute for good teaching.

I believe that in order for assessment to be “good,” or “effective,” it must coincide with student-centered instruction. Good teaching involves caring about students’ success—not just in class but in the world at large—and helping students overcome feelings of failure, even when they are confronted with bad grades. It means not treating students like numbers, and not making them feel as if they’re being judged by the grades they’ve made in the past. Looking toward the future, we have to see our students as capable learners so that they can start to see themselves that way. We cannot achieve that goal using assessment methods that are based on arbitrary standards, just as we cannot (given the choice) perpetuate archaic teaching models that do not reflect our informed beliefs. Right now, there is no assessment system in place that works for all students (if there were, we’d have heard about it!), but it is nice to hear that some teachers have managed to give grades and keep students engaged.

The most student-centered approach to assessment I can think of (besides student self-assessment, which is effective in its own right but not generally used to determine students’ final grades) is portfolio grading. Portfolios, many experts claim, provide teachers and students with a unique opportunity for collaboration and responsibility-sharing. In the article “Portfolio Assessment: Some Questions, Some Answers, Some Recommendations,” Cindy S. Gillespie asserts, “The major advantage of portfolio assessment is that it allows students to actively participate with teachers in the evaluation process.” Though portfolios mainly have to do with writing, there are ways to use them to assess reading as well. Reading logs and journals can be collected and graded as a portfolio, for instance. In a very memorable literature education class I took, we took that idea a step further and wrote reflection papers on how our reading and writing processes had changed throughout the course of the semester.

Gillespie’s study also identifies several other notable advantages of using portfolios:

1) They allow students to see how they’ve grown as readers/writers through the semester.

2) They allow students to reflect on reading, writing, and thinking as interrelated processes.

3) Work on portfolios can be done collaboratively through peer review.

4) They promote students’ responsibility for their own learning.

5) They increase student self-awareness and self-esteem.

Sounds good, right? But what are the drawbacks, and how much extra work is it for teachers?

Even after listening to the opinions of educators who believe that this method benefits students, I can honestly see why very few busy teachers end up implementing it. “According to many of the authors, the greatest weakness of portfolio assessment is the increased workload for the teacher.” Another weakness, Gillespie notes, is that “Portfolios may encourage teachers toward a ‘one assessment tool fits all’ mentality.”

So how much work do we really want to do? Rubrics are quick, easy, and relatively painless, although they do generate a lot of controversy, considering their traditional alignment with the banking method rather than progressive methods of instruction and assessment. Portfolios evaluate the whole student, his or her progress throughout the semester, and increase metacognition. However, according to Gillespie, they may “present unique data that may be ignored or criticized by school-related constituencies.”

Ultimately, I think the best way to come up with a rock-solid assessment plan is for teachers to figure out that they don’t have to stick with one method if it is not working out! We should be willing to change our methods to fit our instructional principles and beliefs, and not give up because we think we have no options. We do have options. It’s just that there is no one-size-fits-all approach (not that there is one to anything), and that can seem daunting. But there are ways to help students learn—and stay motivated—by giving thorough feedback, holding conferences, and grading as fairly as possible within the extant grading systems, imperfect though they may be. The important thing to remember is that good assessment practices are nothing without good teaching.



How to Reconcile Assessment Methods with Social-Constructivist Learning Theories

Shepard, Lorrie A. (2000). The Role of Assessment in a Learning Culture. Educational Researcher, 29(7), 4-14.

Should assessment practices be changed to reflect the current shift in classroom practices, away from “the banking method” of education (Freire), toward a more collaborative, student-centered approach? Working to educate current and prospective teachers about this shift, as well as the research corroborating it, the author of this article organizes her initial inquiry into distinct but overlapping sections that address and make connections between historical learning perspectives, standardized testing, social efficiency theories, and a reconceptualization of assessment as a tool to enhance student learning.

I have chosen to review this article because I’m interested in exploring the benefits of a social-constructivist framework of assessment, as well as learning new solutions and strategies for applying such methods in a “learning-oriented” classroom (Kohn). Moreover, the author points to a number of problems with traditional methods of assessment and evaluation that I have addressed in blog posts #4 and #5. Her initial inquiry (and the subject of the article’s abstract) clearly speaks to the subject I’m exploring in my blog about the effects of assessment on student learning and motivation.

As she sets up her historical framework of assessment methodology and its relationship to scientific study, Shepard makes clear early on her opinion (supported by an abundance of both historical and up-to-date research and evidence) that assessment should follow the current educational shift from a primarily behaviorist to a progressive or social-constructivist pedagogy. She goes on to describe the evolution of what is now known as “the banking method of education” (Freire), in which the instructor possesses knowledge and deposits it into the student’s mind/memory (insert student-as-receptacle metaphor here). For much of the twentieth century, the dominant ideas about education were based on theories of social efficiency and scientific management. These theories equated schools with factories in that knowledge was produced by academics and scholars, distributed by teachers, and committed to memory by students. This mechanistic view of student learning led to mechanistic methods of assessment, and “precise standards of measurement were required to ensure that each skill was mastered at the desired level” (Shepard 4). This is interesting, and pretty scary, because while they seem so crassly outmoded in a current educational framework, these precise standards are also used in predicting students’ aptitude for certain future endeavors, which has led to the creation of disparate avenues: vocational tracking and college preparation. This type of tracking is still being enacted in high schools across the country, and it is advocated by politicians who question the benefits of a college education for everyone (a goal Barack Obama proposed during one of his speeches).

Shepard presents this historical framework as a segue into her argument against standardized testing. I agree with her that it reflects the mechanistic educational views of yore, and should be drastically changed (especially the high-stakes nature of the tests), if not done away with. Why is it still in place, when schools are no longer thought of as factories, nor students as machines? Citing current research, Shepard outlines the numerous ways in which “objective” testing does not reflect how students actually process information, nor does it correspond with a learning-oriented environment. Learning, she claims, is not a process of memorization, but rather an “active process of mental construction and sense-making” (5). (This idea is not new, but rather borrows from current cognitive theories of learning. Though she seeks solutions primarily from a social-constructivist framework, she also integrates cognitive and social-cultural learning theories and cites the conflicts and areas of overlap between all three.) Furthermore, “high stakes accountability in testing leads to the de-skilling and de-professionalization of teachers…” and “teaches students that effort in school should be in response to externally administered rewards and punishment rather than excitement of ideas” (9).

It’s not difficult to see the importance of such a shift away from testing (and from school-as-factory to school-as-learning-environment). But how is it made? And how can assessment change with the times, as theories of learning do? Because both development and learning are primarily social processes (Vygotsky), there must be a way, Shepard argues, to treat assessment as such. Because there are no simple answers, and a list of strategies wouldn’t get at the pedagogical/historical underpinnings of WHY changes are needed, not to mention what needs changing, the author organizes her reconceptualization of classroom assessment into a “set of principles for curriculum reform,” which she then divides into two main categories: “Form and Content” and “How Assessment Is Used and Regarded.” Here, she addresses the problem of motivation and calls for “more open-ended performance tasks to ensure that students are able to reason critically, to solve complex problems, and to apply their knowledge in real-world contexts” (8).

One of the ways to go about this is to emphasize informal assessment occasions, in which the teacher responds to students in a low-stakes evaluative setting, such as giving feedback on reflective journals that is more substantive than error-focused. The author also suggests that students should have more power in the classroom, a collaborative relationship with the instructor, and transparency from teachers as to what the assessment criteria might be (10). She outlines strategies for aligning assessment with classroom practices, such as:

Dynamic Assessment

Assessment of Prior Knowledge

Use of Feedback

Teaching for Transfer

Explicit Criteria

Student Self-Assessment

Evaluation of Teaching

Such progressive strategies are in line with the majority of those we study in the M.A. Composition and certificate programs, and thus they are all very familiar. However, she has much to say about student self-assessment, which I tend to want to advocate, but have less general knowledge of, strategy-wise. The way she describes student self-assessment leads me to believe it a) makes students more accountable for their own learning, b) establishes a collaborative relationship between student and teacher, c) shows that standards of evaluation are neither “capricious” nor “arbitrary” (10), and d) helps students take ownership of the evaluation process. While I find the concept interesting (and have experience as a student performing self-evaluations), I have not often stopped to consider its pedagogical values (or its promotion of metacognition about assessment!). The author’s consideration of it as a valuable resource for teachers has shown me something that none of the other assessment-oriented articles I’ve read have pointed out: that teachers are not the only ones who should play a role in assessing students’ work.

Shepard concludes her article with the humble acknowledgement that “this social-constructivist view of classroom assessment is an idealization. The new ideas and perspectives underlying it have a basis in theory and empirical studies, but how they will work in practice and on a larger scale is not known.” As a prospective teacher, I am driven toward the theories and studies that appeal to me and meet the learning goals I intend to set for my students. The social-constructivist perspective, with its student-centered classroom practices and collaborative nature, not to mention its view of knowledge as ever-changing and students as co-contributors to that knowledge, appeals to me as a teacher, and I’d like my assessment practices to mirror the ways in which I teach. Shepard also calls for teachers to be transparent about their learning process, and in a sense, model learning for their students (i.e. make the reasons you are redirecting your curriculum or launching into a mini-lesson known to the students, so that your process does not seem random or decontextualized). I love this idea, because as teachers, we don’t have all the answers, and we have to problem-solve as we go along, just like our students.

Grades, Motivation, and Student Self-Worth

How important is self-esteem, or self-worth, in competitive educational environments? Does a student’s self-worth affect his or her motivation and grades?

In her article, “Why Grading on the Curve Hurts,” Kit Richart likens competitive classrooms to a game of musical chairs, in which, early on, the slowest students lose their place in the game: “For one person to win, another must lose.” Likewise, Dr. Marty Covington, a psychology professor at UC Berkeley, insists that “under competitive goals, individuals are likely to continue striving only for as long as they remain successful. No one wants to continue if the result is shame and self-recrimination.”

It makes sense that students who are rewarded by the system, who have learned how to become successful, will thrive. Students who tend to be successful in school are, according to the research of Shirley Brice Heath, the children of “mainstream” and “school-oriented” families, in contrast to other learners who have had less exposure to school literacy practices. Because these children are already familiar with “schooled” learning, they process information in a successful way, according to their teachers and the educational institutions of which they are a part. Students whose backgrounds include oral traditions rather than an emphasis on reading and writing do not have such exposure, and therefore may not appear to be “successful” learners. In accordance with Heath’s study, the authors of this week’s article, “Variability in Reading Comprehension,” quote James Gee on the cultural and historical situatedness of students: “An awareness of how members of particular discourse communities construct their identities as readers (through their ways of behaving, interacting, valuing, thinking, believing, speaking, reading, and writing) is one important step in understanding variability in readers.”

As in the earlier grades, certain students arrive at the university fully prepared for the academic work that will be required of them. Many others will be required to take developmental courses in order to read and write at the college level. “Developmental” being a more PC word than “remedial,” perhaps these students don’t feel as stigmatized as they once did. However, they are still starting out on a lower rung than other freshmen, and they must complete more course hours in English in order to graduate.

The question of student motivation at the college level harks back to Heath’s study of elementary school children, because as students learn to see themselves as naturally, or intellectually, deficient, it will not seem as if other avenues of learning exist. Grades reinforce this perception, and competitive classrooms discourage some students as much as they encourage others. Grades fail to take into account students’ backgrounds, and often they do not reflect creativity; in fact, in some cases students are graded poorly for disagreeing with (or merely diverging from) the teacher’s point of view.

Students who come to college unprepared will have more trouble adapting to academic life, and teachers will continue to express frustration with them. They will lack the self-monitoring skills they need to fully comprehend what they are reading. As Jodi Holschuh and Lori Aultman point out in Chapter 6 (Comprehension Development) of the Handbook, “In an environment where 85% of all learning comes from independent reading, college students who are not metacognitively aware will probably experience academic problems.”

The traditional practice of giving letter grades, it seems, is not effective or motivating for all students. In fact, it singles certain of them out for failure. What I want to know is: are there grading systems that serve students who do not arrive at school ready to acclimate to an academic environment? Because grading is an institutional necessity, and as an instructor I will not be able to avoid giving grades, I want to figure out how grades/types of assessment can be motivating, and not just to students who are best prepared for learning.

As I read about different types of assessment, I keep coming across the famous/infamous Portfolio. According to Jere E. Brophy, “The portfolio approach reflects several motivational principles by focusing attention on quality standards rather than just grades or scores, incorporating assessment data as informative feedback, encouraging students to become reflective about their work and oriented toward improvement over time” (Motivating Students to Learn 62).

If portfolio grading is such a great method, why are so many teachers in the Comp program against using it? Why do no current GTAs seem to use it? I’ve heard it is more work for the teacher, but what are some other arguments against it? Everything I’ve read seems to reinforce its awesome effects.

More on motivating methods of assessment to come….

The “Assessment Juggernaut”: How Grading Affects Student Engagement

In this post, I will attempt to tie in research I’ve done on the subject of assessment with what I know about student engagement and motivation. Referencing both academic and not-so-academic sources, I will examine current attitudes on grading, the role of creativity in student engagement, and traditional vs. alternative methods of assessment.

Grades are a fact of the educational system. They determine how teachers treat you, whether or not you’re allowed to play sports, and whether or not you’ll get into the college/graduate program of your choice. However, certain critics of traditional assessment & grading methods propose we do away with grades altogether. (Imagine students cheering in the background.) But what was once (and probably still is) considered a crazy, utopian concept is gaining popularity among more progressive teachers, such as the ones at my daughter Teagan’s elementary school, which was originally formed by an anarchist collective in the 1950s. They do not give grades, and yet the students learn, and they learn well. My daughter is already doing more advanced math than I can help her with, and her reading and writing skills are improving by the day.

Her teachers’ ideas–as well as those of others who want to relinquish the burden of assessment–are justified in the works of Alfie Kohn, whose article, “The Case Against Grades,” I found hanging by a tack from the bulletin board in the school’s main office.

The quote on the first page struck me immediately:

“I remember the first time that a grading rubric was attached to a piece of my writing…. Suddenly all the joy was taken away. I was writing for a grade — I was no longer exploring for me. I want to get that back. Will I ever get that back?”

— Claire, a student (in Olson, 2006) (Kohn)

It sounds like a lament, a cry for help… “Will I ever get that back?” And yet I relate to the sentiment of this quote entirely, because once upon a time, I was only “writing for a grade,” and I have to say, it sucked (and so did my writing). My daughter knows nothing of this, and she writes quite happily, churning out page after page of school writing assignments with what I can only describe as pleasure.

Later on in the article, Kohn references scientific and psychological studies conducted in the 1980s and 90s to ground his anti-assessment assertions with factual evidence. One salient result of one such study was this:

“Grades tend to diminish students’ interest in whatever they’re learning. A ‘grading orientation’ and a ‘learning orientation’ have been shown to be inversely related and, as far as I can tell, every study that has ever investigated the impact of receiving grades (or instructions that emphasize the importance of getting good grades) on intrinsic motivation has found a negative effect” (Kohn).

Speaking of disappointed and disillusioned students, a rant I found on a university student’s blog (she calls herself “Lazy Wanderer”) expressed some of the same sentiments as Kohn. In particular, her rant focuses on grading and rubrics, and apparently she’s had some negative experiences with both. In response to the limitations of essay rubrics, she makes a very good point: “While you can teach someone the mechanics [of] writing, and give them advice on how to improve their own individual voice, you simply cannot teach someone how to be creative.”

Now, I know a number of Creative Writing professors who have felt called upon to defend their positions. In a writing exercise handout from the Iowa Writers’ Workshop, for instance, Professor Sergei Tsimberov asserts: “It’s fairly safe to conclude that there are certainly ways to teach the craft if not the art of creative writing, and although the pedagogy of any art form can be a delicate undertaking…”

But how do professors in Creative Writing programs actually grade? (And this is getting back to Lazy Wanderer’s point about the lack of emphasis on creativity in rubrics.) From experience, I know that they grade on taking creative risks, making an effort to improve writing from draft to draft, attendance, participation, and meeting length requirements. But they do not (as far as I know) grade content in any way. For that reason, it seems the opposite of an English teacher’s grading system, which in many cases focuses very little on creativity or effort (the CW teacher’s #1 focus).

I imagine one of the differences between Kohn’s “learning orientation” and “grading orientation” is an emphasis on creativity. Creativity is not often rewarded by grades (especially not in rubrics), and it cannot really be evaluated, percentage-wise. A big question then is: Is creativity important to learning? I personally believe students would be more engaged if creativity were made a bigger deal in English and writing curricula, and I think Ms. Lazy Wanderer would agree.

Where am I going with this? Well, I really want to find a way to promote the idea of a “learning orientation,” as well as foster creativity & originality, in my assessment practices. I can’t override the system–it seems to be sticking around for the foreseeable future–but I’d like to find a way to change current practices to suit my instructional goals. Because I’m going to teach soon, I need to structure assessment in my classes in a way that reflects my teaching philosophy, which takes into account ideas like Kohn’s in the context of the current system, and, more importantly, the C-word. (No, not that. Creativity.)

Since this post is getting really long, I’ll wait to address student self-assessment, an interesting alternative to traditional grading, in Blog #6.


Kohn, Alfie. “The Case Against Grades.” Educational Leadership, Nov. 2011.

Anonymous. “Creative Essay Rubric PDF.” Retrieved March 28, 2012, from

Lazy Wanderer. “Rant on Grading.” Retrieved March 28, 2012, from

Blogging about Blog Topics: Emphasizing Learning over Assessment in the Classroom

All week, I’ve gone back and forth about possible topics, trying not to have too specific or too general a focus or end up reiterating what’s already been said in every class I’ve had thus far. I’m interested in a world of issues, such as student motivation, metacognition, teaching critical literacy, and using technology in the classroom. Ultimately, I want to involve all these as subtopics in my blog…

What I’m mainly interested in, which incorporates theory as well as teaching strategies, is how to get students to recognize that they are responsible for their own learning, and that learning is NOT about rote memorization or cramming for tests (although, unfortunately, performance of these skills is still a primary consideration for grading in many departments). Rather, learning is about developing new ways of seeing, acting, and interacting within the academic–and various other–discourse communities. I want to find ways to separate the ideas of learning (as a holistic concept) and performance (of discrete skills), and I think that in a reading class, in which assessment is, by its precise, statistical nature, difficult to implement, it can be done.

Students at the college level have consciously chosen an academic path… but how many of them have really, deeply gone into the reasons why? Perhaps it’s only for the chance of securing a decent career, a very good reason, for sure. But students who have gone so far as to get into college must realize they have some level of agency, which opens the door for further investigation of that agency…

Ultimately, by creating a classroom community and applying strategies to help students stop mining for the main idea or trying to tease some “hidden meaning” out of the text (and instead make meaning for themselves through reader-text transaction), I want students to see reading in ways they didn’t before. I want them to find ways to enjoy reading. I know that’s a lofty goal for a blog. But I think that by placing “assessment” in a context in which it is not synonymous with “learning,” but is instead perceived as an institutional requirement (and I’m still learning how to go about doing that while still fulfilling those requirements), I can help students to see that learning is not the same as getting an A or doing “what the teacher wants” or “playing the game.” I’d like to try to debunk those high-school notions and help students understand that reading, writing, and thinking are interrelated, important to all aspects of learning, and fulfilling in and of themselves.

Of course, one of the primary challenges of such a blog topic would be figuring out how best to grade…

Blogging comes highly recommended by this week’s pedagogues

As I read the articles on various types of literacies (including multiliteracies and the dominant literacy of Standard English in academia), I began to ask myself how MUCH is TOO MUCH technology in the classroom. I’ve noticed a few fellow bloggers have addressed this question as well. My concern is less about reconciling educational theory with what Pawan and Honeyford refer to as “New Literacies” and more about how I rarely think of technology as an educational tool for college students.

Perhaps this is because 99% of the research my nine-year-old daughter does for her school projects is conducted on Wikipedia rather than other sources. And for me, this works out just fine; her research requirements are often simple, and it’s easy for me to catch any misinformation or assumptions on the part of the author(s), because I have enough background knowledge to do so. But for college students, relying on such sources of information (and quite possibly taking them at face value, without critically questioning the authority of the text or seeking out other sources to confirm or disprove their assertions) prevents them from digging deeper into what they are exploring.

Why is that? I mean, after all, Wikipedia is an EXCELLENT example of socially constructed knowledge. But is that how students see it? I don’t know… I tend to think students use the site because it’s easy to find stuff on Wikipedia. It’s instant. It’s satisfying. It provides answers. For these reasons, there’s not a lot of critical reflection POSSIBLE in this type of internet-based research. Also, when information can be accessed so easily, at the touch of a button or whisk of a finger across a screen, it does not inspire much motivation on the part of the student.

I don’t know how this turned into a Wikipedia rant. I like Wikipedia; I just think it is too often misused. Looking something up on Wikipedia does not mean students are aware of, or engaged in, inquiry-based learning.

Of course, I do like the idea of collaborative technology, especially because of its focus on non-school literacies (Reynolds & Werner). However, in-class communities are, to me, a far more valuable learning tool than blogging, posting comments on iLearn, or even socially constructing knowledge via wikis. When students are present in the discussion, in real time, they are having an actual lived experience, which cements learning in ways distance learning does not.