Interesting article related to our discussion of motivation:
Shepard, Lorrie A. (2000). The Role of Assessment in a Learning Culture. Educational Researcher, 29(7), 4-14.
Should assessment practices be changed to reflect the current shift in classroom practices, away from “the banking method” of education (Freire), toward a more collaborative, student-centered approach? Working to educate current and prospective teachers about this shift, as well as the research corroborating it, the author of this article organizes her initial inquiry into distinct but overlapping sections that address and make connections between historical learning perspectives, standardized testing, social efficiency theories, and a reconceptualization of assessment as a tool to enhance student learning.
I have chosen to review this article because I’m interested in exploring the benefits of a social-constructivist framework of assessment, as well as learning new solutions and strategies for applying such methods in a “learning-oriented” classroom (Kohn). Moreover, the author points to a number of problems with traditional methods of assessment and evaluation that I have addressed in blog posts #4 and #5. Her initial inquiry (and the subject of the article’s abstract) clearly speaks to the subject I’m exploring in my blog about the effects of assessment on student learning and motivation.
As she sets up her historical framework of assessment methodology and its relationship to scientific study, Shepard makes clear early on her opinion (supported by an abundance of both historical and current research and evidence) that assessment should follow the broader educational shift from a primarily behaviorist to a progressive or social-constructivist pedagogy. She goes on to describe the evolution of what is now known as “the banking method of education” (Freire), in which the instructor possesses knowledge and deposits it into the student’s mind/memory (insert student-as-receptacle metaphor here). For much of the twentieth century, the dominant ideas about education were based on theories of social efficiency and scientific management. These theories equated schools with factories in that knowledge was produced by academics and scholars, distributed by teachers, and committed to memory by students. This mechanistic view of student learning led to mechanistic methods of assessment, and “precise standards of measurement were required to ensure that each skill was mastered at the desired level” (Shepard 4). This is interesting, and pretty scary, because while these precise standards seem crassly outmoded in a current educational framework, they are still used to predict students’ aptitude for certain future endeavors, which has led to the creation of disparate avenues: vocational tracking and college preparation. This type of tracking is still enacted in high schools across the country, and it is advocated by politicians who question the benefits of a college education for everyone (as Barack Obama proposed during one of his speeches).
Shepard presents this historical framework as a segue into her argument against standardized testing. I agree with her that it reflects the mechanistic educational views of yore and should be drastically changed (especially the high-stakes nature of the tests), if not done away with. Why is it still in place, when schools are no longer thought of as factories, nor students as machines? Citing current research, Shepard outlines the numerous ways in which “objective” testing does not reflect how students actually process information, nor does it correspond with a learning-oriented environment. Learning, she claims, is not a process of memorization, but rather an “active process of mental construction and sense-making” (5). (This idea is not new, but rather borrows from current cognitive theories of learning. Though she seeks solutions primarily from a social-constructivist framework, she also integrates cognitive and sociocultural learning theories and cites the conflicts and areas of overlap among all three.) Furthermore, “high stakes accountability in testing leads to the de-skilling and de-professionalization of teachers…” and “teaches students that effort in school should be in response to externally administered rewards and punishment rather than excitement of ideas” (9).
It’s not difficult to see the importance of such a shift away from testing (and from school-as-factory to school-as-learning-environment). But how is it made? And how can assessment change with the times, as theories of learning do? Because both development and learning are primarily social processes (Vygotsky), there must be a way, Shepard argues, to treat assessment as such. Because there are no simple answers, and a list of strategies wouldn’t get at the pedagogical/historical underpinnings of WHY changes are needed, not to mention what needs changing, the author organizes her reconceptualization of classroom assessment into a “set of principles for curriculum reform,” which she then divides into two main categories: form and content, and how assessment is used and regarded. Here, she addresses the problem of motivation and calls for “more open-ended performance tasks to ensure that students are able to reason critically, to solve complex problems, and to apply their knowledge in real-world contexts” (8).
One of the ways to go about this is to emphasize informal assessment occasions, in which the teacher responds to students in a low-stakes evaluative setting, such as giving feedback on reflective journals that is more substantive than error-focused. The author also suggests the students should have more power in the classroom, a collaborative relationship with the instructor, and transparency from teachers as to what the assessment criteria might be (10). She outlines strategies for aligning assessment with classroom practices, such as:
Assessment of Prior Knowledge
Use of Feedback
Teaching for Transfer
Evaluation of Teaching
Such progressive strategies are in line with the majority of those we study in the M.A. Composition and certificate programs, and thus they are all very familiar. However, she has much to say about student self-assessment, which I tend to want to advocate, but have less general knowledge of, strategy-wise. The way she describes student self-assessment leads me to believe it a) makes students more accountable for their own learning, b) establishes a collaborative relationship between student and teacher, c) shows that standards of evaluation are neither “capricious” nor “arbitrary” (10), and d) helps students take ownership of the evaluation process. While I find the concept interesting (and have experience as a student performing self-evaluations), I have not often stopped to consider its pedagogical values (or its promotion of metacognition about assessment!). The author’s consideration of it as a valuable resource for teachers has shown me something that none of the other assessment-oriented articles I’ve read have pointed out: that teachers are not the only ones who should play a role in assessing students’ work.
Shepard concludes her article with the humble acknowledgement that “this social-constructivist view of classroom assessment is an idealization. The new ideas and perspectives underlying it have a basis in theory and empirical studies, but how they will work in practice and on a larger scale is not known.” As a prospective teacher, I am driven toward the theories and studies that appeal to me and meet the learning goals I intend to set for my students. The social-constructivist perspective, with its student-centered classroom practices and collaborative nature, not to mention its view of knowledge as ever-changing and students as co-contributors to that knowledge, appeals to me as a teacher, and I’d like my assessment practices to mirror the ways in which I teach. Shepard also calls for teachers to be transparent about their learning process, and in a sense, model learning for their students (i.e. make the reasons you are redirecting your curriculum or launching into a mini-lesson known to the students, so that your process does not seem random or decontextualized). I love this idea, because as teachers, we don’t have all the answers, and we have to problem-solve as we go along, just like our students.
The following post is an analysis of the article:
“Ability Differences in the Classroom: Teaching and Learning in Inclusive Classrooms” by Mara Sapon-Shevin, from Common Bonds: Anti-Bias Teaching in a Diverse Society, edited by Deborah A. Byrnes and Gary Kiger.
I chose this article because it is directly related to the issue I am dealing with — how to teach a class full of students with different reading abilities and rates — and because the author provides a lot of interesting examples, ideas, and support for her conclusions about this topic.
The overall purpose of the article is to promote a particular kind of classroom experience, which the author refers to early in the article as a class based on the idea of “purposive heterogeneity,” or “full inclusion,” as it is also called. This is a kind of class that will, according to the author, “…embody the belief that diversity is a positive force in children’s and teachers’ lives and should be embraced, rather than ignored or minimized” (Sapon-Shevin, p. 37).
(It will be noted that the author intended this essay for use in an elementary setting, hence the repeated use of the word “children”; however, in the mind of this reader, much of what is discussed in the essay is equally applicable to higher grade levels.)
As the author says, promoting inclusive instruction is a relatively radical idea in an educational climate in which segregating students according to ability remains the norm. This situation is further exacerbated by political and institutional forces calling for increased standardization of curricula and tests, limiting teachers’ scope within the class and promoting the pedagogical segregation already mentioned. In this climate, the author feels it is important to explain and promote her belief in full inclusion classrooms.
While this is more of a descriptive than a strictly research-based essay, the author does cite a number of other researchers and findings in support of her ideas. This reader was particularly shocked to learn of one study cited in the article, which found that “…homogeneous grouping does not consistently help anyone learn more or better” (Massachusetts Advocacy Center, 1990; Thousand, Villa, & Nevin, 2002). The effect was a little like scales falling from my eyes: the unquestioned way of conducting education (even in post-secondary education, and especially in FYC) was not so unquestionable after all.
As mentioned, though, much of the paper is descriptive, with many of these descriptions and other forms of advice coming from the lived experiences of teachers in the field. The author frequently cites such experiences, for example those of Patty Feld, whom the author describes as a teacher from a small rural school who employs various full inclusion methods in her classroom to positive results. Despite some inclusion of anecdotal evidence, which of course could be subject to professional skepticism, the author never strays too far from citations of published works and research, including her own.
The overall thrust of the article, then, is both to describe the methods of full inclusion classrooms and to demonstrate support, anecdotal and analytical, for their effectiveness. The author begins by questioning some long-held myths regarding inclusive and non-inclusive classrooms, including the supposed merits of the homogeneous classroom, the willingness of students to work with peers of various levels, and the relative ease or difficulty of teaching homogeneous and heterogeneous classes. The author then goes on to describe the various aspects of teaching an inclusive class, including activities, peer-tutoring, multi-level teaching, and the adaptation of appropriate materials and subjects. The author then speaks about the social skills necessary to conduct an effective inclusive classroom, and concludes by arguing for the importance of these kinds of classrooms.
In the opinion of this reader, this was a very informative and useful essay. The author was able to achieve a nice balance between describing lived classroom experiences and citing professional research and works (and this despite the fact that this was not, strictly speaking, a research essay). The conclusions were well supported through both the citations and the author’s eloquent evocations of current pedagogical and political trends in education. Though I am still just beginning to wrap my head around the notion that homogeneous classrooms may in fact be detrimental to students’ success, I am nevertheless intrigued by this idea, and I think the author makes a persuasive argument in favor of at least considering heterogeneous classrooms as an alternative. In any case, I think the reality is that we will face heterogeneous classrooms of greater or lesser degree, whether or not they are labeled as such. Keeping this in mind, I am grateful to have come across this article and look forward to implementing some of its ideas, both in my continued research on this subject and in my own classes in the future.
Mealey, Donna L. (2003). “Understanding the Motivation Problems of At-Risk College Students.” In Norman A. Stahl & Hunter Boylan (Eds.), Developmental Reading: Historical, Theoretical, and Practical Background Readings (pp. 208-213). Boston: Bedford/St. Martin’s.
In Blog #5, I focused on the impact of highly structured versus unstructured classroom environments on student motivation. In this week’s exciting sequel, I look at approaches to fostering reading motivation in high-risk populations.
I think this topic is particularly salient for those of us who plan to teach at community college, or at any diverse public college or university, in fact. Every teacher in these types of settings will encounter students who have had less than stellar academic experiences that have negatively affected their academic confidence. What’s a teacher to do?
Donna L. Mealey asserts in “Understanding the Motivation Problems of At-Risk College Students” that students with a history of poor academic achievement can improve their performance if they can learn to:
- Take responsibility for their own learning,
- Recognize that their success or failure is determined by the level of effort they invest, and
- View themselves as college learners.
Mealey argues that low-achieving students arrive with negative beliefs about themselves that impede their success. If students do not believe they can be successful, they will lack the motivation to work hard enough to be successful. She cites the work of attribution theorists who argue that students will be motivated when they attribute their successes and failures to the amount of effort they invest – rather than to their innate ability, luck, etc. If students attribute their success or failure to luck, genetics, or other factors outside their control, their sense of personal control decreases. The problem here for educators is that if students do not perceive a pay-off for hard work, if success is all up to chance and circumstance, students are not likely to put in much effort. The probable result? Demoralized, anxious, unmotivated, failing students who are very likely to feel like helpless victims in the education system.
This scenario makes sense to most of us, both as people who have failed at things, and as educators who have worked with students who struggle academically. I think most of us can relate to the desire to avoid situations where we feel like we are failing and have no control over engineering a better outcome. Despite my general agreement with Mealey’s characterization, I was a little surprised at the lack of support she offered for her assertions. Isn’t there a lot of research out there on this topic? I would have loved to have seen a bit more summary of the literature on the links between achievement and motivation…
She does a better job, however, of providing supporting evidence for the strategic learning approach she advocates to help correct this unmotivated student scenario. Strategic learning here is defined as a combination of learning and metacognitive strategies. Essentially, Mealey argues that because motivation is a function of attribution (students are motivated by success, and success is largely a function of effort), motivation can be increased if students experience success as a result of effort. She suggests that the negative loop of low self-esteem, avoidance, and failure can be corrected if students strive for competence by exerting effort and persistence; achieve success as a result of their efforts; and therefore develop confidence in their ability to succeed in academic tasks.
Cool, I’m with her there. But wait, what about the metacognitive component? Did we talk about that? Well, it turns out it’s all about control too. Student motivation is also, according to Mealey, contingent upon students’ sense of control over their own learning process. You can’t have strategic learning without student investment in the process. To help students become aware, metacognitive learners, she suggests the use of journaling techniques that can help students increase their awareness of their beliefs about learning and that allow for the exploration of their motivations, attitudes, time management, study skills, and emotions. The ultimate goal here is to put students in the driver’s seat of their own learning process.
As Mealey explains, “Metacognitive development is important because students need to monitor their comprehension and become aware of when they are experiencing difficulties with academic material and when to use appropriate fix-up strategies. Motivation is predicted to improve because of the self-control implicit in their awareness and subsequent actions and the self-management of their resources. If students are shown that strategy use will improve their achievement, they can become convinced that their efforts will make a difference and that their learning is under their control.”
This all makes good sense. Yet, I am a little disappointed in Mealey’s methodology. She did not conduct any new research on her topic, and her conclusions and recommended learning strategies are derivative. Moreover, if her intent was to provide a review of the literature, she reviews a pretty paltry number of sources. On the positive side, however, she does a nice job of pulling together attribution theory and metacognitive learning strategies in a succinct and practical overview. Sounds great! I hope it works!
Blog # 6 – Investigation of a Five-Year Longitudinal Study of Parental Involvement in the Development of Children’s Reading Skill
Sénéchal, M., & LeFevre, J. (2002). Parental Involvement in the Development of Children’s Reading Skill: A Five-Year Longitudinal Study. Child Development, 73(2), 445-460.
In my quest to find out whether improved literacy skills in parents impact literacy success in their children, nearly every article I found cited the longitudinal study by Sénéchal and LeFevre, so I decided that it would be a perfect article to examine more closely. This study was quite complex, and made me wish on more than one occasion that I had paid better attention in the statistics course I took over ten years ago! However, after digging through all of the nitty-gritty details, I realized that this study does begin to answer many of the questions I have been asking, and points clearly to other areas where research is lacking and where I would like to do a closer examination.
The article I am reviewing for this blog post presented the findings of a five-year longitudinal study of 168 children. The study investigated the relationship between early home literacy experiences and reading achievement in the third grade. The goal of the study was to “examine the pathways from children’s early knowledge and experiences through to fluent reading, with a focus on how parental involvement is related to the development of reading skills” (Sénéchal & LeFevre, 2002, p. 445). One of the most important distinctions this article makes, in terms of relating to my area of interest, is between informal and formal literacy activities. Informal literacy activities are those for which the primary goal is the message contained in the print, not the actual words themselves. For example, if a parent reads a fairy tale to a child with the purpose of telling a good story, or teaching a moral lesson, this would be considered an informal literacy activity. On the other hand, formal literacy activities are those where the parent and child focus on the words, or the print in the story, more than the meaning. For example, a book focusing on ABCs or rhyming words is read to teach formal literacy. However, it is possible to teach both formal literacy and informal literacy while reading the same story, simply by engaging in different activities. This study examined all of the different literacy practices that the parents in the study completed with their children, and the impact these activities had on their children over the course of the study’s five years.
To determine how often they engaged in informal literacy activities with their children, parents were asked to report on how many books they read to their children. To test for accuracy, parents were asked to identify popular children’s book titles and authors, with false titles and authors thrown in to ensure parents did not simply select books that they had never actually read with their children. I question the accuracy of these results. It seems to me this might be easy to do for books you have read over and over again with your children, but there is no guarantee that a parent can recall the title and author of a book he or she may have picked up from the local library and read with his or her child only one time. Additionally, some parents may be better at recalling titles and authors of storybooks, while others simply remember the content. However, I cannot think of a more reliable way of capturing this data, short of recording parents reading to their children at home, which would be extremely impractical to do on a large scale.
To determine how often parents focused on formal literacy activities, parents were simply asked to report on how frequently they taught their child about reading and writing words, or the sound and letter correspondences within these words. Once again, this does not seem like the most accurate way to record formal literacy activities, but gathering “hard data” would be nearly impossible when looking at such a large number of children for over five years.
One of the most interesting findings of this study, and one that has a great deal of impact on my research, is that there is no correlation between the frequency with which parents engage in informal literacy activities and the frequency with which these same parents engage in formal literacy activities. In other words, just because a parent reads a lot of books to his or her children does not necessarily mean he or she will focus on the words and the sounds in the English language, or vice versa.
The main finding of this study was that parent-child reading involving informal literacy activities was directly related to the development of receptive language, especially when children reached first and third grade. Additionally, parent-child reading involving formal literacy activities was directly related to the development of emergent literacy, especially in the pre-K and Kindergarten years. These factors were determined by giving emergent literacy tests to the children in pre-K and Kindergarten (focused on alphabet knowledge, decoding, consonant-vowel awareness, and invented spelling), as well as in first and third grade (focused on reading comprehension, speed, and ability), and comparing these results to the earlier literacy activities reported by their parents.
As a result, the study argues that simply reading to children should have some impact on their reading comprehension as they get older, but may not be as much of a benefit in helping them learn to read or recognize sounds and letters in pre-K and Kindergarten. Similarly, simply focusing on the structure of the language and the words on the page might result in a child learning to read at a younger age, or becoming a better speller, but may not necessarily impact his or her reading success later in life. So, you may be asking yourself, what is the point of this whole study? From what I can tell, it is of vital importance for parents to engage in BOTH informal and formal literacy activities while their children are growing up.
The main reason this article did not give me everything I could ever dream of, and more, is that it focused on middle- and upper-middle-class families. All 168 children in the study were native English speakers, which means none of this data speaks directly to the question I am investigating. Although the study did account for literacy differences between parents, namely their education level and how much they enjoy reading themselves, it did not account for parents who might enjoy reading but are unable to do it because of language barriers. Additionally, it did not speak to the low-literacy, non-native English speaking population I am focused on, which may or may not have easy access to books to read to their children, and may or may not be able to afford many books to keep in their homes.
Speaking personally, I’ve always felt better about work I must perform when there’s been an element of choice involved. Even if faced with a range of unappealing options, I still get to make the best of the situation if I get to pick one I find least difficult, or at least arrange them in an order I prefer.
As I’ve mentioned before, in both my previous posting and the in-class literacy activity, choice is very important to me, and has also seemed helpful in my own education, so I believe the idea could likewise be extended to other students. This concept is also one that has undergone rigorous examination, in a variety of carefully conducted studies, by individuals more qualified than myself, so it’s not merely my personal belief in the idea or my curiosity about whether it extends to others.
Flowerday, Schraw, and Stevens’ “The Role of Choice and Interest in Reader Engagement” investigates this concept quite thoroughly. In this article they note that psychologists and educators alike agree on the benefits of having a choice. Various sources they have examined state that “Students learn more or perform more efficiently when given choices” and “report more enjoyment in learning” (94), and that teachers “believe it increases motivation, effort, and learning” (94). Their work even goes further, asking an important question about choice in reading: is it the act of choosing itself that improves performance, or is it the level of interest in what one has chosen? Apparently many previous studies conflated the two, rather than clearly separating the two concepts and examining their possible effects on one another (and on the affected individual). This actually caught me by surprise, because I had never clearly thought of it in that way. The most clearly I had ever articulated the matter was that I picked the things that interested me most, and enjoyed my work all the more for being able to make the choice.
There is continuing debate over whether or not choice actually improves performance, but overall attitude, motivation, and engagement with the subject material definitely improve (95). Meanwhile, interest in the material improves engagement, recall of the text’s main ideas, and the ability to critically apply that information (96). (As an aside, students with lower interest were better at memorizing micro-details and raw data—facts and figures—but no mention is made of critical application.) Interest is also subdivided into topic and situation; that is, interest in the material itself, or in its presentation (novelty, structure, etc.). Both of these subcategories were seen as positive factors for engagement and learning.
With these factors in mind, Flowerday et al. performed a set of experiments in which they gave a pool of students a choice between two packets of reading materials (unbeknownst to the students, the packets were identical either way), followed by an interest survey; a multiple-choice review test; a pair of short essays to respectively describe the content of, and their reaction to, the text; and a final attitude checklist. A second pool of students was given either one packet or the other, with no choice allowed (but with the remaining materials and procedures the same). The experiment was repeated with a few alterations (different reading material and related multiple-choice test questions, with a possible effect on situational interest). Scores were compared for the various tests and essays, along with measures of topic interest, situational interest, and attitude. In either case, by keeping the contents of the reading secret (since both packets held the same materials and were simply labeled A or B), the study removed the possibility of interest as a confound and was able to look purely at whether choice itself was a useful factor in determining student efficacy (and whether it had an effect on interest as well).
In both experiments, they found that students in the choice and non-choice groups scored closely with each other on all tests (with the exception of the non-choice students scoring higher on the descriptive essay, possibly related to the phenomenon of non-choosing students connecting to data rather than critically applying information). The overall conclusion was that situational interest did have an effect on attitude and engagement, while topical interest was a much lesser factor. Choice itself, however, had very little measurable effect on performance (although the article does concede that low-stakes testing like these experiments may not be the best measure of the efficacy of choice).
For me (and apparently for many educators, as the article concluded), this goes against some of what I believed I knew. The difference between choice and interest also helped me understand that, yes, I choose mostly what interests me most (or disinterests me least); if I were given a blind-choice test like the students above, I might actually be annoyed at not knowing what I was getting into (or feel like it’s not much of a choice when I don’t know what my options really are). Choice may not necessarily be the end-all-be-all in improving student reading and performance. In some cases, choice might even produce ‘option paralysis,’ discourage students, or perhaps even distress those who expect more direct control from their instructors (though this may be a culturally situated idea). Interest, meanwhile, seems to be what helps students do better. What can we take from this? At the very least, that educators have a responsibility to lead, not just open the gates and hope their charges find their way.
Further examination of choice vs. interest is definitely in order, but this article and its related experiments did go a long way toward separating the two. Educators are not one-hundred-percent responsible for finding ways to motivate their students and improve their attitude and performance, but they do share some of the burden. Choice might still be useful (within a very narrow range, kept in appropriate context, and used judiciously), but at the very least, judging by this study, keeping students interested is a large part of the process.