
Putting it together

I was really surprised by the article that I ended up reviewing because it referred only tangentially to reading assessment and was rooted more in the field of psychology. The article was surprisingly accessible, but more importantly it helped focus my attention on the process of reading. That in turn helped me identify what I truly hope to measure in the assessments I plan for my reading and writing students.

As I mentioned, the assessment I currently use would be considered holistic in that it's contextualized and asks students to complete an authentic task. They pre-read an article, then summarize it. I use the annotations they make during pre-reading to assess their pre-reading/reading skills and their summaries to assess their comprehension. The problem I faced with using summaries is that they depend entirely on students' writing skills. I didn't grade their summaries for grammar or spelling, but rather for how well they were able to write about their understanding of the main idea. This dependence on writing made me somewhat wary of how valid the test was in terms of giving me information about their reading comprehension.

I have all of these books on reading, and they all offer a host of options when it comes to assessing reading, but until I read the article about Dynamic Text Comprehension, I don't feel I had an appropriate lens with which to evaluate the assessment tools they present. Often the purposes of the assessments are not clearly articulated. In many cases, the creators of the tasks simply assert that they are not traditional, as though that were sufficient to make them useful. I want an assessment that provides washback; that is, one that gives students some insight into where they may have a gap in their process. It should also focus specifically on their reading process. Though I understand that writing is a necessary part of reading instruction, I felt that all of my assessments relied on students' ability to write, which may have had a disproportionate effect on their performance.

Ultimately, I want the best of both worlds. My students have many more traditional assessments ahead of them, and I want them to be able to maintain the test-taking skills they'll need to reach the next level of their education. But I want the tests to provide more useful information to me and to them about where gaps in their knowledge or process exist so that we can address them. To this end, I want it to look like a duck, but not to quack like one.

When I went back to look at the assessments available to me, I had a fresh way of looking at them, and this is what I decided on. The first comes from Zully G. Tondolo and is included in the book New ways of classroom assessment. Students are given a passage with sentences that are in the wrong order; they must first order the sentences, and then they answer a few short-answer questions. Here's an example from the book:

Directions: Number the sentences in logical order from 1 to 6.

A. Devi and the Tree

_____Sometimes she also liked to climb the trees and sit there in her secret place.

_____One of the trees was Devi’s special reading tree.

_____Five hundred years ago, a young girl called Devi lived in a town in the mountains in India.

_____Her family’s house had a big garden.

_____In later life, Devi liked to sit under the beautiful trees in the garden and read a book.

_____Sometimes Devi and her friends had picnics or played games together there.

The content may seem simple; however, I think that is one of the benefits of this task. It can easily be adapted to the level of my students, which varies a great deal from semester to semester. In fact, what I like about this activity is that I can control the content, vocabulary, grammar, and other features. Following this activity, I can check in with students to find out why they ordered the sentences the way they did and might be able to see what's helping or hindering them.

The next activity is a little more complicated, but it also focuses on reading with little need for students to write. It comes from the same text and was contributed by a teacher named Patrick Rosenkjar. For this one, it is important to use a self-contained reading passage, a short article of ten to fifteen paragraphs. The teacher paraphrases between one-third and one-half of the paragraphs so that they accurately reflect the main ideas but carefully avoid key words from the original. Students must then decide which paragraphs were paraphrased by the teacher.

I particularly like this last assessment because it, too, can be modified depending on students' levels, and it also reinforces other skills they learn later in the semester. It asks students to understand the text in order to complete the task without taking on the additional cognitive burden of writing. Like Amber said in her post, I'm not in the business of reinventing the wheel. The activities are out there, but now that I have a better understanding of what I actually want to test for, I can judge which of these activities will suit the goals established in my own class.

What am I missing?

Rapp, D., & van den Broek, P. (2005). Dynamic text comprehension: An integrative view of reading. Current Directions in Psychological Science, 14(5), 276. http://www.jstor.org/stable/20183043

The primary problem I've noticed in the literature I've read is that it all points to the use of alternative assessments for reading: portfolios, annotation, summary writing. Part of the problem for me as a TESOL instructor is that some of these alternative assessments don't address one of my primary concerns: it is possible that students are comprehending more of the text than they are able to communicate through writing, even low-stakes journaling. I know this for several reasons, not the least of which is my own language learning experience, which I described in a previous post. This is particularly true with difficult texts, in which the reader uses context to guess the meaning of difficult vocabulary.

I felt slightly vindicated when reviewing a bit of reading I did in English 709. In From Reader to Reading Teacher, Aebersold and Field (2006) discuss challenges faced by teachers in designing assessment tools for ESL/EFL students. The question of validity is chief among them; that is, are you truly testing the thing you are seeking to test (176)? For example, short answer questions or summary writing could end up testing students' vocabulary, knowledge of and ability to write using English rhetorical conventions, and/or cultural knowledge. They also support my view that teachers should recognize that students will face a variety of assessments throughout their academic careers, and that the use of assessments in the classroom should reflect that variety and prepare students to be successful (177). With this, I felt satisfied that traditional assessments do have a part to play within the context of my student population and the goals of our English for Academic Purposes (EAP) program.

But how could I design assessments that help prepare students for such traditional assessments while remaining true to my goal of giving them authentic language tasks and opportunities for washback? One problem that presents itself is the competing views of reading (Rapp & van den Broek, 2005). One is the memory-based perspective from Gerrig and McKoon, in which items a reader processes activate other information; in TESOL we often refer to this as schema or background knowledge. In this view, the process occurs on a subconscious level, during on-line processing (the simultaneous processing you do while decoding). Think about this:

Which is correct to say, “The yolk of the egg is white” or “The yolk of the egg are white?”

As you read this sentence, you activate your egg schema: maybe something like eggs have a shell, a yolk, and a white. But the question is designed to distract you from the content by focusing your attention on the grammar. The answer is that neither is correct. Go ahead and check. I'll wait. For more like this, click here. If you didn't need to check your answer, then maybe you should stop reading now and head over here.

The competing view, the constructionist perspective, originated in the work of Piaget and Vygotsky; in it, readers are actively engaged in a process of trying to understand the text and resolve their difficulties into understanding. I imagine this might be like trying to read Lewis Carroll's "Jabberwocky." We all use context clues, syntax, and morphology to try to construct meaning out of the poem. These strategies are more typical of the off-line processing that one does after decoding. The poem follows the conventions of a valid literary form, poetry; therefore, there must be meaning to be made from it, and we consciously seek to draw inferences from the text.

I had never considered the two views as mutually exclusive before. However, Rapp and van den Broek note that researchers often focus on one or the other as they seek to control as many variables as possible. Consequently, the research is often skewed in one particular direction. This division has begun to be superseded by the landscape model, which would take another twelve blog posts to explain. If you're interested, you can read more about it here, here, or here. The landscape model essentially says that the two are complementary and occur in cycles as we read, reflect, construct, revise, and reconstruct meaning from reading.

The authors essentially argue that the memory-based and constructionist approaches, taken separately, are lacking. They combine them under the term Dynamic Text Comprehension (DTC) to account for the fact that the process of reading is recursive: as we decode, we do indeed experience subconscious on-line activation of schema, and after we read, we reflect (off-line) on our understanding, at which point we make conscious choices that in turn lead us to further on-line processing. Relevant schema is strengthened in our processing of the text, and irrelevant information is discarded.

The authors do not in fact present any original research to support the claims; however, they do review several studies that have been conducted specifically using this model to account for data that supports both the memory-based and constructionist theories. They go to great lengths to point out that they are not attempting to supplant either of these theories, but rather to unify them into a coherent whole that can be studied in its own right.

They posit that both mechanisms are necessary to account for reading. They classify the on-line process as cohort activation, which occurs when a concept, and any associated pre-existing knowledge, is activated. The second mechanism, which they call coherence-based retrieval, begins simultaneously and organizes the information into a coherent structure based on the reader's expectations. While we typically think of one as occurring on-line and the other as off-line, the authors suggest that the two processes occur simultaneously, though they don't specifically provide any evidence to support this claim. That is a glaring omission, considering the on-line/off-line question that runs through much of the debate. They do cite a number of studies in which this model successfully predicted outcomes better than either of the theories used in isolation.

The article proved to be far more interesting than many of the others I read specifically about assessment, because I realized that what I was hoping to do was find a way of writing a test that engaged students in the process of reading while still being contained in a familiar format. Think-aloud protocols, reading journals, portfolios, and summary writing are all well and good. I use them to one degree or another in my teaching and will continue to use them for assessment. However, this surprisingly accessible article helped me think about what I can do to make a reading test that is valid, i.e., one that actually measures to some degree whether students are successfully moving through the reading process rather than whether they can write about what they read or eliminate wrong answers. My next step is to identify some ways to apply this new understanding to the types of traditional assessment I'm trying to incorporate into my class.

References

Aebersold, J., & Field, M. L. (2006). From reader to reading teacher: Issues and strategies for second language classrooms. New York: Cambridge University Press.

Rapp, D., & van den Broek, P. (2005). Dynamic text comprehension: An integrative view of reading. Current Directions in Psychological Science, 14(5), 276. http://www.jstor.org/stable/20183043

Getting more specific about “Assessment”

I started down this road of looking at assessment, but I've begun to lean more toward specifically looking at traditional assessments, i.e., exams. Throughout my studies in grad school, I haven't paid much attention to traditional assessments. Despite the fact that I use them in my teaching, my Prometric gig was where I really learned the jargon and techniques for designing multiple choice assessments. I want to take the holistic approach that considers the whole student, his or her skills and development over the course of the semester. The diagram above shows some of the features of traditional assessments versus alternative assessments from Brown & Abeywickrama's Language assessment: Principles and classroom practices (2010). What I would like to do is remind myself of the benefits of each, and then look through the literature to try to find assessments that occupy that steel blue space between the cornflower of traditional and alternative assessments.

I'd like to start by defining some of the terms that are relevant to discussing assessment. The product orientation of Traditional Assessments refers to the fact that students get one shot to do well, which is why they also foster extrinsic motivation: students are taking the test in order to get a grade. This contrasts with Alternative Assessments, portfolios for example, or other extensive projects that students work on over time. As they engage in the process, teachers can provide ongoing (formative) feedback, and ideally students become more concerned with the doing of the work than with the end result. This is also reflected in the practicality, at least for teachers, of traditional assessments: grade exams once and you're done; you give feedback (summative) once, at the end. Most alternative assessments involve periodic feedback, which means students' work is likely to be discussed, revised, and reevaluated.

Traditional Assessments tend to be more reliable, meaning that if the same test is given to a similar student population, the scores will be roughly the same. You can use the same multiple choice test semester after semester, which also adds to the practicality: you know what your students' scores are likely to be, and anyone with the answer key can score them. Alternative assessments, however, tend to reflect authentic tasks that learners will do and texts similar to what learners will read, and the process is (ideally) truer to life. I seem to be making more of an argument for Alternative Assessment, but there is another important dimension that separates Alternative and Traditional Assessments.

Traditional assessments tend to have disproportionately high levels of impact, meaning that the consequences for failing a traditional assessment can be quite serious. Many of my students are studying in the US because they failed to score adequately on university entrance exams in their home countries. Some must continually show improvement on their TOEFL® scores in order to retain the government scholarships that allow them to study here. I will continue to use alternative assessments (portfolios, the reading "mid-term" I mentioned in my last post, and journaling) to provide both formative and summative feedback and evaluations in my class. However, considering how many high-impact traditional assessments they have in store for them, I'd like to give them some low-impact, low-stakes opportunities to take traditional assessments, but only if I can find a way to fit them in with the goals of my class and institution.

I have also noticed that students are incredibly motivated just by the thought of having an exam. To some degree they've internalized the extrinsic motivation of getting a good quiz score. And giving them a multiple choice test does seem to give them a cognitive break, as long as you don't do this to them.

In my reading and writing class, the first thing students do every day is read and then write about what they've read. From my own experience as a language learner, I know that it takes a great deal of mental energy to read and write in one's L2. Students seem genuinely relieved when they know we have something coming up that they can study for. So what I'm hoping to find is some way to reinforce the work they're already doing, give them a little cognitive break, and keep their test-taking skills sharp, all with a low-stakes traditional assessment that doesn't insult their intelligence, or mine (I will have to grade them, after all).

With those goals in mind, I've rounded up some articles that I think will help me make the most of students' motivation to do well on traditional assessments while using them in a way that reinforces the importance of thinking over choosing the right answer (or eliminating the wrong ones). I'd like to be able to write a test that doesn't seek to trick them, but still gets at finding out what they know and provides some positive washback that students can use to improve their skills. I've already mentioned the first text, Brown & Abeywickrama, which I used to identify the aspects of assessment that I felt were important to my goals. Their focus is on TESOL; however, their discussion of the general framework of assessment is relevant:

Brown, D., & Abeywickrama, P. (2010). Language assessment: Principles and classroom practices. White Plains, NY: Pearson Longman.

Some articles that I’ll use to explore further:

Afflerbach, P., & Kapinus, B. (1994). Developing alternative assessments: Six problems worth solving. Reading Teacher, 47(5), 420. [This one looks promising as it discusses some of the issues I may run into when trying to tweak Traditional Assessments into performing a little more like Alternative Assessments.]

Antón, M. (2009). Dynamic assessment of advanced second language learners. Foreign Language Annals, 42(3), 576-598. doi:10.1111/j.1944-9720.2009.01030.x [This was interesting, though a little far afield from what I'm looking at. I typically only use Dynamic Assessment, in which the teacher plays an active role in the test task, for diagnostics on the first day of class.]

Dennis, D. V. (2012). Matching our knowledge of reading development with assessment data. In Using informative assessments towards effective literacy instruction (p. 177). [This looks like it might give me some insights toward better connecting students' performance with their location within a stage of development.]

Liu, P., Chen, C., & Chang, Y. (2010). Effects of a computer assisted concept mapping learning strategy on EFL college students' English reading comprehension. Computers & Education, 54(2), 436. [This again is a little far from my search, but it was one of the few that addressed technology, which is slowly beginning to have more of an effect on assessment at my institution.]

Rapp, D., & van den Broek, P. (2005). Dynamic text comprehension: An integrative view of reading. Current Directions in Psychological Science, 14(5), 276. http://www.jstor.org/stable/20183043 [This nearly didn't make it onto my list because it is firmly rooted in psychology, which is rather outside my comfort zone. It is surprisingly accessible, though, and I would recommend reading it.]

Smagorinsky, P. (2009). The cultural practice of reading and the standardized assessment of reading instruction: When incommensurate worlds collide. Educational Researcher, 38, 522-527. doi:10.3102/0013189X09347583 [This looks quite promising because it is recent and also deals with the intersection of traditional and alternative assessments.]

Assessment vs. Exams

To begin, I feel I should make a distinction between an assessment and a test. I am constantly assessing my students through activities, discussions, etc. In fact, I place more stock in assessing students' reading comprehension through short, low-stakes writing assignments than in exams. However, I do plan to teach in countries like South Korea, Saudi Arabia, or China that place a high premium on students' ability to pass exams. Most of my students will have to pass an English proficiency test in some form or another.

In some ways, I would like to help prepare them to pass such exams without simply giving them generic test-taking strategies, in part because tests are important to my students. I don't think they should be, and I take every opportunity to downplay the connection between exams and their ability to communicate in English. Some of my students with the highest TOEFL® scores are almost incomprehensible when engaging in mundane chit chat. Likewise, some of the lowest scorers could function perfectly well if they didn't have to write research papers. I stress that test-taking is a skill and that their scores mostly reflect how well they take tests rather than how well they communicate their ideas in English. I began to wonder if I could design a test that would be more useful as an assessment tool rather than simply as training for future tests. What I am looking for is a cross between what I do in my current class and the traditional multiple choice tests that students are familiar with.

First, let me tell you what I do in my current class. I think it is an improvement on multiple choice tests, but it is not without its problems. The methods I have been using so far seem more appropriate for testing students' retention of the skills that I teach them. For example, the first exam I give in my reading and writing class consists of giving students a passage to read and asking them to annotate it as they read it for the first time. To show their use of reading strategies, they are explicitly instructed to mark the passage as they read. For example, students are told to highlight or underline the parts of the text that they read first. They are also told to make notes about any questions that come to mind as they read: "What does this word mean?" "I don't understand this." "Maybe this is important." These are all things students are encouraged to note during their reading.

The stated goal of this exam is to try to make the tacit explicit; that is, to show the processes that students are going through as they read. You may have noticed by now that during the first part of this exam, no mention is made of the traditional measure of reading assessment: comprehension. In fact, traditional comprehension questions are nowhere to be found in this class. Reading comprehension is checked through the second part of the exam, where students are asked to write a summary of the article that accurately represents the author's ideas while (mostly) using their own words. All parts of this exam are completed in class, although the summary writing occurs on a subsequent day. This gives students the chance to take the article home, read it more deeply, look up vocabulary, and plan their writing.

I feel that the format of this exam succeeds in focusing on reading as a skill rather than on students' ability to pull information from an article. This is important for two reasons. First, it provides a model from which students can develop their own reading habits, which they are encouraged to carry over into their other classes. Second, it discourages the strategies that students have learned to help them succeed at multiple choice tests such as the TOEFL®. I should mention that I have worked as a writer of test items for Prometric, a subsidiary of ETS, the company that designed and maintains the TOEFL®, SAT®, and other tests.

Before I accepted this job, I thought about the implications of working on a standardized test and how I felt about participating in creating an assessment that I feel fails to accurately evaluate students' skills and abilities. Ultimately, pragmatism won out over idealism, and I decided that standardized tests are a part of students' lives. I wanted to participate in the process so that I could better understand the nature of the demands that students face. What I learned during the process has informed my own assessment practices in the classroom. I don't think anyone in our ENG 701 class would advocate using multiple choice tests to assess students' reading, but I think it worth discussing the reasons why.

First, multiple choice tests are decontextualized. Regardless of whether the questions are about short passages that students read in class or extensive reading done outside of class, the student is only asked about information that fits within the space of a sentence or two. This relies on students' memory as much as it does on reading comprehension. Because the questions are so decontextualized, many things can lead students astray. A key word that appears in the right position may persuade students to choose the wrong answer. In fact, in designing the test we didn't call them wrong answers; we called them "plausible distractors." To be sure, the plausible distractors needed to be clearly wrong, but they were designed to confuse students.

To illustrate what I mean, here is an example of a vocabulary item. Please note that ETS vigorously defends its copyrights, so this example isn't the actual item we discussed, but it does accurately represent the type of item and the issues that were raised in our discussion.

I enjoy watching the sunset over the lake in the afternoon. It’s always so serene at that time of day.

SERENE probably means:

A. CALM

B. CLEAN

C. FUN

D. WARM

In the above example, the issues raised were primarily with the distractors. Originally, two of the distractors had been nouns; however, reviewers felt that the student population would likely be able to eliminate them based on syntax alone. We then changed those two distractors to the adjectives FUN and LOUD, but it was decided that LOUD stood out too much because it didn't end with a nasal as the other three did. It was thought that students would either select it or disregard it based on this distinction alone. WARM was ultimately selected because it fit the syntactic and phonological patterns, but also because it had the added distinction of introducing slightly more complexity: the stem mentions sunset, the sun is warm, therefore students might select WARM. And here's the logic: this will help us determine whether students know the meaning of the word SERENE. Really, I think it tests whether students can spot the wrong answers rather than whether they know the right one.

I'm using the example of a vocabulary question, but the principles involved apply to other types of questions, including reading comprehension. The basic idea is that multiple choice questions have a set of strategies that can help students be successful. Test writers know this and so write questions and distractors to confound students as much as to measure how much of a reading passage students comprehend. I don't really know if it's possible to design a hybrid test that provides students some familiarity with the format while still truly measuring how well they are reading. Students have a lot of anxiety about the tests in my class, in part because they don't know what to expect. There is no real way for students to study for them, except perhaps by practicing their reading and summary writing skills. While I want them to be ready for the types of writing tasks they will have to do in the US, I also want them to improve their ability to score well on proficiency exams, because they'll likely have to take the TOEFL® or similar tests throughout their academic careers. I hope to find some articles that can help me strike a balance.

Critical Pedagogy vs. Initiation into Academic Discourse

My question was, "Why is it that they [critical pedagogy versus initiating students into academic discourses] are always discussed as an either/or proposition? And most importantly, why is it that the goals of students are nearly always absent from the conversation?" Since I posed that question, I have noticed several things in our readings, discussions, and blog postings that I think address this issue.

The fact that resisting versus initiating students into the academic discourse is discussed as an either/or proposition is, ironically, a function of the discussion taking place within the academic discourse itself. The academic world is built upon a continual renewal of knowledge that must be refreshed and developed. One benefit of this pressure to innovate has been the effusion of cross-disciplinary ideas, so that advances in one field can be brought to bear on the questions and issues of another. Given this pressure, it's little wonder that the ideas espoused in Marxist thought would make their way to pedagogy. The social, political, and economic lens through which Marxism filters the world was already embedded within an academic discourse as it came to influence other fields. Academics are fluent in their own discourse and, as such, use the language that is available to them and that most clearly expresses their message. Furthermore, that message is received, interpreted, and retransmitted by other academics. So in the end, it seems logical that a discussion about resisting a discourse has taken place mostly within the discourse it seeks to resist. But that doesn't really address why it's an either/or proposition.

An academic in search of an argument will quickly find that he or she is also in search of a job. Each of us, as academics, has something to contribute to the conversation based on our individual perspectives, but the pressure is always to find something new to say. That is a difficult feat when the conversation has been ongoing for any period of time, particularly now that we are able to store and access a written record of our knowledge in the field that stretches across time and geography. Most graduate students can relate to the pressure of taking old ideas and finding new ways to apply them, not to mention the difficulty of trying to find anything original to say. Most of us have sat in a seminar with at least one or two students who seem intent on taking arbitrary positions for the sake of being controversial. In some ways, this is the business of academics: to shake up the old order and, at the very least, force the dominant voice to continually justify its approach.

In the hands of someone like Paulo Freire, the result can send tremors throughout a field and cause an entire generation of academics to question the prevailing wisdom. To be truly effective, though, such paradigm-shifting ideas need to be well supported and address specific issues seen within a field. Academic discussions are often based upon tightly controlled (and sometimes contrived) environments that allow researchers to focus on the specific areas they hope to impact. These environments have the benefit of freeing researchers from the muddied reality present in everyday life. As a student, I have often found the arguments that seem paradigm-shifting to be the most compelling, but as a teacher, it is difficult to maintain too much purity in any one approach. In either case, though, it is an opposing viewpoint that makes the most impact: all of our problems in the classroom will be solved if only we do X. This appeals to our sense of order much more than the more nuanced approaches that most of us know the classroom requires.

It also necessarily leaves students out of the equation because it seeks to control for variables as determined by researchers. Students' needs are as diverse as the number of students within a classroom. This makes it easier to focus on what the teacher is doing rather than on what students hope to achieve. And, based entirely on my own experiences, there is also a certain amount of arrogance on the part of teachers and researchers. Whether that arrogance is well-intentioned is not for me to decide, but it does exist. We, as teachers and academics, do have a certain degree of expertise in terms of what we are preparing students for, and, consciously or not, we justify the decisions we make based on that expertise and our own experiences in academe. From there, it is an easy leap to viewing our decisions about what's best for students as self-evident. We may think, "Of course students want to question the prevailing power structure of society," because that is in fact a goal of our own.

In fact, students sometimes have a considerable vested interest in perpetuating that structure. But it is easy to dismiss students' interests because we know that if they knew better, they would see it our way. And I believe that is probably true, but part of getting students to see it our way is allowing them to take their own journey and arrive at their own conclusions. It may just be a desire to see the best in people, but it's a leap of faith that I feel comfortable making. Until students get to a place where they are able, and desire, to question the prevailing power structure, I feel it is important to address students' needs as defined by the students. That doesn't preclude a critical approach, but it allows room for gaining fluency in the predominant discourse while encouraging students to question the power and authority vested in the institutions they attend.

Sh*t _____ Say (fill-in-the-blank)

It may be odd, but the article I connected with the most was "Schema Theory Revisited," although I must admit that I nearly gave up after the first 25 pages. Once I got to the discussion of the classroom interactions with Deng, though, something struck a chord with me. I'm taking a loose approach to the context of the article and my own recent classroom experience because it isn't strictly connected to reading (my text was a video), but indulge me. The video leads up to a reading later in the unit.

The whole "Sh*t _____ Say" meme started a few years ago with the famous Twitter feed that turned into a television show. The show was cancelled last year in its first season, but as memes often do, it lived on and has spawned a crowded field of competitors for your YouTube viewing time. I became aware of this particular video the way one does: I was procrastinating on a lesson plan about stereotypes because my plan for the same lesson last semester had not gone well.

Last semester, as I introduced the topic of stereotypes to my class of all international students, I was a bit shocked when my students unanimously voiced agreement that "Yes, all Asians are good at math." I was saddened when the women started telling the class what bad drivers they all were. Finally, I was defeated when my Saudi students told me that "No one in Muslim countries drinks alcohol," which was said by a student who once posted this status update to Facebook: "Vodka isn't the answer, but it makes you forget the question."

When I saw the video above, I thought that perhaps I was going about it the wrong way. Rather than trying to tell students about the stereotypes that Americans have about them, perhaps I should start by showing them what people in their own group say about themselves. This approach made all the difference in the world, and I think it was because they had much of the cultural schema in place. I think it also helped that there was a little L1 thrown into the video, which helped to lower their cognitive load. The conversation we had afterwards was also much more productive.

My class this semester is a mix of Chinese, Saudi, Kuwaiti, Japanese, Thai, and Korean students. I showed this video to them, and while everyone got some of the humor, the Arabic speakers understood all the jokes. The non-Arabic speakers had lots of questions, and in their small groups the Arabic speakers walked everyone through the references; to my surprise, they started making connections between their cultures. As the conversation turned to whether this was an accurate portrayal of young Arab men in America, things got more complicated.

"Yes," my students said, "everyone knows a guy like that, but I'm not like that." This was the sort of ambiguous starting point that I was hoping to achieve in my first lesson, one that would allow us to look at the conversation from multiple angles. Why do we stereotype? One student suggested it helps people fit into a group, but then I asked, "If that's why we have stereotypes about our own group, why do we also have stereotypes about people who aren't in our group?" I was delighted when he replied, "Well, that tells you who doesn't belong in your group." That's the beginning of a critical analysis if I've ever heard one.

I was lucky that this lesson happened just a few days before I read the article, because I made a connection that I'm sure my teachers had been trying to make explicit for the past two years: schema activation vs. schema building. As a new teacher, I thought that I could build schema by telling students, by depositing the knowledge. Here's a brief recap of my previous lesson plan: give the definition of a stereotype, give an example of a stereotype, ask students for examples of stereotypes in their countries, and finally ask their opinion about stereotypes.

That approach entirely missed the point by relying on the banking notion of education, which misled me into believing that I could build schema for students. In the second lesson plan, I found that students were doing the work. By connecting students to the concept with an example relevant to their own experiences, then asking them to explain the humor to their classmates, and concluding with a discussion focused on open-ended questions, I found that students were more engaged and their answers were far more equivocal.

Now I feel like reattempting those first 25 pages to see if I can pick up anything else I might have failed to glean over the past four years that I've been kicking around the term "schema" as though I knew what I was talking about. The one thing I'm not sure about is what step to take next. I'm open to suggestions, and while you're thinking about it, I'll leave you with this gem. Who knows how it might inspire your teaching?

On Being Iconoclastic

I experience varying degrees of dissonance with the literature as it relates to pedagogical approaches in the teaching of reading and writing. This dissonance stems from my enthusiastic embrace of approaches that seek to treat students as individuals with diverse knowledge, backgrounds, experiences, skills, interests, and goals; however, these same approaches often advocate an oppositional stance on the part of students that seems to coincide more with the goals of individuals within academe than with the goals of individual students. We want to empower students, and yet theoretical debates about how to approach the relationships between knowledge and power risk further disenfranchising students when we fail to account for students' goals.

Mickey Rooney famously assailed the state of our civilization by attacking the plastic cows that, at the time, passed for art. His argument was that only people who had mastered their craft could lay claim to their work as "art." He illustrated his argument with several works by Picasso. This painting is fairly representative of Picasso:

This is probably not what you think of when you imagine a work by Picasso:

But they are in fact both works by Picasso. And the crux of Rooney’s argument is that in order to establish his authority to call the former art, Picasso first had to learn how to paint the latter.

Now, before you skip to the comments and accuse me of perpetuating the prevailing power structure, I won't go so far as to agree with Rooney. I do, however, argue that in learning how to create the torso in the second picture, Picasso became capable of creating the iconoclastic work that he produced later in his career. I believe the same is true of my students, particularly because of their status as English language learners. Many of them have only ever been exposed to the banking model of education, and asking them to situate themselves critically within academic discourse while they are learning English in a student-centered classroom is akin to asking them to bake an apple pie by showing them a picture of the final product.

The closest thing to a discussion of these difficulties I have found so far is in Pawan and Honeyford's chapter, "Academic Literacy," in the Handbook of College Reading and Study Strategy Research. On page 43, they quote several people on both sides of the question of critical pedagogy. One side essentially says that a critical approach in itself privileges a culturally specific pedagogy and excludes all others. The proponents of critical pedagogy, meanwhile, decry the acceptance of the notion that by acquiring prevailing discourse practices, a student will inevitably acquire power.

There are clearly two ideas at work here: one in which students are initiated into a discourse through the use of its accepted conventions, and a second in which students are asked to question the relationships of power inherent in the institutions to which we all belong. I agree with them both, and that brings me back to the dissonance I experience when reading the literature. I am in no way arguing against the use of critical pedagogy in the classroom. In fact, I think it is the only viable road to the higher order thinking that we, and other teachers, will continue to ask of students as they progress through their academic careers.

So I've admitted to being an "access" sympathizer, a teacher who wants to help students feel comfortable in the academic discourse communities they aspire to join. I also want to empower them to identify, navigate, and, when necessary, change or dismantle the power structures in which they find themselves. But I always want to do what will help my students achieve their goals. I know, too, that students sometimes need help reevaluating their goals, but I don't think it's appropriate to impose a critical pedagogy that fails to address their language needs. I think the people best situated to effect change within an institution are those who can get inside of it. And to that end, I don't see access and critical pedagogies as mutually exclusive. Indeed, one supports the other.

True iconoclasts aren't outsiders; Paulo Freire was a member of the academic discourse community that continues to evaluate, interpret, reevaluate, and reinterpret his work. I don't think it's too much to ask our students to balance their increasing fluency in academic discourse with a healthy suspicion of its underlying structures. Why is it that they are always discussed as an either/or proposition? And most importantly, why is it that the goals of students are nearly always absent from the conversation?