Getting more specific about “Assessment”

I started down this road looking at assessment broadly, but I’ve begun to lean toward looking specifically at traditional assessments, i.e. exams. Throughout my studies in grad school, I haven’t paid much attention to traditional assessments. Despite the fact that I use them in my teaching, my Prometric gig was where I really learned the jargon and techniques for designing multiple choice assessments. I want to take the holistic approach that considers the whole student, his or her skills and development over the course of the semester. The diagram above shows some of the features of traditional assessments versus alternative assessments from Brown & Abeywickrama’s Language assessment: Principles and classroom practices (2010). What I would like to do is remind myself of the benefits of each, and then look through the literature to try to find assessments that occupy that steel blue space between the cornflower of traditional and alternative assessments.

I’d like to start by defining some of the terms that are relevant to discussing assessment. The product orientation of Traditional Assessments refers to the fact that students get one shot to do well, which is also why they foster extrinsic motivation: students take the test in order to get a grade. This contrasts with Alternative Assessments, such as portfolios or other extensive projects that students work on over time. As students engage in the process, teachers can provide ongoing (formative) feedback, and ideally students become more concerned with the doing of the work than with the end result. This is also reflected in the practicality, at least for teachers, of traditional assessments: grade the exams once and you’re done, giving feedback (summative) a single time at the end. Most alternative assessments involve periodic feedback, which means students’ work is likely to be discussed, revised, and reevaluated.

Traditional Assessments tend to be more reliable, meaning that if the same test is given to a similar student population, the scores will be roughly the same. You can use the same multiple choice test semester after semester, which also adds to the practicality: you know what your students’ scores are likely to be, and anyone with the answer key can score them. Alternative assessments, however, tend to reflect authentic tasks that learners will do, texts similar to what learners will read, and a process that is (ideally) truer to life. I seem to be making more of an argument for Alternative Assessment, but there is one more important dimension that separates Alternative and Traditional Assessments.
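As an aside, reliability in this sense can actually be quantified. Here is a minimal sketch (my own illustration, not something from Brown & Abeywickrama) of the Kuder–Richardson 20 coefficient, a standard internal-consistency estimate for tests made of right/wrong items like multiple choice questions:

```python
def kr20(responses):
    """Kuder-Richardson 20 reliability estimate.

    responses: a list of students, each a list of 0/1 item scores
    (1 = correct). Returns a value that approaches 1.0 as the test
    becomes more internally consistent.
    """
    n_items = len(responses[0])
    n_students = len(responses)

    # Variance of the students' total scores.
    totals = [sum(student) for student in responses]
    mean_total = sum(totals) / n_students
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students

    # Sum of p*q over items, where p is the proportion answering
    # item j correctly and q = 1 - p.
    sum_pq = 0.0
    for j in range(n_items):
        p = sum(student[j] for student in responses) / n_students
        sum_pq += p * (1 - p)

    return (n_items / (n_items - 1)) * (1 - sum_pq / var_total)
```

So a teacher reusing the same exam could, with nothing more than the item-level answer sheet, check whether the test is behaving consistently from semester to semester.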

Traditional assessments tend to have disproportionately high levels of Impact. This means that the consequences of failing a traditional assessment can be quite serious. Many of my students are studying in the US because they failed to score adequately on university entrance exams in their home countries. Some must continually show improvement on their TOEFL® scores in order to retain government scholarships that allow them to study here. I will continue to use alternative assessments (portfolios, the reading “mid-term” I mentioned in my last post, and journaling) to provide both formative and summative feedback and evaluations in my class. However, considering how many high-impact traditional assessments they have in store for them, I’d like to give them some low-impact, low-stakes opportunities to take traditional assessments, but only if I can find a way to fit them in with the goals of my class and institution.

I have also noticed that students are incredibly motivated just by the thought of having an exam. To some degree they’ve internalized the extrinsic motivation of getting a good quiz score. And giving them a multiple choice test does seem to give them a cognitive break, as long as it isn’t taken to extremes.

In my reading and writing class, the first thing students do every day is read and then write about what they’ve read. From my own experience as a language learner, I know that it takes a great deal of mental energy to read and write in one’s L2. Students seem genuinely relieved when they know we have something coming up that they can study for. So what I’m hoping to find is some way to reinforce the work they’re already doing, give them a little cognitive break, keep their test-taking skills sharp, and do it with a low-stakes traditional assessment that doesn’t insult their intelligence, or mine; I will have to grade them, after all.

With those goals in mind, I’ve rounded up some articles that I think will help me make the most of students’ motivation to do well on traditional assessments, while using those assessments in a way that reinforces the importance of thinking over choosing the right answer (or eliminating the wrong ones). I’d like to be able to write a test that doesn’t seek to trick them, but still finds out what they know and provides some positive washback that students can use to improve their skills. I’ve already mentioned the first text, Brown & Abeywickrama, which I used to focus on the aspects of assessment that I felt were important to my goals. Their focus is on TESOL; however, their discussion of the general framework of assessment is relevant:

Brown, D., & Abeywickrama, P. (2010). Language assessment: Principles and classroom practices. White Plains, NY: Pearson Longman.

Some articles that I’ll use to explore further:

Afflerbach, P., & Kapinus, B. (1994). Developing alternative assessments: Six problems worth solving. Reading Teacher, 47(5), 420. [This one looks promising as it discusses some of the issues I may run into when trying to tweak Traditional Assessments into performing a little more like Alternative Assessments.]

Antón, M. (2009). Dynamic assessment of advanced second language learners. Foreign Language Annals, 42(3), 576-598. doi:10.1111/j.1944-9720.2009.01030.x [This was interesting, though a little far afield from what I’m looking at. I typically only use Dynamic Assessment, in which the teacher plays an active role in the test task, for diagnostics on the first day of class.]

Dennis, D. V. (2012). Matching our knowledge of reading development with assessment data. In Using informative assessments towards effective literacy instruction (p. 177). [This looks like it might give me some insights towards better connecting students’ performance with their location within a stage of development.]

Liu, P., Chen, C., & Chang, Y. (2010). Effects of a computer assisted concept mapping learning strategy on EFL college students’ English reading comprehension. Computers & Education, 54(2), 436. [This again is a little far from my search, but it was one of the few that addressed technology, which is slowly beginning to have more of an effect on assessment at my institution.]

Rapp, D., & van den Broek, P. (2005). Dynamic text comprehension: An integrative view of reading. Current Directions in Psychological Science, 14(5), 276. [This nearly didn’t make it into my list because it is firmly rooted in Psychology, which is rather outside my comfort zone. I would recommend reading it, though; it is surprisingly accessible.]

Smagorinsky, P. (2009). The cultural practice of reading and the standardized assessment of reading instruction: When incommensurate worlds collide. Educational Researcher, 38, 522-527. doi:10.3102/0013189X09347583 [This looks quite promising because it is recent and also deals with the intersection of traditional and alternative assessments.]
