Sunday, April 21, 2013

Week 13: Assessment and CALL

We began our Monday discussion on assessment and CALL by comparing general assessment terms (e.g., validity, reliability, practicality, and washback) with those specific to CALL (e.g., computerized fixed tests and computer-adaptive tests). In thinking about how CALL impacts those "general" assessment basics, my group and I felt that practicality is the element of testing most affected by CALL. In our context as grad students, most of us have taken the computer-adaptive GRE and outside-of-class BBLearn tests and quizzes. We have also submitted numerous papers and projects electronically (our most recent paper was full of hyperlinks to "chapters" and appendices). Using computers for assessment can save time, clutter, and trees-- and possibly even curb the spread of disease (I heard one writing teacher at AZ-TESOL say she stopped requiring hard copies of papers in her classroom after the bird flu epidemic a few years ago).

I personally find computer-adaptive tests to be one of the best uses of computers for testing. I administered hundreds of oral interview exams while working at a non-profit before starting my M.A. Some students who came through our doors had just arrived in the U.S., had never been to school, and did not yet know any English. Others wanted help with their English in order to apply for advanced degrees at U.S. universities. You can imagine, then, how painful it was for learners on both ends of the spectrum to take the same test. The community college in my town administered this oral interview exam adaptively. That way, more advanced students were not forced to answer a string of questions that were much too easy, and beginner students did not have to shake their heads or shrug over and over as the questions grew more difficult. Overall, adaptive tests are very practical when administering a test to a large or diverse group of test-takers.
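The adaptive logic described above can be sketched in a few lines of code. This is only a toy illustration with hypothetical item pools and function names, not how any real adaptive test (like the GRE) is implemented; real systems use item response theory rather than a simple up/down rule:

```python
# Toy sketch of a computer-adaptive test loop (hypothetical data and names).
# Difficulty moves up after a correct answer and down after an incorrect one,
# so advanced test-takers skip easy items and beginners are not pushed too far.

def run_adaptive_test(items_by_level, answer_fn, start_level=3, num_items=5):
    """items_by_level: dict mapping a difficulty level (e.g., 1-5) to questions.
    answer_fn: callable(question) -> bool, True if answered correctly.
    Returns the history of (level, question, correct) tuples."""
    level = start_level
    history = []
    for _ in range(num_items):
        question = items_by_level[level].pop(0)  # next unused item at this level
        correct = answer_fn(question)
        history.append((level, question, correct))
        if correct:
            level = min(level + 1, max(items_by_level))  # harder, capped at top
        else:
            level = max(level - 1, min(items_by_level))  # easier, floored at bottom
    return history
```

A strong test-taker who starts at level 1 and answers everything correctly would climb one level per item, while a beginner would quickly settle at the easiest level instead of facing ever-harder questions.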

In addition to practicality, we also discussed the notion of construct validity and whether the construct of, say, reading changes when words are read on a screen. Alan brought up the context of reading a passage in a small window on a screen, where test-takers must scroll down to read the entire text. In this situation, technology may change the way we go back to find information. When I read something on paper, I often remember, spatially, where a piece of information is. When I read something on a smaller screen, such as a friend's tablet, I cannot always retrieve that information the same way I do on paper.

Finally, I think affective factors are another important consideration in assessment and CALL. We know that technology has the potential both to motivate students and to cause anxiety. If we are working in a university setting, I think it's a good idea to use computerized tests for low-stakes assessments in order to help students who may feel anxious about using computers become more comfortable with them. Additionally, taking computerized tests may require test-takers to do some planning, so students should have some experience with this before entering the university. If students have 24 hours to take a test on BBLearn, they will need to plan ahead: they must understand how to access the test, find an hour-long block in their day to take it, and know where they can get reliable internet access and a quiet atmosphere. Planning ahead is the student's responsibility, so teachers in EAP programs may consider preparing their students for this.
