Blast from the Past: This was first posted on January 29, 2009, and reposted on August 9, 2018. As we get ready to start a new school year, teachers would be wise to avoid wasting time this year having kids practice answering particular kinds of questions. Reading is not the ability to answer certain question types; it is the ability to make sense of text. Many more research studies have shown this since the posting first appeared.
Here’s a big idea that can save your school district a lot of money and teachers and kids a lot of time: reading comprehension tests cannot be used to diagnose reading problems.
This isn’t a traditional educator complaint about reading tests; I’m pro reading test. The typical reading comprehension test (e.g., Gates-MacGinitie, Stanford, Metropolitan, Iowa, state accountability assessments) is valid, reliable, reasonably respectful of students from varied cultures… and, yet, those tests cannot be used diagnostically.
The problem isn’t with the tests; it’s a fact about the nature of reading ability. Reading is complicated. It involves a bunch of skills that must be used either simultaneously or in amazingly rapid sequence. Reading comprehension tests do a great job of identifying who has trouble with reading, but they can’t sort out why students struggle. Is it a comprehension problem, or did the student fail to decode? Maybe the youngster decoded the words just fine but didn’t know the word meanings. Or could she read the text fluently, with the pauses in the right places within sentences? Of course, none of those might be problems: maybe the student really had trouble thinking about the ideas.
Because reading is a hierarchy of skills that must be used simultaneously, failures with low-level skills necessarily undermine higher-level ones (like interpreting ideas in the text). Because every comprehension question has to be answered on the basis of decoding, interpretation of word meanings, use of prior knowledge, analysis of sentence syntax, etc., it is impossible to find patterns of student performance on a typical reading comprehension test that can tell you which of those skills is the culprit.
That is also a reason why items are so highly intercorrelated in reading comprehension tests.
The companies that offer to analyze kids’ test results to provide you with an instructional map of their comprehension needs are offering something of no value. If a main-idea question happens to be hard, it will look as if all your kids need help with main ideas. If several inferential questions are bunched at the end of the test and some of your kids don’t finish all the items, you’ll “find out” that most of your kids need help with inferencing.
No scheme for analyzing item responses on comprehension tests is reliable and none has been validated empirically. Those schemes simply don’t work, except to separate schools from their money.
3/27/2009
This has been my experience, too. What do you recommend a teacher do to diagnose which of those low-level hierarchical skills, or what combination of skills, are strengths and which are weaknesses? It seems I'd have to use a battery of assessments, but because of time and financial constraints I need to be as precise as I can be with relatively few tools.
When I teach courses on assessment, I make it very clear that some tests are diagnostic and others are only indicators. Comprehension tests (among which I include any test that produces a Lexile score) are only indicators of a problem that might be replicated under similar conditions; they give limited information about how to solve the problem or reduce the likelihood of its recurring or getting worse.

An example is the common CBC blood test. If the test reveals an elevated white blood cell count, it might indicate an infection, or the demise of an infection, or something far worse, such as cancer. A poor or limited diagnostician might prescribe a broad-spectrum antibiotic "for good measure," but we now know that such an approach is unlikely to address the real problem and could make the patient worse, especially if they begin to distrust antibiotics and refuse to take them in the future.

Reading comprehension is strikingly similar: if we apply what we think is a "good measure" of intervention, such as teaching discrete comprehension skills or question types on similar texts, we are unable to determine whether we addressed the real problem, masked the problem (and in fact accelerated the damage it causes to the reader), or gave the child false hope that the problem is fixed, only to find it returns on the next complex text. We must use data responsibly, and not subject our students to treatments they don't need, that don't work, or that merely contribute to a placebo effect.
I totally agree with what Dr. Shanahan states in his 8/9/18 reposted blog. To me, as a reading specialist, this statement from his post says it all: "Reading comprehension tests do a great job of identifying who has trouble with reading, but they can’t sort out why students struggle." I also appreciate and agree with Lisa Regan DeRoss' response to Dr. Shanahan's blog post. Is it possible to see a list of diagnostic assessments used by Dr. Shanahan and by Lisa Regan DeRoss?
How should one design a good reading age assessment? Have you written about what is optimal?
Copyright © 2024 Shanahan on Literacy. All rights reserved.