Teacher question:
I’m writing to you about high school progress monitoring for reading comprehension. Our school has learning goals for Reading Comprehension. Every two weeks, students read an on-grade-level passage and answer 5 multiple-choice questions that assess literal comprehension and main idea. Our data are not matching well with other data that we have (such as course passing rates and state assessments). What might be a more effective progress monitoring process, one that goes beyond the literal level and that would provide information teachers could use to improve instruction?
Shanahan response:
I’m not surprised that approach is not working. There is so much wrong with it.
First, why test students so often? Does anyone really believe (or is there any evidence supporting the idea) that student reading ability is so sensitive to teaching that reading performance would change measurably in any 10-day period? Performance on measures like reading comprehension doesn’t change that quickly, especially with older students.
I don’t think it would be worthwhile to evaluate reading comprehension more than 2 or 3 times over an entire school year in the hope of seeing any changes in ability. It is unlikely that students would experience meaningful, measurable changes in comprehension ability in shorter time spans. The changes from test to test that you might see would likely be meaningless noise – that is, test unreliability or student disgust. Acting on such differences (changing placement or curriculum, for instance) would, in most cases, be more disruptive than helpful.
Second, I get why we seek brief, efficient assessments (e.g., a single passage with 5 multiple-choice questions). Let’s not sacrifice a lot of instructional time for testing. We have such dipsticks for monitoring the learning of foundational skills (e.g., decoding, alphabet knowledge) with younger students and it would be great to have something comparable for the older ones too.
Unfortunately, reading comprehension is more complicated than that. To estimate reliably the reading comprehension of older students takes a lot more time, a lot more questions, and a lot more text. That’s why typical standardized tests of reading comprehension usually ask 30-40 questions about multiple texts – and texts longer than the ones that your district is using.
How many questions does a student have to answer correctly before we decide he or she is doing well? Remember, guessing is possible with multiple-choice questions, so with only 5, I’d expect kids, by chance, to get 1 or 2 correct even if they don’t bother to read the passages at all. There is simply no room in that scenario either to decide that a student is doing better or worse than previously or to differentiate across students. If a student got 2 items correct last testing, and this week he gets 3, does that mean he showed progress?
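To put a rough number on the guessing point: assuming four answer choices per question (the post doesn’t say how many the district’s items use), blind guessing alone yields about 1.25 correct on average, and roughly one guesser in ten would still score 3 out of 5. A minimal sketch of that arithmetic, under those assumed values:

```python
# Illustrative only: how much of a 5-item multiple-choice score could be pure guessing?
# Assumes 4 answer choices per question; adjust p_guess if the items differ.
from math import comb

n_items = 5          # questions per passage
p_guess = 1 / 4      # chance of guessing a single item correctly (assumption)

expected_by_chance = n_items * p_guess
print(f"Expected correct by blind guessing: {expected_by_chance:.2f}")  # ~1.25

def prob_at_least(k: int) -> float:
    """Probability that a student guessing on every item gets k or more correct."""
    return sum(comb(n_items, j) * p_guess**j * (1 - p_guess)**(n_items - j)
               for j in range(k, n_items + 1))

print(f"P(3+ correct by guessing alone): {prob_at_least(3):.2%}")  # roughly 10%
```

In other words, a 2-to-3 swing between administrations sits well inside what chance alone can produce on a test this short.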
Third, reading comprehension question types are not useful for determining instructional needs. Studies repeatedly find no meaningful differences in comprehension across question categories such as literal, inferential, or main idea. If a passage is easy for students, they usually can answer any kind of question one might ask about it; and, if a passage is hard (in readability and/or content), students will struggle with all of the question types.
That means there is no reason to either limit the questions to literal ones or to shift to a different questioning regime. In fact, doing so might focus teacher attention on trying to improve performance with certain types of questions, rather than on decoding, fluency, vocabulary, syntax, cohesion, text structure, writing, and other abilities that really matter.
Fourth, the measurement of readability or text difficulty is not as precise or reliable as you might think. Look at Lexile levels, one of the better of these tools: texts that Lexiles designate as grade level for high school freshmen are also grade level for students in grades 5-8 and 10-12. This kind of overlap is common with readability estimates and suggests that passages judged to be 1200L will differ in the difficulties they actually pose for students. Kids might be more familiar with the vocabulary or content of one text than another, which can lead to dramatic outcome differences from assessment to assessment.
That’s why standardized comprehension tests not only pay attention to readability ratings but also evaluate combinations of specific passages to make sure that those combinations will provide sufficiently accurate and reliable results.
I would suggest that you test students twice a year (at the beginning of each semester) with a more substantial, validated reading test. Between those administrations, monitor more closely how students are performing with what is being taught.
For example, one valuable area of growth in reading comprehension is vocabulary. Keep track of what words are being taught in the remedial program and monitor student retention of these words.
Or, if you are teaching students how to break down sentences to make sense of them, then identify such sentences in the texts students are reading and see how well they can apply what is being taught. The same kind of monitoring is possible with cohesion and text structure.
My point is that since you cannot provide the kind of meaningful close monitoring of general reading comprehension that you would like, instead monitor how well students are doing with the skills and abilities that you are teaching – that should provide you with some useful index of progress.
Hi, Dr. Shanahan!
I really enjoyed this post. It reinforces some of my own experiences and beliefs, but my question is about how to square this with the requirements of MTSS and IEPs for progress monitoring. At our middle school, we are being told to give all Tier II students a comprehension goal and to use Reading Plus as the intervention and progress monitoring tool. We must provide a score twice a month. Of course, there are a lot of problems with this, including the fact that comprehension may not be the most important goal for these children. If we aren't using the twice-monthly measures for gauging progress in comprehension via a passage and questions, how do we show progress as required by the state? Or do we jump through the hoops and rely more on data from measures like you suggest? We do give the NWEA MAP assessment 3x a year...
Thank you!
Katie-
It is miserable when the folks in charge require information that can't validly be provided. I would recommend relying on the kinds of measures that I have suggested.
tim
Thank you for the post. This makes me think about reading in general at the secondary level. This is one of the hardest things to get my mind around because we are balancing student needs with state requirements, such as MTSS, IEPs, graduation requirements, credits, etc. And, as you have pointed out, many of the tools we use to measure needs aren't necessarily aligned with what students actually need.
Could you point me in the direction of research on the best ways to provide services to struggling readers at the middle and high school level?
You mention older students, but are these claims also true (around comprehension) for younger readers, such as intermediate elementary? Would they show more growth in shorter time frames because of growth in foundational skills?