Teacher question:
It seems that there is a lot of conflicting information coming out about accuracy and complex text. In the April edition of The Reading Teacher, Richard Allington wrote an article pertaining to struggling readers. In this article he says that there are studies showing the benefits of teaching children using texts in which their accuracy is high. Our district just raised the running record accuracy rate expectation to 95-98% accuracy based on the current research. Yet your blog postings pull in the opposite direction. How do teachers know what is right and what is wrong? After all, teachers want to do what is best and most effective for student learning.
Shanahan response:
What a great question. In my blog post, I cited particular studies, and Dick Allington's article focused on a completely different set of studies. This is what teachers find so confusing.
The experimental studies that I cited randomly assigned students to different treatment groups, so that children were matched to books in different ways. That design allows a direct comparison of the impact of these methods and gives us some certainty that the differences in learning were due to the different ways students were matched with text and not to something else.
Allington cites several correlational studies that examine existing patterns of relationship. These studies show that the lowest readers tend to be placed in relatively harder texts and tend to make the smallest gains or to be the least motivated.
The problem with correlational studies of this kind is that they don’t allow us to attribute causation. From such evidence we can’t determine what role, if any, the student-book match made in kids’ learning.
The students may have lagged because of how they were matched to books. But their low learning gains could also be due to other unmeasured instructional or demographic differences (many differences between high and low readers have been documented, but those were not controlled or measured in these studies). It could just be that the lowest readers make the least gains and that it has nothing to do with how they are matched to books. That’s why you need experiments (to determine whether the correlations matter).
I looked at studies that actually evaluated the effectiveness of this instructional practice (and these studies found either that student-text match made no difference or that harder placements led to more learning). Dick, by contrast, looked at studies that revealed a relationship between these variables, omitting any mention of these contradictory direct tests or of the correlational evidence that didn't support his claims.
There were two experimental studies in his review, but neither of them manipulated this particular variable, so those results are correlational, too. For example, Linnea Ehri and her colleagues created a program in which teachers provided intensive reading support to young struggling readers (mainly explicit instruction in phonological awareness and phonics). However, teachers varied in how much reading they had the students do during the intervention and in how they matched children to books; the kids who did a lot of reading in easier materials seemed to learn the most. That is an interesting finding, but it is still just a correlation.
One possibility is that there were other differences that weren't measured (but that were somehow captured indirectly by the text-match variable). Perhaps the teachers were just responding to the students who were making the biggest gains and were undershooting their levels because they were gaining so fast. That would mean that it wasn't the student-book match that was leading to learning, but that the better learning was influencing teacher decision-making about student-book match. How could we sort that confusing picture out? With experiments that systematically observe the impact of book placement separately from other variables, such as the experimental studies that I cited.
A couple of other points worth noting: the kids who gained the least in the Ehri study were placed in texts in the way that you say your school is doing. The kids who made the biggest gains were in even easier materials than that, materials that should have afforded little opportunity to learn (which makes my point: there is no magic level at which kids have to be placed in text to allow them to learn).
Another important point to remember: Allington's article made no distinction based on grade levels or student reading levels. His claim is that all struggling readers need to spend much or most of their time reading relatively easy texts, yet his most convincing data were drawn from studies of first-graders. However, the Common Core State Standards do not raise text levels for beginning readers. When students are reading at a first-grade level or lower (no matter what their ages), it may be appropriately cautious to keep them in relatively easy materials (though there are some discrepant data on this point, too, suggesting that grouping students for instruction in this way hurts children more than it helps them).
Experimental studies show that by the time students are reading like second-graders, it is possible for them to learn from harder text (as they did in the Morgan study). If we hold students back at their supposed levels, we are guaranteeing that they cannot reach the levels of literacy needed for college and career readiness by the time they leave high school.
11/5/2013
Thanks for providing further commentary on the research related to the idea of text complexity, Dr. Shanahan. If it's easily available, do you have a quick link to the articles you cite and a discussion of that research? Last time I looked at the cited articles related to this discussion, there wasn't a strong research base for unilaterally giving children reading material "above their instructional level."
Related to that comment, I believe it is critically important to be clear about what we're referring to with "instructional level." I appreciate that you identified the practice of placing kids in text at 95% accuracy and higher as potentially less effective; that is the range I would call "mastery" level. Indeed, I doubt there would be much support for only expecting kids to tackle mastery-level material.
Most folks, though, consider "instructional level" to be lower, generally 90-95% accuracy, with further definitions involving rate as well. Even beyond this particular definition, however, it's important to consider that "instructional level" means the level at which children can successfully perform, with or without assistance, yet below their mastery level. By definition, if a child can accomplish a task, the task is within either the child's instructional or mastery level.
As such, the essence of your argument is "push children to tackle the most difficult material possible within the child's instructional level." Not only do I think this is supported by research, but it's common sense.
The massive problem, in my experience, is describing "text complexity" as "giving children reading material more difficult than their instructional level." This is misleading and false. While it's true that some children may be able to tackle text above their decoding instructional level (but within their comprehension instructional level), we're still engaged in the basic task of expecting kids to work with material at the most difficult end of their instructional range.
If we phrased the text complexity discussion in those terms, I think a lot more people would understand what was meant.
One final piece of commentary: in my experience, it would be profoundly inappropriate and unethical to assign children material based solely on their age or grade level with no consideration of available assessment data. While we DO want to challenge children to work at the upper limit of their instructional level (the most difficult material still within that level), that upper limit will fluctuate from child to child. There is no support for the idea that two children in 3rd grade, one reading at the 1st-grade level and one at the 4th-grade level (in all reading areas), would benefit equally from the same text. Many folks are under the impression that CCSS calls for teachers to ignore individual assessment data and assign reading based solely on grade level. In my opinion, this is unethical and unprofessional, and a huge step backward for the professional community.
11/6/2013
Tim,
As I am sure you are aware, there has been considerable criticism of the CCSS-recommended text levels for grades 2 and 3. Indeed, there is good evidence that the pre-CCSS levels may be more appropriate for these young readers than the levels adjusted "stair-step" style by the CCSS authors and MetaMetrics. There is no evidence that the adjusted levels are appropriate for grades 2 and 3. Hiebert and Van Sluys address the issue in their article "Three Assumptions about Text Complexity" in Goodman, Calfee, and Goodman's new book, Whose Knowledge Counts in Government Literacy Policies?
I address the issue here: http://russonreading.blogspot.com/2013/10/could-common-core-widen-achievement-gap.html
11/6/2013
Russ,
It's good to have opinions on this, but the only data that represent a direct test (an experimental study) of harder placements come from Morgan et al., and they don't support those opinions. Of course, there are correlational studies as well, at least some of which (Powell) are supportive. There is simply no good evidence showing that particular student-book matches facilitate learning. You need data on learning for that.
Tim
11/8/2013
But Tim, haven't we been down this road before? The National Reading Panel took a very narrow view of what counted as research. They got changes in how teachers taught using "scientifically based research," and increases in time spent on the five areas they identified, but for all that effort, the Abt report found no improvement in student reading comprehension scores. Shouldn't we broaden our concept of what counts as useful research to include good old-fashioned "kid watching"?