Someone put a bug in my ear, and I started writing, and by the time I was done, I had two blogs rather than one. I'll set the table with this one and bring it to a conclusion next time.
One of the best things about research is that it can let the wind out of windbags and force some hard thinking. Our field suffers fatuous pronouncements as much as any. An example?
How about the constant drumbeat concerning the failure of “one size fits all” instructional approaches? Seemingly, everybody agrees with that one.
I typed the terms “one size fits all” and “reading instruction” into Google and came up with almost 97,000 documents. The basic premise of the aphorism is that we are all different and that we, therefore, learn differently.
Let’s face it. Twenty-first-century classrooms are a mélange of IQs, races, ethnicities, genders, languages, disabilities, religions, SES levels, and who knows what else. Our kids certainly are different from one another.
The conclusion drawn from that circumstance is the widely expressed claim that teachers can’t be expected to follow an instructional program because kids vary so much that teaching needs to be highly individualized.
That makes sense. I know I’m pretty special, and the idea that society needs to adjust to my particular needs seems reasonable to me. Okay, maybe not so much.
But that’s the cool thing about research. It encourages us to challenge popular beliefs; to ask questions like, is it true? Does research confirm what everyone thinks?
The answer in this case, surprisingly enough, is “Not exactly.”
Reading research doesn’t challenge the idea that people are infinitely complex or that each of us is unique and special. It’s just that we don’t learn as uniquely as is so often claimed.
Research has certainly done a great job of identifying ways that we differ: Boys don’t read as well as girls (Loveless, 2015), nor do they tend to like reading as much (Love & Hamston, 2004), nor do they choose to read the same books (Merisuo-Storm, 2006). There are racial and ethnic differences in literacy attainment, too (Musu-Gillette, Robinson, McFarland, KewalRamani, et al., 2016); and the quantity and quality of children’s books available varies from neighborhood to neighborhood (Neuman & Celano, 2001).
Looking for Interactions
But the issue is not whether kids vary—they do—but whether what works to teach them to read differs. Researchers try to get at that question by looking for “interactions.”
Here is how it works: Let’s say the researcher wants to know if the Handy Dandy Reading Approach improves learning. Perhaps kids will be randomly assigned to a treatment group and a control group. The treatment group will be taught Handy Dandy and the control group will get the usual methods and materials. At the end, everyone will be tested and the two groups will be compared to see who won.
Let’s say Handy Dandy is as wonderful as its creators imagined. Indeed, the kids taught with that program clearly outdistanced the controls, or as the researcher might say, “there was a significant main effect for treatment.”
But then the researcher starts to wonder. Since one size doesn’t fit all, and all kids are different, maybe the program works differently with some kinds of kids than with others. That leads the researcher to look beyond the main effect (whether the program worked overall) to consider interaction effects (whether some subgroups performed differently).
Boys might have underperformed the girls with Handy Dandy, but the control group boys also may have lagged, so that isn’t exactly what we’re getting at. A significant interaction, on the other hand, would reveal something like: The girls learned best from Handy Dandy, but the boys did somewhat better when they were taught with the traditional instructional approach. Such an interaction would reveal that the boys and girls were learning differently (not just that some were doing better than others).
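For readers who like to see the arithmetic, the difference between a main effect and an interaction can be sketched in a few lines of Python. The cell means below are invented purely for illustration (Handy Dandy is fictional, after all); the point is how the two calculations differ:

```python
# Hypothetical mean reading scores for the fictional "Handy Dandy" example.
# Keys are (group, condition); values are cell means (made-up numbers).
scores = {
    ("girls", "handy_dandy"): 82.0,
    ("girls", "control"):     74.0,
    ("boys",  "handy_dandy"): 71.0,
    ("boys",  "control"):     75.0,
}

# Main effect of treatment: average of the treatment cells minus
# average of the control cells, ignoring group membership.
treat_mean = (scores[("girls", "handy_dandy")] + scores[("boys", "handy_dandy")]) / 2
ctrl_mean = (scores[("girls", "control")] + scores[("boys", "control")]) / 2
main_effect = treat_mean - ctrl_mean  # 2.0: the program "won" overall

# Interaction: does the treatment effect differ across groups?
girls_effect = scores[("girls", "handy_dandy")] - scores[("girls", "control")]  # +8.0
boys_effect = scores[("boys", "handy_dandy")] - scores[("boys", "control")]     # -4.0
interaction = girls_effect - boys_effect  # 12.0: the groups learned differently

print(main_effect, girls_effect, boys_effect, interaction)
```

Notice that the overall main effect (+2) hides a crossover: in these made-up numbers the girls gain 8 points from the program while the boys lose 4. That nonzero difference-of-differences is exactly what a significant interaction would signal.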
So, what is the result of those kinds of interaction analyses? More often than not, reading studies report no significant interactions. All the groups usually benefit similarly from the various instructional programs, approaches, materials, and procedures that we try out.
Of course, there can be exceptions. For example, in a study in which students were encouraged to read over the summer—by providing them with books matched to their reading levels and interests and with some ongoing encouragement—there were no main effects; that is, the summer reading didn’t improve reading achievement (White, Kim, Kingston, & Foster, 2013).
However, there was a significant, and puzzling, interaction.
The children from the schools with high concentrations of poverty (75%-100% of students receiving free/reduced-price lunches) actually made small gains in reading achievement due to taking part in the summer reading program. Unfortunately, those from schools with lower poverty concentrations (46%-74%) not only didn’t gain as much as the more disadvantaged kids, but they declined in achievement in comparison to the non-booked controls. Encouraging reading was a small benefit to some kids, but it appears to have harmed some others. Yikes!
The researchers opined that perhaps this interaction was an “anomaly.” It is true that the results of research are never 100 percent certain, so occasionally we may obtain a result that cannot be replicated. But this peculiar result illustrates how tenuous interactions tend to be in reading studies.
One reason research hasn’t been able to tease out differences in learning has to do with statistical power. If a sample of students is small, it is harder to identify a significant difference; with a small sample, you can detect only an especially large learning difference. Most studies have sample sizes large enough to provide a sound test of the main effect, but they do not necessarily include enough participants to allow for the subgroup comparisons needed to adequately test the “one size fits all” idea.
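A little arithmetic shows how the power problem bites. This Python sketch assumes a true standardized effect of 0.3 (both that effect and the sample sizes are illustrative, not from any particular study) and computes the z statistic for a two-group mean comparison with unit standard deviations:

```python
import math

def z_for_mean_difference(effect_size, n_per_group):
    """z statistic for a true standardized mean difference,
    assuming sd = 1 in both groups (illustrative simplification)."""
    se = math.sqrt(2 / n_per_group)  # standard error of the difference
    return effect_size / se

d = 0.3  # an assumed true effect; the same for everyone in this sketch

z_full = z_for_mean_difference(d, 200)  # full-sample comparison
z_sub = z_for_mean_difference(d, 20)    # a subgroup one-tenth the size

print(round(z_full, 2), round(z_sub, 2))
```

With 200 per group, z is about 3.0, comfortably past the conventional 1.96 cutoff; with 20 per group, the identical effect yields z of about 0.95 and goes undetected. The effect didn’t shrink; the sample did.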
A benefit of meta-analysis (since meta-analyses combine data from many individual studies) is that it can sometimes detect learning differences that the original studies couldn’t. Meta-analysis can do that because sufficient numbers of subgroup members accumulate across the multiple studies. If one study had three Black students, another had 24, and two others had 15 apiece, you may eventually gather enough data to allow for a sound racial comparison, even though none of the original studies had enough students to do this on its own.
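Here is what that accumulation looks like in a simple Python sketch. The subgroup sizes echo the example above, but the per-study mean gains are invented, and real meta-analyses typically weight by inverse variance rather than raw sample size:

```python
# Hypothetical subgroup data from four studies: (n_students, mean_gain).
# No single study is big enough for a stable comparison on its own.
studies = [(3, 4.0), (24, 2.5), (15, 3.2), (15, 2.8)]

# Pool across studies: total subgroup size and a size-weighted mean gain.
total_n = sum(n for n, _ in studies)  # 57 students accumulated
pooled_mean = sum(n * m for n, m in studies) / total_n

print(total_n, round(pooled_mean, 2))
```

Fifty-seven students spread over four studies is a far sounder basis for a subgroup comparison than any one study’s three, 24, or 15, which is precisely the leverage meta-analysis provides.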
With all of our large public research syntheses in literacy (e.g., National Reading Panel, National Early Literacy Panel, National Literacy Panel for Language Minority Children and Youth, Committee for the Prevention of Reading Difficulties, Writing Next), we must be identifying a lot of interactions, right?
Unfortunately, accumulating those differences across studies only works if the original reports provided key information about each subgroup. It isn’t enough to have included varied samples of kids in the studies (most literacy studies seem to do that). The studies need to report how those particular students did (such as providing the means and standard deviations for the various groups and measures—even if the study itself wasn’t analyzing those particular comparisons). That kind of reporting is required in medical studies these days, but it is still rare in reading.
Not many interactions are well tested in reading. However, even when they are, they have tended to be either non-significant or dubious, as in that puzzling summer reading study. As the National Early Literacy Panel reported: “This meta-analysis evaluated whether such variables as race or SES mitigated or moderated the effectiveness of the various interventions. Unfortunately, it was all too rare that the original studies had provided sufficient data to allow for unambiguous conclusions to be drawn.” (National Early Literacy Panel, 2008, p. x). And, when it did manage to test such comparisons, it found no differences.
The lack of such interactions undermines the mantra about “one-size-fits-all” instruction. For instance, what about learning styles—the notion that learners think differently or learn best from auditory or visual inputs, and so on? No interactions; there are no meaningful, reliable, or significant differences in learning styles that affect literacy learning (Willingham, Hughes, & Dobolyi, 2015).
Yeah, but what if my child is “right-brained”? No interactions; there are no meaningful, reliable, or significant differences in the phony left-brain/right-brain divide in learning (Nielsen, Zielinski, Ferguson, Lainhart, et al., 2013).
I could go on, but you get the idea—a lot of the individual differences that supposedly require different instructional responses from teachers are, well, baloney.
That sums up the problem and points out some of the mythological differences that supposedly undermine one-size-fits-all approaches. Next week, I'll explore the one consistent difference in learning to read that really does exist. Until then...
Copyright © 2018 Shanahan on Literacy. All rights reserved.