Recently, I wrote about the science of reading. I explained how I thought the term should be defined and described the kind of research needed to prescribe instruction.
Today I thought I’d put some meat on the bones, adding some details that might help readers grasp the implications of a scientific or research-based approach to reading.
What does it mean when someone says an approach to reading instruction “works”?
The term “it works” has gnawed at me for more than fifty years! I remember as a teacher how certain activities or approaches grabbed me. They just seemed right. Then I’d try them out in my classroom and judge some to work, and others not so much.
But why?
What was it that led me to believe some of them “worked” and some didn’t?
It puzzled me even then.
Teachers, administrators, and researchers seem to have different notions of “what works.”
Teachers, I think, depend heavily on student response. If an activity engages the kids, we see it as hopeful. We give credence to whether an activity elicits groans or a buzz of activity.
When I do a classroom demonstration and students say they liked the activity and want to do more, most likely I’ve won that teacher over.
Teachers recognize that learning requires engagement, so when an activity pulls kids in, they’re convinced that it’s a good idea.
That satisfaction is sometimes denigrated because of its potential vapidity. Let’s face it. Bozo the Clown engages kids, too, but with how much learning?
What those complaints fail to recognize is that the teacher has already bought into the pedagogical value of the activity. They assume it is effective. Student engagement is like gaining a third Michelin star.
What about administrators?
Their needs are different. To them, “it works” is more about adult acceptance. If a program is adopted, the materials shipments arrive as promised, and neither teachers nor parents complain, it works!
And, to researchers?
To them, it means there has been an experimental study that compared that approach with some other and found it to be superior in terms of fostering learning.
If a method does no better than “business as usual” classroom practice, then it doesn’t work (which, confusingly, isn’t entirely correct, since the difference isn’t that everybody in one group learned and nobody in the other did).
I’ve worn all those hats – teacher, administrator, researcher – and I prefer the last one. The reason? Because it’s the only one that explicitly bases the judgment on student learning.
Will we accomplish higher achievement if we follow research and make our teaching consistent with the science?
That’s the basic idea, but even that doesn’t appear to be well understood.
I think we tend to get misled by medical science, particularly pharmacology.
New drugs are studied so thoroughly it’s possible for scientists to say that a particular nostrum will provide benefit 94% of the time and that 28% of patients will probably suffer some unfortunate side effect.
When I tell you that the research shows that a particular kind of instruction works (i.e., it led to more learning), I can’t tell you how likely it is that you will be able to make it work, too.
Our education studies reveal whether someone has managed to make an approach successful.
Our evidence indicates possibility, not certainty.
When we encourage you to teach like it was done in the studies, we are saying, “if they made it work, you may be able to make it work, too.”
That’s why I’m such a fan of multiple studies.
The more times other people have made an approach work under varied circumstances, the more likely you’ll be able to find a way to make it work as well.
If you show me one such study, it seems possible I could match their success. Show me 38, and it seems even more likely that I could pull it off.
That nuance highlights an important point: Our instructional methods don’t have automatic effects. We, as teachers, make these methods work.
Lackadaisical implementation of instruction is never likely to have good results. The teacher who thinks passive implementation of a science-based program is what works is in for a rude awakening.
I assure you that in the studies, everyone worked hard to make sure there were learning payoffs for the kids. That’s part of what made it work better than the business-as-usual approach.
That point is too often muffled by our rhetoric around science-based reading. But teacher buy-in, teacher effort, and teacher desire to see a program work for the kids are all ingredients in success.
I don’t get it. I’m hearing that some approaches (e.g., 3-cueing) are harmful, and yet I know of research-based programs that teach them. Does that make any sense?
You’re right that 3-cueing is part of some successful programs. But that doesn’t mean it’s a good idea. Instructional programs usually include multiple components. Studies of such programs tell us whether the program as a whole has been effective, but they usually say little about the various components that are integral to it.
Without a direct test of the individual components, there are three possibilities: (1) a component may be an active ingredient, one of the reasons for the success; (2) it may be a neutral ingredient – drop it and kids would do just as well; or (3) it may be harmful – the instruction would be even more effective without it.
Logically, 3-cueing makes no sense. It emphasizes behaviors good readers eschew.
That said, I know of no research that has evaluated 3-cueing specifically.
Claims that it’s harmful (beyond being a likely time waster) are, for the time being, overstatements. These claims rely on logic, not data.
The problem that you identify is a common one – people will tell you that multisensory instruction, a sole focus on decodable texts, advanced phonemic awareness, more social studies lessons, word walls, sound walls, and so on are all certain roads to improved achievement. Each is part of at least one successful program or another. But none have been evaluated directly. The truth is, we really don’t know if they have any value at all.
They might provide benefits, but that isn’t the same as knowing that they do.
Our district has adopted new programs and instructional routines based on science. But our kids aren’t doing any better than before. Does that make any sense?
No, that makes no sense at all. The purpose of any kind of educational reform – including science-based reform – is to increase learning. The whole point is higher average reading scores or a reduction in the numbers of struggling students.
Whoever’s in charge should take this lack of success seriously and should be asking – and finding answers to – the following questions.
Administrators often make choices based on minimal information. It is better to vet these things before adopting them, but in a case like this one, it is never too late to find out if the reform scheme was really consistent with the science.
Some approaches work better than others because they have a bigger footprint. They provide a greater amount of teaching than business-as-usual approaches. Adopting such programs without making the schedule changes needed to facilitate their implementation will likely undermine potential success. Are kids getting more instruction, less instruction, or about the same as before?
Often the adoption of new programs or reform efforts aimed at a particular piece of the puzzle leads to greater attention to certain abilities, but diminished attention to other key parts of literacy. Make sure that you aren’t trading more phonics for less fluency work, or more vocabulary for less comprehension. You want all components of reading to receive adequate attention – not going overboard with some while neglecting others.
Compliance matters in program implementation. The adage that “teachers can do whatever they want when the door is closed” highlights one of the biggest roadblocks to making such efforts work. You need to make sure you have sufficient buy-in from the men and women who do the daily teaching. You bought a new program or set new instructional policies. Are they being used or followed?
Program adoption requires a lot more than issuing a policy proclamation. Research shows that program implementation supported by substantial professional development is much more successful than just buying a program. You need to make sure that you’ve built the capacity for success and not just expected magic to happen.
Comments

Your consistent advocacy for reading practices that are based on multiple scientific research studies is always impressive. I think it would be good to consider a complementary science – the science of implementation. Dean Fixsen has done a great deal of research on the most effective ways to successfully implement and monitor new research-based programs. Quality implementation is certainly not a trivial pursuit. Solid reading programs can sadly be rejected not because of their intrinsic value but because they were not implemented and monitored properly.
Are you saying there is no research on multisensory teaching methods, especially relating to dyslexics?
Plenty of gut punches today, Tim. You must be thinking of going to see "Creed 3" this weekend. One of your points could be expanded a bit.
Your "they liked it" criteria for instruction that "works" is tied to behavior management. For both teachers and admin (pre and post Covid), "what works" has been and is characterized by what keeps kids entertained, in their seats, quiet, respectful and out of the administrator's office. Whether or not the instruction impacts learning is secondary.
Particularly concerning to me, as a small publisher, is the impact of some reading technology. The design of "instructional" reading technology, especially "adaptive" reading programs, centers on video-gaming "fun" with all the bells and whistles: badges, congratulatory certificates, etc. If it "works," the teachers and admin are able to catch up on their emails. (Of course, this doesn't apply to teachers and admin reading this post.) In fact, two of the most popular (and extremely expensive) reading intervention programs tout their ability to "check in" (snoop) on students' screen activities by clicking on the students' names on the teachers' desktops. Sigh.
Rochelle--
Yep. That is what I'm saying (a point I've made before as have other researchers, such as the stellar group at Florida State University).
tim