The literacy field has long been beleaguered by generic terms that no one seems to understand – or, more exactly, whose definitions nobody agrees on. Terms like whole language, balanced literacy, direct instruction, dyslexia, sight words, and guided reading are bandied about in journals, conference presentations, newspaper articles, and teachers’ lounges as if there were some shared dictionary out there that we were all accessing. Even terms that seem like they would be widely understood, like research or fluency, often turn out to be problematic.
This plague of vagueness is exasperating, and I think it prevents productive dialogue or any kind of substantive progress in the field.
Over the decades, reporters and policymakers have often asked me my opinion of [insert any of those undefined terms]. My usual response has been something along the lines of:
“Tell me what ________ is, and I’ll give you my opinion,” not-so-cleverly shifting the responsibility for definition to my questioner.
If they say, “balanced literacy means providing explicit instruction in key reading skills while trying to provide a motivational and supportive classroom environment,” I say, “I’m all for it.” If they tell me, “it means teaching reading with a minimum of explicit instruction, particularly in foundational skills like spelling and decoding,” then I’m strongly opposed.
That approach keeps me out of the soup, but it really doesn’t solve any important problem. My clarity and consistency aside, teachers are still inundated with invitations to professional development programs, textbooks, and classroom instructional practices that are supposedly aligned with some unspecified definition of today’s hot jargon.
The biggest offender now – if my Twitter feed is representative – is the “science of reading.”
I can’t believe the number of webinars, blogs, textbooks, professional development opportunities, and the like that aim to provide the latest and greatest information from the science of reading (whatever that is).
My advice to everyone: Grab your wallets and run!
Okay, I admit that isn’t very helpful, but it should save you a lot of money and aggravation.
Consumers of a science of reading should start out with a definition of what would fairly constitute such a science. That way they could always check to see if what was being promoted was what they were seeking.
Back in the late 1990s, federal education law – recognizing how misleadingly the term “research” was being used by textbook companies, consultants, and the like – provided a definition of “scientifically based reading research” (SBRR).
Unfortunately, in one fell swoop, the feds stopped promoting instructional approaches based on research and did away with the legal definition of scientific evidence, moves that coincided, I might point out, with the last round of gains in national reading scores.
I’d suggest that, though that definition no longer has legal standing, it is a good starting point for deciding what should be in your personal definition of “a science of reading.”
First, the evidence must be derived from a scientific method that is appropriate to the claim being made. If you want to claim that a particular instructional method or approach improves reading achievement, you need to prove it: that such instruction is more beneficial than other approaches.
That can only be accomplished through an educational experiment that provides a sound comparison between students who receive that instruction and those who don’t.
Other scientific methods can provide valuable information, but they can’t answer a “what works” kind of question.
Descriptive and correlational research methods are appropriate for many other important questions (e.g., Are kids of different races or genders making equal gains? What kinds of library books are students most interested in? Have reading scores risen in the past three years?). Those other research methods, if implemented appropriately, can provide sound answers to such questions.
You might be surprised how many fine scientists are out there telling teachers how and what to teach – even though their research has never tested the effectiveness of what they are recommending.
Evidence from their studies can be usefully provocative – that is, it may suggest worthwhile questions. If, for example, you noticed greater student engagement when kids were allowed to choose what to read, you might wonder, “Would such choice lead to more learning?” Unfortunately, too often, people see or think they see that kind of pattern and jump right to a conclusion, “Student choice must lead to more learning,” without bothering to test that claim through a rigorous experiment. (Sometimes research supports such a claim and sometimes it doesn’t. But it certainly can’t be recommended as being based on science without such a test.)
Something we should remember: when science identifies a potentially valuable avenue to better learning, that doesn’t mean we know how best to exploit that knowledge.
Basically, all I’m saying is, if you want to claim that something works, you need to try it out and show that it can be beneficial.
Second, a science of reading would require studies that provided a rigorous analysis of the data derived from educational experiments. Such analysis must ensure that the results are due to the instruction and not just to normal variations in performance. It also must ensure that the comparisons being made are sound. Some studies try to compare results with groups that are so different in the beginning that it would be impossible to attribute outcome differences to the instruction.
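To make that logic concrete, here is a minimal Python sketch (all names and scores are invented for illustration, not drawn from any actual study) of the kind of comparison an educational experiment requires: two groups, and a test of whether the difference in their means is larger than normal variation in performance would produce.

```python
# Illustrative sketch: comparing a treatment group to a control group.
# Scores are simulated, not real data.
import random
import statistics

random.seed(42)

# Hypothetical post-test reading scores.
treatment = [random.gauss(75, 10) for _ in range(30)]  # received the instruction
control = [random.gauss(68, 10) for _ in range(30)]    # business-as-usual

def welch_t(a, b):
    """Welch's t statistic: how large is the mean difference
    relative to the sampling variability of the two groups?"""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (ma - mb) / se

t = welch_t(treatment, control)
print(f"t = {t:.2f}")  # values well beyond about 2 suggest a real difference, not noise
```

The point of the sketch is the structure, not the arithmetic: without a comparison group that starts out equivalent, no amount of statistical machinery can attribute the outcome difference to the instruction.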
Third, the studies need to go through peer review or some other kind of independent scientific evaluation to protect against serious flaws in the reasoning or analysis.
Fourth, the studies need to be replicated or generalized. That’s why I depend so heavily on meta-analysis; it combines the results of multiple studies. It is not enough to know that the XYZ reading method had great results in one study if there are nine other investigations that showed it to be ineffective. That kind of pattern tells me this technique can work, but it rarely does. Not something I’d be likely to adopt or to recommend to schools.
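The arithmetic behind that judgment can be sketched briefly. In a standard fixed-effect meta-analysis, each study’s effect size is weighted by the inverse of its variance, so one striking result gets diluted by a body of null findings. The numbers below are invented purely to illustrate the pooling.

```python
# Illustrative sketch of inverse-variance pooling in a meta-analysis.
# Each pair is (effect size d, variance of d) for a hypothetical study.
studies = [(0.90, 0.04), (0.05, 0.02), (-0.10, 0.03), (0.02, 0.02), (0.08, 0.05)]

def pooled_effect(studies):
    """Fixed-effect pooled estimate: precise studies count for more."""
    weights = [1 / v for _, v in studies]
    effects = [d for d, _ in studies]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

d = pooled_effect(studies)
print(f"pooled d = {d:.2f}")  # the single large result barely moves the pooled estimate
```

Here the one impressive study (d = 0.90) is swamped by the several small-to-null results, which is exactly the “can work, but rarely does” pattern described above.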
Fifth, it helps if there are convergent findings – in other words, other evidence that appears to be consistent with these findings. Like the U.S. Department of Education of two decades ago, I would never place the imprimatur of science upon an instructional approach that had not actually been tried out in classrooms and shown to be effective. But once I have that evidence, I am heartened to know of other supporting information.
I don’t talk much about the brain research in reading. Not because I’m unaware of its potential importance, but because of its insufficiency. Any pattern revealed in neurological investigations that suggests an instructional possibility still must be evaluated in the classroom. Sometimes a basic idea is sound, but it is more challenging or complicated to implement than you realize.
In any event, descriptive and correlational studies, theories, neurological investigations, and studies of other kinds of learning may bolster your trust in the instructional studies that you have.
We have many studies showing the effectiveness of decoding instruction. Those are studies that have compared the results of a strong phonics emphasis versus a no-phonics or weak-phonics approach. My trust in those results goes up when I see the MRI studies showing how the brain connects the visual recognition of letters and words with the part of the brain that carries out phonological processing. That neurological evidence, on its own, wouldn’t be enough to scientifically endorse phonics as an effective instructional approach, but it sure provides convergent evidence that should strengthen my resolve to offer such instruction. (The same, in this case, could be said about digital simulation studies of reading as well.)
If I were invited to a science of reading seminar and wondered if it would be worthwhile, I’d ask the sponsors whether the presenters would base their claims on experimental studies of the practices they recommend.
If I had no choice but to attend, those would be the kinds of questions I’d be asking the presenters if their presentations didn’t make the foundations of their claims clear.
If we are serious about improving reading achievement for all children, we are only likely to get there if we hold ourselves to the highest standards of professional practice. Having a sound definition for what constitutes a “science of reading” is more than a game of semantics. Employing instructional approaches that have repeatedly benefited learners in rigorously implemented and analyzed studies is likely to be the most productive way to progress.
These days I’m seeing schools mandating instructional practices that have no direct research evidence in the name of the science of reading. Those practices don’t become part of the science of reading because someone wrote them down, or because they were recommended by a researcher, or because they address a particular aspect of reading development.
Comments

I think you’ve gotten at the root of a big, big, deeply discouraging problem.
"Sold a Story" described the "rock star status" of Marie Clay. Now the same status, just as problematic, is being applied to SOR people who don't have the evidence either, as you say.
A characteristic of science is skepticism and questioning – hypothesizing based on what is established. SOR instead is dogmatic. Tim, we are kicked in the teeth when we ask those presenters the questions you suggest, as right as the questions are.
I listened to some "SOR" podcasts this week, and, to give just one example, several times the discussion turned to how many times a child must encounter a word to "know" it. 12? 15? 20? Chasing citations was proclaimed as if there were a specific-number answer that applied to all children, and these podcasters would chase it and find it! This demonstrates a profound misunderstanding of the knowledge base: how studies are constructed, that "findings" and answers depend on several factors (as science tends to show), and how study results add value and provide direction. This is one example of misplaced precision. Science doesn't yield a recipe, which is what teachers still seek.
Maybe we need to define a scientific mindset.
What do you think the best phonics instruction looks like in a constructivist classroom? What are the best resources for phonics and phonemic awareness instruction?
Thank you, Dr. Shanahan, for offering clarity and precision again. The body of actual experimental research and other types of research is so large that it is difficult even for experts to read and digest it all. Teachers look for "recipes" and quick fixes because they are so overloaded, though that is no excuse. I appreciate that your posts offer summaries of research and citations to follow. A few examples of topics where I find the SOR community not quite in line with the research are 1) use of 100% decodable or mostly decodable text only until all sound patterns are learned, and 2) learning syllable types and rules as always effective. There are fights about "three-cueing," which is nuanced, and you've addressed it here. I appreciated your careful analysis of "Sold a Story" as well. As a teacher educator in a small private college, I keep up as best I can, but often both sides are critical of those in my role.
Related to your points above, one of the big problems is how selective people are in citing research. How many times do I have to hear someone citing the Clackmannanshire studies as evidence in support of SSP, ignoring the fundamental flaws that have been pointed out repeatedly? There is a long list of problems I've highlighted in various papers regarding the evidence for SSP (e.g., https://link.springer.com/article/10.1007/s10648-019-09515-y) that are just ignored. A strong proponent of SSP does not have to agree with my analyses, but it is not appropriate to simply ignore the paper despite its being published in one of the top journals in education. When you treat the literature selectively, you are not engaging in science for the sake of understanding; you are using science as a propaganda tool.
Just as often, people mischaracterize the studies they cite to support their claims – e.g., that the NRP provides evidence for SSP when in fact it found no good evidence favoring one form of phonics over another. The list is endless, and researchers are just as guilty as anyone.
Copyright © 2024 Shanahan on Literacy. All rights reserved.