I talk a lot about research in this space.
I argue for research-based instruction and policy.
I point out a dearth of empirical evidence behind some instructional schemes, and champion others that have been validated or verified to my satisfaction.
Some readers are happy to find out what is “known,” and others see me as a killjoy because the research findings don’t match well with what they claim to “know.”
Members of this latter group are often horrified by my conclusions. They are certain that I'm wrong because they read a book for teachers with lots of impressive citations that seem to contradict my claims.
What is clear from these exchanges is that many educators don’t know what research is, why we should rely on it, or how to interpret research findings.
Research is used to try to answer a question, solve a problem, or figure something out. It requires the systematic and formal collection and analysis of empirical data. Research can never prove something with 100 percent certainty, but it can reduce our uncertainty.
“Systematic and formal” means that there are rules or conventions for how data in a research study need to be handled; the rigor of these methods is what makes the data trustworthy and allows the research to reduce our uncertainty. Thus, if a researcher wants to compare the effectiveness of two instructional approaches, he or she has to make sure the groups to be taught with these approaches are equivalent at the beginning. Likewise, we are more likely to trust a survey that defines its terms, or an anthropological study that immerses the observer in the environment for a long period of time.
Research reports don’t just provide the results or outcomes of an investigation; they also explain—usually in great detail—the methods used to arrive at those results. Most people don’t find research reports very interesting because of this kind of detail, but it is that detail that allows us to determine how much weight to place on a study.
Given all of that, here are some guidelines to remember.
1. Just because something is written doesn’t make it research.
Many practitioners think that if an idea is in a book or magazine, it is research. Some even think my blog is research. It is not, and neither is the typical Reading Teacher article or Heinemann book.
That’s not a comment on their quality or value, but a recognition of what such writing can provide. In some cases, as with my blog, there is a serious effort to summarize research findings accurately; I work hard to distinguish my opinions from actual research findings.
Many publications for teachers are no more than compendia of opinions or personal experiences, which is fine. However, such accounts carry all the limitations of opinion and anecdote.
Just because someone likes what they’re doing (e.g., teaching, investing, cooking) and then writes about how well they’ve done it… doesn’t necessarily mean it is really so great. That’s why 82% of people believe that they’re in the top 30% of drivers, something that obviously can’t be right (at most 30% can be).
As human beings we all fall prey to overconfidence, selective memory, and just a plain lack of systematicity in how we gain information about our impact.
Often when teachers tell me that kids now love reading as a result of how they teach, I ask, “How do you know? What evidence do you have?” Usually the answer is something like, “A parent told me that their child now likes to read.” Of course, that doesn’t tell us how the other 25 kids are doing, whether the parent is a good observer of such things, or even what motivated the seemingly offhand comment.
Even when you’re correct about things improving, it’s impossible—from personal experience alone—to know the source of the success. It could be the teaching method, or maybe just the force of your personality. If another teacher adopted your methods, things might not be so magical.
And, then there is opportunity cost. We all struggle with this one. No matter how good an outcome, I can’t possibly know how well things might have gone had I done it differently. The roads not traveled may have gotten me someplace less positive—but not necessarily. You simply can’t know.
That’s where research comes in… it allows us to avoid overconfidence, selective memory, lack of systematicity, lack of reliable evidence, incorrect causal attribution, and the narrowness of individual experience.
2. Research should not be used selectively.
Many educators use research the same way advertisers and politicians do—selectively, to support their beliefs or claims—rather than trying to figure out how things work or how they could be made to work better.
I wish I had a doughnut for every time a school official has asked me to identify research that could be used to support their new policy! They know what they want to do and want research to sell it, rather than studying the research to determine what they should do.
Cherry-picking an aberrant study outcome that matches one’s claims or ignoring a rigorously designed study in favor of one with a preferred outcome may be acceptable debater’s tricks but are bad science. And, they can only lead to bad instructional practice.
When it comes to determining what research means, you must pay attention not just to results that you like. Research is at its best when it challenges us to see things differently.
I vividly remember, early in my career, when Scott Paris challenged our colleagues to wonder why DISTAR, a scripted teaching approach, was so effective, despite the fact that most of us despised it. Clearly, we were missing something; our theories were so strong that they were blinding us to the fact that what we didn’t like was positive for kids—at least for some kids or under some conditions (the kinds of things that personal experience can’t reveal).
3. Research, and the interpretation of research, require consistency.
Admittedly, interpreting research studies is as much an art as science. During the nearly 50 years of my professional career, the interpretation of research has changed dramatically.
It used to be entirely up to the discretion of each individual researcher as to which studies they’d include in a review and what criteria they would use to weigh these studies.
That led to some pretty funky science: research syntheses that included only studies supporting a particular teaching method, or inconsistent criteria for impeaching studies (dismissing one study because of a serious design weakness while accepting studies with preferred findings that suffer the same flaw).
I’ve been running into this problem a lot lately. Not among researchers, but among practitioners. When I point out a research-supported instructional practice (Reading Recovery) that is inconsistent with phonics theories, I’m told “anything works if it is taught one-on-one.” That sounds great, but those same people are offended when phonics instruction receives insufficient attention, citing evidence such as the National Reading Panel report. The problem with this: the instruction in many of those positive phonics studies was delivered one-on-one.
I’m persuaded that both phonics and Reading Recovery work (because they both have multiple studies of sufficient quality showing their effectiveness). That doesn’t mean I think they work equally well, or that they are equally efficient, or that they even accomplish the same things for students.
I agree with those who argue against teaching cueing systems, because research evidence reveals that poor readers use non-orthographic information to identify words and that good readers do not. Teaching kids to read like poor readers makes no sense to me. Nevertheless, Reading Recovery clearly gives kids a learning advantage, and we’d be wise to look hard at it to see why (one study found that adding more explicit phonics to it improved kids’ progress, and that’s a clue that may help us understand what it does and doesn’t accomplish).
The point isn’t phonics or Reading Recovery: when we make those kinds of choices, we need to weigh evidence consistently—treating studies that challenge our deepest beliefs the same as those that are wind beneath our wings. What works in teaching, who it helps, how it helps them… those are complex questions requiring sound evidence and wise analysis rather than rage and cheap “hooray for our side” tweets.
Let’s do better.
Comments

Thanks, Tim, for the guidelines on how to read research - we need it! Have you looked at this recent piece from Chapman and Tunmer, “Reading Recovery’s unrecovered learners: Characteristics and issues,” published in the UK journal Review of Education in July 2018? If so, what are your thoughts?
Abstract
Reading Recovery (RR) was developed in New Zealand in the early 1980s to provide 30 minutes of daily individualised literacy instruction over 20 weeks for students struggling with learning to read after one year of formal schooling. Considerable research has been undertaken on the RR programme. While results indicate short-term success for some students, each year 15–30% of students do not successfully complete the programme and are therefore ‘unrecovered’. Research on the characteristics of these unrecovered students is sparse. This review examines findings on the characteristics of unrecovered students. These RR students typically have limited phonemic awareness and phonemically based decoding skills, and lower scores on RR screening measures on entry to RR than ‘recovered’ students. In New Zealand, unrecovered students tend to be enrolled in schools serving lower socio-economic neighbourhoods, and tend to be from Māori or Pasifika (Polynesian Pacific Island heritage) backgrounds. These students typically receive more RR lessons than recovered students. We conclude that RR does not tailor instruction to meet the needs of individual students, as claimed. The RR instructional model, developed in the 1970s, fails to recognise the importance of explicit, systematic instruction in phonemic awareness and the use of letter–sound relations. Such instruction is essential for most students who struggle with literacy learning during their early years of schooling and especially important for students who experience the most difficulty with learning to read. Suggestions are presented for strengthening the RR programme and for reducing the number of unrecovered students.
Thanks for this reminder of the value of research, and especially talking about what it is and what it isn't so we can avoid the trap of thinking the latest trendy book for educators is research-based. I also love that your writing is always served with a side dish of snarky humor! Can you recommend any great repositories on research-based practices that summarize the findings of rigorous studies other than What Works Clearinghouse, which has some significant limits?
I'm pondering--what a great write-up!
A very, very long time ago, when I did my first piece of research, one of my committee members reminded me that if I really thought I already knew the answer to my research question, then I'd best take up a different question. He further reminded me that my goal was not to prove something; my goal was to discover something. Your post reminded me of that long-ago conversation. It was both a needed post and a well done one.

Earlier this year, in one of your tweets, you pointed out that too often defenders of various positions look only at the strengths of their position and fail to admit that sometimes there may also be some weaknesses. If research is done with the purpose of uncovering truth rather than proving a position, then sometimes we may discover things that challenge some of our most cherished positions, and may actually require that we modify them.

In the case of Reading Recovery, I have presented spirited defenses of the program https://doctorsam7.blog/2018/08/10/why-i-like-reading-recovery-and-what-we-can-learn-from-it-by-dr-sam-bommarito/ (and I was trained in RR). Recently I was talking to the mother of a dyslexic child for whom the RR program did not work. Was it because her child had an RR teacher who did not implement the program properly, or was it because there are SOME children who really need a more direct and systematic phonics program than RR provides? I honestly don't know the answer (yet), but if I really believe in research-based teaching, then I must be willing to at least ask that kind of question.

This doesn't mean I've stopped advocating for RR. Look at my blog entry and you'll see some of the many pieces of research that demonstrate it works for many, many children. I remain steadfast in my belief that it is a program that needs to be continued and emulated. HOWEVER, if there is even one child for whom a different program might work better, then the question I just raised is the kind of question we need to ask (is there such a child/are there such children?), with the follow-up being: if the answer turns out to be yes, then what might work better for that particular child or those particular children? Thanks for the reminder of what research should really be about. If we are not willing to admit there are both strengths and weaknesses to every approach, then we shut the door to future progress. Sam