The Whys and Hows of Research and the Teaching of Reading

  • 03 November, 2018
  • 30 Comments

I talk a lot about research in this space.

I argue for research-based instruction and policy.

I point out a dearth of empirical evidence behind some instructional schemes, and champion others that have been validated or verified to my satisfaction.

Some readers are happy to find out what is “known,” and others see me as a killjoy because the research findings don’t match well with what they claim to “know.”

Members of this latter group are often horrified by my conclusions. They are often certain that I’m wrong because they read a book for teachers with lots of impressive citations that seem to contradict my claims.

What is clear from these exchanges is that many educators don’t know what research is, why we should rely on it, or how to interpret research findings.

Research is used to try to answer a question, solve a problem, or figure something out. It requires the systematic and formal collection and analysis of empirical data. Research can never prove something with 100 percent certainty, but it can reduce our uncertainty.

“Systematic and formal” means that there are rules or conventions for how data in a research study need to be handled; the rigor of these methods is what makes the data trustworthy and allows the research to reduce our uncertainty. Thus, if a researcher wants to compare the effectiveness of two instructional approaches, he or she has to make sure the groups to be taught with these approaches are equivalent at the beginning. Likewise, we are more likely to trust a survey that defines its terms, or an anthropological study that immerses the observer in the environment for a long period of time.

Research reports don’t just provide the results or outcomes of an investigation, but they explain—usually in great detail—the methods used to arrive at those results. Most people don’t find research reports very interesting because of this kind of detail, but it is that detail that allows us to determine how much weight to place on a study.

Given all of that, here are some guidelines to remember.

1. Just because something is written doesn’t make it research.

Many practitioners think that if an idea is in a book or magazine, it is research. Some even think my blog is research. It is not, and neither is the typical Reading Teacher article or Heinemann book.

That’s not a comment on their quality or value, but a recognition of what such writing can provide. In some cases, as with my blog, there is a serious effort to summarize research findings accurately. I work hard to distinguish my opinions from actual research findings.

 

Many publications for teachers are no more than compendia of opinions or personal experiences, which is fine. However, such writing carries all of the limits of opinion and personal experience.

Just because someone likes what they’re doing (e.g., teaching, investing, cooking) and then writes about how well they’ve done it… doesn’t necessarily mean it is really so great. That’s why 82% of people believe that they’re in the top 30% of drivers, something that obviously can’t be right.

As human beings we all fall prey to overconfidence, selective memory, and just a plain lack of systematicity in how we gain information about our impact.

Often when teachers tell me that kids now love reading as a result of how they teach, I ask: How do you know? What evidence do you have? Usually the answer is something like, “A parent told me that their child now likes to read.” Of course, that doesn’t tell how the other 25 kids are doing, whether the parent is a good observer of such things, or even the motivation behind the seemingly offhand comment.

Even when you’re correct about things improving, it’s impossible—from personal experience alone—to know the source of the success. It could be the teaching method, or maybe just the force of your personality. If another teacher adopted your methods, things might not be so magical.

And, then there is opportunity cost. We all struggle with this one. No matter how good an outcome, I can’t possibly know how well things might have gone had I done it differently. The roads not traveled may have gotten me someplace less positive—but not necessarily. You simply can’t know.

That’s where research comes in… it allows us to avoid overconfidence, selective memory, lack of systematicity, lack of reliable evidence, incorrect causal attribution, and the narrowness of individual experience.

2. Research should not be used selectively.

Many educators use research the same way advertisers and politicians do—selectively, to support their beliefs or claims—rather than trying to figure out how things work or how they could be made to work better.

I wish I had a doughnut for every time a school official has asked me to identify research that could be used to support their new policy! They know what they want to do and want research to sell it, rather than studying the research to determine what they should do.

Cherry-picking an aberrant study outcome that matches one’s claims or ignoring a rigorously designed study in favor of one with a preferred outcome may be acceptable debater’s tricks but are bad science. And, they can only lead to bad instructional practice.

When it comes to determining what research means, you must pay attention not just to results that you like. Research is at its best when it challenges us to see things differently.

I vividly remember, early in my career, when Scott Paris challenged our colleagues to wonder why DISTAR, a scripted teaching approach, was so effective, despite the fact that most of us despised it. Clearly, we were missing something; our theories were so strong that they were blinding us to the fact that what we didn’t like was positive for kids—at least for some kids or under some conditions (the kinds of things that personal experience can’t reveal).

3. Research, and the interpretation of research, require consistency.

Admittedly, interpreting research studies is as much an art as science. During the nearly 50 years of my professional career, the interpretation of research has changed dramatically.

It used to be entirely up to the discretion of each individual researcher as to which studies they’d include in a review and what criteria they would use to weigh these studies.

That led to some pretty funky science: research syntheses that identified only studies supporting a particular teaching method, or inconsistent criteria for impeaching studies (rejecting one study because of a serious design weakness, but then accepting studies with more agreeable findings even though they suffered the same flaw).

I’ve been running into this problem a lot lately. Not among researchers, but among practitioners. When I point out a research-supported instructional practice (Reading Recovery) that is inconsistent with phonics theories, I’m told “anything works if it is taught one-on-one.” That sounds great, but those same people are offended when there is insufficient attention to phonics instruction, in spite of the evidence supporting phonics, such as the National Reading Panel report. The problem with this: the instruction in many of those positive phonics studies was delivered one-on-one.

I’m persuaded that both phonics and Reading Recovery work (because they both have multiple studies of sufficient quality showing their effectiveness). That doesn’t mean I think they work equally well, or that they are equally efficient, or that they even accomplish the same things for students.

I agree with those who argue against teaching cueing systems, because research evidence reveals that poor readers use non-orthographic information to identify words and that good readers do not. Teaching kids to read like poor readers makes no sense to me. Nevertheless, Reading Recovery clearly gives kids a learning advantage, and we’d be wise to look hard at it to see why (one study found adding more explicit phonics to it improved kids’ progress, and that’s a clue that may help us understand what it does and what it doesn’t).

The point isn’t phonics or Reading Recovery. When we make those kinds of choices, we need to weigh evidence consistently—treating studies that challenge our deepest beliefs the same as those that are wind beneath our wings. What works in teaching, who it helps, how it helps them… those are complex questions requiring sound evidence and wise analysis rather than rage and cheap “hooray for our side” tweets.

Let’s do better.

Comments

See what others have to say about this topic.

Harriett Janetos
Nov 03, 2018 11:16 PM

Thanks, Tim, for the guidelines on how to read research - we need it! Have you looked at this recent piece from Chapman and Tunmer: 'Reading Recovery's unrecovered learners: Characteristics and issues' published in the UK journal Review of Education in July 2018? If so, what are your thoughts?

Abstract

Reading Recovery (RR) was developed in New Zealand in the early 1980s to provide 30 minutes of daily individualised literacy instruction over 20 weeks for students struggling with learning to read after one year of formal schooling. Considerable research has been undertaken on the RR programme. While results indicate short-term success for some students, each year 15–30% of students do not successfully complete the programme and are therefore ‘unrecovered’. Research on the characteristics of these unrecovered students is sparse. This review examines findings on the characteristics of unrecovered students. These RR students typically have limited phonemic awareness and phonemically based decoding skills, and lower scores on RR screening measures on entry to RR than ‘recovered’ students. In New Zealand, unrecovered students tend to be enrolled in schools serving lower socio-economic neighbourhoods, and tend to be from Māori or Pasifika (Polynesian Pacific Island heritage) backgrounds. These students typically receive more RR lessons than recovered students. We conclude that RR does not tailor instruction to meet the needs of individual students, as claimed. The RR instructional model, developed in the 1970s, fails to recognise the importance of explicit, systematic instruction in phonemic awareness and the use of letter–sound relations. Such instruction is essential for most students who struggle with literacy learning during their early years of schooling and especially important for students who experience the most difficulty with learning to read. Suggestions are presented for strengthening the RR programme and for reducing the number of unrecovered students.

Carrie
Nov 04, 2018 12:42 AM

Thanks for this reminder of the value of research, and especially talking about what it is and what it isn't so we can avoid the trap of thinking the latest trendy book for educators is research-based. I also love that your writing is always served with a side dish of snarky humor! Can you recommend any great repositories on research-based practices that summarize the findings of rigorous studies other than What Works Clearinghouse, which has some significant limits?

Tina Abrego
Nov 04, 2018 01:53 AM

I'm pondering--what a great write up!

Sam Bommarito
Nov 04, 2018 05:02 AM

A very, very long time ago, when I did my first piece of research, one of my committee members reminded me that if I really thought I already knew the answer to my research question, then I'd best take up a different question. He further reminded me that my goal was not to prove something; my goal was to discover something. Your post reminded me of that long-ago conversation. It was both a needed post and a well-done one. Earlier this year in one of your tweets you pointed out that too often defenders of various positions only look at the strengths of their position and fail to admit that sometimes there may also be some weaknesses. If research is done with the purpose of uncovering truth rather than proving a position, then sometimes we may discover things that challenge some of our most cherished positions, and may actually require we modify them. In the case of Reading Recovery, I have presented spirited defenses of the program https://doctorsam7.blog/2018/08/10/why-i-like-reading-recovery-and-what-we-can-learn-from-it-by-dr-sam-bommarito/ (and I was trained in RR). Recently I was talking to the mother of a dyslexic child for whom the RR program did not work. Was it because her child had an RR teacher who did not implement the program properly, or was it because there are SOME children who really need a more direct and systematic phonics program than RR provides? I honestly don't know the answer (yet), but if I really believe in research-based teaching then I must be willing to at least ask that kind of question. This doesn't mean I've stopped advocating for RR. Look at my blog entry & you'll see some of the many pieces of research that demonstrate it works for many, many children. I remain steadfast in my belief that it is a program that needs to be continued and emulated. HOWEVER, if there is even one child for whom a different program might work better, then the question I just raised is the kind of question we need to ask (is there such a child/are there such children?), with the follow-up being: if the answer turns out to be yes, then what might work better for that particular child/those particular children? Thanks for the reminder of what research should really be about. If we are not willing to admit there are both strengths and weaknesses to every approach, then we shut the door to future progress. Sam

Jeffrey Bowers
Nov 04, 2018 10:46 AM

Dear Tim, I agree with your philosophy of how science should be applied to education, but you do not follow your own advice when it comes to phonics. You claim that there is evidence for phonics from the NRP, but that is not correct. Although the NRP authors claim that the meta-analysis supports “systematic phonics”, they designed their analysis in such a way that they did not even test this hypothesis or test the claim that systematic phonics is better than whole language.

The confusion can be seen in the following two quotes from the NRP. In the first quote, the authors conclude that systematic phonics is better than non-systematic phonics and whole language, writing:

“Students taught systematic phonics outperformed students who were taught a variety of nonsystematic or non-phonics programs, including basal programs, whole language approaches, and whole word programs” [bold added]. (NRP, 2000, p. 2-134).

But the second quote describes what in fact was the design of the study:

“Findings provided solid support for the conclusion that systematic phonics instruction makes a bigger contribution to children’s growth in reading than alternative programs providing unsystematic or no phonics instruction” [bold added] (NRP, 2000, p. 2-92).

It is easy to miss the difference here, but it is fundamental. The second quote makes it clear that non-systematic phonics and whole language studies were part of the broader category of “alternative programs” that includes multiple different types of instruction, including whole word instruction (which includes no phonics). If you want to assess the relative effectiveness of systematic phonics and whole language, then the relevant comparison is studies that used systematic phonics vs. studies that used whole language. And if you want to assess the effectiveness of systematic phonics, then you need to compare the effectiveness of systematic phonics to unsystematic phonics (not to the broader category of alternative programs that include “no phonics” conditions). But the authors of the NRP did not do this. When Camilli et al. (2006) made the correct comparison in order to assess systematic phonics, namely, comparing studies that included systematic phonics to studies with non-systematic phonics, the effect was no longer significant.

On top of this, as I detail in my review, the NRP meta-analysis included studies that should have been excluded based on the exclusion criteria, used the wrong baseline in at least one study in a way that contributed to the large effects, most studies were not RCTs, and as Torgerson et al. (2006) noted, there is evidence of publication bias in the RCTs that were included. At minimum, researchers should cite the Camilli et al. study and should not use the NRP to support systematic phonics over non-systematic phonics (or whole language) given the design of the meta-analysis. But the fact is that the NRP has been cited over 20K times and Camilli et al. 58 times according to Google Scholar. And it is the first quote from the NRP, claiming that systematic phonics is better than whole language, that is repeated 1000s of times in the literature.

But the NRP meta-analysis is now almost 20 years old. As I show in Bowers (2018), the evidence is weaker still in subsequent meta-analyses (people focus on the NRP as it shows the largest effects), and indeed, all subsequent meta-analyses in support of systematic phonics have the same logical flaw pointed out by Camilli et al. If you are interested in unbiased reporting of the literature, why not cite the Suggate (2016) study, which shows that phonics does not have a long-term impact, and indeed, descriptively does the worst in a comparison of multiple methods? I just see no empirical evidence that justifies the strong commitment to systematic phonics. You can find the Bowers (2018) detailed review of systematic phonics at: https://psyarxiv.com/xz4yn/ For a short blog post that summarizes the problem with the evidence for systematic phonics, and some debate, go to: https://jeffbowers.blogs.bristol.ac.uk/blog/phonics/



Jo-Anne Gross
Nov 04, 2018 01:51 PM

I'm just happy Pleasant Rowlands gave the Reading League a leg up with her donation.
We can finally get some teachers trained and stop the insanity.

RR has skewed its research by dropping the students that didn't benefit from it midway into the program.

Tim Shanahan
Nov 04, 2018 02:03 PM

Harriet-

I do know of that research. I also know of an earlier Tunmer piece in which he added explicit phonics to RR and sped the children’s recovery. He has been a good example of a researcher trying to figure something out rather than trying to throw the baby out with the bath water.

Tim

Tim Shanahan
Nov 04, 2018 02:05 PM

Carrie—
I don’t know of another clearinghouse that does that. I would continue to depend on the very best research journals in the field, however. Review of Educational Research is a good place to start.

Tim Shanahan
Nov 04, 2018 02:08 PM

Actually, Jeff, the charge to the NRP was not to compare phonics and whole language, and as you point out, that is not what we studied. We looked at the impacts of instruction in PA, phonics, oral reading fluency, encouraging kids to read, vocabulary, and reading comprehension.

Tim

Tim Shanahan
Nov 04, 2018 02:20 PM

Jo-Anne—
You are correct about RR studies dropping kids from analysis. In fact, I was the first person to write about that (more than 30 years ago). However, that isn’t true of all RR research, and RR has been found effective even in those studies. I’m not a big fan of RR and ended its funding in Chicago, but it has been found to be effective. To not admit that is self-blinding, and to not wonder why limits our ability to make everyone literate.

I hope the Reading League uses some of that money to test some of their up-to-now unproven claims. They have been wonderful in trumpeting the importance of evidence for those claims they make that are well supported by study, but have been much less evidence-oriented about the beliefs they advance that lack such evidence. In this, they are identical to the RR advocates. RR advocates should be tripping over themselves trying to figure out if they can supercharge their results by adding explicit teaching (it might violate their philosophy, but research is clear it gives an advantage and they should wonder why).

Research evidence should not be a rhetorical convenience.

Tim

Jeffrey Bowers
Nov 04, 2018 02:37 PM

Actually Tim, that is what the authors of the NRP concluded, as made clear in the quote I provided above:

“Students taught systematic phonics outperformed students who were taught a variety of nonsystematic or non-phonics programs, including basal programs, whole language approaches, and whole word programs” [bold added]. (NRP, 2000, p. 2-134).


And check out the abstract of the Ehri et al. (2001) peer-reviewed paper that reports the NRP meta-analysis, entitled
"Systematic Phonics Instruction Helps Students Learn to Read: Evidence from the National Reading Panel's Meta-Analysis"

They write:

"Systematic phonics instruction helped children learn to read better than all forms of control group instruction, including whole language".

It is just not true that the NRP did not make claims about whole language, and 1000s of papers cite the NRP in support of systematic phonics compared to whole language. Worse than that, the NRP did not even carry out the relevant meta-analysis to assess systematic phonics, given that they did not compare systematic to non-systematic phonics interventions.


Jeanette
Nov 04, 2018 03:09 PM

Yes, let's do better! And we will, now that The Reading League will be going national! This year was only the 2nd year of their annual reading conference and its success was phenomenal! The keynote speaker was the amazing Dr. Louisa Moats! She is someone who knows about reading research and best instruction and has done so much high quality work in the field during her long and distinguished career. She also is NOT a fan of The 3 Cueing system and programs based on it (like RR). I have read and heard similar views from others who I know understand the research (Dr. David Kilpatrick, Mark Seidenberg). Also, IDA does not recommend RR and we know they understand research. So, I am a bit perplexed as to why you say something different about RR than these other distinguished reading experts??? This is exactly why we need The Reading League! They can educate teachers like me on evidence-based practice and we will have an organization to go to that we can TRUST to give us the best information available for teaching our students to read. Join The Reading League!

Harriett Janetos
Nov 04, 2018 03:32 PM

I have been trying to access the Tunmer piece, but my sons at Stanford and Davis (my usual providers of research) can't get this one for some reason. Any suggestions? I am particularly interested because the 45 first and second grade students I am currently working with reinforce Tunmer's findings about "the importance of explicit, systematic instruction in phonemic awareness and the use of letter–sound relations. Such instruction is essential for most students who struggle with literacy learning during their early years of schooling and especially important for students who experience the most difficulty with learning to read."

Harriett Janetos
Nov 04, 2018 06:45 PM

Louisa Moats also figures prominently in the audio documentary https://www.apmreports.org/story/2018/09/10/hard-words-why-american-kids-arent-being-taught-to-read.

Harriett Janetos
Nov 04, 2018 07:38 PM

Jeff, have you read Mark Seidenberg's 2017 book Language at the Speed of Sight: How We Read, Why So Many Can't, and What Can Be Done About It? I know you have a busy schedule, but I'd very much appreciate your thoughts about his conclusions and recommendations. Thanks!

Tim Shanahan
Nov 05, 2018 04:47 AM

Jeanette— they might know something about research, but they are willing to freelance and go beyond what the research actually says if they believe it. And they make no distinctions. So they espouse the teaching of phonics, which is clearly research based, and the use of decodable texts, which is not (and they are willing to ignore the existing research on this to claim that it is a logical inference from research). Or they’ll support one kind of phonics instruction over another despite research saying there’s no difference. You can believe them or you can read the reports put out by the National Academy, NICHD, and the U.S. Department of Education. The difference is the latter are rule-based, consistent, and separate from commercial interest. Don’t just support research when you like a finding or find it profitable. That might seem confusing at times when studies don’t support your beliefs, but in the long run you’ll get it right more often.

Tim

Tim Shanahan
Nov 05, 2018 04:51 AM

Jeff

The point was not to evaluate whole language, but to identify those approaches that had positive research. NRP looked at the studies that evaluated phonics instruction and its effectiveness. The control/comparison groups did include some that purported to be whole language, but the only reason they were included was because they included no phonics or incidental phonics only. There were also basal reader control groups... the point wasn’t to evaluate whole language.

jeff bowers
Nov 05, 2018 09:46 AM

Tim, that may not have been the point of the NRP, but do you agree that is what is claimed in the NRP and in the abstract of Ehri et al. (2001), and that 1000s of authors have cited this work as evidence that phonics is better than whole language?

Can we agree that no one should cite the NRP in support of phonics over whole language? Or in support of systematic phonics? To be clear, I'm not in favour of whole language or non-systematic phonics, but we should follow the advice of your blogpost and be clear and unbiased in our assessment of the data. And going beyond the NRP, there is little or no evidence for phonics over whole language in all subsequent meta-analyses as your readers can find here: https://psyarxiv.com/xz4yn/

Jo-Anne Gross
Nov 05, 2018 01:00 PM

I am referring to Jeff Bowers here. It's obvious to me from your comment that you don't connect your ideas and research with the thought of helping children.
Otherwise, this statement, dry and devoid of any use whatsoever, will obfuscate AND SUPPORT the continuation of erroneous teaching across the U.S./Canada/U.K. that fails to teach up to 40% of their students to read proficiently.
I don't understand people like you. What is your contribution to the field?
Yes, everyone, read Dr. Mark Seidenberg's book and read Dr. David Kilpatrick's, and become enlightened so you can BEST serve your students.

Lois Letchford
Nov 05, 2018 01:43 PM

Love it! Parents and teachers want simple research to answer complex questions. Thanks for this piece!

Jeffrey Bowers
Nov 05, 2018 02:36 PM

Hi Jo-Anne Gross, I’ve been getting this comment a lot lately – that I do not care about children, or that I’m actively trying to harm them. Indeed, this is a common concern of reviewers of my work submitted to journals. There is a problem here, but I’m not sure it is my callous indifference to teaching literacy well. There is more debate about this subject on my blog in case you are interested: https://jeffbowers.blogs.bristol.ac.uk/blog/phonics/

Harriett Janetos
Nov 05, 2018 03:06 PM

I was extremely fortunate on my trip to New York in September to meet with Linnea Ehri and pose all the questions that colleagues and I had compiled for her. One was from our kindergarten DLI teacher about whether to teach Spanish at the level of the phoneme or syllable. (Diane McGuinness refers to beginning instruction based on the syllable as teaching "half-language"). She was able to cite the work of her Brazilian grad student who had just done the following study.

Teaching orthographic mapping to novice beginners in Brazilian Portuguese: effects of phonemes, syllables and articulatory gestures
First Author: Renan de Almeida Sargiani -- University of São Paulo
Additional authors/chairs:
Linnea Carlson Ehri; Maria Regina Maluf
Conference:
Twenty-Third Annual Meeting
Keywords: Grapheme-phoneme correspondences, Reading development, Writing development, Phoneme awareness, Early childhood age 3-8
Abstract / Summary:
The purpose of this study was to explore 1) whether children benefit more from instruction in the orthographic mapping of phonemes or syllables at the outset of learning to read in Brazilian Portuguese and 2) whether including articulatory gestures in the training of orthographic mapping of phonemes improves phonemic segmentation more than training without articulation. This was an experimental study with a pretest/posttests design and random assignment of participants to treatment and control groups. 90 Brazilian Portuguese speakers, mean age 4 years, 5 months, were drawn from one public kindergarten in São Paulo, Brazil. Children received instructions in small groups in one of 4 conditions: 1) orthographic mapping of phonemes with articulation (OM-PA), 2) orthographic mapping of phonemes without articulation (OM-P), 3) orthographic mapping of syllables without articulation (OM-S), or 4) drawing pictures (Control). Then children were assessed in a word-learning task followed by reading, spelling, phonemic and syllabic segmentation tasks. Results showed that children in the OM-PA and the OM-P groups outperformed children in the OM-S and control groups in reading and spelling tasks. Instruction with articulatory gestures benefited children more than instruction without this component. The OM-PA group outperformed the others in phonemic segmentation, reading and spelling. In a delayed posttest given 1.5 years later, 48 children, 12 from each experimental condition, were assessed again in several literacy skills. Children who received orthographic mapping of phonemes performed better in phonemic segmentation, reading and spelling tasks, than children who received orthographic mapping of syllables and children in the control group. Overall results show that teaching orthographic mapping of phonemes to novice beginning readers is more effective than teaching orthographic mapping of syllables, despite the fact that syllables are more salient units in Brazilian Portuguese.

Pat Stone
Nov 05, 2018 05:55 PM

Thanks for opening up - and sticking to it - the possibility, the probability, that "I prefer to teach this way; it works for me and my students, but I can see that proper research doesn't support every aspect of my practice and I will look into what I can do about that."
I am wary of people who write assuming all RR teachers do this, this and this, all SSP teachers do this, this and this; this method works, this one doesn't. Real life in schools is more flexible than that.
I think there is more and more conflict between people who work and have their mindsets at system level and those who teach in schools. I've had good friends and colleagues who teach differently from me. I've watched them and learned from them. They in turn have asked me to watch and tell them what I think about certain children they have been puzzled by - the most common difficulty has been, "He knows his phonics but he can't blend." - and I have helped.
No need for fights to break out.

Patrick Manyak
Nov 05, 2018 06:40 PM

I often use the phrase "research drift" when talking to teachers about instructional practices that have tenuous connections to research (at best). In simple terms, I find many claims of "research based" for instruction that has drifted quite far from that which actually produced the positive outcomes documented in the research. Here is an example. Let's say that instruction in teaching students to construct main ideas while reading has proven to be more beneficial than control conditions in a number of well-designed studies. The news then gets around that teaching students main idea represents a research-based form of instruction. However, key information often gets left out of this "news" - that the studies were all conducted in upper elementary grades and up (a recent review on main idea instruction with struggling readers by Stevens, Park, & Vaughn included 23 studies and only 1 took place at 3rd grade and none at grades below that; Tim would likely know if there are studies on main idea instruction in regular classrooms at lower grade levels...) and that very specific instructional protocols for teaching students to construct main ideas while reading were used in the studies. The next thing I know, I am speaking with kinder and first-grade teachers who put serious emphasis on teaching their students to identify main ideas because it is "research-based," and they tend to do so in a very vague way that shows little resemblance to the actual instruction that produced the positive results in the research studies. To me, this represents significant research drift - the evidence supporting main idea instruction issues from very different grade levels and from instruction that looks very different than is actually being provided by the teachers. How far can we actually 'drift' from the grade levels and instruction of the studies and still claim that we are implementing research-based instruction? To me, if primary-grade teachers are implementing a watered down version of instruction that was shown to be effective in 5th grade, that is too much drift. I think that this is a significant issue with the "teacher professional books" that Tim refers to. To my eye, many of those books vaguely claim a research base - "teaching students to identify main ideas has strong research evidence" - and then go on to describe instruction that looks nothing like the instruction in the studies and often imply that it is effective at every grade level. So, in my mind, we need not only general knowledge with regard to research supported instruction but also need enough specific knowledge about research findings to avoid the research drift phenomenon.

Harriett Janetos
Nov 05, 2018 09:19 PM

The debate on Jeff's blog https://jeffbowers.blogs.bristol.ac.uk/blog/phonics/ is very interesting and well worth reading. This statement by Anne Castles takes us back to the importance of solid research. I share her gut feeling, as stated below:

"As you know, I think the question of whether GPCs are taught in the context of morphology and etymology in initial instruction is an empirical one – we simply don’t know the answer. My gut feeling is that this is too complex for very early readers, and the single letter-sound level should be the unit of focus initially – but I’m the first to admit that we haven’t done the study to establish that. So I don’t think there is a huge amount of point in debating it. But I still think it would be good if we were in agreement about whether and when GPCs should be taught (and whether the learning of them should be practised).

Mary Spencer
Nov 05, 2018 10:01 PM

Thank you very much, Dr. Shanahan! I will be sharing this with my teacher candidates tomorrow in class!

Mary L. Spencer

Tim Shanahan
Nov 07, 2018 05:27 AM

Patrick— that drift is subsumption, and it shows how new learning and experience can change what we know. I may lay out what I believe to be a well-tempered and limited interpretation of a study... and I report it that way, very carefully. Then someone challenges my claim and I start to argue—pressing a claim that I thought to be limited and changing what I believe the results to be. Things don’t always get fuzzier with time; often they get sharper (but less correct). I hate when I do that. I find it to be so embarrassing. Brilliant letter.

Thanks.

Tim

Tim Shanahan
Nov 07, 2018 05:28 AM

Thanks, Mary. I hope it went well.

Tim
