What if there is no reading research on an issue?

  • reading research
  • 15 May, 2021
  • 22 Comments

Teacher’s question:

I agree with you about the need for basing what we do on research. But what do you do about things for which there is little or no research? For example, what about Orton-Gillingham instruction? What is the best way to sequence phonemes for teaching? How, specifically, should background knowledge be taught? And what about research that is evolving, so that we do things a certain way and then refine it (say, with Ehri's & Gonzalez-Frey's recent work in SSR) -- what about all the time that we did the practice the other way? There are some topics with so much research that we can’t digest all of it, and other topics with no research or with ambiguous results. How do we follow the research?

Shanahan’s response:

Yes, I’m a proponent of using research to make instructional decisions. Let’s start with that.

First, I want to make good decisions for kids. I seek practices that have unambiguously helped them to learn to read better. I can put more trust in an instructional practice found to be effective again and again under close analysis. If those other educators could make that work, I could too. That’s better than buying what the district next door bought!

Second, I want to be able to act without everything being a big megillah. Reading is a contentious field, and our crazy arguments rightfully cause parents to worry about whether we are making the best choices for their kids. Physicians and engineers don’t always get it right, but they have methods for determining acceptable practice. In reading, the serve often goes to the loudest voice, kids’ literacy learning be damned. Consistent standards of evidence make educational decision making more professional – fostering confidence rather than disgust and despair.

An argument against a research-based approach is that it supposedly undermines teacher authority. Yep, there are some who believe teachers should make all classroom decisions (e.g., Diane Ravitch). That includes the idea that the best education comes from teachers who shrug off the curriculum and author all their own lessons. Think of Robin Williams (Dead Poets Society) encouraging kids to tear up the school’s poetry anthology; now that’s inspired pedagogy.

Your question lets the air out of that teacher-as-inspired-genius complaint.

The fact is, as with physicians, no matter how explicit or thorough any research-based standards of practice might be, there will always be plenty of consequential decisions that teachers must base upon their judgment and experience. As standards of practice in medicine have become more certain through empirical study, physician decision making has actually increased in significance.

I have no problem with those who improvise when there is no sound research to go on – what else can we do? But I rage at states, districts, and schools that mandate an improvisation as if guessing at scale ensures success.

Variations in practices can help us to determine which choices are best – as long as we’re aware that we’re improvising and pay attention. What kills me is that so often authorities, in their fervor to advance an approach (or to defend a wobbly decision), claim it to be research-based when it is really more the child of logic, a hunch, or susceptibility to a really great sales pitch.

I lose patience with those “thought leaders” who proffer their darling approach under the guise of research. These days that happens a lot. There is a ton of research showing the benefits of explicit phonics instruction. When someone argues that phonics is beneficial and cites research studies and government reports, I’m on board. But once they’ve made that argument and convinced an audience that systematic daily instruction in decoding in grades K-2 is the way to go, they don’t know when to stop. They keep going without any acknowledgment that the claims that follow lack the same evidential pedigree – assertions about things they may sincerely believe in but about which they should be confessing a lack of certainty: the value of tracing in the teaching of decoding skills, advanced phonemic awareness instruction, decodable text, the most effective sequencing of skills, sound walls, and so on.

The same nonsense accompanies nostrums for reading comprehension or fluency – substantial research evidence supporting a basic premise gets allied with specific practical recommendations that have a decided lack of convincing or relevant research support (e.g., extensive comprehension strategy teaching, front-loading of background information about a text prior to reading, thematic units, weekly fluency tests, individual conferencing, and so on). Discerning readers may look at that parenthetical list and protest, “Isn’t there research on reading strategies or background knowledge?” There is, of course, but not research that shows how much strategy teaching is beneficial or whether providing background knowledge has anything but transitory effects. It certainly improves comprehension of a specific text, but we have no idea what that means for students’ reading ability in the long run.

There is nothing wrong with making any of these claims – as long as they are proposed along with an open admission that there is no proof that they work. Lack of evidence doesn’t mean something doesn’t work, only that we don’t know. That admission is important because we can only respond professionally if we know when something has worked consistently in the past and when it is just somebody’s hunch.

Too often I hear from teachers and principals distraught over the local ineffectiveness of an approach that they’d been led to believe was research-based. They are often told that the failure is due to their shoddy implementation. That happens, of course, but I’m more likely to buy that charge if the practice has consistently worked elsewhere in the past. If there is no rigorous evidence that the practice has ever worked, then maybe the fault is neither in us nor in the stars.

Basically, if there is no research on a particular practice – feel free to adopt it but keep a close eye on it and be ready to adjust accordingly.

As for keeping up with the research? No one can read the 1,000+ relevant research studies published each year. Even if we could, it would not be a good idea to adopt those results into practice immediately. Most studies in education are small, and single studies are rarely determinative. It is wisest to limit data-based decision making to topics on which sufficient data have accumulated to justify pedagogical action – responding to each new study as it is published would lead to changing your policies every 27 minutes. We use research to increase the certainty we can invest in our actions, not for the sake of novelty.

Practical advice on how to monitor and use research evidence?

1.     Monitor some of the better research journals just to see what topics they are addressing. Some of the best journals to watch for reading research include Journal of Educational Psychology; Reading Research Quarterly; Reading & Writing Quarterly; Review of Educational Research; and Scientific Studies of Reading. These aren’t the only journals that publish high-quality reading research, but they’re among the most rigorously reviewed and widely cited by scholars in the field.

2.     Pay particular attention to research reviews and meta-analyses that synthesize bodies of research. The benefit of that approach is that you get the combined power of an entire collection of studies rather than one particular study; that should reveal to you both the average outcome and the variation in results that has been obtained (a small numerical sketch of this idea follows this list). Effective approaches may vary in how often they pay off.

3.     When you read research, make sure you understand what the researchers were studying (and what they weren’t). As noted earlier, a lot of comprehension research examines how we can facilitate comprehension of a particular text. That is not unimportant theoretically. However, it isn’t the same thing as finding that an approach helps kids to read better independently.

4.     There are many kinds of research, all of them potentially valuable. If your goal is to determine what to teach or how to teach something, then you need to depend upon evidence that shows whether a practice can benefit learners. Focus on instructional research – studies that consider the impact of teaching. Other kinds of research may be provocative (that study with the cool multicolor fMRI pictures, for instance), but as interesting as such research may be, it usually has little value for prescribing effective teaching practice.

5.     When there is no research? Get professionals together and think it through. Whatever courses of action you agree upon, make sure folks understand the reasoning (rather than the evidence) behind the choice. That makes it easier to change course up the road if things don’t pan out. If you can’t agree on a course of action, perhaps set up your own local study to see if it even matters. If it doesn’t, let teachers and principals improvise.

6.     Finally, the fact that research says something is advantageous doesn’t mean it will work for you. If you rely on meta-analyses to set a policy or practice direction, I’d suggest going back and reading some of the individual studies included in the meta-analysis. I do that to determine whether the approach worked in situations like mine and to get clues about proper implementation (“Gee, the successful programs provided 18 hours of training for each teacher, and I didn’t budget for any of that, yikes.”). Knowing those specific articles can have another payoff as well. Sometimes the researchers publish a practice-oriented version in a journal like The Reading Teacher – the research article proving that the approach works and the practice article giving details as to what it really involved.
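
To make item 2’s point about “combined power” concrete, here is a minimal sketch in Python – with made-up effect sizes and variances, not numbers from any actual meta-analysis – of how a fixed-effect meta-analysis pools study results:

```python
# A minimal fixed-effect meta-analysis sketch. The five (effect size, variance)
# pairs below are hypothetical, invented purely for illustration.
studies = [(0.45, 0.02), (0.30, 0.05), (0.62, 0.01), (0.10, 0.08), (0.38, 0.03)]

# Inverse-variance weighting: larger, more precise studies count for more.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

effects = [d for d, _ in studies]
print(f"Pooled effect size: {pooled:.2f}")  # the average outcome
print(f"Study effects range from {min(effects):.2f} to {max(effects):.2f}")  # the variation
```

The sketch simply illustrates why item 2 urges attention to both numbers: a single pooled average can mask a wide range of study-level results, and an approach that is effective on average may still have paid off in some settings and not others.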

Comments

See what others have to say about this topic.

MaxScholar Oct 13, 2021 12:27 PM

Interesting blog on reading research. Thank you for sharing such a nice blog.

Katie Garner May 15, 2021 04:50 PM

I was wondering if your ears were ringing last week, lol! https://www.facebook.com/groups/373171717063493/permalink/489224798791517/

Julie Lewis May 15, 2021 06:30 PM

Here I go. I may be an outlier, but I have taught a resource room for a grand total now of 30-some years. I have taught many struggling and dyslexic readers. I ALWAYS endeavored to engage in best practices and sought out approaches to teach those who were not responding. I took regular informal measures so I could objectively determine whether or not I was getting progress, and I struggled when the research did not seem to do an adequate job of addressing some of the deficits I identified in my practice. Over the years, I have observed research seem to "catch up" with what I was observing firsthand in teaching students who were significantly delayed in developing reading skills, many of whom we are confident display dyslexic "symptoms," even when we shied away from that diagnosis. I firmly believe researchers set out to document, to "prove" or "disprove" observations that are reported, time and again, by experienced teachers who can and do document their practices. Maybe I am an outlier because I do not observe a majority of my colleagues, especially those who work with students with disabilities, engaging in this level of diagnostic teaching. Indeed, a recent situation where we completed a triennial re-evaluation of a 5th grader who matches the profile of a dyslexic student, and who, after 3 years in the resource room, was still reading at a first grade level and had regressed on virtually all standardized measures of reading, seems to support the hypothesis that many teachers still do what "feels good," "seems right," etc. When the resource teacher saw the results, which were really above suspicion because there was more than one measure of regression, she responded, "But I just know he has improved." Perhaps had she used some good, objective informal or curriculum-based assessments along the way, this could have been avoided. And, sadly, she may be representative of too many educators. Might more and better training be appropriate?

Kathleen Coyne May 15, 2021 07:22 PM

Finally, a reasonable response to the newly declared reading wars. I’m hearing yes to phonics – systematic and explicit, etc. – but married to what we know works for fluency and comprehension, and that is not a sole diet of decodables. There is no research on that extreme!

Timothy Shanahan May 15, 2021 07:26 PM

Julie--

More and better training for all of us is certainly in order!

thanks.

tim

George Lilley May 15, 2021 09:13 PM

There is not clear evidence for most interventions in education. The major research organisations contradict each other: e.g., the What Works Clearinghouse suggests there is not much compelling evidence for most things, whereas the English Education Endowment Foundation evidence suggests a number of strategies (metacognition, homework, feedback, ...), while Hattie, who uses the same synthesis-of-meta-analyses approach, has a totally different list (collective teacher efficacy, self-report, Piagetian programs, ...). Who do you believe?
I've done a more detailed comparison here - https://visablelearning.blogspot.com/p/other-researchers.html

Marie Derby May 16, 2021 01:55 AM

The power of your writing style is very refreshing.

Jack Arnold May 16, 2021 05:11 AM

What about Wheelwright's Oral Reading Fluency (ORF), which measures a kid's reading ability rather than arguing about the "best" method for teaching? This approach is a little difficult because it requires daily practice reading, the Drop Everything and Read (DEAR) programme. However, it measures the end product of whatever teaching method was used to instruct the students.

Jeff Bowers May 16, 2021 08:52 AM

Shanahan writes: “There is ton of research showing the benefits of explicit phonics instruction”. But he fails to address my critique of the evidence I recently published in the journal Educational Psychology Review. Check out how he fails to respond to direct questions here: https://jeffbowers.blogs.bristol.ac.uk/blog/fletcher/. Readers here should ask Shanahan to write a blogpost justifying his claims regarding my work, or better, agree to a debate as I am sure there would be a good deal of interest in this.

Sue Sasko May 16, 2021 02:46 PM

Drawing a line in the sand, are ya?
I’d like to see a study comparing the big-box curriculum you’ve personally contributed to and financially benefit from with Super Kids (a small company’s foundations-based K-3 curriculum), Wilson Fundations, or teacher training in LETRS or AIM Pathways.

Harriett May 16, 2021 06:23 PM

Jeff, as I recall, the Fletcher et al. piece responding to your article challenging systematic phonics instruction concludes that explicit phonics instruction is currently the most efficient way to introduce GPCs (please correct me if I'm wrong). In your response to the article, you talk about researchers justifying the 'phonology first' hypothesis by referring to the Simple View of Reading or the Alphabetic Principle. I would add to that list the importance of recognizing 'cognitive load'. For example, I have been working with four groups of struggling first graders. One group has been progressing nicely with mastering their spelling variations for the given sounds, so last week I introduced a morphology lesson. We read an informational text on force called 'Push and Pull' and we sorted the 'ed' inflection by sound: /t/ pushed, /d/ pulled, /e/ /d/ lifted. They were ready for it. I believe many (including me) are saying that until we have a proven better way forward, it would be irresponsible to abandon explicit phonics instruction in favor of teaching GPCs and morphology together from 'day one'. We agree on the GPC part. We disagree on when to add morphology.

Jeff Bowers May 16, 2021 06:49 PM

Hi Harriett, for the moment, put aside the "alphabetic principle", the simple view of reading, when morphology should be introduced, etc. I'm just asking that Shanahan and others justify the claim that there is empirical evidence that systematic phonics is more effective than standard alternative approaches. Yes, Fletcher et al. and others claim that phonics is the most effective method. But I detail how all the points of Fletcher (and Buckingham) are wrong, and I ask people to comment. Still waiting for a response. It is just odd that Shanahan will say my work is "very misleading" and "biased" but refuses to justify his claims. I don't expect he will address my arguments here either. But I think it is worth commenting so readers of this blog see how leading researchers cannot support their claims.

Timothy Shanahan May 17, 2021 06:07 PM

Sue-
You're welcome to do any study you would like to do -- so are the publishers of those materials. Of the ones you mentioned, only one has been studied to my knowledge (and it didn't do so well).

good luck.

Timothy Shanahan May 17, 2021 06:56 PM

Jeff-
There are far too many problems with that article to justify taking the time to analyze it very carefully. Basically, if one puts aside all of the biasing language and misleading reporting of the various meta-analyses, one is still left with the result that phonemic awareness and phonics instruction have pretty consistently improved reading achievement. One can argue for hours as to which effect sizes are the most important to attend to (I favor looking at the whole set, personally), and one is stuck with the fact that such instruction has very consistently been beneficial (indeed, more beneficial with some selection rules and some samples, but beneficial).

I'm troubled that there was no consistent analytical approach used to examine the various meta-analyses included. It is very clear that if a meta-analysis concluded that explicit phonics instruction helped kids learn to read, then it had many problems, while if it was an anti-phonics review, there seemed to be none... that looks like bias to me, Jeff. Take a look at how meta-analysis is typically conducted and the reasons for those analytical procedures... shouldn't one notice that Camilli didn't follow those (it was the only way he could get rid of the phonics effect)? If one is going to compare phonics with some other type of instruction, shouldn't the researchers be certain that they categorize the studies correctly (look at Suggate, for instance)? I know that was a very comprehensive and careful analysis of all the meta-analyses of decoding instruction, but where is the National Early Literacy Panel report? You report on what happened with public policy and national assessment data in the UK, which is certainly appropriate, but then fail to provide such an analysis for the U.S. (which would have led to very different conclusions). When I find as many examples of biased selection and reporting (and uses of language to characterize things), I wonder whether the reporter was trying to win an argument or whether they were just sloppy... I'll go with the former here, since sloppiness is usually random -- this was not; it was just bias.

When I see that kind of bias, I wonder if it characterizes all of that scholar's work, and I start looking at his other reviews. I find it interesting that you conclude that teaching morphophonemic information to beginning readers is superior to phonics, yet with no empirical evidence of that (you have reported on two studies that found adding such teaching to phonics instruction was helpful). And were those studies held to the same standards touted in your article? Not so much.

I know you are frustrated that people aren't adopting morphophonemic approaches for teaching young children to read, but trying to show that phonological instruction isn't helpful is a ham-handed way to go. Phonics may or may not be as good as what you would like folks to teach, but the only way to win that argument is with data. There are sufficient data to show that phonics is better than no phonics (or, in some cases, better than less phonics). You won't overturn that with argument. You need data that show that doing it your way would be even more beneficial. (That's why Dick Venezky was such a big supporter of explicit phonics instruction for getting kids started in reading.)

tim


Timothy Shanahan May 17, 2021 06:58 PM

Jack--

I've never heard of that assessment and wouldn't comment on it if I had. In any event there is a substantial base of research on DEAR showing that it is not particularly effective in promoting reading.

Good luck.

tim

Sheila A. May 17, 2021 07:18 PM

So here are my thoughts on what teachers "know" works. I have found time and again that teachers truly see improvement in their classrooms when they follow what they have always done. They use the same assessments, which they teach the students to pass.
However, once the students move to a higher level, they have forgotten what they learned the previous year. I think many teachers who think they know the best ways to teach often see short-term improvement; however, the students didn't truly learn the skills. If teachers would rely on research-based methods that use explicit, systematic instruction, we would see long-term improvements. Once a student learns how to read, it is highly unlikely that they will forget everything over the summer break.

Jeff Bowers May 17, 2021 08:46 PM

Tim, thanks for responding. You say there are “too many problems with that article to justify taking the time to analyze it very carefully”, but I appreciate that you have made the time to highlight what you think are the most fundamental problems. However, just like the Buckingham and Fletcher et al. comments, all of your comments are mistaken or misguided.

You write: “it is very clear that if a meta-analysis concluded that explicit phonics instruction helped kids learn to read, then it had many problems. While if it was an anti-phonics review, there seemed to be none... that looks like bias to me, Jeff”.

I think you have a misunderstanding of what is required to make the claim that the science of reading supports phonics. The burden is on the researcher claiming that phonics works to provide evidence in support of this claim. I have shown that all the meta-analyses that are used to support phonics are flawed and that the conclusions do not follow from the findings. If there are problems with my critique of the NRP (or other positive studies), please point them out. But there is no “bias” when I don’t criticize “anti-phonics reviews”, as the question is whether there is any positive evidence for phonics. I’m not sure what criticism I should make of Camilli – they point out that the design of the NRP does not allow any conclusion that phonics is better than whole language, and they are correct about that.

You write: “If one is going to compare phonics with some other type of instruction shouldn't the researchers be certain that they categorize the studies correctly (look at Suggate, for instance). I know that was a very comprehensive and careful analysis of all the meta-analyses of decoding instruction, but where is the National Early Literacy Panel report?”.

I am not sure what your point is about Suggate, but the NELP did not assess phonics (it only included a condition with phonics and PA). And here is a passage from that report, so it is clearly misleading to suggest that this report provides some support for phonics.

“Finally, there were significant problems with the quality of much of the research in this area. Many studies used simple pretest-posttest designs, which provide no causally interpretable evidence, and studies often did not provide evidence that these groups were equivalent prior to an intervention or represented the same population. Often, there was evidence for group differences that existed before the start of the intervention. The panel was unable to rely on the data drawn from such badly designed studies, and they were excluded from all of the analyses reported here. These flaws do not allow appropriate postintervention differences to be attributed unambiguously to the intervention; neither do studies in which the intervention is confounded with other important factors that could be the source of any observed effects. Ultimately, building a larger and more comprehensive knowledge base concerning early literacy skill development and promotion will require more high-quality research.”

You write: “You report on what happened with public policy and national assessment data in the UK, which is certainly appropriate but then fail to provide such an analysis in the U.S. (which would have led to very different conclusions).”

So, do you accept that more than a decade of phonics in England has not improved reading outcomes? And what dataset should I have used from the USA that would have led to a different conclusion? If I missed some dataset, I really am interested in knowing. Please let me know of any references I’ve missed.

You write: “When I see that kind of bias I wonder if it characterizes all of that scholar's work. I start looking at his other reviews. I find it interesting that you conclude that teaching morphophonemic information to beginning readers is superior to phonics and yet with no empirical evidence of that”

This misleading statement has been repeated over and over again, and it does not seem to matter how many times I explicitly correct it or how clear I am in my papers. This is what I wrote in response to Buckingham, who made the same false claim (note for the reader: SWI refers to a form of training that focuses on the morphophonemic nature of English):

Buckingham’s final criticism is that there is no good empirical evidence for SWI, writing: “The problem with positing SWI as a superior alternative to systematic phonics is first that there is insufficient information to assess whether it is an effective method for teaching beginning readers, and particularly for the acquisition for essential knowledge about GPCs.” (p. 109) There are two problems with this criticism. First, and most importantly, Bowers (2020) never claimed that there is strong empirical evidence in support of SWI. Rather, the lack of evidence in support of systematic phonics was taken as a strong motivation to conduct more research into alternative approaches, including SWI. Indeed, Bowers (2020) ends with “...the first step in motivating more research into alternative forms of instruction is to realize that there is a problem with the current approach”. (p. 703) Similarly, in Bowers and Bowers (2017) we wrote: “This is our goal: to motivate future empirical studies of SWI in order to assess whether indeed this method is more effective than phonics that is currently failing too many children”. (p. 138) And in Bowers and Bowers (2018b) we concluded: “We do not want to make too much of this empirical evidence given so few studies have been carried out thus far. But in combination with the strong pedagogical considerations, we would argue that SWI is a highly promising approach that deserves more attention”. (p. 411)”

I think we need more testing of this hypothesis, but unlike proponents of phonics, I make modest claims regarding the evidence. The bottom line is that you have not addressed any of my criticisms of the evidence that is taken to support phonics. It is irrelevant that I don’t criticize negative studies (which is actually false, but never mind), that I fail to include a meta-analysis that does not test phonics (and which, as the quote above shows, only highlights how weak the data are), and that I claim we should do more research on SWI. Is there something I’m missing that shows how I’m biased? Or that my conclusions are unsound?

You are right about one thing. I am frustrated how people ignore my work, or even worse, misrepresent my work.


Timothy Shanahan May 17, 2021 08:56 PM

Jeff--

I have read about 1/4 of this and I'm out. I don't intend to respond again. My audience can now see why I've stopped responding in the past.

Here is one example of the problem. Camilli did conclude that the National Reading Panel did not prove that phonics was better than whole language. He is right about that. You are right about that. But let's note that (1) we were not asked to compare whole language and phonics, and (2) we didn't claim that we did. We did show that including phonics in a reading program improved reading achievement, and we reported that (and Camilli found that we were correct in our reporting of the range of effect sizes -- all of which were positive and many of which were statistically significant). I'm amazed that you didn't notice that. In any event, a comparable review of your analysis of phonics would be that you were wrong because you didn't prove that whole language was better than phonics, therefore phonics must work. I didn't do that because I read your paper and I recognized that wasn't what you were writing about, and that even if you had, it would have been irrelevant to your claims. Why would someone go down that road? Grasping at straws!

good luck, Jeff.

tim

Jeff Bowers May 17, 2021 10:15 PM

For someone who is very keen on the science of reading, it is impressive that you can only get through ¼ of a post that highlights straightforward errors in your post. To keep it shorter this time: (1) It is not “biased” to criticize the meta-analyses that support phonics. (2) It is not “biased” to exclude a meta-analysis that did not assess phonics. (3) I have not concluded “that teaching morphophonemic information to beginning readers is superior to phonics”; rather, it is a hypothesis that I think needs more testing. These are just logical or factual errors on your part.

And now you have written another post that has two more straightforward errors. You write: “But, let's note (1) we were not asked to compare whole language and phonics; (2) we didn't claim that we did.” In fact, you (or the authors of the phonics chapter of the NRP) did make this claim, in multiple places, including in the executive summary of the chapter on phonics: “Students taught phonics systematically outperformed students who were taught a variety of nonsystematic non-phonics programs, including basal programs, whole language approaches, and whole-word programs”. Indeed, there were statistical analyses that compared phonics to whole language in the NRP. Not to mention that the NRP has been cited thousands of times in support of phonics over whole language.

You write: “I'm amazed that you didn't notice” that Camilli found that including phonics led to statistically significant benefits. In fact, I summarize these findings in detail and note that the effect was significant in the 2003 meta-analysis but not in the 2006 meta-analysis when some covariates were considered. Again, just a straightforward error.

I think the problem is that you are so confident in your assumptions you don’t even read the articles you criticize.


Timothy Shanahan May 17, 2021 10:56 PM

Sheila--

I agree with your assessment of the matter. Teachers deliver lessons the way they do and they see learning take place -- that's enough for them. The problem is that that approach doesn't tell you what would likely be most effective. None of us can tell that without research studies.

thanks.

tim

Stephen Parker May 18, 2021 04:42 PM

Tim,

I am flummoxed by your inclusion of “decodable text” in the list of practices that lack “evidential pedigree.”

To doubt the efficacy of decodable text is to doubt the efficacy of practice itself for children who are learning something that is brand new.

No one would take such doubts seriously in the realm of learning to play the piano, for instance. I can't take seriously doubts about decodable text in the realm of learning to read.

On the other hand, I welcome your caution when it comes to “advanced phonemic awareness instruction.” That particular bandwagon is careening out of control.

Stephen Parker (@ParkerPhonics)

Timothy Shanahan May 18, 2021 06:01 PM

Stephen--

I'm just going on the research with decodable text. I definitely think there is enough evidence to conclude that beginning reading texts need to be relatively easy, with lots of repetition of words (and without predictable sentence patterns), but schemes aimed at providing a specific text match with the phonics elements/patterns taught to that point have not worked out (I've written about that before, citing the appropriate studies). The issue becomes one of developing a mental set for diversity rather than one aimed at consistency. Kids need to learn that there are multiple alternatives for pronouncing particular words and to choose among the appropriate possibilities. That's where statistical learning comes in, and giving kids text that lacks that kind of variability causes some learning problems. Psychologists generally find that schemes that simplify or make consistent this kind of complexity tend to result in faster learning initially but problems with application and transfer later. It is fine to give kids some decodable text practice as part of their phonics teaching, but kids' early reading definitely should not be limited to this (I would argue for adding in controlled-vocabulary text to prevent overreliance on overly consistent text).

thanks.

tim
