The Implementation Scandal: Why Research-Based Instruction Often Fails

  • 18 July, 2008

This week I spoke to the research and curriculum directors from the Council of the Great City Schools, who held their meeting in Chicago. Their focus was interventions for older readers. Catherine Snow and I sounded off on the first day, and Jim Kemple of MDRC presented a recent federal study of such interventions on day two.

The big topic of discussion was the implementation of interventions. One thing that stands out from recent experience with research-based practices is how often they fail to work once adopted in the classroom. There are several reasons for this, and we need to pay attention to them if we want research-based instruction to work.

First, one aspect of the problem is the research itself. Just because something works under research conditions doesn't mean it will work at scale, when the researcher is forced to be more distant and detached. Education has been a field of small studies, and one suspects that if we tried those innovations out under less ideal conditions we would find some pretty important limitations in those initially workable ideas. Meta-analysis helps us pull together the results of lots of small studies, but it can never overcome this particular limitation.

Second, the research-to-practice folks don't pay close enough attention to matching the thoroughness or intensity of the original studies. Comprehension strategy instruction that leads to higher achievement tends to be thorough and intensive, providing kids with lots of practice over an extended period. Textbooks and teachers often try to deliver this "research-based instruction," but they fail to match the dosage delivered in the study. If it took 25 hours of lessons to get an effect, what would lead any of us to believe that we could get the same effect in 3 hours? And even if we believed that, why wouldn't we at least study the dosage change rather than simply adopting it?

Third, education research is not written in a fashion that allows its implementation to be duplicated. Researchers do a lot of things to carry out a study. For instance, it is increasingly common for the researcher to have someone visit the classrooms with some frequency to check on fidelity. That is noted in a research study as part of the method, but it is also an important issue for the implementer (if you adopt this plan, have someone check in with the teachers at a similar frequency). Research studies typically report the duration of an intervention, and they show lesson plans and sample text passages to help folks understand the nature of the instruction. They are much less likely to provide the protocols of what teachers and principals were told, or how the interventions were brought into the schools and what was done to get buy-in. That information is essential to implementation, and it should be provided more often in research reports.

I am often told by school administrators that a program failed because of poor implementation. It is time that educators and researchers take this problem seriously, rather than continuing to treat it as an annoying side issue.

