Making Decisions about Which Intervention is Best: A Case Study

  • early interventions, afterschool programs
  • 08 April, 2018
  • 5 Comments

Teacher question:

I wonder if you could comment on your blog about this crazy idea that the reading specialists should change the program every 12 weeks if a student is not showing growth on the one-minute reading fluency measure. I have a second-grade student who reads 80 wcpm with 97% accuracy. She made great growth in the fall but has leveled out this winter. She is being removed from my “program” and placed in Wilson because an outside evaluator said that is what she needs. What do you think?

Shanahan response:

One thing is clear: No matter how I answer this question, somebody is going to be mad at me.

That’s okay with me; I’ll find some way to get out of bed in the morning anyway.

However, these kinds of questions are tricky because the questioner tells me only the information that is critical from his or her point of view; the folks on the other side of the argument would have provided me with other information that they think matters.

One time a teacher wrote asking my opinion given the “facts” she had provided. I answered, and she apparently took the answer to the school board. Then the superintendent wrote to me adding some information, and I replied with a somewhat different answer. I’ll do my best to make sense of this question, but don’t be surprised if up the road there is more information that might change my mind.

Should reading specialists change their program every 12 weeks if a student is not showing growth?

I guess that depends on what you mean by “program.”

  If I see that a child is not learning from a given lesson, I’ll make changes to try to correct the problem, adapting my teaching to make sure that it works. That might entail adding explanation or examples, or extending the lesson. Maybe I’ll come up with a new approach and try the same lesson again tomorrow.

  In this case, you are talking about larger changes than that. If I’ve determined that a young child has a fluency problem and have developed or selected a program to address that need, I’m not going to stop focusing on fluency instruction just because I have a lesson go belly up on me.

  How much time and how much information I’d need to determine whether to make a program change will depend. (By a “program change” I mean moving a youngster from a Title I reading program to a Special Education based intervention; or moving from one commercial program to another; or deciding that a phonics intervention might be better than a fluency-oriented one).

  It will depend on whether I have gained new information about the child that I lacked when the original decision was made. That could be because I’ve worked with the child for a significant amount of time and have seen responses that don’t match my original decision, or because we collected more test data on the child that changed my mind.

  When it comes to foundational skills decisions with primary grade students, 12 weeks should be sufficient time on which to base this kind of decision. If a student is not making progress for 12 weeks, I, too, would be concerned and would consider a change. Making program changes of this type once or twice or even three times in a school year is not crazy. (Some folks try to make such decisions more often than that—testing kids weekly, for instance—but that’s foolish, since these tests aren’t refined enough to allow for such fine learning distinctions.)
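
  Here is a minimal back-of-the-envelope sketch of that last point. Both numbers are assumed round figures chosen for illustration only, not values from this post or from any particular test:

```python
import math

# Sketch: why week-to-week ORF comparisons mostly track noise.
# Both numbers below are assumed round figures for illustration.
sem = 8.0             # assumed standard error of measurement, in wcpm
weekly_growth = 1.0   # assumed true weekly growth for a struggling reader

# The noise in the *difference* between two scores is larger than the
# noise in either score alone (the error variances add).
sem_of_difference = math.sqrt(2) * sem   # ~11.3 wcpm

weeks = sem_of_difference / weekly_growth
print(f"True growth overtakes score-to-score noise after ~{weeks:.0f} weeks")
```

  Under those assumed numbers, roughly 11 weeks of true growth would be needed before an observed gain clearly exceeded ordinary measurement noise, which is consistent with a 12-week decision window.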

Should consequential educational decisions be based upon 1-minute oral reading fluency measures?

  If students aren’t making learning progress, it is important to adjust instruction to meet their needs. Assuming that kids who are doing poorly now will mature and do better later is unwise and has hurt a lot of kids. Kids should not be allowed to languish with no or low learning for extended periods of time.

  However, I’m puzzled about the use of a 1-minute oral reading fluency measure to make this kind of decision. A one-minute oral read won’t provide a reliable estimate of how well a child can read a text fluently. And, if such a test is unreliable, then it is also invalid—since validity depends, in part, on reliability.

  One of the more popular oral reading measures is DIBELS. To estimate kids’ oral reading fluency, DIBELS requires that children perform two one-minute reads; two readings, two passages. That’s where their reliability comes from — from the two minutes of reading performance. The levels of reliability that DIBELS and DIBELS-like testing procedures can obtain are probably sufficient for many instructional decisions.

  In 2010, Sheila Valencia and her colleagues put these kinds of testing procedures to the test. They found big problems with one-minute reads. Significantly more reliable and valid outcomes on such tests required three minutes of oral reading.
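
  For the psychometrically inclined, the logic of longer samples can be sketched with the standard Spearman-Brown prophecy formula. The 0.80 single-passage reliability below is an assumed round number, not a figure from DIBELS or from Valencia’s study:

```python
def spearman_brown(r: float, k: float) -> float:
    """Predicted reliability when a test is lengthened by a factor of k
    (the standard Spearman-Brown prophecy formula)."""
    return k * r / (1 + (k - 1) * r)

# Assumed reliability of a single one-minute passage (illustrative only):
r_one = 0.80
print(spearman_brown(r_one, 2))  # two passages   -> ~0.89
print(spearman_brown(r_one, 3))  # three passages -> ~0.92
```

  Each added minute of reading buys more reliability, though with diminishing returns, which is why two passages beat one and three minutes beat two.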

  So, I’m willing to test kids every few months to determine if their foundational reading skills are improving and to make instructional changes if they are not.

  But I wouldn’t make such changes on the basis of a one-minute oral reading. Malpractice!

Should second-graders be assigned to special phonics instruction based upon an oral reading fluency measure?

Typically, one would not make consequential reading decisions based upon any single measure. If I were sending kids to special education or a reading specialist, I would want to be certain that the child was experiencing difficulty in reading, and I would want to have some idea of the pattern of reading skills and abilities the child had, so we could make sure he/she would learn.

  To accomplish that, one would usually need more than a single measure (even if that were a reliable measure).

  More specifically, if a youngster were struggling with oral reading fluency (as measured by a reliable, valid measure), I would not just assume that the student had a fluency problem requiring explicit fluency instruction—and that is especially true with young students.

  I would, in such a case, want to know about that student’s ability to decode. Some kids with fluency problems really struggle with phonics and their low words correct per minute rates may be due to phonics difficulties. Other kids who don’t read well orally may be pretty accomplished when it comes to phonics (see work by Carol Chomsky, for instance). I would need to know more than that reading accuracy and speed were low before I could determine a sound instructional response.

  If someone assigned students to the Wilson phonics program because of an oral reading fluency test alone, I’d have concerns. Don’t misunderstand, I like the Wilson phonics program and I use oral reading measures. Those aren’t the problems here. The only issue should be about what approach would most likely benefit the student, and even a reliable oral reading fluency test on its own would not be adequate to make such a decision. I’d certainly want to know about this student’s decoding skills (and comprehension, too).

  You evidently don’t agree with the decision made for this student. From my vantage, I can’t tell if it was a good or a bad decision, but if it were made on the basis of the information that you provided, there is a very good chance that they have erred.

What would you do with a student who performed at this reading level?

I don’t have a clue.

  I’m very puzzled about the information you provided, and my puzzlement is part of my skepticism.

  You say this student reads at 80 wcpm in the winter of second grade. While I don’t trust that assessment, for the reasons given above, if I take this score at face value, it would mean that this student is reading at a rate and accuracy only slightly below what an average second-grader would be expected to obtain (Hasbrouck & Tindal, 2016).
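
  For readers unfamiliar with the metric, this is how wcpm and accuracy are conventionally computed from a timed read. The word counts below are hypothetical, chosen only to roughly match the student described:

```python
# Conventional arithmetic for a one-minute oral read.
# Counts are hypothetical, chosen to roughly match the student above.
words_attempted = 82   # words read in one minute (assumed)
errors = 2             # uncorrected errors (assumed)

wcpm = words_attempted - errors    # 80 words correct per minute
accuracy = wcpm / words_attempted  # ~0.976, i.e., about 97%
print(wcpm, f"{accuracy:.1%}")     # 80 97.6%
```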

  I wouldn’t put an average reader in a special intervention, no matter which department was providing it.

  I wouldn’t put a second-grader with average reading scores in a special phonics program, especially this late in the year. I wouldn’t expect that to improve his/her reading accuracy or rate, and if phonics instruction isn’t going to boost those, it certainly won’t facilitate comprehension.

  If a student reads fluently (and a mid-year second-grader who can cold read a second-grade text at 80 wcpm is reading fluently), then my concerns would be with reading comprehension and vocabulary instruction, not phonics.

Comments


Jan Hasbrouck Apr 08, 2018 10:29 PM

The latest study of which I'm aware on the use of ORF for progress monitoring as a General Outcome Measure (GOM) is January, S. A. (2018). Progress monitoring in reading: Comparison of weekly, bimonthly, and monthly assessments for students at risk for reading difficulties in Grades 2–4. School Psychology Review, 47(1), 83–94. The authors conclude that in grades 2–4, bimonthly assessment is likely optimal for decision making: "Overall, findings from this study reveal that collecting CBM-R data less frequently than weekly may be a viable option for educators monitoring the progress of students in Grades 2–4 who are at risk for reading difficulties."
I agree, Tim. There is not enough information provided about this student for making any kind of instructional recommendation, but we should celebrate 80 wcpm in mid second grade!

Laura Apr 10, 2018 11:55 AM

Thanks for the information on the ORF assessment measure. It is informative.

My concern here is the emphasis that is placed on this one-minute passage. The data points on a graph for some of my struggling readers can look like a patient having a heart attack, with the Aimsweb trend line averaging the data. This is especially true in the spring with my more struggling readers.

Adjustments to the “program” take place on a daily basis with knowledgeable reading teachers. Throwing out one program for another can be like throwing out the baby with the bath water.

Sam Bommarito Apr 10, 2018 03:09 PM

You say "To accomplish that, one would usually need more than a single measure (even if that were a reliable measure)." I couldn't agree more. I am skeptical of fluency measures that focus solely on reading rate. Rasinski is highly critical of doing things that way. He has developed an easy to use/freely available fluency rubric that includes 4 dimensions, not just one. I think his way of measuring prosody makes more sense than focusing only on reading rate. Use this rubric (or one like it) encourages teachers to teach kids to read like storytellers instead of reading like a robot. Adding a measure like this to any reading rate assessment would take very little time, but would greatly improve fluency instruction. It would also open the door to more advanced instruction e.g. when to vary reading rate, something proficient readers do all the time. Rasinski also has developed a commercially available 3 minute reading test that includes the use of his rubric and a simple comprehension check. I've helped teachers use this assessment in classrooms and found it gives highly useful insights into the child's reading and still left plenty of time for actually reading instruction. I think it is one example of a measure that does give a more complete look at the child. It helps teachers have a way to having an ongoing assessment while leaving plenty of time for classroom instruction.

Mary Evers Apr 16, 2018 10:55 AM

I agree wholeheartedly with what was said, especially with your last paragraph. From the limited information given, working on vocabulary and comprehension would appear to be the way to go with this second grader.

However, the last part of what the teacher wrote says that, "an outside evaluator" said that the student needs Wilson. Hmm. That evaluation would have been based on some other assessments, likely decoding assessments, we assume. That information was not shared by this teacher.

If this student was being progress-monitored every other week, we would know whether progress was or was not being made much sooner than 12 weeks. Did the teacher make any adjustments in instruction over the course of the 12 weeks based on progress monitoring?

For example, were changes made to the size or length of time of her intervention group? If fluency was the target skill, were teaching strategies changed when progress was not being made? Over the course of 12 weeks, had the target skill been changed? These are the kinds of questions I would want answers to.

Laura Apr 18, 2018 11:08 AM

Those are great questions. My student made great progress in the fall. We worked on strengthening her foundational skills, and as a result her accuracy rate, decoding skills, and comprehension of stories improved greatly. Adjustments to the program were made on a daily basis depending on what needed more or less practice. Her group size was reduced to two students.

My challenge is dealing with outside evaluators who always recommend Wilson as if it were the answer to all reading problems. It has a place with severely disabled students, but not this student, in my opinion. The sad thing is that the outside evaluators' recommendations are almost always followed, and they do not have a reading background and barely know the student.

What are your thoughts?
