Teacher question:
Here is my dilemma. My administration has decided that if a student has 3 or 4 data points on an ORF (Oral Reading Fluency) graph that show they are not making progress, then the entire reading intervention program must be changed. It doesn't matter to them if the student had been making progress for months before in the same program. I was told by my principal that our school district is being sued because of RTI: when a student is not making progress as evidenced by the ORF and the reading specialist doesn't change the program, the district is at risk of being sued.
The administration has now decided to meet with me every few weeks so they can direct my programming. They are so set on this program-switching idea. They want to direct my switch of programs from Leveled Literacy Instruction to Barton to Wilson to Read Naturally to Reading Mastery to My Sidewalks to 100% Solution without thoroughly diagnosing the needs of the child or the validity of the program. They also want the paperwork to reflect that an actual program was changed. It is a house of cards built on speed of reading. All below-level readers are expected to reach grade-level expectations within a 4-month period, or else. If they have their way, a kid could be in Wilson for six weeks, then LLI for another six weeks, then Barton. There would be no continuity of programming.
Do you know of any research that either supports or negates this idea of the need to change a program after so many points of data on an ORF graph? What is the RTI law?
Shanahan responds:
A colleague of mine helped frame this reply, given the technical issues you raise.
In 2004, the very complex IDEA law was reauthorized. It includes a Child-Find requirement (schools must “find” students with disabilities) and a push for early intervention, prompted by the growing number of students identified with disabilities, particularly those labeled with Specific Learning Disability (SLD).
Also, under IDEA, students receiving special education services must demonstrate “adequate progress.” Messy situations arise when students with IEPs fail to make progress, which calls into question the educators’ responses and the effectiveness or appropriateness of the intervention. Parents may raise questions about such efforts, or they may have hired a really good advocate and be seeking compensatory services. The district runs scared because no one knows how to respond, especially when there are numerous data points from private evaluations that the district can’t speak to or refute with its own data. When this kind of thing occurs, there is a long process before mediation and then a due-process hearing.
It is misleading for the administration to say they are being “sued because of RtI.” More likely, the district failed to respond to data, and responding to data does not necessarily mean changing the program.
Three data points on an ORF graph do give a trend line, and these can be sensitive to incremental learning (but also to other factors that can limit their validity for making growth determinations). Advocates love them, but they are a single piece of data.
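To see just how fragile such a trend line is, here is a minimal sketch in Python. The words-correct-per-minute (WCPM) scores are hypothetical, and the 8-WCPM shift is an assumed, but plausible, week-to-week measurement error for an ORF probe:

def ols_slope(weeks, scores):
    """Ordinary least-squares slope of scores regressed on weeks."""
    n = len(weeks)
    mean_week = sum(weeks) / n
    mean_score = sum(scores) / n
    numerator = sum((x - mean_week) * (y - mean_score)
                    for x, y in zip(weeks, scores))
    denominator = sum((x - mean_week) ** 2 for x in weeks)
    return numerator / denominator

weeks = [1, 2, 3]

observed = [52, 50, 55]            # hypothetical weekly WCPM scores
print(ols_slope(weeks, observed))  # +1.5 WCPM per week: looks like growth

shifted = [52, 50, 47]             # same child, week-3 probe just 8 WCPM lower
print(ols_slope(weeks, shifted))   # -2.5 WCPM per week: looks like decline

A single probe moving by an amount smaller than many published ORF error bands turns apparent growth into apparent failure to respond. That is exactly why a three-point slope should never drive a consequential decision by itself.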
What does “growth” mean? What did the team determine as adequate? Any amount of growth? Growth that closes a gap? Accelerated growth? Average growth? Any change on a particular test even if it is within the test’s standard error of measurement and is therefore unreliable (in other words, possibly not growth at all)? Growth on non-linear measures?
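That standard-error point deserves a number. A minimal sketch, assuming for illustration an ORF probe with a standard error of measurement (SEM) of about 5 WCPM (actual SEMs vary by measure and grade):

import math

sem = 5.0                      # assumed SEM of a single ORF probe, in WCPM
se_diff = sem * math.sqrt(2)   # standard error of a difference between two scores
band_95 = 1.96 * se_diff       # roughly 13.9 WCPM at 95% confidence

change = 54 - 48               # hypothetical gain from week 1 to week 4
print(abs(change) > band_95)   # False: a +6 WCPM "gain" is indistinguishable from noise

By this arithmetic, a child would need to gain roughly 14 WCPM between two probes before the change could be called real rather than measurement error, which is why “any change on a particular test” is such a weak standard.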
Studies on this suggest that three or even six weeks of data simply aren’t sufficient for determining adequate growth, and slope analysis hasn’t been found to consistently improve outcome predictability (across outcomes, grade levels, levels of performance, etc.). There are no studies validating its sole use in the manner that is becoming so common in schools; so much for the “science of reading.”
My first concern is the rush to judgment that your letter describes. This kind of testing is not so precise and reliable, nor is learning so linear, that it makes sense to arrive at any kind of consequential decisions over such brief periods of time. The law doesn’t call for such quick decision making, nor should it. Given the serious limitations found for short-term slope analysis (those changes from testing to testing), it would be neither pedagogically sound nor ethically reasonable to do so.
Of course, no one wants to see a flatline or a negative slope, though those aren’t uncommon since learning isn’t a linear process. It is perfectly normal that kids will show no growth for several weeks even when overall progress is fine.
But if there is no apparent growth, then you should ask, “Why?”
I’d want additional data. Data that would depend on the intervention being provided, the grade level of the student, the core instructional materials, and so on, as well as what else is going on in the student's life. (“He was out with the flu for a week.” “She has had a recurrence of seizures.” “His parents are going through a divorce.” “She was out of school because she was at Disney World.”)
The first thing I’d check would be the integrity and fidelity of the intervention. Who’s providing services to the child? What have they been doing? What's going on during core instruction?
I would never ditch an intervention without a really deep dive with the problem-solving team. That’s the kind of aggressiveness RtI was supposed to generate, not the mindless and counterproductive program switching that you describe.
The law says kids must make adequate growth, but it provides no guidance or specific requirement with regard to the number of weeks or data points that are required to determine adequate growth.
No one wants a youngster to go months and months without progress, but weekly testing has not been shown to improve intervention delivery or student progress. Eventually the advocates (and the courts) will figure out that responding quickly to unreliable data, with no discernible learning payoff for kids, isn’t the hallmark of an appropriately responsive school. It’s malpractice.
There is no sense in the flitting among interventions that your letter describes. It reveals a total lack of diagnostic data or rationale. This kind of foolish pedagogy results in “RtI casualties.” We can do better.
Sources
Cho, E., Capin, P., Roberts, G., & Vaughn, S. (2018). Examining predictive validity of oral reading fluency slope in upper elementary grades using quantile regression. Journal of Learning Disabilities, 51(6), 565-577.
Compton, D.L., Fuchs, D., Fuchs, L.S., & Bryant, J.D. (2006). Selecting at-risk readers in first grade for early intervention: A two-year longitudinal study of decision rules and procedures. Journal of Educational Psychology, 98(2), 394-409.
Kim, Y.S., Petscher, Y., Schatschneider, C., & Foorman, B. (2010). Does growth in rate in oral reading fluency matter in predicting reading comprehension achievement? Journal of Educational Psychology, 102(3), 652-667.
Tolar, T.D., Barth, A.E., Fletcher, J.M., Francis, D.J., & Vaughn, S. (2014). Predicting reading outcomes with progress monitoring slopes among middle school students. Learning and Individual Differences, 30, 46-57.
Van Norman, E.A., & Parker, D.C. (2016). An evaluation of the linearity of curriculum-based measurement of oral reading (CBM-R) progress monitoring data: Idiographic considerations. Learning Disabilities Research & Practice, 31(4), 199-207.
Yeo, S., Fearrington, J.Y., & Christ, T.J. (2012). Relation between CBM-R and CBM-mR slopes: An application of latent growth modeling. Assessment for Effective Intervention, 37(3), 147-158.