Are Oral Reading Norms Accurate with Complex Text?

  • 01 February, 2016
  • 2 Comments

Teacher Question:  

          A question has come up that I don't know how to address, and I would love your input. For years, we have used the Hasbrouck/Tindal fluency norms as one of the ways we measure our students' reading progress. For example, the 4th grade midyear 50th percentile is 112 CWPM. The fourth grade team has chosen a mid-year running record passage and is finding that many of our students have gone down instead of up in their CWPM. One teacher said that is because the common-core aligned texts are more challenging, and that the passage is really the equivalent of what used to be considered a 5th grade passage. She said that the norms were established using text that is easier than what the students are now expected to read. I know that the texts are more complex and challenging and therefore more difficult for the students to read, and that this particular text may not be a good choice to use for an assessment, but it does raise the larger question: are these fluency norms still applicable?
Shanahan response:
         This is a great question, and one that I must admit I hadn’t thought about before you raised it. If average fourth-graders read texts at about 112 words correct per minute by mid-fourth-grade, one would think that their accuracy and/or speed would be affected if they were then asked to read texts that in the past would have been in the fifth-grade curriculum.
         However, while that assumption seems to make sense, it would depend on how those norms were originally established. Were kids asked to read texts characteristic of their grade levels at particular times of the year or was the text agenda wider than that? If the latter, then the complex text changes we are going through would not necessarily matter very much.
         So what’s the answer to your question? I contacted Jan Hasbrouck, the grand lady herself, and put your question to her. Here is her response:
         I guess the most honest answer is "who knows?" I hope that we may actually have an answer to that question by this spring or summer because Jerry Tindal and I are in the process of collecting ORF data to create a new set of norms, which should reflect more current classroom  practice. 
         My prediction is that the new ORF norms won't change much from our 2006 norms (or our 1992 norms). My prediction is based on the fact that ORF is, outside of expected measurement error (which Christ & Coolong-Chaffin, 2007 suggest is in the range of 5 wcpm for grades 1 and 2 and 9 wcpm in grades 3-8+), fairly stable. You can see evidence of this on our 2006 norms when looking at the spring 50th %iles for grade 6 (150), grade 7 (150), and grade 8 (151). When you consider that these three scores represent approximately 30,000 students reading a variety of grade level passages, that's pretty darn stable. Other studies of older readers (high school; college) also find that 150 wcpm is a common "average."
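To make the practical meaning of those measurement-error figures concrete, here is a minimal sketch. The function name is my own, and the single-number error bands are a simplification of the ranges quoted above (Christ & Coolong-Chaffin, 2007), not an official scoring rule:

```python
def wcpm_band(score: int, grade: int) -> tuple[int, int]:
    """Rough band around a single ORF score, using the measurement-error
    figures quoted above: about 5 wcpm for grades 1-2 and about
    9 wcpm for grades 3 and up."""
    sem = 5 if grade <= 2 else 9
    return (score - sem, score + sem)

# A mid-year 4th grader scoring 112 wcpm could plausibly fall anywhere in:
print(wcpm_band(112, 4))  # (103, 121)
```

In other words, two scores nine words apart in fourth grade may reflect no real difference at all, which is one reason small rises or drops against the norms should not be over-interpreted.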
         Of course this stability assumes that the ORF scores were obtained correctly, using the required standardized procedures, which unfortunately is too often not the case. Standardized ORF procedures require that students read aloud for 60 seconds from unpracticed samples of grade level passages, and the performance is scored using the standardized procedures for counting errors. In my experience most educators are doing these required steps correctly. However, I see widespread errors being made in another step in the required ORF protocol: Students must try to do their best reading (NOT their fastest reading)!  In other words, in an ORF assessment the student should be attempting to read the text in a manner that mirrors normal, spoken speech (Stahl & Kuhn, 2002) and with attention to the meaning of the text. 
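The scoring arithmetic behind the procedure Jan describes is simple. As a hedged sketch (the student numbers below are hypothetical, not from any norming sample): words correct per minute is total words read minus errors, normalized to a per-minute rate:

```python
def wcpm(total_words_read: int, errors: int, seconds: float = 60.0) -> float:
    """Words correct per minute for a timed oral reading sample.

    Standard ORF scoring: subtract errors from total words read,
    then normalize the count to a 60-second rate.
    """
    if seconds <= 0:
        raise ValueError("seconds must be positive")
    return (total_words_read - errors) * 60.0 / seconds

# Hypothetical example: a student reads 118 words in 60 seconds with 6 errors.
print(wcpm(118, 6))  # 112.0 -- the mid-year 4th-grade 50th percentile cited above
```

Note that the normalization also handles longer samples (e.g., `wcpm(170, 8, 90.0)` for a 90-second read), which matters for the longer-passage recommendation discussed below.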
         What I witness in schools (and hear about from teachers, specialists, and administrators in the field) is that students are being allowed and even encouraged to read as fast as they can during ORF assessments, completely invalidating the assessment. The current (2006) Hasbrouck & Tindal norms were collected before the widespread and misguided push to ever faster reading. It remains to be seen if students are in fact reading faster. Other data, including NAEP data, suggest that U.S. students are not reading "better."
         And yes, of course the number of words read correctly per minute (wcpm) would be affected if students were asked to read text that is very easy for them or very difficult, but again, ORF is a standardized measure that can serve as an indicator of reading proficiency.  
         Given Jan's response, I assume the norms won't change much. The reason is that the data collection isn't tightly controlled: reading procedures and texts vary across sites (not surprising with data on 250,000 readers). That means that the current norms do not necessarily reflect the reading of a single level of difficulty, and I suspect that future norming efforts won't have such tight control either.
         The norms are averages, and they still will be; that suggests using them as rough estimates rather than exact statistics (a point worth remembering when trying to determine whether students are sufficiently fluent readers).
         Last point: your fourth-grade teachers are correct that the texts they are testing with may not be of equivalent difficulty, which makes it difficult to determine whether real gains (or losses) are being made. We've known for a long time that text difficulty varies a great deal from passage to passage. Just because you take a selection from the middle of a fourth-grade textbook doesn't mean that passage is a good representation of appropriate text difficulty. That is true even if you know the Lexile rating of the overall chapter or article you have drawn from (since difficulty varies within a text). The only ways to be sure would be to do what Hasbrouck and Tindal did--use a lot of texts and assume the average is correct--or to measure the difficulty of each passage used for assessment. Using longer texts (having kids read for 2-3 minutes instead of 1) can improve your accuracy, too.
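That averaging strategy can be sketched as follows (the passage scores below are invented for illustration): averaging wcpm across several one-minute passages damps the passage-to-passage difficulty variation that a single cold read is exposed to.

```python
from statistics import mean

def average_wcpm(samples):
    """Average words-correct-per-minute across several one-minute samples.

    Each sample is a (total_words_read, errors) pair. Averaging over
    multiple passages reduces the influence of any one passage's
    difficulty, in the spirit of norming over many texts.
    """
    return mean(words - errors for words, errors in samples)

# Hypothetical one-minute reads from three passages of varying difficulty.
samples = [(120, 5), (104, 4), (131, 7)]
print(average_wcpm(samples))  # (115 + 100 + 124) / 3 = 113
```

The same logic is why a 2-3 minute read gives a steadier estimate than a single 60-second sample: more text, so less weight on any one passage's quirks.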

Comments

See what others have to say about this topic.

Karen Apr 09, 2017 07:17 PM

We use McGraw Hill's Reading Wonders as our textbook in my 5th grade classroom, and I have noticed that my students have significantly higher CWPM using the fluency passages in the practice book compared to their DIBELS benchmark and progress monitoring scores. (A pretty significant difference--kids who read 100 CWPM on the December screener routinely read 140 CWPM or more on a cold read with the Wonders materials.) And although I've never counted CWPM when they are reading out loud from the literature anthology (which should be complex texts, right?), my students sound much more fluent than when they read the DIBELS passages. I've never been quite sure what to make of this difference.

2/2/16

Timothy Shanahan Apr 09, 2017 07:18 PM

Thanks, Karen. That sounds like a passage problem on one or both sides of the equation, or an administration problem. You definitely shouldn't find differences that great across instruments on that skill. The best way to prevent that (with any instrument) is to take a large enough sample of behavior. For example, DIBELS expects you to have students read for 2 minutes (two 1-minute passages), and some researchers have argued for 3-minute rather than 2-minute reads.

2/6/16

What are your thoughts?

Leave me a comment, and I would be glad to have a discussion with you!



One of the world’s premier literacy educators.

He studies reading and writing across all ages and abilities. Feel free to contact him.