Teacher question:
I teach first grade, and this year I switched schools. In my previous school, we tested our students with DIBELS three times a year. The idea was to figure out whether students were having trouble with decoding so that we could help them. That isn't how my new principal does it. He has us giving kids a reading comprehension test with leveled books. I asked him about it, and he said that the district didn't care about DIBELS and he didn't care about DIBELS (he only cares about how kids do on the ____ test). I'm confused. I thought the idea of teaching phonics and fluency was to enable comprehension, but the emphasis on this test seems to suggest that, at least in my new school, that is no longer necessary. What should I do?
Shanahan response:
Educators often complain about the intrusive accountability testing imposed by politicians and bureaucrats who wouldn’t know the difference between a textbook and a whiteboard. But many of the dumbest decisions made by teachers and principals are in pursuit of the tests we ourselves impose.
The accountability tests (PARCC, SBAC, and all the other state tests) are there to check up on how well we are doing our jobs. It's not surprising that we don't like those, and that, consequently, we might bend over backward to try to look good on them. That's why many schools sacrifice real reading instruction (that is, the instruction that could actually be expected to help kids read better) in favor of so much test prep and test practice.
But what of the formative assessments that are supposed to help us do our jobs? You know, the alphabet soup of inventories, diagnostic tests, screeners, monitors, and dipsticks that pervade early reading instruction like DIBELS, ERDA, PALS, CTOPP, TPRI, ISIP, CAP, TRC, NWEA, AIMSweb, and TOWRE.
None of these instruments are problematic in and of themselves. Most set out to measure young children's ability to… well, to… do something relevant to early literacy learning. For instance, they might evaluate how many letters the children can name, or how well they can hear the sounds within words. Sometimes, as in your school, they ask kids to read graded passages or little books and to answer questions about them; or, as in your previous school, they might gauge students' ability to perceive the sounds within words correctly.
The basic idea of these testing schemes is to find lacks and limitations. If Johnny doesn't know his letters, then his kindergarten teacher should provide extra instruction in letter names. If Mary can't understand the first-grade text, perhaps she should get her teaching from a somewhat easier book. And so on.
That is all well and good… but how we do twist those schemes out of shape! My goodness.
As a result, educators increasingly have grown restive about the "instructional validity" of these assessments. Instructional validity refers to the appropriateness of the impact a test has upon instruction.
DIBELS itself has often been the target of these complaints. These tests shine a light on parts of the reading process, and teachers and principals tend to focus their attention on those tested parts, neglecting whatever aspects of literacy development the flashlight doesn't reach. Thus, one sees first-grade teachers spending inordinate amounts of time on word attack trying to raise NWF (nonsense word fluency) scores, but with little teaching of untested skills like vocabulary or comprehension or writing.
Even worse, we sometimes find instruction aimed at mastery of the nonsense words themselves, with the idea that this will result in higher scores.
Of course, this is foolishness. The idea of these formative testing regimes is to figure out how the children are doing with some skill that supports their reading progress, not to see who can obtain the best formative test scores.
The reason why DIBELS evaluates how well kids can read (decode or sound out) nonsense words is that research is clear that decoding ability is essential to learning to read and that instruction that leads students to decode better eventually improves reading ability itself (including reading comprehension). Nonsense words can provide a good avenue for assessing this skill because they would not favor any particular curriculum (as real words would), they correlate with reading as well as real words do, and no one in their right mind would have children memorizing nonsense words. Oops… apparently, that last consideration is not correct. Teachers, not understanding or not caring about the purpose of the test, are sometimes willing to raise scores artificially through just this kind of memorization.
And, to what end? Remember, the tests are aimed at identifying learning needs that can be addressed with extra teaching. If I artificially make it appear that Hector can decode well when he can't (memorizing the test words is one way to do this), then I get out of having to provide him the instruction that he needs. In other words, I've made it look like I'm a good teacher, but what I've really done is disguise the fact that Hector isn't succeeding, delaying the help he needs until it is too late.
Another example of this kind of educational shortsightedness has to do with using the tests to determine who gets extra help, perhaps from a Title I reading teacher. In most schools, the idea is to catch kids' literacy learning gaps early so we can keep them on the right track from the beginning. But what if you are in a school with high mobility (your kids move a lot)?
I know of principals who deploy these resources later (grades 2 or 3) to try to make certain that these bucks improve reading achievement at their schools. Research suggests it is best to use these tests early, to target appropriate interventions in kindergarten and grade 1, but these schmos don't want to "waste" resources in that way since so many students don't stick around all the way to the accountability testing. Instead of aiming the testing and intervention at the points where they will help kids the most, these principals aim them at what might make the principals themselves look better (kind of like the teachers teaching kids the nonsense words).
Back to your question… your school is only going to test an amalgam of fluency (oral reading of the graded passages) and reading comprehension. If all that you want to know is how well your students can read, that is probably adequate. If all the first-grade teachers test their charges with that kind of test, the principal will end up with a pretty good idea of how well the first-graders in his school are reading so far. Your principal is doing nothing wrong in imposing that kind of test if that is what he wants to know. I assume those results will be used to identify which kids need extra teaching.
I get your discomfort with this, however. You are a teacher. You are wondering… if little Mary needs extra teaching, what should that extra teaching focus on?
Because of the nature of reading, that kind of assessment simply can't identify which reading skills are causing the problem. Mary might not read well (the test is clear about that), but we can't tell whether this poor reading is due to gaps in phonological awareness (PA), phonics, oral reading fluency, vocabulary, or reading comprehension itself.
The default response for too many teachers, with this test or any other, is to teach something that looks like the test. In first grade, that would mean neglecting the very skills that improve reading ability. The official panels that carefully examined the research concluded that decoding instruction was essential because such teaching resulted in better overall reading achievement (not just improvements in the skill that was taught). The same can be said about PA, fluency, and vocabulary instruction.
I'd love to tell you I have a great solution to your problem… For instance, perhaps all the children could be tested in the way that your principal requires, and then anyone who failed to reach a particular reading level could be tested further, using DIBELS or something like it, to identify the underlying skills that are likely holding those kids back. That sounds pretty sensible, since it would keep teachers from focusing only on the underlying skills (and then ignoring reading comprehension). And yet, I quake at the thought of teachers who will now teach reading with the test passages, or who will coach the kids on answering the test questions so that no one needs to be tested further; in other words, hiding the fact that their kids are struggling.
The key to making all of this work for kids is to use these assessments to identify what children need and then to teach to those needs, not to the tests themselves.
So, a pox on both your houses… That your principal doesn’t care why kids are having reading trouble is a serious impediment for the boys and girls in that school. That you don’t recognize the value of a test of your students’ actual reading ability concerns me as it might indicate a willingness to go off the deep end, teaching some aspects of reading to the neglect of others. Teach it all, monitor it all... and help these children to succeed.
Comments:

Let me come to this teacher's defense, possibly. Tim, you say: "That you don't recognize the value of a test of your students' actual reading ability concerns me as it might indicate a willingness to go off the deep end, teaching some aspects of reading to the neglect of others." The teacher didn't mention which test with "leveled books" she gives, but if her district is like mine and many others, it's the Fountas & Pinnell Benchmark Assessments, which are very counterproductive in K-1. Why? Because the first few levels use predictable text rather than decodable text. What they assess is whether students have mastered around 30 high-frequency words and then are able to look at the picture and read words like "elephant" or "swing," words most kindergarteners could not independently decode. As you've previously pointed out (in your discussion of Language at the Speed of Sight), relying on context support is characteristic of poor readers. Both Louisa Moats and Marilyn Adams, cowriters of the Foundational Skills portion of the Common Core Standards, have spoken out eloquently against using predictable text for beginning readers. After the National Reading Panel's report, California approved only two textbooks for adoption, and both used decodable text as part of their reading assessments. Many districts have taken a step backward.
This is an excellent compilation of the points that need to be addressed in early literacy education and beyond, i.e., that "reading" is the amalgam of a number of skills. We have been in touch over the summer. I am an SLP and a specialist in reading and writing acquisition, as part of my larger experience with identifying and treating language disorders, including the "written oral language disorders" commonly identified as "dyslexia." I understand that there is a vicious trap of accountability, measured by tests, that results in the wholesale adoption of balanced literacy programs meant to address PA, phonics, and "comprehension." You are absolutely right… we don't teach kids to read but push them through these hoops, and then I hear stories of third graders having trouble reading (and writing).
The default solution often is to put them on some kind of computer program as "teacher," not as practice. Coupled with this is the sad neglect of handwriting, even though it has been shown through viable, valid research to be an important pathway into integrating sounds and symbols, fluent spelling, and, importantly, memory. As children move through to secondary education, they often are given even more apps… essentially z-apping them into under-performance, whether secondary to underlying linguistic deficits or, frankly, misteaching. SLPs in schools are often hobbled by reliance on tests to pinpoint these deficits and then by lack of time or knowledge to help address them.
This becomes essential when teaching ESL students. In fact, I am now working to develop, I hope with the local library system, ESL courses for parents who want not only to improve their own language uptake by working through phonology, vocabulary, reading comprehension, and writing, but also to expose their preschoolers and early elementary kids to these very concepts, to help them develop areas that might not be fully valued and taught in their own classrooms.
Nancy Rose Steinbock, M.A., CCC-SLP
Edgartown, Massachusetts
I am always concerned when schools exclusively use leveled text kits to assess primary-level readers.
1. Typically, that indicates that there is less instructional emphasis on explicit, systematic instruction of the code. Many of these schools are not using a consistent program of instruction. Without assessing foundational skills, teachers are unlikely to know where students are along the word recognition developmental continuum. All students require systematic instruction in word recognition.
2. Although some test using a finely defined text-reading gradient is necessary in K-1, there is a great deal of variability in test administration, scoring, and interpretation among teachers using kits of leveled readers. It is troubling if you have only one test and the data are not collected in reliable, standardized ways. This is particularly true if the school's test kit does not have standardized questions for each book and consistent quantitative ways to code the retelling. Schools should be holding annual (or more frequent) booster sessions to ensure fidelity.
3. The teacher requesting Tim's feedback mentioned that her school valued the leveled reader as a gauge of first-grade comprehension. This is troubling to me. Texts at the easiest levels (A-G) do not adhere to comprehensive narrative text structures. Therefore, they may be harder for young children to remember than a higher-level book with a more complex story line. Additionally, without a script prompting children to "think about what is happening in the book, because you will have to retell what you read and answer questions when you finish," young readers get so focused on performing and getting words right that they may be startled by those activities at the end, especially if they have only done collaborative retellings during instruction. At that reading stage (A-G/H), the M-S-V analysis is an important tool. In many schools that I visit, teachers are just using the percentage to inform instructional level, omitting the time-consuming MSV analysis. Finally, tests of listening comprehension can distinguish between comprehension issues and decoding problems.
MSV analysis is discredited in Mark Seidenberg's Language at the Speed of Sight and in David Kilpatrick's Essentials of Assessing, Preventing, and Overcoming Reading Difficulties, along with works by other researchers.