Teacher question:
I've attached a Student Work Analysis tool that we are using. I have read that you oppose attempts to grade students on the individual reading standards. Although this tool is not used for grading students, it is a standard-by-standard analysis of the students’ work, and I wonder what you think of it? [The form that was included provided spaces for teachers to analyze student success with each of their state’s math standards].
Shanahan response:
In the blog entry that you refer to, I spoke specifically about evaluating reading comprehension standards (not math or even the more foundational or skills-oriented decoding, vocabulary, or morphology).
A common error in reading education is to treat reading comprehension as if it were a skill or a collection of discrete skills.
Skills tend to be highly repeatable things…
Many of the items listed as comprehension skills are not particularly repeatable. All these standards or question types aimed at main idea, central message, key details, supporting details, inferencing, application, tone, comparison, purpose, etc. are fine, but none is repeatable in real reading situations.
Each of these actions is unique, or at least highly particularized. Each time they occur, they occur in a completely different context, and executing them requires different steps from instance to instance.
Not only does each text have its own main ideas, but because the expression of each text is so different, what it takes to locate, identify, or construct a main idea will vary greatly from text to text. Contrast this with forming the appropriate phoneme for sh or ph, computing the product of 3 X 3, or defining photosynthesis.
Another problem is that these supposed comprehension skills aren’t individually measurable.
My point isn’t that teachers can’t ask questions that would require students to figure out particular things about a text—of course they can—but performance on such questions is startlingly unreliable. Today, Johnny might answer the tone question like a champ, but tomorrow he won’t—since that is a different story, and the author revealed tone in a totally different way.
Also, comprehension questions asked about a particular text aren’t independent of each other (and item independence is imperative in assessment). The reason little Johnny struggled with tone the next day wasn’t that he forgot what he knew about tone, nor even that tone was handled more subtly in text two… but that his reading was deeply affected by that text’s more challenging vocabulary, complex sentences, or complicated time sequence, none of which are specifically tone issues.
That means that when teachers try to suss out how well Johnny can meet Standard 6 by asking tone questions, his answers will reveal how well he could make sense of tone in one particular text, but it won’t likely be indicative of how well he’ll handle tone on any other. (Not at all what one would expect to see with math, decoding, or vocabulary assessments).
Reading comprehension is so affected by the readers’ prior knowledge of the subject matter being read about and the language used to express those ideas (e.g., vocabulary, sentence structure, cohesion, text organization, literary devices, graphics) that focusing one’s attention on which kinds of questions the kids could answer is a fool’s errand.
If I were trying to collect reading comprehension information to determine who might need more help, the kind of help to provide, or who I should worry about concerning the end-of-year testing, then I wouldn’t hesitate to ask questions that seemed to reflect the standards… but the information I’d use for assessment would ignore how well the kids could answer particular types of questions.
My interest would be in how well students did with particular types of texts.
Keep track of their overall comprehension with different types of text. I’d record the comprehension score, the text’s difficulty (Lexile), the student’s familiarity with the topic, the text type, and the length.
Thus, a student record may look something like this:
| Week | Comprehension | Lexile | Familiarity | Text Type | Length |
|------|---------------|--------|-------------|-----------|--------|
| Week 1 | 90% | 400L | 4 | Fiction/Narrative | 300 words |
| Week 2 | 60% | 570L | 2 (habitats) | Info/Exposition | 550 words |
| Week 3 | 75% | 500L | 2 | Fiction/Narrative | 575 words |
| Week 4 | 75% | 570L | 4 (robots) | Info/Exposition | 500 words |
| Week 5 | 80% | 490L | 4 | Fiction/Narrative | 400 words |
| Week 6 | 65% | 580L | 3 (climate) | Info/Exposition | 500 words |
| Week 7 | 85% | 525L | 3 | Fiction/Narrative | 250 words |
Over time, you’ll get some sense that junior does great with texts that are lower than 500L, but not so well with texts that are harder than 550L (unless they’re about robots).
Or, perhaps over the report card marking period you may notice a difference in performance on the literary or informational texts (which you can see in my example above). But you also need to notice that the informational texts were relatively harder here, so it isn’t certain that the student would struggle more with content than literature (though one might make an effort to sort this out to see if there is a consistent pattern). Likewise, the student seemed to be able to handle silent reading demands with the shorter texts, but comprehension tended to fall off with the longer texts. That may lead me to try to do more to build stamina with this student.
And so on.
Basically, the information that you are collecting should describe how well the student does with particular types of texts (in terms of discourse types, length, topic familiarity, and difficulty), rather than trying to figure out which comprehension skills the individual question responses may reveal.
If a student does well with many of the passages, then he or she will likely do well with the comprehension standards—as long as these weekly dipsticks reflect the difficulty, length, and types of texts that will appear on the end-of-year tests.
And, if students perform poorly with many of the passages, then their performance on all question types will be affected.
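For teachers who keep these weekly records in a spreadsheet or a simple script, here is a minimal sketch (in Python) of how a record like the one above could be tallied to surface the kinds of patterns described earlier. The field names and the cut points (550L, 400 words) are illustrative assumptions chosen to mirror the example table, not a prescribed method.

```python
# A minimal, illustrative sketch of tallying the weekly comprehension records
# shown in the table above. Field names and the 550L / 400-word cut points are
# assumptions made to mirror the example, not a prescribed method.

from statistics import mean

# One entry per weekly comprehension check, mirroring the example table.
records = [
    {"week": 1, "comprehension": 90, "lexile": 400, "familiarity": 4, "text_type": "Fiction/Narrative", "length": 300},
    {"week": 2, "comprehension": 60, "lexile": 570, "familiarity": 2, "text_type": "Info/Exposition", "length": 550},
    {"week": 3, "comprehension": 75, "lexile": 500, "familiarity": 2, "text_type": "Fiction/Narrative", "length": 575},
    {"week": 4, "comprehension": 75, "lexile": 570, "familiarity": 4, "text_type": "Info/Exposition", "length": 500},
    {"week": 5, "comprehension": 80, "lexile": 490, "familiarity": 4, "text_type": "Fiction/Narrative", "length": 400},
    {"week": 6, "comprehension": 65, "lexile": 580, "familiarity": 3, "text_type": "Info/Exposition", "length": 500},
    {"week": 7, "comprehension": 85, "lexile": 525, "familiarity": 3, "text_type": "Fiction/Narrative", "length": 250},
]

def average_by(key_fn, label):
    """Print the mean comprehension score for each group produced by key_fn."""
    groups = {}
    for r in records:
        groups.setdefault(key_fn(r), []).append(r["comprehension"])
    print(label)
    for group, scores in sorted(groups.items()):
        print(f"  {group}: {mean(scores):.0f}% across {len(scores)} passages")

# Literary vs. informational performance.
average_by(lambda r: r["text_type"], "By text type")

# Easier vs. harder texts (550L is an arbitrary cut point for illustration).
average_by(lambda r: "550L or below" if r["lexile"] <= 550 else "above 550L", "By difficulty")

# Shorter vs. longer passages (a rough check on stamina).
average_by(lambda r: "400 words or fewer" if r["length"] <= 400 else "over 400 words", "By length")
```

Nothing in the tallying matters as much as the habit it supports: looking across texts of different types, difficulties, and lengths rather than across question types.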