I hear you.
Last week I posted a blog challenging the amount of testing and test preparation in American reading classes. I got smacked, metaphorically, by friend and foe alike. Some posted their concerns, many more sent them to me directly.
The grumbles from past foes are the easiest to reply to. They often expressed—in passive-aggressive tones—exasperation that I have “finally” woken up to the idea that testing companies are evil and that testing is a conspiracy against kids and teachers. They know because they follow Diane Ravitch’s “research.”
The thing is—and I’m sure this is true since I’ve reread last week’s posting—I didn’t really come out against testing. Just against over-testing and test prep generally. The politicians have imposed some testing—and I think they have overdone it—but teachers and principals are also devoting too much time to testing, and that's on us.
Dr. Ravitch seems to be quite upset about accountability testing, which she herself helped impose on educators, overriding the critics who depended upon research in their arguments. (Ravitch is an educational historian, and quite a good one, but ignores—then and now—psychological and educational research.)
I’m not even against accountability testing, as long as the amount of testing is commensurate with the information that one is collecting. To find out how well a school or district is doing, do we really need to test every year? Do they change that fast? Do we really need to test everyone? Anyone ever hear of random sampling? Come onnnnnn!
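The random-sampling point can be made concrete with some back-of-the-envelope arithmetic. The numbers below are assumptions for illustration only (a score standard deviation of 15 points and a sample of 400 students are made-up but plausible figures), not anything from the post:

```python
import math

# Hypothetical illustration: a district with thousands of students and a
# test whose scores have a standard deviation of about 15 points (assumed).
score_sd = 15.0
sample_size = 400  # students actually tested, drawn at random

# Standard error of the sampled district mean, and a 95% margin of error.
se_mean = score_sd / math.sqrt(sample_size)
margin_95 = 1.96 * se_mean

print(f"SE of the sampled district mean: {se_mean:.2f} points")
print(f"95% margin of error:            +/- {margin_95:.2f} points")
```

Under these assumptions, testing a random 400 students pins down the district average to within about a point and a half, which is why estimating how a school system is doing does not require testing every child every year.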
If Dr. Ravitch’s minions spent more time in schools, they’d know the heaviest testing commitments are the ones the districts (and, sometimes, even individual principals and teachers) have taken on themselves. We may blame those misguided efforts on the accountability testing—we all want to look good for the picture—but, it is a bad choice, nevertheless. And, it is a choice.
I do find the critics’ vexation with me a little surprising. For example, when I was director of reading in the Chicago Public Schools (15 years ago), I was ordered by then-Mayor Daley to emphasize test prep in my teacher education efforts in the city. Unlike some of the critics who these days are so noisy about over-testing, I had skin in the game and I refused.
It might be worth noting that my refusal led to two outcomes that matter: (1) the Chicago Public Schools engaged in the least test prep—before or since; and (2) Chicago kids made their biggest measured gains in reading. Not a research study, but a policy dispute affecting nearly a half million kids.
Of course, those who appreciated my past candor were now chagrined at my remarks. They weren’t necessarily upset by what I had to say about accountability testing (many of them concur that it is over the top), but they were scared to death by my comments on the various screening, monitoring, and diagnostic tests that are so much of the daily lives of primary grade classrooms.
Again, I think I was clear, despite the concerns. The typical complaint: “I understand you, but no one else will.” That is, they get that I am not opposed to all classroom assessment, but they are sure no one else will appreciate the subtlety of what they see as a complex position.
For example, one dear friend, a grandmother, told me she appreciates that her grandkids are given a standardized test in reading and math every year. The reason? She doesn’t trust teachers or schools to actually tell how kids are doing.
The fact is too often teachers don’t tell parents how their kids are doing. For all kinds of reasons: What if a child isn’t doing well and I don’t know what to tell the parent—why raise a question I can’t answer? What if I don’t think there is anything that can be done—it’s a minority child without economic resources whose family is a wreck? What if I only notice effort and not achievement? What if I just don’t want the argument (often parents don’t like to hear that junior isn’t succeeding)?
An annual test isn’t perfect, but it doubles the amount of information that most parents have and that isn’t a bad thing. I’m not against that kind of testing.
One reader thought I was smacking DIBELS, but I wasn’t. I was tough on the notion that tests like DIBELS can profitably be given to ANYBODY every week or two through a school year. But not because I was anti-DIBELS.
Twice a year I go to my dentist. She takes x-rays every fourth visit. Why doesn’t she do it every time? For two reasons: first, dental health doesn’t change that fast, so they try not to test more often than would help; and, second, x-rays can cause damage. The balance between help and harm is best struck by taking x-rays once every four checkups rather than the seemingly more rigorous every-visit approach.
DIBELS-like instruments won’t do physical damage, like x-rays, but they do reduce the amount of teaching and they might shape that teaching in bizarre ways. That is harmful.
My advice:
1. Reduce accountability testing to the minimum amounts required to accomplish the goal. Research is clear that we can test much less to find out how states, districts, and schools are doing, without any loss of information.
2. Test individual kids annually to ensure parents have alternative information to that provided by teachers.
3. Limit diagnostic testing in reading to no more than 2-3 times per school year. Studies do not find that any more testing than that is beneficial, and no research supports reducing the amount of teaching to enable such over-testing.
4. Give most test prep a pass. It doesn’t really help, and it reduces the amount of essential instruction that kids should be getting. One practice test given one or two weeks ahead, so kids will feel comfortable with the testing format, should be plenty.
You mention: Limit diagnostic testing in reading to no more than 2-3 times per school year. Studies do not find that any more testing than that is beneficial, and no research supports reducing the amount of teaching to enable such over-testing.
Does this include progress monitoring (using AimsWeb, DIBELS, etc.) for below level students in specific skill areas in order to determine if a particular intervention is effective? 1/9/17
Edward--
In a word, YES. That is exactly what I am saying. Those tests are terrific, but they are not finely calibrated enough, and kids don't develop quickly enough (especially those kids in need of intervention), for the tests to tell you anything trustworthy when given over and over again. There are studies showing that more frequent testing can be useful in math, but math and reading are not the same.
tim 2/9/17
Mark Shinn proposes progress monitoring for Tiers 2 and 3 students in reading on a more frequent basis, dependent upon the abilities of the students. He uses the analogy of weighing yourself weekly when dieting. This seems to be logical. You appear to propose that benchmarking students two or three times per year is sufficient. This seems fine if an intervention is working, but what if the intervention does not seem to be working? 1/9/17
Edward--
I've had that argument with Mark and he claimed his position was based on Lynn Fuchs' research. I asked Lynn about it and she said, "No way," and, in fact, has now had the same argument with him. The problem is that the standard errors of the tests are so much larger than the amount of growth a student will go through that you are getting no new information. If a student scores 90 wcpm on Friday, and he gets a 100 next Friday, is that because he was learning so much, or simply because we should expect that much variation in the scores from administration to administration? We can weigh people with much less error.
Weekly reading testing is not supported by the research. It may seem like "common sense," but it is pretend rigor, not real rigor. It's the kind of thing that gets kids hurrying to read rather than reading fluently.
tim 1/9/17
Leave me a comment; I'd like to have a discussion with you!
Copyright © 2024 Shanahan on Literacy. All rights reserved.