Today I was talking to a group of educators from several states. The focus was on adolescent literacy. We were discussing the fact that various programs, initiatives, and documents—all supposedly research-based efforts—were promoting the idea that teachers should collect formative assessment data.
I pointed out that there wasn't any evidence that it actually improves reading achievement with older students.
9/23/2015
Hi Dr. Shanahan,
I've responded to a few of your comments before, but decided to do so on the atozteacherstuff.com blog - this isn't link spamming - just wanting to send you a link to my response in case you wanted to reply. I don't own that site or benefit from its promotion in any way.
http://forums.atozteacherstuff.com/showthread.php?t=190736
9/23/2015
Thanks, EdEd. You might be right, but if you are, then the research standards employed by the National Science Foundation, National Institutes of Health, and the various National Academies of Science in the U.S.--and the comparable institutions around the world (Canada, UK, Hong Kong, Luxembourg, etc.)--are all doing it wrong. The only way you can know if a component makes a difference is to isolate that component. Yes, if a component is part of a larger intervention that works, you can't just omit that component in practice, but as I indicate in this blog, and as all those science organizations agree, you can isolate the impact of a component by evaluating the intervention without it. Many years ago, Isabel Beck put forth a comprehensive approach to vocabulary instruction, and then tested the impact of various components of the intervention by leaving them out of particular studies. One can and should do the same thing if one wants to claim that formative assessment makes a difference in adolescent literacy.
9/23/2015
Thanks for your response, Dr. Shanahan. I agree with your comment that package research doesn't prove the efficacy of individual components separate from the whole. I think I could have explained myself better with my wording and choice of terms.
What I mean, more specifically and (hopefully) simply, is that I don't see formative assessment as just a component in many studies, but an integral part of the intervention process evaluated - inseparable, inextricable, part & parcel - both practically and theoretically. As such, studies that evaluate reading programs that collect structured data, then use that data to plan, modify, or change strategies, really are directly measuring formative assessment as an independent variable. Studies that demonstrate the efficacy of CBM and MTSS/RtI on academic achievement, for example, are re
9/23/2015
Thanks, EdEd. That helps to clarify... however, I don't know of any of those studies that are on anything in reading but basic skills... typically studies carried out with primary grade kids or older readers reading at very low levels. The recommendations I was challenging were formative assessment plans aimed at improving literacy achievement in secondary schools. I don't know of any evidence for that, and given the nature of what kids need to learn in those years if they are not severely disabled, I find it unlikely that such assessment would help. Always willing to be convinced by evidence, but without it I'm left with reasoning on the basis of logic and experience (tenuous, of course, but it is all I have in this case). Thanks, again.
Copyright © 2024 Shanahan on Literacy. All rights reserved. Web Development by Dog and Rooster, Inc.