Does Formative Assessment Improve Reading Achievement?

  • 22 September, 2015
  • 7 Comments

Today I was talking to a group of educators from several states. The focus was on adolescent literacy. We were discussing the fact that various programs, initiatives, and documents—all supposedly research-based efforts—were promoting the idea that teachers should collect formative assessment data.

I pointed out that there wasn’t any evidence that it actually works at improving reading achievement with older students.

I see the benefit of such assessment or “pretesting” when dealing with the learning of a particular topic or curriculum content. Testing kids on what they know about a topic may allow a teacher to skip some topics or to identify those that may require more extensive classroom coverage than originally assumed.
It even seems to make sense with certain beginning reading skills (e.g., letter names, phonological awareness, decoding, oral reading fluency). Various tests of these skills can help teachers to target instruction so no one slips by without mastering these essential skills. I can’t find any research studies showing that this actually works, but I myself have seen the success of such practices in many schools. (Sad to say, I’ve also seen teachers reduce the amount of teaching they provide in skills that aren’t so easily tested, like comprehension and writing, in favor of these more easily assessed topics.)
However, “reading” and “writing” are more than those specific skills—especially as students advance up the grades. Reading Next (2004), for example, encourages the idea of formative assessment with adolescents to promote higher literacy. I can’t find any studies that support (or refute) the idea of using formative assessment to advance literacy learning at these levels, and unlike with the specific skills, I’m skeptical about this recommendation.
I’m not arguing against teachers paying attention… “I’m teaching a lesson and I notice that many of my students are struggling to make sense of the Chemistry book, so I change my upcoming lessons, providing a greater amount of scaffolding to ensure that they are successful.” Or, even more likely… “I’m delivering a lesson and can see that the kids aren’t getting it, so tomorrow we revisit the lesson.”
Those kinds of observations and on-the-fly adjustments may be all that is implied by the idea of “formative assessment.” If so, it is obviously sensible, and it isn’t likely to garner much research evidence.
However, I suspect the idea is meant to be more sophisticated and elaborate than that. If so, I wouldn’t encourage it. It is hard for me to imagine what kinds of assessment data would be collected about reading in these upper grades, and how content teachers would ever use that information productively in a 42-minute period with a daily caseload of 150 students.
A lot of what seems to be promoted these days as formative assessment is getting a snapshot of a school’s reading performance level, so that teachers and principals can see how much gain the students make in the course of the school year (in fact, I heard several of these examples today). That isn’t really formative assessment by any definition that I’m aware of. That is just a kind of benchmarking to keep the teachers focused. Nothing wrong with that… but you certainly don’t need to test 800 kids to get such a number (a randomized sample would provide the same information a lot more efficiently).
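To illustrate that sampling point with a rough sketch (a hypothetical Python simulation using made-up scores, not data from any real school), a random sample of 100 students yields essentially the same school-wide average as testing all 800:

```python
import random
import statistics

# Hypothetical school of 800 students with made-up reading scale scores.
random.seed(42)
all_scores = [random.gauss(mu=650, sigma=40) for _ in range(800)]

# Benchmark obtained by testing every student.
census_mean = statistics.mean(all_scores)

# Benchmark estimated from a random sample of 100 students.
sample = random.sample(all_scores, k=100)
sample_mean = statistics.mean(sample)

# Rough 95% margin of error for the sample estimate.
margin = 1.96 * statistics.stdev(sample) / (len(sample) ** 0.5)

print(f"Tested all 800 students: mean = {census_mean:.1f}")
print(f"Tested a sample of 100:  mean = {sample_mean:.1f} +/- {margin:.1f}")
```

With numbers like these, the sampled estimate typically lands within a few points of the all-student average, which is the sense in which a randomized sample delivers the benchmarking figure far more efficiently than testing everyone.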
Of course, many of the computer instruction programs provide a formative assessment placement test that supposedly identifies the skills that students lack so they can be guided through the program lessons. Thus, a test might have students engaged in a timed task of filling out a cloze passage. Then the instruction has kids practicing this kind of task. Makes sense to align the assessment and the instruction, right? But cloze has a rather shaky relationship with general reading comprehension, so improving student performance on that kind of task doesn’t necessarily mean that these students are becoming more college and career ready. Few secondary teachers and principals are savvy about the nature of reading instruction, so they get mesmerized by the fact that “formative assessment” (a key feature of quality reading instruction) is being provided, and the “gains” that they may see are encouraging. That these gains may reflect nothing that matters would likely never occur to them; if it looks like reading instruction, it must be reading instruction.
One could determine the value of such lessons by using other outcome measures that are more in line with the kinds of literacy one sees in college, as well as in the civic, familial, and economic lives of adults. And one could determine the value of the formative assessments included in such programs by having some groups use the program following the diagnostic guidance based on the testing, while other groups just use the program following a set grade-level sequence of practice. I haven’t been able to find any such studies on reading, so we have to take the value of this pretesting on faith, I guess.
Testing less—even for formative purposes—and teaching more seems to me to be the best way forward in most situations.

Comments

See what others have to say about this topic.

EdEd Jun 13, 2017 01:47 AM

9/23/2015

Hi Dr. Shanahan,

I've responded to a few of your comments before, but decided to do so on the atozteacherstuff.com blog - this isn't link spamming - just wanting to send you a link to my response in case you wanted to reply. I don't own that site or benefit from its promotion in any way.

http://forums.atozteacherstuff.com/showthread.php?t=190736

Timothy Shanahan Jun 13, 2017 01:47 AM

9/23/2015

Thanks, EdEd. You might be right, but if you are, then the research standards employed by the National Science Foundation, the National Institutes of Health, and the various National Academies of Science in the U.S., and the comparable institutions around the world (Canada, UK, Hong Kong, Luxembourg, etc.), are all doing it wrong. The only way you can know if a component makes a difference is to isolate that component. Yes, if a component is part of a larger intervention that works, you can't just omit that component in practice, but as I indicate in this blog, and as all those science organizations agree, you can isolate the impact of a component by evaluating the intervention without it. Many years ago, Isabel Beck put forth a comprehensive approach to vocabulary instruction, and then tested the impact of various components of the intervention by leaving them out of particular studies. One can and should do the same thing if one wants to claim that formative assessment makes a difference in adolescent literacy.

EdEd Jun 13, 2017 01:48 AM

9/23/2015
Thanks for your response, Dr. Shanahan - I agree with your comment that package research doesn't prove the efficacy of individual components separate from the whole. I think I could have explained myself better with my wording and choice of terms.

What I mean, more specifically and (hopefully) simply, is that I don't see formative assessment as just a component in many studies, but an integral part of the intervention process evaluated - inseparable, inextricable, part & parcel - both practically and theoretically. As such, studies that evaluate reading programs that collect structured data, then use that data to plan, modify, or change strategies, really are directly measuring formative assessment as an independent variable. Studies that demonstrate the efficacy of CBM and MTSS/RtI on academic achievement, for example, are re

Timothy Shanahan Jun 13, 2017 01:48 AM

9/23/2015

Thanks, EdEd. That helps to clarify... however, I don't know of any of those studies that are on anything in reading but basic skills... typically studies carried out with primary grade kids or older readers reading at very low levels. The recommendations I was challenging were formative assessment plans aimed at improving literacy achievement in secondary schools. Don't know of any evidence of that and given the nature of what kids need to learn in those years if they are not severely disabled, I find it unlikely that such assessment would help. Always willing to be convinced by evidence, but without it I'm left with reasoning on the basis of logic and experience (tenuous, of course, but it is all I have in this case). Thanks, again.

EdEd Jun 13, 2017 01:49 AM

9/24/2015

Definitely, Dr. Shanahan - my perspective is that you're right about the research on literacy at the secondary level. My initial comments were more related to how you arrived at your conclusions - some of the assumptions about research that seemed to be made - rather than the specifics of the particular concept of formative assessment in secondary school.

I wouldn't mind your opinion on this slightly alternative thought process: Formative assessment (in a general sense, not a specific set of procedures) really is a structured way of determining evidence-based practice. It's different from other strategies in that it almost isn't a technique, but a way to evaluate techniques. To what degree should we use formative assessment not because it's evidence-based, but because we believe instruction should be evidence-based, and formative assessment is our way of figuring out whether it is?

Similarly, do we have any direct research of research itself on secondary literacy? To what extent have we evaluated the very concept of research methodology with secondary literacy? Sure, we can evaluate and test specific statistical tests, and could evaluate the utility of specific research designs, but the very concept of research itself? Is that even possible, or is that more of an assumption given our approach to expecting research in the first place?

In short, I'm thinking that I see formative assessment - again, in the concept sense, not any particular application such as CBM - as philosophical - inherent in our approach as data-based practitioners. Should we maybe assume that formative assessment always needs to be present in that data-based paradigm, but expect certain applications of formative assessment to demonstrate their utility? I'm thinking of CBM specifically here as an example - maybe CBM doesn't demonstrate its utility at the secondary level as much because it isn't tapping core constructs to be measured, and we've generally moved beyond basic skills at the secondary level. However, perhaps we should still expect formative assessment to occur. Even if we haven't found a way, we should want to be able to see a student's progress over time and have some incoming data to let us know if our instruction is making an impact.

Not arguing here - just genuinely interested...

Timothy Shanahan Jun 13, 2017 01:49 AM

9/24/2015

EdEd--

The only point to emphasizing "research-based" is to employ practices that have been found to advantage kids' learning. There are many things that "make sense" or "seem logical" that turn out not to help very much, and in some cases to take time and other resources away from what is really important. Seeing student growth over time can be fulfilling for teachers and can (sometimes) be informative to students; that doesn't mean that it helps raise achievement. Nevertheless, I don't disagree with your point: there are many times when we do things without any research base (like not allowing 7th period classes to go late so no one misses their school bus). I have no problem with that, but we definitely shouldn't tell teachers that we are using these practices because they have been proved to be effective.

I know of no studies showing that adherence to research-based practices consistently improves reading achievement for anyone.

EdEd Jun 13, 2017 01:50 AM

9/24/2015

I'm certainly not arguing for using common sense in place of research/evidence. At the risk of rehashing what I said earlier, my point is that formative assessment IS the very process of collecting that research/evidence/verification of learning. I'm not thinking it's a common sense strategy - I'm thinking it's the process by which we judge whether something is "evidence-based" in the field. Just as we use "research methodology" to evaluate strategies, we use formative assessment (along with summative assessment) to determine if a strategy is having an impact in the classroom.

In short, I'm suggesting that the only way to identify whether strategies "advantage kids' learning" as you've described is actually through formative (and summative) assessment. Which other methods would you use to identify whether a selected strategy were actually effective in an applied situation?

Finally, I'm not sure I understand your last sentence - if I'm reading it right, you're saying that you're not aware of any studies which show that research-based practices improve reading achievement? Guessing that's not what you meant?
