If you type “reading comprehension observation” into Google, you get 462,000 hits. Not all of those pages will be instruments for observing how teachers and classrooms support reading comprehension development. But a lot of them are.
Some of these instruments are famous, like the CIERA one that Barbara Taylor and P. David Pearson made up a while back. Others are the brainchildren of small companies or school districts. And they all are supposed to be useful checklists for determining whether a classroom has what it takes.
Studies on such instruments suggest that they work, to some extent. What I mean is that many of the checklists are workable; a coach or other observer can see whether particular actions are taking place, and such observations can be reliable. So far, so good.
Some of these forms even have a correlation with kids’ learning. But having a relationship and having a strong relationship is the difference between talking to the counterman at the local 7-11 and talking to my wife. The correlations that have emanated from these observation forms tend to be tiny, and experts like Steve Raudenbush now believe there is simply too much variation in the day-to-day activities of classes for such observations to reveal much that is worthwhile about the relationship between teaching and learning.
Reliability problems aside, most of the questions in these instruments get at the wrong things. They are simply too superficial to allow anything important to be determined.
A simple example: many forms ask the observer to determine if there is vocabulary instruction. That’s easy enough to figure out, and observers should be able to check the right boxes. However, what do those observations tell us? Well, that almost all teachers deliver some vocabulary instruction, so we’ll check yes to such observation items; but this variable, even when combined with the others, doesn’t tell us what we want to know.
I guess what I’m saying is that good teachers and poor teachers aren’t that different. In fact, it might be fair to say that many bad teachers look exactly like good teachers, but that the resemblance is superficial. You and I both might get checked off that we are teaching vocabulary, but which of us has selected words worth learning? Which is providing clear explanations of the word meanings? And which is being thorough enough, interactive enough, and focused enough on meaning to help the students learn the words?
Checklists for observing reading lessons, for the most part, do not require qualitative judgments of how well teachers are keeping kids focused on meaning or how coherent or intellectually demanding the atmosphere is. Two teachers may be asking questions, reading stories, and teaching strategies, but they are probably not doing those things in equally powerful ways. Unfortunately, our observation instruments rarely get at these quality issues (though when they do, it is those items that seem to work best).
That means that most reading coaches are probably looking at the wrong things when they observe, and quite often the prescriptions they develop on the basis of their observations aim more at changing the instructional activities than at making the current activities more substantial in getting kids to zero in on meaning.
8/7/2015
Luqman--
There are many reasons why research-based interventions never work as well during implementation as they did during the studies. One reason is that study interventions are more likely to be carried out by the individuals who created the interventions in the first place. Thus, they know a lot more about what is being delivered than a typical teacher would, and so part of the study intervention is the insight of the development team (ideas that aren't actually built into the program). Even when the training isn't done by the creator, often it is delivered by someone who is trained directly by the creator (like one of his or her graduate students).

Also, there are so-called Hawthorne effects: the newness of an intervention has an effect of its own, and this effect usually cannot be sustained with extended use.

Which raises time issues: a study might implement an intervention for two or three months in a grade level or two, and then, on the basis of that success, the intervention is put into place for entire school years in entire elementary schools (with no evidence that its effectiveness can be sustained that long).
Basically, researchers work like crazy to make interventions work and teachers often don't (in part because they don't think it is necessary--it is a program that works after all, the research proves it).
Leave me a comment; I'd like to have a discussion with you!