Updating Data-Driven Instruction and the Practice of Teaching

The following post appeared May 12, 2011. Since then it has been the most-read post I have written–nearly 28,000 views. I am updating it with a few changes in language and additional studies and comments that were not in the original post.

I like numbers. Numbers are facts: a blood pressure reading is 145/90. Numbers are objective, free of emotion. The bike odometer tells me that I traveled 17 miles. Objective and factual as numbers may be, we still inject meaning into them. The blood pressure reading, for example, crosses the threshold of high blood pressure and needs attention. And that 17-mile bike ride meant a chocolate-dipped vanilla cone at a Dairy Queen.

Which brings me to a school reform effort centered on numbers. Much has already been written on the U.S. obsession with standardized test scores. Ditto for the recent passion for value-added measures.  I turn now to policymakers who gather, digest, and use a vast array of numbers to reshape teaching practices.

Yes, I am talking about data-driven instruction–a way of making teaching less subjective, more objective, less experience-based, more scientific. Ultimately, a reform that will make teaching systematic and effective. Standardized test scores, dropout figures, and percentages of non-native speakers proficient in English are collected, disaggregated by ethnicity and school grade, and analyzed. Then, with access to data warehouses, staff can obtain electronic packets of student test data to use in instructional decision-making to increase academic performance. Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades to increase their productivity.

An earlier incarnation appeared four decades ago.  Responding to criticism of failing U.S. schools, policymakers established “competency tests” that students had to pass to graduate high school. These tests measured what students learned from the curriculum. Policymakers believed that when results were fed back to principals and teachers, they would realign lessons. Hence, “measurement-driven” instruction.

Of course, teachers had always assessed learning informally before state- and district-designed tests. Teachers accumulated information (oops! data) from pop quizzes, class discussions, observing students in pairs and small groups, and individual conferences. Based on these data, teachers revised lessons. Teachers leaned heavily on their experience with students and the incremental learning they had accumulated from teaching 180 days, year after year.

Subjective and objective at once, such micro-decisions were both practice- and data-driven. Teachers' informal assessments of students gathered information directly and would lead to altered lessons. Analysis of annual test results that showed patterns in student errors helped teachers figure out better sequencing of content and different ways to teach particular topics.

In the 1990s, and especially after No Child Left Behind became law in 2002, the electronic gathering of data, the disaggregating of information by groups and individuals, and the application of lessons learned from analyzing tests and classroom practices became top priorities. Why? Because public reporting of low test scores and inadequate school performance brought stigma and high-stakes consequences (e.g., state-imposed penalties) that could lead to a school's closure, negative teacher evaluations, and students dropping out.

Now, principals and teachers are awash in data.


How do teachers use the massive amount of data available to them on student performance? Researcher Viki Young studied how four elementary school grade-level teams used data to improve lessons. She found that supportive principals and superintendents and habits of collaboration increased the use of data to alter lessons in two of the cases but not in the other two. She did not link the work of these grade-level teams to student achievement. In another study of 36 instances of data use in two districts, Julie Marsh and her colleagues found 15 in which teachers used annual tests in basic ways, for example, to target weaknesses in professional development or to schedule double periods of language arts for English language learners. The researchers pointed out how the timeliness of data, its perceived worth by teachers, and district support limited or expanded the quality of analysis. They admitted, however, that they could not connect student achievement to the 36 instances of basic to complex data-driven decisions in these two districts.

Yet policymakers assume that micro- or macro-decisions driven by data will improve student achievement, just as major corporations accrue productivity increases and profits from using data to make decisions. Wait, it gets worse.

In 2009, the federal government published a report (IES Expert Panel) that examined 490 studies in which school staffs used data to make instructional decisions. Of these studies, the expert panel found 64 that used experimental or quasi-experimental designs, and only six–yes, six–met the Institute of Education Sciences standard for making causal claims about data-driven decisions improving student achievement. When reviewing these six studies, however, the panel found “low evidence” (rather than “moderate” or “strong” evidence) to support data-driven instruction. In short, the assumption that data-driven instructional decisions improve student test scores is, well, still an assumption, not a fact.

Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But we give meaning to these numbers. Data-driven instruction may be a worthwhile reform but as an evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.


8 responses to “Updating Data-Driven Instruction and the Practice of Teaching”

  1. Benny Stein

    Hello Larry,
    As a teacher with over 25 years of experience, I never needed data to know whether my students understood and absorbed the material. I could always tell, by listening to them or reading their tests, whether they had succeeded in deciphering the content of the lessons. As you have stated so well in the past, the crux of the situation in the classroom often rests on the added value of the personality of the adult/teacher in motivating the real learning process. I tend to feel that it is the mediocre teachers and schools who see the need for data to define their success. All data is also open to different interpretations depending on who gathers it and for what purpose. Maybe these schools could put more thought into how to get their students motivated to learn instead of measuring their success or lack of success in teaching. Thank you for your article, Benny

  2. I agree with Benny’s comment. Good experienced teachers do not need test results to know if they are doing their job. The trouble is that there is a real shortage of good experienced teachers. There are a lot of mediocre teachers and schools out there, which sort of justifies the use of data to define success. But for some reason it seems all the weight for lack of success is put on the teachers, not on those who actually define that success: the students and, through them, the parents. How about, instead of denying teachers pay due to low test scores, we fine parents for low-achieving students? Wouldn’t that be a controversial mess! Almost as bad as using test data to compare schools of different populations.

  3. Reblogged this on From experience to meaning… and commented:
    Larry Cuban updated one of his most popular blog posts with new studies. The conclusion remains and is quite sobering.
