Data-Driven Instruction and the Practice of Teaching

I like numbers. Numbers are facts: a blood pressure reading is 145/90. Numbers are objective, free of emotion. The bike odometer tells me that I traveled 17 miles. Objective and factual as numbers may be, we still inject meaning into them. The blood pressure reading, for example, crosses the threshold of high blood pressure and needs attention. And that 17-mile bike ride meant a chocolate-dipped vanilla cone at a Dairy Queen.

Which brings me to a school reform effort centered on numbers. Much has already been written on the U.S. obsession with standardized test scores. Ditto for a recent passion for value-added measures. Turn now to those policymakers who gather, digest, and use a vast array of numbers in order to reshape teaching practices.

Yes, I am talking about data-driven instruction, a way of making teaching less subjective, more objective, less intuitive and experience-based, and more scientific. Ultimately, a reform that will make teaching more systematic and effective. Standardized test scores, dropout figures, and percentages of non-native speakers proficient in English are collected, disaggregated by ethnicity and school grade, and analyzed. Then, with access to data warehouses, staff can obtain electronic packets of student performance data that can be used to make instructional decisions aimed at increasing academic performance. Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades in making decisions that increased their productivity.
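To make the mechanics concrete, here is a minimal sketch (in Python, using pandas) of what “disaggregating” performance data might look like; the table, column names, and scores below are hypothetical stand-ins for illustration only, not any district’s actual warehouse or report format.

import pandas as pd

# Hypothetical student-level records; the columns and scores are invented for illustration.
records = pd.DataFrame({
    "grade":    [3, 3, 4, 4, 4, 3],
    "subgroup": ["A", "B", "A", "B", "A", "B"],
    "score":    [412, 388, 455, 401, 430, 395],
})

# "Disaggregating" here just means breaking the overall average apart by grade and subgroup,
# the kind of breakdown staff might pull from a data warehouse.
print("Overall mean score:", round(records["score"].mean(), 1))
print(records.groupby(["grade", "subgroup"])["score"].agg(["mean", "count"]))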

An earlier incarnation of data-driven instruction appeared a half-century ago. Responding to criticism of failing U.S. schools, policymakers established “competency tests” that students had to pass to graduate from high school. These tests measured what students learned from the curriculum. Policymakers believed that when results were fed back to principals and teachers, they would realign lessons. Hence, “measurement-driven” instruction.

Of course, teachers had always assessed learning informally, long before state- and district-designed tests. Teachers accumulated information (oops! data) from pop quizzes, class discussions, observing students in pairs and small groups, and individual conferences. Based on these data, teachers revised lessons. Teachers leaned heavily on their experiences with students and the incremental learning they had accumulated from teaching 180 days a year, year after year.

Both subjective and objective, such micro-decisions were at once practice- and data-driven. Teachers’ informal assessments of students would lead to altered lessons. Analysis of annual test results that showed patterns in student errors helped teachers figure out better sequencing of content and different ways to teach particular topics.

In the 1990s, and especially after No Child Left Behind became law in 2001, the electronic gathering of data, disaggregating information by groups and individuals, and then applying lessons learned from the analysis to teaching became a top priority. Why? Because public reporting of low test scores and inadequate school performance carried stigma and high-stakes consequences (e.g., state-imposed penalties) that could lead to a school’s closure.

Now, principals and teachers are awash in data.

How do teachers use the massive amounts of data available to them on student performance? Studies of teacher and administrator usage reveal wide variation and different strategies. In one study of 36 instances of data use in two districts, researchers found 15 in which teachers used annual tests in basic ways, for example, to target weaknesses in professional development or to schedule double periods of language arts for English language learners. There were fewer instances of collective, sustained, and deeper inquiry by groups of teachers and administrators using multiple data sources (e.g., test scores, district surveys, and interviews) to, for example, reallocate funds for reading specialists or start an overhaul of district high schools. Researchers pointed out how the timeliness of data, its perceived worth by teachers, and district support limited or expanded the quality of analysis. These researchers admitted, however, that they could not connect student achievement to the 36 instances of basic to complex data-driven decisions in these two districts.

Yet policymakers assume that micro- or macro-decisions driven by data will improve student achievement, just as major corporations accrue productivity increases and profits from using data to make decisions. Wait, it gets worse.

In 2009, the federal government published a report (the IES Expert Panel) that examined 490 studies in which school staffs used data to make instructional decisions. Of these studies, the expert panel found 64 that used experimental or quasi-experimental designs, and only six (yes, six) met the Institute of Education Sciences standard for making causal claims about data-driven decisions improving student achievement. When reviewing these six studies, however, the panel found only “low evidence” (rather than “moderate” or “strong” evidence) to support data-driven instruction. In short, the assumption that data-driven instructional decisions improve student test scores is, well, still an assumption, not a fact.

Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But we give meaning to these numbers. Data-driven instruction may be a worthwhile reform, but as an evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.


Filed under dilemmas of teaching, how teachers teach

28 responses to “Data-Driven Instruction and the Practice of Teaching”

  1. Excellent post. We clearly need good data from formative assessments that inform further instruction. Summative testing should cover all parts of the curriculum and should allow for students to retake components as many times as necessary. It’s the yearly NCLB-driven standardized tests that we should ditch. They are not objective and only cover a fraction of the underlying standards. Kids pass or fail by the luck of the draw. Read “The Myths of Standardized Tests” by Phillip Harris, Bruce Smith, and Joan Harris, where they explain the problem in detail. See my summary at http://bit.ly/hu6wnS. Douglas W. Green, EdD

  2. Pingback: More On The Dangers Of Being “Data-Driven” | Larry Ferlazzo's Websites of the Day...

  3. Bill Younglove

    NOW we are getting SOMEwhere: Valid, reliable student assessment data, formative and summative, to help plan and drive curriculum. That, and nothing else, is what we should be using to improve instruction. The best that I have seen is still derived by competent teachers within their own classrooms on a yearly basis. I realize that such information is of little value to the harried legislator en route to his/her next flight to bolster re-election chances; that crunched numbers are, like seductive liquor, easier and quicker. Also, it might even behoove the think tank gurus to actually step inside a real classroom, stocked with real students (and, please, none of that 16 to 1 TIME Magazine unreal student-teacher ratio stuff), instead of abstract numbers.

  4. Meg

    Great set-up and conclusion, Larry! I think the must-read piece that corresponds to your argument is “Magical Numbers” in Joel Best’s More Damned Lies and Statistics: How Numbers Confuse Public Issues.

  5. Our union in SF just unanimously passed a resolution to oppose the threatened imposition of compulsory Measures of Academic Progress testing to be administered quarterly. What really boggled my mind was the stealthy way in which the proposal for the testing was being brought to the Board of Ed. Most teachers were under the impression that the testing was compulsory, their administrators quite suddenly insisting on its being done. Some elementary and secondary teachers got angry, started to connect dots between schools, and pushed back at district-wide meetings of union reps and central office. In those meetings, and with lots of phone and email detective work between them, I learned that the “administrative regulation” in which the testing was described had not only not yet been presented to the board but, we were told, was not yet in written form. At our last h.s. reps and central office meeting, the asst. superintendent’s asst. director expressed surprise at the “confrontational” tone of the union resolution. He seemed to think that having non-classroom “teachers on special assignment” and some department chairs involved in preparation of the test materials was enough to cover the district’s contractual obligation to include union representation on significant policy committees.

    Larry, I used to be depressed about all this. Now I’m close to despairing. The professionals I work with have been so infantilized and disempowered for so long now, they’re working in such impossible conditions and under such inexperienced managers, that I don’t see how things can get better. Sometimes I think CTA or CFT might have some influence. But our union pushback on MAP, for example, was just lucky. The local’s limited resources and energies are fighting bigger survival issues: pink slips, length of the school year, health benefits, site operating budgets. There’s not much left over for the pressing struggles over curriculum and learning. And most of those exhausted teachers in the classroom are so beaten down and overworked, they don’t have the time to get a perspective on what’s happening.

    Your post is being sent to that asst. supt. and her assistant and to every teacher in SFUSD who I think might have the time to read it.

    • larrycuban

      Patrick,
      Thanks for sending along the union-sponsored resolution and the background, as you reconstructed it, to what recently occurred over Measures of Academic Progress testing. Given the fiscal situation in the state and SFUSD, I understand why it is dispiriting to you and others, amid cuts and serious shifts in policy, that the discovery that MAP tests were not required came by chance. The tests are consequential. Moreover, they also provide much data that might be put to good use if teachers trust the tests, how the data is aggregated and disaggregated, and how it is to be used by both teachers and administrators. Seeking transparency for teachers, parents, and the community in all of these steps is essential.

  6. David Z.

    Hi Larry,

    Thanks for this article. I am a math teacher in SF, and recently we had MAP testing (Measures of Academic Progress) imposed on us, without union consent. The plan was for us to give four MAP tests per school year. There were numerous problems to start with, the first being a lack of calendar days. The additional data-collection process is eating up more of our instruction time.

    Our “school year” is already based on the CST (STAR test), which covers an entire year of state standards content but is given *six weeks* before the end of the school year. This forces us to cram six marking periods of content into five marking periods. Recently, because of CA budget cuts, we have had an additional four furlough days removed from the calendar. With MAP testing, the actual test day and, say, one or more test-prep days meant that we had to cram six marking periods of content into four and a half marking periods. We were already burdened by the CST; the MAP testing would only worsen it.

    There were additional problems with the test itself that would potentially skew the data. Our district (or state?) hired an independent company to come up with the MAP tests. After analyzing each test, I came to some rather startling conclusions. First, the tests were based on only two major textbooks that are in use in the district: Prentice Hall and Discovering Geometry.

    *The scope and sequences of these textbooks drive the test content and the timing of their administration.*

    Therefore, if you don’t use these textbooks, then the benchmark content may well be different from what an individual teacher is teaching. For example, the Honors Geometry program at my school uses an older Jurgenson’s textbook (the finest Geo textbook ever written). The Regular Geometry courses come from a text put together by a talented teacher in our department. Both of our texts match state and national core standards *exactly*.

    For a more detailed example:

    In the current (as in, we’re supposed to give this test now) Prentice Hall textbook test, problem 6 on pg. 3 requires a Geometry student to work a problem involving the circumference of a circle. At my school we won’t get to circles for another month.

    An unsettling consequence of this testing is that teachers will feel pressure to use the textbooks represented by the MAP test, even though they might suck, like the Prentice Hall text. We are trapped into a set curriculum with no deviation or autonomy possible, that is, if we want our students to do well on the MAP tests.

    • larrycuban

      Thanks, David, for the detailed examination of math in the MAP tests. That teachers did not appear to be seriously involved in the decisions, as per the contract (see Patrick’s description of teacher involvement as reported by the assistant to the Assistant Superintendent), could reduce teacher trust in the tests, and were that to occur, it would be a real problem over time. Your observation that particular approved texts may strait-jacket math teaching points, of course, to one of the strategies, that is, aligning tests to texts in order to press teachers to teach certain content. Had teachers been seriously involved over a period of time and decided on that approach, I would not have much problem with it. It does not appear, however, to have been the case.

      • David B. Cohen

        Larry, correct me if I’m wrong, but wouldn’t these kinds of year-to-year shifts in the teaching and learning conditions pose some serious challenges to any effort at a year-to-year data comparison? That was the main thrust of a blog post I wrote last year, an open letter to policymakers, none of whom would ever answer, of course (I did email it to everyone listed).

        An Open Letter to California Public Officials

  7. larrycuban

    David,
    I read your open letter to the Governor and assorted policymakers. Your nine factors that might (and would) affect your evaluation as a high school teacher are based in evidence and in personal experience. The three studies you refer to in your post add similar reservations about “value-added measures.”
    Your comment includes another factor: the instability of student test scores from year to year. That also is mentioned in many studies as a serious weakness in using “value-added measures” to judge any teacher’s performance.

  8. Pingback: John Thompson: The End Is Not Near: School Turn Arounds Continue To Strike Out | moregoodstuff.info

  9. Pingback: Can New Meanings of Knowledge Change Education? « Ready S.E.T. Science

  10. Pingback: ClassLink – soluções de aprendizagem virtual « Enio de Aragon

  11. Pingback: Measuring Student Growth | Educational Aspirations

  12. Larry, this is an insightful post. Data can certainly inform teachers of areas of academic concern, but interventions that address the concern aren’t always available or implemented appropriately. Over time, interventions should be analyzed to ensure that progress is being made. Analyzing and utilizing student achievement data may lead to a more focused effort in improving student learning.

  13. Pingback: StateImpact-How Data Can (And Can’t) Help Schools Get Ready For High-Stakes Tests | Indiana Democrats for Education Reform

  14. Diane Ravitch

    Larry,
    I don’t feel as confident about numbers and data as the reformers do. Enron had great numbers right up to the day it collapsed. So did Madoff. And many smart people lost their life savings trusting the data from these guys.
    Diane Ravitch

    • larrycuban

      We both know, Diane, that unvarnished trust in numbers is a fool’s game. One doesn’t have to be an historian to be skeptical. Doubt, I believe, is the proper stance to take about business numbers, medical numbers, and, of course, numbers coming from the feds, states, and districts. Knowing how the numbers are produced, independent verification, and replication help to ease doubt and increase confidence. But it ain’t easy. Thanks for the comment.

  15. Pingback: Measuring Student Growth « Educational Aspirations

  16. Bug Assassin

    I am a product of a fantastic “outside the box” education at a public, LA magnet school. My teachers did not follow a standard script, and were a highly diverse group (politically, pedagogically, and temperamentally). There were no check boxes or bubbles to fill. Instead we discussed, crafted arguments, and wrote essays. They challenged us, and we challenged them. They taught us to think critically decades before it was in fashion. Now, as an educator myself who is being increasingly forced to standardize my methods into someone’s idea of a “best practices” orthodoxy, I increasingly value the good old days when teachers could teach to their passions and strengths. Data can be useful, but it should inform and not dictate.

    • larrycuban

      Thank you for your comments on the kind of schooling you received (and participated in) at an LAUSD magnet. Your teachers used a different kind of data (observation, judgment, etc.) in those “good old days.”

  17. Pingback: Educational Policy Information

  18. Pingback: On Data, Part One: Responding to Data-Driven Instruction | An Urban Teacher's Education

  19. Pingback: How Data Can (And Can't) Help Schools Get Ready For High-Stakes Tests | StateImpact Indiana

  20. Reblogged this on Boils Down to It and commented:
    Well-thought-out argument. I’m in agreement.

  21. Pingback: Speaking from Experience: A Series | Ready for the New World
