Big Data, Algorithms, and Professional Judgment in Reforming Schools (Part 1)

The crusade among reformers for data-driven decision-making in classrooms, schools, and districts didn’t just begin in the past decade. Its roots go back to Frederick Winslow Taylor’s “scientific management” movement a century ago. In the decade before World War I and through the 1930s, borrowing from the business sector where Taylorism reigned, school boards and superintendents adopted wholesale ways of determining educational efficiency, producing a Niagara Falls of data that policymakers used to drive the practice of daily schooling.

Before there were IBM punchcards, before there were the earliest computers, there were city-wide surveys, school scorecards, and statistical tables recording the efficiency and effectiveness of principals, teachers, and students. And, yes, there were achievement test scores as well.

In Education and the Cult of Efficiency (1962), Raymond Callahan documents Newton (MA) superintendent Frank Spaulding telling fellow superintendents at the annual conference of the National Education Association in 1913 how he “scientifically managed” his district (Review of Callahan book). The crucial task, Spaulding told his peers, was for district officials to measure school “products or results” and thereby compare “the efficiency of schools in these respects.” What did he mean by products?

I refer to such results as the percentage of children of each year of age [enrolled] in school; the average number of days attendance secured annually from each child; the average length of time required for each child to do a given definite unit of work…(p. 69).

Spaulding and other superintendents measured in dollars and cents whether the teaching of Latin was more efficient than the teaching of English or history. They recorded how much it cost to teach vocational subjects vs. academic subjects.

What Spaulding described in Newton for increased efficiency (and effectiveness) spread swiftly among school boards, superintendents, and administrators. Academic experts hired by districts produced huge amounts of data in the 1920s and 1930s describing and analyzing every nook and cranny of buildings, how much time principals spent with students and parents, and what teachers did in daily lessons.

That crusade for meaningful data to inform policy decisions about district and school efficiency and effectiveness continued in subsequent decades. The resurgence of a “cult of efficiency,” or the application of scientific management to schooling, appears in the current romance with Big Data and the onslaught of models that use algorithms to grade schools, evaluate individual teacher performance, and customize online lessons for students.

As efficiency-driven management began in the business sector a century ago, so too have contemporary business practices in “analytics” migrated to schooling. Harnessing computer capacity to process kilo-, mega-, giga-, tera-, and petabytes of data has filled policymakers determined to reform U.S. schools with confidence. Big Data, the use of complex algorithms, and data-driven decision-making in districts, schools, and classrooms have entranced school reformers. The use of these “analytics” and model-driven algorithms for grading schools, evaluating teachers, and finding the right lesson for the individual student has, sad to say, pushed teachers’ professional judgment off the cliff.


The point I want to make in this and subsequent posts on Big Data and models chock-full of algorithms is that using data to inform decisions about schooling is (and has been) essential to policymakers and practitioners. For decades, teachers, principals, and policymakers have used data, gathered systematically or on-the-run, to make decisions about programs, buildings, teaching, and learning. The data, however, had to fit existing models, conceptual frameworks (or theory, if you like) to determine whether the numbers, the stories, the facts explained what was going on. If they didn’t fit, some smart people developed new theories and new models to make sense of those data.

In the past few years, tons of data about students, teachers, and results have come to surround decision-makers. Some zealots for Big Data believe that all of these quantifiable data mean the end of theory and models. Listen to Chris Anderson, writing in Wired in 2008:

Out with every theory of human behavior, from linguistics to sociology…. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

Just as facts from the past do not speak for themselves and historians have to interpret them, neither do numbers speak for themselves.

The use of those data to inform and make decisions requires policymakers and practitioners to have models in their heads that capture the nature of schooling, teaching, and learning. From these models and Big Data, algorithms (mathematical rules for making decisions) spill out. Schooling algorithms derived from these models often aim to eliminate wasteful procedures and reduce costs (recall the “cult of efficiency”) without compromising quality. Think of computer-based algorithms to mark student essays. Or value-added measures to determine which teachers stay and which are fired. Or Florida grading each and every school in the state.
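To make the idea concrete, here is a minimal sketch, in Python, of the kind of rule-based grading model that a state system like Florida’s implies. The indicator names, weights, and cutoffs below are hypothetical illustrations for this post, not the state’s actual formula.

```python
# A toy school-grading algorithm: a fixed model (indicators, weights,
# cutoffs) turns raw data into a letter grade. All numbers here are
# hypothetical, not any state's actual formula.

# Hypothetical weights for each indicator (they sum to 1.0).
WEIGHTS = {
    "reading_proficiency": 0.30,  # share of students passing the reading test
    "math_proficiency": 0.30,     # share of students passing the math test
    "learning_gains": 0.25,       # share showing year-over-year growth
    "attendance_rate": 0.15,      # average daily attendance
}

# Hypothetical cutoffs mapping a 0-100 composite score to a letter grade.
CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]

def grade_school(indicators: dict[str, float]) -> str:
    """Combine indicator values (each between 0.0 and 1.0) into a grade."""
    composite = 100 * sum(
        WEIGHTS[name] * indicators[name] for name in WEIGHTS
    )
    for cutoff, letter in CUTOFFS:
        if composite >= cutoff:
            return letter
    return "F"

# Example: a school strong on attendance but weaker on test scores.
print(grade_school({
    "reading_proficiency": 0.55,
    "math_proficiency": 0.60,
    "learning_gains": 0.50,
    "attendance_rate": 0.93,
}))  # -> "D" (composite score of 60.95)
```

The sketch makes the post’s argument in code: the letter grade falls out of the model, not the data alone. Nudge a weight or move a cutoff and the very same numbers about the very same school yield a different grade.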

The next post takes up making school policy by algorithm.



15 responses to “Big Data, Algorithms, and Professional Judgment in Reforming Schools (Part 1)”

  1. This, then, depends on the quality, validity, and suitability of the data. For classroom use, the single most useful data sets come from achievement information. While we have made good strides forward in the crafting and implementation of achievement assessment instruments (and I really do not wish to wade into THAT debate), I fear that there is yet another link in the ‘chain’ that sometimes gets overlooked. The assessment instruments have to assume that the outcomes (or objectives) are, themselves, valid and well-articulated. Simply put: there’s no value in doing excellent assessment or gathering great data if the curriculum is ‘bad.’ In the US, of late, there has been considerable work at the national level on curriculum development, and while I am not in a position to judge the quality of those efforts, I can say that in other countries, including mine (Canada), it has been a very long time since we buckled down, nationally, in an effort to construct an excellent, coherent set of curriculum outcomes.

    • larrycuban

      Thanks, Maurice, for the comment. You say: “For classroom use the single most useful data sets come from achievement information.” Right now, that is probably true. What I am most interested in is the teacher’s judgment about academic and non-academic performance of students. Such data appear in parent-teacher conferences, retention decisions when teachers meet with the principal, and the portfolio of observations about individual students that teachers compile over the course of a school year. Such data seldom, if ever, show up in judgments about a class or individual students. They are missing-in-action, particularly at a moment when teacher judgment hardly gets high points from reformers.

  2. Pingback: Big Data, Algorithms, and Professional Judgment in Reforming Schools (Part 1) | Big Data 21st century | Scoop.it

  3. Ah, it’s been a long time since I’ve thought about Taylorism. I’ve now changed my “data-driven decision-making” vocabulary to “data-informed decision-making.”

    My biggest question is this: Is the data we collect associated with (or even correlated with) increased student learning? Attendance certainly affects learning. How many other things in education can we measure with a yes/no tick box? I’m not opposed to standardised tests as one measure, but we need other valid and reliable ways of measuring student growth within the context of projects and authentic assessments.

    I remember a former principal and school counselor looking at the daily attendance sheet, then walking into the neighbourhood housing developments, knocking on absent students’ doors, helping them pick out clothes, and helping them find breakfast before walking them to school. The process improved our state “report card” score. It got students to school, which was measured. But the process took time that could have been used coaching teachers to improve their practice. We can’t be sure about the test score impact of getting those students to school every day. What got measured got the school’s focus.

    • larrycuban

      Hi Janet,
      What you describe is a common phenomenon: what gets measured receives attention, particularly if consequences are attached. The example you give has many echoes elsewhere in districts and schools. Thanks.

  4. Bob Calder

    When I look at the science questions my students have to answer, I am reminded of the talks and symposia on test design and their examples of how to screw up. There is little possibility of the kids being measured in a meaningful way. Calling the test bank poor is an understatement.

  5. Pingback: Why Progressives Should Care About The Backlash On Standardized Testing | Change the Stakes

  6. Pingback: This Week’s “Round-Up” Of Useful Posts On Education Policy Issues | Larry Ferlazzo’s Websites of the Day…

  7. John FitzGibbon

    I think what is often ignored with data, testing, etc. is the fact that most of our data-based decisions assume that success for each student is the same. I’m working in the province of Ontario right now, and the assumption seems to be that the definition of success for all students is high school graduation (it appears that it is also narrowed to graduation in four years).
    The problem that I have presently is that high school graduation is not always a measure of success. I work with First Nations (Native Americans, to those south of the border) who see success in a different way. Yes, graduation is important, but so is learning traditional culture and native language. The question is just starting to be asked: can the data-based decisions that are being made in provincial schools (which many of the First Nations students attend) and in First Nations schools (which use school success planning methods developed for provincial schools) actually be valid for this definition of student success?
    This is very dear to me, as we’ve just started a research project to look at data-driven decisions for First Nations students and schools. It raises a very important point, though: if we are using big data to make decisions, are we taking into account the groups and individuals who may have a different view of what success in school means? Is our goal high school graduates, college or university acceptance, an internship, becoming a lifelong learner? I don’t know that big data can address these types of questions for the individual or small group within a district or even a school.

    • larrycuban

      Those are important questions that you ask, John. I hope that they enter the discussion on the research project you describe. Thanks for commenting.

  8. Pingback: The Best Resources Showing Why We Need To Be “Data-Informed” & Not “Data-Driven” | Larry Ferlazzo’s Websites of the Day…

  9. When I first started teaching, I was tempted to use what I knew from my previous career in business. I wanted data and “metrics.” But I soon learned that too much of what we do can’t be measured, at least not with current methods.

    Schools (and myself) are still on the “crusade for meaningful data.”

    It was interesting to read about the history of the school district you mention above. It’s where I work!

    • larrycuban

      Thanks for the comment, Brian. I guess you work in Newton (MA). From everything I have heard, it is a strong, well-funded district that prizes its teachers.
