Policy by Algorithm (Jeff Henig), Part 2

Jeff Henig is a professor of political science and education at Teachers College, Columbia University. This post appeared July 27, 2011 on Rick Hess’s blog in Education Week.

There is a satisfying solidity to the term “data-based” decision-making. But basing decisions on data is not the same thing as basing them on knowledge. Data are collections of nuggets of information. Compared with “soft” rationales for action–opinion, intuition, conventional wisdom, common practice–they are hard, descriptive, often quantitative.

When rich, high-quality data sets are mined by sophisticated and dynamically adjusted algorithms, the results can be powerful. Google’s search engine is the prime example here. Google scores web pages based on indicators like the number of other websites that link to the page, the popularity and selectivity of those linking sites, how long the target site has existed, and how prominently the search keywords appear on the site. The resulting score determines the order in which sites are listed in response to Google searches–and listing position is critical. According to one source, the top spot typically attracts 20 to 30 percent of the search page’s clicks, with returns diminishing sharply for sites listed further down.
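
That description suggests, at bottom, a weighted sum of indicators followed by a sort. Here is a toy Python sketch of the idea; the weights, field names, and sample pages are invented for illustration, and Google’s actual algorithm is proprietary and vastly more complex.

```python
def score_page(page):
    """Combine a few hypothetical indicators into a single ranking score."""
    return (
        0.4 * page["inbound_links"]         # sites linking to the page
        + 0.3 * page["linker_reputation"]   # popularity/selectivity of linkers
        + 0.2 * page["site_age_years"]      # how long the site has existed
        + 0.1 * page["keyword_prominence"]  # how prominently keywords appear
    )

pages = [
    {"url": "a.example", "inbound_links": 120, "linker_reputation": 0.9,
     "site_age_years": 8, "keyword_prominence": 0.7},
    {"url": "b.example", "inbound_links": 300, "linker_reputation": 0.4,
     "site_age_years": 2, "keyword_prominence": 0.9},
]

# Listing order is simply a descending sort on the score, which is why
# position, and the algorithm behind it, matters so much.
for page in sorted(pages, key=score_page, reverse=True):
    print(page["url"], round(score_page(page), 2))
```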

A February 2011 change in the Google algorithm was estimated to shift about $1 billion in revenue.

Little wonder that policy technocrats are drawn to the algorithm as a way to improve governmental performance. In the education world, well-tuned algorithms promise to tell us which students need what kind of interventions, which schools are good candidates for closure, which teachers should get tenure, how much a teacher should be paid. I have come to think of this as policy by algorithm.

Policy by algorithm relies on statistical formulas that sift through existing indicators to generate a predicted outcome score, then assigns automatic rewards or penalties to individuals or organizations depending on whether they meet the expected targets. In education, this can work by penalizing teachers whose value-added scores leave them in the bottom 10 or 20 percent over a one-, two-, or three-year period.
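
A minimal sketch of that cutoff mechanism appears below. The scores and names are invented, and the rank-based rule is an assumption for illustration; actual value-added systems estimate scores from test-score growth models and vary considerably from state to state.

```python
# Hypothetical multi-year value-added scores; names and values are invented.
value_added = {
    "teacher_a": 0.12, "teacher_b": -0.31, "teacher_c": 0.05,
    "teacher_d": -0.02, "teacher_e": 0.27, "teacher_f": -0.18,
    "teacher_g": 0.09, "teacher_h": 0.01, "teacher_i": -0.07,
    "teacher_j": 0.15,
}

def bottom_share(scores, share=0.10):
    """Flag whoever falls in the lowest `share` of scores (default 10%)."""
    cutoff = max(1, round(len(scores) * share))
    ranked = sorted(scores, key=scores.get)  # ascending: worst first
    return ranked[:cutoff]

print(bottom_share(value_added))        # ['teacher_b']
print(bottom_share(value_added, 0.20))  # ['teacher_b', 'teacher_f']
```

Note that a fixed-share cutoff flags someone every period by construction, whatever the underlying distribution of effectiveness; the penalty is triggered by rank, not by any independent judgment of quality.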

Education is not the only sector where policy by algorithm is currently in vogue.

The Obama administration in May announced a new plan to hold hospitals more accountable for outcomes involving Medicare patients. The formula to be applied in judging their efficiency would look not only at the cost of the services while the patient is hospitalized, but also at the cost of services performed by doctors and other health care providers in the 90 days after the patient leaves the hospital. Under the plan, a hospital that performed, say, a hip replacement would get a lower reimbursement rate if the patient later needed follow-up care for an infection, even if the infection developed weeks after the original operation.
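
A hedged sketch of that episode-based accounting, with invented figures and field names (the actual Medicare formula involves risk adjustment and much more than a simple sum):

```python
from datetime import date, timedelta

FOLLOW_UP_WINDOW = timedelta(days=90)  # the plan's post-discharge window

def episode_cost(inpatient_cost, discharge, later_claims):
    """Sum inpatient costs plus any provider claims within 90 days of discharge."""
    total = inpatient_cost
    for claim_date, amount in later_claims:
        if timedelta(0) <= claim_date - discharge <= FOLLOW_UP_WINDOW:
            total += amount
    return total

# A hip replacement, then an infection treated five weeks after discharge:
print(episode_cost(
    inpatient_cost=24_000,
    discharge=date(2011, 5, 2),
    later_claims=[(date(2011, 6, 8), 3_500)],  # infection follow-up claim
))  # 27500: the follow-up raises the episode cost, lowering net reimbursement
```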

But the high promise of policy by algorithm mutates into cause for concern when data are thin, algorithms theory-bare and untested, and results tied to laws that enshrine automatic rewards and penalties. Current applications of value-added models for assessing teachers, for example, treat standardized tests in reading and math as the outcomes of import primarily because those are the indicators on hand. A signature element of much contemporary policy by algorithm, moreover, is its relative indifference to the specific processes that link interventions to outcomes. There is much we do not know about how, and how much, individual teachers contribute to their students’ long-term development, but legislators convince themselves that this ignorance does not matter as long as the algorithm spits out a standard with a satisfying gleam of technological precision.

Google makes up for what it might lack in theory and process-knowledge by continually tweaking its formula. The company makes about 500 changes a year, partly in response to feedback from organizations complaining that they have been unjustly “demoted,” but largely out of a continued need to stay ahead of others who keep trying to game the system in ways that will benefit their company or clients. State laws are unlikely to be so responsive and agile.

Both data and algorithms should be an important part of the process of making and implementing education policy, but they need to be employed as inputs into reasoned judgments that take other important factors into account. The last thing we need is accountability policies that undermine education as a profession or erode the elements of community and teamwork that mark and make good schools. When law and policy outrun knowledge, the results are likely to be unanticipated, paradoxical, and occasionally perverse.

Filed under school reform policies

10 responses to “Policy by Algorithm (Jeff Henig), Part 2”

  1. The fact that Google operates in a competitive economy and gets lots of feedback from customers is huge. It is forced to fundamentally reinvent itself or be tossed aside. Not so the education policy makers.

    What does the Common Core have in common with a Soviet landfill?

    http://www.ergoscribo.com/2013/01/wasted-effort.html

    • larrycuban

      Mike,
      That is a nice piece on Common Core and a Soviet landfill. I do wonder what would happen if teachers were more trusted than they are now to choose what content and skills children and youth should learn. Thanks for taking the time to comment.

      • What would happen if, instead of a monopoly-based system where no school is permitted to fail unless it has become as toxic as Chernobyl, we borrowed ideas from the parent-funded schools which are springing up by the thousands in developing countries?

        Here in America, we complain that students who enter college must take remedial courses in Math and English. Chat with international students, and you’ll find that their chief complaint is having to re-take calculus because their 3rd-world free-market high school course is not recognized by the American industry. James Tooley’s research, described in The Beautiful Tree and in reports elsewhere, is a wake-up call, a strong suggestion that we need to re-think many of our standard assumptions about the provision of education.

      • larrycuban

        Thanks for the question and comment, Terry. I am unfamiliar with the research you cite but will look at it.

  2. In your example of hospital algorithms, I wish the algorithms could factor in anecdotal data. The anecdotal evidence we gather in schools is hugely powerful in assessing effectiveness.

    The Google algorithm factors in blog comments and keywords. If educational algorithms could do the same, specific teaching points, parent feedback forms, and other pieces of evidence could better inform schoolwide decisions.

    • larrycuban

      Nice point, Janet, about algorithms factoring in anecdotal material. And that would enhance, as you say, schoolwide decisions. Thanks.

  3. Pingback: Why Progressives Should Care About The Backlash On Standardized Testing | Change the Stakes

  4. Pingback: The Best Resources Showing Why We Need To Be “Data-Informed” & Not “Data-Driven” | Larry Ferlazzo’s Websites of the Day…

  5. In my previous comment, I inadvertently typed “industry” when “university” was intended. Cheers!
