Overconfident Experts as Poor Predictors

Experts are poor predictors of the future.

* “In one study, college counselors were given information about a group of high-school students and asked to predict their freshman grades in college. The counselors had access to test scores, grades, the results of personality and vocational tests, and personal statements from the students, whom they were also permitted to interview. Predictions that were produced by a formula using just test scores and grades were more accurate.”

* In another study, “data from a test used to diagnose brain damage were given to a group of clinical psychologists and their secretaries. The psychologists’ diagnoses were no better than the secretaries’.”
* In the 1990s, a psychologist interviewed 284 experts who “made their living ‘commenting or offering advice on political and economic trends.’” He asked these experts to predict whether particular events would occur soon, both in parts of the globe they knew well and in areas with which they were familiar but less knowledgeable. Would Mikhail Gorbachev be pushed out as leader of the Soviet Union? Would the U.S. go to war in the Persian Gulf? Which nation in the world would emerge as a big market? He collected over 80,000 predictions.
“The results were devastating,” Daniel Kahneman writes. “People who spend their time and earn their living studying a particular topic produce poorer predictions than dart-throwing monkeys…” (p. 219). Moreover, when confronted with their errors, the experts would admit their mistakes but offer a list of excuses.
I could give more examples of economists who predicted a healthy economy in 2007 rather than one in which a housing market bubble would burst and the Recession of 2008 would plunge the U.S. into high unemployment and low economic growth. But I won’t.
OK, overconfident experts are poor predictors of the future. So what?
It matters when educational policymakers use expert opinion to make decisions that have big effects on the lives of students, teachers, parents, and the direction of schooling.
Consider value-added measures. On this issue, the experts differ. See here and here. Yet federal policymakers, confident that value-added measures are the heart and soul of accountability, have chosen one set of experts over another: those certain that student test scores must be included in any evaluation of teacher effectiveness. In granting states waivers from portions of NCLB, for example, U.S. Secretary of Education Arne Duncan tied that relief to states increasing the number of charters, adopting Common Core Standards, and–the punch line–ensuring that student test scores would be part of teacher evaluations.
In ignoring the test and measurement experts who know intimately the flaws of value-added measures and the untoward consequences of judging teachers on each year of student test scores in math and reading (and the horde of new tests that would have to be developed), the Obama administration has embraced a political recipe for disaster far worse than that of “dart-throwing monkeys.”
Note the adjective “political” in the previous sentence. I do not mean it as a negative, since politics is inherent to any educational policymaking and action. I used the word to suggest that political rationality–retaining the support of educational entrepreneurs, corporate leaders concerned about failing U.S. schools, and those who back charters–trumped plain-vanilla rationality: the clear thinking that could avoid the ill effects nearly certain to develop when these metrics are used to fire teachers and determine salaries.
Those predictable ill effects, that is, outcomes that are already occurring or have occurred, are no longer on the horizon; they are in plain sight. Such effects include even more teachers avoiding low-income and minority schools than do now; parents pressing principals to get rid of teachers rated “ineffective” or merely “effective”; constant tweaking of the teacher evaluation system to give more weight to some factors over others; and lawsuits from teachers fired on the basis of one or two years of student test scores, contesting the algorithm used by district administrators. These outcomes can be anticipated.
That is a post-mortem of a policy choice. I do wonder whether a pre-mortem might have helped policymakers avoid flawed value-added measures. Perhaps not, because the political judgment of choosing one set of experts over another to ensure the support of key constituencies carries far more weight, regardless of the predictable ill effects of defective policies. Moreover, decision-makers who glow with optimism are often allergic to others pointing out problems and identifying defects in a policy not yet implemented. Sadly, in the political world of educational policymakers and experts, there is no vaccine for overconfidence. It comes with the territory. So kiss pre-mortems goodbye.

8 responses to “Overconfident Experts as Poor Predictors”

  1. The pre-mortem is a more radical version of the risk analysis good businesses routinely carry out, but it undoubtedly has value in this context. One of the predictions I would confidently make of the situation you’ve outlined, Larry, is simple but sad…a far greater incidence of teachers cheating.

  2. Louise kowitch

    Larry, Thanks once again for a very timely perspective on the latest educational trend du jour. VAA is being considered here in CT as part of a flurry of ‘reform’ legislation promoted by the Governor. Your historical context is invaluable and I frequently forward your blogs to colleagues. Mac sends his best (he’s serving as an interim principal in our district’s middle school and reads your blogs with me). Keep up the good fight!
    Warm regards, Louise

    • larrycuban

      Louise,
      Thanks for the comment. I did not know that Connecticut was considering value-added measures. With New York City publishing teachers’ level of effectiveness based on test scores, the mindless bandwagon rolls on. Best to Mac

  3. OK. Got it. And I agree “Overconfident Experts” can be very poor in their prediction accuracy. And I agree also that we live in a culture where policy makers over-rely on “experts”. This is a problem. What is the solution? Rely on no one? Send all decisions for referendum voting?

    Decisions need to be informed by background data. There exists too much data for decision-makers to regularly monitor and filter. So we use experts to digest and filter the information and announce conclusions and predict outcomes. Many of the experts make mistakes, and we make mistakes relying on their judgment. Therefore: ??

    That’s the part I can’t see a way out of. I’d be interested in your suggestion.

    • larrycuban

      Thad,
      There is no solution to experts making predictions. That will continue forever. All I can suggest is asking questions of any prediction made by an expert (e.g., doctors, lawyers, engineers, CEOs). Those questions get at the basic assumptions the expert makes in crafting his or her prediction. What are those assumptions? What is the evidence for them? Is the evidence linked strongly or weakly to the prediction? What is the probability of the prediction occurring? What might be the unlikely outcomes of the prediction? You get the picture, Thad. In the post, I mentioned a pre-mortem (rather than a post-mortem) of any prediction, particularly ones that have large consequences for children, patients, citizens, etc.

      • Larry,

        Believe me: I’m with you on this. I so much wish that people would ask questions of predictors. They do, and then they get a new class of analysts to summarize the predictions and create a meta-prediction based on the summary!

      • larrycuban

        Sounds to me like more questions have to be asked.
