In the past few weeks, as the 10th anniversary of Hurricane Katrina approached, a veritable gale of first-hand observations and research reports about improvements in New Orleans public schools swept through the media. In those reports, however, researchers and pundits have warred with one another over the degree to which the Orleans Parish schools have improved over the past decade.
A revolution has occurred, say some researchers and politicians (see here and here). Others say no. Changes have surely occurred, with nearly all schools becoming charters, but these top-down changes have created a fragmented system, minimized citizen participation, and spent more private and public money than Orleans Parish had ever seen (see here and here). Moreover, student test scores have dramatically improved, say researchers and policymakers (see here and here). No, others reply, results are mixed and some students have even been overlooked (see here).
In the cacophony of research reports and political statements, sorting out the sizzle (a charter-dominant district, more money, a new teaching force) from the steak (are students learning more and doing better than before Katrina?) is very hard to do. Listening to elected politicians talk non-stop about an improved city and improved schooling while citing research studies requires large doses of salt before swallowing. Why? Because squabbling researchers point fingers at one another's designs, methodologies, and interpretations of findings in their reports (see here). Getting consensus in research findings is about as likely as a unicorn appearing. In a democratic society where facts and figures are highly valued in making public policy, educational researchers quarreling with one another confuse policymakers in one sense but also give them the latitude to pick and choose among the research reports that best fit their beliefs and stated policy positions.
None of this back-and-forth is trivial. The stakes are high for students and parents in Orleans Parish, as they are for champions and critics of school reform. Disputes over the successes and failures of post-Katrina public schools, for example, have become proxy battles over the worth of charter schools across the nation. Is New Orleans proof that an urban district can fire all of its teachers, abolish attendance zones, turn nearly all of its schools into charters, and succeed? If so, can what occurred in New Orleans be scaled up to other big cities where poor and minority students reside (see here)? Answers to these policy questions lurk in the background of the hype over the resurrection of the Big Easy's schools.
Yet for those policymakers and practitioners who call (or yearn) for evidence-based policy, the swirl of research reports on the 10th anniversary of Katrina is hardly a godsend. Big questions go unanswered. And this is where the dilemma over evidence-based policies arises. Federal, state, and district policymakers prize using evidence to buttress their policy recommendations. Rational decision-making calls for attending to research and evaluation studies. Yet those very same policymakers also place a high value on getting their choices politically approved by the media, foundation officials, and, ultimately, voters. Popular policies open the doorway to re-election. Thus these conflicting values of rationality and popularity have to be managed, and all of this is much harder to do when research studies reach contrasting conclusions.
Contested educational research, of course, is stale news; conflicting accounts of the same phenomenon have been common for the past century. Debates over the value of intelligence tests in the early 20th century, the New Math at mid-century, small high schools a decade ago, and the worth of laptops as instructional devices (need I continue?) have yielded research findings that some researchers and policymakers have embraced but many have scorned. University-based social, behavioral, and natural scientists historically have looked down upon the applied research that educational academics have produced. Teachers have criticized educational researchers for decades for asking questions that they find irrelevant to their classrooms. Add to that the belabored arguments researchers have had over the design of studies, the methodologies used, and the interpretation of findings. Such criticism of educational research has been so common in the U.S. that many critics have come to agree with historian of education Carl Kaestle when he asked in 1993: "Why is the reputation of educational research so awful?"
Kaestle, along with earlier and subsequent critics of educational research, pointed to studies that have been unhelpful to policymakers, shifting priorities among the multiple goals driving tax-supported public education, the politicization of research, the failure of federally subsidized research and development to produce usable knowledge for policymakers and practitioners, and the fragmented ways of informing the public and practitioners of what exactly research has found. That "awful" reputation of educational research has not been helped by the back-and-forth of post-Katrina studies.
Evidence-based policy grounded in research studies remains a dream. At best we have evidence-informed* policy, which makes room for the conflicting values of rationality and political popularity that come into play, and even that leaves decision-makers facing a tough dilemma.
Part 2 looks at the historical influence that research has had on classroom practice.
*While others may have used the phrase before and since, I ran across it in Andy Hargreaves and Corrie Stone-Johnson, "Evidence Informed Change and the Practice of Teaching" (pp. 89-110), in John Bransford et al., The Role of Research in Educational Improvement (Harvard Education Press, 2009).