Research Counts for Little When It Comes to Adopting “Personalized Learning”

The K-12 sector is investing heavily in technology as a means of providing students with a more customized educational experience. So far, though, the research evidence behind “personalized learning” remains thin.

Ben Herold, Education Week, October 18, 2016

The pushers of computer-based instruction want districts to buy products and then see if the product works. Students and teachers are being used for marketing research, unreimbursed research. Districts are spending money based on hype and tests of the educational efficacy of an extremely narrow range of products as if this is a reasonable way to proceed in this era of extreme cuts in budgets.

Laura Chapman, comment on above guest post, May 21, 2017

Both Ben Herold and Laura Chapman are correct in their statements about the thinness of research on “personalized learning” and that districts spend “money based on hype and tests of the educational efficacy of an extremely narrow range of products….”

In short, independent studies of “personalized learning,” however defined, are rare birds; of even greater importance, they are subordinate to decisions on buying and deploying software and programs promising to tailor learning to each and every student from kindergarten through high school. To provide a fig leaf of cover for spending on new technologies, policymakers often use vendor-endorsed studies and quick-and-dirty product evaluations. These are the stand-ins for “what the research says” when it comes to purchasing new products advertising a platform for “personalized learning.”

Why is research nearly irrelevant to such decisions? Because other major criteria come into play that push aside educational research on technology, whether independent or vendor-sponsored. Policymakers lean far more heavily upon criteria of effectiveness, popularity, and longevity in spending scarce dollars on new technologies championing “personalized learning.”

Criteria policymakers use 

The dominant standard used by most policymakers, media editors, and administrators to judge success is effectiveness: What is the evidence that the policy of using new technologies for classroom instruction has produced desired outcomes? Have you done what you said you were going to do and can you prove it? In a society where “bottom lines,” Dow Jones averages, Super Bowl victories, and vote-counts matter, quantifiable results determine effectiveness.

Since the Elementary and Secondary Education Act (1965), federal and state policymakers have relied on the effectiveness standard to examine what students have learned, using proxy measures such as test scores, high school graduation rates, college attendance, and other indicators. For example, in the late-1970s policymakers concluded that public schools had declined because Scholastic Aptitude Test (SAT) scores had plunged. Even though test-makers and researchers repeatedly stated that such claims were false, falling SAT scores fueled public support for states raising academic requirements in the 1980s and adding standardized tests to determine success. With the No Child Left Behind Act (2001-2016), test scores brought rewards and penalties. [i]

Yet test results in some instances proved unhelpful in measuring a reform’s success. For example, studies on computer use in classroom instruction show no substantial gains in students’ test scores. Yet purchases of tablets and laptops loaded with software programs have leaped forward in the past decade.

Or consider the mid-1960s evaluations of Title I of the Elementary and Secondary Education Act (ESEA). They revealed little improvement in low-income children’s academic performance, thereby jeopardizing Congressional renewal of the program. Such evidence gave critics hostile to federal initiatives reasons to brand President Lyndon Johnson’s War on Poverty programs as failures. [ii]

Nonetheless, the program’s political attractiveness to constituents and legislators overcame weak test scores. Each successive U.S. president and Congress, Republican or Democrat, has used that popularity as a basis for allocating funds to needy students in schools across the nation, including No Child Left Behind (2001) and its successor, the Every Student Succeeds Act (2016). Thus, a reform’s political popularity often leads to its longevity (e.g., kindergarten, new technologies in classrooms).

Popularity, then, is a second standard that public officials use in evaluating success. The spread of an innovation and its hold on voters’ imagination and wallets has meant that attractiveness to parents, communities, and legislators easily translates into political support for reform. Without the political support of parents and teachers, few technological innovations such as “personalized learning” could fly long distances.

The rapid diffusion of kindergarten and preschool, special education, bilingual education, testing for accountability, charter schools, and electronic technologies in schools are instances of innovations that captured the attention of practitioners, parents, communities, and taxpayers. Few educators or public officials questioned large and sustained outlays of public funds for these popular reforms because they were perceived as resounding successes regardless of the research. And they have lasted for decades. Popularity-induced longevity becomes a proxy for effectiveness. [iii]

A third standard used to judge success is assessing how well innovations mirror what designers of reforms intended. This fidelity standard assesses the fit between the initial design, the formal policy, the subsequent program, and its implementation.

Champions of the fidelity standard ask: How can anyone determine effectiveness if the reform departs from the design? If federal, state, or district policymakers, for example, adopt and fund a new reading program because it has proved to be effective elsewhere, teachers and principals must follow the blueprint as they put it into practice or else the desired outcomes will go unfulfilled (e.g., Success for All). When practitioners add, adapt, or even omit features of the original design, then those in favor of fidelity say that the policy and program cannot be determined effective because of these changes. Policy adaptability is the enemy of fidelity. [iv]

Seldom are these criteria debated publicly, much less questioned. Unexamined acceptance of effectiveness, fidelity, and popularity avoids asking whose standards will be used, how they are applied, and what alternative standards might be used to judge reform success and failure.

Although policymakers, researchers, and practitioners have vied for attention in judging the success of school reforms such as using new technologies in classroom instruction, policy elites, including civic and business leaders and their accompanying foundation and corporate donors, have dominated the game of judging reform success.

Sometimes called a “growth coalition,” these civic, business, and philanthropic leaders see districts and schools as goal-driven organizations with top officials exerting top-down authority through structures. They juggle highly prized values of equity, efficiency, excellence, and getting reelected or appointed. They are especially sensitive to public expectations for school accountability and test scores; they also reflect societal optimism that technologies can solve individual and community problems. Hence, these policymaking elites favor standards of effectiveness, fidelity, and popularity—even when they conflict with one another. Because the world they inhabit is one of running organizations, their authority and access to the media give them the leverage to spread their views about what constitutes “success.” [v]

The world that policy elites inhabit, however, is one driven by values and incentives that differ from the worlds that researchers and practitioners inhabit. Policymakers respond to signals and events that anticipate reelection and media coverage. They consider the standards of effectiveness, fidelity, and popularity rock-hard fixtures of their policy world. [vi]

Most practitioners, however, look to different standards. Although many teachers and principals have expressed initial support for high-performing public schools serving the poor and children of color, most remain strongly skeptical of test scores as an accurate measure of either their effects on children or the importance of their work.

Such practitioners are just as interested in student outcomes as are policymakers, but the outcomes differ. They ask: What skills, content, and attitudes have students learned beyond what is tested? To what extent is the life lived in our classrooms and schools healthy, democratic, and caring? Can reform-driven programs, curricula, technologies be bent to our purposes? Such questions, however, are seldom heard. Broader student outcomes and being able to adapt policies to fit the geography of their classroom matter to practitioners.

Another set of standards comes from policy and practice-oriented researchers. Such researchers judge success by the quality of the theory, research design, methodologies, and usefulness of their findings to policy and student outcomes. These researchers’ standards have been selectively used by both policy elites and practitioners in making judgments about high- and low-performing schools. [vii]

So multiple standards for judging school “success” are available. Practitioner- and researcher-derived standards have occasionally surfaced and received erratic attention from policy elites. But it is the strong alliance of policymakers, civic and business elites, and friends in the corporate, foundation, and media worlds that relies on standards of effectiveness, fidelity, and popularity. This coalition and their standards continue to dominate public debate, school reform agendas, and determinations of “success” and “failure.”

And so for “personalized learning,” the effectiveness criterion, lacking solid evidence of student success, gives way to the political-popularity criterion that currently dominates policy debates over districts buying tablets and laptops to get teachers to put the new technological fad into classroom practice.

____________________________________________________

[i] Patrick McGuinn, No Child Left Behind and the Transformation of Federal Education Policy, 1965-2005 (Lawrence, KS: University Press of Kansas, 2006).

[ii] Harvey Kantor, “Education, Reform, and the State: ESEA and Federal Education Policy in the 1960s,” American Journal of Education, 1991, 100(1), pp. 47-83; Lorraine McDonnell, “No Child Left Behind and the Federal Role in Education: Evolution or Revolution?” Peabody Journal of Education, 2005, 80(2), pp. 19-38.

[iii] Michael Kirst and Gail Meister, “Turbulence in American Secondary Schools: What Reforms Last,” Curriculum Inquiry, 1985, 15(2), pp. 169-186; Larry Cuban, “Reforming Again, Again, and Again,” Educational Researcher, 1991, 19(1), pp. 3-13.

[iv] Janet Quinn et al., Scaling Up the Success for All Model of School Reform, final report (Santa Monica, CA: RAND Corporation, 2015).

[v] Sarah Reckhow, Follow the Money: How Foundation Dollars Change Public School Politics (New York: Oxford University Press, 2013); Frederick Hess and Jeff Henig (eds.), The New Education Philanthropy: Politics, Policy, and Reform (Cambridge, MA: Harvard Education Press, 2015).

[vi] Linda Darling-Hammond, “Instructional Policy into Practice: The Power of the Bottom over the Top,” Educational Evaluation and Policy Analysis, 1990, 12(3), pp. 339-347; Charles Payne, So Much Reform, So Little Change (Cambridge, MA: Harvard Education Press, 2008); Joyce Epstein, “Perspectives and Previews on Research and Policy for School, Family, and Community Partnerships,” in (New York: Routledge, 1996), pp. 209-246.

[vii] Anita Zerigon-Hakes, “Translating Research Findings into Large-Scale Public Programs and Policy,” The Future of Children: Long-Term Outcomes of Early Childhood Programs, 1995, 5(3), pp. 175-191; Richard Elmore and Milbrey McLaughlin, Steady Work (Santa Monica, CA: RAND Corporation, 1988).


11 responses to “Research Counts for Little When It Comes to Adopting ‘Personalized Learning’”

  1. Reblogged this on From experience to meaning… and commented:
    Good oversight by Larry Cuban

  2. Excellent overview of what drives decisions in education, enlightening – thanks.

  3. David F

    Hi Larry—thanks for this. I’d add that practitioners are also interested in whether the reform/tech leads to greater retention of knowledge by students. This gets into the big pedagogical battle between progressives and “traditionalists”, but I would also argue that the tech people tie themselves to progressive education because the goals of what is seen as “successful” shift from “did student X learn content Y” to a focus on generic skills (often difficult to measure) or other “softer” aspects such as resiliency.

    I’d also like to call your attention to one of my favorite articles on this issue: http://www.orbit-rri.org/technology-oriented-education-along-with-the-uncritical-mass-vs-ethics/

    • larrycuban

      Thanks, David, for the comment and link to a favored article. It got me reading anew about the Luddites and other issues.

  4. JayF

    Thanks Larry for this timely reflection. I wonder if you would also contrast policymakers’ interests in effectiveness, popularity, and fidelity with your previous description of practitioners’ and parents’ concerns for how innovations align with their conceptions of “real school” (i.e. the grammar of school)?

    I also see veteran teachers being wary of an innovation’s unintended consequences, many of which are easily predictable by classroom veterans (e.g., previous incarnations of mastery learning where students game the system to minimize effort).

    • larrycuban

      A nice point, Jay, about those veteran practitioners sensitive to the unintended outcomes that so often occur when instructional innovations are pushed into classrooms.

  5. Pingback: Five EdTech Story Ideas for Education Reporters – EdTech Strategies
