For over 30 years, I have examined the adoption and use of computers in schools (Teachers and Machines, 1986; Oversold and Underused, 2001; Inside the Black Box, 2013). I looked at the policy hype, over-promising, and predictions accompanying new technologies in each decade. The question I asked was: what happens in schools and classrooms after the school board and superintendent adopt a policy of buying and deploying new technologies to improve schooling? In books, articles, and my blog, I moved back and forth between policy and practice.
In these decades, champions of new technologies in schools believed deeply that the traditional goals of tax-supported public schools (i.e., building citizens, preparing graduates for a labor market, and making whole human beings) could be achieved through new electronic devices. They believed that hardware and software would, if not transform, surely alter classroom teaching, improve students’ academic performance, and prepare graduates for an entirely different workplace than their parents faced.
In research during these decades, I described and analyzed computers in schools and classrooms across the U.S. I tracked how high-tech advocates and donors were often disappointed: school and classroom practice changed little in the direction they sought, results in student achievement were anemic, and graduates faced uncertainty in getting the right jobs, given the claims that accompanied these new technologies.
I also documented occasional instances where individual teachers thoroughly integrated laptops and tablets into their practice and moved from teacher- to student-centered classrooms. And there were scattered cases of schools and districts adopting technologies wholesale and slowly altering cultures and structures to improve how teachers teach and students learn. While isolated and infrequent, these occasional exemplars of classroom, school, and district integration struck me as important, if puzzling, in their isolation from mainstream practices. In doing all of this research, I became intimately familiar with nearly all that had been written about computers in schools.
Literature on computers in schools
Researchers, policy advocates, and practitioners have created an immense literature on access, use, and effectiveness of computers in schools and districts. It is, however, a literature, particularly on effectiveness, that is stacked heavily at the success and failure ends of a continuum. Success refers to studies, reports, and testimonials to how computers have improved teaching and learning. The clustering of such work forms a peak at one end of the literature continuum.
Failure refers to those works where studies and reports show disappointing results, even ineffectiveness in altering how teachers teach and helping students learn more, faster, and better. Such documents form a peak at the other end of the literature spectrum. I have contributed to this end of the continuum.
Academics call this clustering at either end of the spectrum a “bimodal distribution,” with the center of the continuum housing many fewer studies than either pole. In short, the spectrum has two peaks, not the familiar normal distribution called the bell curve.
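A minimal sketch of what such a two-peaked literature looks like, using entirely invented numbers purely for illustration (the study counts and the 0–10 “effectiveness” scale are assumptions, not data from any real review):

```python
import random

random.seed(42)

# Simulate a "literature" of 1000 study outcomes on a hypothetical
# 0-10 effectiveness scale. Most studies cluster near "failure"
# (around 2) or "success" (around 8), with few in the middle.
outcomes = (
    [random.gauss(2, 0.8) for _ in range(450)]        # failure peak
    + [random.gauss(8, 0.8) for _ in range(450)]      # success peak
    + [random.uniform(3.5, 6.5) for _ in range(100)]  # sparse middle
)

# Count studies in three bands of the continuum.
low = sum(1 for x in outcomes if x < 3.5)
mid = sum(1 for x in outcomes if 3.5 <= x <= 6.5)
high = sum(1 for x in outcomes if x > 6.5)

print(low, mid, high)
```

A histogram of `outcomes` would show the two peaks dwarfing the thin middle band, which is the shape of the literature described above, as opposed to a bell curve with its single central hump.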
Consider success stories. Between the 1990s and early 2000s, researchers, commission reports, and reporters accumulated upbeat stories and studies of teachers and schools that used devices imaginatively and supposedly demonstrated small to moderate gains in test scores, closing of the achievement gap between minority and white students, increased student engagement, and other desired outcomes. These success stories, often clothed as scientific studies (e.g., heavily documented white papers produced by vendors; self-reports from practitioners), beat the drum for more use of new technologies in schools. Interspersed with these reports, especially since the first decade of the 21st century, are occasional independent researcher accounts of student and teacher use documenting new technologies’ effects on teachers and students.
At the other end of the continuum is the failure peak in the distribution of this literature. This peak consists of studies that show disappointing results in students’ academic achievement, little closing of the gap in test scores between whites and minorities, and the lack of substantial change in teaching methods during and after use of new technologies. Included are tales told by upset teachers, irritated parents, and disillusioned school board members who authorized technological expenditures.
Hugging the middle between the twin peaks on this continuum of school technology literature are occasional rigorous studies by individual researchers and meta-analyses of studies done over the past half-century to ascertain the contribution (or lack thereof) of computers to student and teacher outcomes.
Even with these meta-analyses and occasional thorough studies, the overall literature oscillating between success and failure has yet to develop a stable and rich midpoint. I would like my study to occupy the center of this continuum by documenting both exemplars and failures of going from policy-to-practice in using new technologies in classrooms, schools, and districts.
Such a bimodal literature results from the questions researchers, policymakers, and practitioners asked about access, use, and effects of new technologies. Most of the reports and studies initially sought to answer who had access, how devices were used in lessons, and whether they “worked,” that is, raised test scores and influenced academic achievement. The resulting answers created each peak.
So in 2016, I visited nearly 50 teachers, a dozen schools, and three districts that the media, experts, colleagues, and I identified as exemplars of integrating technology into daily lessons, school culture, and district infrastructure.
As a research strategy over the past 30 years, that is, capturing instances of schools that failed to get teachers to use computers regularly and fully, academics would say that I was “sampling on the dependent variable.”
What that means is that I was investigating cases where the aim of the reform was to substantially alter how teaching was done, that is, fully integrating devices in teaching daily lessons, and that aim fell far short of being achieved. The point of this kind of sampling is to extract from multiple cases the common features of these disappointing ventures and inform policymakers and practitioners what they needed to avoid and how they could overcome the common hurdles (e.g., barriers to putting computers into lessons like preparation of teachers, insufficient student access to devices).
There are, however, dangers in synthesizing common features of failures when you take a step back and look at what you are doing. By investigating only cases of “failure,” there is no variation in the sample. The “wisdom” gained from looking at failures may bear little relationship to, for example, the “wisdom” gained from looking at success stories. The common features of failure extracted from exemplars to explain why the initiatives flopped often fall apart after a few years.
See, for example, in the education literature, the research on Effective Schools in the 1980s and 1990s (see here and here). Schools profiled as successes in one year turn out to have sunk into failure a few years later (see here).
Also see a companion literature in business with similar effects in Tom Peters and Robert Waterman, In Search of Excellence: Lessons from America’s Best Run Companies (New York: Harper Collins, 2006) and Jim Collins, Good to Great: Why Some Companies Make the Leap and Others Don’t (New York: Harper Business, 2011).
In other words, without knowing about those cases where teachers did change how they taught when using new technologies, I had no comparisons to make: the barriers I identified in “failures” may have been accurate or just as easily inaccurate. By looking only at instances where technology use in schools failed to transform teaching, I overlooked cases where technology did succeed in altering classroom practice. To “sample on the dependent variable,” then, is a bias built into the research design.
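The selection bias described here can be sketched with a small, entirely hypothetical simulation. The 70% and 50% rates below are invented for illustration: a “common feature” (say, little teacher training) that is equally frequent among successes and failures looks like an explanation when only failures are examined.

```python
import random

random.seed(0)

# Hypothetical population of 1000 school technology initiatives.
# "low_training" occurs in ~70% of ALL initiatives, successes and
# failures alike, so it cannot explain which initiatives fail.
initiatives = [
    {
        "low_training": random.random() < 0.7,
        "failed": random.random() < 0.5,  # failure independent of training
    }
    for _ in range(1000)
]

# Sampling on the dependent variable: examine only the failures.
failures = [i for i in initiatives if i["failed"]]
share_in_failures = sum(i["low_training"] for i in failures) / len(failures)

# The "common feature" appears in roughly 70% of failures and looks
# like a cause -- but the same share holds among the successes.
successes = [i for i in initiatives if not i["failed"]]
share_in_successes = sum(i["low_training"] for i in successes) / len(successes)

print(round(share_in_failures, 2), round(share_in_successes, 2))
```

Only by comparing the two groups does the non-explanation reveal itself; a failures-only sample has no way to make that comparison, which is exactly the design flaw the paragraph above describes.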
So for 2016, I have been looking at cases of technology integration, that is, “successes.” I am “sampling on the dependent variable” again, but I am fully aware of the bias built into this year’s study. In writing this book in 2017, however, I will pull together what I have learned from both the “failures” I have studied over the decades and the “successes” I have found this year. I will be able to compare the classrooms and schools that nose-dived with those that soared in integrating devices into lessons.
8 responses to “How and Why I Research Exemplars of Technology Integration”
Hi Larry—thanks for this and good luck with your new line of research. One thing that always bugs me about how technology is written about is that “success” is often framed as “changing teacher practices” rather than what I would argue it ought to be, which is “technology provided a learning opportunity that would not have been available otherwise.” Much of the tech that gets foisted on schools merely replaces, but does not really augment, what teachers already do. Essentially, it’s just more bells and whistles promoted for purposes of “engagement” or some other soft reason, often not worth the cost. However, if a lesson is totally altered into something that (a) enhanced student learning, (b) provided a vehicle that would not be available otherwise, and (c) could be assessed properly, then it might be worth it.
Reframing the point about “success” with tech software is one that I would like to explore. Thanks.
I am presently conducting a qualitative study of a school that has had a 1:1 program for 18 years. I love reading your blogs. I have also read a great deal of your work. When do you expect your new book to be available? If there is any research that you think I should read, please direct me.
Thanks so much, Lynne Lieux, RSCJ University of New Orleans
I am now writing the book and it won’t appear until 2018, Lynne. Some sources you may already have in your bibliography: Binbin Zheng et al., “Learning in One-to-One Laptop Environments: A Meta-Analysis and Research Synthesis” (2016), and the 2002 ethnographic study of four teachers using 1:1 laptops by Mark Windschitl and Kurt Sahl. Thanks for the comment.
Weren’t there similar discussions when the calculator became affordable and started replacing the slide rule? Schools adopting the calculator had success and failure stories. Then the TI-92 algebraic calculator came out and again adopting programs had successes and failures. The TI-92 came out 20 years ago and its capabilities are still not integrated into the high school math classroom. I remember reading about schools that jumped on the TI-92 bandwagon, rewrote their curriculum, and then considered the program a failure because students did poorly on the standardized tests. Success and failure were determined by the previous generation of measurement. Could this same thing be happening with computer technology? Could we just be measuring the wrong things for that success or failure? How much of that measurement standard should we change without throwing out the baby with the bathwater? I think there are successes and failures in tech programs, but it seems success is usually determined by money spent vs. expectations.
Thanks for taking the time to comment, Garth.