Dilemmas in Researching Technology in Schools (Part 2)

If you are a technology advocate, that is, someone who believes in his or her heart-of-hearts that new devices, new procedures, and new ways of using these devices will deliver better forms of teaching and learning, past and contemporary research findings are, to put it in a word, disappointing. How come?

For those champions of high-tech use in classrooms, two dilemmas have had technology researchers grumbling, fumbling, and stumbling.

Gap Between Self-Report and Actual Classroom Practice

Journalists' accounts and many teacher, student, and parent surveys of 1:1 programs and online instruction in individual districts scattered across the U.S. report extraordinary enthusiasm. Teachers report daily use of laptops and elevated student interest in schoolwork, higher motivation from previously lackluster students, and more engagement in lessons. Students and parents report similar high levels of use and interest in learning. All of these enthusiastic responses to 1:1 programs and online instruction have a déjà vu feel for those of us who have heard similar gusto for technological innovations before the initial novelty wore off. [i]

The déjà vu feeling is not only from knowing the history of classroom machines; it is also because the evidence is largely drawn from self-reports. And here is the first perennial dilemma that researchers face in investigating teacher and student use of high-tech devices.

Researchers know the dangers of unreliable estimates that plague such survey and interview responses. When investigators examined classrooms of teachers and students who reported high frequency of usage, they subsequently found large discrepancies between what was reported and what was observed. Little of the gap between what is said on a survey and what is practiced in a classroom is intentional. The discrepancy often arises from what sociologists call the bias of "social desirability," that is, respondents to a survey put down what they think the desirable answer should be rather than what they actually do. [ii]

So a healthy dose of skepticism about teacher claims of daily use and students' long-term engagement is in order, because few researchers have directly observed classroom lessons for sustained periods of time where students use laptops and hand-held devices. Until more researchers go into classrooms, it will be hard to say with confidence that teachers' daily use of computers has changed considerably with abundant access to IT. [iii]

While many researchers clearly understand the limits of self-reports and prize classroom observations and direct contact with teachers and students, the high cost of sending researchers into schools prohibits such on-site studies. Instead, researchers face this value conflict between cost and time efficiencies on one side and direct observation on the other by fashioning compromises: they use survey questionnaires and, perhaps, interviews, all of which are self-reports. These researchers do not solve the problem of "social desirability" bias and the unreliability of self-reports; they manage this perennial dilemma.

Recurring Dilemma of Inadequate Research Design

Another dilemma is that many researchers see electronic devices in schools as hardware and software that are efficient, speedy, reliable, and effective in producing desirable student outcomes such as higher test scores. These researchers have designed studies comparing films, instructional television, and now computers to traditional instruction in order to determine the degree to which the technology makes teachers more efficient and effective in their teaching and helps students learn more, faster, and better. Such studies have dominated IT research in the U.S. for over a half-century, "with the most frequent result being 'no significant difference.'" [iv]

Other researchers, however, see the introduction of innovative technologies as interventions into complex educational systems that interact with and adapt to the institution's goals, people, and practices. They design studies that bring practitioners and researchers together to study real-world problems of how teaching and learning can be improved through the use of high-tech innovations. They are more interested in refining the innovation and adapting it to the contours of actual schools and classrooms than in evaluating the success of the technology, which is what the dominant group of technology researchers are engaged in. While most researchers see electronic devices as tools, these researchers see technology use as a process rather than a product: one of learning how institutions adapt to and change the innovation.

For researchers who adopt this point of view, design-based interventions make the most sense. Here researchers and practitioners work together to identify the problem they will investigate, come up with hypotheses, design the intervention, and then implement it. Researchers collect and analyze data on the intervention and its outcomes in actual classrooms, and teachers then decide whether to put the results into practice; in this way, the research is process-driven.

More design-based interventions might well reduce the grumbling, fumbling, and stumbling that afflicts researchers and champions of more hardware and software in classrooms.


[i] Education Development Center and SRI International, "New Study of Large-Scale District Laptop Initiative Shows Benefits of 'One-to-One Computing,'" June 2004, http://main.edc.org/newsroom/Features/edc_sri.asp; Saul Rockman, "Learning from Laptops," Threshold, Fall 2003, www.ciconline.org; David Silvernail and Dawn Lane, "The Impact of Maine's One-to-One Laptop Program on Middle School Teachers and Students," Research Report #1, February 2004 (Maine Education Policy Research Institute, University of Southern Maine).

[ii] John Newfield, "Accuracy of Teacher Reports," Journal of Educational Research, 74(2), 1980, pp. 78-82, http://www.jstor.org/stable/2657482. Sociologists point out that self-reports of church attendance are similarly inflated.

[iii] Efforts to get sharper findings out of different sources and methodologies, often called "triangulation," can help reduce skepticism of self-reports, but problems remain. See Sandra Mathison, "Why Triangulate?" Educational Researcher, 17(2), 1988, p. 13, http://edr.sagepub.com/content/17/2/13

[iv] Tel Amiel and Thomas Reeves, “Design-Based Research and Educational Technology: Rethinking Technology and the Research Agenda,” Educational Technology and Society, 11 (4), 2008, pp. 29-40.

13 Comments


  1. Throw the Hawthorne effect into the mix you so accurately outline, Larry, and you get… well, what we've got!

    My favourite example of the kind of research practice you identify came from Becta, the British Educational Communications and Technology Agency, a hugely expensive quango which drove so much of the investment in and naivety about ICT over the last decade, but which the current UK administration closed down soon after they came to office. Their report, 'Harnessing Technology, Next Generation Learning,' 2008–2014, a hugely influential document quoted again and again by technology advocates, contained only ONE place in the entire paper where the authors referred to research suggesting technology has had a positive educational effect. This is it, in its entirety: "In addition, links between the use of technology and improved learning outcomes have been identified in an increasing body of evidence."

    That “increasing body of evidence” referred to…was Becta’s own previous 2007 Review, and worse…the research that 2007 review refers to is Cox, M., Abbott, C., Webb, M., Blakeley, B., Beauchamp, T. and Rhodes, V. (2004) ‘A review of the research literature relating to ICT and attainment’….which was itself commissioned by… Becta!

    • larrycuban

      Thanks, Joe, for the comment. One of the missing parts of this post that I take up elsewhere are the flaws in the familiar experimental/control research design used for decades to show that students using a device (film, instructional television, desktop computers, laptops, notebooks) achieve higher test scores than students not using such devices. Of course, the flaws in the design (e.g., seldom controlling for how teachers teach in the experimental and control groups) have not halted current researchers from using the same design.

  2. I fully recognize the limits of survey research, and appreciate the effort to point out how complex the study of technology in education is. And, sadly, there’s no shortage of poorly designed research (especially in the field of ed. tech.) out there.

    However, there’s also an enormous body of well-designed research out there, much of which involves the sorts of observational accounts you call for. As just one example, the most recent edition of the Journal of Educational Computing Research includes this article by a well-respected team of researchers at the University of Florida:
    http://baywood.metapress.com/app/home/contribution.asp?referrer=parent&backto=issue,6,6;journal,1,177;linkingpublicationresults,1:300321,1 “By employing multiple observations in all schools, document analysis, interviews, and teacher inquiry, an account of the conditions, processes, and consequences (Hall, 1995) of laptop computing was generated.”

    There’s a whole bunch more like that. And, again, there’s a bunch of crap out there too. And a bunch of overzealous tech. enthusiasts as well. But, let’s be fair and honest about the research that has been done.

    • larrycuban

      Thanks for the comment, Jon, and reference to Cathy Cavanaugh’s study of 1:1 laptops in Florida. As I recall it is a study that had multiple pieces to it (as you point out) but also looked at schools where not only was the hardware introduced but intense staff development of teachers and support for teachers were included.

  3. Pingback: Dilemmas in Researching Technology in Schools (Part 2) | Digital Delights | Scoop.it

  4. Don Miller

    “process, not a product”
    Exactly. Ten years ago I objected to my children's wealthy high school purchasing laptops to use in school. As a software professional I found the idea that hardware introduction was going to improve student learning silly. I think early iPad adopters are probably doing the same thing today. All students will someday be using iPad-like devices. But to be meaningful it must be strongly content- and process-driven. Textbooks actually designed for the iPad will be fantastic.
    I think we may "back in" to really determining technology's effectiveness through the cost side of the equation. If schools like Rocketship and Carpe Diem can deliver roughly equivalent education at a lower cost, we have good evidence of effectiveness. Even wealthy schools could then use the technologies and reallocate savings elsewhere. Not to mention that Carpe Diem represents an attractive alternate learning style.
    As a non-educator, I believe the opportunity cost of boredom is greatly underestimated by educators, especially in primary school. Technology provides tremendous opportunity for self-paced work and fine-grained measurement of progress.
    I’m very surprised that there isn’t a robust open source software movement for schools. A couple million in funding for a core organization would bring in a vast amount of volunteer talent.

  5. Dare I mention Concordia again Larry?

    http://www.montrealgazette.com/news/Computers+schools+money+well+spent+Concordia+University+study+says/6182416/story.html

    Are you really suggesting that all of the studies carried out over 40 years have the design flaws you mention, or is it just the Concordia methodology itself?

    PS There is an open source software movement for schools across the pond http://opensourceschools.org.uk/

  6. Thanks Larry for this ongoing discussion. The quality of inputs is testament to the importance of this.
    My first thought was that educational technology research is cultural diversity married to FW Taylor. Not a happy thought, I know. Until we get a common understanding of what community we wish to develop and participate in, we will continue to have shallow, competing, and conflicting perspectives provided through research (the GFC shows the differences within societies on just this question). I fear we will be no better at addressing this question in 10 years' time than we were 10 years ago. Increasingly, edtech decision making has become based more on connections and choices made (some good, some not so good) than on quality research-backed insights (although some continue to be happy to use whatever research backs their agenda as 'evidence'). If FW Taylor does prevail, as he has time and time before in education, then we have no excuse for the world we are helping shape.

    • larrycuban

      John,
      Taylorism is alive and well in pay-for-performance evaluations and compensation, in using test scores to measure school and teacher efficiency and effectiveness, and on and on. The "scientific management" of the early 20th century has not disappeared but simply re-appeared in different guises. High-tech devices and software come with implicit promises of teachers and students learning faster, more, and better. Even when there are temporary glitches, as you point out, in terms of bandwidth and too much video being downloaded.

  7. Pingback: Dilemmas in Researching Technology in Schools (Part 2) | Mediawijsheid in het VO | Scoop.it

  8. Dear Larry,

    Apropos to this discussion, I have a recent article in Educational Researcher that engages your work and offers an alternative approach to both survey research and design research.

    The idea, in a nutshell, is to develop research designs that leverage the data trails left by online learning environments in order to examine what teachers and students are actually doing, but at scale. This carries the benefits of generalizability that we appreciate about survey research, along with a historical depth of analysis enabled by the continuous-time data collected by online spaces.

    In my study, I took a population of nearly 200,000 classroom wikis, found a random sample of those used in U.S., K-12 settings, and evaluated 255 of these wikis using an instrument that measured the presence or absence of 24 behaviors that provide opportunities for 21st century skill development and deeper learning.

    A toll free link to the article, some videos describing the work, and a white paper for educators can be found here: http://www.edtechresearcher.com/dclc-project/state-of-wiki-usage-2012/

    I've been meaning to send you an email with the link, but I've been neck deep in the job search. This is perhaps a more appropriate place to engage you anyway. If you have any thoughts, I welcome them.

    Best,
    Justin Reich (married to an Arlington PS graduate)

    • larrycuban

      Justin,

      I had read the piece when it first appeared in Educational Researcher. Thanks for sending it and other pieces along. You show how methodologically different research designs can tap the wealth of data already out there in schools and classrooms. That students and teachers in more affluent schools use wikis longer and for collaborative work while those in low-income schools use them for shorter periods of time and less creatively is familiar, sad to say. Also, that teachers overall use wikis to extend existing ways of teaching is unsurprising but worthwhile in adding yet another piece of knowledge about what happens in schools with new technologies.
