The historical record is rich in evidence that research findings have played a subordinate role in making educational policy. Often, policy choices were (and are) political decisions. There was no research, for example, that found establishing tax-supported public schools in the early 19th century was better than educating youth through private academies. No studies persuaded late-19th century educators to import the kindergarten into public schools. Ditto for bringing computers into schools a century later.
So it is hardly surprising, then, that many others, myself included, have been skeptical of the popular idea that evidence-based policymaking and evidence-based instruction can drive teaching practice. Those doubts grow larger when one notes what has occurred in clinical medicine, with its frequent U-turns in evidence-based “best practices.”
Consider, for example, how new studies have often reversed prior “evidence-based” medical procedures.
* Hormone therapy for post-menopausal women to reduce heart attacks was found to be more harmful than no intervention at all.
* Getting a PSA test to screen men over the age of 50 for signs of prostate cancer was “best practice” until 2012, when advisory panels of doctors recommended that men under 55 not be tested at all and that older men be tested only if they had family histories of prostate cancer.
And then there are new studies recommending that women have annual mammograms beginning not at age 50, as advised for decades, but at age 40. Or research syntheses (sometimes called “meta-analyses”) showing that anti-depressant pills worked no better than placebos.
These large studies, done as randomized clinical trials (the current gold standard for producing evidence-based medical practice), have, over time, produced reversals in practice. Such turnarounds, when popularized in the press (although media attention does not mean that practitioners actually change what they do with patients), often diminished faith in medical research, leaving most of us, myself included, unsure which healthy practices to continue and which to drop.
Should I, for example, eat butter or margarine to prevent a heart attack? In the 1980s, the answer was: Don’t eat butter, cheese, beef, and similar high-saturated fat products. Yet a recent meta-analysis of those and subsequent studies reached an opposite conclusion.
Figuring out what to do is hard because I, as a researcher, teacher, and person who wants to maintain good health, have to sort out what studies say and how those studies were done from what the media report, and then decide how all of that applies to me. Should I take a PSA test? Should I switch from margarine to butter?
If research into clinical medicine produces doubt about evidence-based practice, consider the difficulties of educational research–already playing a secondary role in making policy and practice decisions–when findings from long-term studies of innovation conflict with current practices. Look, for example, at computer use to transform teaching and improve student achievement.
Politically smart state and local policymakers believe that buying new tablets loaded with new software, deploying them to K-12 classrooms, and watching how the devices engage both teachers and students is a “best practice.” The theory is that student engagement through the device and software will dramatically alter classroom instruction and lead to improved achievement. The problem, of course (sure, you already guessed where I was going with this example), is that evidence of this electronic innovation transforming teaching and achievement is not only sparse but also unpersuasive, even when some studies show a small “effect size.”
Turn now to the work of John Hattie, a professor at the University of Auckland (NZ), who has synthesized the research on different factors that influence student achievement and measured their impact on learning. Over the last two decades, Hattie has examined over 180,000 studies, accumulating 200,000 “effect sizes” measuring the influence of teaching practices on student learning. Together, these studies represent over 50 million students.
He established which factors influenced student learning (the “effect size”) by ranking each from 0.1 (hardly any influence) to 1.0, a full standard deviation, or almost a year’s growth in student learning. He found that the “typical” effect size of an innovation was 0.4.
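For readers who want the arithmetic behind an “effect size,” the standardized mean difference (Cohen’s d, the statistic most such syntheses rest on) is simply the difference between the treatment and control group means divided by their pooled standard deviation. A minimal sketch in Python; the function name and the two sets of test scores are invented for illustration, not drawn from Hattie’s data:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    n_t, n_c = len(treatment), len(control)
    var_t, var_c = stdev(treatment) ** 2, stdev(control) ** 2
    # Pool the two sample variances, weighting by degrees of freedom
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical end-of-unit test scores for two classrooms
treatment = [78, 82, 85, 90, 74, 88, 81, 79]
control = [72, 75, 80, 83, 70, 78, 76, 74]
print(round(cohens_d(treatment, control), 2))
```

On this scale, a d of 1.0 means the average treated student scored a full standard deviation above the average control student, which is why Hattie can translate effect sizes into rough “years of growth.”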
To compare how different classroom approaches shaped student learning, Hattie used the “typical” effect size (0.4) as the threshold at which a practice influences student learning (p. 5). From his meta-analyses, he found that class size had a .20 effect (slide 15), while direct instruction had a .59 effect (slide 21). Teacher feedback had an even larger effect size of .72 (slide 32). Moreover, teacher-directed strategies of increasing student verbalization (.67) and teaching meta-cognition strategies (.67) had substantial effects (slide 32).
What about student use of computers (p. 7)? Hattie included many “effect sizes” of computer use, from distance education (.09), multimedia methods (.15), and programmed instruction (.24) to computer-assisted instruction (.37). Except for “hypermedia instruction” (.41), all fell below the “typical” effect size (.40) of innovations improving student learning (slides 14-18). Across all studies of computers, then, Hattie found an overall effect size of .31 (p. 4).
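To make the comparison concrete, here is a small sketch that transcribes the computer-related effect sizes quoted above and checks each against Hattie’s .40 threshold. Note that the simple mean of these five figures is not Hattie’s overall .31; his figure averages over many more meta-analyses than the handful quoted here:

```python
# Computer-related effect sizes quoted in the post
computer_effects = {
    "distance education": 0.09,
    "multimedia methods": 0.15,
    "programmed instruction": 0.24,
    "computer-assisted instruction": 0.37,
    "hypermedia instruction": 0.41,
}
TYPICAL = 0.40  # Hattie's threshold for a meaningful innovation

mean_effect = sum(computer_effects.values()) / len(computer_effects)
print(f"mean of quoted effects: {mean_effect:.2f}")

below = [name for name, d in computer_effects.items() if d < TYPICAL]
print("below the typical effect size:", ", ".join(below))
```

Four of the five fall under the threshold, which is the post’s point: by Hattie’s own yardstick, computer-based interventions underperform ordinary teacher-directed practices like feedback (.72).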
According to Hattie’s meta-analyses, then, introducing computers to students will fall well below other instructional strategies that teachers can and do use. Will Hattie’s findings convince educational policymakers to focus more on teaching? Not as long as political choices trump research findings.
Even if politics were removed from the decision-making equation, there would still remain the major limitation of most educational and medical research. Few studies answer the question: under what conditions and with which students and patients does a treatment work? That question seldom appears in randomized clinical trials. And that is regrettable.
29 responses to “What’s The Evidence on School Devices and Software Improving Student Learning?”
You seem to criticise evidence based policy for ever changing. When we get new information and better studies do you think we should ignore them?
No, I do not think we should ignore new studies that challenge current practices in classrooms or doctors’ offices. What we need to do is accept that scientifically gathered evidence is imperfect and consider our own experience and context when deciding whether to adopt new findings or do something different in our daily lives. Thanks for taking the time to comment.
I agree, Larry. People need to be informed consumers of health care and education. There are no experts anymore; there are lots of people you trust whose knowledge you consider, you definitely weigh your own experiences, and you use your common sense about what works for you, knowing the consequences and risks involved. Research is not easy these days, and it is difficult to find causal factors when, in the case of medical information, our bodies have been subjected to a variety of environmental impacts depending basically on how well we have taken care of ourselves – simply put, eat well, exercise, and set goals for yourself to live a good life. BTW, I would go with the butter LOL.
Thanks, Mary, for the comment and tip on butter.
Pingback: What’s The Evidence on School Devices and Software Improving Student Learning? | Educational Policy Information
Using a computer isn’t an instructional strategy and cannot be compared as one when the question is posed as you have posed it. However, if you look at instructional strategies that technology can extend and augment, then you find the research results that you say don’t exist. The media effects debate is an old one from the ’80s: Clark vs. Kozma. Clark held that technology is like a grocery truck and the groceries were the instructional strategies, while Kozma stated that the technology itself acted as an instructional strategy. Kozma’s argument has almost always lost out to Clark’s analogy. However, as we go forward, with things like augmented reality, simulations, gaming, programming, etc., Kozma’s argument may find new popularity. So studies that look at technology enhancement of instructional strategies do indeed show positive changes in achievement. Then, of course, if you aren’t a believer, you’re just not a believer.
As always, Dr. Bob, thanks for taking the time to comment.
I concur with Dr. Bob. In >40 years of research, we have learned that use of ICT in instruction is a means to an end, not an end in itself. It’s also important to remember that there are many roles of ICT in education, and in the classroom: information source, communications tool, creative tool, instructor, simulator, assessor, instructional manager, administrative manager, performance support system for the teacher and administrator, and so on. And, ICT makes possible creation of learning environments that transcend, or are outside the classroom, including both formal and informal learning environments. Any of the applications can be designed and implemented well or badly, to greater or lesser impact on efficacy and efficiency.
Conclusion: we have learned that asking “does software improve learning?” is too simplistic a question, and it will always have the answer: “on average, no.” But the range of effects is very large, from negative to positive.
I am fond of David Berliner’s observation, over a decade ago, that in educational research, the interaction effects are usually larger than the main effects. That is the flaw in most efforts for evidence-based practice: studies that adequately model the interaction effects are usually too costly and difficult to do, especially at the levels of funding common in education. Asking if software improves learning is an example of a search for a main effect. Much more productive is to ask, “under what circumstances does software improve learning?”
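Berliner’s point about interaction effects can be shown with a toy example (all numbers invented): suppose software helps struggling readers but slightly hurts advanced readers. Averaged over everyone, the main effect of software looks small, while the subgroup-by-treatment interaction is large, which is exactly why “does software improve learning?” is the wrong question:

```python
# Hypothetical average score gains for a 2x2 design:
# (student subgroup, condition) -> gain in points
gains = {
    ("struggling", "software"): 8.0,
    ("struggling", "no software"): 2.0,
    ("advanced", "software"): 1.0,
    ("advanced", "no software"): 4.0,
}

# Main effect: software vs. no software, averaged across subgroups
main_effect = (
    (gains[("struggling", "software")] + gains[("advanced", "software")]) / 2
    - (gains[("struggling", "no software")] + gains[("advanced", "no software")]) / 2
)

# Interaction: how much the software's benefit differs between subgroups
interaction = (
    (gains[("struggling", "software")] - gains[("struggling", "no software")])
    - (gains[("advanced", "software")] - gains[("advanced", "no software")])
)

print(f"main effect of software: {main_effect:+.1f} points")
print(f"interaction effect:      {interaction:+.1f} points")
```

In this made-up data the interaction dwarfs the main effect, so a study that only reports the average treatment effect would conclude “software barely works” while missing that it works well for one group and backfires for another.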
You and Dr. Bob make your points clearly, Rob. And I agree with you both about what the questions on ICT use “ought” to be. Reality, however, intrudes, and policymakers and parents do not frame the questions in the ways that you recommend. Your last question in your final paragraph is where I end up also, although I would amend it to read, “under what circumstances and with which students….” Thanks for the comment.
There’s a not altogether serendipitous connection between this post and your previous one on hierarchies Larry. I’ve observed in the last decade and more, how senior figures in both public and private sector educational enterprises are often entirely driven by a search for “solutions.” What they are less driven by is a desire to really understand the problems facing them. So individuals and organisations invest minimal time, if any, in the sheer, hard thinking required to reach a sufficiently clear level of understanding for them to design effective “solutions.” Hence I think, the popularity of research as a third party, somehow objective activity.
In the case of educational ICT, the naive way new hardware, online resources or social media are repeatedly championed as “motivating” children, is a good example of this failing.
Thanks again, Joe, for your comment.
Here’s a case in point:
We understand that every school is unique and faces different challenges. One size doesn’t fit all.
We understand that while student achievement is the ultimate goal, it’s only with effective and empowered school systems and educators that it can be accomplished.
We understand that today’s classroom is very different from yesterday’s and that we’re all charged with preparing students for the world of tomorrow.
We understand that interactive technology is an essential tool for achieving that goal, not an end in itself.
Bob, you must know that your quote from Promethean (a British-based global company that sells interactive whiteboards and has experienced net losses in revenue for the past fiscal year) comes from their website and is pure boilerplate language that most high-tech companies use. So, knowing this, what does the quote show in reference to the post?
My reference is to their promise to provide technology that improves student achievement.
To the comment above: “I’ve observed in the last decade and more, how senior figures in both public and private sector educational enterprises are often entirely driven by a search for “solutions.” What they are less driven by is a desire to really understand the problems facing them.”
This example, like the “common core” buzzword, shows corporate vendors selling technologies that are marketed to improve achievement. In my opinion, this is where we get away from the instructional strategies to just considering the technology. So I agree with the commenter above.
Administrators are generally the ones involved in the decision-making process about technologies, yet few, if any, have the training and knowledge to make educated decisions about technology, teaching, and learning, and that’s what makes your original posting credible but missing the real issue.
Reblogged this on The Echo Chamber.
Thanks for re-blogging the post about evidence on high-tech improving student learning.
In this blog I think you’re on to something that I’ve been thinking about lately. Best practices can be a trap in times when different challenges are emerging. Here’s a blog from the business world that raises a similar skeptic’s question:
Also, I borrowed your effect size examples and put them into a chart that I sent to teachers in Fairfax County, VA, to use in their planning. They particularly like the size of the teacher-related effects compared to other innovations.
Thanks as always,
Thanks for the piece in HBR. I had not seen it before. Yes, it does a nice job of questioning popular wisdom about what are best practices in the private sector. Thanks for the comment also.
Pingback: Wonkbook: What the resignation of Kathleen Sebelius means | Report On Obamacare
Pingback: Wonkbook: What the resignation of Kathleen Sebelius means - Washington Post (blog) 2014
My first response to this post was to wonder whether there had been a Road to Damascus moment, or at least a spotlight adjustment. If “scientifically-gathered evidence is imperfect” and educational decision-making is more political than research-based, then at the very least we need to look deeper at the contentions on computer use in schools, whether in the shortcomings attributed to you or in the claims of those who advocate that “ICT makes possible creation of learning environments that transcend, or are outside the classroom, including both formal and informal learning environments.”
This led me to re-visit your mid-80s and early-2000s insights into the shortcomings associated with the introduction of digital technologies into schools. What I arrived at was, to me:
(i) It’s a frame-of-reference issue. Hattie is a good example of measuring within traditional school values, even though his 2009 findings have also been used as blanket support for ICT use in the classroom (see http://www.ictineducation.org/home-page/2014/1/29/making-the-most-of-ict-what-the-research-tells-us.html).
You have had some interesting things to say about Papert over the years, but when I revisit Ghost in the Machine: Seymour Papert on How Computers Fundamentally Change the Way Kids Learn (1999), I am taken by the attribution to Papert that “the computer’s true power as an educational medium lies — in the ability to facilitate and extend children’s awesome natural ability and drive to construct, hypothesize, explore, experiment, evaluate, draw conclusions — in short to learn — all by themselves.” This is a frame of reference that I, as a practicing educator, have tried to add value to for many years, while melding it with the traditional learning expectations that school is built around. And, I would contend, I have occasionally experienced success.
(ii) So it’s about what learning is valued. Systems based wholly on standardized, centralized, restrictive testing, or on grade-score outcomes, convey values that influence what teachers believe they can and cannot do. External provisions are nothing new; Postman and Weingartner’s Teaching as a Subversive Activity (1969) is a worthwhile consideration here.
(iii) Learning, as with life, is about finding the right balance. You have rightly promoted your view as being skeptical, which is needed, but hopefully not as a restrictive upholder of outmoded systems. Finding balance between futures sellers and their techno-capitalist bedfellows, and upholders of power structures content with past and present imperfections, is what teaching and education need to constantly search for. Teachers may well tend towards the “familiar” rather than the “imaginative,” but they can be supported to create better futures through better leadership (or vice versa).
(iv) Digital, with its hitherto unexperienced rates of change and possibilities to create, is inherently disruptive. So we can expect to have to keep thinking deeply about what we are trying to achieve (rather than just what we want our young to know). In this digital age, social progress, future work, and personal empowerment are all intertwined.
(v) Finally, education is about the journey. So I finish with Philip Jackson’s What is Education? (2012) which starts with a quote from John Dewey in 1938 “the fundamental issue is not of new versus old nor of progressive against traditional education but a question of what anything whatever must be to be worthy of the name education… we shall make surer and faster progress when we devote ourselves to finding out just what education is and what conditions have to be satisfied in order that education may be a reality.” This is prescient when considering any or all things digital.
Educational research should always be in support of this, not the other way around.
As always, appreciated the catalysts for thinking you have provided and continue to provide.
Thank you, John, for your comments on John Hattie’s research and what I have written over the years (now that’s scary!). I had not seen Steve Moss’s piece where he certainly uses Hattie’s findings to endorse and extend use of ICT. Perhaps, it is, as you say, how one frames the issue of ICT, the teacher, and student learning.
Pingback: This Week’s Round-Up Of Good Posts & Articles On Education Policy | Larry Ferlazzo’s Websites of the Day…
Pingback: This Week’s Round-Up Of Good Posts & Articles On Education Policy | Educational Policy Information
Pingback: Head in the Oven, Feet in the Freezer | e-Literatee-Literate
Pingback: Looking for evidence of digital value in education. Or just looking for Godot? « Light Offerings
Pingback: The Best Research Available On The Use Of Technology In Schools | Larry Ferlazzo’s Websites of the Day…