Benjamin Keep is a researcher and learning scientist who writes about science, learning, and technology at www.benjaminkeep.com.
This article appeared July 10, 2019, on The 74.
When it comes to learning technologies, educators and administrators often focus on what technology to use instead of how the technology facilitates learning. That misplaced focus carries serious costs.
U.S. fourth-graders who report using tablets in all or nearly all of their classes are a full year behind in reading ability compared with peers who report never using tablets in their classes. Internationally, students who report greater use of technology in their classrooms score worse on the PISA exam, the major international student assessment, even when accounting for differences in wealth and prior performance. This is all according to a recent report by the Reboot Foundation.
These findings align with prior research that found essentially the same thing three years ago: High levels of technology use in the classroom tend to correlate with lower student performance.
The question in both of these reports is not whether technology can improve learning outcomes; lots of well-designed experimental research establishes that it can. The question, rather, is whether it is improving learning outcomes. And the answer seems to be: Not really.
Every year, administrators and teachers make major decisions about which new technologies, software platforms and assessment systems should be added to their ed tech arsenal. Companies pitch their products to school representatives at huge conferences. But technology often is misused, underused or even completely unused. One recent study found that over a third of all technology purchases made by middle schools simply weren’t used. And only 5 percent of purchases met their purchaser’s usage goals.
These findings have a common cause. Teachers and administrators don’t use learning technologies (or even think about using learning technologies) in the right way. A lot of conversations focus on what the technology can do or how students could use it, rather than how students typically use the technology or the contexts in which it would be most and least effective. Consider a typical pitch: “With this new virtual reality system, students can inhabit a fully immersive 3D haptic environment.” Nifty. But how does an immersive environment improve learning outcomes?
The answer to this question is telling. If the company’s answer is something like, “Well, students put on the headset like this, and then the teacher pulls up a scenario (we have lots of different ones), then the student has these options…” it’s a bad sign. An answer like that merely describes what the student does with the technology. It says nothing about how the student’s interaction with the technology will improve learning.
So, how should we be evaluating learning technologies? I suggest answering three questions first:
- Is the technology linked to a specific learning goal?
- Does the technology follow research-supported understandings of how we learn?
- When might the technology fail to facilitate learning?
Consider the humble flashcard. Used wisely, decks of flashcards can take advantage of spaced retrieval practice, a remarkably effective way to study. But used poorly — as a way to cram information before a test or to “learn” a set of vocabulary words over the course of a week and never return to them again — flashcards will make students feel as if they’ve learned far more than they have. The result? Little learning and damaged student perceptions about what they know.
Flashcards also have limits: It’s hard to convey complex information with them, for example. In this way, flashcards are like any other technology — there are good ways to use them, bad ways to use them and limits to how they can be used.
Let’s apply these questions to more modern technologies — take, for example, automated essay feedback tools like Revision Assistant. The learning goal seems clear: to help students learn to write better essays. How does it improve learning outcomes? Revision Assistant’s marketing copy says, “Motivate students to improve their writing with instant, differentiated feedback aligned to genre-specific rubrics.” This seems plausible, given the research on skill development: rounds of practice, feedback and self-evaluation are the cornerstones of deliberate practice, a well-established way to improve skills. When might it fail? The feedback itself might be bad. Students may over-rely on it. Or teachers might use it as a replacement for, instead of a complement to, their own feedback.
Or take the virtual reality example. Several companies are working on physics simulations in virtual reality. How might this technology help students learn fundamental physics concepts? One reasonable idea would be to let them experience the behavior of objects in different physical environments. Research suggests that contrasting examples can help make later instruction more effective. When might it fail? Lots of scenarios, but here are two: when the VR experience merely replicates an experience the students could have had otherwise, or when the experience comes after a lecture on the material.
Both of these examples reference well-established learning mechanisms and link them to specific learning goals. Of course, it’s still possible that the technology won’t work — bugs in the system, bad user interfaces, lack of integration with existing teaching systems or just plain bad implementation of the underlying idea. But at least there is an underlying idea that makes sense based on what we know about how students learn.
When we prioritize the how over the what, we think about technology more critically. Given that schools under-use their technology purchases and that buying new technology can be costly, why not delay new purchases for a year or two and explore whether existing technology can be put to good use?
Use technology to pursue specific learning goals. Use only technology that is supported by existing learning research. And stop using technology in contexts where it’s not particularly effective. If we do all that, the next report will show positive correlations between technology use and student achievement, instead of the opposite.