The quality of research on technology use in schools and classrooms leaves much to be desired. Yet academics and vendors crank out studies monthly, and those studies are often cited to justify using particular programs. How practitioners can make sense of research studies is an abiding issue. This post offers readers some cautionary words about looking carefully at findings drawn from studies of software used in schools.
“Mary Jo Madda (@MJMadda) is Senior Editor at EdSurge, as well as a former STEM middle school teacher and administrator. In 2016, Mary Jo was named to the Forbes ‘30 Under 30’ list in education.” This post appeared in EdSurge, August 10, 2016.
How do you know whether an edtech product is effective in delivering its intended outcomes? As the number of edtech products has ballooned in the past five years, educators—and parents—seek information to help them make the best decision. Companies, unsurprisingly, are happy to help “prove” their effectiveness by publishing their own studies, sometimes in partnership with third-party research groups, to validate the impact of a product or service.
But oftentimes, that research draws incorrect conclusions or is “complicated and messy,” as Alpha Public Schools’ Personalized Learning Manager Jin-Soo Huh describes it. With a new school year starting, and many kids about to try new tools for the first time, now is a timely moment for educators to look carefully at studies, scrutinizing marketing language and questioning the data for accuracy and causation vs. correlation. “[Educators] need to look beyond the flash of marketing language and bold claims, and dig into the methodology,” Huh says. But it’s also up to companies and startups to question their own commissioned research.
To help educators and companies alike become the best critics, here are a few pieces of advice from administrators and researchers to consider when reviewing efficacy studies—and deciding whether or not the products are worth your time or attention.
#1: Look for the “caveat statements,” because they might discredit the study.
According to Erin Mote, co-founder of Brooklyn Lab Charter School in New York City, one thing she and her team look for in studies are “caveat statements,” where the study essentially admits that it cannot fully draw a link between the product and an outcome.
“[There are] company studies that can’t draw a definitive causal link between their product and gains. The headline is positive, but when you dig down, buried in three paragraphs are statements like this,” she tells EdSurge, pointing to a Digital Learning Now study about math program Teach to One (TtO):
The report concludes, “The TtO students generally started the 2012-13 academic year with mathematics skills that lagged behind national norms. Researchers found that the average growth achieved by TtO students surpassed the growth achieved by students nationally. Although these findings cannot be attributed to the program without the use of an experimental design, the results appear encouraging. Achievement gains of TtO students, on average, were strong.”
Mote also describes her frustration with companies that call out research studies as a marketing tactic, such as mentioning both a study and the product within a brief, 140-character Tweet or Facebook post—even though the study is not about the product itself, as in the Zearn Tweet below. “I think there is danger in linking studies to products which don’t even talk about the efficacy of that product,” Mote says, noting that companies that do this effectively co-opt research that is unrelated to their products.
“Research from @RANDCorporation shows summer learning is key. Use Zearn this summer to strengthen math skills.”
#2: Be wary of studies that report “huge growth” without running a proper experiment or revealing complexities in the data.
According to Aubrey Francisco of Digital Promise, something consumers should look for is “whether or not the study is rigorous,” specifically by asking questions like the following four:
- Is the sample size large enough?
- Is the sample size spread across multiple contexts?
- Are the control groups well-matched to the treatment groups?
- Is this study even actually relevant to my school, grade, or subject area?
Additionally, what if a company claims massive growth as indicated by a study, but the data in the report doesn’t support those claims?
Back in the early 2000s, John Pane and his team at the RAND Corporation set out to evaluate the effectiveness of Carnegie Cognitive Tutor Algebra. Justin Reich, an edtech researcher at Harvard University, wrote at length about the study, acknowledging that the team “did a lovely job with the study.”
However, Reich pointed out that users should be wary of claims made by Carnegie Learning marketers that the product “doubles math learning in one year” when, as Reich describes, “middle school students using Cognitive Tutor performed no better than students in a regular algebra class.” He continues:
“In a two-year study of high school students, one year Cognitive Tutor students performed the same as students in a regular algebra class, and in another year they scored better. In the year that students in the Cognitive Tutor class scored better, the gains were equivalent to moving an Algebra I student from the 50th to the 58th percentile.”
Here’s another example: in a third-party study released by writing and grammar platform NoRedInk, involving students at Shadow Ridge Middle School in Thornton, CO, the company claims that every student who used NoRedInk grew at least 3.9 language RIT (student growth) points on the widely used NWEA MAP exam, or, by the company’s equivalence, at least one grade level, as demonstrated in a bar graph on the company’s website. But upon further investigation, there are a few issues with that graph, says Alpha administrator Jin-Soo Huh.
While the graph shows that roughly 3.9 RIT points equate to one grade level of growth, there’s more to the story, Huh says. That number is the growth expected for an average student at that grade level, but in reality it varies from student to student: “One student may need to grow by 10 RIT points to achieve one year of typical growth, while another student may just need one point,” Huh says. The conclusion: the NoRedInk student users who grew 3.9 points “may or may not have hit their yearly growth expectation.”
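Huh’s distinction is easy to see in a short sketch. The records below are invented for illustration (none of these numbers come from NoRedInk or NWEA): a uniform 3.9-point bar and each student’s individual growth norm can disagree about who made a year of progress.

```python
# Hypothetical records: (student, observed RIT gain, expected gain for that
# student's grade and starting score). Every number here is invented.
students = [
    ("A", 4.0, 3.9),   # clears a flat 3.9-point bar and their own norm
    ("B", 4.2, 10.0),  # clears 3.9 but falls short of an expected 10 points
    ("C", 1.5, 1.0),   # misses 3.9 yet exceeds their own expected growth
]

flat_bar = [s for s, gain, _ in students if gain >= 3.9]
own_norm = [s for s, gain, expected in students if gain >= expected]
print("cleared the flat 3.9-point bar:", flat_bar)    # ['A', 'B']
print("met their individual growth norm:", own_norm)  # ['A', 'C']
```

The two lists overlap but don’t agree, which is exactly why a single threshold can overstate (student B) or understate (student C) who actually made a year of growth.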
Additionally, one will find another “caveat” statement on Page 4 of the report, which reads: “Although answering more questions is generally positively correlated with MAP improvement, in this sample, there was not a statistically significant correlation with the total number of questions answered.”
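The caveat Huh flags, a correlation that looks positive but is not statistically significant, is something readers can check for themselves whenever a report shares raw data. Here is a minimal, standard-library-only sketch; the student numbers and helper functions (`pearson_r`, `permutation_p_value`) are purely illustrative and do not come from the NoRedInk report.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def permutation_p_value(xs, ys, n_perm=10_000, seed=0):
    """Two-sided permutation test: how often does a random re-pairing of
    students to outcomes produce a correlation at least this strong?"""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    shuffled = list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if abs(pearson_r(xs, shuffled)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Invented numbers: questions answered and RIT-point gain for 12 students.
questions = [40, 55, 23, 80, 64, 31, 90, 12, 70, 48, 58, 35]
rit_gain = [2, 6, 1, 4, 7, 3, 5, 0, 2, 6, 3, 4]

r = pearson_r(questions, rit_gain)
p = permutation_p_value(questions, rit_gain)
print(f"r = {r:.2f}, p = {p:.3f}")  # judge the correlation and its p-value together
```

A permutation test sidesteps distributional assumptions: if randomly shuffling which gain goes with which student routinely produces a correlation as strong as the observed one, then the observed correlation is weak evidence, no matter how positive it looks in a chart.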
According to Jean Fleming, NWEA’s VP of Communications, “NWEA does not vet product efficacy studies and cannot offer insight into the methodologies used on studies run outside our organization when it comes to MAP testing.” All the more reason, then, for users to be aware of potential snags.
As for the companies producing these studies, here are two pieces of advice.
#1: Consider getting your “study” or “research” reviewed.
No one is perfect, but according to Alpha administrator Jin-Soo Huh, “Edtech companies have a responsibility when putting out studies to understand data clearly and present it accurately.”
To help, Digital Promise launched an effort on Aug. 9 to evaluate whether or not a research study meets its standard of quality. (Here are a few studies that the nonprofit says pass muster, listed on DP’s “Research Map.”) Digital Promise and researchers from Columbia Teachers College welcome research submissions from edtech companies between now and September in three categories:
- Learning Sciences: How developers use scientific research to justify why a product might work
- User Research: Rapid-turnaround studies in which developers collect and use information (both quantitative and qualitative) about how people interact with their product
- Evaluation Research or Efficacy Studies: How developers determine whether a product has a direct impact on learning outcomes
#2: Continue conducting or orchestrating research experiments.
Jennifer Carolan, a teacher-turned-venture capitalist, says both of her roles have required her to be skeptical about product efficacy studies. But Carolan is also the first to admit that efficacy measurement is hard, and needs to continue happening:
“As a former teacher and educational researcher, I can vouch for how difficult it can be to isolate variables in an educational setting. That doesn’t mean we should stop trying, but we need to bear in mind that learning is incredibly complex.”
When asked about the state of edtech research, Digital Promise’s Francisco responds that it’s progressing, but there’s work to be done. “We still have a long way to go in terms of being able to understand product impact in a lot of different settings,” she writes. However, she agrees with Carolan, and adds that the possibility of making research mishaps shouldn’t inhibit companies from conducting or commissioning research studies.
“There’s a lot of interest across the community in conducting better studies of products to see how they impact learning in different contexts,” Francisco says.
Disclosure: Reach Capital is an investor in EdSurge.
6 responses to “Did That Edtech Tool Really Cause That Growth? (Mary Jo Madda)”
Reblogged this on From experience to meaning… and commented:
This is a must-read post!
Pedro, thanks for re-blogging this post on giving careful attention to research studies.
It is my experience that few administrators and even fewer teachers can decipher a top-notch research study. That is one of the reasons I will remain in the classroom after finishing my doctorate. Even the bevy of administrators in my district with Ed.D. degrees do not read research studies with a critical eye.
The best advice given here is about the caveats. But that only works if you read the entire report. Most high-quality peer-reviewed research is behind paywalls and is not easily accessible to those not at a university. Therefore, much of the research is unknown to the very community that could benefit from it, and schools make some poor decisions that could have been avoided. Along this line, I found Jack Schneider’s book From the Ivory Tower to the Schoolhouse a fascinating read.
Thanks, Alice, for commenting on your experience with getting access to peer-reviewed research articles and your opinion on how district administrators read research studies. By the way, I told Jack about your comment on his book.