How Much Do Educators Care About Edtech Efficacy? Less Than You Might Think (Jenny Abamu)

Jenny Abamu is a reporter at WAMU. She was previously an education technology reporter at EdSurge where she covered technology’s role in K-12 education.

She previously worked at Columbia University EdLab's Development and Research Group, producing and publishing content for their digital education publication, New Learning Times. Before that, she worked as a researcher, planner, and overnight assignment editor for NY1 News Channel in New York City. She holds a Master's degree in International and Comparative Education from Columbia University's Teachers College.


This article appeared in EdSurge, July 17, 2017

Dr. Michael Kennedy, an associate professor at the University of Virginia, was relatively sure he knew the answer to this research question: “When making, purchasing and/or adoption decisions regarding a new technology-based product for your district or school, how important is the existence of peer-reviewed research to back the product?” Nevertheless, as part of the Edtech Research Efficacy Symposium held earlier this year, Kennedy created a research team and gathered the data. But, to his surprise, the results challenged conventional wisdom.

“I hypothesized that the school leaders we talked to and surveyed would say, ‘Oh yeah, we privilege products that have been sponsored by high-quality research,’” says Kennedy. “Of course, we found that that wasn’t exactly correct.”

With a team of 13 other academics and experts, Kennedy surveyed 515 people from 17 states. Of those surveyed, 24 percent were district technology supervisors, 22 percent were assistant superintendents, 7 percent were superintendents, 27 percent were teachers, and 10 percent were principals. Within this diverse group, 76 percent directly made edtech purchases for their school or were consulted on purchase decisions. This was the group Kennedy expected would put its trust in efficacy research. To his team’s surprise, however, about 90 percent of the respondents said they didn’t insist that research be in place before adopting or buying a product.

In contrast, respondents prioritized factors such as ‘fit’ for their school, price, functionality, and alignment with district initiatives; those surveyed rated these as “extremely important” or “very important.” In the report, one of the administrators interviewed is quoted saying, “If the product was developed using federal grant dollars, great, but the more important factor is the extent to which it suits our needs.” Kennedy also noted that other statements made him pause.

“Research, according to one of the quotes I received, was the icing on the cake,” says Kennedy. “Having a lot of research evidence, like the type demanded by the feds, was cool but not essential. I found that to be pretty surprising and a little bit troubling.”

Kennedy defines randomized controlled trials, a research methodology that tries to remove bias and external effects as much as possible from the experiment, as the gold standard of research. Though this type of extensive and carefully planned research is expensive, the federal government does offer funds to support groups willing to go through the process. However, without schools demanding such research, Kennedy says, the government has made a way but there is no will—and that could dry up funds.

“The consumer is the one who is going to have to demand the market changes. If school districts say, ‘I am not buying without any research evidence,’ that would be the only thing, I think, the business community will listen to,” says Kennedy.

So what explains the educators who did put research at the top of their list? Kennedy speculates it’s a question of exposure to quality research and district funding.

“Some people who responded to our survey had doctorates, others had advanced degrees, and they understand the value of research,” says Kennedy. “Some respondents are from districts that are very well-funded, and they have the luxury of being picky. Other districts have very limited budgets, very limited time, and they are going with what is cheapest and easiest.”

Whether rich or poor, all school districts do have to answer to their tax bases, who often foot the bill for edtech purchases. Schools that cannot show academic gains often face more scrutiny from outside forces, including parents and local officials. However, Kennedy notes that the complicated nature of education, and all the variables that can affect student achievement, waters down any accountability that can be placed on school districts’ edtech purchase decisions.

“I suspect they will look at how are we teaching reading and math because technology is often used as a supplementary tool,” says Kennedy. “I hear parents say they want more technology, but they don’t know what they want. They think any tech is good tech, and I think that myth has pervaded as well. It’s a wicked problem, a layered contextual kind of issue, that will take more than the field can do to fix.”

Filed under research, school leaders, technology use

5 responses to “How Much Do Educators Care About Edtech Efficacy? Less Than You Might Think (Jenny Abamu)”

  1. There seems to be very little reliable evidence for the efficacy of edtech as anything other than a supplement to other practices. The best I’ve seen is the EEF literature review from 2012. https://educationendowmentfoundation.org.uk/evidence-summaries/teaching-learning-toolkit/digital-technology/

    • larrycuban

      Thanks for the comment. I agree with the overall point that using technology is basically a “supplement to other practices” that is very hard to disentangle. The reference you cite, however, is far more positive than the meta-analyses of the literature I have seen on the topic.

  2. Jay Fogleman

    To be fair, a technological innovation that excites district purchasers as being a “good fit” would probably be “old news” by the time it could be validated by randomized controlled trials. As you know, Larry, innovations that stem from strong theoretical foundations are often piloted and refined in particular educational contexts using a “design-based research” approach that often takes ~ three years or more. A randomized controlled study would probably occur after this design phase, and would take another three years. Publishing the results in a peer-reviewed journal might be another two years. Given how fast commercial innovations enter the ed tech market, this time lag would probably take researchers out of pre-purchase deliberations for most districts.
