The New Research Competition That Could Spark an Edtech Revolution


By Bart Epstein and David DeSchryver     Oct 12, 2019

Another school year has kicked off, and school officials will spend more than $13 billion on technology solutions that claim to work but often fail to generate the desired outcome.

The question of whether something “works” in the abstract is very different from the question of whether it might work in your district. That’s because education research has historically paid little attention to the sort of implementation variables (such as teacher buy-in, planning time, data interoperability and school culture) that can affect outcomes in the classroom. It’s a dynamic that leads superintendents to repeat an adage that should be familiar to many educators: “a mediocre intervention implemented well is better than a great intervention implemented poorly.”

Even the venerable “What Works Clearinghouse,” an initiative of the U.S. Department of Education that reviews existing research on educational programs, products and practices, offers precious little insight into the myriad ways that technology tools are likely to work—or not—across a range of school and district contexts.

Consider how education research typically works. The gold standard is the randomized controlled trial, which requires randomly assigning subjects to control and experimental groups. The process often strips away variables in order to measure the differences between the groups’ experiences accurately. That means researchers often evaluate the effectiveness of programs in a small number of settings, and with a limited variety of educators and learners.

This type of research is necessary for education researchers and academics (and it’s the kind of information housed in the What Works Clearinghouse). But it offers little guidance to help district or school leaders understand what is likely to flourish or fade in a typical, real school setting.

The demand for a better understanding of “implementation science” is getting louder. Educators want to see why a program or technology works in one setting and not another. It’s not enough to know that something works in an idealized environment. They want to know if and how it can help their particular students—and what they can learn from their peers nationwide to make that happen.

This summer’s announcement by the U.S. Department of Education’s Institute of Education Sciences (IES) responds to this demand. The agency kicked off a new research competition to better understand how technology programs that IES previously deemed effective can perform in specific but varied settings, from different geographic regions to different populations of learners, educators and schools. It will also look at how a program’s impact may differ based on intervention delivery, such as the particular rotations of students in a blended learning program or the balance of video versus face-to-face instruction.

IES is now reviewing the applications, and possible start dates for approved applicants are between July 1 and September 1, 2020. The research will use fiscal year 2020 funds, which Congress is still debating. IES invited the applications this year so that the applicants would have enough time to prepare.

This should be music to the ears of education researchers. We need to know the conditions and resources that are necessary for any education product or program to be implemented successfully.

The competition also highlights how much work lies ahead. This is largely greenfield territory. Not only is there little to no good research, but there isn’t (yet) a common language to describe many of the factors that explain variation in implementation. For example, the meaning of “teacher agency,” with regard to the selection of education technology, differs across districts and across schools within districts. So too do definitions of terms like “initiative fatigue,” “fidelity of implementation,” “professional development” and “interoperability.”

These concepts can make or break the adoption of a program. Yet we cannot currently track these issues and analyze the ways that school environments and real-world conditions vary from each other and impact implementation. We are still living in a world of anecdotes.

Some difficult work lies ahead, but imagine the impact. If education decision-makers achieve consensus on how to define and track the variables that affect implementation, and share that data with their peers nationwide, they will be better able to select tools backed by actual evidence about the real-world conditions in which those tools thrive or struggle. With better insight into why, where and when programs succeed and fail, school leaders can gauge what quality implementation really looks like, and make meaningful assessments of the impact of different technologies. The education market and its investors can also better understand what makes or breaks these tools, so that they, too, can make wiser decisions about where to invest.

Taken together, research like this could revolutionize how schools operate, improve the quality of materials on the market, and ensure that the tens of billions of dollars schools devote to technology every year are better spent.
