New Guidance on Conducting Research Unveiled for Ed-Tech Companies


San Francisco

A leading ed-tech industry group just updated its guidance for publishers and developers working in schools, recommending that companies follow 16 practices to evaluate and increase the impact of their products.

The 54-page document makes recommendations on topics such as how to begin planning for research, how to conduct it, what to report, and how findings should be presented in marketing literature. It was released Monday and discussed by a panel here at the Education Impact Symposium of the Education Technology Industry Network, a division of the Software & Information Industry Association.

“Three things have fundamentally changed the landscape of education research in K-12,” said Denis Newman, the CEO and founder of Empirical Education, Inc., which authored the guidelines.

First, the accelerated pace of change for products makes long-term efficacy studies, which can take up to two years, impractical: the digital products have changed before the studies are published, he said. Second, the collection of usage data makes research with faster turnarounds possible, which helps determine how and where a product is most effective, Newman said.

And third, the definition of evidence in the Every Student Succeeds Act “gives companies a starting point that any education company can get to,” he said during the panel discussion. Ed-tech providers can now satisfy the law’s requirements for evidence of impact without undertaking the most rigorous, expensive, and extensive studies that the No Child Left Behind Act required to meet its definition.

Bridget Foster, the executive vice president and managing director of the ed-tech industry network, said the guidance updates a document that was last released six years ago. Members of the organization see that changes in technology and policy have made evidence of impact “an increasingly critical differentiator” in the marketplace, she said.

Giving Guidance for Impact Research

The Guidelines for Conducting and Reporting EdTech Impact Research in U.S. K-12 Schools provide a framework that covers developing and documenting a model for how the product works; designing and conducting the research; handling sensitive, personally identifiable student information; and reporting results to the educational community.

The guidelines recommend that companies take the following eight steps, among others:

  • Consider how much support to offer during the product evaluation, and understand the difference between “efficacy” and “effectiveness” studies: efficacy studies show how a product can work under ideal conditions, while effectiveness studies test it at larger scale under ordinary field conditions.
  • Decide who is being tested: students, teachers, schools, or a combination.
  • Consider the four levels of evidence defined in ESSA: strong, moderate, promising, and “demonstrates a rationale” (the entry level).
  • Use random assignment if you have control over who uses the ed-tech product now and who gets it later (a minimal sketch follows this list).
  • Use comparison group studies to show evidence of impact.
  • Use correlational designs to gauge a program’s promise and how usage relates to outcomes of interest (see the second sketch after this list).
  • Work with researchers who can be objective and independent.
  • Make the research reports accessible and invite external review.
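To make the random-assignment step concrete: at its simplest, it is a coin flip at the school level. The following is a minimal sketch in Python, not part of the guidelines themselves; the school names and the now-versus-later (“waitlist”) framing are illustrative assumptions.

    import random

    # Hypothetical list of participating schools (illustrative only).
    schools = ["School A", "School B", "School C", "School D",
               "School E", "School F", "School G", "School H"]

    random.seed(42)          # fixed seed so the assignment is reproducible
    random.shuffle(schools)  # randomize the order of the schools

    # The first half gets the product now (treatment); the second half
    # gets it after the study ends (a waitlist control group).
    midpoint = len(schools) // 2
    treatment, control = schools[:midpoint], schools[midpoint:]

    print("Gets the product now:  ", treatment)
    print("Gets the product later:", control)

In practice, researchers often randomize within matched pairs or strata (by school size or prior achievement, for example), but the core idea is the same: chance, not choice, decides who gets the product first.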
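The correlational designs mentioned above reduce, at their simplest, to relating usage to an outcome of interest. Here is a second minimal sketch along the same lines, with hypothetical per-student usage minutes and test-score gains standing in for real data:

    from statistics import correlation  # available in Python 3.10+

    # Hypothetical per-student data: weekly minutes of product usage
    # and gains on an outcome measure (both made up for illustration).
    usage_minutes = [30, 45, 60, 20, 90, 75, 10, 50]
    score_gains = [2.0, 3.5, 4.0, 1.0, 6.5, 5.0, 0.5, 3.0]

    r = correlation(usage_minutes, score_gains)  # Pearson's r
    print(f"Correlation between usage and outcomes: r = {r:.2f}")

A positive correlation like this speaks to a program’s “promise,” in the guidelines’ terms, but not to causation; demonstrating impact is the job of the comparison-group and random-assignment designs.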

The guidelines also recommend that, as a general rule, companies make all findings from product evaluations available, recognizing that a rigorous study of a product’s effectiveness “can be a serious risk” for an ed-tech provider if the results do not favor the company’s marketing plans.

“The research one does is not just for marketing,” said John Richards, president of Consulting Services for Education, Inc., and an adjunct faculty member at the Harvard Graduate School of Education, where he teaches Entrepreneurship in the Education Marketplace. “It’s also to internally improve a product.”

Drawing a distinction between formative and summative research, Amar Kumar, a senior vice president at Pearson, said most ed-tech companies see the value of formative research for developing their products, but the value of summative research is less clear to them. Many wonder: “Is the customer really going to care?” he said. (Indeed, my colleague Sean Cavanagh just reported on a newly released survey indicating that many K-12 officials are more likely to look at whether an ed-tech product meets educators’ specific needs than at whether it has gone through rigorous research.)

Kumar, who leads Pearson’s global efficacy and research team, nonetheless said that a year or two ago, “a case study was enough” for many education decisionmakers. But now he sees an emerging demand for more rigorous research, and for transparency in how that research is reported. “You need to report on everything you’re studying,” he said.

Getting the Guidelines to Educators

Newman said the goal of the education industry network is to get the guidelines into the hands of practitioners by inviting groups like ISTE and SETDA to make the document available to decisionmakers in school systems.

“Educators really need this,” said Malvika Bhagwat, who oversees research and efficacy at Newsela. Companies that follow these guidelines can answer the “compared to what?” questions that educators often have about products, she said. With the thousands of products educators can choose from, she said, teachers and administrators will be able to look at products and ask questions like: “What was the sample size? What was the amount of intervention? What was the focus of the study? And what did this mean in terms of impact?”

“It’s important that we’re compared on the same benchmark, by the same guidelines,” she said.

“Ultimately it will improve the conversation between companies and customers,” said Newman. “They will have a common language.”

