Developing a Research Base to Evaluate Digital Courseware

By Vanessa Peters and Barbara Means     Jul 22, 2016

Online and blended learning continue to trend upward in higher education institutions across the U.S. According to a Babson Survey, the proportion of academic leaders who report that online learning is critical for long-term institutional growth rose from 49 percent in 2002 to 71 percent in 2015, and institutions are investing in edtech products to support digital learning. Although these investments can take many forms, they typically include courseware: software designed to deliver curriculum content through educational applications.

How can institutions be confident that their investments of time and money in selecting and implementing new courseware will actually pay off? To facilitate this decision making, the Bill & Melinda Gates Foundation supported the development of the Courseware in Context Framework, a resource for helping instructors and institutions identify courseware products that meet their needs and the needs of their students. The framework provides a starting place for navigating the landscape of research evidence on the effectiveness of available courseware products.

Finding an answer in the research literature is not easy. New products are coming on the market all the time, and old products are being renamed and repurposed. For most individual products, there is no published independent research on their effectiveness.

At the same time, the number of studies of the effectiveness of learning software is huge. Even limiting your search to the relatively new category of “courseware” doesn’t reduce the number of studies enough to make reviewing them less daunting. A Google Scholar search on the terms “efficacy of courseware” yields over 7,860 results in a mere 0.08 seconds. But if you start examining the studies, you find that only a tiny fraction of them measure courseware learning impacts well enough to support drawing any kind of conclusion. And even when an educator does find a controlled study that provides a credible test of courseware impacts, chances are it will not be for the kind of product, subject area and students he or she is concerned with.

What’s more, the available courseware impact studies have results that are all over the map. Sometimes classes using the courseware get better results than those taught in a traditional lecture format; sometimes there is no difference; at other times students using the courseware appear to do worse. An evaluation of adaptive learning products by SRI Education revealed that different impacts may be found even when the same product is evaluated in different studies. Differences in study conditions, such as implementation practices, learner characteristics, outcome measures and the version of the software being used, can all affect results, making it difficult to draw conclusions from prior research.

The Courseware in Context Framework uses the research literature in a different way. Rather than looking for impact studies of particular courseware products, it considers the research base for various instructional design features of products. Recognizing that the purposes and contexts for adopting courseware vary widely from college to college, and that new courseware products are becoming available all the time, the framework does not seek to identify the “best” courseware. Rather, it sets forth product capabilities and implementation practices that are considered desirable from multiple perspectives: technical compatibility, usability, best practices in implementation at the course and institutional levels, and consistency with research on learning.

The framework also invites users to consider the context in which a product will be used and their primary reason for adopting courseware. Is an individual faculty member trying to get students to be more active learners in her course? Is an academic department trying to achieve greater consistency for a core course taught by many instructors? Or is a college looking to improve the success rate in a course that many students drop or fail?

Working with the Tyton Partners-led team developing the framework, we helped identify instructional features with support in the learning science literature that are incorporated into some courseware products but not others. Examples include diagnosis of skills a learner is missing, or a prompt for a student to make a prediction. For each of these features, we identified a small number of studies demonstrating that adding the feature to instruction improved learning outcomes.

The next step in our work, commencing this summer, is to conduct a comprehensive search of the learning effectiveness literature to identify all controlled studies of the courseware learning features published since 2000. Synthesizing the findings of all of the studies on a given feature will enable us to compute the average impact of adding the feature, providing a research-based estimate of its relative importance.
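To give a concrete sense of what that synthesis involves, the sketch below shows a standard fixed-effect meta-analysis computation of the kind such reviews typically rely on: each study’s effect size is weighted by the inverse of its variance, so more precise studies count for more, and the weighted average estimates the feature’s pooled impact. The Python code and the study values are purely illustrative assumptions, not the authors’ actual data or analysis pipeline.

    import math

    # Hypothetical (effect size, variance) pairs from controlled studies
    # of a single courseware feature (illustrative values only)
    studies = [
        (0.32, 0.010),
        (0.18, 0.025),
        (0.45, 0.015),
    ]

    # Inverse-variance weights: more precise studies receive more weight
    weights = [1.0 / var for _, var in studies]

    # Fixed-effect pooled estimate: weighted average of the effect sizes
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)

    # Standard error and 95 percent confidence interval of the pooled effect
    se = math.sqrt(1.0 / sum(weights))
    low, high = pooled - 1.96 * se, pooled + 1.96 * se

    print(f"Pooled effect size: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")

A pooled estimate like this is what allows the relative importance of different features to be compared on a common scale, even when the underlying studies differ in sample size and precision.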

The result of this work will be a highly readable document in which educators can find information on the range of subject matters, learning outcomes, and learner types included in studies of each feature. This publication will launch alongside an updated Courseware in Context Framework in October 2016.

Barbara Means is director of the Center for Technology in Learning at SRI International, and Vanessa Peters is an education researcher at the center.
