Making student data more usable: What innovation theory tells us about interoperability

By: Thomas Arnett

Apr 19, 2017

As schools adopt blended learning, many are eager to use the floods of student learning data gathered by their various software systems to make better instructional decisions. We are accustomed to the ease with which we can use data from multiple systems in other domains of life—such as when we use GPS apps on our smartphones to search for dinner options, check operating hours and customer ratings, and then get traffic-optimized driving directions. So it isn’t hard to imagine an ideal world in which all student data flows seamlessly and securely between software applications: a concept known as data interoperability.

But currently, data interoperability across education software tools remains more of a hope than a reality. Often, the software that schools use provides educators with only the data its developers have deemed necessary or relevant for teachers. Each piece of learning software usually has its own proprietary dashboards and reports, and the software typically does not tag, categorize, or provide access to its data in a way that makes the data easy to share across systems.

In the absence of data interoperability, many teachers and administrators spend hours manually pulling data from multiple systems into spreadsheets—a tedious and error-prone process that is unwieldy at scale. Or worse, when the rubber hits the road in the daily work of teaching, data sits unused, gathering proverbial dust inside expensive software systems while teachers go on teaching as if the data didn’t exist.

Education is not the first industry to face these challenges, and it certainly won’t be the last. But the challenge may feel more intractable in some fields than others. Healthcare, for example, is wrestling with similar issues regarding the interoperability of electronic health records. What makes these issues so daunting in both education and healthcare is that interoperability is a business model challenge as much as a technical challenge. As our research on interdependence and modularity illustrates, it isn’t in the interest of subsystem or subcomponent producers to force-fit their processes into a standard format when they are trying to optimize their own technology or operations.

How can industries push past this dilemma? Our research, laid out in The Innovator’s Prescription, suggests three potential paths toward interoperability among electronic health records specifically—and across industries more broadly. Fortunately, we’re already witnessing the education field starting to evolve in some of these directions.

Standards

The first path toward interoperability evolves when industry leaders meet to agree on standards for new technologies. With standards, software providers electively conform to a set of rules for cataloging and sharing data. The problem with this approach in the current education landscape is that software vendors don’t have incentives to conform to standards. Their goal is to optimize the content and usability of their own software and serve as a one-stop shop for student data, not to constrain their software architecture so that their data is more useful to third parties.

Until schools and teachers prioritize interoperability over other features in their software purchasing decisions, standards will continue to fall by the wayside with technology developers. Efforts led by the Ed-Fi Alliance, the Access for Learning Community, and the federal government’s Common Education Data Standards program all aim to promote common sets of data standards. In parallel with these efforts, promising initiatives like the Project Unicorn pledge encourage school systems to increase demand for interoperability. But common standards, while gaining some traction, still have a ways to go before they become universal.
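To make the payoff of shared standards concrete, here is a minimal sketch of what a common data format buys the consumer of student data. The field names and record shapes below are invented for illustration; they are not the actual Ed-Fi or CEDS schemas.

```python
# Hypothetical standardized assessment records. Because both vendors
# conform to the same schema, a single function can aggregate data from
# either one without vendor-specific parsing code.
vendor_a_record = {"studentId": "S-1001", "assessment": "math-unit-3", "score": 0.82}
vendor_b_record = {"studentId": "S-1001", "assessment": "reading-unit-1", "score": 0.74}

def average_score(records):
    """Average scores across records from any standards-conforming vendor."""
    return sum(r["score"] for r in records) / len(records)

print(average_score([vendor_a_record, vendor_b_record]))  # prints 0.78
```

Without the shared schema, the aggregation step would need a separate parser for each vendor's proprietary export, which is exactly the manual spreadsheet work teachers do today.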

Virtualization

A second solution to the lack of interoperability would be virtualization. Virtualization is a technology for translating various unrelated data “languages” into a common one, allowing previously incompatible formats to work together seamlessly. In modern computing, virtualization between different systems often happens through APIs. In education, for example, Clever’s API allows districts to use student rosters to set up and maintain student accounts across more than 200 third-party software providers. Students and teachers then go to the Clever dashboard to log in across all the other systems. Clever’s virtualization software saves schools many hours of manual work setting up and managing student accounts. But translating students’ names and login credentials is an easy problem compared to translating usage data, assessment outcomes, and learning progress across multiple systems. Our current methods for measuring and tracking student learning still need to evolve and improve before they will be reasonably amenable to virtualization.
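The translation layer described above can be sketched as a set of per-vendor adapters that map proprietary formats into one common shape, so downstream tools only ever see the common format. Everything here, including the vendor formats and field names, is invented for illustration; this is not Clever’s actual API.

```python
# Virtualization as a translation layer: each adapter converts one
# vendor's proprietary roster record into a shared common format.
COMMON_FIELDS = ("student_id", "full_name")

def adapt_vendor_x(raw):
    # Hypothetical Vendor X nests the student's name under "profile".
    return {"student_id": raw["id"], "full_name": raw["profile"]["name"]}

def adapt_vendor_y(raw):
    # Hypothetical Vendor Y splits the name into first/last fields.
    return {"student_id": raw["sid"], "full_name": f'{raw["first"]} {raw["last"]}'}

ADAPTERS = {"vendor_x": adapt_vendor_x, "vendor_y": adapt_vendor_y}

def to_common(vendor, raw):
    """Translate a vendor-specific record into the common roster format."""
    record = ADAPTERS[vendor](raw)
    assert all(field in record for field in COMMON_FIELDS)
    return record

print(to_common("vendor_x", {"id": "S-1", "profile": {"name": "Ada Lovelace"}}))
print(to_common("vendor_y", {"sid": "S-2", "first": "Alan", "last": "Turing"}))
```

The design point is that adding a new vendor means writing one new adapter rather than rewriting every downstream consumer, which is why login credentials and rosters, with their small stable schemas, virtualize easily while richer learning data does not.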

Platform dominance

A third possibility for tackling interoperability combines aspects of both previous approaches. If a particular platform becomes dominant, then it can set its own standards to which other software providers must conform in order to participate on the platform. For example, as Facebook and Gmail gained widespread adoption, they became de facto standards for sharing user information with other web apps. Similarly, if a particular education platform becomes big enough, it could eventually require other software to share student data according to the requirements it sets. But currently, the edtech market seems far too fragmented for any one platform to dominate in the short term.

It’s uncertain which of these three paths forward will lead to the holy grail of data interoperability. But one thing is clear: data interoperability is critical for realizing the promise of high-quality personalized learning. When teachers cannot easily use data from learning software to make better instructional decisions, the software fails to deliver one of its most critical benefits: amplifying the abilities of teachers. For schools and teachers trying to push the frontier of personalized learning, data interoperability solutions can’t come soon enough.

Thomas Arnett is a senior research fellow for the Clayton Christensen Institute. His work focuses on using the Theory of Disruptive Innovation to study innovative instructional models and their potential to scale student-centered learning in K–12 education. He also studies demand for innovative resources and practices across the K–12 education system using the Jobs to Be Done Theory.