In higher education, it’s student progress that counts—but prestige that gets measured

Aug 4, 2020

In May 2019, the Christensen Institute interviewed members of the Presidents Forum to identify challenges in higher education that require collaborative efforts and systemic change to ensure student progress. Several themes emerged that were already relevant before COVID-19 ravaged the nation and have only grown in importance and urgency since. This is the third of four blog posts, all written prior to the pandemic, that address these themes and share insights from the leaders of some of higher education's most innovative institutions.

Congressional debates around coronavirus aid packages are dredging up a thorny practice—distributing aid to colleges and universities based on the number of full-time equivalent students. In other words, serving part-time students brings in less aid.

Is it any wonder that so much of traditional higher education has preferred to focus on first-time, full-time learners? The students that colleges line up to serve are the ones that count—literally. 

As the late Professor Clay Christensen wrote extensively, organizations have a powerful tendency to focus their innovation efforts on their best, most demanding, and most profitable customers. This is true of all industries, and higher education is the rule, not the exception. In higher education, the "best" customers are elite, first-time, full-time students. They pay more; they are academically prepared; they graduate in high numbers; they demand immersive amenities; and their achievements, talents, and predominantly wealthy families contribute significantly to an institution's prestige and reputation.

Count all your customers

The industry's obsession with these students has been baked into the data infrastructure of higher education for decades. Until recently, major federal data systems like IPEDS counted only first-time, full-time students. Students who don't fit that narrow definition (and there are millions of them) aren't counted in statistics like retention and graduation rates. As a result, schools that serve adult learners often see only a tiny fraction of their students reflected in federal data sets.

For instance, American Public University System, which serves many active-duty military service members, veterans, and their families, currently has over 72,000 undergraduates. Its most recent first-time, full-time, first-year class (on which IPEDS statistics like graduation rates are based) consisted of only about 1,000 students, roughly 1.4% of its undergraduate enrollment. Just under half of one percent of students at Western Governors University, a competency-based school with nearly 90,000 undergraduates, are counted in the first-time, full-time cohort. Walden University, with 8,000 undergraduates, had no graduation rate reported by IPEDS at all, because it enrolled exactly zero first-time, full-time students.

Our data architecture was designed for traditional students and the schools that serve them. More nontraditional students are starting to show up in the numbers, but there is still work to do.

Stop measuring prestige. Start measuring progress. 

Graduation rates are one of the few ways our data system measures student progress at all. IPEDS, like more widely known rankings such as U.S. News & World Report's, instead tends to measure schools by their prestige via inputs: the resources a school has, including endowments, spending, faculty, the SAT scores of incoming students, and reputation.

This is very different from asking how well schools deliver on their value proposition: the promise they make to help students move forward in their circumstances. To address that question, which is ultimately the one students, parents, taxpayers, and regulators care about, we would have to focus on student outcomes: retention, graduation, job placement, salary outcomes, social capital, and long-term student satisfaction.

To be sure, inputs like the SAT scores of incoming students can have a big impact on student progress. Most schools with high graduation rates benefit from students who are academically well prepared before they ever set foot on campus. But high graduation rates don't necessarily mean that schools are doing a good job. In their seminal work, Academically Adrift, Richard Arum and Josipa Roksa found that college-goers made disappointingly small gains in critical thinking and writing during their years on campus. The evidence suggests the uncomfortable possibility that many schools admit, but do not create, great thinkers and writers.

We should assess schools based on the value they create for students, not the quality of students they admit. 

Don’t confuse the finish line with the journey

One group of innovators, the Presidents Forum, is experimenting with new ways to create transparency around the student progress that college enables. Because the institutions that make up the Forum focus on distance learning, they fit poorly into a data infrastructure designed for first-time, full-time students, and some members have individually taken strides on these issues. The Forum is currently discussing a series of change projects in three innovation domains, one of which is Data, Outcomes & Transparency.

For instance, WGU President Scott Pulsipher says, "We have partnered with Gallup to track student satisfaction, as well as long-term outcomes. Those longer-term outcomes are our best evidence that a degree is creating value for our students." Clark Gilbert, president of BYU-Pathway Worldwide, notes the potential of new types of data to level the playing field for innovative models: "What will always limit the innovators is transparency of outcomes and measurement. In higher ed, we make decisions based on brand and prestige, with low transparency as to actual student outcomes. If we can elevate outcomes, innovators can be freed up to rethink the inputs."

Students attend college to make progress in their lives. We should measure that progress at all schools, but innovative schools serving nontraditional students are especially motivated to rethink the limits of our data infrastructure. A system that ranked schools by the progress their students make, accounting for where they started, wouldn't just upend the way we think about "good schools." It would also change the way schools operate, motivating them to serve all students well, not just their best customers.