How should quality assurance for competency-based ed work?


Aug 6, 2015

This post was first published on CompetencyWorks.

As online, competency-based learning gains steam in higher education, a critical question is emerging. If the federal government will fund competency-based programs through Title IV dollars, how should it think about regulating these programs?

Whenever a disruptive innovation emerges—and online, competency-based learning deployed in the right business model is a disruptive innovation—it doesn’t look as good as existing services according to the old metrics of performance. Disruptions tend to be simpler than existing services; they start by solving undemanding problems. As a result, a sector’s leading organizations often dismiss them because they don’t measure up to traditional notions of quality. But disruptions also redefine what quality and performance mean. As such, they don’t fit neatly into existing regulatory structures and often spawn new ones over time. Judging them by the old regulations can also limit their innovative potential by confining them to replicating parts of the existing system’s value propositions rather than delivering on their new one.

For online, competency-based programs, the old metrics are those focused on inputs. These new programs often lack breadth, generally do not conduct academic research, and don’t have grassy green quads or traditional libraries. Assessing them on those criteria, or on their faculty members’ academic credentials and course requirements, doesn’t make much sense. Nor do one-size-fits-all regulations governing how students interact with faculty online, especially given that more interaction in online courses isn’t always better for students. Regulations limiting the geography in which approved programs can serve students are counterproductive as well for a medium that knows no geographic boundaries.

But giving providers carte blanche, with complete deregulation and access to government dollars, doesn’t make much sense either. For more than half a century, the country’s dominant higher education policies have focused on expanding access: allowing more students to afford higher education, regardless of its true and total cost, through mechanisms such as Pell Grants and other financial aid programs, subsidies, and low-interest student loans. The government has in essence been a customer of higher education that pays for the enrollment of students, not for their successful completion and placement into good jobs. Accordingly, the traditional higher education sector, considered as a whole, has followed its incentives: it has expanded access but produced highly uneven student success rates at best. Even though the first wave of online learning innovation unleashed some great benefits for students and employers, because the incentives rewarded expanding access above all else, the stories of low graduation rates and of students facing high debt with limited prospects of repaying it were entirely predictable. The government should learn from these lessons and shift from funding based on inputs to incentivizing the outcomes it wants from higher education.

Doing this is trickier than it might seem at first blush. As competency-based programs emerge, we have little experience regulating them outside of the current constraints. Many of the so-called outcome-based regulation efforts in the states have in fact focused on the simple output of student graduation rather than on the true underlying quality of programs, such that these regulatory efforts may be creating perverse incentives for colleges and universities.

So should the government simply create common assessments, or use existing ones, to measure the underlying quality of these programs? In my May piece for CompetencyWorks, Michelle Weise and I wrote about how important assessments are to a quality competency-based program and how they can’t be treated as an afterthought. After identifying the competencies a student must master in a given program, creating quality assessments is critical to driving quality learning.

But a government-driven assessment program is unlikely to work. These competency-based programs are emerging in a wide variety of fields that are constantly changing, from IT to the liberal arts. Common, government-mandated assessments would be difficult and unwieldy at best and could stunt innovation. Even pegging program quality to a broader assessment—the Collegiate Learning Assessment, for example, is a favorite of many for its measurement of underlying critical thinking, analytic reasoning, problem solving, and written communication skills—may not be much better, because the underlying skills that employers say they need manifest themselves in very different ways depending on the domain in which someone works. Such an assessment may also miss the very reason students attend certain programs. The CLA, for example, is likely to tell us very little about program quality at accredited cosmetology colleges—incidentally, a type of school begging for competency-based learning to free students from what can amount to an apprenticeship they must keep paying for, even after they have mastered the craft, until the program’s full allotted time elapses.

A better path forward would be for the federal government to encourage a variety of experiments over the coming years that try out different approaches in a controlled way, all while releasing programs from the current input-based constraints, to learn what works, in what combinations and circumstances, and with what unintended consequences. A key tenet of all these efforts is that employers, along with students, are likely best positioned to judge program quality—and programs that align their assessments to the competencies employers need will likely be in a strong position. Possible paths for regulating competency-based programs include:

- creating risk-sharing programs with the new institutions and broadening the use of income share agreements;
- giving students the dollars up front and creating far more data transparency around program outcomes so students can make more informed choices;
- paying programs based on student outcomes, in accordance with a concept we proposed a while back called the QV Index, which aligns to employment and broader student satisfaction outcomes;
- experimenting with accreditors that operate like charter authorizers, or with employer organizations that serve as de facto accrediting bodies; and
- encouraging states to try more experimentation themselves across a broader range of ideas.

It is also critical to move beyond all-or-nothing access to federal dollars, which can create race-to-the-bottom incentives, and to consider all of a program’s expenditures rather than only its tuition cost, so that students capture real savings when costs are reduced.

Although online, competency-based programs have been around for some time, opening up federal funding at scale to lots of new players across the higher education system has never been done before. As the government gets into this game, it is critical that, in seeking to assure quality, it harness rather than limit the potential of competency-based learning to be fundamentally about a student’s learning. The nation has yet to master that.

Michael is a co-founder and distinguished fellow at the Clayton Christensen Institute. He currently serves as Chairman of the Clayton Christensen Institute and works as a senior strategist at Guild Education.