A New Model for OER Sustainability and Continuous Improvement

I’ve been interested in sustainability models for OER for decades. (Longtime readers may recall that the research group I founded at Utah State University in 2003, the Open Sustainable Learning Opportunities group, became The Center for Open and Sustainable Learning in 2005, which I directed until I moved to BYU.) And for just as long, I’ve believed that there are useful lessons for us to learn on this topic from open source software – OER’s far more popular and influential sibling. The empirical work on the sustainability of open source software (e.g., Schweik and English, 2012) is significantly further along than anything in OER, and there have been many more interesting experiments in open source sustainability than in OER.

One of those experiments began in 1995 and was called the “Netscape Bugs Bounty” program:

The contest begins with the beta versions of Netscape Navigator 2.0 — available for Windows, Macintosh and X Window System operating environments — that are on the Internet today. As the rules will explain in detail, users who are the first to report a particular bug will be rewarded with various prizes depending on the bug class: users reporting significant security bugs as judged by Netscape will collect a cash prize; users finding any security bugs will win Netscape merchandise; and users finding other serious bugs will be eligible to win a choice of items from the Netscape General Store.

In other words, the program offered a reward or “bounty” to any user who could identify problems in Netscape Navigator 2.0 beta.

“We are continuing to encourage users to provide feedback on new versions of our software, and the Netscape Bugs Bounty is a natural extension of that process,” said Mike Homer, vice president of marketing at Netscape. “By rewarding users for quickly identifying and reporting bugs back to us, this program will encourage an extensive, open review of Netscape Navigator 2.0 and will help us to continue to create products of the highest quality.”

Bug bounty programs proved incredibly successful and have been adopted by many of the major creators of open source software. According to Wikipedia, there are bug bounty programs at Mozilla, Google, Reddit, Square, Microsoft, and Facebook, among other companies and organizations.

Last week Lumen announced that we are revising and remixing the bug bounty idea as a way to engage a wider group of people in sustaining and improving OER. But before we could do that, we had to answer a few questions. Some of these questions were pretty straightforward to answer, while others required a little more creativity.

  1. What does “bug” mean in the context of OER? Because they’re educational materials – that’s the “E” – the primary job of OER is supporting learning. Inasmuch as that is true, a “bug” in OER would be any portion of the materials that doesn’t effectively support student learning.
  2. How would you find a bug in OER? This is exactly what RISE analysis is designed to do – use learning data to empirically identify the least effective parts of OER. (See this chapter for more about RISE analysis, and the sketch after this list for a minimal illustration.)
  3. How would you fix a bug in OER? This has been perhaps the most fertile question to dig into, and has led me to a very rewarding insight which, for me personally, is of a similar magnitude to “education is sharing” and “openness facilitates the unexpected.” It’s this – “making changes is easy, but making improvements is hard.” By definition, anyone can make changes to OER. But do those changes result in OER that are merely different, or in OER that are better? And here we have, very helpfully, a clear definition of “better” – do the updated OER significantly improve student learning? This is an empirical question and can be answered objectively using a randomized controlled trial, or what in online circles is known as an A/B test.
  4. How would you incentivize a person to hunt for bugs in OER? Lumen can answer this question by doing the hunting ourselves, through RISE analyses. We’ll identify the “bugs” – the places where OER are less effective at supporting student learning – and share that list out.
  5. How would you incentivize a person to fix bugs in OER? The preferred method in higher education seems to be grants, and it makes sense for this effort to adopt that approach. RISE analysis is done at the individual learning outcome level, and a typical Lumen course has between 150 and 200 learning outcomes. So we’re talking about a grant that would fund the improvement of 1/150th of a course – essentially a single topic in a single chapter.
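
To make item 2 concrete, here is a minimal sketch of a RISE-style analysis, assuming a table with one row per learning outcome that records how heavily the aligned resources are used and how students score on the aligned assessment items. The column names, the toy data, and the simple quadrant rule are all illustrative assumptions on my part, not Lumen’s actual implementation.

```python
# A minimal sketch of a RISE-style analysis (illustrative, not Lumen's code).
import pandas as pd

def flag_rise_bugs(df: pd.DataFrame) -> pd.DataFrame:
    """Flag learning outcomes whose resources are heavily used but whose
    aligned assessment scores are low: the "bugs" worth improving first."""
    use_z = (df["resource_use"] - df["resource_use"].mean()) / df["resource_use"].std()
    mastery_z = (df["mastery"] - df["mastery"].mean()) / df["mastery"].std()
    df = df.assign(use_z=use_z, mastery_z=mastery_z)
    # High-use / low-mastery quadrant: students are engaging with the
    # materials yet still performing poorly on the aligned items.
    bugs = df[(df["use_z"] > 0) & (df["mastery_z"] < 0)]
    return bugs.sort_values("mastery_z")  # worst offenders first

# Toy data: hypothetical outcomes from a single course.
data = pd.DataFrame({
    "outcome_id": ["LO-1", "LO-2", "LO-3", "LO-4"],
    "resource_use": [120, 340, 290, 80],   # e.g., pageviews per student
    "mastery": [0.85, 0.52, 0.61, 0.78],   # mean score on aligned items
})
print(flag_rise_bugs(data)[["outcome_id", "use_z", "mastery_z"]])
```

In this toy example, LO-2 and LO-3 would be flagged: their resources are used more than average, yet they produce below-average mastery.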

The Improve It Challenge

Lumen’s Improve It Challenge is a grant program. Here’s how it works in a nutshell:

  • You choose from a list of ten less effective OER we’ve identified.
  • You write a very short (2 pages or less) grant application, describing the kinds of improvements you would make to the OER.
  • The top proposal for each OER is awarded $250 to make the proposed changes.
  • In the next semester, Lumen A/B tests your new version to see if it improves student learning (see the sketch after this list).
  • People whose new version improves student learning receive a 10x bonus – $2500.
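
As a rough illustration of the A/B testing step, here is a minimal sketch of how the comparison might be analyzed, assuming students are randomly assigned to the original or the revised materials and that we compare their scores on the aligned assessment items. The data, variable names, and significance threshold are assumptions for illustration; an actual trial would also need to handle sample size, clustering within sections, and so on.

```python
# A minimal sketch of analyzing an OER A/B test (illustrative assumptions).
from scipy import stats

# Hypothetical scores on the aligned assessment items for two randomly
# assigned groups of students.
scores_original = [0.62, 0.55, 0.70, 0.48, 0.66, 0.59, 0.61, 0.53]
scores_revised  = [0.71, 0.64, 0.78, 0.60, 0.69, 0.75, 0.66, 0.72]

# Welch's t-test: is the revised version significantly better?
t_stat, p_value = stats.ttest_ind(
    scores_revised, scores_original, equal_var=False, alternative="greater"
)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Evidence that the revision improved student learning.")
else:
    print("No significant improvement detected.")
```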

$2500 is what a lot of campuses pay faculty to write an entire textbook through OER mini-grant programs. With the Improve It Challenge, that same amount is available for revising/remixing a single topic – your revised OER just has to be more effective than what it’s designed to replace.

Our intention is for the Improve It Challenge to be an ongoing program with an open call twice each year – one with updated OER due in time for use in fall term, and another with updates due in time for use during spring term.

Check out last week’s official announcement of the Improve It Challenge on the Lumen website. And apply before May 31!