From here to there: Musings about the path to having good OER for every course on campus

I spend most of my time on fairly tactical thinking and work focused on moving OER adoption forward in the US higher education space. But from time to time I still step back and worry about field-level issues. For example, I spend a fair amount of time thinking about the future of learning materials writ large. I made what was probably the clearest statement of my vision for the future of learning materials in my Shuttleworth Fellowship application several years ago:

My long-term goal is to create a world where OER are used pervasively throughout primary, secondary, and post-secondary schools. In this vision of the world, OER replace traditionally copyrighted, expensive textbooks for all primary, secondary, and post-secondary courses. Organizations, faculty, and students at all three levels collaborate to create and improve an openly licensed content infrastructure that dramatically increases student success, reduces the cost of education, and supports rapid experimentation and innovation in education.

Now, make no mistake – OER is a means, not an end. My end goal isn’t to increase OER adoption. My end goal is to improve student learning, and that can be done in extraordinarily powerful ways when teachers and students are able to leverage the unique affordances of open educational resources.

Content is infrastructure, and working to increase the adoption of OER is like working to build an interstate highway system. No one cares about the roads themselves. Rather, people get excited about all the places they can go and all the things they can do once the highway system is in place. While early adopters absolutely get excited about OER because open is awesome, “normal” faculty and students are more likely to get excited about the things OER let them do – improve student success, decrease costs, expand access, etc.

So when I say that one of my goals in life is to see OER “used pervasively” throughout education, it’s not because OER adoption is my end goal. It’s because we have to get all the roads paved in order to enable the countless, unimagined, amazing things that can happen once the infrastructure exists. But there’s a very, very long path from here to pervasive adoption. Unless there’s a shorter one.

A few months ago, I had the opportunity to participate in a systems thinking panel at the Ashoka U conference. One of the benefits of building a systems map of the space you work in is getting to see the whole web at once, and having the chance to see how pulling on one thread might cause two others to vibrate and a third to snap. It provides an opportunity to see some previously hidden levers for moving your work forward, as well as the potential unintended consequences of your work. Anyway, this all led me to consider again the system-level effects of our work in OER and exactly what kind of world would result from “winning” (as open education advocates might describe it). This turned out to be a rewarding exercise with surprises, twists, and turns.

To paint the picture, let me make two points, and then bring them together.

First point.

We’ve been at this for a while. Depending on where you start the timeline, this summer (2019) the OER movement will be 21 years old. Over this 21-year span the Hewlett Foundation and other private foundations have invested well over $100M in OER. Literally billions of dollars of federal grant programs have had open licensing requirements placed on them. And many institutions have started their own internal OER grant programs. It might lead one to ask: after two decades and hundreds of millions of dollars, how many university courses can be taught using OER today?

To try to answer this question, I did a quick count of the titles listed in the Open Textbook Library, which is the biggest referatory of “whole course OER” that I’m aware of. There appear to be around 620 titles on the OTL’s various category pages. Several of these books appear on more than one category page, and so are double counted. Many of the books overlap in the course they are designed to support, and so are counted multiple times. (There are actually so many open calculus textbooks that – and I’m not making this up – one is called Yet Another Calculus Text.) Other books in the library appear to be great supplements, but not large enough in scope to substitute for all of the traditionally copyrighted materials (TCM) normally used in a course. To offset all this double counting, there is the separate issue of the many “whole course” OER that OTL refuses to index because they don’t meet the OTL’s definition of “textbook.” So, imagine a deduplicated OTL collection, and then add in all the “whole course OER” that OTL refuses to index. I think the most generous estimate you could make is that today there are around 300 distinct courses worth of OER. (I’m going to ignore for the moment the fact that most of these do not include the full set of materials that many faculty will require before adopting (e.g., assessment banks, assignments, online homework tools).) To keep the conversation simple, let’s say there are 300 courses worth of OER available today.

But is 300 a lot or a little? How many courses are offered at an average institution, anyway? Looking from small to large here in Utah, Salt Lake Community College lists over 2,500 courses in its catalog. On the other end of the scale, Utah State University has over 6,500 courses in its catalog. Again, to keep the conversation simple, let’s split the difference and say that at the average institution we need about 4,500 courses worth of OER for OER to be “used pervasively.”

So here’s the first point. After 20 years and hundreds of millions of dollars of philanthropic, governmental, and institutional investment, sufficient OER exist for somewhere around 300 / 4,500 ≈ 6.7% of the courses offered at an average institution. Now, I’m confident the specifics of this math are wrong. But I’m also confident they are directionally correct – after all the time and money and energy we’ve spent, sufficient OER exist for only a tiny fraction of the courses offered on the average campus. The other 93% of courses on campus have no choice other than to continue to rely on traditionally copyrighted materials, most of which are provided by a handful of publishers.
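
If you want to check the arithmetic, here it is in one place. This is a quick Python sketch of the back-of-envelope numbers above; nothing in it is new data.

```python
# Back-of-envelope coverage estimate, using the rough figures discussed above.
oer_courses = 300        # generous estimate of distinct courses' worth of OER today
catalog_courses = 4500   # rough midpoint of the SLCC (~2,500) and USU (~6,500) catalogs

coverage = oer_courses / catalog_courses
print(f"OER coverage: {coverage:.1%}")                       # -> OER coverage: 6.7%
print(f"Courses still relying on TCM: {1 - coverage:.1%}")   # -> Courses still relying on TCM: 93.3%
```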

Second point.

The creation of much of the whole course OER that exists today has been funded through philanthropy. It’s the nature of most philanthropy to want to see the biggest social impact return on investment from its grant dollars. This means that most of the whole course OER that exist today are for high enrollment courses (e.g., see the OpenStax catalog), where the number of students impacted – and the amount of money they saved – would be the highest.

It turns out that the handful of publishers mentioned above also focus a lot of their attention on this segment of the market. This makes sense – one successful title in a course like Introduction to Psychology could outsell 20 “successful” graduate-level titles. If you look at textbook sales data, you find the omnipresent long tail – a very few high-enrollment courses account for a large share of textbook sales, while upper-level and graduate courses account for a vanishingly small number of sales. Consequently, in terms of the learning materials side of their businesses, these high enrollment courses are where traditional publishers make much of their money and, to some extent, subsidize the production and distribution of the niche books used in upper-level and graduate-level courses.

So here’s the second point. If rates of OER adoption in high enrollment courses increase substantially over time (as, presumably, OER advocates hope they will), taking these adoptions and their associated revenues away from publishers could undermine publishers’ ability to create, maintain, and provide learning materials for upper-level and graduate courses.

Point 1 + Point 2 = ?

Today we have OER sufficient to teach something like 7% (300 / 4,500) of the courses offered at a typical campus. Extrapolating from our last two decades of OER production, we’re something like 280 years away from having all the OER we need to replace TCM in every course, assuming we keep using current models of OER production (see the sketch below). We’re simultaneously at a point where, if the OER that do exist find substantially more adoption success, we may undercut the funding mechanisms responsible for the creation and maintenance of the learning materials used in the other 93% of courses. There’s a possible future – not necessarily a likely future, but a possible future – where there are no OER and no viable providers of TCM for those other 93% of courses. That would be “a problem.”
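
Here is the extrapolation behind that figure: a back-of-envelope sketch that assumes, unrealistically, that OER production continues at exactly its historical rate.

```python
# Naive extrapolation: how long until every course has OER, at the historical rate?
oer_courses = 300        # courses' worth of OER produced so far
years_so_far = 20        # roughly two decades of production
catalog_courses = 4500   # courses at an "average" institution

rate = oer_courses / years_so_far           # ~15 courses' worth of OER per year
remaining = catalog_courses - oer_courses   # 4,200 courses still without OER
print(f"Years remaining at the current rate: ~{remaining / rate:.0f}")   # -> ~280
```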

You may not believe that taking away the majority of general education enrollments from publishers would hinder their ability to provide learning materials farther out in the long tail. And you don’t have to. If, like me, you have a goal of seeing OER “used pervasively” across campus, you still have to figure out where the OER for the other 93% of courses is going to come from. Let’s keep chasing this.

A Lesson from Open Source

The Linux operating system is one of the most incredible success stories of the open source software movement. As of 2017, the Linux operating system:

  • ran 82 percent of the world’s smartphones,
  • had 99 percent of the supercomputer market share,
  • ran 90 percent of the public cloud workload, and
  • had 62 percent of the embedded market share.

The core of the Linux operating system is the Linux kernel. Wikipedia defines an operating system kernel as “the most fundamental part of an operating system. It can be thought of as the program which controls all other programs on the computer.” The Linux Foundation issues an annual report about activities related to kernel development. The 2017 Linux Kernel Development Report reveals the absolutely mind-blowing scale of collaborative improvement of the open source Linux kernel:

Over the entire 406-day period covered by this report, the community merged changes at an average rate of 8.5 patches/hour (p. 6).
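
Multiplying that quoted rate out gives a sense of the scale (my own arithmetic, not an additional figure from the report):

```python
# 8.5 patches/hour, around the clock, for the 406 days covered by the report.
patches_per_hour = 8.5
days = 406

total_changes = patches_per_hour * 24 * days
print(f"~{total_changes:,.0f} changes merged over the period")   # -> ~82,824 changes merged over the period
```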

Over eight improvements per hour, 24 hours a day, seven days a week, for over 400 days. 1,681 individual developers contributed. Why? What incentivizes this many people to participate at this high a level in an open source project?

(Kate Bowles gave us a hint in her recent OER19 keynote when she said, “people can only produce resources if they’re supported to do so.”)

There is a popular myth that most of the people writing open source software do this work in their spare time for altruistic reasons. The data in the Kernel Development Report shatter this illusion. “Developers who are known to be doing this work on their own, with no financial contribution happening from any company” represented only 8.2% of all the changes contributed to the kernel during this period (p. 14). In other words, the overwhelming majority of the people who contribute to the Linux kernel are people whose job descriptions include ‘writing code that you will give away under an open source license that allows commercialization by anyone – including our competitors.’ Over one third (34.5%) of all the contributions to the 4.8–4.13 releases of the Linux kernel came from employees at Intel, IBM, Google, Facebook, Samsung, Red Hat, and SUSE. These are huge companies that compete directly with each other in many ways. But they are also huge companies that pay their employees to give some of their work away to everyone – including their competitors – under open licenses.
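
In round numbers, that means sponsored contributions outnumbered volunteer contributions by better than ten to one. Here is a quick sketch of that ratio, under the simplifying assumption that everything not known to be volunteer work was employer-supported (the report also has an “unknown” category, so this is an approximation):

```python
# Rough ratio of sponsored to volunteer contributions, from the report's percentages.
volunteer_share = 0.082                 # known-unpaid contributions (p. 14)
sponsored_share = 1 - volunteer_share   # everything else, treated as employer-supported

print(f"Sponsored-to-volunteer ratio: ~{sponsored_share / volunteer_share:.0f}x")   # -> ~11x
```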

Why on earth would they do that? Let me explain, again in two points.

First point.

The Linux kernel is incredibly complex. It is “the program which controls all other programs on the computer.” It has to know how to communicate with every piece of hardware in the universe (keyboards, mice, monitors, printers, wired and wireless networking devices, USB drives, virtual reality displays, speakers, smartphones, etc.). It has to support desktop, server, and cloud use cases. It needs to be secured against a broad range of threats. And so on. Suffice it to say, the kernel is an incredibly complex – and, therefore, incredibly expensive – piece of software.

Second point.

The Linux kernel is entirely undifferentiating from an end user’s point of view. The kernel is so low level and uninteresting that you’d probably never heard of or thought about it before reading this essay. You don’t choose a smartphone or a search engine or a social network or an online office suite because of the amazing kernel buried at the bottom of the operating system it runs on. You make your choices based on the amazing products and services people build on top of the kernel. The things you actually care about in smartphones or search engines or laptops happen far above the level of the kernel. (Sounding familiar yet?)

Point 1 + Point 2 = ?

The Linux kernel is both incredibly expensive to maintain and completely undifferentiating. What incentive is there, then, for a company to bear the full cost of developing and maintaining a complex product that gives the company no advantage in the marketplace? There’s really not one. It makes much more sense to spread the cost and effort of developing and maintaining the kernel across the community of companies, organizations, and individuals who can use and benefit from it. And from the perspective of any individual company or organization, it makes far more economic sense to employ a handful of people to work on the open source Linux kernel in order to make sure that it always meets the company’s needs than it does for a company to try to develop its own proprietary kernel and compete against the entire open source community.

This is the model Apple uses. They start with an open source kernel and then build amazing things on top of it to make the macOS and iOS operating systems. Yes, there are some software developers who really love downloading the source, typing “./configure; make; make install” at the command line, and resolving all the compile errors manually. But most people want a laptop or a phone with an operating system that “just works.”

So why do companies pay their employees to write code that they give away to their competitors? It’s in the self-interest of for-profit companies and non-profit organizations alike to collaborate on the development and improvement of the open source Linux kernel. The kernel is infrastructure. Companies and organizations pay their employees to support the ongoing improvement of the kernel so that they can get on with their real purpose for existing – building and delivering the value-added, differentiating products and services people actually want from them.

Or, as a well-timed article in Wired yesterday put it, “In short, open source provides a way for companies to collaborate on technology that’s mutually beneficial.”

A Path to the Other 93%?

If you care about the question “how do we fill out the catalog of OER options so that every course on campus can use OER?”, the Linux kernel provides a fascinating example. People who are paid by their employers to work on open source software contributed more than ten times as many improvements as volunteers did. As I’ve argued above, content is a lot like the kernel – it’s undifferentiating infrastructure. So how do we get more employers to start paying their people to work on OER, so that we can increase the amount of OER being created and improved by 10x or more?

I’ll explain with a final pair of points.

First point.

Publishers’ public statements would have you believe that the most important difference between TCM and OER is price. That’s because the price of their products is something they can change rather quickly, without upending their entire business model, whereas copyright licensing is not. But there are a couple of reasons why the economics already favor publishers moving from TCM to OER.

The first reason is that traditional publishers have difficulty competing with OER even when it comes only to price. One drag on their financial model is the royalty payments that publishers are contractually obligated to make to authors for each and every use of their content. There are no economies of scale with royalties – the more students use a set of learning materials, the more the publisher owes the author. Another is the ever-escalating technology war against students who just want to do normal digital things with their digital content – that is, publishers’ continued investment in digital rights management technology that prevents basic actions like copying and pasting in order to make sure their copyrighted content isn’t copied illicitly. These costs (and others), which never go away, are like huge anchors dragging behind traditional publishers’ financial models. Companies and organizations that work only with OER don’t have these costs, and as long as publishers have to pay them it will be very hard for publishers to compete with organizations that don’t. So competitive pressures are already driving publishers toward OER, even if they don’t fully realize it yet. Shedding these legacy costs by switching as quickly as possible from TCM to OER in new or updated contracts with authors would make publishers much more efficient financially.
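
To see why royalties never benefit from scale while an investment in OER does, consider a deliberately simple illustration. The numbers below are invented for the sake of the example (a hypothetical per-student royalty and a hypothetical one-time cost of producing an open alternative), not figures from any publisher contract.

```python
# Hypothetical illustration only: royalty costs scale with every student, forever,
# while a one-time investment in OER amortizes as adoption grows.
royalty_per_student = 20.0          # invented per-student royalty owed to an author
one_time_oer_investment = 200_000   # invented cost to create an open alternative

for students in (10_000, 100_000, 1_000_000):
    royalty_total = royalty_per_student * students              # grows linearly, with no ceiling
    oer_cost_per_student = one_time_oer_investment / students   # shrinks as adoption grows
    print(f"{students:>9,} students: royalties ${royalty_total:>12,.0f}  "
          f"vs. OER ~${oer_cost_per_student:.2f}/student")
```

The specific numbers don’t matter; the shape does. Royalty obligations grow in lockstep with adoption, while the cost of an openly licensed alternative is spread across every additional student who uses it.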

The second reason is that, if you’ve listened to anything a major publisher has said in the last several years, they’ve talked about everything but content. They’ve talked about adaptive technology, they’ve talked about automatically grading homework, they’ve talked about providing immediate feedback… They’ve talked about all the (hopefully) differentiating value they’re trying to add on top of content. In their own way, publishers have all but admitted that the words and images we all grew up knowing as textbooks are undifferentiating in the age of the internet. Traditional textbook content – words and images – is just like the operating system kernel: kind of boring. Everything interesting is being built on top of it, from adaptive systems to OER-enabled pedagogy.

It must be true that publishers wish they could just assume that solid content is going to be there, doing a reasonable job of being content and an excellent job of being royalty-free, so they can get on with building the features and services they’re actually excited about on top of the content.

Second point.

You know who actually knows a thing or two about open? Our community! You know who’s in a great position to help publishers navigate the transition from an outdated model built around TCM to a new model built around OER? We are! Rather than investing our time and energy trying to one-up each other with ever more exaggerated negative metaphors for publishers, we should spend some time and energy making concrete, tactical suggestions about how they can make the transition to open successfully.

This work will require us to invest time and energy in understanding publishers’ goals, business models, and operations. For-profit publishers exist in an entirely different culture from academia. Unless you’ve spent time really listening to publishers and trying to understand what they’re trying to do, your head and heart are probably full of misconceptions and stereotypes about them (like the misconceptions and stereotypes you may have about people from another culture or country that you’ve never experienced directly). We can’t effectively influence without sincerely working to understand. And while it is true this is time and effort that could be spent on some other forms of advocacy, it’s hard to look around and find another example of an approach that could unleash an extra 10x effort in creating and improving OER.

It shouldn’t be left up to those on the outside looking in to try to find their way through the maze that is our community. We should be helping them find viable paths from peripheral participation on the edge of our community into the core of what we’re trying to do. Among other things, this will require some patience on our part with publishers’ stumbling and faltering first steps at moving their practice and business models toward openness. And, if we approach these relationships with humility, there’s a thing or two that we can learn from publishers, too. But that’s a whole separate set of reasons for you, dear reader, to get upset at me. I’ll save those for another post.

Point 1 + Point 2 = ?

There are financial reasons for publishers to make the transition away from TCM to OER. The open education community can help that transition happen faster and more smoothly – if we will.

Conclusion

I really want to see OER being “used pervasively” across institutions. That can’t happen until good OER options exist for every single course offered on campus. I believe the last 20 years have made it clear that we’re not going to get there with our current ways of thinking and our current models for creating and sustaining OER.

The only way to get there will be by thinking WAY outside the box. It may be my limited imagination, but I can’t think of a more counterintuitive, outside-the-box approach than having commercial publishers do most of the work of creating and sustaining the OER we need for the other 93% of courses on campus. The example of the Linux kernel shows that this is completely possible.

Once the open content infrastructure exists and is widely adopted, a wide range of absolutely amazing things will follow. It won’t matter whether we call them OER-enabled pedagogy, or open pedagogy, or open educational practices, or one of the dozens of other names that will be invented to describe practices both new and old. What will matter is the incredible things teachers and students will do with the newly won rights they’ll have in their openly licensed learning materials. There is no way to say exactly what these things will be since in some sense the primary affordance of open is that “openness facilitates the unexpected.” But we can say with confidence that they will be awesome.
