Learning From Algorithms: Who Controls AI in Higher Ed, And Why It Matters (Part 2)

By Jeffrey R. Young     Nov 14, 2017

This article is part of the guide: EdSurge Live: A Town-Hall Style Video Forum.

Colleges often experiment with artificial intelligence to help spot when students need special help, part of an effort to draw predictions from data. But a rush to test—and possibly rely on—algorithms raises many questions, none bigger than this: Could the data lead colleges to rethink how they operate to serve students?

Those were key points that emerged during an online panel discussion EdSurge hosted on the promises and perils of AI in education. It was the first episode of our new series of video town halls called EdSurge Live. The hour-long discussion was so rich, we’re releasing it in two installments. (Here’s Part 1 if you missed it.)

We invited three guests: Candace Thille, an assistant professor of education at Stanford Graduate School of Education; Mark Milliron, co-founder and chief learning officer at Civitas Learning; and Kyle Bowen, educational technology services director at Penn State University.

Read a transcript of the second half of the conversation below, which has been lightly edited and condensed for clarity. Or listen to highlights from the entire session on this week’s EdSurge On Air podcast.

EdSurge: When colleges buy or develop software that involves artificial intelligence, how do the issues and concerns change?

Kyle Bowen: It is important for institutions to begin to develop these kinds of capabilities, and a lot of that starts with creating an ecosystem for data at your university—being able to access that data, understand where it comes from and what the actual data elements mean. Then you can more effectively work with partners in the industry, with your own internal researchers and with software developers inside the institution to begin to explore some of the applications of this kind of technology.

Then there’s another aspect of this, which is looking at the ways that these tools can help take on other challenges that universities often face. One area where we’re spending a lot of time and effort researching is how AI and machine learning can be used to help drive the design and development of open-education materials. How can these kinds of technologies be used to help our students have deeper conversations online? How can we use these kinds of technologies to take on problems that otherwise seem too complex?

At Penn State this is an area where we’re bringing together faculty from across the university. These aren’t just computer-science faculty; they come from engineering, from health, from technology and education, thinking critically about how we apply this to solve other big problems facing education. I think when we talk about the uses of AI and machine learning in this space, that’s the green field.

We are just beginning to understand the implications and some of the complex legal issues that surround something like that. For example, we’ve worked on projects with expressive algorithms, ones that can generate new content. When a machine generates new content, who’s the author? It raises questions that we’ve never faced before, and in many ways it’s a discussion that almost seems like science fiction at times.

Here’s a question from the audience: “What use cases do you think AI will impact first or most? Ones used directly for academics (like Knewton) or ones that are non-academic?”

Candace Thille: I don’t know which one is going to have an impact first. I think they’re going to happen simultaneously. What I pay most attention to is how these technologies are going to be used to actually support the teaching and learning process. And I would argue the Jill Watson example is an example of that.

At Georgia Tech, they’re using Jill Watson as an AI teaching assistant, so it is still serving as a pedagogical agent. Knewton reflects this notion of personalized and adaptive instruction. And what Knewton does—and what any personalized and adaptive technology does—is, you design tasks for students to do in the interface.

As students interact with those tasks, every interaction is a piece of evidence that then gets put through a model.
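To make that loop concrete, here is a minimal sketch of how interaction evidence might update a model of student mastery, using classic Bayesian Knowledge Tracing as a stand-in. The parameter values and function names are illustrative assumptions, not Knewton’s actual system or any specific product.

```python
# A minimal, illustrative sketch of the evidence-to-model loop described above,
# using classic Bayesian Knowledge Tracing (BKT). Parameter values and names
# are hypothetical, not drawn from any particular vendor's system.

def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the estimated probability that a student knows a skill
    after observing one interaction (a correct or incorrect answer)."""
    if correct:
        # Posterior probability of mastery given a correct answer
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        # Posterior probability of mastery given an incorrect answer
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # Account for the chance the student learned the skill from this task
    return posterior + (1 - posterior) * p_learn

# Each interaction is one piece of evidence fed through the model
p_known = 0.3  # prior estimate of mastery for this skill
for answer_correct in [True, False, True, True]:
    p_known = bkt_update(p_known, answer_correct)
    print(f"estimated mastery: {p_known:.2f}")
```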

I would argue that the algorithms and the state of our science aren’t yet there to make predictions good enough that you could let a system autonomously make decisions [about learning]. The goal here is going to be figuring out which of those decisions we give to humans to make, and which ones we can let a system make autonomously. That’s what human-in-the-loop AI is, and that’s where we should be focusing our research.

Mark Milliron: That interaction between what the machine can do well and what the person can do well is a bright area for learning. Current AI models are commoditized, and the modeling is not magic. You need to examine which modeling technique you’re using for which purpose, and then you can actually look at the outcomes to see how it works.

Bowen: A lot of work to this point has focused on how we identify at-risk students so we can intervene before they have challenges, right? That’s important and critical work. But at the same time, there’s also an opportunity to raise ceilings. How do we identify students who aren’t living up to their potential but could be, and what more can we do in those kinds of spaces? I think there’s a great opportunity there.

A really important part of understanding work in this area is [knowing how] people and machines can work together. So when we talk about AI, we imagine robots, we imagine science fiction, we imagine Skynet overthrowing the world. These are the things that we imagine, but the reality is that it’s not nearly that sexy. It’s not nearly something that Michael Bay is going to make a film out of. The reality is that some of the really interesting applications of this are people and computers working together to think about or to explore different problems or ideas.

The examples I like to use are the word suggestions you get in your text messages. The software tries to fill in the message, and you always get goofy stuff, so it doesn’t always work the way you want it to. Or think about the way that Netflix provides recommendations to you. These ways of thinking about people working together with software change the dynamic and also open up new opportunities.

Thille: In my open statistics course, I have hundreds of faculty all over the country who are using that course as an open-education resource. If they want to put a question in, we want to be able to give them feedback on how well that question performed—not only for their students, but for the thousands of students who are using the course. We’re about to turn on [a feature that lets professors test how their question performed]. So we’re not only trying to help the learners get better at statistics, but also help faculty get better at writing statistics questions.
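As a rough illustration of the kind of per-question feedback Thille describes, the sketch below computes two classical item statistics: difficulty (the share of students answering correctly) and discrimination (a point-biserial correlation with overall score). The data and function names are hypothetical, not the actual feature in her course.

```python
# Hypothetical sketch of per-question feedback for faculty authors, using
# classical item statistics: difficulty and point-biserial discrimination.
from statistics import mean, pstdev

def item_stats(item_correct, total_scores):
    """item_correct: 0/1 responses to one question, one per student;
    total_scores: each student's overall score on the assessment."""
    difficulty = mean(item_correct)            # share answering correctly
    mu, sigma = mean(total_scores), pstdev(total_scores)
    if sigma == 0 or difficulty in (0, 1):
        return difficulty, 0.0                 # no spread to correlate against
    # Point-biserial correlation between the item response and total score
    cov = mean(c * s for c, s in zip(item_correct, total_scores)) - difficulty * mu
    discrimination = cov / (sigma * (difficulty * (1 - difficulty)) ** 0.5)
    return difficulty, discrimination

correct = [1, 0, 1, 1, 0, 1]               # toy responses to one question
scores = [0.9, 0.4, 0.8, 0.7, 0.5, 0.95]   # toy overall assessment scores
d, r = item_stats(correct, scores)
print(f"difficulty: {d:.2f}, discrimination: {r:.2f}")
```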

We’re living in this time of widespread disillusionment about some of the tools we already have, whether it’s Facebook and all the questions about fake news and actors manipulating these platforms. Is there something in education that you think AI should not be applied to? How much do you worry about a dark future coming, even though you have good intentions?

Milliron: I think you can go dark quickly on this and think about all the ways these tools can be misused. I am deeply concerned. In fact, Charles Thornburgh, my co-founder, and I often talk about the fact that there are some people who should never, ever use analytics or AI, because they’re going to use it for selfish purposes or in ways that manipulate students.

Above all, I do think there’s this ethic of Do No Harm. We make sure whatever we do is about optimizing the student journey.

Bowen: I think, as a general rule, we should think about AI in terms of enablement and not enforcement. Anytime you’re using data to enforce something, that’s just the wrong way of approaching it. The essential piece is that we use the technology in a way that enables students, enables faculty, enables advisors. In many ways the use of this data is human-driven.

Thille: I guess there isn’t anything I think we shouldn’t use it for; I just don’t think we should misuse it. I think it presents an amazing opportunity for higher education. Higher education institutions have three core missions: a research mission for creating new knowledge, a mission for disseminating knowledge (the teaching mission), and the community-service mission. What this technology gives us the opportunity to do is bring those three missions together much more tightly than they ever have been.

If you think of higher education as an industry, it is one of the few in the privileged position of having its researchers, its practitioners, and the people who are supposed to benefit from the research and practice all co-located, both geographically and, most importantly, [focused on the same] mission. So we can break this linear technology-transfer model, and have researchers and practitioners and learners co-creating the interventions and learning from them, so that we’re supporting student success and building our fundamental understanding simultaneously. That’s what I think the technology and the AI models could be used for.

But to do that, we have to be willing to not only use the patterns to look at the students, but also use the patterns to look at ourselves.

