Real Questions About Artificial Intelligence in Education

By Tony Wan     Jul 30, 2017

Don’t doubt it: Machine learning is hot—and getting hotter.

For the past two years, public interest has been growing in complex algorithms that automatically “learn” and improve from their own operations, or experience, rather than from explicit programming. Call it “artificial intelligence,” or (better) “machine learning.” Such work has, in fact, been going on for decades. (The Association for the Advancement of Artificial Intelligence, for instance, got rolling in 1979; some date the ideas back to the Greeks, or at least to the 1940s and the early days of programmable digital computers.)

More recently, Shivon Zilis, an investor with Bloomberg Beta, has been building a landscape map of where machine learning is being applied across industries. Education makes the list. Some technologists are worried about the dangers: Elon Musk, for instance, has been apocalyptic in his predictions, as the New Yorker wrote, and sparred this past week with a more sanguine Mark Zuckerberg. (The Atlantic covers it here.)

Investors are nonetheless racing ahead: this week, the Chinese language-learning startup Liulishuo, which uses machine learning algorithms to teach English to 45 million Chinese students, raised $100 million to accelerate its work.

To explore what machine learning could mean in education, EdSurge convened a meetup this past week in San Francisco with Adam Blum (CEO of OpenEd), Armen Pischdotchian (an academic technology mentor at IBM Watson), Kathy Benemann (CEO of EruditeAI), and Kirill Kireyev (founder of instaGrok and technology head at TextGenome and GYANT). EdSurge’s Tony Wan moderated the session. Here are a few excerpts from the conversation:

EdSurge: Artificial intelligence has been promising to transform education for generations. How close are we getting? What’s different now?

Benemann: There’s so much more data than ever before. For us at EruditeAI, data is more precious than revenue. With better data, we can better train our algorithms. But the important point to remember is that the makers of AI are ultimately us, humans.

Pischdotchian: If you think back on the education model of your earlier years, we called it the factory model: teachers broadly taught the same subject to all students. That isn’t what we’re talking about today. Groups such as the Chan Zuckerberg Initiative are looking to overhaul this model. Learning can’t be done according to the factory model any more; it isn’t sustainable. What will industry require for today’s kids to flourish doing what we call “New Collar” work?

Kireyev: We’re seeing a data explosion in education content—both data for and from students. We can see what students are doing far more rapidly than in the past. When kids work on Scratch, for instance, their work is web-based: you can see when they start watching a video, when they stop, when they’re bored. You get a lot of insight into their behavior. Transparent data collection is incredibly valuable. And there’s greater availability of the technology—things that you can literally use out of the box. So more people are trying to do things with AI and machine learning.

Okay, we’ve heard about the data explosion and about the need to change school models. What else is going on?

Blum: There are two big trends going on—and we’re just at the beginning of this. We work with the IMS Global Learning Consortium. Technical standards such as Caliper and xAPI (the Experience API) are just taking off. And second, there are a whole lot of areas (education is one of them) where you don’t have long-term data. So if you want to pick the next best thing [problem] for a student, you have to use a different approach called reinforcement learning: if I don’t have a million data records, I can explore as I go. It’s how Google’s DeepMind solved the AlphaGo challenge.
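To make “explore as you go” a little more concrete, here is a minimal Python sketch of one simple reinforcement-learning flavor, an epsilon-greedy bandit that learns which problem type to serve next. The class name, problem types, and reward signal are illustrative assumptions, not a description of any panelist’s actual system:

```python
import random

# A toy epsilon-greedy bandit: each "arm" is a candidate problem type,
# and the reward is whether the student answered correctly (1) or not (0).
# All names here are hypothetical, for illustration only.
class NextProblemPicker:
    def __init__(self, problem_types, epsilon=0.1):
        self.epsilon = epsilon                         # exploration rate
        self.counts = {p: 0 for p in problem_types}    # times each arm was tried
        self.values = {p: 0.0 for p in problem_types}  # running mean reward

    def pick(self):
        # Explore as we go: no large historical dataset required.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, problem, reward):
        self.counts[problem] += 1
        n = self.counts[problem]
        # Incremental mean: new = old + (reward - old) / n
        self.values[problem] += (reward - self.values[problem]) / n

picker = NextProblemPicker(["fractions", "decimals", "ratios"])
problem = picker.pick()           # serve this problem to the student
picker.update(problem, reward=1)  # 1 = answered correctly (illustrative)
```

The appeal for education is exactly what Blum describes: the algorithm improves from each interaction, so it can start working before millions of records have accumulated.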

What applications do we see of AI in education? Are we using it already?

Pischdotchian: This is about finding patterns in learning experiences. Say one person is stronger in math: can the system identify the challenge, and then open it up to teachers so they can be better tutors for their students? IBM is working with Sesame Street on this; the partnership is using universities as testbeds for the development of machine learning. It can also come in handy for teachers: we had a hackathon at MIT, and all the classrooms have cameras (and students know that). If a professor is delivering a lecture and doesn’t look up to see whether half the class is asleep, we can use facial recognition to detect emotions (such as boredom) and send the professor a message.

Benemann: Everywhere you look, people are asking what aspect of education (and everything else) can be touched by AI. What does this look like in the classroom? Will it free up the day? Will AI replace the teachers? Will AI help teachers free up their time so they can be “guides” for the students? Can adaptive platforms (such as ALEKS or Knewton) help students learn the facts and enable the teachers to be guides?

A Survey of the State of Machine Intelligence 3.0, from Shivon Zilis

Does that suggest that, without AI, the “adaptive” technology on the market isn’t really that adaptive?

Benemann: It’s a spectrum. Some tools are adaptive, but they’re saying they’re “AI” [and we still have a ways to go].

Kireyev: instaGrok is a visual search engine. We’re using machine learning to identify the important facts and concepts, and then letting the students pursue learning in any direction. They can synthesize it, organize it. TextGenome is another project: we’re building an infrastructure to do deep AI-based vocabulary development. We’re asking: given a student and grade level, what are the words they need to learn next?
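As a rough illustration of that question, here is a toy Python sketch that ranks candidate words for a student. The word list, frequency ranks, and grade levels are invented for the example; a system like TextGenome would learn such signals from large corpora rather than hand-coding them:

```python
# Invented word metadata: corpus frequency rank (lower = more common)
# and the grade level at which each word typically appears.
WORD_DATA = {
    "observe":    {"freq_rank": 900,  "grade": 5},
    "analyze":    {"freq_rank": 1200, "grade": 6},
    "hypothesis": {"freq_rank": 3500, "grade": 7},
    "synthesize": {"freq_rank": 8000, "grade": 9},
}

def next_words(known_words, grade_level, k=3):
    """Rank unknown words at or below the student's grade level by
    frequency, so the most broadly useful words come first."""
    candidates = [
        (meta["freq_rank"], word)
        for word, meta in WORD_DATA.items()
        if word not in known_words and meta["grade"] <= grade_level
    ]
    return [word for _, word in sorted(candidates)[:k]]

print(next_words(known_words={"observe"}, grade_level=7))
# -> ['analyze', 'hypothesis']
```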

Blum: At ACT (which acquired OpenEd), we’re focused on this question: if you’ve identified the learning gap, what’s the best instructional material to help the student? Not just ACT material; we want to give you the best instructional resource we can find. We use machine learning to pinpoint those.
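One common ingredient in that kind of matching is content similarity. This hypothetical sketch scores resources against a described gap using TF-IDF and cosine similarity; the resource list and gap description are made up, and a production system would also weigh usage outcomes and alignment metadata:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up resource descriptions and learning gap, for illustration only.
resources = [
    "Video lesson on adding fractions with unlike denominators",
    "Practice set for multiplying decimals",
    "Interactive tutorial on equivalent ratios",
]
gap = "student struggles with adding fractions"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(resources + [gap])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Recommend the resources most similar to the identified gap, best first.
for score, title in sorted(zip(scores, resources), reverse=True):
    print(f"{score:.2f}  {title}")
```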

In some areas, if you don’t use machine learning predictive models, you’re remiss. Take college admissions offices.

As you shift from statistical evaluation models to deep machine learning [involving neural networks], what hasn’t kept pace is “explainability.” You might have a neural network that you can’t explain. So one key challenge, as the predictive algorithms get better and you get to multilayer neural networks, is that explainability falls off. In some heavily regulated markets (education and medicine, for instance), more explanatory tools will have to be developed.

Suppose you’re at a big university that uses statistical models to pick the incoming class. Now say you have a neural network or some machine learning program that’s better at predicting student outcomes. For sure, there are universities doing this. They won’t talk about it, because the stakes are so high, but you can be sure they’re using machine learning to pick the incoming class. We will need some kind of summarization tool to explain these choices. Even though deep learning is complicated, for this to get talked about and accepted, we’ll have to come up with the big elements of an explanation: how did they get there?
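One family of explanatory tools is model-agnostic: probe the trained model from the outside rather than opening it up. Here is a minimal Python sketch of permutation importance, assuming a scikit-learn-style model with a predict method; the admissions framing and all names are hypothetical:

```python
import numpy as np

# Model-agnostic probe: shuffle one feature at a time and measure how much
# the model's accuracy drops. Large drops flag the inputs driving predictions.
# Assumes `model` exposes a scikit-learn-style predict(X) method.
def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = np.mean(model.predict(X) == y)  # accuracy on intact data
    importances = {}
    for col in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            rng.shuffle(X_shuffled[:, col])    # break this feature's signal
            drops.append(baseline - np.mean(model.predict(X_shuffled) == y))
        importances[col] = float(np.mean(drops))
    return importances  # feature index -> mean accuracy drop
```

For a hypothetical admissions model, a large drop on a shuffled feature would flag which applicant attributes are driving decisions, a first step toward answering “how did they get there?”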

There are concerns when a word like “AI” becomes a label used to sell a product. Say I’m a teacher, and an edtech company tells me “my math tool is AI-backed.” What should I ask?

Blum: The problem ties back to discoverability and explainability. If you’re going to slap on the AI label, then I want to know more: are you talking about a supervised symbolic system? Natural language processing? If you just say “AI” and nothing further, that reduces your credibility. If you use the AI label, it’s an invitation to have a conversation about what’s behind it all.

Benemann: Vendors should talk about student outcomes and teacher practice, and not talk about AI at all; it’s just another way to enable student learning and teacher practice. You’re better off going to the district and saying: because you use this product, I can do a case study and show an increase in efficiency and less wasted time in the classroom.

How do you balance AI tools’ need for data against safeguarding the privacy and security of sensitive student data?

Blum: We’re at a point where there’s no such thing as PII (personally identifiable information): if you have enough knowledge, you can probably deconstruct who any person is likely to be. So there need to be industry standards. This is an area where it would make edtech developers’ jobs easier if we said, “Here’s what you’re allowed to gather and share.” Something I’ve raised is the need for better privacy standards, so that no one who follows them can get sued.

Benemann: Who owns the data? Look at health care: it’s a fragmented market, but there’s a trend toward patients increasingly owning their own data. I wonder if we can get to a point where students hold the data and it’s up to them (students and their parents) to say, “Yes, schools, you can have access.”

Job automation is a threat that many people are worried about. How will this impact teachers—and other professions?

Kireyev: I see the role of the teacher shifting in wonderful ways. Leadership, guidance...these are things I’m excited about getting from teachers. And then more and more, teachers can shift into working deeply with kids, rather than just explaining how equations work.

Blum: There have been efforts to write learning goals for vocational tech, but they’ve been underutilized. We need to be a little more forward-thinking: what does it mean to be a truck driver in 10 years? How does that impact the supply chain [across industries]? We need efforts to make vocational education better.

Pischdotchian: Hence the importance of STEAM [science, technology, engineering, arts and mathematics] instead of STEM. The right side of the brain (arts, creativity, psychology), not the analytics and the math, will be ever more important. Psychology. History. Debate class. Humor and drama. These facets are not amenable to AI, at least in our lifetime.

AI has gotten good at making certain things easy. But that’s concerning. Thinking hard about things doesn’t come naturally to us. Growth and comfort cannot coexist.
