Agency and Opportunities for Future Educational Technologies

With all the excitement in the air about big data, analytics, and adaptive instruction, it is easy to imagine a future of complete automation. In this future, algorithms will choose what we will learn next, which specific resources we will interact with in order to learn it, and the order in which we will experience these resources. All the guesswork will be taken out of the process – instruction will be “optimized” for each learner.

There are many reasons to be deeply concerned about this fully automated future. One of the things that concerns me most about this vision of “optimized” instruction is its potential to completely undermine learners’ development of metacognitive skills and deprive them of meaningful opportunities to learn how to learn.

Like every other skill – from playing the piano to factoring polynomials to reasoning about the likely causes of historical events – learning how to learn requires practice. Learners need opportunities to plan out their own learning and select their own study strategies and learning resources. Learners need opportunities to monitor and evaluate the effectiveness of the strategies and resources they’ve selected in support of their own learning. Learners need to experience – and reflect on – a range of successes and failures in regulating their own learning in order to understand what works for them, and how they should approach the next learning task they encounter in school or life.

Some adaptive systems are designed specifically to take control of these metacognitive processes away from learners. These systems make decisions on behalf of the learner, monitoring what does and doesn’t appear to be working and updating their internal models and strategies. The processes by which these decisions are made are hidden from the learner, and are likely trade-secret black boxes into which no reviewer can ever peer. At the end of the current reading, or video, or simulation, the system presents the learner with a “Next” button that hides all the complexity and richness of the underlying decision and simply delivers the “optimal” next learning activity.
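To make the pattern concrete, here is a minimal sketch of the “Next button” design in Python. The names and the update rule are entirely hypothetical (no vendor’s actual system is being described), but the shape is the point: the model-updating and activity-selection logic live entirely out of the learner’s sight.

```python
# A minimal, hypothetical sketch of the "Next button" pattern: the system
# monitors responses, updates an internal model, and reduces the whole
# decision to a single action the learner cannot inspect.
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    mastery: dict = field(default_factory=dict)  # skill -> estimated P(mastered)

    def update(self, skill: str, correct: bool) -> None:
        # Toy update rule: nudge the estimate toward the observed outcome.
        p = self.mastery.get(skill, 0.5)
        self.mastery[skill] = p + 0.2 * ((1.0 if correct else 0.0) - p)

def select_next_activity(model: LearnerModel, activities: list) -> dict:
    """Pick the activity targeting the least-mastered skill.

    All of this reasoning stays server-side; the interface collapses it
    into a single "Next" button.
    """
    return min(activities, key=lambda a: model.mastery.get(a["skill"], 0.5))

model = LearnerModel()
model.update("fractions", correct=False)
activities = [
    {"title": "Fractions video", "skill": "fractions"},
    {"title": "Decimals quiz", "skill": "decimals"},
]
print("Next:", select_next_activity(model, activities)["title"])
```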

Without meaningful opportunities to develop metacognitive skills, there is no reason to believe that learners will develop these skills. (Have you ever spontaneously developed the ability to speak Korean without practicing it?) A fully adaptive system likely never provides learners with the opportunity to answer questions like “What should I study next?”, “How should I study it?”, “Should I read this or watch that?”, or “Should I do a few more practice exercises?” It goes without saying that the ability to learn quickly and effectively is possibly the single most important skill a person living in the modern world can have. In this context, any potential short-term benefits of adaptive instruction seem like a poor trade.

Instead of designing technologies that make choices for students, we have an important opportunity to design technologies that explicitly support students as they learn to make their own choices effectively. Such technologies must respect learner agency, leaving key choices in learners’ hands – even at the risk of some of those choices being suboptimal. (I should say that fully automated recommendations, like “Consider viewing this supplementary video,” fit within this framework to the degree that they respect learner agency.)
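By way of contrast, here is an equally hypothetical sketch of what an agency-preserving version might look like: the algorithm still ranks the options, but it surfaces a reason for each, leaves the final choice to the learner, and records that choice so it can be revisited later.

```python
# A hypothetical, agency-preserving sketch: rank the options, explain the
# ranking, let the learner decide, and log the decision for reflection.
mastery = {"fractions": 0.4, "decimals": 0.5}  # toy mastery estimates
activities = [
    {"title": "Fractions video", "skill": "fractions"},
    {"title": "Decimals quiz", "skill": "decimals"},
]
choice_log = []  # (title, reason) pairs the learner can review later

def recommend(activities, mastery):
    """Rank every option and attach a human-readable reason to each."""
    ranked = sorted(activities, key=lambda a: mastery[a["skill"]])
    return [dict(a, reason=f"estimated mastery of {a['skill']}: "
                           f"{mastery[a['skill']]:.0%}") for a in ranked]

def choose(options, index):
    """The learner, not the system, makes the final call; record it."""
    chosen = options[index]
    choice_log.append((chosen["title"], chosen["reason"]))
    return chosen

options = recommend(activities, mastery)
for i, option in enumerate(options):
    print(i, option["title"], "-", option["reason"])
choose(options, 1)  # the learner may well pick against the ranking
```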

Of utmost importance, these new systems must provide learners with simple ways to reflect on the choices they make about their learning and the results of those choices. I believe that providing this kind of feedback, together with opportunities to reflect on what it means, will be a hallmark of future educational technologies that support radical improvements in learning.

2 thoughts on “Agency and Opportunities for Future Educational Technologies”

  1. I completely agree. Too often we restrict the choices of students. We tell them how and where they should blog, and that their eportfolios should live in the institutional system, which always seems oxymoronic to me, if not just plain moronic.

    I think we can guide learners, but we should never force choices on them if we can help it.

    I would add that we should try to keep learners’ choices reversible, and that means being able to move data between systems easily.

  2. Quite insightful… as usual. Will likely help us assess “the promise of learning analytics” through both pedagogical and ethical lenses.

    The “What should I study next?” question can lead us along interesting paths. One could be about traditional models of academic advising. Lots of material there, especially since Learning Analytics projects are so often connected to a straightforward view of “academic success”.

    Another angle could be that of learning projects. Sure, “project-based learning” is nothing new. But learners’ agency could go all the way to letting them design their own curricula, in view of what they want to achieve.
    Thought of a fictional example which might illuminate this point. Imagine a higher ed learner who dreams of, one day, opening a co-op brewpub. She decides to take classes in biochemistry, to understand zymurgy, and in social economy, to understand cooperative models. Through these courses, she learns about several issues related to biofuels, from both environmental and economic angles. She ends up creating a new biodiesel that is better for both the environment and the social context.
    Many learners have some difficulty figuring out what they should learn. Learning projects afford them some flexibility.

    Speaking of which, your parenthetical comment on recommendations is intriguing. While it’s fair to say that suggestion algorithms “respect learner agency” (though, to this humanist, there’s something a bit anthropomorphic about “respectful algorithms”), your main point about metacognition calls for algorithmic openness, at the very least.
    To stretch the analogy way too far: Google’s AdSense doesn’t prevent people from making their own purchasing decisions. But the AdSense algorithm is too much of a “black box” to encourage reflexivity.
