Teaching Students to Make Good Choices in an Algorithm-Driven World

Opinion | Artificial Intelligence


By Jose Marichal | Nov 1, 2021



In January, Colby College announced the formation of the Davis Institute for Artificial Intelligence, calling it the “first cross-disciplinary institute for artificial intelligence at a liberal arts college.” There is a reason no other liberal arts college has engaged in an undertaking of this nature. The role of these institutions has been to broadly train undergraduates for living in a democratic society. In contrast, AI centers, like the Stanford Artificial Intelligence Laboratory, have largely focused on high-end, specialized training for graduate students in complex mathematical and computer engineering fields. What could small liberal arts colleges provide in response?

There’s a clue in a statement from the Davis Institute’s first director, natural language processing expert Amanda Stent. “AI will continue to have broad and profound societal impact, which means that the whole of society should have a say in what we do with it. For that to happen, each of us needs to have a foundational understanding of the nature of this technology,” she said.

What constitutes a “foundational understanding” of artificial intelligence? Can you really understand the convolutional neural networks underneath driverless cars without taking advanced calculus? Do most of us need to understand it that deeply, or just generally?

A relevant analogy might be to ask whether we need to train mechanics and automotive designers, or simply people who can drive a car responsibly.

If it’s the first, most liberal arts colleges are disadvantaged. Many of them struggle to hire and retain people who have the technical knowledge and experience to teach in these fields. Someone proficient in algorithmic design is likely making a pretty good living in industry or is working at a large, well-funded institute with the economies of scale that major scientific initiatives demand.

If it’s the second, then most small liberal arts colleges are well-equipped to train students about the social and ethical challenges that artificial intelligence presents. These colleges specialize in providing a broad education that trains people not simply in acquiring technical skills for the workforce, but in becoming complete, fully integrated citizens. Increasingly, that will involve wrestling with the appropriate societal use of algorithms, artificial intelligence and machine learning in a world driven by expanded datafication.

In a wonderful article, two researchers from the University of Massachusetts Boston Applied Ethics Center, Nir Eisikovits and Dan Feldman, identify a key danger of our algorithmically driven society: the loss of humans’ ability to make good choices. Aristotle called this phronesis, the art of living well in community with others. Aristotle held that the only way to acquire this knowledge was through habit, through the experience of engaging with others in different situations. By replacing human choice with machine choice, we run the risk of losing opportunities to develop civic wisdom. As algorithms increasingly choose what we watch, what we listen to and whose opinions we hear on social media, we lose the practice of choosing. This may not matter when it comes to tonight’s Netflix pick, but it has broader implications. If we no longer make choices about our entertainment, does that affect our ability to make moral choices?

Eisikovits and Feldman offer a provocative question: If humans can no longer acquire phronesis, can we still justify the high esteem in which philosophers like John Locke and others in the natural rights tradition held humans’ capacity to self-govern? Do we lose the ability to self-govern? Or, perhaps more importantly, do we lose the ability to know when the ability to self-govern has been taken from us? The liberal arts can equip us with the tools needed to cultivate phronesis.

But without a foundational understanding of how these technologies work, is a liberal arts major at a disadvantage in applying their “wisdom” to a changing reality? Instead of arguing whether we need people who have read Chaucer or people who understand what gradient descent means, we should be training people to do both. Colleges must take the lead in training students who can adopt a “technological ethic” that includes a working knowledge of AI along with the liberal arts knowledge to understand how they should situate themselves within an AI-driven world. This means not only being able to “drive a car responsibly” but also understanding how an internal combustion engine works.

Undoubtedly, engagement with these technologies can and must be woven throughout the curriculum, not only in special topics courses like “Philosophy of Technology” or “Surveillance in Literature,” but in introductory courses and as part of a core curriculum for all subjects. But that isn't enough. Faculty in these courses need specialized training in developing or using frameworks, metaphors and analogies that explain the ideas behind artificial intelligence without requiring high-level computational or mathematical knowledge.

In my own case, I try to teach students to be algorithmically literate in a political science course that I have subtitled “Algorithms, Data and Politics.” The course covers the ways in which the collection and analysis of data have created unprecedented challenges and opportunities for the distribution of power, equity and justice. In this class, I use metaphors and analogies to explain complex concepts. For example, I describe a neural network as a giant panel with tens of thousands of dials (each one representing a parameter), all being fine-tuned thousands of times a second to produce a desired outcome. I describe datafication, and the effort to make users predictable, as a kind of “factory farming” in which the variability that affects the “product” is reduced.
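To make the “panel of dials” analogy a little more concrete, here is a minimal sketch in Python. It is illustrative only, not course material; the function names and the toy rule it learns are invented for the example. Each “dial” is just a number, and every training step nudges each dial slightly so that the panel’s output drifts toward the desired outcome, which is the basic idea behind gradient descent.

```python
import random

# Each "dial" is a parameter. predict() is the panel's output for one input.
def predict(dials, features):
    return sum(d * f for d, f in zip(dials, features))

# Nudge every dial a tiny amount, over and over, so the output moves
# toward the target: a bare-bones version of stochastic gradient descent.
def train(data, n_dials, learning_rate=0.01, steps=10000):
    dials = [random.uniform(-1, 1) for _ in range(n_dials)]  # random starting positions
    for _ in range(steps):
        features, target = random.choice(data)
        error = predict(dials, features) - target
        for i, f in enumerate(features):
            dials[i] -= learning_rate * error * f  # turn the dial against the error
    return dials

# Toy usage: learn to reproduce the rule "output = 2*x + 3*y" from examples.
examples = [((x, y), 2 * x + 3 * y) for x in range(-5, 6) for y in range(-5, 6)]
print(train(examples, n_dials=2))  # settles near [2.0, 3.0]
```

Real networks have vastly more dials and more elaborate update rules, but the loop of measuring the error, turning the dials and repeating is the same.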

Are these perfect analogies? No. I’m sure I miss key elements in my descriptions, partly by design, to promote critical thinking. But the alternative isn’t tenable. A society of people who have no conception of how AI, algorithms and machine learning work is a captured and manipulated society. We can’t set the bar for understanding so high that only mathematicians and computer scientists are able to speak about these tools. Nor can our training be so superficial that students develop incomplete and misguided (e.g., techno-utopian or techno-dystopian) notions of the future. We need AI training for society that is intentionally inefficient, just as the liberal arts emphasis on breadth, wisdom and human development is inherently and intentionally inefficient.

As Notre Dame humanities professor Mark Roche notes, “the college experience is for many a once-in-a-lifetime opportunity to ask great questions without being overwhelmed by the distractions of material needs and practical applications.” A liberal arts education provides a foundational grounding that, in its stability, allows students to navigate this increasingly fast, perplexing world. Knowledge of the classics, appreciation of arts and letters, and an understanding of how the physical and human sciences work are timeless traits that serve students well in any age. But the increasing complexity of the tools that govern our lives requires us to be more intentional about which “great questions” we ask.

