
These are my prepared remarks, delivered on a panel titled “Outsourcing the Classroom to Ed Tech & Machine-learning: Why Parents & Teachers Should Resist” at the Network for Public Education conference in Indianapolis. The other panelists were Peter Greene and Leonie Haimson. I had fifteen minutes to speak; clearly this is more than I could actually fit into that timeframe.

I want to start off my remarks this morning by making two assertions that I hope are both comforting and discomforting.

First, the role that corporations and philanthropists play in shaping education policy is not new. They have been at this a long, long time.

Companies have been selling their products – textbooks, workbooks, maps, films, and so on – to schools for well over a century. Pearson, for example, was founded (albeit as a construction company) in 1844 and has, over its long history, acquired various textbook publishing companies, which themselves have been around since the turn of the twentieth century. IBM, for its part, was founded in 1911 – a merger of three office machinery companies – and it began to build testing and teaching machines in the 1930s. Many companies – and certainly these two in particular – also have a long history of data collection and data analysis.

These companies and their league of marketers and advocates have long argued that their products will augment what teachers can do. Augment, not replace, of course. Their products will make teachers’ work easier, faster, companies have always promised. Certainly we should scrutinize these arguments – we can debate the intentions and the results of “labor-saving devices,” and we can think about the implications of shifting expertise and control from a teacher to a textbook to a machine. But I’d argue that, more importantly perhaps, we must recognize that there is no point in the history of the American public education system that we can point to as the golden age of high-quality, equitable, commercial-free schooling.

My second assertion: that as long as these companies and their evangelists have been pitching their products to schools, they have promised a “revolution.” (Perhaps it’s worth pointing out here: “revolutions,” to me at least, mean vast and typically violent changes to the social and political order.) So far at least these predictions have always been wrong.

Thomas Edison famously predicted in 1922, for example, “I believe that the motion picture is destined to revolutionize our educational system and that in a few years it will supplant largely, if not entirely, the use of textbooks.” He continued – and I think this is so very revealing about the goals of much of this push for technological reform, “I should say that on the average we get about two percent efficiency out of schoolbooks as they are written today. The education of the future, as I see it, will be conducted through the medium of the motion picture… where it should be possible to obtain one hundred percent efficiency.”

Educational films were going to change everything. Teaching machines were going to change everything. Educational television was going to change everything. Virtual reality was going to change everything. The Internet was going to change everything. The Macintosh computer was going to change everything. The iPad was going to change everything. Khan Academy was going to change everything. MOOCs were going to change everything. And on and on and on.

Needless to say, movies haven’t replaced textbooks. Computers and YouTube videos haven’t replaced teachers. The Internet has not dismantled the university or the schoolhouse.

Not for lack of trying, no doubt. And it might be the trying that we should focus on as much as the technology.

The transformational, revolutionary potential of these technologies has always been vastly, vastly overhyped. And it isn’t simply, as some education reformers like to tell it, that educators or parents are resistant to change. It’s surely in part because the claims that marketers make are often simply untrue. My favorite ludicrous claim remains that of Knewton’s CEO, who told NPR in 2015 that his company was a “mind reading robot tutor in the sky.” I don’t care how much data you collect about students – well, I do care – but that does not mean, as this CEO said at a Department of Education event, that “We literally know everything about what you know and how you learn best, everything.” (My man here does not even know how to use the word “literally.”)

This promised “ed-tech revolution” hasn’t occurred either in part because the predictions that technologists make are so often divorced from the realities of institutional and individual practices, from the cultures, systems, beliefs, and values of schools and their communities. No one wants a machine to read their children’s minds, thank you very much.

There is arguably no better example of this than the predictions made about artificial intelligence. (No surprise, that includes companies like Knewton who like to say they’re using AI – data collection, data analysis, and algorithms – to improve teaching and learning.) Stories about human-made objects having some sort of mental capacity are ancient; they’re legends. (I’m a folklorist. Trust me when I say they’re legends – exaggerated stories that some people do believe to be true.)

The field of artificial intelligence – programmable, digital computers functioning as some sort of electronic “brain” – dates back to the 1950s. And those early AI researchers loved the legend, making grandiose claims about what their work would soon be able to do: in 1965, for example, Herbert Simon said that “machines will be capable, within twenty years, of doing any work a man can do.” In 1970, Marvin Minsky said that “in from three to eight years, we will have a machine with the general intelligence of an average human being.” Fifty, sixty years later, we still don’t.

Sure, there have been some very showy achievements: IBM’s Deep Blue defeated Garry Kasparov at a game of chess. IBM’s Watson won at Jeopardy. IBM loves these sorts of PR stunts, and it continues to market its AI product line around Watson’s celebrity – purporting that it is the future of “personalized education.” It’s working with Sesame Street, which kills me. But Watson is not remotely close to the “artificial intelligence” that the company, and the industry more broadly, likes to tout. (A doctor using Watson for cancer treatment described it as “a piece of shit.”) Watson is not some sentient, hyper-intelligent entity. It’s not an objective and therefore superior decision-maker. It’s not a wise seer or fortune-teller.

None of AI is. (And I don’t think it ever can or will be.)

Mostly, today’s “artificial intelligence” is a blend of natural language processing – that is, computers being able to recognize human language (whether typed or spoken) rather than having to be programmed via a computer language – and/or machine learning – that is, a technique that uses statistics and statistical modeling to improve the performance of a program or an algorithm. This is what Google does, for example, when you type something like “how many points LBJ” into the search bar and you get results about LeBron James. Type “what percentage LBJ,” and you get results about how much the 36th president of the United States increased government spending. Google uses, in part, data about what a website contains, along with how people search and what they click on, to determine what to display in those “ten blue links” that show up on the first page of search results.
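To make that last point a little more concrete, here is a deliberately tiny sketch of the kind of statistical guesswork involved. All of the queries, click data, and scoring below are invented for illustration; this is emphatically not how Google’s actual systems work.

```python
# A toy sketch (invented data, not Google's system): guess what "LBJ" means
# in a query by counting which words have co-occurred with which clicked result.
from collections import Counter

# Hypothetical training data: (past query, result the user clicked)
clicks = [
    ("how many points lbj", "LeBron James"),
    ("lbj triple double stats", "LeBron James"),
    ("what percentage lbj spending increase", "Lyndon B. Johnson"),
    ("lbj great society programs", "Lyndon B. Johnson"),
]

# Count how often each word appears in queries that led to each result.
word_counts = {"LeBron James": Counter(), "Lyndon B. Johnson": Counter()}
for query, result in clicks:
    word_counts[result].update(query.split())

def guess(query):
    """Score each result by how strongly the query's words have co-occurred with it."""
    words = query.split()
    scores = {result: sum(counts[w] for w in words)
              for result, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(guess("how many points lbj tonight"))    # -> LeBron James
print(guess("what percentage lbj increased"))  # -> Lyndon B. Johnson
```

The point of the sketch is simply that the “learning” here means counting and weighing past behavior – statistics – not understanding.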

In some ways, that’s a lot more mundane than the hype about AI. But it’s an AI I bet we all use daily.

That doesn’t mean it’s not dangerous.

To be clear, when I assert that the push for technology is not new and that the claims about technology are overblown, I don’t mean to imply that this latest push for education technology is irrelevant or inconsequential. To the contrary, in part because of the language that computer scientists have adopted – artificial intelligence, machine learning, electronic brains – they’ve positioned themselves to be powerful authorities when it comes to the future of knowledge and information and when it comes to the future of teaching and learning. The technology industry is powerful, politically and economically and culturally, in its own right, and many of its billionaire philanthropists seem hell-bent on reforming education.

I think there’s a lot to say about machine learning and the push for “personalization” in education. And the historian in me cannot help but add that folks have been trying to “personalize” education using machines for about a century now. The folks building these machines have, for a very long time, believed that collecting the student data generated while using the machines would help them improve their “programmed instruction” – and this was decades before Mark Zuckerberg was born.

I think we can talk about the labor issues – how this continues to shift expertise and decision making in the classroom, for starters, but also how students’ data and students’ work is being utilized for commercial purposes. I think we can talk about privacy and security issues – how sloppily we know that these companies, and unfortunately our schools as well, handle student and teacher information.

But I’ll pick two reasons why we should be much more critical of education technologies (because I seem to be working in twos this morning).

First, these software programs are proprietary, and we – as educators, parents, students, administrators, community members – do not get to see how the machine learning “learns” and how its decisions are made. This is moving us towards what law professor Frank Pasquale calls a “black box society.” “The term ‘black box,’” he writes, “is a useful metaphor… given its own dual meaning. It can refer to a recording device, like the data-monitoring systems in planes, trains, and cars. Or it can mean a system whose workings are mysterious; we can observe its inputs and outputs, but we cannot tell how one becomes the other. We face these two meanings daily: tracked ever more closely by firms and government, we have no clear idea of just how far much of this information can travel, how it is used, or its consequences.” This, Pasquale argues, is an incredibly important issue for us to grapple with because, as he continues, “knowledge is power. To scrutinize others while avoiding scrutiny oneself is one of the most important forms of power.”

I should note briefly that late last year, New York City passed a bill that would create a task force to examine the city’s automated decision systems. And that hopefully includes the algorithm that allocates spaces for students in the city’s high schools. How to resist: demand algorithmic transparency in all software systems used by public entities, including schools.

The second reason to be critical of AI in ed-tech is that all algorithms are biased. I know we are being told that these algorithms are better, smarter, faster, more accurate, but they are, as a recent RSA video put it, “an opinion embedded in math.” (Indeed, anytime you hear someone say “personalization” or “AI” or “algorithmic,” I urge you to replace that word with “prediction.”)

Algorithms are biased, in part, because they’re built with data that’s biased – data taken from existing institutions and practices that are biased. They’re built by people who are biased. (Bless your hearts, white men, who think you are the rational, objective ones and the rest of us just play “identity politics.”) Google Search is biased, as Safiya Noble demonstrates in her book Algorithms of Oppression. Noble writes about the ways in which Search – the big data, the machine learning – maintains and even exacerbates social inequalities, particularly with regard to race and gender. Let’s be clear, Google Search is very much a “black box.” And again, it’s an AI we use every day.
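Here, too, a tiny, invented example may help show what “biased data in, biased predictions out” looks like in practice. None of this reflects any real system or dataset, but the mechanism is the one researchers like Noble describe: a model trained on biased historical decisions learns to repeat them.

```python
# Toy sketch (invented data, not any real system): a "model" trained on
# biased historical decisions simply learns to repeat them.
from collections import defaultdict

# Hypothetical admissions records: (zip_code, test_score, was_admitted).
# Applicants from zip "A" were historically admitted far more often than
# equally qualified applicants from zip "B".
history = [
    ("A", 85, True), ("A", 70, True), ("A", 60, True),
    ("B", 85, False), ("B", 90, True), ("B", 70, False),
]

# "Training": estimate the historical admission rate for each zip code.
rates = defaultdict(lambda: [0, 0])  # zip -> [admits, total]
for zip_code, score, admitted in history:
    rates[zip_code][0] += int(admitted)
    rates[zip_code][1] += 1

def predict(zip_code, score):
    """Predict admission from the historical rate for the zip code.
    (The score is deliberately unused, to keep the sketch tiny – the point
    is that the zip-code pattern gets encoded as if it were merit.)"""
    admits, total = rates[zip_code]
    return admits / total > 0.5

# Two applicants with identical scores get different predictions.
print(predict("A", 85))  # True
print(predict("B", 85))  # False
```

An “opinion embedded in math,” in other words: the opinion was already in the records the machine learned from.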

That Google Search (and Google News and Google Maps and Google Scholar and so on) has bias seems to me to be a much bigger problem than this panel was convened to address. We are supposed to be talking about ed-tech, and here I am suggesting that our whole digital information infrastructure is rigged. I think we’ve seen over the course of the past couple of years quite starkly what has happened when mis- and dis-information campaigns utilize this infrastructure – an infrastructure that is increasingly relying on machine learning – to show us what it thinks we should know. Like I said, it’s not just the technology we should pay attention to; it’s those trying to disrupt the social order.

We are facing a powerful threat to democracy from new digital technologies and their algorithmic decision-making. And I realize this sounds a little overwrought. But this may well be a revolution – and it’s not one that its advocates necessarily predicted, nor is it, I’d wager, one any of us want to be part of.

Audrey Watters

