
I have volunteered to be a guest speaker in classes this Fall. It's really the least I can do to help teachers and students through another tough term. I spoke tonight in Dorothy Kim's class "Race Before Race: Premodern Critical Race Studies." Here's a bit of what I said...

Thank you for inviting me to speak to your class this evening. I am sorry that we're all kicking off fall term online again — well, online is better than dead, of course. I know that this probably isn't how you imagined college would look. But then again, it's worth pausing and thinking about what we imagine school should look like, what we expect school to be like, and what school is like — not just emergency pandemic Zoom-school, but the institution with all its histories and politics and practices and (importantly) variations. Not everywhere looks like Brandeis. Not everyone experiences Brandeis the same way.

Me, I write about education technology for a living. I'm not an advocate for ed-tech; I'm not here to sell you on a new flashcard app or show you how to use the learning management system more effectively. I'm an ed-tech critic. That doesn't mean I just write about how ed-tech sucks — although it mostly does suck — it means that I spend my time thinking through the ways in which technology shapes education and education shapes technology and the two are shaped by ideologies, particularly capitalism and white supremacy. And I do so because I want us — all of us — to flourish; and too often both education and technology are deeply complicit in exploitation and destruction instead.

These are not good times for educational institutions. Many are struggling, as I'm sure you know, to re-open. Many that have re-opened face-to-face are closing and moving online. Most, I'd wager, are facing severe funding shortages — the loss of tuition, dorm-room, and sportsball dollars, for example — as well as new expenses like PPE and COVID testing. Cities, counties, and states are all seeing massive budget shortfalls — and this is, of course, how most public schools (and not only at the K-12 level) are actually funded, not by tuition or federal dollars but by state and local allocations. (That's not to say that the federal government couldn't and shouldn't step up to bail out the public education system.)

Schools have been doing "more with less" for a long while now. Many states had barely returned to pre-recession funding levels before the pandemic hit. And now in-person classes are supposed to be smaller. Schools need more nurses and teaching aides. And there just isn't the money for it. So what happens?

Technology offers itself as the solution. Wait. Let me fix that sentence. Technology companies offer their products as the solution, and technology advocates promote the narrative of techno-solutionism.

If schools are struggling right now, education technology companies — and technology companies in general — are not. Tech companies are dominating the stock market. The four richest men in the world: all tech executives (Jeff Bezos, Bill Gates, Mark Zuckerberg, and Elon Musk — all of whom are education technology "philanthropists" of some sort as well, incidentally). Ed-tech companies raised over $800 million in the first half of this year alone. The promise of ed-tech — now as always: make teaching and learning cheaper, faster, more scalable, more efficient. And where possible, practical, and politically expedient: replace expensive human labor with the labor of the machine. Replace human decision-making with the decision-making of an algorithm.

This is already happening, of course, with or without the pandemic. Your work and your behavior, as students, are already analyzed by algorithms, many of them designed to identify when and if you cheat. Indeed, it's probably worth considering how much the fear of cheating is constitutive of ed-tech — how much of the technology that you're compelled to use is designed because the system — be that the school, the teachers, the structures or practices — does not trust you.

For a long time, arguably the best known anti-cheating technology was the plagiarism detection software TurnItIn. The company was founded in 1998 by UC Berkeley doctoral students who were concerned about cheating in the science classes they taught — in particular, about the ways in which they feared students were using a new feature on the personal computer: copy-and-paste. So they turned some of their research on pattern-matching of brainwaves toward creating a piece of software that would identify patterns in texts. TurnItIn became a huge business, bought and sold several times over by private equity firms since 2008: first by Warburg Pincus, then by GIC, and then, in 2014, by Insight Partners — the price tag for that sale: $754 million. TurnItIn was acquired by the media conglomerate Advance Publications last year for $1.75 billion.

That price-tag should prompt us to ask: what's so valuable about TurnItIn? Is it the size of the customer base — the number of schools and universities that pay to use the product? Is it the algorithms — the pattern-matching capabilities that purport to identify plagiarism? Is it the vast corpus of data that the company has amassed — decades of essays and theses and Wikipedia entries that it uses to assess student work?

TurnItIn has been challenged many times by students who've complained that it violates their rights to ownership of their work. A judge ruled, however, in 2008 that students' copyright was not infringed upon as they'd agreed to the Terms of Service. But that seems a terribly flawed decision, because what choice does one have but to click "I agree" when one is compelled to use a piece of software by one's professor, one's school? What choice does one have when the whole process of assessment is intertwined with this belief that students are cheaters and thus with a technology infrastructure that is designed to monitor and curb their dishonesty?

Every student is guilty until the algorithm proves her innocence.

Incidentally, one of its newer products promises to help students avoid plagiarism, and so essay mills now also use TurnItIn so they can promise to help students avoid getting caught cheating. The company works both ends of the plagiarism market.

Anti-cheating software isn't just about plagiarism, of course. No longer does it just analyze students' essays to make sure the text is "original." There is a growing digital proctoring industry that offers schools ways to monitor students during online test-taking. Well-known names in the industry include ProctorU, Proctorio, and Examity. Many of these companies were launched circa 2013 — that is, in the tailwinds of "the Year of the MOOC" — with the belief that an increasing number of students would be learning online and that professors would demand some sort of mechanism to verify their identity and their integrity. According to one investment company, the market for online proctoring was expected to reach $19 billion last year — much smaller than the size of the anti-plagiarism market, for what it's worth, but one that investors see as poised to grow rapidly, particularly in light of schools' move online because of COVID.

These proctoring tools gather and analyze far more data than just a student's words, than their responses on an exam. They typically require a student to show photo identification to their laptop camera before the test begins. Depending on what kind of ID they use, the software gathers data like name, signature, address, phone number, driver's license number, passport number, along with any other personal data on the ID. That might include citizenship status, national origin, or military status. The software also gathers physical characteristics or descriptive data including age, race, hair color, height, weight, gender, or gender expression. It then matches that data to the student's "biometric faceprint" captured by the laptop camera. Some of these products also capture a student's keystrokes and keystroke patterns. Some ask for the student to hand over the password to their machine. Some track location data, pinpointing where the student is working. They capture audio and video from the session — the background sounds and scenery from a student's home.

The proctoring software then uses this data to monitor a student's behavior during the exam and to flag patterns that it interprets as cheating — if their eyes stray from the screen too long, for example, or if there are sticky notes on the wall, their "suspicion" score goes up. The algorithm — sometimes in concert with a human proctor — decides who is suspicious. The algorithm decides who is a cheat.
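
To make that concrete, here is a minimal sketch of what a rule-based "suspicion" score could look like. Nothing here reflects the actual logic of ProctorU, Proctorio, or Examity — their scoring is proprietary — so the flag names, weights, and threshold are all assumptions for illustration.

```python
# Hypothetical sketch of a rule-based "suspicion" score. The real scoring in
# proctoring products is proprietary; the flags, weights, and threshold below
# are illustrative assumptions, not any vendor's actual logic.

def suspicion_score(events):
    """Sum weighted flags from a proctoring session's event log."""
    weights = {
        "gaze_off_screen_seconds": 0.5,   # per second eyes are away from the screen
        "face_match_failures": 10.0,      # per failed check against the "faceprint"
        "background_noise_events": 2.0,   # per detected voice or noise
        "window_focus_lost": 5.0,         # per tab or application switch
    }
    return sum(weights.get(kind, 0.0) * count for kind, count in events.items())

session = {"gaze_off_screen_seconds": 45, "window_focus_lost": 2}
score = suspicion_score(session)
print(score, "flagged" if score > 25 else "not flagged")  # 32.5 flagged
```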

We know that algorithms are biased, because we know that humans are biased. We know that facial recognition software struggles to identify people of color, and there have been reports from students of color that the proctoring software has demanded they move into more well-lit rooms or shine more light on their faces during the exam. Because the algorithms that drive the decision-making in these products are proprietary and "black-boxed," we don't know if or how they might use certain physical traits or cultural characteristics to determine suspicious behavior.

We do know there is a long and racist history of physiognomy and phrenology that has attempted to predict people's moral character from their physical appearance. And we know that schools have a long and racist history too that runs adjacent to this.

Of course, not all surveillance in schools is about preventing cheating; it's not all about academic dishonesty — but it is always, I'd argue, about monitoring behavior and character (and I imagine in this class you are talking about the ways in which institutional and interpersonal assessments of behavior and character are influenced by white supremacy). And surveillance is always caught up in the inequalities students already experience in our educational institutions.

For the past month or so, there's been a huge controversy in the UK over a different kind of algorithmic decision-making. As in the US, schools in the UK were shuttered in the spring because of the coronavirus. Students were therefore unable to sit their A Levels, the exams that are the culmination of secondary education there. These exams are a Very Big Deal — even more so than the SAT exams that many of you probably took in high school. While the SAT exams might have had some influence on where you were accepted — I guess Brandeis is test-optional these days, so never mind — A Levels almost entirely dictate where students are admitted to university. British universities offer conditional acceptances that are dependent on the students' actual exam scores. So, say, you are admitted to the University of Bristol, as long as you get two As and a B on your A Levels.

No A Levels this spring meant that schools had to come up with a different way to grade students. Teachers assigned grades based on how well the student had done so far and how well they thought the student would do, and then Ofqual (short for the Office of Qualifications and Examinations Regulation), the English agency responsible for these national assessments, adjusted those grades with an algorithm — an algorithm designed in part to avoid grade inflation (which, if you think about it, is just another form of this fear of cheating, but one that implicates teachers instead of students).

Almost 40% of teacher-assigned A-Level grades were downgraded by at least one grade. Instead of getting those two As and a B that you expected to get and that would get you into Bristol, the algorithm gave you an A, a B, and a C. No college admission for you.

In part, Ofqual's algorithm used the history of schools' scores to determine students' scores. Let me pause there so you can think about the implications. They're pretty obvious: the model was more likely to adjust the scores of students attending private schools upward, because students at private schools, historically, have performed much better on their A Levels. (As someone who attended a private school in England, I can guarantee you that it's not that they're smarter.) Ofqual's algorithm adjusted the scores of students attending the most disadvantaged state schools downward, because students at state schools, historically, have not performed very well. (I don't want to get too much into race and class and the British education system, but suffice it to say, about 7% of the country attends private schools, and graduates from those schools make up about 40% of top jobs, including government jobs.) Overall, the scores of students in the more affluent regions of London, the Midlands, and Southeast England were adjusted so that they rose more than the scores of students in the North, which has, for a very long time (maybe always?), been a more economically depressed part of the country.

At first, the British government — which does rival ours for its sheer idiocy and incompetence — refused to admit there was a problem or to change the grades, even arguing there was no systemic bias in the revised exam scores because, according to one official, teachers grade poor students too leniently — something the algorithm was designed to address. But students took to the streets, chanting "Fuck the algorithm," and the government quickly backed down, fearing that it might alienate not only the youth but also their families. Grades were reverted to those given by teachers, not the algorithm, and university spots were given back to those who'd had their offers rescinded.

I should note here that there was nothing particularly complex about the A-Level algorithm. This wasn't artificial intelligence or complex machine learning that decided students' grades. It was really just a simple formula, probably calculated in an Excel spreadsheet. (That doesn't make this situation any better, of course.)
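
To give a sense of just how simple such a formula can be — and how directly it can encode historical inequality — here is a toy sketch of grade standardization that maps each student's teacher-assigned rank onto their school's historical grade distribution. This is not Ofqual's published model; the school names, distributions, and rounding choices are all made up for illustration.

```python
# Toy sketch, not Ofqual's actual model: rank students by teacher-assigned
# grade, then hand out this year's grades in the school's historical
# proportions. A school that historically did poorly drags its students down,
# whatever their teachers predicted.

HISTORICAL_DISTRIBUTIONS = {
    # hypothetical share of each grade awarded at this school in past years
    "private_school":       {"A": 0.50, "B": 0.30, "C": 0.20},
    "disadvantaged_school": {"A": 0.10, "B": 0.30, "C": 0.60},
}

def standardise(teacher_grades, school):
    """Award grades in the school's historical proportions, best ranks first."""
    dist = HISTORICAL_DISTRIBUTIONS[school]
    ranked = sorted(teacher_grades, key=lambda s: teacher_grades[s])  # "A" < "B" < "C"
    awarded, i = {}, 0
    for grade, share in dist.items():
        n = round(share * len(ranked))
        for student in ranked[i:i + n]:
            awarded[student] = grade
        i += n
    for student in ranked[i:]:   # any rounding leftovers get the lowest grade
        awarded[student] = list(dist)[-1]
    return awarded

teacher_grades = {"s1": "A", "s2": "A", "s3": "A", "s4": "B", "s5": "B",
                  "s6": "B", "s7": "C", "s8": "C", "s9": "C", "s10": "C"}
print(standardise(teacher_grades, "disadvantaged_school"))
# Only one of the three teacher-assigned As survives the adjustment.
```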

The A-Level algorithm is part of what Ruha Benjamin calls the "new Jim Code," the racist designs of our digital architecture. And I think what we can see in this example is the ways in which pre-digital policies and practices get "hard-coded" into new technologies. That is, how long-running biases in education — biases about race, ethnicity, national origin, class, gender, religion, and so on — are transferred into educational technologies.

Lest you think that the fiasco in the UK will give education technologists and education reformers pause before moving forward with algorithmic decision-making and algorithmic surveillance, the Gates Foundation last month awarded the Educational Testing Service (which runs the GRE exam) a $460,000 grant "to validate the efficacy of Automated Essay Scoring software in improving student outcomes in argumentative writing for students who are Black, Latino, and/or experiencing poverty."

A couple of days ago, I saw a series of tweets from a parent complaining that her junior-high-school-age son had gone from loving history class to hating it — "tears," "stress," "self-doubt" — after the first auto-graded assignment he turned in gave him a score of 50/100. The parent, a professor at USC, showed him how to game the software: write long answers, use lots of proper nouns. His next score was 80/100. An algorithm update one day later: "He cracked it: Two full sentences, followed by a word salad of all possibly applicable keywords. 100% on every assignment. Students on @Edgenuityinc, there's your ticket. He went from an F to an A+ without learning a thing." (Sidenote: in 2016, Alabama Speaker of the House Mike Hubbard was found guilty of 12 counts of felony ethics violations, including receiving money from Edgenuity. Folks in ed-tech are busy trying to stop students from cheating while being so shady themselves.)
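
Edgenuity has not published how its short-answer scoring works, but a naive keyword-matching grader is enough to explain why "two full sentences, followed by a word salad of all possibly applicable keywords" earns full marks. The sketch below is hypothetical: the keyword list, the weighting, and the scoring scale are all assumptions, not the company's actual algorithm.

```python
# Hypothetical keyword-matching grader, sketched to show why keyword-stuffed
# answers can score perfectly. Edgenuity's real scoring is not public; the
# keyword list and scale below are assumptions for illustration only.

def keyword_grade(answer, keywords, max_score=100):
    """Score an answer by the fraction of expected keywords it contains."""
    words = set(answer.lower().split())
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return round(max_score * hits / len(keywords))

keywords = ["reconstruction", "amendment", "freedmen", "sharecropping", "lincoln"]

thoughtful = "After the Civil War, formerly enslaved people sought land and political rights."
word_salad = ("Reconstruction happened after the Civil War. It changed the South. "
              "Amendment Freedmen Sharecropping Lincoln Reconstruction Amendment")

print(keyword_grade(thoughtful, keywords))  # 0: a reasonable answer, but none of the expected keywords
print(keyword_grade(word_salad, keywords))  # 100: every keyword present, little actual argument
```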

I tweeted in response to the homework algorithm "hack" that if it's not worth a teacher reading the assignment/assessment, then it's not worth the student writing it. That robot grading is degrading. I believe that firmly. (Again, think of that Gates grant. Who has a teacher or peer read their paper, and who gets a robot?) Almost a thousand people responded to my tweet, most agreeing with the sentiment. But a few people said that robot grading was fine, particularly for math and that soon enough it would work in the humanities too. "Manual grading is drudgery that consumes time and energy we could spend elsewhere," one professor responded. And again, I disagreed, because I think it's important to remember, if nothing else, that if it's drudgery for teachers it's probably drudgery for students too. People did not like that tweet so much, and many seemed to insist that drudgery was a necessary part of learning.

And so, there you go. We've taken that drudgery of analog worksheets and we've made that drudgery digital and we call that "progress." Ed-tech promises it can surveil all the clicks and taps that students make while filling out their digital worksheets, calculating how long they spend on their homework, where they were when it was completed, how many times they tabbed out to play a game instead, how their score compares to other students', whether they're "on track" or "falling behind," claiming it can predict whether they'll be a good college student or a good employee. Ed-tech wants to gamify admissions, hiring, and probably, if we let it, the school-to-prison pipeline.

I won't say "it's up to you," students, to dismantle this. That's unfair. Rather it is up to all of us, I think — faculty, students, citizens, alike — to chant "Fuck the algorithm" a lot more loudly.

Audrey Watters

