Why One Professor Says We Are ‘Automating Inequality’

By Jeffrey R. Young     Jul 24, 2018

Often the algorithms that shape our lives feel invisible, but every now and then you really notice them.

Your credit card might get declined when you’re on vacation because a system decides the behavior seems suspicious. You might buy a quirky gift for your cousin, and then have ads for that product pop up everywhere you go online. In education, schools and colleges even use big data to nudge students to stay on track.

As we create this data layer around us, there are more and more chances for systems to misfire, or to be set up in ways that consistently disadvantage one group over another.

That potential for systemic unfairness is the concern of this week's podcast guest, Virginia Eubanks. She's an associate professor of political science at SUNY Albany, a longtime advocate for underprivileged communities and an expert on tech. She's the author of Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, which The New York Times called "riveting," noting that this is an unusual accomplishment for a book about policy and tech.

EdSurge connected with Eubanks this month to ask her about her explorations of technology’s unintended consequences, and about what people in education should consider as they leverage big data systems.

Subscribe to the EdSurge On Air podcast on your favorite podcast app (like iTunes or Stitcher). Or read highlights from the conversation (which have been edited and condensed for clarity).

Your book presents a divided world when it comes to the experience of tech today. For well-off people, it's a time of convenience and ease thanks to social media and algorithms. But for people who are poor or disadvantaged, some of those same digital systems can look very different, as you describe in your book. Can you give one example of someone experiencing what you see as the downsides of these digital systems?

Eubanks: Too often, I think, when we talk about technology and how it affects our culture and our politics, we tend to talk about it sort of abstractly, as if it affects everybody the same way, or as if most of the impacts are going to happen sometime in the future. It's something that's always been really frustrating for me about our technology conversations because I've done 15 years of economic justice and welfare-rights work. I know that in the communities where I organize and work, these impacts are being felt right now in very material ways by families who are the targets of these tools, and so it's really important for me to start there. I didn't end there, but I started there.

There's a young mom on public assistance who I worked with many years ago when I was doing mostly community technology center work. She and I had worked on designing a tech lab in a residential YWCA in my hometown of Troy, New York. One day, we were sitting around in the lab and shooting the breeze about technology, and I asked about her EBT card, or Electronic Benefits Transfer card—that's a debit-like card that you get public-assistance benefits on. She goes by a pseudonym in the book, Dorothy Allen. I was saying, "Dorothy, people tell me this is more convenient, that there's less stigma than pulling out actual paper food stamps. What do you think?" She said, "All of that's true, but at the same time, my caseworker uses the electronic records of my EBT card to basically track all of my purchases and hence all of my movements."

I must have had this really naively shocked look on my face, because she pretty much pointed and laughed at me for a couple of minutes. Then, she got sort of quieter and more reflective, and she said, "You know what, Virginia, you all"—meaning professional middle-class people—"you all should pay attention to what's happening to us, because they're coming for you next."

That was 18 years ago. And I think it was both a very generous insight and really prescient.

Could you describe for folks who haven't read the book the three case studies that you explore in depth?

The overall idea of the book is that these new data-driven digital technologies that we're starting to incorporate into our social assistance systems have great potential. They can lower barriers to entry and they can integrate programs. They can speed results. But because of the ways we understand poverty in the United States and because of our really incredibly punitive social-assistance system, what we're actually doing is building a ‘digital poorhouse,’ which is the sort of invisible institution that's made up of decision-making algorithms, automated eligibility processes and statistical models across a really wide range of social-assistance programs.

The first case study I talk about in the book is an attempt to automate and privatize all of the eligibility processes for the welfare system in the state of Indiana in 2006.

The second system that I write about is called the ‘coordinated entry system.’ That's something that's in wide use across the country and around the world, but I write about it specifically in Los Angeles County, which has one of the highest rates of homelessness and the highest rate of completely unsheltered homelessness in the country. There are 58,000 unhoused people in Los Angeles County, and 75 percent of them are totally unsheltered. The system I talk about in LA is what proponents call the Match.com of homeless services: the idea is basically to rank all the unhoused people in Los Angeles County by their vulnerability to some of the worst outcomes of homelessness (death, mental illness and disastrous health effects) and match them to the most appropriate available housing resource, whether that's permanent supportive housing or the time-limited resources of rapid rehousing.

The third case is what's called the Allegheny Family Screening Tool, a statistical model that's supposed to predict which children might be victims of abuse or neglect in the future in Allegheny County, Pennsylvania, the county that includes Pittsburgh.

Can you give a specific example of how these systems can cause stress or harm to people?

In Indiana, the then-governor, Mitch Daniels, signed a $1.34 billion contract in 2006 with a coalition of high-tech companies, including IBM and ACS, to automate all of the state's welfare-eligibility processes. What that meant in practice was moving about 1,500 public frontline caseworkers from their local county offices into regionalized, privatized call centers. On the ground, that meant these caseworkers were no longer responsible for a caseload, a specific group of families; instead, in the new call centers, they responded to a queue of tasks that dropped into their new workflow-management system.

From the caseworker's point of view, it was very hard to stay attentive to a case from beginning to end. About a million applications were denied in the first three years of this experiment. That was a 54 percent increase from the three years before the experiment, and most everybody was denied for this sort of catch-all reason: failure to cooperate in establishing eligibility.

In reality, a mistake had simply been made somewhere: either the applicant had forgotten to sign page 36 of a 50-page application, or the call-center worker had given them incorrect information about what they needed to provide. What the system really did was shift accountability and the burden of proving eligibility from the state and local caseworkers onto the shoulders of some of the most vulnerable people in Indiana, and it had incredibly painful impacts on many people there.

I tell the story of Omega Young, for example, a middle-aged African American woman in Evansville, Indiana, who was denied Medicaid after she missed a phone appointment with the call center to re-certify her eligibility because she was in the hospital suffering from terminal cancer. That's one kind of story that I tell in the book. Really, it's not the intentions that I'm interested in. It's the impacts.

You don't focus on any education case studies in your book, but are there ways in which these same trends could be playing out or could play out in the future in education?

There is what seems like a best-case scenario that's happened recently at Georgia State University. My understanding is that Georgia State has faced some real struggles with retaining and graduating students on time. This new president came in talking a lot about using predictive analytics to identify students who might be struggling earlier and to give them better advising support so that they would stay on track and graduate on time. A lot of people brought this case up to me, like, "Oh, but look at this." It happened enough that I actually did some research on it, and my understanding is that we're very much burying the lead in the story.

This gets written about as a ‘success of predictive analytics’ story, when in fact, one of the things that's really profound about this case is that at the same time that they were beginning to use predictive analytics, they went from doing 1,000 advising appointments per year to doing 52,000 advising appointments per year. They hired 43 new advisors.

One of the things that's really interesting about this is that that's not the story we hear. To me, the lead in that story is “Adequate Resources Solve Real Problem.”

So you’re saying that the danger of focusing on tech in that example is that it suggests all you need to do is build a digital tool instead of hiring the extra 40-plus counselors?

Exactly. Wouldn't that be nice, right? Wouldn't it be great if we could just build these tools that solve our deepest social crises?

But I think that is the hope of a lot of people.

That is the hope. I am sorry to disappoint that hope, but I find that if we don't tackle the deep social issues at the root of these problems, we reproduce them through our tools. And we don't just reproduce them: now that we're looking at these densely networked, very fast tools that scale so quickly, we also run the risk of vastly amplifying those problems.

We tell this story in the United States that poverty is something that happens only to a tiny percentage of probably pathological people, when the reality is that 51 percent of us will be below the poverty line at some point in our adult lives, between the ages of 20 and 64. That's a major issue. And it's not an issue that you address by doing better moral diagnosis and triaging who has access to their basic human rights. That's a problem that you solve by building universal floors under everyone.

It's interesting that you talk about this as a social, almost narrative problem as much as a technical one. And you mentioned a kind of Hippocratic Oath that you're proposing for data scientists, engineers and administrators who deal with these kinds of issues. What is the basic reason for having such an oath?

For me, the Hippocratic Oath boils down to two really basic gut-check questions. The first one is: Does the tool increase the dignity and the self-determination of poor and working people? And the second is: If the tool was aimed at anyone but poor and working people, would it be tolerated?

Those seem like basic democracy questions to me. For me, if you can't answer ‘Yes’ to both of those questions, you just shouldn't build the tool at all; you're on the wrong side of history.

How do you propose getting people to rethink how they think about poverty or providing assistance through government programs, whether it's in education or food assistance?

I actually write about technology to force people to think about poverty. And so it's kind of a fascinating back door into these stories that folks maybe wouldn't pick up off the shelf and read because they don't want to deal with the realities of economic and racial inequality in the United States. It's a way to really invite people into the conversation through a different door, and that has been really exciting for me. I believe it's worked to a certain degree.

But the second thing I want to say is, ironically, the cure for bad data may be telling better stories. Data is deeply shaped by the stories we tell ourselves about how the world works—because that affects how we collect [data], who we collect [data] on, and how we organize the data once we have it. Those are all things that go into the deep social programming of these tools.

And I deeply believe that telling better stories, our own stories, about our struggles with economic precarity can really shift how we think about these social programs—and then how we think about the technologies that we build to serve these social programs.
