Public Thinker: Virginia Eubanks on Digital Surveillance and People Power

Thinking in public demands knowledge, eloquence, and courage. In this interview series, we hear from public scholars about how they found their path and how they communicate to a wide audience.
“We have to build against the legacy of inequality. Intentionally. We have to build our values into our design practices.”
Virginia Eubanks

Virginia Eubanks is a writer, organizer, and academic. She is the author of Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age, and coeditor, with Alethia Jones, of Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith. Eubanks has written about technology and social justice for Scientific American, The Nation, The Guardian, Harper’s, and Wired. For two decades, she has been involved in community-technology and economic-justice movements; this work included cofounding the Our Data Bodies Project. She is an associate professor of political science at the University at Albany, SUNY.

After receiving tenure, she studied journalistic techniques—including during a stint as a New America fellow—that enabled her to make Automating Inequality a vivid portrait of Americans caught up in systems of data mining, policy algorithms, and predictive risk models that worsen economic inequality. In a wide-ranging conversation, we spoke about the craft of making academic arguments accessible and urgent; systemic inequality and the gaps in engineering education; how tech is the new finance; and our shared love of N. K. Jemisin’s fiction.


Jenn Stroud Rossmann (JSR): Throughout your books, you’ve written about how technology can contribute to social stratification. You’ve also written against the fantasy that technology might dissolve these strata. How did you start investigating these questions?

 

Virginia Eubanks (VE): I began researching technology in the welfare system because I was already organizing around economic-justice issues. In the late 1990s and early 2000s, much of the organizing around technology and social justice was captured by the “Digital Divide”: this idea that the problem was that poor and working people, or people in marginalized communities more generally, lacked access to technology.

Early on, I had an incredible light-bulb moment that changed everything about how I do this work. From 1999 to 2005, I was lucky to work with a community of women living in a residential YWCA in my hometown of Troy, NY, who were generous enough to correct my misperceptions.

After my first year of working in the community, they sat me down and basically staged an intervention. They said: “Virginia, we think you’re really sweet, but the questions you’re asking have nothing to do with our lives.”

So, with their guidance, I started asking different questions, and what I learned was fascinating. A moment I often recount was with Dorothy Allen, whom I quote in Automating Inequality. She reminded me that women on public assistance don’t lack access to technology; in fact, technology is ubiquitous in their lives. But their interactions with it are pretty awful. It’s exploitative and makes them feel more vulnerable. It’s often dangerous to their families and their neighborhoods. She said Electronic Benefits Transfer (EBT) cards, for example, were being used by her caseworker to track her purchases and movements.

Ever since that moment, I’ve stopped asking, “What problem in poor and working people’s lives can technology solve in the future?” I ask instead, “How is technology working in your life right now?”

 

 

JSR: Right. It’s more about agency than access.

 

VE: It’s about acknowledging reality. Science fiction writer William Gibson is often credited with saying, “The future is already here—it’s just not very evenly distributed.” I believe that too, but in almost the opposite sense of what I think he intended.

People who live in low-rights environments—poor and working-class communities, migrant communities, communities of color, religious or sexual minorities—are already living in the digital future, especially when it comes to high-tech surveillance and discipline. They’re required to give up private information to gain access to services, or their data trails put them at risk of algorithmic decisions about their children’s welfare or policing in their communities. They encounter near-constant digital surveillance in public housing, in the criminal justice system, and in the low-wage workplace.

We don’t need to project potential harms into an abstract future; we can ask people what they experienced last week.


JSR: And when you ask after these experiences, as in Automating Inequality when you explore the poor technical and political choices that plagued Indiana’s experiment to automate eligibility for public services, it seems that it is always the most vulnerable people taking it—as you say—“on the chin, … in the gut, and in the heart.” So why aren’t these systems better designed and sustained?

 

VE: We refuse to see these technological systems as part of history. We believe they are like the monolith from 2001. They just drop out of the sky onto totally blank ground and disrupt everything. The reality is that new algorithmic decision-making tools, predictive models, and automated eligibility systems are very much a part of the policy machines that went before. They are consistent with our country’s deep history of punishing the poor that started before the administration of public programs was digitized.

We also refuse to see these technological systems as part of this political moment, which is characterized by absolute hatred of the poor.

That hatred comes from all sides of the political spectrum. Our political culture scapegoats, punishes, stigmatizes, and criminalizes poor people across the color line, though the impacts land with particular violence on the bodies of poor people of color. We don’t look at the way that the newest tools—algorithms, machine learning, artificial intelligence—are built on the deep social programming of things that went before, like the poorhouse, scientific charity, and eugenics.

We tell ourselves that this is brand new. But if you start looking at the inside of these systems, you can see that all that legacy programming is still there.


JSR: In Automating Inequality you quote a longtime social worker saying that new tech systems changed her job from bearing witness and building solidarity with clients into being more of a fraud investigator, focusing on “just the facts.”

 

VE: This is why we need to know our history. Inspired by successful welfare-rights organizing, eight thousand front-line welfare caseworkers in New York City walked out on strike in 1968. But they weren’t striking for their own employment contract. They were striking to make the application process easier for clients and to make benefits more generous. That’s also the year new digital systems began to be integrated into welfare, literally right after a walkout built on solidarity, on the idea that caseworkers and clients were on the same side.

That should tell us a lot about what “problem” this technology is actually supposed to solve. Unless we find the courage to face that, we’ll continue to create systems that not only automate inequality but amplify inequality. We have to articulate the politics of these systems before we can build technology that might lean toward justice.

 

JSR: Amen.

 

VE: And building it “in neutral” is not doing that. We have to build explicitly political tools to deal with the explicitly political systems they are operating in. We have to build against the legacy of inequality. Intentionally. We have to build our values into our design practices and be clear about them.

That runs counter to engineering education. That’s one of the reasons it’s challenging.

 

JSR: The status quo of engineering education has clung very tightly to this notion that this knowledge is neutral. We’re just making tools, and then what happens is out of our hands once they’re off the line. That’s very dangerous, of course. And it’s inaccurate: the politics and history are baked in.

But we have to rebuild engineering education to give students the tools to even ask the questions we’re talking about. Because just giving them a week of ethics or that one history requirement isn’t enough.

At the end of Automating Inequality, you say designers should ask themselves these questions: Does the tool increase the self-determination and agency of the poor? Would the tool be tolerated if it were targeted at non-poor people? But I’m not sure that young engineers and data scientists are being well equipped to answer those questions.

 

VE: I agree.


JSR: So how could we help them get there? Or if they’re not going to get there, who should be in the room with them to do this work? Who is capable of answering those questions?

 

VE: I talk a lot to young designers, young engineers, young computer scientists. Often they eagerly say, “Give us the five-point plan of building better technology …”

 

JSR: Give me a checklist, yeah.

 

VE: And I want to hug them and say, “Bless you for asking that question. And I’m sorry, and you’re welcome, but I’m not going to answer it. The solutions are so much bigger than any checklist: they’re contextual, collaborative. But one way to start is to learn a lot more about the history of the policy area that you are designing tools for.” And at this point, often they roll their eyes and say, “That sounds so hard.”

 

JSR: Yes.

 

VE: And I get it. But we’re not going to get to more just technologies in public services, for example, until the designers of those systems really understand that public assistance has historically worked to block whole categories of people from receiving help, to stigmatize them for their poverty, and to punish them for being poor.

 

JSR: Right.

 

VE: I cut my teeth teaching at Rensselaer Polytechnic Institute while doing my PhD. I’m used to being in classrooms with engineers. One of the things that I found interesting (and this is just my own reflection on my experience) is that most first-year students at RPI seemed to care very deeply about the real world. They were in engineering because they wanted to solve big problems. But by the time they finished their degree, they were living in the bracketed world of the abstract.

So it’s not that we have to teach them something specific and new, it’s that we have to stop teaching them to not care about the real world. Something happened to those students that took them out of the actual and launched them into the realm of “elegant” problems. We have to stop doing that.

JSR: I completely agree. Let’s bust the myth of objectivity and neutrality. Add context to the content. And maybe unpack how important it has been to some powers that be to depoliticize knowledge that is inherently political and will be used in a political world.

 

VE: We also have to acknowledge that tech has become the new finance. The people who would have gone into investment banking in the 1980s are going into tech now.

So I hesitate to suggest that the solution to automated inequality is educating engineers differently. Was the solution to the financial crisis of the ’80s and ’90s training MBAs differently? We can’t put all our faith in that.

Absolutely we need to pay attention to the education that folks are getting. But also, I want to be clear that there are plenty of incentives for young engineers and data scientists to deny their culpability instead of working toward equity. You can make a lot of money not caring. You can gather a lot of power not caring, and you can use the rationalization of “doing well while doing good” as a cover story for doing a lot of nefarious things.

Let’s be honest: there are a lot of great students who care deeply about the world looking to get into this work. And there are also people who think they can make a lot of money, ride in helicopters, and date models.

 

JSR: That’s true. I would argue that education still has to be part of a solution. But, yes, it should go hand in hand with engineers feeling that they themselves are citizens who have the right to refuse, the responsibility to articulate values, the right to interrogate their technology.

 

VE: Yes, absolutely. And I think that’s why tech workers organizing themselves is really exciting. I’m thrilled when folks who work in big tech see themselves as workers, and I think there’s some really interesting, progressive stuff happening around the “No Tech for ICE” campaigns and around tech workers’ unions and walkouts.

I’m most inspired when we define technology work broadly enough to see software engineers as workers who have some interests in common with workers stuffing boxes for Amazon or doing the ghost work of content moderation. When we acknowledge the hierarchies created by education, status, race, class, and gender, but still see ourselves as workers together. That’s really important work.

 

JSR: Right.


VE: Especially because—and this connects the question of citizenship and unions back to the status quo assumptions cooked into our technology—American culture is built on a deep misperception that poverty is a moral failing and not a majority condition.

If you look at poverty across the life cycle in the United States, as Mark Rank has done in his pathbreaking work, you see that 60 percent of us will be below the poverty line at some point in our adult lives (between the ages of 20 and 75). Close to two-thirds of us, 64 percent, will receive means-tested public assistance. That’s not reduced-price school lunches, not Social Security; that’s straight welfare: TANF [Temporary Assistance for Needy Families], food stamps/SNAP, home heating assistance.

 

JSR: For most of us. Sixty-four percent.

 

VE: But we persist in this pernicious story that poverty is an exception, that it happens to a tiny percentage of probably pathological people.

That means that so many of our social programs are premised on the idea that the first and most important thing we can do is figure out if your poverty is your own fault or not. So the first thing our approach does is create this moral thermometer, this diagnostic for your potential failings: Are you a risk to your children? Might you make “bad choices”? Are you trying to defraud the system?

 

JSR: Predicting whether someone will be a risk, right?

 

VE: Right. Which is tough, because someone has to program these models. And even very smart people—not just students, like we’re discussing, but people advanced in their careers—make some really bad assumptions when they build models.

 

JSR: Right.

 

VE: The best example in the book is the Allegheny Family Screening Tool. Originally, one of the outcome variables—one of the proxies they believed could stand in for actual harm, abuse, and neglect of children—was a variable called “call re-referral.”

Call re-referral was defined in the following way: Say there was one report of suspected maltreatment (a call to a hotline or a report from a mandated reporter) and it was “screened out” and not investigated. But then there was a second call on the same family within two years. For the AFST model, this was the same as abuse or neglect actually having happened.

But anyone who has direct experience with child protective services knows that “vendetta calling” is, unfortunately, common. People sometimes call on each other to settle other kinds of tensions and disputes: neighbors mad about a party, partners going through a breakup. Hotline calling is also anonymous, so it is not unusual to get a whole string of vendetta calls. That means that call re-referral is a terrible proxy for harm.
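To make the mechanics of that proxy concrete, the sketch below (in Python) shows one way a “re-referral” outcome label could be constructed from call records. It is purely illustrative and uses hypothetical data structures; it is not drawn from the AFST’s actual code. The point it demonstrates is the one above: such a labeling rule treats any second call on a family within two years, including an anonymous vendetta call, exactly as if harm had occurred.

```python
from datetime import date, timedelta
from dataclasses import dataclass

# Hypothetical record of a single hotline call about a family.
@dataclass
class Call:
    family_id: str
    call_date: date
    screened_in: bool  # True if the report was investigated

def label_re_referral(calls: list[Call], window_days: int = 730) -> dict[str, int]:
    """Illustrative proxy labeling: a family gets label 1 ("harm") if any
    screened-out call is followed by another call within the window."""
    labels: dict[str, int] = {}
    by_family: dict[str, list[Call]] = {}
    for c in calls:
        by_family.setdefault(c.family_id, []).append(c)
    for fam, fam_calls in by_family.items():
        fam_calls.sort(key=lambda c: c.call_date)
        labels[fam] = 0
        for i, c in enumerate(fam_calls):
            if c.screened_in:
                continue
            # Any later call on the same family within two years flips the
            # label to 1 -- even an anonymous or retaliatory "vendetta" call.
            for later in fam_calls[i + 1:]:
                if later.call_date - c.call_date <= timedelta(days=window_days):
                    labels[fam] = 1
    return labels

# Example: a screened-out call followed eight months later by an anonymous
# call from an angry neighbor is labeled exactly like confirmed maltreatment.
calls = [
    Call("family_a", date(2016, 1, 10), screened_in=False),
    Call("family_a", date(2016, 9, 2), screened_in=False),
]
print(label_re_referral(calls))  # {'family_a': 1}
```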

They did remove call re-referral as an outcome variable after Automating Inequality came out, but it shocked me that this team of very advanced experts from around the world, including experts in child protective services, somehow didn’t take this into account from the beginning. Or didn’t think it mattered enough to disqualify call re-referral as an outcome variable for the system. Did they talk to any CPS-impacted parents? It was among the first things parents and advocates raised when I spoke to them during my research in Allegheny County.

 

JSR: Of course.

VE: So how can we teach people not to do that? We can’t teach students everything they’re going to need to know about every possible system they might come into contact with as designers, especially 5, 10, 15, 20 years down the line. So when it comes to students, helping them develop a healthy curiosity about history and a respect for the lived experience of people who will be affected by the tools they are building may be the most crucial thing. It’s an attitude of being genuinely modest about your own expectations and your own expertise. That’s really important.

 

JSR: That’s right. Humility and curiosity.

 

VE: We cannot create these deeply consequential systems, systems that are making crucial political decisions, unless we’re working in collaboration with the people who will be most impacted by them.

 

JSR: We need a more participatory design process. Design justice, to use Sasha Costanza-Chock’s powerful framing.

 

VE: People who will be directly impacted must have a say in how the systems are designed. More importantly, they have to have power in deciding how these systems are implemented. Communities should have the right to say no to these systems if they make people more vulnerable, if they don’t reflect deeply held community values. There should be some way that folks can say no.

But that’s really complicated inside the public-assistance system, right?

 

JSR: Right, because they’d be saying no to systems that are now the only way they can get housing or access services.

 

VE: Right. Some might say: If you don’t want to give up this incredibly sensitive information about yourself, if you don’t want to be constantly under the digital gaze of the state, then just don’t apply for food stamps.

But, as legal scholar Khiara Bridges reminds us, for a family that can’t afford to buy enough healthy food, not applying for food stamps is textbook child neglect. Now you’re opening the door to a child protection investigation.

So the right to refuse a government-technology entrance into your life is very contextual—the right to “just say no” has a lot to do with how much power and resources you have. Or how well you organize to build power with your community.


JSR: One of the more unsettling subtexts of your work is that our most vulnerable and precarious citizens are beta testers for these technologies. For them, a small glitch is devastating; they really don’t have the ability to opt out meaningfully in a scalable way, and so …

 

VE: Poor and working-class people, or people in marginalized communities more generally, are experimental populations for these systems. That’s happened since the poorhouse, which often donated (or sold) paupers’ bodies to hospitals for dissection.

 

JSR: Oof.

 

VE: And it’s immoral, and appalling. Period. Underline that. But one of the great lessons from the Indiana case, where community organizing and lawsuits led to a destructive, fully automated public welfare system being overturned, is that it’s absolutely possible to stop these things. We talk about these technologies as if they are a fait accompli. The reality is, if we decide we’re not going to stand for it, we can stop these projects.

They did in Indiana. It was old-school organizing: printing out flyers and putting them in bags at convenience stores, and town meetings where hundreds of people talked to their legislators about the system’s impact on their lives.

We need more spaces where the people who are most impacted by these systems can tell their stories and share their analyses. And we all need to recognize that we have the power to shift or stop these systems. There’s nothing magic about them.

 

This article was commissioned by B. R. Cohen.

Featured image: Virginia Eubanks (2019). Photograph by Sebastiaan ter Burg / Wikimedia Commons