Machine Learning Is a Co-opting Machine

This is the second article in the series Co-Opting AI, curated by Mona Sloane. Read the first article in the series, by Meredith Broussard, here.

The following is a lightly edited transcript of remarks delivered at the Co-Opting AI: Machines event, held on April 1, 2019.


I was playing with the title of this series, “Co-opting AI,” and I had this intuition that what Mona had in mind was the idea that we would reclaim the way we tell stories about AI, to co-opt stories about AI. But I want to flip this, to talk about how AI co-opts us.

The state of the art here, the thing that people often are pointing to when they speak about AI these days, is machine learning. Basically, machine learning involves teaching the computer how to do something by example, by showing it many examples. So rather than writing rules to instruct the computer how to perform complex tasks—like understanding language or interpreting human voices or even identifying what is in a photo—what we can do instead is use pattern recognition algorithms. We expose these algorithms to literally billions of examples of what we want them to be able to identify, and we have the computer, all on its own, identify the distinguishing characteristics of those things.
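To make the contrast with rule-writing concrete, here is a minimal sketch in Python (the use of scikit-learn, the tiny dataset, and the sentiment task are illustrative assumptions, not anything a production system actually uses): nobody writes rules for what makes a review positive or negative; the model infers the distinguishing words from a handful of labeled examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of labeled examples (real systems train on millions or billions).
texts = [
    "I loved this film, wonderful acting",
    "What a great and moving story",
    "Terrible plot, I hated every minute",
    "Boring, a complete waste of time",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# No hand-written rules: the model learns which words distinguish the classes.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["a wonderful and moving film"]))  # likely [1]
print(model.predict(["a boring waste of a plot"]))     # likely [0]
```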

Let’s use the canonical example of spam. When I say that AI is co-opting the world, what I’m thinking about is something like spam. When you have a piece of spam in your inbox, you can tell your webmail provider that this is a piece of spam. How many people have had the experience of telling their webmail provider, oh, you missed this one message, this is actually spam? Doing so performs a function that you might not fully realize. It’s not simply moving the message to the spam folder; it’s labeling the message as spam for the webmail provider. We are telling Google—let’s say we’re using Gmail—that this is an example of spam. And every day billions of people are telling Google, here is an example of spam!

What’s really important here is all this human activity that is somehow being co-opted: it is being channeled into the spam-detection service that Google provides to all Gmail users; so human labor has gone into the process of producing these labeled examples of spam that are essential to the development of algorithms that can then identify spam all on their own. It’s an interesting example of the general model at work here. We are using human activity as an example from which to learn, and that becomes the basis upon which we then develop automated solutions to all sorts of previously intractable problems.
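As a hedged sketch of the loop I am describing, assuming a hypothetical pipeline rather than anything Gmail actually runs: each "report spam" click appends one more labeled example to a pool, and the filter is periodically retrained on everything users have labeled so far. The function names and the toy data are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A growing pool of user-labeled messages (a stand-in for billions of reports).
messages = ["Meeting moved to 3pm", "WIN A FREE CRUISE, click now!!!"]
labels = ["ham", "spam"]

def report_spam(message):
    """Each 'this is spam' click adds one more labeled example to the pool."""
    messages.append(message)
    labels.append("spam")

def retrain():
    """Periodically retrain the filter on everything users have labeled so far."""
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(messages, labels)
    return model

report_spam("You have been selected for a $1000 gift card")
spam_filter = retrain()
print(spam_filter.predict(["Claim your free gift card now"]))  # likely ['spam']
```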


Another example that people might be familiar with is CAPTCHA technology. It’s pretty clear that what we’re doing when we complete a CAPTCHA is helping Google train its models for self-driving cars, because the things they ask us to identify are stop signs, bicyclists, and cars. This is a reverse Turing test. We’re supposed to prove our humanity, prove we are not robots; but, in the process, we serve the secondary function of labeling examples. What Google is able to do, again, is co-opt the human work we are doing to prove our humanity into the service of developing new machine learning models. This goes on and on.

You can also think about it with regard to language translation. How do we actually train machines to be able to translate between different languages? You want extremely high-quality documents, where one specific document has received expert translations into multiple languages; and where might you find these? Well, the United Nations has a whole lot of these documents. Many efforts at machine translation have been based on repurposing these high-quality translations between different languages to train the machine to be able to do this automatically.
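As a toy, hedged sketch of the underlying idea: given sentence pairs that experts have already translated, a program can count which words co-occur across the two languages and treat those counts as crude translation evidence. The miniature corpus and the simple co-occurrence counting here are illustrative assumptions only; real systems build far more sophisticated statistical or neural models on top of corpora like the UN documents.

```python
from collections import Counter, defaultdict

# Tiny stand-in for a parallel corpus of expert translations (e.g., UN documents).
parallel_corpus = [
    ("the committee approved the resolution", "le comité a approuvé la résolution"),
    ("the committee rejected the proposal", "le comité a rejeté la proposition"),
    ("the assembly approved the proposal", "l'assemblée a approuvé la proposition"),
]

# Count how often each English word co-occurs with each French word in aligned pairs;
# frequent co-occurrence is weak evidence that one translates the other.
cooccurrence = defaultdict(Counter)
for english, french in parallel_corpus:
    for e_word in english.split():
        for f_word in french.split():
            cooccurrence[e_word][f_word] += 1

# Raw counts are noisy (function words like "le" co-occur with everything);
# statistical alignment models iteratively refine exactly these kinds of counts.
for word in ["committee", "approved", "proposal"]:
    print(word, "->", cooccurrence[word].most_common(3))
```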

We also have things like this in the workplace. When we think about how AI might change labor, we have to understand that what we’re really doing is teaching the machine how to replace us. We’re actually demonstrating through our own actions how it is that we are able to perform some task. And so, in many situations, often without realizing it, the work that we are doing is being co-opted in some sense by the companies that are able to then use that to train the model to perform the job that we once did.

By definition, then, machine learning is a co-opting machine. If this process is fundamentally about learning from example, of trying to draw general lessons from the particular examples we show the machine, then we have to think very carefully about the examples that we are using to train these models.

To the extent that those examples are products of human culture, human decisions, human frailties, the machine is going to learn from those things as well. This could be as straightforward as trying to train a machine to evaluate job applicants using examples of past subjective decisions about whether or not an applicant is qualified or subjective assessments of that person’s performance on the job. The human decisions are the examples we are using to train a machine to produce a purportedly objective assessment. And thus the model inherits the human subjectivity expressed in the decisions that we used to train it.


In the past few years I’ve been involved in a budding interdisciplinary community of people who are trying to point out this problem: to specify technically what it means, and when it is the case, that these machines are learning certain kinds of biases and, secondarily, to try to figure out whether you can intervene in some way.

The community takes inspiration from some of the thinking in discrimination law, which has offered a convenient definition of unfairness in the form of the so-called four-fifths rule (this standard comes from the Equal Employment Opportunity Commission, the federal employment regulator). The four-fifths rule suggests that hiring decisions that result in a disparity in selection rates greater than 20 percent (between, for instance, men and women) constitute an “adverse impact” that is enough to initiate a lawsuit.

This, unsurprisingly, was a very attractive place to start for computer scientists who were trying to think about questions of fairness, because it’s a mathematical definition of fairness. In the past eight years or so, there has been a lot of work to try to take these mathematical definitions of fairness and use them as a way to evaluate the models that we are building. If we train a model to evaluate job applicants, can we evaluate how it is going to make decisions with respect to different types of candidates?
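As a minimal sketch of what such an evaluation can look like, assuming we have a model’s hiring recommendations and a group label for each applicant: compute the selection rate per group and flag any group whose rate falls below four-fifths of the highest rate. The data and function names here are invented for illustration; a real audit involves much more than this single ratio.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of applicants selected (decision == 1) within each group."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        selected[group] += decision
    return {g: selected[g] / total[g] for g in total}

def four_fifths_check(decisions, groups):
    """Flag adverse impact when a group's rate is under 80% of the highest rate."""
    rates = selection_rates(decisions, groups)
    highest = max(rates.values())
    return {g: (rate, rate / highest >= 0.8) for g, rate in rates.items()}

# Invented example: a model's hiring recommendations (1 = advance, 0 = reject).
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
groups = ["men", "men", "men", "men", "men",
          "women", "women", "women", "women", "women"]

for group, (rate, passes) in four_fifths_check(decisions, groups).items():
    print(f"{group}: selection rate {rate:.0%}, passes four-fifths rule: {passes}")
```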

You can think of this recent work on fairness in machine learning as a different type of co-opting, one that tries to use machine learning as a way to explicitly advance interests around civil rights. Not just as a way to replicate the problems with human decision making or human culture, but instead to intentionally intervene in the world to address injustice.

It’s been interesting to see how computer science, after three decades of outside critics arguing that there needs to be more attention to values in design, is now finally taking up these issues. There’s actually quite a lot of technical work now that is trying, purposefully, to advance normative values. In part, maybe, because this work has been done largely by computer scientists, these efforts have had practical impact—in the sense that industry, and to some extent government, are actually quite interested in addressing these issues. This is something that is obviously in the news very often, and so there is also quite a bit of pressure on industry to do something about it. But these tools have also given more traction to calls to address bias and discrimination, making them issues that lend themselves to concrete interventions.


Yet companies have also seized on the idea of fairness in machine learning in a way that I will call a third type of co-opting. Work done by socially conscious computer scientists in the service of traditional civil rights goals, work that was really meant to be empowering, suddenly becomes something that fits quite nicely with the existing interests of companies. If your main concern is that some of these systems don’t work as well for certain populations; if it turns out that the models are less likely to accurately assess, for instance, female applicants; well, one possible response is to go and collect more information about that population, so that the accuracy of your model can be improved. And in the end, this is potentially a very attractive problem for industry: it justifies investing more in data collection and frames the problem of fairness or bias as one that can be overcome simply by maximizing accuracy, which would have been industry’s concern all along. Thus critical work done in computer science has quickly turned into something that is much more aligned with the traditional concerns of industry and government.

I worry very much about the co-opting of the work on fairness in machine learning, about how it may no longer be serving the progressive goals that it had at its inception, how it can be deployed to whitewash the problematic and unjust practices that continue to plague some of our most prominent companies and institutions.

Featured image: Illustration from “The Manhole” for HyperCard by Rand and Robyn Miller, 1988. Daniel Rehn / Flickr