THE QUESTION
Can we use AI to disentangle the inequalities that are linked to racial categories?
THE ARGUMENT
Racial categories are, by definition, unequal categories. They reflect not universal truths but historical processes that have linked racial status to economic, political, and social inequalities. So if artificial intelligence practitioners train models using these artificial categories, the categories will be reified and inequality will be reproduced.
But if AI practitioners do not pay attention to racial categories, they will still reproduce socially embedded inequality. So how can we design a more equal and just AI, one that recognizes people’s chosen identities while avoiding the re-creation of a “New Jim Code”?1
From data gathered about the ancestry, language, and phenotype—but not specifically race—of a population, artificial intelligence can identify the social attributes that drive segregation and then correct for them, all while holding identity in place.2
We argue that this would yield a more enlightened vision of our society by disentangling the dynamics of racial inequality. This broader vision could then be employed to build better systems to avoid reproducing inequality in employment, education, criminal justice, and other socially consequential domains.
THE PROBLEM
American society is profoundly shaped by the 1790 Naturalization Act, which limited naturalization to immigrants classified as “free white person[s] … of good character.” Even after the official end of slavery in the United States, Jim Crow segregation ensured that whiteness would be imbued with social, economic, and even political clout. Whiteness, in fact, remained a criterion for citizenship until passage of the McCarran-Walter Act (Immigration and Nationality Act) of 1952. This legislation not only continued national-origins quotas created in 1924 but also created racial quotas for Asian nations. The idea that the United States is truly a nation of citizens of all races is less than 70 years old.
In the decades since the dismantling of Jim Crow segregation—via the Black Power and civil rights movements (as well as the American Indian Movement and other movements for equality)—US policy makers have struggled to design institutions that are more fair with respect to race. For example, mid-20th-century trade unions tried to change from overt discrimination against racial and immigrant groups to “race-neutral” policies. But these institutions discovered that race-neutral policies did not correct for racial discrimination and systemic segregation that continued in the workplace.3
In fact, if groups of Americans lack access to resources or face systemic discrimination in society at large, then a race-neutral policy will continue to exclude members of those groups on the basis of other, correlated facts about them (such as their wealth or social connections). Power tends to reproduce power, as Pierre Bourdieu suggests.
This is why many institutions, such as universities, that committed themselves to affirmative action programs have witnessed little change over the years. Nevertheless, these programs have been targeted from the outset by a white conservative political backlash, which claimed “reverse discrimination” as soon as civil rights legislation was enacted.
One problem with all these efforts is the idea of “race” itself. Originally thought to have a theological basis and later buttressed by social Darwinism and scientific racism,4 racial categories have no basis in biology (at best, they are weak proxies for genetic diversity). There are no “racial genes,” so genetic differences are not fixed along racial lines. Instead, racial classifications (like designating groups of people as “indigenous”) are political categories, ascribed to people based on their physical appearance (phenotype), ancestry (indigeneity or “race” origin in Africa, for example), language (culture), and socioeconomic class.
The boundaries of the political categories that artificially define racial groups and the symbolic meanings of these categories change over time. For example, the 1890 US Census recognized a category, “quadroon,” for persons of “one-fourth black blood.” The 1930 Census, however, solidified the “one-drop rule”: the principle that “one drop” of African blood made a person “black.” Interracial marriage was universally legalized only in 1967. Under political pressure from the parents of mixed-race children, the census added the option to declare “one or more” racial categories only in 2000. Although, historically, census takers decided how to classify a person’s race based on visual assessments, for the last six decades Americans have been allowed to select their own racial categories on census questionnaires.
As the conversation about race became muddled with notions of color blindness, with genotyping by commercial interests, and with a focus on individual identity and thus on racial classification as simply characteristic of individuals, the ascriptive nature of “official” racial classification of a group in society was obscured.
Yet, surely race is more than an identity one can decide for oneself. The way society continues to classify black people using the one-drop rule shows how race is embedded in a history of social and legal practice designed from the beginning to enforce social, economic, and political inequality; of course, those who are classified as “white” gain societal privileges.
For these reasons, we have concluded that racial categories are intrinsically hierarchical and unfair. This unfairness is at the root of why debates about affirmative action are so bitter and unreconciled: even policies designed to promote racial inclusion depend on racial categories founded in exclusion.
Therefore, the challenge facing designers of new institutions—or institutions on the cusp of dramatic change, due to the introduction of artificial intelligence—is how to design a system that addresses the historical and ongoing inequality reflected in racial categorization, but without reproducing that inequality in AI.
THE CONTEXT
Why worry about racial inequality within artificial intelligence? Because the locus of many important decisions about employment, education, criminal justice, and other socially consequential domains has, in many cases, already moved from human officials to automated AI systems.
This transition poses a serious risk: centralized automated decision making means that small design decisions can affect people at scale. Engineering decisions—even those made without political intent—can now have tremendous political consequences. This is why the public has rightly awakened to the ethical and political effects of AI.
Yet, too often the ethical concerns with AI are framed in strong partisan terms, reflecting political position and rhetoric more than social scientific facts. On the one hand are industrial technologists, who see only the promising and socially beneficial uses of AI. On the other hand are an assortment of critics who are deeply skeptical of new technologies and their applications.
Unlike these different partisan camps, we see opportunity in automated decision making; we also believe that its benefits can only be realized with serious interdisciplinary work: that is, combining solid engineering with historically and empirically sound social science. Sincere collaboration between computer and social scientists can lead to system designs—and political outcomes—that could not have been imagined independently in either field.
Amid an ongoing series of scandals about discrimination by algorithms, a growing community of researchers and practitioners now studies fairness in applications of machine learning in such sensitive areas as credit reporting, employment, education, criminal justice, and advertising. This scholarship has been motivated by pragmatic concerns (about how machine learning produces group biases and how to comply with nondiscrimination law) as well as a general concern (about social fairness).
While many of the controversies that have inspired this research have been about discriminatory impact on particular groups, such as black people or women, computer scientists have tended to treat group fairness abstractly, in terms of generic protected classes rather than specific status groups. This leads analysts to treat ranked racial and gender status categories—male and female, for example, or black and white—simply as nominal categories of personal identity (characteristics of the individual) in computational analysis, rather than as systems of hierarchical social status.
This misunderstanding of race in computer science—this conceptual limitation—reflects a naiveté about race in society at large. Understanding how racial categories have been formed historically and how they structure specific societies today is difficult for those trained in any discipline. Disagreements about the meaning and history of race lead to political divisions, not just in new automated decisions but also in everyday human decision making. Yet AI has raised the political stakes of decision-making procedures by increasing the scale and scope of their impact.
AI has also created an opportunity to embed a deeper understanding of racial classification in large-scale systems. We believe that critics of AI must remain open to new ideas and creative solutions; such critics might well see their concerns addressed by a closer and more sophisticated engagement with what AI could do for racial justice.
At the same time, we believe that engineers and data scientists must understand the socially embedded nature of racial classifications and the historical context that has imbued the topic of race with cultural, social, economic, and political significance. Engineers and data scientists—indeed, all of us—must keep in mind that “race is a modern social construction. … It is a particular way of viewing human difference that is a product of colonial encounters.”5
A thorough and focused understanding of the social construction of racial difference is crucial. But how can we apply computational thinking to disentangle the embedded disadvantage of racial hierarchy?6
Using scientific and mathematical concepts, we present “race” categories in a newly conceptualized way. We offer one possible approach to thinking about race difference that seeks to separate social inequality from social identity within the design of artificially intelligent systems.
THE SOLUTION
Our proposal originates from computational thinking about the problem of racial categorization. We trace the problem to an ongoing historical process.
First, racial ascription assigns individual bodies to political categories of free and unfree. Second, people are sorted in space and society according to these group ascriptions. Finally, the sorted groups are treated disparately, leading to different outcomes. This disparate treatment bonds people together, linking their fates to their racial group.
Through this process, the racial ascriptions end up solidifying real differences in social status, education, wealth accumulation, and political access. In the last stage, the system reproduces itself: the real differences between social groups contribute to the formation of racial categories by imbuing them with meaning. These social and material differences are reified and used to justify the ascription of race to bodies. This cycle structurally embeds race differences.
To address the problem of racial inequality, institutions need to address this cycle of racial ascription, sorting, and segregation. Our recommendation: rather than using societally entrenched racial categories that are flawed, derive new analytical categories from demographic data based on observed patterns of segregation.
This proposal depends on an extensive data-collection effort. It assumes that institutions have access to detailed personal or even biometric data that is regularly updated and includes information about each person’s appearance, ancestry, language, and social class. Critically, these data must not be framed in racial terms, but rather must capture the nuances of and differences between people in ways that racial categories never do. Facial analysis technologies could potentially read the requisite phenotypic features from a recent photograph, such as a driver’s license photo. Ancestry information could be drawn from a genealogical database. And so on.
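To make the shape of such data concrete, here is a minimal Python sketch of a per-person record. It is an illustration under our own assumptions, not a prescribed schema: every field name is hypothetical, and the point is simply that phenotype, ancestry, language, class, and location can be recorded without any field for “race.”

```python
from dataclasses import dataclass

# A minimal sketch (not a prescribed schema) of the per-person record the
# proposal assumes. Every field name here is a hypothetical illustration.
# Note that no field encodes an official racial category.
@dataclass
class DemographicRecord:
    person_id: str
    skin_tone_index: float    # phenotype, e.g., estimated from a recent photo
    ancestry_shares: dict     # e.g., {"West Africa": 0.6, "Western Europe": 0.4}
    home_language: str        # language as a cultural indicator
    household_income: float   # socioeconomic class proxy
    neighborhood_id: str      # residence, used to observe spatial segregation
```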
Although many may balk at the collection of such granular data about people, it is well known that fairness requires more data, not less.7 In fact, color blindness does not lead to equality; it reproduces the inequality that it cannot see.
What we propose, therefore, is a proactive strategy, one designed to reduce inequality—which happens to require responsible use of more nuanced data. It is always possible for these data to be misused. But as many recent cases of AI use can attest, ignoring race difference is not the solution. Should the data be available, it will be possible to discover the dimensions of bodily difference that are correlated with how people are segregated in society and space.
A technique known as unsupervised machine learning is useful here. It can examine a data set and infer the latent structure that gives the data their shape. If there is systematic segregation on the basis of, for example, skin color and ancestry from a particular continent, unsupervised machine learning could discover that structure in the data set. Rather than taking racial categories for granted, such a system would learn quasi-racial categories based on scientific facts about a population. This approach might identify disadvantaged populations irrespective of their collective affinities.
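As a hedged illustration of this step, the following Python sketch applies an off-the-shelf clustering method (a Gaussian mixture model from scikit-learn) to synthetic data. The features, the synthetic group structure, and the choice of two components are all assumptions made for the example; a real system would need to select the number of categories from the data (for instance, with a criterion such as BIC) and validate it against observed segregation.

```python
# A minimal sketch, not the authors' published method: derive "quasi-racial"
# categories from non-racial demographic features with unsupervised learning.
# All data below is synthetic and all feature choices are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1_000

# Synthetic population: two latent groups that differ, on average, in
# phenotype, ancestry, and residential location (a segregated society).
latent_group = rng.integers(0, 2, size=n)
skin_tone = rng.normal(loc=2.0 * latent_group, scale=1.0)         # phenotype index
ancestry_share = np.clip(
    rng.normal(0.15 + 0.7 * latent_group, 0.1), 0, 1              # share of ancestry from one region
)
neighborhood = rng.normal(loc=3.0 * latent_group, scale=1.0)      # crude spatial coordinate

X = StandardScaler().fit_transform(
    np.column_stack([skin_tone, ancestry_share, neighborhood])
)

# Infer categories from the observed structure of the data rather than
# imposing official racial labels.
model = GaussianMixture(n_components=2, random_state=0).fit(X)
quasi_category = model.predict(X)

# If the population is segregated along these dimensions, the learned
# categories will track that segregation; if it is not, they will not.
print(np.bincount(quasi_category))
print("agreement with latent groups:",
      max((quasi_category == latent_group).mean(),
          (quasi_category != latent_group).mean()))
```

On segregated synthetic data like this, the learned labels closely track the latent groups; on a well-integrated population they would not, which is precisely the property the proposal relies on.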
We propose using these quasi-racial categories (which capture the demographic dimensions of segregation and inequality) rather than traditional categories (which would only reify difference). This has a number of benefits.
First and foremost, such a system would address inequality stemming from the political construction of racial categories, but without using those racial categories directly. The system avoids the unfairness of the categories themselves.
Second, this loosens the connection between affirmative action policies dedicated to correcting historical injustices and the conceptually zero-sum stakes that divide our nation politically. Such a system aims for a socially integrated society, not a society where one or another group has dominance or advantage. By being responsive to changing demographic conditions, such a system would ease political tensions (surrounding concerns that policies may weigh unfairly in any direction).
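To suggest what “using” these derived categories could look like downstream, here is one more hedged Python sketch: auditing an outcome across the learned quasi-racial categories instead of official racial labels. The outcome, the rates, and the variable names are all synthetic stand-ins introduced for illustration, not results.

```python
# A hedged illustration (not from the article) of a downstream audit:
# compare an outcome across learned quasi-racial categories rather than
# official racial labels. All values below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
quasi_category = rng.integers(0, 2, size=1_000)        # labels from the clustering step
base_rate = np.where(quasi_category == 0, 0.70, 0.55)  # synthetic, unequal outcome rates
approved = rng.random(1_000) < base_rate               # e.g., a loan or hiring decision

for g in np.unique(quasi_category):
    rate = approved[quasi_category == g].mean()
    print(f"quasi-category {g}: outcome rate {rate:.2f}")

# A persistent gap across the learned categories is evidence that the
# system is reproducing segregation-linked inequality and needs correction.
```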
Beyond the proposal for system and policy design, our work raises questions about how we think about race in society. We know that color blindness is not a solution to racism. But racial thinking remains problematic, even when it is employed in projects that aim for justice. Therefore, we argue for heightened awareness of personal differences and social history, and openness to how technology affords the possibility for future change.
This article was commissioned by Mona Sloane and Caitlin Zaloom.
1. Ruha Benjamin, Race after Technology: Abolitionist Tools for the New Jim Code (Polity, 2019).
2. Sebastian Benthall and Bruce D. Haynes, “Racial Categories in Machine Learning,” in Proceedings of the Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery, 2019).
3. David R. Roediger, Working toward Whiteness: How America’s Immigrants Became White: The Strange Journey from Ellis Island to the Suburbs (Basic, 2005), pp. 2, 212.
4. See Stephen Jay Gould, The Mismeasure of Man (Norton, 1996), p. 106.
5. Tanya Maria Golash-Boza, Race and Racisms: A Critical Approach, 2nd ed. (Oxford University Press, 2018), p. 2.
6. Jeannette M. Wing, “Computational Thinking,” Communications of the ACM, vol. 49, no. 3 (2006).
7. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel, “Fairness through Awareness,” in Proceedings of the Third Innovations in Theoretical Computer Science Conference (ACM, 2012).