Over the past several decades, politicians and business leaders, technology pundits and the mainstream media, engineers and computer scientists—as well as science fiction and Hollywood films—have repeated a troubling refrain, championing the shift away from the material and toward the virtual, the networked, the digital, the online. It is as if all of life could be reduced to 1s and 0s, rendering it computable.
Apple’s recent iPhone X announcement is a perfect example of this. In the opening statement of the film that introduces the newest version of the company’s signature product, Apple states, “For more than a decade, our intention has been to create an iPhone that is all display. A physical object that disappears into the experience” (emphasis mine).1
This is more than just sophisticated advertising. Rather, it is an affirmation of more than 25 years of design criteria and engineering logic from the field of ubiquitous computing, which originated at Xerox PARC with Mark Weiser’s well-known statement that “the most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”2
Weiser’s vision set the stage—and, more importantly, shaped the business agenda—for the creation of a world in which hundreds of tiny computers would be embedded into the environment invisibly and seamlessly. Today, it is in design criteria and engineering specifications—such as “invisibility” and “seamlessness,” which aim to improve the human experience with technology—that ethical decisions are negotiated.
Whether this disappearance is literal or metaphorical in the case of the iPhone X, the emphasis on dematerialization is significant because it plays such a large role in the animating logic of our time. Such characterizations make it easy to dismiss the material realities of technologies, including the ways in which they are entangled with human bodies, environmental resources, and political economies.
We have been enticed into a world in which computing has faded into the background of everyday life, effectively becoming invisible. At the same time, we have actively concealed the ways in which these networked systems of software, data, technologies, and infrastructures “have politics.”3 And, with promises that computers are impartial, we have removed them from the public eye, making them difficult to expose and critique.
Yet these systems can only be understood as the flawed extensions of human creation. They act on our biases by replicating them and distributing them into the background of everyday life, thereby reinforcing and even exacerbating existing structural inequalities.
Take this example. In late July 2017, the City of Chicago agreed to settle a $38.75 million class-action lawsuit related to its red-light-camera program. Under the settlement, the city will repay drivers who were unfairly ticketed a portion of the cost of their ticket.4 Over the past five years, the program, ostensibly implemented to make Chicago’s intersections safer, has been mired in corruption, bribery, mismanagement, malfunction, and moral wrongdoing. This confluence of factors has resulted in a great deal of negative press about the project.
The red-light-camera program is just one of many examples of such technologies being adopted by cities in their quest to become “smart” and, at the same time, increase revenue. Others include ticketless parking, intelligent traffic management, ride-sharing platforms, wireless networks, sensor-embedded devices, surveillance cameras, predictive policing software, driverless car testbeds, and digital-fabrication facilities.
The company that produced the red-light cameras, Redflex, claims on its website that its technology can “reliably and consistently address negative driving behaviors and effectively enforce traffic laws on roadways and intersections with a history of crashes and incidents.”5 Nothing could be further from the truth. Instead, the cameras were installed at some intersections without any history of problems; they malfunctioned; they issued illegal tickets when yellow lights ran shorter than federal minimums; and they issued tickets after enforcement hours.6 And, due to existing structural inequalities, these failures were more likely to harm poorer and less advantaged city residents.
The controversies surrounding red-light cameras in Chicago make visible the ways in which design criteria and engineering specifications—concepts including safety and efficiency, seamlessness and stickiness, convenience and security—are themselves ways of defining the ethics, values, and politics of our cities and citizens. To be sure, these qualities seem clean, comforting, and cuddly at first glance. They are difficult to argue against.
But, like wolves in sheep’s clothing, they gnash their political-economic teeth, and show their insatiable desire to further the goals of neoliberal capitalism. Rather than merely slick marketing, these mundane infrastructures (hardware, software, data, and services) negotiate ethical questions around what kinds of societies we aspire to, what kind of cities we want to live in, what kinds of citizens we can become, who will benefit from these tradeoffs, and who will be left out.
Furthermore, the red-light-camera example should not be understood as a rare exception. Rather, it is business as usual at the intersection of technology and politics. As human and technological systems become more deeply enmeshed and indistinguishable from one another—mutually reinforcing one another’s possibilities and failures—we can only expect such examples to become more commonplace.
Rather than letting these systems fade into the background, as advocated by so many science fiction accounts and technologist visions, a deeper engagement with the material realities of digital technologies is necessary. This public participation in understanding, experiencing, and evaluating these technologies will allow for the development of an essential language and literacy around these systems and, in particular, the ways in which they embody our ethics and values. As a result, we will be able to design and use technologies to reinforce principles of equality, fairness, and justice.7
How might we make sense of the politics at play in the iPhone’s quest to become invisible? Can we meaningfully untangle the issues plaguing Chicago’s red-light-camera program? In what ways can we engage with future technologies and ensure that they reflect our values?
Three recent books published by MIT Press—Paul Dourish’s The Stuff of Bits, Hamid Ekbia and Bonnie Nardi’s Heteromation, and Ed Finn’s What Algorithms Want—help to answer these questions, specifically those about the material realities of the technologies that we use (and those that use us) every day, as well as those that we will design in the future. Each of these three books deals with a different way in which invisibility has come to dominate our technological understanding: Dourish argues that software itself has a distinct materiality that is often overlooked; Finn turns to the history of computing, illustrating a persistent gap between ideas and implementations; and Ekbia and Nardi highlight the important connection between labor and technology. All of these books seek to make visible the hidden politics of technology by overturning the myth of “impartial” code and excavating the biases and bodies in the machine.
There is no such thing as “just code,” writes Finn. There are no immaterial 1s and 0s mediating reality without also being implemented in the material world. Rather, algorithms are always the product of social, technical, and political decisions, negotiations, and tradeoffs that occur throughout their development and implementation. And this is where biases, values, and discrimination disappear into the black box behind the computational curtain.
From racist predictive-policing software to discriminatory algorithmic hiring platforms to the role of social media in the 2016 presidential election and the Equifax breach, it is becoming more and more difficult to ignore the growing evidence that illustrates the ways in which algorithms reflect our biases. Yet we continue to perpetuate mythologies of simplicity, invisibility, and efficiency that present these systems as neutral, logical, and immaterial, insisting that they, in Finn’s words, “work according to very different rules from material culture.”
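One mechanism behind this pattern, the feedback loop between biased records and algorithmic predictions, can be made concrete with a toy simulation. The sketch below is my own illustration, not drawn from any of the books under review, and its districts, rates, and patrol rule are invented for the purpose: two districts have identical underlying incident rates, but one begins with more recorded incidents because it was more heavily patrolled in the past.

```python
import random

random.seed(0)

# Two districts with the SAME underlying incident rate; district A simply
# starts with more recorded incidents due to heavier past patrolling.
TRUE_RATE = 0.1  # hypothetical daily probability of an incident
recorded = {"A": 60, "B": 30}

for day in range(200):
    # "Predictive" dispatch: send the patrol wherever past records point.
    patrolled = max(recorded, key=recorded.get)
    # Incidents are only recorded where a patrol is present, so the data
    # grow only in the district the model already favored.
    if random.random() < TRUE_RATE:
        recorded[patrolled] += 1

print(recorded)  # A's lead widens, though A and B behave identically
```

Nothing in this loop is malicious or broken; a perfectly functioning algorithm replicates the bias in its historical inputs and feeds the amplified record back to itself.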
In The Stuff of Bits: An Essay on the Materialities of Information, Dourish argues against this dematerialization and the “substitution of bits for atoms,” illustrating the ways in which “the physical world persistently makes itself felt.” Drawing on empirical descriptions of what he calls “the materialities of information representation” in the case of software, spreadsheets, databases, internet-routing protocols, and networks, he makes the important argument that bits “are not just bits. Some matter more than others.” He explains further:
Some arrangements of bits are more easily manipulable than others. Some are more amenable to one form of processing or another. Some arrangements of bits can be transmitted easily over unreliable networks while others cannot; some can be processed by the applications on our computers, and some cannot; some can fit into limited storage, and some cannot; some can be efficiently processed by parallel computers, and some cannot. Some are designed for concision, some for robustness, and some for interactivity.
In further elaborating on the materiality of information, Dourish develops a rich set of properties and qualities to describe the ways in which the software and its representations can be “created, transmitted, stored, manipulated, and put to use—properties like their heft, size, fragility, and transparency” and qualities including robustness, malleability, speed, error detection, and consistency.
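To see what such properties mean in practice, consider a minimal sketch (mine, not Dourish’s; the camera reading and field names are invented for illustration) that encodes the same small record in two different arrangements of bits:

```python
import json
import struct

# One hypothetical reading from a red-light camera, expressed two ways.
event = {"camera_id": 4102, "speed_kmh": 52.5, "red_light": True}

# Arrangement 1: JSON text. Human-readable, self-describing, and easy to
# inspect, extend, and pass between programs, but comparatively bulky.
as_json = json.dumps(event).encode("utf-8")

# Arrangement 2: packed binary. Compact and fast to parse, but opaque:
# its meaning depends on a format string ("<Hf?") that lives outside the
# bytes themselves, so a mismatched reader sees only noise.
as_binary = struct.pack(
    "<Hf?", event["camera_id"], event["speed_kmh"], event["red_light"]
)

print(len(as_json), len(as_binary))  # roughly 57 bytes versus 7
```

The record is “the same” in both cases, yet one arrangement invites inspection and survives changes in software, while the other fits into limited storage and resists casual reading: exactly the kind of difference that Dourish treats as consequential.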
How might we apply Dourish’s analysis to the red-light-camera program described above? Dourish argues that the “social and material are entwined,” meaning that it is necessary to consider the entire system of technologies in the context of historical, social, political, and economic conditions. The red-light-camera system consists of hardware, software, and services; the software itself includes the ability to “capture, process, protect or enforce the targeted behavior” using sensory devices, business-process-management software, and network-integration and monitoring software.8 Rather than expecting these systems to function invisibly in the background, their breakdowns—whether social or technical—reveal the ways in which it is foolish to put too much trust in the universal claims of technological solutions.9 Specifically, in this case, rather than enforcing safe driving, the system created fearful city residents, afraid to drive through certain intersections and sometimes stopping short, causing unnecessary rear-end collisions.
While Dourish acknowledges that “a materialist account can provide a[n] entry point to much broader concerns than simply infrastructures and their configurations, opening up questions of power, policy, and polity in the realm of the digital” (especially in the chapter about internet routing), he does not elaborate in much detail on the ways in which the specific examples that he describes reflect the political economy of neoliberal capitalism.
This is where Ekbia and Nardi’s book makes an essential contribution, tracing the parallel developments of technology (specifically computing, beginning with the Jacquard loom in 1801) and the economy, as well as the ways in which they shape one another. In Heteromation, and Other Stories of Computing and Capitalism, Ekbia and Nardi analyze the relationship between “two magical phenomena”: “the division of labor between humans and machines, and how it is configured and put to use by contemporary capitalism.”
Justice for “Data Janitors”
Heteromation is the term that Ekbia and Nardi use to describe a “new logic of capital accumulation” in which economic value is extracted from “low-cost or free labor in computer-mediated networks.” They argue that this “new division of labor” is often invisible, unrecognized (as “economically valuable”), uncompensated, hidden, and naturalized (as necessary for use and “what it means to be a ‘user’ of digital technology”). They offer examples from five different kinds of labor, each with its own ways of extracting and obscuring value: communicative labor (social media and “social data”), cognitive labor (crowd-work and sharing economy platforms), creative labor (gaming and design), emotional labor (social robots and care work), and organizing labor (customer reviews, citizen science, and peer production).
More specifically, framing these digital technologies as “responses to the predicaments [negative drivers] and opportunities [positive drivers] of capitalism,” their book is concerned with the ways in which economic value is extracted from these activities, the nature of that value, and the reasons why people—individually and collectively—participate in digital labor through these networks in ways that are difficult to refuse.
And, here, we have an important link to Dourish’s argument about materiality. Rather than the blunt, cold reality of Foucault’s panopticon, Ekbia and Nardi locate digital control in its “light, elegant, delicate, implicit, hidden, continuous, dispersed nature.” They note that much of the control exercised in contemporary society operates through “nearly invisible digital mechanisms” (emphasis mine).
Ekbia and Nardi’s goal is to situate discussions about digitally mediated labor within the history of capitalism, which they argue has been overlooked in many academic disciplines that study technology from a sociological or cultural perspective. For Ekbia and Nardi, labor “is a uniquely human capacity, which accounts for the production of economic value in capitalism.”
In What Algorithms Want: Imagination in the Age of Computing, Ed Finn appears to agree with Dourish’s characterization of the material agency of software, stating that the problematic structural metaphors commonly used to describe code (such as platforms and architectures) de-personify it and render it a passive actor. Drawing on Ian Bogost’s characterization of the “computational theocracy,” Finn compares algorithmic culture to religion, paying close attention to the ways in which our mystical, magical, and mythological beliefs about computation—reified through science fiction novels and films—and the trust that we place in machines often render their architecture and underlying machinery invisible, immaterial, or out of view.
Finn locates his analysis of the algorithm within the long Western, Enlightenment, rationalist quest for a universal system of knowledge (one that eventually goes beyond the human), which has been fulfilled through the creation of “a unified vision of the world through clean interfaces and carefully curated data—everything you might want to know, now available as an app.” Drawing on the history of computer science, cybernetics, and information theory, as well as linguistic metaphors around magic and the ways in which social and cultural practices coevolve with technology, Finn examines a series of examples including Apple’s Siri, Netflix recommendation systems, gamification on Facebook (as well as on Uber and other digital labor platforms), and cryptocurrency.
He argues that these technology companies, which he calls “cultural wrappers” for algorithms, are ultimately “constructing a new epistemological framework of what is knowable and desirable,” in ways that will shape the public sphere, the economy, and how value is measured computationally, as well as human subjectivity and what it means to be human.
All three of these books argue for a deeper understanding of and engagement with the materialities of computing—software and the inner workings of computation and networks, economies and labor, and algorithms as culture machines. They expose the consequences of our intimate dependencies on computational technologies and infrastructures, which cross social, cultural, economic, political, and ethical domains. They question the ways in which we are enticed to participate and the mythologies that are perpetuated through narratives of dematerialization. These imaginaries of immateriality continue to inhibit our ability to fully understand the implications of digital technologies such as the red-light cameras in Chicago; they also make it easier to dismiss the labor and environmental resources that are distributed across these networks.
In my own work over the past 10 years I have observed the ways in which we, as a society, are often complicit in reinforcing the invisibility of technology. As users, we often repeat the touted benefits of a technology, even when our lived experience doesn’t reflect those claims. For example, when interviewing precarious freelance workers in the mid-2000s, I often heard of the freedoms that they experienced using Wi-Fi networks to work from cafes, parks, and public spaces. Despite this, many of them worked in the same place every day or alternated between just one or two places, exhibiting very particular, situated routines.10 Similarly, in my own personal experience as a user of networked medical devices, I deliberately keep my insulin pump out of view, despite writing about myself as a kind of “disabled cyborg” and about the ways in which the pump requires constant attention, maintenance, and care that interrupt my everyday life.11
It is not enough to merely critique these technological systems. We must actively expose the hidden politics of emerging technologies by finding new ways to study them, visualize them, experience them, and evaluate them long before they are adopted into mainstream use. We must engage directly—both across disciplines and with the broader public—in order to advocate for the values that we espouse and, ultimately, change the shape of the technologies that we will live with in the future.12
There are many interventionist and activist approaches that might contribute to a more critical engagement with technology. In one project, with support from the Open Society Foundations, I conducted a participatory design workshop with social and economic justice activists about the future of work and automation.13 In another, I helped create a visualization of driverless cars that was featured in the New York Times Magazine, as well as speculative videos featured in the 2017 Vienna Biennale.14 By engaging with these topics and even collaborating directly with engineers and computer scientists, it is possible to glimpse the ethical ghosts in the machine.
As we near the 30th anniversary of Weiser’s vision of ubiquitous computing, it is vital that we resist the lure of invisibility and, instead, make visible the ethics embedded in technology. In so doing, we can ensure that our values—transparency, openness, justice, and accountability—are reflected in future systems and societies. “The algorithm offers us salvation,” writes Finn, “but only after we accept its terms of service.”
1. Apple, “iPhone X” (accessed September 22, 2017).
2. Mark Weiser, “The Computer for the Twenty-First Century,” Scientific American, vol. 265, no. 3 (1991), pp. 94–104.
3. Langdon Winner, “Do Artifacts Have Politics?,” in The Whale and the Reactor: A Search for Limits in an Age of High Technology (University of Chicago Press, 1986), pp. 19–39.
4. John Byrne, “City Reaches $38.75 Million Settlement in Red Light Ticket Lawsuit,” Chicago Tribune, June 20, 2017.
5. Redflex, “Solutions & Services” (accessed September 10, 2017).
6. According to a study commissioned by the Chicago Tribune in 2014, the traffic cameras have little effect on safety and, in some cases, have even increased the number of rear-end collisions by 22 percent, as drivers afraid of being ticketed stop short at the intersections. See David Kidwell and Alex Richards, “Tribune Study: Chicago Red Light Cameras Provide Few Safety Benefits,” Chicago Tribune, December 19, 2014.
7. Sasha Costanza-Chock, ed., Design Justice in Action, vol. 3 (Design Justice Network, 2017).
8. Redflex, “Solutions & Services” (accessed September 10, 2017).
9. S. L. Star, “The Ethnography of Infrastructure,” American Behavioral Scientist, vol. 43, no. 3 (1999), p. 377.
10. Laura Forlano, “WiFi Geographies: When Code Meets Place,” The Information Society, vol. 25 (2009), pp. 1–9.
11. Laura Forlano, “Hacking the Feminist Disabled Body,” Journal of Peer Production, vol. 8 (2016).
12. Mary Flanagan, Daniel Howe, and Helen Nissenbaum, “Embodying Values in Technology: Theory and Practice,” in Information Technology and Moral Philosophy (2008), pp. 322–353.
13. Laura Forlano and Megan Halpern, “Reimagining Work: Entanglements and Frictions around Future of Work Narratives,” Fibreculture, vol. 26 (2016), pp. 32–59.
14. Marshall Brown, Lili Du, Laura Forlano, Ron Henderson, and Jack Guthman, “The Driverless City,” IIT Institute of Design (accessed January 23, 2018).