In summer 2020, Lafayette Square in Washington, DC, and Jefferson Square Park in Louisville, KY, become the settings for two entangled stories playing out in real time. Washington’s story is one of physical demonstration for social justice. Protestors who assemble to oppose police killings of Black people with impunity are preemptively surrounded by tanks, snipers, and police with military-grade weapons. Law enforcement insists its presence is defensive, even as the nation’s Attorney General proactively orders Lafayette Square to be cleared of protestors, the National Guard deploys surveillance planes without approval from the Department of Defense, and Department of Homeland Security agents compile “intelligence reports” on journalists covering the stories.
Louisville’s story takes less visible and visceral form, but it is no less alarming. Powering every police action are trails of digital data collected on Black communities, movement organizations, and individual protestors. The data are gathered from social media, search engines, cell-phone GPS, automated license-plate readers, toll-booth tags, facial-recognition systems, building “security” cameras, and stingray devices. Purchased through data brokers or directly from technology companies, these data are combined with and analyzed alongside public information, such as voting records. Many protestors know to take basic digital-security precautions to safeguard their mobile phones, but these tactics amount to bringing a butter knife to a war zone. Each graphic video of police advancing on protestors shows less than half the story, for the cameras (ironically) cannot show the digital systems at work.
The insurrection at the US Capitol on January 6, 2021, provides a third case study for perceiving how digital data—or a purposeful lack of attention to it—can affect material practices of assembly. The mostly white rioters not only had been planning the event on social-networking platforms for months; on the day of the insurrection, many of them also livestreamed the attack and then fundraised off the performance of physical violence, which left five people dead. The rioters’ assumption of safety while committing the attack is visible in both their proud livestreams and the gentle treatment—even, at times, the aid—they received from some law-enforcement officers during the siege. The white-supremacist presumption of structural safety when facing this particular cohort (vs. the clear presumption of danger that determined how Black Lives Matter protestors were treated in Lafayette Square and Louisville) underscores the entwined nature of law enforcement and digital systems. Depending on whose data is being collected, governmental actors like the Attorney General, the FBI, and the Capitol Police can either violently disrupt free assembly or conclude “no danger here,” a choice that proved fatal on January 6.
Our digital trails have been following us into the physical world for years now. We rely on the blue line in our mapping apps to get us places, meet up in parks with people we find through shared-interest apps, link our home phones and addresses to retail-loyalty cards in order to save some dollars, and buy transit passes with credit cards, thus connecting our financial information to our commute patterns. We have digitized physical spaces and devices so much that a car company advertises its product by asking if it is “a Buick or an Alexa?” Now that we have learned that social media and search engines mangle our experiences of free expression online,1 we must ask: How does the digitization of physical spaces alter how we assemble and associate, too?
As our physical spaces become digitized, the stakes for civil liberties only get higher. Laura DeNardis’s The Internet in Everything: Freedom and Security in a World with No Off Switch, Sasha Costanza-Chock’s Design Justice: Community-Led Practices to Build the Worlds We Need, and Shalini Kantayya’s documentary Coded Bias help us understand the depths of our digital dependencies and their disparate impacts on marginalized communities, as well as how those very communities lead efforts to resist oppressive digital technologies and to imagine alternative ones. Taken together, these works point out the need for scholarship on the relationship between the internet and democracy to extend its conceptual lens from a focus on freedom of expression to freedom of assembly and association. We know how significantly the corporate internet shapes the online public sphere; now is the time to ensure that public physical spaces where we gather remain safe, accessible, and free for assembly.
From everything we have learned in the past few years about free expression and the internet, we should be prepared for the extent to which our ever-increasing digital dependencies will upend our experiences of other rights, such as free assembly and association. Decades of scholarship about the dangers of online speech when power is concentrated in a few large tech companies are slowly getting more attention.
But public and policy discussions of these issues hold fast to the notion of a “free marketplace of ideas” and, too easily, rely on the expertise of a handful of white, male tech engineers. Tellingly, Netflix’s recent attempt to document the societal harms of social media, in a documentary called The Social Dilemma, positions these same technology insiders as potential saviors capable of correcting the very problems they created. In so doing, The Social Dilemma’s filmmakers silence the many contributions that women and women-of-color scholars have made toward advancing our understanding of these issues.
Our society needs to get past this mindset if we are to learn in time to address the harms of an internet that now shapes our physical spaces and interactions. In this context, it is a good idea to turn to the expertise of and the many recommendations made by those for whom our digital technologies have long been, to paraphrase scholar and activist danah boyd, “complicated.”2
If humans disappeared from the earth’s face tomorrow, Laura DeNardis tells us in The Internet in Everything, devices across the world would continue to operate, beep, and hum. So long as electricity flows, that is.
The internet today is in everything: from buildings and the energy grid to cars and hospitals. It is no longer merely a communication tool. Instead, the internet is a control switch for education, health, manufacturing, government, social movements, and energy systems. It pervades material spaces to the point where the divide between physical and digital spaces blurs.
Consequently, debates over the control and design of the internet should extend beyond a dominant focus on freedom of expression to consider a broader range of civil liberties, such as freedom of assembly. DeNardis’s book provides important background to help us understand the urgency of these debates, and how to begin to address them.
The public squares of Louisville and DC help us “see” the internet, as DeNardis describes it. Signals from cell towers, phones, and license-plate readers can be triangulated to identify all who gather in parks, whether to protest, picnic, or proselytize. Stingray devices mimic cell towers in order to capture phones’ locations and identifying information for police. Social-media histories can become profiling and facial-recognition data, the analysis of which can confirm who is where. A combination of technological capacity, regulatory laxity, and marketing rhetoric has moved technologies’ online-tracking capacities into our public physical spaces.
In the ever-recursive relationship between people and technologies, new associations—such as the Surveillance Technology Oversight Project (STOP)—emerge to fight for public oversight of police technology in order to protect our ability to assemble publicly. In response, police departments use their fundraising foundations (which are not subject to public oversight) to purchase the technologies they do not want the public to know they are using. In other words, the escalation of digital surveillance changes how people participate in public protests, how governments and companies work together, and what institutional mechanisms the two sides use to achieve their goals.
When DeNardis claims the internet is “in everything,” she means everything, including human bodies. From biometric identification systems to wearable technologies, the so-called “internet of things” is also the “internet of self.” She uses the poignant example of how Dick Cheney’s pacemaker—a fairly common medical device with wireless capabilities implanted within a human body—was once a potential target for geostrategic hacking.
But DeNardis’s example also sidelines a core insight at the heart of the book Design Justice and the documentary Coded Bias: namely, that marginalized people and communities are disproportionately impacted by the harmful consequences of an “internet in everything.”
The story that opens Sasha Costanza-Chock’s Design Justice makes this clear. Costanza-Chock is in an airport, the paradigmatic example of a space so hyperconnected that the boundaries between digital and physical no longer hold.
As they approach the millimeter-wave scanning machine, Costanza-Chock—a designer and scholar who identifies as a nonbinary trans* femme queer person—knows that the device will flag their body as anomalous: “I know that this is almost certainly about to happen because of the particular sociotechnical configuration of gender normativity … that has been built into the scanner, through the combination of user-interface (UI) design, scanning technology, binary-gendered body-shape data constructs, and risk detection algorithms, as well as the socialization, training and experience of the TSA agents.”
A heartbreaking description of said flagging ensues. In opening the book in this way, Costanza-Chock viscerally articulates the glaring gap between those who design commercial surveillance technology and those on whom it is used.
In stark contrast to The Social Dilemma and its narrative of repentant tech-bro saviors, another film, Shalini Kantayya’s Coded Bias, provides an engaging introduction to the robust scholarship on digital harm. And it does so by employing stories as gut-wrenching as those presented by Costanza-Chock.
Such stories include that of a 14-year-old Black schoolboy, stopped on a London street after being misidentified by facial-recognition cameras. Or Daniel Santos, a multi-award-winning Latino schoolteacher from Houston, who faces dismissal after a performance-assessment algorithm deems him a “bad teacher.” We also meet Tranae Moran, a Black renter in Brooklyn, who describes how her building’s property manager replaced door keys with facial-recognition devices, raising privacy and consent concerns throughout the community.
Coded Bias centers on Joy Buolamwini, an MIT computer scientist who brings us on her professional path from computer-vision scholar to algorithmic-justice activist (and scholar). The story begins when Buolamwini realizes that the facial-recognition software she is using in a project does not actually see her Black face unless she puts on a white mask. Furthermore, she realizes that this same software is already in widespread public use.
The people featured in Coded Bias are being harmed by technologies deployed in workplaces, housing complexes, courtrooms, and city streets. Contrary to the belief that the most “advanced” technologies are a privilege of the rich, Coded Bias makes clear that technological advancements most often target low-income areas, where communities are experimented on and extensively surveilled.
But wherever there is power, there is also resistance. Both Design Justice and Coded Bias provide inspiring insights into the forms this resistance can take.
Costanza-Chock introduces the theoretical and historical foundations of “design justice.” At the core of this framework is an idea both simple and disruptive: that communities impacted by design choices and products ought to be at the center of the design process, from start to finish. This would mean a radical shifting of power away from those who build to those who have been built on.
Design justice calls for reconsidering what design is for, who does (and owns) design, and where and how it is used. Most importantly, in a world of intersectional inequalities, there is an urgent need to turn to the most marginalized communities for knowledge based on their lived experience of bias and their existing practices of resistance.
These routes are not merely defensive. They are liberatory and provide paths to alternative digital futures. They are what led Joy Buolamwini to train and mentor a new generation of socially conscious tech engineers, and eventually to testify in front of Congress to advocate for regulation of facial-recognition technology.
Taken together, DeNardis, Costanza-Chock, and Kantayya reveal a new and necessary conception of the current internet: an entwining of digital systems with community life. Moreover, they show that this entwining is far more pervasive and shape-shifting than public discourse acknowledges.
It is easy enough to see how one’s “real” life has moved online. This has only increased with shelter-in-place directives, which have forced us to move our physical gatherings (be they worship services, work meetings, city-council meetings, or theatrical performances) onto digital platforms. What we also need to see is the inverse. We need to see that we have, in turn, digitized our physical spaces, in ways that allow our three-dimensional lives to be as persistently captured by third-party sensors as our screen-based ones.
No longer just determining what information you see online, our digital systems are shaping where we go, what we see, with whom we gather, and where we assemble. Seeing the internet in everything requires seeing the power asymmetries that arise from concentrated control of ubiquitous systems. DeNardis, Costanza-Chock, and Kantayya help us grasp what we need to do about it.
We have had decades to take regulatory action against pervasive tracking and lack of consent, but we have done little. Those fighting against government use of facial-recognition technology are at the forefront of addressing how we have ported these challenges into public life. The destructive dynamics between privately governed social-media platforms and disinformation are now manifesting themselves in physical public spaces.
When we leave our homes to assemble again—on streets watched by license-plate readers, in parks monitored by cameras, in our gig jobs, with our classroom-issued Chromebooks and our ever-present cell-phone and social-media trails—will we see the world we have built?
This article was commissioned by Mona Sloane.
1. Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018); Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (Yale University Press, 2019); Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (MIT Press, 2018); Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown, 2016).
2. danah boyd, It’s Complicated: The Social Lives of Networked Teens (Yale University Press, 2014).