In November 2000, two days before my birthday, my father died of colon cancer. I cannot think of a greater change I had experienced in my life to that point. Yet I woke up the next morning and, as I had always done, got myself ready for the day: showered, brushed my teeth, and shaved. The two moments—of unprecedented change and regulated continuity—could not have been more stark and disjointed. And yet they coexisted and were very much a part of my reality. It was while brushing my teeth, on that cold November day of the new millennium when everything about my life was dramatically different and inalterably the same, that I began to appreciate what the authors whose works I review here deal with: life (human or otherwise) is a perpetual negotiation between constancy and change.
We might call such a perpetual mechanism a physis, in the tradition of the ancient Greeks, for whom the word meant nature, but also movement, transformation, alteration, vitalism, and force.1 In these three new books on surveillance, machine learning, and war, a new physis of living is explored. Shoshana Zuboff’s The Age of Surveillance Capitalism explores the physis of surveillance, Jairus Grove’s Savage Ecology explores the physis of geopolitics, and Louise Amoore’s Cloud Ethics explores the physis of machine learning.
Each study (and its accompanying empirical and normative conclusions) is urgent, despite whatever reservations I may elaborate here. That’s because these books—rightly!—insist on the need to think long and hard about our contemporary fragilities, which means thinking long and hard about the new entities, new formations, and new relations currently taking shape: novel viral outbreaks that create bizarre new political associations; climatic variations that generate heretofore unmeasured weather formations; and machine-learning systems that generate strange new forms of human existence. Each of these works is attuned to its discipline’s physis, or how transformation occurs. Finally, none of these works (rightly again!) offers quick and ready solutions. Rather, the authors elaborate on analytic and perceptual tools for renewing and rethinking our critical commitments and dispositions, in light of the entangled complexity of our new(ish) physis.
All three books are compelling in their erudition and their urgency. But while Zuboff wants us to return to the more humane ideals of industrial capitalism in order to resist the totalitarian “instrumentarianism” of surveillance capitalism, Grove’s and Amoore’s works show that that ship sailed long ago. Tackling geopolitics and machine learning, Grove and Amoore challenge us to understand the physis of change and transformation today in a manner that doesn’t take Zuboff’s humanism as either a given or an ideal. And although their posthumanism may seem horrific from the perspective of Zuboff’s humanism, that’s their point. We no longer have the luxury of humanist utopias as ideals for how to view the world and plan for the future.
Zuboff’s The Age of Surveillance Capitalism names a new species of power—instrumentarian power—that governs surveillance capitalism. Instrumentarian power is the operationalization of radical indifference, which deploys “dehumanized methods of evaluation that produce equivalence without equality.”
The target of Zuboff’s call to arms is the hyperscale computing power of FANG (Facebook/Amazon/Netflix/Google), which, she brilliantly shows, is not like Orwell’s Big Brother. That power tried to control the present. Our world is closer (much closer) to Philip K. Dick’s The Minority Report (my comparison, not hers), where automated systems of knowledge track, capture, process, and output information so as to predict (and thus control) future outcomes. Rather than an age of Orwellian domination, ours is what the political philosopher Emily Nacol calls “an age of risk.”2
Today, Zuboff asserts, humans are indistinct sources of information extracted and extrapolated by privately owned entities. And this is the crux of her concern: if information and data are sourced from humans and FANG owns the sourcing, storing, and processing powers of that information, then humans live in a world owned and controlled by the instrumentarian power of surveillance capitalism.
I have many reservations about Zuboff’s analysis, and many sympathies with it. Her thinking is crystalline and compelling, and readily accessible to a nonspecialized audience. And her learnedness is beyond reproach.
Zuboff, professor emerita at Harvard Business School, turns to the tradition of German critical theory (especially Theodor Adorno and Hannah Arendt) to draw critical resources for understanding and critiquing this new form of economic totalitarianism. Zuboff’s specific focus is the insights, ideals, and research programs associated with B. F. Skinner’s behavioral revolution and their intensification in the age of big data.
This is where her analysis is most compelling. She elaborates how, in our current condition, instrumentarian power targets human behavior in such a way as to generate predictive stimuli. Such stimuli converge in human physis, transforming it from an agent of change into an undifferentiated tool of technological dominance.
My reservations about Zuboff’s work are several. But the crux is this: it is too convenient and too easy to blame our current political and economic devastations on B. F. Skinner’s behaviorism and the intensification of computational powers. And it is even more convenient to find a solution for a new human future in nostalgia for past forms of industrial capitalism, which seem more humane only in comparison to current hyperscaled forms of exploitation and domination.
The fact of the matter is that the same theorists Zuboff endorses to reject instrumentarian power (Adorno and Arendt) were readers of Karl Marx and Sigmund Freud. If we return to those earlier sources, even briefly, we discover that the woes Zuboff describes and analyzes predate the digital revolution: indeed, they are the woes of capitalism tout court. It’s true that the hyperscale of the digital world intensifies the capitalist exploitation of human physis; but it’s no less true that, as both Charlie Chaplin (in Modern Times) and Lucille Ball (in the famous chocolate-factory scene of I Love Lucy) allegorized and immortalized, this exploitation is nothing new.
So everything Zuboff says about surveillance capitalism can be said about capitalism (without the surveillance part). Indeed, everything she identifies as emblematic of today’s surveillance capitalism can also be found, much earlier, in the sacred union of capitalism and democracy that has defined American politics at least since the Cold War.
To be more accurate, this union (of which surveillance capitalism is only the most contemporary manifestation) has defined the West for centuries, since Europe’s imagining of world resources as commodities available for extraction and consumption—including human labor capital.3 Let’s not forget that our most horrific examples of human exploitation and domination (i.e., 19th-century slavery and 20th-century Jim Crow America) were the foundation for the success of American postwar dominance, which seems to coincide with Zuboff’s idealized past.
Surveillance capitalism might be a brutal and frightful version of these earlier forms. But it is a latecomer to the ways the West dominates human life.
For Zuboff, we still live in a world of causal agency that operates on a model of free will, where actions are causes that produce effects: this is the foundation of her humanist appeal to liberal freedom throughout her analysis. Here perhaps is the most fundamental difference between Zuboff and Grove.
The world of Grove’s Savage Ecology is a world of undifferentiated complexity, self-organization, and neural plasticity. Whereas Zuboff leans on Arendt’s humanism, Grove has little time for what he quickly dismisses as “the reactionary position of Kant and other enfeebled humanists.” Nor does he have time for any “sentimental attachment to a humanity that never existed.” Instead, Savage Ecology is a work of speculative reflection and political realism.
That’s because Grove’s book already dwells in Zuboff’s feared future dystopia. For Grove, catastrophe isn’t the exception (as Zuboff, at base, seems to believe) but simply our contemporary condition.
The irony, of course, is that I write these lines during the 16th week of a global pandemic quarantine: Grove’s catastrophic realism is our new ordinary. To quote China Miéville, it is a weird realism, which “punctures the supposed membrane separating off the sublime, and allows swillage of that awe and horror from ‘beyond’ back into the everyday.”4
Grove’s work is erudite, informed, challenging, and smart. It is about the geopolitics of destruction and violence that have characterized North Atlantic political and economic decisions for the past five hundred years. This is the spatial, temporal, and ontological condition he names the Eurocene.
The book is not written for the reader who expects a linear narrative or clear moral directives. “I remain interested but agnostic as to what inspires the will to catastrophe,” he asserts. The book’s structure is, as the title suggests, savage. Its content is populated by other texts, concepts, ideas, scientific studies, historical periodicities, military objects, profanities, life, death, defecation, and vitalism. Its aura is gruesome and it lives in the horrific—both offshoots of Grove’s weird realism.
Moreover, Grove’s Homo sapiens are plastic and neurodiverse. The first line of his acknowledgments puts this out in the open: “This book wasn’t supposed to be written.” He goes on to describe how his early childhood learning disabilities had been diagnosed as laziness. It’s difficult not to appreciate that the reactive, visceral pushback we may feel while reading the book (and we will inevitably feel it) stems precisely from our neurotypical habits of reading: rather than reading for associations, clusters, and relations, we tend to read for meaning and understanding defined in terms of the linear “problem/solution,” “cause/effect,” “diagnosis/treatment” analytic.
In other words, Grove’s style of writing is part of his argument. He puts on display a descriptive and critical analytic that contrasts starkly with a more conventional prescriptive/causal mode of critical theorizing. This is necessary precisely because Grove’s analysis is ecological, not logical: “This book,” he affirms, “is inspired by refusals of critique as redemption in favor of useless critique and critique for its own sake.”
In this respect, Grove offers one of the most robust and erudite examples of a critical ethos of pessimism I have read to date (even more robust than the one expertly elaborated by Joshua Foa Dienstag5). If there is a dialectic to Grove’s thinking, it is between the optimism of causal efficiency and the pessimism of entangled complexity. The former assumes a human way out of our problems, perhaps even a surmounting of them; the latter understands human physis as comprising “incipient expressive objects influenced by material and cultural networks.” In short, rather than distancing total destruction from our current moment in order to propose a redemptive, critical utopia, Grove is immersed in catastrophe as an immanent condition of critique.
If Grove provides a neurodiverse pessimism as alternative to Zuboff’s confident humanism, Amoore’s Cloud Ethics provides an ethical and poetic alternative to Zuboff’s moralisms. Amoore, a political and cultural theorist, critical geographer, and international-relations scholar, has written what I consider to be essential reading for anyone interested in the ethical and political analysis of our digital condition.
This is Amoore’s second book, which follows her equally important The Politics of Possibility: Risk and Security beyond Probability (2013). Whereas the first book is a study of the use and abuse of algorithms throughout the security-state apparatus, Cloud Ethics tackles how to think ethically about artificial intelligence and machine learning.
Amoore is careful (and right) to distinguish her work from much of the excellent work in critical algorithm studies that focuses on attempting to regulate or reform such things as privacy or racial bias. Her book is not a contribution to that literature, because, as she explains, “there is a need for a certain kind of ethical practice in relation to algorithms, one that does not merely locate the permissions and prohibitions of their use.”
For Amoore the imperative is to ask how we can think critically with and about algorithms and not assume that reform legislation is sufficient to deal with them politically. If machines really can gain intelligence through learning, Amoore seems to say, then we must consider them as participants in public life, and therefore, as if they were agents and not merely impartial calculators. And this means learning how they act, why they arrange the world as they do—that is, how algorithms articulate their relationship with us and with the world.
Traditional ethical thinking, which we tend to associate with how humans live with one another, is based on the truth or falseness of norms and the rightness or wrongness of claims we make about the world. But machine-learning algorithms do not relate to the world on the basis of truth or falseness, right or wrong; nor do they organize the world according to these normative ideals. Theirs is a world of probabilities, likelihoods, and variabilities.
Think of it this way: in a world of traditional ethics, we judge a love interest in terms of whether they are right or wrong for us, whether they are true or not in their intentions, and on the basis of these assessments we decide whether we want to be with this person. A dating algorithm makes amorous assessments on the basis of probable outcomes as determined from values we program into our online profile. Strangely, both approaches seem to result in loving, trusting relationships. However, these two ways of making life decisions—one grounded in a belief in truth, the other in a belief in probability—are inordinately different and produce very different ways of building the world around us, including our personal relationships.
Amoore offers this analysis of the physis of a technical object in order to think about how we might retool our ethical dispositions vis-à-vis algorithms. Crucial to her approach, and what she shares with Grove, is a commitment to moving beyond a political humanism that considers only humans as exclusive political actors.
We tend to think of political power and its influence as the property of human agency. It is humans who act in the world to create, shape, and alter political societies. This belief in the exclusivity of human participation in political power has always been the appeal of rational-choice theory, for instance, which in the postwar period attempted to explain, legitimate, and influence international policy outcomes.
But Amoore and Grove push us to consider how limited and exclusive this way of thinking about political power is. It rests on a human exceptionalism that seems insufficient to explain many of the political phenomena we are dealing with today.
Just think how impossible it is to provide an adequate assessment of contemporary American electoral politics without considering the agency of a nonhuman viral entity. COVID-19 is a participant in the current US presidential election. Whether and how it will influence a future electoral outcome remains to be seen; but there is no doubt that the physis of a virus is currently entangled in the political life of a nation.
The need to account for such nonhuman actors means, for Amoore, that in order to think critically about the ethics of algorithms, we must think of algorithms as autonomous participants in shaping the world. That’s the first step.
The next step is to analyze their mode of acting, and this is the major focus of her book. We discover a host of new vocabularies and perspectives that elaborate whether and how ethics is possible in our current moment. Hence her development of the ethical significance of terms derived from artificial-intelligence research, including “iterativity,” “recursivity,” “traceability,” “rendering,” “aperture,” “actionable propositions,” “correlations,” “arrangement,” “disposability,” “attribution,” “condensation,” and so on, that are the foundation of her innovative cloud ethics.
As I read through these three books I couldn’t but feel a sense of excitement: taken together, they could easily be the basis for a new college-course syllabus, if not a new academic discipline. They are foundational works. And as such, they offer new horizons of learning about the world and our not-so-human place in it.
This article was commissioned by Ivan Ascher.
1. This idea of physis has been around for some time, at least since Aristotle (but probably since Democritus). The ancients saw the need to investigate the nature of life, which meant investigating the nature of change. This is what Aristotle, in his Physics, meant by the word physis, which the Romans translated as natura, which then became our “nature.”
2. Emily C. Nacol, An Age of Risk: Politics and Economy in Early Modern Britain (Princeton University Press, 2016).
3. On the postwar history of financial capitalism, see Ivan Ascher, Portfolio Society: On the Capitalist Mode of Prediction (Zone Books, 2016).
4. Mark Bould et al., The Routledge Companion to Science Fiction (Routledge, 2009), p. 511.
5. Joshua Foa Dienstag, Pessimism: Philosophy, Ethic, Spirit (Princeton University Press, 2009).