Today, almost 70 years after Alan Turing famously asked, “Can machines think?,” what we call “artificial intelligence,” or AI, has seemingly come to penetrate our everyday lives. It is in our phones, our homes, our workplaces, our modes of transportation, our schools, our welfare systems. And while it remains unclear what AI really is, or can be, it is undeniably capturing the imagination of governments, corporations, and individuals alike.
The idea of AI is mobilizing vast amounts of resources, public and private, for research, innovation, and new policies. As such, it affects the politics of knowledge production on a grand scale: as we all race to get ahead in the “global AI race”—to not fall behind in the “fourth industrial revolution,” to create the next big innovation, to get a slice of the ever-growing AI funding pie—we become complicit in the creation of a dangerously totalizing narrative, one that positions AI as inevitably determining our collective future, for better or for worse. In so doing, we reinforce a rising fear that the rise of AI is synonymous with a robot takeover: “AI will take our jobs, before it takes over entirely as our new artificial overlord—that is the biggest threat AI poses to society!”
This “robot takeover” narrative presents a political problem—that is, a problem that has everything to do with power—because it distracts from what is really at stake. Such a hyperbolic narrative effectively blurs our view of the real potentials and the real limits of AI technologies. It also makes it difficult for us to ask serious questions about how AI technologies work, or do not; what their sociopolitical histories and contexts are; which contexts they should be integrated into, and which ones they shouldn’t be; and who counts as an expert and who does not.
So the real threat is not the robots. The real threat is already here. It is that we may miss the boat: AI prompts us to reevaluate “big” questions relating to power, democracy, and inequality—and to what it means to be a living thing on this planet.
We must co-opt the AI discourse and keep asking these questions—asking them from a social, political, historical, and ecological point of view, not just from an economic or technological one.
This is what the Co-Opting AI project is all about: flipping the script and taking the AI hype as a cue to have broader conversations about society, technology, inequality, and our planet—without going down the rabbit holes of techno-skepticism or techno-solutionism. In this spirit, the Co-Opting AI project, a collaboration between Public Books and the Institute for Public Knowledge, puts the most forward-thinking scholars and activists across technology, design, and inequality into conversation with one another, so as to consider how we can reclaim the story about technology, society, and our planetary future.
The Co-Opting AI project is curated by Mona Sloane.