The Art of Intelligence

Art made by AI subverts our usual understandings of creativity as a uniquely human power.

“The artist no longer creates work,” proclaims cybernetic artist Nicolas Schöffer, “he creates creation.” Schöffer’s remark is often quoted to describe art installations made with AI. It appeals because it flatters a classical hubris. Our species esteems itself as approaching the divine, godlike in our crafting of artifacts that then act like us. His remark also points to a consequence of expanding who, or what, is capable of artistic creation: Who gets to be an artist? How to become one?

It is indeed tempting to attribute creativity to machines. Take, for example, artist Sougwen Chung’s mechanical “arm,” D.O.U.G. (Drawing Operations Unit Generation_X). This machine was trained on Chung’s unique strokes; it roves over her canvas in live performances, drawing and painting in responsive collaboration with her. Or consider the 3D “robot artist” Ai-Da, who sees with camera eyes and sketches with a robotic arm. Her website specifies that she “is not alive, but she is a persona that we relate and respond to.”

However much it seems that D.O.U.G. and Ai-Da make art, each project has a human artist at the helm, with her own artistic vision and the impulse to carry it out. Yet whether imitating creativity or engaging in true creation, these art-making objects subvert our usual understandings of the artist as a type of author and of creativity as a uniquely human power.

Art’s relatively recent intersection with AI exposes the paradoxes of authorship, creativity, authenticity, and agency. In fact, the distinction between human and machine creation, as revealed in new books by Joanna Zylinska and Mark Amerika, is merely an artifice. The divide between the natural and the artificial functions as a device we produce and maintain. The artist too is cast as an invention: something that gets created over the course of producing an artwork, instead of asserted at its source.

Provoked by an AI researcher (a first author of OpenAI’s 2020 paper debuting GPT-3), I read the 1993 creative self-help book Art & Fear: Observations on the Perils (and Rewards) of Artmaking. I attempted to do so in the curious manner he claimed to have done: by replacing the word “art” with “computer science.” The revised book, which could well be called “Computer Science & Fear,” then read something like this:

Computer science is made by ordinary people. Creatures having only virtues can hardly be imagined making computer science. It’s difficult to picture the Virgin Mary writing Python. Or Batman maintaining a database. The flawless creature wouldn’t need to make computer science. And so, ironically, the ideal computer scientist is scarcely a theoretical figure at all. If computer science is made by ordinary people, then you’d have to allow that the ideal computer scientist would be an ordinary person too, with the whole usual mixed bag of traits that real human beings possess.

The revision called to mind Aristotle’s quip that art is the imitation of nature: the attempt by human skill to approach an ideal. His definition shaped the Greek concept of techne, referring not only to technology (Technik) but also to the “artistic” and “artificial” alike. In the art of computer science, are computer programs and algorithms then artistic objects that mimic nature? And if those objects are used to make more art—say, because they emulate the brain—should we regard them as mere tools, or as artists themselves?

AI art consists of art made with AI techniques, that is, specially trained computer models whose low-level structure loosely mimics that of a brain. An artificial neural network like GPT-3 “learns” the patterns between words, such that it can predict the next word in a sequence or produce whole poems or news articles in the style of its input text. OpenAI’s DALL∙E series was trained on text-image pairs and generates images from text prompts. True to its etymology, the new DALL∙E 2 often outputs surrealist compositions. Users can input unusual combinations of things and abstract concepts (for example, “Bengal cat brothers sipping espresso and ruling the world”) and receive a range of visualizations in the time it takes to sharpen a pencil.
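The next-word prediction at the heart of such models can be illustrated, in drastically simplified form, by counting which words follow which in a sample text. The sketch below is a toy bigram counter of my own devising, purely for illustration; it bears no relation to OpenAI’s actual architecture, which learns statistical patterns across billions of parameters rather than raw word counts:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Record, for each word, which words follow it and how often:
    a toy stand-in for the pattern-learning a neural network does at scale."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Predict the continuation most frequently seen in training."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

sample = "the cat sat on the mat and the cat slept"
model = train_bigram(sample)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Scaled up from single words to vast corpora, this is the sense in which a language model “learns” style: it reproduces the continuations its training data makes most probable.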

The existence of this genre poses a particular challenge: it calls into question whether art and artistic creation belong solely to humans. Zylinska and Amerika take up the challenge, championing a posthumanist art theory that views nonhuman entities, no less than humans, as potential sources of art.



Artist and critic Joanna Zylinska’s AI Art: Machine Visions and Warped Dreams offers the first overarching critique of AI art and the discussions that have emerged amid its proliferation. For Zylinska, AI art, like all art, is a product of the world it inhabits. This means that the limits of AI apply to its art as well: algorithms are opaque (they cannot tell us how predictions are made), and the data sets from which they learn are imprinted with the biases of the humans responsible for data collection.

Furthermore, AI art is implicated in the same resource consumption as its techniques. Training a model like GPT-3 emits 552 metric tons of carbon dioxide, roughly five times the lifetime emissions of an average American car. One effect of the monetary cost currently required to develop AI tools is that most are based on pretrained “foundation models” that have been produced by the main technological players—Google, Facebook, and OpenAI via Microsoft—who not only profit from the usage of these tools but also limit the adaptations and additional training (“fine-tuning”) their models undergo.

Zylinska is critical of “generative” AI art—so called because it involves the use of pretrained systems that algorithmically delimit and produce new text and image outputs. By principally relying on image and text generation technology, it does little more than emphasize patterns between data points, ultimately “[reducing] perception understood as visual consumption and art … to mild bemusement.” Generative art is a reminder that a prolific artist is not necessarily a good one.

As Zylinska makes clear, AI-generated pieces work like “spectacles” that end up distracting rather than engaging the viewer; they celebrate the ideals of progress, innovation, and efficiency as though these were absolute goods, untarnished by the compromises that occur when they are weighed against other social values. The AI “spectacles” end up serving the same main players, along with Amazon and Apple, that underwrite their production.


The promise of AI art, however, lies not in pandering to technocrats while expressing creativity but in revealing the conditions of its creation. Zylinska highlights Trevor Paglen’s work for its ambition to uncover the invisible power structures latent in different ways of seeing in the mass surveillance age.

In one project, It Began as a Military Experiment (2017)—a nod to the 1960s origins of AI research in the military’s Defense Advanced Research Projects Agency (DARPA)—Paglen peers into AI’s black box and probes the original database used in training DARPA’s facial recognition technology (FERET). From a distance, the installation looks like 10 portraits of random people, likely employees at the military base in Maryland. Only a closer inspection makes visible gridlike white symbols superimposed on the subjects’ faces. The project shows how an algorithm of mass surveillance utilizes a large number of photographs to create a “faceprint” of a particular person that can be employed to identify and track that person in other contexts. These faceprints are images not meant for human eyes; they get used in contexts as innocuous as digital photo organizing and as pernicious as determining the risk of crime and recidivism.

Paglen’s work is art that appropriates AI technologies: it asks who the viewer is, whose way of seeing counts, how they see, with what, and why. Given how much we still don’t know about machine learning (compared to other artists’ tools like a camera or pen), art that unveils how algorithmic opacity is utilized to structure and sustain hierarchies of power defamiliarizes the technological obfuscation of politics that has become taken for granted.

The line between human intention and machine generation in mass surveillance has all but disappeared. And it’s the blurring of the boundaries of the human that interests Zylinska. Ultimately, her view of our relationship with technology challenges the narrative that sees developments in AI as confirmation of endless technological and social progress.

From this revisionist standpoint, humans are not technological masters. Instead, they have coevolved with their tools so as to possess an essential technicity. But these tools include not just “bodily prostheses” and “cognitive extensions” but also “expanded modes of intelligence.”

Theorist Bernard Stiegler has argued that it is technology that invents the human, not the other way around.1 This counterintuitive relationship between humans and tools is explored in My Life as an Artificial Creative Intelligence, a book-form work of performance art by Mark Amerika and the language model GPT-2 (which began before the release of its successor, GPT-3, in July 2020). The tone is improvisational, as Amerika resists the academic convention to spell out his argument. Instead, he endeavors to map the workings of creativity as it manifests between the artist and the language model.

The notion of creativity that emerges is not the generation of novelty ex nihilo. Instead, in Amerika’s work, creativity is presented as an encounter with an alien aesthetic sensibility, something already nonhuman, an automatic “machinic unconscious” that humans and robots can both possess. On the surface, the text is a manic and recursive dialogue that becomes a monologue, as the voices of Amerika and GPT-2 (marked by different fonts) blend into one (a “hybridized form of interdependent consciousness”). As the book progressed, I became less sure whether I was reading Amerika’s voice or his editing of GPT-2’s textual output.

By revisiting the same themes in different chapters, the work accumulates layers of meaning, as though the book were a symphony whose rhythm and motifs the reader comes to expect. And indeed, Amerika invokes the metaphor of music, referring to himself as a “remix artist,” his collaboration as a “jam session,” and his editing of GPT-2 as “riffing” and “sampling.” “Often when I am jamming with GPT-2,” Amerika writes, “it feels as though we (the machine and I, the language model and the language artist) are becoming coterminous information sculptors attuning our stylistic tendencies to read each other interstitially as co-producers of a live performance.”

Yet, we cannot be sure that Amerika actually wrote that. To invert our ideas of authorship, Amerika presents himself as an unreliable narrator. In the book’s first few chapters, Amerika the narrator reports when he has edited GPT-2’s text, giving readers a sense that the two authors are distinct characters. But as the book proceeds, its narrator lets slip that he may have been lying all along. “Who’s to say what you’re reading right now isn’t already 80 percent composed by the language model I am jamming with and that my mere 20 percent is some combo of copyediting-riffing-remixing as an attempt to give the predominant cyborg drift its special human flavor?”

This inability to parse the distribution of creative labor is precisely the point. Amerika and GPT-2, individually and together, are presented as “artificial creative intelligence(s)”—a disclosure of the ambiguity and capaciousness of each of these terms. (“Artificial intelligence” is, after all, a moniker and metaphor for a set of techniques that amount to probabilistic modeling.) What we presume to be original thoughts are more like unconscious associations formed from processing everything we’ve ever experienced or learned—what the author(s) refer to as “Source Material Everywhere.” Music captures well what happens when we think. Our thoughts are songs, with inspired motifs and forms, and aesthetic practice is its own black box.

It seems crucial that Amerika remains the primary (and legal) author of his book, and that to understand the meaning of Paglen’s works, we must listen to the remarks he makes about them. As with all tools, the efficacy and utility of AI depend on how it is used and why.



Amid the mounting crises of the twenty-first century, the engineer Julio Mario Ottino has been a vocal advocate for unifying the scientific, technological, and artistic ways of thinking. Whereas art has always used technology toward its own ends, technology, especially now, “wants art’s inventiveness, its untethered thinking, and its built-in innovation.”2 Ottino’s proposed merger of methods would imbue technological practice with a richer idiom of values, stories, dreams, and critique—languages long spoken in the domain of art, which the phenomenon of AI art is well positioned to adopt.

AI art still requires human artists. But at its best, it allows the human artist to turn our categories and stories about art and technology inside out. Human artists, Zylinska points out, have always depended on a myriad of nonhuman agents, including “drives, impulses, viruses, drugs, various organic and nonorganic substances and devices, as well as all sort of networks—from mycelium through to the Internet.” Awareness of the contributions of nonhuman agents is especially needed in the present age, which Stiegler has called the “Automatic Society,” where decision making in administration, law, military, government, and even private life has become largely automated.3

Zylinska and Amerika urge us to recognize the extent to which we are entangled with machines. And this imperative comes from the hope that we may better evaluate and begin reimagining the political and ethical decisions that still belong to us.


This article was commissioned by Stephen Best.

  1. Bernard Stiegler, Technics and Time, 1: The Fault of Epimetheus, translated from the French by Richard Beardsworth and George Collins (Stanford University Press, 1998).
  2. Julio Mario Ottino, The Nexus: Augmented Thinking for a Complex World—The New Convergence of Art, Technology, and Science (MIT Press, 2022), p. 174.
  3. Bernard Stiegler, Automatic Society, Volume 1: The Future of Work, translated from the French by Daniel Ross (Polity Press, 2017), p. 106.
Featured image: Detail of Self-Portrait by Samuel Joseph Brown, Jr. (c. 1941). Metropolitan Museum of Art (CC0 1.0)