“Costs on All Sides”: Annie Dorsen on “Prometheus Firebringer”

“Technology creates the potential for conflict from the very start.”

Predictive technologies, like the now-ubiquitous ChatGPT, are shape-shifters. They can be presented as innocuous, even cute. (“Let’s see what ChatGPT says about that!”) They can also—with their opaque logic, embedded biases, and inaccessible datasets—assert dominance over enormous domains of practical life. For theater maker Annie Dorsen—who has been creating performance that challenges the relationships between human consciousness, creativity, and technology for more than a decade—one of the dangerous impacts of predictive technology is its increasing role in the fabric of creative life. “These tools,” she noted in a 2022 essay for the Bulletin of the Atomic Scientists, “represent the complete corporate capture of the imagination, that most private and unpredictable part of the human mind.”

This danger forms the center of her newest piece, Prometheus Firebringer, a lecture-performance in which Dorsen uses commercial AI tools to recreate a lost Greek tragedy (possibly the third in Aeschylus’s Prometheus trilogy, of which only one play, Prometheus Bound, is fully extant). Simultaneously, she appears onstage giving a lecture composed of rigorously cited quotations meditating on AI, the nature of language, and the dangers of engaging deeply with technologies whose workings are kept private while their impacts are enormous. The production also features voiceprints by artists Okwui Okpokwasili and Livia Reiner, and masks representing the chorus, generated by AI programs and made by 3D printers. Center stage hangs the “Prometheus” mask, large and asymmetrical, with an abstract swirl adorning the left side of its chin. Prometheus’s eyes are oval-shaped screens playing a selection of AI-generated video, which occasionally give the mask the appearance of glancing emotionlessly at the chorus, a nameless assembly of orphans whose fate hinges on the outcome of Prometheus’s encounter with his captor, Zeus.

Dorsen’s lecture raises the question “When there are no good options, when every course of action comes with unbearable costs, how do you choose?” Her talk is by turns funny, sad, and provocative—partially because none of the words she speaks are hers. They are quotations, seamlessly arranged and meticulously cited on a screen upstage.

Prometheus Firebringer builds on Dorsen’s pathbreaking body of work, much of which was brought together in a 2022 performance retrospective at Bryn Mawr College. One of her early pieces, Hello Hi There (2010), was partially inspired by Noam Chomsky and Michel Foucault’s famous 1971 debate about language and human creativity. It employed custom-built algorithms and featured deliberately outdated chatbots making “conversation” on the subject of the debate, while also drawing on reams of other selected text, from Hamlet to YouTube commentary about Chomsky and Foucault to generic bot-speak.

Prometheus Firebringer premiered at the Chocolate Factory Theater in Queens, New York, in May 2023 and played a three-week run last fall at Brooklyn’s Theatre for a New Audience, where I saw the performance. Dorsen and I spoke shortly after the piece closed, and our conversation—not just its subject matter but also our interaction—testifies to the conceptual opacities and misleading shorthand that often surround generative AI and machine learning. As a critic familiar with Dorsen’s many algorithmic works, I sought to draw out connections between Prometheus Firebringer and her earlier pieces. Dorsen redirected my questions often—and for good reason, as she drew attention to how fundamentally different generative AI is from the custom-coded algorithms she’s used in the past, and how dangerous that difference can be.


Miriam Felton-Dansky (MFD): I was fascinated to see, when I walked in, a large screen showing what looked like an AI generating, in real time, a summary of this lost play, Prometheus Firebringer. It scrolled through a summary—Prometheus meets Zeus, there’s some kind of reconciliation between them, though an uneasy one, and a group of orphans looks on—then deleted that and began another, similar yet always slightly different summary.

This question will reveal my own ignorance about how the technology works, but just now before we talked, I asked ChatGPT for a summary of Prometheus Firebringer, trying to see whether it would offer me something similar to what I saw in your piece. Instead, the chat program told me it had not heard of this text at all. How did you get it to summarize the play for you?

 

Annie Dorsen (AD): So, ChatGPT is an autocomplete program. It doesn’t have knowledge about things in the world. And Prometheus Firebringer is a play that doesn’t exist. At one time it existed, but now only one fragmentary line survives. So it’s totally unclear what the play was, what its plot was, who the characters were. It might have been a satyr play. It might have been the third play of the tragedy portion of the program. Nobody really knows. So first of all, ChatGPT can’t give you a summary of Prometheus Firebringer in any real sense. But if you provide it with a creative writing–type prompt that says, write a plot summary for the third play in Aeschylus’s Prometheus trilogy, and you give it some material to work with—you tell it the chorus is made up of human orphans or whatever—it will fulfill that request. It will output text. But that text will have no relationship to the real play, whatever it may have been. The language model doesn’t have access to a library, can’t do research, and has no relationship to facts or truth in the world.
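(To make that concrete: a curious reader could try something like the minimal sketch below, which frames the request as the creative-writing prompt Dorsen describes. It assumes the official OpenAI Python client and an API key; the model name and prompt wording are illustrative, not Dorsen’s actual tooling.)

```python
# A hedged sketch of the creative-writing-style prompt Dorsen describes.
# Assumes the official OpenAI Python client (pip install openai) and an
# API key in the OPENAI_API_KEY environment variable; the model name is
# illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[{
        "role": "user",
        "content": (
            "Write a plot summary for the third play in Aeschylus's "
            "Prometheus trilogy. The chorus is made up of human orphans, "
            "and the play ends in an uneasy reconciliation between "
            "Prometheus and Zeus."
        ),
    }],
)

# The reply will be fluent invention: text with no relationship to the
# real lost play, whatever it may have been.
print(response.choices[0].message.content)
```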

People sometimes describe this aspect of LLMs (large language models), that they seem to make things up that aren’t true, as the models “hallucinating.” This is totally misleading—they are not hallucinating. Whether it outputs language that aligns with true facts in the world or not, it’s doing the same thing. It’s putting one word after another according to statistical probabilities. So when it says, “Oops, I sometimes get things wrong,” that doesn’t mean that it realizes anything, or understands anything. It can output the word oops, but as you know, because you’ve seen Hello Hi There, just because a chatbot says “Oops” doesn’t mean that it knows what oops means. Or that it had an “oops”-like feeling and expressed that feeling with the word “oops.”
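(To make that mechanism concrete, here is a deliberately crude sketch, a toy bigram sampler rather than anything like the transformer architecture behind ChatGPT, of what “putting one word after another according to statistical probabilities” looks like. Every count below is invented for illustration.)

```python
import random

# A toy bigram "language model": for each word, count how often each other
# word followed it in some training text, then sample the next word in
# proportion to those counts. All counts here are invented for illustration.
continuations = {
    "prometheus": {"stole": 6, "gave": 3, "defied": 1},
    "stole": {"fire": 9, "knowledge": 1},
    "gave": {"fire": 5, "counsel": 5},
    "defied": {"zeus": 10},
}

def next_word(word: str) -> str:
    # Pure frequency, no notion of truth: a factually "wrong" continuation
    # is sampled by exactly the same procedure as a "right" one.
    options = continuations[word]
    return random.choices(list(options), weights=list(options.values()))[0]

word, sentence = "prometheus", ["prometheus"]
while word in continuations:
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "prometheus stole fire"
```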

 

MFD: So now I can say “Oops” and actually mean it because I know what oops means. But I did actually have a question that goes back to Hello Hi There, because I was thinking about the masks, and I was curious what kinds of prompts or what kind of input was given to generate these masks, which look a little bit like faces and have anthropomorphic qualities, right?

 

AD: I’m sorry, but I want to ask: How important is that, really? Because the question of prompts, to me, is not so interesting.

 

MFD: I was thinking about what you said in your recent American Theatre essay, recalling the way that people reacted to Hello Hi There, which still feels quite relevant: audiences react to the chatbot text with humor and sometimes emotion, and are often tempted to think that the bots are talking to each other—even though all we see is scrolling text and a couple of MacBooks onstage. Prometheus Firebringer, which featured computer-generated figures that looked more like people, elicited none of that response from me.

 

AD: Ah, so I think what you’re talking about is the Eliza Effect. Which is the phenomenon, first articulated by Joseph Weizenbaum, where people often respond to computer-generated language in a particular way, by projecting a mind behind the language, by anthropomorphizing it. I interpret the phenomenon to be about our automatic, involuntary response to language, and to feeling addressed—by something that seems to be talking to you. So you project the existence of a mind behind the words.

I don’t know if everyone who sees Prometheus would agree that it doesn’t produce an Eliza Effect, but I can believe it. It might be that actually having something anthropomorphic like the masks reduces it? Maybe when you see the language embodied in an object that looks like a face, you’re more aware of the artificiality of the construction.

You know, another interesting thing, from my experience with Hello Hi There, is that the Eliza Effect wears off. Over time, you get used to it; the language starts seeming more empty and less magical. But then the question is whether the effect will wear off with ChatGPT, too. Will we all eventually get bored of it?

 

MFD: In contrast to what doesn’t get boring at all, which is the combination of your lecture and the source materials that you are projecting at a very rapid pace behind you. That is where the humor and the emotion in the piece are located, at least for me.

You talk in your lecture about reconstructing the past, and you cite and quote other people talking about the internet as a version of the past, even a very recent past, and the implications of reconstructing history. You talk about Matt Loughrey, the digital colorist who altered historical photos of victims of the Khmer Rouge such that many of their faces showed less pain or fear or were even smiling, and the ethical problems that a reconstruction of the past entails. When you describe Loughrey, the text of your lecture reads: “He had a choice. He made a bad choice. Contrary to popular decree, there is such a thing as a ‘bad choice.’” And each of those three very short sentences is sourced from a different book, and of course they are not books that relate to Loughrey’s work or any of the topics at hand in your lecture. One is from a teen acting manual. One is from a young adult book called Vanola-Ann Choices. Was the contrast here intentional?

 

AD: Yes. But I should say the piece is not full of Easter eggs. Like if you go to the passage in the YA book, you won’t find something relevant. But I was aware that the citations, my choices of where to pull certain phrases from, provided an opportunity for humor, and an opportunity to recognize the vast number of different things we do with language. It was really as simple as that: if there were a few choices for a common phrase, I tried to choose something that seemed funny or interesting or just different from what had come before. So in that sense it wasn’t all that deep.

 

MFD: It reminded me of how you talked a few years ago about the text in The Great Outdoors—your 2017 piece in which the text consisted of vast quantities of Reddit comments—as a kind of landscape: many people talking and responding and voicing experience. The internet as a vast landscape. Many of your citations here are sourced from books and articles rather than the internet, but the lecture makes a similar gesture toward expansiveness.

 

AD: The entire galaxy of intertextuality, right. None of it original. That’s the Roland Barthes quote. And another quotation I use says that large language models are trying to make the entire galaxy of intertextuality navigable. And here’s a little microcosm of that vast galaxy of intertextuality that I have navigated to create an unfolding argument, to create a path.

 

MFD: But you also make it navigable by shaping your lecture dramaturgically, with an entry point, a progression, and an ending.

 

AD: Yes.

“Large language models are trying to make the entire galaxy of intertextuality navigable. And my performance offers a little microcosm of that vast galaxy of intertextuality.”

MFD: And I was particularly struck that, at the very end, you say the sentence “But something else is true as well” five times, and each time it’s cited to a different source. Can you talk a little bit about that sentence?

 

AD: I mean, what to say about it? We’re not all doomed. Nothing is simply one thing. Many things are still possible—as long as there’s time, there’s time for care. So it’s really just literal. And also, of course, I repeat it in order to do a callback to the stuff in the first part of my talk about language as a public good. I mean that in the economics sense: words are nonexclusive; they don’t get used up; if one person uses a word, it’s still available for others to use. But each time we use certain words, we have a purpose in mind. There is something we want to communicate with them—which is not what the so-called generative AI is doing.

 

MFD: So you are enacting that by using the same sentence five times and it’s resonating slightly differently each time. And if we’re looking at the text behind you, we can see that each of those versions of the exact same sentence is from a different text that has used it for a different purpose.

 

AD: Yes.

 

MFD: So in the AI-generated summaries of the play, which I’ll now try to construct myself with better prompts—

 

AD: I actually want to ask you, Why do you want to do that?

 

MFD: Well, I’m curious because I have been thinking about what is actually in there.

 

AD: Ah, but prompting is not a great way to search the dataset. In some cases, you might be able to discover things about the dataset from prompting—if you ask an image model for Mickey Mouse, for example. If it gives you Mickey Mouse, you know it was trained on images of Mickey Mouse that were labeled “Mickey Mouse.”

Taking the example of the plot summary, just because the model can produce an answer to a specific prompt doesn’t necessarily mean that any particular source material is in the training data. So, although the model can only “know” what it’s been trained on, it can produce text that doesn’t appear in that form in its training data—because it’s moving the words around based on statistical probabilities. So, you know, what do you learn if you see that it uses the word tomato? Well, it’s seen tomato 16 billion times because it trained on a whole bunch of recipes for pasta sauce, and a whole bunch of names of paint colors, and a whole bunch of text about gardening, and so on.
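(A last sketch, extending the same toy-bigram assumption as above: even this trivial sampler can output a sentence that appears nowhere verbatim in its training text, because it recombines words according to local statistics. The corpus is invented for illustration.)

```python
import random
from collections import Counter, defaultdict

# Count bigrams in a tiny invented corpus.
corpus = ["prometheus stole fire", "prometheus gave counsel", "zeus gave fire"]
bigrams = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def generate(word: str) -> str:
    # Walk forward, sampling each next word by frequency, until reaching
    # a word with no recorded continuation.
    out = [word]
    while word in bigrams:
        word = random.choices(list(bigrams[word]),
                              weights=list(bigrams[word].values()))[0]
        out.append(word)
    return " ".join(out)

sample = generate("prometheus")
# Can print "prometheus gave fire", a sentence that never occurs in the
# corpus, because "gave" follows "prometheus" in one line and "fire"
# follows "gave" in another.
print(sample, "| verbatim in corpus:", sample in corpus)
```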

 

MFD: Okay. As I was reading these summaries over and over again, I was struck by the recurrence of the idea of some kind of uneasy reconciliation between Prometheus and Zeus. The idea that there was at least a détente between the two, but also an understanding that technology still contains danger. I am curious whether you think Prometheus Firebringer has any such reconciliation or détente. Any peace.

 

AD: I don’t know. I wasn’t thinking about it precisely like that. I put the reconciliation between Prometheus and Zeus in the prompt because that’s what some scholars think the third play might have been about. So I used it. But I didn’t think about the piece as enacting that, particularly. The conflict between the two characters in Prometheus Bound—and presumably in the other two plays in the trilogy, too—is between the power of the mind and the power of brute force. It’s the power of ingenuity, somehow, and the power of the state. So if Prometheus is going to capitulate to Zeus, it means he’s capitulating to state power. That’s obviously a question in our world … to what extent is the tech sector enabling or contributing to state control, and to what extent is it empowering individuals, ordinary people. Silicon Valley has often claimed to be “democratizing,” or to be giving people power to do more, and so on. But is that what it’s really doing? Or has it actually just been handing elites increasingly powerful tools of control?

 

MFD: Right. And in the thinking you did for your lecture, in addition to the many sources you used to cite common phrases, it seemed as though there were a couple of texts you had engaged with very deeply. One of them was Simon Critchley’s book Tragedy, the Greeks, and Us.

 

AD: Yes, the two tentpoles are Simon Critchley’s book and the work of Bernard Stiegler. Stiegler was very engaged with the Prometheus/Epimetheus story from Plato; the first volume of his major work, Technics and Time, is subtitled The Fault of Epimetheus. He’s thinking about the duality: Epimetheus representing the notion of forgetting, aftersight, realizing the truth too late. Prometheus means foresight, so he is about thinking ahead, planning, seeing consequences. For Stiegler, that duality is central to the human condition of being always a little too early or a little too late, of being trapped between trying to think ahead, but always being cast back on the past, which you think you can know.

And he connects this idea of time to technology—he means technology in the most expansive sense, he’s not just talking about computational technology. Technology puts us in a situation of having to make decisions. Which implicates our projection into the future: Should we do this or should we do that? That also means technology creates the potential for conflict from the very start. Not just because you can build weapons and cause destruction, but because all of a sudden you have to decide, you know, Where should we put our resources? What will be the most useful tool? What would be the most useful step to take? So in that sense technology, or techne very broadly, creates political division and potential conflict. Because even prior to deciding where to put resources, what to build, what to do—we need to decide on what basis we should make those decisions. Which of course raises the question of values.

Critchley, in his introduction to the book on Greek tragedy, writes that the central question of a Greek tragic protagonist is: What should I do? The characters are in situations in which there’s no good choice. There are enormous costs on all sides, there are no good options, and any decision the protagonist takes will bring suffering, possibly unbearable suffering. But still, there she is. She’s Electra. She’s Antigone. She has to decide: What should I do?

So the connection between those two is really the basis of the piece. That’s why those books, Stiegler’s The Age of Disruption and Symbolic Misery and Critchley’s Tragedy, the Greeks, and Us, keep appearing in the lecture. We also are in a situation in which there are costs on all sides, we also know and don’t know at the same time, we also have a desire to know what’s going to happen in the future so we can make the right decisions. Artificial intelligence is really not the right description of these models—they are better described as predictive tech. That’s what they do: they predict based on statistical analyses of past examples. And, you know, many of our current decisions about what to do, both individually and collectively, are really bounded by technology’s possibilities and potential abuses.


MFD: But I did sense the possibility of relief at the end, when you say, “As long as there is time, there’s time for care,” and you cite from the Talmud and the Qur’an. Is that sense of relief there for you?

 

AD: But I cite from something someone wrote for Stiegler’s memorial. I don’t cite from those books directly.

 

MFD: I have to believe that as the creator, you could have cited something that didn’t mention religious texts that speak to the value of human life, that there could have been a different citation because, as you cite just a bit earlier, “But something else is true as well, but something else is true as well. …”

 

AD: There could have been another citation. There’s always choices! But I liked including a eulogy of sorts for Stiegler. I always thought I would get to meet him someday. But now not. [Theatre for a New Audience Artistic Director] Jeffrey Horowitz said something to me I hadn’t thought of. He said, at the beginning of the lecture I say I’m going to talk about the individual in the contemporary age, then I go on a long trip, but finally at the end I quote Stiegler saying, If you save one person, you’ve saved all of humanity. So I have actually talked about the individual in the contemporary age. I thought that was a nice observation that I hadn’t realized myself.

Something else just occurred to me, too, about the ending. After doing all this stuff with the Prometheus myth for the last hour, finally at the end: There’s other myths. There’s myths from the Qur’an, there’s myths from the Talmud, there’s myths from the Christian Bible, there’s myths from the Vedas, et cetera. In other words, there are other sources of human aspiration and philosophy, and other parables that we can draw from when thinking through these questions. It’s not just Prometheus.

Featured image: Photograph of Annie Dorsen © Stephen Dodd