Public Thinker: Meghan O’Gieblyn on God, Machines, and Intelligence

Thinking in public demands knowledge, eloquence, and courage. In this interview series, we hear from public scholars about how they found their path and how they communicate to a wide audience.
“We can’t always explain how algorithms reach their decisions. The reasoning of algorithms, like the will of God, is unfathomable.”
Meghan O'Gieblyn

Meghan O’Gieblyn is prolific. She is an essayist. She is a columnist. She is a critic. She is a public thinker in the spirit of the 20th-century titans who claimed that title. Her 2018 book, Interior States, collected essays from a range of prestigious publications—Harper’s, the New Yorker, n+1—and won the Believer Book Award. One of her essays was included in the Best American Essays 2017 collection. Another was a National Magazine Award finalist in 2019. She’s written a column for the Paris Review and has one at Wired. As we touch on below, she was raised an evangelical Christian, began Bible college, and then broke with the church. Her new book, God, Human, Animal, Machine (out this August), is concerned with the intersections of theology and technology, where the term “tech evangelist” takes on deeper meaning. In many ways, the new book asks what it means to be human in a technological society: a society that claims it can artificially (as opposed to naturally, as given by God) create intelligence, and that outsources questions of faith, belief, trust, and doubt to code, algorithms, and machines that can learn. This interview was conducted over Zoom from her home in Wisconsin, where she lives, writes, and teaches.


B. R. Cohen (BRC): Let’s talk about clocks.

 

Meghan O’Gieblyn (MO): I’m in.

 

BRC: Sure, they keep time, but they’ve also been used as a vital metaphor for centuries, as with the clockwork universe we get from the scientific revolution, the ways they represent order, the ideas of regularity. You’ve written about how humans try to defy or overtake nature through technologies like the clock, or at least to have some claim on controlling the world beyond us. What’s your take on clock metaphors today?

 

MO: A few years ago, I wrote a piece for the Paris Review about the 10,000-year clock, a project conceived by the computer designer Danny Hillis. His idea was to create an enormous clock that would be a symbol of long-term thinking and expand our historical perspective. There would be a hand that advanced once every century and a cuckoo that emerged once every millennium. I was interested in the idea of using a technological object to encourage people to think about the future beyond our own lifetimes, and also in the fact that the design for the clock was bought a few years ago by Jeff Bezos, who’s now building it.

There’s this odd paradox among Silicon Valley luminaries like Bezos and Elon Musk. On one hand, these men are famous for talking about the need to think big, to think futuristically. But on the other hand, these are the same men responsible for building technologies that have dramatically accelerated the pace of life, making it impossible to think about the future because everything feels like it’s going to be destroyed or replaced through innovation.

 

BRC: Silicon Valley boosters too often have an ahistorical identity. I know I’m being general, but in the framing you often see from that digital tech crowd, it’s as though nothing existed in the past unless it was a precursor or a lesser version of what they’re inventing now. How are they able to imagine these grand futures if they’re already negating the majority of human history?

 

MO: I’m struck by that too—how a lot of their thinking seems ahistorical, or at least not interested in engaging with the history of ideas. That’s partly why I became interested in putting technology in conversation with older theological and philosophical traditions. So much of the debate about technology, and so much technological criticism, seems to be happening in a bubble, almost like an eternal, perpetual now.

 

BRC: People might not often talk about clocks as instances of machine learning or artificial intelligence, but we could, as clocks outsource our awareness of the day; they build regimentation to place order upon natural systems that are not so ordered. Instead of living and working in response to the day’s solar and diurnal cycles, we have our experiences shaped by whatever the clock tells us to do.

 

MO: We outsource our intelligence to machines, so they become a kind of prosthesis, extensions of our minds. But then the process works in reverse, too, where we start to see ourselves as machinelike. Descartes believed that all animals were basically clocks: they didn’t have any interior experience; they were essentially robots. And he believed that a human was a machine with a soul. The soul was always a problem for the clockwork universe. This was more or less the starting place for my new book, God, Human, Animal, Machine. Much of the book is about how the success of the modern scientific worldview rests on mechanical metaphors, which necessarily put a bracket around individual human thought and agency. We put consciousness and free will to the side and try to describe the world purely as a machine. We did it with the clock metaphor, and that’s what we’re doing with the computer metaphor, with the computational theory of mind.

 

BRC: I thought this was fascinating: you say near the beginning of the book that “To discover truth, it is necessary to work within the metaphors of our own time, which are, for the most part, technological.” You then touch on how people have thought of technology for thousands of years in ways that are somewhat theological. When did you start exploring that relationship?

 

MO: It was after studying theology in college. I went to a small Bible college in Chicago, Moody Bible Institute, which was founded by the American evangelist D. L. Moody. He was responsible for popularizing a very apocalyptic brand of evangelicalism in America called Dispensational Premillennialism. In a way, the Bible college was very future oriented. In that tradition, there’s a lot of talk about how we’re living in the final chapter of history, and the end times are coming.

Long story short, I dropped out of that Bible school. I was working at a bar at the time, and a friend who worked with me lent me Ray Kurzweil’s book The Age of Spiritual Machines; he’d been reading it during the slow hours of our shifts. The title sparked my interest: this idea of spiritual machines, a phrase that sounds like a contradiction in terms.

That was my first step into current discussions about technology. I really didn’t know a lot about technology at that point in my life. But I’d been studying early church fathers who had these ongoing, really baroque debates about how the resurrection was going to happen. Would God resurrect just our souls, or would the whole body have to be resurrected? And what constitutes identity—is it the physical particles that make up a person, or the form? I was struck by the connections between this tradition and more contemporary transhumanist subcultures, where people were talking about concepts like digital resurrection and mind uploading.

 

BRC: Can I ask for a quick definition of that, of “transhumanism”?

 

MO: Of course, yes. “Transhumanism,” as I understand it, is the belief that humans can use technologies to extend life, perhaps indefinitely, and to further our evolution into another species—posthumanity. Within that subculture, people are talking about issues very similar to those old questions about resurrection and bodily form, but with no awareness that people had a whole debate about this in the third and fourth centuries. What interests me about transhumanism is the question of how these very old Christian ideas got into the West Coast utopianism that arose in the ’80s and ’90s, which is something I explore in the book.

 

BRC: There’s another great line early on in GHAM that felt like the core of the book: that “today it’s artificial intelligence and information technologies that have absorbed all the questions that were once taken up by theologians,” namely the mind’s relationship to the body, the question of free will, and the possibility of immortality. Does that get at what you’re saying about the connection between technology and theology?

 

MO: Yes. I was asking: Why are there so many similarities between those theological questions and current debates about technology and human identity? Why are we still talking about the same problems that Augustine was writing about, and Aquinas? There are a lot of different ways that these questions found their way into science and technology. One thread of the book explores how modern science inherited from medieval Christianity some of the unsolved problems about identity and the limits of the human mind. Those problems got baked into mechanistic philosophy and then into our machines themselves. And those questions are now reemerging in artificial intelligence, which again brings up the problem of consciousness and the mind/body problem. Can machines think? Are we just machines?

 

BRC: How did you come to that technology-religion nexus after leaving Bible school?

 

MO: I wish I could say it was planned or plotted out. But beyond the Kurzweil spark, when I first left, I read a lot of philosophy and physics in a sort of idiotic way, not systematic at all. I was just reading books I checked out of the library and basically trying to trace this whole history of Western thought without any guidance. I think if you have a traditional liberal arts education, you learn things within their proper context—I mean, ideally—and understand the whole framework and how one system of thought gave way to another. The way I was reading was radically decontextualized. I read Russian novels as though they were speaking to contemporary questions about whether humans can be good without God, or the problem of evil—all these questions that people kind of stopped taking seriously in the 19th century, but that seemed very real and urgent to me. And now, after writing my recent book, I think maybe they still are.


BRC: I want to get more at that, the gods and machines and spirits and robots of your book. On my read, the book suggests four henchmen—henchpeople?—that helped you navigate technological society: the book of Job, which comes up a lot in GHAM and which I knew almost nothing about otherwise; Descartes; Dostoevsky’s Brothers Karamazov; and Hannah Arendt, whom you lace in so well throughout. I wonder what brought those points of reference together for you.

 

MO: Those are texts and thinkers that I’ve come back to a lot since I left Christianity. Now that you put them in a group, I do think they’re all contemplating the limitations of the mind, or the inability of the mind to understand some sort of transcendental reality. They’re asking, Is there some higher objective order beyond human perception and human morality?

 

BRC: Could you say more about the book of Job?

 

MO: It’s a book I really struggled with in Bible school. Job is asking God to explain why he’s suffering, but he receives an answer that’s not an answer at all: Who are you, a human, to ask questions of God? Your puny human mind can never understand the complexity of divine will.

I saw an echo of that sentiment, weirdly enough, in a lot of conversations about deep-learning algorithms, particularly around the time when they were rising to public awareness, in 2017 after Google’s AlphaGo program beat the world champion in the game of Go. It’s not just that these algorithms are vastly more intelligent than humans. They’re also black-box machines, so even the people who design and build them don’t really understand how they work. And yet we’ve implemented them in the justice system, in policing and banking, and people are having major decisions about their lives made by these machines. But we can’t explain how they reach their decisions. The reasoning of the algorithms, like the will of God, is unfathomable.

 

BRC: That’s where you work through questions of knowledge, right? Have machines learned something, or is it just that they have a lot more information at their disposal than we have? Which then brings up a distinction between knowledge and information.

 

MO: Right. And also the distinction among various definitions of information. In ordinary speech, when we talk about information, it’s something that only makes sense in terms of a conscious subject. Information doesn’t mean anything unless there’s a person who is being informed by it, right? But when Claude Shannon was pioneering information theory, in the 1940s, he made a very specific move on which the whole theory rests: you can strip semantic meaning from information, so that information becomes pure syntax. In other words, there doesn’t need to be anybody who understands what the information means; it can just be symbols that are manipulated by computers. That’s the redefined concept of information people are referring to today when they say that computers are “processing information.” And it’s also the sense cognitive scientists have in mind when they speak of our brains as processing information.

I was so curious about this very question when I started writing: How can you say that a machine knows something? That it learns, that it understands? Well, if you create a metaphor—the mind is a computer—and cleverly redefine the terms so that the lines between those two systems blur, then words like “knowledge” and “understanding” begin to lose their traditional meaning. From there, we can say that not only minds but also forests and plants are information processors.
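To make that syntactic notion of information concrete, here is a minimal sketch, not drawn from the interview or the book, that computes Shannon entropy for a string of characters. The example string and the function name are purely illustrative; the point is that the calculation looks only at how often each symbol occurs, never at what any of it means.

```python
# A minimal sketch of Shannon's syntactic measure of information:
# entropy depends only on symbol frequencies, not on meaning.
from collections import Counter
from math import log2


def shannon_entropy(symbols: str) -> float:
    """Average bits per symbol, computed purely from symbol frequencies."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * log2(n / total) for n in counts.values())


meaningful = "information is just syntax"
# Same characters shuffled into nonsense: the meaning is destroyed,
# but the frequencies (and therefore the entropy) are unchanged.
scrambled = "".join(sorted(meaningful))

print(shannon_entropy(meaningful))  # roughly 3.63 bits per symbol
print(shannon_entropy(scrambled))   # identical value
```

Nothing in the computation understands English; a reader is still required for the string to mean anything, which is the gap O’Gieblyn points to between information in Shannon’s sense and knowledge in the ordinary sense.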

 

BRC: Trees are talking to each other.

 

MO: Right, exactly. If information is all about unconsciously manipulating symbols, then we get this huge, extensive metaphor where anything can be information. But the irony is that while Shannon’s theory of information sort of disenchanted the notion of information, because it removed subjectivity and consciousness, it’s now being used to reenchant the world—to say that forests can be information systems; they’re alive, in some sense, the same way that we are.

 

BRC: There’s a T. S. Eliot line in his play The Rock, “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?” I think of that when I hear people talking about the power of, say, a transistor, a semiconductor, with more computing power in this one transistor than in the synapses of the entire human brain. But I don’t know what to do with that. What do transistors really know?

 

MO: Well, it depends on whether you think our own knowledge is produced in the same way. If knowledge is just the product of computing power, then transistors or computers or algorithms can know and understand—maybe they can be wise, too? The Eliot line is interesting because he’s talking about how human qualities like wisdom degenerate as we come to value simpler forms of knowing. We’re reducing the value of our own minds. But now we’re also endowing machines with those human qualities.

 

BRC: I think of a Xerox machine when people boast about computing power increasing exponentially. I mean, I could press “1” on the photocopier and it will copy one thing. If I press “1000,” it will print a thousand. So, sure, now I have a thousand more pieces of paper. But really, the machine just replicated the same action a thousand times. It’s just adding zeros; there’s no content there. It’s not a thousand times better.

 

MO: You can’t get qualities from quantities. Maybe that’s another way of talking about the distinction between information and knowledge. This is an ongoing problem with machine-learning algorithms, that they don’t have much sensory access to the world, so they don’t know what the symbols are referring to. Even with these superintelligent “deep learning” systems, if you ask them too many follow-up questions, it becomes clear that they don’t have any real understanding. An algorithm that governs how a refrigerator works doesn’t really know what a refrigerator is. There’s still this hope in Silicon Valley that things will improve if we just have more data, if we have more information. But there’s a problem that’s built in at the base layer of these machines: they don’t understand the real-world referents of these words; they were not built to.

 

BRC: There’s a John Mulaney joke, and I’m paraphrasing here, but it’s that with our online lives of log-ins and security questions, we spend so much of our day trying to convince a robot that we’re not a robot.

 

MO: [Laughs] Yeah, I had to do that today because I locked myself out of an old email account and it made me click the button three times and pick out which square had stop signs. To prove I’m human.

BRC: I’d like to talk about your own life as a writer and a public thinker. Interior States, your first book, was a collection of essays. How did you choose the essay form for wrestling with big moral and theological questions?

 

MO: I started writing back in Bible school, when I was struggling with serious doubts about my faith. Part of that doubt came from reading texts like the book of Job or The Brothers Karamazov, where Dostoevsky is contemplating the problem of evil. At the time, I couldn’t really articulate those doubts to anybody because, well, I would’ve been accused of being a heretic. So I started writing as a way to organize my thoughts. It was very much private writing. It wasn’t supposed to be performative or creative or anything like that. I kept doing that type of writing after I left Bible school. I’d write about books I was reading, again not really intending it for an audience. Then, several years after Bible school, I had this sense that I could do the same thing in public. That’s a slightly different thing: you have to perform your thinking for others; it isn’t just writing to work things out for yourself. But now that I’ve been doing this a while, I think I’ve retained the germ of that impulse even so: trying to make sense of things for myself through writing.

 

BRC: Like how, as teachers, we talk about learning through writing.

 

MO: Exactly. Or even being able to hear what you think.

 

BRC: I also love your use of a personal, subjective point of view in your nonfiction, which is indeed personal while also being theoretical and intellectually rigorous. You mention in your book that the only letter on your keyboard you’ve worn off is the “I.” How do you see the first-person voice operating in your nonfiction writing?

 

MO: There’s a way in which personal writing often seems less serious or rigorous than journalism or academic writing. But I’ve found that I need myself as a point of reference to think through things. When I’m trying to write about ideas in the abstract, without accounting for my experience or for how I’m personally thinking about those ideas, I get lost in the ideas themselves. Perhaps they become symbols without meaning, as we were discussing earlier—like a formal system without any grounding in the real world.

 

BRC: And of course you’re dancing with these questions about machine learning—Is artificial intelligence a kind of intelligence? What is consciousness?—to which the notion of selfhood definitely pertains. Did that figure into how you wrote this book?

 

MO: I’m really drawn to writers who can put conversations about science or technology in relation to the humanities—like philosophy or religion—or, maybe more broadly, writing that uses some personal question as an avenue into a larger social or philosophical problem. Eula Biss, in On Immunity, is writing about the history of vaccination but also talking about deciding whether or not to vaccinate her child. Or Emmanuel Carrère’s book The Kingdom, which tells the story of the early church alongside his own experience joining and then leaving organized Christianity. I just think that’s a powerful form, bringing the abstract and the personal together.

For me, this convergence may have happened unconsciously. As a personal writer, I always blended my experience with larger ideas, and it didn’t really occur to me till midway through this new book that the whole thing is questioning selfhood and the subjective. There is a meta layer in the book that emerged from my use of the “I.”


BRC: This interview is part of the Public Thinker series at Public Books, and many of the subjects are academics or scholars (or began that way) who see or understand their work as living beyond the academy. In your case, there’s an interesting reversal, though I know you teach, too. But your work is relevant for so many academic debates, and it’s coming from outside the academy and should be read—and is read—inside it.

I was about to say I wanted to ask a different kind of question, but now it occurs to me this is consistent with what we’ve been talking about: the creative act, the art of writing, the ability to produce new ideas and formulate them. As a writer, you’ve talked about the somewhat inexplicable thing that happens when you sit down to write at your desk, and you have a plan, and at some point things just start happening and the words take on their own life. I guess this is a continuation of our earlier discussion point about knowledge, or enchantment. How do you view the writing process, in light of the big ideas you explore in your work?

 

MO: I do think there’s a way in which the words take over. And it does relate to something I explore in the book, specifically the notion of “emergent qualities” in machines: the idea that maybe, just by accident, if we put the pieces together the right way, if machines have enough data or become sufficiently complex, they’ll start developing these higher-level skills or capacities that we didn’t intend.

I’m fascinated by this idea: that we can create things that transcend us, that creation can become larger than its creator. And that resonated with me as a writer, because I always feel that the things I create are somehow bigger than what I thought I knew. Or the work ends up synthesizing more, or knowing more than I did at the beginning.

 

BRC: Is this related to the God of the gaps, a concept that comes up in your book? That we can explain so much about where the world came from or how our minds work … but then there’s always that last little space where we can’t quite explain creation or novelty or imagination, so we say “That tiny gap, that’s where God is”?

 

MO: That’s the big mystery—what happens in those gaps, those places that elude our understanding? Consciousness is the big gap, the unexplained hole in the materialist worldview. And I think the tendency, ever since Descartes, has been to write it off as supernatural or some kind of unsolvable mystery. Ironically, even some of the most reductive, materialist positions on consciousness end up doing this by claiming that consciousness is just an “illusion.” It’s something not quite real, outside time and space.

 

BRC: To get back to where we started with metaphors, there’s something here about the brain as a computer. Instead of humans as clocks with a soul, now it’s computers, the technological metaphor of our age. You refer to the novelist Tim Parks, who finds it puzzling that our brains should turn out to be the very things, computers, that we ourselves only recently invented. This comes up a lot for me in class, too. Students might start by saying a computer is like a brain, but they pretty often cut out the “like a” and end up with the computer is a brain, or the brain is a computer.

 

MO: Exactly. You’re right that there has been a very subtle shift from using a metaphor to asserting an identity between computers and brains. Humans invented technologies—like, say, microchips—that are theoretically more intelligent, in certain ways, than the breadth of humanity, but then we claim that the thing that came up with the computer is itself a computer.

 

BRC: That’s either funny or arrogant, or just the way things go; I don’t know.

 

MO: When the computational metaphor first emerged, in the mid-20th century, all of the terms we used to talk about computers—like “learning” or “understanding” or “knowing”—were put in quotation marks. Today, it’s really rare to see those words in quotation marks. People just take it for granted, basically, that our brains are computational. They’re not like computers; they’re actually doing the same thing that computers are doing.

As my book attests, I’m still not sold on the idea that our brains are actually computers. People have been wrestling with the ambiguities around consciousness since Augustine’s City of God, where he entertains skepticism about whether he exists. It’s the same problem Descartes later formulated in the cogito (“I think, therefore I am”), only some 12 centuries earlier. And I don’t know that these abiding questions are going to be suddenly solved because we created a machine that unlocks the mysteries of our nature.

Which is not to say there’s nothing to the metaphor. Writing especially makes me aware of the fact that there is something happening in our brains that eludes our conscious thought—almost like computer processing. I tried to write this book twice and got about halfway both times and realized it was a failure. But then, a year ago, I started this newest version, and I wrote it very quickly—in about five and a half months. I was using material from those earlier attempts, but something clicked. I’m still not sure how it came together so quickly. Maybe this is true of all complex systems: at a certain point, the mechanisms become so intricate, so mysterious, that it feels very much like something mystical has taken hold.

 

This article was commissioned by B. R. Cohen.

Featured image: Meghan O'Gieblyn. Photograph courtesy of Meghan O'Gieblyn