AI: Machines or Magic?

This is the third article in the series Co-Opting AI, curated by Mona Sloane. Read the first two articles in the series here and here.
The following is a lightly edited transcript of remarks delivered at the Co-Opting AI: Machines event, held on April 1, 2019.


Hi, everybody. Thank you so much to the Institute for Public Knowledge for having me. This is such a marvelous series and such an interesting event. I’m really delighted to be presenting with Meredith Broussard and Solon Barocas, who have already addressed so many of the questions surrounding co-opting AI that I feel at liberty to be a little irresponsible.

What I would like to bring to the table are some thoughts about AI and the future. And not, to be clear, the actual future of AI: budgets, agendas, hardware and code, training sets, relative success and provisional failure, and social consequences. All those things are the actual future. But AI has always only partially been about the actual future of probable developments and base-rate outcomes; it has also been singularly productive of philosophical speculation, fantasy, and arguments about ourselves and the future.

In the lexicon of science fiction writing workshops, which are a really interesting experience and filled with their own terms of art, AM/FM refers not to radio, but to how you describe speculative technologies. Are you writing about “actual machines” or are you writing about “fucking magic”? It is a crass, but very memorable, way of distinguishing between technologies—in all their friction, faults, inefficiencies, and adaptations—and our fantasies about them.

We’ve been discussing actual machines and actual consequences. I’m here to talk about the fucking magic, which Meredith has already brought up in a number of ways. Fucking magic is something to which AI is very prone, offering, as it does, a perfect container for our own anxieties about consciousness, intelligence, control, agency.

We could go back centuries earlier, but I would like to start in 1914, in Paris, when the engineer and inventor Leonardo Torres y Quevedo presented El Ajedrecista, the first real chess-playing automaton, which could play and win within the extremely constrained limits of a king-rook-king endgame: the machine’s king and rook against a human opponent’s lone king. Basically, this is a minimally complex chess position, but one in which the machine could reliably win by simply cutting off avenues of escape until you lost.
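To make that “cutting off avenues of escape” concrete, here is a minimal sketch in Python (a hypothetical toy of my own, not Torres y Quevedo’s actual electromechanical rule set): the rook acts as a fence the lone king can never cross, and the fence advances a rank each time the king retreats, until the king is pinned against the edge of the board.

```python
# Toy illustration of the "confinement" strategy in a king-rook-king ending.
# This is a one-dimensional caricature, not the automaton's real move logic:
# the rook guards a whole rank (the "fence"); the lone king cannot step onto
# it, so its only safe progress is backward, and the fence follows it.

def confine(king_rank: int, fence_rank: int, edge: int = 7) -> list[tuple[int, int]]:
    """Drive the lone king to the board edge; return (fence, king) positions."""
    assert fence_rank < king_rank <= edge, "king must start above the fence"
    history = [(fence_rank, king_rank)]
    while king_rank < edge:
        king_rank += 1   # the king's only safe direction is away from the rook
        fence_rank += 1  # the rook cuts off the rank the king just vacated
        history.append((fence_rank, king_rank))
    return history

for fence, king in confine(king_rank=3, fence_rank=2):
    print(f"rook fences rank {fence}; lone king held at rank {king}")
```

In this caricature, the king’s free region shrinks by one rank per cycle and never grows, which is why the machine could not fail to win.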

However, let me read you this headline from a contemporaneous report about the automaton in Scientific American: “He Would Substitute Machinery for the Human Mind.” That article contains this splendid sentence: “There is, of course, no claim that [the machine] will think or accomplish things where thought is necessary, but its inventor claims that the limits within which thought is really necessary need to be better defined, and that the automaton can do many things that are properly classed with thought.”

In other words, this is a debate that we have been having, almost unchanged, for more than a century, whether we have been talking about chess, about facial image recognition, about neural networks, about spatial navigation, you name it: the argument over whether or not something thinks, what things should be classified as thought, what we mean by thought, and so on.

In 1950, Alan Turing published “Computing Machinery and Intelligence,” which presents the now famous imitation game, or “Turing test,” for artificial intelligence. But this bracingly contemptuous paper is much more about why we humans think our minds are so special in the first place and how that colors our evaluations of other kinds of minds. As Turing points out in an argument about whether a computer program could surprise us: “It is perhaps worth remarking that the appreciation of something as surprising requires as much of a ‘creative mental act’ whether the surprising event originates from a man, a book, a machine, or anything else.”

That is, how we describe things as artificial intelligence—and respond to the things that we identify as such—has much more to do with us, with our creative mental acts, than with it, whatever it may be. Through many creative mental acts on our part, the actual machines of AI are constantly being conscripted into the business of fucking magic.

I would like to talk briefly about the grandest edifice of the FM side of AI: the native 20th- and 21st-century computational theology and mythos of the singularity, a notional moment when powerful general-purpose AI can improve itself in a self-augmenting cycle, leading to an exponential, runaway explosion in intelligence and the end of the human condition as we know it—whether it ends in our annihilation or our transfiguration.
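(A parenthetical for the mathematically inclined, offered purely as an illustration of the folk argument and not as anyone’s actual forecast: the word “singularity” is doing real work here. In one toy reading, a capability whose growth rate is proportional to itself merely grows exponentially, which is fast but finite at every moment; make the growth rate proportional to its square, as the self-augmenting story is often taken to imply, and the solution diverges at a finite time, a literal singularity.)

```latex
% One toy reading of "self-augmenting" growth (illustrative only):
\frac{dI}{dt} = kI \;\Longrightarrow\; I(t) = I_0\, e^{kt}
\quad \text{(exponential: fast, but finite at every } t \text{)}

\frac{dI}{dt} = kI^{2} \;\Longrightarrow\; I(t) = \frac{I_0}{1 - I_0 k t}
\quad \text{(diverges as } t \to 1/(I_0 k)\text{: a literal singularity)}
```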

The term originates in a 1950s conversation between John von Neumann and Stanislaw Ulam, in which von Neumann used it less in today’s sense than to describe the accelerating pace of technological change, which he saw as outstripping our ability to comprehend and cope with it.

The idea of the singularity was given specificity by mathematician and sci-fi writer Vernor Vinge in a presentation made at a NASA workshop in 1993, in which he reframed it as “the imminent creation by technology of entities with greater than human intelligence,” and also foresaw it as a disaster. The subtitle for his paper for NASA was: “How to Survive in the Post-Human Era.”

The term was then popularized and dramatically altered by Ray Kurzweil, who pivoted from a career in computer science and software development into his current gig: prophesying a coming epoch of immortal disembodied intelligence, which will torch through all our seemingly intractable human problems like coherent light. Meanwhile, Nick Bostrom and other critics argue that AI is an existential risk to humanity, and perhaps all life on earth.

Still others argue that the singularity has already taken place, and we are currently living in a simulation of our universe being run on some shard of our prospective post-human intelligence or that of another class of entities, who have followed the logically inevitable path toward maximizing the intelligence produced per watt of energy in the universe.

The latter idea is taken with sufficient seriousness in corners of Silicon Valley that institutes have been funded, and scientists hired, to figure out how we might determine whether we are, in fact, already living in a simulated reality. That this is happening is the greatest argument I can think of for a stronger capital gains tax.

The group I’m personally most interested in, a nonprofit called the Machine Intelligence Research Institute, is trying to anticipate the thinking of a prospective post-human intelligence to figure out how it could be made most friendly to us, which is at once oddly reasonable and perhaps one of the strangest jobs in the history of logic.

This project coexists with the argument, increasingly powerful in Valley philanthropy, that we should be putting more money into AI research and development than anything else, on the grounds that, well, do you want to save paltry tens or hundreds of thousands of human lives now, or see the development of trillions of post-human minds, expanding throughout the solar system? No need to worry about quotidian matters like clean drinking water or pesticide use, then.

I would like us to take these examples—and there are many, many more—as a chance to think about how AI, as an industry and as an area of social consequence, functions at a peculiar intersection of AM and FM. We misattribute agency and autonomy, and, thereby, blame and responsibility, to systems that are merely very complex and made by people. And we paper over flaws, bias, unfairness, and plain bad design with the magical salve of intelligence, which is effectively meaningless as a quantitative term, but is constantly described as though it were like heat or mass: something that we could easily and coherently measure. We undervalue some of the truly surprising results of AI research in our desire for theology by other means.

AI, especially in popular culture, is often a jumping-off point for dialogue with ourselves about what the future means, sometimes at the expense of understanding the present. Norbert Wiener, the cyberneticist, who actually played chess against a replica of El Ajedrecista in the 1950s, often compared the threat of AI (in terms of automation and Cold War military strategy) to the golem, the sorcerer’s apprentice, and the monkey’s paw: magical agents whose execution of poorly specified desires with unlimited power leads to disaster.

I would suggest that one way to think about the mythic properties of current AI might be the doppelgänger: the sinister reflection embodying our fears about ourselves, in which we can see our own anxieties, desires, and unspoken biases. The truly human part of artificial intelligence is that we can’t resist making it all about us.

Featured image: #davidwojnarowicz (2016, detail). Photograph by Daniel Rehn / Flickr