In an arresting chapter of her new book, Chromatic Algorithms: Synthetic Color, Computer Art, and Aesthetics after Code, Carolyn L. Kane analyzes the movie Predator, which gives expression to hyperbolic fears about the potentially dire implications of the rise of computerized algorithms, in this case the algorithms that animate digital infrared technologies. In the movie, a combat unit is sent into a jungle to rescue a soldier. The team soon finds itself engaged in a battle to the death with the Predator, an extraterrestrial creature that possesses highly advanced camouflage capabilities and can track the soldiers in infrared. The film constantly juxtaposes the world as seen through the eyes of the soldiers, a world suffused with what amounts to mostly irrelevant and cumbersome detail in the context of this battle, with the world as seen by the Predator, which is “represented by ‘heat images’ that appear onscreen in a grid overlay with a vertical ‘levels’ bar on the left, and at times, with crosshairs over the center of the heat image, hovering over the human target.” The Predator is “portrayed to hold a significant hunter-prey advantage over the men, not only because he is invisible to them, but also because he can see in ways that … exceed the limits of human perception with or without the aid of an optical prosthetic.” Kane argues that the fears the movie expresses, however hyperbolic, emanate from a legitimate sense of pessimism and crisis in light of the spread of computerized algorithms, the specific lifeworld they engender, and the ways in which they are put to use in commercial technologies.
An algorithm is a procedure that consists of a finite number of steps that transform an input into an output. We have been living with algorithms for a long time. A cake recipe is an algorithm, as are the assembly instructions that come with a new IKEA chair. Since the digital revolution, however, we have been living with a new type of algorithm: computerized algorithms, executed not by cooks or college students but by digital computers. Such technologies not only increasingly mediate the world for us; they are fast becoming the very stuff of which our world is made. We find them in search engines, traffic and navigation apps, elevators, cars, and home appliances.
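To make the definition concrete, consider a minimal sketch in Python (my illustration, not Kane’s): Euclid’s algorithm, perhaps the oldest example of the genre, which transforms an input (two integers) into an output (their greatest common divisor) in a finite number of well-defined steps.

```python
# Euclid's algorithm: a finite sequence of well-defined steps that
# transforms an input (two integers) into an output (their greatest
# common divisor). Each loop iteration is one "step" of the recipe.
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b  # replace the pair with a smaller, equivalent pair
    return a             # b strictly shrinks, so the procedure must halt

print(gcd(48, 18))  # -> 6
```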
Computerized algorithms are everywhere, and the implications, according to Kane, are dire. The book focuses on algorithmic color technologies that transform “color from a qualitative phenomenon to a code, formula, quantum, or mathematical equation.” Algorithms increase standardization and narrow the range of choices people can make. They open the door to what Kane calls “the algorithmic lifeworld,” in which the statistical data compression and prediction that make these technologies work also decontextualize experience. Kane’s choice to explore these issues in the field of color is apt because color has long been conceptualized as a quintessentially subjective and qualitative experience. Yet efforts to quantify, codify, and mechanize it have dramatically increased in the last few decades, culminating in algorithmic color technologies, such as HTML color, that people interact with or are affected by on a daily basis.
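What it means for color to become “a code, formula, quantum, or mathematical equation” can be seen in miniature in HTML color itself, where a hue is a six-digit hexadecimal string packing three quantized intensities. The following sketch (mine, offered only as illustration) decodes one such string:

```python
# HTML/CSS hex notation quantizes a color into three integers (0-255),
# one each for red, green, and blue, packed into a hexadecimal string.
def hex_to_rgb(hex_color: str) -> tuple:
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

# The hue CSS names "darkorange" is, to the browser, just these numbers:
print(hex_to_rgb("#FF8C00"))  # -> (255, 140, 0)
```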
Kane describes the story of standardization in the field of digital color as a gradual shift from an atmosphere of playful experimentation with computer art in the 1960s, which was productive of fantastic color, utopian agendas, and highly personal visions, to the spread of automated off-the-shelf color software after the 1980s, with the subsequent standardization of color and the production of an equally standardized and opaque “cool” aesthetics. The early utopian and highly creative experiments in computer art were enabled by a combination of technology-rich environments and freedom from the requirement to deliver clear commercial applications. Artists were given the liberty to experiment with advanced technologies in places such as Bell Laboratories, the WGBH New Television Workshop, and IBM. Open-minded directors, supportive engineers, and funding agencies provided the atmosphere and resources that enabled artists to develop experimental computerized color technologies such as the Video and Music Program for Interactive Realtime Experimentation/Exploration (VAMPIRE), created by Laurie Spiegel at Bell Laboratories. This interactive color-music system produced experimental visual effects that have since become widespread, as in the “animations that synchronize abstract color with sound and can be downloaded from many Internet radio stations.” Color at this phase, as Kane describes it, was unstable, unstandardized, less precise, blurry, mystical, and, most importantly, subjective; it was tied to artists’ personal visions.
In the late 1970s and ’80s, changing economic conditions brought an end to this phase, as these once experimental environments were restructured around clear commercial goals. For example, in 1984, following an antitrust lawsuit, AT&T, the parent company of Bell Telephone Laboratories, “agreed to give up its monopoly on the telephone systems and compete in the marketplace with other communications companies.” As Laurie Spiegel commented, the result was that “a lot of pure research with questionable long-term economic benefit went by the wayside in favor of things that would bring in revenue … [the labs] had to sell stock and compete with each [other] in the market and fund their own research.” These changes motivated many of the pioneering visionaries to leave the labs. From that point on, color became what Kane calls “democratic,” a phrase she describes as her “own doublespeak,” meant to convey her ambivalence: on the one hand, the advent of personal computers, commercial off-the-shelf software (such as Photoshop), and internet standards (such as HTML color) democratized access to digital color; on the other hand, the result has been “an unconscious yet deeply homogenized use of digital color in art, media, and design.”
Kane’s key example of what she calls “the algorithmic lifeworld” is infrared digital technologies. Infrared is “a form of long-wave electromagnetic heat radiation” that is “almost entirely invisible to humans, so ‘seeing’ it depends on synthetic processes from capture to screen display.” In contrast to analog media such as photography, where a direct and continuous relationship exists between input and output, infrared technologies rely on digital transcoding processes in which “one source, here heat energy, is captured and translated into another language or system (code), which is in turn used to generate light-based images called ‘heat maps.’” Whereas the former technology involves “optical perception,” the latter involves “algorithmic perception,” and with it a specific ontology and epistemology that constitute the algorithmic lifeworld. Kane highlights three dimensions of this lifeworld: data reduction and optimization based in statistical analysis; predictive scanning (the use of data from the past to track data in the present and predict the future); and the presentation of data as a simulation of reality, bearing only an allegorical rather than direct relation to it. These dimensions underlie other algorithmic technologies that people routinely interact with, such as recommendation algorithms that statistically analyze a user’s past online behavior, produce a reductive understanding of that person, predict his or her future preferences on this basis, and serve customized content that aligns with those preferences. Because we gradually come to experience the world through the mediation of algorithmic technologies that embody these dimensions, we come to inhabit an algorithmic lifeworld in which what is (ontology) and how we can know what is (epistemology) are rapidly changing.
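A deliberately reductive sketch can make these three dimensions tangible. The toy recommender below (my illustration, not drawn from Kane) compresses a user’s click history into category counts (data reduction), forecasts future preference from those frequencies (predictive scanning), and lets the resulting profile stand in for the person (simulation):

```python
from collections import Counter

# Data reduction: a click history is compressed into category frequencies.
past_clicks = ["jazz", "jazz", "news", "jazz", "film", "news", "jazz"]
profile = Counter(past_clicks)

# Predictive scanning: the modal category becomes the forecast of taste.
prediction = profile.most_common(1)[0][0]

# Simulation: the profile, not the person, now decides what is shown.
print(profile)     # Counter({'jazz': 4, 'news': 2, 'film': 1})
print(prediction)  # 'jazz'
```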
Kane focuses on digital infrared technologies not only because they represent these general features of the algorithmic lifeworld, but also because they epitomize her pessimistic evaluation of this world. She stresses that the development of digital infrared technologies “is intimately bound to the history of the military industrial complex and the advancement of modern automated weapon systems in the postwar period,” and that digital infrared is pervasively mobilized “as a system of control used to regulate bodies, realities, and experiences in an increasingly post-optic culture, using progressively pervasive and intrusive means.” She argues that the infiltration of infrared systems into a growing number of everyday contexts points to the subtle militarization of everyday life aided by these and similar technologies. Infrared technologies are increasingly adopted by the commercial, business, and entertainment industries, and are integrated into consumer products such as the computer mouse, the TV remote control, the Xbox console, and Microsoft’s Kinect, as well as into workplace surveillance technologies. For Kane, the military origins of these digital color technologies and their subsequent infiltration into everyday life are intimately related to some of the historical factors that explain the shift from “luminous colors” to “abysmal darkness” between 1969 and 2009: “the endless escalation of the Vietnam War,” “the Cold War,” “guerilla war,” “the militarization of civilian life,” and “a new world order of terrorism that finally reached the US on September 11, 2001.” Infrared technologies thus directly emanate from the dark historical context responsible for the shift from “luminous colors” to “abysmal darkness,” and hence they epitomize the phenomenological dimensions of this context and its wider, equally dark implications better than other color technologies.
Kane’s algorithmic lifeworld is, indeed, a “dark” place, as she puts it. This is partly the result of her focus, which forces her into the gloom. Left unexplored are roads through this ominous algorithmic lifeworld that take potentially “bright” turns, though they employ many of the same features she fears, such as radical data reduction and prediction via statistical analysis.
One morning in the summer of 2011, as part of my fieldwork at a major institute of technology in the US, I sat in a room facing an electric keyboard plugged into the computer controlling Syrus, a humanoid robot marimba player. Algorithms analyzed the music played on the keyboard in real time and responded to it. In “turn-taking” mode, Syrus was programmed to answer my playing with a riff in the style I had just played, or in the styles of various past jazz masters, which the algorithms had abstracted beforehand and could enact in real time; it could even mix these styles together to create new styles of improvisation.1 I remember that I did not expect much when I first played, hesitantly, a random bebop phrase. But Syrus’s response, while lacking many of the features that would make it a distinctly human and satisfying one (in terms of timbre, articulation, and dynamics), was unlike anything I had heard in countless jam sessions with other musicians. The combination of wide intervals and rhythmic irregularity was not something I was used to, and it immediately prompted me to try to integrate these features, so different from those of my habitual style of improvisation, into the lines I was playing.
The algorithms that animate Syrus, hidden Markov models in particular, have been used in algorithmic music composition since the 1950s. They are especially suitable for style imitation based on the analysis of large corpora of music. The algorithms statistically analyze a jazz master’s past recorded solos to generate transition probabilities, i.e., the probability that a certain future state (a note, in this case) will follow a given present state. In actual performance, the algorithms generate musical content from these probability functions, making chance decisions weighted according to the frequencies detected in the master’s recordings. The primary goal of the scientists I worked with was to design algorithms that could enact the styles of improvisation of different past jazz masters and then mix them to create something entirely new.
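The core mechanism is easy to sketch. The toy below is mine, simplified to a first-order Markov chain rather than the hidden Markov models the team used: it learns note-to-note transition frequencies from a tiny invented “corpus” and then improvises by corpus-weighted chance decisions.

```python
import random
from collections import defaultdict

# Training: count which note follows which in the master's corpus.
# Sampling from these raw lists is equivalent to sampling from the
# transition probabilities, since frequent successors appear more often.
corpus = ["C4", "E4", "G4", "E4", "D4", "C4", "E4", "G4", "A4", "G4", "E4"]
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# Performance: generate new material by chance decisions weighted
# according to the frequencies detected in the corpus.
def improvise(start: str, length: int) -> list:
    notes = [start]
    for _ in range(length - 1):
        followers = transitions.get(notes[-1]) or [start]  # restart at dead ends
        notes.append(random.choice(followers))
    return notes

print(improvise("C4", 8))  # e.g. ['C4', 'E4', 'G4', 'E4', 'D4', 'C4', 'E4', 'G4']
```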
These scientists expressed a surprising and telling rationale for their work. They made it clear that their efforts to develop technologies that can improvise jazz were the result of their disappointment with their fellow musicians, whose playing they found to be too predictable and standardized. In response, they tried to develop algorithms that would be able to provide them with the musical variety they thought lacking in many contemporary jazz musicians.
When I first met James, the director of the research team behind Syrus, I started to explain why I was there by describing my previous research on the rationalization of jazz training in US academic jazz education. James immediately interrupted me: “They all sound the same.”
“Who?” I asked.
“The students! They all sound the same. Like machines!” He laughed. “And all the musicians who come out of the schools, and like 99 percent of the jazz musicians today—they all sound the same. I wanted to be inspired. Everything that can be written had already been written. Everything that can be played had already been played. I felt that I understood all the genres I was familiar with like jazz—there was nothing that really caught my interest, a new sound, new ideas. I wanted to develop a device or a tool that would generate new musical ideas that I could not come up with by myself, nor could other people.”
Here, then, is a piece of the algorithmic lifeworld that is a far cry from Kane’s depiction. Here standardization is not the result of the development and spread of algorithmic technologies that stymie human creativity. Rather, standardization is a feature of human practice, and algorithmic technologies are developed to address and resolve it.2 Where data reduction and prediction based in statistical analysis are a source of hyperbolic fears in Kane’s world, here they are approached as conditions of possibility for renewed creativity and for opening up new avenues of inspiration.3
None of this means these technologies cannot be used to limit human creativity. Indeed, if search engines can and do use similar algorithms to analyze a user’s profile based on the corpus of his or her past online activity and to dictate advertising and customized content, they can potentially entrap users in a self-referential and narcissistic world that hinders and stifles personal development and growth. Yet the same algorithmic technologies can be reconfigured for the opposite end. If computerized algorithms can approximate each person’s taste or style, then in principle they can also provide each person with content that does not align with it, enabling individuals, should they wish, to change their personal styles by exposing themselves to styles different from their own. This would require what Eli Pariser described in his book The Filter Bubble as “crafting an algorithm that prioritizes ‘falsifiability,’ that is, an algorithm that aims to disprove its idea of who you are.” Pariser imagines what this would look like: “Google and Facebook could place a slider bar running from ‘only stuff I like’ to ‘stuff other people like that I’ll probably hate’ at the top of search results and the News Feed, allowing users to set their own balance between tight personalization and a more diverse information flow.”4
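What Pariser’s slider might amount to computationally is simple to imagine. In the hypothetical sketch below (the function and names are mine, not Pariser’s or any platform’s), the slider sets the fraction of a feed drawn from content the profile predicts the user will not like:

```python
import random

# A hypothetical "falsifiability" slider: 0.0 yields pure personalization,
# 1.0 a feed built entirely to disprove the algorithm's idea of who you are.
def build_feed(predicted_likes, predicted_dislikes, slider, size=6):
    n_challenge = round(size * slider)   # items chosen against the profile
    n_familiar = size - n_challenge      # items chosen to match the profile
    feed = (random.sample(predicted_likes, min(n_familiar, len(predicted_likes)))
            + random.sample(predicted_dislikes, min(n_challenge, len(predicted_dislikes))))
    random.shuffle(feed)
    return feed

likes = ["bebop", "hard bop", "swing", "cool jazz", "modal jazz", "bossa nova"]
dislikes = ["noise", "gagaku", "serialism", "dub", "free folk", "drone"]
print(build_feed(likes, dislikes, slider=0.5))
```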

Toledo 65 Algorithm. Photograph by jm_escalante / Flickr
The fact that many algorithmic technologies are extruded through a system of concentrated corporate power, operating according to a logic of profit, is important to note.5 Equally important, however, is the fact that nothing prevents corporate power from responding to market demand for algorithmic technologies reconfigured to prevent personal standardization and self-referentiality. Some of the scientists I worked with discussed the commercial possibility of integrating the algorithms they developed into jazz pedagogical aids that would allow students to improvise and receive real-time musical responses based on the “falsifiability” principle, that is, musical input that would not align with their habitual style of improvisation. Given the widespread discontent about standardization in the field of jazz education, as well as the significant market for jazz pedagogical aids, the fact that the scientists I worked with were financially motivated to reconfigure their algorithms in this liberating direction is not surprising. It shows that commerce and art do not have to be at odds with one another.
Kane argues that the historical genealogy underlying the development of contemporary digital color technologies can “counteract some of the naivety and overwhelming optimism surrounding discourses of technical ‘progress’ or ‘transparency’ pervasive in new media discourses.” Ultimately, however, she reproduces this narrative with the signs reversed: instead of utopia, a unidirectional decline. Tropes of loss and of a past “golden era” pervade her dystopian narrative of algorithmic color. Consider the following argument about the trajectory from experimentation to standardization: “In this early period of aesthetic computing there was a window, an opening within the yet-to-be. However, as color became accessible, standardized, and available to all, the window closed. … The shift from no color to ‘millions’ of colors (as software packages misleadingly portend) is a story of increased efficiency, severe data compression, commercial motives, and rhetoric. But it is also a story of loss and forgetting: as the visionary, experimental, and subjective aspects of color are cast aside in a culture driven by speed, efficiency, and profit.” Narratives of progress frequently draw on Enlightenment metaphors of illumination and transparency. Kane’s narrative of decline draws on the same semantic field of light and mirrors those narratives, culminating in a dystopia of increased obscurity and darkness.
How could this reproduction of a unidirectional historical narrative take place in a book that purports to challenge precisely such narratives? It seems to me that one key reason is Kane’s methodological remove from the ways people practically use algorithmic color technologies on a day-to-day basis, together with her choice to rely exclusively on media archaeology, cybernetics, a Foucauldian approach to discipline and power, and Heideggerian phenomenology as her theoretical foundations. There is nothing wrong with these bodies of knowledge, but when they alone are relied upon to make arguments about the meaning and implications of technological shifts, they become risky.
Anthropologists are used to the surprises that emanate from their informants’ interpretations of seemingly obvious situations. Standardized technologies do not necessarily entail standardized experiences or uses, so a dystopian narrative of decline is no less problematic than a utopian narrative of progress. By themselves, then, the three features of the algorithmic lifeworld Kane identifies are only affordances that can be mobilized and experienced in vastly different ways in different contexts, with intended and unintended consequences alike. What is left for us to do is to approach these consequences with an open mind, lest we participate in the same standardization and homogenization of reality that we attribute to the technologies we study.
1. See Eitan Wilf, “Toward an Anthropology of Computer-Mediated, Algorithmic Forms of Sociality,” Current Anthropology, vol. 54, no. 6 (2013), pp. 716–739; and “From Media Technologies that Reproduce Seconds to Media Technologies that Reproduce Thirds: A Peircean Perspective on Stylistic Fidelity and Style-Reproducing Computerized Algorithms,” Signs and Society, vol. 2, no. 1 (2013), pp. 185–211.
2. Indeed, the belief that contemporary academic jazz programs train players who sound the same as one another was prevalent among my interlocutors in US academic jazz education. See Eitan Wilf, School for Cool: The Academic Jazz Program and the Paradox of Institutionalized Creativity (University of Chicago Press, 2014).
3. This is not to say that these algorithms were always successful. I remember many instances of playing with Syrus and being uninspired by its responses to my own (frequently uninspiring) improvisations.
4. Eli Pariser, The Filter Bubble: What the Internet Is Hiding From You (Viking, 2011).
5. For an excellent critique of such power, see Lilly Irani, “Justice for ‘Data Janitors,’” Public Books, January 15, 2015.