The social sciences have an ethics problem. No, I am not referring to the recent scandals about flawed and fudged data in psychology and political science. I’m talking about the failure of the social sciences to develop a satisfactory theory of ethical life. A theory that could explain why humans are constantly judging and evaluating, and why we care about other people and what they think of us. A theory that could explain something so trivial as the fact that social scientists care about data fudging.
This is not to say that we have no theories. It’s just that they’re bad theories. Consider evolutionary game theory. It says that ethical life results from individual rationality. How so? Assume a population of self-interested actors. (A big assumption!) Have them play a one-on-one, non-zero-sum game with each other, over and over again. (Prisoner’s dilemma, anyone?) The winning strategy will be something called “tit-for-tat.” The rules of TFT are as follows: 1) be nice in the first round; 2) copy your partner’s previous move on all subsequent rounds. In other words, if they are mean, you should be mean back; if they act nice, you should, too. In the long run, individuals who follow the TFT strategy will be better off than people who follow a mean strategy. Or so the computer simulations tell us.
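The TFT rules described above can be sketched as a toy simulation. The payoff values, the round count, and the rival “mean” strategy below are illustrative assumptions, not details from the text:

```python
# Toy iterated prisoner's dilemma. Assumed payoffs (a standard illustrative
# choice): mutual cooperation 3 each; mutual defection 1 each; a lone
# defector gets 5 while the cooperator gets 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Rule 1: be nice in the first round. Rule 2: copy the partner's last move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    # A "mean" strategy for comparison.
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Play two strategies against each other and return their total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): TFT loses once, then retaliates
```

Two TFT players settle into cooperation and far outscore a pair of defectors; against a defector, TFT gives up only the first round. This is the sense in which the simulations say nice guys don’t finish last.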
This theory is morally satisfying. Nice guys don’t finish last after all! But it is not intellectually satisfying. Human evolution didn’t really work this way. Early Homo sapiens was not modern Homo economicus. Our ancestors were not isolated monads. They lived in small groups. They were social animals. A good theory would start with good assumptions—realistic ones. Ethical life just doesn’t feel like game theory. Often, it’s hot emotion, not cool calculation. It’s filled with anger and sorrow, love and joy, not minimizing and maximizing. Finally, a good theory would have to account for why we have moral emotions in the first place. In particular, it would have to account for “niceness” itself.
Of course, evolutionary game theory never got much traction outside economics. In anthropology and sociology, the usual response to the question of ethical life has been a blend of cultural relativism and social constructionism. The standard account goes something like this: Once upon a time, we thought there were moral universals. (“We” meaning our poor, un-enlightened predecessors.) Then, we discovered cultural diversity. (“We” here meaning clever social scientists.) We saw that what is forbidden in one culture may be enjoined in another. (Cannibalism, anyone?) We realized that there is no moral law within us, much less in the starry skies above us. (Take that, Immanuel!) We concluded that all laws are ultimately arbitrary. They are the product of power, not reason, be it human or divine. We understood that human beings are just blank slates on which cultural systems inscribe their moral codes. Or so Nietzsche and his acolytes tell us.
This theory may be intellectually satisfying. It makes us feel very worldly and cosmopolitan. But it is morally unsatisfying. To begin, human moral cultures are not really as diverse as the theory implies. Values and virtues like fairness and generosity are well-nigh universal, even if the concepts and conventions we use to name and express them are variable. What’s more, our own relativism is rarely as radical as the theory requires. Who would now deny that the Holocaust was evil? Or that abolition was good? Finally, as anyone who has raised a child knows, the slates are not exactly blank and the codes are rarely clear. If they were, child rearing would be much easier than it is. In short, we can’t be complete relativists in our everyday lives. There is no escaping ethical life.
Still, for all their faults, the rationalist and relativist accounts are not so easily dismissed either. A good social scientific theory of ethical life would need to be compatible with both our current understanding of human evolution and the brute fact of cultural diversity. It would need to show how natural selection could give rise to human ethics. And it would need to show how human history could lead to variation and change in ethical life. It would somehow have to square universalism and historicism. That is a tall order, but that is the aim of Webb Keane’s Ethical Life.
Keane’s book is in three parts. The first focuses on recent developments in moral psychology, where two strands of work are especially important for Keane. One concerns laboratory studies of child development. Here’s an example of the kind of research Keane focuses on. In a fascinating series of laboratory experiments, my colleagues Karen Wynn and Paul Bloom have repeatedly shown that young babies have a moral sense. In each version of the experiment, small infants watched a brief puppet show. One puppet exhibited a positive moral behavior (e.g., sharing, helpfulness, or kindness). Another exhibited a negative one (e.g., selfishness, hindering, or meanness). The infants were then given the opportunity to reward and/or punish one or both of the puppets. The majority consistently preferred the nice puppet to the mean one. Keep in mind that these are pre-linguistic babies, some only three months old. These results are awfully hard to square with a rationalist account, which presumes that human beings in the “state of nature” are self-interested monads. And they are also hard to square with the relativist account, which treats moral values as cultural constructions. If either of these theories were right, then little babies would not have a moral sense.
But where does this moral sense come from? The second strand of work that Keane considers provides the beginnings of an answer. In an ongoing series of books, Michael Tomasello has sought to explain human cognition in evolutionary terms. In his recent work, he puts especial emphasis on what he calls “joint attention.” By this he means the capacity of two or more human beings to focus on a shared project, such as gathering food. Tomasello argues that our primate cousins lack this capacity. Great apes forage collectively, he says, but not cooperatively. Out there in the jungle, it’s every chimp for herself. Human beings, however, evidence a capacity for joint attention from a very early age. Perhaps you’ve seen a young child call an adult’s attention to an object by pointing at it and looking at the adult. That’s the beginning of joint attention.
Tomasello thinks that joint attention conferred evolutionary advantages. To see why, imagine two individuals who both wish to harvest fruit from the upper branches of a tree. Knowing that there are predators about, they are hesitant to do so. Climbing the tree might expose them to attack. If the individuals in question were chimpanzees, they would likely give up. They are capable of imitation but not cooperation, at least not of this sort. But if the individuals in question were early hominids, they could have established a simple division of labor, with one individual standing guard and the other collecting fruit. They probably would not have needed language to do this. Pointing and pantomiming might have been enough. Tomasello speculates that joint attention might have co-evolved with collective foraging. It would have enabled early humans to exploit ecological niches—high-hanging fruit—that their rivals could not.
By now, you might be wondering what joint attention could possibly have to do with human ethics. The answer is a lot, according to Keane, because it requires that we be capable of taking the perspective of another person. And once we have done that, we can also see ourselves through their eyes. In other words, the capacity for joint attention presumes a capacity for second- and third-person perspectives. And this lays the cognitive foundations for moral reasoning. Ego’s ability to put herself in alter’s shoes gives rise to notions of basic equity. And ego’s ability to “observe” her own behavior from “outside” gives rise to feelings of moral obligation vis-à-vis others. In short, no collective foraging, no categorical imperative.
To be clear, Keane is not arguing that the relationship between joint attention and human ethics is a simple one of cause and effect, where one automatically follows the other in law-like fashion. It is one of “affordance.” Consider a chair. It is designed for humans to sit on. But it can also be used for other purposes for which it was not designed: to stand on, to block a door, as firewood, and so on. And it can be used by nonhumans, too—a cat, for instance. Now consider a rock. It is not “designed” for humans, or for anything at all. But it can also be used as a chair if it has a large enough and flat enough surface (smaller if you’re a cat), or, if it is small and heavy, as a doorstop. Let me put this formally: an affordance “A” is a relationship between two entities “X” and “Y” such that X can use Y to do A even though Y was not designed for X to do A and A is not its evolutionary function, either.
Now we can see how joint attention creates what Keane calls “ethical affordances.” Joint attention was not designed to enable human ethics; nor was that its original function. If Tomasello is right, the original function of joint attention was to enable collective foraging. But the second- and third-person perspectives that are required for joint attention can also be employed for ethical reasoning. Is my partner foraging as diligently as I am? Is the fruit being divided evenly? Keane argues that the basic structures of human interaction provide a variety of “ethical affordances” like this.
As a linguistic anthropologist, Keane is especially interested in the ethical affordances created by human language. He puts particular stress on abstraction and generalization. Like joint attention, language probably first evolved as a means of coordinating action, rather than labeling things. But the one capacity afforded the other. And once humans began labeling, they were already on the road to generalizing. Labeling requires categorizing, after all. But, again, what does this have to do with ethics? A lot, Keane argues. The capacity for generalizing via language can also be applied to ethics. It can be used to formulate rules, maxims, and codes of behavior. This is important, because ethics is often implicit. Much of our “moral reasoning” doesn’t involve conscious reasoning at all. It is driven by moral emotions such as anger and disgust, or sympathy and benevolence. And it is expressed through habitual responses such as headshaking or handshaking. But we can be called to account for our responses. Often, we can even give a rational explanation of these responses after the fact. Once we have done so, however, that explanation may be subject to dispute. For example, we may be told that we were wrong to feel disgusted or angry when we saw two men holding hands on the street. We can imagine various responses depending on the context. “There’s nothing wrong with being gay!” “In the Middle East, holding hands is just an expression of male friendship.” And so on. The point is that once an ethical response has been made explicit, it is subject to discussion and debate. And discussion and debate might lead us to monitor our habitual responses. Eventually, it might even bring about a change in our emotional responses. Seeing two men holding hands might bring a smile to our faces, instead of a grimace. Indeed, that has been one of the greatest ethical transformations in contemporary Western societies in recent decades.
This is how ethical transformation often happens, namely, through conceptual redescription. Keane gives the example of “consciousness raising” in the feminist movement of the 1960s and 1970s. Women learned to redescribe their life experiences with new concepts such as “patriarchy” and “sexual harassment.” And this changed their emotional responses to these experiences—from melancholy to anger. Consciousness raising led to policy demands such as equal pay for equal work, but it also led to interactional demands, e.g., for nonsexist language. And inevitably so, says Keane, because ethics is not just a matter of impersonal rules; it is also entwined with personal interactions. Ethics is second-personal as well as third-personal.
But how is ethical life stabilized? By means of “semiotic forms” and “historical objects,” says Keane. We are accustomed to conceptualizing culture with immaterial terms (e.g., “ideas,” “codes,” “binaries,” “discourses,” and so on). But culture—and ethics—also has material instantiations. These may be human artifacts, such as icons or books. Or they may be human practices, such as gestures or rituals. We may contest the meanings of semiotic forms. We can even contest their very meaningfulness. But we cannot create and sustain meaning without them. Historical objects are complex assemblages of semiotic forms. Icons, books, gestures, and rituals may be assembled into that complex object known as a “liturgy,” for example. Add a few more ingredients—a cloister and some robes—and you have a “monastery.” Of course, liturgies can be altered or even abolished, just as monastic orders can be reformed or even dissolved. But this will involve material as well as mental work. That is why culture and ethics are not as pliable or fragile as strong forms of cultural constructionism imply.
Now, let’s take a step back and look at the theoretical machinery that undergirds Keane’s analysis. Keane tacitly distinguishes at least four levels of social reality. Let’s call them the physiological, the psychological, the sociological, and the anthropological. Each level emerges out of the one below it. Human culture emerges out of human interactions; human interactions depend on psychic capacities; psychodynamics are rooted in our bodily makeup. Contra the current rage for reduction, in which all the action is bottom-up, Keane assumes that higher levels can exert “downward causation” on lower ones. Cultural change (e.g., the success of feminism) can lead to interactive change (e.g., nonsexist styles of interaction), which can lead to emotional change (anger about “patriarchy” replaces resignation to “male superiority”) and even physiological change (gender goes from binary to fluid). In technical terms, Keane’s framework is “ontologically stratified.” It presumes that there really are different levels of social reality, not just in the analyst’s mind, but out in the world.
Importantly, Keane understands the relationship between these levels as one of affordance rather than determinism. Some of the affordances are “bottom-up.” For example, the human capacity for linguistic abstraction affords the creation of ethical systems (e.g., utilitarian and Kantian). Others are “top-down.” Ethical values such as equality and inclusion afford critiques of sexist interactions. In technical terms, the various levels of social reality are “loosely coupled” rather than tightly conjoined.
To this social ontology, Keane adds a moral phenomenology. Ethical life has three moments. Let’s call them “I,” “thou,” and “me.” The “I” moment is unthinking action, where consciousness is submerged in doing. The “thou” moment is empathic projection, where ego imagines the perspective of alter. And the “me” moment is critical observation of the self, where ego looks at herself through the eyes of the generalized other. Perhaps we should also add a fourth moment, a historical moment in which ego considers past actions, interactions, and selves as a prelude to further action. We could call that the “we.”
In his magisterial essay, “Religious Rejections of the World and Their Directions,” the German sociologist Max Weber painted a tragic picture of our ethical situation. In the premodern world, he lamented, life and the world were of a piece. Abraham could die in peace, knowing that he had lived a life in full. He had been blessed with wives, progeny, and property. There was nothing more to want. But “cultural beings” (Kulturmenschen) such as ourselves can never experience this sense of completion. There is always more to know and experience. Nor is that the end of the tragedy. We also live in a world of multiple and competing “value spheres”: religious, economic, political, aesthetic, erotic, and intellectual, among others. Each sphere is held together by a particular value, an “ultimate value” that demands our total devotion: salvation, success, power, beauty, pleasure, truth, and so on. What to do? Some would be dilettantes, flitting from one experience to another, collecting stories along the way. That is perhaps the dominant ethos of the present age: “YOLO!” But that was not Weber’s creed. He longed for the unity of life that Abraham had enjoyed. The only way to achieve this, he believed, was to devote one’s life to a single god, the “daemon” that seized the very fibers of one’s being. Not monotheism, the worship of the one true god, then, but monolatry, devotion to one’s own true god—that was Weber’s ethos.
What is Keane’s? He, too, paints an arresting picture of our ethical predicament, albeit a less tragic one than Weber’s. Where Weber saw multiple and competing “value spheres,” Keane sees multiple and competing cultural worlds, both past and present. Once upon a time, these worlds were separate. Some chose to visit other worlds; others did not. No more. Now, one cultural world bleeds into the next, sometimes quite literally, but more often via the global flow of people, artifacts, and ideas. We are all anthropologists now. What are we to do? Stay home? Go native? Be hybrid? Keane does not venture an answer to these questions.
However we answer these questions for ourselves, we cannot escape the phenomenological tension between the first-, second-, and third-person perspectives on ethical life. Some will seek refuge in the first person. They will seek to be “true to themselves,” to “listen to their inner voice,” and they will respond to challenges with a mix of apology and indignation. Others will immerse themselves in the second person. They will value loyalty to the “tribe,” and respond to “outsiders” with a mix of indifference and hostility. Still others—intellectuals, mostly—will take shelter in the third person. They will place a high value on toleration and acceptance, and they will respond to challenges with a phlegmatic aloofness. The problem is that none of us can stand still in any perspective for very long. The affordances of our minds and our languages, and the demands of social cooperation and interaction, will not permit it. We cannot escape ethical life. Nor can we find peace in it. That, for Keane, is our predicament.