Hiking Tales and the Function of Morality - Coordination Part V
Also, how not to be eaten by a pride of lions.
Hike the First
Once upon a time, Anaconda, Beaver, Crane, and Duck were hiking.
All four were very human-like, with an important exception: They had no “moral sense.” They didn’t know what it even meant to say something was “wrong.”
But, other than that, basically human.
Anaconda and Beaver were best friends, as were Crane and Duck, the latter two united in their common bird-dom.
They were traveling together through the forest when Beaver managed to knock a juicy mango from a tree, catching it neatly with one hand. Just as Beaver was about to eat it, Crane grabbed the mango and went to take a bite.
“Give that back!” Beaver exclaimed, baring his large teeth.
“I will not,” replied Crane.
Anaconda, Beaver’s friend, took a menacing step forward toward Crane.
Duck, Crane’s friend, stood firmly behind him.
In a flash, blood was spilled and chaos reigned as the two pairs of best friends fought one another.
Eventually, all four got up and resumed their hike. Later, Crane finished roasting a varmint he had hunted, and Anaconda grabbed it just as it was done cooking. “Give that back!” Crane said, with the same result as before: mayhem.
By the end of the hike, a series of two-versus-two melees left them weak and defenseless. A pride of lions hunted them down, killing all four.
This story is titled, The Problem of Evenly Matched Fights.
Hike the Second
Once upon a time, Anaconda, Beaver, Crane, and Duck were hiking.
All four were very human-like, with two important exceptions. First, they had no “moral sense.” They didn’t know what it even meant to say something was “wrong.” Second, they couldn’t form friendships or coalitions, long-lasting stable alliances that could coordinate and cooperate.
They were traveling together through the forest when Beaver managed to knock a juicy mango from a tree, catching it neatly with one hand. Just as Beaver was about to eat it, Anaconda grabbed the mango and went to take a bite.
Beaver looked at Anaconda, eyes wide.
“Give that back!” Beaver exclaimed, baring his teeth.
“I will not,” replied Anaconda.
Anaconda looked meaningfully at Crane and Duck, jerking his head slightly to indicate they should get behind him. They paused for a moment, as Anaconda’s visage darkened. Crane and Duck hustled behind Anaconda, fists up menacingly, facing Beaver.
For the rest of the hike, no one other than Anaconda bothered to hunt or gather, knowing that Anaconda would take anything they got. The three other hikers made do with the scraps that Anaconda left after eating.
While all made it to the end of the hike, all but Anaconda were quite tired from a lack of food, and the four were killed when attacked by a pride of lions.
This story is titled, The Problem of Dictatorships.
Hike the Third
Once upon a time, Anaconda, Beaver, Crane, and Duck were hiking. Anaconda and Beaver were best friends, as were Crane and Duck. All were very human-like, with no odd exceptions to animate the hypothetical.
They were traveling together through the forest when Beaver managed to knock a juicy mango from a tree, catching it neatly with one hand. Just as Beaver was about to eat it, Crane grabbed the mango and went to take a bite.
“That’s mine!” Beaver exclaimed, baring his large teeth, attempting to grab the mango back.
“Not anymore!” replied Crane, holding on firmly.
“Now Crane,” Duck chimed in, “You know that the moral rule that we follow dictates that taking is wrong. Therefore, although you and I are the very best of friends, I must side with Beaver in this instance. It was wrong of you to take—really, steal—the mango.” Anaconda nodded solemnly. Outnumbered three to one, knowing he would lose if it came to blows, Crane returned the mango, peaceably, to Beaver.
“As you also know, the penalty is that you carry Beaver’s backpack for a mile due to your wrong behavior.” Both Anaconda and Duck nodded solemnly.
All made it to the end of the hike well-nourished, uninjured from fights, and they were able to fend off a pride of lions with no injuries.
This story is titled, Morality Works For Choosing Sides and Can Prevent Fights.
Hike the Fourth & Fifth
Ok, I won’t work through the whole narrative again, but now suppose the hikers, before leaving, agreed on two rules. One is “things you gather are yours and others may not take them” and the other is “things you hunt are community property and you must share them.” Then the mango was the property of whoever picked it, but the varmint was shared, according to those rules. In both cases, a fight was avoided because all four hikers chose sides using those two moral principles about property rights. That story highlights that groups can wind up with any number of different rules, or equilibria in the language of the prior posts on coordination. Voilà. We just explained why morality exists and, at the same time, why there is moral variation.
Consider another hike in which all four people agree that whenever a conflict emerges between two hikers, they will flip a coin to decide who the two uninvolved hikers will back. Sometimes Crane keeps the mango, sometimes Beaver does. But no matter what, after the coin toss, it’s three against one, and the one’s best move is to back down, being outnumbered. In this way, even fights are avoided. This might seem odd, but trial by ordeal and trial by combat are historical analogs. These ordeals and duels serve as coordinating equilibria: everyone sees who won, and so everyone takes the same side, allowing the dispute between the parties to be settled without factional violence.
Side-Taking
Previously, I suggested that moral contents—incest is forbidden, intentional harm is forbidden, etc.—are equilibria in a coordination game. Here, I’m (finally) saying what the game is: let’s all get on the same side when conflicts emerge. Because evenly-matched fights are costly to members of a social group, moral rules allow observers to back one party in the conflict and oppose the party that (allegedly) broke the rule. This is how conflicts were settled in the vignette above. I’m suggesting that this is why moral judgments exist and this is the source of the payoffs on the equilibria in the coordination games we’ve seen in this series. In the stories above, observers to the conflicts avoided the costs of fights while, at the same time, avoiding being dominated by an alpha (as in Hike the Second).
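The side-taking game can be made concrete with a small sketch. The payoff numbers below are illustrative assumptions, not from the post: two observers each choose which disputant to back; if they match, the dispute is settled three-to-one (the outnumbered party backs down, at no cost to observers), and if they split, a costly, evenly matched two-versus-two fight ensues.

```python
# A minimal sketch of the side-taking coordination game (payoff
# values are assumptions for illustration, not from the post).
FIGHT_COST = -10   # payoff to each observer in an even 2-vs-2 fight
SETTLED = 0        # payoff when the dispute is settled without a fight

def payoff(my_side: str, other_side: str) -> int:
    """An observer's payoff given both observers' choices ('A' or 'B')."""
    return SETTLED if my_side == other_side else FIGHT_COST

def is_equilibrium(side1: str, side2: str) -> bool:
    """True if neither observer gains by unilaterally switching sides."""
    flip = {"A": "B", "B": "A"}
    ok1 = payoff(side1, side2) >= payoff(flip[side1], side2)
    ok2 = payoff(side2, side1) >= payoff(flip[side2], side1)
    return ok1 and ok2

for s1 in "AB":
    for s2 in "AB":
        print(f"back {s1}/{s2}: equilibrium={is_equilibrium(s1, s2)}")
```

Both same-side profiles (back A/A, back B/B) come out as equilibria, while the split profiles do not. On this picture, a shared moral rule, or a public coin flip, is just a device for selecting which of the matching equilibria everyone plays.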
All of this turns on the idea that evenly-matched fights are dangerous for all involved and that cost constituted a selection pressure over the course of human evolutionary history. I’m not going to try to prove this. I can’t. I can offer some evidence.
First, this is what we observe in one-on-one fights in the non-human animal world. As I discussed previously, the reason animals display is so they can avoid a fight. If one is clearly going to win, it’s in the interest of the one that would lose to retreat, so it does. Things only get violent when the animals can’t determine who would win. Then they have to fight. Human history has seen similar patterns. When an outnumbered foe surrenders, it is an acknowledgment that the outcome is already decided and there is no longer any point in fighting.
The second kind of evidence is from social networks. Peter DeScioli and collaborators have shown that human social networks are structured so that if people who are not directly involved in a conflict side with their closer friend when conflicts emerge, the result tends to be evenly matched fights. Of course, there have been a large number of social network structures over time, so we can’t know how many of them led to evenly matched conflicts and how many were more like dictatorships.
Relatedly, it might not have been the case that, over evolutionary time, evenly matched fights represented a large enough cost to drive the evolution of moral judgment as a solution.
That uncertainty is why the evidence about morality itself matters.
Coordination and Moral Judgment
So how does the coordination view square with the evidence?
Previously, I suggested that the three key features of morality were that moral judgment itself is a human universal, that some rules are universal, or nearly so, and that there is tremendous variety in moral rules across time and place.
Morality is universal because it is part of our evolved human psychology, designed for coordinating. Morality is a component of the human species-typical architecture. It’s not an invention of a more general social learning capacity. It’s there to serve a function to do with coordination. In order to serve its function, everyone has to have the same underlying psychology. And so we do.
The coordination view also explains why there are some universal rules. In particular, there are many rules prohibiting things people don’t, by and large, want to do, such as commit incest. Why is such a taboo needed? The universal disgust toward certain kinds of acts creates a strong pressure toward the equilibrium that bans those things. Relatedly, the coordination view explains why there are ubiquitous rules against unprovoked intentional harm: because everyone can be harmed, people support such rules when they are proposed. Finally, the coordination view also might help to explain why there is substantial cross-cultural agreement even on fairly complex moral dilemmas, though additional work is needed in this emerging literature.
The coordination view also explains why morality is an open system with cultures showing robust moral diversity. It explains why people are able to mint new rules about practically any action. In order to coordinate, whether for side-choosing or some other function, there must be rules to cover all the circumstances in which they might be used. So, rules for all possible contexts must be created, similar to what we saw in the discussion of rules of sports. New technologies and contexts open up new avenues of conflict. Moral rules must proliferate in response.
Beyond these three features of moral judgment, the coordination view also explains a crucial and mysterious phenomenon, nonconsequentialism, the idea that people don’t judge how wrong an act is solely by looking at the consequences of the act. This point is most famously made by hypotheticals such as the Trolley Problem. Across different hypotheticals, the consequences are kept constant as people are asked if it’s ok to cause the death of one to save five. But people have very different judgments about whether it’s wrong depending on how the one person’s death comes about. If morality were there just to help us keep others alive, it would always be right, not wrong, to kill one to save five.
But that’s not what we see. Our moral sense isn’t focused only on the consequences of what others do, but on the actions. Actions are like traffic lights, helping observers move together: don’t commit battery—an action—even if it leads to the greater good. The point of the rule is to get people on the same side, and it can do that by specifying actions, which ways of causing harms/deaths are permissible and which are not, no matter how baroque the hypothetical. It doesn’t have to work in a way that will save the most lives. It’s not there for the potential victims. It’s there to benefit third parties to the act.
Indeed, because of nonconsequentialism, moral rules sometimes undermine mutual gain. All rules that prohibit mutually consensual and beneficial exchanges, for example, suppress, by definition, mutual gains. Many people and cultures, for example, prohibit the consensual sale of sex. (Oddly, in the U.S., it’s OK to pay someone to have sex with another person if you record it. This is now “pornography” instead of “prostitution.”) Similarly, in the U.S., it is legal to sell some body parts—hair, eggs—but not others, such as a kidney. Our moral sense suppresses certain kinds of cooperation.
In addition, punishments have, historically, been wildly out of proportion to offense. During the Inquisition, simply having a belief in one’s head that was not the one favored by those in power resulted in torture and execution. In the U.S., as has recently been discussed, the Puritans meted out harsh punishments, including death, for sins to do with one’s beliefs. In parts of the world today, fashion choices lead to brutal moralistic punishment. Recently, as has been increasingly widely discussed, having a same-sex preference in some cultures is cause for death. It would be a big mistake to think our culture has moved past massive moralistic punishment. Holding aside, for example, draconian drug laws and “three strikes” laws, informal punishments for minor transgressions—even accidental ones—are meted out regularly by internet “moral” mobs, destroying livelihood and lives.
These excesses are hard to square with a deterrence view, but easy to understand from a coordination perspective because it doesn’t really matter how much punishment is prescribed as long as there is agreement about it. As we’ll see in an upcoming post, intuitions about punishment are likely to be proportional to judgments of wrongness, but are free to vary as long as everyone orders offenses similarly.
In sum, the coordination view explains why there is agreement on certain kinds of rules. It’s easy to get behind a rule banning things you don’t want to do, as we saw in the taboo cases. At the same time, it explains variety in rules. For things that don’t matter much, or at least not as much as sex, money, and property, equilibria can be arbitrarily diverse, as we see in food taboos, restrictions on art, clothing, beliefs, and so on. Maybe you can’t eat a mango on a Wednesday with a full moon. Maybe you must do so. The coordination view says that both are plausible equilibria. Almost anything a human can turn into a sentence about what another human did, thought, or didn’t do, can be an equilibrium.
This view also explains why there are conflicts over what rules to use. Thinking about moral rules as equilibria allows us to think about different equilibrium selection processes, as I discussed in the prior post on coordination. People have an interest in different rules, so of course there are fights over them.
The Moral Dimension
From my perspective, stepping back, the key finding that the view explains is the most fundamental aspect of morality: that humans hallucinate the moral dimension itself. Sure, there are some who think that morality is “out there,” to be discovered, like water or methane.1 Maybe there will be a scientific advance and it will turn out to be true that eating fish on Friday is, as a matter of fact about the natural world, a sin.
But I don’t think so. Humans imagine that acts exist along a spectrum from OK to very wrong, and we create this dimension in our minds, a shared illusion that acts have moral weight. We are “correct” to the extent—and only to the extent—that everyone else thinks that the act has that moral weight. That’s all it means for one person to be right that something is wrong: other people in their group have the same belief.2
But Hume had it right. There is no “wrongness” intrinsic in actions to be discovered. There is the world of is that is to be discovered, and a world of ought to be invented. We can create a world in which taking mangoes is wrong. We can create a world in which not sharing mangoes is wrong. In neither case did humans make a discovery about which act, stealing or refusing to share, is wrong. These creations are useful, coordinating side-taking, reducing costly fights.
Similarly, sure, we might all agree incest is wrong, but is the human brain projecting that dimension on to it? We can discover that inbreeding leads to offspring with lower average reproductive success. In fact, humans did just that. In contrast, we cannot “discover” that incest is a mortal sin.
This projection is central to the functioning of the moral sense. If I am to take the same side as others, then we must share beliefs about what is wrong.
Even if those beliefs are more constructions than deductions.
1. I find myself in agreement with nearly everything that Sam Harris has to say. Yet somehow he and I seem to differ on the idea that “science can determine human values.” My guess is that our difference is terminological because I agree with the idea that science can tell us important things about how to increase aggregate happiness and reduce aggregate suffering.
2. I’m not endorsing a strong version of moral relativism here. I believe some norms are “better” in the sense that they lead to a world with more happiness and less suffering. My view is that it is coherent to hold these two beliefs: 1) there is no sense in which “murder is wrong” is true in a way that parallels “water consists of two hydrogen atoms and an oxygen atom” is true, and 2) people will have greater overall well-being in a society that holds that “murder is wrong.” I also might note that this approach to what is “moral” is parallel to the approach I take in the series on power. People have power in virtue of others’ beliefs.