War. What is it good for?
In a 1986 episode of The Twilight Zone, aliens come to Earth and express their deep disappointment with, as they put it, humanity’s “Small Talent for War.” [SPOILER] Assuming that the aliens mean that they are frustrated with our species’ never-ending squabbles and conflicts, the Earthlings frantically put together a peace plan to assuage the menacing invaders… who reveal that the Terrans have misunderstood. The aliens seeded the planet to create warriors, not pacifists. Annihilation ensues.
While the human capacity for warfare disappointed the aliens, by some metrics, humans are pretty warlike. Very few other species—on our planet, anyway—cooperate with non-kin to form coalitions to attack and even kill other members of their species. There are exceptions, such as chimpanzees and dolphins, but they are rare.
In 1988, shortly after the airing of the episode of The Twilight Zone, though I don’t think the two events were related, my former mentors, John Tooby and Leda Cosmides, released a paper that tried to explain this zoologically odd human propensity. War is especially puzzling because despite the fact that it’s potentially fatal to the participants, across time and cultures, many humans have been eager participants. That paper got some attention but, until recently, didn’t appear in a scholarly outlet. More than three decades on, some of these ideas have finally been published, in Evolution and Human Behavior. In the authors’ note1 explaining the history of the paper, Cosmides writes (her italics):
I remember the shock and wonder on John’s face when he grasped the most counter-intuitive implication of this model: that motivations to initiate a coalitional attack could be selected for if certain conditions were met, even if the probability of death was high.
In neither paper do Tooby and Cosmides offer a mathematical model to defend this surprising idea, but it’s possible to build a tiny one with very limited math. Working through the model helps solidify the intuition behind it as they described it. The rest of this post has more math than usual, but only basic algebra. If you don’t want to work through it, you can skip to End of Model, below.
Ok. Here goes.
Imagine, if you will, a competitive lottery.
In this game there are two teams, Red and Blue. One team will win the lottery, sharing the prize money, and the other team will lose. To determine who wins, there will be a coin flip. For the moment, we’ll assume it’s a simple coin flip, so each team wins with a probability of 0.5.
Now, there’s a twist. Before the coin flip, each person must also first roll a die. The outcome of the die roll applies only to the person who rolls it. The rule for the die is simple: if you roll a one, you are eliminated from the lottery. If you roll anything other than one, nothing happens.2
Once everyone on both teams has rolled their dice and the ones have been eliminated, the coin is flipped. Only the people on the winning team who escaped elimination share in the prize.
Now, the die is not necessarily your run-of-the-mill six-sided die. Instead, it is one of those dice that you see used in games such as Dungeons & Dragons. There are dice with four sides, six sides, eight sides, ten sides, twelve sides, twenty sides, even a hundred sides. If you want to see some of these dice in action, I recommend a trip to The Twenty-Sided Tavern on Broadway.
Suppose that the prize is large relative to the entry fee. Maybe it costs $1 to enter, there are 100 people per team, and the prize is $1,000.
It’s fairly straightforward to determine whether this is a good game to play: We simply compare the cost of entering ($1) with the expected value of entering.
Holding aside the die for the moment, the expected value is the chance of winning, 0.5, multiplied by the value of the prize, $1,000, divided by the number of people on the team, 100. That’s ($1000 * (0.5))/100 = $5. So entering this game is an excellent decision. It costs only $1 to enter but your expected value is $5. You won’t get odds like that in Vegas.
But now we have to add the die. Let’s say the die has N sides. Let’s compute the expected value of the game now. Before you read on, ask yourself how N affects how much you would want to play. Would you rather play with a high, low, or medium risk of dieing (sic)?
First, let’s figure out how many players survive. We roll a die with N sides. Only one roll kills you, rolling a one, so you survive on N-1 of the N possible rolls. So the odds of staying alive are the number of rolls that you survive divided by the total possible number of rolls: (N-1)/N. That makes sense. If N is, say, 100, on average only one person per hundred will roll a 1, so a team of 100 will have 99 survivors. So now we can easily represent any given player’s chance of surviving. We’ll use p() to denote probabilities:

p(survive) = (N - 1)/N
This lets us figure out the number of survivors. It’s just the chance of surviving multiplied by the size of the team, which we’ll take to be T. We’ll use E() to refer to an expected value.

E(survivors) = T * (N - 1)/N
Ok, now how much does each survivor get? They get the prize divided by the expected number of survivors. Let K be the size of the prize. The expected share of a survivor is just K divided by the expected number of survivors. Putting in the expected number of survivors from above, the expected share of a player is:

E(share) = K / (T * (N - 1)/N)
This is the expected share if your team wins.
Now to compute our expected value, we have to take into account how likely we are to win and how likely we are to survive. So the formula here is just the probability of my team winning, p, multiplied by the probability of surviving (which, you’ll recall, is (N - 1)/N), multiplied by my share of the prize if my side wins and I survive, which we derived above.
That gives this ugly thing:

EV = p * ((N - 1)/N) * K / (T * (N - 1)/N)
Multiplying the top and bottom of the rightmost term by N, to get rid of the fraction in the denominator, gives us:

EV = p * ((N - 1)/N) * (K * N) / (T * (N - 1))
Now we can just get rid of those Ns. This leaves us with:

EV = p * K * (N - 1) / (T * (N - 1))
Now, happily, (N - 1) can be cancelled out in the top and bottom, so we are left with this elegant result:

EV = p * K / T
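Since the cancellation step is easy to fumble, here is a quick numeric sanity check. The function name full_ev and its argument names are mine, not from the paper; it evaluates the pre-cancellation formula for several die sizes and shows that the answer never depends on N.

```python
def full_ev(p, K, T, N):
    """Expected value of entering, using the pre-cancellation formula:
    p * ((N-1)/N) * K / (T * (N-1)/N)."""
    survive = (N - 1) / N            # chance of surviving the die roll
    share_if_win = K / (T * survive)  # prize split among expected survivors
    return p * survive * share_if_win

# Same game for every die: p = 0.5, K = $1,000, T = 100 players.
for N in (4, 6, 20, 100):
    print(N, full_ev(0.5, 1000, 100, N))
```

Every row prints the same expected value, p * K / T, whatever the number of sides on the die.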
End of Model
In words, the amount I’ll get from participating is the chance my side has of winning multiplied by my share, which is the size of the reward divided by the size of the team.
So, in our example above, we had a 0.5 chance of winning, a $1,000 prize, and 100 people on a team, so that’s 0.5 * ($1000/100), which is $5, so that checks out.
Importantly, notice that N, the number of sides of the die, has dropped out!
From this we see that if everyone is rolling a die, it doesn’t matter how high the odds are of rolling a one. Six-sided die, hundred-sided die, it doesn’t matter. The die roll doesn’t change the expected value from the standpoint of any given player of the game because everyone has to roll it—that is, the higher the likelihood of dying, the higher the payout if you survive.
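If you’d rather check this by brute force than by algebra, here is a small Monte Carlo sketch. The function name, parameters, and trial count are mine; the game itself is exactly the one described above: roll your die, count the surviving teammates, flip the coin, split the prize.

```python
import random

def simulate_ev(die_sides, team_size=100, prize=1000.0, p_win=0.5, trials=60_000):
    """Monte Carlo estimate of one player's expected payout in the lottery game."""
    p_die = 1.0 / die_sides
    total = 0.0
    for _ in range(trials):
        if random.random() < p_die:        # our player rolled a one: eliminated
            continue
        # The other team_size - 1 teammates roll too; count who survives.
        survivors = 1 + sum(random.random() >= p_die
                            for _ in range(team_size - 1))
        if random.random() < p_win:        # coin flip: did our team win?
            total += prize / survivors     # split the prize among survivors
    return total / trials

print(simulate_ev(die_sides=6))
print(simulate_ev(die_sides=100))
```

Both estimates land near $5, the p * K/T value from the algebra, whether the die is brutal (six sides) or gentle (a hundred).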
Now let’s think about this in the context of warfare.
If members of groups are contemplating going to battle, this analysis suggests, from a game theoretic point of view their decision should ignore the chance of dying as long as the critical conditions hold. These critical conditions are that the prize is large relative to the price of entry, the chance of winning is reasonably good,3 and, crucially, that A) the risks of death are the same for everyone and B) the prize is shared equally.
Now, these assumptions are not absolute. As long as the prize is large enough, entering the game might still be advantageous even if the winnings aren’t divided exactly evenly. Relatedly, the odds of death don’t have to be exactly equal for everyone. But as those odds get much worse for some players, those players will be less likely to want to play.4
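As a toy sketch of what happens when the equal-risk assumption is relaxed (this extension is mine, not the paper’s; I keep the even split of the prize among survivors and simply give half the team a much higher death risk, with numbers invented for illustration):

```python
import random

def ev_for_player(i, death_probs, prize=1000.0, p_win=0.5, trials=20_000):
    """Monte Carlo EV for player i when death risks differ across players.
    Assumes the prize is still split evenly among survivors on the winning team."""
    total = 0.0
    for _ in range(trials):
        alive = [random.random() > d for d in death_probs]
        if not alive[i]:                   # player i died; no payout
            continue
        if random.random() < p_win:        # team won; split among survivors
            total += prize / sum(alive)
    return total / trials

# 50 "front line" players with a 50% chance of dying, 50 with only 10%.
risks = [0.5] * 50 + [0.1] * 50
```

With these numbers, a high-risk player’s expected value comes out around $3.5 while a low-risk player’s is around $6.4. Once the entry cost exceeds a player’s expected value, that player should decline to play, which is the intuition behind footnote 4.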
In any case, this analysis suggests that there are conditions under which evolution can be expected to have selected for a preference, even eagerness, to go to war, despite the risks of mortality.
This analysis might shed light on why warfare is zoologically rare. For nearly all other species, the assumptions A and B are not met. It might be difficult for members of a group to ensure that everyone bears the same risk—perhaps some hang back a little bit, others just stay home—and even more difficult to enforce the division of the spoils. After all, alphas in most groups take what they want. If the alpha is going to get all the stuff after a fight, then the subordinates should be selected to resist participation.
We, however, are different. We humans punish free-riders. In war, the penalty for cowardice or desertion has historically often been death. And it’s probably not a coincidence that people in the military wear uniforms. This cultivates the sense that everyone shares the uniform odds of death even though, especially in modern militaries, in which fighting is at a distance, this might not be the case.
In short, the system breaks down unless there’s a way to ensure that everyone rolls the same die, or at least is perceived to. This is why enforcement of risk-sharing and very strong punishment of desertion and cowardice are essential. In evolutionary terms, mechanisms for detecting and punishing cheaters were not optional extras—they were load-bearing components of the entire structure.
This model might be applicable at both fine and coarse levels. Peter Leeson's The Invisible Hook is a fine read and provides a vivid historical illustration. Pirate crews, operating outside the reach of state institutions, faced collective risks—combat, capture, betrayal—and had to devise systems that incentivized cooperation under mortal threat. Just as the lottery game's expected value hinges on a fair distribution of reward among surviving participants, pirate ships established explicit rules governing how booty was to be divided. These rules ensured that shares were distributed as agreed to beforehand—often in proportion to rank and role—reducing the risk of post-victory conflict and making participation in violent raids a rational gamble.
Leeson's analysis reinforces the importance of the two assumptions in the war-lottery framework: shared risk and predictable division of spoils. Pirate crews enforced both with vigor. Combat roles were not optional and deviation from agreed-upon norms (e.g., hiding buried treasure, cowardice) was punished swiftly, often fatally.5
This analysis also explains why leaders are keen to announce that victory is guaranteed, even if history goes on to prove them wrong. As we have seen, the expected value of entering the lottery depends on the chance of victory. Knowing this, we should expect leaders to motivate the troops to battle by spinning this probability as high, even certain. We are psychologically disposed to think our side will win if we feel there are many of us; after all, historically numbers have conferred a key advantage in war. This fact probably helps to explain why leaders cultivate the sense of there being many warriors prepared to fight. This comes in the form of rallies, parades, and the creation of great din. As Tooby & Cosmides put it:
Coalitions of males, when they assess the relevant variables indicating that they are larger or more formidable than any local competing coalitions, should appear to manifest an eagerness and satisfaction in initiating warfare and an obliviousness or insensitivity to the risk they run as individuals...
If potential participants judge the chance of victory to be better than even, perhaps certain, then they should be even more eager to fight, even in the face of a higher chance of death.
Finally, what are the spoils of war, and why is it that men rather than women have historically been the ones to go to war? The two questions are related. Historically, and from an evolutionary point of view, the benefits of warfare were reproductive females, the scarce resource that is the bottleneck on men’s reproductive success. The reverse is not, of course, the case; women’s reproductive success does not increase in the same way by gaining sexual access to more males.
The prize from warfare over evolutionary history, to say nothing of the more modern history of conflict, was access to females. Recall that the conditions for the evolution of the kind of coalitional psychology Tooby and Cosmides envision, an appetite for war even in the face of a risk of death, depend on the prize being sufficiently large relative to the potential costs, which include the risk of death. Over evolutionary time, that benefit had to be in the coin of reproductive success. Over the eons, groups of men have been fighting, killing, and dying for the highest of fitness stakes.
The authors’ note begins as follows: John Tooby presented this paper, “The evolution of war and its cognitive foundations”, in 1988 at the Evolution and Human Behavior Meeting (a precursor to HBES) in Ann Arbor, Michigan. At that time there were few, if any, places you could publish a theoretical paper like this, so it has existed ever since in our files as Institute for Evolutionary Studies Technical Report 88-1. Once PDFs came into existence, we posted it on the Center for Evolutionary Psychology website; somehow people found it, to judge from its citation footprint.
Yes, the set-up evokes Squid Game a bit.
There’s nothing special about a 50-50 chance of winning. The odds of winning could be lower than that and it could still be worth participating, depending on the values of the other variables.
I won’t derive what the entry conditions look like if these assumptions are relaxed. If you want to try it for yourself, upload this post to an AI and use something like the following prompt: “Extend the analysis in this essay, relaxing the assumption that the chance of death is equal for all players and the assumption that the rewards are distributed evenly. Assume that these asymmetries are known ex ante. Specify the decision rule for a player, i, who has perfect knowledge of their own risk of death, their own share of the reward in the case of a win, and the distribution of reward share and death risk for all players.”
The bits about the right of parlay in the Pirates of the Caribbean films draw on this historical pattern. There is a line in one of the films that goes: "Jacques, you silly fool. I fear you have made a grave error. Don't you know any better than to try and conceal booty from your comrades? That's an offense punishable by death..."
https://link.springer.com/article/10.1007/s11186-025-09620-8
Which is one of the reasons why the evolutionary model of the actor is a good social science theory!