Defining the Problem & The Problem of Definitions
What do "cooperation" and "morality" mean?
If you and I are interested in horses, and by “horse” I mean the animal cowboys ride but you mean the animal we get milk from, you’re going to claim horses say “moo”—correctly, from your perspective—and I’m going to claim horses say “neigh”—again, correctly—but you and I are not going to get anywhere. We’re both right, but we’re not really arguing with each other.
To start with definitions is not to be picky or pedantic. Pinning down terms is the first step in figuring out where, exactly, the disagreement lies.
Not long ago, I published a somewhat lengthy piece in Aporia, where I put some essays that work better for that outlet than they do here on Fossils. In it, I addressed work by Oliver Curry and Jon Haidt. In both cases, we seem to disagree. I very much respect both scholars, which makes me interested in the question: what is the source of our disagreement?
After the Aporia essay came out, Oliver Curry took the time to have a little back-and-forth with me on Twitter/X. From this discussion, I think I have a much better understanding of where and why we disagree about morality.
First, I’ll lay out my perspective, and then I’ll present his views to highlight the differences. To me, from a scientific standpoint, there are, on the one hand, observations—phenomena—and, on the other hand, theories that purport to explain those phenomena. To begin to develop an explanation, the first step is to specify as precisely as possible the set of phenomena in the world that one is trying to explain. This typically entails creating a definition. What, specifically, are the things in the world that you are trying to explain? When you say “cow,” can you tell me what you are referring to, so I can evaluate your theory about mooing?
Now, I’m not saying that this is easy. In fact, a key point in the piece about altruism is that definitions are hard. Still, it’s a crucial first step, as illustrated by the horse/cow issue. If we’re not pointing to the same thing, then we can’t compare our explanations for it.
Cooperation is a good place to start because Curry and I seem to agree, more or less. I once collaborated with the biologist Stu West on a piece about the evolution of altruism, and Stu’s definitions are taken by many to be the gold standard in the field. West and colleagues published a piece in 2006 noting that, with respect to social behavior, “a huge theoretical and empirical literature has developed on this topic,” but that “progress is often hindered by poor communication between scientists, with different people using the same term to mean different things, or different terms to mean the same thing” (emphasis mine). Truer words were never said.
In that piece (the Glossary, Box 1), they write: “Cooperation: a behaviour which provides a benefit to another individual (recipient), and which is selected for because of its beneficial effect on the recipient.”1
So far so good.
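As a point of orientation, it may help to see the standard classification in this literature (following Hamilton), which sorts social behaviors by their fitness effects on actor and recipient. A rough sketch, in my notation rather than a quotation from West:

```latex
% Hamilton-style classification of social behaviors, as used in the
% West tradition; signs denote effects on lifetime reproductive success.
\begin{tabular}{l|cc}
            & Recipient $+$  & Recipient $-$ \\ \hline
  Actor $+$ & mutual benefit & selfishness   \\
  Actor $-$ & altruism       & spite         \\
\end{tabular}
```

On this scheme, as I read it, “cooperation” spans the left-hand column (mutually beneficial and altruistic behaviors alike), provided the benefit to the recipient is part of why the behavior was selected.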
The other word at stake is morality, fraught because even the dictionary assigns it multiple meanings. My collaborator Peter DeScioli and I tried to lay out our understanding of “morality” (see Section 2 of this paper). We take the key phenomenon to be explained to be what we refer to as a dimension: the human capacity to evaluate actions as “wrong” (and therefore deserving of punishment) versus “right.” We view these judgments as the key empirical moral phenomenon. So we are pointing to cases in which someone judges, and perhaps says, “stealing is wrong.” Why do humans perceive certain acts to be “wrong” and label them as such?
Now, as I say, even the dictionary provides multiple meanings of the word morality. One meaning, for instance, is about virtue, or goodness. So, because of the cow/horse issue, I asked Curry in our discussion for his definitions of cooperation and morality. This is important because, for my purposes, I’m not interested in virtue; it’s a cow, not a horse.
Oliver’s reply (thread link) was: “I define cooperation as traits/behaviours2 that realise some *mutual* benefit in a non-zero-sum game. So, 'mutualism' (as opposed to 'egoism' or 'altruism').” He followed this with: “These cooperative traits are what philosophers and others have called 'morality'. This includes making cooperative (i.e. moral) decisions, and also applying the same cooperative criteria to judge the cooperativeness (morality) of others.”
Just to be sure I had that right, I asked: “Are you saying ‘morality’ is DEFINED as ‘those traits that philosophers have labelled cooperative’… So morality = ‘cooperative traits’?”
He confirmed that I had this right: “Well yes, 'moral' is just another word for 'cooperative.’ Hence 'morality as cooperation.’ Cooperation explains the phenomena that philosophers and others have called morality.”
From this, you can see that he and I are using words differently, which explains much of the disconnect.
First, when he says “cooperation explains…,” this diverges from how I use the word. To me, cooperation is a phenomenon to be explained, not something that explains other phenomena. This is a very basic disconnect in word usage, and by itself it is enough to derail communication.
Second, he is saying that his definition of morality is cooperative traits. If one has a definition of a word, then one can simply substitute the definition in for the word. That’s what a definition is.
Now, Curry’s core claim is that “morality is for cooperation,” the idea behind his theory of “morality as cooperation.” So if “morality = cooperative traits,” then the claim that “morality is for cooperation” becomes, by substitution, “cooperative traits are for cooperation,” which isn’t a claim at all; it’s a tautology. Similarly, if one understands that “moral” is another word for “cooperative,” then the “theory of ‘morality as cooperation’” becomes “the theory of cooperation as cooperation.”
From my perspective, the claim that “cooperative traits are for cooperation” isn’t an explanation; it’s circular. If one has something that purports to be a theory, as Curry styles his “Morality as Cooperation” (MAC) view, then it can’t simply be a tautology.
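To make the structure explicit, here is the substitution in schematic form (the notation is mine, not Curry’s):

```latex
% Schematic of the substitution argument (my notation, not Curry's).
\begin{align*}
  \text{Definition:}   \quad & \text{morality} := \text{cooperative traits}\\
  \text{Claim:}        \quad & \text{``morality is for cooperation''}\\
  \text{Substitution:} \quad & \text{``cooperative traits are for cooperation''}\\
  \text{Status:}       \quad & \text{true by definition; a tautology, not a testable claim}
\end{align*}
```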
Just to be sure I had it right, I said: “That claim (in itself) can’t be wrong or tested because morality is/for/as cooperation by definition: morality is whatever traits cooperation theories explain.”
He replied that “…the theory that 'morality is cooperation' can be tested…It could turn out that only a small part of morality is cooperative…”
Again, look what happens if we substitute his definition in for “morality”:
“It could turn out that only a small part of morality is cooperative…” becomes:
“It could turn out that only a small part of cooperative traits [are] cooperative.”
How can a cooperative trait not be cooperative?
Unifying the Language & Finding Disagreement
From all of this, having thought it over, I think I’ve made progress in understanding the issues.
First, as I say, we are using words differently. Until we can agree on a common language, I’m afraid we are not going to be able to figure out how to make progress. For my part, I just don’t see how using the word “cooperation” to mean “morality” or using the word as an explanation constitutes a clear way to proceed.
Second, I think it will turn out that we differ in what I would call our optimism. Let me explain.
Let’s call the set of theories that purport to explain design for traits that benefit others—the theory of kin selection, reciprocal altruism, etc.—theories of prosociality. Theories of prosociality are explanations for a set of phenomena (e.g., cooperation, altruism).
Similarly, let’s take the basket of traits, behaviors, and so forth that people have, at one point or another, referred to using the word “moral.” So this is a set of phenomena, stuff you can observe in the world.
My way of putting what I think Curry is saying would be: all phenomena referred to using the word “moral” can be explained by theories of prosociality. That is, he is claiming that whatever stuff philosophers and other scholars have observed and wanted to call moral, all of those phenomena can be explained with an existing theory that was developed to explain cooperation.
Maybe. Consider the following story.
Mike and John swim to a deserted island after their cruise ship goes down in a tragic accident. They reach the beach, breathless.
“Thank god we made it,” John exclaims.
“Indeed… hold on… which god?” Mike asks.
John answers. Mike grabs a large stone, saying: “Sinner! Infidel! You are a moral abomination, and I am going to moralistically punish you for your sacrilegious beliefs!” They struggle, and both die from their injuries.
Can one of the theories of cooperation—mutual benefit—explain that interaction? It seems to be a moral one: a moral rule about which god to believe in was broken, and the transgression was punished. Can a theory that explains mutual benefit explain mutual annihilation? I mean… maybe?
Similarly, will theories of prosociality explain the principle of double effect and why it’s OK to switch tracks but not to push the guy in the trolley problem? Maybe.
If that’s right, then science already has all the theoretical tools it needs to explain all the phenomena that people have discussed using the word “morality.”
I talk about John Tooby more than I used to, probably because of his recent death. When I was in grad school, I told him that the ideas of evolutionary psychology we were working with were so powerful that I wondered whether the end of the social sciences was finally in sight.
If you knew John, you would have been very familiar with the belly laugh that evoked.
“One percent. Tops,” he said. “We’ve figured out maybe one percent of human behavior. If you’re worried about running out of research to do, my advice is find something else to worry about.”
Maybe Tooby was wrong. Maybe our theories about what people use the word “morality” to describe are so good that we don’t need any more ideas, no new explanations. That’s possible.
Generally, my experience was that it was never a good idea to bet against Tooby. So that’s not where I’d put my money.
But whichever way we go, a top priority is to find some way to agree on terms.
1. Look, it’s complicated. Note that here cooperation is defined as a behavior. Elsewhere, in one piece, Stu (and collaborators) say: “Cooperation is defined as any adaptation that has evolved, at least in part, to increase the reproductive success of the actor’s social partners.” Note the switch from behavior to adaptation. Not a huge deal, just saying. I should also add that I put a little survey in the piece about cicadas. Are the cicadas cooperating? The results of the survey were nearly split 50/50. Note that according to this definition, the cicadas are not cooperating. The definition turns on why the trait was selected. In the cicada case, when a cicada comes out after 17 years, it isn’t because of the beneficial effect on other cicadas; it’s because emerging then confers a benefit on the cicada itself. That’s the cause. The benefit to others is a side effect. Therefore, from the Stu West perspective, the cicadas are not cooperating with each other.
2. He uses British spelling, which I leave here. I tidied up some punctuation, spacing, and such in our respective posts.