You know what’s hard? Calculus. Finding meaning in life. Playing Eruption on the guitar.
Thinking about the concept of function really isn’t that hard.
Or, at least, it shouldn’t be.
Recently, Scott Alexander at Astral Codex Ten wrote a post entitled, “Come On, Obviously the Purpose of a System is not What it Does.”
I see a parallel between his point and how to think about function from a biological perspective.
Ok, first, let’s briefly review what Alexander is up to. He’s clearly irritated with some claims people are making about what certain political systems or institutions are for. So, one of his examples is the following (silly) claim: “The purpose of the New York bus system is to emit four billion pounds of carbon dioxide.” He says, correctly in my view, that such claims “are obviously false.”
He's interested more broadly in the slogan "the purpose of a system is what it does" as it gets used in political contexts. In the example above, the slogan is, in essence, a way to say that the people who designed the New York bus system are trying to destroy the planet rather than trying to get people to work.
Alexander is pointing out something that he takes to be obvious: complicated systems often fail and have unwanted side-effects.
In the abstract, the slogan “the purpose of a system is what it does” is a claim about an inference. It’s saying you can go from observations—what a system does—to the intentions of the people who created it. New York buses emit exhaust. That’s the observation. The claim is that you can, from that, infer that the people who created the system intended that result.
Alexander is saying that one is not, in fact, entitled to draw that inference.1 In the language of logic, the inference from observed effects to function is logically blocked. If you see an outcome of some system or process, it might be plausible that the effect is the one that it was designed to bring about, but it's not necessarily the case.
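To put the point in bare logical form (a minimal sketch, with D standing for "the designers intended effect E" and E for "the system produces effect E"; the notation is mine, not Alexander's): even if intent guarantees the effect, observing the effect does not deliver the intent. Concluding that it does is the classic fallacy of affirming the consequent.

\[
(D \rightarrow E) \land E \;\nvdash\; D
\]

The premise says design would produce the effect; the observation says the effect occurred; the conclusion D simply does not follow, because E can have other causes.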
Pretty much anything that has a function—whether a political system, transportation network, or biological trait—produces side-effects. This is just a logical consequence of existing in the physical world, which is a complicated place. The New York City bus system produces tons of carbon dioxide as "an unfortunate side effect" of its intended function, moving people from one part of the city to another. SEPTA buses, in my city of Philadelphia, have the side-effect of waking me up because I live along the Route 40 line, but I guarantee you that this was not the intention of the builders of the SEPTA system.
In the domain of policy, unintended consequences abound. To take one of my favorite examples, consider the case in which Delhi was concerned about the problem of venomous snakes. The government put a bounty on cobras, which led to some helpful hunting of cobras but, at the same time, some much less helpful breeding of cobras so the breeders could present the snakes for the bounty. Worse, when the bounty was lifted, the now valueless cobras were released into the city. The bounty was emphatically not intended to lead to more cobras, although that is what it actually did. This result also illustrates that many problems are hard to solve and solutions to them often fail. Just because a bounty is designed to reduce the number of snakes doesn’t mean that it will.
In short, snake policies and transportation networks have many effects. Only a subset of all of those effects are the intended ones. Yes, if the design of the policy is poor, then the desired effects might wind up being rare. Policy is hard. Getting people to do the socially beneficial thing is hard. Moving 10 million people to work is hard.
Many evolved traits, just like systems designed by humans, also have functions, and the problem of distinguishing functions from side-effects is just as crucial. You can't infer the function of a trait by looking only at what that trait does. This is a point that biologists of a certain stripe have been arguing about ever since Darwin, but especially since George Williams wrote about it in Adaptation and Natural Selection. If a fox walks through freshly-fallen snow, it's just obviously true that you can't infer from the tracks that fox paws are for making snow prints. The purpose of the fox leg system is not revealed by what it does after it snows: making fox prints.
Recall from the post on altruism the case in which an insect uses its wings to fly around and crashes into a spider web, becoming a meal. If you were a sincere member of "the purpose of a system is what it does" crowd, you would say that the purpose of insects and their wings is feeding spiders.
This is just obviously silly.
The reason is the same. The complex system that we call "a fly," just like the complex system we call "the Metropolitan Transportation Authority of New York City," creates tons of unintended consequences, or side effects. Because they do, the inference from results to intent/design is blocked.
Evolutionary biology provides powerful tools to help us infer the function of a trait. The OG, George Williams, emphasized the work you had to do to draw that inference, saying that the claim of adaptation was an "onerous concept." Strong claims require strong evidence. In the case of establishing evolved function, biologists have many tools, not the least of which is seeking evidence of design, that the form or shape of the adaptation follows the putative function. The shape of a sperm cell, with its tiny package of DNA and a tail to get it moving, is powerful evidence regarding its function. In addition to shape, there are other kinds of evidence that can be brought to bear on establishing function, including evidence to do with genes, development, and even maps.
Figuring out how to establish adaptation was hard. Williams was a genius. But figuring out that simply seeing the effect of a trait does not license the claim of function seems, well, sort of obvious.
So, what is the problem? Why would someone say that you can infer the function of a system when you see its effects?
Here are three possibilities, listed from least to most likely, in my opinion.
1. Alexander and I are wrong. It is hard.
Maybe understanding that this inference from "does" to "for" is blocked is actually hard.
Some evidence for this view comes from the late biologist Stephen Jay Gould.
He and his collaborator Dick Lewontin published a famous paper with the somewhat weighty title, "The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme." The paper discussed spandrels, little triangles formed when arches come together. People used to decorate these spaces—what they do is look pretty—and Gould helpfully pointed out that one couldn't infer that's what they are for. You might think from this that so many people were confused about the issue that Gould and Lewontin had to clarify matters.
Except… did they?
I mean, Gould and Lewontin sort of knew that they were building a straw man. In the paper, they wrote that “some evolutionists will protest that we are caricaturing their view of adaptation. After all, do they not admit …a variety of reasons for non-adaptive evolution?”
Why yes, yes they—we—do. Not only do evolutionists admit these reasons, we insist on them, as any number of people have pointed out any number of times.
So, look, if a luminary such as Gould felt that he had to carefully explain that you couldn’t go from observed trait to purpose/function, maybe that fact implies that it is hard to figure out that the inference is blocked.
But, again… is it?
In his book The Intentional Stance, philosopher Dan Dennett argued that humans have good intuitions about things with functions and can naturally assume what he called the "design stance" to understand them. In the case of a structure such as the eye, you don't need a doctoral degree to infer that it's for seeing. The eye has features that make it exquisitely well designed for seeing: a transparent lens, photoreceptors on the inside, and a collection of parts finely tuned for taking in light and building an image. The properties of the eye scream its function. Similarly, when we see something shaped like a spoon, we infer it's probably for scooping. We might notice that we can use it to hold down a napkin in the breeze, but we know the manufacturer didn't have that in mind when they made it.
We can often do a good job of distinguishing what something was made for as opposed to what it actually does when the wind is blowing.
Everyone pretty easily understands that carbon dioxide produced by buses is just a side effect, just like the shape you get when arches come together.
In many cases, inferring the function of a trait isn't that hard, but biologists are usually properly conservative about their guesses when things are less clear. Generally, it's easier to infer what structures such as the eye are for as opposed to behavioral traits, which are often a bit murkier.
We humans have pretty good intuitions about what traits are for but, at the same time, recognize that in many cases, our intuition isn't enough to justify the inference.
2. Alexander and I are right but some people are just galactically dumb.
Even easy things are hard for stupid people.
When I was a foolish young man, I recall indulging in some recreational activities that left me, for want of a better term, subcortical. On one of these occasions, I was in the passenger side of a car and the driver stopped to get gas. She asked me if I would be so kind as to do the honors.
I looked back at her and she quickly realized that, in that particular moment, I was not able to figure out what the first step of the process might be. (It would have been to unbuckle my seatbelt, I would later figure out.)
Pumping gas is not hard.
Just then, for me, it was.
3. Alexander and I are right but Certain People are simply not interested in logic or the truth.
I think this option is the most likely.
In a prior post, I quoted the late, great John Tooby, and I like this quote so much that I think it bears repeating:
[A] belief’s relationship to the truth is secondary to its probable impact on the social coordination of the group… and its impact on the belief-holder’s approval by her coalition or community - John Tooby
The fact is that most people in most contexts would much rather endorse a false belief that makes them a good team member than adopt and endorse a true one that makes them a bad team member. In parallel, most people would rather draw a blocked inference that makes them a good team member than acknowledge that the inference is blocked, making them a bad team member.
And that, ultimately, is what Alexander is witnessing. The crowd that draws on "the purpose of a system is what it does" is probably not making the mistake for either of the first two reasons above. They know bus systems are for commuting, not polluting.
Alexander is pointing out a kind of rhetorical sleight-of-hand in which the failures of a system doing something difficult and complicated are rebranded as intended outcomes—where the mere existence of an undesirable consequence is taken as evidence that it's the goal of the system. Your hospital didn't cure every cancer patient? Must not have been trying. Your public transit system emits carbon? You hate polar bears.
I think there's a more sinister aspect. They do this because they want to use the tactic to attack their foes. Moral attacks carry greater weight if the attack includes the idea that the alleged perpetrator intended the bad outcome. Stepping on your foot accidentally might be a bit bad, but stepping on your foot intentionally is worse.
The "what it does" folks are saying not only that those bad people on the other team who created a mass transit system created lots of carbon dioxide (and so should be punished a little) but also that they did it intentionally (and so should be punished a lot).
I know, I know, “never attribute to malice that which is adequately explained by stupidity.”
But, I mean…
If Alexander is right, and it’s just obvious that the purpose of a system is not what it does, then what are we to make of the people who (say they) think that it is?
Sometimes when children are playing, a limb accidentally goes this way instead of that way and Johnny takes an elbow to his forehead. Hurt and maybe even a little embarrassed, Johnny puts his little arms on his hips and whines accusingly, “you did that on purpose!”
It’s not true, of course, but the thing about children is that they tend to be quite childish.
1. This idea, about a blocked inference, is a cousin of the better-known caution that correlation does not logically entail causation, neatly illustrated by this xkcd cartoon. If you observe a correlation between A and B, that observation, in itself, does not afford the inference that A caused B or that B caused A.
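Since the footnote mentions correlation and causation, here is a minimal sketch in Python (the variable names and coefficients are invented for illustration) showing how a lurking common cause can manufacture a correlation between two variables that never touch each other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy model: a common cause C drives both A and B.
# A and B never influence one another, yet they end up correlated.
n = 10_000
c = rng.normal(size=n)             # the confounder
a = 2.0 * c + rng.normal(size=n)   # A depends only on C
b = -1.5 * c + rng.normal(size=n)  # B depends only on C

# Strongly negative correlation, despite no A->B or B->A link.
print(np.corrcoef(a, b)[0, 1])     # roughly -0.74
```

Observing the correlation alone cannot distinguish A causing B, B causing A, or C causing both—which is exactly the sense in which the inference is blocked.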
I don’t disagree with anything you’ve written. Yes, most complex systems have negative externalities and unintended side effects. It would be wrong to suggest that these are the purpose of the systems. But this is not where I hear the phrase used, at least when used appropriately.
The phrase is correct as an explanation of systems that intend one thing and get something very different but that are then defended and promoted anyway. Examples are billion-dollar defense systems that don't work, or million-dollar "low cost" housing units. In both cases, and hundreds of others, rent-seeking coalitions have taken over the worthy goal of the organization (defense, or housing vagrants) and converted it into a money machine for their own purposes.
Somewhere along the line the noble goal was hijacked, and the system, which was designed to do one thing, instead does something totally different.
Perhaps it's more useful to talk about costs and benefits rather than the "function" of an adaptation. The brain, for example, consumes a great deal of energy (cost) but enables complex inference (benefit). An adaptation persists only when its benefits outweigh its costs over repeated interactions.
This framing also helps to make sense of why people believe things that are evidently false. As Tooby says, holding such beliefs can still be adaptive. The benefit is increased group cohesion, while the cost of misrepresenting reality depends on the environment. (Note the asymmetry: the benefit is steady, while the cost is contingent.)
If this holds, it suggests that in evolutionary terms group cohesion often outweighs the costs of factual inaccuracy.
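To make the commenter's asymmetry concrete, here is a toy calculation in Python (the payoff numbers are invented, not taken from the post or from Tooby): the cohesion benefit accrues on every interaction, while the cost of misrepresenting reality is paid only when the environment actually punishes the error.

```python
def expected_payoff(cohesion_benefit: float,
                    reality_cost: float,
                    p_reality_bites: float) -> float:
    """Net payoff per interaction for holding a false-but-cohesive belief.

    The social benefit is earned every time; the cost of being wrong
    is paid only with the probability that reality intervenes.
    """
    return cohesion_benefit - p_reality_bites * reality_cost

# Forgiving environment: errors are rarely punished, so the belief pays off.
print(expected_payoff(1.0, 5.0, 0.1))  # 0.5 per interaction
# Harsh environment: errors are punished often, and the belief becomes costly.
print(expected_payoff(1.0, 5.0, 0.5))  # -1.5 per interaction
```

Under these made-up numbers, the false-but-cohesive belief wins exactly when the environment is slow to punish factual error, which is the commenter's point about group cohesion often outweighing accuracy.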