aka “Cooperation, cultural evolution & economic development”.
Where do ‘good’ or pro-social institutions come from ? Why does the capacity for collective action and cooperative behaviour vary so much across the world today ? How do some populations transcend tribalism to form a civil society ? How do you “get to Denmark” ? I first take a look at what the “cultural evolution” literature has to say about it. I then turn to the intersection of economics and differential psychology.
[Warning: long and kind of abstract, though not technical. Edit 21 Oct 2015: ‘Denmark’ is a metaphor taken from Fukuyama. This post has absolutely nothing whatsoever to do with Denmark.]
“Cultural evolution”
There’s been a revival of cultural explanations in economics. And by coincidence, there’s a coalition of biologists, anthropologists, and behavioural economists operating somewhat outside the mainstream of their professions under the umbrella of “cultural evolution“. Most of them appear convinced that “neither psychology nor economics is currently theoretically well-equipped to explain the origins of institutions” [Henrich 2015]. In response they offer a unified theory of gene-culture co-evolution or dual inheritance theory which models ‘culture’ as a non-genetic Darwinian process. From “Culture & social behaviour”:
“[In order] to build a theory of cultural evolution capable of explaining where institutions come from, researchers have gone back to the basics, to reconstruct our understanding of human evolution and the nature of our species. These approaches … have used the logic of natural selection and mathematical modeling to ask how natural selection might have shaped our learning psychology to most effectively extract ideas, beliefs, motivations and practices from the minds of others…
This foundation then allows theorists to model cultural evolution by building on empirically established psychological mechanisms. The result is cultural evolutionary game theory [64]. This powerful tool has already been deployed to understand the emergence of a wide range of social norms and institutions, including those related to social stratification [65], ethnic groups [66], cultures of honor [67], signaling systems [68], punishment [69–71] and various reputational systems [72,73].”
‘Culture’ is defined as any information inside the mind which modifies behaviour and which got there through social learning — whether from parents, or peers, or society at large. Non-genetically inherited ‘content’ would obviously include technology/knowledge (“how to remove toxins from edible tubers”), beliefs (“witches can cause blindness”), and customs (use of knife & fork). But it also includes what economists would describe as “informal institutions”, i.e., mating systems, ethical values, social norms, etc.
Cultural-evolutionists and evolutionary psychologists (a separate academic tribe) tend to bicker over whether culture is responsible for causing the massive anthropological diversity of behaviours seen around the world. EP (generally) argues phenomena such as food taboos are ‘evoked’ by a universal innate psychology in reaction to different physical environments. By contrast CE (generally) argues behaviours vary primarily because the information that’s stored inside people’s minds varies from society to society. Our genetically evolved capacity for social learning enables us to transmit information accumulated over the generations, and this shadow of the past can override influences of the current physical environment. Without this persistent element with its own evolutionary logic, which CE calls ‘culture’, it would be inexplicable why distinct groups living in identical or quite similar environments nonetheless still behave very differently.
The subset of culture called ‘norms’ and ‘values’ is well defined here:
“…decision making heuristics or ‘rules of thumb’ that have evolved given our need to make decisions in complex and uncertain environments. Using theoretical models, Boyd and Richerson (1985, 2005) show that if information acquisition is either costly or imperfect, the use of heuristics or rules of thumb in decision making can arise optimally. By relying on general beliefs, values or gut feelings about the ‘right’ thing to do in different situations, individuals may not behave in a manner that is optimal in every instance, but they do save on the costs of obtaining the information necessary to always behave optimally. The benefit of these heuristics is that they are ‘fast-and-frugal’, a benefit which in many environments outweighs the costs of imprecision (Gigerenzer and Goldstein 1996). Therefore, culture, as defined in this paper, refers to these decision-making heuristics, which typically manifest themselves as values, beliefs, or social norms.”
Some additional points about ‘norms’ in the CE literature:
- as decision-making heuristics, norms strongly influence behaviour;
- norms are passed down from generation to generation non-genetically, much like the knowledge of making fire or wine;
- norms aggregate into ‘institutions’ at the population or group level [Ensminger & Henrich 2014];
- because norms are largely acquired and internalised before adulthood, they can be stable over a very long time, even when they seem maladaptive or adapted to past circumstances;
- but at the population level norms can change in response to new situations.
I turn now to those ‘norms’ and ‘values’ which enable cooperative behaviour.
The “Problem of Ultra-sociality”
Cultural-evolutionists have produced a large literature on what is sometimes called the “problem of human ultra-sociality“. Its main theme was popularised in a book by Paul Seabright: from an evolutionary point of view, how was it possible for the human species to go from living in small foraging bands of close relatives in the Palaeolithic to the global network of billions of anonymously interacting strangers that we see today ?
Modern societies engage in incredibly complex and incredibly large-scale forms of cooperative behaviour, with almost infinitesimal divisions of labour connected in a delicate skein of confidence and trust. Think of what it takes to get the New York Stock Exchange humming along, or the frightening amount of organisation and solidarity it took to wage the Second World War (on either side). Or, as Peter Turchin wondered, why did so many millions enthusiastically volunteer to risk death for unrelated strangers in the Great War ?
I think this is a good summary of the CE view on ‘ultra-sociality’ overall: humans have an instinct for cooperation, but the social norms of fairness evolve gradually to accommodate a wider definition of ‘insiders’ as a society gets larger and more complex.
But a related puzzle gets lost in the shuffle: why does ‘ultra-sociality’ vary so much across the world ? Why are pro-social institutions not more widespread ? The poorest countries like Afghanistan have pretty low levels of “social capability” in governance, to put it mildly. Why does the South of Italy have so little “civic capital” compared with the North of Italy ? Why aren’t most countries like Denmark ?
Direct & Indirect Reciprocity
Traditional evolutionary biologists had already worked out a couple of mechanisms by which members of some species innately cohere as cooperative groups. For example, kin selection instinctively predisposes social animals to powerfully favour close genetic relatives. This explains things like bee colonies, lion prides, and human nepotism.
Amongst unrelated, selfish people, reciprocal altruism can explain exchange and cooperation in the absence of a central authority — as long as they live in small communities where people know each other and are locked into repeated interactions over time. In such a context, the promise of future benefits and retaliation against cheating are sufficient for rational calculation to generate the “I will share my Mastodon steak with you now, assuming that you will share with me in the future” principle.
Even in populations where people don’t know each other directly, it’s still possible to generate “spontaneous order” as long as the group has high community cohesion due to ethnic or religious identity. This “indirect reciprocity” was in my opinion best illustrated by economist Avner Greif (who is definitely not part of the cultural evolution movement). Making inferences from documents deposited in the Cairo Geniza, Greif argued that informal institutions regulated commerce amongst the Maghribi Jewish traders as they conducted long-distance trade with one another in the Mediterranean during the Middle Ages. The strength of ethnoreligious ties (especially as a minority in a wider world), maintained by strong exclusion of outsiders, reproduced a village-like flow of information even within a far-flung community and enabled reputation and ostracism to be the instruments of policing.
Of course, as anyone who has read Thomas Sowell knows, such commercial minorities are abundant even today and operate effectively in corrupt countries with weak legal institutions, such as the Lebanese in West Africa and Latin America, or the Chinese in Southeast Asia. And even in countries with strong legal institutions, ultra-Orthodox Jews conduct a major international trade in diamonds without much reliance on external authorities.
But most people agree these mechanisms — kin selection and reciprocity — by themselves cannot sustain a cooperative equilibrium in much larger societies composed of strangers who may never interact more than once and are separated by great distances.
Collective action problems
Whereas the cultural-evolutionists tend to ask “how did large-scale cooperation ever get started?”, mainstream economists and political scientists ask a more abstract, ahistorical question: “how does any kind of cooperation ever happen at all ?” This line of reasoning, which Mancur Olson called the “collective action problem”, investigates any situation where the private costs of an individual action are high while the benefits of that action redound collectively to the group.
In an election, for example, voting is costly to the individual voter because he must obtain information about the candidates and physically go to the poll. But the benefits of this public good (having a legitimate, accountable government) are spread over society at large. A rationally selfish voter ought therefore to free-ride on the actions of other voters: choose not to vote, knowing that the others will vote and elect a government. In fact, many people are free riders. But if everybody acted like this, there would be no election ! So why does anyone vote at all ? There should be a failure of coordination, yet this kind of uncoordinated cooperation happens all the same.
Many things are subject to the free rider problem — the maintenance of price cartels like OPEC, political demonstrations, law enforcement, common environmental resources, taxation, and even the rule of law. All those arrangements either fail to happen or are under threat of unravelling because they are plagued with free-riders. If you have too many of them, others in the group will retaliate or imitate, and the cooperative equilibrium will collapse. (Collective action or coordination failures figure prominently in explaining recessions. But Germany may be exceptional.)
The standard solution is for the state — a third-party enforcer — to punish the louts and ensure compliance with the rules of the game. But the state has what’s called in political economy parlance a “commitment problem”: if it’s strong enough to enforce the rules of the game, then it’s also strong enough to manipulate them in its own interests, or in the interests of the powerful who control the state. So why on earth would the state act altruistically to solve the coordination problem in the public interest ? Many issues of governance — such as corruption or patronage politics — boil down, in one way or another, to the inability of a population to coordinate on an agreed-upon set of rules, because there exists no disinterested nth-order enforcer.
Yet, despite public choice theory, despite capture by special interests, we know that the state in well-functioning societies more or less acts in the public interest most of the time. In other words, somehow collective action problems get solved; somehow socially productive cooperation happens.
“Strong reciprocity”
A possible solution is “strong reciprocity“, sometimes also known as “altruistic punishment“. Behavioural economists claim to have documented the existence of this emotional instinct to engage in costly punishment of non-cooperators. In anonymous experiments intended to mimic collective action situations, strong reciprocators tend to punish free-riders, even when they are not the direct victims, and even when there is no clear or assured benefit in the future from doing so. [2nd vs 3rd party punishment] “Strong reciprocity” could be the psychological basis of the outrage that one sees in reaction to a social norm violation like, say, queue-jumping.
In the 4-player public goods game (PGG), each player is given some money and the choice to contribute any fraction of the amount (including zero) to a common pool. At the end, the total is multiplied by some factor and then divided equally amongst the players. Players only know about one another’s contributions at the end of each game. Then the game is repeated many times.
The experiment is designed so that the players, collectively, do their best if everyone contributes the maximum amount from the beginning. But an individual player has an incentive to free-ride whilst everyone else contributes. The worst collective outcome is obtained if everyone decides to free-ride.
PGG comes in two versions, with and without punishment. In the punishment version, each player is informed anonymously after each round about everyone else’s contributions and is allowed to punish whomever they deem a free-rider by deducting from the free-rider’s final take. But the punisher must pay some fraction of the punishment amount from his own take.
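To make the payoff logic concrete, here is a minimal sketch of one round of the game in Python. The endowment, the multiplier, the punishment ratio, and the crude “spend points on anyone who contributed less than I did” rule are all assumptions for illustration; they are not the parameters of the Fehr & Gächter design.

```python
# Illustrative sketch of one round of a 4-player public goods game (PGG),
# with and without punishment. The endowment (20), multiplier (1.6) and
# punishment ratio (each point costs the punisher 1 and removes 3 from the
# target), plus the crude "punish anyone who gave less than I did" rule,
# are assumptions for illustration, not the parameters of any experiment.

ENDOWMENT = 20
MULTIPLIER = 1.6                # the pooled total is multiplied, then split equally
POINTS, COST, HIT = 3, 1, 3     # punishment points, cost per point, deduction per point

def payoffs(contributions, punish=False):
    pool = sum(contributions) * MULTIPLIER
    share = pool / len(contributions)
    take = [ENDOWMENT - c + share for c in contributions]
    if punish:
        for i, ci in enumerate(contributions):
            for j, cj in enumerate(contributions):
                if cj < ci:                   # i deems j a free-rider
                    take[i] -= POINTS * COST  # punishing is costly to i
                    take[j] -= POINTS * HIT   # and costlier to j
    return take

print(payoffs([20, 20, 20, 20]))               # full cooperation: 32 each
print(payoffs([20, 20, 20, 0]))                # free-riding pays: 44 vs 24
print(payoffs([20, 20, 20, 0], punish=True))   # punished free-rider: 17 vs 21
```

Under these assumed numbers, free-riding nets the most in a single round without punishment, but once costly punishment is possible the free-rider ends up worse off than the cooperators who punish him, which is roughly why the dynamics differ so sharply between the two treatments described next.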
In a version of the game without punishment, repeating the game many times always causes cooperation to tank, because those who initially made high contributions learn about the free riders at the end of each iteration and then lower their subsequent contributions. But in the version of the game with punishment, a high level of cooperation is sustained. From Fehr & Gächter 2002:
It’s possible three stable personality types exist in repeated public goods games: free riders, “unconditional cooperators”, and “conditional cooperators” practising “strong reciprocity” or “altruistic punishment”. (Peter Turchin calls the three types ‘knaves’, ‘saints’, and ‘moralists’.)
The existence of pro-social instincts has been replicated in dozens of societies around the world — across many scales and complexities of social organisation, western and non-western, with university and non-university subjects. [More on this below; also see the separate post “Experimenting with Social Norms in Small-Scale Societies“.]
The implication is that large-scale collective action or market exchange cannot be sustained unless self-interest in some fraction of the population is constrained by an internalised ethical adherence to the rules of the game. You can’t have a society entirely composed of people whose “good behaviour” is achieved only through fear of punishment.
Whatever you think of such a view, it certainly dovetails with the thinking of classical political economists of the 18th century, as well as Hayek and neo-institutionalists like Douglass North. The latter even mentions early versions of these experiments.
Optimal institutions?
Successful legal institutions may therefore depend on some interaction between conditionally cooperative norms and formal institutions.
Once again Avner Greif (who, I swear, must think about these issues from morning to night, even in his most indelicate moments): in a series of papers on the nature of market exchange in individualist versus collectivist societies, he argues (amongst many other things) that informal institutions can be more efficient than formal ones under certain conditions.
The closed ethnic networks such as those of the Maghribi traders or of Chinese clans could be highly efficient because ethical norms and customary rules save on transaction costs. Imagine commercial contracts which don’t need to spell out every possible contingency [efficient incomplete contracts] or don’t even need to exist, because you can trust the parties to settle disputes according to long-standing custom. Or suppose you don’t have to do “due diligence” on every single transaction, because reputation counts for everything and its loss constitutes social death.
In such a world you save on the high costs of legal institutions, but it’s difficult to scale up because community cohesion may break down after a certain size. You need formal, legal rules and enforcement to conduct market exchange on the scale of millions of people.
But then the optimal arrangement is to have a large society which reproduces the cohesion of small-scale communities regulated by social norms, along with a predictable but lightly used formal enforcement system which every now and then disciplines the determined opportunists and other dickheads. Such is the world of high “social capital” as described by Putnam and writ large by Fukuyama.
This is how Putnam contrasted the North of Italy, with its high “social capital”, with the Mezzogiorno or the South of Italy:
“Collective life in the civic regions is eased by the expectation that others will probably follow the rules. Knowing that others will, you are more likely to go along, too, thus fulfilling their expectations. In the less civic regions nearly everyone expects everyone else to violate the rules. It seems foolish to obey the traffic laws or the tax code or the welfare rules, if you expect everyone else to cheat.”
One might argue that the real institutional difference between developed and developing countries is actually a “social capital” gap: there are just many more coordination failures in developing countries. Never mind countries torn by civil war. Never mind countries where the kleptocrat with a monopoly of violence does not even bother to hide his plundering. Even the political systems of minimally functioning democratic societies are still organised de facto according to segmentary lineages, with clan- and tribe-based political parties campaigning to distribute to their members the spoils of the public treasury. In societies without clans and tribes, the distributive conflict in politics is played out along ethnolinguistic or caste divisions. But even in some relatively homogeneous societies, political parties are often a system of N-party competitive distribution of public spoils, with only nominal ideological differences between the parties. Greece is an upper-middle-income country and it’s still like that.
But how do you improve a society’s collective action capacity ? How do people become more public-spirited ? How do people achieve the transition from group-specific “limited morality” to “generalised morality” ? This is effectively the same as asking how the radius of trust widens beyond small kin or clan groups in the first place.
Tribal social instincts & group competition
One answer offered by cultural-evolutionists is “tribal social instincts”.
‘Tribal’ is an unfortunate choice of words because it normally connotes group balkanisation and incoherence. But in the sense used by Boyd & Richerson, it refers to a genetically evolved predisposition to combine into groups successively larger than friends and family. ‘Tribalism’ is innate, but the specific size and manifestation of ‘tribe’ depends on social norms and cultural innovations:
“…the ways that people behave toward others can depend heavily on how those others are classified—as kin, friends, and community members or outsiders, strangers and foreigners. Second, human populations can vary dramatically in: (1) how they define closeness and distance of a social partner and (2) how these qualities of a partner influence social behavior. Third, these population differences are not fixed or static. Populations can change quite dramatically within several generations, [as with the Iban of Borneo], from hunting the heads of neighboring groups to participating relatively peacefully in a much larger nation-state and world system.” [Hruschka & Henrich]
You get parochial altruism or ethnocentrism when an expanding in-group progressively absorbs outsiders and reclassifies them as insiders. In recorded history, the most important way in-group favouritism has been extended to larger populations is through symbolic markers of common identity such as language or religion or extreme rituals. These create the glue for large associations of fictive kinship, “honorary friendship”, and imagined communities.
Then different levels of aggregation — from tribes to states to empires — are decided on the basis of intergroup competition. In short: more cooperative groups grow larger by outcompeting less cooperative groups in war.
Cooperation within a group is constantly being undermined by competition within the group — i.e., free-riding behaviour by those maximising individual fitness. That’s because kin bias and tribal bias are in tension. As Boyd & Richerson put it:
“These new tribal social instincts were superimposed onto human psychology without eliminating those that favor friends and kin. Thus, there is an inherent conflict built into human social life. The tribal instincts that support identification and cooperation in large groups are often at odds with selfishness, nepotism, and face-to-face reciprocity.”
But this tendency for groups to lose internal coherence can be counteracted by competition between groups through “multi-level selection” and “cultural group selection“. In the cultural-evolution jargon: between-group cultural variation in pro-social traits exceeds the within-group variation.
Groups whose cultural innovations — such as “high moralising gods” — tend to suppress or moderate the anti-social instincts of their members and promote the pro-social ones, grow bigger and more powerful than groups which remain riddled with group-suicidally selfish actors or subgroups. (Think: the small yet cohesive Prussia versus the large but fragmented Austria-Hungary.) Those less able to cohere as groups, all else equal, get exterminated, assimilated, or reduced to insignificance. Those wishing or able to survive imitate the cultural innovations of the successful ones.
War and violence feature prominently in the CE literature, since they have characterised so much of human history. Thus the mathematical ecologist Peter Turchin, asking “how are empires [even] possible” in the first place, models group solidarity as an endogenous variable in the rise and fall of empires between 1500 BCE and 1500 CE.
(For claims about how norms and institutions co-evolve rather more peacefully, see the separate post “Experimenting with Social Norms in Small-Scale Societies“.)
The prominence of war in the cultural-evolutionist explanations of the most ancient human institutions such as hierarchy, the state, and ethnocentrism has a clear parallel with the literature in economic history on “state capacity”, in which the rise of effective states in the early modern period is also linked to war (Brewer, Tilly, Hoffman, Dincecco 2015, etc.). It also has an obvious link with the literature in comparative historical development inspired by Jared Diamond stressing the importance of “an early start” in agriculture and state history (Bockstette et al., Borcan et al., Spolaore & Wacziarg).
Social norms and ‘stateness’
But if the long history of ‘stateness’ and agriculture, plus a long history of state-level warfare, are somehow crucial to the development of a ‘modern‘ society and state, that naturally prompts the question of how and why. It cannot be merely the long history of state experience. From Olsson & Paik, log GDP per capita in 2005 plotted against time since adoption of agriculture:
On a global scale, it’s as Jared Diamond argued: a long history of agriculture (and therefore also of the state) is a good thing, but within regions, it may not be such a good thing after all. (Statistically, this is Simpson’s Paradox.)
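Simpson’s Paradox is easy to reproduce with synthetic data: a variable can be positively correlated with an outcome in the pooled sample while being negatively correlated within every subgroup. The sketch below uses invented numbers purely to illustrate the statistical point; nothing in it approximates the Olsson & Paik data.

```python
# Synthetic illustration of Simpson's paradox: within each "region" an earlier
# agricultural start is associated with LOWER income, yet pooling the regions
# flips the sign because richer regions also happen to have started earlier.
# All numbers are invented; nothing here comes from Olsson & Paik.
import numpy as np

rng = np.random.default_rng(0)
# (mean years since agriculture, mean log GDP per capita) for three made-up regions
regions = {"A": (9000, 10.0), "B": (6000, 9.0), "C": (3000, 8.0)}

years, loggdp = [], []
for mean_years, mean_income in regions.values():
    y = rng.normal(mean_years, 800, 50)
    # within a region, an earlier start (more years) lowers income slightly
    g = mean_income - 0.0003 * (y - mean_years) + rng.normal(0, 0.1, 50)
    years.append(y)
    loggdp.append(g)

pooled = np.corrcoef(np.concatenate(years), np.concatenate(loggdp))[0, 1]
within = [np.corrcoef(y, g)[0, 1] for y, g in zip(years, loggdp)]
print(f"pooled correlation: {pooled:+.2f}")                     # comes out positive
print("within-region correlations:", [round(r, 2) for r in within])  # all negative
```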
So at this point, cultural-evolutionists often say “history matters” and reference the empirical work in mainstream neoclassical economics and other social sciences which demonstrates that “institutional shocks” in the past can leave a persistent imprint on contemporary culture. (If I’m going to get referred back to econ why did I bother to read you people again ???) This evidence sorts well with the cultural-evolutionist assumption that cultural inertia can persist for a long time.
That econ literature finds real effects, by which I mean it does establish that culture matters, social norms or cultural values do change, and this has consequences for political and economic development on the margins. But the overall effects found by this literature seem to me kind of small, or highly local, or particular to their datasets. Or in some cases it cannot rule out reverse causality, e.g., it could be economic development which results in big cultural changes, such as rising levels of trust.
I myself have little doubt different levels of trust induced by the experience of communism, for example, play a role in the various disparities between western and eastern Germany. Nor do I doubt different (cultural) preferences for leisure and work contribute non-trivially to the US-European differences in work hours. And it makes perfect sense to me that the strength of family ties explains political attitudes which influence the demand for labour market regulations in northern versus southern Europe. So different institutional paths chosen thanks to differences in culture and history do matter.
But as the gap gets bigger between countries I grow more sceptical about how powerful ‘culture’ — as defined above — can really be in explaining the massive global variation in state capacity and economic development.
There are always clever ways of magnifying the impact of culture by modelling culture-institutions-geography traps. Thanks in part to ancient biogeographic conditions, a particular cultural group is able to set up institutions reflecting their prevailing norms in a newly settled country; the ‘bad’ institutions persist and persist and persist, because this group controls the resources and has no incentive to make a “credible commitment” to “inclusive institutions” or an “open access order”; and because the ‘bad’ institutions persist, pro-market norms fail to emerge and the ‘bad’ norms reinforce the ‘bad’ institutions. There are like 3 or 4 tangled traps in there.
So is ‘good’ governance just a matter of time, a long time, but still a matter of time ? Jerry Hough and Robin Grier have argued just that in a recent book which actually inspired this post. That book also makes a cultural evolution argument in which state capacity, markets, and “rational-legal values” co-evolve. Or is ‘bad’ governance an issue of timing mismatch, when democracy arrives before a strong state, as Fukuyama has repeatedly argued ? Interestingly, both Fukuyama and Hough & Grier point to the corruption-ridden United States of the 19th century as an important piece of evidence.
I think the “really long time” argument has some validity, but that just prompts the question: why have some countries — those of East Asia come to mind — had such rapid and dramatic institutional revolutions ?
I don’t find plausible theories for answering that question in cultural evolution. So I turn to the intersection of economics and differential psychology.
§ § § § §
Intelligence and Cooperation
In the workhorse model of (non)cooperation — the prisoner’s dilemma — two players are faced with the decision to cooperate or defect based on a matrix of 4 possible payoff combinations.
Suppose a sedentary peasant would end up with $4 (the loot + his own output) if he ambushed and robbed a passing nomad on horseback, but only $3 if he traded goods with him. The nomad faces the identical decision: $4 from robbing, $3 from trading. If both decided to rob, they would be left with $2 each.
It’s set up so that each has a perfectly rational self-interest in robbing the other, but the trading world is clearly better than the both-turn-to-robbery world.
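The payoff structure can be written out directly. Note that the $1 “sucker” payoff for trading with a robber is my own assumption to complete the matrix (the post gives only three of the four numbers); any value below the mutual-robbery payoff of $2 preserves the dilemma.

```python
# One-shot prisoner's dilemma from the peasant/nomad example.
# T(rob a trader)=4 > R(both trade)=3 > P(both rob)=2 > S(trade with a robber)=1.
# The sucker payoff S=1 is an assumed value; the post gives only T, R and P.
PAYOFF = {  # (my move, their move) -> my payoff
    ("trade", "trade"): 3, ("trade", "rob"): 1,
    ("rob",   "trade"): 4, ("rob",   "rob"): 2,
}

for their_move in ("trade", "rob"):
    best = max(("trade", "rob"), key=lambda mine: PAYOFF[(mine, their_move)])
    print(f"if the other side plays {their_move!r}, my best reply is {best!r}")
# Robbing is a dominant strategy for both, yet mutual trade (3, 3) beats mutual robbery (2, 2).
```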
It’s well known from simulations of “infinitely repeated” prisoner’s dilemma games that cooperation is best sustained when both players adopt some variety of conditional cooperation strategy: cooperate first, but then copy the other player’s earlier choice. [Axelrod] The gains to both parties are bigger in the long run if both parties behave like that. And, as we have already seen, real people do in practice show an instinct for conditional cooperation.
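A sketch of how that plays out, reusing the same payoff table: tit-for-tat against tit-for-tat earns the mutual-trade payoff every round, while “always rob” collects the temptation payoff once and the mutual-robbery payoff ever after. The strategies and the fixed round count are the standard textbook setup, not a reproduction of Axelrod’s tournament.

```python
# Repeated prisoner's dilemma: tit-for-tat (trade first, then copy the
# opponent's last move) versus unconditional robbery. Same payoff table
# as the sketch above; the 10-round horizon is arbitrary.
PAYOFF = {("trade", "trade"): 3, ("trade", "rob"): 1,
          ("rob",   "trade"): 4, ("rob",   "rob"): 2}

def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else "trade"

def always_rob(opp_history):
    return "rob"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)   # each sees the other's past moves
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))   # (30, 30): mutual trade every round
print(play(tit_for_tat, always_rob))    # (19, 22): defection gains little
print(play(always_rob, always_rob))     # (20, 20): the both-rob world
```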
But Proto, Rustichini & Sofianos (2014) demonstrates that in a multi-round Prisoners’ Dilemma experiment with actual human participants and real money, the intelligent are much more likely to practise conditional cooperation.
In this experiment, subjects were first administered Raven’s Progressive Matrices, a test which measures fluid intelligence (i.e., not based on knowledge). They were also tested for risk attitudes and the Big Five personality traits. In the end, 130 participants were allocated to two groups — “high Raven” and “low Raven” — and the only statistically significant difference between the two groups was in fluid intelligence. The participants did not know how they were grouped. (Edit: Also relevant: “Participants in these non-economists sessions had not taken any game theory modules or classes.”)
Then within each group, different pairs of participants repeatedly played the prisoner’s dilemma — the maximum number of rounds was 10 but a computer decided whether to terminate the session after each round with a fixed probability. There were multiple sessions of these rounds of games.
[Blue: high Ravens group, Red: low Ravens group. X-axis: each period represents 10 rounds; Y-axis: the fraction of players cooperating.]
The high Raven group not only diverged from the low Ravens, but also sustained cooperation much longer. There actually wasn’t much difference between the two groups in the early rounds: the gap grew incrementally, in dribs and drabs, but in the end it was substantial. This suggests the high Ravens learnt the optimal behaviour from the previous rounds better than the other group.
Proto et al. sliced and diced the data in various ways and found that:
- reciprocation is much stronger with the smart: high Ravens are more likely than the low Ravens to match prior cooperation with cooperation in kind, and to punish prior defection with defection in kind;
- reaction times — the time it took to decide whether to cooperate or defect — were shorter and declined faster for the high Ravens;
- the only statistically significant difference in individual participant characteristics was fluid intelligence;
- when the monetary payoffs were manipulated to make cooperation less profitable in the long run, the high Ravens were no more cooperative than the low Ravens — if anything, the low Ravens were slightly more cooperative !
But the most amazing result has to be this: “Low Raven subjects play Always Defect with probability above 50 per cent, in stark contrast with high Raven subjects who play this strategy with probability statistically equal to 0. Instead, the probability for the high Raven to play more cooperative strategies (Grim and Tit for Tat) is about 67 per cent, while for the low Raven this is lower (around 45 per cent).”
Understanding the benefits of working together in complex situations — which is what a repeated prisoner’s dilemma simulates — implicitly requires reasoning skills, the ability to learn from mistakes, the ability to anticipate, and accurate beliefs about other people’s motives.
The ethical implication: the intelligent are more likely to practise the Golden Rule, and this actually breeds trust; and the less intelligent are more likely to think they can get away with it, and this breeds mistrust. You only need intelligence to generate this difference. You can immediately see where social and civic capital might come from, at least in part.
The Proto et al. study replicates and extends a few earlier studies on intelligence and cooperation (Al-Ubaydli et al. 2014; Jones 2008). Moreover, it’s consistent with cross-cultural findings from Public Goods Games as described below.
Anti-social punishment
Earlier I mentioned that in the Public Goods Game there were possibly 3 stable personality types: free riders, unconditional cooperators, and conditional cooperators who punish free-riders. But when the PGG was conducted in 16 different cities around the world, Herrmann, Thöni, & Gächter (2008) reported the existence of “anti-social punishment”.
That is, in addition to people actively punishing free-riders, in some cases there were people actively punishing cooperators ! Another interesting thing is that subjects in both Seoul and Chengdu started at fairly low levels of cooperation but rapidly converged to Northwest European levels of cooperation:
Anti-social punishment kills cooperation in non-western countries except East Asia. Hmmm. What’s different about the Koreans and the Chinese ?
When the punishment option was entirely removed in a separate treatment, cooperation levels plummeted across the board and even the rank order changed to some extent:
Those Danes remained ever true to stereotype, but what a fall for most other NW Europeans ! This suggests pro-social punishment is important to sustaining collective action in western countries.
Interestingly, the authors found in their regressions that the “rule of law” indicator was not a statistically significant predictor of pro-social punishment, but it was strongly inversely correlated with antisocial punishment. That is, weak rule of law and anti-social punishment tend to go hand in hand. It’s probably a feedback effect.
Patience and Cooperation
Patience also matters to whether cooperation is sustained. Low time preference is correlated with intelligence [Shamosh & Gray], but the association is not so big that you can’t have many smart people who are more impatient than some less intelligent people. So patience is a quasi-separate factor in cooperation.
Intuitively, if the benefits of cooperation accrue in the long run, then less patient and more impulsive people are less likely to cooperate. In the repeated prisoner’s dilemma, the degree of ‘patience’ is represented by discounting the total gains in each period after the first one. In the nomad-peasant example above, if both were impatient, the $6 cooperative payoff in (say) the 9th round would only be worth (say) $4, because a dollar in the future is not worth the same as a dollar today. But for more patient people, the future payoff will be closer to $6.
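Taking those illustrative dollar figures at face value, the arithmetic is just exponential discounting: a payoff t rounds ahead is worth δ^(t-1) times its face value today. The post’s “$4” corresponds to a per-round discount factor of roughly 0.95; a more patient δ of 0.99 keeps the 9th-round payoff close to $6. The two δ values below are mine, chosen only to match the illustration.

```python
# Present value of the joint $6 trade payoff received in round t, under
# exponential discounting with per-round discount factor delta.
# The two delta values are illustrative.
def present_value(payoff, t, delta):
    return delta ** (t - 1) * payoff

for delta in (0.95, 0.99):
    print(f"delta={delta}: the 9th-round $6 is worth ${present_value(6, 9, delta):.2f} today")
# delta=0.95 gives about $3.98 (the post's "$4"); delta=0.99 gives about $5.54.
```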
But the Stag Hunt may better highlight the role of patience in cooperation. It’s similar to the prisoner’s dilemma, but the payoffs to cooperation are obviously superior to those of non-cooperation (defection). The choice is between cooperating to hunt stag (which requires team effort) or going it alone chasing rabbits. The scenario is set up so that non-cooperation is riskless — a sure thing for both parties — whereas cooperation is risky because you don’t know whether the other party will come through, and the big payoff absolutely requires both to participate.
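Here is a standard stag-hunt payoff table with invented numbers that respect the structure just described: rabbit is a riskless sure thing, the stag pays off only if both show up, and mutual stag-hunting beats everything else.

```python
# A stag hunt with illustrative payoffs: rabbit is a riskless 3 no matter
# what the other player does; stag pays 6 only if both hunt it, else 0.
STAG_HUNT = {  # (my move, their move) -> my payoff
    ("stag",   "stag"):   6, ("stag",   "rabbit"): 0,
    ("rabbit", "stag"):   3, ("rabbit", "rabbit"): 3,
}

# Unlike the prisoner's dilemma there is no dominant strategy: my best reply
# depends entirely on what I expect the other player to do.
for their_move in ("stag", "rabbit"):
    best = max(("stag", "rabbit"), key=lambda mine: STAG_HUNT[(mine, their_move)])
    print(f"if I expect {their_move!r}, my best reply is {best!r}")

# If I believe the other hunts stag with probability p, stag is worth 6*p
# against a sure 3 from rabbit, so I join the hunt only when p > 0.5.
```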
Many social situations are more like a stag hunt than a prisoner’s dilemma, and intuitively you would think taste for risk would predict cooperative behaviour in such cases. But in fact Al-Ubaydli, Jones & Weel (2013) finds that patience is the most important element in successful cooperation in Stag Hunt experiments with real people. The study participants had been tested for patience, intelligence, risk aversion, and personality traits; and “risk aversion and intelligence have no bearing on any aspect of behavior or outcomes at conventional significance levels”.
[There are other interesting details to this study, such as increasing returns to pairs of patient players, but it would take too much space to describe.]
Need I say that populations vary substantially in patience ? Wang et al. (2011) asked subjects in 45 countries how much they valued offers of money now, in the near future, in the medium future, and in the far future. Based on their answers, the authors computed discount rates for each group.
But people tend to be “time inconsistent”, i.e., they can be very patient about long-term rewards, but simultaneously very impulsive about things right in front of them. If your future self tearfully regrets that your past self was such an idiot as to consume 5 slices of cake after each meal, then you are a “hyperbolic discounter”.
Wang et al. found that countries and regions vary a lot more in their present bias than in their long-term discount factor:
The above says: how people in the present value rewards they expect to receive (say) 10 years into the future is pretty similar across the world — although small differences can make a big difference in the long term through compounding. But the degree to which people want things right now, as opposed to tomorrow, varies quite dramatically.
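The usual way to separate these two margins is the quasi-hyperbolic “β-δ” model: an immediate reward is taken at face value, while a reward t periods away is valued at β·δ^t of face value, with δ capturing long-run patience and β < 1 capturing pure present bias. I am reading Wang et al.’s β as that parameter; the β and δ values in the sketch below are illustrative, not estimates for any country.

```python
# Quasi-hyperbolic ("beta-delta") discounting: money received now is worth
# its face value; money t periods away is worth beta * delta**t of face value.
# delta is long-run patience, beta < 1 is pure present bias. The beta/delta
# values below are illustrative, not estimates for any particular country.
def value(amount, t, beta, delta=0.9):
    return amount if t == 0 else beta * delta ** t * amount

for label, beta in [("mild present bias", 0.95), ("strong present bias", 0.30)]:
    v0, v1, v10 = (value(100, t, beta) for t in (0, 1, 10))
    print(f"{label:>20}: now={v0:.0f}  1 period={v1:.1f}  10 periods={v10:.1f}"
          f"  (10-vs-1 ratio={v10 / v1:.2f})")
# Both profiles trade off one future date against another identically
# (that ratio depends only on delta), but they differ sharply in how any
# delay at all is penalised relative to 'now': that is what beta measures.
```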
By the way, Russia’s β is 0.21 !!! If that has nothing to do with low investment rates or insecure property rights for foreign companies, then I will eat my shorts !
The role of patience in cooperation is relevant to the “commitment problem” of the state in solving collective action problems. In theorising about the origins of the state, Mancur Olson gave a famous answer with his dichotomy of roving bandits and stationary bandits. In the world of political anarchy, roving bandits fight one another for opportunities to pillage the productive peasants. But sometimes one of them defeats all the others and establishes himself as a “stationary bandit”. He then acquires a strong intrinsic interest in restraining his plunder — his ‘taxation’ — in order to let the economy grow. It’s the “fatten the goose that lays the golden eggs” principle.
But that depends ! If the stationary bandit is impulsive and impatient, he can remain a predator for a very long time.
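A back-of-the-envelope version of the Olson logic, with all numbers invented: a moderate tax lets the economy grow, confiscatory plunder shrinks it, and which policy maximises the bandit’s discounted haul depends entirely on his discount factor.

```python
# Olson's stationary bandit as a patience problem (all numbers invented).
# "Fatten the goose": a 20% tax lets output grow 5% a year, while plundering
# 80% shrinks output 15% a year. Which policy pays more over a 30-year reign
# depends on the bandit's discount factor.
def discounted_haul(tax_rate, growth, delta, output=100.0, years=30):
    total = 0.0
    for t in range(years):
        total += delta ** t * tax_rate * output
        output *= 1 + growth
    return total

for delta in (0.5, 0.95):
    fatten = discounted_haul(0.2, +0.05, delta)
    plunder = discounted_haul(0.8, -0.15, delta)
    better = "fatten the goose" if fatten > plunder else "plunder"
    print(f"delta={delta}: fatten={fatten:.0f}, plunder={plunder:.0f} -> {better}")
```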
Political scientist Carles Boix in a recent book pointed out that the reciprocity of stateless foraging societies cannot be sustained when the distribution of resources is too unequal. But even his model depends on ‘patience’, with the implication that uncoordinated cooperation is still possible with more inequality as long as people are patient enough. This is actually true of models using prisoner’s dilemma and stag hunt in general. Even Acemoglu‘s ruling elite with vested interests in maintaining “extractive institutions” would have incentives for “inclusive institutions” if they were only patient enough.
§ § § § §
So to answer the question at the head of this post, “where do pro-social institutions come from?” — if ‘bad’ institutions represent coordination failures, then intelligence and patience must be a big part of the answer. This need not have the same relevance for social evolution from 100,000 BCE to 1500 CE. But for the emergence of ‘modern’, advanced societies, intelligence and patience matter.
It’s not that people’s norms and values don’t or can’t change. They do. But that does not seem enough. Solving the most complex coordination failures and collective action problems requires a lot more than just ‘good’ culture.
I am not saying intelligence and patience explain ‘everything‘, just that they seem to be an important part of how ‘good’ institutions happen. Nor am I saying that intelligence and patience are immutable quantities. Pinker argued in The Better Angels of Our Nature that the long-run secular decline in violence may be due to the Flynn Effect:
“…the pacifying effects of reason, and the Flynn Effect. We have several grounds for supposing that enhanced powers of reason—specifically, the ability to set aside immediate experience, detach oneself from a parochial vantage point, and frame one’s ideas in abstract, universal terms—would lead to better moral commitments, including an avoidance of violence.”
What is the above describing, other than the increasing ability of people to empathise with a wider circle than friends and family ? Intelligence and patience allow you to understand, and weigh, the intuitive risks and the counterintuitive benefits of collaborating with perfect strangers. With less intelligence and less patience, you stick to what you know: the benefits, intuited over a long time, of relationships cultivated through blood ties or other intimate affiliations.
Your “moral circle” is wider with intelligence and patience than without.
In the 1990s, in the middle of free-market triumphalism, it was widely assumed that if you let markets rip, the institutions necessary for their proper functioning would ‘naturally’ follow. Those with a vested interest in protecting their property rights would demand them, politically. That assumption went up in flames in the former communist countries and the developing countries under economic restructuring.
To paraphrase Garett Jones, one of the co-authors of the stag hunt study: smart, patient people are more Coasian; they find a way to cooperate and build good institutions.
PS: It’s not out yet so I don’t know what’s in it exactly, but based on his papers that I’ve read, I strongly recommend Jones’s forthcoming book, Hive Mind. And according to the table of contents now available at Google Books, it will cover the aforementioned “Political Coase Theorem” territory.
Edit-Note: The Proto et al. and the Al-Ubaydli et al. studies were, indeed, conducted with university students in WEIRD countries (the UK and the USA, respectively). The Wang et al. study was conducted with economics students in 45 countries. The Herrmann et al. study covered 16 different cities in western and non-western countries. None of these is intended as definitive evidence of anything. Yet they all strongly suggest intelligence and patience generate cooperative behaviour. Future studies like these — especially with regard to intelligence — will certainly be carried out in more diverse societies. Besides, Henrich et al. have already conducted one-shot ‘trust’ or ‘sharing’ game experiments in dozens of small-scale societies around the world.