Channel: pseudoerasmus

“Experimenting with Social Norms” in Small-Scale Societies


Social norms, institutions, and economic development. (A companion post to “Where do pro-social institutions come from?”)

Although the main focus of “cultural evolution” research seems to be on the big picture of how humans, culturally, went from the “Lascaux caves to Goldman Sachs”, it may have some relevance for explaining the wealth and poverty of nations today. And behavioural experiments are a crucial part of the cultural-evolutionists’ ambition to discover the psychological primitives behind “human ultra-sociality” and social institutions.

The vast majority of such experiments to date have been conducted at university labs with students in developed countries, with the results having a pronounced WEIRD bias. In order to take a close, micro-level look at the global diversity of cooperative norms, researchers led by Joe Henrich have been conducting experiments in many small-scale societies and comparing them with reference populations in developed countries.

Ensminger & Henrich 2014, which replicates and extends findings from Henrich et al. 2004, documents the variation in ‘fairness’ norms within diverse societies around the world. As Ensminger & Henrich put it,

“The sample of societies from which we draw the data for this project is virtually unique in that it runs the gamut from almost pure hunter-gatherers (absent most traces of modern development and material possessions) through numerous horticultural and nomadic herding societies (some equally remote from modern markets), to cash-cropping farmers, urban African workers [Accra], and small-town residents in rural America.”

[Figure: the sample societies in Ensminger & Henrich 2014]

The three games conducted in the field with substantial monetary stakes were the Dictator Game (DG), the Ultimatum Game (UG), and the Third Party Punishment Game (TPG). All of these simulate, in controlled settings, bargaining between strangers intended to tease out and measure the existence of ‘fairness’ norms.

In DG, the ‘proposer’ is given a certain amount of money and may choose to keep it all for himself or to share some part of it with a passive second party. In UG, the second party or ‘responder’ can engage in costly punishment by rejecting the proposer’s offer if it is deemed unfair; rejection means neither gets any money. In TPG, a third party is permitted, if he chooses, to punish the proposer’s choice of allocation to the second party, but only by paying a cost (real money) to do so.
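The payoff structures of the three games can be sketched in a few lines of Python. The stake, the third party’s endowment, and the fine/cost values below are purely illustrative assumptions of mine, not the parameters used in the field experiments:

```python
STAKE = 100  # money given to the proposer (illustrative, not the field stakes)

def dictator_game(offer):
    """DG: the proposer keeps the remainder; the second party is passive."""
    return STAKE - offer, offer

def ultimatum_game(offer, min_acceptable):
    """UG: the responder rejects any offer below their threshold, in which
    case neither party gets anything (costly punishment)."""
    if offer >= min_acceptable:
        return STAKE - offer, offer
    return 0, 0

def third_party_game(offer, punish_threshold, endowment=50, fine=20, cost=10):
    """TPG: a third party pays `cost` out of their own endowment to impose
    `fine` on a proposer whose offer they deem too low."""
    proposer, recipient, third = STAKE - offer, offer, endowment
    if offer < punish_threshold:
        proposer -= fine
        third -= cost
    return proposer, recipient, third
```

A society’s “minimum acceptable offer” in the UG is then simply the smallest `offer` for which `ultimatum_game` returns a non-zero split.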

The big general finding is that people in larger, more complex societies are more likely to split pots of money with strangers on a 50-50 basis. They are also more likely to have norms which enforce that outcome.

Ensminger & Henrich find a lot of variation in behaviour across the sample societies, but purely selfish behaviour, on average, is not found in any of them. (The closest thing to an exception may be the Hadza, a small hunter-gatherer group in Tanzania, whose modal offer in DG was zero.) Every society shows a positive mean offer in DG. And in all societies sampled, there is some offer that is perceived as just too low and gets rejected (in UG) or punished (in TPG), with both rejections and punishments declining as the offer approaches 50% of the total. However, the “minimum acceptable offers” do vary substantially from society to society.

[Figure: distribution of Dictator Game offers across the sample societies]

The X-axis = the percentage of the money offered by the ‘dictator’. Grey bars = the mean offer for each population. The size of the bubbles = the fraction of the sample making those offers. Also see the chart of distribution of results for UG and TPG.

[Table: summary statistics for the sample societies]

What accounts for the variability in these behaviours ? After a series of regressions, Ensminger & Henrich come to the following conclusions:

  1. The average amount of money offered by first parties in all three games increases with the degree of their community’s market integration (as measured by % of calories obtained in the market).
  2. ‘Fairness’ also increases with individuals’ participation in a major world religion, as represented in the sample by Christianity and Islam (as opposed to a ‘traditional’ religion lacking “high moralising gods”).
  3. “People from larger communities punish more” in TPG. This is consistent with claims that more anonymous societies depend on “altruistic punishment”, the supposed tendency of some people to punish norm-violators and free-riders even when they are not direct victims.
  4. The amount of third-party punishment meted out in TPG is a robust predictor of offers made by the proposer in the DG (a game with no punishment option) and of offers made by the proposer in the two-party UG. This may be evidence for the internalisation of fairness norms.
  5. “[L]arger communities with greater punishment and fairness…are strongly associated with larger ethnic groups”, which is taken as consistent with their theoretical prediction that “[fairness] norms can spread as a consequence of their group-beneficial effects”.


The statistical tests show the two variables — market integration and world religion — are decent predictors of the prevalence of fairness norms in all the experiments. (In the Third Party Punishment Game, community size is also important.)

[Table: regression results from Ensminger & Henrich]

But the amount of variation in the data explained by the variables of interest is fairly small. A low R-squared is often no cause for concern, but in this case it matters: there are real trends in the noise, but clearly much more is going on in the data to differentiate these societies than Ensminger & Henrich explore.

Interestingly, Ensminger & Henrich reject possible genetic causes for the variation, even though they themselves report that the heritability of rejection behaviour in the Ultimatum Game is >40% in a sample of Swedish twins. E & H appeal to the standard logical argument that the within-population heritability of a trait implies nothing about the between-population heritability of the same trait. That’s true, but it doesn’t foreclose the possibility that the between-population variability is also heritable.

Their argument is that as societies grow larger and more complex (e.g., more market integrated), the social norms which are required to function in society adjust slowly to match the societal evolution. Of course that’s an inference from a cross-section, not capturing change in norms over time within societies. But they do have a theoretical model for a positive feedback between norms and societal scale with which their data are consistent.

This was recently echoed in a paper by Richerson & Henrich 2012, “Tribal Social Instincts and the Cultural Evolution of Institutions to Solve Collective Action Problems”, which appeared in a special issue devoted to nation-building in failed states in the journal Cliodynamics: The Journal of Quantitative History and Cultural Evolution. It recapitulates the main tenets of cultural evolution and multi-level selection theory, but specifically on the question of the massive cross-country variation in collective action capacity, it ends like this:

“Why does the scale vary so much among human societies, with some societies lacking much collective actions beyond the extended family while others organize millions in modern nation-states?

People began to domesticate plants and animals only about 11,000 years ago. Agriculture and the many arts that grew up with it created the potential for dense societies in favorable locations. Hence villages, towns, and cities began to grow. The pace of evolution varied from region to region, probably for many reasons (Richerson and Boyd 2001), even in the most favorable areas. Today, the world is a mosaic caused by differences in history and ecology. Tropical forest cultivators living at low density in family hamlets have virtually no institutions that operate outside the extended family (Johnson 2003). Densely populated urban cores of societies rich in agricultural, industrial, and human capital resources support modern nation-states. In some places with intermediate productivity or a historically slow trajectory of development tribal-scale institutions are still very strong. Sub-Saharan Africa and parts of the Middle East, most notably the Pashtun parts of Pakistan and Afghanistan are examples (see Turchin’s article in this Special Issue). It is important to note that the time scales of cultural change ranges from generations to millennia. If an institution is destabilized it may change rapidly until a new, usually nearby, equilibrium is established. Unless destabilized, institutions are very resistant to change. Policy makers are fated to be frustrated by the slow and hard-to-control nature of cultural evolution.”

Basically, the reasoning is very similar to what you find in modernisation theory. The pace of cultural change is linked to the pace of economic development, which is treated as exogenous; or at least the trigger for the positive feedback cycle of economic development and cultural change is exogenous. So if development is slow or non-existent, then traditional social institutions are also slow to change. Even maladaptive norms can be stuck in a trap.

But a shock can put a society onto a path of evolving pro-social institutions. As social systems based on kin networks get disrupted by economic development, people grow more dependent on impartial norms. This is echoed in numerous papers in the “cultural evolution” literature, e.g., Hruschka et al., Hruschka & Henrich, Newson & Richerson.

Also, Boyd & Richerson’s reasoning is rather similar to reasoning in biological evolution. If the palaeontological or archaeological record shows it took X million years for binocular vision to evolve, then that’s how long it took, and the job of your model is to spell out the mechanism within the dominant theoretical paradigm. In the case of “cultural evolution”, the assumption seems to be (to me, anyway) that however long it takes is however long it takes, and you should model it as cultural evolution.

§  §  §  §  §

An Indonesian whaling village displays fairness norms in the Ultimatum Game which exceed the ‘perfect’ 50-50 (proposers tend to offer more than 50%), presumably because their whale-hunt economy is completely dependent on team effort and strong social cohesion. But is that all it takes ? Maybe these one-shot trust games don’t really give us fine enough information about what’s required to evolve into a modern complex market society. I stress I am not questioning the external validity of one-shot games. But they don’t really seem to generate enough variation beyond a certain societal scale to explain the cross-country heterogeneity in social institutions. Or so it seems.

The repeated versions of games like the prisoner’s dilemma or the stag hunt or the public goods game are probably more informative about the complex interactions that exist in the most advanced societies. (The 2004 volume Foundations of Human Sociality does report findings from public goods games conducted in a couple of small-scale societies, which I will blog about separately.)

I should note that the economics literature on “culture & institutions” mirrors the argument by cultural evolutionists to some extent: culture and institutions mutually reinforce one another and co-evolve. That is, cultural differences are reinforced by institutions which are themselves path-dependent products of earlier cultural differences. I don’t find that a very satisfying literature on the largest questions of political and economic development. But I will be commenting more on the economics literature on culture in the near future.


Filed under: Cultural Evolution, Economic Anthropology, Economic Development, Institutions, Social Evolution Tagged: behavioral games, Dictator Game, Economic Experiments, Experimenting with Social Norms, Fairness and Punishment in Cross-cultural perspective, Foundations of Human Sociality, Jean Ensminger, Joseph Henrich, Third Party Punishment Game, Trust games, Ultimatum Game


Where do pro-social institutions come from?


aka “Cooperation, cultural evolution & economic development”.

Where do ‘good’ or pro-social institutions come from ? Why does the capacity for collective action and cooperative behaviour vary so much across the world today ? How do some populations transcend tribalism to form a civil society ? How do you “get to Denmark” ? I first take a look at what the “cultural evolution” literature has to say about it. I then turn to the intersection of economics and differential psychology.

[Warning: long and kind of abstract, though not technical. Edit 21 Oct 2015: ‘Denmark’ is a metaphor taken from Fukuyama. This post has absolutely nothing whatever to do with Denmark.]

“Cultural evolution”

There’s been a revival of cultural explanations in economics. And by coincidence, there’s a coalition of biologists, anthropologists, and behavioural economists operating somewhat outside the mainstream of their professions under the umbrella of “cultural evolution”. Most of them appear convinced that “neither psychology nor economics is currently theoretically well-equipped to explain the origins of institutions” [Henrich 2015]. In response they offer a unified theory of gene-culture co-evolution or dual inheritance theory which models ‘culture’ as a non-genetic Darwinian process. From “Culture & social behaviour”:

“[In order] to build a theory of cultural evolution capable of explaining where institutions come from, researchers have gone back to the basics, to reconstruct our understanding of human evolution and the nature of our species. These approaches … have used the logic of natural selection and mathematical modeling to ask how natural selection might have shaped our learning psychology to most effectively extract ideas, beliefs, motivations and practices from the minds of others…

This foundation then allows theorists to model cultural evolution by building on empirically established psychological mechanisms. The result is cultural evolutionary game theory [64]. This powerful tool has already been deployed to understand the emergence of a wide range of social norms and institutions, including those related to social stratification [65], ethnic groups [66], cultures of honor [67], signaling systems [68], punishment [69–71] and various reputational systems [72,73].”

‘Culture’ is defined as any information inside the mind which modifies behaviour and which got there through social learning — whether from parents, or peers, or society at large. Non-genetically inherited ‘content’ would obviously include technology/knowledge (“how to remove toxins from edible tubers”), beliefs (“witches can cause blindness”), and customs (use of knife & fork). But it also includes what economists would describe as “informal institutions”, i.e., mating systems, ethical values, social norms, etc.

Cultural-evolutionists and evolutionary psychologists (a separate academic tribe) tend to bicker over whether culture is responsible for causing the massive anthropological diversity of behaviours seen around the world. EP (generally) argues phenomena such as food taboos are ‘evoked’ by a universal innate psychology in reaction to different physical environments. By contrast CE (generally) argues behaviours vary primarily because the information that’s stored inside people’s minds varies from society to society. Our genetically evolved capacity for social learning enables us to transmit information accumulated over the generations, and this shadow of the past can override influences of the current physical environment. Without this persistent element with its own evolutionary logic, which CE calls ‘culture’, it would be inexplicable why distinct groups living in identical or quite similar environments nonetheless still behave very differently.

The subset of culture called ‘norms’ and ‘values’ is well defined here:

“…decision making heuristics or ‘rules of thumb’ that have evolved given our need to make decisions in complex and uncertain environments. Using theoretical models, Boyd and Richerson (1985, 2005) show that if information acquisition is either costly or imperfect, the use of heuristics or rules of thumb in decision making can arise optimally. By relying on general beliefs, values or gut feelings about the “right” thing to do in different situations, individuals may not behave in a manner that is optimal in every instance, but they do save on the costs of obtaining the information necessary to always behave optimally. The benefit of these heuristics is that they are “fast-and-frugal,” a benefit which in many environments outweighs the costs of imprecision (Gigerenzer and Goldstein 1996). Therefore, culture, as defined in this paper, refers to these decision-making heuristics, which typically manifest themselves as values, beliefs, or social norms.”

Some additional points about ‘norms’ in the CE literature:

  • as decision-making heuristics, norms strongly influence behaviour;
  • norms are passed down from generation to generation non-genetically, much like the knowledge of making fire or wine;
  • norms aggregate into ‘institutions’ at the population or group level [Ensminger & Henrich 2014];
  • because norms are largely acquired and internalised before adulthood, they can be stable over a very long time, even when they seem maladaptive or adapted to past circumstances;
  • but at the population level norms can change in response to new situations.

I turn now to those ‘norms’ and ‘values’ which enable cooperative behaviour.

The “Problem of Ultra-sociality”

Cultural-evolutionists have produced a large literature on what is sometimes called the “problem of human ultra-sociality“. Its main theme was popularised in a book by Paul Seabright: from an evolutionary point of view, how was it possible for the human species to go from living in small foraging bands of close relatives in the Palaeolithic to the global network of billions of anonymously interacting strangers that we see today ?

Modern societies engage in incredibly complex and incredibly large-scale forms of cooperative behaviour, with almost infinitesimally fine divisions of labour connected in a delicate skein of confidence and trust. Think of what it takes to get the New York Stock Exchange humming along, or the frightening amount of organisation and solidarity it took to wage the Second World War (on either side). Or, as Peter Turchin wondered, why did so many millions enthusiastically volunteer to risk death for unrelated strangers in the Great War ?

I think this is a good summary of the CE view on ‘ultra-sociality’ overall: humans have an instinct for cooperation, but the social norms of fairness evolve gradually to accommodate a wider definition of ‘insiders’ as a society gets larger and more complex.

But a related puzzle gets lost in the shuffle: why does ‘ultra-sociality’ vary so much across the world ? Why are pro-social institutions not more widespread ? The poorest countries like Afghanistan have pretty low levels of “social capability” in governance, to put it mildly. Why does the South of Italy have so little “civic capital” compared with the North of Italy ? Why aren’t most countries like Denmark ?

Direct & Indirect Reciprocity

Traditional evolutionary biologists had already worked out a couple of mechanisms by which members of some species innately cohere as cooperative groups. For example, kin selection instinctively predisposes social animals to powerfully favour close genetic relatives. This explains things like bee colonies, lion prides, and human nepotism.

Amongst unrelated, selfish people, reciprocal altruism can explain exchange and cooperation in the absence of a central authority — as long as they live in small communities where people know each other and are locked into repeated interactions over time. In such a context, the promise of future benefits and retaliation against cheating are sufficient for rational calculation to generate the “I will share my Mastodon steak with you now, assuming that you will share with me in the future” principle.
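The logic of reciprocal altruism in repeated interactions can be illustrated with an iterated prisoner’s dilemma. The sketch below uses the textbook payoff values; the strategy names and numbers are my own illustrative choices, not anything from the literature discussed here:

```python
# Iterated prisoner's dilemma. Tit-for-tat cooperates first, then simply
# mirrors the partner's previous move ("I will share my Mastodon steak with
# you now, assuming that you will share with me in the future").
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other; return their total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each strategy sees the other's past
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'
```

Two tit-for-tat players sustain mutual cooperation indefinitely, while an unconditional defector gains only a one-round advantage before being met with retaliation in every subsequent round. That is the sense in which repeated interaction makes cooperation rational even among the selfish.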

Even in populations where people don’t know each other directly, it’s still possible to generate “spontaneous order” as long as the group has high community cohesion due to ethnic or religious identity. This “indirect reciprocity” was in my opinion best illustrated by economist Avner Greif (who is definitely not part of the cultural evolution movement). Making inferences from documents deposited in the Cairo Geniza, Greif argued that informal institutions regulated commerce amongst the Maghribi Jewish traders as they conducted long-distance trade with one another in the Mediterranean during the Middle Ages. The strength of ethnoreligious ties (especially as a minority in a wider world), maintained by strong exclusion of outsiders, reproduced a village-like flow of information even within a far-flung community and enabled reputation and ostracism to be the instruments of policing.

Of course, as anyone who has read Thomas Sowell knows, such commercial minorities are abundant even today and operate effectively in corrupt countries with weak legal institutions, such as the Lebanese in West Africa and Latin America, or the Chinese in Southeast Asia. And even in countries with strong legal institutions, ultra-Orthodox Jews conduct a major international trade in diamonds without much reliance on external authorities.

But most people agree these mechanisms — kin selection and reciprocity — by themselves cannot sustain a cooperative equilibrium in much larger societies composed of strangers who may never interact more than once and are separated by great distances.

Collective action problems

Whereas the cultural-evolutionists tend to ask “how did large-scale cooperation ever get started?”, mainstream economists and political scientists ask a more abstract, ahistorical question: “how does any kind of cooperation ever happen at all ?” This line of reasoning, which Mancur Olson called the “collective action problem”, investigates any situation where the private costs of an individual action are high while the benefits of that action redound collectively to the group.

In an election, for example, voting is costly to the individual voter because he must obtain information about the candidates and physically go to the poll. But the benefits of this public good (having a legitimate, accountable government) accrue to society at large. A rationally selfish voter ought therefore to free-ride on the actions of other voters: choose not to vote, knowing that the others will vote and elect a government. In fact, many people are free riders. But if everybody acted like this, there would be no election ! So why does anyone vote at all ? There should be a failure of coordination, but this kind of uncoordinated cooperation happens all the same.

Many things are subject to the free rider problem — the maintenance of price cartels like OPEC, political demonstrations, law enforcement, common environmental resources, taxation, and even the rule of law. All those arrangements either fail to happen or are under threat of unravelling because they are plagued with free-riders. If you have too many of them, others in the group will retaliate or imitate, and the cooperative equilibrium will collapse. (Collective action or coordination failures figure prominently in explaining recessions. But Germany may be exceptional.)

The standard solution is for the state — a third-party enforcer — to punish the louts and ensure compliance with the rules of the game. But the state has what’s called in political economy parlance a “commitment problem”: if it’s strong enough to enforce the rules of the game, then it’s also strong enough to manipulate them in its own interests, or in the interests of the powerful who control the state. So why on earth would the state act altruistically to solve the coordination problem in the public interest ? Many issues of governance — such as corruption or patronage politics — in one way or another boil down to the inability of a population to coordinate on an agreed-upon set of rules, because there exists no disinterested nth-order enforcer.

Yet, despite public choice theory, despite capture by special interests, we know that the state in well-functioning societies more or less acts in the public interest most of the time. In other words, somehow collective action problems get solved; somehow socially productive cooperation happens.

“Strong reciprocity”

A possible solution is “strong reciprocity“, sometimes also known as “altruistic punishment“. Behavioural economists claim to have documented the existence of this emotional instinct to engage in costly punishment of non-cooperators. In anonymous experiments intended to mimic collective action situations, strong reciprocators tend to punish free-riders, even when they are not the direct victims, and even when there is no clear or assured benefit in the future from doing so. [2nd vs 3rd party punishment] “Strong reciprocity” could be the psychological basis of the outrage that one sees in reaction to a social norm violation like, say, queue-jumping.

In the 4-player public goods game (PGG), each player is given some money and the choice to contribute any fraction of the amount (including zero) to a common pool. At the end, the total is multiplied by some factor and then divided equally amongst the players. Players only know about one another’s contributions at the end of each game. Then the game is repeated many times.

The experiment is designed so that the players, collectively, do their best if everyone contributes the maximum amount from the beginning. But an individual player has an incentive to free-ride whilst everyone else contributes. The worst collective outcome is obtained if everyone decides to free-ride.

PGG comes in two versions, with and without punishment. In the punishment version, each player is informed anonymously after each round about everyone else’s contributions and is allowed to punish whomever they deem a free-loader by deducting from the free-loader’s final take. But the punisher must pay some fraction of the punishment amount from his own take.

In a version of the game without punishment, repeating the game many times always causes cooperation to tank, because those who initially made high contributions learn about the free riders at the end of each iteration and then lower their subsequent contributions. But in the version of the game with punishment, a high level of cooperation is sustained. From Fehr & Gächter 2002:

[Figure: contributions over repeated rounds, with and without punishment, from Fehr & Gächter 2002]
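That dynamic can be reproduced in a toy simulation: conditional cooperators who match the previous round’s average contribution, plus a free rider whom fines gradually push toward the group norm. All parameter values here are my own assumptions for illustration, not Fehr & Gächter’s experimental design:

```python
ENDOWMENT = 20.0   # per-round endowment (assumed)
FINE = 3.0         # per-round effect of fines on a free rider (assumed)

def simulate_pgg(rounds=10, punish=False, n=4, n_free=1):
    """Return the mean contribution in each round of a toy public goods game.

    Conditional cooperators start by contributing everything and thereafter
    match the previous round's group mean; free riders contribute nothing
    unless fines push their contribution up. Payoffs are omitted: only the
    trajectory of contributions is tracked.
    """
    coop = ENDOWMENT   # conditional cooperators' current contribution
    free = 0.0         # free riders' current contribution
    means = []
    for _ in range(rounds):
        mean = ((n - n_free) * coop + n_free * free) / n
        means.append(mean)
        coop = mean                        # cooperators ratchet down to the mean
        if punish:
            free = min(free + FINE, mean)  # fines raise the free rider's share
    return means
```

Without punishment, mean contributions decay geometrically toward zero as the cooperators ratchet down to match the free rider; with punishment, they stabilise at an intermediate level, which is the qualitative contrast in the Fehr & Gächter figure.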

It’s possible three stable personality types exist in repeated public goods games: free riders, “unconditional cooperators”, and “conditional cooperators” practising “strong reciprocity” or “altruistic punishment”. (Peter Turchin calls the three types ‘knaves’, ‘saints’, and ‘moralists’.)

The existence of pro-social instincts has been replicated in dozens of societies around the world — at many scales and complexities of social organisation, western and non-western, students and non-students. [More on this below; also see the separate post “Experimenting with Social Norms in Small-Scale Societies“.]

The implication is that large-scale collective action or market exchange cannot be sustained unless self-interest in some fraction of the population is constrained by an internalised ethical adherence to the rules of the game. You can’t have a society entirely composed of people whose “good behaviour” is achieved only through fear of punishment.

Whatever you think of such a view, it certainly dovetails with the thinking of classical political economists of the 18th century, as well as Hayek and neo-institutionalists like Douglass North. The latter even mentions early versions of these experiments.

Optimal institutions?

Successful legal institutions may therefore depend on some interaction between conditionally cooperative norms and formal institutions.

Once again Avner Greif (who, I swear, must think about these issues from morning to night, even in his most indelicate moments): in a series of papers on the nature of market exchange in individualist versus collectivist societies, he argues (amongst many other things) that informal institutions can be more efficient than formal ones under certain conditions.

The closed ethnic networks such as those of the Maghribi traders or of Chinese clans could be highly efficient because ethical norms and customary rules save on transaction costs. Imagine commercial contracts which don’t need to spell out every possible contingency [efficient incomplete contracts] or don’t even need to exist, because you can trust the parties to settle disputes according to long-standing custom. Or suppose you don’t have to do “due diligence” on every single transaction, because reputation counts for everything and its loss constitutes social death.

In such a world you save on the high costs of legal institutions, but it’s difficult to scale up because community cohesion may break down after a certain size. You need formal, legal rules and enforcement to conduct market exchange on the scale of millions of people.

But then the optimal arrangement is to have a large society which reproduces the cohesion of small-scale communities regulated by social norms, along with a predictable but lightly used formal enforcement system which every now and then disciplines the determined opportunists and other dickheads. Such is the world of high “social capital” as described by Putnam and writ large by Fukuyama.

This is how Putnam contrasted the North of Italy, with its high “social capital”, with the Mezzogiorno or the South of Italy:

“Collective life in the civic regions is eased by the expectation that others will probably follow the rules. Knowing that others will, you are more likely to go along, too, thus fulfilling their expectations. In the less civic regions nearly everyone expects everyone else to violate the rules. It seems foolish to obey the traffic laws or the tax code or the welfare rules, if you expect everyone else to cheat.”

One might argue, the real institutional difference between developed and developing countries is actually a “social capital” gap: there are just many more coordination failures in developing countries. Never mind countries torn by civil war. Never mind countries where the kleptocrat with a monopoly of violence does not even bother to hide his plundering. Even the political systems of minimally functioning democratic societies are still organised de facto according to segmentary lineages, with clan- and tribe-based political parties campaigning to distribute to their members the spoils of the public treasury. In societies without clans and tribes, the distributive conflict in politics is played out along ethnolinguistic or caste divisions. But even in some relatively homogeneous societies, political parties are often a system of N-party competitive distribution of public spoils, with only nominal ideological differences between the parties. Greece is an upper-middle-income country and it’s still like that.

But how do you improve a society’s collective action capacity ? How do people become more public-spirited ? How do people achieve the transition from group-specific “limited morality” to “generalised morality” ? This is effectively the same as asking how the radius of trust widens beyond small kin or clan groups in the first place.

Tribal social instincts & group competition

One answer offered by cultural-evolutionists is “tribal social instincts”.

‘Tribal’ is an unfortunate choice of words because it normally connotes group balkanisation and incoherence. But in the sense used by Boyd & Richerson, it refers to a genetically evolved predisposition to combine into groups successively larger than friends and family. ‘Tribalism’ is innate, but the specific size and manifestation of ‘tribe’ depends on social norms and cultural innovations:

“…the ways that people behave toward others can depend heavily on how those others are classified—as kin, friends, and community members or outsiders, strangers and foreigners. Second, human populations can vary dramatically in: (1) how they define closeness and distance of a social partner and (2) how these qualities of a partner influence social behavior. Third, these population differences are not fixed or static. Populations can change quite dramatically within several generations, [as with the Iban of Borneo], from hunting the heads of neighboring groups to participating relatively peacefully in a much larger nation-state and world system.” [Hruschka & Henrich]

You get parochial altruism or ethnocentrism when an expanding in-group progressively absorbs outsiders and reclassifies them as insiders. In recorded history, the most important way in-group favouritism has been extended to larger populations is through symbolic markers of common identity such as language or religion or extreme rituals. These create the glue for large associations of fictive kinship, “honorary friendship”, and imagined communities.

Then different levels of aggregation — from tribe to states to empires — are decided on the basis of intergroup competition. In short: more cooperative groups grow larger by outcompeting less cooperative groups in war.

Cooperation within a group is constantly being undermined by competition within the group — i.e., free-riding behaviour by those maximising individual fitness. That’s because kin bias and tribal bias are in tension. As Boyd & Richerson put it:

“These new tribal social instincts were superimposed onto human psychology without eliminating those that favor friends and kin. Thus, there is an inherent conflict built into human social life. The tribal instincts that support identification and cooperation in large groups are often at odds with selfishness, nepotism, and face-to-face reciprocity.”

But this tendency for groups to lose internal coherence can be counteracted by competition between groups through “multi-level selection” and “cultural group selection”. In the cultural-evolution jargon: between-group cultural variation in pro-social traits exceeds the within-group variation.

Groups whose cultural innovations — such as “high moralising gods” — tend to suppress or moderate the anti-social instincts of their members and promote the pro-social ones, grow bigger and more powerful than groups which remain riddled with group-suicidally selfish actors or subgroups. (Think: the small yet cohesive Prussia versus the large but fragmented Austria-Hungary.) Those less able to cohere as groups, all else equal, get exterminated, assimilated, or reduced to insignificance. Those wishing or able to survive imitate the cultural innovations of the successful ones.

War and violence feature prominently in the CE literature since that has characterised so much of human history. Thus the mathematical ecologist Peter Turchin, asking “how are empires [even] possible” in the first place, models group solidarity as an endogenous variable in the rise and fall of empires between 1500 BCE and 1500 CE.

(For claims about how norms and institutions co-evolve rather more peacefully, see the separate post “Experimenting with Social Norms in Small-Scale Societies”.)

The prominence of war in the cultural-evolutionist explanations of the most ancient human institutions such as hierarchy, the state, and ethnocentrism has a clear parallel with the literature in economic history on “state capacity”, in which the rise of effective states in the early modern period is also linked to war (Brewer, Tilly, Hoffman, Dincecco 2015, etc.). It also has an obvious link with the literature in comparative historical development inspired by Jared Diamond stressing the importance of “an early start” in agriculture and state history (Bockstette et al., Borcan et al., Spolaore & Wacziarg).

Social norms and ‘stateness’

But if the long history of ‘stateness’ and agriculture, plus a long history of state-level warfare, are somehow crucial to the development of a ‘modern’ society and state, that naturally prompts the question of how and why. It cannot be merely the long history of state experience. From Olsson & Paik, log GDP per capita in 2005 plotted against time since adoption of agriculture:

[Figure: Olsson & Paik scatterplot — log GDP per capita (2005) against years since the adoption of agriculture]

On a global scale, it’s as Jared Diamond argued: a long history of agriculture (and therefore also of the state) is a good thing, but within regions, it may not be such a good thing after all. (Statistically, this is Simpson’s Paradox.)

So at this point, cultural-evolutionists often say “history matters” and reference the empirical work in mainstream neoclassical economics and other social sciences which demonstrates that “institutional shocks” in the past can leave a persistent imprint on contemporary culture. (If I’m going to get referred back to econ why did I bother to read you people again ???) This evidence sorts well with the cultural-evolutionist assumption that cultural inertia can persist for a long time.

That econ literature finds real effects, by which I mean it does establish that culture matters, social norms or cultural values do change, and this has consequences for political and economic development on the margins. But the overall effects found by this literature seem to me kind of small, or highly local, or particular to their datasets. Or in some cases it cannot rule out reverse causality, e.g., it could be economic development which results in big cultural changes, such as rising levels of trust.

I myself have little doubt different levels of trust induced by the experience of communism, for example, play a role in the various disparities between western and eastern Germany. Nor do I doubt different (cultural) preferences for leisure and work contribute non-trivially to the US-European differences in work hours. And it makes perfect sense to me that the strength of family ties explains political attitudes which influence the demand for labour market regulations in northern versus southern Europe. So different institutional paths chosen thanks to differences in culture and history do matter.

But as the gap gets bigger between countries I grow more sceptical about how powerful ‘culture’ — as defined above — can really be in explaining the massive global variation in state capacity and economic development.

There are always clever ways of magnifying the impact of culture by modelling culture-institutions-geography traps. Thanks in part to ancient biogeographic conditions, a particular cultural group is able to set up institutions reflecting their prevailing norms in a newly settled country; the ‘bad’ institutions persist and persist and persist, because this group controls the resources and has no incentive to make a “credible commitment” to “inclusive institutions” or an “open access order”; and because the ‘bad’ institutions persist, pro-market norms fail to emerge and the ‘bad’ norms reinforce the ‘bad’ institutions. There are like 3 or 4 tangled traps in there.

So is ‘good’ governance just a matter of time, a long time, but still a matter of time ? Jerry Hough and Robin Grier have argued just that in a recent book which actually inspired this post. That book also makes a cultural evolution argument in which state capacity, markets, and “rational-legal values” co-evolve. Or is ‘bad’ governance an issue of timing mismatch, when democracy arrives before a strong state, as Fukuyama has repeatedly argued ? Interestingly, both Fukuyama and Hough & Grier point to the corruption-ridden United States of the 19th century as an important piece of evidence.

I think the “really long time” argument has some validity, but that just prompts the question: why have some countries — those of East Asia come to mind — had such rapid and dramatic institutional revolutions ?

I don’t find plausible theories for answering that question in cultural evolution. So I turn to the intersection of economics and differential psychology.

§  §  §  §  §

Intelligence and Cooperation

In the workhorse model of (non)cooperation — the prisoner’s dilemma — two players are faced with the decision to cooperate or defect based on a matrix of 4 possible payoff combinations.

Suppose a sedentary peasant would end up with $4 (the loot + his own output), if he ambushed and robbed a passing horse-backed nomad, but only $3 if he traded goods with him. The nomad faces the identical decision: $4 with robbing, $3 with trading. If both decided to rob, then they would be left with $2 each.

[Figure: prisoner’s dilemma payoff matrix]

It’s set up so that each has a perfectly rational self-interest in robbing the other, but the trading world is clearly better than the both-turn-to-robbery world.
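The setup can be sketched with the peasant–nomad numbers above. One caveat: the original payoff matrix doesn’t reproduce here, so the $1 “sucker” payoff (you trade while the other robs you) is my assumption, chosen only to preserve the standard prisoner’s-dilemma ordering:

```python
# One-shot prisoner's dilemma with the peasant-nomad payoffs from the text.
# payoff[(my_move, their_move)] -> my payoff in dollars
payoff = {
    ('trade', 'trade'): 3,  # both trade: $3 each
    ('trade', 'rob'):   1,  # I trade, you rob me (assumed 'sucker' payoff)
    ('rob',   'trade'): 4,  # I rob a trader: the loot + my own output
    ('rob',   'rob'):   2,  # both rob: $2 each
}

def best_reply(their_move):
    """Return my payoff-maximising move given the other's move."""
    return max(['trade', 'rob'], key=lambda m: payoff[(m, their_move)])

# Robbing is the best reply to either choice, so mutual robbery is the
# unique Nash equilibrium -- even though mutual trade pays more ($3 > $2).
assert best_reply('trade') == 'rob'
assert best_reply('rob') == 'rob'
```

Each player’s dominant strategy is to rob, which is exactly why the mutually better trading outcome is so fragile in a one-shot encounter.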

It’s well known from simulations of “infinitely repeated” prisoner’s dilemma games that cooperation is best sustained when both players adopt some variety of conditional cooperation strategy: cooperate first, but then copy the other player’s earlier choice. [Axelrod] The gains to both parties are bigger in the long run if both parties behave like that. And, as we have already seen, real people do in practice show an instinct for conditional cooperation.
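A minimal simulation of that conditional-cooperation logic — tit-for-tat against an unconditional defector — can use the same toy payoffs (again with my assumed $1 sucker payoff):

```python
# Repeated prisoner's dilemma: tit-for-tat vs. unconditional defection.
# 'C' = cooperate (trade), 'D' = defect (rob); values are (A, B) payoffs.
payoff = {('C', 'C'): (3, 3), ('C', 'D'): (1, 4),
          ('D', 'C'): (4, 1), ('D', 'D'): (2, 2)}

def tit_for_tat(opp_history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_b)  # each strategy sees the opponent's past moves
        b = strat_b(hist_a)
        pa, pb = payoff[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two conditional cooperators lock into mutual trade...
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# ...while a defector gains once, then drags both down to mutual robbery.
print(play(tit_for_tat, always_defect))  # (19, 22)
```

Over ten rounds the pair of conditional cooperators out-earns everyone: the defector’s one-round windfall never makes up for the stream of mutual-robbery payoffs that follows.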

But Proto, Rustichini & Sofianos (2014) demonstrates that in a multi-round Prisoners’ Dilemma experiment with actual human participants and real money, the intelligent are much more likely to practise conditional cooperation.

In this experiment, subjects were first administered Raven’s Progressive Matrices, a test which measures fluid intelligence (i.e., not based on knowledge). They were also tested for risk attitudes and the Big Five personality traits. In the end, 130 participants were allocated to two groups — “high Raven” and “low Raven” — and the only statistically significant difference between the two groups was in fluid intelligence. The participants did not know how they were grouped. (Edit: Also relevant: “Participants in these non-economists sessions had not taken any game theory modules or classes.”)

Then within each group, different pairs of participants repeatedly played the prisoner’s dilemma — the maximum number of rounds was 10 but a computer decided whether to terminate the session after each round with a fixed probability. There were multiple sessions of these rounds of games.

[Figure 2b from Proto, Rustichini & Sofianos (2014)]

[Blue: high Ravens group, Red: low Ravens group. X-axis: each period represents 10 rounds; Y-axis: the fraction of players cooperating.]

The high Raven group diverged from the low Ravens and sustained cooperation much longer. There actually wasn’t much difference between the two groups in the early rounds; the difference grew incrementally, in dribs and drabs, but in the end it was substantial. This suggests the high Ravens learnt the optimal behaviour from the previous rounds better than the other group.

Proto et al. sliced and diced the data in various ways and found that:

  • reciprocation is much stronger with the smart: high Ravens are more likely than the low Ravens to match prior cooperation with cooperation in kind, and to punish prior defection with defection in kind;
  • reaction times — the time it took to decide whether to cooperate or defect — were shorter and declined faster for the high Ravens;
  • the only statistically significant difference in individual participant characteristics was fluid intelligence;
  • when the monetary payoffs were manipulated to make cooperation less profitable in the long run, the high Ravens were no more cooperative than the low Ravens — if anything, the low Ravens were slightly more cooperative !

But the most amazing result has to be this: “Low Raven subjects play Always Defect with probability above 50 per cent, in stark contrast with high Raven subjects who play this strategy with probability statistically equal to 0. Instead, the probability for the high Raven to play more cooperative strategies (Grim and Tit for Tat) is about 67 per cent, while for the low Raven this is lower (around 45 per cent).”

Understanding the benefits of working together in complex situations — which is what a repeated prisoner’s dilemma simulates — implicitly requires reasoning skills, the ability to learn from mistakes, the ability to anticipate, and accurate beliefs about other people’s motives.

The ethical implication: the intelligent are more likely to practise the Golden Rule, and this actually breeds trust; and the less intelligent are more likely to think they can get away with it, and this breeds mistrust. You only need intelligence to generate this difference. You can immediately see where social and civic capital might come from, at least in part.

The Proto et al. study replicates and extends a few earlier studies on intelligence and cooperation (Al-Ubaydli et al. 2014; Jones 2008). Moreover, it’s consistent with cross-cultural findings from Public Goods Games as described below.

Anti-social punishment

Earlier I mentioned that in the Public Goods Game there were possibly 3 stable personality types: free riders, unconditional cooperators, and conditional cooperators who punish free-riders. But when the PGG was conducted in 16 different cities around the world, Herrmann, Thöni, & Gächter (2008) reported the existence of “anti-social punishment”.

That is, in addition to people actively punishing free-riders, in some cases there were people actively punishing cooperators !

[Figure: anti-social punishment across subject pools, from Herrmann, Thöni & Gächter (2008)]

Another interesting thing is that subjects in both Seoul and Chengdu started at fairly low levels of cooperation but rapidly converged to Northwest European levels of cooperation:

[Figure: cooperation over time by subject pool, with punishment option]

Anti-social punishment kills cooperation in non-western countries except East Asia. Hmmm. What’s different about the Koreans and the Chinese ?

When the punishment option is entirely removed in a separate treatment, cooperation levels plummet across the board and even the rank order changes to some extent:

[Figure: cooperation over time by subject pool, punishment option removed]

Those Danes remained ever true to stereotype, but what a fall for most other NW Europeans ! This suggests pro-social punishment is important to sustaining collective action in western countries.

Interestingly, the authors found in their regressions that the “rule of law” indicator was not a statistically significant predictor of pro-social punishment, but it was strongly inversely correlated with anti-social punishment. That is, weak rule of law and anti-social punishment tend to go hand in hand. It’s probably a feedback effect.

Patience and Cooperation

Patience also matters to whether cooperation is sustained. Low time preference is correlated with intelligence [Shamosh & Gray], but the association is not so big that you can’t have many smart people who are more impatient than some less intelligent people. So patience is a quasi-separate factor in cooperation.

Intuitively, if the benefits of cooperation accrue in the long run, then less patient and more impulsive people are less likely to cooperate. In the repeated prisoners’ game, the degree of ‘patience’ is represented by discounting the total gains in each period after the first one. In the nomad-peasant example from above, if both were impatient, the $6 cooperative payoff in (say) the 9th round would only be worth (say) $4, because a dollar in the future is not worth the same as a dollar today. But for more patient people, the future payoff will be closer to $6.
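That intuition can be made exact with the standard grim-trigger condition from textbook repeated-game theory — a generic calculation applied to the toy payoffs above, not a result from any cited paper:

```python
# How patient must a player be for cooperation to beat defection?
# Under a grim-trigger strategy, defecting earns the temptation payoff T
# once, then the punishment payoff P forever; cooperating earns R forever.
# Cooperation is sustainable iff  R/(1-d) >= T + d*P/(1-d),
# which rearranges to the critical discount factor  d >= (T-R)/(T-P).
def critical_discount_factor(T, R, P):
    return (T - R) / (T - P)

# Peasant-nomad numbers: T=4 (rob a trader), R=3 (trade), P=2 (mutual robbery)
d_star = critical_discount_factor(T=4, R=3, P=2)
print(d_star)  # 0.5 -- a player must value next period at least half as
               # much as today, or trading unravels into robbery
```

With these payoffs, anyone whose discount factor falls below 0.5 rationally robs; the more the temptation payoff exceeds the cooperative one, the more patience cooperation demands.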

But the Stag Hunt may better highlight the role of patience in cooperation. It’s similar to the prisoner’s dilemma, but the payoffs to cooperation are obviously superior to those of non-cooperation (defection). The choice is between cooperating to hunt stag (which requires team effort) or going it alone chasing rabbits. The scenario is set up so that non-cooperation is riskless — a sure thing for both parties — whereas cooperation is risky because you don’t know whether the other party will come through, and the big payoff absolutely requires both to participate.

[Figure: stag hunt payoff matrix]
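Since the payoff matrix doesn’t reproduce here, take hypothetical numbers: the stag pays $4 to each hunter if both cooperate, $0 to a lone stag-hunter, and rabbits pay a sure $3 regardless. Whether to cooperate then turns entirely on your belief about the other player:

```python
# Stag hunt: cooperating (stag) is payoff-dominant but risky; defecting
# (rabbits) is a sure thing. Payoffs below are hypothetical illustrations.
STAG_BOTH = 4    # both hunt stag
STAG_ALONE = 0   # you hunt stag while your partner chases rabbits
RABBIT = 3       # sure payoff from rabbits, whatever the partner does

def hunt_stag(p_partner_cooperates):
    """Hunt stag iff its expected payoff beats the sure rabbit payoff."""
    expected_stag = (p_partner_cooperates * STAG_BOTH
                     + (1 - p_partner_cooperates) * STAG_ALONE)
    return expected_stag >= RABBIT

print(hunt_stag(0.9))  # True  -- with high trust, the stag is worth it
print(hunt_stag(0.5))  # False -- a coin-flip partner isn't worth the risk
```

With these numbers you need to believe your partner will show up with probability of at least 0.75 before cooperating is rational, which is why beliefs about others (trust) do so much work in stag-hunt situations.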

Many social situations are more like a stag hunt than a prisoner’s dilemma, and intuitively you would think taste for risk would predict cooperative behaviour in such cases. But in fact Al-Ubaydli, Jones & Weel (2013) finds that patience is the most important element in successful cooperation in Stag Hunt experiments with real people. The study participants had been tested for patience, intelligence, risk aversion, and personality traits; and “risk aversion and intelligence have no bearing on any aspect of behavior or outcomes at conventional significance levels”.

[Figure from Al-Ubaydli, Jones & Weel (2013)]

[There are other interesting details to this study, such as increasing returns to pairs of patient players, but it would take too much space to describe.]

Need I say that populations vary substantially in patience ? Wang et al. (2011) asked subjects in 45 countries how much they value offers of money now, in the near future, the medium future, and the far future. Based on their answers, the authors computed discount rates for each group.

But people tend to be “time inconsistent”, i.e., they can be very patient about long-term rewards, but simultaneously very impulsive about things right in front of them. If your future self tearfully regrets that your past self was such an idiot as to consume 5 slices of cake after each meal, then you are a “hyperbolic discounter”.

Wang et al. found that countries and regions vary a lot more in their present bias than in their long-term discount factor:

[Figure: present bias (β) and long-run discount factor by country/region, from Wang et al. (2011)]

The above says: how people in the present value rewards they expect to receive (say) 10 years into the future, is pretty similar across the world — although small differences can make a big difference in the long term through compounding. But the degree to which people want things right now, as opposed to tomorrow, varies quite dramatically.

By the way, Russia’s β is 0.21 !!! If that has nothing to do with low investment rates or insecure property rights for foreign companies, then I will eat my shorts !
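Wang et al.’s numbers come from the standard quasi-hyperbolic (β–δ) model, in which a reward t periods away is worth β·δ^t of its face value today (β = 1 for a pure exponential discounter). A small sketch, with δ = 0.95 assumed purely for illustration:

```python
# Quasi-hyperbolic (beta-delta) discounting: present bias beta is a
# one-time penalty on *all* future rewards, on top of the per-period
# discount factor delta (delta = 0.95 here is an assumed illustration).
def present_value(reward, periods_away, beta=1.0, delta=0.95):
    if periods_away == 0:
        return reward
    return beta * (delta ** periods_away) * reward

# An exponential discounter (beta = 1) vs. Russia's estimated beta = 0.21:
print(present_value(100, 1))             # ~95    -- waiting barely stings
print(present_value(100, 1, beta=0.21))  # ~19.95 -- waiting loses ~80%

# The year-9 to year-10 ratio is identical for both, because beta is paid
# only once, at the now/later boundary:
for b in (1.0, 0.21):
    print(present_value(100, 10, beta=b) / present_value(100, 9, beta=b))  # ~0.95
```

That is exactly the Wang et al. pattern: long-run discounting looks similar everywhere, while the one-time now-versus-later penalty varies enormously.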

The role of patience in cooperation is relevant to the “commitment problem” of the state in solving collective action problems. In theorising about the origins of the state, Mancur Olson gave a famous answer with his dichotomy of roving bandits and stationary bandits. In the world of political anarchy, roving bandits fight one another for opportunities to pillage the productive peasants. But sometimes one of them defeats all the others and establishes himself as a “stationary bandit”. He then acquires a strong intrinsic interest in restraining his plunder — his ‘taxation’ — in order to let the economy grow. It’s the “fatten the goose that lays the golden eggs” principle.

But that depends ! If the stationary bandit is impulsive and impatient, he can remain a predator for a very long time.

Political scientist Carles Boix in a recent book pointed out that the reciprocity of stateless foraging societies cannot be sustained when the distribution of resources is too unequal. But even his model depends on ‘patience’, with the implication that uncoordinated cooperation is still possible with more inequality as long as people are patient enough. This is actually true of models using prisoner’s dilemma and stag hunt in general. Even Acemoglu‘s ruling elite with vested interests in maintaining “extractive institutions” would have incentives for “inclusive institutions” if they were only patient enough.

§  §  §  §  §

So to answer the question at the head of this post, “where do pro-social institutions come from?” — if ‘bad’ institutions represent coordination failures, then intelligence and patience must be a big part of the answer. This need not have the same relevance for social evolution from 100,000 BCE to 1500 CE. But for the emergence of ‘modern’, advanced societies, intelligence and patience matter.

It’s not that people’s norms and values don’t or can’t change. They do. But that does not seem enough. Solving the most complex coordination failures and collective action problems requires a lot more than just ‘good’ culture.

I am not saying intelligence and patience explain ‘everything’, just that they seem to be an important part of how ‘good’ institutions happen. Nor am I saying that intelligence and patience are immutable quantities. Pinker argued in The Better Angels of Our Nature that the long-run secular decline in violence may be due to the Flynn Effect:

…the pacifying effects of reason, and the Flynn Effect. We have several grounds for supposing that enhanced powers of reason—specifically, the ability to set aside immediate experience, detach oneself from a parochial vantage point, and frame one’s ideas in abstract, universal terms—would lead to better moral commitments, including an avoidance of violence.

What is the above describing, other than the increasing ability of people to empathise with a wider group than friends and family ? Intelligence and patience allow you to understand, and weigh, the intuitive risks and the counterintuitive benefits from collaborating with perfect strangers. With less intelligence and less patience, you stick to what you know — the intuitive benefits from relationships cultivated over a long time through blood ties or other intimate affiliations.

Your “moral circle” is wider with intelligence and patience than without.

In the 1990s, in the middle of free-market triumphalism, it was widely assumed that if you let markets rip, the institutions necessary to their proper functioning would ‘naturally’ follow. Those with a vested interest in protecting their property rights would demand them, politically. That assumption went up in flames in the former communist countries and in the developing countries under economic restructuring.

 To paraphrase Garett Jones, one of the co-authors of the stag hunt study: smart, patient people are more Coasian; they find a way to cooperate and build good institutions.


PS: It’s not out yet so I don’t know what’s in it exactly, but based on his papers that I’ve read, I strongly recommend Jones’s forthcoming book, The Hive Mind. And according to the table of contents now available at Google Books, it will cover the aforementioned “Political Coase Theorem” territory.

Edit-Note: The Proto et al. and the Al-Ubaydli et al. studies were, indeed, conducted with university students in WEIRD countries (the UK and the USA, respectively). The Wang et al. study was with economics students in 45 countries. The Herrmann et al. study was conducted in 16 different cities, including both western and non-western countries. These are not intended as definitive evidence of anything. Yet they all strongly suggest intelligence and patience generate cooperative behaviour. Future studies like these — especially with regard to intelligence — will certainly be carried out in more diverse societies. Besides, Henrich et al. have already conducted one-shot ‘trust’ or ‘sharing’ game experiments in dozens of small-scale societies around the world.


Filed under: Behavioural economics, Cultural Evolution, Political Development, Political Economy, Social & Civic Capital, Social Evolution Tagged: Boyd & Richerson, collective action problem, cooperation, cultural evolution, cultural group selection, Joseph Henrich, market norms, social evolution, ultra-sociality

The Baptist Question Redux: Emancipation & Cotton Productivity


Edward Baptist, the author of The Half Has Never Been Told, has been claiming since the publication of his book that a putative post-Emancipation drop in overall agricultural productivity in the American South is proof that it was torture, not new cotton cultivars and frontier soils, which had been largely responsible for the US cotton boom of 1800-60. But there are severe limitations to what the cliometric literature on slavery can reveal about post-Emancipation productivity specifically in cotton-picking.


The nature of the “Baptist Question”

In The Half Has Never Been Told, the Cornell historian Edward Baptist attributed the very large increase in cotton-picking rates by slaves in the American South to the scientific application of torture in the gang labour system. Baptist ignored, more than dismissed, the statistical case made earlier by Olmstead & Rhode that the 400% increase (from 25 lbs. to 100 lbs.) in the raw cotton picked per day per slave between 1800 and 1860 was made possible by the introduction of new seeds with higher yields producing easier-to-pick plants, in the fertile frontier soils of the New South.

[For details, see my post “Plant breeding…drove cotton productivity gains in the US South”.]

In Baptism by Blood Cotton, I acknowledged that the extra cotton that was grown had to be picked by someone, even if the higher yield was due to better plants and better soils. Nobody disputes that technology and (coerced) labour were complements. Absolutely no one disputes that slaves were forced to work harder than they would have done had they been free. Nobody disputes the violent coercion part of this story (even if Baptist appears to think his critics do deny it).

But according to O & R, the new plant varieties were taller and therefore physically easier to pick, and produced more bolls containing more raw cotton lint. In that case, there was simply more cotton to pick for any given level of work effort induced by the lash. Recently Brad Hansen put the choices quite well :

“There are essentially two ways that this increase [in cotton picked] over time could have occurred. First, slaves could have been forced to pick closer to the maximum that they were physically capable of. Second, the maximum that they were physically capable of picking increased over time. O & R argue for the second explanation. Improved plants enabled slaves to pick more cotton in a given amount of time. In other words, slaveholders used physical coercion to force slaves to pick at maximum picking rates and through plant breeding they were able to increase this maximum amount that a person was physically capable of picking overtime.”

So it’s a question of which factor predominates in that 400% increase in daily cotton-picking rates: [a] slaves were made to pick closer to their physical maximum; or [b] the biological maximum was raised by new seeds and soils.

Baptist contends that in the traditional ‘task’ system, slaves were less supervised and therefore their labour was less than fully utilised. The gang system moved the utilisation of slave labour closer to full capacity. But it’s difficult to believe slaves had been so under-utilised in the Old South relative to maximum physical potential that you could raise utilisation rates by 400% ! (Baptist now says he does not discount the seeds explanation as partial, but in the book he barely mentions it and dismisses it in the footnotes.)

In the aftermath of his book’s publication, Baptist faced some harsh criticisms, especially about his garbling of economics. See for example Burnard (with response from Baptist); as well as the roundtable reviews in the Journal of Economic History [ungated version], with Olmstead particularly severe in his critique; and recently Clegg which, for some of its arguments, cites my own critique. Also see the multiple blogposts by the economic historian Brad Hansen [1, 2, 3, 4].

Partly in response to such criticisms, Baptist has been loudly arguing that the decline in southern agricultural productivity after the war ‘destroys’ Olmstead & Rhode’s botanical argument. See his Twitter pronunciamentos:

[Screenshots of Baptist’s tweets]

And also some of his comments at the Junto blog:

“One problem here is the critique of the cotton productivity argument. Post bellum sharecropping was actually notoriously unproductive. See Fogel, who calculates a 40% drop in productivity”.

[Baptist has also now written a direct criticism of Olmstead & Rhode.]

Unfortunately, the Baptist Question can not be satisfactorily addressed by citations from the cliometric slavery debates of the 1970s and 1980s, especially regarding the effects of Emancipation. Their focus was too macro and rarely cotton-specific; and there are just too many uncertainties in their estimates of aggregate productivity to settle the decidedly micro question of cotton picking rates. As I argued previously, you need evidence from direct observation of how much cotton was picked or pickable per day or per hour for freedmen.

Which productivity are you talking about ?

There is a lot of loose talk of ‘productivity’ by historians who really need to understand the following (very elementary) distinctions in its meaning:

  • total output
  • output per worker (per year)
  • output per worker per day
  • output per worker per hour
  • total factor productivity (TFP), or output per input of all factors of production (land, labour, capital)

Today, when most people talk about productivity, they usually mean labour productivity in the sense of output per worker-hour. But the slavery debate initiated by Time on the Cross was couched in terms of TFP — the cliometricians of the era were arguing about the North-South differences in total factor productivity for the year 1860. By contrast, the productivity estimated by Olmstead & Rhode is output (cotton picked) per worker per day on southern farms between 1800 and 1860.

Care needs to be taken in comparing all these things. You can have hourly labour productivity rising or staying constant, even when TFP is stagnant or falling. You can have daily labour productivity growing without any growth in hourly productivity, which implies that workers simply worked more hours per day. Conversely, if daily labour productivity shoots up while the work day gets shorter or stays the same, that could mean more work was being squeezed out of workers for every hour they worked.
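These distinctions are easier to see with made-up numbers — a purely illustrative sketch, not data from any study cited here:

```python
# Toy illustration of why 'productivity' claims need a denominator.
# Suppose a worker picks 100 lbs of cotton in a 10-hour day, and later
# output rises to 120 lbs -- but the work day stretches to 13 hours.
before = {'output_lbs': 100, 'hours': 10}
after  = {'output_lbs': 120, 'hours': 13}

def daily(p):  return p['output_lbs']               # output per worker-day
def hourly(p): return p['output_lbs'] / p['hours']  # output per worker-hour

print(daily(before), daily(after))    # 100 120   -> daily productivity UP 20%
print(hourly(before), hourly(after))  # 10.0 ~9.2 -> hourly productivity DOWN
# Here daily output rose purely because hours rose: more labour was squeezed
# out of the worker, not more output per unit of labour. TFP would further
# net out land and capital inputs before calling anything 'productivity'.
```

The same yearly or daily figure is therefore compatible with opposite stories about work intensity, which is exactly why loose talk of ‘productivity’ muddles the Baptist debate.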

Then there is the distinction between the productivity of the agricultural sector as a whole and the crop-specific productivity for cotton. Most of the cliometric slavery debate focused on all-agriculture productivity. Of course cotton was an important part of the southern economy, but it was still less than 30% of its total agricultural output (TOC2, pp 131-138, WCC-TP1 pg 262n). So you cannot simply deduce changes in crop-specific productivity from changes in sectoral productivity — especially on farms where many crops were grown at any given time — at least not without having some detailed data on the allocation of labour time to specific crops within individual farms. So even in the case of large plantations where 60% of the monetary value of output was derived from cotton (WCC-TP1 pg 254; small farms cotton share: 29%), you still can not infer how much labour-time was devoted to producing that crop and therefore you can not infer the cotton-specific labour productivity from the all-crop information.

However, there is an alternate method (the price dual) for estimating productivity changes for a single crop like cotton. I leave this for the end of the post.

Edit 6-Nov-15: Do note, Roger Ransom and Robert Sutch in One Kind of Freedom: the Economic Consequences of Emancipation, which is often taken to be the “Time on the Cross” of Emancipation, dispute there was a decline in productivity at all:

[Excerpted passages from Ransom & Sutch, One Kind of Freedom]

[One of Fogel’s students had a clever counterargument to the above, for which see the comments section.]

Further micro-evidence on how long it took to pick cotton

In my earlier post I presented some evidence from the 1930s about how much labour-time was required to hand-pick a pound of raw cotton just before picking became mechanised. That evidence is consistent with Whatley 1987 which found that the “average picking rate of 125 pounds of seed cotton per 10-hour man-day is the quote found most frequently” in US Department of Agriculture bulletins of the 1930s. Neither seems terribly far removed from the average of 100 lbs. in 1860 found by Olmstead & Rhode.

Both are also consistent with other evidence. From Craig and Weiss, “Hours at Work and Total Factor Productivity Growth in the Nineteenth-Century U.S. Agriculture” (chapter 1 of New Frontiers in Agricultural History; my uploaded PDF copy):

[Table: labour-hours required per 100 lbs of cotton lint, from Craig & Weiss]

According to the above, the amount of labour-time required to produce 100 lbs of cotton lint steadily declined. Of course, it’s not the same as picking rates of raw seed cotton.
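As a rough sketch of how the two measures relate, one can convert a seed-cotton picking rate into picking-hours per 100 lbs of lint. The ~38% ginning outturn (lint as a share of seed cotton) used below is my own illustrative assumption, not a figure from Whatley or Craig & Weiss:

```python
# Convert a seed-cotton picking rate into picking-hours per 100 lbs of lint.
# The lint outturn (share of seed cotton that gins out as lint) is an
# assumed illustrative value, not taken from the sources cited above.

def picking_hours_per_100lb_lint(seed_lbs_per_day, hours_per_day=10.0, lint_outturn=0.38):
    seed_needed = 100.0 / lint_outturn          # lbs of seed cotton behind 100 lbs of lint
    rate_per_hour = seed_lbs_per_day / hours_per_day
    return seed_needed / rate_per_hour

# Whatley's 125 lbs of seed cotton per 10-hour man-day:
hours = picking_hours_per_100lb_lint(125)       # ~21 picking hours
```

Note that this yields picking hours only; total labour per 100 lbs of lint, as in the Craig & Weiss table, would also include planting, hoeing, ginning, and so on.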

However, recently the economic historian Trevon Logan (Logan 2015) published manual cotton-picking rates from 1952-65 on the Mississippi farm belonging to his own sharecropping grandfather:

“Although the agricultural productivity of African Americans in the antebellum era is well-investigated, after Reconstruction we know much less about how productive African Americans were in the fields.

“The cotton picking books retained by my deceased paternal grandparents are quite detailed and allow me to estimate the individual productivity of nine children who were involved in intensive manual cotton picking from 1952 to 1965. This rich quantitative data forms the basis for this study. In addition to estimating the overall productivity (pounds of cotton picked per person per day), I also use the records to investigate gender differences in agricultural productivity.

“On average, children in the Logan family picked approximately 120 pounds of cotton per person per day in their late teen years. These estimates are quite similar to recent estimates of slave productivity produced by Olmstead and Rhode (2010), who calculate productivity using detailed plantation records from the late antebellum era from the same region. On average, the children in the Logan family were more than 95% as productive as their enslaved predecessors in the field. Gender differences in productivity were quite small within the Logan family, disappearing almost entirely by late pubescence. Given that the method of picking cotton was largely unchanged over time — both the enslaved and Logan children picked cotton by hand in a process that underwent very little technological innovation — the estimates imply that the extraordinary productivity of slaves persisted long after Emancipation.”

Conveniently, Logan 2015 reproduces some of the methods of Olmstead & Rhode, and the picking rates are expressed as percentage of slave averages recorded by O&R:

[Tables and figures from Logan 2015: the family’s picking rates, and the rates expressed as a percentage of the slave averages in Olmstead & Rhode]

It must be kept in mind that the above picking rates are not fully comparable with those of antebellum slave plantations. First, the cotton plants of the 1950s may have had an intrinsically higher biological yield than those of 1860. Second, the incentives might have been very different: global cotton prices were lower than a century earlier, and, as Logan himself points out, the cotton seeds were a separate source of revenue for the Logan family. Finally, of course, this is just one family.

Nonetheless, you cannot blithely assume that cotton-picking rates collapsed with the abolition of slavery.


A note on the “price dual” labour productivity for cotton

The one observation made by Robert Fogel about post-bellum productivity specifically in cotton production (not picking) is:

“Since the breakup of the gang-system farms was responsible for the decline in productivity, it might be thought that the price of cotton should have been higher than it actually was in 1880. Had the price of inputs remained constant, a 20 or 30 percent decline in total factor productivity would have led to a 20 or 30 percent rise in the price of cotton. However, the prices of the main inputs into cotton production — labor, land, and most other items of capital — fell by amounts that nearly offset all of the upward pressure on cotton prices caused by the change in productivity. Indeed, the ratio of input prices to the price of cotton is still another way of measuring total factor productivity, and this index indicates a 35 percent decline in the efficiency of cotton production.” [WCC, pg 101]

“According to the calculations of Moen, TP, #15, total factor productivity [in overall southern agriculture] fell by 13 percent between 1860 and 1880 if measured by quantities, and by 35 percent if measured by the price dual. Cf. EM [Yang], #38; Jaynes (1986)” [pg. 439, footnote 49]

The above refer to the quick-and-dirty dual method. The “primal” method used by Fogel & Engerman to compute total factor productivity is expressed in terms of physical quantities. (At least in theory: you still have to use prices to make physical quantities addable in common units.) The price dual, by contrast, allows you to infer changes in productivity as long as you have output and input prices over time. This is more or less the same method used in Eltis et al., which found zero labour productivity growth in Caribbean sugar over the long run, despite a very harsh gang labour system.
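A minimal sketch of the dual calculation, with invented numbers (these are not Moen’s or Yang’s figures): under competitive pricing, the TFP index can be recovered as a share-weighted index of input prices divided by the output price.

```python
# Price-dual TFP: TFP ratio = (share-weighted index of input-price growth)
# divided by (output-price growth). All numbers below are invented.

def dual_tfp_ratio(input_prices_0, input_prices_1, cost_shares, p_out_0, p_out_1):
    assert abs(sum(cost_shares) - 1.0) < 1e-9   # shares must exhaust costs
    input_index = 1.0
    for w0, w1, s in zip(input_prices_0, input_prices_1, cost_shares):
        input_index *= (w1 / w0) ** s           # geometric share-weighted index
    return input_index / (p_out_1 / p_out_0)

# Hypothetical: wages, land rents, and capital costs all fall 30% while the
# output price is unchanged -> measured TFP falls 30%.
ratio = dual_tfp_ratio([10.0, 5.0, 2.0], [7.0, 3.5, 1.4], [0.6, 0.25, 0.15], 1.0, 1.0)
```

This is the logic behind Fogel’s remark: falling input prices offset the upward pressure on cotton prices, and the input-price/output-price ratio itself registers the productivity decline.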

The simplest computation of the price-dual labour productivity change in agriculture between 1860 and 1880 is found in Yang (WCC-EM, pg 275), referenced by Fogel :

[Table: Yang’s price-dual calculation (WCC-EM, pg 275)]

The monthly wage rate for 1860 used above and in Moen (WCC-TP1, pg. 340) is for free male labour in the South. I don’t understand why that should be the basis for representing the cost of slave labour input ! (Eltis et al. used slave prices for the entire period.) It’s possible to argue it’s merely an accounting difference: the slave owners expropriated the difference between the market wage and what the slaves consumed, and that difference simply showed up in capital’s share of income. But one of Fogel’s arguments was that there was no wage at which free workers (whether antebellum whites or postbellum freedmen) would ever work in gangs as the slaves had been forced to do on plantations. So what could the 1860 market wage even mean for the purposes of the price dual ?

You can substitute the cost of slave consumption as an imputed wage. Ransom & Sutch (pg. 212) estimate the dollar value of slave consumption in 1859 at $29-32 per year, so say $3 per month. Assume cotton prices were unchanged between 1860 and 1880. Then simply replicating Yang with the monthly slave consumption for 1860 and the market wage rate for 1880 would imply a large increase in labour productivity in cotton ( 9.26 ÷ 3 ). This figure is not to be taken seriously. It is intended only to show that, if the estimate were redone with slave ‘wages’ for 1860, the price dual would almost certainly imply no drop in labour productivity for the cotton sector by 1880.
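The back-of-envelope substitution above amounts to a single division (the figures are the ones quoted in this post; the unchanged cotton price is an assumption):

```python
# Price-dual labour productivity change = wage growth / output-price growth.
# Figures are those quoted in the post; the flat cotton price is assumed.

wage_1880 = 9.26            # monthly farm wage, 1880 (Yang, WCC-EM)
imputed_wage_1860 = 3.0     # monthly slave consumption (Ransom & Sutch, ~$29-32/yr)
cotton_price_ratio = 1.0    # cotton price assumed unchanged, 1860-1880

productivity_ratio = (wage_1880 / imputed_wage_1860) / cotton_price_ratio
# roughly a threefold rise rather than a fall
```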


Note on the postbellum deterioration of the cotton-growing environment

In Creating Abundance: Biological Innovation and American Agricultural Development, Alan Olmstead and Paul Rhode report that the cotton-growing environment of the South in the years after the Civil War deteriorated from worm infestation:

[Excerpt from Olmstead & Rhode, Creating Abundance, on postbellum worm infestation]


For more on productivity issues from the cliometric literature, see the comments section of the first post (which is very tedious).


Filed under: cotton, Economics of Slavery, Edward Baptist, Slavery Tagged: cotton productivity, Edward Baptist, Emancipation, Robert Fogel, Stanley Engerman, The Half has never been told, Time on the Cross, Without Consent or Contract

Economic History Link Dump 15-01-2015


A haphazard mass, a chaotic carnival, a Bikini Atoll, of links relating to economic history, political economy, and allied matters. I also have brief comments on some of the links.


I just decided to start doing this, so I am going to do a link-dump of stuff I found interesting in the last couple of months, to the extent that I can remember having found them in the last couple of months. Note : I am not necessarily endorsing the content. I include them because I find them worth reading. The links in bold are the ones I’m highlighting.

I nominate Gregory Clark’s The Son Also Rises: Surnames and the History of Social Mobility as, if not the best in terms of reading pleasure, then at least the most important book of 2014. (I think Piketty has already gotten enough coverage.)

Diane Coyle has put up a page full of economics books forthcoming in 2015. One of them, British Economic Growth 1270-1870, I already have in hand and may review at some point. Readers of this blog know that I don’t put too much stock in Broadberry’s GDP estimates, but the book definitely contains many virtues. Another book I just got is unfortunately so far available only in Spanish: El Fin de la Confusión, about the 200 years of “errors which have impeded the development of Mexico”. I read Spanish, but rather slowly, so I haven’t gotten to it yet; according to the description, it promises to be the offspring of Daron-Acemoglu-with-Deirdre-McCloskey. Macario Schettino, a well-known analyst in Mexico, sort of combines the functions of Robert Samuelson, Dave Warsh, and Martin Wolf.

Bad History

Anton Howes proposes a WikiErrata for sloppy or otherwise bad citations in scholarship. Speaking of which, Jo Guldi and David Armitage, authors of The History Manifesto, affirmed at Columbia University, “The History Manifesto is not an attack on microhistory. It is an attack on the discipline of economics!” (See 15:20 in the video.) Less than a week later at the Harvard Book Store, they reaffirmed, “Enemies we’re shooting against in The History Manifesto are short-termists, determinists, and with apologies, the economists”. Interestingly, in October 2014, they had presented their book at the LSE and were quite mild and bland about economists when a member of the audience inquired. Hmmm, I wonder what changed between October and November ?

Deborah Cohen and Peter Mandler, historians at Northwestern and Cambridge respectively, have penned a scathing review of The History Manifesto, which is scheduled to appear in the February 2015 issue of The American Historical Review. They agree (on page 16 of the preprint) with my comments on the book’s treatment of Joel Mokyr, Paul Johnson, and Stephen Nicholas.

Slavery & quasi-slavery

In an earlier blogpost I discussed whether slave cotton was necessary to British industrialisation in reaction to the claim made by Edward Baptist. Bradley Hansen addresses Baptist’s other claim, about how important slavery was to American economic development. See also his Back of Ed Baptist’s Envelope.

In Mexico in the late 19th century, Javier Arnaut finds internal divergence in regional wages and attributes it to the lack of labour mobility, itself due to debt peonage systems of labour in the centre and south of the country. Institutional wage repression is also illustrated by Kevin Bryan blogging a paper by Dippel, Greif, & Trefler. When cane sugar prices collapsed in the 19th century after the introduction of beet sugar, wages fell in some Caribbean sugar colonies, as expected, but rose in others. Earlier, following the abolition of slavery, the island planters had resorted to bonded labour. It turns out, the islands where wages rose were the ones least suited agronomically to sugar cultivation, and the marginal planters quit after the sugar depression.

The blog A Fine Theorem is highly recommended overall, as its author often blogs on economic history, particularly from a theoretical perspective.

Argentina

Speaking of extractive institutions, I really dislike the Acemoglu-Robinson “reversal of fortune” paper. I stated my reasons here; like Joe Francis, I think high population density in 1500 implied low incomes, the opposite of the peculiar and unconventional stance taken by Acemoglu and Robinson. Francis, a specialist in Argentine economic history, is not a prolific blogger, but most of his posts are interesting and derive in one way or another from his PhD dissertation at LSE. I particularly recommend those on “Mickey Mouse Numbers”; a critique (and follow-up) of Jeffrey Williamson’s Trade and Poverty: When the Third World Fell Behind (which I really like); and (though a bit obscure for those without interest in Argentina) the “Halperín Paradox”.

Francis approaches most of these topics from a data quality angle. And the piece with the widest potential appeal is the one where he asserts “Argentina’s apparent decline is…an illusion created by faulty GDP statistics“. That flies in the face of the conventional wisdom about the well-known “century-long decline of Argentina“, standard treatments of which include this and, more recently, this.

Political economy

An old paper by Dani Rodrik and Arvind Subramanian argued (inter alia) that the origins of India’s economic reform went back to the 1977 election in which the Congress Party was defeated for the first time since independence. After the 1980 return to power, Congress became selectively more pro-business, and Rodrik & Subramanian found that state governments in India which were allied with Congress “experienced differentially higher growth rates in registered manufacturing”. That’s what I had in my head when I saw The Hindu newspaper reporting on the state-level representation for the recently victorious BJP :

[Chart from The Hindu: state-level representation of the BJP]

Noah Smith argues in his Bloomberg column that contrary to stereotype, Japan is a fractious polity which hides its inability to enact needed policies by presenting the image of harmonious consensus to the world. That’s hardly news, but the interesting part is his initial metaphor — the chaos of Japanese decision-making during the Second World War. A good background is Jacob Schlesinger’s Shadow Shoguns, the portrait of Japan’s post-war political machine run by the Liberal Democratic Party. The LDP is actually a loose coalition of parties, not a party in the ordinary sense.

Benjamin Preisler, who blogs in three languages, has a good Monkey Cage explainer about the 2014 elections in Tunisia. Those from the inland regions who had led the 2010 revolution lost in the recent elections to those from the wealthier coastal areas. The latter had been the primary beneficiaries (and parasitic constituents) of the ancien régime. See also the 2011 election results.

[Maps: Tunisian election results, 2014 and 2011]

Was the Arab Spring caused by volatile food prices, as some have argued ? In the general case, Marc Bellemare finds that between 1990 and 2011, high food prices, but not the volatility of prices, were associated with riots. This poses a problem for regimes that have difficulty financing food subsidies aimed at urban populations, where riots occur.

A propos of bread & circuses, Frances Coppola digs up an old paper by Rudiger Dornbusch and Sebastian Edwards and situates the current ruinous state of Venezuela in the long history of Latin America’s populist policies. That paper, by the way, was turned into a very good book, The Macroeconomics of Populism in Latin America, with many country examples.

Erik Meyersson summarises the deterioration of Turkey’s political institutions with a neat spider graph (clickable):

[Spider graph: the deterioration of Turkey’s political institutions]


The IMF and Ebola

The economist-turned-political-scientist Chris Blattman criticised a letter in Lancet arguing the IMF is partly to blame for the Ebola outbreak. (The IMF’s response.) The Lancet authors replied, and an anthropologist-and-poli-sci pair kind of scoffed at Blattman and recommended some readings. He counter-counter-responded in a fine display of dismal scientism. (Roving Bandit’s take.) But the best, quasi-definitive comment is by Morten Jerven, the author of Poor Numbers which I highly recommend. (He also has two new books coming out soon, plus see his World Bank blogpost, “The Dismal State of Numbers for Economic Governance in Africa“.)

Institutions

If you read through everything, you will notice neither Blattman nor Jerven rates too highly the empirical “institutions” literature in economics. When you have some area and period knowledge of actual institutions, that literature will seem kind of thin. (Branko Milanovic once made an acid remark, “Acemoglu reads like Wikipedia with regressions”.) I recommend once again Dietrich Vollrath’s The Sceptic’s Guide to Institutions (in 4 parts), along with Chris Blattman’s institutions reading list, emphasising political science and history. Vollrath, the cautious empiricist, will be the first to tell you not to overread his scepticism, but everyone already agrees that “institutions” are important. The real issue is that the institutions literature is riddled with identification problems, especially the global cross-country regressions. The papers that deal with them best tend to be smaller-scale studies exploiting natural experiments (see Part 3). But those are about “persistence” just as much as about “institutions”, anyway.

Beware of maps and eyeball correlations !

Many people (including me, as well as some people in Poland) have been misled by nicely coloured electoral maps like these from Wikipedia :

[Maps: Polish electoral results, from Wikipedia]

To the west of the 1918 Russo-German frontier, liberals (PO, orange) tend to win in current Polish electoral districts ; to the east, more conservative, religiously-orientated parties (PiS, blue) dominate. The map makes you believe there is a stark contrast between west and east, as though the ghost of the pre-1918 borders continues to exert an eerie influence. But those colours are an illusion. There is no discontinuity between orange and blue :

[Figure: vote shares across the former partition borders]

Discontinuities do exist at the 1918 Russo-German border, but only in the number of votes for post-communist parties that nationally get <20% of the vote ; and at the old Austro-Russian border, in the vote for liberals.

In a discussion of interregional income inequality in the UK, Tim Worstall common-sensically points out that the gaps are exaggerated because of differences in the local cost of living. Rent is a bit steeper in London than in Lothkrackenclyde-on-the-Despond. But that also means all maps like the following are misleading about the regional differences in GDP per capita, at least as measures of consumption living standards. GDP in such maps is usually deflated by national, not regional, price indices.

[Maps: UK regional GDP per capita]

Ethnic observations à la Thomas Sowell

The Counterreformation had some famous economic effects. After Antwerp fell to the American-bullion-fattened armies of the Spanish king, Protestants in Flanders were given the choice of exile or recantation. Flemish Protestants thus fled to Dutch cities (where they also found Sephardic refugees from Spain), and they boosted the English wool industry in Norwich. Many of the merchants in 18th-century London were Huguenots. But less well known is the impact of Huguenot refugees in Prussia.

South African economic historian Johan Fourie blogs about a paper he co-authored which found that Huguenot settlers in the Cape Colony who had come from wine-producing regions of France were more productive than those from wheat-producing regions, and this disparity endured for almost a century. He also observes the great expansion of viticulture and the shortage of labour in the Cape Colony helped lay the foundation of slavery in South Africa. By the way, the paper in question appeared in the Economic History Review which had a special issue devoted to the “Renaissance of African Economic History“. Bonus: The Economics of Apartheid

There’s now a slightly middle-aged body of social science literature showing that in developing countries, heterogeneity (religious, linguistic, ethnic, etc.) is associated: (a) with higher levels of inefficient redistribution, i.e., resources from one group are diverted by politicians of another group as patronage to their own; (b) with lower levels of public goods provision at the national level (because groups don’t want to help pay for goods which other groups can also enjoy); and (c) with higher levels of regional public goods. (I think much of this literature has one big mother of an omitted variable, but I won’t get into that for now.) Addressing this issue of social capital, Kevin Bryan has a fantastic post on a paper which uses a natural experiment with Native American reservations in the USA.

Race

At least 19% of US census-designated blacks passed for white in 1880-1940, with approximately 10% reverting to black. The act of passing itself is not news, but I think this is the first time numbers have been estimated with an explicit methodology.

[Chart: estimated rates of racial passing]

[Edit: Nix & Qian’s conclusions about racial passing were challenged by Greg Cochran. See the comments. At first I disagreed, but came to agree with him.]

Interesting to consider in conjunction with the above is “The Genetic Ancestry of African Americans, Latinos, and European Americans across the United States“. Most self-described whites in the United States have little non-European ancestry. The open-access paper has graphics for the geographical distribution of admixture rates for these groups, but this one is worth highlighting (clickable):

[Map: geographical distribution of admixture]

You should read geneticist Razib Khan’s commentary on the paper. Also: his thoughts on the future of the racial makeup in the United States, which are well informed by history.

Religion & cultural norms

In 107 counties of China’s Shandong province during 1651-1910, those with more schools (implying more education in the Confucian classics), more Confucian temples, and more “chaste women” experienced fewer peasant rebellions in times of bad weather. (Also noteworthy: the use of principal components analysis to create an index of Confucian norms.)

[Figure from the Shandong study]
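For readers curious what such a principal-components index looks like mechanically, here is a sketch: the first principal component of the standardised county variables serves as the norms index. The county data below are invented, and plain power iteration stands in for a statistics library; this is not the paper’s actual procedure or data.

```python
# Sketch of a one-dimensional "norms index" via principal components.
# Invented data: counts of (schools, temples, chaste widows) per county.

def first_pc_scores(rows):
    n, k = len(rows), len(rows[0])
    # standardise each column (population mean and sd)
    means = [sum(r[j] for r in rows) / n for j in range(k)]
    sds = [(sum((r[j] - means[j]) ** 2 for r in rows) / n) ** 0.5 for j in range(k)]
    z = [[(r[j] - means[j]) / sds[j] for j in range(k)] for r in rows]
    # covariance of standardised data = correlation matrix
    cov = [[sum(z[i][a] * z[i][b] for i in range(n)) / n for b in range(k)]
           for a in range(k)]
    # power iteration for the leading eigenvector (the PC1 loadings)
    v = [1.0] * k
    for _ in range(200):
        w = [sum(cov[a][b] * v[b] for b in range(k)) for a in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # project each county onto the leading eigenvector
    return [sum(z[i][j] * v[j] for j in range(k)) for i in range(n)]

counties = [[2, 1, 5], [8, 6, 30], [3, 2, 8], [9, 7, 35], [1, 1, 4]]
index = first_pc_scores(counties)
```

Because the three (invented) variables move together, the index simply ranks counties from least to most “Confucian”; a real application would also inspect the loadings and the share of variance captured by the first component.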

L’Islam n’existe pas is how I would summarise, perhaps tendentiously, the position taken by Razib Khan for a long time and which I agree with enthusiastically. Only Muslims, with actual beliefs and behaviours rooted in history, exist. Scripture, doctrine, and theology are the opiate of the intellectuals, and do not explain in any fundamental way the beliefs and behaviours of ordinary people. So what “Islam really is” — or for that matter, what “Christianity really is” — should be left to the believers. Yet people, both critics and defenders, still impute an essence to religions, even though no essence can exist. Khan approaches religious behaviour and belief from an anthropological-psychological-evolutionary perspective, which is right, but one might infer the same thing from history — which seems to have determined the formation and interpretation of scripture and doctrine, rather than the other way around.

“Neither Islam nor Western foreign policy drives terrorism” (a summary title belonging to Ben Southwood, not to the author) is a fine survey of recent literature. But it’s stronger on the foreign policy part than on the Islam part, mostly because “Islam” is not clearly or relevantly identified. Degree of religiosity ≠ sociocultural identification as Muslim. Better-educated clerics => less support for jihadism may be true but could be misleading, as better-educated clerics also imply greater official cooptation (e.g., al-Azhar University). Besides, since when did jihadis need religious education to declare themselves competent to pass fatwas ? The post also does not address perhaps the most salient issue: why terrorists are disproportionately Muslim.

A paper in Current Biology, of all places, argues that the rise of the great moralising, ascetical religions of the Axial Age — including Buddhism, Daoism, Stoicism, and Second Temple Judaism — was ironically caused by affluence. Such religions emerged in societies where energy capture exceeded 20,000 kilocalories per day. (Readers of Ian Morris will be familiar with the concept of energy capture.) Peter Turchin, evolutionary ecologist turned quantitative modeller of historical dynamics, offers a cogent critique in two parts.

Inequality

Turchin also spoke on BBC about the deep origins of hierarchy and inequality. Speaking of which, those interested, either in long-run inequality dynamics, or Malthusianism in general, really must read Turchin’s Secular Cycles. It wades into The Brenner Debate of the 1970s and presents a combined economic-institutional quasi-solution to the wage/population boom-and-bust cycles of the Middle Ages. Elite extraction and income inequality matter to the dynamics. In my opinion, Clark’s neo-Malthusianism in A Farewell to Alms was a step backward, as it contained purely economic logic. Clark also moved away from Malthus’s original emphasis on wages and shifted toward income as the key variable. He has occasionally hinted at income inequality as a factor (with the implication that the labour share matters more), but never really explored it.

Turchin is also coming out with a book on the history of inequality dynamics in the United States. This Aeon article from 2013 gives a foretaste of it, as does the blogpost that adds some detail about that article. It’s really a very different way of looking at income & wealth inequality over time than you find in Piketty or other standard treatments.

At the group-run Economic History Blog-Latin America, I idly asked the Uruguayan economic historian Javier Rodríguez Weber how Chile’s median income compared with Uruguay’s. To my surprise, he responded by writing a whole blogpost, “Tell me what quintile you belong to, and I’ll tell you how much chicken you eat” (in Spanish).

[Chart: income by quintile, Chile and Uruguay, 2011, in PPP dollars]

Chile’s GDP per capita is ~12% larger than Uruguay’s, but, according to Rodríguez’s rough calculation, the median income is ~13% lower. In fact, income at every quintile is just a little higher in Uruguay than in Chile, except for the top (fifth). Chile has had higher economic growth than Uruguay, but in this particular case, Thatcher’s iconic “gap” rebuke does not apply (around 1:30) :


Demography

Anatoly Karlin wrote up a really fantastic correction of outdated views of Russian population issues, “The Normalisation of Russian Demography“. Keep this in mind if you read idiots like Masha Gessen.

[Chart: Russian demographic indicators]

U.S. mortality rates fell during the worst of the Great Depression and had risen during the 1920s boom. The paper doesn’t explore the reasons, but I would guess, severe deflation implied that real wages were higher for those who remained employed.

The Japanese family is at the lower bound of western norms regarding extended families, the relationship of grandparents to their adult children, etc.

Over the course of a century the North-South disparity in German life expectancy flipped. Of course, the regional gap is smaller now than a century ago, but it is still about 5 years, which is pretty big for developed countries.

[Maps: German regional life expectancy]

Finally, “The History of Fertility Transitions and the New Memeplex”, on the role of ideas in the demographic transition, plus a good brief history of ideas about fertility in general. Not sure what to make of it, but interesting.

General development themes

Bill Gates, whom I had always expected to speak fluent Ur-Davossprach, had a nice review of Studwell’s How Asia Works. Which reminds me that there are several topics in South Korean development I want to blog about. Two posts by Dietrich Vollrath sort well with how I generally think about growth/development. Trust, families, & economic growth was prompted in part by reading Mitterauer. I understand Vollrath didn’t want to load up the post with references, but some nod at Francis Fukuyama might have been apropos, since he argued in Trust that economists (at least at the time of his writing) paid too little attention to trust as an important factor in economic development. Another classic in the trust-and-development theme is The Moral Basis of a Backward Society. Another post, Populations, not countries, dictate development, might have been titled “It’s the People, Stupid !” or, in the voice of Charlton Heston, “Institutions are made of people !”

The Commercial Revolution

Robert Allen’s theory of the Industrial Revolution starts from the observation that England had high wages (and cheap fuel). Branko Milanovic, who really likes Allen’s theory, still wonders why England in the 18th century had higher wages than the rest of Europe in the first place. He tables a hypothesis by Mattia Fochesato : feudal institutions remained stronger in southern & eastern Europe after the Black Death than in the northwest. Milanovic does not mention the hypotheses of Voigtländer & Voth : after the Black Death, a virtuous-vicious cycle of increasing urbanisation and elevated mortality from disease & war checked population growth more in the Northwest than elsewhere in Europe (Malthus’s “positive check”), and there was more fertility restriction via a higher age of first marriage (Malthus’s “preventive check”). These hypotheses are not mutually inconsistent. Allen aired his own theory in his book : a virtuous cycle of urbanisation, agricultural productivity growth, international trade, and “proto-industrialisation”.

The last is primarily about the shift of wool textiles production away from northern Italy to England and the Low Countries. Allen says one reason for the shift was that the staple length of English wool increased, compared with that of Mediterranean sheep. Northern sheep were better fed than southern sheep for the same reason people were better fed after the Black Death. But I wonder if deliberate sheep breeding at the time of the Little Ice Age did not play a role ? Anton Howes, there is much more room for British ingenuity.

( Edit: Morgan Kelly has emailed me to note that he and Cormac Ó Gráda could find no evidence of the Little Ice Age in Europe’s annual temperature series from the Middle Ages to the 19th century. Cf 1, 2 )

Miscellaneous

I’ve not had the opportunity to take a close look at the working paper Unified China, Divided Europe, but the literature survey, the data visualisation, and the topic alone make it worth reading.

“How Agricultural Science Struggled to Defuse the Population Bomb”, via Matthew Holmes, a historian & philosopher of science with an environmental and agricultural focus, which is itself kind of fascinating.

From T. Greer: The “Kremlinology” of Chinese economic reform. (I don’t agree that, whatever kind of financial adjustment happens, China would be left in a Japan-like lost decade of stagnation.) A Short History of Han-Xiongnu Relations and its followup, plus Every book read by T. Greer in 2014.

If you’re a fan of Alex Mesoudi, there’s an excellent précis of cultural evolution theory. (Link thanks to T. Greer and Razib Khan.) Faster than Fisher, arguing from a simple population-genetics model why the spread of lactose tolerance had to be by natural selection. Crops & cousin marriage in France. By “boiling off” the less-Amish, the Amish become ever more and more Amish. Via Emil Kirkegaard, the International General Socioeconomic Factor (“s”, in analogy with “g”). The American Historical Review had a special issue on “History & Biology”, though I think the only ones worth reading are the entries by Harper and Scheidel.

From Ben Southwood, the jolly libertarian shock trooper at the Adam Smith Institute: The “Genetics of Political Views” is a neat digest of a survey of the literature on the biological roots of political orientation. Another post, “Why does the son rise?” comments on some of the literature on whether positive financial shocks (aka “luck”) have a persistent effect on economic outcomes, including my favourite, the natural experiment supplied by the Cherokee Land Lottery. I scoff at the idea that “per capita innovations” is a meaningful metric of any damned thing, but “Are we innovating less?” contains a variety of references worth reading.

Another guy who only read the first couple of chapters of A Farewell to Alms. Maryland’s Protestant Revolution. This World Bank blogpost is only interesting for this chart. (It also says Costa Rica’s standard deviation is 75% lower than Taiwan’s.)

[Chart from the World Bank blogpost]

Finally, the Industrial Revolution

Anton Howes has questions about the Ming dynasty. Does it really bloody matter, protectionism or free trade ? (It’s a little overstated, because terms of trade matter. Plus Howes is waaaaay too nice with Ha-Joon Chang. But it’s a salutary corrective, since there is an overemphasis on trade amongst both free-marketers and left-wingers.) Immigration and the Industrial Revolution. Howes, why no Flemings and Huguenots ? When exactly did the Industrial Revolution start ? (I agreed with this when he first posted it, but now I disagree.) Hey, silly physicist, you can’t touch most of GDP.

Dietrich Vollrath notes, Robert Allen‘s and Joel Mokyr’s theories of the Industrial Revolution are often depicted as opposed, but are complementary. He echoes Nicholas Crafts on this point. Anton Howes, who aims to be for the British Industrial Revolution what Lord Byron had been for Greek independence, comments on Allen’s induced innovation (implicitly) from the Industrial-Enlightenment and Bourgeois-Dignity points of view. The adoption of technology is not governed by the same process as the invention of it. I think Howes’s blogpost should have been titled, “There are more things in heaven & earth, Professor Allen, than are dreamt of in your factor price theory”. Another suggested re-title : “Cry God for Invention, England, and Saint George!

 


Filed under: Economic History, Links

The Little Divergence


Summary : A “great divergence” between the economies of Western Europe and East Asia had unambiguously occurred by 1800. However, there’s a growing body of opinion that this was preceded by a “little divergence” (or “lesser divergence”?) which might have started as early as 1200. I argue that the pre-modern “little divergence” was probably real, but that doesn’t mean it happened because of a modern growth process — a sustained rise in the production efficiency of the divergent economies.


[Warning : This blogpost is mostly about how data on incomes from the pre-modern period are constructed. I’ve done my best to minimise details, but I cannot guarantee it won’t be as boring as atonal music performed with a spoon.]

(1)

The “little divergence” may now be close to a consensus view amongst economic historians both in Europe and the United States. In a way it’s a reaction to the revisionist book by Kenneth Pomeranz, The Great Divergence, which argued that Chinese and Western European economies had been fairly comparable as late as 1800. Pomeranz and the “California School of Economic History” are themselves the culmination of the “global systems” macro-histories exemplified by Fernand Braudel. Pomeranz then set off a cascade of dense elaborations by historians of Asia. Before Pomeranz and the Asian revisionism, most histories had pegged the start of the divergence between the two coasts of Eurasia to about 1500 or 1600. But in countering the Pomeranz revisionism, economic historians ended up pushing back the divergence to the High Middle Ages !

These two charts (source) encapsulate the little divergence :

littledivergence2littledivergence1

The modern growth of Northwestern Europe after 1800 is now deemed a mere acceleration — albeit a very great acceleration — of an almost millennium-long trend. So people may marvel at the technological sophistication and scientific cleverness of the Song or the Ming or the Bling Dynasty, but in the final, brute number-crunching of per capita incomes, the wretched peasants of Western Europe had shot right past all of them.

Such views are now embedded in the popular imagination, as evidenced in the Atlantic magazine website from which I extracted those charts, as well as in a Vox article by one of the foremost proponents of the “little divergence” himself. (Examples of blogs using the same data or making similar claims : 1, 2, 3)

In this blogpost I will argue the following :

  • While very few economic historians now dispute that East Asia had lower living standards than Europe well before 1800,
  • there is no agreement on whether European economies prior to 1800 were “modern” or “Malthusian” ;
  • if they were Malthusian, then the “little divergence” is rather trivial and unremarkable.
  • Furthermore, the income “data” for years prior to 1200 are mostly fictitious.
  • While real data exist after 1200 for Western Europe and China, output estimates are still calculated using assumptions that, were they better understood, would shatter confidence in the enterprise of economic history !

(2) Malthusian or Modern ?

In the Malthusian “biological” or “organic” economy, the level of technology at any given time permitted only a certain number of people to live off any given piece of land. The carrying capacity could vary according to the natural ecology of the land, because some environments are naturally more productive than others. Different peoples also possessed different levels of technology, defined in the widest possible sense as the stock of knowledge about the manipulation of the environment. When a people entered new, empty land, they would reproduce themselves until their population hit the carrying capacity — just like caribou or horse flies.

Of course, people can improve the carrying capacity through technological innovations, but in the premodern world those were very slooooow to happen and very rare in comparison with today.

I don’t want to go into too much detail, because you can read about the Malthusian model anywhere. (There are strong and weak versions.) Suffice it to say for my purposes, under Malthusian assumptions, per capita income was determined exclusively by the birth rate and the death rate.

This does not necessarily mean that the average person was living on the edge of starvation. That is a common misconception. To the contrary, the neo-Malthusian model implies that anything which lowers the birth rate or increases the death rate will raise the living standards of the average person. This is why different societies with different fertility practises and mortality conditions had very different income levels.
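The logic can be sketched in a toy simulation (my own illustration with made-up parameters, not anything from the literature): wages fall with population on fixed land, fertility responds to the wage, and mortality is exogenous. In the steady state, births equal deaths, which pins down the wage from the vital rates alone; technology only changes how many people live at that wage.

```python
# A minimal Malthusian toy model (illustrative; all parameters invented).
# Wage falls with population (diminishing returns on fixed land);
# the birth rate rises with the wage; the death rate is exogenous.
# Steady state requires births = deaths, so the wage is determined by
# vital rates alone: w* = death_rate / b0, regardless of technology A.

def steady_state_income(A, beta, b0, death_rate, n0=1.0, years=20000):
    n = n0
    for _ in range(years):
        w = A * n ** (-beta)            # per capita income at population n
        birth_rate = b0 * w             # fertility responds to income
        n += n * (birth_rate - death_rate)
    return A * n ** (-beta)             # steady-state income

# Two societies with identical technology but different mortality:
w_low_mortality  = steady_state_income(A=10, beta=0.5, b0=0.01, death_rate=0.03)
w_high_mortality = steady_state_income(A=10, beta=0.5, b0=0.01, death_rate=0.04)
# The more lethal society ends up richer per head (w* = 0.04/0.01 = 4.0
# versus 0.03/0.01 = 3.0), exactly the "lethal but richer" point above.
```

Doubling the technology parameter `A` in this sketch leaves the steady-state wage unchanged and only doubles the equilibrium population, which is the sense in which sophistication and living standards come apart in a Malthusian world.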

As far as I can tell, few people dispute that Western Europe was richer (per capita) than East Asia or India well before 1800. Gregory Clark in A Farewell to Alms argued that the daily wage, expressed in terms of wheat-pounds or rice-pounds, was much lower in Asia than Western Europe. But it was also much lower in East Asia and India than in Turkey, Egypt and Poland. Other lines of evidence all point to the same thing : the inhabitants of East Asia and India may have had the lowest living standards on earth before the modern period. Paradoxically, this was a sign of cultural sophistication and/or ecological good fortune, for Asian societies were capable of squeezing more people onto a piece of land than other societies.

It’s now well known that in mediaeval Western Europe women married later than in other parts of the world, and fewer women got married in the first place. This had the effect of reducing fertility rates well below the biological maximum. In East Asia, the female marital age was much lower, but a combination of infanticide, birth-spacing and other factors apparently kept net fertility only a little higher than Western Europe’s. Thus, under Malthusian assumptions, East Asia’s relative poverty is largely to be explained by its lower mortality : life in Western Europe was simply more lethal but richer, whilst more East Asian adults survived and lived longer but more miserably. The differences in mortality could be due to differences in disease prevalence, hygienic practises (such as bathing), medical knowledge or public health knowledge.

So, the question is, was the “little divergence” in living standards between Europe and Asia the result of “modern” or “Malthusian” mechanisms ? That is, was Europe’s income higher than China’s and Japan’s because the Europeans were becoming more efficient at extracting output from land, capital and labour long before 1800 ? Or is it simply that Europe and Asia had different birth and death schedules ?

If it’s the latter, then the “little divergence” is trivial and uninteresting. Or perhaps it’s interesting in the perverse sense that East Asia might have been poorer than Western Europe only because East Asians discovered earlier not to shit on themselves, precisely because they understood the commercial and technological value of human faeces.

The previous questions can also be rephrased : was there a rising trend of income in Western Europe over the long run before 1800 ? And was what happened to Europe some time in the 18th century a major break with the past ?

(3) North-Central Italy

Most economic historians are either anti-Malthusians or “moderate” neo-Malthusians who think England and other European countries started slowly escaping their ecological constraints earlier than 1800. A fairly small camp of radical neo-Malthusians maintain a view which can be summarised by Gregory Clark’s assertion for England : “England in 1381, with only 55 percent of the population engaged in farming, was at income levels close to those of 1817”.

There is little dispute that between 1300 and 1850 there was long-run income stagnation in North-Central Italy, which is right now one of the richest regions of Europe. The two following charts are both from Malanima :

malanimagdp

In the above, income is represented by the aggregate consumption of goods, which itself is computed, essentially, by {daily wage rate} x {number of working days per year} x {prices of basic goods}, along with (very crucial) weights for these variables — based on theoretical assumptions about how Italians of centuries ago might have switched between goods when prices and wages changed. The number of working days per year is unknown, but Italians are assumed to have behaved much as peasants in the poorest countries today who tend to work more when wages fall and work less when wages rise. Hours of work per day, which are also unknown, are assumed to be constant over time. (This is not stated explicitly in Malanima, but is true, by implication.) What this means is that when prices were high Italian workers of the past were assumed to just work more days of the week, rather than 4 extra hours a day from Monday to Thursday, in anticipation of the demoralising bony fish on Friday…
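A stylised version of that computation (my own sketch with invented numbers and an invented labour-supply elasticity, not Malanima’s actual procedure) makes the moving parts explicit: annual real income per worker is the daily wage times days worked, deflated by a fixed-weight basket, with days worked assumed to move inversely with the real daily wage.

```python
# Sketch of the wage x days x prices construction described above.
# All numbers and the elasticity are illustrative assumptions.

def real_annual_income(daily_wage, prices, weights, base_days=250,
                       elasticity=-0.3, ref_real_wage=1.0):
    # cost of the consumption basket (weights sum to 1)
    basket_cost = sum(weights[g] * prices[g] for g in prices)
    real_daily_wage = daily_wage / basket_cost
    # "industrious" labour supply: work more days when the real wage is low
    days = base_days * (real_daily_wage / ref_real_wage) ** elasticity
    return real_daily_wage * days

prices  = {"grain": 1.0, "cloth": 2.0}
weights = {"grain": 0.8, "cloth": 0.2}
y_good = real_annual_income(2.4, prices, weights)  # high-wage year
y_bad  = real_annual_income(1.2, prices, weights)  # daily wage halved
# Because days worked rise as the wage falls, annual income falls by
# less than half when the daily wage is halved.
```

Note what is hidden in the construction: `base_days` and the elasticity are not observed for pre-modern Italy, and hours per day are fixed by assumption, exactly the points made in the text.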

For North-Central Italy, there exist adequate data for wage rates, prices of basic goods, and population. That’s actually pretty good, but we think of Italian mediaeval data as pretty solid only because we compare them with the complete unknowns like the Axumite Empire in Ethiopia or ancient Greece. We probably have more information to judge the economic performance of the Soviet Autonomous Republic of Tatarstan under Stalin in the 1930s or Zaire under Mobutu Sese Seko. Yet we think of both as relatively inadequate, because the reference comparison for those would be Eurostat or the BEA.

Individually, many of the assumptions behind the construction of income data seem reasonable, but, taken together, they are a little dodgy. And when you consider that the above income series looks more or less like the wage series below, you begin to wonder, what was all that painstaking computation all about anyway ? North-central Italian wages over the same period :

italywages

There’s understandable reluctance to rely exclusively on wages, since the proportion of wages in national income can vary when the capital share (i.e., rent, in this case) varies. Malanima does his best to check his income data against more limited information on rents and production.

Few people dispute his reconstruction of Italian data. The Malthusians have no cause to dispute it since the Italian story fits so nicely with the “biological theory of living standards”. The anti-Malthusians, perhaps, don’t find it implausible that Italy, even north-central Italy, was so stagnant over such a long period. After all, they didn’t start the Industrial Revolution, did they ?

(4) England : Broadberry versus Clark

The argument is largely over England (and, perhaps the Netherlands). And that battle is best encapsulated in this chart of competing estimates of income per capita for England over 600 years [source] :

clarkbroadberry

(The rival sets of economic aggregates are described and compiled in Clark and Broadberry.)

How you view English economic history prior to 1800 — Malthusian or modern — depends on your opinion of the estimate of English income in 1400-1450. If income was high, per Clark, then the time series would look Malthusian. If, however, income was low, per Broadberry, then there was a subsequent long-run trend, which would be consistent with the slow-but-modern view of English economic growth.

Clark’s view is that despite ups and downs England in the mid-18th century was no richer than it was in 1350, and the 1350 standard of living was high by comparison with the rest of the world at the same time or most of Sub-Saharan Africa in the present. That is, England was always fairly well off — because England controlled fertility and had high death rates. Broadberry, by contrast, believes England in 1350 was about as poor as Tanzania today (and poorer still in 1250), but English income rose slowly but reliably over the next 500 years because farmers, artisans, craftsmen, and merchants were getting slowly more efficient at their tasks.

What accounts for the difference between the two estimates ? Remember, Clark’s income for 1450 is roughly double Broadberry’s. That’s a big gap. Clark, like Malanima, aggregates wage data, but pre-modern England is also much richer territory for the economic historian with its bounty of records about rents, tithes, sheep counts, wills, tax records, etc. Broadberry uses pretty much the same data as Clark, but computes the physical output of goods.

In modern GDP accounting, there are three separate methods of computation which serve as checks on one another : the income approach (incomes received by workers and owners of capital) ; the output approach (the sum of physical output minus inputs in the business & public sectors) ; and the expenditure approach (the sum of spending by households, businesses, and the government). There are smallish discrepancies in the GDP estimates from these three approaches, but they get reconciled plausibly in a predictable way.
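As a toy illustration of why the three approaches serve as checks on one another (a made-up two-sector economy; all numbers invented): by construction the three totals must agree, and in real accounts the leftover discrepancy is what gets reconciled statistically.

```python
# Illustrative two-sector economy; the three GDP approaches coincide.

# Output approach: gross output minus intermediate inputs, by sector.
farm_value_added  = 120 - 20    # grain sold minus seed/feed purchased
craft_value_added = 80 - 30     # cloth sold minus wool purchased
gdp_output = farm_value_added + craft_value_added

# Income approach: incomes paid out of that value added.
gdp_income = 100 + 50           # labour income + rent/capital income

# Expenditure approach: final spending.
gdp_expenditure = 130 + 15 + 5  # households + investment + government

assert gdp_output == gdp_income == gdp_expenditure == 150
```

The mediaeval problem is that Clark’s income-side and Broadberry’s output-side totals do not come close to agreeing, and there is no expenditure side to adjudicate between them.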

But for the Middle Ages, the wage approach has always been more popular because it’s thought to be simpler and more straightforward, involving fewer assumptions. Broadberry himself describes how wages have been the most traditional way income has been calculated by English economic historians :

“The quantitative picture of long run economic development in Europe is based largely on the evidence of real wages. In the case of Britain, the standard source is Phelps Brown and Hopkins (1955; 1956), who showed that there was no trend in the daily real wage rates of building labourers from the late thirteenth century to the middle of the nineteenth century, albeit with quite large swings over sustained periods. This view has recently been supported by Clark (2004, 2005, 2007a), who constructs a new price index, refines the Phelps Brown and Hopkins industrial wage series and adds a wage series for agricultural labourers. In addition, Clark (2010) provides new time series for land rents and capital income to construct a series for GDP from the income side. This new series is dominated by the real wage and hence paints a bleak Malthusian picture of long run stagnation of living standards and productivity.”

But the anti-Malthusians are sceptical — incredulous, really — of the wage-based results, because, in Broadberry’s words :

“…there are good reasons to be sceptical about this interpretation of long run economic history [based on wage data], which seems to fly in the face of other evidence of rising living standards, including the growing diversity of diets (Feinstein, 1995; Woolgar, Serjeantson and Waldron, 2006), the availability of new and cheap consumer goods (Hersh and Voth, 2009), the growing wealth of testators (Overton, Whittle, Dean and Haan, 2004; de Vries, 1994), the virtual elimination of famines (Campbell and Ó Gráda, 2011), the growth of publicly funded welfare provision (Slack, 1990), increasing literacy (Houstan, 1982; Schofield, 1973), the growing diversity of occupations (Goose and Evans, 2000), the growth of urbanization and the transformation of the built environment (de Vries, 1984).”

So Broadberry and his team made a truly herculean effort to count the total physical output of the English economy between 1300 and 1800. The description of their methodology makes for an even more boring read than this blogpost, but I have read it so you don’t have to. The next paragraph may be particularly boring, so skip it if you trust my later characterisation of it.

Just to give you an idea of how Broadberry et al. came up with England’s total agricultural output : they compute the percentage of arable land from many sources ; then estimate the percentage of fallow and cultivated land, mostly inferred from probate records ; assume there are no major differences between manorial land and freehold land ; use Clark’s regression estimates of yield per acre based on a sample of farms across counties ; make allowances for part of the crop set aside as seed (not clear how they inferred that) ; also make allowances for crops fed to animals based on samples of what horses and oxen ate in 1300, 1600 and 1800 (OK, they have different samples for oats and pulses…) ; extrapolate output of the agricultural sector by multiplying yield per acre by cultivated arable land for each crop, minus seeds and feed ; estimate the output of the pastoral sector (i.e., herds) by counting sheep from a sample of manorial records and probate inventories ; assume arbitrarily that 90% of cows and sheep produce milk and wool, respectively ; also assume (what looks to me like) arbitrary percentages of slaughter of livestock ; extrapolate all this to national pastoral output by assuming certain proportions between manorial and freehold stocks of animals ; estimate output of hay by assuming each horse ate 2.4 tonnes of hay per year, with the number of horses also estimated from diverse records.

Then the statistically inferred physical count of output is multiplied by price data supplied by, again, Clark. Note all of the physical output data  are highly discontinuous : more plentiful in the 1700s, available only once every century before the 1500s or maybe a few times between 1500 and 1700. Broadberry et al. were very careful and diligent. They even try to check to see if the number of sheep they came up with for any given century was consistent with what England exported.
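A stylised version of one link in that chain (invented numbers, not Broadberry et al.’s actual figures) shows how many multiplicative assumptions stack up for a single crop: net output is sown acreage times yield, minus seed and animal feed, valued at prices.

```python
# Output-side reconstruction for one crop; every parameter below is an
# illustrative assumption of the kind described in the text.

def crop_output_value(arable_acres, fallow_share, crop_share,
                      yield_per_acre, seed_share, feed_share, price):
    sown  = arable_acres * (1 - fallow_share) * crop_share
    gross = sown * yield_per_acre                  # bushels harvested
    net   = gross * (1 - seed_share - feed_share)  # left for humans
    return net * price                             # in money terms

wheat_value = crop_output_value(arable_acres=8_000_000, fallow_share=0.33,
                                crop_share=0.4, yield_per_acre=10,
                                seed_share=0.25, feed_share=0.05, price=0.5)
```

Seven estimated or assumed parameters for one crop in one period; an error of a few percentage points in each, compounded across crops, livestock, and centuries, is the sense in which the chain of inferences is “iffy”.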

I won’t get into the nonfarm sector, because the preceding makes the point clear : the chain of assumptions and inferences at each step is iffy enough, but when all is said and done, how can we know to trust the aggregates ?

Normally, you compare the GDP estimates calculated with different methods, but in this case, Clark’s and Broadberry’s are very different, especially for the late Middle Ages. Where, precisely, do they differ ? That is, what statistical adjustment is necessary to harmonise Broadberry’s and Clark’s estimates ? The number of days worked ! In Clark’s data, the number of days worked per worker per year stays within the range of 250-280 days over the course of 550 years :

workdays

(Of course, the number of hours worked per worker per year does not even figure in anyone’s calculations, since that is unknowable, even though we really need that information to truly assess the pre-1800 years in the same way we assess the post-1800 years.)

Broadberry does not actually use any of the published days-worked data as shown above. What he does, instead, is impute the days worked from his output estimates. This means, he reconciles his output-based GDP with the wage-based GDP by increasing or decreasing the days worked as necessary to fit his own GDP data. Here are the “imputed” days-worked in Broadberry :

imputeddaysworked

I stress : the third and fourth columns do not contain any values which have been actually observed, or inferred from statistical samples. It’s literally the numbers he needs to make Clark’s wage series “fit” his output series. Broadberry is not being sneaky. He’s quite upfront about his assumptions :

“The second purpose of this paper is to explore the differences between the trends in the real wage and output-based GDP per capita series. The most straightforward way to reconcile the two series is to posit an “industrious revolution”, so that annual labour incomes grew as a result of an increase in the number of days worked, despite the stagnation in the daily real wage (de Vries, 1994).”

The reference is to Jan de Vries, aptly, the author of The Industrious Revolution. For de Vries this “revolution” was his way of reconciling the increase in luxury goods mentioned in wills starting in the 17th century with the reality of stagnating wages. In the narrative he constructed, early modern households, desiring the new luxury goods made available by global trade and New World expansion, supplied more labour than ever before, including that of wives and children. Broadberry allies himself with this story and extends it deeper into the past, because it’s obviously consistent with his output estimates.
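The reconciliation described above amounts, arithmetically, to solving for days worked as a residual (my sketch with invented numbers, not Broadberry’s actual series): if the output side says annual income per worker was Y and the wage data say the daily wage was w, impute days = Y / w.

```python
# Days worked as the residual that forces the wage series to fit the
# output series. Numbers are illustrative.

def imputed_days_worked(output_income_per_worker, daily_wage):
    return output_income_per_worker / daily_wage

# With a flat daily wage, a rising output-based income series can only
# be reconciled by assuming people worked ever more days:
days_early = imputed_days_worked(output_income_per_worker=300, daily_wage=2.0)
days_late  = imputed_days_worked(output_income_per_worker=500, daily_wage=2.0)
```

This is why the third and fourth columns of Broadberry’s table contain no observed values: the “industrious revolution” enters as the balancing item, not as evidence.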

I am not suggesting Clark’s estimates are free of tremendous uncertainties. His wage series have been criticised on grounds of representativeness, for example. But I think his methods are more straightforward and he does use observed or sampled values for the basic aggregates. His estimates do not require hypothesising an unobserved massive increase in English working habits between 1450 and 1600.

There are many other critiques and counter-critiques of both sides, as well as ingenious attempts to cross-check Broadberry’s estimates with other kinds of calculations (especially by Karl Gunnar Persson, cf “The End of the Malthusian stagnation thesis”). But I think that’s enough for now !

(5)

There’s a big cognitive bias in economic history when data are sketchy. “Soft” qualitative factors have this tremendous rhetorical effect on impressions of wealth and poverty. Thus in the 1970s and 1980s, when China’s imperial stagnation was assumed to have started around 1500 or 1600, historians got the idea that the Chinese must have been pretty rich earlier. After all, an enormous Chinese fleet of hundreds sailed to Africa, Marco Polo had been dazzled by paper money and the compass, etc. etc. Such a society must have been pretty damned rich. So historians looked for evidence to confirm those impressions. By contrast, how could semi-naked hunter-gatherers have been better off, even if they worked many fewer hours for many more calories full of protein and fat than your average Chinese peasant ?


Addendum-Final note : Just to avoid ambiguity, I state as baldly as I can the point of this blogpost : the pre-modern “little divergences” were probably real, but that doesn’t mean they happened because the divergent economies were smarter or more efficient. Today people assume that higher income implies more technological sophistication. But in the Malthusian world, inhabitants of “smarter” or technologically more advanced societies could be poorer on average than those of less sophisticated societies, because what determined living standards was the balance of birth and death rates. 

I think a solid piece of evidence for the Malthusian view is that height in England in the years 1-1800 saw no long-term trend :

height england 1-1800


Addendum #2 : There is now a separate blogpost, “Height in the Dark Ages“, which assesses living standards in post-Roman Europe using evidence from height.

Addendum #3 : There is now another separate blogpost, “Angus Maddison“, which examines the dubious assumptions behind the pre-1200 income data published by the late Angus Maddison.


Filed under: Economic History Tagged: great divergence, Gregory Clark, little divergence, long-run growth, Malthus, Malthusianism, Stephen Broadberry

Samples of Greek & Latin, Restored Pronunciation


Some MP3 samples of the “restored” pronunciation of classical Greek and Latin.
I’ve long been a fan of attempts to reconstruct the pronunciation of ancient Greek and Latin. I’ve embedded MP3 snippets of the first line of The Odyssey as well as most of Catullus I. (They take up a lot of space !)


Odyssey Line 1.1 (spoken)

read by Stephen Daitz, “A Recital of Ancient Greek Poetry”, 2nd ed

ἄνδρα μοι ἔννεπε, μοῦσα, πολύτροπον, ὃς μάλα πολλὰ

 

Odyssey Line 1.1 (chanted)

 


Catullus 1

Read by Robert P. Sonkowsky, “Selections from Catullus and Horace”

Sonkowsky is not as good as Stephen Daitz reading the Greek. He has a very strong American accent and his nasal consonants are particularly bad, sounding rather like a feckless schoolgirl’s attempt to reproduce nasals in French. But still the recording gives the exotic and alien flavour of the “original” pronunciation of Classical Latin.

 

Cui dono lepidum novum libellum
arida modo pumice expolitum?
Corneli, tibi: namque tu solebas
meas esse aliquid putare nugas.
Iam tum, cum ausus es unus Italorum
omne aevum tribus explicare cartis…
Doctis, Iuppiter, et laboriosis!
Quare habe tibi quidquid hoc libelli—
qualecumque, quod, o patrona virgo,
plus uno maneat perenne saeclo!


Filed under: Ancient Greek, Classics, Latin Tagged: Ancient Greek, Latin, Phonology, Restored Pronunciation

Debate with Matt on India, China, Cuba, Korea, etc.


Below I quote the lengthy exchange I had with Matt on India, China, Cuba, South Korea, etc. in the comments section of a blogpost by HBDchick. Since our debate was off-topic, Matt and I have agreed to move it here. My latest reply to Matt is contained in the separate blogpost, “Ideology & Human Development“. Note : Matt had already been arguing with others about something else, so below I merely extract that part of the debate relevant to ours.


Pseudoerasmus

…Kerala has been studied a lot. Read Amartya Sen, for example. The (proximate) reason Kerala has high HDI for its income class is that it has had a strong Marxist party in electoral politics which caused the state to invest more in health & education than other states. In independent countries Marxist regimes normally nationalised private property and redistributed incomes to things like health and education. So, on average, other poor non-communist countries with comparable levels of income will usually have lower HDI. Now, I say this is “proximate” because the real question is why Kerala has such a strong Marxist party. Emmanuel Todd argues in several books it’s about the family structure.

Matt

I’m aware of Sen’s work, and I agree with his/your explanation for this. My point in bringing up Kerala was that to show that even polities with high levels of diversity can have robust, effective social democracy if the government is competent, treats each group fairly, and is dedicated to improving social conditions (see the Bo Rothstein article in my comment above for more details). Diversity is not necessarily incompatible with social cohesion or a welfare state.

Sen also does a good job of explaining why Maoist China, for all its many evils, did much better than India at raising life expectancy over the same period. Short answer: because China was Marxist. See, e.g., “Indian Development: Lessons and Nonlessons,” Daedalus Vol. 118, No.4, 1989….

Pseudoerasmus

…I actually don’t believe the Marxist explanation for Kerala in any deep sense. After all West Bengal has also had a strong communist party and its HDI scores are abysmal… Which is also why Sen’s assertions about China are ultimately shallow : East Asia in general stresses education, health and egalitarian growth much more than other countries.

This shows up in land reform. Many have observed that Japan’s land redistribution in 1946, which created a large class of small proprietor-farmers out of what had been closer to a Latin-America-like latifundist system, was the work of the Americans. That is true. However, the very similar land reforms in South Korea and Taiwan were not the work of the Americans. [Note : I meant, these were not compelled by the Americans, as with Japan.] More importantly, all three succeeded. And China has also succeeded with small-holder agriculture since decollectivisation. But the record of land reform in most other places is truly abysmal.

Democratic India in the 1950s and 1960s had a “zamindari abolition commission” yet the number of small holders in India is still fairly low because the process was strangulated by bureaucratic delays, corruption, repartition into smaller within-family plots, etc. There’s more going on here than mere redistribution of wealth. Well, I think you know what I’m getting at : even if we allow that certain political regimes will invest more in people all things equal, redistribution still requires a certain amount of social competence that is not uniformly distributed in the world. Some people appear to do better under socialism and communism than others.

[Emmanuel Todd argues] that Kerala is an extreme example of the matrilineal family system found in the South in general which produces better HDI than the north. Todd explains the unusual predilection for Marxism in Kerala as a reaction to the slow erosion of that family structure. I think Todd supplies good descriptions, but not very good explanations…

Matt

Sen also points to Sri Lanka (“Indian Development,” p. 376), which although non-Communist, carried out similar investment in education, health and welfare, and now has an HDI of 0.715. Sen (ibid, p. 380-82) also mentions post-1975 Communist Vietnam (current HDI 0.617; higher than India (0.554), higher than Cambodia and Laos (both 0.553); I also think we need to account for the impact of the war in these countries).

I would also point to Cuba, with an HDI of 0.780, close to Kerala’s and well above the demographically similar Dominican Republic (0.702). Also the Seychelles, another diverse Marxist country with the highest HDI score in Africa (0.806, even above Kerala).

But I think Sen’s argument is strongest when he points to differences within China over time. Thus, life expectancy in China underwent a sharp downturn following the market-based reforms of Deng Xiaoping in the late 1970s (ibid., pp. 385-87).

This was because the breakup of the communal farms dismantled the system of healthcare provision in place. Sen explains here (p. 2):

“[T]he economic reforms of 1979 greatly improved the working and efficiency of Chinese agriculture and industry; but the Chinese government also eliminated, at the same time, the entitlement of all to public medical care (which was often administered through the communes). Most people were then required to buy their own health insurance, drastically reducing the proportion of the population with guaranteed health care….

…The change sharply reduced the progress of longevity in China. Its large lead over India in life expectancy dwindled during the following two decades—falling from a fourteen-year lead to one of just seven years.

The Chinese authorities, however, eventually realized what had been lost, and from 2004 they rapidly started reintroducing the right to medical care. China now has a considerably higher proportion of people with guaranteed health care than does India. The gap in life expectancy in China’s favor has been rising again, and it is now around nine years; and the degree of coverage is clearly central to the difference.”

HBD doesn’t do a very good job of explaining these changes.

Pseudoerasmus

You misunderstand me. I have no problem with the view that, all else equal (such as demographic characteristics), a redistributionist political regime in a poor country is more likely to improve HDI than a non-redistributionist one. That was my point about East Asia. The [sociobiological] angle would address who is more likely to adopt redistributionist policies, and who is more competent at them once they are adopted.

So I think that easily covers Cuba vs [the Dominican Republic] (fairly similar demographics) — though you do not consider that Soviet subsidies to Cuba were on the order of 1/3 of GDP (via purchase of inflated price of sugar) and that helped a lot in Cuba’s human development… In fact most of your examples are pretty bad. The Seychelles compared with the rest of Africa ? Why ? The Seychelles are a mixed-race Franco-East-African country with about 80,000 people and a GDP per capita comparable with the Czech Republic. I should hope they would have decent HDI !

As for China and life expectancy, see the chart I’ve uploaded here :

lifeexpectancy

Don’t see any big drop. The rate of increase slowed, but that’s normal especially in a country like China with a big divide between the coasts and the interior. Besides, life expectancy is not strongly correlated with access to medical care in the broadest first-world sense, and only weakly correlated with income. (You don’t need huge jumps in income to improve HDI.) The post-war global increase in life expectancy (as well as the global fall in infant mortality) is best explained by greater food availability, more balanced micronutrient intake, inoculations, public health measures (such as sanitation), etc. Most of these measures don’t require high incomes.

Matt

Re: Cuba.

It’s been [23] years since Cuba received those subsidies, and the DR still hasn’t caught up. Also, we have to factor in the embargo against Cuba from 1959. Remember, from 1964 until 1975, that embargo wasn’t just from the United States, it was from the entire Organization of American States, except Mexico. There’s also the fact that Cuba needed to divert spending to its military in order to deter the very real threat of an American invasion (which happened of course in 1961) and the near-constant terrorism directed from Miami and Langley. Finally, if we’re going to look at subsidies, we’d also have to look at the massive U.S. subsidies to South Korea during the Cold War.

Re: China

I’ll quote Sen directly:

“While the gross value of agricultural output doubled between 1979 and 1986, the death rate firmly rose after 1979, and by 1983 reached a peak of being 14 percent higher than in 1979 (in rural areas, the increase was even sharper: 20 percent). The death rates have come down somewhat since then, but they remain higher than before the reforms were launched” (“Indian Development,” p. 385).

See also the chart on p. 383 of “Indian Development” and p. 26 of Sen’s “Hunger and Entitlements.” He takes the Chinese part of the chart from Judith Banister’s “An Analysis of Recent Data on the Population of China,” Population and Development Review, 10 (1984). It shows a noticeable drop from 1979.

Banister (ibid., 254) says that China’s life expectancy, after having risen every year from 1960 to 1978, fell from 65.1 to 64.7 from 1978-1982. The Google chart (which says it came from World Bank data) says that life expectancy rose from 66.51 to 67.57 during the same years. I don’t know why the discrepancy exists.

Sen repeats his claim about the China-India gap falling from 14 to 7 from 1979 to the early 2000s, then rising from 7 to 9 after 2004 (when the Chinese reinstituted the public health system) in “The Art of Medicine: Learning from Others,” The Lancet, Vol 377 (2011), but he doesn’t give a source.

What do you think about Sri Lanka?

P.S. Sen also mentions this paper by Athar Hussain and Nicholas Stern, and his own paper “Food and Freedom.” See Table 5 on p. 16 for data on the rise in the death rate from 1979, and Table 6 on p. 17 for data on the decline in the number of “barefoot doctors” from 1980.

Pseudoerasmus

Why do you keep talking about the Dominican Republic ? I have already agreed with you that redistributionist policies are more likely to result in better HDI than otherwise.

However, you are looking at it the wrong way. Cuba had to expropriate nearly all private assets and receive large external subsidies to get it done. The Dominican Republic didn’t expropriate and its foreign assistance was much more limited, but its HDI score today is not that much lower than Cuba’s…

No need to “factor” in [the US & OAS embargo] at all [in the case of Cuba]. Whatever Cuba lost via the embargo was much more than made up for by sugar purchases by the Soviet Union and the rest of the East bloc at inflated prices — especially after 1972, when the Soviet Union agreed to pay not the international price, but nearly four times the international price.

Also, Cuba never lost export markets for sugar outside the East bloc. At any given time between 1960 and 1990, exports to non-communist countries were between 20% to 50% of the total volume. Western Europe and Japan never observed any embargo against Cuba.

By the way, the OAS dropped its embargo in 1975. Besides, that never stopped anyone from having trade relations unilaterally with Cuba if they wanted, like Argentina before 1976…

Castro built up the Cuban armed forces to such an extent that he could send thousands of troops to Angola, Ethiopia, Mozambique, etc. Now you can say this was tit-for-tat against US support of the opposing side, but these luxury foreign adventures belie the claim of Castro’s “having” to divert spending anywhere.

[Re South Korea] At the peak of US aid to South Korea in the late 1950s and early 1960s, it amounted to less than 5% of South Korean GDP. [Note : This was intended as net of military assistance, I will address this later.] This was not trivial but never approached the vicinity of Cuba’s dependence on the Soviet Union in the 1970s and 1980s. Besides, no sensible person believes South Korea’s explosive growth has much to do with external assistance.

As for Sen, I’ve looked into his claims a little more, and, yes, there was a drop in Chinese life expectancy after 1979 which gets reversed in the late 1980s. But in the Banister data the drop is trivial. Hussain & Stern’s argument is more interesting : the life expectancy data appear to be driven by rising infant mortality in the first half of the 1980s, and that rise is substantial. But there must be more happening than is implied by Sen’s argument, because China’s crude death rate hit its low in 1979 and still remains higher than then… So the age-structure effects of the population must be important — something Hussain & Stern do not discount.

Matt [sent to me by email]

I think you’re understating the disparity [between Cuba’s and the Dominican Republic’s HDI scores].

First of all, the difference between the two countries’ HDI is 0.078, which is 10% of Cuba’s score. If we add 10% to Cuba’s score, we almost get to Greece (0.860; not the best place in the world, but better than Cuba). If we subtract 10% from the DR’s score, we get Honduras (0.632; one of the worst places in Latin America), and a little worse than Botswana (0.634; one of the best places in sub-Saharan Africa). If we subtract 10% from the U.S.’s score of 0.937, we get somewhere between Slovakia (0.840) and Andorra (0.846).

Secondly, if we look at non-income HDI (which we should be able to, given that Cuba’s and DR’s per capita GDPs are comparable), we find that Cuba’s is 0.894 and DR’s is 0.726, a difference of 0.168. Cuba not only does much better than DR on this measure, it actually scores within the same range as the UK (0.886) and Hong Kong (0.907), despite far lower per capita income.

https://data.undp.org/dataset/Non-income-HDI-value/2er3-92jj

Next, we should look at the Inequality-Adjusted HDI values. When adjusted for inequality, DR’s score drops to 0.510 (a fall of 27.3%), putting it with Tajikistan and Guyana. I can’t imagine that Cuba’s IHDI score falls further than DR’s, but unfortunately, we have no data on Cuba’s total IHDI value. However, we do have Cuba’s Inequality-adjusted Life Expectancy Index (LEI) value, which is 0.882 (a drop of 5.4%). This not only puts Cuba far above DR (0.708, a drop of 16% and a difference with Cuba of 0.174), it actually makes Cuba almost exactly the same as Denmark on this measure (ILEI 0.887). That’s remarkable. If Cuba’s Inequality-Adjusted LEI gives us any indication of its overall Inequality-Adjusted HDI, then the latter should be within the range of the developed world, and of course far above the DR.

https://data.undp.org/dataset/Table-3-Inequality-adjusted-Human-Development-Inde/9jnv-7hyp
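The inequality adjustment used in these IHDI figures follows the UNDP’s Atkinson-based method : each dimension index is discounted by the inequality of its own underlying distribution. A minimal sketch, assuming the standard epsilon = 1 Atkinson index (the numbers below are purely illustrative, not actual Cuban or Dominican data) :

```python
from math import prod

def atkinson_inequality(values):
    """Atkinson index (epsilon = 1): 1 - geometric mean / arithmetic mean."""
    n = len(values)
    geometric = prod(values) ** (1 / n)
    arithmetic = sum(values) / n
    return 1 - geometric / arithmetic

def inequality_adjusted(index, values):
    # UNDP-style discount: the dimension index shrinks by the Atkinson
    # inequality of the distribution underlying it.
    return index * (1 - atkinson_inequality(values))

# A perfectly equal distribution loses nothing; a spread-out one does.
equal_loss = atkinson_inequality([70.0, 70.0, 70.0])      # zero
unequal = inequality_adjusted(0.9, [50.0, 70.0, 90.0])    # below 0.9
```

The wider the spread of outcomes within a country, the bigger the percentage loss on adjustment — which is the mechanism behind the 5.4% vs 16% life-expectancy discounts quoted above.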

And again, it has been a quarter-century since the Soviet spigot was shut off. I suppose Soviet aid has been to some extent replaced by Venezuelan aid, but it’s still not even close.

[I also forgot to mention something: about 10% of Cuba’s population left the island in the decades following the Revolution, encouraged by, among other things, special privileges granted to them in the US immigration system. These emigres are wealthier and more educated than the average Cuban. Nothing remotely comparable holds for the Dominican Republic; if anything, emigration from the DR has been disproportionately unskilled.]

[Quoting PE] Castro built up the Cuban armed forces to such an extent that he could send thousands of troops to Angola, Ethiopia, Mozambique, etc.

He never actually sent combat troops to Mozambique; only a few hundred advisors. By that standard, you could say that the U.S. “sent troops” to El Salvador in the 1980s, or is “sending troops” to Iraq again today. Cuba did, however, send a few thousand troops to Syria during the Yom Kippur War, and engaged in at least some combat with the Israelis.

[Quoting PE] Now you can say this was tit-for-tat against US support of the opposing side, but these luxury foreign adventures belie the claim of Castro’s “having” to divert spending anywhere.

At least in the case of Angola, Cuba was engaging in collective self-defense (which is a guaranteed right in Article 51 of the U.N. Charter) against a South African/Zairean attack. Ethiopia, too, was attacked by Somalia, though you could make the argument that the regime in Ethiopia was so heinous that it would have been better had Cuba stayed out of it. With the Yom Kippur War, the situation was more complicated, since, unlike in 1967 and virtually every other Arab-Israeli war, the Arabs actually fired the first shot that time, though they were only trying to reacquire their own conquered territory (Sinai and Golan). In any case, Cuban intervention in that war wasn’t very consequential.

So in the two major cases of Cuban intervention, the Cubans could argue that they were only exercising their right of collective self-defense in accordance with the U.N. Charter, and that they needed to do this in order to deter American aggression and interventionism which could very easily have turned its sights on Cuba (and did, in fact). The United States has similarly exercised (what it described as) “collective self-defense” in Korea, Vietnam, Laos, Cambodia, and Kuwait, all of which cases were at least as complicated as Cuba’s interventions in Angola, Ethiopia, and Syria (and, I would argue, much more so, in each case).

But let’s examine the presuppositions of your argument. Imagine we were having a discussion about the distortions to South Korea’s development caused by South Korea’s military spending, needed in order to deter a North Korean/Chinese invasion (which of course happened in 1950). [Leave aside the fact that this war was far more complicated than it appears from how it is usually discussed (it followed frequent border clashes, most of which were initiated by the South; not to mention a virtual civil war from 1945-50 within the South, for example on Jeju)]. Clearly there’s something to this: South Korea’s actual performance has been impressive, but it would have been arguably better in the absence of the North Korean threat. But imagine I were to respond by saying:

“The South Korean generals built up the South Korean armed forces to such an extent that they could send thousands of troops to Vietnam. Now you can say this was tit-for-tat against Soviet/Chinese/North Korean support of the opposing side, but these luxury foreign adventures belie the claim of South Korea’s ‘having’ to divert spending anywhere.”

Clearly this isn’t a good response.

Even if you think that Cuba didn’t “have” to intervene in Angola, etc., that doesn’t mean that they weren’t driven to spend on their military by the very real threat of U.S. attack. After 9/11, we invaded Iraq, which we definitely didn’t need to do. But it wouldn’t have happened if it weren’t for 9/11. Cheney, Wolfowitz, et al may still have wanted to do it, but they wouldn’t have gotten away with it. Now imagine that this country experienced multiple 9/11s (which is the equivalent of what we’ve done to Cuba over the years once you control for population size). Imagine that Al Qaeda tried to kill the president dozens, maybe hundreds of times. Imagine that there was a Taliban-backed rebellion in the interior of the U.S. How ape-shit do you think we would have gone? How much would we have diverted away from health, education, and welfare to spend on war, the military and the national security state?

[Quoting PE] At the peak of US aid to South Korea in the late 1950s and early 1960s, it amounted to less than 5% of South Korean GDP.

Actually, at its peak in 1957, total foreign aid (most of it from the US) to South Korea hit 16% of GNP, and averaged 8-9 percent from 1959-1962. See the following article by Susan M. Collins and Won-Am Park:

http://www.nber.org/chapters/c9033.pdf

The following paper by Marcus Noland says that foreign aid reached over 20% of GDP, and over 80% of imports, at its peak (see Figure 2, p. 36):

http://scholarspace.manoa.hawaii.edu/bitstream/handle/10125/22215/econwp123.pdf

However, as Collins and Park point out (p. 178), this is a significant underestimation, because it does not include subsidized loans from the US and Japan, which continued well after aid had slowed.

Also see Figure 3 (p. 37) of Noland’s paper, which shows that in the early 60’s, US aid financed almost all of South Korea’s investment, since domestic savings net of aid was near 0.

Going back to your post on Kerala/West Bengal/East Asia/Land reform, I noticed that you ran together two slightly different topics: HDI and land reform. It’s true that West Bengal has abysmal HDI, but it actually did very well compared with the rest of India in terms of land reform. According to Maitreesh Ghatak and Sanchari Roy, West Bengal and Kerala

“accounted for 11.75 and 22.88 per cent, respectively, of the total number of tenants conferred ownership rights (or protected rights) up to 2000, despite being only [7.05] and 2.31 per cent of India’s population, respectively…. West Bengal’s share of total surplus land distributed was almost 20 per cent of the all-India figure… although the state accounts for only about 3 per cent of India’s land resources…..” (p. 253).

http://www.academia.edu/157203/Land_Reform_and_Agricultural_Productivity_in_India_A_Review_of_the_Evidence

I don’t know why West Bengal seems to have done well with land reform and poorly with HDI. Any ideas?

Pseudoerasmus [responding re China only]

In my last reply to Matt, I conceded his/Sen’s argument that China’s market reforms of 1979 may have disrupted or undermined the “barefoot doctors” programme, which could have had adverse consequences for Chinese infant mortality. However, Matt’s own source points out the increase in China’s infant mortality started before the market reforms of 1979 :

hussain

[Hussain & Stern] click to enlarge

Moreover, the above analysis is powered by scepticism about China’s official statistics for 1979-1989. The World Bank’s data on infant mortality and child mortality under age 5, which are based on official statistics, do not show a deterioration in either indicator for the years at issue. (There is, however, a general discrepancy between the World Bank’s and the WHO’s data, although the latter only go back to 1990. But the World Bank and the UN are consistent.) Nonetheless, the PRC’s current infant mortality rate is considerably higher than that of the rich countries (and Cuba’s), although at least according to official statistics, that appears to be in large part because of the divide between the cities and the rural interior.

The rest of my reply is contained in the separate blogpost, Ideology & Human Development. Please post any comments regarding the above over there.


Filed under: Human Development, Sociometrics Tagged: Cuba, HDI, human development, Kerala, South Korea

Ideology & Human Development


How real are Cuba’s accomplishments in health and education since the revolution ? How do they compare with the situation prior to the revolution ? Was the Soviet Union’s subsidy to Cuba crucial to its human development ? Did the US hostility to the Cuban Revolution have an impact ?

This blogpost is a rejoinder to a debate I had with commenter Matt about human development in Kerala, China, South Korea, Cuba, West Bengal, the Dominican Republic, etc. (See Debate with Matt.) This post mostly addresses life expectancy, infant mortality, and education from the perspectives of politics and economics. [The comments section contains an extensive discussion.]


Years ago some astute person noted that I was a hypocrite for being centre-left in the context of discussing political economy in developed countries, but rather centre-right when it came to the Third World. He was right, except that it’s not hypocritical. In already-rich countries with already-efficient economies, the levels of income redistribution that are in political play are typically not so great as to endanger efficiency. The productivity of the core OECD economies is high enough that social democracy is fundamentally affordable. One can think of Germany, for example, as being able to manage a strong welfare state despite a low labour force participation rate (compared with the United States), because German output per hour is so high.

However, there’s a much greater trade-off between economic efficiency and income redistribution in poorer countries. In the moderate scenario you have a case like the macroeconomic populism of Argentina under Peronism, where consumption transfers to the public were financed by budget deficits and money printing. In the extreme, a variety of Marxist-Leninist regimes expropriated, controlled and managed all productive assets. But most of those regimes did divert national resources toward ending mass poverty and toward healthcare and education. Thus, under communist rule, the Soviet Union, its Eastern European satellites, Mongolia, the People’s Republic of China and Cuba achieved better outcomes in literacy, infant mortality, life expectancy, and years of schooling than countries with comparable levels of per capita income.

In most of these countries, the human development would have occurred under capitalism anyway, but probably with a delay. So they sacrificed higher incomes in the long run for the immediate alleviation of poverty. However, since the sample of countries that have been governed by Marxist regimes for any length of time under conditions of peace and stability is quite small, we really don’t know whether this “human development” pattern is a general tendency of actually-existing Marxist regimes, or merely a cultural characteristic of those particular societies. With the exception of Cuba, they are all European or East Asian. (Yes, I’m aware there were other Marxist regimes, but their lifespans were much shorter and/or they were embroiled in war.)

There was a time when I entertained the idea that Sub-Saharan Africa, and some of the poorest countries of other regions, were so inept at capitalist economic development that most of them might actually be better off under a totalitarian redistributionist regime. But I don’t think that any more, because they probably would have botched that too. Even when it comes to central planning socialism some peoples are just less good at it than others.

Nonetheless, the persistence of mass poverty has led some left-wing observers to question the moral difference between democratic capitalism and something as extreme as Maoism. For example, Noam Chomsky made the case in his column on The Black Book of Communism that India killed even more people than Maoist China, just more slowly and less visibly :

Like others, Ryan reasonably selects as Exhibit A of the criminal indictment the Chinese famines of 1958-61, with a death toll of 25-40 million…. The terrible atrocity fully merits the harsh condemnation it has received for many years, renewed here. It is, furthermore, proper to attribute the famine to Communism. That conclusion was established most authoritatively in the work of economist Amartya Sen, whose comparison of the Chinese famine to the record of democratic India received particular attention when he won the Nobel Prize a few years ago. Writing in the early 1980s, Sen observed that India had suffered no such famine. He attributed the India-China difference to India’s “political system of adversarial journalism and opposition,” while in contrast, China’s totalitarian regime suffered from “misinformation” that undercut a serious response, and there was “little political pressure” from opposition groups and an informed public (Jean Dreze and Amartya Sen, Hunger and Public Action, 1989; they estimate deaths at 16.5 to 29.5 million).

The example stands as a dramatic “criminal indictment” of totalitarian Communism, exactly as Ryan writes. But before closing the book on the indictment we might want to turn to the other half of Sen’s India-China comparison, which somehow never seems to surface despite the emphasis Sen placed on it. He observes that India and China had “similarities that were quite striking” when development planning began 50 years ago, including death rates. “But there is little doubt that as far as morbidity, mortality and longevity are concerned, China has a large and decisive lead over India” (in education and other social indicators as well). He estimates the excess of mortality in India over China to be close to 4 million a year: “India seems to manage to fill its cupboard with more skeletons every eight years than China put there in its years of shame,” 1958-1961 (Dreze and Sen).

In both cases, the outcomes have to do with the “ideological predispositions” of the political systems: for China, relatively equitable distribution of medical resources, including rural health services, and public distribution of food, all lacking in India. This was before 1979, when “the downward trend in mortality [in China] has been at least halted, and possibly reversed,” thanks to the market reforms instituted that year.

Overcoming amnesia, suppose we now apply the methodology of the Black Book and its reviewers to the full story, not just the doctrinally acceptable half. We therefore conclude that in India the democratic capitalist “experiment” since 1947 has caused more deaths than in the entire history of the “colossal, wholly failed…experiment” of Communism everywhere since 1917: over 100 million deaths by 1979, tens of millions more since, in India alone.

Thus, for Chomsky, the social inequities of capitalism in the Third World — regardless of whether they are caused by capitalism or merely tolerated under the system — are so evil that any political programme which does not redistribute wealth for the immediate remedy of these inequities is as lethal as the worst excesses of Stalinism or Maoism.

For many on the left, the “human development” accomplishments and aspirations of the old socialist states like Cuba still compare favourably with the evils of capitalist development in the Third World.

So, are Cuban accomplishments real and impressive ?

Life Expectancy

I argued that since the Cuban government has total command of all resources on the island and marshals them without democratic constraint, Cuba’s HDI score is not all that impressively greater than the Dominican Republic’s. Matt replies I understate the disparity :

“…if we look at non-income HDI (which we should be able to, given that Cuba’s and DR’s per capita GDPs are comparable), we find that Cuba’s is 0.894 and DR’s is 0.726, a difference of 0.168. Cuba not only does much better than DR on this measure, it actually scores within the same range as the UK (0.886) and Hong Kong (0.907), despite far lower per capita income.”

The “Human Development Index”, which is generated by the United Nations Development Programme as a way of capturing human welfare and living standards which are only imperfectly measured by GDP, is a composite score of per capita income, educational attainment, and life expectancy. Non-income HDI is therefore simply a composite of life expectancy and educational attainment, which I will examine separately.
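As a rough sketch of how the composite works (the geometric-mean aggregation and min-max “goalposts” follow the UNDP’s post-2010 methodology, but the goalpost values here are only illustrative, and the UNDP has revised them across report years) :

```python
def life_expectancy_index(life_exp, lo=20.0, hi=85.0):
    # Min-max normalisation of a raw indicator onto [0, 1]
    # using UNDP-style "goalposts" (illustrative values).
    return (life_exp - lo) / (hi - lo)

def hdi(health, education, income):
    # Full HDI: geometric mean of the three dimension indices.
    return (health * education * income) ** (1 / 3)

def non_income_hdi(health, education):
    # Non-income HDI: geometric mean of just the two
    # non-income dimension indices.
    return (health * education) ** (1 / 2)
```

Dropping the income index this way is what lets Matt compare Cuba and the Dominican Republic on health and education alone, given that their per capita GDPs are roughly comparable.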

First, an empirical note about life expectancy: the relationship between GDP per capita and life expectancy is approximated by a logarithmic curve also known as the Preston curve :

PrestonCurve2005

For low incomes, increasing income can lead to hefty gains in life expectancy, but as income gets higher the “returns” to income diminish. Yet, at the same time, there’s a pretty large variation in life expectancy values even for fairly low levels of per capita income. So countries such as Mexico, Syria, Honduras, and Bangladesh have values in the 70s. In other words, it’s not that onerous, in terms of income requirement, to raise life expectancy to within 10 years of the richest countries in the world. Quite apart from simply having more food to eat, the job can be done by fairly low-cost public health measures that raise micronutrient intake, inoculate populations, and improve sanitary standards (e.g., relating to water and sewage). Which is why life expectancy at birth has grown more steadily than per capita income in the developing countries :

esperance de vie

(Sorry it’s in French, I could not find a comparably detailed time series by region in English.)
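The diminishing returns implied by the Preston curve are easy to see if you fit its usual functional form, life expectancy = a + b·ln(income). A toy least-squares fit on made-up, curve-shaped points (not real country data) :

```python
import math

# Hypothetical (income, life expectancy) points loosely shaped like
# the Preston curve -- NOT actual country observations.
data = [(1_000, 55.0), (2_000, 62.0), (4_000, 67.0),
        (8_000, 72.0), (16_000, 76.0), (32_000, 79.0)]

# Ordinary least squares on (ln income, life expectancy).
xs = [math.log(g) for g, _ in data]
ys = [le for _, le in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def predicted_gain(income_low, income_high):
    """Predicted life-expectancy gain from raising income low -> high."""
    return b * (math.log(income_high) - math.log(income_low))

# The same extra $1,000 buys far more years at the bottom than at the top.
gain_poor = predicted_gain(1_000, 2_000)    # +$1,000 starting from $1,000
gain_rich = predicted_gain(31_000, 32_000)  # +$1,000 starting from $31,000
```

Because the fitted relationship is logarithmic, each doubling of income buys the same absolute gain, so the gain per extra dollar shrinks as income rises — which is why modest incomes are compatible with first-world-adjacent life expectancy.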

So, at first approximation at least, it’s a matter of politics, whether societies choose to make those relatively inexpensive outlays to improve the conditions that prolong life. (Africa’s progress has been depressed by AIDS, particularly in southern Africa.) Of course it requires a certain amount of administrative competence and social cohesion in order to implement basic public health measures in the first place. These abilities are not uniformly distributed in the world. But whatever the causes of the global variation in institutional capacity and administrative competence, they are clearly very difficult to modify if there is a vicious circle, a stable equilibrium, of bad institutions => low growth, low human development => bad institutions, etc.

Infant Mortality

The World Bank data peg Cuba’s and the Dominican Republic’s life expectancy at 79 and 73, respectively. According to UNSTAT, life expectancy at 60 is roughly the same for Cuba and the Dominican Republic. This implies that most of the difference in their average longevity is due to their differences in neonatal and under-five mortality.

During the Middle Ages, childhood was the most dangerous period in a person’s life, but once you survived it, you generally could expect a fairly long life. Likewise, the epidemiological difference between a developing and a developed country is that in the poorer country one is far more likely to die of childhood and communicable diseases, whereas in the richer country one is far more likely to die of the noncommunicable diseases of old age and affluence, such as heart disease, cancer, and diabetes.

Wikipedia has UNSTAT’s data for infant mortality (neonatal) spanning over six decades. The World Bank has the U5 child mortality rates over a similar period. Cuba’s infant mortality rate of 5-6 per 1000 live births is not quite as low as the range seen in the developed countries (2-4), but fairly close. Thus, mortality in Cuba very much mirrors the developed world pattern : most people die of the diseases of old age. The Dominican Republic’s neonatal mortality, however, is in the range of 25-30 per 1000 live births.

Now, I had already said “I have no problem with the view that, all else equal…, a redistributionist political regime in a poor country is more likely to improve HDI than a non-redistributionist one”. Clearly, Cuba has put a large share of its scanty resources into prenatal and postnatal care, whilst the Dominican Republic has not done so to the same degree. Matt finds it “remarkable” that Cuba has achieved this (along with other things) despite numerous obstacles. But I’m not so impressed.

First, as I’ve already argued, it’s not very expensive and it’s not technically difficult to improve such indicators as life expectancy and infant mortality. It’s largely a matter of importing technology and getting one’s administrative act together, given the political desire to do so. And compared with most developing countries with their institutional deficiencies, a central planning dictatorship with exclusive control over resources and without traditional constraints can probably exercise more brute administrative competence. (More on this below.)

Second, it was inherently easier for Cuba to lower its infant mortality rate than for the Dominican Republic. Why ? Because it was already lower for Cuba in 1950 and 1960 than in most of the rest of Latin America and the Caribbean. Look at the UNSTAT data. Cuba’s infant mortality rates in 1950-55 and 1955-60 were well below the average/median for Latin America and the Caribbean. In fact, only Uruguay, Puerto Rico, Argentina and various non-Hispanophone Caribbean countries beat Cuba in this regard. The Dominican Republic was about average for the region. (But the Latin American country in 2012 that’s most improved relative to its rank in 1950 would appear to be Chile.)

The same thing for life expectancy and literacy. Cuba in 1950-60 was already more advanced in these areas than most other Latin American countries and certainly more than the Dominican Republic.

The Production of Cuban Health

The Soviet production system was famously wasteful. Resources (energy, raw materials, labour, machine-time etc.) were used to generate a unit of output which was relatively undesirable or even worth less than the inputs. For example, the Soviets harvested Kamchatka crab but their canneries converted them into dodgy tins of semi-preserved arthropodic matter, along with a lot of “leakage”. That’s why ultimately the Soviets could not afford their system ; at some point you just can’t keep throwing more and more resources at salvaging your production targets.

Input/output issues also matter in healthcare. Cuba’s health-related finances are opaque, but there’s one simple proxy for the amount of resources the Cubans have thrown at producing their health outcomes: physicians per 1000 population. It’s astronomical ! 6.7 per 1000 is the second highest in the world and is astounding by any standard, let alone for a poor country like Cuba. Most rich countries have 3-4 per 1000. I’ve also read that Cuba has the highest doctor-patient ratio in the world, but I can’t find a proper citation (as opposed to a bunch of rubbish sites saying it). People think this is a good thing, but it is not. It’s clearly a misallocation of resources, just like those Soviet tinned crabs.

Maybe we shouldn’t be talking about productivity when it comes to saving the life of the extra infant or two per 1000. Perhaps, but the issue speaks to how “impressive” the achievement really is. No normal society with a market economy, even with a large welfare state and nationalised healthcare system, would allocate so many resources to producing so many doctors. And short of authoritarian central planning socialism it probably could never happen, especially in most developing countries with weak institutions. Just think of Pakistan, where mobile health workers offering child vaccinations meet resistance from parents or are terrorised by religious fanatics. Cuba’s health outcomes almost certainly require intrusive, authoritarian measures.

There’s a lot of propaganda and misinformation about Cuba, on both left and right, so I’m wary of available sources. But this article cites anthropologist Katherine Hirschfeld, author of this book which I have read and found reliable :

“Cuba does have a very low infant mortality rate, but pregnant women are treated with very authoritarian tactics to maintain these favorable statistics,” said Tassie Katherine Hirschfeld, the chair of the department of anthropology at the University of Oklahoma who spent nine months living in Cuba to study the nation’s health system. “They are pressured to undergo abortions that they may not want if prenatal screening detects fetal abnormalities. If pregnant women develop complications, they are placed in ‘Casas de Maternidad’ for monitoring, even if they would prefer to be at home. Individual doctors are pressured by their superiors to reach certain statistical targets. If there is a spike in infant mortality in a certain district, doctors may be fired. There is pressure to falsify statistics.”

I find the above credible because Cuba has one of the highest reported abortion rates in the world. (Most communist or ex-communist countries are above average.) The link is to a publication associated with Planned Parenthood, so I think it’s not biased against Cuba or abortions. It’s also believable because, for Cuba, health has become what Olympic gold medals had been to the East bloc : an international badge of prestige to showcase the achievements of socialism. So while I do believe the official health data are probably accurate, it’s likely that draconian means are used, and material deprivations imposed on the populace, in order to achieve or maintain those outcomes.

Cuba’s “Obstacles”

Matt has cited numerous “obstacles” in the way of Cuba’s achieving human development outcomes.  These include :

  • the US trade embargo against Cuba ;
  • the loss of Soviet foreign aid after 1990 ;
  • high military expenditure on the part of Cuba, made necessary by unremitting “terrorism directed from Miami and Langley” ;
  • the flight of a large number of educated Cubans to the United States after 1960.

I argued that the US trade embargo against Cuba was more than offset by a combination of Soviet subsidies and trade with other countries. Cuba sold sugar to the Soviet Union at a loss relative to the world price during the 1960s, but in the 1970s the Soviet price was more than a third above the world price. By the late 1980s the official price the Soviets paid for Cuban sugar was 11 times the world price, an enormous implicit subsidy. [Source]

Matt has countered that Cuba lost this sugar daddy 23 years ago. That’s true, but he fails to consider that the fixed costs of investment in schools, universities, hospitals, sugar-refineries, disease eradication, etc. are front-loaded. Even those skilled Cubans who received their education in the late 1980s are still only at the mid-point of their working lives. Likewise, if the Castro regime mostly eliminated dengue fever through the use of pesticides and water management, that continues to produce health returns today.

Besides, Cuban GDP per capita began its slow recovery in 1993 and reverted to the 1990 level by 2005. This was caused by a combination of tourist receipts, increased remittances from Cuban-Americans, barter trade with Venezuela, foreign investment, and debt accumulation with European and Japanese banks. Despite the embargo, US financial flows to Cuba were sizeable in the 1990s and are today the largest single source of foreign currency for Cuba.

Matt has argued Cuba’s human development spending was all the more impressive because US hostility required Castro to spend so much money on the military. But Castro’s adventurism in Africa in the 1970s totally belies the claim that its military expenditure was fundamentally defensive.

Angola was a real war for Cuba, with actual military operations conducted by up to 35,000 troops against South African forces in 1975-76 and 55,000 troops in 1987-88. [Source : Castro’s own words.] The total number of Cubans who ever served in Angola in 1975-1991 is on the order of 400,000. [Source, page 146.] Nearly 20,000 Cuban combat and support troops also saw action in the Ogaden War between Ethiopia and Somalia. I reproduce the following from Porter :

cubanadventurism

All of the above was luxury consumption for Castro. Nothing forced him to divert resources away from human development toward adventurism in Africa.

Cuban education

One half of the non-income HDI composite is just years of schooling. That doesn’t really tell us anything about how Cuban students actually perform in comparison with other countries, and Cuba doesn’t participate in PISA. But its students do take the SERCE exams administered by UNESCO. Here are the results :

sercemath

sercereading

sercescience

Amazing ! Cuba’s score in each category is more than 1 standard deviation above the mean. If the above scores are representative of these countries’ students, then, according to these calculations, that implies Cuba’s IQ would be 2 standard deviations above Ecuador’s and the Dominican Republic’s, and at least 1 standard deviation above Cuban-Americans — the very group Matt has claimed is disproportionately comprised of the pre-revolutionary elite. If that’s true, then Cuban teachers have accomplished something that no one else, anywhere else, has ever done. ¡ Viva la Revolución !
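
The arithmetic behind that comparison can be sketched as follows. This is a minimal sketch assuming the conventional 15-points-per-standard-deviation IQ scale ; the gap values below are purely illustrative assumptions, not actual SERCE figures :

```python
# Back-of-envelope conversion of test-score gaps, measured in standard
# deviations, to an IQ-style scale where 1 SD = 15 points. The gap
# values below are illustrative assumptions, not actual SERCE figures.

def sd_gap_to_iq_points(gap_in_sd):
    """Express a score gap in SD units as points on an IQ-style scale."""
    return gap_in_sd * 15.0

print(sd_gap_to_iq_points(2.0))  # hypothetical 2-SD gap -> 30.0 points
print(sd_gap_to_iq_points(1.0))  # hypothetical 1-SD gap -> 15.0 points
```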

Or, another possibility is suggested by this chart from the same SERCE report :

sercescoredistribution

Apparently an assistant forgot to tell the Cuban minister of education that schools must not perform the academic equivalent of electing a president with 99.9% of the vote. What do American education researchers call it ? Creaming ? Well, I call them Potemkin schools.

Matt argues that an additional handicap for Cuba was the emigration of at least 10% of the country’s population, who were disproportionately skilled and educated. By contrast, he says, the Dominican Republic sent less skilled emigrants to the United States. Here are the educational characteristics of Cuban-Americans and Dominican-Americans :

cubanamericans

dominicans

Ideally, these cohorts should be matched by various characteristics, such as age and generation, and that’s possible through data from the Census Bureau and the American Community Survey, but Matt will have to pay me to do it. All the same, Cubans who actually left Cuba do not look any more elite than Dominican-Americans. Native-born Cuban-Americans look better educated, but not by so much. It only makes sense : there were three major waves of Cuban emigration, and most of the skilled and educated were concentrated in the first wave whereas the last wave (the Mariel) were clearly the opposite of the upper stratum.

Conclusion

My overall point can be summarised thus. It’s not technically difficult or financially onerous to substantially improve life expectancy and infant mortality even for a poor country. What usually gets in the way is a combination of politics, institutional capacity, and cultural predispositions. Cuba’s accomplishments in human development are real, but not nearly as impressive as boosters claim. First, Cuba’s social indicators were already advanced in 1960 compared with its natural peers. Second, Castro’s regime was massively subsidised by the Soviets in overcoming the fixed costs associated with improving human development to near-developed country levels. Third, Cuba’s HD outcomes were facilitated by an authoritarian central-planning regime, facing few of the political and social constraints that bind most societies, which treated prestige health and educational metrics like Gosplan production targets to be met at all cost.

Edit: Also see José Ricón’s post (in Spanish), “El sorprendiente Indice de Desarrollo Humano de Cuba”.


Filed under: Health & Economics, Human Development, Social Development, Sociometrics Tagged: Amartya Sen, Cuba, HDI, human development

Argentina’s Exclusion from the Marshall Plan 1948-50


In the comments section of an unrelated blogpost, the commenter Matt doggedly argues that the Truman administration deliberately prohibited the European beneficiaries of the Marshall Plan from using American funds to purchase Argentinian wheat in 1948-50. This discrimination, Matt contends, was an attempt to punish Argentina for its nationalist economic policies under Juan Perón. I disputed that a deliberate discrimination occurred at all. But Matt has now cited what I think is conclusive evidence that at least the Economic Cooperation Administration (the administrator of the Marshall Plan funds) did deliberately exclude Argentina, probably under Congressional pressure. The issue is small, a mere footnote on the marginalia of US-Latin American relations very early in the Cold War, but there is a lot of information and argument in there and it’s worth reading the exchange.


Filed under: Cold War, International Relations, Latin America Tagged: Argentina, Cold War, Marshall Plan, Perón, Peronism

The Mystery of US Behaviour in the World


Summary : (Part 4 of 4) I argue that American behaviour on the world stage defies any rational explanation. I also question whether the United States has derived much economic benefit from its activist and interventionist approach in the world.

American behaviour in the world hasn’t made much sense on any rational, self-interested grounds. Neither economic self-interest, however narrowly or broadly defined, nor a coldly realistic strategic vision, can explain the behaviour of the United States on the international stage from 1898 to the present.

Every single war prosecuted by the United States since 1898 grossly exceeded any reasonable assessment of the national self-interest, whether defined in terms of strategic security or economic/commercial concerns. Whatever their initial, parochial motivations at the time, the Americans almost always transformed their interventions into grand projects.

Let’s start with the Philippines, which the United States officially annexed in 1899, in open emulation of the European colonial empires, as a result of the Spanish-American War. Filipino nationalists who had been fighting Spain continued the fight against the Americans in the northern islands, but they surrendered within about a year. Then the Muslim Moros revolted when in 1903 the Americans decided on direct rule in the southern islands. The Moros had been, at best, loosely governed by the Spaniards, who basically stuck to the northern coastal cities anyway. The Spaniards occasionally dispatched punitive expeditions against the Moros, but these quickly went away. The Moros had therefore barely experienced Spanish administration.

So how did the United States respond to the Moro Rebellion ? Why, it conducted a 12-year counterinsurgency campaign against Muslim guerrilla insurgents in the tropical-montane landscape, almost a forerunner of a combination of Vietnam and Afghanistan. Not for the Americans, the piecemeal, almost accidental approach the British had taken in possessing India. That began in Bengal and proceeded bit by bit over 130 years through short and easy wars on the plains against decadent rajas and sultans, wars which involved very few Britons to begin with. Still less was the American approach like that of the unromantic Portuguese, who made a pragmatic habit of taking peninsulas and coastal strips on behalf of their maritime trading empire. No, the United States just had to take administrative control of the entire 87 quadrillion islands of the Philippine archipelago and would spend a dozen years completing the southern conquest. In the process, the US administration built schools, roads, clinics, etc. ; and, oh, the United States also abolished slavery and the Sharia-based Moro legal code.

Compare this with the Dutch acquisition of their East Indian archipelago-empire, which began with an outpost in the spice-dense Java in 1602 and a gradual satellisation of local potentates — an “Anglo-Portuguese” approach, one might say. The Dutch finally took direct possession of Sumatra only in 1907 ! They did fight three wars of insurgent pacification in Aceh in the late 19th century (and it’s still a pesky, irritating place to this very day), but at least the Dutch could rationalise they had a valuable colonial possession in Java and environs to protect from European competition.

I don’t know what purpose the Philippines served, which also required the United States to physically control the entire archipelago in short order. If the Philippines were an entrée into the potentially lucrative Asia trade, as many at the time openly argued (in addition to the various “mission civilisatrice” type motives), the green bits below would have conduced just fine to that objective :

moro land map

Actually, just the northern island of Luzon would have been a fit staging post for the early version of the “open door policy” in Asia. I can’t think of any European colonial war that was quite like the US counterinsurgency campaign against the Moros. There was no obvious lucre in it. There was no obvious preexisting interest to protect. There was no obvious strategic reason.

The alleged “pro-investment” orientation of US foreign policy often seems like a misguided extrapolation to the whole world from traditional American behaviour in its immediate backyard, Central America and the Caribbean. Even in that case there’s cause to doubt the US application of the Monroe Doctrine was intended to create an American economic sphere of influence. European capital operated freely throughout the Western Hemisphere in the late 19th and early 20th centuries. In fact, Britain was the largest foreign investor in the Western Hemisphere, bar none. Most of the export-led economic growth in the Southern Cone in 1870-1914 was made possible by European capital investment.

If anything, it was out of fear of European intervention to protect European investments in the Western Hemisphere that T. Roosevelt promulgated his “corollary” to the Monroe Doctrine. Just as European governments would dispatch war ships to bombard the Khedive’s palace to enforce debt obligations and even take over the customs house for the revenues, so they might do the same in the Western Hemisphere in several countries suffering from political instability and fiscal collapse.

Yet, regardless of whether the USA was motivated by economic imperialism or strategic realism or both in complementary fashion, US interventions in the Western Hemisphere are still anomalous. The United States occupied Haiti for 19 years, the Dominican Republic for 8 years, and Nicaragua for 24 years. Cuba was occupied and administered for several years on at least 3 separate occasions.

And in all those instances, the United States did engage in what amounted to nation-building, regardless of whether it was successful or not, and regardless of the original reasons for the interventions. Now, nation-building in a colony is not inconsistent with making things safe for business, but with the exception of Cuba, where American investments were quite large, US commercial interests elsewhere just weren’t substantial enough to warrant so much attention. 19 years in Haiti and 24 years in Nicaragua ?

Of course none of the preceding compares with the extravagance of the US commitment to the two world wars.

I fail to see why the United States had to enter the Great War at all, in terms of its own interests. For what had the USA to lose 100,000+ men in a mere year’s worth of fighting as front-line trench fodder for the Anglo-French forces ? The economic rationale seems the least compelling. In 1914 there was barely any US investment to speak of in Europe, and US spending on the Great War was roughly 10 times the value of its exports to the entire world. A German victory would probably not have reduced American trade, since most exports went to the UK anyway and German ambitions did not include British enslavement.

The Second World War makes even less sense than the first, again, whether from a geoeconomic or geopolitical perspective.

Let’s say the United States did engage the Pacific theatre of the Second World War as an attempt to tear down Japan’s Greater East Asian Co-Prosperity Sphere and maintain the Open Door Policy. Why would such an objective require the United States to fight the Japanese inch by inch, take shitty islet by shitty islet, in that entire bloody slog of island hopping all the way to Okinawa, at the cost of hundreds of thousands of American lives ? If the atomic bombs hadn’t been ready in the summer of 1945, the US forces would have used Okinawa as a staging base for landing on the Japanese main islands, in order to seize and occupy the whole country.

Why was all that necessary, given any rational motivation, economic or defensive or strategic ? The Pacific theatre could have been fought with much less cost to the United States if the apparent war aim had not been as totalising and maximal as “eject Japan from every damned rock in the Pacific and tear out the Japanese state by its very roots”. It seems almost unhinged.

Revenge is a normal human motive, of course, and punitive expeditions are simply traditional. But in order to avenge Pearl Harbor, the United States could have built a great navy ; destroyed the Japanese counterpart on the open seas ; dropped a million tonnes of ordnance on the home islands ; simply abandoned the scatterplot of islands including the Philippines ; and let the Japanese get mired in the vastness of Asia and slowly bleed away. (By 1939 the Japanese were already bogged down and could not advance into the Chinese interior. The same was happening in Burma as they advanced toward India.) In the meanwhile, a more realpolitical United States might have simply declared a no-cross boundary in the middle of the Pacific at the international dateline.

On the western front, why was it necessary to save Britain and France from the Germans ? France was not being exterminated. Britain probably wouldn’t have been, either, if the Germans were even really interested in taking Britain after the winter of 1940-41. Why not, as Truman suggested, let the Russians and the Germans duke it out to the death ?

Even if, for geopolitical reasons, saving Britain and France was deemed utterly necessary, why was it further necessary to go the whole distance and fight every inch of the way to Berlin ? Roosevelt declared a policy of unconditional surrender in Casablanca in 1943, promising vengeance and retribution on the Germans. What had the Germans ever done to the Americans ?

The gargantuan, maximal efforts by the United States in the two world wars were not necessary, and wildly incommensurate with any reasonable assessment of selfish interests, economic or defensive. The United States was basically self-sufficient, it was flanked by two great oceans which it had the resources to police, and it could have easily sat out the two world wars in serene isolation from the Old World as the overlord of the New. And even if the participation in those wars were conceded as necessary for rational self-interested reasons, nonetheless the conduct and conclusion of those wars were well in excess of satisfying those reasons.

And there’s the Cold War. Suppose, as a baseline for comparison, that the United States withdrew back into isolation at the end of 1945 or early 1946. Instead of endeavouring to save the world from communism, it reverted to a new Monroe Doctrine : a Fortress Americas Policy, do whatever you want out in the Old World, but don’t fuck with the New. The United States would by then already possess nuclear weapons, rocket technology, the beginnings of jet engines, advanced naval power, German scientists, valuable European refugee brains, etc. The United States would still be safe from behind the defensive perimeter of the Western Hemisphere, without lifting a finger for the Koreans or the Greeks or anybody else under threat from communism. The United States could have engaged in a nuclear arms race with the Soviet Union without a policy of containment or rollback or a network of strategic alliances and bases around the world. I think that would have been perfectly feasible. It certainly would have been much much much cheaper — $30-40 trillion cheaper.

George Kennan, the original author of the containment policy, had observed there were “only five centres of industrial and military power in the world which are important to us from the standpoint of national security”, and the object of containment was to keep the Soviets from taking them over. Other areas of the world, Kennan did not regard as crucial. The United States would protect the four essential industrial-military regions of the world not in Soviet hands at the time, and care not a whit about the rest of the world. My point is not the specific number of regions that the United States might have protected, but the finiteness of the US security umbrella.

But instead of practising Kennan’s minimalist geopolitics, what did the United States actually do ? The United States waged a global Cold War, on every front, tit for tat everywhere, whether the communist threat was obvious or not. Dulles busily created alliances around the periphery of the Soviet Union in the 1950s, on the model of NATO — Seato, Cento, the Eisenhower Doctrine, the whole works. Why did Truman and Eisenhower help out the French in Vietnam and the British in Malaya ?

The Korean War comes closest amongst all the military engagements of 1898-1989 to being consistent with realpolitik, given the assumption that it was in the interest of the United States to engage in the Cold War in Asia in the first place. Once you were committed to the defence of Japan, then the Korean War might have been logical. But certainly the Vietnam War — at least in the lavish, improvident lengths to which the United States went to maintain its commitment to the project — made no sense from any realist, geopolitical or geoeconomic perspective. I’m pretty sure I am preaching to the choir as far as most people are concerned (though probably not Matt), so I won’t bother with an elaboration.

The point is, the United States isn’t one to do things by half-measures. The United States doesn’t go to wars or engage in mere action; rather, it undertakes extravagant projects which only incidentally contain fighting and killing on a large scale.

In individual cases, economic and/or strategic self-interest is creditable as a motivation. It’s plausible that the USA thought owning the Philippines would be the gateway to Asian trade, even if it never became that. It’s plausible that the United States feared Imperial Germany might control Europe and close off its market to future penetration by the United States, even though Europe hadn’t been such a big market for American exports or investments. It’s plausible that Japan had to be not just defeated in 1941-45, but also deracinated and extirpated, to permanently lower the coffin on Japanese nationalism. It’s plausible that the United States believed the domination of Europe by the Nazis might eventually haunt them in the Western Hemisphere. It’s plausible that the United States believed if you just let Vietnam go, others might follow and having already put in so much effort and treasure it became captive to the sunk cost fallacy. It’s plausible that the Bush administration actually believed in the potential of Iraqi democracy or that the United States could get its hands on Iraqi oil.

[ I don’t want to restate what most people already think about Iraq and Afghanistan as particularly extravagant projects, so I don’t even bother mentioning them. ]

Each motivation is plausible in its own historical context. And of course countries do make mistakes, do get trapped in a vicious cycle of trying to recoup wasted investment, and do waste a lot of resources in accomplishing even rational objectives. I’m a fan of irrationality, short-sightedness, error, overestimation, exaggeration, overreaction, overoptimism and blindness to unintended consequences as an explanation of policy. But the disproportion between the scale of power projected by the USA in nearly every war since 1898 and its objective national interests (*), is too great, too consistent, and too systematic to just say shit happens. Given all the US foreign adventures which turned “extravagant” over such a long time and in diverse contexts, you must begin to think, maybe it’s not a series of one-off overkills, but some national character trait.

[ (*) Gulf War 1 is the only anomaly. It’s the only war prosecuted by the United States in a manner reminiscent of the old school realism of the Metternich-Castlereagh-Bismarck-Kennan-Kissinger variety. ]

In the final analysis, what did all that global power, all that prodigal spending, and all that effort get the United States that it could not have gotten without it all ?

The traditional discourse of diplomatic history and international relations assumes that great powers are always players in the anarchy of the international system. Germany, situated between France and Russia, never had a choice about playing power politics. In fact, no country in the world with neighbours has ever had a choice in the matter.

The only major exception that I can think of is the United States. By dint of its geographical isolation, its continental size, the poverty and weakness of its neighbours, the extraordinary fecundity of the land, the technological prowess and staggering productivity of a fairly large population, the United States is far more capable of aloofness from the world than any other country I can think of.

Yet it has been deeply involved in the world. I’ve already argued its motivations don’t have a rational explanation. But what has it actually gained from the world ? I don’t mean gains in the narrow sense of receiving returns accruing from a “favourable investment climate”, which I’ve already argued are a joke. Rather, is the USA as a whole more prosperous on account of empire ? Did empire even contribute to American prosperity ?

To the last question, I can’t think of any plausible ways the answer could be a very big yes. What are some advantages of empire that would be missing, if the United States acted like Canada ?

It’s possible that, without American global hegemony, you would not have the multilateral trading regime of the postwar world. Certainly if Western Europe and Japan had fallen into Soviet hands there would have been much less trade in the whole world !

If international trade has had a substantial benefit for the United States, it would not be via the demand effects of exports and imports. The USA has had a trade deficit since about 1980, and before that its overall trade balances were quite small. So the impact of trade is via the supply-side effects of import competition affecting prices and exports causing increased specialisation in the economy.

But has international trade had an impact on the efficiency of the US economy that’s much greater than technological progress ? Historically, the latter has not been really trade-dependent. Before the war, a lot of technology transfer happened without too much trade. And after the war, the United States itself was the source of most technology transfer, enabled by a combination of native talent and smart European and Asian immigrants. Then there is market size : the US economy has been so big and productive all on its own, one wonders whether import competition really mattered all that much. Competition from Japanese auto manufacturers had an effect only with the oil crises of the 1970s.

Suppose trading with other rich countries is conceded as important. What if the United States had simply taken the limited Kennan approach to containment, and protected the “industrial-military centres” of Western Europe and East Asia, and let the rest of the world rot ? No Vietnam, no Cuba, no Central America, no Chile, no School of the Americas, no Afghan mujahiddin, no proxy wars in Africa, no Iran, no Israel-Palestine, no Gulf Wars, no entanglements other than Western Europe and East Asia.

I can’t think of much difference between such a counterfactual world and the actual world we live in today, at least in terms of affecting the material prosperity of the United States or other rich countries. Maybe oil prices could be higher because fewer states might control the petroleum deposits, but I doubt that too. And, besides, the rich economies would have adjusted to the higher prices and Detroit might have been more efficient, earlier.

But hasn’t the dominance of the US dollar as a global reserve currency required the maintenance of American empire ? Certainly not ; besides, the benefits of the dollar as a reserve currency are grossly overstated. (I can’t explain it any better than this.)

How about the fact that the United States has consistently and persistently had a savings shortfall on the order of 4-5-6% of GDP over the course of thirty years ? (External deficit = shortfall of domestic savings = need to import foreign savings.) That amount is exactly equal to what American households, businesses and governments have been able to spend on consumption and capital investment, in excess of domestic income. The Americans have had a higher standard of living than implied by GDP alone, because they could borrow the huge savings surpluses of the Europeans and the Asians.
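
The accounting identity behind that claim can be sketched with hypothetical round numbers (none of the figures below are actual US data ; only the 4-6% range comes from the text) :

```python
# Minimal sketch of the national-accounts identity: the external deficit
# equals total domestic spending (absorption) minus domestic income,
# i.e. the savings that must be imported from abroad.
# All dollar figures are hypothetical, chosen to land in the 4-6% range.

income = 15_000          # GDP, $bn (hypothetical)
consumption = 10_800     # household spending, $bn (hypothetical)
investment = 2_700       # capital investment, $bn (hypothetical)
government = 2_250       # government spending, $bn (hypothetical)

absorption = consumption + investment + government   # total domestic spending
external_deficit = absorption - income               # imported foreign savings

print(external_deficit)           # 750
print(external_deficit / income)  # 0.05, i.e. 5% of GDP
```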

Did that require American Empire ? It would have certainly required that Western Europe and East Asia had been kept from communism. But beyond that ? No, not really. The United States could, and still can, borrow those surplus savings because its internal economy, its relatively laissez-faire approach and its political stability inspire investor confidence, not because it can conquer Iraq and hold it for 10 years. Japan, which ran external surpluses until 2012, still borrows heavily to finance large government budget deficits and has accumulated a national debt that’s laughably bigger (relative to GDP) than the USA’s. Yet it has absolutely no problem attracting bond investors from all over the world.

Besides, would it have been so terrible if those 4-5-6% of GDP worth of foreign savings hadn’t been available to the United States because the Red Army was in Calais and Osaka ? You might have had fewer Americans owning their own houses, fewer people living on credit cards, and fewer students going to university in debt, and perhaps smaller budget deficits (or the same budget deficits and higher interest rates). But I don’t see a very big difference.

So, particularly since the end of the Cold War, what has the American empire been good for ?


Filed under: Cold War, History, International Relations, U.S. foreign policy Tagged: Cold War, international politics, International Relations, US foreign policy

The Balance Sheet of US Foreign Policy 1940-2013


Summary : (Part 3 of 4) I argue that commenter Matt’s view of US foreign policy, as presented in Part 1, makes no sense because the “returns” from that investment climate are laughably low. I present a balance sheet of American internationalism since 1940. ( Cf. parts 1, 2, 4 )


In part 2 of this series, I showed that the control and ownership of Third World assets by the rich countries peaked in 1914 and diminished to almost nothing by the 1950s. But that diminishing trend was reversed some time in the 1980s and we have every reason to believe there’s now a long-term, secular trend of rising ownership of Third World assets by the rich countries. Of course, there are today a few more countries considered rich who have started exporting capital, but that is a minor phenomenon compared with what looks like a throwback to the early 20th century.

Has Matt’s view of US foreign policy been vindicated ? Is the “carve-up” of the Third World the dividend the United States is receiving for vanquishing the Soviet Union and all other alternatives to American capitalism ? Since Matt asserts the United States has had a global strategy to create a “favourable investment climate” for itself in the Third World, we are justified in asking, what in the end has the USA gotten out of that strategy ?

(All the tables in this blogpost were compiled from data I’ve assembled in this spreadsheet, wherein I also detail the sources and the calculation methods.)

First, the benefits side of the ledger :

table2

The above represent the existing, cumulative stock or overall investment position (not flows of capital) in the companies, factories, real estate, mines, oil refineries, and other productive assets, owned by American entities and citizens, and located outside the United States. The figures also include bank deposits, but do not include “portfolio investment”, i.e., stocks and bonds. (But even now portfolio capital is still something which bounces around the stock & bond markets of the rich countries.)

So all assets accumulated in the past, which have not been liquidated or depreciated away, are summarised above. Basically about $4 trillion out of the total ~$4.7 trillion are located in developed countries or in Caribbean bank accounts (and some tourist infrastructure).

First, relative to the US capital stock, American-owned assets abroad are puny. Capital stock figures are not published regularly, so I can’t compare them easily. But the wealth (not GDP, which is income) of the United States is between $60 trillion and $80 trillion in current dollars.

Second, despite the FDI boom, US-owned assets in individual countries are still relatively meager. For example, those American-owned maquiladora factories exporting all manner of goods to the USA notwithstanding, the US-owned share of Mexico’s capital stock is perhaps 2% (conservatively assuming a 3-4 to 1 ratio of capital stock to GDP). There’s nothing to brag about here for an economic imperialist, let alone for the unchallenged hegemon bestriding the unipolar world.
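The back-of-the-envelope share can be reproduced as follows. The FDI-position and GDP figures below are illustrative placeholders of my own, not values taken from the spreadsheet; only the method (capital stock approximated as GDP times a 3-4 to 1 ratio) comes from the text:

```python
# Rough share of Mexico's capital stock owned by US investors.
# Placeholder figures in USD billions, for illustration only:
us_fdi_position_mexico = 100.0   # assumed US FDI stock in Mexico
mexico_gdp = 1250.0              # assumed Mexican GDP
capital_output_ratio = 3.5       # midpoint of the conservative 3-4 to 1 ratio

# Approximate the capital stock from GDP, then take the US-owned share.
mexico_capital_stock = mexico_gdp * capital_output_ratio
us_owned_share = us_fdi_position_mexico / mexico_capital_stock
print(f"US-owned share of Mexico's capital stock: {us_owned_share:.1%}")
```

With these placeholder inputs the share comes out to roughly 2%, in line with the figure quoted above; the point of the exercise is how small the number stays under any plausible inputs.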

The comparable figure is even smaller for Brazil. It's slightly bigger for Chile, on the order of 4%, which reflects investor confidence in that country. But really, if you removed Brazil and Chile from the South American totals, it's as though South America barely existed. Most of the "other Caribbean" assets are, again, bank deposits and tourist infrastructure in the French and Dutch Caribbean. So when you realise that Mexico, Brazil, Chile and the "nice" Caribbean account for almost all of the relatively meager US-owned assets south of the Rio Grande, the whole idea of an investment-driven foreign policy that the United States was willing to fight gargantuan wars both hot and cold over is….there's no other word for it….a joke.

Matt has argued perhaps Africa was too poor to matter. Actually, almost the entire Third World hardly seems a bother…

Now, the FDI position data above do not include exports or “factor income”, i.e., dividends, interest payments, rents and other returns to assets held abroad. Here follow my computations of the net cumulative resource flows to and from the United States and the Third World between 1940 and 2013 :

resource_transfer

 

I stress : the above do not represent flows of investment capital. Rather, they represent the net proceeds from exports and imports between the United States and developing countries, plus the balance of repatriated investment income (like profits from foreign operations, or dividends, or interest on debt, etc.), plus transfer payments like immigrant remittances, foreign aid grants, humanitarian relief and charitable donations.

Note, the biggest numbers above are in the trillions. I also stress the word “cumulative” : I’ve added up all the annual data for exports, imports, incomes and transfers. Normally these would be expressed as percent of GDP, year to year, but I have chosen to convert all the nominal data into constant 2013 dollars and aggregate the years for a reason which will become clear soon enough.
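The conversion just described — deflate each year's nominal flow into 2013 dollars, then sum across years — can be sketched like this. The annual flows and price-index values are made-up illustrative numbers, not my actual data:

```python
# Convert nominal annual flows to constant 2013 dollars and aggregate.
# Made-up illustrative series; the real data live in the spreadsheet.
nominal_flows = {1950: 1.0, 1980: 5.0, 2010: 40.0}   # USD billions, nominal
price_index = {1950: 12.0, 1980: 44.0, 2010: 96.0, 2013: 100.0}  # 2013 = 100

def to_2013_dollars(year, nominal):
    """Rescale a nominal flow by the ratio of the 2013 price level to that year's."""
    return nominal * price_index[2013] / price_index[year]

# Aggregate the whole period in constant 2013 dollars.
cumulative_real = sum(to_2013_dollars(y, v) for y, v in nominal_flows.items())
print(f"Cumulative flows, constant 2013 dollars: {cumulative_real:.1f} bn")
```

Summing in constant dollars this way is what allows flows from 1950 and 2010 to be added on a comparable footing, rather than expressed year by year as a percent of GDP.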

Also, for the earliest years (i.e., before the 1970s), I could not easily find complete investment income data. I only found incomes received by the United States from developing countries, but not the income outflows and transfers from the USA. But I’ve included the incomplete data in the 1940-2013 aggregates, because the incompleteness biases the data against my view anyway.

Most of the enormous resources flowing out of the United States went to China just in the past 15 years. But nearly $3 trillion went to other developing countries over the same time period.

But the pattern most strikingly contrary to what we should expect, if Matt's theory were correct, is that these resource flows into the Third World get bigger as we approach the present. In part, this is because even the poorest economies have grown in size in the last 75 years. But it's remarkable that, even as the Cold War has receded from memory, and even as the Neoliberal Imperium has stormed nearly all redoubts of economic nationalism, the undoubted hegemon that is the United States has reaped such meager rewards. Actually, at least as judged by net resource extraction, no rewards at all !

In terms of resource extraction from the periphery by the "metropolitan core", the Third World looked more stereotypically colonial in the 1940s, 1950s and 1960s, at the height of economic nationalism, than it would later, when nationalism was defeated. And it's almost certain that, had it not been for the debt crisis of the 1970s and 1980s, the resource flows would have turned favourable for the Third World even earlier.

Now, I’m not disputing the world has been “neoliberalised” to a great extent. Obviously other rich countries, besides the United States, have resource flows with the Third World. If a neoliberal world with a “favourable investment climate” had been the goal of US foreign policy, then it seems to have been achieved, but largely on behalf of non-Americans.

What about the costs of getting to this point, of American military dominance coupled with an underwhelming American share of the spoils ? The cumulative costs in 2013 dollars of the US engagement in the world since 1940 :

costs_us_internationalism

Those are all trillions. The second column excludes foreign aid spending in case there's been double-counting with the earlier balance of payments data. These figures do not include the $1 trillion of military spending in the period 1900-1940, or any interest payments on debts incurred as a result of any of the wars at any time, or the potential economic efficiency costs of those expenditures, or the supplemental appropriations for the Vietnam and Korean Wars, which I could not locate quickly. But what's a couple of trillion ? They do include, as far as I know, all the off-budget supplemental appropriations for Iraq and Afghanistan up to 2010.

Now, all this we know in retrospect. Maybe the American launch out of isolation and into the world was a kind of runaway train, with a logic and momentum all its own. But that “retrospect” has got a 75-year span. You would think even a half-rational actor or an outright half-wit might have realised a mere few decades into the costly project that it just didn’t make any sense if the ultimate goal was a “favorable investment climate”. At some point you need to see returns. And given the resource flow patterns that exist right now, you could not amortise those $50 trillion at all !

If there was a kind of long-range but flexible strategy by the United States to create a “favourable investment climate” for itself in the Third World, then it must have been implemented by rank nincompoops. Or, another possibility is that Chomskyians have overextrapolated from the Central-America-Caribbean model of the interwar era to the whole world.

(The comments section on this post is closed. If you’d like to comment, please go to Part 4, “The Mystery of US Behaviour in the World”.)


Filed under: Cold War, Foreign Investment, International Relations, U.S. foreign policy Tagged: Cold War, Foreign Investment, international politics, International Relations, US foreign policy

A Very Brief History of Foreign Investment


Summary : (Part 2 of 4) As the prelude to a critique of commenter Matt’s view of American foreign policy presented in Part 1, I sketch a brief history of foreign investment as context. Fear not the drear of evil, for the post is mostly pictures (charts) ! ( Cf. parts 1, 3, 4 )


Global Capital, 1914 to the present

In the Victorian era, most of the world’s surplus savings flowed from Western Europe to the developing countries of the time, which included some rapidly industrialising ones like the United States. So American railroads and canals were financed, at least in part, by European capital. At the same time, European portfolios also included the raw materials and agricultural output of what today is called the Third World. These would be extracted, processed and exported to the European market.

Of course when I say "Europe", I mean mostly the United Kingdom, for it was the global investor par excellence, by far the largest shareholder in the global stock of foreign investments in 1870-1914. The UK had accumulated huge savings surpluses from its early industrialisation, and these were exported around the world. Sometimes historians speak of Britain's "informal empire", which may have been more important and influential than the formal empire of the red bits on the map. I agree with that view.

But all that was essentially brought to a dramatic end by a conspiracy amongst the First World War, the Great Depression, the Second World War, Decolonisation and Third World socialism. The scanty historical data are presented by one of the best sources on this subject :

twomey T8

[ FDI or "foreign direct investment" refers to the direct ownership of assets like factories, farms and mines. "FI" includes both FDI and loans/bonds held by foreigners. "OFDI" is "non-railroad FDI". ]

A chart of Mexico from Twomey :

twomey G3

 

After 1945, global FDI flows did recover from depression and war, but the geographical pattern of foreign investment was completely transformed. In the postwar economic boom of 1945-80, most of the investment flows took place within and between developed countries in a phenomenon which has been labelled the Lucas Paradox. Here is a comparison of the foreign investment stocks between 1914 and 2001 from Schularick :

 

SchularickT4

 

Basically, most — over 90% in 2001 — of the world's foreign investment positions represent rich countries owning one another's productive assets. The chart below (source) uses cruder data, but it still illustrates that the within-rich pattern of FDI flows was already well under way for the United States as early as 1960 :

usoutwardfdi1960s

 

An even more illuminating way of looking at FDI data, from the second best source on the subject :

 

obstfeldtaylor_figure10

 

I cannot improve on the implications of the above by Obstfeld and Taylor themselves :

Figure 10 both illustrates the periphery’s need to draw on industrial country savings, as well as an important dimension in which the globalization of capital markets remains behind the level attained under the classical gold standard. In the last great era of globalization, the most striking characteristic is that foreign capital was distributed bimodally; it moved to both rich and poor countries, with relatively little in the middle. Receiving regions included both colonies and independent regions. The rich countries were the settler economies where capital was attracted by abundant land, and the poor countries were places where capital was attracted by abundant labor.

Globalized capital markets are back, but with a difference: capital transactions seem to be mostly a rich-rich affair, a process of “diversification finance” rather than “development finance.” The creditor-debtor country pairs involved are more rich-rich than rich-poor, and today’s foreign investment in the poorest developing countries lags far behind the levels attained at the start of the last century. In other words, we see again the paradox noted by Lucas (1990), of capital failing to flow to capital-poor countries, places where we would presume the marginal product of capital to be very high. And the figure may understate the failure in some ways: a century ago world income and productivity levels were far less divergent than they are today, so it is all the more remarkable that so much capital was directed to countries at or below the 20 percent and 40 percent income levels (relative to the U.S.). Today, a much larger fraction of the world’s output and population is located in such low productivity regions, but a much smaller share of global foreign investment reaches them. [emphasis mine]

Personally I’ve never found the Lucas Paradox terribly paradoxical — it’s one of those “paradoxes” that arise only from ahistorical models of the world. Basically, in the 19th century, raw materials had high value relative to the manufacturing process that converted them into stuff. But with the exponential rise in technological sophistication, the manufacturing process became relatively more valuable (=added much more value than previously), and the materials intensity of output (the amount of raw material input per unit of output) declined. Put simply, more was being made with less.

But such a process requires technological mastery, and that mainly exists in the rich countries. So whereas Victorian investors were concerned with acquiring high-value raw materials for use in production (and consumption) at home, mid-20th century foreign investment was more about owning the high-value-adding manufacturing operations that produced stuff for the internal markets of the rich countries themselves.

It’s not that resources were not flowing to the Third World in the post-war period. They were, but mostly in the form of foreign aid and bank loans (source) :

imf fig1

Ironically, most of the resource transfers from the rich to developing countries in the form of loans were just reshufflings of ledger entries in Western and Japanese banks. In the 1970s the prices of all commodities skyrocketed — oil, copper, gold, tin, coffee, everything. So developing countries were awash in money. Yet, lacking strong financial institutions, they (including many of the Arab oil producers) deposited the commodity revenue in western banks. Thus the dollars, francs, yen, marks & pounds exchanged for Chilean copper, Iranian oil and Ethiopian coffee were re-lent to a different mix of developing countries. What followed is one of history's great financial catastrophes.

As the chart above intimates, the late 1980s and early 1990s did see an upswing in FDI flows into the Third World. And that’s related to the debt crisis. In exchange for a combination of debt relief and financial assistance, the international financial community demanded that the debtor countries restructure their economies in fundamental ways, including privatisation of state-owned assets, statutory protection of foreign investment, and the lifting of controls on the mobility of international capital. Thus started, in fitful steps, the return of investment in “emerging markets” :

unctad I3

Source : UNCTAD. Click to enlarge

The pattern of rich countries primarily investing in other rich countries has not gone away. That’s still the predominant case, right now, in 2014. But the absolute level of foreign investment flows has burgeoned in the last 15 years for every country open to them. We are nowhere near back to 1914 levels in most countries, but 25 years of neoliberal prescriptions have definitely had a conspicuous effect on foreign direct investment in the Third World. (Go to part 3 of 4.)

(The comments section of this post is closed. If you’d like to comment, please go to Part 4, “The Mystery of US Behaviour in the World”.)


Filed under: Cold War, Economic History, Foreign Investment, International Relations, U.S. foreign policy Tagged: Cold War, Foreign Investment, international politics, International Relations, US foreign policy

The Political Economy of US Foreign Policy


Summary : (Part 1 of 4) I critique commenter Matt’s argument that, at the deepest level, American foreign policy has sought a “favourable investment climate” for itself in the Third World.

US Foreign Policy & Crony Capitalism

Before I get specifically into Matt’s beliefs, let me first address what I think is a common argument about US foreign policy : the “crony capitalist theory”. I stress, this is not quite the same as Matt’s view.

According to the crony capitalist view, most US actions in the Third World promote American business interests, including such things as the ownership or control of oil in the Middle East ; or the protection of fruit plantations in Central America and the Caribbean. These are the classic staples of the naive street-corner cynic.

The crony capitalist model assumes narrowly self-interested, parochial actors that influence the US government in discrete cases. It’s plausible a priori, because, in the domestic context of any political system, “socialise the costs and privatise the gains” is a classic form of rent-seeking behaviour. If and when they can, businesses naturally seek to curry political influence and extract advantages for themselves in the design of legislation or the administration of policies. So the crony capitalist model is simply the same principle applied to foreign policy. Thus, one might argue that many of the US interventions in the Caribbean Basin, especially before 1945, emerged from documented collusions between the US government and very specific business interests, such as the United Fruit Company.

However, the crony capitalist theory fails when it is applied to the whole global strategy of the United States over the long run. It founders on the immense multitude of examples, especially during the Cold War, in which the United States clearly felt unbothered by economic policies or events in the Third World which were patently contrary to the objective of domination by US corporations.

A small but telling example from after the Cold War would be the US tilt toward Armenia in the Nagorno-Karabakh War. Although the United States had initially sought neutrality in the conflict, Armenian-Americans in California prevailed upon the US Congress to embargo all aid to Azerbaijan. This is despite the fact that Azerbaijan was host to numerous US multinationals doing deals in oil and natural gas in the Caspian Sea. Likewise, under the influence of Cuban-Americans, the United States doggedly maintains an embargo against Cuba even though the US business community appears to favour doing business with it. Then there is Iraq : the fact that US oil companies neither control Iraq's oil, nor receive much more fee revenue from its oil fields than the Malaysian state oil company, is a serious rebuke to all those "war for oil" babblers of the early 2000s.

The incongruities of the “crony capitalist” theory during the Cold War are never-ending. In the period 1950-80, nearly the entire Third World would indulge the global fashion for “import-substitution industrialisation” (ISI), which aimed to limit imports of manufactured goods from the rich countries and stimulate the production of local “import substitutes”. In some cases this plan involved the use of tariffs, subsidies and licences to encourage local production for the domestic market ; and in others central planners would allocate capital to state-owned enterprises and set up detailed production targets. Although in its classic form ISI is strongly associated with Latin America’s response to the Great Depression, some of the celebrated postwar stalwarts of the system included India, Nigeria, South Africa, Ghana, Tanzania, Turkey, Iran, Iraq and Israel (a state founded by socialists, with crucial support from the communist bloc).

The list is endless and it would be more efficient to cite the exceptions. Whether the country was a friend or foe of the United States, did not make much difference. Morocco, considered a staunch, conservative ally during the Cold War, had one “five-year plan” after another for its bloated state enterprises. (These were privatised in the late 1990s and early 2000s, although they went from state-owned to royal-family-controlled.)

But the best example of a prominent US ally whose political economy was not vividly different from that of neighboring Soviet-allied states was Iran under the Shah. He might have been restored to power by the USA and the UK after being overthrown in a nationalist revolution, but the supposed stooge himself fully nationalised Iranian oil assets. The stooge also led the drive in 1974, as a member of OPEC, to quadruple the international price of oil, an act which brought economic chaos to his puppet master's country. The Shah of Iran was also an economic progressive who used his carbon windfall to provide free public education and healthcare, and to finance a land reform which transferred millions of hectares of land to landless peasants.

In contrast with the ISI countries, most of the East Asian countries turned to the strategy of export-led industrialisation. This was just as state-directed as ISI, except for a crucial difference. The ISI model assumes domestic producers could count on domestic consumption, whereas the Asian model exploits the preexisting cultural habits of high savings and low consumption. Thus the East Asian developmental state intensified the suppression of internal demand and promoted export-manufacturing industries. The likes of Japan and South Korea would close their markets, for the most part, to manufactures imports and foreign investment from the United States, the benefactor to whom both literally owed their existence. In return, the United States largely practiced unilateral free trade.

By far the most egregious discordance between the crony capitalist theory and the reality of American behaviour has to be the US support of Israel. (Its causes have parallels, writ much larger, with the case of the US stance on Cuba and Armenia.) The United States had actually been fairly aloof in the 1950s — and turned downright hostile in 1956 with the Suez Crisis — but the 1960s saw a tilt toward Israel and against its Arab enemies, very much solidified after the 1967 war. I don’t see what, at least in crude self-interested terms, the USA gets out of its extraordinary closeness with Israel, an intimacy which rivals the Anglo-American alliance. I mostly see handicaps in a region the USA regards as vital enough to have prosecuted several wars in. There was the 1973 oil embargo, the near-confrontation with the Soviet Union in 1973-74 resulting from the Yom Kippur War, diplomatic complications at every front, terrorist attacks since the 1970s, etc.

(A propos of which, there’s a strong whiff of inconsistency between blaming US intimacy with Israel and the Palestinian situation for the terrorism committed by Saudis, Egyptians, and Pakistanis, and at the same time arguing that the relationship is fundamentally driven by crass self-interest on the part of the United States. Not an outright contradiction, but there’s a tension between the statements which are often contained within the same mind.)

“A Favourable Investment Climate”

Matt is aware of such incongruities, and he’s pointed out some of them himself. So he favours a more abstract approach from which I quote the choice part :

The general strategy of US foreign policy in the Third World, and especially Latin America, during the Cold War was to promote a “favorable investment climate.”

It was also the strategy before the Cold War: the peak of US intervention in Latin America was 1898-1933; for half of this period there were no “commies” in existence, and for the other half they were hardly a serious threat…

Now, I repeat: US foreign policy is not omnipotent. We have limited resources, and we cannot do away with all the things we dislike at once. Since Communism was the greatest threat to favorable investment climates, we tended, at any given point, to focus most of our resources on combatting movements that were explicitly Communist or which we perceived to be Communist. This leads some people to believe that we only opposed Communism, and were just fine with democratic nationalism and other threats to the American-dominated international economic system. But this is the wrong moral to draw from the Cold War. It only sometimes looks as if this were true because we tended to focus less effort on opposing non-Communist nationalists, since Communism was the greater threat. But that doesn’t mean we liked economic nationalism, and it doesn’t mean that we didn’t do what we could to oppose it when it was feasible to do so.

The US also opposed fascism before and during World War II, especially from Japan. Why? Because fascism creates an inhospitable investment climate, or at least an imperfect one. The Japanese Greater East Asian Co-Prosperity Sphere would have put an end to US plans for an Open Door in Asia (even as it strove for a relatively closed one in the Western Hemisphere). The US also opposed European imperialism after WWII, and for the same reasons. Except, of course, where the alternative was worse, like in Indochina…

You mention South Korea, Iran, and Turkey (why not Taiwan too?), all US allies with nationalist economic policies. Notice something about these countries: they were all on the periphery of the Communist world. The US needed these countries as bulwarks against the Soviet Union and China. They needed to be prosperous and militarily powerful. We could not afford for South Korea to be Honduras…. [ More comments on the “net” strategic value of Iran under the Shah, Turkey, Israel, etc. which outweighed other considerations. ]

As Lenin recognised, capitalists often compete amongst themselves and do not coordinate their efforts against their ideological enemies. He was talking about capitalist states, but his observation applies equally to capitalists within a country. Businesses promote their own individual interests and do not necessarily advance capitalism in the abstract. So it is up to the neutral, disinterested governments of capitalist states to promote the general welfare of capitalism — at least their own capitalists.

In that vein Matt regards the US government, not as the captive of discrete business insiders with numerous conflicting agendas, but as a rational, 'above the fray' actor with a coherent, long-range plan to make the world safe for American business in general. So, rather than haphazardly seeking any short-sighted commercial gain, the USA flexibly weighs its options and pragmatically sets priorities in the face of threats to Pax Americana. European colonialism, German/Japanese fascism, and Soviet communism were all rivals of US mercantilist capitalism, and in defeating them utterly the United States was able to take the long view and tolerate fairly minor deviations from ideological orthodoxy, like state-led industrialisation in the Third World.

Matt's argument in a way mirrors George Kennan's diagnosis of Soviet behaviour in the famous "Sources of Soviet Conduct", published in Foreign Affairs in 1947 under the pseudonym Mr X. The analysis ties together very well the internal and external roots of Soviet behaviour. The Marxist-Leninist equivalent of the "favourable investment climate" was identified by Kennan : the ideology required its own expansion, but it would be accomplished with patience, flexibility and opportunism. Which is of course how the Soviet Union actually behaved in the world, not dogmatically, but pragmatically in pursuit of its long-range ideological goals. The Soviets just weren't very choosy or punctilious about the ideological orientation of their clients.

But since the collapse of the Soviet Union and international communism, the United States is now virtually unrestrained in its ability to pursue laissez-faire capitalism around the world. That ideology need no longer take a back seat to strategic military priorities, nor is there an alternate superpower to balance against American hegemony. Thus arrived the neoliberal “Washington Consensus” of the last quarter-century with the mantra of privatisation, financial liberalisation, free trade, fiscal austerity, and deregulation.

Conveniently, the Third World boom of 1950-80 had wound up in the utter shambles of hyperinflation, massive debt, balance of payments crises, growth collapse, and (in some cases) civil war and famine. With the economic nationalism of the Third World and the central planning of the communist states both discredited, the trinity of the IMF, the World Bank and the US Treasury could now impose on desperate countries the most dramatic top-down economic restructuring the world had ever seen. There had even been a dress rehearsal : Chile under Pinochet spent the years 1973-89 remodelling a stagnant social-democratic populist regime after the postulates of Milton Friedman and the Chicago Boys.

In exchange for assistance and debt relief, the neoliberal institutions of the Washington Consensus required that the financially desperate countries be pried open to penetration by global capital. Whether it was water works in Bolivia or electricity providers in South Africa or the national telephone monopoly in Mexico, the 'commanding height' assets of the Third World could be carved up much like they had been in the 19th century.


Matt’s “favourable investment” global strategy is inherently unfalsifiable.  If one’s prior is that everything the United States does in the world is in order to create/maintain a favourable investment climate in a broad sense, then all decisions can be rationalised around that premise. If the United States didn’t appear much bothered by countless instances of economic nationalism or social democratic experimentalism in countries which were either neutral or pro-American in the Cold War, then those must have been because their strategic value was more important than their investment value ; or their markets just weren’t that big anyway ; or the USA was not omnipotent and had to weigh their priorities in view of limited resources ; or the USA was sacrificing short-term gains with a disciplined eye toward the long-term goal of defeating the Soviet Union, the principal obstacle to the unalloyed triumph of capitalism.

It’s not possible to falsify such a premise with individual cases. So I attack his thesis in a different way.


Edit : Responding in the comments section, Matt has said, I have not accurately characterised his views : “I was caught off guard by your attribution to me of a belief that the United States had a grand, long-term master plan to conquer the economies of the Third World. Although I never said that the US promoted favorable investment climates with such a scheme in mind, when I reread my comments I can see how someone could come to that conclusion”. I had also made characterizations of Noam Chomsky to which he objected. So I have removed references to Chomsky from the above, in order to avoid confusion.

(The comments section of this post is closed. If you’d like to comment, please go to Part 4, “The Mystery of US Behaviour in the World”.)


Filed under: Cold War, Foreign Investment, International Relations, U.S. foreign policy Tagged: Cold War, Foreign Investment, international politics, International Relations, US foreign policy

大東亞共現代性圏


I just noticed Tyler Cowen had blogged a Boston Globe article about the number of loanwords in various languages (is there something from the press Cowen will not blog ?), and his own take was to ask, which major language has the lowest percentage of foreign loanwords ? He seems to think Chinese could be one, but many people in the comments section (correctly) reject the suggestion. Here I talk about “Japanese-made Chinese words”.

(1)

There are two basic kinds of loanwords amongst languages : the “conventional” one where both the meaning and the form of the word are borrowed simultaneously ; and the other where the meaning is borrowed but the form is “translated” into indigenous roots — or “calque“.

In the “conventional” loanword, the borrowing is usually apparent because the original form hasn’t changed too much. English, a world champion importer and exporter of words, contains tens of thousands of Latin and French borrowings whose appearance is only slightly modified, such as guarantee and importance. In more recent borrowings, English hardly bothers even with perfunctory transformation, e.g., tsunami and angst. Likewise, languages like Turkish and Indonesian don’t invent new words for “electromagnetism” ; they only modify the word to reflect the local difference in pronunciation and spelling standards.

But in calque languages, the loanwords tend to be invisible. An example is the Russian самолёт (samolyot "self-flight", or airplane), which looks and sounds purely Slavic. Both Russian and German are abundant in calques, but not as much as Arabic, a language which on first appearance seems to lack any foreign loanwords. In fact its abstract vocabulary is heavily borrowed from classical Greek (and later the modern western languages), but the actual words were calqued from Semitic roots. Sometimes the borrowing first took place in Syriac, which then passed the words on to Arabic.

English and the Romance languages normally do not create calques from indigenous roots, but they still have thousands of “classicising” calques — neologisms built on Greek or Latin roots which were not found in the original languages. Thus microscope is created entirely out of Greek parts, viticulture from Latin, and automobile, a miscegenation of Greek and Latin.

(2)

Most people are aware that Classical Chinese stands in a similar relation to Japanese and other East Asian languages as Greek and Latin have stood to the modern European languages. Japanese has borrowed thousands of whole words of Chinese origin, but using classical Chinese roots the Japanese have also come up with calques called wasei kango (和製漢語) or “Japanese-made Chinese”. A pretty basic example is the formal Japanese word for car, or jidosha (自動車 or “self motion vehicle”) — which is almost exactly parallel to the classicising calque automobile. (That set of characters is not used in Chinese to denote “automobile”.)

Since the Japanese were the first in East Asia to adopt Western knowledge and technology on a large scale, they had to find equivalents of new words by the thousands in the late 19th and early 20th centuries. A great many of these were “borrowed back” by the Chinese. According to this source,

Chinese reform leader Kang Youwei 康有為 once said: “I regard the West as a cow, and the Japanese as a farmhand, while I myself sit back and enjoy the food!” Early Japanese translations made large numbers of important scholarly works and concepts from the West widely available to Chinese audiences; the Chinese felt that Japanese was an “easier” language than Western ones for a Chinese to learn. The Qing court sent increasing numbers of students to Japan – 13,000 in 1906. Between 1902 and 1904, translations from Japanese accounted for 62.2 per cent of all translations into Chinese. The great majority of these works were themselves translations from English and other Western languages.

But because the Chinese language has its own way of pronouncing Chinese characters that’s different from Japanese, the reborrowed words may sound completely different and many Chinese people may not even know these had been first coined in Japan.

Here is a very short list of “Japanese-made Chinese” words which did get exported to Chinese [source]:

  • telephone, train or tram, electron
  • chemistry, physics, biology, astronomy
  • philosophy, history
  • library, art, religion, comedy, symphony
  • system, industry, corporation, market, international
  • communism, communist party, proletariat
  • people’s republic

Notice the word “philosophy”, which may surprise some people because, after all, wasn’t classical Chinese civilisation full of philosophers ? Yes, but beware of anachronism ! We moderns find the similarity, but East Asians, when first confronted with European philosophy, considered it something quite different from Confucius et al.

Basically, a great many modern words in Chinese related to science, technology, government, and commerce, as well as abstract western concepts which may not have had an exact equivalent in East Asia, trace back to late 19th and early 20th century coinages in Japan. An article in a modern Chinese newspaper would be impossible without these Sino-Japanese calques.

In some cases, the Japanese went looking in ancient Chinese texts for words with similar but not identical meanings, and resurrected them by giving them modern, western significance.  These include : society, capital, revolution, economy, law, science, election, heredity, literature, etc. There are also some pure Japanese words written in Chinese characters that were borrowed.

But the “glamour words” are not the extent of it. I was completely surprised to learn that Chinese appears to have borrowed a range of fairly mundane phrases or compounds from Japanese. Some examples include “new products appearance”, “shopping district”, “low birth rate”, and “housekeeping”. The nature of the Chinese character system implies that a new phrase or compound is almost a low-grade invention, because there’s no inevitable way such words must be formed.

(In Korean, the situation is more complicated, since it has heavy influence from both China and Japan. In short, the Korean language has directly borrowed Chinese loanwords, “Korean-made Chinese” words, “Japanese-made Chinese” words exported to Korea, European words converted into Japanese form and then exported to Korea, etc.)

Japanese used to have a lot of Portuguese and Dutch loanwords as a result of contact with traders starting in the 16th century, but most of those are now obsolete. One major survivor is the Japanese word for bread (パン pan), which is derived from the Portuguese pão. I’m convinced, though I can’t prove it, that the knowledge of the Luso-Japanese word pan was the impetus behind the Chinese translation of “bread” as mian bao (lit. “wheat bun”, simplified 面包 traditional 麵包). The fact that bao sounds like pan is pure coincidence, and its character has been used in words referring to various kinds of filled buns for a very very long time. But the main reason I believe in the pan-bao connexion is that in some other Chinese languages (e.g., Wu or Shanghainese, and Min Nan or Taiwanese) the second character would be read as pao or pau. Technically, /b/ and /p/ are voiced and unvoiced variants of the same sound. Vietnamese, I believe, also uses a cognate of mian bao, especially in reference to that headcheese-on-baguette sandwich, whose choice might have been influenced by the French pain. I could be completely wrong, but it would be neat if all this were true !

(3)

Speaking of bread… The point of the above is that Japan has been the intermediary for the diffusion of Western modernity in East Asia. And that also shows up in bread, or rather bakery items in general. Anyone who has been to East Asia knows it’s full of bakeries and patisseries just like this (image source):

panya

In such places the delicacies on offer include familiar western staples, like croissants, baguettes or strawberry shortcake, albeit with localised taste ; and various semi-traditional pan-Asian buns filled with bean paste or chestnut purée. But there are also many hybrid pseudo-western bizarreries, with no equivalent elsewhere in the world, such as :

yakisoba_pan

The above is Japan’s answer to both China and the West : fried noodles in a hot dog bun. More specifically, the noodles are yakisoba, itself a very modern interpretation of fried noodles dating from the early 20th century, one of whose principal ingredients is … Worcestershire sauce, or, rather, the Japanese version of it. The green bits are dried seaweed flakes (actually algae, but that’s being pedantic). This alarming combination of starches probably emerged after the war with the American occupation, but I’m not sure.

But Japan’s caricature of globalisation is surely the karei pan, or bun filled with Japanese “curry“, covered in breadcrumbs and deep-fried :

karei_pan

The breadcrumbs are panko, the coarse type preferred by the Japanese which has become inexplicably trendy in many western countries. The British pseudo-Indian “curry” was most likely an import along with many other semi-western dishes that are mainstays of Japanese dining today. The Japanese twist on “curry” is primarily that it’s a roux of starch and palm oil, made from dissolving the semblance of a chocolate bar in water, into which miscellaneous detritus are then introduced.

Bakeries like the above are ubiquitous in East Asia, and they are imitations of the Japanese bakery model, so to speak. It would also seem that the Japanese preference for ultra-refined white flour for making breads and pastries has been transferred to the rest of East Asia. The fashionable sort of “whole grain” breads mixed with twigs and birdseed that one finds at trendy locales in western countries is not yet widespread. (Though brown rice is making a comeback in Japan.)

Speaking of both loanwords and food, one Japanese word that’s not European-derived but was coined in response to industrialisation and later imparted to Chinese is ajinomoto (MSG powder ; Japanese 味の素 lit. “principle of taste”, Chinese 味之素). A Japanese scientist early in the 20th century had isolated umami, one of the fundamental tastes, and this was the basis of a major international food corporation, Ajinomoto, the world’s largest supplier of MSG as well as aspartame. Since so much of East Asia’s cuisines are based on exploiting and intensifying the naturally occurring glutamates in their ingredients, Japanese MSG played a major role in Asia’s enormous processed food industry.

The history of MSG and its extremely widespread use in global food processing is something I consider a synecdoche of Japanese industrialisation, but that’s a topic for another day.


Filed under: East Asia, Food, Languages Tagged: calques, Chinese, Chinese borrowings from Japanese, Hanzi, Japanese, Japanese-made Chinese words, Kanji, loanwords, wasei kango, 和製漢語

שׂבולת שׂמית


Stream-of-consciousness thoughts about why we say “Semitic” even though the root is “Shem”. And, yes, I know the Hebrew letters in the title say “semitic sibboleth” and not “shemitic shibboleth”.


In my youth I was exposed to many years of Greek and Latin, and a concomitant of pubescence was my learning that the ancient Greek sound inventory did not include the phoneme /sh/.

“It is for the lack of Sh in Greek”, revealed our teacher, “that we say See-mite, and not Shee-mite, which might incite unintended entomological afterthoughts”. Or at least that’s how I remember the observation. Maybe he was referring to the widow’s mite, for all I remember now.

Even though the root is Shem (שׁם), the name of Noah’s eldest, we say “Semitic” just because we got the word through Greek and the Hebrew name had to be transcribed in Greek as Sem (Σημ). By contrast, in Modern Hebrew, the Israelis do say “SH[emitic]” and “[anti]SH[emitism]”. [1]

( To the sound value /sh/ I henceforth refer, from a misguided sense of linguistic united-nationism, by the symbol /ʃ/ of the International Phonetic Alphabet, also known as the “voiceless palato-alveolar sibilant“. )

Technically, Greeks can’t tell the difference between /s/ and /ʃ/, for these are “allophonic” in Greek. So to this day Greeks say pasas for the Turkish pasha or sakis (σαχής) for the Persian shah, a variety of monarch with which Greeks have been long familiar. Thus the famous Persian emperors Kurosh and Darayavahush are mostly known as Cyrus and Darius. (For the latter it’s a serious improvement.)

The distinction between /s/ and /ʃ/ is famously exploited in the Bible :

“Then said they unto him, Say now Shibboleth: and he said Sibboleth: for he could not frame to pronounce it right. Then they took him, and slew him at the passages of Jordan: and there fell at that time of the Ephraimites forty and two thousand.” [Judges 12:6, King James Version]

The Ephraimites were one of the Tribes of Israel and thus Hebrew-speaking. But Ephraim, son of Joseph, was half-Egyptian, was born in Egypt and grew up there. So the Ephraimites had lost the ability to distinguish between /s/ and /ʃ/ — at least that’s the story.

( The odd effect of being Egyptian has continued to the present day, linguistically. The first name of Nasser, the nationalist leader of Egypt 1954-70, was جمال which is pronounced Jamal in most other Arab countries, but not in Egypt, where the letter ج is rendered like the hard /g/ as in Gamal Abdel Nasser. ) [2]

When I read Judges 12:6, I had my first doubts about the original explanation for why we say Semitic instead of Shemitic. After all, the Hebrew Bible had been translated into Greek 2300 years ago and the story of shibboleth had to be told somehow. Since Biblical Greek was completely disdained at my school, I actually had no idea how “shibboleth” was rendered in the Septuagint (the Greek translation of the Hebrew Bible, completed in Alexandria around 200 BC). So I looked it up, and what a disappointment ! The Greek translation completely avoids the issue by not mentioning “shibboleth” at all. But wherever “Shem” is mentioned, it is transcribed as “Sem”.

But my doubts about the origin of “Semitic” in English would periodically resurface over the years. What does it say in the Vulgate, the Latin translation of the Bible by St Jerome ? It actually distinguishes between Scibboleth and Sibboleth, although this could be because late Vulgar Latin might also have distinguished between /s/ and /ʃ/. Certainly Italian does today. But the Vulgate renders Shlomo as Solomon and Ishmael as Ismael, so the /sc/ in the passage from Judges was probably ad hoc, to actually tell the tale, unlike the Septuagint.

And the name of Noah’s son is Saam (سام ) in Arabic, Hebrew’s cousin language, even though Arabic does have a letter representing the /ʃ/ phoneme ( ش ). I am assured, however, by Semitic historical linguistics that thanks to the evolution of Semitic languages the Arabic /s/ corresponds to Hebrew /ʃ/ in cognates, e.g., the root S-L-M, which gives salam in Arabic but shalom in Hebrew.

Then I noticed that the word “Shemitic” appeared fairly often in books from the late 18th century, probably thanks to Protestants’ sense of Biblical accuracy. So I entertained the theory that the German polymath August Ludwig von Schlözer, who actually invented the category called “Semitic languages”, had confused Shem and Sem. Therefore we live with an error of German Orientalist scholarship. I found that plausible because the difference between the Hebrew letters sin /s/ and shin /ʃ/ amounts to a dot :

hebrewsinshin

(And you can see how the Russian Ш /ʃ/ and the Greek letter sigma Σ are both related to the same ancestral letter. ) The two glyphs are identical but for the left-right difference in the placement of the dot above. And frequently the dot is not even written down ! “Those damned dots”, as Churchill’s father used to refer to decimal points when he was Chancellor of the Exchequer…

There are other instances of “misreading” begetting a life of its own. In Turkish, the word Ottoman is Osmanli. Osman is the Turkish variant of the Arabic name Uthman ( عثمان ), the name of the third Caliph of Islam. Turks can’t lisp, so they said Usman or Osman. The English word Ottoman comes from an Italian corruption of the Arabic original. Like the Turks, the Italians can’t lisp, but unlike the Turks the Italians interpret /θ/ as /t/. Thus, Ottoman. The Germans remain faithful to the Turks : Osmanisches Reich.


Then recently I read In the Beginning : A Short History of the Hebrew Language, which despite the title spends an inordinate amount of time evaluating the work of the Masoretic scribes in great detail. These were the people who, in a fit of crazed diligence, feverishly marked up the Hebrew script, like this bit :

aleppo_codex

All those dots, lines, and squiggles are pronunciation guides for readers to ensure proper reading of scripture. Those are necessary because Hebrew, ancient and modern, is written with key pronunciation clues totally missing.

Semitic languages like Hebrew and Arabic lack letters for short vowels, and the 3 (or 3½) long vowels that do get written down nonetheless lead double lives as… consonants, and you aren’t told which is which and when. Five consonant pairs, including S/Sh and B/V, are distinguished only by dots, but (in Hebrew though not in Arabic) the dots are normally not written ! Doubled consonants are also unmarked — as well as a bunch of other inconveniences which make the writing system vaguely hieroglyphic, from the point of view of a speaker of modern European languages.

Thus, prospective learners of Semitic languages like Arabic and Hebrew might have difficulty knowing how a string of letters is supposed to sound without looking it up in a dictionary. Modern Israeli newspapers totally dispense with those damned dots, lines and squiggles. If English were transcribed along unpointed Semitic principles, then the sentence “ripples in the sea show where a ship had passed near the boat” might be rendered :

RPL AN T SY SV VR SP HD PSD NR T VVT
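The rendering above is the author’s own playful one (it also folds b and w into v, Hebrew-fashion). Just the two omissions described earlier, unwritten vowels and unmarked doubled consonants, can be sketched in a few lines of Python; the rules here are a toy approximation of my own, not any real orthography :

```python
# Toy sketch of "unpointed" writing: drop vowel letters and collapse
# doubled consonants. These rules are a crude illustration only, not
# an actual Hebrew or Arabic orthography.

VOWELS = set("aeiou")

def unpoint(word: str) -> str:
    """Uppercase consonant skeleton: no vowels, no doubled letters."""
    out = []
    for ch in word.lower():
        if ch in VOWELS:
            continue                      # short vowels are simply not written
        if out and out[-1] == ch.upper():
            continue                      # doubled consonants written only once
        out.append(ch.upper())
    return "".join(out)

sentence = "ripples in the sea show where a ship had passed near the boat"
skeleton = " ".join(s for s in (unpoint(w) for w in sentence.split()) if s)
print(skeleton)  # RPLS N TH S SHW WHR SHP HD PSD NR TH BT
```

The result is still mostly decipherable in context, which is roughly how fluent readers of unpointed Hebrew or Arabic manage every day.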

So the Masoretic scribes, in order to banish forever any possibility of mispronunciation of scripture, decided to mark everything and I mean everything — vowels, consonants, doubled letters, sentence stops (there are no punctuation marks in the original Bible), phrase separations, etc.

The trouble is, those scribes lived anywhere between 1000 and 2000 years after the various parts of the Hebrew Bible had been composed. The Hebrew they spoke was definitely not ancient Hebrew. Also, there were many different Masoretic traditions in different places, such as Palestine, Mesopotamia, etc., and they do not agree completely on the pronunciations.

Yet one of those traditions, called Tiberian Masoretic, established today’s standard of pronunciation for Biblical Hebrew. And their work is also the basis of the pronunciation of modern Israeli Hebrew. And they also established the “official” text of the Hebrew Bible, on which all post-1500 translations of the Hebrew Bible have been based. Babylonian Jews, the Alexandrian Jewish translators of the Septuagint, Jesus, the Apostles, the early Church fathers, and St Jerome who did the Vulgate, had all read different pre-Masoretic texts, possibly quite different in some places, and these survive only in sorry fragments.

Thus, on those Tiberian shoulders much does hang. For example, the very first phrase of the Hebrew Bible, “In the beginning” from Genesis 1:1, was originally written simply as a string of 6 letters, VRASYT (בראשית), and nothing more, and it was not even separated from the subsequent words. By adding 8 markers, the Tiberian Masoretes turned this into a separate and discrete b’reshit, which is the basis for all subsequent interpretations of the phrase, both rabbinical and secular.

On the markers devised by the Tiberian Masoretes, Joel Hoffman, author of the above-mentioned book, is quite clear : while the Tiberians did not simply project their own speech, they also certainly did not recapture the sounds of Ancient Biblical Hebrew. Hoffman goes into considerable depth to convincingly establish that point.

The Tiberian Masoretes mostly agree on the consonants with the Alexandrian Jewish translators of the Septuagint, but there are numerous discordances on vowels and sometimes on key values for consonants. It’s entirely possible that the Greek versions Abraham and Eua (Eve) are more accurate in representing ancient Hebrew than the “official” versions Avram and Havah. And Rebecca may actually be closer to the original than Rivkah. We don’t know for sure.

How does that illuminate the question of Semitic vs Shemitic ? Although Hoffman does not specifically address that question, he does give numerous instances of Masoretic interpretations not being supported by Greek transliterations of the Hebrew in the 3rd century BCE. That doesn’t mean the Masoretes were wrong, but we also can’t assume they were right.

So maybe, just maybe, the Masoretes got the sin/shin distinction in Hebrew wrong. In the original Hebrew of Judges, the distinction is between “shibboleth” spelt with shin/sin (ש) and “shibboleth” spelt with samekh (ס), another letter representing the sound /s/. It’s entirely possible that the /ʃ/ sound didn’t exist in Hebrew by the time Judges was composed, and our primary (only?) attestation is the Masoretes. Just a few centuries ago neither English nor German possessed the /ʃ/ sound, even though neither language can be imagined without it now. A candidate for what sin/shin (ש) might have been is the voiceless alveolar lateral fricative, which Greek probably would also have rendered /s/.

But the preceding paragraph could be utterly, completely and irredeemably inconsistent with some well-established part of Semitic historical linguistics. I’m not sure and I have to delve into it more deeply…


Edits :

[1] I’ve now been told that Israelis do not in practice say “anti-shemitic”. I only went by the location of the dot over ש in the dictionary. Edit : But apparently I was right the first time !

[2] The statement about Nasser’s first name has been criticised. The point here was a joke exploiting the fact that the Egyptian colloquial pronunciation of the Arabic letter ج is considered deviant by most of the Arab world, just as the Ephraimite pronunciation of “s(h)ibboleth” was considered nonstandard in Judges. Most of the rest of the Arab world realise the letter either as /d͡ʒ/, /ʒ/ or /j/. Originally  ج was /g/ or /gʲ/, which implies that Egyptian is more conservative and the other dialects, including Modern Standard Arabic, are the innovative ones. But that doesn’t change the social perception of Egyptian colloquial !


Filed under: Ancient Greek, Biblical Hebrew, Languages Tagged: Bible, Greek, Hebrew, Masoretes, Semitic, Shem, Shibboleth

Links 18 July 2014


I’m not intending to do “weekly links” or anything, but I wanted to highlight a blogpost by Victor Mair : what the Dungan language sounds like from snippets of the movie “Jesus” dubbed in Dungan. This is the language of Chinese Muslims who fled to Russian Central Asia and is considered a divergent dialect of Mandarin. You can hear what it sounds like from Mair’s links. The comments section, as usual, is excellent. Plus, the movie “Jesus” dubbed in over 1000 languages !

While I’m at it… Is English in reality a North Germanic (Scandinavian) language, rather than West Germanic ? The orthodox position is defended with great energy in a three-part critique by Asya Pereltsvaig.

T. Greer has an interesting blogpost in which he recollects that after reading Crosby’s Ecological Imperialism he started finding many “big” histories without ecological or biological awareness rather deficient. I concur with his assessment of the book, except that in my case the same process had started for me after reading Diamond’s Guns, Germs and Steel back when it was first published in 1997, and it was in Diamond’s bibliography that I first found Crosby.

HBDchick has a blogpost called “Reverse Renaissance”, which very much takes the opposite view from my “The Creativity of Civilisations”, at least on the subject of the Islamic Golden Age. She also had a two-parter on “asabiyyah”, a word which (I think) has been popularised by Peter Turchin and which I wish would just go away. Those posts are Asabiyyah 1 and Asabiyyah 2. ( I am plentifully present in the comments section of both. )

Razib Khan shows that despite nominal exogamy northern Indians still show elevated homozygosity, because they marry locally and within-caste.

Also, Pincher Martin recommended The Empire Trap, which I also now recommend, if you were interested in the discussion of foreign investment from last week. It chronicles the messy, ad hoc evolution of the US government’s attempts to protect American property abroad.


Filed under: Links

Azar Gat’s Nations


I saw Razib Khan‘s review of Azar Gat’s Nations : The Long History and Deep Roots of Political Ethnicity and Nationalism. Without intending to make it that long I posted a 1000-word comment there. Then I realised I could have posted it here.


I just read Nations last weekend. It’s interesting you [i.e., Razib] drew most of your examples from the Byzantines since Gat himself practically skipped over the distance between Roman and Ottoman. Yet Greek identity would be an excellent case study for discussion of modernist theories of nationalism versus whatever its opposite is. (I hate to use the word “essentialist”.)

Gat basically argues there is an objective pre-modern foundation, based on a cultural core, for many modern ethnic identities. But I think that misses the point.

Modernists (e.g., Hobsbawm, B. Anderson) do often overstate their case, but their major contribution was to stress the contingency of modern nation-states and the modern manifestations of nationalism — not that they are totally arbitrary constructions invented whole-cloth in a single generation. [*] It’s about arguing against the inevitability of the configuration we see today. We know the core of the Byzantine empire was Greek-speaking and Orthodox Christian. But there’s a kind of survivorship bias — because the modern state of Greece is the evolved remnant of a larger Hellenistic world, we tend to think of a continuously existing core Greek identity. Yet change just a few variables and you might have had several “Greek” states in analogy with the “Roman” states of Italy, Spain, France and Romania. In the reverse direction we might imagine many fewer Arab republics than the membership roster of the Arab League.

 [*] Not the nation-state in the abstract, but particular “actually existed” nation-states.

Now, maybe some nation-states we see today are more likely to have emerged than others, because of deeper reasons like geography, early adoption of agriculture, early state formation, linguistic consolidation, etc. Gat comes close to arguing that the division of French Indochina into those three particular ethnolinguistic states was inevitable, as opposed to Indonesia. China certainly looks more inevitable than Norway, but that could be a failure of imagination.

A key premise of modernists is, there was a persistent cultural gulf between the elites and the masses before modern times. Modern Greek nationalists have dug up many instances where Byzantine writers referred to themselves not simply as Romaioi (as we learn from the modern historiography of the Byzantine empire), but also Graikoi and Hellenes. I think those are thin, but let’s grant that — after all, many of the Byzantine elites were steeped in the pagan classics especially after paganism was long dead and hardly a threat. But did such feelings of ancient cultural unity and continuity exist, say, amongst the peasantry of the Cappadocian theme in the 10th century ? Gat points out there’s almost no evidence about the self-conception of the masses from premodern times. So what does he do ? He goes on to argue that the masses did have a self-conception roughly congruent with that of the elites.

Gat anticipates the “elite versus masses” distinction and the “customary objections to the common-sense proposition that Egypt was a national state” by focusing on religion, customs & ritual. But did that really create a subjective sense of national identity in pre-modern times ? Despite the reams of facts presented, that’s far from demonstrated by Gat. Take the Greek war of independence. He points out, correctly, that the peasantry played a crucial role in independence ; and their awareness of themselves as Christians is part and parcel of modern Greek identity, even if the elites were motivated by more abstract European ideals. Sure, but then, why didn’t the Greeks revolt earlier ? Basically Gat’s answer is, they did ! So he recasts pre-modern wars, like the Serb rebellions against the Ottomans, as having a fundamentally nationalistic character defined around religion. The Taiping rebellion has been deemed a nationalist rebellion by modern historiography for some time now, but was the Red Turban Rebellion also a nationalist reaction, like the Indonesian war of independence from the Dutch ? Gat doesn’t reference the Red Turban rebellion by name, but I think his answer would be yes.

I don’t see the difference between modernists and Gat as empirical. I see it as ideological and axiomatic. Gat presents many broad facts and reinterprets them in the light of his thesis.

The modernists like Hobsbawm and Anderson talk endlessly about the role of mass compulsory education and the enforcement of a national language. Their greatest validation is that almost all non-western societies belatedly followed the western model of mass linguistic unification. Now I call that a “western model” only because of where it happened first, not because there is anything uniquely western about it. A striking feature of modern European societies is they became less diglossic earlier than non-western societies. In 1900 the gulf between the classical Chinese of the mandarins and the vernaculars was pretty vast.

And we know from the recent experience of non-western states most of them placed great emphasis on the standard language, sometimes to the point of suppressing dialects & regional languages. The Chinese do not actively suppress them, but they make a big show of insisting Cantonese, Shanghainese et al. are dialects, not languages. (Not to mention, what other country that size has a single time zone ?) The Arabs have this extraordinary attachment to “Modern Standard Arabic” even though it’s only acquired at school (and not even the same as Qur’anic Arabic), much like westerners acquire Latin. The subordinate place of the numerous vernaculars is seldom questioned. Pakistan is frequently cited as a failed state but it has had much better success in making the Urdu variant of Hindustani a national language, which is non-native to at least 90% of the Pakistani population, than India has had with its variant of Hindustani. (*) The Turks purged Ottoman Turkish to the point that no Turk can read it without highly specialised training ; and of course they ruthlessly stamped out everything else. The Indonesians, by contrast, adopted a divergent dialect of Malay instead of the dominant Javanese. The Greeks were arguing into the 1960s what kind of language (the faux-classical Katharevousa or the more “natural” Demotic) they should be learning at school !

The point here is not who succeeded at what and how and why. The point is the centrality of the language issue in the “national question” of so many disparate non-western countries. So why that centrality, if the “western” model of language assimilation & unification were mere eurocentric misextrapolation ? (Gat talks a lot about languages but frankly I don’t get what his point is. The artificial standard language did spread in pre-modern times, but the masses never learnt it until very recently.)

EDIT : (*) This was a casual comment. Yes, Pakistan inherently had an easier time of making Urdu the national language than India had of it with Hindi. See comments section.


Filed under: Ethnicity, History, Political Development Tagged: Azar Gat, Ethnicity, Nation-State, Nationalism, Nations

ελαδιοξιδιολατολαχανοκαρυκευμα


A very brief history of Greek diglossia.


Most people know that even after the collapse of the western Roman empire, the Catholic Church continued the Latin tradition. But centuries before Odoacer declared himself King of Italy, spoken Latin had already been evolving into something we today call Vulgar Latin, the forerunner of the Romance languages. Even the official ecclesiastical Latin of the Middle Ages was more “relaxed” and closer to spoken forms than the language of Cicero which nobody could actually speak spontaneously anyway.

Eventually the vernacular languages came to the fore : at first they made their beachhead in literature but not long thereafter seized the courts and government. By the 17th and 18th centuries, the place of Latin was confined to the highest intellectual life. Perhaps the most prominent of the last major texts written in Latin is Newton’s Principia Mathematica.

But imagine, just imagine, Latin was never displaced as the language of government, law, education, religion, and business in Italy, France and Iberia. Colloquial speech would be perfectly free to evolve all on its own at the same time, but everything formal, official, and “elevated”, whether written or spoken, would continue in Latin. That means today, non-fiction, newspapers, television news reports, parliamentary debates, university lectures, school curricula, and presidential speeches would be in Latin.

In the Arab world, it’s kind of like that (a topic for a later blogpost). And the linguistic situation in most of the rest of the world was very similar to that until quite recently. It was only in the 1920s, for example, that Turkey jettisoned the unnatural construct called Ottoman Turkish.

Today I concentrate on Greek — the 3000 years of resistance of Ancient Greek to mortality.

The history of the Greek language is conventionally periodised into Archaic, Classical, Koine, Mediaeval, and Modern. That simplicity hides two things : (1) Attic or the speech and writing style of Athens of 500-300 BCE cast a long shadow and remained an ideal against which the “best” writers of the subsequent 2300 years measured themselves ; and (2) underneath the official Greek of any historical period, the vernacular evolved freely, unconstrained by the straitjacket of a glorious past.

The Greek language spread across western Eurasia with the conquests of Alexander the Great. While it never displaced the native languages, it did become the lingua franca of political and commercial communication in the post-Alexandrian world known as “Hellenistic” :

hellenistic

Greek coin hoards have been unearthed from as far east as Afghanistan and Central Asia. This particular coin depicts Euthydemus, a king who ruled an area encompassing some portion of present-day Tajikistan and the Wakhan Corridor. You can click to enlarge and still read the Greek word for “king” :

220px-EuthydemusMedailles

It is the “Koine” Greek of this period, a language slightly less “rigorous” than Attic, into which the Hebrew Bible was translated in Egypt and in which the New Testament was composed. The distinctive features of Koine are thought to be the result of many non-Greeks acquiring Greek, which, as always, language purists considered a source of debasement, contamination and irredeemable vulgarity. But compared with modern Greek, Koine was still a highly inflected, synthetic language close to its Attic roots. I would judge Koine less distant from Attic than mediaeval Latin was from Augustan classical.

In the early Byzantine period the New Testament lent prestige to Koine, but by the 8th or 9th century CE the Koine of Byzantine writers and the Orthodox church itself had become removed from the spoken language. Just imagine how much evolution any language goes through in 1000 years. Whenever you hold the language of a particular historical period as a standard and use it for writing, in just a few centuries the spoken and written languages will inevitably go their separate ways. By the time the Seljuk Turks staked their presence in the middle of Asia Minor, the Greek language had had more than 1300 years of evolution since Aristotle ! So during its long history Greek would always go through cycles of normal “degeneration” and corrective “neo-Atticism”.

But few people could perfectly imitate the “purest” Attic and there would always be varying degrees of conformity to classical norms — “high”, “middle”, and “low” styles. At the highest level a few writers, like high-IQ zombies, would painstakingly reproduce a Greek that might have been appreciated by Demosthenes as punctilious yet unimaginative. Most, however, wrote in a mixed or “middle” style, based either on the Biblical Greek that was already alien to the spoken language of the Middle Ages ; or on the speech of the elites of Constantinople but with lots of ancient Attic words and “flavours”. In short, the language of writing had an artificial life and logic all its own.

Much of the difference between ancient and modern spoken forms of Greek was probably already present during the Byzantine period, complete with grammatical evolution, pronunciation shifts, numerous Latin loanwords, and shifts in meaning of native words due to Christianisation. Many Turkish words entered vernacular Greek under the Ottomans, but probably the biggest effect of the Sultans was that the Greeks sought to maintain their identity through a strong link with the ancient past. Because the Ottomans placed the Orthodox Church in charge of Greek affairs, the life of the language continued the classicising pseudo-Attic ossification of the previous millennium.

That does not even address the fact that spoken Greek also had many “dialects” — really separate Hellenic languages — in the Peloponnesian heartland of Greece, in southern Italy, along the Black Sea (not far from the recent Sochi Olympics), in central Asia Minor, in Crete, in Cyprus, on many of the smaller islands, etc. In the southern tip of Italy you can still occasionally see trilingual signs — in Italian, in Greek, and in another language written with the Greek alphabet. Given a different political history all of these might have become distinct, separate, and national languages, a little like Spanish, Italian, and French. And today we might be referring to the Hellenic subbranch of the Indo-European family containing several major languages.

But the situation at the end of the 18th century was that the weight of a glorious past had helped create a bed of many layers separating the educated from the unschooled masses of the Hellenic-speaking peasantry. There were the numerous levels of the written language, the common spoken form of the Greek elites of Constantinople and Athens, and the separate vernacular evolutions everywhere else. The long shadow of Athens, and perhaps also the need to preserve a Christian identity under the Ottomans, deformed the evolution of the Greek language. It had never had its equivalent of Dante’s De vulgari eloquentia or Petrarch’s “lingua toscana in bocca romana”.

All hell breaks loose with Greek independence and the emergence of The Language Question. For nearly 2000 years there had been an informal consensus that Greek would live in two parallel universes of spoken and written, each with a life of its own. But with the birth of the nation-state, and under Western European influence, it suddenly became important to decide on a common written language around which everyone could rally. Would the Greeks emulate the progressive European model and create a standard language largely based on the spoken language of the metropolis, or the conservative model of reviving as much as possible of classical Attic ?

To simplify there were two camps : (1) the “pragmatic classicists”, who favoured the adoption of the spoken educated language of Athens and Constantinople, but purged of 2000 years’ worth of Latin and Turkish loanwords, along with new spellings and reinvention of modern concepts often using Attic roots ; and (2) the “romantic nationalists”, who wanted the naturally spoken language, also known as Demotic, complete with “barbarous” Turcisms, Latinisms, etc. intact. There were even some fringe elements who wanted to revive Classical Attic pure and simple.

But the cycles of classicisation and vulgarisation in Greek language history would not go away just because Greeks now had independence. From the minds of the conservative and classicist camps would emerge a strange, griffin-like creature called Katharevousa, a “purified” language whose grammar was definitely not classical, and was sort of based on the educated speech of Athens and Constantinople, but still with all manner of artificiality in grammar and vocabulary that made it very different from natural speech. It had to be taught at schools like a foreign language. Yet Katharevousa would vie with Demotic to be the “true” language of Greece and the medium of government, education, journalism, science, and literary prose.

By contrast, Demotic thrived in poetry, and by the late 19th century most modern Greek literature was in Demotic. But even Demotic was not totally natural. It could not have been. Since Greek had a bewildering variety of dialects, the demoticists sought a compromise that could be neutral for as many as possible from Macedonia to Cyprus to Anatolia. Ionian had pride of place because there, under Venetian rule rather than Ottoman, it had developed a vernacular literature all its own. Demoticists also drew heavily on folk songs and popular mediaeval literature.

Again, this is about the written language. Everybody spoke Demotic, more or less. The question had to do with written and official language. The Demotic-Katharevousa split was a war of competing visions of romanticism : Demotic wanted to be just like the Europeans, cherishing the national myths of a folkloric, “natural” past of the people, usually defined as peasants ; whereas Katharevousa hankered for the glorious past when Hellas was great.

In some ways Katharevousa was the most absurd development in the long, strange history of the Greek language. Its lexicon, already not intended for everyday speaking, would nonetheless often have two separate forms of a word, a pseudo-classical one for formal speaking and sometimes an outright Attic word for writing. Imagine a Greek member of Parliament in 1900. He could choose from amongst three words for “fish” — not three words with slightly different meanings, but three words expressing exactly the same thing.  In ordinary conversation, he would have just said ψαρι /psari/ (Demotic), but during a Parliamentary debate he might speak about οψαριον /opsarion/ (Katharevousa). But if he were writing a report on the Ottoman harassment of Greek fishermen, he might write, perhaps to just show off, ιχθυς /ichthys/ (Attic).

Writers led the war against Katharevousa on behalf of Demotic. When Homer and the New Testament were translated into Demotic at the turn of the 20th century, riots broke out in Athens and the translators were accused of being pro-Turk traitors ! This is when the Language Question acquires a left-right orientation, with the Left being tribunes of Demotic and the Right the defenders of Katharevousa. When the Liberal Party controlled the government, there were reforms which chipped away at the place of Katharevousa in education and administration; naturally, the right reversed or weakened these reforms when they were returned to power. Liberal governments might order the printing of textbooks in Demotic. Then subsequent right-wing governments, invoking Family, Property, Church and Language, would order new ones in Katharevousa.

But there were many odd twists. The dictatorship of Ioannis Metaxas feared the influence of the communists on Demotic, so his government commissioned an official state grammar of the popular language. But at that time the Communist Party of Greece regarded Demotic as “bourgeois liberal” and insisted on addressing workers in Katharevousa.

Demotic got a tremendous boost from the Greek resistance to the German occupation, both communist and noncommunist ; and after the war Katharevousa was basically a lost cause. Yet when the infamous colonels came to power in the 1967 coup, they did not simply try to weaken and undermine Demotic.  They banned it from schools !!! Demoticists were driven out of university faculties. The popular language, the regime argued, was mere slang, not fit for serious exercise of thought. Thus, the actual language of Greeks was once again associated with communists, hippies, atheists, and other degenerates. If the language question were still alive today, Demotic might be associated with gay vegetarians calling for an end to the mutilation of Palestinian clitorises by meat-eating multinationals.

Katharevousa’s last stand accomplished nothing. When democracy was restored in 1974, the widespread backlash against the colonels’ regime even convinced the mainstream right to drop their traditional insistence on Katharevousa.  Demotic became the official language of Greece in all spheres — in 1976 !!! It was the first time in 2300 years of Greek history that the spoken language and the “official” language would coincide.

However, in their century of conflict, Demotic and Katharevousa ended up influencing each other, with the spoken Demotic naturally acquiring some Katharevousa characteristics and Katharevousa loosening up a bit.  But that’s another story, and I’ve already told you the history of the Greek language is a bit warped.

Note : I have tremendously simplified a very long history and elided many many details. I barely acknowledged the existence of Mediaeval Greek. And I have surely touched some sensitive political issues near the end. But it was for the sake of telling a little known but fascinating tale. The history of Greek also illustrates very well the idea of diglossia — something most Anglophones have little notion of. I drew most of the above information from several histories of the Greek language, my favourites being Horrocks and Mackridge.


Filed under: Ancient Greek, Languages Tagged: Ancient Greek, Demotic, Demotiki, Diglossia, Greek, Katharevousa

Der Todd des Euro


Part 1. The French anthropologist-demographer Emmanuel Todd, who is becoming increasingly fashionable in the Anglosphere, is also a scathing critic of the euro. I examine his “anthropological” views of Germany and the euro, which I also contrast with those of Michael Pettis.


(1)

The other day I stumbled upon this innocently admiring blogpost on Emmanuel Todd which contained a video of him speaking in English about France and the euro. The clip is an excerpt of a panel discussion organised by Harper’s magazine (partial transcript).

In the Harper’s video Todd mostly wears his demographer’s hat and the anthropologist doppelgänger that is conspicuous in his French appearances is rather restrained. Around 18:20 he argues that the euro cannot work, in part, because the French population is much younger than Germany’s. There’s some truth in this, because a younger population may prefer more inflation and an older one may prefer more surpluses. But, arguably, Germany with its grumpy old people is more youth-friendly than France when it comes to employment policies.

Between 2:30 and 3:30, Todd mentions that France had traditionally relied on currency devaluation in times of recession in order to achieve full employment and avoid fiscal austerity. That’s precisely what Eurozone countries can’t do today. That’s mentioned almost in passing, but it’s a key element of Todd’s views of Germany and the euro. He believes that for Germany “competitive disinflation is a nationalist strategy“, because German industry has achieved international competitiveness by restraining growth of labour costs at the expense of its European partners. But when it comes specifically to competing for a share of the export pie, there is no difference between Germany’s “internal devaluation” and the standard “external devaluation” that Todd would like France to be able to do. They are both competitive devaluations !

(2)

A prominent French critic of the euro ubiquitous in the French media, Todd is an anthropologist-demographer who has documented the family structures of the world and their relation to political systems and ideologies. His very best book, L’invention de l’Europe, which has yet to be translated into English, reinterprets the whole of European history in terms of diverse family systems. (See Craig Willy’s masterly summary of the book.)

For anyone who follows France, Todd’s views of the euro should be familiar — the European situation cannot be analysed just economically, it requires the insight of the historian-demographer-anthropologist ; the euro is a vehicle for German domination ; the survival of the euro would imply the death of democracy ; François Hollande is not president of France but der deutsche Vizekanzler, etc. And he’s been saying such things for some time now.

Although he’s a brilliant polemicist who runs circles around his debate opponents, his rhetoric can be pretty crude and shrill, and he’s been criticised for “Germanophobia” (also here). In a French TV panel discussion, he argued that in a few years there may be “no French industry worthy of its name”, and that the “attitude of the German bosses behind Angela Merkel” is to engineer the “elimination of Germany’s competitors”. When the co-panelist Ulrike Guérot protested that Germany is today a liberal pluralistic society, Todd replied, yes, we all know about the many Germanies, but in the end the illiberal, authoritarian one always wins ! In the Harper’s transcript of its panel discussion, Todd said,

The idea was to make Germany a European country. What we have instead is Europe as a German power zone. It’s all very peaceful. It’s like a peaceful parody of the past, but it’s the same past.

In this French TV debate with Marine Le Pen nodding across from him, Todd went all out (after 13:00) :

We speak of protectionism for Europe. Why don’t the Germans favour protectionism for Europe ? And why are the Germans so-called “free-traders” ? It’s not like the English. They are true free-traders, they committed suicide for free trade, they sacrificed their industry for free trade…. The Germans are different…. And I repeat [tone of sarcastic pro forma gratitude] it’s an admirable culture, we owe them [this and that] But there is, in the German culture, a different conception of the nation and national solidarity. What does that mean ? If you were to implement in France a free trade regime, the French, the really nice universalists that we are, would buy any old car, according to price and quality. Moreover we would buy foreign just because it’s nice to have foreign things. But that is not [waving his finger] German culture. It’s a culture of controlling its neighbours… The Germans hunt in packs… It’s a cultural thing. It’s not pretty, it’s very unpleasant for our French conception of the universal man. [interruption] But that’s what’s happening ! The truth is, the German citizen, not always, but statistically, will buy German…

The core problem, in Todd’s eyes, is a deep-seated chauvinistic quest across the Rhine for economic hegemony. His vision of the “German problem” is rather reminiscent of the one identified by Clyde Prestowitz in the 1980s, a jeremiad about Japanese economic hegemony. It’s not the conventional narrative of low-wage countries eating away at the industrial base of the rich economies. In an interview in Marianne, Todd compared Germany with China :

But the policy carried out by Germany in Europe, or by China in Asia, shows that globalisation does not, uniquely or even principally, pit the emerging markets against the developed countries. Globalisation leads to confrontation between neighbours. When the Germans conduct a policy of wage reduction in order to lower labour costs, the impact is non-existent on the Chinese economy, but is considerable for its partners in the Euro zone. When the Chinese manipulate the yuan, it’s against Thailand, Indonesia or Brazil, its competitors in low-wage labour. What we notice is a tendency of the emerging markets to fight amongst themselves and the developed countries to exterminate one another industrially, with the objective of being the last to go down with the ship. This mechanism has turned the Euro zone into a trap, with Germany, whose economy is the most powerful,  in the role of fox in the henhouse.

Since Todd is quite arrogant and condescending about having a deeper “anthropological” perspective on the European Union than other commentators, it’s worth exploring what that is exactly.

(3)

By his own account, much of Todd’s analysis is a direct implication of his academic work. In Willy’s summary of Todd, the “stem family” that is characteristic of Germany is

…authoritarian and inegalitarian. Several generations may live under one roof, notably the first-born, who will inherit the entirety of property and family headship (and thus perpetuate the family line).

In L’origine des systèmes familiaux, Todd’s magnum opus (also yet untranslated), the “stem family combines authority and inequality, essential bureaucratic values, and its ideal of continuity was one of the roads toward the modern state”. This implies :

Whether on the left, on the right or in the centre, German ideological forces always end up creating enormous agglomeration machines [“vastes machines intégratrices”]. The mass political parties — SPD, the Centre, NSDAP — are surrounded by a constellation of professional or cultural satellite organisations. Spontaneously, party loyalty produces in Germany vertically integrated “subsocieties” which realise, within the context of modern society and economy, the ideal of the “estates” system of the Ancien Régime. The social-democratic estate of workers, the Christian democratic estate of Catholics, the Nazi order of Protestant middle classes in 1930. [From L’invention de l’Europe, my translation.]

The book which discusses some actual economics is the untranslated L’illusion économique. It argues that globalisation is defined by the interaction of two opposite yet complementary systems of capitalism — the Anglo-American or individualistic capitalism ; and the “integrated” capitalism exemplified by Germany and Japan. (The book also contains a whole chapter bitching about the lack of anthropological perspectives in economic analysis.)

I paraphrase Todd : Anglo-Saxon capitalism is focused on short-term profits and consumption, resulting in, simultaneously, high turnover amongst workers, frequent creative destruction of businesses, a low savings rate and high external deficits. This system requires for its perpetuation the existence of its “double negative”, the “integrated capitalism” of Germany and Japan where

…the true objective of the firm is not the optimisation of profit, the satisfaction of the shareholder, but the conquest of market shares, through the perfection and expansion of production. From an ideological point of view, the producer is king : the attention to technological progress and the training of labour are intensive. You have to excel in quality. The consumer is but a modest subject and one is tempted to assert that the deep logic of the system is to treat consumption as a necessary evil…

Germany and Japan are viscerally incapable of consuming the totality of goods produced by their industrial systems. Like Anglo-Saxon capitalism, the Germano-Nippon type is simultaneously coherent and unbalanced. Exports are a condition of survival, which presuppose the existence of its double negative, the capitalism of the importers.

Much of the above synthesises two strands of academic thought which were a vivid part of the Zeitgeist in the 1980s and 1990s : the “Asian economic model” as expounded in Chalmers Johnson’s MITI and the Japanese Miracle or Alice Amsden’s The Rise of “The Rest” : Challenges to the West from Late-Industrializing Economies ; as well as Michel Albert‘s “capitalisme rhénan” or “Rhenish capitalism”, the Eurocentric variant of the “Asian model” argued in Capitalism vs Capitalism (which was more widely read in France than elsewhere). Both strands were ultimately rooted in older alternatives to British classical economics — the American institutional economics associated with Thorstein Veblen (and later with John Kenneth Galbraith) and the German historical school of economics. The ultimate genealogy for both might be the German Friedrich List (who heavily influenced Japanese planners in the 1880s and 1890s). In the 1990s, however, much of that fashion receded with Japanese stagnation, the Asian financial crisis of 1997-98, high unemployment in the “big” economies of Europe, and the American economic boom.

Todd’s own idiosyncratic twist is the very Gallic anthropologisation of that “duality of global capitalism” :

Most of the significant traits of individualistic capitalism can be reduced to the fundamental values of the absolute nuclear family, which favour emancipation and mobility of individuals. At the most general level, the values of the nuclear family determine a preference for the short term, what Anglo-Saxon authors call “short-termism”. The nuclear family system does not have a lineage plan, it is defined by a continuous separation between generations [“ruptures générationnelles successives”]. Children, once adults, must leave [the family], begin a new story. The discontinuities which characterise the Anglo-Saxon world, whether it be the mobility of capital or of labour, are but the reflexion of the customs favouring mobility in general…

The long-term outlook of “integrated” capitalism, favouring technological research, investment, training of personnel and job stability within the firm — symmetrically [i.e., as a parallel with Anglo-Saxon capitalism] finds its source in the values of continuity which define the stem family. Strong parental authority, inequality of inheritance, existed only to assure the perpetuation of the lineage. The continuity of the past, noble or peasant, becomes the continuity of the firm and its projects.

The strong propensity to save and invest which characterise “stem capitalism” is but a particular economic manifestation accounting for this relationship with time. To save, to invest, is to project oneself into the future. Conversely, consumption, engrossed in the present, and escape into debt reflect, in a logically complementary manner, the mental universe of the nuclear family.

[Paraphrase : the differences between “nuclear” capitalism and “stem” capitalism could not be made manifest without globalisation.]

It’s because they can export that Japan and Germany have expressed their tendency to underconsumption ; it’s because they can import that the United States has expressed its tendency to overconsumption. Openness has not led to a convergence of systems but to their differentiation. Economic history does not follow La Fontaine’s fable : the ant (stem) lends to the cricket (absolute nuclear) what it needs — cars, television sets, computers — in order to continue singing.

I think the above is very silly. Although I do agree there are cultural differences between “individualism” and “collectivism”, nevertheless Todd’s derivation of these traits from family structures is little more than literary semiotics. That’s perhaps why he can occasionally babble about “killing God” or “killing the father” (à la Freud) in La troisième planète (which has been translated into English as Explanation of Ideology).

The mapping of those individualist/collectivist traits to the macroeconomic aggregates we see today is all wrong. Todd wants to “essentialise” the macro conditions that have only existed since the late 1970s, but those are not deep-seated historical truths at all or manifestations of deep culture. They are instead highly contingent facts of politics and economics. England had been the premier surplus-savings exporter of the 19th century, contrary to everything Todd says. At the same time, the United States, despite its individualistic ethos that supposedly champions the consumer, nonetheless created an utterly producer-dominated system of quasi-monopolistic industrial trusts protected by tariffs. And it’s especially clear in retrospect that Japan and the East Asian tigers were high-surplus countries after 1945 only because the United States played along as part of the Cold War strategy of building up its allies.

In fact, Todd’s “essentialisation” of current account balances can easily be disproved by the record of the last 140 years (5-year moving averages of current account balances as % of GDP) :

current account balance long duree

South Korea, another “stem capitalism” country in Todd’s estimation, had actually been riddled with current account deficits before the 2000s when it decided to change policy as a result of the Asian financial crisis.

Todd’s “anthropological determinism” is misplaced. His vision of a hierarchical, authoritarian German culture is much better geared toward explaining the German past than toward identifying a common thread running through both that past and the current realities of the Federal Republic. I’m not opposed to anthropological determinism per se. There are indeed deep cultural differences between Germany (or northern Europe) and the rest of Europe that affect the euro, but there is a better “anthropological” angle than the one Todd pushes. (*)

(4)

Todd’s “anthropological” view of global balances clashes fundamentally with that aired by Michael Pettis, author of The Great Rebalancing, who has become for the 2010s what Nouriel Roubini had been in 1998-2008. Pettis is not a “Dr. Doom” like Roubini, but, like him, someone who popularly expounds a view of the world economy as a closed system in which current account imbalances play a commanding role. The drumbeat that Pettis constantly beats is that government policy, not the primordially thrifty behaviour of Chinese and German households, is the cause of global imbalances.

Germany’s export of its excess national savings (i.e., savings not needed for domestic purposes) did help finance the debt binge in the crisis countries of Europe and, in the view of many, Germany’s large current account surpluses are inconsistent with economic recovery in the Eurozone. From Pettis’s article cited above :

In the 1990s Germany could be described as saving too little. It often ran current account deficits during the decade, which means that the country imported capital to fund domestic investment. A country’s current account deficit is simply the difference between how much it invests and how much it saves, and Germans in the 1990s did not always save enough to fund local investment.

But this changed in the first years of the last decade. An agreement among labor unions, businesses and the government to restrain wage growth in Germany (which dropped from 3.2 percent in the decade before 2000 to 1.1 percent in the decade after) caused the household income share of GDP to drop and, with it, the household consumption share. Because the relative decline in German household consumption powered a relative decline in overall German consumption, German saving rates automatically rose.

Notice that German savings rate did not rise because German households decided that they should prepare for a difficult future in the eurozone by saving more. German household preferences had almost nothing to do with it. The German savings rate rose because policies aimed at restraining wage growth and generating employment at home reduced household consumption as a share of GDP.

As national saving soared, the German economy shifted from not having enough savings to cover domestic investment needs to having, after 2001, such high savings that not only could it finance all of its domestic investment needs but it had to invest abroad by exporting large and growing amounts of savings. As it did so its current account surplus soared, to 7.5 percent of GDP in 2007. Martin Wolf, in an excellent Financial Times article on Wednesday on the subject, points out that

“between 2000 and 2007, Germany’s current account balance moved from a deficit of 1.7 per cent of gross domestic product to a surplus of 7.5 per cent. Meanwhile, offsetting deficits emerged elsewhere in the eurozone. By 2007, the current account deficit was 15 per cent of GDP in Greece, 10 per cent in Portugal and Spain, and 5 per cent in Ireland.”

( Pettis is a little too lenient on the PIGS countries. Leniency is perhaps deserved for Ireland and Spain, whose governments were not fiscally reckless as their economies were growing in the 2000s, although their fiscal policies might have been even tighter given all the capital inflows ; and it was their private sector which absorbed their countries’ current account deficits. But in Portugal and especially Greece, it was government budget deficits which fuelled the current account deficits. That was not “inevitable”. )

Two things to keep in mind, which are tautologies I won’t bother explaining : (a) a trade surplus/deficit (more precisely, a current account surplus/deficit) is the mirror reflexion of the national savings surplus/deficit — that is, of the shortfall/excess of total domestic spending by consumers, businesses and the public relative to GDP ; and (b) national savings is not the same thing as household savings. Changes in national savings can be caused by shifts in the behaviour of governments and businesses, not merely through the stirrings in the bosom of the thrifty burgher. So while it’s true that the Germans are thriftier than many of their neighbours,

personal_savings

the German household savings rate has changed relatively little, in comparison with the country’s current account balance :

german_current_account

So Germany as a country is not, contra Todd, some intrinsically surplus-producing economy. Still less was the turn from deficit to surplus in Germany’s current account an element of some mercantilist-hegemonist export strategy by the Reich to “exterminate” its neighbours economically, as Todd would have it. Unlike China’s current account balances, Germany’s are a by-product of domestic developments, and in fact I will argue — contra Pettis — the external surpluses were not even really intended at a policy level.
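For reference, the accounting tautologies invoked in this discussion reduce to the textbook national-accounts identity. This is a standard sketch using conventional symbols (Y = GDP, C = private consumption, I = investment, G = government spending, X − M = net exports), not notation taken from Pettis or Todd :

```latex
% National income identity :
Y = C + I + G + (X - M)
% Define national saving as output not consumed privately or publicly :
S \equiv Y - C - G
% Substituting the first equation into the second gives the savings mirror
% of the external balance :
S - I = X - M \;\approx\; \text{current account balance}
% And national saving decomposes by sector :
S = S_{\text{households}} + S_{\text{firms}} + S_{\text{government}}
```

The sectoral decomposition in the last line is what allows a country’s current account to swing from deficit to surplus through corporate and government behaviour alone, even while the household savings rate barely moves.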

I repeat, I do believe there are deeper “anthropological” reasons for Germany’s economic behaviour. I only reject Todd’s traditional family structure as the explanation. Part 2 discusses the roots of German economic behaviour.

[ The comments section of Part 1 is now closed. In order to comment, please go to Part 2, “The Anthropology of Financial Crises“. ]


Filed under: Emmanuel Todd, Financial Crises, International Monetary Economics Tagged: current account, Emmanuel Todd, Euro, Eurozone, financial crisis, France, Germany, Pettis