Channel: pseudoerasmus

Samples of Greek & Latin, Restored Pronunciation


Some MP3 samples of the “restored” pronunciation of classical Greek and Latin.
I’ve long been a fan of attempts to reconstruct the pronunciation of ancient Greek and Latin. I’ve embedded MP3 snippets of the first line of The Odyssey as well as most of Catullus I. (They take up a lot of space !)


Odyssey Line 1.1 (spoken)

read by Stephen Daitz, “A Recital of Ancient Greek Poetry”, 2nd ed.

ἄνδρα μοι ἔννεπε, μοῦσα, πολύτροπον, ὃς μάλα πολλὰ

 

Odyssey Line 1.1 (chanted)

 


Catullus 1

Read by Robert P. Sonkowsky, “Selections from Catullus and Horace”

Sonkowsky is not as good as Stephen Daitz reading the Greek. He has a very strong American accent and his nasal consonants are particularly bad, sounding rather like a feckless schoolgirl’s attempt to reproduce nasals in French. But still the recording gives the exotic and alien flavour of the “original” pronunciation of Classical Latin.

 

Cui dono lepidum novum libellum
arida modo pumice expolitum?
Corneli, tibi: namque tu solebas
meas esse aliquid putare nugas.
Iam tum, cum ausus es unus Italorum
omne aevum tribus explicare cartis…
Doctis, Iuppiter, et laboriosis!
Quare habe tibi quidquid hoc libelli—
qualecumque, quod, o patrona virgo,
plus uno maneat perenne saeclo!


Filed under: Ancient Greek, Classics, Latin Tagged: Ancient Greek, Latin, Phonology, Restored Pronunciation

Debate with Matt on India, China, Cuba, Korea, etc.


Below I quote the lengthy exchange I had with Matt on India, China, Cuba, South Korea, etc. in the comments section of a blogpost by HBDchick. Since our debate was off-topic, Matt and I have agreed to move it here. My latest reply to Matt is contained in the separate blogpost, “Ideology & Human Development”. Note : Matt had already been arguing with others about something else, so below I merely extract that part of the debate relevant to ours.


Pseudoerasmus

…Kerala has been studied a lot. Read Amartya Sen, for example. The (proximate) reason Kerala has high HDI for its income class is that it has had a strong Marxist party in electoral politics which caused the state to invest more in health & education than other states. In independent countries Marxist regimes normally nationalised private property and redistributed incomes to things like health and education. So, on average, other poor non-communist countries with comparable levels of income will usually have lower HDI. Now, I say this is “proximate” because the real question is why Kerala has such a strong Marxist party. Emmanuel Todd argues in several books it’s about the family structure.

Matt

I’m aware of Sen’s work, and I agree with his/your explanation for this. My point in bringing up Kerala was to show that even polities with high levels of diversity can have robust, effective social democracy if the government is competent, treats each group fairly, and is dedicated to improving social conditions (see the Bo Rothstein article in my comment above for more details). Diversity is not necessarily incompatible with social cohesion or a welfare state.

Sen also does a good job of explaining why Maoist China, for all its many evils, did much better than India at raising life expectancy over the same period. Short answer: because China was Marxist. See, e.g., “Indian Development: Lessons and Nonlessons,” Daedalus Vol. 118, No.4, 1989….

Pseudoerasmus

…I actually don’t believe the Marxist explanation for Kerala in any deep sense. After all West Bengal has also had a strong communist party and its HDI scores are abysmal… Which is also why Sen’s assertions about China are ultimately shallow : East Asia in general stresses education, health and egalitarian growth much more than other countries.

This shows up in land reform. Many have observed that Japan’s land redistribution in 1946, which created a large class of small proprietor-farmers out of what had been closer to a Latin-America-like latifundist system, was the work of the Americans. That is true. However, the very similar land reforms in South Korea and Taiwan were not the work of the Americans. [Note : I meant, these were not compelled by the Americans, as with Japan.] More importantly, all three succeeded. And China has also succeeded with small-holder agriculture since decollectivisation. But the record of land reform in most other places is truly abysmal.

Democratic India in the 1950s and 1960s had a “zamindari abolition commission” yet the number of small holders in India is still fairly low because the process was strangulated by bureaucratic delays, corruption, repartition into smaller within-family plots, etc. There’s more going on here than mere redistribution of wealth. Well, I think you know what I’m getting at : even if we allow that certain political regimes will invest more in people all things equal, redistribution still requires a certain amount of social competence that is not uniformly distributed in the world. Some people appear to do better under socialism and communism than others.

[Emmanuel Todd argues] that Kerala is an extreme example of the matrilineal family system found in the South in general which produces better HDI than the north. Todd explains the unusual predilection for Marxism in Kerala as a reaction to the slow erosion of that family structure. I think Todd supplies good descriptions, but not very good explanations…

Matt

Sen also points to Sri Lanka (“Indian Development,” p. 376), which although non-Communist, carried out similar investment in education, health and welfare, and now has an HDI of 0.715. Sen (ibid, p. 380-82) also mentions post-1975 Communist Vietnam (current HDI 0.617; higher than India (0.554), higher than Cambodia and Laos (both 0.553); I also think we need to account for the impact of the war in these countries).

I would also point to Cuba, with an HDI of 0.780, close to Kerala’s and well above the demographically similar Dominican Republic (0.702). Also the Seychelles, another diverse Marxist country with the highest HDI score in Africa (0.806, even above Kerala).

But I think Sen’s argument is strongest when he points to differences within China over time. Thus, life expectancy in China underwent a sharp downturn following the market-based reforms of Deng Xiaoping in the late 1970s (ibid., pp. 385-87).

This was because the breakup of the communal farms dismantled the system of healthcare provision in place. Sen explains here (p. 2):

“[T]he economic reforms of 1979 greatly improved the working and efficiency of Chinese agriculture and industry; but the Chinese government also eliminated, at the same time, the entitlement of all to public medical care (which was often administered through the communes). Most people were then required to buy their own health insurance, drastically reducing the proportion of the population with guaranteed health care….

…The change sharply reduced the progress of longevity in China. Its large lead over India in life expectancy dwindled during the following two decades—falling from a fourteen-year lead to one of just seven years.

The Chinese authorities, however, eventually realized what had been lost, and from 2004 they rapidly started reintroducing the right to medical care. China now has a considerably higher proportion of people with guaranteed health care than does India. The gap in life expectancy in China’s favor has been rising again, and it is now around nine years; and the degree of coverage is clearly central to the difference.”

HBD doesn’t do a very good job of explaining these changes.

Pseudoerasmus

You misunderstand me. I have no problem with the view that, all else equal (such as demographic characteristics), a redistributionist political regime in a poor country is more likely to improve HDI than a non-redistributionist one. That was my point about East Asia. The [sociobiological] angle would address who is more likely to adopt redistributionist policies, and who is more competent at them once they are adopted.

So I think that easily covers Cuba vs [the Dominican Republic] (fairly similar demographics) — though you do not consider that Soviet subsidies to Cuba were on the order of 1/3 of GDP (via purchases of sugar at inflated prices) and that helped a lot in Cuba’s human development… In fact most of your examples are pretty bad. The Seychelles compared with the rest of Africa ? Why ? The Seychelles are a mixed-race Franco-East-African country with about 80,000 people and a GDP per capita comparable with the Czech Republic. I should hope they would have decent HDI !

As for China and life expectancy, see the chart I’ve uploaded here :

[Chart : life expectancy in China]

Don’t see any big drop. The rate of increase slowed, but that’s normal especially in a country like China with a big divide between the coasts and the interior. Besides, life expectancy is not strongly correlated with access to medical care in the broadest first-world sense, and only weakly correlated with income. (You don’t need huge jumps in income to improve HDI.) The post-war global increase in life expectancy (as well as the global fall in infant mortality) is best explained by greater food availability, more balanced micronutrient intake, inoculations, public health measures (such as sanitation), etc. Most of these measures don’t require high incomes.

Matt

Re: Cuba.

It’s been [23] years since Cuba received those subsidies, and the DR still hasn’t caught up. Also, we have to factor in the embargo against Cuba from 1959. Remember, from 1964 until 1975, that embargo wasn’t just from the United States, it was from the entire Organization of American States, except Mexico. There’s also the fact that Cuba needed to divert spending to its military in order to deter the very real threat of an American invasion (which happened of course in 1961) and the near-constant terrorism directed from Miami and Langley. Finally, if we’re going to look at subsidies, we’d also have to look at the massive U.S. subsidies to South Korea during the Cold War.

Re: China

I’ll quote Sen directly:

“While the gross value of agricultural output doubled between 1979 and 1986, the death rate firmly rose after 1979, and by 1983 reached a peak of being 14 percent higher than in 1979 (in rural areas, the increase was even sharper: 20 percent). The death rates have come down somewhat since then, but they remain higher than before the reforms were launched” (“Indian Development,” p. 385).

See also the chart on p. 383 of “Indian Development” and p. 26 of Sen’s “Hunger and Entitlements.” He takes the Chinese part of the chart from Judith Banister’s “An Analysis of Recent Data on the Population of China,” Population and Development Review, 10 (1984). It shows a noticeable drop from 1979.

Banister (ibid., 254) says that China’s life expectancy, after having risen every year from 1960 to 1978, fell from 65.1 to 64.7 from 1978-1982. The Google chart (which says it came from World Bank data) says that life expectancy rose from 66.51 to 67.57 during the same years. I don’t know why the discrepancy exists.

Sen repeats his claim about the China-India gap falling from 14 to 7 from 1979 to the early 2000s, then rising from 7 to 9 after 2004 (when the Chinese reinstituted the public health system) in “The Art of Medicine: Learning from Others,” The Lancet, Vol 377 (2011), but he doesn’t give a source.

What do you think about Sri Lanka?

P.S. Sen also mentions this paper by Athar Hussain and Nicholas Stern, and his own paper “Food and Freedom.” See Table 5 on p. 16 for data on the rise in the death rate from 1979, and Table 6 on p. 17 for data on the decline in the number of “barefoot doctors” from 1980.

Pseudoerasmus

Why do you keep talking about the Dominican Republic ? I have already agreed with you that redistributionist policies are more likely to result in better HDI than otherwise.

However, you are looking at it the wrong way. Cuba had to expropriate nearly all private assets and receive large external subsidies to get it done. The Dominican Republic didn’t expropriate and its foreign assistance was much more limited, but its HDI score today is not that much lower than Cuba’s…

No need to “factor” in [the US & OAS embargo] at all [in the case of Cuba]. Whatever Cuba lost via the embargo was much more than made up for by sugar purchases by the Soviet Union and the rest of the East bloc at inflated prices — especially after 1972, when the Soviet Union agreed to pay not the international price, but nearly four times the international price.

Also, Cuba never lost export markets for sugar outside the East bloc. At any given time between 1960 and 1990, exports to non-communist countries were between 20% to 50% of the total volume. Western Europe and Japan never observed any embargo against Cuba.

By the way, the OAS dropped its embargo in 1975. Besides, that never stopped anyone from having trade relations unilaterally with Cuba if they wanted, like Argentina before 1976…

Castro built up the Cuban armed forces to such an extent that he could send thousands of troops to Angola, Ethiopia, Mozambique, etc. Now you can say this was tit-for-tat against US support of the opposing side, but these luxury foreign adventures belie the claim of Castro’s “having” to divert spending anywhere.

[Re South Korea] At the peak of US aid to South Korea in the late 1950s and early 1960s, it amounted to less than 5% of South Korean GDP. [Note : This was intended as net of military assistance, I will address this later.] This was not trivial but never approached the vicinity of Cuba’s dependence on the Soviet Union in the 1970s and 1980s. Besides, no sensible person believes South Korea’s explosive growth has much to do with external assistance.

As for Sen, I’ve looked into his claims a little more, and, yes, there was a drop in Chinese life expectancy after 1979 which gets reversed in the late 1980s. But in the Banister data the drop is trivial. Hussain & Stern’s argument is more interesting : the life expectancy data appear to be driven by rising infant mortality in the first half of the 1980s, which is substantial enough to be interesting. But there must be more happening than is implied by Sen’s argument, because China’s crude death rate hit its low in 1979 and still remains higher than it was then… So the age-structure effects of the population must be important — something Hussain & Stern do not discount.

Matt [sent to me by email]

I think you’re understating the disparity [between Cuba’s and the Dominican Republic’s HDI scores].

First of all, the difference between the two countries’ HDI is 0.078, which is 10% of Cuba’s score. If we add 10% to Cuba’s score, we almost get to Greece (0.860; not the best place in the world, but better than Cuba). If we subtract 10% from the DR’s score, we get Honduras (0.632; one of the worst places in Latin America), and a little worse than Botswana (0.634; one of the best places in sub-Saharan Africa). If we subtract 10% from the U.S.’s score of 0.937, we get somewhere between Slovakia (0.840) and Andorra (0.846).

Secondly, if we look at non-income HDI (which we should be able to, given that Cuba’s and DR’s per capita GDPs are comparable), we find that Cuba’s is 0.894 and DR’s is 0.726, a difference of 0.168. Cuba not only does much better than DR on this measure, it actually scores within the same range as the UK (0.886) and Hong Kong (0.907), despite far lower per capita income.

https://data.undp.org/dataset/Non-income-HDI-value/2er3-92jj

Next, we should look at the Inequality-Adjusted HDI values. When adjusted for inequality, DR’s score drops to 0.510 (a fall of 27.3%), putting it with Tajikistan and Guyana. I can’t imagine that Cuba’s IHDI score falls further than DR’s, but unfortunately, we have no data on Cuba’s total IHDI value. However, we do have Cuba’s Inequality-adjusted Life Expectancy Index (LEI) value, which is 0.882 (a drop of 5.4%). This not only puts Cuba far above DR (0.708, a drop of 16% and a difference with Cuba of 0.174), it actually makes Cuba almost exactly the same as Denmark on this measure (ILEI 0.887). That’s remarkable. If Cuba’s Inequality-Adjusted LEI gives us any indication of its overall Inequality-Adjusted HDI, then the latter should be within the range of the developed world, and of course far above the DR.

https://data.undp.org/dataset/Table-3-Inequality-adjusted-Human-Development-Inde/9jnv-7hyp

And again, it has been a quarter-century since the Soviet spigot was shut off. I suppose Soviet aid has been to some extent replaced by Venezuelan aid, but it’s still not even close.

[I also forgot to mention something: about 10% of Cuba’s population left the island in the decades following the Revolution, encouraged by, among other things, special privileges granted to them in the US immigration system. These emigres are wealthier and more educated than the average Cuban. Nothing remotely comparable holds for the Dominican Republic; if anything, emigration from the DR has been disproportionately unskilled.]

[Quoting PE] Castro built up the Cuban armed forces to such an extent that he could send thousands of troops to Angola, Ethiopia, Mozambique, etc.

He never actually sent combat troops to Mozambique; only a few hundred advisors. By that standard, you could say that the U.S. “sent troops” to El Salvador in the 1980s, or is “sending troops” to Iraq again today. Cuba did, however, send a few thousand troops to Syria during the Yom Kippur War, and engaged in at least some combat with the Israelis.

 [Quoting PE] Now you can say this was tit-for-tat against US support of the opposing side, but these luxury foreign adventures belie the claim of Castro’s “having” to divert spending anywhere

At least in the case of Angola, Cuba was engaging in collective self-defense (which is a guaranteed right in Article 51 of the U.N. Charter) against a South African/Zairean attack. Ethiopia, too, was attacked by Somalia, though you could make the argument that the regime in Ethiopia was so heinous that it would have been better had Cuba stayed out of it. With the Yom Kippur War, the situation was more complicated, since, unlike in 1967 and virtually every other Arab-Israeli war, the Arabs actually fired the first shot that time, though they were only trying to reacquire their own conquered territory (Sinai and Golan). In any case, Cuban intervention in that war wasn’t very consequential.

So in the two major cases of Cuban intervention, the Cubans could argue that they were only exercising their right of collective self-defense in accordance with the U.N. Charter, and that they needed to do this in order to deter American aggression and interventionism which could very easily have turned its sights on Cuba (and did, in fact). The United States has similarly exercised (what it described as) “collective self-defense” in Korea, Vietnam, Laos, Cambodia, and Kuwait, all of which cases were at least as complicated as Cuba’s interventions in Angola, Ethiopia, and Syria (and, I would argue, much more so, in each case).

But let’s examine the presuppositions of your argument. Imagine we were having a discussion about the distortions to South Korea’s development caused by South Korea’s military spending, needed in order to deter a North Korean/Chinese invasion (which of course happened in 1950). [Leave aside the fact that this war was far more complicated than it appears from how it is usually discussed (it followed frequent border clashes, most of which were initiated by the South; not to mention a virtual civil war from 1945-50 within the South, for example on Jeju)]. Clearly there’s something to this: South Korea’s actual performance has been impressive, but it would have been arguably better in the absence of the North Korean threat. But imagine I were to respond by saying:

“The South Korean generals built up the South Korean armed forces to such an extent that they could send thousands of troops to Vietnam. Now you can say this was tit-for-tat against Soviet/Chinese/North Korean support of the opposing side, but these luxury foreign adventures belie the claim of South Korea’s ‘having’ to divert spending anywhere.”

Clearly this isn’t a good response.

Even if you think that Cuba didn’t “have” to intervene in Angola, etc., that doesn’t mean that they weren’t driven to spend on their military by the very real threat of U.S. attack. After 9/11, we invaded Iraq, which we definitely didn’t need to do. But it wouldn’t have happened if it weren’t for 9/11. Cheney, Wolfowitz, et al may still have wanted to do it, but they wouldn’t have gotten away with it. Now imagine that this country experienced multiple 9/11s (which is the equivalent of what we’ve done to Cuba over the years once you control for population size). Imagine that Al Qaeda tried to kill the president dozens, maybe hundreds of times. Imagine that there was a Taliban-backed rebellion in the interior of the U.S. How ape-shit do you think we would have gone? How much would we have diverted away from health, education, and welfare to spend on war, the military and the national security state?

[Quoting PE] At the peak of US aid to South Korea in the late 1950s and early 1960s, it amounted to less than 5% of South Korean GDP.

Actually, at its peak in 1957, total foreign aid (most of it from the US) to South Korea hit 16% of GNP, and averaged 8-9 percent from 1959-1962. See the following article by Susan M. Collins and Won-Am Park:

http://www.nber.org/chapters/c9033.pdf

The following paper by Marcus Noland says that foreign aid reached over 20% of GDP, and over 80% of imports, at its peak (see Figure 2, p. 36):

http://scholarspace.manoa.hawaii.edu/bitstream/handle/10125/22215/econwp123.pdf

However, as Collins and Park point out (p. 178), this is a significant underestimation, because it does not include subsidized loans from the US and Japan, which continued well after aid had slowed.

Also see Figure 3 (p. 37) of Noland’s paper, which shows that in the early 60’s, US aid financed almost all of South Korea’s investment, since domestic savings net of aid was near 0.

Going back to your post on Kerala/West Bengal/East Asia/Land reform, I noticed that you ran together two slightly different topics: HDI and land reform. It’s true that West Bengal has abysmal HDI, but it actually did very well compared with the rest of India in terms of land reform. According to Maitreesh Ghatak and Sanchari Roy, West Bengal and Kerala

“accounted for 11.75 and 22.88 per cent, respectively, of the total number of tenants conferred ownership rights (or protected rights) up to 2000, despite being only [7.05] and 2.31 per cent of India’s population, respectively…. West Bengal’s share of total surplus land distributed was almost 20 per cent of the all-India figure… although the state accounts for only about 3 per cent of India’s land resources…..” (p. 253).

http://www.academia.edu/157203/Land_Reform_and_Agricultural_Productivity_in_India_A_Review_of_the_Evidence

I don’t know why West Bengal seems to have done well with land reform and poorly with HDI. Any ideas?

Pseudoerasmus [responding re China only]

In my last reply to Matt, I conceded his/Sen’s argument that China’s market reforms of 1979 may have disrupted or undermined the “barefoot doctors” programme, which could have had adverse consequences for Chinese infant mortality. However, Matt’s own source points out the increase in China’s infant mortality started before the market reforms of 1979 :

[Chart from Hussain & Stern]

Moreover, the above analysis is powered by scepticism about China’s official statistics for 1979-1989. The World Bank’s data on infant mortality and child mortality under age 5, which are based on official statistics, do not show a deterioration in either indicator for the years at issue. (There is, however, a general discrepancy between the World Bank’s and the WHO’s data, although the latter only go back to 1990. But the World Bank and the UN are consistent.) Nonetheless, the PRC’s current infant mortality rate is considerably higher than that of the rich countries (and Cuba’s), although at least according to official statistics, that appears to be in large part because of the divide between the cities and the rural interior.

The rest of my reply is contained in the separate blogpost, Ideology & Human Development. Please post any comments regarding the above over there.


Filed under: Human Development, Sociometrics Tagged: Cuba, HDI, human development, Kerala, South Korea

The Little Divergence


Summary : A “great divergence” between the economies of Western Europe and East Asia had unambiguously occurred by 1800. However, there’s a growing body of opinion that this was preceded by a “little divergence” (or “lesser divergence”?) which might have started as early as 1200. I argue that the pre-modern “little divergence” was probably real, but that doesn’t mean it happened because of a modern growth process — a sustained rise in the production efficiency of the divergent economies.


[Warning : This blogpost is mostly about how data on incomes from the pre-modern period are constructed. I’ve done my best to minimise details, but I cannot guarantee it won’t be as boring as atonal music performed with a spoon.]

(1)

The “little divergence” may now be close to a consensus view amongst economic historians both in Europe and the United States. In a way it’s a reaction to the revisionist book by Kenneth Pomeranz, The Great Divergence, which argued that Chinese and Western European economies had been fairly comparable as late as 1800. Pomeranz and the “California School of Economic History” are themselves the culmination of the “global systems” macro-histories exemplified by Fernand Braudel. Pomeranz then set off a cascade of dense elaborations by historians of Asia. Before Pomeranz and the Asian revisionism, most histories had pegged the start of the divergence between the two coasts of Eurasia to about 1500 or 1600. But in countering the Pomeranz revisionism, economic historians ended up pushing back the divergence to the High Middle Ages !

These two charts (source) encapsulate the little divergence :

[Charts : the little divergence]

The modern growth of Northwestern Europe after 1800 is now deemed a mere acceleration — albeit a very great acceleration — of an almost millennium-long trend. So people may marvel at the technological sophistication and scientific cleverness of the Song or the Ming or the Bling Dynasty, but in the final, brute number-crunching of per capita incomes, the wretched peasants of Western Europe had shot right past all of them.

Such views are now embedded in the popular imagination, as evidenced in the Atlantic magazine website from which I extracted those charts, as well as in a Vox article by one of the foremost proponents of the “little divergence” himself. (Examples of blogs using the same data or making similar claims : 1, 2, 3)

In this blogpost I will argue the following :

  • While very few economic historians now dispute that East Asia had lower living standards than Europe well before 1800,
  • there is no agreement on whether European economies prior to 1800 were “modern” or “Malthusian” ;
  • if they were Malthusian, then the “little divergence” is rather trivial and unremarkable.
  • Furthermore, the income “data” for years prior to 1200 are mostly fictitious.
  • While real data exist after 1200 for Western Europe and China, output estimates are still calculated using assumptions that, were they better understood, would shatter confidence in the enterprise of economic history !

(2) Malthusian or Modern ?

In the Malthusian “biological” or “organic” economy, the level of technology at any given time permitted only a certain number of people to live off any given piece of land. The carrying capacity could vary according to the natural ecology of the land, because some environments are naturally more productive than others. Different peoples also possessed different levels of technology, defined in the widest possible sense as the stock of knowledge about the manipulation of the environment. When a people entered new, empty land, they would reproduce themselves until their population hit the carrying capacity — just like caribou or horse flies.

Of course, people can improve the carrying capacity through technological innovations, but in the premodern world those were very slooooow to happen and very rare in comparison with today.

I don’t want to go into too much detail, because you can read about the Malthusian model anywhere. (There are strong and weak versions.) Suffice it to say for my purposes, under Malthusian assumptions, per capita income was determined exclusively by the birth rate and the death rate.

This does not necessarily mean that the average person was living on the edge of starvation. This is a common misconception. To the contrary, the neo-Malthusian model implies that anything which lowers the birth rate or increases the death rate will raise the living standards of the average person. This is why different societies with different fertility practises and mortality conditions had very different income levels.
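A toy simulation can make the point concrete. This is entirely my own illustrative sketch, not drawn from any of the sources discussed here, and every parameter value is invented : income falls as population crowds onto fixed land, births rise and deaths fall with income, and the steady-state income is simply the level at which births equal deaths.

```python
# Toy Malthusian model : steady-state income is pinned down by the
# birth/death schedules, not by the level of technology.
# All parameter values are invented for illustration.

def simulate(birth_rate_per_income, death_base, years=2000,
             technology=100.0, pop=1.0):
    """Iterate a simple Malthusian economy; return final per capita income."""
    for _ in range(years):
        income = technology / pop      # diminishing returns on fixed land
        births = birth_rate_per_income * income   # fertility rises with income
        deaths = death_base / income              # mortality falls with income
        pop *= 1 + (births - deaths)
    return technology / pop

# Two societies with identical technology but different fertility schedules:
low_fertility = simulate(birth_rate_per_income=0.001, death_base=0.05)
high_fertility = simulate(birth_rate_per_income=0.002, death_base=0.05)
assert low_fertility > high_fertility  # restrained fertility => richer steady state
```

Both societies share the same technology, yet the one with the restrained fertility schedule settles at a permanently higher per capita income — which is the Malthusian reading of Western Europe’s late marriage pattern.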

As far as I can tell, few people dispute that Western Europe was richer (per capita) than East Asia or India well before 1800. Gregory Clark in A Farewell to Alms argued that the daily wage, expressed in terms of wheat-pounds or rice-pounds, was much lower in Asia than Western Europe. But it was also much lower in East Asia and India than in Turkey, Egypt and Poland. Other lines of evidence all point to the same thing : the inhabitants of East Asia and India may have had the lowest living standards on earth before the modern period. Paradoxically, this was a sign of cultural sophistication and/or ecological good fortune, for Asian societies were capable of squeezing more people onto a piece of land than other societies.

It’s now well known that in mediaeval Western Europe women married later than in other parts of the world, and fewer women got married in the first place. This had the effect of reducing fertility rates well below the biological maximum. In East Asia, the female marital age was much lower, but a combination of infanticide, birth-spacing and other factors apparently kept net fertility only a little higher than Western Europe’s. Thus, under Malthusian assumptions, East Asia’s relative poverty is largely to be explained by its lower mortality : life in Western Europe was simply more lethal but richer, whilst more East Asian adults survived and lived longer but more miserably. The differences in mortality could be due to differences in disease prevalence, hygienic practises (such as bathing), medical knowledge or public health knowledge.

So, the question is, was the “little divergence” in living standards between Europe and Asia the result of “modern” or “Malthusian” mechanisms ? That is, was Europe’s income higher than China’s and Japan’s because the Europeans were becoming more efficient at extracting output from land, capital and labour long before 1800 ? Or is it simply that Europe and Asia had different birth and death schedules ?

If it’s the latter, then the “little divergence” is trivial and uninteresting. Or perhaps it’s interesting in the perverse sense that East Asia might have been poorer than Western Europe only because East Asians discovered earlier not to shit on themselves, itself because they understood the commercial and technological value of human faeces.

The previous questions can also be rephrased : was there a rising trend of income in Western Europe over the long run before 1800 ? And was what happened to Europe some time in the 18th century a major break with the past ?

(3) North-Central Italy

Most economic historians are either anti-Malthusians or “moderate” neo-Malthusians who think England and other European countries started slowly escaping their ecological constraints earlier than 1800. A fairly small camp of radical neo-Malthusians maintain a view which can be summarised by Gregory Clark’s assertion for England : “England in 1381, with only 55 percent of the population engaged in farming, was at income levels close to those of 1817”.

There is little dispute that between 1300 and 1850 there was long-run income stagnation in North-Central Italy, which is right now one of the richest regions of Europe. The two following charts are both from Malanima :

[chart: Malanima’s income estimates for North-Central Italy]

In the above, income is represented by the aggregate consumption of goods, which itself is computed, essentially, by {daily wage rate} x {number of working days per year} x {prices of basic goods}, along with (very crucial) weights for these variables — based on theoretical assumptions about how Italians of centuries ago might have switched between goods when prices and wages changed. The number of working days per year is unknown, but Italians are assumed to have behaved much as peasants in the poorest countries today, who tend to work more when wages fall and work less when wages rise. Hours of work per day, which are also unknown, are assumed to be constant over time. (This is not stated explicitly in Malanima, but is true by implication.) What this means is that when prices were high, Italian workers of the past are assumed to have just worked more days of the week, rather than 4 extra hours a day from Monday to Thursday, in anticipation of the demoralising bony fish on Friday…
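The computation just described can be condensed into a few lines. This is only a sketch under the stated assumptions: the prices, weights, and wage figures are invented, and the second function encodes the fixed-consumption-target behaviour (work more days when the real wage falls).

```python
# Sketch of the wage-based income computation: nominal earnings deflated by a
# weighted basket of basic goods. All figures are invented.
def real_income(daily_wage, days_worked, prices, weights):
    nominal = daily_wage * days_worked
    basket_cost = sum(weights[g] * prices[g] for g in prices)
    return nominal / basket_cost           # income measured in baskets

# The assumed peasant behaviour: work exactly as many days as needed to
# afford a fixed target number of baskets.
def days_for_target(target_baskets, daily_wage, prices, weights):
    basket_cost = sum(weights[g] * prices[g] for g in prices)
    return target_baskets * basket_cost / daily_wage

prices = {"bread": 2.0, "cloth": 5.0}
weights = {"bread": 0.8, "cloth": 0.2}     # basket cost = 0.8*2 + 0.2*5 = 2.6
```

With these numbers, a wage cut from 1.3 to 1.0 raises the days needed to hit the same target from 200 to 260: exactly the “work more when wages fall” assumption.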

For North-Central Italy, there exist adequate data for wage rates, prices of basic goods, and population. That’s actually pretty good, but we think of Italian mediaeval data as pretty solid only because we compare them with the complete unknowns like the Axumite Empire in Ethiopia or ancient Greece. We probably have more information to judge the economic performance of the Soviet Autonomous Republic of Tatarstan under Stalin in the 1930s or Zaire under Mobutu Sese Seko. Yet we think of both as relatively inadequate, because the reference comparison for those would be Eurostat or the BEA.

Individually, many of the assumptions behind the construction of income data seem reasonable, but, taken together, they are a little dodgy. And when you consider that the above income series looks more or less like the wage series below, you begin to wonder, what was all that painstaking computation all about anyway ? North-central Italian wages over the same period :

[chart: real wages in North-Central Italy over the same period]

There’s understandable reluctance to rely exclusively on wages, since the proportion of wages in national income can vary when the capital share (i.e., rent, in this case) varies. Malanima does his best to check his income data against more limited information on rents and production.

Few people dispute his reconstruction of Italian data. The Malthusians have no cause to dispute it since the Italian story fits so nicely with the “biological theory of living standards”. The anti-Malthusians, perhaps, don’t find it implausible that Italy, even north-central Italy, was so stagnant over such a long period. After all, they didn’t start the Industrial Revolution, did they ?

(4) England : Broadberry versus Clark

The argument is largely over England (and, perhaps the Netherlands). And that battle is best encapsulated in this chart of competing estimates of income per capita for England over 600 years [source] :

[chart: competing Clark and Broadberry estimates of English income per capita]

(The rival sets of economic aggregates are described and compiled in Clark and Broadberry.)

How you view English economic history prior to 1800 — Malthusian or modern — depends on your opinion of the estimate of English income in 1400-1450. If income was high, per Clark, then the time series would look Malthusian. If, however, income was low, per Broadberry, then there was a subsequent long-run trend, which would be consistent with the slow-but-modern view of English economic growth.

Clark’s view is that despite ups and downs England in the mid-18th century was no richer than it was in 1350, and the 1350 standard of living was high by comparison with the rest of the world at the same time or most of Sub-Saharan Africa in the present. That is, England was always fairly well off — because England controlled fertility and had high death rates. Broadberry, by contrast, believes England in 1350 was about as poor as Tanzania today (and poorer still in 1250), but English income rose slowly but reliably over the next 500 years because farmers, artisans, craftsmen, and merchants were getting slowly more efficient at their tasks.

What accounts for the difference between the two estimates ? Remember, Clark’s income for 1450 is roughly double Broadberry’s. That’s a big gap. Clark, like Malanima, aggregates wage data, but pre-modern England is also much richer territory for the economic historian with its bounty of records about rents, tithes, sheep counts, wills, tax records, etc. Broadberry uses pretty much the same data as Clark, but computes the physical output of goods.

In modern GDP accounting, there are three separate methods of computation which serve as checks on one another : the income approach (incomes received by workers and owners of capital) ; the output approach (the sum of physical output minus inputs in the business & public sectors) ; and the expenditure approach (the sum of spending by households, businesses, and the government). There are smallish discrepancies in the GDP estimates from these three approaches, but they get reconciled plausibly in a predictable way.
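In a toy one-firm economy the three approaches coincide by construction; all the numbers below are invented, and real national accounts differ only by a statistical discrepancy.

```python
# One-firm economy: the same flow of value measured three ways.
wages, profits = 70.0, 30.0                              # income approach
gross_output, intermediate_inputs = 150.0, 50.0          # output approach
consumption, investment, government = 60.0, 25.0, 15.0   # expenditure approach

gdp_income = wages + profits                             # what factors earn
gdp_output = gross_output - intermediate_inputs          # value added in production
gdp_expenditure = consumption + investment + government  # what is bought
```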

But for the Middle Ages, the wage approach has always been more popular because it’s thought to be simpler and more straightforward, involving fewer assumptions. Broadberry himself describes how wages have been the most traditional way income has been calculated by English economic historians :

“The quantitative picture of long run economic development in Europe is based largely on the evidence of real wages. In the case of Britain, the standard source is Phelps Brown and Hopkins (1955; 1956), who showed that there was no trend in the daily real wage rates of building labourers from the late thirteenth century to the middle of the nineteenth century, albeit with quite large swings over sustained periods. This view has recently been supported by Clark (2004, 2005, 2007a), who constructs a new price index, refines the Phelps Brown and Hopkins industrial wage series and adds a wage series for agricultural labourers. In addition, Clark (2010) provides new time series for land rents and capital income to construct a series for GDP from the income side. This new series is dominated by the real wage and hence paints a bleak Malthusian picture of long run stagnation of living standards and productivity.”

But the anti-Malthusians are sceptical — incredulous, really — of the wage-based results, because, in Broadberry’s words :

“…there are good reasons to be sceptical about this interpretation of long run economic history [based on wage data], which seems to fly in the face of other evidence of rising living standards, including the growing diversity of diets (Feinstein, 1995; Woolgar, Serjeantson and Waldron, 2006), the availability of new and cheap consumer goods (Hersh and Voth, 2009), the growing wealth of testators (Overton, Whittle, Dean and Haan, 2004; de Vries, 1994), the virtual elimination of famines (Campbell and Ó Gráda, 2011), the growth of publicly funded welfare provision (Slack, 1990), increasing literacy (Houstan, 1982; Schofield, 1973), the growing diversity of occupations (Goose and Evans, 2000), the growth of urbanization and the transformation of the built environment (de Vries, 1984).”

So Broadberry and his team made a truly herculean effort to count the total physical output of the English economy between 1300 and 1800. The description of their methodology makes for an even more boring read than this blogpost, but I have read it so you don’t have to. The next paragraph may be particularly boring, so skip it if you trust my later characterisation of it.

Just to give you an idea of how Broadberry et al. came up with England’s total agricultural output : they compute the percentage of arable land from many sources ; then estimate the percentage of fallow and cultivated land, mostly inferred from probate records ; assume there are no major differences between manorial land and freehold land ; use Clark’s regression estimates of yield per acre based on a sample of farms across counties ; make allowances for part of the crop set aside as seed (not clear how they inferred that) ; also make allowances for crops fed to animals based on samples of what horses and oxen ate in 1300, 1600 and 1800 (OK, they have different samples for oats and pulses…) ; extrapolate output of the agricultural sector by multiplying yield per acre by cultivated arable land for each crop, minus seeds and feed ; estimate the output of the pastoral sector (i.e., herds) by counting sheep from a sample of manorial records and probate inventories ; assume arbitrarily that 90% of cows and sheep produce milk and wool, respectively ; also assume (what looks to me like) arbitrary percentages of slaughter of livestock ; extrapolate all this to national pastoral output by assuming certain proportions between manorial and freehold stocks of animals ; estimate output of hay by assuming each horse ate 2.4 tonnes of hay per year, with the number of horses also estimated from diverse records.

Then the statistically inferred physical count of output is multiplied by price data supplied by, again, Clark. Note that all of the physical output data are highly discontinuous : more plentiful in the 1700s, available only once every century before the 1500s, or maybe a few times between 1500 and 1700. Broadberry et al. were very careful and diligent. They even check whether the number of sheep they came up with for any given century was consistent with what England exported.
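The chain of assumptions just described can be compressed into one hypothetical calculation for a single crop. Every parameter here is invented; the actual exercise repeats something like this crop by crop and benchmark year by benchmark year, with each input inferred from a different documentary source.

```python
# Hypothetical sketch of the output-method chain: sown acreage -> gross yield
# -> net of seed and animal feed -> valued at prices. All numbers invented.
def crop_output_value(arable_acres, fallow_share, crop_share,
                      yield_per_acre, seed_share, feed_share, price):
    sown = arable_acres * (1.0 - fallow_share) * crop_share
    gross = sown * yield_per_acre
    net = gross * (1.0 - seed_share - feed_share)   # deduct seed corn and fodder
    return net * price

wheat_value = crop_output_value(
    arable_acres=1_000_000, fallow_share=0.3, crop_share=0.4,
    yield_per_acre=10.0, seed_share=0.25, feed_share=0.05, price=0.5)
# roughly 980,000 on these made-up figures
```

The point of writing it out is to see how the uncertainties compound: a modest bias in any one of the seven inputs propagates multiplicatively into the total.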

I won’t get into the nonfarm sector, because the preceding makes the point clear : the chain of assumptions and inferences at each step is iffy enough, but when all is said and done, how can we know to trust the aggregates ?

Normally, you compare the GDP estimates calculated with different methods, but in this case, Clark’s and Broadberry’s are very different, especially for the late Middle Ages. Where, precisely, do they differ ? That is, what statistical adjustment is necessary to harmonise Broadberry’s and Clark’s estimates ? The number of days worked ! In Clark’s data, the number of days worked per worker per year stays within the range of 250-280 days over the course of 550 years :

[chart: Clark’s estimates of days worked per worker per year]

(Of course, the number of hours worked per worker per year does not even figure in anyone’s calculations, since that is unknowable, even though we really need that information to truly assess the pre-1800 years in the same way we assess the post-1800 years.)

Broadberry does not actually use any of the published days-worked data as shown above. What he does, instead, is impute the days worked from his output estimates. This means, he reconciles his output-based GDP with the wage-based GDP by increasing or decreasing the days worked as necessary to fit his own GDP data. Here are the “imputed” days-worked in Broadberry :

[table: Broadberry’s imputed days worked]

I stress : the third and fourth columns do not contain any values which have been actually observed, or inferred from statistical samples. They are literally the numbers he needs to make Clark’s wage series “fit” his output series. Broadberry is not being sneaky. He’s quite upfront about his assumptions :

“The second purpose of this paper is to explore the differences between the trends in the real wage and output-based GDP per capita series. The most straightforward way to reconcile the two series is to posit an “industrious revolution”, so that annual labour incomes grew as a result of an increase in the number of days worked, despite the stagnation in the daily real wage (de Vries, 1994).”

The reference is to Jan de Vries, aptly, the author of The Industrious Revolution. For de Vries this revolution was his way of reconciling the increase in luxury goods mentioned in wills starting in the 17th century with the reality of stagnating wages. In the narrative he constructed, early modern households, desiring the new luxury goods made available by global trade and New World expansion, supplied more labour than ever before, including that of wives and children. Broadberry allies himself with this story and extends it deeper into the past, because it’s obviously consistent with his output estimates.

Of course, if incomes increased because people were working longer hours than they had been used to, not working ‘better’, then that’s not inconsistent with a Malthusian story in which the productive efficiency of the economy is stagnant.
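The imputation itself is just an accounting identity, which a couple of lines make plain. The numbers are invented, and the simplifying assumption that all income is labour income is mine, for illustration.

```python
# Days worked as a residual: whatever number reconciles an output-side GDP
# per head with a given daily real wage. Illustrative figures only.
def imputed_days_worked(gdp_per_head, daily_real_wage, labour_share=1.0):
    annual_labour_income = gdp_per_head * labour_share
    return annual_labour_income / daily_real_wage

days_early = imputed_days_worked(gdp_per_head=48.0, daily_real_wage=0.4)   # ~120
days_late = imputed_days_worked(gdp_per_head=108.0, daily_real_wage=0.4)   # ~270
```

Raise the output-side GDP estimate while holding the wage fixed, and an “industrious revolution” appears mechanically.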

I am not suggesting Clark’s estimates are free of tremendous uncertainties. His wage series have been criticised on grounds of representativeness, for example. But I think his methods are more straightforward and he does use observed or sampled values for the basic aggregates. His estimates do not require hypothesising an unobserved massive increase in English working habits between 1450 and 1600.

There are many other critiques and counter-critiques of both sides, as well as ingenious attempts to cross-check Broadberry’s estimates with other kinds of calculations (especially by Karl Gunnar Persson, cf “The End of the Malthusian stagnation thesis”). But I think that’s enough for now !


Addendum-Final note : Just to avoid ambiguity, I state as baldly as I can the point of this post: the pre-modern “little divergences” were probably real, but that doesn’t mean they happened because the divergent economies were smarter or more efficient. Today people assume that higher income implies more technological sophistication. But in the Malthusian world, inhabitants of “smarter” or technologically more advanced societies could be poorer on average than those of less sophisticated societies, because what determined living standards was the balance of birth and death rates.

I think a solid piece of evidence for the Malthusian view is that height in England in the years 1-1800 saw no long-term trend :

[chart: average height in England, years 1-1800]


Addendum #2: There is now a separate blogpost, “Height in the Dark Ages“, which assesses living standards in post-Roman Europe using evidence from height.

Addendum #3 : There is now another separate blogpost, “Angus Maddison“, which examines the dubious assumptions behind the pre-1200 income data published by the late Angus Maddison.


Filed under: Economic History Tagged: great divergence, Gregory Clark, little divergence, long-run growth, Malthus, Malthusianism, Stephen Broadberry

Random thoughts on critiques of Allen’s theory of the Industrial Revolution


{ This post is mostly stringing together my scattered tweets over the past couple of weeks. I’ve had numerous discussions on this subject with Vincent Geloso, Judy Stephenson, Ben Schneider, Benjamin Guilbert, Anton Howes, and Mark Koyama. But yesterday Geloso sent me the paper he’s working on for Alsatian wages and that kick-started further thoughts I shared with Geloso privately, and then with the others on Twitter. You can follow the most recent discussion below this tweet, although it’s very difficult to keep track of the many different threads. I’m generally a sceptic of Allen’s theory, but in this post it seems I ended up critiquing the critiques as much as Allen himself. }


First, a quick preface. I love the work of Robert Allen. I love his papers on steel from the 1970s and 1980s. I have a love-hate relationship (on some days love, some days hate) with his book on the Soviet Union. I swoon over his work on English agriculture. And his little book on global economic history — is there a greater marvel of illuminating concision than that?

Allen has an old-fashioned interest in the economics of making stuff, the bread-and-butter of traditional economic history. He doesn’t shy away from learning the nuts and bolts of technology, which too many economists and historians do these days. Inasmuch as he discusses other things like institutions or culture, he doesn’t get carried away by lofty abstractions, and his point of departure is always the very concrete reasons that a firm or an industry or a country is more productive than another. I’m not rubbishing institutions or culture as explanations — I’m just saying, Allen’s virtue is to start with problems of production first.

Yet I always find myself in the peculiar position of loving his work like a fan-girl and disagreeing with so much of it.

In particular, I’m sceptical of his theory of the Industrial Revolution.

[book cover: Robert C. Allen, The British Industrial Revolution in Global Perspective]

Allen has been advocating for at least 20 years now that England in the 18th century possessed a “high wage economy”. English labour costs relative to continental Europe and Asia were unusually high. This is an important part of his “induced innovation theory” for the invention and adoption of machines in the leading industries of the Industrial Revolution. In short, England’s high wages relative to its cheap energy and low capital costs biased technical innovation in favour of labour-saving equipment, and that is why it was cost-effective to industrialise in England first, before the rest of Europe (let alone Asia).

I hasten to add, Allen’s is not a monocausal theory. To the contrary, it is a complete multi-causal model, but his distinctive contribution is the high-wage economy. Here is Allen’s own flowchart taken from the book:

[diagram: Allen’s flowchart of the causes of the Industrial Revolution]

The theory is appealing, in part, because the technological innovations of the early Industrial Revolution were not exactly rocket science (a phrase used by Allen himself), so one wonders why they weren’t invented earlier and elsewhere. (Mokyr paraphrasing Cardwell said something like nothing invented in the early IR period would have puzzled Archimedes.)

But I’ve always had reasons to doubt it. As Mokyr has tirelessly argued, inventions were too widespread across British society to be a matter of just the right incentives and expanding markets — and this is a point now being massively amplified by Anton Howes.

There are more concrete reasons for scepticism. As Kelly, Mokyr, & Ó Gráda (2014) have pointed out, although nominal and real wages were indeed higher in Britain, Allen must assume that unit labour costs (wage divided by labour productivity) were also higher. But if the Anglo-French wage gap were matched by a commensurate labour productivity gap, then the labour cost to the employer would have been the same in the two countries. Actually Allen himself brings up the issue of unit labour cost in his book, but mostly hand-waves it away and implicitly assumes that ULC was higher in England. But that’s far from proven.
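The unit-labour-cost objection takes one line to state. The figures below are invented: a 50% wage gap matched by a 50% productivity gap leaves the employer’s cost per unit of output identical.

```python
# Wage per unit of output, not per day, is what the employer pays for labour.
# Illustrative numbers only.
def unit_labour_cost(daily_wage, output_per_worker_day):
    return daily_wage / output_per_worker_day

english_ulc = unit_labour_cost(daily_wage=12.0, output_per_worker_day=6.0)
french_ulc = unit_labour_cost(daily_wage=8.0, output_per_worker_day=4.0)
# both equal 2.0: no labour-cost incentive to mechanise in England first
```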

Besides, you already had capital-intensive production techniques in several sectors well before the classic industrial revolution period — especially in silk and calico-printing. Silk-throwing (analogous to spinning in cotton) was mechanised in Italy before 1700. The idea was pirated by Lombe who set up a water-powered silk-throwing factory circa 1719, and he was imitated by many others by the 1730s. Then you had heavily machine-dependent printing works for textiles (especially calicoes) in many European cities before the canonical industrial revolution period. None of these seemed to require Allen’s “high wage economy”. (Not to mention, Allen’s model has implications for the diffusion of the Industrial Revolution, and Scottish industrialisation was almost simultaneous with the English one, despite wage differences.)

Nonetheless, I had mentally reconciled Allen and Mokyr in the manner of Crafts by considering Mokyr = supply of inventions, Allen = demand.

But there has been a spate of critiques of Allen’s work recently. Humphries (2013); Gragnolati et al. (2011); and Stephenson (2016). The latter establishes through archival research that those builders’ wages for London on which so much of Allen’s reasoning is based weren’t wages at all, but fees paid to labour contractors and in fact the wages received were at least 20-30% lower. (That doesn’t really address the issue of the actual labour cost to firms, though.)

Then there’s Humphries & Schneider (2016). Most of the wages cited in the literature have been drawn from secondary literature (books, pamphlets, etc.), but Humphries & Schneider actually dug into all kinds of archival sources to show that the estimated 1 million women and children who spun yarn with wool, linen, and cotton in their rural homes were paid much lower wages than Allen’s narrative has relied on: ~4 d [pence] per day, rather than the >8d/day assumed in Allen. And one of the showcases of his theory is the series of inventions mechanising yarn spinning!

[chart: Humphries & Schneider’s estimates of spinners’ wages]

The source of the data makes H & S’s conclusions persuasive, but the result is also theoretically compelling. Men, especially in big cities, may have been paid higher wages, but women and children in the countryside were not. This makes early modern England much more like a “surplus labour economy” with an “unlimited supply of labour” à la Arthur Lewis. H & S describe putting-out merchants expanding their network of spinners farther and farther away from their core areas to find fresh labour, so that even as the demand for cloth rose they could avoid bidding up wages. This was probably reinforced by a cartel-like arrangement amongst the merchants. Labour market monopsonists also loom large in modern development microeconomics!

Quick thoughts on some of these critiques:

Allen v Gragnolati: At least on the question of the jenny, Allen’s Achilles’ heel may be working hours. In the spinning jenny paper, where Allen argues the jenny was profitable in England but not in France, he assumes total production stays the same when spinning productivity rises with the jenny, because households reduce working hours in response. Gragnolati et al. asked him: why not increase production in order to earn more income? The jenny would have been profitable in France as long as total output rose. Allen’s response was to cite his paper with Weisdorf. He argues that the existing working hours/year estimates are consistent with the idea that British households maintained a fixed consumption target, according to which they adjusted work hours in conjunction with market wage rates. Allen’s argument therefore implicitly assumes that British and French spinners had similar preferences for the leisure-work tradeoff. But this is a highly uncertain result, and it goes against the spirit of the “Industrious Revolution“. I can’t imagine this will hold up in future work. Besides, I violently hate the “peasant mode of production” idea…
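Allen’s fixed-consumption-target assumption can be put in one line (all figures invented): a household that works only until it hits a target income responds to the jenny’s higher productivity by cutting hours, leaving total output, and hence French profitability, unchanged.

```python
# Target-income behaviour: hours adjust inversely with productivity.
# Hypothetical figures throughout.
def hours_worked(target_income, piece_rate, output_per_hour):
    return target_income / (piece_rate * output_per_hour)

hours_wheel = hours_worked(target_income=30.0, piece_rate=1.0, output_per_hour=1.0)
hours_jenny = hours_worked(target_income=30.0, piece_rate=1.0, output_per_hour=3.0)
# the jenny triples productivity, hours fall to a third, output is unchanged
```

Gragnolati et al.’s objection amounts to dropping the fixed target: if households instead keep hours constant and raise output, the jenny pays for itself in France too.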

The wage gap & market size: I’ve mentioned this many times to people on both Twitter and in real life, but the role of market size in Allen’s model gets too often neglected. The question that no critic of Allen has so far posed is this: what is the wage gap between Britain and France that renders inventions in Great Britain profitable but not in France, given the two countries’ differences in market size? There’s a balance between the factor savings due to inventions and the costs of invention, which are reduced as a function of market size (i.e., costs divided by the number of goods it is possible to produce in a given market). In terms of the isocost model used by Allen, which simplifies a section of Acemoglu’s “Directed Technical Change” paper, the bigger the market size, the bigger the isocost’s shift towards the origin for any given fixed cost of invention. (By the way, the wage rate affects the slope of the French and British isocost curves in Allen.)

The real wage ratio for Paris/London used by Allen is ~50% in 1750-1775 and ~57% in 1775-1786, and this gets adjusted by Stephenson (2016) to ~62% and ~71% respectively. But the significance of the “Stephenson adjustment” can only be assessed in relation to market size differentials between France and Britain. And by “market size” we must take into consideration not only population and colonies but also internal barriers to trade.

But of course it’s entirely possible that further research might revise French wages upward or downward, making the Anglo-French wage gap smaller or bigger.
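A back-of-envelope version of the market-size point (not Allen’s isocost apparatus, just its arithmetic core, with every figure invented): an invention that saves labour per unit pays for itself only if the saving, scaled by the number of units the market absorbs, exceeds the fixed cost of developing it.

```python
# Fixed invention cost vs. wage savings scaled by market size.
# All figures are hypothetical.
def adoption_profitable(daily_wage, labour_days_saved_per_unit,
                        market_size_units, fixed_invention_cost):
    total_saving = daily_wage * labour_days_saved_per_unit * market_size_units
    return total_saving > fixed_invention_cost

britain = adoption_profitable(10.0, 0.5, 1_000, 4_000.0)  # bigger market, higher wage
france = adoption_profitable(6.0, 0.5, 500, 4_000.0)      # smaller market, lower wage
```

On these numbers the same machine clears the hurdle in Britain but not in France; shrink the wage gap (the “Stephenson adjustment”) and the verdict flips only if the market-size gap doesn’t compensate.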

Allen & labour markets: The downward wage revision for spinners by Humphries & Schneider (2016) is also restricted to Britain and therefore does not address Allen’s international comparison.

Another potential problem is that there may be regional and sectoral heterogeneity in spinning wages, and Humphries & Schneider have very few cotton- and Lancashire-specific observations.

Allen assumes that wages in cotton were set by wool, which as late as 1770 is estimated at >90% of textile value added in the UK. This is equivalent to an assumption that the British labour market was well-integrated and labour was mobile. But this is unrealistic. There were natural, institutional, infrastructural, and possibly cultural reasons for labour to be relatively immobile at the time — at least between provinces rather than from the provinces to the cities. And this would have been even more likely under the rural putting-out system. Remember, we’re not talking about firms here, but about proto-industry — production taking place inside hovels.

And IF labour markets were highly local and fragmented, then that ironically supports Allen’s view against the criticism found in Humphries & Schneider (2016). What matters is not the ‘national’ wage set by spinning wool, but the specifically cotton wages paid specifically in Lancashire. (There might have even been variation within Lancashire.) You can have local labour ‘shortages’, when the labour market is not national.

In the surplus labour scenario envisioned by Humphries & Schneider, rising demand for labour need not bid up wages. But if labour markets were fragmented and local, then the Lewis-like model need not apply.

The objection that ‘labour-saving’ is seldom if ever mentioned in patent applications or in inventors’ records is not convincing. Demand for cotton goods was growing faster than wool, and on first approximation this implies demand for cotton-specific labour was growing and cotton-specific wages could be bid up in local labour markets. The motive for producing more output faster is equivalent (especially in a constant returns-to-scale industry like textile putting-out) to demanding more labour.

It could be argued that even if labour was not very mobile, the putting-out merchants were. They could have widened their spinning network and induced more households to spin cotton rather than wool, as the demand for cotton grew. But (again) that implies higher wages in cotton than in wool.

Besides, under the putting-out system, expanding output (=increasing labour inputs proportionately) could raise transaction costs rapidly and prohibitively. Merchant-middlemen had to deliver raw material to each household, retrieve the yarn, then deliver the yarn to weavers, and then retrieve the cloth. Thus there was a natural constraint on output.


Other blogs on Allen:

How (much) were British workers paid? Evidence beyond wage rates

Spinning little stories: Why cotton in the Industrial Revolution was not what you think

The High Wage Economy: the Stephenson critic

England circa 1700: low-wage or high-wage, which blogs about the new working paper by Humphries & Weisdorf (2016). Vincent Geloso’s summary: “preindustrial labor markets had search costs; workers were willing to sacrifice on the daily wage rate (lower w) in order to obtain steady employment (greater L) and thus the proper variable of interest is the wage paid on annual contracts”.

Vollrath on Allen vs Mokyr

Howes on Allen vs Mokyr

Vincent Geloso’s summary of the Twitter ménage à entre-quatre-et-huit.

By the way, evidence from Spain does support Allen. Martínez-Galarraga & Prat (2015) find relatively high wages were a factor in Catalan industrialisation. Also see their post explaining their findings at Nada es Gratis (in Spanish).


Filed under: Industrial Revolution, Uncategorized Tagged: biased technical change, directed technical change, high-wage economy, induced innovation, Industrial Revolution, Robert Allen

The emptiness of life will save us from mass unemployment


I don’t have much to add to the debate about the dystopian robot future scenario envisioned by many people. But I do think the nightmare scenario is less mass unemployment than a kind of revamped neo-mediaevalism. I’m not predicting that, so much as saying that’s the worst-case scenario. {Edit 28/12/2016: This was written more than 2 years ago as a kind of joke!}

In the past 250 years, technological progress has not caused unemployment because human wants have been infinite. Every time productivity (output per unit of input) rises, the implied extra income in the economy still gets spent on something (at least when there isn’t a recession), and extra work gets created to produce that something. In other words, fewer inputs may be used to make one unit of output, but more output always gets desired / created. (OK, that sounds Say’s Law-ish, but please be patient.)

Environmentalists understand keenly that when energy prices fall, people frequently just drive more or fly more, or the savings get spent, ultimately, on something else that uses energy. Productivity growth produces the same effect. Which is why, as of now, we’ve never had permanent mass unemployment from technological displacement.

After the basic needs of food and shelter are satisfied, people go in search of other fulfillments — more caloric, varied, and exotic diets; more living space to fill with ever more stuff; 58 changes of clothes instead of 2 per year; more leisure in the form of vacations and entertainment; and ever more marginal extensions of life expectancy. That’s all very obvious.

But as people get wealthier, they demand not only more quantity of stuff, but also ever more trivial and even imaginary increments to the quality of goods and services. Goods and services they want to consume become more labour-intensive. How else to explain the market for, say, honey in a jar that’s ‘raw’, unfiltered, unpasteurised, ‘fair-trade’, non-GMO, single-country-origin, single-bee-colony, and single-flower-species ?

Ironically, as production becomes more brutally efficient with labour-sparing technology, consumption becomes more ‘inefficient’. The hallmark of consumption by the rich has always been its labour-intensiveness. Think of aristocratic dining halls as recently as the Gilded Age, with one liveried footman for every guest at the long table in the dining hall.

That’s why ‘hand-made’ has snob appeal. Bespoke fetishists may think of it as “valuing timeless artisanal quality”, as does one London financial journalist who apparently has not only suits and shoes custom-made by hand, but also socks, neck ties, and (!) pocket squares. (When those silks stick out of the breast pocket, woe unto those rolled edges sewn with plebeian machine-neatness….) This tailor-blogger with a cult following makes suits by hand, or ‘deconstructs’ famous brands, and blogs about every lovely stitch. But in reality such sartorial epicureanism is about deriving more and more marginal utility out of sillier and sillier quality ‘improvements’. And such things point to the niche consumption fantasies of the merely upper-middle-class.

As long as there are still some things machines can’t do, I don’t see why that infinite-wants process can’t be extended indefinitely in the world where 15% earn a charmed living and 85% can at best aspire to the status of lumpenbourgeoisie. People — rich people — will just get even more petty, demanding, absurd, and elaborate in the infinity of their wants. I can’t imagine how exactly, but don’t underestimate the emptiness of human life !

The 15% of workers (in Tyler Cowen’s reckoning) who will succeed in the future must know how to work with AI-flavoured machines in a very complex production chain of human-machine complements. Such a profile would favour not only intelligence, but also conscientiousness, precision, discipline, and cooperative team work. The 85% who won’t make that cut, Cowen imagines, will scrounge around as petty entrepreneurs, freelancers, ‘consultants’, street-vendors, mobile taquería purveyors serving “Korean tacos” handled by tattooed hipsters and transacting with Paypal and iPad, and other genres of precariously ‘self-employed’. The lowest segment of that low segment will be “threshold earners” who proudly just “get by and who do not push ambitiously for a higher wage or stronger credentials at every step”. The culture will change to make that sort of thing hip, respectable, and freedom-enhancing.

Why can’t the conspicuous-luxury consumption sector account for a growing share of employment, as the top 10-15% capture a larger and larger share of GDP growth on account of skill-biased technological change (or what ever is the latest cause-du-jour of growing inequality)? Really, why not ?

With the advent of settled agriculture and the rise of cities, most human beings have most of the time worked for the rich in one form or another — whether as slaves/serfs, or sharecroppers, or free peasants paying confiscatory taxes, or factory workers, or service sector drones. There have been periods of freedom for frontier freeholders, but those don’t last very long.

Perhaps the future promises a world where the majority will still work for the rich, but in enhancing their lifestyles rather than helping to generate their capital income. From adjutants in production to adjutants in consumption.

The size of the class that can afford absurdly infinite and infinitely absurd wants will expand. In 2013, the 99th percentile of household income in the USA was $385,000 and the average income of the top 1%, numbering approximately 1.7 million households, was $717,000. That’s nothing ! After taxes, after the big mortgage payment for the primary house and the vacation spare, after school fees for two offspring, there’s barely enough income left for a maid and a resident minder for the children. It’s definitely not enough to afford the whimsical last-minute day-trip by 90-minute suborbital flight from New York to Tokyo just to indulge in sushi at Sukiyabashi Jiro.

If economic growth of the future is severely skill-biased and concentrated in the top 10-15%, just think of the potential capacity for expansion in luxury consumption. The lifestyles of the top 0.1% will trickle gradually to aspirants in the 0.9%, just as the 5% will come closer to aping the bottom of the top 1%, and so on down the line.

Given the number of workers employed “in service” at stately houses in England or robber-baron America, as late as 1914, or in developing countries today, you can easily imagine such “household establishments” being reproduced in the developed countries, but in more egalitarian-seeming ways. You’ll get the resident in-house staff ranging from manservants to henhouse keepers for each house owned at different locations. But a lot of the luxury service will come in the form of freelance labour that won’t seem nearly as bonded and mediaeval as hereditary footmen in a manor house or a fan-wallah in a maharajah palace. So there needn’t be a total reversion to the Downton Abbey world to suppose that more and more of the top 15% will consume snobbish labour-intensive goods as incomes grow and labour in the bottom 85% gets cheaper.

Why would a 5-percenter go to the chic supermarket or the picturesque farmer’s market for organic milk once a week, which is really for the sad people at the top of the fourth quintile, when he can keep a milk-cow of his own, hire a full-time farmhand to maintain it, and every morning partake of superfresh, hyperlocal, unpasteurised milk from his own arugula-fed, hand-massaged cow with the high-quality but low-yield magic-udder ? That doesn’t mean he won’t send his manservant (aka “personal assistant” in egalitarianese) to the market so he can also now and then try that subtly different, vaguely briny-creamy milk from the clover-fed Bronze-Age heritage-breed cows of the Vendée, delivered daily by the newest version of Concorde. Posh markets will continue to exist because the chic will always talk local and seasonal but will always want winter strawberries and asparagus from Chile and South Africa.

More 5- and 10-percenters will have full-time cooks, who won’t be inordinately skilled, but everything will be made from scratch from their urban gardens and small dedicated suburban livestock menagerie, or collected by the dedicated forager-hunter they co-employ with a neighbour. Right now the well-heeled go to restaurants where ‘innovative’ chefs work with ingredients harvested that very day, foraged that very day, caught that very day, and even slaughtered that very day. But the rich of the not-so-distant future could get a less elaborate version of that at home ! And why not, if it involves no effort on their part ? Of course they would still go to restaurants because of the comparative advantage offered by highly specialised chefs in originating more novel, trendy dishes.

I draw my hypotheticals from food because I know something about foodies and cuisines, and because in my unimaginative laziness I really can’t think of better examples. Social arrangements of the future are difficult to predict.


Filed under: Income distribution, Inequality, Technology Tagged: Average is Over, skill-biased technological change, Technological Unemployment

The Bairoch conjecture on tariffs & growth


{ Note: This post describes and summarises a literature on 19th century growth & trade. I do not necessarily endorse its findings. This post is intended as largely descriptive. }

There is a vast cross-country literature which finds a positive correlation between economic growth and various measures of openness to international trade in the post-1945 period. Despite intense methodological bickering amongst researchers, some 50 studies (maybe more?), using a variety of methods and approaches, come to the same conclusion: trade openness was associated with growth after 1945. (This amazing critical survey lists most of those studies.)

This huge body of research does have some quite compelling critics, the most prominent being Rodríguez & Rodrik (2000). This widely cited paper argues — amongst many other things — that there is no necessary relationship between trade and growth, either way. It depends on the global context as well as domestic economic conditions. I think that view is correct.


There is also a smaller literature on the “19th century growth-tariff paradox” associated with the historian Paul Bairoch. He argued informally that European countries with higher tariffs grew faster in the half century before the Great War.

Bairoch’s rough eyeball correlation was confirmed econometrically by O’Rourke (2000) for a sample of 10 rich countries (Australia, Canada, Denmark, France, Germany, Italy, Norway, Sweden, the UK, and the USA) in the period 1875-1913. This finding was supported by several other studies, including Clemens & Williamson (2001, 2004), but was disputed by Irwin (2002) on the grounds that the correlation was driven by rapidly growing settler economies with high land-labour ratios which relied on tariffs for revenue.


Lehmann & O’Rourke (2008, 2011) then countered by disaggregating tariffs of those 10 rich countries into revenue, agricultural, and industrial components, reporting that duties specifically protecting the manufacturing sector were indeed correlated with growth.

But these 19th century studies take the 20th century findings as a valid point of departure — there is a contrast between the positive tariff-growth correlation for 1870-1914 and the negative correlation after 1945. The ‘paradox’ is therefore in keeping with the criticisms of Rodriguez & Rodrik (2000), one of whose major points is that there is not, even in principle, any necessary relationship between openness and growth. It depends on the global environment, domestic conditions, complementary domestic non-trade policies, what actually gets protected, etc.

Yet even the small 19th century literature is mixed — more mixed than the 20th century literature. Clemens & Williamson (2001, 2004) confirms the overall positive correlation between tariffs and growth found by O’Rourke (2000). But C&W also tests the proposition with a larger sample of 35 countries in 1870-1914 that includes many from the poor periphery. The positive growth-tariff relationship is large for the rich countries, much smaller for the non-European periphery, and negative for the European periphery (e.g., Spain and Russia). So obviously, even with the same global conditions, there’s a lot of heterogeneity.

According to Clemens & Williamson (2001, 2004), the reason there was an overall positive correlation in the 19th century may be that countries with higher tariffs tended to export to countries with lower tariffs:

“[E]very non-core region faced lower tariff rates in their main export markets than they themselves erected against competitors in their own markets. The explanation, of course, is that the main export markets were located in the Core, where tariffs were much lower.”

This is how I personally interpret it: Great Britain and others acted as free-trade sinks (my phrase, not C&W’s) for exporting countries such as the United States (and Wilhelmine Germany) which protected their steel and other industries. (Echoes of East Asia which benefited from US policy during the Cold War? It’s nice if there are countries willing to indulge your export-led development strategy without reciprocal openness.)

Edit: Yes, yes, yes, as many have pointed out on Twitter, Britain in the 19th century famously settled its trade deficits with America, Canada, Europe, etc. through its surpluses with India, etc.

[Graphic: the multilateral pattern of international trade settlements]

But as you can see in the graphic above, it’s a little bit more complicated: India also had surpluses with the USA, Japan, and Europe.

(Clemens & Williamson appeal to a prisoner’s dilemma model in which trade coordination between two countries is the best outcome, but if one country does defect, i.e., imposes tariffs, then the other is better off retaliating. So in the 19th century, the non-retaliating party might have been worse off in terms of growth.)

The correlation reverses after 1945 because “tariff barriers faced by the average exporting country have fallen to their lowest levels in a century-and-a-half”, and rich countries in particular were much more open. So the international environment does matter for the relationship between trade and growth.

Jacks (2006) — using the Frankel-Romer gravity model approach — both replicates the positive correlation between growth & tariffs, and supports the free-trade-sink view. The reason higher-tariff countries grew faster was that “an increase of 1 percentage point in the level of tariffs led to an increase of roughly 0.7 percentage points in the balance-of-trade to GDP ratio”. In other words, the more protectionist countries could generate trade surpluses (which add to GDP), apparently because the more free-trading countries did not retaliate against them. Again, the global environment matters.

Tena-Junguito (2010) focuses on industrial tariffs and supports the other aspect of the Clemens & Williamson finding: the tariff-growth correlation applies only to the “rich country club”. For Latin America and the European periphery, the correlation was negative. Unfortunately this is a semi-cross-sectional view, relating tariffs in 1875 to cumulative growth in GDP per capita between 1875 and 1913, which obviously misses the change over time in tariff schedules after 1875. Most other studies use panel data, relating 5-year chunks of growth rates to average tariffs.

On the other hand, Schularick & Solomou (2011) find no evidence for the tariff-growth correlation for a sample of 30 countries, rich and poor. It turns out to be spurious once you control for the business cycle (which was transmitted internationally via the gold standard). The time trend which drives the tariff-growth relationship is apparently the 1875-79 depression. Countries became more protectionist during the recession, but the higher tariffs remained in place after the global economy recovered, driving the spurious result.

However, Schularick & Solomou do not look specifically at manufacturing tariffs, so their findings do not overturn Lehmann & O’Rourke (2008, 2011), which did find a correlation between manufacturing tariffs and overall growth for the rich countries. It’s possible Lehmann & O’Rourke’s results might be duplicated for a larger sample which includes poor countries.

Personally I find that unlikely because, if anything, the duties levied by primary-commodity-exporting countries such as those in Latin America would surely have been on manufactured goods.

In the final analysis, we shouldn’t make too much out of this literature, either way. Cross-country regressions — especially in a sample ranging from 10 to 30 countries — are a pretty crude and blunt tool for assessing infant-industry arguments.

Edit: I prefer single-country examinations of tariffs and development. For the USA in 1870-1913, some great examples are Yoon; DeLong; and Irwin, all of which argue that tariff protection likely did not play an important role in US economic development in the post-civil war period. Also see my post on the Napoleonic blockade & the infant industry argument.


Addendum: One major reason (amongst many many!) that cross-country regressions are a poor tool for assessing the infant industry argument is that many industries are often protected for the ‘bad’ reasons. (But Nunn & Trefler address this issue successfully in my opinion.)

Another major reason is this: it’s actually quite easy for poor countries to temporarily increase growth rates “through protectionism” as long as you can do technological upgrading by importing it from more advanced countries. If tariff policy or state subsidies distort investment away from agriculture and toward industry by increasing the rates of return in manufacturing, you will get structural transformation (movement of labour from lower-productivity to higher productivity sectors).

Traditional agriculture is so unproductive and so full of underemployed labour that any diversion of resources toward any ‘modern’ sector can raise growth rates, all else equal. If a government decided to import some machines, close off the country to global trade, round up peasants at gunpoint, and force them to work in factories, you would get growth. If this were not true, Stalinist industrialisation would have been impossible ! (I ignore the welfare considerations, of course.)

In fact, even if productivity growth were zero in both the traditional and modern sectors, you can still get positive economy-wide productivity growth just by moving resources out of the traditional sector and into the ‘modern’ sector. All it takes is movement out of one stagnant sector into another stagnant (but ‘better’) sector. You can continue with this process until you run out of peasants, or rather until you run out of idle labour in agriculture (at some point agriculture itself will need productivity growth to release more labour, unless you start importing food, in which case you will need to start exporting something).
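The shift-share arithmetic behind that claim can be sketched in a few lines of Python. The sectoral productivity numbers are invented purely for illustration; the only point is that aggregate output per worker rises as labour moves, even though productivity within each sector is frozen:

```python
# Two sectors with constant (zero-growth) productivity levels.
# All numbers are made up for the example.
farm_productivity = 1.0      # output per worker, traditional sector
factory_productivity = 4.0   # output per worker, 'modern' sector

def aggregate_productivity(modern_share: float) -> float:
    """Economy-wide output per worker for a given modern-sector labour share."""
    return (1 - modern_share) * farm_productivity + modern_share * factory_productivity

# Shift an extra 10% of the workforce into factories each 'period':
for share in [0.1, 0.2, 0.3, 0.4]:
    print(f"modern share {share:.0%}: output per worker = {aggregate_productivity(share):.2f}")
```

Each 10-point shift of the labour force raises economy-wide output per worker, with no technical change anywhere; the process only stalls when the reserve of transferable labour runs out.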

Postscript: By the way, the spectacular historical vulgarian Ha Joon Chang has a habit of cherry-picking from this literature. HJC gleefully cites any study, especially O’Rourke (2000), which confirms his priors, but fails to report nuanced or inconsistent results.

Of course, I do not fault Chang for failing to reference research which did not exist at the time of writing his popular Kicking Away the Ladder (2002), but he doesn’t cite any of the follow-ups in Bad Samaritans (2007), either. And every time he writes an article, he always appeals to the same references and fails to cite stuff inconsistent with his priors (e.g., HJC 2010 and 2013, as well as Chang’s reply to Easterly’s review of Bad Samaritans.)

To the best of my knowledge the only time he has come close to nuance is in the 2010 proceedings of the Annual World Bank Conference on Development Economics, but the modicum of subtlety is relegated to the footnotes:

“At least for the 1870–1913 period, there is even evidence of a positive correlation between tariff rate and rate of growth (O’Rourke 2000; Vamvakidis 2002; Clemens and Williamson 2004)”

“5. Irwin (2002) argues that this correlation was driven by high tariffs imposed for revenue reasons in the New World countries (the United States, Canada, and Argentina in his sample) that were growing quickly for other reasons (such as rich natural resource endowments). However, the United States was the home of infant industry protection at the time, and many of its tariffs were not for revenue reasons. Moreover, O’Rourke (2000) and Lehmann and O’Rourke (2008) show that the positive tariff-growth statistical correlation is not driven primarily by the New World countries.”

“6. Clemens and Williamson (2001) argue, on the basis of an econometric analysis, that around a third of this growth differential between Asia and Latin America during 1870–1913 can be explained by the differences in tariff autonomy.”

Notice that even when HJC does mention Clemens & Williamson (2001), he omits details which do not suit his priors !

Chang (2005, 2010, 2013) loves to cite Rodríguez-Rodrik (2000) and argues against the feasibility of econometrically validating any trade-growth relationship. But then he cites those selective bits of the econometric literature on the Bairoch conjecture anyway, and all the time! Apparently the Rodríguez-Rodrik warnings about the non-universality, historical contingency, and context-specificity of the trade-growth relationship apply only to the 20th century findings!


Edit 26 December 2016: See Vincent Geloso’s remarks on some of the papers mentioned in this post. He’s also posted below in the comments section.


Filed under: economic growth, industrial policy, Infant industry argument, international trade, protectionism, trade & development Tagged: Bairoch conjecture, growth-tariff paradox, Ha Joon Chang, Kevin O'Rourke, Paul Bairoch

The Napoleonic blockade & the infant industry argument: caveats, limitations, reservations


Some reservations about, and limitations of, the Napoleonic blockade paper on the infant industry argument that’s making waves. (Major caveat to the paper: protection persisted for decades after the blockade and may have helped keep the French cotton industry quite backward relative to Britain.)


Noah Smith had a Bloomberg column on the infant industry argument with a nice mention of Reka Juhasz’s paper on the Napoleonic blockade. It’s long been plausibly argued by historians that Napoleon’s attempt to embargo Britain acted as a kind of de facto protection from British competition for cotton industries across Europe.

Juhasz’s paper deserves the accolades it has received. It is the first truly rigorous demonstration that temporary protection for a fledgling industry can ‘work’ — work in the sense that the country doing the protecting can begin to acquire comparative advantage in that sector, and this has long-lasting effects over many decades.

( I’m not exactly sure but I seem to have mixed readership — some readers would know the theoretical rationales behind infant industry protection, and others not. So I have written a bare-bones description of the classical and modern infant industry arguments at the end of this post. )

The Juhasz paper overcomes the endogeneity problem in other research about infant industry protection — the nagging feeling that the government might have chosen an industry which would have become competitive anyway without the protection. The blockade was not intended as a commercial policy, and it was more effective in some parts of the French empire than in others. So Juhasz can exploit the geographical variation in blockade efficacy as a proxy for the regional intensity of British competition.

That ‘exogenous variation’ part of the paper seems to get the most attention, but for me the most interesting is that agglomeration economies played a key role — the change in the spatial pattern of textile production. Initially, before the blockade, French cotton mills were much more widely dispersed across the country than they would prove to be after the blockade. Juhasz demonstrates that those parts of France least exposed to the smuggling of British cotton were the areas where French textile production took off — production in the south was inhibited by Britain’s first mover advantage, whereas the north was protected. This initial locational advantage persisted long after the blockade through path dependence and self-reinforcing cumulative advantages of agglomeration. It’s that switch in spatial concentration from south to north that makes the paper convincing.

[Maps: spatial distribution of the French cotton industry before and after the blockade]

{ Edit: Yes, Alsace-Lorraine (including Mulhouse, the “French Manchester”) was annexed by Germany in 1871 following the Franco-Prussian War. But the map is about the south-north switch. }
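The self-reinforcing agglomeration mechanism can be caricatured in a toy simulation, entirely my own construction and not from Juhász’s paper: two regions, a cost advantage proportional to the current size gap, and firms drifting toward whichever region has the advantage. A small initial shock (the blockade sheltering the north) gets locked in:

```python
# Toy path-dependence model: share = the north's share of the industry.
# The parameters are arbitrary; only the lock-in dynamic matters.
def simulate(north_share0: float, pull: float = 0.3, periods: int = 40) -> float:
    """Return the north's long-run share of the industry."""
    share = north_share0
    for _ in range(periods):
        # Agglomeration economies: the cost advantage grows with the size
        # gap, and firms migrate toward the advantaged region.
        advantage = share - (1 - share)
        share = min(1.0, max(0.0, share + pull * advantage * share * (1 - share)))
    return share

print(simulate(0.55))  # a slight initial lead compounds toward dominance
print(simulate(0.45))  # reverse the initial shock and the south wins instead
```

With a 55/45 split created by a temporary shock, the slight leader ends up with essentially the whole industry; flip the shock and the outcome flips, which is the path-dependence point.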

After 1815, France became an exporter of cotton textiles, suggesting some degree of international competitiveness.

I must admit I was sceptical at first. Even in the UK, the first automated Arkwright mills were not tightly concentrated in Lancashire either — they were dispersed across northern England, the Midlands, and Scotland according to water source.

[Map: locations of early Arkwright-type mills in Britain (after Colquhoun)]

I figured this must also be true for France, but Juhasz controls for natural advantages and her findings are intact. Ironically, water sources or coal deposits were not important partly because French spinning firms relied much more on jennies and mules, which could be powered by hand or animals, than British firms which were more likely to use water- and steam-powered frames or mules.

§  §  §  §  §

I say that’s ironic because, as I pointed out on Twitter, the greater persistence of hand technologies in France could be an effect of protectionism itself !

Although France did become an exporter of textiles after 1815, French cloth would still not be competitive with British cloth. In fact, France banned the import of British textiles (both cotton and wool) until the Cobden-Chevalier Treaty of 1860, after which 20-30% tariffs were imposed. (See Nye 1991.) France did not protect its silk and linen industries, which were already competitive with Britain.

The state of technology in the French cotton sector on the eve of Cobden-Chevalier:

“The Second Empire was a period of significant change in the textile industry, with cotton textiles the most evident beneficiary. There was increasing use of the steam engine and a consequent expansion in the number of spindles. In the East, 9 out of 100 jennies were automatic in 1856, 80 out of 100 in 1868. The number of spindles went from 320,000 to 464,000 in that same period…”

“Normandy, the largest consumer of raw cotton imported from abroad, had few modern factories with relatively few spindles in even the largest firms. The water-powered establishments had difficulty expanding their output and were slow to combine water with steam, which could supplement water power in the winter. Most firms still depended on the old mule-jennies, which were only semi-automatic and required much manual labor.” (Nye 1987)

Now, to be fair, in that paper Nye argues that firm size in France was appropriate for French conditions at the time. But the backwardness of the cotton industry in France circa 1860 is quite striking and the continued protectionism past 1815 must have had something to do with it, even if it is not the full explanation. Also, as Juhasz herself points out citing Saxonhouse & Wright, Britain prohibited the export of machinery and expertise until 1843, so that the diffusion of technology to France might have been obstructed. But I’m sceptical of that argument because I suspect the ban was quite porous, and because the New England textile industry was also more advanced than the French industry.

Here’s the contrast with the contemporaneous British use of power:

[Chart: power use in the British cotton industry]

[Source: Crafts 1994]

( The 1860 policy change [from ban to 20-30% tariffs] might also qualify as a quasi-natural experiment, possibly allowing the estimation of any effect of the liberalisation on technological upgrading in the French cotton industry. There’s a good-sized modern literature on that issue, e.g., Aghion & Burgess, Bustos, etc. )

So you definitely cannot say the Napoleonic blockade left the French cotton industry of 1860 standing to Britain’s as the Japanese auto industry of 1990 stood to America’s. France could not export cotton textiles to Britain.

Yet one major purpose of the Juhasz paper is to argue it is feasible for the state to promote a modern, increasing-returns-to-scale industry with knowledge spillovers that benefit the rest of society outside the protected sector itself. And Juhasz points out that French per capita import of raw cotton would become as large as Britain’s and bigger than other European countries. But even if the French cotton industry was very large, if it was also relatively unmechanised and backward, then the technological externalities might have been fairly limited.

A classic critique of the infant industry argument is Baldwin (1969), which is mentioned by Noah Smith in his column. Baldwin mostly addresses the ‘appropriability’ issue — the idea that because entrepreneurs can’t appropriate the full benefits of externalities-generating, increasing-returns production, they will undersupply such socially valuable activities. The gist of Baldwin’s critique is that tariff protection doesn’t necessarily alleviate that problem, and the actual history of French cotton seems consistent with that view.

All this also lends support to the political economy critique of the infant industry argument, i.e., the idea that when governments do protect an infant industry, protection often lasts longer than necessary because the state gets captured by political interests benefiting from ‘senile’ industries. I know many people roll their eyes at the rent-seeking critique — and I do agree it’s overused to dogmatically reject reasonable theoretical cases for protection. But the rent-seeking argument is still valid.

I suppose one could argue that a period of continued protection (1815-60) followed by liberalisation was the optimal path. On Twitter Juhasz sort of implied that with this chart:

[Chart from Juhász’s tweet]

But 1860 is a pretty late date to have been a technological laggard in cotton. The “Second Industrial Revolution” with steel and chemicals was already beginning in Europe and the United States.

It’s also possible that cotton was not that important as a source of knowledge externalities anyway. Technologically it was a dead-end industry — something argued recently by Kelly & Ó Gráda (2016) — and it may have been the beneficiary of linkages created by other industries more than a creator of its own. The cotton industry certainly wasn’t important as a source of demand for educated labour. See Becker, Hornung, & Woessmann (2011). Therefore, for France at least, you must question how much cotton really contributed to “technological upgrading” at all, relative to the rest of northern Europe.

§  §  §  §  §

Juhasz has replied to what I said on Twitter, acknowledging the political economy issue, but qualifying:

“But I don’t think lumping the economic case for infant industry with the [political economy] problems has been conducive to the debate in the past. France [was] not competitive with Britain in cloth, but hard to see how they would have moved into factory based manufacturing so early otherwise. For developing countries I see this as the broader view on what some form of infant industry policy could achieve if done right”.

Let’s call the distinction she makes the “purely econ angle” versus the “historical angle”.

I agree it’s valuable to demonstrate the “purely econ angle” whilst ignoring the “historical angle”. And I acknowledge, so far in this post I’ve focused on the historical question of what eventually happened in France, not on the ahistorical, purely social-scientific issue of what is possible to do with the optimal “intelligent protectionism”.

An ideal infant industry protection, if done right, would be truly temporary protection to provide that “breathing space” until industry achieves some measure of competitiveness, and no more. That is exactly what Juhasz’s paper shows is possible in principle: she manages to show that a discrete event [i.e., the Napoleonic blockade per se], independent of subsequent events, has certain long-term effects [i.e., France possessed a large cotton industry in 1850]. The paper is therefore part of the corpus of historical persistence studies relating some shock in the past with later outcomes.

But the political economy issue has been central to the debate about infant industries. The debate has not been primarily about the “purely econ angle”. Most reasonable people agree that even with import substitution industrialisation, there was growth and structural transformation in developing countries. (The question is only how much of that growth can really be attributed to ISI, as opposed to other factors, such as the favourable global environment of the 1945-73/80 which I will call the LDC “trente glorieuses”. Also none of that growth ever amounted to any appreciable unconditional convergence with the rich countries.)

The political economy dimension was certainly stressed by the 1970s-80s Krueger-Bhagwati-Balassa critiques of trade policies in developing countries. They argued that the duration and the form of protection were endogenous to politics and institutions. What gets protected, for how long, and in what form are not decided in a political vacuum. (James Robinson argues the same, but in more ‘updated’ ways.) Krueger, Bhagwati, and Balassa emphasised the effect of politically decided policies on currency overvaluation and the negative effective rates of protection for exporters. So, in their argument, political economy considerations were directly related to the inward orientation and anti-export bias of ISI.

 

But I won’t elaborate because I’m going to write a post about ISI soon.


Postscript: Some theoretical rationales for infant industry protection

{ This is for readers who may not know about these rationales. }

Contrary to popular stereotype, mainstream economics does not a priori rule out things variously described as “infant industry protection” or “industrial policy”. Abundant theoretical rationales for such things can be found at the intersection of endogenous growth theory, strategic trade theory, and new economic geography.

Everyone acknowledges that the spillover benefits (positive externalities) of technology are very large. If your country has a computer industry, this implies formal as well as tacit knowledge that’s embodied in people and organisations whose benefits spill outside that specific industry. So even the service sector benefits from the country’s ability to create PayPal or Just in Time inventory control.

When you have positive externalities, the social return to producing some good exceeds the private return, and that’s a good thing for society, but it is a market failure which keeps the rate of investment lower than it could be. Private investors anticipate there is excess return which cannot be appropriated and they will undersupply that good.

( Or so the story goes. I think that theory must be qualified by the Industrial Revolution. Many key inventors like Hargreaves and Crompton didn’t seem to care, even ex ante, about capturing any chunk of the social value of their work. They more or less gave away their inventions. Other inventors like Arkwright were plagued by piracy and mired in patent infringement cases at court. But that’s for another day. )

Hence, the question naturally arises whether the state should promote technology which might be expected to generate knowledge spillovers. Most people don’t object to subsidies for research and development. But many people do object if the promotion entails restrictions on international trade or direct subsidies to businesses.

In the classic infant industry argument, a domestic industry is prevented by established, more efficient foreign producers with first-mover advantage from getting off the ground during the initial high-cost phase of production. Since you still need your factory, machinery, and work force in order to produce a single unit of output, the high fixed cost of merely starting up production can only be amortised by increasing the scale of production.

Besides, even if the world’s cutting-edge technology and best practices drop from the sky into your lap, these must still be adapted to local conditions. You learn to reduce costs and produce at frontier efficiency levels only by actually doing the production. This practical experience generates the techniques and improvements to production methods which are not easily codified and must reside in the brains of people — what in the Industrial Revolution literature is often called “tacit knowledge”, “micro-inventions”, and “local learning”.

So firms in the start-up industry need “breathing room” to get up to speed. Otherwise they might get wiped out by foreign competitors before they’ve had the chance to mature. In this theoretical setup, temporary protection is justified since there’s latent comparative advantage to draw out.
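The “breathing room” logic can be sketched with a textbook learning curve — unit cost falling by a fixed percentage with each doubling of cumulative output. This is a hypothetical parameterisation, not an estimate for any actual industry:

```python
# Illustrative infant-industry arithmetic (all parameters hypothetical).
# Learning curve: cost(Q) = c0 * Q^(-b), where a 20% learning rate means
# unit cost falls 20% with each doubling of cumulative output Q.

import math

def unit_cost(cum_output, c0=100.0, learning_rate=0.2):
    """Unit cost after a given cumulative output."""
    b = -math.log2(1 - learning_rate)
    return c0 * max(cum_output, 1) ** (-b)

foreign_price = 60.0   # price set by the established foreign incumbents
cum_q = 1
doublings = 0
while unit_cost(cum_q) > foreign_price:
    cum_q *= 2         # each spell of (protected) production doubles experience
    doublings += 1

print(f"Doublings of cumulative output needed to undercut {foreign_price}: {doublings}")
```

At these made-up parameters the entrant starts at 100 against a world price of 60, so without shelter it loses money on every unit and never accumulates the experience that would eventually make it competitive; with shelter, three doublings of cumulative output suffice.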

The debate about antebellum US tariffs is generally not about whether the New England cotton industry could have survived the full force of British competition at all. Rather, the debate is between those who think it could have survived tariff-free after 1830 and those who say only after 1850 (cf. Irwin & Temin 2001 versus Harley 1992, 2001). David (1973) argued that learning-by-doing effects were an important part of how and when New England became competitive, as production costs declined even without significant changes in machinery.

A more modern form of the infant industry argument incorporates the so-called Marshallian externalities, or agglomeration economies. If learning-by-doing is a process internal to the firm, then the clustering of many firms — a large industry — generates ‘external’ benefits from collective learning. (And agglomeration economies figure prominently in the Juhasz paper.)

Firms locate nearest their largest markets, their labour force, and their suppliers, but they also cluster amongst other firms like themselves. The more of them there are in an industry, the more knowledge is generated which cannot be contained within a single firm. Knowledge spills over, as skilled workers move between firms or set up their own shops; ideas and techniques are stolen and pirated; and firms simply demonstrate to one another what’s possible to do. It’s not only things like inventions which diffuse in the network of firms, but also all that practical experience and tacit knowledge.

And then there are linkages aka “pecuniary externalities”. If the industry is bigger (for example, due to tariffs or subsidies), there is more specialisation in different value-added stages of production. During the Industrial Revolution, you could say a bigger cotton industry made it more likely to get more specialist machine makers, more specialist textile printers, more specialist weavers, etc. And all this lowers unit costs.

There are many other possible rationales. I haven’t mentioned the Big Push. And I’ve barely touched on the implications of scale. The post-bellum USA, Wilhelmine Germany, post-war Japan, and South Korea all built cartelised, highly capital-intensive industries like steel and autos. They used a protected domestic market to help pay for the massive fixed costs of starting up production through high prices and monopoly rents, and when unit costs fell enough they started exporting. For a small-ish country like South Korea, this strategy could not have succeeded without exporting to another country like the USA that was not very picky about trade reciprocity… (Noah Smith asked me on Twitter who might have modelled this, and I think Krugman did in the early 1980s, but I have to check.)

By the way, I agree industrial policy should also pass the welfare test. But when it comes to development in poor countries, I don’t put much stock in static welfare considerations — people tend to be hyperbolic discounters and industrial policy should be seen as forced savings/investment for the really really really long haul. Their revealed preferences suggest they don’t care all that much about their descendants anyway! Hell, according to Banerjee and Duflo, even physically stunted and malnourished people often blow extra income on expensive festivals! It’s normal to want fun in the dreary here and now… So I prefer to examine industrial policy arguments for developing countries on the grounds of effectiveness of intervention as well as institutional & state capacity problems. And human capital. That’s quite neglected when it comes to industrial policy debates in LDCs. In the industrial policy literature connected with trade & development, there’s just way too much emphasis on skills & knowledge externalities generated by promoted industries, as opposed to the knowledge and skill created by formal schooling.


Filed under: Infant industry argument, international trade, protectionism Tagged: cotton, Infant industry argument, Napoleonic blockade, Reka Juhasz

Tariff Protection of British cotton 1774-1820s


British Tariff Protection after 1774: Competition, Innovation, & Misallocation, plus a note on Weaving

This is an addendum to a post about the Calico Acts, which had prohibited within Britain the consumption of cotton cloths both foreign and domestic. But even after their repeal in 1774, Indian cloths entering the British market continued to face stiff import duties, ranging from 27-59% ad valorem in 1803 to 71-85% in 1813.

Although the Calico Acts are frequently discussed by Western scholars, the protection of the British cotton industry that continued until the 1820s is something only Indian scholars bother to mention.

In the other post, I argued that the Calico Acts probably hampered and delayed the rise of the British cotton industry. But is it possible that tariff protection was necessary for this later phase of British cotton after 1774?

ray_p884.jpg

[Source: Ray 2009]

It’s universally acknowledged that British industry became globally competitive in all varieties of cotton cloth only some time in the 1820s [Clingingsmith & Williamson 2008; Broadberry & Gupta 2009]. Indian labour was cheap, and until the 1820s, British costs were still higher on average because automation in the industry remained quite partial.

The mechanisation of carding and spinning was already in full swing by the 1780s, but the self-acting mule (for spinning) arrived only in the 1820s. Printing and the other linked stages of cloth production were still in the early phases of mechanisation. Weaving continued to be dominated by hand processes.

During those 60 years of technological evolution and learning-by-doing, the new cotton factories became increasingly more capital-, scale-, and energy-intensive.

So it’s plausible, a priori, that protection gave the British cotton industry in 1770s-1820s the necessary “breathing space” to develop its competitiveness.

On the other hand, the infant-industry argument is typically about protecting a new industry in a technological follower or laggard country against competition from a technological leader possessing a first-mover advantage in increasing-returns-to-scale production. In this case, Britain was the clear technological leader over India (and everyone else). Mechanisation and automation in British industry were just not fast enough to fully overcome India’s labour cost advantage before ~1830.

Then it’s also plausible, a priori, that tariffs on Indian cottons in the late 18th and early 19th centuries helped slow the development and diffusion of cost-reducing technology in the British cotton industry.

This is suggested by a striking feature of the cotton industry during the industrial revolution.

Firm Size & Misallocation

Different ‘stages’ of technology in each of spinning, weaving, and printing coexisted for decades in Britain during the Industrial Revolution.

In spinning, yarn was spun with water-powered Arkwright frames alongside hand-powered mules and jennies. As Chapman (1970a) put it:

…the structure of the early cotton industry in Lancashire and the North West showed unusual polarisation: a small number of merchant-manufacturers (50 or 60 in 1795) eagerly seized on Arkwright’s lucrative system, while a much larger number of small men struggled with a carding engine and a few jennies (or a mule or two) to maintain their place in the industry, and expanded their investment only by the closest devotion to business and the most ascetic living habits.

Berg covers similar ground:

“In 1780 Britain had no more than 15 or 20 cotton mills, and seven years later there were 145 Arkwright-type mills. Before the end of the eighteenth century there were 900 cotton-spinning factories. These ranged, however, from 300 Arkwright-type factories—purpose-built buildings of several stories employing over fifty workers—through 600 ‘factories’ using jennies and mules, some of which were little more than sheds or workshops employing around a dozen workers. The industry’s capital in the late 1780s was still predominantly spread over hand or domestic processes, especially weaving.” (p32)

“There survived, however, a number of little workshops with a carding engine, a few spinning jennies, and hand-, horse-, or rudimentary water-power mechanisms. The room-letting or floor-letting system used in Manchester and Stockport was common, and one mill in Stockport had twenty-seven masters employing 250 people in total. These were often small businesses which later grew much bigger. Just as often, however, they were second or third mills owned by risk-spreading and diversifying firms. A firm maintaining several mills of varying scale could experiment with new techniques either in its larger factory or in one or two of its smaller mills. Either way, it would avoid the risk of losing everything. Overall averages confirm this picture. As late as 1835, Ure calculated that the average cotton mill employed 175.5 people.” (p201)

The above refer to factories/mills, not firms, and some large firms owned several factories to spread the risk or try different methods at different locations.

However, Chapman (1970b), exploiting insurance records for cotton firms in the 1790s, reveals a large dispersion in firm size. Those firms valued at more than £5000 numbered two dozen or so, with the largest being vertically integrated operations like the Peels that started from raw cotton and finished with printed cloth. But there is a much larger number valued at a couple of thousand pounds and below, with a dozen or so in Stockport (Manchester) valued at a mere £100.

It’s possible to take all this as evidence that the minimum efficient scale and therefore the barriers to entry were still quite low in cotton. And indeed there was a rapid turnover of firms in the period 1785-1840 (Harley 2012). But it’s also possible that the tariff lowered the MES by raising the price of cloth and keeping less productive firms in business. If that’s true, and if we assume that scale is an indicator of firm-level productivity, then this large variation in firm size and technological adoption is also evidence of misallocation — too much labour and capital was tied up in relatively lower-productivity firms.
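The misallocation point is just an accounting identity over the firm-size distribution. A stylised sketch, with productivity and employment figures invented purely for illustration (not drawn from Chapman’s insurance records):

```python
# Stylised misallocation arithmetic (figures invented for illustration).
# A tariff raises the output price, letting a low-productivity firm survive;
# aggregate productivity is the employment-weighted average across firms.

firms = [
    # (output per worker, workers)
    (10.0, 100),   # large Arkwright-type mill
    (4.0,  60),    # small jenny workshop, viable only behind the tariff
]

def aggregate_productivity(firm_list):
    total_output = sum(p * n for p, n in firm_list)
    total_workers = sum(n for _, n in firm_list)
    return total_output / total_workers

with_tariff = aggregate_productivity(firms)       # both firms operate
free_trade = aggregate_productivity(firms[:1])    # small firm exits
print(with_tariff, free_trade)
```

In this toy economy, keeping the marginal workshop alive drags employment-weighted productivity down from 10.0 to 7.75: the labour tied up in the low-productivity firm is the misallocation.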

The Slow Mechanisation of Weaving

The mechanisation of weaving after 1790 was perhaps even slower than spinning in 1770-90. Although Cartwright is credited with inventing a power loom circa 1785, the first commercially successful one was patented in 1803 by Horrocks and even then it was decades before power looms were widely adopted.

Between the 1790s and the 1830s, the hand loom sector was not just bigger than the power loom sector; it actually expanded in size. The lag in mechanisation between spinning and weaving meant there was a lot of cheap yarn, leading to the weaving boom of the 1780s-1815. Cotton hand weavers quadrupled in number (Brown 1990) as farmers took up weaving full-time and, for the highest quality cloths, the wage more than tripled (Allen 2016).

hills_powerlooms

[Source: Hills (1989), pg. 117. In 1823, there were 240,000 to 250,000 hand looms according to Ray (2009).]

Although a power loom was about three times as productive as a hand loom, hand weavers were needed for the finer printing cloths as machines could not yet reproduce the quality of the hand looms. So the small mechanised sector focused on the coarser, cheaper cloths, often for export markets, whilst the larger hand loom sector wove the finer fabrics that most directly competed with Indian goods.

Traditionally, the slow mechanisation of weaving is attributed to exogenous, purely technological issues in power loom development (Rose, pp 45-7). But the late protectionism in the British cotton industry, to the best of my knowledge, has never been mentioned as a possible factor.

Whatever the case may be, the boom in hand-weaving was certainly helped along by the tariffs still restricting Indian textiles in the early 19th century. These protected the hand weavers and kept the prices of finer fabrics higher than they would have been under free trade. This implies more resources were allocated to producing finer fabrics than to coarser ones.

As is well known, the steam engine also diffused more slowly than widely believed, and to the extent that the power looms were driven by steam, protective tariffs may have delayed the diffusion of both.

crafts-2004

[Source: Crafts 1994]

In that scenario, the tariffs on Indian goods should be seen in the same light as the Corn Laws, which certainly kept more resources tied up in agriculture. Then British trade policy was de facto Luddite, an unintentional complement to the machine breakers.

Under free trade, cheaper Indian cloths of the finer grades might have wiped out British hand loom weaving in the worst case scenario, but certainly the yarn sector would have more than survived. British machine-spun yarn never faced any obstacle to its development from any foreign competitor, because Britain imported almost no cotton yarn (Hoffmann pp 255-7). In the late 18th century, Britain’s mechanised spinning sector produced much more yarn than could be absorbed by its weavers, and the surplus was exported (including to India). In fact, British yarn imported into India may have helped temporarily retard Britain’s competitive edge over India in cloth. [Ray 2009]

But does it matter that Britain might not have had a large hand-weaving sector under free trade? One might say: who cares if hand loom weavers had been wiped out in 1800? They eventually got wiped out anyway, just more slowly.

However, Allen (2016) argues, using his usual directed-technical-change perspective, that the high wages for weavers induced by the boom were absolutely necessary to the invention and adoption of power looms. But Allen also has nothing to say about the effects of competition on incentives to originate innovations or adopt them.

Competition & Innovation

Economic theory is generally ambiguous about the relationship between competition and innovation.

On the one hand, everyone agrees that firms need some degree of monopoly rents (i.e., profits) as an incentive to finance the high fixed costs of research & development. Either subsidies, tariffs, or just a large market can raise the return to investing in an industry. So “too much competition”, which reduces profits, can be bad for innovation.

On the other hand, abnormal profit levels themselves might encourage new entrants into the industry and drive down profits. [See Grossman & Helpman: sections 9.2, 9.3; also Baldwin (1969) for the ‘appropriability’ problem].

The second effect may predominate if the barriers to entry are very low — and, as already seen above, they appear to have been low for the cotton industry during the Industrial Revolution. We are talking about cotton, not steel.

In neo-Schumpeterian growth theories (Aghion, Akcigit, & Howitt 2014), competition and innovation have an inverted-U shaped relationship.

aghion1

(The X-axis is the ratio of marginal cost to price. Left = less competitive; right = more competitive. The upper curve shows firms whose productivity levels are pretty similar; the lower curve is all firms, where productivity levels are diverse.)

Although it’s based on modern evidence from British firms, the inverted-U shape indeed suggests that too little competition discourages innovation, but so does too much competition. There is some optimal level in-between the two extremes.
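The shape is easy to mimic with a toy functional form. To be clear, this is not the Aghion et al. model itself, just an illustrative quadratic with the same qualitative property — innovation peaking at intermediate competition:

```python
# Toy inverted-U: innovation effort as a function of competition c in [0, 1].
# NOT the Aghion-Akcigit-Howitt model; merely an illustrative quadratic
# sharing the qualitative shape (a peak at intermediate competition).

def innovation(c):
    return 4 * c * (1 - c)   # maximised at c = 0.5

low, mid, high = innovation(0.1), innovation(0.5), innovation(0.9)
print(low, mid, high)
```

Both very weak competition (c = 0.1) and very intense competition (c = 0.9) yield the same depressed innovation level here, with the maximum in between.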

The graph below shows the impact of trade liberalisation on modern British firms (coincidentally!), with the X-axis representing the entry of foreign firms (i.e., more competition):

aghion2

Although firms whose productivity is farther away from the frontier fall behind and are more likely to exit the market altogether, those closer to the frontier become even more efficient. (Aghion & Burgess present similar evidence from trade liberalisation for Britain and India.)

§ § § § §

We might speculate that a trade ‘liberalisation’ in Britain in the late 18th or early 19th century would have weeded out the smaller, less efficient firms in the cotton industry — those struggling with “a carding engine and a few jennies (or a mule or two) to maintain their place in the industry”.

At first the most efficient firms might have been forced to specialise in the coarser fabrics where India was no longer competitive. With more resources allocated to production and mechanisation at the lower end of the market, all the happy learning-by-doing and agglomeration economies might have generated new knowledge about machinery for weaving the finer fabrics. There certainly would have been an incentive to innovate in order to conquer the last redoubt of India’s market share.

So the net effect of the post-1774 tariffs might have been slower technological development and diffusion.


Filed under: cotton, industrial policy, Industrial Revolution, Infant industry argument, international trade, protectionism, trade & development

The Calico Acts: Was British cotton made possible by infant industry protection from Indian competition?


Many “global historians” argue that the British cotton industry was the product of (unintentional) infant industry protection from Indian competition in the 18th century. The various Calico Acts created an import-substitution industry by banning Indian cloths and reserving the home market for British producers. This supposedly gave them the freedom to invent and adopt the machines that led to the Industrial Revolution.

To the best of my knowledge, economic historians have never seriously examined this issue, perhaps because the necessary data are lacking or remain unearthed. Nonetheless there are sound historical reasons for doubting the presumption that “protection allowed British goods to become competitive”.

Warning: This is a tedious post which gets into some detail about the British textile industry in the 18th century. A must-skip, if you ask me. Which is why I provide this handy summary:

  1. The Calico Act of 1721 (which was intended to protect the wool and silk industries) actually banned most varieties of pure-cotton cloths in general, not just Indian.
  2. Before the era of mechanisation, British ‘cotton’ was overwhelmingly cotton-linen, a limitation of British technology (in the economic sense).
  3. Mainstream economic theory supplies many justifications for interventionist trade policy to promote innovation. But the standard rationales simply do not apply to constant returns-to-scale activities such as handicraft cottage industry.
  4. Lancashire would have survived competition with Indian cloths in an unprotected home market.
  5. British machine-spun yarn never faced any direct foreign competitor, since Britain barely imported cotton yarn in the 18th century. The domestic output of yarn was affected by foreign competition only to the extent that it was turned into printed cloth.
  6. But there were many other products besides British imitations of Indian cloth which used cotton yarn as a major input, and their role in the mechanisation of yarn production is overshadowed by a selective, Whiggish genealogy which overemphasises the calico branch.
  7. IF, as so many argue, competition in the export markets was an important stimulus to inventions in cotton, then the home market could have served just as well and the only reason overseas became so important is that British firms were denied a home market for all-cotton cloths by the Calico Acts!
  8. Therefore, it’s entirely plausible — not demonstrated — that the Calico Acts functioned as a Luddite policy which delayed the mechanisation of textile production by decades.

This post elaborates on the above points, covering the period up to 1774, when the Calico Act was repealed.


The Calico Acts

In the 17th century, the various East India Companies started importing cotton and silk fabrics from Asia which were printed with beautiful and intricate patterns. The fine, hand-painted cotton cloth from India was called a ‘calico’, after Calicut, although we might call it chintz today.

2006AJ9958_jpg_l

[Image source]

The ensuing “calico craze” shifted some demand away from Europe’s well-established wool and silk industries, which screamed for relief from their governments. In Britain, agitation over the course of several decades induced Parliament to enact a series of protective measures culminating in 1721 with an outright ban on most varieties of printed cottons, whether foreign or domestic. The target was not merely Asian goods but potential substitutes for silk and worsted.

I reiterate: the second so-called Calico Act of 1721 (7 Geo. I) prohibited the “wearing or using in apparel, household stuff, or furniture, … any stuff made of cotton or mixed therewith, which shall be printed or painted with any colour or colours, or any calico chequered or striped…” (There were a few exemptions, the most important being the superluxurious muslins from India. See the statute.)

The gory, tangled details of the politics behind the Calico Acts are brilliantly narrated in Griffiths, Hunt, & O’Brien (1991), but here are some relevant bits:

  • Exports: You could still manufacture the banned cloths for export to foreign and colonial markets, or print imported white cloths in London but only for re-export.
  • Linen: There was an ambiguous loophole for cotton-linen blends, which was made explicit in 1736, and these became the material of choice for printing cloths in Britain.
  • 1774: The ban on printed cottons was repealed.
  • After 1774, and until the 1820s, Indian imports continued to face stiff duties, ranging from 27-59% ad valorem in 1803 to 71-85% in 1813 [Ray 2009]. But this is something usually only Indian historians bother to mention, and I have treated this issue in a separate post.

The rise of British cotton as industrial policy?

It’s peculiar to argue that a product ban should promote the domestic manufacturing of that banned product.

Yet surprisingly many do credit the Calico Acts with {accidentally} stimulating cotton as an import substitution industry in Britain. Parliament may have intended to benefit wool and silk, but cotton ended up the beneficiary nonetheless. This idea has been around for more than a century, but is newly popular with authors who stress the role of industrial policy, political contingency, and “global connections” behind the Industrial Revolution, such as Ashworth, Beckert, Chang, Marks, Moe, Vries, countless Indian writers, etc. To be fair, most of these mention the idea largely in passing, as a throwaway en route to their larger point. Some of them, however — especially Beckert and Chang — garble the story and do not seem to realise the ban applied to cottons in general.

But Parliamentary “industrial policy” becomes a key element of the narrative about why British cotton emerged as a globally dominant industry in three works:

Both Inikori and Parthasarathi stress the combination of domestic protection and overseas competition. The captive home market gave British manufacturers of cotton-linen ersatz the higher rents and the freedom to experiment at home, facilitating the “development of skills, technology and markets” [Parthasarathi]. At the same time, there was rising demand in the overseas markets secured by mercantilism and imperialism for fabrics of the types made in India and marketed by the East India Companies. Thus, British firms engaged in “protracted competition” abroad with Indian textiles which “stretched their ingenuity to be equal to the fight” and “induced them to adopt cost-reducing and quality-raising innovations” [Inikori].

Inikori leans heavily on his reading of the development econ literature from the 1970s, especially Balassa. The way he sees it, 18th century England practised a policy of import-substitution industrialisation, kind of like … Latin America before the 1980s. But unlike the inward-looking ISI, England managed to avoid industrial stagnation through an “export push”, kind of like … East Asia.

In keeping with historians of consumption like Maxine Berg and Beverly Lemire, Parthasarathi also puts tremendous emphasis on the development of the domestic capability for finishing and printing of cloth in emulation of the intricate patterns and bright colours of Indian cloths. Cotton took to colours better than any other material at the time, and in Europe’s first consumer age with a taste for expanding product varieties, a large portion of the value added of cloth was their look and finish. Since Indian artisans were more advanced in dyes and mordants, the British textile printing industry faced a steep learning curve which might have required protection.

Complementary to the above is the argument in the paper co-authored by the venerable British historian Patrick O’Brien (1991). The “legislative foundations” were just right in Britain, neither as restrictive as the total ban on printed cottons and linens in France [Rouen excepted]; nor as permissive as the free-for-all in the Dutch Republic. So Parliament, by reserving the home market, made the size of the British proto-industry bigger than it would have been otherwise. And “technological innovation and reorganization become more probable once industries attain critical scales of production and experience”.

O’Brien et al. do not elaborate much, but at the simplest level, their argument could be taken to mean the “R & D sector” faced a larger domestic market for inventions and therefore more profit opportunities for inventors. It could also imply the size of the domestic industry generated more collective learning, economies of scale, and agglomeration economies.

All these propositions seem reasonable at first sight, but there are some basic problems.

The Linen Caveat

Even before the 1721 legislation, Britain (and Europe in general) produced almost no pure-cotton cloth. Cotton had been primarily a material used to blend with linen in a traditional class of heavy, often textured fabrics called fustians, which might be considered the common ancestor to denim, velvet, corduroy, moleskin, etc. (This term is often confused with anything that’s half-cotton, half-linen, but in this post ‘fustian’ refers to the heavy article.)

Until the 1770s, lightweight British ‘cottons’ made in imitation of Indian cloth, with names like calicos, stripes, and checks, would also be cotton-linen even in the export markets. The reason was a technological constraint: European hand-spinners could spin only cotton weft at manageable cost, not warp; and ‘cotton’ weavers substituted linen thread for the warp.

Warp_and_weft

Hargreaves’s spinning jenny, though considered the first of the canonical cotton-related inventions of the Industrial Revolution, could only make the loosely spun weft. It took Arkwright’s water frame to produce the tightly wound warp with cotton. Only then could an all-cotton cloth be woven in Britain at less-than-astronomical cost.

None of this is esoteric knowledge. Although a reader will never learn it from Beckert’s oppressively information-dense Empire of Cotton, British reliance on cotton-linen blends prior to mechanisation is widely noted in the literature, e.g., Parthasarathi himself; Riello; Harley (1998); and Wadsworth & Mann, the cited-by-everybody proto-industrial history of Lancashire. Also, Styles (2016) reports a microscopic fibre content analysis of Lancashire swatches from 1759-60, confirming that ‘cottons’ for ordinary people were indeed linen blends. {For the Styles paper, sombrero tip to Anton Howes, who could not anticipate the dastardly uses I would put it to.}

So, on the face of it, the Calico Acts reinforced the preexisting ‘backward’ technology of cotton-linen. But there’s even more.

    • Printed cotton-linen was subject to discriminatory excise taxes which could be as high as 25% of the price for the cheapest cloths. [Wadsworth & Mann, pg 140]
    • Although Lancashire depended on imported linen yarn to produce substitutes for Indian cloths, Parliament made that key input more scarce and expensive by subsidising the weaving of linen cloth in Ireland and Scotland. [O’Brien et al. 2008]

White cotton cloth was never banned, but there was hardly any market for it, since British preference in shirts, shifts, bedsheets, and undergarments did not switch from linen to cotton until the 19th century. More fragile cotton had to become cheap enough before overcoming linen’s chief advantage: you could beat the crap out of it in the ‘brutal’ washing methods of the time. [Styles 2009]

A common-sensical first approximation is therefore that the effect of Parliamentary legislation was to discourage British cotton which emerged in spite of legal hindrances.

Economic historians & industrial policy

But that doesn’t stop Parthasarathi from offering his 2 cents (or 2 paise in his case) on the anti-empirical dogmatism of economists:

“It is a bedrock conviction of mainstream economics that restrictions on trade are harmful and reduce economic efficiency and social welfare. This belief has led many economic historians to downplay the role of protection in the development of British cotton manufacturing.”

“Robert Allen’s British Industrial Revolution in Global Perspective, for instance, contains no discussion of restrictions on Indian cloth and Joel Mokyr’s Enlightened Economy dismisses protection on the grounds that it was rent-seeking rather than value-creating. The historical record, however, does not support these views, and the protectionist response to the Indian competitive challenge facilitated the development of skills, technology and markets, which were a precondition for the growth and expansion of the British cotton industry”.

Given the smugness of the last sentence I would normally start spitting venom, especially since Parthasarathi’s book supplies a mere narrative description which he falsely equates with causal analysis. But I calmly point out that economic historians actually keep an open mind on the matter.

Allen himself — indicted in the passage above — has shown sympathy for infant industry arguments in several publications (e.g., 2011, 2014). Parthasarathi might also have perused the research on the “growth tariff paradox” of the late 19th century. Or the debate on the role of antebellum tariffs in US economic development, much of which focuses on cotton! Not to mention, a rich body of theoretical literature on industrial policy and “dynamic comparative advantage” has existed for the past 25 years at the intersection of endogenous growth theory, strategic trade theory, and economic geography. (I describe the various classical and modern arguments for infant industry protection here.)

But do the standard rationales for protection apply under 18th century British conditions?

Probably not.

Most of the standard theoretical rationales are inapplicable and anachronistic for traditional handicraft manufacturing. The size of the cotton-linen proto-industry was not important. It generated little technical learning which would spill over to the future cotton industry. And the market for cotton-related inventions was quite large even without the industry making substitutes for Indian cloth.

(a) No scale in the cotton-linen proto-industry

Under the putting-out system for textiles, terms like ‘industry’ or ‘firms’ can be misleading, because the ‘manufacturer’ was basically an organiser of handicraft production taking place inside rural hovels.

[Image: spinning wheel, Lofthus, Norway, 1888]

In this system, capital costs and the minimum efficient scale were laughably low. According to Muldrew (2012), a spinning wheel cost only about a shilling, equivalent to 2-3 days’ worth of spinners’ earnings. Even the spinning jenny, conventionally considered a major step in mechanisation, was still just a domestic hand-powered implement which happened to make more than one spindle of yarn at a time. Although in Allen’s analysis the jenny cost ~70 shillings, more realistically it may have cost only 16-20 shillings. Compare that with a thousand pounds for the early Arkwright-style spinning factories in the 1790s.

By the same token, putting-out was also a constant returns to scale activity. There were no economies of scale to exploit, which is to say, the overall size of production in proto-industry, per se, did not matter to unit costs or productivity. This also rules out the “East Asian” or “late 19th century steel” model whereby firms exploited scale in the protected domestic market and turned to exports after their average cost was sufficiently reduced.

Likewise, in proto-industry, barriers to entry were low and there was little first-mover advantage. By implication, even if Indian competition had wiped out British cotton-linen cloth (say) in the 1730s, you could easily reenter the industry at a later date (say the 1750s, when war and unrest in India reduced the global supply of cloth).

(b) Lancashire would have survived open competition with India in the home market.

It might have had a smaller market share in printed cloths, but linen or cotton-linen knock-offs would have coexisted with higher-quality Indian cloths as differentiated products in a segmented market, kind of like the iPhone and cheap smartphones. British proto-industry would have strategically adjusted the fibre content of their cloths according to market conditions. This is not some idle, far-fetched speculation, as it actually happened in West Africa and the mainland American colonies in the 18th century, where British cloths competed with Indian textiles. It’s also suggested by the (admittedly patchy) price evidence on the British side, and the better documented trend of rising purchase prices of Indian cloth by the British East India Company. Riello also suggests, “in the period of the prohibition of cotton textiles, high prices – rather than legal zeal – might have hampered the use of Indian chintzes and calicoes in England”. (I elaborate in the comments section.)

(c) Learning & source of knowledge externalities

There was little learning-by-doing in the cotton-linen proto-industry that especially mattered to the mechanisation of cotton production. There were product innovations, and there were (probably) gains from fibre-specific learning and from the division of labour in hand processes. But cotton had already been used as a blending input in traditional products. So it’s not clear that the extra activity specifically promoted by the Calico Acts — i.e., more hand-spinning of cotton weft and hand-weaving with linen warp — was terribly consequential for mechanisation.

Simply as a matter of historical fact, many of the advances in mechanising cotton production were knowledge spillovers from industries whose products were unrelated to the import substitution industry for Indian cottons. Things like steam engines as well as what Mokyr (1990, 2012) called Britain’s comparative advantage in mechanical skills are well known. But there are many less well known transfers of knowledge to the cotton industry from millwrights, metallurgy, watch-making, and other textiles such as wool and silk. I put the details in the comments section, but here’s a neat ‘map’ of the externalities from Chapman (1992):

[Diagram: Chapman (1992), map of knowledge spillovers into the cotton industry]

 

{ Textile printing, which was indeed capital-intensive and required substantial learning-by-doing, was an exception. Again I relegate this to the comments section.}

Competing versus non-competing sectors of British ‘cotton’

During most of the 18th century, branches of the British ‘cotton’ industry which did not compete with Indian goods were much larger than the ‘competing’ sector. The non-competing sector grew rapidly until the 1770s and demanded more and more cotton yarn. These diverse uses of cotton fibre and yarn afforded ample profit opportunities and incentives for the invention and adoption of spinning machines.

(a) How big was the ‘cotton’ proto-industry anyway?

In the early 1980s, Harley (1982) and Crafts (1983) famously published their downward revision of British GDP growth in the ‘critical’ 1770-1840 period. The most important element of the ensuing debate was about the size of the cotton industry. (See Cuenca-Esteban 1994 & Harley 1998, and a literature’s worth of replies in-between.)

According to Crafts, as late as 1770, cotton as a whole represented ~2.6% of value added in British industry. Silk was almost double the size; and linen, more than three times. But cotton was completely dwarfed by wool, which was 12 times bigger. Thirty years later, cotton and wool would be neck-and-neck.

One thing was highlighted in the debate: there is surprisingly little hard quantitative information about the cotton industry even during the Industrial Revolution. The period before the 1770s might as well be a black hole. For example, we can infer the cotton industry’s aggregate gross output only very crudely, by making assumptions about how the imported raw cotton input was transformed into output in 1698-1770s.

That may be adequate for computing GDP estimates, but it doesn’t help with micro issues — such as, how were those fibre imports distributed amongst the diverse uses of cotton? We can’t precisely answer this question, even though there was a dizzying variety of cotton-containing products with quite different values added.

We do have export data on various product categories of the cotton industry, such as those checks whose importance as an export item to West Africa and the Caribbean is touted by Inikori and several other scholars. But we can really only guess about the production of checks for domestic consumption.

(b) The diverse uses of cotton in proto-industry

But we do know one thing: the non-competing branches of the British ‘cotton’ industry prior to mechanisation were larger than the ‘competing’ sector. Besides the part-linen checks, stripes, and imitation-calicoes that served as Indian substitutes, Britain’s other cotton-consuming sub-sectors included:

      • fustians — a catch-all term for heavy fabrics associated with Manchester, including velvet, velveteen, corduroy, ticking (upholstery fabric), etc.
      • ‘smallwares’ (tapes and ribbons for garters, etc., also associated with Manchester)
      • hosiery & lace (associated with the Midlands)

We don’t know exactly how big each of these ‘non-competing’ sectors was, or which sector was growing at what rate in the mid-18th century. I’m sure it’s possible to work like a donkey in the excise & customs archives and probate inventories to gather the necessary data for estimating the proportions. But to the best of my knowledge no one has done it.

But we can still deduce from published data on excise duties that the ‘competing’ sector of cotton as a whole accounted for a maximum of 17% of the raw cotton imported in 1765 — but more likely 10%. (The details of the back-of-the-envelope calculation are relegated to comments.)

10-17% is not trivial, especially if printed textiles had faster growth rates or disproportionately high value added. But the other ‘cotton’ branches like fustians and hosiery also had high value added. For the years after the Napoleonic Wars, Harley (1998) estimated that the hosiery sector accounted for 7-10% of cotton yarn by weight but a third of the value added in spinning. Earlier in the 18th century, hosiery’s share might conceivably have been bigger, since yarn and cloth output expanded rapidly after 1790.
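The shape of such a back-of-the-envelope deduction can be sketched in a few lines of arithmetic. Every figure below is a hypothetical placeholder, not the actual 1765 excise or import data (which this post relegates to the comments section); the point is only the structure of the calculation — excise records give yards of printed cloth, which are converted into pounds of cotton and compared with total raw cotton imports:

```python
# A sketch of the back-of-the-envelope method. ALL numbers are hypothetical
# placeholders, not the actual 1765 figures.

yards_printed = 4_000_000            # excise-recorded yards of printed cloth (hypothetical)
lbs_cloth_per_yard = 0.25            # assumed weight of cloth per yard (hypothetical)
cotton_share_of_cloth = 0.5          # printed cloth was part-linen: assume half cotton by weight
wastage = 0.10                       # assumed loss in converting raw cotton into yarn and cloth

# Raw cotton absorbed by printed textiles, grossed up for wastage
lbs_raw_cotton_used = yards_printed * lbs_cloth_per_yard * cotton_share_of_cloth / (1 - wastage)

raw_cotton_imports_lbs = 4_000_000   # hypothetical total raw cotton imports
share = lbs_raw_cotton_used / raw_cotton_imports_lbs
print(f"printed textiles' share of raw cotton imports: {share:.0%}")
```

With these placeholder inputs the sketch yields a share of about 14%, in the same ballpark as the 10-17% range quoted above; plugging in the real excise yardage and conversion factors is what the comments section does.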

And fustians, smallwares, and hosiery were technologically dynamic sectors with a spurt of product innovation in the mid-18th century (Griffiths et al. 1992). Hosiery had the Derby Rib as well as the stocking frame with rotary motion. Smallwares had the beginnings of factory organisation with the swivel loom.

J. K. Thomson describes fustians in the mid- to late 18th century:

The industry in Lancashire was in full expansion during the years of [Lewis Paul’s] efforts [to invent a spinning machine], with progress being logged in two spheres in particular, that of checks and cottons for Africa, whose exports multiplied by six between 1752 and 1763, and that of fustians [my emphasis] in which a steady process of product innovation in patterns, finishes and fibre mixtures had been occurring. Milestones here were the introduction of the drawboy loom [a precursor of the Jacquard loom — PE], facilitating complex weaving patterns and the development of pure cotton thicksets, introduced before 1740; cotton velvets, introduced soon after this; velverets, 1763; and velveteens, ‘incomparably the most important development in the middle of the century’ (Mann), patented in 1776. The tendency has been to give prominence to the meteoric expansion in cotton stripes and checks among these developments but this may be mistaken, firstly insofar as their growth led to few technical improvements in spinning … and secondly in that this growth was reversed from 1763. The developments in fustians, in contrast…did depend on significant technical improvements, in spinning as well as weaving”. [Thomson in Prados de la Escosura, ed. 2004]

The so-called “Manchester velvets” — as far from anything to do with India as possible — became a prized item in Europe after the 1750s. These were hot enough that a Jacobite spy for the French, John Holker, supposedly risked his life to secret its manufacturing methods out of England [Rose pp 190-1].

The importance of the non-competing sectors is also suggested by the distribution of Arkwright-type (the first genuinely automated, water-powered) spinning mills in 1787-8:

[Table: geographical distribution of Arkwright-type spinning mills, 1787-8]

[Source: Chapman (1981); also see map of the above data.]

Before cotton firms agglomerated in Lancashire, the availability of water power was a major determinant of factory location. But the map also speaks to the diverse final uses those factories served. Only about a quarter of the Arkwright-type spinning mills were located in Lancashire, and of those only a handful in towns associated with weaving for calicoes, such as Blackburn.

(c) The Neglected Role of the Midlands

Despite the obvious importance of the non-competing sectors, strong and confident assertions are made about the Calico Acts, because it is just assumed anachronistically that Britain’s domestic import substitute for Indian cottons is the only relevant thing to consider.

The size and importance of the non-competing sectors of cotton implies that British cotton spinners per se never faced much foreign competition at all. Britain imported almost no cotton yarn (Hoffmann pp 255-7); and yarn has competitors only insofar as it is turned into competing cloth.

So even if Indian cloths wiped out British imitations in the home market, the fustian, smallwares, and hosiery sectors would have merrily gone their way churning out other stuffs with cotton yarn. They would have demanded more and better yarn from domestic sources. And, had there been Indian competition in a fully open British market, even more capital would likely have been allocated toward the ‘non-competing’ sectors in order to exploit Britain’s comparative advantage.

Again, this is no fanciful speculation, as the actual historical record shows the two great inventors of the Industrial Revolution hagiography — Hargreaves and Arkwright — were intimately connected with the Midlands hosiery industry.

Every scholar has his hobbyhorse “big thing” when it comes to why spinning was mechanised: it’s high wages; it’s low wages; it was legislation; it’s the African trade; it’s the American trade; it was fashionable prints; it was to match Indian quality and skill; it was the rising demand for yarn from the flying shuttle; etc.

But nobody ever nominates stockings in Nottingham!

Arkwright, whose game-changing water frame finally allowed British weavers to abandon linen yarn and make an all-cotton cloth, removed to Nottingham around 1769. There he received financial backing from Need and Strutt, two manufacturers of stockings from Nottingham and Derby. (Strutt himself was the inventor of the aforementioned Derby Rib, a loom for stockings.)

Hosiers in the Midlands had been using wool and silk, but after mid-century they were increasingly substituting cotton for silk and sometimes importing expensive Indian yarn. Arkwright sought to alleviate the shortage of tightly twisted cotton yarn for the hosiers and the first water-powered spinning mill with his frame was built at Cromford (in Derbyshire).

Indeed Strutt and many other hosiers eventually became cotton spinners — all the incestuous connections are pictured below:

[Diagram: Chapman (1974), connections between Midlands hosiers and early cotton spinners]

[Source: Chapman 1974]

Hargreaves, the inventor of the spinning jenny, had been a Lancashire weaver in the employ of the Peels, manufacturers of cloth for calicoes, but the machine-breaking riots of 1768 “induced Hargreaves to accept an offer from Rawson, Heath and Watson, the Nottingham hosier” [Chapman 1969] and he “continued his development between 1764-67 in Nottingham in partnership with a local joiner, one Thomas James” [O’Brien]. With local financing, Hargreaves himself constructed a mill in Nottingham and supplied the local hosiers with yarn from his jennies.

None of these Midlands connections is esoteric information, but something widely mentioned in any source of information about the early Industrial Revolution. (Baines pp 161-3, 339-45; Fitton; Wadsworth & Mann pp 483-5; Crouzet; Thomson in Prados de la Escosura; Chapman 1965, 1972, 1987).

But the fact that Arkwright went on to become a manufacturer of calicoes and muslins (and petitioned Parliament to repeal the 1774 law) has completely overshadowed his important links with the hosiery trade.

( Hargreaves’s spinning jenny and Arkwright’s water frame were eventually even adopted in woollens and worsted, respectively. [See Hudson; Chapman (1965)] )

Innovation & export competition

Many scholars ranging from Inikori and Parthasarathi to Allen and Findlay & O’Rourke have argued that export markets were an important stimulus to technical innovation for British proto-industry. For some, exports increased the size of the potential market for firms. For others, overseas markets exposed firms in a protected domestic market to healthy competition. But whichever theory one favours, the home market could have served these purposes just as well.

This is not necessarily true for nascent industries struggling with modern technologies, high fixed capital costs, increasing returns to scale, etc. These might, in theory, need “breathing space” from more efficient foreign producers. But this “breathing space” was likely unnecessary in the case of the cotton-linen proto-industry in the British industrial revolution. This conclusion follows in the absence of scale economies or significant learning or major barriers to entry.

I am not saying exporting was irrelevant — the lure of profits through competition with Indian goods in West Africa and the mainland American colonies may in actual historical fact have been a major incentive for British entrepreneurs. But the reason export markets became so important in the first place may be that the Calico Acts denied British firms the chance to supply all-cotton cloths in the domestic market!

Why wasn’t the jenny invented in the 1730s rather than in the 1760s? It’s not a very sophisticated machine, and one might argue its invention was inevitable given the right market conditions. There were early attempts by Paul and Wyatt to build a spinning machine on different principles in the late 1730s, but those did not succeed commercially.

One possible reason is that the Calico Acts diminished the size of the potential market for pure-cotton cloths by excluding the home market, and successful inventions in cotton spinning were deferred until population growth enlarged the export markets:

The growth of British exports can be almost entirely explained by population growth in North America and naval successes that opened markets in the Spanish and Portuguese colonies to British trade. From 1730 to the end of the century three quarters of the increase in exports went to North America and the West Indies, and more than four fifths of the spectacular export growth from 1770 to the end of the century went to those markets. The growth of exports to North America was almost entirely a reflection of population growth there. Between 1710 and 1770 British exports to North America increased to 8.6 times their initial level as American population increased to seven times its initial level”. (Harley 1982)

So Parthasarathi is right to say economic historians ignore the Calico Acts. Maybe they can help explain the timing of the industrialisation of cotton!

§ § § § §

Of course many (such as the very good historians O’Brien, Ashworth, and Vries, but also Erik Reinert, a lesser twin of Ha-Joon Chang) might argue it was not the protection of any single industry but the whole ‘structure’ of mercantilism and imperialism which pushed Britain toward the Industrial Revolution. Even Parthasarathi, though single-mindedly focused on cotton, has a brief digression on iron. The silk industry which contributed technology to cotton was itself an object of mercantilist promotion and interstate competition. So what’s really needed is a model of the Industrial Revolution which endogenises innovation in terms of the factors alleged to be partly causal, such as war, mercantilism, colonialism, and slavery. But the argument that the Calico Acts were an important factor behind the mechanisation of cotton seems misguided.


Postscript: In the comments section, I put some details I left out above:

  • pounds of cotton consumed by printed textiles
  • prices & market shares of British & Indian cloths
  • textile printing
  • knowledge externalities
  • miscellaneous stuff

Also: there’s a separate post about tariff protection for British cotton after 1774, with a note on the hand weavers.


Filed under: cotton, import substitution industrialization, industrial policy, Industrial Revolution, Infant industry argument, protectionism, trade & development Tagged: Calico Acts, calicos, fustians, Joseph Inikori, Patrick O'Brien, Prasannan Parthasarathi

The most stimulating economic history books since 2000


Inspired by Vincent Geloso, here is a list of the 20-25 books in economic history published since 2000 which I have found most stimulating or provocative. Not necessarily the best or the most ‘correct’, but stimulating or provocative.

Some of these are on my bigger Economic History Books List, which is intended to be a list of survey books for the economic history of particular regions or countries.


Filed under: books, Uncategorized Tagged: books

More frivolously assembled lists of books


Kind of sort of a follow-up to the previous book list.

Big History and “Deep Determinants” (published since 2000)

Books in archaeology, anthropology, prehistory, psychology, evolution, etc. which are relevant to economic history and social change in general. My thinking has been altered by all of the following books:

I must thank Razib Khan for blogging about Atran so often, because otherwise I don’t think I would have finished reading his book. But sticking with it definitely pays off.

Stimulating econ history books: Honourable Mentions

In my previous post listing the 25 most stimulating or thought-provoking books in economic history published since 2000, I left out several items which I now list as honourable mentions.

Must-Read (but not yet read) Forthcoming or Recent Books


Filed under: books

Economic History Papers, Articles & Blogs

The Political Economy of US Foreign Policy


Summary : (Part 1 of 4) I critique commenter Matt’s argument that, at the deepest level, American foreign policy has sought a “favourable investment climate” for itself in the Third World.

US Foreign Policy & Crony Capitalism

Before I get specifically into Matt’s beliefs, let me first address what I think is a common argument about US foreign policy : the “crony capitalist theory”. I stress, this is not quite the same as Matt’s view.

According to the crony capitalist view, most US actions in the Third World promote American business interests, including such things as the ownership or control of oil in the Middle East ; or the protection of fruit plantations in Central America and the Caribbean — classic staples of the vulgar street-corner naive cynic.

The crony capitalist model assumes narrowly self-interested, parochial actors that influence the US government in discrete cases. It’s plausible a priori, because, in the domestic context of any political system, “socialise the costs and privatise the gains” is a classic form of rent-seeking behaviour. If and when they can, businesses naturally seek to curry political influence and extract advantages for themselves in the design of legislation or the administration of policies. So the crony capitalist model is simply the same principle applied to foreign policy. Thus, one might argue that many of the US interventions in the Caribbean Basin, especially before 1945, emerged from documented collusions between the US government and very specific business interests, such as the United Fruit Company.

However, the crony capitalist theory fails when it is applied to the whole global strategy of the United States over the long run. It founders on the immense multitude of examples, especially during the Cold War, in which the United States clearly felt unbothered by economic policies or events in the Third World which were patently contrary to the objective of domination by US corporations.

A small but telling example from after the Cold War would be the US tilt toward Armenia in the Nagorno-Karabakh War. Although the United States had initially sought neutrality in the conflict, Armenian-Americans in California prevailed upon the US Congress to embargo all aid to Azerbaijan, despite the fact that Azerbaijan was host to numerous US multinationals doing deals in oil and natural gas in the Caspian Sea. Likewise, under the influence of Cuban-Americans, the United States doggedly maintains an embargo against Cuba even though the US business community appears to favour doing business with it. Then there is Iraq : the fact that US oil companies neither control Iraq’s oil, nor receive much more fee revenue from its oil fields than the Malaysian state oil company, is a serious rebuke to all those “war for oil” babblers from the early 2000s.

The incongruities of the “crony capitalist” theory during the Cold War are never-ending. In the period 1950-80, nearly the entire Third World would indulge the global fashion for “import-substitution industrialisation” (ISI), which aimed to limit imports of manufactured goods from the rich countries and stimulate the production of local “import substitutes”. In some cases this plan involved the use of tariffs, subsidies and licences to encourage local production for the domestic market ; and in others central planners would allocate capital to state-owned enterprises and set up detailed production targets. Although in its classic form ISI is strongly associated with Latin America’s response to the Great Depression, some of the celebrated postwar stalwarts of the system included India, Nigeria, South Africa, Ghana, Tanzania, Turkey, Iran, Iraq and Israel (a state founded by socialists, with crucial support from the communist bloc).

The list is endless and it would be more efficient to cite the exceptions. Whether the country was a friend or foe of the United States, did not make much difference. Morocco, considered a staunch, conservative ally during the Cold War, had one “five-year plan” after another for its bloated state enterprises. (These were privatised in the late 1990s and early 2000s, although they went from state-owned to royal-family-controlled.)

But the best example of a prominent US ally whose political economy was not vividly different from that of neighboring Soviet-allied states was Iran under the Shah. He might have been restored to power by the USA and the UK after being overthrown in a nationalist revolution, but the supposed stooge himself fully nationalised Iranian oil assets. The stooge also led the drive in 1974, as a member of OPEC, to quadruple the international price of oil, an act which brought economic chaos to his puppet master’s country. The Shah of Iran was also an economic progressive who used his carbon windfall to provide free public education and healthcare, and to finance a land reform which transferred millions of hectares of land to landless peasants.

In contrast with the ISI countries, most of the East Asian countries turned to the strategy of export-led industrialisation. This was just as state-directed as ISI, except for a crucial difference. The ISI model assumes domestic producers could count on domestic consumption, whereas the Asian model exploits the preexisting cultural habits of high savings and low consumption. Thus the East Asian developmental state intensified the suppression of internal demand and promoted export-manufacturing industries. The likes of Japan and South Korea would close their markets, for the most part, to manufactures imports and foreign investment from the United States, the benefactor to whom both literally owed their existence. In return, the United States largely practiced unilateral free trade.

By far the most egregious discordance between the crony capitalist theory and the reality of American behaviour has to be the US support of Israel. (Its causes have parallels, writ much larger, with the case of the US stance on Cuba and Armenia.) The United States had actually been fairly aloof in the 1950s — and turned downright hostile in 1956 with the Suez Crisis — but the 1960s saw a tilt toward Israel and against its Arab enemies, very much solidified after the 1967 war. I don’t see what, at least in crude self-interested terms, the USA gets out of its extraordinary closeness with Israel, an intimacy which rivals the Anglo-American alliance. I mostly see handicaps in a region the USA regards as vital enough to have prosecuted several wars in. There was the 1973 oil embargo, the near-confrontation with the Soviet Union in 1973-74 resulting from the Yom Kippur War, diplomatic complications at every front, terrorist attacks since the 1970s, etc.

(A propos of which, there’s a strong whiff of inconsistency between blaming US intimacy with Israel and the Palestinian situation for the terrorism committed by Saudis, Egyptians, and Pakistanis, and at the same time arguing that the relationship is fundamentally driven by crass self-interest on the part of the United States. Not an outright contradiction, but there’s a tension between the statements which are often contained within the same mind.)

“A Favourable Investment Climate”

Matt is aware of such incongruities, and he’s pointed out some of them himself. So he favours a more abstract approach from which I quote the choice part :

The general strategy of US foreign policy in the Third World, and especially Latin America, during the Cold War was to promote a “favorable investment climate.”

It was also the strategy before the Cold War: the peak of US intervention in Latin America was 1898-1933; for half of this period there were no “commies” in existence, and for the other half they were hardly a serious threat…

Now, I repeat: US foreign policy is not omnipotent. We have limited resources, and we cannot do away with all the things we dislike at once. Since Communism was the greatest threat to favorable investment climates, we tended, at any given point, to focus most of our resources on combatting movements that were explicitly Communist or which we perceived to be Communist. This leads some people to believe that we only opposed Communism, and were just fine with democratic nationalism and other threats to the American-dominated international economic system. But this is the wrong moral to draw from the Cold War. It only sometimes looks as if this were true because we tended to focus less effort on opposing non-Communist nationalists, since Communism was the greater threat. But that doesn’t mean we liked economic nationalism, and it doesn’t mean that we didn’t do what we could to oppose it when it was feasible to do so.

The US also opposed fascism before and during World War II, especially from Japan. Why? Because fascism creates an inhospitable investment climate, or at least an imperfect one. The Japanese Greater East Asian Co-Prosperity Sphere would have put an end to US plans for an Open Door in Asia (even as it strove for a relatively closed one in the Western Hemisphere). The US also opposed European imperialism after WWII, and for the same reasons. Except, of course, where the alternative was worse, like in Indochina…

You mention South Korea, Iran, and Turkey (why not Taiwan too?), all US allies with nationalist economic policies. Notice something about these countries: they were all on the periphery of the Communist world. The US needed these countries as bulwarks against the Soviet Union and China. They needed to be prosperous and militarily powerful. We could not afford for South Korea to be Honduras…. [ More comments on the “net” strategic value of Iran under the Shah, Turkey, Israel, etc. which outweighed other considerations. ]

As Lenin recognised, capitalists often compete amongst themselves and do not coordinate their efforts against their ideological enemies. He was talking about capitalist states, but his observation applies equally to capitalists within a country. Businesses promote their own individual interests and do not necessarily advance capitalism in the abstract. So it is up to the neutral, disinterested governments of capitalist states to promote the general welfare of capitalism — or at least that of their own capitalists.

In that vein Matt regards the US government, not as the captive of discrete business insiders with numerous conflicting agendas, but as a rational, ‘above the fray’ actor with a coherent, long-range plan to make the world safe for American business in general. So, rather than haphazardly seek any short-sighted commercial gain, the USA flexibly weighs its options and pragmatically sets priorities in the face of threats to Pax Americana. European colonialism, German/Japanese fascism, and Soviet communism have all been rivals of US mercantilist capitalism, and in defeating them utterly the United States was able to take the long view and tolerate the fairly minor deviations from ideological orthodoxy, like state-led industrialisation in the Third World.

Matt’s argument in a way mirrors George Kennan’s diagnosis of Soviet behaviour in the famous “Sources of Soviet Conduct”, published in Foreign Affairs in 1947 under the pseudonym Mr X. The analysis ties together very well the internal and external roots of Soviet behaviour. The Marxist-Leninist equivalent of the “favourable investment climate” was identified by Kennan : the ideology required its own expansion, but it would be accomplished with patience, flexibility and opportunism. Which is of course how the Soviet Union actually behaved in the world, not dogmatically, but pragmatically in pursuit of its long-range ideological goals. The Soviets just weren’t very choosey or punctilious about the ideological orientation of their clients.

But since the collapse of the Soviet Union and international communism, the United States is now virtually unrestrained in its ability to pursue laissez-faire capitalism around the world. That ideology need no longer take a back seat to strategic military priorities, nor is there an alternate superpower to balance against American hegemony. Thus arrived the neoliberal “Washington Consensus” of the last quarter-century with the mantra of privatisation, financial liberalisation, free trade, fiscal austerity, and deregulation.

Conveniently, the Third World boom of 1950-80 had wound up in the utter shambles of hyperinflation, massive debt, balance of payments crises, growth collapse, and (in some cases) civil war and famine. With the economic nationalism of the Third World and the central planning of the communist states both in discredit, the trinity of the IMF, the World Bank and the US Treasury could now impose on desperate countries the most dramatic top-down economic restructuring the world had ever seen. There had even been a dress rehearsal : Chile under Pinochet spent the years 1973-89 remodelling a stagnant social-democratic populist regime after the postulates of Milton Friedman and the Chicago Boys.

In exchange for assistance and debt relief, the neoliberal institutions of the Washington Consensus demanded that financially desperate countries be pried open to penetration by global capital. Whether it was water works in Bolivia or electricity providers in South Africa or the national telephone monopoly in Mexico, the ‘commanding height’ assets of the Third World could be carved up much as they had been in the 19th century.


Matt’s “favourable investment” global strategy is inherently unfalsifiable. If one’s prior is that everything the United States does in the world is in order to create/maintain a favourable investment climate in a broad sense, then all decisions can be rationalised around that premise. If the United States didn’t appear much bothered by countless instances of economic nationalism or social democratic experimentalism in countries which were either neutral or pro-American in the Cold War, then that must have been because their strategic value was more important than their investment value ; or their markets just weren’t that big anyway ; or the USA was not omnipotent and had to weigh its priorities in view of limited resources ; or the USA was sacrificing short-term gains with a disciplined eye toward the long-term goal of defeating the Soviet Union, the principal obstacle to the unalloyed triumph of capitalism.

It’s not possible to falsify such a premise with individual cases. So I attack his thesis in a different way.


Edit : Responding in the comments section, Matt has said, I have not accurately characterised his views : “I was caught off guard by your attribution to me of a belief that the United States had a grand, long-term master plan to conquer the economies of the Third World. Although I never said that the US promoted favorable investment climates with such a scheme in mind, when I reread my comments I can see how someone could come to that conclusion”. I had also made characterizations of Noam Chomsky to which he objected. So I have removed references to Chomsky from the above, in order to avoid confusion.

(The comments section of this post is closed. If you’d like to comment, please go to Part 4, “The Mystery of US Behaviour in the World”.)


Filed under: Cold War, Foreign Investment, International Relations, U.S. foreign policy Tagged: Cold War, Foreign Investment, international politics, International Relations, US foreign policy

Labour relations & textiles: addenda


This post contains related topics and disjointed observations as addenda to “Labour repression & the Indo-Japanese divergence” in cotton textiles.

x


Japanese industrial policy in cotton textiles, with a note on Sven Beckert

In the India-Japan post, I am not arguing that technology, education, institutions, and trade policy were unimportant in Japan’s industrialisation. But the experience of the Japanese and Indian textile industries also suggests those may have been necessary but not sufficient. India’s lack of competitiveness vis-à-vis Japan in cotton textiles was primarily due to Japan’s comparative advantage in labour market institutions. I don’t see how industrial policy could have worked in India given the militancy of Indian labour.

But I expand on that issue of industrial policy first.

The Japanese cotton textile industry was not, I repeat not, a beneficiary of Japanese industrial policy.

Until 1911, Japan lacked “tariff autonomy”. That is, the “unequal treaties” signed by Japan at the time of its opening to the world in the 1850s prohibited tariffs beyond a low level designed to raise revenue.

[Source: Braguinsky & Hounshell (2015)]

However, Otsuka, Ranis & Saxonhouse (1988) argue that Japan could still selectively exempt favoured industries from the 5% revenue tariffs. The Japanese cotton spinning firms were granted precisely that exemption — the raw cotton imported from abroad (mostly from India and the United States) was allowed in with zero tariffs instead of 5%. Otsuka et al. then calculate the effective rate of protection (ERP) on Japanese textiles as a result of this preferential zero-tariff policy. And their estimates run pretty high.

But the ERP is not a pure measure of tariff protection. It very generally measures the difference between the domestic supply price of a good and its international price. So it can be ‘contaminated’ by a number of factors, such as monopoly pricing due to cartels within the domestic market, or the layers of markups created by the domestic distribution network. Both elements were present in the case of Japanese spinning.
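As an aside, the arithmetic of effective protection is easy to sketch. The following is a minimal illustration of the standard Corden formula, with purely hypothetical numbers rather than the Otsuka et al. estimates:

```python
def effective_rate_of_protection(output_tariff, input_tariff, input_share):
    """Corden formula for the ERP on value added.

    input_share: cost of the tradable input as a share of the
    output's free-trade price.
    """
    return (output_tariff - input_share * input_tariff) / (1 - input_share)

# Hypothetical numbers: a 5% revenue tariff on imported yarn, a zero
# tariff on imported raw cotton, and raw cotton at 60% of the
# free-trade price of yarn.
erp = effective_rate_of_protection(0.05, 0.00, 0.60)
print(round(erp, 3))  # 0.125, i.e. value added is protected at 12.5%, not 5%
```

A uniform 5% tariff on both output and input would leave the ERP at exactly 5%; it is the selective exemption of the input that multiplies the effective protection on value added.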

How about non-tariff industrial policies?

The Japanese government set up and financed model spinning mills between the 1870s and the mid-1880s to promote a national cotton textile industry. But these government-owned and -promoted mills failed and had to be sold off.

Even more importantly, the Japanese government pushed the wrong technology on several fronts.

For the spinning equipment, the government chose mules, the technology used by Lancashire firms. But mules were a skill-intensive technology not well suited to Japan’s needs. The Japanese government’s promotion programme led the earliest private spinning industry down the wrong road!

Later, private Japanese spinning firms would adopt, literally overnight, the deskilled ring spinning machine. Otsuka, Ranis & Saxonhouse (1988): “By 1891, no firm invested in the mule. In the space of two years, the importation of mules ceased completely. Thus, aided by fires which conveniently destroyed a substantial portion of existing mule stock, a virtually instantaneous switch from mules to rings occurred”.

[Source: Otsuka, Ranis & Saxonhouse (1988)]

The Japanese government had chosen the mule, in part, because it wanted to promote Japan’s own domestic short-staple cotton. Short staples could only be spun with mules. Yet, as Braguinsky & Hounshell (2015) say:

“The government’s decision to promote the use of Japanese-grown cotton was also misguided. With a staple length of just 5/8 inch (Saxonhouse and Wright 2010, p. 562), Japanese-grown cotton was too short to be effectively transformed into quality yarn using Western machinery.”

“Government desires to couple industrial policy and agricultural policy, rather than the ignorance of the technical difficulties, were to blame. The objective of using locally-sourced cotton also led authorities to promote the construction of numerous small-scale (2,000-spindle) mills scattered all over the country, hampering industry development (Takamura 1971, 1:45)”.

This is rather like some Latin American or African import substitution strategy in the 1960s or 1970s. Or the pre-revolutionary Russian textile industry, which was also hampered by the government’s decision to slap foreign cotton imports with high tariffs in order to support raw cotton from Central Asia. Central Asian cotton was expensive by international standards, and not of high quality, but the Russian government had invested too much in making Central Asia a major cotton-growing region.

The Japanese government also pushed for water power, which was another horrible choice.

In The Empire of Cotton, Sven Beckert makes the following observations about state support of the Japanese cotton textile industry:

“From the 1870s, the new nation-state began to pursue a more active policy to promote industry— and cottons were foremost on the new rulers’ minds….”

“From 1879 to the mid-1880s, the minister of home affairs, Ito Hirobumi, expanded domestic spinning capacity by organizing ten spinning mills with two thousand spindles each, importing them from Great Britain, and giving them on favorable terms to local entrepreneurs. These mills failed as commercial enterprises because their scale of production was too small to make them profitable. But unlike their predecessors they introduced new policies that turned into the key factors for the success of Japan’s industrialization: a switch to much cheaper Chinese cotton (in lieu of domestically grown cotton); experimental labor systems that would structure Japanese textile industrialization long into the future (such as the day and night shift system, which gave cost advantages over Indian competitors); and encouragement of government managers to become entrepreneurs themselves. These mills, moreover, created the “ideological roots” of low-wage, harsh-labor regimes, drawing on women whose pay was below subsistence levels, combined with a powerful rhetorical commitment to paternal care, and a transfer of power from samurai and merchants to managers and factory owners.”

Beckert completely fails to note any of the things I’ve mentioned above from Braguinsky & Hounshell and Otsuka et al. In fact, most of the passage is just flim-flam — the notion that the state had to implant the idea of a low-wage advantage or of multiple shifts in the minds of businessmen is just plain stupid.

The only thing of substance in the passage is the part I’ve put in bold. But the choice of Chinese cotton was also an error on the Japanese government’s part, because it worked best with mules and mules were the wrong technology.

When the private Japanese industry threw out the bad technology choices the government had made, it went with ring spinning. And rings worked better with a mix of Indian and American cotton:

[Source: Otsuka, Ranis & Saxonhouse (1988)]

The Japanese textile industry was famous for its innovative multi-continent cotton-mixing, something which apparently never registered with Sven Beckert. That book packs so much information yet misses so many key details!

Note on India

Would infant industry protection have helped Indian cotton? Bagchi (1972) and Gupta (2011) believe tariffs would have helped if they had been used early enough.

Had Britain altruistically closed off the Indian market to import competition, the Indian industry certainly would have been bigger.

Wolcott (1997) estimates the counterfactual size of the Indian textile industry if tariff policy had allowed Indian firms to produce all cloth that had been imported in 1921-38. She estimates that the Indian industry would have achieved its 1938 size by 1927. But the counterfactual size in 1938 would have been 8% higher at best, using generous assumptions.

But the big problem of the Indian textile industry was labour inefficiency, i.e., using too many labour inputs to produce a given unit of output, and this was due to workers’ resistance to handling more machines. Why would a well-protected Indian industry have faced greater incentives or ability to rationalise its work force? I just don’t see how that works.

One might justify industrial policy or tariff protection because a period of learning-by-doing is necessary (i.e., acquisition of skill through experience by both workers and management). But if productivity growth stagnates because workers resist labour intensification and rising capital-labour ratios, then the industry can only expand on the extensive margin, i.e., through a proportionate expansion of both labour and capital inputs. It’s difficult to see how tariffs or other interventionist trade policy could have done anything other than promote the size of the Indian industry without improving its productivity. That is, after all, what actually happened after tariff protection was granted by the British Raj in the interwar period. The industry outside Bombay expanded rapidly but its productivity growth was minimal.

[Source: Otsuka, Ranis & Saxonhouse (1988)]

x

Bargaining & capital-labour substitution

(a)

Clark (1987) presents two stylised facts about pre-war cotton textile technology which I believe are supported by subsequent research such as Bessen (2012) and Zeitz (2013):

  • At any given point in time, output per machine was approximately the same across countries. Holding product quality constant, you could only vary machine speeds within a limited range.
  • There was inherently little scope for factor substitution. If wages were low and costs of capital and raw material were high, you could run your machines a little faster, and you could also use lower-quality raw material. Both of these actions raised labour requirements, but you could only get modest net savings in labour costs per unit of output.

Bessen (2012) also shows that price-driven capital-labour substitution had almost nothing to do with going from 1 loom per weaver to 20 per weaver, because the elasticity of substitution was very low in weaving. The technology looks near-Leontief! Isoquants for weaving at New England mills for the benchmark years 1801, 1819, and 1901:
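What a near-zero elasticity of substitution means can be illustrated with a toy CES cost-minimisation. The parameters below are my own illustrative choices, not Bessen’s estimates: when sigma is close to zero, even a doubling of the wage barely moves the cost-minimising labour-capital mix.

```python
def labour_capital_ratio(w, r, alpha=0.5, sigma=1.0):
    """Cost-minimising L/K for a CES technology with elasticity of
    substitution sigma: L/K = (alpha/(1-alpha) * r/w) ** sigma."""
    return (alpha / (1 - alpha) * r / w) ** sigma

# Doubling the wage multiplies L/K by (1/2) ** sigma:
cobb_douglas  = labour_capital_ratio(2.0, 1.0, sigma=1.0)  / labour_capital_ratio(1.0, 1.0, sigma=1.0)
near_leontief = labour_capital_ratio(2.0, 1.0, sigma=0.05) / labour_capital_ratio(1.0, 1.0, sigma=0.05)
print(round(cobb_douglas, 3))   # 0.5   (labour input per unit of capital halves)
print(round(near_leontief, 3))  # 0.966 (labour input falls by barely 3%)
```

With sigma near zero, large wage differences across countries translate into almost no difference in manning ratios through factor prices alone.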

(b)

With higher wages, all else equal, there exists an incentive to substitute more capital for labour. But in the textile and other industries, this can take the form of (1) investment in improved machinery; or (2) labour intensification through increasing machines per worker, i.e., stretch-out (and possibly also speed-up of machinery).

If the employer has bargaining power in the labour market, the incentive for (2) may be greater than for (1) because it would be cheaper. But you can also have a combination of (1) and (2), because you can intensify labour even more if you have improved (effort-saving) machines.

But there are tradeoffs in making a worker operate more machines. You save on labour costs, but you also run the risk of idling machines more often as a worker can only attend to a single machine at a time when there are problems or errors.

1 loom per weaver => fewer machines idled when worker intervention is required => more likely to reach maximum machine capacity

8 looms per weaver => more machines can be idled => higher risk of losing machine efficiency

Because New England had the highest wages in the world, the trade-off worked in the direction of more machines per worker.

In Lancashire, where even skilled labour was cheaper, the number of power looms per weaver was only about 4.

Wouldn’t it still have made sense for British firms to lower labour costs through work intensification? Yes, but Lancashire also faced much more powerful trade unions than New England. This changed the tradeoffs facing British firms, because the transaction costs of negotiating new wage schedules were very high. And the new wages demanded by unions might have been high enough to offset the labour cost advantages of 6 or 8 looms per worker in England.

From the American point of view, if New England workers had had more market power, it’s plausible that firms could not have made their workers operate more machines at the right cost.
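The wage-dependent trade-off described above can be made concrete with a toy cost-minimisation. All numbers here are hypothetical, chosen only to show the mechanism: as the wage rises relative to the cost of a loom, the cost-minimising number of looms per weaver rises, even though each added loom sacrifices some machine efficiency.

```python
def cost_per_unit(k, wage, capital_cost_per_loom, efficiency):
    """Cost of one unit of cloth when one weaver tends k looms.
    efficiency(k): output of each loom relative to its one-loom maximum."""
    output = k * efficiency(k)  # cloth per weaver-hour
    return (wage + k * capital_cost_per_loom) / output

# Toy interference schedule: each extra loom per weaver costs ~7% of
# machine utilisation (stopped looms wait longer for attention).
def eff(k):
    return 0.93 ** (k - 1)

def best_k(wage):
    """Looms per weaver that minimise unit cost at a given wage."""
    return min(range(1, 13), key=lambda k: cost_per_unit(k, wage, 1.0, eff))

print(best_k(2.0))   # 4 (cheap labour: few looms per weaver)
print(best_k(20.0))  # 9 (dear labour: many looms per weaver)
```

Union wage demands enter this sketch as a jump in the effective wage attached to a higher k, which can flip the calculation back toward fewer looms per weaver.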

(c)

Gupta (2011) argues that before the (exogenous) rise in real wages caused by the boom of the First World War, the Indian “cotton mill entrepreneur faced little pressure to increase productivity”. After the war, the wage-rental ratio (the ratio of wages to the cost of capital) rose faster in Japan than in India, so there was a greater incentive for Japan to invest in technology.

But Gupta only looks at one dimension of incentives to raise labour productivity and ignores the simple fact that Japanese competitors were stealing away (domestic and export) market share from Indian mills through lower labour costs. That’s an incentive for Indian mills to reduce labour costs, period, regardless of capital costs. If Indian capital costs were not falling as much as in Japan, that simply biases the method of cost savings toward labour intensification rather than investment in new technology. Hence all the calls for rationalisation. As argued in (b), it was cheaper to intensify labour than to invest in new machines anyway, all things equal.

x


More on technological divergence in cotton textiles

How much did technological change contribute to the Indo-Japanese divergence in machines per worker in the interwar period?

(1) Rationalisation required no new technology

First, as mentioned in the main post, there were at least 15 Indian mills in the 1920s-30s which did reduce manning ratios substantially. According to Wolcott (1994, 1997), new equipment was not necessary for these changes. Most of the changes were organisational: a matter of getting each worker to handle more machines. But this required paying a larger wage premium for the increase in effort than in Japan.

(2) More on automatic looms

Circa 1936 only about 12% of Japan’s looms were automatic (Rose 1991); and the entire weaving sector itself accounted for a relatively small fraction of the value added in Japanese textiles. So automatics just can’t account for the huge disparity between India and Japan in textile labour productivity.

And the Indian workers were unwilling to handle enough automatics. Here’s the full quote from the Indian Tariff Board that I cited in the main post:

“Experiments with [the automatic loom] have been made from time to time in Bombay, Ahmedabad, and Sholapur, but the results obtained have not been encouraging…”

“…The additional cost…could only be compensated by higher production, higher prices, or a reduction in labour costs. It does not appear that the production of the Northrop loom is higher than that of the ordinary loom or that any higher price can be obtained for the cloth manufactured on it…. The reduction in labour costs would depend on the number of looms tended by one weaver. In America, one operative attends to 20 to 24 of these looms against 8 to 10 ordinary looms, but the result of the experiments so far made with them in India goes to show that it would be difficult to get weavers in Bombay to look after more than four. Even if the number of looms went up to six as in Madras, the statement below shows that the balance of advantage still lies with the ordinary loom.” (Report of the Indian Tariff Board [Cotton Textile Industry Enquiry] 1927, Vol. 1, pp 143-4)

(3) Spinning technology

There was little change in spinning technology in the interwar period. Saxonhouse (1977) finds no effects from machine vintage and concludes: “Since spinning technology was almost entirely quiescent throughout the period prior to the early 1930s, the absence of any role for machinery improvements in the above explanation [for the rapid growth in spinning productivity] is probably not surprising”.

Wolcott & Clark (1999) also make inferences about labour requirements per unit of output from the purchase records at the British firm of Platt Brothers, the world’s most important textile machine manufacturer. Even in the interwar period, Indian and Japanese firms were buying ring spinning machines with similar specifications, so the “rise in labour productivity in Japan in relation to India had little to do with the differences in machinery”.

(4) Mules versus rings

A hoary technological canard is the supposed conservatism of Indian mill owners and their mental enslavement by British practice when it came to spinning technology. Much like the United States, Japan quickly adopted the less skill-intensive ring spinning in the 1890s — an example of “unskilled-biased” technical change. But India was slower to transition completely out of power mules, the less automated and more skill-intensive technology used primarily in Lancashire.

[Source: Otsuka, Ranis & Saxonhouse (1988)]

But as Morris and Saxonhouse & Wright pointed out, mules persisted longer in India because they were well-suited to spinning yarn with India’s short-staple native cotton. Ring spinning worked better with the expensive long-staple cotton which the Japanese mills imported from the American South.

Short staples are considered lower-quality because yarn made from them is more likely to break. But mules treated fibres gently, so it was an optimal technology if you had to work with low-quality fibres or use them sparingly.

India also had to meet demand from domestic hand loom weavers who preferred mule-spun yarn. In Japan, hand weaving disappeared earlier than in India, where it still survives in 2017! 

(5) Piecing & machine speeds

Chandavarkar (1994) on page 284 notes:

“But by running their machines above the normal speeds, using equipment and inferior materials and employing makeshifts in the process of production, millowners increased the intensity of effort demanded of their workforce. According to Mr J.M. Moore of the Eastern Bedaux Company, consultants on scientific management, a ring sider in India had to deal with nine times as many breakages per 100 spindle hours as his counterpart in the United States, ‘so that it shows’, he argued, ‘that whereas in India we are only tending two sides, still they may be doing in India almost as much work as they would be doing in America if they watched 8 or 10 sides’.”

According to Wolcott & Clark (1999, pg 399), the observed number of broken threads per 100 spindle hours varied between 25 and 35 for India. The British and American rate was between 3.5 and 10. Under Indian conditions with a spinning frame of 326 spindles, piecing 25-35 breaks required 20 to 30 minutes. This would have left 170-230 minutes of idle time.

x


Human capital divergence in Japan & India?

In general, pre-war textile work was low-skill, in the sense that it was not cognitively demanding and pretty much anyone was capable of acquiring the necessary skills on the job. Workers largely differed in how long it took to achieve peak productivity (Leunig 2003).

But could literacy have made a big difference? Indian textile workers were overwhelmingly illiterate, while Japan’s work force was overwhelmingly literate. Surely this must have made some difference, even if textile work did not require easily observable skill.

Saxonhouse (1977) found literacy did matter in his regressions — Japanese mills with more literate workers were more productive. [McHugh () for the US South and A’Hearn (1998) for Italy also stress the importance of literacy in textile work, although the A’Hearn data are pretty coarse and aggregated.] But since literacy was not strictly required for textile work, Saxonhouse argued this must be a non-cognitive effect of schooling. At schools, workers must have acquired “soft skills” like discipline and deference to authority which paid off in the factory environment.

Bowles & Gintis (2001) argue that schooling socialises pupils in ways valued by employers, perhaps predisposing them to desirable work habits. This is also consistent with evidence that employers value non-cognitive skills associated with schooling (Heckman & Rubinstein 2001).

Another possibility is that literacy and basic schooling increase general cognitive ability, relative to illiteracy and no schooling (Ritchie, Bates & Plomin 2015). IQ is also lowered by childhood stunting [cf. the numerous references in Kelly, Mokyr & Ó Gráda 2014]. And IQ affects job performance even when the work is cognitively non-demanding, and certainly makes you a faster learner of even simple tasks [Hunter ref].

But we can actually get a direct measure of the impact of literacy on mill work from when New England mills gradually replaced literate young women with illiterate immigrants, per Bessen (2003). As the local supply of experienced workers increased, it became more profitable to hire illiterate workers.

Since textile workers were generally paid according to piece rates, you can infer individual worker productivity (within a single factory) from earnings. What the earnings profiles show is individual worker productivity generally peaked after a year of experience or so, and after about a year or two, returns to an additional year of experience were small and diminishing.

Bessen estimates that illiterate workers peaked slightly later than literate workers, and were ~12% less productive at peak. That was in the 1850s, but the direction of technological change throughout the 19th century was to reduce tasks for the operative while machines ran faster with fewer defects. So by the early 20th century, ring spinning machines should have been even easier for the operative than in the mid 19th century.
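The earnings-profile evidence can be mimicked with a generic exponential learning curve. The functional form and parameters below are my own illustrative choices, tuned only to reproduce the qualitative pattern (illiterate workers peak a little later and roughly 12% lower), not Bessen’s actual estimates:

```python
def productivity(months, peak, halflife_months):
    """Output relative to a fully trained literate worker after `months`
    on the job; the remaining gap to peak halves every half-life."""
    return peak * (1 - 0.5 ** (months / halflife_months))

literate   = [round(productivity(m, 1.00, 3), 2) for m in (3, 6, 12, 24)]
illiterate = [round(productivity(m, 0.88, 4), 2) for m in (3, 6, 12, 24)]
print(literate)    # [0.5, 0.75, 0.94, 1.0]
print(illiterate)  # [0.36, 0.57, 0.77, 0.87]
```

Both profiles flatten out after roughly a year, consistent with the small and diminishing returns to experience described above.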

It’s worth noting that Wolcott (1997) examines two mills in India (Tatas and Binnys) which in the 1930s sought to improve worker welfare in emulation of Japanese practice. They adopted all the stereotypes of Japan’s benevolent employer paternalism — schools, hospitals, loan associations, technical training, subsidised housing, etc. Yet labour costs were not much improved.

x


Exchange rates

x


Gupta (2011): wages, effort & coordination failure

Gupta (2011) challenges the Wolcott-Clark (1999) view that Indian workers were the problem by offering a novel synthesis of elements from the Lewis model and unions-as-voice. India was a surplus labour economy where manufacturing wages were set as a premium over the subsistence rates in agriculture. Wages in Indian industry remained low as long as agricultural productivity stagnated. Indian workers would have supplied more effort, if they had been paid more. But because the wages were low, Indian mill owners had little incentive to economise on labour inputs or even to discipline the workers properly. A vicious circle.

So there was a coordination failure. Owners and workers were stuck in a low-wage, low-effort equilibrium and failed to reach a high-wage, high-effort equilibrium. Indian unions, far from resisting ‘rationalisation’, were actually a force for coordination and promoted productivity by raising wages, which induced management to undertake ‘rationalisation’ measures. By contrast, in Japan, productivity growth in agriculture raised wages in the economy as a whole, forcing industry to pay competitive rates to workers. In turn, workers supplied more effort, and industry responded with efficiency measures and capital investments. A virtuous circle.

As already stated in the main text of the post, Gupta’s own evidence reinforces the Wolcott-Clark finding that higher wages pretty much offset any productivity gains!! Gupta spends much time on whether wages cause productivity or productivity causes wages, but I don’t think this really matters. And no one doubts that Indian workers responded to incentives, like anyone else. What matters, however, is the elasticity of effort with respect to wages.

x


Roy (2008): Jobber power at Indian mills 

Roy (2008) argues that jobbers, senior workers who functioned both as labour recruiter and as foremen of mill hands, were responsible for the low machines per worker at Indian mills.

Jobbers maintained patronage ties with the hires and collected bribes from them. This decentralised management structure created a principal-agent problem in which jobbers had perverse incentives to hire more transient day labourers (‘badlis’) than needed, without regard for skill or experience, and without regard for the actual labour requirements of the mill. This also encouraged absenteeism. Turnover and absenteeism benefited the jobber because more bribes could be earned from sending workers to another mill and finding replacements.

Jobbers were useful in the early stages of the Indian industry to bridge the linguistic, cultural, and social distance between the mostly Parsi owners, Indian & European managers, and the largely Marathi-speaking peasant-workers. But by the interwar period they were hard-to-get-rid-of relics of an earlier age. Roy argues that before the war, Indian mill owners had little incentive to reduce ‘overmanning’, but the jobber problem was exposed by the arrival of Japanese competition after the war.

All that is plausible, but…

  • Roy does not go much beyond sketching this mechanism. Jobbers’ salary from the mills depended on output. So there would have been a trade-off between the amount of bribes from hiring unnecessary ‘badlis’ and the jobbers’ wages at the mill. Did the bribes exceed the official salary? Was the jobber’s interest from bribery greater than his interest in higher output? There is no evidence on this.
  • The jobber problem should manifest in turnover and absenteeism. But the rates of absenteeism at Indian mills were no worse than in Japan. As for turnover, it was worse in Japan! According to Otsuka et al. (1988), “through the 1930s, Japanese industry did rely on a labour force whose modal entrant left after some 3 to 6 months of service”.
  • As argued in the main post, if the mill owners were forced to rely on ‘badlis’ then it was almost certainly because the labour supply conditions were not in their favour.

Chandavarkar (1994, pg 296) supports the last point: “The importance of the jobber’s recruiting function derived from the extensive use of casual labour in the industry. However, as the reserve supplies of labour expanded and became more readily available at the mill gates, the importance of the jobber’s recruiting function declined. ‘As the supply of labour has been greater than the demand for a considerable time past’, reported the Labour Office in 1934, ‘the agency of the jobbers is not much in requisition today.’ “

Also, the elimination of the jobber system would have been a major reorganisation which would have been very difficult to implement given the existing relations between management and labour. Roy says as much: “Efficiency now demanded implementation of long-postponed institutional reforms. To many managers and owners, reforms meant getting rid of the jobber. Attempts to do so made worse a fresh burst of industrial disputes, a story too well-known to be repeated here”.

So ultimately the jobber issue came down to industrial relations, once again!

Also, Mazumdar (1973), citing testimony from 1927, says spinning and weaving masters could reject the employees selected by the jobbers. So perhaps the jobbers did not have the ability to maintain excess manning levels after all. Report of the Indian Tariff Board (1927), volume 2, pp 347-352 (358-363).

x


Learning & Bargaining at New England mills

Although formal schooling was not necessary in the textile industry, experience and on-the-job learning definitely mattered.

A worker could not go right away from 2 looms to 3 looms with the snap of a finger. He or she needed a period of training and practice to do the extra work. The more experienced you were in operating textile machinery, the more quickly you could get up to speed in operating more machines. The bigger the local supply of experienced workers, the more profitable it was for a firm to undertake the investment in the training necessary to make a worker operate more machines. The incentive is also stronger if labour costs are rising (whether due to rising wages, or to falling product prices).

However, in comparing the textile industries of India and Japan, individual worker experience was stronger on the Indian side. The Indian industry had a larger core fraction of committed lifers. The point is that experience was not a sufficient condition for learning to increase machines per worker. Workers must still be willing or be made to operate more machines, and at a wage consistent with profit maximisation. This does not necessarily happen because it depends on the balance of market power between workers and employers.

Bessen (2003) revisits the famous ‘stretch-out’ in Lowell, Massachusetts — when the average weaver successively went from operating 2 looms to 3 and then 4 over the course of 20 years. This is the same case famously analysed by Paul David (1973) as an empirical demonstration of learning-by-doing, and by Lazonick & Brush (1985) as a complementary demonstration of the “reserve army of labour“.

Although Bessen stresses the learning angle, what he actually does is reconcile the two previous papers: workers did learn to operate more and more looms, but they were paid approximately the same hourly wage in his benchmark years 1834, 1842, and 1854!

Three charts from Lazonick & Brush:

Textile workers around the world were paid piece rates — a fixed payment per unit of output, so that if you produced more, you earned more. Figure 2 implies that when weavers at Lawrence Mill #2 increased their earnings (= produced more cloth per hour), the employers cut their piece rates!!!

Piece rates were adjusted so that their hourly wages were unchanged even though workers were working more intensely every hour and producing more per hour. In other words, mill workers were being paid less than their marginal product, and capturing a decreasing share of labour productivity growth. Lazonick & Brush call this the “unremunerated intensification of effort”, although Bessen shows firms paid for worker training out of the rents.
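The arithmetic of that arrangement is worth making explicit. Here is a minimal sketch with purely illustrative numbers (not taken from Lazonick & Brush or Bessen) of how a piece-rate cut can hold the hourly wage flat even as output per hour doubles:

```python
# Illustrative numbers only -- not from Lazonick & Brush or Bessen.
# Under piece work, hourly pay = piece rate x output per hour, so a
# rate cut can exactly offset a rise in output per hour.

def hourly_wage(piece_rate, output_per_hour):
    """Piece-work pay: a fixed payment per unit of output."""
    return piece_rate * output_per_hour

# A weaver initially produces 4 units of cloth per hour at 5 cents per unit.
w_before = hourly_wage(piece_rate=0.05, output_per_hour=4.0)

# Stretch-out doubles output per hour; the mill halves the piece rate.
w_after = hourly_wage(piece_rate=0.025, output_per_hour=8.0)

# The hourly wage is unchanged, so the worker's share of the extra
# output is zero: the "unremunerated intensification of effort".
productivity_gain = 8.0 / 4.0 - 1    # +100% output per hour
wage_gain = w_after / w_before - 1   # 0%
```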

Why wouldn’t workers quit or shirk? 

At New England mills, this 20-year episode of stretch-out coincided with a transformation of the labour force. In the early years, New England mills were staffed with literate Yankee farm girls originating from far away but living in company dormitories. This work force was gradually replaced by local residents, often illiterate, both natives and Irish immigrants. The literate workers had outside options, often as school teachers (or as wives). The local illiterates and immigrants with dependents had fewer options outside mill work, and Irish immigration was also plentiful in this period. Uncooperative mill workers could be more easily replaced in the 1850s than in the 1830s. 

From Bessen (2003):

Wages did eventually rise after 1870 or so. From Bessen (2015):

Bessen cites returns to textile operatives’ skills (and the relative scarcity of skill giving workers more bargaining power). But there could be more general, economy-wide reasons since the real wages of workers classified as ‘unskilled’ approximately doubled between 1880 and 1920. Naidu & Yuchtman (2016), figure 2.


Labour relations in Lancashire & New England

Unions were fairly weak in New England, compared with those in Britain (although perhaps not compared with those in the South).

Lancashire employers abided by fairly rigid collective bargaining agreements with trade unions of subcontractor-workers defending a craft tradition.

This was probably because in Britain, hundreds of highly specialised firms — the single Lancashire town of Oldham had 200 spinning firms! — were locked in fierce competition (Harley 2012), and they could never coordinate their actions very well against spinners’ and weavers’ unions (Huberman).

But American mills were highly vertically integrated and their ownership was concentrated in a few families with interlocking directorates. The mills therefore could support one another and coordinate their actions against unions (Cohen 1985). In Lowell, according to Bessen (2003), firms exercised some monopsony power, sometimes changing wage rates in unison.

{ This is speculation, but it’s possible this was reinforced by high tariffs against textile imports. According to Brown (1992), deviations from cartel agreements in late imperial Germany were minimised by tariff policy. }

But the reason was also partly technological. British spinning was heavily dependent on the mule, a technology which required a lot of skill and experience to operate. This was a slower machine but well-suited to Britain’s need to produce high quality goods from relatively poor raw materials. Lancashire stuck to this male-dominated technology until its very demise in the 1960s.

The rest of the world — including American, Japanese, and Indian mills — gradually switched over to the deskilled ring spinning technology. Rings were much better at mass production of cheaper goods, but were less flexible than mules about raw material quality.

Earlier in the 19th century, Lancashire spinning firms had desperately wanted to get rid of the proud and prickly male mule spinners and replace them with spinning throstles (i.e., the early versions of rings) which could be operated by women and children.

But this was not possible — ironically because of rapid technological change. The spinners knew better than the employers the peculiarities of the contraptions which were still primitive by later standards. And because each spinning machine had unique tweaks and adjustments, these spinners did not merely have firm-specific skills, but they had also acquired “mule-specific” skills !

Britain relied crucially on mule spinning because it was best for producing fine textiles and for producing fine products from relatively poor materials. And Lancashire was forced to compete globally by moving up the quality ladder as more and more infant textile industries emerged in the world. New industries would produce cheap coarse fabrics, so Britain had to keep making finer and finer fabrics.

So the more profitable product line for the spinning firms required a certain level of craft skill possessed only by the elite mule spinners (Lazonick, Freifeld). In fact, spinners turned themselves into internal subcontractors with power to hire and fire their own labour force. They maintained workplace autonomy.

Cohen (1985) puts it best:

“Mule spinners in Britain were autonomous craftsmen. They were independent from management supervision, exercised authority over other workers, had complete control over entry into the trade, and were protected from arbitrary wage cuts by a complex system of collective bargaining”

“The mule spinner was largely independent of management supervision (United Kingdom 1834: 125; Montgomery 1836: 272), and he also exercised complete authority over a number of assistants. From the days when mule spinning first became a factory trade until after the industry’s decline in the twentieth century, British spinners had the prerogative to subcontract their own helpers. They hired, fired, disciplined, supervised, and paid their assistants”

“The distinctive characteristic of British mule spinning was the success of spinners in retaining control of their craft intact over a period of some 180 years, from the two closing decades of the eighteenth century, when the mule had first gone into operation, until almost the demise of the industry in the 1960s (Lazonick 1979).”

Therefore, a combination of product and labour market conditions gave the spinners a certain power over employers.

By contrast, American mills destroyed the craft basis of textile production. The proud and prickly mule spinners (often British immigrants made redundant in Lancashire) were eventually replaced en masse by a production system based on ring spinning operated by a work force of women and immigrant labour.

What was true of textiles, was also true of the British and American economies as a whole. Per Katz & Margo (2014), the US labour force in the late 19th century was largely deskilled in the middle of the distribution: craft workers were displaced by a combination of unskilled labour, skilled managers, and high-throughput machinery.

Precisely the opposite prevailed in the UK as a whole: per Harley (1974), skilled craft workers equipped with older vintage technology were the basis of British production until well after the Great War.


Labour power in early Lancashire

A series of bitter strikes by male spinners forced Lancashire firms to agree to fixed piece rates in spinning codified in public wage lists (Huberman 1996a, 1996b). This was essentially one of the earliest instances of collective bargaining agreements, achieved privately between firms and employees, without intervention from the state.

Earlier, Lancashire spinning firms had hoped to get rid of these trouble-makers through the introduction of the “self-acting mule” which might be operated by women and children. But for reasons too complicated to go into, this was not possible and British spinning remained an elite craft preserve of men. See Lazonick 1979, 1981; Cohen 1985; Freifeld 1986.

Yet, in the first 70 years of the British Industrial Revolution, real wages in the overall economy were stagnant even as labour productivity was growing briskly. Profits were rising and the labour share of national income was falling.

The British textile industry itself appeared awash in “surplus labour”. Competition from mechanised factories threw domestic outworkers out of business, who were then ‘proletarianised’ to work in the mills. Hundreds of thousands of hand loom weavers, so memorialised in Thompson’s The Making of the English Working Class, saw their wages collapse utterly as the power loom spread (Brown 1990; Allen 2016). The spinning section itself was also subject to the constant threat of short-term technological unemployment. According to Rose et al. (1989), the “number of spinners employed in Manchester firms dropped by over 40 per cent between 1829 and 1840”, many of whom emigrated to the USA.

But none of that prevented Lancashire workers — or at least the elite male mule spinners — from bringing Lancashire firms to heel and forcing them into collective bargaining agreements with far-reaching consequences in the early 20th century.


Productivity & Industrial relations in Lancashire

Clark (1987) notes in his conclusion:

“Outputs per worker increased greatly in all the national textile industries over the nineteenth and early twentieth centuries. In England in 1850 the average weaver tended only 2.2 power looms compared with 3.44 in 1906, despite the fact that looms in 1906 were about 50 percent faster than those of 1850. The number of spindles per operative increased also. In 1833 the mule spinning frame had 440 spindles on average, but by 1910 this had increased to 1080, with no increase in workers per frame, despite the fact that the speed of spindles had more than doubled. Differences in manning levels among countries suggest that it is unsafe to infer that the increase in output per worker resulted solely from technical progress…”
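A back-of-the-envelope calculation using the weaving figures quoted above shows why the technology-only explanation falls short. Treating output per weaver as simply (looms per weaver) × (loom speed) is my simplification for illustration, not Clark’s own calculation:

```python
# Clark's weaving figures for England, 1850 vs 1906. The multiplicative
# split of output per weaver into (looms per weaver) x (loom speed) is
# a simplification for illustration.

looms_growth = 3.44 / 2.2    # ~1.56x: each weaver tends more looms
speed_growth = 1.5           # looms ~50% faster in 1906

# Output per weaver roughly doubled-and-a-third; a large chunk of that
# came from manning levels, not from faster machinery alone.
output_growth = looms_growth * speed_growth   # ~2.35x output per weaver
```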

How was this labour intensification achieved?

Lancashire’s distinctive labour market institution was the public wage rate list. Spinners, weavers, and other operatives were paid by the piece, and a detailed schedule strictly defined the relationship between pay and output.

Both workers and firms recognised these lists as having the “force of laws”, and their credible commitment was reflected in workers sanctioning other workers who shirked, and firms sanctioning other firms which tried to defect from the arrangement. Collective bargaining laboriously spelt out minute adjustments to the payment schedule based on product quality, machine size, and other characteristics, which would affect productivity and therefore wages.

The public wage lists optimised the incentives of both firms and workers. Workers were protected against wage cuts in recession, and could share in gains made from technical improvements. Employers, in turn, could be assured workers would put in their best effort through rational self-interest, and the structure of the lists ensured workers would not capture the entire marginal product of capital when new investments were made.

Or at least that is the argument advanced by Michael Huberman. In a whole series of papers and a book on the subject, he attempts to explain how the industrial conflict in the Lancashire of 1800-60 was transformed into the relative industrial harmony of 1870-1914.

Gupta (2011), appealing to Huberman, argues this high-effort, high-wage equilibrium is what Indian mills and workers could not settle on. (“Huberman argues that cotton mills in Lancashire in the mid-nineteenth century standardized piece rates and forced the inefficient firms to raise productivity with a given technology. If firms had lower wages, workers would reduce their effort and lower their output.”)

But but but there’s always a but.

  • It took decades — 6-7 decades! — for this arrangement to work itself out in the various sections of the Lancashire industry.
  • In the fine spinning section, the mule spinners had bargaining power with the employers because they possessed scarce skill in operating mules. It’s not clear whether a similar labour market institution might have emerged if the predominant spinning technology in Britain had turned out to be the ring, as it was almost everywhere else.
  • The wage lists transformed the spinners and the weavers into a “labour aristocracy”. As internal subcontractors, they hired and fired the assistants who could be ‘exploited’. If Lancashire firms paid spinners and weavers well, those in turn could squeeze their direct employees like piecers and/or make them work harder.

Master & Servant laws

A not-widely investigated aspect of labour market institutions in Lancashire was the role of Master & Servant laws. Here’s a good description from Wallis ():

“…but Justices of the Peace actively intervened in labour contracts through the nineteenth century. Labour contract enforcement had been asymmetric from the beginning. Workers could be whipped or imprisoned; employers could only be fined. However, Hay argues that in the nineteenth century the law ‘became more inimical to labor’ as its scope and application were broadened and Justices were increasingly drawn from among employers (Hay 2004; Johnson 2010). Evidence about the actions of such magistrates survives for 1857–67, when 9,000 workers were being prosecuted each year for absence from work, moving to another employer, misbehaviour or insubordination. Many were imprisoned for three months’ hard labour (Galenson 1994: 124; Hay 2004: 60). Harsh punishments affected industrial and ‘traditional’ workers alike. In Preston and Blackburn, prosecutions of cotton spinners occurred almost weekly in the 1850s (Johnson 2010: 76). Criminal penalties for breach of contract applied until 1875, when employer and worker were finally put on an equal basis in civil proceedings. Workers’ wages and their responsiveness to demand shocks increased following the removal of these laws (Naidu and Yuchtman 2011).”

Frank (2010):

“…the statute law, in fact, loomed very large for workers in many trades in leading sectors of the economy. Local systems of collective bargaining were always shaped by the possibilities of the criminal law, even in sectors of the job market where that law was not applied. Many historians exploring regimes of labor recruitment and discipline have demonstrated that in a variety of regions and trades, the penal characteristics of nineteenth-century employment law, enforced locally by largely unsupervised magistrates, were absolutely central to employer strategies for retaining and controlling workers at low costs. On the morning of 8 January 1845, John Williams and his three workmates had no doubt whatsoever that in nineteenth-century labor relations, the law mattered a great deal”

The law seems to have been most often applied when the labour market was relatively tight. From Steinfeld (2001):

Naidu & Yuchtman (2013) sort of argue these laws acted as a commitment device by workers. But might the Master & Servant laws have played a role in inducing Lancashire workers to moderate their demands against their employers in the various strikes of the 1830s, 1850s, and 1870s? And what role did they play in maintaining the wage list system?

Incentives, institutional rigidity & the decline of Lancashire

Huberman (1996) has this interesting remark:

The average number of spindles per mule in fine spinning increased from, roughly, 144 spindles in 1790, to 600 in the 1820s, and to 1,200 by the late 1830s. Firms and workers disputed the amount of effort required to spin on the new and longer mules. To preserve the standard or normal relation between effort and pay, workers insisted that piece rates be the same on all mules because of the increased physical effort required to spin on longer ones; but firms were adamant that if rates were not cut or discounted on longer mules, workers would capture all the gains of technical change and leave little incentive for further investment. Pressured by the entry of new firms, employers sought changes to the 1813 list.

A protracted and bitter strike ensued in Bolton between 1822 and 1823, and, in the end, firms succeeded in introducing a list with discounting.

In other words, when Lancashire firms introduced new equipment, workers did not end up pocketing all the marginal product of capital (the incremental increase in output from installing longer mules with more spindles). The workers did try to capture all the rents from new investment by demanding the same piece rate when mules were lengthened, but the firms would not concede.

So incentives were, in the end, well aligned in the spinning section: spinners gained from productivity growth from the speeding up of the machines and the increasing spindlage, while the firms also gained without having to engage in costly monitoring.

However, this was not true in the weaving section. The piece rates in weaving did not vary with the number of looms per weaver (Wood 1910), so weavers had every incentive to take on more looms but the employers did not. As Lazonick (1990) puts it: “for a given intensity of labor, the lower the number of looms per weaver, the faster each loom could be run, the higher the output per loom, and the lower total unit factor costs”.

Lazonick (1990, pg 184-5) claims that after 1885, real wage growth in Lancashire outstripped productivity growth and unit costs were controlled only by using cheaper raw cotton. (This is strongly disputed by Sandberg.) Cheaper cotton meant more broken yarn, which created tensions between weavers and the firms. Ironically it was the wage list, a product of early labour relations in Lancashire, which created this incentive.

This was all fine as long as you could keep speeding up the looms. Besides, before 1914, Lancashire was so far ahead of everyone else in terms of technology, organisational experience, collective worker learning, and agglomeration externalities from the industry’s tight industrial concentration in a small geographical area, that the institutional rigidities of the wage lists probably did not matter until a truly lower-cost competitor would show up on the scene.

When Japanese competition materialised out of nowhere after the Great War, it exposed the problems of the wage list. (This is a parallel with India.) In order to compete on labour costs, Lancashire weaving firms had to increase the number of looms per weaver. But this was not possible without getting approval of both the unions and the employers’ association, all of whom had an interest when some aspect of the collective bargaining agreement would be changed (Bowden & Higgins 1999).

Greaves (2000) notes:

“In the inter-war depression a modified version of this bargain survived. Cotton workers accepted low wage rates (by the 1930s they were among the lowest paid manual workers in Britain), but only on the basis of labour intensive production methods and single shift working. Employment was thereby spread (many households derived more than one income from the industry) and disruptions to family life minimized.”

“With mass unemployment any attempt to recast this bargain by employers would have been bitterly resisted.84 At the very least workers would have demanded much better remuneration to compensate for the diminution in job opportunities. This was shown by their reaction to attempts by weaving employers to introduce the so-called ‘more-looms’ system. The latter involved no more than a redivision of labour based on existing technologies to make more efficient use of the workforce. But the only basis on which it proved possible to introduce ‘more-looms’ in the 1930s was to pay very high wages to the few weavers who used the system. Any other approach merely brought industrial relations anarchy and social unrest”.

Lancashire was foredoomed by lower-cost competitors, but it probably could have survived longer and healthier if there was more flexibility about wages and manning ratios. The similarity with India in this respect is striking, even though Indian workers had no formal collective bargaining institutions before the 1940s.


Filed under: Uncategorized Tagged: industrial relations, labour relations, Lancashire, New England textiles

Labour repression & the Indo-Japanese divergence


There used to be more research and debate on the negative effects of labour resistance on economic development, but that topic has been crowded out by the intense focus on inequality of recent years. There now prevails a quiet presumption that labour movements have made only positive and large contributions to the historical rise in living standards.

So I illustrate the relevance of labour relations to economic development through the contrasting fortunes of India’s and Japan’s cotton textile industries in the interwar period, with some glimpses of Lancashire, the USA, interwar Shanghai, etc.

TL;DR version: At the beginning of the 20th century, the Indian and the Japanese textile industries had similar levels of wages and productivity, and both were exporting to global markets. But by the 1930s, Japan had surpassed the UK to become the world’s dominant exporter of textiles; while the Indian industry withdrew behind the tariff protection of the British Raj. Technology, human capital, and industrial policy were minor determinants of this divergence, or at least they mattered conditional on labour relations.

Indian textile mills were obstructed by militant workers who defended employment levels, resisted productivity-enhancing measures, and demanded high wages relative to effort. But Japanese mills suppressed strikes and busted unions; extracted from workers much greater effort for a given increase in wages; and imposed technical & organisational changes at will. The bargaining position of workers was much weaker in Japan than in India, because Japan had a true “surplus labour” economy with a large number of workers ‘released’ from agriculture into industry. But late colonial India was rather ‘Gerschenkronian’, where employers’ options were more limited by a relatively inelastic supply of labour.

The state also mattered. The British Raj did little on behalf of Indian capitalists to restrain the exercise of monopoly power by Indian workers. Britain had neither the incentive, nor the stomach, nor the legitimacy to do much about it. But a key element of the industrial policy of the pre-war Japanese state was repression of the labour movement, which kept the labour market more competitive than it otherwise would have been.

Note: By “labour repression” I do not mean coercing workers, or suppressing wage levels, but actions which restrain the monopoly effects of worker combinations. {Edit: Nor am I saying unions are necessarily bad! I’ve written before that unions in Germany are great.} Also, I do not claim this post has any relevance for today’s developed countries. It’s mainly about labour-intensive manufacturing in historical industrialisation or in today’s developing countries.

This is my longest and ramblingest post ever (to compensate for lack of posting for most of 2017), so here’s an overview:

  1. Lancashire v India v Japan
  2. The cotton mills on the eve of the Great War
  3. Labour intensification & economic development
  4. Stagnation in India & productivity explosion in Japan
  5. Technological divergence?
  6. Bargaining over capital-labour ratios
  7. Labour resistance in economic history
  8. “The labour problem”: Lewis versus Gerschenkron
  9. Workers unite but bosses eat other bosses
  10. Strikes in India versus Japan
  11. The “demand for militancy”
  12. “Surplus Labour” in Japan & India
  13. Rationalisation & the “wage elasticity of effort”
  14. Managerial & organisational failures?
  15. Quasi-natural experiment: Japanese mills in interwar Shanghai
  16. How does the State matter?
  17. Labour repression as industrial policy in Japan
  18. The British Raj was not a “committee for managing the common affairs of the (Indian) bourgeoisie”
  19. Jute: the exception that proves the rule
  20. Random implications



Lancashire v India v Japan

As late as 1913, nearly 150 years after the onset of the Industrial Revolution, Britain still accounted for ~75% of the world’s exports of cotton textiles. The world’s second largest cotton industry, the United States, even after 100 years of high tariffs, could barely sell anything outside its own captive market. US mills had very high productivity, but the highest wages in the world still priced American textiles out of global markets. Those mills which did export were located in the South with slightly lower wages. British mills, on the other hand, had just the right combination of high productivity and high but not too high wages to dominate the unprotected export markets of the world.

Precisely the opposite prevailed in India, China, and Japan. Their cotton mills had very low productivity by global standards which largely offset their low-wage advantage. But they could coexist with Lancashire by specialising in the lowest-count (‘coarse’) grades of yarn and cloth that Britain had abandoned decades earlier.

India’s cotton textile industry was slowly gaining domestic market share at the expense of Lancashire, which had captured much of the Indian market in the 19th century under an imposed free trade regime:

[ Source: Bagchi (1972), Table 7.1. Yards of cloth and cloth-equivalent yards of yarn. ] 

Indian mills had also been exporting to global markets, including Japan; and before the end of the 19th century, Indian producers had driven British yarn out of China, which was Lancashire’s second most important export market after India.

Then Japan happened…

Until 1911 Japan had been prohibited by the unequal treaties with the western powers from imposing protective tariffs on imports, but it managed to become a net exporter of cotton yarn more than a decade earlier:

[Source: Braguinsky & Hounshell (2015)]

Mass & Lazonick (1990) summarise what happened next:

“During the mid-1930s Japan surpassed Britain as the world’s dominant exporter, while the Indian cotton industry, with an earlier start and greater capacity than the Japanese, required tariff protection to keep Japanese goods out. Even with the tariff, India became Japan’s largest market for the export of cotton cloth. Between 1914 and 1932, while Britain’s share of Indian cloth imports declined from 97 per cent to 50 per cent, Japan’s share rose from 0.1 per cent to 45 per cent. In 1937 the Japanese cotton industry accounted for 37 per cent by volume of the world’s exports of cotton cloth, whereas Britain accounted for 27 per cent and India only 3 per cent.”

Yet the two industries had similar levels of productivity and wages circa 1900. Why was it Japan, not India, which supplanted Lancashire in the 1930s?

Make no mistake: textiles were important to Japan’s industrial revolution. As late as the 1930s, when Japan was already undergoing heavy industrialisation, textiles (cotton and silk) accounted for 50-60% of Japan’s exports, ~30% of manufacturing output, and <50% of manufacturing employment (Hunter 2003, pg 36; Minami 1986, pg 28).

The decline of Lancashire is the subject of a voluminous literature in economic history. Many millions of words have also been written on the problems and premature decline of the Bombay industry in the interwar period, but the analysis is mostly qualitative. The literature specifically comparing the Indian and the Japanese textile industries in any depth is tiny — perhaps a book and a handful of papers.

The Indian literature largely blames Indian management for its inability to compete with Japan. But the well-known Clark (1987), “Why Isn’t the Whole World Developed? Lessons from the Cotton Mills”, which was not about India per se, nonetheless changed the terms of the Indian debate and refocused the problem sharply on labour. Rather than follow his lead, however, I am most persuaded by Susan Wolcott’s contributions — Indian workers, through strikes, restricted the mill owners’ scope for decision-making. But she does not explore the “surplus labour” conditions in late colonial India, nor the political context of the labour movement.


The State of the Mills on the eve of the Great War

Around the year 1910, there was a huge variation in machines per worker in the cotton textile industries of the world, with both India and Japan ranked near the bottom:

[Source: Clark (1987)]

The above says, the average textile operative in different countries operated a different number of machines. At the extremes, a weaver in New England operated more than 5 times as many power looms as a Chinese weaver (8 versus 1.5), and a spinner in New England more than 5 times as many ring spindles as a Chinese spinner (902 versus 168).

[ Ring spinner in the 1920s. Source: Wikipedia ]

The typical operative in both India and Japan operated approximately the same number of machines: about 200 ring spindles per worker and less than 2 power looms per worker.

[ Weaving with power looms. Source ]

Clark (1987) is famous for arguing that each worker operating more machines is equivalent to each working more intensely per hour: “In 1910, one New England cotton textile operative performed as much work as 1.5 British, 2.3 German, and nearly 6 Greek, Japanese, Indian, or Chinese workers”.

Only a small fraction of the variation can be explained by conventional factors such as technology, capital-labour substitution, human capital, raw material differences, product quality, etc. Technology was largely the same across countries; and there was inherently limited scope for factor substitution. So there remains a large unexplained residual which Clark calls “individual worker efficiency” or the “level of effort” in different countries.

Later, in A Farewell to Alms (2007), Clark seemed to ascribe the (cross-sectional) differences in machines per worker to workers’ taste for effort, but I reject the argument from preferences. In my opinion the principal contribution of Clark (1987) is highlighting the role of labour intensification in economic development.


Labour intensification & economic development

Under the pre-war textile technology, a large part of the growth in the capital-labour ratio came from “stretch-out”, or making each worker operate more and more machines over time. Machines were also improved to run faster, or “speed-up”.

Unless your machines were decrepit, it was usually cheaper to make each worker operate a larger number of your 20-year old machines than invest in brand new ones. Even when you did invest in new equipment, your investment paid off faster if each worker operated more of them.

So labour productivity growth in textiles came from a combination of “speed-up” and “stretch-out”, which is equivalent to “labour intensification” — making each worker exert more effort for every hour of work.

Clark (1987) notes that over the course of the 19th century the average Lancashire operative roughly doubled the number of machines tended, even as the speed of machines also increased. This higher workload makes it “unsafe to infer that the increase in output per worker resulted solely from technical progress”.

That view is powerfully supported by Bessen (2012), who estimates approximately 1/4 of the 50-fold increase in cloth output per worker-hour between 1800 and 1900 was due to each weaver simply operating more looms than they had done initially. That’s really big. But if you cut off the initial quantum leap from the hand loom (1800) to the power loom (1819) and consider only the mechanised era after 1819, the share of the productivity growth due to greater exertion of effort is even bigger: more than 60%!
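The accounting behind such a decomposition can be sketched as follows. Since cloth per worker-hour = (cloth per loom-hour) × (looms per weaver), the log growth splits additively between a “technology” term and a “more looms” term. The numbers below are illustrative stand-ins, not Bessen’s actual series:

```python
import math

# Illustrative decomposition -- not Bessen's actual series.
# cloth/worker-hour = cloth/loom-hour * looms/weaver, so in logs the
# growth splits additively between "technology" and "more looms".

def effort_share(looms_0, looms_1, out_per_loom_0, out_per_loom_1):
    """Share of log growth in cloth/worker-hour due to looms per weaver."""
    g_looms = math.log(looms_1 / looms_0)
    g_tech = math.log(out_per_loom_1 / out_per_loom_0)
    return g_looms / (g_looms + g_tech)

# Post-1819 era: weavers go from 1 loom to 8 looms, while output per
# loom-hour also rises (an assumed 4x here, purely for illustration).
share = effort_share(1, 8, 1.0, 4.0)   # log 8 / (log 8 + log 4) = 0.6
```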

During that century, the work load in operating one single machine was reduced by technological changes which made machinery easier to use, more reliable, and less prone to error. The amount of labour time necessary to produce a unit of cloth was cut drastically.

However, even as each improved machine became more “effort-saving”, management could also both increase machine speeds and make the worker operate an extra machine.

Bessen shows that in the early 19th century, a New England weaver operating a single power loom spent 70-75% of the time watching the loom. By 1900, monitoring without active intervention was reduced to ~20% of the weaver’s time, and actively performing tasks took up 80% of the time. This is because the weaver in 1900 was made to operate 8 power looms. (Of course, workers had to learn to operate extra machines.)

In other words, over the course of a century, technological change did not reduce the amount of work for the New England operative, because he or she was made by employers to handle more and faster machines. The effects of technology were counteracted by a social process.

This is also what happened in Japan — but not in India.


Stagnation in India but productivity explosion in Japan

By the early 1930s Japan had tripled its manning ratios to 600 spindles per operative and 6 power looms per weaver. The ratios at Japanese-owned mills in Shanghai had been doubled. But in India the average remained 200:1 and 2:1, i.e., unchanged since 1910. This means many more workers were employed in India to handle a given number of machines than in Japan.

[Roy (2008); also see Report of the Indian Tariff Board (1932), pp 111-2]

The change in Japan’s capital-labour (K/L) ratios is not an effect of installing more equipment (an increase in K), but an effect of making each worker operate more machines (a decrease in L). The Japanese textile operative was working harder every year. But the average Indian operative’s work load was no greater in the 1930s than in 1900-10.
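
The point that the K/L ratio can rise with no new investment at all is worth making explicit. A sketch, holding the capital stock fixed and changing only the manning ratio (mill size invented; the 200 and 600 spindles-per-operative figures are the ones cited above):

```python
# The installed capital stock (spindles) is held constant; only the
# manning ratio changes. Mill size is invented for illustration.
spindles = 60_000

workers_1910 = spindles / 200    # c. 1910 manning ratio (India and Japan)
workers_1930 = spindles / 600    # early-1930s Japanese manning ratio

kl_1910 = spindles / workers_1910
kl_1930 = spindles / workers_1930
print(kl_1910, kl_1930)          # K/L triples with zero new investment
```

The same mill employs a third as many operatives, so K/L triples even though K never moved.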

India’s stagnation in the capital-labour ratio is also reflected in labour productivity growth in the spinning of one of the most common product categories of cotton yarn:

[ Source: Wolcott & Clark (1999). For the entire spinning section, see this from Otsuka et al. (1988). Weaving productivity is more difficult to compare. See Wolcott (1997) or this from Otsuka et al. (1988). ]

Labour productivity in Indian spinning grew by only 20-50% over 50 years. But at the same time, labour productivity on the Japanese side flew into the stratosphere, an increase of 400%.

Japan’s machine speeds almost doubled, but this was not enough to account for the difference (400% against 100%). Besides, all things equal, faster machines imply more work for the operative.

Wolcott & Clark (1999) also compare the number of auxiliary workers employed in labour-intensive tasks at Indian mills in the late 1920s with the minutes actually required to complete those tasks each hour. The workers would have been idle for approximately 80% of their paid time. They conclude: “between three-quarters and four-fifths of the workers at Indian mills were supernumerary”; and “most of the [Indian] mill labor force was redundant”.
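
The idle-time arithmetic behind the “supernumerary” estimate can be sketched as follows. The task times and staffing numbers here are invented, chosen only so the result matches the roughly 80% figure they report:

```python
# Hypothetical sketch of an idle-time calculation in the Wolcott & Clark
# style. Invented numbers: suppose the auxiliary tasks on 10 machines need
# 12 worker-minutes per machine per hour, but 10 workers are assigned
# full-time to them.
required = 10 * 12       # worker-minutes of task time needed per hour
supplied = 10 * 60       # worker-minutes actually on the payroll per hour

idle_share = 1 - required / supplied
print(f"implied idle share: {idle_share:.0%}")
```

Two full-time workers could have covered the tasks; the other eight were, on this arithmetic, supernumerary.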

According to Wolcott & Clark (1999), “labor costs in India in the 1920s were at least double the costs of fixed capital”, so the abysmal labour productivity in India was overwhelmingly the most important problem. In effect, the low-wage advantage of the Indian mills was offset by employing too many workers exerting too little effort in all stages of production.

I should mention that even without Japanese competition, Indian mills had an incentive to reduce labour costs. As a result of the boom of the Great War, industrial wages in general spiked all over the world in the early 1920s, forcing textile mills to match them. The wage spike also coincided with a 15-year depression or stagnation in the international price of textiles. Here’s the ratio of wages to output price in Indian spinning:

(Weaving shows the same pattern and isn’t shown.) The equivalent chart for Japan.

[Source: Otsuka et al. (1988)]

In the 1920s, the Japanese mills compensated for the higher wages by squeezing more work out of their workers. Indian mills, however, failed to do this.


Technological Divergence?

Technological change and human capital differences were minor determinants of the Indo-Japanese disparity in machines per worker that emerged in the interwar period.

In the pre-war textile industry, the human capital requirements were modest. It required neither literacy nor strength to operate textile machinery. Although the machines were far from fully automated, the tasks required of the operative were routine and predictable. The most important requirements for a mill worker were precision, attention, diligence, and conscientiousness. The necessary skills were picked up on the job, and individual worker productivity typically peaked after a year or two (Leunig 2003). At any rate, the average worker had more industry experience on the Indian side than in Japan. (More on this below.)

As for technology… in the India chapter of a recent book edited by O’Rourke & Williamson (2017), Bishnupriya Gupta and Tirthankar Roy charge that Indian mill owners were indifferent and conservative about technology.

But did technological change reduce intrinsic labour requirements in Japan during our period in question? A worker can certainly handle more machines if he has less to do on improved machinery.

The short answer is: yes to some extent. But in general (a) the role of technological change is greatly overestimated; and (b) the role of labour relations in the successful implementation of technological change is greatly underestimated.

I’ve relegated the tedious textile techno-geekery issues to a separate post (plus industrial policy and human capital). Here I address the most common issue raised in the literature: Japan developed a machine-making industry of its own which marketed its own automatic loom.

Many books on late colonial Indian history complain that Indian mills were too slow in adopting the automatic (e.g. Bagchi, Banerjee, etc.). Indian managers had the choice of importing British, American, or Japanese automatics, but allegedly failed to seize the opportunity.

The fully automatic or Northrop loom (as opposed to the power loom) did substantially reduce intrinsic labour requirements, so that in the USA, a single weaver on average could by 1900 handle either 8 power looms or 18 automatics (Bessen 2012).

But none of that really matters! Because, at the Indian mills where automatics were adopted, including a large Tata mill in Madras (now Chennai), manning ratios were drastically lower than Japan’s 20-25 automatics per weaver (Otsuka et al. 1988, pg 130)! According to Buchanan (1936), those Madras workers would accept only 4 automatics to a man, or 6 according to the Indian Tariff Board:

…the result of the experiments so far made with them in India goes to show that it would be difficult to get weavers in Bombay to look after more than four. Even if the number of looms went up to six as in Madras, the statement below shows that the balance of advantage still lies with the ordinary loom.” (Report of the Indian Tariff Board [Cotton Textile Industry Enquiry] 1927, Vol. 1, pp 143-4)

Indian mills could not make the very expensive automatic loom pay for itself, let alone compete with Japan, without increasing the number of looms per weaver beyond 4-6.
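
Why the automatic could not pay for itself at 4-6 looms per weaver is a matter of break-even arithmetic. A sketch with invented wage and capital-cost figures (only the manning ratios come from the sources above):

```python
# Hypothetical break-even sketch. The extra (annualised) capital cost of
# an automatic loom must be covered by the labour saved per loom, which
# depends entirely on the manning ratio. Prices are invented.
def labour_cost_per_loom(annual_wage, looms_per_weaver):
    return annual_wage / looms_per_weaver

wage = 300.0
plain_loom = labour_cost_per_loom(wage, 2)    # ordinary loom, 2 per weaver
auto_at_4 = labour_cost_per_loom(wage, 4)     # automatics, Indian manning
auto_at_20 = labour_cost_per_loom(wage, 20)   # automatics, Japanese manning

extra_capital = 80.0    # invented annualised extra cost of an automatic
print(plain_loom - auto_at_4 >= extra_capital)   # False: does not pay
print(plain_loom - auto_at_20 >= extra_capital)  # True: easily pays
```

At 4 looms per weaver the labour saving per loom falls short of the extra capital cost; at Japanese manning ratios the same machine is a bargain.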

What this suggests is that it’s meaningless to talk about management failures without reference to labour relations.


Bargaining over Capital-Labour Ratios

The ability of mills to increase the capital-labour ratio depended on the relative bargaining power of workers and employers.

In the global history of the textile industry, the issues driving industrial conflict always involved, inter alia, how many looms or how many spindles should be operated by each worker and at what wage.

In the 1920s and 1930s, Lancashire weaving firms tried, and failed, to get weavers to operate more than 3-4 looms in the so-called “More Looms System“. Even within Great Britain, Scottish weavers refused to operate more than 2 looms per worker.

At the global level, labour relations in cotton textiles could be represented by two extremes. At one extreme, New England mills in the mid-19th century were able to circumvent worker resistance and engage in the “unremunerated intensification of labour”. At the other extreme, powerful unions backed by the state during the Mexican Revolution forced a collective bargaining agreement upon employers which fixed the number of machines that could be operated by each worker (Gómez-Galvarriato 2007).

Most of the world, including the famously unionised Lancashire, fell somewhere in-between. But late colonial India ended up being closer to the Mexican end of the spectrum even with little government intervention; and pre-war Japan in some ways exceeded New England.

In my opinion, the best and purest illustration that industrial relations strongly determined labour inputs in the pre-war textile industry is doffing — or the removal of finished yarn from a stopped spinning machine. This is a measure uncontaminated by human capital differences or relative factor prices.

Doffing was a manual operation which was not widely automated until after 1945. How often you stopped the machine to do doffing could be influenced by how high wages were. But once you did stop the machine, the number of doffs per worker per hour was just a matter of how fast the worker worked.

Doffing was also close to the Platonic ideal of the zero-skill activity. According to Wright (1986), boys at North Carolina mills who started doffing at age 12 peaked by age 17.

Yet in this simple task, a doffer in the USA doffed 6 times as much per hour as an adult Indian doffer.

[Clark & Wolcott 2003 in Rodrik]

Clark (2007) cites different doffing rates as evidence for “taste for effort”, but that’s rather implausible. Why did British doffers doff at only about half the rate of American doffers? We know that the British textile industry was much more unionised and its industrial relations more institutionalised than those of the New England mills.


Labour resistance in economic history

To generalise the above — if, for whatever reason, workers have some advantage in bargaining with employers, they might exercise this power in order to…

  (a) raise wages above competitive rates, which need not be too problematic;
  (b) reduce the level of effort for a given wage, through shirking or restriction of output, which could force employers to employ more workers than necessary;
  (c) resist technological or organisational changes that reduce labour inputs, in order to defend the overall level of employment.

{ Compared with the above, labour’s campaign to improve working conditions, e.g., work hours, child labour, safety regulations, etc., had relatively minor effects on industry. So I completely ignore those in this post. }

(a)

Whether workers capturing a larger share of the industry rents is a problem depends on the extent to which firms rely on profits for capital accumulation in the early phase of industrialisation. For the first 100 years of the British Industrial Revolution, the labour share of national income was falling:

Allen (2009) argues the rise in inequality in 1770-1840 was necessary to the capital accumulation that later helped raise real wages. This argument is generalised by Galor (2011). Although a more equal distribution of income can promote growth when the returns to skill are relatively high, nevertheless inequality can promote development through profit-driven capital accumulation when capital is scarce and returns to skill are relatively low.

Similarly, Japan’s convergence with the Western countries was accelerated by capital accumulation coming at the expense of real wage growth. According to Broadberry, Fukao & Zammit (2015): “Japan overtook the United Kingdom as an exporter of manufactured goods not simply through catching up in labour productivity, but through holding down real wage growth so as to enjoy a unit labour cost advantage”.

(Note! “…holding down real wage growth” does not necessarily mean wages must stagnate and workers must be immiserated. It just means labour productivity growth > real wage growth.)
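
The parenthetical point is just the definition of unit labour cost: wage per worker divided by output per worker, so wages can rise while unit labour costs fall. A sketch with invented growth factors:

```python
# Unit labour cost = wage per worker / output per worker. Invented
# numbers: real wages rise 50% while labour productivity triples.
wage_growth = 1.5
productivity_growth = 3.0

ulc_change = wage_growth / productivity_growth
print(f"unit labour cost falls to {ulc_change:.0%} of its initial level")
```

Workers here are 50% better off in real terms, yet the industry’s cost competitiveness has doubled.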

If British and Japanese workers had had greater bargaining power than they actually did in the earliest phase of industrialisation, especially in sectors most subject to technical change, would that have retarded economic development?

(b) & (c)

These points imply resistance to higher capital-labour ratios, so it’s basically just a modern form of Luddism.

Labour resistance is likely to matter most in an industry which uses a lot of labour — like pre-war textiles. But did labour resistance actually matter in history?

Mokyr (1992, 2002 pp 259-61) entertained the possibility that labour resistance in the 18th century could have been an element in continental Europe’s later and slower industrial takeoff. Randall (1989) notes that relatively simple but strongly labour-saving inventions that had existed for decades or even a century, like the gig mill, were not adopted in the British wool industry because of worker resistance. The spread of similar technologies from cotton to wool within England was also likely delayed by riots and machine-breaking.

Horn (2005, 2006) notes that the fear of revolution powerfully affected French industrialists in the early 19th century, and that this is one reason industrialisation took a ‘different’ path in France.

It’s possible the effect of labour resistance is precisely what shows up in the number of spindles per worker in the French cotton textile industry. From Marx (!!), Chapter 22 of Capital (v1), a European comparison of spindles per worker in 1866:

{ Marx, citing one Mr Redgrave, notes that the British figure is understated because “weavers are not deducted” whereas continental firms were pure spinning firms. }

Fifty years after it was jump-started by the Napoleonic blockade, French spinning may have been the most backward in Europe in terms of capital-labour ratios. Since most of the continental states got their start in mechanised spinning around the same time, it’s unlikely this reflected differences in learning. (But this is just speculation — a closer look is needed.)

The implementation of the so-called American system of mass production between the 1880s and the 1920s was predicated on putting an end to the craft control of production. Skilled artisans, who had been autonomous internal contractors, were replaced by a combination of hierarchical management, lower-skill labour, and high-speed, high-volume production technology. This phenomenon prevailed both in American textiles and in the US manufacturing sector as a whole (Lazonick 1990; Katz & Margo 2014).

The proud craft workers naturally did not sit idly by, and they used their power of combination to resist such changes through strikes.

[Source: Naidu & Yuchtman (2016)]

The tragic and violent Homestead Strike of 1892 is only the most (in)famous example of the labour conflict of this period. In the words of Rosenberg & Birdzell:

The Amalgamated Association of Iron, Steel, and Tin Workers, said to be the strongest union of its day, controlled every aspect of production at the mill. In time, its accumulation of rules about methods of production, restriction of output, and opposition to labor-saving machines opened a considerable gap between the actual and potential costs of production.”

Few ever consider the strikes of this period as something akin to Luddites or the Swing riots, but why not? Maybe they were the inevitable byproduct of the distinctive path of American economic development.

That’s how we should interpret the difference between the Indian and the Japanese textile industries: in the one case, economic progress was derailed by labour resistance, but not in the other.


The “Labour problem”: Lewis versus Gerschenkron

It’s popularly assumed that firms usually have more market power than workers in setting the terms of employment. Even Adam Smith thought that employers, thanks to their resources, can out-endure any industrial dispute with workers. Modern labour economics often views labour markets as riddled with naturally monopsonistic tendencies.

It’s also implicitly assumed in Arthur Lewis’s famous model of early industrial development that workers have no bargaining power of any kind. Labour supply is ‘unlimited’ and perfectly elastic thanks to huge ‘disguised’ unemployment in the ‘traditional’ sector. So higher demand for labour from an expanding industry does not translate into higher wages. You could just beat the fields with a stick and the peasants keep leaping out. Or at least that should be so until much of the “surplus labour” was absorbed by industry.

But Alexander Gerschenkron implicitly argued that the supply of labour was relatively inelastic — at least in the short to medium run. Labour mobility could be quite limited because information about job opportunities was poorly disseminated, and/or until the necessary infrastructure was built up. And interregional mobility could also be limited by customary inhibitions, ethnolinguistic fragmentation, etc. India, to this day, has relatively low labour mobility.

The ‘effective’ supply of labour could also be limited because peasants did not necessarily make a reliable, disciplined labour force for the factories. Employers had little information about worker quality or reliability.

The earliest factory workers were lacking in what Mokyr & Voth call “discipline capital” — non-cognitive ‘skills’ like punctuality, sobriety, reliability, docility, and pliability. Whether they had been peasants or artisans, early workers were new to industrial work habits and they had a strong preference for autonomous work arrangements. They were accustomed to setting their own pace of work in farming, domestic outwork, or artisanal workshops, and disliked the time rules and strict supervision of the factories.

All this is consistent with colourful descriptions of the early history of the textile industry in the Global South, including Japan. Mills were described as places of chaos and disorder. They were supposedly filled with workers ‘idling’, ‘loitering’, ‘socialising’, smoking, tea-drinking, or just disappearing for the day. In Japan, “twenty percent of the female operatives…absent themselves after they receive their monthly pay check” (Saxonhouse & Kiyokawa 1978). In Shanghai, it was said female mill workers could be found breast-feeding infants during work hours (Cochran 2000). Or at Mumbai mills, workers “bathed, washed clothes, ate his meals, and took naps” (Gupta 2011).

But this bit from Haber (1989) about Mexican mills has to be my favourite:

Even in Britain the early factories had to pay a premium of nearly 20% to get workers to submit to the regimentation of the shopfloor. Yet turnover was still quite high. (See Pollard 1963, Marglin 1974, Clark 1994.)

So “reliable labour” was scarce in early industrialisation, and workers could derive some market power from that situation. Employers had fewer outside options and higher replacement costs if production was disrupted, than is suggested by phrases like “surplus labour” or the “reserve army of labour”.

This is, in part, why in early industrialisation we see ‘exotic’ labour market institutions to deal with inelastic supply and reduce search & transaction costs — e.g., penal labour, indentured paupers, migrant contracts, unscrupulous “labour lords”, company towns & dormitories, vagrancy laws, criminal prosecutions for ‘absconding’, the right of private arrest by employers, pass laws which restrict mobility, etc. etc. etc. The list is endless.


Workers unite but bosses eat other bosses

When urban labour markets deepen, there are still imperfections which keep them from being a spot market for labour inputs. In the late 19th century United States, which is often depicted as a model of laissez-faire, Naidu & Yuchtman (2016) find evidence of pervasive frictions and rents in the labour market.

And it’s still possible to overstate the bargaining power of employers. Put quite simply: workers are more likely to stick together than capitalists.

Given half the chance, and out of perfectly rational motives to get the best deal possible, workers everywhere will combine to bargain for better terms of employment.

Likewise, firms typically would love to restrict competitive bidding for labour and combine in order to fix wages or suppress wage growth.

But, all else equal, workers are better at collective action than firms. They deviate less from cartel arrangements. Left to their own devices, without interference from the state or the employers, workers organise more spontaneously to exercise monopoly power.

Firms are more cut-throat, free-riding, dog-eat-dog, opportunistic defectors — at least if the product market is competitive. The ability to exercise monopsony power falls in proportion as there are many firms, ownership is diffuse, economies of scale are modest, barriers to entry are low, and the cost of changing jobs is not too high. With some notable exceptions, the pre-war textile industry in most countries fits that description.

The collective power of workers even in the early 19th century to wrestle with employers over the terms of employment is powerfully illustrated by the emergence of collective bargaining institutions in British cotton. A series of bitter strikes by male spinners forced Lancashire firms to agree to “fair wages” enshrined in public lists having the “force of laws”. The same workers also frustrated the spinning firms’ attempts to replace them by introducing equipment which might be operated by women and children. Yet, at this time in the larger British economy, technology was constantly displacing workers and real wages were stagnant.

( During the 19th century, the contrasting evolutions of trade union power in the UK and the USA are probably owed to national differences in scale/market size, firm size, and industrial concentration. )

Early industrialisation saw spontaneous strikes all the time in the absence of formal unions. To the best of my knowledge, no one has specifically studied unorganised strikes per se, but the fact that they happened frequently in early industrialisation is undeniable and testifies to the ‘organic’ solidarity and monopolistic instincts of workers.

Just two out of an infinity of examples: in the General Strike of 1919, 100,000 workers walked out of the Bombay mills and there were no unions at all at the time. Or between 1929 and 1931, non-union workers across the entire textile region of the American South staged hundreds of spontaneous walkouts to protest increased machine assignments (Wright 1986). Only when these got national attention did outside unions show up.

So we should not underestimate worker power to influence both the terms of employment and the production process — at least in early industrialisation.


Strikes in India versus Japan

The prima facie evidence of the militancy of Indian labour is the size, duration, frequency, and generality of strikes.

Between 1918 and 1938, the Indian textile industry saw at least 8 general strikes in Mumbai alone. Mumbai basically saw no production for large parts of 1928-29. And this is before counting the general strikes in the second major textile centre, Ahmedabad, or the minor centres like Sholapur and Madras (now Chennai). It’s also before counting up to a thousand smaller firm-level strikes or lockouts or other troubles at individual cotton mills.

Wolcott (2006, 2008) has compiled comparative strike data for the textile industries of Great Britain, New England, and the Bombay Presidency (which includes Mumbai and other mill cities). Note this does NOT include the big strikes that occurred in other industries, such as rail, steel, docks, municipal services, etc.

[Source: Wolcott (2006)]

{ There are no Indian data collected before the early 1920s, but the qualitative literature is clear there were large strikes often involving thousands of workers and many mills between the 1890s and 1918. See Karnik (1960) or Morris (1965).}

The turn of the 20th century in the USA and the interwar period for the UK are generally considered tense periods of labour strife and industrial conflict. Yet British and American millhands had absolutely nothing on their Indian counterparts. Whether measured in the number of workers striking per year per mill, or the number of strike events per year per worker, or the days lost per worker per year, Indian textile workers in the interwar period were true stars of the labour militancy league. See some of the descriptive stats.

Mumbai clearly had the most fractious labour relations in India — accounting for 21 million out of ~32 million worker-days lost to strikes in all of India in all industries in 1928 alone. Ahmedabad’s strike data are much more moderate. This city (now in Gujarat state) is where a true formal union under a collective bargaining system had been in existence since the early 1920s, thanks to the mediation of the native son Gandhi. Yet Ahmedabad was still clearly much more strike-prone than the UK or the USA at any time between the 1880s and the 1930s. It’s difficult to argue the positive role of unions as ‘voice’ made such a big difference with Gandhian unionism.

Before the late 1920s, Indian strikes outside Ahmedabad were largely unorganised. When there were strikes, ‘unions’ of sorts or strike committees would spontaneously emerge and then vanish after the strikes were over. This is amply documented in Newman (1981)’s detailed study of Mumbai’s general strikes. ‘Unions’ came and went with strikes.

Formal unions with real organisation led by educated middle-class outsiders were more enduring in the 1930s, but even then they struggled to maintain leadership. It was typically workers who took the initiative and the unions followed their lead. Indian labour power is about the spontaneous militancy of the workers themselves, not the success of trade unionism.

Wolcott (2008) argues “unorganized Indian workers could initiate a very large strike in as orderly and complete a manner as the most organized examples in English and U.S. labor history”. Morris (1965) quotes a government official in the 1890s noting that spontaneous organisation was made possible by an “unnamed and unwritten bond of union among the workers”.

The first few general strikes concerned wages. But the general strikes of 1928 and 1929, and some in the 1930s, were explicitly to oppose ‘rationalisation’ — the attempts by the mills to reorganise production in the face of Japanese competition through work force reduction. Remaining workers would tend more machines, but at higher wages.

This cannot be emphasised enough. Workers resisted staffing reductions even when higher wages were offered to the remaining workers. The sources are basically unanimous on this. Chandavarkar (1994) argues that forms of worker indiscipline such as loitering at the workplace or the slowing down of production can be construed as “positive forms of working-class action”. He quotes a British textile expert hired by the Sassoons to reorganise the mills:

….’The workers in Bombay’, said Fred Stones, ‘seem to favour the idea of half work for everybody rather than full work for a few’. Workers who experienced fluctuating and uncertain conditions of employment sought to slow down production and to control the intensity of effort which employers demanded of them”.

You can call this by the pejorative US term featherbedding, or by the more positive “work-sharing”. Either way, it helps explain Indian workers’ resistance to higher machine assignments: accepting greater work loads = less employment.

Labour unrest following the Great War appears to have been a global phenomenon. During the war, workers in many countries, including India and Japan, demanded pay rises to compensate for the inflation and succeeded in getting them thanks to the temporary labour shortage. In the recession that followed, firms tried to reverse the raises, but workers saw the current wage levels as customary and defended them aggressively.

Of course …the Bolshevik Revolution… might also have had something to do with it, at least in stimulating labour to new possibilities.

During the Great War and afterward, Japan also witnessed a flurry of strikes and industrial disputes in both textiles and the rest of the economy alike. Similar factors were at work as elsewhere: rising cost of living followed by recession and retrenchment.

Data on Japanese strikes are not available in terms of days lost per worker, but there was clearly an upsurge near the end of WW1. These data are for the Japanese economy as a whole:

As for textiles specifically, according to the ILO, “[f]rom 1923 to 1929, the [cotton and silk] textile industries headed the list [of strikes] with an annual average of 92 disputes, or 23 per cent”.

This is not surprising because worker discontent clearly existed. Conditions in Japan’s mills suffered from a terrible international reputation, very much as Bangladeshi garment factories have today. Saxonhouse & Wright note that an American visitor in 1900 described “Japanese mills as grim prisons, the workers attracted by misrepresentations, and kept against their wills in overcrowded, unventilated barracks”. Many Japanese contemporaries themselves characterised various strategies of labour recruitment by the mills as akin to indentured servitude. One historian of Japanese labour argues the “low wages and difficult working conditions placed [Japanese textile workers] in perhaps the worst objective situation of any group of workers” in Japan.

But there were never any general strikes in the Japanese textile industry, and those that occurred were short, sporadic episodes restricted to individual firms and were quickly quashed. Although Japan was the most rapidly industrialising country in the Global South, union membership in the country as a whole was less than 8% of the industrial work force at its peak in 1931 (Garon 1987, app. VI). Amongst the women who dominated the textile work force, it was less than 1%.


The “demand for militancy” in India & Japan

Indian workers may have replicated in the city and on the shop floor the “solidarity networks” of the villages. Risk-averse, “safety first” peasants living in uncertain environments formed informal insurance schemes as mutual support in bad times; and this income-sharing cooperative equilibrium may have been unusually strong in India, with its rain-dependent, monsoonal agriculture (Roy 2007).

The “moral economy” of peasantry may therefore have facilitated Indian workers’ spontaneous collective action as industrial workers. And it was maintained in the city, as in the villages, through a combination of social sanctions and threats of violence. When the mill workers struck, their goal was not only to stop wage cuts, but also to maintain the employment of their ‘village’ fellows. Workers could also return to their home villages and rely on income-sharing support systems when their resources ran out during long strikes.

That is a speculation advanced by Wolcott (1994, 2008), but it’s consistent with the recreation of village governance and informal justice institutions in city neighbourhoods, such as the panchayat, as vividly described in Chandavarkar’s (1994) amazing sociological descriptions of the Bombay neighbourhoods and tenements (chawls). The city connections were just as important as the village connections, in terms of activism and support systems during strikes.

But why was there so much more labour militancy in India than in Japan? Why did Indian workers display much more strike solidarity and discipline than the Japanese mill hands?

The most outstanding difference in labour force characteristics between India and Japan was gender. The Indian mill hands were ~80% male, a result of male bias in migration, and this is reflected in Bombay’s gender ratios:

The Japanese textile work force was, famously, more than 80% unmarried adolescent women. Saxonhouse (1976) observes that “the Japanese textile industry’s labor force is clearly the most female and the most transient” of all national textile industries.

Wolcott (1994), in line with the Japanese literature, attributes the near-zero union penetration of its work force to the transience and low attachment of the female labourers. The country girls intended only to work until marriage and then return to the farm. So they had little incentive for union-organising efforts because any gains from collective action would be “reaped by subsequent generations” and “strikers themselves would receive little of the benefits despite incurring all of the costs”.

But the Indian mills employed a much higher fraction of committed, long-term workers than the Japanese industry:

[Source: Saxonhouse & Kiyokawa (1978)]

Strong job attachment can be a source of institutional rigidity. Committed workers are more tenacious in their demands than transient workers. In Lancashire, the elite male spinners were prickly and assertive about their craft status (Huberman 1996). Even in Japan, most of the mill hands who did strike emerged from the minority of commuting male workers. At mills in the US South, middle-aged men with families often did work, such as doffing, which elsewhere was done by children and teenagers (Wright 1986).

According to Saxonhouse & Wright (1984), the textile work force of the American South

…came to be dominated by men who had invested their identities and career aspirations in their jobs… These Southern mills were not unionized, but the mature mill village community developed strong notions of appropriate work organization and employer behavior. This kind of “moral economy” perception made drastic changes difficult. The bitter strikes [in the late 1920s and early 1930s]…. were marked by a sense of betrayal on the part of the workers at attempts to change accepted job definitions and work norms”.

In addition to their transience, the Japanese female workers were also recruited at a great distance from the mill site (e.g., Okinawa girls working at an Osaka mill) and were required to live in strictly surveilled company dormitories. There would be no neighbourhood support networks. Tsurumi (1992), pointing to the difference in unrest from before and after the widespread introduction of the dormitory system in the late 1890s, is absolutely convinced that the surveillance and inaccessibility of the workers in dormitories was critical for preventing strikes and union organisation.

In Japan, the labour supply decision also belonged to the household, not to the girl herself. The mills signed employment contracts with her father or brother, and he received the much-coveted signing bonus (although these contracts were rarely enforced when the girls ran away). Hunter (2003), which goes into some detail about recruitment practices over time, mentions that when female workers did strike, the girls’ families often sided with the employers! If this holds generally, it’s an amazing contrast with Indian workers’ reliance on their village connections.

If the above are valid, then whatever the reason female labour was so much more widely available in Japan than in India, that availability must have mattered to the Japanese textile industry’s advantage.

It is speculative, but plausible, that even before the modern era, female labour may have been more ‘commodified’ in Japan than in India. The more advanced commercialisation of the Tokugawa period (1603-1868) in Japan had already created a deeper market for rural labour. Saito (2011) suggests this may be related to the rising female age of marriage in that period, explicitly comparing it with the “Western European marriage pattern” of Hajnal and Laslett.

The India-Japan divergence in textile industrialisation is, in my opinion, the single most persuasive case that gender matters in economic development. Given the low age of marriage in India, and the underrepresentation of women in general, Indian industry was denied a substantial fraction of potential “surplus labour”.


“Surplus labour” in Japan versus India

But the Japanese mills also had more “outside options” than the Indian mills, in part because the supply of labour was more elastic in Japan than in India.

The Japanese mills suppressed strikes and thwarted union penetration. When unorganised strikes popped up, employers defused them through a mix of minor concessions, unleashing hired gangsters, police arrests of the agitators, and getting rid of the most troublesome malcontents. They could just get more of them where they came from.

Japanese employers could therefore squeeze as much work as possible out of their workers — make each of them handle more machines — without fear of resistance which might disrupt production. Those workers who didn’t like the intensity of work could escape — exercise their “exit option”. Again, the employers could find more where they came from.

But Indian mills, as an industry, were far more dependent on a core of committed, long-tenure workers, and individual mills walked on eggshells in fear of potential protest from workers. The Indian employers found it quite difficult to get rid of the uncooperative ones and obtain adequate replacements for them.

A country as impoverished as India, with such a large potential pool of labour for industry, ‘should’ have been able to intensify work effort, as Japan did. And they ‘should’ have been able to use scabs/blacklegs to break strikes more easily. Instead, Indian mills faced a combination of wage rigidity and labour resistance even worse than the textile industries of the much richer countries during the 1920s and 1930s.

Pre-1945 Japan was almost a parody of the Lewis model of development — or more accurately, of the Fei-Ranis extension with balanced growth in agriculture and industry. Growth in population and agricultural productivity induced large-scale migration from the countryside to the cities. At the same time, the remaining farm households increased the hours and days worked per year (Mosk). So real wage growth was stagnant or moderate despite rising industrial demand for labour.

This process slowed down after WW1, when real wages in the economy as a whole increased and the first signs of some wage rigidity appeared. But Japanese cities had by then a large pool of workers in the urban informal sector which the mills could tap (Taira 1978, 1989; Francks).

If Japan seemed to fit some version of the “surplus labour” model, then late colonial India was more ‘Gerschenkronian‘. First and foremost, there was basically no structural transformation in late colonial India: in 1875-1947, agriculture claimed ~75% of the work force. Any labour ‘released’ from agriculture would be due to population growth. But the ‘effective’ supply of labour available to industry was reduced by low rates of geographical mobility, especially in the short run.

Low rates of mobility can reduce employers’ options if they can’t get the ‘right’ workers or get rid of the obstreperous ones. According to Rosenbloom (1998), even strike-breaking became easier in the United States in the late 19th century thanks to rising worker mobility in an integrated national labour market.

The contrast in mobility is supported by differences in national wage convergence: there was a lot of it in Japan before 1945 but not much in India whether at the national or regional level (Saito 2006; Collins 1999).

Another indicator of how much “surplus labour” a developing economy has is the reservation wage in manufacturing, which should be set as a premium over the opportunity wage in agriculture.

[ Source: Bagchi (1972), Table 5.3; Mosk (1995), Tables 2.1 & 2.6; Japanese data are quinquennial averages ]

The male Mumbai ratio fluctuated between 2 and 3 throughout the entire period, but female wages in Japanese textiles were usually not that much higher than in agriculture!

Wolcott (2008) also shows textile wage ratios for other cities in India, plus the UK and New England in the 1890s:

India’s faux surplus-labour economy was manifest in a variety of other ways. Unlike double- or triple-shifts in Japan, most Indian mills never went beyond a single shift on a wide scale until the late 1930s.

Indian mills relied on decentralised labour recruitment for an unusually long time. Roy (2008a, 2008b) observes that the “labour lord” was a ubiquitous feature of early industrialisation in most countries. But as labour markets deepened, management eventually took over the recruitment function themselves. This was the case in Japan, but not in India.

Indian foremen or ‘jobbers’ recruited workers through their own connections based on caste, kinship, village origins, and city neighbourhoods. These jobbers were powerful authority figures with patronage relationships.

This system of recruitment resulted in an extreme form of labour market segmentation: each textile mill effectively faced numerous separate labour markets defined by each jobber’s network. So as a hypothetical example, the fancy weaving department at a mill could be composed entirely of Muslim guild weavers from a single village; or the lint waste removers entirely of untouchables from another village (Newman 1981).

Thus there was no single “urban labour market” that the mills could tap, and the employers’ search costs were high.

(Can I say it again? It’s still like that in many developing countries. Housing, job search, credit allocation, occupational sorting, etc. happen through a combination of chain migration and social networks. Friends and relatives from the same village migrate to the same city neighbourhoods where friends and relatives are located and often find jobs with the same employer in the same occupation and borrow money to tide themselves over from the same money-lenders. See Munshi 2014.)

Jobbers often abused their position by taking bribes from prospective hires, and they might even help instigate strikes. But the mill owners found the jobbers indispensable for recruitment, because up to a third of the Bombay mill force were casual day labourers, i.e., hired on a daily basis as the fluctuating demand conditions in the product market required. And they also served as temporary replacements for the mills’ many absentee workers.

But the rates of absenteeism at Indian mills were no worse than in Japan! In terms of turnover (annual % of workers leaving and being replaced at firms), both the Japanese and the Indian industries started with unbelievably high rates of instability, but by the 1930s the “shift toward a lower labor turnover was much more marked in India than in Japan”:

[Source: Saxonhouse & Kiyokawa (1978)]

In order to reduce turnover, the Japanese mills did offer more job amenities (often described as “paternalistic welfare policies”) in the form of schooling, health clinics, nicer housing, etc. These cannot be considered a failure, because turnover did fall over time. But the primary means of job separation in Japanese cotton remained ‘running away’ or ‘absconding’ !

As Saxonhouse & Kiyokawa (1978) put it, “Japanese textile industry achieved world dominance in the first three decades of the 20th century using seemingly poorly motivated labor”.

An interpretation which would reconcile all this information is that in its half-century of existence the Japanese textile industry, but not the Indian, had achieved optimal turnover.

Turnover is usually evidence of workers’ ability to exit, i.e., something which reduces a firm’s control over workers. But it may have had quite different effects in Japan.

On the one hand, high turnover has costs in terms of losing experienced workers and having to hire new ones you must train. A worker could not go from 4 looms to 6 looms at the snap of a finger. There was a costly period of training and learning, in terms of foregone output, in order to do the extra work (Bessen 2003). So the initial costs of higher machine assignments are lower with more experienced workers.

On the other hand, high turnover has benefits because you prevent ‘fairness’ norms about pay or work loads from emerging. Since returns to experience in textiles diminished after a couple of years, employers did not necessarily benefit from workers who stayed in their jobs for years upon years, except for the handful selected to be supervisors.

The benefits of turnover can exceed the costs if (a) the required skills are relatively easy to acquire; and (b) the required skills are general to the industry and not specific to the firm, which was the case in interwar textiles; and (c) the fraction of new hires with some industry experience at firms was rising over time.

The ‘maturation’ of the local labour market in terms of a critical level of industry-level experience is tremendously stressed by Saxonhouse (1977) for Japan; Wright (1981) for the US South; Leunig (2003) and Bessen (2003) for New England.

For the Japanese textile industry, this was a self-reinforcing process, since it was constantly shedding workers in the 1920s and 1930s. The Japanese mills therefore had a big enough “reserve army” of workers with just enough textile industry experience but not too much firm tenure. They could get rid of troublesome workers without worrying too much about lost production.

However, the Indian industry faced a smaller pool of ‘excessively’ experienced workers — despite at least 70 years of existence. If those guys went on strike or you had to lock them out, you could not replace them as easily.

YET Indian mills also had a “reserve army” of labourers — the ‘badlis’ or the casual day labourers mentioned earlier. But they were not reliable as scabs/blacklegs, nor reliable as long-term replacements for the regular workers. There are two theories:

  • Mazumdar (1973): the core of the stable work force at Indian mills was adult men with families permanently settled in the city. But the ‘badli’ temps were primarily men with families still in the village and never became a stable source of labour because they returned to their farms on a seasonal basis. That could be 4-6 months.
  • Newman (1981): replacing a striking work force implied overcoming the decentralised system of recruitment, which was totally against the interest of the jobbers; and the badlis themselves were dependent on jobbers. It was a catch-22.

All this is just a roundabout way of saying that agricultural productivity in late colonial India was stagnant but Japan’s was (relatively) dynamic, and therefore each had different rates of ‘release’ of agricultural labour. But the mechanism by which agricultural performance mattered to industry was, ultimately, in increasing employers’ strike-breaking and union-busting capacity.


The Great Depression & bargaining power

Another indicator of the relative bargaining power of workers and employers is wage flexibility.

At the beginning of the Great Depression, there was some downward nominal wage rigidity in most of the world — wages were slow to fall despite rising unemployment and contraction in output and profits. Employers are usually reluctant to cut wages in a recession because they don’t want to demoralise workers, or simply for fear of revolt by workers in defense of customary wage levels.

According to Hanes (1993), the earliest signs of increased nominal wage rigidity in the United States can be dated to the late 19th century, when strikes and industrial disputes proliferated.

Textile wages fell in both India and Japan, but the process was much faster and much more drastic in Japan:

[ Source: Bagchi (1972), Table 5.3; Ramseyer; Bank of Japan ]

In India, it took 5 years after the onset of the depression for nominal wages in the textile industry to begin falling, and the wage cut precipitated the general strike of 1934. In Japan, by contrast, nominal wages began falling immediately in 1929-30. (In the real wage data above, the initial nominal cut is obscured by the severe deflation of 1930.) Japanese mills could shed workers and slash wages at will.

So late colonial India had the wage rigidity of a “developed country” despite being abjectly poor.


Rationalisation & the “wage elasticity of effort”

Indian workers’ militancy influenced production, not necessarily by raising wages per se, but by raising the wage at which a given level of effort would be supplied. Indian mill owners could not get workers to increase effort at manageable cost.

Gupta (2011) argues that textile workers in Japan had more incentive to increase effort than in India because real wages grew faster in Japan.

[ Source: Bagchi (1972), Table 5.3; Otsuka et al. (1988) ]

Even ignoring the massive wage clawback in Japan during the 1930s, the direction of causation between wage and effort is still ambiguous. But what’s not ambiguous: for any given increase in wages, Japanese mills squeezed much more productivity out of their workers than Indian mills.

Whereas labour productivity growth exceeded real wage growth in Japan over the entire period 1900-1938, labour productivity in Indian textiles stagnated in the 1920s and 1930s, but real wages went up anyway.

[Source: Wolcott (1994)]

Japan’s ability to increase labour productivity without paying “too much” in extra wages was the cause of its vigorous expansion. The Indian industry’s lack of competitiveness was precisely its inability to achieve a wage-effort combination as good as Japan’s.

Most scholars (e.g., Morris 1965) argue that before the Great War, Indian mills could afford to be ‘wasteful’ with labour use because wages were so low. But when Japanese competition arrived on the scene in India, the mills found they had to reorganise production and reduce labour costs, through some mix of wage cuts, increasing work intensity, and reducing employment levels.

In Indian textile history, this is known as ‘rationalisation’. And the employers’ attempt to reduce employment at the mills while increasing wages for the remaining workers provoked the General Strikes of 1928 and 1929.

As a result, most Indian mills did not ‘rationalise’, but at least 15 of the largest mills did. Wolcott (1994, 1997) and Wolcott & Clark (1999) construct a “labour efficiency index” for mills in various Indian regions (lower is better):

[Source: Wolcott (1994)]

But the ‘rationalising’ mills in India were no more profitable than before ‘rationalisation’ because the gains were offset by the extra wages paid to the work force retained to operate more machines. The profit rates of the ‘rationalisers’ were only about 2% against the 1.7% of the non-rationalisers. Wolcott & Clark’s estimate for the increase in Indian wages associated with a 1% reduction in labour requirements is 70-101%.
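The offsetting arithmetic is simple: labour cost per unit of output is the wage times the labour requirement per unit, so any cut in labour requirements is neutralised if wages rise roughly in proportion. A minimal sketch of the offset Wolcott & Clark describe (the 25% labour saving and 30% wage rise below are hypothetical numbers chosen purely for illustration, not their estimates):

```python
def labour_cost_per_unit(wage, workers_per_unit):
    """Wage bill per unit of output = wage per worker x workers needed per unit."""
    return wage * workers_per_unit

# Normalised baseline before rationalisation.
before = labour_cost_per_unit(1.00, 1.00)

# Hypothetical rationalisation: 25% fewer workers per unit of output,
# bought with a 30% wage rise for the retained workers.
after = labour_cost_per_unit(1.30, 0.75)

# Most of the labour saving is clawed back by the higher wages.
print(after / before)  # 0.975: only a ~2.5% net reduction in unit labour cost
```

Hence a ‘rationalising’ mill could raise physical labour productivity substantially yet see almost no improvement in profitability, which is the Wolcott-Clark finding in a nutshell.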

Conversely, when mills were set up outside of Mumbai to escape from the higher wages, the lower productivity offset the wage savings. Thus Wolcott and Clark argue there existed across India the same pattern uncovered by Clark (1987) for the whole world: “labour inefficiency and wages varied inversely in such a way as to keep wage costs fairly similar across India”.

Gupta (2011), which is intended as a challenge to the Wolcott-Clark view, inadvertently supplies evidence in support. She estimates Bombay had 33-45% higher labour productivity, and Ahmedabad 22-25% higher, than cotton mills in the rest of India with less unionisation and lower wages. But then she calculates the difference in wage cost per unit of output:

Consistent with Wolcott & Clark, the labour costs in Ahmedabad were actually higher (the wrong sign!) and those in Mumbai only slightly lower than the rest of India.
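Gupta’s calculation follows directly from the definition of unit labour cost: wage per worker divided by output per worker. A sketch of the logic, using the productivity premia quoted above but with wage premia that are purely hypothetical placeholders (her actual wage figures are not reproduced here):

```python
def relative_ulc(wage_premium, productivity_premium):
    """Unit labour cost relative to the rest of India (rest-of-India index = 1.0).

    Both arguments are fractions, e.g. 0.25 for '25% higher than rest of India'.
    ULC = relative wage / relative labour productivity.
    """
    return (1 + wage_premium) / (1 + productivity_premium)

# Ahmedabad-style case: assumed 40% wage premium vs the quoted ~25%
# productivity premium -> unit labour costs come out HIGHER ("the wrong sign").
print(relative_ulc(0.40, 0.25))  # 1.12

# Bombay-style case: assumed 35% wage premium vs the quoted ~40%
# productivity premium -> unit labour costs only slightly lower.
print(relative_ulc(0.35, 0.40))  # ~0.96
```

The point is that a productivity advantage only lowers unit costs if it exceeds the wage premium paid to get it, which is exactly what the high-wage Indian centres failed to achieve.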

Gandhian collective bargaining at Ahmedabad apparently made no difference in terms of employers’ ability to reduce unit labour costs.

Most of the time Indian strikes did not result in a win for workers in terms of formal employer concessions. But the credible threat of strikes was sufficient to make management wary of worker hostility. According to Wolcott (2006), “perhaps as much as 8.1 percent of capital was idled each year by strikes” in the mill cities of the Bombay Presidency.

But it was not just direct costs. Indian workers drastically raised the marginal transaction cost of implementing organisational and technical changes. Unless you had really big changes to implement, and no other choice, the huge cost and risk were not worth it.

In the Wolcott-Clark view, there was no managerial failure in India. The majority of Indian mills that did not bother with ‘rationalisation’ really lost nothing by it.

The above is consistent with surveys of union effects on firms in the USA, such as Hirsch (2004) and Doucouliagos, Freeman & Laroche (2017): while the effect of unions on firm-level productivity is highly variable and on average zero, the effects are negative for firm profitability, investment, and management discretion.

Given the rigidity produced by labour militancy, Indian mills also found it easier to lobby for tariff protection which they eventually got. That suited the Raj just fine because it would also protect Lancashire’s exports to India and reduce any urban unrest which might destabilise British rule.

The connection between labour militancy and trade protectionism is very similar to what Gómez-Galvarriato (2002, 2007, 2013) describes for revolutionary Mexico, where textile mills found production profitable even when labour blocked new technology, as long as foreign competition was also eliminated.


Managerial & organisational failures?

Gupta (2011), as well as Roy (2008), argue the failure to undertake efficiency measures at Indian mills attests to numerous organisational and institutional failures. (I describe both views more extensively here and here.)

This is plausible. Alice Amsden spends the first half of her book, The Rise of the Rest, persuasively cataloguing the many managerial failures at manufacturing firms in Asia and Latin America before the Second World War.

There’s also plenty of modern evidence collected by economists that management styles differ drastically across countries, and firm-level productivity varies systematically with management characteristics and ownership structures. (Businessmen and business historians: well duh!) Management may be a technology that managers must train in and learn by doing.

Evidence from a randomised controlled trial in the late 2000s — using Indian textile mills no less! — does show that quite wasteful managerial practices can persist. Productivity rose by 17% in those textile mills which took the free advice from a professional management consultancy.

But why hadn’t management eliminated those wasteful practices on their own in that RCT? There are two clear reasons: high tariffs reduced import competition; and family ownership encouraged excessive centralisation. The family owners did not trust middle management, and the few male family members concentrated even the lowliest decisions at the top. Firm size in India (as in other developing countries) is frequently constrained by family ownership.

But the argument made by Gupta and Roy is that Indian mill owners/managers were too hands-off and delegated too much authority.

And it’s not clear whether a decentralised or a centralised system was ‘better’ in the pre-war textile industry. Indian mills had a form of internal subcontracting system based on jobbers. But so did Lancashire, which probably had the most decentralised system possible, with spinners and weavers being quasi-independent firms which hired their own employees, inside the larger Lancashire firms. Yet American mill management was more centralised and ‘Chandlerian‘. Japanese mills evolved from something like India in the 1880s to an extreme form of hierarchical control by the 1920s.

Probably the possibility of centralisation depended on labour resistance. Again, management discretion is endogenous with respect to labour relations!

One suggestive piece of evidence against the management-failure view of Indian textiles is that Japanese management at India’s only Japanese-owned mill, Toyo Podar, could not make things work:

“Despite his reformist zeal and his determination to break labour resistance to the introduction of fresh methods and novel techniques in the industry, Mr T. Sasakura, the Managing Director of the newly taken over Toyo Podar Mills, had to abandon the use of the universal winding machine because wages proved to be too high. The returns of introducing universal winding machines could not justify their expense unless weavers agreed to mind three or four looms and this proved impossible to secure.” (Chandavarkar 1994, p343)

Toyo Podar was intended as the vanguard of further Japanese investment in India. But it was eventually sold off and nothing came of it.

YET Japanese management transformed cotton mills in Shanghai’s extraterritorial International Settlement.


Japanese mills in interwar Shanghai

Japanese textile mills in Shanghai illustrate that management does matter, but it mattered through its control of labour.

Before the mid-1920s, Japanese-owned mills in Shanghai had operated much like other cotton mills. They relied on local foremen or “number ones” — the Chinese equivalent of the Indian jobber.

Like the Indian jobbers, the “number ones” were powerful authority figures who had complex patronage relationships with the workers under their charge, often sharing native place origins. They recruited, trained, and supervised the workers; and they also collected ‘gifts’ from them. “The personalistic rule of shop floor supervisors indeed made production lines in China a ‘foreman’s fiefdom‘.”

Unlike the Indian jobber, however, the Chinese number-ones were usually members of the Green Gang — a pervasive criminal organisation with strong links to the Chinese Communist Party and to the Nationalists (the Guomindang).

The Japanese owners were determined to replicate at their Shanghai mills the management practices prevailing in Japan. That meant exerting direct control over all aspects of labour management from recruitment to shop floor supervision. That also meant replacing the gang-connected “number ones” with Japanese supervisors. And the factory work force would be converted from male-dominated (as in Indian and Chinese mills) to predominantly female (as in Japanese mills).

The change also involved intensive monitoring and disciplining of workers, including corporal punishment. The Japanese paid a premium for these disamenities: wages at Japanese mills in Shanghai were ~25% higher than at non-Japanese mills.

But the attempt at managerial transformation was powerfully opposed by the number-ones and the male workers, who activated their ties with the Green Gang and the Chinese Communist Party. The Japanese mills were then greeted with strikes at least as numerous, and as concentrated in time, as any faced by the Indian mills. One of the largest Japanese textile firms in Shanghai was hit with 44 strikes between February 1925 and November 1927, “by far the largest number of strikes in a three-year period against any business in Chinese history”, with almost 2 million worker-days lost!

It did not help matters, of course, that all this coincided with the anti-foreign outburst of the May 30th Movement and the anti-Japanese boycotts.

But the transformative political event was the end of the United Front between the Communists and the Nationalists. Before 1927, they had been in alliance — for the communists, it was part of Stalin’s international policy under the Comintern. But when the Nationalists broke this alliance, they purged the Communists and destroyed independent trade unions, with much gang violence:

In March 1927, on Chiang’s triumphant return to Shanghai as commander of the Northern Expedition, … [Green Gang leaders] financed and directed the combination of gangsters and military forces that carried out the attack, smashing unions, killing hundreds of workers and labor organizers, and driving Communists out of Shanghai”. (Cochran 2000)

“The alliance between gangsters and the Nationalist regime formed with the purge of Communist-union leaders in 1927 and the subsequent “reorganization” of unions under the Nationalist government. In Shanghai, Nationalist control over labor was tantamount to Green Gang dominance of the labor market and local unions. By the early 1930s many leading union politicians and local government officials had joined the Green Gang. The Green Gang’s dominance of labor markets, exercised at the grassroots through contractors and foremen, expanded dramatically during the 1930s as a result of its ties with local officialdom in Shanghai. The Green Gang controlled not only labor recruitment at the factory level but also the city’s official labor union and its administrative units that oversaw labor issues”. (Frazier 2004)

With the communists out of the way, and the Shanghai labour movement emasculated, the Japanese mill owners then made a bargain with the Green Gang. The Japanese got the female workers they wanted; cut down employment levels; and intensified work through higher machine/worker assignments. In exchange, the “number ones” were paid off with higher salaries, and the Green Gang could keep their labour recruitment racket (which was almost a semi-slave trade according to Honig 1983).

This appears to be the result of the Japanese bargain with the Green Gang:

[Source: Zeitz (2013)]

Once again, productivities of both labour and capital went up, but labour productivity growth substantially exceeded that of the machinery. Machines per worker at Japanese-owned mills in Shanghai were not increased to the same level as in Japan, but they were higher than at Chinese-owned mills (Duus). Despite the higher wages at Japanese-owned mills, Zeitz says: “Normalizing unit costs in Japan at 1, I estimate production costs for the Japanese-, Chinese-, and British-owned sectors as 0.79, 1.21, and 1.58, respectively.”

I have no idea why British and Chinese mills could not strike a similar deal with the Green Gang. Regardless, the Japanese reorganisation of Shanghai mills would have been impossible without the political turn against the communists and the trade unions.

Sources: Cochran; Duus; Frazier (1994); Frazier (2004); Honig; Perry; Zeitz (2013).


How does the State matter?

Of course labour historians have always been keenly interested in state and labour, but from the point of view of workers’ rights. Their lead is now followed by some in the current generation of economic historians focused on inequality and income distribution.

Both groups give a lot of credit to democratisation and the labour movement in pressuring the state into reforms which changed the distribution of income. That view of the long-run improvement in living standards is embodied in this chart from the new CORE intro econ textbook:

But the ‘purely’ economic forces in the form of technical change and demand for labour were surely much more important in improving living standards than political bargains.

I cannot prove, but also suspect: the hostile responses of the state to labour militancy in the early phase of industrialisation must have made a greater contribution to living standards than either the labour movement or legislation could have possibly made in the later phases of industrialisation.

Some of the most famous acts of labour resistance and protest in the British Industrial Revolution were usually mobs rioting and lashing out — the Luddites, the Swing riots, Peterloo, etc. These were easily quelled by a state acting decisively to protect Property Rights and Innovation, or siding with Capital at the expense of Labour. (The right choice of verbiage depends on your native dialect of political economy.) The British state responded by fielding thousands of troops on the ground, hanging machine-breakers, and deporting troublesome ‘Jacobins’ to lonely corners of the world.

But the British state also placed restrictions on the labour movement. The Combination Acts of 1799 essentially made strikes and unions illegal by declaring them monopolistic restraints of trade. (Yet there were no laws against employer combinations.) This was later repealed and unions made theoretically legal, but the Combinations of Workmen Act 1825 kept striking technically illegal or subject to civil liability until the last quarter of the 19th century.

Under the Master & Servant laws, British workers were criminally prosecuted for breach of contract due to “absence from work, moving to another employer, misbehaviour or insubordination” (Wallis). And there were thousands of prosecutions by special magistrates often drawn from amongst employers, and this law was not repealed until 1875.

In the United States of the Gilded Age, court injunctions and numerous other legal remedies were available to employers to deal with striking workers. The strong presumption in favour of freedom of contract meant US courts frequently interpreted strikes or actions in support of strikes (boycotts, picketing, etc.) as violations of property rights — in fact, until 1914, strikes could be subject to the anti-trust laws! “Under prevailing legal interpretation, strikes were often found by the courts to be conspiracies in restraint of trade with the result that the apparatus of government was often arrayed against labor” (Rosenbloom 2008). American federal and state troops were also deployed in labour disputes: “between 1877 and 1892, the modal use of American militia was to quell labor unrest” (Naidu & Yuchtman 2016).

The greater success rate of big strikes in France was likely enabled by accommodating interventions of the French state, at least when compared with the United States (Friedman 1988).

But did any of these matter to economic development? I really don’t know.

I’m not suggesting that state repression of labour might have worked by lowering wages. Nor am I asking about working conditions.

Rather, I’m asking: by restraining the monopolistic strength of worker combinations, did the state increase the ability of firms to undertake measures which increased productivity?

Such issues are largely neglected in economic history and development. All arguments about technology, human capital, and industrial policy implicitly hold industrial relations constant — take them as given.

In the 1970s and 1980s, there was a lot of scholarly attention to the “urban bias” in developing countries, where the state appeased urban rent-seekers including trade unions. (See for example Bates’s States and Markets in Tropical Africa.) But that’s also now a largely forgotten line of inquiry.

In the case of pre-war Japan, I believe the most important “industrial policy” may have been the state’s weakening of workers’ bargaining power vis-à-vis employers. By contrast, the British Raj did basically nothing to restrain workers or quell strikes on behalf of Indian capitalists — especially compared with what the British or the American state had done to support manufacturers in the 19th century.


Japan: Labour repression as industrial policy

By the 1920s, that pre-war picture of the transient and uncommitted female mill hand was out of date in Japan. According to Molony (1991):

“The overwhelming majority of farm girls, however, either remained industrial workers throughout their teen years and often even after marriage to fellow industrial workers, or else retired at marriage to lead the life of urban working-class housewives. Even during the depression-ridden 1930s, only 22.5 percent of the mill workers of rural origin returned to the farm, and of these only a quarter were in registered marriages within a year of their retirement”.

Then there’s the interwar surge in labour disorder and greater union activism, a sign of increasing commitment by workers to their jobs. Early in the Great Depression, Japanese textile workers faced their firms’ draconian retrenchments and, like their Indian counterparts, they lashed out against ‘rationalisation’.

“Two of the largest disputes in 1930 occurred at the Toyo Muslin Company, where female workers struck to defend their jobs and seniority allowances against company cutbacks. The second strike, which lasted nearly two months, involved 2,050 women and 449 men. Sodomei and the centrist Federation of Japanese Labor Unions actively assisted the workers, succeeding in organizing a few hundred of the women. Tempers mounted as the second strike wore on. On 24 October, hundreds of angry workers clashed with right-wing thugs recruited by the company.” (Garon 1987, p58)

The strike at Toyo Muslin, one of Japan’s leading textile firms, became a cri de coeur of the labour movement:

“In this broad outline, the [Toyo Muslin] dispute was typical of dozens of textile strikes in 1930. The additional event that made the Toyo Muslin workers famous was the riotous demonstration on October 24 joined by hundreds of young female workers. A support group led by the prominent left-wing socialist Kato Kanjii organized the demonstration as part of its effort to build a regional general strike out of the widespread unrest in Nankatsu factories. Representatives from 115 factories in the area had attended a “Factory Representatives’ Council” on October and resolved to support all area strikes in hopes of bringing on a general strike.

“At a second meeting on the 21st, they resolved to rally in support of the Toyo Muslin strikers on the night of the 24th…When authorities extinguished the street lighting, the demonstrators marched through darkened streets toward the Toyo Muslin factory singing the Internationale and shouting slogans. They threw stones, smashed streetcar windows, and fought police, who arrested 197 demonstrators, including 4 of the women. Over 20 workers were injured. The event was subsequently dubbed simply “the street war.” (Gordon 1992, p245).

This suggests you could not take it for granted that strikes in the Japanese textile industry were naturally rare and sporadic. Given half the chance, Japan’s textile workers would have organised collectively to improve their bargaining position with employers.

So the state’s attitude to monopoly unionism probably mattered.

[ Aside: Many critiques of the “surplus labour” model in the post-WW2 period focused on how, even with lots of “surplus labour”, rural migrants to the cities in developing countries were not being fully ‘absorbed’ by industry. One reason for this was either direct state intervention or state-backed unions enhanced urban labour’s bargaining power and introduced rigidities into the labour market. This is true for independent India under Congress rule, as well as cases like Egypt under Nasser (Mabro 1967). ]

Before the mid-1920s, the Japanese state was unambiguously “intolerant of the efforts of workers to organize themselves to advance their own interests” (Garon 1987, pg 29). Article 17 of the Public Order & Police Law outlawed strikes and unions de facto through a ban on ‘instigation’ and ‘incitement’ of “others to strike, join unions, or engage in collective bargaining”.

And unlike some other countries with similar laws, the Japanese police regularly enforced them. A combination of police arrests and military deployment put down four of the largest strikes in the labour unrest following the Great War (Yawata Steel Works, Tokyo Streetcar Company, Kawasaki Shipyards, & Mitsubishi Shipyards).

[Source: Garon (1987)]

But the interwar period also saw divisions within the Japanese government between reformists at the Home Ministry who advocated liberal labour legislation in order to encourage the ‘responsible’ elements within the labour movement; and conservatives at the Justice Ministry who opposed such measures and proceeded with unabashed repression.

Article 17 was repealed, but repression continued under the new Peace Preservation Law: “After the 1925 repeal of Article 17, administrative regulations provided sanctions for continued repressive measures, and the number of workers arrested in strikes actually rose. Laws restricting assembly, ‘dangerous meetings’, or seditious publications served to hinder the labor movement as well” (Gordon 1988).

The reformists also got factory laws regulating working conditions passed, but in a way this was to placate workers before they became too radical. Even the reformists’ idea of “sound trade unionism” was the “vertical enterprise union”, i.e. a cooperative, intra-firm body which consulted with management on matters of workers’ interests. That’s more or less what Japan has today. The reformers blanched at the idea of centralised, industry-wide ‘horizontal’ unions which would bargain collectively with employers, such as most countries in Western Europe have today. (Recently, the OECD ‘warned’ about the trend for decentralising negotiations….)

But because of this internal schizophrenia, the Japanese state tended to use a mix of police repression and bureaucratic conciliation at the same time. The government did try to mediate more and more labour disputes. But such interventions had this kind of flavour, as seen in the big Tokyo Streetcar Strike of 1934:

“Yoshida Shigeru rammed a “compromise” settlement through the conciliation committee that would reduce wages 20 percent [emphasis mine]. When the social democratic union rejected the proposal and resumed the strike, Yoshida simply negotiated with a Japanist union that stayed on the job. The police, for their part, arrested several strike leaders and protected scabs until the hapless union agreed to accept Yoshida’s terms”. [Garon 1987]

Who prevailed in practice in 1920-37, the reformists or the conservatives? Did this division really matter to labour’s bargaining power? Well, one possible indicator is the labour share of Japan’s aggregate non-agricultural income:

[Source: Minami (1986)]

Despite the surge in labour’s share after 1915, it was dramatically clawed back after its peak in 1923. (The correlation with economic growth rates is modest at best.) Over the same period, the labour share rose continuously in France and fell modestly in the UK from a much higher base.

The fall in labour’s share in Japan was bigger than Germany’s during the 1930s! And all this was before the militarists took over Japan after 1937 and abolished all independent unions in 1940.


The British Raj was not a “committee for managing the common affairs of the [Indian] bourgeoisie”

If Japan had a true developmental state whose priority was to support business, then the first priority of the British Raj was the preservation of British rule. To serve that end, it became more expedient to accommodate Indian labour than collaborate with Indian capitalists in constraining labour’s strength.

But this is only evident by comparison.

In one of those innumerable British enquiries on Indian labour issues, you can find extraordinary exchanges between labour leaders and businessmen serving as witnesses. In one instance, Victor Sassoon, an owner of several large mills in India as well as a luminary of Bombay’s Iraqi-Jewish community, is questioned about ‘terrorism’ by S.A. Dange, the Marathi labour leader and one of the founders of the Communist Party of India!! In another exchange, the non-communist labour leader, N. M. Joshi of the All-Indian Trade Union Congress, probes Mr Sasakura of a Japanese-owned cotton mill in India about the police repression of radicals in Japan.

In other words, in the surrealism of interwar British India, a communist could interrogate a captain of Indian industry, and the other major unionist could badger an important foreign investor in the country!

This would have been completely unthinkable in Japan. At the same time as those committee hearings, this was happening in Japan under “Taisho democracy”:

“[in 1928] the Tanaka cabinet struck at the entire labor and tenant movement by dissolving the 21,000-member Hyogikai and the 62,000-member Japan Farmers’ Union. The government also destroyed the Labor-Farmer Party, which had drawn 190,000 votes in the February election, more than any other proletarian party. In all three cases, Home Minister Suzuki obliterated legal, mass-based organizations on the grounds that they contained a handful of Communist Party members, or simply people who sympathized with the Communist Party”. (Garon 1987)

Yet, India is full of scholars and journalists hysterical about the repression of Labour by the colonial State on behalf of Capital, or some such twaddle (e.g., DeSousa 2010).

A saner assessment is this: like Japan, the British Raj was also kind of schizophrenic in its carrot-and-stick approach on labour issues. And like the Congress Party that succeeded it, the British Raj tried to marginalise the communists from the labour movement by competitively placating workers at the expense of business productivity.

On the one hand, the Raj feared that lightning strikes would lead to public disorder. In the interwar period, the colonial state dealt with a morass of troubles taking place on many fronts:

  • labour disputes in Indian- & British-owned industries
  • labour disputes in strategic state assets like railways
  • communist agitation in industrial disputes
  • the peaceful nationalist movement of Gandhi and Congress
  • the minority violent nationalist movement (e.g. Bose)
  • urban ethnoreligious conflict
  • possibility of agrarian revolt, since agriculture was hard-hit in the interwar period

The law & order motive was quite strong, because Indian strikes could and did get violent, especially in the longer and larger ones. The disturbances of 1928-9 saw “communal riots” in Mumbai, the Indian euphemism for pogroms by one group against another.

On the other hand, the British sought to defuse worker grievances and reduce the chance of strikes in the first place.

First and foremost, Britain’s accommodation with labour was reflected in a variety of factory legislations affecting factory conditions, working hours, workmen’s compensation for injuries, etc. Labour representatives were also guaranteed seats in the Central Legislative Assembly and contested seats in the provincial and municipal assemblies. Not to mention, dozens of committees of official inquiry investigated everything ranging from strikes to wages to mill conditions.

The British Raj was trying to create an institutional framework for “responsible trade unions” to engage in negotiations and conciliation with employers, and avert or settle strikes before they got out of hand.

However, the British model of “responsible trade unionism” was more liberal than what the Japanese reformists were contemplating around the same time. It could be the independent British trade unionist engaged assertively but peacefully in collective bargaining. But there was also an actual Indian example to follow. In 1920, before it was technically legal to do so, Gandhi had helped found the Ahmedabad Textile Labour Association and won over the city’s mill owners to a true collective bargaining system. Without renouncing the credible threat of strikes, it won better working conditions (and extraction of rents) through negotiation and arbitration.

So Britain enacted the Trade Union Act for India in 1926 — which set up a legal registration system for formal unions and exempted them from civil and criminal liability in the event of strikes, one of the first priorities sought by most labour movements around the world. By contrast, in Japan, numerous union legislations were mooted and debated between 1920 and 1931, but none ever passed. Not even a liability exemption law!

Although the legislation partly reflected the increasingly liberal norms about labour issues within Britain, it was primarily intended to steer the Indian labour movement in a ‘healthy’, non-communist direction and keep it focused purely on industrial disputes. Besides, it was better that Indian workers were represented by a real union leadership who could be negotiated with, rather than remain a headless mass of aggressive but inarticulate grievances, as had been so often the case in Bombay and Calcutta.

I’m not arguing the British Raj enhanced Indian workers’ bargaining power. I agree with Roy & Swamy (2016) that most of the legislations probably made no difference in that respect. How could it have done, when, as we’ve already seen, Indian workers apparently already possessed extraordinary spontaneous strike capabilities?

And, as we’ve also already seen, it made no difference whether there was a real union organisation, as in Ahmedabad, or strikers spontaneously taking the initiative with unions scrambling to follow, as in Bombay. The effect on the textile mills was the same: an organisational rigidity which made it difficult for employers to reduce unit labour costs.

British officials also grew more alarmed by the apparent strength of the communist union in the General Strikes of 1928-9. Communists even set up “workers’ committees” aka cells inside textile mills (Newman 1981, pp 228-9). In Ahmedabad, the Gandhian unionists were forced to compete for workers’ loyalty with the pugnacious local communists, who seemed to appeal particularly to Muslim weavers (Patel).

So as part of a greater effort to stamp out communist influence in the labour movement, there would now be more direct state intervention in labour disputes. Governors of Bombay or the police commissioner had often informally mediated during strikes and pressured the mill owners to settle. But new legislation formally provided for a Labour Officer to receive worker grievances and intercede with businesses. Another law put restrictions on the fines and penalties employers could charge against wages, although under the piece rate system that was an important means of discipline at textile mills.

At the same time, there would be some legal regulations of strikes. The Trades Disputes Act of 1929 prohibited strikes without notice or strikes in public utilities, as well as strikes with “any object other than the furtherance of a trade dispute within the trade or industry”. Some argue in the 1930s the British relied more on loosely defined police powers under the Bombay Special Emergency Powers Act and the Criminal Law Amendment Act in order to act against labour.

But here’s another contrast with Japan — the one time that the clause on “political strikes” was invoked to arrest some of the leaders of the 1934 general strike, the Bombay High Court threw out the case! (Karnik, pg 259)

In the Meerut conspiracy case (1929), the Raj prosecuted communists accused of conspiring to overthrow the King-Emperor, although some of the arrested were British radicals agitating in India. But as Stolte (2013) shows, British intelligence was concerned with the accused’s “international connections” (i.e., Comintern and the Soviet Union), not their labour activism per se.

At any rate, none of that had much lasting effect on anything, because the British didn’t really have the stomach for truly extirpating communists out of the Indian labour movement. After the 1929 strike ended, membership in most formal unions melted away, but wildcat or lightning strikes remained common through the early depression years. The communist penetration of textile workers came back with a vengeance in the General Strike of 1934, as though nothing had changed since 1929.

The Communist Party was then banned in 1934, but the ban was lifted in 1937; and during WW2, when Congress leaders were all in prison, the communists were free because they supported the war!!! Communist parties around the world had switched from anti-war to pro-war in June 1941.

In comparative global and historical perspective, the British attitude was hesitant and stumbling. India’s strikes speak eloquently for themselves. If Bombay-scale strikes had ever materialised in the USA in the 1890s, there would have followed, as actually happened in so many American strikes, federal troops, state militias, Pinkertons, courts issuing injunctions against strikes like spittle out of judges’ mouths, and one railway car after another full of scabs arriving from all parts of the country.

If strikes had brought Osaka or Tokyo to a virtual standstill for 18 months, as happened in Mumbai in 1928-29, the Japanese government would have declared martial law and compelled the strikers back to work many months earlier! Given its record of interventions, it’s unimaginable the Japanese government would have allowed such a thing to happen to the country’s greatest export industry.

The Raj was not squeamish if violence was deemed necessary. After all, they put whole cities like Peshawar and Sholapur under shoot-to-kill martial law declarations, in order to stamp out civil disorder or hunt down violent revolutionaries.

And the British were perfectly capable of unambiguously anti-labour use of force if the dispute affected a strategic state asset like the Indian railways. The several large railway strikes in 1928-30 across the country saw some use of auxiliary militias against workers rebelling against … you guessed it… ‘rationalisation’.

Another category of labour dispute fit for similar treatment was on behalf of British-owned plantations in eastern India. In the so-called Chandpur Incident of 1921, tea plantation coolies in Assam decided to run away from their miserable working conditions and repatriate en masse to their home provinces, without the employers’ permission. The situation ended up with the military police firing on them at a railway station (Karnik).

In the final analysis the British Raj was not a “committee for managing the common affairs of the [Indian] bourgeoisie”. Labour’s inhibition of Indian business was secondary, as long as it was not a challenge to British rule, or as long as it did not turn violent.

The British Raj had little incentive to support Indian business by restraining Indian labour’s exercise of monopoly power. But it also had little ability: there were just too many strikes, and too little political legitimacy to do what Japan did.


Jute: The exception that proves the rule

In The Empire of Cotton, Sven Beckert argued that the British Raj undermined Indian industrialisation through the imposition of free trade, but at least “Indian cotton industrialists … drew on the colonial state for the mobilization of labor”.

In his Polanyist hallucination Britain had ‘enclosed’ the Indian countryside and “drove huge numbers of workers into cities and into cotton mills”. But if Beckert had ever turned his eyes away from cotton, he might have noticed that the British Raj actually did engage in a substantial and deliberate “mobilisation of labour” — but on behalf of British-owned industries in eastern India.

In one of the most famous works of Indian economic history, Private investment in India, 1900-1939, Amiya Kumar Bagchi suggests this might have made an important difference to the evolution of industrial relations in cotton versus jute textiles.

The cotton mills of western India were owned by Indians — Parsis, Gujaratis, Ismaili Muslims, and Indian Jews. But the industrial and export-oriented commodity sector of eastern India was largely owned by the British — the tea plantations of Assam, the coal mines of Bihar and Bengal, and the jute textile mills of Calcutta. This was the longest British-ruled part of India where a kind of “Latin American” planter and extractive class dominated.

After cotton, the jute textile industry of Calcutta was the second modern industry of late colonial India, principally devoted to producing gunny sacks — an important packing material for trade before WW2. It also had some famous military uses.

Eastern India was the primary destination of the vast majority of India’s internal migrants who sought work outside their home region. And the legal apparatus for the recruitment of labour for the eastern plantations included some of the most illiberal aspects of British colonial labour policy, complete with worker tying, anti-enticement laws, legal restrictions on ‘absconding’, and rights of private arrest by employers.

The majority of the Bombay and Ahmedabad mills recruited from their immediate hinterlands, with the Bombay mills disproportionately filled with migrants from a single district. But the Calcutta jute mills, feeding off the influx of the officially encouraged migrations, were filled with workers from all over eastern and north-central India. Here’s a map with recruitment areas for jute (blue) and cotton (red):

[Based on information from Morris (1965) and Chakrabarty (2000)]

Although Calcutta is a Bengali-speaking city, Bengalis were a minority in the jute mills whose work force was much more heterogeneous than the cotton mills. Many US labour historians have long argued union-organising and strike solidarity were impeded by the ethnic heterogeneity of workers in the USA.

Unlike the Bombay Presidency’s relationship with Indian mill owners in western India, the provincial governments of Assam, Bihar, and Bengal were often creatures of the British industrialists in eastern India.

In his study of the Calcutta jute mills, Chakrabarty (2000) notes that the central government in Delhi had trouble getting factory legislation enforced by officials of the Bengal Presidency. He lists many instances but the most interesting is this 1928 comment by the Viceroy Lord Irwin, better known to the world as Lord Halifax, Britain’s Foreign Secretary in 1938-40:

“We had a discussion in council this week on the contemplated enquiry into labour matters…. no Local Government except Bengal had any objection to our announcing now that such an enquiry would be held; but the Bengal Government entered a strong protest… Bengal have on other occasions lately a disposition to act as a brake in questions of this kind… The influence of the employers — and particularly the European employers is strong there, and they were not likely to receive news of an enquiry with joy”. [p80]

All of these things seem reflected in labour market data.

Unlike cotton wages, jute textile wages look much more like those of a “surplus labour economy”, whether in nominal wages or real wages or the premium over agriculture. In fact the real wages of jute textile workers in Calcutta barely registered an increase over time.

[Source: Wolcott (2015)]

[Source: Gupta (2011)]

[Source: Bagchi (1972), Table 5.3]

Strikes in Calcutta jute were by no means trivial but less frequent than in cotton and involved substantially fewer man-days lost — about 3.5 days per worker per year in 1921-38, against 10.5 days per worker per year in cotton. Only about 4% of the jute workers belonged to a formal union in 1929, compared with 42% in Bombay and 28% in Ahmedabad.

Wolcott (2015) attempts to infer the market power of workers from the co-movement of wages with profits in various Indian industries, though the data she assembles are a bit ambiguous. For most of the interwar period, cotton wages trend up regardless of profitability in the industry, whereas in jute this only seems to happen at the end of the 1930s when Calcutta was infested with communists. (See the charts.)
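Wolcott's co-movement test can be sketched as a simple correlation exercise. The series below are invented purely for illustration (they are not her data): a cotton-like industry where wages trend up regardless of profits, and a jute-like industry where wages track profits:

```python
# Illustrative sketch (invented figures, not Wolcott's series): infer
# workers' market power from how wages co-move with industry profits.
# Wages that keep rising while profits slump suggest workers could
# defend pay regardless of the industry's capacity to pay; wages that
# track profits down suggest workers bore the brunt of downturns.

def correlation(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical interwar-style indices
profits_cotton = [100, 80, 60, 70, 50, 40, 55, 65, 75, 90]
wages_cotton   = [100, 102, 104, 105, 107, 108, 110, 111, 113, 115]  # up regardless
profits_jute   = [100, 85, 70, 75, 60, 55, 70, 85, 100, 120]
wages_jute     = [100, 98, 96, 97, 95, 94, 96, 99, 104, 112]         # tracks profits

print(f"cotton wage-profit correlation: {correlation(wages_cotton, profits_cotton):+.2f}")
print(f"jute   wage-profit correlation: {correlation(wages_jute, profits_jute):+.2f}")
```

On these made-up numbers, the cotton-like correlation comes out low or negative and the jute-like one strongly positive, which is roughly the contrast Wolcott reads in the interwar data: cotton workers with enough market power to push wages up through slumps, jute workers without it until the late 1930s.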

Most importantly, the capital-labour ratio in jute textiles was fairly close to that in Dundee — Scotland’s version of Manchester for jute:

[Source: Buchanan 1936]

In cotton textiles, machines per worker in Lancashire were at least 3-4 times that in India. But in jute textiles, the Dundee ratio was between 1 and 2 times that in Calcutta.


Random implications

  1. Most people who are inspired by the trade and industrial policies of East Asia, or the 19th century USA, tend to overlook or discount the labour repression that accompanied them. South Korea had militant trade unions kept in check by Park Chung Hee and Chun Doo-hwan. Chiang Kai-shek was not nice to unions, either. Post-war Japan, thanks to MacArthur, was the most progressive in East Asia, but Japanese unions are “enterprise unions” and there is no centralised collective bargaining as you find in Western Europe.
  2. After independence, the Congress Party was in one sense typical of “Third World socialism” — it competed with communists for the loyalty of the urban workers. Perhaps we can call it “democratic Nasserism”, but it also stands comparison with Mexico’s Institutional Revolutionary Party whose power was also founded on a combination of crony capitalism and trade unionism. If the communists had an incentive for confrontation to signal their value to workers, then the state had an incentive to prevent them, and Congress-affiliated unions had an incentive to preemptively offer rents to workers.
  3. The interwar Indo-Japanese divergence in textiles foreshadows China’s commanding lead over India after 1990 in exports of labour-intensive manufactures. A neglected element of that lead may be that China was unencumbered by post-independence India’s industrial relations and resistance from refractory workers. Even within India, states with more pro-worker legislation experienced slower growth in manufacturing in the period 1958–1992.
  4. Rodrik & Subramanian (2005) argue (inter alia) that the origins of India’s economic reform went back to the 1977 election in which the Congress Party was defeated for the first time since independence. After the 1980 return to power, Congress became selectively more pro-business, and state governments in India which were allied with Congress “experienced differentially higher growth rates in registered manufacturing”. But the late 1970s and early 1980s break in the trend growth rate (also found by DeLong) coincides with a pronounced change in labour disputes. The height of trade union power in India had been in the 1960s and 1970s. And Indian unions were literally Luddite, blocking the introduction of technology. For example, Candland (pp 30-31) says the bank tellers’ union blocked the introduction of computers for 20 years! Yet during the martial law of 1975-77, the Congress Party switched from being collaborators with trade unions to breaking them. The reduction of union power has not been explored as a contributor to India’s growth transformation.


Filed under: cotton, cotton textiles, India, Japan, labour, Lancashire, New England textiles, strikes Tagged: Bishnupriya Gupta, Bombay textile industry, Bombay textiles, Greg Clark, industrial relations, Japanese cotton, labour resistance, Lancashire, Susan Wolcott, Tirthankar Roy
