A paper idea in Stigler (1964) on Oligopoly

Next week, I am teaching collusive agreements in my price theory class. I decided to take a different approach to the discussion than the one usually found in textbooks. The approach consists of showing how economic thought on a topic has evolved over time. For collusion, I decided to discuss George Stigler’s 1964 article on the theory of oligopoly, published in the Journal of Political Economy.

Simply put, Stigler proposes a simple approach for stating how collusive agreements can break apart by asking how many extra sales a firm can obtain by cutting its prices without being detected by other firms. Stigler argued that detection got easier as the number of buyers increased or as concentration increased. He also argued that detection became harder if buyers did not repeat purchases and if the market grew through the addition of new customers, since firms cannot detect whether the growth of other firms is due to new customers or to old customers switching over. Detection also became harder with a greater number of sellers, but he argued that this was of equal (or perhaps lesser) importance than low repeat-sales rates or the arrival of new customers into the market.

This is pretty standard price theory and it is well executed. After postulating the theory, Stigler throws the empirical kitchen sink at it to see if, broadly speaking, his point is confirmed. One interesting regression is from table 5 in the article (illustrated below). That regression estimated rates for a line of advertising in newspaper markets (i.e., cities) conditional on circulation in 1939 (it’s a cross-section of 53 markets). The regression itself is uninteresting to Stigler, as he wants to consider the residuals. Why? Because he could classify the residuals by the structure of the market (markets with only one newspaper versus markets with two). The idea is that markets with more newspapers should be marked by lower rates, as collusive agreements tend to be harder to enforce. Stigler thought this confirmed his idea “that the number of buyers, the proportion of new buyers, and the relative sizes of firms are as important as the number of rivals” (p. 56).
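A minimal sketch of that residual logic, with made-up numbers rather than Stigler’s data (the 10 percent monopoly markup below is a pure assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cross-section of city newspaper markets (not Stigler's data):
# advertising rate per line, circulation, and number of newspapers per city.
n = 53
circulation = rng.uniform(10_000, 300_000, n)
n_papers = rng.integers(1, 3, n)  # 1 = monopoly market, 2 = duopoly market
# Assume monopoly markets price 10% higher, all else equal (illustrative).
rate = 0.02 * circulation ** 0.8 * np.where(n_papers == 1, 1.1, 1.0) \
       * rng.lognormal(0, 0.05, n)

# Step 1: regress log rate on log circulation, pooling all markets.
b, a = np.polyfit(np.log(circulation), np.log(rate), 1)
residuals = np.log(rate) - (a + b * np.log(circulation))

# Step 2: classify the residuals by market structure, as in Stigler's table 5.
for k in (1, 2):
    print(f"{k} newspaper(s): mean residual = {residuals[n_papers == k].mean():+.3f}")
# If collusion is easier with one seller, monopoly markets should show
# systematically positive residuals (rates above what circulation predicts).
```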

While looking at Stigler’s regression, I thought that there might be an interesting economic history paper to write. Notice that the source of the data used is cited below the table. Retracing that source and checking whether a panel can be constructed (there are clearly multiple volumes of the Market and Newspaper Statistics) could allow for something interesting to be done. Indeed, a panel would allow a direct test of the new-customers hypothesis by adding a population growth variable, on top of increasing the number of observations. Both of those advantages could allow a test of the relative importance of the mechanisms highlighted by Stigler.

A paper of this kind, I believe, would be immensely interesting. It is always worth engaging with important theoretical articles on their own terms. As Stigler set this test as one of his illustrations, a paper that extends his test would engage Stigler on his own terms and could provide a usefully contained discussion of the evolution of the theory of oligopoly. I could honestly see this published in journals like History of Political Economy or the Journal of the History of Economic Thought, or in journals of economic history such as Cliometrica, the European Review of Economic History or Explorations in Economic History.

Counting the missing poor in pre-industrial societies

There is a new paper available at Cliometrica. It is co-authored by Mathieu Lefebvre, Pierre Pestieau and Gregory Ponthiere and it deals with how the poor were counted in the past. More precisely, if the poor had “a survival disadvantage,” they died earlier and dropped out of the counted population. As the authors make clear, “poor individuals, facing worse survival conditions than non-poor ones, are under-represented in the studied populations, which pushes poverty measures downwards.” However, any good economist would agree that people who died in a year X (say 1688) ought to have their living standards considered for that same year (Amartya Sen made the same point about missing women). If not, you will undercount the poor and misestimate their actual material misery.

So what do Lefebvre et al. do to deal with this? They adapt what looks like a population transition matrix (which is generally used to study in- and out-migration alongside natural changes in population — see example 10.15 in this favorite mathematical economics textbook of mine) to correctly estimate what the poor population would have been in a given year. Obviously, some assumptions have to be made regarding fertility and mortality differentials with the rich — but ranges allow for differing estimates that give a “rough idea” of the problem’s size. What is particularly neat — and something I had never thought of — is that the authors recognize that “it is not necessarily the case that a higher evolutionary advantage for the non-poor over the poor pushes measured poverty down.” Indeed, they point out that “when downward social mobility is high,” poverty measures can be pushed upward by “a stronger evolutionary advantage for the non-poor.” If the rich can become poor, then the bias could work in the opposite direction (overstating rather than understating poverty). This is further added to their “transition matrix” (I do not have a better term, so I am using the one I use in class).
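A minimal sketch of the accounting problem, with made-up survival and mobility rates (the paper’s actual calibration differs):

```python
import numpy as np

# Two states: poor (0), non-poor (1). Hypothetical one-period survival rates
# with a survival disadvantage for the poor; all numbers are illustrative.
survival = np.array([0.94, 0.98])        # the poor die more often
mobility = np.array([[0.90, 0.10],       # poor: P(stay poor), P(escape)
                     [0.05, 0.95]])      # non-poor: P(fall into poverty), P(stay)

start = np.array([0.40, 0.60])           # true start-of-period shares

alive = start * survival                 # survivors by starting state
end = alive @ mobility                   # survivors then move between states

measured_poverty = end[0] / end.sum()    # what a census of survivors records
true_poverty = start[0]                  # share poor before deaths are purged

print(f"measured: {measured_poverty:.3f}  vs  true: {true_poverty:.3f}")
# Differential mortality purges the poor from the counted population, so the
# measured rate understates the true one -- unless downward mobility from the
# non-poor state is strong enough to push the bias the other way.
```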

What are their results? Under assumptions of low downward mobility, pre-industrial poverty in England is understated by 10 to 50 percentage points (that is huge — it means that, at worst, 75% of England was poor circa 1688 — I am very skeptical about the high-end proportion, but I can buy a 35-40% figure without a sweat). What is interesting, though, is that they find that higher downward mobility would bring down the proportion by 5 percentage points. The authors do not speculate much as to how likely downward mobility was, but I am going to assume that it was low, and that their results would be more relevant if the methodology were applied to 19th-century America (which was highly mobile up and down — a fact that many fail to appreciate).

The price of nails since 1695 and its lessons

There is a new paper in the Journal of Economic Perspectives. Its author, Dan Sichel, studies the price of nails since 1695 (image below). Most of you have already tuned out by now. Please don’t: the price of nails is full of lessons about economic growth.

Indeed, Sichel is clear in the article’s subtitle about why we should care — nail prices offer “a window into economic change.” Why? Because we can use them to track the evolution of productivity over centuries.

Take a profit-maximizing firm and set up a constrained optimization problem like the one below. For simplicity, assume that there is only one input, labor. Assume also that the firm is in a relatively competitive market, which removes its ability to affect prices so that, when you work out your solutions, all the quantity-related variables are subsumed into an n term that represents the firm’s share of the market, which inches close to zero.

If you take your first-order conditions and solve for A (the technological scalar), you will find the following identity.
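For concreteness, here is a minimal reconstruction, assuming a technology $Q = AL^{\alpha}$ and price-taking behavior (the paper’s exact form may differ):

$$\max_{L}\; pAL^{\alpha} - wL \quad\Rightarrow\quad \alpha p A L^{\alpha-1} = w \quad\Rightarrow\quad A = \frac{1}{\alpha L^{\alpha-1}}\cdot\frac{w}{p},$$

with the quantity-related term folding into the n term described above, so that A moves with the ratio of the wage to the output price.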

What does this mean? Ignore the n and consider only w and p. If wages go up, marginal costs also increase. From the standpoint of a profit-maximizing firm trying to produce a given quantity, if prices (i.e., marginal revenue) remained the same, there must have been an increase in total factor productivity (A). Expressed in log form, this means that changes in total factor productivity are equal to αW – αP. This means that, if you have estimates of output and input prices, you can estimate total factor productivity with minimal data. This is essentially what Sichel does (and what Douglass North did in 1968 when estimating shipping productivity). All Sichel needs to do is rearrange the identity above to explain price changes. This is how he gets the table below.

The table above showcases the strength of Sichel’s application of a relatively simple tool. Consider, for example, the period from 1791 to 1820. Real nail prices declined about 0.4 percent a year even though the cost of all inputs increased noticeably. This means that total factor productivity played a powerful role in pushing prices down (he estimates that advances in multifactor productivity pulled down nail prices by an average of 1.5 percentage points per year). This is massive and suggestive of great efficiency gains in America’s nail industry! In fact, these efficiency increases continued and accelerated to 1860 (reinforcing the thesis of economic historians like Lindert and Williamson in Unequal Gains that America had caught up to Britain by the Civil War).
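A back-of-the-envelope version of that decomposition (the cost shares and input price growth below are illustrative assumptions; only the 0.4 percent annual price decline comes from the paper):

```python
# Dual growth accounting: output price growth equals share-weighted input
# price growth minus TFP growth, so TFP growth = sum(s_i * w_i_hat) - p_hat.
# Shares and input price growth are assumed for illustration, not Sichel's.
input_growth = {"labor": 0.012, "materials": 0.010}   # annual log changes (assumed)
shares = {"labor": 0.5, "materials": 0.5}             # cost shares (assumed)
p_hat = -0.004                                        # real nail prices, 1791-1820

weighted_input_growth = sum(shares[i] * input_growth[i] for i in shares)
tfp_growth = weighted_input_growth - p_hat
print(f"implied TFP growth: {tfp_growth:.1%} per year")  # 1.5% with these numbers
```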

I know you probably think that the price of nails is boring, but this is a great paper for teaching how profit-maximization (and constrained optimization) logic can be used to overcome data paucity and speak to important economic changes in the past.

Subtle ways to sneak in rationality

Generally, when you take a microeconomics class you get to see isoquants. I mean, I hope you get to see them (some principles classes don’t introduce them, leaving them to intermediate classes). But when you do, they look like this:

It’s a pretty conventional approach. However, there is a neat article in History of Political Economy by Peter Lloyd (2012) titled “The Discovery of the Isoquant.” The paper assigns the original idea not to the usual suspect, Abba Lerner in 1933, but to W.E. Johnson in 1913, as A.L. Bowley was already referring to his “isoquant” in a work dated 1924 (from which the image is drawn). But what is more interesting than the originator of the idea is how the idea has morphed from another of its early formulations. In the 1920s and 1930s, Ragnar Frisch was teaching his price theory classes in Norway and depicted isoquants in the following manner in his lecture notes.

Do you notice something different about Frisch’s 1929 (or 1930) lectures relative to the usual isoquants we know and love today? Look at the end of each isoquant. They seem to “arc,” do they not? How could an isoquant have such a bend? Most economists are so used to isoquants that do not bend (except for perfect complements) that it will take a minute to answer. Well, here is the answer: it’s because Frisch was assuming that the production function from which the isoquant is derived had a maximum, which means that the marginal product of an input could become negative. This is in stark contrast with our usual assumption of production functions with smoothly declining (but never negative) marginal products. This is why Frisch’s isoquants include an arc (a backward bend).
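You can see the bend for yourself by tracing the level curves of any production function with an interior maximum. Here is a sketch using a hypothetical quadratic technology (not Frisch’s own function):

```python
import numpy as np
import matplotlib.pyplot as plt

# A hypothetical production function with a maximum: marginal products turn
# negative once either input passes its bliss level (here, 5).
def f(L, K):
    return 10 * L - L**2 + 10 * K - K**2

L, K = np.meshgrid(np.linspace(0.1, 8, 200), np.linspace(0.1, 8, 200))
cs = plt.contour(L, K, f(L, K), levels=[20, 30, 40, 45])
plt.clabel(cs)
plt.xlabel("L"); plt.ylabel("K")
plt.title("Isoquants of a production function with a maximum")
plt.show()
# Past L = 5 or K = 5 a marginal product is negative, so the level curves
# close into rings: the "arcs" in Frisch's drawings. Dropping the region of
# negative marginal products leaves only the familiar convex segments.
```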

Why did we move away from Frisch’s depiction? Well, think about the economic meaning of a negative marginal product. It means that a firm would be better off scaling down production regardless of anything else. It is straightforward to understand why, in all settings, a firm would automatically withdraw from such an “uneconomic” zone. In other words, we should never expect firms to continually operate in a zone of negative marginal product. Ergo, the “bend”/“arc” is economically trivial or irrelevant. Removing it simplifies the discussion and formulation but also does something subtle — it sneaks in a claim of rationality on the part of firm owners and operators.

This is a good setup for a question to ask your students in an advanced microeconomics class — one that isn’t just about the mathematics but about what the mathematical formulations mean economically!

“Using word analysis to track the evolution of emotional well-being in nineteenth-century industrializing Britain”

This is the title of a paper in Historical Methods that I believe should convince you of two things. The first, and this applies to scholars in economic history, is that Historical Methods is a highly interesting journal. It tends to publish new and original work by economists, historians, sociologists and anthropologists who are well-versed in statistical analysis and data construction. The articles published there often offer a chance to discover solutions to longstanding problems through both the interaction of different fields and the creation of new data.

The second is that it is becoming increasingly hard to hold the view that the industrial revolution was “a wash.” I described this view elsewhere as believing one or more of the following claims: “living standards did not increase for the poor; only the rich got richer; the cities were dirty and the poor suffered from ill-health; the artisans were crowded out; the infernal machines of the Revolution dumbed down workers.” Since the 1960s, many articles and books have confirmed that the industrial revolution was marked by rising wages and incomes as well as long-run improvements in nutrition, mortality and education. The debates that persist focus on the pace of these improvements and the timing of the sustained rise that is commonly observed (i.e., when did it start?).

The new paper in Historical Methods that I am mentioning here suggests that these many articles and books are correct. The author, Pierre Lack, takes all the 19th-century pamphlets published in Britain and available online and analyzes the vocabulary contained within them. Lack’s idea is to use the fact that books became immensely cheap (books were becoming more affordable through both falling prices and rising incomes — see table above) to evaluate emotional well-being through the words contained in them. What Lack finds is that there were no improvements in emotional well-being as proxied by the types of words in those pamphlets.
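To give a flavor of the mechanics, here is a toy version of lexicon-based scoring (the word lists and procedure are invented for illustration and are not Lack’s):

```python
# Toy lexicon-based scoring; the lexicons below are illustrative only.
POSITIVE = {"joy", "hope", "comfort", "plenty"}
NEGATIVE = {"misery", "hunger", "despair", "want"}

def emotional_score(text: str) -> float:
    """Net positive share of emotion words in a document."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)

# Score each year's pamphlets and trace the series across the century.
pamphlets_by_year = {1810: "hunger and misery ...", 1860: "hope and plenty ..."}
trend = {year: emotional_score(text) for year, text in pamphlets_by_year.items()}
print(trend)
```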

But how could this be positively tied to the industrial revolution not being a wash? Because if you believe that there is such a thing as a hedonic treadmill (i.e., more income only lets us adjust our aspirations upward, so that income has no impact on happiness), you cannot hold many of the beliefs associated with the industrial revolution being a wash. For example, if you think that living standards for the poor did not rise while other dimensions of their well-being (e.g., health, the urban environment, working conditions) fell, then the graph produced by Lack should have exhibited a downward trend!

This is not the only belief associated with the “industrial revolution was a wash” view that cannot withstand Lack’s new paper. One frequently advanced factor that purportedly affects emotional well-being is inequality. Because we care about our relative position (e.g., I am happier if my neighbor has a worse car than mine), rising inequality should be associated with falling emotional well-being (that was, for example, the case that The Spirit Level by Wilkinson and Pickett tried to advance). However, if you believe that Britain experienced rising inequality (it did at first and then it fell: according to Jeffrey Williamson, inequality rose to 1860 and fell to 1913), then Lack’s data should show falling emotional well-being. It does not, which means that it is quite hard to hold the view that the revolution was a wash.

This is probably my favorite paper at Historical Methods and I hope you will like it too. I also hope that you will add it to your list of articles to inform your own research.

Vaccine persuasion is cheaper

Canadians are blocking a bridge. For Americans who like to engage in stereotypes about Canadians, this is inexplicable (even though the practice of blocking things in Canada is not new by any means). However, for me as an economist, it is entirely explicable.

Consider what vaccine mandates/passports (which is what initiated the current mayhem) do in purely economic terms: they raise costs for the unvaccinated. They do not alter the benefit of being vaccinated. All they do is raise costs. People could be more or less inelastic to this cost, but the fact that many are willing to spend time and resources (fuel, wear and tear on trucks, etc.) to prevent such policies from continuing suggests that their behavior is not perfectly inelastic.

How elastic is it then? Well, we can see that by looking at what happens when we alter the benefit of being vaccinated. This is the case with vaccine lotteries. The “extra” benefit associated with a lottery is that the unvaccinated obtain the value of the vaccine plus the expected value (i.e., the probability of winning times the prize) of the lottery. One recent paper in Economics Letters finds that for $55, you can convince an extra person to be vaccinated. That is basically the cost of administering the lottery plus the prizes themselves. That is a relatively cheap way to increase the benefit for the unvaccinated in order to have them change their minds. Another paper, in the American Journal of Health Economics, finds a similar result by concentrating on the Ohio vaccine lottery. The difference is that the amount is $75 instead of $55. Still, pretty cheap for an extra vaccinated person, given the generally high social benefit of a vaccine in terms of avoided costs of infections, hospitalizations and deaths.
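The arithmetic behind a cost-per-marginal-vaccination figure is simple (all the inputs below are hypothetical round numbers chosen to land on $55):

```python
# Illustrative cost-per-marginal-vaccination arithmetic; all inputs assumed.
prize_pool = 5_000_000        # total lottery prizes (hypothetical)
admin_cost = 500_000          # cost of running the lottery (hypothetical)
extra_vaccinated = 100_000    # additional doses attributed to the lottery

cost_per_marginal_shot = (prize_pool + admin_cost) / extra_vaccinated
print(f"${cost_per_marginal_shot:.0f} per additional vaccinated person")  # $55
```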

Thus, we can say that behavior is quite elastic. But here is the rub. When you raise the benefits, the story is over; nothing else happens after that. When you raise the costs, people might resist and adopt other measures to avoid those costs. This includes blocking bridges on the US-Canada border. And what is the social cost of that attempt at avoiding the cost of the coercive private-cost-increasing policy? Pretty high. Probably higher than the cost of a lottery system or other voluntary programs that play with the marginal private benefit of being vaccinated.

The point I am trying to get across is quite simple: persuasion works because it essentially increases the perceived benefits of doing X or Y activity. Coercion imposes a private cost of not doing X or Y, with the potential downside that people respond in ways that create socially detrimental outcomes. Yup, coercion isn’t cheap.

Economics, Economic Freedom and the Olympics

The Olympics have begun. Is there anything economists can say about what determines a country’s medal count? You might not think so, but the answer is a clear yes! In fact, I am going to say that both the average economist and the average political economist (in the sense of studying political economy) have something of value to say.

Why could they not? After all, investing efforts and resources in winning medals is a production decision just like using labor and capital to produce cars, computers or baby diapers. Indeed, many sports cost thousands of dollars in equipment alone each year – a cost to which we must add training time, foregone wages and coaching. Athletes also gain something from these efforts – higher post-career incomes, prestige, monetary rewards per medal offered by the government. As such, we can set up a production function of a Cobb-Douglas shape:
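Something like the following, a plausible Cobb-Douglas specification built from the variables defined just below (the exponents are to be estimated):

$$ T_{it} = A_{it}\, N_{it}^{\alpha}\, Y_{it}^{\beta} $$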

where N is population, Y is total income (i.e., GDP), A is institutional quality and T is the number of medals won. The subscripts i and t index countries and Olympic events. This specification is a twist (because I change the meaning of the A term, as we will see below) on a paper in the Review of Economics and Statistics published in 2004 by Andrew Bernard and Meghan Busse.

The intuition is simple. First, we can assume that Olympic-level performance requires a certain innate skill (e.g., height, leg length). The level required is an absolute one. To see this, think of a normal distribution for these innate skills and draw a line near the far-right tail of the distribution. A country’s expected number of people beyond that line is directly related to its size. Indeed, a small country like Norway is unlikely to have many people above this absolute threshold. In contrast, a large country like Germany or the United States is likely to have a great number of people who could compete. That is the logic for including N.
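The right-tail logic is easy to make concrete (the four-standard-deviation threshold and the population figures are assumptions for illustration):

```python
from scipy.stats import norm

# Share of people above an absolute talent threshold set 4 SDs above the mean.
p_above = norm.sf(4.0)  # survival function: P(Z > 4) for a standard normal

# Hypothetical populations (illustrative round numbers).
for country, pop in [("Norway", 5_400_000), ("Germany", 83_000_000)]:
    print(f"{country}: ~{pop * p_above:,.0f} people above the threshold")
# The threshold is absolute, so the expected count of Olympic-caliber talents
# scales one-for-one with population size.
```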

What about Y? Innate skill is not all that determines Olympic performance; innate skills have to be developed. In fact, if you think about it, athletes are a lot like artists who spend years perfecting their art. The only difference is that this art is immensely physical. The thing is that many of the costs of training for many activities (not all) are pretty even across countries. Indeed, many of the goods used to train (e.g., skis, hockey sticks and pucks, golfing equipment) are traded internationally, so their prices converge across countries. This tends to give an edge to countries with higher income levels, as they can more easily afford to spend resources on training. This is why Norway, in spite of being quite small, is able to be so competitive – its quite-high level of income per capita makes it easier to invest in developing sporting abilities and innate talent.

Bernard and Busse confirm this intuition and show that, yes, population and development levels are strong determinants of medal counts. The table below, taken from their article, shows this.

What about A? Normally, A is a scalar we use in a Cobb-Douglas function to capture the effect of technological progress. However, it is also frequently used in the economic growth literature as a stand-in for the quality of institutions. And if you look at Bernard and Busse’s article, you can see institutions at work. Do you notice the row for Soviet? Why would being a Soviet country matter? The answer is that we know the USSR and other communist countries invested considerable resources in winning medals as a propaganda tool for their regimes. The Soviet variable represents the role of institutions.

And this is where the political economist has lots to say. Consider the decision to invest in developing your skills. It is an investment with a long maturity period. Athletes train for at least 5-10 years just to reach the Olympics. Some athletes have been training since they were young teenagers. Not only is it an investment with a long maturity period, but it pays little if you do not win a medal. I know a few former Olympic athletes from Canada who occupy positions whose prestige and income levels are not statistically different from those of the average Canadian. It is only the athletes who win medals who get the advertising contracts, the sponsorships, the talking gigs, the conference tours, and the free gift bags (people tend to dismiss these, but they are often worth thousands of dollars). This long maturity and high variance in returns is a deterrent to investing in Olympic training.

At the margin, insecurity in property rights heightens the deterrent effect. Indeed, why invest when your property rights are not secure? Why invest if a ruler can take the revenues of your investment or tax them at levels punitive enough to deter you? In a paper published in the Journal of Institutional Economics with my friend Vadim Kufenko, I found that economic freedom was a strong determinant of medal counts. Vadim and I argued that secure property rights – one of the components of economic freedom indexes – made it easier for athletes to secure the gains from their efforts (see table below).

Two other papers, one by Christian Pierdzioch and Eike Emrich and the other by Lindsay Campbell, Franklin Mixon Jr. and Charles Sawyer, also find that institutional quality has a large effect on countries’ medal counts. Another article, this time by Franklin Mixon and Richard Cebula in the Journal of Sports Economics, argues that the effective property rights regime in place for athletes creates incentives that essentially increase the supply of investment in developing athletic skills. The overall conclusion is the same: Olympic medal counts depend in large part on the quality of institutions in an athlete’s country of origin.

Phrased differently, the country that is most likely to win a ton of medals is the economically free, rich and populous one. That’s it!

On Lockdowns and Hospital Capacity

My home province of Quebec in Canada has been under lockdown since the Holidays (again). At 393 days of lockdown since March 11th, 2020, Quebec has been locked down longer than Italy, Australia or California (areas that often come up as examples of strong lockdown measures). Public health scientists admit that the Omicron variant is less dangerous. But the issue is not the health danger; rather, it is the concern that rising hospitalizations will overwhelm an exhausted health sector.

And to be sure, when one looks at the data on hospital bed capacity and use rates, one finds that the intensity of lockdowns is closely related to hospital capacity. Indeed, Quebec is a strong illustration of this, as its public health care system has one of the lowest levels of hospital capacity among countries with similar income levels. The question that then pops to mind is: how elastic is the supply of hospital/medical services?

In places like my native Quebec, where health care services are largely operated and financed by the government, the answer is “not very.” This is not surprising, given that capacity is determined bureaucratically by the provincial government according to its constraints. And with bureaucratic control come well-known rigidities and difficulties in responding to changes in demand. But that does not go very far in answering the question. Indeed, private sector supply could also be quite inelastic.

A few months ago, I came across this working paper by Ghosh, Choudhury and Plemmons on the topic of certificate-of-need (CON) laws. CON laws essentially restrict entry into the market for hospital beds by allowing incumbent firms to have a say in determining who has a right to enter a given geographical segment of the market. The object of interest for Ghosh et al. was the effect of CON laws on early COVID outbreak outcomes. They found that states without such laws performed better than states with them (on both non-COVID and COVID mortality). That is interesting because it tells us the effect of a small variation in the legal ability of private firms to respond to changes in market conditions. Eliminating the legal inability to respond leaves us with the normal difficulties firms face (e.g., scarce skilled workers such as nurses, time-to-build delays, etc.).

But what is more telling in the paper is that Ghosh et al. studied the effect in states with CON laws that eased those laws because of COVID. This is particularly interesting because it unveils how fast previously regulated firms can start acting like deregulated firms. They find similar results (i.e., fewer deaths from COVID and non-COVID sources).

Are there other works? I found a few more, such as this one in the Journal of Risk and Financial Management, which finds that hospitals were less overcrowded in states without CON laws. Another, in the Journal of General Internal Medicine, finds that states with CON laws tended to have more overcrowded facilities — notably nursing homes — which meant higher rates of in-hospital COVID transmission.

All of this, taken together, suggests to me that hospital capacity is not as fixed as we think. Hospitals are capable of adjusting on a great number of margins to increase capacity in the face of adverse exogenous shocks. That is, if there are profit motives behind it — which is not the case in my home province of Quebec.

Empirical Austrian Economics?

David Friedman recently got into an online debate with Walter Block that could be seen as a boxing match between “Austrian economics” and the “Chicago school of economics.” In the wake of this debate, Friedman assembled his thoughts in this piece, which is supposed (if I understand properly) to be published as a chapter in an edited volume. Upon reading it, I thought it worth offering my own thoughts, in part because I see myself as a member of both schools of thought and in part because I specialize in economic history. And here is the claim I want to make: I don’t see any meaningful difference between the two, and I don’t understand why there are perpetual attempts to create a distinction.

But before that, let’s do a simple summary of the two views according to Friedman (which is the first part of the essay). The “Chicago” version is that you can build theoretical models and then test them. If the model is not confirmed, it could be because you a) used incorrect data, b) relied on incorrect assumptions, or c) relied on an incorrect econometric specification. The Austrian version is that you derive axioms of human action and that is it. The real world cannot contradict the axioms; it only serves to provide pedagogical illustrations. That is the way Friedman puts the difference between the schools of thought. The direct implication is that there cannot be (or there is no point to) empirical/econometric work in the Austrian school’s thinking.

Now, I understand that this is the viewpoint shared by many — as evidenced by a shared distrust of econometrics and mathematical depictions of the economy among Austrian-school scholars. In fact, Rothbard was pretty clear about this in an underappreciated book he authored, A History of Money and Banking in the United States. But I do not understand why.

After all, all models are true if they are logically consistent. I can go to my blackboard, draw up a model of the economy and make predictions about behavior. That is what the Austrians do! The problem is that predictions rely on assumptions. For example, we say that a monopoly grant is welfare-reducing. However, when there are monopolies over common-access resources (fisheries, for example), they are welfare-enhancing, since the monopoly does not want to deplete the resource and compete against its future self. All we tweaked was one assumption about the type of good being monopolized. Moreover, I can recover the conventional logic regarding monopolies by tweaking one more assumption, regarding time discounting. Indeed, a monopoly over a common-access resource is welfare-enhancing only as long as the monopolist values the future stream of income more than the present income. In other words, someone on the brink of starvation might not care much about having fish tomorrow if he is not sure of making it to tomorrow.

If I were to test the claims above, I could get a wide variety of results (here are some conflicting examples from the Canadian economic history of fisheries) regarding the effects of monopoly. All of these apparent contradictions result from the nature of the assumptions and whether they apply to each case studied. In this sense, the empirical part is totally in line with the Austrian view. Indeed, empirical work simply tells us which of these assumptions apply in case X, Y or Z. In this way of viewing things, all debates about methods (e.g., endogeneity bias, selection bias, measurement, level of data observation) are debates about how to properly represent theories. Nothing more, nothing less.

It is a most Austrian thing to start with a clear model and then test predictions to see if the model applies to a particular question. A good example is the Giffen good. The Giffen good can theoretically exist, but we have yet to find one that convinces a majority of economists. Ergo, the Giffen good is theoretically true, but it is also an irrelevant imaginary pink unicorn. Empirically, the Giffen good has simply failed to materialize over hundreds of papers in top journals.

In fact, I see great value in using empirical work through an Austrian lens. Indeed, I have written articles (one is a revise-and-resubmit at Public Choice, another is published in the Review of Austrian Economics and another is forthcoming in Essays in Economic and Business History) using econometric methods such as difference-in-differences and a form of regression discontinuity to test the relevance of the theory of the dynamics of interventionism (which proposes that government intervention is a cumulative process of disequilibrium that planners cannot foresee). In each of these articles, I believe I demonstrated that the theory has some meaningful ability to predict the destabilizing nature of government interventions. When I started writing these articles, I believed that the body of theory I was using was true because it was logically consistent. However, I was willing to accept that it could be irrelevant or generally not applicable.

In other words, you can see why I fail to perceive any meaningful difference between Austrian theory and other schools of economic thought. For years, I realized I was one of the few to see things this way, and I never understood why. A few months ago, I think I put my finger on the “why” after reading a forthcoming piece by my colleague Mark Koyama: Austrians assume econometrics to be synonymous with economic planning.

I admit that I have read Mises’ Theory and History and came out not understanding why Austrians think that Mises admonished the use of econometrics. What I read was more a reaction to the use of econometrics for planning and policy-making. Econometrics can be used to answer questions of applicability without in any way rejecting the Austrian framework. Maybe I am an oddball, but I was a fellow Austrian traveler when I entered the LSE and remained one as I learned to use econometrics. I never saw any conflict between using quantitative methods and Austrian theory. I only saw a conflict when I spoke to extreme Rothbardians, who seemed to conflate the use of these tools to weigh theories with the use of econometrics to make public policy. The former is desirable while the latter is to be shunned. Maybe it is time for Austrians to realize that there is good reason to reject econometrics as a tool to “plan” the economy (which I do) while accepting econometrics as a tool of study and testing. After all, methods are tools, and tools are not inherently good or bad — it’s how we use them that matters.

That’s it, that’s all I had to say.

Elasticity of Substitution or Why Simple Tools Teach Us Tons

I enjoy simple methods in economics. In economic history, which is my field of specialization, it is often by constraint that I have to use them. Because of that, one has to be creative. In the process, however, one spots how well-used simple methods can be more powerful (in terms of both pedagogy and explanatory power) than more advanced methods. Let me show you an example from Canadian history: the fur trade industry.

Yes, Canada’s mighty beaver! Generally known for its industriousness, the beaver has mostly been appreciated for its pelt, which was the main export staple from Canada during the 17th and 18th centuries. In fact, if one is pressed to state what comes to mind when thinking about Canada, fur pelts come in the top 10 (if not the top 5). It is thus unsurprising that there are hundreds of books on the business history of the fur trade in Canada.

One big thesis in Canadian economic history is that the fur trade was actually a drag on economic development (here and here and, most importantly, here with a Wikipedia summary here). The sector’s dominance meant that the colony was not developing a manufacturing sector or other industries such as timber, cod fishing, agriculture or potash. Political actors were beholden to a dominant class of fur merchants. In a way, it looks a lot like the resource curse argument. And, up to 1810-1815, the industry represented the vast majority of exports (north of 60% always and generally around 75%). During the French colonial era, furs represented 20% of GDP at some points.

It is only after 1815 that furs collapsed as a staple — and quite rapidly. They represented less than 10% of exports and less than 2% of GDP by 1830. To explain the rapid turnaround, most of the available work has focused on demand for the industry’s output (see here) or internal industry factors. In a weird way, the industry is taken in isolation.

And that is where a simple tool like the elasticity of substitution between inputs becomes useful. First, I want you to notice the dates I invoked for the turning point: 1810-1815. These are not trivial years. They mark the end of the contest at sea between Britain and France and the beginning of the former’s naval hegemony. This meant few trade interruptions due to war and insecurity at sea. Before 1815, the colonies in North America would have experienced such interruptions nearly one year out of two.

What does that have to do with the fur trade’s dominance and the elasticity of substitution? Well, it could be that wars affect industries differently. Let’s look at isoquants for a second to see how that could be the case. Imagine a constant elasticity of substitution (CES) production function of the following shape:
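In one standard textbook parameterization (a plausible form consistent with the limits discussed below):

$$ Q = \left( a\,L^{-r} + (1-a)\,K^{-r} \right)^{-\frac{1}{r}}, $$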

where L and K are your usual terms for labor and capital and r is the substitution parameter (the elasticity of substitution is 1/(1+r) in this parameterization). Now, for the sake of argument, let us imagine what happens to the isoquant of a production function as r tends to infinity. As it tends to infinity, the marginal rate of technical substitution between L and K approaches zero if L > K. This means that there is a form of pure complementarity between inputs, and no substitution is possible to produce the same quantity of output. The isoquant looks like this.

As r tends to infinity

On the other hand, if r tends to -1, there is perfect substitutability between both L and K. The isoquant then looks like this.

As r tends to -1

What if the fur industry’s isoquant looked more like the latter case while other industries looked like the former? More precisely, what if wars affected the supply of one input more than another? With a simple element like our description of the production function above, we see that if wars affected input supplies unevenly, an industry with little ability to substitute would be forced to contract output more than one with a lot. In our case, this would be the timber, potash, cod and agricultural sectors versus the fur trade.
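A quick way to see the mechanism is to compare unit costs under different elasticities of substitution when the price of an imported input doubles (a sketch; the parameter values and industry labels are assumptions):

```python
# Unit cost for a CES technology with substitution elasticity sigma:
# c(w, v) = (a**s * w**(1-s) + (1-a)**s * v**(1-s))**(1/(1-s)), s = sigma.
def unit_cost(w, v, a=0.5, sigma=0.5):
    return (a**sigma * w**(1 - sigma)
            + (1 - a)**sigma * v**(1 - sigma)) ** (1 / (1 - sigma))

for sigma, label in [(0.1, "near-complements (timber, cod)"),
                     (5.0, "near-substitutes (fur trade)")]:
    before = unit_cost(1.0, 1.0, sigma=sigma)
    after = unit_cost(1.0, 2.0, sigma=sigma)  # wartime: imported input doubles
    print(f"{label}: unit cost rises {after / before - 1:.0%}")
# With little substitutability, the price shock passes through almost fully,
# forcing a large output contraction; with high substitutability it barely bites.
```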

Does that fit the historical evidence? We know that the fur industry frequently changed the inputs it used in trading with the First Nations of Canada to buy furs. Whatever the natives deemed most valuable would be what was used. It could be alcohol, clothing, firearms, furnishings, silverware, tobacco, spices, salt, etc. This we get clearly from the work of Ann Carlos and Frank Lewis (a book linked to above). There was great ability to substitute. In contrast, other industries could not shift as easily. Take the timber industry, which needed to import axes, saws, hoops, iron and nails from France or the United Kingdom for most of the 18th century. If wars disrupted the supply of these capital goods from Europe, there was very little substitution available, which meant that the timber industry had to contract output considerably to reflect the higher cost of these items. The same thing applies to the cod fishing industry, whose key input was salt. No salt, no drying of the cod for preservation and export, thus no cod exports. And salt had to be imported. In wartime, salt prices tended to jump much faster than other goods because the supply was entirely imported. Thus, wartime meant that the cod industry had to contract its output quite substantially.

The cod fishing industry is an amazing example of this if you take the American revolutionary war. During the war, the colony of Quebec (which represented 85%+ of Canada’s population at the time) was invaded by the Americans, and France’s alliance with the Americans jeopardized trade between Quebec and Britain (its mother country at that point). The result was that salt prices jumped rapidly compared to all other goods and the output of the cod industry contracted. In contrast, the fur trade sector was barely affected. Look at this graph of the exports of beaver skins and codfish. Codfish output collapses, whereas beaver skins barely show any sign of a major military conflagration.

In a longer-run perspective, it is now easy to understand why the industry was dominant. It was the only industry that was robust to wartime shocks. All other industries would have faced quite large shifts in factor prices, causing them to contract and expand output in a very volatile manner. Now, you may think this is just a trivial rearranging of the argument. It is not, because it invalidates the idea that the colony was poor or developed slowly because of the dominance of the fur industry. Rather, it shifts the burden onto wartime shocks. Wars, not the dominance of the fur trade itself, meant that the economy was heavily mono-industrial.

A simple tool, the elasticity of substitution (which we can derive from the marginal rate of technical substitution), changes the entire interpretation of Canadian economic history. Can you see what I mean by the claim that simple tools combined with simple empirical observations can lead to powerful explanations? I hope you do!