Empirical Austrian Economics?

David Friedman recently got into an online debate with Walter Block that could be seen as a boxing match between “Austrian economics” and the “Chicago School of Economics”. In the wake of this debate, Friedman assembled his thoughts in this piece, which is supposed (if I understand properly) to be published as a chapter in an edited volume. Upon reading it, I thought it worth offering my thoughts, in part because I see myself as a member of both schools of thought and in part because I specialize in economic history. And here is the claim I want to make: I don’t see any meaningful difference between the two, and I don’t understand why there are perpetual attempts to create a distinction.

But before that, let’s do a simple summary of the two views according to Friedman (which is the first part of the essay). The “Chicago” version is that you can build theoretical models and then test them. If the model is not confirmed, it could be because you a) used incorrect data, b) relied on incorrect assumptions, or c) relied on an incorrect econometric specification. The Austrian version is that you derive axioms of human action and that is it. The real world cannot contradict the axioms; it only serves to provide pedagogical illustrations. That is how Friedman puts the difference between the schools of thought. The direct implication of this difference is that there cannot be (or there is no point to) empirical/econometric work in the Austrian school’s thinking.

Now, I understand that this is the viewpoint shared by many — as evidenced by a shared distrust of econometrics and mathematical depictions of the economy among Austrian-school scholars. In fact, Rothbard was pretty clear about this in an underappreciated book he authored, A History of Money and Banking in the United States. But I do not understand why.

After all, all models are true if they are logically consistent. I can go to my blackboard, draw up a model of the economy, and make predictions about behavior. That is what the Austrians do! The problem is that predictions rely on assumptions. For example, we say that a monopoly grant is welfare-reducing. However, when there are monopolies over common-access resources (fisheries, for example), they are welfare-enhancing since the monopolist does not want to deplete the resource and compete against its future self. All we tweaked was one assumption about the type of good being monopolized. Moreover, I can get the same result as the conventional logic regarding monopolies by tweaking one more assumption regarding time discounting. Indeed, a monopoly over a common-access resource is welfare-enhancing only as long as the monopolist values the future stream of income more than the income from depleting the resource today. In other words, someone on the brink of starvation might not care much about having no fish tomorrow if that is what it takes to make it to tomorrow.
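The time-discounting tweak can be sketched in a few lines of code. All numbers here are purely hypothetical; the point is only that the same monopolist conserves or depletes the fishery depending on its discount factor.

```python
# Hypothetical fishery: a stock of 100 yields a sustainable harvest of 10%
# per period if conserved, or can be wiped out for a one-time payoff.

def pv_deplete(stock):
    """Present value of harvesting the entire stock today."""
    return stock

def pv_conserve(stock, growth, beta, periods=500):
    """Present value of harvesting only the sustainable yield each period."""
    sustainable_yield = stock * growth
    return sum(beta**t * sustainable_yield for t in range(periods))

stock, growth = 100.0, 0.10
for beta in (0.80, 0.95):
    better = pv_conserve(stock, growth, beta) > pv_deplete(stock)
    choice = "conserve" if better else "deplete"
    print(f"discount factor {beta}: the monopolist prefers to {choice}")
```

With a discount factor of 0.95 the perpetual yield is worth more than the stock, so the monopolist conserves; at 0.80 (the starving fisherman) depletion wins and the conventional welfare-reducing result reappears.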

If I were to test the claims above, I could get a wide variety of results (here are some conflicting examples from the Canadian economic history of fisheries) regarding the effects of monopoly. All of these apparent contradictions result from the nature of the assumptions and whether they apply to each case studied. Here, the empirical part is totally in line with the Austrian view. Indeed, empirical work simply tells us which of these assumptions apply in case X, Y, or Z. In this way of viewing things, all debates about methods (e.g. endogeneity bias, selection bias, measurement, level of data observation) are debates about how to properly represent theories. Nothing more, nothing less.

It is a most Austrian thing to start with a clear model and then test predictions to see if the model applies to a particular question. A good example is the Giffen good. The Giffen good can theoretically exist, but we have yet to find one that convinces a majority of economists. Ergo, the Giffen good is theoretically true, but it is also an irrelevant imaginary pink unicorn. Empirically, the Giffen good has simply failed to materialize across hundreds of papers in top journals.

In fact, I see great value in using empirical work through an Austrian lens. Indeed, I have written articles (one is a revise-and-resubmit at Public Choice, another is published in the Review of Austrian Economics, and another is forthcoming at Essays in Economic and Business History) using econometric methods such as difference-in-differences and a form of regression discontinuity to test the relevance of the theory of the dynamics of interventionism (which proposes that government intervention is a cumulative process of disequilibrium that planners cannot foresee). In each of these articles, I believe I demonstrated that the theory has some meaningful ability to predict the destabilizing nature of government interventions. When I started writing these articles, I believed that the body of theory I was using was true because it was logically consistent. However, I was willing to accept that it could be irrelevant or generally not applicable.
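For readers unfamiliar with difference-in-differences, the simplest two-group, two-period version comes down to a single subtraction. The numbers below are invented for illustration; they are not taken from the articles.

```python
# Mean outcomes by group and period (hypothetical numbers).
means = {
    ("treated", "before"): 10.0, ("treated", "after"): 15.0,
    ("control", "before"): 9.0,  ("control", "after"): 11.0,
}

# The treated group's change minus the control group's change: the common
# trend (here +2) is netted out, leaving the treatment effect.
did = (
    (means[("treated", "after")] - means[("treated", "before")])
    - (means[("control", "after")] - means[("control", "before")])
)
print(did)  # 3.0
```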

In other words, you can see why I fail to perceive any meaningful difference between Austrian theory and other schools of economic thought. For years, I realized I was one of the few who saw things this way, and I never understood why. A few months ago, I think I put my finger on the “why” after reading a forthcoming piece by my colleague Mark Koyama: Austrians assume econometrics to be synonymous with economic planning.

I admit that I have read Mises’ Theory and History and came out not understanding why Austrians think that Mises admonished the use of econometrics. What I read belonged more to the domain of reactions to the use of econometrics for planning and policy-making. Econometrics can be used to answer questions of applicability without in any way rejecting the Austrian framework. Maybe I am an oddball, but I was a fellow Austrian traveler when I entered the LSE and remained one as I learned to use econometrics. I never saw any conflict between using quantitative methods and Austrian theory. I only saw a conflict when I spoke to extreme Rothbardians who seemed to conflate the use of tools to weigh theories with the use of econometrics to make public policy. The former is desirable while the latter is to be shunned. Maybe it is time for Austrians to realize that there is good reason to reject econometrics as a tool to “plan” the economy (which I do) and to accept econometrics as a tool of study and test. After all, methods are tools, and tools are not inherently good or bad — it’s how we use them that matters.

That’s it, that’s all I had to say.

Elasticity of Substitution or Why Simple Tools Teach Us Tons

I enjoy simple methods in economics. In economic history, which is my field of specialization, it’s often by constraint that I have to use them. Because of that, one has to be creative. In the process, however, one spots how well-used simple methods can be more powerful (both in pedagogical and explanatory terms) than more advanced methods. Let me show you an example from Canadian history: the fur trade industry.

Yes, Canada’s mighty beaver! Generally known for its industriousness, the beaver has been mostly appreciated for its pelt, which was the main export staple from Canada during the 17th and 18th centuries. In fact, if one is pressed to say what comes to mind when thinking about Canada, fur pelts rank in the top 10 (if not the top 5). It is thus unsurprising that there are hundreds of books on the business history of the fur trade in Canada.

One big thesis in Canadian economic history is that the fur trade was actually a drag on economic development (here and here and, most importantly, here with a Wikipedia summary here). The sector’s dominance meant that the colony was not developing a manufacturing sector or other industries such as timber, cod fishing, agriculture, or potash. Political actors were beholden to a dominant class of fur merchants. In a way, it looks a lot like the resource-curse argument. And, up to 1810–1815, the industry represented the vast majority of exports (always north of 60% and generally around 75%). During the French colonial era, furs represented 20% of GDP at some points.

It’s only after 1815 that furs collapsed as a staple — and quite rapidly. By 1830, they represented less than 10% of exports and less than 2% of GDP. To explain the rapid turnaround, most of the available work has focused on demand for the industry’s output (see here) or internal industry factors. In a weird way, the industry is taken in isolation.

And that is where a simple tool like the elasticity of substitution between inputs becomes useful. First, I want you to notice the dates I invoked for the turning point: 1810–1815. These are not trivial years. They mark the end of the contest at sea between Britain and France and the beginning of the former’s naval hegemony. This meant fewer trade interruptions due to war and insecurity at sea. Before 1815, the colonies in North America experienced war nearly one year out of two.

What does that have to do with the fur trade’s dominance and the elasticity of substitution? Well, it could be that wars affect industries differently. Let’s look at isoquants for a second to see how that could be the case. Imagine a constant elasticity of substitution (CES) production function of the following shape:

Q = (L^(−r) + K^(−r))^(−1/r)
Where L and K are your usual terms for labor and capital and r governs the elasticity of substitution. Now, for the sake of argument, let us imagine what happens to the isoquant of this production function as r tends to infinity. In that limit, the marginal rate of technical substitution between L and K approaches zero whenever L > K. This means that there is pure complementarity between inputs: no substitution is possible to produce the same quantity of output. The isoquant looks like this.

As r tends to infinity

On the other hand, if r tends to −1, there is perfect substitutability between L and K. The isoquant then looks like this.

As r tends to -1
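The two limiting cases can be checked numerically with a bare-bones, unweighted CES function of the form Q = (L^(−r) + K^(−r))^(−1/r); the input levels below are arbitrary.

```python
def ces(L, K, r):
    """Unweighted CES production function Q = (L^-r + K^-r)^(-1/r)."""
    return (L**(-r) + K**(-r)) ** (-1.0 / r)

L, K = 4.0, 9.0
print(ces(L, K, r=50.0))   # ~4: close to min(L, K), i.e. pure complements
print(ces(L, K, r=-1.0))   # 13: exactly L + K, i.e. perfect substitutes
```

At a large r, output is pinned down by the scarcer input (the L-shaped isoquant); at r = −1 the inputs simply add up (the straight-line isoquant).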

What if the fur industry’s isoquant looked more like the latter case while other industries looked like the former? More precisely, what if wars affected the supply of one input more than another? With a simple element like our description of the production function above, we see that if wars affected input supplies unevenly, an industry with little ability to substitute would be forced to contract output more than one with great flexibility. In our case, this would be the timber, potash, cod, and agricultural sectors versus the fur trade.

Does that fit with the historical evidence? We know that the fur industry frequently changed the inputs it used in trading with the First Nations of Canada to buy furs. Whatever the natives valued most was what would be used. It could be alcohol, clothing, firearms, furnishings, silverware, tobacco, spices, salt, etc. This we get clearly from the work of Ann Carlos and Frank Lewis (a book linked to above). There was great ability to substitute. In contrast, other industries could not shift as easily. Take the timber industry, which needed to import axes, saws, hoops, iron, and nails from France or the United Kingdom for most of the 18th century. If wars disrupted the supply of these capital goods from Europe, there was very little substitution available, which meant that the timber industry had to contract output considerably to reflect the higher cost of these items. The same thing applies to the cod fishing industry, whose key input was salt. No salt, no drying of the cod for preservation and export, thus no cod exports. And salt needed to be imported. In wartime, salt prices tended to jump much faster than other goods because its supply was entirely imported. Thus, wartime meant that the cod industry had to contract its output substantially.

The cod fishing industry is an amazing example of this if you take the American Revolutionary War. During the war, the colony of Quebec (which represented 85%+ of Canada’s population at the time) was invaded by the Americans, and France’s alliance with the Americans jeopardized trade between Quebec and Britain (its mother country at that point). The result was that salt prices jumped rapidly compared to all other goods and the output of the cod industry contracted. In contrast, the fur trade sector was barely affected. Look at this graph of the exports of beaver skins and codfish. Codfish output collapses whereas beaver skins barely show any sign of a major military conflagration.

From a longer-run perspective, it’s now easy to understand why the fur industry was dominant. It was the only industry that was robust to wartime shocks. All other industries would have had quite large shifts in factor prices, causing them to contract and expand output in a very volatile manner. Now you may think this is just a trivial re-arranging of the argument. It is not, because it invalidates the idea that the colony was poor or developed slowly because of the dominance of the fur industry. Rather, it shifts the blame onto wartime shocks. Wars, not the dominance of the fur trade itself, meant that the economy was heavily mono-industrial.

A simple tool, the elasticity of substitution (which we can derive from the marginal rate of technical substitution), changes the entire interpretation of Canadian economic history. Can you see what I mean by the claim that simple tools combined with simple empirical observations can lead to powerful explanations? I hope you do! 

A paper that needs to be written: Does WebMD save lives?

I have a few friends who are physicians. Often, they tell me tales of patients who did or said crazy things. Often, the topic of eHealth platforms like WebMD comes up. Each of those friends has expressed a variant of anger at those platforms because patients self-diagnose. Thinking about it, it’s clear that they believe the platforms make health outcomes worse.

But is that correct? One could reply that there are a few studies suggesting that the platforms provide reliable information. One could also reply that they solve a problem of asymmetric information whereby doctors cannot easily “hide” information from their patients. But both replies are, in my opinion, a bit lazy. A more important question is: do they save lives?

Let me take a personal example. A few months ago, my two-year-old got sick. He had a fever with a temperature of 38.8 degrees Celsius. That had me worried a bit. However, I googled the information and found that children tend to have higher body temperatures than adults, so the range of “worrisome” temperatures is a slight notch higher. This information reassured me, and I simply waited it out and kept monitoring his temperature. I did not consume any medical services in the end.

Now, let’s do a proper counterfactual in which the technological constraint facing me is that of the 1970s or 1960s — not the medical dark ages by any means. What would I have done absent the internet? Most likely, I would have gone to a clinic for a consult. The physician doing that consult would not have been available for another patient while he told me to go home, wait three days (or give my son baby Tylenol), and come back only if the temperature rose above 39 degrees Celsius.

That example may appear trivial, but it illustrates the point about how WebMD and other eHealth platforms might be saving lives: they liberate medical resources by eliminating ignorance about trivial problems that are time-consuming for physicians. In fact, I might go a step further by pointing out that there were numerous “grandmother’s remedies” still being held as true in the 1960s and 1970s — beliefs that may have been counterproductive and would have forced physicians to needlessly expend resources.

I tried to find economic studies about the effect of eHealth platforms (especially if they tested the mechanism above). Unfortunately, I found absolutely nothing. This is a paper that needs to be written.

Book Review: Cronyism: Liberty versus Power in Early America, 1607–1849

For the past few weeks, economist Patrick Newman has been doing the rounds for his new book (named in the title of this blog post) on American economic history from 1607 to 1849. Well, it’s not only about American economic history. It’s a bit more about the institutional history of the United States before 1850 and how it relates to economic history. It is an amazing book. Unfortunately, I expect many economic historians to ignore or fail to notice it. I hope that this blog post will at least reduce the likelihood of this happening, because Newman’s book holds strong explanatory power if one is interested in the link between growth and institutions.

Newman’s argument is actually quite simple. First, there are two broadly defined camps: the forces of liberty and the forces of power. Already, some may balk at this dichotomy, but I would advise them not to. There are many reasons to keep going. The first is that it invokes an older tradition in historical studies that starts with Lord Acton and has been continued by numerous historians on the left and right. The other reasons become evident as one moves along in the book.

The forces of liberty are those that seek to constrain the state and the exercise of power. The forces of power, for their part, are those that seek to be empowered by a strong, capable and relatively unconstrained state. The forces of power, however, invite cronyism because the empowerment also permits personal aggrandizement (e.g. legally protected monopolies such as charters, tariffs, subsidies, grants, patronage).

The founding of the United States was, according to Newman, a battle between both forces, with the British being the forces of power. After the Revolution, the forces of power continued inside the Federalist ranks — who basically dominated the constitutional convention of 1787 and the first Congress. Acting as a de facto heir to Murray Rothbard (because that is the title I give him), Newman adopts the position that the foundation of the US was in fact a rent-seeking bargain engineered by the Federalist forces (Newman notably edited the lost volume of Rothbard’s Conceived in Liberty on the early republic).

After that, Anti-Federalists and Republicans coalesced into a working coalition that reinterpreted the constitution in a way that backfired against the Federalists and led to the Jeffersonian revolution of 1800. Important reforms, which Newman credits as being beneficial to living standards, were adopted. However, the Jeffersonians rapidly became corrupted by power. And here is the second reason not to balk at Newman’s dichotomy of the forces of power/liberty: people can move between camps. In other words, ideological commitment is not inelastic. Some in one camp or the other can switch when the rewards to doing so change. However, the key point that Newman makes is that commitment to the forces of liberty is far more elastic than commitment to the forces of power. The Jeffersonians’ commitment to liberty waned and they eventually enacted policies quite similar to those of the Federalists. They too engaged in cronyism. The same ebb and flow reoccurred later with the Jacksonians.

And here comes the third reason not to balk at Newman’s dichotomy: it actually holds pretty decent explanatory power. One common argument among financial and economic historians is that the United States may have sounded like a Jeffersonian project, but the policies of the Early Republic and Antebellum eras were distinctly Hamiltonian (i.e. Federalist). To be sure, there is some evidence to that effect — which is what someone could retort to Newman. However, the old adage that “people in glass houses should not throw stones” applies here. Revisions to the historical estimates of living standards have gradually swung in favor of the predictions associated with Newman’s model of the forces of power/liberty.

Consider this new article in Historical Methods by Frank Garmon (of Christopher Newport University). Garmon took issue with data from 1798 used by many scholars. In 1798, Congress introduced a direct property tax to prepare for the possibility of a war with France. As Garmon succinctly summarizes: “The law creating the tax consisted of three elements: a flat tax on slaves per head, a progressive tax on houses with rates escalating based on value, and a proportional tax on land based on value to make up the difference in each state’s obligation”. Other scholars, such as my co-author Peter Lindert and Jeffrey Williamson, argued that these features invited corruption in the assessment of tax liabilities. This was particularly true in the South because of the flat tax per slave. Thus, if one tries to use the tax data to estimate economic activity circa 1800, one has to augment it to some degree to reflect geographically varying levels of corruption. Garmon finds that corruption was not an issue. The disparities pointed out by others (which made sense at first glance) could be largely explained by normal economic factors such as population density (which would affect land valuations, etc.). Thus, Garmon argues that there is no need to deflate the data. As a result, he finds that incomes in the southern states in 1800 were roughly 5% lower than previously estimated (a revision that would have been smaller in northern states).

Why is Garmon’s result relevant to Newman’s claim? Because any lowering of the 1800 level of income is going to increase the rate of growth from there to 1840, when the commonly used estimates (produced by R.A. Easterlin) become available. Any increase in that rate of growth counts in favor of Newman’s model, because his model predicts faster growth when pro-liberty forces dominate, as they did for most of the era from 1800 to 1840 (ebbs and flows notwithstanding).

I am not in full agreement with Newman’s book and his Rothbardian narrative (I am much less fond of Rothbard than he is, notably because of the tendency for villains and heroes to exist in his narratives). However, the reality is that Newman’s description (and the Rothbardian narrative he imports and adapts) holds strong explanatory power.

Car Prices and Quality

Inflation is on everyone’s mind. Everybody freaks out. You cannot do anything about it. As such, let’s talk about something mildly related: how price indexes (those that we use to talk about inflation) deal with quality changes.

One big problem when we try to measure the cost of living is that the price information we collect does not reflect the same thing we consume. I know that sentence seems weird. After all, $1 for a pound of bread is $1 for a pound of bread. And if prices go up 10%, then the price per pound of bread is $1.10!

If you think that, you’re wrong. Think about the following example from my native province of Quebec. In the 1990s, Quebec deregulated opening hours for grocery stores. The result was … higher prices at large superstores. Why? Before the reform, stores had shorter hours, especially on Sundays. This meant that stores were competing with each other on a narrower quality dimension, which meant more price-based competition. With deregulation, some consumers were willing to pay slightly higher prices to shop at ungodly hours. What were these consumers consuming? Were they consuming only the bread loaves they bought, or were they consuming those loaves and the flexible schedule of the grocery stores? The answer is the latter! Ergo, the change from $1 per pound to $1.10 per pound does not mean that the price of bread alone increased — it may even have fallen, all else being equal!

So how do you adjust for that? There are many papers on how to do hedonic adjustments (“hedonic” is the fancy word we use to say “quality-adjusted”) and they are all a pain to read unless you are very familiar with real analysis, set theory, and advanced calculus (and even then, it’s still a pain). Fortunately, I recently found a neat little application in an old econometrics graduate text from the 1960s (see image below) that allows me to teach this to my students (and now, you too!) in an easy-to-get format.

A neat book

The book has a neat chapter by one of the most famous econometricians of the 20th century, Zvi Griliches, titled “Hedonic Price Indexes for Automobiles: An Econometric Analysis of Quality Change”. In the chapter, Griliches points out that from 1954 to 1960, car prices went up some 20% — well above the overall price index. From 1937 to 1950, prices for cars went up in line with inflation. Taken together, these two facts suggest that the real price of cars stayed constant from 1937 to 1950 and then increased to 1960. But that suggestion is wrong, Griliches points out, because of our aforementioned quality issues. Up until 1960, there were considerable improvements in vehicle quality: better gears, better brakes, more horsepower, safer designs, automatic transmissions, hardtops, the switch to V-8 engines from six-cylinder engines, etc.

How do you account for these quality changes? Griliches simply went about consulting guide books for auto buyers. He collected price data for the cars as well as the details regarding quality. And he used a very simple specification in which the log of the nominal price is the dependent variable.

Griliches’ specification

The vector X contains all the quality dimensions he could find (horsepower, shipping weight, length, V-8 engine, hardtop, automatic transmission, power steering, power brakes, compact car). All of these dimensions were statistically significant determinants of the price of cars (with the exception of the V-8 engine, which was not significant). Then, Griliches held all quality dimensions “unchanged” from 1954 to 1960 in order to see how prices would have evolved without any changes in quality. The result is the figure below. The blue line depicts the actual prices he collected, where you can see the 20% increase to 1960 (which is a 30%+ increase to 1959). The orange line depicts the price holding quality constant. That orange line is unambiguous: quality-constant car prices didn’t change much during the 1950s. Adjusting for inflation during the period suggests a drop of roughly 10% in the real price of a quality-constant car.
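Griliches’ idea is easy to replicate on fake data. In the sketch below, prices rise between two model years only because horsepower rises; regressing log price on horsepower plus a year dummy recovers a quality-constant change of roughly zero. Everything here — sample size, coefficients, noise — is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
year = rng.integers(0, 2, n)                           # 0 = early year, 1 = late year
hp = 100 + 40 * year + rng.normal(0, 10, n)            # later cars pack more horsepower
log_price = 7.0 + 0.004 * hp + rng.normal(0, 0.02, n)  # price reflects quality only

# Raw comparison: prices look noticeably higher in the later year.
raw_change = log_price[year == 1].mean() - log_price[year == 0].mean()

# Hedonic regression: log price on a constant, horsepower, and a year dummy.
X = np.column_stack([np.ones(n), hp, year])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

print(f"raw log-price change: {raw_change:.3f}")
print(f"quality-constant change (year dummy): {beta[2]:.3f}")
```

The year dummy plays the role of Griliches’ orange line: it is the price change left over once the quality traits are held constant.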


Isn’t that a fascinating way to understand what we are actually measuring when we collect prices to talk about inflation? I find this to be an utterly fascinating example (and a useful teaching tool). Okay, I am done; you can go back to freaking out about inflation and how bad the Fed, the Bank of Canada, and the ECB are.

Economic freedom and income mobility

A few weeks ago, my friend James Dean (see his website here; he will soon be a job market candidate, and James is good) and I received news that the Journal of Institutional Economics had accepted our paper tying economic freedom to income mobility. I think it’s worth spending a few lines explaining that paper.

In the last two decades, there has been a flurry of papers testing the relationship between economic freedom (i.e. property rights, regulation, free trade, government size, monetary stability) and income inequality. The results are mixed. Some papers find that economic freedom reduces inequality. Some find that it reduces it up to a point (the relationship is not linear but quadratic). Some find that there are reverse causality problems (places that are unequal are less economically free but that economic freedom does not cause inequality). Making heads or tails of this is further complicated by the fact that some studies look at cross-country evidence whereas others use sub-national (e.g. US states, Canadian provinces, Indian states, Mexican states) evidence.

But probably the biggest source of confusion in attempts to measure inequality and economic freedom is the reason why inequality is picked as the variable of interest. Inequality is often (but not always) used as a proxy for social mobility. If inequality rises, it is argued, the rich are enjoying greater gains than the poor. Sometimes, researchers will try to track the income growth of the different income deciles to get at this differently. The idea, in all cases, is to see whether economic freedom helps the poor more than the rich. The reason why this is a problem is that inequality measures suffer from well-known composition biases (some people enter the dataset and some people leave). If the biases are non-constant (they drift), you can make incorrect inferences.

Consider the following example: a population of 10 people with incomes ranging from $100 to $1,000 (going up in increments of $100). Now, imagine that each of these 10 people enjoys a 10% increase in income but that a person with an income of $20 migrates into (i.e. enters) that society (and that he earned $10 in his previous group). The result is that this population of now 11 people will be more unequal. However, there is no change in inequality for the original 10 people. The entry of the 11th person causes a composition bias and gives us the impression of rising inequality (which is then made synonymous with falling income mobility — the rich get more of the gains). Composition biases are the biggest problem.
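The example above can be verified in a few lines of code with a textbook Gini coefficient (mean absolute difference over twice the mean):

```python
def gini(incomes):
    """Gini coefficient: mean absolute difference / (2 * mean income)."""
    n = len(incomes)
    mad = sum(abs(a - b) for a in incomes for b in incomes) / n**2
    return mad / (2 * sum(incomes) / n)

base = [100 * i for i in range(1, 11)]   # $100, $200, ..., $1,000
raised = [1.10 * y for y in base]        # everyone gets a 10% raise
with_entrant = raised + [20]             # a $20-earner migrates in

print(round(gini(base), 3))          # 0.3
print(round(gini(raised), 3))        # 0.3  (scale-invariant: no change)
print(round(gini(with_entrant), 3))  # 0.359 (higher, though no incumbent lost)
```

A proportional raise leaves the Gini untouched, yet adding one poor entrant pushes measured inequality up even though every original member is strictly better off.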

Yet, they are easy to circumvent, and that is what James Dean and I did. We used data from the Longitudinal Administrative Database (LAD) in Canada, which produces measures of income mobility for a panel of people. This means that the same people are tracked over time (a five-year period). This totally eliminates the composition bias, and we can assess how people within that panel evolve over time. This includes the evolution of income and of relative income status (which decile of overall Canadian society they were in).

Using the evolution of income and relative income status by province and income decile, we tested whether the poor gained more than the rich from high levels of economic freedom. The dataset essentially matched the level of economic freedom in each province to the five-year LAD panels for income mobility. The period covered runs from 1982–87 to 2013–18.

Our results are in the table below, which shows only those for the bottom 10% of the population. We find that economic freedom in each province heavily affects income mobility.


More importantly, the results we find for the bottom decile are greater than the results “on average” (for the whole panel) or for the top deciles. In other words, economic freedom matters more for the poor than for the rich. I hope you will find this summary enticing enough to consult the paper or the public policy summary we did for the Montreal Economic Institute (here).

People of the past were not irrational morons

That sentence is one that I repeat every time I teach economic history. It is repeated because a common misconception in history is that there are “different mentalities”: a pre-capitalist mentality versus a capitalist mentality; a western mentality versus a non-western one etc. The variations are endless but the common denominator is quite simple: there are discontinuities in economic rationality and these discontinuities explain economic change.

That, as I explain to my students, amounts to labelling people of the past as “irrational morons” who would leave $100 bills on the sidewalk. There are no variations in rationality, merely variations in constraints and incentives. That is what I tell my students. And the thing is, that statement is actually testable! Indeed, arguing that something in people’s brains changed is an argument that can never be tested because they are dead and cannot testify. In fact, even if they were alive, their statements would be meaningless because nothing speaks louder than actions (i.e. preferences are revealed by action). Statements about the rationality of action X or Y are easily testable because we can observe what people did (or do now). And it’s really easy to refute differences in “mentalities”.

Let me give you an example from my native Canada. In Canada, there is a large French minority (the majority of which lives in Quebec) which has long been argued to hold different economic mentalities than the neighboring English majority. Peddled (yes, that is a strong term but I think it applies) by both French and English historians (and economists), this view is used to explain the relative poverty of the French minority (which has historically been 60%-75% as rich as the English majority). As far back as the early 19th century, French-Canadians are argued to have clung to archaic farming techniques even though they observed better techniques from their English-speaking neighbors. Their “traditional conservative” outlook (in the words of an eminent Canadian historian) pushed them into economic stagnation (and even retrogression by some accounts). This view continues today. I vividly remember a debate on French-Canadian TV with former Quebec premier (like a governor for Americans) Bernard Landry telling me that there was a difference between my “anglo-saxon economic worldview” (i.e. neoclassical economics) and that which most French-Canadians held.

The virtue of this example is that French-Canadians were deemed to be of a “lesser” mentality than English-Canadians at the same moment in time. Thus, it is easy to test whether this was the case. In multiple works, notably in this paper at Historical Methods, I have used simple tools from economic theory to assess this lesser-mentalities hypothesis. Start from a simple Cobb-Douglas production function:

Y = A · K^α · L^(1-α)

Where A is the technology residual (also known as total factor productivity or TFP), Y is total output, K is the capital stock and L is the labor supply. The exponents are just the elasticities of capital and labor. If the English and French in Canada are separated, there are two production functions (one for each group) and they can be divided by each other. But the neat part about the Cobb-Douglas function here is that you can rearrange the equation and solve for A rather than Y. As A is total factor productivity, it tells us how effectively people combine inputs K and L to produce Y. You can then express A in the French sector (1) as a ratio of A in the English sector (2) as in the formulation below

A1 / A2 = (Y1 / Y2) / [ (K1 / K2)^α · (L1 / L2)^(1-α) ]

Technically, if the French farmers were less efficient than the English farmers, the ratio on the left-hand side should be less than 1 (as A1 < A2). Using data from the 1831 census of Lower Canada (as Quebec was known then), I compared farms in French areas to farms in English areas. The results? Yes, the French farmers were poorer (income Y1 < Y2), but there were very small differences in the efficiency of input use (A) between French and English farmers, as can be seen in the table below. French areas were between 0.5% and 4.3% less efficient than English areas.
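To make the accounting concrete, here is a minimal sketch of the calculation. All the numbers, including the capital elasticity of 0.4, are made up purely for illustration; the real estimates are in the paper and the table.

```python
def tfp(Y, K, L, alpha=0.4):
    # Back out the residual A from Y = A * K**alpha * L**(1 - alpha)
    return Y / (K ** alpha * L ** (1 - alpha))

# Hypothetical farm aggregates: French areas (1) poorer than English areas (2)
A_french = tfp(Y=60, K=40, L=30)
A_english = tfp(Y=80, K=55, L=38)

# A ratio close to 1 means both groups combine inputs about equally well,
# even though total output differs between the groups
ratio = A_french / A_english
```

The point of the exercise: lower output (Y1 < Y2) is perfectly compatible with a TFP ratio near 1. In that case the income gap comes from input endowments, not from how well inputs are used.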

But when you control for land quality, distance from urban markets, recency of settlement, complementary industries and other factors, there is no statistically significant effect of culture (proxied in the table below by the share of Catholics, as all French-Canadians in 1831 were Catholic and very few English-Canadians were). In other words, the small differences have nothing to do with culture or differences in mentalities.

Notice that this was a relatively simple logical test. The farming actions of French-Canadians were observed in the data. We know which inputs they chose to use (and in which quantities). The results of these actions are easily observable through the output data in the census. Irrationality on their part is thus easy to test, as a simple Cobb-Douglas model suggests that irrationality would manifest itself as an inferior ability to use and combine inputs. They used inputs as well as English-Canadians did, and so the claim of inferior mentalities was wrong.

One could reply that I am just picking an easy case to dismantle the “mentalities” claim. But I am actually late to that party by adding the French-Canadians. Similar claims have been made for Russian, French, Italian, Chinese, Vietnamese, Korean, Mexican, Indian, Polish, New Englanders (yes, you read right), Danish, Irish, Kenyans, Algerians, Egyptians etc. Hundreds of economic historians and economists have shown that these cases do not hold.

If you wish to explain economic change (or economic disparities), you have to look elsewhere than “changes in mentalities” (or differences in mentalities). If you don’t, you are essentially claiming that people of the past were irrational morons who simply lacked your expert knowledge.

Deficits and presidents

Wranglings over spending plans, deficits and public debt increases have been quite intense of late. What is quite surprising, at least at first glance, is that so few individuals are arguing for reducing public spending. Right now, the most “hawkish” policy stance is a slower rate of spending increases. Why the pro-spending tilt of the debates?

One could argue that it’s the pandemic. A crisis is, after all, a natural moment to increase spending. However, that argument is a bit weak now. The position was easily defensible six to twelve months ago, but not today when the economy is starting its recovery. If anything, as the recovery gets underway, the case for slashing spending levels is stronger than the case for raising them.

So, once again, why the pro-spending tilt? Let me point to the work of James Buchanan and Richard Wagner in Democracy in Deficit. In this work, whose lessons are underappreciated today, Buchanan and Wagner argue that there is an asymmetry in the political returns to fiscal policy. When a deficit occurs, the costs are delayed and thus harder to observe, while the benefits are immediate. When a surplus takes place, the benefits are delayed and the costs (i.e. less spending, higher taxes) are immediate.

This asymmetry feeds fiscal illusion: the public’s perception of the true costs and benefits of government expenditures is misconstrued. As long as the costs of taxation are underestimated and the benefits of public expenditures are overestimated, there is fiscal illusion. The nature of politics thus creates a strange incentive system where governments reap more electoral rewards from deficits than from surpluses. If you buy Buchanan and Wagner’s explanation, the pro-spending tilt is easy to explain.

However, the empirical evidence for this is somewhat limited. For example, Alberto Alesina showed in the 1990s that he could not find empirical patterns confirming Buchanan and Wagner’s theorizing. But I have recent work (co-authored with Marcus Shera of George Mason University, a good graduate student of mine) which proposes a simple mechanism to observe whether the first condition for a pro-deficit/pro-spending tilt is present.

American presidents are incredibly mindful of their historical reputation. As I argued elsewhere, presidents consider historians as a constituency they want to cater to so as to be remembered as great. If there is a reward from engaging in deficit spending given by historians, this would suggest that presidents have at least some incentives to be fiscally imprudent. Phrased differently, such returns by historians would suggest some divergence between what is fiscally prudent and what is politically beneficial.

Using the surveys of American presidents produced by C-SPAN and the American Political Science Association, Marcus and I found that there are strong rewards to engaging in deficit spending. Without any controls for the personal features of a president (e.g. war hero, scandal, intellect) or the features of a presidency (e.g. war, victory in war, economic growth), an extra percentage point of deficit-to-GDP is associated with a strong positive reward to a president (see table below). Once controls are introduced, the result remains: there are strong rewards from engaging in deficit spending.

Thus, at any time, a president who is mindful of his place in the history books would be tempted to engage in deficit spending. While Marcus and I are somewhat cautious in the paper, I do think that we are presenting a “lower-bound” case for a pro-deficit bias. Indeed, one could think that the hindsight of history would lead to harsher punishment for fiscal recklessness. After all, historians are not like voters: their time-horizons for evaluating a presidency are clearly not as short. If that is the case, one would expect historians to be less likely to reward deficits. And yet, they seem to do so, which is why I argue this is a lower-bound case.

In other words, Joe Biden might simply believe that the extra spending will secure him a place in history books. If other presidents are any indication, he is making a good bet.

The history of work and the myth of a leisurely past

Since Marshall Sahlins in the 1970s (and thanks to James Suzman’s Work), a weird idea has worked its way into popular imagination: people of the past did not work much. More precisely, the idea is that for most of human history our ancestors worked far less and thought very differently about work than we do now. That idea is based on a weird starting point and a misunderstanding of how “work” works.

The starting point is the pre-neolithic era when the vast majority of time was spent hunting and gathering. In that setting, the effort to acquire calories was modest largely because food was abundant relative to a tiny human population. Some early estimates suggest that, because of that relative endowment, people worked maybe less than 20 hours per week hunting and gathering. Some say even less. That is probably correct and also wrong.

Notice that I italicized hunting and gathering above to suggest that the time commitment of these two tasks was quite small. However, this is not the sum of all the work people did then. One has to understand that nomadic groups were nomadic in part because the largest share of their calories was also quite mobile. This meant moving around significantly to track food under a key constraint: that calories from gathering be available.

This meant that people moved from “oasis” to “oasis” or from “patch” to “patch”. Between each patch/oasis, there was a lot of time spent “in transit” (let’s call this d for dead time). That time is technically not work for hunting or gathering — but it is work. Not counting it is a mistake.

To see how it matters, consider the graph below, which depicts a forager who moves between oases/patches where food is available. As he stays in an oasis, the yield of food y is marginally decreasing, so at some point he has an incentive to move on. When he moves on, he incurs the cost d, which is dead time while moving. Suppose also that a single oasis/patch per year (which encompasses multiple time periods) is insufficient to survive the year. Thus, multiple patches must be exploited. Supposing that all oases are equally distant, of equal quality and that there are many oases in total, how can we picture the decision to move to another? If you want to maximize your food intake over a long period of time, you have to go to multiple oases in a year. This is where we introduce the dashed blue line, which is the total yield from all oases/patches divided by time. Notice that it starts at the origin so that we are capturing the cost of d.

Figure 1: How people in the past worked

These two lines tell us that you stay at a single oasis until its marginal return falls below the average yield over all oases/patches. Why does this matter? Well, imagine the implications if each patch is less productive. You have to move more often to reach a certain target and incur d more frequently. That effectively means that you have to exploit a greater territory to meet a certain target of food (e.g. survival).
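The stopping rule just described (stay until the patch’s marginal yield falls below the average yield once travel time is counted) can be sketched numerically. The functional form and every number below are illustrative assumptions, not estimates:

```python
import math

def patch_yield(t, y_max=100.0, r=0.5):
    # Cumulative food from one patch after staying t periods;
    # diminishing marginal returns (an assumed functional form)
    return y_max * (1 - math.exp(-r * t))

def best_stay(d, y_max=100.0, r=0.5):
    # Pick the stay time t that maximizes the long-run intake rate
    # yield / (t + d), where d is dead time in transit between patches
    ts = [i / 100 for i in range(1, 2001)]
    return max(ts, key=lambda t: patch_yield(t, y_max, r) / (t + d))

def patches_needed(y_max, d, target=1000.0):
    # How many patches must be exploited to reach a survival target
    t = best_stay(d, y_max)
    return math.ceil(target / patch_yield(t, y_max))
```

Two implications fall out of this sketch: a longer transit time d makes it optimal to stay longer in each patch, and less productive patches force more moves (hence more total dead time) to meet the same food target.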

The estimates of time spent hunting and gathering essentially cover the time within patches rather than the time spent across all patches. Thus, there is a massive underestimate. The yield of a single oasis/patch was so low in the pre-neolithic that moving was something that clans did often. In the late Ice Age, family groups apparently moved every 3-6 days. Modern nomads in certain regions move some 400 km per year. At 5 km/h, this is 80 hours of work per year. However, 5 km/h is too high as there were children to carry, which slowed things down. At 3 km/h, we are talking 133 hours per year (or roughly 2.6 extra hours per week). This is just dead time, but it is work. As such, more exhaustive worktime estimates suggest values of 35 to 43 hours per week. Most western countries are below this level. Moreover, it is worth considering that work started at young ages and there was no retirement. With shorter lives and earlier work-entry, a smaller fraction of waking lifetime was spent in leisurely pursuits. Ergo, it is insanely likely that no society today exhibits more “lifetime” work than prehistoric humans did.
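The back-of-the-envelope arithmetic on dead time works out as follows (the 400 km figure is the one cited above; the walking speeds are the same rough assumptions):

```python
km_per_year = 400                  # yearly distance covered by some modern nomads

hours_at_5kmh = km_per_year / 5    # transit hours per year at 5 km/h
hours_at_3kmh = km_per_year / 3    # slower pace when carrying children
extra_per_week = hours_at_3kmh / 52  # extra weekly hours of dead-time "work"
```

At the slower pace, transit alone adds roughly two and a half hours of work per week before any hunting or gathering is counted.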

Finally, it is worth pointing out the very obvious. The introduction of agriculture, by removing the need to move around and by reducing the variability of calories (i.e. fewer chances of catastrophes), essentially increased the benefit of working (i.e. making leisure relatively costlier). It is unsurprising, then, that the introduction of agriculture led to some increases in labor supply. That being said, it is clearly false that we work more today than our prehistoric ancestors did. There is no way around it.

Supply chain failures and the O-Ring

Difficulties in the global supply chain have been a recurrent news item since the beginning of fall. The result has been that many pundits and politicians have argued for new policies while spouting platitudes such as the need to “rethink trade”. For my part, all I could think of was the O-Ring theory of development developed by Michael Kremer.

The name for that theory is taken from the 1986 Challenger disaster, in which the failure of one small, inexpensive part caused the shuttle to explode upon take-off. Generally, the theory is applied to questions of development and speaks to high complementarities between inputs. Suppose the economy is divided into multiple sectors that exchange intermediate goods with one another (i.e. all firms are dependent on each other). Each of these goods can be labelled n and producing these goods requires skills q. However, each sector buys multiple different n as intermediate goods. For example, this would mean that sector “Vincent” buys goods from sectors “Joy”, “Jeremy” and “James” to produce the “Vincent” goods.

Imagine now that q is the probability that n is produced with sufficient quality so that it bears its full market value (in which case, 1-q is the probability that n is produced so poorly that it gets a zero price). This means that, to produce its goods, sector “Vincent” needs sectors “Joy”, “Jeremy” and “James” to produce high-quality goods. If one of the intermediate goods “Vincent” buys from the others is of poor quality, all of Vincent’s production is worthless. Hence the analogy to the O-Ring of the Challenger disaster.
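A minimal sketch of this multiplicative structure, with probabilities chosen arbitrarily for illustration:

```python
def expected_value(qs, full_value=1.0):
    # O-Ring logic: the final good bears its full value only if every
    # intermediate task succeeds, so the expected value is the product
    # of the individual success probabilities q
    p = 1.0
    for q in qs:
        p *= q
    return p * full_value

# Sector "Vincent" depends on three suppliers of equal reliability
baseline = expected_value([0.95, 0.95, 0.95])

# One supplier (a bottleneck) becomes less reliable; the loss is felt
# on Vincent's entire output, not just on that one input
bottleneck = expected_value([0.95, 0.95, 0.80])
```

A drop in a single supplier’s q lowers the expected value of the whole chain, which is the sense in which bottlenecks are multiplicative rather than additive.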

So what’s the link with the supply chain failures, you ask? Well, it’s pretty straightforward: the O-Ring theory implies that the impact of a bottleneck has a multiplicative effect on other productions. Now, everyone may be excused for thinking that I simply explained in a complex way something that is simple (i.e. don’t half-ass things). However, this way of formulating things is very helpful because of q.

If 1-q is the probability of a badly-performed task, what determines q? Some could say it’s the pandemic, but that would be incorrect. An article in Nature shows that COVID-19 has had widely disparate effects on supply chains in different countries. If the cause were global, the effects should be roughly similar everywhere. Ergo, some local factors must be at play. Local factors of relevance would be laws on shipping, such as the Jones Act in the United States, or the public ownership of ports in many western countries. By preventing cabotage and limiting foreign ships, the Jones Act leaves little excess capacity in the American shipping industry when demand shocks occur. By being more bureaucratically rigid, ports may be unable to adapt to unforeseen events (which is why there are papers in transportation economics showing that privatizing ports tends to increase productivity and reduce shipping costs, notably by speeding turnarounds).

Each of these local factors has to do with local policies that reduce q and thus increase the likelihood of failures (i.e. bottlenecks), which then reverberate through total output (beyond the narrow supply chain sector). From this, I reach a simple conclusion: the complications that we attribute to the COVID crisis are more likely the result of local factors.