Deficits and presidents

Wranglings over spending plans, deficits and public debt increases have been quite intense of late. What is surprising, at least at first glance, is how few voices are arguing for reducing public spending. Right now, the most “hawkish” policy stance on offer is a slower rate of spending increases. Why the pro-spending tilt of the debate?

One could argue that it’s the pandemic. A crisis is, after all, a natural moment to increase spending. However, that argument is weak now. It was easily defensible six to twelve months ago, but not today, when the economy is starting its recovery. If anything, with the recovery underway, the case for slashing spending levels is stronger than the case for raising them.

So, once again, why the pro-spending tilt? Let me point to the work of James Buchanan and Richard Wagner in Democracy in Deficit. In that book, whose lessons are underappreciated today, Buchanan and Wagner point to two mechanisms. First, there is an asymmetry in the political returns to fiscal policy. When a deficit occurs, the benefits are immediate while the costs are delayed and thus harder to observe. When a surplus occurs, the benefits are delayed and the costs (i.e. less spending, higher taxes) are immediate.

Second, there is the far more serious threat of fiscal illusion: the public misperceives the true costs and benefits of government expenditures. As long as the costs of taxation are underestimated and the benefits of public expenditures are overestimated, there is fiscal illusion. The nature of politics thus creates a strange incentive system in which governments reap greater electoral rewards from deficits than from surpluses. If you buy Buchanan and Wagner’s explanation, the pro-spending tilt is easy to explain.

However, the empirical evidence for this is somewhat limited. In the 1990s, for example, Alberto Alesina could not find empirical patterns that confirmed Buchanan and Wagner’s theorizing. But I have recent work (co-authored with Marcus Shera of George Mason University, a graduate student of mine) that proposes a simple way to observe whether the first condition for a pro-deficit/pro-spending tilt is present.

American presidents are incredibly mindful of their historical reputation. As I argued elsewhere, presidents treat historians as a constituency to cater to so as to be remembered as great. If historians reward deficit spending, presidents have at least some incentive to be fiscally imprudent. Phrased differently, such rewards from historians would indicate a divergence between what is fiscally prudent and what is politically beneficial.

Using the surveys of American presidents produced by C-SPAN and the American Political Science Association, Marcus and I found that there are strong rewards to engaging in deficit spending. Without any controls for the personal features of a president (e.g. war hero, scandal, intellect) or the features of a presidency (e.g. war, victory in war, economic growth), an extra percentage point of deficit-to-GDP is associated with a strong positive reward to a president (see table below). Once controls are introduced, the result remains: there are strong rewards from engaging in deficit spending.
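For readers who want the flavor of the exercise, here is a minimal sketch of that kind of specification in Python. The file name, column names and controls are hypothetical placeholders for illustration, not our actual data or code.

```python
# Hypothetical sketch: historian ratings regressed on the average
# deficit-to-GDP ratio of a presidency, with and without controls.
# File and column names are placeholders, not the actual dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("presidents.csv")  # hypothetical: one row per president

# Bivariate specification: rating on deficit alone
naive = smf.ols("rating ~ deficit_gdp", data=df).fit()

# Adding controls for features of the presidency and of the president
controlled = smf.ols(
    "rating ~ deficit_gdp + war + war_victory + gdp_growth"
    " + scandal + war_hero",
    data=df,
).fit()

# The claim in the text is that the deficit coefficient stays positive
print(naive.params["deficit_gdp"], controlled.params["deficit_gdp"])
```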

Thus, at any time, a president who is mindful of his place in the history books would be tempted to engage in deficit spending. While Marcus and I are somewhat cautious in the paper, I do think that we are presenting a “lower-bound” case for a pro-deficit bias. Indeed, one could think that the hindsight of history would lead to greater punishment for fiscal recklessness. After all, historians are not like voters: their time horizons for evaluating a presidency are clearly not as short. If that is the case, one should expect historians to be less likely to reward deficits. And yet they seem to reward them anyway, which is why I argue this is a lower-bound case.

In other words, Joe Biden might simply believe that the extra spending will secure him a place in history books. If other presidents are any indication, he is making a good bet.

The history of work and the myth of a leisurely past

Since Marshall Sahlins in the 1970s (and, more recently, thanks to James Suzman’s Work), a strange idea has worked its way into the popular imagination: people of the past did not work much. More precisely, the idea is that for most of human history our ancestors worked far less, and thought very differently about work, than we do now. That idea rests on a questionable starting point and a misunderstanding of how “work” works.

The starting point is the pre-Neolithic era, when the vast majority of time was spent hunting and gathering. In that setting, the effort needed to acquire calories was modest, largely because food was abundant relative to a tiny human population. Some early estimates suggest that, because of that relative endowment, people worked perhaps less than 20 hours per week hunting and gathering. Some say even less. That is probably correct and also wrong.

Notice that I singled out hunting and gathering above: the time commitment of those two tasks was indeed quite small. However, they are not the sum of all the work people did then. One has to understand that nomadic groups were nomadic in part because the largest share of their calories was also quite mobile. This meant moving around a great deal to track food, under a key constraint: that calories from gathering also be available.

This meant that people moved from “oasis” to “oasis”, or from “patch” to “patch”. Between each patch/oasis, a lot of time was spent “in transit” (call this d, for dead time). That time is technically not hunting or gathering, but it is work. Not counting it is a mistake.

To see how this matters, consider the graph below, which depicts a forager who moves between oases/patches where food is available. As the forager stays in an oasis, the yield of food y is marginally decreasing, so that at some point he has an incentive to move on. When he moves on, he incurs the cost d, the dead time spent moving. Suppose also that a single oasis/patch per year (a year spanning multiple time periods) is insufficient to survive on, so that multiple patches must be exploited. Supposing that all oases are equally distant, of equal quality and numerous, how can we picture the decision to move to another? A forager who wants to maximize food intake over a long period has to visit multiple oases in a year. This is where we introduce the dashed blue line, which is the total yield from all oases/patches divided by total time. Notice that it starts at the origin, so that the cost of d is captured.

Figure 1: How people in the past worked

These two lines tell us that you stay at a single oasis until its marginal return falls below the average yield over all oases/patches. Why does this matter? Well, imagine the implications if each patch is less productive. You have to move more often to reach a given target and thus incur d more frequently. That effectively means that you have to exploit a greater territory to meet a given target of food (e.g. survival).
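To make that stopping rule concrete, here is a minimal numerical sketch. The yield curve, the travel time d and all the numbers are illustrative assumptions, not estimates from the literature.

```python
# Minimal sketch of the patch-leaving rule (the marginal value theorem).
# y(t) is cumulative food from one patch, with diminishing marginal returns;
# d is the dead time spent travelling between identical patches.
# The forager leaves when the marginal yield y'(t) falls to the average
# rate y(t) / (t + d). Functional form and numbers are illustrative only.
import numpy as np

d = 2.0                              # assumed dead time per move
t = np.linspace(0.01, 20, 2000)      # time spent inside a patch
y = 10 * (1 - np.exp(-0.3 * t))      # assumed diminishing yield curve
average = y / (t + d)                # long-run rate, dead time included

# The average rate peaks exactly where the marginal yield crosses it,
# so the optimal residence time is the argmax of the average rate.
t_star = t[np.argmax(average)]
print(f"leave each patch after ~{t_star:.2f} time units when d = {d}")

# Raising d (farther patches) or lowering the yield (poorer patches)
# drags down the overall food-per-hour rate: more of the "work week"
# becomes transit rather than hunting and gathering.
```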

The estimates of time spent hunting and gathering essentially capture the time within patches rather than the time spent across all patches. Thus, they are a massive underestimate. The yield of a single oasis/patch was so low in the pre-Neolithic that moving was something clans did often. In the late Ice Age, family groups apparently moved every 3 to 6 days. Modern nomads in certain regions move some 400 km per year. At 5 km/h, this is 80 hours of walking per year. However, 5 km/h is too fast, as there were children to carry, which slowed things down. At 3 km/h, we are talking about 133 hours per year (or roughly 2.6 extra hours per week). This is just dead time, but it is work. As such, more exhaustive worktime estimates suggest values of 35 to 43 hours per week. Most Western countries are below this level. Moreover, it is worth considering that work started at young ages and there was no retirement. With shorter lives and earlier work-entry, a smaller fraction of waking lifetime was spent in leisurely pursuits. Ergo, it is very likely that no society today exhibits more “lifetime” work than prehistoric humans did.
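The transit arithmetic above is easy to verify; the snippet below simply reproduces the figures cited in the text.

```python
# Back-of-the-envelope check of the dead-time figures cited above.
km_per_year = 400            # distance moved per year by some modern nomads

for speed in (5, 3):         # km/h; 3 km/h allows for carrying children
    hours_per_year = km_per_year / speed
    hours_per_week = hours_per_year / 52
    print(f"{speed} km/h: {hours_per_year:.0f} h/year "
          f"({hours_per_week:.1f} h/week of dead time)")
# 5 km/h -> 80 h/year (~1.5 h/week); 3 km/h -> 133 h/year (~2.6 h/week)
```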

Finally, it is worth pointing out the very obvious. The introduction of agriculture, by removing the need to move around and by reducing the variability of calorie supplies (i.e. fewer chances of catastrophes), essentially increased the benefit of working (i.e. it made leisure relatively costlier). It is unsurprising, then, that the introduction of agriculture led to some increases in labor supply. That being said, it is clearly false that we work more today than our prehistoric ancestors did. There is no way around it.

Supply chain failures and the O-Ring

Difficulties in the global supply chain have been a recurrent news item since the beginning of fall. As a result, many pundits and politicians have argued for new policies while spouting platitudes such as the need to “rethink trade”. For my part, all I could think of was the O-Ring theory of development developed by Michael Kremer.

The theory takes its name from the 1986 Challenger disaster, in which the failure of one small, inexpensive part caused the shuttle to explode shortly after take-off. Generally, the theory is applied to questions of development and speaks to the strong complementarities between inputs. Suppose the economy is divided into multiple sectors that exchange intermediate goods with one another (i.e. all firms are dependent on each other). Each of these goods can be labelled n, and producing each good requires skill q. Each sector buys several different n as intermediate inputs. For example, sector “Vincent” buys goods from sectors “Joy”, “Jeremy” and “James” to produce the “Vincent” good.

Imagine now that q is the probability that n is produced with sufficient quality to bear its full market value (in which case 1 − q is the probability that n is produced so poorly that it fetches a zero price). This means that, to produce its good, sector “Vincent” needs sectors “Joy”, “Jeremy” and “James” to produce high-quality goods. If even one of the intermediate goods “Vincent” buys from the others is defective, all of Vincent’s production is worthless. Hence the analogy with the O-Ring of the Challenger disaster.
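A toy calculation shows how punishing this multiplicative structure is. The sector names come from the example above; the probabilities and values are made up for illustration.

```python
# Toy illustration of the O-Ring logic: the expected value of sector
# "Vincent"'s output is the *product* of the quality probabilities q of
# every input sector. Probabilities and values are made up for illustration.
import math

q = {"Joy": 0.95, "Jeremy": 0.95, "James": 0.95}
full_value = 100.0   # value of Vincent's output if every input is good

print(full_value * math.prod(q.values()))   # ~85.7, already well below 95

# A bottleneck in a single input drags down the entire chain:
q["James"] = 0.50
print(full_value * math.prod(q.values()))   # ~45.1, despite two good inputs
```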

So what is the link with the supply chain failures, you ask? Well, it’s pretty straightforward: the O-Ring theory implies that a bottleneck has a multiplicative effect on all other production. Now, everyone may be excused for thinking that I have simply explained in a complex way something that is simple (i.e. don’t half-ass things). However, this way of formulating it is very helpful because of q.

If q is the probability that a task is performed well, what determines q? Some could say it’s the pandemic, but that would be incorrect. An article in Nature shows that COVID-19 has had widely disparate effects on supply chains in different countries. If the cause were global, the effects should be roughly similar everywhere. Ergo, some local factors must be at play. Local factors of relevance would be shipping laws such as the Jones Act in the United States, or the public ownership of ports in many Western countries. By reserving cabotage for domestic ships and limiting the use of foreign vessels, the Jones Act leaves little excess capacity in the American shipping industry when demand shocks occur. By being more bureaucratically rigid, publicly owned ports may be unable to adapt to unforeseen events (which is why there are papers in transportation economics showing that privatizing ports tends to increase productivity and reduce shipping costs, notably by speeding up turnarounds).

Each of these local factors has to do with local policies that reduce q and increase the likelihood of failures (i.e. bottlenecks), which then reverberate through total output (well beyond the narrow supply chain sector). From this, I draw a simple conclusion: the complications we attribute to the COVID crisis are more likely the result of local factors.