Book Review: Cronyism: Liberty versus Power in Early America, 1607–1849

For the past few weeks, economist Patrick Newman has been doing the rounds for his new book (named in the title of this blog post) on American economic history from 1607 to 1849. Well, it’s not only about American economic history. It’s more about the institutional history of the United States before 1850 and how that history relates to economic outcomes. It is an amazing book. Unfortunately, I expect many economic historians to ignore or fail to notice it. I hope that this blog post will at least reduce the likelihood of that happening, because Newman’s book holds strong explanatory power for anyone interested in the link between growth and institutions.

Newman’s argument is actually quite simple. First, there are two broadly defined camps: the forces of liberty and the forces of power. Already, some may balk at this dichotomy, but I would advise them not to. There are many reasons to keep going. The first is that it invokes an older tradition in historical studies that starts with Lord Acton and has been continued by numerous historians on both the left and the right. The other reasons become evident as one moves along in the book.

The forces of liberty are those that seek to constrain the state and the exercise of power. The forces of power, for their part, are those that seek to be empowered by a strong, capable and relatively unconstrained state. The forces of power, however, invite cronyism because the empowerment also permits personal aggrandizement (e.g. legally protected monopolies such as charters, tariffs, subsidies, grants, patronage).

The founding of the United States was, according to Newman, a battle between both forces, with the British playing the forces of power. After the Revolution, the forces of power continued inside the Federalist ranks, which basically dominated the constitutional convention of 1787 and the first Congress. Acting as a de facto heir to Murray Rothbard (because that is the title I give him), Newman adopts the position that the foundation of the US was in fact a rent-seeking bargain engineered by the Federalist forces (Newman notably edited the lost volume of Rothbard’s Conceived in Liberty on the early republic).

After that, Antifederalists and Republicans coalesced into a working coalition that reinterpreted the constitution in a way that backfired against the Federalists and led to the Jeffersonian revolution of 1800. Important reforms, which Newman credits as being beneficial to living standards, were adopted. However, the Jeffersonians rapidly became corrupted by power. And here is the second reason not to balk at Newman’s dichotomy of the forces of power and liberty: people can move between camps. In other words, ideological commitment is not inelastic. Some in one camp or the other can switch when the rewards to doing so change. However, the key point that Newman makes is that commitment to the forces of liberty is far more elastic than commitment to the forces of power. The Jeffersonians’ commitment to liberty waned and they eventually enacted policies quite similar to those of the Federalists. They too engaged in cronyism. The same ebb and flow reoccurred later with the Jacksonians.

And here comes the third reason not to balk at Newman’s dichotomy: it actually holds pretty decent explanatory power. One common argument among financial and economic historians is that the United States may have sounded like a Jeffersonian project, but the policies of the Early Republic and Antebellum eras were distinctly Hamiltonian (i.e. Federalist). To be sure, there is some evidence to that effect, which is what someone could retort to Newman. However, the old adage that “those in glass houses should not throw bricks” applies here. Revisions to the historical estimates of living standards have gradually swung in favor of the predictions associated with Newman’s model of the forces of power and liberty.

Consider this new article in Historical Methods by Frank Garmon (of Christopher Newport University). Garmon took issue with data from 1798 used by many scholars. In 1798, Congress introduced a direct property tax to prepare for the possibility of a war with France. As Garmon succinctly summarizes: “The law creating the tax consisted of three elements: a flat tax on slaves per head, a progressive tax on houses with rates escalating based on value, and a proportional tax on land based on value to make up the difference in each state’s obligation”. Other scholars, such as my co-author Peter Lindert and Jeffrey Williamson, argued that these features invited corruption in the assessment of tax liabilities. This was particularly true in the South because of the flat tax per slave. Thus, if one tries to use the tax data to estimate economic activity circa 1800, one has to adjust it to some degree to reflect geographically varying levels of corruption. Garmon finds that corruption was not an issue. The disparities pointed out by others (which made sense at first glance) can largely be explained by normal economic factors such as population density (which would affect land valuations, etc.). Thus, Garmon argues that there is no need to deflate. As a result, he finds that incomes in the southern states in 1800 were roughly 5% lower than previously estimated (a downward revision that would have been smaller in northern states).

Why is Garmon’s result relevant to Newman’s claim? Because any lowering of the 1800 level of income is going to increase the rate of growth from there to 1840, when the commonly used estimates (produced by R.A. Easterlin) become available. Any increase in that rate of growth goes in favor of Newman’s model, because the era from 1800 to 1840 is predominantly occupied by pro-liberty forces (even though there are ebbs and flows).
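The mechanics here are just compound-growth arithmetic. A minimal sketch with made-up round numbers (illustrative only, not Garmon’s or Easterlin’s actual estimates):

```python
# Illustrative only: the income figures below are hypothetical round numbers,
# not Garmon's or Easterlin's actual estimates.

def cagr(start, end, years):
    """Compound annual growth rate between two income levels."""
    return (end / start) ** (1 / years) - 1

income_1840 = 100        # hypothetical 1840 benchmark (index value)
old_income_1800 = 60     # hypothetical pre-revision 1800 estimate
new_income_1800 = 57     # the same estimate revised roughly 5% lower

print(f"old implied growth: {cagr(old_income_1800, income_1840, 40):.2%}/yr")
print(f"new implied growth: {cagr(new_income_1800, income_1840, 40):.2%}/yr")
```

With these invented numbers, the implied growth rate rises from roughly 1.3% to roughly 1.4% per year; the direction of the effect, not the magnitude, is the point.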

I am not in full agreement with Newman’s book and his Rothbardian narrative (I am much less fond of Rothbard than he is, notably because of the tendency of his narratives to feature villains and heroes). However, the reality is that Newman’s description (and the Rothbardian narrative he imports and adapts) holds strong explanatory power.

Will we repeat the Christmas Covid wave?

EDIT at 7pm, same day as posting: You know you have good friends when someone quietly emails you and tells you that the news about Omicron just got much worse and you should probably edit your post. I’ve been trying to rationalize why this January will be better than last January. Of course, if it were not for Omicron, I would expect very little from holiday gatherings among mostly-vaccinated Americans. However, having known Omicron was looming, I probably shouldn’t have even tried to speculate. Get your booster and be prepared to hunker down in January if the next 2-3 weeks of data indicate that infections are turning extra-lethal.

In keeping with the “dismal science” brand, let’s dwell on the horrible death toll of the January 2021 Covid wave in the US that followed the Christmas holiday. Here comes Christmas (and other winter holidays) again, a major public health event.

https://www.cnbc.com/2021/01/27/us-reports-record-number-of-covid-deaths-in-january.html

This graph I borrowed from CNBC shows how fast deaths spiked up after the winter holidays of 2020. See also https://data.cdc.gov/.

According to Google search auto-complete, the public is more interested in whether there will be another Christmas Prince movie than whether there will be another Christmas Covid death wave.

I think it’s unlikely that we will see a repeat of exactly what happened last year. I’ve been looking online for predictions, and mostly I have found articles warning that Omicron will cause some kind of wave. No one wants to commit to predicting how many people will die, because anyone who tries is sure to be wrong. The consensus is that breakthrough infections are likely but that vaccines protect against severe illness.

Nearly a million Americans have died from Covid already (Jeremy argues for a million). Some of those deaths, in retrospect, can almost certainly be tied to family travel during the holidays in 2020. The January Covid wave has only happened once, so it’s impossible to predict what will happen this time. Unfortunately, we may get an interaction between increased holiday travel and a novel, highly infectious variant.

The Omicron variant is spreading fast, but no one knows if it will be worse than what we are currently dealing with from Delta. Preliminary data suggest that triple-vaxxed people are not at high risk, which is reassuring to me personally. Thank you, South Africa, for being fast and sharing data with the world. For communities with low vaccination rates, it seems certain that more deaths will result from fast-traveling Omicron. Yet, from my reading this week, it is hard to know if it will really be much worse than what they are currently experiencing from Delta.

I’m keeping a Twitter thread going of what other people are saying. Caleb Watney points out that we have two things going for us. First, widely available vaccines keep people safer from infection and reduce the chance of needing medical treatment. Second, we have gotten better at treating the disease. Together, that should mean fewer deaths in January 2022, as long as people seek treatment quickly and hospital capacity does not become a limiting factor. Omicron could multiply cases so quickly that we can’t apply our best treatments to everyone. That is the biggest reason to worry.

Even though people will be less cautious about winter holiday travel this year than they were last year, the country has been open for many months now, including the recent Thanksgiving holiday. The vulnerable population this time should be smaller, in terms of the people likely to die from Omicron.

To say that we won’t blindly repeat the biggest mortality event of my lifetime is not “optimism”. It seems like this January will not be as bad as last January for the reasons Watney states: better medical technology on hand, most importantly vaccines for prevention.


Covid-19 & The Federal Reserve

I remember people talking about Covid-19 in January of 2020. There had been several epidemic scare-claims from major news outlets in the decade prior and those all turned out to be nothing. So, I was not excited about this one. By the end of the month, I saw people making substantiated claims and I started to suspect that my low-information heuristic might not perform well.

People are different. We have different degrees of excitability, different risk tolerances, and different biases. At the start of the pandemic, these differences were on full display between political figures and their parties, and among the state and municipal governments. There were a lot of divergent beliefs about the world. Depending on your news outlet of choice, you probably think that some politicians and bureaucrats acted with either malice or incompetence.

I think that the Federal Reserve did a fine job, however. What follows is an abridged timeline, graph by graph, of how and when the Fed managed monetary policy during the Covid-19 pandemic.

February, 2020: Financial Markets recognize a big problem

The S&P begins its rapid descent on February 20th and would ultimately lose a third of its value by March 23rd. Financial markets are often easily scared, however. The primary tool the Fed has is adjusting the quantity of reserves and the available money supply by purchasing various assets. The Fed didn’t begin buying extra assets of any kind until mid-March. There is a clear response by the 18th, though they may have started making a change by the 11th. One might argue that they cut the federal funds rate as early as the 4th, but given that there was no change in their balance sheet, this was probably demand driven.

https://fred.stlouisfed.org/graph/?g=JYVL
https://fred.stlouisfed.org/graph/?g=JYVy

March, 2020: The Fed Accommodates quickly and substantially.

In the month following March 9th, the Fed increased M2 by 8.3%. By the week of March 21st, consumer sentiment and mobility were down and economic policy uncertainty (EPU) began to rise substantially – people freaked out. Although the weekly consumer sentiment indicator was back within the normal range by the end of April, EPU remained elevated through May of 2020. Additionally, although lending was only slightly down, bank reserves increased 71% from February to April. Much of that was due to Fed asset purchases. But there was also a healthy chunk that was due to consumer spending tanking by 20% over the same period.

https://fred.stlouisfed.org/graph/?g=JYXj
https://fred.stlouisfed.org/graph/?g=JYYz

In the 18 months prior to 2020, M2 had grown at a rate of about 0.5% per month. For the nearly 18 months following the sudden 8.3% increase, the growth rate of M2 roughly doubled to about 1% per month. The Fed accommodated quite quickly in March.
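To see what those monthly rates mean cumulatively, here is a quick compounding check using the approximate rates cited above:

```python
# Cumulative M2 growth implied by the approximate monthly rates above.
pre_pandemic = (1 + 0.005) ** 18 - 1               # 18 months at ~0.5%/month
pandemic_era = (1 + 0.083) * (1 + 0.01) ** 18 - 1  # one-time 8.3% jump, then ~1%/month

print(f"pre-2020, 18 months:  {pre_pandemic:.1%}")   # ~9.4%
print(f"post-jump, 18 months: {pandemic_era:.1%}")   # ~29.5%
```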

April, 2020: People are awash with money

Falling consumption caused bank deposit balances to rise by 5.6% between March 11th and April 8th. The first round of stimulus checks was deposited during the weekend of April 11th. That contributed to bank deposits rising by another 6.7% by May 13th.

By the end of March, three weeks after it began increasing M2, the Fed remembered that it really didn’t want another housing crisis. It didn’t want another round of fire sales, bank failures, disintermediation, collapsed lending, and debt deflation. It went from owning $0 in mortgage-backed securities (MBS) on March 25th to owning nearly $1.5 billion worth by the week of April 1st. Nobody’s talking about it, but the Fed kept buying MBS at a constant growth rate through 2021.

May, 2020 – December, 2021: The Fed Prevents Last-Time’s Crisis

Jerome Powell presided over the shortest US recession on record. The Fed helped to successfully avoid a housing collapse, disintermediation, and debt deflation – by 2008 standards. The monthly supply of housing collapsed, but it had bottomed out by the end of the summer. By August of 2021, the supply of housing had entirely recovered. The average price of new house sales never fell: prices in April of 2020 were typical of the year prior, then rose thereafter. A broader measure of success is that total loans did not fall sharply and are nearly back to their pre-pandemic volumes. After 2008, it took six years to again reach the prior peak. By a broader measure still, total spending in the US economy is back to the level predicted by the pre-pandemic trend.

The Fed can’t control long-run output. As I’ve written previously, insofar as aggregate demand management is concerned, we are perfectly on track. The problem in the US economy now is real output. The Fed avoided debt deflation, but it can’t control the real responses in production, supply chains, and labor markets that were disrupted by Covid-19 and the associated policy responses.

What was the cost of the Fed’s apparent success? Some have argued that the Fed has lost some of its political insulation and that it unnecessarily and imprudently over-reached into non-monetary areas. Maybe future Fed responses will depend on who is in office or will depend on which group of favored interests need help. Personally, I’m not so worried about political exposure. But I am quite worried about the Fed’s interventions in particular markets, such as MBS, and how/whether they will divest responsibly.

Of course, another cost of the Fed’s policies has been higher inflation. During the 17 months prior to the pandemic, inflation was 0.125% per month. During the pandemic recession, consumer prices dipped and inflation was moderate through November.  But, in the 16 months since April of 2020, consumer prices have grown at a rate of 0.393% per month – more than three times the previous rate. Some of that is catch-up after the brief fall in prices.

Although people are genuinely worried about inflation, they were also worried about it after the 2008 recession, and it never came. This time, inflation is actually elevated. But people were complaining about inflation before it was even perceptible. The compound annual rate of inflation rose to 7% in March of 2021. But it had been almost zero as recently as November 2020. That March 2021 number is misleading. The actual change in prices from February to March was 0.567%. Something priced at $10 in February was priced at about $10.06 in March. Hardly noticeable, were it not for headlines and news feeds.
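For readers who want to check the annualization, converting a monthly rate to a compound annual rate is one line (the 0.567% figure is the one quoted above):

```python
# A 0.567% month-over-month price change compounds to roughly 7% per year.
monthly = 0.00567
annual = (1 + monthly) ** 12 - 1
print(f"annualized: {annual:.1%}")   # 7.0%

# The same one-month change applied to a $10 item:
print(f"${10 * (1 + monthly):.2f}")  # $10.06
```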

Hospital Merger Update

The panel I mentioned last week on the proposed merger of Rhode Island’s two largest hospital systems happened yesterday. I’ll post some reactions here; there was a lot I didn’t get to say, since my section had only 45 minutes split across 4 panelists, and Senator Whitehouse naturally got more of the time.

The Lifespan and Care New England CEOs trying to merge their systems opened with what to me seemed like their weakest argument, a general appeal to togetherness. They said that if the Patriots offense and defense had to be kept as separate teams, they wouldn’t be very good. To me the right metaphor is that if you merged all the NFL teams into one super team, they wouldn’t try very hard.

To their credit though, overall the hospital CEOs and President Paxson of Brown University were surprisingly honest about the risks, basically acknowledging that hospital mergers are often just a way to gain market power at everyone else’s expense, but arguing that for various reasons this one is different. They seem to realize that if you define the relevant market area as the state of Rhode Island (as e.g. the Dartmouth Atlas does in their “Hospital Referral Regions”) then the merged entity would have a nearly 80% market share and be challenged by the FTC as an obvious monopoly. So they argue that the relevant market should include Boston and much of Connecticut. They argue that it won’t just be an excuse to raise prices because they are non-profits and the state has rate regulations.

They identified two potential true efficiencies, integrating the electronic medical records of the two systems and being able to easily conduct research across both systems (both systems have many employees who are faculty at Brown Med School, including my wife). In a reasonable world these efficiencies could be gained without merging, though I suspect HIPAA prevents this, meaning one of its many perverse unintended consequences would be incentivizing mergers.

Their biggest admission against interest was that “the primary benefit [of the merger] comes from scale” and that “scale matters for purchasing supplies and staffing”. To me this implies “don’t worry, we won’t use our monopoly power against consumers, we’ll just use it against suppliers and staff”. But the FTC just repealed its consumer welfare standard, and so I think these statements could come back to haunt the merging parties.

800,000 Deaths? Or 1 Million Deaths?

According to the Johns Hopkins COVID tracker, the US has now surpassed 800,000 COVID deaths during the pandemic. The CDC COVID tracker is almost to 800,000 too. But is this number right? Confusion about COVID deaths and total deaths has been rampant throughout the pandemic, especially when comparing across countries.

One method that many have suggested is excess deaths, which is generally defined as the number of deaths in a country above-and-beyond what we would expect given pre-pandemic mortality levels. It’s a very rough attempt at creating a counterfactual of what mortality would have looked like without the pandemic. Of course, you can never know for sure what the counterfactual would look like. Would overdoses in the US have increased anyway? Hard to say, though they had been on the rise for years even before the pandemic.

So don’t treat excess deaths as a true counterfactual, but just as a very rough estimate. I wrote about excess deaths in the US way back in January 2021 (feels like a lifetime ago!). At the time, it looked like the US had about 3 million total deaths in the first 48 weeks of 2020, which was about 357,000 deaths more than expected (again, based on mortality levels of the past few years), or about 13.6% above normal.

But once we had complete data for 2020, deaths were even higher: about 19% above expected, or somewhere around 500,000 excess deaths. This compares with the official COVID death count of about 385,000 in 2020 for the US.
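The “percent above normal” figures here are just excess deaths divided by the expected baseline. A quick sketch with the approximate numbers cited above (the small difference from the in-text percentage comes from rounding the totals):

```python
def pct_above_normal(total_deaths, excess_deaths):
    """Excess deaths as a share of the expected (baseline) death count."""
    baseline = total_deaths - excess_deaths
    return excess_deaths / baseline

# 2020, first 48 weeks: ~3.0M total deaths, ~357k more than expected
print(f"{pct_above_normal(3_000_000, 357_000):.1%}")   # 13.5%
```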

What happens if we update those numbers with the most recent available mortality data for 2021? Keep in mind that data reporting is always delayed, so I’ll just use data through October 2021. The following chart shows both confirmed COVID deaths and total excess mortality, cumulative since the beginning of 2020.

As we can see in the chart, there are a lot more excess deaths than confirmed COVID deaths. There were already over 1 million excess deaths through the end of October 2021 in the US, cumulative since January 2020. This compares with about 766,000 confirmed COVID deaths. That’s a big gap!

We could spend a lot of time trying to understand this gap of 250,000 deaths. Is this under-reporting of COVID deaths? Is it deaths caused by government restrictions? Is it caused by the overwhelming of the health system?

I won’t be able to answer any of those questions today. Instead, let’s ask a different question: is the potential US undercount of COVID deaths unusual?


Earning Steady 9% Interest in My New Crypto Account

One reason for opening an account where you can purchase cryptocurrencies is to speculate on their price movements. There have been many cases where some coin has quadrupled in a few weeks, or gone up ten-fold in a few months, or even a hundred-fold within a year.

Another facet of crypto accounts is that in some cases you are paid interest on the coin you have purchased and hold in your account. That was the main draw for me. I already have a little Bitcoin and Ethereum exposure in my brokerage account through the funds GBTC and ETHE, enough to feel the thrill of victory and the agony of defeat when they go up, up, up and down, down, down, but I am not a big speculator at heart. So, I am drawn to the so-called “stablecoins”, whose value is tied to some major regular currency such as the U.S. dollar. It turns out that you can get high, steady interest payments on those stablecoins.

There are several crypto brokers which pay interest on coins. Some names include BlockFi, Celsius, Nexo, and Voyager Digital. Several such firms are reviewed here. Initially I leaned towards Voyager, since it gives access to lots of the new, little alt-coins where you can 10X your money if you pick the right ones and jump in early. However, I still do my own taxes, and the tax reporting from Voyager looked daunting. Last I looked, they just provide a dump of all your transactions in a giant table, and it’s up to you to figure out capital gains and losses. The word on the street is that this is not as straightforward as it seems. Also, Voyager offered only mobile apps, not a desktop interface. All in all, Voyager seems geared more toward the intense, younger Robinhood/Reddit crowd, punching daring trades into their phones at all hours.

BlockFi is quite staid by comparison. It only offers a few, mainstream coins. However, it is one of the best-established firms, and it provides a nice clear 1099 tax reporting form at the end of the year. BlockFi is backed by major institutional partners, and manages over $9 billion in assets.

Unlike some of its competitors, it is U.S.-based, and as such it is structured to function well in this jurisdiction. Also, its interest payouts are straightforward. In contrast, many of its competitors incentivize  you to receive your interest in special tokens issued by those companies, which adds another element of risk. Finally, BlockFi allows you to immediately transfer money in and out of your account by using a bank ACH link. I wanted that flexibility since I plan to keep a portion of my cash holdings in BlockFi instead of in the bank, but I want to be able to access those cash holdings on short notice and without penalty. (Last week I described some of my struggles over using the Plaid financial app which manages the bank-BlockFi interface, but I was able to get past that).

All in all, BlockFi is boring in a good way. All I want to do is make steady money, with minimal distraction. Here is a listing of the interest rates paid for holdings of Bitcoin and Ethereum:

BlockFi only pays significant interest for smaller holdings of these coins. (We will discuss the reason for this seemingly odd policy in a future blog post; it is basically an outcome of BlockFi’s conservative financial practices).

For Bitcoin, the interest rate is 4.5% for up to 0.10 BTC, which at today’s prices is about $4,700. After that, the interest plummets to 1%, and to a mere 0.10% for more than 0.35 BTC (about $16,000). There is a similar pattern for Ethereum. If your goal is to hold large amounts of these coins and earn substantial interest on them, there are probably better platforms than BlockFi.

However, the interest picture is brighter for the stablecoins. The biggest U.S.-based stablecoin is USD Coin (USDC), which is backed by significant institutions. Gemini Dollar (GUSD) is smaller, but also takes great pains to garner trust. Its issuer, Gemini, operates under the regulatory oversight of the New York State Department of Financial Services (NYDFS). It boasts, “The Gemini Dollar is fully backed at a one-to-one ratio with the U.S. dollar. The number of Gemini dollar tokens in circulation is equal to the number of U.S. dollars held at a bank in the United States, and the system is insured with pass-through FDIC deposit insurance as a preventative measure against money laundering, theft, and other illicit activities.” GUSD is the “native” currency within BlockFi, though users can easily exchange it for other coins. At this point I am holding just GUSD, though if I put in more funds, I would plan to diversify partially into USDC. Besides being much bigger, USDC now runs on multiple platforms, whereas GUSD is limited to Ethereum; if Ethereum finally does make its switch from proof-of-work to proof-of-stake, the network may be more subject to outages or hacking, so it would be nice not to be totally dependent on Ethereum.

For these two stablecoins, BlockFi currently pays 9% interest on holdings up to $40,000, and a respectable 8% on larger holdings:
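Assuming the tiers apply marginally – each rate covering only the slice of the balance inside its bracket, which is my reading of BlockFi’s rate schedule and worth verifying against the current terms – the annual interest works out like this:

```python
def tiered_interest(balance, tiers):
    """Annual interest when each rate applies only to its slice of the balance.

    tiers: list of (upper_bound, rate) pairs in ascending order;
    the last bound can be float('inf').
    """
    interest, lower = 0.0, 0.0
    for upper, rate in tiers:
        if balance <= lower:
            break
        interest += (min(balance, upper) - lower) * rate
        lower = upper
    return interest

# Rates quoted in this post (they change from time to time):
btc_tiers = [(0.10, 0.045), (0.35, 0.01), (float("inf"), 0.001)]
gusd_tiers = [(40_000, 0.09), (float("inf"), 0.08)]

print(f"{tiered_interest(0.5, btc_tiers):.5f}")      # 0.00715 BTC/year on 0.5 BTC
print(f"{tiered_interest(50_000, gusd_tiers):.2f}")  # 4400.00 dollars/year on $50k GUSD
```

On a 0.5 BTC balance that is an effective rate of about 1.4%, versus an 8.8% effective rate on $50,000 of GUSD – the tiering bites much harder on the volatile coins.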

A complete list of BlockFi interest rates (which change from time to time) is here.

The alert reader may at this point object, “Hey, you are losing most of the purported benefits of blockchain cryptocurrencies – – without holding the coins in your own wallet, you don’t actually own them, so you are back to depending on The System. Moreover, those stablecoins are centrally managed, not deliberately decentralized like Bitcoin and Ethereum. You are treating this like a plain bank account!”

My reply is, “Yes, I am treating it like a plain bank account – – but an account that pays me 9% interest, with no drama.” That is exactly what I wanted.

UPDATE MARCH 2022 – – BLOCKFI INTEREST ACCOUNT NO LONGER AVAILABLE. For some time now, state and federal government authorities have been hassling crypto exchanges that offer interest on crypto holdings. In February, the SEC fined BlockFi $100 million for allegedly violating securities laws, and shut them down from taking in any new funds for interest-bearing accounts. BlockFi hopes someday to provide a regulation-compliant interest product, but don’t hold your breath.

Zealous state and federal regulators have been attacking other crypto firms offering interest, such as Celsius and Voyager. The main player still standing that I am aware of is Gemini. Gemini is very conscientious about audits and has always tried to work closely with regulators. It is offering about 6.5% interest on stablecoins (which is still way better than money markets or CDs), and a measly 1-1.25% on Bitcoin and Ethereum.

What would a Great Reorganization look like?

In our eternal quest never to let go of any effective rhetorical device that can double as a headline, the last 12-18 months have been dubbed The Great Resignation. Within voluntary job separations, a sizable chunk of which appear to be early retirements, many are young people transitioning from low-paying jobs to employers that have seen fit to adapt to the labor shortage faster, offering some combination of higher wages, better benefits, or a higher quality of life, often through the channel of relaxed educational or experience prerequisites.

Some, generally from the political left, are framing this as a shift in power from management to labor, particularly for those who hope this can be the moment that pushes unionization back to the forefront. Others, mostly from the political right, are framing this as a catastrophic undercutting of the incentive to work induced by the expanded welfare state. I tend to see these positions as frantic over-optimism or pessimism from those desperate for a sexy political narrative to sell.

I think the closer parallel, in terms of mechanism (not scale), isn’t the Great Depression or the New Deal era that followed, but rather the World War II draft-accelerated entry of women into the workforce. I think what we’re seeing is a massive reorganization of the US labor market. If this half-baked generalization were true, what would it look like?

1. Education, Training, and Experience reconsidered

My guess is that managers in a range of fields have long had an itch in the back of their minds that they weren’t always hiring the right people. Specifically, they were eliminating large swaths of applicants from consideration because they lacked the minimum formal education or years of narrowly defined experience. A lot of these requirements, I suspect, existed not as tried-and-true markers of the subset of optimal candidates, but because they could be routinized through online job applications and human resources triage, largely in an effort to conserve managerial and administrative time. Combined with CYA incentives and other sources of herding behavior both within and across firms (i.e. no one gets fired for only hiring college graduates), these are exactly the kind of sub-optimal practices that can widely embed themselves when an economy is growing but the labor market is relatively loose, so any suboptimality is lost in the wash.

A negative labor shock, be it a military draft or a global pandemic, is exactly the kind of thing that rewards firms that begin hiring from whole strata of previously unconsidered job candidates. Not for nothing, that’s how you end up learning all kinds of new things: the relative value of various degrees and training, the cross-applicability of job experience previously treated as irrelevant to an open position, and the marginal products of a firm’s employment portfolio.

2. Compensation bundles rebalanced

There’s plenty of fuss (rightly so) over the shift towards working from home. Yes, it saves on fixed costs, particularly in cities with sky-high commercial real estate costs, but I suspect the greater long-run impact will be on the composition of wages + benefits + flexibility in employee compensation bundles, where flexibility is largely a catch-all for the quality-of-life component associated with any job. Maybe we already knew that health insurance and paid leave were valuable, but I think a lot of employees have discovered they were previously undervaluing the costs of commuting, schedule uncertainty, and existing “on call” for co-workers and superiors. Whether it’s working from home or as an independent contractor, many people are discovering that recapturing 10 hours a week of the rest of their lives is worth a lot more than the wages being foregone. We already know that women are the future, but we also know that women value flexibility in work schedules more than men. A shift towards quality of life in compensation bundles was likely already in the cards; the pandemic just accelerated it.

Firms that have spent the last 20 years burning out a handful of key employees, rewarding their exceptional productivity by turning them into productivity bottlenecks, are either going to have to find a way to spread the work thinner or recapture those key employees by finding other means of maintaining the quality of their lives.

3. The service industry is dead. Long live the service industry?

We’ve been eating on borrowed time. Through the combination of over-priced and over-valued higher education, a gratuitous over-stigmatization of non-violent criminal records, and the employment trap of limited human capital building but lots of cash in hand, the service industry has been feeding us all on the cheap for a very long time now.

Turns out, though, that the relative frugality of diners has squeezed restaurant margins razor thin, largely at the expense of servers and kitchen staff. Came at the expense, I should say. I think we’re all going to have to find a new normal where outsourcing meal preparation is, at the margin, slightly less of a staple and slightly more of a luxury. I still see Help Needed signs in lots of restaurants, and owners complaining in news stories that “No one wants to work“, but I’m also seeing new employees bring home higher salaries at McDonald’s after 90 days than fine dining cooks in their 3rd year working sauté. Eventually a new equilibrium will be reached, and I predict it’s going to involve higher salaries and better benefits for line cooks, but it’s also going to mean customers will have to get over their perceived $28 ceiling on entrées. Also, don’t expect your favorite restaurants to be open on Mondays and Tuesdays, because it turns out everyone wants to have a weekend.

4. The same, but different
What will the labor market look like in 5 years? Forecasting is a fool’s errand, but I never promised anyone I wasn’t a fool. Here’s my best guess:

I don’t expect a revival of unionization, but I do expect that employment will start taking on a lot of the attributes that pro-union people are currently agitating for. There will be more people with 3- and 4-day work weeks, though I suspect those people will be working 10- and 12-hour shifts. I think there will be a lot of flexible office-home work schedules, where firms coordinate their employees around days when everyone is in the office, with the rest floating between the office and home as the work dictates. I expect there will be more independent contractors, but unlike previously self-employed people who bounced from contract to contract, they will instead be people who balance a portfolio of employment with what amounts to a small number of long-term contracts. Rather than work for one employer at a time 40 hours a week, they’ll work for 2 or 3, 8-10 hours each, building up enough firm-specific capital that contracts will last years, even decades, at a time.

I expect kitchens will remain hot, crowded, and loud. I expect chefs will remain angry and owners tight-fisted with every penny. I expect that servers will still finish every shift with sore feet and stories of annoying customers. Maybe even more annoying than before, because those customers will be paying 15% more than the prices they already manage to complain about. But it’ll be okay, because everyone in that restaurant is going to be earning a much better living. They’ll have to, because otherwise they’re not coming back.

Car Prices and Quality

Inflation is on everyone’s mind. Everybody is freaking out, and you cannot do anything about it. As such, let’s talk about something mildly related: how price indexes (the ones we use to talk about inflation) deal with quality changes.

One big problem when we try to measure the cost of living is that the price information we collect does not reflect the same thing we consume. I know that sentence seems weird. After all, $1 for a pound of bread is $1 for a pound of bread. And if prices go up 10%, then the price per pound of bread is $1.10!

If you think that, you’re wrong. Think about the following example from my native province of Quebec. In the 1990s, Quebec deregulated opening hours for grocery stores. The result was … higher prices at large superstores. Why? Before the reform, stores had shorter hours, especially on Sundays. This meant that stores were competing with each other on fewer quality dimensions, which meant more price-based competition. With deregulation, some consumers were willing to pay slightly higher prices to shop at ungodly hours. What were these consumers consuming? Were they consuming only the loaves of bread they bought, or were they consuming those loaves and the flexible schedule of the grocery stores? The answer is the latter! Ergo, the change from $1 per pound to $1.10 per pound does not mean that the price of bread alone increased; it may have even fallen, all else being equal!

So how do you adjust for that? There are many papers on how to do hedonic adjustments (hedonic is the fancy word we use to say “quality-adjusted”) and they are all a pain to read unless you are very familiar with real analysis, set theory and advanced calculus (and even then, it’s still a pain). Fortunately, I recently found a neat little application in an old econometrics graduate text from the 1960s (see image below) that allows me to teach this to my students (and now, you too!) in an easy-to-get format.

A neat book

The book has a neat chapter by one of the most famous econometricians of the 20th century, Zvi Griliches, titled “Hedonic Price Indexes for Automobiles: An Econometric Analysis of Quality Change”. In the chapter, Griliches points out that from 1954 to 1960, car prices went up some 20%, well above the overall price index. From 1937 to 1950, prices for cars went up in line with inflation. Taken together, these two facts suggest that the real price of cars stayed constant from 1937 to 1950 and then increased to 1960. But that suggestion is wrong, Griliches points out, because of our aforementioned quality issues. Up until 1960, there were considerable improvements in vehicle quality: better gears, better brakes, more horsepower, safer settings, automatic transmissions, hardtops, the switch to V-8 engines rather than six-cylinder engines, etc.

How do you account for these quality changes? Griliches simply went about consulting guide books for auto buyers. He collected price data for the cars as well as the details regarding quality. And he used a very simple specification where the log of the nominal price is set as the dependent variable.

Griliches’ specification

The vector X contains all the quality dimensions he could find (horsepower, shipping weight, length, V-8 engine, hardtop, automatic transmission, power steering, power brakes, compact car). All of these dimensions were statistically significant determinants of the price of cars (with the exception of the V-8 engine, which was not significant). Then, Griliches assumed that all quality dimensions were “unchanged” from 1954 to 1960 in order to see how prices would have evolved without any changes in quality. The result is the figure below. The blue line depicts the actual prices he collected, where you can see the 20% increase to 1960 (which is a 30%+ increase to 1959). The orange line depicts the price holding quality constant. That orange line is unambiguous: quality-constant car prices didn’t change much during the 1950s. Adjusting for inflation during the period suggests a drop of 10% in the real price of a quality-constant car.

Image
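If you want to see the mechanics for yourself, here is a minimal sketch of a hedonic regression in the spirit of Griliches’ specification. Everything in it is synthetic and invented for illustration (the attributes, coefficients, and sample are not taken from his chapter): regress log price on quality attributes plus model-year dummies, then read the quality-adjusted price index off the exponentiated year dummies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (hypothetical, for illustration only): quality attributes
# drift upward over time while the "true" quality-constant price is flat.
n = 600
year = rng.integers(0, 7, n)                    # years 0..6, standing in for 1954-1960
hp = 100 + 8 * year + rng.normal(0, 10, n)      # horsepower rises over time
wt = 3.0 + 0.1 * year + rng.normal(0, 0.2, n)   # shipping weight (thousand lbs) rises too
true_year_effect = np.zeros(7)                  # quality-constant prices are flat by construction
log_p = 5.0 + 0.004 * hp + 0.3 * wt + true_year_effect[year] + rng.normal(0, 0.05, n)

# Hedonic regression: log(price) on quality attributes plus year dummies.
D = np.zeros((n, 6))
for t in range(1, 7):
    D[year == t, t - 1] = 1.0
X = np.column_stack([np.ones(n), hp, wt, D])
beta, *_ = np.linalg.lstsq(X, log_p, rcond=None)

# Quality-adjusted price index: exponentiated year-dummy coefficients,
# with the base year normalized to 1.
index = np.exp(np.concatenate([[0.0], beta[3:]]))
print(np.round(index, 3))  # hovers near 1.0: quality-constant prices barely move
```

Raw average prices in this toy sample rise every year (because horsepower and weight do), yet the year dummies strip that out, which is exactly the gap between Griliches’ blue and orange lines.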

Isn’t that a fascinating way to understand what we are actually measuring when we collect prices to talk about inflation? It is also a very useful teaching tool. Okay, I am done; you can go back to freaking out about inflation and how bad the Fed, the Bank of Canada, and the ECB are.

Watching Get Back

I enjoyed watching Get Back, the new documentary about making a Beatles album. Sometimes I skipped over rehearsal scenes. The streaming format allows you to treat Get Back like a coffee table book, if you choose, as opposed to a feature film that you watch all the way through in one sitting.

I know very little about The Beatles, aside from recognizing their hit songs. Here are my impressions after watching most of Get Back.

Paul McCartney is a rock star. His hair could have its own line in the closing credits. When Paul goofs off, he appears to be entertaining his bandmates because he loves playing for any audience. Conversely, John Lennon seems to joke around because he does not take their music seriously. Paul is motivated to make the Beatles excellent. Ringo’s ability to show up and be quiet is almost as important as Paul’s ability to lead.

I’ll put up my tribute. Then I’ll add more casual observations.


PSNE: No More, No Less

Today marks the 27th anniversary of John Nash winning The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for his contributions to game theory.

Opinions on game theory differ. To most of the public, it’s probably behind a shroud of mystery. To one set of specialists, it is a natural offshoot of economics. And, finally, a third, non-exclusive set finds it silly and largely useless for real-world applications.

Regardless of the camp to which you claim membership, the Pure Strategy Nash Equilibrium (PSNE) is often misunderstood by students. In short, the set of PSNE is the set of all player strategy combinations from which no player would want to switch to a different strategy. In lay terms, it’s the list of possible choices people can make and find no benefit to changing their mind.

In class, I emphasize to my students that a Nash Equilibrium assumes that a player can control only their own actions and not those of the other players. It takes the opposing player strategies as ‘given’.

This seems simple enough. But students often implicitly suppose that a PSNE does more legwork than it can do. Below is an example of an extensive form game that illustrates a common point of student confusion. There are 2 players who play sequentially. The meaning of the letters is unimportant. If it helps, imagine that you’re playing Mortal Kombat and that Player 1 can jump or crouch. Depending on which he chooses, Player 2 will choose uppercut, block, approach, or distance. Each of the numbers that are listed at the bottom reflect the payoffs for each player that occur with each strategy combination.

Again, a PSNE is any combination of player strategies from which no player wants to deviate, given the strategies of the other players.

Students will often proceed with the following logic:

  1. Player 2 would choose B over U because 3>2.
  2. Player 2 would choose A over D because 4>1.
  3. Player 1 is faced with earning 4 if he chooses J and 3 if he chooses C. So, the PSNE is that player 1 would choose J.
  4. Therefore, the PSNE set of strategies is (J,B).

While students are entirely reasonable in their thinking, what they are doing is not finding a PSNE. First of all, (J,B) doesn’t include all of the possible strategies – it omits the entire right side of the game. How can Player 1 know whether he should change his mind if he doesn’t know what Player 2 is doing? Bottom line: A PSNE requires that *all* strategy combinations are listed.

The mistaken student says ‘Fine’ and writes that the PSNE strategies are (J, BA) and that the payoff is (4,3)*.  And it is true that they have found a PSNE. When asked why, they’ll often reiterate their logic that I enumerate above. But, their answer is woefully incomplete. In the logic above, they only identify what Player 2 would choose on the right side of the tree when Player 1 chose C. They entirely neglected whether Player 2 would be willing to choose A or D when Player 1 chooses J. Yes, it is true that neither Player 1 nor Player 2 wants to deviate from (J, BA). But it is also true that neither player wants to deviate from (J, BD). In either case the payoff is (4, 3).

This is where students get upset. “Why would Player 2 be willing to choose D?! That’s irrational. They’d never do that!” But the student is mistaken. Player 2 is willing to choose D – just not when Player 1 chooses C. In other words, Player 2 is indifferent to A or D so long as Player 1 chooses J. In order for each player to decide whether they’d want to deviate strategies given what the other player is doing, we need to identify what the other player is doing! The bottom line: A PSNE requires that neither player wants to deviate given what the other player is doing –  Not what the other player would do if one did choose to deviate.

What about when Player 1 chooses C? Then, Player 2 would choose A because 4 is a better payoff than 1. Player 2 doesn’t care whether he chooses U or B because (C, UA) and (C, BA) both provide him the same payoff of 4. We might be tempted to believe that both are PSNE. But they’re not! It’s correct that Player 2 wouldn’t deviate from (C, BA) to become better off. But we must also consider Player 1. Given (C, UA), Player 1 won’t switch to J because his payoff would be 1 rather than 3.  Given (C, BA), Player 1 would absolutely deviate from C to J in order to earn 4 rather than 3. So, (C, UA) is a PSNE and (C, BA) is not. The bottom line: Both players must have no incentive to deviate strategies in a PSNE.
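The whole exercise can be checked mechanically. Below is a short sketch that writes the game in normal form and tests every strategy profile for profitable unilateral deviations. One caveat: the text never gives Player 1’s payoff at the (C, D) branch, so a value of 0 is assumed there purely for illustration.

```python
from itertools import product

# Terminal payoffs of the extensive-form game: (Player 1, Player 2).
# Player 1's payoff after (C, D) is not stated in the post; 0 is an assumption.
payoff = {
    ("J", "U"): (1, 2), ("J", "B"): (4, 3),
    ("C", "A"): (3, 4), ("C", "D"): (0, 1),
}

p1_strats = ["J", "C"]
# Player 2's strategies pair a response to J with a response to C,
# e.g. "BA" means "B if Player 1 plays J, A if Player 1 plays C".
p2_strats = [a + b for a, b in product("UB", "AD")]  # UA, UD, BA, BD

def outcome(s1, s2):
    # Only the branch Player 1 actually chooses gets played out.
    action = s2[0] if s1 == "J" else s2[1]
    return payoff[(s1, action)]

def is_psne(s1, s2):
    # No player can strictly gain by deviating alone, holding the other fixed.
    u1, u2 = outcome(s1, s2)
    no_dev_1 = all(outcome(t1, s2)[0] <= u1 for t1 in p1_strats)
    no_dev_2 = all(outcome(s1, t2)[1] <= u2 for t2 in p2_strats)
    return no_dev_1 and no_dev_2

psne = [(s1, s2) for s1 in p1_strats for s2 in p2_strats if is_psne(s1, s2)]
print(psne)  # [('J', 'BA'), ('J', 'BD'), ('C', 'UA')]
```

The brute-force check recovers exactly the three profiles from the discussion above, including the off-path indifferences that trip students up: (J, BD) survives because D is never reached when Player 1 plays J, and (C, BA) fails because Player 1 would deviate to J.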

There are reasons that game theory as a discipline developed beyond the idea of Nash Equilibria and Pure Strategy Nash Equilibria. Simple PSNE identify possible equilibria, but don’t narrow them down from there. PSNE are strong in that they identify the possible equilibria and firmly exclude several other possible strategy combinations and outcomes. But PSNE are weak insofar as they identify equilibria that may not be particularly likely or believable. With PSNE alone, we are left with an uneasy feeling that we are identifying too many possible strategies that we don’t quite think are relevant to real life.

These features motivated the later development of Subgame Perfect Nash Equilibria (SGPNE). Students have a good intuition that something feels not quite right about PSNE, and they anticipate SGPNE as a concept better suited to predicting reality. But, in so doing, they mistakenly attribute too much to PSNE. They want it to tell them which strategies the players would choose. They’re frustrated that it only tells them when players won’t change their mind.

Regardless of whether you get frustrated by game theory, be sure to have a drink and make a toast to John Nash.

*Below is the normal form for anyone who is interested.