The “Textbook Definition” of a Recession

Three weeks ago I wrote a blog post about how economists define a recession. I pretty quickly brushed aside the “two consecutive quarters of declining GDP” rule, since this is not the definition that the NBER uses. But since that post (and thanks to a similar blog post from the White House the day after mine), there has been an ongoing debate among economists on social media about how we define recessions. And some economists and others in the media have insisted that the “two quarters” rule is a useful rule of thumb that is often used in textbooks.

It is absolutely true that you can find this “two quarters” rule mentioned in some economics textbooks. Occasionally, it is even part of the stated definition of a recession. But to try to move this debate forward, I collected as many examples as I could find from recent introductory economics textbooks. I tried to stick with the most recent editions to see what current thinking on the topic is among textbook authors, though I will also say a little bit about a few older editions after showing the results of my search.

Undoubtedly, I have missed a few principles textbooks (there are a lot of them!), so if you have a recent edition that I didn’t include, please share it and I’ll update the post accordingly. I also tried to stick with textbooks published in the last decade, though I made an exception for Samuelson and Nordhaus (2010) since Samuelson is so important to the history of principles textbooks (and his definition has changed, which I’ll discuss below).

But here’s my data on the 17 recent principles textbooks that I’ve found so far (send me more if you have them!). Thanks to Ninos Malek for gathering many of these textbooks and to my Twitter followers for some pointers too.


Recession or not, the biggest GDP political football is 3 months away

US GDP fell for the second straight quarter according to statistics released this week by the Bureau of Economic Analysis. This means that by one common definition we’re now in a recession, which has ignited a debate about whether “two consecutive quarters of negative GDP growth” is the best definition (as opposed to ‘when the NBER says there’s one’, like I generally teach and Jeremy argued for here, or something else).

Naturally this debate has political overtones, since the party in power would be blamed for a recession, so we’ve seen the White House CEA argue that we’re not in a recession, many on the other side argue that we are, and plentiful hypocrisy from people who should know better.

But in political terms, the fight over the binary “are we in a recession” call won’t be the big economic factor in November’s elections; that will be inflation and GDP, especially third-quarter GDP. One of the oldest and best predictors of US elections is the Fair Model, which uses inflation and the number of recent “strong growth quarters”. Fair’s update following the recent Q2 GDP announcement states:

the predicted vote share for the Democrats is 46.70, which compares to 48.99 in October. The smaller predicted vote share for the Democrats is due to two fewer strong growth quarters and slightly higher inflation
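To see the mechanics of a model like this, here is a minimal sketch of a Fair-style vote equation in Python. The functional form is just “a linear function of growth, inflation, and the count of strong growth quarters”; the coefficients below are invented placeholders for illustration, not Fair’s published estimates.

```python
# Illustrative only: a Fair-style linear vote equation with placeholder coefficients.
def predicted_vote_share(growth, inflation, strong_quarters,
                         intercept=47.0, b_growth=0.6, b_inflation=-0.7, b_strong=0.9):
    """Incumbent-party vote share as a linear function of election-year growth,
    inflation, and the count of recent 'strong growth' quarters.
    All coefficients are made-up placeholders, not Fair's estimates."""
    return intercept + b_growth * growth + b_inflation * inflation + b_strong * strong_quarters

# Two fewer strong growth quarters and slightly higher inflation pull the prediction down.
print(predicted_vote_share(growth=1.0, inflation=5.5, strong_quarters=3))
print(predicted_vote_share(growth=1.0, inflation=6.0, strong_quarters=1))
```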

By Election Day we’ll have 3 more months of economic data, making it clearer whether inflation is coming under control and whether economic activity is picking back up or continuing to decline. Monthly data releases on inflation and unemployment will be closely watched, but the most discussed release will likely be third-quarter GDP. It will summarize 3 months instead of just one, it will be of huge relevance to the debate over how severe the recession is or whether we’re even in one, and it will likely be released less than two weeks before Election Day. The NBER almost certainly won’t weigh in by then; they tend to take over a year to date recessions, not adjudicate debates in real time.

So when the BEA does release its Q3 GDP estimate in late October, what will it say? Markets currently estimate at least a 75% chance it will be positive (they had estimated a 36% chance of positive Q2 GDP just before the latest announcement). That sounds high to me; the yield curve is still inverted and I expect investment to continue to drag. But forecasting exact GDP numbers is hard. It’s a much easier bet that whatever the number turns out to be will loom large in political debates just before the elections. Perhaps we’ll get the Q3 GDP growth number that would make for the most chaotic debate: 0.0%.

Trial Updates: Novavax Approved, Potatoes Work

I’m usually the one writing the papers, but I was recently a participant / guinea pig in two studies. Both just released major positive updates.

I joined the Novavax trial in late 2020 to have the chance to get a Covid vaccine sooner; at the time Pfizer had just received emergency authorization but wasn’t available to the general public. The smart bio people on Twitter also seemed to think it was likely to be safer, and perhaps more effective, than other Covid vaccines (it delivers the relevant proteins directly, rather than using mRNA or a viral vector). The trial results were published over a year ago now, and were in fact excellent:

Results from a Phase 3 clinical trial enrolling 29,960 adult volunteers in the United States and Mexico show that the investigational vaccine known as NVX-CoV2373 demonstrated 90.4% efficacy in preventing symptomatic COVID-19 disease. The candidate showed 100% protection against moderate and severe disease

As usual the FDA dragged its feet, even as other agencies around the world, like the European Medicines Agency and the World Health Organization, approved the US-made Novavax. But last week it finally gave emergency authorization, and yesterday the CDC recommended Novavax. Of course, by now almost everyone who wants a Covid vaccine has one, and this approval is only for adults. But this will be a great option for boosters, as well as for anyone who was genuinely just concerned about the new technologies in the other vaccines (rather than just afraid of needles, or preferring to cut off their nose to spite authority’s face). As the CDC put it:

Protein subunit vaccines package harmless proteins of the COVID-19 virus alongside another ingredient called an adjuvant that helps the immune system respond to the virus in the future. Vaccines using protein subunits have been used for more than 30 years in the United States, beginning with the first licensed hepatitis B vaccine. Other protein subunit vaccines used in the United States today include those to protect against influenza and whooping cough….

Today, we have expanded the options available to adults in the U.S. by recommending another safe and effective COVID-19 vaccine. If you have been waiting for a COVID-19 vaccine built on a different technology than those previously available, now is the time to join the millions of Americans who have been vaccinated

I’m glad I was in this trial: I got a Covid vaccine several months before I otherwise could have, I made a few hundred dollars, and I learned a lot. But it would have been much better if they had found a way to do fewer blood draws, and if FDA approval had come more quickly. I’ve been in a weird gray area with respect to vaccine mandates for the last year; almost everyone ended up accepting my vaccine card, but I never knew if they were going to say “no, you need an FDA-approved one”. I ended up getting Pfizer for a booster even though I think it’s a worse vaccine, partly for this reason, and partly because Novavax said they’d only give me the booster if I did another blood draw, and I was tired of that.

The all-potato diet trial I wrote about here also released its results this week. This trial was much less formal, much smaller, and had no control group, so the results aren’t a slam dunk the way the Novavax results are. But I think they’re still impressive. I lost 8 pounds in the 4-week trial, but it turns out the average participant who did all 4 weeks did even better:

Of the participants who made it four weeks, one lost 0 lbs…. Everyone else lost more than that. The mean amount lost was 10.6 lbs, and the median was 10.0 lbs.

Their summary also explains other costs and benefits of the diet, showing lots of data as well as many quotes from participants, including two from me. They conclude with some fascinating speculation about potential mechanisms, from the boring (literally, lower variety makes eating boring so you eat less) to the speculative (low lithium? high potassium? weird lithium-potassium interactions). Check it out if you’re interested in why obesity rates keep rising or if you’re considering doing the potato diet.

I’m glad I was in these two trials. What should I try next?

Are We in a Recession?

The truth is, we don’t know. But let’s be clear: whether we are or not doesn’t depend on the second-quarter GDP report. Though two consecutive quarters of declining GDP is often cited as the definition of a recession, it’s not the definition economists use. And with good reason.

Instead, the NBER Business Cycle Dating Committee uses this definition: “a significant decline in economic activity that is spread across the economy and that lasts more than a few months.” And they explain why GDP is not their preferred measure, which includes several reasons but this one seems most germane to our current moment: “[the] definition includes the phrase, ‘a significant decline in economic activity.’ Thus real GDP could decline by relatively small amounts in two consecutive quarters without warranting the determination that a peak had occurred.”

If not GDP, what do they look at? I’ll get into more detail later, but in short, they look at monthly measures of income, consumption, employment, sales, and production (a direct measure of production, which GDP is not — it’s a proxy).

However, the American public seems convinced that we are in a recession. The most recent poll I can find on this is from mid-June, which is useful because (as we’ll see below) we already have most of the relevant measures of the economy for June 2022. In that poll, 56% of Americans say we are in a recession. And while there is some partisan bent to the responses, even 45% of Democrats seem to think we are in a recession. Of those who say we are in a recession, two-thirds cite inflation as the primary indicator.

Already here we can see the difference between the general public and NBER: the rate of inflation is not one of the measures that NBER considers when defining a recession. So, what are the measures they use?


GDP Growth and Excess Mortality in the G7

Two weeks ago my post looked at GDP growth during the pandemic. But of course, economic growth isn’t the only important outcome. Health outcomes are important too, and indeed I have posted about those in the past alongside GDP data.

Today, my chart looks at the G7 countries (representing roughly half of global wealth and GDP), showing both their economic performance (as measured by real GDP growth) and health performance (as measured by excess mortality through February 2022).

The US has clearly had the best economic performance. But the US also had the highest level of excess deaths per capita (not all of this is from COVID — US drug overdoses are also way up — but even using official COVID deaths, the US still tops this group).

Japan had the best health performance: amazingly, it had no cumulative excess deaths through February 2022 (excess mortality has risen very slightly since then, but I stopped in February so that all countries had complete data). However, Japan also had slightly negative economic growth.

Which country ends up looking the best? Canada! Very low levels of excess deaths, and at least some positive economic growth. Not as much growth as the US, but Canada is the second best performer in the G7.

To give some context for just how low the level of deaths has been in Canada, first recognize that the US had 1.1 million excess deaths in the pandemic through February 2022. If instead our excess deaths had been roughly equal to Canada’s on a per capita basis, the US would have had only about 180,000 excess deaths, saving over 900,000 lives.
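As a quick back-of-the-envelope check on that counterfactual, here is the scaling in a few lines of Python. The population figures and Canada’s cumulative excess-death total are rough assumptions on my part, not numbers from the post.

```python
# Back-of-the-envelope: scale Canada's per-capita excess deaths to the US population.
# Population figures and Canada's excess-death total are rough assumptions for illustration.
us_population = 332_000_000
canada_population = 38_000_000
canada_excess_deaths = 21_000        # assumed approximate cumulative total through Feb 2022

canada_rate = canada_excess_deaths / canada_population
us_at_canada_rate = canada_rate * us_population
print(f"US excess deaths at Canada's per-capita rate: {us_at_canada_rate:,.0f}")    # roughly 180,000
print(f"Lives saved vs. the actual 1.1 million: {1_100_000 - us_at_canada_rate:,.0f}")  # roughly 900,000
```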

Some of Canada’s COVID policies have been overly restrictive, such as the vaccine mandates that sparked protests in February 2022. But by then, Canada had already largely achieved its COVID victory over the US and most other G7 nations. Compare excess mortality in Canada with the US: the only big wave in Canada that came close to US levels was the spring 2020 wave. After that, Canada was always much lower.

Teaching with ACS regional data

If you are teaching a quantitative college course, then you have probably thought about where to get data that students can practice with.

Public Use Microdata Areas (PUMAs) are non-overlapping, statistical geographic areas that partition each state or equivalent entity into geographic areas containing no fewer than 100,000 people each. The image here shows PUMAs around Birmingham, AL. I created a dataset for my students that includes demographic data from the American Community Survey (ACS) for the region around our university.

For just about any topic you would teach in stats, I can create a mini assignment using data on the people around us. Any American metro area has clusters of high-income households and clusters of low-income households. One example of an exercise is to create summary statistics on income by PUMA. Students will be surprised to learn the facts about their own city.

Zachary has blogged about how great IPUMS is. The way I obtained the data was to make a free account with IPUMS. If you ask for data on every American, you’ll end up with an unwieldy file. The trick is to filter out all but a handful of PUMAs. I also recommend restricting the extract to just one year unless you are teaching time-series techniques.
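As an illustration of that workflow, here is a minimal pandas sketch. The file name and PUMA codes are placeholders (not the actual Birmingham-area PUMAs), and the columns YEAR, STATEFIP, PUMA, and HHINCOME are standard IPUMS USA variable names that may differ depending on what you selected for your extract.

```python
import pandas as pd

# Load an IPUMS USA extract (the file name is a placeholder for your own extract).
df = pd.read_csv("usa_extract.csv")

# Keep one year and a handful of PUMAs around the university.
# STATEFIP 1 is Alabama; the PUMA codes below are placeholders, not the real local PUMAs.
local_pumas = [1301, 1302, 1303]
local = df[(df["YEAR"] == 2021) & (df["STATEFIP"] == 1) & (df["PUMA"].isin(local_pumas))]

# A class exercise: summary statistics on household income by PUMA.
# (IPUMS uses special numeric codes for missing values, so check the codebook first.)
income_by_puma = local.groupby("PUMA")["HHINCOME"].describe()
print(income_by_puma)
```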

I originally got the idea from Matt Holian. Matt wrote a fantastic book called Data and the American Dream. The book has data and R code that allow you to reproduce the findings from several interesting econ papers that all use ACS data. I’m not teaching material that overlaps perfectly with Matt’s book, so I couldn’t assign it to my students, but I did borrow some elements of his idea and even (with his permission) some of his code.

Book Review: Big Data Demystified

Last year, our economics department launched a data analytics minor program. The first class is a simple 2-credit course called Foundations of Data Analytics. Originally, the idea was that liberal arts majors would take it and that this class would be a soft, non-technical intro to terminology and history.

However, it turned out that liberal arts majors didn’t take the class, and the most common feedback was that the class lacked technical challenge. I’m prepping to teach the class, and it will have two components. The first is a Python training component where students simply learn Python; we won’t do anything super complicated, but they will use Python extensively in future classes. The second component is still in the vein of the old version of the course.

I’ll have the students read and discuss “Big Data Demystified” by David Stephenson. He spends 12 brief chapters introducing the reader to modern big data management and analytics, and to how they fit into an organization’s key performance indicators. It reads like it’s written for business majors, but any type of medium-to-large organization would find it useful.

Stephenson starts with some flashy stories that illustrate the potential of data-driven business strategies. For example, Target Corporation used predictive analytics to advertise baby and pregnancy products to expectant mothers who didn’t even know that they were pregnant yet. He whets the reader’s appetite by noting that the supercomputers that could play chess or Go relied on fundamentally different technologies.

The first several chapters of the book excite the reader with thoughts of unexploited potentialities. This is what I want to impress upon the students. I want them to know the difference between artificial intelligence (AI) and machine learning (ML). I want them to recognize which tool is better for the challenges that they might face and to see clear applications (and limitations).

AI uses brute force, iterating through the possible next steps. There are multiple online tic-tac-toe AIs that keep track of their records. If a student can play the optimal set of strategies eight games in a row, then they can get the general idea behind testing a large variety of statistical models and explanatory variables and choosing the best one.
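To make the brute-force idea concrete, here is a minimal minimax sketch for tic-tac-toe (my own illustration, not an example from the book): the program scores each legal move by exhaustively playing out every possible continuation, then picks the best one.

```python
# A minimal brute-force game AI: minimax over every possible continuation in tic-tac-toe.
# The board is a list of 9 cells holding "X", "O", or None.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position for "X" by exhaustively trying every legal continuation."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # draw
    results = []
    for move in moves:
        board[move] = player            # try the move
        results.append(minimax(board, "O" if player == "X" else "X"))
        board[move] = None              # undo it
    return max(results) if player == "X" else min(results)

def best_move(board, player="X"):
    """Pick the move with the best brute-force score for the given player."""
    def score(move):
        board[move] = player
        s = minimax(board, "O" if player == "X" else "X")
        board[move] = None
        return s
    moves = [i for i, cell in enumerate(board) if cell is None]
    return (max if player == "X" else min)(moves, key=score)

# From an empty board the search visits a few hundred thousand positions:
# trivial for tic-tac-toe, but this approach explodes for games like chess or Go.
print(best_move([None] * 9))
```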

ML, by contrast, responds to new data according to what worked best on previous training data. There are multiple YouTubers out there who have used ML to beat Super Mario Bros. The programmer identifies an objective function, and the ML program is off to the races. It tries a few things on a level, and then uses those training rounds to perform quite well on new levels that it has never encountered before.
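The train-then-generalize pattern is easy to show in miniature. The sketch below uses supervised classification on synthetic data as a stand-in (not the reinforcement-learning setup the Mario projects use): the model is fit on “training levels” and then scored on examples it has never seen.

```python
# A minimal train-then-generalize sketch using synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Fake "game state" features with labels for good vs. bad outcomes.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn from training rounds
print("accuracy on examples the model has never seen:", model.score(X_new, y_new))
```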

There are a couple of chapters in the middle of the book that didn’t appeal to me. They discuss the question of how big data should inform a firm’s strategy and how data projects should be implemented. These chapters read like they are written for MBAs or for management. They were boring for me. But that’s ok, given that Stephenson is trying to appeal to a broad audience.

The final chapters are great. They describe the limitations of big data endeavors. Big data is not a panacea and projects can fail for a variety of what are very human reasons.

Stephenson emphasizes the importance of transaction costs (though he doesn’t say it that way). Medium-sized companies should outsource to experts who can succeed (or fail) quickly, so that big capital investments and labor costs can be avoided. Or, if internal staff will be hired instead, he discusses the trade-offs among using open-source software, getting locked into a vendor, and reinventing the wheel. These are a great few chapters that remind the reader that data scientists and analysts are not magicians. They are people who specialize and can waste their time just as well as anyone else.

Overall, I strongly recommend this book. I kinda sorta knew what machine learning and artificial intelligence were prior to reading, but this book provides a very accessible introduction to big data environments, their possible uses, and the organizational features that matter for success. Mid- and upper-level managers should read this book so that they can interact with these ideas prudently. Those with a passing interest in programming should read it for greater clarity and to get a better handle on the various sub-fields. Hopefully, my students will read it and feel inspired to land on one side or the other of the manager-data analyst divide with greater confidence, understanding, and a little less hubris.

The Latest GDP Data: First Quarter 2022 in the OECD

Today two Gross Domestic Product data releases came out. The first was for the United States, giving us the third and “final” estimate of first-quarter 2022 data. GDP was down 1.6% from the prior quarter (though we knew this two months ago — not much has changed since the “advance” estimate). That’s not good (but see this great Joseph Politano newsletter for some more detail).

The second release was the annual 2021 GDP data for the European Union. It showed strong growth in 2021 (+5.4%), but that’s relative to the bad year of 2020. Compared to the pre-pandemic level of 2019, the EU was still about 0.8% below that more meaningful baseline. By comparison, the US was already about 2% above 2019 in its annual 2021 release (everything in these two paragraphs is adjusted for inflation). Of course, within the EU there is a lot of variation, but overall the US looks comparatively good.
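To see how 5.4% growth can still leave the EU below 2019, here’s the compounding arithmetic, backing out the implied 2020 decline from the two figures above (a rough illustration using only the numbers in this paragraph):

```python
# Back out the implied 2020 decline from the figures cited above (rough illustration).
growth_2021 = 0.054          # EU real GDP growth in 2021
vs_2019 = -0.008             # EU 2021 level relative to 2019

level_2021 = 1 + vs_2019                     # 2021 level, with 2019 = 1.00
level_2020 = level_2021 / (1 + growth_2021)  # implied 2020 level
print(f"implied 2020 decline: {level_2020 - 1:.1%}")  # roughly -5.9%
```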

Let’s break down that variation in the EU and include the first quarter of 2022 data to make the best comparison with the US. To bring in some more relevant comparison countries, I’ll use data from the OECD. Note: I’ve excluded Ireland, because its GDP is weird. I’ve also excluded Turkey, because even though all the data here is adjusted for inflation, Turkey is in a highly inflationary environment, making the data a little difficult to interpret.

Here is the chart, which shows the change in real GDP from the 4th quarter of 2019 up through the 1st quarter of 2022 (I use the volume index, which is similar to adjusting for price inflation). I have highlighted in orange the largest economies in the OECD (anything with about $2 trillion of GDP or larger, with Spain and Canada at about that level).
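If you want to build a similar chart, the calculation is simple: rebase each country’s volume index so that 2019Q4 equals 100 and compute the percent change. Here is a minimal pandas sketch, assuming a long-format OECD extract with placeholder column names (country, quarter, volume_index):

```python
import pandas as pd

# Placeholder long-format data: one row per country-quarter with a real GDP volume index.
df = pd.read_csv("oecd_qna_volume.csv")  # assumed columns: country, quarter, volume_index

# Each country's index value in the 2019Q4 baseline quarter.
base = df[df["quarter"] == "2019-Q4"].set_index("country")["volume_index"]

# Percent change in real GDP relative to 2019Q4.
df["pct_vs_2019q4"] = df.apply(
    lambda row: 100 * (row["volume_index"] / base[row["country"]] - 1), axis=1
)

# The values plotted in the chart: change from 2019Q4 through 2022Q1, by country.
print(df[df["quarter"] == "2022-Q1"][["country", "pct_vs_2019q4"]])
```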


Shifts in Labor Participation: The Great Resignation Becomes the Great Reshuffling

More than 47 million workers quit their jobs in 2021, in what has become known as the Great Resignation. However, many of these workers are getting re-hired elsewhere: hiring rates have outpaced quit rates since November 2020.

The U.S. Chamber of Commerce has published some statistics on this reshuffling of the labor force, which I will reproduce here. As shown in the chart below, quit rates in leisure and hospitality (which requires in-person attendance and pays lower salaries) were enormous. However, recent hiring rates have been even higher in this sector, so the labor shortage there is only moderate.

Looking at the labor shortage across different industries, the transportation, health care and social assistance, and accommodation and food sectors have had the highest numbers of job openings.

Yet despite the high number of job openings, the transportation and health care and social assistance sectors have maintained relatively low quit rates. The food sector, on the other hand, struggles to retain workers and has experienced consistently high quit rates.

I am not sure I understand exactly what the following chart represents, but it was deemed important:

I think the yellow percentage is the ratio of unemployed persons with experience in the field (i.e., who could readily fill openings) to the total job openings in that field. E.g., “…if every unemployed person with experience in the durable goods manufacturing industry were employed, the industry would only fill 65% of the vacant jobs.” These are interesting data, although I’d be even more interested in seeing unfilled job openings as a fraction of total positions (filled and unfilled) to give a better idea of how much each industry is hurting for labor; a short sketch of both metrics follows the quoted commentary below. Anyway, here is some of the commentary from the article:

It is interesting to look at labor force participation across different industries. Some have a shortage of labor, while others have a surplus of workers. For example, durable goods manufacturing, wholesale and retail trade, and education and health services have a labor shortage—these industries have more unfilled job openings than unemployed workers with experience in their respective industry. Even if every unemployed person with experience in the durable goods manufacturing industry were employed, the industry would only fill 65% of the vacant jobs.

Conversely, in the transportation, construction, and mining industries, there is a labor surplus. There are more unemployed workers with experience in their respective industry than there are open jobs.

The manufacturing industry faced a major setback after losing roughly 1.4 million jobs at the onset of the pandemic. Since then, the industry has struggled to hire entry level and skilled workers alike.

And finally:

Some industries have been less impacted by labor shortages but are grappling with how to deal with the rise of remote work. For example, the rise of remote work might explain why there has been less “reshuffling” in business and professional services.
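To make those two metrics concrete, here is a tiny sketch with made-up numbers for a single hypothetical industry. It computes the chart’s ratio (experienced unemployed per job opening) and the alternative I’d like to see (unfilled openings as a share of all positions, filled plus unfilled):

```python
# Made-up numbers for one hypothetical industry, purely to illustrate the two ratios.
experienced_unemployed = 650_000   # unemployed workers with experience in the industry
job_openings = 1_000_000           # current unfilled job openings
filled_positions = 9_000_000       # positions currently filled in the industry

# The chart's metric: what share of openings could the experienced unemployed fill?
coverage = experienced_unemployed / job_openings
print(f"openings fillable by experienced unemployed: {coverage:.0%}")   # 65%

# The alternative metric: how big is the hole relative to the industry's total positions?
unfilled_share = job_openings / (job_openings + filled_positions)
print(f"unfilled openings as a share of all positions: {unfilled_share:.0%}")  # 10%
```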

Everyone’s an Expert: Easy Data Maps in Excel

I love data, I love maps, and I love data visualizations.

While we tend not to remember entire data sets, we often remember some patterns related to rank. Speaking for myself anyway, I usually remember a handful of values that are pertinent to me. If I have a list of data by state, then I might take special note of the relative ranking of Florida (where I live), the populous states, Kentucky (where my parents’ families live), and Virginia (where my wife’s family lives). I might also take special note of the top and bottom ranks. See the table below of liquor taxes by state. You can easily find any state that you care about because the states are listed alphabetically.

A ranking is useful. It helps the reader organize the data in their mind. But rankings are ordinal. It’s cool that Florida has a lower liquor tax than Virginia and Kentucky, but I really care about the actual tax rates. Is the difference big or small? Like, should I be buying my liquor in one of the other states in the Southeast instead of Florida? Without knowing the tax rates, I can’t make the economic calculation of whether the extra stop in Georgia is worth the time and hassle (a quick sketch of that calculation follows below). So, the most useful small data sets will have both the ranking and the raw data. Maybe we’re more interested in the rankings, such as in the table below.
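That cross-border calculation is just a break-even comparison. Every number in the sketch below is a made-up placeholder, not an actual state tax rate:

```python
# Break-even check with made-up placeholder numbers, not actual state tax rates.
home_tax_per_gallon = 6.50       # hypothetical excise tax in my home state, $/gallon
neighbor_tax_per_gallon = 3.80   # hypothetical excise tax one state over, $/gallon
gallons_purchased = 2.0
cost_of_extra_stop = 10.00       # gas, time, and hassle, valued in dollars

savings = (home_tax_per_gallon - neighbor_tax_per_gallon) * gallons_purchased
print(f"tax savings from buying across the border: ${savings:.2f}")
print("worth the stop" if savings > cost_of_extra_stop else "not worth the stop")
```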

But tables take time to consume. A reader might immediately take note of the bottom and top values. And given that the data is not in alphabetical order, they might be able to quickly pick out the state that they’re accustomed to seeing in print. But otherwise, it will be difficult to scan the list for particular values of interest.
