This post is quick and simple. We all know that states have different land areas and different populations. We also know that different states produce different amounts of output. We have a pretty good sense for which are the ‘big’ states since these things often go hand-in-hand. But what about household spending on consumption? It’s easy to imagine that some states produce plenty but then invest the proceeds. So, which states consume the most relative to their income?
The map above illustrates which states consume more of their income. There’s not much geographic correlation. But among the ‘big’ states (Texas, California, New York, Illinois), consumption as a share of GDP is below the 67% average. Can we make sense of this? As it turns out, more productive states also tend to have higher per capita output. So those higher-GDP states also have richer populations on average. And, sensibly, those richer populations have lower marginal propensities to consume: they save more. But this is just spit-balling.
A recent post from the blogger (Substacker?) Cremieux called Rich Country, Poor Country showed how small differences in economic growth add up over time. Because he used nominal GDP growth rates, I don’t think that post is exactly the right way to analyze the question, but I still think it’s a very important one. So in this post I will offer, not necessarily a critique of that post, but perhaps a better way of looking at the data.
For the data, I will use the Maddison Project Database, which attempts to create comparable GDP per capita estimates for countries going back as far as possible… for some, back thousands of years, but for most countries at least the last 100 years. And the estimates are stated in modern, purchasing power adjusted dollars, so they should be roughly comparable over time (if you think these estimates are a bit ambitious, please note that they are scaled back significantly from Angus Maddison’s original data, which had an estimate for every country going back to the year 1 AD). The most recent year in the data is currently 2022, so if I slip up in this post and say “today,” I mean 2022, or roughly today in the long sweep of history.
Like Cremieux’s post, I am interested in how much slightly lower economic growth rates can add up over time. Or even not so slightly lower growth rates, like 1 percentage point less per year — this is a huge number, because the compound annual average growth rate for the US from 1800 to 2022 is 1.42%. So let’s look at the data way back to 1800 (the first year the MPD gives us continuous annual estimates for the US) to see how changes in growth rates affect long-term growth.
It probably won’t surprise you that if our 1.42% growth rate had been 1 percentage point lower, the US would be much poorer today, but to put a precise number on it, we would be about where Bolivia is today (that is, ranked 116th out of the 169 countries in the MP Database). Note: I’m using a logarithmic scale, both so it’s easier to see the differences and because this is standard for showing long-run growth rates.
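The compounding arithmetic behind these comparisons is easy to reproduce. Here is a minimal sketch in Python, using the ~$58,000 2022 income and 1.42% growth rate cited in this post (the country rankings themselves come from the MPD and aren’t reproduced here):

```python
def counterfactual_income(actual_income, actual_rate, delta_pp, years):
    """Income today if annual growth had differed by delta_pp percentage
    points over the whole period, holding the starting level fixed."""
    actual_factor = (1 + actual_rate) ** years
    alt_factor = (1 + actual_rate + delta_pp / 100) ** years
    return actual_income * alt_factor / actual_factor

# US: 1.42% compound annual growth, 1800-2022 (222 years), ~$58,000 in 2022
income_2022, rate, years = 58_000, 0.0142, 222

print(round(counterfactual_income(income_2022, rate, -1.00, years)))  # 1pp slower
print(round(counterfactual_income(income_2022, rate, -0.25, years)))  # 0.25pp slower
print(round(counterfactual_income(income_2022, rate, +1.00, years)))  # 1pp faster
```

With these inputs, the 1-point-slower scenario lands around $6,400 (the Bolivia range described above), and the 1-point-faster scenario lands a bit over $500,000, consistent with the scenarios later in the post.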
What is very interesting, I think, is that if our growth rate had been just 0.25 percentage points lower per year since 1800, we would be about where Spain is. Now, Spain is certainly a fine, modern developed country (they rank 34th of the 169 MPD countries). But Spain’s growth has not been spectacular lately. Average income in Spain is almost half of the US today (purchasing power adjusted!), which is another way of saying that just 0.25 percentage points lower over 222 years cuts your income roughly in half.
That’s the power of economic growth.
And if our growth rate had been 0.5 percentage points lower, we’d be about where the big former Communist countries are today (both China and the former countries of the USSR are about equal today — about 1/3 of the income of the US).
What if we perform the same analysis for a shorter time horizon? If we go back 50 years to 1972, the effects are not quite as dramatic, but still visible.
Our compound annual growth rate since 1972 has been a bit higher than the long-run average, around 1.68%. Under these four alternative growth scenarios since 1972, the comparable countries don’t sound so bad. It probably wouldn’t be a huge deal if we were only at Australia’s level, losing just about a decade of economic growth. But it would be a huge failure if we were only at Italy’s current level of development. Under that 1-percentage-point-lower growth scenario, we would have had no net growth since about the year 2000, which has roughly been the case for Italy.
All of these alternative scenarios show the power of economic growth to add up over time, but they do so in a pessimistic way: what if growth had been slower? Let’s look at the opposite: what if growth had been faster over some time horizon? Sticking with the 1972 medium-run example, if real growth rates had been 1 percentage point higher, our income today would be almost double what it actually is: about $95,000, compared with the current $58,000 (the MPD data is stated in 2011 dollars, so that sounds lower than it actually is now: over $80,000).
What if we went back even further? If our economic growth rate since 1800 had been 1 percentage point higher every year, our average income in 2022 would be an astonishing $517,000 — almost 10 times what it actually was in 2022. That’s a dizzying number to think about, and maybe that’s not a realistic alternative scenario.
But what if it had only been 0.25 percentage points higher since 1800 — that probably is a world that was possible. In that case, GDP per capita would be about double what it actually was in 2022, at over $100,000 (again, stated in 2011 dollars).
Grocery prices are definitely up a lot in the past few years. I’ve written about this several times before. But lately there has been a trend on social media to “post your receipts” and show how much your grocery prices have gone up. Unfortunately, very few people actually post the full receipts, often just showing the total, which leads to wild claims like prices being up 250% in just the past 2 years! That’s a huge contrast to the BLS “food at home” category of the CPI, which shows an increase of 4.7% from July 2022 to July 2024 (it’s also unclear in the video what the exact date of the receipt is; he just says “2 years”). Depending on the exact base month, you’re going to be in the 20-25% range compared with pre-pandemic or early-pandemic prices using BLS data.
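One way to see how far apart the viral claim and the CPI figure are is to annualize both. A quick sketch, using the 250% and 4.7% figures above (treating both as covering exactly two years is my simplifying assumption):

```python
def annualized(total_pct_change, years):
    """Convert a cumulative percent change into a compound annual rate."""
    return ((1 + total_pct_change / 100) ** (1 / years) - 1) * 100

print(annualized(250, 2))  # the viral "up 250%" claim
print(annualized(4.7, 2))  # BLS food-at-home CPI, July 2022 - July 2024
```

The claim implies roughly 87% annual food inflation, versus about 2.3% per year in the CPI data.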
What if we actually looked at receipts? I tried such an exercise in November 2023, when there was another round of social media videos claiming prices had doubled in just a single year. My own personal receipt matched the corresponding BLS data pretty closely, but that was just one receipt with only eight items from Sam’s Club (which might not match grocery stores, for various reasons). At the time, I couldn’t find any good receipts from 2019 or 2020 (Kroger and Walmart drop old receipts from your account after about 2 years), but after scouring an old email account, I discovered two more receipts to compare. These are both from Walmart, in 2019 and 2020, and they contain a larger number of items than my Sam’s Club receipt (each with about a dozen and a half items that are fairly typical grocery purchases, and I was able to find matching products today).
Recorded music sales peaked in 1999; then came Napster and other ways to listen to the exact music you want for free. Recorded music sales still haven’t fully recovered, but with the rapid growth of paid streaming since 2014, they have been increasing again:
Meanwhile, live music sales have exploded since the ’90s:
The latest report from Pollstar on the top live tours is positively glowing:
2023 was a colossus, the likes of which the live industry has never before seen. If 2022 was a historic record-setting year, which it was, then this year completely blew it out of the water — by double digits. Total grosses for the 2023 Worldwide Top 100 Tours were up 46% to $9.17 billion.
When you combine live and recorded sales, total spending on music has now passed the 1999 peak; this is the biggest the market for music has ever been. Of course, this doesn’t mean it’s an easy time to be a musician; touring is hard work and, as always, record labels and others are taking a big share of the money before it gets to artists. And opinions differ about whether today’s environment is good for creating good new music.
There are dozens of songs about how the road is hard, and the more time you spend on the road, the less they sound like cliches than like a simple and sometimes stark description of your life. Sooner or later everybody spots the exit that has their name on it –John Darnielle
The BLS data is noisy but suggests that the number of musicians in the US has been fairly flat and is projected to stay that way. A lot will depend on whether live music continues to grow, how much of that is captured by a few superstars, and whether the current streaming paradigm continues, or goes in a more or less artist-friendly direction. But now that consumers are willing to pay for music again, artists at least have a fighting chance.
This morning the Bureau of Labor Statistics released the latest data from their Quarterly Census of Employment and Wages, covering the first quarter of 2024. Along with this release comes the announcement of their preliminary “benchmark estimate” for March 2024, which will eventually (next year) be used to revise employment data for the Current Employment Statistics program. To keep the alphabet soup of programs clear in your head: CES is the more familiar “nonfarm jobs” data that is released each month, usually with some media fanfare.
Benchmarking is an important part of the process for many data releases, because the monthly CES data is based on a survey of employers, a subset of the total. But the QCEW data is the universe of employees — at least the universe of those covered by Unemployment Insurance law, which is something like 97-98% of workers in the US. So the numbers will never match exactly (CES is supposed to be measuring all workers, not just the 97-98% covered by UI), but they should be pretty close. The media reports the CES monthly data more prominently, because it is more timely and usually pretty close to correct — but benchmarking is the process to see just how correct those initial surveys were.
That brings us to the release today, which is the preliminary estimate of the benchmark adjustment for March 2024 (it will be finalized early in 2025). And that preliminary estimate was a big number, with a projected downward revision of 818,000 jobs. To put this in perspective, the current CES data shows 2.9 million jobs were added between March 2023 and March 2024, so this estimate suggests that job growth was overstated by perhaps 40 percent. That’s a big revision, though large revisions are not unheard of: the same figure for March 2022 was an estimated 468,000 jobs higher, while March 2019 was 501,000 jobs lower. But this year is a big one (the largest absolute number since 2009). Here’s a chart summarizing recent years’ revisions from Bloomberg:
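The “overstated by perhaps 40 percent” framing compares the revision to the revised (true) job growth, not to the originally reported number. A quick check, using the figures above:

```python
reported_growth = 2_900_000   # CES, March 2023 - March 2024
revision = 818_000            # preliminary downward benchmark revision

revised_growth = reported_growth - revision
overstatement = revision / revised_growth * 100  # relative to the revised figure
print(f"{overstatement:.0f}%")
```

That works out to about 39 percent; measured against the originally reported 2.9 million instead, it would be about 28 percent.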
I’ve covered this topic before, such as an April 2024 post where I noted that as of September 2023, there was an 880,000 gap in job growth between the CES and QCEW over the prior year. So this was not unexpected, and in the days leading up to the report, close followers of the data were forecasting that the revision could be up to 1 million jobs.
When I was in high school I remember talking about video game consumption. Yes, an Xbox was more than two hundred dollars, but one could enjoy the next hour of that video game play at a cost of almost zero. Video games lowered the marginal cost and increased the marginal utility of what is measured as leisure. Similarly, the 20th century was the time of mass production, pervaded by labor-saving devices and a deluge of goods. Remember servants? That’s a pre-20th-century technology. Domestic work in another person’s house was very common in the 1800s, less so as the 20th century progressed. Now we have devices that save on both labor and physical resources. Software helps us surpass the historical limits of moving physical objects in the real world.
There’s something that I think about a lot and I’ve been thinking about it for 20 years. It’s simple and not comprehensive, but I still think that it makes sense.
Labor is highly regulated and costly.
Physical capital is less regulated than labor.
Software and writing more generally is less regulated than physical capital.
I think that just about anyone would agree with the above. Labor is regulated by health and safety standards, “human resource” concerns, legal compliance and preemption, environmental impact, transportation infrastructure, and more. It’s expensive to employ someone, and it’s especially expensive to have them employ their physical labor.
As I wrote last November, the question “are you better off than you were four years ago?” is a common benchmark for evaluating Presidential reelection prospects. And even though Biden is no longer running for reelection, voters will no doubt be considering the economic performance of his first term when thinking about their vote in November.
The good news for American wage earners (and possibly Harris’ election prospects) is that average wages have now outpaced average price inflation since January 2021. Despite some of that time period containing the worst price inflation in a generation, wages have continued to grow even as price growth has moderated. Key chart:
For most of Biden’s term, it was true that prices had outpaced wages. But no longer.
The real growth in wages, admittedly, is not very robust, despite being slightly positive. How does this compare to past performance under recent Presidents? Surprisingly, pretty well! (Lots of caveats here, but this is what the raw data shows.)
Will a recession happen? It’s famously hard, maybe impossible, to predict. Personally, I have a relatively monetarist take. I consider the goals of the Federal Reserve, what tools they have, and how they make their decisions. I also think about the very recent trend in the macroeconomy and how it’s situated relative to history. Right now, the yield curve has been inverted for quite some time and the Sahm rule has been satisfied; both are historical indicators of recession.
Recessions are determined by the NBER’s Business Cycle Dating Committee. They always make their determination in hindsight and almost never in real time. They look at a variety of indicators and judge whether each declines, for how long, how deeply, and the breadth of decline across the economy. So plenty of ‘bad’ things can happen without triggering a recession designation.
In my expert opinion, recessions can largely be prevented by maintaining expected and steady growth in NGDP. This won’t solve real sectoral problems, but it will help to prevent contagion and spirals. The Fed can control NGDP to a great degree. In doing so, they can affect unemployment and growth in the short run, and inflation in the medium to long run.
One drawback of the NGDP series is that it’s infrequent, published only quarterly. It’s hard to know whether a dip is momentary, a false signal that will later be updated, or whether there is a recession coming. So, what should one examine? One could examine leading indicators or the various high-frequency indicators of economic activity. But those are a little too much like tarot cards and fortune telling for my taste.
Despite its many flaws*, I always like to check in on what the Taylor Rule suggests for the Fed. Its virtues are that it gives a definite, precise answer, and that it was agreed upon ahead of time by a variety of economists as giving a decent answer for what the Fed should do. Without something like the Taylor Rule, everyone tends to grasp for reasons that This Time Is Different. Academics seek novelty, so they would rather come up with some complex new theory of what to do instead of something undergrads have been taught for years. Finance types tend to push whatever would benefit them in the short term, which is typically rate cuts. Political types push whatever benefits their party; typically rate cuts if they are in power and hikes if not, though often those in power simply want to emphasize good economic news while those out of power emphasize the bad news.
The Taylor Rule can cut through all this by considering the same factors every time, regardless of whether it makes you look clever, helps your party, or helps your returns this quarter. So what is it saying now? It recommends a 6.05% Fed funds rate:
Fed Funds Rate Suggested by the Bernanke Version of the Taylor Rule
Source: my calculation using FRED data, continually updated here
I continue to use the Bernanke version of the Taylor Rule, which says that the Fed Funds rate should be equal to:
Core PCE + Output Gap + 0.5 × (Core PCE − 2) + 2
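As a function, this rule is a one-liner. The inputs below are illustrative assumptions for the sake of the example, not the actual FRED readings behind the 6.05% figure:

```python
def bernanke_taylor(core_pce, output_gap):
    """Bernanke version of the Taylor Rule: inflation + output gap
    + 0.5 * (inflation - 2% target) + 2% neutral real rate."""
    return core_pce + output_gap + 0.5 * (core_pce - 2) + 2

# Hypothetical inputs: 2.65% core PCE inflation, +1.08% output gap
print(bernanke_taylor(core_pce=2.65, output_gap=1.08))
```

A useful sanity check on the formula: with inflation at the 2% target and a zero output gap, it recommends a 4% nominal rate (the 2% target plus the 2% neutral real rate).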
*What are the flaws of the Taylor Rule? It sees interest rates as the main instrument of monetary policy; it relies on the Output Gap, which can only really be guessed at; and it incorporates no measures of expectations. If I were coming up with my own rule I would probably replace the Output Gap with a labor market measure like unemployment, and add measures of money supply shifts and inflation expectations. Perhaps someday I will, but like everyone else I would naturally be tempted to overfit it to the concerns of the moment; I like that the Taylor Rule was developed at a time when Taylor had no idea what it might mean for, say, the 2024 election or the Q3 2024 returns of any particular hedge fund.
That said, people have now created enough different versions of the Taylor Rule that they can produce quite a range of answers, undermining one of its main virtues. The Atlanta Fed maintains a site that calculates 3 alternative versions of the rule, and makes it easy for you to create even more alternatives:
Two of their rules suggest that Fed Funds should currently be about 4%, implying a major cut at a time when the Bernanke version of the rule suggests a rate hike. On the other hand, perhaps this variety is a virtue in that it accurately indicates that the current best path is not obvious; and the true signal comes in times like late 2021, when essentially every version of the rule was screaming that the Fed was way off target.
Recently there has been some discussion in the Presidential race about the taxation of parents vs. childless taxpayers. The discussion has been ongoing, but it was kicked up again when a 2021 video of J.D. Vance resurfaced in which he said that taxpayers with children should face lower tax rates than those without children. There was some political back-and-forth about this idea, much of it tied up in the framing of the issue, with the usual bad faith on both sides about the fundamental issue (in short: most Democrats and a small but growing number of Republicans support increasing the size of the Child Tax Credit).
Let’s leave the politicking aside for a moment and focus on policy. As many pointed out in response to Vance’s idea, we already do this. In fact, we have almost always done this in the history of the US income tax — “this” meaning giving taxpayers at least some break for having kids. For most of the 20th century, this was done through personal exemptions which usually included some tax deduction for children, and later in the century the Child Tax Credit was added (after 2017, the exemptions were eliminated in favor of a large CTC). Other features of the tax code also make some accounting for the number of children, most notably the size of the Earned Income Credit.
The chart below is my attempt to show how the tax breaks for children have affected four sample taxpaying households. What I show here is sometimes called the “zero bracket” — that is, how much income you can earn without paying any federal income taxes. The four households are: a single person with no children, a married couple with no children, a single person with two children (“head of household”), and a married couple with two children. All dollar amounts are inflation-adjusted to current dollars.
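One way to compute a “zero bracket” like this is to find the highest income at which tax liability, net of credits, is still zero. Below is a sketch with illustrative parameters loosely based on the 2024 code — the deduction amounts, bracket thresholds, and $2,000-per-child credit are my assumptions, the credit is treated as fully usable against liability, and the EITC is ignored:

```python
def tax_before_credits(taxable, brackets):
    """Marginal-rate tax on taxable income.
    brackets: list of (lower_threshold, rate), ascending."""
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if taxable > lo:
            tax += (min(taxable, hi) - lo) * rate
    return tax

def zero_bracket(std_deduction, brackets, credits, step=10):
    """Highest income (to the nearest step) owing no net income tax."""
    income = std_deduction
    # advance while the *next* step up still owes no net tax
    while tax_before_credits(income + step - std_deduction, brackets) <= credits + 1e-6:
        income += step
    return income

# Illustrative 2024-ish parameters (assumptions, not official figures)
single = [(0, 0.10), (11_600, 0.12)]
married = [(0, 0.10), (23_200, 0.12)]
ctc = 2_000  # per child

print(zero_bracket(14_600, single, 0))         # single, no kids
print(zero_bracket(29_200, married, 2 * ctc))  # married, two kids
```

A closed-form inversion of the bracket schedule is possible, but the simple scan keeps the sketch readable: with no credits, the zero bracket is just the standard deduction, while credits push it well above that.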