Median Income Is Down Again. Are There Any Silver Linings in the Data?

This week the Census Bureau released its annual update on “Income, Poverty and Health Insurance Coverage in the United States.” This release is always exciting for researchers, because it involves a massive release of data based on a fairly large (75,000-household) sample with detailed questions about income and related matters. For non-specialists, it also generates some of the most commonly used national data on income and poverty. Have you heard of the poverty rate? It’s from this data. How about median household income? Also from this data.

I’ll focus on income data in this post, though there is a lot you could say about poverty and health insurance too. The headline result on median income is, once again, a dismal one. Whether you look at median household income (very commonly reported, even though I don’t like this measure) or median family income (which I prefer), both are down from 2021 to 2022 when adjusted for inflation. Both are still down noticeably from the pre-pandemic high in 2019 (though both are also above 2018 — we aren’t quite back to the Great Depression or Dark Ages, folks!).

These headline results are bad. There is no way to sugarcoat or “on the other hand” those results. And these results are probably more robust and representative than other measures of average or median earnings, since they aren’t subject to “composition effects” — when those with zero wages in one period don’t show up in the data. I will note that these results are for 2022, and we are highly likely to see a turnaround when we get the 2023 data in about a year (inflation has slowed to below the pace of wage growth in 2023).
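To make the composition-effect point concrete, here is a toy illustration with made-up numbers (nothing from the Census data): if the lowest earners drop to zero wages, a median computed only over people who still have wages can rise even though nobody is actually better off.

```python
# Toy illustration of a "composition effect" (invented numbers, for intuition only).
import statistics

before = [20_000, 30_000, 40_000, 50_000, 60_000]   # five workers' annual earnings
after = [0, 0, 40_000, 50_000, 60_000]               # the two lowest earners lose all wages

# An earnings series that only counts people with positive wages:
median_employed_before = statistics.median([w for w in before if w > 0])  # 40,000
median_employed_after = statistics.median([w for w in after if w > 0])    # 50,000 -- looks like a raise

# A household survey like the CPS keeps everyone in the sample:
median_everyone_before = statistics.median(before)  # 40,000
median_everyone_after = statistics.median(after)    # 40,000 -- nobody actually got richer

print(median_employed_before, median_employed_after)
print(median_everyone_before, median_everyone_after)
```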

But given that obviously bad headline result, was there any good data? As I mentioned above, a ton of data, sliced many different ways, is released with this report. Some of it also gives us consistent data back decades, in some cases to the 1940s. What else can we learn from this data release?

Median Income by Race

When we look at median income by race, there are a few silver linings. The headline data from Census tells us that only the drop in household income for White, Non-Hispanics was statistically significant. For other races and ethnicities, the changes were not statistically significant from 2021 to 2022 — and some of those changes were actually positive. We shouldn’t dismiss White, Non-Hispanics — they are the largest racial/ethnic group! — but it is useful to look at others.

Black households and families are the most interesting to look at in more detail, especially because they are the poorest large racial group in the US. Black household and family income increased from 2021 to 2022, although the increase was small enough that we can’t say it is statistically significant (remember, this is a sample, not the universe of the decennial Census).

But what’s more important is that median Black household income is now at the highest level it has ever been (adjusted for inflation, as always). Median Black household income is about $1,000, or around 2 percent, higher than in 2019 — the peak year for overall median income. Two percent growth over 3 years is nothing to shout from the rooftops, but it is very different from White, Non-Hispanic households, which are down over 6 percent since 2019.

Median Black family income is roughly flat since 2019, but it is up about 1.5 percent in the past year — not quite as robust, but still better than the overall numbers.

Historical Income Data

The other silver lining I always like to mention is the long-run historical data. This data often gets overlooked in the obsessive focus on the most recent changes, so it’s useful to sit back and look at how far we have come. Let’s start where we just left off, with Black families. I wrote a post back in February about Black family income, which had data current through 2021, but it’s useful once again to look at the data with another year (plus they have updated the inflation adjustments for 2000 onward).

The chart shows the percent of Black families that fall into three income groups, using total money income data. The data is adjusted for inflation. The progress is dramatic. In 1967, the first year available, half of Black families had incomes under $35,000. By 2022 that share had been cut in half, to just one quarter of families (the 2022 number is the lowest on record, even beating 2019). Twenty-five percent is still very high, especially when compared to White, Non-Hispanics (about 12 percent), but it’s still massive progress. It’s even a 10-percentage-point drop from just 10 years ago. And Black families haven’t just moved up a little bit: the “middle class” group (between $35,000 and $100,000) has been pretty stable in the mid-40-percent range, while the share of rich (over $100,000) Black families has grown dramatically, from just 5 percent to over 30 percent.

We saw earlier that progress for White, Non-Hispanics has stumbled in the past 3 years, but the long run data is much more optimistic (this data starts in 1972).

The progress here should be evident too, but let me highlight one thing for emphasis: as far back as 1999, the largest of these three groups was the “rich” (over $100,000) group. And since 2017, the upper income group has been the majority, with median White Non-Hispanic family income surpassing $100,000 in 2017, up from $70,000 at the beginning of the series in the early 1970s (all inflation adjusted, of course).

The next question I often get with this historical data is: How much of this increase is due to the rise of two-income households? Well, this same data release allows us to look at that data too! This final chart shows median family income for families with either one or two earners (there are families with zero earners or more than two, but these two categories make up the bulk of families). This data is pretty cool because it goes all the way back to 1947.

This chart doesn’t look so good for one-earner families. After growing along with two-earner families in the 1950s and 1960s, their median income basically stagnates from the early 1970s until the late 2010s. Then you get a little growth. Not good!

I think more investigation is needed here, but the share of families that have two earners has grown dramatically, from 26 percent of families in 1947 to 42 percent in 2022. Single-earner families shrank from 59 percent to 31 percent, and dual-income families have been the most common family type since the late 1960s. There are some important compositional differences here in what types of families only have one earner. If we imagine some alternate history where, by law, only one spouse was allowed to work, certainly the single-earner line would have risen more. And many of the single-earner families today are headed by single mothers, who for a variety of reasons have much lower earning potential than the fathers heading married couples in the 1950s and 1960s. So the numbers aren’t perfectly comparable.

Still, even for single earner families, real median income has more than doubled since 1947 — though most of that growth had happened by the early 1970s.

As we make our way through a challenging economic time following the pandemic and 2 years of unusually high inflation, hopefully we can look forward to a future of resuming the upward trajectory of incomes for all kinds of families.

Generative AI Nano-Tutorial

Everyone who has not been living under a rock this year has heard the buzz around ChatGPT and generative AI. However, not everyone may have clear definitions in mind, or an understanding of how this stuff works.

Artificial intelligence (AI) has been around in one form or another for decades. Computers have long been used to analyze information and come up with actionable answers. Classically, computer output has been in the form of numbers or graphical representation of numbers. Or perhaps in the form of chess moves, beating all human opponents since about 2000.

Generative AI is able to “generate” a variety of novel content, such as images, video, music, speech, text, software code and product designs, with quality which is difficult to distinguish from human-produced content. This mimicry of human content creation is enabled by having the AI programs analyze reams and reams of existing content (“training data”), using enormous computing power.

I wanted to excerpt here a fine article I just saw which is informative on this subject. Among other things, it lists some examples of gen-AI products, and describes the “transformer” model that underpins many of these products. I skipped the section of the article that discusses the potential dangers of gen-AI (e.g., problems with false “hallucinations”), since that topic has been treated already in this blog.

Between this article and the Wikipedia article on Generative artificial intelligence, you should be able to hold your own, or at least ask intelligent questions, when the subject next comes up in your professional life (which it likely will, sooner or later).

One technical point for data nerds is the distinction between “generative” and “discriminative” approaches in modeling. This is not treated in the article below, but see here.
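If you want a concrete, if simplified, picture of that distinction, here is a small sketch of my own (not from the article below or the link above): a generative classifier like Gaussian Naive Bayes models how each class’s data is produced, so you can sample new points from it, while a discriminative model like logistic regression only learns the boundary between classes.

```python
# Toy contrast between a generative and a discriminative classifier (illustrative only).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Generative: learns p(x | class) and p(class), so it can also *generate* plausible new x's.
gen = GaussianNB().fit(X, y)
class0_mean, class0_var = gen.theta_[0], gen.var_[0]
rng = np.random.default_rng(0)
fake_class0_points = rng.normal(class0_mean, np.sqrt(class0_var), size=(3, 2))
print("sampled 'new' class-0 points:\n", fake_class0_points)

# Discriminative: learns p(class | x) directly -- good at telling classes apart,
# but it carries no recipe for producing new data.
disc = LogisticRegression().fit(X, y)
print("predicted classes for the first five points:", disc.predict(X[:5]))
```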

All text below the line of asterisks is from Generative AI Defined: How it Works, Benefits and Dangers, by Owen Hughes, Aug 7, 2023.

*******************************************************

What is generative AI in simple terms?

Generative AI is a type of artificial intelligence technology that broadly describes machine learning systems capable of generating text, images, code or other types of content, often in response to a prompt entered by a user.

Generative AI models are increasingly being incorporated into online tools and chatbots that allow users to type questions or instructions into an input field, upon which the AI model will generate a human-like response.

How does generative AI work?

Generative AI models use a complex computing process known as deep learning to analyze common patterns and arrangements in large sets of data and then use this information to create new, convincing outputs. The models do this by incorporating machine learning techniques known as neural networks, which are loosely inspired by the way the human brain processes and interprets information and then learns from it over time.

To give an example, by feeding a generative AI model vast amounts of fiction writing, over time the model would be capable of identifying and reproducing the elements of a story, such as plot structure, characters, themes, narrative devices and so on.

……

Examples of generative AI

…There are a variety of generative AI tools out there, though text and image generation models are arguably the most well-known. Generative AI models typically rely on a user feeding it a prompt that guides it towards producing a desired output, be it text, an image, a video or a piece of music, though this isn’t always the case.

Examples of generative AI models include:

  • ChatGPT: An AI language model developed by OpenAI that can answer questions and generate human-like responses from text prompts.
  • DALL-E 2: Another AI model by OpenAI that can create images and artwork from text prompts.
  • Google Bard: Google’s generative AI chatbot and rival to ChatGPT. It’s trained on the PaLM large language model and can answer questions and generate text from prompts.
  • Midjourney: Developed by San Francisco-based research lab Midjourney Inc., this gen AI model interprets text prompts to produce images and artwork, similar to DALL-E 2.
  • GitHub Copilot: An AI-powered coding tool that suggests code completions within the Visual Studio, Neovim and JetBrains development environments.
  • Llama 2: Meta’s open-source large language model can be used to create conversational AI models for chatbots and virtual assistants, similar to GPT-4.
  • xAI: After funding OpenAI, Elon Musk left the project in July 2023 and announced this new generative AI venture. Little is currently known about it.

Types of generative AI models

There are various types of generative AI models, each designed for specific challenges and tasks. These can broadly be categorized into the following types.

Transformer-based models

Transformer-based models are trained on large sets of data to understand the relationships between sequential information, such as words and sentences. Underpinned by deep learning, these AI models tend to be adept at NLP [natural language processing] and understanding the structure and context of language, making them well suited for text-generation tasks. ChatGPT-3 and Google Bard are examples of transformer-based generative AI models.

Generative adversarial networks

GANs are made up of two neural networks known as a generator and a discriminator, which essentially work against each other to create authentic-looking data. As the name implies, the generator’s role is to generate convincing output such as an image based on a prompt, while the discriminator works to evaluate the authenticity of said image. Over time, each component gets better at their respective roles, resulting in more convincing outputs. Both DALL-E and Midjourney are examples of GAN-based generative AI models…

Multimodal models

Multimodal models can understand and process multiple types of data simultaneously, such as text, images and audio, allowing them to create more sophisticated outputs. An example might be an AI model capable of generating an image based on a text prompt, as well as a text description of an image prompt. DALL-E 2 and OpenAI’s GPT-4 are examples of multimodal models.

What is ChatGPT?

ChatGPT is an AI chatbot developed by OpenAI. It’s a large language model that uses transformer architecture — specifically, the “generative pretrained transformer”, hence GPT — to understand and generate human-like text.

What is Google Bard?

Google Bard is another example of an LLM based on transformer architecture. Similar to ChatGPT, Bard is a generative AI chatbot that generates responses to user prompts.

Google launched Bard in the U.S. in March 2023 in response to OpenAI’s ChatGPT and Microsoft’s Copilot AI tool. In July 2023, Google Bard was launched in Europe and Brazil.

…….

Benefits of generative AI

For businesses, efficiency is arguably the most compelling benefit of generative AI because it can enable enterprises to automate specific tasks and focus their time, energy and resources on more important strategic objectives. This can result in lower labor costs, greater operational efficiency and new insights into how well certain business processes are — or are not — performing.

For professionals and content creators, generative AI tools can help with idea creation, content planning and scheduling, search engine optimization, marketing, audience engagement, research and editing and potentially more. Again, the key proposed advantage is efficiency because generative AI tools can help users reduce the time they spend on certain tasks so they can invest their energy elsewhere. That said, manual oversight and scrutiny of generative AI models remains highly important.

Use cases of generative AI

Generative AI has found a foothold in a number of industry sectors and is rapidly expanding throughout commercial and consumer markets. McKinsey estimates that, by 2030, activities that currently account for around 30% of U.S. work hours could be automated, prompted by the acceleration of generative AI.

In customer support, AI-driven chatbots and virtual assistants help businesses reduce response times and quickly deal with common customer queries, reducing the burden on staff. In software development, generative AI tools help developers code more cleanly and efficiently by reviewing code, highlighting bugs and suggesting potential fixes before they become bigger issues. Meanwhile, writers can use generative AI tools to plan, draft and review essays, articles and other written work — though often with mixed results.

The use of generative AI varies from industry to industry and is more established in some than in others. Current and proposed use cases include the following:

  • Healthcare: Generative AI is being explored as a tool for accelerating drug discovery, while tools such as AWS HealthScribe allow clinicians to transcribe patient consultations and upload important information into their electronic health record.
  • Digital marketing: Advertisers, salespeople and commerce teams can use generative AI to craft personalized campaigns and adapt content to consumers’ preferences, especially when combined with customer relationship management data.
  • Education: Some educational tools are beginning to incorporate generative AI to develop customized learning materials that cater to students’ individual learning styles.
  • Finance: Generative AI is one of the many tools within complex financial systems to analyze market patterns and anticipate stock market trends, and it’s used alongside other forecasting methods to assist financial analysts.
  • Environment: In environmental science, researchers use generative AI models to predict weather patterns and simulate the effects of climate change

….

Generative AI vs. machine learning

As described earlier, generative AI is a subfield of artificial intelligence. Generative AI models use machine learning techniques to process and generate data. Broadly, AI refers to the concept of computers capable of performing tasks that would otherwise require human intelligence, such as decision making and NLP.

Machine learning is the foundational component of AI and refers to the application of computer algorithms to data for the purposes of teaching a computer to perform a specific task. Machine learning is the process that enables AI systems to make informed decisions or predictions based on the patterns they have learned.

(Again, to make sure credit goes where it is due: the text between the line of asterisks above and this note was excerpted from Generative AI Defined: How it Works, Benefits and Dangers, by Owen Hughes.)

AI as matter compiler

I just learned that the de-aging used to produce scenes with a “younger” Harrison Ford in the most recent Indiana Jones movie relied on an AI process that matched Ford’s face in each newly filmed moment with a perfectly matching facial expression from archival footage of the actor.

The movie was fine, even if the third act was a little undercooked. I just want to point out two things. First, this is a natural extension of the LLM model of artificial intelligence: pulling from a library of information provided (i.e., the internet, or footage of Indiana Jones punching Nazis) and then reassembling that information to produce a new product. When we debate whether or not these pieces of software constitute actual “intelligence,” what we are really arguing about is whether or not the substrate is sufficiently simple, sufficiently inert, for the act of assembly to constitute an act of intelligence.

Case in point, nobody is arguing that de-aging Harrison Ford betrays true intelligence on the part of the software. The material being acted upon is already coherent; it’s just being reordered to convey a new message (i.e., scenes in a different movie). Similar things could be said about ChatGPT. It’s just searching through pre-existing text and ideas, sifting for relevance, and reordering to optimally assemble into an updated product.

Operating within this metaphor, what would constitute intelligence is a system that can sift through a primordial substrate of inorganic, individually incoherent components, and assemble them into original pieces of coherent information; sparks of cognition. An example of this would be to take the ambient, ineffable sentiments implied by the collective set of questions being asked by ChatGPT users (hopes, fears, feelings, etc.) and produce not just answers to questions not yet answered by humans, but answers to questions not yet asked. To sift through information, break it into its smallest possible molecules of cognition, and contribute to the broader collective body of knowledge by assembling new thoughts.

This sort of process has been occasionally posited in science fiction in the form of a “matter compiler,” which is essentially just a 3-D printer at the molecular level. That remains pretty far off, as best I know. I suspect the same will be true of true artificial general intelligence. We know an awful lot about how molecules are assembled; the problem with producing a matter compiler is largely one of cost. We know comparatively less about how neurons firing are assembled into acts of generative intelligence, of creativity. We will no doubt get there, but getting there usually happens well before we cross the chasm of cost, of material feasibility.

But yeah, the new Indiana Jones movie was okay.

Assorted Links on Women and Family

First, there have been many tweets about Sophie Turner as a young mom and human who is getting divorced. Here’s an article (Stylist UK).

Also so many tweets about the 29-year-old who made eggs on the weekend. Here’s an article about it by Mary Harrington.

Thirdly, Understanding the Baby Boom (Works in Progress)

Parenthood rapidly became much easier and safer between the 1930s and 1950s. The spread of labour-saving devices in the home such as washing machines and fridges made raising children easier; improvements in medicine making childbirth safer; and easier access to housing made it cheaper to house larger families.

Anvar Sarygulov & Phoebe Arslanagic-Wakefield

I hate to be the next person publicly talking about Joe Jonas and Sophie Turner. I wish them both the best, and this kind of attention is probably hard on their kids. Anyway… what interests me about this case is that parenting seems to have been hard on them, even though Joe Jonas is worth $50 million. They could have a washing machine on every floor of their huge house. So, do the Works in Progress authors really understand the Baby Boom?

The Inner Ring and Barbie and Academia

C.S. Lewis is known as a novelist, but he was an academic familiar with university politics. In 1944, he gave a lecture called “The Inner Ring” about how everyone wants to be accepted into an “inner ring” of friends or colleagues. Being on the fringes of a group can be a source of misery.

My main purpose in this address is simply to convince you that this desire is one of the great permanent mainsprings of human action. It is one of the factors which go to make up the world as we know it—this whole pell-mell of struggle, competition, confusion, graft, disappointment… Unless you take measures to prevent it, this desire is going to be one of the chief motives of your life, from the first day on which you enter your profession…

To a young person, just entering on adult life, the world seems full of “insides,” full of delightful intimacies and confidentialities, and he desires to enter them. But if he follows that desire he will reach no “inside” that is worth reaching.

I’ll list the items that got me thinking about “the inner ring”.

This week, Alex posted on Misandry. Are men starting to feel like it is actually the women who are in the inner ring and men are on the outside?

I’ll share a story in which I felt like I was not in the inner ring. Before I had a job, I was at a professional conference. A colleague invited me to go out with him and some guys to a cigar shop that evening. “Yes!” I said at first, because this sounded both fun and good for my career. Then I remembered that I was three months pregnant. Smoking would damage the baby’s health, so I awkwardly backed out of the event. Of course, this is not a big deal in retrospect, but it’s the kind of thing that can bother you if you obsess over the rings you can’t join.

Women have long felt like they were on the outside of the boys’ club. “Is everyone smoking cigars without me?” The second item in reverse chronological order is the Barbie movie. In the movie, the top-floor meeting room of male executives at the L.A. Mattel office represents the male inner ring. The cul-de-sac of pink dream houses in Barbieland represents the female inner ring. Every character in the movie feels left out of a ring. In the article Alex pointed me to, John Tierney writes, “Smug misandry has been box-office gold for Barbie, which delights in writing off men as hapless romantic partners, leering jerks, violent buffoons, and dimwitted tyrants who ought to let women run the world.”

Several posts by my excellent co-bloggers are related to being left out of opportunities for networking or funding. Click through if you want to learn more about the NBER, the dark side, or grants.

The National Bureau of Economic Research (NBER) sent out its membership invitations this week. My twitter timeline quickly filled with explicit congratulations and oblique commentary. My private messages filled with…less than oblique commentary. Academia has always been hierarchical and economics has never been an exception.

Let’s Talk about the NBER, Mike

In my early days, I innocently asked a researcher what the letters N.B.E.R. stood for. He remarked that it was a “money laundering scheme run out of Boston.” I essentially took him literally, because I didn’t know any better at the time. It is easy to tell that he was never invited to be a member, or else he would have described it positively.

The bureaucratic and scholarly gamesmanship that can hold back one paper and elevate another. Every story your paranoid lizard brain can dream up explaining why a node in the tournament decision tree turned against you and in another’s favor.

On EJMR, status competitions, and tapeworms, Mike

I may have lost count but I’m pretty sure this was the 13th “true grant” I have applied for, and the 1st I will actually receive.

13th Time’s A Charm: Finally Grant Funded, James

It’s always nice, rhetorically speaking, to end on a positive note. That’s what Lewis did in his lecture. He said that there is a form of human association that is rewarding and virtuous: true friendship. Have a good weekend, friends.

[Not] Choosing Rationally

I’ve written previously on game theory, about the generality of Pure Strategy Nash Equilibria (PSNE), and the drawbacks of Sub-Game Perfect Nash Equilibria (SGPE). In this post I present another limitation of SGPE.


First, some definitions:
PSNE: “No player can change one of their strategies and improve their payoff, given the strategies of all other players.”
Subgame: “A subset of any extensive-form game that includes an initial node (which doesn’t share an information set with other nodes) and all its successor nodes.”
Subgame Equilibrium (SGE): “The PSNE of the Subgame”
SGPE: “The set of PSNE that are also SGE”


Clearly, there is nothing inconsistent about the above definitions. The reason that SGPE emerged is that some PSNE assert that a player would be willing to choose strategies that do not maximize conditional payoffs in subgames that are off of the equilibrium path. So, people often characterize the SGPE as a player ‘being rational each step of the way in each subgame’.
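To make that standard motivation concrete, here is a toy entry-deterrence sketch of my own (the usual textbook example, not the case this post builds to): one PSNE has the incumbent “threatening” to fight entry, but fighting does not maximize the incumbent’s payoff in the subgame where entry has actually happened, so only the accommodating equilibrium is subgame perfect.

```python
# Entry-deterrence toy game (my own illustrative payoffs).
# The entrant moves first (Stay Out or Enter); if entry occurs, the incumbent
# chooses Fight or Accommodate. Payoffs are (entrant, incumbent).
payoffs = {
    ("out", "fight"): (0, 4), ("out", "accommodate"): (0, 4),   # no entry, threat never tested
    ("enter", "fight"): (-1, 1), ("enter", "accommodate"): (2, 2),
}
E_STRATS, I_STRATS = ("out", "enter"), ("fight", "accommodate")

def is_psne(e, i):
    """Neither player can improve by unilaterally changing their whole strategy."""
    pe, pi = payoffs[(e, i)]
    best_e = all(pe >= payoffs[(e2, i)][0] for e2 in E_STRATS)
    best_i = all(pi >= payoffs[(e, i2)][1] for i2 in I_STRATS)
    return best_e and best_i

psne = [(e, i) for e in E_STRATS for i in I_STRATS if is_psne(e, i)]
print("PSNE:", psne)  # [('out', 'fight'), ('enter', 'accommodate')]

# In the subgame that starts after entry, the incumbent's best reply is Accommodate (2 > 1),
# so the "fight" threat is not rational there and that PSNE is not subgame perfect.
best_reply_after_entry = max(I_STRATS, key=lambda j: payoffs[("enter", j)][1])
sgpe = [(e, i) for (e, i) in psne if i == best_reply_after_entry]
print("SGPE:", sgpe)  # [('enter', 'accommodate')]
```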

But, there is a problem. “Each step of the way” and “in each subgame” are not the same thing. Each step of the way implies that a player is rational at each decision – i.e., at each information set. But not every information set is a subgame! So, an SGPE can require rationality in each subgame while also permitting some irrationality at individual information sets. Since economists like to identify the bounds of their claims, let me emphasize the word can. In order to be correct, I need only identify one case in which the claim is true.


Here is that case:

Continue reading

Cool the Schools

Short post today because I’m busy watching my kids, whose school was canceled because of excessive heat, like many schools in Rhode Island today.

I thought this was a ridiculous decision until my son told me he heard from his teacher that his elementary school is the only one in town that has air conditioning for every classroom. Given that, the decision to cancel is at least reasonable, but the lack of AC is not.

It’s not just that hot classrooms are unpleasant for students and staff, or that sudden cancellations like this are a major burden for parents. Several economics papers have found that air conditioning significantly improves students’ learning as measured by test scores (though some find no effect). Park et al. (2020 AEJ: EP) find that:

Student fixed effects models using 10 million students who retook the PSATs show that hotter school days in the years before the test was taken reduce scores, with extreme heat being particularly damaging. Weekend and summer temperatures have little impact, suggesting heat directly disrupts learning time. New nationwide, school-level measures of air conditioning penetration suggest patterns consistent with such infrastructure largely offsetting heat’s effects. Without air conditioning, a 1°F hotter school year reduces that year’s learning by 1 percent.

This can actually be a bigger issue in somewhat northern places like Rhode Island: we’re far enough south to get some quite hot days, but far enough north that AC is not ubiquitous. Data from the Park paper shows that New York and New England are actually some of the worst places for hot schools:

This is because of the lack of AC in the North:

The days are only getting hotter… it’s time to cool the schools.

The Dodge Caravan, Quality Improvements, and Affordability

1996 was a big year for minivans. While modern minivans had been around for about a decade by that point, 1996 marked a turning point. That year Dodge introduced what is referred to as the “third generation” of its Caravan, and it won Motor Trend’s Car of the Year award. That was the first, and so far only, time a minivan has won this award. If you drive a minivan today or see one on the road, you are seeing the look, style, and features that were first introduced in 1996 (interestingly, that year also seems to have marked the peak in sales for the Chrysler family of minivans).

If you wanted to buy the cheapest possible Dodge Caravan in 1996, you would have paid about $18,500. You could always pay more for more features, as with any car, but if you wanted this “car of the year,” and you wanted it new and cheap, that was what you paid.

Dodge continued to produce the Caravan for the US market until 2020, when it was discontinued in favor of other nameplates (though it still lived on in Canada). In 2020, the base model Caravan was about $29,000 (and by then it was only available in the “Grand” version, which had been an upgrade in 1996).

Oren Cass has used the prices of these two minivans to make a point about price indexes, quality adjustments, and affordability. If you look at the raw prices, the Caravan is clearly more expensive. But the consumer price index tells us that the price of new cars was flat between 1996 and 2020.
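To put rough numbers on that tension (the CPI figures below are approximate annual averages that I am supplying, not from Cass’s piece): the sticker price rose by more than half in nominal terms, and overall consumer prices rose by a similar amount, yet the new-vehicle component of the CPI was roughly flat because it is adjusted for quality improvements.

```python
# Back-of-the-envelope comparison (approximate values; illustrative only).
price_1996, price_2020 = 18_500, 29_000      # base Caravan sticker prices from the post
cpi_all_1996, cpi_all_2020 = 156.9, 258.8    # CPI-U all items, approx. annual averages

nominal_increase = price_2020 / price_1996 - 1               # ~57% more dollars
overall_inflation = cpi_all_2020 / cpi_all_1996 - 1          # ~65% higher prices economy-wide
price_1996_in_2020_dollars = price_1996 * cpi_all_2020 / cpi_all_1996  # ~$30,500

print(f"Nominal sticker increase:    {nominal_increase:.0%}")
print(f"Overall CPI increase:        {overall_inflation:.0%}")
print(f"1996 price in 2020 dollars: ${price_1996_in_2020_dollars:,.0f}")
# Yet the CPI's *new vehicles* component was roughly flat over this period,
# because the BLS strips out the estimated value of quality improvements.
```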

So what gives?

Continue reading

Is Long Covid Really a Thing?

We seem to be somewhat exhausted by all the dire predictions around Covid, now that life has largely gotten back to normal. Shops and theaters are open, and people are once more crowding aboard those floating petri dishes called cruise ships. The most vulnerable segments of the population have mainly been vaccinated, and each new strain of the disease seems less harmful. All the anti-vaxxers I know have had Covid at least once and hence have some level of immunity, or else they caved and got vaccinated after seeing a close friend or relative die back in the winter of 2021-22. One enduring benefit of the Covid era is the much greater availability of work from home.

One of the direst prognostications was that the world would suffer a more or less permanent step down in standards of living due to “long Covid.”  According to this narrative, untold numbers of healthy young or middle-aged people would remain debilitated indefinitely due to the ongoing after-effects of a Covid infection: struck down in their prime, never to rise again.

A recent review of the field in Nature concluded, “The oncoming burden of long COVID faced by patients, health-care providers, governments and economies is so large as to be unfathomable”. Ouch. The federal government has provided $1.15 billion for research into the problem of long COVID and its mitigation.

Just the Facts

A couple of facts stand out: First, in many cases, scans of internal organs have shown changes in victims’ hearts and lungs and brains, following a severe Covid infection. Second, many people have reported symptoms such as weakness, fatigue and general malaise, impaired concentration and breathlessness, weeks after the primary symptoms of the disease have resolved.

How big a problem is this? I cannot, in the scope of a short blog post, adequately canvass all the data and literature. I will just cite a few numbers and charts, and let the professional data analysts dig into the fine points.

One meta-analysis found that a full “41.7% of COVID-19 survivors experienced at least one unresolved symptom and 14.1% were unable to return to work at 2-year after SARS-CoV-2 infection.” [That number seems much higher than my personal observations would suggest]. A CDC survey found that as of July 26-Aug 7, 2023, about 5.8% of all Americans (which is 10.4% of Americans who ever had Covid) report experiencing some effects of long Covid, with 1.5% of all American adults experiencing significant activity limitations as a result of long Covid. These numbers show a modest downward trend with time.
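As a quick aside, those two CDC percentages together imply a rough estimate of how many American adults report ever having had Covid. The back-of-the-envelope division below is mine, not the CDC’s.

```python
# Back-of-the-envelope check on the CDC figures quoted above (my arithmetic, approximate).
long_covid_share_of_all_adults = 0.058      # 5.8% of all American adults
long_covid_share_of_ever_infected = 0.104   # 10.4% of adults who ever had Covid

# If 5.8% of everyone equals 10.4% of the ever-infected, the ever-infected share is about:
implied_ever_infected_share = long_covid_share_of_all_adults / long_covid_share_of_ever_infected
print(f"Implied share of adults who ever had Covid: {implied_ever_infected_share:.0%}")  # ~56%
```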

The chart below depicts the incidence of long Covid in England, again showing a modest downward trend in the latest year:

Weekly estimates of prevalence of COVID-19 and long COVID in England. Source.

Correlation versus Causation

So: many people experience severe symptoms from Covid, but for most of them the symptoms resolve within a few months at most. That leaves a small but nontrivial minority of Covid victims reporting problems long after that window. A significant question is whether Covid itself caused those long-term symptoms, or just precipitated some problem that was bound to show up anyway.

I have read poignant anecdotes of perfectly healthy young people who suffer from brain fog two years later. But I have lived long enough to be wary of generalizing from poignant anecdotes. After all, the whole anti-vaccination movement has been fueled by poignant anecdotes of, say,  perfectly normal two-year-olds going autistic shortly after getting their vaccine shots.

The 2023 metastudy referred to earlier found that long Covid sufferers tended to be older, and had pre-existing medical comorbidities.  Similarly, we have known since 2020 that the cohorts most likely to die from Covid were older folks (such as me!), many of whom were bound to die anyway.  

In this light, the data brought forth by James Bailey in his recent article on this blog, Long Covid is Real in the Claims Data… But so is “Early Covid”?, is most interesting. He noted that on average people use more health care for at least 6 months post-Covid compared to their pre-Covid baseline, which is consistent with some measure of long Covid. However, those same individuals also spent significantly more on healthcare 1-2 months before their Covid diagnosis. This seems consistent with the notion that some of what gets blamed on Covid would have occurred sooner or later anyway.

A Nuanced View of Long Covid

An article in Slate by Jeff Wise has dug deeper into the data. He noted that the survey-based datasets that have been largely used to estimate the effects of long Covid tend to be biased: those who feel ongoing symptoms are more likely to complete the surveys, giving rise to some of the largish numbers I have shared above. Newer, better-controlled retrospective cohort studies tend to show much lower ongoing incidence of symptoms, especially compared to control groups who had not had Covid. The feared tidal wave of mass disabilities never arrived:

“The best available figures, then, suggest two things: first, that a significant number of patients do experience significant and potentially burdensome symptoms for several months after a SARS-CoV-2 infection, most of which resolve in less than a year; and second, that a very small percentage experience symptoms that last longer. ”

Further, “Another insight that emerges from the cohort studies into long COVID is that it is not so easy to prove causality between a particular infection and a symptom. Almost all the symptoms associated with long COVID can also be triggered by all sorts of things, from other viruses to even the basic reality of living through a pandemic.”

Finally:

It looks more as if people who complain of long COVID are suffering from a collection of different effects. “I think there’s quite a heterogeneous group of people all sailing under the one flag,” said Alan Carson, a neuropsychiatrist at the University of Edinburgh in Scotland. Some patients may be experiencing the lingering aftereffects that occur in the wake of many diseases; some patients with chronic comorbidities might be experiencing the onset of new symptoms or the continuation of old ones; others might be affected by the sorts of mood disorders and psychiatric symptoms you’d expect to find in a population undergoing the stress of a global pandemic.

Another Slate article from last month gently debunks alarmism stemming from a Nature Medicine study of U.S. veterans who showed increased susceptibility to disease even two years after contracting Covid.

There is often great difficulty in discerning the actual organic, biochemical basis for the reported symptoms. This makes it hard to come up with a pill or a shot that might adjust the body’s metabolic pathways in order to cure them. Thus, simply treating the symptoms as such may offer the best near-term relief. To that end, a team of French researchers had the audacity to propose that much of the fatigue and brain fog associated with long Covid may be largely in our heads. In an article in the Journal of Psychosomatic Research, “Why the hypothesis of psychological mechanisms in long COVID is worth considering,” Lemogne et al. noted strong links between a patient’s prior expectations of symptom severity and the actual reported outcomes. The intent of the researchers is not to belittle the reported distress of long Covid sufferers, but to point towards established therapeutic methods to help treat disorders with at least a partial psychosomatic basis:

Many potential psychological mechanisms of long COVID are modifiable factors that could thus be targeted by already validated therapeutic interventions. Beside the treatment of a comorbid psychiatric condition, which may be associated with fatigue, cognitive impairment or aberrant activation of the autonomous nervous system, therapeutic interventions may build on those used in the treatment of ‘functional somatic disorders’, defined as the presence of debilitating and persistent symptoms that are not fully explained by damage of the organs they point. These disorders are common after an acute medical event, particularly in women, and include psychological risk factors, such as anxiety, depression, and dysfunctional beliefs that can lead to deleterious, yet modifiable health behaviors. Addressing these factors in the management of long COVID may provide an opportunity for patient empowerment.

In sum: A significant number of those who contract COVID suffer ongoing symptoms for a number of months afterward. Over a billion dollars of research has been directed at the problem. The severity of these symptoms tends to decline with time, in the vast majority of cases resolving by twelve months. This leaves some individuals still suffering fatigue and brain fog over a year later. Studies are ongoing to discern the organic basis of these complaints, and the exact role that COVID may have played, in the light of the fact that complaints of enduring fatigue and brain fog were not uncommon before the pandemic. We hope that following the science will bring more relief here.

Circling back to our original interest in the economic impact of long COVID: early studies indicated that a large fraction of the population might continue to be debilitated, to the point of being unable to work, with significant effects on the workforce and GDP. Actual data (e.g., on disability claims) indicate that these problems have not materialized.

Condescension shouldn’t signal competence

On this, the Day of our Labor in the year 2023, I leave you with a simple observation/plea into the void: don’t confuse condescending overconfidence with competence. I give to you Hans-Hermann Hoppe stating, in public and on camera, his exasperated and deeply aggrieved belief that Paul Krugman doesn’t know what money is or how it works:

Never forget that “con man” is short for “confidence man,” and part of the trick when trying to get other people to believe in you is to first project unerring belief in yourself. Confident communication is effective in the sharing of ideas, but the real trick to all con men is what they’re selling. Or, more specifically, what the customers are buying. Yes, he’s selling the idea that monetary policy and fiat currencies are all part of a giant scam, a fugazi, because lol money isn’t real. But that’s not what his customers are buying.

What they’re buying is the idea that “I, person who holds this belief, am smarter than Paul Krugman.” I imagine that is probably a pretty enjoyable belief to hold. I would certainly enjoy believing that I am smarter than Paul Krugman, but wanting something doesn’t make it true… which, now that I say it out loud, is a nearly sufficient encapsulation of the entire human condition.

To be clear, this isn’t about Krugman’s accomplishments or status in society inviting censure or criticism from all corners. That’s just the eminence tax. It’s actually, in a weird way, about everything he had to do to acquire that eminence, in the form of arduous education, training, and research production over decades. Because what I think the Hoppes of the world are often really selling to their audience is a shortcut. An opportunity to believe that you, the listener, don’t have to go through such trials of human capital acquisition. You just have to hang out with me, parrot a few trivializations of economic thought, cope with a little light reading, and boom, you’re smarter than a Nobel laureate with 275k citations. All available to you for the low low price of believing Paul Krugman doesn’t know how money works.

Don’t wait, supplies are limited.