
Are Americans Thriving Under Trump? No, According to the Cost of Thriving Index
The Cost of Thriving Index (COTI) from Oren Cass’s American Compass is an attempt to calculate how well US families are doing financially, but without using traditional inflation adjustments to income. Instead, Cass and crew have chosen five categories of goods and services and tracked the cost of that basket over time relative to median earnings for men ages 25 and older (in the baseline model; the index can also be applied to different categories of workers).
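To make the mechanics concrete, here is a minimal sketch of how an index in this family gets computed: price an annual basket across the five categories (food, housing, health care, transportation, and education), then divide by median weekly earnings to get the number of weeks of work needed to afford the basket. Every dollar figure below is a placeholder I made up for illustration, not American Compass’s data.

```python
# Minimal sketch of a COTI-style calculation: weeks of median earnings
# needed to cover an annual basket of major expenses.
# All dollar amounts are hypothetical placeholders, not American Compass data.

basket = {
    "food": 11_000,            # annual cost, made up for illustration
    "housing": 18_000,
    "health_care": 16_000,
    "transportation": 10_000,
    "education": 3_000,
}

median_weekly_earnings = 1_300  # hypothetical weekly wage, men ages 25+

weeks_to_afford = sum(basket.values()) / median_weekly_earnings
print(f"Weeks of work needed to afford the basket: {weeks_to_afford:.1f}")
# With these placeholder numbers: 58,000 / 1,300 ≈ 44.6 weeks
```

Since a year has 52 weeks, a reading above 52 means the basket costs more than a full year of median earnings; tracking that ratio over time, instead of deflating income by a price index, is the heart of Cass’s approach.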
Scott Winship and I wrote a detailed critique of the COTI, which I summarized in a previous blog post. Our critique comes from several angles, including correcting major errors in the COTI as well as arguing that standard inflation adjustments to median income are superior to this new approach.
Based on our critique, I don’t think the COTI is a very good measure of how well US families are doing financially. But the index still has many fans. And Cass seems to think that many of Trump’s policies, such as his tariffs, should help US workers and families. Thus, it will be useful to see whether Trump’s policies are leading American workers to “thrive” in the first year of his presidency.
Unfortunately, even using Cass’s preferred approach, Americans don’t appear to be thriving under Trump.

Our glorious future is tech troubleshooting in space
Having enjoyed the quotes from our brave astronauts about software troubles, I wrote for EconLog:
Tech Troubleshooting in Space (EconLog)
Click through to learn the story of the email quote and why it went viral. With all due respect to Christina Koch, I think I’m the first woman in history to paraphrase The Notorious B.I.G. at EconLog.
Are we complaining? Tech has made our lives better. With only a few exceptions, everyone in the country chooses to have TVs and smartphones.
Digital tools like email save me time over what I can only imagine used to be sending paper memos or something. Did people have owls or pigeons or what? But some of that saved time goes to fighting the new problems created by evil people in cyberspace. Someone (Tyler?) points out that the “better angels of our nature” argument doesn’t look quite as rosy if you consider all the digital criminality.
I do not know whom to credit for this banger: “Man is born free and everywhere he has to 2-factor authenticate.”
I had to do my annual mandatory employee Cyber Security training session this week. I don’t get paid extra to do this. It’s just work on top of my job. It’s estimated to take 40 minutes to complete. (I powered through in under 15 minutes.) We are obviously living in the future with iPads that translate foreign languages for refugee kids in real time and all, but it would feel more glorious if I could stop these phishing trainings.
If quantum/AI means the end of privacy and of cheap tech connectivity, then what will that mean for productivity? To send a secure message to someone, we might need to go back to owl post. Get ready for mandatory annual owl training.
A Canticle for Aadam Jacobs
For all the talk of the future of generating art, let’s not forget the task of remembering the art we’ve already made. Behold: more than 10,000 cassette-recorded concerts, from as far back as 1984, recorded in community centers, church basements, taverns, all-ages clubs, and hundreds of other unsung venues whose owners let then (and often forever) unknown bands play shows for a couple dozen attendees, all in the hopes that door money and beverage sales might keep the owner out of the red on a random weeknight.
I have a couple bootlegs from concerts I attended, but it never occurred to me that I might get to listen to a 1995 Blonde Redhead show at The Empty Bottle or The Blow Pops playing a 1991 show at a Milwaukee spot I’ve never heard of. These shows have always had an ephemeral quality to them, existing far more in the stories of those who claimed to be there that night than in any direct artistic footprint.
But maybe not. Maybe the internet can and does, in fact, remember. Because while there is a lot to be absorbed from the finished product, there is often so much more to learn from the imperfect and unpolished early stages. A band before they slowed down or ventured beyond their first 3 chords, a writer still stuck in the first person, a dissertation chapter still haunted by the writing of the insecure graduate student we all were. The awkward phases when an artist (or artists) are still finding their voice. Perhaps, more than ever, we need to remember the importance of not skipping over the embarrassing, exhausting, and, yes, often futile work at the beginning and middle. There are more shortcuts than ever to making a thing, but no shortcut to becoming the version of yourself that can make the thing that only you can make.
Education is a core US export
While there is no shortage of examples of willful ignorance and outright lying in politics, the idea that blocking foreign students from attending US universities is anything other than disastrous to US students is positively enraging. The real curiosity here is whether the value of a US degree has yet dipped below the full tuition price tags that foreign students almost always pay. Beyond the billions in tuition received and the tuition subsidies indirectly consumed, I couldn’t even begin to put a price on the cultural power accrued from being the global center for higher education for the last century. This administration’s capacity to find new and innovative ways to tear down US institutions is unrivaled and beyond even the grandest dreams of our most optimistic enemies.

The actual AI problem in academic economics
There is a steady flow of takes on the impact of AI on academic economics research, whether it’s the example of someone writing an ostensibly legitimate, if somewhat trite, research paper with only a few hours’ effort, or the implication that there is already no need to continue writing papers because the AIs are already better at it. Oh, what shall all the candlemakers do now that the sun has risen?
I think the idea that AI has already rendered the research paper an obsolete endeavor is very wrong, almost to the point of negligence. It both vastly underestimates the quality of the median contribution in the 80 to 100 or so best journals and vastly overestimates the reliability of current AI attempts at research on the margin. Putting such concerns aside for the moment, it’s still worth pondering how we can extrapolate from current AI as a tool for status quo research to forecast whether it might reshape labor as an input 5 or 10 years from now. That’s far enough away that it borders on futurism and, more importantly, the kind of forecasting that I shy away from. Feel free to tell me in the comments where we are headed.
At this moment, however, we are already in the middle of a far more subtle disruption in academic research that I haven’t seen anyone write about yet: the quiet, but pronounced, uptake of AI tools in the writing of referee reports for academic journals. If you’ve submitted papers for review in the last 18 months, dollars to donuts you’ve received a referee report that is lengthy and well-organized, with an unusual number of bullet points and headers discussing your paper, summarizing its contributions, and offering suggestions that on their face seem reasonable but that anyone familiar with the structure of the data and the relevant literature will, upon a moment’s reflection, recognize as entirely vapid.
There is something uniquely frustrating about working on a research project for 3 to 5 years only to have judgment passed down on the basis, at least in part, of a review written by ChatGPT that is not just wrong but, well, kind of stupid. I’ve already personally had a paper refereed via ChatGPT and rejected, and then, after the paper was internalized into ChatGPT’s text base, seen it reconfigured into a citation hallucination in other papers that, to maximize the comedy, replaced 3 of the 4 authors (including me) with other (nicer? better looking?) economists. What’s most frustrating, however, is that this is hitting economics journals that do not seem to have any plan in place to deal with it. Not to suggest this is an easy problem to solve (not remotely), but it certainly should not be coming as a surprise to anyone. Let’s look at the facts:
- Academic economists are almost universally overcommitted.
- Journal referees are, for the most part, unpaid for their time.
- As the number of quality articles produced and submitted to journals has increased, so has the strain on the entire editorial process, including review writing.
- The only thing holding it together at all has been reputational incentives (i.e. nobody wants a bad reputation with the editors who will be considering their future work) and a disciplinary sense of “civic duty”. Reputation is, of course, the load-bearing mechanism here.
- A technology was introduced that, at the very least, pantomimes the review process well enough to produce a low-quality facsimile of a review. With a few sentences tossed in at the beginning and a short separate letter written directly to the editor, a task that used to take 0.5 to 1.5 workdays can now be crossed off your to-do list in less than an hour.
Is it really that hard to see what’s coming? Of course academic economists are going to be tempted to ask ChatGPT to write a review for them. There are almost no direct rewards for writing good reviews, while the costs are significant. Evaluating a genuinely new and distinct piece of research is hard work and takes significant time.
Now, how this is playing out across the body of journals is an open question. Here’s my best educated guess:
At the top journals, reputational concerns are the strongest, but so is the opportunity cost of everyone’s time and the competition for limited article space. Referees might not have the courage to outsource the actual decision to ChatGPT, but they’ll be awfully tempted to offload as much of the grunt work as they can. If I were an editor at a top 10-15 journal, I would expect a growing number of reports from referees who read the paper quickly (<15 minutes), then made a decision to recommend acceptance or rejection based on 1) whether they knew any of the authors, 2) whether the content is a complement or a substitute for their own research, 3) whether they had seen the paper presented in person and whether it was well-received, 4) the general bundle of status associated with the authors and the subject, and 5) whether they liked the paper (you can, in fact, have a strong opinion on a paper you’ve looked at for 15 minutes; we’re all guilty of it). Having arrived at their positive or negative assessment, they then outsource the actual first draft of the review to ChatGPT, with the instruction to write a positive or negative review. Now, given the strong reputational considerations that any credible reviewer at a top journal should have, I expect there to then be significant rewriting of the review, including the addition of the reviewer’s preferred economist gripes about identification, whether the results generalize, etc., giving an otherwise generic report some more bespoke vibes. This isn’t the real recommendation anyway; that’s the letter to the editor that goes unseen by the authors. I don’t think most referees will have the brass to outsource that.
That’s probably not great, especially for young authors trying to break into a field. But honestly, none of those problems are new. If anything, it takes a very old problem (i.e. an overcommitted economist at a top school asks his or her student to write a referee report rejecting an article for them) and just tweaks it slightly (i.e. an overcommitted economist at a top school asks ChatGPT to write a referee report rejecting an article for them, freeing up a PhD student to get back to work cleaning and analyzing their data for them). Not optimal, but hey, what is?
The real problem, I am sad to say, is the next tier down. The field journals. The second-tier general journals. The oddball and heterodox journals. The journals that used to struggle to get enough good submissions and now struggle to find anyone to referee for them. What used to be a trickle is now a deluge of higher-quality research. That deluge, however, comes from authors who also constitute a referee pool that is far busier than before and without the resources that come with appointments at top institutions.
I promise you, from experience, keeping a significant research agenda going during my salad days teaching a 3-3 load was not easy. What happens when the 71st-ranked journal that you might submit an article to one day sends a seemingly acceptable, if mediocre and slightly banal, article to review? Are you really going to give it a precious work day? Or are you going to give it a once-over, ask ChatGPT to review it, and then give a recommendation based on a 5-minute skim? I want to believe that I would never associate my professional reputation with a half-assed review, but that’s easier to say on this side of the R1 tenure fence.
Now’s the part where I smugly tell you the obvious solution and call it a night. As is often the case, however, I don’t have one. Not one that anyone is going to like, at least. Because the only solution I have is precisely the suggestion that got Jerry Maguire fired: we could simply write and publish fewer papers. If we write fewer papers, we can review fewer papers. If we review fewer papers, we can pay people to review them. If we can pay people to review them, we can hold them to higher quality standards. Editors can review the reviews. Every now and then someone suggests we get rid of anonymous reviewers, but I worry that anonymity is load bearing when it comes to the quality standards that are in many ways the hallmark of modern economics. I don’t think we can give up on quality. Quality is our comparative advantage. So maybe it’s time we let go of quantity. If your dean says you’ve written some good and important articles but there aren’t enough lines on your vitae, then what they’re really saying is that they don’t want research faculty, they want AI middlemen.
Don’t be an AI middleman.
The theory of the firm remains unfinished
Why do firms exist? Transaction costs. Specialization. Returns to scale. Risk pooling. Reputation. Institutional capital. Is that everything? Probably not.
It wasn’t that long ago we were talking about the prevalence of zero marginal productivity employees within firms. Perhaps we should add low (zero?) marginal productivity employers to our list of considerations.

Graduate students, rejoice: there remains more work to be done.
Learn to Ode 2026

Joke: https://x.com/TheLincoln/status/2027215235103207693
Writing about the Citrini Research report on February 28 feels like being 6 years behind (it was only 6 days ago).
“THE 2028 GLOBAL INTELLIGENCE CRISIS: A Thought Exercise in Financial History, from the Future”
Two things the white-collar chattering class fears are that their jobs will disappear and that their stock portfolios will crash. The Citrini note put that feared scenario in a picture frame so we could stare at it, like Annie Jacobsen’s book on nuclear war. The post imagines a 2028 scenario: AI automates white-collar work, companies collapse, private credit blows up, mortgages default, and unemployment hits 10%.
Brian Albrecht responded: “We don’t need to just make up fantasy stories: Using economics to read Citrini Research’s AI”
Tyler encouraged us to consider a response put out by Citadel, “The 2026 Global Intelligence Crisis”
Even cognitive automation faces coordination frictions, liability constraints, and trust barriers. It seems more likely that AI will be a complement rather than a substitute for labor in many areas.
One barrier to AI taking all the white-collar jobs as quickly as 2028 is just physical scaling constraints.
Having done research on “learn to code” (Buchanan 2022), I always watch new developments with interest. In 2023, I told an auditorium full of students in Indiana to learn to code if they didn’t hate the work too much. At the time, I forecast that AI tools would make coding less miserable but not eliminate the need for technical human workers. Even if that was good advice at the time, is it still good advice today? I wish I had time to put up a blog post on this topic every week.
Adjustments can happen along the margin of price as well as quantity. Wages for programmers can come down from their previously exalted heights, which could help the market absorb some of the young professionals who listened to “learn to code” in 2023.
So, now that the value of coding skills is in question, people are turning back to the value of the maligned English degree. It has been true for a long time that employers felt soft skills were scarcer than STEM skills. I might add that an economics degree conveys a highly marketable blend of hard and soft skills.
Buchanan, Joy (2022). “Willingness to be paid: Who trains for tech jobs?” Labour Economics, 79, Article 102267.
Supply and demand has a mind of its own
I think there’s a lot of crosstalk about AI in part because proponents tend to focus on the imminent supply-side shifts from innovation, while critics seem to happily observe failures to stoke consumer demand. Not being much of a futurist, I’m largely content to watch and wait with minimal speculation. At the same time, I see signs of increasing demand for other products, in blatant disregard for past and present identity politics. It’s probably good to remember that supply and demand are less a beast to be wrangled than a rocking ocean to be adapted to.
Learning the Bitter Lesson at EconLog
I’m at EconLog with:
“Learning the Bitter Lesson in 2026”
At the link, I speculate on doom, hardware, human jobs, the jagged edge (via a Joshua Gans working paper), and the Manhattan Project. The fun thing about being 6 years late to a seminal paper is that you can consider how its predictions are doing.
Sutton draws from decades of AI history to argue that researchers have learned a “bitter” truth. Researchers repeatedly assume that computers will make the next advance in intelligence by relying on specialized human expertise. Recent history shows that methods that scale with computation outperform those reliant on human expertise. For example, in computer chess, brute-force search on specialized hardware triumphed over knowledge-based approaches. Sutton warns that researchers resist learning this lesson because building in knowledge feels satisfying, but true breakthroughs come from computation’s relentless scaling.
The article has been up for a week and some intelligent comments have already come in. Folks are pointing out that I might be underrating the models’ ability to improve themselves going forward.
“Second, with the frontier AI labs driving toward automating AI research the direct human involvement in developing such algorithms/architectures may be much less than it seems that you’re positing.”
If that commenter is correct, there will be less need for humans than I said.
Also, Jim Caton over on LinkedIn (James, are we all there now?) pointed out that more efficient models might not need more hardware. If the AIs figure out ways to make themselves more efficient, then is “scaling” even going to be the right word anymore for improvement? The fun thing about writing about AI is that you will probably be wrong within weeks.
Between the time I proposed this to EconLog and publication, Ilya Sutskever suggested on Dwarkesh that “We’re moving from the age of scaling to the age of research”.