Education is a core US export

While there is no shortage of examples of willful ignorance and outright lying in politics, the idea that blocking foreign students from attending US universities is anything other than disastrous to US students is positively enraging. The real curiosity here is whether the value of a US degree has yet dipped below the full tuition price tags that foreign students almost always pay. Beyond the billions in tuition received and tuition subsidies indirectly consumed, I couldn’t even begin to put a price on the cultural power accrued from being the global center for higher education for the last century. This administration’s capacity to find new and innovative ways to tear down US institutions is unrivaled and beyond even the grandest dreams of our most optimistic enemies.

The actual AI problem in academic economics

There is a steady flow of takes on the impact of AI on academic economics research, whether it’s the example of someone writing an ostensibly legitimate, if somewhat trite, research paper with only a few hours’ effort, or the implication that there is already no need to continue writing papers because the AIs are already better at it. Oh, what shall all the candlemakers do now that the sun has risen?

I think the idea that AI has already rendered the research paper an obsolete endeavor is very wrong, almost to the point of negligence. It both vastly underestimates the quality of the median contribution provided in the 80 to 100 or so best journals and vastly overestimates the reliability of current AI attempts at research on the margin. Putting such concerns aside for the moment, it’s still worth pondering how we can extrapolate from current AI as a tool for status quo research to forecast if it might reshape labor as an input 5 or 10 years from now. That’s far enough away that it borders on futurism and, more importantly, the kind of forecasting that I shy away from. Feel free to tell me in the comments where we are headed.

At this moment, however, we already are in the middle of a far more subtle disruption in academic research that I haven’t seen anyone write about yet: the quiet, but pronounced, uptake of AI tools in the writing of referee reports for academic journals. If you’ve submitted papers for review in the last 18 months, dollars to donuts you’ve received a referee report that was lengthy and well-organized, with an unusual number of bullet points and headers discussing your paper, summarizing its contributions, and offering suggestions that on their face seem reasonable but, upon a moment’s reflection, are quickly recognized as entirely vapid by anyone familiar with the structure of the data and the relevant literature.

There is something uniquely frustrating about working on a research project for 3 to 5 years only to have judgment passed down on the basis, at least in part, of a review written by ChatGPT that is not just wrong but, well, kind of stupid. I’ve already personally had to deal with having a paper refereed via ChatGPT, rejected, and then, after the paper was absorbed into ChatGPT’s text base, seeing it reconfigured into a hallucinated citation in other papers that, to maximize comedy, replaced 3 (including me) of the 4 authors with other (nicer? better looking?) economists. What’s most frustrating, however, is that this is hitting economics journals that do not seem to have any plan in place to deal with it. Not to suggest this is an easy problem to solve (not remotely), but it certainly should not be coming as a surprise to anyone. Let’s look at the facts:

  1. Academic economists are almost universally overcommitted.
  2. Journal referees are, for the most part, unpaid for their time.
  3. As the number of quality articles produced and submitted to journals has increased, so has the strain on the entire editorial process, including review writing.
  4. The only thing holding it together at all has been reputational incentives (i.e. nobody wants a bad reputation with the editors that are going to consider your future work) and a disciplinary sense of “civic duty”. Reputation is, of course, the load bearing mechanism here.
  5. A technology was introduced that, at the very least, pantomimes the review process well enough to produce a low quality facsimile of a review. With a few sentences tossed in at the beginning and a short separate letter written directly to the editor by the reviewer, a task that used to take 0.5 to 1.5 work days can now be crossed off your to-do list in less than an hour.

Is it really that hard to see what’s coming? Of course academic economists are going to be tempted to ask ChatGPT to write a review for them. There are almost no direct rewards for writing good reviews, while the costs are significant. Evaluating a genuinely new and distinct piece of research that has never been done before is hard work and takes significant time.

Now, how this is playing out across the body of journals is an open question. Here’s my best educated guess:

At the top journals, reputational concerns are the strongest, but so is the opportunity cost of everyone’s time and the competition for limited article space. Referees might not have the courage to outsource the actual decision to ChatGPT, but they’ll be awfully tempted to offload as much of the grunt work as they can. If I were an editor at a top 10-15 journal, I would expect a growing number of reports from referees who read the paper quickly (<15 minutes), then made a decision to recommend acceptance or rejection based on 1) whether they knew any of the authors, 2) whether the content is a complement or a substitute for their own research, 3) whether they had seen the paper presented in person and it was well-received, 4) the general bundle of status associated with the authors and the subject, and 5) whether they liked the paper (you can, in fact, have a strong opinion on a paper you’ve looked at for 15 minutes. We’re all guilty of it). Having arrived at their positive or negative assessment, they then outsource the actual first draft of the review to ChatGPT, with the instruction to write a positive or negative review. Now, given the strong reputational considerations that any credible reviewer at a top journal should have, I expect there to then be significant rewriting of the review, including the addition of the reviewer’s preferred economist gripes about identification, whether the results generalize, etc., giving an otherwise generic report some more bespoke vibes. This isn’t the real recommendation anyway; that’s the letter to the editor that goes unseen by the research authors. I don’t think most referees will have the brass to outsource that.

That’s probably not great, especially for young authors trying to break into a field. But honestly, none of those problems are new. If anything it takes a very old problem (i.e. overcommitted economist at top school asks his or her student to write a referee report rejecting an article for them) and just tweaks it slightly (i.e. overcommitted economist at top school asks ChatGPT to write a referee report rejecting an article for them, freeing up a PhD student to get back to work cleaning and analyzing their data for them). Not optimal, but hey, what is?

The real problem, I am sad to say, is the next tier down. The field journals. The second-tier general journals. The oddball and heterodox journals. The journals that used to struggle to get enough good submissions and now struggle to find anyone to referee for them. What used to be a trickle is now a deluge of higher quality research. That deluge, however, comes from authors who also constitute a referee pool that is far busier than before and without the same resources that come with appointment at top institutions.

I promise you, from experience, keeping a significant research agenda going during my salad days teaching a 3-3 load was not easy. What happens when the 71st ranked journal that you might submit an article to one day sends you a seemingly acceptable, if mediocre and slightly banal, article to review? Are you really going to give it a precious work day? Or are you going to give it a once-over, ask ChatGPT to review it, and then give a recommendation based on a 5 minute skim? I want to believe that I would never associate my professional reputation with a half-assed review, but that’s easier to say on this side of the R1 tenure fence.

Now’s the part where I smugly tell you the obvious solution and call it a night. As is often the case, however, I don’t have one. Not one that anyone is going to like, at least. Because the only solution I have is precisely the suggestion that got Jerry Maguire fired. We could simply publish and write fewer papers. If we write fewer papers, we can review fewer papers. If we review fewer papers, we can pay people to review them. If we can pay people to review them, we can hold them to higher quality standards. Editors can review the reviews. Every now and then someone suggests we get rid of anonymous reviewers, but I worry that anonymity is load bearing when it comes to the quality standards that are in many ways the hallmark of modern economics. I don’t think we can give up on quality. Quality is our comparative advantage. So maybe it’s time we let go of quantity. If your dean says you’ve written some good and important articles, but there aren’t enough lines on your vitae, then what they’re really saying is that they don’t want research faculty, they want AI middlemen.

Don’t be an AI middleman.

The theory of the firm remains unfinished

Why do firms exist? Transaction costs. Specialization. Returns to scale. Risk pooling. Reputation. Institutional capital. Is that everything? Probably not.

It wasn’t that long ago we were talking about the prevalence of zero marginal productivity employees within firms. Perhaps we should add low (zero?) marginal productivity employers to our list of considerations.

Graduate students rejoice, there remains more work to be done.

Learn to Ode 2026

Joke: https://x.com/TheLincoln/status/2027215235103207693

Writing about the Citrini Research report on February 28 feels like being 6 years behind (it was only 6 days ago).

“THE 2028 GLOBAL INTELLIGENCE CRISIS: A Thought Exercise in Financial History, from the Future”

Two things the white-collar chattering class fears are that their jobs will disappear and that their stock portfolios will crash. The Citrini note put that feared scenario in a picture frame so we could stare at it, like Annie Jacobsen’s book on nuclear war. The post imagines a 2028 scenario: AI automates white-collar work, companies collapse, private credit blows up, mortgages default, unemployment hits 10%.

Brian Albrecht responded: “We don’t need to just make up fantasy stories: Using economics to read Citrini Research’s AI”

Tyler encouraged us to consider a response put out by Citadel: “The 2026 Global Intelligence Crisis”

Even cognitive automation faces coordination frictions, liability constraints, and trust barriers. It seems more likely that AI will be a complement rather than a substitute for labor in many areas.

One barrier to AI taking all the white-collar jobs as quickly as 2028 is just physical scaling constraints.

Having done research on “learn to code” (Buchanan 2022), I always watch new developments with interest. In 2023, I told an auditorium full of students in Indiana to learn to code if they don’t hate the work too much. At that time, I forecast that AI tools would make coding less miserable but not eliminate the need for technical human workers. Even if that was good advice at the time, is it still good advice today? I wish I had time to put up a blog post on this topic every week.

Adjustments can happen along the margin of price as well as quantity. Wages to programmers can come down from their previously exalted heights, which could help the market absorb some of the young professionals who listened to “learn to code” in 2023.

So, now that the value of coding skills is in question, people are turning back to the value of the maligned English degree. It has been true for a long time that employers felt soft skills were scarcer than STEM skills. I might add that an economics degree conveys a highly marketable blend of hard and soft skills.

Buchanan, Joy (2022). “Willingness to be paid: Who trains for tech jobs?” Labour Economics, 79, Article 102267.

Supply and demand has a mind of its own

I think there’s a lot of crosstalk about AI in part because proponents tend to focus on the imminent supply side shifts from innovation, while critics seem to happily observe failures to stoke consumer demand. Not being much of a futurist, I’m largely content to watch and wait with minimal speculation. At the same time, I see signs of increasing demand for other products, in blatant disregard for past and present identity politics. It’s probably good to remember that supply and demand are less a beast to be wrangled than a rocking ocean to be adapted to.

Learning the Bitter Lesson at EconLog

I’m in EconLog with:

Learning the Bitter Lesson in 2026

At the link, I speculate on doom, hardware, human jobs, the jagged edge (via a Joshua Gans working paper), and the Manhattan Project. The fun thing about being 6 years late to a seminal paper is that you can consider how its predictions are doing.

Sutton draws from decades of AI history to argue that researchers have learned a “bitter” truth. Researchers repeatedly assume that computers will make the next advance in intelligence by relying on specialized human expertise. Recent history shows that methods that scale with computation outperform those reliant on human expertise. For example, in computer chess, brute-force search on specialized hardware triumphed over knowledge-based approaches. Sutton warns that researchers resist learning this lesson because building in knowledge feels satisfying, but true breakthroughs come from computation’s relentless scaling. 

The article has been up for a week and some intelligent comments have already come in. Folks are pointing out that I might be underrating the models’ ability to improve themselves going forward.

Second, with the frontier AI labs driving toward automating AI research the direct human involvement in developing such algorithms/architectures may be much less than it seems that you’re positing.

If that commenter is correct, there will be less need for humans than I said.

Also, Jim Caton over on LinkedIn (James, are we all there now?) pointed out that more efficient models might not need more hardware. If the AIs figure out ways to make themselves more efficient, then is “scaling” even going to be the right word anymore for improvement? The fun thing about writing about AI is that you will probably be wrong within weeks.

Between the time I proposed this to Econlog and publication, Ilya Sutskever suggested on Dwarkesh that “We’re moving from the age of scaling to the age of research”.

Bad ideas are costly

I know this has gotten coverage at other econ blogs, but I’ve been thinking about this paper for a couple days now.

Combine this with the classic Besley and Burgess paper on the political economy of government responsiveness to natural disasters, and you have a perfect Venn diagram of how bad ideas and bad political incentive alignment can lead to truly awful outcomes. An unfortunately “evergreen” mechanism in political economy.

Markets adjust: Superbowl quarterback edition

Yesterday’s Super Bowl was fun for a variety of reasons, but your 147th favorite economist was especially happy to see that markets continue to keep things interesting. The NFL was an “only teams with elite quarterbacks can win” league…until it wasn’t. After two decades of Super Bowls won by Brady, Manning, Brees, and Mahomes, we have back-to-back years of decidedly average quarterbacks winning (within-NFL average, to be clear. These are all objectively incredible athletes). How did this happen? Is it tactical evolution, flattening talent pools, institutional constraints, or markets updating? The answer is, of course, all of the above, but updating markets is the mechanistic straw that stirs the drink.

The NFL is salary capped, which means each team can only spend so much money on total player salaries. As teams placed greater and greater value on quarterbacks, a larger share of their salary pool was dedicated accordingly. These markets are effectively auctions, which means eventually the winner’s curse kicks in, with the winner of the player auction being whoever overvalues the player the most. Iterate for enough seasons, and you eventually arrive at a point where the very best quarterbacks are cursed with their own contracts, condemned to work with ever-decreasing quality teammates. Combine that with a little market and tactical awareness, and smart teams will start building their rosters and tactics around the players and positions that the market undervalues. And that (combined with rookie salary constraints) is how you arrive at a Super Bowl with the 18th and 28th salary-ranked quarterbacks.
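The winner’s curse mechanism is easy to see in a toy simulation (every number here is invented for illustration, not drawn from NFL data): each team forms a noisy estimate of a quarterback’s common true value, the contract goes to the highest estimate, and so the signing team systematically overpays.

```python
import random

def average_overpayment(true_value=100.0, n_teams=8, noise_sd=15.0,
                        n_auctions=10_000, seed=1):
    """Common-value auction sketch: each team bids its noisy estimate of
    the player's true value; the 'winner' is whoever overestimates most."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_auctions):
        estimates = [rng.gauss(true_value, noise_sd) for _ in range(n_teams)]
        total += max(estimates) - true_value  # the winner's overpayment
    return total / n_auctions

# With 8 naive bidders, the winning bid lands well above true value on average.
print(average_overpayment())
```

More bidders, or noisier valuations, widen the gap between the winning bid and true value, which is one way to see why a league-wide consensus that quarterback is *the* position makes the overshoot worse.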

Whenever a market identifies an undervalued asset (i.e. quarterbacks 25 years ago) there will, over time, be an update. Within that market updating, however, is a collective learning-as-imitation that eventually results in some amount of overshooting via the winner’s curse. This overshoot, of course, may only last seconds, as market pressure pushes towards equilibrium. In markets like long-term sports contracts or 12-year-aged whiskey, that overshoot can be considerable, as mistakes are calcified by contracts and high fixed cost capital.

What does this predict? In a market like NFL labor, I’d expect a cycle over time in the distribution of salaries, iterating between skewed, top-heavy “star” rosters and depth-oriented, evenly distributed rosters. At some point a high value position or subset of stars is identified and disproportionately committed to, but the success of those rosters eventually leads to overcommitment, so much so that the advantage tilts towards teams that spread their resources wider across a larger number of undervalued players, and away from teams whose fixed pie of resources is overcommitted to a small number of stars. That’s how you get the 2025 Eagles and 2026 Seahawks as Super Bowl champions.

I wonder when it will cycle back and what the currently undervalued position will be?

IP Paper on Econlog

My research on intellectual property is featured at

Everyone Take Copies (Econlog)

The title of this post, “everyone take copies,” comes from a conversation between the human subjects in an experiment in our lab, on which the paper is based. The experiment was studying how and when people take resources from one another.

Here’s a tip that doesn’t require any piracy. For those of you who are tired of the subscription economy fees, I think it’s safe to say in 2026 that anyone in the United States can find a local thrift store or annual rummage sale with oodles of nearly-free media. DVDs for a dollar. Used books for a dollar. Basically you are paying the transaction costs – the media itself is free. (I typed that dash myself, not AI!)

“Buying” a movie to stream on Amazon Prime can run over $20. Buying a used DVD is usually less than $10.

Something like the above observation probably led to this parody news headline: Awesome New Streaming Service Records Movie Streams Onto Cool Shiny Discs And You Can Buy Them And Own Them Forever

Here’s a response from the prompt “Make a picture of my office with AOL CD-ROMs decorating the wall.”

Unweighted Bayesians get Eaten By Wolves

A village charges a boy with watching the flock and raising the alarm if wolves show up. The boy decides to have a little fun and shout out false alarms, much to the chagrin of the villagers. Then an actual wolf shows up, the boy shouts his warning, but the villagers are proper Bayesians who, having learned from their mistakes, ignore the boy. The wolves have a field day, eating the flock, the boy, and his entire village.

I may have augmented Aesop’s classic fable with that last bit.

The boy is certainly a crushing failure at his job, but here’s the thing: the village is equally foolish, if not more so. The boy revealed his type, he’s bad at his job, but the village failed to react accordingly. They updated their beliefs but not their institutions. “We were good Bayesians” will look great on their tombstones.
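The villagers’ updating can be made concrete with Bayes’ rule (the probabilities below are invented for illustration): an alarm from a reliable watcher is highly informative, while the lying boy’s alarm barely moves the posterior above the base rate of wolf attacks.

```python
def posterior_wolf(p_wolf, p_alarm_given_wolf, p_alarm_given_no_wolf):
    """Bayes' rule: probability a wolf is present, given that the boy alarms."""
    num = p_alarm_given_wolf * p_wolf
    den = num + p_alarm_given_no_wolf * (1 - p_wolf)
    return num / den

# Base rate: wolves show up on 1% of days.
honest = posterior_wolf(0.01, 0.95, 0.01)  # rarely-false alarm: ~0.49
liar   = posterior_wolf(0.01, 0.95, 0.90)  # constant crying wolf: ~0.01
```

Note that even the liar’s alarm leaves a small but nonzero posterior, which is the crack in the villagers’ reasoning that the options below turn on.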

They had three options.

A) Update their belief about the boy and ignore him.

This is what they did and look where that got them. Nine out of ten wolves agree that Good Bayesians are nutritious and delicious.

B) Update their beliefs about the boy, but continue to check on the flock when the boy raises the alarm.

They should have weighted their responses. Much like Pascal taking religion seriously because eternal torment was such a big punishment, you have to weigh your expected probability that the alarm is true against the scale of the downside if it is. You can’t risk being wrong when it comes to existential threats.

C) Update their beliefs about the boy and immediately replace him with someone more reliable.

It’s all well and good to be right about the boy being a lying jerk, but that doesn’t fix your problem. You need to replace him with someone who can reliably do the job.
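The weighting in option B is just an expected-loss comparison (the numbers here are made up for illustration): check the flock whenever the believed probability of a real wolf, times the loss from ignoring one, exceeds the cost of walking out to look.

```python
def should_check(p_true, loss_if_ignored, cost_of_checking):
    """Respond to an alarm when the expected loss from ignoring it
    exceeds the certain cost of going to check."""
    return p_true * loss_if_ignored > cost_of_checking

# Even if the boy's false alarms drive the believed probability down to 5%,
# a flock-destroying wolf (loss 1000) still justifies a cheap check (cost 10):
print(should_check(0.05, 1000, 10))  # 0.05 * 1000 = 50 > 10, so True
```

The asymmetry is the whole point: when the downside is catastrophic, even a heavily discounted alarm clears the bar for a cheap response.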

So this is a post about fascism. Some think that fascism is already here, others dismiss this as alarmism, and others split the difference, claiming that we are in some state of semi- or quasi-fascism. Within the claims that it is all alarmism, what I hear are the echoes of villagers annoyed by 50 years of claims that conservative politics were riddled with fascism, that Republicans were fascists, that everything they didn’t like was neoliberalism, fascism, or neoliberal fascism. Get called a wolf enough times and you might stop believing that wolves even exist.

Even if I am sympathetic, that doesn’t get you off the hook. “It hasn’t been fascism for 50 years” will look pretty on your tombstone.

Let’s return to our options

  • A) Don’t believe the people who have been shouting about fascism for years, but take seriously new voices raising the alarm.
  • B) Find a set of people who, exogenous to current events, you would and do trust and take their warnings seriously.
  • C) Don’t believe anyone who shouts fascism, because shouting fascism is itself evidence they are non-serious people.
  • D) Start monitoring the world yourself

Both A) and B) are sensible choices! If you’ve Bayesian updated yourself into not trusting claims of fascism from wide swaths of the commentariat, political leaders, and broader public, that’s fine, but you’ve got to find someone you trust. And if that leads you to a null set, then D) you’re going to have to do it yourself. Good luck with that. It takes a lot of time, expertise, and discipline not to end up the fascism-equivalent of an anti-vaxxer who “did their own research.”

Because let me tell you, C) is the route to perdition in all things Bayesian. Once your beliefs are mired in a recursive loop of confirmation bias, it’s all downhill. Every day will be just a little dumber than the one before. And that’s the real Orwellian curse of fascism.