Broad Slump in Tech and Other Stocks: Fear Over AI Disruption Replaces AI Euphoria

Tech stocks (e.g., QQQ) roared up and up and up for most of 2023-2025, more than doubling in those three years. A big driving narrative was that AI was going to make everything amazing – productivity (and presumably profits) would soar, and robust investments in computing capacity (chips and buildings) and electric power infrastructure buildout would goose the whole economy.

Will the Enormous AI Capex Spending Really Pay Off?

But in the past few months, a different narrative seems to have taken hold. Now the buzz is “the dark side of AI”. First, there is growing angst among investors over how much money the Big Tech hyperscalers (Google, Meta, Amazon, Microsoft, plus Oracle) are pouring into AI-related capital investments. These five firms alone are projected to spend over $0.6 trillion (!) in 2026. When some of these companies announced greater-than-expected spending in recent earnings calls, analysts threw up all over their balance sheets. These are just eye-watering amounts, and investors have gotten a little wobbly in their support. These spends have an immediate effect on cash flow, driving it in some cases to around zero. And the depreciation on all that capex will come back to bite GAAP earnings in the coming years, driving nominal price/earnings even higher.
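To see why the capex bill lands twice – once on cash flow now, and again on GAAP earnings later – here is a toy back-of-the-envelope sketch in Python. Every number below is made up purely for illustration; none of them are any company's actual figures.

```python
# Toy illustration (hypothetical numbers, not any company's actual figures) of
# how a big capex program hits free cash flow immediately but GAAP earnings
# gradually, via depreciation.

capex = 100.0           # one year of AI capex, $B (assumed)
useful_life = 5         # straight-line depreciation over 5 years (assumed)
operating_cash = 120.0  # cash from operations before capex, $B (assumed)
pre_capex_earnings = 90.0  # GAAP earnings before the new depreciation, $B (assumed)
market_cap = 2700.0     # $B (assumed)

free_cash_flow = operating_cash - capex        # capex hits cash flow right away...
annual_depreciation = capex / useful_life      # ...but hits earnings over 5 years
post_capex_earnings = pre_capex_earnings - annual_depreciation

print(f"Free cash flow this year: ${free_cash_flow:.0f}B")
print(f"P/E before new depreciation: {market_cap / pre_capex_earnings:.1f}")
print(f"P/E after new depreciation:  {market_cap / post_capex_earnings:.1f}")
```

With these illustrative numbers, free cash flow nearly vanishes in year one, and the P/E drifts higher in later years even if the share price goes nowhere – which is the "come back to bite" effect described above.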

The critical question here is whether all that capex will pay out with mushrooming earnings three or four years down the road, or whether the lifeblood of these companies is just being flushed down the drain. This is viewed as an existential arms race: the benefits of this big spend are not guaranteed, but if you don’t do this spending, you will definitely get left behind. Firms like Amazon have a long history of investing for years at little profit in order to achieve some ultimately profitable, wide-moat quasi-monopoly status. If one AI program can manage to edge out everyone else, it could become the default application, like Amazon for online shopping or Google/YouTube for search and videos. The One AI could in fact rule us all.

Many Companies May Get Disrupted By AI

We wrote last week on the crash in enterprise software stocks like Salesforce and ServiceNow (“SaaSpocalypse”). The fear is that cheaper AI programs can do what these expensive services do for managing corporate data. That fear is now spreading more broadly (“AI Scare Trade”): investors are rotating out of many firms with high-fee, labor-driven service models seen as susceptible to AI disruption. Here are two representative examples:

  • Wealth management companies Charles Schwab and Raymond James dropped 10% and 8%, respectively, last week after a tech startup announced an AI-driven tax planning tool that could customize strategies for clients.
  • Freight logistics firms C.H. Robinson and Universal Logistics fell 11% and 9%, respectively, after some little AI outfit announced freight handling automation software.

These AI disruption scenarios have long been recognized as possibilities, but in the present mood each new actual, specific case feeds the melancholy narrative.

All is not doom and gloom here: as investors flee software companies, they are embracing old-fashioned makers of consumer goods and other “stuff”:

The narrative last week was very clearly that “physical” was a better bet than “digital.” Physical goods and resources can’t be replaced by AI at the alarming rate that digital goods and services can be.

As I write this (Monday), U.S. markets are closed for the holiday. We will see in the coming week whether fear or greed will have the upper hand.

Bad ideas are costly

I know this has gotten coverage at other econ blogs, but I’ve been thinking about this paper for a couple days now.

Combine this with the classic Besley and Burgess paper on the political economy of government responsiveness to natural disasters, and you have a perfect Venn diagram of how bad ideas and bad political incentive alignment can lead to truly awful outcomes. An unfortunately “evergreen” mechanism in political economy.

Telephone Classroom Game for Teaching Large Language Models

Use the above game to generate interaction in a class setting. Students collectively form an LLM and have fun seeing the final sentence that gets produced. I call this game “LLM Telephone” based on the classic game of telephone. I suggest downloading the file LLM_Telephone_Game_Sheet and handing out printed copies. However, this game could be adapted to a virtual setting.

The nice thing about passing papers in the classroom is that you can have several sheets circulating in a quiet room, so when the final sentence is read aloud it comes as a surprise to most people.

If you’d like to have a handout to follow the game with a more technical explanation, you can use this two-page PDF:

The game relies on each player presenting two tokens, from which the next player selects their favorite. Participants should be bound by the rules of grammar and logic both when making their selection and when presenting two tokens to the next player.
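If you want to show students the mechanic in code, here is a minimal simulation sketch of the two-candidate, pick-one loop. The tiny table of candidate continuations is entirely made up for illustration; in class the candidates come from the previous student (and in a real LLM, from scoring a whole vocabulary).

```python
import random

# Hypothetical two-candidate continuations; a classroom sheet or real model
# would supply these.
candidates = {
    "The": ["economist", "robot"],
    "economist": ["studied", "laughed"],
    "robot": ["studied", "danced"],
    "studied": ["incentives.", "pizza."],
    "laughed": ["loudly.", "nervously."],
    "danced": ["gracefully.", "badly."],
}

def play_round(start="The", max_tokens=10):
    sentence = [start]
    while len(sentence) < max_tokens:
        options = candidates.get(sentence[-1])
        if options is None:               # no known continuation: stop
            break
        choice = random.choice(options)   # a student would choose deliberately
        sentence.append(choice)
        if choice.endswith("."):          # sentence finished
            break
    return " ".join(sentence)

print(play_round())
```

Running it a few times gives different sentences from the same rules, which is a nice hook for discussing why the same prompt can yield different LLM outputs.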

This game works as a fun ice breaker for any type of class that touches on the topic of artificial intelligence. It is suitable for many ages and academic disciplines.

Truth: The Strength and Weakness of AI Coding

There was a seismic shift in the AI world recently. In case you didn’t know, a Claude Code update was released just before the Christmas break. It could code awesomely and had a bigger context window, which is sort of like memory and attention span. Scott Cunningham wrote a series of posts demonstrating the power of Claude Code in ways that made economists take notice. Then, ChatGPT Codex was updated and released in January as if to say ‘we are still on the frontier’. The battle between Claude Code and Codex is active as we speak.

The differentiation is becoming clearer, depending on who you talk to. Claude Code feels architectural. It designs a project or system and thrives when you hand it the blueprint and say “Design this properly.” It’s your amazingly productive partner. Codex feels like it’s for the specialist. You tell it exactly what you want. No fluff. No ornamental abstraction unless you request it.

Codex flourishes with prompts like “Refactor this function to eliminate recursion” or “Take this response data and apply the Bayesian Dawid-Skene method.” It does exactly that. It assumes competence on your part and does not attempt to decorate the output. It assumes that you know what you’re doing. It’s like your RA that can do amazing things if you tell it what task you want completed. Having said all of this, I’ve heard the inverse evaluations too. It probably matters a lot what the programmer brings to the table.
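To make the “eliminate recursion” prompt concrete, here is the kind of before/after such a request produces. This is a generic illustration I wrote, not output from either tool.

```python
# Before: recursive sum of a list (can hit Python's recursion limit on long lists).
def total_recursive(xs):
    if not xs:
        return 0
    return xs[0] + total_recursive(xs[1:])

# After: the same function with recursion eliminated in favor of a loop.
def total_iterative(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

assert total_recursive([1, 2, 3, 4]) == total_iterative([1, 2, 3, 4]) == 10
```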

Both Claude Code and Codex are remarkably adept at catching code and syntax errors. That is not mysterious. Code is valid or invalid. The AI writes something, and the environment immediately reveals whether it conforms to the rules. Truth is embedded in the logical structure. When a single error appears, correction is often trivial.

When multiple errors appear, the problem becomes combinatorial. Fix A? Fix B? Change the type? Modify the loop? There are potentially infinite branching possibilities. Even then, the space is constrained. The code must run, or time out. That constraint disciplines the search. The reason these models code so well is that the code itself is the truth. So long as the logic isn’t violated, the axioms lead to the result. The AI anchors on the code to be internally consistent. The model can triangulate because the target is stable and verifiable.

AI struggles when the anchor disappears

Continue reading

Commodity Sports

I’m trying to coin “Commodity Sports” as the term to refer to sports betting that takes place on exchanges regulated by the US Commodity Futures Trading Commission, as opposed to sports betting that takes place through casinos regulated by state gaming commissions. So far it seems to be working alright: I haven’t convinced Gemini, but I have got the top spot in traditional Google search:

That article – Will Commodity Sports Last? – is my first at EconLog. I’m happy to get a piece onto one of the oldest economics blogs, one where I was reading Arnold Kling’s takes on the Great Recession in real time, where I was introduced to Bryan Caplan’s writing before I read his books, and where Scott Sumner wrote for many years (though I started reading him at The Money Illusion before that).

The key idea of the piece, other than the legal oddity of sports betting sharing a legal category with corn futures, is that the Commodity Sports category is being pioneered by prediction markets like Kalshi. As readers here will know, I like prediction markets:

I love that CFTC-regulated exchanges like Kalshi and Polymarket are bringing prediction markets to the mainstream. The true value of prediction markets is to aggregate information dispersed across the world into a single number that represents the most accurate forecast of the future.

But I’m not so excited to see them expanding into sports:

Although I see huge value in prediction markets when they are offering more accurate forecasts on important issues that help policymakers, businesses, and individuals make more informed plans for our future (e.g., Which world leaders will leave office this year?, or Which countries will have a recession?)… I see much less value in having a more accurate forecast of how many receptions Jaxon Smith-Njigba will have.

Like Robin Hanson, I worry that the legal battles against Commodity Sports and the brewing cultural backlash against sports betting risk taking the most informative prediction markets down along with it.

The full piece is here.

SaaSmageddon: Will AI Eat the Software Business?

A big narrative for the past fifteen years has been that “software is eating the world.” This described a transformative shift where digital software companies disrupted traditional industries, such as retail, transportation, entertainment and finance, by leveraging cloud computing, mobile technology, and scalable platforms. This prophecy has largely come true, with companies like Amazon, Netflix, Uber, and Airbnb redefining entire sectors. Who takes a taxi anymore?

However, the narrative is now evolving. As generative AI advances, a new phase is emerging: “AI is eating software.”  Analysts predict that AI will replace traditional software applications by enabling natural language interfaces and autonomous agents that perform complex tasks without needing specialized tools. This shift threatens the $200 billion SaaS (Software-as-a-Service) industry, as AI reduces the need for dedicated software platforms and automates workflows previously reliant on human input. 

A recent jolt here has been the January 30 release by Anthropic of plug-in modules for Claude, which allow a relatively untrained user to enter plain English commands (“vibe coding”) that direct Claude to perform role-specific tasks like contract review, financial modeling, CRM integration, and campaign drafting.  (CRM integration is the process of connecting a Customer Relationship Management system with other business applications, such as marketing automation, ERP, e-commerce, accounting, and customer service platforms.)

That means Claude is doing some serious heavy lifting here. Currently, companies pay big bucks yearly to “enterprise software” firms like SAP and ServiceNow (NOW) and Salesforce to come in and integrate all their corporate data storage and flows. This must-have service is viewed as really hard to do, requiring highly trained specialists and proprietary software tools. Hence, high profit margins for these enterprise software firms.

Until recently, these firms have been darlings of the stock market. For instance, as of June 2025, NOW was up nearly 2000% over the past ten years. Imagine putting $20,000 into NOW in 2015 and seeing it mushroom to nearly $400,000. (AI tells me that $400,000 would currently buy you a “used yacht in the 40 to 50-foot range.”)

With the threat of AI, and probably with some general profit-taking in the overheated tech sector, the share price of these firms has plummeted. Here is a six-month chart for NOW:

Source: Seeking Alpha

NOW is down around 40% in the past six months. Most analysts seem positive, however, that this is a market overreaction. A key value-add of an enterprise software firm is the custody of the data itself, in various secure and tailored databases, and that seems to be something that an external AI program cannot replace, at least for now. The capability to pull data out and crunch it (which AI is offering) is kind of icing on the cake.

Firms like NOW are adjusting to the new narrative by offering pay-per-usage as an alternative to pay-per-user (“seats”), but this does not seem to be hurting their revenues. These firms claim that they can harness the power of AI (either generic AI or their own software) to do pretty much everything that AI claims for itself. Earnings of these firms do not seem to be slowing down.

With the recent stock price crash, the P/E for NOW is around 24, with a projected earnings growth rate of around 25% per year. Compared to, say, Walmart with a P/E of 45 and a projected growth rate of around 10%, NOW looks pretty cheap to me at the moment.
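As a quick sanity check on that comparison, here is the back-of-the-envelope PEG arithmetic (P/E divided by expected growth rate; a rough screen, not a valuation model), using only the figures quoted above:

```python
# PEG = (P/E) / (expected annual earnings growth, in percent).
# Lower is "cheaper" relative to growth; inputs are the figures quoted in the post.
def peg(pe, growth_pct):
    return pe / growth_pct

print(f"ServiceNow (NOW): PEG = {peg(24, 25):.2f}")  # 0.96
print(f"Walmart:          PEG = {peg(45, 10):.2f}")  # 4.50
```

On that crude metric, NOW is priced at roughly one year of growth while Walmart is priced at four and a half, which is the sense in which NOW "looks pretty cheap."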

(Disclosure: I just bought some NOW. Time will tell if that was wise.)

Usual disclaimer: Nothing here should be considered advice to buy or sell any security.

Markets adjust: Super Bowl quarterback edition

Yesterday’s Super Bowl was fun for a variety of reasons, but your 147th favorite economist was especially happy to see that markets continue to keep things interesting. The NFL was an “only teams with elite quarterbacks can win” league…until it wasn’t. After Brady, Manning, Brees, and Mahomes won two decades’ worth of Super Bowls, we have back-to-back years of decidedly average quarterbacks winning (within-NFL average, to be clear; these are all objectively incredible athletes). How did this happen? Is it tactical evolution, flattening talent pools, institutional constraints, or markets updating? The answer is, of course, all of the above, but updating markets is the mechanistic straw that stirs the drink.

The NFL is salary capped, which means each team can only spend so much money on total player salaries. As teams placed greater and greater value on quarterbacks, a larger share of their salary pool was dedicated accordingly. These markets are effectively auctions, which means eventually the winner’s curse kicks in, with the winner of the player auction being whoever overvalues the player the most. Iterate for enough seasons, and you eventually arrive at a point where the very best quarterbacks are cursed with their own contracts, condemned to work with teammates of ever-decreasing quality. Combine that with a little market and tactical awareness, and smart teams will start building their teams and tactics around the players and positions that the market undervalues. And that (combined with rookie salary constraints) is how you arrive at a Super Bowl with the 18th- and 28th-salary-ranked quarterbacks.
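The winner's-curse mechanism is easy to see in a tiny Monte Carlo sketch: several teams bid on a player whose true value is common to all of them, each team's estimate is noisy, and the most optimistic estimate wins. The numbers below are hypothetical and not calibrated to actual NFL contracts.

```python
import random

# Winner's curse in a common-value auction: the team with the highest (most
# optimistic) noisy estimate wins, so the winning estimate systematically
# exceeds the player's true value.

def simulate(true_value=100.0, n_teams=10, noise_sd=20.0, n_auctions=10_000):
    overestimates = []
    for _ in range(n_auctions):
        estimates = [random.gauss(true_value, noise_sd) for _ in range(n_teams)]
        overestimates.append(max(estimates) - true_value)
    return sum(overestimates) / n_auctions

print(f"Average overestimate by the auction winner: {simulate():.1f}")
# With 10 bidders and a noise sd of 20, the winner overestimates by roughly 30
# on average, even though each individual team is unbiased.
```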

Whenever a market identifies an undervalued asset (e.g., quarterbacks 25 years ago), there will, over time, be an update. Within that market updating, however, is a collective learning-as-imitation that eventually results in some amount of overshooting via the winner’s curse. This overshoot, of course, may only last seconds, as market pressure pushes towards equilibrium. In markets like long-term sports contracts or 12-year aged whiskey, that overshoot can be considerable, as mistakes are calcified by contracts and high-fixed-cost capital.

What does this predict? In a market like NFL labor, I’d expect a cycle over time in the distribution of salaries, iterating between skewed, top-heavy “star” rosters and depth-oriented, evenly distributed rosters. At some point a high-value position or subset of stars is identified and disproportionately committed to, but the success of those rosters eventually leads to over-commitment, so much so that the advantage tilts towards teams that spread their resources more widely across a larger number of undervalued players and away from teams whose fixed pie of resources is overcommitted to a small number of players. That’s how you get the 2025 Eagles and 2026 Seahawks as Super Bowl champions.

I wonder when it will cycle back and what the currently undervalued position will be?

IP Paper on Econlog

My research on intellectual property is featured at

Everyone Take Copies (Econlog)

The title of this post, “everyone take copies,” comes from a conversation between the human subjects in an experiment in our lab, on which the paper is based. The experiment was studying how and when people take resources from one another.

Here’s a tip that doesn’t require any piracy. For those of you who are tired of the subscription economy fees, I think it’s safe to say in 2026 that anyone in the United States can find a local thrift store or annual rummage sale with oodles of nearly-free media. DVDs for a dollar. Used books for a dollar. Basically you are paying the transaction costs – the media itself is free. (I typed that dash myself, not AI!)

“Buying” a movie to stream on Amazon Prime can run over $20. Buying a used DVD is usually less than $10.

Something like the above observation probably led to this parody news headline: Awesome New Streaming Service Records Movie Streams Onto Cool Shiny Discs And You Can Buy Them And Own Them Forever

Here’s a response from the prompt “Make a picture of my office with AOL CD-ROMs decorating the wall.”

Against Eugenics, on its Own Terms

Once upon a time, eugenics was all the rage. It was nascent during the Reconstruction era and persisted into the 20th century. It grew out of biological evolutionary theory and emphasized reproductive fitness. In brief, the theory asserted that there are differences in individual fitness and that more fit living things will survive better and reproduce, eventually becoming a greater part of the population. The ability to compile and evaluate statistics about various human measurements made inferences hard to resist. Of course, researchers were plagued by small sample sizes, omitted variable bias, and the social biases of the day (for example, phrenology inferred fitness characteristics from skull shape).

People employing eugenic thinking, overwhelmingly, supported theories that their own type of person was among the more fit. Eugenicists didn’t promote theories of their own un-fitness. In the progressive era of the early 20th century, eugenics met the prevailing attitude that government could be employed to resolve social and economic ills. This era is when the income tax emerged, prohibition was enacted, the Federal Reserve was formed, and various labor regulations were enacted.

The result was that policy sometimes pursued greater ‘fitness’ among its populations. Rather than systematically encouraging the supposedly more fit with economic incentives, most policy was geared toward reducing the reproductive success of supposedly less fit people. Such policies included forced sterilization, institutionalization, and economic exclusion. Besides rejecting basic individual human dignity, the harm was all the more tragic given that fitness was often poorly specified. That is, policy criteria weren’t dependably related to fitness. Fatal conceit, indeed!

One of my favorite ways to argue is to grant premises and then change details on the margin to see whether the conclusion changes. Let’s do that. Let’s grant that there are innate differences between people that are related to biological success. Since survivability is related to resource acquisition, let’s grant also that economic success overlaps at least somewhat.  Taking that as granted, does pursuit of the historical eugenic policy still follow?

It does not.

There are two mistakes that eugenicists and various sorts of racists and xenophobes made. They asserted or implied 1) that fitness characteristics are stable and systematically identifiable, and 2) that policy needed to intentionally select for those fitness characteristics.

Continue reading