Second Quarter GDP Predictions

Back in April I wrote about four different estimates of GDP growth and how well they have performed since 2023. With second-quarter 2025 GDP data coming out next week, what do the best-performing predictors currently say?

In that post, I showed that the Atlanta Fed GDPNow model and the Kalshi betting market were generally the best performers, and that averaging the two improves predictive power a little more. As of today, the GDPNow model is predicting 2.4% growth and Kalshi is… also predicting 2.4%!

There will be a few more updates to GDPNow over the next week, and of course Kalshi is constantly updating as more people bet. But as of right now, 2.4% growth seems like a reasonable prediction. That may surprise some people, especially given all of the pessimism surrounding tariffs and policy uncertainty generally. But despite all of this, the US economy appears to be just continuing to chug along.

Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence

We noted last week Meta’s successful efforts to hire away the best of the best AI scientists from other companies by offering them insane (like $300 million) pay packages. Here we summarize and excerpt an excellent Newsweek article by Gabriel Snyder, who interviewed Meta’s chief AI scientist, Yann LeCun. LeCun discusses some inherent limitations of today’s Large Language Models (LLMs) like ChatGPT. Their limitations stem from the fact that they are based mainly on language, and human language itself turns out to be a very constrained dataset. Language is readily manipulated by LLMs, but language alone captures only a small subset of important human thinking:

Returning to the topic of the limitations of LLMs, LeCun explains, “An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1—it’s reactive, right? There’s no reasoning,” a reference to Daniel Kahneman’s influential framework that distinguishes between the human brain’s fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).

The limitations of this approach become clear when you consider what is known as Moravec’s paradox—the observation by computer scientist and roboticist Hans Moravec in the late 1980s that it is comparatively easier to teach AI systems higher-order skills like playing chess or passing standardized tests than seemingly basic human capabilities like perception and movement. The reason, Moravec proposed, is that the skills derived from how a human body navigates the world are the product of billions of years of evolution and are so highly developed that humans perform them automatically, while neocortical-based reasoning skills came much later and require much more conscious cognitive effort to master. However, the reverse is true of machines. Simply put, we design machines to assist us in areas where we lack ability, such as physical strength or calculation.

The strange paradox of LLMs is that they have mastered the higher-order skills of language without learning any of the foundational human abilities. “We have these language systems that can pass the bar exam, can solve equations, compute integrals, but where is our domestic robot?” LeCun asks. “Where is a robot that’s as good as a cat in the physical world? We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”

This gap exists because language, for all its complexity, operates in a relatively constrained domain compared to the messy, continuous real world. “Language, it turns out, is relatively simple because it has strong statistical properties,” LeCun says. It is a low-dimensionality, discrete space that is “basically a serialized version of our thoughts.”  

[Bolded emphases added]

Broad human thinking involves hierarchical models of reality, which get constantly refined by experience:

And, most strikingly, LeCun points out that humans are capable of processing vastly more data than even our most data-hungry advanced AI systems. “A big LLM of today is trained on roughly 10 to the 14th power bytes of training data. It would take any of us 400,000 years to read our way through it.” That sounds like a lot, but then he points out that humans are able to take in vastly larger amounts of visual data.

Consider a 4-year-old who has been awake for 16,000 hours, LeCun suggests. “The bandwidth of the optic nerve is about one megabyte per second, give or take. Multiply that by 16,000 hours, and that’s about 10 to the 14th power in four years instead of 400,000.” This gives rise to a critical inference: “That clearly tells you we’re never going to get to human-level intelligence by just training on text. It’s never going to happen,” LeCun concludes…
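LeCun's back-of-the-envelope arithmetic checks out; a minimal sketch using only the figures from the quote (~1 MB/s of optic-nerve bandwidth, ~16,000 waking hours by age four):

```python
# Quick check of the quoted back-of-the-envelope numbers.
BYTES_PER_SECOND = 1_000_000      # ~1 megabyte per second, per the quote
HOURS_AWAKE = 16_000              # waking hours by age four, per the quote
SECONDS_PER_HOUR = 3600

visual_bytes = BYTES_PER_SECOND * HOURS_AWAKE * SECONDS_PER_HOUR
print(f"{visual_bytes:.1e} bytes")  # 5.8e+13 bytes, i.e. on the order of 10^14
```

So four years of visual input is indeed roughly the 10^14 bytes that today's large LLMs take 400,000 person-years of reading to accumulate.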

This ability to apply existing knowledge to novel situations represents a profound gap between today’s AI systems and human cognition. “A 17-year-old can learn to drive a car in about 20 hours of practice, even less, largely without causing any accidents,” LeCun muses. “And we have millions of hours of training data of people driving cars, but we still don’t have self-driving cars. So that means we’re missing something really, really big.”

Like Brooks, who emphasizes the importance of embodiment and interaction with the physical world, LeCun sees intelligence as deeply connected to our ability to model and predict physical reality—something current language models simply cannot do. This perspective resonates with David Eagleman’s description of how the brain constantly runs simulations based on its “world model,” comparing predictions against sensory input. 

For LeCun, the difference lies in our mental models—internal representations of how the world works that allow us to predict consequences and plan actions accordingly. Humans develop these models through observation and interaction with the physical world from infancy. A baby learns that unsupported objects fall (gravity) after about nine months; they gradually come to understand that objects continue to exist even when out of sight (object permanence). He observes that these models are arranged hierarchically, ranging from very low-level predictions about immediate physical interactions to high-level conceptual understandings that enable long-term planning.

[Emphases added]

(Side comment: As an amateur reader of modern philosophy, I cannot help noting that these observations about the importance of recognizing there is a real external world and adjusting one’s models to match that reality call into question the epistemological claim that “we each create our own reality”.)

Given all this, developing the next generation of artificial intelligence must, like human intelligence, embed layers of working models of the world:

So, rather than continuing down the path of scaling up language models, LeCun is pioneering an alternative approach of Joint Embedding Predictive Architecture (JEPA) that aims to create representations of the physical world based on visual input. “The idea that you can train a system to understand how the world works by training it to predict what’s going to happen in a video is a very old one,” LeCun notes. “I’ve been working on this in some form for at least 20 years.”

The fundamental insight behind JEPA is that prediction shouldn’t happen in the space of raw sensory inputs but rather in an abstract representational space. When humans predict what will happen next, we don’t mentally generate pixel-perfect images of the future—we think in terms of objects, their properties and how they might interact.

This approach differs fundamentally from how language models operate. Instead of probabilistically predicting the next token in a sequence, these systems learn to represent the world at multiple levels of abstraction and to predict how their representations will evolve under different conditions.

And so, LeCun is strikingly pessimistic on the outlook for breakthroughs in current LLMs like ChatGPT. He believes LLMs will be largely obsolete within five years, except for narrower purposes, and so he tells upcoming AI scientists not even to bother with them:

His belief is so strong that, at a conference last year, he advised young developers, “Don’t work on LLMs. [These models are] in the hands of large companies, there’s nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs.”

This approach seems to be at variance with that of other firms, which continue to pour tens of billions of dollars into LLMs. Meta, however, seems focused on next-generation AI, and CEO Mark Zuckerberg is putting his money where his mouth is.

Will LLMs get us the Missing Data for Solving Physics?

Tyler suggested that a “smarter” LLM could not master the unconquered intellectual territory of integrating general relativity and quantum mechanics.

Forget passing Ph.D.-level qualifying exams. (j/k James) Are the AIs going to significantly surpass human efforts in generating new knowledge?

What exactly is the barrier to solving the fundamental mysteries of physics? How do we experimentally confirm that all matter breaks down to vibrating strings?

In a podcast episode of Within Reason, Brian Greene says that we can imagine an experiment that would test the proposed unifying String Theory. The Large Hadron Collider is not big enough (17 miles in circumference is too small). We would need a particle accelerator as big as a galaxy.

ChatGPT isn’t going to get us there. However, Brian Greene did suggest that there is a possibility that an advance in mathematics could get us closer to being able to work with the data we have.

Ben Yeoh summarized what he heard from Tyler et al. at a live event on how fast AI will accelerate the growth of our knowledge. They warned that some areas will hit bottlenecks and therefore not advance very fast. Anything that requires clinical trials, for example, isn’t going to proceed at breakneck speed. Ben warns that “Protein folding was a rare success,” so we shouldn’t get too excited about acceleration in biotech. If advances in physics require bigger and better physical tools to do more advanced experimental observations, then new AI might not get us far.

However, one of the categories that made Yeoh’s list of where new AI might accelerate progress is “mathematics,” because developing new theories does not face the same kind of physical constraints.

So, because testing String Theory is so capital-intensive, we are unlikely to obtain new definitive tests of it. If AI advances are to settle this empirical question in my lifetime, the solution will probably come from advances in mathematics that reduce our reliance on new observational data.

Related links:
my article for the Gospel Coalition – We are not “building God,” despite some claims.
my article for EconLog – AI will be constrained by the same problem that David Hume faced. AI can predict what is likely to occur in the future based on what it has observed in the past.

“The big upward trend in Generative AI/LLM tool use in 2025 continues but may be slowing.” Have we reached a plateau, at least temporarily? Have we experienced the big upswing already in productivity, and it’s going to level out now? At least programming will be less painful forever after?

“LLM Hallucination of Citations in Economics Persists with Web-Enabled Models” I realize that, as of today, you can pay for yet-better models than what we tested. But if web-enabled 4o can’t cite Krugman properly, you do wonder if “6o” will be integrating general relativity and quantum mechanics. A slightly longer context window probably isn’t going to do it.

Impossible Trinity of Macroeconomic Stability

Trump wants both low taxes and low interest rates. I hope that he doesn’t get both.

For the last ten days of my Principles of Macroeconomics course, I emphasize the aggregate supply and aggregate demand model coupled with monetary offset. What’s monetary offset? It says that, given some target and administrative insulation, the Federal Reserve can ‘offset’ the aggregate demand effects of government fiscal policy. It’s what gives us a relatively stable economy, despite big fiscal policy changes from administration to administration.

For example, if the Fed has a 2% inflation target, then they have an idea of how much total spending in the economy (NGDP) must change. If the federal government changes tax revenues or spends more, then the Fed can increase or decrease the money supply in order to achieve the NGDP growth rate that will realize their target. For instance, after the 2017 Tax Cuts and Jobs Act lowered taxes, the Federal Funds rate rose in 2018. The effects of the tax cuts on NGDP were *offset* by monetary policy tightening to keep inflation near 2%.
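The offset logic can be sketched with the equation of exchange, M × V = NGDP. This is my own illustration, not the post's model, and all numbers below are hypothetical:

```python
# Hypothetical sketch of monetary offset using the equation of exchange,
# M * V = NGDP. If fiscal stimulus raises velocity-adjusted spending,
# the Fed shrinks the money supply to keep NGDP on its target path.
def money_supply_for_target(ngdp_target: float, velocity: float) -> float:
    """Money supply consistent with hitting the NGDP target at a given velocity."""
    return ngdp_target / velocity

# Target 4% NGDP growth (roughly 2% inflation + 2% real growth) from a base of 100.
ngdp_target = 100 * 1.04

m_baseline = money_supply_for_target(ngdp_target, velocity=2.0)
# A fiscal stimulus pushes velocity up; the Fed offsets by tightening.
m_after_stimulus = money_supply_for_target(ngdp_target, velocity=2.1)

# Tighter money offsets the fiscal boost, keeping NGDP (and inflation) on target.
assert m_after_stimulus < m_baseline
```

The point of the sketch is only the direction of the adjustment: a demand-boosting fiscal change calls for a tighter money supply if the NGDP target is to hold.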

If the Fed doesn’t engage in monetary offset, then fiscal policy has a bigger impact on the business cycle, causing more erratic bouts of unemployment and inflation. The economy would be less stable. Importantly, monetary offset works in both directions. It prevents tight fiscal policy from driving us into a national depression, and loose fiscal policy from fueling inflation. That’s good since politicians face an incentive/speed/knowledge/political problem.

Personally, I would love lower taxes and lower interest rates. I’d get to enjoy more of my income rather than sending it to Uncle Sam and, after refinancing, I’d pay less to service my debts. BUT, the same is true for everyone else too. All of that greater spending would result in higher prices and persistent inflation.

Right now, low taxes and high spending mean that the government is running persistent budget deficits – it’s borrowing money. That’s stimulative. If the Fed lowers interest rates, individuals will refinance and borrow more. That’s also stimulative. If both fiscal and monetary policy are stimulative as part of achieving the Fed’s target, then there is nothing wrong. But deviation from that policy goal brings economic turbulence.

This analysis implies an impossible trinity of macroeconomic stability (not the one from international trade):

Continue reading

Freedom for Freestanding Birth Centers

Iowa recently joined the growing list of states where midwives or obstetricians can open a freestanding birth center without needing to convince a state board that it is economically necessary. The Des Moines Register provides an excellent summary:

A Des Moines midwife who sued the state for permission to open a new birthing center may have lost a battle in court, but ultimately, she has won the war.

Caitlin Hainley of the Des Moines Midwife Collective sought to open a standalone birthing center in Des Moines, essentially a single-family home repurposed with birthing tubs and other equipment needed to give birth in a comfortable, home-like environment.

To do so, the collective alleged in its 2023 lawsuit, would have required going through a lengthy, expensive regulatory process that would give already established maternity facilities, such as local hospitals, the chance to argue against granting what is known as a certificate of need for the new facility, essentially vetoing competition.

A federal district judge ruled in November that Iowa’s certificate-of-need law is constitutional, finding that legislators had a rational interest in protecting existing hospitals and health care providers.

But while losing the first round in court, the collective’s cause was winning support in a more important venue: the Iowa Capitol. Iowa legislators in their 2025 session passed a bill, which Gov. Kim Reynolds signed on May 1, removing birth centers from the definition of health facilities covered by the certificate-of-need law. The law will formally take effect July 1.

I’m honored to have played a small part in this as the expert witness in the lawsuit.

If you’d like to get involved in making sure birth options are available in your state, a great place to start would be to attend the Zoom seminar Roadmap For Reform: Advancing Birth Freedom on July 23rd. It is hosted by the Pacific Legal Foundation, which represented the midwives pro bono in the Iowa case.

There is strong momentum here, with Connecticut, Kentucky, Michigan, Vermont, and West Virginia also recently repealing certificate-of-need requirements for birth centers, but a variety of other barriers remain. States often require freestanding birth centers to obtain a transfer agreement with a nearby hospital before opening, to ensure that the hospital will take their emergency cases, even though hospitals are legally required to take all emergency cases. The problem is that hospitals provide both complementary services (emergency care) and substitute services (labor and delivery), and they often choose not to sign transfer agreements in order to prevent competition from a partial substitute. This whole area would benefit from both more academic study and more investigation by antitrust enforcers.

But for today, congratulations to Caitlin Hainley and to Iowa on their victory.

Inflation Is Stuck

Here’s a somewhat niche measure of inflation: 6-month CPI excluding food, shelter, and energy. It might seem like a weird measure, as it excludes over half of the CPI. But there is a logic to at least considering it along with other measures.

Food and energy are both volatile, so they can introduce a lot of noise. That’s why “core CPI” and other core measures are followed closely by the Fed and inflation watchers. But excluding shelter might also make sense, because rising housing prices are largely due to supply constraints and will, to some extent, move independently of monetary policy. A six-month window also gives a timelier measure than the 12-month headline number.

As you can see in the chart above, this niche measure of inflation has been stuck for two and a half years. It has oscillated between about 0.5% and 1.5% since December 2022, and right now it’s almost exactly in the middle of that range. It is lower than it was six months ago, but higher than one year ago.

As you can see in the pre-2020 years, it generally oscillated between 0% and 1%. So 6-month inflation is stuck about 0.5% higher than we had become used to, which translates into roughly 1% higher annually.
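The six-month-to-annual conversion is just compounding the rate over two periods; a minimal sketch:

```python
# Minimal sketch: annualizing a 6-month inflation rate by compounding
# it over two six-month periods.
def annualize_six_month(rate_6mo: float) -> float:
    return (1 + rate_6mo) ** 2 - 1

# A 0.5 percentage-point excess per six months compounds to roughly 1% per year.
excess_annual = annualize_six_month(0.005)
print(f"{excess_annual:.4%}")  # 1.0025%
```

At rates this small, compounding barely matters, which is why "0.5% per six months" and "roughly 1% annually" are interchangeable here.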

In the grand scheme of things, 1% higher inflation isn’t the end of the world. But we do seem to be stuck at a slightly elevated rate of inflation relative to the decade before 2020.

Meta Is Poaching AI Talent With $100 Million Pay Packages; Will This Finally Create AGI?

This month I have run across articles noting that Meta’s Mark Zuckerberg has been making mind-boggling pay offers (like $100 million/year for 3-4 years) to top AI researchers at other companies, plus the promise of huge resources and even (gasp) personal access to Zuck himself. Reports indicate that he is succeeding in hiring around 50 brains from OpenAI (home of ChatGPT), Anthropic, Google, and Apple. Maybe this concentration of human intelligence will result in the long-craved artificial general intelligence (AGI) being realized; there seems to be some recognition that current Large Language Models will not get us there.

There are, of course, other interpretations being put on this maneuver. Some talking heads on a Bloomberg podcast speculated that Zuckerberg was using Meta’s mighty cash flow deliberately to starve competitors of top AI talent. They also speculated that, since there is a limit to how much money you can possibly, pleasurably spend, a rational outcome of paying some guy $100 million in a year would be for him to quit and spend the rest of his life hanging out at the beach. (That, of course, is what Bloomberg finance types might think, who measure worth mainly in terms of money, not in the fun of doing cutting-edge R&D.)

I found a thread on reddit to be insightful and amusing, and so I post chunks of it below. Here is the earnest, optimist OP:

andsi2asi

Zuckerberg’s ‘Pay Them Nine-Figure Salaries’ Stroke of Genius for Building the Most Powerful AI in the World

Frustrated by Yann LeCun’s inability to advance Llama to where it is seriously competing with top AI models, Zuckerberg has decided to employ a strategy that makes consummate sense.

To appreciate the strategy in context, keep in mind that OpenAI expects to generate $10 billion in revenue this year, but will also spend about $28 billion, leaving it in the red by about $18 billion. My main point here is that we’re talking big numbers.

Zuckerberg has decided to bring together 50 ultra-top AI engineers by enticing them with nine-figure salaries. Whether they will be paid $100 million or $300 million per year has not been disclosed, but it seems like they will be making a lot more in salary than they did at their last gig with Google, OpenAI, Anthropic, etc.

If he pays each of them $100 million in salary, that will cost him $5 billion a year. Considering OpenAI’s expenses, suddenly that doesn’t sound so unreasonable.

I’m guessing he will succeed at bringing this AI dream team together. It’s not just the allure of $100 million salaries. It’s the opportunity to build the most powerful AI with the most brilliant minds in AI. Big win for AI. Big win for open source

And here are some wry responses:

kayakdawg

counterpoint 

a. $5B is just for those 50 researchers, loootttaaa other costs to consider

b. zuck has a history of burning big money on r&d with theoretical revenue that doesnt materialize

c. brooks law: creating agi isn’t an easily divisible job – in fact, it seems reasonable to assume that the more high-level experts enter the project the slower it’ll progress given the communication overhead

7FootElvis

Exactly. Also, money alone doesn’t make leadership effective. OpenAI has a relatively single focus. Meta is more diversified, which can lead to a lack of necessary vision in this one department. Passion, if present at the top, is also critical for bleeding edge advancement. Is Zuckerberg more passionate than Altman about AI? Which is more effective at infusing that passion throughout the organization?

….

dbenc

and not a single AI researcher is going to tell Zuck “well, no matter how much you pay us we won’t be able to make AGI”

meltbox

I will make the AI by one year from now if I am paid $100m

I just need total blackout so I can focus. Two years from now I will make it run on a 50w chip.

I promise

The option to leave

The US, like every geopolitical entity to ever exist, has produced global public goods (e.g. international security, defeating the Nazis) and global public bads (e.g. greenhouse gases, failed interference in other countries).

I would like to posit something very simple: the greatest public good the United States has ever produced is the option to leave where you are and emigrate to the United States. If a country and its leadership are failing, non-trivial fractions of its population have had the viable option to pack their bags and walk out the door. Perhaps unfairly, this is doubly true for the best, brightest, and most endowed with resources, making the threat all the more salient. It’s voting with your feet, i.e. Tiebout effects writ large.

If you are a failing nation, your options become to watch your population dissipate or to put up a wall blocking exit. Either that or, you know, actively take steps to improve your country so that fewer people wish to leave their home and start over elsewhere. The ramifications of stifled immigration to the United States will be felt for decades, and not just in the United States in the form of an enervated economy and a betrayal of our core civic values, but globally in weakened constraints on every failing regime.

Chesterton Right about the History of Patriotism

Unexpectedly, Chesterton on Patriotism from 2021 is one of my all-time top performing posts due to a slow but steady drip of Google Search hits.

In 1908, G.K. Chesterton published the following line in Orthodoxy,

This, as a fact, is how cities did grow great. Go back to the darkest roots of civilization and you will find them knotted round some sacred stone or encircling some sacred well.

By 1908, Chesterton had likely been exposed to Victorian early anthropological thinkers like Tylor and Frazer. Maybe I shouldn’t be impressed that he’d get it right, but I don’t think of Chesterton as having access to the best and latest evidence for how human civilization evolved.

I was browsing the book Sapiens (2011) this week and came across:

In the conventional picture, pioneers first built a village, and when it prospered, they set up a temple in the middle. But Göbekli Tepe suggests that the temple may have been built first, and that a village later grew up around it. (pg 102)

Today’s post is dedicated to congratulating Chesterton on making a conjecture that turns out to line up with the best we now know from archaeological evidence, evidence that was only discovered in 1995.

Chesterton wrote,

The only way out of it seems to be for somebody to love Pimlico; to love it with a transcendental tie and without any earthly reason. If there arose a man who loved Pimlico, then Pimlico would rise into ivory towers and golden pinnacles… If men loved Pimlico as mothers love children, arbitrarily, because it is theirs, Pimlico in a year or two might be fairer than Florence.

Also this month I witnessed Americans celebrating the 4th of July. People here love this country “because it is theirs.”

I’ve heard a lot of panicking in the past 10 years about the fate of the nation, and I think we should always be in a partial state of paranoia. But, if love of country is needed in the recipe, we’ve still got it. (You might need an Instagram account to view Mark Zuckerberg wakeboarding in a bald eagle suit.)

The Simple Utility Function Vs. Socialism

I’m a big fan of Friedrich Hayek. I first read his work in an academic setting. But many people first encounter him via The Road to Serfdom, his book that outlines the political and social consequences of state economic controls. I always meant to go back and read it, but it usually took a back seat to other works. Now, I’m slowly making my way through.

A lovely snippet includes Hayek explaining the popular sentiment that “it’s only money” or that money-related concerns are base or superficial. Such an attitude is especially common when people recount their childhood or family life during times of financial difficulty. The story often goes, “times were hard, but we had each other.” Similarly, a popular derisive trope is that economists ‘only care about money’ [rather than the more important things].

Continue reading