GDP Predictions: Pretty Good!

Last week I wrote about the GDP predictions from Kalshi and the GDPNow model. Both were showing 2.4% for Q2 of 2025 last week, and both had moved slightly by yesterday, up to 2.8% and 2.9%. The final result (technically, the “advance” estimate, but the final one for purposes of this comparison) was 2.97%. The Atlanta Fed GDPNow model continues to be a top performer, and you can’t do much better than averaging these two estimates. And you can pretty consistently do better than the median result from the WSJ/Dow Jones survey of economists.
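As a quick arithmetic check on those numbers, here is a minimal sketch using the day-before forecasts (2.8% and 2.9%) and the 2.97% advance estimate quoted above:

```python
# Day-before forecasts and the advance GDP estimate, as quoted in the post
gdpnow, kalshi, actual = 2.8, 2.9, 2.97

average = (gdpnow + kalshi) / 2
for name, forecast in [("GDPNow", gdpnow), ("Kalshi", kalshi), ("Average", average)]:
    print(f"{name}: {forecast:.2f}%, off by {abs(forecast - actual):.2f} pp")
```

In this particular quarter Kalshi alone happened to land closest; the case for averaging is about consistency across many quarters, not any single release.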

Here’s the updated table:

And here is the original post explaining the data.

Warren Buffett Quotes on Gold as a Bad Investment; Was He Right?

To say Warren Buffett is not a fan of gold would be an understatement. His basic beef is that gold does not produce much of practical value. His instincts have always been to buy businesses that generate steady and growing cash by producing goods or services that people need or want – businesses like railroads, beverage makers, and insurance companies.

Here are some quotes on the subject from the Oracle of Omaha, where I have bolded some phrases:

“Gold … has two significant shortcomings, being neither of much use nor procreative. True, gold has some industrial and decorative utility, but the demand for these purposes is both limited and incapable of soaking up new production. Meanwhile, if you own one ounce of gold for an eternity, you will still own one ounce at its end” — Buffett, letter to shareholders, 2011

“With an asset like gold, for example, you know, basically gold is a way of going long on fear, and it’s been a pretty good way of going long on fear from time to time. But you really have to hope people become more afraid in the year or two years than they are now. And if they become more afraid you make money, if they become less afraid you lose money. But the gold itself doesn’t produce anything” — Buffett, CNBC’s Squawk Box, 2011

This from when the world’s total gold hoard, which would form a cube about 67 feet on a side, was worth about $7 trillion, which by his reckoning was the value of all U.S. farmland plus seven times the value of petroleum giant ExxonMobil plus an extra $1 trillion:

“And if you offered me the choice of looking at some 67-foot cube of gold … and the alternative to that was to have all the farmland of the country, everything, cotton, corn, soybeans, seven ExxonMobils. Just think of that. Add $1 trillion of walking around money. I, you know, maybe call me crazy but I’ll take the farmland and the ExxonMobils” – Cited in https://www.nasdaq.com/articles/3-things-warren-buffett-has-said-about-gold

And my favorite:

“Gold gets dug out of the ground in Africa, or someplace. Then we melt it down, dig another hole, bury it again and pay people to stand around guarding it. It has no utility. Anyone watching from Mars would be scratching their head.” – From a speech at Harvard; see https://quoteinvestigator.com/2013/05/25/bury-gold/

One thing Buffett did NOT say is that gold is a “barbarous relic.” That line belongs to John Maynard Keynes, writing a hundred years ago about the notion of tying national money issuance to the number of bars of gold held in the national vaults:

“In truth, the gold standard is already a barbarous relic. All of us, from the Governor of the Bank of England downwards, are now primarily interested in preserving the stability of business, prices, and employment, and are not likely, when the choice is forced on us, deliberately to sacrifice these to outworn dogma, which had its value once” –  Monetary Reform (1924)

Has Buffett’s Berkshire Hathaway Beaten Gold as an Investment?

Given all that trash talk from the legendary investor, let’s see how an investment in his flagship Berkshire Hathaway company (stock symbol BRK.B) compares to gold over various time periods. I will use the ETF GLD as a proxy for gold, and will include the S&P 500 index as a proxy for the general U.S. large cap stock market.

As always, these comparisons depend on your starting and ending points. In the 1990s and 2000s, BRK.B hugely outperformed the S&P 500, cementing Buffett’s reputation as one of the greatest investors of all time. (GLD data doesn’t go back that far.) In the past twelve months, gold (up 41%) has soundly beaten SPY (up 14%) and completely trounced BRK.A (up 9%), as of last week. A couple of one-off factors have gone into these results: Gold had an enormous surge in January-April as the world markets digested the implications of never-ending gigantic U.S. budget deficits, and the markets soured on BRK.A due to the announced upcoming retirement of Buffett himself.

Stepping back to look over the past ten years shows the old master still coming out on top. In this plot, gold is orange, S&P 500 is blue, and BRK.A is royal purple:

Over most of this time period (through 7/21/2025), BRK.A and the S&P 500 were pretty close, and gold lagged significantly. Gold was notably left behind during the key stock surge of 2021. Even with the rise in gold and dip in BRK.A this year, Buffett’s company (up 232%) still beats gold (up 198%) over the past ten years. BRK.A pulled well ahead of the S&P 500 during the 2022 correction, and never gave back that lead. In the April stock market panic this year, BRK.A actually went up as everything else dropped, as it was seen as a tariff-proof safe haven. The S&P 500 was ahead of gold for nearly all this period, until the crash in stocks and the surge in gold in the first half of 2025 brought them to essentially a tie for the past decade.
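For context, those cumulative ten-year figures can be converted to annualized (compound) returns; this is a back-of-envelope sketch, not part of the original analysis:

```python
def annualized(cumulative_pct, years):
    """Convert a cumulative percentage return into a compound annual growth rate."""
    return ((1 + cumulative_pct / 100) ** (1 / years) - 1) * 100

# Ten-year cumulative returns quoted above
print(f"BRK.A: {annualized(232, 10):.1f}% per year")
print(f"Gold:  {annualized(198, 10):.1f}% per year")
```

That works out to roughly 12.7% versus 11.5% per year: a meaningful gap once compounded over a decade, but closer than the cumulative numbers might suggest.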

The consequences of a “Papers, please” economy

While DOGE is advertising their new deregulation AI (HT MR) with promises of “trimming 100,000 of those rules”, the reality is that the administration is ushering in the most profound layer of government involvement in our lives since the introduction of the income tax.

It defies the opportunity cost of my time to try to recap the crappy-policy-via-executive-order blunderbuss that has been the last 6 months, but it is sufficient to focus on two dimensions: immigrant targeting and tariffs. ICE is pulling people off the street and detaining them for hours based on their physical appearance, in what can only be described as a dedicated effort to remove current immigrants, denaturalize past immigrants, and deter future immigrants. While these travesties play out one raid and immigration-court ambush after another, tariffs are being introduced rapidly and haphazardly, always at the expense of the economy, and sometimes even in opposition to their stated goal of reshoring manufacturing. The prospect of (relatively) frictionless commerce across borders is quickly becoming unobtainable.

So what’s going to happen, now? Is America going to become a Whiter, autarkic island that steadily de-growths itself into a quieter state of nostalgic bliss, cheerfully accepting a shorter, sicker, less opulent life than before? Sure, the food will be worse and more expensive, our electronics obsolete and more expensive, our cars older and made from inferior materials (and more expensive), but that’s just the way things have to be, right? People will live and do as they’re told, right?

Have you met people?

People adapt. They find every workaround, every crack. Their lives will change, in many ways for the worse, but they will work with what they have to make the best that they can. And in this case, the best way to adapt will be to become just a little more criminal. Not fully criminal, just a little more. More aspects of our lives will become akin to driving 10 mph over the speed limit because that’s just what everyone does.

Daily life will, slowly and at times imperceptibly, move underground. More jobs will pay in cash. Fewer exchanges will be made absent a personal relationship. More goods will arrive in suitcases at the luggage return. Friends will ask friends to pick up a phone/earbuds/tablet for them when visiting less economically restricted countries, while also reminding them to delete their messaging apps before heading through security. More goods will be altered from their true, optimal consumable form to qualify for a lower tariff. The advantage of physical over digital media will widen again. Where exchange exists outside of the law, trust needs to be found outside of contract. At the margin, business will become just a fraction of a percent more nepotistic. More employees will be found somewhere in the family tree. Everyone will just become a bit more crooked and, in doing so, expect everyone else to be just a little more crooked. The US is a shockingly high-trust society because it pays to be trustworthy. This is how such things unravel.

More immigrants will live within arrangements that hide them not just from the authorities, but from observation in general. Curtains and blinds covering windows at all hours. Dinner will be taken at home rather than at the restaurant. Clubs and concert gatherings that appeal to immigrant crowds, or even just less White crowds, will advertise less, relying on word of mouth. Workers will move more often, rather than garner attention. The sick and injured will not be taken to the emergency room. The gaps in an already fractured society will become a little wider.

Employers will keep more people off the books. Off health insurance and workers’ compensation. Employees will, perversely, be grateful for the lower exposure. Insurance companies will find new ways to audit liability without exposing their clients’ personnel. Predatory human trafficking will find larger herds of underground populations to hide its practices within. Fewer people will trust and rely on the law. Fewer people will enjoy its protections.

What about compensating wage differentials? On the one hand, labor supply will be reduced as it is pushed underground, reducing workers’ numbers and safely available hours. On the other hand, the necessity for employers to reduce the visibility of their workers while incurring the risk of legal punishment will reduce demand. The net effect on equilibrium wages is uncertain. However, those employers who manage to guarantee longer and safer tenured employment will capture greater rents from those they employ. Getting to work more consistently and going to sleep feeling safer is quite the fringe benefit, one that employers may find to be a more profitable form of compensation than simple wages.

I’d keep writing, but I already sound like a paranoid crank. I’m not sure I am comfortable, anonymous reader, with this level of intellectual vulnerability in such a public forum. Papers, please.

NYC Family Summer Trip Itinerary

This is a condensed list of what we did with elementary-aged kids for three fun days in Manhattan in July. 

Like a camping trip, NYC with kids depends on the weather. In good walking weather you can occupy many hours exploring free outdoor attractions. In bad weather, you might feel the need to constantly buy admission tickets, retail goods, taxi rides, and sit-down restaurant meals.

Our hotel was a 5-minute walk from Grand Central Station. We had a good view from our room on the 28th floor. We could even look down on an interesting active construction site. When traveling with kids, or any group larger than a couple, you’ll probably be stuck in the hotel sometimes, so paying extra for a good view might be worth it.

In the hotel lounge, adults could drink a free glass of wine and listen to a guy playing calm piano songs from memory like “My Heart Will Go On.” When my ten-year-old (10yo) asked for “Seven Nation Army” by The White Stripes, the request was denied. 

Tuesday

We walked to the Empire State Building. Passing the Public Library was a highlight, although it was not open yet. I had booked entry tickets to the Empire State Building online months ahead of time for a 9:30am slot on a Tuesday. We spent no time waiting in line.

Next, we took the subway to the Museum of Modern Art (MoMA). I had reserved tickets for the day. If your kids need a break from quietly appreciating art on the wall, there is a garden courtyard and a kids’ craft area.

It was hot but we were lucky to not be in the middle of a genuine heat wave. We got to-go food from a shop and walked north to Central Park for a picnic on rocks in the shade. Playgrounds and fun statues provide points of interest for kids.

We walked up to the Chess pavilion where my kids dropped in on chess games with friendly strangers (a nice man provided the pieces). You could bring your own chess or checkers pieces if you want to guarantee a game.

Note: Even though many places post bag policies online, you can almost always get a regular-sized backpack and/or metal water bottle through security.

Wednesday

This might sound like a planned itinerary, but the only thing we locked in ahead of time for Wednesday was an afternoon Broadway show. By the time we got back to our hotel at night, my phone had counted over 14,000 steps for the day. Our first stop was Castle Clinton on the southern edge of the island. Benefits to children include bathrooms and a museum display of how Manhattan was expanded through land reclamation. I’m not including all the places we stopped to eat in this blog, but we especially liked discovering Liberty Bagels.

We saw the bull statue and the New York Stock Exchange (from the outside). A great stop in Lower Manhattan is a free self-guided tour of Trinity Church. We walked to the 9/11 memorial and then back to the subway so we could rest in our hotel during the hottest part of the day.

The Lion King on Broadway was fun (and expensive). Afterwards, we were in Times Square, and it was finally time to do what my kids had been talking about all year: return to the M&M store. Kids love the M&M store.

Then we walked to Hell’s Kitchen for dinner. We went back east to Rockefeller Center and bought books for the kids at McNally Jackson. From there, with a “we can make it” attitude, we walked back to the hotel in the dark. This is where the “safe streets” matter as much as the weather.

Thursday

This morning had not been planned ahead of time, so we spent some of our time figuring things out. We walked to the United Nations headquarters. I was able to get visitor passes and a guided tour. We had a great guide who explained the building and the aims of the UN for 45 minutes. I learned a lot and my 10yo was engaged.

Should you take your children on a tour of the UN? I had to carry my 7yo most of the time. The next day, I asked her what she remembered. She sincerely replied, “What United Nations?” If you don’t think your younger child will be outright disruptive, then you might take a younger kid along with an older kid who can appreciate it.

We had an appointment to enter the USS Intrepid Museum at 2:30. We didn’t make it until 3:30 and it closes at 5pm. The place deserves more than 90 minutes. It has a big kids’ activity area that is fully indoors.

Our last scheduled event was a Circle Line Harbor Lights Cruise from 7pm to 9pm. In the summer, this is a sunset cruise of lower Manhattan and the Statue of Liberty. Just at the end you see the city lights against the night sky. The tour guide was entertaining and smart! I learned interesting NYC facts and history. They have enclosed areas with windows for bad weather but that would not be as fun. Being able to sit up on the open-air top deck made the view amazing for everyone.

I’m Chair! 😬

As of July 1st of this year, I am the Chairman of the Department of Economics at my university. It’s one of those positions that includes more work and not much compensation. Depending on who I tell, I’m given both congratulations and condolences. Generally, at my university there is an expectation that department faculty ‘take turns’ being chair. So, we’re expected to serve whether the pay is good or not. There’s a lot of informal practice around this process.

In addition, economics majors have become less popular at liberal arts institutions over the past several years. No one knows exactly why; there are probably multiple reasons. At my institution, our department has healthy enrollment in its peripheral majors: the Economics BA and BS have lower enrollment, but the Business Economics and Global Affairs majors are more popular than ever.

All the same, I’d like to increase the number of students who have declared majors in our department and the number of Economics graduates. How do I do that?


Is A Music Major Worth It?

Our new paper concludes that the answer is a resounding “It Depends”.

It depends on your answer to the following questions:

  1. If you didn’t major in music, would you major in something else, or not finish college?
  2. How dead set are you on a career in music?
Source: Figure 1 of Bailey and Smith (2025)

We found that

  1. Music majors earn more than people who didn’t graduate from college, even if they don’t end up working as musicians
  2. Among musicians, music majors earn more than other majors
  3. But among non-musicians, other majors earn much more than music majors

So on average a music major means higher income if you would be a musician anyway, or if you wouldn’t have gone to college for another major, but lower income than if you majored in something else and worked outside of music. The exact amounts depend on what you control for; this gets complex but this table gives the basic averages before controls:

Source: Table 2 of Bailey and Smith (2025), showing wage plus business income for respondents to the 2018-2022 American Community Survey

For better or worse, a music major also means you are much more likely to be a musician: 113 times more likely, in fact (this is just the correlation; we’re not randomizing people into the major). Despite that incredible correlation, only 9.8% of music majors report being professional musicians, and only 22.3% of working musicians were music majors.
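To make concrete what that relative likelihood means, this sketch backs out the implied share of musicians among non-music majors; note the non-major share is implied by the two quoted numbers, not reported directly in the paper:

```python
p_musician_given_music_major = 0.098  # 9.8% of music majors report being professional musicians
relative_likelihood = 113             # music majors are 113x more likely to be musicians

# Implied share of professional musicians among everyone else
p_musician_given_other = p_musician_given_music_major / relative_likelihood
print(f"Implied musician share among non-music majors: {p_musician_given_other:.4%}")
```

In other words, even a 113-fold relative likelihood starts from a tiny base rate, which is why most music majors still end up working outside of music.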

Sean Smith had the idea for this paper and wrote the first draft in my Economics Senior Capstone class in 2024. After he graduated I joined the paper as a coauthor to get it ready for journals, and it was accepted at SN Social Sciences last week. We share the data and code for the paper here.


Second Quarter GDP Predictions

Back in April I wrote about 4 different estimates of GDP growth and how well they have performed since 2023. With the 2nd quarter of 2025 GDP data coming out next week, what do the best performing predictors currently say?

In that last post, I showed that the Atlanta Fed GDPNow model and the Kalshi betting market were generally the best performers. And furthermore, averaging these two improves the predictive power a little more. As of today, the GDPNow model is predicting 2.4% growth and Kalshi is… also predicting 2.4%!

There will be a few more updates to GDPNow over the next week, and of course Kalshi is constantly updating as more people bet. But as of right now, 2.4% growth seems like a reasonable prediction. That may surprise some people, especially given all of the pessimism surrounding tariffs and policy uncertainty generally. But despite all of this, the US economy appears to be just continuing to chug along.

Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence

We noted last week Meta’s successful efforts to hire away the best of the best AI scientists from other companies, by offering them insane (like $300 million) pay packages. Here we summarize and excerpt an excellent article in Newsweek by Gabriel Snyder who interviewed Meta’s chief AI scientist, Yann LeCun. LeCun discusses some inherent limitations of today’s Large Language Models (LLMs) like ChatGPT. Their limitations stem from the fact that they are based mainly on language; it turns out that human language itself is a very constrained dataset.  Language is readily manipulated by LLMs, but language alone captures only a small subset of important human thinking:

Returning to the topic of the limitations of LLMs, LeCun explains, “An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1—it’s reactive, right? There’s no reasoning,” a reference to Daniel Kahneman’s influential framework that distinguishes between the human brain’s fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).

The limitations of this approach become clear when you consider what is known as Moravec’s paradox – the observation by computer scientist and roboticist Hans Moravec in the late 1980s that it is comparatively easier to teach AI systems higher-order skills like playing chess or passing standardized tests than seemingly basic human capabilities like perception and movement. The reason, Moravec proposed, is that the skills derived from how a human body navigates the world are the product of billions of years of evolution and are so highly developed that humans perform them automatically, while neocortical-based reasoning skills came much later and require much more conscious cognitive effort to master. However, the reverse is true of machines. Simply put, we design machines to assist us in areas where we lack ability, such as physical strength or calculation.

The strange paradox of LLMs is that they have mastered the higher-order skills of language without learning any of the foundational human abilities. “We have these language systems that can pass the bar exam, can solve equations, compute integrals, but where is our domestic robot?” LeCun asks. “Where is a robot that’s as good as a cat in the physical world? We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”

This gap exists because language, for all its complexity, operates in a relatively constrained domain compared to the messy, continuous real world. “Language, it turns out, is relatively simple because it has strong statistical properties,” LeCun says. It is a low-dimensionality, discrete space that is “basically a serialized version of our thoughts.”  

[Bolded emphases added]

Broad human thinking involves hierarchical models of reality, which get constantly refined by experience:

And, most strikingly, LeCun points out that humans are capable of processing vastly more data than even our most data-hungry advanced AI systems. “A big LLM of today is trained on roughly 10 to the 14th power bytes of training data. It would take any of us 400,000 years to read our way through it.” That sounds like a lot, but then he points out that humans are able to take in vastly larger amounts of visual data.

Consider a 4-year-old who has been awake for 16,000 hours, LeCun suggests. “The bandwidth of the optic nerve is about one megabyte per second, give or take. Multiply that by 16,000 hours, and that’s about 10 to the 14th power in four years instead of 400,000.” This gives rise to a critical inference: “That clearly tells you we’re never going to get to human-level intelligence by just training on text. It’s never going to happen,” LeCun concludes…
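LeCun’s back-of-envelope arithmetic checks out; here it is as a quick sketch:

```python
# Optic-nerve bandwidth ~1 MB/s over 16,000 waking hours (LeCun's figures)
bytes_per_second = 1e6
seconds_awake = 16_000 * 3600

visual_bytes = bytes_per_second * seconds_awake
print(f"{visual_bytes:.2e} bytes")  # 5.76e+13, i.e. on the order of 10^14
```

So a 4-year-old’s visual input is indeed comparable in raw bytes to the entire text corpus of a large LLM.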

This ability to apply existing knowledge to novel situations represents a profound gap between today’s AI systems and human cognition. “A 17-year-old can learn to drive a car in about 20 hours of practice, even less, largely without causing any accidents,” LeCun muses. “And we have millions of hours of training data of people driving cars, but we still don’t have self-driving cars. So that means we’re missing something really, really big.”

Like Brooks, who emphasizes the importance of embodiment and interaction with the physical world, LeCun sees intelligence as deeply connected to our ability to model and predict physical reality—something current language models simply cannot do. This perspective resonates with David Eagleman’s description of how the brain constantly runs simulations based on its “world model,” comparing predictions against sensory input. 

For LeCun, the difference lies in our mental models—internal representations of how the world works that allow us to predict consequences and plan actions accordingly. Humans develop these models through observation and interaction with the physical world from infancy. A baby learns that unsupported objects fall (gravity) after about nine months; they gradually come to understand that objects continue to exist even when out of sight (object permanence). He observes that these models are arranged hierarchically, ranging from very low-level predictions about immediate physical interactions to high-level conceptual understandings that enable long-term planning.

[Emphases added]

(Side comment: As an amateur reader of modern philosophy, I cannot help noting that these observations about the importance of recognizing there is a real external world and adjusting one’s models to match that reality call into question the epistemological claim that “we each create our own reality”.)

Given all this, developing the next generation of artificial intelligence must, like human intelligence, embed layers of working models of the world:

So, rather than continuing down the path of scaling up language models, LeCun is pioneering an alternative approach of Joint Embedding Predictive Architecture (JEPA) that aims to create representations of the physical world based on visual input. “The idea that you can train a system to understand how the world works by training it to predict what’s going to happen in a video is a very old one,” LeCun notes. “I’ve been working on this in some form for at least 20 years.”

The fundamental insight behind JEPA is that prediction shouldn’t happen in the space of raw sensory inputs but rather in an abstract representational space. When humans predict what will happen next, we don’t mentally generate pixel-perfect images of the future—we think in terms of objects, their properties and how they might interact.

This approach differs fundamentally from how language models operate. Instead of probabilistically predicting the next token in a sequence, these systems learn to represent the world at multiple levels of abstraction and to predict how their representations will evolve under different conditions.

And so, LeCun is strikingly pessimistic on the outlook for breakthroughs in current LLMs like ChatGPT. He believes LLMs will be largely obsolete within five years, except for narrower purposes, and so he tells upcoming AI scientists not to even bother with them:

His belief is so strong that, at a conference last year, he advised young developers, “Don’t work on LLMs. [These models are] in the hands of large companies, there’s nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs.”

This approach seems to be at variance with other firms, who continue to pour tens of billions of dollars into LLMs. Meta, however, seems focused on next-generation AI, and CEO Mark Zuckerberg is putting his money where his mouth is.

Will LLMs get us the Missing Data for Solving Physics?

Tyler suggested that a “smarter” LLM could not master the unconquered intellectual territory of integrating general relativity and quantum mechanics.

Forget passing Ph.D.-level qualifying exams. (j/k James) Are the AIs going to significantly surpass human efforts in generating new knowledge?

What exactly is the barrier to solving the fundamental mysteries of physics? How do we experimentally confirm that all matter breaks down to vibrating strings?

In a podcast episode of Within Reason, Brian Greene says that we can imagine an experiment that would test the proposed unifying String Theory. The Large Hadron Collider is not big enough (17 miles in circumference is too small). We would need a particle accelerator as big as a galaxy.

ChatGPT isn’t going to get us there. However, Brian Greene did suggest that there is a possibility that an advance in mathematics could get us closer to being able to work with the data we have.

Ben Yeoh summarized what he heard from Tyler et al. at a live event on how much AI will accelerate the growth of our knowledge. They warned that some areas will hit bottlenecks and therefore not advance very fast. Anything that requires clinical trials, for example, isn’t going to proceed at breakneck speed. Ben warns that “protein folding was a rare success,” so we shouldn’t get too excited about acceleration in biotech. If advances in physics require bigger and better physical tools to do more advanced experimental observations, then new AI might not get us far.

However, one of the categories that made Yeoh’s list of where new AI might accelerate progress is “mathematics,” because developing new theories does not face the same kind of physical constraints.

So, to the extent that String Theory is a capital-intensive field, we are unlikely to obtain new definitive tests of it. If AI advances are to bring a solution to this empirical question in my lifetime, it will probably be through advances in mathematics that reduce our reliance on new observational data.

Related links:
my article for the Gospel Coalition – We are not “building God,” despite some claims.
my article for EconLog – AI will be constrained by the same problem that David Hume faced. AI can predict what is likely to occur in the future based on what it has observed in the past.

“The big upward trend in Generative AI/LLM tool use in 2025 continues but may be slowing.” Have we reached a plateau, at least temporarily? Have we already experienced the big upswing in productivity, and it’s going to level out now? At least programming will be less painful forever after?

“LLM Hallucination of Citations in Economics Persists with Web-Enabled Models.” I realize that, as of today, you can pay for yet-better models than what we tested. But if web-enabled 4o can’t cite Krugman properly, you do wonder if “6o” will be integrating general relativity and quantum mechanics. A slightly longer context window probably isn’t going to do it.