Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence

We noted last week Meta’s successful efforts to hire away the best of the best AI scientists from other companies, by offering them insane (like $300 million) pay packages. Here we summarize and excerpt an excellent article in Newsweek by Gabriel Snyder who interviewed Meta’s chief AI scientist, Yann LeCun. LeCun discusses some inherent limitations of today’s Large Language Models (LLMs) like ChatGPT. Their limitations stem from the fact that they are based mainly on language; it turns out that human language itself is a very constrained dataset.  Language is readily manipulated by LLMs, but language alone captures only a small subset of important human thinking:

Returning to the topic of the limitations of LLMs, LeCun explains, “An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1—it’s reactive, right? There’s no reasoning,” a reference to Daniel Kahneman’s influential framework that distinguishes between the human brain’s fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).

The limitations of this approach become clear when you consider what is known as Moravec’s paradox—the observation by computer scientist and roboticist Hans Moravec in the late 1980s that it is comparatively easy to teach AI systems higher-order skills like playing chess or passing standardized tests, but very hard to teach them seemingly basic human capabilities like perception and movement. The reason, Moravec proposed, is that the skills involved in how a human body navigates the world are the product of billions of years of evolution and are so highly developed that they have become automatic for us, while neocortical reasoning skills came much later and require much more conscious cognitive effort to master. The reverse is true of machines: simply put, we design machines to assist us in areas where we lack ability, such as physical strength or calculation.

The strange paradox of LLMs is that they have mastered the higher-order skills of language without learning any of the foundational human abilities. “We have these language systems that can pass the bar exam, can solve equations, compute integrals, but where is our domestic robot?” LeCun asks. “Where is a robot that’s as good as a cat in the physical world? We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”

This gap exists because language, for all its complexity, operates in a relatively constrained domain compared to the messy, continuous real world. “Language, it turns out, is relatively simple because it has strong statistical properties,” LeCun says. It is a low-dimensional, discrete space that is “basically a serialized version of our thoughts.”

[Bolded emphases added]

Broad human thinking involves hierarchical models of reality, which get constantly refined by experience:

And, most strikingly, LeCun points out that humans are capable of processing vastly more data than even our most data-hungry advanced AI systems. “A big LLM of today is trained on roughly 10 to the 14th power bytes of training data. It would take any of us 400,000 years to read our way through it.” That sounds like a lot, but then he points out that humans are able to take in vastly larger amounts of visual data.

Consider a 4-year-old who has been awake for 16,000 hours, LeCun suggests. “The bandwidth of the optic nerve is about one megabyte per second, give or take. Multiply that by 16,000 hours, and that’s about 10 to the 14th power in four years instead of 400,000.” This gives rise to a critical inference: “That clearly tells you we’re never going to get to human-level intelligence by just training on text. It’s never going to happen,” LeCun concludes…
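LeCun’s arithmetic holds up, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of LeCun's optic-nerve estimate.
bytes_per_second = 1e6            # "about one megabyte per second, give or take"
seconds_awake = 16_000 * 3600     # a 4-year-old awake for 16,000 hours

total_bytes = bytes_per_second * seconds_awake
print(f"{total_bytes:.2e} bytes")  # 5.76e+13 bytes, i.e. on the order of 10^14
```

That is roughly the same 10^14 bytes as a frontier LLM’s text corpus, absorbed in four years rather than 400,000.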

This ability to apply existing knowledge to novel situations represents a profound gap between today’s AI systems and human cognition. “A 17-year-old can learn to drive a car in about 20 hours of practice, even less, largely without causing any accidents,” LeCun muses. “And we have millions of hours of training data of people driving cars, but we still don’t have self-driving cars. So that means we’re missing something really, really big.”

Like Brooks, who emphasizes the importance of embodiment and interaction with the physical world, LeCun sees intelligence as deeply connected to our ability to model and predict physical reality—something current language models simply cannot do. This perspective resonates with David Eagleman’s description of how the brain constantly runs simulations based on its “world model,” comparing predictions against sensory input. 

For LeCun, the difference lies in our mental models—internal representations of how the world works that allow us to predict consequences and plan actions accordingly. Humans develop these models through observation and interaction with the physical world from infancy. A baby learns that unsupported objects fall (gravity) after about nine months; they gradually come to understand that objects continue to exist even when out of sight (object permanence). He observes that these models are arranged hierarchically, ranging from very low-level predictions about immediate physical interactions to high-level conceptual understandings that enable long-term planning.

[Emphases added]

(Side comment: As an amateur reader of modern philosophy, I cannot help noting that these observations about the importance of recognizing there is a real external world and adjusting one’s models to match that reality call into question the epistemological claim that “we each create our own reality”.)

Given all this, developing the next generation of artificial intelligence must, like human intelligence, embed layers of working models of the world:

So, rather than continuing down the path of scaling up language models, LeCun is pioneering an alternative approach, the Joint Embedding Predictive Architecture (JEPA), which aims to create representations of the physical world based on visual input. “The idea that you can train a system to understand how the world works by training it to predict what’s going to happen in a video is a very old one,” LeCun notes. “I’ve been working on this in some form for at least 20 years.”

The fundamental insight behind JEPA is that prediction shouldn’t happen in the space of raw sensory inputs but rather in an abstract representational space. When humans predict what will happen next, we don’t mentally generate pixel-perfect images of the future—we think in terms of objects, their properties and how they might interact.

This approach differs fundamentally from how language models operate. Instead of probabilistically predicting the next token in a sequence, these systems learn to represent the world at multiple levels of abstraction and to predict how their representations will evolve under different conditions.
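A toy numerical sketch can make the contrast concrete. To be clear, the code below is not Meta’s JEPA (the dimensions, weights, and names here are all invented); it only illustrates the core idea of scoring a prediction in a small learned representation space rather than against every raw pixel:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frame, W):
    """Map a raw 'frame' to a low-dimensional abstract representation."""
    return np.tanh(W @ frame)

# Invented dimensions: 64-"pixel" frames, 8-dimensional latent space.
W = rng.normal(size=(8, 64)) * 0.1             # stands in for a trained encoder
frame_t = rng.normal(size=64)                  # frame at time t
frame_next = rng.normal(size=64)               # frame at time t+1

# Generative (pixel-space) objective: predict all 64 pixels of the next frame.
pixel_error = np.mean((frame_next - frame_t) ** 2)

# JEPA-style objective: predict only the next frame's 8-dimensional embedding.
predictor = rng.normal(size=(8, 8)) * 0.1      # stands in for a trained predictor
z_t, z_next = encode(frame_t, W), encode(frame_next, W)
embedding_error = np.mean((z_next - predictor @ z_t) ** 2)

print(pixel_error, embedding_error)
```

The point is not the particular numbers but the shapes: the second objective asks the model to get 8 abstract quantities right instead of 64 raw ones, leaving unpredictable pixel-level detail out of the loss entirely.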

And so, LeCun is strikingly pessimistic on the outlook for breakthroughs in current LLMs like ChatGPT. He believes LLMs will be largely obsolete within five years, except for narrower purposes, and so he tells upcoming AI scientists not to bother with them:

His belief is so strong that, at a conference last year, he advised young developers, “Don’t work on LLMs. [These models are] in the hands of large companies, there’s nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs.”

This approach is at variance with that of other firms, who continue to pour tens of billions of dollars into LLMs. Meta, however, is focusing on next-generation AI, and CEO Mark Zuckerberg is putting his money where his mouth is.

Meta Is Poaching AI Talent With $100 Million Pay Packages; Will This Finally Create AGI?

This month I have run across articles noting that Meta’s Mark Zuckerberg has been making mind-boggling pay offers (like $100 million/year for 3-4 years) to top AI researchers at other companies, plus the promise of huge resources and even (gasp) personal access to Zuck himself. Reports indicate that he is succeeding in hiring around 50 brains from OpenAI (home of ChatGPT), Anthropic, Google, and Apple. Maybe this concentration of human intelligence will result in the long-craved artificial general intelligence (AGI) being realized; there seems to be some recognition that current Large Language Models will not get us there.

There are, of course, other interpretations being put on this maneuver. Some talking heads on a Bloomberg podcast speculated that Zuckerberg was deliberately using Meta’s mighty cash flow to starve competitors of top AI talent. They also speculated that (since there is a limit to how much money you can possibly, pleasurably spend), if you pay some guy $100 million in a year, a rational outcome would be that he would quit and spend the rest of his life hanging out at the beach. (That, of course, is what Bloomberg finance types might think, who measure worth mainly in terms of money, not in the fun of doing cutting-edge R&D.)

I found a thread on Reddit insightful and amusing, and so I post chunks of it below. Here is the earnest, optimistic OP:

andsi2asi

Zuckerberg’s ‘Pay Them Nine-Figure Salaries’ Stroke of Genius for Building the Most Powerful AI in the World

Frustrated by Yann LeCun’s inability to advance Llama to where it is seriously competing with top AI models, Zuckerberg has decided to employ a strategy that makes consummate sense.

To appreciate the strategy in context, keep in mind that OpenAI expects to generate $10 billion in revenue this year, but will also spend about $28 billion, leaving it in the red by about $18 billion. My main point here is that we’re talking big numbers.

Zuckerberg has decided to bring together 50 ultra-top AI engineers by enticing them with nine-figure salaries. Whether they will be paid $100 million or $300 million per year has not been disclosed, but it seems like they will be making a lot more in salary than they did at their last gig with Google, OpenAI, Anthropic, etc.

If he pays each of them $100 million in salary, that will cost him $5 billion a year. Considering OpenAI’s expenses, suddenly that doesn’t sound so unreasonable.

I’m guessing he will succeed at bringing this AI dream team together. It’s not just the allure of $100 million salaries. It’s the opportunity to build the most powerful AI with the most brilliant minds in AI. Big win for AI. Big win for open source

And here are some wry responses:

kayakdawg

counterpoint 

a. $5B is just for those 50 researchers, loootttaaa other costs to consider

b. zuck has a history of burning big money on r&d with theoretical revenue that doesnt materialize

c. brooks law: creating agi isn’t an easily divisible job – in fact, it seems reasonable to assume that the more high-level experts enter the project the slower it’ll progress given the communication overhead

7FootElvis

Exactly. Also, money alone doesn’t make leadership effective. OpenAI has a relatively single focus. Meta is more diversified, which can lead to a lack of necessary vision in this one department. Passion, if present at the top, is also critical for bleeding edge advancement. Is Zuckerberg more passionate than Altman about AI? Which is more effective at infusing that passion throughout the organization?

….

dbenc

and not a single AI researcher is going to tell Zuck “well, no matter how much you pay us we won’t be able to make AGI”

meltbox

I will make the AI by one year from now if I am paid $100m

I just need total blackout so I can focus. Two years from now I will make it run on a 50w chip.

I promise

Economic Impact of Agricultural Worker Deportations Leads to Administration Policy Reversals

Here is a chart of the evolution of U.S. farm workforce between 1991 and 2022:

Source: USDA

A bit over 40% of current U.S. farm workers are illegal immigrants. In some regions and sectors, the percentage is much higher. The work is often uncomfortable and dangerous, and far from the cool urban centers. This is work that very few U.S.-born workers would consider doing unless the pay was very high, so it would be difficult to replace the immigrant labor on farms in the near term. I don’t know how much the need for manpower would change if cheap illegal labor were unavailable and productivity had to be supplemented with automation.

It apparently didn’t occur to some members of the administration that deporting a lot of these workers (and frightening the rest into hiding) would have a crippling effect on American agriculture. Sure enough, there have recently been reports in some areas of workers not showing up and crops going unharvested.

It is difficult for me as a non-expert to determine how severe and widespread the problems actually are so far. Anti-Trump sources naturally emphasize the genuine problems that do exist and predict apocalyptic melt-down, whereas other sources are more measured. I suspect that the largest agribusinesses have kept better abreast of the law, while smaller operations have cut legal corners and may have that catch up to them. For instance, a small meat packer in Omaha reported operating at only 30% capacity after ICE raids, whereas the CEO of giant Tyson Foods claimed that “everyone who works at Tyson Foods is authorized to do so,” and that the company “is in complete compliance” with all the immigration regulations.

With at least some of these wholly predictable problems from mass deportations now becoming reality, the administration is undergoing internal debates and policy adjustments in response. On June 12, President Trump very candidly acknowledged the issue, writing on Truth Social, “Our great Farmers and people in the hotel and leisure business have been stating that our very aggressive policy on immigration is taking very good, long-time workers away from them, with those jobs being almost impossible to replace…. We must protect our Farmers, but get the CRIMINALS OUT OF THE USA. Changes are coming!” 

The next day, ICE official Tatum King wrote regional leaders to halt investigations of the agricultural industry, along with hotels and restaurants. That directive was apparently walked back a few days later, under pressure from outraged conservative supporters and from Deputy White House Chief of Staff Stephen Miller. Miller, an immigration hard-liner, wants to double the ICE deportation quota, up to 3,000 per day.

This issue could go in various ways from here. Hard-liners on the left and on the right have a way of pushing their agendas to unpalatable extremes. It can be argued that the Democrats could easily have won in 2024 had their policies been more moderate. Similarly, if immigration hard-liners get their way now, I predict that the result will be their worst nightmare: a public revulsion against enforcing immigration laws in general. If farmers and restaurateurs start going bust, and food shortages and price spikes appear in the supermarket, public support for the administration and its project of deporting illegal immigrants will reverse in a big way. Some right-wing pundits would not be bothered by an electoral debacle, since their style is to stay constantly outraged, and (as the liberal news outlets currently demonstrate), it is easier to project non-stop outrage when your party is out of power.

An optimist, however, might see in this controversy an opening for some sort of long-term, rational solution to the farm worker issue. Agricultural Secretary Brooke Rollins has proposed expansion of the H-2A visa program, which allows for temporary agricultural worker residency to fill labor shortages. This is somewhat similar to the European guest worker programs, though with significant differences. H-2A requires the farmer to provide housing and take legal responsibility for his or her workers. H-2B visas allow for temporary non-agricultural workers, without as much employer responsibility. A bill was introduced into Congress with bi-partisan support to modernize the H-2A program, so that legislative effort may have legs. Maybe there can be a (gasp!) compromise.

President Trump last week came out strongly in favor of this sort of solution, with a surprisingly positive take on the (illegal) workers who have worked diligently on a farm for years. By “put you in charge” he seems to be referring to the responsibilities that H-2A employers undertake for their employees, and perhaps extending that to H-2B employers. He acknowledges that the far right will not be happy, but hopes “they’ll understand.” From Newsweek:

“We’re working on legislation right now where – farmers, look, they know better. They work with them for years. You had cases where…people have worked for a farm, on a farm for 14, 15 years and they get thrown out pretty viciously and we can’t do it. We gotta work with the farmers, and people that have hotels and leisure properties too,” he said at the Iowa State Fairgrounds in Des Moines on Thursday.

“We’re gonna work with them and we’re gonna work very strong and smart, and we’re gonna put you in charge. We’re gonna make you responsible and I think that that’s going to make a lot of people happy. Now, serious radical right people, who I also happen to like a lot, they may not be quite as happy but they’ll understand. Won’t they? Do you think so?”

We shall see.

Central Banks Are Buying Gold; Should You?

Anyone who reads financial headlines knows that gold prices have soared in the past year. Why?

Gold has historically been a relatively stable store of value, and that role seems to be returning after decades of relative neglect. Official numbers show sharply increased buying by the world’s central banks, led by China, Poland, and Azerbaijan in early 2025. Russia, India and Turkey have also been major buyers. There is widespread conviction that actual gold purchases are appreciably higher than the officially reported numbers, with the under-reporting serving to side-step President Trump’s threatened extra tariffs on nations seen as de-dollarizing.

I think the most proximate cause for the sharp run-up in gold prices in the past twelve months has been the profligate U.S. federal budget deficit, under both administrations. This is convincing key world actors that the dollar will become increasingly devalued over time, no matter which party is in power. Thus, it is prudent to get out of dollars and dollar-denominated assets like U.S. T-bonds.

Trump’s erratic and offensive policies and statements in 2025 have added to the desire to diversify away from U.S. assets. This is in addition to the alarm in non-Western countries over the impoundment of Russian dollar-related assets in connection with the ongoing Russian invasion of Ukraine. Also, there is something of a self-fulfilling momentum aspect to any asset: the more it goes up, the more it is expected to go up.

This informative chart of central bank gold net purchasing is courtesy of Weekend Investing:

Interestingly, central banks were net sellers in the 1990s and early 2000s; it was an era of robust economic growth, gold prices were stagnant or declining, and it seemed pointless to hold shiny metal bars when one could invest in financial assets with higher rates of return. The Global Financial Crisis of 2008-2009 apparently sobered up the world as to the fragility of financial assets, making solid metal bars look pretty good. Then, as noted, the Western reaction to the Russian attack on Ukraine spurred central bank gold buying, as this blog predicted back in March 2022.

Private investors are also buying gold, for similar reasons as the central banks. Gold offers portfolio diversification as a clear alternative to all paper assets. In theory it should offer something of an inflation hedge, but its price does not always track inflation or interest rates.

Here is how gold (using GLD fund as a proxy) has fared versus stocks (S&P 500 index) and intermediate term U. S. T-bonds (IEF fund) in the past year:

Gold is up by 40%, compared to 12.6% for stocks. That is huge outperformance. This was driven largely by the fact that gold rose strongly in the Feb-April timeframe, while stocks were collapsing.

Below we zoom out to look at the past ten years, and include the intermediate-term T-bond fund IEF:

Gold prices more than doubled from 2008 to 2011, then suffered a long, painful decline over the next two years. Prices were then fairly stagnant for the mid-2010s, rose significantly 2019-2020, then stagnated again until taking off in 2023. Stocks have been much more erratic. Most of the time stock returns were above gold, but the 2020 and 2024 plunges brought stocks down to rough parity with gold. Since about 2019, T-bonds have been pathetic; pity the poor investor who has been (according to traditional advice) 40% invested in investment-grade bonds.

How to invest in gold? Hard-core gold bugs want the actual coins (no one can afford a full bullion bar) to rub between their fingers and keep in their own physical custody. You can buy coins from on-line dealers or local dealers. Coins are available from the U.S. Mint, but reportedly their mark-ups are often higher than on the secondary market.

An easier route for most folks is to buy into a gold-backed stock fund. The biggest is GLD, which has over $100 billion in assets. There has long been an undercurrent of suspicion among gold bugs that GLD’s gold is not reliably audited or that it is loaned out; they refer derisively to GLD as “paper gold” or gold derivatives.  The fund itself claims that it never lends out its gold, and that its bars are held in the vaults of the custodian banks JPMorgan Chase Bank, N.A. and HSBC Bank plc, and are independently audited. The suspicious crowd favors funds like Sprott Physical Gold Trust, PHYS. PHYS is claimed to have a stronger legal claim on its physical gold than GLD. However, PHYS is a closed-end fund, which means it does not have a continuous creation process like GLD, an open-end ETF. This can lead to discrepancies between the fund’s share price and the value of its gold holdings. It does seem like PHYS loses about 1% per year relative to GLD.
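The practical consequence of PHYS’s closed-end structure is that its share price can drift from the value of the underlying gold. Here is a quick illustration of computing that premium or discount, with invented figures (not actual PHYS or GLD quotes):

```python
# Hypothetical figures for illustration only -- not real fund quotes.
nav_per_share = 20.00     # net asset value: the gold backing each share
market_price = 19.60      # what the shares actually trade for

premium = (market_price - nav_per_share) / nav_per_share
print(f"{premium:+.1%}")  # -2.0%, i.e. shares trade at a 2% discount to the gold
```

An open-end ETF like GLD keeps this gap tiny because authorized participants can create or redeem shares whenever the price strays from NAV; a closed-end trust has no such valve, so persistent discounts or premiums can appear.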

Disclaimer: Nothing here should be taken as advice to buy or sell any security.

Saving Money by Ordering Car Parts from Amazon or eBay

Here is a personal money-saving anecdote from this week. A medium-sized dead branch fell from a tall tree and ripped off the driver-side mirror on my old Honda. My local repair shop said it would cost around $600 to replace it. That is a significant percentage of what the old clunker is worth. Ouch.

They kindly noted that most of that cost was for ordering a replacement mirror assembly from Honda, which would cost over $400 and take several days to arrive. I asked if I could try to get a mirror from a junkyard, to save money. The repair guy said they would be willing to install a part I brought in, but suggested eBay or Amazon instead.

Back 20 years ago, before online commerce was so established, my local repair shop would routinely save us money by getting used parts from some sort of junkyard network.

So, I started looking into that route. First, junkyards are not junkyards anymore; they are “salvage yards.” Second, it turns out that removing a side mirror from a Honda is not a simple matter: you have to remove the whole inside plastic door panel to get at the mirror mounting screws, and removing that panel has some complications. Also, I could not find a clear online resource for locating parts at regional salvage yards. It looks like you have to drive to a salvage yard and perhaps have them search some sort of database to find a comparable vehicle somewhere that might have the part you want.


All this seemed like a lot of hassle, so I went to eBay and found a promising-looking new replacement part there for about $56, including shipping. It would take about a week to arrive (probably being direct-shipped from China). On Amazon, I found essentially the same part for about $63, available the next day. For the small difference in price, I went the Amazon route, partly for the no-hassle returns if the part turned out to be defective and partly because I get 5% back on my Amazon credit card.

I just got the car back from the repair shop with the replacement mirror, and it works fine. The total cost, with labor, was about $230, which is much better than the original $600+ estimate.


I’m not sure how broadly to generalize this experience. Some further observations:

( 1 ) For a really critical car part, I’d have to consider carefully whether the Chinese knock-off would perform appreciably worse than some name-brand part, although I believe many repair shops often use parts that are not strictly original parts.

( 2 ) Commonly replaced parts like oil and air filters are typically cheaper to buy on-line than from your local Auto Zone or other local merchant. I like supporting local shops, so sometimes I eat the few extra $$ and shopping time, and buy from bricks and mortar.

( 3 ) Some repair shops make significant money on their markup on parts, and so they might not be happy about you bringing in your own parts. They also might decline to warrant the operation of that part. And many big box franchise repair shops may simply refuse to install customer-supplied parts.

( 4 ) For a newish car, still under warranty, the manufacturer warranty might be affected by using non-original parts.

( 5 ) Back to junk/salvage yards: there are some car parts, so-called hard parts, that are expected to last the life of the car. Things like the mounting brackets for engine parts. Typically, no spares of these are manufactured. So, if one of those parts gets dinged up in an accident, your only option may be used parts taken from a junker.

Did Apple’s Recent “Illusion of Thinking” Study Expose Fatal Shortcomings in Using LLM’s for Artificial General Intelligence?

Researchers at Apple last week published a paper with the provocative title “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” This paper has generated an uproar in the AI world. Having “The Illusion of Thinking” right there in the title is pretty in-your-face.

Traditional Large Language Model (LLM) artificial intelligence programs like ChatGPT train on massive amounts of human-generated text to be able to mimic human outputs when given prompts. A recent trend (mainly starting in 2024) has been the incorporation of more formal reasoning capabilities into these models. The enhanced models are termed Large Reasoning Models (LRMs). Some leading models, like OpenAI’s GPT, Claude, and the Chinese DeepSeek, now exist both in regular LLM form and as LRM versions.

The authors applied both the regular (LLM) and “thinking” (LRM) versions of Claude 3.7 Sonnet and DeepSeek to a number of mathematical puzzles; OpenAI’s o-series models were used to a lesser extent. An advantage of these puzzles is that researchers can dial in more or less complexity while keeping the basic form of the puzzle.
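Tower of Hanoi, one of the puzzles used, shows how that dial works: the number of disks n sets the difficulty, since the shortest solution takes 2^n - 1 moves. A minimal recursive solver:

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the optimal move list for n disks: always 2**n - 1 moves."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)      # park n-1 disks on the spare peg
            + [(src, dst)]                   # move the largest disk
            + hanoi(n - 1, aux, src, dst))   # restack the n-1 disks on top

for n in (3, 10, 15):
    print(n, len(hanoi(n)))  # 7, 1023, and 32767 moves respectively
```

Each added disk doubles the required work, which is exactly the kind of knob the researchers could turn while keeping the puzzle’s form fixed.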

They found, among other things, that the LRMs did well up to a certain point, then suffered “complete collapse” as complexity was increased. Also, at low complexities, LLMs actually outperform LRMs. And (perhaps the most vivid evidence of lack of actual understanding on the part of these programs), when they were explicitly offered an efficient direct solution algorithm in the prompt, the programs did not take advantage of it, but instead just kept grinding away in their usual fashion.

As might be expected, AI skeptics were all over the blogosphere, saying, in effect, “I told you so: LLMs are just massive exercises in pattern matching and cannot extrapolate outside their training set.” This has massive implications for what we can expect in the near or intermediate future. Among other things, optimism about AI progress is largely what is fueling the stock market, and also capital investment in this area: companies like Meta and Google are spending ginormous sums trying to develop artificial “general” intelligence, paying for ginormous amounts of compute power, with those dollars flowing to firms like Microsoft and Amazon building out data centers and buying chips from Nvidia. If the AGI emperor has no clothes, all this spending might come to a screeching halt.

Ars Technica published a fairly balanced account of the controversy, concluding that, “Even elaborate pattern-matching machines can be useful in performing labor-saving tasks for the people that use them… especially for coding and brainstorming and writing.”

Comments on this article included ones like:

LLMs do not even know what the task is, all it knows is statistical relationships between words.   I feel like I am going insane. An entire industry’s worth of engineers and scientists are desperate to convince themselves a fancy Markov chain trained on all known human texts is actually thinking through problems and not just rolling the dice on what words it can link together.

And

if we equate combinatorial play and pattern matching with genuinely “generative/general” intelligence, then we’re missing a key fact here. What’s missing from all the LLM hubris and enthusiasm is a reflexive consciousness of the limits of language, of the aspects of experience that exceed its reach and are also, paradoxically, the source of its actual innovations. [This is profound, he means that mere words, even billions of them, cannot capture some key aspects of human experience]
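The “fancy Markov chain” jibe in the first comment can be made concrete with a toy bigram model: it records only which word tends to follow which, then rolls the dice. This is the purely statistical picture the commenter has in mind (real LLMs are enormously more sophisticated, but the critics’ claim is that the principle is the same):

```python
import random
from collections import defaultdict

# Toy bigram "language model": knows only which word follows which.
corpus = "the cat sat on the mat and the dog sat on the cat".split()
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, length, seed=0):
    """Chain word-to-word statistics together; no understanding involved."""
    random.seed(seed)
    words = [start]
    while len(words) < length and follows[words[-1]]:
        words.append(random.choice(follows[words[-1]]))
    return " ".join(words)

print(generate("the", 8))  # fluent-looking but meaning-free word chains
```

Whether scaling this principle up by a trillion parameters yields something qualitatively different is, of course, the whole debate.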

However, the AI bulls have mounted various come-backs to the Apple paper. The most effective I know of so far was published by Alex Lawsen, a researcher at Open Philanthropy. Lawsen’s rebuttal, titled “The Illusion of the Illusion of Thinking,” was summarized by Marcus Mendes. To summarize the summary: Lawsen claimed that the models did not in general “collapse” in some crazy way. Rather, the models in many cases recognized that they would not be able to solve the puzzles given the constraints input by the Apple researchers. Therefore, they (rather intelligently) did not waste compute power by grinding away at a necessarily incomplete solution, but just stopped. Lawsen further showed that the way Apple ran the LRM models did not allow them to perform as well as they could. When he made a modest, reasonable change in the operation of the LRMs:

Models like Claude, Gemini, and OpenAI’s o3 had no trouble producing algorithmically correct solutions for 15-disk Hanoi problems, far beyond the complexity where Apple reported zero success.

Lawsen’s conclusion: When you remove artificial output constraints, LRMs seem perfectly capable of reasoning about high-complexity tasks. At least in terms of algorithm generation.

And so, the great debate over the prospects of artificial general intelligence will continue.

The Comeback of Gold as Money

According to Merriam-Webster, “money” is “something generally accepted as a medium of exchange, a measure of value, or a means of payment.” Money, in its various forms, also serves as a store of value. Gold has maintained the store-of-value function all through the past centuries, including our own times; as an investment, gold has done well in the past couple of decades. I plan to write more later on the investment aspect, but here I focus on the use of physical gold as a means of payment or exchange, or as backing for a means of exchange.

Gold, typically in the form of standardized coins, served the means-of-exchange function for thousands of years. Starting in the Renaissance, however, banks started issuing paper certificates which were exchangeable for gold. For daily transactions, the public found it more convenient to handle these bank notes than the gold pieces themselves, and so the notes were used instead of gold as money.

In the late nineteenth and early twentieth centuries, leading paper currencies like the British pound and the U.S. dollar were theoretically backed by gold; one could turn in a dollar and convert it to the precious metal. Most countries dropped the convertibility to gold during the Great Depression of the 1930s, so their currencies became entirely “fiat” money, not tied to any physical commodity. For the U.S. dollar, there was limited convertibility to gold after World War II as part of the Bretton Woods system of international currencies, but even that convertibility ended in 1971. In fact, it was illegal for U.S. citizens to own much in the way of physical gold from FDR’s (infamous?) executive order in 1933 until Gerald Ford signed its repeal in 1974.

So gold has been essentially extinct as active money for nearly a hundred years. The elite technocrats who manage national financial affairs have been only too happy to dance on its grave. Keynes famously denounced the gold standard as a “barbarous relic”, standing in the way of purposeful management of national money matters.

However, gold seems to be making something of a comeback, on several fronts. Most notably, several U.S. states have promoted the use of gold in transactions. Deep-red Utah has led the way.  In 2011, Utah passed the Legal Tender Act, recognizing gold and silver coins issued by the federal government as legal tender within the state. This legislation allows individuals to transact in gold and silver coins without paying state capital gains tax.  The Utah House and Senate passed bills in 2025 to authorize the state treasurer to establish a precious metals-backed electronic payment platform, which would enable state vendors to opt for payments in physical gold and silver. The Utah governor vetoed this bill, though, claiming it was “operationally impractical.” 

Meanwhile, in Texas:

The new legislation, House Bill 1056, aims to give Texans the ability, likely through a mobile app or debit card system, to use gold and silver they hold in the state’s bullion depository to purchase groceries or other standard items.

The bill would also recognize gold and silver as legal tender in Texas, with the caveat that the state’s recognition must also align with currency laws laid out in the U.S. Constitution.

“In short, this bill makes gold and silver functional money in Texas,” Rep. Mark Dorazio (R-San Antonio), the main driving force behind the effort, said during one 2024 presentation. “It has to be functional, it has to be practical and it has to be usable.”

Arkansas and Florida have also passed laws allowing the use of gold and silver as legal tender. A potential problem is that under current IRS law, gold and silver are generally classified as collectibles and subject to potential capital gains taxes when transactions occur. Texas legislator Dorazio has argued that liability would go away if the metals are classified as functional money, although he’s also acknowledged the tax issue “might end up being decided by the courts.”

But as Europeans found back in the day, carrying around actual clinking gold coins for purchasing and making change is much more of a hassle than paper transactions. And so, various convenient payment or exchange methods, backed by physical gold, have recently arisen.

Since it is relatively easy and lucrative to spawn a new cryptocurrency (which is why there are thousands of them), it is not surprising that there are now several coins supposedly backed by bullion. These include Paxos Gold (PAXG) and Tether Gold (XAUT). The gold of Paxos is stored in the worldwide vaults of Brink’s, and is regularly audited by a credible third party. Tether Gold’s metal supposedly resides somewhere in Switzerland; the firm itself is incorporated in the British Virgin Islands. Tether in general does not conduct regular audits; its official statements dance around that fact. These crypto coins, like bullion itself or various funds like GLD that hold gold, are in practice probably mainly an investment vehicle (store of value), rather than an active medium of exchange.

However, getting down to the consumer level of payment convenience, we now have a gold-backed credit card (Glint) and debit card (VeraCash Mastercard). Both of these hold their gold in Swiss vaults. The funds you place with these companies have gold allocated to them, so these are a (seemingly cost-effective) means to own gold. If you get nervous, you can actually (subject to various rules) redeem your funds for actual shiny yellow metal.

“Final Notice” Traffic Ticket Smishing Scam

Yesterday I got a scary-sounding text message, claiming that I have an outstanding traffic ticket in a certain state, and threatening me with the following if I did not pay within two days:

We will take the following actions:

1. Report to the DMV Breach Database

2. Suspend your vehicle registration starting June 2

3. Suspension of driving privileges for 30 days…

4. You may be sued and your credit score will suffer

Please pay immediately before execution to avoid license suspension and further legal disputes.

Oh, my!

A link (which I did NOT click on) was provided for “payment”.

I also got an almost (not quite) identical text a few days earlier. I was almost sure these were scams, but it was comforting to confirm that by going to the web and reading that, yes, these sorts of texts are the flavor of the month in remote rip-offs; as a rule, states do not send out threatening texts with payment links in them.

These texts are examples of “smishing”, which is phishing (to collect identity or bank/credit card information) via SMS text messaging. It must be a lucrative practice. According to spam blocker Robokiller, Americans received 19.2 billion spam robo texts in May 2025. That’s nearly 63 spam texts for every person in the U.S.

Beside these traffic ticket scams, I often get texts asking me to click to track delivery of some package, or to prevent the misuse of my credit card, etc. I have been spared text messages from the Nigerian prince who needs my help to claim his rightful inheritance; I did get an email from him some years back.

The FTC keeps a database called Sentinel on fraud complaints made to the FTC and to law enforcement agencies. People reported losing a total of $12 billion to fraud in 2024, an increase of $2 billion over the previous year. That is a LOT of money (and a commentary on how wealthy Americans are, if that much can get skimmed off with little net impact on society). The biggest single category for dollar loss was investment scams; the number of victims was smaller than for other categories, but the median loss per victim ($9,200) was quite high. Other categories with high median losses per victim were Business and Job Opportunities ($2,250) and Mortgage Foreclosure Relief and Debt Management ($1,500).

Imposter scams like the texts I have gotten (sender pretending to be from state DMV, post office, bank, credit card company, etc.) were by far the largest category by number reported (845,806 in 2024). Of those imposter reports, 22% involved actual losses ($800 median loss), totaling a hefty $2,952 million. That is a juicy enough haul to keep those robo frauds coming.
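A little arithmetic on those imposter-scam figures is revealing. The report count, loss share, median, and total are the FTC numbers quoted above; the implied mean loss is my own back-of-envelope calculation:

```python
# Back-of-envelope check on the FTC Sentinel imposter-scam figures cited above.
reports = 845_806            # imposter-scam reports in 2024
share_with_loss = 0.22       # fraction of reports with an actual dollar loss
total_loss = 2_952e6         # $2,952 million in total reported losses

loss_reports = reports * share_with_loss     # ~186,000 reports with a loss
mean_loss = total_loss / loss_reports        # ~$15,900 per losing victim

# The mean is roughly 20x the $800 median, meaning the distribution is
# heavily skewed: a small number of very large losses drives the total.
print(f"{loss_reports:,.0f} loss reports, mean loss ${mean_loss:,.0f}")
```

That gap between the $800 median and the ~$16,000 mean suggests most victims lose modest sums while a few lose catastrophically, which is what keeps the overall haul so large.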

How to not get scammed: Be suspicious of every email or text, especially ones that prey on emotions like fear, greed, or curiosity and try to induce you to make payments or pry personal information out of you. If it purports to come from some known entity like Bank of America or your state DMV, contact said entity directly to check it out. If you don’t click on anything (or reply in any way to the text, like responding with Y or N), it can’t hurt you.

I’m not sure how much they can do, considering the bad guys tend to hijack legit phone numbers for their dirty work, but you can mark these texts as spam to help your phone carrier improve their spam detection algorithm. Also, reporting scam texts to the U.S. Federal Trade Commission and/or the FBI’s Internet Crime Complaint Center can help build their data set, and perhaps lead to law enforcement actions.

Later add: According to EZPass, here is how to report text scams:

You can report smishing messages to your cell carrier by following this FCC guidance.  This service is provided by most cell carriers.

  1. Hold down the spam TXT/SMS message with your finger
  2. Select the “Forward” option
  3. Enter 7726 as the recipient and press “Send”

Additionally, to report the message to the FBI, visit the FBI’s Internet Crime Complaint Center (ic3.gov) and select ‘File a Complaint’ to do so.  When completing the complaint, include the phone number where the smishing text originated, and the website link listed within the text.

Wild Pigs Are a Big Problem; You, Too, Can Thin the Herds from a Chopper with a Machine Gun

Wild pigs kill more people worldwide than sharks do (I didn’t know that a week ago). They do much damage to agriculture and the environment, and transmit diseases:

According to the U.S. Department of Agriculture, feral hogs cause approximately $2.5 billion in agricultural damages each year…Nearly 300 native plant and animal species in the U.S. are in rapid decline because of feral swine, and many of the species are already at risk, according to Animal and Plant Health Inspection Service. The swine also carry at least 40 parasites, 30 bacterial and viral illnesses, and can infect humans, livestock and other animals with diseases like brucellosis and tuberculosis

Besides eating and injuring crops and livestock, hogs damage the environment:

…They will also feed on tree seeds and seedlings, causing significant damage in forests, groves and plantations… Rooting — digging for foods below the surface of the ground — destabilizes the soil surface, uprooting or weakening native vegetation, damaging lawns and causing erosion. Their wallowing behavior destroys small ponds and stream banks, which may affect water quality. They also prey upon ground-nesting wildlife, including sea turtles. Wild hogs compete for food with other game animals such as deer, turkeys and squirrels, and they may consume the nests and young of many reptiles, ground-nesting birds and mammals.

Pigs are smart (ahead of dogs and horses), tough, and adaptable, and they breed very quickly. The protected, overfed, calm hogs you see on farms quickly turn lean and mean if they have to fend for themselves in the wild. You pretty much only see female pigs or castrated males on the farm, since whole males (boars) are intrinsically aggressive and destructive. But vigorous 200-pound boars, with their three-inch-long, razor-sharp tusks, are well represented in feral swine.

This is a growing problem. The population of wild pigs in the southern third of the U.S. has increased significantly in the past few decades. There have historically been some wild pigs in spots like Florida and Texas, escapees from Spanish settlers long ago. But they seem to be spreading northward, largely because hunters transplant them:

From 1982 to 2016, the wild pig population in the United States increased from 2.4 million to an estimated 6.9 million, with 2.6 million estimated to be residing in Texas alone. The population in the United States continues to grow rapidly due to their high reproduction rate, generalist diet, and lack of natural predators. Wild pigs have expanded their range in the United States from 18 States in 1982 to 35 States in 2016. It was recently estimated that the rate of northward range expansion by wild pigs accelerated from approximately 4 miles to 7.8 miles per year from 1982 to 2012 (12). This rapid range expansion can be attributed to an estimated 18-21% annual population growth and an ability to thrive across various environments, however, one of the leading causes is the human-mediated transportation of wild pigs for hunting purposes.
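The census totals in that excerpt imply a much lower *net* growth rate than the quoted 18-21% reproductive figure, which suggests the latter describes growth potential before hunting, trapping, and natural mortality. The implied net rate below is my own calculation from the quoted 1982 and 2016 totals, not a figure from the source:

```python
# Compound annual growth rate implied by the quoted census figures:
# 2.4 million wild pigs in 1982 growing to 6.9 million by 2016.
p0, p1 = 2.4e6, 6.9e6
years = 2016 - 1982

net_rate = (p1 / p0) ** (1 / years) - 1   # net of all mortality and control

print(f"implied net growth: {net_rate:.1%} per year")  # ~3.2% per year
```

In other words, control efforts and natural mortality already absorb most of the pigs’ reproductive potential; the remaining ~3% per year, compounded over decades, is still enough to nearly triple the population.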

As for pigs attacking and killing humans, a definitive study covering 2000-2019 was published in 2023 by Mayer, et al. The report includes informative tables and charts, such as a comparison of the mean annual number of human fatalities from attacks by various wild animals over time periods between 2000 and 2019.

About half of these fatalities occurred in rural regions of India. Government policies there prohibit farmers from killing marauding pigs, so farmers try to chase them away from their fields with rakes and stones. Sometimes that provokes the pig to attack, slashing at thigh level and often lacerating the femoral artery. But a disturbing 39% of deadly attacks were unprovoked, including a horrific case with an elderly woman in Texas. So danger to humans is an issue, though for perspective, far more people are killed each year by snakes (100,000), rabid dogs (30,000), and crocodiles (1000). In the U.S., over 100 people are killed a year, and 30,000 injured, by collisions with deer (see here for a market-based solution for this problem).

What to do? Hunters in many states are free to blast away at feral pigs year-round, since they are considered a harmful, invasive (non-native) species. Paradoxically, however, allowing hunting of pigs can be counterproductive: amateur hunting does not eliminate enough pigs to stop their spread, and it incentivizes hunters to transport pigs to new regions to make for more targets. For instance, Arkansas allows hunting and even transport of pigs, and has seen swine populations skyrocket. The state of Missouri, next door, took the enlightened approach of banning hunting and transport, leaving population control to wildlife professionals. By removing the sport-hunting incentive, Missouri removed the incentive to transport them, which stymied their spread.

To control pig populations, the pros mainly set up baited large corrals, and monitor them remotely with webcams. After several weeks, the local pigs get comfortable coming there to feed. When the cameras show that every single pig in the herd is in the corral, the gate is sprung shut remotely. Then the pros drive out to, er, euthanize the pigs. The goal is to wipe out the entire herd, and leave no sadder-but-wiser survivors who will be harder to catch next time. Once a hog population has become established in an area, it typically takes ongoing eradication efforts to keep the numbers down.

If you want to do your own part to reduce the surplus swine population, the following notable opportunity came to my attention: for a largish fee the Helibacon company will train you in firing automatic weapons and take you up in a chopper where you can mow down a marauding herd in the low Texas scrubland. It sounds like a guy thing, but Helibacon reminds us that full auto is for ladies, too.  See also PorkChoppersAviation for similar service.

This is actually a fine example of a free market solution to a problem: wild hogs were such a problem for landowners that they were paying expensive professional helo hunters to take out herds, but in Texas, “All that changed in 2011, when the state legislature passed the so-called pork chopper law, which allowed hunters to pay to shoot feral hogs out of helicopters – and a new business model was born.” Hunters are happy to pay to hunt, helo companies are happy to take their money, and landowners are happy to have pigs reduced for free. Voila, voluntary exchange creates value…

United Health Care Stock Implodes After Withdrawing Guidance, CEO Suddenly Resigns, and WSJ Alleges DOJ Fraud Probe

UnitedHealth Group (UNH) is a gigantic ($260 B market cap, even after the recent dip) health plan provider, which until recently seemed to be the bluest of blue-chip companies. It is a purveyor of essential medical services with a wide moat, largely unaffected by tariff posturing, and considered too big to fail. The ten-year stock price chart shows it steadily grinding up and up, shrugging off market tantrums like 2020 and 2022, and even the tragic gunning down of one of its division presidents in December.

But things really unraveled in the past month. Let’s look at the charts, and then get into the underlying causes.

The year-to-date chart above shows the price hanging around $500, then rising to nearly $600 as the April 17 quarterly earnings report approached. Presumably the market was licking its chops in anticipation of the usual UNH earnings beat. The actual report was OK by most corporate standards, but it failed to match expectations. Revenue growth was a hearty +9.8% Y/Y, but this was a $2.02B “miss.” Earnings were up 4% over year-ago Q1, but they missed expectations (by a mere 1%). Probably much more disturbing was the cut in guidance on 2025 total adjusted earnings, down to $26 to $26.50 per share, compared to the $29.74 consensus.
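The size of that guidance cut explains the violence of the reaction. A quick calculation from the per-share figures quoted above:

```python
# Arithmetic on the UNH guidance cut described above (figures from the text).
low, high = 26.00, 26.50     # new 2025 adjusted EPS guidance range
consensus = 29.74            # prior analyst consensus

midpoint = (low + high) / 2
cut = 1 - midpoint / consensus   # ~12% below consensus

print(f"guidance midpoint ${midpoint:.2f}, {cut:.1%} below consensus")
```

A roughly 12% downward revision to forward earnings, for a stock priced as a steady compounder, is more than enough to knock 25% off the share price once the premium multiple deflates along with the estimates.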

That took the stock down from $600 to around $450 immediately, and then it drifted below $400 in the following month as investors looked for and failed to find better news on the company. But then two things happened last week. The effects are seen in the 1-month chart below:

On May 13 (blue arrow) the company came out with a stunning dual announcement. It noted that the recently-appointed CEO, Andrew Witty, had suddenly resigned “for personal reasons.” The blogosphere speculated (perhaps unfairly) that you don’t suddenly resign from a $25 million/year job unless your “personal reasons” involve things like not going to prison for corporate fraud. The other stunner was that the company completely yanked 2025 financial guidance, due to an unexpected rise in health care costs (i.e., what they must pay out to their participants). Over the next day or two, the stock fell to about 50% of its value in early April.

Then on May 14 the Wall Street Journal came out with an article claiming that the U.S. Department of Justice is carrying out a criminal investigation into UNH for possible Medicare fraud, focusing on the company’s Medicare Advantage business practices. The WSJ said that while the exact nature of the allegations is unclear, it has been an active probe since at least last summer.

UNH promptly fired back a curt response to the “deeply irresponsible” reporting of the WSJ:

We have not been notified by the Department of Justice of the supposed criminal investigation reported, without official attribution, in the Wall Street Journal today.

The WSJ’s reporting is deeply irresponsible, as even it admits that the “exact nature of the potential criminal allegations is unclear.”   We stand by the integrity of our Medicare Advantage program.

The stock nose-dived again (red arrow, above), touching $251, as investors completely panicked over “Medicare fraud.”  Cooler heads promptly started buying back in, leading to a substantial recovery. That includes the new CEO, Steven Hemsley, who was the highly-paid CEO from 2009 to 2017, and since then has been the highly-compensated “executive chairman of the board,” a role created just for him. Pundits were impressed that he stepped in to buy some $25 million of UNH stock near its lows, saying wow, he is really putting some skin in the game. Well, not really: the dude is worth over $1 billion (did I mention high compensation of health care execs?), so $25 mill is hardly heroic. He is already up some 12%, or a cool $3 million, on this purchase, a tidy little example of how the rich become richer.