Bureau of Labor Statistics Under Siege

Thousands of keyboards were likely drenched four days ago as coffee spewed from thousands of nostrils upon reading the headlines that President Trump fired the head of the Bureau of Labor Statistics because he (the prez) didn’t like the July 2025 job numbers that were reported. Apparently, the job stats were not as great as we had been led to expect for the new regime of tariffs and deportations. (Someone should inform the politicians that businessmen need predictability for making any expansionary plans). So, shoot the messenger, that will fix it.

The ire was apparently kindled especially by the truly massive downward revisions to the May (-125,000) and June (-133,000) job figures, which reduced the combined employment gain for those months by 258,000. That made for three anemic employment months in a row, a very different picture than had been earlier portrayed. For those unfamiliar with past BLS reports, that could seem like manipulation or gross incompetence. For instance, whitehouse.gov published an article titled, “BLS Has Lengthy History of Inaccuracies, Incompetence”, excoriating the “Biden-appointed”, now-fired Erika McEntarfer who “consistently published overly optimistic jobs numbers — only for those numbers to be quietly revised later.”

But massive overestimations of job creation, followed a month or two or three later by massive downward revisions, are pretty standard procedure for the BLS in recent years. Fellow blogger Jeremy Horpedahl has noted prior occurrences of this, e.g. here and here. There is no reason to suspect nefarious motives, though. The understaffed and overworked folks at BLS seem to be doing the best they can. It is just a fact that some key data simply is not available as early as other data. There are also rational adjustments, e.g. for seasonal trends, that must first be estimated and only later get revised.

Bloomberg explains some of the fine points of the recent revisions:

The downward revision to the prior two months was largely a result of seasonal adjustment for state and local government education, BLS said in earlier comments to Bloomberg. Those sectors substantially boosted June employment only to be largely revised away a month later.

But economists say the revisions also point to a more concerning, underlying issue of low response rates.

BLS surveys firms in the payrolls survey over the course of three months, gaining a more complete picture as more businesses respond. But a smaller share of firms are responding to the first poll. Initial collection rates have repeatedly slid below 60% in recent months — down from the roughly 70% or more that was the norm before the pandemic.

In addition to the rolling revisions to payrolls that BLS does, there’s also a larger annual revision that comes out each February to benchmark the figures to a more accurate, but less timely data source. BLS puts out a preliminary estimate of what that revision will be a few months in advance, and last year [2024], that projection was the largest since 2009.
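To make the revision mechanics concrete, here is a minimal Python sketch (all numbers invented, not BLS data) of how an extrapolated payroll estimate can drift as later collection rounds bring in firms that responded late. The lower the initial response rate, the more room there is for later revision:

```python
# Illustrative sketch with made-up numbers: the payroll survey estimate
# extrapolates from whichever firms have responded so far, so each new
# collection round can move the total.
def estimate_total(responses, universe_size):
    """Extrapolate total jobs from the firms that have responded so far."""
    avg_jobs = sum(responses) / len(responses)
    return avg_jobs * universe_size

universe = 1000                       # hypothetical number of surveyed firms
round1 = [52] * 550                   # 55% response rate: early responders
round2 = round1 + [48] * 150          # 70% after the second collection
round3 = round2 + [45] * 200          # 90% after the third collection

for label, resp in [("1st", round1), ("2nd", round2), ("3rd", round3)]:
    print(f"{label} estimate: {estimate_total(resp, universe):,.0f} jobs")
```

In this toy example the late responders report fewer jobs than the early ones, so each successive estimate revises the headline number downward, much as described in the excerpt above.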

Perhaps it would be wise for the BLS to hang a big “preliminary” label on any of the earlier results they publish, to minimize the howls when the big revisions hit later. Or perhaps some improvements could be made in pre-adjusting the adjustments, since revisions there do seem to swing things around outrageously. I expect forthcoming BLS reports to be the subject of derision from all sides. We all know which parties will scoff if the job report looks great or if it looks not great. Presumably the interim head of the Bureau, William Wiatrowski, is busy polishing his resume.

And POTUS should be careful what he wishes for – “great” job growth numbers would, ironically, strengthen the case for the Fed to delay the interest rate cuts he so desires.

Warren Buffett Quotes on Gold as a Bad Investment; Was He Right?

To say Warren Buffett is not a fan of gold would be an understatement. His basic beef is that gold does not produce much of practical value. His instincts have always been to buy businesses that generate steady and growing cash by producing goods or services that people need or want – businesses like railroads, beverage makers, and insurance companies.

Here are some quotes on the subject from the Oracle of Omaha, where I have bolded some phrases:

“Gold … has two significant shortcomings, being neither of much use nor procreative. True, gold has some industrial and decorative utility, but the demand for these purposes is both limited and incapable of soaking up new production. Meanwhile, if you own one ounce of gold for an eternity, you will still own one ounce at its end” — Buffett, letter to shareholders, 2011

“With an asset like gold, for example, you know, basically gold is a way of going long on fear, and it’s been a pretty good way of going long on fear from time to time. But you really have to hope people become more afraid in the year or two years than they are now. And if they become more afraid you make money, if they become less afraid you lose money. But the gold itself doesn’t produce anything” — Buffett, CNBC’s Squawk Box, 2011

This from when the world’s total gold hoard (a cube about 67 feet on a side) was worth about $7 trillion, which by his reckoning was the value of all U.S. farmland plus seven times the value of petroleum giant ExxonMobil plus an extra $1 trillion:

“And if you offered me the choice of looking at some 67-foot cube of gold … and the alternative to that was to have all the farmland of the country, everything, cotton, corn, soybeans, seven ExxonMobils. Just think of that. Add $1 trillion of walking around money. I, you know, maybe call me crazy but I’ll take the farmland and the ExxonMobils” – Cited in https://www.nasdaq.com/articles/3-things-warren-buffett-has-said-about-gold

And my favorite:

“Gold gets dug out of the ground in Africa, or someplace. Then we melt it down, dig another hole, bury it again and pay people to stand around guarding it. It has no utility. Anyone watching from Mars would be scratching their head.” – From a speech at Harvard; see https://quoteinvestigator.com/2013/05/25/bury-gold/

One thing Buffett did NOT say is that gold is a “barbarous relic.” That line belongs to John Maynard Keynes, from a hundred years ago, referring to the notion of tying national money issuance to the number of bars of gold held in the national vaults:

“In truth, the gold standard is already a barbarous relic. All of us, from the Governor of the Bank of England downwards, are now primarily interested in preserving the stability of business, prices, and employment, and are not likely, when the choice is forced on us, deliberately to sacrifice these to outworn dogma, which had its value once” – Monetary Reform (1924)

Has Buffett’s Berkshire Hathaway Beaten Gold as an Investment?

Given all that trash talk from the legendary investor, let’s see how an investment in his flagship Berkshire Hathaway company (stock symbol BRK.A) compares to gold over various time periods. I will use the ETF GLD as a proxy for gold, and will include the S&P 500 index as a proxy for the general U.S. large-cap stock market.

As always, these comparisons depend on your starting and ending points. In the 1990s and 2000s, BRK.A hugely outperformed the S&P 500, cementing Buffett’s reputation as one of the greatest investors of all time. (GLD data doesn’t go back that far.) In the past twelve months, gold (up 41%) has soundly beaten SPY (up 14%) and completely trounced BRK.A (up 9%), as of last week. A couple of one-off factors have gone into these results: gold had an enormous surge in January-April as world markets digested the implications of never-ending gigantic U.S. budget deficits, and the markets soured on BRK.A due to the announced upcoming retirement of Buffett himself.

Stepping back to look over the past ten years shows the old master still coming out on top. In this plot, gold is orange, S&P 500 is blue, and BRK.A is royal purple:

Over most of this time period (through 7/21/2025), BRK.A and the S&P 500 were pretty close, and gold lagged significantly. Gold was notably left behind during the key stock surge of 2021. Even with the rise in gold and dip in BRK.A this year, Buffett’s company (up 232%) still beats gold (up 198%) over the past ten years. BRK.A pulled well ahead of the S&P 500 during the 2022 correction, and never gave back that lead. In the April stock market panic this year, BRK.A actually went up as everything else dropped, as it was seen as a tariff-proof safe haven. The S&P 500 was ahead of gold for nearly all this period, until the crash in stocks and the surge in gold in the first half of 2025 brought them to essentially a tie for the past decade.
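For readers who want to translate those decade-long gains into annualized terms, here is a quick sketch (the percentage figures are the ones quoted above; everything else is just arithmetic):

```python
# Convert ten-year percentage gains into growth multiples and rough
# compound annual growth rates (CAGR).
gains_pct = {"BRK.A": 232, "gold (GLD)": 198}    # ten-year gains, per the text

for name, pct in gains_pct.items():
    multiple = 1 + pct / 100                     # e.g. a 232% gain = 3.32x your money
    annualized = multiple ** (1 / 10) - 1        # CAGR over ten years
    print(f"{name}: {multiple:.2f}x, ~{annualized:.1%}/yr")
```

Both work out to low-double-digit annual returns, which is a useful reminder that a headline-grabbing "up 232%" is the compounding of fairly ordinary yearly gains.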

Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence

We noted last week Meta’s successful efforts to hire away the best of the best AI scientists from other companies, by offering them insane (like $300 million) pay packages. Here we summarize and excerpt an excellent article in Newsweek by Gabriel Snyder who interviewed Meta’s chief AI scientist, Yann LeCun. LeCun discusses some inherent limitations of today’s Large Language Models (LLMs) like ChatGPT. Their limitations stem from the fact that they are based mainly on language; it turns out that human language itself is a very constrained dataset.  Language is readily manipulated by LLMs, but language alone captures only a small subset of important human thinking:

Returning to the topic of the limitations of LLMs, LeCun explains, “An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1—it’s reactive, right? There’s no reasoning,” a reference to Daniel Kahneman’s influential framework that distinguishes between the human brain’s fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).

The limitations of this approach become clear when you consider what is known as Moravec’s paradox—the observation by computer scientist and roboticist Hans Moravec in the late 1980s that it is comparatively easier to teach AI systems higher-order skills like playing chess or passing standardized tests than seemingly basic human capabilities like perception and movement. The reason, Moravec proposed, is that the skills derived from how a human body navigates the world are the product of billions of years of evolution and are so highly developed that they can be automated by humans, while neocortical-based reasoning skills came much later and require much more conscious cognitive effort to master. However, the reverse is true of machines. Simply put, we design machines to assist us in areas where we lack ability, such as physical strength or calculation.

The strange paradox of LLMs is that they have mastered the higher-order skills of language without learning any of the foundational human abilities. “We have these language systems that can pass the bar exam, can solve equations, compute integrals, but where is our domestic robot?” LeCun asks. “Where is a robot that’s as good as a cat in the physical world? We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”

This gap exists because language, for all its complexity, operates in a relatively constrained domain compared to the messy, continuous real world. “Language, it turns out, is relatively simple because it has strong statistical properties,” LeCun says. It is a low-dimensionality, discrete space that is “basically a serialized version of our thoughts.”  

[Bolded emphases added]

Broad human thinking involves hierarchical models of reality, which get constantly refined by experience:

And, most strikingly, LeCun points out that humans are capable of processing vastly more data than even our most data-hungry advanced AI systems. “A big LLM of today is trained on roughly 10 to the 14th power bytes of training data. It would take any of us 400,000 years to read our way through it.” That sounds like a lot, but then he points out that humans are able to take in vastly larger amounts of visual data.

Consider a 4-year-old who has been awake for 16,000 hours, LeCun suggests. “The bandwidth of the optic nerve is about one megabyte per second, give or take. Multiply that by 16,000 hours, and that’s about 10 to the 14th power in four years instead of 400,000.” This gives rise to a critical inference: “That clearly tells you we’re never going to get to human-level intelligence by just training on text. It’s never going to happen,” LeCun concludes…
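LeCun's back-of-the-envelope numbers do check out; a tiny script makes the comparison explicit (these are his ballpark figures, not measurements):

```python
# Rough check of LeCun's arithmetic: optic-nerve bandwidth ~1 MB/s and
# ~16,000 waking hours for a 4-year-old, versus ~1e14 bytes of text for
# a big LLM's training corpus.
SECONDS_PER_HOUR = 3600
optic_nerve_bytes_per_sec = 1e6          # ~1 MB/s, per LeCun
hours_awake = 16_000                     # roughly four years of waking life

visual_bytes = optic_nerve_bytes_per_sec * hours_awake * SECONDS_PER_HOUR
llm_training_bytes = 1e14                # LeCun's figure for a big LLM

print(f"visual input: {visual_bytes:.1e} bytes")   # ~5.8e13, same order as 1e14
print(f"ratio to LLM text corpus: {visual_bytes / llm_training_bytes:.2f}")
```

So a toddler's eyes deliver roughly the same order of magnitude of data in four years that an LLM absorbs from a text corpus that would take a human 400,000 years to read.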

This ability to apply existing knowledge to novel situations represents a profound gap between today’s AI systems and human cognition. “A 17-year-old can learn to drive a car in about 20 hours of practice, even less, largely without causing any accidents,” LeCun muses. “And we have millions of hours of training data of people driving cars, but we still don’t have self-driving cars. So that means we’re missing something really, really big.”

Like Brooks, who emphasizes the importance of embodiment and interaction with the physical world, LeCun sees intelligence as deeply connected to our ability to model and predict physical reality—something current language models simply cannot do. This perspective resonates with David Eagleman’s description of how the brain constantly runs simulations based on its “world model,” comparing predictions against sensory input. 

For LeCun, the difference lies in our mental models—internal representations of how the world works that allow us to predict consequences and plan actions accordingly. Humans develop these models through observation and interaction with the physical world from infancy. A baby learns that unsupported objects fall (gravity) after about nine months; they gradually come to understand that objects continue to exist even when out of sight (object permanence). He observes that these models are arranged hierarchically, ranging from very low-level predictions about immediate physical interactions to high-level conceptual understandings that enable long-term planning.

[Emphases added]

(Side comment: As an amateur reader of modern philosophy, I cannot help noting that these observations about the importance of recognizing there is a real external world and adjusting one’s models to match that reality call into question the epistemological claim that “we each create our own reality”.)

Given all this, developing the next generation of artificial intelligence must, like human intelligence, embed layers of working models of the world:

So, rather than continuing down the path of scaling up language models, LeCun is pioneering an alternative approach of Joint Embedding Predictive Architecture (JEPA) that aims to create representations of the physical world based on visual input. “The idea that you can train a system to understand how the world works by training it to predict what’s going to happen in a video is a very old one,” LeCun notes. “I’ve been working on this in some form for at least 20 years.”

The fundamental insight behind JEPA is that prediction shouldn’t happen in the space of raw sensory inputs but rather in an abstract representational space. When humans predict what will happen next, we don’t mentally generate pixel-perfect images of the future—we think in terms of objects, their properties and how they might interact.

This approach differs fundamentally from how language models operate. Instead of probabilistically predicting the next token in a sequence, these systems learn to represent the world at multiple levels of abstraction and to predict how their representations will evolve under different conditions.

And so, LeCun is strikingly pessimistic on the outlook for breakthroughs in current LLMs like ChatGPT. He believes LLMs will be largely obsolete within five years, except for narrower purposes, and so he tells upcoming AI scientists not to even bother with them:

His belief is so strong that, at a conference last year, he advised young developers, “Don’t work on LLMs. [These models are] in the hands of large companies, there’s nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs.”

This approach seems to be at variance with other firms, who continue to pour tens of billions of dollars into LLMs. Meta, however, seems focused on next-generation AI, and CEO Mark Zuckerberg is putting his money where his mouth is.

Meta Is Poaching AI Talent With $100 Million Pay Packages; Will This Finally Create AGI?

This month I have run across articles noting that Meta’s Mark Zuckerberg has been making mind-boggling pay offers (like $100 million/year for 3-4 years) to top AI researchers at other companies, plus the promise of huge resources and even (gasp) personal access to Zuck, himself. Reports indicate that he is succeeding in hiring around 50 brains from OpenAI (home of ChatGPT), Anthropic, Google, and Apple. Maybe this concentration of human intelligence will result in the long-craved artificial general intelligence (AGI) being realized; there seems to be some recognition that the current Large Language Models will not get us there.

There are, of course, other interpretations being put on this maneuver. Some talking heads on a Bloomberg podcast speculated that Zuckerberg was deliberately using Meta’s mighty cash flow to starve competitors of top AI talent. They also speculated that (since there is a limit to how much money you can possibly, pleasurably spend) if you pay some guy $100 million in a year, a rational outcome would be for him to quit and spend the rest of his life hanging out at the beach. (That, of course, is how Bloomberg finance types might think, measuring worth mainly in terms of money, not in the fun of doing cutting-edge R&D.)

I found a thread on reddit to be insightful and amusing, and so I post chunks of it below. Here is the earnest, optimist OP:

andsi2asi

Zuckerberg’s ‘Pay Them Nine-Figure Salaries’ Stroke of Genius for Building the Most Powerful AI in the World

Frustrated by Yann LeCun’s inability to advance Llama to where it is seriously competing with top AI models, Zuckerberg has decided to employ a strategy that makes consummate sense.

To appreciate the strategy in context, keep in mind that OpenAI expects to generate $10 billion in revenue this year, but will also spend about $28 billion, leaving it in the red by about $18 billion. My main point here is that we’re talking big numbers.

Zuckerberg has decided to bring together 50 ultra-top AI engineers by enticing them with nine-figure salaries. Whether they will be paid $100 million or $300 million per year has not been disclosed, but it seems like they will be making a lot more in salary than they did at their last gig with Google, OpenAI, Anthropic, etc.

If he pays each of them $100 million in salary, that will cost him $5 billion a year. Considering OpenAI’s expenses, suddenly that doesn’t sound so unreasonable.

I’m guessing he will succeed at bringing this AI dream team together. It’s not just the allure of $100 million salaries. It’s the opportunity to build the most powerful AI with the most brilliant minds in AI. Big win for AI. Big win for open source

And here are some wry responses:

kayakdawg

counterpoint 

a. $5B is just for those 50 researchers, loootttaaa other costs to consider

b. zuck has a history of burning big money on r&d with theoretical revenue that doesnt materialize

c. brooks law: creating agi isn’t an easily divisible job – in fact, it seems reasonable to assume that the more high-level experts enter the project the slower it’ll progress given the communication overhead

7FootElvis

Exactly. Also, money alone doesn’t make leadership effective. OpenAI has a relatively single focus. Meta is more diversified, which can lead to a lack of necessary vision in this one department. Passion, if present at the top, is also critical for bleeding edge advancement. Is Zuckerberg more passionate than Altman about AI? Which is more effective at infusing that passion throughout the organization?

….

dbenc

and not a single AI researcher is going to tell Zuck “well, no matter how much you pay us we won’t be able to make AGI”

meltbox

I will make the AI by one year from now if I am paid $100m

I just need total blackout so I can focus. Two years from now I will make it run on a 50w chip.

I promise

Economic Impact of Agricultural Worker Deportations Leads to Administration Policy Reversals

Here is a chart of the evolution of U.S. farm workforce between 1991 and 2022:

Source: USDA

A bit over 40% of current U.S. farm workers are illegal immigrants. In some regions and sectors, the percentage is much higher. The work is often uncomfortable and dangerous, and far from the cool urban centers. This is work that very few U.S.-born workers would consider doing unless the pay were very high, so it would be difficult to replace the immigrant labor on farms in the near term. I don’t know how much the need for manpower would change if cheap illegal labor were unavailable and farms turned to automation to maintain productivity.

It apparently didn’t occur to some members of the administration that deporting a lot of these workers (and frightening the rest into hiding) would have a crippling effect on American agriculture. Sure enough, there have recently been reports in some areas of workers not showing up and crops going unharvested.

It is difficult for me as a non-expert to determine how severe and widespread the problems actually are so far. Anti-Trump sources naturally emphasize the genuine problems that do exist and predict apocalyptic meltdown, whereas other sources are more measured. I suspect that the largest agribusinesses have kept better abreast of the law, while smaller operations have cut legal corners and may now have that catch up to them. For instance, a small meat packer in Omaha reported operating at only 30% capacity after ICE raids, whereas the CEO of giant Tyson Foods claimed that “everyone who works at Tyson Foods is authorized to do so,” and that the company “is in complete compliance” with all the immigration regulations.

With at least some of these wholly predictable problems from mass deportations now becoming reality, the administration is undergoing internal debates and policy adjustments in response. On June 12, President Trump very candidly acknowledged the issue, writing on Truth Social, “Our great Farmers and people in the hotel and leisure business have been stating that our very aggressive policy on immigration is taking very good, long-time workers away from them, with those jobs being almost impossible to replace…. We must protect our Farmers, but get the CRIMINALS OUT OF THE USA. Changes are coming!” 

The next day, ICE official Tatum King wrote regional leaders to halt investigations of the agricultural industry, along with hotels and restaurants. That directive was apparently walked back a few days later, under pressure from outraged conservative supporters and from Deputy White House Chief of Staff Stephen Miller. Miller, an immigration hard-liner, wants to double the ICE deportation quota, up to 3,000 per day.

This issue could go in various ways from here. Hard-liners on the left and on the right have a way of pushing their agendas to unpalatable extremes. It can be argued that the Democrats could easily have won in 2024 had their policies been more moderate. Similarly, if immigration hard-liners get their way now, I predict that the result will be their worst nightmare: a public revulsion against enforcing immigration laws in general. If farmers and restaurateurs start going bust, and food shortages and price spikes appear in the supermarket, public support for the administration and its project of deporting illegal immigrants will reverse in a big way. Some right-wing pundits would not be bothered by an electoral debacle, since their style is to stay constantly outraged, and (as the liberal news outlets currently demonstrate), it is easier to project non-stop outrage when your party is out of power.

An optimist, however, might see in this controversy an opening for some sort of long-term, rational solution to the farm worker issue. Agricultural Secretary Brooke Rollins has proposed expansion of the H-2A visa program, which allows for temporary agricultural worker residency to fill labor shortages. This is somewhat similar to the European guest worker programs, though with significant differences. H-2A requires the farmer to provide housing and take legal responsibility for his or her workers. H-2B visas allow for temporary non-agricultural workers, without as much employer responsibility. A bill was introduced into Congress with bi-partisan support to modernize the H-2A program, so that legislative effort may have legs. Maybe there can be a (gasp!) compromise.

President Trump last week came out strongly in favor of this sort of solution, with a surprisingly positive take on the (illegal) workers who have worked diligently on a farm for years. By “put you in charge” he seems to refer to the responsibilities that H-2A employers undertake for their employees, and perhaps extending that to H-2B employers. He acknowledges that the far-right will not be happy, but hopes “they’ll understand.” From Newsweek:

“We’re working on legislation right now where – farmers, look, they know better. They work with them for years. You had cases where…people have worked for a farm, on a farm for 14, 15 years and they get thrown out pretty viciously and we can’t do it. We gotta work with the farmers, and people that have hotels and leisure properties too,” he said at the Iowa State Fairgrounds in Des Moines on Thursday.

“We’re gonna work with them and we’re gonna work very strong and smart, and we’re gonna put you in charge. We’re gonna make you responsible and I think that that’s going to make a lot of people happy. Now, serious radical right people, who I also happen to like a lot, they may not be quite as happy but they’ll understand. Won’t they? Do you think so?”

We shall see.

Central Banks Are Buying Gold; Should You?

Anyone who reads financial headlines knows that gold prices have soared in the past year. Why?

Gold has historically been a relatively stable store of value, and that role seems to be returning after decades of relative neglect. Official numbers show sharply increased buying by the world’s central banks, led by China, Poland, and Azerbaijan in early 2025. Russia, India and Turkey have also been major buyers. There is widespread conviction that actual gold purchases are appreciably higher than the officially-reported numbers, with the under-reporting intended to side-step President Trump’s threatened extra tariffs on nations seen as de-dollarizing.

I think the most proximate cause for the sharp run-up in gold prices in the past twelve months has been the profligate U.S. federal budget deficit, under both administrations. This is convincing key world actors that the dollar will become increasingly devalued over time, no matter which party is in power. Thus, it is prudent to get out of dollars and dollar-denominated assets like U.S. T-bonds.

Trump’s erratic and offensive policies and statements in 2025 have added to the desire to diversify away from U.S. assets. This is in addition to the alarm in non-Western countries over the impoundment of Russian dollar-related assets in connection with the ongoing Russian invasion of Ukraine. Also, there is something of a self-fulfilling momentum aspect to any asset: the more it goes up, the more it is expected to go up.

This informative chart of central bank gold net purchasing is courtesy of Weekend Investing:

Interestingly, central banks were net sellers in the 1990s and early 2000s; it was an era of robust economic growth, gold prices were stagnant or declining, and it seemed pointless to hold shiny metal bars when one could invest in financial assets with higher rates of return. The Global Financial Crisis of 2008-2009 apparently sobered up the world as to the fragility of financial assets, making solid metal bars look pretty good. Then, as noted, the Western reaction to the Russian attack on Ukraine spurred central bank gold buying, as this blog predicted back in March 2022.

Private investors are also buying gold, for similar reasons as the central banks. Gold offers portfolio diversification as a clear alternative from all paper assets. In theory it should offer something of an inflation hedge, but its price does not always track with inflation or interest rates.

Here is how gold (using the GLD fund as a proxy) has fared versus stocks (S&P 500 index) and intermediate-term U.S. T-bonds (IEF fund) in the past year:

Gold is up by 40%, compared to 12.6% for stocks. That is huge outperformance. This was driven largely by the fact that gold rose strongly in the Feb-April timeframe, while stocks were collapsing.

Below we zoom out to look at a longer time period, and include the intermediate-term T-bond fund IEF:

Gold prices more than doubled from 2008 to 2011, then suffered a long, painful decline over the next two years. Prices were then fairly stagnant for the mid-2010s, rose significantly 2019-2020, then stagnated again until taking off in 2023. Stocks have been much more erratic. Most of the time stock returns were above gold, but the 2020 and 2022 plunges brought stocks down to rough parity with gold. Since about 2019, T-bonds have been pathetic; pity the poor investor who has been (according to traditional advice) 40% invested in investment-grade bonds.

How to invest in gold? Hard-core gold bugs want the actual coins (no one can afford a full bullion bar) to rub between their fingers and keep in their own physical custody. You can buy coins from on-line dealers or local dealers. Coins are available from the U.S. Mint, but reportedly their mark-ups are often higher than on the secondary market.

An easier route for most folks is to buy into a gold-backed exchange-traded fund. The biggest is GLD, which has over $100 billion in assets. There has long been an undercurrent of suspicion among gold bugs that GLD’s gold is not reliably audited or that it is loaned out; they refer derisively to GLD as “paper gold” or gold derivatives. The fund itself claims that it never lends out its gold, and that its bars are held in the vaults of the custodian banks JPMorgan Chase Bank, N.A. and HSBC Bank plc, and are independently audited. The suspicious crowd favors funds like the Sprott Physical Gold Trust (PHYS), which is claimed to have a stronger legal claim on its physical gold than GLD. However, PHYS is a closed-end fund, which means it does not have the continuous share creation and redemption process of an open-end ETF like GLD. This can lead to discrepancies between the fund’s share price and the value of its gold holdings. It does seem that PHYS loses about 1% per year relative to GLD.
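A short sketch of the closed-end-fund mechanics at issue: with a fixed share count, the market price of a fund like PHYS can drift away from its net asset value (NAV) per share. The prices below are made up purely for illustration:

```python
# Premium/discount of a closed-end fund's market price to its NAV per share.
# With no continuous creation/redemption to arbitrage the gap away, the
# price can trade persistently above or below the value of the gold held.
def premium_discount(market_price, nav_per_share):
    """Premium (+) or discount (-) of price to NAV, as a fraction."""
    return (market_price - nav_per_share) / nav_per_share

print(f"{premium_discount(19.60, 20.00):+.1%}")   # trading 2% below NAV
print(f"{premium_discount(20.50, 20.00):+.1%}")   # trading 2.5% above NAV
```

An open-end ETF like GLD avoids persistent gaps because authorized participants can create or redeem shares whenever the price strays from NAV.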

Disclaimer: Nothing here should be taken as advice to buy or sell any security.

Saving Money by Ordering Car Parts from Amazon or eBay

Here is a personal money-saving anecdote from this week. A medium-sized dead branch fell from a tall tree and ripped off the driver-side mirror on my old Honda. My local repair shop said it would cost around $600 to replace it. That is a significant percentage of what the old clunker is worth. Ouch.

They kindly noted that most of that cost was for a replacement mirror assembly ordered from Honda, which would cost over $400 and take several days to arrive. I asked if I could try to get a mirror from a junkyard, to save money. The repair guy said they would be willing to install a part I brought in, but suggested eBay or Amazon instead.

Back 20 years ago, before online commerce was so established, my local repair shop would routinely save us money by getting used parts from some sort of junkyard network.
So, I started looking into that route. First, junkyards are not junkyards anymore; they are “salvage yards.” Second, it turns out that removing a side mirror from a Honda is not a simple matter: you have to remove the whole inside plastic door panel to get at the mirror mounting screws, and removing that panel has some complications. Also, I could not find a clear online resource for locating parts at regional salvage yards. It looks like you have to drive to a salvage yard, and perhaps have them search some sort of database to find a comparable vehicle somewhere that might have the part you want.


All this seemed like a lot of hassle, so I went to eBay and found a promising-looking new replacement part there for about $56, including shipping. It would take about a week to arrive (probably being shipped directly from China). On Amazon, I found essentially the same part for about $63, which would get here the next day. For the small difference in price, I went the Amazon route, partly for the no-hassle returns if the part turned out to be defective and partly because I get 5% back on my Amazon credit card.
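A back-of-envelope check of that choice (assuming the 5% card reward applies to the full purchase price):

```python
# Back-of-envelope comparison of the two sellers (prices from the anecdote above).
ebay_price = 56.00       # shipped, roughly a week to arrive
amazon_price = 63.00     # next-day delivery
cashback_rate = 0.05     # assumed 5% credit-card reward on Amazon purchases

amazon_effective = amazon_price * (1 - cashback_rate)  # $59.85
premium_for_speed = amazon_effective - ebay_price      # about $3.85

print(f"Amazon after cashback: ${amazon_effective:.2f} "
      f"(${premium_for_speed:.2f} more than eBay for next-day delivery)")
```

So the cashback narrows the gap to under four dollars, which is what next-day delivery and easy returns cost in this case.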
I just got the car back from the repair shop with the replacement mirror, and it works fine. The total cost, with labor, was about $230, which is much better than the original $600+ estimate.


I’m not sure how broadly to generalize this experience. Some further observations:

(1) For a really critical car part, I’d have to consider carefully whether the Chinese knock-off would perform appreciably worse than some name-brand part, although I believe many repair shops often use parts that are not strictly original.

(2) Commonly replaced parts like oil and air filters are typically cheaper to buy online than from your local AutoZone or other merchant. I like supporting local shops, so sometimes I eat the few extra dollars and the shopping time, and buy from brick and mortar.

(3) Some repair shops make significant money on their parts markup, so they might not be happy about you bringing in your own parts. They also might decline to warrant the operation of that part. And many big-box franchise repair shops may simply refuse to install customer-supplied parts.

(4) For a newish car, still under warranty, the manufacturer warranty might be affected by using non-original parts.

(5) Back to junk/salvage yards: there are some car parts, so-called hard parts, that are expected to last the life of the car. Things like the mounting brackets for engine parts. Typically, no spares of these are manufactured. So, if one of those parts gets dinged up in an accident, your only option may be used parts taken from a junker.

Did Apple’s Recent “Illusion of Thinking” Study Expose Fatal Shortcomings in Using LLMs for Artificial General Intelligence?

Researchers at Apple last week published a paper with the provocative title, “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” This paper has generated an uproar in the AI world. Having “The Illusion of Thinking” right there in the title is pretty in-your-face.

Traditional Large Language Model (LLM) artificial intelligence programs like ChatGPT train on massive amounts of human-generated text to be able to mimic human outputs when given prompts. A recent trend (mainly starting in 2024) has been the incorporation of more formal reasoning capabilities into these models. The enhanced models are termed Large Reasoning Models (LRMs). Now some leading LLMs like OpenAI’s GPT, Anthropic’s Claude, and the Chinese DeepSeek exist both in regular LLM form and also in LRM versions.

The authors applied both the regular (LLM) and “thinking” (LRM) versions of Claude 3.7 Sonnet and DeepSeek to a number of mathematical-type puzzles; OpenAI’s o-series models were used to a lesser extent. An advantage of these puzzles is that researchers can, while keeping the basic form of the puzzle, dial the complexity up or down.

They found, among other things, that the LRMs did well up to a certain point, then suffered “complete collapse” as complexity increased. Also, at low complexities, the plain LLMs actually outperformed the LRMs. And, perhaps the most vivid evidence of a lack of actual understanding on the part of these programs: when they were explicitly offered an efficient direct solution algorithm in the prompt, the programs did not take advantage of it, but instead just kept grinding away in their usual fashion.
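One of the puzzle families the Apple researchers used was the Tower of Hanoi, whose efficient direct solution is a short recursion. A minimal Python sketch of that algorithm (my own illustration, not code from the paper):

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Optimal Tower of Hanoi solution: returns the move list (2**n - 1 moves)."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)  # park n-1 smaller disks on the spare peg
        moves.append((src, dst))            # move the largest disk to the target peg
        hanoi(n - 1, aux, src, dst, moves)  # stack the n-1 disks on top of it
    return moves

print(len(hanoi(3)))   # 7 moves
print(len(hanoi(15)))  # 32767 moves for a 15-disk problem
```

Note that the optimal solution length grows as 2**n - 1 moves, so merely writing out the full answer for large n strains a model’s output budget; that detail becomes relevant in Lawsen’s rebuttal below.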

As might be expected, AI skeptics were all over the blogosphere, saying: I told you so, LLMs are just massive exercises in pattern matching and cannot extrapolate outside their training set. This has massive implications for what we can expect in the near or intermediate future. Among other things, optimism about AI progress is largely what is fueling the stock market, and also capital investment in this area: companies like Meta and Google are spending ginormous sums trying to develop artificial “general” intelligence, paying for ginormous amounts of compute power, with those dollars flowing to firms like Microsoft and Amazon building out data centers and buying chips from Nvidia. If the AGI emperor has no clothes, all this spending might come to a screeching halt.

Ars Technica published a fairly balanced account of the controversy, concluding that, “Even elaborate pattern-matching machines can be useful in performing labor-saving tasks for the people that use them… especially for coding and brainstorming and writing.”

Comments on this article included one like:

LLMs do not even know what the task is, all it knows is statistical relationships between words.   I feel like I am going insane. An entire industry’s worth of engineers and scientists are desperate to convince themselves a fancy Markov chain trained on all known human texts is actually thinking through problems and not just rolling the dice on what words it can link together.

And

if we equate combinatorial play and pattern matching with genuinely “generative/general” intelligence, then we’re missing a key fact here. What’s missing from all the LLM hubris and enthusiasm is a reflexive consciousness of the limits of language, of the aspects of experience that exceed its reach and are also, paradoxically, the source of its actual innovations. [This is profound, he means that mere words, even billions of them, cannot capture some key aspects of human experience]

However, the AI bulls have mounted various comebacks to the Apple paper. The most effective I know of so far was published by Alex Lawsen, a researcher at Open Philanthropy. Lawsen’s rebuttal, titled “The Illusion of the Illusion of Thinking,” was summarized by Marcus Mendes. To summarize the summary: Lawsen claimed that the models did not in general “collapse” in some crazy way. Rather, the models in many cases recognized that they would not be able to solve the puzzles given the constraints imposed by the Apple researchers. Therefore, they (rather intelligently) did not waste compute power grinding away toward a necessarily incomplete solution, but just stopped. Lawsen further showed that the way Apple ran the LRMs did not allow them to perform as well as they could. When he made a modest, reasonable change in the operation of the LRMs:

Models like Claude, Gemini, and OpenAI’s o3 had no trouble producing algorithmically correct solutions for 15-disk Hanoi problems, far beyond the complexity where Apple reported zero success.

Lawsen’s conclusion: When you remove artificial output constraints, LRMs seem perfectly capable of reasoning about high-complexity tasks. At least in terms of algorithm generation.

And so, the great debate over the prospects of artificial general intelligence will continue.

The Comeback of Gold as Money

According to Merriam-Webster, “money” is: “something generally accepted as a medium of exchange, a measure of value, or a means of payment.” Money, in its various forms, also serves as a store of value. Gold has maintained the store-of-value function all through the centuries, including our own times; as an investment, gold has done well over the past couple of decades. I plan to write more later on the investment aspect, but here I focus on the use of physical gold as a means of payment or exchange, or as backing for a means of exchange.

Gold, typically in the form of standardized coins, served the means-of-exchange function for thousands of years. Starting in the Renaissance, however, banks began issuing paper certificates that were exchangeable for gold. For daily transactions, the public found it more convenient to handle these bank notes than the gold pieces themselves, and so the notes were used instead of gold as money.

In the late nineteenth and early twentieth centuries, leading paper currencies like the British pound and the U.S. dollar were theoretically backed by gold; one could turn in a dollar and convert it to the precious metal. Most countries dropped the convertibility to gold during the Great Depression of the 1930s, so their currencies became entirely “fiat” money, not tied to any physical commodity. For the U.S. dollar, there was limited convertibility to gold after World War II as part of the Bretton Woods system of international currencies, but even that convertibility ended in 1971. In fact, it was illegal for U.S. citizens to own much in the way of physical gold from FDR’s (infamous?) executive order in 1933 until Gerald Ford signed the repeal of that prohibition in 1974.

So gold has been essentially extinct as active money for nearly a hundred years. The elite technocrats who manage national financial affairs have been only too happy to dance on its grave. Keynes famously denounced the gold standard as a “barbarous relic”, standing in the way of purposeful management of national money matters.

However, gold seems to be making something of a comeback, on several fronts. Most notably, several U.S. states have promoted the use of gold in transactions. Deep-red Utah has led the way.  In 2011, Utah passed the Legal Tender Act, recognizing gold and silver coins issued by the federal government as legal tender within the state. This legislation allows individuals to transact in gold and silver coins without paying state capital gains tax.  The Utah House and Senate passed bills in 2025 to authorize the state treasurer to establish a precious metals-backed electronic payment platform, which would enable state vendors to opt for payments in physical gold and silver. The Utah governor vetoed this bill, though, claiming it was “operationally impractical.” 

Meanwhile, in Texas:

The new legislation, House Bill 1056, aims to give Texans the ability, likely through a mobile app or debit card system, to use gold and silver they hold in the state’s bullion depository to purchase groceries or other standard items.

The bill would also recognize gold and silver as legal tender in Texas, with the caveat that the state’s recognition must also align with currency laws laid out in the U.S. Constitution.

“In short, this bill makes gold and silver functional money in Texas,” Rep. Mark Dorazio (R-San Antonio), the main driving force behind the effort, said during one 2024 presentation. “It has to be functional, it has to be practical and it has to be usable.”

Arkansas and Florida have also passed laws allowing the use of gold and silver as legal tender. A potential problem is that under current IRS law, gold and silver are generally classified as collectibles and subject to potential capital gains taxes when transactions occur. Texas legislator Dorazio has argued that liability would go away if the metals are classified as functional money, although he’s also acknowledged the tax issue “might end up being decided by the courts.”

But as Europeans found back in the day, carrying around actual clinking gold coins for purchasing and making change is much more of a hassle than paper transactions. And so, various convenient payment or exchange methods, backed by physical gold, have recently arisen.

Since it is relatively easy and lucrative to spawn a new cryptocurrency (which is why there are thousands of them), it is not surprising that there are now several coins supposedly backed by bullion. These include Paxos Gold (PAXG) and Tether Gold (XAUT). The gold of Paxos is stored in the worldwide vaults of Brink’s and is regularly audited by a credible third party. Tether Gold’s metal supposedly resides somewhere in Switzerland; the firm itself is incorporated in the British Virgin Islands. Tether in general does not conduct regular audits; its official statements dance around that fact. These crypto coins, like bullion itself or funds like GLD that hold gold, are in practice probably mainly an investment vehicle (store of value) rather than an active medium of exchange.

However, getting down to the consumer level of payment convenience, we now have a gold-backed credit card (Glint) and debit card (VeraCash Mastercard). Both of these hold their gold in Swiss vaults. The funds you place with these companies have gold allocated to them, so these are a (seemingly cost-effective) means to own gold. If you get nervous, you can actually (subject to various rules) redeem your funds for actual shiny yellow metal.

“Final Notice” Traffic Ticket Smishing Scam

Yesterday I got a scary-sounding text message, claiming that I have an outstanding traffic ticket in a certain state, and threatening me with the following if I did not pay within two days:

We will take the following actions:

1. Report to the DMV Breach Database

2. Suspend your vehicle registration starting June 2

3. Suspension of driving privileges for 30 days…

4. You may be sued and your credit score will suffer

Please pay immediately before execution to avoid license suspension and further legal disputes.

Oh, my!

A link (which I did NOT click on) was provided for “payment”.

I also got an almost (not quite) identical text a few days earlier. I was almost sure these were scams, but it was comforting to confirm that by going to the web and reading that, yes, these sorts of texts are the flavor of the month in remote rip-offs; as a rule, states do not send out threatening texts with payment links in them.

These texts are examples of “smishing”, which is phishing (to collect identity or bank/credit card information) via SMS text messaging. It must be a lucrative practice. According to spam blocker Robokiller, Americans received 19.2 billion spam robo texts in May 2025. That’s nearly 63 spam texts for every person in the U.S.

Beside these traffic ticket scams, I often get texts asking me to click to track delivery of some package, or to prevent the misuse of my credit card, etc. I have been spared text messages from the Nigerian prince who needs my help to claim his rightful inheritance; I did get an email from him some years back.

The FTC keeps a database called Sentinel of fraud complaints made to the FTC and to law enforcement agencies. People reported losing a total of $12 billion to fraud in 2024, an increase of $2 billion over the previous year. That is a LOT of money (and a commentary on how wealthy Americans are, if that much can get skimmed off with little net impact on society). The biggest single category for dollar loss was investment fraud; the number of victims was smaller than for other categories, but the loss per victim ($9,200) was quite high. Other categories with high median losses per victim were Business and Job Opportunities ($2,250) and Mortgage Foreclosure Relief and Debt Management ($1,500).

Imposter scams like the texts I have gotten (sender pretending to be from state DMV, post office, bank, credit card company, etc.) were by far the largest category by number reported (845,806 in 2024). Of those imposter reports, 22% involved actual losses ($800 median loss), totaling a hefty $2,952 million. That is a juicy enough haul to keep those robo frauds coming.
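Those reported figures imply a heavily skewed loss distribution, which a quick arithmetic check makes plain (Python, using the numbers above):

```python
# Sanity check on the FTC Sentinel imposter-scam figures quoted above.
reports = 845_806        # imposter-scam reports in 2024
loss_fraction = 0.22     # share of reports involving an actual loss
total_loss = 2_952e6     # $2,952 million in total losses

victims = reports * loss_fraction   # reports that involved a loss (~186,000)
mean_loss = total_loss / victims    # average loss per losing victim

print(f"{victims:,.0f} loss reports; mean loss ${mean_loss:,.0f} vs. the $800 median")
```

The mean works out to roughly $15,900, about twenty times the $800 median, which suggests a relatively small number of very large losses dominates the total.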

How to not get scammed: Be suspicious of every email or text, especially ones that prey on emotions like fear, greed, or curiosity and try to lure you into making payments or divulging personal information. If a message purports to come from some known entity like Bank of America or your state DMV, contact that entity directly to check it out. If you don’t click on anything (or reply in any way, even with a Y or N), it can’t hurt you.

I’m not sure how much carriers can do, considering the bad guys tend to hijack legit phone numbers for their dirty work, but you can mark these texts as spam to help your phone carrier improve its spam detection algorithm. Also, reporting scam texts to the U.S. Federal Trade Commission and/or the FBI’s Internet Crime Complaint Center can help build their data sets, and perhaps lead to law enforcement actions.

Later add: According to EZPass, here is how to report text scams:

You can report smishing messages to your cell carrier by following this FCC guidance.  This service is provided by most cell carriers.

  1. Hold down the spam TXT/SMS message with your finger
  2. Select the “Forward” option
  3. Enter 7726 as the recipient and press “Send”

Additionally, to report the message to the FBI, visit the FBI’s Internet Crime Complaint Center (ic3.gov) and select ‘File a Complaint’ to do so.  When completing the complaint, include the phone number where the smishing text originated, and the website link listed within the text.