A Visual Summary of the 2025 Economics Nobel Lectures

Fellow EWED blogger Jeremy Horpedahl generally gives good advice. Therefore, when the other week he provided a link and recommended that we watch Joel Mokyr’s 2025 Nobel lecture*, I did so.

The linked YouTube video featured three speakers, this year's economics laureates. They received the prize for their work on innovation-driven economic growth. The whole video is nearly two hours long, which is longer than most folks want to listen to unless they are on a long car trip. Joel's talk was the first, and it was truly engaging.

For time-pressed readers here, I have snipped many of the speakers’ slides, and pasted them below, with minimal commentary.

First, here are the great men themselves:

Talk #1: Joel Mokyr: Can Progress in Innovation Be Sustained?

And indeed, one can find pieces of evidence that point in this direction, such as the slower pace of pharmaceutical discoveries.

But Joel is optimistic:

Joel provides various examples of advances in theoretical knowledge and in practical technology (especially in making instruments) feeding each other. E.g., nineteenth-century advances in high-resolution microscopy led to the study of micro-organisms, which led to the germ theory of disease, one of the all-time key discoveries that helped mankind:

So, on the technical and intellectual side, Joel feels that the drivers are still in place for continued strong progress. What may block progress are unhelpful human attitudes and fragmentation, including outright wars.

Or, as Friedrich Schiller wrote, “Against stupidity, the gods themselves contend in vain”.

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

Talk #2: Philippe Aghion: The Economics of Creative Destruction

He commented that on the personal level, what seems to be a failure in your life can prove to be “a revival, your savior” (English is not his first language, but the point is a good one).

Much of his talk discussed some inherent contradictions in the innovation process, especially how once a new firm achieves dominance through innovation, it tends to block out newer entrants:

KEY SLIDE:

Outline of the rest of his talk:

[ There were more charts on fine points of his competition/innovation model(s)]

Slide on companies’ failure rate, grouped by age of the firm:

His comment: if you are a young, small firm, it only takes one act of (competitors') creative destruction to oust you, whereas for older, larger, more diverse firms, it might take two or three creative destructions to wipe you out.

He then uses some of these concepts to address “historical enigmas.”

First, secular stagnation:

[My comment: Total factor productivity (TFP) growth rate in economics measures the portion of output growth not explained by increases in traditional inputs like labor and capital. It is often considered the primary contributor to GDP growth, reflecting gains from technological progress, efficiency improvements, and other factors that enhance production]
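[The Solow-residual idea behind TFP can be shown with a toy calculation. This is only an illustrative sketch: the numbers are made up, and the capital share a = 0.3 is an assumed parameter, not anything from the lecture.]

```python
# Toy Solow-residual calculation: TFP is the part of output (Y) not
# explained by capital (K) and labor (L), under Y = A * K**a * L**(1-a).
# All numbers are made up; a = 0.3 is an assumed capital share.

def tfp(Y, K, L, a=0.3):
    """Back out total factor productivity A from Y = A * K**a * L**(1-a)."""
    return Y / (K**a * L**(1 - a))

# Two hypothetical years: output grows 5%, but both inputs grow only 2%.
A0 = tfp(Y=100.0, K=300.0, L=150.0)
A1 = tfp(Y=105.0, K=306.0, L=153.0)

tfp_growth = A1 / A0 - 1
print(f"TFP growth: {tfp_growth:.1%}")  # TFP growth: 2.9%
```

With both inputs growing 2%, the input bundle grows exactly 2%, so of the 5% output growth roughly 2.9% is the "unexplained" residual attributed to TFP.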

I think this chart was for the US. Productivity grew fast in the 1996-2005 timeframe, then slowed back down.

During that period of soaring growth, there was increased concentration in services. The boost in roughly 1993-2003 was a composition effect, as big techs like Microsoft and Amazon bought out small firms and grew the most. But then this discouraged new entrants.

The gap between leaders and laggards is increasing, likely due to the quasi-monopoly of big tech firms.

Another historical enigma – why do some countries stop growing? “Middle Income Trap”


He made the case that Korea and Japan grew fastest when they were catching up with Western technology, then slowed down.

China for the past 30 years has been growing by catching up, absorbing outside technology. But the policies for pioneering new technologies are different from those for catching up.

Europe: During WWII a lot of capital was destroyed, but the Europeans quickly started to catch up with the US (Europe had good education, and the Marshall Plan rebuilt capital)…but then they stagnated, because they were not as strong in innovation.

Europeans are doing mid-tech incremental innovation, whereas the US is doing high-tech breakthrough innovation.

[my comment: I don’t know if innovation is the whole story, it is tough to compete with a large, unified nation sitting on so much premium farmland and oil fields]

Patents:

Red = US, blue = China, yellow = Japan, green = Europe. His point: Europe is lagging.

Europe needs true unified market, policies to foster innovation (and creative destruction, rather than preservation).

Finally: Rethinking Capitalism

GINI index is a measure of inequality.
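For readers who want the mechanics: one standard way to compute a Gini coefficient is half the mean absolute difference between all pairs of incomes, divided by the mean income. A minimal sketch with made-up incomes:

```python
# Toy Gini coefficient: 0 = perfect equality, values near 1 = extreme
# inequality. Computed as the mean absolute difference between all pairs
# of incomes, normalized by twice the mean. Incomes below are made up.

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))  # 0.0 (everyone earns the same)
print(gini([1, 1, 1, 97]))     # 0.72 (one person earns nearly everything)
```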

Deaths of unskilled middle-aged men in the U.S. are due in part to distress over losing good jobs [I'm not sure that is the whole story]. The key point of the two slides above is that the US has more innovation, but some bad social outcomes.

So, you'd like to have the best of both…flexibility (like the US) AND inclusivity (like Europe).

Example: with Danish welfare policies, there is little stress if you lose your job (slide above).

He found that innovation (in Europe? Finland?) correlated with parents' income and education level:

…but that is considered suboptimal, since you want every young person, no matter the parents' status, to have the chance to contribute to innovation. He pointed to education reforms in Finland that gave universal access to good education, and claimed positive effects on innovation.

Final subtopic: competition. Again, the mega tech firms discourage competition. It used to be that small firms were the main engine of job growth; now, not so much:

Makes the case that entrant competition enhances social mobility.

Conclusions:

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

Talk #3: Peter Howitt

The third speaker, Peter Howitt, showed only a very few slides, all of which were pretty unengaging, such as:

So, I don’t have much to show from him. He has been a close collaborator of Philippe Aghion, and he seemed to be saying similar things. I can report that he is basically optimistic about the future.

* The economics prize is not a classic “Nobel prize” like the ones established by the Swedish dynamite inventor himself, but was established in 1968 by the Swedish national bank “In Memory of Alfred Nobel.”

Here is an AI summary of the 2025 economics prize:  

The 2025 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel was awarded to Joel Mokyr, Philippe Aghion, and Peter Howitt for their groundbreaking work on innovation-driven economic growth. Mokyr received half of the prize for identifying the prerequisites for sustained growth through technological progress, emphasizing the importance of “useful knowledge,” mechanical competence, and institutions conducive to innovation. The other half was jointly awarded to Aghion and Howitt for developing a mathematical model of sustained growth through “creative destruction,” a concept that explains how new technologies and products replace older ones, driving economic advancement. Their research highlights that economic growth is not guaranteed and requires supportive policies, open markets, and mechanisms to manage the disruptive effects of innovation, such as job displacement and firm failures. The award comes at a critical time, as concerns grow over threats to scientific research funding and the potential for de-globalization to hinder innovation.

The Fed Resumes Buying Treasuries: Is This the Start of, Ahem, QE?

In some quarters there is a sense that quantitative easing (QE), the massive purchase of Treasury and other bonds by the Fed, is something embarrassing or disreputable – – an admission of failure, or an enabling of profligate financial behaviors. For months, pundits have been smacking their lips in anticipation of QE-like Fed actions, so they could say, “I told you so”. In particular, folks have predicted that the Fed would try to disguise the QE-ness of its action by giving it some other, more innocuous name.

Here is how liquidity analyst Michael Howell humorously put it on Dec 7:

All leave has been cancelled in the Fed’s Acronym Department. They are hurriedly working over-time, desperately trying to think up an anodyne name to dub (inevitable) future liquidity interventions in time for the upcoming FOMC meeting. They plainly cannot use the politically-charged ‘QE’. We favor the term ‘Not-QE, QE’, but odds are it will be dubbed something like ‘Bank Reverse Management Operations’ (BRMO) or ‘Treasury Market Liquidity Operations’ (TMLO). The Fed could take a leaf from China’s playbook, since her Central Bank the PBoC, now uses a long list of monetary acronyms, such as MTL, RRRs, RRPs and now ORRPs, probably to hide what policy makers are really doing.

And indeed, the Fed announced on Dec 10 that it would purchase $40 billion in T-bills in the very near term, with more purchases to follow.

But is this really (the unseemly) QE of years past? Cooler heads argue that no, it is not. Traditional QE has focused on longer-term securities (e.g. T-bonds or mortgage securities with maturities perhaps 5-10 years), in an effort to lower longer-term rates. Classically, QE was undertaken when the broader economy was in crisis, and short-term rates had already been lowered to near zero, so they could not be lowered much further.

But the current purchases are all very short-term (3 months or less). So, this is a swap of cash for almost-cash. Thus, I am on the side of those saying this is not quite QE. Almost, but not quite.

The reason given for undertaking these purchases is pretty straightforward, though it would take more time to explicate than I want to take right now; I hope to return to this topic of system liquidity in a future post. Briefly, the whole financial system runs on constant refinancing/rolling over of debt. A key mechanism for this is the “repo” market for collateralized lending, and a key parameter for the health of that market is the level of “reserves” in the banking system. Those reserves, for various reasons, have gotten so low that the system is in danger of seizing up, like a machine with insufficient lubrication. These recent Fed purchases directly ease that situation. This management of short-term liquidity does differ from classic purchases of long-term securities.

The reason I am not comfortable saying robustly, “No, this is not all QE” is that the government has taken to funding its ginormous ongoing peacetime deficit with mainly short-term debt. It is that ginormous short-term debt issuance which has contributed to the liquidity squeeze. And so, these ultra-short term T-bill purchases are to some extent monetizing the deficit. Deficit monetization in theory differs from QE, at least in stated goals, but in practice the boundaries are blurry.

Google’s TPU Chips Threaten Nvidia’s Dominance in AI Computing

Here is a three-year chart of stock prices for Nvidia (NVDA), Alphabet/Google (GOOG), and the generic QQQ tech stock composite:

NVDA has been spectacular. If you had $20k in NVDA three years ago, it would have turned into nearly $200k. Sweet. Meanwhile, GOOG poked along at the general pace of QQQ.  Until…around Sept 1 (yellow line), GOOG started to pull away from QQQ, and has not looked back.

And in the past two months, GOOG stock has stomped all over NVDA, as shown in the six-month chart below. The two stocks were neck and neck in early October; then GOOG surged way ahead. In the past month, GOOG is up sharply (red arrow), while NVDA is down significantly:

What is going on? It seems that the market is buying the narrative that Google’s Tensor Processing Unit (TPU) chips are a competitive threat to Nvidia’s GPUs. Last week, we published a tutorial on the technical details here. Briefly, Google’s TPUs are hardwired to perform key AI calculations, whereas Nvidia’s GPUs are more general-purpose. For a range of AI processing, the TPUs are faster and much more energy-efficient than the GPUs.

The greater flexibility of the Nvidia GPUs, and the programming community’s familiarity with Nvidia’s CUDA programming language, still gives Nvidia a bit of an edge in the AI training phase. But much of that edge fades for the inference (application) usages for AI. For the past few years, the big AI wannabes have focused madly on model training. But there must be a shift to inference (practical implementation) soon, for AI models to actually make money.

All this is a big potential headache for Nvidia. Because of their quasi-monopoly on AI compute, they have been able to charge a huge 75% gross profit margin on their chips. Their customers are naturally not thrilled with this, and have been making some efforts to devise alternatives. But it seems like Google, thanks to a big head start in this area, and very deep pockets, has actually equaled or even beaten Nvidia at its own game.

This explains much of the recent disparity in stock movements. It should be noted, however, that for a quirky business reason, Google is unlikely in the near term to displace Nvidia as the main go-to for AI compute power. The reason is this: most AI compute power is implemented in huge data/cloud centers. And Google is one of the three main cloud vendors, along with Microsoft and Amazon, with IBM and Oracle trailing behind. So, for Google to supply Microsoft and Amazon with its chips and accompanying know-how would be to enable its competitors to compete more strongly.

Also, AI users like, say, OpenAI would be reluctant to commit to usage in a Google-owned facility using Google chips, since the user would then be somewhat locked in and held hostage: it would be expensive to switch to a different data center if Google tried to raise prices. In contrast, a user can readily move to a different data center for a better deal if all the centers are using Nvidia chips.

For the present, then, Google is using its TPU technology primarily in-house. The company has a huge suite of AI-adjacent business lines, so its TPU capability does give it genuine advantages there. Reportedly, soul-searching continues in the Google C-suite about how to more broadly monetize its TPUs. It seems likely that they will find a way. 

As usual, nothing here constitutes advice to buy or sell any security.

AI Computing Tutorial: Training vs. Inference Compute Needs, and GPU vs. TPU Processors

A tsunami of sentiment shift is washing over Wall Street, away from Nvidia and towards Google/Alphabet. In the past month, GOOG stock is up a sizzling 12%, while NVDA plunged 13%, despite producing its usual earnings beat.  Today I will discuss some of the technical backdrop to this sentiment shift, which involves the differences between training AI models versus actually applying them to specific problems (“inference”), and significantly different processing chips. Next week I will cover the company-specific implications.

As most readers here probably know, the Large Language Models (LLMs) that underpin the popular new AI products work by sucking in nearly all the text (and now other data) that humans have ever produced, reducing each word or form of a word to a numerical token, and grinding and grinding to discover consistent patterns among those tokens. Layers of (virtual) neural nets are used. The training process involves an insane amount of trying to predict, say, the next word in a sentence scraped from the web, evaluating why the model missed it, and feeding that information back to adjust the matrix of weights on the neural layers, until the model can predict that next word correctly. Then it is on to the next sentence found on the internet, working and working until that one can be predicted properly. At the end of the day, a well-trained AI chatbot can respond to Bob's complaint about his boss with an appropriately sympathetic pseudo-human reply like, “It sounds like your boss is not treating you fairly, Bob. Tell me more about…” It bears repeating that LLMs do not actually “know” anything. All they can do is produce a statistically probable word salad in response to prompts. But they can now do that so well that they are very useful.*

This is an oversimplification, but gives the flavor of the endless forward and backward propagation and iteration that is required for model training. This training typically requires running vast banks of very high-end processors, typically housed in large, power-hungry data centers, for months at a time.
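The predict-the-next-word idea can be illustrated with a toy model. This sketch is a drastic simplification of my own devising: it uses pure counting (a bigram table) on a made-up training text, whereas real LLMs learn billions of neural-net weights by gradient descent. The objective, though, is the same: pick the statistically most probable next word.

```python
# A drastically simplified sketch of "statistical next-word prediction":
# a bigram model that counts which word follows which in a training text,
# then predicts the most common follower. The training text is made up.

from collections import Counter, defaultdict

text = ("the boss was unfair to bob . "
        "the boss was angry . "
        "bob was tired .").split()

follow = defaultdict(Counter)
for word, nxt in zip(text, text[1:]):
    follow[word][nxt] += 1          # tally: nxt was seen right after word

def predict(word):
    """Return the most statistically probable next word from training."""
    return follow[word].most_common(1)[0][0]

print(predict("the"))   # boss  ('boss' always follows 'the' in training)
print(predict("boss"))  # was
```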

Once a model is trained (i.e., the neural net weights have been determined), running it (generating responses to human prompts) takes considerably less compute power. This is the “inference” phase of generative AI. It still takes a lot of compute to run a big model quickly, but a simpler LLM like DeepSeek can be run, with only modest time lags, on a high-end PC.

GPUs Versus ASIC TPUs

Nvidia has made its fortune by taking graphics processing units (GPUs) that were developed for the massively parallel calculations needed to drive video displays, and adapting them to more general problem solving that could make use of rapid matrix calculations. Nvidia chips and its CUDA language have been employed for physical simulations such as seismology and molecular dynamics, and then for Bitcoin calculations. When generative AI came along, Nvidia chips and programming tools were the obvious choice for LLM computing needs. The world's lust for AI compute is so insatiable, and Nvidia has had such a stranglehold, that the company has been able to charge an eye-watering gross profit margin of around 75% on its chips.

AI users of course are trying desperately to get compute capability without having to pay such high fees to Nvidia. It has been hard to mount a serious competitive challenge, though. Nvidia has a commanding lead in hardware and supporting software, and (unlike the Intel of years gone by) keeps forging ahead rather than resting on its laurels.

So far, no one seems to be able to compete strongly with Nvidia in GPUs. However, there is a different chip architecture, which by some measures can beat GPUs at their own game.

Nvidia GPUs are general-purpose parallel processors with high flexibility, capable of handling a wide range of tasks from gaming to AI training, supported by a mature software ecosystem like CUDA. GPUs beat out the original computer central processing units (CPUs) for these tasks by sacrificing flexibility for the power to do parallel processing of many simple, repetitive operations. The newer “application-specific integrated circuits” (ASICs) take this specialization a step further. They can be custom hard-wired to do specific calculations, such as those required for Bitcoin and now for AI. By cutting out steps used by GPUs, especially fetching data in and out of memory, ASICs can do many AI computing tasks faster and cheaper than Nvidia GPUs, and with much less electric power. That is a big plus, since AI data centers are driving up electricity prices in many parts of the country. The particular type of ASIC that Google uses for AI is called a Tensor Processing Unit (TPU).

I found this explanation by UncoverAlpha to be enlightening:

A GPU is a “general-purpose” parallel processor, while a TPU is a “domain-specific” architecture.

The GPUs were designed for graphics. They excel at parallel processing (doing many things at once), which is great for AI. However, because they are designed to handle everything from video game textures to scientific simulations, they carry “architectural baggage.” They spend significant energy and chip area on complex tasks like caching, branch prediction, and managing independent threads.

A TPU, on the other hand, strips away all that baggage. It has no hardware for rasterization or texture mapping. Instead, it uses a unique architecture called a Systolic Array.

The “Systolic Array” is the key differentiator. In a standard CPU or GPU, the chip moves data back and forth between the memory and the computing units for every calculation. This constant shuffling creates a bottleneck (the Von Neumann bottleneck).

In a TPU’s systolic array, data flows through the chip like blood through a heart (hence “systolic”).

  1. It loads data (weights) once.
  2. It passes inputs through a massive grid of multipliers.
  3. The data is passed directly to the next unit in the array without writing back to memory.

What this means, in essence, is that a TPU, because of its systolic array, drastically reduces the number of memory reads and writes required from HBM. As a result, the TPU can spend its cycles computing rather than waiting for data.
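The dataflow described in that quote can be emulated in a few lines. This is only an illustrative sketch of the weight-stationary idea, not Google's actual hardware: the weight grid is loaded once, and partial sums are carried along in a local accumulator instead of being written back to memory after every multiply. (In pure Python there is no actual speedup; the payoff on real silicon is the drastic reduction in memory traffic.)

```python
# Toy emulation of a weight-stationary systolic matrix multiply:
# weights W stay fixed in the "array"; each input row of X streams
# through, with partial sums passed cell-to-cell in a local accumulator
# rather than written back to memory per multiply. Matrices are made up.

def systolic_matmul(X, W):
    """Multiply X (m x k) by W (k x n), weight-stationary style."""
    m, k, n = len(X), len(W), len(W[0])
    out = [[0] * n for _ in range(m)]
    for i in range(m):                  # each input row flows through the grid
        acc = [0] * n                   # partial sums ride along with the data
        for kk in range(k):             # pass through one row of weight cells
            x = X[i][kk]                # input value entering the array
            for j in range(n):
                acc[j] += x * W[kk][j]  # multiply-accumulate, no memory round trip
        out[i] = acc                    # finished sums exit the array
    return out

X = [[1, 2], [3, 4]]
W = [[5, 6], [7, 8]]
print(systolic_matmul(X, W))  # [[19, 22], [43, 50]]
```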

Google has developed the most advanced ASICs for doing AI, which are now on some levels a competitive threat to Nvidia. Some implications of this will be explored in a post next week.

*Next generation AI seeks to step beyond the LLM world of statistical word salads, and try to model cause and effect at the level of objects and agents in the real world – – see Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence.

Standard disclaimer: Nothing here should be considered advice to buy or sell any security.

Structural Insulated Panels (SIP): The Latest, Greatest (?) Home Construction Method

Last week I drove an hour south to help an acquaintance with constructing his retirement home. I answered a group email request, looking for help in putting up a wall in this house.
I assumed this was a conventional stick-built construction, so I envisioned constructing a studded wall out of two by fours and two by sixes whilst lying flat on the ground, and then needing four or five guys to swing this wall up to a vertical position, like an old-fashioned barn raising.

But that wasn't it at all. This house was being built from Structural Insulated Panels (SIP). These panels have a styrofoam core, around 5 inches thick, with a facing on each side of thin oriented strand board (OSB). (OSB is a kind of cheapo plywood.)


The edges have a sort of tongue and groove configuration, so they mesh together. Each of the SIP panels was about 9 feet high and between 2 feet and 8 feet long. Two strong guys could manhandle a panel into position. Along the edge of the floor, 2×6’s had been mounted to guide the positioning of the bottom of each wall panel.


We put glue and sealing caulk on the edges to stick them together, drove 7-inch-long screws through the edges after they were in place, and also drove a series of nails through the OSB edges into the 2×6’s at the bottom. Pneumatic nail guns give such a satisfying “thunk” with each trigger pull that you feel quite empowered. Here are a couple photos from that day:


The homeowner told me that he learned about SIP construction from an exhibit in Washington, DC that he attended with his grandson. The exhibit was on building techniques through the ages, starting with mud huts, and ending with SIP as the latest technique. That inspired him.

(As an old guy, I was not of much use lifting the panels. I did drive in some nails and screws. I was not initially aware of the glue/caulk along the edges, so I spent my first 20 minutes on the job wiping off the sticky goo I got all over my gloves and coat when I grabbed my first panel. My chief contribution that day was to keep a guy who was lifting a heavy panel beam overhead from toppling backwards off a stepladder.)

We amateurs were pretty slow, but I could see that a practiced crew could go slap slap slap and erect all the exterior walls of a medium-sized single-story house in a day or two, without needing advanced carpentry skills. Those walls would come complete with insulation. They would still need weatherproof exterior siding (e.g., vinyl or faux stone) on the outside, and sheetrock on the inside. Holes were pre-drilled in the styrofoam for running the electrical wiring up through the SIPs.

From my limited reading, it seems that the biggest single advantage of SIP construction is quick on-site assembly. It is ideal for situations where you only have a limited time window for construction, or in an isolated or affluent area where site labor is very expensive and hard to obtain (e.g., a ski resort town). Reportedly, SIP buildings are mechanically stronger than stick-built, which is handy in case of earthquakes or hurricanes. Also, an SIP wall has a very high insulation value, and the construction method is practically airtight.

SIP construction is not cheaper than stick-built; it's around 10% more expensive. You need perfect communication with the manufacturer of the SIP panels: if the delivered panels don't fit properly on-site, you are hosed. Also, it is tough to modify an SIP house once it is built.

Because it is so airtight, an SIP house requires some finesse in designing the HVAC system. You need to be very careful protecting the walls from moisture, both inside and out, since the SIP panels can lose strength if they get wet. For that reason, some folks prefer not to use SIP for roofs, but only for walls and first-story flooring.
For more on SIP pros and cons, see here and here.

Michael Burry’s New Venture Is Substack “Cassandra Unchained”: Set Free to Prophesy All-Out Doom on AI Investing

This is a quick follow-up to last week’s post on “Big Short” Michael Burry closing down his Scion Asset Management hedge fund. Burry had teased on X that he would announce his next big thing on Nov 25. It seems he is now a day or two early: Sunday night he launched a paid-subscription “Cassandra Unchained” Substack. There he claims that:

Cassandra Unchained is now Dr. Michael Burry’s sole focus as he gives you a front row seat to his analytical efforts and projections for stocks, markets, and bubbles, often with an eye to history and its remarkably timeless patterns.

Reportedly the subscription cost is $39 a month, or $379 annually, and there are 26,000 subscribers already. Click the abacus and…that comes to a cool $9.9 million a year in subscription fees. Not bad compensation for sharing your musings online.
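Clicking that abacus in code, using the reported figures and assuming (my assumption, not stated in the announcement) that all subscribers pay the cheaper annual rate:

```python
# Back-of-envelope check of the subscription revenue figure, assuming
# all 26,000 subscribers pay the $379 annual rate. Monthly payers at
# $39/month (about $468/yr) would push the total even higher.
subscribers = 26_000
annual_fee = 379
revenue = subscribers * annual_fee
print(f"${revenue:,} per year")  # $9,854,000 per year -> roughly $9.9 million
```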

Michael Burry was dubbed “Cassandra” by Warren Buffett in recognition of his prescient warnings about the 2008 housing market collapse, a prophecy that was initially ignored, much like the mythological Cassandra who was fated to deliver true prophecies that were never believed. Burry embraced this nickname, adopting “Cassandra” as his online moniker on social media platforms, symbolizing his role as a lone voice warning of impending financial disaster. On the About page of his new Substack, he wrote that managing clients’ money in a hedge fund like Scion came with restrictions that “muzzled” him, such that he could only share “cryptic fragments” publicly, whereas now he is “unchained.”

Of his first two posts on the new Substack, one was a retrospective on his days as a practicing doctor (a resident in neurology at Stanford Hospital) in 1999-2000. He had done a lot of online posting on investing topics, focusing on valuations, and finally left medicine to start a hedge fund. As he tells it, he called the dot-com bubble before it popped.

Business Insider summarizes Burry's second post, which attacks the central premise of those who claim the current AI boom is fundamentally different from the 1990s dot-com boom:

The second post aims straight at the heart of the AI boom, which he calls a “glorious folly” that will require investigation over several posts to break down.

Burry goes on to address a common argument about the difference between the dot-com bubble and AI boom — that the tech companies leading the charge 25 years ago were largely unprofitable, while the current crop are money-printing machines.

At the turn of this century, Burry writes, the Nasdaq was driven by “highly profitable large caps, among which were the so-called ‘Four Horsemen’ of the era — Microsoft, Intel, Dell, and Cisco.”

He writes that a key issue with the dot-com bubble was “catastrophically overbuilt supply and nowhere near enough demand,” before adding that it’s “just not so different this time, try as so many might do to make it so.”

Burry calls out the “five public horsemen of today’s AI boom — Microsoft, Google, Meta, Amazon and Oracle” along with “several adolescent startups” including Sam Altman’s OpenAI.

Those companies have pledged to invest well over $1 trillion into microchips, data centers, and other infrastructure over the next few years to power an AI revolution. They’ve forecasted enormous growth, exciting investors and igniting their stock prices.

Shares of Nvidia, a key supplier of AI microchips, have surged 12-fold since the start of 2023, making it the world’s most valuable public company with a $4.4 trillion market capitalization.

“And once again there is a Cisco at the center of it all, with the picks and shovels for all and the expansive vision to go with it,” Burry writes, after noting the internet-networking giant’s stock plunged by over 75% during the dot-com crash. “Its name is Nvidia.”

Tell us how you really feel, Michael. Cassandra, indeed.

My amateur opinion here: I think there is a modest but significant chance that the hyperscalers will not all be able to make enough fresh money to cover their ginormous 2024-2028 investments in AI capabilities. What happens then? Google, Meta, and Amazon may need to write down hundreds of billions of dollars on their balance sheets, which would show up as ginormous hits to GAAP earnings for a number of quarters. But then life would go on just fine for these cash machines, and the market may soon forgive and forget this massive misallocation of old cash, as long as operating cash keeps rolling in as usual. Stocks are, after all, priced on forward earnings. If the AI boom busts, all tech stock prices would sag, but I think the biggest operating impact would be on suppliers of chips (like Nvidia) and of data centers (like Oracle). So, Burry's comparison of 2025 Nvidia to 1999 Cisco seems apt.

“Big Short” Michael Burry Closes Scion Hedge Fund: “Value” Approach Ceased to Add Value?

Michael Burry is famed for being among the first to both discern and heavily trade on the ridiculousness of subprime mortgages circa 2007. He is a quirky guy: brilliant, but probably has Asperger's. That comes through in his portrayal in the 2015 movie based on the book The Big Short.

He called it right with mortgages in 2007, but he was early on his call, and for many months he lost money on the bold trading positions he had put on in his hedge fund, Scion Capital. Investors in his fund rebelled, though he eventually prevailed. Reportedly he made $100 million for himself, and another $700 million for his investors, but in the wake of this turmoil, he shut down Scion Capital.

In 2013 he reopened his hedge fund under the name Scion Asset Management. He has generated headlines in the past several years, criticizing high valuations of big tech companies. Disclosure of his short positions on Nvidia and Palantir may have contributed to a short-term decline in those stocks. He has called out big tech companies in general for stretching out the schedule of depreciation of their AI data center investments, to make their earnings look bigger than they really are.

Burry is something of an investing legend, but people always like to take pot shots at such legends. Burry has been rather a permabear, and of course they are right on occasion. For instance, I ran across the following OP at Reddit:

Michael burry is a clown who got lucky once

I am getting sick and tired of seeing a new headline or YouTube video about Michael burry betting against the market or shorting this or that.

First of all the guy is been betting against the market all his career and happened to get lucky once. Even a broken clock is right twice in a day. He is one of these goons who reads and understands academia economics and tries to apply them to real world which is they don’t work %99 of the time. In fact guys like him with heavy focus on academia economic approach don’t make it to far in this industry and if burry didn’t get so lucky with his CDS trade he would be most likely ended up teaching some bs economic class in some mid level university.

Teaching econ at some mid-level university, ouch. (But a reader fired back at this OP: "OP eating hot pockets in his moms basement criticizing a dude who has made hundreds of millions of dollars and started from scratch.")

Anyway, Burry raised eyebrows at the end of October, when he announced that he was shutting down his Scion Asset Management hedge fund. This Oct 27 announcement was accompanied by verbiage to the effect that he has not read the markets correctly in recent years:

With a heavy heart, I will liquidate the funds and return capital—minus a small audit and tax holdback—by year’s end. My estimation of value in securities is not now, and has not been for some time, in sync with the markets.


To me, all this suggested that Burry’s traditional Graham-Dodd value-oriented approach had gotten run over by the raging tech bull market of the past eight years. I am sensitive to this, because I, too, have a gut bias towards value, which has not served me well in recent years. (A year ago I finally saw the light and publicly recanted value investing and embraced the bull, here on EWED).

Out of curiosity, therefore, I did some very shallow digging to try to find out how his Scion fund has performed over the last several years. I did not find the actual returns that investors would have seen. There are several sites that analyze the public filings of various hedge funds and then estimate returns by applying the reported portfolio weightings to those stocks' subsequent price moves. This is an imperfect process, since it misses the fund's actual buying and selling prices during the quarter, and may totally miss the effects of shorting, options, convertible warrants, etc. But it suggests that Scion's performance has not been amazing recently. Funds are nearly always shut down because of underperformance, not overperformance.

Pawing through sites like HedgeFollow (here and here), Stockcircle, and Tipranks, my takeaway is that Burry probably beat the S&P 500 over the past three years, but roughly tied the NASDAQ (e.g., the QQQ fund). This performance would naturally have his fund investors asking why they should be paying huge fees to someone who can't beat QQQ.

What’s next for Burry? In a couple of tweets on X, Burry has teased that he will reveal some plans on November 25. The speculation is that he will refocus on some personal asset management fund, where he will not be bothered by whiny outside investors. We shall see.

META Stock Slides as Investors Question Payout for Huge AI Spend

How’s this for a “battleground” stock:

Meta stock dropped about 13% when its latest quarterly earnings were released, then continued to slide until today's market exuberance over a potential end to the government shutdown. What is the problem?

Meta has invested enormous sums in AI development already, and has committed to invest even more in the future. It is currently plowing some 65% (!!) of its cash flow into AI, with no near-term prospects of making big profits there. CEO Mark Zuckerberg has a history of spending big on the Next Big Thing, which eventually fizzles. Meta's earnings have historically been so high that he could throw away a few billion here and there and nobody cared. But now (with up to $800 billion of capex spend through 2028) we are talking real money.

Up till now, Big Tech has been able to finance its investments entirely out of cash flow, but (like its peers) Meta has started issuing debt to pay for some of the AI spend. Leverage is a two-edged sword: if you can borrow a ton of money (up to $30 billion here) at, say, 5%, and invest it in something that returns 10%, that is glorious. Rah, capitalism! But if the payout is not there, you are hosed.
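The leverage arithmetic can be made concrete with a tiny sketch. The rates are the illustrative ones from the paragraph above, not forecasts:

```python
# Profit or loss on the borrowed slice alone: you keep (or eat) the spread
# between what the investment returns and what the debt costs. Illustrative only.

def net_gain(borrowed, project_return, interest_rate):
    """Annual gain or loss on borrowed capital, given return and cost rates."""
    return borrowed * (project_return - interest_rate)

debt = 30e9  # roughly the debt issuance mentioned above

gain = net_gain(debt, 0.10, 0.05)  # AI pays off at 10%, debt costs 5%
loss = net_gain(debt, 0.02, 0.05)  # AI returns only 2%, debt still costs 5%

print(f"if AI returns 10%: ${gain / 1e9:+.1f}B per year")
print(f"if AI returns  2%: ${loss / 1e9:+.1f}B per year")
```

A five-point positive spread on $30 billion is $1.5 billion a year of free money; a three-point negative spread is a $0.9 billion annual hole.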

Another ugly issue lurking in the shadows is Meta's dependence on scam ads for some 10% of its ad revenues. Reuters released a horrifying report last week detailing how Meta deliberately slow-walks or ignores legitimate complaints about false advertising and even more nefarious misuses of Facebook. Chilling specific anecdotes abound, but they seem to be part of a pattern of Meta choosing not to aggressively curtail known fraud, because doing so would cut into its revenue. Meta focuses its enforcement efforts in regions where its hands are likely to be slapped hardest by regulators, while continuing to let advertisers defraud users wherever they can get away with it:

…Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document.

But those fines would be much smaller than Meta’s revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that “present higher legal risk,” the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds “the cost of any regulatory settlement involving scam ads.”

Rather than voluntarily agreeing to do more to vet advertisers, the same document states, the company’s leadership decided to act only in response to impending regulatory action.

Thus, the seamy underside of capitalism. And this:

…The company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain – but still believes the advertiser is a likely scammer – Meta charges higher ad rates as a penalty, according to the documents. 

So…if Meta is 94% (but not 95%) sure that an ad is a fraud, it will still let it run, but just charge more for it. Sweet. Guess that sort of thinking is why Zuck is worth $250 billion, and I'm not.

But never fear, Meta’s P/E is the lowest of the Mag 7 group, so maybe it is a buy after all:

Source

As usual, nothing here should be considered advice to buy or sell any security.

Is Tesla Stock Grossly Overpriced?

One of the more polarizing topics in investing is the valuation of Tesla stock. Its peers among the Magnificent 7 big tech leaders sport price/earnings ratios mainly in the 30s. Those are high numbers, but growth stocks deserve high P/Es. A way to normalize for expected growth of earnings is to look at the Price/Earnings/Growth (PEG) ratio. This number is usually 1.5-2.0 for a well-regarded company. Anything much over 2 is considered overvalued.

Tesla's forward P/E of about 270 is nearly ten times that of its peers. Its anticipated growth rate does not seem to justify this astronomical valuation, since its PEG of around 4-10 (depending on assumptions) is way higher than normal. This seems to be a case of the CEO's personal charisma dazzling shareholders. There is always a new "story" coming out to keep the momentum going.
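The PEG arithmetic is simple enough to sketch. The growth percentages below are my own rough assumptions, and estimates vary widely:

```python
# PEG = (price/earnings) / expected annual earnings growth (in percent).
# Roughly 1.5-2.0 is typical for a well-regarded growth company.

def peg(pe_ratio, growth_pct):
    """Price/Earnings/Growth ratio, with growth expressed in percent."""
    return pe_ratio / growth_pct

peer_peg = peg(35, 20)    # a typical Mag 7 peer: P/E 35, ~20% growth
tesla_peg = peg(270, 30)  # Tesla: forward P/E ~270, an optimistic 30% growth

print(peer_peg)   # 1.75 -- within the normal band
print(tesla_peg)  # 9.0  -- near the top of the 4-10 range cited above
```

Even granting Tesla an optimistic 30% growth rate, the PEG lands far outside what is usually considered reasonable.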

Tesla’s main actual business is selling cars, electric cars. It has done a pretty good job at this over the past decade, supported by massive government subsidies. With the phasing out of these subsidies by the U.S. and some other governments, and increasing competition from other electric carmakers, it seems unlikely that this business will grow exponentially. Ditto for its smallish ($10 billion revenue) business line of supplying large batteries for electric power storage. But to Tesla fans, that doesn’t really matter. Tesla is valued, not as a car company, but as an AI startup venture. Just over the horizon are driverless robo-taxis (whose full deployment keeps getting pushed back), and humanoid Optimus robots. The total addressable market numbers being bandied about for the robots are in the trillions of dollars.

Source: Wikipedia

From Musk’s latest conference call:

Optimus is Tesla’s bipedal humanoid robot that’s in development but not yet commercially deployed. Musk has previously said the robots will be so sophisticated that they can serve as factory workers or babysitters….“Optimus will be an incredible surgeon,” Musk said on Wednesday. He said that with Optimus and self driving, “you can actually create a world where there is no poverty, where everyone has access to the finest medical care.”

Given the state of Artificial General Intelligence, I remain skeptical that such a robot will be deployed in large numbers within the next five years. It is of course a mind-bending exercise to imagine a world where $50,000 robots could do anything humans can do. Would that be a world where there is “no poverty”, or a world where there is no wealth (apart from the robot owners)? Would there be a populist groundswell to nationalize the robots in order to socialize the android bounty? But I digress.

On the Seeking Alpha website, one can find various bearish articles with self-explanatory titles such as Tesla: The Dream Factory On Wall Street, Tesla: Rallying On Robotaxi Hopium, and Tesla: Paying Software Multiples For A Car Business – Strong Sell. There are also bullish pieces, e.g. here, here, and here.

Musk's personal involvement with the shares has propped up their value. He purchased about $1 billion of TSLA shares in September. This is chicken feed relative to the company's market cap and his net worth, but it apparently wowed TSLA fans and popped the share price. What seems even more inexplicable is the favorable response to a proposed $1 trillion (!!) pay package for Elon. For him to be awarded this amount, Tesla under his watch would have to achieve hefty boosts in both physical production and stock market capitalization. But said package would be highly dilutive (like 12%) to existing shareholders, so rationally they should give it a thumbs down. However, it seems likely that shareholders are so convinced of Musk's value that they will approve the pay package on Nov 6, since he has hinted he might leave if he doesn't get it.
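A quick sketch of why a roughly 12% dilutive award is a rational thumbs-down for existing holders (stylized arithmetic; the actual award mechanics and tranches are more complex):

```python
# Issuing new shares equal to 12% of the current count shrinks every existing
# holder's fractional ownership (and thus their claim on earnings) proportionally.

shares_before = 1.00  # normalize the current share count to 1
award_shares = 0.12   # new shares equal to ~12% of shares outstanding

ownership_after = shares_before / (shares_before + award_shares)
haircut = 1 - ownership_after

print(f"ownership haircut: {haircut:.1%}")  # about 10.7%
```

So each existing shareholder's slice of the company shrinks by about a tenth; for the vote to make sense, they must believe Musk's continued presence is worth more than that.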

Such is the Musk mystique that shareholders seem to feel that giving him an even greater stake in Tesla than he already has will cause hundreds of billions of dollars of earnings to appear out of thin air. From the chatter I read from Wall Street professionals, they view all this as ridiculous magical thinking, yet they do not dare place bets against the Musk fanbase: the short interest in TSLA stock is a modest 2.2%. Tesla is grossly overvalued, but it will likely remain that way as long as Elon stays and keeps spinning grand visions of the future.

WW II Key Initiatives 3: Kurt Tank Gives Germany a Superior Fighter Plane, the Focke-Wulf 190

This is the third in a series of occasional blog posts on individual initiatives that made a strategic (not just tactical) difference in the course of the second world war.

World War II was not only the biggest, bloodiest conflict in human history; it also played a definitive role in giving us the world we have today. Everyone can find something to complain about in the current state of affairs, but think for a moment what the world would be like if the Axis powers had prevailed.

Having control of the air became crucial in the second world war: it meant you could drop bombs on enemy soldiers, ships, tanks, cities, factories, etc. The Germans showed early on how important that could be. Their terror bombing of the Dutch city of Rotterdam compelled the Netherlands to surrender to spare other cities from being likewise bombed, even though the Dutch armed forces could have held out for some time longer. The German breakthrough in the 1940 invasion of France was facilitated by a concentrated Stuka dive bombing attack on a key sector of the French front lines. The 1940 Battle of Britain was an air war, in which the Germans hoped to whittle down the Royal Air Force enough to permit an invasion across the English Channel. And so on.

The main German fighter plane at first was the Messerschmitt Me 109. It was a good plane, although by 1941 the British Spitfire had become a match for it. Both the Me 109 and the Spitfire were designed around in-line engines, in which the cylinders are arranged in two long rows in the engine block. That gave a narrow engine, and hence a skinny profile to the airplane, which tended to reduce wind resistance and make for higher speeds. A weak point of all in-line engines is that they need a circulating coolant system, running through a radiator, to carry the heat of combustion away from the engine block. This makes for more complicated maintenance, and the system is very vulnerable to damage from enemy fire.

Just when the Brits were starting to wrest air superiority back from the Germans, the Fw 190 appeared in the skies over France. Allied pilots were shocked. The new German fighter could out-climb, out-roll, and in many cases out-fight the current Spitfire models. This so-called "butcher bird" gave air superiority back to the Germans.

Its remarkable performance was the result of one man's engineering philosophy and persistence: Kurt Tank, chief designer at the German aircraft manufacturer Focke-Wulf. Tank was a pilot as well as an engineer, with long and varied military experience. He chose a radial engine for his plane, to make it more rugged and easy to maintain. In a radial engine, the individual cylinders all stick out from a central crankcase, and airflow past fins on the cylinders cools the engine; hence, no vulnerable coolant system and radiator. The conventional thinking was that a radial engine was so fat that an airplane using it would have a wide, draggy profile, but Tank's ingenious design features allowed him to make a fast, agile plane anyway. Still, it was an uphill job for Tank to sell his concept to the German military establishment. Eventually his results spoke for themselves, and the Fw 190 went into production. With its critical spots armored, the Fw 190 was hard to kill. Tank deliberately gave the landing gear a wide stance and long travel, to allow deployment on rough frontline airfields.

The Fw 190 was a superb low-to-medium-altitude fighter, and (thanks to its rugged design) it was also widely pressed into service as a precision bomber on the front lines. Around 20,000 Fw 190s were produced. They shot down many thousands of Allied planes, killed untold thousands of Allied airmen and soldiers, and destroyed thousands of Allied vehicles, mainly on the Eastern Front. It was not enough to change the ultimate outcome of the war, but by (largely single-handedly) giving the Germans such a versatile and deadly weapon, Tank stretched the war out appreciably.