Allbirds, Inc. Attempts Pivot from Making Wool Sneakers to AI Computing

A native New Zealander, Tim Brown had two separate ambitions: to become a professional soccer player and a designer. On the soccer (“football”, outside North America) front, he succeeded beyond expectations. He played on the New Zealand national team between 2004 and 2012, often as captain or vice-captain.  Brown executed a personal pivot in 2012. After retiring from soccer, he enrolled in the London School of Economics to learn the business skills needed to launch an idea he had been mulling for several years. This was a shoe made mainly of wool.

He wanted to give a boost to New Zealand’s declining sheep industry (battered by competition from polyester textiles), and to promote something more sustainable than the plasticky shoes he was always being asked to endorse as a professional player. There seemed to be plenty of room in the half-trillion-dollar-per-year footwear industry for something more environmentally friendly.

Brown launched his idea on Kickstarter in 2014, raising over $100,000. He and his partner started selling the Allbirds Wool Runner in 2016. Their green vibe was perfect for that era, and their shoes became wildly popular among the Silicon Valley VC set. They were seen on Larry Page, Barack Obama, Leonardo DiCaprio, and a whole gaggle of Hollywood actors and actresses.

Allbirds expanded its product line, and opened brick and mortar stores on several continents. Allbirds went public in 2021, and its market value ran up to $4 billion. But then the novelty of wool shoes wore off, sustainability became less urgent, and it became widely known that these “Wool Runners” are too flimsy to actually run or exercise in. They are more like slippers, and folks outside of Hollywood or Silicon Valley were not eager to pay $150 for a pair of slippers. Also, better-capitalized competitors muscled into the sustainable footwear market. Sales slid down and down, management conflicts erupted, and founder Tim Brown left to pursue other interests. On April 1, Allbirds announced it was selling the remnants of its shoe business for an ignominious $39 million.

So far, the story is unremarkable – – as with so many other startups, idealistic founders have initial success, but eventually go under upon scale-up. But there is an interesting plot twist. Instead of just going Chapter 7 BK, paying off creditors, and returning a few pennies to investors, the company is using the shell of its former business to generate capital and transform itself into a new AI venture renting out computing centers for AI usage. I assume the managers wanted to keep their jobs, and cooked up this scheme to trade on the current AI hype.


Apparently, these guys know nothing about GPU centers, so they’ll have to hire folks with expertise. Some unknown investor is backing them to the tune of $50 million, but they will have to raise much more than that to compete in the AI server business. That will horribly dilute current stockholders. They are directly competing with much better-capitalized behemoths like CoreWeave and Oracle, which can raise money on better terms. No moat, no expertise, almost no capital. But, hey, it’s AI, and so the company stock BIRD soared 600% on the news of the computing pivot.

I give them modest odds of succeeding bigly, but sometimes a mission pivot like this does come off. I’m thinking of the 1960s, when Berkshire Hathaway, facing declining earnings from its core textile business, shifted into insurance under the leadership of Warren Buffett. That generated the “float” that then enabled the purchase of other profitable businesses. We shall see if Allbirds (soon to be “NewBird”) management can likewise preside over such a seismic business shift.

Claude Mythos Is Such a Dangerous Hacker Engine That Anthropic Has Withheld Broad Release

The latest AI model from Anthropic is so powerful that they don’t dare release it to the public. It is such a threat that Jay Powell and Scott Bessent summoned the major bank CEOs to a meeting last week to warn them about it. In line with Anthropic’s “helpful, honest, and harmless” motto, they have released it only to their Project Glasswing partners. These are organizations like AWS, Apple, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, who have been granted access to the model to identify and patch vulnerabilities in critical software.

Mythos is designed, when prompted, to identify and exploit vulnerabilities in software systems. Its specialty is finding critical software bugs, but it can also assemble sophisticated exploits.

What makes Mythos particularly unsettling is that its most dangerous capabilities were not deliberately engineered. Anthropic’s team made it clear that they did not explicitly train Mythos to have these capabilities. Instead, they “emerged.”

Internal testing revealed that Mythos has autonomously uncovered thousands of zero-day vulnerabilities in “every major operating system and web browser” – – flaws that human security researchers, working for years, had never detected. The implications are disturbing.

Mythos can rapidly uncover hidden flaws in the codebases of organizations and software development firms, but it also raises the fear that attackers could find those vulnerabilities first. Much of the underlying software that Mythos can scan supports banking, retail, airlines, hospitals, and critical utilities. Regulators worry that if Mythos, or models like it, fell into the wrong hands, “systemically important” banks and even entire financial networks could be compromised before institutions even knew they were exposed.

Anthropic launched Project Glasswing in April 2026 to collaborate with tech giants and banks to identify and fix vulnerabilities before they can be exploited. This year, organizations should expect a large influx of AI-discovered hack points in critical software. The game plan is to use AI tools to patch the vulnerabilities that AI discovers. Your venerable legacy system is no longer safe. What AI can expose, it can also fix. We hope.

Ray Kurzweil predicted The Singularity (when artificial intelligence growth accelerates beyond human control) would arrive in 2045, but we might be closing in on it ahead of schedule.

Oops: Anthropic Accidentally Leaked the Entire Code for Its “Claude Code” Program

One of Anthropic’s biggest wins has been its wildly popular Claude Code program, which can do nearly all the grunt work of programming. Properly prompted, it can build new features, migrate databases, fix errors, and automate workflows.

So, it was big news in the AI world last week when an Anthropic employee accidentally exposed a link that allowed folks to download the source code for this crown jewel – – the entire code, all 512,000 lines of it, which revealed the complete logic flow of the program, down to the tiniest features. For instance, Claude Code scans for profanity and negative phrases like “this sucks” to discern user sentiment, and tries to adjust for user frustration.
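A check like that is simple to picture. The sketch below is purely hypothetical: the phrase list, scoring, and function names are invented for illustration and are not Anthropic's actual logic, which remains proprietary.

```python
# Hypothetical sketch of a phrase-based frustration check like the one
# described above. All names and phrases here are invented illustrations.
NEGATIVE_PHRASES = {"this sucks", "not working", "still broken"}

def frustration_score(message: str) -> int:
    """Count how many negative phrases appear in a user message."""
    text = message.lower()
    return sum(1 for phrase in NEGATIVE_PHRASES if phrase in text)

def adjust_tone(message: str) -> str:
    """Pick a response style based on detected frustration (toy heuristic)."""
    return "apologetic" if frustration_score(message) > 0 else "neutral"
```

Even a crude keyword scan like this gives an assistant a cheap signal for when to soften its replies, which is presumably the point of the real feature.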

Gleeful researchers, competitors, and hackers promptly downloaded zillions of copies. Anthropic issued broad copyright takedown requests, but the damage was done. Researchers quickly used AI to rewrite the original TypeScript source code into Python and Rust, claiming to get around copyright laws on the original code. Oh, the irony: for years, AI purveyors have been arguing that when they ingest the contents of every published work (including copyrighted works) and repackage them, that’s OK. So now Anthropic is tasting the other side of that claim.

The leak has been damaging to Anthropic to some degree. Competitors no longer have to reverse-engineer Claude Code, since now they know exactly how it works. Hackers have been quick to exploit vulnerabilities revealed by the leak. And Anthropic’s claim to be all about “Safety First” has been tarnished.

On the other hand, the model weights weren’t exposed, so you can’t just run the leaked code and get Claude’s results. Also, no customer data was revealed. Power users have been able to discern from the source how to run Claude Code most advantageously. A YouTube video by Nick Puru discussed such optimizations, which he summarized in a roadmap.

There have actually been a number of unexpected benefits of the leak for Anthropic. Per AI:

Brand resonance and community engagement have surged, with some observers calling the incident “peak anthropic energy” that generated significant hype and validated the product’s technical impressiveness.  The leak has acted as a massive free marketing campaign, reinforcing the narrative of a fast-moving, innovative company while bouncing the brand back among developers despite the security lapse. 

Accelerated ecosystem adoption and bug fixing are also potential benefits, as the exposure allowed engineers to dissect the agentic harness and create open-source versions or “harnesses” that keep users within the Anthropic ecosystem. Additionally, the public scrutiny likely helps identify and patch vulnerabilities faster, while the leaked source maps provide a roadmap for competitors to build “Claude-like” agents, potentially standardizing the market around Anthropic’s architectural patterns.

The leak also revealed hidden roadmap features that build anticipation, such as:

  • Kairos: A persistent background daemon for continuous operation. 
  • Proactive Mode: A feature allowing the AI to act without explicit user prompts. 
  • Terminal Pets: Playful, personality-driven interfaces to increase user engagement.

Because of these benefits, conspiracy theorists have proposed that Anthropic leaked the code on purpose, or even (April Fools!) leaked fake code. Fact checkers have come to the rescue to debunk the conspiracy claims. But in the humans vs. AI competency debate, this whole kerfuffle doesn’t make humans look so great.

Humanity’s Last Exam in Nature

Last July I wrote here about “Humanity’s Last Exam”:

When every frontier AI model can pass your tests, how do you figure out which model is best? You write a harder test.

That was the idea behind Humanity’s Last Exam, an effort by Scale AI and the Center for AI Safety to develop a large database of PhD-level questions that the best AI models still get wrong.

The group initially released an arXiv working paper explaining how we created the dataset. I was surprised to see a version of that paper published in Nature this year, with the title changed to the more generic “A benchmark of expert-level academic questions to assess AI capabilities.”

On the one hand, it makes sense that the core author groups at the Center for AI Safety and Scale AI didn’t keep every coauthor in the loop, given that there were hundreds of us. On the other hand, I’m part of a different academic mega-project that currently is keeping hundreds of coauthors in the loop as it works its way through Nature. On the third, invisible hand, I’m never going to complain if any of my coauthors gets something of ours published in Nature when I’d assumed it would remain a permanent working paper.

AI is now getting close to passing the test:

What do we do when it can answer all the questions we already know the answer to? We start asking it questions we don’t know the answer to. How do you cure cancer? What is the answer to life, the universe, and everything? When will Jesus return, and how long until a million people are convinced he’s returned as an AI? Where is Ayatollah Khamenei right now?

Learning the Bitter Lesson at EconLog

I’m in EconLog with:

Learning the Bitter Lesson in 2026

At the link, I speculate on doom, hardware, human jobs, the jagged edge (via a Joshua Gans working paper), and the Manhattan Project. The fun thing about being 6 years late to a seminal paper is that you can consider how its predictions are doing.

Sutton draws from decades of AI history to argue that researchers have learned a “bitter” truth. Researchers repeatedly assume that computers will make the next advance in intelligence by relying on specialized human expertise. Recent history shows that methods that scale with computation outperform those reliant on human expertise. For example, in computer chess, brute-force search on specialized hardware triumphed over knowledge-based approaches. Sutton warns that researchers resist learning this lesson because building in knowledge feels satisfying, but true breakthroughs come from computation’s relentless scaling. 

The article has been up for a week and some intelligent comments have already come in. Folks are pointing out that I might be underrating the models’ ability to improve themselves going forward.

Second, with the frontier AI labs driving toward automating AI research the direct human involvement in developing such algorithms/architectures may be much less than it seems that you’re positing.

If that commenter is correct, there will be less need for humans than I said.

Also, Jim Caton over on LinkedIn (James, are we all there now?) pointed out that more efficient models might not need more hardware. If the AIs figure out ways to make themselves more efficient, then is “scaling” even going to be the right word anymore for improvement? The fun thing about writing about AI is that you will probably be wrong within weeks.

Between the time I proposed this to Econlog and publication, Ilya Sutskever suggested on Dwarkesh that “We’re moving from the age of scaling to the age of research.”

Broad Slump in Tech and Other Stocks: Fear Over AI Disruption Replaces AI Euphoria

Tech stocks (e.g. QQQ) roared up and up and up for most of 2023-2025, more than doubling in those three years. A big driving narrative was how AI was going to make everything amazing – productivity (and presumably profits) would soar, and robust investments in computing capacity (chips and buildings), and electric power infrastructure buildout, would goose the whole economy.

Will the Enormous AI Capex Spending Really Pay Off?

But in the past few months, a different narrative seems to have taken hold. Now the buzz is “the dark side of AI”. First, there is growing angst among investors over how much money the Big Tech hyperscalers (Google, Meta, Amazon, Microsoft, plus Oracle) are pouring into AI-related capital investments. These five firms alone are projected to spend over $0.6 trillion (!) in 2026. When some of these companies announced greater-than-expected spending in recent earnings calls, analysts threw up all over their balance sheets. These are just eye-watering amounts, and investors have gotten a little wobbly in their support. These spends have an immediate effect on cash flow, driving it in some cases to around zero. And the depreciation on all that capex will come back to bite GAAP earnings in the coming years, driving nominal price/earnings even higher.

The critical question here is whether all that capex will pay off with mushrooming earnings three or four years down the road, or whether the lifeblood of these companies is just being flushed down the drain. This is viewed as an existential arms race: benefits are not guaranteed for this big spend, but if you don’t do this spending, you will definitely get left behind. Firms like Amazon have a long history of investing for years at little profit, in order to achieve some ultimately profitable, wide-moat quasi-monopoly status. If one AI program can manage to edge out everyone else, it could become the default application, like Amazon for online shopping or Google/YouTube for search and videos. The One AI could in fact rule us all.

Many Companies May Get Disrupted By AI

We wrote last week on the crash in enterprise software stocks like Salesforce and ServiceNow (“SaaSpocalypse”). The fear is that cheaper AI programs can do what these expensive services do for managing corporate data. That fear is now spreading more broadly (“AI Scare Trade”); investors are rotating out of many firms with high-fee, labor-driven service models seen as susceptible to AI disruption. Here are two representative examples:

  • Wealth management companies Charles Schwab and Raymond James dropped 10% and 8% last week after a tech startup announced an AI-driven tax planning tool that could customize strategies for clients
  • Freight logistics firms C.H. Robinson and Universal Logistics fell 11% and 9% after some little AI outfit announced freight handling automation software

These AI disruption scenarios have been known for a long time as possibilities, but in the present mood, each new actual, specific case is feeding the melancholy narrative.

All is not doom and gloom here: as investors flee software companies, they are embracing old-fashioned makers of consumer goods and other “stuff”:

The narrative last week was very clearly that “physical” was a better bet than “digital.” Physical goods and resources can’t be replaced by AI like digital goods and services can be at an alarming rate

As I write this (Monday), U.S. markets are closed for the holiday. We will see in the coming week whether fear or greed will have the upper hand.

Truth: The Strength and Weakness of AI Coding

There was a seismic shift in the AI world recently. In case you didn’t know, a Claude Code update was released just before the Christmas break. It could code awesomely and had a bigger context window, which is sort of like memory and attention span. Scott Cunningham wrote a series of posts demonstrating the power of Claude Code in ways that made economists take notice. Then, ChatGPT Codex was updated and released in January as if to say ‘we are still on the frontier’. The battle between Claude Code and Codex is active as we speak.

The differentiation is becoming clearer, depending on who you talk to. Claude Code feels architectural. It designs a project or system and thrives when you hand it the blueprint and say “Design this properly.” It’s your amazingly productive partner. Codex feels like it’s for the specialist. You tell it exactly what you want. No fluff. No ornamental abstraction unless you request it.

Codex flourishes with prompts like “Refactor this function to eliminate recursion” or “Take this response data and apply the Bayesian Dawid-Skene method.” It does exactly that. It assumes competence on your part and does not attempt to decorate the output. It assumes that you know what you’re doing. It’s like your RA that can do amazing things if you tell it what task you want completed. Having said all of this, I’ve heard the inverse evaluations too. It probably matters a lot what the programmer brings to the table.

Both Claude Code and Codex are remarkably adept at catching code and syntax errors. That is not mysterious. Code is valid or invalid. The AI writes something, and the environment immediately reveals whether it conforms to the rules. Truth is embedded in the logical structure. When a single error appears, correction is often trivial.

When multiple errors appear, the problem becomes combinatorial. Fix A? Fix B? Change the type? Modify the loop? There are potentially infinite branching possibilities. Even then, the space is constrained. The code must run, or time out. That constraint disciplines the search. The reason these models code so well is that the code itself is the truth. So long as the logic isn’t violated, the axioms lead to the result. The AI anchors on the code to be internally consistent. The model can triangulate because the target is stable and verifiable.
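That built-in verifiability is easy to demonstrate. Python, for instance, can report instantly whether a candidate snippet even compiles, which is exactly the kind of cheap, stable truth signal a coding model can anchor on. This is a minimal sketch of the idea, not a claim about how any particular model is wired internally:

```python
def is_valid_python(source: str) -> bool:
    """Cheap truth signal: does the candidate code even compile?"""
    try:
        compile(source, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

# Two candidate fixes; the environment rules on each one instantly.
candidates = [
    "def f(x): return x + 1",   # valid
    "def f(x) return x + 1",    # missing colon - invalid
]
verdicts = [is_valid_python(c) for c in candidates]
```

The verdict is binary and repeatable, so a search over candidate fixes always has a stable target to triangulate toward.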

AI struggles when the anchor disappears

Continue reading

SaaSmageddon: Will AI Eat the Software Business?

A big narrative for the past fifteen years has been that “software is eating the world.” This described a transformative shift where digital software companies disrupted traditional industries, such as retail, transportation, entertainment and finance, by leveraging cloud computing, mobile technology, and scalable platforms. This prophecy has largely come true, with companies like Amazon, Netflix, Uber, and Airbnb redefining entire sectors. Who takes a taxi anymore?

However, the narrative is now evolving. As generative AI advances, a new phase is emerging: “AI is eating software.”  Analysts predict that AI will replace traditional software applications by enabling natural language interfaces and autonomous agents that perform complex tasks without needing specialized tools. This shift threatens the $200 billion SaaS (Software-as-a-Service) industry, as AI reduces the need for dedicated software platforms and automates workflows previously reliant on human input. 

A recent jolt here has been the January 30 release by Anthropic of plug-in modules for Claude, which allow a relatively untrained user to enter plain English commands (“vibe coding”) that direct Claude to perform role-specific tasks like contract review, financial modeling, CRM integration, and campaign drafting.  (CRM integration is the process of connecting a Customer Relationship Management system with other business applications, such as marketing automation, ERP, e-commerce, accounting, and customer service platforms.)

That means Claude is doing some serious heavy lifting here. Currently, companies pay big bucks yearly to “enterprise software” firms like SAP and ServiceNow (NOW) and Salesforce to come in and integrate all their corporate data storage and flows. This must-have service is viewed as really hard to do, requiring highly trained specialists and proprietary software tools. Hence, high profit margins for these enterprise software firms.

Until recently, these firms have been darlings of the stock market. For instance, as of June 2025, NOW was up nearly 2000% over the past ten years. Imagine putting $20,000 into NOW in 2015, and seeing it mushroom to nearly $400,000. (AI tells me that $400,000 would currently buy you a “used yacht in the 40 to 50-foot range.”)

With the threat of AI, and probably with some general profit-taking in the overheated tech sector, the share price of these firms has plummeted. Here is a six-month chart for NOW:

Source: Seeking Alpha

NOW is down around 40% in the past six months. Most analysts seem positive, however, that this is a market overreaction. A key value-add of an enterprise software firm is the custody of the data itself, in various secure and tailored databases, and that seems to be something that an external AI program cannot replace, at least for now. The capability to pull data out and crunch it (which AI is offering) is kind of icing on the cake.

Firms like NOW are adjusting to the new narrative by offering pay-per-usage as an alternative to pay-per-user (“seats”). So far, this does not seem to be hurting their revenues. These firms claim that they can harness the power of AI (either generic AI or their own software) to do pretty much everything that AI claims for itself. Earnings of these firms do not seem to be slowing down.

With the recent stock price crash, the P/E for NOW is around 24, with a projected earnings growth rate of around 25% per year. Compared to, say, Walmart with a P/E of 45 and a projected growth rate of around 10%, NOW looks pretty cheap to me at the moment.
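That cheap-vs-expensive comparison is essentially the PEG ratio (P/E divided by projected growth rate, where lower means cheaper per unit of growth). Plugging in the rough figures above:

```python
def peg_ratio(pe: float, growth_pct: float) -> float:
    """PEG ratio: P/E divided by projected annual earnings growth in percent."""
    return pe / growth_pct

# Rough figures from the discussion above (illustrative, not investment advice).
now_peg = peg_ratio(24, 25)       # ServiceNow: 24/25 = 0.96
walmart_peg = peg_ratio(45, 10)   # Walmart: 45/10 = 4.5
```

On this simple metric, NOW offers more than four times as much projected growth per point of earnings multiple as Walmart does.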

(Disclosure: I just bought some NOW. Time will tell if that was wise.)

Usual disclaimer: Nothing here should be considered advice to buy or sell any security.

Google’s TPU Chips Threaten Nvidia’s Dominance in AI Computing

Here is a three-year chart of stock prices for Nvidia (NVDA), Alphabet/Google (GOOG), and the generic QQQ tech stock composite:

NVDA has been spectacular. If you had $20k in NVDA three years ago, it would have turned into nearly $200k. Sweet. Meanwhile, GOOG poked along at the general pace of QQQ. Then, around Sept 1 (yellow line), GOOG started to pull away from QQQ, and has not looked back.

And in the past two months, GOOG stock has stomped all over NVDA, as shown in the six-month chart below. The two stocks were neck and neck in early October, but GOOG has since surged way ahead. In the past month, GOOG is up sharply (red arrow), while NVDA is down significantly:

What is going on? It seems that the market is buying the narrative that Google’s Tensor Processing Unit (TPU) chips are a competitive threat to Nvidia’s GPUs. Last week, we published a tutorial on the technical details here. Briefly, Google’s TPUs are hardwired to perform key AI calculations, whereas Nvidia’s GPUs are more general-purpose. For a range of AI processing, the TPUs are faster and much more energy-efficient than the GPUs.

The greater flexibility of the Nvidia GPUs, and the programming community’s familiarity with Nvidia’s CUDA programming language, still gives Nvidia a bit of an edge in the AI training phase. But much of that edge fades for the inference (application) usages for AI. For the past few years, the big AI wannabes have focused madly on model training. But there must be a shift to inference (practical implementation) soon, for AI models to actually make money.

All this is a big potential headache for Nvidia. Because of their quasi-monopoly on AI compute, they have been able to charge a huge 75% gross profit margin on their chips. Their customers are naturally not thrilled with this, and have been making some efforts to devise alternatives. But it seems like Google, thanks to a big head start in this area, and very deep pockets, has actually equaled or even beaten Nvidia at its own game.

This explains much of the recent disparity in stock movements. It should be noted, however, that for a quirky business reason, Google is unlikely in the near term to displace Nvidia as the main go-to for AI compute power. The reason is this: most AI compute power is implemented in huge data/cloud centers. And Google is one of the three main cloud vendors, along with Microsoft and Amazon, with IBM and Oracle trailing behind. So, for Google to supply Microsoft and Amazon with its chips and accompanying know-how would be to enable its competitors to compete more strongly.

Also, AI users like, say, OpenAI would be reluctant to commit to usage in a Google-owned facility using Google chips, since the user would then be somewhat locked in and held hostage: it would be expensive to switch to a different data center if Google tried to raise prices. In contrast, a user can readily move to a different data center for a better deal if all the centers are using Nvidia chips.

For the present, then, Google is using its TPU technology primarily in-house. The company has a huge suite of AI-adjacent business lines, so its TPU capability does give it genuine advantages there. Reportedly, soul-searching continues in the Google C-suite about how to more broadly monetize its TPUs. It seems likely that they will find a way. 

As usual, nothing here constitutes advice to buy or sell any security.

AI Computing Tutorial: Training vs. Inference Compute Needs, and GPU vs. TPU Processors

A tsunami of sentiment shift is washing over Wall Street, away from Nvidia and towards Google/Alphabet. In the past month, GOOG stock is up a sizzling 12%, while NVDA plunged 13%, despite producing its usual earnings beat.  Today I will discuss some of the technical backdrop to this sentiment shift, which involves the differences between training AI models versus actually applying them to specific problems (“inference”), and significantly different processing chips. Next week I will cover the company-specific implications.

As most readers here probably know, the Large Language Models (LLMs) that underpin the popular new AI products work by sucking in nearly all the text (and now other data) that humans have ever produced, reducing each word or form of a word to a numerical token, and grinding and grinding to discover consistent patterns among those tokens. Layers of (virtual) neural nets are used. The training process involves an insane amount of trying to predict, say, the next word in a sentence scraped from the web, evaluating why the model missed it, and feeding that information back to adjust the matrix of weights on the neural layers, until the model can predict that next word correctly. Then on to the next sentence found on the internet, to work and work until it can be predicted properly. At the end of the day, a well-trained AI chatbot can respond to Bob’s complaint about his boss with an appropriately sympathetic pseudo-human reply like, “It sounds like your boss is not treating you fairly, Bob. Tell me more about…” It bears repeating that LLMs do not actually “know” anything. All they can do is produce a statistically probable word salad in response to prompts. But they can now do that so well that they are very useful.*

This is an oversimplification, but gives the flavor of the endless forward and backward propagation and iteration that is required for model training. This training typically requires running vast banks of very high-end processors, typically housed in large, power-hungry data centers, for months at a time.
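The predict-miss-adjust loop described above can be sketched in miniature. The toy "model" below is just a table of next-word scores nudged after every wrong guess; the corpus, update rule, and step sizes are invented for illustration and bear no resemblance to real gradient descent at scale:

```python
# Toy corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))

# The "weights": a score for every (current word, next word) pair.
weights = {w: {v: 0.0 for v in vocab} for w in vocab}

def predict(word: str) -> str:
    """Predict the next word: the highest-scoring candidate so far."""
    scores = weights[word]
    return max(scores, key=scores.get)

# The training loop: predict the next word, compare with the actual next
# word in the corpus, and nudge the weights after every miss.
for epoch in range(20):
    for prev, actual in zip(corpus, corpus[1:]):
        guess = predict(prev)
        if guess != actual:
            weights[prev][actual] += 1.0   # reinforce the right answer
            weights[prev][guess] -= 0.5    # penalize the wrong guess
```

After a few passes, the table settles on the most common continuation for each word ("cat" after "the", for instance), which is the word-salad statistics of the paragraph above in microcosm.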

Once a model is trained (i.e., the neural net weights have been determined), to then run it (i.e., to generate responses based on human prompts) takes considerably less compute power. This is the “inference” phase of generative AI. It still takes a lot of compute to run a big program quickly, but a simpler LLM like DeepSeek can be run, with only modest time lags, on a high-end PC.

GPUs Versus ASIC TPUs

Nvidia has made its fortune by taking graphical processing units (GPU) that were developed for massively parallel calculations needed for driving video displays, and adapting them to more general problem solving that could make use of rapid matrix calculations. Nvidia chips and its CUDA language have been employed for physical simulations such as seismology and molecular dynamics, and then for Bitcoin calculations. When generative AI came along, Nvidia chips and programming tools were the obvious choice for LLM computing needs. The world’s lust for AI compute is so insatiable, and Nvidia has had such a stranglehold, that the company has been able to charge an eye-watering gross profit margin of around 75% on its chips.

AI users of course are trying desperately to get compute capability without having to pay such high fees to Nvidia. It has been hard to mount a serious competitive challenge, though. Nvidia has a commanding lead in hardware and supporting software, and (unlike the Intel of years gone by) keeps forging ahead, not resting on its laurels.

So far, no one seems to be able to compete strongly with Nvidia in GPUs. However, there is a different chip architecture, which by some measures can beat GPUs at their own game.

NVIDIA GPUs are general-purpose parallel processors with high flexibility, capable of handling a wide range of tasks from gaming to AI training, supported by a mature software ecosystem like CUDA. GPUs beat out the original computer central processing units (CPUs) for these tasks by sacrificing flexibility for the power to do parallel processing of many simple, repetitive operations. The newer “application-specific integrated circuits” (ASICs) take this specialization a step further. They can be custom hard-wired to do specific calculations, such as those required for bitcoin and now for AI. By cutting out steps used by GPUs, especially fetching data in and out of memory, ASICs can do many AI computing tasks faster and cheaper than Nvidia GPUs, and using much less electric power. That is a big plus, since AI data centers are driving up electricity prices in many parts of the country. The particular type of ASIC that is used by Google for AI is called a Tensor Processing Unit (TPU).

I found this explanation by UncoverAlpha to be enlightening:

A GPU is a “general-purpose” parallel processor, while a TPU is a “domain-specific” architecture.

The GPUs were designed for graphics. They excel at parallel processing (doing many things at once), which is great for AI. However, because they are designed to handle everything from video game textures to scientific simulations, they carry “architectural baggage.” They spend significant energy and chip area on complex tasks like caching, branch prediction, and managing independent threads.

A TPU, on the other hand, strips away all that baggage. It has no hardware for rasterization or texture mapping. Instead, it uses a unique architecture called a Systolic Array.

The “Systolic Array” is the key differentiator. In a standard CPU or GPU, the chip moves data back and forth between the memory and the computing units for every calculation. This constant shuffling creates a bottleneck (the Von Neumann bottleneck).

In a TPU’s systolic array, data flows through the chip like blood through a heart (hence “systolic”).

  1. It loads data (weights) once.
  2. It passes inputs through a massive grid of multipliers.
  3. The data is passed directly to the next unit in the array without writing back to memory.

What this means, in essence, is that a TPU, because of its systolic array, drastically reduces the number of memory reads and writes required from HBM. As a result, the TPU can spend its cycles computing rather than waiting for data.
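The memory-traffic point in the quote above can be illustrated with a toy count of weight fetches: a naive multiply re-fetches each weight from memory on every use, while a weight-stationary (systolic-style) pass loads the weight grid once and streams inputs through it. This is a rough illustration of the bookkeeping, not a hardware model:

```python
def matmul_count_weight_reads(A, W, weight_stationary):
    """Multiply A (m x n) by W (n x p), counting fetches of W from 'memory'."""
    m, n, p = len(A), len(W), len(W[0])
    reads = 0
    if weight_stationary:
        # Systolic-style: load each weight into the grid exactly once,
        # then reuse it as inputs stream past.
        grid = [row[:] for row in W]
        reads += n * p
        C = [[sum(A[i][k] * grid[k][j] for k in range(n)) for j in range(p)]
             for i in range(m)]
    else:
        # Naive: fetch the weight from memory on every multiply-accumulate.
        C = [[0] * p for _ in range(m)]
        for i in range(m):
            for j in range(p):
                for k in range(n):
                    C[i][j] += A[i][k] * W[k][j]
                    reads += 1
    return C, reads

A = [[1, 2], [3, 4]]
W = [[5, 6], [7, 8]]
C_naive, naive_reads = matmul_count_weight_reads(A, W, weight_stationary=False)
C_sys, sys_reads = matmul_count_weight_reads(A, W, weight_stationary=True)
```

Even on a 2x2 example the weight-stationary pass halves the fetch count (4 vs. 8); for the huge matrices in an LLM, reusing loaded weights is the difference between computing and waiting on memory.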

Google has developed the most advanced ASICs for doing AI, which are now on some levels a competitive threat to Nvidia.   Some implications of this will be explored in a post next week.

*Next generation AI seeks to step beyond the LLM world of statistical word salads, and to model cause and effect at the level of objects and agents in the real world – – see Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence.

Standard disclaimer: Nothing here should be considered advice to buy or sell any security.