The End of Boredom: How AI Companions Might Reduce Random Violence

Joy writes: I read Co-Intelligence by Ethan Mollick (thanks to Samford for the free book). Most of it is old news for those of us who follow Ethan on social media and use ChatGPT. However, something that stood out to me was his mention of a study in which many people chose to give themselves a painful electric shock rather than sit alone in silence for 15 minutes.

Claude comments further based on my prompt:

The End of Boredom: How AI Companions Might Reduce Random Violence

Remember that study where people would rather shock themselves than sit alone with their thoughts? Ethan Mollick references it in Co-Intelligence, and it reveals something unsettling: 67% of men and 25% of women chose electric shocks over sitting quietly for just 15 minutes.

Here’s a strange thought—what if our AI-saturated future accidentally reduces certain types of violence simply by eliminating boredom?

The Violence-Boredom Connection

“Idle hands are the devil’s workshop” exists for a reason. Research has long linked boredom to risk-taking, substance abuse, and impulsive violence—the opportunistic kind that emerges from restlessness rather than planning. Young men starting fights on street corners, vandalism on summer nights, the restless energy that sometimes turns destructive—much of this stems from the unbearable weight of having nothing to do.

Enter Infinite Engagement

We’re rapidly approaching a world where boredom might become extinct. AI companions are becoming always available, infinitely patient, endlessly novel, and perfectly tailored to individual interests. Your future AI won’t just answer questions—it will debate philosophy at 3 AM, create personalized games, generate stories with you as the protagonist, or help explore any curiosity rabbit hole.

The cognitive void that once led people to shock themselves rather than think? It might simply cease to exist.

The Unexpected Benefits

Consider the implications: Young men who might have started fights out of restlessness could instead be deeply engaged in AI-assisted music production or coding projects. The same restless energy that manifests destructively could be channeled into creative collaboration.

AI companions could help process frustration before it builds to a breaking point—imagine having an infinitely patient listener during those dangerous 2 AM spirals of rage. While not replacing human connection, AI interaction might buffer the worst effects of isolation that can lead to radicalization.

The Dark Side

This isn’t utopian. An always-engaged society raises concerns: What happens to human relationships when AI companionship is always easier? Does constant stimulation atrophy our ability to self-reflect? Might we lose the creative insights that emerge from boredom?

Crucially, this would only address impulsive, boredom-driven violence. Systemic violence, ideological extremism, and deeper social problems won’t disappear because people have engaging AI companions.

A Strange New World

For the first time in history, boredom—that uncomfortable void that has driven both creativity and destruction—might become optional. The same species that chose shocks over silence is about to get exactly what it wanted: constant, meaningful stimulation.

Whether this leads to a more peaceful society or new problems we haven’t imagined remains to be seen. The question is whether we’ll recognize what we’ve lost only after it’s gone.

The experiment is already underway—we’re all participating in it.

Joy comments at the end: The AI-written essay is willing to explore downsides of AI engagement.

AI Computing Tutorial: Training vs. Inference Compute Needs, and GPU vs. TPU Processors

A tsunami of sentiment shift is washing over Wall Street, away from Nvidia and towards Google/Alphabet. In the past month, GOOG stock is up a sizzling 12%, while NVDA has plunged 13% despite producing its usual earnings beat. Today I will discuss some of the technical backdrop to this sentiment shift, which involves the differences between training AI models and actually applying them to specific problems (“inference”), and the significantly different processing chips involved. Next week I will cover the company-specific implications.

As most readers here probably know, the Large Language Models (LLMs) that underpin the popular new AI products work by sucking in nearly all the text (and now other data) that humans have ever produced, reducing each word or form of a word to a numerical token, and grinding and grinding to discover consistent patterns among those tokens. Layers of (virtual) neural nets are used. The training process involves an insane amount of trying to predict, say, the next word in a sentence scraped from the web, evaluating why the model missed it, and feeding that information back to adjust the matrices of weights on the neural layers, until the model can predict that next word correctly. Then it is on to the next sentence found on the internet, to grind away until that one can be predicted properly. At the end of the day, a well-trained AI chatbot can respond to Bob’s complaint about his boss with an appropriately sympathetic pseudo-human reply like, “It sounds like your boss is not treating you fairly, Bob. Tell me more about…” It bears repeating that LLMs do not actually “know” anything. All they can do is produce a statistically probable word salad in response to prompts. But they can now do that so well that they are very useful.*

This is an oversimplification, but gives the flavor of the endless forward and backward propagation and iteration that is required for model training. This training typically requires running vast banks of very high-end processors, typically housed in large, power-hungry data centers, for months at a time.
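To make that forward-and-backward grind concrete, here is a minimal sketch of one next-token training step in PyTorch. Everything in it is a toy stand-in (a tiny recurrent model and random token IDs rather than a transformer trained on web text), but the loop itself (predict, measure the miss, backpropagate, adjust the weights) is the heart of what runs for months in those data centers:

```python
# A minimal sketch of the next-token training loop, in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 50_000   # every word/word-piece gets a numerical token id
DIM = 256        # width of the vectors inside the network (toy-sized)

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)           # token id -> vector
        self.core = nn.GRU(DIM, DIM, batch_first=True)  # stand-in for the transformer layers
        self.head = nn.Linear(DIM, VOCAB)               # vector -> score for each possible next token

    def forward(self, ids):
        h, _ = self.core(self.embed(ids))
        return self.head(h)

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

# One training step. Real training repeats this over trillions of tokens.
batch = torch.randint(0, VOCAB, (8, 128))   # random ids standing in for sentences from the web
logits = model(batch[:, :-1])               # forward pass: predict each next token
loss = F.cross_entropy(logits.reshape(-1, VOCAB),
                       batch[:, 1:].reshape(-1))  # how badly did we miss?
loss.backward()                             # backward pass: trace the miss back through the layers
opt.step()                                  # nudge the weight matrices to do better next time
opt.zero_grad()
```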

Once a model is trained (i.e., the neural net weights have been determined), running it (i.e., generating responses to human prompts) takes considerably less compute power. This is the “inference” phase of generative AI. It still takes a lot of compute to run a big model quickly, but a simpler LLM like DeepSeek can be run, with only modest time lags, on a high-end PC.
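As an illustration, inference with the toy model above is just a loop of forward passes with the weights frozen. This is a sketch, not production decoding (real chatbots sample from the probabilities and use caching tricks):

```python
import torch

# Weights are frozen: no backward pass, no weight updates, far less compute.
@torch.no_grad()
def generate(model, prompt_ids, n_new=50):
    ids = prompt_ids                                          # shape (1, prompt_length)
    for _ in range(n_new):
        logits = model(ids)                                   # forward pass only
        next_id = logits[:, -1].argmax(dim=-1, keepdim=True)  # take the most probable next token
        ids = torch.cat([ids, next_id], dim=1)                # append it and go around again
    return ids
```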

GPUs Versus ASIC TPUs

Nvidia has made its fortune by taking graphics processing units (GPUs) that were developed for the massively parallel calculations needed to drive video displays, and adapting them to more general problem solving that can make use of rapid matrix calculations. Nvidia chips and its CUDA language have been employed for physical simulations such as seismology and molecular dynamics, and then for Bitcoin mining. When generative AI came along, Nvidia chips and programming tools were the obvious choice for LLM computing needs. The world’s lust for AI compute is so insatiable, and Nvidia’s stranglehold so firm, that the company has been able to charge an eye-watering gross profit margin of around 75% on its chips.

AI users of course are trying desperately to get compute capability without having to pay such high prices to Nvidia. It has been hard to mount a serious competitive challenge, though. Nvidia has a commanding lead in hardware and supporting software, and (unlike the Intel of years gone by) keeps forging ahead rather than resting on its laurels.

So far, no one seems to be able to compete strongly with Nvidia in GPUs. However, there is a different chip architecture, which by some measures can beat GPUs at their own game.

Nvidia GPUs are general-purpose parallel processors with high flexibility, capable of handling a wide range of tasks from gaming to AI training, and supported by a mature software ecosystem like CUDA. GPUs beat out the original central processing units (CPUs) for these tasks by sacrificing flexibility for the power to do parallel processing of many simple, repetitive operations. The newer “application-specific integrated circuits” (ASICs) take this specialization a step further. They can be custom hard-wired to do specific calculations, such as those required for Bitcoin mining and now for AI. By cutting out steps used by GPUs, especially shuttling data in and out of memory, ASICs can do many AI computing tasks faster and cheaper than Nvidia GPUs, while using much less electric power. That is a big plus, since AI data centers are driving up electricity prices in many parts of the country. The particular type of ASIC that Google uses for AI is called a Tensor Processing Unit (TPU).

I found this explanation by UncoverAlpha to be enlightening:

A GPU is a “general-purpose” parallel processor, while a TPU is a “domain-specific” architecture.

The GPUs were designed for graphics. They excel at parallel processing (doing many things at once), which is great for AI. However, because they are designed to handle everything from video game textures to scientific simulations, they carry “architectural baggage.” They spend significant energy and chip area on complex tasks like caching, branch prediction, and managing independent threads.

A TPU, on the other hand, strips away all that baggage. It has no hardware for rasterization or texture mapping. Instead, it uses a unique architecture called a Systolic Array.

The “Systolic Array” is the key differentiator. In a standard CPU or GPU, the chip moves data back and forth between the memory and the computing units for every calculation. This constant shuffling creates a bottleneck (the Von Neumann bottleneck).

In a TPU’s systolic array, data flows through the chip like blood through a heart (hence “systolic”).

  1. It loads data (weights) once.
  2. It passes inputs through a massive grid of multipliers.
  3. The data is passed directly to the next unit in the array without writing back to memory.

What this means, in essence, is that a TPU, because of its systolic array, drastically reduces the number of reads and writes required from HBM (high-bandwidth memory). As a result, the TPU can spend its cycles computing rather than waiting for data.
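Here is a toy simulation of that weight-stationary data flow in plain Python/NumPy. It is an illustration of the idea described above, not Google’s actual design: the sequential loops stand in for a grid of multiply-accumulate cells that would all fire in parallel in silicon. The point is that each weight is loaded once, and partial sums are handed to a neighboring cell instead of being written back to memory:

```python
import numpy as np

def systolic_matmul(weights, inputs):
    """Weight-stationary flow: each grid cell keeps one weight ("loads data once");
    inputs stream past, and partial sums are handed to the neighboring cell
    instead of being written back to memory after every multiply."""
    R, C = weights.shape          # an R x C grid of multiply-accumulate cells
    n = inputs.shape[1]
    out = np.zeros((C, n))
    for col in range(n):          # one input vector enters per clock "beat"
        partial = np.zeros(C)     # the running sums flowing through the array
        for row in range(R):
            # each cell multiplies its resident weight by the passing input
            # and forwards the running sum; no round trip to HBM
            partial += weights[row] * inputs[row, col]
        out[:, col] = partial
    return out

W = np.random.rand(4, 3)          # weights, loaded into the grid once
X = np.random.rand(4, 5)          # activations streaming through
assert np.allclose(systolic_matmul(W, X), W.T @ X)   # same answer as an ordinary matmul
```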

Google has developed the most advanced ASICs for doing AI, which are now, on some levels, a competitive threat to Nvidia. Some implications of this will be explored in a post next week.

*Next-generation AI seeks to step beyond the LLM world of statistical word salads and to model cause and effect at the level of objects and agents in the real world; see Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence.

Standard disclaimer: Nothing here should be considered advice to buy or sell any security.

Structural Insulated Panels (SIP): The Latest, Greatest (?) Home Construction Method

Last week I drove an hour south to help an acquaintance construct his retirement home. I had answered a group email request looking for help putting up a wall in the house. I assumed this was conventional stick-built construction, so I envisioned framing a studded wall out of two-by-fours and two-by-sixes flat on the ground, and then four or five guys swinging it up to vertical, like an old-fashioned barn raising.

But that wasn’t it at all. This house was being built from Structural Insulated Panels (SIPs). These panels have a Styrofoam core, around 5 inches thick, faced on each side with thin oriented strand board (OSB). (OSB is a kind of cheapo plywood.)


The edges have a sort of tongue and groove configuration, so they mesh together. Each of the SIP panels was about 9 feet high and between 2 feet and 8 feet long. Two strong guys could manhandle a panel into position. Along the edge of the floor, 2×6’s had been mounted to guide the positioning of the bottom of each wall panel.


We put glue and sealing caulk on the edges to stick them together, drove 7-inch-long screws through the edges after they were in place, and also shot a series of nails through the OSB edges into the 2×6’s at the bottom. Pneumatic nail guns give such a satisfying “thunk” with each trigger pull that you feel quite empowered. Here are a couple of photos from that day:


The homeowner told me that he learned about SIP construction from an exhibit in Washington, DC that he attended with his grandson. The exhibit was on building techniques through the ages, starting with mud huts, and ending with SIP as the latest technique. That inspired him.

(As an old guy, I was not of much use lifting the panels. I did drive in some nails and screws. I was not initially aware of the glue/caulk along the edges, so I spent my first 20 minutes on the job wiping off the sticky goo I got all over my gloves and coat when I grabbed my first panel. My chief contribution that day was to keep a guy who was lifting a heavy panel beam overhead from toppling backwards off a stepladder.)

We amateurs were pretty slow, but I could see that a practiced crew could go slap slap slap and erect all the exterior walls of a medium-sized single-story house in a day or two, without needing advanced carpentry skills. Those walls would come complete with insulation. They would still need weatherproof exterior siding (e.g., vinyl or faux stone) on the outside, and sheetrock on the inside. Holes were pre-drilled in the Styrofoam for running the electrical wiring up through the SIPs.

From my limited reading, it seems that the biggest single advantage of SIP construction is quick on-site assembly. It is ideal for situations where you have only a limited time window for construction, or for an isolated or affluent area where site labor is very expensive and hard to obtain (e.g., a ski resort town). Reportedly, SIP buildings are mechanically stronger than stick-built ones, handy in case of earthquakes or hurricanes. Also, a SIP wall has very high insulation value, and the construction method is practically airtight.

SIP construction is not cheaper than stick-built; it’s around 10% more expensive. You need perfect communication with the manufacturer of the SIP panels; if the delivered panels don’t fit properly on-site, you are hosed. Also, it is tough to modify a SIP house once it is built.

Because the construction is so airtight, it requires some finesse in designing the HVAC system. You need to be very careful protecting the walls from moisture, both inside and out, since the SIP panels can lose strength if they get wet. For that reason, some folks prefer not to use SIPs for roofs, but only for walls and first-story flooring.
For more on SIP pros and cons, see here and here.

Michael Burry’s New Venture Is Substack “Cassandra Unchained”: Set Free to Prophesy All-Out Doom on AI Investing

This is a quick follow-up to last week’s post on “Big Short” Michael Burry closing down his Scion Asset Management hedge fund. Burry had teased on X that he would announce his next big thing on Nov 25. It seems he was a day or two early: Sunday night he launched a paid-subscription “Cassandra Unchained” Substack. There he claims that:

Cassandra Unchained is now Dr. Michael Burry’s sole focus as he gives you a front row seat to his analytical efforts and projections for stocks, markets, and bubbles, often with an eye to history and its remarkably timeless patterns.

Reportedly the subscription costs $39 a month, or $379 annually, and there are 26,000 subscribers already. Click the abacus and…26,000 × $379 comes to a cool $9.9 million a year in subscription fees. Not bad compensation for sharing your musings online.

Michael Burry was dubbed “Cassandra” by Warren Buffett in recognition of his prescient warnings about the 2008 housing market collapse, a prophecy that was initially ignored, much like the mythological Cassandra who was fated to deliver true prophecies that were never believed. Burry embraced this nickname, adopting “Cassandra” as his online moniker on social media platforms, symbolizing his role as a lone voice warning of impending financial disaster. On the About page of his new Substack, he wrote that managing clients’ money in a hedge fund like Scion came with restrictions that “muzzled” him, such that he could only share “cryptic fragments” publicly, whereas now he is “unchained.”

Of his first two posts on the new Substack, one was a retrospective on his days as a practicing doctor (a resident in neurology at Stanford Hospital) in 1999-2000. He had done a lot of online posting on investing topics, focusing on valuations, and finally left medicine to start a hedge fund. As he tells it, he called the dot.com bubble before it popped.

Business Insider summarizes Burry’s second post, which attacks the central premise of those who claim the current AI boom is fundamentally different from the 1990s dot.com boom:

The second post aims straight at the heart of the AI boom, which he calls a “glorious folly” that will require investigation over several posts to break down.

Burry goes on to address a common argument about the difference between the dot-com bubble and AI boom — that the tech companies leading the charge 25 years ago were largely unprofitable, while the current crop are money-printing machines.

At the turn of this century, Burry writes, the Nasdaq was driven by “highly profitable large caps, among which were the so-called ‘Four Horsemen’ of the era — Microsoft, Intel, Dell, and Cisco.”

He writes that a key issue with the dot-com bubble was “catastrophically overbuilt supply and nowhere near enough demand,” before adding that it’s “just not so different this time, try as so many might do to make it so.”

Burry calls out the “five public horsemen of today’s AI boom — Microsoft, Google, Meta, Amazon and Oracle” along with “several adolescent startups” including Sam Altman’s OpenAI.

Those companies have pledged to invest well over $1 trillion into microchips, data centers, and other infrastructure over the next few years to power an AI revolution. They’ve forecasted enormous growth, exciting investors and igniting their stock prices.

Shares of Nvidia, a key supplier of AI microchips, have surged 12-fold since the start of 2023, making it the world’s most valuable public company with a $4.4 trillion market capitalization.

“And once again there is a Cisco at the center of it all, with the picks and shovels for all and the expansive vision to go with it,” Burry writes, after noting the internet-networking giant’s stock plunged by over 75% during the dot-com crash. “Its name is Nvidia.”

Tell us how you really feel, Michael. Cassandra, indeed.

My amateur opinion here: I think there is a modest but significant chance that the hyperscalers will not all be able to make enough fresh money to cover their ginormous 2024-2028 investments in AI capabilities. What happens then? Google, Meta, and Amazon may need to write down hundreds of billions of dollars on their balance sheets, which would show up as ginormous hits to GAAP earnings for a number of quarters. But then life would go on just fine for these cash machines, and the market may soon forgive and forget this massive misallocation of old cash, as long as operating cash keeps rolling in as usual. Stocks are, after all, priced on forward earnings. If the AI boom busts, all tech stock prices would sag, but I think the biggest operating impact would be on suppliers of chips (like Nvidia) and of data centers (like Oracle). So Burry’s comparison of 2025 Nvidia to 1999 Cisco seems apt.

Is Tesla Stock Grossly Overpriced?

One of the more polarizing topics in investing is the valuation of Tesla stock. Its peers among the Magnificent 7 big tech leaders sport price/earnings ratios mostly in the 30s. Those are high numbers, but growth stocks deserve high P/Es. A way to normalize for expected earnings growth is to look at the price/earnings-to-growth (PEG) ratio: the P/E divided by the expected annual earnings growth rate, in percent. This number is usually 1.5-2.0 for a well-regarded company. Anything much over 2 is considered overvalued.

Tesla’s forward P/E of about 270 is nearly ten times that of its peers. Its anticipated growth rate does not seem to justify this astronomical valuation, since its PEG of around 4-10 (depending on assumptions) is way above normal. This looks like a case of the CEO’s personal charisma dazzling shareholders. There is always a new “story” coming out to keep the momentum going.
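A quick sketch of the PEG arithmetic, with illustrative growth assumptions (the growth figures are my placeholders, not consensus estimates):

```python
def peg(forward_pe: float, growth_pct: float) -> float:
    """PEG ratio: forward P/E divided by expected annual earnings growth, in percent."""
    return forward_pe / growth_pct

print(peg(33, 18))    # a typical Magnificent 7 peer: ~1.8, inside the normal band
print(peg(270, 65))   # Tesla at a heroic 65%/yr growth assumption: ~4.2
print(peg(270, 30))   # Tesla at a still-generous 30%/yr: 9.0
```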

Tesla’s main actual business is selling cars, electric cars. It has done a pretty good job at this over the past decade, supported by massive government subsidies. With the phasing out of these subsidies by the U.S. and some other governments, and increasing competition from other electric carmakers, it seems unlikely that this business will grow exponentially. Ditto for its smallish ($10 billion revenue) business line of supplying large batteries for electric power storage. But to Tesla fans, that doesn’t really matter. Tesla is valued, not as a car company, but as an AI startup venture. Just over the horizon are driverless robo-taxis (whose full deployment keeps getting pushed back), and humanoid Optimus robots. The total addressable market numbers being bandied about for the robots are in the trillions of dollars.

Source: Wikipedia

From Musk’s latest conference call:

Optimus is Tesla’s bipedal humanoid robot that’s in development but not yet commercially deployed. Musk has previously said the robots will be so sophisticated that they can serve as factory workers or babysitters….“Optimus will be an incredible surgeon,” Musk said on Wednesday. He said that with Optimus and self driving, “you can actually create a world where there is no poverty, where everyone has access to the finest medical care.”

Given the state of Artificial General Intelligence, I remain skeptical that such a robot will be deployed in large numbers within the next five years. It is of course a mind-bending exercise to imagine a world where $50,000 robots could do anything humans can do. Would that be a world where there is “no poverty”, or a world where there is no wealth (apart from the robot owners)? Would there be a populist groundswell to nationalize the robots in order to socialize the android bounty? But I digress.

On the Seeking Alpha website, one can find various bearish articles with self-explanatory titles such as Tesla: The Dream Factory On Wall Street, Tesla: Rallying On Robotaxi Hopium, and Tesla: Paying Software Multiples For A Car Business – Strong Sell. There are also bullish pieces, e.g., here, here, and here.

Musk’s personal involvement has propped up the shares’ value. He purchased about $1 billion of TSLA stock in September. This is chicken feed relative to the company’s market cap and his net worth, but it apparently wowed TSLA fans and popped the share price. What seems even more inexplicable is the favorable response to a proposed $1 trillion (!!) pay package for Elon. For him to be awarded this amount, Tesla under his watch would have to achieve hefty boosts both in physical production and in stock market capitalization. But said package would be highly dilutive (like 12%) to existing shareholders, so rationally they should give it a thumbs down. However, it seems likely that shareholders are so convinced of Musk’s value that they will approve this pay package on Nov 6, since he has hinted he might leave if he doesn’t get it.
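To see why ~12% dilution matters, here is a back-of-the-envelope sketch; the share count is a rough assumption, not Tesla’s exact cap table:

```python
shares_now = 3.2e9               # rough Tesla share count; an assumption, not the exact figure
new_shares = 0.12 * shares_now   # ~12% more shares issued if the award fully vests

# Each existing share's claim on future earnings shrinks proportionally:
ownership_after = shares_now / (shares_now + new_shares)
print(f"Existing holders' collective stake falls to {ownership_after:.1%}")   # ~89.3%
```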

Such is the Musk mystique that shareholders seem to feel that giving him an even greater stake in Tesla than he already has will cause hundreds of billions of dollars of earnings to appear out of thin air. From the chatter I read from Wall Street professionals, they view all this as ridiculous magical thinking, yet they do not dare place bets against the Musk fanbase: the short interest in TSLA stock is a modest 2.2%. Tesla is grossly overvalued, but it will likely remain that way as long as Elon stays and keeps spinning grand visions of the future.

LinkedIn is OK, Actually

LinkedIn has its problems, but so does every other social network.

I joined LinkedIn out of college because it seemed like something you were supposed to do if you wanted a job someday, but I never checked it, because the academic job market makes little use of LinkedIn. In 2013 LinkedIn added social media features like a newsfeed, but I still never spent time there. Facebook and Twitter seemed more interesting, and like many people I’ve always been allergic to “networking” and other social settings where one person is just trying to get something from another. It seemed like a recipe for posts that are cringe, soulless, or desperate.

But over the past couple of years, I’ve found myself spending more time there, and not because I’m looking for a job or looking to hire. Some of the posts are genuinely interesting, and it is a nice way to keep up with what people I know are up to. Either LinkedIn got better or I got worse.

I find that LinkedIn is particularly good for staying in touch with my old students. I always told my students they could still e-mail me or stop by my office after the semester was over, but they almost never do; that takes a lot of thought and energy. Social networks are the ideal way to keep in touch with “weak ties,” but you have to find the right one. Facebook was the best for this when it was ubiquitous, but now it is becoming more common for Americans not to have or not to check Facebook, especially young ones (plus it was always a bit too personal for former students). Twitter has never been something that most people have, and the more popular networks are either too personal (Instagram, Snap, etc.) or too impersonal, where almost all the content users see comes from people they don’t know (TikTok, YouTube, etc.).

LinkedIn by contrast is ubiquitous and just the right amount of personal. It also seems to be increasingly a good place to share interesting writing. I like much of what I read there, and my writing gets a good reception; I tend to get more engagement for EWED posts on LinkedIn than on X and Facebook despite having fewer connections there than Facebook friends or Twitter followers. Yes, you’ll still see some cringe posts there, but it beats the angry political posts that are ubiquitous on Facebook and especially X.

You can find me on LinkedIn here, if you dare.

WW II Key Initiatives 2: “Thach Weave” Tactic to Counter More-Agile Japanese Fighter Planes

This is the second in a series of occasional posts on how individual initiatives made strategic impacts on World War II operations and outcomes. While there were innumerable acts of initiative and heroism during this conflict, I will focus on actions that shifted the entire capabilities of their side.

It was the summer of 1941. The war in Europe, mainly between Germany and Britain, had been grinding on for around two years, with Hitler in control of nearly all of Europe. The Germans then attacked the Soviet Union and quickly conquered enormous stretches of territory. It looked like the Nazis were winning. Relations with Japan, which aimed to take over the western Pacific region, were uneasy. The Japanese had already conquered Korea and coastal China, and were eyeing the resource-rich lands of Southeast Asia and Indonesia. It was a tense time.

The Japanese military had been building up for decades, preparing for a war with the United States for control of the western Pacific. It developed cutting-edge military hardware, including the world’s biggest battleships, superior torpedoes, and a large, well-trained aircraft carrier force. It also produced a new fighter plane, dubbed the “Zero” by Western observers.

Intelligence reports started to trickle in that the Zero was incredibly agile: it could outrun, out-climb, and out-turn anything the U.S. could put in the air, and it packed a wallop with twin 20 mm cannons. Its designers achieved this performance with a modestly-powered engine by making the airframe supremely light.

As I understand it, the U.S. military establishment’s response to this intel was fairly anemic. The news was so awful that they seemingly buried their heads in the sand and just hoped it wasn’t true. Why was this so disastrous? Well, since the days of the Red Baron in World War I, the way you shot down your opponent in a dogfight was to turn in a narrower circle than him, or climb faster and roll, to get behind him. Get him in your gunsights, fire a burst of incendiary machine-gun bullets to ignite his gasoline fuel tanks, and down he goes. If the Zero really was that agile, it could shoot down any U.S. plane with impunity. Even if you started to line up behind a Zero for a shot, he could execute a tight turning maneuver and end up on your tail, every time. Ouch.

A U.S. Navy aviator named John Thach, from Pine Bluff, Arkansas, did take these reports on the Zero seriously. He racked his brains trying to figure out a way for the clunky American Wildcat fighters to take on the Zeros. He knew the American pilots were well-trained and were good shots, if only they could get some crucial four-second (?) windows of time to line up on the enemy planes.

So he spent night after night that summer, using matchsticks on his kitchen table, trying to invent tactics that would neutralize the advantages of the Japanese fighters. He found that the standard three-plane section (one leader, two wingmen) was too clumsy for rapid maneuvering. He settled on two sections of two planes each. The two sections would fly parallel, several hundred yards apart. If one section got attacked, the two sections would immediately make sharp turns towards each other and cross paths. The planes of the non-attacked section could then take a head-on shot at the enemy plane(s) tailing the attacked section.

Here is a diagram of how this works:

Source: U. S. Naval Institute

The blue planes are the good guys, with one section on the left and one on the right. At the bottom of the diagram, an enemy plane (green) gets on the tail of a blue plane on the right. The left and right blue sections then make sudden 90-degree turns towards one another. The green plane follows his target around the turn, whereupon he is suddenly face-to-face with a plane from the other section, which (rat-a-tat-tat) shoots him down. In a head-to-head shootout, the Wildcat was likely to prevail, since it was more substantial than the flimsy Zero. Afterwards, the two sections continue flying parallel, ready to repeat the maneuver if attacked again. And of course, they don’t just fly along hoping to be attacked; they can make offensive runs at enemy planes as well, as a unified formation. This technique was later dubbed the “Thach weave”.

Thach faced opposition to his unorthodox tactics from the legendary inertia of the pre-war U.S. military establishment. Finally, he and his trained team submitted to a test: their four-plane formation went into mock combat against another four planes (all Wildcats), but with their throttles restricted to half power. Normally that would have made them toast, but in fact, with their weaving, they frustrated every attempt of the other planes to line up on them. This demonstration won over many of the actual pilots in the carrier air force, though the brass on the whole did not endorse it.

By some measures the most pivotal battle in the Pacific was the Battle of Midway in June 1942. The Japanese planned to wipe out the American carrier force by luring it into battle with a huge fleet assembled to invade the American-held island of Midway. Had they succeeded, WWII would have been much harder for the U.S. and its allies to win.

The way the battle unfolded, the U.S. carriers launched their torpedo planes well before their dive bombers. The Japanese probably feared the torpedo planes the most, and so focused their Zeros on them. Thach and two other Wildcats were effectively the only American fighter protection for the slow, poorly-armored torpedo bombers by the time they reached their targets. Using his weave maneuver for the first time in combat, Thach managed to shoot down three Zeros while not getting shot down himself. This vigorous, unexpectedly effective defense by a handful of Wildcats helped divert the Japanese fighters and keep them at low altitude, just in time for the American dive bombers to arrive and attack unmolested from high altitude.

In the end, four Japanese fleet carriers were sunk by the dive bombers at Midway, at a cost of one U.S. carrier. That victory helped the U.S. hang on in the Pacific until its new carriers started arriving in 1943. Thach’s tactic made a material difference in that battle and was quickly promulgated throughout the rest of the U.S. carrier force. It was not a panacea, of course, since once the enemy knew what you were about to do, they might be able to counter it. However, it did give U.S. fighters a crucial tool for confronting a more-agile opponent at a critical time in the war. Thach went on to train other pilots, and eventually became an admiral in the U.S. Navy.

Source: Wikipedia

Circular AI Deals Reminiscent of Disastrous Dot.Com Vendor Financing of the 1990s

Hey look, I just found a way to get infinite free electric power:

This sort of extension-cord-plugged-into-itself meme has shown up recently on the web to characterize a spate of circular financing deals in the AI space, largely involving OpenAI (parent of ChatGPT). Here is a graphic from Bloomberg which summarizes some of these activities:

Nvidia, which makes LOTS of money selling near-monopoly, in-demand GPU chips, has made investing commitments in customers, or in customers of its customers. Notably, Nvidia will invest up to $100 billion in OpenAI, in order to help OpenAI increase its compute power. OpenAI in turn inked a $300 billion deal with Oracle to build more data centers filled with Nvidia chips. Such deals will certainly boost sales of Nvidia’s chips (and make Nvidia even more money), but they also raise a number of concerns.

First, they make it seem like there is more demand for AI than there actually is. Short seller Jim Chanos recently asked, “[Don’t] you think it’s a bit odd that when the narrative is ‘demand for compute is infinite’, the sellers keep subsidizing the buyers?” To some extent, all this churn is just Nvidia recycling its own money, as opposed to new value being created.

Second, analysts point to the destabilizing effect of these sorts of “vendor financing” arrangements. Towards the end of the great dot.com boom in the late 1990s, hardware vendors like Cisco were making gobs of money selling networking capacity to internet service providers (ISPs). In order to help the ISPs build out even faster (and purchase even more Cisco hardware), Cisco loaned money to the ISPs. But when that boom busted, and the huge overbuild in internet capacity became (to everyone’s horror) apparent, the ISPs could not pay back those loans. QQQ lost 70% of its value. Twenty-five years later, Cisco’s stock price has still not recovered to its 2000 high.

Besides taking in cash investments, OpenAI is borrowing heavily to buy its compute capacity. Since OpenAI makes no money now (in fact, it loses billions a year), will likely not make any money for several more years (like other AI ventures), and is locked in competition with other deep-pocketed AI ventures, there is the possibility that it could pull down the whole house of cards, as happened in 2000. Bernstein analyst Stacy Rasgon recently wrote, “[OpenAI CEO Sam Altman] has the power to crash the global economy for a decade or take us all to the promised land, and right now we don’t know which is in the cards.”

For the moment, nothing seems set to stop the tidal wave of spending on AI capabilities. Big tech is flush with cash, and is plowing it into data centers and program development. Everyone is starry-eyed with the enormous potential of AI to change, well, EVERYTHING (shades of 1999).

The financial incentives are gigantic. Big tech got big by establishing quasi-monopolies on services that consumers and businesses consider must-haves. (It is the quasi-monopoly aspect that enables the high profit margins.) And it is essential to establish dominance early on. Anyone can develop a word processor or spreadsheet that does what Word or Excel do, or a search engine that does what Google does, but Microsoft and Google got there first, and preferences are sticky. So the big guys are spending wildly as they salivate at the prospect of owning the One AI to Rule Them All.

Even apart from achieving some new monopoly, the hope is that the trillions of dollars spent on data center buildout will pay off one way or another: “The data-center boom would become the foundation of the next tech cycle, letting Amazon, Microsoft, Google, and others rent out intelligence the way they rent cloud storage now. AI agents and custom models could form the basis of steady, high-margin subscription products.”

However, if in 2-3 years it turns out that actual monetization of AI continues to be elusive, as seems quite possible, there could be a Wile E. Coyote moment in the markets:

James Webb Telescope Still Orbiting the Sun

Last week I took kids to an excellent show at Samford’s Christenberry Planetarium. If you live in Alabama, follow them on Instagram for updates on events (often free).

I have heard people say that the liberal project is doomed because people just want to make war. Well, did you know that the James Webb Space Telescope orbits the sun? (I was busy on Christmas 2021 when the rest of the world was alerted to this fact.)

You can keep up with the mission here: https://science.nasa.gov/mission/webb/

You can also see what Webb is currently observing.

Make discoveries through international collaboration, not war.

For the small number of readers with the time and interest for cutting-edge physics and speculation: I know Julian Gough through Emergent Ventures, and he is writing about “man-made black holes, the hidden catastrophe at the heart of materialist science.”

Shift in AI Usage from Productivity to Personal Therapy: Hazard Ahead

A couple of days ago I spoke with a friend who was troubled by the case of Adam Raine, the sixteen-year-old who was counseled into killing himself by ChatGPT, which he had been using as an AI therapist. That was of course extremely tragic, but I hoped it was an outlier. Then I heard on a Bloomberg business podcast that the number one use for AI now is personal therapy. Being a researcher, I had to check this claim.

So here is an excerpt from a visual presentation of an analysis done by Marc Zao-Sanders for Harvard Business Review. In a follow-up to his 2024 analysis, he examined thousands of forum posts over the last year to estimate the uses of AI. To keep it tractable, I snipped an image of just the first six categories:

It’s true: Last year the most popular uses were spread across a variety of categories, but in 2025 the top use was “Therapy & Companionship”, followed by related uses of “Organize Life” and “Find Purpose”. Two of the top three uses in 2024, “Generate Ideas” and “Specific Search”, were aimed at task productivity (loosely defined), whereas in 2025 the top three uses were all for personal support.

Huh. People used to have humans in their lives known as friends or buddies or girlfriends/boyfriends or whatever. Back in the day, say 200 or 2,000 or 200,000 or 2,000,000 years ago, it seems the basic unit was the clan or village or extended kinship group. As I understand it, in a typical English village the men would drift into the pub most Friday and Saturday nights to banter and play darts over a pint of beer. You were always in contact with peers or cousins or aunts/uncles or grandparents who would take an interest in you, and who might be a few years or more ahead of you in life. These were folks you could bounce your thoughts around with, who could help you sort out what is real. The act of relating to another human being seems to be essential in shaping our psyches. The alternative is appropriately termed “attachment disorder.”

The decades-long decline in face-to-face social interactions in the U.S. has been the subject of much commentary. A landmark study in this regard was Robert Putnam’s 1995 essay, “Bowling Alone: America’s Declining Social Capital”, which he then expanded into a 2000 book. The causes and results of this trend are beyond the scope of this blog post.

The essence of the therapeutic enterprise is the forming of a relational human-to-human bond. The act of looking into another person’s eyes, and there sensing acceptance and understanding, is irreplaceable.

But imagine a human conversation partner who faked sympathy but was in fact just using you. He or she could string you along by murmuring the right reflective phrases (“Tell me more about…”, “Oh, that must have been hard for you”, blah, blah, blah) but with the goal of getting money out of you or recruiting you for espionage. This stuff goes on all the time in real life.

The AI chatbot case is not too different from this. Most AI purveyors are ultimately in it for the money, so they are using you. And the chatbot does not, and cannot, care about you. It is just a complex software algorithm, embedded in silicon chips. To a first approximation, LLMs simply spit out a probabilistic word salad in response to prompts. That is it. They do not “know” anything, and they certainly do not feel anything.

Here is what my Brave browser’s embedded AI has to say about the risks of using AI for therapy:

Using AI chatbots for therapy poses significant dangers, including the potential to reinforce harmful thoughts, fail to recognize crises like suicidal ideation, and provide unsafe or inappropriate advice, according to recent research and expert warnings. A June 2025 Stanford study found that popular therapy chatbots exhibit stigmatizing biases against conditions like schizophrenia and alcohol dependence, and in critical scenarios, they have responded to indirect suicide inquiries with irrelevant information, such as bridge heights, potentially facilitating self-harm. These tools lack the empathy, clinical judgment, and ethical framework of human therapists, and cannot ensure user safety or privacy, as they are not bound by regulations like HIPAA.

  • AI chatbots cannot provide a medical diagnosis or replace human therapists for serious mental health disorders, as they lack the ability to assess reality, challenge distorted thinking, or ensure safety during a crisis.
  • Research shows that AI systems often fail to respond appropriately to mental health crises, with one study finding they responded correctly less than 60% of the time compared to 93% for licensed therapists.
  • Chatbots may inadvertently validate delusional or paranoid thoughts, creating harmful feedback loops, and have been observed to encourage dangerous behaviors, such as promoting restrictive diets or failing to intervene in suicidal ideation.
  • There is a significant risk of privacy breaches, as AI tools are not legally required to protect user data, leaving sensitive mental health information vulnerable to exposure or misuse.
  • The lack of human empathy and the potential for emotional dependence on AI can erode real human relationships and worsen feelings of isolation, especially for vulnerable individuals.
  • Experts warn that marketing AI as a therapist is deceptive and dangerous, as these tools are not licensed providers and can mislead users into believing they are receiving professional care.

I couldn’t have put it better myself.