Economic Impacts of Weather Apps Exaggerating Storm Dangers

Snowmageddon!! Over 20 inches of snow!!! That is what we in the mid-Atlantic were told to expect on Sat-Sun, Jan 24-25, by most weather apps, starting some 9-10 days ahead of time. Of course, that kept us all busy checking those apps for the next week. As of Wednesday, I was still seeing numbers in the high teens in most cases, using Washington, D.C. as a representative location. But my Brave browser AI search proved its intelligence on Wednesday by telling me, with a big yellow triangle warning sign:

 Note: Apps and social media often display extreme snow totals (e.g., 23 inches) that are not yet supported by consensus models. Experts recommend preparing for 6–12 inches as a realistic baseline, with the potential for more.

“Huh,” thought I. Well, duh, the more scared they make us, the more eyeballs they get and the more ad revenue they generate. Follow the money…

Unfortunately, I did not log exactly who said what when last week. My recollection is that weather.com was still predicting high teens snowfall as of Thursday, and the Apple weather app was still saying that as of Friday. The final total for D.C. was about 7.5 inches for winter storm Fern. In fairness, some very nearby areas got 9-10 inches, and it ended up being dense sleet rather than light fluffy snow. But there was still a pretty big mismatch.

Among the best forecasters I found was AccuWeather. They showed a short table of probabilities that centered on (as I recall) 6-10”, with some chances for higher and for lower, that let you decide whether to prepare for a low probability/high impact scenario. It seems that the Apple weather app is notoriously bad: instead of integrating several different forecast models like some other apps (and like your local talking head meteorologist), it apparently just spits out the results of one model:

The core issue is that many weather apps, including Apple Weather, display raw data from individual forecast models without the context and analysis that professional meteorologists provide. While meteorologists at the National Weather Service balance multiple computer models, dozens of simulations and their own expertise to create forecasts, apps often pull from a single source and deliver it directly to users.

“Everything that catches attention is mostly nonsense,” said Eric Fisher, chief meteorologist for WBZ-TV in Boston. He points to the viral snowfall maps that spread on social media, noting that extreme forecasts generate the most attention even when they may not be the most accurate.

Anyway, I tried to poke around and find out, in dollar terms, how much it benefits the weather apps to exaggerate storm dangers. I was unsuccessful there, but by playing with query wording, I was able to coax out of ChatGPT some numbers on how much these exaggerations may cost the rest of us. (A more qualitative search noted, among other things, the “crying wolf” cost: people who become jaded to alarmist forecasts may come to discount genuine warnings. I will add that it is inconvenient to find the stores out of things because of panic buying.)

Everything below the line of asterisks is just a straight dump from the AI, since I could not readily improve on its presentation:

****************************************************************

The economic impact of weather apps exaggerating storm dangers isn’t easy to quantify precisely because most formal studies focus on weather warnings overall (from official sources like the National Weather Service), not specifically on private weather apps exaggerating risk. However, economics research on false alarms, over-warning, and exaggerated alerts offers concrete dollar figures that we can use to approximate the kinds of costs involved. Below are the key economic impacts with as many dollar figures as available:


💸 1. Opportunity Costs of False or Overbroad Storm Warnings

Time cost of false warnings:

  • Traditional county-wide tornado warnings (over-broad alerts) have historically forced many people to shelter unnecessarily, resulting in ≈234 million person-hours per year spent under warnings (1996–2004) — valued at roughly $2.7 billion annually in lost time/productivity in the U.S. alone.
  • Improved storm-based warnings that reduce false warning area could save up to $750 million–$1.9 billion/year in avoided lost time.

This demonstrates that exaggerated alerts — such as those sometimes amplified or presented without clear context by weather apps — can force unnecessary economic shutdowns of work and business activity.
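As a sanity check on those quoted figures, the implied value of an hour of sheltering time is easy to back out. This is simple arithmetic on the numbers above, not anything from the underlying study:

```python
# Back-of-envelope check on the figures quoted above:
# ~234 million person-hours/year under warnings, valued at ~$2.7 billion/year.
person_hours = 234e6   # person-hours spent under warnings per year (1996-2004)
total_cost = 2.7e9     # estimated annual value of that lost time, in dollars

implied_hourly_value = total_cost / person_hours
print(f"Implied value of time: ${implied_hourly_value:.2f}/hour")  # about $11.54/hour
```

That works out to roughly $11.50 per person-hour, a plausible average value-of-time figure for that period, suggesting the $2.7 billion estimate is essentially the person-hours priced at a typical wage.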


🏙️ 2. Evacuation and Preparedness Costs from False Alarms

Evacuation costs:

  • A NOAA analysis estimates that evacuation costs for hurricanes, when evacuations are called broadly in response to storm warnings, can reach about $1.4 billion annually in the U.S., with roughly $1 billion being unnecessary cost due to false alarms (people evacuating where no damaging winds ultimately occur).

Evacuations are one of the most expensive forms of response — from lost wages to lodging and fuel costs — and exaggerated risk messaging can trigger them prematurely or more frequently.


🏭 3. Business Productivity Losses from False Alerts

Example from private sector:

  • A building management firm reported that by reducing false storm alerts (better targeted data rather than broad app warnings), it saved ~$10 million in lost production time in one year alone.

This is a real dollar example showing how over-warning impacts businesses directly when they preemptively shut down operations.


💼 4. Broader Firm-Level Economic Avoidance

Cost avoidance from better warning info:

  • A study of nearly 500 firms found that switching from deterministic (binary “storm/no storm”) warnings to probabilistic information could produce $2.3 billion to $7.6 billion per year in avoided costs for businesses, thanks to better decision-making.

While this doesn’t measure exaggeration specifically, it underscores the economic penalty of imprecise or alarmist warnings, which many weather apps unintentionally echo.
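The logic behind that result can be illustrated with the classic cost-loss decision rule from the forecasting literature (a standard textbook model, not necessarily the one used in the cited study): a firm should pay the protection cost C whenever the forecast probability p of the event exceeds C/L, where L is the loss if the event hits unprotected.

```python
# Classic cost-loss decision rule (standard textbook model; illustrative only,
# not taken from the study cited above).
def should_protect(p: float, cost: float, loss: float) -> bool:
    """Protect when the expected loss avoided (p * loss) exceeds the protection cost."""
    return p * loss > cost

# Hypothetical example: shutting down costs $50k; an unprotected hit costs $400k.
# The break-even probability is 50/400 = 12.5%.
print(should_protect(0.10, 50_000, 400_000))  # False: 10% is below break-even
print(should_protect(0.30, 50_000, 400_000))  # True: 30% justifies shutting down
```

A binary “storm/no storm” warning forces every firm to act as if p were 0 or 1; probabilistic forecasts let each firm apply its own C/L threshold, which is where the estimated avoided costs come from.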


📉 5. Insurance & Property Impacts

Although not directly about exaggerated warnings, inaccurate perceptions of storm danger influence behavior and pricing:

  • Insurance premiums rising: In storm-prone states (e.g., Florida), homeowners’ premiums have risen by ~57% or more because of risk perceptions and loss patterns tied to severe weather.
  • Losses from storms: Billion-dollar disasters are common; for example, Hurricane Milton in 2024 had estimated total costs of $21–$85 billion (depending on source).

While not directly caused by exaggeration, inflated perceptions of danger (from apps or media) can affect insurance markets and property values, contributing indirectly to economic impacts.


📈 6. Net Benefits vs. Exaggeration

It’s important to balance:

  • Early warnings overall clearly save money and lives. Improved forecasts and early alert systems have been estimated to avoid $300 million to $2 billion per year in asset losses and provide up to $32 billion/year in broader economic benefits (through optimized activities and prep).

This highlights that the problem isn’t warnings per se, but inaccurate or exaggerated warnings that cause over-response.

The Hot Social Network Is… LinkedIn?

So says the Wall Street Journal. They have data to back it up:

Plus quotes from yours truly:

Even before Elon Musk gutted X’s content moderation, James Bailey was tired of the shouting. “It’s like a cursed artifact that gives you great power to keep up with what’s going on, but at the cost of subtly corrupting your soul,” said the 38-year-old Providence College economics professor.

He retreated. This year, he realized he was spending five to 10 minutes a day on a site he used to ignore.

The WSJ reporter contacted me after seeing my previous post about LinkedIn here, explaining how I think LinkedIn has improved as a way to share and read articles, and was always good as a way to keep up with former students. Just in the short time since the WSJ article came out, I finally used LinkedIn for one of its official purposes, hiring, where it worked wonders helping to fill a last-minute vacancy.

If you don’t trust me or the WSJ to identify the hot social network, let’s see what the actual cool kids are up to.

Steps To Grow Lettuce and Herbs in AeroGarden-Type Countertop Hydroponics Unit

This will be a longer-than-usual post, since I will try to include all the steps I used to grow salad ingredients in a compact (AeroGarden-type) hydroponics system. I hope this encourages readers to try this for themselves. See my previous post for an introduction to the hardware, including small modifications I made to it. I used a less-expensive ($45), reliable 18-hole MUGFA model here, but all the AeroGardens and their many knockoffs should work similarly. Most plant roots need access to oxygen as well as to water; these hydroponic units let the upper few inches of the roots sit in a (moist) “grow sponge” up out of the water, to help with aerobic metabolism.

Step 1. Unbox the hydroponics unit, set up per instructions near a power outlet. Fill tank close to upper volume marking.

Step 2. Add nutrients to the water in the tank: usually there are two small plastic bottles, one with nutrient mix “A” and the other with nutrient mix “B”, initially as dry granules. Add water to the fill lines of each of these bottles with the granules, shake till dissolved. (You can’t mix the A and B solutions directly together without dilution, because some components would precipitate out as solids. So, you must add first one solution, then the other, to the large amount of water in the tank.)

There is more than one way to do this. I pulled the deck off the tank and used a large measuring cup to get water from my sink into the tank, filling it to a little below the full line. For, say, 5 liters of water, I add about 25 ml of nutrient Solution A, stir well, then add 25 ml of Solution B and stir. You could also keep the deck on, with the circulation pump running, and slowly pour the nutrient solutions in through the fill hole (the frontmost center hole in the deck). You don’t have to be precise on amounts.
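For readers who like a formula, the dosing ratio above works out to about 5 ml of each solution per liter of water. Here is a tiny sketch of that arithmetic (my own rule of thumb, not a manufacturer specification):

```python
# Nutrient dosing at ~5 ml each of Solution A and Solution B per liter of water.
# This ratio is the author's practice, not a manufacturer specification.
def nutrient_dose(liters_of_water: float, ml_per_liter: float = 5.0):
    """Return (ml of Solution A, ml of Solution B) for a given volume of water."""
    dose = liters_of_water * ml_per_liter
    return dose, dose

a, b = nutrient_dose(5)  # filling the tank with 5 liters
print(a, b)              # 25.0 25.0 -- matching the ~25 ml of each in the text
```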

Step 3. Put the plastic baskets (sponge supports) in their holes in the deck, and put the conical porous planting sponges/plugs in the baskets. Let the sponges soak up water and swell. (This pre-wetting may not be necessary; it just worked for me.)

Step 4. Plant the seeds: Each sponge has a narrow hole in its top, and you need to get your seed down to the bottom of that hole. I pulled one moist sponge out at a time and propped it upright in a little holder on a table where I could work on it. I used the end of a plastic bread tie to pick up seeds from a little plate and poke them down to the bottom of the hole. You have to make a judgment call about how many seeds to plant in each hole. Lettuce seeds are large and pretty reliable, so I used two lettuce seeds for each lettuce sponge. Same for arugula (it turns out it was better NOT to pre-soak the arugula seeds, contrary to popular wisdom). If both seeds sprout, it’s OK to have two lettuce plants per hole, though you may not get much more production than from one plant per hole. For parsley*, where I wanted 2-3 plants per hole, I used three seeds each. For the tiny thyme seeds, I used about 5 seeds, figuring I could thin if they all came up. For cilantro, I used two pre-soaked seeds. I really wanted chives, but they are hard to sprout in these hydroponics units. I used five chive seeds each in two holes, but they never really sprouted, so I ended up planting something else in their holes.

I chose all fairly low-growing plants, no basil or tomatoes. Larger plants such as micro-dwarf tomatoes can be grown in these hydroponics units; so can basil, though you need to keep cutting it back aggressively. It may be best to choose all low or all high plants for a given grow campaign. See this Reddit thread for more discussion of growing things in a MUGFA unit.

Once all the plugs are back in their holders, you stick a light-blocking sticker on top of each basket. Each sticker has a hole in the middle through which the plants can grow up, but it blocks most of the light from hitting the grow sponge, to prevent algae growth. Then pop a clear plastic seeding cover dome on top of each hole, and you are done. The cover domes keep the seeds extra moist for sprouting; remove the domes after sprouting. Make sure the circulation pump is running and the grow lights are on (typically cycling 16 hours on/8 hours off). This seems like a lot of work when described here, but it goes fast once you have the rhythm. Once this setup stage is done, you can just sit back and let everything unfold, no muss, no fuss. Here is the seeded, covered state of affairs:

Picture: Seeds placed in grow sponges on Jan 14. Note green light-blocking stickers, and clear cover domes to keep seeds moist for germination. The overhead sunlamp has a lot of blue and red LEDs (which the plants use for photosynthesis), which gives all these photos a purple cast.

Jan 28 (Two weeks after planting): seedlings. Note some unused holes are covered, to keep light out of the nutrient solution in the tank. The center hole in front is used for refilling the tank.

Feb 6.  Showing roots of an arugula plant, 23 days after planting.

Step 5. Maintenance during the 2-4 month grow cycle. Monitor the water level via the viewing port in front. Top up as needed. Add nutrients as you add water (approx. 5 ml of Solution A and 5 ml of Solution B per liter of added water). The water will not go down very fast during the first month, but once the plants get established, water will likely be needed every 5-10 days. If you keep trimming outside leaves every several days, you can get away with densely planted greens; if you only harvest every two weeks or so, the plants get big enough to crowd each other when every hole on the deck is planted.

Optional: Supposedly it helps to keep the acidity (pH) of the nutrient solution in the range of 5.5-6.5. I think most users don’t bother checking this, since the nutrient solutions are buffered to try to keep pH in balance. Being a retired chemical engineer, I got this General Hydroponics kit for measuring and adjusting pH. On several occasions, the pH in the tank was about 6.5. That was probably perfectly fine, but I went ahead and added about 1/8 teaspoon of the pH lowering solution, to bring it down to about 6.0.   I also got a meter for measuring Electrical Conductivity/Total Dissolved Solids to monitor that parameter, but it was not necessary.

Feb 16: After a month, some greens are ready to snip the outer leaves. Lettuces (buttercrunch, red oak, romaine) on the right, herbs on the left.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Feb 17: Harvesting a small salad or sandwich filler every 2-3 days now.

March 6: Full sized, regular small harvests. All the lettuces worked great, buttercrunch is especially soft and sweet. Arugula (from the mustard plant family) gave a spicy edge. Italian parsley and thyme added flavor. The cilantro was slower growing, and only gave a few sprigs total.

Closeup March 16 (three months), just before closing out the grow cycle. Arugula foreground, lettuce top and right, thyme on left, Italian parsley upper left corner.

Step 6. Close out grow cycle. At some point, typically 2-4 months, it is time to bring a grow cycle to a close. I suppose with something like dwarf tomatoes, you could keep going longer, though you might need to pull the deck up and trim the roots periodically.  In my case, after three months, the arugula and cilantro were starting to bolt, though the lettuce, thyme, and parsley were still going strong. As of mid-March, my focus turned to outside planting, so I harvested all the remaining crops on the MUGFA, turned off the power, and gently pulled the deck off the tank. The whole space under the deck was a tangled mass of roots. I used kitchen shears to cut roots loose, enough to pull all the grow sponges and baskets out. The sponges got discarded, and the baskets saved for next time. I peeled off and saved the round green light-blocking stickers for re-use. I cleared all the rootlets from the filter sponge on the pump inlet. Then I washed out the tank per instructions. It took maybe 45 minutes for all this clean-out, to leave the unit ready for a next round of growing.

Stay tuned for a future blog post on growing watercress, which went really well this past fall. Looking to the future: In Jan 2026 I plan to do a replant of this 18-hole (blocked down to 14-holes) MUGFA device, sowing less lettuce (since we buy that anyway) but more arugula/Italian parsley/thyme for nutritious flavorings. For replacement nutrients and grow sponges, I got a Haligo hydroponics kit like this (about $12).

Growing these salad/sandwich ingredients in the kitchen under a built-in sunlamp provided good cheer and a bit of healthy food during the dark winter months. The clean hydroponic setup removed concerns about insect pests or under/overwatering.  It was a hobby; at this toy scale it did not “save money”, though from these learnings I could probably rig a larger homemade hydroponics setup which might reduce grocery costs. This exercise led to fun conversations with visitors and children, and was a reminder that nearly everything we eat comes from water, nutrients, and light, directly or indirectly.  

*Pro tips on germinating parsley seeds – – Parsley seeds have a tough coating, and can take weeks to germinate. Some techniques to speed things up:

(1) Lightly abrade the seeds by gently rubbing them between sheets of sandpaper.

(2) Soak in warmish water for 24-48 hours.

(3) For older seeds, cold stratification (1–2 weeks in a damp paper towel in the fridge) may help break dormancy.

Summary of You Wouldn’t Steal a Car

I have a new working paper with Bart Wilson titled “You Wouldn’t Steal a Car: Moral Intuition for Intellectual Property.”

This quote from the introduction explains the title:

… in the early 2000s… the Motion Picture Association of America (MPAA) released an anti-piracy trailer shown before films that argued: (1) “You wouldn’t steal a car,” (2) pirating movies constitutes “stealing,” and (3) piracy is a crime. The very need for this campaign, and the ridicule it attracted, signals persistent disagreement over whether digital copying constitutes a moral violation.

The main idea:

In contemporary economies, “idea goods” comprise a substantial share of value. Our paper examines how norms evolve when individuals evaluate harm after the taking of nonrivalrous resources such as digital files.

We report experimental evidence on moral evaluations of unauthorized appropriation, contingent on whether the good is rivalrous or nonrivalrous. In a virtual environment, participants produce and exchange two types of resources: nonrivalrous “discs,” replicable at zero marginal cost, and rivalrous “seeds,” which entail positive marginal cost and cannot be simultaneously possessed or consumed by multiple individuals. Certain treatments allow unauthorized taking, which permits observation of whether participants classify such actions as moral transgressions.

Participants consistently label the taking of rivalrous goods as “stealing,” whereas they do not apply the same term to the taking of nonrivalrous goods.

To test the moral intuition for taking ideas, we create an environment where people can take from each other and we study their freeform chat. The people in the game each control a round avatar in a virtual environment, as you can see in this screenshot below.

In the experiment, “seeds” represent a rivalrous resource, meaning they operate like most physical goods. If the player in the picture (Almond) takes a seed from the Blue player, then Blue will be deprived of the seed, functionally the equivalent of one’s car being stolen.

Thus, it is natural for the players to call the taking of seeds “stealing.” Our research question is whether similar claims will emerge after the taking of non-rivalrous goods that we call “discs.”

The following quote from our paper indicates that the subjects do not label or conceptualize the taking of digital goods (discs) as “stealing.”

Participants discuss discs often enough to reveal how they conceptualize the resource. In many instances, they articulate the positive-sum logic of zero-marginal-cost copying. For example, … farmer Almond reasons, “ok so disks cant be stolen so everyone take copies,” explicitly rejecting the application of “stolen” to discs.

Participants never instruct one another to stop taking disc copies, yet they frequently urge others to stop taking seeds. The objection targets the taking away of rivalrous goods, not the act of copying per se. As farmer Almond explains in noSeedPR2, “cuz if u give a disc u still keep it,” emphasizing that artists can replicate discs at zero marginal cost.

We encourage you to read the manuscript if you are interested in the details of how we set up the environment. The conclusion is that it is not intuitive for people to view piracy as a crime.

This has implications for how the modern information economy will be structured. Consider “the subscription economy.” Increasingly, consumers pay recurring fees for ongoing access to products/services (like Netflix, Adobe software) instead of one-time purchases. Gen Z has been complaining on TikTok that they feel trapped with so many recurring payments and lack a sense of ownership.

In a recent interview on a talk show called The Stream, I speculated that part of the reason companies are moving to the subscription model is that they do not trust consumers with “ownership” of digital goods. People will share copies of songs and software, if given the opportunity, to the point where creators cannot monetize their work by selling the full rights to digital goods anymore.

A feature of our experimental design is that creators of discs get credit as the author of their creation even when it is being passed around without their explicit permission. Future work could explore what would happen if that were altered.

Related Reading

“An Experiment on Protecting Intellectual Property” (2014), with Bart Wilson. Experimental Economics, 17:4, 691-716.

“You Wouldn’t Steal a Car: Moral Intuition for Intellectual Property,” with Bart Wilson (SSRN)

Joy on The Subscription Economy (EWED)

The Anthropic Settlement: A $1.5 Billion Precedent for AI and Copyright (EconLog)

Review of MUGFA (AeroGarden-type) Countertop Hydroponic Units

Last year about this time, as the outside world got darker and colder, and the greenery in my outdoor planters shriveled to brown – – I resolved to fight back against seasonal affective disorder by growing some lettuce and herbs indoors under a sun lamp.

After doing some reading and thinking, I settled on getting a countertop hydroponics unit, instead of rigging a lamp over pots filled with dirt indoors. With a compact hydroponics unit there is no dirt and there are no bugs; it has a well-designed built-in sun lamp on a timer, and it is more or less self-watering.

These systems have a water tank that you fill with water and some soluble nutrients. There is a pump in the tank that circulates the water. There is a deck over the tank with typically 8 to 12 holes that are around 1 inch diameter. Into each hole you put a conical plug or sponge made of compressed peat moss, supported by a plastic basket. On the top of each sponge is a little hole, into which you place the seeds you want to grow.

A support basket with a dry (unwetted, unswollen) peat moss grow sponge/plug in it.

As long as you keep the unit plugged in, so the lights go on when they should, and you keep the nutrient solution topped up, you have a tidy automatic garden on a table, countertop, or shelf.

The premier countertop hydroponics brand, which has defined this genre over the past twenty years, is AeroGarden. This brand is expensive. Historically its larger models were $200-$300, though with competition its larger models are now just under $200. AeroGarden tries to justify the high cost with sleek styling and customizable automation of the lighting cycles, linked to your cell phone.

I decided to go with a cheaper brand, for two reasons. First, why spend $200 when I could get similar function for $50 (especially if I wasn’t sure I would like hydroponics)? Second, I don’t want the bother and possible malfunction associated with having to link an app on my cell phone to the growing device and program it. I wanted something simple and stupid that just turns on and goes.

So I went with a MUGFA brand 18-hole hydroponics unit last winter. It is simple and robust. The LED growing lights are distributed along the underside of a wide top lamp piece. The lamp has a lot of vertical travel (14"), so you can accommodate relatively tall plants. The lights have a simple cycle of 16 hours on, 8 hours off. You can reset this by turning the power off and on again; I do this once, early on some morning, so from then on the lights are on during the day and evening, and off at night. The water pump circulates the nutrient solution through channels on the underside of the deck, so each grow sponge has a little dribble of solution falling onto it when the pump cycle is on. I snagged a second MUGFA unit, a 12-hole model, when it was on sale last spring. The MUGFA units come complete with grow sponges/plugs, support baskets for the sponges, nutrients (that you add to the water), clear plastic domes you put over the deck holes while the seeds are germinating, and little support sticks for taller plants. You have to buy seeds separately.

Images above from Amazon, for the 12-hole model

I have made a couple of small modifications to my MUGFA units. The pump is not really sized for reaching 18 holes, and with plants of any size you are not likely to be stuffing 18 plants on that grow deck anyway. Also, the power of the lamp for the 18-hole unit (24 W) is the same as for the 12-hole unit; the LEDs are just spread over a wider lamp area. That 24 W is OK for greens that don’t need so much light, but may only be enough to grow a few (mini) tomato plants. For all these reasons, I don’t use the four corner holes on the 18-hole unit. Those corner holes get the least light and the least water flow. To increase the water flow to the other 14 holes, I plugged up the outlets of the channels on the underside of the deck leading to those four holes. I cut little pieces of rubber sheeting and stuffed them into the channel outlets for those holes.

The 12-hole unit has a slightly more pleasing compact form factor, but it has a minor design defect [1]. The flow out of the outlet of each of the 12 channels under the deck is regular, but not very strong. Consequently, the water that comes out of each outlet drops almost straight down and splashes directly into the water tank, without contacting the grow sponge at that hole. The waterfall noise was annoying. The fix was easy, but a little tedious to implement. I cut little pieces of strong black duct tape and stuck them under the outlet of each hole, to make the water travel another quarter inch further horizontally. Those little tabs got the water in contact with the grow sponge basket. The picture below shows the deck upside down, with the water channels under the deck going to each hole. There is a white sponge basket sticking through the nearest hole, and my custom piece of black duct tape is on the end of the water channel there, touching the basket. (To cover the sticky side of the duct tape tab that would otherwise be left exposed and touching the basket, I cut another, smaller piece of duct tape to cover that portion of the tab, sticky side to sticky side.) This sounds complicated, but it is straightforward once you actually do it. Also, many cheap knock-off hydroponics units don’t have these under-deck flow channels at all. With MUGFA you are getting nearly AeroGarden-type hardware for a third of the price, so it is worth a bit of duct tape to bring it up to optimal performance.

12-hole MUGFA deck, upside down with one basket, showing my bit of black duct tape to convey water from the channel over to the basket.

Some light escapes out sideways from under the horizontal lamps on these units. As an efficiency freak, I taped little aluminum foil reflectors hanging down from the back and sides of the lamp piece, but that is not necessary.

To keep this post short, I have just talked about the hardware here. I will describe actual plant growing in my next post. But here is one picture of my kitchen garden last winter, with the plants about 2/3 of their final sizes:

The bottom line is, I’ve been quite satisfied with both of these MUGFA units, and would recommend them to others. They provided good cheer in the dark of winter, as well as good conversations with visitors and good fresh lettuce and herbs. An alternate use of these types of hydroponics units is to start seedlings for an outside garden.

ENDNOTE

[1] For the hopelessly detail-obsessed technical nerds among us – – the specific design mistake in the 12-hole model is subtle. I’ll explain a little more here. Here is a picture of the deck for the 18-hole model upside down, with three empty baskets inserted. The network of flow channels for the water circulation is visible on the underside. When the deck is in place on the tank, water is pumped into the short whitish tube at the left of this picture, flows into the channels, then out the ends of all the channels. (Note that on the corner holes here, upper and lower right, I stuck little pieces of rubber into the ends of the flow channels to block them off, since I don’t use the corner holes on this model; that blocking was not really necessary, it was just an engineering optimization by a technical nerd.)

 Anyway, the key point is this: the way the baskets are oriented in the 18-hole model here, a rib of the basket faces the outlet of each flow channel. The result is that as soon as the water exits the flow channel, it immediately contacts a rib of the basket and flows down the basket and wets the grow sponge/plug within the basket. All good.

The design mistake with the 12-hole model is that the baskets are oriented such that the flow channels terminate between the ribs. The water does not squirt far enough horizontally to contact the non-rib part of the basket or the sponge, so it just drips down and splashes into the tank without wetting the sponge. This is not catastrophic, since the sponges are normally wetted just by sitting in the water in the tank, but it is not optimal. All because of a 15-degree error in the radial orientation of the little rib notches in the deck. Who knows, maybe MUGFA will send me a free beta-test improved 12-hole model if I point this out to them.

Google’s TPU Chips Threaten Nvidia’s Dominance in AI Computing

Here is a three-year chart of stock prices for Nvidia (NVDA), Alphabet/Google (GOOG), and the generic QQQ tech stock composite:

NVDA has been spectacular. If you had put $20k in NVDA three years ago, it would have turned into nearly $200k. Sweet. Meanwhile, GOOG poked along at the general pace of QQQ. Then, around Sept 1 (yellow line), GOOG started to pull away from QQQ, and it has not looked back.
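That tenfold move corresponds to a remarkable annualized return. The arithmetic below uses the rounded figures from the text, not exact share-price data:

```python
# Implied annualized return (CAGR) for "$20k turned into nearly $200k in 3 years".
# Rounded, illustrative figures only, not exact share-price data.
start, end, years = 20_000, 200_000, 3
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annualized return: {cagr:.0%}")  # roughly 115% per year
```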

And in the past two months, GOOG stock has stomped all over NVDA, as shown in the six-month chart below. The two stocks were neck and neck in early October; then GOOG surged way ahead. In the past month, GOOG is up sharply (red arrow), while NVDA is down significantly:

What is going on? It seems that the market is buying the narrative that Google’s Tensor Processing Unit (TPU) chips are a competitive threat to Nvidia’s GPUs. Last week, we published a tutorial on the technical details here. Briefly, Google’s TPUs are hardwired to perform key AI calculations, whereas Nvidia’s GPUs are more general-purpose. For a range of AI processing, the TPUs are faster and much more energy-efficient than the GPUs.

The greater flexibility of the Nvidia GPUs, and the programming community’s familiarity with Nvidia’s CUDA programming language, still gives Nvidia a bit of an edge in the AI training phase. But much of that edge fades for the inference (application) usages for AI. For the past few years, the big AI wannabes have focused madly on model training. But there must be a shift to inference (practical implementation) soon, for AI models to actually make money.

All this is a big potential headache for Nvidia. Because of their quasi-monopoly on AI compute, they have been able to charge a huge 75% gross profit margin on their chips. Their customers are naturally not thrilled with this, and have been making some efforts to devise alternatives. But it seems like Google, thanks to a big head start in this area, and very deep pockets, has actually equaled or even beaten Nvidia at its own game.

This explains much of the recent disparity in stock movements. It should be noted, however, that for a quirky business reason, Google is unlikely in the near term to displace Nvidia as the main go-to for AI compute power. The reason is this: most AI compute power is implemented in huge data/cloud centers. And Google is one of the three main cloud vendors, along with Microsoft and Amazon, with IBM and Oracle trailing behind. So, for Google to supply Microsoft and Amazon with its chips and accompanying know-how would be to enable its competitors to compete more strongly.

Also, an AI user like, say, OpenAI would be reluctant to commit to running in a Google-owned facility on Google chips: the user would be somewhat locked in, and thus held hostage, because switching to a different data center would be expensive if Google tried to raise prices. In contrast, if all the centers use Nvidia chips, a user can readily move to a different data center for a better deal.

For the present, then, Google is using its TPU technology primarily in-house. The company has a huge suite of AI-adjacent business lines, so its TPU capability does give it genuine advantages there. Reportedly, soul-searching continues in the Google C-suite about how to more broadly monetize its TPUs. It seems likely that they will find a way. 

As usual, nothing here constitutes advice to buy or sell any security.

The End of Boredom: How AI Companions Might Reduce Random Violence

Joy writes: I read Co-Intelligence by Ethan Mollick (thanks to Samford for the free book). Most of it is old news for those of us who follow Ethan on social media and use ChatGPT. However, something that stood out to me was his mention of a study in which humans decide to give themselves a painful shock rather than sit alone in silence for 15 minutes.

Claude comments further based on my prompt:

The End of Boredom: How AI Companions Might Reduce Random Violence

Remember that study where people would rather shock themselves than sit alone with their thoughts? Ethan Mollick references it in Co-Intelligence, and it reveals something unsettling: 67% of men and 25% of women chose electric shocks over sitting quietly for just 15 minutes.

Here’s a strange thought—what if our AI-saturated future accidentally reduces certain types of violence simply by eliminating boredom?

The Violence-Boredom Connection

“Idle hands are the devil’s workshop” exists for a reason. Research has long linked boredom to risk-taking, substance abuse, and impulsive violence—the opportunistic kind that emerges from restlessness rather than planning. Young men starting fights on street corners, vandalism on summer nights, the restless energy that sometimes turns destructive—much of this stems from the unbearable weight of having nothing to do.

Enter Infinite Engagement

We’re rapidly approaching a world where boredom might become extinct. AI companions are becoming always available, infinitely patient, endlessly novel, and perfectly tailored to individual interests. Your future AI won’t just answer questions—it will debate philosophy at 3 AM, create personalized games, generate stories with you as the protagonist, or help explore any curiosity rabbit hole.

The cognitive void that once led people to shock themselves rather than think? It might simply cease to exist.

The Unexpected Benefits

Consider the implications: Young men who might have started fights out of restlessness could instead be deeply engaged in AI-assisted music production or coding projects. The same restless energy that manifests destructively could be channeled into creative collaboration.

AI companions could help process frustration before it builds to a breaking point—imagine having an infinitely patient listener during those dangerous 2 AM spirals of rage. While not replacing human connection, AI interaction might buffer the worst effects of isolation that can lead to radicalization.

The Dark Side

This isn’t utopian. An always-engaged society raises concerns: What happens to human relationships when AI companionship is always easier? Does constant stimulation atrophy our ability to self-reflect? Might we lose the creative insights that emerge from boredom?

Crucially, this would only address impulsive, boredom-driven violence. Systemic violence, ideological extremism, and deeper social problems won’t disappear because people have engaging AI companions.

A Strange New World

For the first time in history, boredom—that uncomfortable void that has driven both creativity and destruction—might become optional. The same species that chose shocks over silence is about to get exactly what it wanted: constant, meaningful stimulation.

Whether this leads to a more peaceful society or new problems we haven’t imagined remains to be seen. The question is whether we’ll recognize what we’ve lost only after it’s gone.

The experiment is already underway—we’re all participating in it.

Joy comments at the end: The AI-written essay is willing to explore downsides of AI engagement.

AI Computing Tutorial: Training vs. Inference Compute Needs, and GPU vs. TPU Processors

A tsunami of sentiment shift is washing over Wall Street, away from Nvidia and towards Google/Alphabet. In the past month, GOOG stock is up a sizzling 12%, while NVDA plunged 13% despite producing its usual earnings beat. Today I will discuss some of the technical backdrop to this sentiment shift, which involves the differences between training AI models and actually applying them to specific problems (“inference”), and the significantly different processing chips suited to each. Next week I will cover the company-specific implications.

As most readers here probably know, the Large Language Models (LLMs) that underpin the popular new AI products work by sucking in nearly all the text (and now other data) that humans have ever produced, reducing each word or form of a word to a numerical token, and grinding and grinding to discover consistent patterns among those tokens. Layers of (virtual) neural nets are used. The training process involves an insane amount of trying to predict, say, the next word in a sentence scraped from the web, evaluating why the model missed it, and feeding that information back to adjust the matrix of weights on the neural layers, until the model can predict that next word correctly. Then on to the next sentence found on the internet, to work and work until it can be predicted properly. At the end of the day, a well-trained AI chatbot can respond to Bob’s complaint about his boss with an appropriately sympathetic pseudo-human reply like, “It sounds like your boss is not treating you fairly, Bob. Tell me more about…” It bears repeating that LLMs do not actually “know” anything. All they can do is produce a statistically probable word salad in response to prompts. But they can now do that so well that they are very useful.*

This is an oversimplification, but gives the flavor of the endless forward and backward propagation and iteration that is required for model training. This training typically requires running vast banks of very high-end processors, typically housed in large, power-hungry data centers, for months at a time.
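For the technically curious, the predict-miss-adjust loop described above can be sketched in miniature. This is strictly an illustrative toy of my own, not how real LLMs are built: a single softmax layer learning next-word prediction on a made-up twelve-word corpus stands in for billions of transformer weights.

```python
# Toy sketch of the forward/backward training loop described above.
# A single softmax layer learns to predict the next word on a tiny
# made-up corpus. Real LLMs use deep transformer stacks with billions
# of weights, but the predict -> measure error -> adjust loop is the same.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (len(vocab), len(vocab)))  # current word -> next-word logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for epoch in range(500):
    for cur, nxt in zip(corpus, corpus[1:]):
        p = softmax(W[idx[cur]])   # forward pass: predict the next word
        grad = p.copy()
        grad[idx[nxt]] -= 1.0      # error signal (cross-entropy gradient)
        W[idx[cur]] -= 0.1 * grad  # backward pass: nudge the weights

# "sat" is always followed by "on" in the corpus, so after training
# the model should predict "on" with high confidence.
print(vocab[int(np.argmax(softmax(W[idx["sat"]])))])
```

The same loop, scaled up by many orders of magnitude and run over months on banks of accelerators, is the training phase discussed below.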

Once a model is trained (i.e., once the neural net weights have been determined), then running it (generating responses to human prompts) takes considerably less compute power. This is the “inference” phase of generative AI. It still takes a lot of compute to run a big model quickly, but a simpler LLM like DeepSeek can be run, with only modest time lags, on a high-end PC.

GPUs Versus ASIC TPUs

Nvidia has made its fortune by taking graphics processing units (GPUs) that were developed for the massively parallel calculations needed to drive video displays, and adapting them to more general problem solving that could make use of rapid matrix calculations. Nvidia chips and its CUDA language have been employed for physical simulations such as seismology and molecular dynamics, and then for Bitcoin calculations. When generative AI came along, Nvidia chips and programming tools were the obvious choice for LLM computing needs. The world’s lust for AI compute is so insatiable, and Nvidia has had such a stranglehold, that the company has been able to charge an eye-watering gross profit margin of around 75% on its chips.

AI users of course are trying desperately to get compute capability without having to pay such high fees to Nvidia. It has been hard to mount a serious competitive challenge, though. Nvidia has a commanding lead in hardware and supporting software, and (unlike the Intel of years gone by) keeps forging ahead rather than resting on its laurels.

So far, no one seems to be able to compete strongly with Nvidia in GPUs. However, there is a different chip architecture, which by some measures can beat GPUs at their own game.

Nvidia GPUs are general-purpose parallel processors with high flexibility, capable of handling a wide range of tasks from gaming to AI training, supported by a mature software ecosystem like CUDA. GPUs beat out the original computer central processing units (CPUs) for these tasks by sacrificing flexibility for the power to do parallel processing of many simple, repetitive operations. The newer “application-specific integrated circuits” (ASICs) take this specialization a step further. They can be custom hard-wired to do specific calculations, such as those required for Bitcoin and now for AI. By cutting out steps used by GPUs, especially fetching data in and out of memory, ASICs can do many AI computing tasks faster and cheaper than Nvidia GPUs, and using much less electric power. That is a big plus, since AI data centers are driving up electricity prices in many parts of the country. The particular type of ASIC that is used by Google for AI is called a Tensor Processing Unit (TPU).

I found this explanation by UncoverAlpha to be enlightening:

A GPU is a “general-purpose” parallel processor, while a TPU is a “domain-specific” architecture.

The GPUs were designed for graphics. They excel at parallel processing (doing many things at once), which is great for AI. However, because they are designed to handle everything from video game textures to scientific simulations, they carry “architectural baggage.” They spend significant energy and chip area on complex tasks like caching, branch prediction, and managing independent threads.

A TPU, on the other hand, strips away all that baggage. It has no hardware for rasterization or texture mapping. Instead, it uses a unique architecture called a Systolic Array.

The “Systolic Array” is the key differentiator. In a standard CPU or GPU, the chip moves data back and forth between the memory and the computing units for every calculation. This constant shuffling creates a bottleneck (the Von Neumann bottleneck).

In a TPU’s systolic array, data flows through the chip like blood through a heart (hence “systolic”).

  1. It loads data (weights) once.
  2. It passes inputs through a massive grid of multipliers.
  3. The data is passed directly to the next unit in the array without writing back to memory.

What this means, in essence, is that a TPU, because of its systolic array, drastically reduces the number of memory reads and writes required from high-bandwidth memory (HBM). As a result, the TPU can spend its cycles computing rather than waiting for data.
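The three-step data flow in the quoted passage can be mimicked in a minimal, untimed simulation. This is my own illustrative sketch, not Google's design; a real systolic array clocks activations and partial sums through thousands of physical multiply-accumulate cells in parallel:

```python
# Minimal, untimed simulation of a weight-stationary systolic array
# computing Y = X @ W, illustrating the three steps quoted above:
# load weights once, stream inputs through the grid, pass partial
# sums along without writing intermediate results back to memory.
import numpy as np

def systolic_matmul(X, W):
    M, K = X.shape
    K2, N = W.shape
    assert K == K2
    Y = np.zeros((M, N))
    for m in range(M):            # stream one activation vector at a time
        partial = np.zeros(N)     # partial sums flowing down the array
        for k in range(K):        # row k of processing elements
            for n in range(N):    # PE (k, n) holds weight W[k, n] permanently
                partial[n] += X[m, k] * W[k, n]  # multiply, pass sum onward
        Y[m] = partial            # results exit the bottom of the array;
                                  # nothing was written back to memory mid-way
    return Y

A = np.arange(6.0).reshape(2, 3)
B = np.ones((3, 2))
print(systolic_matmul(A, B))      # matches A @ B
```

In Python the inner loops run one at a time, of course; the hardware point is that every multiply-accumulate in the grid happens simultaneously each clock cycle, with no trip to memory in between.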

Google has developed the most advanced ASICs for doing AI, which are now on some levels a competitive threat to Nvidia. Some implications of this will be explored in a post next week.

*Next-generation AI seeks to step beyond the LLM world of statistical word salads and to model cause and effect at the level of objects and agents in the real world; see Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence.

Standard disclaimer: Nothing here should be considered advice to buy or sell any security.

Structural Insulated Panels (SIPs): The Latest, Greatest (?) Home Construction Method

Last week I drove an hour south to help an acquaintance with constructing his retirement home. I answered a group email request, looking for help in putting up a wall in this house.
I assumed this was conventional stick-built construction, so I envisioned framing a studded wall out of two-by-fours and two-by-sixes flat on the ground, and then needing four or five guys to swing the wall up to vertical, like an old-fashioned barn raising.

But that wasn’t it at all. This house was being built from Structural Insulated Panels (SIPs). These panels have a Styrofoam core, around 5 inches thick, with a facing on each side of thin oriented strand board (OSB). (OSB is a kind of cheapo plywood.)


The edges have a sort of tongue and groove configuration, so they mesh together. Each of the SIP panels was about 9 feet high and between 2 feet and 8 feet long. Two strong guys could manhandle a panel into position. Along the edge of the floor, 2×6’s had been mounted to guide the positioning of the bottom of each wall panel.


We put glue and sealing caulk on the edges to stick them together, drove 7-inch-long screws through the edges after they were in place, and shot a series of nails through the OSB edges into the 2×6’s at the bottom. Pneumatic nail guns give such a satisfying “thunk” with each trigger pull, you feel quite empowered. Here are a couple photos from that day:


The homeowner told me that he learned about SIP construction from an exhibit in Washington, DC that he attended with his grandson. The exhibit was on building techniques through the ages, starting with mud huts, and ending with SIP as the latest technique. That inspired him.

(As an old guy, I was not of much use lifting the panels. I did drive in some nails and screws. I was not initially aware of the glue/caulk along the edges, so I spent my first 20 minutes on the job wiping off the sticky goo I got all over my gloves and coat when I grabbed my first panel. My chief contribution that day was to keep a guy who was lifting a heavy panel beam overhead from toppling backwards off a stepladder).

We amateurs were pretty slow, but I could see that a practiced crew could go slap slap slap and erect all the exterior walls of a medium-sized single-story house in a day or two, without needing advanced carpentry skills. Those walls would come complete with insulation. They would still need weatherproof exterior siding (e.g., vinyl or faux stone) on the outside, and sheetrock on the inside. Holes were pre-drilled in the Styrofoam for running the electrical wiring up through the SIPs.

From my limited reading, it seems that the biggest single advantage of SIP construction is quick on-site assembly. It is ideal for situations where you have only a limited time window for construction, or in an isolated or affluent area where site labor is very expensive and hard to obtain (e.g., a ski resort town). Reportedly, SIP buildings are mechanically stronger than stick-built ones, handy in case of earthquakes or hurricanes. Also, a SIP wall has very high insulation value, and the construction method is practically airtight.

SIP construction is not cheaper than stick-built; it’s around 10% more expensive. You need perfect communication with the manufacturer of the SIP panels; if the delivered panels don’t fit properly on-site, you are hosed. Also, it is tough to modify a SIP house once it is built.

Because it is so airtight, it requires some finesse in designing the HVAC system. You need to be very careful protecting the walls from moisture, both inside and out, since the SIP panels can lose strength if they get wet. For that reason, some folks prefer not to use SIPs for roofs, but only for walls and first-story flooring.
For more on SIP pros and cons, see here and here.

Michael Burry’s New Venture Is Substack “Cassandra Unchained”: Set Free to Prophesy All-Out Doom on AI Investing

This is a quick follow-up to last week’s post on “Big Short” Michael Burry closing down his Scion Asset Management hedge fund. Burry had teased on X that he would announce his next big thing on Nov 25. It seems he is now a day or two early: Sunday night he launched a paid-subscription “Cassandra Unchained” Substack. There he claims that:

Cassandra Unchained is now Dr. Michael Burry’s sole focus as he gives you a front row seat to his analytical efforts and projections for stocks, markets, and bubbles, often with an eye to history and its remarkably timeless patterns.

Reportedly the subscription cost is $39 a month, or $379 annually, and there are 26,000 subscribers already. Click the abacus and…that comes to a cool $9.9 million a year in subscription fees. Not bad compensation for sharing your musings online.
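For the record, the abacus math, assuming as a simplification that all 26,000 subscribers pay the annual rate (the actual monthly/annual mix is not public):

```python
# Back-of-envelope revenue check, assuming all 26,000 subscribers
# pay the $379 annual rate (the real monthly/annual mix is unknown).
subscribers = 26_000
annual_fee = 379  # dollars per year
revenue = subscribers * annual_fee
print(f"${revenue:,} per year")  # $9,854,000 -- call it $9.9 million
```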

Michael Burry was dubbed “Cassandra” by Warren Buffett in recognition of his prescient warnings about the 2008 housing market collapse, a prophecy that was initially ignored, much like the mythological Cassandra who was fated to deliver true prophecies that were never believed. Burry embraced this nickname, adopting “Cassandra” as his online moniker on social media platforms, symbolizing his role as a lone voice warning of impending financial disaster. On the About page of his new Substack, he wrote that managing clients’ money in a hedge fund like Scion came with restrictions that “muzzled” him, such that he could only share “cryptic fragments” publicly, whereas now he is “unchained.”

Of his first two posts on the new Substack, one was a retrospective on his days as a practicing doctor (a resident in neurology at Stanford Hospital) in 1999-2000. He had done a lot of online posting on investing topics, focusing on valuations, and finally left medicine to start a hedge fund. As he tells it, he called the dot-com bubble before it popped.

Business Insider summarizes Burry’s second post, which attacks the central premise of those who claim the current AI boom is fundamentally different from the 1990s dot-com boom:

The second post aims straight at the heart of the AI boom, which he calls a “glorious folly” that will require investigation over several posts to break down.

Burry goes on to address a common argument about the difference between the dot-com bubble and AI boom — that the tech companies leading the charge 25 years ago were largely unprofitable, while the current crop are money-printing machines.

At the turn of this century, Burry writes, the Nasdaq was driven by “highly profitable large caps, among which were the so-called ‘Four Horsemen’ of the era — Microsoft, Intel, Dell, and Cisco.”

He writes that a key issue with the dot-com bubble was “catastrophically overbuilt supply and nowhere near enough demand,” before adding that it’s “just not so different this time, try as so many might do to make it so.”

Burry calls out the “five public horsemen of today’s AI boom — Microsoft, Google, Meta, Amazon and Oracle” along with “several adolescent startups” including Sam Altman’s OpenAI.

Those companies have pledged to invest well over $1 trillion into microchips, data centers, and other infrastructure over the next few years to power an AI revolution. They’ve forecasted enormous growth, exciting investors and igniting their stock prices.

Shares of Nvidia, a key supplier of AI microchips, have surged 12-fold since the start of 2023, making it the world’s most valuable public company with a $4.4 trillion market capitalization.

“And once again there is a Cisco at the center of it all, with the picks and shovels for all and the expansive vision to go with it,” Burry writes, after noting the internet-networking giant’s stock plunged by over 75% during the dot-com crash. “Its name is Nvidia.”

Tell us how you really feel, Michael. Cassandra, indeed.

My amateur opinion here: I think there is a modest but significant chance that the hyperscalers will not all be able to make enough fresh money to cover their ginormous 2024-2028 investments in AI capabilities. What happens then? Google, Meta, and Amazon might need to write down hundreds of billions of dollars on their balance sheets, which would show as ginormous hits to GAAP earnings for a number of quarters. But then life would go on just fine for these cash machines, and the market may soon forgive and forget this massive misallocation of old cash, as long as operating cash keeps rolling in as usual. Stocks are, after all, priced on forward earnings. If the AI boom busts, all tech stock prices would sag, but I think the biggest operating impact would be on suppliers of chips (like Nvidia) and of data centers (like Oracle). So, Burry’s comparison of 2025 Nvidia to 1999 Cisco seems apt.