Steps To Grow Lettuce and Herbs in AeroGarden-Type Countertop Hydroponics Unit

This will be a longer-than-usual post, since I will try to include all the steps I used to grow salad ingredients in a compact (AeroGarden-type) hydroponics system. I hope this encourages readers to try it for themselves. See my previous post for an introduction to the hardware, including the small modifications I made to it. I used a less-expensive ($45), reliable 18-hole MUGFA model here, but the AeroGarden and its many knockoffs should work similarly. Most plant roots need access to oxygen as well as to water; these hydroponic units let the upper few inches of the roots sit in a moist “grow sponge” up out of the water, which helps with aerobic metabolism.

Step 1. Unbox the hydroponics unit, set up per instructions near a power outlet. Fill tank close to upper volume marking.

Step 2. Add nutrients to the water in the tank: usually there are two small plastic bottles, one with nutrient mix “A” and the other with nutrient mix “B”, initially as dry granules. Add water to the fill line of each bottle and shake until the granules dissolve. (You can’t mix the A and B solutions directly together without dilution, because some components would precipitate out as solids. So you must add first one solution, then the other, to the large volume of water in the tank.)

There is more than one way to do this. I pulled the deck off the tank and used a large measuring cup to carry water from my sink to the tank, filling it to a little below the full line. For, say, 5 liters of water, I add about 25 ml of nutrient Solution A, stir well, then add 25 ml of Solution B and stir. You could also keep the deck on, with the circulation pump running, and slowly pour the nutrient solutions in through the fill hole (the frontmost center hole in the deck). You don’t have to be precise about amounts.

Step 3. Put the plastic baskets (sponge supports) in their holes in the deck, and put the conical porous planting sponges/plugs in the baskets. Let the sponges soak up water and swell. (This pre-wetting may not be necessary; it just worked for me).

Step 4. Plant the seeds: Each sponge has a narrow hole in its top, and you need to get your seed down to the bottom of that hole. I pulled one moist sponge out at a time and propped it upright in a little holder on a table where I could work on it. I used the end of a plastic bread tie to pick up seeds from a little plate and poke them down to the bottom of the hole. You have to make a judgment call on how many seeds to plant in each hole. Lettuce seeds are large and pretty reliable, so I used two lettuce seeds for each lettuce sponge. Same for arugula (it turns out that it was better NOT to pre-soak the arugula seeds, contrary to popular wisdom). If both seeds sprout, it’s OK to have two lettuce plants per hole, though you may not get much more production than from one plant per hole. For parsley, where I wanted 2-3 plants per hole, I used three seeds each. For the tiny thyme seeds, I used about 5 seeds, figuring I could thin if they all came up. For cilantro, I used two pre-soaked seeds. I really wanted chives, but they are hard to sprout in these hydroponics units. I used five chive seeds each in two holes, but they never really sprouted, so I ended up planting something else in their holes.

I chose all fairly low-growing plants, no basil or tomatoes. Larger plants such as micro-dwarf tomatoes can be grown in these hydroponics units, and so can basil, though you need to keep cutting it back aggressively. It may be best to choose all low or all high plants for a given grow campaign. See this Reddit thread for more discussion of growing things in a MUGFA unit.

Once all the plugs are back in their holders, you stick a light-blocking sticker on top of each basket. Each sticker has a hole in the middle that the plants can grow up through, but it blocks most of the light from hitting the grow sponge, to prevent algae growth. Then pop a clear plastic cover dome on top of each hole, and you are done. The cover domes keep the seeds extra moist for sprouting; remove the domes after sprouting. Make sure the circulation pump is running and the grow lights are on (typically cycling 16 hours on/8 hours off). This seems like a lot of work when described here, but it goes fast once you have the rhythm. Once this setup stage is done, you can just sit back and let everything unfold, no muss, no fuss. Here is the seeded, covered state of affairs:

Picture: Seeds placed in grow sponges on Jan 14. Note green light-blocking stickers, and clear cover domes to keep seeds moist for germination. The overhead sunlamp has a lot of blue and red LEDs (which the plants use for photosynthesis), which gives all these photos a purple cast.

Jan 28 (Two weeks after planting): seedlings. Note some unused holes are covered, to keep light out of the nutrient solution in the tank. The center hole in front is used for refilling the tank.

Feb 6.  Showing roots of an arugula plant, 23 days after planting.

Step 5. Maintenance during 2-4 month grow cycle. Monitor water level via viewing port in front. Top up as needed. Add nutrients as you add water (approx. 5 ml of Solution A and 5 ml Solution B, per liter of added water). The water will not go down very fast during the first month, but once plants get established, water will likely be needed every 5-10 days.
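For those who like to see the arithmetic spelled out, here is a trivial dosing helper (my own sketch, not anything supplied with the MUGFA), assuming the roughly 5 ml of each solution per liter of added water noted above:

```python
# Back-of-the-envelope nutrient dosing helper (my own sketch, not from the MUGFA manual).
# Assumes ~5 ml each of Solution A and Solution B per liter of water added.
def nutrient_dose(liters_added, ml_per_liter=5.0):
    """Return (ml of Solution A, ml of Solution B) for a given top-up volume."""
    dose_ml = liters_added * ml_per_liter
    return dose_ml, dose_ml   # equal amounts of A and B

# Example: an initial ~5-liter fill works out to about 25 ml of each solution,
# matching the ~25 ml per 5 liters used in Step 2.
print(nutrient_dose(5))   # (25.0, 25.0)
```

Again, precision is not critical here; the plants tolerate a fairly wide range.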

Optional: Supposedly it helps to keep the acidity (pH) of the nutrient solution in the range of 5.5-6.5. I think most users don’t bother checking this, since the nutrient solutions are buffered to try to keep pH in balance. Being a retired chemical engineer, I got this General Hydroponics kit for measuring and adjusting pH. On several occasions, the pH in the tank was about 6.5. That was probably perfectly fine, but I went ahead and added about 1/8 teaspoon of the pH lowering solution, to bring it down to about 6.0.   I also got a meter for measuring Electrical Conductivity/Total Dissolved Solids to monitor that parameter, but it was not necessary.

Feb 16: After a month, some greens are ready to snip the outer leaves. Lettuces (buttercrunch, red oak, romaine) on the right, herbs on the left.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Feb 17: Harvesting a small salad or sandwich filler every 2-3 days now.

March 6: Full sized, regular small harvests. All the lettuces worked great, buttercrunch is especially soft and sweet. Arugula (from the mustard plant family) gave a spicy edge. Italian parsley and thyme added flavor. The cilantro was slower growing, and only gave a few sprigs total.

Closeup March 16 (two months), just before closing out the grow cycle. Arugula in the foreground, lettuce top and right, thyme on the left, Italian parsley in the upper left corner.

Step 6. Close out the grow cycle. At some point, typically 2-4 months, it is time to bring a grow cycle to a close. I suppose with something like dwarf tomatoes you could keep going longer, though you might need to pull the deck up and trim the roots periodically. In my case, after two months, the arugula and cilantro were starting to bolt, though the lettuce, thyme, and parsley were still going strong. As of mid-March, my focus turned to outside planting, so I harvested all the remaining crops from the MUGFA, turned off the power, and gently pulled the deck off the tank. The whole space under the deck was a tangled mass of roots. I used kitchen shears to cut roots loose, enough to pull all the grow sponges and baskets out. The sponges got discarded, and the baskets saved for next time. I peeled off and saved the round green light-blocking stickers for re-use. I cleared all the rootlets from the filter sponge on the pump inlet. Then I washed out the tank per instructions. It took maybe 45 minutes to do all this clean-out and leave the unit ready for the next round of growing.

Stay tuned for a future blog post on growing watercress, which went really well this past fall. Looking to the future: in Jan 2026 I plan to do a replant of this 18-hole (blocked down to 14 holes) MUGFA device, sowing less lettuce (since we buy that anyway) but more arugula/Italian parsley/thyme for nutritious flavorings. For replacement nutrients and grow sponges, I got a Haligo hydroponics kit like this (about $12).

Growing these salad/sandwich ingredients in the kitchen under a built-in sunlamp provided good cheer and a bit of healthy food during the dark winter months. The clean hydroponic setup removed concerns about insect pests or under/overwatering. It was a hobby; at this toy scale it did not “save money”, though with what I learned I could probably rig a larger homemade hydroponics setup that might reduce grocery costs. This exercise led to fun conversations with visitors and children, and was a reminder that nearly everything we eat comes from water, nutrients, and light, directly or indirectly.

Is the Silver Bubble Bursting?

This is a five-year chart of the silver ETF SLV:

By most standards, this pattern looks like we entered a bubble a few months ago: speculative froth, unjustified by fundamentals. Economic history is replete with such madness of crowds. It is accepted wisdom on The Street that these parabolic price rises seldom end well. I lost a few pesos buying into the great gold bubble of 2011. All sorts of justifications were given at the time by the gold bugs on why gold prices ought to just keep on rising, or at least reach a “permanently high plateau” (in the famous words of Irving Fisher, just before the 1929 crash). Well, gold then proceeded to go down and down and down, losing some 60% of its value, until the price in 2015 matched the price in 2009, before the great bubble of 2010-2011.

Today, similar justifications are proffered as to why silver is going to the moon. There is a long-standing deficit in supply vs. demand; it takes ten years for a new silver mine to get productive; China has started restricting exports; Samsung announced a breakthrough lithium battery that can charge in six minutes, but requires a kilogram of silver; AI infrastructure is eating all the silver. These narratives seem to feed on each other. As the silver price moved higher in the past month, out came yet wilder stories that ricochet around the internet at high speeds: the commodities exchanges have run out of physical silver to back the paper trades; and the persistent claim that “they” (shadowy paper traders, central banks, commodity exchanges, the deep state, etc.) are “suppressing” silver and gold prices by means of shorting (which makes no sense). Given this popular shorting myth, it was with great glee that the blogosphere breathlessly spread the bogus story that some “systemically important bank” was in the process of being liquidated because it got squeezed on its silver short position.

The extreme price action at the very end of December (discussed below) was like rocket fuel for these rumors. Having bought a little SLV myself so as to not feel like a fool if the silver rally did have legs, I spent a number of hours as 2025 turned to 2026 trying to sort all this out. Here are some findings.

First, as to the medium-term supply/demand issues, I refer the reader to a recent article on Seeking Alpha by James Foord. He includes a chart showing that silver demand is increasing, but slowly:

He also notes that as silver price increases, there is motivation for more recycling and substitution, to compensate. He concludes that the current price surge is not driven by fundamentals, but by paper speculation.

The last ten days or so have been a wild ride, which merits some explanation. Here is the last 30 days of SLV price action:

Silver prices were rising rapidly throughout the month, but then really popped during Christmas week, reaching a crescendo on Friday, Dec 26 (blue arrow), amid rumors of physical shortages on the Shanghai exchange. To cool the speculative mania, the COMEX abruptly raised the margin requirement on silver contracts by some 30%, from $25,000 to $32,500, effective Monday, Dec 29. I think the exchange was trying to ensure that speculators could make good on their commitments, and raising the margin requirement would help do that. (Note, the exchange is liable if some market participant fails to deliver as promised and goes bankrupt.)

Anyway, this move forced long speculators to either post more collateral or liquidate their positions, on short notice. Blam, the price of silver dropped a near-record amount in one day (red arrow). For me, a little minnow caught in the middle of all this shark-tank action, the key part is what came after this forced decline. Was the bubble punctured for good? Should I hold or fold?

As shown above, the price has traded in a range for the past week, with violent daily moves. Zooming out to a one-year view, it looks like the upward momentum has been halted for the moment, but it is unclear to me whether the bubble will deflate or continue for a while:

I sold about a quarter of my (small) SLV holding, hoping to buy back cheaper sometime in the coming year. Time will tell if that was a good move.

Usual disclaimer: Nothing here is advice to buy or sell any security.

P.S. Tuesday, Jan 6, 2026, after market close: I wrote this last night (Monday, Jan 5) when silver was still rangebound. SLV was about $69, and spot silver about $76/oz. But silver ripped higher overnight, and kept going during the day, up nearly 7% at the close to a new all-time high. It looks like the bubble is alive and well, for now. Congrats to silver longs…

Review of MUGFA (Aerogarden type) Countertop Hydroponic Units

Last year about this time, as the outside world got darker and colder and the greenery in my outdoor planters shriveled to brown – – I resolved to fight back against seasonal affective disorder by growing some lettuce and herbs indoors under a sun lamp.

After doing some reading and thinking, I settled on getting a countertop hydroponics unit instead of rigging a lamp over pots filled with dirt indoors. With a compact hydroponics unit there is no dirt and no bugs, it has a well-designed built-in sun lamp on a timer, and it is more or less self-watering.

These systems have a water tank that you fill with water and some soluble nutrients. There is a pump in the tank that circulates the water. There is a deck over the tank with typically 8 to 12 holes that are around 1 inch in diameter. Into each hole you put a conical plug or sponge made of compressed peat moss, supported by a plastic basket. On the top of each sponge is a little hole, into which you place the seeds you want to grow.

A support basket with a dry (unwetted, unswollen) peat moss grow sponge/plug in it.

As long as you keep the unit plugged in, so the lights go on when they should, and you keep the nutrient solution topped up, you have a tidy automatic garden on a table or countertop or shelf.

The premier countertop hydroponics brand, which has defined this genre over the past twenty years, is AeroGarden. This brand is expensive. Historically its larger models were $200-$300, though with competition its larger models are now just under $200. AeroGarden tries to justify the high cost with sleek styling and customizable automation of the lighting cycles, linked to an app on your cell phone.

I decided to go with a cheaper brand, for two reasons. First, why spend $200 when I could get similar function for $50 (especially if I wasn’t sure I would like hydroponics)? Second, I don’t want the bother and possible malfunction associated with having to link an app on my cell phone to the growing device and program it. I wanted something simple and stupid that just turns on and goes.

So I went with a MUGFA brand 18-hole hydroponics unit last winter. It is simple and robust. The LED growing lights are distributed along the underside of a wide top lamp piece. The lamp has a lot of vertical travel (14 inches), so you can accommodate relatively tall plants. The lights have a simple cycle of 16 hours on, 8 hours off. You can reset the cycle by turning the power off and on again; I do this once, early on some morning, so from then on the lights are on during the day and evening, and off at night. The water pump pushes the nutrient solution through channels on the underside of the deck, so each grow sponge gets a little dribble of solution when the pump cycle is on. I snagged a second MUGFA unit, a 12-hole model, when it was on sale last spring. The MUGFA units come complete with grow sponges/plugs, support baskets for the sponges, nutrients (that you add to the water), clear plastic domes you put over the deck holes while the seeds are germinating, and little support sticks for taller plants. You have to buy seeds separately.

Images above from Amazon, for the 12-hole model

I have made a couple of small modifications to my MUGFA units. The pump is not really sized to reach 18 holes, and with plants of any size you’re not likely to be stuffing 18 plants onto that grow deck anyway. Also, the power of the lamp on the 18-hole unit (24 W) is the same as on the 12-hole unit; the LEDs are just spread over a wider lamp area. That 24 W is OK for greens that don’t need so much light, but may only be enough to grow a few (mini) tomato plants. For all these reasons, I don’t use the four corner holes on the 18-hole unit. Those corner holes get the least light and the least water flow. To increase the water flow to the other 14 holes, I plugged up the outlets of the channels on the underside of the deck leading to those four holes. I cut little pieces of rubber sheeting and stuffed them into the channel outlets for those holes.

The 12-hole unit has a slightly more pleasing, compact form factor, but it has a minor design defect [1]. The flow out of the outlet of each of the 12 channels under the deck is regular, but not very strong. Consequently, the water that comes out of each outlet drops almost straight down and splashes directly into the water tank, without contacting the grow sponge at that hole. The waterfall noise was annoying. The fix was easy, but a little tedious to implement. I cut little pieces of strong black duct tape and stuck them under the outlet of each hole, to make the water travel another quarter inch horizontally. Those little tabs got the water in contact with the grow sponge basket. The picture below shows the deck upside down, with the water channels under the deck going to each hole. There is a white sponge basket sticking through the nearest hole, and my custom piece of black duct tape is on the end of the water channel there, touching the basket. (To cover the sticky side of the duct tape tab that would otherwise be left exposed and touching the basket, I cut another, smaller piece of duct tape and applied it to that portion of the tab, sticky side to sticky side.) This sounds complicated, but it is straightforward once you actually do it. Also, many cheap knock-off hydroponics units don’t have these under-deck flow channels at all. With MUGFA you are getting nearly AeroGarden-type hardware for a third of the price, so it is worth a bit of duct tape to bring it up to optimal performance.

12-hole MUGFA deck, upside down with one basket, showing my bit of black duct tape to convey water from the channel over to the basket.

Some light escapes out sideways from under the horizontal lamps on these units. As an efficiency freak, I taped little aluminum foil reflectors hanging down from the back and sides of the lamp piece, but that is not necessary.

To keep this post short, I have just talked about the hardware here. I will describe actual plant growing in my next post. But here is one picture of my kitchen garden last winter, with the plants about 2/3 of their final sizes:

The bottom line is, I’ve been quite satisfied with both of these MUGFA units, and would recommend them to others. They provided good cheer in the dark of winter, as well as good conversations with visitors and good fresh lettuce and herbs. An alternate use of these types of hydroponics units is to start seedlings for an outside garden.

ENDNOTE

[1] For the hopelessly detail-obsessed technical nerds among us – – the specific design mistake in the 12-hole model is subtle, so I’ll explain a little more here. Here is a picture of the deck for the 18-hole model upside down, with three empty baskets inserted. The network of flow channels for the water circulation is visible on the underside. When the deck is in place on the tank, water is pumped into the short whitish tube at the left of this picture, flows into the channels, then out the ends of all the channels. (Note that on the corner holes here, upper and lower right, I stuck little pieces of rubber into the ends of the flow channels to block them off, since I don’t use the corner holes on this model; that blocking was not really necessary, it was just an engineering optimization by a technical nerd.)

 Anyway, the key point is this: the way the baskets are oriented in the 18-hole model here, a rib of the basket faces the outlet of each flow channel. The result is that as soon as the water exits the flow channel, it immediately contacts a rib of the basket and flows down the basket and wets the grow sponge/plug within the basket. All good.

The design mistake with the 12-hole model is that the baskets are oriented such that the flow channels terminate between the ribs. The water does not squirt far enough horizontally to contact the non-rib part of the basket or the sponge, so the water just drips down and splashes into the tank without wetting the sponge. This is not catastrophic, since the sponges are normally wetted just by sitting in the water in the tank, but it is not optimal. All because of a 15-degree error in the radial orientation of the little rib notches in the deck. Who knows, maybe MUGFA will send me a free, improved beta-test 12-hole model if I point this out to them.

A Visual Summary of the 2025 Economics Nobel Lectures

Fellow EWED blogger Jeremy Horpedahl generally gives good advice. Therefore, when he provided a link the other week and recommended that we watch Joel Mokyr’s 2025 Nobel lecture*, I did so.

There were three speakers in that linked YouTube video: this year’s economics laureates. They received the prize for their work on innovation-driven economic growth. The whole video is nearly two hours long, which is longer than most folks want to listen to, unless they are on a long car trip. Joel’s talk was the first, and it was truly engaging.

For time-pressed readers here, I have snipped many of the speakers’ slides, and pasted them below, with minimal commentary.

First, here are the great men themselves:

Talk # 1.  Joel Mokyr: Can Progress in Innovation Be Sustained?

And indeed, one can find pieces of evidence that point in this direction, such as the slower pace of pharmaceutical discoveries.

But Joel is optimistic:

Joel provides various examples of advances in theoretical knowledge and in practical technology (especially in making instruments) feeding each other. For example, nineteenth-century advances in high-resolution microscopy led to the study of micro-organisms, which led to the germ theory of disease, one of the all-time key discoveries that helped mankind:

So, on the technical and intellectual side, Joel feels that the drivers are still in place for continued strong progress. What may block progress are unhelpful human attitudes and fragmentation, including outright wars.

Or, as Friedrich Schiller wrote, “Against stupidity, the gods themselves contend in vain”.

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

Talk # 2: Philippe Aghion, The Economics of Creative Destruction

He commented that on the personal level, what seems to be a failure in your life can prove to be “a revival, your savior” (English is not his first language; but the point is a good one).

Much of his talk discussed some inherent contradictions in the innovation process, especially how once a new firm achieves dominance through innovation, it tends to block out newer entrants:

KEY SLIDE:

Outline of the rest of his talk:

[There were more charts on fine points of his competition/innovation model(s)]

Slide on companies’ failure rate, grouped by age of the firm:

His comment: if you are a young, small firm, it only takes one act of (competitors’) creative destruction to oust you, whereas for older, larger, more diverse firms, it might take two or three creative destructions to wipe you out.

He then uses some of these concepts to address “Historical enigmas”

First, secular stagnation:

[My comment: Total factor productivity (TFP) growth rate in economics measures the portion of output growth not explained by increases in traditional inputs like labor and capital. It is often considered the primary contributor to GDP growth, reflecting gains from technological progress, efficiency improvements, and other factors that enhance production]
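[For concreteness, the standard growth-accounting arithmetic behind that definition is sketched below. This is my own addition, not from the slide, and it assumes a Cobb-Douglas production function with capital income share α.]

```latex
% Growth-accounting decomposition (my own sketch, not from the lecture slides), assuming
% output Y = A * K^alpha * L^(1-alpha), with K = capital, L = labor, A = TFP, and
% alpha = capital's share of income. TFP growth is the residual left over from output growth:
\frac{\Delta A}{A} \;=\; \frac{\Delta Y}{Y} \;-\; \alpha\,\frac{\Delta K}{K} \;-\; (1-\alpha)\,\frac{\Delta L}{L}
```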

I think this chart was for the US. Productivity grew fast in the 1996-2005 timeframe, then slowed back down.

During the period of soaring growth, there was increased concentration in services. The boost in ~1993-2003 was a composition effect, as big techs like Microsoft and Amazon bought out small firms and grew the most. But this then discouraged new entrants.

The gap between leaders and laggards is increasing, likely due to the quasi-monopoly of big tech firms.

Another historical enigma – why do some countries stop growing? “Middle Income Trap”

He made the case that Korea and Japan grew fastest when they were catching up with Western technology, then slowed down.

China for the past 30 years has been growing by catching up and absorbing outside technology. But the policies for pioneering new technologies are different from those for catching up.

Europe: During WWII a lot of capital was destroyed, but Europe quickly started to catch up with the US (Europe had good education, and the Marshall Plan rebuilt capital)…but then stagnated, because it was not as strong in innovation.

Europeans are doing mid-tech incremental innovation, whereas the US is doing high-tech breakthrough innovation.

[my comment: I don’t know if innovation is the whole story; it is tough to compete with a large, unified nation sitting on so much premium farmland and so many oil fields]

Patents:

Red = US, blue = China, yellow = Japan, green = Europe. His point: Europe is lagging.

Europe needs a true unified market, and policies to foster innovation (and creative destruction, rather than preservation).

Finally: Rethinking Capitalism

The Gini index is a measure of inequality.

Deaths of unskilled middle-aged men in the U.S.…due in part to distress over losing good jobs [I’m not sure that is the whole story]. The key point of the two slides above is that the US has more innovation, but some bad social outcomes.

So, you’d like to have the best of both…flexibility (like the US) AND inclusivity (like Europe).

Example: with Danish welfare policies, there is little stress if you lose your job (slide above).

He found that innovation (in Europe? Finland?) correlated with parents’ income and education level:

…but that is considered suboptimal, since you want every young person, no matter their parents’ status, to have the chance to contribute to innovation. He pointed to education reforms in Finland that gave universal access to good education, and claimed positive effects on innovation.

Final subtopic: competition. Again, the mega tech firms discourage competition. It used to be that small firms were the main engine of job growth, but now not so much:

He makes the case that entrant competition enhances social mobility.

Conclusions:

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

Talk # 3. Peter Howitt

The third speaker, Peter Howitt, showed only a very few slides, all of which were pretty unengaging, such as:

So, I don’t have much to show from him. He has been a close collaborator of Philippe Aghion, and he seemed to be saying similar things. I can report that he is basically optimistic about the future.

* The economics prize is not a classic “Nobel prize” like the ones established by the Swedish dynamite inventor himself, but was established in 1968 by the Swedish national bank “In Memory of Alfred Nobel.”

Here is an AI summary of the 2025 economics prize:  

The 2025 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel was awarded to Joel Mokyr, Philippe Aghion, and Peter Howitt for their groundbreaking work on innovation-driven economic growth. Mokyr received half of the prize for identifying the prerequisites for sustained growth through technological progress, emphasizing the importance of “useful knowledge,” mechanical competence, and institutions conducive to innovation. The other half was jointly awarded to Aghion and Howitt for developing a mathematical model of sustained growth through “creative destruction,” a concept that explains how new technologies and products replace older ones, driving economic advancement. Their research highlights that economic growth is not guaranteed and requires supportive policies, open markets, and mechanisms to manage the disruptive effects of innovation, such as job displacement and firm failures. The award comes at a critical time, as concerns grow over threats to scientific research funding and the potential for de-globalization to hinder innovation.

The Fed Resumes Buying Treasuries: Is This the Start of, Ahem, QE?

In some quarters there is a sense that quantitative easing (QE), the massive purchase of Treasury and other bonds by the Fed, is something embarrassing or disreputable – – an admission of failure, or an enabling of profligate financial behaviors. For months, pundits have been smacking their lips in anticipation of QE-like Fed actions, so they could say, “I told you so”. In particular, folks have predicted that the Fed would try to disguise the QE-ness of its actions by giving them some other, more innocuous name.

Here is how liquidity analyst Michael Howell humorously put it on Dec 7:

All leave has been cancelled in the Fed’s Acronym Department. They are hurriedly working over-time, desperately trying to think up an anodyne name to dub (inevitable) future liquidity interventions in time for the upcoming FOMC meeting. They plainly cannot use the politically-charged ‘QE’. We favor the term ‘Not-QE, QE’, but odds are it will be dubbed something like ‘Bank Reverse Management Operations’ (BRMO) or ‘Treasury Market Liquidity Operations’ (TMLO). The Fed could take a leaf from China’s playbook, since her Central Bank the PBoC, now uses a long list of monetary acronyms, such as MTL, RRRs, RRPs and now ORRPs, probably to hide what policy makers are really doing.

And indeed, the Fed announced on Dec 10 that it would purchase $40 billion in T-bills in the very near term, with more purchases to follow.

But is this really (the unseemly) QE of years past? Cooler heads argue that no, it is not. Traditional QE has focused on longer-term securities (e.g. T-bonds or mortgage securities with maturities perhaps 5-10 years), in an effort to lower longer-term rates. Classically, QE was undertaken when the broader economy was in crisis, and short-term rates had already been lowered to near zero, so they could not be lowered much further.

But the current purchases are all very short-term (3 months or less). So, this is a swap of cash for almost-cash. Thus, I am on the side of those saying this is not quite QE. Almost, but not quite.

The reason given for undertaking these purchases is pretty straightforward, though it would take more time to explicate than I want to take right now. I hope to return to this topic of system liquidity in a future post. Briefly, the whole financial system runs on constant refinancing/rolling over of debt. A key mechanism for this is the “repo” market for collateralized lending, and a key parameter for the health of that market is the level of “reserves” in the banking system. Those reserves, for various reasons, have been getting so low that the system is in danger of seizing up, like a machine with insufficient lubrication. These recent Fed purchases directly ease that situation. This management of short-term liquidity does differ from classic purchases of long-term securities.

The reason I am not comfortable saying robustly, “No, this is not QE at all,” is that the government has taken to funding its ginormous ongoing peacetime deficit with mainly short-term debt. It is that ginormous short-term debt issuance which has contributed to the liquidity squeeze. And so, these ultra-short-term T-bill purchases are to some extent monetizing the deficit. Deficit monetization in theory differs from QE, at least in stated goals, but in practice the boundaries are blurry.

Google’s TPU Chips Threaten Nvidia’s Dominance in AI Computing

Here is a three-year chart of stock prices for Nvidia (NVDA), Alphabet/Google (GOOG), and the generic QQQ tech stock composite:

NVDA has been spectacular. If you had $20k in NVDA three years ago, it would have turned into nearly $200k. Sweet. Meanwhile, GOOG poked along at the general pace of QQQ.  Until…around Sept 1 (yellow line), GOOG started to pull away from QQQ, and has not looked back.

And in the past two months, GOOG stock has stomped all over NVDA, as shown in the six-month chart below. The two stocks were neck and neck in early October, but then GOOG surged way ahead. In the past month, GOOG is up sharply (red arrow), while NVDA is down significantly:

What is going on? It seems that the market is buying the narrative that Google’s Tensor Processing Unit (TPU) chips are a competitive threat to Nvidia’s GPUs. Last week, we published a tutorial on the technical details here. Briefly, Google’s TPUs are hardwired to perform key AI calculations, whereas Nvidia’s GPUs are more general-purpose. For a range of AI processing, the TPUs are faster and much more energy-efficient than the GPUs.

The greater flexibility of the Nvidia GPUs, and the programming community’s familiarity with Nvidia’s CUDA programming language, still give Nvidia a bit of an edge in the AI training phase. But much of that edge fades for inference (application) usage of AI. For the past few years, the big AI wannabes have focused madly on model training. But there must be a shift to inference (practical implementation) soon for AI models to actually make money.

All this is a big potential headache for Nvidia. Because of their quasi-monopoly on AI compute, they have been able to charge a huge 75% gross profit margin on their chips. Their customers are naturally not thrilled with this, and have been making some efforts to devise alternatives. But it seems like Google, thanks to a big head start in this area, and very deep pockets, has actually equaled or even beaten Nvidia at its own game.

This explains much of the recent disparity in stock movements. It should be noted, however, that for a quirky business reason, Google is unlikely in the near term to displace Nvidia as the main go-to for AI compute power. The reason is this: most AI compute power is implemented in huge data/cloud centers. And Google is one of the three main cloud vendors, along with Microsoft and Amazon, with IBM and Oracle trailing behind. So, for Google to supply Microsoft and Amazon with its chips and accompanying know-how would be to enable its competitors to compete more strongly.

Also, AI users like, say, OpenAI would be reluctant to commit to usage in a Google-owned facility using Google chips, since the user would then be somewhat locked in and held hostage: it would be expensive to switch to a different data center if Google tried to raise prices. In contrast, a user can readily move to a different data center for a better deal if all the centers are using Nvidia chips.

For the present, then, Google is using its TPU technology primarily in-house. The company has a huge suite of AI-adjacent business lines, so its TPU capability does give it genuine advantages there. Reportedly, soul-searching continues in the Google C-suite about how to more broadly monetize its TPUs. It seems likely that they will find a way. 

As usual, nothing here constitutes advice to buy or sell any security.

AI Computing Tutorial: Training vs. Inference Compute Needs, and GPU vs. TPU Processors

A tsunami of sentiment shift is washing over Wall Street, away from Nvidia and towards Google/Alphabet. In the past month, GOOG stock is up a sizzling 12%, while NVDA plunged 13%, despite producing its usual earnings beat. Today I will discuss some of the technical backdrop to this sentiment shift, which involves the differences between training AI models versus actually applying them to specific problems (“inference”), and the significantly different processing chips suited to each. Next week I will cover the company-specific implications.

As most readers here probably know, the Large Language Models (LLMs) that underpin the popular new AI products work by sucking in nearly all the text (and now other data) that humans have ever produced, reducing each word or form of a word to a numerical token, and grinding and grinding to discover consistent patterns among those tokens. Layers of (virtual) neural nets are used. The training process involves an insane amount of trying to predict, say, the next word in a sentence scraped from the web, evaluating why the model missed it, and feeding that information back to adjust the matrix of weights on the neural layers, until the model can predict that next word correctly. Then on to the next sentence found on the internet, to work and work until it can be predicted properly. At the end of the day, a well-trained AI chatbot can respond to Bob’s complaint about his boss with an appropriately sympathetic pseudo-human reply like, “It sounds like your boss is not treating you fairly, Bob. Tell me more about…” It bears repeating that LLMs do not actually “know” anything. All they can do is produce a statistically probable word salad in response to prompts. But they can now do that so well that they are very useful.*

This is an oversimplification, but gives the flavor of the endless forward and backward propagation and iteration that is required for model training. This training typically requires running vast banks of very high-end processors, typically housed in large, power-hungry data centers, for months at a time.
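To make that flavor concrete, here is a deliberately tiny toy version of the predict / evaluate-the-miss / adjust-the-weights loop. This is my own illustration (a single weight matrix over a five-word vocabulary), nothing like a production LLM or any particular AI library’s API:

```python
import numpy as np

# Toy next-token trainer: one weight matrix maps the current token to scores for
# the next token. Real LLMs do this with many neural-net layers over billions of
# sentences; the loop structure (predict, measure the miss, nudge the weights) is the same.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
tok = {w: i for i, w in enumerate(vocab)}
corpus = ["the cat sat", "the cat sat on the mat"]   # our two-sentence "internet"

V = len(vocab)
W = rng.normal(scale=0.1, size=(V, V))   # weights: current token -> next-token scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

learning_rate = 0.1
for epoch in range(200):                     # grind and grind...
    for sentence in corpus:
        ids = [tok[w] for w in sentence.split()]
        for cur, nxt in zip(ids[:-1], ids[1:]):
            probs = softmax(W[cur])          # forward pass: predict the next token
            grad = probs.copy()
            grad[nxt] -= 1.0                 # how wrong were we? (cross-entropy gradient)
            W[cur] -= learning_rate * grad   # backward pass: nudge the weights

print(vocab[int(np.argmax(softmax(W[tok["cat"]])))])   # prints "sat"
```

After a couple hundred passes over its two-sentence “internet,” the toy model reliably predicts that “sat” follows “cat”; scale that loop up by many orders of magnitude in data, parameters, and layers and you have the training phase described above.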

Once a model is trained (i.e., the neural net weights have been determined), then running it (i.e., generating responses to human prompts) takes considerably less compute power. This is the “inference” phase of generative AI. It still takes a lot of compute to run a big model quickly, but a simpler LLM like DeepSeek can be run, with only modest time lags, on a high-end PC.

GPUs Versus ASIC TPUs

Nvidia has made its fortune by taking graphics processing units (GPUs) that were developed for the massively parallel calculations needed to drive video displays, and adapting them to more general problem solving that could make use of rapid matrix calculations. Nvidia chips and its CUDA language have been employed for physical simulations such as seismology and molecular dynamics, and then for Bitcoin calculations. When generative AI came along, Nvidia chips and programming tools were the obvious choice for LLM computing needs. The world’s lust for AI compute is so insatiable, and Nvidia has had such a stranglehold, that the company has been able to charge an eye-watering gross profit margin of around 75% on its chips.

AI users of course are trying desperately to get compute capability without having to pay such high fees to Nvidia. It has been hard to mount a serious competitive challenge, though. Nvidia has a commanding lead in hardware and supporting software, and (unlike the Intel of years gone by) keeps forging ahead, not resting on its laurels.

So far, no one seems to be able to compete strongly with Nvidia in GPUs. However, there is a different chip architecture, which by some measures can beat GPUs at their own game.

NVIDIA GPUs are general-purpose parallel processors with high flexibility, capable of handling a wide range of tasks from gaming to AI training, supported by a mature software ecosystem like CUDA. GPUs beat out the original computer central processing units (CPUs) for these tasks by sacrificing flexibility for the power to do parallel processing of many simple, repetitive operations. The newer “application-specific integrated circuits” (ASICs) take this specialization a step further. They can be custom hard-wired to do specific calculations, such as those required for bitcoin mining and now for AI. By cutting out steps used by GPUs, especially fetching data in and out of memory, ASICs can do many AI computing tasks faster and cheaper than Nvidia GPUs, while using much less electric power. That is a big plus, since AI data centers are driving up electricity prices in many parts of the country. The particular type of ASIC that Google uses for AI is called a Tensor Processing Unit (TPU).

I found this explanation by UncoverAlpha to be enlightening:

A GPU is a “general-purpose” parallel processor, while a TPU is a “domain-specific” architecture.

The GPUs were designed for graphics. They excel at parallel processing (doing many things at once), which is great for AI. However, because they are designed to handle everything from video game textures to scientific simulations, they carry “architectural baggage.” They spend significant energy and chip area on complex tasks like caching, branch prediction, and managing independent threads.

A TPU, on the other hand, strips away all that baggage. It has no hardware for rasterization or texture mapping. Instead, it uses a unique architecture called a Systolic Array.

The “Systolic Array” is the key differentiator. In a standard CPU or GPU, the chip moves data back and forth between the memory and the computing units for every calculation. This constant shuffling creates a bottleneck (the Von Neumann bottleneck).

In a TPU’s systolic array, data flows through the chip like blood through a heart (hence “systolic”).

  1. It loads data (weights) once.
  2. It passes inputs through a massive grid of multipliers.
  3. The data is passed directly to the next unit in the array without writing back to memory.

What this means, in essence, is that a TPU, because of its systolic array, drastically reduces the number of memory reads and writes required from HBM. As a result, the TPU can spend its cycles computing rather than waiting for data.
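To make that data-movement point concrete, here is a toy Python sketch of my own (not TPU code, and not any real hardware interface) contrasting a naive matrix multiply, which conceptually re-fetches a weight for every multiply-accumulate, with a “weight-stationary” pass in which the weights are loaded once and the inputs stream through:

```python
import numpy as np

def naive_matmul(inputs, weights):
    """Conceptually re-reads a weight from memory for every single multiply-accumulate."""
    out = np.zeros((inputs.shape[0], weights.shape[1]))
    for i in range(inputs.shape[0]):
        for j in range(weights.shape[1]):
            acc = 0.0
            for k in range(weights.shape[0]):
                acc += inputs[i, k] * weights[k, j]   # weight fetched on every step
            out[i, j] = acc
    return out

def weight_stationary_matmul(inputs, weights):
    """Loads the weights once, then streams the inputs through them, accumulating
    partial sums without writing intermediates back to main memory."""
    loaded = weights.copy()              # weights parked "in the array" once
    out = np.empty((inputs.shape[0], loaded.shape[1]))
    for i, row in enumerate(inputs):     # inputs stream in, row by row
        out[i, :] = row @ loaded         # partial products accumulate in place
    return out

x = np.random.rand(4, 3)
w = np.random.rand(3, 2)
assert np.allclose(naive_matmul(x, w), weight_stationary_matmul(x, w))
```

Both functions return the same answer; the difference a real systolic array exploits in hardware is how many times the weights and partial sums have to make a round trip to memory.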

Google has developed the most advanced ASICs for doing AI, which are now, on some levels, a competitive threat to Nvidia. Some implications of this will be explored in a post next week.

*Next-generation AI seeks to step beyond the LLM world of statistical word salads and tries to model cause and effect at the level of objects and agents in the real world – – see Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence.

Standard disclaimer: Nothing here should be considered advice to buy or sell any security.

Structural Insulated Panels (SIP): The Latest, Greatest (?) Home Construction Method

Last week I drove an hour south to help an acquaintance with constructing his retirement home. I had answered a group email request looking for help in putting up a wall in this house.
I assumed this was conventional stick-built construction, so I envisioned framing a studded wall out of two-by-fours and two-by-sixes while it lay flat on the ground, and then needing four or five guys to swing the wall up to a vertical position, like an old-fashioned barn raising.

But that wasn’t it at all. This house was being built from Structural Insulated Panels (SIP). These panels have a styrofoam core, around 5 inches thick, with a facing on each side of thin oriented strand board (OSB). (OSB is a kind of cheapo plywood.)


The edges have a sort of tongue and groove configuration, so they mesh together. Each of the SIP panels was about 9 feet high and between 2 feet and 8 feet long. Two strong guys could manhandle a panel into position. Along the edge of the floor, 2×6’s had been mounted to guide the positioning of the bottom of each wall panel.


We put glue and sealing caulk on the edges to stick them together, drove 7-inch-long screws through the edges after they were in place, and also drove a series of nails through the OSB edges into the 2×6’s at the bottom. Pneumatic nail guns give such a satisfying “thunk” with each trigger pull, you feel quite empowered. Here are a couple of photos from that day:


The homeowner told me that he learned about SIP construction from an exhibit in Washington, DC that he attended with his grandson. The exhibit was on building techniques through the ages, starting with mud huts, and ending with SIP as the latest technique. That inspired him.

(As an old guy, I was not of much use lifting the panels. I did drive in some nails and screws. I was not initially aware of the glue/caulk along the edges, so I spent my first 20 minutes on the job wiping off the sticky goo I got all over my gloves and coat when I grabbed my first panel. My chief contribution that day was to keep a guy who was lifting a heavy beam overhead from toppling backwards off a stepladder.)

We amateurs were pretty slow, but I could see that a practiced crew could go slap slap slap and erect all the exterior walls of a medium-sized single-story house in a day or two, without needing advanced carpentry skills. Those walls would come complete with insulation. They would still need weatherproof exterior siding (e.g. vinyl or faux stone) on the outside, and sheetrock on the inside. Holes were pre-drilled in the styrofoam for running the electrical wiring up through the SIPs.

From my limited reading, it seems that the biggest single advantage of SIP construction is quick on-site assembly. It is ideal for situations where you only have a limited time window for construction, or in an isolated or affluent area where site labor is very expensive and hard to obtain (e.g., a ski resort town). Reportedly, SIP buildings are mechanically stronger than stick-built ones, which is handy in case of earthquakes or hurricanes. Also, an SIP wall has very high insulation value, and the construction method is practically airtight.

SIP construction is not cheaper than stick-built; it’s around 10% more expensive. You need perfect communication with the manufacturer of the SIP panels; if the delivered panels don’t fit properly on-site, you are hosed. Also, it is tough to modify an SIP house once it is built.

Because it is so airtight, it requires some finesse in designing the HVAC system. You need to be very careful protecting the walls from moisture, both inside and out, since the SIP panels can lose strength if they get wet. For that reason, some folks prefer not to use SIP for roofs, but only for walls and first-story flooring.
For more on SIP pros and cons, see here and here.

Michael Burry’s New Venture Is Substack “Cassandra Unchained”: Set Free to Prophesy All-Out Doom on AI Investing

This is a quick follow-up to last week’s post on “Big Short” Michael Burry closing down his Scion Asset Management hedge fund. Burry had teased on X that he would announce his next big thing on Nov 25. It seems he is now a day or two early: Sunday night he launched a paid-subscription “Cassandra Unchained” Substack. There he claims that:

Cassandra Unchained is now Dr. Michael Burry’s sole focus as he gives you a front row seat to his analytical efforts and projections for stocks, markets, and bubbles, often with an eye to history and its remarkably timeless patterns.

Reportedly the subscription cost is $39 a month, or $379 annually, and there are 26,000 subscribers already. Click the abacus and…that comes to a cool $9.9 million a year in subscription fees. Not bad compensation for sharing your musings online.

Michael Burry was dubbed “Cassandra” by Warren Buffett in recognition of his prescient warnings about the 2008 housing market collapse, a prophecy that was initially ignored, much like the mythological Cassandra who was fated to deliver true prophecies that were never believed. Burry embraced this nickname, adopting “Cassandra” as his online moniker on social media platforms, symbolizing his role as a lone voice warning of impending financial disaster. On the About page of his new Substack, he wrote that managing clients’ money in a hedge fund like Scion came with restrictions that “muzzled” him, such that he could only share “cryptic fragments” publicly, whereas now he is “unchained.”

Of his first two posts on the new Substack, one was a retrospective on his days as a practicing doctor (a resident in neurology at Stanford Hospital) in 1999-2000. He had done a lot of online posting on investing topics, focusing on valuations, and finally left medicine to start a hedge fund. As he tells it, he called the dot-com bubble before it popped.

Business Insider summarizes Burry’s second post, which attacks the central premise of those who claim the current AI boom is fundamentally different from the 1990s dot-com boom:

The second post aims straight at the heart of the AI boom, which he calls a “glorious folly” that will require investigation over several posts to break down.

Burry goes on to address a common argument about the difference between the dot-com bubble and AI boom — that the tech companies leading the charge 25 years ago were largely unprofitable, while the current crop are money-printing machines.

At the turn of this century, Burry writes, the Nasdaq was driven by “highly profitable large caps, among which were the so-called ‘Four Horsemen’ of the era — Microsoft, Intel, Dell, and Cisco.”

He writes that a key issue with the dot-com bubble was “catastrophically overbuilt supply and nowhere near enough demand,” before adding that it’s “just not so different this time, try as so many might do to make it so.”

Burry calls out the “five public horsemen of today’s AI boom — Microsoft, Google, Meta, Amazon and Oracle” along with “several adolescent startups” including Sam Altman’s OpenAI.

Those companies have pledged to invest well over $1 trillion into microchips, data centers, and other infrastructure over the next few years to power an AI revolution. They’ve forecasted enormous growth, exciting investors and igniting their stock prices.

Shares of Nvidia, a key supplier of AI microchips, have surged 12-fold since the start of 2023, making it the world’s most valuable public company with a $4.4 trillion market capitalization.

“And once again there is a Cisco at the center of it all, with the picks and shovels for all and the expansive vision to go with it,” Burry writes, after noting the internet-networking giant’s stock plunged by over 75% during the dot-com crash. “Its name is Nvidia.”

Tell us how you really feel, Michael. Cassandra, indeed.

My amateur opinion here: I think there is a modest but significant chance that the hyperscalers will not all be able to make enough fresh money to cover their ginormous 2024-2028 investments in AI capabilities. What happens then? Google, Meta, and Amazon may need to write down hundreds of billions of dollars on their balance sheets, which would show up as ginormous hits to GAAP earnings for a number of quarters. But then life would go on just fine for these cash machines, and the market may soon forgive and forget this massive misallocation of old cash, as long as operating cash keeps rolling in as usual. Stocks are, after all, priced on forward earnings. If the AI boom busts, all tech stock prices would sag, but I think the biggest operating impact would be on suppliers of chips (like Nvidia) and of data centers (like Oracle). So, Burry’s comparison of 2025 Nvidia to 1999 Cisco seems apt.

“Big Short” Michael Burry Closes Scion Hedge Fund: “Value” Approach Ceased to Add Value?

Michael Burry is famed for being among the first to both discern and heavily trade on the ridiculousness of subprime mortgages circa 2007. He is a quirky guy: brilliant, but probably with Asperger’s. That comes through in his portrayal in the 2015 movie based on the book, The Big Short.

He called it right with mortgages in 2007, but he was early on his call, and for many months he lost money on the bold trading positions he had put on in his hedge fund, Scion Capital. Investors in his fund rebelled, though he eventually prevailed. Reportedly he made $100 million for himself, and another $700 million for his investors, but in the wake of this turmoil, he shut down Scion Capital.

In 2013 he reopened his hedge fund under the name Scion Asset Management. He has generated headlines in the past several years, criticizing high valuations of big tech companies. Disclosure of his short positions on Nvidia and Palantir may have contributed to a short-term decline in those stocks. He has called out big tech companies in general for stretching out the schedule of depreciation of their AI data center investments, to make their earnings look bigger than they really are.

Burry is something of an investing legend, but people always like to take pot shots at such legends. Burry has been rather a permabear, and of course permabears are right only on occasion. For instance, I ran across the following OP at Reddit:

Michael burry is a clown who got lucky once

I am getting sick and tired of seeing a new headline or YouTube video about Michael burry betting against the market or shorting this or that.

First of all the guy is been betting against the market all his career and happened to get lucky once. Even a broken clock is right twice in a day. He is one of these goons who reads and understands academia economics and tries to apply them to real world which is they don’t work %99 of the time. In fact guys like him with heavy focus on academia economic approach don’t make it to far in this industry and if burry didn’t get so lucky with his CDS trade he would be most likely ended up teaching some bs economic class in some mid level university.

Teaching econ at some mid-level university, ouch.  (But a reader fired back at this OP: OP eating hot pockets in his moms basement criticizing a dude who has made hundreds of millions of dollars and started from scratch.)

Anyway, Burry raised eyebrows at the end of October, when he announced that he was shutting down his Scion Asset Management hedge fund. This Oct 27 announcement was accompanied by verbiage to the effect that he has not read the markets correctly in recent years:

With a heavy heart, I will liquidate the funds and return capital—minus a small audit and tax holdback—by year’s end. My estimation of value in securities is not now, and has not been for some time, in sync with the markets.

To me, all this suggested that Burry’s traditional Graham-Dodd value-oriented approach had gotten run over by the raging tech bull market of the past eight years. I am sensitive to this, because I, too, have a gut bias towards value, which has not served me well in recent years. (A year ago I finally saw the light and publicly recanted value investing and embraced the bull, here on EWED).

Out of curiosity, therefore, I did some very shallow digging to try to find out how his Scion fund has performed in the last several years. I did not find the actual returns that investors would have seen. There are several sites that analyze the public filings of various hedge funds and then calculate the returns on the disclosed stock holdings, weighted by the reported portfolio percentages. This is an imperfect process, since it misses the actual buying and selling prices for the fund during the quarter, and may totally miss the effects of shorting and options and convertible warrants, etc., etc. But it suggests that Scion’s performance has not been amazing recently. Funds are nearly always shut down because of underperformance, not overperformance.

Pawing through sites like HedgeFollow (here and here), Stockcircle, and Tipranks, my takeaway is that Burry probably beat the S&P 500 over the past three years, but roughly tied the NASDAQ (e.g., the QQQ fund). This performance would naturally have his fund investors asking why they should be paying huge fees to someone who can’t beat QQQ.

What’s next for Burry? In a couple of tweets on X, Burry has teased that he will reveal some plans on November 25. The speculation is that he will refocus on some personal asset management fund, where he will not be bothered by whiny outside investors. We shall see.