Pandemic Excess Savings Still Powering the Hot Economy

Well, the great “Recession Starting Next Quarter” that has been predicted for nearly two years is nowhere in sight. In fact, the Bureau of Labor Statistics just last week posted an absolute blowout jobs number:

The U.S. economy churned out a blockbuster 336,000 jobs in September, smashing economists’ expectations and heightening the risk that policymakers will have to push even harder to slow down the economy. The data released Friday by the Bureau of Labor Statistics offered yet another snapshot of the job market’s remarkable strength, with the unemployment rate holding at 3.8 percent and wage growth outpacing inflation in a boost to workers. But it was also the latest example of an economy that simply refuses to slow down, despite the Federal Reserve’s aggressive attempts to get prices and hiring closer to normal levels. The September report, which showed the largest number of gains since January, had been expected to indicate continued moderation in the labor market, with forecasts of around 170,000 jobs created. Instead, it came in at nearly twice that amount. (Lauren Kaori Gurley and Rachel Siegel, Washington Post)

Before we get too excited, let’s note that the BLS numbers have a strong component of BS: nearly every jobs number they put out is quickly, quietly revised downward by 20% or so. Also, much of the job creation this year has been in the part-time category (so employers don’t have to pay health benefits). That said, it is indisputable that despite ferocious interest rate hikes, the economy continues to hum along, much more robustly than nearly anyone predicted six or twelve months ago. Why?

I suggest that we follow the time-tested approach of investigative reporters, which is to follow the money. We have noted earlier that since 2020 a key factor in consumer spending, which constitutes about 70% of the economy, has been the ginormous windfall of free money, over $4 trillion, that was put into the economy via various pandemic-related programs (enhanced unemployment benefits, direct stimmie payments, etc.). The story of the recent strong jobs market is largely the story of spending down that windfall.

When we were locked down in late 2020-early 2021, we consoled ourselves with ordering tons of goods on Amazon. While this generated some jobs for longshoremen and UPS and Amazon drivers, it was mainly Chinese workers who benefited from this phase. But for the past year and a half, we have been out there in planes, trains, automobiles, and cruise ships, spending on services and restaurant food at a brisk pace. This has buoyed the domestic economy, which in turn is keeping inflation far above the Fed’s 2% target.

Part of the incoming-recession story has been that the COVID windfall money is about to run out. For instance, here is a June 2023 chart from Fed authors de Soyres et al. showing that in the U.S. (black curve below) this money has already been exhausted:

A different set of Fed authors (Abdelrahman and Oliveira of the San Francisco Fed) wrote, also in June, that there remained a smidge of excess savings, but that it “would likely be depleted in the third quarter of 2023.”

However, the Bureau of Economic Analysis (BEA) recently completed an update of national economic data that lowered the savings rate prior to the pandemic and increased it in 2020 and 2021. This basically reflected a change in the way the BEA accounts for income from mutual funds and REITs. The bottom line is that it has forced Wall Street economists to increase their excess savings projections to date by as much as $600 billion to $1 trillion, depending on the economics team. This in turn leads them to delay forecasts of recession by yet another 6-12 months.
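To make that accounting concrete, here is a minimal Python sketch (with made-up numbers, not actual BEA data) of how these “excess savings” tallies are typically built: accumulate the gap between actual personal saving and an assumed pre-pandemic trend. Nudging either the trend or the saving figures, which is in effect what the BEA revision did, shifts the whole running total by hundreds of billions.

# Hypothetical figures purely for illustration -- not actual BEA data
pre_pandemic_trend = 0.09                     # assumed trend personal saving rate
saving_rate = [0.17, 0.12, 0.04, 0.05]        # made-up average saving rates, 2020-2023
disposable_income = [17.5, 18.5, 19.0, 20.0]  # made-up disposable income, $ trillions per year

excess = sum((s - pre_pandemic_trend) * y
             for s, y in zip(saving_rate, disposable_income))
print(f"Cumulative excess savings: ${excess:.2f} trillion")  # about $0.2 trillion left here

# Raising the early-pandemic saving rates (or lowering the assumed trend) by a point or two
# adds hundreds of billions to the total, which is roughly what happened to the Wall Street
# estimates after the revision.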

For instance, James Knightley of ING Global Markets Research writes that there are still plenty of excess savings around; recent revisions in their numbers show the remaining hoard is even larger than they originally thought:

They did not break down this excess saving by income group, so it is possible that much of it remains with the upper 10-20%, who may hoard or invest it, versus the bottom quartiles, who have been spending it all into the economy and may now be tapped out. We shall see how this continues to play out.

The Internet Knows EVERYTHING: Stopping My Car Alarm from Randomly Triggering

I have an oldish Honda that still runs smoothly. It is true that the cruise control does not work, and the left front fender is held on by a large binder clip, and I had to patch over a big rust hole in a rear wheel well, but as I said, it runs.

I sometimes park it down at the end of the street, under some shade trees, to get it out of the hot summer sun. A couple of times, for no reason, the antitheft system kicked on, so the car was honking and honking for hours on end because we didn’t hear it down there. Some neighbors down there finally figured out who it was and came and told us. They were nice about it, but I heard some other folks down there were pretty irritated.

That happened again two weeks ago, so I decided to keep it in front of our house all the time, where we could keep an ear on it. Supposedly the alarm is triggered when the car thinks that a door or the trunk or the front hood has been opened without a legitimate unlocking by a key or a fob. Therefore, I opened and closed all four doors, and the trunk and the hood, locked the car, and hoped all would go well. But a few hours later there it was: honk, honk, honk….

As a temporary measure, I simply left it unlocked, so the system would not arm. But that’s not a long-term fix. So, I rolled up my sleeves and went to the internet to see what help I could find there. One common suggestion was to find the fuse that controls the alarm system and just pull it out of the fuse box. That would be great, but I checked multiple fuse diagrams for my model, and there does not seem to be a fuse that controls just the alarm system.

Other web sites mentioned that the sensor on the front hood latch is a common failure point. The sensor there can start giving spurious signals when it gets old. If you are sure that’s the problem, you can have a garage replace it for labor plus maybe 100 bucks for the replacement latch.

Alternatively, you can just pull apart the connector that connects the hood latch sensor to the alarm system. That connection is in plain sight near the latch. If the latch is the problem, disconnecting that sensor should make the alarm system think the latch is always firmly closed, so it will not trigger an armed system.

But what if the hood latch is not the problem? What if the problem is the common but elusive damage to wiring caused by rodents gnawing on insulation that contains soybean derivatives? After sifting through about 10 links thrown up by my DuckDuckGo search on the subject, I finally found a useful discussion on “civicsforum.com”.

A certain “andrickjm” wrote that he had disconnected that wire junction, and his car alarm was still randomly going off. Some savant going by the moniker “ezone” wrote that what you need to do then is insert a little wire jumper between the two sockets of the connector that go to the alarm system. That will make the alarm system think the hood is always raised, never closed, and this will keep the system from ever arming.

So I cut a 1-inch piece of wire, stripped the insulation from the two ends, bent it into a U-shape, jammed the two bare wire ends into the two holes in the connector socket, and sealed it all up with duct tape.


The alarm has not sounded since. Victory at last, thanks to the distributed intelligence of the internet, resting on the efforts of millions of good-hearted souls who share their problems and solutions in all areas of life.

Wastewater Testing: COVID Surge Maybe Delayed for Now

We reported last month on yet another COVID surge beginning, driven by yet another new, highly transmissible  variant. When I checked in on the state of affairs this week, I found two different narratives.

With the demise of widespread public testing, it has become more difficult to track the progress of the disease. One means to do so now is to monitor hospital admissions for COVID. The New York Times provides this service, and it shows a continued uptrend in admissions, at least through September 8:

Source: The New York Times

The chart above is for the whole country. It turns out that these cases are highly localized in certain hot spots, especially along the Atlantic seaboard (Delaware through  South Carolina), plus the region of St. Joseph, Missouri:

Source: The New York Times

Wastewater Analysis Suggests a Plateau

An alternate means of monitoring the progress of COVID is to do ongoing testing of municipal wastewater. The virus is “shed” (to put it delicately) in sewage, and can be detected there some days before a person reports any symptoms. Most recent wastewater analyses indicate that incidence of the disease is plateauing for now, according to an NBC News article by Erika Edwards:

Biobot Analytics, a company that tracks wastewater samples at 257 sites nationwide, said that the current average Covid levels across the United States are approximately 5% lower than they were last week.

“All fingers crossed,” Cristin Young, a Biobot epidemiologist said, “this wave is plateauing and may be declining.”

While data from the Centers for Disease Control and Prevention show a rise in Covid-related hospitalizations and deaths, wastewater may indicate what’s to come.

After a mid- to late-summer rise, the CDC’s Covid wastewater surveillance now shows declines in mid-Atlantic states, such as Virginia and Maryland.

The findings are backed up from surveillance in North Carolina, said Jessica Schlueter, an associate professor in the department of bioinformatics and genomics at the University of North Carolina Charlotte. Her lab is responsible for testing 12 sites across the state.

The increase in Covid wastewater samples during the last six months “seems to be peaking and starting to taper off,” she said. Wastewater collection sites in the Midwest and the Northeast, however, show a steady uptick in Covid spread.

Hospitalizations and deaths are lagging indicators, whereas wastewater analysis provides something of a leading indicator. Putting it all together, it may be that what we are seeing now is the usual late summer COVID increase, which may come down in the next two months, to be followed by another winter surge. Do get your latest booster shots.
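For the data-minded, here is a minimal Python sketch (using synthetic numbers, not real surveillance data) of how that lead time can be estimated: slide the hospitalization series against the wastewater series and keep the lag that maximizes their correlation.

import numpy as np

rng = np.random.default_rng(0)
days = np.arange(120)
wastewater = np.sin(days / 15) + 0.1 * rng.standard_normal(days.size)              # hypothetical viral levels
hospitalizations = np.roll(wastewater, 10) + 0.1 * rng.standard_normal(days.size)  # same wave, ~10 days later

# Find the shift (in days) that best lines the two series up.
best_lag = max(range(1, 30),
               key=lambda k: np.corrcoef(wastewater[:-k], hospitalizations[k:])[0, 1])
print(best_lag)  # ~10: admissions trail the wastewater signal by about ten days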

The Fermi Paradox: Where Are All Those Aliens?

Last week NASA’s independent study team released its highly anticipated report on UFOs.  A couple of takeaways: First, the term “UFO” has been replaced  in fed-speak by “UAP” (unidentified anomalous phenomena). Second, no hard evidence has emerged demonstrating an extra-terrestrial origin for UAPs, but, third, there is much that remains unexplained.

Believers in aliens are undeterred. Earlier this summer, former military intelligence officer David Grusch made sensational claims in a congressional hearing that the U.S. government is concealing the fact that it is in possession of a “non-human spacecraft.” The NASA director himself, Bill Nelson, holds that it is likely that intelligent life exists in other corners of the universe, given the staggering number of stars that likely have planets with water and moderate temperatures.

A famous conversation took place in 1950 amongst a group of top scientists at Los Alamos (think: Manhattan Project) over lunch. They had been chatting about the recent UFO reports and the possibility of faster-than-light travel. Suddenly Enrico Fermi blurted out something like, “But where is everybody?”

His point was that if (as many scientists believe) there is a reasonable chance that technically-advanced life-forms can evolve on other planets, then given the number of stars (roughly 100-400 billion) in our Milky Way galaxy and the time it has existed, it should have all been colonized many times over by now. Interstellar distances are large, but 13 billion years is a long time. Earth should have received multiple visits from aliens. Yet there is no evidence that this has occurred, not even one old alien probe circling the Sun. This apparent discrepancy is known as the Fermi paradox.
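A quick back-of-the-envelope calculation (with deliberately conservative, assumed numbers) shows why the timescales are so lopsided: even a very slow colonization wavefront would sweep across the galaxy in a small fraction of its age.

# Rough, assumed numbers only -- the point is the orders of magnitude
galaxy_diameter_ly = 100_000   # the Milky Way is roughly 100,000 light-years across
wavefront_speed = 0.001        # probes travel at 0.1% of light speed (very conservative)
settling_overhead = 10         # allow a 10x slowdown for pausing to build new probes en route

crossing_time = galaxy_diameter_ly / wavefront_speed * settling_overhead
galaxy_age = 13e9              # ~13 billion years

print(f"{crossing_time:.0e} years to cross, vs {galaxy_age:.0e} years available")
# 1e+09 years to cross, vs 1e+10 years available: time enough to do it many times over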

A variety of explanations have been advanced to explain it. To keep this post short, I will just list a few of these factors, pulled from a Wikipedia article:

Extraterrestrial life is rare or non-existent

Those who think that intelligent extraterrestrial life is (nearly) impossible argue that the conditions needed for the evolution of life—or at least the evolution of biological complexity—are rare or even unique to Earth.

It is possible that even if complex life is common, intelligence (and consequently civilizations) is not.

Periodic extinction by natural events [e.g., asteroid impacts or gamma ray bursts]

Intelligent alien species have not developed advanced technologies [e.g., if most planets that contain water are totally covered by water, many planets may harbor intelligent aquatic creatures like our dolphins and whales, but they would be unlikely to develop starship technology].

It is the nature of intelligent life to destroy itself [Sigh]

It is the nature of intelligent life to destroy other technically-advanced species [A prudent strategy to minimize threats; the result being a reduction in the number of starship civilizations].

Many other explanations have been proposed, including the “zoo hypothesis,” i.e., that alien life intentionally avoids communication with Earth to allow for natural evolution and sociocultural development, and to avoid interplanetary contamination, similar to people observing animals at a zoo.

As a chemical engineer and amateur reader of the literature on the origins of life, I’d put my money on the first factor. We have reasonable evidence for tracing the evolution of today’s complex life-forms back to the original cells, but I think the odds for spontaneous generation of those RNA/DNA-replicating cells are infinitesimally  low.  Hopeful biochemists wave their hands like windmills proposing pathways for life to arise from non-living chemicals, but I have not seen anything that seems to pass the sniff test. It is a long way from a chemical soup to a self-replicating complex system. I would be surprised to find bacteria, much less star-travelling aliens, on many other planets in the galaxy.

Maybe that’s just me. But Joy Buchanan’s recent poll of authors on this blog suggests that we are collectively a skeptical lot.

Generative AI Nano-Tutorial

Everyone who has not been living under a rock this year has heard the buzz around ChatGPT and generative AI. However, not everyone may have clear definitions in mind, or an understanding of how this stuff works.

Artificial intelligence (AI) has been around in one form or another for decades. Computers have long been used to analyze information and come up with actionable answers. Classically, computer output has been in the form of numbers or graphical representation of numbers. Or perhaps in the form of chess moves, beating all human opponents since about 2000.

Generative AI is able to “generate” a variety of novel content, such as images, video, music, speech, text, software code and product designs, with quality which is difficult to distinguish from human-produced content. This mimicry of human content creation is enabled by having the AI programs analyze reams and reams of existing content (“training data”), using enormous computing power.

I wanted to excerpt here a fine article I just saw which is informative on this subject. Among other things, it lists some examples of gen-AI products, and describes the “transformer” model that underpins many of these products. I skipped the section of the article that discusses the potential dangers of gen-AI (e.g., problems with false “hallucinations”), since that topic has been treated already in this blog.

Between this article and the Wikipedia article on Generative artificial intelligence, you should be able to hold your own, or at least ask intelligent questions, when the subject next comes up in your professional life (which it likely will, sooner or later).

One technical point for data nerds is the distinction between “generative” and “discriminative” approaches in modeling. This is not treated in the article below, but see here.
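For a concrete, if toy, illustration of that distinction, here is a short scikit-learn sketch (my own illustrative example, not taken from either article): a generative classifier models how each class produces data and can therefore generate new samples, while a discriminative classifier only models the boundary between classes.

import numpy as np
from sklearn.naive_bayes import GaussianNB            # generative: fits a Gaussian per class
from sklearn.linear_model import LogisticRegression   # discriminative: fits p(label | features) directly

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])  # two toy clusters
y = np.array([0] * 100 + [1] * 100)

generative = GaussianNB().fit(X, y)
discriminative = LogisticRegression().fit(X, y)

# Both can classify a new point...
print(generative.predict([[1.5, 1.5]]), discriminative.predict([[1.5, 1.5]]))

# ...but only the generative model carries per-class distributions we can sample from
# (recent scikit-learn versions expose the fitted means and variances as .theta_ and .var_):
fake_class1_point = rng.normal(generative.theta_[1], np.sqrt(generative.var_[1]))
print(fake_class1_point)  # a made-up point that "looks like" class 1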

All text below the line of asterisks is from Generative AI Defined: How it Works, Benefits and Dangers, by Owen Hughes, Aug 7, 2023.

*******************************************************

What is generative AI in simple terms?

Generative AI is a type of artificial intelligence technology that broadly describes machine learning systems capable of generating text, images, code or other types of content, often in response to a prompt entered by a user.

Generative AI models are increasingly being incorporated into online tools and chatbots that allow users to type questions or instructions into an input field, upon which the AI model will generate a human-like response.

How does generative AI work?

Generative AI models use a complex computing process known as deep learning to analyze common patterns and arrangements in large sets of data and then use this information to create new, convincing outputs. The models do this by incorporating machine learning techniques known as neural networks, which are loosely inspired by the way the human brain processes and interprets information and then learns from it over time.

To give an example, by feeding a generative AI model vast amounts of fiction writing, over time the model would be capable of identifying and reproducing the elements of a story, such as plot structure, characters, themes, narrative devices and so on.

……

Examples of generative AI

…There are a variety of generative AI tools out there, though text and image generation models are arguably the most well-known. Generative AI models typically rely on a user feeding it a prompt that guides it towards producing a desired output, be it text, an image, a video or a piece of music, though this isn’t always the case.

Examples of generative AI models include:

  • ChatGPT: An AI language model developed by OpenAI that can answer questions and generate human-like responses from text prompts.
  • DALL-E 2: Another AI model by OpenAI that can create images and artwork from text prompts.
  • Google Bard: Google’s generative AI chatbot and rival to ChatGPT. It’s trained on the PaLM large language model and can answer questions and generate text from prompts.
  • Midjourney: Developed by San Francisco-based research lab Midjourney Inc., this gen AI model interprets text prompts to produce images and artwork, similar to DALL-E 2.
  • GitHub Copilot: An AI-powered coding tool that suggests code completions within the Visual Studio, Neovim and JetBrains development environments.
  • Llama 2: Meta’s open-source large language model can be used to create conversational AI models for chatbots and virtual assistants, similar to GPT-4.
  • xAI: After helping to found and fund OpenAI, Elon Musk left that project in 2018; in July 2023 he announced this new generative AI venture. Little is currently known about it.

Types of generative AI models

There are various types of generative AI models, each designed for specific challenges and tasks. These can broadly be categorized into the following types.

Transformer-based models

Transformer-based models are trained on large sets of data to understand the relationships between sequential information, such as words and sentences. Underpinned by deep learning, these AI models tend to be adept at NLP [natural language processing] and understanding the structure and context of language, making them well suited for text-generation tasks. ChatGPT-3 and Google Bard are examples of transformer-based generative AI models.

Generative adversarial networks

GANs are made up of two neural networks known as a generator and a discriminator, which essentially work against each other to create authentic-looking data. As the name implies, the generator’s role is to generate convincing output such as an image based on a prompt, while the discriminator works to evaluate the authenticity of said image. Over time, each component gets better at their respective roles, resulting in more convincing outputs. Both DALL-E and Midjourney are examples of GAN-based generative AI models…

Multimodal models

Multimodal models can understand and process multiple types of data simultaneously, such as text, images and audio, allowing them to create more sophisticated outputs. An example might be an AI model capable of generating an image based on a text prompt, as well as a text description of an image prompt. DALL-E 2 and OpenAI’s GPT-4 are examples of multimodal models.

What is ChatGPT?

ChatGPT is an AI chatbot developed by OpenAI. It’s a large language model that uses transformer architecture — specifically, the “generative pretrained transformer”, hence GPT — to understand and generate human-like text.

What is Google Bard?

Google Bard is another example of an LLM based on transformer architecture. Similar to ChatGPT, Bard is a generative AI chatbot that generates responses to user prompts.

Google launched Bard in the U.S. in March 2023 in response to OpenAI’s ChatGPT and Microsoft’s Copilot AI tool. In July 2023, Google Bard was launched in Europe and Brazil.

…….

Benefits of generative AI

For businesses, efficiency is arguably the most compelling benefit of generative AI because it can enable enterprises to automate specific tasks and focus their time, energy and resources on more important strategic objectives. This can result in lower labor costs, greater operational efficiency and new insights into how well certain business processes are — or are not — performing.

For professionals and content creators, generative AI tools can help with idea creation, content planning and scheduling, search engine optimization, marketing, audience engagement, research and editing and potentially more. Again, the key proposed advantage is efficiency because generative AI tools can help users reduce the time they spend on certain tasks so they can invest their energy elsewhere. That said, manual oversight and scrutiny of generative AI models remains highly important.

Use cases of generative AI

Generative AI has found a foothold in a number of industry sectors and is rapidly expanding throughout commercial and consumer markets. McKinsey estimates that, by 2030, activities that currently account for around 30% of U.S. work hours could be automated, prompted by the acceleration of generative AI.

In customer support, AI-driven chatbots and virtual assistants help businesses reduce response times and quickly deal with common customer queries, reducing the burden on staff. In software development, generative AI tools help developers code more cleanly and efficiently by reviewing code, highlighting bugs and suggesting potential fixes before they become bigger issues. Meanwhile, writers can use generative AI tools to plan, draft and review essays, articles and other written work — though often with mixed results.

The use of generative AI varies from industry to industry and is more established in some than in others. Current and proposed use cases include the following:

  • Healthcare: Generative AI is being explored as a tool for accelerating drug discovery, while tools such as AWS HealthScribe allow clinicians to transcribe patient consultations and upload important information into their electronic health record.
  • Digital marketing: Advertisers, salespeople and commerce teams can use generative AI to craft personalized campaigns and adapt content to consumers’ preferences, especially when combined with customer relationship management data.
  • Education: Some educational tools are beginning to incorporate generative AI to develop customized learning materials that cater to students’ individual learning styles.
  • Finance: Generative AI is one of the many tools within complex financial systems to analyze market patterns and anticipate stock market trends, and it’s used alongside other forecasting methods to assist financial analysts.
  • Environment: In environmental science, researchers use generative AI models to predict weather patterns and simulate the effects of climate change

….

Generative AI vs. machine learning

As described earlier, generative AI is a subfield of artificial intelligence. Generative AI models use machine learning techniques to process and generate data. Broadly, AI refers to the concept of computers capable of performing tasks that would otherwise require human intelligence, such as decision making and NLP.

Machine learning is the foundational component of AI and refers to the application of computer algorithms to data for the purposes of teaching a computer to perform a specific task. Machine learning is the process that enables AI systems to make informed decisions or predictions based on the patterns they have learned.

(Again, to make sure credit goes where it is due: the text above, below the line of asterisks, was excerpted from Generative AI Defined: How it Works, Benefits and Dangers, by Owen Hughes.)
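To make the “transformer” idea in the excerpt a bit more concrete, here is a tiny NumPy sketch of scaled dot-product attention, the core operation such models stack many layers deep. This is an illustrative toy, not how any particular product implements it: each token’s output is a weighted blend of all the tokens, with the weights coming from query/key similarity.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how strongly each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ V                              # weighted mix of the value vectors

seq_len, d_model = 4, 8                             # a 4-token "sentence", 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))

out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): one context-aware vector per token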

Is Long Covid Really a Thing?

We seem to be somewhat exhausted by all the dire predictions around Covid, now that life has largely gotten back to normal. Shops and theaters are open, and people are once more crowding aboard those floating petri dishes called cruise ships. The most vulnerable segments of the population have mainly been vaccinated, and each new strain of the disease seems less harmful. All the anti-vaxxers I know have had Covid at least once and hence have some level of immunity, or else they caved and got vaccinated after seeing a close friend or relative die back in the winter of 2021-22. One enduring benefit is the much wider availability of working from home.

One of the direst prognostications was that the world would suffer a more or less permanent step down in standards of living due to “long Covid.”  According to this narrative, untold numbers of healthy young or middle-aged people would remain debilitated indefinitely due to the ongoing after-effects of a Covid infection: struck down in their prime, never to rise again.

A recent review of the field in Nature concluded, “The oncoming burden of long COVID faced by patients, health-care providers, governments and economies is so large as to be unfathomable”. Ouch. The federal government has provided $1.15 billion for research into the problem of long COVID and its mitigation.

Just the Facts

A couple of facts stand out: First, in many cases, scans of internal organs have shown changes in victims’ hearts and lungs and brains, following a severe Covid infection. Second, many people have reported symptoms such as weakness, fatigue and general malaise, impaired concentration and breathlessness, weeks after the primary symptoms of the disease have resolved.

How big a problem is this? I cannot, in the scope of a short blog post, adequately canvass all the data and literature. I will just cite a few numbers and charts, and let the professional data analysts dig into the fine points.

One meta-analysis found that a full “41.7% of COVID-19 survivors experienced at least one unresolved symptom and 14.1% were unable to return to work at 2-year after SARS-CoV-2 infection.” [That number seems much higher than my personal observations would suggest]. A CDC survey found that as of July 26-Aug 7, 2023, about 5.8% of all Americans (which is 10.4% of Americans who ever had Covid) report experiencing some effects of long Covid, with 1.5% of all American adults experiencing significant activity limitations as a result of long Covid. These numbers show a modest downward trend with time.

The chart below depicts the incidence of long Covid in England, again showing a modest downward trend in the latest year:

Weekly estimates of prevalence of COVID-19 and long COVID in England. Source.

Correlation versus Causation

So: many people experience severe symptoms from Covid, but most cases resolve within a few months at most. That leaves a small but nontrivial minority of Covid victims reporting problems long after that window. A significant question is whether Covid itself caused those long-term symptoms, or just precipitated some problem that was bound to show up anyway.

I have read poignant anecdotes of perfectly healthy young people who suffer from brain fog two years later. But I have lived long enough to be wary of generalizing from poignant anecdotes. After all, the whole anti-vaccination movement has been fueled by poignant anecdotes of, say,  perfectly normal two-year-olds going autistic shortly after getting their vaccine shots.

The 2023 metastudy referred to earlier found that long Covid sufferers tended to be older, and had pre-existing medical comorbidities.  Similarly, we have known since 2020 that the cohorts most likely to die from Covid were older folks (such as me!), many of whom were bound to die anyway.  

In this light, the data brought forth by James Bailey in his recent article on this blog, Long Covid is Real in the Claims Data… But so is “Early Covid”?, is most interesting. He noted that on average people use more health care for at least 6 months post-Covid compared to their pre-Covid baseline, which is consistent with some measure of long Covid. However, those same individuals also spent significantly more on healthcare 1-2 months before their Covid diagnosis. This seems consistent with the notion that some of what gets blamed on Covid would have occurred sooner or later anyway.

A Nuanced View of Long Covid

An article in Slate by Jeff Wise has dug deeper into the data. He noted that the survey-based datasets that have been largely used to estimate the effects of long Covid tend to be biased: those who feel ongoing symptoms are more likely to complete the surveys, giving rise to some of the largish numbers I have shared above. Newer, better-controlled retrospective cohort studies tend to show much lower ongoing incidence of symptoms, especially compared to control groups who had not had Covid. The feared tidal wave of mass disabilities never arrived:

“The best available figures, then, suggest two things: first, that a significant number of patients do experience significant and potentially burdensome symptoms for several months after a SARS-CoV-2 infection, most of which resolve in less than a year; and second, that a very small percentage experience symptoms that last longer. ”

Further, “Another insight that emerges from the cohort studies into long COVID is that it is not so easy to prove causality between a particular infection and a symptom. Almost all the symptoms associated with long COVID can also be triggered by all sorts of things, from other viruses to even the basic reality of living through a pandemic.”

Finally:

It looks more as if people who complain of long COVID are suffering from a collection of different effects. “I think there’s quite a heterogeneous group of people all sailing under the one flag,” said Alan Carson, a neuropsychiatrist at the University of Edinburgh in Scotland. Some patients may be experiencing the lingering aftereffects that occur in the wake of many diseases; some patients with chronic comorbidities might be experiencing the onset of new symptoms or the continuation of old ones; others might be affected by the sorts of mood disorders and psychiatric symptoms you’d expect to find in a population undergoing the stress of a global pandemic.

Another Slate article from last month gently debunks alarmism stemming from a Nature Medicine study of U.S. veterans who showed increased susceptibility to disease even two years after contracting Covid.

There is often great difficulty in discerning the actual organic, biochemical basis for the reported symptoms. This makes it hard to come up with a pill or a shot that might adjust the body’s metabolic pathways in order to cure them. Thus, simply treating the symptoms as such may offer the best near-term relief. To that end, a team of French researchers had the audacity to propose that much of the fatigue and brain fog associated with long Covid may be largely in our heads. In a Journal of Psychosomatic Research article, “Why the hypothesis of psychological mechanisms in long COVID is worth considering,” Lemogne et al. noted strong links between a patient’s prior expectations of symptom severity and the actual reported outcomes. The intent of the researchers is not to belittle the reported distress of long Covid sufferers, but to point towards established therapeutic methods to help treat disorders with at least a partial psychosomatic basis:

Many potential psychological mechanisms of long COVID are modifiable factors that could thus be targeted by already validated therapeutic interventions. Beside the treatment of a comorbid psychiatric condition, which may be associated with fatigue, cognitive impairment or aberrant activation of the autonomous nervous system, therapeutic interventions may build on those used in the treatment of ‘functional somatic disorders’, defined as the presence of debilitating and persistent symptoms that are not fully explained by damage of the organs they point. These disorders are common after an acute medical event, particularly in women, and include psychological risk factors, such as anxiety, depression, and dysfunctional beliefs that can lead to deleterious, yet modifiable health behaviors. Addressing these factors in the management of long COVID may provide an opportunity for patient empowerment.

In sum: A significant number of those who contract COVID suffer ongoing symptoms for a number of months afterward. Over a billion dollars of research has been directed at the problem. The severity of these symptoms tends to decline with time, in the vast majority of cases resolving by twelve months. This leaves some individuals still suffering fatigue and brain fog over a year later. Studies are ongoing to discern the organic basis of these complaints, and the exact role that COVID may have played, in the light of the fact that complaints of enduring fatigue and brain fog were not uncommon before the pandemic. We hope that following the science will bring more relief here.

Circling back to our original interest in the economic impact of long COVID, early studies indicated that a large fraction of the population might continue to be debilitated, to the point of being unable to work, with significant effects on the workforce and GDP. Actual data (e.g., on disability claims) indicate that these problems have not materialized.

Collapsible Boats You Can Store in Your Apartment: ORU Folding Kayaks and MyCanoe Canoes

My wife and I were sitting on a bench near a local lake, having a picnic dinner. On a little grassy spot nearby I noticed a young woman put down a large bag, and then slide out some large, odd-looking plastic pieces. Then she unfolded something, and, oh my goodness, she had brought a fold-up kayak in that bag:

A friend joined her and slid some tubular joiner pieces over the seams on top to zip the seams together:

The whole assembly took less than ten minutes. The resulting kayak was very light to carry:

And away she paddled:

I had drifted over to talk to her as she was assembling the kayak, and she said she just stored the boat in its bag in a closet in her apartment. She also said it was great fun to use.

This was one of a selection of foldable kayaks sold by ORU. They make smaller, lighter, cheaper models for paddling on still water, and heavier-duty kayaks for ocean waves and white-water rivers. These kayaks get generally very high reviews. They are a bit pricy, and may not stand up to prolonged scraping over rocks. But they are clearly full-blown, worth-paddling kayaks with rigid sides and clean lines.

This resonated with me, because maybe twenty years ago, I got a pair of inflatable kayaks that we could store in the basement and pull out and inflate at the lake. Paddling them was an awful experience. Although we inflated them to spec, they sagged in the middle, with the two ends sticking up in the air and catching the wind. It was like paddling a bathtub which was being constantly carried downwind.

I also found through that experience that kayaking was very uncomfortable for me. But I do like canoeing. So, after seeing how great the folding kayak was, I looked online and found a similar collapsible canoe, made by MyCanoe. The design is a little harder to execute, because a canoe has an open top, whereas with a kayak you can seal up the top and get the whole boat to act as a fairly rigid tube. But the MyCanoe seems to work OK, and has the same advantages of being lightweight (19 lb for the one-person Solo, 43 lb for the two-person Duo) and of folding into a small package for transport and storage. There is an oar-lock accessory so you can row it with two oars, as an alternative to paddling. The Solo is pretty short and wide, so it is very maneuverable, but I would be surprised if it tracks well in a straight line when you just want to paddle from point A to point B using one paddle.

You can find plenty of demos and reviews on YouTube for these folding kayaks and canoes. And there are other collapsible kayaks out there, per this review, but some of them are heavier and more involved to assemble.

Anyway, these folding craft are a pretty classy, free-enterprise technology solution for folks who like to get out on the water, but don’t have a garage or backyard to store a regular kayak or canoe, much less a trailer for a motorboat or a sailboat.

Help! My Celery is Too Stringy to Eat!

For maybe three bunches of celery, bought in 2023 from different stores, the fibers or strings were so tough that we could not chew them down into pieces small enough to comfortably swallow. We would chew away for several minutes, masticating and swallowing most of what we bit off the stalk, but this left a tangle of intact strings in our mouths, to be spit out. Prior to 2023, we don't recall ever having a batch of celery that was simply inedible like this. At least one batch disgusted us so much that we just threw it out.

I tried steaming a couple of stalks for a minute or so in the microwave. This turned most of the celery into unappealing mush, whilst doing the strings no apparent harm.

For the most recent bunch of unchewable celery, I finally got wise and harnessed the vast power of the internet to solve this problem. I did not have to invoke ChatGPT, so I was perhaps spared an AI hallucination regarding string theory. A simple DuckDuckGo search (this search engine respects your privacy, unlike You Know Who) found there are at least three reasonable ways to strip the offending strings out of a celery stalk. This article from Kitchen Ambitions does a great job describing these three ways:

( 1 ) Carefully snap the stalks in half the correct way (it is obvious when you think about it; or see the article), leaving the two halves connected by the strings. Then you can peel the strings down the lengths of the stalks. This is the easiest and cleanest way. I found I usually had to do a second round of snapping and peeling to get the rest of the strings.

Or

( 2 ) At one end of the stalk, use a sharp knife to tease up the ends of several strings at a time, and peel them down the length of the stalk.

Or

( 3 ) A brute force approach is to use a vegetable/carrot peeler. This does work, but removes more of the good celery along with the strings.

Hurray for economical life hacks; the internet knows everything.

New EG.5  Variant Spreading: Start of New Covid Surge?

The spread of highly-contagious and sometimes fatal Covid-19, and the responses to it (lockdowns and then trillions of dollars of federal giveaway money to mitigate the effects of the lockdowns and now huge interest rate hikes to counter the inflation caused by that giveaway money) have been arguably the most economically momentous events of this decade so far. Thus, it behooves us to keep an ongoing eye on this beast, since it seems to keep coming back in waves.

We all know that Covid is spread by little “aerosol” droplets coming out of infected people’s mouths and noses. Those aerosols are mainly generated by speaking and singing. So being in a room full of talking or singing people (e.g., a happy convention or bar, or a hymn-singing church) can be a super-spreader situation.

I have reasons to try to avoid respiratory diseases, and so I attended church on-line or outdoors for most of the past three years. The Covid numbers finally got low enough this spring that I started attending inside, and even going unmasked the past two months.

Alas, Covid cases and hospitalizations are back on the rise, it seems due to the new Eris or EG.5 subvariant. Like the infamous omicron variant before it, it is very transmissible and partially resistant to existing vaccines, but is not as deadly as the original strain. Much of the population has some immunity due to vaccines and/or prior exposure. Also, antivirals like Paxlovid are widely available to help mitigate symptoms. Still, a case of Covid often makes for an uncomfortable and disruptive week or two, and can still be fatal or debilitating.

So, I have done a quick amateur scan of the internet, trying to get a fix on what to expect. One thing that stands out is that actual case numbers are far higher than officially reported, for a couple of reasons. One is that the rigorous, systematic reporting of cases has fallen off since Covid was deemed no longer an emergency. Also, with the end of free test kits and the generally more lax public attitude (we just want to be done with this), there is far less testing done than in 2022. (It turns out that the best remaining way to track Covid in a community is by analyzing its wastewater.)

Will the Latest Vaccines Save Us?

The vaccine story seems somewhat mixed. The latest booster vaccine, to be available around October, will target the XBB.1.5 subvariant, which is what was mainly circulating earlier this year. However, it is expected that since EG.5 is closely related to XBB.1.5 (both are of the general omicron family), the booster will confer some immunity to EG.5. That is the good news.

The bad news is that the public’s uptake of boosters in general is well under 50%, so we may expect EG.5, or whatever the next subvariant is, to continue to circulate, and probably surge during the colder months when respiratory diseases tend to spread. Also, vaccines do not really stop you from getting Covid; they mainly mitigate the symptoms by helping your body’s defenses react faster.

Starting next week, I will resume wearing an effective KN-95 or my preferred KF-94 mask at church and other venues where a lot of people are talking or singing.

Perma-Bear Jeffrey Snider: The Job Market Is Rolling Over, the Slide Into Recession Is Inevitable  

A stopped clock is occasionally right. And so are perma-bears, those commentators or analysts who continually predict that GDP and stocks will plunge – perhaps in the next quarter, but more often say six months from now. (And that deadline keeps getting pushed back every six months).

When I was first getting started investing, I was overly influenced by these seemingly cautious and sober souls, and I consequently lost out considerably compared to my colleagues who blithely stayed fully invested. So I hold my native pessimism in check when investing, and stay mainly in the market, but with a little cash in reserve just in case The Big One hits.

All that said, I do try to sample various points of view. If I have been mainly seeing positive chatter, I turn to my favorite perma-bear, an analyst named Jeffrey Snider. His YouTube channel is called Eurodollar University, and he runs a subscription service as well.

Jeff seems like a genuinely nice guy, who believes that his dire readings of the macroeconomic tea leaves are helping folks avoid disaster. His demeanor is more like an earnest teacher, not a huckster trying to sell something. I should add that he offers meaningful insights on the Eurodollar scene, which is globally significant and which most analysts do not understand or even recognize.

But Jeff’s bias is nearly always toward the negative, and it is something of a good-natured joke among his viewers. Typical comments: “ The market can remain irrational longer than Jeff can stay pessimistic” and “Jeff is the best on Youtube. I watch his videos every night right before I go to bed. In less than 5 minutes, I’m in a semi-conscious coma. Its better than any sleeping pill. That smooth soothing voice extoling the virtues of a collapsing economy works wonders. A++”.

Well, what is the bear-meister saying now? He claims that the seemingly red-hot employment numbers that have been reported in recent months are less hot than they appear. I will paste in a few snips from his recent YouTube video, It Just Happened…The JOB MARKET JUST BROKE!!

One point he makes is that there has been a persistent, inaccurate bias to the upside in the payroll numbers reported by the BLS. These big numbers are what get reported; what does not get reported so much is that, month after month, these monster payroll increases are quietly revised downward, often by substantial amounts:

Even with the adjustments, these still seem like large increases in employment. Undaunted, Jeff pokes holes in the hot labor market scenario by claiming that full time employment is actually stagnant; it is the rise in part-time workers that creates the seemingly large army of the newly employed. The fact that total hours worked has plateaued seems to support his case here:

Another factor is worker hoarding. Employers were so burned trying to scramble for workers during the 2022 reopening-from-Covid that they are keeping their workers on payroll (even part-time), just in case the economy picks up and they need to pull them in full-time. A case in point is manufacturing. New orders are down considerably this year, and headed even lower, yet manufacturers have not cut their workforces appreciably:

If orders stay low for a long enough time, however, the manufacturers will have no choice but to start massive layoffs.

As another sign of labor market softness, temporary workers may be a leading indicator of employment trends. They are not such a core part of a company, so there is less hoarding of them. And temporary help services have been in a steady decline this year, which is consistent with a cooler economy:

Sell Everything??

As I said, it is worth considering all sides. I think the specific points mentioned above are all valid ones. I would add that if students actually start payments on all those loans which taxpayers and the Fed have subsidized for the past three years, that will finally put a crimp in the spending. Also, the surprise downgrade of U.S. federal debt by the Fitch rating agency, and the resulting jump in interest rates, has finally gotten people talking about out-of-control government spending, for one week anyway. Also, the great China reopening that was supposed to jump-start the global economy seems to be pretty flat.

However, a couple of counter-points to the bearish narrative:

First, even if manufacturing is rolling over, in the U.S. it is fairly small relative to services. At least in some geographical areas, my anecdotal reports say that it is still a challenge to find good service workers.

Second, the tidal wave of cash from pandemic giveaways that washed into our collective bank accounts is still not depleted. Consumer confidence is high, and we are spending freely. This economy is a big, big ship, and it is still steaming full ahead, brushing aside high interest rates and yield curve inversions. The recession seems to continually recede. There will inevitably be a downturn someday, of course, but absent some geopolitical event, I think it may take some time for it to arrive.

And finally, even if the long-awaited recession does arrive, it may not necessarily be so bad for stocks. Since the 2008-2009 Great Financial Crisis, the Fed has taken a very active role in supporting the markets. Wall Street has been conditioned to expect the Fed to flood the system with money if a serious downturn occurs. Also, the Street is betting that there will be enough howls of pain over the high interest being paid on the federal debt that unbearable pressure will be brought on the Fed to loosen up; the vaunted independence of that institution will be put to the test, with Congressional threats to alter its charter if it doesn’t cave. And so, “[economic] bad news is [investing] good news”, in contrast to the pre-2008 world. Furthermore, federal deficit spending ramps up during recessions, and as noted in The Kalecki Profit Equation: Why Government Deficit Spending (Typically) MUST Boost Corporate Earnings, this deficit spending tends to boost earnings.
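For reference, one common statement of the Kalecki-Levy profit identity (presumably the one that post works through) is, roughly:

Corporate profits = Investment + Dividends paid − Household saving − Government saving − Foreign saving

so that, holding the other terms fixed, each extra dollar of government deficit (i.e., more negative government saving) flows through as an extra dollar of corporate profits.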

And so even if Jeff Snider is correct that the economy is rolling over and will soon slide downward, this may not give investors a very useful signal. As another one of his YouTube viewers has commented, “This channel is a masterclass in learning that knowledge about the macro environment does not provide an edge in markets.”