Literature Review is a Difficult Intellectual Task

As I was reading through What is Real?, it occurred to me that I’d like to find a review of a particular issue. I thought, “Experimental physics is like experimental economics. You can sometimes predict what groups or ‘markets’ will do. However, it’s hard to predict exactly what an individual human will do.” I wanted to know who has written a short article on this topic.

I decided to feed the following prompt into several LLMs: “What economist has written about the following issue: Economics is like physics in the sense that predictions about large groups are easier to make than predictions about the smallest, atomic if you will, components of the whole.”

First, ChatGPT (free version; I think I was on “GPT-4o mini (July 18, 2024)”):

I get the sense from my experience that ChatGPT often references Keynes. Based on my research, I think that’s because there are a lot of mentions of Keynes’s books in the model training data. (See “ChatGPT Hallucinates Nonexistent Citations: Evidence from Economics.”)

Next, I asked ChatGPT, “What is the best article for me to read to learn more?” It gave me five items. Item 2 was “Foundations of Economic Analysis” by Paul Samuelson, which would likely be helpful, but it’s from 1947. I’d like something more recent that addresses the rise of empirical and experimental economics.

Item 5 was: “‘Physics Envy in Economics’ (various authors): You can search for articles or papers on this topic, which often discuss the parallels between economic modeling and physics.” Interestingly, ChatGPT is telling me to Google my question. That’s not bad advice, but I find it funny given the new competition between LLMs and “classic” search engines.

When I pressed it further for a current article, ChatGPT gave me a link to an NBER paper that was not very relevant. I could have tried harder to refine my prompts, but I was not immediately impressed. It seems like ChatGPT had a heavy bias toward starting with famous books and papers as opposed to finding something for me to read that would answer my specific question.

I gave Claude (paid) a try. Claude recommended, “If you’re interested in exploring this idea further, you might want to look into Hayek’s works, particularly ‘The Use of Knowledge in Society’ (1945) and ‘The Pretense of Knowledge’ (1974), his Nobel Prize lecture.” Again, I might have been able to get a better response if I kept refining my prompt, but Claude, too, seemed to respond initially by tossing out famous old books.

Continue reading

Human Capital is Technologically Contingent

The seminal paper in the theory of human capital is by Paul Romer. In it, he recognizes different types of human capital, such as physical skills, educational skills, work experience, etc. Subsequent macro papers in the literature often just clumped together some measure of human capital as if it were a single substance. There were a lot of cross-country real GDP per capita comparison papers that included determinants like ‘years of schooling’, ‘IQ’, and the like.

But more recent papers have been more detailed. For example, the average biological difference in brawn between men and women has been shown to be a determinant of occupational choice. If comparative advantage holds, then occupational sorting by human capital is the theoretical outcome. That’s exactly what we see in the data.
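To see why sorting, not just skill rankings, is the prediction, here is a minimal sketch in the spirit of a Roy sorting model. All names and productivity numbers are hypothetical, for illustration only:

```python
# Toy Roy-model sorting: each worker picks the occupation that pays them
# the most, given their own productivities. All numbers are hypothetical.

# Output per hour of each worker in each occupation.
workers = {
    "Avery": {"manual": 10.0, "clerical": 8.0},  # absolutely better at both jobs
    "Blake": {"manual": 4.0,  "clerical": 6.0},  # relatively better at clerical work
}

wage_per_unit = {"manual": 1.0, "clerical": 1.0}  # assume equal output prices

for name, prod in workers.items():
    earnings = {occ: prod[occ] * wage_per_unit[occ] for occ in prod}
    choice = max(earnings, key=earnings.get)
    print(f"{name} sorts into {choice} work (earnings: {earnings})")

# Avery sorts into manual work despite being better than Blake at clerical
# work too; Blake sorts into clerical, where their comparative advantage
# lies. Sorting follows comparative, not absolute, advantage.
```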

Similarly, my own forthcoming paper on the 19th-century US deaf population shows that people with reduced or absent hearing were, on average, less likely to work in management and commercial occupations, and less commonly found in industries that required strong verbal skills.

Clearly, there are different types of human capital and they matter differently for different jobs. Technology also changes what skills are necessary to boot. This post shares some thoughts about how to think about human capital and technology. The easiest way to illustrate the points is with a simplified example.

Continue reading

Rote Education has a Purpose

A tweet that got over 2 million views and 2,500 likes:

https://x.com/ianmcorbin1/status/1831353564246979017

“Why do our students (even the ones paying a jillion dollars!) *want* to skip their lessons?”

“You give us work fit for machines. You want rote answers.”

He asks why students want to cheat and what is wrong with education. Why did this tweet take off? The answer to his question is obvious.

I’m not of the opinion that education is entirely signaling (see Bryan Caplan). However, anyone can see that education is partly signaling. It’s difficult to get good grades. Good grades are a noisy signal of excellence. Students want to cheat so that they can obtain the good grades and signal to employers that they are excellent. There is nothing mysterious about that.

Part of a professor’s job is to make it hard to cheat and costly if you are caught.

Now we get to the “rote answers” part. How is a professor who has over 100 students every semester supposed to monitor the students’ performance, make it hard to cheat, and be fair to every student? The “rote answers” part is a technology called the multiple-choice test, with automatic or semi-automatic (e.g., Scantron machine) grading. Multiple choice tests serve an important role in our society, and they aren’t going anywhere.
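The scale economics of the format are easy to see in code. Here is a minimal sketch of Scantron-style auto-grading; the answer key and student responses below are made up:

```python
# Minimal sketch of Scantron-style auto-grading: compare each student's
# bubbled answers against a single answer key. All data are made up.

answer_key = ["B", "D", "A", "C", "B"]

responses = {
    "student_001": ["B", "D", "A", "C", "A"],
    "student_002": ["B", "C", "A", "C", "B"],
}

for student, answers in responses.items():
    score = sum(a == k for a, k in zip(answers, answer_key))
    print(f"{student}: {score}/{len(answer_key)}")

# Grading 100 students takes no more professor effort than grading 10:
# the per-student marginal cost is near zero, which is why the format
# scales where oral exams and hand-graded essays do not.
```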

A professor who has only 10 students per semester could give personalized assignments, grade oral exams, and serve as an Oxford-style tutor for the students’ hand-written essays or whatnot. However, that kind of education would be extremely expensive and exclusive, and it does not scale.

Readers are scarcer than writers. AIs can read now. The implications this will have for education and assessment have yet to be seen.

Charles Hugh Smith: Six Reasons the Global Economy Is Toast

If you are feeling OK about the world after a nice Labor Day weekend, I can fix that. How about six reasons why global economic growth will slow to a crawl, courtesy of perma-bear Charles Hugh Smith?

Smith is recognized as an earnest, good-willed alternative economic thinker. His OfTwoMinds blog and other publications bring out many valid facts and factors. He has been extrapolating from those factors to global financial collapse for well over fifteen years now, growing out of the circa-2007 peak oil movement and the scary 2008-2009 financial crisis. Obviously, he has continually underestimated the resilience of the national and global systems, especially the ability of our finance and banking folks to keep the debt plates spinning, and our ability to harness practical technology (e.g., fracking for oil production). Smith recommends preparing to become more self-reliant: we should learn more practical skills, and prepare to barter with local folks if the money system freezes up.

For now, I will let him speak for himself, and leave it to the readers here to ponder countervailing factors. From August 11, 2024, we have his article titled, These Six Drivers Are Gone, and That’s Why the Global Economy Is Toast:

The six one-offs that drove growth and pulled the global economy out of bubble-bust recessions for the past 30 years have all reversed or dissipated. Absent these one-off drivers, the global economy is stumbling off the cliff into a deep recession without any replacement drivers. Colloquially speaking, the global economy is toast.

Here are the six one-offs that won’t be coming back:

1) China’s industrialization.

2) Growth-positive demographics.

3) Low interest rates.

4) Low debt levels.

5) Low inflation.

6) Tech productivity boom.

( 1 ) Cutting to the chase, China bailed the world out of the last three recessions triggered by credit-asset bubbles popping: the Asian Contagion of 1997-98, the dot-com bubble and pop of 2000-02, and the Global Financial Crisis of 2008-09. In each case, China’s high growth and massive issuance of stimulus and credit (a.k.a. China’s Credit Impulse) acted as catalysts to restart global expansion.

The boost phase of picking low-hanging fruit via rapid industrialization boosting mercantilist exports and building tens of millions of housing units is over. Even in 2000 when I first visited China, there were signs of overproduction / demand saturation: TV production in China in 2000 had overwhelmed global and domestic demand: everyone in China already had a TV, so what to do with the millions of TVs still being churned out?

China’s model of economic development that worked so brilliantly in the boost phase, when all the low-hanging fruit could be so easily picked, no longer works at the top of the S-Curve. Having reached the saturation-decline phase of the S-Curve, these policies have led to an extreme concentration of household wealth in real estate. Those who favored investing in China’s stock market have suffered major losses.

( 2 ) Demographics

Where China’s workforce was growing during the boost phase, now the demographic picture has darkened: China’s workforce is shrinking, the population of elderly retirees is soaring, and so the cost burdens of supporting a burgeoning cohort of retirees will have to be funded by a shrinking workforce who will have less to spend / invest as a result.

This is a global phenomenon, and there are no quick and easy solutions. Skilled labor will become increasingly scarce and able to demand higher wages regardless of any other factors, and that will be a long-term source of inflation. Governments will have to borrow more–and probably raise taxes as well–to fund soaring pension and healthcare costs for retirees. This will bleed off other social spending and investment.

( 3 ) The era of zero-interest rates and unlimited government borrowing has ended. As Japan has shown, even at ludicrously low rates of 1%, interest payments on skyrocketing government debt eventually consume virtually all tax revenues. Higher rates will accelerate this dynamic, pushing government finances to the wall as interest on sovereign debt crowds out all other spending. As taxes rise, households are left with less disposable income to spend on consumption, leading to stagnation.

( 4 ) At the start of the cycle, global debt levels (government and private-sector) were low. Now they are high. The boost phase of debt expansion and debt-funded spending is over, and we’re in the stagnation-decline phase where adding debt generates diminishing returns.

( 5 ) The era of low inflation has also ended for multiple reasons. Exporting nations’ wages have risen sharply, pushing their costs higher, and as noted, skilled labor in developed economies can demand higher wages as this labor cannot be automated or offshored. Offshoring is reversing to onshoring, raising production costs and diverting investment from asset bubbles to the real world.

Higher costs of resource extraction, transport and refining will push inflation higher. So will rampant money-printing to “boost consumption.”

( 6 ) The tech productivity boom was also a one-off. Economists were puzzled in the early 1990s by the stagnation of productivity despite the tremendous investments made in personal and corporate computers, a boom launched in the mid-1980s with Apple’s Macintosh and desktop publishing, and Microsoft’s Mac-clone Windows operating system.

By the mid-1990s, productivity was finally rising and the emergence of the Internet as “the vital 4%” triggered the adoption of the 20% which then led to 80% getting online combined with distributed computing to generate a true revolution in sharing, connectivity and economic potential.

The buzz around AI holds that an equivalent boom is now starting that will generate a glorious “Roaring 20s” of trillions booked in new profits and skyrocketing productivity as white-collar work and jobs are automated into oblivion.

There are two problems with this story:

1) The projections are based more on wishful thinking than real-world dynamics.

2) If the projections come true and tens of millions of white-collar jobs disappear forever, there is no replacement sector to employ the tens of millions of unemployed workers.

In the previous cycles of industrialization and post-industrialization, agricultural workers shifted to factory work, and then factory workers shifted to services and office work. There is no equivalent place to shift tens of millions of unemployed office workers, as AI is a dragon that eats its own tail: AI can perform many programming tasks so it won’t need millions of human coders.

As for profits, as I explained in There’s Just One Problem: AI Isn’t Intelligent, and That’s a Systemic Risk, everyone will have the same AI tools and so whatever those tools generate will be overproduced and therefore of little value: there is no pricing power when the world is awash in AI-generated content, bots, etc., other than the pricing power offered by monopoly, addiction and fraud–all extreme negatives for humanity and the global economy.

Either way it goes–AI is a money-pit of grandiose expectations that will generate marginal returns, or it wipes out much of the middle class while generating little profit–AI will not be the miraculous source of millions of new high-paying jobs and astounding profits.

(End of Smith excerpt; emphases mainly his)

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

Have a nice day…

Many Impressive AI Demos Were Fakes

I recently ran across an article on the Seeking Alpha investing site with the provocative title “AI: Fakes, False Promises And Frauds”, published by LRT Capital Management. Obviously, they think the new generative AI is being oversold. They cite a number of examples where demos of artificial general intelligence were apparently staged or faked. I followed up on a few of these examples, and it does seem like this article is accurate. I will quote some excerpts here to give the flavor of their remarks.

In 2023, Google found itself facing significant pressure to develop an impressive innovation in the AI race. In response, they released Google Gemini, their answer to OpenAI’s ChatGPT. The unveiling of Gemini in December 2023 was met with a video showcasing its capabilities, particularly impressive in its ability to handle interactions across multiple modalities. This included listening to people talk, responding to queries, and analyzing and describing images, demonstrating what is known as multimodal AI. This breakthrough was widely celebrated. However, it has since been revealed that the video was, in fact, staged and that it does not represent the real capabilities of Google’s Gemini.

… OpenAI, the company behind the groundbreaking ChatGPT, has a history marked by dubious demos and overhyped promises. Its latest release, Chat GPT-4-o, boasted claims that it could score in the 90th percentile on the Unified Bar Exam. However, when researchers delved into this assertion, they discovered that ChatGPT did not perform as well as advertised.[10] In fact, OpenAI had manipulated the study, and when the results were independently replicated, ChatGPT scored on the 15th percentile of the Unified Bar Exam.

… Amazon has also joined the fray. Some of you might recall Amazon Go, its AI-powered shopping initiative that promised to let you grab items from a store and simply walk out, with cameras, machine learning algorithms, and AI capable of detecting what items you placed in your bag and then charging your Amazon account. Unfortunately, we recently learned that Amazon Go was also a fraud. The so-called AI turned out to be nothing more than thousands of workers in India working remotely, observing what users were doing because the computer AI models were failing.

… Facebook introduced an assistant, M, which was touted as AI-powered. It was later discovered that 70% of the requests were actually fulfilled by remote human workers. The cost of maintaining this program was so high that the company had to discontinue its assistant.

… If the question asked doesn’t conform to a previously known example ChatGPT will still produce and confidently explain its answer – even a wrong one.

For instance, the answer to “how many rocks should I eat” was:

…Proponents of AI and large language models contend that while some of these demos may be fake, the overall quality of AI systems is continually improving. Unfortunately, I must share some disheartening news: the performance of large language models seems to be reaching a plateau. This is in stark contrast to the significant advancements made by OpenAI’s ChatGPT, between its second iteration (GPT-2), and the newer GPT-3 – that was a meaningful improvement. Today, larger, more complex, and more expensive models are being developed, yet the improvements they offer are minimal. Moreover, we are facing a significant challenge: the amount of data available for training these models is diminishing. The most advanced models are already being trained on all available internet data, necessitating an insatiable demand for even more data. There has been a proposal to generate synthetic data with AI models and use this data for training more robust models indefinitely. However, a recent study in Nature has revealed that such models trained on synthetic data often produce inaccurate and nonsensical responses, a phenomenon known as “Model Collapse.”

OK, enough of that. These authors have an interesting point of view, and the truth probably lies somewhere between their extreme skepticism and the breathless hype we have been hearing for the last two years. I would guess that the most practical near-term uses of AI may involve more specific, behind-the-scenes data mining for business applications, rather than exactly imitating the way a human would think.

Services, and Goods, and Software (Oh My!)

When I was in high school, I remember talking about video game consumption. Yes, an Xbox was more than two hundred dollars, but one could enjoy the next hour of video game play at a cost of almost zero. Video games lowered the marginal cost and increased the marginal utility of what is measured as leisure. Similarly, the 20th century was the time of mass production. Labor-saving devices and a deluge of goods pervaded daily life. Remember servants? That’s a pre-20th-century technology. Domestic work in another person’s house was very common in the 1800s, less so as the 20th century progressed. Now we have devices that save on both labor and physical resources. Software helps us surpass the historical limits of moving physical objects in the real world.
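The marginal-cost point is simple arithmetic. A back-of-the-envelope sketch, with illustrative numbers of my own:

```python
# Back-of-envelope: the cost of another hour of console play.
# All numbers are illustrative, not data.

console_price = 250.0  # one-time fixed cost (console plus a game)
hours_played = 500.0   # total hours over the console's life

avg_cost_per_hour = console_price / hours_played
print(f"${avg_cost_per_hour:.2f} per hour on average")  # $0.50, falling with use

# The *marginal* hour costs roughly the electricity to run the console,
# which is close to zero. Fixed cost up front, nearly free leisure after.
```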


There’s something that I think about a lot, and I’ve been thinking about it for 20 years. It’s simple and not comprehensive, but I still think it makes sense:

  • Labor is highly regulated and costly.
  • Physical capital is less regulated than labor.
  • Software, and writing more generally, is less regulated than physical capital.


I think that just about anyone would agree with the above. Labor is regulated through health and safety standards, “human resource” concerns, legal compliance and preemption, environmental impact, transportation infrastructure, and so on. It’s expensive to employ someone, and it’s especially expensive to have them employ their physical labor.

Continue reading

Will the Huge Corporate Spending on AI Pay Off?

Last Tuesday I posted on the topic, “Tech Stocks Sag as Analysts Question How Much Money Firms Will Actually Make from AI”. Here I try to dig a little deeper into the question of whether there will be a reasonable return on the billions of dollars that tech firms are investing in this area.

Cloud providers like Microsoft, Amazon, and Google are buying expensive GPU chips (mainly from Nvidia) and installing them in power-hungry data centers. This hardware is being cranked to train large language models on a world’s worth of existing information. Will it pay off?

Obviously, we can dream up all sorts of applications for these large language models (LLMs), but the question is how much potential downstream customers are willing to pay for these capabilities. I am not equipped to make an expert appraisal, so I will just post some excerpts here.

Up until two months ago, it seemed there was little concern about the returns on this investment. The only worry seemed to be not investing enough. This attitude was exemplified by Sundar Pichai of Alphabet (Google). During the Q2 earnings call, he was asked what the return on Gen AI capex would be. Instead of answering the question directly, he said:

I think the one way I think about it is when we go through a curve like this, the risk of under-investing is dramatically greater than the risk of over-investing for us here, even in scenarios where if it turns out that we are over investing. [my emphasis]

Part of the dynamic here is FOMO among the tech titans, as they compete for the internet search business:

The entire Gen AI capex boom started when Microsoft invested in OpenAI in late 2022 to directly challenge Google Search.

Naturally, Alphabet was forced to develop its own Gen AI LLM product to defend its core business – Search. Meta joined in the Gen AI capex race, together with Amazon, in fear of not being left out – which led to a massive Gen AI capex boom.

Nvidia has reportedly estimated that for every dollar spent on their GPU chips, “the big cloud service providers could generate $5 in GPU instant hosting over a span of four years. And API providers could generate seven bucks over that same timeframe.” Sounds like a great cornucopia for the big tech companies who are pouring tens of billions of dollars into this. What could possibly go wrong?

In late June, Goldman Sachs published a report titled, GEN AI: TOO MUCH SPEND, TOO LITTLE BENEFIT? This report included contributions from bulls and from bears. The leading Goldman skeptic is Jim Covello. He argues,

To earn an adequate return on the ~$1tn estimated cost of developing and running AI technology, it must be able to solve complex problems, which, he says, it isn’t built to do. He points out that truly life-changing inventions like the internet enabled low-cost solutions to disrupt high-cost solutions even in its infancy, unlike costly AI tech today. And he’s skeptical that AI’s costs will ever decline enough to make automating a large share of tasks affordable given the high starting point as well as the complexity of building critical inputs—like GPU chips—which may prevent competition. He’s also doubtful that AI will boost the valuation of companies that use the tech, as any efficiency gains would likely be competed away, and the path to actually boosting revenues is unclear.

MIT’s Daron Acemoglu is likewise skeptical: He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn’t take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won’t occur nearly as quickly—or be nearly as impressive—as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are “not a law of nature.” So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.

Goldman economist Joseph Briggs is more optimistic: He estimates that gen AI will ultimately automate 25% of all work tasks and raise US productivity by 9% and GDP growth by 6.1% cumulatively over the next decade. While Briggs acknowledges that automating many AI-exposed tasks isn’t cost-effective today, he argues that the large potential for cost savings and likelihood that costs will decline over the long run—as is often, if not always, the case with new technologies—should eventually lead to more AI automation. And, unlike Acemoglu, Briggs incorporates both the potential for labor reallocation and new task creation into his productivity estimates, consistent with the strong and long historical record of technological innovation driving new opportunities.
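For scale, it helps to annualize those cumulative ten-year figures. A quick sketch, assuming smooth compounding over the decade:

```python
# Convert the cumulative 10-year estimates quoted above into rough
# annualized rates, assuming smooth compounding.

def annualize(cumulative_pct, years=10):
    return ((1 + cumulative_pct / 100) ** (1 / years) - 1) * 100

estimates = {
    "Acemoglu, productivity (+0.5% cumulative)": 0.5,
    "Acemoglu, GDP (+0.9% cumulative)": 0.9,
    "Briggs, productivity (+9% cumulative)": 9.0,
    "Briggs, GDP (+6.1% cumulative)": 6.1,
}

for label, cum in estimates.items():
    print(f"{label}: ~{annualize(cum):.2f}% per year")

# Even the optimistic Briggs figures work out to under 1% per year;
# Acemoglu's imply less than a tenth of a percent per year.
```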

The Goldman report also cautioned that the U.S. and European power grids may not be prepared for the major extra power needed to run the new data centers.

Perhaps the earliest major cautionary voice was that of Sequoia’s David Cahn. Sequoia is a major venture capital firm. In September 2023, Cahn offered a simple calculation estimating that for each dollar spent on (Nvidia) GPUs, another dollar (mainly for electricity) would need to be spent by the cloud vendor in running the data center. To make this economical, the cloud vendor would need to pull in a total of about $4.00 in revenue. If vendors are installing roughly $50 billion in GPUs this year, then they need to pull in some $200 billion in revenues. But the projected AI revenues from Microsoft, Amazon, Google, etc. were less than half that amount, leaving (as of September 2023) a $125 billion shortfall.

As he put it, “During historical technology cycles, overbuilding of infrastructure has often incinerated capital, while at the same time unleashing future innovation by bringing down the marginal cost of new product development. We expect this pattern will repeat itself in AI.” This can be good for some of the end users, but not so good for the big tech firms rushing to spend here.

In his June 2024 update, Cahn notes that Nvidia’s yearly sales now look to be more like $150 billion, which in turn requires the cloud vendors to pull in some $600 billion in added revenues to make this spending worthwhile. Thus, the $125 billion shortfall is now more like a $500 billion (half a trillion!) shortfall. He notes further that the rapid improvement in chip power means that the value of those expensive chips being installed in 2024 will be a lot lower in 2025.
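Cahn’s back-of-the-envelope arithmetic is easy to reproduce. A sketch of the calculation as described above; the 2x multiples are his, and the projected-revenue inputs are backed out from his stated shortfalls:

```python
# Reproduce Cahn's shortfall arithmetic as described above: GPU spend
# roughly doubles once data-center running costs (mainly power) are added,
# and vendors then need ~2x total cost in revenue to make it economical.

def revenue_shortfall(gpu_spend_bn, projected_revenue_bn):
    total_cost = gpu_spend_bn * 2       # $1 of GPUs + ~$1 to run them
    required_revenue = total_cost * 2   # ~$4 of revenue per $1 of GPUs
    return required_revenue - projected_revenue_bn

# September 2023: ~$50B of GPUs; projected AI revenue of ~$75B is implied
# by "less than half" of the $200B needed.
print(revenue_shortfall(50, 75))    # -> 125  (the $125B gap)

# June 2024 update: ~$150B of Nvidia sales implies ~$600B needed; a ~$100B
# revenue projection is implied by the stated ~$500B gap.
print(revenue_shortfall(150, 100))  # -> 500  (the half-trillion gap)
```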

And here is a random cynical comment on a Seeking Alpha article: “It was the perfect combination of years of Hollywood science fiction setting the table with regard to artificial intelligence and investors looking for something to replace the bitcoin and metaverse hype. So when ChatGPT put out answers that sounded human, people let their imaginations run wild. The fact that it consumes an incredible amount of processing power, that there is no actual artificial intelligence there, it cannot distinguish between truth and misinformation, and also no ROI other than the initial insane burst of chip sales – well, here we are and R2-D2 and C3PO are not reporting to work as promised.”

All this makes a case that the huge spends by Microsoft, Amazon, Google, and the like may not pay off as hoped. Their share prices have steadily levitated since January 2023 due to the AI hype, and indeed have been almost entirely responsible for the rise in the overall S&P 500 index, but their prices have all cratered in the past month. Whether or not these tech titans make money here, it seems likely that Nvidia (selling picks and shovels to the gold miners) will continue to mint money. Also, some of the final end users of Gen AI will surely find lucrative applications. I wish I knew how to pick the winners from the losers here.

For instance, the software service company ServiceNow is finding value in Gen AI. According to Morgan Stanley analyst Keith Weiss, “Gen AI momentum is real and continues to build. Management noted that net-new ACV for the Pro Plus edition (the SKU that incorporates ServiceNow’s Gen AI capabilities) doubled [quarter-over-quarter] with Pro Plus delivering 11 deals over $1M including two deals over $5M. Furthermore, Pro Plus realized a 30% price uplift and average deal sizes are up over 3x versus comparable deals during the Pro adoption cycle.”

Sources on AI Use of Information

1. “Consent in Crisis: The Rapid Decline of the AI Data Commons”

Abstract: General-purpose artificial intelligence (AI) systems are built on massive swathes of public web data, assembled into corpora such as C4, RefinedWeb, and Dolma. To our knowledge, we conduct the first, large-scale, longitudinal audit of the consent protocols for the web domains underlying AI training corpora. Our audit of 14,000 web domains provides an expansive view of crawlable web data and how consent preferences to use it are changing over time. We observe a proliferation of AI specific clauses to limit use, acute differences in restrictions on AI developers, as well as general inconsistencies between websites’ expressed intentions in their Terms of Service and their robots.txt. We diagnose these as symptoms of ineffective web protocols, not designed to cope with the widespread re-purposing of the internet for AI. Our longitudinal analyses show that in a single year (2023-2024) there has been a rapid crescendo of data restrictions from web sources, rendering ~5%+ of all tokens in C4, or 28%+ of the most actively maintained, critical sources in C4, fully restricted from use. For Terms of Service crawling restrictions, a full 45% of C4 is now restricted. If respected or enforced, these restrictions are rapidly biasing the diversity, freshness, and scaling laws for general-purpose AI systems. We hope to illustrate the emerging crisis in data consent, foreclosing much of the open web, not only for commercial AI, but non-commercial AI and academic purposes.

AI is taking, out of a commons, information that was provisioned under a different set of rules and technology. See the discussion on Y Combinator. (A minimal example of the robots.txt consent checks the paper audits is sketched after this list.)

2. “ChatGPT-maker braces for fight with New York Times and authors on ‘fair use’ of copyrighted works” (AP, January ’24)

3. Partly handy as a collection of references: “How Generative AI Turns Copyright Upside Down” by a law professor. “While courts are litigating many copyright issues involving generative AI, from who owns AI-generated works to the fair use of training to infringement by AI outputs, the most fundamental changes generative AI will bring to copyright law don’t fit in any of those categories…”

4. New gated NBER paper by Josh Gans “examines this issue from an economics perspective”
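As promised above, here is a minimal sketch, using only Python’s standard library, of the kind of robots.txt consent check the paper in item 1 audits at scale. The domain below is a placeholder; GPTBot and CCBot are real AI crawler user-agents:

```python
# Minimal sketch of a robots.txt consent check like the ones the paper
# audits across ~14,000 domains. The domain below is a placeholder.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the file

for agent in ["GPTBot", "CCBot", "*"]:  # OpenAI's and Common Crawl's crawlers
    allowed = rp.can_fetch(agent, "https://example.com/")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")

# The paper compares answers like these against each site's Terms of
# Service and tracks how they changed between 2023 and 2024.
```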

Joy: AI companies have money. Could we be headed toward a world where OpenAI has some paid writers on staff? Replenishing the commons is relatively cheap if done strategically, relative to the money being raised by AI companies. Jeff Bezos bought the Washington Post for a fraction of his tech fortune (about $250 million). Elon Musk bought Twitter. Sam Altman is rich enough to help keep the NYT churning out articles. Because there are several competing commercial models, however, the owners of LLM products face a commons problem: if Altman pays the NYT to keep operating, then Anthropic gets the benefit, too. Arguably, good writing is already under-provisioned, even aside from LLMs.

You, Parent, Should Have a Robot Vacuum

Do you have a robot vacuum? The first model was introduced in 2002 for $199. I don’t know how good that first model was, but I remember seeing plenty of ads for them by 2010 or so. My family was the cost-cutting kind of family that didn’t buy such things. I wondered how well they actually performed ‘in real life’. Given that they were on the shelves for $400 to $1,200, I had the impression that there was a lot of quality difference among them. I didn’t need one, given that I rented or had a small floor area to clean, and I sure didn’t want to spend money on one that didn’t actually clean the floors. I lacked domain-specific knowledge, so I didn’t bother with them.

Fast forward to 2024: I’ve got four kids, a larger floor area, and less time. My wife and I agreed early in our marriage that we would be a ‘no shoes in the house’ kind of family. That said, we have different views when it comes to floor cleanliness. Mine is: if the floors are dirty, then let’s wait until the source of crumbs is gone, and then clean them when they will remain clean. In practice, this means sweeping or vacuuming after the kids go to bed, and steam mopping (we have tile) after parties, not before. My wife, in contrast, feels the crumbs on her feet now and wants it to stop ASAP. Not to mention that it makes her stressed about non-floor clutter and chaos too.

Continue reading

Oster on Haidt and Screens

Emily Oster took on the Jonathan Haidt-related debate in her latest post, “Screens & Social Media.”

Do screens harm mental health? Oster joins some other skeptics I know. She doesn’t fully back Haidt, and she does the economist thing by mentioning “tradeoffs.”

Oster, ever practical, makes a point that sometimes gets lost. Maybe social media doesn’t cause suicide. Maybe the data indicate no causal relationship with diagnosed mental health conditions. That doesn’t mean that parents and teachers should not monitor and curtail screen time. Oster says that it’s obvious that kids should not have their phones in the classroom during school instruction.

Here’s a personal story from this week. My son wants Roblox. The game says 12+, and I’ve told him that I’m sticking to that. No, he can’t have it now, and he can’t start chatting with strangers online. We aren’t going to revisit the conversation until he’s 12. Is he mad at me? Yes. You know what he does when he’s really bored at home? He starts vacuuming. With these boundaries I set, I’ve driven him either to madness or to vacuuming. (Recall that he likes these books. Since hearing Harry Potter 1 as an audiobook in the car, he’s started tearing through the series himself in hardcover.)

An innocent tablet game I let him play (when he’s allowed to have screen time) is Duck Life. Rated E for everyone.

Previously, I wrote “Video Games: Emily Oster, Are the kids alright?”

And more recently, Tyler had “My contentious Conversation with Jonathan Haidt” Maybe Tyler should debate Emily Oster next about limiting phone use.