No Tech Workers or No Tech Jobs?

Several recent tweets (xeets) about tech talent reignited the conversation about native-born STEM workers and American policy. For the Very Online, Christmas 2024 was about the H-1B Elon tweets.

Elon Musk implies that “elite” engineering talent cannot be found among Americans. Do Americans need to import talent?

What would it take to home grow elite engineering talent? Some people interpreted this Vivek tweet to mean that American kids need to be shut away into cram schools.

The reason top tech companies often hire foreign-born & first-generation engineers over “native” Americans isn’t because of an innate American IQ deficit (a lazy & wrong explanation). A key part of it comes down to the c-word: culture. Tough questions demand tough answers & if we’re really serious about fixing the problem, we have to confront the TRUTH:

Our American culture has venerated mediocrity over excellence for way too long (at least since the 90s and likely longer). That doesn’t start in college, it starts YOUNG. A culture that celebrates the prom queen over the math olympiad champ, or the jock over the valedictorian, will not produce the best engineers.

– Vivek tweet on Dec. 26, 2024

My (Joy’s) opinion is that American culture could change on the margin to grow better talent (and specifically tech talent), resulting in a more competitive adult labor force. This need not come at the expense of all leisure. College students should spend 10 more hours a week studying, which would still leave time for socializing. Elementary school kids could spend 7 more hours a week reading and still have time for TV or sports.

I’ve said in several places that younger kids should read complex books before the age of 9, rather than placing a heavy focus on STEM skills. Narratives like The Hobbit are perfect for this. Short fables are great for younger kids.

The flip side of this, which creates the puzzle, is: Why does it feel difficult to get a job in tech? Why do we see headlines like “Laid-off techies face ‘sense of impending doom’ with job cuts at highest since dot-com crash” (2024)?

Which is it? Is there a glut of engineering talent in America? Are young men who trained for tech frustrated that employers bring in foreign talent to undercut wages? Is there no talent here? Are H-1Bs a national-security necessity to make up for a shortfall in quantity?

Previously, I wrote an experimental paper called “Willingness to be Paid: Who Trains for Tech Jobs?” to explore what might push college students toward computer programming. To the extent I found evidence that preferences matter, culture could indeed have some impact on the seemingly more impersonal forces of supply and demand.

For a more updated perspective, I asked two friends with domain-specific knowledge of American tech hiring for comments. I appreciate their rapid responses. My slowness, not theirs, explains this post coming out weeks after the discourse has moved on. Note that there are differences between the “engineers” whom Elon has in mind in the tweet below and the broader software engineering world.

Software Engineer John Vandivier responds:

Continue reading

Excel’s Weird (In)Convenience: COUNTIF, AVERAGEIF, & STDEVIF

Excel is an attractive tool for those who consider themselves ‘not a math person’. In particular, it visually organizes information and has many built-in functions that can make your life easier. You can use math if you want, but there are functions that can help even the non-math folks.

If you are a moderate Excel user, then you likely already know about the AVERAGE and COUNT functions. If you’re a little bit statistically inclined, then you might also know about the STDEV.S function (STDEV is deprecated). All of these functions are super easy and take only one argument. You just enter the cells (array) that you want to describe, and you’re done. Below is an example with the ‘code’ for convenience.

=COUNT(A2:A21)
=AVERAGE(A2:A21)
=STDEV.S(A2:A21)

If you do some slightly more sophisticated data analysis, then you may know about the “IF” function. It’s relatively simple: if a proposition is true (such as a condition on a cell value), then it returns one value; if the proposition is false, then it returns another value. You can even create nested “IF”s, in which one condition being satisfied leads to another proposition being tested. Back when Excel had more limited functions, we had to think creatively because there was a limit on how many “IF”s could be nested in a single cell. Prior to 2007, the maximum was seven; now it is 64. If you’re using that many “IF”s, then you probably have bigger problems than the “IF” limitations.
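For example, here is a minimal sketch of a nested “IF” that assigns a letter grade to a hypothetical score in cell A2 (the cell reference and thresholds are made up for illustration):

=IF(A2>=90,"A",IF(A2>=80,"B",IF(A2>=70,"C","F")))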

Another improvement, introduced in Excel 2019, was easier array arguments. In prior versions of Excel, array formulas had to be entered in a special way (Ctrl+Shift+Enter, which wrapped them in curly brackets: {}). But now, Excel is usually smart enough to handle the arrays without special instructions. Since then, Excel has introduced functions that combine the array features with the “IF” functions to save people keystrokes and brainpower.

Looking at the example data, we see an identifier column that marks each value as “A” or “B”. Say that you want to describe these subgroups. Historically, if you weren’t already a sophisticated user, then you’d need to sort the data and then calculate the functions separately for each subgroup’s array. That’s no big deal for a small data set with two possible ID values, but it’s a more time-consuming task with many possible ID values and multiple ID categories.

The early “IF” statements allowed users to analyze certain values of the data, such as those that were greater than, less than, or equal to a particular value. But what if you want to describe the data according to criteria in another column (such as ID)? That’s where Excel has some more sophisticated functions for convenience. However, as a general matter of user interface, it will become clear why these are somewhat… awkward.
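As a minimal sketch of what’s coming, assume the values sit in A2:A21 and the “A”/“B” identifiers sit next to them in B2:B21. COUNTIF and AVERAGEIF handle the count and the mean directly. Notice that there is no STDEVIF; one common workaround is to wrap an “IF” inside STDEV.S as an array formula (in pre-2019 versions, confirmed with Ctrl+Shift+Enter):

=COUNTIF(B2:B21,"A")
=AVERAGEIF(B2:B21,"A",A2:A21)
=STDEV.S(IF(B2:B21="A",A2:A21))

Even in this small sketch the interface is inconsistent: COUNTIF takes the criteria range first, AVERAGEIF needs a third argument pointing back at the values, and the standard deviation requires a different trick entirely.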

Continue reading

More Productive than “Smart”

Public choice economists emphasize the process by which we select political leaders. Electoral and voting rules influence the type of leaders we get. Institutional economists agree and go one step further: whom we choose matters less than the environment we place them in. Leaders, regardless of their personal qualities, respond to the incentives that surround them. The resulting policies, therefore, largely conform to those incentives. From this perspective, it’s important to adopt institutional incentives for leaders to promote policies that are oriented toward economic growth and that preserve the option to flourish.

The same principle applies to the private economy. Productivity is crucial, and higher IQ often correlates with greater productivity. Yet genetic endowment, including IQ, is beyond individual control. Many other determinants of productivity, however, are not fixed; policy can affect them. Let’s adopt policies that allow individuals with lower IQ to act as productively as if they had higher IQ. Protecting the freedom to contract and private property rights creates conditions whereby even those at the lower end of the cognitive-ability distribution can thrive. These principles expand their opportunities. Market signals give them valuable feedback on their activities and enable them to contribute to the economy.

Continue reading

Human Capital is Technologically Contingent

The seminal paper in the theory of human capital is by Paul Romer. In it, he recognizes different types of human capital, such as physical skills, educational skills, and work experience. Subsequent macro papers in the literature often just clumped together some measure of human capital as if it were a single substance. There were a lot of cross-country RGDP per capita comparison papers that included determinants like ‘years of schooling’, ‘IQ’, and the like.

But more recent papers have been more detailed. For example, the average biological difference between men and women in brawn has been shown to be a determinant of occupational choice. If we believe that comparative advantage is true, then occupational sorting by human capital is the theoretical outcome: even when one worker is absolutely better at every task, each worker sorts into the work where their relative productivity is highest. That’s exactly what we see in the data.

Similarly, my own forthcoming paper on the 19th-century US deaf population illustrates that people with diminished or absent hearing were, on average, less likely to work in management and commercial occupations, or in industries that required strong verbal skills.

Clearly, there are different types of human capital, and they matter differently for different jobs. Technology also changes which skills are necessary, to boot. This post shares some thoughts about how to think about human capital and technology. The easiest way to illustrate the points is with a simplified example.

Continue reading

Persistent Beliefs

The things that happen between people’s ears are difficult to study. Similarly, the actions that we take and the symbolic gestures that we communicate to the people around us are also difficult to study. We often and easily perceive the social signals of otherwise mundane activities, but they are nearly impossible to quantify systematically beyond first-person accounts. And that’s me being generous. Part of the reason these things are hard to study is that communication requires both a transmitter and a receiver. One person transmits a message and another person receives it. Sometimes they’re on slightly or very different wavelengths, the message gets garbled or sent inadvertently, and conflict ensues.

Having common beliefs and understandings about the world helps us to communicate more effectively. Those beliefs also tend to be relevant to the material world. A small example is sunscreen. Because a parent rightly believes that sunscreen will protect their child from short-run pain and long-run sickness, they might lather it on. But, due to that belief, they also signal their love, compassion, and stewardship for their child. A spouse or another adult failing to apply sunscreen to a child signals the lack thereof, and conflict can ensue even when the long-term impact of a one-time, brief sun exposure is almost zero.

People cry both sad and happy tears because of how they interpret the actions of others, often apart from any other external effects. Beliefs therefore attach costs and benefits even to behaviors whose material consequences are otherwise negligible. We can argue all day about beliefs. And while beliefs might change with temporary changes in technology, society, and the environment, core beliefs need to be durable over time. Therefore, if this economist were to recommend beliefs, then I would focus on the prerequisite of persistence before even trying to find a locally optimal set.

Here are three non-exhaustive criteria for durable beliefs:

Continue reading

Services, and Goods, and Software (Oh My!)

When I was in high school, I remember talking about video game consumption. Yes, an Xbox was more than two hundred dollars, but one could enjoy the next hour of video game play at a cost of almost zero. Video games lowered the marginal cost and increased the marginal utility of what is measured as leisure. Similarly, the 20th century was the time of mass production. Labor-saving devices and a deluge of goods pervaded. Remember servants? That’s a pre-20th-century technology. Domestic work in another person’s house was very common in the 1800s, less so as the 20th century progressed. Now we have devices that save on both labor and physical resources. Software helps us surpass the historical limits of moving physical objects in the real world.


There’s something that I think about a lot and I’ve been thinking about it for 20 years. It’s simple and not comprehensive, but I still think that it makes sense.

  • Labor is highly regulated and costly.
  • Physical capital is less regulated than labor.
  • Software, and writing more generally, is less regulated than physical capital.


I think that just about anyone would agree with the above. Labor is regulated through health and safety standards, “human resource” concerns, legal compliance and preemption, environmental impact, transportation infrastructure, and so on. It’s expensive to employ someone, and it’s especially expensive to have them employ their physical labor.

Continue reading

Will the Huge Corporate Spending on AI Pay Off?

Last Tuesday I posted on the topic, “Tech Stocks Sag as Analysts Question How Much Money Firms Will Actually Make from AI”. Here I try to dig a little deeper into the question of whether there will be a reasonable return on the billions of dollars that tech firms are investing in this area.

Cloud providers like Microsoft, Amazon, and Google are buying expensive GPU chips (mainly from Nvidia) and installing them in power-hungry data centers. This hardware is being cranked to train large language models on a world’s worth of existing information. Will it pay off?

Obviously, we can dream up all sorts of applications for these large language models (LLMs), but the question is how much potential downstream customers are willing to pay for these capabilities. I am not capable of an expert appraisal, so I will just post some excerpts here.

Up until two months ago, it seemed there was little concern about the returns on this investment. The only worry seemed to be not investing enough. This attitude was exemplified by Sundar Pichai of Alphabet (Google). During the Q2 earnings call, he was asked what the return on Gen AI capex would be. Instead of answering the question directly, he said:

I think the one way I think about it is when we go through a curve like this, the risk of under-investing is dramatically greater than the risk of over-investing for us here, even in scenarios where if it turns out that we are over investing. [my emphasis]

Part of the dynamic here is FOMO among the tech titans, as they compete for the internet search business:

The entire Gen AI capex boom started when Microsoft invested in OpenAI in late 2022 to directly challenge Google Search.

Naturally, Alphabet was forced to develop its own Gen AI LLM product to defend its core business – Search. Meta joined in the Gen AI capex race, together with Amazon, in fear of not being left out – which led to a massive Gen AI capex boom.

Nvidia has reportedly estimated that for every dollar spent on their GPU chips, “the big cloud service providers could generate $5 in GPU instance hosting over a span of four years. And API providers could generate seven bucks over that same timeframe.” Sounds like a great cornucopia for the big tech companies that are pouring tens of billions of dollars into this. What could possibly go wrong?

In late June, Goldman Sachs published a report titled, GEN AI: TOO MUCH SPEND, TOO LITTLE BENEFIT?. This report included contributions from bulls and from bears. The leading Goldman skeptic is Jim Covello. He argues:

To earn an adequate return on the ~$1tn estimated cost of developing and running AI technology, it must be able to solve complex problems, which, he says, it isn’t built to do. He points out that truly life-changing inventions like the internet enabled low-cost solutions to disrupt high-cost solutions even in its infancy, unlike costly AI tech today. And he’s skeptical that AI’s costs will ever decline enough to make automating a large share of tasks affordable given the high starting point as well as the complexity of building critical inputs—like GPU chips—which may prevent competition. He’s also doubtful that AI will boost the valuation of companies that use the tech, as any efficiency gains would likely be competed away, and the path to actually boosting revenues is unclear.

MIT’s Daron Acemoglu is likewise skeptical:  He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn’t take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won’t occur nearly as quickly—or be nearly as impressive—as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are “not a law of nature.” So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.

Goldman economist Joseph Briggs is more optimistic:  He estimates that gen AI will ultimately automate 25% of all work tasks and raise US productivity by 9% and GDP growth by 6.1% cumulatively over the next decade. While Briggs acknowledges that automating many AI-exposed tasks isn’t cost-effective today, he argues that the large potential for cost savings and likelihood that costs will decline over the long run—as is often, if not always, the case with new technologies—should eventually lead to more AI automation. And, unlike Acemoglu, Briggs incorporates both the potential for labor reallocation and new task creation into his productivity estimates, consistent with the strong and long historical record of technological innovation driving new opportunities.

The Goldman report also cautioned that the U.S. and European power grids may not be prepared for the major extra power needed to run the new data centers.

Perhaps the earliest major cautionary voice was that of Sequoia’s David Cahn. Sequoia is a major venture capital firm. In September 2023, Cahn offered a simple calculation estimating that for each dollar spent on (Nvidia) GPUs, another dollar (mainly for electricity) would need to be spent by the cloud vendor to run the data center. To make this economical, the cloud vendor would need to pull in a total of about $4.00 in revenue. If vendors were installing roughly $50 billion in GPUs that year, then they needed to pull in some $200 billion in revenues. But the projected AI revenues from Microsoft, Amazon, Google, etc. were less than half that amount, leaving (as of September 2023) a $125 billion shortfall.

As he put it, “During historical technology cycles, overbuilding of infrastructure has often incinerated capital, while at the same time unleashing future innovation by bringing down the marginal cost of new product development. We expect this pattern will repeat itself in AI.” This can be good for some of the end users, but not so good for the big tech firms rushing to spend here.

In his June 2024 update, Cahn notes that Nvidia’s yearly sales now look to be more like $150 billion, which in turn requires the cloud vendors to pull in some $600 billion in added revenues to make this spending worthwhile. Thus, the $125 billion shortfall is now more like a $500 billion (half a trillion!) shortfall. He notes further that the rapid improvement in chip power means that the value of those expensive chips being installed in 2024 will be a lot lower in 2025.
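To make the back-of-the-envelope arithmetic explicit (the revenue projections here are the rough figures implied by Cahn’s quoted shortfalls):

  • Each $1 of GPUs needs roughly another $1 (mainly electricity) to run, so about $2 of total cost, and about $4 of revenue to be economical.
  • September 2023: $50 billion of GPUs × 4 = $200 billion of revenue needed; projected AI revenues of roughly $75 billion leave the $125 billion gap.
  • June 2024: $150 billion of GPUs × 4 = $600 billion needed, and the gap swells to roughly $500 billion.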

And here is a random cynical comment on a Seeking Alpha article: “It was the perfect combination of years of Hollywood science fiction setting the table with regard to artificial intelligence and investors looking for something to replace the bitcoin and metaverse hype. So when ChatGPT put out answers that sounded human, people let their imaginations run wild. The fact that it consumes an incredible amount of processing power, that there is no actual artificial intelligence there, it cannot distinguish between truth and misinformation, and also no ROI other than the initial insane burst of chip sales – well, here we are and R2-D2 and C3PO are not reporting to work as promised.”

All this makes a case that the huge spending by Microsoft, Amazon, Google, and the like may not pay off as hoped. Their share prices steadily levitated from January 2023 on AI hype, and indeed were almost entirely responsible for the rise in the overall S&P 500 index, but they have all cratered in the past month. Whether or not these tech titans make money here, it seems likely that Nvidia (selling picks and shovels to the gold miners) will continue to mint money. Also, some of the final end users of Gen AI will surely find lucrative applications. I wish I knew how to pick the winners from the losers here.

For instance, the software service company ServiceNow is finding value in Gen AI. According to Morgan Stanley analyst Keith Weiss, “Gen AI momentum is real and continues to build. Management noted that net-new ACV for the Pro Plus edition (the SKU that incorporates ServiceNow’s Gen AI capabilities) doubled [quarter-over-quarter] with Pro Plus delivering 11 deals over $1M including two deals over $5M. Furthermore, Pro Plus realized a 30% price uplift and average deal sizes are up over 3x versus comparable deals during the Pro adoption cycle.”

You, Parent, Should have a Robot Vacuum

Do you have a robot vacuum? The first model was introduced in 2002 for $199. I don’t know how good that first model was, but I remember seeing plenty of ads for them by 2010 or so. My family was the cost-cutting kind of family that didn’t buy such things. I wondered how well they actually performed ‘in real life’. Given that they were on the shelves for $400–$1,200, I had the impression that there was a lot of quality difference among them. I didn’t need one, given that I rented or had a small floor area to clean, and I sure didn’t want to spend money on one that didn’t actually clean the floors. I lacked domain-specific knowledge. So I didn’t bother with them.

Fast forward to 2024: I’ve got four kids, a larger floor area, and less time. My wife and I agreed early in our marriage that we would be a ‘no shoes in the house’ kind of family. That said, we have different views when it comes to floor cleanliness. Mine is: if the floors are dirty, then let’s wait until the source of crumbs is gone, and clean them when they will stay clean. In practice, this means sweeping or vacuuming after the kids go to bed, and steam mopping (we have tile) after parties (not before). My wife, in contrast, feels the crumbs on her feet now and wants that to stop ASAP. Not to mention that it makes her stressed about non-floor clutter and chaos too.

Continue reading

From Cubicles to Code – Evolving Investment Priorities from 1990 to 2022

I’ve written before about how we can afford about 50% more consumption now than we could in 1990. But it’s not all bread and circuses. We can also afford more capital. In fact, adding to our capital stock helps us produce the abundant consumption that we enjoy today. To explore this idea, I’m using the BEA Saving and Investment accounts. The population data is from FRED.

The tricky thing about investment spending is that we need to differentiate between gross investment and net investment. Gross investment includes spending on the maintenance of current capital. Net investment is the change in the capital stock after depreciation – it’s investment in additional capital, not just replacement of capital that wore out. Below are two pie charts that illustrate how the composition of our *gross investment* spending has changed over the past 30 years. Residential investment costs us about the same proportion of our investment budget as it did historically. A smaller proportion of our investment budget is going toward commercial structures and equipment (I’ve omitted the change in inventories). The big mover is the proportion of our investment that goes toward intellectual property, which has almost doubled.
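In accounting terms, the relationship is simply: net investment = gross investment − depreciation, where depreciation is what the BEA calls the consumption of fixed capital.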

It’s easiest to think about the quantities of investment that we can afford in 2022 as a proportion of 1990. Below are the inflation-adjusted quantities of investment per capita. On a per-person basis, we invest more in all capital types in 2022 than we did in 1990. Intellectual property investment has risen more than 600% over the past 30 years; the highest-value investment has moved toward digital products, including software. We also invest 250% more in equipment per person than we did in 1990. The average worker has far more productive tools at their disposal – both physical and digital. Overall, real private investment is 3.5 times higher than it was 30 years ago.

Continue reading

Do I Trust Claude 3.5 Sonnet?

This week, for the first time, I paid for a subscription to an LLM. I know economists who have been on the paid tier of OpenAI’s ChatGPT since 2023, using it for both research and teaching tasks.

I did publish a paper on the mistakes it makes: “ChatGPT Hallucinates Nonexistent Citations: Evidence from Economics.” In a behavioral paper, I used it as a stand-in for AI: “Do People Trust Humans More Than ChatGPT?”

I have nothing against ChatGPT. For various reasons, I never paid for it, even though I used it occasionally for routine work or for writing drafts. Perhaps if I had already been paying for something else, I would have resisted paying for Claude.

Yesterday, I made an account with Claude to try it out for free. Claude and I started working together on a paper I’m revising. Claude was doing excellent work and then I ran out of free credits. I want to finish the revision this week, so I decided to start paying $20/month.

Here’s a little snapshot of our conversation. Claude is writing R code which I run in RStudio to update graphs in my paper.

This coding work is something I used to do myself (with internet searches for help). Have I been 10x-ed? Maybe I’ve been 2x-ed.

I’ll refer to Zuckerberg via Dwarkesh (which I’ve blogged about before):

Continue reading