The things that happen between people’s ears are difficult to study. Similarly, the actions that we take and the symbolic gestures that we communicate to the people around us are also difficult to study. We often and easily perceive the social signals of otherwise mundane activities, but they are nearly impossible to quantify systematically beyond first-person accounts. And that’s me being generous. Part of the reason that these things are hard to study is that communication requires both a transmitter and a receiver. One person transmits a message and another person receives it. Sometimes they’re on slightly or very different wavelengths, and the message gets garbled or sent inadvertently, and then conflict ensues.
Having common beliefs and understandings about the world helps us to communicate more effectively. Those beliefs also tend to be relevant to the material world too. A small example is sunscreen. Because a parent rightly believes that sunscreen will protect their child from short-run pain and long-run sickness, they might lather it on. But, due to their belief, they also signal their love, compassion, and stewardship for their child. A spouse or another adult failing to apply sunscreen to a child signals the lack thereof, and conflict can ensue even when the long-term impact of one-time and brief sun exposure is almost zero.
People cry both sad and happy tears because of how they interpret the actions of others – often apart from the other external effects. Therefore, beliefs attach costs and benefits even to behaviors whose consequences otherwise seem immaterial. We can argue all day about beliefs. And while beliefs might change with temporary changes in technology, society, and the environment, core beliefs need to be durable over time. Therefore, if this economist were to recommend beliefs, then I would focus on the prerequisite of persistence before even trying to find a locally optimal set.
Here are three inexhaustive criteria for durable beliefs:
When I was in high school I remember talking about video game consumption. Yes, an Xbox was more than two hundred dollars, but one could enjoy the next hour of video game play at a cost of almost zero. Video games lowered the marginal cost and increased the marginal utility of what is measured as leisure. Similarly, the 20th century was the time of mass production. Labor-saving devices and a deluge of goods pervaded. Remember servants? That’s a pre-20th century technology. Domestic work in another person’s house was very common in the 1800s. Less so as the 20th century progressed. Now we have devices that save on both labor and physical resources. Software helps us surpass the historical limits of moving physical objects in the real world.
There’s something that I think about a lot and I’ve been thinking about it for 20 years. It’s simple and not comprehensive, but I still think that it makes sense.
Labor is highly regulated and costly.
Physical capital is less regulated than labor.
Software, and writing more generally, is less regulated than physical capital.
I think that just about anyone would agree with the above. Labor is regulated by health and safety standards, “human resource” concerns, legal compliance and preemption, environmental impact, and transportation infrastructure, etc. It’s expensive to employ someone, and it’s especially expensive to have them employ their physical labor.
Cloud providers like Microsoft, Amazon, and Google are buying expensive GPU chips (mainly from Nvidia) and installing them in power-hungry data centers. This hardware is being cranked to train large language models on a world’s-worth of existing information. Will it pay off?
Obviously, we can dream up all sorts of applications for these large language models (LLMs), but the question is how much potential downstream customers are willing to pay for these capabilities. I’m not equipped to give an expert appraisal, so I will just post some excerpts here.
Up until two months ago, it seemed there was little concern about the returns on this investment. The only worry seemed to be not investing enough. This attitude was exemplified by Sundar Pichai of Alphabet (Google). During the Q2 earnings call, he was asked what the return on Gen AI investment capex would be. Instead of answering the question directly, he said:
I think the one way I think about it is when we go through a curve like this, the risk of under-investing is dramatically greater than the risk of over-investing for us here, even in scenarios where if it turns out that we are over investing. [my emphasis]
Part of the dynamic here is FOMO among the tech titans, as they compete for the internet search business:
The entire Gen AI capex boom started when Microsoft invested in OpenAI in late 2022 to directly challenge Google Search.
Naturally, Alphabet was forced to develop its own Gen AI LLM product to defend its core business – Search. Meta joined in the Gen AI capex race, together with Amazon, in fear of being left out – which led to a massive Gen AI capex boom.
Nvidia has reportedly estimated that for every dollar spent on their GPU chips, “the big cloud service providers could generate $5 in GPU instance hosting over a span of four years. And API providers could generate seven bucks over that same timeframe.” Sounds like a great cornucopia for the big tech companies who are pouring tens of billions of dollars into this. What could possibly go wrong?
In late June, Goldman Sachs published a report titled GEN AI: TOO MUCH SPEND, TOO LITTLE BENEFIT?. This report included contributions from bulls and from bears. The leading Goldman skeptic is Jim Covello. He argues,
To earn an adequate return on the ~$1tn estimated cost of developing and running AI technology, it must be able to solve complex problems, which, he says, it isn’t built to do. He points out that truly life-changing inventions like the internet enabled low-cost solutions to disrupt high-cost solutions even in its infancy, unlike costly AI tech today. And he’s skeptical that AI’s costs will ever decline enough to make automating a large share of tasks affordable given the high starting point as well as the complexity of building critical inputs—like GPU chips—which may prevent competition. He’s also doubtful that AI will boost the valuation of companies that use the tech, as any efficiency gains would likely be competed away, and the path to actually boosting revenues is unclear.
MIT’s Daron Acemoglu is likewise skeptical: He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn’t take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won’t occur nearly as quickly—or be nearly as impressive—as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are “not a law of nature.” So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.
Goldman economist Joseph Briggs is more optimistic: He estimates that gen AI will ultimately automate 25% of all work tasks and raise US productivity by 9% and GDP growth by 6.1% cumulatively over the next decade. While Briggs acknowledges that automating many AI-exposed tasks isn’t cost-effective today, he argues that the large potential for cost savings and likelihood that costs will decline over the long run—as is often, if not always, the case with new technologies—should eventually lead to more AI automation. And, unlike Acemoglu, Briggs incorporates both the potential for labor reallocation and new task creation into his productivity estimates, consistent with the strong and long historical record of technological innovation driving new opportunities.
The Goldman report also cautioned that the U.S. and European power grids may not be prepared for the major extra power needed to run the new data centers.
Perhaps the earliest major cautionary voice was that of Sequoia’s David Cahn. Sequoia is a major venture capital firm. In September 2023, Cahn offered a simple calculation estimating that for each dollar spent on (Nvidia) GPUs, another dollar (mainly electricity) would need to be spent by the cloud vendor in running the data center. To make this economical, the cloud vendor would need to pull in a total of about $4.00 in revenue. If vendors are installing roughly $50 billion in GPUs this year, then they need to pull in some $200 billion in revenues. But the projected AI revenues from Microsoft, Amazon, Google, etc., were less than half that amount, leaving (as of September 2023) a $125 billion shortfall.
As he put it, “During historical technology cycles, overbuilding of infrastructure has often incinerated capital, while at the same time unleashing future innovation by bringing down the marginal cost of new product development. We expect this pattern will repeat itself in AI.” This can be good for some of the end users, but not so good for the big tech firms rushing to spend here.
In his June 2024 update, Cahn notes that Nvidia’s yearly sales now look to be more like $150 billion, which in turn requires the cloud vendors to pull in some $600 billion in added revenues to make this spending worthwhile. Thus, the $125 billion shortfall is now more like a $500 billion (half a trillion!) shortfall. He notes further that the rapid improvement in chip power means that the value of the expensive chips being installed in 2024 will be a lot lower in 2025.
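Cahn’s back-of-the-envelope arithmetic, as quoted above, can be restated in a few lines. The figures below are the rough estimates from his posts, not my own data, and the function is just an illustrative sketch of his rule of thumb:

```python
# Cahn's rule of thumb: every $1 of GPU spend implies roughly $1 of
# data-center operating cost (mainly electricity), and the total must
# roughly double again to be economical -- so ~$4 of revenue per $1 of GPUs.
def required_revenue(gpu_spend, ops_multiple=1.0, margin_multiple=2.0):
    total_cost = gpu_spend * (1 + ops_multiple)
    return total_cost * margin_multiple

# September 2023 version: ~$50B of GPUs installed (figures in $billions)
print(required_revenue(50))       # 200.0 -> ~$200B of revenue needed
print(required_revenue(50) - 75)  # 125.0 -> the ~$125B shortfall vs ~$75B projected

# June 2024 update: ~$150B of annual Nvidia sales
print(required_revenue(150))      # 600.0 -> ~$600B of revenue needed
```

The $500 billion figure then falls out of comparing that $600 billion requirement against projected AI revenues.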
And here is a random cynical comment on a Seeking Alpha article: It was the perfect combination of years of Hollywood science fiction setting the table with regard to artificial intelligence and investors looking for something to replace the bitcoin and metaverse hype. So when ChatGPT put out answers that sounded human, people let their imaginations run wild. The fact that it consumes an incredible amount of processing power, that there is no actual artificial intelligence there, it cannot distinguish between truth and misinformation, and also no ROI other than the initial insane burst of chip sales – well, here we are and R2-D2 and C3PO are not reporting to work as promised.
All this makes a case that the huge spends by Microsoft, Amazon, Google, and the like may not pay off as hoped. Their share prices have steadily levitated since January 2023 due to the AI hype, and indeed have been almost entirely responsible for the rise in the overall S&P 500 index, but their prices have all cratered in the past month. Whether or not these tech titans make money here, it seems likely that Nvidia (selling picks and shovels to the gold miners) will continue to mint money. Also, some of the final end users of Gen AI will surely find lucrative applications. I wish I knew how to pick the winners from the losers here.
For instance, the software service company ServiceNow is finding value in Gen AI. According to Morgan Stanley analyst Keith Weiss, “Gen AI momentum is real and continues to build. Management noted that net-new ACV for the Pro Plus edition (the SKU that incorporates ServiceNow’s Gen AI capabilities) doubled [quarter-over-quarter] with Pro Plus delivering 11 deals over $1M including two deals over $5M. Furthermore, Pro Plus realized a 30% price uplift and average deal sizes are up over 3x versus comparable deals during the Pro adoption cycle.”
Do you have a robot vacuum? The first model was introduced in 2002 for $199. I don’t know how good that first model was, but I remember seeing plenty of ads for them by 2010 or so. My family was the cost-cutting kind of family that didn’t buy such things. I wondered how well they actually performed ‘in real life’. Given that they were on the shelves for $400–$1,200, I had the impression that there was a lot of quality difference among them. I didn’t need one, given that I rented or had a small floor area to clean, and I sure didn’t want to spend money on one that didn’t actually clean the floors. I lacked domain-specific knowledge. So I didn’t bother with them.
Fast forward to 2024: I’ve got four kids, a larger floor area, and less time. My wife and I agreed early in our marriage that we would be a ‘no shoes in the house’ kind of family. That said, we have different views when it comes to floor cleanliness. Mine is: if the floors are dirty, then let’s wait until the source of crumbs is gone, and then clean them when they will remain clean. In practice, this means sweeping or vacuuming after the kids go to bed, and then steam mopping (we have tile) after parties (not before). My wife, in contrast, feels the crumbs on her feet now and wants it to stop ASAP. Not to mention that it makes her stressed about non-floor clutter or chaos too.
The tricky thing about investment spending is that we need to differentiate between gross investment and net investment. Gross investment includes spending on the maintenance of current capital. Net investment is the change in the capital stock after depreciation – it’s spending that adds to the capital stock rather than merely replacing worn-out capital. Below are two pie charts that illustrate how the composition of our *gross investment* spending has changed over the past 30 years. Residential investment costs us about the same proportion of our investment budget as it did historically. A smaller proportion of our investment budget is going toward commercial structures and equipment (I’ve omitted the change in inventories). The big mover is the proportion of our investment that goes toward intellectual property, which has almost doubled.
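To keep the gross/net distinction straight, here is a toy calculation with made-up numbers:

```python
# Toy illustration of gross vs. net investment (all numbers are made up).
capital_stock = 1000.0    # existing capital at the start of the year
depreciation_rate = 0.05  # 5% of the capital stock wears out each year
gross_investment = 80.0   # total investment spending this year

depreciation = capital_stock * depreciation_rate   # 50.0 just replaces worn-out capital
net_investment = gross_investment - depreciation   # 30.0 actually adds to the stock

new_capital_stock = capital_stock + net_investment
print(net_investment, new_capital_stock)  # 30.0 1030.0
```

Only the net piece grows the capital stock; the rest of the gross spending is running in place.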
It’s easiest for us to think about the quantities of investment that we can afford in 2022 as a proportion of 1990. Below are the inflation-adjusted quantities of investment per capita. On a per-person basis, we invest more in all capital types in 2022 than we did in 1990. Intellectual property investment has risen more than 600% over the past 30 years. The investment that produces the most value has moved toward digital products, including software. We also invest 250% more in equipment per person than we did in 1990. The average worker has far more productive tools at their disposal – both physical and digital. Overall real private investment is 3.5 times higher than it was 30 years ago.
For the first time this week, I paid for a subscription to an LLM. I know economists who have been on the paid tier of OpenAI’s ChatGPT since 2023, using it for both research and teaching tasks.
I have nothing against ChatGPT. For various reasons, I never paid for it, even though I used it occasionally for routine work or for writing drafts. Perhaps if I were on the paid tier of something else already, I would have resisted paying for Claude.
Yesterday, I made an account with Claude to try it out for free. Claude and I started working together on a paper I’m revising. Claude was doing excellent work and then I ran out of free credits. I want to finish the revision this week, so I decided to start paying $20/month.
Here’s a little snapshot of our conversation. Claude is writing R code which I run in RStudio to update graphs in my paper.
This coding work is something I used to do myself (with internet searches for help). Have I been 10x-ed? Maybe I’ve been 2x-ed.
I’ll refer to Zuckerberg via Dwarkesh (which I’ve blogged about before):
Economics as a discipline really likes to boil things down to their essentials. There are plenty of examples. How many goods can one consume? Just two, bread and not bread. How can you spend your time? You can labor or leisure. How do you spend your money? Consume or save. It’s this last one that I want to emphasize here.
First, all income ultimately ends up being spent on consumption. Saving today is just the decision to consume in the future. And if not by you, then by your heirs. One determinant of intertemporal consumption decisions is the real rate of return. That is, how many apples can you eat in the future by forgoing an apple eaten today? The bigger that number is, the more attractive the decision to save.
Further, since most saving is not in the form of cash and is instead invested in productive assets, we can also characterize the intertemporal consumption problem as the current budget allocation decision to consume or invest. The more attractive capital becomes, the more one is willing to invest rather than consume. The relative attractiveness between consumption and investment informs the consumption decision.
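To make the apples arithmetic concrete, here is a toy calculation; the rate and horizons are made-up numbers, not a claim about actual returns:

```python
# How many future apples does one forgone apple buy at a given real
# rate of return? (Illustrative numbers only.)
def future_apples(apples_saved, real_rate, years=1):
    return apples_saved * (1 + real_rate) ** years

print(future_apples(1, 0.03))       # 1.03 apples next year at a 3% real rate
print(future_apples(1, 0.03, 30))   # ~2.43 apples in 30 years
```

The higher the real rate, the steeper that compounding, and the more attractive saving (that is, investing) becomes relative to eating the apple today.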
How attractive is investment? I’ll illustrate in two graphs. First, if the price of investment goods falls relative to consumption goods, then individuals will invest more. The graph below charts the price ratio of investment goods to consumption goods. Relative to consumption, the price of investment has fallen since 1980. Saving for the future has never been cheaper!
Of course, as in a price-taker story, I am assuming that individuals don’t affect this price ratio. In truth, prices are endogenous to consumption/investment decisions. For all we know, it may be that the prices of investment goods are falling because demand for investment goods has fallen. But that doesn’t appear to be the case.
When I give talks about AI, I often present my own research on ChatGPT muffing academic references. By the end, I make sure that I present some evidence of how good ChatGPT can be, so that the audience walks away with the correct overall impression of where the technology is heading. On the topic of rapid advances in LLMs, interesting new claims from a person on the inside can be found in Leopold Aschenbrenner’s new article (book?) called “Situational Awareness.” https://situational-awareness.ai/ PDF: https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf
He argues that AGI is near and LLMs will surpass the smartest humans soon.
AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.
Based on this assumption that AIs will surpass humans soon, he draws conclusions for national security and how we should conduct AI research. (No, I have not read all of it.)
Isn't it true that the "smart high schooler" can just repeat what they learned in a textbook? Why is it a linear progression from there to an AI researcher who is producing novel brilliant papers?
I might offer to contract out my services in the future based on my human instincts shaped by growing up on internet culture (i.e. I know when they are joking) and having an acute sense of irony. How is Artificial General Irony coming along?
Today I will write about something I care deeply about: the wellbeing of the moms of young children.
I can remember having a child enrolled in preschool. It was expensive but it was worth it, for us. What follows will be most relevant to readers who are working full-time and have children enrolled in full-time daycare/preschool. That is not the right choice for every family. If it’s the choice you made, then read on.
Do less for preschool. Save your energy and money for the years when your child will actually remember.
For some recent business, I had a text file of names and email addresses that I wanted to send a group email to in Gmail. Here I will share the steps I followed to import this info into a Google contact group.
The Big Picture
First, a couple of overall concepts. In Gmail (and Google), your contacts exist in one big list of all your contacts. To create a group of contacts for a mass email, you have to apply a label to those particular contacts. A given contact can have more than one label (i.e., can be a member of more than one group).
To enter one new contact at a time into Gmail, you go to Contacts and Create Contact, and type in or copy/paste in data like name and email address for each person or organization. But to enter a list of many contacts all at once, you must have these contacts in the form of either a CSV or vCard file, which Google can import. So here, first I will describe the steps to create a CSV file, and then the steps to import that into Gmail.
Comma-separated values (CSV) is a text file format that uses commas to separate values. Each record (for us, this means each contact) is on a separate line of plain text. Each record consists of the same number of fields, and these are separated by commas in the CSV file.
A list of names and of email contacts (two fields) might look like this in CSV format:
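For instance, with two contacts (the names and addresses here are made up purely for illustration), the file would contain:

```
Ann Example,ann@example.com
Bob Sample,bob@example.com
```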
We could have added additional data (more fields) for each contact, such as home phone numbers and cell numbers, again separated by commas.
For Gmail to import this as a contact list, this is not quite enough. Google demands a header line to identify the meaning of these chunks of data (i.e., to tell Google that these are in fact contact names, followed by email addresses). This requires specific wording in the header. For a contact name and for one (out of a possible two) email address, the header entries would be “Name” and “E-mail 1 - Value”. If we had wanted to add, say, home phones and cell phones, we could have added four more fields to the header line, namely: ,Phone 1 - Type,Phone 1 - Value,Phone 2 - Type,Phone 2 - Value. For a complete list of possible header items, see the Appendix.
The Steps
Here are steps to create a CSV file of contacts, and then import that file to Gmail:
( 1 ) Start with a text file of the names and addresses, separated by commas. Add a header line at the top: Name,E-mail 1 - Value. If this is in Word, Save As a plain text file (.txt). For our little list, this text file would look like this:
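With made-up contacts standing in for the real list (the names and addresses here are hypothetical):

```
Name,E-mail 1 - Value
Ann Example,ann@example.com
Bob Sample,bob@example.com
```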
( 2 ) Open this file in Excel: Start Excel, click Open, use Browse if necessary, select “All Files” (not just “Excel Files”) and find and select your text file. The Text Import Wizard will appear. Make sure the “Delimited” option is checked. Click Next.
In the next window, select “Comma” (not the default “Tab”) in the Delimiters section, then click “Next.” In the final window, you’ll need to specify the column data format. I suggest leaving it at “General,” and click “Finish.” If all has gone well, you should see an Excel sheet with your data in two columns.
( 3 ) Save the Excel sheet data as a CSV file: Under the File tab, choose Save As, and specify a folder into which the new file will be saved. A final window will appear where you specify the new file name (I’ll use “Close Friends List”), and the new file type. For “Save as type” there are several CSV options; on my PC I used “CSV (MS-DOS)”.
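As an aside, if you happen to have Python available, steps ( 1 )–( 3 ) can be replaced by a short script instead of the Word/Excel round trip. This is just a sketch; the contacts and the output file name are hypothetical:

```python
# Generate a Gmail-importable contacts CSV directly, skipping Word/Excel.
# The contacts and the file name here are made-up examples.
import csv

contacts = [
    ("Ann Example", "ann@example.com"),
    ("Bob Sample", "bob@example.com"),
]

with open("close_friends.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Name", "E-mail 1 - Value"])  # the header wording Gmail expects
    writer.writerows(contacts)
```

The resulting file can be imported in step ( 4 ) exactly like the one produced by Excel.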
( 4 ) Go to Gmail or Google, and click on the nine-dots icon at the upper right, and select Contacts. At the upper left of the Contacts page, click Create Contact. You’ll have choice between Create a Contact (for single contact), or Create multiple contacts. Click on the latter.
( 5 ) Up pops a Create Multiple Contacts window. At the upper right of that window you can select what existing label (contact group name) you want to apply to this new list of names, or create a new label. For this example, I created (entered) a new label (in place of “No Label”), called Close Friends. Then, towards the bottom of this window, click on Import Contacts.
Then (in the new window that pops up) select the name of the incoming CSV file, and click Import. That’s it!
The new contacts will be in your overall contact list, with the group name label applied to them. There will also be a default group label “Imported on [today’s date]” created (also applied to this bunch of contacts). You can delete that label from the list of labels (bottom left of the Contacts page), using the “Keep the Contacts” option so the new contacts don’t get erased.
( 6 ) Now you can send out emails to this whole group of contacts. If this is a more professional or sensitive situation, or if the list of contacts is unwieldy (e.g. over ten or so), you might just send the email to yourself and bcc it to the labeled group.
APPENDIX: List of all Header Entries for CSV Files, for Importing Contacts to Gmail
I listed above several header entries which can be used to tell Google what the data in your list of contact information means. This Productivity Portfolio link has more detailed information, including tips for using the vCard file format for transferring contact information (use an app like Outlook to generate the vCard or CSV file, fix the header info as needed, and then import that file into Google Contacts).
There is also a complete list of header entries for a CSV file, available as an Excel file by clicking his “My Google Contacts CSV Template” button. The Excel spreadsheet format is convenient for lining things up for actual usage, but I have copied the long list of header items into a long text string to dump here, to give you an idea of what other header items might look like:
I bolded the two items I actually used in my example (Name and E-mail 1 - Value), as well as a pair of entries (Phone 1 - Type and Phone 1 - Value) as header items which you might use for including, say, cell phone numbers in your CSV file of contact information.