Services, and Goods, and Software (Oh My!)

When I was in high school, I remember talking about video game consumption. Yes, an Xbox cost more than two hundred dollars, but one could enjoy the next hour of video game play at a marginal cost of almost zero. Video games lowered the marginal cost and increased the marginal utility of what gets measured as leisure. Similarly, the 20th century was the era of mass production. Labor-saving devices and a deluge of goods pervaded daily life. Remember servants? That’s a pre-20th-century technology. Domestic work in another person’s house was common in the 1800s, less so as the 20th century progressed. Now we have devices that save on both labor and physical resources. Software helps us surpass the historical limits of moving physical objects in the real world.


There’s a simple framework I’ve been thinking about for 20 years. It’s not comprehensive, but I still think it makes sense:

  • Labor is highly regulated and costly.
  • Physical capital is less regulated than labor.
  • Software and writing more generally is less regulated than physical capital.


I think that just about anyone would agree with the above. Labor is regulated by health and safety standards, “human resource” concerns, legal compliance and preemption, environmental impact, transportation infrastructure, and so on. It’s expensive to employ someone, and it’s especially expensive to have them employ their physical labor.

Continue reading

Will the Huge Corporate Spending on AI Pay Off?

Last Tuesday I posted on the topic, “Tech Stocks Sag as Analysts Question How Much Money Firms Will Actually Make from AI”. Here I try to dig a little deeper into the question of whether there will be a reasonable return on the billions of dollars that tech firms are investing in this area.

Cloud providers like Microsoft, Amazon, and Google are buying expensive GPU chips (mainly from Nvidia) and installing them in power-hungry data centers. This hardware is being cranked to train large language models on a world’s-worth of existing information. Will it pay off?

Obviously, we can dream up all sorts of applications for these large language models (LLMs), but the question is how much potential downstream customers are willing to pay for these capabilities. I’m not qualified to give an expert appraisal, so I will just post some excerpts here.

Up until two months ago, it seemed there was little concern about the returns on this investment. The only worry seemed to be not investing enough. This attitude was exemplified by Sundar Pichai of Alphabet (Google). During the Q2 earnings call, he was asked what the return on Gen AI capex would be. Instead of answering the question directly, he said:

I think the one way I think about it is when we go through a curve like this, the risk of under-investing is dramatically greater than the risk of over-investing for us here, even in scenarios where if it turns out that we are over investing. [my emphasis]

Part of the dynamic here is FOMO among the tech titans, as they compete for the internet search business:

The entire Gen AI capex boom started when Microsoft invested in OpenAI in late 2022 to directly challenge Google Search.

Naturally, Alphabet was forced to develop its own Gen AI LLM product to defend its core business – Search. Meta joined in the Gen AI capex race, together with Amazon, for fear of being left out – which led to a massive Gen AI capex boom.

Nvidia has reportedly estimated that for every dollar spent on their GPU chips, “the big cloud service providers could generate $5 in GPU instant hosting over a span of four years. And API providers could generate seven bucks over that same timeframe.” Sounds like a great cornucopia for the big tech companies that are pouring tens of billions of dollars into this. What could possibly go wrong?

In late June, Goldman Sachs published a report titled “Gen AI: Too Much Spend, Too Little Benefit?”. This report included contributions from bulls and from bears. The leading Goldman skeptic is Jim Covello. He argues,

To earn an adequate return on the ~$1tn estimated cost of developing and running AI technology, it must be able to solve complex problems, which, he says, it isn’t built to do. He points out that truly life-changing inventions like the internet enabled low-cost solutions to disrupt high-cost solutions even in its infancy, unlike costly AI tech today. And he’s skeptical that AI’s costs will ever decline enough to make automating a large share of tasks affordable given the high starting point as well as the complexity of building critical inputs—like GPU chips—which may prevent competition. He’s also doubtful that AI will boost the valuation of companies that use the tech, as any efficiency gains would likely be competed away, and the path to actually boosting revenues is unclear.

MIT’s Daron Acemoglu is likewise skeptical:  He estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks. And he doesn’t take much comfort from history that shows technologies improving and becoming less costly over time, arguing that AI model advances likely won’t occur nearly as quickly—or be nearly as impressive—as many believe. He also questions whether AI adoption will create new tasks and products, saying these impacts are “not a law of nature.” So, he forecasts AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.

Goldman economist Joseph Briggs is more optimistic:  He estimates that gen AI will ultimately automate 25% of all work tasks and raise US productivity by 9% and GDP growth by 6.1% cumulatively over the next decade. While Briggs acknowledges that automating many AI-exposed tasks isn’t cost-effective today, he argues that the large potential for cost savings and likelihood that costs will decline over the long run—as is often, if not always, the case with new technologies—should eventually lead to more AI automation. And, unlike Acemoglu, Briggs incorporates both the potential for labor reallocation and new task creation into his productivity estimates, consistent with the strong and long historical record of technological innovation driving new opportunities.

The Goldman report also cautioned that the U.S. and European power grids may not be prepared for the major extra power needed to run the new data centers.

Perhaps the earliest major cautionary voice was that of Sequoia’s David Cahn. Sequoia is a major venture capital firm. In September 2023, Cahn offered a simple calculation estimating that for each dollar spent on (Nvidia) GPUs, another dollar (mainly for electricity) would need to be spent by the cloud vendor to run the data center. To make this economical, the cloud vendor would need to pull in a total of about $4.00 in revenue. If vendors are installing roughly $50 billion in GPUs this year, then they need to pull in some $200 billion in revenues. But the projected AI revenues from Microsoft, Amazon, Google, and the rest were less than half that amount, leaving (as of September 2023) a $125 billion shortfall.

As he put it, “During historical technology cycles, overbuilding of infrastructure has often incinerated capital, while at the same time unleashing future innovation by bringing down the marginal cost of new product development. We expect this pattern will repeat itself in AI.” This can be good for some of the end users, but not so good for the big tech firms rushing to spend here.

In his June 2024 update, Cahn notes that Nvidia’s yearly sales now look to be more like $150 billion, which in turn requires the cloud vendors to pull in some $600 billion in added revenues to make this spending worthwhile. Thus, the $125 billion shortfall is now more like a $500 billion (half a trillion!) shortfall. He notes further that the rapid improvement in chip power means that the value of those expensive chips being installed in 2024 will be a lot lower in 2025.
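
To make the arithmetic concrete, here is a minimal R sketch of Cahn's back-of-the-envelope rule. The projected-revenue figures plugged in below are my own rough back-outs implied by the shortfalls quoted above, not numbers taken from Cahn's posts.

```r
# Cahn's rule of thumb: each $1 of GPUs implies roughly another $1 of
# data-center running costs (mostly electricity), and the cloud vendor
# needs a margin on top -- call it ~$4 of revenue per $1 of GPU spend.
cahn_shortfall <- function(gpu_spend_bn, projected_ai_revenue_bn, revenue_multiple = 4) {
  required_revenue_bn <- gpu_spend_bn * revenue_multiple
  required_revenue_bn - projected_ai_revenue_bn
}

# September 2023: ~$50B of GPUs, roughly ~$75B of projected AI revenue -> ~$125B gap
cahn_shortfall(gpu_spend_bn = 50, projected_ai_revenue_bn = 75)

# June 2024 update: ~$150B of Nvidia sales, roughly ~$100B of revenue -> ~$500B gap
cahn_shortfall(gpu_spend_bn = 150, projected_ai_revenue_bn = 100)
```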

And here is a random cynical comment on a Seeking Alpha article: It was the perfect combination of years of Hollywood science fiction setting the table with regard to artificial intelligence and investors looking for something to replace the bitcoin and metaverse hype. So when ChatGPT put out answers that sounded human, people let their imaginations run wild. The fact that it consumes an incredible amount of processing power, that there is no actual artificial intelligence there, it cannot distinguish between truth and misinformation, and also no ROI other than the initial insane burst of chip sales – well, here we are and R2-D2 and C3PO are not reporting to work as promised.

All this makes a case that the huge spends by Microsoft, Amazon, Google, and the like may not pay off as hoped. Their share prices have steadily levitated since January 2023 due to the AI hype, and indeed have been almost entirely responsible for the rise in the overall S&P 500 index, but their prices have all cratered in the past month. Whether or not these tech titans make money here, it seems likely that Nvidia (selling picks and shovels to the gold miners) will continue to mint money. Also, some of the final end users of Gen AI will surely find lucrative applications. I wish I knew how to pick the winners from the losers here.

For instance, the software service company ServiceNow is finding value in Gen AI. According to Morgan Stanley analyst Keith Weiss, “Gen AI momentum is real and continues to build. Management noted that net-new ACV for the Pro Plus edition (the SKU that incorporates ServiceNow’s Gen AI capabilities) doubled [quarter-over-quarter] with Pro Plus delivering 11 deals over $1M including two deals over $5M. Furthermore, Pro Plus realized a 30% price uplift and average deal sizes are up over 3x versus comparable deals during the Pro adoption cycle.”

Sources on AI Use of Information

  1. Consent in Crisis: The Rapid Decline of the AI Data Commons

Abstract: General-purpose artificial intelligence (AI) systems are built on massive swathes of public web data, assembled into corpora such as C4, RefinedWeb, and Dolma. To our knowledge, we conduct the first, large-scale, longitudinal audit of the consent protocols for the web domains underlying AI training corpora. Our audit of 14,000 web domains provides an expansive view of crawlable web data and how consent preferences to use it are changing over time. We observe a proliferation of AI-specific clauses to limit use, acute differences in restrictions on AI developers, as well as general inconsistencies between websites’ expressed intentions in their Terms of Service and their robots.txt. We diagnose these as symptoms of ineffective web protocols, not designed to cope with the widespread re-purposing of the internet for AI. Our longitudinal analyses show that in a single year (2023-2024) there has been a rapid crescendo of data restrictions from web sources, rendering ~5%+ of all tokens in C4, or 28%+ of the most actively maintained, critical sources in C4, fully restricted from use. For Terms of Service crawling restrictions, a full 45% of C4 is now restricted. If respected or enforced, these restrictions are rapidly biasing the diversity, freshness, and scaling laws for general-purpose AI systems. We hope to illustrate the emerging crisis in data consent, foreclosing much of the open web, not only for commercial AI, but non-commercial AI and academic purposes.

AI is pulling out of the commons information that was provisioned under a different set of rules and technology. See the discussion on Hacker News (Y Combinator). An illustrative robots.txt showing the kind of AI-specific clauses the paper describes appears at the end of this section.

2. “ChatGPT-maker braces for fight with New York Times and authors on ‘fair use’ of copyrighted works” (AP, January ’24)

3. Partly handy as a collection of references: “HOW GENERATIVE AI TURNS COPYRIGHT UPSIDE DOWN” by a law professor. “While courts are litigating many copyright issues involving generative AI, from who owns AI-generated works to the fair use of training to infringement by AI outputs, the most fundamental changes generative AI will bring to copyright law don’t fit in any of those categories…” 

4. A new gated NBER paper by Josh Gans “examines this issue from an economics perspective.”

Joy: AI companies have money. Could we be headed toward a world where OpenAI has paid writers on staff? Replenishing the commons is relatively cheap if done strategically, relative to the money being raised by AI companies. Jeff Bezos bought the Washington Post for a fraction of his tech fortune (about $250 million). Elon Musk bought Twitter. Sam Altman is rich enough to help keep the NYT churning out articles. Because there are several competing commercial models, however, the owners of LLM products face a commons problem: if Altman pays the NYT to keep operating, then Anthropic gets the benefit, too. Arguably, good writing is already under-provisioned, even aside from LLMs.
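
For readers unfamiliar with the mechanics, here is what the AI-specific robots.txt clauses mentioned above can look like. This file is hypothetical, not taken from any particular site; GPTBot and CCBot are real crawler user-agent tokens used by OpenAI and Common Crawl, respectively.

```
# Hypothetical robots.txt with AI-specific crawling clauses
User-agent: GPTBot      # OpenAI's training-data crawler
Disallow: /

User-agent: CCBot       # Common Crawl, whose data is widely used to build AI corpora
Disallow: /

User-agent: *           # everyone else (e.g., ordinary search indexing)
Disallow: /private/
```

The mismatch the paper documents is that a site can publish a file like this while its Terms of Service say something different, and nothing forces AI crawlers to honor either one.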

You, Parent, Should Have a Robot Vacuum

Do you have a robot vacuum? The first model was introduced in 2002 for $199. I don’t know how good that first model was, but I remember seeing plenty of ads for them by 2010 or so. My family was the cost-cutting kind of family that didn’t buy such things, and I wondered how well they actually performed ‘in real life’. Given that they were on the shelves for $400-$1,200, I had the impression that there was a lot of quality difference among them. I didn’t need one, given that I rented or had a small floor area to clean, and I sure didn’t want to spend money on one that didn’t actually clean the floors. I lacked domain-specific knowledge, so I didn’t bother with them.

Fast forward to 2024: I’ve got four kids, a larger floor area, and less time. My wife and I agreed early in our marriage that we would be a ‘no shoes in the house’ kind of family. That said, we have different views when it comes to floor cleanliness. Mine is: if the floors are dirty, then let’s wait until the source of crumbs is gone, and clean them when they will stay clean. In practice, this means sweeping or vacuuming after the kids go to bed, and steam mopping (we have tile) after parties, not before. My wife, in contrast, feels the crumbs on her feet now and wants it to stop ASAP. Not to mention that it stresses her out about non-floor clutter and chaos, too.

Continue reading

Oster on Haidt and Screens

Emily Oster took on the Jonathan Haidt-related debate in her latest post, “Screens & Social Media.”

Do screens harm mental health? Oster joins some other skeptics I know. She doesn’t fully back Haidt, and she does the economist thing by mentioning “tradeoffs.”

Oster, ever practical, makes a point that sometimes gets lost. Maybe social media doesn’t cause suicide. Maybe there is no causal relationship concerning diagnosed mental health conditions, as indicated by the data. That doesn’t mean that parents and teachers should not monitor and curtail screen time. Oster says that it’s obvious that kids should not have their phones in the classroom during school instruction.

Here’s a personal story from this week. My son wants Roblox. The game says 12+, and I’ve told him that I’m sticking to that. No. He can’t have it now, and he can’t start chatting with strangers online. We aren’t going to revisit the conversation until he’s 12. Is he mad at me? Yes. You know what he does when he’s really bored at home? He starts vacuuming. With these boundaries, I’ve driven him either to madness or to vacuuming. (Recall that he likes these books. Since hearing Harry Potter 1 as an audiobook in the car, he’s started tearing through the series himself in hardcover.)

An innocent tablet game I let him play (when he’s allowed to have screen time) is Duck Life. Rated E for everyone.

Previously, I wrote “Video Games: Emily Oster, Are the kids alright?”

And more recently, Tyler had “My contentious Conversation with Jonathan Haidt.” Maybe Tyler should debate Emily Oster next about limiting phone use.

From Cubicles to Code – Evolving Investment Priorities from 1990 to 2022

I’ve written before about how we can afford about 50% more consumption now than we could in 1990. But it’s not all bread and circuses. We can also afford more capital. In fact, adding to our capital stock helps us produce the abundant consumption that we enjoy today. To explore this idea I’m using the BEA Saving and Investment accounts. The population data are from FRED.

The tricky thing about investment spending is that we need to differentiate between gross investment and net investment. Gross investment includes spending on the maintenance of existing capital. Net investment is the change in the capital stock after depreciation – it is the addition to the capital stock, not just the replacement of worn-out capital. Below are two pie charts that illustrate how the composition of our *gross investment* spending has changed over the past 30 years. Residential investment costs us about the same proportion of our investment budget as it did historically. A smaller proportion of our investment budget is going toward commercial structures and equipment (I’ve omitted the change in inventories). The big mover is the proportion of our investment that goes toward intellectual property, which has almost doubled.

It’s easiest for us to think about the quantities of investment that we can afford in 2022 as a proportion of 1990. Below are the inflation-adjusted quantities of investment per capita. On a per-person basis, we invest more in all capital types in 2022 than we did in 1990. Intellectual property investment has risen more than 600% over the past 30 years. The investment that produces the most value has moved toward digital products, including software. We also invest 250% more in equipment per person than we did in 1990. The average worker has far more productive tools at their disposal – both physical and digital. Overall real private investment is 3.5 times higher than it was 30 years ago.
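
As a rough illustration of the calculation behind these comparisons, here is a minimal R sketch using dplyr. The numbers are made-up stand-ins, not the actual BEA/FRED series: deflate nominal investment, divide by population, and index 1990 to 1.

```r
library(dplyr)

# Illustrative inputs only -- not the actual BEA/FRED values behind the charts.
invest <- tibble(
  year     = c(1990, 2022),
  ip_nom   = c(150, 2400),   # nominal intellectual-property investment, $ billions (made up)
  deflator = c(0.60, 1.00),  # price index for this category, 2022 = 1 (made up)
  pop_mn   = c(250, 333)     # U.S. population in millions (approximate)
)

invest %>%
  mutate(
    real_per_capita = ip_nom / deflator / pop_mn,                      # real $ per person
    index_vs_1990   = real_per_capita / real_per_capita[year == 1990]  # 1990 = 1
  )
```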

Continue reading

GLIF Social Media Memes

Wojak Meme Generator from Glif will build you a funny meme from a short phrase or single-word prompt. Note that it is built to be derogatory and cruel for sport, and it may hallucinate falsehoods. (See the tweet announcement.)

I am fascinated by this from the angle of modern anthropology. The AI has learned all of this by studying what we write online. Someone can build an AI to make jokes and call out hypocrisy.

Here are GLIFs of the different social media user stereotypes as of 2024. Most of our current readers probably don’t need any captions to these memes, but I’ll provide a bit of sincere explanation to help everyone understand the jokes.

Twitter user: Person who posts short messages and follows others on the microblogging platform.

Facebook user: Individual with a profile on the social network for connecting with friends and sharing content.

Bluesky user: Early adopter of a decentralized social media platform focused on user control.

Continue reading

How Repurposing Graphics Processing Chips Made Nvidia the Most Valuable Company on Earth

Folks who follow the stock market know that the average company in the S&P 500 has gone essentially nowhere in the last couple of years. What has pulled the averages higher and higher has been the outstanding performance of a handful of big tech stocks. Foremost among these is Nvidia. Its share price has tripled in the past year, after nearly tripling in the previous twelve months. Its market value climbed to $3.3 trillion last week, briefly surpassing tech behemoths Microsoft and Apple to become the most valuable company in the world.

What just happened here?

It all began in 1993, when Taiwanese-American electrical engineer Jensen Huang and two other Silicon Valley techies met in a Denny’s in East San Jose and decided to start their own company. Their focus was making graphics acceleration boards for video games. Computing devices such as computers, game stations, and smart phones have at their core a central processing unit (CPU). A strength of CPUs is their versatility. They can do a lot of different tasks, but sequentially and thus at a limited speed. To oversimplify, a CPU fetches an instruction (command), loads maybe two chunks of data, performs the instructed calculation on those data, stores the result somewhere else, and then turns around and fetches the next instruction. With clever programming, some tasks can be broken up into multiple pieces that can be processed in parallel on several CPU cores at once, but that only goes so far.

Processing large amounts of graphics data, such as rendering a high-resolution active video game, requires an enormous amount of computing. However, these calculations are largely all of the same type, so a versatile processing chip like a CPU is not required. Graphics processing units (GPUs), originally termed graphics accelerators, are designed to do enormous numbers of these simple calculations simultaneously. To offload this burden from the CPU, computers and game stations have for decades included an auxiliary GPU (“graphics card”) alongside the CPU.
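
Here is a loose analogy in R for the difference just described. It is a sketch only (R itself runs on the CPU, so this illustrates the shape of the work, not actual GPU execution): a loop that handles one element at a time versus the same simple operation applied to a whole block of data at once.

```r
n <- 1e6
pixels <- runif(n)   # stand-in for a big block of graphics data

# "CPU-style": fetch an instruction, apply it to one piece of data, repeat
out_loop <- numeric(n)
for (i in seq_len(n)) {
  out_loop[i] <- pixels[i] * 0.5 + 0.25
}

# "GPU-style": the identical simple operation applied across the whole array at once
out_vec <- pixels * 0.5 + 0.25

all.equal(out_loop, out_vec)   # TRUE -- same result, very different execution pattern
```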

This was the original target for Nvidia. Video gaming was expanding rapidly, and they saw a niche for innovative graphics processors. Unfortunately, the processing architecture they chose to work on fell out of favor, and they skated right up to the edge of bankruptcy. A few years after its founding, Nvidia was down to 30 days before closing its doors, but at the last moment they got a $5 million loan to keep them afloat. Nvidia clawed its way back from the brink and managed to make and sell a series of popular graphics processors.

However, management had a vision that the massively parallel processing power of their chips could be applied to more exalted uses than rendering blood spatters in Call of Duty. The types of matrix calculations done on GPUs can be used in a wide variety of physical simulations, such as seismology and molecular dynamics. In 2007, Nvidia released its CUDA platform for using GPUs for accelerated general-purpose processing. Since then, Nvidia has promoted the use of its GPUs as general hardware for scientific computing, in addition to the classic graphics applications.

This line of business exploded starting around 2019 with the crypto craze. Cryptocurrencies require enormous amounts of computing power, and those calculations are amenable to being performed on massively parallel GPUs. Serious crypto-mining companies set up racks of processors built around Nvidia GPUs. GPUs did face serious competition from other types of processors for crypto-mining applications, so they did not have that field to themselves. With people stuck at home in 2020-2021, demand for GPUs rose even further: more folks sitting on couches playing video games, and more cloud computing for remote work.

Nvidia Dominates AI Computing

Now the whole world cannot get enough of machine learning and generative AI. And Nvidia chips totally dominate that market. Nvidia supplies not only the hardware (chips) but also a software platform to allow programmers to make use of the chips. With so many programmers and applications standardized now on the Nvidia platform, its dominance and profitability should persist for many years.

Nearly all of their chips are manufactured in Taiwan, which poses a geopolitical risk, not only for Nvidia but for all enterprises that depend on high-end AI processing.

Is the Universe Legible to Intelligence?

I borrowed the following from the posted transcript. Bold emphasis added by me. This starts at about minute 36 of the podcast “Tyler Cowen – Hayek, Keynes, & Smith on AI, Animal Spirits, Anarchy, & Growth” with Dwarkesh Patel from January 2024.

Patel: We are talking about GPT-5 level models. What do you think will happen with GPT-6, GPT-7? Do you still think of it like having a bunch of RAs (research assistants) or does it seem like a different thing at some point?

Cowen: I’m not sure what those numbers going up mean or what a GPT-7 would look like or how much smarter it could get. I think people make too many assumptions there. It could be the real advantages are integrating it into workflows by things that are not better GPTs at all. And once you get to GPT, say 5.5, I’m not sure you can just turn up the dial on smarts and have it, for example, integrate general relativity and quantum mechanics.

Patel: Why not?

Cowen: I don’t think that’s how intelligence works. And this is a Hayekian point. And some of these problems, there just may be no answer. Like maybe the universe isn’t that legible. And if it’s not that legible, the GPT-11 doesn’t really make sense as a creature or whatever.

Patel (37:43) : Isn’t there a Hayekian argument to be made that, listen, you can have billions of copies of these things. Imagine the sort of decentralized order that could result, the amount of decentralized tacit knowledge that billions of copies talking to each other could have. That in and of itself is an argument to be made about the whole thing as an emergent order will be much more powerful than we’re anticipating.

Cowen: Well, I think it will be highly productive. What tacit knowledge means with AIs, I don’t think we understand yet. Is it by definition all non-tacit or does the fact that how GPT-4 works is not legible to us or even its creators so much? Does that mean it’s possessing of tacit knowledge or is it not knowledge? None of those categories are well thought out …

It might be significant that LLMs are not legible even to their human creators. More significantly, the universe might not be legible to intelligence, at least of the kind that is trained on human writing. I (Joy) gathered a few more notes for myself.

A co-EV-winner has commented on this at Don’t Worry About the Vase:

(37:00) Tyler expresses skepticism that GPT-N can scale up its intelligence that far, that beyond 5.5 maybe integration with other systems matters more, and says ‘maybe the universe is not that legible.’ I essentially read this as Tyler engaging in superintelligence denialism, consistent with his idea that humans with very high intelligence are themselves overrated, and saying that there is no meaningful sense in which intelligence can much exceed generally smart human level other than perhaps literal clock speed.

I (Joy) took it more literally. I don’t see “superintelligence denialism.” I took it to mean that the universe is not legible to our brand of intelligence.

There is one other comment I found, in response to a short clip posted by @DwarkeshPatel, from YouTuber @trucid2:

Intelligence isn’t sufficient to solve this problem, but isn’t for the reason he stated. We know that GR and QM are inconsistent–it’s in the math. But the universe has no trouble deciding how to behave. It is consistent. That means a consistent theory that combines both is possible. The reason intelligence alone isn’t enough is that we’re missing data. There may be an infinite number of ways to combine QM and GR. Which is the correct one? You need data for that.

I saved myself a little time by writing the following with ChatGPT. If the GPT got something wrong in here, I’m not qualified to notice:

Newtonian physics gave an impression of a predictable, clockwork universe, leading many to believe that deeper exploration with more powerful microscopes would reveal even greater predictability. Contrary to this expectation, the advent of quantum mechanics revealed a bizarre, unpredictable micro-world. The more we learned, the stranger and less intuitive the universe became. This shift highlighted the limits of classical physics and the necessity of new theories to explain the fundamental nature of reality.
General Relativity (GR) and Quantum Mechanics (QM) are inconsistent because they describe the universe in fundamentally different ways and are based on different underlying principles. GR, formulated by Einstein, describes gravity as the curvature of spacetime caused by mass and energy, providing a deterministic framework for understanding large-scale phenomena like the motion of planets and the structure of galaxies. In contrast, QM governs the behavior of particles at the smallest scales, where probabilities and wave-particle duality dominate, and uncertainty is intrinsic.

The inconsistencies arise because:

  1. Mathematical Frameworks: GR is a classical field theory expressed through smooth, continuous spacetime, while QM relies on discrete probabilities and quantized fields. Integrating the continuous nature of GR with the discrete, probabilistic framework of QM has proven mathematically challenging.
  2. Singularities and Infinities: When applied to extreme conditions like black holes or the Big Bang, GR predicts singularities where physical quantities become infinite, which QM cannot handle. Conversely, when trying to apply quantum principles to gravity, the calculations often lead to non-renormalizable infinities, meaning they cannot be easily tamed or made sense of.
  3. Scales and Forces: GR works exceptionally well on macroscopic scales and with strong gravitational fields, while QM accurately describes subatomic scales and the other three fundamental forces (electromagnetic, weak nuclear, and strong nuclear). Merging these scales and forces into a coherent theory that works universally remains an unresolved problem.

Ultimately, the inconsistency suggests that a more fundamental theory, potentially a theory of quantum gravity like string theory or loop quantum gravity, is needed to reconcile the two frameworks.
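
To make the “Mathematical Frameworks” point concrete, here are the central equations of the two theories in their standard textbook forms (my addition, not part of the ChatGPT passage): Einstein’s field equations describe deterministic spacetime curvature, while the Schrödinger equation describes the probabilistic evolution of a quantum state.

```latex
% General relativity: Einstein field equations (spacetime geometry = matter/energy content)
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}

% Quantum mechanics: time-dependent Schrodinger equation (probabilistic state evolution)
i\hbar \frac{\partial}{\partial t} \psi(\mathbf{x}, t) = \hat{H}\, \psi(\mathbf{x}, t)
```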

P.S. I published “AI Doesn’t Mimic God’s Intelligence” at The Gospel Coalition. For now, at least, there is some higher plane of knowledge that we humans are not on. Will AI get there? Take us there? We don’t know.

Do I Trust Claude 3.5 Sonnet?

For the first time this week, I paid for a subscription to an LLM. I know economists who have been on the paid tier of OpenAI’s ChatGPT since 2023, using it for both research and teaching tasks.

I did publish a paper on the mistakes it makes: “ChatGPT Hallucinates Nonexistent Citations: Evidence from Economics.” In a behavioral paper, I used it as a stand-in for AI: “Do People Trust Humans More Than ChatGPT?”

I have nothing against ChatGPT. For various reasons, I never paid for it, even though I used it occasionally for routine work or for writing drafts. Perhaps if I had already been on the paid tier of something else, I would have resisted paying for Claude.

Yesterday, I made an account with Claude to try it out for free. Claude and I started working together on a paper I’m revising. Claude was doing excellent work and then I ran out of free credits. I want to finish the revision this week, so I decided to start paying $20/month.

Here’s a little snapshot of our conversation. Claude is writing R code which I run in RStudio to update graphs in my paper.
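
The snapshot doesn’t reproduce here, but the code involved was of roughly this sort. This is a hypothetical sketch with toy data, not Claude’s actual output or my paper’s real numbers.

```r
library(ggplot2)

# Toy stand-in data -- not the actual results from the paper being revised
results <- data.frame(
  group = rep(c("Human", "ChatGPT"), each = 3),
  round = rep(1:3, times = 2),
  trust = c(0.62, 0.58, 0.55, 0.48, 0.51, 0.53)
)

p <- ggplot(results, aes(x = round, y = trust, colour = group)) +
  geom_line() +
  geom_point(size = 2) +
  labs(x = "Round", y = "Mean trust rating", colour = NULL) +
  theme_minimal()

print(p)
ggsave("trust_by_round.pdf", p, width = 6, height = 4)  # drop the updated figure into the paper
```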

This coding work is something I used to do myself (with internet searches for help). Have I been 10x-ed? Maybe I’ve been 2x-ed.

I’ll refer to Zuckerberg via Dwarkesh (which I’ve blogged about before):

Continue reading