The Day the Cloud Evaporated: Life After the Data Center Collapse (A Guest Post by AI)

This is a “guest” blog post that I asked Google Gemini Pro to write. Data centers are increasingly becoming a political issue in communities across America. People are asking questions like: “Why do we need these things? How much water will this use?” Because these are sometimes referred to as “AI Data Centers,” people might assume that data centers are primarily about creating cat memes and fake videos. And it’s true that’s a part of AI, and it’s true that much of the new data center construction is for AI.

But… data centers have been around for a while. People are only now taking notice of them, for the most part. To better understand this issue, I asked — what else? — AI to explain how much data centers are used in our daily lives. AI in this case means Google Gemini Pro.

I’ll paste the full guest post below, but I want to point something out first: this blog post makes no mention of AI. Instead, it talks about: GPS and mapping apps; almost everything you do if you work in an office; credit cards and digital banking; news and social media. All of these things rely on data centers and would cease to function without data centers. That’s not because I asked Gemini to leave out AI from the guest post — when I followed up on this omission, Gemini said “It was a calculated omission—partly to keep the focus on the immediate ‘analog’ shock to daily life.” Most people probably wouldn’t care if they lost the ability to create funny images with AI. They would care if they lost all of their photos, access to their Dropbox account, and the ability to send email.

You could interpret all of this as saying we are “too dependent” on data centers and the modern Internet. You could also say we are “too dependent” on electricity. Or modern plumbing. Or modern supply chains. Or agriculture. Modern life is based on modern technology. I don’t know if it really makes sense to say we are “dependent” on these things, other than that we use them and they are beneficial.

Anyway, on to the guest post from Google Gemini Pro:


The Day the Cloud Evaporated: Life After the Data Center Collapse

Imagine waking up tomorrow morning in your suburban home in Ohio, or your apartment in Seattle. You reach for your smartphone to silence the alarm, but the screen is a stubborn, glowing rectangle of error messages. You try to check the weather, but the app’s spinning wheel never stops. You try to text your partner, but the message stays “Sending…” until it eventually fails.

This isn’t just a bad Wi-Fi connection. Every data center on Earth—those massive, humming warehouses filled with silicon and cooling fans—has vanished. In an instant, the “brain” of the modern world has been lobotomized. For the average person in the United States, life wouldn’t just slow down; it would fundamentally reset to 1950, but without the physical infrastructure of 1950 to catch the fall.

Continue reading

Will AI kill the research paper?

Will AI kill the research paper? I don’t know; probably not. But I do know that what has constituted a research paper has changed many times before and will change many times again.

Before the 1940s, economics research papers were largely prose. Analytic in nature, sure, but prose. Some graphs, maybe a box. A little math, but math largely for the sake of demonstrating logical relationships. Then Samuelson hit, reframing economics as thermodynamics and differential calculus. What was previously a research paper was now a polemic, a monograph at best. Thought experiments were out, high theory was in.

This era of high theory flourished in the 70s, the math changed, and at some point computers arrived with the possibility of data sufficiently rich and numerous you couldn’t just plot all of the observations in Figure 1. That data couldn’t stand on its own, though. To be a credible publication you really needed to bundle your analysis with some theory that generated testable predictions. Pure theory papers gave way to an era of applied imperialism as economic models found themselves applied to every quantified context under the social scientific sun.

Causal identification became a thing of interest, and we got really good at telling stories again. Specifically, stories about instrumental variables. You needed a story to convince anyone, but we told so many that some folks started to notice that these stories were often pretty weak. That, in part, turned up the heat on a credibility revolution that was already in swing, which meant now you needed even better data and you needed to defend your identification strategy to the death. What was a paper before was now an embarrassment you should probably consider retracting (nb: no one retracted anything, but that doesn’t mean people weren’t suggesting it behind their backs).

Which kept rolling in data set after data set until we woke up one day and realized you either need to go out in the world and create your own actual experiment (nothing quasi- about it) or you needed to cultivate access to better…no, better…no, the very best-est, most detailed and granular administrative data ever, preferably a universe if possible. Data so perfect as to allow for contributions unassailable in their legitimacy. Do you have friends at the Danish Census? If you want tenure you should probably start flirting with someone at the Danish Census.

So a paper was a paper. Until it wasn’t a paper anymore. Until that wasn’t a paper anymore. Until that wasn’t a paper. The Recursive Dundee Theory of Research*, if you will. They all met the criteria of a contribution, until they didn’t.

So what does this mean for AI and research papers now? Well, if we look to thermodynamics in the 40s and cheap computing power in the 90s for analogues, then I’d say it’s going to reshape the criteria for a contribution in no small part because it lowers the cost of mediocrity. Mediocre analysis will no doubt persist, but it will shift over into blog posts and journals no one acknowledges as legitimate. Do remember, please, that mediocrity is a relative concept. The quality of blog posts and publications in scam journals will likely massively improve as what can be accomplished in an afternoon’s work is radically increased. Don’t worry, I have no intention of improving beyond my current warm bath of blogging unremarkableness, but others will likely cave in to the pressure.

What about the papers in top journals, though? The papers Tyler is presumably talking about. Will AI kill those economic research papers? Probably not, but it will likely improve them significantly. Why? For the same reason that Michael Kremer says that technology and quality of life improve with the size of the human population. More people means more ideas, and there is nothing more important to economic growth than the sheer number of ideas. And no, I do not mean ideas generated by AIs. I mean the raw number of researchers with the capacity to make major contributions is increasing dramatically because we’re all getting research assistants. We’re all getting copy editors. We’re all getting support. That’s how AI is going to change the research paper: by giving more ideas the support they need to reach the light of publication. The bar is going to get higher for the same reason that the level of sports improves as you widen the geography they pull from. There’s someone at a directional state school who didn’t get the placement they deserved out of grad school. Sure, they have to teach a 3-3 load, but they’re licking their chops right now because they no longer need an army of grad assistants. Summer is here and they’ve got everything they need to make a contribution.

Or I don’t know. Maybe AI will do all of our thinking in 50 years. Forecasting technology beyond 5 years is like forecasting weather beyond 5 days: I can’t do it and neither can you.**

*Apologies to Justin Wolfers and all my Aussie friends for a bit of cultural appropriation. I promise to put some Vegemite on toast while enjoying a flat white and explaining Aussie Rules Football to a friend within 90 days.

**Except for Neal Stephenson. That guy’s the Warren Buffett of sci-fi forecasting. Maybe he’s the one-in-a-billion person actually experiencing one-in-a-billion level luck, but that doesn’t make it any less impressive.

How do Income Tax Brackets Work?

I was listening to an episode of The Deduction, a podcast by the Tax Foundation. As if that first sentence isn’t evident enough, I was reminded of how confusing taxes are – period. Even experts disagree and see grey areas. As I was listening, I thought “man, they need a graph”. So, here we are.

Income Tax Vocabulary

The money that you are paid by your employer is your gross income. Not all of it is taxable. You can deduct money from your gross income to get your taxable income. Most people subtract the ‘standard deduction’ from their gross income, which is how I’ll proceed in this post. Since the standard deduction for 2026 is $16,100 for a single earner, that means that your taxable income is $16,100 less than your gross income. By following a formula, one can calculate the amount of money that they must pay the government. These payments can be all at once, throughout the year, or even directly from your paycheck. The total that’s due to the government by April 15 is called the total tax liability. Finally, the money that the government doesn’t take, and that you get to keep, is called your net income. It’s your income net of taxes.
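As a minimal sketch of how those pieces fit together (using a made-up gross income and the 2026 standard deduction from above):

gross_income = 50_000            # hypothetical salary, just for illustration
standard_deduction = 16_100      # 2026 standard deduction for a single filer
taxable_income = max(gross_income - standard_deduction, 0)    # 33,900
# total_tax_liability then comes from the bracket formula in the next section,
# and net_income = gross_income - total_tax_liability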

If you’ve had a job, then you are probably most familiar with your gross income, what your employer pays you, and your net income, what you get to take home. The steps in between might include some hand-waving.

Marginal Tax Rates

One of the most confusing pieces of the income tax code is marginal income taxes. Below are the brackets for 2026.

Marginal tax rates work like this: Every dollar that you earn faces a tax rate. If your deductions exceed your gross income, then your taxable income is zero and you pay zero in taxes. But if your taxable income is $5k, then it gets taxed at a rate of 10%. That part should be pretty straightforward. But what if your taxable income is $15k? According to the table, you face a tax rate of 10% for dollars earned up to $12,400. That would be a tax liability of $1,240. But the remainder of your $15k in taxable income falls in the next tax bracket. That portion of your taxable income faces a tax rate of 12%. Sticking with the example, $2,600 is in the 12% tax bracket, so the tax liability for that portion of your taxable income is $312 (=$2.6k*0.12). Therefore, your total tax liability would be the sum of your tax liabilities across all applicable tax brackets: $1,552 (=$1,240+$312).
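Here is that same arithmetic as a small Python sketch. It includes only the two brackets mentioned above; the actual 2026 schedule continues with more brackets at higher rates, which are omitted here:

# Partial 2026 single-filer schedule: (lower bound of bracket, marginal rate).
# Only the brackets discussed above are included.
BRACKETS = [
    (0, 0.10),        # 10% on taxable income up to $12,400
    (12_400, 0.12),   # 12% on the portion above $12,400
]

def tax_liability(taxable_income):
    """Sum the tax owed within each bracket the income reaches."""
    liability = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if taxable_income > lower:
            liability += (min(taxable_income, upper) - lower) * rate
    return liability

print(tax_liability(15_000))   # 12,400*0.10 + 2,600*0.12 = 1,552.0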

There are some features of marginal tax rates that are worth mentioning. Since the tax rates on the lower taxable income brackets don’t change, earning more gross income never reduces your net income unless the tax rate exceeds 100% (which it doesn’t here). So, when someone says that their taxable income is in the 35% tax rate bracket, they probably just mean that their last dollar earned is there. They’re only paying 35% on the taxable income that’s above $256,225. They’re not paying 35% of all earned dollars to the Internal Revenue Service (IRS).

Below is a graph that details the different marginal tax rates with shaded areas. The blue line is the average tax rate. It’s calculated by dividing the tax liability by the gross income. Even though one might earn an income that’s greater than $257k, where the marginal tax rate is 35% or greater, the average tax rate remains lower, topping out at about 30% in this figure. The average tax rate is lower than an earner’s top marginal tax rate because the income in those lower brackets never disappears or gets taxed at a higher rate.
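Continuing the sketch (still with only the two brackets above), the average tax rate is just the liability divided by gross income:

gross = 31_100                                    # 15,000 taxable + 16,100 deduction
avg_rate = tax_liability(gross - 16_100) / gross
print(round(avg_rate, 3))                         # ~0.05, i.e. about 5%
# With the full schedule, this ratio rises with income but always stays
# below the earner's top marginal rate, as the blue line in the figure shows.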

Continue reading

Most Published Research Findings Are Directionally Correct

As a new quick rule of thumb inspired by the Nature papers, you could do worse than “cut estimated effect sizes in half”. If a published paper says that a college degree raises wages 100%, then chances are the degree really does raise wages, but more like 40–50%. In 2005, John Ioannidis said that “most published research findings are false”. By 2026, we seem to have improved to “most published research findings are exaggerated.”

That’s the conclusion of my piece out today at Econlog: “Is Economics Finally Becoming Trustworthy?”

There’s plenty of good and bad news for economics and the social sciences in both my piece and the Nature special issue it describes. It’s kind of like the Our World in Data motto:

In short, our attempt to replicate hundreds of papers showed that published social science results still shouldn’t be trusted to be precisely right, but they seem to be getting more reliable over time, and they are much more reliable than chance. Economics and political science look the best, though we are still very far from perfect:

You can read the full piece here.

On implicit numéraire

Just a quick thought today. When we, economists or otherwise, talk about the opportunity cost of time, the most common default is an individual’s expected wage. This ends up becoming a sort of implicit numéraire, a unit of measurement and exchange that captures the value of an individual’s time.

Now, to be clear, this is a gross reduction of the complexity of opportunity cost and decision-making, but such reductionism is a necessity when observing the world on a day to day basis. People are generally, I hope, aware of this reductionism, but also understand that cognitive tractability is a necessity for getting through life. That also means, however, that there is no shortage of traps. If you reduce decision-making to a single variable equation, you can get yourself in a lot of trouble picking the wrong variable.

Which brings me back to expected wage as a single-variable numéraire revealing the opportunity cost of time. Sure, such a simple model is a great way of understanding why high-income CEOs outsource and delegate so many of their “life maintenance” tasks while I, for example, do not. That same logic, however, can be a trap when looking at decision-making at the other end of the income distribution. Why wouldn’t someone making minimum wage leave work to pick up their sick kid from school or bail their cousin out of jail? Their forgone wages, their opportunity cost of time, is relatively low, right?

Actually, no, it isn’t. In fact, their opportunity cost of time is exceptionally high; it’s just that you’re using the wrong numéraire. The opportunity cost of time isn’t the wages foregone, but rather the additional risk that they are taking on. It is quite common for individuals to lack the precautionary savings necessary to maintain solvency and housing stability during a dip in earnings or unexpected job loss. Nobody likes asking their boss if they can leave work for two hours on no notice when they can’t afford to risk losing an extra shift, let alone their job. The opportunity cost of their time is best measured in the marginal probability of household economic catastrophe rather than the explicit wages gained or lost.

A lot of economic decision-making is easy to make sense of when you get your single-variable numéraire right, but that is easier said than done. A good rule of thumb: if someone else’s decision-making looks grossly irrational to you, you probably aren’t using the right variable.

The Arithmetic of Family Punctuality

My children are getting more capable. With that capability comes independence, and with independence comes more responsibility. Specifically, when getting ready in the morning they like to leave so that they arrive at school just barely on time. Except, when something comes up, they are rushed, flustered, short-tempered, and tardy. They lament that “if only the unforeseeable event X hadn’t happened, I would have been on time”.

It doesn’t matter what X is. Maybe they forgot to pack a lunch, or set out their clothes, or they have a flat tire on their bikes, or… whatever. The specific time-consuming event is unforeseeable. But, that *any* time-consuming event will occur is very foreseeable. What’s a Bayesian to do?

Before we even start the analysis, let’s acknowledge that being perfectly on time for some event usually involves stress and a lack of preparedness. Yes, you were ‘on time’, but given the probability of heavier traffic, difficulty finding a parking spot, or whatever, we know that tardiness is just one unforeseen event away.

Individual Punctuality

How long does it take to get somewhere? It takes both travel time and time preparing to depart. Let’s just generally call this ‘preparation’ time. Let’s assume that you complete everything that you normally would. That means that you aren’t forgoing a shower or breakfast or whatever lower priority you might choose to forgo to arrive at some obligation punctually.

Random events can occur either as you travel to work or as you prepare to depart, but let’s place the random travel events to the side and focus on what one can do to get out of the house ‘on time’. In my personal case, my children have a 30min interval during which they can arrive at school. They almost never arrive in the first 15min of that interval. That’s more of a policy choice than an accident. They don’t want to sit in a cold gymnasium for 20min if it’s avoidable. So, their planned arrival time has an effective 15min window.

Here is the problem. A time-consuming random event, X, is a right-skewed random variable. Discretely, the modal day includes X=0min, though days with some delay are, taken together, more common. See the distribution below. A 0min random event occurs 35% of the time. But a time-consuming event happens 65% of the time. So, if you try to arrive exactly on time to your obligation, then you will be punctual 35% of the time and you will be tardy 65% of the time. That’s not a good look and not a good reputation to build – and that’s apart from building a habit of imprudence and the material consequence of not being ready for the task at hand.

Someone with just enough insight to be dangerous might say ‘Ah! Instead, leave with enough time to accommodate the expected unforeseen event’. Mathematically, that’s the weighted average. In this case, that’s six minutes. So, if you plan to arrive 6min early, then you will be punctual – on average. But even that’s not really what we’re after. We’d like to be on time for a preponderance of the days. Building in a 6-minute buffer does two things. 1) Every time that there is a 0min or 5min unforeseen event, you get to your destination 6min or 1min early. That’s good for your nerves, performance, and reputation. But, 2) that also means that you’re late whenever there is a 10min, 15min, or 20min unforeseen event – and those occur 35% of the time!
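Here’s a small Python sketch of the arithmetic. The 35% chance of no delay, the 6-minute mean, and the 35% chance of a 10-20 minute delay come from the discussion above; the exact split of probability across 10, 15, and 20 minutes is my own assumption, chosen to be consistent with those figures:

delay_dist = {0: 0.35, 5: 0.30, 10: 0.20, 15: 0.10, 20: 0.05}   # minutes: probability

mean_delay = sum(d * p for d, p in delay_dist.items())               # 6.0 minutes
p_late_no_buffer = sum(p for d, p in delay_dist.items() if d > 0)    # 0.65
p_late_6min_buffer = sum(p for d, p in delay_dist.items() if d > 6)  # 0.35

print(mean_delay, p_late_no_buffer, p_late_6min_buffer)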

Continue reading

GDP Forecasts for the First Quarter of 2026

Forecast models, betting markets, and surveys of experts all drastically overstated the actual growth of GDP in the last quarter of 2025. They were off in the initial release, which was just 1.4 percent, but this was revised down even further to 0.5 percent. All four of the sources I track were forecasting well over 2 percent, with some over 3 percent.

Does that mean we shouldn’t trust the forecasts? Perhaps, but last quarter was largely pulled down by government spending cuts, which the models completely missed. You can see this very clearly in the Atlanta Fed GDPNow model. Perhaps they shouldn’t have been surprised by this drop in government spending, but that is where the major error was.

So what do these forecasts think about the first quarter data for 2026, which comes out tomorrow? The two best predictors historically, GDPNow (Atlanta Fed) and Kalshi, are pretty far apart on this one, over a percentage point difference, with GDPNow being the only forecast under 2 percent:

Price Level: Noise vs Signal

My university recently hosted a guest speaker. Their presentation included some nominal macroeconomic values from pre-2020, back in the era when inflation was very low. That roughly includes the years 2012-2019. In fact, inflation stayed below 2% through February of 2021, but I think that we can all agree that the economy was different in a few ways beginning in 2020.

I asked the speaker why not express the nominal values in real terms. They were emphatic that the low rates of inflation at the time implied that the signal-to-noise ratio was too low. Therefore, the ‘real’ inflation adjusted values would not be more precise because excessive noise would be introduced into the series during a period when not much deflating was necessary in the first place.

My answer to this is a firm ‘maybe’. It makes sense and it’s plausible (Jeremy has written about error and revisions in the past). We can think about the noise in price indices in a few ways.

1) It may be that information is incomplete and becomes more complete as time passes. This sort of noise only exists in the short run and is resolved as more information becomes available later in time. Revisions tend to happen each month for prior months, as well as each year for prior years. There are also big revisions after methodological, consumption weight, and data source changes.

2) Another type of noise is due to incomplete information that is never resolved. After all, the government statisticians can’t see literally all of the transactions. Those unobserved transactions will never make it into the official inflation measures and we’ll never get a perfect picture.  

3) A third type of noise comes from methodological artifacts, including known biases. This type of noise doesn’t get corrected except after major changes to the series. If those changes never happen, then we just sort of live with imprecision. Luckily, so long as the bias is consistent, the percent change in the measured price indices will approximate the percent change in the true underlying levels, as the short sketch below illustrates. However, if there are non-random biases in the percent change itself, then it can cause some trouble.
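A quick numerical check of that last point, assuming the bias is a constant multiplicative factor on the level:

true_level = [100.0, 102.0, 104.0]               # hypothetical true index levels
biased_level = [x * 1.03 for x in true_level]    # same series with a constant 3% bias

pct_change = lambda s: [round((b / a - 1) * 100, 3) for a, b in zip(s, s[1:])]
print(pct_change(true_level))     # [2.0, 1.961]
print(pct_change(biased_level))   # identical: the constant bias cancels in the ratio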

One way to get an idea for the amount of noise in the data is to observe the magnitude of revisions. Of course, this only helps us with the first type of noise above that eventually gets resolved with more information. It’s much harder to get a handle on the imprecision that is not identifiable. The Philadelphia Federal Reserve Bank provides an easy-to-use database that puts all of the archival and revised numbers for many macro series in a single place: the Real-Time Data Set (RTDS). It includes every historical PCE price index value for each publication month. Let’s limit our sample to the 21st century.
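As a rough sketch of how one might size up that first type of noise using the RTDS, here’s some illustrative Python. The file name and column layout are assumptions for exposition (one row per observation month, one column per publication vintage), not the Philadelphia Fed’s actual file format:

import pandas as pd

# Hypothetical extract of the RTDS: rows are observation months, columns are vintages.
vintages = pd.read_csv("pce_price_index_vintages.csv", index_col="obs_month")
vintages = vintages.sort_index().loc["2000-01":]   # limit the sample to the 21st century

first_release = vintages.apply(lambda row: row.dropna().iloc[0], axis=1)
latest_value = vintages.apply(lambda row: row.dropna().iloc[-1], axis=1)

# Size of cumulative revisions, as a percent of the initially published level
revision_pct = 100 * (latest_value - first_release) / first_release
print(revision_pct.abs().describe())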

Continue reading

Raise Rates - But Not Because of Oil

Next week the Fed will almost certainly hold interest rates steady. Stephen Miran will probably dissent, saying the Fed should be cutting rates. Kevin Warsh, Trump’s nominee for Fed Chair, would also like to see cuts. But other prominent voices think that rising oil and gas prices mean we should be raising rates.

I still think that rate hikes make more sense than cuts - but not because of oil. The high oil and gas prices we’re seeing are obviously driven by supply shocks from the Iran war, not increasing demand. Raising rates to fight an oil shock would mean repeating a classic mistake.

But raising rates to fight core inflation that is at 3% makes perfect sense. Especially when inflation (overall or core) hasn’t been at or below the Fed’s supposed 2.0% target in over 5 years, and market forecasts predict it will stay well above 2.0% for the next 5 years.

Especially when real GDP is growing, NGDP is still above trend, and the unemployment rate is 4.3%. Financial conditions are so loose that stock markets are hitting all-time highs in the middle of a war.

Various Taylor Rules suggest that the Fed Funds rate should be between 4.25% and 6.25%, but the Fed currently has us at 3.75%.
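For reference, here’s a rough sketch of the classic Taylor (1993) rule. The 3% inflation figure and the 2% target come from above; the neutral real rate and the output gap are my own illustrative assumptions, which is exactly why different Taylor rules produce a range of prescriptions rather than a single number:

def taylor_rate(inflation, target=2.0, neutral_real=1.0, output_gap=0.0):
    # Taylor (1993): i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap)
    return neutral_real + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

print(taylor_rate(3.0, output_gap=0.5))                     # 4.75 with these assumptions
print(taylor_rate(3.0, neutral_real=2.0, output_gap=1.5))   # 6.25 with more hawkish inputs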

I see so many good arguments to raise rates - there is no reason to bring up a bad one like oil prices. If we must latch on to a headline to find the argument to raise rates, let’s focus on a shoe company’s stock going up 600% because they announced they were pivoting to AI.

Are Americans Thriving Under Trump? No, According to the Cost of Thriving Index

The Cost of Thriving Index from Oren Cass’s American Compass is an attempt to calculate how well US families are doing financially, but without using traditional inflation adjustments to income. Instead, Cass and crew have chosen 5 categories of goods and services, and tracked those over time relative to median earnings for men ages 25 and older (in the baseline model — it can also be applied to different categories of workers).
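For a rough sense of the mechanics, here’s a stylized sketch of a COTI-style calculation: the annual cost of a fixed basket divided by median weekly earnings, giving the number of weeks of work needed to cover the basket. Every dollar figure below is a placeholder, not American Compass’s actual data, and the category labels are only meant to be suggestive:

basket = {                       # hypothetical annual costs of the tracked categories
    "food": 10_000,
    "housing": 15_000,
    "health_care": 14_000,
    "transportation": 9_000,
    "education": 5_000,
}
median_weekly_earnings = 1_200   # hypothetical median weekly earnings, men 25+

weeks_to_cover = sum(basket.values()) / median_weekly_earnings
print(round(weeks_to_cover, 1))  # ~44.2 weeks of work to cover the basket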

Scott Winship and I wrote a detailed critique of the COTI, which I summarized in a previous blog post. Our critique comes from several angles, including correcting several major errors in COTI, as well as arguing that standard inflation adjustments to median income are superior to this new approach.

Based on our critique, I don’t think COTI is a very good measure of how well US families are doing financially. But the COT Index still has many fans. And Cass seems to think Trump is, in large part, pursuing policies that should help US workers and families, such as Trump’s tariff policies. Thus, it will be useful to see if Trump’s policies are leading to American workers “thriving” in the first year of Trump’s presidency.

Unfortunately, even using Cass’s preferred approach, Americans don’t appear to be thriving under Trump.

Continue reading