Is this the peak of inflation?

I think so, though the path back to 2% is a long one. Two months ago I wrote that “the Fed is still under-reacting to inflation”. We’ve had an eventful two months since; last Friday the BLS announced that CPI prices rose 1% in May alone, and that:

The all items index increased 8.6 percent for the 12 months ending May, the largest 12-month increase since the period ending December 1981

Then this Wednesday the Fed announced they were raising interest rates by 0.75%, the biggest increase since 1994, despite having said after their last meeting that they weren’t considering increases above 0.5%. I don’t like their communications strategy, but I do like their actions this month. This change in the Fed’s stance is one reason I think we’re at or near the peak.

It’s not just what the Fed did this week, it’s the change in their plans going forward. As of April, the Fed said the Fed Funds rate would be 1.75% in December, and markets thought it would be 2.5%. But now the Fed and markets both project 3.5% rates in December.

The other reason I’m optimistic is that the days of rapid money supply growth continue to get further behind us. From March to May 2020, the M2 and M3 supply exploded, growing at the fastest pace in at least 40 years:

Rapid inflation began about 12 months later. But the rate of money supply growth peaked in February 2021, then began a rapid decline. Based on the latest data from April 2022, money supply growth is down to 8%, a bit high but finally back to a normal range. Money supply changes famously influence prices with “long and variable lags”, so it’s hard to call the top precisely. But the fact that we’re now 15 months past the peak of money supply growth (and have stable monetary velocity) is encouraging. Old-fashioned money supply is the same indicator that led Lars Christensen to predict this high inflation in April 2021 after successfully predicting low inflation post-2009 (many people got one of those calls right, but very few got both).
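That 8% figure is just a twelve-month percent change in the money stock. A trivial sketch, with made-up M2 levels (in $ trillions) rather than actual Fed data:

```python
# Year-over-year money supply growth: percent change over twelve months.
# The M2 levels below are illustrative, not actual Federal Reserve figures.

def yoy_growth(current: float, year_ago: float) -> float:
    """Percent change from the level twelve months earlier."""
    return 100.0 * (current / year_ago - 1.0)

# e.g. M2 of $21.6T today vs $20.0T a year ago
print(round(yoy_growth(21.6, 20.0), 1))  # 8.0
```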

Stocks also entered an official bear market this week (down 20% from highs), which is both a sign of excess money no longer pumping up markets, and a cause of lower demand going forward.

Markets seem to agree with my update: 5-year breakevens have fallen from a high of 3.6% back in March down to 2.9% today, implying 2.9% average inflation over the next 5 years. Much improved, though as I said at the top the path to 2% will be a long one- think years, not months. Even the Fed expects inflation to be over 5% at the end of this year, and for it to fall only to 2.6% next year.
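Breakevens are read straight off bond markets: the gap between a nominal Treasury yield and the TIPS (real) yield of the same maturity is the average inflation rate that would make the two bonds pay off equally. A minimal sketch, with illustrative yields rather than actual quotes:

```python
# Breakeven inflation: nominal Treasury yield minus the TIPS yield of the
# same maturity, giving the market-implied average inflation over that horizon.
# Yields below are illustrative numbers, not real market quotes.

def breakeven(nominal_yield: float, tips_yield: float) -> float:
    """Market-implied average annual inflation, in percent."""
    return nominal_yield - tips_yield

# Illustrative 5-year yields: 3.4% nominal, 0.5% real
implied = breakeven(3.4, 0.5)
print(f"5-year breakeven inflation: {implied:.1f}%")  # 2.9%
```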

What am I still worried about? The Producer Price Index is still growing at 20%. The Fed is raising rates quickly now but their balance sheet is still over twice its pre-Covid level and is shrinking very slowly. The Russia-Ukraine war drags on, keeping oil and gas prices high, and we likely still have yet to see its full impact on food prices. Making good predictions is hard.

While I’m sticking my neck out, I’ll make one more prediction, though this one is easier- Dems are in for a bad time in November. A new president’s party generally does badly at his first midterm, as in 2018 and 2010. But this time the economy will be a huge drag on top of that. November is late enough that the real economy will be notably slowed by the Fed’s inflation-fighting effects, but not so late that inflation will be under control (I expect it to be lower than today but still above 5%). Markets currently predict a 75% chance that Republicans take the House and Senate in November, and if anything that seems low to me.

No in-group bias from financial choices in latest experiment

“How Dictators Use Information about Recipients” is my new project with Laura Razzolini. A working paper is up at SSRN. We use the Dictator Game to measure if people are generous toward others who made a similar choice.

In the first stage of the experiment, every player gets to make their own choice about whether or not to invest in a risky option (called Option B). Players can pick Option A if they do not want to invest.

In the second stage, participants get to decide if they will send any money to another anonymous player. If a “dictator” (the person who determines the final allocation of money) decided to take the risk on Option B in stage 1, would they be more generous toward a counterpart if they know that person also picked Option B?

We explain in our paper why the literature indicates such a form of favoritism could be expected.

Social identity theory is the psychological basis for intergroup discrimination. Economic experiments have created feelings of group identity in various ways, leading to significant effects on behavior. Chen & Li (2009) demonstrate that group identity formation can affect social preferences.

Chen and Li (2009) started by having subjects review paintings by two different modern artists. The subjects were divided into two groups, based on their reported painting preferences. Subjects were informed about their group membership by the experimenter.

The Chen and Li paper has been cited almost 2000 times. Group identity is a topic of interest. Several experimental papers demonstrate that strangers can have team feelings induced quickly with the right procedures. Those team loyalties affect behavior in incentivized tasks.

Group feelings artificially induced in the lab by Eckel & Grossman (2005) influence levels of cooperation and contributions to public goods. Pan & Houser (2013) induce group identities by asking subjects to complete tasks in groups. Pan & Houser (2019) found that investors trust in-group members more. The in-group has been induced in several different ways in lab experiments. In this paper, we investigate whether in-group effects arise from making a common financial decision in the first stage of the experiment.

Do you think our manipulation in the beginning affected giving?

Nope. There was no effect. Dictators who chose Option B did not give more to recipients who also chose Option B.

Not every result in the paper is a null result. One piece of information caused a large increase in giving. If we inform the dictator that their counterpart started with less money in the first stage (due to bad luck), then the dictator gives more. Sympathy was inspired, as we predicted, by knowing that a recipient was “poor” in the experiment. Conversely, if dictators are informed that their counterpart is “rich”, then they excuse themselves from having to give up money to help.

Information about financial choices, at least in our sterile simple environment, neither polarized nor united the participants. Giving with only choice information was higher than giving to the “rich” but lower than giving to the “poor”. Lastly, we provided all of the information at once. With full information, dictators were still heavily influenced by the starting endowments, and choice information had no effect.

Understanding polarization is important. Humans exhibit tribal instincts to not help those who are perceived as different. In our experiment we seem to have found one difference that people are willing to tolerate or overlook.

See also my Works in Progress blog about polarization and a different experiment.  

References

Chen, Yan, and Sherry Xin Li. “Group Identity and Social Preferences.” American Economic Review 99, no. 1 (March 2009): 431–57.

Eckel, Catherine C., and Philip J. Grossman. “Managing Diversity by Creating Team Identity.” Journal of Economic Behavior & Organization 58, no. 3 (2005): 371–92.

Pan, Xiaofei, and Daniel Houser. “Why Trust Out-Groups? The Role of Punishment under Uncertainty.” Journal of Economic Behavior & Organization 158 (2019): 236–54.

Pan, Xiaofei Sophia, and Daniel Houser. “Cooperation during Cultural Group Formation Promotes Trust towards Members of Out-Groups.” Proceedings of the Royal Society B: Biological Sciences 280, no. 1762 (July 7, 2013): 20130606.

Unfashionable Investing

Investors such as mutual funds, index funds, and hedge funds tend to pick a particular strategy or asset type and stick with it. It’s what they know, it’s what they’re known for, and making major changes would often create legal difficulties; something marketed as a bond fund can’t suddenly switch to stocks even if they think stocks would do much better. Other types of investors like pension funds, endowments and individuals have more flexibility to change their strategies. These investors tend to chase performance, allocating to types of investments that have performed well recently. This can create fashions, types of investment strategies that become more popular for a few years.

These strategies might involve focus on a certain asset class (stocks / bonds / commodities / private equity / real estate / etc.), a certain sector or region within an asset class, a certain factor (value, growth, momentum), etc. It seems like institutional incentives, trend chasing, and FOMO lead people and institutions to over-allocate to strategies that have been successful in the last 1-5 years and under-allocate to those that haven’t. Everyone sees something has recently been successful, so they pile into it, which drives up prices and makes it look even more successful for a while; but eventually this drives things to be so clearly over-valued that there’s a crash, and the crash scares people away for years until it becomes clearly undervalued. Most recently, 2020-2021 saw people pile into growth/tech stocks and alternatives like SPACs/crypto, but the beginning of Fed rate hikes was the signal that the party was over, and people (over?)reacted by pulling out.

Given this, the ideal strategy is to show up right before the party starts, then leave right at the peak; but no one can time it that well. The possibly realistic alternative is to show up early when no one’s there, then leave right when the party’s getting good (Punchbowl Capital?). Timing and identifying which strategies are too hot and which cold enough (Glacier Capital? Cryo Capital?) is the biggest practical question in how to pull this off. The simplest/dumbest way to do it is to avoid timing decisions entirely and just invest fixed proportions into all strategies; when they’re over-valued your fixed investment doesn’t buy many shares, and when they’re under-valued it buys lots. This actually sounds like a decent way to go, but it’s more buying into the Efficient Market Hypothesis than beating it; can we do better? Here are the types of meta-strategies I’m planning to look into:

  • How variable is the timing of strategy booms/busts? Could you possibly just use fixed numbers of months/years- if a strategy’s been hot this long get out, if it’s been cold this long get in?
  • Use market share numbers, get in when something gets below a certain % of the market and out when it gets above
  • Use valuation numbers like P/E ratios (seems to work well for the overall stock market, may be harder to measure for some strategies/classes)
  • Flow of funds- is there a rate of change that works as a trigger?
  • Proportion of major institutions allocating to each strategy
  • What looks promising right now along these lines (May 2022)? Without looking at the numbers, the perennial strategies that have been out-of-favor a few years seem like value, emerging markets, and commodities (though commodities might be too hot again just now). These (along with real estate; right now homes seem expensive but homebuilders are cheap and I think commercial is too) all did well after the 2000 tech crash
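The fixed-proportions baseline can be made concrete: hold constant target weights across strategies and rebalance every period, which mechanically buys whatever has fallen and trims whatever has risen. A minimal sketch; the strategy names and returns below are made up for illustration, not a real backtest:

```python
# Fixed-weight rebalancing: reset every sleeve to its target share of the
# portfolio each period. Names and returns are hypothetical illustrations.

def rebalance_fixed_weights(period_returns, weights, start=1.0):
    """Grow a portfolio rebalanced to `weights` at the start of each period.

    period_returns: list of dicts mapping strategy -> simple return that period
    weights: dict mapping strategy -> target weight (should sum to 1)
    """
    value = start
    for returns in period_returns:
        # each sleeve starts the period at its target share, then compounds
        value = sum(value * w * (1 + returns[s]) for s, w in weights.items())
    return value

weights = {"value": 0.25, "growth": 0.25, "commodities": 0.25, "real_estate": 0.25}
history = [
    {"value": 0.02, "growth": 0.10, "commodities": -0.05, "real_estate": 0.03},
    {"value": 0.04, "growth": -0.20, "commodities": 0.15, "real_estate": 0.01},
]
print(rebalance_fixed_weights(history, weights))
```

Note how in the second period the rebalanced portfolio holds less of "growth" (which had run up) than a buy-and-hold portfolio would, cushioning its crash.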

I’m obviously not the first person to think along these lines; the concepts of the commodity cycle and Shiller’s CAPE are related, and Global Macro and Multistrategy funds do some of this. In the latest AER: Insights, Xiao Yan and Zhang echo Robert Shiller and Paul Samuelson that predicting big things like this is actually easier than predicting little things like the valuation of a specific stock:

Samuelson’s Dictum refers to the conjecture that there is more informational inefficiency at the aggregate stock market level than at the individual stock level. Our paper recasts it in a global setup: there should be more informational inefficiency at the global level than at the country level. We find that sovereign CDS spreads can predict future stock market index returns, GDP, and PMI of their underlying countries. Consistent with the global version of Samuelson’s Dictum, the predictive power for both stock returns and macro variables is almost entirely from the global, rather than country-specific, information from the sovereign CDS market.

Ungated version here

But I haven’t actually heard of any fund focused on “unfashionable investing” that considers all asset classes and strategies like this. What institution out there would be capable of saying in 2021 “growth stocks are at bubbly levels, we’re switching to commodities”, or saying in 2022 “commodities are high and growth stocks crashed, we’re switching back”? Please let me know if such an institution does exist, or what else to read along these lines.

Get rich or get famous? Edward Thorp vs Myron Scholes

When finance professors publish papers claiming to find inefficiencies in asset markets, my initial reaction is skepticism. The odds are stacked against them to start since asset markets are mostly efficient. Then even if the inefficiency they found is real, shouldn’t they keep that fact to themselves and get rich trading on it?

But listening to a recent interview with Edward Thorp, I realized I shouldn’t entirely discount the possibility that someone would publish a real inefficiency, even a tradeable one. After all, Myron Scholes and Fischer Black did just that when they published the Black-Scholes model in the Journal of Political Economy. This made them famous on Wall Street and in econ/finance academia, and won Scholes the 1997 Nobel Memorial Prize in Economics.

Thorp explained that he had come up with a similar model years earlier, but instead of publishing it, he started a hedge fund and got rich. He says it makes sense that he didn’t share the Nobel Prize, partly because the Black-Scholes model was better than his, but mostly because you need to publish and share your ideas with the world to get scientific credit for them; his prize was 20% annual returns at his hedge fund.

Why do some opt to get rich, and others to get famous? I’d say academics’ first instinct is to publish everything rather than put it into practice. But Thorp was also an academic, a math professor. Thorp was already famous for publishing a book about how to beat the house at blackjack by counting cards (which is what I knew him for before this interview), so perhaps he valued additional fame less. But he was also already rich from winning at blackjack and from book sales.

Putting ideas into practice can also bring up unanticipated difficulties. When Myron Scholes finally did start working at a hedge fund in 1994 he saw initial success, but by 1998 it had become an embarrassing blunder that inspired the book “When Genius Failed: The Rise and Fall of Long-Term Capital Management”. Scholes may have been better off sticking to academic fame.

Black-Scholes formula for options pricing. The Efficient Markets Hypothesis says that markets instantly incorporate all public information, but original research like this isn’t public until you publish it, and even then it can take years for market participants to fully incorporate it.
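For reference, the Black-Scholes call price is simple enough to compute with only the standard library. The inputs below are illustrative, not tied to any real option:

```python
# Black-Scholes price of a European call, using math.erf for the normal CDF.
# Parameters are illustrative: S spot, K strike, T years to expiry,
# r risk-free rate, sigma annualized volatility.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes value of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money 1-year call: spot 100, strike 100, 5% rate, 20% vol
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 2))  # 10.45
```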

Why Many Substance Use Treatment Facilities Don’t Take Insurance

According to the latest data, about one in four facilities doesn’t accept private insurance or Medicaid, and more than half don’t accept Medicare. This makes substance use treatment something of an outlier, since 91% of all US health spending is paid for through insurance. Still, there are many reasons to prefer being paid in cash: insurance might reimburse at low rates, impose administrative hassles, and generally try to tell you how to run things.

Providers generally put up with the hassles of insurance because they see the alternative as not getting paid. But if demand for their services gets high enough that they can stay busy with patients paying cash, they will often try going cash-only. Some try to generate high demand by providing excellent service. Sometimes high demand comes from a growing health crisis, as with opioids.

Demand can also be high relative to supply because supply is restricted. US health care is full of supply restrictions, but in this case I wondered if Certificate of Need laws were playing a role. As we’ve written about previously, CON laws require health care providers in 34 states to get the permission of a government board to certify their “economic necessity” before they can open or expand. But there’s a lot of variation from state to state in what types of services are covered by this requirement; acute hospital beds and long-term care beds are most common. 23 states require substance use treatment facilities to obtain a CON before opening or expanding.

States with Substance Use–Treatment CON Laws in 2020. Created using data from Mitchell, Philpot, and McBirney

How do these laws affect substance use treatment? We didn’t really know- only one academic article had studied substance use CON, finding it led to fewer facilities in CON states. But I’ve studied other types of CON, so I joined forces with Cornell substance use researcher Thanh Lu and my student Patrick Vogt to investigate. The resulting article, “Certificate-of-need laws and substance use treatment“, was just published at Substance Abuse Treatment, Prevention, and Policy. Here’s the quick summary:

We find that CON laws have no statistically significant effect on the number of facilities, beds, or clients and no significant effect on the acceptance of Medicare. However, they reduce the acceptance of private insurance by a statistically significant 6.0%.

Overall I was surprised that CON didn’t significantly affect most of the outcomes we looked at, and appears to be far from the main reason that treatment facilities don’t take insurance. Still, repealing substance use CON would be a simple way to improve access to substance use treatment, particularly since CON doesn’t appear to bring much in the way of offsetting benefits.

Going forward I aim to investigate how these laws affect health outcomes like overdose rates, and to dig more into the text of state laws and regulations to determine exactly what is covered by substance use CON in different states. As the article explains, we identified several errors in the official data sources we were using. This makes me worry there are more errors we didn’t catch, and there are certainly things the sources just don’t specify, like in which states the laws apply to outpatient facilities. So I hope we (or someone else) will have even better work to share in the future, but for now this article is as good as it gets, and we share our data here.

College Major, Marriage, and Children

The American Community Survey began in 2000, and started asking about college majors in 2009, surveying over 3 million Americans per year. This has allowed all sorts of excellent research on how majors affect things like career prospects and income, like this chart from my PhD advisor Doug Webber:

See here for the interactive version of this image

But the ACS asks about all sorts of other outcomes, many of which have yet to be connected to college major. As far as I can tell this was true of marriage and children, though I haven’t searched exhaustively. I say “was true” because a student in my Economics Senior Capstone class at Providence College, Hannah Farrell, has now looked into it.

The overall answer is that those who finished college are much more likely to be married, and somewhat more likely to have children, than those with no college degree. But what if we regress marriage and children on the 39 broad major categories from the ACS (along with controls for age, sex, family income, and unemployment status)? Here’s what Hannah found:

Graduates of every major except “military technologies” are significantly more likely than non-college-grads to be married. The smallest effects are for pre-law, ethnic studies, and library science majors, who are about 7pp more likely to be married than non-grads. The largest effects are for agriculture, theology, and nuclear technology majors, each about 18pp more likely to be married.

For children the story is more mixed; library science majors have 0.18 fewer children on average than non-college-graduates, while many majors have no significant effect (communications, education, math, fine arts). Most majors have significantly more children than non-college-graduates, with the biggest effects coming from theology and construction majors (0.3 more children than non-grads).

In this categorization the ACS lumps lots of majors together, so that economics is classified as “Social Sciences”. When using the more detailed variable that separates it out, Hannah finds that economics majors are 9pp more likely than non-grads to be married, but don’t have significantly more children.
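This is not Hannah's data or code, but a toy sketch of the kind of regression described above: a linear probability model of marriage on major dummies, with non-graduates as the omitted category and one control. The data here are simulated, not the ACS:

```python
# Toy linear probability model: marriage regressed on major dummies plus age.
# Everything here is simulated for illustration; the "true" effects are chosen
# to echo the magnitudes discussed above (econ +9pp, theology +18pp).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
major = rng.integers(0, 3, n)   # 0 = no degree (omitted), 1 = econ, 2 = theology
age = rng.integers(25, 65, n)

# simulated probability of being married
p = 0.40 + 0.09 * (major == 1) + 0.18 * (major == 2) + 0.004 * (age - 45)
married = (rng.random(n) < p).astype(float)

# design matrix: intercept, econ dummy, theology dummy, age control
X = np.column_stack([np.ones(n), major == 1, major == 2, age])
beta, *_ = np.linalg.lstsq(X, married, rcond=None)
print(beta[1], beta[2])  # estimated marriage premia over non-grads
```

With enough observations the estimated coefficients land near the simulated +9pp and +18pp effects, which is all a linear probability model with dummy regressors is doing under the hood.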

I love teaching the Capstone because I get to learn from the original empirical research the students do. In a typical class, one or two students write a paper good enough that it could be published in an academic journal with a bit of polishing, and this was one of them. But it’s also amazing how many insights remain undiscovered even in heavily-used public datasets like the ACS. We’ve also just started to get good data on specific colleges; see this post on which schools’ graduates are the most and least likely to be married.

John List, Dramatist

As someone who has dabbled in lab experiments for over a decade, I’m familiar with complaints about external validity. If an experiment is run with only college students, then how can we know whether the finding will generalize to other populations? It’s a question worth asking, but many questions are worth asking, and it doesn’t mean that controlled experimentation can’t add value to the economics literature. In this age of general suspicion of small studies, people say that replications are needed: we should only trust a conclusion that is supported by multiple studies. The thing about replications, though, is that the process has to start somewhere. Empirical work has to get read and published. Replications are composed of individual studies.

I just met John List at the Alabama stop on his epic national book tour. He directed me to his work of art: Ungated Link. He wrote a play in response to the attacks on his work concerning external validity. His rhetorical strategy is to make his critics look obtuse. Even though the play is absolutely silly (and thoroughly entertaining), he builds a strong defense of doing experiments. It is literally presented as the arguments of a defense lawyer. Before the trial begins, a “reporter” summarizes the conflict that has created the need for a formal trial:

Court Reporter Clifton Hillegass: Thank you Judge Learner. While it is never easy to convey succinctly the key points of a debate, this dispute has crystallized in a manner that leaves no middle ground. The prosecution, led by Mr. Naiv Ete, argues that all empirical work in economics must pass a set of necessary external validity conditions before being published in academic journals or used by policymakers. To date, in this courtroom no empirical work has passed his conditions, effectively rendering the question of generalizability beyond dispute, or as Livius Andronicus reminded us, Non est Disputandum de Generalizability. Ms. Minerva, Lead Defense, has argued that this line of reasoning leaves only theoretical exercises and thought experiments to advance science and guide policymaking, an approach that she fears will return us to the dark ages.

The paper is called “NON EST DISPUTANDUM DE GENERALIZABILITY?” It’s a good refresher on the history of science, not just economics.

Maybe the first best is for you to spend your weekend reading dense technical papers. But if you aren’t feeling up to that, then this play will make you feel like you learned something without even trying.

I’ll link this up to some of the posts I wrote last year about experiments and critics:

Calling Behavioral Economics a Fad

Behavioral Economist at Work

Health Insurance Benefit Mandates and Health Care Affordability

My article on benefit mandates was published today at the Journal of Risk and Financial Management. It begins:

Every US state requires private health insurers to cover certain conditions, treatments, and providers. These benefit mandates were rare as recently as the 1960s, but the average state now has more than forty. These mandates are intended to promote the affordability of necessary health care. This study aims to determine the extent to which benefit mandates succeed at this goal.

I began my research career by writing about these mandates, and my goal with this article was to tie up that whole chapter. I realized that all my articles on benefit mandates, as well as most of what other economists write about them, simply try to measure their costs- how much they raise health insurance premiums, raise employee contributions to premiums, lower wages, lower employment, or harm smaller businesses. It’s good to know their costs, but to really evaluate a policy we should learn about its benefits too, so that we can compare costs and benefits.

One key benefit that had yet to be measured was how much a typical mandate lowers out-of-pocket health care costs. In this article, I estimate that the average benefit mandate lowers costs by 0.8%-1%. I argue that combining this with a measure of how mandates affect total health spending by households could provide a sufficient statistic for the net benefits of mandates for households. I’m not totally confident this works in theory though, and it has a big challenge in practice- one of my empirical strategies finds that mandates reduce total spending, but the other finds they don’t. So I think the main contribution of the article ends up being the first estimate of how the average state health insurance benefit mandate affects out-of-pocket costs.

I’m currently planning to move on from writing about mandates- other topics are catching my eye, state policymakers don’t seem to particularly care what the research says about mandates, and changes in how economists use difference-in-difference methods are making it harder to publish articles like this that study continuous treatments. But I think there are still big opportunities here for anyone who wants to take up the torch. First, the ACA Essential Health Benefits provision changed the game for state mandates in a way that I have yet to see the empirical literature grapple with. Second, there are more than a hundred separate types of state benefit mandates; in most of my articles I aggregate them but they should really be studied separately. A handful have been, such as mandates for autism treatments, infertility treatments, and telemedicine. But the vast majority appear to be completely unstudied.

P.S. Writing this article gave me two wildly varying opinions of our federal bureaucracy. I tried to get both data and funding from the Agency for Healthcare Research and Quality for this article. The data side worked well- they were surprisingly fast, efficient and reasonable about the process of accessing restricted data. On the other hand, I applied for funding from AHRQ in March 2019 and still have yet to officially hear back about it (it is “pending council review” in NIH Commons). This sort of thing is why nimble organizations like Fast Grants can do so much good despite having much smaller budgets.

P.P.S. This article is part of a special issue on Health Economics and Insurance that is still accepting submissions. I’m the guest editor and would handle your submission, though my own was handled by other editors and put through multiple rounds of revisions.

Mises’s Bureaucracy, a Recap

My favorite two economists are Ludwig von Mises and Milton Friedman. They might be considered to come from very different schools of thought, though there is reason to think that they are not so different. As an undergraduate student, I liked them both, but I became more empirics-minded in graduate school and as a young assistant professor.

As I progressed through graduate school and conducted empirical research, my opinions and policy prescriptions changed and were refined from what they once were. In graduate school, I didn’t study Austrian Economics, though it was certainly in the water at George Mason University. Recently, as an assistant professor with a few years under my belt, I picked up Bureaucracy (1944) and read it as a matter of leisure.

One word:

Continue reading

Day care and new pre-K findings

There was a buzz over a new study showing that pre-K is not necessarily good for children. It’s amazing how experts can be completely surprised by the results of a major study on an issue like pre-K education.* Noah Smith summarized the literature and thought through some policy implications. Emily Oster also just summarized the paper and points out that it provides almost no help for parents making decisions. **

I’ll offer some “amateur astronomer” observations about preschool and childcare.

What should I call the daycare I patronize, since it offers all of the pre-K functions? I’ll call it Day-K. My kid comes home from Day-K with worksheets difficult enough for a kindergartener, but they were handed to a 3-year-old who just scrawled a few lines of crayon across them. Most little kids aren’t going to retain material that is beyond their developmental level. Why bother printing these nice worksheets at all instead of just letting them color a bear?

Something that surprised me was how early kids can learn the alphabet and yet how disconnected that is from anything useful such as being able to read words. If a 2-year-old can do it (e.g. recognize “A”) then a 4-year-old can probably pick it up easily anyway.

Good private daycares in desirable urban areas are expensive but have unbelievable waitlists. Donald Shoup advocates that cities should charge more for parking. He reasoned that every city block should have an open parking space: instead of spending valuable time circling like a vulture, you can either pay a lot for convenient parking or know you will have to go somewhere else. Would the same logic apply to the good daycares? Shouldn’t they charge so much that there is always an open slot for the next parent who can pay? One issue with this from the daycare owner’s perspective is that they don’t want new kids cycling through constantly. A brand-new kid who does not trust the staff and has not learned the routine is a temporary disaster. I believe that the waitlists work because the owners want a predictable flow of great committed customers. By keeping fees low enough to have a long waitlist, they get good families to stay, and they can easily fill any holes left by departures or dismissals.

If the program were free, I suspect that would change the dynamic inside compared to high-fee Day-K. Daycare kids are on a regimented schedule. Everyone thrives on the routine. The staff are happy when the kids know the rules. If people were coming and going unpredictably, that might make it harder for kids to learn.

Even under optimal conditions, there are scuffles at daycare. Being pushed down on the playground is often the only thing a kid will remember from a full day of “instruction”. How could pre-K actually negatively affect some kids, as the new study shows? One way I can think of is that the experience a good teacher tries to provide could be ruined by one kid who is loud or violent. If half of the classes function as day care and have no impact at all on future outcomes, while the other half have a kid hitting (and thus a negative effect), then the average effect for all pre-K classes could be negative. The social environment of pre-K is probably highly variable. Sometimes you could get a great social atmosphere in which kids learn to share and sing. Sometimes the chaos level could make things difficult, I imagine. This is speculative. But I think it’s ok to speculate in the brainstorming period that should follow a surprising result.

Daycare centers have a fantastic physical environment. When I think of the returns to scale, the low tables and chairs that fit 3-year-olds perfectly come to mind. A preschool classroom has a perfect bathroom with low toilets and sturdy step stools at the sinks. There is no heirloom china or nice upholstery in the room to worry about. There are dozens of age-appropriate toys, and craft supplies can be bought in bulk. This physical environment allows kids to be creative and have fun. Adults don’t have to hover over them, afraid that they’ll hurt themselves or break something at any moment. By contrast, having a 2-year-old child roam my house was terrible. I kick myself for not making more up-front investments in kid-proofing and creating safe play areas. But it’s expensive and difficult for a parent to outfit their own home perfectly for each stage of development. The great thing about a daycare classroom for 3-year-olds is that it is perfectly fitted for 3-year-olds, because 3-year-olds will be cycling through it for the next decade. The physical scale factor makes me a daycare optimist for urban areas. However, as I wrote earlier, things could be trickier for low-density areas.

The study has given us a lot to think about. I hope the research community can be helpful in continuing to figure out the puzzle.

One thing we can conclude, as Noah says in his blog, is that compulsory universal pre-K would be bad. Forcing families to send 4-year-olds to an institutional program (the way schooling is regulated for 5- to 16-year-olds) would be an expensive “own goal” policy. I don’t know of anyone seriously considering that, which hopefully means that nobody is.

* As a lab experimentalist, I’m used to being surprised by data. Check out this podcast just recorded with John List. He talks about surprising findings from field experiments. You never know until you run the experiment. Hence my post in September responding to a rant about behavioral economics.

** Yesterday, Emily Oster announced that she is leaving Twitter because it had become a toxic place for her. You can still find her on Substack, Instagram, and other traditional publishing outlets (e.g. her books).