Why Avocado on Toast?

We’ve all heard the stereotype: millennials eat avocado toast (so say the older generations). The uncharitable version is that they can’t afford cars, houses, et cetera because of their otherwise expensive consumption habits. And avocado on toast is the standard-bearer for that spendthrift consumption.

I’m here to tell you that it’s a bunch of nonsense and that the older folks are just jealous. Millennials, those born between 1981 & 1996, weren’t intrinsically destined to spend their money poorly out of some generational sense of entitlement. Nor did the financial crisis imbue them with a mass desire for small but still affordable treats. The reason that millennials got the reputation for eating avocado on toast is that 1) it’s true, 2) they could afford it, and 3) older generations didn’t even have access to it.

3 Great Habits from 3 Great Economists

Jumping right in:

Acknowledge Biases

Have you ever tried to do something objectively? It’s impossible. We might try, but how do we know when we’ve failed to compensate for a bias or when we’ve overcompensated? Russ Roberts taught me that 1) all people have biases, 2) all analysis is done by people, & 3) analysis should be interpreted conditional on the bias – not discarded because of it.

The only people who don’t have biases are people without values – which is no one. We all have a priori beliefs that color the way we understand the world. Recognizing that is the first step. The second step is to evaluate your own possible biases, or the biases in someone else’s work. They may have blind spots or points of overemphasis. And that’s OK. One of the best ways to detect and correct these is to expose your ideas and work to a variety of people. It’s great to talk to new people and to have friends who are different from you. They help you see what you can’t.

Finally, because biases are something that everyone has, they are not a good reason to dismiss a claim or evidence. Unless you’re engaged in political combat, your role is usually not to defeat an opponent. Rather, we want to believe true things about the world. Let’s get truer beliefs by peering through the veil of bias to see what’s on the other side. For example, everyone who’s ever read Robert Higgs can tell that he’s biased. He wants the government to do much less, and he’s proud of it. That doesn’t sit well with many readers. But it’d be intellectually lazy to dismiss Higgs’ claims on these grounds. Higgs’ math and statistics work no differently than his ideological opponents’. It’s important for us to distinguish the claims that reflect an author’s values from the claims that reflect the author’s work. If we focus on the latter, then we’ll learn more true things.

Know Multiple Models

In economics, we love our models. ‘Model’ is just a fancy word for ‘argument’. That’s what a mathematical model is: an argument that asserts which variables matter and how. Models help us make sense of the world. However, different models are applicable in different contexts. The reason that we have multiple models rather than one big one is that they act as shortcuts when we encounter different circumstances. Understanding the world with these models requires recognizing context clues so that you apply the correct model.

Models often conflict with one another or imply different things for their variables. This helps us 1) understand the world more clearly, and 2) discriminate between models and judge which is applicable to the circumstances. David Andolfatto likes to be clear about his models and wants other people to do the same. It helps different people cut past the baggage that they bring to the table and communicate more effectively.

For example, power dynamics are a real thing and matter a lot in personal relationships. I definitely have some power over my children, my spouse, and my students. They are different kinds of power with different means and bounds, but it’s pretty clear that I have some power and that we’re not equals in practice. Another model is the competitive market model, which is governed by property rights and consensual transactions. If I try to exert power in this latter circumstance, then I may end up not trading with anyone and forgoing gains from trade. It’s not that the two models are at odds. It’s that they are theories for different circumstances. It’s our job to discriminate between the circumstances and between the models. Doing so helps us understand both the world and one another better.

DID Explainer and Application (STATA)

The Differences-in-Differences (DID hereafter) literature has blown up in the past several years. DID refers to a statistical method that can be used to identify causal relationships. If you’re interested in using the new methods in Stata, or just curious what the big deal is, then this post is for you.

First, there’s the basic regression model where we have variables for time, treatment, and a variable that is the product of both. It looks like this:
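In standard two-period notation (the symbols match the discussion below, where β is the coefficient on the treatment-group indicator and δ is the coefficient on the interaction):

y = α + β·treated + γ·time + δ·(treated × time) + ε

Here treated equals 1 for the treatment group, time equals 1 in the later period, and δ, the coefficient on the product term, is the treatment effect we care about.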

The idea is that we can estimate the effect of time passing separately from the effect of the treatment. That allows us to ‘take out’ the effect of time’s passage and focus only on the effect of the treatment. Below is a common way of representing what’s going on in matrix form, where the estimated y, yhat, appears in each cell.
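Writing each cell’s yhat in terms of the regression coefficients (α the constant, β the treatment-group term, γ the time term, δ the interaction), the 2x2 layout is:

             time = 0        time = 1
Untreated    α               α + γ
Treated      α + β           α + β + γ + δ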

Each quadrant includes the estimated value for people in each category. For the moment, let’s assume a one-time wave of treatment intervention that is applied to a subsample, so that no one is treated in the initial period. If the treatment was assigned randomly, then β=0 and we can simply use the difference between the two groups at time=1. But if β≠0, then the difference between the treated and untreated groups at time=1 includes both the estimated effect of the treatment intervention and the pre-existing difference between the groups. In order to isolate the effect of the intervention, we need to take the second difference. δ is the effect of the intervention. That’s what we want to know. Once we have δ, we can start enacting policy and prescribing behavioral changes.
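The double difference is simple enough to compute by hand. Here is a toy Python sketch with made-up cell means (not from any real data set), following the 2x2 logic above:

```python
# Hypothetical cell means (yhat) for a 2x2 DID:
# keys are (group, time); the numbers are invented for illustration.
ybar = {
    ("untreated", 0): 10.0,  # alpha
    ("untreated", 1): 12.0,  # alpha + gamma (time effect = 2)
    ("treated", 0): 13.0,    # alpha + beta (group gap = 3)
    ("treated", 1): 19.0,    # alpha + beta + gamma + delta
}

# First difference: treated minus untreated, within each period.
gap_pre = ybar[("treated", 0)] - ybar[("untreated", 0)]   # beta
gap_post = ybar[("treated", 1)] - ybar[("untreated", 1)]  # beta + delta

# Second difference: the DID estimate of the treatment effect.
delta = gap_post - gap_pre

print(delta)  # 4.0
```

The pre-period gap (3.0) is β, the post-period gap (7.0) is β + δ, and differencing the two leaves δ = 4.0, the treatment effect.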

Easy peasy lemon squeezy. Except… What if the treatment timing differs across units and those different treatment cohorts have different treatment effects (heterogeneous effects)?*  What if the treatment effects change the longer an individual is treated (dynamic effects)?**  Further, what if there are non-parallel pre-existing time trends between the treated and untreated groups (non-parallel trends)?*** Are there design changes that allow us to estimate effects even if there are different time trends?**** There are more problems, but these are enough for more than one blog post.

For the moment, I’ll focus on just the problem of non-parallel time trends.

What if the untreated and the to-be-treated groups had different pre-treatment trends? Then, using the above design, the estimated δ doesn’t just measure the effect of the treatment intervention; it also picks up the effect of the different time trends. In other words, if the treated group’s outcomes were already on a non-parallel trajectory relative to the untreated group, then it’s possible that the estimated δ is not at all the causal effect of the treatment and that it’s partially or entirely detecting the different pre-existing trajectory.

Below are 3 figures. The first two show the causal interpretation of δ when β=0 and when β≠0. The 3rd illustrates how our estimated δ fails to be causal if there are non-parallel time trends between the treated and untreated groups. For ease, I’ve made β=0 in the 3rd graph (though it need not be – the graph is just messier). Note that the trends are not parallel and that the true δ differs from the estimated δ. Also important is that the direction of the bias is unknown without knowing the time trend for the treated group. It’s possible for the estimated δ to be positive, negative, or zero, regardless of the true δ. This makes knowing the time trends really important.
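To see the problem in miniature, here is a toy Python calculation (all numbers are made up; this is not output from any of the figures): with a true treatment effect of zero and a pre-existing trend gap, the double difference returns the trend gap rather than the treatment effect.

```python
# Illustrative numbers only: suppose the treated group's outcome was already
# growing 1.5 units per period faster than the untreated group's, and the
# true treatment effect is zero.
trend_gap_per_period = 1.5
true_delta = 0.0

# Cell means with beta = 0 (groups start equal) and a common time effect of 2:
y_untreated = {0: 10.0, 1: 12.0}
y_treated = {0: 10.0, 1: 12.0 + trend_gap_per_period + true_delta}

# The naive double difference:
estimated_delta = (y_treated[1] - y_untreated[1]) - (y_treated[0] - y_untreated[0])

print(estimated_delta)  # 1.5 -- entirely the pre-existing trend gap, not a treatment effect
```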

STATA Implementation

If you’re worried about the problems that I mention above, the short answer is that you want to install csdid2. This is the updated version of csdid & drdid. These allow us to address the first 3 asterisked threats to research design noted above (and more!). You can install them by running the below code:

program fra
    syntax anything, [all replace force]
    local from "https://friosavila.github.io/stpackages"
    tokenize `anything'
    if "`1'`2'"==""  net from `from'
    else if !inlist("`1'","describe", "install", "get") {
        display as error "`1' invalid subcommand"
    }
    else {
        net `1' `2', `all' `replace' from(`from')
    }
    qui:net from http://www.stata.com/
end
fra install fra, replace
fra install csdid2
ssc install coefplot

Once you have the methods installed, let’s examine an example by using the below code to load a data set. The particulars of what we’re measuring aren’t important. I just want to get you started with an application of the method.

local mixtape https://raw.githubusercontent.com/Mixtape-Sessions
use `mixtape'/Advanced-DID/main/Exercises/Data/ehec_data.dta, clear
qui sum year, meanonly
replace yexp2 = cond(mi(yexp2), r(max) + 1, yexp2)

The csdid2 command is nice. You can use it to create an event study where stfips is the individual identifier, year is the time variable, and yexp2 denotes the times of treatment (the treatment cohorts).

csdid2 dins, time(year) ivar(stfips) gvar(yexp2) long2 notyet
estat event,  estore(csdid) plot
estimates restore csdid

The above output shows us many things, but I’ll address only a few of them. It shows us how treated individuals differ from not-yet-treated individuals relative to the time just before the initial treatment. In the above table, we can see that the pre-treatment average effect is not statistically different from zero. We fail to reject the hypothesis that the treatment group’s pre-treatment average was identical to the not-yet-treated average over the same period. Hurrah! That’s good evidence for the parallel-trends assumption that underpins a causal reading of the post-treatment estimates. But… those 8 preceding periods are all negative. That’s a little concerning. We can test the joint significance of those periods:

estat event, revent(-8/-1)

Uh oh. That small p-value means that the 8 pretreatment periods jointly deviate significantly from zero. Further, if you squint just a little, the coefficients appear to have a positive slope such that the post-treatment values would have been positive even without the treatment, had the trend continued. So, what now?

Wouldn’t it be cool if we knew the counterfactual scenario in which the treated individuals had not been treated? That’s the standard against which we’d test the observed post-treatment effects. Alas, we can’t see what didn’t happen. BUT, asserting some premises makes the job easier. Let’s say that the pre-treatment trend, whatever it is, would have continued had the treatment not been applied. That’s where the honestdid Stata package comes in. Here’s the installation code:

local github https://raw.githubusercontent.com
net install honestdid, from(`github'/mcaceresb/stata-honestdid/main) replace
honestdid _plugin_check

What does this package do? It does exactly what we need. It assumes that the pre-treatment trend of the prior 8 periods continues, and then tests whether one or more post-treatment coefficients deviate from that trend. Further, as a matter of robustness, the trend that acts as the standard for comparison is allowed to deviate from the pre-treatment trend by a multiple, M, of the maximum pretreatment deviation from trend. If that’s kind of wonky, just imagine a cone that continues from the pre-treatment trend and plots the null hypotheses. Larger M’s imply larger cones. Let’s test whether the time-zero effect significantly differs from zero.
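Here is a rough Python sketch of that cone logic (my own simplification with made-up numbers, not the package’s actual computation):

```python
# Hypothetical deviations of the pre-treatment coefficients from the fitted
# linear pre-trend (invented numbers for illustration):
pre_deviations = [2.0, -1.0, 3.0, -2.0]
max_pre_dev = max(abs(d) for d in pre_deviations)  # largest pre-treatment deviation: 3.0

def null_band(trend_value, M):
    """Post-treatment values consistent with the null: the extrapolated
    pre-trend value plus or minus M times the largest pre-treatment deviation."""
    slack = M * max_pre_dev
    return (trend_value - slack, trend_value + slack)

# M = 0: the null is exactly the continued linear pre-trend (a zero-width band).
print(null_band(10.0, 0.0))  # (10.0, 10.0)
# M = 1: the band widens by the largest deviation seen pre-treatment.
print(null_band(10.0, 1.0))  # (7.0, 13.0)
```

As M grows, the band (the cone at each post-treatment period) widens, so it takes a larger post-treatment deviation to reject the null.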

estimates restore csdid
matrix l_vec=1\0\0\0\0\0
local plotopts xtitle(Mbar) ytitle(95% Robust CI)
honestdid, pre(5/12) post(13/18) mvec(0(0.5)2) coefplot name(csdid2lvec,replace) l_vec(l_vec)

What does the above table tell us? It gives us several values of M and, for each, the 95% confidence interval for the difference between the coefficient and the trend. The first CI is for the original time-0 coefficient. When M is zero, the null assumes the same linear trend as during the pretreatment period. Again, M is the ratio by which the maximum pretreatment deviation from trend is allowed as the null hypothesis during the post-treatment period. So, above, we can see that the initial treatment effect deviates from the linear pretreatment trend. However, if our standard is the maximum deviation from trend that existed prior to the treatment, then we find that the p-value is just barely greater than 0.05 (because the CI just barely includes zero).

That’s the process. Of course, robustness checks are necessary and there are plenty of margins for kicking the tires. One can vary the pre-treatment periods that determine the pre-trend, the post-treatment coefficient(s) to test, and the value of M that should be the standard for inference. The creators of honestdid seem to like the standard of identifying the minimum M at which the coefficient fails to be significant. I suspect that further updates to the program will spit that specific number out by default.

I’ve left a lot out of the DID discussion and why it’s such a big deal. But I wanted to share some of what I’ve learned recently with an easy-to-implement example. Do you have questions, comments, or suggestions? Please let me know in the comments below.


The above code and description are heavily based on the original authors’ support documentation and my own Statalist post. You can read more at the above links and in the below references.

*Sun, Liyang, and Sarah Abraham. 2021. “Estimating Dynamic Treatment Effects in Event Studies with Heterogeneous Treatment Effects.” Journal of Econometrics, Themed Issue: Treatment Effect 1, 225 (2): 175–99. https://doi.org/10.1016/j.jeconom.2020.09.006.

**Sant’Anna, Pedro H. C., and Jun Zhao. 2020. “Doubly Robust Difference-in-Differences Estimators.” Journal of Econometrics 219 (1): 101–22. https://doi.org/10.1016/j.jeconom.2020.06.003.

***Callaway, Brantly, and Pedro H. C. Sant’Anna. 2021. “Difference-in-Differences with Multiple Time Periods.” Journal of Econometrics, Themed Issue: Treatment Effect 1, 225 (2): 200–230. https://doi.org/10.1016/j.jeconom.2020.12.001.

****Rambachan, Ashesh, and Jonathan Roth. 2023. “A More Credible Approach to Parallel Trends.” The Review of Economic Studies 90 (5): 2555–91. https://doi.org/10.1093/restud/rdad018.

Update on Game Theory Teaching

I wrote at the end of the summer about some changes that I would make to my Game Theory course. You can go back and read the post. Here, I’m going to evaluate the effectiveness of the changes.

First, some history.

I’ve taught GT a total of 5 times. Below are my average student course evaluations for “I would recommend this class to others” and “I would consider this instructor excellent”. Although the general trend has been improvement, with both the ratings and the course itself improving along the way, some more context would be helpful. In 2019, my expectations for math were too high. Shame on me. It was also my first time teaching GT, so I had a shaky start. In 2020, I smoothed out a lot of the wrinkles, but I hadn’t yet made it a great class.

In 2021, I had a stellar crop of students. There was not a single student who failed to learn. The class dynamic was perfect and I administered the course even more smoothly. They were comfortable with one another, and we applied the ideas openly. In 2022, things went south. There were too many students enrolled in the section, too many students who weren’t prepared for the course, and too many students who skated by without learning the content. Finally, in 2023, the year of my changes, I had a small class with a nice symmetrical set of student abilities.  

Historically, I would often advertise this class, but after the disappointing 2022 performance, and given that I knew I would be making changes, I didn’t advertise the 2023 section. That part worked out perfectly. Clearly, there is a lot of random stuff that happens that I can’t control. But my job is to get students to learn, to help the capable students excel, and to not make students *too* miserable in the process – no matter who is sitting in front of me.

Wrapping Up & Sneak Peeks

I’m wrapping up grading for the semester, so this one is super short. What will I be writing about in the upcoming weeks? Here’s a sneak peek:

  1. I will read the course evaluations and let you know how my Game Theory Course changes fared.
  2. I’ll discuss a little bit of the new DID Stata methods. I’ll keep it short and sweet and provide an example.
  3. I want to share some thoughts on objectivity, unreasonable academic charity, and our ability to interpret evidence using multiple models.
  4. Squeezing out more time efficiencies in your home life (Especially for parents)
  5. There are too many A’s in my Principles of Macroeconomics class.

That’s what’s on the horizon. I’ll link back here to stay on track. Have a great weekend!

House Rich, House Richer

The third quarter ‘All Transactions’ housing price data was just released this week. These numbers are interesting for a few reasons. One reason is that home prices are a big component of our cost of living, so higher home prices are relevant to housing affordability. This week’s release is especially interesting because it’s starting to look like the Fed might be pausing its 18-month streak of interest rate hikes. In case you don’t know, higher interest rates increase the cost of borrowing and decrease the price that buyers are willing to pay for a home. Nationally, we had only one quarter of falling home prices, in late 2022, but the recent national growth rate in home prices is much slower than it was from 2021 through mid-2022.

Do you remember when there were a bunch of stories about remote workers and early retirees fleeing urban centers in the wake of Covid? We stopped hearing that story so much once interest rates started rising. The inflection point in the data was in Q2 of 2022. After that, price growth started slowing, with the national average home price up 6.5%. But the national average masks some geographic diversity.

Growth of the Transfer State

I’ve written about government spending before. But not all spending is the same. Building a bridge, buying a stapler, and taking from Peter to pay Paul are all different types of spending. I want to illustrate that last category. Anytime the government gives money to someone without purchasing a good or service or making an interest payment, it’s called a ‘transfer’. People get excited about transfers. Social Security is a transfer, and so are unemployment insurance benefits. Those nice Covid checks? Also transfers.

Here I’ll focus on federal transfers, though the data on all transfers looks very similar if you include states in the analysis. Let’s start with the raw numbers. Below is data on GDP, federal spending, and federal transfers. Suffice it to say that they are bigger than they used to be. They’ve all been growing geometrically, and they all exhibit bumps near recessions.

Malinvestment Produces Knowledge

Austrian economists rightfully have some gripes about mainstream macroeconomics – specifically about aggregation. The conventional wisdom says that a fall in output can be prevented or remedied in the short-run by an expansion of total spending (via increasing the money supply). Total output is stabilized and the crisis is averted. Even if rising spending preceded the output decline, the standard prescription is the same.

The Austrian Business Cycle theory says that, actually, the prior expansion in spending resulted in yet-to-be-realized poor investments due to easy credit. The decline in output is self-inflicted by unsustainable endeavors, and the money-supply expansion in response prevents the correction. The consequence is more malinvestment. The Austrians say that the focus on gross investment is a misleading aggregation that commits a fallacy of composition by treating all investment as the same, or the same on the relevant margins.

Both schools of thought are on firm ground. I don’t see them as conflicting. They both make valid points about the world. The conventional wisdom is able to paper over short-run hiccups, and the Austrians recognize that resources are suboptimally allocated. The two sides are talking past each other to some extent.

The market process of seeking profits and satisfying consumer demands is messy. Prices and profits (and losses) provide firms with information and incentives that they use to adjust their behavior. They innovate and reallocate resources away from bad projects and toward money-making projects. When firms earn negative profits (a loss), they learn that their understanding of the world was wrong and that they malinvested their scarce resources. Therefore, malinvestment is a standard and *necessary* part of the market process of identifying and serving the changing and unknown demands of individuals. Without malinvestment, we lack the information needed to distinguish success from failure.

Malinvestment is harmful insofar as it represents resources that were invested such that future output did not rise as much as it otherwise could have. So, while malinvestment is necessary to the market process, a preponderance of it makes us poorer in the future. Luckily, firms have incentives and finite resources, so malinvestment remains somewhat tamed. Indeed, malinvestment is the cost that we bear for innovation and for identifying what works.

The issue is that the above discussion is oriented toward the long run, while the conventional wisdom is oriented toward resolving short-run threats. The two meet when malinvestment realizations occur in a correlated manner. It’s not that policy causes malinvestment directly. Rather, depressed interest rates and easy credit prevent firms from identifying which of their projects turned out to be more or less productive. Firms persist in bad investments because they can’t discriminate between the failed and successful projects ex ante.

So, when interest rates suddenly rise, low- or negative-productivity projects are identified and resources are reallocated. The discovery and reallocation process takes time. And if many projects are found to be failures at once, then the result is a drop in economic activity that is detectable at the aggregate level. The problem is not that malinvestment exists. The problem is that malinvestment was permitted to persist and grow such that the eventual realization of losses is correlated and has macroeconomic effects. We observe declines in spending, output, and employment. That’s the ‘business cycle’ part of the Austrian Business Cycle theory. Rising interest rates help to identify the bad projects. That’s good. But policy that increases the popularity of bad projects is bad. It makes us poorer in the long run and more vulnerable to declines in the short run.

Delinquency Data

I keep reading and hearing from people who are waiting for the other shoe to drop on the next recession. They see high interest rates and… well, that’s about all they see. Employment is OK and NGDP is chugging along.

One indicator of economic trouble is the delinquency rate on debt. Rising delinquencies are exactly what we would expect if people lose their jobs or discover that they are financially overextended: they’d fail to meet their debt obligations. But the broad measure for commercial bank loans is quiet. Not only is it quiet, it’s near historic lows at only 1.25% in 2023Q2. Banks can lend with confidence like never before.

But maybe that overall delinquency rate is obscuring some compositional details. After all, we know that many recessions begin with real-estate slowdowns. Below are the rates for commercial non-farmland loans, farmland loans, and residential mortgages. All are near historic lows, though there are hints that they might be on the rise. But one quarter doesn’t a recession make. I won’t show the graph for the sake of space, but all business loan delinquency rates have also been practically flat for the past five years.

5 Practical Gifts for 2023

Do you know someone who likes practical gifts? Then these timely recommendations are for you, given that Christmas is on the horizon. If none of the below recommendations strike your fancy, then there’s also the list that I made last year. The nice thing about practical gifts is that they tend to remain good gifts year after year. This year’s list mostly concerns home goods.

#1: High Lumen Candelabra Bulbs

I didn’t build my house, and whoever installed the light fixtures had the poor foresight to choose ones that take candelabra bulbs (smaller bulbs with smaller bases). They are much less bright. I like a nice bright room because it makes everything feel cleaner and neater, and there’s always enough light. I can always provide accent lighting with lamps, but the overhead light needs to – well – enlighten the room. I found these 800-lumen candelabra bulbs. They are pricey, but they are better than the daily resentment of a disappointing overhead light.

#2 Worm-Gear Clamp

If you liked last year’s custom-length Velcro recommendation, then you’ll also like this year’s worm-gear clamps. Have you ever needed a heavy-duty fix that’s also fast and easy? This is the same kind of clamp that’s used to affix dryer exhaust ducts. It’s great for any project that needs a quick and secure solution. It’s super easy if you have a drill, and relatively easy if you just have a screwdriver. I used mine for some mechanical elements of my golf cart.
