The past 12 months have been dominated by COVID-19, the related recession, and the government response. The pandemic has not just dominated our lives; it has also dominated new research, including research by economists!
Working papers from the National Bureau of Economic Research are one place to track ongoing research by economists. While not all economic research is released as an NBER working paper (there are other series, and some economists just post their papers on their own websites or department pages), the volume of NBER papers should tell us something about the trends.
Here’s a chart showing the weekly NBER working papers that are in some way related to COVID-19. The first batch of three papers was released in late February, one long year ago. The second batch of nine papers came one month later. Since then, there have been papers released every single week, with the exception of the week of Christmas.
In total, there have been 373 papers released that relate to COVID-19. The peak comes in late May and early June, with 61 papers released in a 4-week period and 21 of those papers coming out on May 25 alone. Since the May-June peak, we’ve seen a slow decline in papers on COVID-19, and we are now at our lowest level, with just 14 papers released in the past 4 weeks.
I am grateful to Yang Zhou for inviting me to talk about a working paper (with Gavin Roberts) on Friday. Yang told me that this audience is not familiar with lab experiments, so I’m going to take a few minutes out of my time to set the stage for my research.
There is a new book out, Causal Inference: The Mixtape by Scott Cunningham, that is the talk of #EconTwitter (Cunningham, 2021). The book is 500 pages of dense prose and code. Here is a review arguing that Cunningham left out many key things that a practitioner would need to know. Causal inference from naturally occurring data is hard!
Lab experiments bring something important to the research community: they give the researcher a great deal of control, which is why they are particularly useful for causal inference (Samek, 2019).
Andrew Weaver is doing interesting work on “the skills gap.” One of his key methods is to create new data by interviewing firms. As someone who has looked hard for good data on the skills gap, I can say that we need more work like his.
Weaver’s 2017 paper with Paul Osterman is about data for U.S. manufacturing firms. These findings may or may not generalize perfectly outside of manufacturing, but I think this was a great place to start. There is plenty of talk about the decline of U.S. manufacturing, and at least some of that talk concerns a lack of skilled Americans to meet the demand for high-tech production. For this survey, they asked only about “core workers,” the employees doing the specialized work of making widgets.
Here are two important empirical questions: a.) do American manufacturing firms want high-skill workers? b.) do they have trouble finding them? The authors answer, “not as much as you might think from policy discussions.”
There are lots of details in the paper that I don’t have time to cover. In Table 2, they go over the determinants of a firm facing long-term vacancies. What is common among the (minority of) firms that report having long-term vacancies? Advanced computer proficiency is not associated with difficulty filling jobs. The implication is that most manufacturing companies around 2017 were able to find workers who had the computer-related skills needed to do the core production tasks. What seemed to be a limiting factor was not computer skills but advanced reading skills. Half of the establishments surveyed said that they require workers with extended reading skills. That could mean, for example, reading a 10-page technical article in a trade journal.
I’m currently working on understanding the gender gap in tech careers. Here’s a paper published in 2016 about a survey conducted in 2011. They found that male students reported more time on the computer for leisure. However, when they asked about computer use for school activities, there was no gender difference. The question remains how much one’s leisure time and subjective attitudes affect one’s ability to take a high-income software engineering job.
Abstract: This study responds to a call for research on how gender differences emerge in young generations of computer users. A large-scale survey involving 1138 university students in Flanders, Belgium was conducted to examine the relationship between gender, computer access, attitudes, and uses in both learning and everyday activities of university students. The results show that women have a less positive attitude towards computers in general. However, their attitude towards computers for educational purposes does not differ from men’s. In the same way, being female is negatively related to computer use for leisure activities, but no relationship was found between gender and study-related computer use. Based on the results, it could be argued that computer attitudes are context-dependent constructs. When dealing with gender differences, it is essential to take into account the context-specific nature of computer attitudes and uses.
In the course of research work, I read “Sticky Prices as Coordination Failure” today, published in 1991 by Laurence Ball and David Romer.
They suggest that “coordination failure is at the root of inefficient non-neutralities of money”. They write an elegant theory of price setting and adjustment that includes a menu cost: a cost imposed on an individual who adjusts a price. The name comes from the fact that some restaurants face a literal cost of reprinting their paper menus when they change prices.
If changing prices is costly then there is inertia. People tend to stay where they were before, even if adapting to fluctuating external conditions is more efficient.
According to their model of rational individual agents, people will change if the expected benefit of adjustment is larger than the menu cost. In some cases, the optimal action for an individual depends on what others are doing. Thus
Increases in price flexibility by different firms are strategic complements: greater flexibility of one firm’s price raises the incentives for other firms to make their prices more flexible. Strategic complementarity can lead to multiple equilibria in the degree of nominal rigidity, and welfare may be much higher in the low-rigidity equilibria.
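The logic of strategic complementarity and multiple equilibria can be made concrete with a toy simulation. This is my own minimal sketch, not Ball and Romer’s actual model: I assume menu costs are uniform on [0, 1] and invent an S-shaped benefit function so that the payoff to flexibility rises with the fraction of firms that are already flexible.

```python
# Toy sketch (my own illustration, not Ball and Romer's model):
# each firm pays a menu cost c, drawn uniformly from [0, 1], to make its
# price flexible, and the benefit of flexibility rises with the fraction
# f of other firms that are flexible (strategic complementarity).
import math

def benefit(f):
    # Assumed S-shaped payoff to flexibility; the steepness is what
    # creates the complementarity and the multiple equilibria.
    return 1 / (1 + math.exp(-10 * (f - 0.5)))

def equilibrium(f0, rounds=200):
    # Best-response iteration: a firm goes flexible when benefit > its
    # cost, so with uniform costs the flexible fraction next period is
    # simply benefit(f). Iterate until the fraction settles down.
    f = f0
    for _ in range(rounds):
        f = benefit(f)
    return f

low = equilibrium(0.2)   # pessimistic start -> rigid-price equilibrium
high = equilibrium(0.8)  # optimistic start -> flexible-price equilibrium
print(low, high)
```

Starting from a pessimistic belief, almost no one pays the menu cost and prices stay rigid; starting from an optimistic belief, almost everyone does. Same fundamentals, two very different equilibria, which is the point of the quote above.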
An implication is that if you are surrounded by people who are open to constantly changing, then you yourself will be more likely to adapt. The world is always fluctuating, so welfare is higher for communities that can adapt quickly. Examples of changing circumstances include global warming and the novel safety procedures suddenly needed during the time of Covid.
In this paper, “multiple equilibria” means that a community might settle at a high-wealth level or a low-wealth level simply because of what everyone else is doing. Ball and Romer don’t try to figure out which equilibrium is more likely to be the outcome in reality.
No one in their model would be out of equilibrium (unnecessarily poor) if it were not for the “sticky” prices. As the title implies, coordinating the optimal levels of production and consumption is difficult because of the inertia of prices.
In their conclusion, they reflect on the role of government when multiple equilibria are possible:
… with multiple equilibria, policy can be less coercive. Instead of prohibiting certain contract provisions, the government could simply convene meetings of business and labor leaders to coordinate adjustment … Second, by moving the economy to a new equilibrium, temporary regulations can permanently change the degree of nominal rigidity.
They assume that after a recession, the price adjustment that needs to happen is “for decentralized agents to reduce nominal wages in tandem.” It’s interesting to see, culturally speaking, how hesitant they seem to strongly recommend government intervention through inflation. I feel like writers in the economics literature today would not be shy about saying they think governments should intervene through monetary policy, if they believe that to be true.
In my JEBO paper, I found that a little inflation kept workers from lowering production as much in response to a real wage cut after a recession. In our environment, I would say “cooperation” was more important than “coordination”, because there were only two agents and their decisions were sequential.
We wish you all a happy Thanksgiving day. I wondered if the academic literature could provide any insights to use on this day. If Google is a good guide, the formal economics literature has ignored the phenomenon of the Thanksgiving tradition.
“We Gather Together” from the Journal of Consumer Research in 1991 does, at the very least, exist. The first line of the abstract made me smile.
Thanksgiving Day is a collective ritual that celebrates material abundance enacted through feasting.
The third line of the abstract made me think.
So certain is material plenty for most U.S. citizens that this annual celebration is taken for granted by participants.
This weekend I am participating (virtually, remotely) in the Southern Economics Association annual meeting where economists talk about research in progress. I saw Laura Razzolini present a new project yesterday.
She and coauthors surveyed people in the city of Birmingham, AL before and after a major disruption to commuter traffic. One thing they find is that people who have a longer commute due to a road closure are more stressed.
As it happened, Covid came along and started stressing people soon after. So they did another round of surveys and now have great baseline data against which to compare Covid-stressed people. I will not discuss her results on how stress affects decision making here. She has got some really neat results. The paper will be called something like “Uncovering the Effects of Covid-19 on Stress, Well-Being, and Economic Decision-Making”.
The magnitude of the increase in stress from a longer commute was something like 2.5 on a scale of 1-10. (Do not quote me – I do not have her paper to reference – this is from memory)
A comment from the audience was that it looked like the magnitude of the increase of stress from a longer commute and from Covid were similar. How could that be? Isn’t a deadly disease worse than traffic?
To explain this, I return to my favorite xkcd comic. When you hover your mouse over the comic, it says “Our brains have just one scale, and we resize our experiences to fit.” (Apropos of nothing, the fact that the comic artist picked Joe Biden as an example of someone who isn’t very important in 2011 seems pretty strange now.)
So, when traffic got worse people could only express “my life got worse”. And when Covid-19 caused shutdowns in the Spring of 2020, people again said “my life got worse”.
We only have one scale, and we resize our experience to fit. Thanksgiving is coming up. I would hope that we could take a day off from the 2020 year-of-doom talk and find something to be grateful for, because things actually can get worse. I also send out sincere condolences to all those who will be spending The Holidays apart from loved ones because of Covid-19.
Continuing on the theme of last week’s minimum wage increase in Florida, there are two interesting papers recently accepted for publication that both cover the 1966 Fair Labor Standards Act. This law extended the federal minimum wage to a number of previously uncovered industries. Crucially, the newly covered industries employed a large number of African-American workers.
The two papers agree on some points, such as that African Americans saw large wage gains following the increase. But was there a disemployment effect? Here is where the papers differ.
Ellora Derenoncourt and Claire Montialoux’s paper “Minimum Wages and Racial Inequality” is forthcoming in the Quarterly Journal of Economics. Here is what they find: “We can rule out significant disemployment effects for black workers. Using a bunching design, we find no aggregate effect of the reform on employment.”
So who is right? Let me clearly state here that both of these papers are very well done, both in their methods and in their assembling of historical data. But I think there is a key difference in the samples they analyze: Derenoncourt and Montialoux’s paper only includes workers aged 25-55. Bailey and co-authors use a broader age range, 16-64, which importantly includes teenagers (this is discussed in Section D of their online appendix).
Since teenagers and other young workers are the ones we suspect are going to be most impacted by the minimum wage (much of the literature focuses on teenagers), the exclusion of workers under 25 seems like a curious omission, and a reason I am tempted to believe the results of Bailey and co-authors. But Derenoncourt and Montialoux do try to justify their choice of age group: 1. workers under 21 were subject to a different minimum wage; and 2. workers under 25 were subject to the draft for the Vietnam War.
So once again, you might ask, who is right? I will admit here that I don’t know. Standard economic theory suggests that disemployment effects will result from a legal minimum wage (I fully acknowledge the emerging literature on monopsony power, but I maintain this is still not the standard analysis), and especially so for teenagers and young workers. So I am skeptical of any analysis which excludes these workers, whatever other merits it may have.
Here’s my take: we probably can’t tell much about how the minimum wage will impact young workers today based on these studies. If Derenoncourt and Montialoux’s reasons for removing young workers are indeed sound, then we aren’t really testing the question most economists are interested in today (so I would caution against their attempt to apply the results to labor markets today). But that doesn’t mean these aren’t interesting papers to read on an important change in the history of minimum wage laws in the US!
The title comes from Bewley’s famous book “Why Don’t Wages Fall During a Recession?” In that book, Truman Bewley asks managers why they do not cut wages in a recession when equilibrium analysis tells us that the price of labor should fall.
We run an experiment in which employers and workers encounter a recession. The employers could cut wages, or they could keep them rigid, as we normally observe during recessions. The concept of a “cut” assumes a reference point from which to go down. We establish that reference point by letting the employer set a wage before the recession and repeating that payment to workers for three rounds.
We use a Gift Exchange (GE) Game to model the relationship between employers and workers. Employers offer a wage that is guaranteed to the worker. Employers have to trust that workers will not shirk. We do observe a few subjects shirking, and those people are not very interesting to us. We are interested in the workers who respond with positive reciprocity because that means there is “good morale” in the “workplace”. The employers interviewed by Bewley were afraid that wage cuts would damage the good morale that is necessary for a business to run.
After three rounds, there was a recession. The total surplus available in the GE game shrank by 10%. In the Inflation treatment, the exchange rate of tokens to dollars increased, such that if firms kept nominal wages rigid there would in fact be a 10% real wage cut.
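The arithmetic behind the Inflation treatment is worth spelling out. The token amounts and exchange rates below are hypothetical placeholders; only the 10% real cut reflects the design described above.

```python
# Illustrative arithmetic for the Inflation treatment (the specific
# numbers are hypothetical; only the 10% real cut matches the design).
nominal_wage_tokens = 100
tokens_per_dollar_before = 10
tokens_per_dollar_after = 10 / 0.9   # inflation: more tokens per dollar

# The nominal wage in tokens stays fixed, but each token buys less.
real_before = nominal_wage_tokens / tokens_per_dollar_before  # $10.00
real_after = nominal_wage_tokens / tokens_per_dollar_after    # $9.00

print(real_before, real_after)  # rigid nominal wage -> 10% real wage cut
```

In other words, an employer who changes nothing at all has, in real terms, cut the worker’s pay by 10%, which is what lets us compare nominal cuts with real cuts.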
If workers resent nominal wage cuts, then firms should keep wages rigid in a recession. If worker morale falls and workers decrease effort, then firms will be hurt more by the fall in productivity than they are helped by the lower real wage bill.
In fact, about half of the firms did cut wages. So we did not observe the wage rigidity we expected, and we’d like to do follow-up research on that point. The cuts did mean that we had variation and could observe the counterfactual we were interested in.
Workers don’t like wage cuts. Workers who had been selecting an effort level near the middle of the feasible range dropped their effort significantly if they experienced a wage cut. The real wage cuts under Inflation did not have as sharp an effect on effort, which suggests some money illusion.
Here’s a cumulative distribution of effort choices among workers (Recession treatment had no inflation). After half of the workers experienced a wage cut, the effort distribution moves toward 0.05, the minimum effort level.
We measured loss aversion at the end. We can’t say that loss averse workers resent wage cuts, because everyone resents wage cuts. There’s maybe some evidence that loss averse employers are less likely to cut wages. Thanks for reading! Please reach out through my Samford email if you’d like to know more.
The relationship between loss aversion and wage rigidity deserves more attention from behavioral economics.
Special thanks to Misha Freer, Cesar Martinelli, and Ryan Oprea for conversations that helped us. Also, we are indebted to everyone that we cited, of course, and to all the people we failed to cite.
The big news in our world is that the Nobel Prize was announced today for economists. (Technically it is the Sveriges Riksbank Prize in Economic Sciences, but we call it “the Nobel Prize”.)
Paul Milgrom and Robert Wilson win for 2020. They are known for auction theory and design. Here is a popular introduction from the Nobel Committee.
This prize is special to me because auction design was one of the very first practical problems that presented me with a chance to put economic ideas into practice. As an undergraduate at Chapman University, I had the privilege to spend time talking with people like Vernon Smith and Dave Porter. Some people think of Vernon Smith as someone who “does things in the lab”. What he actually did in the lab was, very often, run auctions.
My master’s thesis at Chapman University was a project on auctions. A practical problem motivated our inquiry. Students at Chapman were upset about the way that the most convenient parking spots were allocated. Concerns about parking showed up in quantitative student satisfaction surveys.
We had an important question, since we were actually going to run an auction that would affect people’s lives. How do we choose from among the different possible auction formats?
Paul Milgrom (with Robert J. Weber) provided guidance to us in their 1982 paper in Econometrica.
Among other things, in that paper, they compare the revenue properties of English auctions and Dutch auctions. In an English auction, the price starts low and bidders compete to out-bid each other until the price is so high that only one bidder remains. That is the popular conception of an auction. There is another mechanism class (Dutch) in which the price starts higher than anyone wants to pay and drops until a buyer jumps in. Once you start thinking about how many ways one could run an auction, then you need some way to decide between all the mechanisms.
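A quick simulation can illustrate the comparison. This sketch uses a simpler setting than Milgrom and Weber’s affiliated values: independent private values drawn uniformly. Under that assumption the English auction ends at the second-highest value, the Dutch auction is strategically equivalent to a first-price sealed-bid auction, and the risk-neutral equilibrium bid with n bidders is (n-1)/n times one’s value, so expected revenue comes out the same (the classic revenue equivalence result; with affiliated values, Milgrom and Weber show the English auction raises more).

```python
# Revenue comparison under independent private values, uniform on [0,1].
# This is a textbook benchmark, not Milgrom and Weber's affiliated-values
# setting, where the equivalence breaks in favor of the English auction.
import random

random.seed(42)
n, trials = 4, 200_000
english, dutch = 0.0, 0.0
for _ in range(trials):
    values = sorted(random.random() for _ in range(n))
    english += values[-2]              # price = second-highest value
    dutch += (n - 1) / n * values[-1]  # winner bids (n-1)/n of her value

# Both sample means should be near (n-1)/(n+1) = 0.6 for n = 4.
print(english / trials, dutch / trials)
```

Seeing the two averages land on the same number makes the theoretical question sharper: it is exactly the departures from these assumptions (affiliation, risk aversion) that break the tie between formats.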
Theory can help you predict who will be better off under different formats. And, in my case, needing to figure out the revenue properties of different auction formats can help you learn economic theory!