Experimental economists still collect much of their data in traditional physical labs with human subjects who show up in person. This remains the gold standard, but it is expensive per observation. Many researchers, including myself, also run projects with subjects recruited online because the cost per observation is much lower.
As I remember it, the first platform to be widely used was Mechanical Turk. Sometime before 2022, attitudes toward MTurk shifted: it became known in the behavioral research community that MTurk had too many bots and bad actors. MTurk was not designed for researchers, so perhaps it is not surprising that it did not serve our purposes.
The Prolific platform has had a good reputation for a few years. You have to pay to use Prolific but the cost per observation is still much lower than what it costs to use a traditional physical laboratory or to pay Americans to show up for an appointment. Prolific is especially attractive if the experiment is short and does not require a long span of attention from human subjects.
Kalshi just announced that they will begin paying interest on money that customers keep with them, including money bet on prediction market contracts (though attentive readers here knew this was in the works). I think this is a big deal.
First, and most obviously, it makes prediction markets better for bettors. This was previously a big drawback:
The big problem with prediction markets as investments is that they are zero sum (or negative sum once fees are factored in). You can’t make money except by taking it from the person on the other side of the bet. This is different from stocks and bonds, where you can win just by buying and holding a diversified portfolio. Buy a bunch of random stocks, and on average you will earn about 7% per year. Buy into a bunch of random prediction markets, and on average you will earn 0% at best (less if there are fees or slippage).
This big problem just went away, at least for election markets (soon to be all markets) on Kalshi. But the biggest benefit could be how this improves the accuracy of certain markets. Before this, there was little incentive to improve accuracy in very long-run markets. Suppose you knew for sure that the market share of electric vehicles in 2030 would be over 20%. It still wouldn't make sense to bet in this market on that exact question. Each 89 cents you bet on "above 20%" turns into 1 dollar in 2030; but each 89 cents invested in 5-year US bonds (currently paying 4%) would turn into more than $1.08 by 2030, so betting on this market (especially if you bid up the odds to the 99-100% we are assuming is accurate) makes no financial sense. And that's in the case where we assume you know the outcome for sure; throwing in real-world uncertainty, you would have to think a long-run market like this is extremely mis-priced before it made sense to bet.
But now if you can get the same 4% interest by making the bet, plus the chance to win the bet, contributing your knowledge by betting in this market suddenly makes sense.
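To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 89-cent price, 4% rate, and five-year horizon are the assumptions from the example above, and I'm assuming interest accrues on the full amount committed to the contract, as the announcement describes.

```python
# Back-of-the-envelope comparison for the EV-market example above.
# Assumed numbers (from the example, not real market data): an 89-cent
# contract that pays $1 in 2030, versus 5-year Treasuries at 4%, with
# the exchange now paying the same 4% on funds committed to the contract.
stake = 0.89   # dollars paid per contract
rate = 0.04    # annual interest rate
years = 5

treasuries = stake * (1 + rate) ** years                      # hold bonds instead
bet_no_interest = 1.00                                        # old world: $1 payout, nothing else
bet_with_interest = 1.00 + stake * ((1 + rate) ** years - 1)  # $1 payout plus interest on the stake

print(f"Treasuries:             ${treasuries:.2f}")         # ~$1.08
print(f"Bet, no interest:       ${bet_no_interest:.2f}")    # $1.00
print(f"Bet, interest on stake: ${bet_with_interest:.2f}")  # ~$1.19
```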
This matters not just for long-run markets like the EV example. I think we'll also see improved accuracy in long-shot odds on medium-run markets. I've often noticed that early on in election markets, candidates with essentially zero chance (like RFK Jr or Hillary Clinton in 2024) can be bid up to 4 or 5 cents, because betting against them will at best pay 4-5% over a year, and you could earn a similar return more safely with bonds or a high-yield savings account (see the quick arithmetic after the quote below). Page and Clemen documented this bias more formally in a 2012 Economic Journal paper:
We show that the time dimension can play an important role in the calibration of the market price. When traders who have time discounting preferences receive no interest on the funds committed to a prediction-market contract, a cost is induced, with the result that traders with beliefs near the market price abstain from participation in the market. This abstention is more pronounced for the favourite because the higher price of a favourite contract requires a larger money commitment from the trader and hence a larger cost due to the trader’s preference for the present. Under general conditions on the distribution of beliefs on the market, this produces a bias of the price towards 50%, similar to the so-called favourite/longshot bias.
We confirm this prediction using a data set of actual prediction market prices from 1,787 markets representing a total of more than 500,000 transactions.
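Here is the quick arithmetic behind that mechanism, using made-up round numbers rather than any real contract: betting against a 5-cent longshot means buying the other side at 95 cents, so a lot of capital is tied up for a small possible profit, and a risk-free alternative eats most of that return.

```python
# Illustrative one-year market (hypothetical prices): the longshot
# trades at 5 cents, so betting against it costs 95 cents per contract.
risk_free = 0.04                                               # bonds / savings over the year

favourite_price = 0.95                                         # the "no longshot" side
favourite_return = (1.00 - favourite_price) / favourite_price  # ~5.3% if you are right

longshot_price = 0.05
longshot_return = (1.00 - longshot_price) / longshot_price     # 1900% if you are right

print(f"Betting against the longshot: {favourite_return:.1%} vs {risk_free:.0%} risk-free")
print(f"Betting on the longshot:      {longshot_return:.0%}")
# The opportunity cost of capital wipes out most of the favourite-side
# edge but barely dents the longshot's, so prices drift toward 50 cents.
```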
Hopefully the introduction of interest will correct this, other markets like PredictIt and Polymarket will feel competitive pressure to follow suit, and we’ll all have more accurate forecasts to consult.
As I was reading through What Is Real?, it occurred to me that I'd like to read a review of an issue. I thought, "Experimental physics is like experimental economics. You can sometimes predict what groups or 'markets' will do. However, it's hard to predict exactly what an individual human will do." I would like to know who has written a short article on this topic.
I decided to feed the following prompt into several LLMs: “What economist has written about the following issue: Economics is like physics in the sense that predictions about large groups are easier to make than predictions about the smallest, atomic if you will, components of the whole.”
First, ChatGPT (free version) (I think I’m at “GPT-4o mini (July 18, 2024)”):
Next, I asked ChatGPT, “What is the best article for me to read to learn more?” It gave me 5 items. Item 2 was “Foundations of Economic Analysis” by Paul Samuelson, which likely would be helpful but it’s from 1947. I’d like something more recent to address the rise of empirical and experimental economics.
Item 5 was: "'Physics Envy in Economics' (various authors): You can search for articles or papers on this topic, which often discuss the parallels between economic modeling and physics." Interestingly, ChatGPT is telling me to Google my question. That's not bad advice, but I find it funny given the new competition between LLMs and "classic" search engines.
When I pressed it further for a current article, ChatGPT gave me a link to an NBER paper that was not very relevant. I could have tried harder to refine my prompts, but I was not immediately impressed. It seems like ChatGPT had a heavy bias toward starting with famous books and papers as opposed to finding something for me to read that would answer my specific question.
I gave Claude (paid) a try. Claude recommended, “If you’re interested in exploring this idea further, you might want to look into Hayek’s works, particularly “The Use of Knowledge in Society” (1945) and “The Pretense of Knowledge” (1974), his Nobel Prize lecture.” Again, I might have been able to get a better response if I kept refining my prompt, but Claude also seemed to initially respond by tossing out famous old books.
I was pleased to be a (virtual) guest speaker for Plateau State University in Nigeria. My host was (Emergent Ventures winner) Nnaemeka Emmanuel Nnadi. The talk is up on YouTube with the following timestamp breakdown:
During the first ten minutes of the video, Ashen Ruth Musa gives an overview called “The Bace People: Location, Culture, Tourist Attraction.”
Ray Fair at Yale runs one of the oldest models to use economic data to predict US election results. It predicts vote shares for President and the US House as a function of real GDP growth during the election year, inflation over the incumbent president’s term, and the number of quarters with rapid real GDP growth (over 3.2%) during the president’s term.
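To make the structure concrete, here is a toy Python version of that functional form. The coefficients below are placeholders I made up for illustration, not Fair's actual estimates.

```python
# Illustrative sketch of a Fair-style vote equation. Only the functional
# form follows the description above; the coefficient values are
# hypothetical placeholders, NOT Ray Fair's estimates.
def fair_style_vote_share(g, p, z, intercept=47.0, b_g=0.7, b_p=-0.7, b_z=1.0):
    """Predicted incumbent-party share of the two-party vote (%).

    g: real GDP growth in the election year (annual rate, %)
    p: inflation over the incumbent president's term (annual rate, %)
    z: number of quarters in the term with real GDP growth above 3.2%
    """
    return intercept + b_g * g + b_p * p + b_z * z

# Example call with made-up inputs:
print(fair_style_vote_share(g=2.5, p=4.0, z=2))
```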
Currently his model predicts a 49.28% Democratic share of the two-party vote for President, and a 47.26% Democratic share for the House. This will change once Q3 GDP results are released on October 30th, probably with a slight bump for the Democrats since Q3 GDP growth is predicted to be 2.5%, but these should be close to the final prediction. Will it be correct?
Probably not; it has been directionally wrong several times, most recently over-estimating Trump's vote share by 3.4% in 2020. But is there a better economic model? Perhaps we should consider other economic variables (Nate Silver had a good piece on this back in 2011), or weight these variables differently. It's hard to say given the small sample of US national elections we have to work with and the potential for over-fitting models.
But one obvious improvement to me is to change what we are trying to estimate. Presidential elections in the US aren’t determined by the national vote share, but by the electoral college. Why not model the vote share in swing states instead?
Doing this well would make for a good political science or economics paper. I'm not going to do a full workup just for a blog post, but I will note that the Bureau of Economic Analysis just released the last state GDP numbers that they will release prior to the election:
Mostly this strikes me as a good map for Harris, with every swing state except Nevada seeing GDP growth above the national average of 3.0%. Of course, this is just the most recent quarter; older data matters too. Here’s real GDP growth over the past year (not per capita, since that is harder to get, though it likely matters more):
Region            Real GDP Growth, Q2 2023 – Q2 2024
US                3.0%
Arizona           2.6%
Georgia           3.5%
Michigan          2.0%
Nevada            3.4%
North Carolina    4.4%
Pennsylvania      2.5%
Wisconsin         3.3%
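As a quick check of the count in the next paragraph, here is the table above re-keyed in Python:

```python
# Count how many of the seven swing states in the table beat the
# national real GDP growth rate of 3.0% (Q2 2023 - Q2 2024).
national = 3.0
swing_states = {
    "Arizona": 2.6, "Georgia": 3.5, "Michigan": 2.0, "Nevada": 3.4,
    "North Carolina": 4.4, "Pennsylvania": 2.5, "Wisconsin": 3.3,
}
above = [state for state, growth in swing_states.items() if growth > national]
print(f"{len(above)} of {len(swing_states)} above {national}%: {', '.join(above)}")
# -> 4 of 7 above 3.0%: Georgia, Nevada, North Carolina, Wisconsin
```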
Still a better map for Harris, though closer this time, with 4 of 7 swing states showing growth above the national average. I say this assuming, as Fair does, that the candidate from the incumbent President's party is the one who will get the credit or blame for economic conditions. But at the state level, I think it is an open question to what extent people assign credit or blame to the incumbent Governor's party rather than the President's. Georgia and Nevada currently have Republican governors.
Overall I see this as one more set of indicators showing an election that is very close but slightly favors Harris, much like prediction markets (Harris currently at a 50% chance on Polymarket, 55% on PredictIt) and forecasts based mainly on polls (Nate Silver at 55%, Split Ticket at 56%, The Economist / Andrew Gelman at 60%). Some of these forecasts also include national economic data:
Gelman suggests that the economy won’t matter much this time:
We found that these economic metrics only seemed to affect voter behaviour when incumbents were running for re-election, suggesting that term-limited presidents do not bequeath their economic legacies to their parties’ heirs apparent. Moreover, the magnitude of this effect has shrunk in recent years because the electorate has become more polarised, meaning that there are fewer “swing voters” whose decisions are influenced by economic conditions.
But while the economy is only one factor, I do think it still matters, and that forecasters have been underrating state economic data, especially given that in two of the last 6 Presidential elections the electoral college winner lost the national popular vote. I look forward to seeing more serious research on this topic.
I missed Alan Krueger's 2019 book on the economics of popular music when it first came out, but picked it up recently when preparing for a talk on Taylor Swift. It turns out to be a well-written mix of economic theory, data, and interviews with well-known musicians, by an author who clearly loves music. Some highlights:
[Music] is a surprisingly small industry, one that would go nearly unnoticed if music were not special in other respects…. less than $1 of every $1,000 in the U.S. economy is spent on music…. musicians represented only 0.13 percent of all employees [in 2016]; musicians’ share of the workforce has hovered around that same level since 1970.
there has been essentially no change in the two-to-one ratio of male to female musicians since the 1970s
The gig economy started with music…. musicians are almost five times more likely to report that they are self-employed than non-musicians
30 percent of musicians currently work for a religious organization as their main gig. There are a lot of church choirs and organists. A great many singers got their start performing in church, including Aretha Franklin, Whitney Houston, John Legend, Katy Perry, Faith Hill, Justin Timberlake, Janelle Monae, Usher, and many others
I am one of several founders of a club with the abbreviation F.E.W. for Finance and Economics Women. This is a student organization that we have at Samford and that Dr. Darwyyn Deyo runs at San Jose State University.
Our short paper is mostly a how-to guide including a draft of a club charter document. We describe our institutions and how we use this group to engage and encourage students. Please read it for more details on how to start a club.
Like most student groups, the FEW model relies on student leaders who take initiative. Having done this for more than 6 years, we have a growing network of alumni and local business partners who connect with current students through FEW events. Personally, I am lucky that a total of three faculty members support the club at my school.
Women are often in the minority in upper-division econ and finance classes. Women also face some unique challenges when it comes to choosing career paths and navigating the workplace. These events (e.g., bringing in a manager from a local bank to talk with students over lunch) provide a space for students to ask questions they might not normally ask in a classroom setting or in a standard networking environment.
We report the results of a small survey in our paper. We can’t infer causality, nor did we run any experiments. However, we did find that women were more likely to report that a role model in their chosen profession influenced their choice of major. Part of the purpose of the FEW model is to expose students to a variety of role models who they might not otherwise connect with.
Here’s a news article with a picture of the founding group at Samford. I have great appreciation and respect for our student leaders who keep it going, and I am grateful to the graduates who stay in contact with us.
If you didn't know already, the past five years have been a whirlwind of new methods in the staggered difference-in-differences (DID) literature – a popular approach for trying to tease out causal effects statistically. This post restates practical advice from Jonathan Roth.
The prior standard was two-way fixed effects (TWFE), which controls for a lot of unobserved variation across individuals or groups and over time. The fancier TWFE specifications interact treatment with time relative to treatment, which allows for event studies and estimates of dynamic effects.
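For concreteness, here is a minimal sketch of those two specifications using statsmodels on simulated data. The variable names and the data-generating process are made up, and this is just the generic TWFE setup described above, not any of the newer heterogeneity-robust staggered-DID estimators.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy staggered-adoption panel (all names hypothetical): 30 units over
# 10 periods; 20 units adopt treatment at staggered times, 10 never do.
rng = np.random.default_rng(0)
df = pd.DataFrame([(i, t) for i in range(30) for t in range(10)],
                  columns=["unit", "period"])
start = {i: int(rng.integers(3, 8)) if i < 20 else None for i in range(30)}
df["d"] = [int(start[i] is not None and t >= start[i])
           for i, t in zip(df["unit"], df["period"])]
# Time relative to treatment; never-treated units sit in the -1 reference bucket.
df["rel_time"] = [t - start[i] if start[i] is not None else -1
                  for i, t in zip(df["unit"], df["period"])]
df["y"] = 2.0 * df["d"] + 0.1 * df["unit"] + 0.2 * df["period"] + rng.normal(size=len(df))

# Static TWFE: treatment dummy plus unit and period fixed effects,
# with standard errors clustered by unit.
twfe = smf.ols("y ~ d + C(unit) + C(period)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit"]})
print(twfe.params["d"])  # close to the true effect of 2 in this toy data

# "Fancier" dynamic (event-study) TWFE: dummies for time relative to
# treatment, omitting rel_time == -1 as the reference period.
event = smf.ols("y ~ C(rel_time, Treatment(reference=-1)) + C(unit) + C(period)",
                data=df).fit(cov_type="cluster", cov_kwds={"groups": df["unit"]})
```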
Cowen’s 2nd Law states that there is a literature on everything. I would certainly expect there to be a literature on the best-selling musician in the world. And of course there is; Google Scholar returns 23,500 results for “Taylor Swift”, and we’ve done 5 posts here at EWED. But surprisingly, searching EconLit returns nothing, suggesting there are currently no published economics papers on Taylor Swift, though searching “Taylor” and “Swift” separately reveals hundreds of articles about the Taylor Rule and the SWIFT payment system. Google Scholar does report some economics working papers about her, but the opportunity to be the first to publish on Taylor Swift in an economics journal (and likely get many media interview requests as a result) is still out there.
Swift presents a variety of angles that could be worthy of a paper: re-recording her masters for copyright reasons, her efforts to channel concert tickets to loyal fans over re-sellers, or her sheer macroeconomic impact. I've added a note about this to my ideas page (where I share many other paper ideas).
In the meantime, I'll be giving a short talk on the Economics of Taylor Swift at 7pm Eastern on Monday, September 16th, as part of a larger online panel. The event is aimed at Providence College alumni, but I believe anyone can register here.
Update 10/25/24: A recording of the event is here, and a recording of a followup interview I did with local TV is here.
That is the title of a 2020 book by Deirdre McCloskey and Art Carden. It attempts to sum up McCloskey's trilogy of huge books on the "Bourgeois Virtues" in one short, relatively easy-to-read book. I haven't read the full trilogy, so I can't say how good the new book is as a distillation, but I found it easy to read, and it at least makes me think I understand McCloskey's basic thesis for why the world got rich. I share some highlights here.
Part 1 of the book aims to establish that the world did in fact get richer over recent centuries, plus give a basic explanation of liberal political thought. If you already know this you could skip this part and cut down an easy 189 page read to a very easy 106 page read (part 1 is for some reason written in a way that assumes you disagree with the authors, which grates when you don’t, or perhaps also if you do).
Part 2 gets to what I, at least, came for: digging into the history to solve the puzzle of why the Industrial Revolution / Great Enrichment took off when and where it did. That means first explaining why many things people think made 18th-century England special were actually common elsewhere, like markets: