How FRASER Enhances Economic Research and Analysis

Most of us know about FRED, the Federal Reserve Economic Data site hosted by the Federal Reserve Bank of St. Louis. It puts data and graphs at your fingertips: you can quickly grab a graph for a report or for an online argument. Of course, you can learn from it too. I’ve talked in the past about the Excel and Stata plugins.

But you may not know about FRED’s sibling, FRASER. From its about page: “FRASER is a digital library of U.S. economic, financial, and banking history—particularly the history of the Federal Reserve System.” It’s a treasure trove of documents. Just as with any library, you’re not meant to read it all. But you can read some of it.

I can’t tell you how many times I’ve read a news story and lamented the lack of citations, linked or unlinked. Some journalists seem to do a Google search or Reddit dive and then summarize their journey. That’s sometimes helpful, but it often provides only surface-level content and includes errors – much like AI. The better journalists at least talk to an expert. That is better, but authorities often repeat secondhand false claims too. Or, because no one has read the source material, they couch their language in unfalsifiable imprecision that merely implies a false claim.

A topical example is the oft-repeated discussion of blanket Trump tariffs. The tariffs themselves are not up for dispute: Trump has been very clear about his desire for more and broader tariffs. Rather, economic news often reaches back to the Smoot-Hawley tariffs of 1930 as an example of tariffs run amok. While it is true that the 1930 tariffs applied to many items, they weren’t exactly a historical version of what Trump is currently proposing (though those details tend to change).

How do I know? Well, I looked. If you visit FRASER and search for “Smoot-Hawley”, then the tariff of 1930 is the first search result. It’s a congressional document, so it’s not an exciting read. But, you can see with your own eyes the diversity of duties that were placed on various imported goods. Since we often use the example of imported steel and since the foreign acquisition of US Steel was denied, let’s look at metals on page 20 of the 1930 act. But before we do, notice that we can link to particular pages of legislation and reports – nice! Reading the Smoot-Hawley Tariff Act’s original language, we can see the diverse duties on various metals. Here are a few:


The Big Ideas

Do I really think that the things I write about here and in my papers are the most important things in the world? No. Like most academics, I tend to emphasize the issues where I think I bring a unique perspective, rather than the most important issues. But if you don’t realize this, you might get the impression that I think the things I normally talk about are the most important, rather than simply the most neglected and tractable/publishable. I don’t work on the most important issues because I see no good way for me to attack them; but if you do see a way, that is where you should focus. So what are the big issues of the 2020s?

I see two issues that stand out above the many other important events of the day:

  • Artificial Intelligence: At minimum, the most important new technology in a generation; has the potential to bring about either utopia or dystopia. Do you have ideas for how to nudge it one way or another?
  • Rise of China: From extreme poverty to the world’s manufacturing powerhouse in two generations. What lessons should other countries learn from this for their own economic policy? How can we head off a world war and/or Chinese hegemony?

Focusing a bit more on economics, I see two perennial issues where there could be new opportunities to solve vital old questions:

  • Economic Development: We still don’t have a definitive answer to Adam Smith’s founding question of economics: why are some countries rich while other countries are poor, and how can the poor countries become rich? I think economic freedom is still an underrated answer, but even if you agree, the question remains of how to advance freedom in the face of entrenched interests who benefit from the status quo.
  • Robust Prediction: How can we make economics into something resembling a real science, one where predictions that include decimal places don’t deserve to be laughed at? Can you find a way to determine how much external validity an experiment has? Or how to use machine learning to get at causality? Or at least push existing empirical research to be more replicable?

I’ve added these points to my ideas page, since all this was inspired by me talking through the ideas on the page with my students and realizing how small and narrow they all seemed. Yes, small and narrow ideas are currently easier to publish in economics, but there is more to research and life than easy publications.

Reblog: One acceptable truth or a million fantasies

I’m in Houston to give a talk on “Ability to Pay” reforms for how fines and fees are assigned in the criminal justice system, so I’m taking the opportunity to economize on my scarce time, i.e., be lazy.

This post received renewed interest in the last week thanks to a vastly superior statement of the hypothesis by Zach Weinersmith. I think it holds up pretty well, title aside; the title’s connection to the actual material is, at best, unnecessarily oblique and high-handed.

One acceptable truth or a million fantasies (12/28/20)

Humans are soft, slow, and (to the best of my knowledge) make for fairly nutritious meals. Brains for tool-making, and the opposable thumbs for using them, are significant evolutionary adaptations, but it is our capacity to act collectively that placed us at the top of the food chain.

By the end of a standard undergraduate economics curriculum, one couldn’t be blamed for coming to the conclusion that the failures of collective action are the greatest obstacle to mankind – oh, what we could have accomplished if only we had ever found a way to just cooperate. Alas, all those externalities, Prisoners’ Dilemmas, free riders, easy riders, market failures, and government failures – they just stopped us at every turn.

I’m not doubting the pedagogical value of teaching any of these obstacles (I teach them myself), but I believe we spend insufficient time reminding students that humans have been solving collective action problems with great success for thousands of years. Every national government, book club, homeowners association, and sorority has managed to produce public goods. So has every military coup and angry mob (if only sometimes for fleeting moments), but collective action is collective action, regardless of how we may feel about the outcome.

More often than not the most interesting question to me isn’t whether a collective action problem can be solved, but rather i) how has it already been solved and ii) how is that solution going to be threatened or hijacked? When I look to the current political landscape and the only mildly exaggerated state of political and social polarization, I see not just rival ideologies, but alternative strategies for engendering and ensuring cooperation. On the left, I observe greater recent emphasis on purity – there is a narrow band of acceptable truth and any deviation from that, be it however accidental or benign in intent, can lead to significant punishments, including purges colloquially referred to as cancellations. On the right, I see required public professing of incorrect, often seemingly absurd, beliefs. I might talk about purity tests and purges on the left another time. What I’m interested in at the moment are the public untruths of current right wing identities (broadly conceived) and how they fit into the sacrifice and stigma theory, or club theory, of religion.**

I’ve written a lot about sacrifice and stigma theory. It has become the hammer that has left me forever searching for nails. Originally put forth by Laurence Iannaccone in 1992, it is nothing short of brilliant to my mind. A tool for solving collective problems so profound that when it shows up we barely notice it, and where it shows up tends to be the most powerful clubs shaping our societies: the religious, martial, and extremist political groups that bend the arc of history.

Groups produce what we call “club goods”, i.e., public goods only accessible to members of the group. What Iannaccone demonstrated was that a group could actually increase its production of club goods by burdening its members with completely unproductive costs. Why do religious groups require clothing, behavior, or language that could stigmatize their members in broader society? Why are members required to sacrifice their resources at the literal or figurative altar of the group? Because if you impair members’ private productivity, or if the fruits of that private production are skimmed away, they will invest more of their resources into the group. If all group members face these same altered incentives, guess what, you’ve solved the collective action problem!
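
Iannaccone’s logic can be illustrated with a toy model. This is a minimal sketch of my own, not the paper’s actual specification: each of n members splits one unit of time between private earning at wage w and a contribution c to the club, with quasi-linear utility U = w(1 − c) + ln(C), where C is total club output. The first-order condition −w + 1/C = 0 pins down C = 1/w in the symmetric Nash equilibrium, so anything that lowers members’ outside wage raises club production:

```python
def club_equilibrium(n, w):
    """Symmetric Nash equilibrium of a toy club-goods game.

    Each of n members splits one unit of time between private earning
    at wage w and contributing c to a shared club good C = sum of
    contributions.  Utility is quasi-linear: U = w*(1 - c) + ln(C).
    The first-order condition -w + 1/C = 0 gives C* = 1/w, so each
    member contributes c* = 1/(n*w) (interior whenever n*w >= 1).
    """
    assert n * w >= 1, "interior solution requires n*w >= 1"
    club_good = 1.0 / w        # total club output C* from the FOC
    c_star = club_good / n     # each member's time devoted to the club
    return c_star, club_good

# Stigma that cuts members' outside wage in half doubles club output:
_, high_wage_club = club_equilibrium(n=10, w=1.0)
_, low_wage_club = club_equilibrium(n=10, w=0.5)
print(high_wage_club, low_wage_club)
```

Halving the outside wage doubles equilibrium club output: the sacrifice-and-stigma result in miniature. The quasi-linear utility and log club-good payoff are my simplifying assumptions, chosen only to make the comparative static transparent.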

When I see educated women and men declaring the earth is 5,000 years old, that evolution isn’t real, that climate change is a hoax, or that Donald Trump is a brilliant human being, what I see is public profession of beliefs that might limit social or even occupational opportunities and, in turn, further commit them to a specific subset of affiliations. In the constellation of beliefs that might end up as political shibboleths, of course, there stand to be some more costly than others. In fact, there might even be beliefs that impose negative externalities on others, such as antipathy toward vaccines or mask-wearing during a pandemic. Excessive burden might hurt the group, of course – remember, club membership must be a net gain to persist. In a polarized society, however, vitriol created in rival factions by the externality-generating belief could actually intensify the commitment of group members. The liberals hate real-Americans like me so much now, they’d never accept me as anything but a dumb redneck, so the rational thing to do is double down on my commitment to the only group that will have me. Beliefs that reduce private productivity, increase group productivity, and create long-run antipathy in rival groups can serve to create something incredibly valuable to the group: a captured membership. If there is one thing that is evolutionarily hard-wired into human beings it is the knowledge that isolation is death. A member so stigmatized by past public behavior that rival groups would never accept them stands to be very committed to the group going forward.

The vulnerability of sacrifice and stigma born of public adherence to false beliefs, however, is the capacity of leaders to incept preferred false beliefs into the dogma. This is one way that minority groups can become scapegoated, the carbon costs of fossil fuels denied, quack remedies peddled, or the reliability of electoral institutions undermined. Religious texts exist (mostly) unedited for long periods of time for a very important reason: core rules of behavior, methods of tithing, and sets of beliefs must be inoculated against opportunistic actors who would hijack the club goods they produce.

Sacrifice and stigma through club-specific false beliefs is a dangerous strategy for political parties for the simple reason that without the constraints of fact or scripture, leaders will feel the pull of their own preferences. Far more dangerous, however, is the megalomaniacal conman that any political party institutionally designed to demand cognitive dissonance of its members will eventually attract. Political parties need to solve collective action problems, yes, but they also need immune systems. One might point to social norms, both within and outside the group, as key means of protection. Recent years, however, would seem to suggest that norms are not sufficiently robust in the long run. The US court system has held up well, and has in many ways served as the nation’s constitutional immune system. Perhaps the major political parties should consider updating and reinforcing their own constitutions, and put in place mechanisms to protect themselves from the next inevitable invasion.

American political parties need to update and upgrade their immune systems.

Inspiring research:

Iannaccone, Laurence R. “Sacrifice and Stigma: Reducing Free-Riding in Cults, Communes, and Other Collectives.” Journal of Political Economy 100, no. 2 (1992): 271–291.

Aimone, Jason A., Laurence R. Iannaccone, Michael D. Makowsky, and Jared Rubin. “Endogenous group formation via unproductive costs.” Review of Economic Studies 80, no. 4 (2013): 1215-1236.

**Note: this is not to suggest that left-wing identity affiliations don’t utilize sacrifice and stigma mechanisms. There is no shortage of what I suspect are completely ineffective, but highly visible, ostensibly pro-environment behaviors that are demanded. But the “headline” mechanisms of herding left-of-center identities under the progressive banner look more like threats of exile than sacrifice and stigma.

DeepSeek vs. ChatGPT: Has China Suddenly Caught or Surpassed the U.S. in AI?

The biggest single-day loss of market value by one company in stock market history occurred yesterday, as Nvidia plunged 17%, shaving $589 billion off the AI chipmaker’s market cap. The cause of the panic was the surprisingly good performance of DeepSeek, a new Chinese AI application similar to ChatGPT.

Those who have tested DeepSeek find it performs about as well as the best American AI models, with lower consumption of computing resources. It is also much cheaper to use. What really stunned the tech world is that the developers claimed to have trained the model for only about six million dollars, way, way less than the billions that a large U.S. firm like OpenAI, Google, or Meta would spend on a leading AI model. All this despite the attempts by the U.S. to deny China the most advanced Nvidia chips. The developers of DeepSeek claim they worked with a modest number of chips, models with deliberately curtailed capacities which met U.S. export allowances.

One conclusion, drawn by the Nvidia bears, is that this shows you *don’t* need ever more of the most powerful and expensive chips to get good development done. The U.S. AI development model has been to build more, huge, power-hungry data centers and fill them up with the latest Nvidia chips. That has allowed Nvidia to charge huge profit premiums, as Google and other big tech companies slurp up all the chips that Nvidia can produce. If that supply/demand paradigm breaks, Nvidia’s profits could easily drop in half, e.g., from 60+% gross margins to a more normal (but still great) 30% margin.

The Nvidia bulls, on the other hand, claim that more efficient models will lead to even more usage of AI, and thus increase the demand for computing hardware – a cyber instance of Jevons’ Paradox (where increases in the efficiency of steam engines at burning coal led to more, not less, coal consumption, because they made steam engines more widespread).
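
The two camps are really arguing about the price elasticity of demand for AI output. As a back-of-the-envelope sketch (my own illustration, assuming a constant-elasticity demand curve, not anyone’s actual forecast): an efficiency gain of e cuts the effective price per unit of AI output by a factor of e, usage then scales by e to the power of the elasticity ε, and the chips needed are usage divided by efficiency, so they scale by e^(ε − 1). The bulls’ Jevons scenario requires ε > 1:

```python
def chip_demand_multiplier(efficiency_gain, elasticity):
    """How hardware demand scales after an efficiency improvement,
    under a constant-elasticity demand curve for AI output.

    If models become `efficiency_gain` times more efficient, the
    effective price per unit of AI output falls by that factor, so
    usage scales by efficiency_gain ** elasticity.  Chips needed are
    usage divided by efficiency, giving the net multiplier
    efficiency_gain ** (elasticity - 1).
    """
    return efficiency_gain ** (elasticity - 1)

# A hypothetical 10x efficiency gain:
print(chip_demand_multiplier(10, 0.5))  # inelastic demand: chip demand falls
print(chip_demand_multiplier(10, 1.5))  # elastic demand: chip demand rises
```

With elasticity below one, cheaper AI means fewer chips sold (the bear case); above one, usage grows faster than efficiency and chip demand rises (the bull case). The elasticity values are made up for illustration.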

I read a bunch of articles to try to sort out hype from fact here. Folks who have tested DeepSeek find it to be as good as ChatGPT, and occasionally better. It can explain its reasoning explicitly, which can be helpful. It is open source, which I think means the code or at least the “weights” have been published. It does seem to be unusually efficient. Westerners have downloaded it onto (powerful) PCs and have run it there successfully, if a bit slowly. This means you can embed it in your own specialized code, or do your AI apart from the prying eyes of ChatGPT or other U.S. AI providers. In contrast, ChatGPT I think can only be run on a powerful remote server.

Unsurprisingly, in the past two weeks DeepSeek has been the most-downloaded free app, surpassing ChatGPT.

It turns out that being starved of computing power led the Chinese team to think their way to several important innovations that make much better use of it. See here and here for gentle technical discussions of how they did that. Some of it involved hardware-ish things like improved memory management. Another key factor is that they figured out how to activate only the parts of the model relevant to each query (a “mixture of experts” design), instead of running the entire network every time.

A number of experts scoff at the claimed six million dollar figure for training, noting that if you include all the costs that were surely involved in the development cycle, it can’t be less than hundreds of millions of dollars. That said, it was still appreciably cheaper than the usual American way. Furthermore, it seems quite likely that making use of answers generated by ChatGPT helped DeepSeek to rapidly emulate ChatGPT’s performance. It is one thing to catch up to ChatGPT; it may be tougher to surpass it. Also, presumably the compute-efficient tricks devised by the DeepSeek team will now be applied in the West, as well. And there is speculation that DeepSeek actually has use of thousands of the advanced Nvidia chips, but they hide that fact since it involved end-running U.S. export restrictions. If so, then their accomplishment would be less amazing.

What happens now? I wish I knew. (I sold some Nvidia stock today, only to buy it back when it started to recover in after-hours trading). DeepSeek has Chinese censorship built into it. If you use DeepSeek, your information gets stored on servers in China, the better to serve the purposes of the government there.

Ironically, before this DeepSeek story broke, I was planning to write a post here this week pondering the business case for AI. For all the breathless hype about how AI will transform everything, it seems little money has been made except for Nvidia. Nvidia has been selling picks and shovels to the gold miners, but the gold miners themselves seem to have little to show for the billions and billions of dollars they are pouring into AI. A problem may be that there is not much of a moat here – – if lots of different tech groups can readily cobble together decent AI models, who will pay money to use them? Already, it is being given away for free in many cases. We shall see…

What we hear at the campfire

A recent scout campout got me thinking about who gets an audience. A small group was sitting around a campfire silently. Eventually the person who piped up and captured our attention was 9 years old, with all the maturity expected thereof. Who is to blame for the low quality of discourse that night? I didn’t expend any energy to make good use of that time. I could have taught those kids something if I had told an engaging story or introduced a clever joke. It would have taken energy to communicate something important in a way that would make them want to listen.

We have a limited number of minutes to pay attention to the world and we use few of them productively. There is a metaphorical campfire every night, after the work of subsistence is over. Who speaks up? Who gets an audience? When a journalist is doing their best to cover an important issue or sound an alarm, how many people bother to click or get a paid subscription?

I regularly see people complain that journalists or the media are doing it wrong. “Why didn’t the NYT cover X?” Jeremy regularly points out that the NYT did cover X, but not many people clicked.

Ship hijackings on the other side of the world aren’t very fun to read about. What really got clicks this past week was Melania’s hat.

Most of the handwringing over what the media should do is deflecting blame from what we should be doing, which is paying for good journalism and engaging with the boring/important news.

Even before LLMs, there had for decades been no shortage of great serious writers, and text could be shared online at very low cost. The bottleneck is the audience. Good readers are scarcer than writers.

Buying on Margin is Like an Option

Over the winter break I was able to catch up on a lot of podcasts. I also began listening to the Marginal Revolution podcast (which is phenomenal). I especially enjoyed the final episode of season 1 about options and how many transactions can be characterized as giving someone an option. Here, the term option echoes a financial option. You pay today for the ability to do something in the future. In financial markets, you can purchase the right to buy or sell at a particular price in the future.

But lots of things count as options. Staying in the financial context, purchasing a stock gives you the option to sell that stock at the future spot price. So something can be characterized as an option even though we are not accustomed to describing it as such explicitly. More mundane transactions can also be interpreted as options. Suppose you buy a can opener. You are buying the option to have that tool on hand in the future to open some shelf-stable food. You can choose to exercise the option simply by opening your kitchen drawer.

But financial options often include the possibility of losing money. It may be that your grocery purchases never include canned items and that you never have occasion to use your can opener. Maybe that’s a bad investment: you sank your money into something that you never used. Except… you did in fact have the option to use the can opener. Maybe you had peace of mind that you were well prepared just in case a guest arrived with a can of something. Buying a can opener is like buying an option.

Returning to the realm of finance, let’s discuss buying on margin. Buying an asset on margin is when you borrow from your broker in order to purchase a financial asset. It’s not entirely free money. They have rules about the amount you can borrow and, of course, you must pay back the loan with interest.
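
One way to see the option-like character of a margin position is through limited liability: once your equity is wiped out, you can walk away (or be closed out), so your payoff is truncated at zero, which is exactly the payoff of a call option struck at the loan balance. A small sketch with made-up numbers (my illustration, not from the podcast):

```python
def call_payoff(spot, strike):
    """Payoff of a call option at expiry: max(S - K, 0)."""
    return max(spot - strike, 0.0)

def margin_payoff(spot, loan_due):
    """Equity value of a stock bought on margin when the position is
    closed and the broker's loan is repaid.  With limited liability
    (you can walk away once your equity is gone), the payoff is
    truncated at zero, matching a call struck at the loan balance.
    """
    return max(spot - loan_due, 0.0)

# Buy a $100 stock with $50 of your own money plus a $50 loan
# (interest ignored for simplicity).  The levered position behaves
# like a call with strike 50 across any final price:
for final_price in [30, 50, 80, 120]:
    assert margin_payoff(final_price, 50) == call_payoff(final_price, 50)
```

The dollar amounts and the zero-interest simplification are mine; with interest, the "strike" is just the loan balance plus accrued interest.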


Forecasting 2025

The WSJ’s survey of economists reports that inflation expectations for 2025 were around 2% before the election but are closer to 3% now. The surveyed economists expect GDP growth slowing to 2% and unemployment ticking up slightly but staying in the low 4% range, with no recession. The basic message is that 2025 will be a typical year for the US macroeconomy, but with inflation slightly elevated, perhaps due to tariffs.

Kalshi has a lot of good markets up that give more detailed predictions for 2025:

For those who hope for DOGE to eliminate trillions in waste, or those who fear brutal austerity, the message from markets is that the huge deficits will continue, with the federal debt likely climbing to over $38 trillion by the end of the year. This is one reason markets see a 40% chance that the US credit rating gets downgraded this year.

While the US has only a 22% chance of a recession, China is currently at 48%, Britain at 80%, and Germany at 91%. The Fed probably cuts rates twice to around 4.0%.

Will wage growth keep pace with inflation? It’s a tossup. Corporate tax cuts are also a tossup. The top individual rate probably won’t fall below its current 37%.

If you want to make your own predictions for the year, but don’t want to risk money betting on Kalshi, there are several forecasting contests open that offer prizes with no risk:

ACX Forecasting Contest: $10,000 prize pool, 36 questions, must submit predictions by Jan 31st

Bridgewater Forecasting Contest: $25,000 prize pool, half of prizes are reserved for undergraduates. Register now to make predictions between Feb 3rd and March 31st. Doing well could get you a job interview at Bridgewater.

One Hundred Years of U.S. State Taxation

From a paper recently published in the Journal of Public Economics by Sarah Robinson & Alisa Tazhitdinova, here is the history of federal and state taxation in the past century in the US in one picture:

The paper primarily focuses on US state taxes, thus mostly ignoring local taxes, but in the Appendix the authors do show us similar charts for local taxes:

In broad terms, the history of taxation in the US in the 20th century is a history of the decline of the property tax, and the rise of the income and sales taxes. In 1900, there were barely any federal taxes (other than those on alcohol and tobacco), 50% of state taxes were property tax, and almost 90% of local taxes were property taxes. Property taxes were essentially the only form of taxation most Americans would directly recognize (excise taxes and tariffs were baked into the price of the goods).

John Wallis (2000) provided a similar and simpler picture of these changes: considering all taxes in the US, property taxes were over 40% of the total in 1900 but today are under 10%. Income taxes came out of nowhere and are now about half of all government revenues in the US:

Free Webinar, Jan. 25: Practical and Ethical Aspects of Future Artificial Intelligence

As most of us know, artificial intelligence (AI) has taken big steps forward in the past few years with the advent of Large Language Models (LLMs) like ChatGPT. With these programs, you can enter a query in plain language and get a lengthy response in human-like prose. You can have ChatGPT write a computer program or a whole essay for you (which of course makes it challenging for professors to evaluate essays handed in by their students).

However, the lords of Big Tech are not content. Their goal is to create AI with powers that far surpass human intelligence, and that even mimics human empathy. This raises a number of questions:

Is this technically possible? What will be the consequences if some corporations or nations succeed in owning such powerful systems? Will the computers push us bumbling humans out of the way? Will this be a tool for liberation or for oppression? This new technology coming at us may affect us all in unexpected ways. 

For those who are interested, there will be a 75-minute webinar on Saturday, January 25 which addresses these issues, and offers a perspective by two women who are leaders in the AI field (see bios below). They will explore the ethical and practical aspects of AI of the future, from within a Christian tradition. The webinar is free, but requires pre-registration:

Here are bios of the two speakers:

Joanna Ng is a former IBMer turned start-up founder, working in artificial intelligence with a specialty in augmented cognition, integrating IoT and blockchain in the context of web3 and applying design-thinking methodology. With forty-nine patents granted to her name, Joanna was named an IBM Master Inventor. She served for seven years as Head of Research and Director of the Center for Advanced Studies at IBM Canada. She has published over twenty peer-reviewed academic publications and co-authored two computer science books with Springer, The Smart Internet and The Personal Web. She wrote a Christianity Today article called “How Artificial Intelligence Is Today’s Tower of Babel” and published her first book on faith and discipleship, Being Christian 2.0, in October 2022.

Rosalind Picard is founder and director of the Affective Computing Research Group at the MIT Media Laboratory; co-founder of Affectiva, which provides Emotion AI; and co-founder and chief scientist of Empatica, which provides the first FDA-cleared smartwatch to detect seizures. Picard is author of over three hundred peer-reviewed articles spanning AI, affective computing, and medicine. She is known internationally for writing the book, Affective Computing, which helped launch the field by that name, and she is a popular speaker, with a TED talk receiving ~1.9 million views. Picard is a fellow of the IEEE and the AAAC, and a member of the National Academy of Engineering. She holds a Bachelors in Electrical Engineering from Georgia Tech and a Masters and Doctorate, each in Electrical Engineering and Computer Science, from MIT. Picard leads a team of researchers developing AI/machine learning and analytics to advance basic science as well as to improve human health and well-being, and has served as MIT’s faculty chair of their MindHandHeart well-being initiative.