Empirical Austrian Economics?

David Friedman recently got into an online debate with Walter Block that could be seen as a boxing match between “Austrian economics” and the “Chicago School of Economics”. In the wake of this debate, Friedman assembled his thoughts in this piece, which (if I understand properly) is supposed to be published as a chapter in an edited volume. Upon reading it, I thought it worth sharing my own thoughts, in part because I see myself as a member of both schools of thought and in part because I specialize in economic history. And here is the claim I want to make: I don’t see any meaningful difference between the two, and I don’t understand why there are perpetual attempts to create a distinction.

But before that, let’s do a simple summary of the two views according to Friedman (which is the first part of the essay). The “Chicago” version is that you build theoretical models and then test them. If the model is not confirmed, it could be because a) you used incorrect data, b) you relied on incorrect assumptions, or c) you relied on an incorrect econometric specification. The Austrian version is that you deduce economic theory from axioms of human action, and that is it. The real world cannot contradict the axioms; it only serves to provide pedagogical illustrations. That is how Friedman puts the difference between the schools of thought. The direct implication of this difference is that there cannot be (or there is no point to) empirical/econometric work within the Austrian school’s thinking.

Now, I understand that this is the viewpoint shared by many, as evidenced by a shared distrust of econometrics and mathematical depictions of the economy among Austrian-school scholars. In fact, Rothbard was pretty clear about this in an underappreciated book he authored, A History of Money and Banking in the United States. But I do not understand why.

After all, all models are true if they are logically consistent. I can go to my blackboard, draw up a model of the economy, and make predictions about behavior. That is what the Austrians do! The problem is that predictions rely on assumptions. For example, we say that a monopoly grant is welfare-reducing. However, when there are monopolies over common-access resources (fisheries, for example), they are welfare-enhancing, since the monopolist does not want to deplete the resource and compete against its future self. All we tweaked was one assumption about the type of good being monopolized. Moreover, I can get the same result as the conventional logic regarding monopolies by tweaking one more assumption, regarding time discounting. Indeed, a monopoly over a common-access resource is welfare-enhancing only as long as the monopolist values the future stream of income more than the future value of the present income. In other words, someone on the brink of starvation might not care much about not having fish tomorrow if he makes it to tomorrow.
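
To see how far that one tweaked assumption carries, here is a minimal two-period sketch of the discounting point. It is my own illustration with made-up numbers (the stock size, price, growth rate, and discount factors are all hypothetical), not anything drawn from Friedman or Block.

```python
# A two-period "fishery" sketch with invented numbers. The monopolist chooses
# how much to harvest today; anything left in the water multiplies by `growth`
# and is harvested tomorrow, discounted by `beta`.

def best_harvest(stock, price, growth, beta):
    """Pick today's harvest h to maximize price*h + beta*price*growth*(stock - h)."""
    return max(range(stock + 1),
               key=lambda h: price * h + beta * price * growth * (stock - h))

stock, price, growth = 100, 1.0, 1.5

patient = best_harvest(stock, price, growth, beta=0.9)    # values the future stream
desperate = best_harvest(stock, price, growth, beta=0.3)  # "brink of starvation"

print(patient)    # 0   -> conserves the whole stock, since beta * growth > 1
print(desperate)  # 100 -> depletes the fishery today, since beta * growth < 1
```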

If I were to test the claims above, I could get a wide variety of results regarding the effects of monopoly (here are some conflicting examples from the Canadian economic history of fisheries). All of these apparent contradictions result from the nature of the assumptions and whether they apply to each case studied. In this case, the empirical part is totally in line with the Austrian view. Indeed, empirical work simply tells us which of these assumptions apply in case X, Y, or Z. In this way of viewing things, all debates about methods (e.g. endogeneity bias, selection bias, measurement, level of data observation) are debates about how to properly represent theories. Nothing more, nothing less.

It is a most Austrian thing to start with a clear model and then test predictions to see if the model applies to a particular question. A good example is the Giffen good. The Giffen good can theoretically exist, but we have yet to find one that convinces a majority of economists. Ergo, the Giffen good is theoretically true, but it is also an irrelevant imaginary pink unicorn. Empirically, the Giffen good has simply failed to materialize over hundreds of papers in top journals.

In fact, I see great value in using empirical work through an Austrian lens. Indeed, I have written articles (one is a revise-and-resubmit at Public Choice, another is published in the Review of Austrian Economics, and another is forthcoming at Essays in Economic and Business History) using econometric methods such as difference-in-differences and a form of regression discontinuity to test the relevance of the theory of the dynamics of interventionism (which proposes that government intervention is a cumulative process of disequilibrium that planners cannot foresee). In each of these articles, I believe I demonstrated that the theory has some meaningful ability to predict the destabilizing nature of government interventions. When I started writing these articles, I believed that the body of theory I was using was true because it was logically consistent. However, I was willing to accept that it could be irrelevant or generally not applicable.
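
For readers unfamiliar with the method, here is difference-in-differences stripped to its bare logic. The groups and numbers are hypothetical and are not taken from the papers mentioned above.

```python
# A bare-bones difference-in-differences comparison. The groups, periods, and
# outcome values below are hypothetical; they are not from the papers above.

outcomes = {
    ("treated", "before"): 10.0,   # group hit by the intervention, pre-period
    ("treated", "after"):  13.0,
    ("control", "before"):  9.0,   # comparison group, same periods
    ("control", "after"):  10.0,
}

change_treated = outcomes[("treated", "after")] - outcomes[("treated", "before")]
change_control = outcomes[("control", "after")] - outcomes[("control", "before")]

# The estimated effect is the excess change in the treated group, under the
# assumption that both groups would otherwise have moved in parallel.
did_estimate = change_treated - change_control
print(did_estimate)  # 2.0
```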

In other words, you can see why I fail to perceive any meaningful difference between Austrian theory and other schools of economic thought. For years, I have known I was one of the few to see things this way, and I never understood why. A few months ago, I think I put my finger on the “why” after reading a forthcoming piece by my colleague Mark Koyama: Austrians assume econometrics to be synonymous with economic planning.

I admit that I have read Mises’ Theory and History and came out not understanding why Austrians think that Mises admonished the use of econometrics. What I read was more in the domain of a reaction to the use of econometrics for planning and policy-making. Econometrics can be used to answer questions of applicability without in any way rejecting the Austrian framework. Maybe I am an oddball, but I was a fellow Austrian traveler when I entered the LSE and remained one as I learned to use econometrics. I never saw any conflict between using quantitative methods and Austrian theory. I only saw a conflict when I spoke to extreme Rothbardians who seemed to conflate the use of tools to weigh theories with the use of econometrics to make public policy. The former is desirable while the latter is to be shunned. Maybe it is time for Austrians to realize that there is good reason to reject econometrics as a tool to “plan” the economy (which I do) and accept econometrics as a tool of study and testing. After all, methods are tools, and tools are not inherently good or bad; it’s how we use them that matters.

That’s it, that’s all I had to say.

What we pay for the thing that some workers do that most people do not

In middle school, I broke my leg in a soccer tournament game. I needed to go to the hospital and get extra support for the next month. Some of the workers who helped me were not highly paid, but the value I placed on their services was very high.

Why bring this up? There has been conversation this week about the “low skill” label for work. Brian Albrecht summarized the debate. Brian tangentially mentioned the “diamond-water paradox,” but I think it is worth talking more about that. Economists have a few models and stories that change the way you think about the world.

When I teach Labor Economics, we read an excerpt from Average is Over and then I explain the diamond-water paradox in class. I ask the students why diamonds cost more than water, even though water is more important. The answer can help us understand how wages get set for human workers (I say “human” because by that time we are deep in the topic of robot workers as substitutes).

I tell my students that some of the low-pay work performed by humans is extremely important. I’m still looking for the perfect illustration here. The one I use goes something like this, which is related to my broken leg anecdote… imagine if you tripped on train tracks and couldn’t get yourself out of the way of an oncoming train. How much would you pay a human to haul you to safety? Almost any human could perform the task. That service would be as valuable as a glass of water if you are about to die from thirst, which is to say that your value for it is almost infinite.

The key to understanding the market price of cleaners, as opposed to the high wages for repairing Facebook code, is marginal thinking. There is a lot of water, so the next glass is going to be cheap. Likewise, there are many people who can do cleaning work, so the marginal cleaner commands a low wage even though the work itself is valuable.
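
A toy calculation makes the point; the willingness-to-pay figures below are invented purely for illustration.

```python
# Invented willingness-to-pay figures for successive glasses of water, in dollars.
# Total value is huge, but with abundant water the *last* glass sets the price.

marginal_value_of_glass = [1000.0, 50.0, 5.0, 1.0, 0.10, 0.01]

total_value = sum(marginal_value_of_glass)
price_when_abundant = marginal_value_of_glass[-1]   # the marginal glass

print(round(total_value, 2))   # 1056.11 -> water is "important"
print(price_when_abundant)     # 0.01    -> yet its market price is close to zero
```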

In writing Average is Over, Tyler Cowen is trying to understand why wages for the-less-highly-paid-skills have stagnated recently, while wages for the-highly-paid-skills are increasing along with GDP. He brings computers and technology into the conversation, as one culprit for recent changes. There is a limited supply of humans who can show up to a tech job and contribute reliably. “Programmers” are not the only highly paid class of workers, but it’s easy to see that the supply of people who are proficient with Python is limited.

I see two opposing forces in the tech world, which I have been following for a few years. First, we have boot camps, code clubs, and all kinds of resources to both equip and encourage people to go into tech. I volunteer to advise a club that provides resources for female college students taking a technical route. On the other hand, lots of people who do get a foot in the door of a tech company become upset and quit.

Here is a quitter (a twitter quitter?):

You can read about this specific situation at this woman’s website. It seems like she made the right choice for herself. She is actually on a mission to change tech for women. I’ll reproduce the text here, in case someone can’t see the tweet: “first day at my new job! i am now a ceramicist because it lets me have no commute, make my own hours, decide the value of my work, and bring people joy. make no mistake, i wanted to code, but tech fulfilled none of that. so i hand off the baton. please fix tech while i make pots!”

The point is that she is one of many people who have dropped out of the tech workforce. Those employees who remain are pushed up toward the “diamond market price” and away from the “water market price”. Here is a blog about “burnout” survey data from 2018.

Populations in rich countries are not growing and labor force participation is down. Could the market wage for lower-skill-requirement jobs in the US rise dramatically in the next century, or at least keep pace with the wage increases that were recently enjoyed by those-with-the-capabilities-that-are-highly-valued? The logic of marginal utility still applies, but prices will change if supply shifts.

See my old blog about Andrew Weaver who is researching skills that are in demand.

Optimal Policy & Technological Contingency

A person’s optimal choice depends on what they know. To consume more ice cream? Or to consume more alcohol? It depends on what we know about the expected utility across time. If a person thinks that alcohol has few calories, then it is understandable that they would choose to drink rather than eat. The person might be totally wrong, but they are acting optimally contingent on their knowledge about the world. (FWIW, 4oz of ethanol has 262 calories and 4oz of typical ice cream has 228 calories.)

The case is analogous for good government policy. The best policy is contingent on accessing the distribution of knowledge that’s inside of multiple people’s heads. It’s not sensible to assert that a policy is suboptimal if the optimal policy requires knowledge that neither a single individual nor all people together have. Even if the sum of all knowledge does exist, it may not be possible to access it.

Economists like to tell their undergraduate classes that it doesn’t matter whom you tax: the economic burden comes out the same either way. But that’s contingent on 1) identical compliance costs among buyers and sellers and 2) identical relevant information. If a tax comes as a surprise to the buyer or the seller, then it absolutely matters who is taxed.
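
Here is a minimal sketch of that textbook claim, assuming linear demand and supply with made-up parameters: when compliance costs and information are identical on both sides, moving the statutory burden from buyers to sellers changes nothing.

```python
# Textbook incidence with linear demand and supply (all numbers made up).
# Whichever side the tax is levied on, only the wedge between the price buyers
# pay and the price sellers keep matters, so the outcome is identical.

a, b = 100.0, 1.0   # demand:  quantity = a - b * price_paid_by_buyers
c, d = 20.0, 1.0    # supply:  quantity = c + d * price_kept_by_sellers
t = 10.0            # per-unit tax

def equilibrium(tax_on_buyers: bool):
    # The statutory side never enters the algebra below -- that is the point.
    # Solve a - b*(ps + t) = c + d*ps for the sellers' price ps.
    ps = (a - b * t - c) / (b + d)
    pb = ps + t
    q = c + d * ps
    return pb, ps, q

print(equilibrium(tax_on_buyers=True))    # (45.0, 35.0, 55.0)
print(equilibrium(tax_on_buyers=False))   # (45.0, 35.0, 55.0) -- same burden either way
```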

When I was in 1st grade in North Carolina, my class went on a field trip to a Christmas tree farm. We learned a bunch about maintaining the farm and we got to choose a pumpkin to take home. At the end of our visit we took turns perusing the gift shop. My mother had generously given me a dollar to spend, and I was eager to spend it (I rarely had money to spend). Unfortunately, even in the early-to-mid ’90s, most of the things in the shop cost more than $1. So, I settled on purchasing some beef jerky that cost 99 cents.

Continue reading

The Return of Independent Research

Universities have been around for about a thousand years, but for much of that time it was typical for cutting-edge research to happen outside of them. Copernicus wasn’t a professor; neither was Darwin. Others, like Isaac Newton, Robert Hooke, and Albert Einstein, became professors only after completing some of their best work. Scientists didn’t need the resources of a university; they simply needed a means of support that gave them enough time to think. Many were independently wealthy (Robert Boyle, Antoine Lavoisier) or supported by the church (Gregor Mendel). Some worked “real jobs”: David Ricardo as a banker, Einstein famously as a patent clerk.

Over time academia grew and an increasing share of research was done by professors, with most of the rest happening inside the few non-academic institutions that paid people to do full-time research: national labs, government agencies, and a few companies like Xerox PARC, Bell Labs, and 3M. In many fields, research came to require expensive equipment that was only available in the best-funded labs. “Researcher” became a job, and over the 20th century research conducted by those without that job came to be viewed with suspicion.

But the Internet Age is leading to a growth in opportunities outside academia, opportunities not just economic but intellectual. Anyone with a laptop and an internet connection can access most of the key tools that professors use, often for free: scientific articles, seminars, supercomputers, data, data analysis. Particularly outside of the lab sciences, the only remaining barrier to independent research is again what it was before the 20th century: finding a means of support that gives you time to think. This will never be easy, but becoming a professor isn’t either, and a growing number of people are either becoming independently wealthy, able to support themselves with fewer work hours (even vs. academics), or finding jobs that encourage part-time research. If you work for the right company you might even get better data than the academics have.

Particularly in artificial intelligence and machine learning, the frontier seems to be outside academia, with many of the best professors getting offers from industry they can’t refuse.

Even in the lab sciences, money is increasingly pouring in for those who want to leave academia to run a start-up instead:

I think it’s great for science that these new opportunities are opening up. A natural advantage of independent research is that it allows people to work on topics, or use methods, they couldn’t in academia because they are seen as too high-risk or too out-there, would make too many enemies, or otherwise fall into an academic “blind spot”.

I’m still happy to be in academia, and independent research clearly has its challenges too. But over my lifetime it seems like we have shifted from academia being the obvious best place to do research to academia being one of several good options. Even as research has begun to move elsewhere, though, universities still seem to be doing well at their original purpose of teaching students. Almost all of the people I’ve highlighted as great independent researchers were still trained at universities; most of the modern ones I linked to even have PhDs. There are always exceptions and the internet could still change this, but for now universities retain a near-monopoly on training good researchers even as the employment of good researchers becomes more competitive.

As an academic I may not be the right person to write about all this, so I’ll leave you with the suggestion to listen to this podcast where Spencer Greenberg and Andy Matuschak discuss their world of “para-academic research”. Spencer is a great example of everything I’ve said: an Applied Math PhD who makes money in private-sector finance/tech but has the time to publish great research, partly in math/CS where a university lab is unnecessary, but more interestingly in psychology, where being a professor would actually slow him down, since independent researchers don’t need to wait weeks for permission from an institutional review board every time they want to run a survey.

Latest Inflation Data: Hot Dogs and Cheese On Sale!

The latest CPI inflation data was released this morning. Mostly the new data just confirms what we’ve seen the past few months: consumer price inflation is at the highest levels in decades, and it is now very broad based.

To see how broad based the inflation is, we can look at any of the “special aggregates” that the BLS produces. CPI less food. CPI less shelter. CPI less food, shelter, energy, used cars and trucks (what a mouthful!). All of these are up substantially over the past year. The lowest number you can get is that last aggregate I listed, which excludes almost 60% of consumer spending, and even it is up 4.7% over the past year — the largest increase since 1991 for that particular special index.
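
For anyone curious how a special aggregate is put together, the idea is simply to drop the excluded categories and re-weight the rest. The sketch below uses invented weights and price changes, not actual BLS figures.

```python
# Rough sketch of a CPI "special aggregate": drop the excluded categories and
# re-weight what is left. Weights and 12-month price changes are invented for
# illustration; they are not BLS figures.

categories = {
    # name: (share of consumer spending, 12-month price change)
    "food":            (0.14, 0.06),
    "shelter":         (0.33, 0.04),
    "energy":          (0.07, 0.30),
    "used_vehicles":   (0.04, 0.37),
    "everything_else": (0.42, 0.05),
}

def special_aggregate(exclude):
    kept = {name: v for name, v in categories.items() if name not in exclude}
    total_weight = sum(weight for weight, _ in kept.values())
    return sum(weight * change for weight, change in kept.values()) / total_weight

excluded = {"food", "shelter", "energy", "used_vehicles"}
print(round(sum(categories[name][0] for name in excluded), 2))  # 0.58 -> ~60% of spending excluded
print(round(special_aggregate(excluded), 3))                    # 0.05 -> what is left is still rising
```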

Or, you can just look at food. We all have probably observed that meat prices are way up recently — about 15% over the past year. But it’s not just meat. It’s fruit, vegetables, grains, dairy… the whole darn food pyramid. In fact, there are only two food categories (hot dogs and cheese) and two drinks (tea and wine) that are actually down since December 2020.

I’ve covered the symbolic importance of hot dog prices before, but the fact that only four food or drink categories had price decreases is an indication that food-price inflation is extremely broad-based.

So what’s causing the inflation?

Continue reading

Primary Driver for This Inflation Is Surging Demand (Fueled by COVID Payments), Not Supply Chain Constraints

Inflation is colloquially defined as “too much money chasing too few goods (and services).” Supply chain constraints get talked about, and they are widely blamed for the inflation we are seeing. Of course, supply limitations play into inflation, but to focus on them is to miss the elephant in the room. The primary driver of this inflation is not “too few goods” but “too much money.”

Such is the thesis of a widely circulated article by Ray Dalio’s investing firm Bridgewater Associates, “It’s Mostly a Demand Shock, Not a Supply Shock, and It’s Everywhere.” The point is summarized:

While the headlines tend to focus on the micro elements of the supply shock (the LA port, coal in China, natural gas in Europe, semiconductors globally, truckers in the UK, etc.), this perspective largely misses the macro cause that is likely to persist and for which there is no idiosyncratic solution. This is not, by and large, a pandemic-related supply problem: as we’ll show, supply of almost everything is at all-time highs. Rather, this is mostly an MP3-driven upward demand shock. [emphases in the original]

In Bridgewater’s terminology, “MP3” is “Monetary Policy #3”, and refers to massive deficit spending combined with central bank quantitative easing. We saw this implemented in 2020-2021 when the federal government pumped out trillions of dollars of stimulus payments and enhanced unemployment benefits, and the Fed instantly soaked up the bonds that were issued to pay for these trillions. This fed/Fed combo amounts to simply printing money on an enormous scale.

Those trillions of dollars funded a huge surge in durable goods purchases. By late 2021 the supply of these goods was well above 2019 (pre-COVID) levels, and even above normal growth trendlines. However, the supply and transport systems simply could not grow fast enough to accommodate this insatiable demand. Charts below substantiate this. To focus on supply chain bottlenecks by themselves is misleading. The primary driver of this inflation has been the trillions of dollars of federal largesse. The Fed knows all this, obviously, but Jay Powell (the Chief Enabler of this deficit spending) would likely not have been reappointed if he spoke too directly about the cause of this inflation. Hence the endless prattle about supply chains.

Continue reading

Infrastructure can only happen if we’re allowed to build it

This caught my eye.

This isn’t just expensive or inefficient. This is obstructive at a level only just short of an executive veto. Regardless of what sits at the top of your dream infrastructure list, this is the problem you have to solve first. It doesn’t matter if it’s high-speed rail, the hyperloop, or offshore wind farms. Heck, maybe your big policy dream is universal healthcare or public education. If governments can’t build anything without a 10X markup, then every large-scale, government-provided solution has no value besides giving us something to argue over.

If I might put my even-more-cynical-than-usual hat on for a moment, the fact that this isn’t a top-line item in every policy discussion is politically telling. This is relevant to the policy ambitions of everyone to the left of the politest anarchist you know. However, the urgency and relevance should only increase as we move leftward across our political spectrum, since those are the people most excited about the government actually building things. With a handful of exceptions, that’s just not what I am seeing; quite the contrary, even.

Maybe it’s union indolence, conservative obstructionism, or just the quiet manifestation of all the reasons that public choice theory is actually more relevant as a left-wing school of thought than a conservative one. The fact remains that the incentives within modern politics and governance have brought us here, to a place where people want the same thing they always have: everything. And they’re willing to pay exactly as much as they always have: nothing. The difference is that our institutions used to give people an incentive to bargain within the political marketplace and hammer out a deal where prices, both in dollars and political support, led to an actual outcome where everyone ended up better off. Maybe it wasn’t as efficient as the private marketplace, but that’s almost beside the point. Sometimes the most important thing isn’t maximizing efficiency, but just managing to build the public good at all.

Instead, we seem to have arrived at an equilibrium with enough legacy rent-seekers that the system is choking on them, with no one willing to flinch unless they continue to enjoy the previously established flow of benefits. We can try to blame this on conservative obstruction, but the fact remains that there just isn’t that much work for them to do. It’s a lot easier to tell voters they shouldn’t have to pay taxes when those taxes are disappearing into the suppurating maw of insatiable contractors, unfunded pension obligations, unplacatable union reps, and a menagerie of regulations that accomplish nothing but make an advocate two years removed from an overpriced BA in communications feel good about levying just one more papercut on a bloated corpse.

I have no idea if “supply-side” progressivism will gain any more purchase than any of the other ad hoc attempts to coin a school of thought or political identity. But the idea stands, and I think it’s inescapable: if we want the government to be able to build stuff while leaving the 13th Amendment intact, it’s going to have to be able to pay market prices, and market wages, for it. Not much more, not much less.

Elasticity of Substitution or Why Simple Tools Teach Us Tons

I enjoy simple methods in economics. In economic history, which is my field of specialization, it’s often by constraint that I have to use them. Because of that, one has to be creative. In the process, however, one spots how well-used simple methods can be more powerful (both for pedagogy and for explanation) than more advanced methods. Let me show you an example from Canadian history: the fur trade industry.

Yes, Canada’s mighty beaver! Generally known for its industriousness, the beaver has mostly been appreciated for its pelt, which was the main export staple from Canada during the 17th and 18th centuries. In fact, if people are pressed to state what they think of when they think about Canada, fur pelts come in the top 10 (if not the top 5). It is thus unsurprising that there are hundreds of books on the business history of the fur trade in Canada.

One big thesis in Canadian economic history is that the fur trade was actually a drag on economic development (here and here and, most importantly, here, with a Wikipedia summary here). The sector’s dominance meant that the colony was not developing a manufacturing sector or other industries such as timber, cod fishing, agriculture, or potash. Political actors were beholden to a dominant class of fur merchants. In a way, it looks a lot like the resource curse argument. And, up to 1810-1815, the industry represented the vast majority of exports (always north of 60% and generally around 75%). During the French colonial era, furs represented 20% of GDP at some points.

It’s only after 1815 that furs collapse as a staple — and quite rapidly. They represented less than 10% of exports and less than 2% of GDP by 1830. To explain the rapid turnaround, most of the available work has focused on demand for the industry’s output (see here) or on internal industry factors. In a weird way, the industry is taken in isolation.

And that is where a simple tool like the elasticity of substitution between inputs becomes useful. First, I want you to notice the dates I invoked for the turning point: 1810-1815. These are not trivial years. They mark the end of the contest at sea between Britain and France and the beginning of the former’s naval hegemony. This meant few trade interruptions due to war and insecurity at sea. Before 1815, the colonies in North America would have experienced such interruptions nearly one year out of two.

What does that have to do with the fur trade’s dominance and elasticity of substitution? Well, it could be that war affects industries differently. Let’s look at isoquants for a second to see how that could be the case. Imagine a constant elasticity of substitution (CES) production function of the following shape:

Q = (L^(-r) + K^(-r))^(-1/r)

Where L and K are your usual terms for labor and capital and r is the substitution parameter (the elasticity of substitution works out to 1/(1+r)). Now, for the sake of argument, let us imagine what happens to the isoquant of a production function as r tends to infinity. As it tends to infinity, the marginal rate of technical substitution between L and K approaches zero if L > K. This means that there is a form of pure complementarity between inputs and no substitution is possible to produce the same quantity of output. The isoquant looks like this.

[Figure: isoquant as r tends to infinity]

On the other hand, if r tends to -1, there is perfect substitutability between L and K. The isoquant then looks like this.

[Figure: isoquant as r tends to -1]

What if the fur industry’s isoquant looked more like the latter case while those of other industries looked like the former? More precisely, what if wars affected the supply of one input more than another? With a simple element like our description of the production function above, we see that if wars did not affect the supply of inputs evenly, then one industry would be forced to contract output more than another. In our case, this would be the timber, potash, cod, and agricultural sectors versus the fur trade.
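
Here is a quick numerical sketch of that logic using the CES form above. The parameter values are invented for illustration, but they show how the same wartime cut to an imported input devastates a complements-style industry while a substitutes-style industry shrugs it off.

```python
# A made-up illustration of the argument above: the same wartime shock to an
# imported input hits a "complements" industry far harder than a "substitutes"
# industry. Parameter values are invented, not estimated from any data.

def ces_output(L, K, r, share=0.5):
    """CES production: Q = (share * L**(-r) + (1 - share) * K**(-r)) ** (-1 / r).
    Large positive r behaves like complements (Leontief); r near -1 behaves like
    near-perfect substitutes."""
    return (share * L ** (-r) + (1 - share) * K ** (-r)) ** (-1.0 / r)

L = 1.0                      # locally supplied input, unaffected by the war
K_peace, K_war = 1.0, 0.1    # imported input (salt, axes, nails...) cut to a tenth in wartime

for label, r in [("complements-like (r = 50)", 50.0),
                 ("substitutes-like (r = -0.95)", -0.95)]:
    q_peace = ces_output(L, K_peace, r)
    q_war = ces_output(L, K_war, r)
    print(label, round(q_war / q_peace, 2))

# complements-like (r = 50)     ~0.10 -> output collapses with the import (think cod and salt)
# substitutes-like (r = -0.95)  ~0.54 -> output dips but survives (think furs traded for whatever is at hand)
```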

Does that fit with the historical evidence? We know that the fur industry frequently changed the inputs it used in trading with the First Nations of Canada to buy furs. Whatever was deemed most valued by the natives would be what was used. It could be alcohol, clothing, firearms, furnishings, silverware, tobacco, spices, salt, etc. This we get clearly from the work of Ann Carlos and Frank Lewis (a book linked to above). There was great ability to substitute. In contrast, other industries could not shift as easily. Take the timber industry, which needed to import axes, saws, hoops, iron, and nails from France or the United Kingdom for most of the 18th century. If wars disrupted the supply of these capital goods from Europe, there was very little substitution available, which meant that the timber industry would have to contract output considerably to reflect the higher cost of these items. The same thing applies to the cod fishing industry, whose key input was salt. No salt, no drying of the cod for preservation and export, thus no cod exports. And salt needed to be imported. In wartime, salt prices tended to jump much faster than other goods because its supply was entirely imported. Thus, wartime meant that the cod industry had to contract its output quite substantially.

The cod fishing industry is an amazing example of this if you take the American Revolutionary War. During the war, the colony of Quebec (which represented 85%+ of Canada’s population at the time) was invaded by the Americans, and France’s alliance with the Americans jeopardized trade between Quebec and Britain (its mother country at that point). The result was that salt prices jumped rapidly compared to all other goods and the output of the cod industry contracted. In contrast, the fur trade sector was barely affected. Look at this graph of the exports of beaver skins and codfish. Codfish output collapses whereas beaver skins barely show any sign of a major military conflagration.

In a longer-run perspective, it’s now easy to understand why the industry was dominant. It was the only industry that was robust to wartime shocks. All other industries would have seen quite large shifts in factor prices, causing them to contract and expand output in a very volatile manner. Now you may think this is just a trivial re-arranging of the argument. It is not, because it invalidates the idea that the colony was poor or developed slowly because of the dominance of the fur industry. Rather, it shifts the burden onto wartime shocks. Wars, not the dominance of the fur trade itself, meant that the economy was heavily mono-industrial.

A simple tool, the elasticity of substitution (which we can derive from the marginal rate of technical substitution), changes the entire interpretation of Canadian economic history. Can you see what I mean by the claim that simple tools combined with simple empirical observations can lead to powerful explanations? I hope you do! 

Remittances Eye-tracking Experiment: Meet the authors and paper

I am pleased to have been asked to discuss a paper in an ASHE (American Society of Hispanic Economists) session at the 2022 AEA meeting. Our session is “Hispanics and Finance” on Sunday January 9 at 12:15pm Eastern Time.

The paper is “Neuroeconomics for Development: Eye-Tracking to Understand Migrant Remittances”. Here is a bit about each author. Meeting in person is a benefit that I miss this time, since the meeting is virtual.

Eduardo Nakasone of Michigan State University has several papers on information and communication technologies and agricultural markets. I pondered this sentence from one of his abstracts, “Under certain situations, ICTs can improve rural households’ agricultural production, farm profitability, job opportunities, adoption of healthier practices, and risk management. All these effects have the potential to increase wellbeing and food security in rural areas of developing countries. Several challenges to effectively scaling up the use of ICTs for development remain, however.” His prior work on ICTs is relevant to the paper at hand, which is about how migrants utilize information about remittance tools.

Máximo Torero is the Chief Economist of the Food and Agriculture Organization (FAO). He has worked on development and poverty in many capacities including at the World Bank.

Angelino Viceisza, an associate professor at Spelman College, is doing interesting work at the intersection of Development and Experimental Economics. Here is his 2022 paper (Happy New Year!) published in the Journal of Development Economics.  

I am discussing their paper on how migrants choose financial services. The pre-analysis plan is public. Remittance sending is important for migrants and for the entire world economy. The authors remind us that a significant chunk of what migrants earn is “lost” to service fees. The authors are examining how migrants incorporate new information about competitive alternative services.

Some neat aspects of their work:

  • Their subject pool is migrants who send remittances, recruited in the DC area.
  • As in most experiments I am used to, the stakes are real and significant.
  • Not only can they observe which service is selected, but by using eye-tracking they can get a sense of what information was salient or persuasive.

It is potentially a big deal for migrants to compare services more rigorously and switch providers more readily. The internet, at least in theory, makes it easy to find information on transaction fees. Policy makers have even proposed subsidizing websites that compare the fees of money transfer operators (MTOs). The authors are trying to understand how such a website might impact behavior. A basic question is: does information in this format affect behavior? A small change in behavior could have a huge impact on the world economy and recipient countries. Imagine if a country currently receiving a billion dollars in remittances had 1% more next year because migrants switched to a more efficient service. Might it be cheaper to nudge people toward low-fee services than to send foreign aid?

Their experiment will reveal whether people make switches based on new information, and it also helps us start to understand which attributes of MTOs migrants consider. Their design includes a treatment manipulation that sometimes emphasizes either transfer speed or user reviews.

If you have read this far hoping for a summary of their results, I will disappoint. Their paper is not public yet and the data are still being analyzed. I can say that migrant subjects do sometimes switch their choice of MTO, based on information, in some circumstances. They are more likely to make a switch when the induced stakes are higher. If you tune into the session tomorrow, you will get to hear a summary of preliminary results by the authors (not free to the public; requires conference registration).

The Justice Dividend

While I was listening to The New Bazaar and enjoying an episode with Tim Harford, I was reminded that economists don’t just have the job of understanding the world. We have a responsibility to our fellow man to keep fallacy and economic misunderstanding at bay (a Sisyphean task). That doesn’t mean that we just teach economic theory. We can and should advocate for good economic policy ideas and try to think up some policy alternatives that fit our political climate.

Here I was sitting, being grumpy at the US federal deficit, when an idea came to me. I am full of ideas. Especially unpopular ones. So, I especially like ideas that make political sense to me, given that the political parties care about their policy values and re-election. Asserting that people in Congress actually care about policy apart from re-election is kind of a pie-in-the-sky assertion. But here we go nonetheless.

Mancur Olson liked to emphasize the role of concentrated benefits and diffused costs in political decision making. Economists point to it to explain the billion-dollar federal subsidies that go to interest groups. A favorite example is sugar subsidies. As of 2018 there were $4 billion in subsidies, and sugar growers earned $200k on average. The typical family of four pays about $50 more each year as a result, and the additional burden of higher sugar prices is also relatively small. Therefore, says the economist, the few sugar beet and sugar cane farmers have a large incentive to ensure the subsidy’s survival, while others pay a relatively small cost to maintain it. That small cost means that there is little money saved and little gain for any individual who might try to fight the applicable legislation.
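
A quick back-of-the-envelope, using only the rough figures quoted above plus an approximate US population, shows how lopsided the incentives are.

```python
# Back-of-the-envelope with the rough figures quoted above: $4 billion in
# subsidies spread over the US population versus the payoff to each grower.
# The population figure is an approximation supplied here for illustration.

total_subsidy = 4_000_000_000     # dollars per year (figure quoted above)
us_population = 330_000_000       # rough US population

cost_per_person = total_subsidy / us_population
print(round(cost_per_person, 2))        # ~12 dollars per person per year
print(round(cost_per_person * 4, 2))    # ~48 dollars per family of four, consistent with the ~$50 above

# A grower collecting hundreds of thousands of dollars will fight for the program;
# nobody organizes over twelve dollars.
```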

That’s the standard story. But it’s so much worse than a story of concentrated benefits and diffused costs. The laity don’t know how the world works in two important ways. First, many people will simply say that they are happy to protect American producers for an additional $50 per year. That’s a small price to pay for ensuring the employment and production of our fellow Americans, they say. An economist might reply, in a manner so automatic that it appears smug, that that $50 would instead go to producers of other goods and that our economy would be more productive if the sugar-producing resources were diverted elsewhere. This is Bastiat’s seen and unseen. Honestly, I suspect that neither economists nor non-economists can adopt the idea without a little bit of faith.

Secondly, people don’t know what causes a particular price to change. Hayek painted this characteristic as a feature of the price system: we are able to communicate information about value and scarcity without evaluating the values of others or the actual quantity of an available resource. However, lacking causal knowledge of prices makes for some bad policies. Say that the subsidies and protections went away and the price of US sugar declined. The consumer would likely not know anything about the subsidies in the first place, much less that they were rescinded. Further, the world is a complicated place and people are apt to thank or blame irrelevant causes instead (corporate greed, anyone?).

When economists blame concentrated benefits and diffused costs, they often assume that there is perfect information. THERE ISN’T. People don’t know how the world works well enough to predict with confidence what would happen in an alternate version of reality without subsidies. Nor do they understand the particular determinants of prices in our current world. Half the battle is a lack of knowledge about the functioning of the world, not just that the costs and benefits fail to provide a strong enough incentive for legislative change.

Continue reading