There is a new paper available at Cliometrica, co-authored by Mathieu Lefebvre, Pierre Pestieau and Gregory Ponthiere, and it deals with how the poor were counted in the past. More precisely, if the poor had “a survival disadvantage,” they died earlier and so dropped out of the counts. As the authors make clear, “poor individuals, facing worse survival conditions than non-poor ones, are under-represented in the studied populations, which pushes poverty measures downwards.” However, any good economist would agree that people who died in a given year X (say 1688) ought to have the living standards they experienced before dying counted for that year (Amartya Sen made the same point about missing women). If not, you will undercount the poor and misestimate their actual material misery.
So what do Lefebvre et al. do to deal with this? They adapt what looks like a population transition matrix (which is generally used to study in- and out-migration alongside natural changes in population — see example 10.15 in a favorite mathematical economics textbook of mine) to estimate what the poor population would have been in a given year. Obviously, some assumptions have to be made regarding fertility and mortality differentials with the rich — but ranges allow for differing estimates that give a “rough idea” of the problem’s size. What is particularly neat — and something I had never thought of — is that the authors recognize that “it is not necessarily the case that a higher evolutionary advantage for the non-poor over the poor pushes measured poverty down”. Indeed, they point out that “when downward social mobility is high”, poverty measures can be pushed artificially upward by “a stronger evolutionary advantage for the non-poor”. If the rich can become poor, then the bias could work in the opposite direction (overstating rather than understating poverty). This is further built into their “transition matrix” (I do not have a better term and I am using the term I use in classes).
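For intuition, here is a minimal sketch of the simplest possible mortality correction (my own hypothetical numbers and death rates, not the authors' calibration, and ignoring the fertility and mobility channels their full transition matrix handles):

```python
# Minimal sketch of a differential-mortality correction to a poverty headcount.
# All counts and mortality rates below are hypothetical illustrations.

def corrected_headcount(surviving_poor, surviving_nonpoor, mort_poor, mort_nonpoor):
    """Recover the start-of-year populations implied by the observed survivors
    under assumed mortality rates, and return both poverty headcount ratios."""
    initial_poor = surviving_poor / (1 - mort_poor)
    initial_nonpoor = surviving_nonpoor / (1 - mort_nonpoor)

    observed_ratio = surviving_poor / (surviving_poor + surviving_nonpoor)
    corrected_ratio = initial_poor / (initial_poor + initial_nonpoor)
    return observed_ratio, corrected_ratio

# Hypothetical example: the poor face a 6% annual death rate, the non-poor 2%.
obs, corr = corrected_headcount(surviving_poor=300_000, surviving_nonpoor=700_000,
                                mort_poor=0.06, mort_nonpoor=0.02)
print(f"observed poverty rate:  {obs:.3f}")   # understated, only counts survivors
print(f"corrected poverty rate: {corr:.3f}")  # includes those who died during the year
```

Even this bare-bones version shows the direction of the bias: the harder the poor are hit by mortality, the more the surviving population understates how many poor people there actually were.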
What are their results? Under assumptions of low downward mobility, pre-industrial poverty in England is understated by 10 to 50 percentage points (that is huge — as it means that, at worst, 75% of England was poor circa 1688 — I am very skeptical about this proportion at the high end, but I can buy a 35-40% figure without breaking a sweat). What is interesting, though, is that they find that higher downward mobility would bring down the proportion by 5 percentage points. The authors do not speculate much about how likely downward mobility was, but I am going to assume that it was low, and their results would be more relevant if the methodology were applied to 19th-century America (which was highly mobile, up and down — a fact that many fail to appreciate).
By the time most students exit undergrad, they get acquainted with the Aggregate Supply – Aggregate Demand model. I think that this model is so important that my Principles of Macro class spends twice the amount of time on it as on any other topic. The model is nice because it uses the familiar tools of Supply & Demand and throws a macro twist on them. Below is a graph of the short-run AS-AD model.
Quick primer: the AD curve shifts to the right when total spending increases and to the left when it decreases. The Federal Reserve and the federal government can both affect AD by increasing or decreasing total spending in the economy. Economists differ on the circumstances in which one authority is more relevant than the other.
The AS curve reflects inflation expectations, short-run productivity (intercept), and nominal rigidity (slope). If inflation expectations rise, then the AS curve shifts up vertically. If there is a transitory decline in productivity, then it shifts up vertically and left horizontally.
Nominal rigidity refers to the elasticity of the quantity produced with respect to total spending. In layman’s terms, nominal rigidity describes how production changes when there is a short-run increase in total spending. The figure above displays three possible SR-AS curves. AS0 reflects firms that simply produce more when there is greater spending and do not raise their prices. AS2 reflects producers that mostly raise prices and increase output only somewhat. AS1 is an intermediate case. One of the things that determines nominal rigidity is how accurate inflation expectations are. The more accurate the inflation expectations, the more vertical the SR-AS curve appears.*
The AS-AD model has many of the typical S&D features. The initial equilibrium is the intersection of the original AS and AD curves. There are price and quantity implications when one of the curves moves. An increase in AD results in some combination of higher prices and greater output – depending on nominal rigidities. An increase in the SR-AS curve results in some combination of lower prices and higher output – depending on the slope of aggregate demand.
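To see how the slope matters, here is a small numerical sketch (my own illustrative linear specification, not a calibration of anything) that solves for the new equilibrium after an AD shift under three different SR-AS slopes:

```python
# Illustrative linear AS-AD comparative statics (toy parameterization of my own).
# AD: P = a_d - b_d * Y      (downward sloping)
# AS: P = a_s + slope * Y    (slope captures nominal rigidity: flat = rigid prices)

def equilibrium(a_d, b_d, a_s, slope):
    """Intersection of the linear AD and AS curves."""
    y = (a_d - a_s) / (b_d + slope)
    p = a_d - b_d * y
    return y, p

b_d, a_s = 1.0, 1.0
for slope in (0.1, 1.0, 5.0):  # roughly AS0 (flat) ... AS2 (steep)
    y0, p0 = equilibrium(a_d=10.0, b_d=b_d, a_s=a_s, slope=slope)
    y1, p1 = equilibrium(a_d=12.0, b_d=b_d, a_s=a_s, slope=slope)  # AD shifts right
    print(f"AS slope {slope:>4}: output +{y1 - y0:.2f}, price level +{p1 - p0:.2f}")
```

With the flat curve almost all of the extra spending shows up as output; with the steep one it shows up mostly as a higher price level, which is the nominal rigidity story in the primer above.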
Of course, the real world is complicated – sometimes multiple shocks occur and multiple curves move simultaneously. If that is the case, then we can simply say which curve ‘moved more’. We should also expect that the long-run productive capacity of the economy increased over the past two years, say due to technological improvements, such that the new equilibrium output is several percentage points to the right. We can’t observe the AD and AS curves directly, but we can observe their results.
The big questions are:
What happened during and after the 2020 recession?
It’s spring break and that means catching up on both research and my social network. It also means college basketball. I remain firmly in the camp that college athletes should be paid for their incredibly high-value labor and, in turn, recapture a huge share of the surplus currently enjoyed by schools and coaches. What I am beginning to rethink, however, is the way that “professionalization” can and will play out.
This rethinking began with the realization that my enjoyment of the product is largely insensitive to the presence of great players. The gap between NBA and NCAA basketball, in terms of quality of play, is so great that I simply don’t watch the two sports in the same way. I consume the NBA the way I do Denis Villeneuve films: enjoying an art form in its closest approximation to perfection at the bleeding edge of innovation. NCAA basketball, in contrast, is a soap opera for genre aficionados. It’s Battlestar Galactica for sports fans.
There is a floating, ever-changing cast of characters supporting a handful of recurring leads. Clans and sub-clans. Rises and falls. Tragic failures and heroic redemption arcs. And, much like the latest show about wizards or post-apocalyptic alien invasion survivors on the Syfy channel, the enjoyment of this product doesn’t require high-level precision or execution. Quite frankly, the show is more enjoyable when the actors aren’t famous or especially elite; it keeps me squarely focused on the schlocky fun, rather than getting distracted by any urge to pick apart the film composition, story logic, or actor subtext. College basketball, in much the same way, keeps me squarely focused on the drama of gifted athletes doing their best to help their team achieve success in a limited window before moving on to the rest of their lives. Trying to get a little slice of glory now, while their knees will allow for greatness, before getting on with the endless particulars of adult life later.
Which brings me back to the eventual professionalization of college sports with athlete compensation. Schools will find themselves faced with a decision of whether they should spend money on the very best athletes or try to compete with less expensive players. Athletes will have to decide where the best opportunities to develop their professional game are, and how much of their human capital investment portfolio they want to dedicate to sports. What might the equilibrium look like?
We can coarsely reduce the pool of athletes into three categories: those all-in on athletics, those looking purely to subsidize their education, and those aiming for a mix of both. Currently schools capture the most rents from the pure athletics all-ins, who dedicate nothing but the bare minimum to schooling while maximizing their athletic preparation. The all-ins will often be the best players, who get the most media attention and contribute the most to winning glory, attracting applications from young fans and donations from nostalgic alumni. You might expect that compensation would shift the most surplus to them. We have to consider, however, the possibility that a proper market for elite college athletic labor would provide the prices needed to accelerate the formation of pre-professional academies and player futures contracts. The very best 18-year-old basketball players may find it far more lucrative to take $120K in income and full-time coaching today in exchange for 2% of future professional earnings.
At the same time, college basketball may similarly learn the true nature of its collective good: that it is, in fact, a zero-sum competition where the total amount of talent isn’t nearly as important for earnings as they think. While a small number of schools absorbing all of the top talent might be exciting for the covers of no-longer-existent sports magazines, in reality 120 teams competing for a less skewed distribution of talent more predominantly interested in subsidizing the full cost of college (i.e. tuition, lost wages, etc.) may actually make for more drama, which means more ratings, which means more money. Why try to compete with the academies for 1 year of the next LeBron when those same resources will get you 5 good players for 4 years? Combined with the fact that this bundle of athletes will place greater value on (nearly) marginally costless scholarships, teams looking to compete in the long term with a maximally efficient allocation of resources could actually shift the competitive equilibrium away from the top talent.
Sports are fun when they are played at the highest level. They are also fun, however, when a little chaos is injected into the drama. It’s great when Steph Curry casually hits shots 40 feet from the basket, when LeBron James or Nikola Jokic makes Matrix-esque passes through impossible angles. But it’s also great watching players struggle at the edge of far more human limitations to find a way to win on the biggest stage of their lives while wearing the jersey of one of hundreds of colleges. The highest drama includes players making shots, but sometimes it needs players to dribble off their foot, too.
We don’t have to limit earnings to capture that glory. We don’t have to take money from young people whose particular talents put them in the sliver of the human population whose greatest earning potential might come at age 20. We don’t need to appeal to platitudes or false nostalgia to explain why they’re being compensated with something better than money. We can just pay them. Some things will change, but I think you’ll be shocked to see how little the experience of college basketball changes. College sports will remain largely the same, but they will be a bit less shady, a bit less hypocritical. Schools will place greater value on, and care for, the players they have directly invested in.
Which, at least to me, would be a little more fun.
There is a new paper in the Journal of Economic Perspectives. Its author, Dan Sichel, studies the price of nails since 1695 (image below). Most of you have already tuned out by now. Please don’t: the price of nails is full of lessons about economic growth.
Indeed, Sichel is clear in the paper’s subtitle about why we should care — nail prices offer “a window into economic change”. Why? Because we can use them to track the evolution of productivity over centuries.
Take a profit-maximizing firm and set up a constrained optimization problem like the one below. For simplicity, assume that there is only one input, labor. Assume also that the firm is in a relatively competitive market, so as to remove the firm’s ability to affect prices; when you work out the solution, all the quantity-related variables get subsumed into an n term that represents the firm’s share of the market, which inches close to zero.
If you take your first-order conditions and solve for A (the technological scalar), you will find this identity:
What does this mean? Ignore the n and consider only w and p. If wages go up, marginal costs also increase. From the standpoint of a profit-maximizing firm trying to produce a given quantity, if prices (i.e. marginal revenue) remained the same, there must have been an increase in total factor productivity (A). Expressed in log form, this means that the change in total factor productivity equals the change in (log) wages minus the change in (log) prices. This means that, if you have estimates of output and input prices, you can estimate total factor productivity with minimal data. This is essentially what Sichel does (and Douglass North did the same in 1968 when estimating shipping productivity). All that Sichel needs to do is rearrange the identity above to explain price changes. This is how he gets the table below.
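For readers who want the algebra spelled out, here is a sketch of the simplest version of that logic (one labor input, price-taking behavior, and the n term ignored), which is my own simplification rather than the paper's exact derivation:

```latex
% A hedged sketch: single labor input, price taker, n term ignored.
\max_{L}\ \pi = p\,A f(L) - wL
\quad\Longrightarrow\quad p\,A f'(L) = w
\quad\Longrightarrow\quad A = \frac{w}{p\,f'(L)} .

% In log differences (holding the marginal-product term fixed):
\Delta \ln A \;\approx\; \Delta \ln w - \Delta \ln p .
```

Wage growth in excess of output-price growth is then read as productivity growth, which is why input and output price series alone can carry the estimation.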
The table above showcases the strength of Sichel’s application of a relatively simple tool. Consider, for example, the period from 1791 to 1820. Real nail prices declined about 0.4 percent a year even though the cost of all inputs increased noticeably. This means that total factor productivity played a powerful role in pushing prices down (he estimates that advances in multifactor productivity pulled down nail prices by an average of 1.5 percentage points per year). This is massive and suggestive of great efficiency gains in America’s nail industry! In fact, these efficiency gains continued and accelerated up to 1860 (reinforcing the thesis of economic historians like Lindert and Williamson in Unequal Gains that America had caught up to Britain by the Civil War).
I know you probably think that the price of nails is boring, but this is a great paper to teach how profit-maximizing (and constrained optimization) logic can be used to deal with problems of data paucity to speak to important economic changes in the past.
The financial crisis recession that started in late 2007 was very different from the 2020 pandemic recession. Even now, 15 years later, we don’t all agree on the causes of the 2007 recession. Maybe it was due to the housing crisis, maybe due to the policy of allowing NGDP to fall, or maybe due to financial contagion. I watched Vernon Smith give a lecture in 2012 in which he explained that it was a housing crisis. Scott Sumner believes that a housing sectoral decline would have occurred, and that the economy-wide deep recession and subsequent slow recovery was caused by poor monetary policy.
Everyone agrees, however, that the 2007 recession was fundamentally different from the 2020 recession. The latter, many believe, reflected a supply shock or a technology shock. Performing social activities, including work, in close proximity to others became much less safe. As a result, we traded off productivity for safety.
The policy responses to each of the two were also different. In 2020, monetary policy was far more targeted in its interventions and the fiscal stimulus was much bigger. I’ll save the policy response differences for another post. In this post, I want to display a few graphs that broadly reflect the speed and magnitude of the recoveries. Because the recessions had different causes, I use broad measures that are applicable to both.
The saying that “The first casualty of war is the truth” has been credited to anti-war Senator Hiram Warren Johnson in 1918 and also to the ancient Greek dramatist Aeschylus. We have seen this played out dramatically with Russia’s invasion of Ukraine. From the Ukrainian side have come the predictable overinflated estimates of the enemy’s losses, and perhaps understated reporting of their own casualties. Also, on the first day or two of the war there was a raunchy defiant response of Ukrainian defenders to a “Russian ship” that was demanding their surrender; as far as I know that exchange was for real, but the initial report by Ukraine that all the heroic defenders were killed was not true. Maybe I am biased here, but these sorts of excesses are stretching some core truth, not trampling over it roughshod.
On the Russian side, perhaps because there is no even vaguely legitimate justification for their invasion, the lies have been simply ludicrous. Apparently, the Russian troops have been told that they are going there to rescue Ukrainians from the current regime which is a bunch of “neo-Nazis”. If Putin’s thugs had a sense of humor or perspective, they might have discerned the irony of characterizing the Ukrainian regime as “neo-Nazi” when the president (Zelenskyy) is a Jew, whose grandfather’s brothers died in Nazi concentration camps.
And the Russian lies go beyond ludicrous, to revolting and inhuman. Russian Foreign Minister Sergey Lavrov has dismissed concerns about civilian casualties as “pathetic shrieks” from Russia’s enemies, and denied Ukraine had even been invaded.
The Associated Press snapped a picture in the besieged city of Mariupol a few days ago which went viral, showing a pregnant woman with a bleeding abdomen being carried out on a stretcher from a maternity hospital which the Russians had bombed. The local surgeon tried to save her and her baby, but neither one survived. The Russian side put out a string of bizarre and contradictory stories, claiming that they had bombed the hospital because it was a militia base (a neo-Nazi militia, of course) but also that no, they didn’t bomb it, the hospital had been evacuated and the explosions were staged by the Ukrainians, and the bloody woman in the photos was a made-up model. Ugh. I find it chilling to observe a regime in operation where there is absolutely no respect for what the truth actually is; rather, lies are manufactured to serve whatever purpose will suit the regime.
I know that some of that goes on even with Western democracies, but we are still usually ashamed of outright lying, and stand discredited when exposed. But with hardcore authoritarian regimes, there does not seem to be even this minimal respect for integrity.
Freedom of speech becomes even more critical as cynicism about truth becomes more widespread in the world, even in our own political discourse. Putin is trying to suppress the truth within Russia, now with very harsh penalties (fifteen years in prison) for those disseminating information contrary to the party line. All he needs to do is deem such talk as “treasonous”, and into the clink you go.
I do worry about similar trends towards censorship within the West. In our case, it is not so much governments (so far) doing the censorship, but Big Tech. If Google [search engine and YouTube] / Facebook/Twitter disapprove of your content, they can label it “hate speech” or whatever, and your voice disappears from public discourse. But what gives the high priests of big tech the authority and the powers of moral discernment to rule on what discourse is permissible? Also, the algorithms of social media sites usually direct you towards other sites that reinforce your own point of view, so you rarely get exposed to why the other side believes what it does. However annoying it may be to see various forms of nonsense circulating on-line, the time-tested democratic response is to allow (nearly) all points of view to be fairly stated, and to trust in the people to figure out where the truth lies. Otherwise, the truth can become a casualty of culture wars, as it is in shooting wars.
I have had the title of this post sitting in “Drafts” for a couple of months now, but Kris and Paul have given me good reason to actually write about it. These thoughts are largely off the cuff, but they do come from experience.
What is Agent-Based Modeling?
This is not actually as straightforward a question as one might think. If you define it broadly enough as, say, any model within which agents make decisions in accordance with pre-defined rules and assigned attributes, then the answer to the overarching question posed by this post becomes: well, actually, economics has been producing agent-based models for decades. But that answer is as annoying as it is useless.
Instead, let’s start with a minimal definition of an agent-based model:
They are composed of n > 3 agents making independent decisions.
Agents are individually realized within the model.
Decisions are made in accordance with pre-defined rules. These rules may or may not evolve over time, but the manner in which they evolve is itself governed by pre-defined rules (e.g. learning, mutation, reproduction under selective pressures, etc.).
If we stop at this minimalist definition, then the answer becomes only marginally less trivial, as essentially any dynamic programming/optimal control model within macroeconomics would meet the definition. This leads to what I consider the minimal definition of an agent-based model as a distinct subclass of computational model:
Agents within the model are characterized by deep heterogeneity.
Agents exist within a finite environment which serves as a constraint in at least one dimension (lattice, sphere, network, etc).
Decisions are made sequentially and repeatedly over time.
Now we’re getting farther into the weeds and beginning to differentiate from whole swaths of modern macroeconomics that either employ a “representative agent” or collapse agent attributes to the first and second moments of distributions. But that doesn’t eliminate all of modern macro. If embracing heterogeneous agents in your models of macroeconomics, banking, etc., is of interest to you, there are scholars waiting to embrace you with open arms.
Which brings me to the final attribute that I believe fully distinguishes the bulk of the agent-based models and their advocates from modern economics:
Agent-based models exist as permanently dynamic creations, absent any reliance on equilibria as a final outcome, characterization, or prediction.
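To make that definition concrete, here is a deliberately tiny sketch in Python (my own toy adoption model, not any canonical ABM): heterogeneous agents on a finite ring lattice repeatedly apply a pre-defined decision rule, and the model simply runs forward without ever imposing an equilibrium condition.

```python
import random

# A toy agent-based model for illustration only: heterogeneous agents on a ring
# lattice who adopt a behavior once enough of their neighbors have adopted it.
random.seed(0)

N, T = 100, 20
thresholds = [random.uniform(0.1, 0.9) for _ in range(N)]   # deep heterogeneity
adopted = [random.random() < 0.05 for _ in range(N)]        # a few initial adopters

for t in range(T):
    new_state = adopted[:]
    for i in range(N):
        neighbors = [adopted[(i - 1) % N], adopted[(i + 1) % N]]  # finite environment
        share = sum(neighbors) / len(neighbors)
        new_state[i] = share >= thresholds[i]                     # pre-defined rule
    adopted = new_state
    print(f"t={t:2d}  adoption share = {sum(adopted) / N:.2f}")
```

Nothing here is solved for; whatever patterns appear (spread, collapse, stable pockets of adopters) are read off the simulated dynamics themselves.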
The departure from general or partial equilibria as outcomes or predictions is where the schism actually occurs and, I suspect, is where many purveyors found themselves with a research product they had a hard time selling to economists. Economics, perhaps more than any other social science, demands that theoretic predictions be testable and falsifiable. Agent-based models (ABMs) don’t always produce particularly tidy predictions that lend themselves to immediate validation. Which doesn’t preclude them from making a scientific contribution, but it puts them on unsteady footing for economists who are used to having a clear path from the model to the data.
OK, but really, why didn’t agent-based modeling happen?
As much as big, irreconcilable differences in scientific philosophy would make for a satisfying explanation, I suspect the most salient reasons are less sexy and, in turn, less flattering of the day-to-day realities of grinding out research in the social sciences. Here are a few.
Economics was already a “model” social science
One of the reasons mobile phones caught on faster in Africa than North America was an absence of infrastructure. The value add of going from “no phones” to “mobile phones” is far larger than going from “reliable land lines in every edifice” to “mobile phones”, making it easier to justify both investments in relevant infrastructure and bearing of personal costs. Such a thing occurred across the social sciences with regards to ABMs.
Rational choice and mathematical sociology always had a limited following. Evolutionary biologists were often alone in their mathematical modeling, computational biology barely existed, and cultural anthropologists were more excited about Marx’s “exchange spheres” than they were about formal models of any kind. For a PhD student in these fields, the first time they saw a NetLogo demonstration of an agent-based model, they were seeing something never previously available to their field: the ability to formalize their own theories in a way fully exogenous to themselves. There would be no fighting about what their words actually meant, whose ideas they were mischaracterizing, what they were actually predicting. Their critics, be it journal referees or thesis committee members, would have no choice but to confront their theory as an independent entity in the world.
This advantage of formality, of independent objectivity, in agent-based modeling was not something new to economics. While critics have many (often correct) complaints about modern economics, it’s rare to air concerns that economics is insufficiently formal or mathematized.
Too many “thought leaders”, not enough science
Epstein and Axtell wrote their landmark book “Growing Artificial Societies” in 1996. In it they produced a series of toy simulation models within which simple two-good economies emerged. This wasn’t revolutionary in its predictions by any means (whole swaths of macro models were able to make comparable predictions for two decades prior), but the elegance with which minimalist computer code could produce recognizable markets emergent from individual agent decisions was just incredible. The potential was immediately obvious to readers: if we can produce such things from 100 lines of code, what could we simulate with the fully realized power of modern programming?
What came next was…still more people evangelizing and extolling the power of ABMs to revolutionize economics. What didn’t come were new models. Forget revolutionary; it’s hard to even find models that were useful or at least interesting. The ratio of “ABMs are gonna be great” books and articles to actual economic models is disappointing at best, catastrophic to the field at worst.
There were a couple of early models that got attention (the Artificial Anasazi comes to mind), but after a few years everyone noticed that the same 2-3 models were still being brought up as examples by evangelists, and none of them had meaningful economic content. As for the new models that did end up floating out there, there was also an oversupply of “big models”, with millions (billions) of agents and gargantuan amounts of code that intended to make predictions about enormous chaotic systems. Models such as the Santa Fe Artificial Stock Market tried to broadly replicate the dynamics of actual stock markets across a large number of dimensions. Such ambitions were greeted with skepticism by economists for a variety of reasons, not least of which is the “curse of dimensionality”, which limits what you can learn about underlying mechanisms when the number of modeler choices exceeds your ability to test them or, for that matter, verify their internal coherence. For better or worse, these models felt akin to amateurs trying to predict a town’s weather 30 days out.
Bad models drove out good
The problem of too few good models was closely followed by an oversupply of bad models. Agent-based modeling, for good and for ill, is not a technique with high entry costs. A successful macroeconomic theorist is effectively a master’s-level mathematician, a bachelor’s-level computer programmer, and a PhD economist. NetLogo programming can be learned in a week. You can get really good at programming agent-based models in a dedicated summer.
This isn’t in itself a problem, but I can tell you this: in my first 5 years as an assistant professor, I was asked to review at least 100 papers built around agent-based models. I’m not sure if any of them were any good. I am sure that many of them were extremely bad. Most concerning is that I don’t think I learned anything from any of them. The cost of producing bad ABM papers is much lower than the cost of producing bad theory papers based on pure math. Bad science is often evolutionarily selected for in modern science, a dynamic that in the case of ABMs was only amplified by a lower-cost supply curve.
Now, here’s the thing: there were probably huge selection effects in what I interacted with. I doubt I was getting the best papers sent to me for review, given my status in the field. But the quantity of bad papers was astonishing. They were just too easy to churn out. I suspect that some decent papers were lost in a haystack of ad hoc pseudoscience and, in turn, some decent scientific careers probably got lost in the shuffle. More than once I had the thought, “Editors are going to start rolling their eyes every time they see the term agent-based modeling if this is what keeps coming across their desks.” Combined with the fact that ABMs are tricky to evaluate because you really need to go through the code to know what is driving the results, I think a lot of good modelers got lumped in with the dreck.
[Not for nothing, it wasn’t uncommon for ABM papers to spend the bulk of the paper describing model outputs, while having nearly nothing about model inputs (i.e. rules, code, math, etc). These models were essentially black boxes that expected you to take their coherence on faith. I should note here that I haven’t really kept up with the field in the past few years. Hopefully transparency norms have improved, particularly in biological, ecological, and anthropological modeling, where ABMs have thrived to a far greater extent.]
The empirical revolution took hold of economics
I’ve saved the biggest reason for last, but honestly I think it dwarfs the others.
The same rise in cheap computational power that gave rise to other forms of computational modeling, including ABMs, came along with the plummeting cost of data creation, storage, analysis, and access. By 2010 it was already increasingly clear that theory was taking a backseat in economics. Not because we were becoming an a-theoretic discipline (far from it), but because the marginal contribution of theory against the body of broadly accepted economic framings was small compared to those made by empirically testing the predictions of the existing body of theories against real data. The questions were no longer “How do we mentally organize and make sense of the world”, but instead “What is the actual measured effect of X on Y?” Theory gave way to statistical identification. Modeling technique gave way to causal inference.
Agent-based models are hard to empirically evaluate and test
Which gives way to a sort of subsidiary problem. It is more difficult for agent-based models to take advantage of the new data-rich world we live in. They don’t produce neatly direct predictions the way that microeconomic theories do, nor do they lend themselves to measured empirical validation in the same way as general equilibrium predictions of macroeconomic models. Empirical validation is by no means impossible, but it requires the matching of observed dynamics or patterns, which is generally a taller order. In this way, agent-based computational models are a bit of a throwback to the days of “high theory”, making for interesting discussion but of secondary importance when it comes to the assigning of journal real estate that makes and breaks careers.
Bonus story
I once presented my ABM paper on emergent religious divides, only to have an audience member become extremely upset, closing with the denouncement that “This isn’t agent-based modeling, this is economics!” That was my first exposure to the theme of ABMs as “antidote” to the hegemony of economics and all of its false prophecies. The idea that the destiny of ABMs was to unseat economics as the queen of the social sciences was probably an effective marketing strategy in many hallways, but not so much in economics departments (well, maybe at The New School).
So why should economists give agent-based modeling another shot?
Overall, I’ve been disappointed with the reporting on the US embargo against Russian oil. The AP reported that the US imports 8% of Russia’s crude oil exports. But then they and other outlets list a litany of other figures without any context for relative magnitudes. Let’s shine some more light on the crude oil data.*
First, the 8% figure is correct – or, at least, it was correct as of December 2021. The figure below charts the last 7 years of total Russian crude oil exports, US imports of Russian crude oil, and the proportion that US imports compose. That 8% figure is by no means representative of recent history. The average US proportion in 2015-2018 was 7.8%, but the US share has since risen in both level and volatility. Since 2019, US imports have composed an average of 11.9% of all Russian crude oil exports.
As an exogenous shock, the import ban on Russian crude oil might have a substantial impact on Russian exports. However, many of the world’s oil importers were already refusing Russian crude. The US ban may not have a large independent effect on Russian sales and may be a case of congress endorsing a policy that’s already in place voluntarily.
Russian planning and logistical failures mean a continuing heavy invasion may not be sustainable, leading instead to a long-running siege. If this is the case, then it becomes all the more important to get basic humanitarian resources in now in order to minimize the suffering caused by the siege and minimize the odds of Russian success.
Ukrainian resistance depends as much on morale as it does lethal resources. Knowing their families are fed and receiving basic healthcare is critical.
If the micro-returns of protecting a Ukrainian soldier or feeding a Ukrainian family aren’t enough for you, here’s a macro one: if the autocratic leader of an increasingly fascist regime with the strategic advantage of a nuclear arsenal is rebuffed in Ukraine by a heroic local resistance partnered with global economic sanctions, it will serve as a signal to every leader with similar aspirations that success is less likely than they previously estimated. If your donation can help force a Bayesian update on dangerous autocrats and strongmen everywhere, that seems like nothing less than a perfectly rational act of utility maximization to me.
Generally, when you take your microeconomics class you get to see isoquants. I mean, I hope you get to see them (some principles classes don’t introduce them, leaving them to intermediate classes). But when you do, they look like this:
It’s a pretty conventional approach. However, there is a neat article in History of Political Economy by Peter Lloyd (2012) titled “The Discovery of the Isoquant.” The paper assigns the original idea to W.E. Johnson in 1913, rather than to the usual suspect, Abba Lerner in 1933; A.L. Bowley was already referring to his “isoquant” in a work dated 1924 (from which the image is drawn). But what is more interesting than the originator of the idea is how the idea has morphed from another of its early formulations. In the 1920s and 1930s, Ragnar Frisch was teaching his price theory classes in Norway and depicted isoquants in the following manner in his lecture notes.
Do you notice something different about Frisch’s 1929 (or 1930) lectures relative to the usual isoquants we know and love today? Look at the end of each isoquant. They seem to “arc,” do they not? How could an isoquant have such a bend? Most economists are probably so used to isoquants that do not bend (except for perfect complements) that it will take a minute to answer. Well, here is the answer: it’s because Frisch was assuming that the production function from which the isoquant is derived has a maximum, which means that the marginal product of an input could become negative. This is in stark contrast with our usual way of assuming production functions with smoothly declining (but never negative) marginal products. This is why Frisch’s isoquants include that arc (a backward bend).
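If you want to see the bend for yourself, here is a small sketch (my own choice of a quadratic production function with satiation, not Frisch's actual specification) that plots isoquants of a production function whose marginal products turn negative past a maximum:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative production function with satiation in both inputs (my own choice,
# not Frisch's): Q = 10K - K^2 + 10L - L^2, so marginal products turn negative
# once K or L exceeds 5.
K, L = np.meshgrid(np.linspace(0.1, 8, 300), np.linspace(0.1, 8, 300))
Q = 10 * K - K**2 + 10 * L - L**2

plt.contour(K, L, Q, levels=[20, 30, 40, 44], colors="black")   # the isoquants
plt.axvline(5, linestyle="--"); plt.axhline(5, linestyle="--")  # ridge lines
plt.xlabel("Capital (K)")
plt.ylabel("Labor (L)")
plt.title("Isoquants bend back where a marginal product turns negative")
plt.show()
```

Inside the dashed ridge lines the contours are the familiar downward-sloping isoquants; beyond them, where a marginal product has gone negative, each contour curls back on itself, which is exactly Frisch's arc.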
Why did we move away from Frisch’s depiction? Well, think about the economic meaning of a negative marginal product. It means that a firm would be better off scaling down production regardless of anything else. It is a straightforward proposition to understand why, in all settings, a firm would automatically move out of such an “uneconomic” zone. In other words, we should never expect firms to continually operate in a zone of negative marginal product. Ergo, the “bend”/“arc” is economically trivial or irrelevant. Removing it simplifies the discussion and formulation but also does something subtle — it sneaks in a claim of rationality of behavior on the part of firm owners and operators.
This is a good setup for a question to ask your students in an advanced microeconomics class that isn’t just about the mathematics but about what the mathematical formulations mean economically!