Three years ago I ruminated on why agent-based modeling never got any real traction in economics. It got a surprising amount of attention and I continue to receive emails about it to this day. I took care to explicitly punt on what the value-add of agent-based models could be, or may yet be.
“So why should economists give agent-based modeling another shot? That’s another post for another day. …”
Well, today is that day, in no small part because this excellent thread led to a new batch of emails about my old post. Now, to be clear, that post was based on a solid decade of experience writing, presenting, and publishing papers built around agent-based models. This endeavor is far more speculative. I have a bit of prickly disdain for the genre of forecasting you find on “I’m not unemployed, I’m an Entrepreneur and Futurist” LinkedIn profiles, so I’ll ask you to indulge even more glibness than usual. With the cowardly caveats now out of the way, let’s get into it.
What are the advantages of agent-based models?
Deep heterogeneity, replicability, scale, flexibility, and time. There are different ways to frame it, but it all boils down to the fact that a multi-agent computational model does not require collapsing to statistical moments or limited heterogeneity (i.e. three or fewer types of agent) in order to “converge” or compute. It is not reliant on the single run of human history in order to postulate counterfactuals – you can run the model millions of times and observe the full distribution of outcomes. The population is not limited to the scale of your sample or even the real-world population – it can be as large as you can computationally handle. How flexible can it be? Literally everything but the ur-text of the model can be endogenous. And time? Again, how long you run the model is limited only by computational capacity coupled with your own patience.
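If you want the “millions of runs” point in the concrete, here’s a deliberately trivial sketch. The model itself is a throwaway stand-in of my own invention, not a claim about anything real; the point is that one call gives you one history, and many calls give you the distribution of histories.

```python
import random
import statistics

# A throwaway stand-in model: agents meet at random and trade small
# transfers. One call = one simulated history; many calls = the full
# distribution of outcomes, no waiting on the single run of human history.

def one_history(n_agents=100, periods=5000):
    wealth = [1.0] * n_agents
    for _ in range(periods):
        i, j = random.sample(range(n_agents), 2)   # random pairwise meeting
        transfer = random.uniform(0, 0.1) * wealth[i]
        wealth[i] -= transfer                      # a simple exchange shock
        wealth[j] += transfer
    return statistics.pstdev(wealth)               # ending wealth dispersion

runs = sorted(one_history() for _ in range(1000)) # 1,000 counterfactual histories
print("mean dispersion:", round(statistics.mean(runs), 3),
      "| 95th percentile:", round(runs[949], 3))
```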
Do note that everything I just listed is also a disadvantage.
Agent-based modeling can be a new class of “meta-analysis”
The science of observing, distilling, interpreting, and even managing the scientific project is, generally speaking, the domain of statisticians and historians of thought. Interestingly, it’s been my experience that historians of economic thought were some of the biggest early enthusiasts for agent-based models (I even wrote a paper with one). I think there is an opportunity, however, to borrow from the logic of applied statistics used in the meta-analysis of literatures.
Meta-analysis in economics is predominantly constituted by reviews of empirical literatures that conduct statistical analysis of the coefficients estimated in regression equations across multiple papers. Comparisons across data sets, geographic and temporal settings, and statistical identification strategies allow practitioners, policy makers, and the curious public to better internalize the state of the literature and what it is actually telling us. These are valuable contributions not just because a decade’s work can be reduced to a paper reduced to an abstract reduced to a title that showed up in a Google search conducted by an intern at the think tank recommending policy to a lawyer with good hair who won an election fourteen years ago. They are valuable because they fight against the current wherein we are all drawn to cherry-pick the empirical results that confirm our priors, particularly those that have a political valence associated with them. Meta-analyses have also shown the peculiar biases introduced by the career incentives in all social sciences – the canonical example being the sharp cutoff in published p-values at the traditional 0.05 “statistical significance” threshold.
To reiterate: these papers are useful, but they are also limited by the necessity of finding like-for-like papers whose results can be compared. A framing must be settled upon in advance, within which the authors of the meta-analysis can curate the contributions to be included and collectively evaluated. Only when the analysis is completed can the authors take a step back and try to adjudicate what the collective results are and how they reflect upon any relevant bodies of theory. It is an inherently atheoretical exercise. There’s a reason schools of thought are rarely (ever?) upended by a meta-analysis that successfully adjudicates between competing models. There’s always just enough daylight between data estimation and a given model to resist acquiescing to claims that any analysis is testing a model’s validity.
Agent-based modeling offers the opportunity for meta-analysis of models. In an artificial world with millions of agents, we can program behavior that corresponds with different theories of labor markets, households, crime, addiction, etc. We can model markets characterized by monopoly, monopsony, and competition born of everything from government fiat to specific elasticities of substitution between goods. Hey now, hold your horses. A model of everything is a model of nothing. Once you allow for too much complexity, there’s no room for inference. It’s just noise.
Yes, of course. You can’t model everything. But there is a greater opportunity to find when models are mutually incompatible. Incongruent. Is there a way to run an artificial city of a million agents to formulate a social scientific theory of everything? Absolutely not. But it would be interesting if a million runs of a million models showed that you can never have both a highly monopsonistic labor market and an income-driven criminal market because the high substitutability of cash across sources necessary in the criminal market allows for the kind of Coasean bargains that undermine monopsony. To be clear, I just made that up. But there’s room for as yet unseen cross-pollination across bodies of applied theory.
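If you want to see the shape of the exercise, here’s a hypothetical harness for that kind of sweep. Both “theory modules” below are placeholders I invented for illustration, dressed up in the made-up example above; the point is the scaffolding, not the economics.

```python
import itertools

# A hypothetical harness for the "meta-analysis of models" idea: sweep
# combinations of theory modules and flag parameter regions where two
# bodies of theory refuse to coexist. Both modules are placeholders of
# my own invention, not results from any literature.

def wage_suppression(monopsony_power):
    """Stand-in labor module: employer power suppresses wages."""
    return 0.5 * monopsony_power

def effective_suppression(suppression, cash_substitutability):
    """Stand-in crime module: substitutable outside cash erodes suppression."""
    return suppression * (1.0 - cash_substitutability)

clashes = []
for power, subst in itertools.product((0.3, 0.6, 0.9), (0.3, 0.6, 0.9)):
    s = effective_suppression(wage_suppression(power), subst)
    # "Incompatibility": the labor story needs real suppression to exist,
    # while the crime story bargains it away.
    if power >= 0.9 and s < 0.1:
        clashes.append({"monopsony_power": power, "cash_substitutability": subst})

print("regions where the two stories are mutually incompatible:", clashes)
```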
Pushing the Lucas critique all the way to the hilt
This is essentially a recursive version of modern macroeconomics where agents within the model learn the results being reported in the paper about the model they inhabit, changing their behavior accordingly. Wait, isn’t that just the definition of “equilibrium”? I mean, we already have the Lucas Critique. Yes, but we typically have very well-behaved agents in those models. What if they are a bit noisier in their heterogeneity? What if they took suboptimal risks, many failed, but some won? What if there was an error term in their perceptions of the world, i.e. they ran incomplete regressions, observed the results, and then treated the results as a sufficient approximation of the truth? Essentially a behavioral world where agents are often smart but sometimes unwise? Where the churn of human folly and hubris undermines equilibrium while fueling both suffering and growth. A story of Schumpeterian economic growth told by the iterating arcs of Tolstoy and Asimov.
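A sketch of that last idea, since it’s the most mechanical piece: agents whose beliefs come from incomplete regressions they then treat as truth. This is my toy, not an existing macro model; the two drivers of the outcome are correlated, so each one-variable regression is biased, and the agents neither know it nor care.

```python
import numpy as np

# A toy (not an existing macro model): the true process has two correlated
# drivers, but each agent regresses the outcome on only the one driver it
# happens to observe, then treats the estimate as the truth.

rng = np.random.default_rng(0)
TRUE_BETA = np.array([2.0, -1.0])

n = 5000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(scale=0.7, size=n)   # correlated drivers
y = TRUE_BETA[0] * x1 + TRUE_BETA[1] * x2 + rng.normal(scale=0.5, size=n)

def incomplete_ols(x, y):
    """Slope of y on the single driver the agent happens to observe."""
    return float(x @ y / (x @ x))

beliefs = {"sees only x1": round(incomplete_ols(x1, y), 2),
           "sees only x2": round(incomplete_ols(x2, y), 2)}
# Each belief absorbs the omitted driver's effect, yet each agent acts on
# it as a "sufficient approximation of the truth."
print("true betas:", TRUE_BETA.tolist(), "| agents' beliefs:", beliefs)
```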
No, I said all the way
I’m not sure if what I just described is just the kind of advanced macroeconomics I am currently ignorant of or complete nonsense. Possibly both. To be clear, I’m deeply skeptical of the preceding paragraphs. One of the ironies of complexity science is that those who take it seriously know that overly complex theoretical ambitions are the death of good science. No, I think if you really want to apply agent-based methodologies within economics, it is best to go in the opposite direction. Simpler models let loose in larger, less constrained sandboxes.
Almost a decade ago Paul Smaldino and I wrote a paper about how groups collectively evolve separate strategies for internal and external cooperation. It’s a cool paper, I’m proud of it, and I kinda, sorta think it’s a major plotline in “Pluribus”. No, I don’t think the writers are aware of our paper. Yes, I know I sound like a crazy person, but I think the model we designed and explored is relevant to the story they are telling. Maybe next week I’ll lay out the parallels now that season one is complete.
Our paper is a simple story where i) evolutionary pressure on a couple of simple parameters for behavior at the individual level, ii) combined with parameters for how collective behavior emerges from individual behavior, can lead to iii) a world where a society of nice people can be, collectively, quite vicious. The evolutionary pressure is subtle, but also simple. Populations of uncooperative people fail to scale their resources and die off. Populations of cooperative people thrive until they are confronted by aggressive collectives that exploit and expropriate from them, killing them off. But if a group somehow evolves a culture in which members cooperate internally and externally on an individual level, while also being difficult to exploit collectively – if they thread that needle, they thrive.
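For flavor, here’s a minimal sketch of that mechanism. To be clear, this is not the actual model from our paper, just group-level selection on two traits: internal cooperation (which grows resources) and resistance to collective exploitation.

```python
import numpy as np

# NOT the model from the paper -- a minimal sketch of the mechanism only.
# Each group carries two traits in [0, 1]: in-group cooperation (grows
# resources) and resistance to exploitation. Groups reproduce by fitness.

rng = np.random.default_rng(1)
N_GROUPS, GENERATIONS, MUTATION = 100, 500, 0.02

traits = rng.uniform(size=(N_GROUPS, 2))   # columns: [cooperation, resistance]

for _ in range(GENERATIONS):
    coop, resist = traits[:, 0], traits[:, 1]
    aggression = 1.0 - coop.mean()          # how predatory the world is overall
    # Cooperation grows resources; being exploitable bleeds them away.
    fitness = coop - aggression * (1.0 - resist)
    p = fitness - fitness.min() + 1e-9      # shift so selection weights are positive
    parents = rng.choice(N_GROUPS, size=N_GROUPS, p=p / p.sum())
    traits = np.clip(traits[parents] +
                     rng.normal(scale=MUTATION, size=(N_GROUPS, 2)), 0.0, 1.0)

print("mean in-group cooperation:", traits[:, 0].mean().round(2))
print("mean resistance to exploitation:", traits[:, 1].mean().round(2))
```

Groups that thread the needle – cooperative inside, hard to exploit outside – are the ones left standing at the end of the run.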
I think there’s an opportunity for agent-based models within economics to do what we did in our model, but much bigger and much better. Framed as a question: why are the agents in our model only varying along simple parameters? Why aren’t they varying in the complexity of their behavior? Why aren’t they evolving their own rich, multi-layered strategies? Why aren’t they evolving strategies based on their own predictions for not just individual behavior, but how they think that behavior will change the landscape of resources and institutions in the collective? Why are they only playing the game we laid out, choosing amongst the strategies we gave them?
For me, the seminal moment when AI became something worth considering was not as far back as when computers beat players at chess or last week when LLMs were used to fabricate college application essays. It was in 2017 when AlphaGo Zero arrived at a level of play in Go that surpassed grand champions without any outside information besides the rules of the game. It was very specifically not an LLM as I understand them. It learned only by playing against itself. It created knowledge and insight strictly by internally iterating within a set of rules that evaluated success and failure.
We don’t know how to model an entire economy. Apologies to those interested in the Santa Fe Artificial Stock Market, but that’s always been too complex for my blood. So, again, we don’t know enough to make an agent-based model of an entire economy from the ground up, but we do know the rules of evolutionary success (survival and reproduction) and market success (resources and risk). We also have rules that we are comfortable imposing on emotional, sympathetic, and empathetic success (quantity and intensity of interpersonal relationships, observation of others’ success, the absence of suffering). Add in a few polynomial parameters for the shape of utility and disutility, and you’ve got a context where agents will learn how to play whatever games you throw at them.
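To show what I mean by those polynomial parameters, here’s one hedged reading, with every name and exponent a placeholder of mine rather than a claim about the right functional form:

```python
# One hedged reading of "a few polynomial parameters for the shape of
# utility and disutility": a single score blending the rule families named
# above, with curvature exposed as tunable exponents. All placeholders.

def fitness(resources, risk_taken, relationships, suffering,
            alpha=0.7, beta=1.2, gamma=0.5):
    market = resources ** alpha - risk_taken ** beta   # resources and risk
    social = relationships ** gamma                    # interpersonal ties
    return market + social - suffering                 # absence of suffering

print(fitness(resources=4.0, risk_taken=1.0, relationships=9.0, suffering=0.5))
```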
So why not simply set the rules in place and build a million agents in a world of other agents, forced to play games in a world of interactive games? The twist, of course, is that their strategies start as a blank slate.
Step 1: Randomly match with another agent
Step 2: Randomly choose to interact or not
Step 3: If you interact, randomly choose to cooperate or not
Step 4: Go to Step 1
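As runnable code, the blank-slate version looks something like this – a sketch in Python, where every choice is a literal coin flip, which is the point:

```python
import random

# A runnable version of the four steps above: a deliberately blank-slate
# loop where every choice is a coin flip. The research program sketched
# here is about letting agents rewrite these coin flips into strategies.

def generation(agents):
    random.shuffle(agents)                           # Step 1: random matching
    for a, b in zip(agents[0::2], agents[1::2]):
        if random.random() < 0.5:                    # Step 2: interact or not
            a_coop = random.random() < 0.5           # Step 3: cooperate or not
            b_coop = random.random() < 0.5
            a["memory"].append((b["id"], b_coop))    # raw material for learning
            b["memory"].append((a["id"], a_coop))
    # Step 4: the caller returns to Step 1 by calling generation() again.

agents = [{"id": i, "memory": []} for i in range(1000)]  # scale as hardware allows
for _ in range(100):
    generation(agents)
print("agent 0 observed", len(agents[0]["memory"]), "interactions")
```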
The question is, can you make the agents smart enough to update and add to those four steps in a manner that could evolve complex behavior, but not so rigid or intelligent that emergent strategies are obvious from the get-go? Can you write a model where not only the strategies being played are endogenous, but the games themselves? There are at least two people who already think the answer may be yes. And, yes, that paper is exceptionally cool, even if they consider their model outside the rubric of agent-based models.
Is this an AI thing? Because it sounds like an AI thing
Again, we find ourselves in a meta-enterprise relative to the field as it stands, only now we’re talking about game theory and evolutionary behavioral economics where the human contribution is at the meta level – the ur-text of the model, where rules and parameters serve as a substrate upon which something new can emerge. New, but replicable. Something that you can work backwards from, through the simulated history, to reverse engineer the mechanism underlying the outcomes.
Economics is riding high (as a science, at least; less so as policy advocates). The credibility revolution and emphasis on causal inference placed it in an ideal position to make contributions in what is a golden age of data availability. Before all this, however, was an era of high theory, one where macroeconomists formed schools of thought and waged wars across texts. It’s no doubt too conveniently cyclical to predict a new era of high theory on the horizon, but that’s what agent-based models could offer. A new era of theory, only this time centered around microeconomics, where millions of deeply heterogeneous agents are brought into being in a sandbox of carefully selected rules and hard parameters, where those rules and parameters are varied across millions of runs, and the model is run millions of times in parallel, each run a wholly fabricated counterfactual history.
Will the model replicate and explain our world? Almost assuredly not. But the models and strategies the agents come up with? Those could be entirely new. And that’s what the next era of high theory needs more than anything else. Not just new models. New sources of models.
New models for inventing models.