Artificial intelligence and the market in mitigating rational ignorance

Greenville, South Carolina does a pretty admirable job trying to lower the cost of being informed about local governance.

There’s no getting around the fact, however, that I remain pretty rationally ignorant of what’s happening in my neighborhood. This stands despite my being both a local homeowner and an economist who is intellectually invested in the idea that obstacles to housing construction are a major cause of a wide variety of social ills. The reason for my ignorance remains the same as most people’s: I’m busy.

Many cities have blogs and subreddits that one can follow to keep abreast of local policy. What I really need, though, is a paid liaison whose entire job is to absorb and distill all of these political currents into a single information digest consumable as a quarterly email. Decent chance there are at least 100 homeowners in my area who would pay for such a service. Should you offer such a service?

No, you should not. Why? Because you’d be rendered obsolete within two years. I’m pretty sure I’m going to be able to have a large language model produce exactly that email for me, probably for free.

Everyone keeps looking for “the big use case” for AI and LLMs. Allow me to suggest instead that the big use case is in fact thousands of micro use cases: those tasks for which we could all use a personal assistant for 3-5 hours per year, but where such a relationship simply isn’t a net gain given the fixed costs of retaining an assistant. Some of the big use cases for early AIs will, in this sense, be similar to Uber or Airbnb: they reduce the fixed costs and transaction costs of personal services.

For me, one of those first personal services provided by ChatGPT or its closest rival may simply be telling me who to vote for:

“I am a X year old homeowner in zip code XXXXX. I am single/married with X children of ages [X….X]. I earned X dollars last year. What should I vote for and against in the upcoming election on November 11th?”

Estimating the effects of a slow news cycle

At the moment, the collapse of Silicon Valley Bank is the dominant story in the news cycle. It seemed like a big deal to me at first, then less of a big deal, then of enormous consequence again. For now, my estimation has settled into “a negative event that will hurt some people but will only be of long-run consequence if it yields sufficiently bad new economic policy, e.g. a bailout that entirely shields shareholders from consequences.” But honestly, I don’t know. My estimation really shouldn’t move your priors too much unless you were previously sitting at one of the extremes of “Nothing actually happened” or “This is the beginning of a new Great Depression.” I’m quite confident neither of those is correct. If you want a solid accounting, read Noah Smith’s post. I think he probably nailed it.

What I do want to consider is the enthusiasm within the “take marketplace” for breathless concerns that this was the beginning of a financial meltdown, a desperate situation that calls for a federal bailout, the beginning of inevitable hyperinflation, or even evidence of the catastrophic consequences of “wokeism” for <checks notes> risk hedging within bank portfolios. All of these seem somewhere between overwrought and stupid, yet all got a non-trivial amount of oxygen within the news cycle. [UPDATE: Depositors were made whole through federal liquidity; shareholders were not bailed out. Seems pretty reasonable to me.] Many takes were no doubt motivated by personal assets at stake or economic hobbyhorses, but I’m more concerned with how much traction they got than with their origin stories.

So here’s a research idea so quarter-baked I haven’t even looked on Google Scholar to see if it’s been done, let alone whether it would work. What is the relationship between a slow news cycle and pessimistic affect in event coverage? Here’s how I’d go about it:

  1. Create an index of news story variation. Variation in news coverage is an indicator that nothing is happening. When important things happen, they get covered a lot, which means there is less variation in stories across outlets.
  2. Run a natural language algorithm for measuring “pessimistic affect”, i.e. doomerism, in news stories.
  3. Estimate the relationship between lagged news story variation and current pessimistic affect.
  4. ?
  5. Publish
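For the curious, here’s a deliberately toy sketch of steps 1-3. Everything in it is a stand-in assumption: the Jaccard similarity measure, the little doomer word list, and the sample headlines are illustrative choices, not a proposed design (a real version would use something like TF-IDF embeddings and a trained sentiment model).

```python
def variation_index(headlines):
    """Step 1: mean pairwise Jaccard distance between outlets' headline
    word sets. High variation = outlets covering different stories,
    i.e. a slow news period."""
    sets = [set(h.lower().split()) for h in headlines]
    dists = [
        1 - len(sets[i] & sets[j]) / len(sets[i] | sets[j])
        for i in range(len(sets))
        for j in range(i + 1, len(sets))
    ]
    return sum(dists) / len(dists)

# Stand-in doomer lexicon -- a real version would use a sentiment model.
PESSIMISM_WORDS = {"collapse", "crisis", "meltdown", "doom", "catastrophic"}

def pessimism_score(text):
    """Step 2: share of words drawn from the pessimism lexicon."""
    words = text.lower().split()
    return sum(w in PESSIMISM_WORDS for w in words) / len(words) if words else 0.0

def ols_slope(x, y):
    """Step 3: slope from regressing current pessimism (y) on lagged
    variation (x); a positive slope supports the hypothesis."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

# One big-story week (everyone covers the bank) vs. one slow week:
big_story_week = ["bank collapse spreads", "bank collapse fears", "bank collapse crisis"]
slow_week = ["local team wins", "new recipe trend", "celebrity gossip update"]
```

On the toy headlines, `variation_index` returns 0.5 for the big-story week and 1.0 for the slow week, which is the direction the index is meant to capture.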

The hypothesis is simple: when the news cycle is slow, outlets and pundits have an incentive not just to hype the importance of any event, but to accentuate its potential negative consequences going forward so they can keep talking about it.

That’s it. That’s the idea. I hope you will include me in the acknowledgments when accepting your various research awards and accolades.

Don’t get too mad about bad economics journalism

Rather than channel my inner, but very real, grumpy old economist, I want to instead reassure you that, yes, the NYT article “Is the entire economy gentrifying?” is as bad as, if not worse than, you think. I have a duty to link to it, but I’d actually prefer you not click through.

It’s bad in the all the ways that can make you feel crazy and gaslit.

  1. The title is a question even though the entire article is an assertion
  2. The subtitle uses colloquial language to signal condescension and superiority
  3. It makes grievous economic errors that betray the authors’ broad ignorance of the subject

There’s little doubt that part of why it so blatantly telegraphs its badness is for the very purpose of pulling in an additional audience of hate-readers. I could grump about the addition of that unnecessary question mark in the title to mitigate any culpability for the meandering, anecdote-driven assertions that follow. I could whine that describing profits as “fat”, rather than “large”, “growing”, or, god forbid, without an adjective at all, lets us know right away that their story has a villain you can blame while feeling superior to all the fools who don’t realize they’re being taken advantage of.

I could definitely settle into a cathartic, apoplectic rage at the omission of the G*D D**M MONEY SUPPLY as a potential input into inflation. For such an economic sin they should have to take the train to Paul Krugman’s CUNY office and silently wait in contrition until he shows up to absolve them (pro tip: bring snacks).

I could do any of those things. You probably could, too.

But you shouldn’t. These are professional journalists, but amateur economists, filling column inches in the New York Times. Your sibling might have a marginally worse opinion on the economy tomorrow, but let’s be honest: their opinions were already pretty bad. Just enjoy your week.

On minimum wages and the devil’s discount

There’s a new paper about the minimum wage and its effects on crime. I wrote a paper (with Amanda Agan) about the minimum wage and crime (here’s a slightly older ungated version). I have received several requests to comment on the new paper because, based on the abstracts, our papers appear to generate conflicting results. Spoiler alert: they don’t. Sorry to disappoint those who came looking for an academic blood bath.

I am happy to talk about the new paper, by Fone, Sabia, and Cesur (FSC), but let’s get the big part out of the way. Our paper on the minimum wage looks at criminal recidivism, defined as a return to prison, for those who have been released from prison. These are people whose conviction resulted in being incarcerated in a prison (not jail) who, on average, served nearly 2 years and were subsequently released at age 35. The FSC paper uses arrest data. Their principal observation regards property crime arrests committed by 16-24 year olds.

Our two papers identify fundamentally different results about fundamentally different populations that, in my opinion, hinge on completely different mechanisms.

Our paper is old news, so I won’t belabor the point. Succinctly, we found that a minimum wage increase of $0.50 reduced the probability that an individual returns to prison within 3 years by 2.15%. The availability of state EITCs also reduced recidivism, but only for women.

The FSC paper uses Uniform Crime Report data to look at arrests. Here are the figures and tables I’ll focus on for our discussion:

FSC find that property crime arrests increase for 16-24 year olds in an event study estimate, where an increase in the minimum wage of at least $1 serves as an “event”:

Property crime arrests in their diff-in-diff estimate reaffirm this estimate. They also, however, observe negative effects on property crime arrests on 35-49 year olds, though the coefficient is too noisy to be statistically significant. These results are similar to ours, though because we were looking at individual recidivism we had the benefit of estimating over ~6 million observations (vs the 45 thousand county-years of FSC).

When FSC dig into the crime categories further, there is no effect on burglary, robbery, or auto theft. The property crime effect is entirely in larceny. Let’s also note the positive effect of the minimum wage on vandalism.

Here’s an important tidbit: UCR data does not distinguish between misdemeanor petty (petit) larceny and felony larceny. One last result: employment is noisily declining for 16-24 year-olds who have not yet completed high school.

Let’s add it all up: when a state increases the minimum wage by at least $1, we observe an increase in larceny and vandalism arrests of 16-24 year-olds, without any effect on robbery, burglary, auto theft, or violent crime, all while reducing the employment of 16-24 year-olds who have not yet completed high school. Can you see where I’m going with this?

Shoplifting. When states significantly increase the minimum wage, employers stop hiring teenagers. Those teenagers, laden with time but bereft of spending money, rediscover the allure of the five-finger discount. That is my interpretation of these results and nothing about these results seems strange to me or at odds with the earlier findings in our paper on the minimum wage and recidivism.

I don’t think the authors have really done anything wrong here. I could manufacture some of the usual gripes if I really wanted to, but the identification strategy seems at least broadly sound and the data is widely used. The estimated magnitudes seem plausible. If I were going to complain about anything, it would probably be the imputed $766 million price tag placed on the externality, but I’m also not well-versed in the costs of shoplifting (and in case you’re reading something into my tone, I do not think shoplifting can be dismissed as unimportant). If I had to hang my hat on something, though, I’d say that’s probably on the hefty side. In footnote 48 they consider a more conservative estimate of a $128 million externality. That seems more plausible to me.

The minimum wage literature is one we all, every single one of us, bring our own political and economic baggage to. When our paper found that the minimum wage reduced criminal recidivism, a lot of people latched on to it because what they heard was “minimum wages stop crime”. I’m sure a lot of people will latch on to FSC’s new paper because they want to hear “minimum wages cause crime”. The reality, of course, is vastly more nuanced. We should expect these laws to have heterogeneous effects born of complex interactions, particularly when we stratify populations into those interacting with an institution as rife with peculiarities and pathologies as the US criminal justice system.

We are all piecemeal workers now

I think the single most under-considered development in labor economics has been the revolution in the real-time measurement of labor output over the last decade (although there was an interesting article recently in AEJ: Applied looking at the shift in the late 70s from standardized to variable wages within firms). A lot of ink has been spilled agonizing over why “no one wants to work” in fast food establishments for $15-$20 an hour, without appreciation for how much those jobs have been transformed by operations monitoring and management. Simply put, there’s no hiding on the line anymore. You’re either producing or you’re not, and everyone knows. Now, whether subpar performance will quickly result in termination is unclear in such a tight labor market, but you can be sure that your inadequate productivity will be quantified and communicated to you. These numbers may create a feeling of shame or inadequacy, perhaps even sufficient to make you work harder, increasing the disutility of labor faster than your earnings increase. Your prospects for advancement or a pay increase will correlate directly with your measured productivity. The spread of such indignities, previously reserved for those working assembly lines, sales, or independent contract work, is not limited to fast food:

There have long been lines of work where labor could be paid “piecemeal”, i.e. paid per unit of output. These jobs were typically limited to those where labor’s output was discrete, easily measured, and where quality could be distilled into a sufficient/insufficient categorization. Great for sewing textiles, bad for writing code or making gourmet food. When you’re working a piecemeal job you can be rewarded for high output, but it’s a double-edged sword. There’s no obscuring your contributions within the uncertainty of productivity or the efforts of others. It’s the difference between singing in a 50-person choir and playing golf. No one listening to that choir will ever know I can’t hit a note, even after dozens of performances. My fraudulence on a golf course is transparent after a single swing.

The revolution in labor measurement has all kinds of ramifications for the nature of work, management-labor relations, and the distribution of income.

1) Being watched is stressful, being measured doubly so.

2) Nobody likes being judged. Always being watched will only heighten labor skepticism and antagonism towards management.

3) Bigger rewards for higher producers can only increase income inequality even if wages rise for everyone

Better measurement could increase labor’s share of their marginal revenue product simply by reducing uncertainty and risk. This increased share, combined with greater productivity, could raise incomes for all laborers. Even under these assumptions, however, greater measurement can still increase income inequality because it will likely reward the most productive workers more than the least.
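A toy calculation makes the point concrete. The wage numbers below are purely illustrative assumptions, not estimates from any data:

```python
# Hypothetical wages for a low, median, and high producer.
# Under salaries, pay is compressed; under tight measurement, pay tracks output.
salaried_wages = [30, 40, 50]
measured_wages = [33, 48, 70]  # everyone earns more under measurement...

everyone_gains = all(new > old for old, new in zip(salaried_wages, measured_wages))

def top_bottom_ratio(wages):
    """A crude inequality measure: highest wage over lowest."""
    return max(wages) / min(wages)

# ...but the top/bottom ratio rises from about 1.67 to about 2.12:
# wages up for everyone, inequality up too.
```

The mechanism doesn’t depend on the specific numbers: as long as measurement rewards the top more than the bottom, both claims hold at once.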

It’s hard to usefully speculate on the exact mechanisms through which transitioning to piecemeal work affects labor. Perhaps it’s safer to just be grossly reductionist: increased monitoring and measurement of labor threaten every worker’s god-given right to do a half-assed job.

Workers, like all of us, are under-appreciated for the guile and sophistication they bring to bear when maximizing their utility. It’s not just where and when we work, it’s how we work. Some want to climb the ladder, some don’t. Some work to live, some live to work. Some jobs sustain us while we participate in high risk-reward labor tournaments in our side-hustles (music, art, indie game design, etc).

Doing a half-assed job is a tried and true strategy for living a great life if you have the tremendous fortune of living in a wealthy country. Employing half-assed workers, however, is a bit trickier, and I suspect the rise in monitoring and measurement is a market response revealing that the conflict between half-assed labor and management has never been resolved.

How will such conflicts be reconciled? My suspicion is that this will eventually come out as a mutually beneficial gain, on average, for all parties. Workers will be monitored and measured more tightly, which will make work less pleasant, but they will be paid more and work less. I suspect many of us would actually prefer to work 40% harder for half the hours and make double the pay. There will be people who lose in the transition, however. Every workplace has a slacker who floats from job to job, doing far less than the bare minimum, riding the wave of uncertainty that keeps their employment at least temporarily intact. For many occupations that strategy will cease to be viable. I’d feel bad for them…but I don’t. Maybe that makes me a grumpy old man, but if anyone was taking bets I’d put a lot of money down that for the last 50 years it’s been mostly white men riding off the labor of others…and it’s been mostly the over-contribution of women to marginal output that has subsidized their quarter-assed counterparts.

The days of management trying to increase productivity by exhorting motivational platitudes while dangling the carrot of advancement while pretending to know who deserves credit are over. We know who’s doing the work. Which means that even if you are still formally receiving a salary, your salary will so tightly hew to your productivity that it will effectively be a piece rate. That also means, by the way, that management has no excuses anymore, either. You know who’s getting the work done. The same forces undermining a half-assed labor strategy will hopefully continue to undermine casual cronyism and discrimination as well.

But don’t worry, humans are clever. We’ll game each new system along the way. You’ll never find a more whole-assed effort than someone trying to figure out how to half-ass their job.

The Eagles are going to win*

I like Patrick Mahomes and, all else equal, prefer the Kansas City fan base if for no other reason than they seem less interested in setting things on fire.

But alas, the Eagles are probably going to win, because the value of quarterback play became so dominant that the incentives to innovate and invest in alternative strategies have finally resulted in an equilibrium where the opposing team is dominant in every other area of play. Which is a long-winded way of saying that the annoying old people who rail on about line play, running the ball, and defense, who have been consistently wrong about everything for 20 years, finally get to have their moment in the sun and say “I told you so”, and you, smart person who values your mental health, will simply smile and nod and not take the bait to argue with them further.

Or maybe Mahomes’s ankle is back to 70% and they win? Who knows? We’re talking about a sport that, 19 games deep, is almost entirely determined by two interacting random probability generators: injuries and general luck.

Enjoy the excuse to socialize and eat junk food, which for many of us is 90% of the utility proposition in watching the game.

*probably

The blockade on California’s housing supply is unraveling

Policies can create their own entrenched constituencies, bad policies doubly so. Restrictions have long strangled the supply of housing in California, much to the detriment of the state’s inhabitants. This cost, of course, was spread across the entire population, while the resulting dramatic increase in housing prices proved to be a windfall for incumbent property owners. The immediate constituency of beneficiaries, however, has probably been less important than the capital-committed property owners that followed, taking on larger and larger debt obligations, each generation of new homebuyers more terrified than the last that the rug might be pulled out from under them, more committed to maintaining an obviously terrible status quo that they nonetheless found themselves bought into.

Economists refer to this as a transitional gains trap. Once the effects of the policy are internalized into the market, no one subsequent to the first generation of incumbent beneficiaries ever benefits. But, and this is most important, if the policy is *undone*, those who bought into the market after the policy was in place stand to lose. In the case of California housing, the potential losses could be significant.
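The arithmetic of the trap is simple, and worth spelling out. The prices below are invented round numbers for illustration, not estimates of any actual California market:

```python
# How a transitional gains trap plays out, in three moves.
unrestricted_price = 400_000  # hypothetical price with no supply blockade
restricted_price = 900_000    # price once supply restrictions are capitalized in

# Generation 1 owned before the policy: a pure windfall.
first_gen_windfall = restricted_price - unrestricted_price   # +500,000

# Generation 2 buys at the restricted price: no net benefit, just a bigger mortgage.
second_gen_gain = 0

# If the policy is undone and prices revert, generation 2 eats the reversal.
second_gen_repeal_loss = unrestricted_price - restricted_price  # -500,000
```

Only the first generation ever gains; every later buyer gets nothing from the policy but stands to lose everything capitalized into their purchase price if it is repealed. That asymmetry is what makes the constituency for a bad policy self-renewing.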

Imagine you bought this house today (care of Andrew Baker’s cheeky tweet):

Now imagine you just bought this house today, only to read this headline tomorrow:

Darrell Owens gives all the necessary details. The point isn’t whether this or other policy events will be the final blow to the California housing blockade. Or this legal case. Or this political agenda. The point is that the walls are closing in. Transitional gains traps depend on sufficiently concentrated benefits and diffuse costs. It may simply be that the costs of the California housing blockade are so enormous that they are economically crippling for the majority of the population. The costs are too big to politically diffuse.

Transitional gains traps are impossible to costlessly get out of. Someone always loses. All you can really do is try not to fall into them yourself. Even if the house actually does have good bones.

Redefining American law enforcement

Yesterday Noah Smith wrote a persuasive blog post about what police reform might look like. Similarly Jen Doleac wrote a thread about policing reform, in the comments of which Kevin Grier absolutely gave me the business while righteously criticizing the implication that law enforcement institutions could be even remotely trusted to reform themselves. I always find it awkward trying to respond to criticism when I essentially agree with every point being made.

I’m fine with Kevin’s criticism, to be clear, because I think it comes from the frustration that policing has arrived at its state along a tide of winking half-assed internalization of some reforms, exceeded only by the whole-assed petulant refusal of others. While the broader chattering classes and technocrats have been trying to adjudicate whether the dominance of White Supremacy within the culture of policing necessitated its wholesale defunding, law enforcement has managed to quietly be on an apathetic half-strike in major cities while bearing no material cost that I am aware of and remaining as militarized as ever.

What I do want to reconcile is the notion that the decentralization of policing across states, counties, and cities is an opportunity for reform, because I’m not optimistic we’re going to get any meaningful action at the national level. What we can hope for, agitate and campaign for, is state and local reform. No one is getting elected to the presidency if they can have the “defund the police” label successfully slapped on them. A town, however, can fire its police and form an entirely new force under different job expectations, with different hiring objectives (de-escalation, human services), qualifications (higher training bars), and bigger salaries. A town or small state can change the burden on police unions or even hire entirely parallel to them. We can decide that law enforcement is important work where you can make a professional salary with attractive benefits, but like other such jobs you can be fired with or without cause because someone else wants your job and might be able to do it better.

I don’t want to give the impression I think the problem of law enforcement in America can be solved with a few paragraphs. I guess all I want to do is remind you that Charles Tiebout has a pretty good point: local public goods always face the competition of those offered by their neighbors. Maybe the single most important contribution any of us can make to improving the deadly, destructive disaster that is the current state of law enforcement is to push your local government for reform. Your state could end police retention of seized property or end qualified immunity. Your sheriff’s office deputies could be at-will employees. Your city could require the police union to self-insure against civil lawsuits.

Because it only takes one place to start a Tiebout chain reaction, a place where people want to live and work that much more because they’re less afraid that the police are going to hurt their family or friends. Less likely to ignore theft and assault. Less likely to tase their teacher to death. Beat their neighbor to death. That sounds like a nicer place to live or start a business. The time for half measures is over, which unfortunately probably means the opportunity for national reform has passed. There are 18,000 police departments that can and need to be reformed.

Maybe we can start with yours.

The consequences of minting the trillion dollar coin

A group of congressmen are (again) opposing raising the US debt ceiling, which (again) threatens to put the US government into default on a portion of the US debt. There is some uncertainty about the magnitude of the consequences of a US default, varying between very bad and globally catastrophic. Phrases like “taking hostage” and “political extortion” are thrown around too casually in the discourse when opportunities for political leverage are taken advantage of, but in this case I think the scale of consequences makes it completely appropriate. A threat to force a US debt default through the mechanics of a mistake made when legislating bond issuance rules during World War I is an act of political extortion that holds the global economy hostage.

The obvious solution is to eliminate the debt ceiling, but we have failed to do so because of the same political incentives underpinning our problems today. Some economists and economics-adjacent folks have suggested a policy solution, itself similarly born of an unintended legislative loophole: the trillion dollar coin.

As far as specialty areas go, I’m about as far from a monetary specialist as an economist can get, so I’m not going to litigate here whether putting the coin on the balance sheets of the Federal Reserve would be inflation neutral or compromise the independence of the Fed. What I want to consider is the Lucas Critique.

Specifically, the Lucas Critique applied to political economy after minting a trillion dollar coin. In briefest of terms, the Lucas Critique says a model of the world generated from past data to forecast a policy’s effects is wrong as soon as that policy changes the rules. We (rightfully) do not like the status quo as created by the current rules, but it is extremely difficult to predict the consequences of a big rule change, via loophole exploitation, made to fix the status quo because the underlying data generating process has been fundamentally altered.

I don’t know if minting a trillion dollar coin is a good idea or a bad idea. What I do know is that we should be humble when trying to forecast the consequences of shifting the power to radically impact the balance sheets of the Federal Reserve from an elected body of 435 congressmen and 100 senators to a cabinet member appointed by a single elected President.

Let’s ask two questions. I like to ask myself a version of these two questions when evaluating changes in political options or rules:

  1. Why is the opposition reacting the way it is?
  2. What would Trump have done?

The first is because it forces me to consider what the underlying incentives and strategies really are. The Republicans, as it stands, do not seem to view the trillion dollar coin as a policy outcome to be avoided. They’re, historically, the anti-inflation party. They represent a lot of bond holders. Hyperinflation should terrify them, so maybe they agree with the prediction of inflation neutrality. On the other hand, they also know that the electoral college favors them and, with the growing aspiration within the party to win over Latino voters for the next few decades, maybe they like the idea of shifting more power into the executive branch.

The second question is important because it forces me to acknowledge when I’m relying on norms to produce the outcome I prefer. Say what you will about Trump, the man was never concerned with norms, traditions, or the consequences for anyone but himself. This question also allows me to consider obviously ludicrous things that no one could get away with, because he got away with exactly such things. So, let me ask you this: if the Secretary of the Treasury can order the minting of a trillion dollar commemorative coin and deposit it in the Federal Reserve balance sheet, what other ways could the Treasury reallocate funds on US balance sheets? What if we stopped assuming it would only be used in the most benign, inflation-neutral way possible? Why couldn’t they use it to loan money to Russia, or pay off the balance of global debt held by a small country that specializes in off-shore banking? Or, stepping back from the brink of “The President stole a trillion dollars”, what are the ways in which a President could trigger an economic or constitutional crisis by appropriating the power to significantly increase M1? What are the ways this new option would be internalized in the political marketplace and equilibrium of power?

The point is this: political norms, especially those constraining power at the highest level, are more fragile than we sometimes appreciate. Nothing exposes this more than big changes to the rules of governance. Game theory and mediocre movie plots now considered, let’s return to the Lucas Critique. A political compromise made to expedite bond issuance under the pressures of The Great War produced a political lever that has been exploited for decades. This was an unintended consequence. As a current wing of the Republican party has put more and more weight on this lever, the opposition is now considering exploiting a loophole, itself an unintended consequence of the otherwise innocuous coinage act. It’s hard to forecast the effect of such a fundamental shift in the rules and distribution of power because it immediately renders obsolete the model currently informing our expectations.

Cards on the table, if we’re at the zero hour and it’s either a) mint the coin or b) default on US debt, I think we should mint the coin. Defaulting on the debt of the country that provides what is without question the currency tying together the global economy scares me enough that some sort of workaround gambit becomes a necessary risk. But what will be the unintended consequences of minting a trillion dollar coin? I don’t know.

And neither do you.

On the paucity of new ideas and the paradox of choice in modern research

I was once told that papers are never finished, only surrendered. It’s one of those turns of phrase whose observational accuracy has only increased. I don’t know that I’ve felt good about submitting a paper for review in over a decade, and that includes the ones that were accepted and subsequently published.

When I submitted papers early in my career I felt great. There was both a sense of accomplishment and eagerness to learn what the reviewers might think, a hopeful optimism. That eagerness didn’t reflect overwhelming confidence so much as naivete as to what the review process entailed. Now I know too much.

What I know, what I always know, is that more could be done. More alternative empirical specifications could be added to the robustness section. Newer models could be considered for the underlying mechanism. Older models too. Different literatures could be engaged and contended with. Summary statistics could be visualized. Specifications could be bootstrapped, a different identification strategy used. I never applied for administrative data in Denmark. Wait, they don’t have this policy in Denmark. I could have tried Sweden. Or Dallas. Wasn’t there a close election in Baltimore in 1994?

This isn’t a rant or lament about the journal reviewing process. For every petty or uninformed referee report I’ve received in my career I’ve received three that were entirely fair and one that was so good the reviewer deserved to go in the acknowledgements of future drafts. This is more a reflection on a trap born of our own knowledge and imaginations.

There are so many tools at our disposal, so many data sets, so many options that I worry we are collectively succumbing to a paradox of choice. The paradox of choice, for those who do not recall, was a theory suggesting that the number of options facing consumers was net lowering their utility because of the search and decision-making costs those options entailed. I think this theory is deeply wrong, but I am also going to be incredibly unfair to it here and simply dismiss it out of hand as a consumer theory. Instead, I want to consider a more collective application to the modern social scientific enterprise.

Every research paper is an attempt to contribute new ideas and refine old ones. There is occasional handwringing over the paucity of new ideas in economic research and the abandonment of broad swaths of traditionally difficult economic subjects. Explanations for these pathologies tend to be more sociological than economic in construct, invoking political preferences or mood affiliation. Others focus on the institutions of academic research, specifically faculty hiring and tenure. I’d like to add the paradox of choice to the mix.

There are countless methodological, theoretical, and rhetorical choices that can be made that will result in nearly identical research contributions. If your aim is to contribute a wholly new idea, then every one of those choices comes with the opportunity cost of the countless alternatives. If, on the other hand, your contribution is a refinement of a pre-existing idea in an already rich vein of research, then the choices you made are the contribution. For refinements, the choices made are a reason to recommend acceptance of your paper. For newer, more original contributions, your choices can be more easily framed as reasons to reject it. A more cynical academic might fear that the more original the contribution, the more likely the referee is to succumb to the Nirvana fallacy, disapproving of your paper’s choices relative to an imagined paper more perfectly in line with the choices the referee would have made if they had thought of the idea.

Now consider these two mechanisms in parallel for a young researcher. Not a wunderkind that faculties on other continents are already talking about. Consider an above-average newly minted PhD from a top 25 economics department. They are executing their first research project since accepting a tenure track position, a defined question with explicit policy relevance. There are dozens of data sets they could pursue, hundreds they could build, and a countless number they could imagine feasibly existing. They could pick a workhorse model or construct an entirely new pathway forward from dozens of building blocks. There are 3-4 “hot” identification strategies in their field, but they could also consider something off the beaten path.

Research projects aren’t binary constructs, “new” or “refining” contributions, but it’s not unreasonable to place their contributions on a spectrum from “entirely new” (i.e. Newtonian physics) to “marginal refinement” (i.e. weakening the assumptions in a minor mathematical proof). From the start, our new faculty member will observe the inherent riskiness of overdifferentiating from the field, which turns every choice into a reason referees might reject their paper. This will push them down the spectrum towards marginal refinements. Then they will start the iterative process of executing and writing up their research.

As they execute their analysis they will see the forking paths of alternative choices. Different specifications will be added to robustness tables. Alternative models will merit their own appendix. They will begin to write defensively, trying to anticipate and refute arguments from their mental model of a reviewer. They will try to divert an imagined conversation away from the conclusion that the choices made in the paper are wrong. The risk of newness only becomes starker. There must be, and remains, the contribution in the paper, but it will become narrower, buttressed on all sides by the rising masonry of appendices and references, its only weakness the narrow channel through which its contribution is made. This iterative process will continue until the opportunity cost of time not spent on their next project forces the unconditional surrender of their paper to that still unvanquished tyrant, diminishing returns.

All of this weighs on the shoulders of young faculty. A million choices to be made, a million reasons to be rejected. So what do you do? You find your tribe. A tribe based not in the schools of thought that dominated the 1970s but in schools of methodological choices. This is how we estimate gravity models of trade. This is how we estimate monopsony rents. This is how we model the impact of the minimum wage on employment. If you want to be cynical, there are no doubt similar tribes of policy outcomes, but I don’t think those are what haunt the face-on-desk stress dreams of assistant professors working on a Sunday night.

We can get more new ideas the same way we can get bolder, more enthusiastic young researchers. Not by reducing their choices, but by lowering the price of those choices. Easier said than done, and maybe I’ll write up some thoughts on how to lower the prices of researcher choices, but the first step is likely cultural, i.e. I have no idea how to pull it off. The most important step may simply be reorienting how we read papers, shifting the focus from “What did the authors do right or wrong?” to “What do we learn from this?”