Academic Publishing: How I think we got here

Fabio Ghironi, whom you should be following on twitter already, threaded the #econtwitter needle the other day, managing to write about the growing problems within academic economic publishing without falling victim to the sorts of whining and nihilism that discussions of publishing experiences often degenerate into. Below I’ve included a sample. Do go read the whole thing.

I don’t want to adjudicate the merits and flaws of the economic journal system. I have no idea how it would fare in a benefit-cost analysis or how to improve it, and I’m deeply skeptical of anything that has a whiff of “easy fix” for what is a very complex system of scientific incentives, social benefit, and academic sociology.

Instead, I’d like to discuss how I think we got here. Here are a few stylized facts about how research in economics has changed over the last 50 years, none of which I expect to be controversial:

  1. There are a lot more people writing academic journal articles.
  2. There is a lot more well-executed economic research.
  3. The teams of co-authors on papers/projects have become much larger.
  4. The number of journals whose prestige is commensurate with a tenured position at an elite school has grown more slowly than the total faculty employed by elite schools.
  5. Economics research has become more expensive and labor intensive.

What is immediately obvious from 1-4 is the journal space squeeze, resulting in journals with vanishingly small acceptance rates. The American Economic Journal: Microeconomics (one of the very top journals that isn’t part of the holy Top-5, hallowed be thy names) managed to go an entire year without accepting a paper! Their editorial team, as any Murphy’s Law aficionado would have predicted, interpreted this as evidence they should publish fewer papers.

[Update 6/2/21: A reader has pointed out that AEJ:Micro has, over the past year, managed a more than respectable turnaround time on submissions and eventually accepted 33 papers in 2019 and 20 in 2020, yielding acceptance rates of 5 to 9%. Editors’ Report here]

One of the things that economics has become obsessed with, and maybe always has been, is “superstars”, and not just those who get medals. Within every subfield there are a handful of current researchers who are known to everyone else, whose papers are always top of the list in the best working paper series, and who tour the country tirelessly promoting their latest papers. And they are often promoting multiple papers. How is it that they find the time to do so much research?

Well, first and foremost, they are incredibly conscientious, with work ethics bordering on obsessive. But a not-too-distant second is the change in the nature of their jobs. They are not just working at a chalkboard by themselves or analyzing the latest batch of data. They are managing research teams. They are applying for grants that support grad students and post-docs. They are meeting for 3 hours each day with different teams of scholars, some at different institutions. They are coming up with their own ideas and refining the ideas of others; they are guiding the research of apprentices while also collaborating with equally experienced peers. They are the CEOs of miniature research empires.

Let’s assume for a second that the number of superstars in the field has remained constant (it’s grown, but let’s keep it simple). In 1950 the top 5 journals probably could have published every single full research paper written by superstars and still had room to spare. Nowadays I’m not sure the top 5 journals could handle the research output in a given year just from MIT. I don’t think the top 10 journals could handle all of the research from the Boston metropolitan area.

Let’s visit the other side of the fence now. If you are a co-editor at one of the 5 elite journals in economics, you are allotted roughly 13 acceptances per year. These are fixed. For these slots you review roughly 200 papers. Let’s say 50 of those papers are trash and 50 are good but below the bar. These you desk reject. Of the remaining 100, another 25 are a bad match for the aesthetic or substantive targets laid out by the editor-in-chief(s). Another 50 are good, but the reviewers are, upon closer inspection, able to identify real problems that will undermine the impact of the paper, ruling them out for an elite journal such as yours.
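
(For the spreadsheet-averse, here is that stylized funnel as a few lines of R. The numbers are the made-up ones from this paragraph, not actual journal statistics.)

```r
# Stylized editor's funnel -- illustrative numbers from the text, not real journal data.
submissions   <- 200
desk_rejects  <- 50 + 50  # trash + good-but-below-the-bar
bad_match     <- 25       # wrong fit for the editors' aesthetic or substantive targets
real_problems <- 50       # referees identify substantive flaws
slots         <- 13       # acceptances allotted to you per year

finalists <- submissions - desk_rejects - bad_match - real_problems
finalists               # 25 genuinely strong papers left standing
slots / submissions     # 0.065 -- the realized acceptance rate
finalists / submissions # 0.125 -- the share the editor privately thinks is worthy
```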

That leaves you with 25 papers for 13 slots. That might not sound like a problem, but think about the process of elimination you just went through. These are really good papers that make important contributions to the field and you need to reject half of them. The discipline will not accept you flipping a coin. You need reasons to reject some of these papers. Well, let’s look at the co-authors. You don’t want to be a jerk, but you’re both desperate and don’t want to be remembered in your hallway at work as the person who rejected that massively influential paper that reinvented the field. You’d feel bad, but 20 of the papers have at least 1 superstar on them. Sorry, but status is a heuristic for a reason. You still need to reject 7 more.

Let’s go through those referee reports again. Was there anything questionable? Any possible source of bias speculatively hypothesized by a person who spent two days thinking about the paper that the people who worked on it for three years never thought of? Are they relying on econometrics that someone has recently posited might sometimes fail to calculate error terms optimally? Is it a theory without an application? Is it an application without a theory? Are the coefficients too small to be interesting or too large to be believable?

Now, let’s remember the single most important thing: everyone knows this is happening. This is not a secret process, and academic researchers have responded accordingly. Superstars have responded by managing bigger teams, producing even more research, adding more and more layers of robustness checks, alternative specification designs, even entirely different research designs serving as papers within papers that put Hamlet to shame. At the same time, comparably excellent, but perhaps slightly less famous, authors with outstanding research records are thrilled to work with a star, knowing that it will increase their odds at a top journal. When designing the research they know what is in vogue, what is falling out of favor, and how to shape their papers to fit the ambitions of current editors. Research designs are defensive from the start, anticipating as many angles of attack as possible. When the research is completed, it will go on the presentation circuit for a year or two, subject to the slings and arrows of the pool of economists from which your future referees will be drawn. It is from these comments that your appendix will grow. And grow. And grow. You must anticipate every attack, lest your paper’s shortcomings make the editor’s job easier.

Now try to imagine what the research lives of everyone start to look like. For the bulk of good researchers, this means working on 3-6 projects at all times, with each of those projects stretching out over 3 to 5 years. Even if you land a 2-year post-doc, submitting your tenure packet in the fall of your 6th year means you have 7 total years to get multiple papers through a process accepting less than 3-5% of submissions and, more importantly, less than half of all the objectively outstanding research. At the same time, superstars are stretching themselves impossibly thin, expected to meet impossible expectations and get papers accepted at journals with impossible standards, knowing full well the careers of their co-authors depend on those acceptances. A faculty appointment should come with a free clonazepam prescription.

To sum up: academic economics has more star researchers managing larger teams, producing more high-quality papers than there is space for in the elite journals, which have been forced to invent impossible acceptance criteria to produce the singular output that journal editors absolutely cannot shirk: rejections.

And if you think the easy answer is to just increase the size of journals, you are missing the entire function of journals. Journals no longer function as disseminators of economic science.** Rather, they are criteria for tenure and promotion. There are a finite number of faculty slots, and schools need reasons to keep/dismiss/promote/retain/recruit. If the number of elite journal articles published were to change, the principal effect would be to shift the threshold for success or failure in tenure and promotion.

Of course, increasing the number of publication slots in historically high-prestige journals might still be a good thing. Going back to our editor’s dilemma, if they could accept the entire 12.5% of papers that our editor-under-truth-serum genuinely believes are significant contributions, then everyone’s CV would more accurately reflect the subjectively assessed merit of their work, and less so their luck and ability to tirelessly play a zero-sum game. Sure, the high-low prestige bar would be inflated upwards, but it would nonetheless increase the signal-to-noise ratio on everyone’s CV.

This, of course, would lower the value of every CV that already includes a Top-5 publication, but such is the struggle of every YIMBY vs NIMBY movement. Increasing the supply of elite journal publications won’t be a Pareto improvement (what is?), but it seems likely to me to be welfare improving. So I lied. I do think I know how to improve the system. Big shocker, an academic who thinks they can solve a complex system in one blog post.

** That role has been entirely usurped by the NBER and their working paper series. Now that I have tenure, I would literally rather receive an email permitting me to distribute my future work as NBER working papers than an acceptance at a Top 5 journal. It’s not even close, actually.

John Duffy, Experiments, and Crypto

John Duffy and Daniela Puzzello published a paper in 2014 on adopting fiat money. I think of that paper when I hear the ever-more-frequent discussions of cryptocurrencies around me. To research the topic, I went to John Duffy’s website. There I found a May 2021 working paper about adopting new currencies in which they directly reference crypto. Before explaining that interesting new paper, I will first summarize the 2014 paper “Gift Exchange versus Monetary Exchange.”

Continue reading

More computer jobs than San Francisco

The U.S. Bureau of Labor Statistics reports Occupational Employment and Wages data from May 2020 for 15-0000 Computer and Mathematical Occupations (Major Group). The website contains a few interesting insights.

Where are the computer jobs in the United States? When looking just at total numbers of jobs, three major population centers make it into the top 7 areas: NYC, LA, and Chicago. San Francisco is ahead of Chicago, while San Jose is behind Chicago. In terms of the total number of jobs, the D.C. area is ahead of any West Coast city. Is Silicon Valley not as central as we thought?

Here’s a map of the U.S. that isn’t just another iteration of population density.

When metropolitan areas are ranked by employment in computer occupations per thousand jobs, New York City no longer makes the top-10 list. San Jose, California reigns at the top, which seems fitting for Silicon Valley. The 2nd-ranked area will surprise you: Bloomington, IL. A region of Maryland and Washington D.C. shouldn’t surprise anyone. If you aren’t familiar with Alabama, would you expect Huntsville to rank above San Francisco on this list?
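
If you want to reproduce that ranking yourself, the arithmetic is just computer-occupation employment divided by total employment, times 1,000. Here is a minimal R sketch; the file name and column names (area_name, computer_emp, total_emp) are placeholders of mine, so rename them to match whatever you actually download from the BLS site.

```r
# Sketch: rank metro areas by computer jobs per 1,000 total jobs.
# File and column names are placeholders, not the BLS's actual field names.
library(dplyr)

oews <- read.csv("oews_metro_computer_occupations.csv")

oews %>%
  mutate(per_thousand = 1000 * computer_emp / total_emp) %>%
  arrange(desc(per_thousand)) %>%
  select(area_name, computer_emp, per_thousand) %>%
  head(10)
```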

Huntsville, AL is not a large city, but it is a major hub for government-funded high-tech activity. The relatively small number of people who live there have disproportionately selected in to high-tech jobs. As an example, I quickly checked a job website for current listings in Huntsville. Lockheed Martin is hiring a “Computer Systems Architect” based in Huntsville.

Anyone familiar with Silicon Valley already knows that the city of San Francisco was not considered core to “the valley”. Even though computer technology seems antithetical to anything “historical”, there is in fact a Silicon Valley Historical Association. They list the cities of the valley, which does include San Francisco. (corrected an error here)

The last item reported on this BLS webpage is annual mean wage. For that contest, San Francisco does finally seem grouped with the San Jose area. The computer jobs that pay the most are in Silicon Valley or next-door SF. Those middle-of-the-country hotspots like Huntsville do not make the top-10 list for highest pay. However, if cost of living is taken into account, some Huntsville IT workers come out ahead.

What Forex says about cheap travel

The 2007-9 Financial Crisis turned Iceland into a major tourist destination, as a newly cheap currency combined with affordable flights and natural beauty to draw visitors. For anyone with plenty of time and a moderate amount of money, chasing the newly cheap destination seems like a good travel strategy.

Since January 2020, here are the countries where the US dollar has gained the most vs the local currency:

Calculated using https://fx-rate.net/USD/?date_input=2020-01-01
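
(The calculation behind a table like this is just a percentage change in the local-currency-per-dollar quote between the two dates. A minimal R sketch, assuming you paste the quotes into a data frame by hand; the column names are my placeholders, and the rates themselves need to be filled in from the source above.)

```r
# Sketch: USD gain against each currency since a base date.
# rate_* columns are local-currency units per 1 USD; fill in from fx-rate.net.
fx <- data.frame(
  currency  = c("..."),  # currency codes go here
  rate_2020 = c(NA),     # quote on 2020-01-01
  rate_now  = c(NA)      # latest quote
)

fx$usd_gain_pct <- 100 * (fx$rate_now / fx$rate_2020 - 1)
fx[order(-fx$usd_gain_pct), ]
```
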
Continue reading

Laboratories of Democracy in Pandemic

You’ve probably heard the phrase that US states are often “laboratories of democracy.” The phrase comes from a Supreme Court case. It’s well known enough that it has a short Wikipedia page. The basic idea is simple: states can try out different policies. If it works, other states can copy it. If it doesn’t work, it only hurts that state.

The 2020-21 pandemic has provided a number of possibilities for the “states as laboratories” concept. Here are three big ones I can think of (please add more in the comments!):

  1. Do states that impose stricter pandemic policies (“lockdowns”) have better or worse outcomes? This could be about health, the economy, both, or some other outcome.
  2. Do states that end unemployment benefits sooner have quicker labor market recoveries? Or are these not the main drag on the labor market?
  3. Do states that offer incentives for vaccination have higher vaccination rates? And what sort of incentives work best?

These are all good questions, but let me throw some cold water on this whole concept: we might not be able to learn anything from these “experiments”! The primary reason: the treatments aren’t randomly assigned. States choose to implement them.
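
To see why that matters, here is a toy simulation in R (every number below is invented purely for illustration). States facing worse outbreaks are more likely to adopt the strict policy, so a naive comparison of average outcomes across states makes the policy look harmful even though, by construction, it helps.

```r
# Toy illustration of the selection problem -- all numbers invented.
set.seed(1)
n        <- 50                                   # "states"
severity <- rnorm(n)                             # unobserved underlying outbreak severity
lockdown <- rbinom(n, 1, plogis(2 * severity))   # worse outbreaks -> more likely to lock down
deaths   <- 10 + 5 * severity - 2 * lockdown + rnorm(n)  # true effect of lockdown: -2

# Naive comparison of means is badly biased by who chooses the treatment:
mean(deaths[lockdown == 1]) - mean(deaths[lockdown == 0])
# Typically comes out positive, i.e. lockdown states look worse despite the true -2 effect.
```

Random assignment would break the link between severity and policy choice, which is exactly what we don’t get from states choosing their own policies.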

Let’s think through the potential problems with each of these three areas:

Continue reading

Composting Toilets May Help Save the World

A key discovery of nineteenth-century science was that diseases can be transmitted via pathogens in human waste. In regions of high population density, this can lead to epidemics if adequate sanitation facilities are not available. A milestone in epidemiology was the 1854 cholera outbreak in London. A physician named John Snow analyzed the incidence of the disease and concluded that the Broad Street public water pump was the source of infection. Even though he had no explanation in terms of germ theory at that time, he persuaded the authorities to remove the handle of that pump. This stopped the cholera epidemic. The well from which this pump drew had been dug a few feet away from an infected cesspool. A replica of this pump still stands in London:

Continue reading

A Simple Model of Why Everyone is Overpaid Except You

I could include as an alternative title: “Labor Theory of Value for Me, Compensating Differentials for Thee”, but alternative titles are kind of pretentious.

The market for art is, for all but the most famous artists, incredibly thin: transactions for a given artist are sufficiently few and far between that there is rarely a “market price” to simply point to when setting a bid or ask price. If you’ve ever purchased art, you will be familiar with the nagging feeling that artists’ asking prices are always higher than they should be. This could be because we, as failed aesthetic creatures, undervalue art. It could be because non-artists rarely understand the cost of inputs (paints, brushes, metals, wood, plaster, etc…). It could be because we underestimate the total hours of labor that go into a piece of art, so even if we assess the market value of the artist’s time appropriately, we fail to appreciate how many hours a piece represents. Those are all reasonable guesses, but I tend to think that those phenomena are at work in how we estimate the value of just about anything.

Instead, what I think is at work here is that we tend to impute “compensating differentials” into the bid prices we internally calculate. “Compensating differentials” is the term economists use when referencing the additional (reduced) pay individuals receive in the market for unpleasant (fun) jobs.

I suspect that when considering original pieces of art by non-famous artists, we have a tendency to factor a negative compensating differential into our bid, one that boils down to “Lucky you, you get to be an artist. Part of your payment is my letting you be an artist.” Many of us have a tendency to do this when grousing about the wages of professional athletes, tavern musicians, or the lady selling birdhouses at a craft market. If a job looks like it is fun, or at least more fun than your job, then the product of their labor should be relatively cheap. And it’s not just artists, either. We do it when complaining about the price of landscapers (they get to work in the fresh air!) or furniture makers (it’s a hobby!) or really anyone the grumpy old person inside of you wants to scream at: “They’re lucky they get paid at all!”

We rarely think this way when considering our own wages. When it’s our time and energy, we are quick to eschew not only any concept of market pricing, but also any compensating differentials for the fun or high-status aspects of our work. To make matters more hilariously self-serving, while we are uninterested in acknowledging the value of the non-pecuniary delights that we benefit from in our own work, we actively go hunting for any negative aspect that might be unappreciated in our wages. Sure, my job is safe, reliably paid, and absent social stigma, but what about my emotional labor*? Why am I not being paid more given the FOMO, shame, and generalized anxiety I feel every day in this cowardly office job when I should have been an artist? Why am I not being paid more to compensate for doing something I enjoy less than my hobbies?

When we value our own work, our labor has intrinsic value that should be compensated. If the market fails to meet our own valuation (and it almost always does), it is a market failure. It should therefore be with some shame that when we wade into a labor pool as buyers of products absent a well-defined market price, we abandon any sense of intrinsic value, quickly transforming into villainous music producers agitating to pay the naïve and aspirational in everything but money, because they should be grateful that we’re enabling their art. They’re lucky we’re paying them anything at all.

Fortunately for all of us, most of our individual grousing is lost in the wash of countless transactions setting prices for the products of all our labor. If the majority of goods we consumed were subject to the ridiculously self-serving logic behind what we (at least try to) pay artists, we’d all be in the same unemployment line, making plans to apply for the job the person behind you got fired from. Sure, they hated it, but to you it sounds pretty sweet, and definitely nicer than your old crappy job.

* I know a more correct use of “emotional labor” is in reference to the comfort and therapy service industries, but this is a term that is regularly abused and stretched to mean anything the writer wants, which is the appropriate caricature for my purposes here.

** P.S. If you’re curious about what to offer an artist whose work doesn’t yet have an established market, I work with something akin to this simple rule:

1) Guess what you think a fair price is, X

2) Double it. This is your offer price, 2X.

3) Ask the artist for the price, P. If X < P < 2.5X, shake hands and enjoy your art. If P > 2.5X, tell them that is a completely reasonable price but it’s out of your price range. Art is a luxury good; we can’t always afford everything we want.

4) If P < X, just give them X. They are likely young and are undervaluing their own work. Don’t be a cheap bastard.
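
For the programmatically inclined, steps 3 and 4 fit in a few lines of R (step 2’s 2X is only what you would open with if you had to bid first). The function name is mine; this is just a transcription of the rule above.

```r
# The haggling rule as a function: X is your guess at a fair price,
# P is the artist's asking price. Returns what to pay, or NA if you walk away.
art_price <- function(X, P) {
  if (P < X)       return(X)   # step 4: they're undervaluing their work -- pay X anyway
  if (P < 2.5 * X) return(P)   # step 3: a reasonable ask -- shake hands and pay P
  NA                           # step 3: a perfectly fair price, just out of your range
}

art_price(X = 100, P = 80)    # 100 -- don't be a cheap bastard
art_price(X = 100, P = 220)   # 220 -- within 2.5X, enjoy your art
art_price(X = 100, P = 400)   # NA  -- politely decline
```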

Teaching through my R mistakes

I blogged earlier about a new textbook that I am adopting for an analytics course. The first few chapters are primarily an introduction to using the R coding language within RStudio. One of the resources I’m posting for students this week is screen capture videos of me manipulating data in RStudio.

Sometimes I make mistakes, shockingly. I’m a professional, and yet sometimes I still make careless typos in R. I found out that my version of R was outdated, right when I was in the middle of recording a lecture.
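
(For what it’s worth, checking whether your R installation is current only takes a line at the console; these are base-R functions, nothing specific to the course or the textbook.)

```r
R.version.string   # the version you are running, e.g. "R version 4.x.x ..."
getRversion()      # the same information as a version object you can compare against
```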

I could have deleted the footage of my mistakes. I could have re-recorded a clean smooth video in which I run command after command without saying “ok… I got an error”.

Continue reading

Reading Sarah Ruden’s The Gospels

Sarah Ruden is a scholar of ancient literature who has translated classic works such as The Aeneid. Her new book is an English translation of the first four books of the Bible’s New Testament, the Gospels.

If you buy a standard Bible, there is usually only a 2-page preface to a 500+ page book. Ruden’s introduction and glossary take up close to the first 50 pages. I would pay just to read the introduction. Ruden describes what it was like, as a professional translator of classics, to approach the Gospels. A reader who is already familiar with the Bible will learn as much from this introduction as from the translation itself. It’s rare to hear the Gospels discussed simply as books instead of as weapons wielded by all sides of the culture wars. I found it interesting to learn how the Gospels, stylistically, compare to other ancient texts.

Ruden’s enthusiasm for listening to the voices of ancient writers is contagious. She makes it all sound so interesting that anyone, regardless of their previous stance on god (the lowercase g is her idea of what the ancients would write), will want to keep reading. Speaking as someone who has already read the New Testament, I have never been more excited to read the Gospels than I was after finishing Ruden’s introduction. Ruden promises to deliver to modern readers the voices of the ancient writers, with as much accuracy as possible.

Continue reading

Population Predicts Regulation

Texas is one of the most regulated states in the country.

This is one of the surprises that emerged from the State RegData project, which quantifies the number of regulatory restrictions in force in each state. It turns out that a state’s population size, rather than political ideology or anything else, is the best predictor of its regulations.

This is what I found with my coauthors James Broughel and Patrick McLaughlin when we set out to test whether the regulation-population link shown in a previous paper (Mulligan and Shleifer 2005) held up using the better data that is now available. We found that across states, a doubling of population size is associated with a 22 to 33 percent increase in regulation.
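
The “doubling” framing is the usual reading of an elasticity: if you regress log restrictions on log population (one standard way to estimate a relationship like this, not necessarily our exact specification), a coefficient β implies that doubling population is associated with a (2^β − 1) × 100 percent increase in restrictions, so 22 to 33 percent corresponds to β of roughly 0.29 to 0.41. Here is a minimal R sketch with simulated data, not the State RegData numbers themselves:

```r
# Illustration of the elasticity interpretation using simulated data
# (placeholder numbers, not the State RegData estimates).
set.seed(42)
population   <- exp(rnorm(50, mean = 15, sd = 1))                        # fake state populations
restrictions <- exp(10 + 0.35 * log(population) + rnorm(50, sd = 0.3))   # true elasticity 0.35

fit  <- lm(log(restrictions) ~ log(population))
beta <- coef(fit)["log(population)"]
100 * (2^beta - 1)   # percent increase from doubling population; near 27% (= 2^0.35 - 1) by construction
```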

Continue reading