Fabio Ghironi, whom you should be following on Twitter already, threaded the #econtwitter needle the other day, managing to write about the growing problems within academic economic publishing without falling victim to the sorts of whining and nihilism that discussions of publishing experiences often degenerate into. Below I’ve included a sample. Do go read the whole thing.
I don’t want to adjudicate the merits and flaws of the economic journal system. I have no idea how it would fare in a benefit-cost analysis or how to improve it, and I’m deeply skeptical of anything that has a whiff of “easy fix” for what is a very complex system of scientific incentives, social benefit, and academic sociology.
Instead, I’d like to discuss how I think we got here. Here are a few stylized facts about how research in economics has changed over the last 50 years, none of which I expect to be controversial:
1. There are a lot more people writing academic journal articles.
2. There is a lot more well-executed economic research.
3. The teams of co-authors on papers/projects have become much larger.
4. The number of journals whose prestige is commensurate with a tenured position at an elite school has grown more slowly than the total faculty employed by elite schools.
5. Economics research has become more expensive and labor intensive.
What is immediately obvious from 1-4 is the journal space squeeze, resulting in journals with vanishingly small acceptance rates. The American Economic Journal: Microeconomics (one of the very top journals that isn’t part of the holy Top-5, hallowed be thy names) managed to go an entire year without accepting a paper! Their editorial team, as any Murphy’s Law aficionado would have predicted, interpreted this as evidence they should publish fewer papers.
[Update 6/2/21: A reader has pointed out that AEJ: Micro has over the past year managed a more than respectable turnaround time on submissions, and eventually accepted 33 papers in 2019 and 20 in 2020, yielding acceptance rates of 5 to 9%. Editors’ Report here.]
One of the things that economics has become, and maybe always has been, obsessed with is “superstars”, and not just those who get medals. Within every subfield there are a handful of current researchers who are known to everyone else, whose papers are always top of the list in the best working paper series, who tour the country tirelessly promoting their latest papers. And they are often promoting multiple papers. How is it that they find the time to do so much research?
Well, first and foremost, they are incredibly conscientious, with work ethics bordering on obsessive. But a not distant second is the change in the nature of their jobs. They are not just working at a chalkboard by themselves or analyzing the latest batch of data. They are managing research teams. They are applying for grants that support grad students and post-docs. They are meeting for 3 hours each day with different teams of scholars, some at different institutions. They are coming up with their own ideas and refining the ideas of others, they are guiding the research of apprentices while also collaborating with equally experienced peers. They are the CEOs of miniature research empires.
Let’s assume for a second that the number of superstars in the field has remained constant (it’s grown, but let’s keep it simple). In 1950 the top 5 journals probably could have published every single full research paper written by superstars and still had room to spare. Nowadays I’m not sure the top 5 journals could handle the research output in a given year just from MIT. I don’t think the top 10 journals could handle all of the research from the Boston metropolitan area.
Let’s visit the other side of the fence now. If you are a co-editor at one of the 5 elite journals in economics, you are allotted roughly 13 acceptances per year. These are fixed. For these slots you review roughly 200 papers. Let’s say 75 of those papers are trash and 50 are good but below the bar. These you desk reject. Of the remaining 75, another 25 are a bad match for the aesthetic or substantive targets laid out by the editor-in-chief(s). Another 25 are good, but the reviewers are, upon closer inspection, able to identify real problems that will undermine the impact of the paper, ruling it out for an elite journal such as yours.
That leaves you with 25 papers for 13 slots. That might not sound like a problem, but think about the process of elimination you just went through. These are really good papers that make important contributions to the field, and you need to reject half of them. The discipline will not accept you flipping a coin. You need reasons to reject some of these papers. Well, let’s look at the co-authors. You don’t want to be a jerk, but you’re desperate, and you don’t want to be remembered in your hallway at work as the person who rejected that massively influential paper that reinvented the field. You’d feel bad, but 20 of the papers have at least 1 superstar on them. Sorry, but status is a heuristic for a reason, so the 5 papers without one go first. You still need to reject 7 more.
Let’s go through those referee reports again. Was there anything questionable? Any possible source of bias speculatively hypothesized by a person who spent two days thinking about the paper that the people who worked on it for three years never thought of? Are they relying on econometrics that someone has recently posited might sometimes fail to calculate error terms optimally? Is it a theory without an application? Is it an application without a theory? Are the coefficients too small to be interesting or too large to be believable?
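The funnel above can be summarized in a few lines of arithmetic. A minimal sketch; all counts are the hypothetical ones from the preceding paragraphs, not real journal statistics:

```python
# Back-of-the-envelope model of the hypothetical editor's funnel above.
submissions = 200        # papers reviewed per co-editor per year
strong_survivors = 25    # papers left after desk rejects, bad fits, and real flaws
slots = 13               # fixed acceptances allotted per year

acceptance_rate = slots / submissions          # headline acceptance rate
strong_rate = strong_survivors / submissions   # share an honest editor calls "significant"
still_to_reject = strong_survivors - slots     # strong papers still needing a rejection reason

print(acceptance_rate)   # 0.065
print(strong_rate)       # 0.125
print(still_to_reject)   # 12
```

The gap between the last two numbers is the whole problem: 12 papers the editor genuinely believes in must be rejected for reasons invented after the fact.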
Now, let’s remember the single most important thing: everyone knows this is happening. This is not a secret process and academic researchers have responded accordingly. Superstars have responded by managing bigger teams, producing even more research, adding more and more layers of robustness checks, alternative specification designs, even entirely different research designs serving as papers within papers that put Hamlet to shame. At the same time comparably excellent, but perhaps slightly less famous, authors with outstanding research records are thrilled to work with a star, knowing that it will increase their odds at a top journal. When designing the research they know what is in vogue, what is falling out of favor, and how to shape their papers to fit the ambitions of current editors. Research designs are defensive from the start, anticipating as many angles of attack as possible. When the research is completed, it will go on the presentation circuit for a year or two, subject to the slings and arrows of the pool of economists from which your future referees will be drawn. It is from these comments that your appendix will grow. And grow. And grow. You must anticipate every attack, lest your paper’s shortcomings make the editor’s job easier.
Now try to imagine what the research lives of everyone start to look like. For the bulk of good researchers, this means working on 3-6 projects at all times, with each of those projects stretching out over 3 to 5 years. Even if you land a 2-year post-doc, submitting your tenure packet in the fall of your 6th year means you have 7 total years to get multiple papers through a process that accepts 3-5% of submissions and, more importantly, less than half of all the objectively outstanding research. At the same time, superstars are stretching themselves impossibly thin, expected to meet impossible expectations and get papers accepted at journals with impossible standards knowing full well the careers of their co-authors depend on those acceptances. A faculty appointment should come with a free clonazepam prescription.
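To get a feel for those odds, here is a toy calculation. It treats each submission as an independent draw with a fixed acceptance probability, which is clearly false (status, topic, and defensive design all matter), and every number in it is an assumption chosen for illustration, not data:

```python
from math import comb

# Toy tenure-clock odds: all parameters are illustrative assumptions.
p = 0.04     # assumed per-submission acceptance rate at an elite journal
n = 14       # assumed total submissions over a 7-year clock (2 per year)
need = 3     # assumed papers required for the tenure packet

# P(at least `need` acceptances) under an independent-binomial model.
prob_at_least = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                    for k in range(need, n + 1))
print(f"P(at least {need} acceptances) = {prob_at_least:.3f}")
```

Even under these generous independence assumptions, the probability of clearing a three-paper bar is in the low single digits, which is why the process runs on much more than submission volume.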
To sum up: academic economics has more star researchers, managing larger teams producing more high-quality papers than there is space in the elite journals which have been forced to invent impossible acceptance criteria to produce the singular output that journal editors absolutely cannot shirk: rejections.
And if you think the easy answer is to just increase the size of journals, you are missing the entire function of journals. Journals no longer function as disseminators of economic science.** Rather, they are criteria for tenure and promotion. There are a finite number of faculty slots and schools need reasons to keep/dismiss/promote/retain/recruit. If the number of elite journal articles published were to change, the principal effect would be to shift the threshold for success or failure in tenure and promotion.
Of course, increasing the number of publication slots in historically high prestige journals might still be a good thing. Going back to our editor’s dilemma, if they could accept the entire 12.5% of papers that our editor-under-truth-serum genuinely believes are significant contributions, then everyone’s CV would more accurately reflect the subjectively assessed merit of their work, and less their luck and ability to tirelessly play a zero-sum game. Sure, the bar separating high and low prestige would be inflated upwards, but it would nonetheless increase the signal-to-noise ratio on everyone’s CV.
This, of course, would lower the value of every CV that already includes a Top-5 publication, but such is the struggle of every YIMBY vs NIMBY movement. Increasing the supply of elite journal publications won’t be a Pareto improvement (what is?), but it seems likely to me to be welfare improving. So I lied. I do think I know how to improve the system. Big shocker, an academic who thinks they can solve a complex system in one blog post.
** That role has been entirely usurped by the NBER and their working paper series. Now that I have tenure, I would literally rather receive an email permitting me to distribute my future work as NBER working papers than an acceptance at a Top 5 journal. It’s not even close, actually.
Too true: “…You must anticipate every attack, lest your paper’s shortcomings make the editor’s job easier…”
This is a bad post. While the general point about increasing the number of articles published in top journals has merit, you start your article with outdated information about the AEJ: Micro and then present a distorted and inaccurate picture of the editorial process. Updated information about the editorial process at the AEJ: Micro is easily available here: https://pubs.aeaweb.org/doi/pdfplus/10.1257/pandp.111.725 In 2020, 100% of submissions received a response within 6 months; 89% within 3 months. 20 articles were accepted and 37 given R&Rs.
Moreover, the editorial process you describe just doesn’t reflect reality. I won’t even address the levels of cynicism and cronyism you attribute to editors. The bigger problem is that you describe a static problem where editors face a set of N papers all at once and decide between them. The truth is a dynamic problem, to which the solution is a “reservation quality.” The reputation of the author(s) could affect perceived quality, but there is no explicit head-to-head comparison like what you describe. The dynamic problem can induce risk averse editors to over-reject marginal papers, in order to preserve the option of accepting a future, better submission.
I agree that the increase in the number of submissions and the reduction in constraints to publishing justify an increase in the number of articles published in our leading journals. However, I think you do a disservice to the profession by describing the process and its shortcomings so inaccurately.
It was never the intent of the post to level accusations of cronyism, or even cynicism, at editors. Quite the contrary, it was to sympathize with an incredibly difficult task with insufficient resources, one that demands heuristics be employed in the resolution of submissions. It certainly wasn’t an attack on AEJ: Micro, and I’m happy to update the post with links to the most recent editorial report. Shifting from a static to a dynamic model would increase the predictive power of the model, but I don’t think it undermines the broader point of the post, which is that the ratio of research content to high-prestige publication space has produced a lower signal-to-noise ratio in the publication process.
To further increase the fidelity (and complexity) of the model, we can assume that a rational editor observes the quality of each paper, but with an error term. I think what has happened at the very best journals, such as AEJ: Micro, is that the error term is sufficiently large that it swamps the differences across large swaths of the pool of candidate papers.
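To see how a large error term can swamp real quality differences, here is a minimal simulation of that noisy-observation model. Every parameter (pool size, noise level, slot count) is an illustrative assumption, not an estimate from any journal’s data:

```python
import random

# Minimal simulation of the noisy-observation model: the editor ranks
# papers by observed quality = true quality + noise, then fills the slots.
random.seed(0)
N_PAPERS, SLOTS, NOISE_SD = 200, 13, 1.0

true_quality = [random.gauss(0, 1) for _ in range(N_PAPERS)]
observed = [q + random.gauss(0, NOISE_SD) for q in true_quality]

# Accept the papers with the highest *observed* quality.
accepted = sorted(range(N_PAPERS), key=lambda i: observed[i], reverse=True)[:SLOTS]
truly_best = sorted(range(N_PAPERS), key=lambda i: true_quality[i], reverse=True)[:SLOTS]

overlap = len(set(accepted) & set(truly_best))
print(f"{overlap}/{SLOTS} accepted papers are among the truly best {SLOTS}")
```

Re-running this with larger values of NOISE_SD tends to shrink the overlap: once the noise is comparable to the spread of true quality, which papers land in the slots is substantially a matter of luck.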
This back-and-forth reads like a referee report and its response. 😆
Your list of “stylized facts” is also true of most STEM fields. (Certainly physics and biology.) The size of academic research is *vastly* larger than it used to be, and many of academia’s problems are a result of this — our structures don’t scale well.
Today’s academic research in economics is more sophisticated in terms of technique than the work of yesteryear, but it is equally forgettable. That’s the way of the world. After your career is finished, you will be lucky to be remembered even for one contribution, just as a painter is lucky to have even one painting hanging in a museum somewhere. To me, a nonacademic, the low impact of almost all academic research makes the case for a different approach to weighing it: universities should base evaluations of the research of candidates for tenure and promotion on no more than three works or 300 pages that the candidate chooses, whichever is less. The tenure committee should actually read those works and exercise independent judgment rather than accepting that something must be good because a “top” journal published it.