Science keeps getting bigger: more researchers, more funding, and of course more publications. Scientific progress is much harder to measure, but there are good arguments that it has stayed roughly flat over time. If progress is flat while the number of scientists grows, productivity per researcher is plummeting.
There’s been a lively debate about what drives this falling productivity: is it that the easy discoveries got made first, leaving only harder ones for today’s scientists? Or is something else tanking scientific productivity, like the bureaucratic way we organize research today?
A recent paper, “Slowed canonical progress in large fields of science”, suggests that the growth in the number of researchers and publications could itself be part of the problem. Comparing scientific fields over time, its authors find that:
When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon.
What is driving this? They argue:
First, when many papers are published within a short period of time, scholars are forced to resort to heuristics to make continued sense of the field. Rather than encountering and considering intriguing new ideas each on their own merits, cognitively overloaded reviewers and readers process new work only in relationship to existing exemplars. A novel idea that does not fit within extant schemas will be less likely to be published, read, or cited. Faced with this dynamic, authors are pushed to frame their work firmly in relationship to well-known papers, which serve as “intellectual badges” identifying how the new work is to be understood, and discouraged from working on too-novel ideas that cannot be easily related to existing canon. The probabilities of a breakthrough novel idea being produced, published, and widely read all decline, and indeed, the publication of each new paper adds disproportionately to the citations for the already most-cited papers.
Second, if the arrival rate of new ideas is too fast, competition among new ideas may prevent any of the new ideas from becoming known and accepted field wide.
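The first dynamic is essentially preferential attachment, the “rich get richer” process familiar from network science: the more citations a paper already has, the more likely the next paper is to cite it. Here is a minimal toy simulation (my own sketch, not a model from the paper) contrasting a field where new papers cite in proportion to existing citation counts with one where they cite uniformly at random. The function names and parameters are illustrative assumptions.

```python
import random

def simulate(n_papers, cites_per_paper=5, preferential=True, seed=0):
    """Grow a field one paper at a time; each new paper cites
    `cites_per_paper` earlier papers. With preferential=True the
    chance of being cited is proportional to citations already
    received (rich-get-richer); otherwise citing is uniform."""
    rng = random.Random(seed)
    citations = [1, 1]  # two seed papers; start at 1 so every paper is citable
    for _ in range(n_papers - 2):
        if preferential:
            cited = rng.choices(range(len(citations)),
                                weights=citations, k=cites_per_paper)
        else:
            cited = rng.choices(range(len(citations)), k=cites_per_paper)
        for i in cited:
            citations[i] += 1
        citations.append(1)  # the new paper enters with its baseline count
    return citations

def top_share(citations, frac=0.01):
    """Fraction of all citations held by the top `frac` of papers."""
    k = max(1, int(len(citations) * frac))
    ranked = sorted(citations, reverse=True)
    return sum(ranked[:k]) / sum(ranked)

pref = simulate(2000)
flat = simulate(2000, preferential=False)
# Citations concentrate far more heavily under rich-get-richer dynamics:
print(top_share(pref), top_share(flat))
```

The total number of citations is identical in both runs; only their distribution differs. Under preferential attachment the top papers absorb a much larger share, which is the ossification the paper describes: once a canon exists, each new publication reinforces it.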
Supposing they are correct, it’s not totally clear what to do. At the biggest level we could fund fewer researchers in large fields, or push more fields to be like economics, where the quality of each researcher’s publications is valued much more than the quantity. But what can an individual researcher do differently? One idea is “boutique science” or “hipster science”, trying to find the smallest or newest field you could reasonably attach yourself to.
Another idea is that the role of generalists and synthesizers is becoming more valuable, as Tyler Cowen often says and David Epstein applies to science in his book Range. When papers are coming out faster than anyone can read, we need more people to sift through them and explain which few are actually important and which are forgettable or wrong. There are lots of ways to do this: review articles, meta-analyses, replication at scale, and of course blogs. But the junk pile is going to keep growing, so we’ll need new and better ways of finding the hidden gems.