Last Friday a group of researchers mostly from Boston University posted a paper which revealed they had created a new chimeric coronavirus and used it to infect mice.
We generated chimeric recombinant SARS-CoV-2 encoding the S gene of Omicron in the backbone of an ancestral SARS-CoV-2 isolate and compared this virus with the naturally circulating Omicron variant. The Omicron S-bearing virus robustly escapes vaccine-induced humoral immunity, mainly due to mutations in the receptor-binding motif (RBM), yet unlike naturally occurring Omicron, efficiently replicates in cell lines and primary-like distal lung cells. In K18-hACE2 mice, while Omicron causes mild, non-fatal infection, the Omicron S-carrying virus inflicts severe disease with a mortality rate of 80%.
Many people who heard about this expressed concern that the risks of creating more contagious and/or deadlier versions of Covid that could escape from a lab outweigh any potential benefits of what we could learn from this research.
Several researchers have responded to these concerns with variants of “trust virologists to weigh the risks here, they know more than you.”
Here’s the thing: the virologists do know the risks better than the public or potential regulators, but they also have different incentives. What I want to point out today is that virology isn’t special; this is true of just about every field. A nuclear engineer knows much more about what’s happening at their plant than voters do, or than distant bureaucrats at the Nuclear Regulatory Commission. Should we leave it to the engineers on site to decide how much risk to take? Should federal regulators leave it to the financial experts at Bear Stearns and AIG to decide how much risk they can take?
To some extent I actually sympathize with these critiques; industry practitioners really do tend to have the best information, and voters often push regulatory agencies to be insanely risk-averse. With any profession this information problem is a reason to regulate less than you otherwise would, and/or pay to hire expert regulators.
But externalities are real: the practitioners who have the best information use it to promote their own interests, which tend to differ from the interests of the public. In finance this means moral hazard at best and fraud at worst (who are you to say Bernie Madoff is a fraud? You know more about finance than him?). In medicine it means doctors who get paid more for doing more; they gave the guy who invented lobotomies a Nobel Prize in Medicine. In research that involves creating new viruses, researchers get the private benefits of prestige publications for themselves, but the increased pandemic risk is shared with the whole world. In this case it’s not just outsiders who are concerned; some subject-matter experts are too (and not just “usual suspects” Alina Chan and Richard Ebright; see also Marc Lipsitch).
The main current check on research like this is supposed to be institutional review boards; for biosafety specifically, the relevant body is the Institutional Biosafety Committee (IBC). The chimeric Covid paper notes “All procedures were performed in a biosafety level 3 (BSL3) facility at the National Emerging Infectious Diseases Laboratories of the Boston University using biosafety protocols approved by the institutional biosafety committee (IBC)”. But there are many problems with this approach. The committee is run by employees of the same institution as the researchers, the institution that also claims a disproportionate share of the benefits of the research.
These committees are also incredibly opaque. The paper says it was approved by Boston University’s institutional biosafety committee, but these committees don’t maintain public lists of approved projects; I e-mailed them Sunday to ask whether they actually approved this project, and they have yet to respond. There is also no public list of the members of these committees, although in BU’s case you can get a good idea of who they are by reading the meeting minutes. This chimeric Covid proposal appears to have been reviewed as the second proposal at their January 2022 meeting, reviewed by Robert Davey and Shannon Benjamin and approved by a 16-0 vote of the committee. At the January meeting the committee unanimously approved all 6 projects it considered, after hearing 6 reports of lab workers at BU being exposed to lab pathogens in the previous month, e.g.:
MD/PhD student reported experiencing low grade temperatures and other symptoms after he accidentally injured his thumb percutaneously on 12-6-21 while cleaning forceps that he had used to remove infected lungs from mice injected with NL63 virus
IRBs are supposed to protect research subjects from harm, but in practice they largely serve to protect their institutions from lawsuits and PR disasters (part of why they’re often too strict). The fact that this project did get institutional approval provides one silver lining here: if this chimeric Covid ever did escape and cause an outbreak, those infected by it could potentially sue for damages not only the individual researchers but also Boston University and its $3.4 billion endowment. Being able to internalize externalities in this way is one of many good reasons to be testing those infected with Covid to see what variant they have.
I think we should at least consider stronger national regulations on research like this, rather than leaving each decision to local institutional review boards (ask any researcher how much they trust IRBs). At the very least we should stop subsidizing it; NIH claims it doesn’t fund “gain of function” research like this, but the researchers who made a new version of Covid conclude their paper:
This work was supported by Boston University startup funds (to MS and FD), National Institutes of Health, NIAID grants R01 AI159945 (to SB and MS) and R37 AI087846 (to MUG), NIH SIG grants S10- OD026983 and SS10-OD030269 (to NAC)