Living means making decisions with imperfect information. But Covid provides many examples of how people and institutions are often still bad at this. A few common errors:
- Imperfect evidence = perfect evidence. “Studies show Aspirin prevents Covid”. OK, were the studies any good? Did any other studies find otherwise?
- Imperfect evidence = “no evidence” or “evidence against”. In early 2020, major institutions like the WHO said “masks don’t work” when they meant “there are no large randomized controlled trials on the effectiveness of masks”.
- Imperfect evidence = don’t do it until you’re sure. Inaction is a choice, and often a bad one. If the costs of action are low and the potential benefits of action high, you might want to do it anyway. Think masks in 2020 when the evidence for them was mediocre, or perhaps Vitamin D now.
- Imperfect evidence = do it, we have to do something. Even in a pandemic, it is possible to overreact if the costs are high enough and/or the evidence of benefits weak enough (possibly lockdowns, definitely taking up smoking).
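The last two errors above are really the same expected-value calculation done wrong in opposite directions. A minimal sketch of the idea, with entirely made-up illustrative numbers (the probabilities, benefits, and costs below are hypothetical, not real estimates):

```python
def expected_value_of_action(p_benefit, benefit, cost):
    """Crude expected value of an intervention: the probability it
    works times the benefit if it does, minus the cost of doing it.
    All units are arbitrary and the numbers are illustrative only."""
    return p_benefit * benefit - cost

# Masks in 2020: mediocre evidence (say a 40% chance they help a lot),
# but very cheap to wear.
masks = expected_value_of_action(p_benefit=0.4, benefit=10, cost=1)

# Taking up smoking to "prevent Covid": weak evidence, enormous cost.
smoking = expected_value_of_action(p_benefit=0.1, benefit=10, cost=50)

print(masks)    # positive: worth doing despite imperfect evidence
print(smoking)  # negative: "doing something" can still be a mistake
```

The point is not the specific numbers but the structure: weak evidence plus low cost can still justify action, and weak evidence plus high cost rarely does.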
Any intro microeconomics class will explain the importance of weighing both costs and benefits. But how do we know what the costs and benefits are? For everyday purchases they are usually obvious, but in other situations like medical treatments and public policies they aren’t, particularly the benefits. We have to estimate the benefits using evidence of varying quality. This creates more dimensions of tradeoffs: do you choose something with good evidence for its benefits, but high cost? Or something with worse evidence but lower costs? Graphing this properly should take at least 3 dimensions, but to keep things simple let’s assume we know what the costs are, and combine benefits and evidence into a single axis called “good evidence of substantial benefit”. This yields a graph like:
Applied to Covid strategies, this yields a graph something like this:
Judging the strength of the evidence for various strategies is inherently difficult, and might go beyond simply evaluating the strength of published research. But when evaluating empirical studies on Covid, my general outlook on the evidence is:
Of course, details matter, theory matters, the number of studies and how mixed their results are matter, potential fraud and bias matter, and there’s a lot it makes sense to do without seeing an academic study on it.
Dear reader, perhaps this is all obvious to you, and indeed the idea of adjusting your evidence threshold based on the cost of an intervention goes back at least to the beginnings of modern statistics in deciding how to brew Guinness. But common sense isn’t always so common, and this is my attempt to summarize it in a few pictures.