In a recent episode of EconTalk, Russ Roberts interviewed Stanford University’s John Ioannidis to discuss his 2017 publication “The Power of Bias in Economics Research.” Ioannidis and his co-authors found that the vast majority of estimates of economic parameters were from studies that were “underpowered,” and this, in turn, meant that the published estimates of the magnitude of the effects were often biased upward.

Unfortunately, many economists (including me) have little training in the concept of “statistical power” and might be unable to grasp the significance of Ioannidis’ discussion. In this article, I give a primer on statistical power and bias that will help the reader appreciate Ioannidis et al.’s shocking results: After reviewing meta-analyses of more than 6,700 empirical studies, they concluded that most studies, by their very design, would often fail to detect the economic relationship under study. Perhaps worse, these “underpowered” studies also provided estimates of the economic parameters that were highly inflated, typically by 100%, but in one third of the cases by 300% or more.

Economists should familiarize themselves with the concept of statistical power to better appreciate the possible pitfalls of existing empirical work and to produce more-accurate research in the future.

This is from Robert P. Murphy, “Economists Should Be More Careful With Their Statistics,” Econlib Feature Article, April 2, 2018.
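The mechanism Murphy describes is easy to see in a short simulation. The sketch below (a minimal illustration in Python; the true effect of 0.2 standard deviations and the sample size of 50 are numbers chosen for illustration, not figures from the paper) runs many small studies of the same modest effect. Only a minority of the studies reach statistical significance, meaning the design is underpowered, and the estimates that do clear the significance bar overshoot the true effect substantially on average:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative parameters (not from the paper):
true_effect = 0.2        # true effect, in standard-deviation units
n = 50                   # observations per study -- deliberately small
n_studies = 100_000      # number of simulated studies
crit = 1.96              # two-sided 5% significance threshold

# Each study estimates the mean of n draws from N(true_effect, 1).
estimates = rng.normal(true_effect, 1.0, size=(n_studies, n)).mean(axis=1)
std_err = 1.0 / np.sqrt(n)           # standard error (variance known to be 1)
significant = np.abs(estimates) / std_err > crit

print(f"Power (share of studies finding a significant effect): "
      f"{significant.mean():.0%}")                       # roughly 29%
print(f"True effect: {true_effect}")
print(f"Mean estimate among significant studies: "
      f"{estimates[significant].mean():.2f}")            # roughly 0.37
print(f"Inflation over the true effect: "
      f"{estimates[significant].mean() / true_effect - 1:.0%}")  # roughly 80%
```

If journals tend to publish the statistically significant results, the published record then averages to something like the inflated figure rather than the truth, which is the upward bias the excerpt describes.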

Another highlight:

Furthermore, the underpowered studies also implied very large biases in estimates of the magnitude of economic parameters. For example, of 39 separate estimates of the monetary value of a “statistical life” (a concept used in cost/benefit analyses of regulations), 29 (74%) of the estimates were underpowered. For the 10 studies that had adequate power, the estimate of the value of a statistical life was $1.47 million, but the 39 studies collectively gave a mean estimate of $9.5 million. After our hypothetical examples of coin-flipping researchers, this real-world example leads one to suspect that the figure of $9.5 million is likely to be vastly exaggerated.

Until reading this, I had had no idea that the $5 to $9 million number I had been using in cost/benefit analysis was likely way too high.
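The practical stakes are easy to check with arithmetic. In the toy cost/benefit test below, the regulation’s $400 million cost and 60 expected lives saved are invented for illustration; the two VSL figures are the ones from Murphy’s passage. The same hypothetical regulation passes under the pooled estimate and fails under the estimate from the adequately powered studies:

```python
# Toy cost/benefit test: the regulation's cost and lives saved are invented;
# the two VSL figures come from the passage quoted above.
cost = 400e6          # hypothetical regulation costing $400 million
lives_saved = 60      # hypothetical expected lives saved

for label, vsl in [("mean of all 39 studies", 9.5e6),
                   ("adequately powered studies only", 1.47e6)]:
    benefit = lives_saved * vsl
    verdict = "passes" if benefit > cost else "fails"
    print(f"VSL ${vsl / 1e6:.2f}M ({label}): "
          f"benefit ${benefit / 1e6:.0f}M vs cost ${cost / 1e6:.0f}M -> {verdict}")
```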

Read the whole thing.