Read Part 1 of my review.

While Ritchie’s book does a laudable job of describing some of the most common pitfalls in scientific research, after these first chapters he starts to lose his way.  He includes an additional chapter on what he calls “hype,” in which he tries to describe the risks that arise when academics rush to publicize exciting, provocative results without thoroughly examining them or subjecting them to peer review.  Unfortunately, much of what he describes here consists of examples of the problems he articulated in the previous chapters.  But in the hype chapter he finally discusses at greater length the bias that many journals and media outlets have toward glitzy positive results.  In the cases he documents, this bias encourages people to fudge results or rush them to press rather than recheck their work and explore other explanations.  Hype can create a rush to judgment, as the public saw in an embarrassing public and political debate among doctors and medical organizations over the possible effectiveness of hydroxychloroquine against COVID.  But even hydroxychloroquine is more of a cautionary tale, reminding researchers to be more precise and to cross-check their work.

But where the book really disappoints is in the proposed solutions to the problems he has rightly described.  Much of this rests on a rather disorganized vision of human nature and academic competition, and a very odd view of incentives.  On the one hand, he understands why journals might not want to publish “boring” replication findings and are instead drawn to more “exciting” positive results.  He also recognizes why scholars might be reluctant to share data and might keep their research agendas under the radar, lest another scientist swoop in and beat them to publication.

But then he lashes out at the increasing productivity of young professors, which he seems to believe is fueling more of the problems he has identified.  Arguing that increasing productivity is a problem rather than a possible solution, however, reveals his underlying preferences.  He writes that, rather than pursuing “publication itself” (emphasis his), scientists should “practice science,” which apparently means more careful work.  One can understand how this might appear odd to a non-academic.  And why, the reader can fairly ask, is increasing research productivity necessarily an indicator of poor research?  Earlier in the book, he acknowledges that advancing computer technology is making it easier to cross-check for statistical errors and confirm results.  One would naturally assume the same holds as processing power makes producing more research less costly.  Instead, Ritchie argues that the psychology finding of a “speed-accuracy trade-off” in brain function proves his point that more productivity is bad.  By this point, Ritchie is starting to look a little like the biased one.

Ritchie then reviews the rise of half-baked journals and cash prizes for productivity, cherry-picking examples to make his case that such developments show the rat race of research is destroying scientific quality.  But any reasonable university can distinguish non-refereed, low-quality journals from good ones, and the issue of cash prizes seems largely centered on China and other authoritarian regimes.  He piles on examples of papers that address very small problems in their disciplines (which he calls “salami slicing”) and of self-citation used to boost one’s h-index (a scholar has an h-index of h if h of his or her papers have each been cited at least h times).  Still, he doesn’t exactly make a strong case that these phenomena are undermining the progress of science.  Nor is it groundbreaking or new that competition occurs in the sciences; in fact, it can be highly productive, as in the competitive pursuit of nuclear weapons, a race that was critical to win.

Ritchie is also concerned about the private-sector biases of drug companies, but he says virtually nothing about the biases and dangers presented by the single biggest funder of scientific research: the government.  According to Science, the US government still funds almost half of all scientific research in America.  Why focus on problems with the private sector when the National Science Foundation is still the 800-pound gorilla in the room?

Finally, one more complaint, which I think helps explain the “bias” chapter’s conflation of several different types of bias: Ritchie lumps together the empirical social sciences and the hard sciences.  Many of his concerns will ring true to economists and empirical political scientists.  However, there is a critical distinction between a discipline like physics and one like psychology.  Psychology experiments are run on human subjects, and as any social scientist will tell you, figuring out how humans tick is very difficult, even with advanced econometrics and good research design.  The problems the physical and social sciences face are somewhat similar, and ultimately what Ritchie has given us is a useful reminder that all research is done by imperfect humans.  He is right to argue for care, precision, an openness to publishing null results, and concern about findings that can’t be confirmed.  But you can’t remove the human element, and thanks to our ingenuity, creativity, and intelligence, we have done a great deal of good work unlocking how the various parts of the world function.  Ritchie has given us a higher bar to strive for, but he might want to recognize that disparaging increasing productivity, dismissing the possible role of incentives, and ignoring the promise of technological progress show a bias in his thinking as well.