Biologist Paul Ehrlich’s recent appearance on 60 Minutes drew an immediate response: a deluge of denunciations of the decades he has spent peddling baseless scare stories. Ehrlich responded, tweeting:

If I’m always wrong so is science, since my work is always peer-reviewed, including the POPULATION BOMB and I’ve gotten virtually every scientific honor.

Ehrlich’s invocation of “peer review” is notable: he conflates the process with the practice of science itself.

But Ehrlich is wrong. As Adam Mastroianni, a postdoctoral researcher at Columbia Business School, noted in a recent article, peer review—where “we have someone check every paper and reject the ones that don’t pass muster”—is only about 60 years old:

From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries.

(Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published it in a different journal instead.)

Peer review’s supposed benefit is “catch[ing] bad research and prevent[ing] it from being published.” But, Mastroianni notes:

It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In [one] study reviewers caught 30% of the major flaws, in [another] they caught 25%, and in [a third] they caught 29%. These were critical issues, like “the paper claims to be a randomized controlled trial but it isn’t” and “when you look at the graphs, it’s pretty clear there’s no effect” and “the authors draw conclusions that are totally unsupported by the data.” Reviewers mostly didn’t notice.

The Population Bomb belongs on any list of peer-reviewed junk science.

And there are costs to the process:

By one estimate, scientists collectively spend 15,000 years reviewing papers every year. It can take months or years for a paper to wind its way through the review system…And universities fork over millions for access to peer-reviewed journals, even though much of the research is taxpayer-funded, and none of that money goes to the authors or the reviewers.

Huge interventions should have huge effects…if peer review improved science, that should be pretty obvious, and we should be pretty upset and embarrassed if it didn’t.

It didn’t. In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn’t seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don’t replicate, and most of them may be straight-up false. When you ask scientists to rate 20th century discoveries in physics, medicine, and chemistry that won Nobel Prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward. In fact, you can’t even ask them to rate the Nobel Prize-winning discoveries from the 1990s and 2000s because there aren’t enough of them.

A recent article in Nature is titled “‘Disruptive’ science has declined — and no one knows why,” but Mastroianni may be giving us at least some of the answer:

The invention of peer review may have even encouraged bad research. If you try to publish a paper showing that, say, watching puppy videos makes people donate more to charity, and Reviewer 2 says “I will only be impressed if this works for cat videos as well,” you are under extreme pressure to make a cat video study work. Maybe you fudge the numbers a bit, or toss out a few outliers, or test a bunch of cat videos until you find one that works and then you never mention the ones that didn’t. 🎶 Do a little fraud // get a paper published // get down tonight 🎶

Researchers are as responsive to incentives as anyone. The peer review process incentivizes “gaming”: satisfying reviewers and running up publication counts rather than breaking new ground. The benefits of peer review, it seems, do not outweigh its costs. It ought to be neither a straitjacket for new research nor a shield for charlatans like Ehrlich.