The Economist has a long, detailed, and readable piece about the difficulties of inferring anything from the published findings of biomedical science. There are all sorts of problems that fall short of scientific fraud, including the biases caused by industry funding of biomedical science, the biases of unblinded raters who see what they want to see, and the biases of journal editors toward publishing only “positive” findings. (I am particularly enamored with this graphic, which shows the fundamental problem of inference.) It is rare for researchers even to attempt to replicate prior findings, but when replications are attempted, they often fail.
The Economist piece can be read as something close to an outright assault on empiricism, at least as we now know it. In practical terms, it is prudent for physicians, patients, and payors to be wary of the findings presented in even the top journals.
One of the beauties of our scientific system is that it is wildly decentralized. Scientists (and their funders) can test any hypothesis that they find interesting, and they can use whatever methods they prefer. Likewise, journal editors can publish whatever they want. While such academic and market freedom is attractive, it results in quite a hodgepodge of science, with replication studies and publication of null results being afterthoughts. The NIH and NSF have in the past functioned to set an agenda and demand rigor, but as their funding wanes, the chaos waxes.
The problems are scientific, but any solution will be institutional (and thus legal). I have argued for a partial solution to industry bias in my short article, “The Money Blind: How to Stop Industry Influence in Biomedical Science Without Violating the First Amendment.” Independent scientific testing could be conducted by a neutral intermediary, which would pool funds. In a similar vein, there is also a new project of the Science Exchange, called “The Reproducibility Initiative.” This program offers to serve as the independent scientific agency that attempts to validate known results, but there is not yet a large-scale funding model in place. If biomedical journal editors would at least put disclosures in their structured abstracts (an intervention we have tested), that may over the long run nudge industry to use such gold-standard independent testing when it has something that is truly provable. And, at least in the domain of products regulated by the FDA, the agency should consider using its current statutory authority to push companies toward independent, robust, and replicated science.