Having watched the doctors balance the morphine dosage for my mother as she died of colon cancer, I am, for obvious reasons, very enthusiastic about the prospect of improved cancer therapies. Now comes a report (H/T Vox Populi) from Amgen that of 53 pioneering cancer studies, 47 could not be replicated.
Now, to be fair, these are preclinical studies, not clinical trials for the FDA (which generally requires at least two adequate, well-controlled trials before a drug can be approved), but it is still sobering to read. If one is doing statistically based work, 95% confidence ought to be required, and at that standard chance alone should sink roughly one result in twenty, not 47 of 53. So if a large portion of these studies are not reproducible, it means that (a) scientists are choosing a standard of statistical significance so low as to be meaningless, and/or (b) scientists are putting their fingers on the scales. Either one ought to be very sobering to us.
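To see how (a) can happen even when everyone nominally uses p < 0.05, consider a back-of-the-envelope simulation. The numbers below (how many candidate effects are real, the sample sizes, the effect size) are my own invented assumptions, not anything from the Amgen report, though they are not crazy for preclinical work: screen many candidates with small samples, keep the "significant" hits, and then try to replicate them.

```python
# Hypothetical numbers, purely for illustration: a p < 0.05 screen over many
# mostly-null candidates, with small samples, followed by one replication attempt.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_candidates = 1000      # candidate effects screened (assumed)
true_effect_frac = 0.02  # assume only 2% of candidates are real effects
n_per_group = 10         # small preclinical-scale samples (assumed)
effect_size = 0.5        # modest true effect, in SD units (assumed)

def significant(has_effect):
    """Run one two-group experiment; return True if it clears p < 0.05."""
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(effect_size if has_effect else 0.0, 1.0, n_per_group)
    return stats.ttest_ind(a, b).pvalue < 0.05

is_real = rng.random(n_candidates) < true_effect_frac
hits = [i for i in range(n_candidates) if significant(is_real[i])]
replicated = [i for i in hits if significant(is_real[i])]
print(f"'significant' hits: {len(hits)}, replicated on retry: {len(replicated)}")
```

Under these assumptions you get on the order of fifty "discoveries," of which only a handful replicate. The nominal p < 0.05 never protected you: most of the hits were false positives from the start, and the real ones were tested with too little power to survive a retry.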
3 comments:
I could write a LOT about this (and probably should, at some point). It is indeed troubling.
It isn't statistical significance; I can't think of a single biomedical journal that publishes anything with a less rigorous cutoff for significance than p < 0.05. Certainly not any journal the folks at Amgen are paying attention to when looking for drug targets. The only time I've ever seen anyone trying to claim anything with less robust results would be in some grant proposals where a small pilot study has a pretty promising trend towards significance (like p = 0.1) with a small sample size. And even then, you're pushing your luck.
There's not one thing that accounts for this, but the systematic bias against the publication of negative data, at every level of the process, looms rather large in my mind.
The grad student or postdoc concludes an experiment "didn't work" and doesn't even mention it to their adviser. The next time, they get a positive result, and that one gets passed up the line. Or the adviser crushes the negative result, because grants don't get funded on negative data. Or the journal doesn't publish the negative result, because negative results don't get cited.
Unless...you're Amgen and you've got a $2bn annual in-house research budget and can perform a study of this scope that is too devastating to ignore.
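To put a rough number on what the file drawer does (again, the effect size and sample size below are assumptions for illustration, not measurements from any study): if only the runs that clear p < 0.05 get written up, the published effect sizes are systematically inflated, and a replication attempt sized to the published effect is set up to fail.

```python
# Illustrative sketch of the "winner's curse" under publication bias: only the
# significant runs get "published", so published effect sizes run high.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n = 0.3, 15          # assumed small real effect, small samples
published = []
for _ in range(5000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    if stats.ttest_ind(b, a).pvalue < 0.05:   # the file drawer keeps the rest
        published.append(b.mean() - a.mean())
print(f"true effect: {true_effect:.2f}")
print(f"mean published effect: {np.mean(published):.2f}  "
      f"({len(published)} of 5000 runs 'published')")
```

In runs like this, the average published effect typically comes out at two to three times the true one.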
Some journals are making an attempt to address this. One that I review for and have published in a few times has introduced a "negative results" section. They haven't actually published that many (I want to say maybe 8 papers in the first year, in a journal that publishes about 20 total a month). I don't know if that is for lack of submissions, or because what they are getting is garbage. I do know that the statistical rigor required for these papers is actually higher, because it is pretty easy to call a result negative when a study is simply under-powered. So the incentives to pursue publishing such a paper are still pretty minimal.
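For a sense of why "under-powered" matters so much here, a standard power calculation (the d = 0.5 effect size is an assumed, moderate effect, chosen just for illustration) shows how often a real effect yields a "negative" result at small n:

```python
# Power of a two-sample t-test at alpha = 0.05 for an assumed moderate effect.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (10, 30, 64, 100):
    power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n = {n:3d} per group -> power = {power:.2f}")
```

At 10 per group the power is under 20%, meaning a "negative" result is the expected outcome four times out of five even when the effect is real. That is exactly why a stricter statistical bar for negative-results papers makes sense.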
I do think that if there were less "do or die" pressure on researchers to produce particular results, there would be less BS being put out there. I'm not sure how to achieve that, short of guaranteeing a minimum funding level for everyone, which (while a nice thought) would be a logistical nightmare.
That said, I'm not sure such a thing would necessarily be much more expensive, if 90% of the billions the NIH puts into extramural preclinical research now is funding stuff that no one can replicate.
Brian, suffice it to say that I wish you were wrong in basically saying that a lot of scientists are just plain dishonest in their dealings. That said, I can't argue with you on this one....
Well, there is certainly a non-trivial component of intentional dishonesty.
But I think the larger and more prevalent problem is confirmation bias in its various manifestations, made worse by perverse incentives. I don't think a majority of my colleagues are dishonest.
Most people don't ever really have to contemplate how easy it is to fool oneself...even though it is a universal human failing. Scientists are no different...but for us, it is a serious impediment to doing our jobs well.