Brent W. Roberts
My tweet, “Failure to replicate hurting your career? What about PhDs with no career because they were honest” was taken by some as a personal attack on Dr. Schnall. It was not, and I apologize to Dr. Schnall if it was taken that way. The tweet was in reference to the field as a whole, because our current publication and promotion system does not reward the honest design and reporting of research. And this places many young investigators at a disadvantage. Let me explain.
Our publication practices reward the reporting of optimized data—the data that look the best, or that could be dressed up to look nice through whatever means necessary. We have no choice, given the way we incentivize our publication system. That system, which punishes null findings and rewards only statistically significant effects, means that our published science is not currently an honest portrait of how our science works. The current rash of failures to replicate famous and not-so-famous studies is simply a symptom of a system that is in dire need of reform. Moreover, students who are unwilling to work within this system—to be honest about their failures to replicate published work, for example—are punished disproportionately. They wash out, get counseled into other fields, or simply choose to leave our field of their own accord.
Of course, I could be wrong. It is possible that the majority of researchers publish all of their tests of all of their ideas somewhere, including their null findings. I’m open to that possibility. But, like many hypotheses, it should be tested and I have an idea for how to test it.
Take any one of our flagship journals and, for one year, follow a publication practice much like that followed for the special replication issue just published. During that year, the editors agree to review and publish only manuscripts that 1) have been pre-registered, 2) describe only their introduction, methods, and planned analyses, not their results, and 3) include at least one direct replication of each unique study presented in any given proposed package of studies. The papers would be “accepted” based on the elegance of the theory and the adequacy of the methods alone. The results would not be considered in the review process. Of course, the pre-registered studies would be “published” in a form where readers would know that the idea was proposed even if the authors did not follow through with reporting the results.
After a year, we can examine what honest science looks like. I suspect the success rate for statistically significant findings will go down dramatically, but that is only a hypothesis. Generally speaking, think of the impact this would have on our field and on science in general. The journal that takes up this challenge would have the chance to show the field, and the world, what honest science looks like. It would be held up as an example for all fields of science of exactly how the process works, warts and all. And if I’m wrong—if at the end of that year the science produced in that journal looks exactly like the pages of our current journals—I’ll not only apologize to the field, I’ll stop tweeting entirely.