One of the most provocative requests in the reproducibility crisis was Daniel Kahneman’s call for psychological scientists to collaborate on a “daisy chain” of research replication. He admonished proponents of priming research to step up and work together to replicate the classic priming studies that had, up to that point, been called into question.
What happened? Nothing. Total crickets. There were no grand collaborations among the strongest and most capable labs to reproduce each other’s work. Why not? With 20/20 hindsight, it is clear that the incentive structure in psychological science militated against the daisy chain idea.
The scientific system in 2012 (and the one still in place) rewarded people who were the first to discover a new, counterintuitive feature of human nature, preferably using an experimental method. Since we did not practice direct replications, the veracity of our findings wasn’t really the point. The point was to be the discoverer, the radical innovator, the colorful, clever genius who apparently had a lot of flair.
If this was and remains the reward structure, what incentive was there, or is there, to conduct direct replications of your own or others’ work? Absolutely none. In fact, the act of replicating your own work would be punitive. Taking the most charitable position possible, almost everyone knew that our work was “fragile.” Even an informed researcher would know that the average power of our studies (e.g., 50%) would naturally lead to an untenable rate of failures to replicate findings, even if those findings were true. And failures to replicate our work would lead to innumerable negative consequences: diminished reputations, undermined grant prospects, a lower probability of our students publishing their papers, and painful embarrassment.
In fact, the act of replication was so aversive that then, and now, the proponents of most of the studies that have been called into question continue to argue passionately against the value of direct replication in science. Indeed, it seems the enterprise of replication is left to anyone but the original authors. The replications are left to the young, the noble, or the disgruntled. The latter are particularly problematic because they are angry. Why are they angry? They are angry because they are morally outraged. They perceive the original researchers as people who have consciously, willingly manipulated the scientific system to publish outlandish but popular findings in an effort to enhance or maintain their careers. The anger can have unintended consequences. The disgruntled replicators can and do behave boorishly at times. Angry people do that. Then they are called bullies, or they are boycotted.
All of this sets up a perfectly horrible, internally consistent, self-fulfilling system in which replication is punished. In this situation, the targets of replication can rail against the young (and by default less powerful) as having nefarious motivations to get ahead by tearing down their elders. And they can often accurately point to the disgruntled replicators as mean-spirited. And, of course, you can conflate the two and call them shameless little bullies. All in all, it creates a nice little self-justifying system for avoiding daisy chaining anything.
My point is not to criticize the current efforts at replication, so much as to argue that these efforts face a formidable set of disincentives. The system is currently rigged against systematic replications. To counter the prevailing anti-replication winds, we need robust incentives (i.e., money). Some journals have made valiant efforts to reward good practices and this is a great start. But, badges are not enough. We need incentives with teeth. We need Federally Funded Daisy Chains.
The idea of a Federally Funded Daisy Chain is simple. Any research that the federal government deems valuable enough to fund should be replicated. And the feds should pay for it. How? NIH and NSF should set up research daisy chains. These would be very similar to the efforts currently being carried out at Perspectives on Psychological Science by Dan Simons and colleagues. Research teams from multiple sites would take the research protocols developed in federally funded research and replicate them directly.
And the kicker is that the funding agencies would pay for this as part of the default grant proposal. Some portion of every grant would go toward funding a consortium of research teams—there could be multiple consortia across the country, for example. The PIs of the grants would be obliged to post their materials in such a way that others could quickly and easily reproduce their work. The replication teams would be reimbursed (i.e., incentivized) to do the replications. This would not only spread the grant-related wealth but also reward good practices across the board. PIs would be motivated to do things right from the get-go if they knew someone was going to come behind them and replicate their efforts. The pool of replicators would expand as more researchers could get involved, motivated by the resources provided by the feds. Generally speaking, providing concrete resources would help make doing replications the default option rather than the exception.
Making replications the default would go a long way to addressing the reproducibility crisis in psychology and other fields. To do more replications we need concrete positive incentives to do the right thing. The right thing is showing the world that our work satisfies the basic tenet of science—that an independent lab can reproduce our research. The act of independently reproducing the work of others should not be left to charity. The federal government, which spends an inordinate amount of taxpayer dollars to fund our original research, should care enough about doing the right thing that they should fund efforts to replicate the findings they are so interested in us discovering.
Sponsoring replications will do nothing to change the system, I think. People will still get ahead by publishing their low-powered, p-hacked studies like they have for decades; only now extra money will be wasted on replicating these spurious “findings”. To me, this is not a good idea and gets it all backwards. It’s like knowing you’re hiring plumbers who do a bad job but still hiring them to fix your toilet, and then spending more money on hiring plumbers who do a good job to diagnose and fix the problems the bad plumbers left behind. It’s crazy 🙂
I would much rather see federal government money being spent on researchers who perform high-quality work in the first place. For instance, perhaps they could only give out money to those who perform high-powered, pre-registered work whose results will be published no matter how they turn out (e.g., demanding researchers use the Registered Reports format).
I think that will be 1) a much better incentive, 2) much more helpful in improving practices, and 3) a much better use of resources.
(Side note: I also don’t see how money for replications will help with all the things that work against performing replications, which you nicely mention at the beginning of your post. Do you think all the priming researchers will set up a daisy chain when they receive money for it? I highly doubt it. Also, I reason that the majority of their “findings” are based on studies involving the mostly free or very cheap use of student populations. It would not be that costly for them to replicate their own work, I would say, which is another reason why I think money is not the issue. Furthermore, if these researchers really cared about replications, they could easily educate and use their own students to perform them. It’s not about the money; it’s about other things, I gather.)
Although I believe that replication is valuable and necessary, so many empirical studies in psychology are so badly done that there is simply no point in trying to replicate them. It would be a waste of time, money, and other resources. I agree with Anonymous: “I would much rather see federal government money being spent on researchers who perform high quality work in the first place.” How to identify those researchers is a problem, though.
I also would like to see improvements in methodological and statistical training at the graduate level, required courses in critical thinking, all psychology journals requiring submission of the dataset(s) at the time an article is published, and psychology journals being much more willing to publish commentaries on published articles.