Be your own replicator

by Brent W. Roberts

One of the conspicuous features of the reproducibility crisis brewing in psychology is that we have a great deal of fear, loathing, defensiveness, and theorizing being expressed about direct replications. But, if the pages of our journals are any indication, we have very few direct replications actually being conducted.

Reacting with fear is not surprising. It is not fun to have your hard-earned scientific contribution challenged by some random researcher. Even if the replicator is trustworthy, it is scary to have your work be the target of a replication attempt. For example, one colleague was especially concerned that graduate students were now afraid to publish papers given the seeming inevitability of someone trying to replicate and tear down their work. Seeing the replication police in your rearview mirror would make anyone nervous, but especially new drivers.

Another prototypical reaction appears to be various forms of loathing. We don’t need to repeat the monikers used to describe researchers who conduct and attempt to publish direct replications. It is clear that they are not held in high esteem. Other scholars may not demean the replicators but hold equally negative attitudes toward the direct replication enterprise, deeming the entire effort a waste of time. They are, in short, too busy making discoveries to fuss with conducting direct replications.

Other researchers who are the target of failed replications have turned to writing long rejoinders. Often reflecting a surprising amount of work, these papers typically argue that while the effect of interest failed to replicate, there are dozens of conceptual replications of the phenomenon of interest.

Finally, there appears to be an emerging domain of scholarship focused on the theoretical definition and function of replications. While fascinating, and often compelling, these essays are typically not written by people who conduct direct replications themselves, which is a conspicuous omission.

While each of these reactions is sensible, they are entirely ineffectual, especially in light of the steady stream of papers failing to replicate major and minor findings in psychology. Looking across the various replication efforts, it is not too much of an exaggeration to say that less than 50% of our work is reproducible. Acting fearful, loathing replicators, defensively arguing for the status quo, or writing voluminous discourses on the theoretical nature of replication are fundamentally ineffective responses to this situation. We dither while a remarkable proportion of our work fails to be reproduced.

There is, of course, a deceptively simple solution to this situation. Be your own replicator.

It is that simple. And I don’t mean conceptual replicator; I mean direct replicator. Don’t wait for someone to take your study down. Don’t dedicate more time to writing a rejoinder than it would take to conduct a study. Replicate your work yourself.

Now, this is not much different from the position that Joe Cesario espoused, which is surprising because, as Joe can attest, I did not care for his argument when it came out. But it is clear at this juncture that there was much wisdom in his position. It is also clear that people haven’t paid it much heed. Thus, I think it merits restating.

Consider for a moment how conducting direct replications of your own research might change some of the interactions that have emerged over the last few years. In the current paradigm, we get incredibly uncomfortable exchanges that go something like this:

Researcher B: “Dear eminent, highly popular Researcher A, I failed to replicate your study published in that high impact factor journal.”

Researcher A: “Researcher B, you are either incompetent or malicious. Also, I’d like to note that I don’t care for direct replications. I prefer conceptual replications, especially because I can identify dozens of conceptual replications of my work.”

Imagine an alternative universe in which Researcher A had a file of direct replications of the original findings. Then the conversation would go from a spitting match to something like this:

Researcher B: “Dear eminent, highly popular Researcher A, I failed to replicate your study published in that high impact factor journal.”

Researcher A: “Interesting. You didn’t get the same effect? I wonder why. What did you do?”

Researcher B: “We replicated your study as directly as we could and failed to find the same effect” (whether judged by p-values, effect sizes, confidence intervals, Bayesian priors, or whatever).

Researcher A: “We’ve reproduced the effect several times in the past. You can find the replication data on the OSF site linked to the original paper. Let’s look at how you did things and maybe we can figure this discrepancy out.”

That is a much different exchange from the ones we’ve seen so far, which have been dominated by conspicuous failures to replicate and, well, little more than vitriolic arguments over details, supported by little or no data, old or new.

Of course, there will be protests. Some continue to argue for conceptual replications. This perspective is fine. And, let me be clear. No one to date has argued against conceptual replications per se. What has been said is that in the absence of strong proof that the original finding is robust (as in directly replicable), conceptual replications provide little evidence for the reliability and validity of an idea. That is to say, conceptual replications rock, if and when you have shown that the original finding can be reproduced.

And that is where being your own replicator is such an ingenious strategy. Not only do you inoculate yourself against the replicators, but you also bolster the validity of your conceptual replications in the process. That is a win-win situation.

And, of course, being your own direct replicator also addresses the argument that the replicators may be screw-ups. If you feel this way, fine. Be your own replicator. Show us you can get the goods. Twice. Three times. Maybe more. Just make sure to pre-register your replication attempts; otherwise, some may accuse you of p-hacking your way to a direct replication.

It is also common, as noted, to see researchers respond to a failure to replicate by listing out dozens of small-sample conceptual replications of the original work. Why waste your time? The time spent crafting arguments around tenuous evidence could easily be spent conducting your own direct replication of your own work. Now that would be a convincing response. A direct replication is worth a thousand words, or a thousand conceptual replications.

Similarly, replication failures spur some to craft nuanced arguments about just what counts as a replication and whether anything is really a “direct” replication. These are nice essays to read. But we’ll have time for those discussions later, after we show that some of our work actually merits discussion. Proceeding to intellectual debates is nothing more than a waste of time when more than 50% of our research fails to replicate.

Some might want to argue that conducting our own direct replications would be an added burden on already overextended researchers. But let’s be honest: the JPSP publication arms race has gotten way out of hand. Researchers seemingly have to produce at least 8 different studies to even have a chance of getting into the first two sections of JPSP. What real harm would there be if you still ran the same number of studies but included 4 conceptually distinct studies, each replicated once? That’s still 8 studies, but now the package would include information that would dissipate the fear of being replicated.

Another argument would be that it is almost impossible to get direct replications published. And that is correct. The only bias more foolish than our bias against null findings is our bias against the value of direct replications. As a result, it is hard to get direct replications published in mainstream outlets. I sometimes have utopian dreams in which I imagine our entire field moving past this bias. One can dream, right?

But this is no longer a real barrier. Some journals, or sections of journals, are actively fostering the publication of direct replications. Additionally, we have numerous outlets for direct replication research, whether formal ones, such as PLOS ONE or Frontiers, or less formal ones, such as PsychFileDrawer or the Open Science Framework. If you have replication data, it can find a home, and interested parties can see it. Of course, it would help even more if the replications were pre-registered.

So there you have it. Be your own replicator. It is a quick, easy, entirely reasonable way of dispelling the inherent tension in the replication crisis we are currently enduring.

3 Responses to Be your own replicator

  1. Lynne Cooper says:

    Brent, an elegant and simple solution. I agree that including a direct replication as part of a multi-study project would be favorably received at JPSP:PPID. Likewise we encourage and publish both direct and conceptual replication packages as replication studies. Such submissions must still meet rigorous methodological standards, but the bar for novelty/scope of theoretical contribution is somewhat lowered. Here are the guidelines for submitting a replication study to JPSP: http://www.apa.org/pubs/journals/psp/index.aspx?tab=4#submit.

  2. Rolf Zwaan says:

    I agree that self-replications are useful. We performed some high-powered direct replications of our own work (and that of others) and got them published some years back: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0051382. The data and stimuli can be found here: https://osf.io/2dtrh/. I wonder whether, if people want to replicate your work, they should replicate the original study or your own self-replication, which is bound to be higher-powered and so gives a more accurate estimate of the effect size.

    • pigee says:

      That truly is a utopian vision–people directly replicating your direct replications. This is only my opinion, but if faced with that circumstance I would meta-analyze the studies, find the mean effect size, and proceed accordingly (i.e., use power analysis based on that effect size).
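
      To make that concrete, here is a minimal sketch in Python (numpy plus statsmodels) of that meta-analyze-then-power approach, assuming two-group studies that report Cohen’s d. The effect sizes and sample sizes below are hypothetical, and the fixed-effect inverse-variance weighting is just the standard textbook version, not anything prescribed above:

        # Pool an original effect and its direct replications, then size the next study.
        # All study values below are hypothetical, for illustration only.
        import numpy as np
        from statsmodels.stats.power import TTestIndPower

        # (Cohen's d, n in group 1, n in group 2) per study
        studies = [(0.45, 40, 40), (0.30, 80, 80), (0.38, 60, 60)]
        d = np.array([s[0] for s in studies])
        n1 = np.array([s[1] for s in studies], dtype=float)
        n2 = np.array([s[2] for s in studies], dtype=float)

        # Approximate sampling variance of d for a two-group design
        var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
        w = 1.0 / var_d                        # inverse-variance weights
        d_pooled = np.sum(w * d) / np.sum(w)   # fixed-effect mean effect size
        print(f"pooled d = {d_pooled:.2f}")

        # Per-group n for 80% power at alpha = .05 in a two-sided t-test
        n_needed = TTestIndPower().solve_power(effect_size=d_pooled,
                                               alpha=0.05, power=0.80)
        print(f"required n per group: {int(np.ceil(n_needed))}")

      A random-effects model would be the more cautious choice if the replications differ in samples or procedures; the fixed-effect version is shown only because it matches the “find the mean effect size” phrasing above.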
