Please Stop the Bleating

It has been unsettling to witness the seemingly endless stream of null effects emerging from numerous pre-registered direct replications over the past few months. Some of the outcomes were unsurprising given the low power of the original studies. But the truly painful part has come from watching and reading the responses from all sides.  Countless words have been written discussing every nuanced aspect of definitions, motivations, and aspersions. Only one thing is missing:

Direct, pre-registered replications by the authors of studies that have been the target of replications.

While I am sympathetic to the fact that those who are targeted might be upset, defensive, and highly motivated to defend their ideas, the absence of any data from the originating authors is a more profound indictment of the original finding than any commentary. To my knowledge, and please correct me if I'm wrong, none of the researchers who've been the target of a pre-registered replication have produced a pre-registered study from their own lab showing that they are capable of getting the effect, even if others are not. Those of us standing on the sidelines watching things play out are constantly surprised that the one piece of information that might help, evidence that the original authors can reproduce their own effects in a pre-registered study, is never offered up.

So, get on with it. Seriously. Everyone. Please stop the bleating. Stop discussing whether someone p-hacked, what p-hacking really is, whether someone is competent to do a replication, what a replication is, or whether a replication was done poorly or well. Stop reanalyzing the damn Reproducibility Project or pursuing any of the thousands of other ways of re-examining the past. Just get on with doing direct replications of your own work. It is a critical, and thus far missing, piece of the reproducibility puzzle.

Science is supposed to be a give and take. If it is true that replicators lack some special sauce necessary to get an effect, then it is incumbent on those of us who’ve published original findings to show others that we can get the effect—in a pre-registered design.

Brent W. Roberts


8 Responses to Please Stop the Bleating

  1. Jeff Sherman says:

    Are you doing that? Are any personality psychologists replicating any personality psychology studies?

    • pigee says:

      Yes. By my reckoning, we've directly replicated 6 different effects drawn from longitudinal or cross-sectional research we've published or produced in the past few years. Three of these were direct replications of prior publications. In two cases we directly replicated our own studies within a study. In one case we did a "Roediger replication" (i.e., replication and extension). Two papers are still under review, btw. In one case we failed to replicate the original effect; that paper got rejected because study 2 found null results 🙂

      Here are four references:

      Luo, J. & Roberts, B.W. (2015). Concurrent and longitudinal relations among conscientiousness, stress, and self-perceived health. Journal of Research in Personality, 59, 93-103.

      Rieger, S., Göllner, R., Roberts, B.W., & Trautwein, U. (2016). Low self-esteem prospectively predicts depression in the transition to young adulthood: A replication of Orth, Robins, and Roberts (2008). Journal of Personality and Social Psychology, 110, e16-e22. http://dx.doi.org/10.1037/pspp0000037

      Nickel, L.B., Iveniuk, J., & Roberts, B.W. (in press). Compensatory conscientiousness redux: A direct replication of Roberts, Smith, Jackson, and Edmunds (2009). Social Psychological and Personality Science.

      Mu, W., Luo, J., Nickel, L., & Roberts, B.W. (in press). Generality or specificity? Examining the relation between personality traits and mental health outcomes using a bivariate bi-factor latent change model. European Journal of Personality.

      Your point is well taken, though. Personality psychology is not conspicuously better at doing this than any other area. I do get the impression that certain subfields within psychology are more likely to do direct replications of effects in order to test their robustness before proceeding with using a technique or paradigm (sensation and perception research, for example). That said, I'm casting aspersions at all areas of psychology for not doing this, since even in sensation and perception research they claim they do the replications but don't publish them.

      • Jeff Sherman says:

        I tip my hat to you, good sir!

      • pigee says:

        I forgot this one (sorry Nate):

        Hudson, N.W., & Roberts, B.W. (2016). Social investment in work reliably predicts changes in conscientiousness and agreeableness: A direct replication and extension of Hudson, Roberts, and Lodi-Smith (2012). Journal of Research in Personality, 60, 12-23.

        And JRP will have a special issue coming out soon where 11 studies are going to be pre-registered, direct replications of past studies. No clue when it will be published.

  2. pigee says:

    I figured if I told everyone to be their own replicator I should at least do it myself.

  3. David Funder says:

    As Chris Soto pointed out in a different thread somewhere (Facebook maybe?), a routine step in the development of any personality measure is cross-validation. This is otherwise known as replication, but hasn’t traditionally been called that. The failure to understand this basic methodological practice is what led to the infamous “voodoo correlation” brouhaha in the fMRI literature.

  4. This is a great post, and you are right that this should be a routine part of research. However, I'm not sure it solves the problem of dealing with failures to replicate. If a lab is able to reproduce its effect even in a pre-registered replication, it may be that the "secret sauce" is some combination of unreported aspects of the experiment (e.g., how the advertisement was phrased, some characteristic of the researcher, degrees of freedom in the instructions, etc.). Within-lab replications are a good way of preventing ourselves from p-hacking or exploring the garden of forking paths, but between-lab replications seem more essential for addressing whether the method as laid out in the method section reliably reproduces a result.

    One thing I think we also forget to do in all this, which goes to the spirit of your comment, is to give credit to replicators for actually doing something. There's been extensive discussion and debate in journals, blogs, and social media about what needs to change, but I find it more heartening to see people actually making changes (whether it be people replicating their own or others' work, changing their workflow, or actually implementing concrete changes in journals).

  5. Pingback: The Day the Palm hit the Face | NeuroNeurotic
