Owning it

What happened when the authors of studies linking candidate gene polymorphisms to drug response tried to replicate their own research?

As many of you know, the saga of replication problems continues unabated in social and personality psychology. The most recent dust-up has been over the ability of some researchers to replicate Dijksterhuis’s professor-prime studies and the ensuing arguments over those attempts.

While social and personality psychologists “discuss” the adequacy of the replication attempts in our field, a truly remarkable paper was published in Neuropsychopharmacology (Hart, de Wit, & Palmer, 2013).  The second and third authors have a long collaborative history working on the genetics of drug addiction.  In fact, they have published 12 studies linking variations in candidate genes, such as BDNF, DRD2, and COMT, to intermediate phenotypes related to drug addiction.  As they note in the introduction to their paper, these studies have been cited hundreds of times and would lead one to believe that single SNPs or variations in specific genes are strongly linked to the way people react to amphetamines.

The 12 original studies all relied on a really nice experimental paradigm.  The participants received placebos and varying doses of amphetamines across several sessions, and both the experimenters and the participants were blind to which dose was administered.  The order of drug administration was counterbalanced.  After taking the drugs, the participants rated their drug-related experience over the few hours that they stayed in the lab.  The authors, their postdocs, and their graduate students published 12 studies linking the genetic polymorphisms to outcomes like feelings of anxiety, elation, vigor, and positive mood, and even to concrete outcomes such as heart rate and blood pressure.

While the experimental paradigm had strong fidelity and validity, the studies themselves were modestly powered (Ns = 84 to 162).  Sound familiar?  It is exactly the same situation we now face in many areas of psychological research: a history of statistically significant effects discovered using modestly powered studies.
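To make “modestly powered” concrete, here is a minimal sketch (mine, not from the Hart et al. paper) that approximates the power of a simple genotype-phenotype correlation test using the Fisher z approximation. The effect sizes are assumptions for illustration only: candidate-gene associations with behavioral phenotypes are typically tiny (r around .05 to .10), whereas samples of 84 to 162 are only well powered for medium-sized effects (r around .30).

# Illustrative power calculation (assumed effect sizes, not the authors' analysis).
from math import atanh, sqrt
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 at sample size n."""
    z_crit = norm.ppf(1 - alpha / 2)          # critical value for two-sided alpha
    noncentrality = atanh(r) * sqrt(n - 3)    # Fisher z transform of the true effect
    return norm.sf(z_crit - noncentrality) + norm.cdf(-z_crit - noncentrality)

for n in (84, 162, 398):                      # original sample sizes and the replication sample
    for r in (0.05, 0.10, 0.30):              # assumed true effect sizes
        print(f"N = {n:3d}, true r = {r:.2f}: power ~ {correlation_power(r, n):.2f}")

Under these assumptions, the original samples have at least 80 percent power for a medium effect (r = .30) but under 10 percent power for a tiny effect (r = .05), and even N = 398 reaches only about 50 percent power for r = .10.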

As these 12 studies were going to press (a 5-year period), the science of genetics was making strides in identifying the appropriate underlying model of genotype-phenotype associations.  The prevailing model moved from the common-variant model to the rare-variant or infinitesimal model.  The import of the latter two models was that it would be highly unlikely to find any candidate gene linked to any phenotype, whether an endophenotype, an intermediate phenotype, or a subjective or objective phenotype, because the effect of any single candidate gene polymorphism would be so small.  The implication was that the findings published by this team would be called into question, with only the remote possibility that they had been lucky enough to find one of the few polymorphisms with a big effect, like APOE.

So what did the authors do?  They kept on assessing people using their exemplary methods and kept on collecting DNA.  When they reached a much larger sample size (N = 398), they decided to stop and try to replicate their previously published work.  So, at least in terms of our ongoing conversations about how to conduct a replication, the authors did what we all want replicators to do: they used the exact same method and gathered a replication sample with more power than any of the original studies.

What did they find?  None of the 12 studies replicated.  Zip, zero, zilch.

What did they do?  Did they bury the results?  No, they published them.  And, in their report they go through each and every previous study in painful, sordid detail and show how the findings failed to replicate—every one of them.

Wow.

Think about it.  Publishing your own paper showing that your previous papers were wrong.  What a remarkably noble and honorable thing to do: putting the truth ahead of your own career.

Sanjay Srivastava proposed the Pottery Barn Rule for journals: if a journal publishes a paper that other researchers fail to replicate, then the journal is obliged to publish the failure to replicate.  The Hart et al. (2013) paper seems to go one step further.  Call it the “clean up your own mess” rule or the “own it” rule: if you bothered to publish the original finding, then you should be the first to try to directly replicate it and to publish the results regardless of their statistical significance.

We are several years into our p-hacking, replication-lacking nadir in social and personality psychology and have yet to see a similar paper.  Wouldn’t it be remarkable if we owned our findings well enough to try to directly replicate them ourselves without being prodded by others?  One can only hope.


3 Responses to Owning it

  1. Suzanne Segerstrom says:

    Like.

  2. We had a go at replicating one of our findings in a way that we thought would be to your liking. Does this not count as “personality and social psychology”?

    http://www.psy.miami.edu/faculty/mmccullough/Papers/Hone_McCullough_in%20press_EHB_RelPrimHandGrip_Rep.pdf

    • pigee says:

      Most definitely Mike! I’m happy to report that there are several examples like yours that have emerged since this post was written (I’ll try to dig them up and do an update). That said, the researchers who have authored some of the most provocative studies that have had the most dramatic failed independent replications have yet to come forward with their own direct replications. They have written a lot of rejoinders though….
