Navel gazing on steroids in social and personality science

It seems a critical mass of papers, blogs, articles, and comments has hit the field of social and personality psychology in the last few months.  So much so that I thought it would be constructive to catalogue everything that has emerged, from Lehrer, to Bem, to Stapel, to Bargh, to Simonsohn, and on to Francis (borrowing freely from Sanjay Srivastava's hard work).  I would like this post to be a resource of sorts for the whole sordid affair, one that can be used in methods courses taught in the not-too-distant future. So, if I've missed anything important, please send the link or paper my way.

The opening act: Lehrer & Bem

The story probably starts with Lehrer’s piece on the decline effect, which was followed closely by the news of Bem’s article on the existence of ESP.  Both pieces caused consternation in psychology in general and in social psychology in particular.  Bem’s article resulted in a veritable storm of criticism.  One of the most thorough critiques of Bem’s study can be found on the Skeptical Inquirer blog.  One of the most even-handed reactions can be found on the citation needed blog by Tal Yarkoni.

Just recently, Ritchie et al. (2012) published their failure to replicate Bem's research.  The paper features a nice back-and-forth between the authors and Bem in the comment section.

Jumping the shark: Stapel

If anyone possessed even a modicum of ESP, it would have been nice if they had told us ahead of time about the Stapel affair.  As it happened, it came right on the heels of Bem. I think, no, I hope, that the juxtaposition of these two events will prove fortunate, because the combination has created a motivation for some methodological navel gazing that might make our science a little better.  If either one of these events had happened in isolation, it would have been easy to blithely forget about them.

For those following the Stapel affair, the committee in charge of evaluating his research has posted a preliminary report identifying an initial list of his fraudulent studies.  For another perspective on Stapel and how the affair might damage our reputation, see the recent post on the Gene Expression blog at Discover magazine.  The comment section is especially illuminating, or depressing, depending on your perspective.  For a relatively complete compilation of Stapel-related information, see Sanjay Srivastava's post on his blog, The Hardest Science.

Rubbing salt in the wound: Bargh & Doyen

Just when things seemed to be settling down a bit, a new kerfuffle erupted over on PLoS ONE and Psychology Today.  Doyen et al. (2012) reported a failure to replicate a classic study by John Bargh.  The findings were highlighted in Ed Yong's blog at Discover magazine.  Bargh responded with an unsolicited review of Doyen and Yong's missives that included a harsh take on the PLoS ONE model of publishing.  A number of people, including our own Dan Simons, have described Bargh's post as a prime example of how not to respond to your critics.  To his credit, Ed Yong responded in a most polite way.  And, finally, Bargh posted some additional thoughts that did little to assuage his critics.  Again, read the comments.  They are more edifying than the original posts.  For a comprehensive rundown of the whole affair, see Cedar Riener's post.

Another criticism, another dust-up: Simonsohn & Schwarz

Simultaneously with the Bargh brouhaha, Simonsohn and Schwarz got into a spat on the SPSP blog site.  Simonsohn, along with Simmons and Nelson, is a co-author of the now-infamous Psychological Science paper on false positives in psychology.  Apparently, Schwarz took umbrage at some of the points made in the symposium and expressed his views most passionately at the recent SPSP meeting.  This resulted in a rather uncomfortable exchange on the SPSP discussion site.  The gist of Schwarz's comments appears to be that Simonsohn and colleagues have overstated the problem.  I hope that one conclusion drawn from the litany of issues posted here is that Simmons et al. might have underestimated things.
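For anyone who has not read the Simmons et al. paper, the heart of their argument is that innocuous-looking flexibility, such as peeking at the data and adding subjects until p dips below .05, dramatically inflates the false positive rate.  A minimal simulation conveys the point.  This is my own sketch of their optional stopping scenario, not the authors' code; the starting sample size, step size, and maximum n below are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def optional_stopping_trial(n_start=20, n_max=50, step=10, alpha=0.05):
    """One simulated two-group experiment with NO true effect: peek at the
    data and add subjects until p < alpha or n_max is reached."""
    a = list(rng.standard_normal(n_start))
    b = list(rng.standard_normal(n_start))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:
            return True   # "significant" result despite a true null
        if len(a) >= n_max:
            return False
        a.extend(rng.standard_normal(step))
        b.extend(rng.standard_normal(step))

n_sims = 10_000
false_positives = sum(optional_stopping_trial() for _ in range(n_sims))
print(f"False-positive rate with optional stopping: {false_positives / n_sims:.3f}")
# With a single fixed-n test the long-run rate would be about .05;
# even a few peeks push it well above that.
```

And that is just one researcher degree of freedom; Simmons et al. show that combining several of them can push the nominal 5% error rate far higher.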

No one expects the Spanish Inquisition: Greg Francis

In what appears to be a fit of moral indignation, Greg Francis of Purdue University has produced a string of articles that apply Ioannidis's test for an excess of significant findings (publication bias) to a variety of experimental studies.  One paper calls into question Balcetis and Dunning's research.  Balcetis and Dunning respond in the comment section; be sure to read their response, as it is well crafted.  The second paper takes on Bem's ESP work and Meissner & Brigham's work on verbal overshadowing.  One has to wonder what would happen if Ioannidis's analytical technique were used systematically across all of our journal articles.
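For readers unfamiliar with the logic of the test, the gist is this: from the effect sizes and sample sizes a paper reports, one can compute the post hoc power of each study and, from that, the probability that every study in the set would come out significant.  If that joint probability is very small, yet every reported study is significant, something (a file drawer, questionable analyses) has likely biased the record.  Here is a rough sketch of that logic with entirely hypothetical numbers; the effect size, sample size, and study count are made up for illustration and are not taken from any of the papers discussed here.

```python
import numpy as np
from scipy import stats

def posthoc_power_two_sample_t(d, n_per_group, alpha=0.05):
    """Post hoc power of a two-sided, two-sample t-test for effect size d."""
    df = 2 * n_per_group - 2
    nc = d * np.sqrt(n_per_group / 2)        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Power = P(|T| > t_crit) under the noncentral t distribution
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

# Hypothetical paper: five experiments, all significant, each with 20
# subjects per group, pooled effect size d = 0.45.
n_studies, d_pooled, n_per_group = 5, 0.45, 20
power = posthoc_power_two_sample_t(d_pooled, n_per_group)
p_all_significant = power ** n_studies
print(f"Post hoc power of each study: {power:.2f}")
print(f"P(all {n_studies} studies significant): {p_all_significant:.3f}")
# A joint probability below the conventional .10 threshold is taken as
# evidence that the set of experiments is biased.
```

Of course, as Francis himself stresses in the comment thread below, a significant result on this test speaks to the particular set of studies analyzed, not to the field as a whole.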

Psychologist, heal thyself

Along with the Simmons et al. paper, numerous new papers are emerging that analyze the issues being raised in these forums.  There is the highly controversial John et al. (in press) paper, which attempts to identify the base rate of Questionable Research Practices.  LeBel & Peters use the Bem article as a case study for identifying deficiencies in modal research practice, for example, an overemphasis on conceptual rather than close replications.  Fanelli has written about the rise of positive results across time and how positive results correlate with the perceived status of a field of scientific inquiry.  Ledgerwood & Sherman fault the rise of the short report (especially cutesy, press-happy nuggets), while Bertamini & Munafò show that the short report format is correlated with publication bias.  The editors of Psychological Science see no problem with their format.  These papers clearly point to the fact that the status quo in our field, and other fields for that matter, is unacceptable.

Replication: The road to redemption?

Beyond identifying problematic research and the problematic research methods we employ, some individuals have started to propose solutions to this mess.  Yours truly proposed the Journal of Reproducible Results in (put your field here: Personality Psychology, Social Psychology, Neuroscience, Medicine, Genetics, etc.).  Sanjay Srivastava has written about how replications could be handled by journals and whether journals should be groundbreaking or definitive.  Roediger wrote an eloquent paean to the varieties of replication that we can and should employ and reward in our field.  Pashler and friends have attempted to popularize www.psychfiledrawer.org as a site to record your attempts to replicate published research.  Purveyors of the site have nominated the top 20 articles that should be replicated.  Similarly, Brian Nosek has started the Open Science Collaboration, dedicated to replicating psychological experiments published in our leading journals.  All of these efforts highlight the need to value the cornerstone of science: direct replication.  It may not be creative, but it is necessary.

Where do we go from here?

The goal of this post is simple: to collect in one place the majority of the methodological issues that have been raised in the past two years in social/personality psychology.  From my vantage point, the gestalt of the information is straightforward: we have a problem.  Moreover, we have enough of a problem that we should do something about it. My initial inclination was to end by saying that all of these issues deserve discussion.  To be honest, I think they deserve action.

Addendum I: The chickens come home to roost

So much for things settling down over the summer.  Close on the heels of updated reports on the Stapel affair, we have another report of "fraud" at Erasmus University.  Once again, an experimental social psychologist, Dirk Smeesters, has gotten into hot water and has been forced to resign.  This time, he apparently did not fake data outright, but did what he believes many of his colleagues do: resort to questionable research practices in order to compile a seemingly convincing set of statistically significant findings.  Here's the quote from the report:

He “repeatedly indicates that the culture in his field and his department is such that he does not feel personally responsible, and is convinced that in the area of marketing and (to a lesser extent) social psychology, many consciously leave out data to reach significance without saying so.”

The new affair is documented nicely here at Retraction Watch.

The interesting, and troubling, aspect of this new case is that Smeesters equates his actions with Standard Operating Procedure (SOP) in social psychology.  As Ed Yong notes, if these methods are SOP in psychology, this may be the "first flake of the avalanche" of individuals having their QRPs rooted out by relatively simple post hoc analyses of their research reports.

Brent W. Roberts


3 Responses to Navel gazing on steroids in social and personality science

  1. Greg Francis says:

    This is a nice summary of some very important issues. I do suggest that people read the Balcetis and Dunning reply to my i-Perception paper; they raise a number of important and interesting points. Perhaps the most important point, though, is that they verified the conclusion of my analysis. There _was_ a suppressed null experiment in their research project. Thus, the claim of publication bias in my paper was correct.

    • Greg, in your reply to B&D you wrote the following:

      “In their conclusions, B&D’s response suggested that my investigations of publication bias engage in the very practice that I criticize. I would be susceptible to this criticism if I were making inferences about publication bias for the field in general. If a researcher wanted to estimate the frequency of publication bias across a field of study, then it would be necessary to take representative sets of experiments. Only with a representative (or random) sample can one validly infer back to the field. In contrast, the analysis reported in Francis (2012) is similar to a case study report. By definition, such a report is selective and the findings should not generally be extrapolated to situations outside of the particular report.”

      But in Ed Yong’s new post on Smeesters you are quoted as follows:

      “When I spoke to Francis about an earlier story, he told me: ‘For the field in general, if somebody just gives me a study and says here’s a result, I’m inclined to believe that it might be contaminated by publication bias.'”

      Does that mean you have now applied the Ioannidis technique to analyze a random sample of papers?

  2. mbdonnellan says:

    I am starting a blog and wrote up some guidelines for replication studies. I would love feedback.
    http://traitstate.wordpress.com/2012/07/25/preliminary-thoughts-about-guidelines-and-recommendations-for-exact-replications/
