Is Social Science Really Experiencing a Crisis?

April 25, 2019

Mainstream media frequently cover findings from psychological research, but until recently, the field itself was rarely the subject of intense public scrutiny. That has changed in recent years amid a so-called “replication crisis” – a pattern of researchers publishing findings that turn out to be hard for others to confirm. This pattern is actually not new, and calling it a crisis may be overblown, according to psychologists Joseph Lee Rodgers and Patrick Shrout. Writing in Policy Insights from the Behavioral and Brain Sciences, they explain that some factors leading up to the recent controversy are cause for concern, while others are natural parts of the research process that are sometimes misunderstood by the public and even researchers. Nonetheless, the authors believe the current discussion is an important opportunity for improvements to both research and the conclusions journalists and policymakers draw from it.  

To understand the controversy, it is important to recognize the difference between two related issues. Reproducibility is the ability of different investigators to obtain the same results from the original data and methods. Replicability is the ability of a different investigator, using new data, to find results that support the findings of the original study. The recent controversy is mostly about the latter, and was spurred in large part by a project of the Open Science Collaboration (OSC) that sought to replicate the findings of 100 published psychology studies. The group found that only 36% of the replication attempts produced statistically significant results – a finding that captured headlines and sparked skepticism about research among the public.

But Rodgers and Shrout highlight several things about the OSC study that most people didn’t hear. First, the researchers conducted only one replication study for each original study. Statisticians advise drawing conclusions about human behavior only from patterns across multiple studies conducted over time; a single replication study falls short of that standard just as a single original study does. Second, “virtually all” of the 100 replication studies were what statisticians call underpowered, meaning they didn’t have enough participants to make it likely they would detect a significant effect even where one really existed (the sketch below illustrates the idea). Third, the researchers often modified the design of the original studies for pragmatic reasons.
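To make “underpowered” concrete, here is a minimal sketch (not from the paper) that uses the statsmodels library to compute the power of a two-sample t-test. The effect size and the sample sizes are illustrative assumptions, chosen only to show how easily a small study can miss a real but modest effect.

```python
# A hedged sketch of what "underpowered" means, using statsmodels'
# analytic power calculation for a two-sample t-test. The effect size
# (Cohen's d = 0.3) and the sample sizes are illustrative assumptions,
# not values from the OSC project.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.3   # a real but modest effect
alpha = 0.05        # conventional significance threshold

for n_per_group in (20, 50, 200):
    power = analysis.solve_power(effect_size=effect_size,
                                 nobs1=n_per_group,
                                 alpha=alpha)
    print(f"n = {n_per_group:3d} per group -> power = {power:.2f}")

# With 20-50 participants per group, power falls well below the
# conventional 0.80 target: a single small replication will often
# come up nonsignificant even though the effect is real.
```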

Rodgers and Shrout also explain that when people heard about the OSC study, they assumed the original studies that didn’t replicate were false positives – that is, that they documented an effect that wasn’t a real trend but instead an error or coincidence in the data. But the project could not say whether a failed replication reflected a false positive in the original study or a false negative in the replication – a failure to detect a relationship that really does exist (the simulation sketch below shows the difference). As a result, many people misread the headlines as evidence that social scientists are routinely overconfident about the patterns of behavior they document.
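As a hedged illustration of that distinction (a toy simulation, not the OSC analysis), the sketch below runs many simulated two-group studies through a t-test. When there is truly no effect, significant results appear at roughly the 5% false-positive rate; when a modest effect is real but each study is small, nonsignificant results, i.e. false negatives, are the most common outcome. All numbers are illustrative assumptions.

```python
# A toy simulation (not the OSC analysis) contrasting false positives
# with false negatives. Effect size, sample size, and study counts are
# illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
N_PER_GROUP = 30      # participants per group in each simulated study
N_STUDIES = 5000      # number of simulated studies per scenario
ALPHA = 0.05          # significance threshold

def fraction_significant(true_effect):
    """Simulate N_STUDIES two-group studies; return share with p < ALPHA."""
    hits = 0
    for _ in range(N_STUDIES):
        control = rng.normal(0.0, 1.0, N_PER_GROUP)
        treatment = rng.normal(true_effect, 1.0, N_PER_GROUP)
        _, p_value = ttest_ind(control, treatment)
        hits += p_value < ALPHA
    return hits / N_STUDIES

# Scenario A: no real effect. Every significant result is a false positive,
# and they occur at roughly the 5% rate set by ALPHA.
print("false-positive rate:", fraction_significant(true_effect=0.0))

# Scenario B: a real but modest effect (d = 0.3). With only 30 participants
# per group, most studies come up nonsignificant; those are false negatives.
print("detection rate:", fraction_significant(true_effect=0.3))
```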

However, this is not to say there are no problems in social science research, Rodgers and Shrout are quick to point out. In fact, they believe the recent controversy is highlighting the need for reforms that some researchers have been urging for a long time, including a deeper understanding of statistical principles, better adherence to best practices in statistical analysis, and methodological advances. It has also led to new initiatives that could improve science and its application. One is the Registered Replication Report (RRR) mechanism from the Association for Psychological Science, through which researchers can submit “a potentially important finding that has already been published” to an editor, who may assign the project to multiple collaborative teams to conduct replication studies. In addition, the Center for Open Science’s Open Science Framework (OSF) offers tools for improving transparency in research, including a way for researchers to “pre-register” a hypothesis before analyzing data, so they can’t “go fishing” in the data for any statistically significant finding that may or may not reflect a real underlying trend.

Rodgers and Shrout do worry that the “replication crisis” has damaged the reputation of psychology, especially social psychology, and could hurt researchers’ ability to get funding. “Science, and especially social-behavioral science, is by its very nature uncertain,” Rodgers and Shrout write, but we live in a society hungry for instant gratification and impatient with nuance. To really benefit from social science, we will need to be more patient, thoughtful, and tolerant of complexity when interpreting research findings – and human behavior in general.