In 2011, 270 researchers began the massive undertaking of redoing nearly 100 studies to see if the results would turn out the same. Each study's findings had originally been published in one of three top psychology journals. Fewer than 50 percent of the replications reproduced the original outcomes.

The surprising results of the Reproducibility Project: Psychology, an initiative led by Brian Nosek of the University of Virginia, might lead the public to believe that published studies hold less weight than previously thought. But scientists involved with the project say that not being able to replicate results doesn't necessarily mean the original experiment was wrong.

"This project is not evidence that anything is broken. Rather, it's an example of science doing what science does," said study co-author Cody Christopherson of Southern Oregon University. "It's impossible to be wrong in a final sense in science. You have to be temporarily wrong, perhaps many times, before you are ever right."

To complete this research, the Reproducibility Project recruited scientists to redo the studies, each following the original methods as closely as possible so as not to introduce any new variables. The scientists were asked to pick a study from a list of preselected experiments, each choosing one in their own area of expertise.

Sound science versus job security

"A large portion of replications produced weaker evidence for the original findings despite using materials provided by the original authors, review in advance for methodological fidelity, and high statistical power to detect the original effect sizes," wrote the study's authors.

So, if all the replications followed the original studies to the letter, why did so few reproduce the original results? There could be many reasons. It could be that the original results were correct and the replication was wrong. It could be that both are wrong. Or it could be that some unknown variable altered the data in one of the studies, producing different results.

Those are the more innocent possibilities. Others, rather than being rooted in science, are potentially rooted in another obvious variable: human interference. Some scientists have been known to fudge results to make their studies more newsworthy. In addition, scientists' jobs in academia and grants for their research often depend on publication, providing a clear motive to gather interesting or surprising results. Unfortunately, replications that verify the validity of a previous study are not considered as interesting, and therefore not as publishable, as new findings.

"Reproducibility is not well understood because the incentives for individual scientists prioritize novelty over replication," write the authors in their conclusion. "Innovation is the engine of discovery and is vital for a productive, effective scientific enterprise. However, innovative ideas become old news fast. Journal reviewers and editors may dismiss a new test of a published idea as unoriginal. The claim that 'we already know this' belies the uncertainty of scientific evidence. Innovation points out paths that are possible; replication points out paths that are likely; progress relies on both."

"To get hired and promoted in academia, you must publish original research, so direct replications are rarer," Christopherson told The Smithsonian. "I hope going forward that the universities and funding agencies responsible for incentivizing this research — and the media outlets covering them — will realize that they've been part of the problem, and that devaluing replication in this way has created a less stable literature than we'd like."

To reduce the number of studies with inaccurate findings, some journals and institutions have put stricter guidelines in place. One thing the scientific community does seem to agree on? They would prefer that their data be as accurate as possible.

"This project provides accumulating evidence for many findings in psychological research and suggests that there is still more work to do to verify whether we know what we think we know," write the study authors.