Bias Undermines the Reliability of Science

Scientific work, like that illustrated here with beakers and a microscope, can be undermined by the biases of scientists.

In my career I have attended hundreds of scientific research talks. What has always raised a red flag for me is when the presenter starts off by saying something like “What we were trying to show in this study was …” Such statements make me question the presenter’s objectivity. I am fine with a researcher testing a hypothesis or theory; I have done that myself many times. But scientists should not design and conduct a study to show something in particular. The goal is to see whether we can confirm or disconfirm the hypothesis or theory, and we should be as prepared to accept one conclusion as the other. There have been times when the results of my own studies surprised me because they differed from what I expected. That is how science works, because science is supposed to be objective. When it is not, bias undermines the reliability of science.

The Scientific Method

Science is the systematic collection of information or evidence to address a specific question. It is designed to reduce subjectivity by setting up tests of hypotheses in a way that does not predetermine the results. If I am testing a theory, I derive predictions from that theory and subject them to an empirical test, doing so in a way that makes it just as possible to refute the theory as to support it. Scientists are aware that they have biases, both conscious and unconscious, and do their best to counteract them. One such approach is the double-blind drug trial, in which people are randomly assigned to receive either the medication being tested or a placebo, and both the person administering the drug and the person receiving it are “blind” to the condition.
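
The random assignment behind a blind trial can be sketched in a few lines. This is a minimal illustration, not real trial software; the function name `randomize_blind` and the kit-code scheme are invented for the example. The idea is that staff and participants see only neutral kit codes, while the code-to-condition key is held separately until the analysis is unblinded.

```python
import random

def randomize_blind(participants, seed=1):
    """Balanced random assignment to drug vs. placebo.

    Returns (assignments, key):
      assignments -- participant -> kit code (all that staff ever see)
      key         -- kit code -> condition (sealed until unblinding)
    """
    rng = random.Random(seed)
    n = len(participants)
    # Balanced design: half drug, half placebo, in random order.
    conditions = ["drug"] * (n // 2) + ["placebo"] * (n - n // 2)
    rng.shuffle(conditions)
    kit_codes = [f"KIT-{i:03d}" for i in range(n)]
    assignments = dict(zip(participants, kit_codes))
    key = dict(zip(kit_codes, conditions))
    return assignments, key

assignments, key = randomize_blind([f"P{i}" for i in range(10)])
print(assignments)  # e.g. {'P0': 'KIT-000', 'P1': 'KIT-001', ...}
```

Because neither the administering staff nor the participant can infer a condition from a kit code, expectations on either side cannot systematically leak into the measurements.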

Bias Undermines the Reliability of Science

The objectivity of science gets corrupted when scientists approach a study with an agenda to find a certain result. This happens under several conditions.

  • The scientist wants to find support for their own theory. An academic’s reputation and success can be amplified if they can come up with a theory that is supported by data, so there is pressure to find confirming results.
  • The scientist cares about the issue being studied. This happens with politically charged issues where the scientist wants to find results that support a particular side of an issue. In such situations there is the temptation to cherry-pick findings that support a position and ignore contrary evidence. I came across an article once on just such an issue where the authors mentioned the one study from the research literature that supported their position. What they failed to mention was the two dozen studies that found the opposite. This gave the reader the false impression that there was widespread support for their claim when that support did not exist.
  • Statistically significant results are believed to be required for publication. This widespread belief can tempt a scientist to “p-hack” their data by conducting a series of analyses until one comes out the way they are hoping. This is an example of the old adage, “torture the data until it confesses”.
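
The damage p-hacking does can be demonstrated with a short simulation. When the null hypothesis is true, a p-value is uniformly distributed between 0 and 1, so a "significant" result (p < .05) should appear only 5% of the time. The sketch below (function names are invented for illustration) assumes the repeated analyses are independent; real p-hacked analyses on the same data are correlated, but the false-positive rate still climbs well above the nominal 5%.

```python
import random

random.seed(42)

def min_p_after_k_tests(k):
    """One simulated study in which the null hypothesis is true.
    Under the null each p-value is uniform on (0, 1); a p-hacker
    runs k analyses and reports only the smallest p."""
    return min(random.random() for _ in range(k))

def false_positive_rate(k, n_studies=100_000, alpha=0.05):
    """Fraction of null studies that look 'significant' after k looks."""
    hits = sum(min_p_after_k_tests(k) < alpha for _ in range(n_studies))
    return hits / n_studies

for k in (1, 5, 20):
    print(f"{k:2d} analyses -> false-positive rate ~ {false_positive_rate(k):.3f}")
```

With one preplanned analysis the false-positive rate stays near the nominal 0.05; with five independent looks it rises toward 1 − 0.95⁵ ≈ 0.23, and with twenty toward roughly 0.64. This is why preplanned analyses (discussed below) matter.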

Reducing Bias in Research

Combating bias is difficult because merely knowing you have it does not make it disappear. Those of us who conduct statistical tests want our results to be statistically significant, and that desire can affect many of the decisions we make from the design of the study to how we treat the data. We cannot eliminate our own biases, but there are things we can do to reduce them.

  • Analyses should be preplanned to avoid p-hacking. The probabilities of inferential statistics become distorted when analyses are conducted in multiple ways. Research shows that almost any dataset can be made to appear supportive of a hypothesis if enough analyses are tried.
  • Conduct comprehensive literature reviews that address the question rather than search only for research that supports a particular conclusion. Collect the research sources on a topic first. Then go through them in a systematic way to get a comprehensive overview of supporting and nonsupporting evidence.
  • Hold debates and discussions among members of the research team to reduce groupthink. One technique is for a member of the team to assume the role of devil’s advocate and challenge what everyone believes.
  • Seek input from colleagues and peers who might have different biases. Often those with different opinions can help us see our blind spots.

Bias is part of human thinking and cannot be avoided. Science exists as a tool to control biases by using systematic approaches to collecting and evaluating evidence. Too often, however, bias undermines the reliability of science because it creeps into our methods and conclusions. A healthy science promotes a diversity of views where claims are evaluated and re-evaluated so that multiple forms of evidence can be combined to reach a conclusion.

Photo by Chokniti Khongchum from Pexels


1 Reply to “Bias Undermines the Reliability of Science”

  1. Another good post.

    In reading the post, I thought about the need for more journal editors to accept papers with null results if (a) the research is well-designed and (b) the null results shed light on an issue or controversy. I recognize that reviewers may tend to believe that a null result is reflective of a problematic research design, but a null result could also reflect an important outcome in a well-designed study.
