Striving for High Standards of Research Integrity

A scientist in a trench coat standing in front of a turn-of-the-century lab

The past two weeks I wrote about a growing problem of questionable research practices in industrial-organizational psychology and management. Two weeks ago I talked about how journal lists create pressure that drives dishonest behavior. Last week I summarized a study showing that bad practices are widespread. Of course, it is easy to identify problems; finding solutions is much harder. In the end, we can only control our own behavior as we strive for high standards of research integrity. Regardless of the pressures of our departments or of established bad practices, each of us should do the best science we can. Doing that begins with the goal of making a contribution to our science, which means working on the problem we are studying rather than focusing on getting published in a particular journal.

What Is a Contribution to Science?

A contribution to science is a new insight that advances knowledge of a phenomenon. A publication in an elite journal is not in itself a contribution to science. It is nothing until others read it and make use of the findings or ideas. There are many kinds of contributions.

The most immediate impact of an academic paper is citations, that is, mentions of it in future papers by others. In a scientific paper, an author might make a claim such as “Toxic leadership leads to follower burnout”. At the end of the sentence, there will be a citation that points to the full reference in the reference list. Ideally, the cited work provides scientific evidence in support of the claim. There are several databases that count how often each published work is cited.

We can also look at impact more precisely in terms of the kind of impact it has. For example, someone might propose a new theory, and others conduct research to test it. Someone might propose a new method for doing research, and others use that method in their own work. It is also possible for academic research to impact practice. The findings of management research can inform decisions that managers make, and how they treat their employees.

Striving for High Standards of Research Integrity

Good science begins by defining the problem to be studied, e.g., what is the impact on employees of having a toxic leader? The researcher then plans a series of studies to learn about the phenomenon. I might start with a detailed literature review to find out what is already known. The next step might be a qualitative study in which I interview people about their experiences with toxic leaders. I might progress to a survey of employees, and end with an intervention study designed to make leaders less toxic. These steps reflect a programmatic approach in which each study raises new questions to address. At certain points, results can be written into a paper for a conference or journal submission.

Striving for high standards of research integrity means taking a systematic approach designed to minimize bias as much as possible. Sound practices include:

  • Balanced literature review. Because the purpose is to learn about the issue, not to find support for a particular idea, literature reviews should look for evidence on both sides. If I believe toxic leaders cause employee burnout, I want to include studies that provide evidence for that idea, as well as evidence that fails to support it. It is not unusual to see someone cite confirming sources and then in parentheses say something like “for contrary evidence see…”
  • Pre-plan procedures. The hallmark of science is that we pre-plan our procedures and then execute the plan. We do not deviate on the fly to produce better results. I was once involved in a study comparing an experimental treatment with the standard treatment for mental health patients. The plan was to assign every other patient to the new treatment, but the clinical staff violated the protocol and decided to send the patients they felt would benefit most to the new treatment, thus contaminating the study. Luckily, we discovered it and were able to course-correct.
  • Pre-plan analyses. This is the best way to avoid p-hacking. Decide before analyzing the data what statistical tests will be run, and then execute the plan. If exploratory analyses are conducted, that should be clearly noted.
  • State hypotheses in advance. All hypotheses should be stated with justifications prior to analyzing data. New hypotheses belong in the Discussion at the end where it will be clear that they are based on the findings. These hypotheses can be very helpful as the motivator for future studies to test them.
  • Be transparent. Everything done should be disclosed in the write-up, and procedures should be justified. This includes adding/deleting cases, adding/deleting control variables, and conducting different analyses.
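The value of pre-planned analyses can be illustrated with a short simulation (my sketch, not from the original post). Under the null hypothesis, a valid statistical test produces a p-value that is uniformly distributed between 0 and 1. P-hacking amounts to running several such tests and reporting only the smallest p-value, which inflates the false-positive rate well above the nominal 5%:

```python
import random

def false_positive_rate(n_tests, n_sims=100_000, alpha=0.05, seed=1):
    """Estimate the chance of a 'significant' result under the null.

    Under H0, each valid test's p-value is uniform on (0, 1). A
    pre-planned analysis runs one test (n_tests=1); p-hacking runs
    several and keeps the smallest p-value.
    """
    random.seed(seed)
    hits = 0
    for _ in range(n_sims):
        smallest_p = min(random.random() for _ in range(n_tests))
        if smallest_p < alpha:
            hits += 1
    return hits / n_sims

planned = false_positive_rate(n_tests=1)   # close to the nominal 0.05
hacked = false_positive_rate(n_tests=10)   # roughly 0.40 (1 - 0.95**10)
print(planned, hacked)
```

With ten tests and no true effect, the probability of at least one p < .05 is 1 − 0.95¹⁰ ≈ 0.40, which is why deciding the analyses in advance (and flagging anything exploratory) matters so much.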

Keep the Ultimate Goal in Mind

Given a choice of having 10 papers in the most prestigious journal that no one cites or having 10 papers in lesser journals that are highly cited, I would go for the citations. I know that there are pressures to publish in the top business journals, especially for faculty in top business schools, and it can be tempting to game the system. Admittedly, it can be hard to compete when you are following the rules and so many others are not. My strategy has always been to focus my attention on the science while being aware of what common practices are in top journals but not letting that dictate what I do. I believe that if you do good work the journal hits will come. The alternative is to engage in questionable research practices that corrupt the science. In the short run it might lead to career success, but there is a cost for the field. Striving for high standards of research integrity might not always be easy, but it is necessary if we are serious about making a positive contribution to our science.

Image generated by DALL-E 4. Prompt “steampunk image of scientist in power pose in wide aspect”
