Questionable Research Practices in Top Management Journals

[Image: Carnival barker in top hat and striped vest in front of a carnival.]

In response to my blog article last week about how journal lists can promote questionable research practices, a friend sent me an article by Sven Kepes, Sheila Keener, Michael McDaniel, and Nathan Hartman documenting how widespread those practices are in top business journals. Their article builds on the Chrysalis effect study by Ernest O'Boyle, George Banks, and Erik Gonzalez-Mulé, which compared dissertations with subsequent journal publications about the same research. Together, these two articles show a pattern of widespread questionable research practices in the top management journals that populate business school lists. These practices undermine the scientific integrity of our field. Is it any mystery that practitioners don't take our research more seriously?

Questionable Research Practices

There are two dishonest practices, euphemistically referred to as questionable, that have been the focus of concern in business and other scientific fields. HARKing, or hypothesizing after results are known, occurs when authors analyze the data first and then state hypotheses as if they had guided the research. This is a problem because in industrial-organizational psychology and management, top journals only publish research that confirms theory. Such research is based on deductive inference, in which it is critical that the theory and hypotheses come first. The logic is based on a simple syllogism: if the theory is correct, then we should find a certain research outcome. Finding that outcome is evidence that the theory is correct, although that evidence is not very strong. "If A then B" does not mean "if B then A." There can be many reasons other than the theory for the findings. This is why we need to test theories multiple times with different methods. If all the varied tests converge on the same conclusion, we can have confidence that the theory is right. Hypotheses stated after the analyses are conducted are meaningless as tests of theory.

P-hacking is conducting analyses multiple times in different ways until statistical significance is achieved. This might mean adding and deleting cases, trying different control variables, or trying different kinds of analyses. As it turns out, the more of these things you try, the more likely you are to find statistical significance purely by chance and come to an erroneous conclusion. This is perhaps the main reason that so many studies cannot be replicated. What makes the practice questionable is that authors generally do not report all the failed attempts, only the final analysis. If they were transparent, it is unlikely the paper would be accepted for publication.
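To see why repeated attempts inflate false positives, here is a minimal simulation sketch of my own (not from either article; the sample size, the ten attempts, and the use of a fresh predictor per attempt are illustrative assumptions standing in for trying different specifications). Each simulated "study" has no true effect, yet the analyst reports only the best of ten tries:

# Minimal sketch (illustrative assumptions, not from the articles):
# trying many analyses on data with no real effect inflates the
# chance of finding "statistical significance."
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_attempts, n = 2000, 10, 100

false_positives = 0
for _ in range(n_studies):
    y = rng.normal(size=n)                        # outcome with no true effect
    p_values = []
    for _ in range(n_attempts):
        x = rng.normal(size=n)                    # each attempt: a different
        p_values.append(stats.pearsonr(x, y)[1])  # predictor/specification
    if min(p_values) < 0.05:                      # report only the "best" result
        false_positives += 1

print(f"False-positive rate: {false_positives / n_studies:.2f}")  # ~0.40, not 0.05

With ten independent attempts, roughly 40 percent of these null studies produce at least one "significant" result at the conventional .05 level, even though there is nothing real to find.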

Questionable Research Practices in Top Management Journals

Sven Kepes and his team compared dissertations with journal articles based on those dissertations. A dissertation is based on a pre-approved written proposal that states the hypotheses, methods, and analyses. The student must conduct the study as proposed and report whether or not the hypotheses are supported using the proposed analyses. Whether the hypotheses are supported has no bearing on whether the student graduates. All that matters is that the proposal was followed. Thus, there is little incentive to engage in questionable research practices; there would be no point.

The journal version, on the other hand, is a different story. To be accepted in a top I-O psychology or management journal, one must have supported hypotheses. As I noted last week, when careers ride on publishing in a short list of top journals, there is immense pressure to game the system and engage in questionable research practices. Kepes and his team show us how often that happens.

They started with dissertations from the 10 most research-productive management programs and compared those dissertations to their journal versions. They also compared the 8 most prestigious journals with lower-tier journals. What they found was pretty shocking. In the top journals, more than two-thirds of articles showed evidence of HARKing, and about three-fourths showed evidence of p-hacking. Further, the percentages were much higher in the top journals than in the other journals, undoubtedly due to the pressure faculty are under to publish in those journals.

Time to Clean Up Our Science

The Open Science Movement is an attempt to clean up scientific practices and eliminate questionable research practices. It has had some success in getting researchers to pre-register studies (similar to a dissertation proposal) so that they publicly commit to their methods and analyses in advance and are more transparent about what they did. This is a good start, but there is far more to do. We need less emphasis on short journal lists so that faculty are not under extreme pressure to publish in top outlets where bad practices seem to be widespread. Journals need to reform so that supported hypotheses and statistically significant results are not prerequisites for publication. If we continue as we are, we not only distort our science but also undermine the credibility of business research.

Image generated by DALL-E 3. Prompt “Picture of a carnival barker. Wider aspect. No, the carnival barker from before in a wise aspect.”
