I was talking to a friend recently who said that at her business school, a requirement of tenure is that you publish from a very short list of journals. My experience with short journal lists is that when they are a requirement for tenure and other rewards, the science suffers because people become hyper-focused on publishing in those journals rather than doing good research that addresses an important problem. Even more concerning is that it encourages questionable research practices and even downright scientific fraud. It is extremely difficult to publish in business and business-related journals; most accept only a small percentage of the papers submitted for consideration. Add the extreme pressure of requiring publication in a small number of outlets, and we should not be surprised that we have produced bad research with journal lists.
Publish or Perish in Academia
Professors are expected to publish and contribute to the knowledge base of their disciplines. In industrial-organizational psychology and the business disciplines, we conduct scientific research, most of it using statistical methods. The research we conduct is written into scientific reports and submitted for publication consideration to a journal, where it undergoes peer review, typically by two or three experts. Those journals vary in prestige, and the odds of any submission being successful are small no matter how good the research is, in part because peer review is subjective and experts often disagree. Thankfully, there are many journals available, and it isn’t unusual for a paper to be rejected from several before finding a home. Most of my career was spent in a psychology department where there was no journal list. We were expected to publish in good journals, and we were free to publish in a variety of disciplines. Because I studied occupational health and well-being, I published some of my reports in medicine, nursing, and public health journals. In many business schools that would be discouraged because they would want me to focus my energy on their journal list.
Journal Lists
Business schools vary in the journal lists they use. Some have long and inclusive lists, like the Australian Business Deans Council list, which gives an A-B-C rating to more than 2000 journals. It is easy to set criteria, like publishing 3 articles in A journals and 4 articles in B journals before tenure. This gives a great deal of latitude, as there might be a dozen or more A journals and even more B journals in a given field. Other schools use narrow lists like the Financial Times 50 (FT50), which covers 50 business journals across all the business disciplines. Still more exclusive is the University of Texas at Dallas (UTDallas24) list of 24 journals. In my field, there are only three journals on that list, and only two of those accept scientific reports. Each submission has maybe a 1 in 20 chance of success, so you get only two shots at the list for each report.
Bad Research with Journal Lists
Financial fraud is often explained by the Cressey Fraud Triangle, which holds that fraud is the byproduct of three conditions: Pressure, Opportunity, and Rationalization. Narrow journal lists, by putting extreme pressure on professors, create all three.
- Pressure. The requirement to publish puts pressure on faculty, but having to publish in a very small number of journals creates extreme pressure. Your career and livelihood depend on a handful of outlets. What is an untenured professor to do when studies fail to produce sexy results and research reports keep getting rejected?
- Opportunity. Professors have research and publication skills. It is not hard to figure out ways to game the system and even cheat if that’s what it takes for success.
- Rationalization. The two big rationalizations are “everybody is doing it” and “no one was harmed”. It is easy for professors to rationalize that questionable research practices are widespread and that even if their results are incorrect, no one is going to use them in the work world anyway, so they will do no harm.
Outright scientific fraud might be rare (how rare is impossible to know because it is hard to detect), but the questionable research practices of HARKing and p-hacking are widespread. HARKing (hypothesizing after results are known) means looking at results and then writing hypotheses, claiming that you expected those results before conducting the study. It is dishonest, but it is not considered actual fraud. P-hacking means systematically reanalyzing data in different ways until they provide the results you want, an enactment of the saying “torture the data until they confess”. Statistical analysis is based on probability: typically you show that, if there were really nothing going on, there would be only about a 1 in 20 chance of getting the result you found. P-hacking can raise the probability of erroneously finding the results you want to 2 out of 3 or better. Clearly, we get bad research with journal lists.
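To make that arithmetic concrete, here is a minimal simulation sketch (mine, not from any published study) of how p-hacking inflates false positives. It imagines a hypothetical researcher working with pure-noise data who tries three different outcome measures, each with and without dropping the most extreme data points, and reports whichever analysis yields p < .05.

```python
# Illustrative sketch of p-hacking: trying several analyses on noise and
# reporting whichever one comes out "significant" inflates false positives.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_studies, n_per_group, alpha = 10_000, 30, 0.05
false_positives = 0

for _ in range(n_studies):
    found_significant = False
    for _outcome in range(3):                # try three different outcome measures
        a = rng.normal(size=n_per_group)     # group A: pure noise, no true effect
        b = rng.normal(size=n_per_group)     # group B: pure noise, no true effect
        for trim in (False, True):           # optionally drop the most extreme point at each end
            aa = np.sort(a)[1:-1] if trim else a
            bb = np.sort(b)[1:-1] if trim else b
            if ttest_ind(aa, bb).pvalue < alpha:
                found_significant = True     # report whichever analysis "worked"
    false_positives += found_significant

print(f"Nominal false-positive rate: {alpha:.0%}")
print(f"False-positive rate with this much p-hacking: {false_positives / n_studies:.0%}")
```

In runs like this, the observed rate typically comes out around three to four times the nominal 5%, and the more researcher degrees of freedom you add (more outcomes, optional covariates, stopping data collection early or late), the higher it climbs.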
A Better Way
What I have seen is that young and even not-so-young faculty can become so obsessed with getting their work in journals on “the list” that they wind up choosing topics and methods that they believe will make them successful. Rather than identifying an interesting topic and working the problem through a series of studies intended to offer new insights, they attempt to replicate what they see in the target journals. A few years ago, I worked with a group of colleagues who came up with a new topic that had barely been studied. We started to research the problem, but abandoned the effort when some in the group got nervous that what we were doing wouldn’t get us on “the list”. Ironically, a couple of years later I came across a paper in one of the listed journals that did something very similar to what we set out to do. That could have been us, but the pressure of the list derailed us.
It is fine to have a list and encourage faculty to try to publish in those journals. What creates the problem is when tenure and other career rewards depend on publishing on the list. My old psychology department had it right. We expected people to develop a program of research that contributed to our understanding of whatever it was they studied. We expected some of their work to be in top-level journals, but we were not very fussy about the discipline as long as where they published made sense. A clinical psychologist, for example, might publish in a psychiatry journal rather than a psychology journal. The way to encourage good research is not to abandon lists entirely, but to remove the requirement to publish in listed journals. That pressure is why we get bad research with journal lists.
Image generated by DALL-E 3.0. Prompt “mad scientist. Man with beard and wild hair doing something crazy”