A Peer Reviewer Crisis in the Organizational Sciences

Academic journals maintain rigor by soliciting evaluations from two or more experts who serve as peer reviewers for each paper submitted for publication consideration. Lately I have been hearing more and more concerns from journal editors in the organizational sciences (e.g., industrial-organizational psychology and management) about the difficulty of finding peer reviewers. This is not surprising to me, considering how often I am asked to review and what the expectations for reviews have become. It seems to me that there is a peer reviewer crisis in the organizational sciences: we have created our own tragedy of the commons, with more and more journals chasing a limited pool of potential reviewers.

Pressure on the Peer Reviewer Community

When I began my career, there were only a handful of journals, reviews were short, and there was only one round of review in most cases. The field has evolved over the years to where peer reviewer loads have exploded. This is due to several factors.

  • Paper length has increased. I did an analysis of article length in one of our top peer review outlets, Journal of Applied Psychology, between the time I was in graduate school in the 70s and now. Introductions are now more than 5 times longer. As a reviewer, I began to dread reviewing for this journal as wading through long and ponderous introductions full of convoluted theoretical statements and claims was a real chore.
  • Review length has increased. The first few papers I submitted to academic journals received reviews that were about half a page of strengths/weaknesses and perhaps a few errors (e.g., the number in the table doesn’t match the text). Over the years, review length has inflated to the point where 2-3 pages or more is common. I’ve seen reviews of 6-8 pages of picky criticisms and demands to change analyses, framing, interpretation, and writing.
  • Multiple rounds of review are common. At one time, most papers received only a single round of peer review with editors handling revisions themselves without sending papers back to reviewers. Today almost all resubmissions go back to peer reviewers and it is unusual to have only two rounds. Editors keep sending revisions back to the same reviewers asking them to re-review until they run out of criticisms. This can go on for many cycles–my longest is 5.
  • Rejections of papers after R&R. Another common practice, particularly at high status journals, is rejection of R&Rs, sometimes after multiple rounds of revision. An author submits a paper that is reviewed once, twice, three times, four times by the same three reviewers, only to be rejected and thrown back into the submission pool where it is sent to another journal that will enlist 2-3 reviewers to start the cycle again.
  • New Journals Are Sprouting Like Weeds. The number of journals is increasing at an incredible rate. Online only journals require far fewer resources than paper journals, and we see the growth of online only publishing companies that are producing new journals that put more pressure on the reviewer pool. It is a rare week that I don’t get at least one invitation to review for a journal I never heard of.

Reviewers Are a Limited Resource

Journals prefer peer reviewers who are active researchers with a track record of publications on the topic in question. Take any submitted article, and there are only a limited number of ideal reviewers who fit the profile. It is not uncommon for someone to review a rejected paper for one journal and then receive the same paper to review from another. Established researchers can get more than one new review invitation per week, plus invitations to re-review R&Rs. That can amount to half a day or more per week spent reviewing papers. While universities encourage faculty to engage in peer reviewing, this activity is not rewarded: promotions, raises, and tenure are determined by publications, not peer reviewing.

The tragedy of the commons occurs as the peer reviewer pool is overused and journal editors struggle to find peer reviewers for papers. This has produced a peer reviewer crisis as decision times increase, and often editors wind up with inexperienced reviewers because they are the only ones they can find.

Reducing Pressure on the Reviewer Pool

The organizational research field has reached a tipping point in overutilizing the limited peer reviewer pool. There isn’t much that can be done about the proliferation of journals, but there is a lot that editors can do to reduce the pressure on this precious resource. Some suggestions:

  • Desk reject more papers. Many journals do a good job of having editors reject papers prior to peer review. The editor can tell when a paper does not meet minimum standards of rigor. Some journal editors like to provide detailed feedback to authors, even for papers that have no chance of acceptance, so they send them out for review. This is a nice service to provide, particularly for inexperienced authors, but unfortunately, we do not have the collective resources to review every paper. Furthermore, not every author wants to wait months to get feedback they did not ask for, making this an example of unhelpful help (well-intentioned attempts to provide help that are actually harmful).
  • Enforce length limits for papers. The length of papers in the organizational sciences has exploded over time. It is not necessary for every paper to have a dozen or more pages of introduction containing a detailed review of the literature on the topic. Other fields treat research reports as just that–a short report about a piece of research focused on what was done and what was found. I recently had a paper accepted in a nursing journal with an introduction that was a mere two pages.
  • Set a high bar for R&R invitations. There are too many rejected R&Rs because editors are inviting resubmissions on papers that have limited contributions. The contribution isn’t going to change because the authors make a stronger case in the writing. We need to stop inviting revisions on papers that have serious limitations, and stop with the “high risk R&Rs”.
  • There should be one round of peer review. Editors should handle R&Rs themselves. There is no reason in most cases to go back to reviewers.
  • Only use two reviewers. Nothing annoys me more than agreeing to review and then discovering at the end that there are 2 or even 3 other reviewers. Given the shortage of peer reviewers, this is just selfish on the journal’s part.

The whole peer review process, especially for elite journals in the organizational realm, has evolved to a bad place. Not only are authors, especially the vulnerable untenured, treated poorly in the review process as I explain here, but the peer reviewer pool is being overutilized and stressed, creating a peer reviewer crisis. It is no surprise to me that so many are giving up academic careers for industry. We are overdue for reforms that better balance the need to ensure rigor in our academic papers with supporting one another in disseminating our work.

Photo by Michael Burrows at Pexels


6 Replies to “A Peer Reviewer Crisis in the Organizational Sciences”

  1. I agree to a very large degree with your analysis Paul. But of course I have some comments of my own:

    Another reason for the increase in the peer reviewer load is that the number of papers to be reviewed has increased sharply over the years. Researchers tend to write more papers than, say, thirty years back when I started doing research; and more researchers from countries in which previously little research was done have entered the scene (e.g., China, Pakistan, India, South Africa, Brazil – roughly the BRICS countries, minus Russia, plus Pakistan). Although this should also have resulted in an increase in the number of eligible reviewers, journals tend to receive many more submissions than before, thus increasing peer reviewer load. Question: do we as journal editors use this potential of new reviewers sufficiently, or do we tend to focus on the established researchers whom we already know from, say, ten to twenty years ago?

    In addition (and I am not sure this is new compared to 30 years ago) researchers tend to submit their research to high-ranking journals in the hope of having their work reviewed; even if their paper is rejected, researchers then still have a number of high-quality reviews to be used in a revision, after which a journey to a potentially long series of increasingly lower-ranking journals follows. If the number of authors pursuing this strategy increases (see my previous comment), the number of reviews to be conducted increases exponentially.

    Apart from the fact that journals should be more realistic regarding the reviewing process (see Paul’s comments and suggestions), perhaps authors should be more realistic as well regarding the quality of their papers, only submitting their papers to outlets that can realistically be expected to be interested in their research (instead of initially submitting their work to high-ranking journals, working their way back to the lower-ranking journal that will ultimately publish their work).

  2. More researchers, more journals, more papers, but more understanding? I have a sense that the returns on research efforts are diminishing. The big things to be discovered have been discovered and so it gets harder to judge a contribution–even putting aside the replication crisis. That could lead to the mental gymnastics in the introductions to papers and the volume of “suggestions” from reviewers.

  3. Great post, Paul. Really resonated with me. I am an AE at two journals, and after reading your earlier post about ethics, I try my best to go with one round of reviews, but don’t always succeed.

    I also feel that most introductions do not really need a literature review. Instead, if a researcher is going to do a literature review, then let that literature review be a systematic literature review rather than the one-sided narrative reviews we typically get in introductions. And let that systematic review stand alone as its own paper.

    Another factor here is, in some fields of IOP/OB that intersect with technology/CS (e.g., personnel selection), our research is competing for airtime against a group of people who simply upload pre-prints onto the arXiv in real-time. The outcome is that by the time our work gets published (following several rounds of revisions), it’s already obsolete, and meanwhile the CS community is already working on the next big thing. Some examples of this include the ‘recent’ (2022-2024) papers on NLP, which were essentially rendered redundant when LLMs became accessible. Some of that work has appeared in my ‘journal alert’ email within the last month, whereas ChatGPT has been available for over a year now!

    Also, I had to chuckle about your “high risk” comment; I can’t remember the last time I received an RnR from a top journal that wasn’t “high risk”; maybe that work should have been rejected.

    1. Great points Pat. Thanks for the comment. I didn’t even think of the issue of the lag between the time we do the work and when it gets out there. We rarely think about our work being time-sensitive, but often it is.
