Last week I was browsing the posters at the Society for Industrial and Organizational Psychology (SIOP) conference in Chicago. A presenter of a meta-analysis was explaining how tough it was to compare countries because so few papers gave a complete description of their samples. Within the organizational sciences, people do not generally pay much attention to the nature of their samples, and so they can be casual about how those samples are described. I get it. Academics are under extreme publish-or-perish pressure, and the field doesn’t really care about the nature of samples. We rarely see comparisons of occupations, as if the results of all our studies were universal and applied equally to all kinds of workers. But there is research showing that there can be differences. Therefore, populations are important in research reports, and not only in organizational research.
Populations Are Important in Research Reports
A few weeks ago I blogged about the need for us to study blue-collar workers. I mentioned a study showing how results differed among occupations. Yet there are very few papers contrasting occupations or industries. There are likely several reasons for this. First, academic researchers generally study what they think will lead to publication. If they do not see many papers studying something, they will assume it is unlikely to be publishable. Thus papers concerning occupational differences are rare, just as it is rare to see intervention studies published.
Second, it is hard to find studies of occupations to compare. A few years ago I was part of a team that attempted to study occupational differences in the stress effects of workload. Our idea was to conduct a meta-analysis contrasting results of studies in different occupations. We were unable to complete the project because nursing was the only occupation for which multiple papers had been published. I recently took a quick look at the nature of samples in 2024 papers in the Journal of Applied Psychology and found that half were from online panels, a quarter were students, and only a handful were of specific occupations such as salespeople and teachers.
Third, access to occupational samples can be difficult. It is not surprising that online panels are so popular: you can conduct a survey in a few hours. Students are readily at hand for faculty. Finding samples of particular occupations takes far more effort and time, so there would need to be a good reason to do so. If journals will accept online panels, why go to the effort of sampling occupations when peer reviewers are unlikely to reward doing so?
Complete Reporting Is Needed
If you read meta-analyses (quantitative combinations of results across studies), you will have noticed how much effect sizes can vary across studies. One study might find a correlation between two variables of .50 while another finds -.10. Meta-analysts are eager to explain those discrepancies, often by trying to identify important population differences across studies. For example, they might want to know whether results are the same in culturally dissimilar countries or in different occupational groups (a small illustration of such a subgroup comparison follows the list below). To perform these analyses, it is necessary to know the nature of each study’s sample. Needed details include:
- Where was the study conducted? This includes the country and, in many cases, the region of the country. Within the US, for example, some variables differ across regions and states. Would we expect all findings from California to match those from Arkansas?
- When was the study conducted? One of my first doctoral students wanted to know if the unemployment rate would affect the relationship between employee job satisfaction and turnover rates; she found it did. In order to conduct the analysis, she had to know when the data were collected. Because few of the papers mentioned the time frame, she had to contact authors, and not everyone replied. Had authors reported this information in their papers, it would have saved her time and boosted her sample size.
- What industries and occupations are represented? Often descriptions are rather general; for example, it might be noted only that participants were 200 employed individuals, without further detail. Of course, some samples are very diverse, with only a handful of individuals from the same occupation, but that is not always the case. More attention to such details would be helpful. Our papers routinely report the gender, ethnicity/race, and age distributions of samples. Why not other details?
- How were the data collected? We need details about the conditions under which data were collected. Were there incentives, or were participants unpaid volunteers? How were they recruited? Was the study supported by an organization, and if so, how? Who collected the data: the researchers or organizational insiders? How was the study framed: purely as research, or as an internal project? Were respondents given the expectation that results would inform future management actions or decisions?
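To make concrete why these details matter to a meta-analyst, here is a minimal sketch in Python of the kind of subgroup (moderator) comparison described above. The studies, correlations, sample sizes, and occupation labels are entirely hypothetical, and the pooling uses a standard fixed-effect Fisher z approach rather than any particular published method. The point is simply that this comparison is impossible when papers omit the occupation of their samples.

```python
import math

# Hypothetical per-study results: (correlation r, sample size n, occupation).
# All values are invented for illustration; they are not from real studies.
studies = [
    (0.50, 120, "nursing"),
    (0.42, 250, "nursing"),
    (0.31,  90, "nursing"),
    (-0.10, 180, "sales"),
    (0.05, 300, "sales"),
]

def fisher_z(r):
    """Fisher r-to-z transformation, which makes correlations roughly normal."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a Fisher z value to a correlation."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def pooled_r(subset):
    """Fixed-effect pooled correlation, weighting each study by n - 3
    (the inverse of the sampling variance of Fisher z)."""
    weights = [n - 3 for _, n, _ in subset]
    zs = [fisher_z(r) for r, _, _ in subset]
    mean_z = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return inverse_fisher_z(mean_z)

# Subgroup (moderator) analysis: pool within each occupation and compare.
for occupation in sorted({occ for _, _, occ in studies}):
    subset = [s for s in studies if s[2] == occupation]
    print(f"{occupation}: pooled r = {pooled_r(subset):.2f} "
          f"(k = {len(subset)} studies)")
```

If the nursing studies here had described their participants only as "employed adults," the two pooled values would collapse into a single overall estimate that misrepresents both groups.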
We have a great deal of knowledge about employees and organizations, but there are important gaps that need attention. Unfortunately, the field is currently obsessed with theory, so the importance of knowledge takes a back seat to increasingly complex and convoluted tests of theoretical ideas, many of which have little relevance to the workplace. It is time we acknowledged that populations are important in research reports and began giving them the attention they deserve.
Note: Thank you to everyone at SIOP last week who commented to me about this blog. Professors mentioned using it with their students, and practitioners appreciated the accessible explanation of IO Psychology topics. I appreciate the feedback.
The image was generated with DALL-E 4.0