Did We Really Need a Study to Confirm the Obvious?

Last week I blogged about a new study concerning work and suicide. I posted about it on social media and someone commented, “Did we really need a study to confirm the obvious?” This is far from the first time I have seen this comment about someone else’s research, and truth be told, I have made it a time or two myself. But is it a fair criticism of a study just because you found the results unsurprising?

Role of Gatekeepers

As editors and peer reviewers, our job is to evaluate the contribution of a submitted work to determine whether it merits publication. Part of that evaluation concerns the rigor of the research: for an empirical study, were the research design, assessment of variables, and data analysis reasonable in light of the purpose? Are the conclusions justified by the study that was completed? A second criterion is more subjective and concerns a submission's contribution to the scientific field: does it provide enough of an incremental increase in knowledge to justify publication? Here reasonable people can disagree about whether a given finding, for example, that working conditions and burnout are associated with suicide, is of sufficient value.

Did We Really Need a Study to Confirm the Obvious?

The criticism that a study merely confirms the obvious is often made about psychology and the other social sciences. Sometimes the basis for this impression is that the results of a study are consistent with our experiences. A study showing that holding your hand over a lit candle produces pain would seem obvious because we have all experienced that at some point in our lives. But often this claim is based not on direct experience so much as on cognitive bias. That is, once we read a result and its explanation, it is easy to convince ourselves that it is obvious because it makes so much sense. But hindsight is 20/20. Had we been asked to predict the outcome of the study before seeing the result, would we have predicted what was found?

As an example, in 1982 Richard Shikiar and Rodney Freudenberg published a paper in a top peer-reviewed journal, Human Relations, showing how the unemployment rate at the time of a study affected the relationship between job satisfaction and employee turnover. Studies had long shown that dissatisfied employees are at risk for turnover, and this study was concerned with a boundary condition. Using meta-analysis, they found that when jobs were scarce and unemployment rates were high, job satisfaction better predicted turnover. Their explanation was that when you cannot easily find another job, you must really hate your job to quit. This makes logical sense. Likely there were some who saw the study and thought to themselves, did we really need a study to confirm the obvious?

A problem with concluding that their result merely confirmed the obvious is that their method led to the wrong conclusion. One of my then doctoral students, Jeanne Carsten, did her master's thesis to replicate the unemployment effect. Shikiar and Freudenberg had noted that when a study did not report its date of data collection, they estimated it as two years prior to publication. To get the precise time frame, Jeanne contacted the authors of each study to ask when data collection was conducted and found that the two-year estimate was inaccurate: lags between data collection and publication ran as long as 20 years. When she based unemployment rates on the time of data collection, her findings were the opposite of Shikiar and Freudenberg's. When unemployment rates were high, there was little relationship between job satisfaction and turnover. Unhappy employees quit when job opportunities were plentiful rather than scarce.
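For readers curious about the mechanics, here is a minimal sketch of the kind of analysis both studies involved: a meta-regression testing whether the unemployment rate at data collection moderates the satisfaction-turnover correlation. This is not the authors' actual analysis, and every number in it is invented for illustration.

```python
# Illustrative sketch only (not Carsten's or Shikiar and Freudenberg's
# actual analysis): a simple meta-regression asking whether the
# unemployment rate at data collection moderates the satisfaction-
# turnover correlation. All numbers below are made up.
import numpy as np

# Hypothetical per-study inputs: correlation r between job satisfaction
# and turnover, sample size n, and unemployment rate (%) at data collection.
r = np.array([-0.15, -0.30, -0.42, -0.10, -0.38])
n = np.array([120, 250, 310, 90, 200])
unemployment = np.array([8.5, 5.0, 3.9, 9.1, 4.4])

# Fisher z-transform stabilizes the variance of correlations;
# each study is weighted by the inverse of that variance (n - 3).
z = np.arctanh(r)
w = n - 3.0

# Weighted least-squares regression of effect size on the moderator.
X = np.column_stack([np.ones_like(unemployment), unemployment])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
print(f"slope per percentage point of unemployment: {beta[1]:.3f}")
# With these invented numbers the slope is positive: the (negative)
# correlation weakens toward zero as unemployment rises, the pattern
# Carsten found once data-collection dates were corrected.
```

The substantive dispute was never about this arithmetic; it was about which unemployment rate to plug in, the rate at publication or the rate when the data were actually collected.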

Intuitively, both explanations make sense. That only the truly unhappy quit when they cannot easily find another job, and that unhappy people mainly quit when jobs are plentiful, are both feasible ideas. But only one of them is supported by the data, and not only by Jeanne's study: subsequent studies have found the same thing.

It is fair to criticize a study because what it found is not new. For example, if you submitted a paper merely showing a link between job satisfaction and turnover, reviewers would be justified in suggesting that little new knowledge is gained; dozens of studies have already shown the same thing. However, saying that a study's results are not new is not the same as saying they confirm the obvious, especially when the obviousness judgment occurs after you see the results. The obvious needs confirmation precisely to test the accuracy of what people believe, because what people believe is not always supported by the facts. This is especially true when someone asks, "Did we really need a study to confirm the obvious?" If there are no existing studies on the issue, the answer is yes.
