Both academic research and evidence-based practice rely on information to reach conclusions and make decisions. For example, a practitioner might administer a battery of pre-employment assessments to job applicants to predict who will make the best employee. An academic might collect data on employee attitudes and motives to test a model that can explain why some salespeople sell more product than others. Statistical methods provide a wide array of tools that can be used to analyze and synthesize information. But too often the results of statistical analysis are used in an uncritical way, assuming that if something is statistically significant it must be true. It is best to avoid magical thinking with statistics by taking a more nuanced view.
Avoid Magical Thinking with Statistics
In 1971, one of the leading methodologists in psychology, Paul Meehl, discussed the idea of the automatic inference machine. His point was that people too often accept the results of statistical analyses uncritically, as if undeniable conclusions could be drawn by following some formulaic practice. Statistics are often taught in this cookbook fashion, as if conclusions from data reside in the statistics used. This can lead to misconceptions, such as the belief that one must use ANOVA rather than multiple regression to reach a causal conclusion (yes, I have seen this argument made).
In many fields, researchers will use complex structural equation modeling and other modeling methods to test process models. These are models that propose a causal sequence in which a series of events unfolds over time. Such processes involve a chain in which variables drive one another in a specified order. Too often the design of the investigation cannot establish the temporal order of events, or that one variable affects another. Even longitudinal designs in many cases only establish when events were measured, not when they occurred. At best they show that a pattern of correlations is consistent with the proposed model, not that the model is correct. These cases illustrate a comment by the statistician Howard Wainer, who in 1999 wrote:
“The magic of statistics cannot create information when there is none” (p. 255).
Complex Statistics Can Lead to Wrong Conclusions
Early in my career I became enamored with complex statistics–I was a kid with a new toy. I would design my studies to produce data that allowed me to use the most complicated methods available. It wasn’t long, though, before I realized that the complicated statistics were not giving me new insights that I couldn’t see in the patterns of correlations. There was no statistical magic that revealed what couldn’t be seen with simpler statistics. Even worse, sometimes just looking at the complex results would lead to an incorrect conclusion. For example, suppose you are interested in predicting employee job performance from a battery of personality tests. A test might not be related to performance on its own, but when placed into a complex analysis it appears to be a good predictor due to complex patterns of relationship–the classic suppressor effect. It would be incorrect to conclude that whatever the test assesses is related to performance or, worse, that it somehow drives performance.
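The suppressor effect is easy to demonstrate with simulated data. In the sketch below (Python, with invented variable names, not any real test battery), a “test” score mixes a performance-relevant component with irrelevant noise, and a second scale happens to measure mostly that noise. The second scale is essentially uncorrelated with performance on its own, yet in a multiple regression it receives a large (negative) weight because it cleans the noise out of the first test.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

valid = rng.normal(size=n)          # part of the test that reflects performance
noise = rng.normal(size=n)          # part of the test that does not
test = valid + noise                # observed personality test score
suppressor = noise + 0.1 * rng.normal(size=n)  # a scale measuring mostly the noise
perf = valid + rng.normal(size=n)   # simulated job performance

# Zero-order correlation: the suppressor barely relates to performance
r_sup = np.corrcoef(suppressor, perf)[0, 1]

# Multiple regression of performance on both scales (ordinary least squares)
X = np.column_stack([np.ones(n), test, suppressor])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)

print(f"r(suppressor, perf) = {r_sup:.3f}")   # near zero
print(f"regression weights: test = {beta[1]:.2f}, suppressor = {beta[2]:.2f}")
```

With these (made-up) population values, the suppressor's correlation with performance is near zero while its regression weight is close to -1, and including it roughly doubles the variance explained. Looking only at the regression output, one might wrongly conclude the suppressor scale measures something that matters for performance.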
Statistics Is about Scientific Inference
There is nothing magical about complex statistics, and I find that the more complex the statistics, the more they can obscure rather than illuminate. I like to start an analysis with simple descriptive statistics. If I’ve done a survey study with a dozen variables, I want to know first how people responded. Are most people high or low on each variable, or are people spread across the entire possible range of scores? Next, I want to know how each pair of variables is related. If my study is trying to understand an outcome (e.g., employee sales), I want to know which of my predictor variables relates to sales, and how predictors relate to one another. I typically will print off a correlation table and spend time “digesting” the patterns of relationships. Once I am satisfied that I understand how my variables are related, I will proceed to more complex analyses that go beyond variable pairs. That might be multiple regression or something more complex.
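The first two steps of that workflow are only a few lines of code. This is a minimal sketch using pandas, with hypothetical survey variables and simulated responses standing in for real data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200

# Hypothetical survey: three predictors and a sales outcome (all simulated)
df = pd.DataFrame({
    "conscientiousness": rng.normal(4.0, 0.8, n),
    "extraversion": rng.normal(3.5, 1.0, n),
    "job_satisfaction": rng.normal(3.8, 0.9, n),
})
df["sales"] = 2 * df["conscientiousness"] + rng.normal(0, 2, n)

# Step 1: descriptive statistics -- means, SDs, and ranges for each variable
print(df.describe().round(2))

# Step 2: the correlation table, to digest the pairwise patterns
print(df.corr().round(2))
```

Only after digesting this output would one move on to multiple regression or a more complex model.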
Statistical analysis cannot be interpreted in isolation from the research design that generated the data. If you want to draw inferences about what might be driving something, data have to be up to the task of determining when things occurred in relation to one another, and that one thing is actually driving another. The design that is considered the gold standard is a simple experiment–randomly assign people (or whatever units you are studying) to conditions and then compare those conditions. Are people exposed to some intervention higher on the outcome than people who serve as a control group? Ironically, this simple design that provides the best evidence requires only the simplest of statistics–a t-test.