In 1959 Donald Campbell and Donald Fiske published one of the most influential papers ever written about psychological measurement. They explored the question of how we can determine whether what we are measuring is what we intended to measure. One concept they discussed was the idea of method variance—that how we measure something affects the scores we get. They suggested that when we use the same method (e.g., rating scales in surveys) to assess different things, that method will make those things appear more related than they really are. A new article on the measure-centric approach to method variance by Paul Spector, Cheryl Gray, and Chris Rosen, published in the Journal of Business and Psychology, challenges this idea.
What Is Method Variance?
Campbell and Fiske suggested that the method we use to measure something affects the scores we observe. This means that if we ask people to rate their agreement with personality items on a 5-point scale, the use of the same scale will produce a similar pattern of responses regardless of the questions asked. Thus, people will tend to agree or tend to disagree across items such as:
- I am the life of the party.
- I always try my best to succeed.
- I enjoy solving difficult problems.
- New experiences make me uneasy.
These items are designed to reflect different personality traits that are not necessarily related within people. The method variance concept says that measuring them with the same method will make the traits appear more related than they really are.
The idea that the same method produces inflated relationships among things has been referred to by several names including common method variance and Paul Meehl’s crud factor. This suggests that there is a background level of relationship due to method that is relatively constant across the things we measure. But when you break it down, there is little evidence that this view is correct.
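To make the inflation idea concrete, here is a small simulation (not from the article; the numbers are illustrative). Two traits are generated to be truly unrelated; when a shared "method" factor, such as a personal response style, is added to both observed scores, the observed correlation rises well above zero even though the traits themselves are independent.

```python
import random
import statistics

random.seed(1)

def correlation(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

n = 5000
# Two truly independent traits
trait_a = [random.gauss(0, 1) for _ in range(n)]
trait_b = [random.gauss(0, 1) for _ in range(n)]
# A shared "method" factor, e.g., each person's tendency to agree with items
method = [random.gauss(0, 1) for _ in range(n)]

# Observed scores without any shared method influence (trait + random error)
obs_a_clean = [t + random.gauss(0, 1) for t in trait_a]
obs_b_clean = [t + random.gauss(0, 1) for t in trait_b]

# Observed scores when the same method factor enters both measures
obs_a_meth = [t + m + random.gauss(0, 1) for t, m in zip(trait_a, method)]
obs_b_meth = [t + m + random.gauss(0, 1) for t, m in zip(trait_b, method)]

# The first correlation should be close to 0; the second is inflated
# (theoretically 1/3 with these equal variance components)
print(round(correlation(obs_a_clean, obs_b_clean), 2))
print(round(correlation(obs_a_meth, obs_b_meth), 2))
```

This is the classic common-method-variance picture: one shared factor, constant across measures, pushing all observed correlations upward. The measure-centric critique below argues that real measurement rarely works this way.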
The Measure-Centric Approach to Method Variance
The measure-centric approach says that there is no such thing as common method variance or the crud factor. Merely using the same method does not guarantee that the things we measure will be related. Psychological measurement is more complicated than that. Rather, every time we measure something, there is the possibility that a host of different things might influence the scores people receive. Furthermore, the specific things that affect scores differ across measures. For example, if we ask people about a sensitive topic, some people will be reluctant to be honest whereas others will feel free to share. This tendency to reveal will affect measures of some topics but not others. Given that there are many things with the potential to affect scores, each measure we create can be influenced by a different combination of them. In other words, rather than there being one crud factor affecting all measures that use the same method, there are an almost infinite number of potential crud factors, with each measured variable having its own mix.
What the New Research Showed
The Spector team conducted three studies of three measures of stressful working conditions (the Interpersonal Conflict at Work Scale, the Organizational Constraints Scale, and the Quantitative Workload Inventory). They conducted a series of tests to see whether the three scales differed in the specific factors that affected them. The results supported the measure-centric idea in that each measure was affected in different ways by the factors investigated. One of those factors was hostile attribution bias (HAB)—the tendency for people to assume that when something bad happens, it was because someone did it to them on purpose. They found evidence that HAB affected people’s responses to the interpersonal conflict scale, but not to the other two. None of the factors they studied seemed to affect the measure of workload.
The measure-centric idea is that the factors affecting a given measure are determined not just by the method, but by the combination of the method and the underlying thing we want to measure. Many of the remedies that people typically apply, such as measuring one variable today and the other next week, are unlikely to help because they assume the problem lies solely with the method. What is needed is a more nuanced approach: the extraneous influences that can affect measurement must be identified and appropriately dealt with for each of the things we measure.