One of the liveliest discussions among members of the Society for Industrial and Organizational Psychology (SIOP) concerns the gulf between science and practice, an issue that is not unique to the field of IO Psychology. Much of this discussion revolves around why academic research is so rarely used in practice, and what might be done to remedy the problem. It is important that we learn how to bridge the academic-practice divide.
The Nature of the Academic-Practice Divide
Much of the discussion of the science-practice divide is predicated on the assumption that the research published in academic journals is applicable to the world of work. On this view, the main problem is not the work itself, but how it is communicated (on the academic side) and how it is received (on the practice side). But how much of academic research is directly relevant to practice? In recent years, not much.
What Makes Research Actionable?
When we talk about evidence-based practice, we can think of three levels of evidence*.
- Association: Finding that a potential driver is correlated with an outcome, such as the finding that transformational leadership correlates with employee engagement. The obvious limitation is that we cannot determine whether the potential driver led to the outcome or the reverse. In the typical study both variables are assessed via employee reports, so it is possible that engaged employees simply see their leaders differently.
- Prediction: This introduces the element of time by showing that the driver assessed at Time 1 can predict the outcome at Time 2. This is certainly useful information, but it cannot tell us whether transformational leadership is really the driver, or is just associated with the driver. For example, perhaps transformational leaders do a better job of identifying and hiring employees predisposed to being engaged.
- Intervention: We create an intervention to manipulate the proposed driver to see if it affects our outcome. We train our leaders to be more transformational and see if engagement increases. This is not going to establish unequivocally that it is the transformational leadership behavior that is driving the outcome, as the training might have other effects as well, but we can establish that leadership training can have desirable effects.
Most academic literature operates mainly at the first two levels. The research designs used are great for establishing associations, and in some cases they can establish prediction over time. This is useful for testing theory, but in most cases not so useful for the practitioner who wants answers about what can be done to drive a desired outcome. There is a big difference between knowing that transformational leadership relates to engagement, and knowing (1) how to increase transformational leadership and (2) that increasing it will raise engagement. In other words, the practitioner needs studies that involve some sort of manipulation of a driver to see what happens. Such studies provide results that are actionable.
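The gap between the first two levels of evidence and the intervention level can be sketched with a small simulation. This is a hypothetical illustration, not from the article: we assume a confounder (a manager's hiring skill) drives both rated leadership and engagement, so the two correlate even though raising leadership directly does nothing to engagement.

```python
# Hypothetical simulation: association/prediction evidence can hold
# even when an intervention on the "driver" has no effect.
import random

random.seed(0)
N = 10_000

# Confounder: some managers are better at hiring engagement-prone employees.
hiring_skill = [random.gauss(0, 1) for _ in range(N)]

# Observational world: leadership ratings and engagement both reflect
# hiring skill, so they correlate without leadership causing engagement.
leadership = [h + random.gauss(0, 1) for h in hiring_skill]
engagement = [h + random.gauss(0, 1) for h in hiring_skill]

def corr(x, y):
    """Pearson correlation, computed from scratch to stay dependency-free."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

print(f"Association: r = {corr(leadership, engagement):.2f}")  # clearly positive

# Intervention world: train a random half of leaders, raising their
# transformational behavior by one unit. Under this data-generating story,
# leadership has no causal path to engagement, so engagement is unchanged.
treated = [random.random() < 0.5 for _ in range(N)]
leadership_after = [l + (1 if t else 0) for l, t in zip(leadership, treated)]
engagement_after = engagement  # no causal effect of leadership here

mean_t = sum(e for e, t in zip(engagement_after, treated) if t) / sum(treated)
mean_c = sum(e for e, t in zip(engagement_after, treated) if not t) / (N - sum(treated))
print(f"Intervention effect (treated - control): {mean_t - mean_c:.2f}")
```

In this toy world an association study (and even a Time 1/Time 2 prediction study) would report a solid correlation, while the training intervention would show an effect near zero, which is exactly why the intervention level of evidence is the actionable one.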
How to Bridge the Academic-Practice Divide
If we are serious about bridging the divide, we need to make more of the research in our academic journals actionable. We certainly need research on association and prediction to help build and test theory. But we also need research on interventions for practitioners to consult on potential solutions for organizational problems. There is a role for both academics and practitioners to play in conducting and publishing more research that is actionable.
*The three levels of evidence are based on Helena Kraemer et al.
21 Replies to “How to Bridge the Academic-Practice Divide”
I completely agree with this article. I would also add that academic literature that does a decent job on the intervention side could improve in communicating the applicability to practice. In some journals, studies using single-subject and group designs do implement interventions and show improvement; however, the interventions themselves, the population of the study, or the scope of the study are not readily applicable or even transferable to practice. Your thoughts on this would be welcome as this is just my own opinion.
Academic articles focus heavily on research design and statistics, and often don’t give enough details of the intervention for someone to replicate it. This is likely because the focus is on the research side, so the authors aren’t trying to offer ideas about how to use their intervention. Of course, sometimes it might be because the authors are selling the intervention, so they don’t want to give too much of it away for free. Then again, interventions are sometimes designed for a particular industry/population so they might not generalize.
I agree with this article that actionable research is of best use for the practitioner. I don’t necessarily see the problem being with the research as much as it is with the communication and willingness to work together. Academics and practitioners need to both ask for and receive each other’s help in a meaningful way. Academics can ask practitioners, “Which problems do you need research on?” Practitioners can ask for help on specific problems. Academics can provide practitioners with solid answers based in tested theory, and practitioners can provide academics with sources for new publishable articles. Unfortunately, I believe that a big gap in the bridge is ego on both sides of the aisle. When each side recognizes how much they can help each other, egos will be toned down, the bridge will be completed, and the researchers and practitioners will flourish together.
We can hope that with more being discussed/written about the issue, there will be more partnerships.
This is a great article, and it highlights a lot of the issues, but another issue that bears addressing is the writing itself. Academic writing is not something that most practitioners would pick up and read while waiting in an airport for a plane. There almost needs to be a translation service that would take academic writing and translate it into an article written to appeal to a practitioner.
You are beginning to see this being done in blogs where the writer translates a particular study or a line of research for a nonacademic audience. That is my goal with this blog, and some of the articles translate academic articles.
Easier said than done! DBA programs have been established to achieve this goal. Also, most of the qualitative methods that include some form of intervention, like action research and design science research, are not perceived as rigorous scholarship. The best book I have found on this subject is Andrew Van de Ven's *Engaged Scholarship*. Furthermore, as a practitioner, I have found most journal peer reviews to be unbalanced, enlarging the gulf between science and practice.
Hopefully DBA programs like USF's will help lead the way. There is a preference for rigor (defined narrowly; in management that means using complex statistics) over applicability, but some authors still manage to publish intervention work.
As a follow-up to my comments above, a good example is the Harvard Business Review. Most of the articles published in HBR are practitioner-oriented, focused on real work problems, with actionable and timely solutions. However, HBR is NOT on the list of top academic scientific journals. Furthermore, most traditional academics give zero standing or weight to HBR from a publishing standpoint. The UT Dallas List of Journals includes: The Accounting Review, MIS Quarterly, Journal of Marketing, Journal of International Business Studies, Journal of Finance, and a few more….
True, but you have to be careful about HBR and other more practitioner-oriented outlets, because I find over-generalization and misinterpretation of results that the authors say support a claim or conclusion. An interesting exercise is to check the sources they cite as evidence to see what the research actually was. Of course, this happens in academic papers too; I have had seminar students check sources to see how many are accurately cited, and many are not. As with anything people sell, both ideas and products, it is buyer beware.
Academics rightly focus on narrow research topics. In order to support a hypothesis with detailed testing, the topic needs to be sufficiently narrow to reduce the potential impact of unrelated factors on the research outcome. The physical and natural sciences have been the leaders in rigorous scientific methods of inquiry and drawing conclusions. We can see business academics struggle with this approach: the Doctor of Business Administration was the original and common terminal business degree, yet over the last 40 years, as business academics have migrated toward scientific methods and acceptance in the larger academic research community, the degree conferred has migrated to the Doctor of Philosophy, mirroring other non-business disciplines.
The challenge in bridging to practice is generalizing academic research that evolves from a narrowly focused research question, with a sample tested to represent a specific and narrowly defined population, so that the study becomes relevant to a larger audience.
The scholarly authors who take this route will need to consider if the academy will respond favorably to attempts to generalize research to a larger audience. Would top journals see practitioner publications as non-scholarly and potentially shun academics who pursue that route, limiting the academic’s potential to continue to find scholarly outlets for his or her academic work? It’s a risk the researcher would need to consider.
Challenging indeed. It seems to me the bridge must be crossed in both directions. Practitioners are vulnerable to experiential assumptions and conclusions with little scientific rigor, which may be comfortable and meaningful. But losing actionable application to practice in the pursuit of academic rigor, and perhaps celebrity, seems counterproductive to the advancement of relevant knowledge. If we are unable to advance our understanding at the intervention level, and to disseminate credible contributions, then I believe we are falling short. Theoretical integrity is critical in academic literature, but so is practical credibility among decision-makers who pursue organizationally defined outcomes. What are the value drivers here?
I agree with this article wholeheartedly because the goals and results of academic versus practitioner research are different. I think if academic research were monetized based on applicability to business environments, with factors such as testing for customer response, how many organizations in the industry actually put the research to use, and how effective the research was for their strategy, organizational operations, and ROI, then one might see a sea change in focus and motivation toward practical applications that benefit practitioners.
I really liked the article.
This is a wonderful article! As a practicing professional who has been working in academia for many years, I have often wondered how this might best be accomplished. I have been blessed to be somewhat bilingual, but I am the first to admit that it is hard to get business professionals and academics on the same page. Business professionals are never going to read overly complex academic articles; unfortunately, this is the bread and butter of academics. This is further complicated by the fact that there is no incentive for the academic community to translate their articles for practitioner outlets. Having said that, when you can build bridges, amazing things do happen. I have seen a number of interesting business solutions that came out of experiential learning partnerships. Perhaps DBAs who speak both languages can lead the way and build the bridge!
Interesting article. Although I understand the definition of evidence as defined in the blog based on Kraemer et al, I am not sure that the three criteria are the most useful for understanding the concept of 'actionable'. I would suggest that 'actionable', from a practitioner standpoint, may be based more upon criteria such as applicability (are the recommendations/findings relevant to my business?), practicality (is the benefit of adopting a recommendation worth the expenditure of resources?), or disruption (how much friction will adoption of the recommendation cause to other aspects of my business?). If these criteria are valid, it may not be a communication issue as much as a utility issue that should be addressed to bridge the academic-practice divide.
To bridge the divide, research must not only be actionable but also relevant and reliable. Academic literature is too often disconnected from the actual problems practitioners face. Reliability is also a problem: research findings are not generalizable or useful to practitioners when studies are conducted in a test environment that does not mirror the rugged landscape in which practitioners need to solve problems.
Unlike sciences such as biology, chemistry, and medical research, academia in the business arena has little impact on the practitioner's world. I believe there are many reasons for this dichotomy. First and foremost, most academics seek Doctor of Philosophy degrees in business, and the majority of such individuals have little to no long-term practical business experience. These individuals go from grade school through high school, college, and graduate school to their doctorate, and for the most part their work experience is either within a university setting or, prior to attaining their graduate degree, working at Dairy Queen. As set forth in the article, such authors rarely address outcomes or the manipulation of a driver because they lack the practical experience to do so. Secondly, much like lawyers, academics strive to complicate their communications to impress one another, making those communications pointless: they are difficult to understand, so the practitioner does not have a base of understanding from which to even begin to develop a model for implementation.
Even if an academic has practical experience, they soon learn that it isn’t a big help in publishing academic articles.
This is a very interesting article. I wonder how often academics conduct a test or trial to verify that they are focusing on a topic of relevance to practitioners. Attending conferences such as the Engaged Management Scholarship Conference is a great opportunity to discuss work in progress with academics and practitioners, and to surface issues during the early research stage. It was intriguing to hear about the different stages: association, prediction, and intervention. It seems like more work at the intervention stage would be published; however, it sounds like that is not the case in practice. Thank you for sharing this wisdom.
Most academic research, at least in management, is driven by the academic literature and focuses largely on theoretical issues. Academics are interested in general principles that might hold across many firms/situations. Often research driven by practice issues is seen as too limited and specific to a given firm or situation.
It seems that I have spent my whole career of 40 years telling I/O students "This is how it should be done but is not how it is done." Our discipline hasn't had as much impact as we would like, and the change that has occurred in the US has more often come through employment legislation. My way to deal with the big gulf is to largely ignore it and focus on teaching my students usable skills. For example, I've done a lot with interview skills, which are a foundation for pretty much all organizational work. So maybe you won't be able to change hiring or appraisal systems, but you can conduct a quality selection interview or a good performance appraisal. Likewise, I see a lot of value in teaching students how to conduct a good survey, given that many organizations are currently survey-crazy but do such a poor job of it.