In this article, we introduce the method of measuring skin conductance responses (SCR), which reflect peripheral (bodily) signals associated with emotions, decisions, and, ultimately, behavior. While measuring SCR is a well-established, robust, widely used, and relatively inexpensive method, it has rarely been utilized in organizational research. We introduce the basic aspects of SCR methodology and explain the behavioral significance of the signal, especially in connection with emotional experience. Importantly, we describe in detail a specific research protocol (fear conditioning) that serves as an illustrative example to support the initial steps of organizational scholars who are new to the method. We also provide the related scripts for stimulus presentation and basic data analysis, as well as an instructional video, with the aim of facilitating the dissemination of SCR methodology to organizational research. We conclude by suggesting potential future research questions that can be addressed using SCR measurements.
Despite growing interest in video-based methods in organizational research, the use of collaborative ethnographic documentaries is rare. Organizational research could benefit from the inclusion of collaborative ethnographic documentaries to (a) enable the participation of "difficult to research" groups, (b) better access the material, embodied, or sensitive dimensions of work and organizing, and (c) enhance the dissemination and practical benefits of findings. To increase understanding of this under-explored method, the authors first review the available literature and consider strengths, limitations, and ethical concerns in comparison with traditional ethnography and other video-based methods. Using recent data collected on working class men doing "dirty work," the authors then illustrate the use of collaborative ethnographic documentary as an investigative tool—capturing often concealed, embodied, and material dimensions of work—and a reflective tool—elaborating and particularizing participants’ narrative accounts. It is concluded that collaborative ethnographic documentary facilitates greater trust and communication between researchers and participants, triggering richer exploration of participants’ experiences, in turn strengthening theoretical insights and practical impact of the research.
Recently, the application of neuroscience methods and findings to the study of organizational phenomena has gained significant interest and converged in the emerging field of organizational neuroscience. Yet, this body of research has principally focused on the brain, often overlooking fuller analysis of the activities of the human nervous system and the methods available to assess them. In this article, we aim to narrow this gap by reviewing heart rate variability (HRV) analysis, the set of methods that assess beat-to-beat changes in heart rhythm over time and that are used to draw inferences about the outflow of the autonomic nervous system (ANS). In addition to anatomo-physiological and detailed methodological considerations, we discuss related theoretical, ethical, and practical implications. Overall, we argue that this methodology offers the opportunity not only to shed light on a wealth of constructs relevant to management inquiry but also to advance the overarching organizational neuroscience research agenda and its ecological validity.
Structural equation modeling (SEM) has been a staple of the organizational sciences for decades. It is common to report degrees of freedom (df) for tested models, and it should be possible for a reader to recreate df for any model in a published paper. We reviewed 784 models from 75 papers published in top journals in order to understand df-related reporting practices and discover how often reported df matched those that we computed based on the information given in the papers. Among other things, we found that both df and the information necessary to compute them were available about three-quarters of the time. We also found that computed df matched reported df only 62% of the time. Discrepancies were particularly common in structural (as opposed to measurement) models and were often large in magnitude. This means that the models for which fit indices are offered are often different from those described in published papers. Finally, we offer an online tool for computing df and recommendations, the Degrees of Freedom Reporting Standards (DFRS), for authors, reviewers, and editors.
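As a minimal illustration of the df arithmetic discussed above (our own sketch, not the article's online tool): for a covariance-structure model, df equal the number of unique variances and covariances, p(p + 1)/2, minus the number of freely estimated parameters q. A small R function, with a hypothetical one-factor CFA as the example:

```r
# df for a covariance-structure model: unique (co)variances minus free parameters
sem_df <- function(p, q, means = FALSE) {
  # p: number of observed variables; q: number of freely estimated parameters
  n_obs <- p * (p + 1) / 2 + if (means) p else 0
  n_obs - q
}

# Hypothetical one-factor CFA with 6 indicators: 5 free loadings (1 fixed for
# scaling), 6 residual variances, 1 factor variance -> q = 12
sem_df(p = 6, q = 12)  # 21 - 12 = 9
```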
Advances in data science, such as data mining, data visualization, and machine learning, are extremely well-suited to address numerous questions in the organizational sciences given the explosion of available data. Despite these opportunities, few scholars in our field have discussed the specific ways in which the lens of our science should be brought to bear on the topic of big data and big data's reciprocal impact on our science. The purpose of this paper is to provide an overview of the big data phenomenon and its potential for impacting organizational science in both positive and negative ways. We identify the biggest opportunities afforded by big data along with the biggest obstacles, and we discuss specifically how we think our methods will be most impacted by the data analytics movement. We also provide a list of resources to help interested readers incorporate big data methods into their existing research. Our hope is that we stimulate interest in big data, motivate future research using big data sources, and encourage the application of associated data science techniques more broadly in the organizational sciences.
Magnetoencephalography (MEG) is a method to study electrical activity in the human brain by recording the neuromagnetic field outside the head. MEG, like electroencephalography (EEG), provides an excellent, millisecond-scale time resolution, and allows the estimation of the spatial distribution of the underlying activity, in favorable cases with a localization accuracy of a few millimeters. To detect the weak neuromagnetic signals, superconducting sensors, magnetically shielded rooms, and advanced signal processing techniques are used. The analysis and interpretation of MEG data typically involves comparisons between subject groups and experimental conditions using various spatial, temporal, and spectral measures of cortical activity and connectivity. The application of MEG to cognitive neuroscience studies is illustrated with studies of spoken language processing in subjects with normal and impaired reading ability. The mapping of spatiotemporal patterns of activity within networks of cortical areas can provide useful information about the functional architecture of the brain related to sensory and cognitive processing, including language, memory, attention, and perception.
Research has explored how embeddedness in small-world networks influences individual and firm outcomes. We show that there remains significant heterogeneity among networks classified as small-world networks. We develop measures of the efficiency of a network, which allow us to refine predictions associated with small-world networks. A network is classified as a small-world network if it exhibits a distance between nodes that is comparable to the distance found in random networks of similar sizes—with ties randomly allocated among nodes—in addition to containing dense clusters. To assess how efficient a network is, there are two questions worth asking: (a) What is a compelling random network for baseline levels of distance and clustering? and (b) How proximal should an observed value be to the baseline to be deemed comparable? Our framework tests properties of networks, using simulation, to further classify small-world networks according to their efficiency. Our results suggest that small-world networks exhibit significant variation in efficiency. We explore implications for the field of management and organization.
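As an illustrative sketch (not the authors' efficiency measures), the conventional small-world test compares an observed network's clustering and average path length with those of size-matched random graphs. The R code below, using the igraph package and a simulated Watts-Strogatz network as a stand-in for observed data, shows one common way to build that baseline:

```r
library(igraph)
set.seed(42)

# Stand-in "observed" network: a Watts-Strogatz small-world graph
g_obs <- sample_smallworld(dim = 1, size = 100, nei = 5, p = 0.05)

# Baseline: random graphs with the same numbers of nodes and ties
n <- vcount(g_obs); m <- ecount(g_obs)
baseline <- replicate(200, {
  g_rand <- sample_gnm(n = n, m = m)
  c(cc = transitivity(g_rand, type = "global"), pl = mean_distance(g_rand))
})

# Small-world quotient: clustering ratio over path-length ratio
cc_ratio <- transitivity(g_obs, type = "global") / mean(baseline["cc", ])
pl_ratio <- mean_distance(g_obs) / mean(baseline["pl", ])
cc_ratio / pl_ratio  # values well above 1 suggest small-world structure
```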
An emergent stream of research in management employs configurational and holistic approaches to understanding macro and micro phenomena. In this study, we introduce mixture models—a related class of models—to organizational research and show how they can be applied to nonexperimental data. Specifically, we reexamine the long-standing research question concerning the CEO pay–firm performance relationship using a novel empirical approach, treating individual pay elements as components of a mixture, and demonstrate its utility for other research questions involving mixtures or proportions. Through this, we provide a step-by-step guide for other researchers interested in compositional modeling. Our results highlight that a more nuanced approach to understanding the influence of executive compensation on firm performance brings new insights to this research stream, showcasing the potential of compositional models for other literatures.
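For readers unfamiliar with compositional modeling, the sketch below conveys the general idea on simulated data: pay components that sum to one are transformed with a centered log-ratio before entering a regression. This is a generic compositional approach, not the authors' specific mixture model, and all variable names are hypothetical.

```r
set.seed(1)
# Hypothetical pay mix: proportions of salary, bonus, and equity per CEO
pay <- data.frame(salary = runif(50, 0.2, 0.6), bonus = runif(50, 0.1, 0.3))
pay$equity <- 1 - pay$salary - pay$bonus
perf <- rnorm(50)  # simulated firm performance

# Centered log-ratio (clr) transform frees the components from the
# unit-sum constraint before they enter an ordinary regression
clr <- log(pay) - rowMeans(log(pay))

# The three clr scores sum to zero, so only two are linearly independent
summary(lm(perf ~ clr$salary + clr$bonus))
```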
Organizational scholars have grown increasingly aware of the importance of capturing phenomena at the within-person level of analysis in order to test many organizational behavior theories involving emotions, motivation, performance, and interpersonal processes, to name a few. Experience sampling methodology (ESM) and diary-based procedures provide data that better match many dynamic organizational theories by measuring constructs repeatedly across events or days, providing an inter-episodic understanding of phenomena. In this article, we argue for the value of another measurement procedure that also adopts a repeated measures approach but does so by continuously measuring psychological processes without any gaps over relatively short timeframes. More specifically, we suggest that continuous rating assessments (CRA) can serve as a tool that enables the measurement of dynamic intra-episodic processes that unfold over the course of events, allowing precise determination of how, when, and in what way constructs change and influence each other over time. We provide an overview of this methodology, discuss its applicability to understanding time-based phenomena, and illustrate how this technique can provide new insight into dynamic processes using an empirical example.
Companies and organizations the world over wish to understand, predict, and ultimately change the behavior of those whom they interact with, advise, or else provide services for: be it the accident-prone driver out on the roads, the shopper bombarded by a myriad of alternative products on the supermarket shelf, or the growing proportion of the population who are clinically obese. The hope is that by understanding more about the mind, using recent advances in neuroscience, more effective interventions can be designed. But just what insights can a neuroscience-inspired approach offer over-and-above more traditional, not to mention contemporary, behavioral methods? This article focuses on three key areas: neuroergonomics, neuromarketing, and neurogastronomy. The utility of the neuroscience-inspired approach is illustrated with a number of concrete real-world examples. Practical challenges with commercial neuromarketing research, including cost, timing, ethics/legality, and access to scanners (in certain countries), as well as the limited ecological validity of the situations in which people are typically tested, are also discussed. This commentary highlights a number of the key challenges associated with translating academic neuroscience research into commercial neuromarketing applications.
Organizational science has increasingly recognized the need for integrating time into its theories. In parallel, innovations in longitudinal designs and analyses have allowed these theories to be tested. To promote these important advances, the current article introduces time series analysis for organizational research, a set of techniques that has proved essential in many disciplines for understanding dynamic change over time. We begin by describing the various characteristics and components of time series data. Second, we explicate how time series decomposition methods can be used to identify and partition these time series components. Third, we discuss periodogram and spectral analysis for analyzing cycles. Fourth, we discuss the issue of autocorrelation and how different structures of dependency can be identified using graphics and then modeled as autoregressive moving-average (ARMA) processes. Finally, we conclude by describing additional time series patterns, the issue of data aggregation, and more sophisticated techniques that could not be given full coverage here. Illustrative examples based on topics relevant to organizational research are provided throughout, and a software tutorial in R for these analyses accompanies each section.
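A condensed flavor of the accompanying R tutorial (our own minimal sketch on simulated data): decomposition, a periodogram, and a simple ARMA-type model can all be run with base R functions.

```r
set.seed(7)
# Three years of weekly data: trend + annual cycle + noise
t <- 1:156
y <- ts(0.02 * t + 2 * sin(2 * pi * t / 52) + rnorm(156), frequency = 52)

plot(stl(y, s.window = "periodic"))  # decompose into trend/seasonal/remainder

spectrum(y, log = "no")              # periodogram: dominant peak at the annual cycle

fit <- arima(y, order = c(1, 0, 0))  # AR(1) as a simple ARMA example
acf(residuals(fit))                  # check for remaining dependence
```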
This article applies paradox as a metatheoretical framework for the reflexive analysis of roles within a participatory video study. This analysis moves us beyond simply describing roles as paradoxical, and thus problematic, to offer insights into the dynamics of the interrelationship between participant, researcher, and video technology. Drawing on the concept of "working the hyphens," our analysis specifically focuses on the complex enactment of Participation-Observation and Intimacy-Distance "hyphen spaces." We explore how video technology mediates the relationship between participant and researcher within these spaces, providing opportunities for participant empowerment but simultaneously introducing aspects of surveillance and detachment. Our account reveals how video study participants manage these tensions to achieve participation in the project. It examines the roles for the researched, the technology, and the researchers that are an outcome of this process. Our analysis advances methodology by bringing together a paradox perspective with reflexive work on research relationships to demonstrate how we can more adequately explore tensions in research practice and detailing the role of technology in the construction and management of these tensions.
This article examines how video recording practices exert an influence on the ways in which an organizational phenomenon—in our case organizational space—becomes available for analysis and understanding. Building on a performative and praxeological approach, we argue that the practical and material ways of conducting video-based research have a performative effect on the object of inquiry and do not simply record it. Focusing in particular on configurations of camera angle and movement—forming what we call the Panoramic View, the American-Objective View, the Roving Point-of-View, and the Infra-Subjective View—we find that these apparatuses privilege different spatial understandings, both by orienting our gaze toward different analytical elements and by qualifying these elements in different ways. Our findings advance methodological reflection on video-based research by emphasizing that while video has a number of general affordances, the research practices with which we use it matter and have an impact on both the analytical process and the researcher's findings.
We examined the effects of response biases on 360-degree feedback using a large sample (N = 4,675) of organizational appraisal data. Sixteen competencies were assessed by peers, bosses, and subordinates of 922 managers as well as self-assessed using the Inventory of Management Competencies (IMC) administered in two formats—Likert scale and multidimensional forced choice. Likert ratings were subject to strong response biases, making even theoretically unrelated competencies correlate highly. Modeling a latent common method factor, which represented nonuniform distortions similar to those of the "ideal-employee" factor in both self- and other assessments, improved the validity of competency scores, as evidenced by meaningful second-order factor structures, better interrater agreement, and better convergent correlations with an external personality measure. Forced-choice rankings modeled with Thurstonian item response theory (IRT) yielded construct and convergent validities as good as those of the bias-controlled Likert ratings, and slightly better rater agreement. We suggest that the mechanism for these enhancements is finer differentiation between behaviors in comparative judgments, and we advocate the operational use of the multidimensional forced-choice response format as an effective bias prevention method.
My article examines how researchers use video recordings to gain insight into organizational phenomena. I conduct a literature review of articles published from 1990 to 2015 in six top-tier organizational journals: Academy of Management Journal, Administrative Science Quarterly, Journal of Management Studies, Organization Science, Organization Studies, and Strategic Management Journal. My review identifies 56 articles where video was central to the research design. My analysis demonstrates how researchers used the audible, visible, and timing affordances of video recordings to investigate organizational phenomena, including rhetoric, emotion, group interactions, and workplace studies. By exploring how researchers studied these phenomena, I show how video illuminates aspects of situated action and interaction that are difficult to evaluate using other kinds of data. My review contributes to the literature on video in organization studies by providing an overview of video-based research in these journals, highlighting the diversity of approaches used to collect and analyze video, and illustrating some of the ways that video helped to advance knowledge around organizational phenomena.
This article assesses the utility of video diaries as a method for organization studies. While it is frequently suggested that video-based research methodologies have the capacity to capture new data about the minutiae of complex organizational affairs, as well as offering new forms of dissemination to both academic and professional audiences, little is known about the specific benefits and drawbacks of video diaries. We compare video diaries with two established and "adjacent" methods: traditional diary studies (written or audio) and other video methods. We evaluate each in relation to three key research areas: bodily expressions, identity, and practice studies. Our assessment of video diaries suggests that the approach is best used as a complement to other forms of research and is particularly suited to capturing plurivocal, asynchronous accounts of organizational phenomena. We use illustrations from an empirical research project to exemplify our claims before concluding with five points of advice for researchers wishing to employ this method.
Mixed methods systematically combine multiple research approaches—either in basic parallel, sequential, or conversion designs or in more complex multilevel or integrated designs. Multilevel mixed designs are among the most valuable and dynamic. Yet current multilevel designs, which are rare in the mixed methods literature, do not strongly integrate qualitative and quantitative approaches for use in one study. This lack of integration is particularly problematic for research in the organization sciences because of the variety of multilevel concepts that researchers study. In this article, we develop a multilevel mixed methods technique that integrates qualitative comparative analysis (QCA) with hierarchical linear modeling (HLM). This technique is among the first of the multilevel ones to integrate qualitative and quantitative methods in a single research design. Using Miles and Snow’s typology of generic strategies as an example of organizational configurations, we both illustrate how researchers may apply this technique and provide recommendations for its application and potential extensions. Our technique offers new opportunities for bridging macro and micro inquiries by developing strong inferences for testing, refining, and extending multilevel theories of organizational configurations.
Usage of models integrating mediation and moderation is on the rise in the organizational sciences. While moderation and mediation are fairly well understood by themselves, additional complexities emerge when combining them. Some guidance exists regarding the empirical testing of such models, but this guidance is widely misunderstood. Furthermore, very little guidance exists regarding the theoretical justification of such models. This article offers a checklist of recommendations for the presentation, justification, and testing of models integrating mediation and moderation and compares these to what is actually being done via a review of empirical papers in top-tier journals.
Recently developed methods for the noninvasive modulation of brain activity can also modulate human cognitive behavior. Among these methods are transcranial electric stimulation and transcranial magnetic stimulation, both of which come in multiple variants. Here, we describe the methods and their assumed neural mechanisms for readers from the economic and social sciences with little prior knowledge of these techniques. Our emphasis is on the available protocols and experimental parameters to choose from when designing a study. We also review a selection of recent studies that have successfully applied them in the respective fields. We provide short pointers to limitations that need to be considered and refer to the relevant papers where appropriate.
Upon adequate stimulation, real-time maps of cortical hemodynamic responses can be obtained by functional near-infrared spectroscopy (fNIRS), which noninvasively measures changes in oxygenated and deoxygenated hemoglobin after multiple sources and detectors are positioned over the human scalp. This review aims to give a concise and simple overview of the basic principles of fNIRS, including its features, strengths, advantages, limitations, and utility for evaluating human behavior. Transportable/wireless commercially available fNIRS systems have a time resolution of 1 to 10 Hz, a depth sensitivity of about 1.5 cm, and a spatial resolution of up to 1 cm. fNIRS has been found suitable for many applications in humans, whether adults or infants/children, in the social sciences, basic neuroimaging research, and medicine. Some examples of present and future prospects of fNIRS for assessing cerebral cortex function during human behavior in different (natural and social) situations are provided. Moreover, we report on the most recent fNIRS studies investigating interpersonal interactions through the hyperscanning approach, which consists of measuring brain activity simultaneously in two or more people.
The current article argues that video-based methodologies offer unique potential for multimodal research applications. Multimodal research, further, can respond to the problem of "elusive knowledges," that is, tacit, aesthetic, and embodied aspects of organizational life that are difficult to articulate in traditional methodological paradigms. We argue that the multimodal qualities of video, including but not limited to its visual properties, provide a scaffold for translating embodied, tacit, and aesthetic knowledge into discursive and textual forms, enabling the representation of organizational knowledge through academic discourse. First, we outline the problem of representation by comparing different forms of elusive knowledge, framing this problem as one of cross-modal translation. Second, we describe how video’s unique affordances place it in an ideal position to address this problem. Third, we demonstrate how video-based solutions can contribute to research, providing examples both from the literature and our own applied case work as models for video-based approaches. Finally, we discuss the implications and limitations of the proposed video approaches as a methodological support.
The desire to better understand the micro-behaviors of organizational actors has led to the increased use of video ethnography in management and qualitative research. Video captures detailed interactions and provides opportunities for researchers to link these to broader organizational processes. However, we argue there is a methodological gap. Studies that focus on the detail of the interactions "zoom in." Others that focus on the interactions in context "zoom out." But few go further and "zoom with"––that is, incorporate participants’ interpretations of their video-recorded interactions. Our methodological contribution is that zooming with participants enhances research findings, helps to develop theory, and provides new insights for management practice. The article develops this idea by exploring and describing the method and applying it to top management teams, as well as showing how each focus provides different theoretical insights depending on which perspective or combination of perspectives is used. We conclude with the suggestion that a three-pronged approach to video ethnography be taken. The final section of the article discusses the implications for research and highlights the benefits of reflexivity in management practice.
This article considers the application of video-based research to address methodological challenges for organizational scholars concerned with the sociomaterial foundations of work practice. In particular, the claim that "all practices are always sociomaterial" raises a "problem of relevance"—that is, on what grounds can we select material to include in the analytic account when there is a vast array of material in each setting? Furthermore, how can we grasp the sociality of material objects that are often taken for granted and that drift in and out of view? We address these methodological questions by drawing on ethnomethodology and conversation analysis, and by making use of video recordings of everyday work and organizing. We demonstrate the approach with data from two service settings and explore the analysis of both single cases and collections. To conclude, the article considers the distinctive contributions that these video-based studies make to our understanding of sociomateriality and organizational practice more generally.
The current conventions for test score reliability coefficients are unsystematic and chaotic. Reliability coefficients have long been denoted by names that are unrelated to each other, generated through different methods, and represented inconsistently. Such inconsistency prevents organizational researchers from seeing the whole picture and misleads them into using coefficient alpha unconditionally. This study provides a systematic naming convention, formula-generating methods, and consistent representations for each of the reliability coefficients. It also offers an easy-to-use solution to the problem of choosing between coefficient alpha and composite reliability, introduces a calculator that computes various multidimensional reliability coefficients in a few mouse clicks, and presents illustrative numerical examples to clarify the characteristics and computation of these coefficients.
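For concreteness, a small sketch (ours, not the article's calculator) of the two coefficients at issue: alpha computed from an item covariance matrix, and composite reliability (omega) computed from standardized loadings and residual variances.

```r
# Coefficient alpha from an item covariance matrix S
alpha_from_cov <- function(S) {
  k <- ncol(S)
  (k / (k - 1)) * (1 - sum(diag(S)) / sum(S))
}

# Composite reliability (omega) from loadings and residual variances
omega_from_loadings <- function(lambda, theta) {
  sum(lambda)^2 / (sum(lambda)^2 + sum(theta))
}

lambda <- c(0.8, 0.7, 0.6, 0.5)            # standardized loadings
omega_from_loadings(lambda, 1 - lambda^2)  # ~ .75
```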
Video recording technology allows for the discovery of psychological phenomena that might otherwise go unnoticed. We focus here on gesture as an example of such a phenomenon. Gestures are movements of the hands or body that people spontaneously produce while speaking or thinking through a difficult problem. Despite their ubiquity, speakers are not always aware that they are gesturing, and listeners are not always aware that they are observing gesture. We review how video technology has facilitated major insights within the field of gesture research by allowing researchers to capture, quantify, and better understand these transient movements. We propose that gesture, which can be easily missed if it is not a researcher’s focus, has the potential to affect thinking and learning in the people who produce it, as well as in the people who observe it, and that it can alter the communicative context of an experiment or social interaction. Finally, we discuss the challenges of using video technology to capture gesture in psychological studies, and we discuss opportunities and suggestions for making use of this rich source of information both within the field of developmental psychology and within the field of organizational psychology.
The use of moving images to generate data for behavioral analysis has long been a methodology available to organizational researchers. In this article, we draw from previous research in team dynamics to describe and discuss various methodological approaches to using video recorded behavior as a source of quantitative data. More specifically, we identify and examine key decision points for researchers and illustrate benefits and drawbacks to consider. The article concludes with suggestions for ways in which quantitative video-based approaches could be improved.
At its inception, neuroeconomics promised to revolutionize economics. That promise has not yet been realized, and neuroeconomics has seen limited penetration into mainstream economics. Nevertheless, it would be a mistake to declare that neuroeconomics has failed. Quite to the contrary, the yearly rate of neuroeconomics papers has roughly doubled since 2005. While the number of direct applications to economics remains limited, due to the infancy of the field, we have learned an amazing amount about how the brain makes decisions. In this article, we review some of the major topics that have emerged in neuroeconomics and highlight findings that we believe will form the basis for future applications to economics. When possible, we focus on existing applications to economics and future directions for that research.
Moderator hypotheses involving categorical variables are prevalent in organizational and psychological research. Despite their importance, current methods of identifying and interpreting these moderation effects have several limitations that may result in misleading conclusions about their implications. This issue has been particularly salient in the literature on differential prediction where recent research has suggested that these limitations have had a significant impact on past research. To help address these issues, we propose several new effect size indices that provide additional information about categorical moderation analyses. The advantages of these indices are then illustrated in two large databases of respondents by examining categorical moderation in the prediction of psychological well-being and the extent of differential prediction in a large sample of job incumbents.
It is increasingly recognized that team diversity with respect to various social categories (e.g., gender, race) does not automatically result in the cognitive activation of these categories (i.e., categorization salience), and that factors influencing this relationship are important for the effects of diversity. Thus, it is a methodological problem that no measurement technique is available to measure categorization salience in a way that efficiently applies to multiple dimensions of diversity in multiple combinations. Based on insights from artificial intelligence research, we propose a technique to capture the salience of different social categorizations in teams that does not prime the salience of these categories. We illustrate the importance of such measurement by showing how it may be used to distinguish among diversity-blind responses (low categorization salience), multicultural responses (positive responses to categorization salience), and intergroup-biased responses (negative responses to categorization salience) in a study of gender and race diversity and the gender by race faultline in 38 manufacturing teams comprising 239 members.
Gaining access in fieldwork is crucial to the success of research, and it is often problematic because it involves working in complex social situations. This article examines the intricacies of access, conceptualizing it as a fluid, temporal, and political process that requires sensitivity to social issues and to the potential ethical choices faced by both researchers and organization members. Our contribution lies in offering ways in which researchers can reflexively negotiate the challenges of access by (a) underscoring the complex and relational nature of access by conceptualizing three perspectives—instrumental, transactional, and relational—and proposing the latter as a strategy for developing a diplomatic sensitivity to the politics of access; (b) explicating the political, ethical, and emergent nature of access by framing it as an ongoing process of immersion, backstage dramas, and deception; and (c) offering a number of relational micropractices to help researchers negotiate the complexities of access. We illustrate the challenges of gaining and maintaining access through examples from the literature and from Rafael's attempts to gain access to carry out fieldwork in a police force.
Boundary conditions (BC) have long been discussed as an important element in theory development, referring to the "who, where, when" aspects of a theory. However, it remains somewhat vague what exactly BC are, how they can or even should be explored, and why their understanding matters. This research tackles these important questions by means of an in-depth theoretical-methodological analysis. The study makes four contributions to organizational research methods: First, it develops a more accurate and explicit conceptualization of BC. Second, it widens the understanding of how BC can be explored by suggesting and juxtaposing new tools and approaches, and it illustrates BC-exploring processes by drawing on two empirical case examples. Third, it analyzes the reasons for exploring BC, concluding that BC exploration fosters theory development, strengthens research validity, and mitigates the research-practice gap. Fourth, it synthesizes the analyses into 12 tentative suggestions for how scholars should approach the issues surrounding BC. The authors hope that the study helps shift the consensus on BC and draws more attention to them.
Historically, the lack of availability and prohibitive expense of brain imaging technology have limited the application of neuroscience research in organizational settings. However, recent advances in technology have made it possible to use brain imaging in organizational settings at relatively little expense and in a practical manner to further research efforts. In this article, we weigh the advantages and disadvantages of neuroscience applications to organizational research. Further, we present three key methodological issues that need to be considered with regard to such applications: (a) level of assessment, (b) intrinsic versus reflexive brain activity, and (c) the targeting of brain region(s) or networks. We also pose specific examples of how neuroscience may be applied to various topical areas in organizational behavior research at both individual and team levels.
Researchers are generally advised to provide rigorous item-level construct validity evidence when they develop and introduce a new scale. However, these precise, item-level construct validation efforts are rarely reexamined as the scale is put into use by a wider audience. In the present study, we demonstrate how (a) item-level meta-analysis and (b) substantive validity analysis can be used to comprehensively evaluate construct validity evidence for the items comprising scales. This methodology enables a reexamination of whether critical item-level issues that may have been supported in the initial (often single study) scale validation process—item factor loadings and theorized measurement model fit, as examples—hold up in a larger set of heterogeneous samples. Our demonstration focuses on a commonly used scale of task performance and organizational citizenship behavior, and our findings reveal that several of the items do not perform as may have been suggested in the initial validation effort. In all, our study highlights the need for researchers to incorporate item-level assessments into evaluations of whether construct scales perform as originally promised.
This article examines means of enhancing the value of mixed method research for organizational science. Conclusions are based on a comprehensive analysis of 69 mixed method articles published in four empirical journals between 2009 and 2014, detailed case comparisons of four illustrative articles, and personal interviews with the lead author of each case. The findings provide three key contributions. First, in documenting the prevalence of mixed methods over the past six years in a broad selection of journals, we identify five approaches to mixed method research—including three novel approaches not yet elaborated in prior treatises on research methods—expanding the feasible options for mixed method scholarship and bolstering confidence in considering such approaches. Second, we reveal themes pertaining to enhancing the value of mixed method research, including elaboration, generalization, triangulation, and interpretation. Finally, we uncover four sets of practical techniques by which this value can be increased. Together, these contributions provide guidance for those endeavoring to utilize a mixed method approach in organizational science.
Sensitive constructs, such as counterproductive workplace behavior (CWB), are of interest to both basic and applied researchers; however, deliberate response distortions—active attempts on the part of respondents to be viewed more favorably—present a major difficulty with studying these topics. Although different methodologies purported to reduce distortions have been developed, they suffer from various limitations. For example, a notable limitation of what is currently considered best practice, randomized response techniques, is the inability to gather individual-level data. Across three experiments, we compare four different methods for obtaining self-reports of CWB that return individual-level data. Results suggest that whereas providing anonymity, counterbiasing, and implicit goal priming did not result in higher reporting of sensitive behaviors, the indirect questioning methodology did result in higher reporting. We also provide initial validity evidence for the indirect questioning scores and rule out some alternative explanations for the increased reporting of the indirect questioning method. Though more research is needed, these studies provide initial evidence of the potential utility of the indirect questioning method for increasing reporting on self-report measures of sensitive constructs.
A general interest in the study of social practices has been spreading across a diversity of disciplines in organization and management research, relying mostly on rich ethnographic accounts of units or teams. What is often called the practice-turn, however, has not reached research on interorganizational networks. This is mainly due to methodological issues that call, in the end, for a mixed-method approach. This article addresses this issue by proposing a research design that balances well-established social network analysis with a set of techniques of organizational ethnography that fit with the specifics of interorganizational networks. In what we call network ethnography, qualitative and quantitative data are collected and analyzed in a parallel fashion. Ultimately, the design implies convergence during data interpretation, hereby offering platforms of reflection for each method toward new data collection and analysis. We discuss implications for mixed-method literature, research on interorganizational networks, and organizational ethnography.
As compassion has become established in the organizational literature as an important area of study, calls for increased compassion in our own work and research have increased. Compassion can take many forms in academic work, but in this article we propose a framework for compassionate research methods. Not only driven by caring for others and a desire for improving their lot, compassionate research methods actually immerse the researcher in compassionate work. We propose that compassionate research methods include three important elements: ethnography, aesthetics, and emotionality. Together, these provide opportunities for emergent theoretical experimentation that can lead to both the alleviation of suffering in the immediate research context and new theoretical insights. To show the possibilities of this method, we use empirical data from a unique setting—the first U.S. permanent death penalty defense team.
The challenge of integration, namely, the bridging across different intellectual paradigms to combine empirical insights into a coherent and plausible explanation, is endemic to mixed methods research. In this article, we address this challenge in two ways: first, by drawing attention to the role that theoretical integration plays in mixed methods research as a complement to empirical integration and second, by broadening the repertoire of strategies for enhancing the interplay of theoretical and empirical elements in a mixed methods study. We use the technique of relational algorithms, a linguistic exercise designed to produce "novel relations between pairs of things" by experimenting with different words that can connect theory and empirics. We propose that connector words (e.g., along, near, within) can forge linkages between quantitative and qualitative methods that extend the simple coupling implied by and. We advance five strategies of integration, two that are commonly used in management research—conjoined and sequential—and three high-potential but relatively underused strategies—simultaneous, full-cycle, and mono-logic. We illustrate each of these with examples from the management and organizational literature.
Organizational researchers routinely have access to repeated measures from numerous time periods punctuated by one or more discontinuities. Discontinuities may be planned, such as when a researcher introduces an unexpected change in the context of a skill acquisition task. Alternatively, discontinuities may be unplanned, such as when a natural disaster or economic event occurs during an ongoing data collection. In this article, we build on the basic discontinuous growth model and illustrate how alternative specifications of time-related variables allow one to examine relative versus absolute change in transition and post-transition slopes. Our examples focus on interpreting time-varying covariates in a variety of situations (multiple discontinuities, linear and quadratic models, and models where discontinuities occur at different times). We show that the ability to test relative and absolute differences provides a high degree of precision in terms of specifying and testing hypotheses.
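A minimal sketch of the coding scheme behind the basic discontinuous growth model, using simulated data and the lme4 package (all variable names are illustrative): a step dummy captures the immediate transition effect, and a post-transition time counter captures the change in slope.

```r
library(lme4)
set.seed(3)

# Hypothetical data: 50 people, 10 waves, discontinuity at wave 6
d <- expand.grid(id = 1:50, time = 0:9)
d$trans <- as.numeric(d$time >= 6)   # step change at the transition
d$post  <- pmax(d$time - 6, 0)       # time counted from the transition
d$y <- 2 + 0.5 * d$time - 1 * d$trans + 0.3 * d$post +
  rep(rnorm(50), times = 10) + rnorm(nrow(d))

# With `time` running through all waves, `post` tests the *relative* change
# in slope; capping time at the transition (pmin(d$time, 6)) would instead
# make `post` the *absolute* post-transition slope
fit <- lmer(y ~ time + trans + post + (1 | id), data = d)
summary(fit)
```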
This study empirically examined the statistical and methodological issues raised in the reviewing process to determine what the "gatekeepers" of the literature, the reviewers and editors, really say about methodology when making decisions to accept or reject manuscripts. Three hundred and four editors’ and reviewers’ letters for 69 manuscripts submitted to the Journal of Business and Psychology were qualitatively coded using an iterative approach. Systematic coding generated 267 codes from 1,751 statements that identified common methodological and statistical errors by authors and offered themes across these issues. We examined the relationship between the issues identified and manuscript outcomes. The most prevalent methodological and statistical topics were measurement, control variables, common method variance, factor analysis, and structural equation modeling. Common errors included the choice and comprehensiveness of analyses. This qualitative analysis of methods in reviews provides insight into how current methodological debates reveal themselves in the review process. This study offers guidance and advice for authors to improve the quality of their research and for editors and reviewers to improve the quality of their reviews.
Conventional methods for assessing the validity and reliability of situational judgment test (SJT) scores have proven to be inadequate. For example, factor analysis techniques typically lead to nonsensical solutions, and assumptions underlying Cronbach’s alpha coefficient are violated due to the multidimensional nature of SJTs. In the current article, we describe how cognitive diagnosis models (CDMs) provide a new approach that not only overcomes these limitations but that also offers extra advantages for scoring and better understanding SJTs. The analysis of the Q-matrix specification, model fit, and model parameter estimates provide a greater wealth of information than traditional procedures do. Our proposal is illustrated using data taken from a 23-item SJT that presents situations about student-related issues. Results show that CDMs are useful tools for scoring tests, like SJTs, in which multiple knowledge, skills, abilities, and other characteristics are required to correctly answer the items. SJT classifications were reliable and significantly related to theoretically relevant variables. We conclude that CDM might help toward the exploration of the nature of the constructs underlying SJT, one of the principal challenges in SJT research.
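As a hedged illustration of fitting a CDM in R: the sketch below uses the GDINA package's bundled simulated example (10 items, three attributes—not the article's 23-item SJT), assuming the package's documented sim10GDINA data object and interface.

```r
library(GDINA)

# Bundled simulated example data and Q-matrix
dat <- sim10GDINA$simdat
Q   <- sim10GDINA$simQ     # Q-matrix: which attributes each item requires
fit <- GDINA(dat = dat, Q = Q, model = "GDINA")

summary(fit)               # model fit and item parameter estimates
personparm(fit)[1:5, ]     # attribute mastery profiles, first five respondents
```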
Correcting attenuated correlations from selected samples is a common goal in organizational settings. Hunter and Schmidt introduced a procedure, called Case IV, for correcting correlations when a researcher has no information on the variable(s) used by an organization to form a suitability judgment. In this article, we compare Case IV to two other comparable procedures: the first correction (the expectation maximization algorithm) requires raw data about the selection variables used to form a suitability judgment. The second, the Pearson-Lawley correction, requires the variance-covariance matrix of the selection variables. We show that even when the variables used for selection are unobserved or unavailable, it is still possible to estimate parameters without making the restrictive assumptions of Case IV. In addition, these two corrections almost always outperform Case IV, particularly when the critical assumption of Case IV is violated. We also provide R code illustrating the use of these correction procedures.
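The Pearson-Lawley correction itself is compact enough to sketch directly (a minimal implementation in our own notation, not the article's provided R code):

```r
# x: explicit selection variable(s); y: variables observed in the selected
# sample only. V**: restricted covariance blocks; Sxx: unrestricted cov of x.
pearson_lawley <- function(Vxx, Vxy, Vyy, Sxx) {
  B   <- solve(Vxx, Vxy)                    # regression weights of y on x
  Sxy <- Sxx %*% B                          # corrected x-y covariances
  Syy <- Vyy + t(B) %*% (Sxx - Vxx) %*% B   # corrected y (co)variances
  list(Sxy = Sxy, Syy = Syy)
}

# Univariate example: selection halves the predictor variance (Sxx = 1)
out <- pearson_lawley(Vxx = matrix(0.5), Vxy = matrix(0.3),
                      Vyy = matrix(1), Sxx = matrix(1))
out$Sxy / sqrt(out$Syy)  # corrected r ~ .55 vs. restricted .3/sqrt(.5) ~ .42
```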
This article surfaces some of the emotional encounters that may be experienced while trying to gain access and secure informants in qualitative research. Using the children’s game of hopscotch as a metaphor, we develop a dynamic, nonlinear process model of gaining access yielding four elements: study formulation with plans to move forward, identifying potential informants, contacting informants, and interacting with informants during data collection. Underlying each element of the process is the potential for researchers to re-strategize their approach or exit the study. Autobiographical stories about gaining access for our PhD dissertation research are used to flesh out each element of the process, including the challenges we experienced with each element and how we addressed them. We conclude by acknowledging limitations to our study and suggest future and continued areas of research.
We clarify differences among moderation, partial mediation, and full mediation and identify methodological problems related to moderation and mediation from a review of articles in Strategic Management Journal and Organization Science published from 2005 to 2014. Regarding moderation, we discuss measurement error, range restriction, and unequal sample sizes across moderator-based subgroups; insufficient statistical power; the artificial categorization of continuous variables; assumed negative consequences of correlations between product terms and their components (i.e., multicollinearity); and the interpretation of first-order effects based on models excluding product terms. Regarding mediation, we discuss problems with the causal-steps procedure, inferences about mediation based on cross-sectional designs, whether a relation between the antecedent and the outcome is necessary for testing mediation, the routine inclusion of a direct path from the antecedent to the outcome, and consequences of measurement error. We also explain how integrating moderation and mediation can lead to important and useful insights for strategic management theory and practice. Finally, we offer specific and actionable recommendations for improving the appropriateness and accuracy of tests of moderation and mediation in strategic management research. Our recommendations can also be used as a checklist for editors and reviewers who evaluate manuscripts reporting tests of moderation and mediation.
Moderator variables are widely hypothesized and studied in the organizational sciences, but the empirical track record of moderator variable studies is very discouraging. These studies often lack sufficient statistical power, and the types of designs and measures common in organizational research virtually guarantee that the moderator effects that are found will usually be extremely small. We recommend that future attempts to identify and estimate moderator effects be limited to situations where better measures, stronger research designs, and a realistic cost-benefit assessment are available. Researchers should avoid moderator hypotheses in contexts where the measures and research designs employed do not allow them to be tested in a meaningful way, and they should be cautious about interpreting the very small effects they are likely to find.
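The power problem is easy to verify by simulation. The sketch below (ours, with arbitrary but typical parameter values) estimates the power to detect an interaction of beta = .10 at n = 200:

```r
set.seed(11)
# Power to detect an interaction of beta = .10 at n = 200, alpha = .05
power_sim <- function(n = 200, b_int = 0.10, reps = 1000) {
  mean(replicate(reps, {
    x <- rnorm(n); z <- rnorm(n)
    y <- 0.3 * x + 0.3 * z + b_int * x * z + rnorm(n)
    summary(lm(y ~ x * z))$coefficients["x:z", "Pr(>|t|)"] < 0.05
  }))
}
power_sim()  # typically well below .80, consistent with the article's caution
```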
It is increasingly common to test hypotheses combining moderation and mediation. Structural equation modeling (SEM) has been the favored approach to testing mediation hypotheses. However, the biggest challenge to testing moderation hypotheses in SEM has been the complexity of modeling latent variable interactions. We discuss the latent moderated structural equations (LMS) approach to specifying latent variable interactions, which is implemented in Mplus, and offer a simple and accessible way of testing combined moderation and mediation hypotheses using SEM. To do so, we provide sample code for six commonly encountered moderation and mediation cases, along with the relevant equations, which can be easily adapted to researchers' data. By articulating the similarities between the two approaches and discussing the combination of moderation and mediation, we also contribute to the research methods literature.
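Because the article's sample code targets Mplus's LMS implementation, the R sketch below is only an observed-variable analogue of one such case (first-stage moderated mediation) using lavaan; it does not perform LMS, and all variable names and data are hypothetical.

```r
library(lavaan)
set.seed(5)

# Simulated observed variables; xz is the product term computed in the data
d <- data.frame(x = rnorm(300), z = rnorm(300))
d$xz <- d$x * d$z
d$m  <- 0.4 * d$x + 0.2 * d$z + 0.3 * d$xz + rnorm(300)
d$y  <- 0.5 * d$m + 0.2 * d$x + rnorm(300)

model <- '
  m ~ a1*x + a2*z + a3*xz
  y ~ b1*m + cp*x
  # conditional indirect effects at +/- 1 SD of the moderator (SD = 1 here)
  indirect_low  := (a1 - a3) * b1
  indirect_high := (a1 + a3) * b1
'
fit <- sem(model, data = d, se = "bootstrap", bootstrap = 200)
parameterEstimates(fit)
```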
Rapid advances in mobile computing technology have the potential to revolutionize organizational research by facilitating new methods of data collection. The emergence of wearable electronic sensors in particular harbors the promise of making the large-scale collection of high-resolution data related to human interactions and social behavior economically viable. Popular press and practitioner-oriented research outlets have begun to tout the game-changing potential of wearable sensors for both researchers and practitioners. We systematically examine the utility of current wearable sensor technology for capturing behavioral constructs at the individual and team levels. In the process, we provide a model for performing validation work in this new domain of measurement. Our findings highlight the need for organizational researchers to take an active role in the development of wearable sensor systems to ensure that the measures derived from these devices and sensors allow us to leverage and extend the extant knowledge base. We also offer a caution regarding the potential sources of error arising from wearable sensors in behavioral research.
Longitudinal studies with a mix of binary outcomes and continuous variables are common in organizational research. Selecting the dependent variable is often difficult due to conflicting theories and contradictory empirical studies. In addition, organizational researchers are confronted with methodological challenges posed by latent variables relating to observed binary outcomes and within-subject correlation. We draw on Dueker’s qualitative vector autoregression (QVAR) and Lunn, Osorio, and Whittaker’s multivariate probit model to develop a solution to these problems in the form of a qualitative short panel vector autoregression (QSP-VAR). The QSP-VAR combines binary and continuous variables into a single vector of dependent variables, making every variable endogenous a priori. The QSP-VAR identifies causal order, reveals within-subject correlation, and accounts for latent variables. Using a Bayesian approach, the QSP-VAR provides reliable inference for short time dimension longitudinal research. This is demonstrated through analysis of the durability of elite corporate agents, social networks, and firm performance in France. We provide our OpenBUGS code to enable implementation of the QSP-VAR by other researchers.
All methods are individually flawed, but these limitations can be mitigated through mixed methods research, which combines methodologies to provide better answers to our research questions. In this study, we develop a research design framework for mixed methods work based on the principles of triangulation. The core elements of the framework are theoretical purpose (i.e., theory development and/or theory testing) and methodological purpose (i.e., prioritizing generalizability, precision in control and measurement, or authenticity of context). From this foundation, we consider how multiple methodologies are linked together to accomplish the theoretical purpose, focusing on three types of linking processes: convergent triangulation, holistic triangulation, and convergent and holistic triangulation. We then consider the implications of these linking processes for the theory at hand, taking into account the following theoretical attributes: generality/specificity, simplicity/complexity, and accuracy/inaccuracy. Based on this research design framework, we develop a roadmap that can serve as a design guide for organizational scholars conducting mixed methods research studies.
This article outlines a mixed method approach to social network analysis combining techniques of organizational history development, inductive data structuring, and content analysis to offer a novel approach for network data construction and analysis. This approach provides researchers with a number of benefits over traditional sociometric or other interpersonal methodologies, including the ability to investigate networks of greater scope, broader access to diverse social actors, reduced informant bias, and increased capability for longitudinal designs. After detailing this approach, we apply the method to a sample of 143 new ventures and suggest opportunities for general application in entrepreneurship, strategic management, and organizational behavior research.
Construct proliferation—the accumulation of ostensibly different but potentially identical constructs representing organizational phenomena—is a salient problem in contemporary research. While a number of construct validation procedures exist, relatively few validation studies conduct comprehensive assessments of the discriminant validity of theoretically distinct constructs. In this article, we outline the key considerations a researcher must take into account when attempting to establish the empirical distinctness of new or existing constructs and provide a step-by-step guide on how to assess the discriminant validity of constructs while accounting for three major sources of measurement error: random error, specific factor error, and transient error. Using a number of popular measures from the leadership literature, we provide an illustrative example of how to conduct a study of discriminant validity. We include several analytic strategies in our study and discuss the similarities and differences between the results they yield. We also discuss several additional issues related to this type of research and make recommendations for conducting discriminant validity analyses.
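A minimal sketch of the core discriminant validity comparison (ours, on simulated items; the article's full procedure also models specific-factor and transient error): a two-factor CFA is tested against a one-factor alternative in lavaan.

```r
library(lavaan)
set.seed(9)

# Simulated items for two correlated but distinct constructs (r ~ .5)
f1 <- rnorm(300); f2 <- 0.5 * f1 + sqrt(0.75) * rnorm(300)
mk <- function(f) 0.7 * f + rnorm(300, sd = 0.7)
d <- data.frame(x1 = mk(f1), x2 = mk(f1), x3 = mk(f1),
                x4 = mk(f2), x5 = mk(f2), x6 = mk(f2))

fit2 <- cfa('f1 =~ x1 + x2 + x3
             f2 =~ x4 + x5 + x6', data = d)
fit1 <- cfa('f =~ x1 + x2 + x3 + x4 + x5 + x6', data = d)

anova(fit1, fit2)           # chi-square difference test should favor two factors
inspect(fit2, "cor.lv")     # factor correlation should be well below 1
```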
Concepts are regarded as the building blocks of theory, yet there is little debate about the evolution and application of the concepts that we use in management. As measurement concerns overtake conceptualization, concepts are treated as empirical facts rather than theoretical constructions. They may be stretched to increase their empirical coverage, without considering the theoretical implications. We provide an alternative pathway for the career of a concept, based on a pragmatist–interactionist paradigm. Our alternative, concept reconstruction, involves evaluating the state of an existing concept by reviewing its usage in existing research. The results of this review are used to guide the appropriate case selection strategy for subsequent fieldwork. Evidence from fieldwork is then used to reconstruct the concept. As an illustration of our methodology, we analyze a popular concept in international management and entrepreneurship: the "born global" firm or "international new venture." We investigate the stretching of this concept by conducting a review of its usage in the scholarly literature, and by using the results of a "most-likely" case study to provide a way forward for reconstructing this concept.
Currently, the most popular analytical method for testing moderated mediation is the regression approach, which is based on observed variables and assumes no measurement error. It is generally acknowledged that measurement errors result in biased estimates of regression coefficients. What has drawn relatively less attention is that the confidence intervals produced by regression are also biased when the variables are measured with errors. Therefore, we extend the latent moderated structural equations (LMS) method—which corrects for measurement errors when estimating latent interaction effects—to the study of the moderated mediation of latent variables. Simulations were conducted to compare the regression approach and the LMS approach. The results show that the LMS method produces accurate estimated effects and confidence intervals. By contrast, regression not only substantially underestimates the effects but also produces inaccurate confidence intervals. Statistically significant moderated mediation effects reported in previous regression-based studies are therefore likely to reflect biased estimates and confidence intervals that fail to contain the true values.
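For readers unfamiliar with the baseline being criticized here, this is a minimal sketch of the observed-variable regression approach: estimating the index of moderated mediation (the product of the first-stage interaction coefficient and the second-stage path) with a percentile bootstrap. Variable names are hypothetical; the LMS approach itself requires SEM software with latent interaction support.

    import numpy as np

    def index_of_modmed(x, w, m, y):
        # first stage: m ~ x + w + x*w ; second stage: y ~ m + x
        A = np.column_stack([np.ones_like(x), x, w, x * w])
        a = np.linalg.lstsq(A, m, rcond=None)[0]      # a[3]: x*w interaction (a3)
        B = np.column_stack([np.ones_like(x), m, x])
        b = np.linalg.lstsq(B, y, rcond=None)[0]      # b[1]: m -> y path (b1)
        return a[3] * b[1]                            # index = a3 * b1

    def boot_ci(x, w, m, y, reps=5000, seed=0):
        # percentile bootstrap CI for the index of moderated mediation
        rng, n, est = np.random.default_rng(seed), len(x), []
        for _ in range(reps):
            idx = rng.integers(0, n, n)               # resample cases with replacement
            est.append(index_of_modmed(x[idx], w[idx], m[idx], y[idx]))
        return np.percentile(est, [2.5, 97.5])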
Theoretical "necessary but not sufficient" statements are common in the organizational sciences. Traditional data analyses approaches (e.g., correlation or multiple regression) are not appropriate for testing or inducing such statements. This article proposes necessary condition analysis (NCA) as a general and straightforward methodology for identifying necessary conditions in data sets. The article presents the logic and methodology of necessary but not sufficient contributions of organizational determinants (e.g., events, characteristics, resources, efforts) to a desired outcome (e.g., good performance). A necessary determinant must be present for achieving an outcome, but its presence is not sufficient to obtain that outcome. Without the necessary condition, there is guaranteed failure, which cannot be compensated by other determinants of the outcome. This logic and its related methodology are fundamentally different from the traditional sufficiency-based logic and methodology. Practical recommendations and free software are offered to support researchers to apply NCA.
This article addresses (in)congruence across different kinds of organizational respondents or "organizational groups"—such as managers versus non-managers or women versus men—and the effects of congruence on organizational outcomes. We introduce a novel multilevel latent polynomial regression model (MLPM) that treats standings of organizational groups as latent "random intercepts" at the organization level while subjecting these to latent interactions that enable response surface modeling to test congruence hypotheses. We focus on the case of organizational culture research, which usually samples managers and excludes non-managers. Reanalyzing data from 67 hospitals with 6,731 managers and non-managers, we find that non-managers perceive their organizations’ cultures as less humanistic and innovative and more controlling than managers, and we find that less congruence between managers and non-managers in these perceptions is associated with lower levels of quality improvement in organizations. Our results call into question the validity of findings from organizational culture and other research that tends to sample one organizational group to the exclusion of others. We discuss our findings and the MLPM, which can be extended to estimate latent interactions for tests of multilevel moderation/interactions.
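The MLPM extends polynomial regression with response surface analysis to latent, multilevel data. As a reference point, here is a single-level, observed-variable sketch (ours, not the MLPM itself) of the surface parameters commonly used to test congruence hypotheses: slopes and curvatures along the congruence line x = y and the incongruence line x = -y.

    import numpy as np

    def response_surface(x, y, z):
        # quadratic polynomial regression: z ~ x + y + x^2 + x*y + y^2
        D = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
        _, b1, b2, b3, b4, b5 = np.linalg.lstsq(D, z, rcond=None)[0]
        return {"a1": b1 + b2,       # slope along the congruence line x = y
                "a2": b3 + b4 + b5,  # curvature along x = y
                "a3": b1 - b2,       # slope along the incongruence line x = -y
                "a4": b3 - b4 + b5}  # curvature along x = -y

A congruence effect of the kind reported above would surface as, for example, a negative a4: outcomes fall as manager and non-manager perceptions diverge in either direction.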
The too-much-of-a-good-thing (TMGT) effect occurs when an initially positive relation between an antecedent and a desirable outcome variable turns negative when the underlying ordinarily beneficial antecedent is taken too far, such that the overall relation becomes nonmonotonic. The presence of the TMGT effect raises serious concerns about the validity of linearly specified empirical models. Recent research posited that the TMGT effect is omnipresent, due to an overarching meta-theoretical principle. Drawing on the competitive mediation approach, the authors of the present study suggest an antecedent-benefit-cost (ABC) framework that explains the TMGT effect as a frequent but not omnipresent issue in empirical research and integrates a variety of linear and nonlinear relationships. The ABC framework clarifies important conceptual and empirical issues surrounding the TMGT effect and facilitates the choice between linear and curvilinear models. To avoid serious methodological pitfalls, future studies with desirable outcome variables such as task performance, job performance, firm performance, satisfaction, team innovation, leadership effectiveness, or individual creativity should consider the ABC framework.
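The basic empirical signature of a TMGT effect is a negative quadratic term with a turning point inside the observed predictor range. A minimal check is sketched below (ours; the ABC framework adds the conceptual machinery for deciding when such a model is warranted in the first place).

    import numpy as np

    def inverted_u_check(x, y):
        # fit y ~ x + x^2 and locate the turning point
        D = np.column_stack([np.ones_like(x), x, x**2])
        b0, b1, b2 = np.linalg.lstsq(D, y, rcond=None)[0]
        turn = -b1 / (2 * b2)
        # TMGT pattern: b2 < 0 with the turning point inside the data range
        return b2, turn, bool(x.min() < turn < x.max())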
rwg is a common metric used to quantify interrater agreement in the organizational sciences. Finn developed rwg but based it on the assumption that raters’ deviations from their true perceptions are influenced by random chance only. James, Demaree, and Wolf extended Finn’s work by describing procedures to account for the additional influence of response biases. We demonstrate that organizational scientists have relied largely on Finn’s procedures, at least in part because of a lack of specific guidance regarding the conditions under which various response biases might be present. In an effort to address this gap in the literature, we introduce the concept of target-irrelevant, nonrandom forces (those aspects of the research context that are likely to lead to response biases), then describe how the familiar "5Ws and an H" framework (i.e., who, what, when, where, why, and how) can be used to identify these biases a priori. It is our hope that this system will permit those who calculate rwg to account for the effects of response biases in a manner that is simultaneously rigorous, consistent, and transparent.
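The computation itself is simple; what the article adds is guidance on choosing the null distribution. A sketch of single-item rwg follows, where the default uniform null encodes Finn's random-chance-only assumption and a user-supplied null variance encodes a hypothesized response bias such as leniency (the 1.34 value below is a hypothetical biased-null variance for illustration).

    import numpy as np

    def rwg(ratings, n_options, null_var=None):
        # single-item interrater agreement: 1 - observed / expected variance
        s2 = np.asarray(ratings, dtype=float).var(ddof=1)
        if null_var is None:
            null_var = (n_options**2 - 1) / 12.0   # uniform (random-chance) null
        return max(0.0, 1 - s2 / null_var)         # conventionally floored at 0

    print(rwg([4, 4, 5, 4, 5], n_options=5))                 # Finn-style uniform null
    print(rwg([4, 4, 5, 4, 5], n_options=5, null_var=1.34))  # hypothesized biased null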
Partial least squares path modeling (PLS) has been increasing in popularity as a form of or an alternative to structural equation modeling (SEM) and currently has considerable momentum in some management disciplines. Despite recent criticism of the method, most existing studies analyzing the performance of PLS have reached positive conclusions. This article shows that most of the evidence for the usefulness of the method rests on a misinterpretation. The analysis presented shows that PLS amplifies the effects of chance correlations in a unique way, and this effect explains prior simulation results better than the previous interpretations. It is unlikely that a researcher would willingly amplify error, and therefore the results show that the usefulness of the PLS method is a fallacy. There are much better ways to compensate for the attenuation effect caused by using latent variable scores to estimate SEM models than creating a bias in the opposite direction.
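The core mechanism at issue can be illustrated without the full PLS algorithm. In the toy simulation below (ours, heavily simplified), two indicator blocks are pure noise. Composites built with fixed unit weights show the expected small chance correlations, whereas composites whose weights are derived from the same sample's covariances—a single PLS-like weighting step—show systematically larger absolute correlations, i.e., amplified chance.

    import numpy as np

    rng = np.random.default_rng(7)
    unit_r, weighted_r = [], []
    for _ in range(2000):
        X = rng.normal(size=(100, 3))      # block 1 indicators: pure noise
        Y = rng.normal(size=(100, 3))      # block 2: independent of block 1
        cx, cy = X.sum(1), Y.sum(1)        # fixed unit-weight composites
        unit_r.append(abs(np.corrcoef(cx, cy)[0, 1]))
        # data-driven weights: each indicator weighted by its sample
        # covariance with the opposite block's composite
        weighted_r.append(abs(np.corrcoef(X @ (X.T @ cy), Y @ (Y.T @ cx))[0, 1]))
    print(np.mean(unit_r), np.mean(weighted_r))  # the weighted mean |r| is larger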
This article addresses Rönkkö and Evermann’s criticisms of the partial least squares (PLS) approach to structural equation modeling. We contend that the alleged shortcomings of PLS are not due to problems with the technique but are instead due to three problems with Rönkkö and Evermann’s study: (a) the adherence to the common factor model, (b) a very limited simulation design, and (c) overstretched generalizations of their findings. Whereas Rönkkö and Evermann claim to be dispelling myths about PLS, they have in reality created new myths that we, in turn, debunk. By examining their claims, our article contributes to reestablishing a constructive discussion of the PLS method and its properties. We show that PLS does offer advantages for exploratory research and that it is a viable estimator for composite factor models, which can be an interesting alternative when the common factor model does not hold. We therefore conclude that PLS should continue to be used as an important statistical tool for management and organizational research, as well as other social science disciplines.
The purpose of the present article is to take stock of a recent exchange in Organizational Research Methods between critics and proponents of partial least squares path modeling (PLS-PM). The two target articles centered on six principal issues, namely whether PLS-PM: (a) can be truly characterized as a technique for structural equation modeling (SEM), (b) is able to correct for measurement error, (c) can be used to validate measurement models, (d) accommodates small sample sizes, (e) is able to provide null hypothesis tests for path coefficients, and (f) can be employed in an exploratory, model-building fashion. We summarize and elaborate further on the key arguments underlying the exchange, drawing from the broader methodological and statistical literature to offer additional thoughts concerning the utility of PLS-PM and ways in which the technique might be improved. We conclude with recommendations as to whether and how PLS-PM serves as a viable contender to SEM approaches for estimating and evaluating theoretical models.
Despite pervasive evidence that general mental ability and personality are unrelated, we investigated whether general mental ability may affect the response process associated with personality measurement. Study 1 examined a large sample of job applicant responses to four personality scales for differential functioning across groups of differing general mental ability. While results indicated that personality items differentially function across highly disparate general mental ability groups, there was little evidence of differential functioning across groups with similar levels of general mental ability. Study 2 replicated these findings in a different sample, using a different measure of general mental ability. We posit that observed differences in the psychometric properties of these personality scales are likely due to the information processing capabilities of the respondents. Additionally, we describe how differential functioning analyses can be used during scale development as a method of identifying items that are not appropriate for all intended respondents. In so doing, we demonstrate procedures for examining other construct-measurement interactions in which respondents’ standings on a specific construct could influence their interpretation of and response to items assessing other constructs.
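Readers who want to run a basic differential functioning screen can use the logistic-regression approach of Swaminathan and Rogers, sketched below for a dichotomized item (the studies above also use other procedures; variable names are hypothetical). A significant group term indicates uniform DIF; a significant score-by-group term indicates nonuniform DIF.

    import numpy as np
    import statsmodels.api as sm

    def lr_dif(item, total, group):
        # item: 0/1 responses; total: scale total score;
        # group: 0/1 (e.g., lower vs. higher general mental ability)
        X = sm.add_constant(np.column_stack([total, group, total * group]))
        fit = sm.Logit(item, X).fit(disp=0)
        return {"uniform_dif_p": fit.pvalues[2],     # group main effect
                "nonuniform_dif_p": fit.pvalues[3]}  # score x group interaction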
This article discusses implications of participant withdrawal for inductive research. I describe and analyze how a third of my participants withdrew from a grounded theory study. I position my example, ensuing issues, and potential solutions as reflective of inductive methodologies as a whole. The crux of the problem is the disruption inflicted by withdrawal on inductive processes of generating knowledge. I examine the subsequent methodological and ethical issues in trying to determine the best course of action following withdrawal. I suggest three potential options for researchers: continuing the study with partial data, continuing the study with all data, and discontinuing the study. Motivated by my experience and wider theoretical considerations, I present several suggestions and questions, with the aim of supporting researchers in determining the best course of action for their individual field circumstances.
In this article we explain how the development of new organization theory faces several mutually reinforcing problems, which collectively suppress generative debate and the creation of new and alternative theories. We argue that to overcome these problems, researchers should adopt relationally reflexive practices. This does not lead to an alternative method but instead informs how methods are applied. Specifically, we advocate a stance toward the application of qualitative methods that legitimizes insights from the situated life-with-others of the researcher. We argue that this stance can improve our abilities for generative theorizing in the field of management and organization studies.
The ubiquity of surveys in organizational research means that their quality is of paramount importance. Commonly, this has been addressed through the use of sophisticated statistical approaches, with scant attention paid to item comprehension. Linguistic theory suggests that while everyone may understand an item, they may comprehend it in different ways. We explore this in two studies in which we administered three published scales and asked respondents to indicate what they believed the items meant, and in a third study that replicated the results with an additional scale. These studies demonstrate three forms of miscomprehension: instructional (where instructions are not followed), sentential (where the syntax of a sentence is enriched or depleted as it is interpreted), and lexical (where different meanings of words are deployed). These differences in comprehension are not appreciable using conventional statistical analyses yet can produce significantly different results and cause respondents to tap into different concepts. These results suggest that item interpretation is a significant source of error that has hitherto been neglected in the organizational literature. We suggest remedies and directions for future research.
Qualitative researchers have developed and employed a variety of phenomenological methodologies to examine individuals’ experiences. However, there is little guidance to help researchers choose between these variations to meet the specific needs of their studies. The purpose of this article is to illuminate the scope and value of phenomenology by developing a typology that classifies and contrasts five popular phenomenological methodologies. By explicating each methodology’s differing assumptions, aims, and analytical steps, the article generates a series of guidelines to inform researchers’ selections. Subsequent sections distinguish the family of phenomenological methodologies from other qualitative methodologies, such as narrative analysis and autoethnography. The article then identifies institutional work and organizational identity as topical bodies of research with particular research needs that phenomenology could address.
In this article we discuss optimal matching (OM), an invaluable yet underutilized tool in the analysis of sequence data. Initially developed in biology to identify and study patterns in DNA sequences, OM subsequently migrated over to sociology, where it has been used to examine career patterns in life course research. It involves the computation of the number of insertions, deletions, and substitutions of sequence elements that are needed to transform one sequence into another and the costs associated with such transformations. The goal is to identify similarities across sequences, which can then be used for pattern identification. Along with a discussion of the logic underlying OM analysis, we provide an illustration of its use in the examination of careers of deans at U.S. business schools. In addition, we use Monte Carlo simulation to compare OM and cluster analysis and to highlight the superiority of OM in analyzing sequence data. Also discussed are recent methodological advances in OM and our recommendations and guidelines for future applications of OM in management research.
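At its core, OM is the dynamic-programming edit distance familiar from sequence alignment. A minimal sketch with a uniform insertion/deletion (indel) cost and an optional substitution-cost table follows; the state labels are hypothetical, but in a careers application each element would be, say, the position held in a given year.

    import numpy as np

    def om_distance(seq_a, seq_b, indel=1.0, sub=None):
        # minimum total cost of insertions, deletions, and substitutions
        # needed to transform seq_a into seq_b
        cost = lambda a, b: 0.0 if a == b else (sub or {}).get((a, b), 2.0)
        m, n = len(seq_a), len(seq_b)
        D = np.zeros((m + 1, n + 1))
        D[:, 0] = np.arange(m + 1) * indel
        D[0, :] = np.arange(n + 1) * indel
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                D[i, j] = min(D[i - 1, j] + indel,     # deletion
                              D[i, j - 1] + indel,     # insertion
                              D[i - 1, j - 1] + cost(seq_a[i - 1], seq_b[j - 1]))
        return D[m, n]

    # e.g., yearly career states F=faculty, C=chair, D=dean (hypothetical)
    print(om_distance("FFFCD", "FFCDD"))

The matrix of pairwise distances across all sequences then feeds the pattern-identification step, typically via clustering.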
Multiple linear regression (MLR) remains a mainstay analysis in organizational research, yet intercorrelations between predictors (multicollinearity) undermine the interpretation of MLR weights in terms of predictor contributions to the criterion. Alternative indices include validity coefficients, structure coefficients, product measures, relative weights, all-possible-subsets regression, dominance weights, and commonality coefficients. This article reviews these indices, and uniquely, it offers freely available software that (a) computes and compares all of these indices with one another, (b) computes associated bootstrapped confidence intervals, and (c) does so for any number of predictors so long as the correlation matrix is positive definite. Other available software is limited in all of these respects. We invite researchers to use this software to increase their insights when applying MLR to a data set. Avenues for future research and application are discussed.
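As one example of the indices reviewed, Johnson's relative weights partition model R-squared among correlated predictors by passing through an orthogonal approximation of the predictor set. A compact sketch is given below (ours, without the bootstrapped confidence intervals that the accompanying software provides); like the software, it requires a positive definite predictor correlation matrix.

    import numpy as np

    def relative_weights(X, y):
        # standardize, then orthogonalize predictors via Rxx^(1/2)
        X = (X - X.mean(0)) / X.std(0, ddof=1)
        y = (y - y.mean()) / y.std(ddof=1)
        Rxx = np.corrcoef(X, rowvar=False)
        rxy = X.T @ y / (len(y) - 1)                  # predictor-criterion correlations
        vals, vecs = np.linalg.eigh(Rxx)
        Lam = vecs @ np.diag(np.sqrt(vals)) @ vecs.T  # symmetric square root of Rxx
        beta = np.linalg.solve(Lam, rxy)              # regress y on orthogonal variables
        return (Lam**2) @ (beta**2)                   # one weight per predictor; sums to R^2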
Insomnia is a prevalent experience among employees and survey respondents. Drawing from research on sleep and self-regulation, we examine both random (survey errors) and systematic (social desirability) effects of research participant insomnia on survey responses. With respect to random effects, we find that insomnia leads to increased survey errors, and that this effect is mediated by a lack of self-control and a lack of effort. However, insomnia also has a positive systematic effect, leading to lower levels of social desirability. This effect is also mediated by self-control depletion and a lack of effort. In supplemental analyses, we find that psychometric side effects of random and systematic error introduced by individuals high in insomnia negatively affect internal consistency estimates and measurement invariance on various organizational measures. Results were replicated across two studies using alternative operationalizations of survey errors and social desirability, and several alternative explanations were examined. These findings suggest sleep may be a key methodological issue for conducting survey research. Recommendations from the sleep and self-regulation literature regarding potential strategies for counteracting the effect of insomnia on survey responses are discussed.
Multilevel theory and research have advanced organizational science but are limited because the research focus is incomplete. Most quantitative research examines top-down, contextual, cross-level relationships. Emergent phenomena that manifest from the bottom up from the psychological characteristics, processes, and interactions among individuals—although examined qualitatively—have been largely neglected in quantitative research. Emergence is theoretically assumed, examined indirectly, and treated as an inference regarding the construct validity of higher level measures. As a result, quantitative researchers are investigating only one fundamental process of multilevel theory and organizational systems. This article advances more direct, dynamic, and temporally sensitive quantitative research methods designed to unpack emergence as a process. We argue that direct quantitative approaches, largely represented by computational modeling or agent-based simulation, have much to offer with respect to illuminating the mechanisms of emergence as a dynamic process. We illustrate how indirect and direct approaches can be complementary and, appropriately integrated, have the potential to substantially advance theory and research. We conclude with a set of recommendations for advancing multilevel research on emergent phenomena in teams and organizations.
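To make the direct, computational approach concrete, here is a minimal agent-based sketch (ours, purely illustrative): agents hold an individual state (say, a climate perception), dyadic interactions pull partners' states together, and the declining dispersion traces consensus emerging bottom-up as a team-level property over time.

    import numpy as np

    rng = np.random.default_rng(42)
    state = rng.normal(size=20)               # 20 agents' initial perceptions
    dispersion = [state.std()]
    for _ in range(500):
        i, j = rng.choice(20, size=2, replace=False)
        mid = (state[i] + state[j]) / 2       # dyadic interaction
        state[i] += 0.3 * (mid - state[i])    # both partners move toward
        state[j] += 0.3 * (mid - state[j])    # their shared midpoint
        dispersion.append(state.std())
    print(dispersion[0], dispersion[-1])      # falling dispersion: emergent consensus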
The purpose of this article is to present the research design of a meta-synthesis of qualitative case studies. The meta-synthesis aims to build theory from primary qualitative case studies that have not been planned as part of a unified multisite effort. By drawing on an understanding of research synthesis as the interpretation of qualitative evidence from a postpositivist perspective, this article proposes eight steps for synthesizing existing qualitative case study findings to build theory. An illustration of the application of this method in the field of dynamic capabilities is provided. After enumerating the options available to meta-synthesis researchers, the potential challenges as well as the prospects of this research design are discussed.
In the two decades since storytelling was called the "sensemaking currency of organizations," storytelling scholarship has employed a wide variety of research methods. The storytelling diamond model introduced here offers a map of this paradigmatic terrain based on wider social science ontological, epistemological, and methodological (both quantitative and qualitative) considerations. The model is beneficial for both researchers and reviewers as they plan for and assess the quality and defensibility of storytelling research designs. The main paradigms considered in the storytelling diamond model are narrativist, living story, materialist, interpretivist, abstractionist, and practice, all as integrated by the antenarrative process.
Organizational scholars study a number of sensitive topics that make employees and organizations vulnerable to unfavorable views. However, the typical ways in which researchers study these topics—via laboratory experiments and field surveys—can be laden with problems. In this article, the authors argue that the difficulties in studying sensitive topics can be overcome through the underutilized method of field experiments, detail strategies for conducting high-quality experimental field studies, and offer suggestions for overcoming potential challenges in data collection and publishing. As such, this article is designed to serve as a guide and stimulus for using the valuable methodological tool of field experiments.