Although shared team mental models are believed to be important to team functioning, substantial interstudy differences in the manner in which mental models are operationalized have impeded progress in this area. We use meta-analysis to cumulate 23 independent studies that have empirically examined shared mental models (SMMs) in relation to team process and performance and test three aspects of measurement as potential moderators: elicitation method, structure representation, and representation of emergence. Results indicate that the way in which SMMs are measured and represented at the team level of analysis reveals meaningful distinctions in observed relationships. Specifically, shared mental model operationalization impacts the observed relationship between SMMs and team process; importantly, only methods that model the structure or organization of knowledge are predictive of process. Conversely, while the magnitude of the relationship differed across measurement methods, SMMs were positively related to team performance regardless of the manner of operationalization. In summary, knowledge structure is predictive of team process, and both knowledge content and structure are predictive of team performance.
Leslie A. DeChurch, Department of Psychology, University of Central Florida; Jessica R. Mesmer-Magnus, Department of Management, University of North Carolina Wilmington.
We thank Eduardo Salas, Christian Resick, Ronald Piccolo, and Toshio Murase for their helpful suggestions on earlier versions of this work.
Correspondence concerning this article should be addressed to Leslie A. DeChurch, Department of Psychology, University of Central Florida, PO Box 161390, Orlando, FL 32816-1390. E-mail: ldechurc@mail.ucf.edu

Teams are increasingly being utilized as the basic unit of work accomplishment, and both academics and practitioners have become particularly interested in the cognitive architecture that enables effective coordination and collaboration in work teams (Cannon-Bowers & Salas, 1990, 2001). At one end of the spectrum, action-oriented teams such as those used in medical and military settings perform highly interdependent time-sensitive tasks in dynamic environments (Burke, Salas, Wilson-Donnelly, & Priest, 2005). These teams require a pattern of cognitive similarity that enables them to anticipate one another's needs and actions and to synchronize their work in a way that is synergistic toward meeting the team's ultimate goals (Cannon-Bowers, Salas, & Converse, 1993; Cooke, Gorman, Duran, & Taylor, 2007; Marks, Zaccaro, & Mathieu, 2000). At the other end of the spectrum are more decision-oriented knowledge-based teams such as those engaged in software design and management consulting projects. These teams also rely on a pattern of cognitive similarity to effectively retrieve and share information (Ensley & Pearce, 2001; Faraj & Sproull, 2000). Across the diverse range of tasks that teams perform, research has consistently identified cognitive similarity as a key foundational element for success (DeChurch & Mesmer-Magnus, in press; Kozlowski & Ilgen, 2007; J. E. Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000). The thorn in the side of this research stream has been measuring and representing cognition at the team level (Smith-Jentsch, 2009). The current study cumulates past research on shared mental models (SMMs) and team outcomes to address the question: How should shared mental models be measured?

One type of cognitive architecture in teams that has received substantial research attention is the shared team mental model. Shared team mental models have their roots in the mental model construct from cognitive psychology. At the individual level, mental models reflect organized knowledge that enables humans to understand the basic …
…sented in the mind and how the knowledge representation is modeled by the researcher. As such, structure representation constitutes a second crucial aspect of team mental model measurement (Klimoski & Mohammed, 1994; Mohammed et al., 2000). Similarity ratings typically prompt respondents to think in terms of the degree of association between distinct components of their team or task. In this way, they capture associative networks of knowledge. At the other extreme, rating scales capture levels of knowledge but do not model the structure or organization of that knowledge. Concept maps have been used to capture the sequencing of team actions; sequencing reflects an organization or structure of knowledge, though sequencing inherently models less structure than do network-based approaches (Marks et al., 2001).

The third distinguishing aspect of how mental models are operationalized, representation of emergence, concerns how individuals' mental models are collectively considered as constituents of a team mental model (Kozlowski & Klein, 2000). SMMs align with Chan's (1998) description of a dispersion construct. The multilevel nature of SMMs is similar to that of climate strength (Dickson, Resick, & Hanges, 2006), in which, as opposed to considering the content of the climate at the group level, it is the degree of sharedness in perceptions of climate that is of interest. Thus, because climate can be conceptualized as having both content (e.g., climate for safety) and strength (i.e., degree of consensus at the focal level of analysis), so too can SMMs. When SMMs are considered at the team level, it is typically the degree of similarity in models (analogous to climate strength) as opposed to the particular content of the team model (analogous to climate type) that is of interest. Researchers have tended to index SMMs at the team level by using indices reflecting the level of similarity in the group. For example, in studies using pairwise comparison data, network analysis algorithms such as Pathfinder's C or UCINET's QAP correlation (Borgatti, Everett, & Freeman, 1992) are typically used to compare the overlap of team members' models. Rwg(j) has been used (James, Demaree, & Wolf, 1984) to reflect agreement within the team. Euclidean distance has been reported as a geometric representation of the separation or closeness of the models.

In summary, we propose that these three aspects of how mental models are operationalized (elicitation, structure representation, and representation of emergence) will moderate the strength of observed relations between mental models and outcomes because these strategies are likely to articulate the relationships with varying degrees of reliability and clarity. Because SMMs have been featured prominently in the input–process–outcome models of team performance, we explore the moderating potential of mental model measurement in relation to both team process and team performance.

Method

Database

Twenty-three independent studies reported in 22 journal articles, dissertations, and conference presentations (total number of groups = 1,511; total N approximately 5,668) examining SMMs were included in this meta-analysis. To ensure a comprehensive search, we located these studies by using the following strategies: (a) a computerized search of the PsycInfo, ABI Inform, and ERIC databases, using appropriate key words and phrases (e.g., team or group and cognition, mental models, shared cognition, schemas, knowledge structure, cognitive structure, cognitive map, conceptual framework, etc.); (b) a manual search for references cited in studies included in this meta-analysis; (c) soliciting relevant unpublished manuscripts from authors currently doing research in team cognition; and (d) obtaining related studies from recent conference presentations (e.g., Society for Industrial Organizational Psychology and Academy of Management). Our objective in examining studies from the recent conferences was to incorporate relevant peer-reviewed research results that had not yet been incorporated into the extant literature.

These search strategies resulted in an initial 225 empirical studies with the potential to report relationships relevant to our research questions. These manuscripts were carefully reviewed for relevance; only 23 of these studies were found to report sufficient information about relevant constructs to be retained in this meta-analysis. In order to be retained, studies must have (a) assessed SMMs in relation to team process, team performance, or both and (b) provided sufficient information to compute a correlation between these variables (means and standard deviations, effect sizes, etc.).

Only studies examining task-performing teams were included in the meta-analysis (Kozlowski & Ilgen, 2007). Although teams and groups both represent meaningful social units, teams possess the additional qualifier of interdependence. Teams are a subset of groups whose members share a common goal and must cooperate in order to attain shared goals (Guzzo & Shea, 1992; Wageman, 1995). We included both the terms team and group in our search to ensure that we identified all appropriate articles, regardless of the label applied by the author or authors.

Furthermore, because SMMs generally describe the extent to which team members' mental models are overlapping or similar (Cooke et al., 2007; Edwards, Day, Arthur, & Bell, 2006; Mohammed et al., 2000), to fit within the context of this meta-analysis, team cognition constructs examined within the primary studies must have attempted to explicate (a) the content of known elements of the team members' mental models (elicitation) and (b) the collective representation (similarity) of the team members' elemental content (representation of emergence; Klimoski & Mohammed, 1994; Kozlowski & Klein, 2000; Mohammed et al., 2000).¹ Studies were omitted if they dealt with constructs that differed from those examined in this research or did not report sufficient information to assess the mental model measurement strategy used (in terms of elicitation method, structure representation, or representation of emergence). For example, research in the transactive memory tradition assesses the distribution of team cognition but does not attempt to assess the degree of overlap among team members' mental models of task or team concepts. As such, transactive memory research was not included in this meta-analysis. Similarly, research exploring the accuracy of mental models typically examines the overlap between a team member's mental model and that of an expert but does not necessarily examine the overlap in cognition among team members. These types of studies were also excluded. Finally, because SMMs are meaningful at the team level, we excluded studies that only reported relationships at the individual level. This practice was more common, for example, in the human factors literature than in the Industrial/Organizational (I/O) psychology and management literatures.

When authors reported separate correlations for different samples, those correlations were examined separately. The manuscripts included in this meta-analysis are listed in the references prefixed with an asterisk.

Coding Procedure

We each undertook an independent effort to code the 23 studies that met criteria for inclusion in the analysis by using a jointly developed coding scheme. Intercoder agreement was very high (95.4% across all variables coded), likely because of the objective nature of the data coded. Table 1 reports intercoder reliabilities for each of the study's key variables. Coding disagreements were resolved through discussion. Data coded included study sample size, number of groups included, sample characteristics, method used to operationalize SMMs (in terms of the elicitation method, structure representation, and representation of emergence used), and relationships between SMMs and team process and team performance. When reported, we also coded reliability estimates of team cognition, process, and performance. A summary of study and sample characteristics (sample, team task, study design and setting, and shared mental model measurement moderators) for the primary studies is included in the Appendix.

Coding of process and performance. We coded relationships between team SMMs and team process and team performance. Team process describes verbal and behavioral acts that team members engage in to transform inputs to outcomes (Marks et al., 2001). Examples of team process included in these studies are coordination, backup behavior, and planning. Although several forms of team process have been examined (transition, action, and interpersonal process; Marks et al., 2001), an insufficient number of studies was available to permit an examination of the measurement moderators for each type of process. As such, we collapsed across form of process, computing composite …

¹ Researchers in the shared mental model tradition also frequently attempt to explicate the relationships between team cognitive elements (structure representation). As this is not uniformly done in shared mental model research, we included articles that did not assess structure representation in our database but examined this measurement strategy moderator whenever possible.
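The team-level similarity indices described earlier (rwg(j) for within-team agreement, Euclidean distance for the separation between two members' models) are straightforward to compute. The sketch below is a minimal, hypothetical illustration, not code drawn from any primary study: the data are invented Likert-type ratings, and rwg(j) uses the uniform null distribution, whose expected variance is (A² − 1)/12 for an A-point scale (James, Demaree, & Wolf, 1984).

```python
def rwg_j(ratings, n_options):
    """Within-group agreement rwg(j) (James, Demaree, & Wolf, 1984).

    ratings: one row per member, one column per item.
    n_options: number of scale points A; the uniform "no agreement"
    null distribution has expected variance (A**2 - 1) / 12.
    """
    n_members, n_items = len(ratings), len(ratings[0])
    sigma_e2 = (n_options ** 2 - 1) / 12.0

    def item_variance(j):
        col = [row[j] for row in ratings]
        mean = sum(col) / n_members
        return sum((x - mean) ** 2 for x in col) / (n_members - 1)

    # Mean observed item variance relative to the expected error variance
    ratio = sum(item_variance(j) for j in range(n_items)) / n_items / sigma_e2
    return (n_items * (1 - ratio)) / (n_items * (1 - ratio) + ratio)

def euclidean_distance(model_a, model_b):
    """Distance between two members' rating vectors; smaller values
    indicate more similar mental models."""
    return sum((a - b) ** 2 for a, b in zip(model_a, model_b)) ** 0.5

# Hypothetical team: three members rate four concept pairs on a 5-point scale
team = [[4, 4, 5, 4],
        [4, 5, 5, 4],
        [3, 4, 5, 4]]
print(rwg_j(team, n_options=5))              # close to 1: high agreement
print(euclidean_distance(team[0], team[1]))  # 1.0
```

Pathfinder's C and UCINET's QAP correlation operate on derived network representations and are not reproduced in this sketch.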
…tion and its relevant correlates were not reported in all studies, corrections for unreliability were accomplished by using artifact distribution meta-analysis. Artifact distribution meta-analysis recognizes that it is often the case that every study in a meta-analysis does not report all necessary information to correct for attenuation on all the artifacts that impact relationships in that study (Hunter & Schmidt, 2004). Thus, correction for these artifacts (in this case, reliability) is accomplished with a distribution of artifact values collected from studies that do provide relevant artifact information. Corrections were made for unreliability in both SMMs and correlate measures (process and performance). Given the possibility of a file-drawer effect (wherein significant findings are more likely to be published; Rosenthal, 1979), we also conducted a file-drawer analysis (Hunter & Schmidt, 2004) to estimate the number of studies reporting null findings that would be required to reduce reliability-corrected correlations to a specified lower value (we used ρ = .05).

Results

Tables 2 and 3 present meta-analytic results examining team cognition relations. For each analysis, we report the number of independent samples included in the meta-analysis (k), the total number of groups across these samples (N), the sample size weighted mean observed correlation (r), the sample size weighted standard deviation associated with the observed mean (SDr), the reliability corrected correlation (ρ), the standard deviation associated with ρ (SDρ), the 80% credibility interval around ρ, the 90% confidence interval around ρ, the percentage of variance due to sampling error (%SEV), and the percentage of variance due to all statistical artifacts (%ARTV).²

Measurement Strategy as a Moderator of the SMM Similarity–Team Process Relationship

We first examined the operationalization of SMMs as a moderator of the relationship between SMMs and team process. Table 2 presents results examining elicitation method, structure representation, and representation of emergence as moderators of the SMM–team process relationship. Results indicate that SMMs are most strongly related to team process when models are elicited by using similarity ratings (ρ = .27, SDρ = 0). The next strongest relationship results from models elicited with concept mapping or card sorting (ρ = .17, SDρ = .06). Notably, these two confidence intervals do not overlap, suggesting that elicitation does moderate the size of the observed SMMs–team process relationship. When models were elicited with questionnaires or rating scales, the relationship between SMMs and team process was small and negative (ρ = −.05, SDρ = .37), with a credibility interval ranging from −.52 to .43.

Next we examined the SMMs–team process relationship on the basis of how model structure was represented. The strongest effects were evidenced with Pathfinder PFNets (ρ = .31, SDρ = 0). Positive, albeit smaller, effects were seen when structure was represented with a card-sorting or concept-mapping approach (ρ = .17, SDρ = .06). Notably, the confidence intervals do not overlap, suggesting that structure representation moderates this relationship. When the method used to operationalize SMMs did not enable the representation of structure, there was essentially no observed effect between SMMs and team process (ρ = .05, SDρ = .28).

Lastly, we compared the size of the SMM–team process relationship on the basis of the metric used to represent the "shared" aspect of team mental models. Notably, effect sizes were highest when using Pathfinder C (ρ = .30, SDρ = 0) but were also high when using consistency or Euclidean distance indices (ρ = .20, SDρ = 0). However, studies using an index of within-group agreement showed no relation between SMMs and team process (ρ = −.06, SDρ = .35).

² We report both the credibility and confidence intervals because each provides unique information about the nature of ρ (Hunter & Schmidt, 2004; Whitener, 1990). Specifically, the credibility interval (CV) provides an estimate of the variability of corrected correlations across studies. Wide credibility intervals or those that include zero suggest the presence of a moderator. An 80% CV that excludes zero indicates that more than 90% of the corrected correlations are different from zero (10% lie beyond the upper bound of the interval). The confidence interval (CI) provides an estimate of the accuracy of our estimation of ρ (Whitener, 1990); in other words, the confidence interval estimates the variability around ρ due to sampling error. A 90% CI that excludes zero indicates that if our estimation procedures were repeated many times, 95% of the estimates of ρ would be larger than zero (5% would fall beyond the upper limit of the interval).
Table 2
Mental Model Measurement as a Moderator of the Shared Mental Model–Team Process Relationship

Meta-analysis                      k      N     r   SDr     ρ   SDρ    80% CV    90% CI   %SEV  %ARTV  FD k
SMMs–team process                 17  1,238   .20   .23   .22   .22  -.06/.50  -.14/.58  25.49  25.83    58
Elicitation method
  Similarity ratings               6    271   .24   .06   .27     0   .27/.27   .27/.27    100    100    26
  Concept mapping/card sorting     3    208   .16   .13   .17   .06   .10/.24   .07/.27  82.10  82.49     7
  Questionnaire/rating scale       5    302  -.04   .34  -.05   .37  -.52/.43  -.66/.56  14.53  14.53     —
  Cognitive map/content analysis   1     20   .29     —     —     —         —         —      —      —     —
Structure representation
  Pathfinder                       3    116   .28   .02   .31     0   .31/.31   .31/.31    100    100    16
  UCINET                           1     56   .17     —     —     —         —         —      —      —     —
  Multidimensional scaling         —      —     —     —     —     —         —         —      —      —     —
  Concept mapping/card sorting     3    208   .16   .13   .17   .06   .10/.24   .07/.27  82.10  82.49     7
  None                             9    538   .05   .28   .05   .28  -.31/.42  -.41/.51  20.96  20.98     —
Representation of emergence
  Agreement                        5    320  -.05   .33  -.06   .35  -.50/.39  -.64/.52  14.15  14.16     1
  Consistency                      4    219   .19   .05   .20     0   .20/.20   .20/.20    100    100    12
  Concept map scoring system       1     63   .23     —     —     —         —         —      —      —     —
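The credibility intervals in Table 2 can be approximately reproduced from the tabled ρ and SDρ, since an 80% credibility interval is ρ ± z·SDρ with z the 90th percentile of the standard normal (≈ 1.28). A small sketch; the one- to two-hundredths discrepancies against the printed bounds come from rounding in the tabled inputs.

```python
def cv80(rho, sd_rho, z=1.2816):
    """80% credibility interval, rho +/- z * SD_rho; 10% of corrected
    correlations are estimated to lie beyond each bound."""
    return round(rho - z * sd_rho, 2), round(rho + z * sd_rho, 2)

# Table 2, questionnaire/rating-scale row (rho = -.05, SD_rho = .37):
print(cv80(-0.05, 0.37))  # (-0.52, 0.42); table prints -.52/.43
# Table 2, no-structure-representation row (rho = .05, SD_rho = .28):
print(cv80(0.05, 0.28))   # (-0.31, 0.41); table prints -.31/.42
```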
Table 3
Mental Model Measurement as a Moderator of the Shared Mental Model–Performance Relationship

Meta-analysis                      k      N     r   SDr     ρ   SDρ    80% CV    90% CI   %SEV  %ARTV  FD k
SMMs–team performance             19  1,363   .29   .13   .33   .09   .22/.44   .18/.48  65.87  68.33   106
Elicitation method
  Similarity ratings               9    483   .20   .15   .23   .07   .14/.32   .11/.35  80.74  81.35    32
  Concept mapping/card sorting     6    411   .28   .10   .31     0   .31/.31   .31/.31    100    100    31
  Questionnaire/rating scale       6    566   .32   .16   .39   .16   .18/.59   .13/.65  32.88  33.95    41
  Cognitive map/content analysis   2     47   .50   .14   .56     0   .56/.56   .56/.56    100    100    61
Structure representation
  Pathfinder                       5    287   .28   .08   .31     0   .31/.31   .31/.31    100    100    26
  UCINET                           1     56   .14     —     —     —         —         —      —      —     —
  Multidimensional scaling         1     41   .48     —     —     —         —         —      —      —     —
  Concept mapping/card sorting     6    411   .28   .10   .31     0   .31/.31   .31/.31    100    100    31
  None                             9    756   .27   .16   .32   .15   .14/.51   .07/.57  40.19  41.52    49
Representation of emergence
  Agreement                        3    467   .36   .11   .39   .09   .27/.49   .24/.54  41.04  44.05    20
  Consistency                      6    289   .13   .13   .16     0   .16/.16   .16/.16    100    100    13
  Concept map scoring system       4    266   .32   .10   .35     0   .35/.35   .35/.35    100    100    24
  Pathfinder                       5    287   .28   .08   .31     0   .31/.31   .31/.31    100    100    26
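The FD k column in Tables 2 and 3 can be recovered from k and ρ with the file-drawer formula cited in the Method section (Hunter & Schmidt, 2004): the number of unlocated null-result studies that would be needed to pull the mean corrected correlation down to the ρ = .05 criterion. The sketch below reproduces the overall rows of both tables.

```python
def file_drawer_k(k, rho_bar, rho_criterion=0.05):
    """Number of unlocated null-result studies needed to reduce the mean
    corrected correlation to rho_criterion (Hunter & Schmidt, 2004)."""
    return round(k * (rho_bar - rho_criterion) / rho_criterion)

# Overall rows of Tables 2 and 3:
print(file_drawer_k(17, 0.22))  # 58: SMMs-team process row
print(file_drawer_k(19, 0.33))  # 106: SMMs-team performance row
```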
Cannon-Bowers, J. A., Salas, E., & Converse, S. A. (1993). Shared mental models in expert team decision making. In N. J. Castellan, Jr. (Ed.), Individual and group decision making: Current issues (pp. 221–246). Hillsdale, NJ: Erlbaum.
Chan, D. (1998). Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. Journal of Applied Psychology, 83, 234–246.
Cooke, N. J., Gorman, J. C., Duran, J. L., & Taylor, A. R. (2007). Team cognition in experienced command-and-control teams. Journal of Experimental Psychology: Applied, 13, 146–157.
*DeChurch, L. A. (2003). Teams leading teams: Examining the role of leadership in multi-team systems. Dissertation Abstracts International: Section B. Sciences and Engineering, 64(01), 450.
DeChurch, L. A., & Mesmer-Magnus, J. R. (in press). The cognitive underpinnings of team effectiveness: A meta-analysis. Journal of Applied Psychology.
Dickson, M. W., Resick, C. J., & Hanges, P. J. (2006). When organizational climate is unambiguous, it is also strong. Journal of Applied Psychology, 91, 351–364.
*Edwards, B. D., Day, E. A., Arthur, W., Jr., & Bell, S. T. (2006). Relationships among team ability composition, team mental models, and team performance. Journal of Applied Psychology, 91, 727–736.
*Ellis, A. P. J. (2006). System breakdown: The role of mental models and transactive memory in the relationship between acute stress and team performance. Academy of Management Journal, 49, 576–589.
Ensley, M. D., & Pearce, C. L. (2001). Shared cognition in top management teams: Implications for new venture performance. Journal of Organizational Behavior, 22, 145–160.
Faraj, S., & Sproull, L. (2000). Coordinating expertise in software development teams. Management Science, 46, 1554–1568.
*Fleming, P. J., Wood, G. M., Bader, P. K., & Zaccaro, S. J. (2003). The locus of SMM: Whence does the sharedness come? Paper presented at the annual meeting of the Society for Industrial Organizational Psychology, Orlando, FL.
Guzzo, R. A., & Shea, D. (1992). Group performance and intergroup relations in organizations. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 3, pp. 269–313). Palo Alto, CA: Consulting Psychologists Press.
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.). New York, NY: Sage.
James, L. R., Demaree, R. G., & Wolf, G. (1984). Estimating within-group interrater reliability with and without response bias. Journal of Applied Psychology, 69, 85–98.
Klimoski, R., & Mohammed, S. (1994). Team mental model: Construct or metaphor? Journal of Management, 20, 403–437.
*Knight, D., Pearce, C. L., Smith, K. G., Olian, J. D., Sims, H. P., Smith, K. A., & Flood, P. (1999). Top management team diversity, group process, and strategic consensus. Strategic Management Journal, 20, 445.
Kozlowski, S. W. J., & Ilgen, D. R. (2007). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7, 77–124.
Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent processes. In S. W. J. Kozlowski & K. J. Klein (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 3–90). San Francisco, CA: Jossey-Bass.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.
LePine, J. A., Piccolo, R. F., Jackson, C. L., Mathieu, J. E., & Saul, J. R. (2008). A meta-analysis of teamwork processes: Tests of a multidimensional model and relationships with team effectiveness criteria. Personnel Psychology, 61, 273–307.
*Levesque, L. L., Wilson, J. M., & Whorley, D. R. (2001). Cognitive divergence and shared mental models in software development project teams. Journal of Organizational Behavior, 22, 135–144.
*Lim, B. C., & Klein, K. (2006). Team mental models and team performance: A field study of the effects of team mental model similarity and accuracy. Journal of Organizational Behavior, 27, 403–418.
*Lim, R. A. (2004). Patterns of cognition in information systems development teams. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 65(05), 1862.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
Marks, M. A., Mathieu, J. E., & Zaccaro, S. J. (2001). A temporally based framework and taxonomy of team processes. Academy of Management Review, 26, 356–376.
*Marks, M. A., Sabella, M. J., Burke, C. S., & Zaccaro, S. (2002). The impact of cross-training on team effectiveness. Journal of Applied Psychology, 87, 3–13.
*Marks, M. A., Zaccaro, S. J., & Mathieu, J. E. (2000). Performance implications of leader briefings and team-interaction training for team adaptation to novel environments. Journal of Applied Psychology, 85, 971–986.
*Mathieu, J., Maynard, M. T., Rapp, T. L., & Mangos, P. M. (2006). Interactive effects of team and task shared mental models as related to air traffic controller's collective efficacy and effectiveness. Paper presented to the annual meeting of the Academy of Management, Atlanta, GA.
*Mathieu, J. E., Heffner, T. S., & Goodwin, G. F. (2005). Scaling the quality of teammates' mental models: Equifinality and normative comparisons. Journal of Organizational Behavior, 26, 37–56.
*Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. (2000). The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85, 273–283.
*Minionis, D. P. (1995). Enhancing team performance in adverse conditions: The role of shared team mental models and team training on an interdependent task. Dissertation Abstracts International: Section B. Sciences and Engineering, 56(02), 1139.
Mohammed, S., Klimoski, R., & Rentsch, J. R. (2000). The measurement of team mental models: We have no shared schema. Organizational Research Methods, 3, 123–165.
*Mohammed, S., & Ringseis, E. (2001). Cognitive diversity and consensus in group decision making: The role of inputs, processes, and outcomes. Organizational Behavior and Human Decision Processes, 85, 310–335.
*Probber, J. K. (2000). Shared mental models as an antecedent of work group effectiveness. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 60(08), 3018.
*Rapert, M. I., Velliquette, A., & Garretson, J. A. (2002). The strategic implementation process: Evoking strategic consensus through communication. Journal of Business Research, 55, 301–310.
*Rentsch, J. R., & Klimoski, R. J. (2001). Why do "great minds" think alike? Antecedents of team member schema agreement. Journal of Organizational Behavior, 22, 107–120.
Rentsch, J. R., Small, E. E., & Hanges, P. J. (2008). Cognitions in organizations and teams: What is the meaning of cognitive similarity? In B. Smith (Ed.), The people make the place: Exploring dynamic linkages between individuals and organizations (pp. 127–155). New York, NY: Psychology Press.
*Resick, C. J., Dickson, M. W., & Mitchelson, J. K. (2005). Cognitive ability, personality, and shared mental models in teams. Paper presented to the annual meeting of the Society for Industrial Organizational Psychology, Los Angeles, CA.
*Rittman, A. L. (2004). A comparison of team training strategies for team effectiveness. Dissertation Abstracts International: Section B. Sciences and Engineering, 65(05), 2679.
Rosenthal, R. (1979). The "file drawer problem" and tolerance for null results. Psychological Bulletin, 86, 638–641.
Rouse, W. B., & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100, 349–363.
Salas, E., Rosen, M., Burke, C. S., & Goodwin, G. F. (2009). The wisdom of collectives in organizations: An update of the teamwork competencies. In E. Salas, G. F. Goodwin, & C. S. Burke (Eds.), Team effectiveness in complex organizations. New York, NY: Taylor & Francis.
Smith-Jentsch, K. A. (2009). Measuring team-related cognition: The devil is in the details. In E. Salas, G. F. Goodwin, & C. S. Burke (Eds.), Team effectiveness in complex organizations. New York, NY: Taylor & Francis.
*Smith-Jentsch, K. A., Mathieu, J. E., & Kraiger, K. (2005). Investigating linear and interactive effects of shared mental models on safety and efficiency in a field setting. Journal of Applied Psychology, 90, 523–535.
*Stout, R. J., Cannon-Bowers, J., & Salas, E. (1999). Planning, shared mental models, and coordinated performance: An empirical link is established. Human Factors, 41, 61–71.
Wageman, R. (1995). Interdependence and group effectiveness. Administrative Science Quarterly, 40, 145–180.
Whitener, E. M. (1990). Confusion of confidence intervals and credibility intervals in meta-analysis. Journal of Applied Psychology, 75, 315–321.

Received July 31, 2008
Revision received February 20, 2009
Accepted February 23, 2009
Appendix

Author (year) | Sample | Team task | Design | Setting | Elicitation method | Structure representation | Representation of emergence
Marks, Sabella, Burke, and Zaccaro (2002) | Undergraduates | Longbow2 (a) and TWIST (b) military flight/tank simulations | Experiment | Lab | Similarity ratings (a); concept mapping/card sorting (b) | Pathfinder (a); none (b) | Pathfinder C (a); agreement (b)
Mathieu, Heffner, Goodwin, Salas, and Cannon-Bowers (2000) | Undergraduates | Flight simulator | Nonexperiment | Lab | Similarity ratings | UCINET | Consistency
Mathieu, Heffner, and Goodwin (2005) | Undergraduates | Flight simulator | Nonexperiment | Lab | Similarity ratings | None | Consistency
Mathieu, Maynard, Rapp, and Mangos (2006) | Air traffic control teams | Coordinate takeoff/landing, ensure route timing, etc. | Nonexperiment | Field | Questionnaire/rating | None | Consistency
Minionis (1995) | Undergraduates | Tank battle simulation | Experiment | Lab | Concept mapping/card sorting | Concept mapping/card sorting | Agreement
Mohammed and Ringseis (2001) | Undergraduates | Towers market multi-issue negotiation task | Experiment | Lab | Questionnaire/rating | None | Agreement
Probber (2000) | Undergraduates | Computer simulation of hotel management teams | Experiment | Lab | Similarity ratings | None | Euclidean distance–MDS
Rapert, Velliquette, and Garretson (2002) | CEO/marketing executive teams | Various | Nonexperiment | Field | Questionnaire/rating | None | Agreement
Rentsch and Klimoski (2001) | Intact work teams from U.S. Department of Defense organization | Various | Nonexperiment | Field | Similarity ratings | Multidimensional scaling | Euclidean distance–MDS
Resick, Dickson, and Mitchelson (2005) | Undergraduates | Freelancer battle | Nonexperiment | Lab | Similarity ratings | Pathfinder | Pathfinder (C)