

He, J., & Van de Vijver, F. J. R. (2016). Cross-cultural methods of research. In H. L. Miller (Ed.), The SAGE encyclopedia of theory in psychology (pp. 193-195). Thousand Oaks, CA: Sage.

Cross-Cultural Methods of Research

In cross-cultural psychology, the taxonomy of bias and equivalence provides a framework to

enhance methodological rigor. Bias and equivalence are generic terms referring to

measurement issues that challenge inferences from research studies. The concepts equally

apply to qualitative and quantitative studies, but, as they originate in quantitative cross-cultural studies, the following discussion mainly derives from the quantitative tradition.

Bias refers to systematic errors that threaten the validity of measures administered in different

cultures. The existence of bias implies that differences in observed scores may not

correspond to genuine differences in the target construct. If not taken into account, bias can

be misinterpreted as substantive cross-cultural differences. Equivalence refers to the level of

comparability of scores across cultures. Minimizing bias and maximizing equivalence are the

prerequisites for valid comparison of cultures. Three types of bias and three levels of equivalence are considered below, after which we describe strategies for dealing with bias and enhancing equivalence in cross-cultural research.


The Taxonomy of Bias

Construct bias, method bias, and item bias are distinguished on the basis of the sources of

incomparability. Construct bias occurs when, in a cross-cultural study, the target construct

varies in meaning across the cultures. For example, filial piety in most cultures is defined as

respect for, love of, and obedience to parents, whereas, in East Asia, this concept broadens to

bearing a large responsibility to parents, including caring for them when they grow old and

need help.

Method bias involves incomparability resulting from sampling, assessment instruments, and

administration procedures. Sample bias refers to differences in sample characteristics, such as age, gender, and socioeconomic status, that have a bearing on observed score

differences. Instrument bias refers to artifacts of instruments, such as response styles when

Likert-scale responses are used, and stimulus familiarity (the different levels of familiarity of

respondents with the assessment instrument). Administration bias can arise from cross-

cultural variations in the impact of the conditions of administration of the assessment (e.g.,

computer-assisted administration), clarity of instructions, or language differences.

Item bias, also known as Differential Item Functioning (DIF), occurs if persons with the same

trait (or ability) level, but coming from different cultures, are not equally likely to endorse the

item (or solve it correctly). As a result, an item may have a different psychological meaning

across cultures. The sources of item bias can be both linguistic (e.g., lack of translatability or

poor translations) and cultural (e.g., inapplicability of an item in a culture).

The Taxonomy of Equivalence

Three hierarchically nested levels of equivalence, namely construct, metric, and scalar equivalence, have been established. Construct equivalence means that the same theoretical

construct is measured across all cultures. Without construct equivalence, there is no basis for

cross-cultural comparison. Metric equivalence implies that different measures at the interval

or ratio level have the same measurement unit across cultures but different origins. An analog

is temperature measurement using degrees Celsius and Kelvin; the measurement unit (the

degree) of the two measures is the same, yet their origins are 273.15 degrees different

(degrees Kelvin = degrees Celsius + 273.15). Scores may be compared within cultural groups

(e.g., male and female differences can be tested in each culture) but cannot be compared

directly across cultures unless the measurement scales have the same measurement unit and

origin (i.e., scalar equivalence). In this case, statistical procedures that compare means across

cultures, such as analyses of variance and t tests, are appropriate.
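The temperature analogy can be worked out numerically. In this sketch (the score values are illustrative), within-scale differences agree because the measurement unit is shared, while raw levels differ because the origins do not:

```python
# Temperature analogy for metric vs. scalar equivalence: Celsius and Kelvin
# share a measurement unit (the degree) but differ in origin by 273.15.

celsius = [18.0, 25.0, 31.0]               # scores expressed on one scale
kelvin = [c + 273.15 for c in celsius]     # the same scores on a shifted scale

# Metric equivalence: differences within each scale carry the same meaning.
diff_c = celsius[1] - celsius[0]
diff_k = kelvin[1] - kelvin[0]
print(diff_c, diff_k)   # the two differences agree (up to rounding)

# Scalar equivalence would additionally require a common origin; the raw
# levels (18.0 vs. 291.15) are not directly comparable across the scales.
print(celsius[0], kelvin[0])
```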

Dealing with Bias and Equivalence in Quantitative and Qualitative Studies

Methodological problems are a perennial issue in cross-cultural research, both in qualitative

and quantitative studies. Dealing with bias can be split up into approaches that focus on

design (a priori procedures) and approaches that focus on analysis (a posteriori procedures).

The former aims to change the design (sampling, assessment instruments, and their

administration) in such a way that bias can be eliminated or at least controlled as much as

possible. For example, items that are difficult to translate can be adapted to increase their

applicability in the target culture. Design changes to avoid bias issues are often equally

applicable to qualitative and quantitative studies.

By contrast, the procedures to deal with bias after the data have been collected are often

different for qualitative and quantitative studies. Quantitative researchers have a large set of

statistical procedures available to deal with bias in data, as outlined below. However, various other procedures, such as in-depth and cognitive interviews (a probing technique that reveals the thought processes of respondents who are presented with test items), can be used in

both qualitative and quantitative studies to reduce bias. For example, follow-up interviews

with participants or subgroups that manifested an extreme position on the target construct can

be useful for understanding the extent to which the extreme position is a consequence of

extreme response styles or reflects a genuinely extreme position on the construct.

Dealing with Construct Bias

In quantitative studies, equivalence can be empirically demonstrated by confirming the cross-cultural invariance (identity) of the structure of the construct and the adequacy of items used

for assessment. Both exploratory factor analysis (EFA) and confirmatory factor analysis

(CFA) can be used to detect construct bias or to ensure construct equivalence. When an

instrument is lengthy or does not have a clearly defined expected structure, EFA is preferred.

The rationale of this approach is that the identity of factors (or dimensions) in an instrument

used to assess the target construct across cultures is sufficient evidence for equivalence. The

identity of factors is tested through target rotations of the factor structure across cultures.
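In this tradition, the similarity of factor loadings after target rotation is commonly quantified with Tucker's coefficient of congruence, with values above roughly .95 conventionally read as evidence of factor similarity. A minimal sketch with hypothetical loadings:

```python
import numpy as np

def tucker_congruence(f1, f2):
    """Tucker's coefficient of congruence between two factor-loading vectors."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return f1 @ f2 / np.sqrt((f1 @ f1) * (f2 @ f2))

# Loadings of the same factor in two cultural groups (hypothetical values):
loadings_a = [0.70, 0.65, 0.80, 0.55]
loadings_b = [0.68, 0.60, 0.75, 0.58]

phi = tucker_congruence(loadings_a, loadings_b)
print(round(phi, 3))   # values above ~.95 suggest factorial similarity
```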

When the expected structure of the target construct can be derived from theory or previous

research, CFA is more appropriate for confirming or disconfirming equivalence. CFA can test hierarchical models with a systematically increasing or decreasing number of parameters, corresponding to construct, metric, and scalar equivalence. Given its high level of detail and statistical rigor, CFA is perhaps the most advanced procedure for establishing

quantitative equivalence. When there is construct bias, researchers should acknowledge the

incompleteness of the construct and may compare equivalent subfacets, such as the

comparison of the common operationalization of filial piety in Western and East Asian

cultures, omitting the culturally-specific elements.

In qualitative studies, there are fewer established procedures to deal with construct bias. The

definition of the target construct may be clarified through the interactions of investigators

with cultural experts. An in-depth analysis may reveal the common and unique aspects of a

construct across cultures. The need to evaluate existing, usually Western-based theoretical

frameworks vis-à-vis local conceptualizations also arises in non-comparative studies that

involve non-Western participants.

The procedures for reducing construct bias in quantitative and qualitative studies have their

own strengths and weaknesses. The power of the quantitative approach is statistical rigor,

which provides a firm basis for drawing conclusions about construct bias. Comparable rigor

cannot be achieved in a qualitative approach. However, the weakness of the quantitative

approach is its dependence on standard instruments to gauge specific constructs. Statistical

analyses can reveal whether, say, depression, as measured by the Beck Depression Inventory,

has the same underlying factors in all cultures that a particular study involves, but these

analyses cannot demonstrate that the instrument addresses salient aspects of depression in

specific cultures. This latter aspect, namely, exhaustively identifying the culturally-relevant

components of a construct, is the strength of the qualitative approach. Quantitative and

qualitative approaches to construct bias are compatible and often complementary.

Dealing with Method Bias and Item Bias

Method bias is an often-overlooked aspect of cross-cultural studies. Frequently observed sources of method bias are small, biased samples, especially when target respondents are

difficult to recruit (thus challenging the generalizability of findings to new samples from the

same population); response styles (such as social desirability); and the obtrusiveness of

interviewers or observers (a well-known issue in ethnographic research involving

observation). Changes in the research design so as to minimize the impact of the method used

can be implemented in quantitative and qualitative studies, such as lengthy introductions or

sessions without actual data collection to minimize experimenter or observer obtrusiveness.

In quantitative studies it is sometimes possible to statistically estimate the impact of method

bias sources by adapting the design of a study. An example is the administration of a social-desirability inventory in conjunction with the target instrument in order to be able to

statistically control for the effect of social desirability on target-construct scores; in

operational terms, social desirability becomes a covariate. In qualitative studies, it is possible

for the research team to develop a coding scheme, provided they are involved in every stage

of the qualitative study. If necessary, external cultural experts, as well as interviewees, can be

invited to scrutinize the interpretation of qualitative data, and a second interview can be

arranged if further clarification is needed.
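The covariate approach described above can be sketched as an analysis of covariance. In this illustration the data, variable names, and effect sizes are all hypothetical, and statsmodels is assumed for the regressions: the simulated groups differ only in socially desirable responding, so the apparent cultural difference shrinks once the covariate is entered.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: two cultural groups; group B shows more socially
# desirable responding (sds), which inflates its target-construct scores.
culture = np.repeat(["A", "B"], n)
sds = rng.normal(0.0, 1.0, 2 * n) + np.repeat([0.0, 0.5], n)
target = 5.0 + 0.8 * sds + rng.normal(0.0, 1.0, 2 * n)  # no true cultural effect

df = pd.DataFrame({"culture": culture, "target": target, "sds": sds})

# Naive cross-cultural comparison vs. comparison with social desirability
# entered as a covariate.
naive = smf.ols("target ~ C(culture)", data=df).fit()
adjusted = smf.ols("target ~ C(culture) + sds", data=df).fit()

print(naive.params["C(culture)[T.B]"])     # apparent "cultural" difference
print(adjusted.params["C(culture)[T.B]"])  # difference after controlling sds
```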

Problems with the cultural appropriateness of specific words or of entire items, as well as the poor translatability of words, often underlie item bias. For example, an item that asks for the

number of times a woman has been pregnant may be perceived as an indirect question about

the number of abortions she has had, notably in countries where abortion is illegal. Numerous

quantitative procedures have been developed to analyze item bias, such as structural equation

modeling, item response theory, the Mantel-Haenszel approach, and logistic regression.
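The logistic-regression approach to item bias can be sketched as follows. In this hypothetical simulation (variable names and effect sizes are illustrative, and statsmodels is assumed), both groups have the same trait distribution, but the item is easier for one group at equal trait levels, i.e., it shows uniform DIF:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500

# Hypothetical data: identical trait distributions in both groups, but the
# item is easier for group 1 at equal trait levels (uniform item bias).
trait = rng.normal(0.0, 1.0, 2 * n)
group = np.repeat([0, 1], n)
p = 1.0 / (1.0 + np.exp(-(1.2 * trait + 0.8 * group - 0.2)))
y = (rng.random(2 * n) < p).astype(int)

df = pd.DataFrame({"y": y, "trait": trait, "group": group})

# Compare a trait-only model against one adding group (uniform DIF) and a
# trait-by-group interaction (non-uniform DIF); in applications the trait
# is usually approximated by the total test score.
base = smf.logit("y ~ trait", data=df).fit(disp=0)
dif = smf.logit("y ~ trait + group + trait:group", data=df).fit(disp=0)

lr_stat = 2 * (dif.llf - base.llf)  # chi-squared with 2 df under "no DIF"
print(lr_stat)   # large values indicate item bias
```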

Cognitive interviewing is a powerful tool that can be used in qualitative studies.

Jia He and Fons van de Vijver

Further Recommended Readings

Denzin, N. (2006). Sociological methods: A sourcebook (5th ed.). Piscataway, NJ: Aldine.


Harkness, J. A., Van de Vijver, F. J. R., & Mohler, P. Ph. (Eds.). (2003). Cross-cultural

survey methods. New York, NY: Wiley.

Matsumoto, D., & Van de Vijver, F. J. R. (Eds.). (2011). Cross-cultural research methods in

psychology. New York, NY: Cambridge University Press.

Van de Vijver, F. J. R., & Leung, K. (1997). Methods and data analysis for cross-cultural

research. Newbury Park, CA: Sage.
