
Journal of Social Service Research

ISSN: 0148-8376 (Print) 1540-7314 (Online) Journal homepage: http://www.tandfonline.com/loi/wssr20

Minimizing Social Desirability Bias in Measuring Sensitive Topics: The Use of Forgiving Language in Item Development

Jennifer L.K. Charles & Patrick V. Dattalo

To cite this article: Jennifer L.K. Charles & Patrick V. Dattalo (2018) Minimizing Social Desirability
Bias in Measuring Sensitive Topics: The Use of Forgiving Language in Item Development, Journal
of Social Service Research, 44:4, 587-599, DOI: 10.1080/01488376.2018.1479335

To link to this article: https://doi.org/10.1080/01488376.2018.1479335

Published online: 15 Aug 2018.

JOURNAL OF SOCIAL SERVICE RESEARCH
2018, VOL. 44, NO. 4, 587–599
https://doi.org/10.1080/01488376.2018.1479335

Minimizing Social Desirability Bias in Measuring Sensitive Topics: The Use of Forgiving Language in Item Development
Jennifer L.K. Charles [a] and Patrick V. Dattalo [b]

[a] The Catholic University of America, National Catholic School of Social Service, Washington, DC, USA; [b] School of Social Work, Virginia Commonwealth University, Richmond, VA, USA

CONTACT Jennifer L.K. Charles, charlesj@cua.edu, The Catholic University of America, National Catholic School of Social Service, 620 Michigan Avenue NE, Washington, DC 20064, United States.

© 2018 Taylor & Francis Group, LLC

ABSTRACT
Social science research has long been concerned with attitudes, beliefs, and behaviors that are potentially objectionable, immoral, or illegal. These types of topics include, for example, racism, ableism, cheating, and stealing, among others. Referred to as “sensitive topics,” their investigation usually involves questions that require respondents to admit to attitudes, beliefs, and behaviors that violate social norms, making their assessment susceptible to error due to social desirability bias. This article describes an empirical investigation of an approach to minimize this bias, the use of “forgiving language” in survey item development, and its effect on item variability. Using secondary data initially collected as part of a measurement development study of mental health providers’ stigmatization of service users, 15 pairs of similar, thematically targeted items, varying with respect to wording approach, were tested using a purposive sample of mental health providers (N = 220). Findings indicate that items crafted in a forgiving manner were not significantly influenced by social desirability bias, in contrast to items developed in more traditional language. Additionally, forgiving-language items produced higher levels of agreement, on average, when compared to those written in more traditional language. More research is indicated, including systematic variation of wording approaches, but these results seem promising.

KEYWORDS
Social desirability bias; forgiving language; stigma of mental illness; measurement development

Background

Social science and social service research frequently uses self-reports, for example, questionnaires or interviews, to measure educational, psychological, and other outcome variables. However, these data collection tools raise concerns about the presence of a response bias, defined as cognitive biases that influence the responses of participants away from an accurate or truthful response (Paulhus, 1991; Sedgwick, 2014). One type of response bias is “social desirability bias,” which is the tendency for people to present a favorable image of themselves on self-report measures. Social desirability bias is the distortion of one’s answers based on how motivated one is to present oneself in a socially prescribed positive manner (DeVellis, 2003). This bias confounds research results by creating false relationships or obscuring relationships between variables.

Indeed, social science research has long been concerned with attitudes, beliefs, and behaviors that are potentially objectionable, immoral, or illegal. These types of topics include racism, ableism, lying, cheating, and stealing, among others. Referred to as “sensitive topics,” surveys or questions seeking understanding or information about these taboo topics generally ask respondents to admit that they have violated social norms. Further characterized, sensitive topics may potentially pose a substantial threat to respondents (Jann, Jerke, & Krumpal, 2012; Lee & Renzetti, 1993). For example, asking about drug use behaviors or racist attitudes may pose the risk and/or cost of feelings of shame and embarrassment. Also, sensitive topics may elicit fears of negative consequences, like sanctions or job loss.


In addition, sensitive topics can include questions viewed by respondents to be intrusive, such as questions about income (Tourangeau & Yan, 2007). These questions are susceptible to systematic measurement error (Groves et al., 2009). When asked about sensitive topics, there is a chance that respondents to a survey will provide inaccurate responses, thus being influenced by social desirability bias. For example, when asked about a sensitive topic, like racism, a respondent who is concerned about appearing in a negative light may skew their responses in such a way as to appear to hold more socially acceptable attitudes about race.

Methodological Efforts to Limit Bias

In order to limit the potential influence of social desirability bias, social science researchers have attempted a variety of approaches in measurement development and data collection. These methods include increasing a respondent’s cognitive load, use of “bogus pipelines,” randomized response techniques, varying response option choices and their wording, providing context, and forgiving language strategies.

Cognitive Loading

Cognitive loading is a strategy used in social psychology that involves giving research participants a task to complete, such as memorizing and recalling a series of random numbers, while they respond to survey questions (Stodel, 2015). The idea is that if a respondent is using a portion of their cognitive capacity in task performance, they are less likely to give untruthful responses or engage in image-management, both of which require a substantial amount of cognitive energy. A study conducted by van’t Veer, Stel, and van Beest (2014) found that individuals occupied by performing a task, thus having a higher cognitive load, were less likely to be dishonest about the outcome of a dice roll. More specifically, it appears that increasing the cognitive load of respondents will lead to responses less influenced by social desirability bias. Similarly, other studies have found that by increasing a respondent’s cognitive load, responses were more truthful and less influenced by face-saving bias, indicating that it takes more cognitive energy to give dishonest responses to survey questions and task outcomes (e.g. Schulz, Fischbacher, Thoni, & Utikal, 2014; Valdesolo & DeSteno, 2008).

Bogus Pipeline

The bogus pipeline approach refers to any methodology that leads respondents to believe that their self-reported responses are linked to an objective procedure that would be able to reveal false responses (Akers, Massey, Clarke, & Lauer, 1983; Campanelli, Dielman, & Shope, 1987; Jones & Sigall, 1971; Roese & Jamieson, 1993). For example, when asking about drug use behaviors, respondents are led to believe that after taking a survey they will be subjected to a drug-test urinalysis. Historical applications of the method made use of large mechanical equipment, fashioned to look like a lie detector, to which interviewees were connected and their “truthfulness” measured. The results of the effectiveness of this approach are mixed. Some studies support the method’s effectiveness, such as the work of Aguinis, Pierce, and Quigley (1993), whose meta-analysis compared the use of the bogus pipeline with self-reports of smoking habits, finding significantly more reports of frequent smoking among bogus-pipeline participants. Additionally, Roese and Jamieson (1993) studied the bogus pipeline’s influence on self-report of socially undesirable attitudes reflecting racism and sexism and found that the bogus pipeline reduced socially desirable responses. Similarly, Alexander and Fisher (2003) found that participants responding to questions about sexual behavior gave less stereotypical responses in a bogus pipeline condition than those being interviewed in a traditional manner. On the other hand, Campanelli et al.’s (1987) investigation of the use of the bogus pipeline with college students’ use of alcohol found no significant influence on self-report.

Randomized Response Techniques

Another approach to data collection, with an eye toward decreasing the influence of social desirability bias, is the use of randomized response techniques (RRT), particularly in face-to-face interviews (Warner, 1965) and with especially sensitive topics (Lensvelt-Mulders, Hox, van der Heijden, & Maas, 2005).

Essentially, RRT allows the respondent to maintain privacy through the use of a randomizing device. For example, as described by Krumpal (2013), an interviewer concerned with a respondent’s drug use habits poses two (or more) statements to a participant, asking for their agreement with the following statements: “I sometimes smoke marijuana” versus “I never smoke marijuana.” Then, using a randomizing mechanism, like flipping a coin or rolling dice, the respondent determines which statement they will respond to. For instance, if heads is flipped, the respondent answers the first statement; flipping tails, the respondent answers the second. The researcher does not know which statement the participant responded to, thus increasing the likelihood that the respondent will give a truthful answer, knowing that they are not responding to a threatening question and that there are no potential consequences for admitting deviant behaviors. The prevalence of the socially undesirable behavior can thus be estimated using probability theory, shielding respondents from embarrassment but providing the data that researchers seek.
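As a brief sketch of that probability argument (following Warner’s 1965 design; the figures below are illustrative and are not drawn from the studies cited here), suppose the randomizing device directs each respondent to the sensitive statement with known probability p and to its negation otherwise. The observed proportion of “yes” answers, \lambda, then satisfies \lambda = p\pi + (1 - p)(1 - \pi), so the prevalence \pi of the sensitive attribute can be recovered as

    \hat{\pi} = \frac{\hat{\lambda} + p - 1}{2p - 1}, \quad p \neq 0.5.

For example, with p = 0.7 and 40% of respondents answering “yes,” \hat{\pi} = (0.40 + 0.70 - 1)/(1.4 - 1) = 0.25, even though no individual answer reveals which statement was addressed.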
Data as to this method’s efficacy in producing valid results are promising, with a great deal of support for eliciting reports of sensitive attitudes, beliefs, and behaviors, like unprofessional behavior (Buchman & Tracy, 1982), cheating (Scheers & Dayton, 1988; Franklin, 1989), stealing (i.e. Wimbush & Dalton, 1997), and using illegal drugs (i.e. Goodstadt & Gruson, 1975). On the other hand, some studies indicate no added benefit of RRT when compared with direct self-report measures or other techniques designed to assure anonymity (i.e. Akers et al., 1983; Begin & Boivin, 1980; Coutts & Jann, 2011; Duffy & Waterton, 1988; Franklin, 1989; Holbrook & Krosnick, 2010; Tamhane, 1981).

Response Wording

More specifically concerned with measurement development, social scientists have investigated response wording and its influence on social desirability. Essentially, survey item response options are altered or crafted in such a way that promotes more truthful responses. For example, in a study by Persson and Solevid (2014), the typical binary, yes or no response to voting behavior was altered to provide several face-saving options. With the addition of other options besides “no” when asked about voting and political activity participation, the researchers hypothesized they would have more self-reported non-voting behavior (a socially undesirable behavior). The results of their investigation were mixed: while the type of response option offered had no influence on the self-report of voting practice, it did have a significant influence on the report of political participation. In fact, the inclusion of face-saving response options led to lower report of political participation.

Context

Another method used in measurement development that has been used to sidestep social desirability bias is to situate items in a carefully crafted context (Krumpal, 2013). In practice, less intrusive, unoffending questions are asked first, related to the topic area of interest. The successive questions are more targeted toward the phenomenon of interest, increasing in sensitivity. This approach is thought to reduce a respondent’s feelings of defensiveness, reducing the focus on the sensitive questions that are truly of interest. For example, if a researcher is interested in respondents’ use of illegal substances, a carefully constructed context might first begin with asking about use of cigarettes, caffeine, or alcohol, then presenting questions about drug use.

Forgiving Language Strategy

Lastly, and evaluated in the present study, is the use of a forgiving language strategy in item wording to evade the influence of social desirability bias. A forgiving language strategy is an approach to item phrasing that “forgives” the attitude or behavior in question, encouraging respondents to answer more truthfully (Groves et al., 2009; Kundt, Misch, & Nerre, 2017; Sudman & Bradburn, 1982). Suggestions about how to “forgive” socially undesirable attitudes and beliefs include taking an “everybody-does-it approach” (Barton, 1958). Other forgiveness approaches include conveying the impression that the attitude or behavior is appreciated by those with authority, that the attitude or behavior must have been the result of a comprehensible reason, or even that the attitude or behavior is already assumed by the surveyor to be present or to have occurred (Sudman & Bradburn, 1982).

The effectiveness of using the forgiving approach to item wording in reducing social desirability bias remains largely unknown or inconsistent (Garcia-Yi & Grote, 2012; Holtgraves, Eck, & Lasky, 1997; Naher & Krumpal, 2012; Peter & Valkenburg, 2011; Tourangeau & Yan, 2007). Giving inconsistent support for the utility of forgiving language as an approach to wording sensitive items, Holtgraves, Eck, and Lasky (1997) conducted five surveys asking about respondents’ knowledge of current events and engagement in socially desirable (voting, for example) and undesirable behaviors (such as using illegal drugs). The authors posed items in one of two ways: first, straightforward (“Do you know …?”, “Have you ever …?”); and second, using forgiving language or face-saving support, giving reasons why the respondent may not have had knowledge of the current event, or engaged/not engaged in the undesirable/desirable behavior. For example, “Have you had the opportunity to familiarize yourself with [current event/policy]?”; “Some people find using drugs can help them relax. Have you ever used any illegal drugs?” (p. 1668). The results of analysis show that forgivingly worded items allowed more respondents to admit not having socially desirable knowledge, counteracting social desirability bias. However, forgiving language did not have an influence on reports of engaging in socially desirable behavior. Results about the influence of forgiving language on the report of socially undesirable behavior, like drug use, were inconsistent.

Similarly, Peter and Valkenburg’s (2011) study indicates that forgiving wording of survey items may increase the report of socially undesirable behaviors with some respondents. The influence of forgiving wording on responses was linked to respondents’ susceptibility to pressures to report socially desirable responses and to their developmental level, such as adolescents and emerging adults. Garcia-Yi and Grote’s (2012) survey of Peruvian farmers found that the use of forgiving language when interviewing about coca plant cultivation yielded responses within the normal range when asking about sensitive topics; that is, the farmers reported 35–40% of their actual coca cultivation. Alternatively, Naher and Krumpal (2012) did not observe any significant effects of question wording on a respondent’s willingness to report socially undesirable behavior. This echoes the findings of Catania et al. (1996), who did not find an influence of item wording on reports of sexual behavior. They did, however, find that the use of forgiving introductions decreased item non-response, as did Peter and Valkenburg (2011).

Provider-Based Stigma as a Sensitive Topic

The present study seeks to further the discussion on the use of forgiving wording as a means to circumvent the influence of social desirability bias. Specifically, this article makes use of data gathered as part of a measurement development project that made use of the forgiving wording strategy in item development. The measure was intended to ascertain the construct of mental health provider-based stigmatization of clients living with mental illness. This phenomenon can be understood as a sensitive topic, because mental health providers are unlikely to admit to endorsing attitudes and beliefs or engaging in behaviors that are unhelpful and disempowering to their clients. Prior to describing the study, the item wording variations, and findings regarding the effectiveness of forgiving language as a strategy to bypass social desirability bias, some background definitions are needed.

The stigma of mental illness has been referred to as a pressing issue that presents the “most formidable obstacle to future progress in the arena of mental illness and mental health” (U.S. Department of Health and Human Services, 1999, p. 3). Stigma is conceptualized as the co-occurrence of labeling of difference, stereotyping based on these differences, separation, status loss, and discrimination in the context of a power imbalance (Link & Phelan, 2001). The role of stigma in the lives of people living with mental illness is profound: if internalized, self-stigma can result in low self-esteem, low self-efficacy, and feelings of futility (Corrigan, Watson, & Barr, 2006). Moreover, stigma includes discrimination, where access to resources and opportunities, like housing and employment, is limited (Corrigan, Roe, & Tsang, 2011).

Provider-based stigma, defined as the negative attitudes, beliefs, and behaviors of mental health providers directed toward their clients, whether overt or subtle, is particularly toxic, as it can result in a fractured therapeutic alliance, limited access to supportive services and choices, and even avoidance of seeking help (Charles, 2013; Knaak, Mantler, & Szeto, 2017; Wang, Link, Corrigan, Davidson, & Flanagan, 2018).

Mental health providers are members of the general public and, therefore, subject to the same influences of public stigma as any other citizen, including the media. Subsequently, mental health providers can subscribe to and endorse the same stereotypes about persons with mental illness as the general public (Corrigan, Druss, & Perlick, 2014; Schulze, 2007). Frequently reported emotions of mental health providers regarding those living with mental illness include fear (Overton & Medina, 2008), dislike, and anger (Penn & Martin, 1998). In fact, research indicates that provider attitudes and beliefs are often no different, or even “more” pessimistic, than those of the general population (e.g. Björkman, Angelman, & Jönsson, 2008; Lauber, Anthony, Ajdacic-Gross, & Rossler, 2004; Lauber, Nordt, Braunschweig, & Rössler, 2006; Nordt, Rossler, & Lauber, 2005; Overton & Medina, 2008; Penn & Martin, 1998; Ross & Goldner, 2009). Few differences between provider and non-provider attitudes have been found in research utilizing implicit measures of stigma (Kopera et al., 2015).

A provider’s attitudes can directly influence their practice choices. In Rogers’ (1994, 1995) work on person-centered counseling, he argued that certain conditions must exist for a person to thrive, which include positive relationships. For many mental health service users, their social relationships, including those with mental health service providers, are the most important sources of support helping them cope with mental health challenges (Faulkner & Layzell, 2000). A person-centered therapeutic relationship, characterized by the qualities of acceptance, genuineness, and empathy, is unlikely to exist if a provider endorses stigmatizing attitudes, beliefs, and behaviors with respect to their client. Consequently, treatment outcomes are threatened (Goldsmith et al., 2015), making accurate assessment of these potentially stigmatizing provider attitudes and beliefs all the more critical.

Considering the detriment that social desirability bias has on the usefulness of surveys in measuring sensitive topics, including mental health provider-based stigmatization, and the inconclusive and inconsistent findings about the use of forgiving language as a strategic approach to mitigate this bias, further investigation of the method’s utility in instrument development is warranted. This study attempts to answer the following research questions: (1) Is there a relationship between a survey item’s wording approach and the influence of social desirability bias? (2) Is there a difference in response variance in items crafted in a traditional versus forgiving manner?

Methodology

To answer these research questions, the current investigation involves a secondary data analysis of data gathered as part of a measurement development study, reported on in detail elsewhere (Charles, 2015; Charles & Bentley, 2018). In light of the critical need to identify the potentially relationship-damaging provider-based stigma, the researchers constructed and psychometrically evaluated a self-assessment of stigma for use by providers of mental health services. The new measure reflected the client and caregiving-family experience of provider-based stigma; item development began using the five-themed experience-based model developed by Charles (2013). The five themes include: blame and shame; disinterest, annoyance, and/or irritation; degradation/dehumanization; poor prognosis/fostering dependence; and coercion and lack of “real” choice. With these themes serving as a guide, items were crafted using Nunnally and Bernstein’s (1994) domain sampling model, brainstorming to “theoretical saturation,” where items are generated until new content is no longer identified.

Noting that the use of self-report by providers is inherently a problematic approach because of the high probability of measurement error due to social desirability bias, the language of the developed survey’s items was purposefully varied.

More specifically, item development made use of two wording approaches: traditionally concrete wording and forgiving language. First, as in traditional measurement development, some of the items consisted of statements containing little room for equivocation, assuming all variance would be evident in the response. For example, one item reads “I frequently refer to clients by diagnoses they have, not their name.” Alternatively, other items used forgiving language, an approach described above, thereby encouraging more truthful responses (Groves et al., 2009; Sudman & Bradburn, 1982) and avoiding defensiveness. For example: “In the past, I have occasionally made reference to a client using a diagnostic label they have, instead of their name.”

Sample Description

Data were collected during one month between September and October 2014 from a purposive sample of current providers of adult mental health services from a range of professional and paraprofessional disciplines. With assistance from Virginia’s Department of Behavioral Health and Developmental Services, 40 public mental health service centers and eight in-patient facilities were contacted via email, inviting agency participation. Of the 48 agencies invited, 21 participated, for an agency participation rate of 43.75%. The total number of individuals to which the survey was made available is unknown. Respondents were provided with an incentive (entry into a drawing for a $50 Target gift card). Excluding ineligible and non-completing respondents, a sample of N = 253 was gathered; however, due to missing data, the final sample was N = 220. Comparing the demographic data of the gathered sample (N = 253) with that of the final sample (N = 220), there were no substantial differences noted.

Demographic results are presented in Table 1. Most respondents were female (n = 180, 81.8%) and just over half held a master’s degree (n = 116, 52.7%). Many respondents identified with the social work discipline (n = 64, 29.1%), the next most frequently identified being counseling (n = 54, 24.5%). Respondents indicated having been employed in the mental health field primarily for more than 21 y (n = 62, 28.2%), with the second most indicated time in the field being between 1 and 5 y (n = 58, 26.4%). Conversely, most respondents had been in their current role for between 1 and 5 y (n = 93, 42.3%). Most respondents were employed in outpatient services (n = 107, 48.6%). Many of the respondents were employed in the central Virginia region (n = 80, 36.4%), Southwest Virginia (n = 49, 22.3%), or Coastal and Tidewater Virginia (n = 42, 19.1%).

Instrument

As disseminated in the initial study for validation and reliability analysis, 62 items were tested for the Mental Health Provider Self-Assessment of Stigma Scale (MHPSASS). For the purposes of the current investigation’s secondary data analysis, the item pool was further refined to 30 items. These 15 pairs were similar, thematically targeted items, varying with respect to their wording approach: 15 items using a traditional approach to item development and 15 items using a forgiving language approach. These pairs are presented in Table 2, along with average score (range from 0–6) and standard deviation (SD).

In addition to these 15 item pairs reflecting the two language approaches, a brief measure of the influence of social desirability bias was also included in the survey package. The social desirability measure, based on the work of Crowne and Marlowe (1960), shortened and validated by Strahan and Gerbasi (1972), is a 10-item scale asking respondents to indicate whether a series of statements is “True or False.” Examples of these items include: “I’m always willing to admit it when I make a mistake” and “I like to gossip at times.” The general idea is that if respondents answer “true” to the first example and “false” to the second, they are being influenced by the desire to endorse socially acceptable responses. It is assumed that everyone fails to or refuses to admit their wrongs, at least sometimes, and that gossip is enjoyed by everyone, however rarely that might occur. A high score of 10 suggests a strong influence of social desirability bias on responses. Strahan and Gerbasi (1972) report the internal consistency of this scale to vary between Cronbach’s alpha = 0.59 and 0.70 for their initial sample.
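For reference, Cronbach’s alpha for a k-item scale summarizes how consistently the items measure a single construct; the formula below is the standard definition rather than a detail reported in the article, with k = 10 for the shortened scale used here:

    \alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_{\text{total}}^{2}}\right)

where \sigma_i^{2} is the variance of item i and \sigma_{\text{total}}^{2} is the variance of the total scale score. Values of 0.59–0.70 (and the 0.68 obtained in the present sample, reported below) sit at or just below the 0.70 level conventionally treated as acceptable.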

Table 1. Descriptive demographic data.


Variable Response N = 220 %
Gender Female 180 81.8
Male 40 18.2
Total 220 100
Education Less than bachelor’s 27 12.3
Bachelor’s degree 67 30.5
Master’s degree 116 52.7
Doctorate and/or M.D. 10 4.5
Total 220 100
Discipline Social work 64 29.1
Counseling 54 24.5
Human services 25 11.4
Nursing 24 10.9
Other 19 8.6
Psychology 16 7.3
Support Staff 7 3.2
Paraprofessional 5 2.3
Medicine 3 1.4
Marriage and family therapy 3 1.4
Total 220 100
Setting Outpatient services 107 48.6
Long-term inpatient services 41 18.6
Other 33 15.0
Crisis stabilization/acute care 23 10.5
Psychosocial clubhouse 9 4.1
Residential care 6 2.7
Missing 1 0.5
Total 220 100
Region Central Virginia 80 36.4
Southwest Virginia 49 22.3
Coastal and Tidewater Virginia 42 19.1
Northern Virginia 30 13.6
Northwest Virginia 9 4.1
South central Virginia 9 4.1
Missing 1 0.5
Total 220 100
Length of time employed in current role? Less than 1 y 30 13.6
1–5 y 93 42.3
6–10 y 50 22.7
11–15 y 18 8.2
16–20 y 12 5.5
More than 21 y 17 7.7
Total 220 100
Length of time employed in mental health services field? Less than 1 y 5 2.3
1–5 y 58 26.4
6–10 y 42 19.1
11–15 y 33 15.0
16–20 y 20 9.1
More than 21 y 62 28.2
Total 220 100

Hypotheses and Data Analysis Plan

For this study, using the responses to 15 thematically targeted item pairs, data analysis investigated the following hypotheses: (1) scores on items worded in a traditional manner are predicted to be inversely correlated with scores on the social desirability bias measure; (2) scores on items crafted using a forgiving language approach are hypothesized to not have a significant relationship with the social desirability bias measure, in either direction. The assumption is that items worded using forgiving language would evoke less social desirability concern from respondents and therefore not be characterized by a statistically significant relationship. These correlations were tested via computation of Cronbach’s alpha for both paired forgiving and traditionally worded items. In addition, a third hypothesis predicted that the variance in response to items crafted using forgiving language would be greater than the variance in those worded more traditionally. This was evaluated via computation of paired samples t-tests.
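A minimal sketch of how this analysis plan could be carried out is shown below; it is an illustration under assumed names, not the authors’ code. The file name and column labels (item_t1–item_t15 for traditionally worded items, item_f1–item_f15 for forgiving items, each scored 0–6, and sds for the 10-item social desirability total) are hypothetical placeholders.

    import pandas as pd
    from scipy import stats

    # Hypothetical data: one row per respondent, complete cases only.
    df = pd.read_csv("mhpsass_items.csv")

    # Composite scores: sum of 15 items per wording approach (possible range 0-90).
    traditional = df[[f"item_t{i}" for i in range(1, 16)]].sum(axis=1)
    forgiving = df[[f"item_f{i}" for i in range(1, 16)]].sum(axis=1)

    # Hypothesis 1: traditional composite inversely correlated with social desirability.
    r_trad, p_trad = stats.pearsonr(traditional, df["sds"])

    # Hypothesis 2: forgiving composite shows no significant correlation.
    r_forg, p_forg = stats.pearsonr(forgiving, df["sds"])

    # Hypothesis 3: paired-samples t-test comparing the two composites per respondent.
    t_value, p_value = stats.ttest_rel(forgiving, traditional)

    print(r_trad, p_trad, r_forg, p_forg, t_value, p_value)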

Table 2. Forgiving language and traditionally worded item pairs of the MHPSASS development.
Paired sample item number | Item stem | Forgiving language or traditional | Mean (N = 220) | SD
1 | Conflict between my client and their family members tends to initiate symptom relapses. | Forgiving | 3.70 | 1.59
2 | My client’s family members are often to blame when treatment goals aren’t achieved. | Traditional | 2.03 | 1.19
3 | If a client is relapsing with symptoms of mental illness, there is likely some part of their treatment plan they haven’t been following. | Forgiving | 2.62 | 1.25
4 | If my client is not recovering from a relapse, there is something that they aren’t doing. | Traditional | 2.17 | 1.19
5 | When a client isn’t trying hard enough in their recovery I may not go out of my way to help them. | Forgiving | 2.47 | 1.34
6 | If my client is not following their treatment plan, I do not return their phone calls. | Traditional | 0.18 | 0.53
7 | If my client’s family is over-involved in my client’s life, I am less likely to include them in treatment planning. | Forgiving | 1.85 | 1.17
8 | Since my client’s family is to blame for the relapse, I do not include them in status updates. | Traditional | 0.48 | 0.85
9 | It’s hard not to sometimes be irritated with clients who have serious mental illnesses. | Forgiving | 3.00 | 1.37
10 | When a client calls me too often, I get irritated with their neediness. | Traditional | 2.66 | 1.37
11 | When I am irritated with a client, I may be less helpful. | Forgiving | 2.63 | 1.41
12 | When I am irritated with my client’s neediness, I attempt to avoid them. | Traditional | 1.25 | 1.16
13 | Sometimes, I wish my client would hurry up when speaking with me. | Forgiving | 2.25 | 1.27
14 | I have difficulty staying awake in therapy sessions because I am not interested in what my client is saying. | Traditional | 0.27 | 0.59
15 | If my client isn’t taking the medication they are prescribed, it is most likely because they lack insight into their illness. | Forgiving | 2.78 | 1.38
16 | If a client doesn’t take prescribed medication, they lack insight into their illness. | Traditional | 2.45 | 1.23
17 | When my client is very symptomatic, I sometimes do not need to fully explain my actions to them. | Forgiving | 2.20 | 1.12
18 | When my client is very symptomatic, I don’t need to fully explain my actions to them. | Traditional | 1.96 | 1.06
19 | In the past, I have occasionally made reference to a client using a diagnostic label they have, instead of their name. | Forgiving | 2.25 | 1.41
20 | I frequently refer to clients by diagnoses they have, not their name. | Traditional | 0.58 | 0.93
21 | I generally do not believe clients with serious mental illness should terminate support services; they will likely need them in the future. | Forgiving | 3.24 | 1.56
22 | Clients with serious mental illness will always require intensive community support services. | Traditional | 2.44 | 1.39
23 | When a client says they have a goal that I think is unlikely they will achieve, I subtly discourage them from setting this goal, for their own good. | Forgiving | 2.21 | 1.04
24 | I often discourage clients with a serious mental illness from setting goals that are too “out of reach.” | Traditional | 1.53 | 1.34
25 | When families ask if their loved one will achieve common life goals, I may try to minimize expectations, so they aren’t disappointed. | Forgiving | 2.09 | 1.16
26 | When families ask if their loved one will achieve common life goals, I try to minimize expectations. | Traditional | 1.27 | 1.17
27 | In some instances it may be necessary to make decisions for my client, without their collaboration, for their own good. | Forgiving | 2.28 | 1.50
28 | Sometimes I make decisions for my client, for their own good. | Traditional | 1.90 | 1.31
29 | When considering options for housing, I try to highlight the options that I think they will benefit from. | Forgiving | 1.95 | 1.36
30 | When considering options for housing, I only let my client know about the options that I think they will benefit from. | Traditional | 1.19 | 1.40

Results

Testing the predicted relationships between the scores on the social desirability bias measure and scores on both traditional and forgiving language-worded items, the internal consistency of the social desirability measure was first inspected. Cronbach’s alpha of the Strahan and Gerbasi (1972) 10-item social desirability bias scale, for a sample of n = 209 (reduced from 220 due to missing data in the social desirability bias scale items), was computed at 0.68, nearing acceptable internal consistency, lending evidence to the utility of this shortened version of Crowne and Marlowe’s (1960) classic work. Next, Pearson’s r was computed to assess the relationship between score on items differing by wording approach and score on the social desirability bias scale.

A statistically significant inverse correlation was identified between respondents’ scores on traditionally worded items and their social desirability scale scores, Pearson’s r = -0.164 (p < 0.05), indicating that as a respondent’s score on the social desirability measure increased, reflecting a potentially stronger influence of social desirability on their responses, their scores on those items crafted in a traditional manner decreased, supporting the hypothesis. Additionally, the scores on items crafted using a forgiving language approach were not correlated with scores on social desirability bias in a statistically significant manner (r = 0.118, p > 0.05), again supporting the hypothesis regarding relationships between social desirability’s influence and scores on survey items.

Assessing the influence of wording approach on item variance, a paired-samples t test was performed comparing the mean scores on items crafted using a forgiving approach to wording with scores on items crafted in a traditional manner. This was found to be statistically significant at the alpha level of 0.05, t(219) = 36.962, p < 0.01. This indicates higher levels of agreement with stigma-measuring items crafted in a forgiving manner when compared to those crafted with traditional wording. Put another way, items using forgiving wording produce an increase in respondents’ willingness to agree with these sensitive items, in contrast to traditionally worded items. The strength of the relationship between wording strategy and score mean was measured as 0.86, as indexed by eta-squared. Tables 3 and 4 show descriptive statistics, by wording-approach grouping, as well as the paired samples t-test output.

Table 3. Descriptive statistics for wording approach groups and social desirability bias scale.
 | N | Mean | SD
Social desirability bias scale | 209 | 5.3636 | 2.25785
Traditional wording pairs | 220 | 22.3636 | 7.97052
Forgiving language pairs | 220 | 37.5091 | 8.18702
Valid N | 209 | |

Table 4. Paired samples t-test results.
Pair 1 (forgiving language items – traditional wording items), paired differences: Mean = 15.14, SD = 6.07764, 95% confidence interval of the difference [14.34, 15.95], t = 36.96; p < 0.001 (df = 219).
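The eta-squared value reported above can be reproduced directly from the t statistic and degrees of freedom in Table 4, using the standard conversion for a paired t-test (the conversion formula is a textbook result, not a detail reported by the authors):

    \eta^{2} = \frac{t^{2}}{t^{2} + df} = \frac{36.962^{2}}{36.962^{2} + 219} \approx \frac{1366.2}{1585.2} \approx 0.86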
Discussion and Implications for Future Research

When attempting to gather truthful information from survey respondents about topics that may be construed as “sensitive” or prone to efforts to conceal or distort reality, researchers have attempted various means in their creation of instruments. One such method, the use of a forgiving language strategy in item development, was employed in this investigation of mental health providers’ stigmatization of the clients they serve. This study compared the performance of survey items crafted in light of forgiving language with those items generated with more traditional wording. Results indicate that, for this sample, scores on items that were crafted in a forgiving manner were not significantly influenced by social desirability bias. In addition, items that were more traditionally worded were significantly correlated with the influence of social desirability bias. The higher a respondent scored on the measure of social desirability, the lower their scores on traditionally worded stigma items. This could indicate that the more sensitive a respondent is to social desirability bias, the more hesitant they are to indicate agreement with a survey item crafted using traditional wording.

To investigate the overall group differences between forgivingly worded and traditionally worded items, the average scores were compared; as a group, the items worded in a forgiving manner resulted in higher scores of agreement when compared to those written in more traditional language. The average scores on items crafted using forgiving language were higher than those worded in a more traditional way. This indicates that respondents perhaps feel freer to answer those questions more truthfully, indicating a higher level of agreement with the stigmatizing attitude or belief.

While the findings of this study support the usefulness of this language approach found in other investigations (Peter & Valkenburg, 2011), they are not consistent with others (Naher & Krumpal, 2012).

Additionally, there are limitations to the present study that should be noted, which bear on interpretation of the results. First, the sample cannot be assumed to be representative of all mental health providers. The sample included those employed in public mental health services in a southern state. The response rate of survey respondents is unknowable; the number of providers to whom the invitation was forwarded is unknown, so the ratio of responders could not be calculated. For these reasons, results of the analysis cannot be generalized to all providers of services. However, because the sample is fairly large (N = 209), the findings are worth noting when attempting to measure a sensitive topic. Additionally, the data were analyzed for correlations only, and assertions regarding the multivariate nature of the relationships among variables were not made. This study demonstrates an attempt to investigate the presence of a relationship between item wording approach and influence of social desirability bias. To address these limitations, future research might include larger samples of respondents, taking into consideration other factors that may influence responses on the outcome measure of interest. For example, in the case of provider-based stigma, future research could consider the multivariate relationship between item wording approach, provider educational background, and even other related constructs, such as professional burnout, that may influence a provider’s susceptibility to social desirability bias.

As an additional limitation, it is worth noting that the language variations between traditional and forgiving items are fairly nuanced. These items were varied in their language but targeted common, underlying themes; the manner in which the items tapped a theme may have differed slightly, beyond the variations in language. Thus, it is possible that while the differences in response could be due to the change in language, it may also be that the language fundamentally changed the item’s content. Future studies might consider a more systematic testing of the language approach, for instance testing items that target a common theme, in the same manner, with only slight variations in their wording, to more fully isolate the wording influence. Additionally, in the current investigation all respondents completed all the items, regardless of item wording approach. Therefore the wording of one theme’s item (traditional or forgiving) may have had an influence on another item. Even so, the impact of wording on thematically targeted items was statistically significant when considering traditionally worded items and the respondent’s social desirability bias score.

Lastly, the mechanism used to measure social desirability bias, Strahan and Gerbasi’s (1972) shortened version of Crowne and Marlowe’s (1960) scale, has exhibited minimally acceptable internal consistency (Cronbach’s alpha varying between 0.59 and 0.70), and in the current study the reliability statistic was similar (alpha = 0.68). Future studies may wish to use a measure of social desirability bias that has higher internal consistency. The shortened version was selected for the current study because of the lengthiness of the survey package, so as not to add too many extra items to an already arduous survey package.

Conclusions

Despite limitations, this study’s findings may have important implications for future research, particularly in developing measures designed to address sensitive topics, a frequent goal in social service research. The results of the current investigation indicate that a forgiving wording approach to crafting a survey’s items may be less influenced by perceived social desirability. If a survey item is worded in a manner that is permissive of the sensitive attitude, belief, or behavior, respondents may be less susceptible to answering in a manner that reflects social desirability bias. Further, as a group, forgivingly worded items may produce a wider breadth of responses, as evidenced by larger standard deviations and higher average scores. Certainly, more research is needed, including use of larger samples, explorations of potential multivariate relationships impacting the influence of social desirability bias on responses to sensitive items, systematic variation of the wording approaches, and measuring the influence of social desirability bias with a measure with higher metrics of internal consistency, but the results are promising.

Indeed, social desirability can be a significant threat to the internal validity of a measure, particularly when the topic of interest is a socially taboo subject, such as racism and sexism, but also when the attitudes, beliefs, and behaviors contradict the professional values of the respondents, as in the case of provider-based stigmatization.

Disclosure statement

No potential conflict of interest was reported by the authors.

ORCID

Jennifer L.K. Charles http://orcid.org/0000-0002-1698-760X
Patrick V. Dattalo http://orcid.org/0000-0002-6760-9035

References

Aguinis, H., Pierce, C. A., & Quigley, B. M. (1993). Conditions under which a bogus pipeline procedure enhances the validity of self-reported cigarette-smoking: A meta-analytic review. Journal of Applied Social Psychology, 23(5), 352–373. doi:10.1111/j.1559-1816.1993.tb01092.x
Akers, R. L., Massey, J., Clarke, W., & Lauer, R. M. (1983). Are self-reports of adolescent deviance valid? Biochemical measures, randomized response, and the bogus pipeline in smoking behavior. Social Forces, 62(1), 234–251. doi:10.2307/2578357
Alexander, M. G., & Fisher, T. D. (2003). Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality. The Journal of Sex Research, 40, 27–35. Retrieved from http://www.jstor.org/stable/3813768
Barton, A. H. (1958). Asking the embarrassing question. Public Opinion Quarterly, 22(1), 67–68. doi:10.1086/266761
Begin, G., & Boivin, M. (1980). Comparison of data gathered on sensitive questions via direct questioning, randomized response technique, and a projective method. Psychological Reports, 47(3), 743–750. doi:10.2466/pr0.1980.47.3.743
Björkman, T., Angelman, T., & Jönsson, M. (2008). Attitudes toward people with mental illness: A cross-sectional study among nursing staff in psychiatric and somatic care. Scandinavian Journal of Caring Sciences, 22(2), 170–177. doi:10.1111/j.1471-6712.2007.00509.x
Buchman, T. A., & Tracy, J. A. (1982). Obtaining responses to sensitive questions: Conventional questionnaire versus randomized response technique. Journal of Accounting Research, 20(1), 263–271. doi:10.2307/2490775
Campanelli, P. C., Dielman, T. E., & Shope, J. T. (1987). Validity of adolescents’ self-reports of alcohol use and misuse using a bogus pipeline procedure. Adolescence, 22(85), 7–22.
Catania, J. A., Binson, D., Canchola, J., Pollack, L. M., Hauck, W., & Coates, T. J. (1996). Effects of interviewer gender, interviewer choice, and item wording on responses to questions concerning sexual behavior. Public Opinion Quarterly, 60(3), 345–375. doi:10.1086/297758
Charles, J. L. K. (2013). Mental health provider-based stigma: Understanding the experience of clients and families. Social Work in Mental Health, 11(4), 360–375. doi:10.1080/15332985.2013.775998
Charles, J. L. K. (2015). Measuring mental health provider stigma: The development of a valid and reliable self-assessment instrument (Doctoral dissertation). Retrieved from https://scholarscompass.vcu.edu/etd/3706
Charles, J. L. K., & Bentley, K. J. (2018). Mental health provider stigma: The development and initial psychometric testing of a self-assessment instrument. Community Mental Health Journal, 54, 33–48. doi:10.1007/s10597-017-0137-4
Corrigan, P. W., Druss, B. G., & Perlick, D. A. (2014). The impact of mental illness stigma on seeking and participating in mental health care. Psychological Science in the Public Interest, 15(2), 37–70. doi:10.1177/1529100614531398
Corrigan, P. W., Roe, D., & Tsang, H. W. H. (2011). Challenging the stigma of mental illness: Lessons for therapists and advocates. West Sussex, UK: John Wiley & Sons, Ltd.
Corrigan, P. W., Watson, A. C., & Barr, L. (2006). The self-stigma of mental illness: Implications for self-esteem and self-efficacy. Journal of Social and Clinical Psychology, 25(8), 875–884. doi:10.1521/jscp.2006.25.8.875
Coutts, E., & Jann, B. (2011). Sensitive questions in online surveys: Experimental results for the randomized response technique (RRT) and the unmatched count technique. Sociological Methods & Research, 40, 169–193. doi:10.1177/0049124110390768
Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24(4), 349–354. doi:10.1037/h0047358
DeVellis, R. F. (2003). Scale development: Theory and application (2nd ed.). Thousand Oaks, CA: Sage Publications.
Duffy, J. C., & Waterton, J. J. (1988). Randomised response vs. direct questioning: Estimating the prevalence of alcohol-related problems in a field study. Australian Journal of Statistics, 30(1), 1–14. doi:10.1111/j.1467-842X.1988.tb00607.x
Faulkner, A., & Layzell, S. (2000). Strategies for living. London, UK: Mental Health Foundation.
Franklin, L. A. (1989). Randomized response sampling from dichotomous populations with continuous randomization. Survey Methodology, 15, 225–235. Retrieved from https://www150.statcan.gc.ca/n1/en/catalogue/12-001-X198900214566
Garcia-Yi, J., & Grote, U. (2012). Data collection: Experiences and lessons learned by asking sensitive questions in a remote coca growing region in Peru. Survey Methodology, 38(2), 131–141. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.464.3351&rep=rep1&type=pdf#page=33
Goldsmith, L. P., Lewis, S. P., Dunn, G., & Bentall, R. P. (2015). Psychological treatments for early psychosis can be beneficial or harmful, depending on the therapeutic alliance: An instrumental variable analysis. Psychological Medicine, 45(11), 2365–2373. doi:10.1017/S003329171500032X
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Hoboken, NJ: Wiley.
Goodstadt, M. S., & Gruson, V. (1975). The randomized response techniques: A test of drug use. Journal of the American Statistical Association, 70(352), 814–818. doi:10.1080/01621459.1975.10480307
Holbrook, A. L., & Krosnick, J. A. (2010). Measuring voter turnout by using the randomized response technique: Evidence calling into question the method’s validity. Public Opinion Quarterly, 74(2), 328–343. doi:10.1093/poq/nfq012
Holtgraves, T., Eck, J., & Lasky, B. (1997). Face management, question wording and social desirability. Journal of Applied Social Psychology, 27(18), 1650–1671. doi:10.1111/j.1559-1816.1997.tb01618.x
Jann, B., Jerke, J., & Krumpal, I. (2012). Asking sensitive questions using the crosswise model: An experimental survey measuring plagiarism. Public Opinion Quarterly, 76(1), 32–49. doi:10.1093/poq/nfr036
Jones, E. E., & Sigall, H. (1971). Bogus pipeline: New paradigm for measuring affect and attitude. Psychological Bulletin, 76(5), 349–354. doi:10.1037/h0031617
Knaak, S., Mantler, E., & Szeto, A. (2017). Mental illness-related stigma in healthcare: Barriers to access and care and evidence-based solutions. Healthcare Management Forum, 30(2), 111–116. doi:10.1177/0840470416679413
Kopera, M., Suszek, H., Bonar, E., Myszka, M., Gmaj, B., Ilgen, M., & Wojnar, M. (2015). Evaluating explicit and implicit stigma of mental illness in mental health professionals and medical students. Community Mental Health Journal, 51(5), 628–634. doi:10.1007/s10597-014-9796-6
Krumpal, I. (2013). Determinants of social desirability bias in sensitive surveys: A literature review. Quality & Quantity, 47(4), 2025–2047. doi:10.1007/s11135-011-9640-9
Kundt, T. C., Misch, F., & Nerre, B. (2017). Re-assessing the merits of measuring tax evasion through business surveys: An application of the crosswise model. International Tax and Public Finance, 24(1), 112–133. doi:10.1007/s10797-015-9373-0
Lauber, C., Anthony, M., Ajdacic-Gross, V., & Rossler, W. (2004). What about psychiatrists’ attitude to mentally ill people? European Psychiatry: The Journal of the Association of European Psychiatrists, 19(7), 423–427. doi:10.1016/j.eurpsy.2004.06.019
Lauber, C., Nordt, C., Braunschweig, C., & Rössler, W. (2006). Do mental health professionals stigmatize their patients? Acta Psychiatrica Scandinavica, 113(429), 51–59. doi:10.1111/j.1600-0447.2005.00718.x
Lee, R. M., & Renzetti, C. M. (1993). The problems of researching sensitive topics: An overview and introduction. In C. M. Renzetti & R. M. Lee (Eds.), Researching sensitive topics (pp. 3–13). London, England: Sage.
Lensvelt-Mulders, G. J. L. M., Hox, J. J., van der Heijden, P. G. M., & Maas, C. J. M. (2005). Meta-analysis of randomized response research: Thirty-five years of validation. Sociological Methods & Research, 33(3), 319–348. doi:10.1177/0049124104268664
Link, B. G., & Phelan, J. C. (2001). Conceptualizing stigma. Annual Review of Sociology, 27(1), 363–385. doi:10.1146/annurev.soc.27.1.363
Naher, A. F., & Krumpal, I. (2012). Asking sensitive questions: The impact of forgiving wording and question context on social desirability. Quality & Quantity, 46(5), 1601–1616. doi:10.1007/s11135-011-9469-2
Nordt, C., Rossler, W., & Lauber, C. (2005). Attitudes of mental health professionals toward people with schizophrenia and major depression. Schizophrenia Bulletin, 32(4), 709–714. doi:10.1093/schbul/sbj065
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. New York, NY: McGraw-Hill.
Overton, S. L., & Medina, S. L. (2008). The stigma of mental illness. Journal of Counseling & Development, 86(2), 143–151. doi:10.1002/j.1556-6678.2008.tb00491.x
Paulhus, D. L. (1991). Measurement and control of response bias. In J. P. Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of personality and social psychological attitudes (pp. 17–59). San Diego, CA: Academic Press.
Penn, D. L., & Martin, J. (1998). The stigma of severe mental illness: Some potential solutions for a recalcitrant problem. Psychiatric Quarterly, 69, 235–247. doi:10.1023/A:1022153327316
Persson, M., & Solevid, M. (2014). Measuring political participation: Testing social desirability bias in a web-survey experiment. International Journal of Public Opinion Research, 26(1), 98–112. doi:10.1093/ijpor/edt002
Peter, J., & Valkenburg, P. M. (2011). The impact of “forgiving” introductions on the reporting of sensitive behavior in surveys: The role of social desirability response style and developmental status. Public Opinion Quarterly, 75(4), 779–787. doi:10.1093/poq/nfr041
Roese, N. J., & Jamieson, D. W. (1993). Twenty years of bogus pipeline research: A critical review and meta-analysis. Psychological Bulletin, 114(2), 363–375. doi:10.1037/0033-2909.114.2.363
Rogers, C. R. (1994). Client-centered therapy. London: Constable.
Rogers, C. R. (1995). On becoming a person: A therapist’s view of psychotherapy. London: Constable.
Ross, C. A., & Goldner, E. M. (2009). Stigma, negative attitudes and discrimination towards mental illness within the nursing profession: A review of the literature. Journal of Psychiatric and Mental Health Nursing, 16(6), 558–567. doi:10.1111/j.1365-2850.2009.01399.x
Scheers, N. J., & Dayton, C. M. (1988). Covariate randomized response models. Journal of the American Statistical Association, 83(404), 969–974. doi:10.2307/2290121
Schulz, J., Fischbacher, U., Thoni, C., & Utikal, V. (2014). Affect and fairness: Dictator games under cognitive load. Journal of Economic Psychology, 41, 77–87. doi:10.1016/j.joep.2012.08.007
Schulze, B. (2007). Stigma and mental health professionals: A review of the evidence on an intricate relationship. International Review of Psychiatry, 19(2), 137–155. doi:10.1080/09540260701278929
Sedgwick, P. (2014). Non-response bias versus response bias. British Medical Journal, 348. Retrieved from http://www.bmj.com/content/348/bmj.g2573
Stodel, M. (2015). But what will people think?: Getting beyond social desirability bias by increasing cognitive load. International Journal of Market Research, 57(2), 313–321. doi:10.2501/IJMR-2015-024
Strahan, R., & Gerbasi, K. C. (1972). Short, homogeneous versions of the Marlowe-Crowne Social Desirability Scale. Journal of Clinical Psychology, 28(2), 191–193. doi:10.1002/1097-4679(197204)28:2<191::AID-JCLP2270280220>3.0.CO;2-G
Sudman, S., & Bradburn, N. M. (1982). Asking questions: A practical guide to questionnaire design. San Francisco, CA: Jossey-Bass.
Tamhane, A. C. (1981). Randomized response techniques for multiple sensitive attributes. Journal of the American Statistical Association, 76(376), 916–923. doi:10.1080/01621459.1981.10477741
Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859–883. doi:10.1037/0033-2909.133.5.859
U.S. Department of Health and Human Services (1999). Mental health: A report of the surgeon general. Rockville, MD: U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration, Center for Mental Health Services, National Institutes of Health, National Institute of Mental Health.
Valdesolo, P., & DeSteno, D. (2008). The duality of virtue: Deconstructing the moral hypocrite. Journal of Experimental Social Psychology, 44(5), 1334–1338. doi:10.1016/j.jesp.2008.03.010
van’t Veer, A. E., Stel, M., & van Beest, I. (2014). Limited capacity to lie: Cognitive load interferes with being dishonest. Judgment and Decision Making, 9(3), 199–206. doi:10.2139/ssrn.2351377
Wang, K., Link, B. G., Corrigan, P. W., Davidson, L., & Flanagan, E. (2018). Perceived provider stigma as a predictor of mental health service users’ internalized stigma and disempowerment. Psychiatry Research, 259, 526–531. doi:10.1016/j.psychres.2017.11.036
Warner, S. L. (1965). Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309), 63–69. doi:10.1080/01621459.1965.10480775
Wimbush, J. C., & Dalton, D. R. (1997). Base rate for employee theft: Convergence of multiple methods. Journal of Applied Psychology, 82(5), 756–763. doi:10.1037/0021-9010.82.5.756
