
APPROVED:

Michael M. Beyerlein, Major Professor


Joseph W. Huff, Committee Member
Linda L. Marshall, Committee Member and
Chair of the Department of Psychology
Sandra L. Terrell, Dean of the Robert B. Toulouse
School of Graduate Studies
THE FACET SATISFACTION SCALE: ENHANCING THE
MEASUREMENT OF JOB SATISFACTION
Terence Eng Siong Yeoh, B.Sc.
Thesis Prepared for the Degree of
MASTER OF SCIENCE
UNIVERSITY OF NORTH TEXAS

August 2007
Yeoh, Terence Eng Siong, The Facet Satisfaction Scale: Enhancing the measurement of
job satisfaction. Master of Science (Industrial and Organizational Psychology), August 2007, 74
pp., 32 tables, 7 illustrations, references, 106 titles.
Job satisfaction is an important job-related attitude that has been linked to various
outcomes for both the organization and its employees. In spite of this, researchers of the
construct disagree about how job satisfaction is defined and measured. This study proposes the
use of the Facet Satisfaction Scale, a new scale of measurement for job satisfaction that is based
on more recent definitions of the construct. Reliability and preliminary predictive validity studies
were conducted in order to determine the utility of this scale. Next steps in scale development are
discussed.












Copyright 2007
by
Terence Eng Siong Yeoh

TABLE OF CONTENTS
LIST OF TABLES
LIST OF ILLUSTRATIONS

Chapter

1. INTRODUCTION
     The Facet Satisfaction Scale: Enhancing the measurement of job satisfaction
     Defining job satisfaction
     Measuring job satisfaction
     Summary and hypotheses
2. METHODS
     Participants
     Procedure
     Measures
3. RESULTS
     Initial analysis of all items
     Item selection for the complete Facet Satisfaction Scale (FSS)
     Item selection for the shortened FSS
     Initial analysis of FSS predictive ability
4. DISCUSSION
     Creation of the 24-item six-facet FSS
     The six-item shortened FSS
     Facets as an incomplete measure of job satisfaction
     Limitations and next steps
     Conclusion
REFERENCES
LIST OF TABLES

1. Means, standard deviation, and percent of missing values for the initial FSS Pay subscale
2. Means, standard deviation, and percent of missing values for the initial FSS Promotion subscale
3. Means, standard deviation, and percent of missing values for the initial FSS Supervisor subscale
4. Means, standard deviation, and percent of missing values for the initial FSS Co-workers subscale
5. Means, standard deviation, and percent of missing values for the initial FSS Work subscale
6. Means, standard deviation, and percent of missing values for the initial FSS Benefits subscale
7. Means, standard deviation, and percent of missing values for the initial FSS Procedures subscale
8. Means, standard deviation, and percent of missing values for the initial FSS Physical Working Conditions subscale
9. Initial 8-factor promax rotation factor loadings for all 63 FSS items
10. Factor correlation matrix for the initial FSS eight-factor structure (all 63 FSS items)
11. Initial 6-factor promax rotation factor loadings for all 63 FSS items
12. Factor correlation matrix for the initial FSS six-factor structure (all 63 FSS items)
13. Cronbach's α values for the initial subscales of the FSS (all 63 items used)
14. Model fit indices
15. Cronbach's α values for the final 24-item six-factor complete FSS
16. Single-item reliability estimates for the shortened FSS (1 item measuring each facet)
17. Hierarchical regression analysis for intent-to-quit (comparing Faces and FSS)
18. Hierarchical regression analysis for OCBI (comparing Faces and FSS)
19. Hierarchical regression analysis for OCBO (comparing Faces and FSS)
20. Hierarchical regression analysis for IRB (comparing Faces and FSS)
21. Hierarchical regression analysis for intent-to-quit (comparing JDS and FSS)
22. Hierarchical regression analysis for OCBI (comparing JDS and FSS)
23. Hierarchical regression analysis for OCBO (comparing JDS and FSS)
24. Hierarchical regression analysis for IRB (comparing JDS and FSS)
25. Hierarchical regression analysis for intent-to-quit (comparing Job Evaluation and FSS)
26. Hierarchical regression analysis for OCBI (comparing Job Evaluation and FSS)
27. Hierarchical regression analysis for OCBO (comparing Job Evaluation and FSS)
28. Hierarchical regression analysis for IRB (comparing Job Evaluation and FSS)
29. Hierarchical regression analysis for intent-to-quit (comparing shortened and complete FSS)
30. Hierarchical regression analysis for OCBI (comparing shortened and complete FSS)
31. Hierarchical regression analysis for OCBO (comparing shortened and complete FSS)
32. Hierarchical regression analysis for IRB (comparing shortened and complete FSS)

LIST OF ILLUSTRATIONS

1. Scree plot for the maximum likelihood promax factor analysis of all 63 FSS items
2. Model 1 (63-item eight-facet lower-order null model)
3. Model 2 (63-item eight-facet higher-order null model)
4. Model 3 (32-item eight-facet hypothesized FSS)
5. Model 4 (63-item six-facet lower-order null model)
6. Model 5 (63-item six-facet higher-order null model)
7. Model 6 (24-item six-facet hypothesized FSS)

CHAPTER 1
INTRODUCTION
The Facet Satisfaction Scale: Enhancing the Measurement of Job Satisfaction
The Hawthorne studies (Roethlisberger & Dickson, 1939) at AT&T's Western Electric
Division provided early scientific indicators of the importance of the human factor and its impact
on organizationally relevant outcomes such as job performance. In the seven decades since then,
researchers have continued to study the impact of the individual in the workplace, placing special
attention on how various outcome measures are related to an individual's level of job satisfaction
(Brief & Weiss, 2002). By the mid-1970s, Locke (1976) provided further evidence of the
popularity of the job satisfaction construct when he estimated that over 7,000 studies had been published
examining this construct. More recently, Spector (1997) noted that the popularity of the construct
had not diminished, but instead continued to grow as over 12,400 studies analyzing job
satisfaction had been published before the turn of the century. Indeed, job satisfaction is not only
"the most studied variable in I/O psychology" (Spector, 2000, p. 196), but it is also the most
focal employee attitude from the perspective of research and practice (Saari & Judge, 2004).
The continued popularity of this construct is very likely linked to its relationship with
many important outcomes. Studies conducted over the years have clearly shown a relationship
between job satisfaction and organizationally-relevant outcomes such as on-the-job performance
(Judge, Thoresen, Bono, & Patton, 2001; Iaffaldano & Muchinsky, 1985; Herzberg, Mausner,
Peterson, & Capwell, 1957), organizational citizenship behavior (Wagner & Rush, 2000;
Bateman & Organ, 1983), absenteeism (Lambert, Edwards, Camp, & Saylor, 2005; Steers &
Rhodes, 1978), counter-productive work behaviors (Penney & Spector, 2005), and both
intention-to-quit (Campbell & Campbell, 2003) and actual turnover (Griffeth, Hom, & Gaertner,

2000; Lee, Mitchell, Holtom, McDaniel, & Hill, 1999). Other studies have also found significant
relationships between job satisfaction and an employee's psychological processes, such as the
level of organizational commitment (Sagie, 1998; Meyer, Allen, & Smith, 1993), motivation
(Hackman & Oldham, 1976), and job involvement (Freund, 2005). Finally, job satisfaction has
also been shown to be significantly related to an employee's life outside of the workplace, as
determined by measures of life satisfaction (McElwain, Korabik, & Rosin, 2005), work-to-
family and family-to-work conflict (Mesmer-Magnus & Viswesvaran, 2005), and the
manifestation of both physical and behavioral symptoms of stress (Siu, Spector, Cooper, & Lu,
2005). In addition, there are also indications that the relationship between job satisfaction and
these variables does not vary due to demographic factors such as age, gender, or race, once all
other variables (i.e. pay, education, tenure, etc.) are controlled for (Dipboye, Smith, & Howell,
1994; Witt & Nye, 1992).
While the research conducted thus far has revealed how important and popular job
satisfaction is as a research construct, there are nevertheless limitations that undermine its
effectiveness as a predictor of the various outcomes. As a result, the relationships that have been
found between job satisfaction and these outcomes are typically low. For example, a meta-
analysis of the relationship between job satisfaction and turnover found that the two constructs
correlated at -.19 (Griffeth, et al., 2000). A meta-analysis by Mesmer-Magnus and Viswesvaran
(2005) also found weak relationships between job satisfaction and both work-to-family and
family-to-work conflict (r reported at -.14 and -.18 respectively). Finally, a meta-analysis of the
job satisfaction-performance relationship found similarly low correlations, with the overall r =
.17 (Iaffaldano & Muchinsky, 1985). While a more recent review (Judge, et al., 2001) found
somewhat higher correlations (r = .30) between job satisfaction and performance, the relationship

was still described as only qualifying as a "moderate effect size" (p. 388). The generally low
correlations found between job satisfaction and these outcome variables raise questions about
the validity of the construct as well as its efficacy as a predictor in studies in the organizational
sciences (Huff, Tekell, & Yeoh, 2005).
As a result, several authors have proposed causes for these weak relationships, including
poor or inconsistent operational definition of the job satisfaction construct (Brief & Weiss, 2002)
and faulty measurement systems (Brief & Roberson, 1989). In a recent review, Brief and Weiss
(2002) noted that research performed in the 1990s has raised questions about the definitions and
measures of job satisfaction. This study will therefore analyze both of these issues in an attempt
to create an enhanced measure of the job satisfaction construct in order to improve our
understanding of the various job satisfaction-outcome variable relationships.
Defining Job Satisfaction
Early definitions of job satisfaction tended to focus on an employee's emotions and
feelings towards the job. Examples of this include the now classic definition in which job
satisfaction is defined as "a pleasurable or positive emotional state resulting from the appraisal of
one's job or job experiences" (Locke, 1976, p. 1300), and Smith, Kendall, and Hulin's "persistent
feelings toward discriminable aspects of the job situation" (1969, p. 37). This affect-based
definition of job satisfaction remains popular and continues to be used by researchers, some of
whom define job satisfaction as "an affective reaction to a job that results from the incumbent's
comparison of actual outcomes with those that are desired" (Cranny, Smith, & Stone, 1992, p. 1).
A recent meta-analysis by Connolly and Viswesvaran (2000) also provided support for the use of
affect in definitions of job satisfaction when they found that job satisfaction was correlated with
both positive and negative affectivity. While popular, defining job satisfaction as affect raises

difficulties when measuring the construct, namely that affective reactions are likely to be
"fleeting and episodic" (Hulin & Judge, 2003, p. 256).
Thus defined as an unstable construct, job satisfaction would be difficult, if not
impossible to accurately measure or use in predictive studies. Fortunately, pioneering research in
the 1980s into the role and impact of affect on job satisfaction served as a counter to the view
that job satisfaction was an unstable state variable. This began with the work of Watson and
Tellegen (1985), whose research on self-reported mood first led to the proposed separation of
affect into two subcomponents: positive and negative. Then, in a classic study, Staw and Ross
(1985) defined job satisfaction as the positive or negative affective disposition of an individual
towards his or her job (i.e. job satisfaction is predominantly based on personality). Consistent
with their hypotheses, these authors found significant stability of job satisfaction measures over
3- and 5-year time periods despite changes in the individual's occupation or employer (Staw &
Ross, 1985). A more recent study by Steel and Rentsch (1997) further provided support for the
stability of job satisfaction, this time over a 10-year period. The results were interpreted to mean
that dispositional affect is indeed a significant predictor or precursor to job satisfaction. When
coupled with findings that genetics have at least some impact on job satisfaction (Arvey,
Bouchard, Segal, & Abraham, 1989), these results indicate that job satisfaction is a stable, trait-
like construct instead of an unstable state variable, thus usable in studies attempting to predict
various organizationally-relevant outcomes. While there have been detractors of the dispositional
affect approach to job satisfaction (see Gerhart, 1987; 2005, for reviews), a recent review (Staw
& Cohen-Charash, 2005) found that this conceptualization is still popular today.
Instead of taking a dispositional or personality-based (i.e. affect-focused) approach, other
researchers have focused their measures of job satisfaction on judgment-based, cognitive

evaluations of jobs on characteristics or features of jobs and generally ignored affective
antecedents of evaluations of jobs and episodic events that happen on jobs (Hulin & Judge,
2003, p. 255). This line of thinking is not a new one, and authors have been insisting that job
satisfaction is primarily based on an individual's cognitions rather than affect at least since the
1980s (see for example, Organ & Near, 1985).
Rather than ignoring either the affective or cognitive aspect of job satisfaction, however,
Brief (1998, p. 86) described job satisfaction as "an internal state that is expressed by affectively
and/or cognitively evaluating an experienced job with some degree of favor or disfavor." This
definition can be seen as an attempt at reconciling both the affective and cognitive dimensions of
job satisfaction. This reconciliation of the affective-cognitive dimensions brings about a third
conceptualization of the construct: that of job satisfaction as an attitude.
Attitudes were defined early on as "a behavior pattern, anticipatory set or tendency,
predisposition to specific adjustment to designated situations, or more simply a conditioned
response to stimuli" (LaPiere, 1934, p. 230). Over time, however, attitude theorists began to
provide a tripartite definition of attitudes, with affective, cognitive, and behavioral elements (see
Franzoi, 2003 for a review). More recently, though, studies have shown that the behavioral
response may not always abide by the purported attitude, in terms of affect and cognition (see for
a review, Organ & Hamner, 1982). In response, a second school of thought, concerning the
multidimensional structure of attitudes, advocates a two-component model affective and
cognitive (Brief & Roberson, 1989, p. 718). In accordance with this conceptualization of
attitudes, job satisfaction is thus said to contain at least an affective and cognitive component
(see for examples Fisher, 2000; Crites, Fabrigar, & Petty, 1994; Millar & Tesser, 1986). As a
result, the behavioral component has been relegated to an outcome measure of the attitude itself

(see Franzoi, 2003). Applying this line of thinking to the construct of job satisfaction seems to be
consistent with the belief among researchers that an individuals job satisfaction level impacts
relevant organizational- and employee-related behavioral outcomes (see for examples, Siu, et al.,
2005; Judge, et al., 2001).
Thus, in these terms, job satisfaction can be operationalized as a relatively enduring
attitude shaped largely by social and interpersonal processes in the work environment (Dipboye,
et al., 1994). While not a new conceptualization (see for example the complex linkages proposed
by Hamner & Organ, 1978), defining job satisfaction as an attitude is advantageous as it allows
for the application of social psychological attitude methodology to analyze the construct (Brief,
1998; Organ & Near, 1985). Even more recently, researchers have further refined the attitudinal
definition of job satisfaction in order to include an evaluative element. Examples of this include
Motowidlo's (1996, p. 176) definition of job satisfaction as "judgments about the favorability of
the work environment" and Weiss's (2002, p. 6) "positive or negative evaluative judgment one
makes about one's job or job situation." A recent review of the major theoretical models of job
attitudes by Hulin and Judge (2003) also found a common theme across all models: a
comparator (i.e. an evaluative) component, which is used by the employee to express their level
of job satisfaction. Finally, a study by Huff and his colleagues (Huff, et al., 2005) also found that
including a measure of evaluation allowed for a better fitting model of job satisfaction beyond
simply affect, cognition, or affect and cognition. In addition to creating a better model of job
satisfaction, specifically defining the construct as an evaluation of the job instead of simply as an
attitude (with purely affective and cognitive components) is critical in that these two definitions
may bring about separate antecedents and indicators of the construct (see for a review, Brief &
Weiss, 2002; Crites, et al., 1994).

However, the addition of an evaluative component to job satisfaction (as opposed to
maintaining a strictly dispositional or affective-cognitive approach) again allows the potential for
instability of the construct. Picking up on this, mood researchers (see, for examples, Fisher,
2002; Weiss, Nicholas, & Daus, 1999) have found that job satisfaction is not entirely stable, but
fluctuates over timeframes even as short as hours within a day. While the construct is not entirely
predicated upon the situation as is proposed by the Social Information Processing model
(Salancik & Pfeffer, 1978), these findings do indicate that individuals take into account both
shorter-term situational cues as well as longer-term attitudes when asked to provide an account
of their job satisfaction.
Based on this review of the literature, an adequate operational definition of job
satisfaction must therefore take into account multiple factors, including the individual's
evaluation of their job and any salient situational and/or mood effects, which at first glance
would indicate that it is an unstable construct that is difficult if not impossible to measure
accurately. However, given that job attitudes are more salient and accessible to the individual
(Hulin & Judge, 2003) and that high accessibility leads to more consistent evaluations (via
automatic activation of the attitudinal evaluation) (Fazio, Powell, & Williams, 1989), it should
be possible to overcome the fluctuations caused by the situation and/or mood effects in order to
more accurately measure job satisfaction. Therefore, while "the overwhelming majority of
studies of job satisfaction adopt some form of affective definition" (Kuieck, 1980, p. 16), more
recent research supports defining the construct as an individual's evaluation of the job and job
situation (Weiss, 2002). Further evidence of this was presented in a recent review by Ajzen
(2001, p. 28), who noted that "there is general agreement that attitude represents a summary
evaluation of a psychological object."

Measuring Job Satisfaction
The difficulty in agreeing to a definition of job satisfaction is but one of the limitations
facing researchers of the construct. Creating adequate measures to assess job satisfaction is the
second major hurdle that must be overcome in order to increase our ability to refine the job
satisfaction outcome variable relationship. After all, a construct must be accurately defined
before it can be properly analyzed, since measurement and theory should go hand in hand
(Smith, et al., 1969, p. 1). With job satisfaction thus defined in evaluative terms, it becomes
possible to create a measure that assesses the construct in a manner corresponding to its
definition. However, several issues need to be addressed before creating a new measure of job
satisfaction, including the use of: (1) global versus facet measures, (2) single-item measures of
job satisfaction facets, and (3) semantic differential scales as the basis for item creation.
Global versus facet satisfaction
The decision to create either a global- or a facet-based measure of job satisfaction is an
important decision faced by researchers interested in job satisfaction scale creation. Fortunately,
prior research has provided us with several caveats and options to help in making this decision.
In a review of job satisfaction measures in the public domain, for example, Fields (2002)
discussed the three major approaches that have been taken by authors of measurement scales:
(1) global measures, (2) facet measures, and (3) a combination in some fashion of the previous
two.
When job satisfaction measures contain items that directly ask an individual about his or
her overall feelings about a job, the measure is said to be a global measure of the construct (see
Ironson, Smith, Brannick, Gibson, & Paul, 1989 for a review). Examples of these scales would
include the single-item Faces Scale (Kunin, 1955) or the three-item Overall Job Satisfaction

Scale (OJS: Cammann, Fichman, Jenkins, & Klesh, 1983), where employees are essentially
asked to sum up their affective reactions or evaluations about their job and respond on the
measure of the construct in general terms. Authors advocating the use of global measures of job
satisfaction sustain that such measures successfully reflect individual differences in the construct
rather than simply focusing on responses to specific items (see for a review, Fields, 2002). In
addition, global measures have also been found to include areas of job satisfaction not measured
by many facet measures, thus accounting for a greater proportion of the overall construct
(Scarpello & Campbell, 1983).
Proponents of facet measures of job satisfaction however, have criticized the use of
global measures on several bases, with a key issue being the complexity of the construct itself.
Since the early days of psychology, attitudes have been described as complex constructs
that cannot be described fully using any single numerical index (Thurstone, 1928). More recent
reviews continue to stress that job satisfaction is a multifaceted construct, with various features
or facets contributing to the construct as a whole (see for reviews, Howard & Frink, 1996;
Kuieck, 1980; Porter & Steers, 1973). Furthermore, changes in one particular facet do not
necessarily lead to changes in an individual's level of satisfaction in other facets (Smith, et al.,
1969). This is especially the case if each facet is designed to be "relatively homogenous and
discriminably different from the others" (Ironson et al., 1989, p. 193) in order to cover the
principal areas of the general construct. In this case, each facet can be used as a diagnostic tool to
gauge the areas in which an employee's satisfaction is adequate or in need of improvement
(Russell, Spitzmuller, Lin, Stanton, Smith, & Ironson, 2004). Finally, specific facet measures
have also been noted to better reflect changes in relevant situational factors because of the more
precise referent (Gerhart, 1987, p. 371).

However, when using facet measures, a difficulty arises in that there is little agreement as
far as what constitutes a significant facet. Take for example, the Job Descriptive Index (JDI:
Smith, et al., 1969), which has been touted as the most widely used measure of job satisfaction in
use today (Cranny, et al., 1992). With a total of 72 items, the JDI focuses on five facets work
on the present job, present pay, opportunities for promotion, supervision on present job, and
people on your present job. Nearly twenty years later, Hatfield, Robinson, and Huseman (1985)
created the Job Perception Scale (JPS) measuring essentially the same five facets. While popular,
a study by Buckley, Carraher, and Cote (1992) found that the five facets of the JDI contained
only 42.7% trait variance, with the remainder being method and random error variance. Even the
authors of the JDI itself admit that the five facets "do not specify completely the general
construct of job satisfaction" (Smith, et al., 1969, p. 30). Thus it seems that while these five
facets do contribute significantly to measures of job satisfaction, they are not the only facets of
critical importance, and other researchers have added various facets to the list, including benefits,
rewards, operating procedures, and communication (Job Satisfaction Survey, JSS: Spector,
1985), as well as company identification, physical work conditions, and career future (Index of
Organizational Reactions, IOR: Dunham & Smith, 1979). The list goes on, and some scales have
even been created to measure up to twenty facets (see for example, the Minnesota Satisfaction
Questionnaire by Weiss, Dawis, England, & Lofquist, 1967).
As another point of caution, some authors (see for example, Rice, Gentile, & McFarlin,
1991) have also noted that the importance an individual places upon each facet of his or her job
satisfaction has a significant moderating impact on measures of the construct. Specifically, these
authors propose that an individual's overall job satisfaction is composed of a summation of the
description of each facet multiplied by the importance of that particular facet to the individual
(Rice et al., 1991). If indeed this were the case, facet measures of job satisfaction would have to
include both descriptions of each facet and a measure or weight of how important the facet was
to the individual. These scores would then be multiplied and summated in order to obtain an
overall score of job satisfaction, thus further increasing the complexity of the measure.
Fortunately, other researchers have found that there is no increase in predictive ability when
using weighted versus unweighted job satisfaction measures (Jackson & Corr, 2002). These
authors believe that individuals do not process their levels of job satisfaction by multiplying each
facet description by its corresponding facet importance, but instead evaluate each facet in terms
of an overall have-want discrepancy (Jackson & Corr, 2002), thus simplifying measures of facet
satisfaction.
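Written out formally (the notation here is added for illustration and is not Rice et al.'s own), the contrast between the two approaches can be sketched as follows: letting $D_i$ be an individual's description (rating) of facet $i$ and $I_i$ the importance that individual attaches to it, the weighted model computes

$\text{Overall job satisfaction} = \sum_{i=1}^{k} D_i \times I_i$

whereas the unweighted alternative supported by Jackson and Corr (2002) simply sums (or averages) the facet descriptions, $\sum_{i=1}^{k} D_i$, on the grounds that each description already reflects the individual's overall have-want discrepancy for that facet.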
Finally, there are also researchers who call for the combination of both methods in order
to obtain an overall measure of job satisfaction based on combining the scores on the various
facets (Wright & Bonnett, 1992; Hackman & Oldham, 1974). A combination of the two methods
has been said to allow for measurement of job satisfaction in both context-specific and context-
free environments (Witt & Nye, 1992). In other words, the facet measures would allow for more
accurate measures of each sub-dimension of the construct while an overall measure allows for
comparison between individuals. Advocates for this combination approach have suggested two
major approaches to creating an overall job satisfaction score from facet measures: the factor
and composite models (Law & Wong, 1999). Essentially, the factor model proposes that an
underlying multidimensional construct can be measured as the overlap between its various
factors, while the composite model proposes that the underlying construct is the sum total of its
facets (see Law & Wong, 1999, for an overview of the models). Most job satisfaction researchers,
however, tend to opt for the simpler composite model, which applies either a linear summation or

averaging technique in order to combine the items and/or facets into an overall index or job
satisfaction score (see for examples Bruck, Allen, & Spector, 2002; Jackson, Potter, & Dale,
1998; Lawler, 1983; Scarpello & Campbell, 1983; Locke, 1969), both of which produce an
overall score that is significantly related to global job satisfaction and related measures.
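As a concrete illustration of the composite approach (a sketch only; the facet names and scores below are hypothetical rather than taken from the study data), a simple linear composite can be computed as follows:

```python
import pandas as pd

# Hypothetical facet-level satisfaction scores for three respondents, with each
# facet already averaged across its items on a 1-7 semantic differential scale.
facets = pd.DataFrame({
    "pay":        [4.2, 3.5, 5.0],
    "promotion":  [3.8, 4.0, 4.6],
    "supervisor": [5.1, 4.4, 5.5],
    "coworkers":  [5.0, 4.9, 5.3],
    "work":       [4.6, 4.1, 5.2],
    "benefits":   [3.9, 3.2, 4.8],
})

facet_cols = list(facets.columns)

# Composite model: overall job satisfaction as the unweighted mean (or sum)
# of the facet scores.
facets["overall_mean"] = facets[facet_cols].mean(axis=1)
facets["overall_sum"] = facets[facet_cols].sum(axis=1)

print(facets[["overall_mean", "overall_sum"]])
```

Either composite could then be correlated with a global measure such as the Faces Scale to check that it behaves like an overall job satisfaction score.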
Single-item facet measures
The second issue that should be addressed during scale creation is the number of items to
be used in the scale. A common theme in data analysis has been the reduction of a large number
of variables into fewer and more accurate items in order to provide a more parsimonious and
meaningful summary of the data while continuing to account for the intercorrelations that may
exist (Leung & Sachs, 2005). In recent years, job satisfaction researchers have also become
interested in the possibility of creating shorter scales that continue to adequately measure the
construct (see for example Russell et al., 2004). One method that is currently being examined by
researchers is the use of single-item measures to assess each facet of job satisfaction (see for
examples, Wanous & Hudy, 2001). The appeal of using this approach to create a measure of job
satisfaction is quite significant, since it requires less space, increases cost effectiveness, increases
face validity by reducing perceived redundancy of questions, and increases the ability to
measure changes in the construct (see Nagy, 2002, for a review). In addition, single-item
measures have been used successfully in measuring other constructs, including depressive mood
states (Killgore, 1999) and religious values (Gorsuch & McFarland, 1972), as well as general job
satisfaction (Kunin, 1955).
Detractors of this method have been vocal, however, noting that "practitioners and
researchers are warned to be wary of single-item measures" (Loo & Kells, 1998, p. 75). These
authors' arguments stem primarily from three notions: (1) that the internal reliability of single-

item measures cannot be estimated, (2) that single-item reliabilities would be unacceptably low
even if they could be measured, and (3) that single-item measures are insufficient when
measuring complex psychological constructs (see for reviews, Loo, 2002; Wanous & Hudy,
2001; Wanous, Reichers, & Hudy, 1997).
The first argument has been adequately addressed by Wanous and his colleagues (see for
example, Wanous & Reichers, 1996) via the use of two different methods: (1) correction for
attenuation, and (2) factor analysis communalities. The correction for attenuation formula has
been described by Nunnally and Bernstein (1994, p. 257) as:

$r'_{xy} = r_{xy} / \sqrt{r_{xx} \cdot r_{yy}}$     (Eq. 1)

where $r_{xy}$ = the observed correlation between variables x and y, $r_{xx}$ = the reliability of
variable x, $r_{yy}$ = the reliability of variable y, and $r'_{xy}$ = the estimated true correlation
between x and y had both variables been perfectly measured. While this formula is usually applied
in situations when x and y come from different domains, it has successfully been applied in the
current situation where both variables come from the same conceptual domain (or are differing
facets) of job satisfaction (Wanous & Hudy, 2001). In such situations, $r'_{xy}$ is expected to equal
1.0, leaving:

$r_{xy} = \sqrt{r_{xx} \cdot r_{yy}}$     (Eq. 2)

If we presume that x is a single-item facet scale and y is an alternate multi-item facet scale, the
equation can be solved to estimate $r_{xx}$ (the reliability of x) through algebraic manipulation such
that:

$r_{xx} = r_{xy}^{2} / r_{yy}$     (Eq. 3)

A second method previously used to estimate single-item reliability is based on factor
analysis communality scores (Weiss, 1976). In their research, Wanous and Hudy (2001, p. 363)

stated that the communality "can be considered a conservative estimate of single-item
reliability." Specifically, "the communality of any variable is less than or equal to the reliability
of the variable" (Harman, 1967, p. 19), thus it can be used as a lower bound for estimating the
reliability of a single-item facet measure.
Two other methods can also be used to provide estimates of the reliability of a single-
item scale. The first is coefficient alpha, which has been noted to be a basic estimate of reliability
and can be used to create an estimate for the reliability of tests constructed using the domain-
sampling model (Nunnally & Bernstein, 1994). When using single-item facet measures each
assessing different facets of job satisfaction, an overall scale alpha can be determined to estimate
overall scale-level reliability. In addition, Nagy (2002) has also used correlations between a
single-item facet measure and a corresponding multi-item facet measure as an estimate for
reliability of the single-item measure. In essence, this method correlates a single-item facet
measure against an existing multi-item scale (with an acceptable level of reliability that was
determined beforehand) measuring the same facet. If the correlation between the two measures is
high, the single-item scale can be said to exhibit adequate reliability. In other words, there are
actually four methods that can or have been used to estimate the reliability of a single-item facet
measure.
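As an illustration of the first of these methods, Eq. 3 can be applied directly; the numbers in the sketch below are invented for demonstration (a hypothetical single pay item correlated with a multi-item pay subscale) rather than taken from this study:

```python
def single_item_reliability(r_xy: float, r_yy: float) -> float:
    """Attenuation-based estimate of a single item's reliability (r_xx), given its
    observed correlation with a multi-item measure of the same facet (r_xy) and
    the reliability of that multi-item measure (r_yy), as in Eq. 3."""
    return (r_xy ** 2) / r_yy

# Hypothetical example: the single item correlates .75 with a multi-item
# subscale whose coefficient alpha is .90.
print(round(single_item_reliability(0.75, 0.90), 2))  # about 0.62
```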
The second argument against the use of single-item facet measures is the low levels of
reliability obtained, even when reliability can be measured in the first place (Loo, 2002). The
domain-sampling model posits that reliability decreases correspondingly as the number of items
measuring a single domain decreases, all else being equal, due to potentially higher measurement
error (Nunnally & Bernstein, 1994). In other words, the likelihood of accurately measuring a
construct is said to increase as more and more items are used in the measurement of that

construct. This is because any single item is typically viewed as an imperfect measure, and using
multiple measures allows for an increased probability of capturing the overall construct. Recent
research however, has shown that while there is a reduction in reliability among single-item facet
measures when compared to multi-item facet measures, Cronbach's α values still approach a
modest reliability level of .70 (Nagy, 2002). Other researchers have even reported estimated
single-item facet reliability for supervision to be as high as .80 (Loo & Kells, 1998).
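The domain-sampling prediction that reliability shrinks as items are dropped is conventionally quantified by the Spearman-Brown prophecy formula (not cited in the text above), where $r_{11}$ is the reliability of the original scale and $k$ is the factor by which its length is changed:

$r_{kk} = \dfrac{k \, r_{11}}{1 + (k - 1) \, r_{11}}$

For instance, shortening a four-item facet subscale with a reliability of .90 to a single item ($k = 1/4$) yields an expected reliability of about .69, which is in line with the near-.70 single-item values reported by Nagy (2002).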
Finally, detractors of single-item measures also claim that they are insufficient when
measuring complex psychological constructs (Loo & Kells, 1998). They are, however, appropriate
when measuring sufficiently narrow or unambiguous constructs (Sackett & Larson, 1990). The
review of job satisfaction presented thus far does indicate that the construct as a whole (i.e.
general job satisfaction) is complex (Dipboye, et al., 1994; Hamner & Organ, 1978). However,
facet measures focus on a more specific and homogenous domain compared to global job
satisfaction measures (Ironson et al., 1989), and thus may be less influenced by domain sampling
errors when reduced to single-item measures. As a result, the review conducted thus far seems to
indicate the potential for the use of single-item facet measures of job satisfaction.
Semantic differential scales
The third issue to be resolved focuses on the response scale that will be used for the new
measure of job satisfaction. This is especially critical since previous measures of job satisfaction
do not necessarily measure the construct in congruence with how it was defined. For example,
while typically defined as an attitude or feeling, the affective component of job satisfaction
seems to have been deemphasized by job satisfaction researchers (Hulin & Judge, 2003). In
addition, a study by Brief and Roberson (1989) compared three popular measures of job
satisfaction, namely the Job Descriptive Index (Smith, Kendall, & Hulin, 1969), the Minnesota
Satisfaction Questionnaire (Weiss, Dawis, England, & Lofquist, 1967), and the Faces Scale
(Kunin, 1955), and found that job cognitions were adequately captured by these job satisfaction
measures but affect was not. Specifically, among the three measures assessed only the Faces
Scale successfully measured both affect and cognition. Moorman (1993) also supported this
view, stating that job satisfaction measures differ in the extent to which they tap the affective or
cognitive components of job satisfaction.
Given that both the affective and cognitive components of job satisfaction have different
bases and predict different outcomes (Thoresen, Kaplan, Barsky, Warren, & de Chermont, 2003;
Crites, et al., 1994), and that the construct would be better assessed using differing measures (see
Moorman, 1993; Brief & Roberson, 1989), the one-sidedness of current measures is particularly
distressing. It can also be noted that the components of an attitude (i.e. affect and cognition) are
tied together and have implications for each other (Organ & Hamner, 1982), and thus failing to
adequately account for both components may be one of the reasons for the low correlations thus
far found between job satisfaction and the various outcome measures to which this construct is
supposedly related.
Since job satisfaction has been defined as an evaluation of the job and job situation for
the purposes of this study, it is imperative that the measurement scale used to tap this construct
be consistent with the definition (Smith et al., 1969). Fortunately, social psychological attitude
researchers provide us with methods with which to assess job satisfaction. The semantic
differential measurement system in particular, has long been used in social psychology to assess
social attitudes (see for a brief review, Yu, Albaum, & Swenson, 2003). In construing job
satisfaction as an evaluation of the job, we can easily borrow techniques such as this to create a
measurement scale that is appropriate to the current research construct (Huff, 2000).

The use of semantic differentials, however, is not without its own challenges. A review
by Crites and his colleagues (Crites, et al., 1994) for example, notes the problem of using
inappropriate scale end-points that focus specifically on one component of the overall attitude
(i.e. fear, anger, happiness, and disgust tapping only affective tone) instead of focusing on
evaluational tone (i.e. favorable-unfavorable or positive-negative). Others have also stressed
the importance of adequately balancing the attitude question stem in order to avoid biasing the
respondent in one way or another (see for a review, Shaeffer, Krosnick, Langer, & Merkle,
2005). As a result, researchers intent on using semantic differential scales in their studies should
proceed with caution to ensure that the question stem and response end-points are designed to
measure the appropriate tone (in this case evaluative).
Summary and Hypotheses
Job satisfaction is a construct that is related to various outcomes that are relevant to both
the organization and its employees. Unfortunately, research conducted thus far seems to indicate
weak relationships between job satisfaction and outcome variables. Researchers have proposed
two main culprits: inappropriate definitions of the construct and poor measurement scales. The
review of the literature conducted thus far indicates that the most appropriate definition for job
satisfaction is that of an evaluation of the job, job situation, or job facets.
A wide range of job satisfaction measures have been created to measure the construct, but
a large percentage of variance in job satisfaction is still unaccounted for. Job satisfaction
measurement scales have typically targeted both global job satisfaction and facets of the
construct. Global measures are said to reflect individual differences, while facet measures reflect
changes in the relevant sub-domain of the construct. Facet measures have also been successfully

combined to create a composite global measure. Studies of single-item facet measures also
indicate the potential use of this method in assessing job satisfaction.
The objective of this study is therefore three-fold. First, the study focuses on the creation
of the Facet Satisfaction Scale (FSS), a new facet-based measure of job satisfaction. The
semantic differential scale with evaluative end-points was chosen as the basis of the FSS in order
to create a scale that was consistent with current definitions of the construct. As a result, the FSS
is expected to exhibit good psychometric properties. Specifically, the FSS will demonstrate
strong evidence of scale reliability, possess good factor structure, and account for a significant
level of construct variance based on initial factor analytic data.
Hypothesis 1: The eight-facet model of the Facet Satisfaction Scale (FSS) will demonstrate
strong evidence of scale reliability and possess good factor structure based on initial factor
analytic data.
Secondly, in order to take advantage of the significant savings offered by single-item
scales, a shortened version of the FSS will also be created and will be composed of one item
assessing each factor. Since criticisms of single-item facet measures have focused primarily on
issues of item and scale reliability, the shortened version of the FSS will be assessed using four
estimates of single-item reliability to obtain measures of scale reliability.
Hypothesis 2: The shortened Facet Satisfaction Scale will demonstrate acceptable levels of facet
reliability as measured by four distinct estimates of single-item reliability.
Finally, preliminary analysis will be conducted to determine if both the complete and
shortened versions of the FSS will have significant predictive validity over outcome measures
typically associated with the job satisfaction construct. The outcome measures selected for this

analysis were intent to quit and job performance (both in-role and organizational citizenship
behaviors).
Hypothesis 3: Both the complete and shortened versions of the Facet Satisfaction Scale will
significantly predict intent to quit and job performance.


CHAPTER 2
METHODS
Participants
Study participants included University of North Texas undergraduate students working
full- or part-time. The only prerequisite for participation was that the individual had worked with
their current employer for a period of at least 30 days, at a rate of 15 hours a week or more. This
requirement was put in place to ensure that the participants had adequate time to form proper
attitudes towards their jobs and avoid initial instabilities of their job attitudes due to honeymoon
or hangover effects (see Boswell, Boudreau, & Tichy, 2005). No other demographic constraints
were placed on participants for eligibility. A pool of 820 (29.7% male, 70.3% female) student
participants who met these criteria was included in this study. The participants had an average
age of 21.1 years and worked an average of 25.1 hours a week. The average position tenure for
the sample was 13.9 months, while the average organization tenure was 16.4 months.
Procedure
Individuals interested in participating in this study were directed to the survey website for
additional information. Participants were then asked to read and agree to the informed consent
documentation before being allowed to proceed with the online survey. The participants were
required to complete the online survey measuring various aspects of their attitudes towards their
current job, as well as measures of demographic data. The survey took between 45 to 60 minutes
to complete. Students enrolled in psychology courses and who wished to receive extra course
credit for participation were asked to provide their names, university identification number, and
contact information after completing the survey. This information was used for record-keeping
purposes only (the information was kept separate from the survey materials, thus allowing for

complete anonymity of the participants). The study investigator then used the university extra
credit system or contacted the participant's instructor directly to provide the appropriate number
of extra credit points to each participant.
Measures
A brief description of the scales used in this survey is listed below.
Facet Satisfaction Scale.
The Facet Satisfaction Scale (FSS: Yeoh, 2006) was created in order to address the
problems associated with measuring job satisfaction. Envisioned as both a single-item and multi-
item facet measure, the initial scale contained eight items assessing each of the eight facets (for a total of 64
items). The facets examined by this scale are pay, promotion, supervisors, co-workers, the work,
benefits, procedures, and physical work conditions, all of which have been shown by research to
be significant job satisfaction facets (see for examples, Hatfield, et al., 1985, Spector, 1985,
Dunham & Smith, 1979, Smith, et al., 1969). Each item is assessed using a semantic differential
scale, and the item stem and scale endpoints are designed to elicit an individual's evaluations of
his/her job using wording similar to those found in the General Evaluation Scale (Crites,
Fabrigar, & Petty, 1994). Exploratory and confirmatory factor analysis methodology was used to
identify the items to be included in the final version of the scale.
Comparison measures.
Three comparison measures of job satisfaction will be used in initial validity studies to
compare the predictive validity of the FSS against established scales of job satisfaction. These
measures were three items from the Job Diagnostic Survey (JDS: Hackman & Oldham, 1974)
assessing job satisfaction (reliability scores ranging from .55 to .92 reported in Fields, 2002), the
four-item Job Evaluation measure by Crites, Fabrigar, and Petty (1994), and the Faces scale

(Kunin, 1955). The Faces scale is a single-item scale measuring global job satisfaction.
Participants are required to circle the face that corresponds best to their feelings about their job in
general. Internal consistency reliability was reported at .88 (Lau & Murnighan, 2005). A version
of the scale slightly altered by Huff (2002) to appear more androgynous will be used in this
study.
Outcome measures.
Outcome measures were used as a preliminary measure of criterion-related validity of the
scale. Participants were asked to rate their intention to quit and job performance. A single-item
question was created to measure the participant's intention to quit. This item used a seven-point
Likert response scale that asked participants to rate how often they thought of quitting their
current job. The scale ranged from "No intention at all" to "All the time." Participants were also
asked to rate themselves on in-role and organizational citizenship behaviors. This was done using
the Organizational Citizenship Behaviors measure developed by Williams and Anderson (1991),
which included subscales for citizenship behaviors directed at individuals (OCBI) and the
organization (OCBO) as well as in-role behaviors (i.e. on-the-job performance) (IRB). Seven
items measured each subscale using a five-point Likert response format ranging from "Strongly
disagree" to "Strongly agree." Coefficient alpha values for these subscales ranged from .61 to .96
(see Fields, 2002, for a brief review).
Demographic information.
Demographic information about the participants was collected at the end of the survey.
This included measures of participant age, gender, educational level, position and organizational
tenure, brief questions addressing the participants industry and the type of work done, hours
worked in a week, and salary range.

CHAPTER 3
RESULTS
Initial Analysis of All Items
The means, standard deviations, and percent of missing values for the FSS subscales are
presented by facet in Tables 1-8. Due to a clerical error during the creation of the online survey,
one item from the physical working conditions subscale (PWC5) was left out of the survey and
thus was unavailable for analysis. Missing values were not significant across the FSS items (less
than 5% missing values for all items except PROM2, which had 5.12% missing values). Missing
values were deleted listwise, leaving a total of 681 cases available for use in the initial factor
analysis.

Table 1
Means, Standard Deviation, and Percent of Missing Values for the Initial FSS Pay Subscale
PAY1 PAY2 PAY3 PAY4 PAY5 PAY6 PAY7 PAY8
M 3.84 4.27 4.28 4.4 4.03 4.18 3.96 4.25
SD 1.49 1.54 1.43 1.39 1.45 1.51 1.62 1.48
Missing (%) 0.85 4.27 1.22 3.05 1.83 2.20 4.39 3.41

Table 2
Means, Standard Deviation, and Percent of Missing Values for the Initial FSS Promotion Subscale
PROM1 PROM2 PROM3 PROM4 PROM5 PROM6 PROM7 PROM8
M 3.95 3.89 4.12 3.69 3.96 3.79 4 3.82
SD 1.48 1.52 1.37 1.55 1.51 1.55 1.50 1.46
Missing (%) 4.02 5.12 3.90 1.71 1.46 1.22 4.88 1.46

Table 3
Means, Standard Deviation, and Percent of Missing Values for the Initial FSS Supervisor Subscale
SUPE1 SUPE2 SUPE3 SUPE4 SUPE5 SUPE6 SUPE7 SUPE8
M 4.98 4.88 5 4.95 4.64 4.71 4.9 4.63
SD 1.32 1.31 1.15 1.19 1.50 1.42 1.32 1.42
Missing (%) 2.20 2.56 0.98 2.56 0.85 1.95 3.66 4.63

Table 4
Means, Standard Deviation, and Percent of Missing Values for the Initial FSS Co-workers
Subscale
COWO1 COWO2 COWO3 COWO4 COWO5 COWO6 COWO7 COWO8
M 4.9 5.35 4.5 4.94 5.06 5.12 4.98 5.18
SD 1.17 0.89 1.32 1.18 1.05 1.05 1.15 1.03
Missing (%) 4.63 3.66 0.98 4.15 4.39 3.17 4.76 4.39

Table 5
Means, Standard Deviation, and Percent of Missing Values for the Initial FSS Work Subscale
WORK1 WORK2 WORK3 WORK4 WORK5 WORK6 WORK7 WORK8
M 5.27 4.06 4.86 4.77 4.24 4.64 4.86 4.31
SD 0.91 1.38 1.07 1.27 1.57 1.31 1.34 1.40
Missing (%) 2.20 0.85 1.83 2.07 4.15 2.20 4.39 4.27

Table 6
Means, Standard Deviation, and Percent of Missing Values for the Initial FSS Benefits Subscale
BENE1 BENE2 BENE3 BENE4 BENE5 BENE6 BENE7 BENE8
M 4.1 3.67 3.77 3.73 4.08 3.81 3.82 3.93
SD 1.50 1.63 1.57 1.56 1.53 1.57 1.69 1.55
Missing (%) 3.90 1.71 4.76 2.93 3.78 2.93 1.10 2.56

Table 7
Means, Standard Deviation, and Percent of Missing Values for the Initial FSS Procedures
Subscale
PROC1 PROC2 PROC3 PROC4 PROC5 PROC6 PROC7 PROC8
M 4.38 4.87 4.47 4.28 4.53 4.53 4.19 4.52
SD 1.33 1.06 1.24 1.24 1.21 1.18 1.22 1.32
Missing (%) 3.17 1.34 0.85 4.88 2.56 2.32 0.98 3.54

Table 8
Means, Standard Deviation, and Percent of Missing Values for the Initial FSS Physical Working
Conditions Subscale
PWC1 PWC2 PWC3 PWC4 PWC5^a PWC6 PWC7 PWC8
M 4.21 4.89 5.04 4.59 N/A 4.8 4.47 4.56
SD 1.27 1.16 1.19 1.25 N/A 1.15 1.31 1.25
Missing (%) 3.54 3.29 3.41 3.05 N/A 2.20 4.51 3.78
^a Item PWC5 was omitted from the original survey due to clerical error and is not available for analysis.

In order to account for the potential intercorrelations among the different facets of job
satisfaction, an initial maximum likelihood promax rotation factor analysis using SPSS (v.14)
FACTOR was conducted on the remaining 63 items of the FSS data. The promax rotation was
chosen since it provided the simplest factor structure and allowed for intercorrelations between
the factors. The factor analysis revealed an eight-factor structure for the data based on the
eigenvalues-greater-than-1.0 criterion. This eight-factor structure accounted for 73.74% of the total
variance explained. Analysis of the factor analysis pattern matrix indicated significant factor
loadings for the pay, promotion, supervisor, coworkers, work, and benefits subscales (all
loadings above .3), whereby each of these subscales loaded onto six separate factor headings.

The items in the procedures subscale evidenced a significant cross-loading (PROC4) across two
factors (the work and an eighth factor heading), while the physical working conditions items
either cross-loaded onto two factor headings (PWC1, PWC2, PWC6, and PWC7) or loaded on an
alternate factor heading (PWC3). Initial factor loadings for the eight-factor maximum likelihood
promax rotation can be found in Table 9. In addition, Table 10 shows the factor correlation
matrix for the initial eight-factor 63-item FSS.
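The exploratory analyses above were run in SPSS FACTOR; a rough open-source sketch of the same steps (assuming the third-party factor_analyzer Python package and a hypothetical fss_items.csv file holding the 63 retained items, one respondent per row) might look like this:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package, assumed installed

# Load the 63 retained FSS items and drop incomplete cases (listwise deletion,
# as described above). The file name is hypothetical.
fss_items = pd.read_csv("fss_items.csv").dropna()

# Maximum likelihood extraction with an oblique (promax) rotation, mirroring
# the SPSS settings reported in the text.
efa = FactorAnalyzer(n_factors=8, method="ml", rotation="promax")
efa.fit(fss_items)

# Kaiser criterion: count eigenvalues greater than 1.0.
eigenvalues, _ = efa.get_eigenvalues()
print("Factors with eigenvalue > 1.0:", int((eigenvalues > 1.0).sum()))

# Pattern loadings, masking values below .30 as in Tables 9 and 11.
loadings = pd.DataFrame(efa.loadings_, index=fss_items.columns)
print(loadings.where(loadings.abs() > 0.30).round(3))
```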
An analysis of the scree plot (see Figure 1) and of the pattern matrix indicated the
possibility that the data would fit a six-factor structure. As a result, a six-factor maximum
likelihood promax rotation factor analysis was also conducted on the research data. The six-
factor structure accounted for 70.02% of total variance explained. The items for the pay,
promotion, supervisor, coworkers, and benefits subscales loaded significantly (all loadings above
.3) onto five separate factor headings. The items for the work, procedures, and physical working
conditions loaded significantly onto a sixth factor (all loadings above .3), and did not show any
significant cross-loadings unlike the eight-factor structure (see Table 11). The factor correlation
matrix for the six-factor structure is described in Table 12.
Internal consistency reliability analyses using Cronbach's α were conducted on each of
the eight expected subscales in the initial model using all 63 FSS items (see Table 13). The
results ranged from .91 for the procedures subscale, to .97 for the benefits subscale, indicating
high levels of internal consistency among the items within each subscale. In addition, taking into
account findings from the six-factor factor analysis indicating that the work, procedures, and
physical working conditions subscales loaded onto the same factor, a Cronbach's α analysis was
conducted to assess internal consistency of these three subscales combined into a work-related
factor (see Table 13). Results showed a Cronbach's α value of .96 for the combined subscale. The

Table 9
Initial 8-Factor Promax Rotation Factor Loadings for All 63 FSS Items^a

Factor

1 2 3 4 5 6 7 8
PAY1 0.939
PAY2 0.813
PAY3 0.826
PAY4 0.630
PAY5 0.952
PAY6 0.867
PAY7 0.910
PAY8 0.908
PROM1 0.887
PROM2 0.766
PROM3 0.776
PROM4 0.856
PROM5 0.789
PROM6 0.713
PROM7 0.887
PROM8 0.706
SUPE1 0.870
SUPE2 0.949
SUPE3 0.743
SUPE4 0.825
SUPE5 0.741
SUPE6 0.881
SUPE7 0.979
SUPE8 0.824
COWO1 0.922
COWO2 0.667
COWO3 0.571
COWO4 0.945
COWO5 0.856
COWO6 0.877
COWO7 0.945
COWO8 0.903
WORK1 0.629
WORK2 0.725
WORK3 0.826
WORK4 1.015
WORK5 0.914
WORK6 0.866
WORK7 0.570
WORK8 0.941
BENE1 0.830
BENE2 0.939
BENE3 0.825
BENE4 0.900
BENE5 0.916
BENE6 0.930
BENE7 0.735
BENE8 0.934
PROC1 0.876
PROC2 0.557
PROC3 0.508
PROC4 0.640 0.369
PROC5 0.755
PROC6 0.428
PROC7 0.517
PROC8 0.481
PWC1 0.500 0.366
PWC2 0.423 0.641
PWC3 0.471
PWC4 0.482
PWC6 0.429 0.544
PWC7 0.619 0.308
PWC8 0.479
^a Factor loadings less than .300 suppressed.


Table 10
Factor Correlation Matrix for the Initial FSS 8-Factor Structure (All 63 FSS Items)
Factor M SD 1 2 3 4 5 6 7

1. Pay 4.15 1.32
2. Promotion 3.90 1.27 .58**
3. Supervisor 4.84 1.16 .36** .44**
4. Co-workers 4.99 0.97 .31** .35** .49**
5. Benefits 3.85 1.42 .54** .60** .33** .31**
6. Work 4.62 1.04 .51** .50** .56** .55** .39**
7. Procedures 4.47 0.98 .54** .57** .60** .52** .45** .82**
8. Physical working conditions 4.65 1.01 .48** .50** .56** .53** .42** .77** .81**

**p < .001



Figure 1. Scree plot for the maximum likelihood promax factor analysis of all 63 FSS items.

Table 11
Initial 6-Factor Promax Rotation Factor Loadings for All 63 FSS Items^a

Factor

1 2 3 4 5 6
FSSPAY1 0.939
FSSPAY2 0.808
FSSPAY3 0.825
FSSPAY4 0.63
FSSPAY5 0.956
FSSPAY6 0.866
FSSPAY7 0.913
FSSPAY8 0.9
FSSPROM1 0.885
FSSPROM2 0.771
FSSPROM3 0.78
FSSPROM4 0.871
FSSPROM5 0.784
FSSPROM6 0.726
FSSPROM7 0.895
FSSPROM8 0.721
FSSSUPE1 0.874
FSSSUPE2 0.95
FSSSUPE3 0.738
FSSSUPE4 0.824
FSSSUPE5 0.752
FSSSUPE6 0.886
FSSSUPE7 0.969
FSSSUPE8 0.833
FSSCOWO1 0.913
FSSCOWO2 0.666
FSSCOWO3 0.566
FSSCOWO4 0.932
FSSCOWO5 0.842
FSSCOWO6 0.871
FSSCOWO7 0.93
FSSCOWO8 0.895
FSSWORK1 0.637
FSSWORK2 0.671
FSSWORK3 0.803
FSSWORK4 0.889
FSSWORK5 0.765
FSSWORK6 0.803
FSSWORK7 0.549
FSSWORK8 0.881
FSSBENE1 0.824
FSSBENE2 0.941
FSSBENE3 0.825
FSSBENE4 0.905
FSSBENE5 0.915
FSSBENE6 0.928
FSSBENE7 0.741
FSSBENE8 0.934
FSSPROC1 0.872
FSSPROC2 0.657
FSSPROC3 0.569
FSSPROC4 0.714
FSSPROC5 0.811
FSSPROC6 0.568
FSSPROC7 0.569
FSSPROC8 0.581
FSSPWC1 0.702
FSSPWC2 0.791
FSSPWC3 0.474
FSSPWC4 0.685
FSSPWC6 0.744
FSSPWC7 0.863
FSSPWC8 0.72
ᵃ Factor loadings less than .300 suppressed.


Table 12
Factor Correlation Matrix for the Initial FSS 6-Factor Structure (All 63 FSS Items)
Factor M SD 1 2 3 4 5

1. Pay 4.15 1.32
2. Promotion 3.90 1.27 .58**
3. Supervisor 4.84 1.16 .36** .44**
4. Co-workers 4.99 0.97 .31** .35** .49**
5. Benefits 3.85 1.42 .54** .60** .33** .31**
6. Work-related 4.57 0.94 .55** .56** .61** .57** .45**

**p < .001

Table 13
Cronbach's α Values for the Initial Subscales of the FSS (All 63 Items Used)
Factor                           Cronbach's α    Range of item-total correlations
Pay                              .96             .68 - .92
Promotion                        .94             .76 - .87
Supervisor                       .96             .69 - .91
Coworkers                        .95             .63 - .89
Work                             .92             .58 - .83
Benefits                         .97             .79 - .91
Procedures                       .91             .63 - .79
Physical working conditions      .92             .60 - .81
Work-relatedᵃ                    .96             .53 - .83
ᵃ A combination of the work, procedures, and physical working conditions subscales.

The high Cronbach's α value for the combined subscale, together with the high intercorrelations between the work, procedures, and physical working conditions factors previously displayed in Table 10, provides evidence that the three subscales may indeed be measuring the same facet, suggesting that a six-factor model may be a good fit to the FSS data.
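As a replication aid, a subscale's Cronbach's α and its item-total correlations can be obtained in R with the psych package; the data frame and item names below are assumptions for the sketch (the reliabilities reported here were computed in SPSS).

    # Minimal sketch, assuming `fss_items` holds the FSS item responses;
    # column names such as PAY1-PAY8 are illustrative assumptions.
    library(psych)

    pay_items <- paste0("PAY", 1:8)
    alpha_pay <- psych::alpha(fss_items[, pay_items])

    alpha_pay$total$raw_alpha        # Cronbach's alpha for the Pay subscale
    alpha_pay$item.stats$r.drop      # corrected item-total correlations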
Item Selection for the Complete Facet Satisfaction Scale (FSS)
The next step in the data analysis was to select items that would be used in the final
version of the Facet Satisfaction Scale (FSS). The decision was made to create a scale composed of four items measuring each facet in order to ensure adequate factor structure (see for a brief
review, Acito & Anderson, 1980) and to maintain consistency with existing job satisfaction
scales such as the Job Satisfaction Survey (Spector, 1985) and Job Perception Scale (Hatfield,
Robinson, & Huseman, 1985). The selection criterion was based on Fabrigar, Wegener, MacCallum, and Strahan's (1999) recommendation, whereby the items with the highest
reliability index (in this case, the items with the highest factor loadings) were selected as the
items of choice to make up a scale. As a result, thirty-two items were selected to make up the
eight-facet FSS. In addition, an alternate 24-item six-facet FSS was also created for model
testing (based on the possible significance of a six-factor model hinted at by initial data analysis)
to determine the best factor structure for the final version of the FSS. These two models were
compared against each other and their respective null models using R (v. 2.4.1) confirmatory
factor analysis (CFA). This resulted in the following six comparison models (see Figures 2-7):
1. 8-factor model without a higher-order job satisfaction factor before item deletion
2. 8-factor model with a higher-order job satisfaction factor before item deletion
3. 8-factor model with a higher-order job satisfaction factor after item deletion
4. 6-factor model without a higher-order job satisfaction factor before item deletion
5. 6-factor model with a higher-order job satisfaction factor before item deletion
6. 6-factor model with a higher-order job satisfaction factor after item deletion
Specifically, the original hypothesized 8-factor model (Model 3) consisted of four items
measuring each of the eight facets (pay, promotion, supervisor, co-workers, work, benefits,
procedures, and physical working conditions). This model was compared against two null models
(Models 1 and 2). The first null model (Model 1) was specified without a higher-order job
satisfaction factor, essentially allowing the eight facets to correlate with each other due to
chance. The second null model (Model 2) specified a higher-order job satisfaction factor onto the
factors, using this higher-order factor to account for the intercorrelations between the eight
facets. If indeed the eight factors were facets measuring various aspects of job satisfaction, the
second null model (Model 2) should be a better fitting model than the null model that did not
Figure 2. Model 1 (63-item 8-facet lower-order null model).


Figure 3. Model 2 (63-item 8-facet higher-order null model).





Figure 4. Model 3 (32-item 8-facet hypothesized FSS).
Figure 5. Model 4 (63-item six-facet lower-order null model).


Figure 6. Model 5 (63-item 6-facet higher-order null model).


Figure 7. Model 6 (24-item 6-facet hypothesized FSS).



specify a higher-order job satisfaction factor (Model 1). Furthermore, the hypothesized eight-
factor (32-item) FSS model (Model 3) should demonstrate better fit indices than both Models 1
and 2 since the items with the lowest factor loadings were removed to create the final scale. In
addition, since initial analysis of the FSS data hinted at a possible six-factor structure, a second
set of models was also compared. Like the original hypothesized eight-factor model, this six-
factor model (Model 6) also consisted of four items measuring each of the six facets (pay,
promotion, supervisor, co-workers, work-related, and benefits). This six-factor model was
compared to two other null models (Models 4 and 5) using the same logic.
The model fit indices for these model comparisons are described in Table 14. Model fit
was assessed using the goodness-of-fit index (GFI), adjusted GFI (AGFI), root mean square error
of approximation (RMSEA), Bentler-Bonnett normed fit index (NFI), Tucker-Lewis non-normed
fit index (NNFI), and the Bentler comparative fit index (CFI). It must be noted here that although
the chi-square model fit statistic was reported in this study, it was not used to determine model
fit. This was due to the fact that the chi-square method of assessing model fit tends to be
significant regardless of actual goodness of model fit when dealing with large sample sizes (see
for a review, Kline, 2005), as was the case in this study.
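As a sketch of how such a model might be specified and its fit indices obtained, the higher-order six-factor model (Model 6) could be written as follows. The lavaan package is assumed here as a modern substitute (the original CFAs were run in R 2.4.1, which predates lavaan), and the item sets for facets other than work-related are inferred from the highest loadings in Table 11 rather than taken from the original scale documentation.

    # Minimal sketch of the higher-order six-factor CFA (Model 6). The work-related item set
    # (Work4, Work8, Proc1, PWC7) is reported in Table 15; the other item sets are assumptions
    # based on the highest loadings in Table 11.
    library(lavaan)

    model6 <- '
      pay        =~ PAY1  + PAY5  + PAY7  + PAY8
      promotion  =~ PROM1 + PROM4 + PROM5 + PROM7
      supervisor =~ SUPE1 + SUPE2 + SUPE6 + SUPE7
      coworkers  =~ COWO1 + COWO4 + COWO7 + COWO8
      workrel    =~ WORK4 + WORK8 + PROC1 + PWC7
      benefits   =~ BENE2 + BENE5 + BENE6 + BENE8
      jobsat     =~ pay + promotion + supervisor + coworkers + workrel + benefits
    '
    fit6 <- cfa(model6, data = fss_items)
    fitMeasures(fit6, c("chisq", "df", "gfi", "agfi", "rmsea", "nfi", "nnfi", "cfi"))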
Conventional model fit thresholds were used, whereby moderate and good fit were assumed given NFI, NNFI, CFI, GFI, and adjusted GFI values above .90 and .95 respectively, and RMSEA values below .08 and .06 respectively (Beauducel & Wittmann, 2005). Results
for the eight-factor models were consistent with prior predictions. Model 1 had the worst fit of
all the eight-factor models. Fit indices for Model 2 were better across the board compared to
Model 1, but were surpassed by the fit indices for the hypothesized eight-factor (32-item) Model 3 (except for slight differences in the RMSEA index, although both Model 2 and Model 3 were within the range described as good fit by Kline, 2005).
Similar results were found for the six-factor models. Again, the lower order null model
for the six-factor models (Model 4) exhibited the poorest fit indices. The higher-order null model
(Model 5) exhibited slightly better fit indices, but was surpassed by the final six-factor (24-item)
hypothesized model (Model 6). Model 6 actually showed good fit for four of the six fit indices
used (RMSEA, NFI, NNFI, and CFI) and moderate fit for the GFI. The adjusted GFI for this
model approached moderate fit at .89.

Table 14
Model Fit Indicesᵃ

                   8-Factor Models                                        6-Factor Models
Fit indices        Model 1          Model 2          Model 3          Model 4          Model 5          Model 6
                   (63-item lower-  (63-item higher- (Final 8-factor  (63-item lower-  (63-item higher- (Final 6-factor
                   order null       order null       32-item FSS)     order null       order null       24-item FSS)
                   model)           model)                            model)           model)
GFI                .655             .775             .866             .694             .746             .908
AGFI               .632             .760             .846             .674             .728             .889
RMSEA              .072             .053             .059             .067             .058             .058
NFI                .813             .879             .928             .832             .864             .951
NNFI               .843             .914             .944             .864             .898             .962
CFI                .848             .917             .948             .868             .902             .966
Chi-squareᵇ        8505.6**         5505.9**         1561.5**         7622.3**         6164.7**         811.68**
DF                 1894             1886             459              1894             1888             249
ᵃ Refer to Figures 2-7 for model diagrams.
ᵇ Chi-square values are reported, but were not used to determine model fit.
** p < .01

Finally, the hypothesized 32-item eight-factor (Model 3) and the 24-item six-factor
(Model 6) models were compared for goodness of fit to determine which model would be used
for the creation of the final FSS (see Table 14). Even though both these models fit the data well
according to the RMSEA index, results for the NFI, NNFI, and CFI fit indices showed that the six-factor model exhibited good fit compared to the moderate fit of the eight-factor model. The six-factor model was also a better fitting model than the eight-factor model using the criterion established for the GFI. In addition, the high Cronbach's α value for the combined work, procedures, and physical working conditions subscales discovered during the initial reliability analysis, and the high intercorrelations between these three subscales, further support the decision to combine these three separate subscales into a single work-related subscale, as was done in the case of the 24-item six-factor Model 6.
Simply put, a six-factor model would be more psychometrically sound and parsimonious; therefore, the decision was made to create a final 24-item version of the FSS consisting of four items loading on each of the six factors (Model 6). In addition, the reliability of each facet subscale of this final version of the FSS was analyzed using the SPSS scale reliability procedure. The Cronbach's α values for the factors ranged from .89 for the work-related factor to .95 for the pay, co-workers, and benefits factors (see Table 15). Thus, these results show support for Hypothesis 1, in that the final complete FSS exhibits good factor structure and reliability.

Table 15
Cronbach's α Values for the Final 24-item 6-factor Complete FSS
Factor              Cronbach's α    Range of item-total correlations
Pay                 .95             .85 - .92
Promotion           .92             .78 - .83
Supervisor          .94             .81 - .91
Coworkers           .95             .81 - .90
Work-relatedᵃ       .89             .70 - .81
Benefits            .95             .85 - .89
ᵃ The Work-related facet consists of the following items: Work4, Work8, Proc1, PWC7.

Item Selection for the Shortened FSS
The item that had the highest factor loading within each subscale of the 24-item six-facet FSS was selected to make up the shortened version of the FSS (the single-item-per-facet FSS). In keeping with the six-factor model described above, the shortened FSS contained one item from each of the pay, promotion, supervisor, coworkers, work-related, and benefits facets. The reliability estimates for these items are listed in Table 16.³ Estimates of reliability based on factor analysis communalities provided the lower boundary for the reliability of the shortened version of the FSS (see for a review, Harman, 1976). These communalities were obtained from the results of the maximum likelihood promax rotation factor analysis for the complete FSS (24-item six-facet scale), and ranged from .76 for the promotion and work-related items to .92 for the pay item. Correction for attenuation reliability estimates (Eq. 3) for the data ranged from .89 to .96, much higher than the reasonable cut-off of .70 reported by Wanous, Reichers, and Hudy (1997) and Nagy (2002), or even the .80 reported by Loo and Kells (1998).

Table 16
Single-Item Reliability Estimates for the Shortened FSS (1 Item Measuring Each Facet)ᵃ

                                    Pay    Promotion    Supervisor    Co-workers    Work-related    Benefits
Factor analysis communality         .92    .76          .89           .88           .76             .85
Correction for attenuation          .96    .89          .95           .94           .91             .93
Single-item and four-item FSS
  subscales correlation             .96    .90          .95           .94           .90             .94
Average estimated reliability       .95    .85          .93           .92           .86             .91
ᵃ Actual items used to create the shortened FSS were PAY5, PROM4, SUPE7, COWO7, WORK8 (for the work-related factor), and BENE8 respectively.

Finally, correlations between single-item facet measures on the shortened FSS and their
corresponding four-item scales from the 24-item six-facet FSS measure provided a third
reliability estimate. Essentially, a simple correlation was run between each single-item facet
measure on the shortened FSS and their corresponding complete (24-item six-facet) FSS
subscale (i.e., PAY5 with the complete FSS Pay subscale). The reliability estimates provided by these correlations ranged from .90 for the promotion and work-related items to .96 for the pay
item. In addition, a mean score of single-item reliability was also calculated, by averaging the
reliability scores from the three previous estimates of single-item reliability in accordance with
the method employed by Nagy (2002). This overall average single-item reliability score was
used as a summary score of the various single-item reliability estimates, and ranged from .85 for
the promotion item to .95 for pay, which was described as good to excellent levels of reliability
(Charter, 2003).
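Two of these single-item reliability estimates are straightforward to compute directly from the data. The sketch below assumes a data frame fss with the item responses, a character vector final_24_items naming the 24 retained items, and the Pay subscale items inferred from Table 11; all of these names are assumptions for illustration.

    # Minimal sketch of two single-item reliability estimates from Table 16
    fss_cc <- na.omit(fss[, final_24_items])

    # (1) Communality of the single item from the 24-item six-facet factor analysis
    #     (communality = 1 - uniqueness); this serves as the lower-bound reliability estimate.
    efa_final <- factanal(fss_cc, factors = 6, rotation = "promax")
    1 - efa_final$uniquenesses["PAY5"]

    # (2) Correlation between the single item and its four-item parent subscale
    #     (the Pay subscale items below are inferred from the Table 11 loadings)
    pay_scale <- rowMeans(fss[, c("PAY1", "PAY5", "PAY7", "PAY8")], na.rm = TRUE)
    cor(fss$PAY5, pay_scale, use = "complete.obs")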
In addition to the three estimates of single-item reliability presented in Table 16, an analysis of internal consistency reliability using Cronbach's α was also run on the six items used to form the shortened FSS. The Cronbach's α reported using SPSS (v. 14) was .78, which corresponds to the range listed by Charter (2003) as a fair level of reliability. The lower value (compared to the other three estimates of single-item reliability) was expected, considering that α is a measure of the internal consistency of a scale, which in this instance would be lower since the shortened FSS uses six items to measure six different facets.
Initial Analysis of FSS Predictive Ability
As an initial validity study, the complete 24-item six-facet FSS was used as a predictor of
common job satisfaction outcome measures, including intent-to-quit, organizational citizenship
behaviors towards individuals (OCBI) and the organization (OCBO), and in-role behaviors
(IRB). In addition, three other scales of general job satisfaction were used as comparison measures to determine the initial predictive validity of the complete six-facet FSS: the Faces scale (Kunin, 1955), the Job Diagnostic Survey (JDS; Hackman & Oldham, 1974), and a Job Evaluation measure (Crites, Fabrigar, & Petty, 1994). Specifically, three hierarchical regression analyses were run on each of the four outcome measures. Each of the comparison scales (Faces, JDS, and Job Evaluation) was entered in Step 1 of the analysis for each outcome measure (intent-to-quit, OCBI, OCBO, and IRB), and the complete 24-item six-facet FSS was entered in Step 2 of each analysis.
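A minimal sketch of one such hierarchical regression in R is shown below; the data frame d and its variable names (the Faces score, the six FSS subscale means, and intent-to-quit) are assumptions made for illustration.

    # Minimal sketch: comparison scale in Step 1, FSS facets added in Step 2
    step1 <- lm(intent_to_quit ~ faces, data = d)
    step2 <- lm(intent_to_quit ~ faces + pay + promotion + supervisor +
                  coworkers + work_related + benefits, data = d)

    summary(step1)$r.squared                                  # R-squared for Step 1
    summary(step2)$r.squared - summary(step1)$r.squared       # change in R-squared after adding the FSS
    anova(step1, step2)                                       # F test of the R-squared change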
Hierarchical regression analyses: Faces (Step 1), 24-item six-facet FSS (Step 2)
In the first set of analyses, the Faces data was entered in Step 1 of the hierarchical regression, while the FSS facets were entered in Step 2. The results (see Tables 17-20) show a significant increase in R² across all four outcome measures after the addition of the FSS facets. Specifically, for intent-to-quit (M = 3.20, SD = 2.01), both models were significant, such that F (1, 765) = 605.94, p < .01 and F (6, 759) = 618.71, p < .01 for models 1 and 2 respectively. The Faces scale was a significant predictor in Step 1 (β = -.67, t = -24.616, p < .01). In Step 2, both the supervisor (β = -.19, t = -4.65, p < .01) and work-related (β = -.17, t = -4.30, p < .01) facets were significant predictors, in addition to the Faces scale (β = -.66, t = -10.53, p < .01) (see Table 17).
For OCBI (M = 3.96, SD = .68), both models were again significant, whereby F (1, 746) = 52.08, p < .01 and F (6, 740) = 63.61, p < .01 for models 1 and 2 respectively. The Faces scale was a significant predictor in Step 1 (β = .26, t = 7.22, p < .01). In Step 2, the Faces scale became a non-significant predictor (t = -.72, ns) once the FSS facets were entered into the analysis. Instead, the co-workers (β = .21, t = 4.95, p < .01) and work-related (β = .19, t = 3.68, p < .01) facets significantly predicted OCBI (see Table 18).
For OCBO (M = 4.08, SD = .56), both models were once again significant, with F (1, 745) = 48.64, p < .01 for model 1 and F (6, 739) = 62.03, p < .01 for model 2. The Faces scale was significant in Step 1 (β = .25, t = 6.97, p < .01).

Table 17
Hierarchical Regression Analysis for Intent-to-Quit (Comparing Faces and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Faces -.735 .030 -.665 -24.616** .442 .442**
Step 2: FSS subscales added .493 .051**
Faces -.465 .044 -.420 -10.531**
Pay -.044 .047 -.031 -.930
Promotion -.099 .051 -.067 -1.934
Supervisor -.246 .053 -.152 -4.651**
Co-workers .045 .061 .023 .737
Work-related -.300 .070 -.172 -4.297**
Benefits -.017 .045 -.012 -.371
Note: N = 767; *p < .05. **p <.01.

Table 18
Hierarchical Regression Analysis for OCBI (Comparing Faces and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Faces .096 .013 .255 7.217** .064 .064**
Step 2: FSS subscales added .135 .061**
Faces -.014 .020 -.038 -.721
Pay .012 .021 .025 .572
Promotion .024 .023 .048 1.041
Supervisor .018 .024 .033 .761
Co-workers .134 .027 .208 4.948**
Work-related .116 .031 .194 3.681**
Benefits .003 .020 -.007 .879
Note: N = 748; *p < .05. **p <.01.

However, the Faces scale was a non-significant predictor of OCBO (t = -1.39, ns) once the FSS facets were entered, leaving the FSS supervisor (β = .15, t = 3.45, p < .01) and work-related (β = .31, t = 5.95, p < .01) facets as the only significant predictors of OCBO in Step 2 (see Table 19).
Finally, for IRB (M = 4.30, SD = .62), both models were significant, F (1, 745) = 33.16, p < .01 (model 1) and F (6, 739) = 44.84, p < .01 (model 2). The Faces scale was a significant predictor in Step 1 (β = .21, t = 5.76, p < .01), and remained a significant predictor in Step 2 of the hierarchical regression analysis (β = -.11, t = -2.04, p < .05). In addition, the FSS supervisor (β = .13, t = 2.87, p < .01) and work-related (β = .31, t = 5.76, p < .01) facets were significant predictors in Step 2 of the analysis (see Table 20).

Table 19
Hierarchical Regression Analysis for OCBO (Comparing Faces and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Faces .077 .011 .248 6.974** .061 .061**
Step 2: FSS subscales added .153 .092**
Faces -.030 .016 -.096 -1.839
Pay .002 .017 .005 .104
Promotion .023 .019 .057 1.244
Supervisor .067 .019 .148 3.446*
Co-workers .025 .022 .048 1.142
Work-related .153 .026 .312 5.947**
Benefits -.012 .017 -.031 -.732
Note: N = 747; *p < .05. **p <.01.

Table 20
Hierarchical Regression Analysis for IRB (Comparing Faces and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Faces .071 .012 .206 5.758** .043 .043**
Step 2: FSS subscales added .126 .083**
Faces -.037 .018 -.108 -2.035*
Pay .018 .019 .041 .918
Promotion -.026 .021 -.057 -1.232
Supervisor .063 .022 .126 2.872*
Co-workers .039 .025 .067 1.577
Work-related .167 .029 .306 5.760**
Benefits .003 .019 .007 .150
Note: N = 747; *p < .05. **p <.01.

Hierarchical regression analyses: JDS (Step 1), 24-item six-facet FSS (Step 2)
In the second set of hierarchical regression analyses, JDS was entered in Step 1 of the
regression equation, while the 24-item six-facet complete FSS was entered in Step 2. The
outcome variables were once again intent-to-quit, OCBI, OCBO, and IRB (see Tables 21-24).

Table 21
Hierarchical Regression Analysis for Intent-to-Quit (Comparing JDS and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: JDS -.863 .036 -.660 -24.170** .436 .436**
Step 2: FSS subscales added .487 .051**
JDS -.566 .057 -.433 -9.963**
Pay -.040 .048 -.028 -.837
Promotion -.153 .052 -.103 -2.951*
Supervisor -.281 .053 -.174 -5.299**
Co-workers .018 .061 .009 .293
Work-related -.191 .078 -.109 -2.454*
Benefits .008 .046 .005 .165
Note: N = 759; *p < .05. **p <.01.

Table 22
Hierarchical Regression Analysis for OCBI (Comparing JDS and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: JDS .145 .015 .327 9.442** .107 .107**
Step 2: FSS subscales added .149 .042**
JDS .060 .025 .135 2.391*
Pay .001 .021 .003 .063
Promotion .022 .023 .043 .951
Supervisor .007 .023 .012 .285
Co-workers .124 .027 .193 4.634**
Work-related .057 .034 .096 1.672
Benefits -.005 .020 -.011 -.260
Note: N = 748; *p < .05. **p <.01.

For intent-to-quit (M = 3.20, SD = 2.01), both models were significant, such that F (1, 757) = 584.17, p < .01 and F (6, 751) = 560.67, p < .01 for models 1 and 2 respectively. JDS was a significant predictor in Step 1 (β = -.66, t = -24.17, p < .01). JDS continued to be a significant predictor in Step 2 (β = -.43, t = -9.96, p < .01), in addition to the FSS facets of promotion (β = -.10, t = -2.95, p < .05), supervisor (β = -.17, t = -5.30, p < .01), and work-related (β = -.11, t = -2.45, p < .05) (see Table 21).

Table 23
Hierarchical Regression Analysis for OCBO (Comparing JDS and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: JDS .120 .013 .328 9.489** .108 .108**
Step 2: FSS subscales added .152 .044
JDS .030 .021 .082 1.461
Pay -.007 .017 -.018 -.421
Promotion .019 .019 .047 1.031
Supervisor .056 .019 .124 2.909*
Co-workers .016 .022 .031 .734
Work-related .107 .028 .218 3.786**
Benefits -.014 .017 -.035 -.820
Note: N = 747; *p < .05. **p <.01.

Table 24
Hierarchical Regression Analysis for IRB (Comparing JDS and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: JDS .122 .014 .301 8.606** .090 .090**
Step 2: FSS subscales added .124 .033**
JDS .036 .023 .090 1.569
Pay .007 .020 .016 .356
Promotion -.032 .021 -.069 -1.497
Supervisor .049 .022 .098 2.267*
Co-workers .028 .025 .048 1.124
Work-related .111 .032 .203 3.483*
Benefits .001 .019 .003 .077
Note: N = 747; *p < .05. **p <.01.

For OCBI (M = 3.96, SD = .68), both models were significant, such that F (1, 746) = 89.16, p < .01 (model 1) and F (6, 740) = 95.32, p < .01 (model 2) (see Table 22). JDS was a significant predictor in Step 1 (β = .33, t = 9.44, p < .01) and remained a significant predictor in Step 2 (β = .14, t = 2.39, p < .05). The FSS co-worker facet was also a significant predictor of OCBI in Step 2 (β = .19, t = 4.63, p < .01).

For OCBO (M = 4.08, SD = .56), both models were once again significant, whereby F (1, 745) = 90.04, p < .01 and F (6, 739) = 96.44, p < .01 for models 1 and 2 respectively. JDS was a significant predictor in Step 1 (β = .33, t = 9.49, p < .01), but became a non-significant predictor in Step 2 (t = 1.46, ns). Instead, the supervisor (β = .12, t = 2.91, p < .05) and work-related (β = .22, t = 3.79, p < .01) FSS facets were the only significant predictors in Step 2 (see Table 23).
Finally, for IRB (M = 4.30, SD = .62), both models were significant, such that F (1, 745) = 74.06, p < .01 (model 1) and F (6, 739) = 78.71, p < .01 (model 2). JDS was a significant predictor in Step 1 (β = .30, t = 8.61, p < .01) but not in Step 2 (t = 1.57, ns). Instead, the FSS facets for supervisor (β = .10, t = 2.27, p < .05) and work-related (β = .20, t = 3.48, p < .01) were the only significant predictors in Step 2 (see Table 24).
Hierarchical regression analyses: Job Evaluation (Step 1), 24-item six-facet FSS (Step 2)
The final set of hierarchical regression analyses compared the predictive ability of Job Evaluation and the 24-item six-facet FSS on the outcome measures (intent-to-quit, OCBI, OCBO, and IRB). The results of these hierarchical regression analyses are reported in Tables 25-28. For intent-to-quit (M = 3.20, SD = 2.01), both regression models were significant, such that F (1, 765) = 771.39, p < .01 for model 1 and F (6, 759) = 776.32, p < .01 for model 2. Job Evaluation was a significant predictor of intent-to-quit in Step 1 (β = -.71, t = -27.77, p < .01). In Step 2, both Job Evaluation (β = -.57, t = -12.69, p < .01) and the supervisor facet (β = -.13, t = -4.05, p < .01) were significant predictors of intent-to-quit (see Table 25).
For OCBI (M = 3.96, SD = .68), both models were significant, whereby F (1, 746) = 66.42, p < .01 and F (6, 740) = 75.23, p < .01 for models 1 and 2 respectively (see Table 26). Job Evaluation was a significant predictor of OCBI in Step 1 (β = .29, t = 8.15, p < .01) but was not a significant predictor in Step 2 (t = -.49, ns).

Table 25
Hierarchical Regression Analysis for Intent-to-Quit (comparing Job Evaluation and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Job Evaluation -1.398 .050 -.709 -27.774** .502 .502**
Step 2: FSS subscales added .521 .019**
Job Evaluation -1.133 .089 -.574 -12.688**
Pay .003 .047 .002 .060
Promotion -.082 .050 -.055 -1.638
Supervisor -.209 .052 -.129 -4.045**
Co-workers .078 .059 .041 1.318
Work-related -.101 .074 -.058 -1.377
Benefits -.032 .044 -.023 -.726
Note: N = 767; *p < .05. **p <.01.

Table 26
Hierarchical Regression Analysis for OCBI (Comparing Job Evaluation and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Job Evaluation .191 .023 .286 8.150** .082 .082**
Step 2: FSS subscales added .143 .061**
Job Evaluation -.020 .041 -.030 -.490
Pay .012 .021 .025 .563
Promotion .023 .023 .046 1.012
Supervisor .017 .024 .032 .728
Co-workers .133 .027 .207 4.907**
Work-related .115 .034 .193 3.365*
Benefits -.003 .020 -.007 -.165
Note: N = 748; *p < .05. **p <.01.

Instead, the FSS facets of co-workers (β = .21, t = 4.91, p < .01) and work-related (β = .19, t = 3.37, p < .05) were the only significant predictors of OCBI in Step 2.
For OCBO (M = 4.08, SD = .56), both models were significant, such that F (1, 745) = 59.79, p < .01 and F (6, 739) = 71.60, p < .01 for models 1 and 2 respectively. Job Evaluation was a significant predictor in Step 1 (β = .27, t = 7.73, p < .01), and continued to be a significant predictor in Step 2 (β = -.14, t = -2.26, p < .05).

In addition, the supervisor (β = .16, t = 3.59, p < .01) and work-related (β = .34, t = 6.00, p < .01) FSS facets were significant predictors of OCBO in Step 2 (see Table 27).

Table 27
Hierarchical Regression Analysis for OCBO (Comparing Job Evaluation and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Job Evaluation .150 .019 .273 7.732** .074 .074**
Step 2: FSS subscales added .155 .081**
Job Evaluation -.076 .034 -.139 -2.259*
Pay .005 .018 .014 .307
Promotion .024 .019 .059 1.290
Supervisor .070 .020 .155 3.585**
Co-workers .028 .022 .052 1.248
Work-related .168 .028 .342 6.001**
Benefits -.013 .017 -.034 -.783
Note: N = 747; *p < .05. **p <.01.

Table 28
Hierarchical Regression Analysis for IRB (Comparing Job Evaluation and FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Job Evaluation .142 .022 .233 6.546** .054 .054**
Step 2: FSS subscales added .127 .073**
Job Evaluation -.088 .038 -.144 -2.310*
Pay .022 .020 .049 1.094
Promotion -.026 .021 -.056 -1.204
Supervisor .066 .022 .132 2.989*
Co-workers .042 .025 .071 1.664
Work-related .181 .031 .334 5.776**
Benefits .002 .019 .004 .094
Note: N = 747; *p < .05. **p <.01.

Finally, for IRB (M = 4.30, SD = .62), both models were significant, such that F (1, 745) = 48.85, p < .01 (model 1) and F (6, 739) = 53.09, p < .01 (model 2). Job Evaluation was once again a significant predictor in Step 1 (β = .23, t = 6.55, p < .01).

In Step 2, Job Evaluation (β = -.14, t = -2.31, p < .05) and the FSS supervisor (β = .13, t = 2.99, p < .05) and work-related (β = .33, t = 5.78, p < .01) facets were significant predictors of IRB (see Table 28).
Summary of hierarchical regression analyses between the general job satisfaction measures and
complete 24-item six-facet FSS
The results of the hierarchical multiple regression analyses indicate that the 24-item six-
facet FSS adds significant predictive ability over the three comparison scales (Faces, JDS, and
Job evaluation) for the four outcome measures selected (intent-to-quit, organizational citizenship
behaviors towards individuals and the organization, and in-role behaviors). In several cases,
comparison measures actually become non-significant when the FSS facets are added into the
regression analysis. Specifically, the Faces scale becomes a non-significant predictor for OCBI
and OCBO once the FSS facets are added. The same happens to JDS when predicting OCBO and
IRB, and to job evaluation when predicting OCBI. These results provide support for Hypothesis
3 that the complete version of the FSS would significantly predict intent-to-quit and job
performance.
Shortened versus complete FSS
In addition to the comparison scales (Faces, JDS, and Job Evaluation), the shortened version of the FSS (using single-item facet measures) was also entered into hierarchical regression analyses with the complete 24-item FSS. The six-item shortened FSS was entered in Step 1 of the hierarchical regression analysis (just like the comparison scales in the earlier analyses), while the complete 24-item six-facet FSS was entered in Step 2. The results of these four hierarchical multiple regressions can be found in Tables 29-32.

Table 29
Hierarchical Regression Analysis for Intent-to-Quit (Comparing Shortened and Complete FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Shortened FSS .398 .398**
PAY5 -.135 .049 -.098 -2.753**
PROM4 -.106 .045 -.082 -2.388*
SUPE7 -.371 .051 -.246 -7.299**
COWO7 -.082 .058 -.047 -1.420
WORK8 -.469 .052 -.328 -9.067**
BENE8 -.058 .044 -.045 -1.317
Step 2: Complete FSS added .428 .030**
PAY5 -.004 .136 -.003 -.026
PROM4 .099 .085 .077 1.158
SUPE7 -.103 .134 -.068 -.769
COWO7 .173 .151 .099 1.142
WORK8 -.091 .096 -.064 -.949
BENE8 -.046 .107 -.035 -.432
Pay -.108 .141 -.076 -.762
Promotion -.267 .107 -.181 -2.492*
Supervisor -.242 .147 -.151 -1.647
Co-workers -.236 .168 -.125 -1.405
Work-related -.542 .125 -.312 -4.349**
Benefits .031 .118 .022 .264
Note: N = 746; *p < .05. **p <.01.


Table 30
Hierarchical Regression Analysis for OCBI (Comparing Shortened and Complete FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Shortened FSS .146 .146**
PAY5 .005 .020 .012 .273
PROM4 .035 .018 .079 1.888
SUPE7 .051 .021 .099 2.432*
COWO7 .108 .024 .183 4.533**
WORK8 .075 .021 .154 3.534**
BENE8 -.012 .018 -.027 -.666
Step 2: Complete FSS added .163 .017*
PAY5 -.020 .057 -.042 -.343
PROM4 .065 .036 .148 1.831
SUPE7 .186 .056 .361 3.327**
COWO7 .015 .063 .026 .239
WORK8 .027 .040 .055 .682
BENE8 -.033 .044 -.073 -.732
Pay .025 .060 .052 .417
Promotion -.049 .044 -.098 -1.113
Supervisor -.166 .061 -.305 -2.717**
Co-workers .111 .070 .173 1.579
Work-related .081 .052 .137 1.565
Benefits .034 .049 .072 .691
Note: N = 728; *p < .05. **p <.01.


Table 31
Hierarchical Regression Analysis for OCBO (Comparing Shortened and Complete FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Shortened FSS .148 .148**
PAY5 .001 .016 .003 .079
PROM4 .029 .015 .080 1.913
SUPE7 .076 .017 .180 4.408**
COWO7 .011 .019 .023 .562
WORK8 .090 .017 .226 5.195**
BENE8 -.015 .015 -.041 -1.004
Step 2: Complete FSS added .166 .018*
PAY5 .050 .047 .130 1.063
PROM4 .066 .029 .183 2.265*
SUPE7 .101 .046 .239 2.208*
COWO7 -.087 .052 -.180 -1.678
WORK8 .016 .032 .041 .499
BENE8 -.020 .036 -.055 -.556
Pay -.056 .049 -.141 -1.137
Promotion -.061 .036 -.147 -1.664
Supervisor -.036 .050 -.080 -.716
Co-workers .106 .058 .201 1.839
Work-related .116 .043 .238 2.721**
Benefits .016 .040 .042 .403
Note: N = 727; *p < .05. **p <.01.


Table 32
Hierarchical Regression Analysis for IRB (Comparing Shortened and Complete FSS)
Step and variable                B      SE B      β      t      R²      ΔR²

Step 1: Shortened FSS .120 .120**
PAY5 .011 .018 .026 .595
PROM4 .001 .017 .003 .079
SUPE7 .068 .019 .148 3.558**
COWO7 .038 .021 .073 1.782
WORK8 .085 .019 .196 4.429**
BENE8 -.005 .016 -.013 -.307
Step 2: Complete FSS added .143 .024**
PAY5 -.008 .052 -.018 -.149
PROM4 .081 .032 .208 2.547*
SUPE7 .085 .050 .187 1.699
COWO7 -.011 .057 -.021 -.194
WORK8 -.007 .036 -.016 -.198
BENE8 .016 .040 .040 .399
Pay .022 .054 .050 .402
Promotion -.122 .040 -.273 -3.048**
Supervisor -.023 .055 -.047 -.417
Co-workers .046 .063 .081 .736
Work-related .145 .047 .275 3.116**
Benefits -.010 .044 -.024 -.225
Note: N = 728; *p < .05. **p <.01.


For the comparison in predictive ability between the shortened FSS (single-item facet measures) and the complete FSS (24-item six-facet measure) on intent-to-quit (M = 3.19, SD = 2.00), both models were found to be significant, such that F (6, 739) = 81.43, p < .01 (model 1) and F (6, 733) = 87.80, p < .01 (model 2). In Step 1, the single-item FSS facets of pay (β = -.10, t = -2.75, p < .01), promotion (β = -.08, t = -2.39, p < .05), supervisor (β = -.25, t = -7.30, p < .01), and work (β = -.33, t = -9.07, p < .01) were all significant predictors of intent-to-quit. When the complete 24-item six-facet FSS was entered into the regression analysis in Step 2, none of the single-item FSS facets remained significant predictors of intent-to-quit (see Table 29). Instead, only the complete FSS multi-item facets for promotion (β = -.18, t = -2.49, p < .05) and work-related (β = -.31, t = -4.35, p < .01) were significant predictors of intent-to-quit.
For OCBI (M = 3.96, SD = .68), both models were once again significant, such that F (6, 721) = 20.48, p < .01 for model 1 and F (6, 715) = 22.90, p < .05 for model 2. In Step 1, the supervisor (β = .10, t = 2.43, p < .05), co-worker (β = .18, t = 4.53, p < .01), and work (β = .15, t = 3.53, p < .01) items significantly predicted OCBI. Once the complete FSS was entered into the regression analysis in Step 2, of these three items only the supervisor item (β = .36, t = 3.33, p < .01) remained a significant predictor (see Table 30). In addition, the four-item supervisor scale from the complete FSS was also a significant predictor of OCBI in Step 2 (β = -.31, t = -2.72, p < .01).
For OCBO (M = 4.08, SD = .56), both models were significant, whereby F (6, 720) = 20.85, p < .01 and F (6, 714) = 23.39, p < .05 for models 1 and 2 respectively. In Step 1, the supervisor (β = .18, t = 4.41, p < .01) and work (β = .23, t = 5.20, p < .01) items were significant predictors of OCBO. In Step 2, the single-item work item became a non-significant predictor (t = .50, ns). The single-item supervisor (β = .24, t = 2.21, p < .05) and promotion (β = .18, t = 2.27, p < .05) items were significant predictors in Step 2, along with the four-item work-related FSS facet (β = .24, t = 2.72, p < .01) (see Table 31).

Finally, for IRB (M = 4.31, SD = .61), both models were again significant, such that F (6, 721) = 16.34, p < .01 for model 1 and F (6, 715) = 19.62, p < .01 for model 2. Two single-item measures were significant predictors of IRB in Step 1: the supervisor (β = .15, t = 3.56, p < .01) and work (β = .20, t = 4.43, p < .01) items. When the complete (24-item six-facet) FSS was entered in Step 2, the single-item promotion item (β = .21, t = 2.55, p < .05), and the four-item promotion (β = -.27, t = -3.05, p < .01) and work-related (β = .28, t = 3.12, p < .01) subscales, were significant predictors of IRB (see Table 32).
These results indicate that the complete 24-item six-facet FSS was a significant predictor of the four outcomes (intent-to-quit, OCBI, OCBO, and IRB) above and beyond the single-item shortened FSS, which is not surprising considering that all six items of the shortened FSS were contained within the complete 24-item FSS measure. What is of interest is the increase in R² demonstrated when using the complete as opposed to the shortened version of the FSS, which was .030, .017, .018, and .024 for intent-to-quit, OCBI, OCBO, and IRB respectively. Viewed from a different perspective, the predictive loss of going from a 24-item scale to a six-item scale is no more than 3% of total variance for these four outcomes. Taken together, these results provide support for Hypothesis 3, which stated that both the complete (24-item) and shortened (six-item) versions of the FSS would demonstrate evidence of predictive ability.



CHAPTER 4
DISCUSSION
In general, the results of this study showed support for the research hypotheses. However, several issues should be addressed at this point. These include (1) the final decision to create a 24-item six-facet FSS instead of the 32-item eight-facet scale originally conceived, (2) uses of the six-item shortened version of the FSS, (3) facet scales as incomplete measures of job satisfaction, and (4) the limitations of this study and next steps in scale development.
Creation of the 24-item Six-facet FSS
The FSS was originally conceived as a scale measuring eight facets of job satisfaction,
namely, pay, promotion, supervisor, co-workers, work, benefits, procedures, and physical
working conditions. Sixty-four items were initially created to measure these eight facets (i.e., eight items per facet) in order to allow items with poor psychometric properties to be discarded. During the initial analysis of the FSS items, however, two possible models for the FSS
(an eight-factor and a six-factor model) were discovered. Further analysis using confirmatory
factor analysis methodology showed that the final complete version of the FSS best fit a six-
factor model based on the six different model fit indices provided by R (v. 2.4.1). Specifically,
while both models showed good fit according to the six fit indices used in this study, the six-
factor model (i.e. the final version of the FSS measuring six facets) consistently outperformed
the eight-factor model across all six fit indices.
This was surprising considering that the FSS was initially conceived as an eight-facet
scale. A review of the initial pool of FSS items, however, showed that a six-facet scale was not
only more psychometrically sound, it also best represented the items as they were worded,

considering that the work, procedures, and physical working conditions items all tapped into a
work-related factor. While the internal consistency reliability index of the work-related facet
seems slightly lower than that of the other facets, this is to be expected considering that this facet
is an amalgamation of what was initially conceived as items from three different facets.
Nevertheless, the Cronbach's α value of .89 for the work-related scale still approaches the range
described as excellent internal consistency by Charter (2003). In addition, the internal
consistency reliability scores for the other five facet subscales ranged from .92 to .95, which
were even higher than initial scale development reliability scores reported for other job
satisfaction facet scales such as the Job Descriptive Index (Smith, Kendall, & Hulin, 1969).
These reasons, coupled with the clear lack of cross-loadings in the factor analysis matrix and the high intercorrelations (see Table 10) between the work, procedures, and physical working conditions subscales,⁴ prompted the final decision to create a six-facet complete version of the FSS that was not only more psychometrically sound but also more parsimonious. This complete version of the FSS consisted of 24 items (four items per facet) measuring the following six facets: pay, promotion, supervisor, coworkers, work-related, and benefits.
The Six-item Shortened FSS
A shortened version of the FSS was also created in order to take advantage of the
significant savings provided by using shorter scales (see for a review, Nagy, 2002). The
shortened version of the FSS was conceived as a scale using one item to measure each of the six
facets derived earlier, essentially creating a six-item scale that would measure as many facets of
job satisfaction as the full version of the FSS. However, due to the arguments against the use of
single-item measures (see for example, Loo & Kells, 1998), additional reliability measures were
conducted to determine if the shortened FSS would exhibit sound psychometric properties.

The four methods discussed by proponents of single-item measures of job satisfaction (see for examples, Wanous & Hudy, 2001; Nagy, 2002) provided a range of reliability estimates for the shortened FSS (see Table 16). The results of the single-item reliability estimates showed that the items easily surpassed the minimum acceptable single-item reliability cut-off of .70 reported by Nagy (2002). Indeed, the average single-item reliability scores reported in this study ranged from .85 to .95, which rebuts the argument that measures of single-item reliability will necessarily be low (see for example, Loo, 2002). Even the estimate based on factor analysis communalities (which provides the lower bound for the reliability of the shortened FSS) ranged from .76 for the promotion and work-related items to .92 for the item assessing pay, ensuring that the reliability of this scale ranged from fair to excellent based on the standards established by Charter (2003).
More interesting are the comparisons between the predictive ability of the shortened and complete versions of the FSS (see Tables 29-32). This analysis was conducted using four outcome measures (intent-to-quit, organizational citizenship behaviors towards individuals and the organization, and in-role behaviors) that have previously been shown to be related to job satisfaction (see for examples, Judge et al., 2001; Wagner & Rush, 2000; Campbell & Campbell, 2003). The results of the hierarchical multiple regressions showed that using the 24-item complete version of the FSS significantly increased R² by between .017 and .030 (depending on the outcome measure analyzed) over the six-item shortened FSS.
While statistically significant, this increase corresponds to only 1.7 to 3 percent of total variance, and it was achieved by quadrupling the number of items used as predictors. Considering this cost-benefit ratio, the use of the shortened FSS
may well be a reasonable alternative to the complete scale when a researcher would rather

generate savings in terms of time, cost, and space, and increase the parsimony and face validity
of the study as opposed to maximizing the amount of variance accounted for by the measurement
scale.
Facets as an Incomplete Measure of Job Satisfaction
The FSS was conceived as a facet measure of job satisfaction, and thus it measures a
finite number (in this case six) of facets from a complex construct. Furthermore, the scale was
originally conceived to measure eight facets, indicating difficulties with the items originally
designed to measure the various facets. As a result, critics of the use of facet measures may well
note the limitation of this scale, indicating that it does not adequately assess the entire construct
(for a review of this argument, see Scarpello & Campbell, 1983).
However, comparisons between the FSS and a well-accepted measure of global job
satisfaction (the Faces scale) seem to indicate that the FSS is an accurate predictor of outcomes
beyond the Faces scale. Using the FSS significantly increased the predictive ability of the four
hierarchical multiple regression analyses of the outcome measures (intent-to-quit, OCBI, OCBO,
and IRB). While the actual increase in total variance only ranged from 5.1 to 9.2%, in two of
these analyses (for OCBI and OCBO), entering the FSS into the analysis actually made the Faces
scale a non-significant predictor (see Tables 18 and 19). Nevertheless, the point made by
proponents of global measures of job satisfaction stating that facet measures are incomplete must
be noted, and further analysis of the FSS should be undertaken in order to enhance the scale's ability to measure job satisfaction.
Limitations and Next Steps
While the results of this study showed support for the research hypotheses, one major
issue should be discussed, specifically the amalgamation of the three original FSS subscales

(work, procedures, and physical working conditions) into a single work-related subscale. While
combining these subscales does lead to a more psychometrically sound measure, a review of the items was conducted to determine whether the items that cross-loaded clustered around an alternate
factor heading that was not initially envisaged. It was discovered that two of the items that cross-
loaded onto an eighth factor heading (PROC4 and PWC1) may have done so by tapping into a
work enabler-inhibitor factor. This factor was not among the original eight factors assessed
during the creation of the FSS, and may indeed be another important facet that should be
assessed when measuring job satisfaction.
The discovery of another potentially significant facet is another sign that facet measures
of job satisfaction in general (and the FSS in particular) have an inherent weakness in that they
cannot measure the entire job satisfaction construct. However, actually measuring the entire job
satisfaction construct may not be necessary in order to be a significant measure of the construct
or to be a predictor of its outcomes, as was shown in the hierarchical regression analyses against
the Faces scale. Nonetheless, this leads to a next step in the refinement of the FSS, in that
additional facets should be added to the scale to determine if they add value when measuring job
satisfaction or predicting its outcomes beyond that provided by the current set of facets. Beyond
simply adding potentially significant facets, validity studies should also be conducted on the
existing FSS as part of the next step in scale creation, and additional reliability evidence can be
gathered (especially for the shortened version of the FSS) using test-retest methodology.
Conclusion
This study was initially conceived to create a new measure of job satisfaction that was
based on contemporary definitions of the construct. A semantic differential scale with evaluative
end-points was used as the basis for the construction of the response scale. Since job satisfaction

has often been described as a multi-faceted construct, a multi-factorial scale, the Facet Satisfaction Scale (FSS), was envisioned in which the scale items were created based on the facets commonly used in existing measures of the construct. In addition, recent findings
indicating the potential use of single-item facet scales were incorporated in this study, and a
shortened version of the FSS was also created for psychometric testing.
Analysis of the initial scale construction data showed that the full version of the FSS
exhibited adequate factor structure and good internal consistency reliability across each of its
factors. The full FSS was also a significant predictor of various organizational outcomes relevant
to job satisfaction above and beyond the three measures used as comparators. The shortened
version of the FSS also exhibited adequate reliability, based on various estimates of single-item
reliability, and successfully predicted the same organizational outcomes. However, the work-
related facet of the FSS exhibited slightly lower reliability than the other five facets. Future
studies should be conducted to analyze this issue in order to ensure that both the full and
shortened versions of the FSS are psychometrically sound. Additional facets should be tested to
determine if they add significantly to the existing scale. Validity studies should also be
conducted as the next step in the scale creation process.





ENDNOTES
1. Facet descriptions are defined as "affect-free perceptions about the experiences associated with individual job facets" (Rice et al., 1991, p. 31).

2. Some single-item global measures of job satisfaction have also been shown to have high reliability and validity (see, for example, Kunin, 1955 for a review of the reliability and validity of the Faces Scale, a single-item global measure of job satisfaction).

3. The single-item-multi-item correlation estimate for the work-related item was based on the correlation between the FSS WORK8 item and the 24-item 6-facet FSS work-related subscale.

4. The high level of intercorrelations between the work, procedures, and physical working conditions facets indicated that these facets probably measured the same construct, and were thus redundant (see for a brief review, Bollen & Lennox, 1991).

REFERENCES
Acito, F. & Anderson, R. D. (1980). A Monte Carlo comparison of factor analytic models.
Journal of Marketing Research, 17(2), 228-236.
Ajzen, I. (2001). Nature and operation of attitudes. Annual Review of Psychology, 52, 27-58.
Arvey, R. D., Bouchard, T. J., Segal, N. L., & Abraham, L. M. (1989). Job satisfaction:
Environmental and genetic components. Journal of Applied Psychology, 74(2), 187-192.
Bateman, T. S. & Organ, D. W. (1983). Job satisfaction and the good soldier: The relationship
between affect and employee citizenship. Academy of Management Journal, 26(4), 587-
595.
Beauducel, A. & Wittmann, W. W. (2005). Simulation study on fit indexes in CFA based on data
with slightly distorted simple structure. Structural Equation Modeling, 12(1), 41-75.
Bollen, K. & Lennox, R. (1991). Conventional wisdom on measurement: A structural equation
perspective. Psychological Bulletin, 110(2), 305-314.
Boswell, W. R., Boudreau, J. W., & Tichy, J. (2005). The relationship between employee job
change and job satisfaction: The honeymoon-hangover effect. Journal of Applied
Psychology, 90(5), 882-892.
Brief, A. P. (1998). Attitudes in and around organizations. Thousand Oaks, CA: Sage
Publications.
Brief, A. P. & Roberson, L. (1989). Job attitude organization: An exploratory study. Journal of
Applied Social Psychology, 19(9), 717-727.
Brief, A. P. & Weiss, H. M. (2002). Organizational behavior: Affect in the workplace. Annual
Review of Psychology, 53(1), 279-307.

Bruck, C. S., Allen, T. D., & Spector, P. E. (2002). The relation between work-family conflict
and job satisfaction: A finer-grained analysis. Journal of Vocational Behavior, 60(3), 336-
353.
Buckley, M. R., Carraher, S. M., & Cote, J. A. (1992). Measurement issues concerning the use of
inventories of job satisfaction. Educational and Psychological Measurement, 52(3), 529-543.
Cammann, C., Fichman, M., Jenkins, D., & Klesh, J. (1983). Assessing the attitudes and
perceptions of organizational members. In S. Seashore, E. Lawler, P. Mirvis, & C. Cammann
(Eds.), Assessing organizational change: A guide to methods, measures and practices (pp.
71-138). New York: John Wiley.
Campbell, D. J. & Campbell, K. M. (2003). Global versus facet predictors of intention to quit:
Differences in a sample of male and female Singaporean managers and non-managers.
International Journal of Human Resource Management, 14(7), 1152-1177.
Charter, R. A. (2003). A breakdown of reliability coefficients by test type and reliability method,
and the clinical implications of low reliability. Journal of General Psychology, 130(3),
290-304.
Connolly, J. J. & Viswesvaran, C. (2000). The role of affectivity in job satisfaction: A meta-
analysis. Personality and Individual Differences, 29(2), 265-281.
Cranny, C. J., Smith, C. P., & Stone, E. F. (1992). Job satisfaction: How people feel about their
jobs and how it affects their performance. New York: Lexington Books.
Crites, S. L., Fabrigar, L. R., & Petty, R. E. (1994). Measuring the affective and cognitive
properties of attitudes: Conceptual and methodological issues. Personality and Social
Psychology Bulletin, 20(6), 619-634.

Dipboye, R. L., Smith, C. S., & Howell, W. C. (1994). Understanding industrial and
organizational psychology: An integrated approach. Orlando, FL: Harcourt Brace & Co.
Dunham, R. B. & Smith, F. J. (1979). Organizational surveys: An internal assessment of
organizational health. Glenview, IL: Scott, Foresman and Company.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use
of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272-
299.
Fazio, R. H., Powell, M. C., & Williams, C. J. (1989). The role of attitude accessibility in the
attitude to behavior process. Journal of Consumer Research, 16(3), 280-288.
Fields, D. L. (2002). Taking the measure of work: A guide to validated scales for organizational
research and diagnosis. Thousand Oaks, CA: Sage Publications.
Fisher, C. D. (2000). Mood and emotions while working: missing pieces of job satisfaction?
Journal of Organizational Behavior, 21, 185-202.
Franzoi, S. L. (2003). Social psychology. New York: McGraw-Hill.
Freund, A. (2005). Commitment and job satisfaction as predictors of turnover intentions among
welfare workers. Administration in Social Work, 29(2), 5-21.
Gerhart, B. (1987). How important are dispositional factors as determinants of job satisfaction?
Implications for job design and other personnel programs. Journal of Applied Psychology,
72(3), 366-373.
Gerhart, B. (2005). The (affective) dispositional approach to job satisfaction: sorting out the
policy implications. Journal of Organizational Behavior, 26, 79-97.
Gorsuch, R. L. & McFarland, S. G. (1972). Single vs. multiple-item scales for measuring
religious values. Journal for the Scientific Study of Religion, 11(1), 53-64.

Griffeth, R. W., Hom, P. W., & Gaertner, S. (2000). A meta-analysis of antecedents and
correlates of employee turnover: Update, moderator tests, and research implications for the
next millennium. Journal of Management, 26(3), 463-488.
Hackman, J. R. & Oldham, G. R. (1974). The Job Diagnostic Survey: An instrument for the
diagnosis of jobs and the evaluation of job redesign projects (Tech. Re. No. 4). New Haven,
CT: Yale University, Department of Administrative Sciences.
Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work.
Organizational Behavior and Human Performance, 16(2), 250-279.
Hamner, W. C. & Organ, D. W. (1978). Organizational behavior: An applied psychological
approach. Plano, TX: Business Publications, Inc.
Harman, H. H. (1967). Modern factor analysis. Chicago, IL: The University of Chicago Press.
Hatfield, J., Robinson, R. B., & Huseman, R. C. (1985). An empirical evaluation of a test for
assessing job satisfaction. Psychological Reports, 56, 39-45.
Herzberg, F., Mausner, B., Peterson, R. O., & Capwell, D. F. (1957). Job attitudes: A review of
research and opinions. Pittsburgh, PA: Psychological Services of Pittsburgh.
Howard, J. L. & Frink, D. D. (1996). The effects of organizational restructure on employee
satisfaction. Group and Organizational Management, 21(3), 278-303.
Huff, J. W. (2000). Application of attitude strength to job satisfaction: The moderating role of
attitude strength in the prediction of organizational outcomes from job satisfaction.
Unpublished doctoral dissertation, Northern Illinois University, DeKalb.
Huff, J. W., Tekell, J., & Yeoh, T. (2005, April). Measuring job satisfaction: Are all measures
created equal? Poster presented at the annual conference of the Society for Industrial and
Organizational Psychology, Los Angeles, CA.

Hulin, C. L. & Judge, T. A. (2003). Job attitudes. In W. C. Borman, R. Klimonski, and D. Ilgen
(Eds.), Handbook of psychology: Industrial and organizational psychology (Vol. 12, pp.
255-276). New York: John Wiley & Sons, Inc.
Iaffaldano, M. T. & Muchinsky, P. M. (1985). Job satisfaction and job performance: A meta-
analysis. Psychological Bulletin, 97(2), 251-273.
Ironson, G. H., Smith, P. C., Brannick, M. T., Gibson, W. M., & Paul, K. P. (1989). Constitution
of a Job in General Scale: A comparison of global, composite, and specific measures.
Journal of Applied Psychology, 74, 193-200.
Jackson, C. J. & Corr, P. J. (2002). Global job satisfaction and facet description: The moderating
role of facet importance. European Journal of Psychological Assessment, 18(1), 1-8.
Jackson, C. J., Potter, A., & Dale, S. (1998). Utility of facet descriptions in the prediction of
global job satisfaction. European Journal of Psychological Assessment, 14(2), 134-140.
Judge, T. A., Thoresen, C. J., Bono, J. E., & Patton, G. K. (2001). The job satisfaction-job
performance relationship: A qualitative and quantitative review. Psychological Bulletin,
127(3), 376-407.
Killgore, W. D. S. (1999). Empirically derived factor indices for the Beck Depression Inventory.
Psychological Reports, 84(3), 1005-1013.
Kline, R. B. (2005). Principles and practice of structural equation modeling. New York, NY:
The Guilford Press.
Kuieck, T. J. (1980). A comparison of three operational definitions of job satisfaction.
Unpublished doctoral dissertation, Western Michigan University, Kalamazoo.
Kunin, T. (1955). The construction of a new type of attitude measure. Personnel Psychology, 8,
65-77.

Lambert, E. G., Edwards, C., Camp, S. D., & Saylor, W. G. (2005). Here today, gone tomorrow,
back again the next day: Antecedents of correctional absenteeism. Journal of Criminal
Justice, 33, 165-175.
LaPiere, R. T. (1934). Attitudes vs. actions. Social Forces, 13(2), 230-237.
Lau, D. C. & Murnighan, J. K. (2005). Interactions within groups and subgroups: The effects of
demographic faultlines. Academy of Management Journal, 48(4), 645-659.
Law, K. S. & Wong, C. (1999). Multidimensional constructs in structural equation analysis: An
illustration using the job perception and job satisfaction constructs. Journal of Management,
25(2), 143-160.
Lawler, E. E. III (1983). Satisfaction and behavior. In R. M. Steers & L. W. Porter (Eds.),
Motivation and work behavior (3rd ed.). New York: McGraw-Hill.
Lee, T. W., Mitchell, T. R., Holtom, B. C., McDaniel, L. S., & Hill, J. W. (1999). The unfolding
model of voluntary turnover: A replication and extension. Academy of Management Journal,
42(4), 450-462.
Leung, S. O. & Sachs, J. (2005). Bhargava and Ishizuka's BI-method: A neglected method for
variable selection. Journal of Experimental Education, 73(4), 353-367.
Locke, E. A. (1969). What is job satisfaction? Organizational Behavior and Human
Performance, 4, 309-336.
Locke, E. A. (1976). The nature and causes of job satisfaction. In M. D. Dunnette (Ed.),
Handbook of industrial and organizational psychology (pp. 1297-1343). Chicago: Rand
McNally.
Loo, R. (2002). A caveat on using single-item versus multiple-item scales. Journal of
Managerial Psychology, 17(1), 68-75.
Loo, R. & Kells, P. (1998). A caveat on using single-item measures. Employee Assistance
Quarterly, 14(2), 75-80.
McElwain, A. K., Korabik, K., & Rosin, H. M. (2005). An examination of gender differences in
work-family conflict. Canadian Journal of Behavioral Science, 37(4), 283-298.
Mesmer-Magnus, J. R. & Viswesvaran, C. (2005). Convergence between measures of work-to-
family and family-to-work conflict: A meta-analytic examination. Journal of Vocational
Behavior, 67, 215-232.
Meyer, J. P., Allen, N. J., & Smith, C. A. (1993). Commitment to organizations and occupations:
Extension and test of a three-component conceptualization. Journal of Applied Psychology,
78, 538-551.
Miles, E. W., Patrick, S. L., & King, W. C. Jr. (1996). Job level as a systemic variable in
predicting the relationship between supervisory communication and job satisfaction. Journal
of Occupational and Organizational Psychology, 69, 277-292.
Millar, M. G. & Tesser, A. (1986). Effects of affective and cognitive focus on the attitude-
behavior relation. Journal of Personality and Social Psychology, 51(2), 270-276.
Moorman, R. H. (1993). The influence of cognitive and affective based job satisfaction measures
on the relationship between satisfaction and organizational citizenship behavior. Human
Relations, 46(6), 759-776.
Motowidlo, S. J. (1996). Orientation toward the job and organization. In K. R. Murphy (Ed.),
Individual differences and behavior in organizations, (pp. 175-208). San Francisco: Jossey-
Bass.
Nagy, M. S. (2002). Using a single-item approach to measure facet job satisfaction. Journal of
Occupational and Organizational Psychology, 75, 77-86.
Nunnally, J. C. & Bernstein, I. H. (1994). Psychometric theory. New York: McGraw-Hill.
Organ, D. W. & Hamner, W. C. (1982). Organizational behavior: An applied psychological
approach. Plano, TX: Business Publications, Inc.
Organ, D. W. & Near, J. P. (1985). Cognition vs affect in measures of job satisfaction.
International Journal of Psychology, 20, 241-253.
Penney, L. M. & Spector, P. E. (2005). Job stress, incivility, and counterproductive work
behavior (CWB): The moderating role of negative affectivity. Journal of Organizational
Behavior, 26, 777-796.
Porter, L. W. & Steers, R. M. (1973). Organizational, work, and personal factors in employee
turnover and absenteeism. Psychological Bulletin, 80(2), 151-176.
Rice, R. W., Gentile, D. A., & McFarlin, D. B. (1991). Facet importance and job satisfaction.
Journal of Applied Psychology, 76(1), 31-39.
Roethlisberger, F. J. & Dickson, W. J. (1939). Management and the worker. Oxford, England:
Harvard University Press.
Russell, S. S., Spitzmuller, C., Lin, L. F., Stanton, J. M., Smith, P. C., & Ironson, G. H. (2004).
Shorter can also be better: The abridged Job In General scale. Educational and Psychological
Measurement, 64(5), 878-893.
Saari, L. M., & Judge, T. A. (2004). Employee attitudes and job satisfaction. Human Resource
Management, 43(4), 395-407.
Sackett, P. R. & Larson, J. R. Jr. (1990). Research strategies and tactics in industrial and
organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial
and organizational psychology, (Vol. 1, pp. 419-489). Palo Alto, CA: Consulting
Psychologist Press, Inc.
Sagie, A. (1998). Employee absenteeism, organizational commitment, and job satisfaction:
Another look. Journal of Vocational Behavior, 52(2), 156-171.
Salancik, G. R., & Pfeffer, J. (1978). A social information processing approach to job attitudes
and task design. Administrative Science Quarterly, 23(2), 224-253.
Scarpello, V. & Campbell, J. P. (1983). Job satisfaction: Are all the parts there? Personnel
Psychology, 36(3), 577-600.
Shaeffer, E. M., Krosnick, J. A., Langer, G. E., & Merkle, D. M. (2005). Comparing the quality
of data obtained by minimally balanced and fully balanced attitude questions. Public Opinion
Quarterly, 69(3), 417-428.
Siu, O., Spector, P. E., Cooper, C. L., & Lu, C. (2005). Work stress, self-efficacy, Chinese work
values, and work well-being in Hong Kong and Beijing. International Journal of Stress
Management, 12(3), 274-288.
Smith, P. C., Kendall L. M., & Hulin, C. L. (1969). The measurement of satisfaction in work and
retirement: A strategy for the study of attitudes. Oxford, England: Rand McNally.
Spector, P. E. (1985). Measurement of human service staff satisfaction: Development of the Job
Satisfaction Survey. American Journal of Community Psychology, 13, 693-713.
Spector, P. E. (1997). Job satisfaction: Application, assessment, causes, and consequences.
Thousand Oaks, CA: Sage Publications.
Spector, P. E. (2000). Industrial and organizational psychology: Research and practice. New
York, NY: John Wiley & Sons.
Staw, B. M., & Cohen-Charash, Y. (2005). The dispositional approach to job satisfaction: More
than a mirage, but not yet an oasis. Journal of Organizational Behavior, 26, 59-78.
Staw, B. M., & Ross, J. (1985). Stability in the midst of change: A dispositional approach to job
attitudes. Journal of Applied Psychology, 70(3), 469-480.
Steel, R. P. & Rentsch, J. R. (1997). The dispositional model of job attitudes revisited: Findings
of a 10-year study. Journal of Applied Psychology, 82(6), 873-879.
Steers, R. M., & Rhodes, S. (1978). Major influences on employee attendance: A process model.
Journal of Applied Psychology, 63, 391-407.
Tabachnick, B. G. & Fidell, L. S. (2001). Using multivariate statistics. Needham Heights, MA:
Allyn & Bacon.
Thoresen, C. J., Kaplan, S. A., Barsky, A. P., Warren, C. R., & de Chermont, K. (2003). The
affective underpinnings of job perceptions and attitudes: A meta-analytic review and
integration. Psychological Bulletin, 129, 914-945.
Thurstone, L. L. (1928). Attitudes can be measured. American Journal of Sociology, 33, 529-
554.
Wagner, S. L. & Rush, M. C. (2000). Altruistic organizational citizenship behavior: Context,
disposition, and age. Journal of Social Psychology, 140(3), 379-391.
Wanous, J. P. & Hudy, M. J. (2001). Single-item reliability: A replication and extension.
Organizational Research Methods, 4(4), 361-375.
Wanous, J. P., Reichers, A. E., & Hudy, M. J. (1997). Overall job satisfaction: How good are
single-item measures? Journal of Applied Psychology, 82(2), 247-252.
Watson, D. & Tellegen, A. (1985). Toward a consensual structure of mood. Psychological
Bulletin, 98(2), 219-235.
Weiss, D. J. (1976). Multivariate procedures. In M. D. Dunnette (Ed.), Handbook of industrial
and organizational psychology, (pp. 327-362). Chicago: Rand McNally College.
Weiss, D. J., Dawis, R., England, G., & Lofquist, L. (1967). Manual for the Minnesota
Satisfaction Questionnaire (Minnesota studies on vocational behavior, Vol. 22).
Minneapolis: University of Minnesota, Industrial Relations Center.
Weiss, H. M. (2002). Deconstructing job satisfaction: Separating evaluations, beliefs, and
affective experiences. Human Resource Management Review, 12, 173-194.
Weiss, H. M., Nicholas, J. P., & Daus, C. S. (1999). An examination of the joint effects of
affective experiences and job beliefs on job satisfaction and variations in affective
experiences over time. Organizational Behavior and Human Decision Processes, 78(1), 1-
24.
Williams, L. J. & Anderson, S. E. (1991). Job satisfaction and organizational commitment as
predictors of organizational citizenship and in-role behaviors. Journal of Management, 17(3),
601-617.
Witt, L. A. & Nye, L. G. (1992). Gender and the relationship between perceived fairness of pay
or promotion and job satisfaction. Journal of Applied Psychology, 77(6), 910-917.
Wright, T. A. & Bonett, D. G. (1992). The effect of turnover on work satisfaction and mental
health: Support for a situational perspective. Journal of Organizational Behavior, 13(6), 603-
615.
Yu, J. H., Albaum, G., & Swenson, M. (2003). Is a central tendency error inherent in the use of
semantic differential scales in different cultures? International Journal of Market Research,
45, 213-228.