THE JOURNAL OF SERVICES MARKETING, VOL. 13 NO. 2 1999, pp. 171-186 © MCB UNIVERSITY PRESS, 0887-6045
systems. Variations of the TQM framework, including total quality service
(Albrecht, 1991) and the total quality process (Coffey et al., 1991), have
been adopted by educational institutions to bolster eroding competitive
positions. Some universities have attempted to obtain ISO 9000-type quality
process designations to attract students to particular degree programs. The
key theoretical linkage here is that customer satisfaction is affected by
perceived quality (Anderson and Sullivan, 1993; Cronin and Taylor, 1992;
Oliver and De Sarbo, 1988) which in turn affects corporate profitability
(Anderson et al., 1994; Rust and Zahorik, 1993). What is clear from the
literature is that perceived quality of service in higher education is of
paramount strategic importance (Peters, 1992; Bemowski, 1991). While
there are a number of relevant markets that the university must consider
when assessing perceived quality, this study will focus on the perceptions of
the students of business in a cross-cultural setting.
Service attributes
In order to assess the perceptions of service quality, the model most often
utilized has been SERVQUAL developed by Parasuraman et al. (1985) and
refined in 1988. The SERVQUAL questionnaire presents the respondent
with a series of service attributes which they rate using a Likert-type scale
response format. The 22 attributes which are included are grouped into five
underlying dimensions:
(1) tangibles,
(2) reliability,
(3) responsiveness,
(4) assurance, and
(5) empathy.
The respondent is asked first to provide their ratings for an excellent service
firm of the type in question, which is followed by their ratings for the actual
service which they received. The difference between these perceptual ratings
on the 22 service attributes then identifies the potential “gaps” where the
respondent experiences disconfirmed expectations (Parasuraman et al.,
1988). This then indicates strategic options for service firms which can shore
up their quality images with their customers. SERVQUAL has been
successfully adapted in an educational context (e.g. Ford et al., 1993);
however, a series of criticisms of the SERVQUAL model have been raised
which focus on:
(1) the potential inappropriateness of the five dimensions of choice criteria
used by SERVQUAL (Cronin and Taylor, 1992; Carman, 1990);
(2) the instability of expectations over time (Carman, 1990); and
(3) the lack of prior knowledge and experience with university education
and the unrealistic expectations of incoming university students
(Chapman, 1979).
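Before turning to the alternative adopted here, the gap computation that these criticisms target can be sketched in a few lines; a minimal illustration with invented attribute names and Likert-type ratings, not the instrument's actual 22 items:

```python
# Sketch of the SERVQUAL gap computation: for each attribute the gap is
# the perception (actual service) rating minus the expectation (excellent
# firm) rating, both on a Likert-type scale. Dimension names follow the
# five SERVQUAL dimensions; all ratings below are illustrative only.

# attribute -> (dimension, expectation rating, perception rating)
ratings = {
    "modern equipment":        ("tangibles",      6.2, 4.8),
    "visually appealing site": ("tangibles",      5.5, 5.0),
    "service as promised":     ("reliability",    6.8, 5.1),
    "prompt service":          ("responsiveness", 6.4, 4.9),
    "courteous staff":         ("assurance",      6.5, 5.9),
    "individual attention":    ("empathy",        5.9, 4.2),
}

# Per-attribute gaps: negative values mark disconfirmed expectations.
gaps = {attr: perc - exp for attr, (dim, exp, perc) in ratings.items()}

# Average the attribute gaps within each dimension.
dim_totals = {}
for attr, (dim, exp, perc) in ratings.items():
    dim_totals.setdefault(dim, []).append(perc - exp)
dim_gaps = {dim: sum(v) / len(v) for dim, v in dim_totals.items()}

# Largest negative dimension gaps first: the weakest quality dimensions.
for dim, gap in sorted(dim_gaps.items(), key=lambda kv: kv[1]):
    print(f"{dim:15s} {gap:+.2f}")
```

Negative dimension averages flag disconfirmed expectations, which is exactly the signal the "gaps" analysis above relies on.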
As a result of these criticisms, an alternative method of assessing service
quality is employed in this study which is based on the importance/
performance paradigm. It is reasonable to assume that when students
evaluate the quality of their educational experience, they are likely to place
different importance weights on different criteria. Martilla and James (1977)
developed a two-dimensional graphical representation which demonstrates
the mean importance and performance ratings for the attributes used to assess
perceived service quality.
Methodology
Educational institution
The questionnaires used for this study were derived through a two-stage
assessment process. The New Zealand questionnaire was developed first, and the first
stage involved focus groups of six to ten individuals from different
representative groups of New Zealand business students, who identified a
series of appropriate attributes for educational institution assessment. These
attributes were then used to develop a questionnaire which was sent to a
random sample of New Zealand university students enrolled in their final
year of study. The questionnaire contained four sections which assessed:
(1) the student’s perceptions of an excellent university,
(2) the importance rankings for university attributes,
(3) the student’s perceptions of their own university, and
(4) a series of respondent demographic items.
This methodology could be used just as effectively by other service
industries. A series of focus groups with past, present and potential hospital
patients could produce a set of key attributes on which to rate various hospitals.
The key to the success of this process is the careful selection of the
participants for the focus groups, to ensure that only representative
individuals are helping in the identification of relevant evaluation criteria.
Location (factor 5)
L1: The university has an ideal location.
L2: The university has an excellent campus layout and appearance.
Time (factor 6)
T1: The university allows an acceptable amount of time to complete the
degree.
Results
Analysis of the New Zealand data
Tertiary educational institutions
Factor analysis was run to understand the factor structure of the New
Zealand sample service evaluation attributes for tertiary educational
institutions. The data from only the New Zealand sample were analyzed in
this factor analysis. Since one of the tasks was to examine the attributes and
possible underlying dimensions for the different samples, it was assumed
that the combining of the two samples into a pool for data analysis would be
inappropriate. The rotated factor scores for the model of the important
choice criteria can be seen in Table I. This table shows that there are strong
loadings for the factors in the seven-factor solution. The eigenvalue for
factor seven is just below one, but 59 percent of the total variance is
attributable to the first seven factors. Thus, a model with seven factors may
be adequate to represent the data for this particular sample of New Zealand
business students. There were two cross-loadings worth mentioning:
(1) specialist programmes (from programme issues – factor 1) also had a
high cross-loading on factor 2 (academic reputation); and
(2) cost of education (from physical aspects/cost – factor 3) also had a high
cross-loading on factor 4 (career opportunities).
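The extraction behind a rotated solution of this kind can be sketched with a principal-component factoring of the attribute correlation matrix; the data below are random stand-ins for a respondents-by-attributes ratings matrix, and the varimax rotation used for the reported solution is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 300 respondents rating 20 attributes on a 1-5 scale
# (in the study these would be the importance ratings for the choice criteria).
X = rng.integers(1, 6, size=(300, 20)).astype(float)

# Principal-component factoring of the correlation matrix.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]                 # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 7                                             # seven-factor solution
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])  # unrotated loadings

# Share of total variance attributable to the first k factors
# (59 percent in the New Zealand sample).
explained = eigvals[:k].sum() / eigvals.sum()

# Flag potential cross-loadings: attributes loading above a threshold
# on two or more factors.
cross = np.where((np.abs(loadings) > 0.3).sum(axis=1) >= 2)[0]
print(loadings.shape, round(float(explained), 2))
```

Attributes flagged in `cross` would correspond to cross-loadings such as the two noted above.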
Location (factor 5)
Ideal location 0.55615
Excellent campus layout and appearance 0.60581
Time (factor 6)
Acceptable length of time to complete degree 0.89381
Other (factor 7)
Family and peers influence university choice 0.84187
Word of mouth influences choice of university 0.81399
Summary of means
Category    Importance    Performance    Performance minus importance
Physical aspects
Accommodation facilities 3.69 2.971 –0.725
Academic facilities 4.716 3.660 –1.056
Campus layout and appearance 4.002 3.734 –0.268
Sports and recreational facilities 3.998 3.600 –0.398
Cost/time
Length of degree 3.908 3.998 0.09
Cost of accommodation 3.747 2.680 –1.067
Cost of education 3.662 2.892 –0.77
Academic issues
Reputable degree 4.667 3.817 –0.856
Excellent instructors 4.705 3.370 –1.335
Programme issues
Specialist programmes 4.228 3.541 –0.687
Flexible structure and content 4.242 3.474 –0.768
Practical component 4.096 3.182 –0.914
Options available 4.370 3.838 –0.532
Flexibility to move within school of study 4.143 3.374 –0.769
Flexible entry requirements 3.102 3.305 0.203
Career opportunities
Employable graduates 4.120 3.468 –0.652
Information on career opportunities 4.413 3.653 –0.76
Location
Ideal location 3.575 3.630 0.055
Other
Word of mouth 3.249 2.982 –0.267
Family and peers 3.251 2.926 –0.325
Table III. Comparison of respondent mean importance scores among the factor
groupings
[Figure: Importance-performance grid for the New Zealand sample. Importance (vertical axis, 3 to 4.5) is plotted against performance (horizontal axis, 2 to 4, fair to good), with crosshairs at the overall means (importance 3.95, performance 3.41) and the lower quadrants labelled low priority and possible overkill. Key: A = career opportunities, B = cost/time, C = physical aspects, D = programme issues, E = location, F = academic reputation, G = other.]
The crosshair values used to partition the grid were the overall importance
and performance means. From a strategic point
of view this grid provides a tool for strategy development as it gives a clear
picture of the factors that are critical for resource allocation. Practically, it
appears as though the only area for the New Zealand universities to address
is the downplaying of the locational attributes and possibly the bolstering of
cost/time factor attributes.
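The quadrant assignment underlying the Martilla and James (1977) grid can be sketched as follows; the crosshairs are the overall importance and performance means, and the attribute scores here are illustrative rather than the study's factor means:

```python
# Sketch of importance-performance quadrant assignment (Martilla and
# James, 1977). Crosshairs are the overall importance and performance
# means; the attribute scores below are illustrative, not the study's.

items = {
    "excellent instructors": (4.7, 3.4),   # (importance, performance)
    "ideal location":        (3.6, 3.6),
    "cost of education":     (3.7, 2.9),
    "options available":     (4.4, 3.8),
}

imp_mean = sum(i for i, p in items.values()) / len(items)
perf_mean = sum(p for i, p in items.values()) / len(items)

def quadrant(importance, performance):
    """Classify an attribute relative to the grand-mean crosshairs."""
    if importance >= imp_mean:
        return "concentrate here" if performance < perf_mean else "keep up the good work"
    return "low priority" if performance < perf_mean else "possible overkill"

for name, (i, p) in items.items():
    print(f"{name:22s} {quadrant(i, p)}")
```

High-importance, low-performance items fall in the "concentrate here" quadrant, which is where scarce resources would be directed first.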
Potential perceptual problem
The next step in the analysis was to examine the sample responses across
the factor items to assess the student perceptions of the attributes, the
importance of the factor groupings, and the perceptions of the students of
their own university. The results can be found in Table V. It is quite clear
when examining Table V that the US students have a potential perceptual
problem with their particular university. The only area where the students
appear to be getting what they expected is in terms of flexible admissions in
the Other factor (0.48 score). This was the single attribute that showed the
greatest positive level in the New Zealand sample as well. The biggest
problem areas appear to be the condition of the university academic facilities
(–1.48), the reputation of the degree from the university (–1.38), the
condition of the housing and services provided (–1.08), and the cost of
housing (–1.06). It is important to note here that specific perceptual
problems are identifiable using this methodology which then can be
improved to help the student develop a more positive view of the university.
This is especially important if the perceived rank importance of the factors
themselves is taken into consideration. These are found in Table III. The
order of importance for the US students is:
(1) academic reputation;
(2) cost/time issues;
(3) program issues;
(4) other;
(5) physical aspects; and finally
(6) choice influencers.
This assessment is helpful to temper the findings from Table V, since it
would probably not be wise to focus too heavily on attributes which are not
considered to be of high importance.

Category    Importance    Performance    Performance minus importance
Program issues
Range of programs of study 4.52 3.62 –0.90
Flexibility of degree program 4.17 3.39 –0.78
Major change flexibility 4.0 3.39 –0.61
Range of degree options 4.5 3.84 –0.66
Physical aspects
Sports programs 3.89 3.14 –0.75
Housing and services provided 3.89 2.81 –1.08
Campus layout and appearance 4.14 3.3 –0.84
Academic facilities 4.64 3.16 –1.48
Academic reputation
Quality of instructors 4.7 3.49 –1.21
Degree reputation 4.45 3.07 –1.38
Career placement information 4.67 4.03 –0.64
Internship/practical component 4.5 3.53 –0.97
Cost/time issues
Reasonable cost of education 3.55 3.04 –0.51
Reasonable time to degree 4.03 3.87 –0.16
Reasonable cost of housing 3.98 2.92 –1.06
Choice influencers
Influence from family members 3.5 2.62 –0.88
Influence of word-of-mouth 3.24 2.6 –0.64
Other
Flexible admissions 2.85 3.33 0.48
Ideal location 3.41 2.99 –0.42
Graduates are employable 4.08 3.29 –0.79
[Figure: Importance-performance grid for the US sample, with crosshairs at the overall importance and performance means (4.0355 and 3.2115) and the lower quadrants labelled low priority and possible overkill. Key: A = programme issues, B = physical aspects, C = academic reputation, D = cost/time, E = choice influencers, F = other.]
It is interesting to note, however, that the factors themselves did not correspond.
There was a significantly different factor structure. The necessary models for
examining service quality in educational institutions would contain the same
attributes, but these attributes would group differently into underlying
dimensions. This is important from a strategic perspective. First, it would
suggest that trying to develop a single model of important factors to apply
cross-culturally might be a mistake. It is therefore possible that the attributes
might be appropriate for cross-cultural settings but the factor structures
would need to be assessed for each country sample without any a priori
attempts to partition them into underlying factors. A questionnaire could
certainly be developed which might be tested across countries/cultures, but
there should be no designation of grouped factors for purposes of importance
weightings as would be found in the SERVQUAL approach used by
Parasuraman et al. (1991). Each individual attribute should therefore be
ranked for importance weighting purposes. It would then be possible to look
at weighted factor rankings after the factor structure was determined for any
new country/culture sample. It would also be prudent to assess the
applicability of the attributes themselves in new survey locations to ensure
their appropriateness in any final assessment.
Relevant for any service industry
The resulting models and grids that can come from this process can be
strategically relevant for any service industry competitor. This type of
assessment vehicle is a promising tool for university administrators, since it
provides a mechanism by which past, current, and potential service
consumers’ perceptions can be examined, and it allows possible
corrective actions to be taken to address perceptual problems. This
potentially could help any service provider to improve its image to the point
where the service consumer actually changes from a negative or neutral
perception to a positive perception of their overall service experience. This
could be an effective competitive tool in any highly competitive service
market situation, and in this case, for institutions of higher education. It
might also be possible for the service provider, through periodic use of this
assessment process, to track changes in consumer perceptions over time.
References
Albrecht, K. (1991), “Total quality service”, Executive Excellence, Vol. 8 No. 7, July,
pp. 18-19.
Anderson, E. and Sullivan, M. (1993), “The antecedents and consequences of customer
satisfaction for firms”, Marketing Science, Vol. 12, pp. 125-43.
Anderson, E., Fornell, C. and Lehmann, D. (1994), “Customer satisfaction, market share and
profitability: findings from Sweden”, Journal of Marketing, Vol. 58, July, pp. 53-66.
Bemowski, K. (1991), “Restoring the pillars of higher education”, Quality Progress, October,
pp. 37-42.
Berry, L.L. and Parasuraman, A. (1991), Marketing Services: Competing through Quality, The
Free Press, New York, NY.
Carman, J.M. (1990), “Consumer perceptions of service quality: an assessment of the
SERVQUAL dimensions”, Journal of Retailing, Vol. 66 No. 1, pp. 33-55.
Chapman, R. (1979), “Pricing policy and the college choice process”, Research in Higher
Education, Vol. 10, pp. 37-57.
Coffey, R.J., Eisenberg, M., Gaucher, E.M. and Kratochwill, E.W. (1991), “Total quality
progress at the University of Michigan Medical Center”, Journal for Quality &
Participation, January/February, pp. 22-31.
Cronin, J.J. Jr and Taylor, S.A. (1992), “Measuring service quality: a re-examination and
extension”, Journal of Marketing, Vol. 56, July, pp. 55-68.
Ennew, C., Reed, G. and Binks, M. (1993), “Importance-performance analysis and the
measurement of service quality”, European Journal of Marketing, Vol. 27 No. 2,
pp. 59-70.
Ford, J., Joseph, M. and Joseph, B. (1993), “Service quality in higher education: a comparison of
universities in the United States and New Zealand using SERVQUAL”, Enhancing
Knowledge Development in Marketing, Proceedings of the American Marketing
Association Annual Summer Educators’ Conference, Vol. 4, pp. 75-81.