
Linking Customer Satisfaction Attributes with Process Metrics in Service Industries

CAMILLE F. DEYONG, OKLAHOMA STATE UNIVERSITY
KENNETH E. CASE, OKLAHOMA STATE UNIVERSITY

© 1998, ASQ

Organizations are under increasing pressure to satisfy customers, while continuously improving processes. Process improvement methodologies emphasize the importance of selecting performance measures that are related to customers. There is, however, the need for a methodology to progress backward from identifying customer needs to identifying key performance measures within a process. This article describes such a methodology. It progresses in a logical sequence from identifying and operationalizing customer needs to identifying process metrics that are linked to customer needs. Application of the methodology in a service industry is detailed.

Key words: process improvement, service quality

INTRODUCTION

Organizations are under increasing pressure to improve performance, while simultaneously "listening to the customer." How does an organization go about controlling quality and performance in a way that meets customer needs? It must identify key process metrics that are linked to customer requirements. This article details a methodology that accomplishes that goal.

Process improvement is a tool used to enhance the quality, performance, and outcomes of a process. It is widely applied, in various forms. While multiple models exist for following a process improvement format, none offers a methodology to explicitly identify the relationship, or link, between customer requirements and process metrics used for guidance and control. Particularly in service industries, this linkage is critical.

Service industries are not identical to goods industries. Some service industries have been described as having no tangible product (Berry 1980; Farsad and Elshennawy 1989; and Lovelock 1981, 5-9). The process of delivering the service is their product (McLuhan 1964). Unlike manufacturing entities that can improve customer satisfaction by improving either the product or the manufacturing process, service industries whose process is their product must rely upon process improvement as their primary avenue for improving customer satisfaction.

The customer is intimately involved in the service process. This trait of service encounters is known as "inseparability" (Regan 1963; Gronroos 1978). In addition, the customer defines quality and customer satisfaction (Juran 1974).
If process improvement offers the primary method to improve the service process, and the customer evaluates the results of the improvement, it is critical that customer requirements be linked with the process metrics used. After all, what is measured in the process is what gets done (Drucker 1954).

This article presents a methodology to establish a logical linkage between customer requirements and process metrics. A review of the literature regarding service industries, process improvement, and customer requirements is discussed. Definitions and terms used in the methodology are presented, and finally the methodology is detailed.

LITERATURE REVIEW

Service Industries

Evaluation of quality and customer satisfaction in services is complex, at best. Not only must customers determine how satisfied they are with the physical output of the service, if any, but they must also evaluate the process by which the service was delivered. Often, the only output of a service is a feeling or experience for customers.

Tenner (1991) states that manufacturing is unique in that (1) its customers are isolated from production; (2) its outputs are tangible; and (3) its operations are highly repetitive. In contrast, services are characterized as possessing three unique characteristics: (1) intangibility; (2) heterogeneity; and (3) inseparability of production and consumption (Farsad and Elshennawy 1989; Gronroos 1978; Lovelock 1981, 5-9; Shostack 1987; Zeithaml, Parasuraman, and Berry 1990). Intangibility refers to the fact that services are performances, not objects. Therefore, precise specifications can rarely be set (Berry 1980; Shostack 1977).

Services, especially those with a high labor content, are heterogeneous: their performance often varies from producer to producer, customer to customer, and day to day (Zeithaml, Parasuraman, and Berry 1988). Uniform quality and consistency of behavior from service personnel are difficult to assure (Booms and Bitner 1981, 47-56), because what the firm intends to deliver may be different from what the customer receives. In labor-intensive services, quality occurs during service delivery, usually in an interaction between clients and contact persons from the firm (Parasuraman, Zeithaml, and Berry 1988).

Services are acts or performances, not things alone (Booms and Nyquist 1981, 172-177). McLuhan (1964) succinctly states that the process of delivering the service is the product. For example, customers typically say "airline" when they mean "air transportation." They say "movie," but mean "entertainment" (Shostack 1987). Parasuraman, Zeithaml, and Berry (1985, 42) state that "quality evaluations are not made solely on the outcome of the service; they also involve evaluation of the process of service delivery." Therefore, process improvement can serve a dual purpose for service organizations; that is, simultaneously improving the process and product.

Process Improvement

Many good process improvement models have been proposed (API 1992; AT&T 1988; Bhote 1991; Chua 1992; Goh 1989; Juran 1988; Kelbaugh 1991; Rummler and Brache 1990; Yoshizawa 1991). Most follow the same general format proposed by AT&T (1988, 1):

1. Establish process management responsibilities.
2. Define the process and identify customer requirements.
3. Define and establish measures.
4. Assess conformance to customer requirements.
5. Investigate the process to identify improvement opportunities.
6. Rank improvement opportunities and set objectives.
7. Improve process quality.

Steps 2 and 3 are both important, since the customer is the ultimate judge of quality and satisfaction, and the process cannot be controlled if there are no measures. There is a need for an organized structure, or methodology, to identify customer requirements and to link these requirements with internal metrics. It is important that these internal metrics be well linked to customer satisfaction attributes.
Customer Requirements

Assessing customer requirements is a complex task. If customer requirements are not identified, however, the organization has no way to assess customer satisfaction. The terminology is not standardized, which makes precise definitions difficult. This article uses the term customer satisfaction attributes (CSAs) to describe customer requirements, or distinguishable characteristics that contribute to overall customer satisfaction (DeYong 1994). Several authors have attempted to organize and define these characteristics.

Nelson's (1970 and 1974) early work distinguishes between two characteristics of consumer goods: search properties and experience properties. Search properties are ways consumers evaluate options. Examples of search properties are color, style, or fit. Experience properties are evaluated only after purchase or during consumption. Examples would be taste and wearability.

Darby and Karni (1973) add a third category to Nelson's two-way classification system: credence qualities. These are characteristics that consumers find impossible to evaluate in normal use. An example would be the surgical removal of an appendix. The consumer has no way to judge whether the removed organ was diseased. The consumer must rely upon the doctor's judgment.

Gronroos (1981, 108-110) notes that service quality is a function of two attributes: technical quality and functional quality. Technical quality refers to what the customer gets as the result of the buyer/seller interaction, while functional quality refers to how the service is rendered. Gronroos notes that, in most cases, functional quality is more important to perceived quality, as long as the tangible quality is satisfactory. In other words, the process of delivering the service has a greater impact on customer satisfaction than the receipt of the tangible product, if there is one.

Lehtinen and Lehtinen (1991) state that quality is produced in the interaction between a customer and elements of the service organization. They list three attributes of service quality: (1) physical quality; (2) corporate quality; and (3) interactive quality. Physical quality relates to physical, tangible aspects of the service. Corporate quality is the company image or profile, while interactive quality results from the interaction between contact personnel and customers, as well as between customers. Lehtinen and Lehtinen also differentiate between quality associated with the process of service delivery and quality associated with the outcome of the service.

Sasser, Olsen, and Wyckoff (1978) list three attributes of service performance: (1) levels of material; (2) facilities; and (3) personnel. They also note that service quality involves more than outcomes; it also includes the manner in which the service was delivered.

Garvin (1988) lists eight attributes of quality, as follows:

1. Performance: primary operating characteristics
2. Features: "bells and whistles" of products that supplement the basic functioning
3. Reliability: probability of a product malfunctioning or failing within a specified time
4. Conformance: degree to which a product's design and operating characteristics meet pre-established standards
5. Durability: measure of product life
6. Serviceability: speed, courtesy, competence, and ease of repair
7. Aesthetics: how a product looks, feels, sounds, tastes, or smells
8. Perceived quality: cues or other signaling devices used to draw inferences about quality

Parasuraman, Zeithaml, and Berry (1985) offer 10 determinants of service quality, as follows:

1. Reliability: consistency of performance and dependability
2. Responsiveness: readiness of employees to provide service; timeliness of service
3. Competence: having the required knowledge and skills to perform the service
4. Access: approachability and ease of contact
5. Courtesy: politeness, respect, consideration, and friendliness


6. Communication: keeping the customers informed in language they understand; listening to the customers
7. Credibility: trustworthiness, believability, honesty
8. Security: avoiding danger, risk, and doubt
9. Understanding/knowing the customer: making efforts to understand customer needs
10. Tangibles: physical characteristics of service

Following additional research, Zeithaml, Parasuraman, and Berry (1990, 25) later renamed and revised this list to include the following five attributes of service quality:

1. Tangibles
2. Reliability
3. Responsiveness
4. Assurance (including competence, courtesy, credibility, and security)
5. Empathy (including access, communication, and understanding customers)

Bhote (1991) and Boehm et al. (1978) define similar customer satisfaction attributes.

Each of these models offers a unique perspective, but each is stated in general terms. Only the models proposed by Garvin (1988) and Parasuraman, Zeithaml, and Berry (1985) emerged from extensive direct contact with customers, while the other models are more theoretical in nature. Finally, the cited research dealt with inputs and outputs of the service system, and did not attempt to model the system. The methodology detailed in this article models the service system, and proposes standardized terms that will help promote clear communication about critical issues.

DEFINITIONS AND TERMINOLOGY

The language in the areas of quality, service quality, customer satisfaction, and customer requirements is not standardized. Definitions for each of these terms vary. This research uses the following definitions:

• Customer: anyone impacted by a product or process (Juran 1988).
• Customer satisfaction: meeting or exceeding customer expectations (Brown and Swartz 1989).
• Customer satisfaction attributes (CSAs): distinguishable characteristics that contribute to overall customer satisfaction (DeYong 1994).
• Link: establish a relationship.
• Measure: a scale, test, or instrument used to evaluate a metric; how the metric is evaluated.
• Measurement: the result of applying the measure to the metric.
• Metric: distinguishable elements that are subject to measure. A metric is "what" is measured.
• Outcome metric (OM): characteristics of the process outcome. The OMs measure some aspect of the performance of the process after the service delivery is completed (DeYong 1994).
• Process: a series of causes and conditions that repeatedly come together to transform inputs into outcomes (API 1992).
• Process metric (PM): characteristics of a step in the process. The PMs measure some aspect of performance while the service delivery process is progressing (DeYong 1994).
• Quality: fitness for use (Juran 1974).
• Satisfaction metric (SM): characteristics of the customer satisfaction attributes that are subject to measure. The SMs measure how well the customer perceives the organization is performing with respect to the CSAs (DeYong 1994).
• Service quality: the gap between customer expectations and customer perceptions (Gronroos 1978; Parasuraman, Zeithaml, and Berry 1985).

The distinction between metric, measure, and measurement is particularly important. A metric is "what" should be measured. A measure is "how" to measure, while a measurement is the result, often numerical, of applying the measure to the metric. This separation enables an organization to focus on the more important concept of "what" to measure before becoming concerned with how the measurement will be obtained. In service industries, in particular, the attribute that needs to be measured may be intangible, making it more difficult to apply traditional measurement techniques.
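Since this three-way distinction carries through the rest of the methodology, a minimal sketch may help fix it; the sketch below is an illustration rather than anything from the original article, and the admissions-flavored names in it are hypothetical.

    # Metric = what is measured; measure = how; measurement = the result.
    from dataclasses import dataclass

    @dataclass
    class Metric:
        name: str      # what is measured
        measure: str   # how it is evaluated

    @dataclass
    class Measurement:
        metric: Metric
        value: float   # result of applying the measure to the metric

    timeliness = Metric(name="time between application and response",
                        measure="days elapsed, taken from the office log")
    obs = Measurement(metric=timeliness, value=12.0)  # hypothetical: 12 days
    print(f"{obs.metric.name}: {obs.value} ({obs.metric.measure})")

Separating the two types keeps the team debating "what" before "how," which is exactly the order of concern the article recommends.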


PROCESS/CUSTOMER SYSTEM

Figure 1 is a model of a service process/customer system. The service process is shown on the left side of the figure. The customer is shown on the figure as interacting multiple times within the process, illustrating the inseparability characteristic of service processes. Four key elements of the methodology are shown on the figure: (1) customer satisfaction attributes (CSAs); (2) satisfaction metrics (SMs); (3) outcome metrics (OMs); and (4) process metrics (PMs). The CSAs are categories customers use to determine their satisfaction, and help the organization structure the elusive concept of customer satisfaction. SMs and OMs are post-process metrics used to gauge the organization's performance with respect to the CSAs. SMs use data obtained directly from customers. OMs use end-of-process data as an indirect metric of customer satisfaction. Both are end-of-process metrics, and only differ in the source of measurement data. PMs are metrics used internally to control the process, and use internal process data.

The PMs, SMs, and OMs are linked to the customer through the CSAs. The service firm must be measuring elements of the service delivery that impact, and that are important to, customers. Metrics send a clear message to a firm's employees about what is important. If a service firm makes the claim that courtesy is important, but measures employee performance based on timeliness, the conflict is clear.

Figure 1 Service process/customer system. [Diagram: the process (start, components 1-3, a condition, end) feeds internal process data to process metrics (PM1-PM4) and end-of-process data to outcome metrics (OM1-OM3); direct customer data feed satisfaction metrics (SM1-SM3); all three metric types link to the CSAs (CSA1-CSA6), which connect to the customer, who interacts with the process throughout. Legend: data stream, link, interaction.]

THE METHODOLOGY

The methodology progresses in four phases, shown in Figure 2. Phase I includes important preliminary activities, which are necessary for successful implementation of the methodology. Phase II operationalizes the CSAs. Phase III develops the SMs and OMs. Phase IV develops the internal PMs.

Some of the critical and unique elements of this methodology are as follows:

• Information is obtained from three primary sources: (1) customers; (2) process "experts" who perform the work; and (3) established research in the field.


• Confidence that those individuals providing service know their customers, and can act as surrogate customers, helping predict their wants and desires, sometimes better than the actual customers.
• Information obtained from the customer is specific, usable, and expressed in the customer's language.
• The methodology systematically links the customer wants, needs, and desires with metrics used to control the process.

An application of the methodology is presented as the phases are introduced. This application is with an office of admissions at a comprehensive state university.

Figure 2 Phases of the methodology.

Phase I. Preliminary activities
  Step 1. Write a mission statement.
  Step 2. Identify a process for study.
  Step 3. Select the team.
  Step 4. Identify the primary customer.
Phase II. Operationalizing the customer satisfaction attributes
  Step 1. Brainstorm CSAs for primary customer.
  Step 2. Customer identifies CSAs.
  Step 3. Study list of dimensions identified by other authors.
  Step 4. Combine lists from steps 1, 2, and 3.
  Step 5. Constitutively define resulting CSAs.
  Step 6. Test definitions.
  Step 7. Customer prioritizes CSAs.
  Step 8. Team of experts prioritizes CSAs for the customer.
  Step 9. Combine the results from steps 7 and 8.
Phase III. Developing satisfaction metrics and outcome metrics
  Step 1. Identify metrics for CSAs.
  Step 2. Identify direct metrics (SMs) and indirect metrics (OMs).
  Step 3. Operationally define metrics.
  Step 4. Evaluate metrics.
Phase IV. Developing process metrics
  Step 1. Identify major components of the process.
  Step 2. Link CSAs to process components.
  Step 3. Identify potential PMs for each component.
  Step 4. Link SMs/OMs with PMs.
  Step 5. Operationally define PMs.
  Step 6. Evaluate PMs.

Phase I

The first phase asks the organization to conduct important preliminary, or foundation, activities. These are not new ideas, yet their importance cannot be overstated. Step 1 asks the organization to write a mission statement, if one does not exist. The mission statement ensures that the organization has agreement on its purpose for existence (Mize 1993).

Step 2 identifies a process for study and improvement. The initial process selected for study should be one in which the customer is easily identified and preferably external. The process of selecting new freshmen for admissions is chosen for study in the application.

Step 3 selects the team that will be involved in the improvement activity. It is important that at least some of the team members be individuals who have direct contact with customers. The team members will be asked to act as "surrogate customers," as well as process experts, and later identify customer requirements.

The team for the admissions office application is comprised of the associate director, two supervisors, and the three individuals who actually process the freshmen applications. The entire staff of 14 is, however, involved at various points. This has three benefits: (1) it keeps everyone in the office informed; (2) valuable input is obtained from multiple sources; and (3) "buy-in" is obtained.

The final step in phase I, step 4, identifies the primary customer of the selected process. This is accomplished by first defining the term customer and then using a structured group process with the improvement team, such as the nominal group technique (NGT), to ask "Who is the primary customer of the freshman admissions process?" The prospective student and family are identified as the primary customer(s) of the admissions process.

Phase II

Phase II operationalizes the CSAs. This critical phase identifies, defines, tests, and prioritizes the CSAs. This phase is the heart of the methodology. If the process is to be linked to the customer through the CSAs, the list of attributes must be as complete and as well-defined as possible. In order to obtain such a list, input for the attributes is obtained from three sources: (1) customers; (2) process experts; and (3) prior research in the field. Multiple viewpoints are critical. It is theoretically impossible to obtain an exhaustive list unless every customer is spoken to, although even this would not be sufficient in many situations.


Often, there are regulatory, safety, or legal issues, which customers may either be unaware of, or profess to not be concerned with. Surrogate customers, or staff personnel, certainly will be aware of these issues. Therefore, surrogate customers provide a unique and invaluable perspective that cannot be obtained elsewhere. Finally, by using a list of attributes obtained through prior research activity, the general perspective of additional customers is included. It is important to note that using just one of the three perspectives would not be sufficient. It is the combination of all three that brings a particular and unique strength to the methodology.

Step 1 asks the team of experts to put themselves in their customer's place and determine what the customer expects from the service provided. A structured group process, such as the NGT, is recommended for this effort.

Step 2 obtains customer input regarding the CSAs. This effort can be conducted in several ways, including customer surveys, focus groups, large NGT sessions, or Delphi sessions. The focus group and large NGT sessions are preferred because of the face-to-face contact that is inherent in both methods.

Step 3 consults a list of attributes identified by other authors. The lists discussed earlier, by Garvin (1988) and Parasuraman, Zeithaml, and Berry (1985), are recommended.

Step 4 combines the lists obtained in steps 1, 2, and 3. This iterative process requires the team of experts to compare, contrast, and combine the three source lists. Emphasis is placed on the information obtained directly from the customer. The other two views are used to confirm and complete the list. There is often considerable overlap between the lists. Results from the application are shown in Figure 3.

Step 5 asks the team to constitutively define the list of CSAs obtained in step 4. Constitutive definitions are descriptive in nature, similar to dictionary definitions. The definitions of the terms must be clear to those who will assess them. The definitions are tested in step 6, by asking a selected group of customers and other experts if their meanings are complete, clear, and unambiguous. For example, the CSA reliability is defined as "the degree to which the admissions office [staff members] do what they say they will do."

Figure 3 Customer satisfaction attributes.

Customer                          | Team                                 | CSA
Don't make mistakes               | Correct information                  | Accuracy
Make a quick decision             | Fast service                         | Timeliness
Let me in                         | Admission decision                   | Reliability
Forms that make sense             | Clear and concise documents          | Tangibles
Don't give me the run-around      | One person to talk to                |
Be nice                           | Caring/polite service                | Impressions
Listen to me                      | Empathy                              | Impressions
Keep confidences                  | Professional/tactful service         | Impressions/credibility
Know what you are doing           | Knowledgeable staff                  | Knowledge
Don't embarrass me if my scores aren't good |                            | Impressions
                                  | Compliance with policies             | Credibility/accuracy
                                  | Assistance with extraneous issues    | Attentiveness
                                  | Alternative counseling/advice        | Attentiveness
                                  | Understand the customer's situation  | Impressions

Steps 7 and 8 ask the customers and the team of experts to prioritize the CSAs. This is an attempt to ascertain if any of the attributes are significantly more important to customers than others. It is recommended that a survey of a random sample of customers be used for step 7.

In addition to being asked to prioritize the attributes, customers are also asked if there are any elements of the service that they expect to receive, which are not contained on the list. In this way, additional input is obtained from a larger population, continuing the effort to obtain an exhaustive list of attributes.

The application used a customer survey, with a population size of 4126, a sample size of 150, and obtained a response rate of 24 percent, consistent with mail surveys. The survey was pre-tested with a subgroup of the target population. Content validity issues, including clarity of directions, clarity of questions, and content overlap, were addressed. The survey used a five-point Likert scale. A survey was sent for both students and parents. Thus, data were obtained from both primary customers.


The team of experts also prioritizes the attributes. This effort can be conducted in a group, or using a survey similar to the customer survey, as occurred in the application. The team members are asked to again place themselves in the customer's position and prioritize the attributes from the customer's perspective. Results from the surveys are shown in Figure 4.

Figure 4 Prioritization survey results.

                 Student (n = 35)    Parent (n = 32)    Staff (n = 14)
CSA              Mean     SD         Mean     SD        Mean     SD
Reliability      4.77     0.49       4.56     0.80      4.64     0.50
Attentiveness    4.71     0.57       4.53     0.62      4.14     0.53
Knowledge        4.69     0.58       4.66     0.55      4.71     0.61
Credibility      4.51     0.74       4.63     0.66      4.07     1.00
Impressions      4.49     0.70       4.22     0.71      4.21     0.70
Timeliness       4.34     0.68       4.44     0.62      4.21     0.89
Accuracy         3.94     0.94       4.13     0.91      3.79     1.05
Tangibles        3.63     0.94       3.59     0.91      3.21     0.70

1 = Unimportant   3 = Important   5 = Extremely important

This information alone is valuable for the organization. It indicates which attributes are important to the sample of customers and which are not. In addition, the information helps the organization assess how well it knows its customers. The results indicate that reliability is an important attribute for all three groups who responded to the survey. Knowledge and credibility are both important to the student/parent groups. Attentiveness is important to the student group, yet was judged as being less important by the staff group. This gap is valuable information for staff and management.

Two of the attributes, accuracy and tangibles, are consistently less important to the three groups. This may indicate the two attributes are not important unless they are absent. Another factor that may have biased the results is the inclusion of the statement referring to "conformance to the standards set by the State Regents for Higher Education" in the descriptive definition. Informal conversations following the survey indicate that customers were not interested in the office of admissions conforming to regents' standards. Customers recognize that these standards exist, but they are not important to them.

Seven of the attributes are used in the succeeding steps in the methodology, omitting tangibles. Accuracy is included, even though it does not seem to be important to customers, since the definition may have biased the results, and the office of admissions must conform to the standards, regardless of what customers want.

The output of phase II is a list of customer satisfaction attributes that are unique to a particular organization and have been constitutively defined and prioritized.
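As an aside, the kind of gap reading described above is easy to automate. The sketch below is an illustration rather than part of the original study: it pools the student and parent means from Figure 4, compares them against the staff means, and flags CSAs where the two diverge. The 0.45 threshold is an arbitrary choice made for the example.

    # Flag CSAs where staff importance ratings diverge from customers',
    # using the group means reported in Figure 4.
    figure4_means = {  # CSA: (student mean, parent mean, staff mean)
        "Reliability":   (4.77, 4.56, 4.64),
        "Attentiveness": (4.71, 4.53, 4.14),
        "Knowledge":     (4.69, 4.66, 4.71),
        "Credibility":   (4.51, 4.63, 4.07),
        "Impressions":   (4.49, 4.22, 4.21),
        "Timeliness":    (4.34, 4.44, 4.21),
        "Accuracy":      (3.94, 4.13, 3.79),
        "Tangibles":     (3.63, 3.59, 3.21),
    }

    GAP = 0.45  # hypothetical threshold for a noteworthy gap
    for csa, (student, parent, staff) in figure4_means.items():
        customer = (student + parent) / 2  # pool the two customer groups
        if abs(customer - staff) >= GAP:
            print(f"{csa}: customers {customer:.2f} vs. staff {staff:.2f}")
    # Flags attentiveness (the gap discussed in the text) and credibility.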


Phase III

Phase III develops metrics that measure the performance of the organization with respect to the CSAs. This performance is measured both directly and indirectly. The direct metrics use information obtained directly from customers, and are termed satisfaction metrics (SMs). The indirect metrics use end-of-process data, and are termed outcome metrics (OMs). Both of these metrics occur after the service has been delivered. A third metric, which measures performance as the process is progressing, is developed in phase IV.

An example of direct and indirect indicators is easily given using weights of fish. The direct indicator of weight is obtained by weighing the fish on a scale. If, however, there is no scale available, then a ruler is used to measure the length of the fish, which is then converted to an estimate of the weight. Length is thus an indirect, or surrogate, metric for weight.
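For readers who want the surrogate made concrete: fisheries practice typically fits an allometric curve for this conversion. The sketch below assumes that standard form; the coefficients are placeholders for illustration, not values from the article.

    # Surrogate metric illustration: estimate weight (the metric of
    # interest) from length (the indirect measure actually taken).
    def estimated_weight(length_cm: float, a: float = 0.01, b: float = 3.0) -> float:
        """Allometric length-weight relationship W = a * L**b.

        The coefficients a and b are hypothetical; in practice they are
        fitted from fish that were both measured and weighed.
        """
        return a * length_cm ** b

    print(estimated_weight(40.0))  # 640.0 grams for a 40 cm fish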
Step 1 asks the team to identify metrics for each CSA from phase II. There is no attempt at this point to separate direct from indirect metrics. In fact, the team is encouraged not to limit itself in any way, but to think of "what could be measured to indicate how our organization is performing with respect to this CSA." A brainstorming session, or a modified NGT, is useful for this step. The outcome of step 1 is a list of potential SMs/OMs that are associated with the CSAs.

After the list of potential metrics has been generated, step 2 asks the team to separate the list into two groups. The first group contains SMs, and requires data obtained directly from customers. The second group contains OMs, and requires data from the process. It should be noted that each CSA could be measured by either an OM or SM.

Step 3 operationally defines the metrics. Deming (1986) states that an operational definition puts communicable meaning into a concept. API (1990) continues with this thought, and adds that an operational definition puts communicable meaning into a concept by specifying how the concept is applied within a particular set of circumstances. API notes that criteria for judgment are necessary in some situations to distinguish between good and bad performance.

Since metrics are "what is measured," it is appropriate that the definitions specify how the metric is to be measured. In the context of metric, measure, and measurement, discussed earlier, the operational definition identifies the measure, or how the metric is to be measured.

SMs are addressed first. Since data for the SMs are obtained directly from customers, some form of customer input (survey, focus group, interview) will be the data-gathering instrument for several of the SMs. The team is essentially building elements of the survey as it specifies definitions for the metrics. Other methods for obtaining data for SMs are focus groups, counts and nature of customer complaints, and counts and nature of customer compliments.

OMs are addressed last. Since they occur at the end of the process, the organization has greater control and flexibility over obtaining the data. In addition, organizations are more familiar with techniques to measure most OMs. Some OMs will be tracked currently, while others are new, and must have a measurement system created. The team may have to be creative and innovative while operationally defining a metric that has never been tracked before.

The outcome of step 3 is a list of operationally defined SMs and OMs associated with each CSA. An example is the operational definition identified for one of the SMs associated with the CSA of reliability: Do students believe the admissions office staff members do what they say they will do? This is measured by a survey or focus group.

Step 4 evaluates the metrics. Four evaluation criteria are suggested (DeYong 1994). They are

1. Ease of measurement
2. Ease of analysis and interpretation
3. Relevance
4. Direct from the customer

Other criteria may be useful in different organizations. Before evaluating the metrics, it is helpful to know which of the criteria are most important. This information is obtained using a priority grid. This tool ranks the criteria, and obtains weights by making pair-wise comparisons of the criteria.
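The article does not spell out the priority grid's arithmetic, so the following is a minimal sketch of the usual pairwise-tally version: the team judgments shown are hypothetical, and the one-plus-wins rescaling is just one convention, so the resulting weights need not match those used in Figure 5.

    # Priority grid sketch: every pair of criteria is compared, the more
    # important criterion of each pair gets a tally, and the tallies
    # become weights for the evaluation worksheet.
    from itertools import combinations

    criteria = ["ease of measurement", "ease of analysis",
                "relevance", "direct from customer"]

    # Hypothetical team judgments: winner of each pairwise comparison.
    winner = {
        ("ease of measurement", "ease of analysis"): "ease of analysis",
        ("ease of measurement", "relevance"): "relevance",
        ("ease of measurement", "direct from customer"): "direct from customer",
        ("ease of analysis", "relevance"): "relevance",
        ("ease of analysis", "direct from customer"): "direct from customer",
        ("relevance", "direct from customer"): "relevance",
    }

    wins = {c: 0 for c in criteria}
    for pair in combinations(criteria, 2):
        wins[winner[pair]] += 1

    # Rescale so no criterion gets a zero weight.
    weights = {c: 1 + w for c, w in wins.items()}
    print(weights)
    # {'ease of measurement': 1, 'ease of analysis': 2,
    #  'relevance': 4, 'direct from customer': 3}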

Figure 5 illustrates a portion of the metric evaluation worksheet. The weights for each criterion, obtained earlier from the priority grid, are placed in the top half of each cell. Thus, the top number in each cell is the same throughout a column. The evaluation scores for each metric are placed in the bottom half of the cell. The metrics are evaluated against the four criteria according to a five-point scale.

After all the evaluation scores have been recorded, the product is taken of the two numbers within each cell and summed across each row. This results in a final evaluation score for the metrics, which can be used to determine which metrics should be chosen.

Figure 5 Metric evaluation worksheet: SMs/OMs for reliability. (Each cell shows the criterion weight over the metric's 1-5 evaluation score.)

CSA          SM/OM   Metric                      Ease of measuring   Ease of analysis   Relevance   Direct from customer   Score
Reliability  SM      Student perception          1 / 2               2 / 4              4 / 5       4 / 5                  50
Reliability  OM      Consistency with policies   1 / 3               2 / 2              4 / 5       4 / 1                  31
Reliability  SM      Input from student          1 / 3               2 / 4              4 / 5       4 / 5                  51
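The worksheet arithmetic reduces to a weighted sum, as the sketch below shows. The weights and scores are taken from the reliability rows of Figure 5, and the computed totals reproduce the figure's final scores of 50, 31, and 51.

    # Metric evaluation worksheet: final score = sum over criteria of
    # (criterion weight x metric's 1-5 evaluation score).
    weights = {"ease of measuring": 1, "ease of analysis": 2,
               "relevance": 4, "direct from customer": 4}

    scores = {  # evaluation scores from Figure 5
        "Student perception (SM)":        {"ease of measuring": 2, "ease of analysis": 4,
                                           "relevance": 5, "direct from customer": 5},
        "Consistency with policies (OM)": {"ease of measuring": 3, "ease of analysis": 2,
                                           "relevance": 5, "direct from customer": 1},
        "Input from student (SM)":        {"ease of measuring": 3, "ease of analysis": 4,
                                           "relevance": 5, "direct from customer": 5},
    }

    for metric, row in scores.items():
        total = sum(weights[c] * row[c] for c in weights)
        print(f"{metric}: {total}")  # prints 50, 31, 51 as in Figure 5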


The outcome of phase III is a list of SMs and OMs for each CSA, complete with operational definitions and evaluation scores. The metrics are defined in terms of the operations required to measure them. The SMs and OMs provide the basis for the team to identify process metrics (PMs), which are linked with the SMs, OMs, and CSAs.

Phase IV

Phase IV identifies and defines the PMs, which completes the link between the customer and the internal process. These metrics are indicators of performance within the process that are explicitly linked to either an SM or an OM, and thus to the CSAs.

Measurement within a process is vital for control of the process. In addition, measurement serves the purposes of prediction, estimation, decision making, and problem solving (of a reactive type) (Sink and Tuttle 1989). Measures occupy a powerful position in organizations. It has been said that "what is measured is what gets done" (Drucker 1954). Therefore, measuring the "right" things is critical. The right things would be those that provide an indication of how well the process is performing with respect to customer expectations. Process metrics will accomplish that.

Phase IV requires the team to follow six steps. In step 1 the team studies the process and identifies the major components, or subprocesses, of the process. A component of a process is defined as a part of the process that has to be planned, designed, supervised, and appraised (Rosander 1989).

For several reasons, a list of major components is used in this step instead of a flowchart. First, the methodology does not require the level of detail a flowchart produces. Second, it is more difficult to show components that permeate the entire process, such as contact with the customer, with a flowchart. Figure 6 shows the components for the office of admissions.

Figure 6 Major components of the process. [Flowchart: receive correspondence; review; add on computer; evaluate transcript (curricular, performance); update computer; decide admissions; update computer; contact student; close file. Customer contact occurs throughout, at the counter, by phone, and by letter.]

Step 2 links the CSAs to the process components. The CSAs are defined as distinguishable characteristics that contribute to overall customer satisfaction. These are categories customers are using to determine how satisfied they are. These attributes are varied in nature. Thus, they are impacted at different points in the process. Step 2 determines the points in the process where each of the attributes is impacted.

The linkage between the CSAs and the process components is identified by viewing, from two directions, the relationship between the attributes and the components. First, each component is studied individually, and then each CSA is compared to the component one at a time. Pair-wise comparisons are more effective than trying to make multiple comparisons simultaneously. Initially, team members may feel every process component affects every CSA. As the analysis progresses, team members become more discriminating. If the team can establish a rationale for a CSA to be linked with a component, it is included.

Next, the relationship is approached from a different perspective. The CSAs are studied individually, and the process components that impact them are identified. Once again, it is easier to compare the concepts pair-wise, make a decision regarding the linkage, and move to the next component. The outcome is a list of components that impact each attribute.

Finally, the results are combined. In many cases, the results will indicate the same relationship between the process components and the CSAs.


There may, however, be cases where the relationships are not identical. The team discusses these situations and reaches consensus regarding the relationship between the CSAs and the process components. The outcome is a list of process components and the CSAs they impact. This begins building the link between the process and the customer.

Step 3 identifies potential PMs for each process component. The team is reminded that a PM is an outcome of a step in the process that is subject to measure. There are many such outcomes. The team is encouraged not to limit its ideas for PMs because of practical matters, such as how the PM could be measured, or what is currently being measured. Metrics merely identify what is measured. The operational definition will specify how the metric will be measured.

Each process component is studied individually. The results of the previous step, CSAs that are impacted by each component, are reviewed. The CSAs provide a focus for the team, while identifying ideas about what to measure for each component. The question is asked: "What should be measured to indicate how well we performed within the component in relation to the CSAs?" Therefore, the PMs that are identified are generated in response to a CSA. Results from the application for the process component "review file" are shown in Figure 7.

Figure 7 Process component "review file," CSAs, and PMs.

Component    | CSA impacted | Process metric
Review file  | Accuracy     | Is the coding correct?
             | Knowledge    | Are the $ verified and deposited correctly?
             | Reliability  | Are special situations/circumstances identified?
             | Timeliness   | Length of time have file.

The outcome of step 3 is a list of potential PMs for each component. These PMs will be linked to the SMs/OMs in step 4, further establishing the link between the process and the customer.

Step 4 completes the linkage between the process and the customer. Step 2 identified a link between the conceptual CSA, the process component, and the process metric. Step 4 links the internal process metric with the satisfaction metrics/outcome metrics, which were linked earlier to the customer satisfaction attributes.

SMs/OMs are direct and indirect indicators of how well the process performed in the customer's eyes after completion of the process, and were obtained from the CSAs. PMs are internal metrics that indicate how well the process is performing as it progresses. If a linkage is established between these metrics, the result is a direct relationship from the process to the customer. The SMs/OMs and PMs are linked by making pair-wise comparisons between each set of metrics, and determining the strength of the relationship, if any, that exists. There are a significant number of comparisons that must be made for this step. Therefore, the procedure is best approached in an organized method that makes it easy to record the strength of the linkages. The linkage matrix obtained in the application is shown in Figure 8.

The process components and their associated PMs from step 3 are listed along the top of the matrix. The CSAs and their associated SMs/OMs are listed along the side. The symbols for the strength of the relationships are shown on the matrix. Relationships between the metrics are judged to be strong, medium, weak, or none. These are indicated on the matrix, using the appropriate symbol.

The SMs/OMs are linked to the PMs in a fashion similar to the way the components and CSAs were linked in step 2, by making pair-wise comparisons. First, the team studies each PM and identifies SMs/OMs impacted by the PM. Then, the team takes another perspective and identifies PMs that impact the SMs/OMs.

The team is looking for two things during this step: first, the linkages between the metrics; and second, whether the lists of SMs/OMs and PMs are complete. PMs were associated with CSAs in step 2. The SMs/OMs are also associated with CSAs. Step 4 provides a final check for the consistency and logic of the linkages. The matrix forces the team to look at the entire system: from CSA to process component to PM, and from CSA to SM/OM to PM. The linkages should be consistent. If not, the earlier linkages must be reviewed. There may be PMs that seem important to the team that do not appear to be linked to any SM/OM and vice versa. In this case, either another metric is developed, or the first metric is deleted.


Figure 8 Linkage matrix. [Full-page matrix. Along the top run the process components (receive correspondence, review, add on computer, evaluate transcript) and their PMs: state and institutional policies applied correctly; logs maintained correctly; length of time have file (two columns); correspondence routed properly; data entered correctly; special circumstances identified; $ verified correctly/deposited correctly; coding correct; daily error list. Along the side run the CSAs and their SMs/OMs: reliability (student perception; consistency with policies; calls/complaints), impressions (student perception; admissions' judgment after interaction), timeliness (student perception; time between application/response; time waiting at counter), credibility (student perception; integrity of documents; state audit results), attentiveness (student perception; time staff spends with non-admissions concerns), knowledge (how well staff knows everything; review of files), and accuracy (computer error list; review of files; analysis of calls and complaints). Cell symbols mark each linkage as strong (3), medium (2), or weak (1); the resulting PM column scores are 8, 5, 17, 5, 17, 7, 6, 21, 27, and 7.]


Next, the team addresses each SM/OM and determines which PMs impact them. In other words, the team approaches the linkage from a different perspective. Another linkage matrix is developed. Pair-wise comparisons are made, and the strength of the linkage is determined. Although these comparisons will have been made before, the alternative viewpoint helps make certain the lists are complete and the comparisons are valid.

Finally, the two matrices are combined. Consensus is reached about linkages between the metrics that are not the same. Each SM/OM is linked with at least one PM. If no linkage is identified, the team brainstorms for additional PMs. The outcome of step 4 is a list of PMs along with a linkage matrix that shows the relationship between the PMs and the SMs/OMs, and, therefore, the CSAs. The PMs that result are operationally defined in step 5 and evaluated in step 6.

The linkage matrix serves three main purposes. First, it organizes and clearly communicates the linkages between metrics. Second, the matrix identifies critical PMs, those that impact several CSAs or SMs/OMs. If internal metrics are limited, the matrix helps determine where resources will be used most effectively. Finally, the matrix serves as a self-coordinating device, forcing the team to review linkages identified earlier and verify their validity.
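The matrix bookkeeping itself is simple to sketch in code. The fragment below uses the figure's strong/medium/weak weighting of 3/2/1; the row and column labels echo Figure 8, but the entries shown are hypothetical, not the published matrix. Column sums surface the critical PMs, and an empty row surfaces an SM/OM that still lacks a supporting PM.

    # Linkage matrix sketch: rows are SMs/OMs, columns are PMs, entries
    # are linkage strengths. Column scores rank PMs by how many customer
    # metrics they support; empty rows flag unsupported SMs/OMs.
    STRENGTH = {"strong": 3, "medium": 2, "weak": 1}

    pms = ["correspondence routed properly", "coding correct",
           "length of time have file", "data entered correctly"]

    links = {  # SM/OM -> {PM: strength}; hypothetical fragment
        "student perception of reliability": {
            "coding correct": "medium",
            "length of time have file": "strong"},
        "time between application/response": {
            "correspondence routed properly": "strong",
            "length of time have file": "strong"},
        "computer error list": {
            "data entered correctly": "strong",
            "coding correct": "weak"},
        "time waiting at counter": {},
    }

    column_scores = dict.fromkeys(pms, 0)
    for sm_om, row in links.items():
        if not row:
            print(f"no PM linked to '{sm_om}': add a PM or drop the metric")
        for pm, strength in row.items():
            column_scores[pm] += STRENGTH[strength]

    for pm, score in sorted(column_scores.items(), key=lambda kv: -kv[1]):
        print(f"{score}  {pm}")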
In the same way the earlier metrics were defined, step 5 operationally defines the PMs. The operational definition again states how the PM is to be measured.

Finally, the PMs are evaluated in step 6. The criteria used to evaluate the PMs are very similar to those used in phase III to evaluate the SMs/OMs, with one exception. An "objective" criterion is included, rather than "direct from the customer." The four criteria are (1) ease of measurement; (2) ease of analysis and interpretation; (3) relevance; and (4) objectivity. Objectivity evaluates how objective, or subjective, the metric is. The definitions for the other three criteria remain the same. Another priority grid is used to determine which criteria are important. Another metric evaluation worksheet is used to record the evaluation results.

The outcome of step 6 is a list of process metrics that are operationally defined and evaluated. They are linked to the customer through the SMs/OMs and thus the CSAs.

DISCUSSION

Organizations using the methodology have a clear idea of what process metrics should be tracked. The next step would be implementation of the measurement system, and continual review of the CSAs, OMs, SMs, and PMs. Today's organizations exist in a dynamic environment, with constantly changing customers, business processes, and pressures. Some form of continuous improvement will necessarily be a way of life.

The attributes and metrics developed with this application were tested and accepted by both customers and owners of the test process. Strengths of the methodology noted by users include the increased knowledge and understanding of the customer, the richness of data obtained regarding customer satisfaction attributes, and the efficiency and structure offered by the methodology. The organization has historically used ad-hoc measures of process performance, so the process metrics provide a measurement system that is accepted by process owners and focused on customers.

CONCLUSION

The methodology detailed in this article provides an efficient and effective mechanism for organizations interested in identifying process metrics that are linked to their customers. Although many tools exist for making linkages, no methodology has been proposed that leads organizations step-by-step from assessing customer attributes to identifying process metrics linked to the attributes. Benefits of the methodology include the following:

• A clear understanding of customer wants, needs, and desires
• A process measurement system linked to the customers' wants, needs, and desires


• A higher degree of acceptance for the measurement system, since it was developed by process owners
• Knowledge of critical steps in the process, and which customer satisfaction attribute is impacted at each step

ACKNOWLEDGMENTS

The authors gratefully acknowledge the thoughtful comments expressed by reviewers of this paper. Their insight strengthened the paper in a significant way. Also, sincere appreciation goes to Devesh Sharad Dahale, who contributed to construction of the figures.

REFERENCES

Associates in Process Improvement (API). 1990. Model to improve quality. Unpublished working paper. Austin, Tex.: Associates in Process Improvement.

———. 1992. Linkage of processes. Unpublished working paper. Austin, Tex.: Associates in Process Improvement.

AT&T. 1988. Process quality management and improvement guidelines, issue 1.1. New York: AT&T.

Berry, L. L. 1980. Services marketing is different. Business (May-June): 24-28.

Bhote, K. R. 1991. Next operation as customer (NOAC). New York: American Management Association.

Boehm, B. W., J. R. Brown, H. Kaspar, M. Lipow, G. J. MacLeod, and M. J. Merritt. 1978. Characteristics of software quality. New York: Elsevier North-Holland.

Booms, B. H., and M. J. Bitner. 1981. Marketing strategies and organization structures for service firms. In Marketing of services, edited by J. H. Donnelly and W. R. George. Chicago: American Marketing Association.

Booms, B. H., and J. L. Nyquist. 1981. Analyzing the customer/firm communication component of the services marketing mix. In Marketing of services, edited by J. H. Donnelly and W. R. George. Chicago: American Marketing Association.

Brown, S. W., and T. A. Swartz. 1989. A gap analysis of professional service quality. Journal of Marketing (April): 92-98.

Chua, R. C. H. 1992. A customer-driven approach for measuring service quality. ASQC Quality Congress Transactions. Milwaukee: ASQ.

Darby, M. R., and E. Karni. 1973. Free competition and the optimal amount of fraud. The Journal of Law and Economics 16, no. 1: 67-88.

Deming, W. E. 1986. Out of the crisis. Cambridge, Mass.: MIT Center for Advanced Engineering Study.

DeYong, C. F. 1994. A methodology for linking customer satisfaction dimensions with process performance metrics. Ph.D. diss., Oklahoma State University.

Drucker, P. F. 1954. The practice of management. New York: Harper and Row.

Farsad, B., and A. K. Elshennawy. 1989. Defining service quality is difficult for service and manufacturing. Industrial Engineering 21, no. 3: 17-19.

Garvin, D. A. 1988. Managing quality. New York: The Free Press.

Goh, T. N. 1989. An efficient empirical approach to process improvement. The International Journal of Quality and Reliability Management 6, no. 1: 7-18.

Gronroos, C. 1978. A service-oriented approach to marketing of services. European Journal of Marketing 12, no. 8: 588-601.

———. 1981. Seven key areas of research according to the Nordic school of service marketing. In Emerging perspectives on service marketing, edited by L. L. Berry, G. L. Shostack, and G. D. Upah. Chicago: American Marketing Association.

Juran, J. M., ed. 1974. Quality control handbook. 3rd ed. New York: McGraw-Hill.

Juran, J. M. 1988. Juran on planning for quality. New York: The Free Press.

Kelbaugh, R. 1991. Customer focused process improvement. Journal for Quality and Participation 14, no. 6: 84-90.

Lehtinen, U., and J. R. Lehtinen. 1991. Two approaches to service quality. The Service Industries Journal 11, no. 3: 287-303.

Lovelock, C. H. 1981. Why marketing management needs to be different for services. In Marketing of services, edited by J. H. Donnelly and W. R. George. Chicago: American Marketing Association.

McLuhan, M. 1964. Understanding media. New York: McGraw-Hill.

Mize, J. H. 1993. Crafting mission statements. Unpublished working paper. Stillwater, Okla.: Oklahoma State University.

Nelson, P. 1970. Information and consumer behavior. Journal of Political Economy 78, no. 2: 311-329.

———. 1974. Advertising as information. Journal of Political Economy 82, no. 4: 729-754.

Parasuraman, A., V. A. Zeithaml, and L. L. Berry. 1985. A conceptual model of service quality and its implications for future research. Journal of Marketing (fall): 41-50.

———. 1988. SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing 64, no. 1: 12-37.

Regan, W. J. 1963. The service revolution. Journal of Marketing (July): 57-62.

Rosander, A. C. 1989. The quest for quality in services. Milwaukee: ASQC Quality Press.

Rummler, G. A., and A. P. Brache. 1990. Improving performance. San Francisco: Jossey-Bass.


Sasser, W. E., Jr., R. P. Olsen, and D. D. Wyckoff, eds. 1978. Management of service operations. Boston: Allyn and Bacon.

Shostack, G. L. 1977. Breaking free from product marketing. Journal of Marketing (April): 73-80.

———. 1987. Service positioning through structural change. Journal of Marketing (January): 35-43.

Sink, D. S., and T. C. Tuttle. 1989. Planning and measurement in your organization of the future. Norcross, Ga.: Industrial Engineering and Management Press.

Tenner, A. R. 1991. Quality management beyond manufacturing. Research-Technology Management 34, no. 5: 27-32.

Yoshizawa, M. 1991. Leadership through quality: New employee quality training participant workbook. New York: Xerox.

Zeithaml, V. A., A. Parasuraman, and L. L. Berry. 1988. Communication and control processes in the delivery of service quality. Journal of Marketing (April): 35-58.

———. 1990. Delivering quality service. New York: The Free Press.

BIOGRAPHIES

Camille Frye DeYong is an assistant professor of industrial engineering and management at Oklahoma State University. She has taught at the University of Central Oklahoma, and has two years of industrial experience in the transportation services industry. She is an ASQ Certified Quality Engineer, and served as an examiner for the Oklahoma Quality Award in 1994.

DeYong's research interests are in the areas of total quality management, economic analysis, and service quality. She has consulted and performed research in performance metrics and customer satisfaction measurements. Client service organizations include the Oklahoma Department of Transportation, the U.S. Army Matériel Systems Analysis Activity, and the city of Stillwater. She has presented numerous seminars on the topics of identifying performance metrics, strategic planning, and life cycle costing.

DeYong is the author of several technical papers and one book chapter. She earned a doctorate in industrial engineering and management at Oklahoma State University. She may be contacted via E-mail: deyong@okway.okstate.edu.

Kenneth E. Case is a regents professor of industrial engineering and management at Oklahoma State University. He has taught at both OSU and Virginia Tech. He is a Fellow of ASQ and the Institute of Industrial Engineers. He is also an ASQ Certified Quality Engineer, Certified Quality Manager, Certified Reliability Engineer, and Certified Quality Auditor. He is a member of the National Academy of Engineering and an academician and board member in the International Academy of Engineering.

Case has published over 100 articles and has coauthored three books, one of which won the Joint Publishers-IIE Book of the Year Award. He has received multiple teaching awards, both at the university and the national levels, and has presented numerous ASQ seminars on topics that include statistical process control, quality engineering, and quality costs. His research interests are in quality and reliability engineering, economic analysis, and production planning and control.

Case earned a doctorate in industrial engineering and management at Oklahoma State University. He may be contacted via E-mail: kcase@okway.okstate.edu.

Both DeYong and Case may also be contacted at the School of Industrial Engineering and Management, 322 Engineering North, Oklahoma State University, Stillwater, OK 74078; 405-744-6055, Fax: 405-744-6187.
