process, and the customer evaluates the results of the improvement, it is critical that customer requirements be linked with the process metrics used. After all, what is measured in the process is what gets done (Drucker 1954). This article presents a methodology to establish a logical linkage between customer requirements and process metrics. A review of the literature regarding service industries, process improvement, and customer requirements is discussed. Definitions and terms used in the methodology are presented, and finally the methodology is detailed.

service personnel are difficult to assure (Booms and Bitner 1981, 47–56), because what the firm intends to deliver may be different from what the customer receives. In labor-intensive services, quality occurs during service delivery, usually in an interaction between clients and contact persons from the firm (Parasuraman, Zeithaml, and Berry 1988).

Services are acts or performances, not things alone (Booms and Nyquist 1981, 172–177). McLuhan (1964) succinctly states that the process of delivering the service is the product. For example, customers typically say "airline" when they mean "air transportation."

Three attributes of service quality are distinguished: (1) physical quality; (2) corporate quality; and (3) interactive quality. Physical quality relates to physical, tangible aspects of the service.

It is important that these internal metrics be well linked to customer satisfaction attributes. The attribute that needs to be measured may be intangible, making it more difficult to apply traditional measurement techniques.

PROCESS/CUSTOMER SYSTEM

Figure 1 is a model of a service process/customer system. The service process is shown on the left side of the figure. The customer is shown on the figure as interacting multiple times within the process, illustrating the inseparability characteristic of service processes. Four key elements of the methodology are shown on the figure: (1) customer satisfaction attributes (CSAs); (2) satisfaction metrics (SMs); (3) outcome metrics (OMs); and (4) process metrics (PMs). The CSAs are categories customers use to determine their satisfaction, and help the organization structure the elusive concept of customer satisfaction. SMs and OMs are post-process metrics used to gauge the organization's performance with respect to the CSAs. SMs use data obtained directly from customers. OMs use end-of-process data as an indirect metric of customer satisfaction. Both are end-of-process metrics, and only differ in the source of measurement data. PMs are metrics used internally to control the process, and use internal process data.

The PMs, SMs, and OMs are linked to the customer through the CSAs. The service firm must be measuring elements of the service delivery that impact, and that are important to, customers. Metrics send a clear message to a firm's employees about what is important. If a service firm makes the claim that courtesy is important, but measures employee performance based on timeliness, the conflict is clear.

Figure 1 Model of the service process/customer system. © 1998, ASQ

THE METHODOLOGY

The methodology progresses in four phases, shown in Figure 2. Phase I includes important preliminary activities, which are necessary for successful implementation of the methodology. Phase II operationalizes the CSAs. Phase III develops the SMs and OMs. Phase IV develops the internal PMs.

Some of the critical and unique elements of this methodology are as follows:

• Information is obtained from three primary sources: (1) customers; (2) process "experts" who perform the work; and (3) established research in the field.
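The four elements defined above and their linkages can be sketched as a small data model. This is an illustrative sketch only; all class, attribute, and metric names below are hypothetical, not from the article.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the measurement system described above.
# All names are hypothetical; the article defines only the concepts.

@dataclass
class Metric:
    name: str
    data_source: str  # "customer" (SM), "end-of-process" (OM), or "in-process" (PM)

@dataclass
class CSA:
    """A customer satisfaction attribute, e.g. timeliness."""
    name: str
    sms: list = field(default_factory=list)  # satisfaction metrics: data direct from customers
    oms: list = field(default_factory=list)  # outcome metrics: end-of-process data
    pms: list = field(default_factory=list)  # process metrics: internal, in-process data

timeliness = CSA("timeliness")
timeliness.sms.append(Metric("student perception of speed", "customer"))
timeliness.oms.append(Metric("time between application and response", "end-of-process"))
timeliness.pms.append(Metric("time the file is held at each step", "in-process"))

# A PM reaches the customer only through the CSA it is linked to.
for pm in timeliness.pms:
    print(f"PM '{pm.name}' -> CSA '{timeliness.name}'")
```

The point of the structure is the one the article makes: PMs carry no customer meaning on their own; they acquire it only through the CSA that links them to SMs and OMs.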
Figure 2 Phases of the methodology.

Phase I. Preliminary activities
Step 1. Write a mission statement.
Step 2. Identify a process for study.
Step 3. Select the team.
Step 4. Identify the primary customer.

Phase II. Operationalizing the customer satisfaction attributes
Step 1. Brainstorm CSAs for primary customer.
Step 2. Customer identifies CSAs.
Step 3. Study list of dimensions identified by other authors.
Step 4. Combine lists from Steps 1, 2, and 3.
Step 5. Constitutively define resulting CSAs.
Step 6. Test definitions.
Step 7. Customer prioritizes CSAs.
Step 8. Team of experts prioritizes CSAs for the customer.
Step 9. Combine the results from Steps 7 and 8.

Phase III. Developing satisfaction metrics and outcome metrics
Step 1. Identify metrics for CSAs.
Step 2. Identify direct metrics (SMs) and indirect metrics (OMs).
Step 3. Operationally define metrics.
Step 4. Evaluate metrics.

Phase IV. Developing process metrics
Step 1. Identify major components of the process.
Step 2. Link CSAs to process components.
Step 3. Identify potential PMs for each component.
Step 4. Link SMs/OMs with PMs.
Step 5. Operationally define PMs.
Step 6. Evaluate PMs.

Step 1 writes a mission statement, if one does not exist. The mission statement ensures that the organization has agreement on its purpose for existence (Mize 1993).

Step 2 identifies a process for study and improvement. The initial process selected for study should be one in which the customer is easily identified and preferably external. The process of selecting new freshmen for admissions is chosen for study in the application.

Step 3 selects the team that will be involved in the improvement activity. It is important that at least some of the team members be individuals who have direct contact with customers. The team members will be asked to act as "surrogate customers," as well as process experts, and later identify customer requirements.

The team for the admissions office application is composed of the associate director, two supervisors, and the three individuals who actually process the freshmen applications. The entire staff of 14 is, however, involved at various points. This has three benefits: (1) it keeps everyone in the office informed; (2) valuable input is obtained from multiple sources; and (3) "buy-in" is obtained.

The final step in phase I, step 4, identifies the primary customer.
Customer input alone is not sufficient in many situations. Often, there are regulatory, safety, or legal issues, which customers may either be unaware of, or profess to not be concerned with. Surrogate customers, or staff personnel, certainly will be aware of these issues. Therefore, surrogate customers provide a unique and invaluable perspective that cannot be obtained elsewhere. Finally, by using a list of attributes obtained through prior research activity, the general perspective of additional customers is included. It is important to note that using just one of the three perspectives would not be sufficient. It is the combination of all three that brings a particular and unique strength to the methodology.

Step 1 asks the team of experts to put themselves in their customer's place and determine what the customer expects from the service provided. A structured group process, such as the NGT, is recommended for this effort.

Step 2 obtains customer input regarding the CSAs. This effort can be conducted in several ways, including customer surveys, focus groups, large NGT sessions, or Delphi sessions. The focus group and large NGT sessions are preferred because of the face-to-face contact that is inherent in both methods.

Figure 3 Customer satisfaction attributes.

Customer                       Team                            CSA
Don't make mistakes            Correct information             Accuracy
Make a quick decision          Fast service                    Timeliness
Let me in                      Admission decision              Reliability
Forms that make sense          Clear and concise documents     Tangibles
Don't give me the run-around   One person to talk to
Be nice                        Caring/polite service           Impressions
Listen to me                   Empathy                         Impressions
Keep confidences               Professional/tactful service    Impressions/credibility
Know what you are doing        Knowledgeable staff             Knowledge
Don't embarrass me if my                                       Impressions
  scores aren't good
                               Compliance with policies        Credibility/accuracy
                               Assistance with extraneous      Attentiveness
                                 issues
                               Alternative counseling/advice   Attentiveness
                               Understand the customer's       Impressions
                                 situation
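The consolidation that produces a table like Figure 3 can be sketched in a few lines. The raw phrases and the phrase-to-CSA mapping below are illustrative; in practice the team of experts makes these judgments by discussion, not by lookup, and customer wording is given priority.

```python
# Sketch of combining the three source lists (customer, team, literature)
# into one deduplicated CSA list. All entries here are illustrative.

customer_list = ["don't make mistakes", "make a quick decision", "be nice"]
team_list = ["correct information", "fast service", "caring/polite service"]
literature_list = ["reliability", "tangibles", "empathy"]

# Hypothetical mapping of raw phrases to candidate CSA categories.
category_of = {
    "don't make mistakes": "accuracy",
    "correct information": "accuracy",
    "make a quick decision": "timeliness",
    "fast service": "timeliness",
    "be nice": "impressions",
    "caring/polite service": "impressions",
    "reliability": "reliability",
    "tangibles": "tangibles",
    "empathy": "impressions",
}

combined = []
# Customer list first, so customer-derived categories lead the result.
for source in (customer_list, team_list, literature_list):
    for phrase in source:
        csa = category_of[phrase]
        if csa not in combined:
            combined.append(csa)

print(combined)  # ['accuracy', 'timeliness', 'impressions', 'reliability', 'tangibles']
```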
Step 3 consults a list of attributes identified by other authors. The lists discussed earlier, by Garvin (1988) and Parasuraman, Zeithaml, and Berry (1985), are recommended.

Step 4 combines the lists obtained in steps 1, 2, and 3. This iterative process requires the team of experts to compare, contrast, and combine the three source lists. Emphasis is placed on the information obtained directly from the customer. The other two views are used to confirm and complete the list. There is often considerable overlap between the lists. Results from the application are shown in Figure 3.

Step 5 asks the team to constitutively define the list of CSAs obtained in step 4. Constitutive definitions are descriptive in nature, similar to dictionary definitions. The definitions of the terms must be clear to those who will assess them. The definitions are tested in step 6, by asking a selected group of customers and other experts if their meanings are complete, clear, and unambiguous. For example, the CSA reliability is defined as "the degree to which the admissions office [staff members] do what they say they will do."

Steps 7 and 8 ask the customers and the team of experts to prioritize the CSAs. This is an attempt to ascertain if any of the attributes are significantly more important to customers than others. It is recommended that a survey of a random sample of customers be used for step 7.

In addition to being asked to prioritize the attributes, customers are also asked if there are any elements of the service that they expect to receive, which are not contained on the list. In this way, additional input is obtained from a larger population, continuing the effort to obtain an exhaustive list of attributes.

The application used a customer survey, with a population size of 4126, a sample size of 150, and obtained a response rate of 24 percent, consistent with mail surveys. The survey was pre-tested with a subgroup of the target population. Content validity issues, including clarity of directions, clarity of questions, and content overlap, were addressed. The survey used a five-point Likert scale. A survey was sent to both students and parents. Thus, data were obtained from both primary customers.

The team of experts also prioritizes the attributes. This effort can be conducted in a group, or using a survey similar to the customer survey, as occurred in the application. The team members are asked to again place themselves in the customer's position and prioritize the attributes from the customer's perspective. Results from the surveys are shown in Figure 4.

Figure 4 Prioritization survey results.

               Student (n = 35)   Parent (n = 32)   Staff (n = 14)
CSA            Mean     SD        Mean     SD       Mean     SD
Reliability    4.77     0.49      4.56     0.80     4.64     0.50
Attentiveness  4.71     0.57      4.53     0.62     4.14     0.53
Knowledge      4.69     0.58      4.66     0.55     4.71     0.61
Credibility    4.51     0.74      4.63     0.66     4.07     1.00
Impressions    4.49     0.70      4.22     0.71     4.21     0.70
Timeliness     4.34     0.68      4.44     0.62     4.21     0.89
Accuracy       3.94     0.94      4.13     0.91     3.79     1.05
Tangibles      3.63     0.94      3.59     0.91     3.21     0.70

1 = Unimportant  3 = Important  5 = Extremely important

This information alone is valuable for the organization. It indicates which attributes are important to the sample of customers and which are not. In addition, the information helps the organization assess how well it knows its customers. The results indicate that reliability is an important attribute for all three groups who responded to the survey. Knowledge and credibility are both important to the student/parent groups. Attentiveness is important to the student group, yet was judged as being less important by the staff group. This gap is valuable information for staff and management.

Two of the attributes, accuracy and tangibles, are consistently less important to the three groups. This may indicate the two attributes are not important unless they are absent. Another factor that may have biased the results is the inclusion of the statement referring to "conformance to the standards set by the State Regents for Higher Education" in the descriptive definition. Informal conversations following the survey indicate that customers were not interested in the office of admissions conforming to regents' standards. Customers recognize that these standards exist, but they are not important to them.

Seven of the attributes are used in the succeeding steps in the methodology, omitting tangibles. Accuracy is included, even though it does not seem to be important to customers, since the definition may have biased the results, and the office of admissions must conform to the standards, regardless of what customers want.

The output of phase II is a list of customer satisfaction attributes that are unique to a particular organization and have been constitutively defined and prioritized.

Phase III

Phase III develops metrics that measure the performance of the organization with respect to the CSAs. This performance is measured both directly and indirectly. The direct metrics use information obtained directly from customers, and are termed satisfaction metrics (SMs). The indirect metrics use end-of-process data, and are termed outcome metrics (OMs). Both of these metrics occur after the service has been delivered. A third metric, which measures performance as the process is progressing, is developed in phase IV.

An example of direct and indirect indicators is easily given using weights of fish. The direct indicator of weight is obtained by weighing the fish on a scale. If, however, there is no scale available, then a ruler is used to measure the length of the fish, which is then converted to an estimate of the weight. Length is thus an indirect, or surrogate, metric for weight.

Step 1 asks the team to identify metrics for each CSA from phase II. There is no attempt at this point to separate direct from indirect metrics. In fact, the team is encouraged not to limit itself in any way, but to think of "what could be measured to indicate how our organization is performing with respect to this CSA." A brainstorming session, or a modified NGT, is useful for this step. The outcome of step 1 is a list of potential SMs/OMs that are associated with the CSAs.

After the list of potential metrics has been generated, step 2 asks the team to separate the list into two groups. The first group contains SMs, and requires data obtained directly from customers. The second group contains OMs, and requires data from the process. It should be noted that each CSA could be measured by either an OM or SM.

Step 3 operationally defines the metrics. Deming (1986) states that an operational definition puts communicable meaning into a concept. API (1990) continues with this thought, and adds that an operational definition puts communicable meaning into a concept by specifying how the concept is applied within a particular set of circumstances. API notes that criteria for judgment are necessary in some situations to distinguish between good and bad performance.

Since metrics are "what is measured," it is appropriate that the definitions specify how the metric is to be measured. In the context of metric, measure, and measurement, discussed earlier, the operational definition identifies the measure, or how the metric is to be measured.

SMs are addressed first. Since data for the SMs are obtained directly from customers, some form of customer input (survey, focus group, interview) will be the data-gathering instrument for several of the SMs. The team is essentially building elements of the survey as it specifies definitions for the metrics. Other methods for obtaining data for SMs are: focus groups, counts and nature of customer complaints, and counts and nature of customer compliments.

OMs are addressed last. Since they occur at the end of the process, the organization has greater control and flexibility over obtaining the data. In addition, organizations are more familiar with techniques to measure most OMs. Some OMs will be tracked currently, while others are new, and must have a measurement system created. The team may have to be creative and innovative while operationally defining a metric that has never been tracked before.

The outcome of step 3 is a list of operationally defined SMs and OMs associated with each CSA. An example is the operational definition identified for one of the SMs associated with the CSA of reliability: Do students believe the admissions office staff members do what they say they will do? This is measured by a survey or focus group.

Step 4 evaluates the metrics. Four evaluation criteria are suggested (DeYong 1994). They are

1. Ease of measurement
2. Ease of analysis and interpretation
3. Relevance
4. Direct from the customer

Other criteria may be useful in different organizations.

Before evaluating the metrics, it is helpful to know which of the criteria are most important. This information is obtained using a priority grid. This tool ranks the criteria, and obtains weights by making pair-wise comparisons of the criteria.

Figure 5 illustrates a portion of the metric evaluation worksheet. The weights for each criterion, obtained earlier from the priority grid, are placed in the top half of each cell. Thus, the top number in each cell is the same throughout a column. The evaluation scores for each metric are placed in the bottom half of the cell. The metrics are evaluated against the four criteria according to a five-point scale.

Figure 5 Metric evaluation worksheet (portion). In the excerpt, the criterion weights are 1, 2, 4, and 4; the SM "student perception" received scores of 2, 4, 5, and 5, for a final evaluation score of 50. © 1998, ASQ

After all the evaluation scores have been recorded, the product is taken of the two numbers within each cell and summed across each row. This results in a final evaluation score for the metrics, which can be used to determine which metrics should be chosen.

The outcome of phase III is a list of SMs and OMs for each CSA, complete with operational definitions and evaluation scores. The metrics are defined in terms of the operations required to measure them.
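The worksheet arithmetic is small enough to sketch directly. The weights and scores below are the numbers recoverable from the Figure 5 excerpt; which weight belongs to which criterion is an assumption made here for illustration.

```python
# Sketch of the Figure 5 evaluation-worksheet arithmetic: priority-grid
# weights sit in the top half of each cell, 1-5 scores in the bottom half,
# and the row total is the sum of the cell products.

criteria = ["ease of measurement", "ease of analysis and interpretation",
            "relevance", "direct from the customer"]

# Numbers from the Figure 5 excerpt; the weight-to-criterion pairing
# is assumed, not stated in the figure.
weights = dict(zip(criteria, [1, 2, 4, 4]))
scores = {"student perception": dict(zip(criteria, [2, 4, 5, 5]))}

final = {metric: sum(weights[c] * row[c] for c in criteria)
         for metric, row in scores.items()}
print(final)  # {'student perception': 50}
```

The final scores are only a ranking aid; the team still chooses which metrics to keep.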
The SMs and OMs provide the basis for the team to identify process metrics (PMs), which are linked with the SMs, OMs, and CSAs.

Phase IV

What is needed are metrics that provide an indication of how well the process is performing with respect to customer expectations. Process metrics will accomplish that.

Phase IV requires the team to follow six steps. In step 1 the team studies the process and identifies the major components, or subprocesses, of the process. A component of a process is defined as a part of the process that has to be planned, designed, supervised, and appraised (Rosander 1989).

For several reasons, a list of major components is used in this step instead of a flowchart. First, the methodology does not require the level of detail a flowchart produces. Second, it is more difficult to show components that permeate the entire process, such as contact with the customer, with a flowchart. Figure 6 shows the components for the office of admissions.

Figure 6 Major components of the process (excerpt): receive correspondence; review file; close file.

Step 2 links the CSAs to the process components. The CSAs are defined as distinguishable characteristics that contribute to overall customer satisfaction. These are categories customers are using to determine how satisfied they are. These attributes are varied in nature. Thus, they are impacted at different points in the process. Step 2 determines the points in the process where each of the attributes is impacted.

The linkage between the CSAs and the process components is identified by viewing, from two directions, the relationship between the attributes and the components. First, each component is studied individually, and then each CSA is compared to the component one at a time. Pair-wise comparisons are more effective than trying to make multiple comparisons simultaneously.

Initially, team members may feel every process component affects every CSA. As the analysis progresses, team members become more discriminating. If the team can establish a rationale for a CSA to be linked with a component, it is included.

Next, the relationship is approached from a different perspective. The CSAs are studied individually, and the process components that impact them are identified. Once again, it is easier to compare the concepts pair-wise, make a decision regarding the linkage, and move to the next component. The outcome is a list of components that impact each attribute.

Finally, the results are combined. In many cases, the results will indicate the same relationship between the process components and the CSAs. There may, however, be cases where the relationships are not identical. The team discusses these situations and reaches consensus regarding the relationship between the CSAs and the process components. The outcome is a list of process components and the CSAs they impact. This begins building the link between the process and the customer.

Step 3 identifies potential PMs for each process component. The team is reminded that a PM is an outcome of a step in the process that is subject to measure. There are many such outcomes. The team is encouraged not to limit its ideas for PMs because of practical matters, such as how the PM could be measured, or what is currently being measured. Metrics merely identify what is measured. The operational definition will specify how the metric will be measured.

Each process component is studied individually. The results of the previous step, CSAs that are impacted by each component, are reviewed. The CSAs provide a focus for the team, while identifying ideas about what to measure for each component. The question is asked: "What should be measured to indicate how well we performed within the component in relation to the CSAs?" Therefore, the PMs that are identified are generated in response to a CSA. Results from the application for the process component "review file" are shown in Figure 7.

Figure 7 Process component "review file," CSAs, and PMs.

Component    CSA impacted   Process metric
Review file  Accuracy       Is the coding correct?
             Knowledge      Are the $ verified and deposited correctly?
             Reliability    Are special situations/circumstances identified?
             Timeliness     Length of time we have the file.

The outcome of step 3 is a list of potential PMs for each component. These PMs will be linked to the SMs/OMs in step 4, further establishing the link between the process and the customer.

Step 4 completes the linkage between the process and the customer. Step 2 identified a link between the conceptual CSA, the process component, and the process metric. Step 4 links the internal process metric with the satisfaction metrics/outcome metrics, which were linked earlier to the customer satisfaction attributes.

SMs/OMs are direct and indirect indicators of how well the process performed in the customer's eyes after completion of the process, and were obtained from the CSAs. PMs are internal metrics that indicate how well the process is performing as it progresses. If a linkage is established between these metrics, the result is a direct relationship from the process to the customer.

The SMs/OMs and PMs are linked by making pair-wise comparisons between each set of metrics, and determining the strength of the relationship, if any, that exists. There are a significant number of comparisons that must be made for this step. Therefore, the procedure is best approached in an organized method that makes it easy to record the strength of the linkages. The linkage matrix obtained in the application is shown in Figure 8.

The process components and their associated PMs from step 3 are listed along the top of the matrix. The CSAs and their associated SMs/OMs are listed along the side. The symbols for the strength of the relationships are shown on the matrix. Relationships between the metrics are judged to be strong, medium, weak, or none, and are indicated on the matrix using the appropriate symbol.

The SMs/OMs are linked to the PMs in a fashion similar to the way the components and CSAs were linked in step 2, by making pair-wise comparisons. First, the team studies each PM and identifies the SMs/OMs impacted by the PM. Then, the team takes another perspective and identifies the PMs that impact the SMs/OMs.

The team is looking for two things during this step: first, the linkages between the metrics; and second, whether the lists of SMs/OMs and PMs are complete. PMs were associated with CSAs in step 2. The SMs/OMs are also associated with CSAs. Step 4 provides a final check for the consistency and logic of the linkages. The matrix forces the team to look at the entire system: from CSA to process component to PM, and from CSA to SM/OM to PM. The linkages should be consistent. If not, the earlier linkages must be reviewed. There may be PMs that seem important to the team that do not appear to be linked to any SM/OM, and vice versa. In this case, either another metric is developed, or the first metric is deleted.
Figure 8 Linkage matrix from the application. The process components and their PMs (among them: coding correct, consistency with policies, time between application and response, time waiting at counter, integrity of documents) are listed across the top. The CSAs (reliability, impressions, timeliness, credibility) and their SMs/OMs (student perception, calls/complaints, admissions' judgment after interaction, state audit results, computer error list, analysis of calls and complaints) are listed along the side. Relationship strengths are marked with symbols and scored (medium = 2, weak = 1); the column score totals for the PMs range from 5 to 27.
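The scores row beneath the Figure 8 matrix sums the strength values in each PM's column, which is what surfaces the critical PMs. A sketch of that bookkeeping, with illustrative names and linkages; scoring strong as 3 is an assumption, since the figure legend shows only medium (2) and weak (1):

```python
# Sketch of the linkage-matrix column totals: each SM/OM-to-PM link gets a
# strength score, and per-PM totals flag the PMs touching the most SMs/OMs.
# Metric names and linkages are illustrative; strong = 3 is assumed.

STRENGTH = {"strong": 3, "medium": 2, "weak": 1}

links = {
    ("student perception of reliability", "coding correct"): "strong",
    ("student perception of reliability", "consistency with policies"): "medium",
    ("calls/complaints", "consistency with policies"): "weak",
    ("time between application and response", "coding correct"): "weak",
}

pm_scores = {}
for (_, pm), strength in links.items():
    pm_scores[pm] = pm_scores.get(pm, 0) + STRENGTH[strength]

# Highest-scoring PMs are the critical ones to track first.
for pm, score in sorted(pm_scores.items(), key=lambda kv: -kv[1]):
    print(pm, score)
```

When internal measurement resources are limited, these totals suggest where they buy the most customer-linked coverage.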
Next, the team addresses each SM/OM and determines which PMs impact them. In other words, the team approaches the linkage from a different perspective. Another linkage matrix is developed. Pair-wise comparisons are made, and the strength of the linkage is determined. Although these comparisons will have been made before, the alternative viewpoint helps make certain the lists are complete and the comparisons are valid.

Finally, the two matrices are combined. Consensus is reached about linkages between the metrics that are not the same. Each SM/OM is linked with at least one PM. If no linkage is identified, the team brainstorms for additional PMs. The outcome of step 4 is a list of PMs along with a linkage matrix that shows the relationship between the PMs and the SMs/OMs, and, therefore, the CSAs. The PMs that result are operationally defined in step 5 and evaluated in step 6.

The linkage matrix serves three main purposes. First, it organizes and clearly communicates the linkages between metrics. Second, the matrix identifies critical PMs, those that impact several CSAs or SMs/OMs. If internal metrics are limited, the matrix helps determine where resources will be used most effectively. Finally, the matrix serves as a self-coordinating device, forcing the team to review linkages identified earlier and verify their validity.

In the same way the earlier metrics were defined, step 5 operationally defines the PMs. The operational definition again states how the PM is to be measured.

Finally, the PMs are evaluated in step 6. The criteria used to evaluate the PMs are very similar to those used in phase III to evaluate the SMs/OMs, with one exception. An "objective" criterion is included, rather than "direct from the customer." The four criteria are (1) ease of measurement; (2) ease of analysis and interpretation; (3) relevance; and (4) objectivity. Objectivity evaluates how objective, or subjective, the metric is. The definitions for the other three criteria remain the same. Another priority grid is used to determine which criteria are important. Another metric evaluation worksheet is used to record the evaluation results.

The outcome of step 6 is a list of process metrics that are operationally defined and evaluated. They are linked to the customer through the SMs/OMs and thus the CSAs.

DISCUSSION

Organizations using the methodology have a clear idea of what process metrics should be tracked. The next step would be implementation of the measurement system, and continual review of the CSAs, OMs, SMs, and PMs. Today's organizations exist in a dynamic environment, with constantly changing customers, business processes, and pressures. Some form of continuous improvement will necessarily be a way of life.

The attributes and metrics developed with this application were tested and accepted by both customers and owners of the test process. Strengths of the methodology noted by users include the increased knowledge and understanding of the customer, the richness of data obtained regarding customer satisfaction attributes, and the efficiency and structure offered by the methodology. The organization has historically used ad hoc measures of process performance, so the process metrics provide a measurement system that is accepted by process owners and focused on customers.

CONCLUSION

The methodology detailed in this article provides an efficient and effective mechanism for organizations interested in identifying process metrics that are linked to their customers. Although many tools exist for making linkages, no methodology has been proposed that leads organizations step-by-step from assessing customer attributes to identifying process metrics linked to the attributes. Benefits of the methodology include the following:

• A clear understanding of customer wants, needs, and desires
• A process measurement system linked to the customers' wants, needs, and desires
• A higher degree of acceptance for the measurement system, since it was developed by process owners

REFERENCES

Booms, B. H., and J. L. Nyquist. 1981. Analyzing the customer/firm communication component of the services marketing mix. In Marketing of Services, edited by J. H. Donnelly and W. R. George. Chicago: American Marketing Association.

Brown, S. W., and T. A. Swartz. 1989. A gap analysis of professional service quality. Journal of Marketing (April): 92–98.

Chua, R. C. H. 1992. A customer-driven approach for measuring service quality. ASQC Quality Congress Transactions. Milwaukee: ASQ.

Darby, M. R., and E. Karni. 1973. Free competition and the optimal amount of fraud. The Journal of Law and Economics 16, no. 1: 67–88.

Deming, W. E. 1986. Out of the crisis. Cambridge, Mass.: MIT Center for Advanced Engineering Study.

DeYong, C. F. 1994. A methodology for linking customer satisfaction dimensions with process performance metrics. Ph.D. diss., Oklahoma State University.

Drucker, P. F. 1954. The practice of management. New York: Harper and Row.

Nelson, P. 1970. Information and consumer behavior. Journal of Political Economy 78, no. 2: 311–329.

———. 1974. Advertising as information. Journal of Political Economy 82, no. 4: 729–754.

Parasuraman, A., V. A. Zeithaml, and L. L. Berry. 1985. A conceptual model of service quality and its implications for future research. Journal of Marketing (fall): 41–50.

———. 1988. SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing 64, no. 1: 12–37.

Regan, W. J. 1963. The service revolution. Journal of Marketing (July): 57–62.

Rosander, A. C. 1989. The quest for quality in services. Milwaukee: ASQC Quality Press.

Rummler, G. A., and A. P. Brache. 1990. Improving performance. San Francisco: Jossey-Bass.

Sasser, W. E., Jr., R. P. Olsen, and D. D. Wyckoff, eds. 1978. Management of service operations. Boston: Allyn and Bacon.

Shostack, G. L. 1977. Breaking free from product marketing. Journal of Marketing (April): 73–80.

———. 1987. Service positioning through structural change. Journal of Marketing (January): 35–43.

Sink, D. S., and T. C. Tuttle. 1989. Planning and measurement in your organization of the future. Norcross, Ga.: Industrial Engineering and Management Press.

Tenner, A. R. 1991. Quality management beyond manufacturing. Research-Technology Management 34, no. 5: 27–32.

Yoshizawa, M. 1991. Leadership through quality: New employee quality training participant workbook. New York: Xerox.

Zeithaml, V. A., A. Parasuraman, and L. L. Berry. 1988. Communication and control processes in the delivery of service quality. Journal of Marketing (April): 35–58.

———. 1990. Delivering quality service. New York: The Free Press.

BIOGRAPHIES

Camille Frye DeYong is an assistant professor of industrial engineering and management at Oklahoma State University. She has taught at the University of Central Oklahoma, and has two years of industrial experience in the transportation services industry. She is an ASQ Certified Quality Engineer, and served as an examiner for the Oklahoma Quality Award in 1994.

DeYong's research interests are in the areas of total quality management, economic analysis, and service quality. She has consulted and performed research in performance metrics and customer satisfaction measurements. Client service organizations include the Oklahoma Department of Transportation, the U.S. Army Matériel Systems Analysis Activity, and the city of Stillwater. She has presented numerous seminars on the topics of identifying performance metrics, strategic planning, and life cycle costing.

DeYong is the author of several technical papers and one book chapter. She earned a doctorate in industrial engineering and management at Oklahoma State University. She may be contacted via e-mail: deyong@okway.okstate.edu.

Kenneth E. Case is a regents professor of industrial engineering and management at Oklahoma State University. He has taught at both OSU and Virginia Tech. He is a Fellow of ASQ and the Institute of Industrial Engineers. He is also an ASQ Certified Quality Engineer, Certified Quality Manager, Certified Reliability Engineer, and Certified Quality Auditor. He is a member of the National Academy of Engineering and an academician and board member in the International Academy of Engineering.

Case has published over 100 articles and has coauthored three books, one of which won the Joint Publishers-IIE Book of the Year Award. He has received multiple teaching awards, at both the university and the national levels, and has presented numerous ASQ seminars on topics that include statistical process control, quality engineering, and quality costs. His research interests are in quality and reliability engineering, economic analysis, and production planning and control.

Case earned a doctorate in industrial engineering and management at Oklahoma State University. He may be contacted via e-mail: kcase@okway.okstate.edu.

Both DeYong and Case may also be contacted at the School of Industrial Engineering and Management, 322 Engineering North, Oklahoma State University, Stillwater, OK 74078; 405-744-6055, Fax: 405-744-6187.