


Jackie L. Dobrovolny • Stephanie Christine G. Fuentes

Evaluation is often avoided in human performance technology (HPT), but it is an essential and
frequently catalytic activity that adds significant value to projects. Knowing how to approach an
evaluation and whether to use qualitative, quantitative, or both methods makes evaluation much
easier. In this article, we provide tools to help determine appropriate evaluation methods.
Successful evaluation provides insightful data with which to make informed decisions.

Performance Improvement, vol. 47, no. 4, April 2008. ©2008 International Society for Performance Improvement. Published online in Wiley InterScience. DOI: 10.1002/pfi.197

WHAT IS THE DIFFERENCE between quantitative and qualitative methods for evaluating a program or project? How do you decide which to use? This article offers a flowchart to help make that decision and a table that compares and contrasts the two methodologies. It will help in deciding if a performance improvement intervention is best evaluated by a quantitative or a qualitative methodology, or a combination of the two. Once you make that decision, please seek more information about the specific steps and strategies of the methodology you select.

WHY EVALUATE?
One of the primary reasons to evaluate is to provide clear justification for performance improvement interventions. Strategies and practices should demonstrate connections with business goals, especially as organizations move toward metrics-based workforce decision making (Huselid, Becker, & Beatty, 2005; Becker, Huselid, & Ulrich, 2001).

Another reason to evaluate is that evaluation is an important component of human performance technology (HPT), which "is often characterized as a systems approach to organizational and individual performance improvement. All systems have a feedback and revision loop—a mechanism for determining if the output of the system meets the intended objectives" (Schrock & Geis, 1999, p. 185). That is, using evaluation can improve organizational decision making and planning. In addition, evaluation is one of the 10 Standards of Human Performance Technology (ISPI, 2007). The evaluation principle requires HPT practitioners to systematically evaluate the efficiency and effectiveness of all interventions so the costs incurred and the benefits gained can be compared. HPT practitioners operate in the world of practical application and experiences rather than experimentation (Foshay, Moller, Schwen, Kalman, & Haney, 1999). Evaluation is therefore an important strategy for advancing and improving the profession (Foshay et al., 1999; Sleezer & Gradous, 1998).

A third reason to evaluate is to provide a strong foundation for continuous improvement responsibilities. All HPT practitioners are frequently challenged to provide quality solutions with few resources. Evaluation can help practitioners make smarter decisions about where to expend effort and how to ensure that their continuous improvement efforts are effective (Brinkerhoff & Dressler, 2002). Evaluations based on a written and approved evaluation strategy that is integrated into the mission, vision, and operating procedures of an organization become the core of a routinized, efficient, continuous improvement philosophy (Giberson, Tracey, & Harris, 2006; Medsker, 2006).

In spite of its essential function, "evaluation is . . . (probably) the most widely misunderstood, avoided, and feared activity in the practice of HPT" (Schrock & Geis,
1999, p. 185). Perhaps this is because practitioners fear results will point to a failure or they will be negatively scrutinized, particularly if they operate internally in an organization. "But evaluation is critical to the practice of HPT; without it, the success of any HPT intervention is unknown, and decisions are left to hearsay, politics, egos, impressions, personal connections, and other enemies of sound thinking" (Schrock & Geis, 1999, p. 186). HPT practitioners have a professional obligation to provide unbiased, sound evaluation data to help leaders make smart HPT choices.

COMPARING QUANTITATIVE AND QUALITATIVE METHODOLOGIES
Quantitative and qualitative evaluation methodologies share a number of characteristics. Both are based on a conceptual framework, explanation, rationale, or theory. "A conceptual framework explains, either graphically or in a narrative form, the main things to be studied—the key factors, constructs or variables—and the presumed relationships among them" (Miles & Huberman, 1994, p. 18). Both also include clear evaluation questions and an evaluation plan that is congruent with and able to answer those questions. And both employ a systematic approach to the evaluation process and to measuring meaningful and relevant variables that inform our understanding of human performance in organizations (Russ-Eft & Preskill, 2001).

Both evaluation methodologies rely on a data- or information-gathering process—in other words, a measurement process (Sleezer & Gradous, 1998). In addition, because both quantitative and qualitative evaluation involve judging or decision making, they can be perceived as subjective, controversial, and emotionally and politically charged.

Quantitative and qualitative evaluations follow established guidelines that have been tested and refined over many years (e.g., Krathwohl, 1998). Both require evaluators to design an evaluation plan based on similar evaluations conducted in the past by other evaluators. That is, part of the evaluation process is to review evaluations conducted by others to identify effective strategies and mistakes to avoid (Krathwohl, 1998).

For both quantitative and qualitative evaluations, evaluators must protect the data they collect. Participants are usually guaranteed anonymity, and in some evaluations, especially qualitative evaluations, evaluators collect a lot of personal information from each participant. Quantitative and qualitative evaluations discuss results in terms of credibility, transferability, and dependability. These three variables together create the scientific rigor that makes the results of an evaluation project trustworthy (Krathwohl, 1998).

Finally, both quantitative and qualitative evaluations are based on established codes of conduct and ethical standards that evaluators voluntarily agree to follow (Krathwohl, 1998). These standards of practice are established by professional associations such as ISPI. The ISPI Code of Ethics (2002) is based on six principles, one of which is Integrity. One of the Integrity guidelines is, "Exhibit the highest level of professional objectivity in gathering, evaluating, and communicating information about the activity or process being examined, or the results achieved." The American Evaluation Association (2008) has a similar code of ethics and guiding principles for evaluators.

It is important to understand the strengths and limitations of both quantitative and qualitative evaluation to determine which methodology will produce the data that can answer specific evaluation questions. Table 1 summarizes important distinctions between these two evaluation methodologies. Notice the first distinction listed in this table. In a quantitative evaluation, evaluators develop an assumption, or hypothesis, before collecting data; the purpose of the evaluation is to determine whether that assumption is supported by the data. For example, an assumption might be that a particular HPT intervention improves the leadership skills of the executive team. The evaluation would then collect data to determine if that assumption was correct. In a qualitative evaluation, evaluators typically start not with a hypothesis but by looking at the big picture, or context, and attempting to describe or understand that context. For example, the investigation might seek to determine how leadership is defined or manifested in an organization.

Another important distinction summarized in the table is the evaluator's view of reality. In a quantitative evaluation, the evaluator assumes reality is a constant




TABLE 1. DISTINCTIONS BETWEEN QUANTITATIVE AND QUALITATIVE EVALUATION METHODOLOGIES

Quantitative: Seek to validate whether a particular assumption (or hypothesis) is true for a given context.
Qualitative: Focus on context. The results emerge from what is naturally existing in that context.

Quantitative: Assume an objective reality that is relatively constant (positivist perspective).
Qualitative: Assume that individuals create their own reality independently and socially (constructivist perspective).

Quantitative: Separate and detach the observer from the observed.
Qualitative: Involve the observer and the observed, often creating a participant-observer role.

Quantitative: Explore population characteristics or sampling frames that represent population characteristics.
Qualitative: Study individual cases or groups that may not be representative of a larger whole.

Quantitative: Refer to the people who participate in the research as subjects.
Qualitative: Refer to the people who participate in the research as participants.

Quantitative: Randomly select samples that are as large as possible.
Qualitative: Select participants based on specific characteristics, i.e., a "purposive sample," and "the sample size is as small as possible" (Morse, 1991, p. 136).

Quantitative: Describe behaviors with numbers.
Qualitative: Describe actions using words, music, art, poetry, or drama, for example.

Quantitative: Examine behavior and other observable variables.
Qualitative: Examine the meanings that individuals create and other tacit knowledge or behavior.

Quantitative: Explore human behavior in natural or experiment-like settings.
Qualitative: Explore human behavior in its usual context.

Quantitative: Analyze social reality according to predefined variables.
Qualitative: Situate observations within a context.

Quantitative: Use preconceived concepts and theories to determine what data will be collected.
Qualitative: Discover concepts and theories after data have been collected.

Quantitative: Use statistical methods and inference to analyze data (e.g., chi square, ANOVA, regression techniques, and multivariate analysis).
Qualitative: Use induction to analyze available data (e.g., code interview transcripts, identifying themes and patterns).

Quantitative: Generalize findings from a sample to a defined population.
Qualitative: Do not seek to generalize findings unless similar cases exist; instead, provide rich description.

Quantitative: Prepare impersonal, objective reports of research findings; the final report typically contains charts, graphs, and tables that summarize the data.
Qualitative: Prepare a discourse-intensive final report that describes the themes and patterns and supports those themes and patterns with exemplary quotations or stories.

From Gall, M.D., Gall, J.P., & Borg, W.R., Educational Research: An Introduction (8th ed.). Boston: Allyn and Bacon. Copyright © 2007 by Pearson Education. Adapted by permission of the publisher.
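The analysis distinction in Table 1 (statistical inference on the quantitative side, inductive coding on the qualitative side) can be made concrete with a short sketch. The contingency counts, transcript excerpts, and two-code coding scheme below are invented for illustration only; a real evaluation would use validated instruments and a far richer codebook.

```python
from collections import Counter

# Quantitative style: test an assumption with a statistic.
# Hypothetical 2x2 table: rows = trained / not trained,
# columns = promoted / not promoted.
def chi_square(table):
    """Pearson chi-square statistic for a table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

observed = [[30, 10],   # trained:     promoted / not promoted
            [15, 25]]   # not trained: promoted / not promoted
print(round(chi_square(observed), 2))  # 11.43, well above the 3.84 cutoff
                                       # for p = .05 with one degree of freedom

# Qualitative style: induce themes by coding transcript excerpts.
excerpts = [
    "My mentor helped me see the big picture of leadership.",
    "Without a mentor I would not have applied for the role.",
    "The training budget was the main barrier for our team.",
]
codes = {"mentoring": "mentor", "resources": "budget"}
theme_counts = Counter(
    theme
    for text in excerpts
    for theme, keyword in codes.items()
    if keyword in text.lower()
)
print(theme_counts.most_common())  # [('mentoring', 2), ('resources', 1)]
```

In practice the quantitative branch would report a p value from a statistics package, and the qualitative branch would use human coders rather than keyword matching; the contrast in where the categories come from (stated up front versus discovered in the data) is the point.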

state, which everyone perceives similarly. In a qualitative evaluation, the evaluator assumes reality is different for each person and that individuals create their own reality.

Perhaps one of the most misunderstood distinctions between quantitative and qualitative methods is the number of participants required. A quantitative approach usually requires a large, random sample of people or data that covers a broad spectrum. A qualitative evaluation, in contrast, purposefully selects specific individuals or data sources to study in depth (Krathwohl, 1998; Mason, 1996). A qualitative approach may study only as many individuals as are necessary to identify consistent themes and patterns (Morse, 1991).

These are the three distinctions we think are most important to note. With time and experience, individual practitioners may find that some distinctions are more salient than others to the groups and in the contexts in which they conduct evaluations.

Many situations exist when practitioners need to use both quantitative and qualitative evaluation methodologies, that is, a mixed methods approach. For example, Sleezer and Spector (2006) used quantitative data to specify how well the program goals were met and used qualitative data to determine if stakeholders could improve their programs when they had access to the quantitative data. Combining both methods can often improve the interpretation of the results and be more meaningful to decision makers.

Some HPT practitioners use qualitative methods to investigate the big picture and quantitative methods to focus on specific variables within the big picture. For example, several for-profit companies with which one of us works use qualitative methods (mainly focus group sessions) to identify relevant variables and criteria that can then be explored organization-wide using quantitative methods (primarily online surveys). This ensures that organizational resources and personnel are used judiciously and that management has an opportunity to decide the direction and scope of future evaluative efforts before significant data collection expenses are incurred.

QUALITATIVE OR QUANTITATIVE METHODS
The flowchart in Figure 1, which provides decision points in the process of determining the scope and feasibility of an evaluation, can help practitioners decide what methodology may be most appropriate given situational constraints.

Begin using the flowchart by determining what question you would like to answer with the evaluation. The five questions in the first box on the flowchart help determine the kind of activity you are undertaking (Nardi, 2006). Based on the type of question you are trying to answer, consider if this is a relevant question to ask given the context of the evaluation. (You may have an interesting question, but perhaps it is not relevant to the organization at this time.)

If you do have a relevant question, consider whether you already have data that will help answer it. The issue here is what data you already have or have access to versus what data you need to answer the question. If you know the data exist but are denied access to them, there is little more you can do. This sometimes occurs with sensitive human resource (HR) data, but some of this information can be used if it is scrubbed of identifying information. And if you do have access, you can continue to explore what methods are appropriate for the evaluation.

Next, examine the practical aspect of the evaluation: Do you have the time and the resources to pursue it? Often practitioners have grand ideas about exploring interesting questions, only to find a lack of support, time, money, or other resources. If you have the time to spend, you may be able to look for new data sources beyond what you currently have. Both qualitative and quantitative data sources may be open to you.

Usually practitioners have only a small amount of time to gather data and report back to decision makers with the findings. In this case, you might want to use existing quantitative data that are easy to access. Some quantitative data can answer preliminary questions in the fastest, most efficient, and least intrusive way. Any qualitative data might be limited to anecdotes, although this information can sometimes be compelling enough to free up more resources. The kinds and levels of data available for quantitative and qualitative methods are explained in detail in many books and articles (e.g., Krathwohl, 1998).

After you gather data, you may still have questions. Or you may need to ask a different question to get a more informative answer. We begin the process of determining methods again when we want to explore the topic in more depth.

Let us take an example situation and use the flowchart to help determine what methods we could use. The guiding question in this example is, "How effective is our leadership development program in preparing new leaders to fill the executive talent pipeline?" We want to know in this case whether the organization's program works. Other questions might be interesting to pursue (e.g., how our program compares to others, what results we see from it,



why it is working, or how predictive it is of success), but let us take this example at a basic level. In many organizations, proof of a program's value according to some metric can be a deciding factor to fund it.

Given that we want to know if our leadership development program works for the organization, this is certainly a relevant question. The organization's leaders would not like spending money on something that does not help.

Getting data about the leadership program might be complicated. Since it could be sensitive for employees to know who is being cultivated for the upper echelons of the company, this information might circulate only among HR and a few senior executives. Capturing results might require some creative thinking to identify data sources that would not raise suspicion about our activities. If there is sufficient time to collect data, an option is to acquire both quantitative and qualitative data. Here are some examples of how to use mixed methods to answer the question:

Qualitative Data
• How have individuals achieved their new positions in the pipeline? What works best?
• What qualifications (e.g., schools, degrees achieved) are associated with the top achievers in the pipeline?
• How do customers and the workforce feel about the transitions between positions? Are they smooth or rough?

Quantitative Data
• What are the most common job paths to the higher levels? (Count the number of similar positions among executives.)
• How long (in days, weeks, or months) does it take an individual to be promoted from when he or she enters the pipeline?
• How long does a person stay in each position?
• What kind of results do they achieve in each position?
• How does the transition between positions affect results? (A manager may achieve fine results, but when the transition occurs, productivity and satisfaction levels may decrease in the previous group.)
• How do customers and the workforce feel about the transitions? Are they smooth or rough? (Use a survey with a scale or rating system.)

TABLE 2. ADVANTAGES, LIMITATIONS, AND IMPLICATIONS OF QUANTITATIVE AND QUALITATIVE METHODS

Quantitative methods
Advantages:
• Efficient with time and resources
• Less resource intensive; limited human interaction
• Results may apply to a larger population
• Anonymity in data collection is possible
Limitations:
• Answer the question, but substantive or peripheral questions may remain unanswered
• Require thoughtful planning to be successful
• Need a sample of at least 20 to 50 subjects
Implications:
• Can be perceived as impersonal and decontextualized
• Strength of study partially dependent on breadth (number of participants)
• Data security essential because participants are usually guaranteed anonymity

Qualitative methods
Advantages:
• Rich data: the "why" and "how" become accessible
• Develop understanding and rapport with participants
Limitations:
• Generalizability of results is difficult; depends on the question, the diversity of the study participants, and the consistency of the results
• Time dependent (e.g., scheduling interviews with study participants) and time-consuming (e.g., transcribing audiotape of each interview)
• Resource intensive
• Data security is essential because of the depth of information collected from each participant, and participants are usually guaranteed anonymity
Implications:
• Can reveal issues that would not have been discovered any other way
• Can provide realistic and viable solutions to issues from the participants themselves
• Can change the nature of the relationship between participants and the organization
• Strength of study partially dependent on depth (amount of data collected from each participant)

Both quantitative and qualitative methods
Advantages:
• Allow us to challenge assumptions and question the status quo
• Help explore issues in a systematic way
• Answer an important question about the topic of interest
Limitations:
• May produce data that are politically charged
• Sometimes create more questions than answers
• Are of no help if the wrong question is asked
Implications:
• Organizational culture could play a role in the findings
• Things change; what may have been the case at the time of the evaluation may not consistently apply over time
• Evaluation can help identify the implications of something desired, but it can never determine what ought to be, which is a value judgment
• The client must ultimately decide what to do with the results and how to interpret them within the context of the situation
• Be thoughtful about what you want to know before getting others involved

The answer to the question, "Does it work?" can help identify critical factors in leadership development in the organization that require more attention or elements that are useless and could be discontinued. Either way, the organization ends up more informed about the choices for leadership development.
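A few of the quantitative questions in the example reduce to simple counting and averaging once pipeline records are exported. The records, field layout, and position names below are fabricated for illustration; the sketch only shows how "most common job path" and "time to promotion" might be computed.

```python
from collections import Counter
from datetime import date
from statistics import mean

# Hypothetical pipeline records: (person, prior position, date entered
# the pipeline, date promoted). Real data would come from an HR export.
records = [
    ("A", "plant manager",  date(2006, 1, 10), date(2007, 1, 10)),
    ("B", "sales director", date(2006, 3, 1),  date(2006, 12, 1)),
    ("C", "plant manager",  date(2005, 6, 15), date(2007, 6, 15)),
]

# Most common job path: count similar prior positions among promotees.
path_counts = Counter(prior for _, prior, _, _ in records)
print(path_counts.most_common(1))  # [('plant manager', 2)]

# Average days from entering the pipeline to promotion.
days_to_promotion = [(promoted - entered).days
                     for _, _, entered, promoted in records]
print(round(mean(days_to_promotion)))  # 457
```

The survey questions about transition smoothness would feed a similar summary (a mean or distribution of scale ratings), while the qualitative questions would be coded for themes rather than averaged.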

There is a variety of advantages, limitations, and implications for using qualitative and quantitative methods, and some of them are outlined in Table 2. The information in this table will help practitioners understand the consequences of choosing a particular methodology for an evaluation and explain the rationale behind methodological choices to colleagues and managers. For example, before trying to sell the idea of a qualitative big picture evaluation, we suggest reviewing the qualitative section of Table 2 and thinking about the relevance or importance of getting employees involved in identifying solutions to a problem.

Another example using Table 2 is to use the "Both Quantitative and Qualitative Methods" section as a self-check before trying to sell any evaluation plan to colleagues or managers. This third section of the table is useful for thinking of evaluation in the context of the organization and customizing or personalizing that evaluation for the organization. The table may help identify why one methodology is likely to work better than another for a particular organization. Be sure to document these for future use. Keep in mind that there is rarely only one methodology for answering a relevant question. Most questions can be explored in many ways. Advocates of performance improvement need to show how thoughtful use of appropriate evaluation methods can support better HPT practice.

One excellent reference for understanding HPT practice and evaluation within the HPT context is the Standards of Human Performance Technology (ISPI, 2007). Knowing what methodology to use for evaluation means understanding the larger context in which HPT activities take place. Competing pressures, resource constraints, and organizational change all bear on how evaluations are conducted. Using an appropriate evaluation methodology adds value by helping us collect relevant and useful data so that leaders can make informed decisions.

In addition, using an appropriate method that addresses a specific question ensures using time and resources efficiently. Evaluation, whether it is quantitative or qualitative, is an important HPT tool that enables practitioners to focus on outcomes, take a systems view, and provide quality data to measure the efficiency and effectiveness of HPT interventions. A poorly designed evaluation yields unusable results; and since decision making about the situation often relies on the data provided, it is clear how influential any evaluation can be and how important it is to systematically design evaluation methods, whether qualitative or quantitative.

REFERENCES

American Evaluation Association. (2008). Guiding principles for evaluators. Retrieved January 10, 2008, from http://www.

Becker, B., Huselid, M., & Ulrich, D. (2001). The HR scorecard. Boston: Harvard Business School Press.

Brinkerhoff, R.O., & Dressler, D. (2002). Using evaluation to build organizational performance and learning capability: A strategy and a method. Performance Improvement, 41(6), 14–21. DOI: 10.1002/pfi.4140410605.

Foshay, W.R., Moller, L., Schwen, T.M., Kalman, H.K., & Haney, D.S. (1999). Research in human performance technology. In H.D. Stolovitch & E.J. Keeps (Eds.), Handbook of human performance technology (2nd ed., pp. 895–915). San Francisco: Pfeiffer/Jossey-Bass.

Gall, M., Gall, J., & Borg, W. (2007). Educational research: An introduction (8th ed.). Boston: Allyn & Bacon.

Giberson, T.R., Tracey, M.W., & Harris, M.T. (2006). Confirmative evaluation of training outcomes: Using self-report measures to track change at the individual and organizational level. Performance Improvement Quarterly, 19(4), 43–62.

Huselid, M., Becker, B., & Beatty, R. (2005). The workforce scorecard. Boston: Harvard Business School Press.

ISPI. (2002). Code of ethics. Retrieved August 13, 2007, from

ISPI. (2007). Ten standards of human performance technology. Retrieved August 13, 2007, from institute/Standards.pdf.

Krathwohl, D.R. (1998). Methods of educational and social science research (2nd ed.). New York: Longman.

Mason, J. (1996). Qualitative researching. Thousand Oaks, CA: Sage.

Medsker, K.L. (2006). Strategy streamlines evaluation. Performance Improvement Quarterly, 19(1), 3–5.

Miles, M.B., & Huberman, A.M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage.

Morse, J.M. (1991). Strategies for sampling. In J.M. Morse (Ed.), Qualitative nursing research (pp. 127–145). Thousand Oaks, CA: Sage.

Nardi, P.M. (2006). Doing survey research: A guide to quantitative methods (2nd ed.). Boston: Pearson.

Russ-Eft, D., & Preskill, H. (2001). Evaluation in organizations. Cambridge, MA: Perseus.

Schrock, S., & Geis, G.L. (1999). Evaluation. In H.D. Stolovitch & E.J. Keeps (Eds.), Handbook of human performance technology (2nd ed., pp. 185–209). San Francisco: Pfeiffer/Jossey-Bass.

Sleezer, C.M., & Gradous, D.B. (1998). Measurement challenges in evaluation and performance improvement. Performance Improvement Quarterly, 11(4), 62–75.

Sleezer, C.M., & Spector, M. (2006). Assessing training needs of HIV program providers. Performance Improvement Quarterly, 19(3), 89–106.

JACKIE L. DOBROVOLNY, PhD, has presented at numerous international conferences, including ISPI.
Her dissertation research was a qualitative study, and she replicated that research in 2006. She has
been an independent consultant since 1993 and frequently teaches at the University of Colorado at
Denver and Health Sciences Center. At the beginning of her career, she participated in numerous
quantitative research projects funded by the U.S. Department of Defense. She may be reached at

STEPHANIE CHRISTINE G. FUENTES is a doctoral candidate in the organizational learning and instructional technology program at the University of New Mexico (UNM). She holds an MBA in operations management from UNM and an MAEd in information and learning technologies from the
University of Colorado at Denver. She has given several presentations at ISPI’s annual conferences on
using statistics and research methods in HPT and is a former ISPI chapter co-president. She works with
a variety of organizations as an evaluator and learning strategy consultant. She may be reached at
