Purpose: Performance evaluation is well established in business organizations but has received scant attention in higher education institutions. A review of literature revealed that the context, input, process, product (CIPP) model is an appropriate performance evaluation model for higher education institutions. However, little guidance exists for choosing appropriate metrics and benchmarks in implementing the CIPP model. The purpose of this study was to develop a framework using the CIPP model for performance evaluation of higher education institutions.
Design/methodology/approach: A review of literature was conducted to identify an appropriate evaluation model. Then a metrics and benchmarks framework was formed based on practical approaches used in a large university in the United States.
Findings: Nine perspectives on performance evaluation using the CIPP model and their application in higher education institutions were developed and discussed. The discussion provides examples, relative prevalence including frequency of usage, and advantages and limitations of each perspective. An actual application of the suggested CIPP model at the largest university in the United States, by student enrollment, was provided. Implications for institutional assessment and continuous improvement are discussed.
Originality/Value: The study provides a practical framework, model, and guidelines that can be used by higher education institutions to evaluate and enhance their performances and better serve their stakeholders.
Higher education is becoming more competitive with heightened rivalry for student enrollments,
more global due to technology advancement with increasing geographic scope, and more diverse
with broader curricular scope. In addition, many evaluation models have been used to assess higher education institutions, and the task has never been more fraught with diverse metrics. The idea of "what you measure is what you get" defines organizational citizenship behaviors and becomes the
basis for utilizing multiple metrics in performance management to ensure that organizations seek
to achieve progress along multiple dimensions (Podsakoff et al., 2000). Several years ago,
Kaplan and Norton (1992) introduced the concept of the balanced scorecard. By complementing traditional measures of financial performance, the concept has given a generation of managers a better understanding of how their companies are really doing (El-Mashaleh et al., 2007). The balanced scorecard was designed to measure business unit performance along four perspectives: financial, customer, internal business processes, and learning and growth. These perspectives provide a balanced view of an organization's current and future operating performance. While the balanced scorecard provides a wide range of metrics to be used in the effective management of organizations, benchmarks are equally central to performance management. What one measures takes on meaning only in reference to the benchmarks that are used in drawing insights on what the measurements mean. A metric is a standard of measurement.
Mere measurement, however precise it is, does not tell you if you are doing well or poorly.
Performance management is enhanced with greater shared understanding of the metrics used (Rich et al., 2010). "What benchmarks you use is what meaning you get" is the mantra used in this paper. We propose to examine both the metrics and benchmarks that are to be used in performance management for higher education institutions. The College Affordability and Transparency Center released a narrow set of metrics (U.S. Department of Education). We build on the concept of the balanced scorecard in our metrics section and present three distinct sets of benchmarks that should be used in conjunction with the metrics.
The paper is organized as follows to build toward our conceptual framework. We first present the need for metrics and benchmarks. We provide an overview of evaluation models in higher education. We then present a framework comprised of three categories for metrics as one axis and three categories for benchmarks as the other axis. We then describe each of the nine cells with definitions, strengths, weaknesses and illustrative vignettes. We build on Kaplan and Norton's (1992) notion of balance in performance management to offer a more comprehensive balanced scorecard for higher education institutions. We illustrate the use of our suggested model at the world's largest (by student enrollment) university. We conclude with implications for more effective and efficient management of higher education institutions.
Why do we Measure?
Performance measurement is not an end in itself, but is part of a process that helps to guide our
decisions and work methods in organizations. Performance metrics achieve specific managerial
purposes by setting the goals for organizational work. Performance measures are of a wide
variety and are used to evaluate, control, budget, motivate, promote, celebrate, learn, and
improve (Behn, 2003). Theories of rational planning suggest that organizational performance
improves if targets for future achievements are set (Boyne & Chen, 2007). Performance
measurement also leads to performance improvement (Kelman & Friedman, 2009). A typical
application, which is pertinent to our paper, is in performance measurement where the intent is to
understand the determinants of performance. That is, we measure to improve. Lastly, evaluation is practiced at many levels and in many institutions (Dahler-Larsen, 2011). Evaluation is not only linked to
performance improvement but also to social betterment and increased public awareness. Higher
education institutions are required, per accrediting bodies, to conduct self-studies to assess their role in society.
There are many varieties of models and frameworks for evaluating higher education institutions.
A review of literature revealed that the models are categorized based on various criteria, including evaluation procedures and designs (Darwin, 2012; Guba & Lincoln, 1989), evaluators (Ramzan, 2015), and evaluation objectives (Stufflebeam & Shinkfield, 2007; Wang, 2009).
Evaluation models based on procedures and designs are categorized into two groups: standard
and modern (Darwin, 2012; Guba & Lincoln, 1989). Standard evaluations are motivated by individual legitimacy, rely heavily on quantitative data from students, and use a deficit-incidental method with remedial action plans. In comparison, modern or fourth-generation evaluation models are motivated by enhancing student learning, rely mainly on wide-ranging qualitative data, and use a developmental, continuous method with program-development action plans. The fourth-generation models emphasize situated evaluation practices where the context of evaluation matters.
An example of a standard evaluation model is quantitative student opinion surveys that are used
in a number of countries including the United States, the UK and Australia (Darwin, 2012). However, student evaluation has been shown to be fragile, unreliable, and susceptible to various influences. In addition, there has been increasing doubt about the value of student ratings as a means of objectively evaluating higher education institutions, which are complex systems encompassing diverse learners (Arthur, 2009; Darwin, 2012; Kember, Leung, & Kwan, 2002). Such evaluation is considered narrow, superficial, and incomplete for enhancing academic quality. Consequently, alternative evaluation models have emerged recently. Viewing evaluation as a socio-cultural process, distinctly different from the standard evaluation model, new models shifted the basis to negotiated evaluations (Guba & Lincoln, 1989). A learning evaluation model grounded in constructivist and developmental motives has been used to evaluate Australian higher education institutions.
Evaluators
Another group of evaluation models are evaluator oriented. One major example of evaluator
oriented models is the four stages model used by European higher education institutions.
European higher education institutions had no consensus on an evaluation model used by quality agencies until 2000. Since the establishment of the Bologna accord and the European Association for Quality Assurance (ENQA) in 2000, European higher education institutions have adopted and followed a four-stage model of evaluation for quality assurance of their institutions (Ramzan, 2015). The four stages include a self-evaluation conducted by universities, a site visit by external peer reviewers, report writing and publication by the evaluation committee and quality agencies, and a follow-up visit by quality agencies. Many accrediting agencies in the US also adopt the same process. It is notable that input, process, and output are considered three principal components of these evaluations.
Objectives
Evaluation models are also categorized based on their objectives into groups such as pseudo evaluation, quasi evaluation, true evaluation, social advocacy, and eclectic evaluation (Zhang et al., 2011). Pseudo evaluations are mostly motivated by political objectives. Quasi evaluations focus on answering research questions with a clear methodology. True evaluations focus on examining the merit of a program. Social advocacy oriented evaluations focus on assessing programs to promote social justice. Eclectic evaluations focus on utilization and aim to serve the needs of particular users (Stufflebeam & Shinkfield, 2007; Zhang et al., 2011). There are other categories such as management-oriented, outcome-oriented, and process-oriented evaluations.
There are a number of models under each category. The objective of a model should match the institution's needs for evaluation. The outcome-based approach is appropriate for organizations that are mostly interested in results. The popular Kirkpatrick model evaluates training along four components: reaction, learning, behavior, and results (Kirkpatrick, 1998). For evaluating a large-scale educational system, there is a need for a more comprehensive model. The Context, Input, Process, Product (CIPP) model is a management-oriented approach widely used in public schools and higher education institutions in the United States and
across the globe. CIPP includes context stage where evaluators identify environmental readiness
and community needs, input suggests a project that address the needs identified in the context
stage, process control and assess the project process, and product stage measures and judge
project outcomes, worth, and significance (Stufflebeam & Shinkfield, 2007; Zhang et al., 2011).
CIPP is one of the most popular evaluation models that implements social approaches in each of
the four components. The goal of CIPP is to improve not to prove (Stufflebeam &
Shinkfield; 2007) the issues within the organizations. CIPP is recommended as an appropriate
evaluation model for assessing higher education institutions. However, metrics and benchmarks
are a gap in the CIPP model. Our paper fills this gap.
Higher education is part of our society and thus is not immune to the influence of our culture. In
our business-oriented way of life, most events are seen through the lens of manufacturing, even
human processes that have nothing to do with producing products. For example, we have come
to view and discuss higher education in terms of simple inputs and outcomes. A systems view of
higher education institutions, depicted in Figure 1, shows that various inputs go into their
systems (e.g., high-school graduates enter the system); several value-adding processes transform
those inputs (e.g., learning processes), and outputs (e.g., educated students capable of pursuing
careers and adding value in society) are finally generated from the systems.
Evidently there are many more inputs, processes and outputs for a higher education institution than shown above, and that condition itself calls for the use of several metrics in performance management for higher education institutions (Cave, 1997; Palomba & Banta, 1999). We contend, however, that despite the extensive diversity of possible metrics, the categories of inputs, processes and outputs (Cave, 1997) provide adequate gestalts for performance management.
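The systems view above can be sketched as a minimal data structure. This is our own illustration (the class and variable names are assumptions, not the paper's notation), using one example metric per category drawn from later sections of the paper.

```python
# Minimal sketch (our own illustration) of the input-process-output systems
# view of a higher education institution, with one example metric per
# category as discussed later in the paper.

from dataclasses import dataclass, field

@dataclass
class InstitutionSystem:
    inputs: list[str] = field(default_factory=list)     # e.g., entering students
    processes: list[str] = field(default_factory=list)  # value-adding activities
    outputs: list[str] = field(default_factory=list)    # results for society

university = InstitutionSystem(
    inputs=["quality of college freshmen"],
    processes=["use of learning technologies"],
    outputs=["student learning outcomes"],
)

# Each category supplies candidate performance metrics:
print(len(university.inputs + university.processes + university.outputs))  # 3
```

The point of the sketch is only that each of the three categories contributes its own candidate metrics, which motivates the metrics axis of the framework.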
Why do we Compare?
The need to compare our performance with something is well ingrained in each and all of us.
By using a reference to compare with, we gain the knowledge that enables us to engage in
correction of our actions/behaviors. "Correction" rings oddly in contemporary ears, for it hints
at exalted standards and suggests that the few who know more or feel deeply might offer
instruction and guidance to the many, and might improve organizations and society in the
process. For example, organizational goals are set by knowing leaders in organizations before
actions are undertaken. A strategy is defined as a goal-oriented action. These leaders ought to
know what goals are appropriate at what time for their organizations. Goals define the metrics
and benchmarks without which performance measurement becomes meaningless. Our paper
contributes to developing choices for benchmarks that can be used in conjunction with metrics.
Our conceptual paper on metrics and benchmarks provides guidance for higher education
leaders. Nine perspectives of performance evaluation are captured in a 3×3 matrix that forms the
conceptual underpinning as shown in Table 1. We assert that senior leaders will benefit from
understanding the nine perspectives described in this paper to compare and appraise their current
evaluative schemes.
Self-referencing
Cells 1 to 3 shown in Table 1 fall under what we call a self-referencing mindset, where one looks at performance evaluation over time. The most common example is comparing this quarter's performance with the performance in the same quarter last year. The focus is on historical benchmarks: "We did this last year, or five years ago, and this is what we are doing now."
External-referencing
Cells 4 to 6 shown in Table 1 fall under what we call an external-referencing mindset, where one looks at performance evaluation relative to benchmarks that are outside the firm. The most common example is comparing our institution's performance with that of our competitors or peers.
Aspirational-referencing
Cells 7 to 9 shown in Table 1 fall under what we call an aspirational-referencing mindset, where one looks at performance evaluation relative to ideal achievement levels, which are essentially what our aspirations impel us to achieve. The most common example is comparing our firm's performance with stretch goals that we want to ambitiously achieve as a firm: "We achieved this level of performance; however, this is what we aspire to achieve."
For higher education institutions, like business organizations, the reference for performance
comparisons stems from the perspectives of various stakeholders. It is the leader's job to manage
and shape stakeholder relationships to create optimum value and to manage the distribution of
that value (Freeman, 1984; Jones, 1995; Walsh, 2005). Where stakeholder interests conflict, the
executive must find a way to rethink problems so that the needs of a broad group of stakeholders
are addressed, and to the extent this is done even more value may be created (Harrison et al.,
2010). If tradeoffs have to be made, as sometimes happens, then executives must figure out how
to make the tradeoffs, and then work on improving the tradeoffs for all sides (Freeman et al.,
2007). Because of the large number of stakeholders, one could pigeonhole these multiple imperatives into the nine distinct referencing categories, as shown in our framework for metrics and benchmarks.
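The nine-cell framework can be sketched as a simple lookup from a benchmark mindset and a metric category to a cell of Table 1. This is an illustrative sketch, not code from the paper; the function and list names are our own. Cell numbering follows the paper: cells 1-3 are self-referencing, 4-6 external-referencing, 7-9 aspirational-referencing, with input, process, and output metrics in that order within each mindset.

```python
# Illustrative sketch (not from the paper): the 3x3 framework as a lookup
# from (benchmark mindset, metric category) to the cell number in Table 1.

MINDSETS = ["self", "external", "aspirational"]   # benchmark axis (rows)
CATEGORIES = ["input", "process", "output"]       # metric axis (columns)

def cell_number(mindset: str, category: str) -> int:
    """Return the Table 1 cell (1-9) for a mindset/category pair."""
    return 3 * MINDSETS.index(mindset) + CATEGORIES.index(category) + 1

# Example: an input metric judged against an aspirational benchmark
# falls in cell 7, consistent with cells 1, 4 and 7 being input metrics.
print(cell_number("aspirational", "input"))  # 7
```

The mapping makes explicit that every metric category appears once under each referencing mindset, which is the sense in which the framework is "balanced."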
Input Metrics
Cells 1, 4 and 7 shown in Table 1 refer to input metrics for any system under consideration. In
the case of higher education, input metrics could refer to any of the myriad inputs that enter the
system of higher education. As an example, we have chosen the quality of college freshmen (Astin & Oseguera, 2004) as the performance metric to develop minimum thresholds of performance.
Process Metrics
Cells 2, 5 and 8 shown in Table 1 refer to process metrics for any system under consideration. In
the case of higher education, process metrics could refer to any of the myriad processes that are
internal to the system of higher education under consideration. As an example, we have chosen the use of learning technologies (Laurillard, 2013) as the performance metric to develop benchmarks.
Output Metrics
Cells 3, 6 and 9 in Table 1 refer to output metrics for any system under consideration. In the
case of higher education, output metrics could refer to any of the myriad outcomes from the
system of higher education under consideration. As an example, we have chosen student learning outcomes (Zis et al., 2010) as the performance metric to develop minimum thresholds of performance.
The nine cells in Table 1 provide a conceptual framework to anchor performance enhancement
initiatives in higher education institutions. For example, finding jobs for graduating students has
become a huge problem in some academic disciplines. Sweeney (2012) described a summit in January 2012 where an administrator from the University of Cincinnati told the panel of speakers that she would like to see the summit address "the 850-pound gorilla in the room," which is the overproduction of Ph.D.s. When compared to the demand (jobs) for Ph.D.s (external-referencing), there is one view of the problem. However, Sweeney (2012) suggested that people get Ph.D.s because they are in love with their chosen areas of study (internal-referencing). Nine different and specific contexts are presented in Table 2 to illustrate that the correct choice of metrics and benchmarks is highly context-specific for higher education institutions. Incorrect choices could undermine performance-enhancement initiatives.
Higher education institutions, much like business organizations, that serve the interests of
multiple stakeholders enjoy higher performance levels (Preston & Sapienza, 1990;
Sisodia, Wolfe, & Sheth, 2007), superior reputation (Fombrun & Shanley, 1990), and greater benefits (Greenley & Foxall, 1997). Perhaps the strongest economic justification to date is found in a
study by Choi and Wang (2009) who found not only that firms with good stakeholder relations
enjoyed superior financial performance over a longer period of time, but also that poorly performing
firms with good stakeholder relations improved their performance more quickly. In a similar
vein, we can expect higher education institutions with good stakeholder relations to be effective
and efficient organizations. The broader evaluative framework offered in this paper would enable them to build and manage such stakeholder relations.
We have provided a conceptual metrics and benchmarks model for performance management
that we believe is more comprehensive and richer than frameworks that are based on diverse
metrics alone. Whenever people think about performance management, metrics and
benchmarks both become relevant and important. Our conceptual metrics and benchmarks model
has several direct implications for practitioners, academic researchers, design consultants, and
higher education institution leaders. First, we broaden the scope of performance measurement from mere diverse metrics (e.g., a dashboard of metrics such as the balanced scorecard) to metrics paired with benchmarks. Our contention is that the interaction between metrics and benchmarks provides the needed
guidance required for performance enhancement. Second, to ensure that the conceptual model is
rooted in reality, future research should be focused on validating it in diverse contexts. Third, by
thinking through the nine perspectives the senior management of higher education institutions
can imagine their evolutionary growth trajectories into the future, much like jumping from one
square to the next over time in Table 1. Finally, by better understanding the need for a broader
scope for the balanced scorecard, organizational design consultants and higher education
institution leaders could develop appropriate dashboards for higher education institutions.
With over 230,000 students enrolled, the University of Phoenix is, by at least one measure, the largest university in the world.
largest university in the world. Table 3 is an example of how the metrics and benchmarks model
can help identify the institution's shared Key Performance Indicators (KPIs). Similar KPI frameworks can support continuous improvement, and these models should always be reflective of the institution's mission and values (Suryadi, 2007). Applying the nine perspectives of performance evaluation creates a comprehensive approach wherein the institution must first identify and then begin to reconcile diverse, and sometimes competing, stakeholder interests. This act of institutional self-evaluation is itself instructive. The university has several output metrics prescribed by the U.S. Department of Education. Most
notably, the amended Higher Education Act of 1965 (HEA) requires every for-profit institution
to attain no more than 90% of its revenue from the Title IV Federal Student Aid program; this is known as the 90/10 Rule. Federal regulations also require for-profit institutions to publish the total cost and employment outcomes of every program offering (U.S. Department of Education, 2014). While these metrics are intended to protect students from burdensome debt loads and mitigate loan default rates, these regulations also define the institution's external-referencing metrics. The College Scorecard (2013) narrowly focuses on five performance metrics: (a) costs, (b) graduation rate, (c) loan default rate, (d) median borrowing, and (e) employment (U.S. Department of Education). Tying these external metrics to eligibility for Title IV Federal Student Aid places significant pressure on the institution.
The university must comply with the Higher Education Act of 1965 (HEA) and participate in
annual research conducted by the U.S. Department of Education's National Center for Education Statistics (NCES). This requires the publication of comparative data regarding graduation and retention
rates through the Integrated Postsecondary Education Data System (IPEDS). This conceptual framework is most valuable when applied to the organization's multiple stakeholders rather than focusing solely on one stakeholder group. Table 3 illustrates internally defined metrics that are unique to the University's mission and student demographic, as well as aspirational-referencing metrics.
Research has suggested that first-generation students have lower retention rates than other students. The first-year attrition risk for first-generation students was reported to be 71% higher than that of their peers with two college-educated parents (Ishitani, 2003, p. 433).
Using the National Center for Education Statistics' (2002) definition of nontraditional students, the university monitors a number of demographic factors for incoming non-traditional students in order to provide appropriate student support services. Examples of these programs
include a Pathways Diagnostic assessment that promotes student alignment to the most
appropriate first-year course sequence. Additionally, the university has identified statistical significance in its Fourth Course Pass Rate, finding that students who complete four courses in the first six months of their first year are more likely to persist in the program. The university
has enhanced its curriculum and full-time faculty in this critical first year. Each course is
measured in multiple ways, including student and faculty end of course surveys as well as Net
Promoter Scores, which measure student satisfaction and likelihood to recommend the
university. Finally, the institution internally measures student learning outcomes at the course,
program, and institutional levels. Cumulatively, these in-process metrics become insightful data
points that may be used to influence student learning outcomes, retention and graduation rates. The university also compares itself against externally established benchmarks and peer groups, primarily other for-profit higher education institutions.
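The Net Promoter Score mentioned among the in-process metrics above can be sketched as simple arithmetic. This follows the conventional NPS definition on a 0-10 "likelihood to recommend" scale (scores 9-10 are promoters, 0-6 detractors); the survey responses below are invented for illustration, not university data.

```python
# Sketch of the conventional Net Promoter Score computation: the share of
# promoters (ratings 9-10) minus the share of detractors (ratings 0-6),
# expressed on a -100..100 scale. Responses here are hypothetical.

def net_promoter_score(ratings: list[int]) -> float:
    """NPS = % promoters - % detractors."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Ten hypothetical end-of-course survey responses:
print(net_promoter_score([10, 9, 9, 8, 7, 10, 6, 9, 10, 3]))  # 40.0
```

Ratings of 7-8 (passives) count in the denominator but in neither group, which is why the score rewards strong rather than lukewarm satisfaction.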
In addition to comparing its incoming student population to peer institutions, the university is
responsible for adhering to the 90/10 Rule, meaning that no more than 90% of its revenue can come from Title IV Student Financial Aid funds. The University has implemented an up-front
financial aid calculator that helps new students understand the need for responsible borrowing.
Additionally, the University implemented financial wellness programs and exit counseling to
help students understand their financial obligations. These programs suggest that students may borrow less than the full amount available when they are better informed about the financial implications of repaying a Title IV student loan. These in-process metrics help the university monitor and influence its Three-Year Cohort Default Rate, the percentage of students who enter loan repayment during a federal fiscal year and default within the three-year tracking window.
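The two federal measures described above reduce to simple ratio checks. The sketch below is our own illustration of that arithmetic; the function names and figures are hypothetical, not the university's actual data or calculation procedures.

```python
# Hypothetical sketch of the two regulatory checks described above;
# all figures are invented for illustration.

def title_iv_share(title_iv_revenue: float, total_revenue: float) -> float:
    """Fraction of revenue coming from Title IV Federal Student Aid funds."""
    return title_iv_revenue / total_revenue

def complies_with_90_10(title_iv_revenue: float, total_revenue: float) -> bool:
    """90/10 Rule: no more than 90% of revenue may come from Title IV."""
    return title_iv_share(title_iv_revenue, total_revenue) <= 0.90

def cohort_default_rate(defaulted: int, entered_repayment: int) -> float:
    """Three-Year Cohort Default Rate: share of borrowers entering
    repayment in a fiscal year who default within the tracking window."""
    return defaulted / entered_repayment

print(complies_with_90_10(title_iv_revenue=85.0, total_revenue=100.0))  # True
print(cohort_default_rate(defaulted=130, entered_repayment=1000))       # 0.13
```

Framing both as ratios makes clear why the financial-aid calculator and borrowing-counseling programs above act as in-process levers: they change the numerators of these externally imposed output metrics.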
Aspirational-Referencing Metrics
The university's aspirations include operational excellence (across all its operations), product innovation (market-relevant curriculum that is dynamically updated to meet the needs of the society which provides jobs for students) and customer intimacy (proactive understanding of the trends in the global education industry). As the university sets forward-looking goals, it emphasizes metrics that promote access, affordability, career relevancy and student outcomes. Output metrics will continue to gain specificity in each area where comparative data are available, increasing the University's ability to set targets for future performance excellence.
The context, input, process, product (CIPP) model is an appropriate macro evaluation model for higher education institutions. However, metrics and benchmarks are not adequately addressed in the CIPP model. Our paper fills this gap with nine perspectives on metrics and benchmarks
conceptualized, illustrated and applied in this paper. We believe that assessing the performance of a higher education institution from an input-process-output view provides a good basis for analysis. The nine-perspective view adds the benchmarking step that leads to action initiatives that would help the institution to
continuously improve. Thus, we believe that the nine perspectives, taken together, move higher
education institutions from mere analysis of past performance to the development of future
strategies.
Higher education institutions are complex entities much like firms in the corporate world.
Multiple stakeholders and multiple goals need to be dealt with in a balanced manner. Balanced
scorecard (Kaplan & Norton, 2001; 2007) is widely used as a dashboard of metrics with
benchmarks by corporate CEOs to manage the complex task of managing the business along
many performance dimensions/metrics. Our paper translated the balanced scorecard philosophy
into a practical framework for ready use by higher education institutions. We believe that if more
higher education institutions develop their own context-specific matrices, as illustrated in Table 3 for the university, such self-analysis will lead to KAIZEN (Brunet & New, 2003), an unending journey of continuous improvement. KAIZEN is a Japanese term for continuous improvement. Where measured performance on a given metric for a particular higher education institution is below or on par with an external benchmark (e.g., competitors), the comparison may provide a sense of complacency for the institution; but the same metric, when compared to an aspirational benchmark, would provide the necessary grist for further performance improvement initiatives. Our paper contributes to inspiring KAIZEN (an unending quest for improvement) in higher education institutions. We also believe that our paper provides
guidance for contemplative strategic thinking about the evolutionary growth trajectory of any higher education institution.
References
Arthur, L. (2009). From performativity to professionalism: Lecturers' responses to student feedback. Teaching in Higher Education, Vol. 14 No. 4, pp. 441-454.
Astin, A. W., and Oseguera, L. (2004). The declining "equity" of American higher education. The Review of Higher Education, Vol. 27 No. 3, pp. 321-341.
Astin, A. W. (2012). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. Rowman & Littlefield Publishers.
Behn, R. D. (2003). Why measure performance? Different purposes require different measures.
Public Administration Review, Vol. 63 No. 5, pp. 586-606.
Boyne, G. A., and Chen, A. A. (2007). Performance targets and public service improvement.
Journal of Public Administration Research and Theory, Vol. 17 No. 3, pp. 455-477.
Brunet, A. P., and New, S. (2003). Kaizen in Japan: an empirical study. International
Journal of Operations & Production Management, Vol. 23 No. 12, pp. 1426-1446.
Cave, M. (Ed.). (1997). The use of performance indicators in higher education: The challenge of
the quality movement. Jessica Kingsley Publishers.
Choi, J. and Wang, H. (2009). Stakeholder relations and the persistence of corporate financial
performance, Strategic Management Journal, Vol. 30, pp. 895-907.
Dahler-Larsen, P. (2011). The evaluation society. Stanford University Press.
Darwin, S. (2012). Moving beyond face value: re-envisioning higher education
evaluation as a generator of professional knowledge. Assessment & Evaluation in Higher
Education, Vol. 37 No. 6, pp. 733-745.
El-Mashaleh, M. S., Minchin, R. E., Jr, and O'Brien, W. J. (2007). Management of
construction firm performance using benchmarking. Journal of Management in
Engineering, Vol. 23 No. 1, pp. 10-17.
Freeman, R.E. (1984). Strategic management: A stakeholder approach, Boston: Pitman
Freeman, R.E., Harrison, J. and Wicks, A. (2007). Managing for stakeholders: Business in the
21st century, Yale University Press. New Haven, CT.
Fombrun, C. and Shanley, M. (1990). What's in a name? Reputation building and corporate strategy, Academy of Management Journal, Vol. 33 No. 2, pp. 233-258.
Greenley, G.E. and Foxall, G.R. (1997). Multiple stakeholder orientation in UK companies and
the implications for company performance, Journal of Management Studies, Vol. 34 No. 2,
pp. 259-284.
Guba, E., and Lincoln, Y. (1989). Fourth Generation Evaluation, Sage, Newbury Park,
CA.
Harrison, J.S., Bosse, D.A. and Phillips, R.A. (2010). Managing for stakeholders, stakeholder
utility functions and competitive advantage, Strategic Management Journal, Vol. 31 No.1,
pp. 58-74.
Herrington, J., Reeves, T. C., and Oliver, R. (2014). Authentic learning environments, Springer
New York, NY.
Ishitani, T.T. (2003). A longitudinal approach to assessing attrition behavior among first-
generation students: Time-varying effects of pre-college characteristics. Research in Higher
Education. Vol. 44 No. 4, pp. 433-449.
Kaplan, R.S. and Norton, D.P. (1992). The Balanced Scorecard: Measures that Drive
Performance, Harvard Business Review, July-August 1992, pp. 174-180.
Kaplan, R. S., and Norton, D. P. (2001). Transforming the balanced scorecard from
performance measurement to strategic management: Part II. Accounting Horizons, Vol. 15
No.2, pp. 147-160.
Kaplan, R.S. and Norton, D.P. (2007). Using the Balanced Scorecard as a Strategic Management
System, Harvard Business Review, pp. 150-161.
Kelman, S., and Friedman, J. N. (2009). Performance improvement and performance
dysfunction: an empirical examination of distortionary impacts of the emergency room wait-
time target in the English National Health Service. Journal of Public Administration
Research and Theory, Vol. 19 No 4, pp. 917-946.
Kember, D., Leung, Y.P., and Kwan, K.P. (2002). Does the use of student feedback
questionnaires improve the overall quality of teaching? Assessment & Evaluation in Higher
Education, Vol. 27 No. 5, pp. 411-425.
Laurillard, D. (2013). Rethinking university teaching: A conversational framework for the
effective use of learning technologies, Routledge.
U.S. Department of Education (2014). Obama administration announces final rules to protect
students from poor-performing career college programs. Retrieved from
http://www.ed.gov/news/press-releases/obama-administration-announces-final-rules-protect-
students-poor-performing-career-college-programs
Walsh, J.P. (2005). Taking stock of stakeholder management, Academy of Management
Review, Vol. 30 No. 2, pp. 426-438.
Wang, V. C. X. (2009). Assessing and evaluating adult learning in career and technical
education. Zhejiang University Press.
Zis, S., Boeke, M., and Ewell, P. (2010). State policies on the assessment of student learning
outcomes: Results of a fifty-state inventory. National Center for Higher Education
Management Systems, Boulder, CO.
Biography:
Ravi Chinta, Ph.D. is currently University Research Chair, School of Advanced Studies, University of
Phoenix. Ravi has 36 years of work experience (14 in academia and 22 in industry). He has worked in
venture-capital start-ups and in large multi-billion-dollar global firms such as IBM, Reed-Elsevier,
LexisNexis, and Hillenbrand Industries. Ravi has 47 peer-reviewed publications in journals such as Academy of
Management Executive, Journal of Small Business Management, Long Range Planning, Management
Research News, Journal of Technology Management in China, International Journal of Strategic Business
Alliances, and International Journal of Business and Globalization.
Dr. Mansureh Kebritchi is founder and chair of the Center for Educational and Instructional
Technology Research at the School of Advanced Studies, University of Phoenix. She has years of
experience as a faculty member and researcher in the field of educational technology. Dr.
Kebritchi's research focuses on improving the quality of teaching and learning and on
evaluation models in higher education institutions. The results of her research have been
published in international journals.
Mrs. Elias is the accreditation project director at the University of Phoenix. She is interested in investigating
evaluation criteria for assessing performance of higher education institutions.
Table 1: Metrics and Benchmarks - A Taxonomic View

Benchmark | Input metric | Process metric | Output metric
Internal | Cell 1: Quality of college freshmen (Astin & Oseguera, 2004) | Cell 2: Use of learning technologies (Laurillard, 2013) | Cell 3: Student learning outcomes (Zis et al., 2010)
External | Cell 4: Quality of college freshmen (Astin & Oseguera, 2004) | Cell 5: Use of learning technologies (Laurillard, 2013) | Cell 6: Student learning outcomes (Zis et al., 2010)
Aspirational | Cell 7: Quality of college freshmen (Astin & Oseguera, 2004) | Cell 8: Use of learning technologies (Laurillard, 2013) | Cell 9: Student learning outcomes (Zis et al., 2010)
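The 3 x 3 taxonomy in Table 1 pairs three benchmark types (internal, external, aspirational) with three metric stages (input, process, output), numbering the cells 1 through 9 row by row. As a rough illustration only (the function and variable names below are hypothetical, not from the paper), that numbering can be expressed as a small lookup structure:

```python
# Hypothetical sketch of Table 1's taxonomy; names are illustrative, not from the paper.
BENCHMARKS = ("Internal", "External", "Aspirational")
METRIC_STAGES = ("Input", "Process", "Output")

def cell_number(benchmark: str, stage: str) -> int:
    """Map a benchmark/metric-stage pair to its Table 1 cell number (1-9).

    Cells 1-3 form the Internal row, 4-6 External, 7-9 Aspirational.
    """
    return BENCHMARKS.index(benchmark) * 3 + METRIC_STAGES.index(stage) + 1

# Example metrics cited in Table 1 for each stage (identical across benchmark rows).
EXAMPLE_METRICS = {
    "Input": "Quality of college freshmen (Astin & Oseguera, 2004)",
    "Process": "Use of learning technologies (Laurillard, 2013)",
    "Output": "Student learning outcomes (Zis et al., 2010)",
}
```

For example, `cell_number("Aspirational", "Process")` returns 8, matching Cell 8 in Table 1.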
Table 2: Illustrative Comparison of Table 1 Cells

Cell 3 | ETS Test scores for ... | Student learning outcomes | Rare (infrequently done at ...) | Sporadic and infrequently done | Small and regional
Cell 7 | Average SAT and ACT scores | Quality of college freshmen (Astin & Oseguera, 2004) | High (when online offerings dominate) | Sporadic nature of use of this benchmark | Planning for increased student enrollments
Cell 8 | # of Courses using web technologies | Use of learning technologies (Laurillard, 2013) | Rare (dynamic changes in online technologies) | Lock-up periods with vendors | Universities embracing online technologies
Cell 9 | ETS Test scores for business students | Student learning outcomes (Zis et al., 2010) | Medium (competitive advantage to increase student enrollments) | Variety of ranking measures | Selected metrics are advantageous
Table 3: Nine Perspectives on Metrics and Benchmarks Applied at the University

Benchmark: Aspirational-Referencing

Cell 7 (input metrics): Pathways; Access; Affordability; Career-Relevant Program Offerings
Cell 8 (process metrics): Financial Aid Calculator; Responsible Borrowing Programs; Scholarships; Diagnostic Assessment; First Year Courses; Fourth Course Pass Rate; # of Full-time Faculty; Career-relevant Curriculum; End of Course Surveys; Net Promoter Score; Course and Program SLO Assessment; Career Services Usage; Research Center Publications
Cell 9 (output metrics): Retention and Graduation Outcomes (i.e. IPEDS Graduation Rate); Student Learning Outcomes (i.e. ETS Proficiency Profile; AACU VALUE Rubrics); Career and Employment Outcomes (i.e. Gainful Employment Regulations); Three-year Cohort Default Rate; Research and Scholarship Outcomes
INPUTS: Students; Faculty; Staff; Resources
PROCESSES: Curricular Programs; Extra-curricular Programs; Research Programs; Service Programs
OUTPUTS: Graduates; Research; Alumni; Community