
Evaluation Practices and Methodologies:

Lessons for University Ranking


By Bertrand BELLON

1. Introduction
Hierarchical ranking is the most common and simplest instrument of
comparison between discrete indicators. It is the quickest way to obtain a
comparison between competing entities, as long as objectives, rules of
behavior and the relevant measurement tools are shared within a community
(school class, athletic group, business, technology… measured by marks,
speed, profit, financial assets, etc.). As an instrument of evaluation, ranking
can be applied to almost any criterion. From school marks to the book of
records (which focuses attention on the individual who first achieved a record
figure), ranking is used everywhere as a means of measurement and
comparison. How can one improve this unavoidable measure? If one agrees
with the choice of data, ranking presents no difficulties except for the quality
of data collection. According to a criterion or a set of criteria, the researcher
ranks an entity first until another entity surpasses it under the accepted
criteria. This process has accelerated within an environment marked by
increasingly open societies and expanding economies, providing ranking with
a new era of development.
Even so, ranking appears highly problematic when dealing with complex
and intangible goods such as knowledge, science, technology and
innovation. In the case of the “production” of universities, simple criteria do
not apply due to the high complexity level (which is the case in most
dimensions of Social Sciences). Ranking may help at first in making crude
distinctions, but it immediately becomes a limited instrument, for there is no
“unique best way” to apply it in any human activity.
Given the fact that there are many possibilities to improve the ranking
process within its own rules and limits, this chapter intends to “drain” from
the methodology of evaluation several elements with which to improve
ranking of “world-class universities”. The author will begin, in Section 2,
with the growing need for a better understanding of university structures and
strategies at the present time. He will then, in Section 3, underline the
diversity of the objectives of evaluation, comparing them with the simpler,
and thus easier to understand, objectives of ranking. Section 4 recalls a
selected panel of indicators that can be managed within an evaluation
process. Section 5 draws lessons from evaluation indicators to improve the
ranking activity. In conclusion, the author will revisit a few core questions
about the goals of ranking and evaluation.

2. The Increasing Need for a Better Characterization of Universities
In an open economy and society, the characterization of academic activity
and of performances is not only a concern for transversal authorities [for
years, the OECD has published indicators on education and research and
UNESCO has produced an important study on performance indicators and
universities (Fielden and Abercromby, 1969)] but also an increasing need
for each individual university. Characterization is thereby jointly related to
measures of absolute excellence and to self-improvement within each
specific context. A better understanding of a university’s standpoint, better
management of the given assets, and better efficacy and output are thus
mandatory, both for the locally embedded and for the world-class
university.
Yet, characterization raises two different questions:
─ how well do universities perform when their goals and means are
taken into account?; and,
─ in what ways is a particular university better than, equivalent to, or
worse than its competitors?
Interestingly, however, these questions do not mean the same thing to
everybody, for they inevitably differ according to the values of the
universities’ stakeholders.

2.1. External stakeholders


Universities gather a wide variety of stakeholders (internal and external)
who are increasingly active and concerned with the way they are managed
and with their results. As these partners become “drivers”, their requirements
differ widely from one another. That is to say:
Public authorities involved in university support are increasingly
concerned with the use of public money. The main share of universities’
budgets still depends on public decisions (in line with Adam Smith’s and
Alfred Marshall’s theory of the external effects attributed to knowledge and
education). In a period of relative shortage of public budgets – due to
increasing competition between states, to non-interventionist ideology and
to cuts in budget deficits – increasing responsibility is put on public project
managers, with more attention being paid to results (and consequently to
the universities’ management mode).
Taxpayers are increasingly reactive to the way their money is used by
public as well as private research institutions. This often justifies political
and public financial short-term views, in contrast to the long-term
dimension of the research process and the complex articulation between
Fundamental and Applied Sciences.
Universities have become increasingly decisive tools for economic
competitiveness, knowledge and innovation. This has led industries to be
directly concerned with university processes (e.g., hiring skilled students
and benefiting as directly as possible from new research). This concerns not
only high-tech industries, but also every “mid-tech” and “low-tech”
business involved in innovation and in the increasing use of general-purpose
technologies (Helpman, 1998).
Finally, journalists and other opinion makers are very active in shaping
universities’ visibility. They create and convey the images – given to people
as proven reality – emphasizing both fictive and real strengths and
weaknesses of universities.
One can thus see that universities have become increasingly indebted
to, or at least dependent upon, an increasing number of external partners,
such as taxpayers, government administration and politicians, national and
international organizations, business managers, journalists, as well as
foundations and NGOs, etc. For various reasons, those external stakeholders
focus on the final “outputs” of universities. At best, they require information
concerning the relation between material and labor “inputs” (what they have
paid for) and “output” or “production”. Thus, external stakeholders are
largely unconcerned with the two central processes of university activity,
i.e., the production of new knowledge, and the teaching-learning process
between professors and students. Indeed, in most cases, the university
remains a mysterious “black box” to them, a vision reinforced by the very
complexity of these two intangible, ambiguous (and therefore hard-to-
evaluate) production processes. Hence, these complex problems of
education, learning, researching, and governing these institutions are left to
specialists.

2.2. Internal interest and need for self-evaluation of universities


External interests are not the university’s only partners, however. University
managers, students, professors, researchers and administrative staff are the
other major internal partners who come to the institution with their own
specific interests and objectives.
Students also participate in the openness of economies. This is done by
their “shopping” among universities worldwide, according to their own
objectives, capabilities and means. Students might choose a university
because it is nearest to their home, but, more and more often, they will make
their choices according to institutions’ and diplomas’ fame, given that their
main concern is to increase their chance of finding rewarding jobs after
graduation. They tend to be more discriminating among foreign
universities than among their own country’s universities, which increases
the artificial differences carried by reputation and image, as compared to
real relative capabilities.
Researchers and professors tend to field multiple job applications –
simultaneously among diverse universities – looking for the “most famous”
one or, barring that, the one that provides them the best facilities for
research and teaching. “Quality of life” issues, in their various dimensions,
from day-to-day particulars to lifelong career prospects, are the main
determinants of their final choice.
Working amid such “driver behaviors”, university managers carry the
responsibility of achieving optimal production and productivity from
the two above-mentioned groups, by building a working coherence out of
heterogeneous and highly individualistic behaviors.
Each internal partner is thus a stakeholder, carrying forward his or her
own objectives, governed, in part, by an interfacing of personal and social
opinions and abilities of self and others. This makes the university resemble
a network of competing interests. In this regard, the ultimate responsibility
of its management is to provide a minimum degree of satisfaction to each
partner. Universities therefore have a great need for finer abilities of
inward understanding and evaluation.
It is known that academic ranking has different meanings according to
who is looking at it. In this context it is not surprising that university
managers’ interest in academic ranking has been two-fold, i.e., fostering
better management of universities and consolidating a new field of research
about the production and diffusion of knowledge. Other objectives can be
added to these, each one bringing its own consequences to bear upon the
work to be done, but the “ranking user” issue remains the central one. As
such, it will have effects on the whole process of academic measurement,
including the choice of indicators and methods of data collection.

2.3. The Multi-helix model


The new exigency of information and control, initiated by the double
“inside-outside” stakeholder demand, is strengthened by the increasing
interest of researchers in Science and Technology production and in
learning activities and processes. The character of this interest is situated at
the intersection of four vectors.
A growing interest in “macro” studies, that is, making large comparisons
of data, and/or providing general historical perspectives of trends (Mowery
and Sampat, 2004; Martin and Etzkowitz, 2000). Many trend studies are
based on historical cases of specific universities (Jacob, Lundqvist and
Hellsmark, 2003).
The renewed attention to “excellence” among competing universities
can also be associated with scientometric benchmarking and patent analysis
(Noyons, Buter, van Raan, Schmoch, Heinze, Hinze and Rangnow, 2003),
as well as with studies in the United States of America, Canada and Europe
(Balconi, Borghini and Moisello, 2003; Carayol and Matt, 2004; Azagra
Caro et al., 2001).
Emerging questions on the strategies of universities (European
Commission, 2003) and related issues, including the governance of
universities (Musselin and Mignot Gerard, 2004; Reale and Potì, 2003) and
the organization of research activities (laboratories and research centres
versus teaching departments, interdisciplinary versus disciplinary
organizations, allocation and rewarding mechanisms, articulation between
teaching and research positions, the role of research in career stages, etc.).
Finally, every public institution involved in R&D and education is
increasingly interested in studies on the production process of knowledge.
Regional observatories have therefore been created as instruments for
orientation of funding decisions (as has been done at the European, national
and local levels, even including medium-sized cities).
This representation is currently enlarged with the relations developed
between the university and the public at large and the multiplicity of
organizations that belong neither to government nor to business (e.g.,
NGOs, international organizations, foundations, multilateral, European and
regional entities, cities, etc.).
Except for the hardware necessary to conduct research, both academic
inputs and outputs are intangibles. In consequence, only a small part of such
intangibles are identified and thus very limited instruments exist to measure
them (Cañibano and Sánchez, 2004). Furthermore, research in such a science
requires multidisciplinary work: Sociology, Economics, Science Policy,
Management, etc. Yet, recent theoretical developments have brought some
interesting benefits to this field of study. Partha Dasgupta and Paul David
(1994) have suggested a framework for a new Economics of Science,
Michael Gibbons (2004) has identified the “Mode 2” concept of research,
and Henry Etzkowitz and Loet Leydesdorff (1997) have popularized the
“Triple Helix” concept as a way to see government, academia and industry
as parts of one structure. In sum, the complex relation system
(University-Industry-Government) “increasingly provides the knowledge
infrastructure of society” (Etzkowitz and Leydesdorff, 2000, p. 1). The model
verifies that: 1) the relationships among universities, industry and
government are changing; and, 2) there are internal transformations in each
of these individual sectors as well. Consequently, universities are not just
teaching or research institutions, but combine teaching, research and service
toward and for society. In other words, within a knowledge-based economy
the noted triple helix model turns into a multi-helix one, with the main
function being given over to universities. It is this growing sphere of social
function and responsibility that explains growing pressures for an accounting
of resources employed and deployed by universities, yet they are given no
unique set of criteria by which to measure their performances.
Figure 1. The Multi-helix Model

3. Ranking versus Evaluation Processes


In this context, evaluation processes will take different forms and include
different objectives according to different problems and different missions
of institutions.
These can be distinguished as follows:
─ evaluation (to fix the value, and measure the position regarding
objectives or partners);
─ monitoring (to verify the process of the activity; to admonish, alert
and remind);
─ control (to verify an experiment by duplication of it and comparison);
─ accountability (to be able to provide explanations to stakeholders for
actions taken);
─ ranking (to put individual elements in order and relative position; to
classify according to certain criteria).
Complicating matters, however, is the fact that the “field” of the
university is composed of ideas, knowledge, information, communication,
etc., which are typically unique, non-positional and, hence, “non-rival” goods.

3.1. Taking into account the diversity of higher educational missions


Because universities deal with the creation and the diffusion of knowledge,
the varieties of their missions are endless. These composite varieties are the
joint result of multiple knowledge characteristics and of the variety of each
university’s stakeholders and their concerns. The first partner will focus on
the ability to train larger numbers of students; the second on increasing the
international research network; the third on producing Nobel Prize or Fields
Medal winners, etc. However, whatever the diversity, every stakeholding
group will be concerned with gaining better recognition and visibility for its
university.
In addition, university functions are ever growing in diversity. This is
due to the enlargement of the boundaries of scientific thought well into the
area beyond the laboratory, thus facing increasing pressures to introduce
new applied technologies into day-to-day life (i.e., into the production of
goods and services). On a synthetic level, a university can be characterized
as having a double mission of training (basic and continuing education) and
of researching (production and diffusion of new knowledge). Beyond these,
a “third mission” of universities (SPRU, 2002), that of providing services to
society, is growing in importance, with broad socio-economic impacts
encompassing both for-profit and not-for-profit outputs.

3.2. Evaluation as a starting point


When using different individual evaluations to compare universities,
researchers face great difficulty in agreeing upon indicators and in
proceeding to efficient benchmarking. At best, universities will be comparable
when they share similar goals and they benefit from similar means – which
is very rarely the case. Yet, one can consider the ongoing processes of
evaluation as trials to identify the useful indicators for a given set of
questions and for a set of universities. The characterization of universities is
thus a first step toward the benchmarking of universities worldwide.
In the evaluation processes, various types of practices and meanings can
be considered:
─ discipline-based versus institution-based evaluation;
─ inputs-based versus outputs-based evaluation;
─ internal versus external evaluation;
─ qualitative versus quantitative evaluation.

3.3. The ranking process


The worldwide liberalization of markets and societies has created a new
global competition among universities. When considering research and
teaching, the universities considered the “best” will attract the more
talented students; by attracting the “best” students, they will in turn be able
to reinforce their capabilities for autonomous selection processes for their
own benefit. Academic ranking intends to provide means of comparison and
identification between universities according to their academic or research
performance. Yet, the result will be twofold: on the one side, an increasing
need for worldwide universities of excellence, on the other side, an
increasing need also for local universities specialized in providing
college-level (rather than doctoral-level) training, including a
strong commitment to regional issues and development, corresponding to
the eventual creation of disciplinary “niches” for research at a level of
excellence.
There is a gap, and often an unbridgeable one, between evaluation
processes and ranking processes. As far as this work is concerned, it will
focus on ranking considerations. Given that ranking must be based upon
incontestable (or at least objective) indicators, the objective of this author’s
contribution is to draw on a set of experiences from specific evaluation
processes in order to benefit the methodology of ranking production.
4. Evaluation Indicators
This section will present data and indicators commonly used for evaluation,
the objective being to derive a few well-grounded data that can be
adapted and used for general ranking processes. In so doing, however, it is
vital to keep in mind that ranking on a worldwide scale induces very strict
constraints that limit the number of possible indicators that can be
employed. Therefore, indicators will be limited to those that are already
existing, comparable and easy to collect.
This explains why the selection of indicators will be very limited. This
brings new credit to some very restrictive measures (such as the Nobel Prize
award) due to their worldwide recognition and already existing status.

4.1. Criteria for data collection


Criteria for data collection within evaluation processes have a much wider
basis. Data are primarily selected at the level of each institution, for its
specific purposes. The objective is to evaluate the variety and weight of
universities’ inputs and outputs, drawing out the relations between them.
A university evaluation compares the institution’s “production” with its own
goals and means, rather than with those of its counterparts. An important
share of the information process is done at the level of the university itself,
allowing limited comparisons with other partners. The main objective is to identify
indicators that represent the most complete range of intellectual activity:
from production of knowledge to its use.
Evaluation indicators must be both feasible and useful. For this, they
must meet a set of criteria. Below are the criteria adopted by a
European project, MERITUM, which agreed upon a set of characteristics
leading to a set of quantitative indicators (MERITUM, 2002).

Figure 2. Characteristics required for evaluation indicators


─ Useful
  ─ Relevant: significant, understandable, timely
  ─ Comparable
  ─ Reliable: objective, truthful, verifiable
─ Feasible
The degree of fulfillment of these eleven characteristics determines the
degree of quality of the overall process. Details of each of these
characteristics are:
─ useful: allows decision-making by both internal and external users;
─ relevant: provides information that can modify or affirm the
expectations of decision-makers. In such cases the information
should be:
─ significant: related to issues critical for universities;
─ understandable: presented in a way it can easily be understood by
potential users;
─ timely: available when it is required for analysis, comparison or
decision-making purposes.
Turning to the development of specific indicators, they should be
comparable, that is, indicators should follow criteria generally accepted by
all implicated organizations in order to allow for comparative analysis and
benchmarking; and reliable, that is, users need to be able to trust them. To
meet these criteria, indicators are required to be:
─ objective: the value is not affected by any bias arising from the
interests of the parties involved in the preparation of the information;
─ truthful: the information reflects the real situation;
─ verifiable: it is possible to assess the credibility of the information it
provides.
Finally, calculation of all indicators should be cost-efficient, or feasible.
That is to say, the information required for the proposed indicator and its
computation should be easily obtained. The cost of extracting the information
from the university’s information system, or of modifying that system in
order to obtain the required information, should be lower than the benefits
(whether private or social) arising from the use of this indicator.
4.2. A matrix of indicators


The European project on the Observatory of European University (OEU) has
developed a framework to characterize universities’ research activities (at
the moment, it does not include teaching activities) (see <www.prime.org>).
The result is a two-dimensional matrix, devoted to:
─ characterizing the status of university research management;
─ identifying the best performing universities; and,
─ comparing the settings within which universities operate.

Figure 3. Observatory of European University evaluation matrix


Thematic tools (columns): Funding; Human resources; Academic outcomes; Third mission; Governance.
Transversal objectives (rows): Attractiveness; Autonomy; Strategic capabilities; Differentiation profile; Territorial embedding.
Source: Observatory of European University: PRIME Network <http://www.prime-noe.org>.

The matrix and its elements are as follows:


─ The first dimension of the matrix analyses thematic aspects of
university management. The OEU research has considered five
themes herein: the first two representing “inputs”, the next two
representing “outputs”, and the fifth representing the governance
of the institution:
─ Funding: includes all budget elements, both revenues and
expenses (total budget, budget structure, sources of funding, rules
for funding and for management);
─ Human Resources: includes professors, researchers, research
engineers and administrative staff, plus PhDs and post-docs
(number, distribution, functions between research, teaching and
management, staff turnover, and visiting and foreign fellows).
Human resources must be considered both as labor stocks
(numbers of people) and as labor flows (human flows, mobility);
─ Academic Outcomes: includes articles and books, other academic
publications, citations, and the knowledge embodied in PhDs
being trained through research activities;
─ Third Mission: (the university “third mission” is noted as an
addition to the two other “traditional” university missions:
teaching and research) concerns the service outreach linkages
between the university and its non-academic partners, e.g.,
industry, public authorities, international organizations, NGOs
and the public-at-large (covering activities such as employment of
alumni, patents and licenses, spin-offs and job creation, support of
public policy, consultancy, and promotion and diffusion of Science
and Research activities);
─ Governance: includes the process by which the university
converts its inputs (funding and human resources) into research
outputs (academic outcomes and third mission). It concerns the
management of institutions, from both above the university (as in
its manner of relations with government and other finance
providers) and within the university.
─ The second dimension of the matrix deals with transversal issues that
can be applied to each thematic category, identifying or measuring
the capabilities of the university regarding its various stakeholders.
The OEU research team has considered five transversal issues:
─ Attractiveness: Each university’s capacity to attract different
resources (money, people, equipment, collaboration, etc.) within a
context of scarcity.
─ Autonomy: Measures each university’s margin of maneuver,
formally defined as the limits, established by external partners
(mainly government and other finance providers), to which a
university must conform.
─ Strategic Capabilities: Indicates each university’s actual ability to
implement its strategic choices.
─ Differentiation Profile: The main features of each university that
distinguish it from other strategic actors (competing universities
and other research organizations) by its degree of specialization
and degree of interdisciplinarity, etc.
─ Territorial Embedding: The geographical distribution of each
university’s involvements, contacts, collaborations, and so on
within a defined locale, i.e., being a measure of the “territorial
utility” of the university activity.
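
To make the two-dimensional structure concrete, the matrix can be thought of as a grid in which each cell pairs one thematic tool with one transversal issue and holds whatever indicators an institution chooses for that pairing. The short Python sketch below is purely illustrative: the dimension labels follow the OEU matrix above, but the example indicators placed in the cells, and the helper names, are hypothetical and are not part of the OEU framework.

```python
from typing import Dict, List, Tuple

# Thematic tools (first dimension) and transversal issues (second dimension),
# as listed in the OEU matrix above.
THEMES = ["Funding", "Human resources", "Academic outcomes", "Third mission", "Governance"]
ISSUES = ["Attractiveness", "Autonomy", "Strategic capabilities",
          "Differentiation profile", "Territorial embedding"]

# Each cell of the matrix holds the indicators chosen for that (theme, issue) pair.
Matrix = Dict[Tuple[str, str], List[str]]

def empty_matrix() -> Matrix:
    """Create an empty OEU-style matrix with one (initially empty) cell per pair."""
    return {(theme, issue): [] for theme in THEMES for issue in ISSUES}

# Hypothetical example: two cells filled with illustrative indicators.
matrix = empty_matrix()
matrix[("Funding", "Attractiveness")].append("share of competitive external funding")
matrix[("Human resources", "Attractiveness")].append("share of foreign PhD students and post-docs")

# A university profile is then simply the set of non-empty cells.
for (theme, issue), indicators in matrix.items():
    if indicators:
        print(f"{theme} x {issue}: {indicators}")
```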
In the actual process of using the matrix, however, many adjustments
have to be made, mainly to balance complexity and feasibility. One of the
most complex examples is the service or “third mission” dimension, for it
requires reconciling business-type dimensions (intellectual property, contracts
with industry, spin-offs, etc.) with social and policy dimensions (public
understanding of science, involvement in social and cultural life,
participation in policy-making, etc.). Thus, the following chart detailing this
sphere of activity recalls various dimensions of the previous one, with
concise added presentations of relevant data.

Figure 4. The “third mission” dimension, eight items for data collection

1. Human resources
– Competencies trained through research transferred to industry (typical case of “embodied
knowledge”).
The essential indicator is: PhD students who work in industry, built upon both numbers and
ratios. The combination is important, since a ratio of 100 percent based on a single PhD
delivered may be far less relevant for industry than a ratio of 25 percent based on twenty PhD
students.

2. Ownership
– Research leading to publications or patents; with a changing balance between them.
The key indicators are: patent inventors (number and ratio) and returns to the university (via
licenses from patents, copyrights, etc., calculated as a total amount/ratio to non-public
resources). Other complementary indicators reflect the proactive attitude of the university
(existence of a patent office, number of patents taken by the university).

3. Spin-offs
– Indicators relevant here are composite ones, that is to say they take into consideration the
three following entries:
– the number of incorporated firms;
– the number of permanent staff involved;
– more qualitative involvement such as: the existence of support staff funded by university;
the presence of business incubators; incentives for creation, funds for seed capital; strategic
alliances with venture capital firms, etc.

4. Contracts with industry


– The traditional indicators are number of contracts (some prefer number of partners, which is
more difficult to assess), and total financial assets generated, the ratio of which can be calculated
vis-à-vis external resources.

5. Contracts with public bodies


– With this axis, the “societal” dimension is entered.
The key indicators here are contracts commissioned by a public body in order to solve problems
(versus academic research funding).
– It is important here to differentiate “local” (or nearby environment) contracts from “other”
contracts (mostly national in large countries, possibly quite international in small countries).
– Elements for analysis are the same as for industrial contracts, i.e., number, volume, ratio.

6. Participation into policy-making


– Qualitative context: to build a composite index based on involvement in certain activities,
with yes/no entries and measures of importance included.
– List of activities to consider includes: norms/standards/regulation committees, expertise,
formalized public debates.

7. Involvement in social and cultural life


– Qualitative context: a composite index concerning specific investments, existence of
dedicated research teams, or involvement in specific cultural and social developments.

8. Promoting the public’s understanding of science


– Qualitative context: another composite index, ranging from specific events to promote science
to the classical involvement of researchers in dissemination and other forms of public
understanding of science, including articles, TV appearances, books, films, etc.

Source: Observatory of European University: PRIME Network. <http://www.prime-noe.org>.
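
As a purely illustrative aid, the sketch below shows how two of the data-collection items above might be turned into numbers: the “embodied knowledge” indicator of item 1, which combines an absolute count with a ratio (so that one PhD out of one does not outrank five out of twenty), and a yes/no composite index of the kind suggested for items 6 to 8. The function names and the equal weighting of activities are assumptions, not prescriptions of the OEU framework.

```python
from typing import Dict, Tuple

def phd_industry_indicator(phds_in_industry: int, phds_delivered: int) -> Tuple[int, float]:
    """Item 1: report the absolute number together with the ratio.

    Keeping both values avoids the pitfall noted above, where a 100 percent
    ratio based on a single PhD would look better than 25 percent of twenty.
    """
    ratio = phds_in_industry / phds_delivered if phds_delivered else 0.0
    return phds_in_industry, ratio

def composite_yes_no_index(answers: Dict[str, bool]) -> float:
    """Items 6-8: a simple composite index built from yes/no entries.

    Equal weighting of the activities is an assumption; the text only asks
    for yes/no entries plus some measure of importance.
    """
    if not answers:
        return 0.0
    return sum(answers.values()) / len(answers)

# Hypothetical usage
print(phd_industry_indicator(phds_in_industry=5, phds_delivered=20))   # (5, 0.25)
print(composite_yes_no_index({
    "standards/regulation committees": True,
    "formal expertise for public bodies": True,
    "formalized public debates": False,
}))  # approximately 0.67
```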

5. Lessons Drawn from Evaluation Processes


Based on the author’s considerations of the evaluation processes, this
section will suggest ideas to improve the academic ranking processes, not
with the aim of creating new conclusions, but to provide new elements for
consideration in the ranking versus evaluation debate.
The multiplication of evaluation processes facilitates new competition
between universities, greatly modifying the existing dynamics of science.
Researchers are now faced with a multiple model, which challenges “big
science” (and the Nobel Prizes it brings) with new forms of “co-operative
science” and more “internally driven” research strategies. This new
landscape, with a wider variety of dynamic models, must now be taken into
account.
At this point, strong arguments exist to advocate a radical divergence
between evaluation (and its characterization of universities) and ranking. On
a strictly critical standpoint, there exist, on one side, ranking processes
limited to structurally crude bibliometric approaches, based on the smallest,
most visible parts of the “output” of the complex process of knowledge. The
risk is that such a limited focus will lead to a caricatured vision of
university missions, providing almost no possibility to draw useful relations
university missions, providing almost no possibility to draw useful relations
between input and output. The existing set of indicators for the Jiao Tong
University ranking is:

─ Quality of Education: alumni of an institution winning Nobel Prizes and Fields Medals.
─ Quality of Faculty: staff of an institution winning Nobel Prizes and Fields Medals; highly
cited researchers in 21 broad subject categories.
─ Research Output: articles published in Nature and Science*; articles in the Science Citation
Index-expanded and the Social Science Citation Index.
─ Size of Institution: academic performance with respect to the size of the institution.
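
For readers unfamiliar with how such a list of indicators becomes a single ordering, the minimal sketch below combines normalized indicator scores into one weighted composite and ranks institutions by it. The indicator names echo the criteria above, but the weights, the institution names and the scores are hypothetical; the sketch does not reproduce the actual Jiao Tong methodology.

```python
from typing import Dict

# Hypothetical weights; they do not reproduce the actual Jiao Tong weighting.
WEIGHTS: Dict[str, float] = {
    "alumni_awards": 0.10,
    "staff_awards": 0.20,
    "highly_cited": 0.20,
    "nature_science": 0.20,
    "indexed_articles": 0.20,
    "per_capita_performance": 0.10,
}

def composite_score(indicators: Dict[str, float]) -> float:
    """Weighted sum of indicator scores, each assumed pre-normalized to a 0-100 scale."""
    return sum(WEIGHTS[name] * indicators.get(name, 0.0) for name in WEIGHTS)

# Hypothetical scores for three fictitious institutions.
universities = {
    "University A": {"alumni_awards": 70, "staff_awards": 60, "highly_cited": 80,
                     "nature_science": 75, "indexed_articles": 90, "per_capita_performance": 65},
    "University B": {"alumni_awards": 20, "staff_awards": 10, "highly_cited": 55,
                     "nature_science": 40, "indexed_articles": 85, "per_capita_performance": 70},
    "University C": {"alumni_awards": 0, "staff_awards": 0, "highly_cited": 30,
                     "nature_science": 20, "indexed_articles": 60, "per_capita_performance": 50},
}

# Order institutions by their composite score, highest first.
ranking = sorted(universities, key=lambda u: composite_score(universities[u]), reverse=True)
for position, name in enumerate(ranking, start=1):
    print(position, name, round(composite_score(universities[name]), 1))
```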

On the other side, there exist evaluation processes, which are over-
complex, too qualitative and subjective, appearing restricted to internal use
by each individually evaluated university. External comparisons are thus
limited to the benchmarking of specific functions between selected
universities. At first glance, therefore, it seems evaluation processes may not
be well adapted to making global comparisons between universities.
From this perspective, ranking and evaluation processes stand opposed
to one another. Yet, from the perspective of their objectives, they seem very
close. That is, both aim to meet the need for better efficiency via a better
management of university missions.
The “ways and means” for institutional ranking have already progressed,
but they can still be greatly improved. In this respect, ranking can greatly
benefit from certain indicators being used in the act of evaluation, but only
when they can be generalized. The question now remaining is how to best
identify relevant indicators and discover the best way to produce them
within strict limits on means.

5.1. Ranking must reflect a minimum of diversity


The “Nobel Prize” model’s main limitation is its strong reference to the
“one best way” model, which is conceptually inadequate given the actual
worldwide competition among universities, based on the differentiation of
competencies and on competition in a limited number of specific fields (e.g.,
Nano- and Macro-Sciences). At this point, a preliminary debate would be
needed, which would clarify the objectives of academic ranking by moving
away from a monolithic vision of “world-class universities” toward a set of
criteria adequate to measure the diverse strategic objectives of universities
with differentiated development trajectories.
Moreover, ranking processes must also take into account the variety of
meanings given to each indicator. An indicator may be efficient in one case
and totally misused in another. Thus it can rightly be argued that:
─ What is useful or relevant for one university, in one scientific field, is
not systematically useful or relevant for another university
specialized in other domains, with other constraints and objectives;
and,
─ What is useful or relevant for public authority or other stakeholders is
not systematically useful or relevant for the university itself.
The choice of a “universal” set of defining characteristics of
“excellence” will nonetheless end with the splitting of universities into
different categories, as is the case for any organized championship.

5.2. The set of characteristics should include input, output and governance
indicators
The “relative utility” or relevance of an indicator is its ability to be used as a
tool for university management (finance, governance and work). Indicators
must provide access to the university’s production spectrum and
differentiation profile. The first two improvements concern the
discontinuance of some existing indicators and the adoption of more
appropriate ones, for example:
─ Differentiation by discipline or scientific field (including Social
Sciences).
─ Introduction of significant input data and production of
“input/output” ratios.
─ Development of indicators for local “embeddedness” and global
reach (i.e., local and global impact of universities).
─ Enlargement towards effective teaching indicators (as compared to
research).

5.3. These changes will require specific collections of data: new indicators
mean new work
Such a renewal project will demand specific computation of existing
information as well as the creation of new information. New methodological
work has to be done, in addition to the creation of normalized measures
necessary for the rebuilding of global indicators. For this to happen,
effective connections with the OECD are crucial.

5.4. Enrich the debate in order to enrich the process and the result
The debate on academic ranking will grow in importance in the future, and
will not be limited to a simple evaluation-ranking dispute. At this point, four
questions arise:
─ What does excellence mean, and what is its impact on research and
teaching orientations and activities?
─ How important is the degree of diversity within globalization (not
being limited to the dualistic global/local debate)?
─ What are the differences and specificities within processes of
production, productivity and visibility?
─ Finally (under a transversal approach), a debate appears, questioning
the quality of data themselves and their adaptation to the diversity of
legal, financial and administrative structures of the bodies that form
“universities”.
6. Conclusion: The Missing Link


In looking over the ranking versus evaluation debate, a factor has come to
be seen by the author as central: the impact of the “world ranking” process
upon the development dynamics of universities. He posits this because the
vast majority of universities in the world is not, and has no chance to be,
listed within any “world-class” list. They may possess niches of world-class
excellence and they may produce excellent output. But, for these
universities, the impact of academic ranking is either non-existent or
negative (why would a university fight to get “in” if there is no hope to
“win” or even to be visible and respected?).
On the opposite side, the “elected” universities (those that find
themselves within the top 1,000, 500 or 100 universities, in one or many
different rankings) will incorporate ranking commitment and criteria, both
within their daily management and within their long-term development
strategy. As a result, they will “naturally” select the indicators that have
been already selected by the ranking producers and will make them
mandatory for their component group members (e.g., professors will be
pushed to “publish or perish” even if the resulting research is less than
useful). In such cases, artificial ranking criteria become “the” new rules that
will be adopted and enforced by the universities themselves. The
movement’s ideology and methods are thereby self-reinforced (as in the
case of the heavy weight given to the journals Nature and Science
in the Shanghai ranking) with possible negative effects on the generation of
new hypotheses and academic fields, on the diversity of supported research,
and on interdisciplinary co-operation.
In some cases, academic ranking may have an unexpected structural side
effect. If “university” becomes the unit of evaluation and of action, the
current fragmentation of the French higher education system into
universities, Grandes écoles and specialized research bodies (which may
share academic staff and research activities with the university; research
bodies such as the CNRS or the ENSCM have a particular status as
independent bodies with their own research laboratories) is made to appear
completely outdated, whatever its actual rationality. On the other hand,
one of the results may be the reinforcement of recent moves to increase the
size of universities by merging existing organizations with limited
consideration of their real coherence and synergies.
The evident impact on university management is of major consequence.
University activity is increasingly embedded in a multi-actor social space
that modifies the governance of research, innovation and teaching, taking
part in a new dynamic within the public sector.
Consequently, institutional ranking processes, along with other tools for unit
characterization, may provide original and useful information in the difficult
process of university management: to create and consolidate platforms of
quantitative data in the act of measuring the multidimensional nature of
performance.
Regarding external stakeholders, ranking introduces new rationales for
public intervention and for the incorporation of new actors. Considering its
implications for policy-making, both for governments and for the universities
themselves, ranking opens a whole new field of research. In short, the
debate on university ranking (and on differentiating characterizations in
general) is just beginning.

References

Azagra Caro, J. M., Fernández de Lucio, I. and A. Gutiérrez Gracia (2001)


“University patent knowledge: the case of the Polytechnic University of
Valencia”, 78th International Conference AEA, November 22-23, 2001,
Brussels, Belgium.
Balconi, M., Borghini, S., and A. Moisello (2003). “Ivory tower vs.
spanning university: il caso dell’Università di Pavia”, in, Bonaccorsi, A.
(ed.), Il sistema della ricerca pubblica in Italia. Milano: Franco Angeli,
pp. 133-175.
Carayol, N. and M. Matt (2004) “Does Research Organization Influence
Academic Production? Laboratory Level Evidence from a Large
European University”, Research Policy 33, p. 1081-1102.
Dasgupta, P., and P. A. David (1994) “Towards a New Economics of
Science”, Research Policy 23(5), p. 487-521.
Etzkowitz, H. and L. Leydesdorff (2000) “The dynamics of innovation:
from National Systems and “Mode 2” to a Triple Helix of university-
industry-government relations”, Research Policy 29, p. 109-123.
Etzkowitz, H., and L. Leydesdorff (1997) Universities in the Global
Economy: A Triple Helix of academic-industry-government relations.
London: Croom Helm.
European Commission (2003) “Downsizing And Specializing: The
University Model For The 21st Century?”, Snap Shots from the Third
European Report on Science and Technology Indicators 2003, p. 1-2.
<ftp://ftp.cordis.lu/pub/indicators/docs/3rd_report_snaps10.pdf>
Fielden, J. and K. Abercromby (1969) “Accountability and International
Co-Operation in the Renewal of Higher Education”, in, UNESCO
Higher Education Indicators Study. UNESCO and ACU-CHEMS.
<http://unesdoc.unesco.org/images/0012/001206/120630e.pdf>.
Gibbons, M. (2004) The New Production of Knowledge. London: Sage.
Helpman, E. (1998) General Purpose Technologies and Economic Growth,
Cambridge, MA: Massachusetts Institute of Technology Press.
Jacob, M., Lundqvist, M. and H. Hellsmark (2003) “Entrepreneurial
transformations in the Swedish University system: the case of Chalmers
University of Technology”, Research Policy 32, p. 1555-1568.
Martin, B. R. and H. Etzkowitz (2000) “The Origin and Evolution of the
University Species”, VEST 13(3-4), p. 9-34.
MERITUM. Cañibano, L., Sánchez, P., García-Ayuso, M. and C.
Chaminade (Eds.) (2002). Guidelines for Managing and Reporting on
Intangibles: Intellectual Capital Statements. Madrid: Vodafone
Foundation.
Mowery, D. C., and B. N. Sampat (2004) “Universities in national
innovation systems”, Chapter 8, in, J. Fagerberg, D. C. Mowery and
R.R. Nelson (eds.), Oxford Handbook of Innovation, Oxford: Oxford
University Press.
Musselin, C. and S. Mignot Gerard (2004) Analyse comparative du
gouvernement de quatre universités (Comparative analysis of the
government of four universities). <http://www.cpu.fr/Telecharger/
rapport_qualitatif_gvt.pdf>.
Noyons, E. C. M., Buter, R. K., van Raan, A. F. J., Schmoch, U., Heinze,
T., Hinze, S., and R. Rangnow (2003) “Mapping Excellence in Science
and Technology across Europe”. Life Sciences. Report to the European
Commission. Leiden University.
Observatory of European University (2005). PRIME Network of Excellence.
October. <http://www.prime-noe.org>.
Reale, E. and B. Potì (2003) “La ricerca universitaria”, in, Scarda A. M.
(ed.), Rapporto sul sistema scientifico e tecnologico in Italia,
Milano: Angeli, p. 79-99.
