
Impact factor

The impact factor, often abbreviated IF, is a measure of how frequently the articles in a science or social science journal are cited. It is commonly used as a proxy for the importance of a journal to its field.

Overview

The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information (ISI), now part of Thomson, a large worldwide US-based publisher. Impact factors are calculated each year by Thomson Scientific for the journals it indexes, and the factors and indices are published in Journal Citation Reports. Some related values, also calculated and published by the same organization, are listed below (a short code sketch of all three follows the list):

the immediacy index: the average number of citations that articles published in a given year receive during that same year.

the journal cited half-life: the median age of the articles cited in Journal Citation Reports each year. For example, if a journal's cited half-life in 2005 is 5, citations to its articles published in 2001-2005 account for 50% of all citations to that journal in 2005.

the aggregate impact factor for a subject category: calculated from the number of citations to all journals in the subject category and the number of articles in all of those journals.
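
To make these definitions concrete, here is a minimal Python sketch of all three measures. The function names and the shapes of the input data are illustrative assumptions, not ISI's actual procedures:

    def immediacy_index(citations_in_year, items_published_in_year):
        # Average citations received during a year by articles published
        # in that same year.
        return citations_in_year / items_published_in_year

    def cited_half_life(citations_by_age):
        # citations_by_age maps article age in years (0 = the JCR year
        # itself) to citations received that year. Returns the number of
        # years needed to cover half of all citations, matching the 2005
        # example in the text (ages 0-4, i.e. 2005 back to 2001 -> 5).
        total = sum(citations_by_age.values())
        running = 0
        for age in sorted(citations_by_age):
            running += citations_by_age[age]
            if running >= total / 2:
                return age + 1
        return None

    def aggregate_impact_factor(journals):
        # journals: list of (citations, citable_items) pairs, one per
        # journal in the subject category; both quantities are pooled.
        total_citations = sum(c for c, _ in journals)
        total_items = sum(n for _, n in journals)
        return total_citations / total_items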

These measures apply only to journals, not to individual articles or individual scientists (unlike, say, the H-index). The relative number of citations an individual article receives is better described as its citation impact.

It is, however, possible to measure the impact factors of the journals in which a particular person has published articles. This use is widespread but controversial. Eugene Garfield warns about the "misuse in evaluating individuals" because there is "a wide variation from article to article within a single journal".[1] Impact factors have a huge, but controversial, influence on the way published scientific research is perceived and evaluated.

Calculation

The impact factor for a journal is calculated over a three-year period. It approximates the average number of citations received in a given year by the papers a journal published during the two preceding years. For example, the 2003 impact factor for a journal would be calculated as follows:

A = the number of times articles published in 2001-2002 were cited in indexed journals during 2003

B = the number of "citable items" (usually articles, reviews, proceedings, or notes; not editorials or letters to the editor) published in 2001-2002

2003 impact factor = A/B

(Note that the 2003 impact factor was actually published in 2004, because it could not be calculated until all of the 2003 publications had been received.)

A convenient way of thinking about it: a journal that is cited once, on average, for each article it publishes has an IF of 1 in the expression above.
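
In code, the calculation is a single division. A minimal sketch, with numbers invented for illustration:

    def impact_factor(citations_to_previous_two_years, citable_items):
        # A = citations in year Y to items published in Y-1 and Y-2
        # B = citable items published in Y-1 and Y-2
        return citations_to_previous_two_years / citable_items

    # Hypothetical journal: 210 citations during 2003 to its 2001-2002
    # output, of which 140 items were citable.
    print(impact_factor(210, 140))  # 1.5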

There are some nuances to this: ISI excludes certain article types (such as news items, correspondence, and errata) from the denominator. New journals that are indexed from their first published issue receive an impact factor after two complete years of indexing; in this case, the citations to, and the articles published in, the year before Volume 1 are known to be zero. Journals indexed starting with a volume other than the first will not have an impact factor published until three complete data-years are known. Annuals and other irregular publications sometimes publish no items in a particular year, which affects the count.

The impact factor applies to a specific time period, and it is possible to calculate it for any desired period; the web site gives instructions. Journal Citation Reports also includes a table of the relative rank of journals by impact factor within each specific science discipline, such as organic chemistry or psychiatry.

Debate

It is sometimes useful to compare different journals and research groups. For example, a sponsor of scientific research might wish to compare the results to assess the productivity of its projects. An objective measure of the importance of different publications is then required, and the impact factor and the number of publications are the only such measures publicly available. However, it is important to remember that different scholarly disciplines can have very different publication and citation practices, which affect not only the number of citations but also how quickly, after publication, most articles in a subject reach their highest level of citation. In all cases, it is only meaningful to consider a journal's rank among its peers in a category, rather than the raw impact factor value.

Impact factors are not infallible measures of journal quality.[2] For example, it is unclear whether the number of citations a paper garners measures its actual quality or simply reflects the sheer number of publications in that particular area of research, and whether there is a difference between the two. Furthermore, in a journal with a long lag time between submission and publication, it may be impossible to cite articles within the three-year window. Indeed, for some journals the time between submission and publication can exceed two years, leaving less than a year for citation. On the other hand, a longer temporal window would be slow to adjust to changes in journal impact factors. Thus, although the impact factor is appropriate for some fields of science, such as molecular biology, it is not appropriate for subjects with a slower publication pattern, such as ecology. (It is possible to calculate the impact factor for any desired period, and the web site gives instructions.)
Favorable properties of the impact factor include:

ISI's wide international coverage: Web of Knowledge indexes 9,000 science and social science journals from 60 countries. (This is perhaps only partially correct; see below.)

Results are widely (though not freely) available to use and understand.

It is an objective measure.

It has wider acceptance than any of the alternatives.

In practice, the alternative measure of quality is "prestige": rating by reputation, which is very slow to change, cannot be quantified or used objectively, and merely demonstrates popularity.

The most commonly mentioned faults of the impact factor include:

ISI's inadequate international coverage. Although Web of Knowledge indexes journals from 60
countries, the coverage is very uneven. Very few publications from languages other than English
are included, and very few journals from the less-developed countries. Even the ones that are
included are undercounted, because most of the citations to such journals will come from other
journals in the same language or from the same country, most of which are not included.

The failure to include many high-quality journals in the applied aspects of some subjects, such as marketing communications, public relations, and promotion management, as well as many important but non-peer-reviewed technical magazines. This editorial comment [1] in the Asian EFL Journal complains of Thomson/ISI's failure even to consider rating certain superior journals.

The failure to incorporate book publications including textbooks, handbooks and reference books
into the calculations of the impact factor.

The number of citations to papers in a particular journal does not directly measure the true quality of the journal, much less the scientific merit of the papers within it. It also reflects, at least in part, the intensity of publication or citation in that area, the current popularity of the particular topic, and the availability of particular journals. Journals with low circulation, regardless of the scientific merit of their contents, will never obtain high impact factors in an absolute sense; but if all the journals in a specific subject have low circulation, as in some areas of botany and zoology, the relative standing is still meaningful. Since defining the quality of an academic publication is problematic, involving non-quantifiable factors such as the influence on the next generation of scientists, assigning it a specific numeric measure cannot tell the whole story.

The temporal window for citation is too short, as discussed above. Classic articles are cited frequently even after several decades, but such citations fall outside the window and do not benefit the journal.[3]

In the short term, especially in the case of low-impact-factor journals, many of the citations to a certain article are made in papers written by the author(s) of the original article.[4] This means that the citation count may be independent of the real "impact" of the work among investigators.

The absolute number of researchers, the average number of authors on each paper, the nature of results in different research areas, and variations in citation habits between disciplines, particularly the number of citations per paper, all combine to make impact factors incommensurable between different groups of scientists.[5] Medical journals, for example, generally have higher impact factors than mathematical or engineering journals. This limitation is accepted by the publishers; it has never been claimed that impact factors are useful for comparisons between fields, and such use is an indication of misunderstanding.

By counting the frequency of citations per article while disregarding the prestige of the citing journals, the impact factor becomes a metric of popularity, not of prestige.

HEFCE was urged by the Parliament of the United Kingdom Committee on Science and
Technology to remind Research Assessment Exercise (RAE) panels that they are obliged to
assess the quality of the content of individual articles, not the reputation of the journal in which
they are published [2].

Misuse of impact factor

The impact factor is often misused to predict the importance of an individual publication based on where it was published.[6] This does not work well, since a small number of publications are cited much more than the majority; for example, about 90% of Nature's 2004 impact factor was based on only a quarter of its publications.[7] The impact factor averages over all articles, and thus understates the citation counts of the most-cited papers while exaggerating the citation count of the average publication.

Academic reviewers involved in programmatic evaluations, particularly those for doctoral degree
granting institutions, often turn to ISI's proprietary IF listing of journals in determining scholarly
output. This builds in a bias which automatically undervalues some types of research and distorts
the total contribution each faculty member makes.

The absolute value of an impact factor is meaningless. A journal with an IF of 2 would not be very impressive in microbiology, while it would be in oceanography. Such values are nonetheless sometimes advertised by scientific publishers.

The comparison of impact factors between different fields is invalid. Yet such comparisons have been widely used for the evaluation not merely of journals, but of scientists and of university departments. It is not possible to say, for example, that a department whose publications have an average IF below 2 is low-level; this would not make sense for mechanical engineering, where only two review journals attain such a value.

Outside the sciences, impact factors are relevant for fields with a publication pattern similar to the sciences (such as economics), where research publications are almost always journal articles that cite other journal articles. They are not relevant for literature, where the most important publications are books citing other books. Therefore, ISI does not publish a JCR for the humanities.

Even in the sciences, it is not fully relevant to fields, such as some in engineering, where the principal scientific output is conference proceedings, technical reports, and patents.

Since only the journals in the ISI database are counted, the measure undercounts citations from journals in less-developed countries and in less widely used languages.

Even though in practice they are applied this way, impact factors cannot correctly be the only consideration for libraries selecting journals. The local usefulness of a journal is at least equally important, as is whether an institution's faculty member is an editor of the journal or on its editorial review board.

Manipulation of impact factors

A journal can adopt editorial policies that increase its impact factor.[8] These policies need not involve improving the quality of the published scientific work. Journals sometimes publish a larger percentage of review articles: while many research articles remain uncited after three years, nearly all review articles receive at least one citation within three years of publication, so review articles can raise the impact factor of a journal. The Thomson Scientific website gives directions for removing these journals from the calculation, and for researchers or students with even a slight familiarity with the field, the review journals will be obvious.

Self-citing

Several practices, not necessarily adopted with nefarious intent, lead a journal to cite articles in the same journal and thereby increase its impact factor.[5]

Editorials in a journal do not count as citable items. However, when they cite published articles, often articles from the same journal, those citations increase the citation count for the cited articles. This effect is hard to evaluate, because the distinction between editorial comment and short original articles is not always obvious; "letters to the editor" might belong to either class.

An editor of a journal may encourage authors to cite articles from that journal in the papers they submit. The degree to which this practice affects the citation counts and impact factors reported in Journal Citation Reports therefore needs to be examined. Most of these effects are thoroughly discussed on the site's help pages, along with ways of correcting the figures for them if desired (see the sketch below). It is, however, common for articles in a journal to cite primarily articles from that same journal, for those are the ones of comparable merit in the same special field. If done artificially, the effect becomes especially visible when (i) the journal has a low impact factor in absolute terms and (ii) it publishes only a few papers per year.
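
A minimal sketch of such a correction, assuming the journal's self-citation count is known; all numbers are invented for illustration:

    def impact_factor_without_self_citations(total_citations,
                                             self_citations,
                                             citable_items):
        # Recompute the ratio with the journal's citations to itself
        # removed, to gauge how much self-citing contributes to the
        # published figure.
        return (total_citations - self_citations) / citable_items

    # Hypothetical journal: 300 citations in 2003 to its 2001-2002 items,
    # 90 of them from the journal itself, over 150 citable items.
    print(300 / 150)                                            # official: 2.0
    print(impact_factor_without_self_citations(300, 90, 150))   # corrected: 1.4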

Skewness

Nature's editors, for example, analysed the citations of individual papers in their journal and found that 89% of the 2004 impact factor was generated by just 25% of the papers. The most cited Nature paper from 2002-03 was the mouse genome, published in December 2002. That paper represents the culmination of a great enterprise, but it is inevitably an important point of reference rather than an expression of unusually deep mechanistic insight. It has so far received more than 1,000 citations; within the measurement year of 2004 alone, it received 522. The next most cited paper from 2002-03 (concerning the functional organization of the yeast proteome) received 351 citations that year. Only 50 of the roughly 1,800 citable items published in those two years received more than 100 citations in 2004; the great majority received fewer than 20.

This emphasizes that the impact factor is an average number of citations per paper over a distribution that is not Gaussian; it is rather a Bradford distribution, as predicted by theory. Most papers published in a high-impact-factor journal will ultimately be cited many fewer times than the impact factor may seem to suggest, and some will not be cited at all. The impact factor of the source journal should therefore not be used as a substitute measure of the citation impact of individual articles in the journal.
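
The gap between this average and the typical paper is easy to demonstrate in a small simulation. The lognormal draw below is only a stand-in for the heavy-tailed distribution described above, and its parameters are invented:

    import random

    random.seed(0)

    # 1,800 papers with a heavy-tailed citation distribution: most
    # receive few citations, a handful receive very many.
    citations = [int(random.lognormvariate(1.0, 1.5)) for _ in range(1800)]

    mean = sum(citations) / len(citations)
    median = sorted(citations)[len(citations) // 2]
    top_quarter = sorted(citations, reverse=True)[: len(citations) // 4]

    print(f"mean citations (what the IF reflects): {mean:.1f}")
    print(f"median citations (the typical paper):  {median}")
    print(f"share of citations from top 25%:       "
          f"{sum(top_quarter) / sum(citations):.0%}")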

Also, researchers from UCLA have estimated that when scientists write up their work and cite other people's papers, only around 20% of them have read the original (based on the assumption that copying a reference implies not having read the original paper).[9]

Use in scientific employment

Though the impact factor was originally intended as an objective measure of the reputability of a journal (Garfield), it is increasingly being applied to measure the productivity of scientists. Customarily, this is done by examining the impact factors of the journals in which a scientist's articles have been published. This has obvious appeal for an academic administrator who knows neither the subject nor the journals.

Other measures of impact

PageRank algorithm

In 1976 Gabriel Pinski and Francis Narin suggested a recursive impact factor, to give citations
from journals that have high impact greater weight than citations from low-impact journals.[10]
Such a recursive impact factor resembles the PageRank algorithm of the Google search engine,
though the original Pinski and Narin paper uses a "trade balance" approach in which journals
score highest when they are often cited but rarely cite other journals. A number of subsequent
authors have proposed related approaches to ranking scholarly journals.[11][12][13] In 2006,
Johan Bollen, Marko A. Rodriguez, and Herbert Van de Sompel also proposed using the
PageRank algorithm.[14] From their paper:

Rank | ISI Impact Factor                     | PageRank                                       | Combined
1    | 52.28 ANNU REV IMMUNOL                | 16.78 Nature                                   | 51.97 Nature
2    | 37.65 ANNU REV BIOCHEM                | 16.39 Journal of Biological Chemistry          | 48.78 Science
3    | 36.83 PHYSIOL REV                     | 16.38 Science                                  | 19.84 New England Journal of Medicine
4    | 35.04 NAT REV MOL CELL BIO            | 14.49 PNAS                                     | 15.34 Cell
5    | 34.83 New England Journal of Medicine | 8.41 PHYS REV LETT                             | 14.88 PNAS
6    | 30.98 Nature                          | 5.76 Cell                                      | 10.62 Journal of Biological Chemistry
7    | 30.55 Nature Medicine                 | 5.70 New England Journal of Medicine           | 8.49 JAMA
8    | 29.78 Science                         | 4.67 Journal of the American Chemical Society  | 7.78 The Lancet
9    | 28.18 NAT IMMUNOL                     | 4.46 J IMMUNOL                                 | 7.56 NAT GENET
10   | 28.17 REV MOD PHYS                    | 4.28 APPL PHYS LETT                            | 6.53 Nature Medicine

The table shows the top 10 journals by ISI Impact Factor, PageRank, and a modified system that
combines the two (based on 2003 data). Nature and Science are generally regarded as the most
prestigious journals, and in the combined system they come out on top. That the New England
Journal of Medicine is cited even more than Nature or Science might reflect the mix of review
articles and original articles that it publishes. It is necessary to analyze the data for a journal in
the light of a detailed knowledge of the journal literature.
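
The recursive idea is straightforward to state in code: iterate journal weights over the citation graph until they stabilize, so that a citation from a high-weight journal counts for more than one from a low-weight journal. The sketch below is a generic PageRank-style iteration over an invented four-journal matrix, not Pinski and Narin's exact "trade balance" formulation or Bollen et al.'s implementation:

    import numpy as np

    def recursive_influence(citation_matrix, damping=0.85, iterations=100):
        # citation_matrix[i][j] = citations from journal i to journal j.
        C = np.asarray(citation_matrix, dtype=float)
        n = C.shape[0]
        out = C.sum(axis=1, keepdims=True)   # each journal's outgoing citations
        out[out == 0] = 1.0                  # avoid division by zero
        T = C / out                          # row-normalized transition matrix
        w = np.full(n, 1.0 / n)
        for _ in range(iterations):
            w = (1 - damping) / n + damping * (T.T @ w)
        return w / w.sum()

    # Invented example: journals 0 and 2 each receive three citations, but
    # journal 0's come from the well-cited journal 1 while journal 2's come
    # from the uncited journal 3, so journal 0 ends up weighted far higher.
    m = [[0, 2, 0, 0],
         [3, 0, 0, 0],
         [0, 2, 0, 0],
         [0, 0, 3, 0]]
    print(recursive_influence(m))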

The Eigenfactor is another PageRank-type measure of journal influence,[15] with rankings freely
available at eigenfactor.org.

See also
H-index, an analogous measure for individual scientists rather than journals.

PageRank, the algorithm used by Google, based on similar principles.
