
DO SCIENTISTS GET FUNDAMENTAL RESEARCH IDEAS BY SOLVING PRACTICAL PROBLEMS?

Chiara Franzoni∗

DISPEA – Politecnico di Torino & BRICK Collegio Carlo Alberto


Corso Duca degli Abruzzi 24/b
Torino, Italy 10129
Tel.: 011-5647205
email: chiara.franzoni@polito.it

Abstract

We discuss the problem-solving nature of scientific activity and maintain that contributions made in
the form of improved methodologies, new technologies and instruments for research are, and will
increasingly become, central in experimental sciences and in fields traditionally the realm of pure
intellectual speculation. The contribution of scientists to the development of new technologies and
techniques for research purposes largely exceeds their contribution to developing technologies for
industrial purposes, although the former easily blurs into the latter. We test the effect of both types of contribution on the productivity of a sample of American star physicists, and show that
improving research technologies always boosts the productivity of scientists, whereas developing
industrial technologies is beneficial only when the technology stems from research applications.

Keywords: co-evolution of science and technology, research instruments, academic patenting, technology transfer

JEL: O31; O34; O38


The author acknowledges the support of the MIUR, Ministero dell’Istruzione, Università e Ricerca (FIRB Project RISC-
RBNE03ZLFW_004). Most of the work was done while the author was visiting scholar at Andrew Young School of Policy
Studies (Georgia State University, Atlanta), under the sponsorship of the Fondazione IRI (Rome). The author wishes to
thank both institutions for making this possible. Paula Stephan, Francesco Lissoni, Ashwin Ram, Chris Simpkins, Andrea
Vezzulli, Mario Calderini and Giuseppe Scellato provided helpful comments and suggestions on both theory and
methodology. All errors are those of the author.
1. Introduction

In recent years, many scholarly contributions have emphasized the role that academic scientists play
in developing innovative technologies. Investigations on academic patenting and on university high-
tech ventures have revealed that scientists play a leading role in developing technologies with
important industrial and market applications, especially in fields of key scientific advance such as bio
and nano-technologies. Moreover, much to the amazement of people both within and without the
scientific community, several empirical investigations have recently been presented to support the
view that the work of applying scientific knowledge to industrial uses is not necessarily distracting
scientists from their original tasks. According to empirical investigations at the individual level, on
average, professor-inventors were found to be more efficient (produced more scientific articles)
(Stephan et al., 2006; Fabrizio and Di Minin, 2004) and better cited (Agrawal and Henderson, 2002)
than their non-inventive colleagues, with apparently no decline in the level of journals targeted (in
terms of Journal Impact Factor and Level of Basicness) (Azoulay et al., 2006; Breschi et al., 2007;
Calderini et al., 2008)1. In addition, there are hints that productivity may increase around the time of patenting (Azoulay et al., 2007; Breschi et al., 2007; Calderini et al., 2007), and that such an increase may not necessarily be only temporary (Breschi et al., 2006).
These findings have opened up the question of what mechanisms underlie such improved performance. Possible explanations include, for instance, that working on industrial problems brings fresh ideas for investigation. Answers consistent with this hypothesis are reported in a well-known survey by Mansfield (1995). Other explanations are an increase in resources, a network effect (Breschi et al., 2007), or different mechanisms of transfer at work within diverse subfields (Franzoni et al., 2008).
In this paper, an alternative interpretation is offered: namely that in many cases, a boost in
productivity happens because the technology enables major leaps in the scientist’s ability to do
his/her job at the bench. In fact, the contribution of scientists to technology development does not
seem to be adequately represented by just looking at market applications. If one digs deeper into the
world of science, it becomes rather obvious that technology plays a leading role in the everyday work
of scientists at virtually all levels, well beyond the application of scientific principles to market ends.
Framing the question solely in terms of how scientists produce working technologies for the market, and how this work could give them insights for fundamental “superior” quests, would therefore be too narrow. From laboratory machines and tools to processes and algorithms, the
effort put into developing and refining technologies for scientific research certainly exceeds the effort
of developing technologies for industrial applications, and it is worth noting that laboratory
technologies may well have a market value on their own.
As much as it is emphasized in scientists’ biographies, the technical side of science is often
overlooked in the technology transfer literature, and in virtually all other descriptive studies of
academic life. Indeed, many great scientists of the past and present, although generally remembered for their ideas and insightful theories, were also great problem solvers at the bench; and their practical attitude and reputation as laboratory masters were something they carefully nurtured and jealously protected. Louis Pasteur based his astonishing record of scientific discoveries on his ability to tailor
experiments of great explanatory power to his multifaceted interests and to turn them into scientific
facts. Marie Curie regarded as her most intriguing years those spent in the exhausting job of extracting radium from pitchblende, a mechanical and chemical process that eventually allowed her
to determine the atomic weight of the new element. On the other hand, her job would not have been
possible had she not used an instrument, the Piezoelectric Quartz Electrometer, built several years
earlier by Pierre and Paul Curie for the detection of electrical charges, and first used in their studies
of Crystallography (Senior, 1998).

1 But there may be field-specific exceptions. See Franzoni, Vezzulli and Calderini (2008).

Furthermore, technology development in science, in virtually all fields and disciplines, is more and
more crucial today than in the past. As Mokyr (1997) convincingly argues, within the contemporary
scientific paradigm, technologies are increasingly scientifically constructed. Unlike in the eighteenth and nineteenth centuries, when a few individuals were capable of generating sudden leaps in understanding in the form of working machines for which there was no theoretical explanation, technologies are
increasingly built upon a comprehensive understanding of the forces, phenomena and principles that
govern them. As more and more directly observable facts of nature are explained, or at least cease to
be recognized as problems, the open domains of inquiry require more and more sophisticated and
scientifically-constructed instruments simply to be accessed and treated. The ground for
serendipitous discovery of golden-handed inventors who based their insight on a steady
accumulation of practical expertise has inexorably shrunk, giving way to scientists and engineers
trained on the frontier of scientific understanding.
At the same time, because of the growing complexity of calculations, equipment and experimentation with research methodologies have become more and more essential in almost every
field of science, including those disciplines that were traditionally the realm of pure intellectual
speculation, such as Theoretical Physics and Mathematics, which until not so long ago were paper-
and-pencil based and now require powerful computers and algorithms (Ehrenberg et al., 2006).
Nathan Rosenberg (1992) and Derek de Solla Price (1965, 1968) were certainly among the first to
acknowledge the importance of scientific instruments as an output of academic research and Eric
von Hippel (1988) endorsed this view, by offering evidence on the contribution of academic users to
the development of four families of scientific instruments. In his seminal 1992 work, Rosenberg maintains that scientific instruments are the capital goods of the research industry and a central output of university research, especially in the last century. He stresses that the scientific instruments industry in the US was largely based on optimizing and standardizing instruments developed within academic labs, with or without the help of scientists themselves.
Drawing on the contributions just mentioned, this paper discusses the multi-faceted nature of technology development in academia and sheds light on the likely impact of such diverse activities on scientific productivity, as well as on other qualitative features of research, such as impact and level of basicness.
The paper is organized as follows: in the next section (§2) technology development in science is
discussed, to provide a theory framework for the empirical investigation. Section §3 presents the
(original) dataset and the variables. The results of the empirical analysis are presented in section §4
and conclusions are drawn in section §5.

2. Technology to serve science

2.1 Problem solving and instrument development in science

Good descriptive insights into the role of scientists in technology development
are offered by the contributions of the Constructivist Approach to the studies of science (Latour and
Woolgar, 1979, Knorr-Cetina, 1981; Traweek, 1988). Here the focus of the analysis is on the work of
scientists directly on the shop floor, and special attention is given to problem framing, technical
practices and process thinking, regarding both individual scientists and research teams (Owen-Smith,
1999; Hackett, 2004). In this stream of research, not yet incorporated into the economic debate on
science and technology, fresh insights into how the actual work of science is done challenge the
traditional view of the issue, well beyond the dichotomy of applied vs. fundamental knowledge.
Scientists are portrayed as craftsmen and entrepreneurs, whose idiosyncratic knowledge of theories
and techniques frames the work of research and consequently its possible outcomes (Clarke and
Fujimara, 1992). Scientific discovery is described as the result of a local and contingent process of
problem-solving, deeply framed within the technological and organizational dimensions of the

laboratory work. In this framework, the design and construction of instruments and methodologies
for science is central in deciding what areas of knowledge scientists will be able to tackle and contribute to. Of course, in virtually all disciplines, the body of consolidated research practices and
instruments that are part of the standardized laboratory equipment is pretty much used as a black
box. Yet, in the game of exploration and exploitation, several research techniques representing the
state-of-the-art are constantly being improved and are the object of continuous methodological
research.
For instance, the advance of Genetic Studies was marked by the introduction of successful techniques
that either literally opened up the field for direct genetic investigation, or turned certain cumbersome
operations into efficient ones, thus paving the way for large, systematic analyses. Restriction
endonuclease, gene-cloning, chain-termination, enzymatic DNA replication, shotgun sequencing,
automatic sequencing are only some of the new technologies that have been dramatic enough to
deserve a standard name. In these cutting-edge fields of research, scientific discoveries are so
intimately related to the development of the techniques used to observe, handle, monitor and modify
the subject of the investigation that virtually every breakthrough advance in understanding has its
name linked to a new research technology.
Scholars studying the process of creativity in science stress the idea that brilliant contributions arise
when the scientist has both an interesting quest and a viable strategy to pursue. Highly creative
scientists seem to be skilled at selecting medium-size solvable problems rather than trivial quests or
large quests that are generally too complex to unfold (Simonton, 2004). Weak problems, i.e. problem states for which there is no strong method2 to achieve the goals, are the typical ground for creative scientific discovery (Klahr, 2002:21).
It is interesting that one of the first scholars to discuss the problem-solving nature of scientific
discovery was Herbert Simon (1977), whose primary interest was the role of procedural rationality
and heuristics in research, rather than the normative aspects of scientific discovery, traditionally the
realm of philosophers. Simon stressed the power of superior techniques, apparatus and
representations as the sources of heuristics that enable scientists to better select solutions to the
problems of scientific investigation.
If we take this view, it is perhaps less surprising to regard technology, instruments and methods as
the bread-and-butter of science at every level and in every field, and the progress in instruments as
the antecedent of major scientific discoveries and advances in understanding.
Holmes (2004:150) says that “In the case of experimental scientists, the rate-limiting factor in their
mental progress may be set not by the speed with which such thoughts can course through their
brains, but by the pace with which they can translate useful thoughts into laboratory operations”.
Nevertheless, it would be a mistake to believe that the techno-scientific level of research is crucial
only in experimental sciences.
There are in fact at least three reasons for developing new and more powerful instruments,
mechanisms and methods in science: 1) opening-up/enlarging the domain of observation; 2) theory
testing; and 3) large-scale studies.
1) Opening-up/enlarging the domain of observation. Research in almost every field of science starts from observing nature. Because observation is constrained by the human senses, scientists,
at least since Galileo, have built up and used instruments to extend the domain of observation and
gain access to domains previously invisible or transparent to the senses alone (de Solla Price, 1965).
Instruments are used to observe, describe, measure and gather data on phenomena, often with no
preconceived theory. In all fields of science, the simple registration of previously unnoticed facts
provides enough justification to motivate scientific investigation. Particularly in a world of uncertain
payoffs, such as that of science, the finer the grain of observation and the broader its spectrum, the
better the chances to come up with an unexpected finding and uncover new facts later to become
scientific problems.

2 In cognitive psychology, a “strong method” is a solution path that is commonly and immediately available to a scientist who has gone through extensive learning, by experience, in a specific domain.

One need only think about how much Astronomy or Microbiology are indebted to optical
instruments to understand that many breakthrough discoveries were made possible by the use of
research instruments, but also that this particular use of instruments has little to do with the testing
of theories formulated elsewhere. In Astronomy, a more powerful telescope or a space probe sending data from far-off planets is like lighting a torch in the darkness: every new piece of
information provides raw data for speculation and new puzzles to solve.
In those cases, the possibility for a scientist to move from observation to theory and from data to
ideas goes hand in hand with the availability of new data, with a process of theory building acting in
reverse from effect to cause, from data to source.
Astronomic observations, space missions, gene mapping, as well as many archeological excavations,
species taxonomies, and some of the particle physics experiments are all conducted for the sake of
direct observation of phenomena and for the production of raw data that will eventually serve the
purpose of building theory. In all these disciplines, instruments and apparatus are meant to extend
the domain of what can be experienced by the human senses; in so doing, they define the spectrum
in which possible manifestations of unpredicted outcomes would be recorded as scientific facts that
need to be explained.
In many cases, entirely new branches of scientific disciplines were opened up after the development
of a new enabling technology or instrument. For instance, radio-astronomy took off in the 1930s
only after the first radio-telescopes started to register space radio emissions.

2) Theory testing. Among the three activities, theory testing is by far the most famous and most commonly sold image of science, although scientists themselves would probably deny that it is the most important. This fame has certainly contributed to generating the (false) notion that
instrument development is a second-order activity in the hierarchy of basic/applied research.
In experimental sciences, technologies and machines are needed to run experiments whose purpose
is to observe or measure a certain phenomenon predicted by a theory in order to confirm or reject
the predictions of such theory. For instance, High Energy Physicists test the predictions of the
“standard model” related to the fundamental properties and the forces governing the physics of
particles. The behavior of elementary particles can only be investigated at very high energies. This is one reason why this research requires huge and extremely expensive instruments like particle accelerators, colliding beams, spectrometers and other detectors. In this field, the design of
instruments and new methods takes up most of a scientist’s time. Sharon Traweek (1988), in her
extensive study of two large research facilities in the US and Japan, says that particle physicists have
no tools of accepted validity that can be used as a black box and that each research group conceives,
constructs and maintains its own instruments. How well the instruments are designed determines the
success of the experiments and of the scientists themselves. She maintains that “machines are the
heart of the research activity […]. their conception and development, their maintenance, their
performance during the precious allotment of beamtime for an experiment are a stuff of frustration,
hope, heartbreak, and triumph for research groups” (Traweek, 1988:49).

3) Performing large-scale studies. Finally, a substantial part of the technologies used in research serve
to perform certain familiar operations more efficiently and more quickly. In this case, although the
operations themselves are familiar, using machines that save time, labor or power makes it possible
to perform those tasks on a totally different scale, thus shifting the focus of the analysis to a larger
dimension. When performances of automatic operations are significantly higher, the use of those
machines not only allows efficiency gains, but makes it possible to tackle entirely new sets of
problems. For instance, DNA sequencing has been possible since the early 1970s, but it was only with the development of methods such as chain termination that it became possible to reconstruct the whole DNA map of a simple virus (the bacteriophage Phi-X was the first organism to be sequenced, by Frederick Sanger in 1975). Furthermore, it took an entirely new set of methods, such as primer walking and shotgun sequencing, each involving a number of intermediate processes of DNA fragmentation, labeling, replication, etc. (polymerase chain reaction, automated DNA

sequencing, dye-terminator sequencing, expressed sequence tags, …), to go from mapping the 5386
DNA bases of the Phi-X to the impressive job of mapping as many as 215 million bases of a
drosophila (completed by Berkeley and Celera Genomics in 2000) or 3 billion base-pairs of the
human genome (released in a first rough version by The Human Genome Project and Celera
Genomics in 2000) (Davies, 2001). The entire evolution of biotechnology is marked by a steady
improvement in the efficiency of DNA interpretation and handling, which has enabled scientists to
undertake large-scale comprehensive studies that otherwise would have never been possible in the
time span of a career.
The availability of powerful hardware and software is at the basis of a widespread evolution in such
disciplines as meteorology, volcanology and planetary sciences, which have shifted from being
descriptive to being calculation-intensive. Another compelling example is offered by Materials
Sciences. In principle, the behavior of electrons was fully described analytically by means of the Schroedinger Equation of quantum mechanics (formulated in the 1920s). However, prior to the recent development of computer-run simulation algorithms, which offered viable and more efficient computational solutions, there was only a limited understanding of the properties of molecules and surfaces. Without computer simulations, analytical solutions would require computation times amounting to years, even for powerful supercomputers, thus ruling out the possibility of doing
systematic studies.

2.2 Research Hypotheses: Impact of Technology Development on Scientific Performance

The discussion thus far has strived to make the following two points: 1) successful research requires
framing quests into problems that have a technical solution, and 2) much of the work of scientists
relates to building technologies and new processes and methodologies to support scientific research;
and this contribution to technology development largely exceeds that of building technologies to
serve industrial or market needs.
The former, building research technologies, is motivated by any of the three purposes described above and aims at improving the scientific understanding of phenomena; the latter, building industrial technologies, is mostly profit-motivated.
In principle, one should expect that a scientist who develops a powerful instrument, tool or process produces data and results obtained by exploiting it, in addition to divulging information
about the method itself. This would eventually show up in the scientist’s vita in the form of a flurry
of publications following the announcement of the new technology, instrument or method.
Producing interesting data and evidence backs up the usefulness of the methodology itself.
Sometimes the new evidence also makes it possible to construct new hypotheses and make
contributions to theory. The desire to contribute to theory in science is reinforced by the traditional legacy of the priority of theory over experiments in attributing the highest ranks and honors in research (Hagstrom, 1965). Moreover, the more unexpected and unpredictable the outcomes produced by observation are, the greater the need to explain the results by proposing a satisfactory theory that accommodates the findings. The history of science provides plenty of examples of
findings that were ignored or rejected by the scientific community, until a new theory emerged to
make sense of them3. As a consequence, developing a technology for scientific research can be the
antecedent of a major advance in understanding, which in turn would become visible as an
improvement in the citation record or in the ranks of the journals that a scientist targets.

3 The case of the birth of Radio-Astronomy is once again a perfect example. The first observation of non-terrestrial radio
signals was made by Nikola Tesla, the famous electrical engineer considered the father of wireless communications (Seifer,
1996). As early as 1899, using a cutting-edge wireless receiver, Tesla picked up remote radio signals from his Colorado lab
that he recognized as being of planetary origin, a fact completely ignored at the time. Nonetheless, he explained his
observation as numerical signals sent from Mars (possibly proof of extraterrestrial life), an explanation that did not persuade
the scientific community and led to a rejection of the evidence of space radio emissions as such. It was not until the 1930s
that space radio emissions were observed again at Bell Labs and the discipline of Radio Astronomy took off.

Conversely, in the case of a market application, one cannot predict a clear, unique outcome. On the
one hand, the work needed to transform a technology into an industrial product may take time away
from research and publishing (Nelson, 2004; Dasgupta and David, 1994). On the other hand, as
many scholars have suggested, there may be enough rich feedback from working on applications to
keep the research pipeline fueled (Mansfield, 1995; Stephan et al., 2006). Several conditions may be
behind such rich feedback. It has been reported in surveys that industrial partners met during
consultancy could serve as a source of new ideas and problems worthy of further investigation,
which suggests that there may be a sort of network-effect boosting creativity (Mansfield, 1995; Siegel
et al., 2003). This hypothesis seems consistent with the findings on the effects of weak ties on creativity offered by network theorists (Granovetter, 1983; Forti et al., 2007). Exposure to industrial problems may be a source of ideas for other reasons as well. Rosenberg (1982) takes
inspiration from the history of science and technology to suggest that when the solution to a
problem precedes its theoretical understanding, technology serves as the repository of an enormous
amount of empirical knowledge awaiting examination by scientists. Alternatively, in a world of
unpredictable payoffs, new ideas for scientific investigation may simply arise serendipitously (Barber
and Fox, 1958; Austin, 1978) whenever scientists are exposed to a larger pool of problems (Nelson,
2004).

Furthermore, there is a significant overlap between the two types of technology development
(research instruments and market applications), which consists of instruments for research that have themselves become patented marketable products with little adaptation. Prototype research tools developed in universities have traditionally been taken up by firms and developed into standard off-the-shelf equipment for widespread use, which requires limited training and is optimized to be quicker and more efficient (Von Hippel, 1988). In his study of four families
of scientific instruments, von Hippel (1988) found that in 77% of cases, the inventions were
developed by users in academic or industrial labs, later to become standardized equipment. Stern and
Murray (2007:663) report that a large proportion of the patented academic inventions in their sample
of biotech patents was based on research tools. Moreover, scientific laboratory equipment is not the
only market benefiting from the development of scientific instruments. Most of the diagnostic tools
for medical use, such as magnetic resonance, computerized transmission tomography and ultrasonics,
had origins in research instruments (Rosenberg, 1992; Von Hippel, 1984).
Those applications were either developed by specialized precision instrument firms or by forward-
looking scientists, many of whom participated directly in transforming their instruments into
successful industrial products. The gene-cloning technique developed by Cohen and Boyer at Stanford University, thus far the most lucrative patent in the history of the university, is just one prominent example of a long list of technologies having exceptionally high commercial value that were initially devised as research instruments. In this case too, the impact of developing technologies can
be expected to affect scientific performances in other ways as well.

In the following section, we will look at the publication record of scientists in order to present
evidence of the impact of both kinds of technology development: 1) technology applied to industrial
use, and directed specifically to the production of products and processes; and 2) technology aimed
at improving research investigation instruments and processes, with a potential for further use in
research.

3. Data and Variables

3.1 Data collection

The original database used for the study comprises a large group of scientists doing research in
American universities in all the various subfields of Physics. The list of names was originally collected

from the American Physical Society (APS) Fellows archive. The APS has a very large membership base, and the status of “Fellow” is conferred each year on at most 0.5% of the members in recognition of their merits. The dataset only comprises scientists who are at the top of their area of study and who were affiliated with an American university at the time of nomination. After excluding common family names and scientists who retired during the observation period, the final list comprised 642 individuals4.

[Table 1 about here]

For each scientist i and each year t between 1990 and 2003, all publications and patents were collected. Publication data was obtained from ISI Thomson Scientific and includes the full reference record, plus the full abstracts of all articles, stored in text format. The abstracts were used to shed light on the content (Franzoni et al., 2009).
Patent data was collected from the USPTO archive of Thomson Delphion. By means of Web
searches of curricula and personal pages, additional information about the scientists was collected,
including gender, year, place and subject of the PhD, as well as every period of time spent outside
academia since the PhD (either in firms or in national labs). The final database resulted in 45,342
unique combinations of scientist-publications, equal to 38,178 unique SCI publications, plus 104
patents.
Table 1 provides some descriptive statistics of the scientists who make up the sample.

3.2 Empirical framework and variables

In the previous sections we hypothesized that the development of technologies has an impact on a professor’s scientific productivity. We want to separate the effects of both types of technology development mentioned (past and present research instruments and past and present industrial applications) on a professor’s productivity. Productivity can be accounted for in several ways. In this
work, we examined both the quantitative dimensions of productivity (the number of articles
published in scientific journals at a given time) and the qualitative dimensions of productivity (the
total citations and the “level” of basicness of the articles).
We specify the basic model as follows:

Yit = α1Yi(t-1) + β1 research_instrumentit + β2 research_instrumenti(t-1) + β3 patentsit + β4 patentsi(t-1) + Φ1Cit + Φ2Ci(t-1) + ai + δt + εit

where Yit is the dependent variable expressing scientific performance; research_instrumentit, research_instrumenti(t-1), patentsit and patentsi(t-1) are present and past covariates; Cit and Ci(t-1) are individual time-varying control variables; ai are individual fixed effects; δt are calendar time-dummies; εit is the idiosyncratic error term; and Φ1 and Φ2 are vectors of parameters. Each component is further explained below.
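The paper does not disclose its estimation code, so the following is only an illustrative Python sketch of the panel structure just described: it builds one-year lags of the covariates within each scientist's time series and estimates a simple within (fixed-effects) OLS. All function and column names are inventions of this sketch, and note that a plain within estimator of a model containing a lagged dependent variable is biased (Nickell bias); the sketch illustrates the data structure, not a production estimator.

```python
import numpy as np
import pandas as pd

def build_lags(df, cols):
    """Add one-year lags of `cols` within each scientist's time series,
    dropping the first observed year (whose lags are undefined)."""
    df = df.sort_values(["scientist", "year"])
    for c in cols:
        df[c + "_lag"] = df.groupby("scientist")[c].shift(1)
    return df.dropna()

def within_ols(df, y, xs):
    """Fixed-effects OLS: demean y and the regressors within each
    scientist (absorbing the individual effects ai), then run least squares."""
    g = df.groupby("scientist")
    Y = (df[y] - g[y].transform("mean")).to_numpy()
    X = np.column_stack([(df[x] - g[x].transform("mean")).to_numpy() for x in xs])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return dict(zip(xs, beta))
```

A full replication would also add the calendar-year dummies δt and control variables Cit of the specification above.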

[Table 2 about here]

Independent variables. The variables of interest, a) research instruments and b) industrial applications, are proxied respectively by a) the number of articles disclosing scientific instruments or methodologies and b) the number of patents in which the scientist is listed as one of the inventors (patentsit; patentsi(t-1))5. The first indicator was based on the occurrence in the article abstracts of

4 Fellows in the subgroups related to Physics Education, History of Physics, and Physics and Society, or who were awarded for not strictly scientific merits, were also dropped from the sample. See www.aps.org for more information on nominations and the awarding of fellowships.
5 Here t is the year of patent priority.

semantics describing a research technique or instrument, such as “technology, technique,
methodology, method, process, approach, instrument, tool, equipment, facility” (research_instrumentit;
research_instrumenti(t-1)). Manual browsing of the papers suggests that this identification of methodological
articles is sound. The criterion identifies the target articles in a non-redundant, although non-
exhaustive, way: the incidence of false positives is negligible, but there may be
false negatives, for instance when the technology is referred to only by a specific name (e.g.
“microscope”) throughout the text. Nonetheless, to the best of the author’s knowledge, it is
reasonable to assume that any resulting bias is randomly distributed. As a comparability
and sensitivity check, an alternative and more restrictive criterion is also adopted: only those
articles are kept in which the semantics describing a technique or instrument is associated
with an adjective indicating novelty, such as “new, novel, innovative, leading-edge, cutting-edge”.
The events so identified are hence a subsample of those identified with the first criterion. This
second, alternative construction of the variable is indicated in the tables as research_instrument_altit;
research_instrument_alti(t-1). As shown below, the two criteria give comparable results. Searching
abstracts rather than full texts ensures that the methodological content is central to the article.
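The two keyword criteria can be sketched as a minimal abstract classifier. The keyword lists are taken from the text; the matching rule (whole-word, case-insensitive, with the novelty adjective allowed anywhere in the same abstract) is a simplifying assumption of this sketch, and the sample abstracts are invented:

```python
import re

# Keyword lists taken from the text; matched case-insensitively as whole words.
INSTRUMENT_TERMS = ["technology", "technique", "methodology", "method",
                    "process", "approach", "instrument", "tool",
                    "equipment", "facility"]
NOVELTY_TERMS = ["new", "novel", "innovative", "leading-edge", "cutting-edge"]

def has_term(text, terms):
    """True if any of the terms occurs as a whole word in the text."""
    pattern = r"\b(" + "|".join(re.escape(t) for t in terms) + r")\b"
    return re.search(pattern, text, flags=re.IGNORECASE) is not None

def classify_abstract(abstract):
    """Return (research_instrument, research_instrument_alt) flags.

    The broad criterion flags any abstract mentioning an instrument term;
    the restrictive one additionally requires a novelty adjective.
    """
    broad = has_term(abstract, INSTRUMENT_TERMS)
    strict = broad and has_term(abstract, NOVELTY_TERMS)
    return broad, strict

# Invented abstracts, not drawn from the actual sample:
print(classify_abstract("We present a novel technique for trapping cold atoms."))
# (True, True)
print(classify_abstract("We measure the decay width of the Z boson."))
# (False, False)
```

Counting the flagged abstracts per scientist-year then yields the research_instrumentit and research_instrument_altit variables.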
In order to account for the possibility that contributions to research technologies arise in the form of
patents, we included split versions of the variable #patent, distinguishing between patents disclosed
in the same year as a research-instrument disclosure (#patent_instrumentit) and patents alone, i.e.
disclosed in years in which the inventor did not publish any articles related to research instruments
or methods (#patent_non_instrumentit). Previous evidence indicates that research tools and methods
constitute a large share of all patents assigned to academic scientists (see Murray and Stern, 2007:
663). The split is motivated by the need to make a first, rough separation of inventive activities
related to research instruments from inventive activities unrelated to them.
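As a minimal sketch (with invented data and helper names), the split amounts to routing each year's patent count according to whether an instrument-related article appears in the same year:

```python
def split_patents(patents_by_year, instrument_articles_by_year):
    """Split yearly patent counts into patents co-occurring with a
    research-instrument disclosure versus patents occurring alone.

    Both arguments are dicts mapping year -> count. Returns the two
    dicts patent_instrument and patent_non_instrument.
    """
    patent_instrument, patent_non_instrument = {}, {}
    for year, n_patents in patents_by_year.items():
        if instrument_articles_by_year.get(year, 0) > 0:
            # at least one instrument article that year: patents count as related
            patent_instrument[year] = n_patents
            patent_non_instrument[year] = 0
        else:
            # no instrument article that year: patents count as unrelated
            patent_instrument[year] = 0
            patent_non_instrument[year] = n_patents
    return patent_instrument, patent_non_instrument

# Hypothetical scientist: patents in 1995 and 1998, instrument article only in 1995.
pi, pni = split_patents({1995: 2, 1998: 1}, {1995: 1})
# pi  == {1995: 2, 1998: 0}
# pni == {1995: 0, 1998: 1}
```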
Dependent variables. In the basic model (1), the dependent variable Yit is the scientific productivity
unrelated to research technologies. This is constructed as the logarithm of the count of all articles
published by scientist i in year t, except those that include semantics referring to a scientific
methodology or instrument, identified by either of the methods just described (Log_articlesit). In
model (2), we account for scientific impact and take as dependent variable the logarithm of the count
of article citations in year t (Log_citationsit). In model (3) we use the average “level of basicness” of
the journals in which the articles published in year t appeared (Levelit), based on the IpIq 2003 ranking
of journals (Hamilton, 2003)6.
Control variables. All time-invariant covariates are captured by the individual-specific fixed effects and
hence were not included in the specification. The lagged dependent variable is needed to account for
short-term cycles of publication, which reflect choices about how the results of a scientific work are
packaged. Lagged dependent variables in models of scientific productivity typically account for a
large share of the explained variance. Among the controls Cit and Ci(t-1), we included the past
and present average number of coauthors per paper (coauthorsit and coauthorsi(t-1)), in addition to the
output-performance indicator, to account for the effect of the size of the research team, and two
pairs of dummy variables taking the value 1 if the scientist was working in a firm (firmit and firmi(t-1))
or in a national lab7 (national_labit and national_labi(t-1)), to account for potential differences in the
incentives to publish caused by being subject to limited-disclosure policies.
Finally, we control for individual-specific unobserved fixed effects and calendar effects. Descriptive
statistics related to the time-varying variables just mentioned are given in Table 2.

[Table 3 about here]

6 This indicator was used in several previous studies (Breschi et al., 2007; Calderini et al., 2007; Van Looy et al., 2004).
7 We considered as belonging to this category national agencies such as NASA, Oak Ridge, NSF, NIST, etc., but excluded
large national facilities like Lawrence Berkeley and Stanford Linear Accelerators.

5. Results

Table 3 provides general statistics about the incidence of the events of interest across the entire
period of observation (1990-2004). Notably, the large majority of scientists (88%) developed at least
one research instrument, process or methodology during the time span, although the annual
incidence is comparatively small (30%). By contrast, only 14% of the scientists in the sample were
inventors of at least one patent, and observing a patent and an instrument disclosure in the same
year is rarer still (8%). The incidence of the different types of events in the various APS topic groups
is shown in Table 4. The incidence of patents is quite high in the subfields of Laser Science, High
Polymers, Fluid Dynamics, Instruments & Measurements and Materials (25% or more). The
incidence of research-instrument development is higher than average in Chemical Physics, Atomic,
Molecular & Optical Physics, Laser Science, High Polymer Physics, Industrial & Applied Physics,
Biological Physics, Fundamental Constants, Polymer Physics and Physics of Beams. The joint event
of a patented research instrument occurs most frequently in Laser Science, Chemical Physics and
Polymer Physics.

[Table 4 about here]

Table 5 presents the correlations between the time-varying variables used in the following analysis.
The dependent variables and the covariates are all highly correlated with their lagged realizations. The
count of patents is always moderately but positively correlated with the publication-related variables
(with the exception of the number of coauthors), which reflects the well-known positive overall
correlation among productivity measures in science (Merton, 1968; Stephan et al., 2007; Breschi et
al., 2006). All patent and publication variables tend to be correlated with their lagged values,
providing evidence of strong individual effects.

[Table 5 about here]

Turning to the empirical analysis, we want to know whether, after controlling for other potentially
confounding factors, working on research methodologies affects the productivity of scientists in
contributions that are not strictly methodological. To account for individual time-invariant attributes
(including seniority, scientific subfield, gender, PhD-code or affiliation, etc.), we assume the existence
of individual fixed effects, which typically explain a large proportion of scientific performance
(Stephan and Levin, 1992). We included a lagged value of the dependent variable to account for
short-term cycles of productivity (Log_articlesi(t-1)), and controlled for the potential effects of non-
academic environments on incentives to publish (firm, nat_lab) and for the size of the research group
with which the individual had recently worked (#coauthors).
To account for the endogeneity of the lagged dependent variable, and because all measures of
scientific productivity tend to be prone to endogeneity issues (they are correlated with past, and
sometimes present, realizations of the error), we chose to estimate our model via System GMM
(Arellano and Bover, 1995; Blundell and Bond, 1998). This estimator is well suited to dynamic panel
data with many individuals and few observation years (in our case: N=642; T=14) and makes it
possible to account for individual unobserved fixed effects and time dummies, under the assumption
that the first-difference instrumental variables are uncorrelated with the fixed effects. It can, however,
lead to inconsistent estimates in the case of sample-selectivity issues causing incidental truncation.
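The estimator's core logic — differencing away the fixed effects and instrumenting the endogenous lagged difference with a deeper lag of the level — can be illustrated with a simplified Anderson-Hsiao-style IV sketch (a stand-in for intuition only, not the full System GMM moment set used in the paper; all data are simulated):

```python
import numpy as np

def diff_iv_persistence(Y):
    """Estimate the AR(1) coefficient of y_it = a1*y_i(t-1) + a_i + e_it by IV on
    first differences, using the level y_i(t-2) as instrument for dy_i(t-1).
    Y is an (N, T) array; this is a simplified Anderson-Hsiao sketch."""
    dY = np.diff(Y, axis=1)          # first differences remove the fixed effects a_i
    y = dY[:, 2:].ravel()            # dependent variable: dy_it
    x = dY[:, 1:-1].ravel()          # endogenous regressor: dy_i(t-1)
    z = Y[:, 1:-2].ravel()           # instrument: level y_i(t-2)
    return float(z @ y / (z @ x))    # just-identified IV estimate of a1

# Check on simulated data with true persistence 0.3 and strong fixed effects.
rng = np.random.default_rng(1)
N, T, true_a1 = 2000, 14, 0.3
fe = rng.normal(0.0, 1.0, N)
Y = np.zeros((N, T))
Y[:, 0] = fe + rng.normal(0.0, 1.0, N)
for t in range(1, T):
    Y[:, t] = true_a1 * Y[:, t - 1] + fe + rng.normal(0.0, 1.0, N)
est = diff_iv_persistence(Y)         # should be close to 0.3
```

Pooled OLS on levels would be biased upward by the fixed effects, and OLS on differences downward by the correlation between dy_i(t-1) and the differenced error; the lagged-level instrument avoids both, which is the same motivation behind the System GMM moment conditions.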
In the estimation, the lagged dependent variable and the covariates were treated as endogenous.
Among the control variables, coauthors was treated as predetermined; firm, national_lab and the time
dummies were treated as strictly exogenous. Additional lags of all covariates were included in trial
estimates (not reported here), but never had a significant coefficient. This might be explained by the
prompt publication timing in physics, which typically ranges between 2 and 16 weeks. The results of
the estimation are shown in Table 6.

[Table 6 about here]

As expected, the coefficients of research_instruments, present and lagged, always have a positive and
significant impact on productivity. This confirms the hypothesis that solving practical problems, in
the form of new or enhanced instruments, machines and processes for scientific research, precedes
an increase in publications. Signs remain unchanged and the coefficient magnitudes are substantially
similar when the second, more restrictive criterion for identifying research-instrument disclosures
(the variable is here called research_instruments_alt) is adopted (Model 1b). In this case, however, the
estimated coefficient for the lagged covariate is not statistically different from zero. Patents alone
never have a statistically significant impact on productivity, in either present or lagged realizations,
and this will also be the case when the qualitative assessments of scientific performance are adopted
as the dependent variables (Models 2a-2b and 3a-3b).
Interestingly, when the impact of patents is split between patents disclosed at the same time as a
scientific-instrument disclosure and patents not associated with research-instrument disclosures, the
former variable takes a positive and significant value in its present realization, while the latter takes a
negative and significant value (stronger in magnitude). This result points to a dual effect of patenting
and seems to indicate that developing patented research instruments is always associated with a
boost in the scientist’s publication performance, while patents unrelated to research-instrument
development may in fact be associated with a decline in published articles. Regarding the short-term
effect, the lagged variable has a negative but not statistically significant sign, and additional lags were
never statistically significant (hence, not reported for brevity). This hints that the negative effect is
only temporary. The effect can be interpreted either as evidence that industrial applications trade off
(at least temporarily) against time for research, or as evidence that publications are delayed when a
patent is under way, for fear of hampering the commercial value of the patent or for reasons of
secrecy imposed by the legal procedure8. The absence of counterbalancing effects on the lagged
variables, however, seems to suggest that any postponed publications do not add to the normal rate
of publication later on.
The error structure confirms the suitability of the two-year lagged instruments, as shown by the
Arellano-Bond tests of autocorrelation reported at the bottom of Table 6. The Sargan-Hansen tests
of overidentifying restrictions do not always hold, although it is encouraging that the Chi2 improves
when moving from the basic specification (1a) to the complete one (1c), where the null of exogeneity
of the instruments is not rejected at the 5% level. As a comparability and robustness check, fixed-
effects estimates of the same models as in Table 6 are reported in the appendix (Table A1) and largely
confirm the findings.

[Table 7 about here]

Table 7 and Table 8 present estimates of similar model specifications, in which qualitative
assessments of scientific productivity, rather than sheer article counts, are used as the dependent
variables. Model 2 (a, b and c) tests the impact of the covariates on the (logarithm of) article
citations, while Model 3 (a, b and c) assesses the effect of the independent variables on the index of
basicness of the targeted journals.
The results are largely consistent with those obtained for article counts: developing research
instruments continues to have a positive and significant impact on both citations and the level of
basicness. The coefficients of the lagged covariates are not statistically significant, indicating a short-
term, or spot, effect on the qualitative assessments of scientific productivity. As before, the result
holds when we adopt the second, more restrictive criterion to identify contributions to research-
instrument development, although the coefficients are significant only for the present realization of
the variable. As for the

8 For instance, in the European patent system, the content of a patent cannot be disclosed until a formal patent
document is filed, and secrecy is often recommended in the 18 months after filing. For full details, see Franzoni and
Scellato, 2007.

impact of inventive activity on either citations or basicness, the coefficients of the number of patents,
both present and lagged, show positive signs, although they are never statistically different from zero.
Finally, the signs of the coefficients show a split effect of patents (joint with or disjoint from
research-instrument development), even though some of the effects are not significant. Overall, a
joint patent-research-instrument event is associated with an increase in total citations, while a patent
disjoint from a research-instrument disclosure has a negative impact on the level of basicness of the
journals in which the articles appeared. The lagged split variables are never significant.
Here too, the autocorrelation tests of the System GMM all hold. The Sargan-Hansen tests of
overidentifying restrictions are not well behaved in Model 2, while they improve and are acceptable
at the 5% level in the complete specification of Model 3. Fixed-effects estimates, reported for
comparability in Tables A2 and A3 of the Appendix (respectively), largely confirm the results.
All estimates are net of individual unobserved fixed effects, and robust to controls for coauthors,
temporary positions held outside academia (either in firms or national labs), and yearly trends
(baseline years: 1990-1991). It should nonetheless be noted that the estimates are not robust to the
potential incidental truncation that may be caused by selection into the sample.

[Table 8 about here]

6. Conclusions

For a long time, economists of science and innovation have acknowledged that the processes
linking science to technology may follow non-linear paths, and in particular may run from techniques
and applications to principles and fundamental insights. In the early sections of this paper we
departed from the traditional dichotomy of science vs. technology and fundamental vs. applied
research, which this debate has typically stressed, and suggested that science is inherently oriented to
problems, and that solving empirical problems related to research design and implementation is the
bread and butter of scientific work. Arguments were made to support the view that scientists
contribute extensively to the development of state-of-the-art technologies, although the bulk of their
contributions is not aimed (or at least not directly) at developing technologies for industrial use. The
latter activities, namely technology transfer, academic patenting and spinoffs, have recently become
the subject of extensive investigation (Henderson et al., 1998; Rosenberg and Nelson, 1994; Stokes,
1997), and the allegedly modest commitment of academic personnel to them was at the basis of the
widespread policy interventions of recent decades. The idea of academia sitting in its ivory tower,
disconnected from practical concerns, has partially contributed to obscuring a critical fact of
scientific work, one which is quite evident to those who have real experience of university labs:
universities are a fundamental locus of technology development, and advances in technology play a
primary role in scientific research too. Indeed, scientists spend a great deal of their time refining
instruments and methodologies and designing new tools, machines and algorithms to serve scientific
research, and it is worth noting that some of those applications end up becoming successful
industrial or market products.
We have maintained that technologies are developed in science for at least three different purposes:
first, to carry out direct observations and to produce the raw data that will later become scientific
puzzles to solve; second, for theory testing; and third, for performing large-scale studies. Among
these, only theory testing can be seen as hierarchically subordinate to theoretical contributions
according to the classical basic/applied dichotomy (Mulkay, 1977). The other two mechanisms occur
mostly in very fundamental subfields of science, which leads one to suggest that technology
development should not be viewed as confined to the domain of applied research. Moreover,
technology development is becoming increasingly frequent in many domains of the hard sciences,
including those that not so long ago were paper-and-pencil based, due to a general evolution of all
disciplines towards calculation-intensive work.

In the empirical assessment just presented, the impact of developing technologies for research and of
developing technologies for industry was tested on a sample of 642 US academics in the field of
physics, observed over 14 years. The sample comprises only APS Fellows, who were nominated for
their exceptional scientific merit. Results obtained from this sample therefore cannot be used to draw
conclusions about the entire population of physicists, and call for further testing in future
investigations.
System GMM estimates, robust to endogeneity of the covariates and net of individual unobserved
fixed effects and time trends, show that developing a research instrument was always associated with
a boost in the productivity of the sampled scientists, both in the same and in the following year, while
inventing a patent has no statistically significant impact, except when it is also associated with a
research instrument. The estimates point to a dual effect, with related patents being associated with
an increase in article and citation counts, and unrelated patents having a negative effect on article
counts and the level of basicness. We can interpret these findings either as evidence of a substitution
effect at work when the scientist works on a technology that is unrelated or of no use to his or her
research, or as a temporary need to hold up disclosures in open science to protect the novelty of the
patent, or again for reasons of secrecy.
Similar results are found when qualitative indicators of performance, namely total citations and the
journal basicness index (rather than sheer productivity), are used.
If the results obtained here are confirmed by further studies, they may help to shed light on recent
findings pointing to a positive impact of inventive activities on publications (Azoulay et al., 2006;
Breschi et al., 2007; Calderini et al., 2008). Based on the findings presented here, one can hypothesize
that the boosting effect of patents is observed because a large share of academic inventions are in
fact patented improvements of laboratory technologies and machines, paving the way to advances in
understanding.

References

Austin J.H. (1978), “Chase Chance and Creativity. The lucky art of novelty”, Columbia University
Press, New York.

Azoulay P., Ding W., Stuart T. (2006), “The Impact of Academic Patenting on the Rate, Quality, and
Direction of (Public) Research”, NBER Working Paper #11917.

Azoulay P., Ding W., Stuart T. (2007), “The determinants of faculty patenting behavior:
Demographics or opportunities?”, Journal of Economic Behavior & Organization, 63:599–623.

Barber B., Fox R.C. (1958), “The Case of the Floppy-Eared Rabbit. An Instance of Serendipity
Gained and Serendipity Lost”, American Journal of Sociology, 64(2):128-136.

Breschi S., Lissoni F., Montobbio F. (2006), “University patenting and scientific productivity. A
quantitative study of Italian academic inventors”, CESPRI Working Paper number 189, November
2006.

Breschi S., Lissoni F., Montobbio F. (2007), “The scientific productivity of academic inventors: new
evidence from Italian data”, Economics of Innovation and New Technology 16/ 2, 2007.

Calderini M., Franzoni C., Vezzulli A. (2007), “If Star Scientists do not patent:
The Effect of Productivity, Basicness and Impact on The Decision to Patent in the Academic
World”, Research Policy, 36(3):303-319.

13
Calderini M., Franzoni C., Vezzulli A. (2008), “The Unequal Benefits of Academic Patenting for
Science and Engineering Research”, IEEE Transactions on Engineering Management, forthcoming.

Clarke A.E., Fujimura J.H. (1992), “What tool? Which Jobs? Why Right?”, in Clarke A.E., Fujimura
J.H. (eds.), “The right tool for the right job”, Princeton University Press.

Davies K. (2001), “Cracking the Genome. Inside the Race to Unlock Human DNA”, The Free Press.

Dasgupta P. and David P.A. (1994), “Toward a new economics of science”, Research Policy,
23(5):487-521.

de Solla Price D.J. (1965), “Is technology historically independent of science? A study in Statistical
Historiography”, Technology and Culture, 6(4):553-568.

de Solla Price D.J. (1968), “Little Science, Big Science …and beyond”, Columbia University Press,
New York.

Ehrenberg R.G., Rizzo M.J., Jakubson G.H.(2006), “Who Bears the Growing Cost of Science at
Universities?”, mimeo.

Fabrizio K.R., Di Minin A. (2004), “Commercializing the Laboratory: The Relationship Between
Faculty Patenting and Publishing”, Haas School of Business Working Paper.

Franzoni C., Simpkins C., Li B., Ram A. (2009), “Using Content Analysis to Investigate the Research
Paths Chosen by Scientists Over Time”, Scientometrics, forthcoming.

Franzoni C., Scellato G. (2007), “Paper in the Drawer. Estimating the Determinants of the Patent-
Publication Lags in Europe and the Usa”, Working paper SSRN 1084062.

Granovetter M. (1983), “The Strength of Weak Ties: A Network Theory Revisited”, Sociological
Theory, 1:201-233.

Hackett E.J., Conz D., Parker J., Bashford J., DeLay S. (2004), “Tokamaks and turbulence: research
ensembles, policy and technoscientific work”, Research Policy 33(5):747–767.

Hagstrom W.O. (1965), “The Scientific Community”, Basic Books Inc., New York, London.

Hamilton K S. (2003), “Subfield and Level Classification of Journals”, CHI Research No.2012-R,
January 16.

Henderson R., Jaffe A.B., Trajtenberg M. (1998), “Universities as a Source of Commercial
Technology: A Detailed Analysis of University Patenting, 1965–1988”, The Review of Economics
and Statistics, 80(1):119-127.

Holmes F.L. (2004), “Investigative Pathways. Patterns and Stages in the Careers of Experimental
Scientists”, Yale University Press, New Haven & London.

Knorr-Cetina K.D. (1981), “The manufacture of knowledge: an essay on the constructivist and
contextual nature of science”, Pergamon Press.

Latour B., Woolgar S. (1979),”Laboratory Life. The Social Construction of Scientific Facts”, Sage
Publications.

Mansfield E. (1995), “Academic Research Underlying Industrial Innovations: Sources,
Characteristics, and Financing”, Review of Economics and Statistics, 77(1):55-65.

Merton R. K. (1968), “The Matthew Effect in Science”, Science, New Series, 159(3810):56-63.

Mokyr J. (1997), “Are we living in the middle of a new Industrial Revolution?”, Economic Review,
Federal Reserve Bank of Kansas City, Second Quarter:31-43.

Mulkay M.J. (1977), “Sociology of the Scientific Research Community”, in Spiegel-Rosing I., de Solla
Price D. (eds.), “Science, Technology and Society. A Cross-Disciplinary Perspective”, Sage
Publications, 93-148.

Murray F., Stern S. (2007). “Do formal intellectual property rights hinder the free flow of scientific
knowledge?”, Journal of Economic Behavior & Organization, 63:648–687.

Nelson R.R. (2004), “The market economy, and the scientific commons”, Research Policy, 33:455-
471.

Owen-Smith J. (1999), “Managing Laboratory Work Through Skepticism: Processes of Evaluation
and Control”, American Sociological Review, 66(3):427-452.

Rosenberg N. (1982), “Inside the black box. Technology and Economics”, New York Cambridge
University Press.

Rosenberg N. (1992), “Scientific instrumentation and university research”, Research Policy, 21:381-
390.

Rosenberg N., Nelson R.R. (1994), “American Universities and Technical Advance in industry”,
Research Policy, 23(3):323-348.

Seifer M.J. (1996), “Wizard: the life and times of Nikola Tesla. Biography of a genius”, Carol
Publishing Corporation.

Senior J.E. (1998), “Marie & Pierre Curie”, Sutton Publishing.

Siegel, D.S., Waldman, D. and Link, A. (2003) “Assessing the Impact of Organizational Practices on
the Relative Productivity of University Technology Transfer Offices: An Exploratory Study”,
Research Policy, 32, 27–48.

Simon H.A. (1977), “Models of discovery”, D. Reidel Publishing Company.

Simonton D.K (2004), “Creativity in Science. Chance, Logic, Genius and Zeitgeist”, Cambridge
University Press.

Stephan P.E., Levin S.G. (1992), “Striking the Mother Lode in Science: The Importance of Age,
Place and Time”, Oxford University Press.

Stephan P.E., Gurmu S., Sumell A.J., Black G. (2007), “Who’s Patenting in the University? Evidence
from the Survey of Doctorate Recipients”, Econ. Innov. New Techn., 2007, Vol. 16(2), March, pp.
71–99.

Stokes D.E. (1997), “Pasteur’s Quadrant. Basic Science and Technological Innovation”, Brookings
Institution Press, Washington D.C.

Traweek S. (1988), “Beamtimes and lifetimes”, Harvard University Press.

von Hippel E. (1988), “The Sources of Innovation”, Oxford University Press.

Van Looy B., Ranga M., Callaert J., Debackere K., Zimmermann E. (2004), “Combining
entrepreneurial and scientific performance in academia: towards a compounded and reciprocal
Matthew-effect?”, Research Policy, 33(3):425–441.

Tables to be included in the text

Table 1 - Database description


Variable Obs Mean Std. Dev. Min Max
Phd year 642 1978.073 7.607 1957 1993
Seniority at the time of APS Fellowship 642 20.615 7.202 7 45
PhD in non-US institution 642 0.198 0.399 0 1
PhD in Physics 642 0.743 0.437 0 1
PhD in Engineering 642 0.107 0.310 0 1
PhD in Materials Science 642 0.011 0.104 0 1
PhD in Chemistry 642 0.098 0.298 0 1
PhD in Mathematics 642 0.022 0.146 0 1
PhD in Astronomy 642 0.017 0.130 0 1
PhD in Other discipline 642 0.002 0.040 0 1
Male Gender 642 0.902 0.298 0 1
Worked in Firm 642 0.171 0.377 0 1
Worked in National lab 642 0.125 0.331 0 1
Inventor 642 0.143 0.351 0 1

Table 2- Time-varying variables


Variable Obs Mean Min Max
Dependent variables:
log_articlesit 8988 1.136 0 4.419
levelit 8988 2.522 0 4
log_citationsit 8988 3.797 0 11.973
Covariates:
research_instrumentit 8988 0.532 0 19
research_instrumets_altit 8988 0.027 0 3
patent_instrumentit 8988 0.019 0 10
patent_non_instrumentit 8988 0.016 0 5
patentit 8988 0.035 0 10
Controls:
firmit 8988 0.018 0 1
national_labit 8988 0.010 0 1
coauthorsit 8988 11.912 0 764.667

Table 3 - Incidence of events


Variable Obs Freq=1 Percentage Std. Dev.
developer research_instrument 642 536 83.5% 0
developer research_instrument_alt 642 142 22.1% 0
inventor_dummy 642 92 14.3% 0
inventor research instrument 642 51 7.9% 0

Table 4 - Occurrence of events by APS group
APS_group | obs. | research instrument developer | % research instrument developer | inventor | % inventor | inventor research instrument | % inventor research instrument

Astrophysics 39 28 71.8% 1 2.6% 0 0%


Biological Physics 11 11 100% 2 18.2% 1 9.1%
Chemical Physics 46 44 95.7% 11 23.9% 8 17.4%
Computational Physics 29 26 89.7% 0 0% 0 0%
DAMOP (Atomic, Molecular, Optical) 45 43 95.6% 6 13.3% 3 6.7%
DCMP (Condensed Matter) 116 92 79.3% 23 19.8% 12 10.3%
Few Body Systems 8 8 100% 0 0% 0 0%
Fluid Dynamics 38 31 81.6% 10 26.3% 3 7.9%
Fundamental Constants 11 10 90.9% 0 0% 0 0%
Gravitational Topical 10 6 60.0% 0 0% 0 0%
High Polymer Physics 11 11 100% 4 36.4% 1 9.1%
Industrial and applied physics 13 13 100% 3 23.1% 2 15.4%
Instruments & Measurements 3 3 100% 2 66.7% 0 0%
International Physics 4 3 75.0% 0 0% 0 0%
Laser Science 29 28 96.6% 13 44.8% 12 41.4%
Magnetism & Its Applications 6 5 83.3% 0 0.0% 0 0.0%
Materials Physics 32 28 87.5% 8 25.0% 5 15.6%
Nuclear Physics 47 38 80.9% 0 0% 0 0%
Particles & Fields 74 47 63.5% 0 0.0% 0 0%
Physics of Beams 11 11 100% 2 18.2% 0 0%
Plasma Astrophysics 2 1 50.0% 0 0% 0 0%
Plasma Physics 32 28 87.5% 3 9.4% 1 3.1%
Polymer Physics 14 13 92.9% 3 21.4% 3 21.4%
Shock Compression 2 1 50.0% 0 0% 0 0%
Statistical & Nonlinear Physics 9 7 77.8% 1 11.1% 0 0%
Totals 642 536 83.5% 92 14.3% 51 7.9%

Table 5 - Correlation Matrix.


(Column variables appear in the same order as the row variables listed below.)

log_articlesit 1.000
log_articlesi(t-1) 0.649 1.000
levelit 0.715 0.415 1.000
leveli(t-1) 0.439 0.718 0.480 1.000
log_citationsit 0.944 0.602 0.748 0.452 1.000
log_citationsi(t-1) 0.617 0.945 0.437 0.758 0.615 1.000
research_instrumentit 0.390 0.331 0.191 0.150 0.343 0.289 1.000
research_instrumenti(t-1) 0.316 0.370 0.133 0.185 0.267 0.320 0.472 1.000
research_instrumets_altit 0.105 0.079 0.055 0.029 0.101 0.073 0.307 0.156 1.000
research_instrumets_alti(t-1) 0.084 0.098 0.031 0.053 0.076 0.092 0.117 0.311 0.150 1.000
patentit 0.063 0.059 0.005 0.010 0.065 0.058 0.055 0.054 0.020 0.033 1.000
patenti(t-1) 0.057 0.067 0.001 0.005 0.055 0.066 0.088 0.063 0.032 0.022 0.246 1.000
patent_instrumentit 0.083 0.061 0.021 0.014 0.082 0.057 0.103 0.064 0.035 0.032 0.805 0.257 1.000
patent_instrumenti(t-1) 0.069 0.086 0.016 0.021 0.065 0.081 0.097 0.112 0.025 0.038 0.231 0.801 0.254 1.000
patent_non_instrumentit -0.007 0.016 -0.020 -0.002 -0.002 0.019 -0.047 0.003 -0.015 0.013 0.587 0.063 -0.008 0.043 1.000
patent_non_instrumenti(t-1) 0.003 -0.003 -0.021 -0.021 0.004 0.001 0.017 -0.046 0.021 -0.014 0.100 0.592 0.089 -0.008 0.047 1.000
coauthorsit 0.273 0.183 0.170 0.091 0.250 0.165 0.008 -0.004 0.013 0.001 -0.016 -0.018 -0.012 -0.012 -0.012 -0.015 1.000
coauthorsi(t-1) 0.225 0.270 0.108 0.171 0.197 0.247 0.007 0.002 -0.006 0.007 -0.018 -0.015 -0.012 -0.011 -0.013 -0.011 0.734 1.000
firmit 0.036 0.034 -0.006 -0.014 0.043 0.041 0.014 0.009 0.029 0.006 0.172 0.156 0.108 0.088 0.144 0.141 -0.021 -0.021 1.000
firmi(t-1) 0.033 0.037 -0.003 -0.005 0.044 0.047 0.002 0.008 0.018 0.020 0.157 0.171 0.111 0.104 0.114 0.146 -0.022 -0.021 0.887 1.000
national labit -0.020 -0.023 -0.020 -0.022 -0.017 -0.020 -0.011 -0.012 -0.008 -0.007 -0.012 -0.012 -0.008 -0.008 -0.009 -0.009 -0.014 -0.015 -0.012 -0.013 1.000
national labi(t-1) -0.022 -0.023 -0.022 -0.018 -0.014 -0.018 -0.015 -0.018 -0.010 -0.009 -0.013 -0.013 -0.009 -0.009 -0.010 -0.010 -0.016 -0.016 -0.014 -0.015 0.836 1.000

Table 6 - Results of System GMM estimates

Model 1(a) Model 1(b) Model 1(c)


Number of obs 8346 8346 8346
Number of instruments 70 69 68
Obs per group: 13 13 13
F-Test 59.34 *** 29.55 *** 26.93 ***
Log_articlesit Coef. (Robust St.Err.) Coef. (Robust St.Err.) Coef. (Robust St.Err.)
Log_articlesi(t-1) 0.232 (0.018) *** 0.243 (0.024) *** 0.233 (0.026) ***
research_instrumentit 0.129 (0.010) *** - -
research_instrumenti(t-1) 0.039 (0.011) *** - -
research_instruments_altit - 0.161 (0.048) *** -
research_instruments_alti(t-1) - 0.069 (0.052) -
patent_instrumentit - - 0.056 (0.025) **
patent_instrumenti(t-1) - - -0.030 (0.034)
patent_non_instrumentit - - -0.101 (0.043) **
patent_non_instrumenti(t-1) - - -0.067 (0.045)
coauthorsit 0.004 (0.000) *** 0.004 (0.000) *** 0.000 (0.000) ***
coauthorsi(t-1) -0.001 (0.000) ** 0.000 (0.000) 0.000 (0.000) ***
firmit 3.011 (1.784) * 1.948 (1.350) 0.042 (1.254)
firmi(t-1) -3.214 (1.609) ** -2.277 (1.203) * -0.068 (1.188)
national labit 4.950 (3.498) 14.777 (5.174) *** 15.272 (5.618) ***
national labi(t-1) -3.755 (3.433) -13.387 (5.123) *** -13.853 (5.761) **
patentit 0.028 (0.040) 0.000 (0.035) -
patenti(t-1) 0.028 (0.046) 0.020 (0.040) -
yr1992 -0.014 (0.039) 0.023 (0.070) 0.029 (0.072)
yr1993 0.041 (0.042) 0.086 (0.074) 0.110 (0.076)
yr1994 0.076 (0.038) ** 0.062 (0.065) 0.100 (0.067)
yr1995 0.104 (0.043) ** 0.027 (0.060) 0.072 (0.062)
yr1996 0.181 (0.043) *** 0.177 (0.073) ** 0.175 (0.072) **
yr1997 0.097 (0.042) ** 0.062 (0.058) 0.091 (0.059)
yr1998 -0.603 (0.049) *** -0.730 (0.079) *** -0.654 (0.078) ***
yr1999 0.047 (0.046) 0.015 (0.067) 0.047 (0.069)
yr2000 0.205 (0.043) *** 0.221 (0.071) *** 0.245 (0.070) ***
yr2001 0.134 (0.046) *** 0.117 (0.071) * 0.164 (0.072) **
yr2002 0.097 (0.044) ** 0.132 (0.059) ** 0.158 (0.062) **
yr2003 0.078 (0.044) * 0.110 (0.064) * 0.154 (0.062) **
_cons 0.720 (0.049) *** 0.901 (0.073) *** 0.837 (0.076) ***

AR(1) in first difference (Pr) -7.13 (0.000) -4.56 (0.000) -3.89 (0.000)
AR(2) in first difference (Pr) -1.13 (0.260) -1.34 (0.182) -1.45 (0.147)
Sargan test: chi2(46) (Pr) 161.67 (0.000) 65.17 (0.026) 57.59 (0.082)
Hansen test: chi2(46) (Pr) 122.26 (0.000) 103.74 (0.000) 87.11 (0.000)

*p≤0.1; **p≤0.05; ***p≤0.01
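The Sargan and Hansen rows report overidentification J-statistics with chi-square p-values. The mechanics can be sketched as below; this is only an illustration using the Wilson-Hilferty normal approximation to the chi-square survival function, not a reproduction of the table's exact p-values (the per-model degrees of freedom may differ from the 46 shown here), and the Model 1 statistics are reused purely as example inputs.

```python
from math import sqrt
from statistics import NormalDist

def chi2_sf(x: float, df: int) -> float:
    """Approximate P(chi2_df > x) via the Wilson-Hilferty normal approximation."""
    z = ((x / df) ** (1 / 3) - (1 - 2 / (9 * df))) / sqrt(2 / (9 * df))
    return NormalDist().cdf(-z)

# Under the null that the instruments are valid, the J-statistic is
# asymptotically chi-square with df = (#instruments - #parameters).
df = 46

# Sargan J-statistics for Models 1(a)-1(c), as reported in Table 6
for label, j_stat in [("1(a)", 161.67), ("1(b)", 65.17), ("1(c)", 57.59)]:
    p = chi2_sf(j_stat, df)
    print(f"Model {label}: J = {j_stat:.2f}, p ~ {p:.3f}")
```

A small p-value rejects instrument validity, which is why the very large J-statistic of Model 1(a) is the most troubling of the three.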

Table 7 - Results of System GMM estimates

Model 2(a) Model 2(b) Model 2(c)


Number of obs 8346 8346 8346
Number of instruments 70 69 68
Obs per group: 13 13 13
F-Test 58.16 *** 34.73 *** 41.69 ***
Log_citationsit Coef. (Robust St.Err.) Coef. (Robust St.Err.) Coef. (Robust St.Err.)
Log_citationsi(t-1) 0.191 (0.017) *** 0.183 (0.018) *** 0.184 (0.018) ***
research_instrumentit 0.405 (0.03) *** - -
research_instrumenti(t-1) 0.119 (0.029) *** - -
research_instruments_altit - 0.573 (0.152) *** -
research_instruments_alti(t-1) - 0.266 (0.171) -
patent_instrumentit - - 0.152 (0.086) *
patent_instrumenti(t-1) - - -0.160 (0.121)
patent_non_instrumentit - - -0.164 (0.192)
patent_non_instrumenti(t-1) - - -0.235 (0.161)
coauthorsit 0.012 (0.001) *** 0.012 (0.001) *** 0.000 (0.000) ***
coauthorsi(t-1) -0.003 (0.001) ** -0.001 (0.001) 0.000 (0.000) ***
firmit 6.197 (4.279) 5.022 (3.675) -3.728 (4.212)
firmi(t-1) -5.834 (3.938) -4.965 (3.315) 4.350 (3.925)
national labit 0.821 (7.881) 17.09 (9.681) * 12.50 (9.389)
national labi(t-1) -1.503 (7.811) -17.625 (9.991) * -13.38 (9.879)
patentit 0.135 (0.088) 0.056 (0.083) -
patenti(t-1) 0.046 (0.131) 0.029 (0.12) -
yr1992 -0.019 (0.114) 0.059 (0.141) 0.11 (0.129)
yr1993 0.016 (0.115) 0.120 (0.135) 0.183 (0.123)
yr1994 0.157 (0.115) 0.175 (0.125) 0.242 (0.118) **
yr1995 0.293 (0.136) ** 0.203 (0.149) 0.298 (0.147) **
yr1996 0.377 (0.126) *** 0.429 (0.143) *** 0.438 (0.135) ***
yr1997 0.186 (0.132) 0.185 (0.141) 0.275 (0.14) **
yr1998 -2.275 (0.151) *** -2.459 (0.17) *** -2.328 (0.163) ***
yr1999 -0.194 (0.138) -0.273 (0.157) * -0.109 (0.155)
yr2000 0.307 (0.132) ** 0.400 (0.146) *** 0.502 (0.144) ***
yr2001 -0.066 (0.138) -0.008 (0.15) 0.141 (0.148)
yr2002 -0.341 (0.133) ** -0.197 (0.149) -0.072 (0.147)
yr2003 -0.621 (0.139) *** -0.482 (0.159) *** -0.306 (0.156) **
_cons 2.816 (0.15) *** 3.204 (0.177) *** 3.122 (0.176) ***
AR(1) in first difference (Pr) -18.89 (0.000) -8.42 (0.000) -9.92 (0.000)
AR(2) in first difference (Pr) -0.29 (0.771) -0.66 (0.508) -0.41 (0.679)
Sargan test: chi2(46) (Pr) 188.27 (0.000) 107.04 (0.000) 112.01 (0.000)
Hansen test: chi2(46) (Pr) 127.91 (0.000) 96.10 (0.000) 81.24 (0.001)

*p≤0.1; **p≤0.05; ***p≤0.01

Table 8 - Results of System GMM estimates

Model 3(a) Model 3(b) Model 3(c)


Number of obs 8346 8346 8346
Number of instruments 70 69 68
Obs per group: 13 13 13
F-Test 38.91 *** 26.47 *** 22.00 ***
Levelit Coef. (Robust St.Err.) Coef. (Robust St.Err.) Coef. (Robust St.Err.)
Level(t-1) 0.108 (0.021) *** 0.089 (0.020) *** 0.100 (0.021) ***
research_instrumentit 0.104 (0.018) *** - -
research_instrumenti(t-1) 0.008 (0.019) - -
research_instruments_altit - 0.149 (0.065) ** -
research_instruments_alti(t-1) - 0.014 (0.074) -
patent_instrumentit - - 0.054 (0.045)
patent_instrumenti(t-1) - - 0.037 (0.056)
patent_non_instrumentit - - -0.119 (0.077)
patent_non_instrumenti(t-1) - - -0.153 (0.083) *
coauthorsit 0.006 (0.001) *** 0.005 (0.001) *** 0.000 (0.000) ***
coauthorsi(t-1) -0.003 (0.001) *** -0.002 (0.001) *** 0.000 (0.000) *
firmit 5.058 (3.263) 1.263 (1.735) 1.084 (2.369)
firmi(t-1) -5.132 (2.835) -1.722 (1.730) -0.766 (2.237)
national labit -11.875 (7.737) -9.609 (6.737) -12.977 (10.216)
national labi(t-1) 12.576 (7.447) * 8.427 (6.525) 13.236 (9.876)
patentit 0.014 (0.057) 0.009 (0.037) -
patenti(t-1) 0.057 (0.073) 0.000 (0.054) -
yr1992 0.001 (0.085) 0.047 (0.072) 0.033 (0.085)
yr1993 0.061 (0.096) 0.078 (0.077) 0.093 (0.096)
yr1994 0.082 (0.09) 0.038 (0.076) 0.102 (0.087)
yr1995 0.176 (0.086) ** 0.072 (0.077) 0.191 (0.087) **
yr1996 0.291 (0.09) *** 0.209 (0.080) *** 0.313 (0.089) ***
yr1997 0.159 (0.086) * 0.093 (0.079) 0.198 (0.089) **
yr1998 -1.338 (0.102) *** -1.460 (0.098) *** -1.337 (0.103) ***
yr1999 0.118 (0.094) 0.041 (0.088) 0.158 (0.096)
yr2000 0.217 (0.091) ** 0.147 (0.078) * 0.284 (0.092) ***
yr2001 0.178 (0.091) * 0.117 (0.084) 0.255 (0.096) ***
yr2002 0.019 (0.087) 0.006 (0.077) 0.100 (0.085)
yr2003 -0.012 (0.09) -0.026 (0.077) 0.094 (0.088)
_cons 2.113 (0.101) *** 2.359 (0.096) *** 2.181 (0.096) ***

AR(1) in first difference (Pr) -7.03 (0.000) -7.77 (0.000) -5.02 (0.000)
AR(2) in first difference (Pr) -1.72 (0.085) -0.89 (0.371) -1.70 (0.090)
Sargan test: chi2(46) (Pr) 75.23 (0.004) 89.37 (0.000) 67.78 (0.012)
Hansen test: chi2(46) (Pr) 75.09 (0.004) 83.23 (0.000) 56.29 (0.101)

*p≤0.1; **p≤0.05; ***p≤0.01

Appendix

Table A1 - Results of fixed-effects estimates (models as in Table 6)

Model 1(a) Model 1(b) Model 1(c)


Number of obs 8346 8346 8346
Obs per group: 13 13 13
F-Test 76.56 *** 67.47 *** 58.03 ***
Log_articlesit Coef. (Robust St.Err.) Coef. (Robust St.Err.) Coef. (Robust St.Err.)
Log_articlesi(t-1) 0.159 (0.013) *** 0.018 (0.013) *** 0.191 (0.013) ***
research_instrumentit 0.096 (0.008) *** - -
research_instrumenti(t-1) 0.025 (0.007) *** - -
research_instruments_altit - 0.144 (0.031) *** -
research_instruments_alti(t-1) - 0.050 (0.038) -
patent_instrumentit - - 0.064 (0.019) ***
patent_instrumenti(t-1) - - -0.017 (0.020)
patent_non_instrumentit - - -0.095 (0.032) ***
patent_non_instrumenti(t-1) - - -0.059 (0.035) *
coauthorsit 0.004 (0.000) *** 0.004 (0.000) *** 0.000 (0.000) ***
coauthorsi(t-1) -0.000 (0.000) * 0.000 (0.000) -0.000 (0.000) ***
firmit 0.109 (0.108) 0.181 (0.117) 0.164 (0.114)
firmi(t-1) -0.117 (0.106) -0.255 (0.116) ** -0.182 (0.112)
national labit 0.046 (0.112) 0.048 (0.120) 0.063 (0.113)
national labi(t-1) 0.037 (0.109) -0.030 (0.111) 0.019 (0.108)
patentit 0.021 (0.017) 0.005 (0.018) -
patenti(t-1) 0.024 (0.019) -0.005 (0.019) -
yr1992 -0.007 (0.029) 0.020 (0.029) 0.020 (0.030)
yr1993 0.027 (0.029) 0.048 (0.029) 0.068 (0.030) **
yr1994 0.053 (0.029) * 0.049 (0.029) 0.098 (0.029) ***
yr1995 0.109 (0.029) *** 0.086 (0.029) 0.135 (0.029) ***
yr1996 0.181 (0.028) *** 0.211 (0.028) *** 0.215 (0.029) ***
yr1997 0.120 (0.029) *** 0.129 (0.029) *** 0.154 (0.029) ***
yr1998 -0.596 (0.038) *** -0.665 (0.042) *** -0.593 (0.040) ***
yr1999 0.014 (0.030) 0.038 (0.029) 0.078 (0.030) ***
yr2000 0.210 (0.029) *** 0.252 (0.029) *** 0.275 (0.029) ***
yr2001 0.160 (0.029) *** 0.181 (0.029) *** 0.227 (0.029) ***
yr2002 0.123 (0.029) *** 0.175 (0.029) *** 0.196 (0.030) ***
yr2003 0.101 (0.300) *** 0.147 (0.029) *** 0.188 (0.030) ***
_cons 0.798 (0.025) *** 0.900 (0.025) *** 0.828 (0.026) ***

Overall R^2 0.3858 0.3287 0.3424

Table A2 - Results of fixed-effects estimates (models as in Table 7)

Model 2(a) Model 2(b) Model 2(c)


Number of obs 8346 8346 8346
Obs per group: 13 13 13
F-Test 67.83 *** 60.02 *** 49.35 ***
Log_citationsit Coef. (Robust St.Err.) Coef. (Robust St.Err.) Coef. (Robust St.Err.)

Log_citationsi(t-1) 0.128 (0.012) *** 0.14 (0.012) *** 0.158 (0.013) ***
research_instrumentit 0.276 (0.025) *** - -
research_instrumenti(t-1) 0.055 (0.023) ** - -
research_instruments_altit - 0.432 (0.107) *** -
research_instruments_alti(t-1) - 0.137 (0.131) -
patent_instrumentit - - 0.226 (0.067) *
patent_instrumenti(t-1) - - -0.039 (0.066)
patent_non_instrumentit - - -0.279 (0.107) ***
patent_non_instrumenti(t-1) - - -0.219 (0.127) *
coauthorsit 0.015 (0.001) *** 0.015 (0.001) *** 0.000 (0.000) ***
coauthorsi(t-1) 0.001 (0.001) 0.002 (0.001) 0.000 (0.000) ***
firmit 0.367 (0.355) 0.475 (0.377) 0.538 (0.372)
firmi(t-1) -0.162 (0.357) -0.425 (0.38) -0.35 (0.373)
national labit -0.03 (0.416) 0.009 (0.433) 0.016 (0.425)
national labi(t-1) 0.282 (0.403) 0.191 (0.404) 0.25 (0.406)
patentit 0.088 (0.055) 0.062 (0.057) -
patenti(t-1) -0.074 (0.069) -0.037 (0.065) -
yr1992 0.025 (0.109) 0.069 (0.11) 0.104 (0.111)
yr1993 0.056 (0.106) 0.120 (0.135) 0.175 (0.107)
yr1994 0.162 (0.104) 0.171 (0.105) 0.3 (0.106) ***
yr1995 0.315 (0.105) *** 0.301 (0.106) *** 0.397 (0.107) ***
yr1996 0.418 (0.099) *** 0.495 (0.1) *** 0.525 (0.102) ***
yr1997 0.257 (0.102) ** 0.298 (0.102) *** 0.368 (0.103) ***
yr1998 -2.261 (0.134) *** -2.357 (0.141) *** -2.238 (0.138) ***
yr1999 -0.308 (0.105) *** -0.264 (0.106) ** -0.098 (0.107) ***
yr2000 0.339 (0.102) *** 0.400 (0.146) *** 0.547 (0.104)
yr2001 0.012 (0.101) 0.089 (0.102) 0.229 (0.103) **
yr2002 -0.265 (0.102) ** -0.135 (0.102) -0.023 (0.103)
yr2003 -0.564 (0.101) *** -0.436 (0.102) *** -0.269 (0.102) ***
_cons 3.049 (0.092) *** 3.207 (0.093) *** 3.125 (0.094) ***

Overall R^2 0.3214 0.267 0.3003

Table A3 - Results of fixed-effects estimates (models as in Table 8)

Model 3(a) Model 3(b) Model 3(c)


Number of obs 8346 8346 8346
Obs per group: 13 13 13
F-Test 38.34 *** 34.82 *** 27.74 ***
Levelit Coef. (Robust St.Err.) Coef. (Robust St.Err.) Coef. (Robust St.Err.)
Level(t-1) 0.068 (0.015) *** 0.05 (0.015) *** 0.069 (0.015) ***
research_instrumentit 0.079 (0.012) *** - -
research_instrumenti(t-1) -0.006 (0.012) - -
research_instruments_altit - 0.115 (0.053) ** -
research_instruments_alti(t-1) - -0.017 (0.06) -
patent_instrumentit - - 0.026 (0.028)
patent_instrumenti(t-1) - - -0.002 (0.038)
patent_non_instrumentit - - -0.118 (0.068) *
patent_non_instrumenti(t-1) - - -0.161 (0.08) **
coauthorsit 0.007 (0.001) *** 0.007 (0.001) *** 0.000 (0.000) ***
coauthorsi(t-1) -0.001 (0.001) -0.001 (0) 0.000 (0.000)
firmit -0.003 (0.192) 0.007 (0.175) 0.062 (0.195)
firmi(t-1) 0.098 (0.192) 0.04 (0.165) 0.051 (0.196)
national labit 0.305 (0.25) 0.179 (0.237) 0.318 (0.254)
national labi(t-1) -0.155 (0.231) -0.253 (0.227) -0.153 (0.233)
patentit -0.012 (0.032) -0.011 (0.034) -
patenti(t-1) -0.049 (0.044) -0.04 (0.041) -
yr1992 0.014 (0.068) 0.059 (0.064) 0.033 (0.069)
yr1993 0.078 (0.069) 0.107 (0.064) 0.111 (0.069)
yr1994 0.028 (0.068) 0.035 (0.065) 0.075 (0.069)
yr1995 0.076 (0.067) 0.03 (0.063) 0.103 (0.068)
yr1996 0.203 (0.063) *** 0.179 (0.06) *** 0.241 (0.064) ***
yr1997 0.073 (0.065) 0.06 (0.062) 0.11 (0.065) *
yr1998 -1.438 (0.083) *** -1.498 (0.083) *** -1.429 (0.085) ***
yr1999 -0.022 (0.072) -0.049 (0.069) 0.024 (0.073)
yr2000 0.153 (0.063) *** 0.124 (0.06) * 0.213 (0.064) ***
yr2001 0.093 (0.066) 0.08 (0.062) 0.17 (0.066) **
yr2002 -0.04 (0.068) -0.02 (0.063) 0.033 (0.068)
yr2003 -0.079 (0.067) -0.061 (0.062) 0.022 (0.067)
_cons 2.295 (0.063) *** 2.454 (0.061) *** 2.351 (0.063) ***

Overall R^2 0.1651 0.1386 0.1405
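Tables A1-A3 rest on the within (fixed-effects) estimator, which demeans each variable by scientist before running OLS, sweeping out the time-invariant individual effect. A minimal sketch of that transformation on a made-up two-scientist panel (the names, alphas, and data are hypothetical, chosen noiseless so the slope is recovered exactly; this is not the paper's estimation):

```python
# Hypothetical panel: y_it = alpha_i + beta * x_it with true beta = 2
panels = {
    "scientist_A": {"alpha": 1.0, "x": [1.0, 2.0, 3.0]},
    "scientist_B": {"alpha": 5.0, "x": [2.0, 4.0, 6.0]},
}
BETA = 2.0

num = den = 0.0
for p in panels.values():
    xs = p["x"]
    ys = [p["alpha"] + BETA * x for x in xs]
    x_bar = sum(xs) / len(xs)
    y_bar = sum(ys) / len(ys)
    # Within transformation: demeaning by group removes alpha_i,
    # so the slope is estimated from within-scientist variation only
    num += sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den += sum((x - x_bar) ** 2 for x in xs)

beta_hat = num / den
print(beta_hat)  # recovers 2.0 exactly in this noiseless setup
```

Because the within estimator ignores the lagged dependent variable's correlation with the demeaned error, it is biased in short dynamic panels, which is why the System GMM results of Tables 6-8 are the paper's preferred specifications.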
