The Politics of Scientific Knowledge

Elizabeth Suhay
Subject: Communication and Technology, Health and Risk Communication, Political Communication
Online Publication Date: Jan 2017 DOI: 10.1093/acrefore/9780190228613.013.107

Summary and Keywords

This article discusses the various ways in which political concerns among government of­
ficials, scientists, journalists, and the public influence the production, communication,
and reception of scientific knowledge. In so doing, the article covers a wide variety of top­
ics, mainly with a focus on the U.S. context. The article begins by defining key terms un­
der discussion and explaining why science is so susceptible to political influence. The arti­
cle then proceeds to discuss: the government’s current and historical role as a funder,
manager, and consumer of scientific knowledge; how the personal interests and ideolo­
gies of scientists can influence their research; the susceptibility of scientific communica­
tion to politicization and the concomitant political impact on audiences; the role of the
public’s political values, identities, and interests in their understanding of science; and, fi­
nally, the role of the public, mainly through interest groups and think tanks, in shaping
the production and public discussion of scientific knowledge. While the article’s primary
goal is to provide an empirical description of these influences, a secondary, normative,
goal is to clarify when political values and interests are or are not appropriate influences
on the creation and dissemination of scientific knowledge in a democratic context.

Keywords: government-sponsored science, sociology of scientific knowledge, scientific communication, public understanding of science, political values, motivated cognition

Science, these days, is political. Few people would disagree with this sentiment. And, yet,
this represents a conundrum: many also would agree that science is supposed to be value
free—objective, and certainly independent of political influence. In what ways does poli­
tics influence scientific knowledge, and why does this influence occur? This article sets
out to answer these questions, providing an overview of various political influences on the
production, communication, and acceptance of scientific knowledge. The potential scope
of such a discussion is admittedly very broad. To provide a detailed accounting of politics
and science within the bounds of this research article, the focus is largely restricted to
the U.S. context. Focusing on one nation has the advantage of allowing for an integrated
discussion of relevant actors in society—scientists, government officials, journalists, and
the broader public—who react to one another, as well as to their shared history, as they
shape scientific knowledge.


The structure of the essay is as follows. It begins by offering a definition of the politics of
scientific knowledge and then proceeds to explain why science, which is supposed to be
value free, is so often imbued with political meaning. The remainder of the essay discuss­
es four groups of actors and the distinct ways in which they influence scientific knowl­
edge at its various stages: government officials; scientists; journalists; and the public.
This article aims mainly to provide a dispassionate, objective description of political influ­
ences on scientific knowledge. This said, at points the essay indicates where normative
theorists generally endorse or denigrate political influences on scientific knowledge, and
the end of the essay takes up normative questions in a more direct manner.

Defining “the Politics of Scientific Knowledge”


It is important to understand what is meant by the politics of scientific knowledge. This
term (and closely related ones, such as the politics of science) carries a wide range of
meanings—and not only because scholars disagree over how to define “politics” and “sci­
ence.” Unless one intends to signal any and all ways in which politics and science intersect,
greater specificity is needed. The word politics in the above phrase could be understood
as a noun (i.e., particular types of activities engaged in by scientists) or an adjective (i.e.,
an attribute of science). Further, the adjectival use could imply that science is an instru­
ment used to influence politics, is influenced by politics, or simply has political implica­
tions or effects (Brown, 2015, p. 6).

This essay discusses one piece of the politics of scientific knowledge. Building on a frame­
work introduced in Suhay and Druckman (2015), it focuses on political influences on the
production, communication, and acceptance of scientific knowledge (and not the reverse
causal relationship—scientific influences on politics). Politics, here, is not used in the
broad sense of power relations in society. Rather, it is used in the more formal sense: de­
scribing government actors, government activities, and—among the public—both prefer­
ences and actions related to how government should be structured and what it should do.
Scientific knowledge means conclusions drawn by scientists from systematic empirical
study in their areas of expertise that are formally communicated to the scientific commu­
nity (and, normally, the public).

This article focuses on scientific knowledge, as opposed to the scientific process that pro­
duces it, for two linked reasons. As noted in the next section, science is political because
it is powerful, and its power ultimately rests in the knowledge it produces, its epistemic
authority (Douglas, 2009). For this reason, discussion of political influences on science
that are not associated with concern over the topics studied by scientists and/or the con­
clusions they draw with respect to those topics is largely avoided. A prominent example
of political influences on science that largely fall outside of this purview would be efforts
by government actors to ensure the integrity of the scientific process among scientists
who hold government grants (see Guston, 2000). But focusing on scientific knowledge
does not solely narrow our purview; it also allows us to extend beyond the formal products of
the scientific process to examine how those products are communicated to others and
how they are understood by nonscientists, the lay public.

Finally, it is important to differentiate the definition of the politics of scientific knowledge
employed in this article from a related concept commonly discussed today—the politiciza­
tion of science. Numerous definitions of this term have been used in published work (e.g.,
Bolsen & Druckman, 2015; Fowler & Gollust, 2015). While the precise definitions vary in
their details, they overlap in asserting that some actors have purposely imbued a scientif­
ic topic with political meaning and that this outcome is normatively problematic. The top­
ic of this article is broader, encompassing unintentional actions (such as motivated cogni­
tion, bias that often operates below the level of conscious awareness) as well as political
influences on scientific knowledge that are welcome from a democratic perspective. Of
course, the term politicization is used when relevant.

Why Do Politics Influence Scientific Knowledge?

Many philosophers argue that understanding and describing real-world phenomena as
they exist is a different endeavor from advocating for a particular phenomenon. In other
words, “is” should not be confused with “ought” (Hume, 2000). Believing that one can de­
duce what should be done from what is true is called the “naturalistic fallacy” (Moore,
2004). Given that science is the province of “is” (or fact) and politics of “ought” (or val­
ues), this suggests science and politics should not mix.

Yet they do, even when science proceeds in an objective manner. While scientific knowl­
edge cannot directly dictate values, it can indirectly bolster or undermine them. Scien­
tists often investigate phenomena thought to be problematic, raising the question of how
something has come to be defined as a problem. What values and/or whose interests are
threatened? Given limited resources, scientists cannot investigate all problems, which
raises the further issue of whose problems are considered (by scientists or their spon­
sors) worthwhile enough to pursue. In addition to influencing which scientific studies are
carried out, values are also involved in translating scientific findings into societal action.
Values influence when evidence for a specific threat is deemed sufficient to justify action.
Finally, scientists inevitably advance certain preferences and interests at the expense of
others when they attribute blame for a problem to specific individuals or groups as well
as when they argue certain corrections are more promising than others. In sum, for a va­
riety of reasons, scientific study to determine what “is” is often intertwined with
“ought” (see Douglas, 2015; Jasanoff, 2012).

The general nature of this relationship between societal values and science is not country
specific; however, in the United States, it has become stronger and more formalized over
time as the federal government has increasingly recognized the importance of incorporat­
ing scientific knowledge into policymaking. The government’s reliance on scientific ad­
vice grew rapidly in the 20th century. By the end of the century, science advising could be
considered a “fifth branch of government” (Jasanoff, 1990). In turn, the U.S. government
has sought to foster scientific study thought to be in the public interest. This special sta­
tus of science within the U.S. government not only exemplifies the critical role such
knowledge plays in many policy decisions, it also represents a new locus of power over
government action.

With science’s role in studying societal problems and influencing collective action (includ­
ing government action) now in view, it becomes easier to understand why so many actors
wish to influence scientific knowledge, in ways that often go far beyond what philoso­
phers of science find appropriate (e.g., see Douglas, 2009, 2015). As Suhay and Druck­
man (2015) write, “individuals with strong convictions regarding which societal goals are
most important and how those goals ought to be achieved … have an interest in what is
accepted as ‘fact’” (p. 8). This interest sometimes motivates problematic attempts to get
science on one’s side—such as miscommunicating scientific findings to shore up an argu­
ment in a public forum or simply resisting new scientific information that undermines a
strongly held viewpoint. As indicated, not all attempts to influence scientific knowledge
are normatively objectionable, however. For example, a citizen group concerned over a
particular problem may try to influence the scientific agenda such that a solution might
be discovered (Bucchi & Neresini, 2008). Here, individuals wish to direct the topic of
study but do not wish to bias the resulting knowledge. Below, we discuss these—and
many other—examples of political influences on scientific knowledge.

Government Influences on Scientific Knowledge

Government officials and the policies they create are the most obvious place to look for
political influences on science. Sometimes officials act in the pursuit of personally held
values. More often, their actions are driven by the preferences of colleagues, interest
groups, and constituents. In other words, government policy is a route through which a
variety of actors directly and indirectly influence scientific knowledge. This section first
provides a brief history of the relationship between the U.S. government and scientists as
well as the origins of contemporary left-right disagreements over the value of govern­
ment-sponsored science. Then, in keeping with the remainder of the essay, it discusses in
a more fine-grained manner the specific ways in which government actors can influence
scientific knowledge—in terms of the agenda that establishes topics of study, the actual
doing of science, and the communication of scientific knowledge.

A Short History of (20th Century) Government-Science Relations

While the U.S. government employed scientists in many capacities in the 19th and early
20th century, the close relationship between government and science we know today was
forged during and shortly after World War II (Douglas, 2009; Kevles, 2006). President
Franklin D. Roosevelt greatly expanded the role of the federal government, and this ex­
pansion included increased attention to scientific research and training (Kevles, 2006, p.
768). This greater focus on science under Roosevelt would expand dramatically during
World War II, when the United States found itself greatly in need of technical assistance
for the war effort—not only for the purpose of creating armaments, but also for develop­
ing new medicines and information gathering technologies (Douglas, 2009). Large num­
bers of scientists were hired to work directly for the government in government labs and
indirectly via the contract research grant. New government-science institutions were cre­
ated, including the important Office of Scientific Research and Development, which coor­
dinated many scientific endeavors that supported the war effort. Scientists also began to
play a greater role in advising government officials, particularly the president. The most
influential of these advisors was Vannevar Bush, who not only advised President Roo­
sevelt but also was one of the primary architects of the new government-science institu­
tions of the period (Douglas, 2009; Guston, 2000).

After World War II ended, the United States found itself with a greatly expanded scientific
capacity but a less-than-clear scientific mission. Government officials and scientists em­
barked on an intense period of collaboration in repurposing this capacity for a post-war
world. While scientists generally were eager for the close relationship between govern­
ment and science to be made permanent through such institutions after the war, they re­
sisted continuing the hands-on government direction that wartime had required. In
Science: The Endless Frontier—technically a report to President Truman, but
widely read—Vannevar Bush (1945) advanced a vision of government-science relations
popular among scientists. This report argued that science is essential to the public wel­
fare, but that scientific productivity is best ensured by preserving scientists’ autonomy—
specifically, by investing in independent colleges, universities, and research institutes car­
rying out basic research. Bush writes: “Scientific progress on a broad front results from
the free play of free intellects, working on subjects of their own choice, in the manner dic­
tated by their curiosity for exploration of the unknown” (p. 7). The advice of Bush and his
compatriots was heeded to a considerable degree. As detailed by Guston (2000), a social
contract for science emerged. Trusted in large part because of their essential contribu­
tions to the war effort, scientists were given a great deal of deference, autonomy, and
funding.

In the years following World War II, many new federal government institutions were cre­
ated to both sponsor and oversee research. Prominent examples included the Office of
Naval Research (founded in 1946), the Atomic Energy Commission (1947), the Research
Grants Office of the National Institutes of Health (1946), and the National Science Foun­
dation (1950). The role of scientists as policy advisers would become more formalized
during this period as well. The most prominent of the new advisory groups was the
Science Advisory Committee, initiated by Truman in 1951, which would provide advice to
the federal government, especially the President (Douglas, 2009; Kevles, 2006).

Federal science would continue to expand at least through the 1970s, responding to na­
tional and international events. The National Aeronautics and Space Administration
(NASA) (created in 1958) and heavy investment in the space program were a direct re­
sponse to Sputnik, a satellite launched by the Soviet Union, within the context of the Cold
War. The Environmental Protection Agency (EPA) (1970) was a response to a new environ­
mental movement among Americans. These expansions were joined by even more re­
liance among government officials on scientists—with respect to “virtually every techni­
cally related area of government policymaking” (Kevles, 2006, p. 769). This growing role
for science in government was supported by both Democrats and Republicans in govern­
ment, a consensus that largely held until the end of the Cold War (Kevles, 2006).

Confidence in government-sponsored science among political leaders on the right and left
began to decrease in the 1970s. It was in this era that science began to be politicized in a
manner we are familiar with today, and the elevated status of scientists that had brought
them considerable autonomy (in addition to esteem) was diminished. In a sense, govern­
ment-sponsored science would be a victim of its own success. Scientists were playing a
greater role than ever before in directing government policy. In addition, new medicines
and technologies were developing and entering the marketplace at a rapid pace. In both
capacities, science was becoming increasingly intertwined with Americans’ lives (Kevles,
2006). All manner of interest groups and activists took notice of the power of science and
technology, publicly lauding scientific reports that confirmed their perspectives, oppos­
ing those that undermined them, and in some cases opposing the reach of science and
technology, period. All of this fed scientific controversy (Kevles, 2006; Nelkin, 1995).

The slow fracturing of the bipartisan consensus on government science was uneven, how­
ever. Those on the right eventually became far more critical of government-sponsored sci­
ence than those on the left. This difference is best understood within the context of grow­
ing ideological differences between the two parties. During the 1960s and 1970s, the De­
mocratic Party shifted to the left, increasing its support for government intervention in
a range of issue areas, most prominently civil rights and the environment (Noel, 2014). In
supporting the government’s work in these and other areas, the left almost necessarily
supported the technical experts on whose knowledge government action was based
(Kevles, 2006). In reaction to the Democrats’ leftward shift (and to the growing power of
the federal government in general), the Republican Party moved in the opposite direction,
becoming increasingly conservative and anti-federal regulation (Noel, 2014).

The challenge to government science from the right would take two forms. At the least,
conservatives argued, government funding for science should decrease in an effort to
control the federal deficit. As conservatives had argued in the past, government-funded
science should be limited and practical in nature; government funding of basic research
was largely superfluous. Private funding of science should replace much federal funding
(Kevles, 2006, p. 772). A newer argument would emerge, however, that was more damag­
ing to the scientific endeavor and a direct response to science’s hand in federal power:
conservatives seized on the idea that they could challenge government’s increasing intru­
sion into Americans’ lives by challenging the science on which it was based (Jasanoff,
2012, pp. 12–13; Kevles, 2006; Oreskes & Conway, 2010).

The end of the Cold War was perhaps the nail in the coffin for conservative support for
large-scale, government-sponsored science. For many Republican Members of Congress,

Page 6 of 29

PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, COMMUNICATION (oxfordre.com/communication). (c) Oxford
University Press USA, 2019. All Rights Reserved. Personal use only; commercial use is strictly prohibited (for details see Pri­
vacy Policy and Legal Notice).

date: 27 November 2019


The Politics of Scientific Knowledge

the worth of scientists rested largely in their ability to counter foreign threats, as they
had during World War II and the Cold War; with the fall of the Soviet Union, this impetus
disappeared (Kevles, 2006). Republicans’ ability to act on this increasing skepticism of
government-sponsored science would also grow in subsequent years due to electoral suc­
cesses at the Congressional and Presidential level.

How Government Actors Influence Scientific Knowledge

With this brief history in place, let us discuss in more detail some of the ways in which
government actors influence scientific knowledge—scientific agendas, discoveries, and
communication. Before beginning, it is important to recognize that both the President and
Congress have the ability to substantially influence government-sponsored science. The
President is the formal head of the bureaucracy and exercises power through appointees
as well as direct executive actions, such as executive orders. The Congress writes the leg­
islation that shapes the parameters of bureaucratic institutions, controls the purse strings
of those institutions, and exercises oversight (Lowi, Ginsberg, Shepsle, & Ansolabehere,
2014).

It is no secret that the U.S. government plays an enormous role in setting the scientific
agenda of the nation. It does so in large part through the expenditure of research dollars,
much of which is distributed through grants. The government’s most powerful agenda-
setting tool with respect to scientific knowledge is simply its ability to drastically expand
or contract research funding in general. Yet, despite this blunt power over the
overall growth or contraction of scientific knowledge, it is rarely exercised. Sarewitz
(2013) shows that the relative size of the Research and Development (R&D) budget re­
mained remarkably stable in the years following World War II and particularly since the
1970s. Since that time, total R&D (including military and nonmilitary) has ranged from 13
to 14% of discretionary spending. In constant dollars, the amount of federal science
spending has increased during this period, but this increase has been in concert with an
increase in federal spending overall. This stability stems at least in part from an institu­
tional quirk of the U.S. budgeting system for science: it is highly decentralized, thus re­
sisting strategic planning by ideological presidents or members of Congress (see Sare­
witz, 2013). The one clear aberration in the overall size in the R&D budget over the last
six or so decades occurred in the 1960s, when nondefense R&D briefly doubled as a per­
centage of nondefense discretionary spending. This occurrence is linked to one very ex­
pensive priority of the Kennedy Administration, however: sending people to the moon
(Sarewitz, 2013, p. 15).
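
To make the pattern concrete, here is a minimal sketch in Python. All figures are hypothetical placeholders chosen only to mirror the dynamic Sarewitz describes; they are not official budget data. The point is that a roughly constant R&D share of a growing discretionary budget still yields growth in constant dollars.

    # Illustrative only: hypothetical figures, not official budget data.
    discretionary_nominal = {1980: 500e9, 2010: 1300e9}   # hypothetical totals
    rd_share = 0.135                                      # R&D steady near 13-14%
    price_index = {1980: 1.0, 2010: 2.0}                  # hypothetical cumulative inflation

    for year in (1980, 2010):
        rd_nominal = rd_share * discretionary_nominal[year]
        rd_constant = rd_nominal / price_index[year]      # expressed in 1980 dollars
        print(year, f"nominal R&D: ${rd_nominal / 1e9:.0f}B,",
              f"constant-dollar R&D: ${rd_constant / 1e9:.0f}B")

    # The share never moves, yet constant-dollar R&D rises (here, from $68B
    # to $88B) because the overall budget grew faster than inflation.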

Presidential administrations and members of Congress tend to exercise their budgeting
powers by setting priorities within a relatively stable budgeting pie. The most obvious
shift in priorities appears if we compare funding for NASA with funding for the National
Institutes of Health (NIH). Even after the Apollo mission to the moon had concluded, the
NASA budget far exceeded those of other science-related agencies. However, the NIH
reached parity with NASA in the 1980s, and, today, its budget is 2.5 times that of NASA.
NIH currently has a budget of $30 billion, with 80% of those funds awarded through
grants to nongovernment researchers (National Institutes of Health, 2015). Other institu­
tions, such as the Department of Energy, have seen their budgets rise and fall as well
(Sarewitz, 2013, p. 16). To some extent, these changes carried out by government offi­
cials reflect the public interest: a growing, then waning, perceived need to compete with
the USSR; greater desire to devote monies to public health as medical technologies con­
tinue to advance; greater, and then lesser, worry over access to energy.

However, agenda setting by government officials is not only due to widespread public
concern. Interest group lobbying—carried out on behalf of particular groups of re­
searchers, industries, and private citizens—can noticeably influence the size of specific
government-science institutions’ budgets as well as to what subjects they allocate funds
(Greenberg, 2001). The concerns of individual members of Congress also can play a role.
For example, beginning in 2009, Republican members of Congress sought to reduce so­
cial science funding within the National Science Foundation (NSF) and to eliminate Po­
litical Science funding entirely (see Coburn, 2011; Sides, 2015). The efforts,
spearheaded by former Senator Tom Coburn, did result in reduced spending in these ar­
eas. While part of a general crusade against wasteful spending, Coburn’s intense interest
in eliminating Political Science funding in particular suggests some idiosyncratic personal
beliefs were at play. As Sides (2011) points out, the Political Science program Coburn
sought to eliminate cost only $5 million (around 0.1% of NSF’s billion-dollar budget),
hardly a big contributor to the deficit.

These decisions to increase or decrease government investment in particular research
agendas have an enormous influence on where scientific discoveries occur. For example,
recent increases in government funding of biosciences via the NIH have led to greater ex­
pertise on these subjects and many important discoveries, such as our still rapidly grow­
ing understanding of the intricacies of the human genome. Such funding also has impor­
tant ripple effects. Graduate students are trained, and researchers develop expertise they
will continue to build on even after their grants have expired. Furthermore, the excitement
surrounding such discoveries, and the research dollars attached to them, attract the at­
tention of scholars in other disciplines. Not only are there more people working within
the biosciences today due to increased government funding, but also a range of other dis­
ciplines—including in the social sciences and humanities—have begun to incorporate bio­
logical approaches into their work (e.g., see Krimsky, 2013).

Agenda setting is just one aspect of government influence over scientific knowledge—one
that (depending on the reason for influence) need not necessarily be worrisome. More
problematic are efforts by government actors to change the conclusions and public re­
ports of scientists working on behalf of the government.1 Jasanoff discusses how the regu­
latory process in particular is vulnerable to political influences, as scientists within agen­
cies must meticulously deconstruct knowledge claims to examine their strength and cer­
tainty, which invites politically motivated arguments over the strength of the evidence for
or against a particular policy (Jasanoff, 1987). Although political meddling in regulatory
science and other technical areas of government occurs with some frequency, the George
W. Bush administration stands out as unusually politicized. An extensive investigation car­
ried out by the Union of Concerned Scientists, discussed in two reports (Union of Con­
cerned Scientists, 2004, 2005), found that the Bush administration had been consistently
suppressing and distorting research findings at federal agencies on a wide range of top­
ics. Those topics included environmental concerns (climate change, endangered species,
forest management, strip mining); health concerns (HIV/AIDS, breast cancer, contracep­
tion, abstinence-only education); and the war in Iraq (whether the Iraqi government was
building weapons of mass destruction).

Presidential administrations are not the only ones who try to influence the conclusions of
government-sponsored research. When faced with a federal agency generating inconve­
nient scientific conclusions, members of Congress may threaten to decrease or eliminate
an agency’s funding or, short of that, conduct hearings or subpoena information in an ef­
fort to discredit or harass scientists. For example, at the time of this writing in late 2015,
Rep. Lamar Smith (R-Tex)—a well-known “climate skeptic” and also chairman of the
House Committee on Science, Space and Technology—had recently issued a subpoena for
internal deliberations of scientists working for the National Oceanic and Atmospheric Ad­
ministration (NOAA) who had worked on a well-regarded study published in Science that
refuted claims that global warming had slowed in the preceding decade. While Smith
stated that his intentions were to investigate whether scientists had rushed the study and
published it despite important flaws, many scientists and administrators, including the
head of the NOAA, interpreted the subpoena as politically motivated, with the goal of in­
timidating scientists (Rein, 2015a, 2015b). This pattern of unusually aggressive interfer­
ence with, and skepticism of, government-sponsored scientists by Republican leaders in
recent decades—particularly surrounding the issue of climate change—has led some to
declare that there exists a Republican war on science (Mooney, 2005; also see Kolbert,
2015; vanden Heuvel, 2011).

Political influences on science in the federal government extend to science advising as
well. From the politician’s perspective, science advisers serve two functions: to help him
or her make quality policy decisions (evidence-based decisions), and to provide justifica­
tion for already made policy decisions to colleagues, the media, and the public (what we
might call, more critically, decision-based evidence). Befitting the post-war bipartisan
consensus on scientists’ perceived important contributions to the public welfare, Presi­
dents Eisenhower and Kennedy both emphasized the first role, listening intently to their
science advisers (Douglas, 2009). Yet, throughout history, “both presidents and Congress
latched onto technical views that suited their political purposes” (Kevles, 2006, p. 762).
Nixon famously disbanded the President’s Science Advisory Committee when its mem­
bers refused to rubber-stamp his war-related initiatives (Kevles, 2006, p. 763). More re­
cently, the George W. Bush administration continually applied political litmus tests to ap­
pointees on advisory committees, thus ensuring ahead of time that those advisers would
be on the President’s side (Kevles, 2006). Congress competes in the contest for science-
backed credibility as well, with Democrats and Republicans cherry-picking scientists
based on known perspectives to appear as supportive experts in Congressional hearings.
Because of these behind-the-scenes efforts to control which scientific perspectives are ex­
pressed in public forums, even politically neutral scientists who agree to speak in such
venues can add fuel to the fire of political debate, particularly where value differences be­
tween opposing sides are high and scientific certainty is low (Pielke, 2007).

The Influence of Political Values on Scientists at Work

The previous section discussed various ways in which government institutions and actors
influence scientific knowledge. However, it should not be presumed that scientists them­
selves do not also have political commitments that may influence their work. This said,
writing about such influences is a challenging endeavor. The process of research usually
is not observed by anyone beyond the researcher or research team, and, even if scientists
are observed in action, the observer cannot peer into their minds to understand their
thought processes. While it is certainly possible to shed some light on scientists’ likely
motivations via empirical research (e.g., observation, personal interviews, textual analy­
sis of notes and publications), conclusions normally must be somewhat tentative.

The field of the sociology of scientific knowledge (SSK) has contributed the most to col­
lective understanding of various influences on scientists’ work, including the topics scien­
tists pursue, the methods they employ, and the conclusions they draw. As one learns in
classic works in the field, such as Latour’s Science in Action (1986), creating knowledge
bears little resemblance to the overly concise and stylized way scholarly publications por­
tray research. Scientists make critical decisions based on competition with other scien­
tists, power dynamics, and miscellaneous epistemic values,2 such as a preference for nov­
elty, theoretical simplicity, or particular methodologies (also see Douglas, 2009). For the
most part, these influences are not political, at least according to the relatively formal de­
finition used herein.

Yet, politically relevant values and interests do sometimes play a role in coloring scientific
research. These influences can enter at the agenda-setting stage as well as in
the “internal stages of scientific reasoning” (Douglas, 2015, p. 122)—planning and carry­
ing out a study and interpreting its evidence.

To begin, political influences on research agendas are not only produced indirectly
through funding. While some scientists pursue subjects out of intrinsic intellectual ap­
peal, scientists’ values often also influence their research agendas to some degree. P. B.
Medawar, in Advice to a Young Scientist, insists that scientists must study problems in
which “it matters what the answer is—whether to science generally or to
mankind” (Medawar, 1979, p. 13). Knowledge that matters to mankind is certainly bound
up with values. Some of these values are widely shared and pursued via a range of
projects, such as improving humans’ health and happiness via medical or consumer safe­
ty research. In other cases, value priorities may differ considerably between individuals,
or people may share societal goals but disagree over how best to get there (Rokeach,
1973). Such contested values are apparent in—and divide—the social sciences. For exam­
ple, it is fairly well known that American sociologists are, on average, considerably more

Page 10 of 29

PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, COMMUNICATION (oxfordre.com/communication). (c) Oxford
University Press USA, 2019. All Rights Reserved. Personal use only; commercial use is strictly prohibited (for details see Pri­
vacy Policy and Legal Notice).

date: 27 November 2019


The Politics of Scientific Knowledge

liberal than economists. It is likely that students who are relatively left-leaning are drawn
to a field (sociology) explicitly concerned with social ills, such as racial discrimination,
whereas students who are relatively right-leaning are drawn to a field (economics) which,
at least until fairly recently, held that economic markets are most efficient when they are
free of government regulation.

When performing and interpreting a research study, scientific norms dictate the impor­
tance of avoiding any direct influence of social or ethical values (beyond those values that
outline ethical scientific procedures, such as the treatment of human or animal subjects).
This means that scientists should not design studies in a way that guarantees a desired
conclusion will be reached. It especially means that, when interpreting evidence, scien­
tists should not allow themselves to be influenced by what they wish the result to be.
One’s personal values simply are not appropriate evidence (see Douglas, 2009). Most pro­
fessional observers of the scientific process would argue that scientists generally strive to
adhere to this ethos.

This said, scientists—particularly government scientists working in a regulatory capacity
—consistently do (and should) take values into account indirectly in their work (Douglas,
2009). Based on concern over real-world risks, scientists must consider whether the
weight of the evidence they have before them justifies an affirmative scientific claim,
which may include a recommendation for some type of collective action, or whether more
evidence should first be gathered to increase certainty. Given that no scientific study ever
claims to have 100% settled an empirical question, scientific uncertainty is a focal point
of much political disagreement related to science-backed government policy. For exam­
ple, some may see a risk, such as children’s ill health due to low-level exposure to lead, as
unacceptable and recommend efforts to remove all lead from children’s environments
even if the evidence of ill health effects remains uncertain. Others may be more tolerant
of such a risk and demand further study to demonstrate more conclusively that the ill
health effects of low-level lead exposure are consistent and substantial prior to additional
regulation (see Douglas, 2009). The climate change debate offers a different type of ex­
ample. Most climate scientists argue that there is enough persuasive evidence for the cat­
astrophic effects of climate change that steps must be taken immediately to counteract
it. A handful of climate scientists, in many cases connected to industries
that stand to lose a great deal if their productivity is curbed by government regulation,
have argued that the science is as of yet too uncertain to impose the costs of regulation
on American industries and consumers (Oreskes & Conway, 2010).
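
The role values play in setting evidential thresholds can be restated as a simple expected-cost comparison. The following sketch is a stylized illustration, not a model taken from the lead or climate literatures, and every number in it is invented: two actors read the evidence identically but weigh the competing costs differently, so they reach opposite conclusions about whether action is justified.

    # Stylized decision rule: same evidence, different values, different
    # conclusions. All numbers are invented for illustration.
    def favors_action(p_harm, cost_of_harm, cost_of_action):
        """Recommend acting when expected harm exceeds the cost of acting."""
        return p_harm * cost_of_harm > cost_of_action

    p_harm = 0.6  # shared reading of the evidence: the harm is probably real

    # Actor A weighs the potential harm (say, children's health) heavily.
    print(favors_action(p_harm, cost_of_harm=100, cost_of_action=20))  # True

    # Actor B weighs the burden of regulation heavily relative to the harm.
    print(favors_action(p_harm, cost_of_harm=30, cost_of_action=20))   # False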

Thus far, only explicit (or conscious) political influences on scientists have been de­
scribed. Such influences play a role not only in scientists’ decisions regarding their field
(and topics) of study but also in their assessments of whether the strength of evidence
justifies a public conclusion and perhaps a recommendation for societal action. This said,
borrowing from research in psychology and the sociology of scientific knowledge, politi­
cally relevant values and interests likely also influence the doing of science—designing,
conducting, and interpreting studies—to some extent subconsciously, via motivated cogni­
tion as well as background assumptions. Motivated cognition involves, in essence, wish­

Page 11 of 29

PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, COMMUNICATION (oxfordre.com/communication). (c) Oxford
University Press USA, 2019. All Rights Reserved. Personal use only; commercial use is strictly prohibited (for details see Pri­
vacy Policy and Legal Notice).

date: 27 November 2019


The Politics of Scientific Knowledge

ing for a particular scientific conclusion (or fearing a conclusion) due to values or inter­
ests, and (unknowingly) allowing this desire to influence one’s interpretation of evidence.
Motivated cognition consists of two key behaviors: increased skepticism of evidence that
undermines one’s point of view, and searching the information environment, or one’s
memory, for facts that bolster one’s perspective (Lodge & Taber, 2013). While it appears
as though experts are less likely than others to engage in this style of thinking (Kahan et
al., 2016), it likely exists in some form among scientists. Background assumptions operate
differently. These are not values but, rather, factual beliefs about the world that are taken
for granted. Such perceived facts unavoidably differ among people, leading Barker and
Kitcher (2014) to avoid calling them a “bias.” Instead, all knowledge is “situated” in light
of the unique perspectives of the observer. Background assumptions may have a political
flavor when they take the form of stereotypes of social groups or other distinct percep­
tions of the world that stem from a person’s socioeconomic status or political alliances.
Such assumptions can influence scientists’ work by making it easier to “see” evidence
that fits an expected pattern (Barker & Kitcher, 2014).
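
A toy simulation can make the first of these behaviors, asymmetric skepticism, concrete. The sketch below is purely illustrative; it is not a model from the motivated reasoning literature, and its parameters are invented. Two observers tally an identical evidence stream, but one discounts disconfirming signals, and their resulting beliefs diverge.

    import random

    # Toy illustration of motivated cognition: identical evidence, but the
    # motivated observer gives disconfirming signals less weight.
    random.seed(0)
    evidence = [random.random() < 0.7 for _ in range(200)]  # 70% of signals favor the claim

    def belief(evidence, disconfirming_weight=1.0):
        """Weighted tally: belief = supportive weight / total weight."""
        support = sum(1.0 for e in evidence if e)
        oppose = sum(disconfirming_weight for e in evidence if not e)
        return support / (support + oppose)

    print(round(belief(evidence), 2))                            # ~0.70, even-handed
    print(round(belief(evidence, disconfirming_weight=0.3), 2))  # ~0.89, motivated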

Below, these influences—motivated cognition and background assumptions—are dis­
cussed within the context of scientific debates over biological influences on human char­
acteristics and behaviors. The study of biological inheritance (i.e., genetics), in particular,
has strong political implications (see Suhay & Jayaratne, 2013 for an overview), meaning
that scientists’ own political commitments may influence their work more here than with
respect to other topics. While this makes it easier to spot cases of bias, it is important to
note that the examples described below are likely not representative of scientific research
generally, including within the biosciences today.

Prior to World War II, the work of U.S. (as well as many British and European) geneticists
appeared to be influenced both by motivated reasoning and problematic background as­
sumptions. These scientists, most of whom were upper-class, Christian men of Western
European descent, made a number of important discoveries related to genetic inheri­
tance, but also a related set of additional claims that have since been discredited by biolo­
gists. The most notorious of those claims included the belief that a wide range of people—
southern and eastern Europeans, Africans, Asians, Jews, women, the poor, those with ad­
dictions, and the mentally ill—were genetically inferior to people like themselves. It is not
an exaggeration to say that many of these scientists advocated for eugenic practices—
many of which were acted upon by the U.S. government—including forced sterilization of
some individuals and greatly reduced immigration (Beckwith, 2002; Kevles, 1985; Paul,
1998). These early geneticists did not appear to be consciously skewing their conclusions
for political reasons, however. In an era of rising inequality and immigration, “the raison
d’etre of the eugenics movement was the perceived threat of swamping by a large class of
mental defectives” (Paul, 1998, p. 125). Problematic background assumptions about peo­
ple with whom they had little interaction—people from different social classes and na­
tions and of different races and ethnicities—appeared to be influencing these scientists.
Many geneticists of the era genuinely believed that large swaths of the masses were men­
tally disabled and, in having many children, threatened to diminish future Americans’
wellbeing. Barker and Kitcher (2014) also suggest motivated reasoning influenced the ge­
neticists’ extreme conclusions: “Historically, research aimed at finding innate biological
differences that underlie and explain existing social inequalities has enjoyed intense in­
terest and often won acclaim … Bad or sloppy science may be tolerated if it leads to com­
fortable conclusions” (2014, pp. 107–108). Whatever the reasons for their conclusions,
the American eugenics movement caused substantial human suffering in the United States
and perhaps beyond. While historical counterfactuals are impossible to trace out with cer­
tainty, it is well established that Hitler drew heavily on the ideas of both American and
British geneticists and eugenicists when formulating Nazi racial ideology (Black, 2003;
Kühl, 1994).

After World War II, in the wake of the Holocaust, the ideological ground surrounding bio­
logical research shifted considerably. For example, Provine (1973) documents how geneti­
cists “changed their minds about the biological effects of race crossing” (i.e., miscegena­
tion) after the war even though the store of relevant scientific evidence on the subject
(there was, in fact, very little) had not changed. Before the war, many geneticists had
warned that the offspring of two parents of different “races” would likely exhibit physical
defects; after the war, genetic scientists reversed this claim, arguing defects were highly
unlikely. Segerstrale (2000) describes a general post-war taboo on using biology to ex­
plain human behavior because of concern such theories could be used to justify prejudice
and discrimination against vulnerable groups in society. Those scholars who did make
claims about biological influences on human characteristics and behaviors, such as famed
sociobiologist E. O. Wilson, were often met with intellectual attacks whose ferocity sug­
gested more than just academic motivation.

In more recent years, the ideological nature of debates over the origins of human differ­
ences has dissipated. But new scientific controversies continue to arise in this arena, and
sometimes the political motivations behind the actors involved are quite apparent (e.g.,
see Dreger, 2016). Interestingly, in the contemporary era, the coalitions arguing in favor
of nature vs. nurture have been reshuffled to a degree. While the academic disciplines
most associated with egalitarian value orientations continue to be relatively pessimistic
about research on human genetics (Hochschild & Sen, 2015), empirical findings in sup­
port of innate influences on sexual orientation specifically have been eagerly communi­
cated by some socially progressive researchers (e.g., Bailey et al., 2016). As the belief
that people are “born gay” has increasingly become associated with tolerance for diverse
sexual orientations in the public (see, e.g., Garretson & Suhay, 2016), some academics
may be motivated to present such evidence in a favorable light to further advance gay
rights (Pitman, 2011; Walters, 2014). In sum, Provine’s conclusion several decades ago, in
a very different context, continues to ring true today: “the science of genetics is often
closely intertwined with social attitudes and political considerations” (1973, p. 796).

A discussion of scientists’ political biases would be incomplete without some discussion of
the handful of scientists who knowingly distort scientific truths for political ends. By all
accounts, the relative number of such individuals is exceedingly small. However, given
that such scientists are likely to be highly outspoken, their small number belies their im­
pact. A remarkable account of one such group of scientists is provided by Oreskes and
Conway (2010). The authors document the activities of a small cadre of fervently anti-
communist and libertarian scientists who would knowingly mislead the government, the
media, and the public on the science behind a range of topics, from the risks of smoking,
to Reagan’s Strategic Defense Initiative, to various environmental concerns (including the
ozone layer, acid rain, and climate change). For these individuals, their fierce opposition
to communism and anything resembling it (i.e., government regulation) justifies lying.
The strategy of these individuals is to attack any science they do not like as “junk sci­
ence”—as science either driven by politics or full of mistakes (or both). In some cases,
they trumpet dubious studies produced by themselves or their allies. Because these indi­
viduals are accomplished scientists (usually in fields other than those they are critiquing,
however), they are trusted. These unscrupulous scientists play a key role not only in influ­
encing government policy but also in fostering the undeserved public perception that
much government-sponsored science is biased (Oreskes & Conway, 2010).

Public Engagement with Science


A broader view of scientific knowledge considers the communication of scientific knowl­
edge to the public, public perceptions of what is (or is not) settled scientific knowledge,
as well as ways in which the public itself can influence the production of scientific knowl­
edge.

The (Political) Science of Science Communication

It is difficult to separate the public’s understanding of science from the communication of
science. For nonscientists, science is a mediated reality. “Their exposure to science and
scientists … is not a direct one, but indirect through mass or online media” (Scheufele,
2014, p. 13587). While many types of people engage in science communication via media
(including scientists themselves), this section focuses on science communication by enti­
ties that politicize scientific communication with some frequency: journalists and interest
groups.

Science journalism began in earnest in the United States between the world wars
(Lewenstein, 1995; Weingold, 2001). The focus of this field has long been simply to trans­
late scientific findings for the general public with an added dimension of clarifying how
scientific findings may be—or may become—relevant to lay people’s lives (Lewenstein,
1995). With this latter point in mind, science reporting has long had a value dimension.
This said, science reporting has recently become more explicitly politicized. The reasons
are several.

First, the ranks of science journalists have been thinning due to shrinking news budgets
and associated newsroom cuts. As a result, when scientific topics are covered, they are
often covered by nonspecialists, including political reporters and columnists. These indi­
viduals are more likely to frame scientific issues in a political manner (Nisbet & Fahy,
2015).


Second, over the last several decades, controversy has become a craft norm of the news
media (Weingold, 2001). As with other media stories, framing scientific findings as politi­
cally controversial increases audience interest. Examples stretch far beyond the well-
known example of climate change reporting, including medical scientists’ health recom­
mendations (Fowler & Gollust, 2015) and genetic discoveries (Garretson & Suhay, 2016),
among others. Where scientific knowledge is contested among scientists, emphasizing
controversy may be even more advantageous for journalists. In such cases, most journal­
ists are unlikely to understand the scientific or technological issue well enough to judge
which claims in a scientific debate are well founded and which are safely ignored
(or debunked). Further, in covering the controversy, rather than adjudicating between
competing claims, journalists often are trying to appear objective, in the sense of a bal­
anced presentation of all sides of a debate (a long-held craft norm). Of course, when the
view of a minority of scientists is presented in media reports as just as credible as that of
an overwhelming majority of practicing scientists, this greatly distorts public perceptions
of the current state of scientific knowledge (Oreskes & Conway, 2010, p. 243).

Yet a third way in which politics can influence science reporting is less well known to
those outside the media. Scheufele (2014) describes behind-the-scenes strategic efforts
by a variety of policy stakeholders—including interest groups, corporations, scientific as­
sociations, and others. These groups compete for access to the news agenda and, not sur­
prisingly, work hard to ensure that their science or technology issue of interest is framed
in the way they want (Nisbet & Huge, 2006). One method of gaining access to the news
agenda under favorable terms is to provide information subsidies to news organizations
(Weingold, 2001, p. 181). In a striking parallel to the influence of lobbyists on Capitol Hill
(see Drutman, 2015), perpetually rushed journalists are sometimes relieved to be able to
draw heavily on a press release provided by an interest group.

Fourth and finally, politics sometimes also enters science reporting simply due to the po­
litical goals of a particular reporter or news outlet. Partisan news outlets have flourished
amidst the fragmentation of the media (Levendusky, 2013; Stroud, 2011). Such outlets are
certainly less interested than others in neutral reporting. A content analysis of climate
change coverage on several cable news channels (Fox News, CNN, MSNBC) between
2007 and 2008 demonstrated that Fox was more dismissive of climate change and inter­
viewed more climate change doubters than the other cable channels (Feldman, Maibach,
Roser-Renouf, & Leiserowitz, 2012). This said, note that political bias practiced by such
outlets is not necessarily carried out by presenting falsehoods. Rather, politically motivat­
ed news outlets and journalists may cherry-pick the studies they discuss—only reporting
ones with results that support their perspective—or express greater skepticism of studies
that undermine their perspective.
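
A short simulation shows how much cherry-picking can distort the apparent state of the evidence. Everything in the sketch is hypothetical (invented effect sizes, no real studies): the full set of results points one way, while the selectively reported subset points the other.

    import random

    # Illustrative only: hypothetical study results, not real data.
    random.seed(1)
    true_effect = 0.5
    studies = [random.gauss(true_effect, 1.0) for _ in range(50)]  # simulated findings

    all_mean = sum(studies) / len(studies)
    reported = [s for s in studies if s < 0]   # outlet reports only contrary results
    reported_mean = sum(reported) / len(reported)

    print(f"mean of all studies:      {all_mean:+.2f}")       # close to +0.5
    print(f"mean of reported studies: {reported_mean:+.2f}")  # negative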

All of these political influences on science journalism will influence public understanding
of science in some way, of course. Even when reporters themselves have no political
agenda, simply alerting media consumers to the fact that there are different sides to a de­
bate (and clarifying which political values or identities are associated with which side)
will tend to encourage motivated cognition in the public. Where scientific knowledge has
identified a threat to the public and thereby justifies government intervention, politicized
reporting also has a status quo bias. For example, Fowler and Gollust (2015) found that
when media coverage of the HPV vaccine emphasized political conflict over its use, sup­
port for the vaccine and a state immunization program decreased. Covering the contro­
versy also tends to erode public trust in scientists, as their motives are implicitly por­
trayed as political (Fowler & Gollust, 2015). As for more marked political biases, these in­
fluence public understanding of science in predictable ways. The previously mentioned
study of cable news climate change reporting also found that Fox News viewers were less
likely to believe in climate change than viewers of other channels, even after controlling
for possible confounds (Feldman, Maibach, Roser-Renouf, & Leiserowitz, 2012). A follow-
up study suggests that this media influence was mediated by changes in viewers’ trust in
scientists (Hmielowski, Feldman, Myers, Leiserowitz, & Maibach, 2014).

Finally, those who wish to communicate about science do not need to rely on journalists.
Oreskes and Conway describe a number of such efforts, including a pamphlet called “A
Scientific Perspective on the Cigarette Controversy” sent to 176,800 American doctors in
the 1950s. The intellectually dishonest pamphlet, funded by the tobacco industry, chal­
lenged existing evidence that smoking causes cancer so that doctors would not recom­
mend that their patients quit smoking (Oreskes & Conway, 2010, p. 18). During the draft­
ing of this article, the author saw an advertisement, broadcast during a widely viewed
sporting event, sponsored by “Fuels America,” an industry group promoting biofuels. The
ad urged President Obama to support the Renewable Fuel Standard, which, according to
the ad, has been supported by government scientists and opposed by the oil industry. An
almost too-good-to-be-true example for the present purposes, the biofuel industry-spon­
sored ad then associated government scientists with angels and oil executives with the
devil, complete with fire and smoke (Fuels America, 2015). Here, industries with a sub­
stantial financial stake in an upcoming government decision sought to portray themselves
simultaneously as having science on their side, and as being on the side of angels. What­
ever the merits of biofuels and the Renewable Fuel Standard, a viewer would be right to
be skeptical of such an advertisement.3

Political Influences on Public Understanding of Science

The intersection of scientific knowledge and the public is an increasingly popular topic
among scholars, media, and the public itself, at least in the United States. This article is
being written shortly after a period of national reflection on, and criticism of, the public’s
understanding of scientific knowledge (e.g., see Achenbach, 2015; McIntyre, 2015; Pew
Research Center, 2015). Concern has been spurred primarily by continued rejection of cli­
mate change among many (Brewer, 2012; Lewandowsky, Gignac, & Oberauer, 2013; Za­
jko, 2011), but also by a set of smaller controversies, such as some parents’ refusal to
vaccinate their children (Nyhan, 2014). One irony is that this conversation is playing out long after those who study the public's understanding of science turned away from criticizing the public for such knowledge deficits and biases.

The field of public understanding of science, which began in earnest in the 1980s, has
changed considerably in just three decades (see Brossard & Lewenstein, 2010; Wynne,
1995 for overviews). Early work in the field focused on measuring the public’s awareness
of well-established scientific facts via surveys, finding that such awareness was remark­
ably low (Miller, 1983). The field turned its attention to addressing this problem, assum­
ing that greater exposure to higher quality scientific communication would not only im­
prove scientific literacy but would also increase Americans’ appreciation of science. Some
refer to this set of assumptions as the deficit model, because “it describes a deficit of
knowledge that must be filled, with a presumption that after fixing the deficit, everything
will be ‘better’” (Brossard & Lewenstein, 2010, p. 13).

This model has since been critiqued on a number of grounds. From an empirical perspec­
tive, scholars point out that greater exposure to scientific communication or interest in
scientific topics often does not lead to better understanding of science, in the sense of
holding beliefs that accurately reflect scientific consensus (Brewer, 2012; Kahan et al.,
2012). Similarly, greater and more accurate understanding of scientific knowledge does
not necessarily lead to more appreciation for science (Wynne, 1995; although see Sturgis
& Allum, 2004). Other critiques of the deficit model have challenged the normative as­
sumption that the public ought to improve its science literacy. Should we ask lay people to
spend valuable time increasing their store of scientific knowledge, much of which has no
obvious utility in their day-to-day lives? Is it even appropriate to assume that scientific
knowledge is always superior to lay knowledge (Wynne, 1995)? We return to these ques­
tions in the final section below.

A frequent theme of more recent research on public understanding of science is that the
uptake of scientific knowledge depends in part on a person’s trust in the scientific enter­
prise. Thus, rather than increasing scientific knowledge leading to greater approval of
science, as in the deficit model, the causal relationship often works in the opposite direc­
tion. While public trust in science in the United States is high relative to other institutions
(Shapin, 2008), trust in science varies considerably among groups in the population, and
much of this variation has a political flavor. Perhaps most notably, Gauchat (2012) docu­
ments a marked decline in American conservatives’ trust in science from the 1970s to
2010. Conservatives began the period with the highest trust in science but ended the pe­
riod with the lowest. In a recent survey, Blank and Shaw (2015) document higher levels of
trust in scientists among liberals and Democrats than among conservatives and Republi­
cans across nearly every scientific topic examined.

The above raises the question: why do levels of trust in science vary among political (and
other) groups in the public? Some scholars have provided evidence for the import of so­
cial identity to trust and, therefore, the acquisition of scientific information (Wynne,
1992). Blank and Shaw (2015) point out that U.S. scientists are considerably more likely
than the public at large to identify as both Democratic and liberal (Pew Research Center,
2009), which may be one reason for lower trust in scientists among Americans on the
right. Highly outspoken atheistic scientists, such as Richard Dawkins, no doubt further
distance scientists from religious conservatives in particular (Nisbet, 2010). As has been discussed in the section on government influences on science, the fact that much scientif­
ic research today is used to bolster arguments for government regulation is likely another
reason why Americans on the right are more likely to distrust scientists than those on the
left (Blank & Shaw, 2015; Douglas, 2015). Finally, lower trust in scientists among Ameri­
cans on the right certainly is also driven by mistrust of scientists by Republican and con­
servative elites. It is well accepted in political science that the beliefs and attitudes of po­
litically attentive and partisan (and/or ideological) citizens are influenced to a substantial
degree by debate among elites (see Zaller, 1992). Thus, doubt in mainstream scientific re­
search expressed by the George W. Bush administration and, continuing today, by outspo­
ken Republican members of Congress (as previously discussed) likely trickles down to the
public.

While trust in scientists varies throughout the population, individuals may find them­
selves skeptical of a specific scientific claim for reasons other than their overall trust in
scientists. The reasons for skepticism about specific claims mirror those already dis­
cussed with respect to trust in scientists generally. First, the social identity (Wynne, 1992)
and the perceived interests (Lupia, 2013) of the particular source of the information and
the particular communicator matter. Second, the specific content of the information being
communicated matters as well. In the United States, whether a person who is conserva­
tive or liberal will accept a scientific argument depends in part on whether that argument
is value-congruent or value-incongruent (Kraft, Lodge, & Taber, 2015; Nisbet, Cooper, &
Garrett, 2015). While it is conservatives who are more likely than others to resist climate
change findings, it is liberals who are less likely than others to accept scientific findings
related to the safety of fracking or nuclear waste disposal (Nisbet, Cooper, & Garrett,
2015).

Social scientists have summed up this tendency—overreliance on social identities, values, and interests in the acceptance of information—with the label “biased assimilation” (see,
e.g., Kahan, 2011; Garretson & Suhay, 2016). Kahan and colleagues have developed a
more specific version of this theory called cultural cognition, whereby individuals are mo­
tivated reasoners in order to protect values—particularly preferences for hierarchy vs.
egalitarianism, and for individualism vs. communitarianism—that are tightly bound up
with group identities (Kahan, 2011; Kahan et al., 2012).

Note, however, that to reject an unappealing scientific claim is not to reject science
wholesale. Even among those on the right, trust in science outweighs distrust (Blank &
Shaw, 2015). As Shapin argues, “the problem today is not antiscience but a contest for
the proper winner of the designation ‘science’” (2008, p. 439). Thus, those who reject sci­
entific knowledge on a particular topic tend to seek out alternate claims that appear sci­
entific rather than retreat into mysticism or uncertainty. In parallel to biased assimilation,
the act of searching for information to bolster preexisting views can be labeled as biased
search (see Kahan, 2011).4 Biased search, in conjunction with an overall respect for scien­
tific knowledge, explains the enormous public attention given to the handful of climate
change doubters who are scientists (Oreskes & Conway, 2010). Biased search in this con­
text also explains the growth of bodies of questionable knowledge that wear the mantle of science, such as “Intelligent Design” (see Nisbet, 2010). With this in mind, concern over a
general lack of trust in science among the public seems largely misplaced. Rather, the rel­
evant problem would seem to be that public trust in scientific claims is often overly con­
tingent on a person’s social identities, values, and interests.

Finally, while the various forms of motivated cognition influence public understanding of
science to a significant extent, it is important to recognize that most scientific beliefs
among the public have been influenced little by such biases. Motivated cognition is most
likely when a person finds him- or herself in a highly partisan environment or when a specific
topic has become politicized (Kahan, 2012; Lupia, 2013). Most scientific knowledge, such
as how photosynthesis or radar works, carries few political implications and, thus, is not
met with bias. It is also worth noting that politically motivated cognition tends to be
greater among those who are most educated and attentive to media (Gauchat, 2012; Ka­
han et al., 2012; Lodge & Taber, 2013). Such individuals are simply exposed to more
politicized information and are better able to recognize the political implications of that
information.

Politics and Public Influence on Science

Criticisms of the previously described “deficit model” (which, again, problematizes low levels of scientific knowledge in the public and seeks to raise them) have led to an interest in reorienting scientific communication around greater interaction with the public.
Two key themes have emerged under this umbrella: first, the influence of lay expertise on
scientific knowledge; second, the influence of citizens’ interests on the scientific agenda.
In both instances, many scholars argue that these influences are generally positive
(Brossard & Lewenstein, 2010). This section focuses on public influences on the scientific
agenda, given their more overtly political character.5

To some, the fact that nonscientists can influence scientists’ work may seem far-fetched,
but there are many clear examples of this phenomenon. Public involvement in science no­
ticeably increased in the United States in the 1970s. Given a confluence of increased gov­
ernment involvement in science, growing awareness of public risks created by science
and technology (e.g., environmental problems, drug side-effects) (Nelkin, 1995, p. 445),
and a general social milieu that encouraged citizen action, this timing likely was not coin­
cidental. In some cases, scientists themselves initiated and encouraged public involve­
ment in science, as in the case of the “Science for the People” movement (Beckwith,
2002; Moore, 2008; Nelkin, 1995). Increasing citizen activism has been observed outside
the United States as well. New social movements (NSMs) seeking to influence science
and technology have sprung up in many countries in recent decades, and many national
and international institutions now emphasize the importance of citizen involvement in sci­
ence and technology (Bucchi & Neresini, 2008).

What do such citizens seek to accomplish? In many cases, nonscientists wish to influence
the scientific agenda, directing the object of scientists’ inquiry to perceived pressing
problems. Bucchi and Neresini (2008) describe the successful lobbying efforts of the French Muscular Dystrophy Association (AFM). Muscular dystrophy was largely ignored by scientists until the AFM took it upon itself to collect clinical and genetic data on
those suffering from the disease, both subsidizing the cost of research and establishing
the disease as a legitimate subject of study (Bucchi & Neresini, 2008, p. 453). Similar lev­
els of intense interaction between scientists and the public were observed in the early
years of the AIDS epidemic, with many in the gay community in particular pressing for at­
tention by government and scientists to help fight the disease (Bucchi & Neresini, 2008;
Gould, 2009).

Not all such interest-group activity is oriented toward public health, however. Conserva­
tive and libertarian think tanks have particularly flourished in the wake of the growth of
government regulation of industry. Their ranks include science-focused entities, such as
the George C. Marshall Institute (Kevles, 2006; Oreskes & Conway, 2010). Much of the
funding for such think tanks has come from industry, given its financial interest in reduc­
ing government regulation. Both think tanks and industry directly fund much research in­
tended to influence public debate. Oreskes and Conway (2010) discuss in detail the politi­
cal goals of investments in scientific research by the tobacco and the energy industries.
In short, by only funding scientific research that was likely to counter arguments for regu­
lation (research casting doubt on the smoking-cancer link in the former case and on an­
thropogenic climate change in the latter), these industries and their ideological allies suc­
cessfully tilted the pool of knowledge in their favor.

In recent decades, normative theorists have become quite interested in the subject of
public engagement with scientific topics. Many have argued that democratic govern­
ments such as the United States should increase—and better institutionalize—considera­
tion of citizens’ perspectives when setting scientific agendas (Brown, 2006; Guston, 2013;
Kitcher, 2001). Science—particularly that sponsored by government—is supposed to be
carried out for the public's benefit, after all, and who better to articulate its interests than the public itself?

There appear to be, however, two key challenges to this goal. First, while the deficit mod­
el may have fallen out of favor, there remains the reality of relatively low levels of scientif­
ic literacy. How can a wide range of citizens help to set a scientific agenda when so many
do not understand the scientific process well or even grasp what is scientifically feasible at a given moment in time? Second, as Schattschneider recognized decades ago, “[t]he flaw
in the pluralist heaven is that the heavenly chorus sings with a strong upper-class ac­
cent” (1960, p. 35). In other words, even putting aside industry-funded interest groups,
those with high levels of wealth and education are considerably more likely than others to
participate in the political arena. One possible way of answering both of these challenges
is to borrow the method of deliberative polls, in which a random sample of citizens is
brought together for several days of education and discussion on a topic (see Fishkin &
Luskin, 2005). Increasing informed, representative participation in scientific agenda setting in this manner would be a resource-intensive proposition; however, the goal is admirable enough that it may be worth the cost.

Conclusion
In discussing the ways in which political concerns among government officials, scientists,
journalists, and the public influence scientific knowledge, this article has touched on a va­
riety of topics. These include: the government’s current and historical role as a funder,
manager, and consumer of scientific knowledge; how the personal interests and ideolo­
gies of scientists influence their research; the susceptibility of scientific communication
to politicization and the concomitant political impact on audiences; the role of the public’s
political values, identities, and interests in their understanding of science; and, finally, the
role of the public, mainly through interest groups and think tanks, in shaping the produc­
tion and public discussion of scientific knowledge.

Given that scientific findings heavily influence many types of decisions, including collec­
tive decision-making via government, we should not be surprised by this variety of influ­
ences. However, should we be concerned?

In response to this important normative question, it is worth reiterating a few key points. To begin, political influences on the production of scientific knowledge are not
thought to be problematic—and, indeed, are often welcome—to the extent that they (a)
reflect public concerns and (b) influence the scientific agenda. Societal values and inter­
ests may also safely influence the doing of science (i.e., the creation of scientific knowl­
edge), so long as their role is indirect and related to evaluations of whether sufficient evi­
dence has been obtained to communicate a conclusion or recommend societal action to
combat a risk (Douglas, 2009). However, most other political influences are indeed detri­
mental, particularly where political preferences and identities directly influence scientific
conclusions, their communication, or their acceptance by nonscientists. Collectively, we
must do a better job separating our policy preferences and associated political and social
identities from our factual beliefs. How we do this given present levels of political polar­
ization and the politicization and fragmentation of the media is less than certain. But a
shared, accurate understanding of the world is too important to allow the status quo to
prevail.

Acknowledgments
My sincere thanks to Shawn Janzen, who provided helpful research assistance, as well as
to participants in the 2015 “Understanding Science Denialism” workshop at Wake Forest
University, especially Heather Douglas and organizer Adrian Bardon.

References
Achenbach, J. (2015). The age of disbelief. National Geographic, 227(3), 31–47.

Bailey, J. M., Vasey, P. L., Diamond, L. M., Breedlove, S. M., Vilain, E., & Epprecht, M.
(2016). Sexual orientation, controversy, and science. Psychological Science in the Public
Interest, 17(2), 45–101.

Barker, G., & Kitcher, P. (2014). Philosophy of science: A new introduction. New York: Oxford University Press.

Beckwith, J. (2002). Making genes, making waves: A social activist in science. Cambridge,
MA: Harvard University Press.

Bimber, B., & Guston, D. H. (1995). Politics by the same means. In S. Jasanoff, G. E.
Markle, J. C. Petersen, & T. Pinch (Eds.), Handbook of science and technology studies (pp.
554–571). Thousand Oaks, CA: SAGE.

Black, E. (2003). War against the weak: Eugenics and America’s campaign to create a
master race. New York: Four Walls Eight Windows.

Blank, J., & Shaw, D. (2015). Does partisanship shape attitudes toward science and public
policy? The case for ideology and religion. The ANNALS of the American Academy of Po­
litical and Social Science, 658(March), 18–35.

Bolsen, T., & Druckman, J. N. (2015). Counteracting the politicization of science. Journal
of Communication, 65, 745–769.

Brewer, P. R. (2012). Polarisation in the USA: Climate change, party politics, and public
opinion in the Obama era. European Political Science, 11(1), 7–17.

Brossard, D., & Lewenstein, B. V. (2010). A critical appraisal of models of public under­
standing of science. In L.-A. Kahlor & P. A. Stout (Eds.), Communicating science: New
agendas in communication (pp. 11–39). New York: Routledge.

Brown, M. B. (2006). Ethics, politics, and the public: Shaping the research agenda. In D.
H. Guston & D. Sarewitz (Eds.), Shaping science and policy: The next generation of re­
search (pp. 10–32). Madison: University of Wisconsin Press.

Brown, M. B. (2015). Politicizing science: Conceptions of politics in science and technology studies. Social Studies of Science, 45(1), 3–30.

Bucchi, M., & Neresini, F. (2008). Science and public participation. In E. J. Hackett, O.
Amsterdamska, M. Lynch, & J. Wajcman (Eds.), The handbook of science and technology
studies (3d ed., pp. 449–472). Cambridge, MA: MIT Press.

Bush, V. (1945). Science: The endless frontier: Report to the president on a program for
postwar scientific research. Ann Arbor: University of Michigan Library.

Coburn, T. A. (2011). The National Science Foundation: Under the microscope. Washing­
ton, DC: Senator Tom Coburn.

Douglas, H. E. (2009). Science, policy, and the value-free ideal. Pittsburgh: University of Pittsburgh Press.

Douglas, H. E. (2015). Untangling values, ideologies, and reasons. The ANNALS of the
American Academy of Political and Social Science, 658(March), 296–306.

Dreger, A. (2016). Galileo’s middle finger: Heretics, activists, and one scholar’s search for
justice. New York: Penguin.

Drutman, L. (2015). The business of America is lobbying: How corporations became politi­
cized and politics became more corporate. New York: Oxford University Press.

Feldman, L., Maibach, E. W., Roser-Renouf, C., & Leiserowitz, A. (2012). Climate on cable:
The nature and impact of global warming coverage on Fox News, CNN, and MSNBC. The
International Journal of Press/Politics, 17(1), 3–31.

Fishkin, J. S., & Luskin, R. C. (2005). Experimenting with a democratic ideal: Deliberative
polling and public opinion. Acta Politica, 40, 284–298.

Fowler, E. F., & Gollust, S. E. (2015). The content and effect of politicized health contro­
versies. The ANNALS of the American Academy of Political and Social Science,
658(March), 155–171.

Fuels America. (2015). President Obama’s choice. Television and website ad.

Garretson, J., & Suhay, E. (2016). Scientific communication about biological influences on
homosexuality and the politics of gay rights. Political Research Quarterly, 69(1), 17–29.

Gauchat, G. (2012). Politicization of science in the public sphere: A study of public trust in the United States, 1974 to 2010. American Sociological Review, 77(2), 167–187.

Gould, D. B. (2009). Moving politics: Emotion and ACT UP’s fight against AIDS. Chicago: University of Chicago Press.

Greenberg, D. S. (2001). Science, money, and politics: Political triumph and ethical ero­
sion. Chicago: University of Chicago Press.

Groshek, J., & Bronda, S. (2016, June 30). How social media can distort and misin­
form when communicating science. The Conversation.

Guston, D. (2000). Between politics and science: Assuring the integrity and productivity
of research. New York: Cambridge University Press.

Guston, D. (2013). Democratizing science: Ends, means, outcomes. In G. P. Zachary (Ed.), The rightful place of science: Politics (pp. 39–47). Tempe, AZ: Consortium for Science, Policy and Outcomes.

Hmielowski, J. D., Feldman, L., Myers, T. A., Leiserowitz, A., & Maibach, E. (2014). An at­
tack on science? Media use, trust in scientists, and perceptions of global warming. Public
Understanding of Science, 23(7), 866–883.

Hochschild, J., & Sen, M. (2015). Technology optimism or pessimism about genomic sci­
ence: Variation among experts and scholarly disciplines. The ANNALS of the American
Academy of Political and Social Science, 658(March), 236–252.

Hume, D. (2000). A treatise of human nature (D. F. Norton & M. J. Norton, Eds.). New York: Oxford University Press.

Jasanoff, S. S. (1987). Contested boundaries in policy-relevant science. Social Studies of Science, 17(2), 195–230.

Jasanoff, S. S. (1990). The fifth branch: Science advisers as policymakers. Cambridge, MA: Harvard University Press.

Jasanoff, S. S. (2012). Science and public reason. New York: Oxford University Press.

Kahan, D. M. (2011). Foreword: Neutral principles, motivated cognition, and some prob­
lems for constitutional law. Harvard Law Review, 125(1), 1–77.

Kahan, D. M. (2012). Why we are poles apart on climate change. Nature, 488(7411), 255.

Kahan, D. M., Hoffman, D. A., Evans, D., Devins, N., Lucci, E. A., & Cheng, K. (2016). “Ideology” or “situation sense”? An experimental investigation of motivated reasoning and professional judgment. University of Pennsylvania Law Review, 164.

Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G.
(2012). The polarizing impact of science literacy and numeracy on perceived climate
change risks. Nature Climate Change, 2, 732–735.

Kevles, D. J. (1985). In the name of eugenics: Genetics and the uses of human heredity.
Cambridge, MA: Harvard University Press.

Kevles, D. J. (2006). What’s new about the politics of science? Social Research, 73(3),
761–778.

Kitcher, P. (2001). Science, truth, and democracy. New York: Oxford University Press.

Kolbert, E. (2015, May 6). The G.O.P.’s war on science gets worse. The New Yorker.

Kraft, P. W., Lodge, M., & Taber, C. S. (2015). Why people “don’t trust the evidence”: Moti­
vated reasoning and scientific beliefs. The ANNALS of the American Academy of Political
and Social Science, 658(March), 121–133.

Krimsky, S. (2013). Evolving narratives of genetic explanation across disciplines. In S. Krimsky & J. Gruber (Eds.), Genetic explanations: Sense and nonsense. Cambridge, MA: Harvard University Press.

Kühl, S. (1994). The Nazi connection: Eugenics, American racism, and German national
socialism. New York: Oxford University Press.

Latour, B. (1986). Science in action. New York: Open University Press.

Levendusky, M. (2013). How partisan media polarize America. Chicago: University of Chicago Press.

Lewandowsky, S., Gignac, G. E., & Oberauer, K. (2013). The role of conspiracist ideation
and worldviews in predicting rejection of science. PLoS ONE, 8(10).

Lewenstein, B. V. (1995). Science and the media. In S. Jasanoff, G. E. Markle, J. C. Petersen, & T. Pinch (Eds.), Handbook of science and technology studies (pp. 343–360). Thousand Oaks, CA: SAGE.

Lodge, M., & Taber, C. S. (2013). The rationalizing voter. New York: Cambridge Universi­
ty Press.

Lowi, T. J., Ginsberg, B., Shepsle, K. A., & Ansolabehere, S. (2014). American government: Power and purpose (13th core ed.). New York: W. W. Norton.

Lupia, A. (2013). Communicating science in politicized environments. Proceedings of the National Academy of Sciences of the United States of America, 110(Suppl. 3), 14048–14054.

McIntyre, L. (2015). Respecting truth: Willful ignorance in the Internet age. New York:
Routledge.

Medawar, P. B. (1979). Advice to a young scientist. New York: Basic Books.

Miller, J. D. (1983). Scientific literacy: A conceptual and empirical review. Daedalus, 112(2), 29–48.

Mooney, C. (2005). The Republican war on science. New York: Basic Books.

Moore, G. E. (2004). Principia ethica. Mineola, NY: Dover. Originally published in 1903.

Moore, K. (2008). Disrupting science: Social movements, American scientists, and the pol­
itics of the military, 1945–1975. Princeton, NJ: Princeton University Press.

National Institutes of Health. (2015). Budget. U.S. Department of Health & Human Ser­
vices.

Nelkin, D. (1995). Science controversies. In S. Jasanoff, G. E. Markle, J. C. Petersen, & T. Pinch (Eds.), Handbook of science and technology studies (pp. 444–456). Thousand Oaks, CA: SAGE.

Nisbet, E., Cooper, K. E., & Garrett, R. K. (2015). The partisan brain: How dissonant sci­
ence messages lead conservatives and liberals to (dis)trust science. The ANNALS of the
American Academy of Political and Social Science, 658(March), 36–66.

Nisbet, M. C. (2010). Framing science: A new paradigm in public engagement. In L. A. Kahlor & P. A. Stout (Eds.), Communicating science: New agendas in communication (pp. 40–67). New York: Routledge.

Nisbet, M. C., & Fahy, D. (2015). The need for knowledge-based journalism in politicized
science debates. The ANNALS of the American Academy of Political and Social Science,
658(March), 223–234.

Nisbet, M. C., & Huge, M. (2006). Attention cycles and frames in the plant biotechnology
debate. The Harvard International Journal of Press/Politics, 11(2), 3–40.

Noel, H. (2014). Political ideologies and political parties in America. New York: Cam­
bridge University Press.

Nyhan, B. (2014, May 8). Vaccine opponents can be immune to education. The up­
shot. The New York Times.

Oreskes, N., & Conway, E. M. (2010). Merchants of doubt: How a handful of scientists ob­
scured the truth on issues from tobacco smoke to global warming. New York: Bloomsbury
Press.

Paul, D. B. (1998). The politics of heredity: Essays on eugenics, biomedicine, and the na­
ture-nurture debate. Albany, NY: SUNY Press.

Pew Research Center. (2009, July 9). Scientific achievements less prominent than a decade ago: Public praises science; scientists fault public, media. Washington, DC: Pew Research Center for the People & the Press.

Pew Research Center. (2015, January 29). Public and scientists express strikingly dif­
ferent views about science-related issues. Washington, DC: Pew Research Center.

Pielke, R. A., Jr. (2007). The honest broker: Making sense of science in policy and politics. New York: Cambridge University Press.

Pitman, G. E. (2011). Backdrop: The politics and personalities behind sexual orientation
research. Sacramento, CA: Active Voice Press.

Provine, W. B. (1973). Geneticists and the biology of race crossing. Science, 182(4114),
790–796.

Rein, L. (2015a, November 6). Congressman demands climate study documents as scientists warn of “chilling effect.” The Washington Post.

Rein, L. (2015b, November 24). NOAA chief tells lawmaker: No one will “coerce the scientists who work for me.” The Washington Post.

Rokeach, M. (1973). The nature of human values. New York: Free Press.

Sarewitz, D. (2013). Making science policy matter for a use-inspired society. In M. Crow,
R. Frodeman, D. Guston, C. Mitcham, D. Sarewitz, & G. P. Zachary (Eds.), The rightful
place of science: Politics (pp. 9–26). Tempe, AZ: Consortium for Science, Policy, and Out­
comes.

Schattschneider, E. E. (1960). The semisovereign people: A realist’s view of democracy in America. New York: Holt, Rinehart and Winston.

Scheufele, D. A. (2014). Science communication as political communication. Proceedings of the National Academy of Sciences, 111(Suppl. 4), 13585–13592.

Segerstrale, U. (2000). Defenders of the truth: The battle for science in the sociobiology
debate and beyond. New York: Oxford University Press.

Shapin, S. (2008). Science and the modern world. In E. J. Hackett, O. Amsterdamska, M. Lynch, & J. Wajcman (Eds.), The handbook of science and technology studies (3d ed., pp. 433–448). Cambridge, MA: MIT Press.

Sides, J. (2015, June 10). Why Congress should not cut funding to the social sciences. Monkey Cage, The Washington Post.

Stroud, N. J. (2011). Niche news: The politics of news choice. New York: Oxford Universi­
ty Press.

Sturgis, P., & Allum, N. (2004). Science in society: Re-evaluating the deficit model of pub­
lic attitudes. Public Understanding of Science, 13(1), 55–74.

Suhay, E., & Druckman, J. N. (2015). The politics of science: Political values and the pro­
duction, communication, and reception of scientific knowledge. In E. Suhay & J. N. Druck­
man (Eds.), The ANNALS of the American Academy of Political and Social Science,
658(March), 6–15.

Suhay, E., & Jayaratne, T. E. (2013). Does biology justify ideology? The politics of genetic
attribution. Public Opinion Quarterly, 77(2), 497–521.

Union of Concerned Scientists. (2004). Scientific integrity in policymaking: An investigation into the Bush administration’s misuse of science. Cambridge, MA.

Union of Concerned Scientists. (2005). Scientific integrity in policy making: Further investigation of the Bush administration’s misuse of science. Cambridge, MA.

Vanden Heuvel, K. (2011, October 25). The Republicans’ war on science and reason.
The Washington Post.

Walters, S. D. (2014). The tolerance trap: How God, genes, and good intentions are sabo­
taging gay equality. New York: NYU Press.

Weigold, M. F. (2001). Communicating science: A review of the literature. Science Communication, 23(2), 164–193.

Wilson, E. O. (1975). Sociobiology: The new synthesis. Cambridge, MA: Harvard Universi­
ty Press.

Wynne, B. (1992). Misunderstood misunderstandings: Social identities and public uptake of science. Public Understanding of Science, 1(3), 281–304.

Wynne, B. (1995). Public understanding of science. In S. Jasanoff, G. E. Markle, J. C. Petersen, & T. Pinch (Eds.), Handbook of science and technology studies (pp. 361–388). Thousand Oaks, CA: SAGE.

Zajko, M. (2011). The shifting politics of climate science. Society, 48(6), 457–461.

Zaller, J. R. (1992). The nature and origins of mass opinion. New York: Cambridge Univer­
sity Press.

Ziman, J. (1992). Not knowing, needing to know, and wanting to know. In B. V. Lewenstein
(Ed.), When science meets the public (pp. 13–20). Washington, DC: American Association
for the Advancement of Science.

Notes:

(1.) Yet another way government influences scientific knowledge is through “boundary or­
ganizations,” established during the 1980s and 1990s, as a method of ensuring scientific
integrity and encouraging scientific contributions to economic growth (Guston, 2000). Ex­
amples include the National Institutes of Health’s Office of Research Integrity and Office
of Technology Transfer and the National Science Foundation’s Office of Inspector Gener­
al. These organizations largely fall outside the purview of this essay, given their greater
focus on epistemic and economic values rather than politically oriented ones.

(2.) For those unfamiliar with the term, “epistemic values” make up a special category of
values accepted by a given scientific community as aiding scientists’ decision-making as
they carry out their research. The term is admittedly somewhat vague, as pointed out by
Douglas (2009).

(3.) This article does not take up the relatively new subject of science communication by
lay people on social media. Early assessments paint a pessimistic portrait of such commu­
nication, suggesting it tends to misrepresent scientific findings and is highly politicized.
See Groshek and Bronda (2016) for a brief overview.

(4.) “Biased assimilation” and “biased search” are both forms of “motivated cognition,” a
concept introduced in the section “The Influence of Political Values on Scientists at
Work.”

(5.) For an excellent example of how sometimes lay knowledge is superior to expert
knowledge, see Wynne (1992). He describes the interactions between government scien­
tists and farmers in northern England after the Chernobyl accident. Scientists repeatedly
made mistaken recommendations to farmers based on false knowledge of local conditions
and, unfortunately, resisted farmers’ efforts to correct them.

Elizabeth Suhay

Department of Government, American University
