Swiss Political Science Review 27(1): 170–179 doi:10.1111/spsr.12439

Debate

Artificial Intelligence, Forward-Looking Governance and the Future of Security

SOPHIE-CHARLOTTE FISCHER AND ANDREAS WENGER
Center for Security Studies, ETH Zürich

Abstract: Over the last years, AI applications have come to play a role in many security-related
fields. In this paper, we show that scholars who want to study AI’s link to power and security
should widen their perspective to include conceptual approaches from science and technology
studies (STS). This way, scholars can pay attention to critical dynamics, processes, practices, and
non-traditional actors in AI politics and governance. We introduce two STS-inspired concepts –
the micro-politics of design and development and co-production – and show how the study of AI
and security could benefit from them. In the final section, we turn to the study of AI in the context
of Switzerland to underscore what aspects the two previously introduced concepts help to highlight
that remain invisible to traditional approaches.

Zusammenfassung: In den letzten Jahren ist künstliche Intelligenz (KI) in immer mehr sicherheitsrelevanten Bereichen zur Anwendung gekommen. In diesem Artikel argumentieren wir, dass Forscher, die die Verbindung von KI zu Sicherheit und Macht untersuchen wollen, ihre Perspektive um konzeptionelle Ansätze aus den Science and Technology Studies (STS) erweitern sollten. Auf diese Weise können Wissenschaftler kritische Dynamiken, Prozesse, Praktiken und nicht-traditionelle Akteure hinsichtlich der Politik und Governance von KI beleuchten. Wir stellen zwei von den STS inspirierte Konzepte vor – micropolitics of design und co-production – und zeigen, wie die Forschung an der Schnittstelle von KI und Sicherheit von ihnen profitieren könnte. Zuletzt untersuchen wir KI im Kontext der Schweiz, um zu veranschaulichen, welche Aspekte die beiden vorgestellten Konzepte hervorheben können, die für traditionelle Ansätze unsichtbar bleiben.

Résumé : Au cours des dernières années, les applications d'intelligence artificielle (IA) ont fini par jouer un rôle dans de nombreux domaines liés à la sécurité. Dans cet article, nous montrons que les chercheurs qui souhaitent étudier le lien de l'IA avec le pouvoir et la sécurité devraient élargir leur perspective en incluant des approches conceptuelles issues des études des sciences et technologies (STS). De cette façon, les chercheurs peuvent accorder de l'attention aux dynamiques critiques, aux processus, aux pratiques et aux acteurs non traditionnels de la politique et de la gouvernance de l'IA. Nous présentons deux concepts inspirés des STS – micro-politics of design et co-production – et montrons comment l'étude de l'IA et de la sécurité pourrait en bénéficier. Dans la dernière section, nous allons aborder l'étude de l'IA dans le contexte de la Suisse et déterminer les aspects des deux concepts présentés précédemment qui restent invisibles aux approches traditionnelles.

© 2021 Swiss Political Science Association


16626370, 2021, 1, Downloaded from https://onlinelibrary.wiley.com/doi/10.1111/spsr.12439 by Cochrane Mexico, Wiley Online Library on [19/04/2023]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
AI and the Future of Security 171

KEYWORDS: Artificial Intelligence, Security, Governance, Technology

Introduction
Over the past 15 years, a combination of three factors – a rapid increase in computing
power, a vast increase in available data, and the optimization of algorithms – has brought about a
new wave of progress in artificial intelligence (AI) research, especially in the subfield of
machine learning (ML) and its subset deep learning. Since then, diverse AI applications
developed in company and university laboratories have made their way into everyday
practical use (Bughin et al. 2017).1 Unsurprisingly, AI applications have also come to play
a role in many security-related fields, including intelligence, defense and military policy,
foreign security policy (arms control), and internal security (state security, police and
border protection, disaster management, and the protection of critical infrastructures).
However, understanding how AI interacts with national and global security, now and in
the future, is far from straightforward.
Although certain military organizations had already shown an interest in AI during the
Cold War (Roland and Shiman 2002), a broader link between AI and international relations and
security politics emerged only after the publication of three strategic reports on AI by the
U.S. government in 2016 (Allen and Kania 2017). These reports highlighted the wide-ranging
potential of AI across domains and subsequently motivated many governments to
assess their own AI capabilities, leading 29 countries to develop national strategies in this
field so far. While the emphases of these strategies differ, they share the desire to create
the best possible conditions for their states to benefit from recent progress made in AI
(Groth et al. 2019). Overall, the ascribed revolutionary character of AI and its widespread
application are expected to drive economic growth. Yet at the same time AI technologies
are also increasingly securitized and hence posited as having an impact on national and
international security matters.
Because of the linkages made by state actors between these emergent technologies and
power politics, AI is fast becoming an interesting field to study for security scholars as
well. However, the academic discourse on AI in International Relations (IR) reflects the
state of the policy discourse – it is still in its infancy (Horowitz 2020). While there are
some neo-realist inspired articles targeting the systemic power-altering dimension of AI,
the scholarly community has yet to pay sufficient attention to the dynamic and emergent
character of the technology. Most importantly, the traditional treatment of technology as
an exogenous and “black-boxed” variable does not provide the field with all the necessary
analytical tools, especially given that global technology firms and research institutions are
the actors who currently shape the design and development of AI.
In this paper, we show that scholars who want to study AI’s link to power and security
should widen their perspective to include conceptual approaches from science and
technology studies (STS). This way, scholars can pay attention to critical dynamics,

1 For the purpose of this contribution, we adopt a frequently used definition of AI, which describes it as the
ability of a system to undertake tasks that would ordinarily require human intelligence, such as learning,
planning, and the ability to generalize. However, experts distinguish between narrow and general types of AI.
Narrow implies that AI can perform specific tasks only, such as translation from one language to another.
General AI would have the same cognitive powers as the human mind and would be able to solve a variety of
tasks (LeCun et al. 2015). To date, all existing AI applications are narrow, but some future-oriented
research and discussions in the media also address general AI.


processes, practices, and non-traditional actors in AI politics and governance. As AI takes
on an ever more prominent place on the international policy agenda, and as IR scholars
increasingly find themselves in the role of policy advisors, their engagement with STS
concepts might also lead to a more nuanced policy discourse on the foreign policy and
security implications of AI. Most crucially, STS helps to highlight that technologies and
their use are always subject to economic, social, and political decisions and processes. This
also means that no aspect of their development or use is inevitable. Rather, and even more
so with a technology that is still dynamically evolving, scholars need to attune themselves
to study the messy interaction between micro-politics and macro-politics.
This contribution offers a conceptual sketch and constitutes a plea for more research on
AI and international security from an STS perspective. We proceed in three steps. First,
we briefly sketch out the state of the international policy debate on AI and survey the
existing academic literature in IR and security studies. In a second step, we introduce two
STS-inspired concepts – the micro-politics of design and development and co-production –
and show how the study of AI could benefit from them. In the final section, we turn to
the study of AI in the context of Switzerland to underscore what interesting aspects the
two previously introduced concepts help to highlight that remain invisible to traditional
approaches. In this part, we also point to avenues for future research and suggest possible
empirical phenomena to study.

Deterministic Views of Technology in Politics and Academia


The current debate on AI in politics and academia focuses on two broad and related
topics: AI politics and AI governance. The discussion on AI politics that is related to
security questions looks at how states are increasingly treating AI as a strategic resource
because they anticipate a significant impact on the global distribution of economic,
military, and political power. This view has led many states to invest in the development
of their national AI resources, and some states to consider restricting cross-border flows of
AI technologies and related knowledge in order to gain an advantage over their
competitors. So far, AI competition has emerged especially between the U.S. and China,
which are the global leaders in AI development and the host states of the most potent AI
companies. However, the current unequal distribution of AI resources has also led some to
fear an increase in existing economic inequalities and a dependence of states with weaker
AI resources on stronger states (Fischer and Wenger 2019).
Following these policy debates, academic interest in AI politics has increased as well, yet
few theoretically informed articles have been published on the subject so far. The majority
of research in the field reflects realist IR and traditional strategic studies perspectives, as
most contributions focus on the impact of AI on military capabilities and the global
balance of power (e.g. Jensen et al. 2019; Horowitz 2019; Haas and Fischer 2017; Ayoub
and Payne 2015). Such realist approaches tend to conceptualize AI technologies as
capability-enhancing, or at least capability-altering, tools and as potential game changers
for states’ military might. Most of the literature is American in origin and therefore
focuses on American political interests.
Similar to the policy discourse and academic work on AI politics, that on AI
governance is also still in its infancy. The lack of work in this field reflects an underlying
disagreement among states about relevant norms and institutions at the level of
international politics, on the one hand, and the related dearth of empirical material
available to researchers at the academic level, on the other. The most advanced policy


discussion on AI governance at the international level is on the regulation of lethal
autonomous weapons systems (LAWS), which was initiated in 2014. However, state
parties disagree about what form such regulation should take. At the same time, broader
discussions on the international governance of AI are still emerging. State actors, non-state
actors, and international organizations are starting to produce an increasing number of
initiatives establishing principles and guidelines to guide the development
and/or application of AI technologies (Jobin et al. 2019).
Just like the literature focusing on AI politics, the body of literature on AI governance
is still narrow. Rather than being influenced by neo-realism, however, certain contributions
follow a liberal mindset. Given the potential stakes at play and the LAWS debate in the
policy world, the academic literature so far has focused primarily on the regulation of
AI-enabled autonomy in weapons systems. However, the take of some of these contributions
on technology is deterministic as well: the concomitant literature tends to portray arms
control as reactive to future technological development (e.g. Kralingen 2016; Krishnan
2009). In other words, the development trajectory towards LAWS is taken for granted, and
LAWS are therefore conceived as objects that need to be preventively regulated.2
The technological determinism that is reflected in some of the policy discussions and
academic contributions is puzzling, because all the discussions about AI evolve in the
shadow of the future. At present, there are considerable uncertainties regarding the
trajectory of the technological development (Dafoe 2018). Competing visions about the
future emanating from this uncertainty challenge a determinist understanding about the
political implications of AI. Moreover, the character of AI technologies raises pertinent
questions about the relationship between technology and politics. Although only narrow
AI applications currently exist, scholars have already underscored the need to reconsider
our conception of human agency given that AI can master an increasing number of tasks
that previously required human intelligence (Hoijtink and Leese 2019). As AI further
evolves, these aspects will gain in salience. The character of AI challenges preconceptions
about the agency of humans and technology alike and requires a more critical assessment
about the interplay between AI technologies, society and security politics, potentially even
shaping how security is viewed and lived in the future.

AI Politics and Governance as an Emergent Security Practice: Technological Possibilities and Political Choices
Over the last few years, scholars have increasingly pointed to the value of STS for
understanding technological change and its implications for world politics (McCarthy
2018a). At the core of STS lies the assumption that science and technology are inseparable
from social structures and practice, and therefore from social power (Brey 2007). STS
scholars study how society and politics shape technology and how technology affects
society and politics in turn. Therefore, they reject both the notion that technology is an
exogenous variable that determines social, political, and cultural outcomes and the view
that technology is simply socially constructed (Mayer et al. 2014).
We posit that the next step in the study of AI in relation to IR and global security
should aim to highlight the co-constitution of international relations and technology as a
process of the micro-politics of design and development and co-production. Going in this
direction in the future will allow us to study the emergence of new socio-technical

2 There are some notable exceptions to this, including Bode and Huelss (2018).


practices from a conceptual perspective that emphasizes the social and political
contingency of the AI development trajectory. Most importantly, this will bring into focus
choices made in different phases of technological development and will help scholars gain
a better understanding of how the micro-politics of private companies and universities and
the global macro-politics of states are closely intertwined. Such a research agenda could
help to bridge the gap between STS scholarship, with its strong focus on idiosyncratic
micro-processes, on the one hand, and IR scholarship seeking to understand political
processes at a higher level of abstraction, on the other (McCarthy 2018b: 236–238).
Certainly, the utility of these approaches is not unique to the study of AI. However,
given the wide-ranging security applications of AI and the complexity and opacity of the
technology, we argue that there is great value in applying an STS lens to this intersection
as it could reveal dynamics and processes that we would not be able to observe otherwise.
In what follows, we briefly sketch the contours of micro-politics of design and development
and co-production and suggest possible applications to study the intersection of AI and
security before turning to the case of Switzerland in the following section.

The Micro-Politics of Design and Development


The micro-politics of design and development directs scholars’ attention towards the process
by which a technology is designed, including those who conceptualize, develop, and
assemble the technology. The development of technology is inherently political, as all
stages of the design process and all the people involved are carriers of certain norms,
assumptions, and ideas, all of which flow into the technology. In fact, such a conceptual
perspective helps us to account for the fact that the practices and implicit norms that enter
a new technological design can influence later political decisions. Conversely, this insight
also means that bottom-up governance processes focused on the design of technologies can
complement top-down inter-governmental processes and are equally important in
regulating new technologies (McCarthy 2018a).
When considering how AI is developed, we see immediately that humans are doing the
work “inside the machine.” For example, machine-learning algorithms are trained using large
amounts of data that reflect the values, assumptions, and biases of the millions of people who
produce the data. Moreover, the people working on an AI project also insert other values
into the process, reflecting, for example, their cultural backgrounds and their gender.
The more people’s lives are affected by decisions made by machines in the future,
the more pronounced the security implications of such applications will become.
For example, previous research has shown that racial bias is inherent in facial recognition
software that is increasingly used by law enforcement agencies (Singer and Metz 2019).
This also highlights an interesting issue of much interest to security scholars: the clash
between different “types” of security that becomes visible at the intersection of
technological development, use, and politics. Whereas heavily technologized and
increasingly automatized security provision is becoming the norm in border security
practices in the name of “national security”, these practices are also directly implicated in
in-security provision when seen from a “human security” perspective (Deibert 2018; Roff 2017).

Co-Production
The idea of co-production has been developed and applied in a range of literatures
including Public Administration, Sustainability Science and STS (Miller and Wyborn,


2018). In STS literature, co-production reflects a rejection of the strict technology/policy


dichotomy of traditional deterministic views of technology. In her seminal work, Sheila
Jasanoff (2004) emphasizes that co-production should be understood as an idiom rather
than as a method. In her definition of co-production, elements and processes in
international politics are inextricably linked with and jointly produced by
technology/science. Thus, technologies are not simply artefacts but processes that reflect
social and political dynamics and decisions. Hence, co-production directs researchers
towards questions of how science, technology, and world politics are co-constitutive
(Jasanoff 2004). As Mayer, Carpes and Knoblich summarize the idea of co-production:
“The co-productive approach sheds light on the emergence, co-production and
stabilization of new things, groups, or practices such as scientific fields, objects, or
technological systems” (2014: 4).
Using such an approach, the study of AI politics and governance can start to look at
how preexisting entities, processes, practices, and actors are affected and transformed by
science and technology and how they in turn adapt to, and thereby shape, science and
technology. An interesting and rather intuitive example to study in the military domain
would be how AI is co-produced by an interplay between market dynamics, technological
uncertainty, political perceptions, and arms race dynamics. Although the potential of AI,
which is primarily driven by market dynamics, is still uncertain, policy makers in various
countries are already treating AI as having significant value for the production of security
and are basing their present actions on predictions about the future. Certain state leaders
are aiming for a potential first-mover advantage against their competitors. Yet competition
in the context of technological uncertainty might encourage the deployment of immature
AI systems and increase the risk of technical accidents with great security implications. As
Paul Scharre (2019) has succinctly put it, "for each country, the real danger is not that it
will fall behind its competitors in AI but that the perception of a race will prompt
everyone to rush to deploy unsafe AI systems."

Switzerland as a Context to Study AI Politics and Governance


In this last section, we want to illustrate the value of the two concepts in the specific context
of AI in Switzerland. However, two caveats are in order. First, the development of AI policy
and politics is in its infancy in Switzerland, and so far Swiss actors have played a rather
passive role in global technology governance efforts. Although Switzerland has an advanced
AI ecosystem featuring leading global companies, startups, and universities, the
government has so far not developed an AI strategy, and there seems to be no common vision
across federal departments and policy fields regarding the technical, economic, social, and
political opportunities and challenges associated with AI. Second, Switzerland is an atypical
case to study given the focus in the existing literature, which centers primarily on great
powers. This is not surprising, given the neo-realist focus on systemic power.
However, what should become clear here is that STS approaches offer a productive lens
for conceptualizing Swiss actors in technology governance and for analyzing how AI
technologies emerge in and interact with political and scientific structures. That there is no
AI strategy can be seen as an excellent opportunity for academics to engage with an
emergent field of politics and possibly help shape such a strategy in the future.
Furthermore, as soon as we move away from a narrow understanding of material military
power as the most important in international relations, a country like Switzerland can be


studied in relation to techno-political power arising from historical and political characteristics.

Micro-politics of design and development


Historically, science and technology have played an important role in the formation of the
modern Swiss state. The birth of modern Switzerland in 1848, when the federal
constitution was introduced, is closely linked to Switzerland’s transition from an
agricultural society to an industrial society. Switzerland was one of the first countries to
become industrialized, beginning with textile production and quickly moving to the
production of machinery and the development of a chemical and pharmaceutical industry.
The founding of the modern Swiss state provided the necessary political basis with which
science and technology could be promoted, and the foundation of ETH Zurich in 1855
allowed Switzerland to establish itself as a world leader in science, technology and
innovation (Gugerli et al. 2005).
As a country with leading universities, companies, and talent in AI, Switzerland is an
important site to study how politics flow into the different stages of the design process;
how the creators and the users of AI technologies work together; and how society and
politics can best ensure that the mid- and long-term development and use of AI is
transparent, inclusive, and responsible. STS-inspired scholarship needs access, and while it
might be difficult to gain sufficient access to private AI labs, we suggest that public
universities like ETH and EPFL are interesting sites at which to closely study the micro-politics of
design and development and how the political, societal, and scientific context shapes the
evolution of security-relevant AI applications such as facial recognition systems.

Co-Production
Technological innovation is a core factor in Switzerland’s economy and education system,
with the public sector traditionally playing a rather passive role in domestic and
international technology governance. There are two main reasons for this. First, since
1848, Swiss politics have been dominated by the liberal view that technology development
should be left to industry and market forces, making it a particularly interesting case to
study. A second reason is closely linked to Switzerland’s neutrality, which for a long time
limited the country’s involvement in arms control to areas where political neutrality has
been considered advantageous. For example, Switzerland serves as the host state of CERN
and insists that research conducted at CERN serve only peaceful purposes (Robinson
2018). Another example is the Spiez Laboratory, which has a mandate to protect the Swiss
population against nuclear, biological, and chemical threats (Spiez Laboratory 2021).
However, Switzerland is well positioned to become more actively engaged in technology
governance at the international level, and certain political developments, such as the Federal
Council’s recently released Digital Foreign Policy Strategy, are already pointing in this
direction (The Federal Council 2020). Its decentralized system lends itself to new modes of
decentralized governance and collaboration between the public, private, and civil sectors.
Switzerland’s subsidiarity ensures that political processes are bottom-up, both in the
coordination between the different state levels and in relationships between the public, private,
and civil sectors (Bieri and Wenger 2018). Academically, it would be interesting to study how
different positions on AI are evolving in these structures, based on what expertise, and with
what kind of political authority. One possible case study would be the recent establishment of


an "AI Alliance" involving policy makers, academic institutions and companies that aims to
transform the Canton of Zurich into Switzerland’s premier AI hub (St€adeli 2019).
Given the advanced state of the AI ecosystem in Switzerland, combined with Switzerland’s
potential as a host for AI governance initiatives, the country could also provide an interesting
context to study how governance at the design level might complement and inform
international AI governance efforts. Swiss actors, including scientists, companies, and
government actors, could serve as bridge-builders at the intersection of peace policy and foreign
technology relations and help ensure that the norms discussed at the policy level reflect the
behavioral practices of the creators at the technical level. From a research perspective, STS
approaches emphasize the importance of studying the negotiation processes among different
socio-technical visions and between developers and consumers of technology, increasingly
within a trans-national global setting. The development trajectory of AI is not predetermined,
and science diplomacy deserves further study as it has the potential to make AI accessible to as
large a part of the world population as possible (Fischer and Wenger 2019).

Concluding Remarks
Technological developments and opportunities as well as a realization that AI technologies
are increasingly finding their way into everyday security practices have turned AI politics
and governance into interesting fields of study for IR and security studies scholars. The
aim of this contribution was to show how, going forward, the interaction between
technology and politics should be studied in order to take into account the dynamic
interaction of both spheres. In the field of AI, power is emergent and should not simply be
treated as a systemic attribute. Studying AI as an emergent policy field also means
potentially contributing to AI politics and governance. From the viewpoint of micro-politics
interacting with macro-politics, Switzerland has all the necessary ingredients to
make a substantive contribution to international AI governance.
However, so far, Swiss actors have not been active shapers of AI governance at the
national and international levels, directing attention away from the idea that technology
shapes and is simultaneously shaped by politics. Today, the increasing politicization of AI
at the level of global politics offers a new opportunity for actors within a small state like
Switzerland to become more active players in technology governance and to leverage
Switzerland’s political, economic, and societal strengths. From a domestic
political point of view, Switzerland is very well positioned to deal with the transformation of the
economy, society, and state that shapes and is being shaped by AI.

Acknowledgements
The authors would like to thank Myriam Dunn Cavelty and Jonas Hagmann for their
support in preparing this article. They would also like to thank the anonymous reviewers
for their valuable suggestions, which have resulted in a much-improved version of the
original manuscript.

Data Availability Statement


Data sharing not applicable to this article as no datasets were generated or analysed
during the current study.


References
Allen, G. and E. Kania (2017). China is using America’s own Plan to Dominate the Future of Artificial
Intelligence. Foreign Policy. Online: https://foreignpolicy.com/2017/09/08/china-is-using-americas-
own-plan-to-dominate-the-future-of-artificial-intelligence/ [accessed: 02.01.2021].
Ayoub, K. and K. Payne (2015). Strategy in the Age of Artificial Intelligence. Journal of Strategic
Studies 39(5–6): 793–819.
Bieri, M. and A. Wenger (2018). Subsidiarity and Swiss Security Policy. CSS Analyses in Security
Policy 227. Zurich: Center for Security Studies.
Bode, I. and H. Huelss (2018). Autonomous weapons systems and changing norms in international
relations. Review of International Studies 44(3): 393–413.
Brey, P. (2007). The Technological Construction of Social Power. Social Epistemology 22(1): 71–95.
Bughin, J., E. Hazan, S. Ramaswamy, M. Chui, T. Allas, P. Dahlström, N. Henke and M. Trench
(2017). Artificial Intelligence: The Next Digital Frontier? Discussion Paper. New York: McKinsey
Global Institute.
Dafoe, A. (2018). AI Governance: A Research Agenda. Oxford: Future of Humanity Institute.
Deibert, R. (2018). Toward a Human-Centric Approach to Cybersecurity. Ethics and International
Affairs 32(4): 411–424.
Fischer, S.-C. and A. Wenger (2019). A Politically Neutral Hub for Basic AI Research. CSS Policy
Perspectives 7(2). Zurich, Center for Security Studies.
Groth, O.J., M. Nitzberg, D. Zehr, T. Straube and T. Kraatz-Dubberke (2019). Comparison of
National Strategies to Promote Artificial Intelligence: Part I. Berlin: Konrad Adenauer
Foundation.
Gugerli, D., D. Speich and P. Kupper (2005). Die Zukunftsmaschine: Konjunkturen der ETH Zürich.
Zurich: Chronos-Verlag.
Haas, M.C. and S.-C. Fischer (2017). The evolution of targeted killing practices: Autonomous
weapons, future conflict, and the international order. Contemporary Security Policy 38(2): 281–
306.
Hoijtink, M. and M. Leese (2019). Technology and Agency in International Relations. New York:
Routledge.
Horowitz, M. (2019). When speed kills: Lethal autonomous weapon systems, deterrence and
stability. Journal of Strategic Studies 42(6): 764–788.
Horowitz, M. (2020). Do emerging military technologies matter for international politics? Annual
Review of Political Science 23: 385–400.
Jasanoff, S. (2004). States of Knowledge: The Co-Production of Science and the Social Order. London
and New York: Routledge.
Jensen, B.M., C. Whyte and S. Cuomo (2019). Algorithms at War: The Promise, Peril, and Limits of
Artificial Intelligence. International Studies Review 22(3): 526–550.
Jobin, A., M. Ienca and E. Vayena (2019). The global landscape of AI ethics guidelines. Nature
Machine Intelligence 1: 389–399.
Kralingen, M. v. (2016). Use of Weapons: Should We Ban the Development of Autonomous
Weapons Systems? The International Journal of Intelligence, Security, and Public Affairs 18(2): 32–
156.
Krishnan, A. (2009). Automating War: The Need for Regulation. Contemporary Security Policy 30
(1): 172–193.
LeCun, Y., G. Hinton and J. Bengio (2015). Deep learning. Nature 521: 436–444.
McCarthy, D.R. (ed.) (2018a). Technology and World Politics: An Introduction. Oxford: Routledge.


McCarthy, D.R. (2018b). Conclusion. In McCarthy, D.R. (ed.), Technology and World Politics: An
Introduction. Oxford: Routledge.
Mayer, M., M. Carpes and R. Knoblich (2014). The Global Politics of Science and Technology.
Heidelberg: Springer.
Miller, C.A. and C. Wyborn (2018). Co-production in global sustainability: Histories and theories.
Environmental Science & Policy 113: 88–95.
Robinson, M. (2018). The CERN Community; A Mechanism for Effective Global Collaboration?
Global Policy 10(1): 41–51.
Roff, H. (2017). Advancing Human Security through Artificial Intelligence. Research Paper. London:
Chatham House.
Roland, A. and P. Shiman (2002). Strategic Computing: DARPA and the Quest for Machine
Intelligence, 1983–1993. Cambridge and London: The MIT Press.
Scharre, P. (2019). Killer Apps: The Real Danger of an AI Race. Foreign Affairs 98(3): 135–144.
Singer, N. and C. Metz (2019). Many Facial-Recognition Systems Are Biased, Says U.S. Study. The
New York Times. Online: https://www.nytimes.com/2019/12/19/technology/facial-recognition-bias.
html [accessed: 02.01.2021].
Spiez Laboratory (2021). Our Vision. Online: https://www.labor-spiez.ch/en/lab/ubu/index.htm
[accessed: 20.01.2021].
Städeli, M. (2019). In Zürich entsteht eine breite Allianz für Superdrohnen und Roboter. Neue Zürcher
Zeitung. Online: https://nzzas.nzz.ch/wirtschaft/kuenstliche-intelligenz-zuerich-soll-schweiz-voran-
bringen-ld.1497241?reduced=true [accessed: 02.01.2021].
The Federal Council (2020). Digital Foreign Policy Strategy (2021–2024). Bern: The Federal
Council.

Sophie-Charlotte Fischer is a PhD Candidate at the Center for Security Studies at ETH Zurich. In her research
she focuses on how emerging technologies and markets interact with geopolitics, strategy and statecraft. Her
research interests further include AI governance and tech diplomacy. Email: sophie.fischer@sipo.gess.ethz.ch
Andreas Wenger is Professor of International and Swiss Security Policy and Director of the Center for Security
Studies at ETH Zurich. In his research and teaching, he focuses on security and strategic studies, and the history
of international relations. Address for correspondence: Haldeneggsteig 4, 8092 Zurich, Switzerland. Email:
wenger@sipo.gess.ethz.ch
