Debate

AI and the Future of Security

Sophie-Charlotte Fischer and Andreas Wenger
Abstract: Over the last years, AI applications have come to play a role in many security-related
fields. In this paper, we show that scholars who want to study AI’s link to power and security
should widen their perspective to include conceptual approaches from science and technology
studies (STS). This way, scholars can pay attention to critical dynamics, processes, practices, and
non-traditional actors in AI politics and governance. We introduce two STS-inspired concepts –
the micro-politics of design and development and co-production – and show how the study of AI
and security could benefit from them. In the final section, we turn to the study of AI in the context
of Switzerland to underscore what aspects the two previously introduced concepts help to highlight
that remain invisible for traditional approaches.
Zusammenfassung: In den letzten Jahren ist künstliche Intelligenz (KI) in immer mehr
sicherheitsrelevanten Bereichen zur Anwendung gekommen. In diesem Artikel argumentieren wir,
dass Forscher, die die Verbindung von KI zu Sicherheit und Macht untersuchen wollen, ihre
Perspektive um konzeptionelle Ansätze aus den Science and Technology Studies (STS) erweitern
sollten. Auf diese Weise können Wissenschaftler kritische Dynamiken, Prozesse, Praktiken und
nicht-traditionelle Akteure hinsichtlich der Politik und Governance von KI beleuchten. Wir stellen
zwei von den STS inspirierte Konzepte vor – micropolitics of design und co-production – und
zeigen, wie die Forschung an der Schnittstelle von KI und Sicherheit von ihnen profitieren könnte.
Zuletzt untersuchen wir KI im Kontext der Schweiz, um zu veranschaulichen, welche Aspekte die
beiden vorgestellten Konzepte hervorheben können, die für traditionelle Ansätze unsichtbar bleiben.
Introduction
Over the past 15 years, a combination of three factors - a rapid increase in computing
power, a vast increase in data, and the optimization of algorithms - has brought about a
new wave of progress in artificial intelligence (AI) research, especially in the subfield of
machine learning (ML) and its subset, deep learning. Since then, diverse AI applications
developed in company and university laboratories have made their way into everyday
practical use (Bughin et al. 2017).1 Unsurprisingly, AI applications have also come to play
a role in many security-related fields, including intelligence, defense and military policy,
foreign security policy (arms control), and internal security (state security, police and
border protection, disaster management, and the protection of critical infrastructures).
However, understanding how AI interacts with national and global security, now and in
the future, is far from straightforward.
Although certain military organizations had already shown an interest in AI during the
Cold War (Roland and Shiman 2002), a broader link between AI and international relations
and security politics emerged only after the publication of three strategic reports on AI by the
U.S. government in 2016 (Allen and Kania 2017). These reports highlighted the wide-
ranging potential of AI across domains and subsequently motivated many governments to
assess their own AI capabilities, leading 29 countries so far to develop national strategies in
this field. While the emphases of these strategies differ, they share the desire to create
the best possible conditions for their states to benefit from recent progress made in AI
(Groth et al. 2019). Overall, the ascribed revolutionary character of AI and its widespread
application are expected to drive economic growth. Yet at the same time AI technologies
are also increasingly securitized and hence posited as having an impact on national and
international security matters.
Because of the linkages made by state actors between these emergent technologies and
power politics, AI is fast becoming an interesting field to study for security scholars as
well. However, the academic discourse on AI in International Relations (IR) reflects the
state of the policy discourse – it is still in its infancy (Horowitz 2020). While there are
some neo-realist-inspired articles targeting the systemic, power-altering dimension of AI,
the scholarly community has yet to pay sufficient attention to the dynamic and emergent
character of the technology. Most importantly, the traditional treatment of technology as
an exogenous and “black-boxed” variable does not provide the field with all the necessary
analytical tools, especially given that global technology firms and research institutions are
the actors who currently shape the design and development of AI.
In this paper, we show that scholars who want to study AI’s link to power and security
should widen their perspective to include conceptual approaches from science and
technology studies (STS). This way, scholars can pay attention to critical dynamics,
1 For the purpose of this contribution, we adopt a frequently used definition of AI, which describes it as the
ability of a system to undertake tasks that would ordinarily require human intelligence, such as learning,
planning, and the ability to generalize. However, experts distinguish between narrow and general types of AI.
Narrow implies that AI can perform specific tasks only, such as translation from one language to another.
General AI would have the same cognitive powers as the human mind and would be able to solve a variety of
tasks (LeCun et al. 2015). To date, all existing AI applications are narrow, but some future-oriented
research and media discussions also address general AI.
© 2021 Swiss Political Science Association Swiss Political Science Review (2021) Vol. 27(1): 170–179
2 There are some notable exceptions to this, including Bode and Huelss (2018).
practices from a conceptual perspective that emphasizes the social and political
contingency of the AI development trajectory. Most importantly, this will bring into focus
choices made in different phases of technological development and will help scholars gain
a better understanding of how the micro-politics of private companies and universities and
the global macro-politics of states are closely intertwined. Such a research agenda could
help to bridge the gap between STS scholarship and its strong focus on idiosyncratic
micro-processes, on the one hand, and IR scholarship seeking to understand political
processes at a higher level of abstraction, on the other (McCarthy 2018b: 236-238).
Certainly, the utility of these approaches is not unique to the study of AI. However,
given the wide-ranging security applications of AI and the complexity and opacity of the
technology, we argue that there is great value in applying an STS lens to this intersection
as it could reveal dynamics and processes that we would not be able to observe otherwise.
In what follows, we briefly sketch the contours of micro-politics of design and development
and co-production and suggest possible applications to study the intersection of AI and
security before turning to the case of Switzerland in the following section.
Co-Production
The idea of co-production has been developed and applied in a range of literatures
including Public Administration, Sustainability Science and STS (Miller and Wyborn 2018).
Co-Production
Technological innovation is a core factor in Switzerland’s economy and education system,
with the public sector traditionally playing a rather passive role in domestic and
international technology governance. There are two main reasons for this. First, since
1848, Swiss politics have been dominated by the liberal view that technology development
should be left to industry and market forces, making it a particularly interesting case to
study. A second reason is closely linked to Switzerland’s neutrality, which for a long time
limited the country’s involvement in arms control to areas where political neutrality has
been considered advantageous. For example, Switzerland serves as the host state of CERN
and insists that research conducted at CERN serve only peaceful purposes (Robinson
2018). Another example is the Spiez Laboratory, which has a mandate to protect the Swiss
population against nuclear, biological, and chemical threats (Spiez Laboratory 2021).
However, Switzerland is well positioned to become more actively engaged in technology
governance at the international level, and certain political developments, such as the Federal
Council’s recently released Digital Foreign Policy Strategy, already point in this direction
(The Federal Council 2020). Its decentralized system lends itself to new modes of
decentralized governance and collaboration between the public, private, and civil sectors.
Switzerland’s subsidiarity ensures that political processes are bottom-up, both in the
coordination between the different state levels and in relationships between the public, private,
and civil sectors (Bieri and Wenger 2018). Academically, it would be interesting to study how
different positions on AI are evolving in these structures, based on what expertise, and with
what kind of political authority. One possible case study would be the recent establishment of
an "AI Alliance" involving policy makers, academic institutions and companies that aims to
transform the Canton of Zurich into Switzerland’s premier AI hub (Städeli 2019).
Given the advanced state of the AI ecosystem in Switzerland, combined with the country’s
potential as a host for AI governance initiatives, Switzerland could also provide an interesting
context to study how governance at the design level might complement and inform
international AI governance efforts. Swiss actors, including scientists, companies and
government agencies, could serve as bridge-builders at the intersection of peace policy and
foreign technology relations, and help ensure that the norms discussed at the policy level
reflect the behavioral practices of the creators at the technical level. From a research
perspective, STS approaches emphasize the importance of studying the negotiation processes
among different socio-technical visions and between developers and consumers of technology,
increasingly within a trans-national global setting. The development trajectory of AI is not predetermined,
and science diplomacy deserves further study as it has the potential to make AI accessible to as
large a part of the world population as possible (Fischer and Wenger 2019).
Concluding Remarks
Technological developments and opportunities, as well as the realization that AI technologies
are increasingly finding their way into everyday security practices, have turned AI politics
and governance into interesting fields of study for IR and security studies scholars. The
aim of this contribution was to show how, going forward, the interaction between
technology and politics should be studied in order to capture the dynamic interplay of
both spheres. In the field of AI, power is emergent and should not simply be treated as a
systemic attribute. Studying AI as an emergent policy field also means potentially
contributing to AI politics and governance. From the viewpoint of micro-politics
interacting with macro-politics, Switzerland has all the necessary ingredients to
make a substantive contribution to international AI governance.
However, so far, Swiss actors have not been active shapers of AI governance at the
national and international levels, directing attention away from the idea that technology
shapes and is simultaneously shaped by politics. Today, the increasing politicization of AI
at the level of global politics offers a new opportunity for actors within a small state like
Switzerland to become more active players in technology governance and to leverage
Switzerland’s political, economic, and societal strengths. From a domestic political point
of view, Switzerland is well positioned to deal with the transformation of the economy,
society, and state that shapes and is being shaped by AI.
Acknowledgements
The authors would like to thank Myriam Dunn Cavelty and Jonas Hagmann for their
support in preparing this article. They would also like to thank the anonymous reviewers
for their valuable suggestions, which have resulted in a much-improved version of the
original manuscript.
References
Allen, G. and E. Kania (2017). China is using America’s own Plan to Dominate the Future of Artificial
Intelligence. Foreign Policy. Online: https://foreignpolicy.com/2017/09/08/china-is-using-americas-
own-plan-to-dominate-the-future-of-artificial-intelligence/ [accessed: 02.01.2021].
Ayoub, K. and K. Payne (2015). Strategy in the Age of Artificial Intelligence. Journal of Strategic
Studies 39(5–6): 793–819.
Bieri, M. and A. Wenger (2018). Subsidiarity and Swiss Security Policy. CSS Analyses in Security
Policy 227. Zurich: Center for Security Studies.
Bode, I. and H. Huelss (2018). Autonomous weapons systems and changing norms in international
relations. Review of International Studies 44(3): 393–413.
Brey, P. (2007). The Technological Construction of Social Power. Social Epistemology 22(1): 71–95.
Bughin, J., E. Hazan, S. Ramaswamy, M. Chui, T. Allas, P. Dahlström, N. Henke and M. Trench
(2017). Artificial Intelligence: The Next Digital Frontier? Discussion Paper. New York: McKinsey
Global Institute.
Dafoe, A. (2018). AI Governance: A Research Agenda. Oxford: Future of Humanity Institute.
Deibert, R. (2018). Toward a Human-Centric Approach to Cybersecurity. Ethics and International
Affairs 32(4): 411–424.
Fischer, S.-C. and A. Wenger (2019). A Politically Neutral Hub for Basic AI Research. CSS Policy
Perspectives 7(2). Zurich, Center for Security Studies.
Groth, O.J., M. Nitzberg, D. Zehr, T. Straube and T. Kraatz-Dubberke (2019). Comparison of
National Strategies to Promote Artificial Intelligence: Part I. Berlin: Konrad Adenauer
Foundation.
Gugerli, D., D. Speich and P. Kupper (2005). Die Zukunftsmaschine: Konjunkturen der ETH Zürich.
Zurich: Chronos-Verlag.
Haas, M.C. and S.-C. Fischer (2017). The evolution of targeted killing practices: Autonomous
weapons, future conflict, and the international order. Contemporary Security Policy 38(2): 281–
306.
Hoijtink, M. and M. Leese (2019). Technology and Agency in International Relations. New York:
Routledge.
Horowitz, M. (2019). When speed kills: Lethal autonomous weapon systems, deterrence and
stability. Journal of Strategic Studies 42(6): 764–788.
Horowitz, M. (2020). Do emerging military technologies matter for international politics? Annual
Review of Political Science 23: 385–400.
Jasanoff, S. (2004). States of Knowledge: The Co-Production of Science and the Social Order. London
and New York: Routledge.
Jensen, B.M., C. Whyte and S. Cuomo (2019). Algorithms at War: The Promise, Peril, and Limits of
Artificial Intelligence. International Studies Review 22(3): 526–550.
Jobin, A., M. Ienca and E. Vayena (2019). The global landscape of AI ethics guidelines. Nature
Machine Intelligence 1: 389–399.
Kralingen, M. v. (2016). Use of Weapons: Should We Ban the Development of Autonomous
Weapons Systems? The International Journal of Intelligence, Security, and Public Affairs 18(2): 32–
156.
Krishnan, A. (2009). Automating War: The Need for Regulation. Contemporary Security Policy 30
(1): 172–193.
LeCun, Y., Y. Bengio and G. Hinton (2015). Deep learning. Nature 521: 436–444.
McCarthy, D.R. (ed.) (2018a). Technology and World Politics: An Introduction. Oxford: Routledge.
McCarthy, D.R. (2018b). Conclusion. In McCarthy, D.R. (ed.), Technology and World Politics: An
Introduction. Oxford: Routledge.
Mayer, M., M. Carpes and R. Knoblich (2014). The Global Politics of Science and Technology.
Heidelberg: Springer.
Miller, C.A. and C. Wyborn (2018). Co-production in global sustainability: Histories and theories.
Environmental Science & Policy 113: 88–95.
Robinson, M. (2018). The CERN Community; A Mechanism for Effective Global Collaboration?
Global Policy 10(1): 41–51.
Roff, H. (2017). Advancing Human Security through Artificial Intelligence. Research Paper. London:
Chatham House.
Roland, A. and P. Shiman (2002). Strategic Computing: DARPA and the Quest for Machine
Intelligence, 1983–1993. Cambridge and London: The MIT Press.
Scharre, P. (2019). Killer Apps: The Real Danger of an AI Race. Foreign Affairs 98(3): 135–144.
Singer, N. and C. Metz (2019). Many Facial-Recognition Systems Are Biased, Says U.S. Study. The
New York Times. Online: https://www.nytimes.com/2019/12/19/technology/facial-recognition-bias.
html [accessed: 02.01.2021].
Spiez Laboratory (2021). Our Vision. Online: https://www.labor-spiez.ch/en/lab/ubu/index.htm
[accessed: 20.01.2021].
Städeli, M. (2019). In Zürich entsteht eine breite Allianz für Superdrohnen und Roboter. Neue
Zürcher Zeitung. Online: https://nzzas.nzz.ch/wirtschaft/kuenstliche-intelligenz-zuerich-soll-schweiz-voranbringen-ld.1497241?reduced=true [accessed: 02.01.2021].
The Federal Council (2020). Digital Foreign Policy Strategy (2021–2024). Bern: The Federal
Council.
Sophie-Charlotte Fischer is a PhD Candidate at the Center for Security Studies at ETH Zurich. In her research
she focuses on how emerging technologies and markets interact with geopolitics, strategy and statecraft. Her
research interests further include AI governance and tech diplomacy. E-Mail: sophie.fischer@sipo.gess.ethz.ch
Andreas Wenger is Professor of International and Swiss Security Policy and Director of the Center for Security
Studies at ETH Zurich. In his research and teaching, he focuses on security and strategic studies, and the history
of international relations. Address for correspondence: Haldeneggsteig 4, 8092 Zurich, Switzerland. Email:
wenger@sipo.gess.ethz.ch