such as these, when the world has been transformed by globalisation and urban-
isation, economic crises, demographic changes including generational shifts and
substantial technological development, demand that a priori assumptions that
progress is assured, modernisation is desirable and efficiency is necessary be
interrogated. Climate change threatens stability on multiple fronts, and the
geopolitical landscape is changing quickly in many ways at once. While
political participation is increasing globally, trust in democracy and its associated
institutions has been deteriorating (“Democracy Index 2018: Me Too? Political
Participation, Protest and Democracy” 2019; “2017 Edelman Trust Barometer-
Executive Summary” 2017). Such changes have been exacerbated by the internet,
whose development began 50 years ago: the scope, nature and functioning of its
networks have left the world ever more connected and interdependent. The
network now powers cities, governments, financial markets and public discourse.
Although networks are designed to be robust and adaptive, enabling many path-
ways to route around damage, strikes against the system can have knock-on and
multiplying effects, so chaotic behaviour has potentially global impacts. Concurrently,
there have been significant shifts in population-level content consumption
patterns as news and popular culture are pushed to smartphones and ever-present
devices. The dynamics of collective attention are accelerating, fragmenting and
hastening public discussion (Lorenz-Spreen et al. 2019). As individuals, insti-
tutions and society, we are ill-equipped to deal with, and make sense of, such
complexity.
Such transformations are challenging for epistemological institutions,
including museums and galleries, whose normal approaches to knowledge
were informed by the assumptions of the modern era. Over hundreds of years,
starting in the 18th century, knowledge in the museum was created through the
systematic organisation of the world via objects segmented into defined
disciplines. “Normal” practice within museums worked “to legitimate particular
views of the natural and cultural world” (Cameron and Mengler 2009, 204),
through control: simplifying dense, difficult histories to fit into display cases
and exhibition halls, at times representing whole histories, lives or moments in
time through a single object, explained by captions with limited word counts.
As Nick Poole writes:
For centuries, the role of museums has been to digest complexity and
express it as pattern. Whether it is a linear hang in an art museum, giving
the impression of a coherent progression of art-historical movements or a
social-historical display giving the impression of a singular “community”
with identifiably-shared beliefs and values – we are temples to the illusion
of order and predictability in a complex and chaotic world.
(Poole 2014)
Yet museums, too, are now situated in postnormal times (Grinell 2014), and
this kind of normal practice, which stages extremely limited versions of history
within the museum’s walls (Rekdal 2014), has increasingly come into question.
12 Provocations on the digital future
1 In a 2015 survey of art museum professionals, the Mellon Foundation discovered that only 28% of
art museum staff at Association of Art Museum Directors member museums were from minority
backgrounds, with most concentrated in security, facilities, finance and human resources. Only 4% of
curators, educators, conservators and directors were African American, and only 3% were Hispanic. By
2018, Mellon’s second demographic survey found that art museum staff had become more racially
and ethnically diverse, although museum leadership departments showed a comparative lack of diver-
sification (Schonfeld et al. 2015; Westermann et al. 2019).
2 In the U.S., redlining included both informal and institutionalised policies to prevent certain racial
groups from accessing financial resources and mortgages. The effects of redlining continue to be felt
today.
either. In January 2018, the Google Arts and Culture Institute released an update
that used facial recognition software to compare users to portraits from museums
around the world. However, the results for non-white faces were hugely limited,
and often included and reinforced harmful stereotypes (Goggin 2018). Journalist
Benjamin Goggin attributed the results to the dual factors of Google’s selection
of its partner institutions, which skewed European and North American, and to
the problematic histories of representation within those institutions.
Unfortunately, the experience of the Google Arts and Culture Institute app
is not an isolated case. Algorithmic decision-making is sometimes assumed to be
more neutral and less open to bias than human decisions. However, algorithms
rely on the data they can access and the assumptions built into their code. When
that data is flawed, inadequate or biased – as is often the case with museum
collections data – or the encoded decisions are intentionally or unintentionally
discriminatory, so too are the results of the analysis. As technology companies
algorithmically analyse big, connected data, often informed by generations of
systematic biases, to supply users with personalised experiences of the world,
algorithmic discrimination is creating new forms of racial and class domination,
with little to no transparency available to those whose lives are being affected.
Recent studies demonstrate that machine learning algorithms can discriminate
based on classes like race and gender (Buolamwini and Gebru 2018; Ali et al.
2019). In March 2019, Facebook was sued by the U.S. Department of Hous-
ing and Urban Development over the way it let advertisers target ads by race,
gender and religion – all protected classes under U.S. law (HUD Public Affairs
2019). The company announced that it would stop allowing this in key cat-
egories where federal law prohibits discrimination in ads (Scheiber and Isaac
2019). However, even with neutral targeting parameters, advertising and content
delivery can still skew significantly along gender and racial lines (Ali et al. 2019),
further entrenching the status quo.
Acknowledging this, some companies are trying to account more fully for diversity
in their datasets in order to better serve non-white publics. For instance, in
January 2019, IBM announced the release of a “new large and diverse dataset
called Diversity in Faces (DiF) to advance the study of fairness and accuracy in
facial recognition technology” (Merler et al. 2019, 2). The dataset provides 1
million human facial images annotated using 10 coding schemes, in the hope that it will
“accelerate the study of diversity and coverage of data for AI facial recognition
systems to ensure more fair and accurate AI systems.” The photoset was drawn
from Creative Commons-licensed images on Flickr. However, as Meredith Whittaker,
co-director of the AI Now Institute, notes in an interview on NBC (U.S.),
“People gave their consent to sharing their photos in a different internet ecosys-
tem” (Solon 2019), and were not necessarily expecting to have their photos used
to train facial recognition systems that may eventually be used to surveil them.
Museums, too, have started to engage with surveillance technologies, including
monitoring of online behaviours and interactions and using location-awareness
I’ve always felt that technologies and media themselves are not inherently
good or inherently evil, but in fact they’re just tools that people will employ
for good outcomes or not so good outcomes. . . . I don’t think there’s any rea-
son why museums and culture organisations in general should be afraid of
using those tools to do good in the world, to create inclusive dialogues, to
highlight the diversity of culture and history, to promote tolerance of dif-
ferent ideas, and to promote acceptance of difference.
Such aims, which seek to leverage the affordances of digital technologies to create
better experiences for visitors and to promote civic discourse and democratic
ideals, are laudable and align with the broad aims of the sector. In pursuing them,
however, it is essential for institutions to engage critically with the limitations
and ethical questions that such technologies engender. As mentioned above, the
algorithms used in conjunction with such technologies can be discriminatory. In
mediated contexts, it can be tempting to conflate visible audiences with all audi-
ences. However, “as identity practice becomes explicit, measurable and analys-
able . . . those who cannot or choose not to participate are increasingly powerless
to shape their own experiences or to influence the organisations that serve them”
(Anderson 2019). Natalie Kane, Curator of Digital Design at the V&A Museum,
recently tweeted her concerns:
Her concerns echo those of Aaron Cope, who recently argued that:
The impulse to market, sell or otherwise leverage the data that we as muse-
ums collect through location-based technologies will be strong even though
this has nothing, and I mean literally nothing, to do with the business of
cultural heritage. The impulse to develop museum exhibitions solely for
the purpose of collecting visitor data will surely follow, which assumes
someone isn’t already doing it.
(Cope 2018)
Museums have increasingly come to rely on visitor data to understand the suc-
cess of their enterprise and to demonstrate to funders and other stakeholders the
impact of their work. But they are not the only institutions moving towards a
data-driven business model. As big, connected data has become fundamental
to the networked information infrastructure, Shoshana Zuboff has argued that
it is becoming the foundational component in a new logic of accumulation,
dubbed “surveillance capitalism” (Zuboff 2015). Surveillance capitalism is a form
of information capitalism that “aims to predict and modify human behaviour as
a means to produce revenue and market control” (Zuboff 2015, 75). She writes:
Whereas the modern era was defined by its attempt to control and reduce vari-
ables in order to better understand the world, the era of Big Data instead collects
and connects reams of data – big and small – in order to algorithmically mine
such data for patterns across human behaviour. Online, all actions leave a trace,
whether conducting Google searches or swiping right on Tinder. Our objects
and spaces are quantified, too. With little regard for user privacy, all such data
can be captured, kept, combined and mined for new understanding and insight.
Yet, as danah boyd and Kate Crawford caution, automating research in this way,
“reframes key questions about the constitution of knowledge, the processes of
research, how we should engage with information, and the nature and the cat-
egorisation of reality” (boyd and Crawford 2011, 3). This aligns with the second
contradiction that Sardar spells out, concerning knowledge in postnormal times,
wherein knowing more, as we do from research across different fields, has not
led to a decrease in ignorance, but has rather increased it. He writes, “While
we are bombarded with information on almost all and every subject, we have
very limited capability to actually discern what is important and what is trivial”
(Sardar 2010).
Even as social media enabled significant growth in user-generated content,
the mechanisms for the verification of information online were not linked to its
truth or factuality. Rather, platforms such as Google, Facebook and YouTube
created recommendation engines that used complex algorithms to define the
value of a resource for its users, with little appetite for policing content for verac-
ity. The long tail of that decision has created an online context where bots mimic
humans and troll farms exploit the features of social media platforms to delib-
erately spread “sensationalist, conspiratorialist, and other forms of junk political
news and misinformation to voters across the political spectrum” (“Russia Used
Social Media for Widespread Meddling in U.S. Politics: Reports” 2018). At the
same time, machine learning is helping create a world where real and fake are
increasingly indistinguishable to the human eye. For instance, deepfakes – a kind
of algorithmically doctored video – use generative adversarial networks to alter
the appearance of source images or video to create new videos and images that
look and sound real. It is becoming harder to believe our own eyes or to know
who to trust.
Ironically, as information has proliferated and trust in traditional knowledge
institutions and gatekeepers has eroded, many individuals now “do their own
research” rather than seeking the advice of experts (Dimock 2019). Such research
requires the individual to take on significant cognitive responsibility for
their own decision-making, often in areas outside their expertise,
while personalisation practices shape and limit the information the individual
is exposed to. danah boyd argues that media literacy has backfired in the digital
age, in part because critical thinking and critical questioning have been weaponised
and because each person is now expected to decide whether each new
piece of information they encounter is true or not (boyd 2017, 2018). This is a
condition that Seph Rodney draws attention to, in the first discussion of this
book, suggesting that:
Without robust institutionalised mechanisms for verifying facts and truth online,
misinformation spreads easily. But the challenges are more pernicious than that.
Writer Britney Gil argues we are in a crisis of epistemology, in which “the ways
we come to know the world [are] bound up in collapse of trust in fundamental
institutions” (Gil 2016). Cory Doctorow makes this point even more forcefully:
we’re not living through a crisis about what is true, we’re living through
a crisis about how we know whether something is true. We’re not dis-
agreeing about facts, we’re disagreeing about epistemology. The “estab-
lishment” version of epistemology is, “We use evidence to arrive at the
truth, vetted by independent verification (but trust us when we tell you
that it’s all been independently verified by people who were properly
skeptical and not the bosom buddies of the people they were supposed to
be fact-checking).”
The “alternative facts” epistemological method goes like this: “The
‘independent’ experts who were supposed to be verifying the ‘evidence-
based’ truth were actually in bed with the people they were supposed to
be fact-checking. In the end, it’s all a matter of faith, then: you either have
faith that ‘their’ experts are being truthful, or you have faith that we are.
Ask your gut, what version feels more truthful?”
(Doctorow 2017)
Frequently, conversations about trust are linked to questions about museum neu-
trality, and the kinds of curatorial choices that museums make in their exhibi-
tion, programming and collecting practices – particularly when communities
have not previously had positive experiences with the museum. Luis continues:
To accept that there are no right and wrong answers does not mean we
abandon the search for truth or solutions but it does entirely change the
process and kind of objectives we set for our endeavours. When there are no
right or wrong answers everyone, every perspective, has a contribution
to make, anyone is as likely as another to have some part of a potential
solution.
(Sardar 2010)
We need museums to play a new role. And a role that will be shaped largely
through intentional design and strategic decision making. We know that
museums are not neutral and that they need to take specific intentional
choices to make change.
(Chan 2019)
Perhaps LaToya Devezin put it best when she said, “All stories deserve to be
told.” It is now incumbent upon those of us concerned with digital practice in
museums to figure out how to best do so.
References
“2017 Edelman Trust Barometer-Executive Summary.” 2017. https://www.edelman.
de/fileadmin/user_upload/Studien/2017_Edelman_Trust_Barometer_Executive_
Summary.pdf.
Ali, Muhammad, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mis-
love, and Aaron Rieke. 2019. “Discrimination through Optimization: How Face-
book’s Ad Delivery Can Lead to Skewed Outcomes,” April. https://web.archive.org/
web/20190712191030/https://arxiv.org/abs/1904.02095.
Anderson, Susan. 2019. “Visitor and Audience Research in Museums.” In The Routledge
Handbook of Museums, Media and Communication, edited by Kirsten Drotner, Vince
Dziekan, Ross Parry, and Kim Christian Schrøder, 1st ed., 80–95. Abingdon, OX:
Routledge.
Autry, La Tanya. 2017. “Changing the Things I Cannot Accept: Museums Are Not
Neutral.” Artstuffmatters. 2017. https://web.archive.org/web/20190712191207/https://
artstuffmatters.wordpress.com/2017/10/15/changing-the-things-i-cannot-
accept-museums-are-not-neutral/.
Barlow, John Perry. 1996. “A Declaration of the Independence of Cyberspace.” Electronic
Frontier Foundation. 1996. https://web.archive.org/web/20190712191309/https://
www.eff.org/cyberspace-independence.
Battaglia, Andy. 2018. “The ARTnews Accord: Aruna D’Souza and Laura Raicovich
in Conversation.” Art News. 2018. https://web.archive.org/web/20190712191409/
http://www.artnews.com/2018/05/14/artnews-accord-aruna-dsouza-
laura-raicovich-conversation/.
Baym, Nancy K., and danah boyd. 2012. “Socially Mediated Publicness: An Introduc-
tion.” Journal of Broadcasting & Electronic Media. Abingdon, OX: Taylor and Francis
Group, LLC. http://www.tandfonline.com/doi/full/10.1080/08838151.2012.705200.
Bennett, Tony. 2004. “The Exhibitionary Complex.” In Grasping the World: The Idea
of the Museum, edited by Donald Preziosi and Claire Farago, 413–42. Hants, UK &
Burlington, VT: Ashgate.
Black, Graham. 2016. “Remember the 70%: Sustaining ‘Core’ Museum Audiences.”
Museum Management and Curatorship 31 (4): 386–401. https://doi.org/10.1080/096477
75.2016.1165625.
boyd, danah. 2017. “Did Media Literacy Backfire?” Data & Society: Points. 2017. https://
web.archive.org/web/20190712191604/https://points.datasociety.net/did-
media-literacy-backfire-7418c084d88d?gi=3f5228e95dae.
———. 2018. “You Think You Want Media Literacy… Do You?” Data & Society: Points.
2018. https://web.archive.org/web/20190712191646/https://points.datasociety.net/
you-think-you-want-media-literacy-do-you-7cad6af18ec2?gi=8b52512e5eaa.
boyd, danah, and Kate Crawford. 2011. “Six Provocations for Big Data.” Oxford Inter-
net Institute’s “A Decade in Internet Time: Symposium on the Dynamics of the Internet
Topaz, Chad M., Bernhard Klingenberg, Daniel Turek, Brianna Heggeseth, Pamela
E. Harris, Julie C. Blackwood, C. Ondine Chavoya, Steven Nelson, and Kevin M.
Murphy. n.d. “Diversity of Artists in Major U.S. Museums.” Accessed June 11, 2019.
https://arxiv.org/pdf/1812.03899.pdf.
Vira, Udit. 2019. “A Field Guide to the Living Internet · Nesta.” Finding Ctrl. 2019.
https://web.archive.org/web/20190606035455/https://findingctrl.nesta.org.uk/
field-guide-to-the-living-internet/.
Westermann, Mariët, Roger Schonfeld, and Liam Sweeney. 2019. “Art Museum Staff
Demographic Survey 2018.” https://web.archive.org/web/20190711212646/https://
sr.ithaka.org/publications/interrogating-institutional-practices-in-equity-
diversity-and-inclusion/.
Zuboff, Shoshana. 2015. “Big Other: Surveillance Capitalism and the Prospects of an
Information Civilization.” Journal of Information Technology 30 (1): 75–89. https://web.
archive.org/web/20190331030357/https://journals.sagepub.com/doi/10.1057/
jit.2015.5.