
Digital Risk Society

Deborah Lupton

Author's final version of chapter for The Routledge Handbook of Risk Studies (2016), edited by
Adam Burgess, Alberto Alemanno and Jens Zinn. London: Routledge, pp. 301-309.

Introduction

As social life and social institutions have become experienced and managed via novel forms
of digital technologies, and as both public and personal spaces as well as human bodies have
become increasingly monitored by digital surveillance devices and sensors, a new field of
risk inquiry has opened up in response to what might be termed 'digital risk society'. The
intersections between risk and digital technologies operate in several ways. First, the
phenomena and individuals that are identified as 'risks' or 'risky' are increasingly
configured and reproduced via digital media, devices and software. These technologies act
not only as mediators of risk but frequently are new sources of risk themselves. Second,
various uses of digital technologies are often presented as posing risks to users. In a third
major dimension, members of some social groups are positioned in the literature on the
'digital divide' as at particular risk of disadvantage in relation to communication, education,
information or better employment opportunities because they lack access to, interest in or
skills in using online technologies.

These three dimensions of digital risk society require new sources of theorising risk that are
able to understand and elucidate the ways in which digitisation and risk intersect to create
risk representations, mentalities and practices. This chapter addresses each one of these
major dimensions in turn. Before doing so, however, it is important to introduce some of the
perspectives that may be productively employed to theorise digital risk society. This
involves moving away from approaches that traditionally have dominated risk sociology
and embracing the ideas of writers in such fields as digital sociology, internet studies, new
media and communication, and surveillance studies.

New theoretical perspectives

Given that people's encounters with digital technologies inevitably involve human-
technology interactions, one important theoretical perspective is that of the sociomaterial
approach. Writers adopting this approach draw from science and technology studies, and
particularly actor network theory, to seek to acknowledge the interplay of diverse actors in
networks. The sociomaterial perspective also provides a theoretical basis for understanding
how nonhuman actors interact with each other, as takes place in the Internet of Things,
when smart objects share data, or when different types of digital datasets combine to
produce new forms of information. Several writers (van Loon, 2002; Jayne, Valentine, &
Holloway, 2010; Lupton, 2013a; Neisser, 2014; van Loon, 2014) have employed this approach
to theorise risk. In their writing, complex interactions of heterogeneous actors are positioned
as configuring risk assemblages, including humans, nonhumans, discourses, practices,
spaces and places and risks themselves. Thus, for example, Jayne, Valentine and Holloway
(2010) outline the ways in which young men's compulsion towards violence is an
assemblage of physical feelings and capacities, discourses and assumptions related to
hegemonic masculinities, other young men, alcohol and spaces where such acts are
condoned, such as pubs and city streets.

Few theorists have as yet applied the sociomaterial approach explicitly to digital risk society.
Van Loon (2002, 2014) is a notable exception. He employs the term 'cyberrisk' to
denote the mediations of risk that occur via digital risk assemblages. Van Loon contends that
all risks are mediated (that is, their meanings are negotiated) and that risks themselves are
active mediators. Cyberrisks are matter-energy-information flows (as are any forms of
digital data) that perform mediations of risks. These mediations always take place as part of
networks of human and nonhuman actors. Processes of remediation take place as these
matter-energy-information flows circulate and are taken up for different purposes by
different actors and are transformed in the process. This remediation may include
contestations and resistances to the meaning of risks.

The notion of flow is an important one to understandings of digital networks. When digital
risk assemblages are configured, risks are entangled with humans, digital technologies and
other nonhuman actors in endlessly changing combinations that are responsive to
changes in context (or remediations, in van Loon's terms). Writers theorising the digital
knowledge economy have drawn attention to the vitality of digital data: its ceaseless
movement and repurposing by a multitude of actors and its role in the politics of circulation
(Lash, 2007; Amoore, 2011; Beer, 2013; Lyon & Bauman, 2013; Manovich, 2013; Lupton, 2014,
2015). So too, risk has always been a lively concept because of its intertwinings with human
emotion and the types of extreme responses that it evokes in people (Lupton, 2013a). In
digital society, where technological change is so rapid and digital data are themselves vital,
moving and dynamic, the combination of risk and digital technologies configures the
possibilities of even livelier forms of risk. The concept of the digital risk assemblage
encapsulates these properties of risk, recognising the multiple, constantly shifting
intersections of technical and human hybrids.

The power dimensions of digital technologies also require attention when theorising digital
risk society. The internet empires (the likes of Google, Apple, Facebook and Amazon)
exert tremendous power by virtue of their ownership and control over digital data in the
global information economy, where digital information is now an important source of
commercial value (van Dijck, 2013). Power now operates principally through
digitised modes of communication (Lash, 2007; Mackenzie & Vurdubakis, 2011; Lyon &
Bauman, 2013; Manovich, 2013). Software, computer coding and algorithms have become a
universal language, not only shaping but monitoring and recording most social encounters
(Manovich, 2013). They exert a 'soft' biopolitical power in terms of their structuring of
contemporary social life, social relations, embodiment and selfhood (Cheney-Lippold, 2011;
Mackenzie & Vurdubakis, 2011).

Computer codes, software and algorithms also offer a late modernist promise of exerting
control over messy, undisciplined scenarios, including the efficient identification and
management of risk. They offer the (illusory) power of automatically enforcing what they
prescribe, doing away with human subjectivity and resultant inaccuracy and bias (Hui
Kyong Chun, 2011). As a consequence, much faith and trust are invested in the authority of
code, software and algorithms. Digital data, and particularly the massive datasets
(commonly referred to as 'big data') that are generated by people's transactions with digital
technologies, are also commonly represented as valuable and neutral forms of knowledge
(Kitchin, 2014; Lupton, 2015). These data are continuously produced when people interact
online or move around in space (surveilled by sensor-embedded or digital recording
technologies), constantly creating and recreating digital risk assemblages. Bodies and
identities are fragmented into a series of discrete components as digital data and
reassembled via this process of reconfiguration.

Forms of watching ('veillance') are integral to the new power relations of digital risk society
(Lupton, 2015), particularly the use of 'dataveillance', or veillance involving the
monitoring of digital data flows (van Dijck, 2014). Lyon and Bauman (2013) use the term
'liquid surveillance' to describe the ceaseless monitoring of citizens using digital
technologies, which takes place whenever they engage in routine transactions online, move
around in public spaces with surveillance technologies in place or engage on social media.
Dataveillance and liquid surveillance operate at various levels. The personal information
that is generated by digital encounters may be used by others (security organisations,
commercial enterprises) for their own purposes as part of risk identification and
management programs. However, many forms of dataveillance to identify risks are
engaged in by people entirely voluntarily for their own purposes: self-tracking of biometrics
using wearable digital devices or apps, or patient self-care routines, for example. They may
also invite the surveillance of others by uploading personal information to social media sites
(Best, 2010; Lupton, 2014). In these contexts, risk data become self-generated and may be
negotiated and shared online.

Different types of datasets and digital data objects can be joined up to configure risk
calculations based on inferences that seek to uncover relationships (Amoore, 2011). These
digital risk assemblages then become targeted for various forms of intervention: managerial,
governmental or commercial. In a context in which digital data flows are dynamic and open
to repurposing, while people may choose to engage in self-surveillance using digital
technologies, the degree to which they can exert control over how their personal data are
being used by commercial, security or government agencies is rapidly becoming an element
of social disadvantage and privilege. Different groups and organisations have differential
access to these big datasets. The internet empires are able to exert control over the data they
possess in their archives, while ordinary citizens (including social researchers) may struggle
to gain access to these data and determine how they are used (Andrejevic, 2014).
As I go on to detail below, algorithmic authority can have major effects on peoples life
chances, including singling out people as being 'at risk' or 'risky'. This authority is difficult
to challenge because of its apparent neutrality and objectivity. The human decision-making,
biases and selective judgements that underpin the writing of code and algorithms are
difficult to uncover and resist. As such, like many of the technologies of late modernity
(Beck, 1992), software, codes and algorithms offer many possibilities for identifying,
managing and protecting people against risk but also bear with them certain uncertainties
and potential harms (Hui Kyong Chun, 2011).

Digitising risk

There is an extensive literature on the traditional media's coverage of risk (Kitzinger, 1999;
Bakir, 2010; Tulloch & Zinn, 2011; Lupton, 2013b). As yet, however, little focus has been
placed on the digital media and how they mediate and remediate risk. Where once
traditional forms of media were important sources of identifying and publicising risks, new
digital media have become integral to these processes.

In the Web 2.0 era (where the web is far more social and interactive), digital content is far
more ephemeral and dynamic. Users of digital technologies are now both consumers and
producers of content (or 'prosumers', as some commentators put it) (Ritzer, 2014). People
not only seek out established online news sites for information about risks and crises; the
opportunity for any internet user to upload updates or images to social media sites in real
time (sometimes referred to as 'citizen journalism') has also altered the ways in which news is
created and responded to (Mythen, 2010). Twitter and Facebook exchanges and sharing of
web links, photos uploaded to Instagram and Flickr, home-made videos on YouTube and
Vimeo, Wikipedia entries, blogs, online news stories, websites providing information and
support and search engines all provide diverse ways of portraying and circulating risk
knowledges by experts and citizens alike. Thus far there seem to have been few, if any,
specific investigations of how risks are portrayed in these forums and how online users
respond to these portrayals.

The politics of risk communication on the internet are similar to those in the traditional
media. Certain risks are singled out as more important than others, based on such factors as
how novel or dramatic they appear, who they affect and who is deemed responsible for
managing and controlling them (Bakir, 2010; Lupton, 2013b). For example, when the Ebola
disease outbreak in 2014 was mainly affecting people in impoverished African countries
such as Liberia, conversations about the epidemic on Twitter were numerous. However, it
was not until a Liberian man was diagnosed with the disease in the USA that Twitter
attention escalated dramatically internationally, and particularly in the USA. The rate of
tweets per minute increased from 100 to 6,000: one case on American soil created far more
attention than the over 2,000 deaths that had already taken place in Liberia in the preceding
months (Luckerson, 2014).

As the Twitter Ebola case demonstrates, unlike the static nature of traditional media
accounts, risk discourses and debate can change by the second on platforms such as Twitter.
Thousands of individual messages per second can be generated by high profile risks,
meaning that it can be very difficult for people to assess what information is being
communicated and its validity. Misinformation can often be circulated on social media
networks, either innocently or as a deliberate attempt to engage in pranks or hoaxes
(Mythen, 2010; Lupton, 2015). For example, when Hurricane Sandy hit New York City in
late 2012, several fake images were uploaded to Twitter and Facebook that were digitally
manipulated or taken from fictional material such as films and art installations (Colbert,
2012). Given the technical affordances of such online media, this misinformation can
circulate exponentially and at rapid speed.

The risks of digital technology use

There is evidence that digitised systems and environments have provoked deep
ambivalence in social theory and popular coverage alike. Digital software and devices
appear to promise to rectify intractable problems, promote efficiency and prosperity, assist
efforts at protecting national security and configure new forms of knowledge (as in the
phenomenon of big data). Yet if they go wrong or are manipulated maliciously, the situation
can deteriorate very quickly by virtue of our very dependence on them (Lupton, 1994, 1995;
Hui Kyong Chun, 2011; Mackenzie & Vurdubakis, 2011). As Mackenzie and Vurdubakis
(2011, p. 9) contend: 'Code is the stuff nightmares, as well as dreams, are made of'.

As I noted above, software and digital devices have not simply reduced risks and
uncertainties, they have generated them (Hui Kyong Chun, 2011). The many potential harms
and hazards that have been identified in relation to digital technology use include the
possibility of internet addiction, predatory behaviour by paedophiles online, the
cyberbullying of children, illegal activities on the 'dark web' and, among those who are
deemed to spend too much time online (particularly children and young people), less-developed
social skills, poorer physical fitness and a greater tendency towards gaining weight. At the
national and global level of risk, security systems, government, the global economy and
most workplaces rely on digital technologies to operate. If their systems are damaged,
widespread disaster can follow. In computer science and risk management circles attention
has been devoted for several decades now to researching the security and reliability of
commercial or government digital systems as part of attempts to protect these systems from
failure or disruption to their operations ('cyber risk'). There have also been continuing
concerns about the possibilities of 'cyber terrorism' or 'cyber war', involving politically-
motivated attacks on large-scale digital systems and networks (Janczewski & Colarik, 2008;
O'Connell, 2012). Indeed, it has been argued that the term 'cyber' is one of the most
frequently used in international security protection discussions (O'Connell, 2012).

The threats posed to computer systems by computer viruses and other forms of malware
developed by hackers have occasioned disquiet since the early days of personal computing
(Lupton, 1994). At the end of the last century, a high level of publicity, bordering on
visions of apocalyptic disaster, was given to the so-called 'millennium bug', also known as the
'Y2K bug' or 'Year 2000 problem'. The bug was portrayed as a massive risk to the world's
computing systems, due to a practice in software programming that represented year dates
using only two digits and did not allow for the date change involved in moving into the
year 2000. The panic discourses that were generated in response to this risk (which, in the
end, did not create significant problems) were related to the dependency that people,
organisations and governments had developed on computing systems. It was a symbol of
awareness of the intensification of globalisation and rapid communication across the world
and the threats that such networked connections could generate (Best, 2003).
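
To illustrate the technical basis of this risk, the following minimal sketch (in Python, offered purely as an illustration rather than as a reconstruction of any particular system of the time) shows how storing years as two digits makes the move into 2000 appear as a jump backwards in time:

```python
# A minimal sketch (not drawn from the original chapter) of the two-digit year
# convention behind the 'millennium bug': storing only the final two digits of a
# year makes 2000 ('00') appear to come before 1999 ('99').

def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    # Naive interval calculation of the kind common in pre-2000 software.
    return end_yy - start_yy

def years_elapsed_four_digit(start_year: int, end_year: int) -> int:
    # The remediated approach: store and compare four-digit years.
    return end_year - start_year

# A record created in 1999 and read back in 2000:
print(years_elapsed_two_digit(99, 0))        # -99: time appears to run backwards
print(years_elapsed_four_digit(1999, 2000))  # 1, as expected
```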

While these types of cyber risks and uncertainties remain current in portrayals of
international digitised societies, the risk of losing privacy and the personal security of one's
data has most recently come to the fore, in the wake of growing public awareness of the
ways in which people's personal data are repurposed for commercial reasons by the internet
empires and of the revelations of the classified documents released by former US National
Security Agency contractor Edward Snowden concerning the extent of national security
agencies' mass dataveillance of their citizens. In response to these revelations, the well-known
sociologist of risk society, Ulrich Beck (2013), turned his attention to what he referred to as
'global digital freedom risk'. He viewed this risk as the latest in a line of risks that threaten
the world, beginning with the technological and environmental hazards that were the
subject of his Risk Society (Beck, 1992), then the global financial crises and terrorism of the
early years of this century. Inspired by Snowden's leaked documents, Beck argued that this
risk involves the threat to privacy and freedom of speech created by the mass surveillance of
citizens' private data as they are generated by digital devices, not only by the national
security agencies that were the subject of Snowden's revelations but also by the commercial
internet empires. Beck called for the
identification of the fundamental right to protection of personal data as a strategy for
countering global digital freedom risk.

Beck's concerns are shared by privacy organisations and legal scholars. Digital surveillance
technologies differ from previous forms of watching in their pervasiveness, the scope of data
they are able to collect and store, their potential longevity and the implications for privacy
they evoke. Groups that once were not subject to routine surveillance are now targeted by
the dispersed liquid technologies of digital surveillance (Haggerty & Ericson, 2000; Lyon &
Bauman, 2013; van Dijck, 2014). It has been pointed out by legal and media scholars that
digital data have a much longer life and capacity to be disseminated across time and space
than previous forms of surveillance. Critics have argued that the 'right to be forgotten' is
contravened by the archiving of digital data. Crimes, misdeeds and embarrassments are
now perpetually available for other people to find on digital archives and databases (Rosen,
2012; Bossewitch & Sinnreich, 2013).

The risks of digital social inequalities

Since the emergence of personal computers, followed by the internet, social researchers have
directed attention at the ways in which digital technology use is mediated via social
structures. Such factors as age, gender, socioeconomic status, education level, mode of
employment, geographical location, state of health or the presence of disability and
race/ethnicity have all been demonstrated to structure the opportunities that people have to
gain access to and make use of digital technologies (Lupton, 2015). The term 'digital social
inequality' has been used to describe the disadvantages that some social groups face in
terms of these determinants of access and use based on cultural and economic capital
(Halford & Savage, 2010). Beyond these issues, however, lie a number of other ways in

7
which some social groups experience greater disadvantage and discrimination related to
digital technologies.

For some time, digital surveillance technologies have been directed at identifying risks and
constructing 'risky' groups to be targeted for further observation or intervention. CCTV
cameras in public spaces, the use of body scanning and facial recognition systems in airport
security systems and other biometric forms of identification, for example, are used as modes
of observation, monitoring and the identification of dangerous others. Lyon (2002) uses the
concept of surveillance as 'social sorting' to contend that digital surveillance operates to
inform judgements about 'risky' individuals by constructing risk profiles and selecting them
as members of groups posing threats to others. Dataveillance, therefore, can operate to
exclude individuals from public spaces, travel and other rights and privileges if such
individuals are deemed to be posing a threat in some way. This type of social sorting is
frequently discriminatory. People from specific social groups that are categorised as
undesirable by virtue of their race, ethnicity or nationality, age or social class are subjected
to far more intensive monitoring, identification as 'dangerous' or 'risky' and exclusion on
the basis of these factors than are those from privileged social groups (Amoore, 2011;
Werbin, 2011; Crawford & Schultz, 2014).
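
As a purely hypothetical illustration of how such social sorting might be implemented in code (the attributes, weights and thresholds below are invented for this sketch and describe no actual system), a rule-based risk score can assign individuals to different levels of scrutiny:

```python
# Purely illustrative sketch of rule-based 'social sorting': a risk score built
# from profile attributes determines the level of scrutiny a traveller receives.
# All attributes, weights and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class TravellerProfile:
    nationality_flagged: bool            # nationality appears on a watch-list
    one_way_ticket: bool
    paid_in_cash: bool
    prior_travel_to_flagged_region: bool

def risk_score(p: TravellerProfile) -> int:
    score = 0
    score += 3 if p.nationality_flagged else 0
    score += 2 if p.one_way_ticket else 0
    score += 1 if p.paid_in_cash else 0
    score += 2 if p.prior_travel_to_flagged_region else 0
    return score

def sort_traveller(p: TravellerProfile) -> str:
    s = risk_score(p)
    if s >= 5:
        return "detain for questioning"
    if s >= 3:
        return "secondary screening"
    return "routine processing"

print(sort_traveller(TravellerProfile(True, True, False, False)))   # detain for questioning
print(sort_traveller(TravellerProfile(False, True, True, False)))   # secondary screening
```

Even in a toy example of this kind, the heavy weighting attached to a flagged nationality shows how apparently neutral code can embed the discriminatory judgements discussed above.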

The advent of big data and the opportunity to mine these data for personal information have
led to another raft of potential harms that single out members of certain social groups for
potential discrimination. These include the risk of 'predictive privacy harms', which involves
individuals being adversely affected by assumptions and predictions that are made about
them based on pre-existing digital datasets (Crawford & Schultz, 2014; Robinson, Yu, &
Rieke, 2014). What some theorists have characterised as 'algorithmic authority' (Cheney-
Lippold, 2011; Rogers, 2013) has begun to affect the life chances of some individuals as part
of producing 'algorithmic identities'. Some employers have begun to use algorithms in
specially designed automated software to select employees as well as engaging in online
searches using search engines or professional networking platforms such as LinkedIn to
seek out information on job applicants (Rosenblat, Kneese, & boyd, 2014). Insurance and
credit companies are scraping big datasets or asking people to upload their personal data,
resulting in disadvantaged groups suffering further disadvantage by being targeted for
differential offers or excluded altogether (Lupton, 2014; Robinson, Yu, & Rieke, 2014).

The potential for algorithmic discrimination against individuals or social groups based on
pre-selected characteristics has been identified as a risk of such practices. For example, now
that diverse databases holding personal details on various aspects of people's lives can be
joined together for analysis, such information as health status or sexual orientation may
become identifiable for job applicants (Andrejevic, 2014). As noted above, it can be difficult
to challenge these assessments or to seek to have certain personal details removed from
digital datasets, even if these data can be proven to be inaccurate. As a result, privacy and
human rights organisations have begun to call for legislation and bills of rights which
promote greater transparency in the ways in which big data are used to shape peoples life
chances (Robinson, Yu, & Rieke, 2014).
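
A minimal and entirely hypothetical sketch (all records, field names and the linking key are invented) of how joining two datasets on a shared identifier can expose a sensitive attribute, such as health status, in a recruitment context where it was never disclosed:

```python
# Hypothetical sketch: linking two datasets on a shared identifier (here an email
# address) exposes a sensitive attribute in a recruitment context where the
# applicant never disclosed it. All records and field names are invented.

job_applicants = [
    {"email": "a.smith@example.com", "name": "A. Smith", "role_applied": "analyst"},
]

health_app_records = [
    {"email": "a.smith@example.com", "condition": "chronic illness"},
]

# Build a lookup from the second dataset, then join on the shared key.
conditions_by_email = {r["email"]: r["condition"] for r in health_app_records}

enriched = [
    {**applicant, "inferred_condition": conditions_by_email.get(applicant["email"])}
    for applicant in job_applicants
]

print(enriched[0])
# {'email': 'a.smith@example.com', 'name': 'A. Smith',
#  'role_applied': 'analyst', 'inferred_condition': 'chronic illness'}
```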

Conclusion: towards a lively sociology of digital risk society

In this chapter I have offered a brief overview of the diverse and manifold ways in which
digital risk society is operating. I have commented on the vitality of digital data: its lively
circulations and repurposings, its mediations and remediations. Given the continuous
transformations that are part of a digital world in which new technologies and associated
practices are emerging daily, often accompanied by debates about their social, political and
ethical implications, I would contend that a lively sociology of risk is called for to better
understand these flows and fluxes. This involves an appreciation of the affordances, uses
and politics of digital technologies and the data that they generate and circulate.

Social risk researchers, like other social researchers, are grappling with the complexities of
researching both the content of these diverse forms of digital media risk representation and
audiences' responses to them, given the steady stream of new devices and software entering
the market, the continuous and massive streams of output of big data and the role played by
prosumers in actively creating or responding to this content. While the liveliness of digital
data presents a challenge, it also offers opportunities to rethink sociological theorising and
methodologies (Beer, 2014; Lupton, 2015). In finding their way, risk scholars need to look
beyond the traditional sources of theorising risk to come to terms with digitised risks and
their sociomaterial contexts, including the workings of big data, digital sensors, software,
platforms and algorithmic authority as they ceaselessly mediate and remediate risk
mentalities and practices. This may include adopting greater expertise in accessing and
analysing the digital data that are generated both via routine online transactions and
deliberate acts of prosumption (such as Twitter conversations about specific risks and
uncertainties); using digital devices to conduct research into risk knowledges and practices
in innovative ways (such as apps designed for ethnographic research and wearable self-
tracking technologies); looking outside the usual sources of literature to engage with the
work of researchers working in such fields as human-computer interaction studies,
computer design and data science; and thinking through the ways in which multiple forms
of social positioning (gender, social class, age, disability, sexual identification,
race/ethnicity, geographical location) intersect when people use (or do not use) digital
technologies.

References

Amoore, L. (2011). Data derivatives: on the emergence of a security risk calculus for our
times. Theory, Culture & Society, 28(6), 24-43.
Andrejevic, M. (2014). The big data divide. International Journal of Communication, 8, 1673-
1689.
Bakir, V. (2010). Media and risk: old and new research directions. Journal of Risk Research,
13(1), 5-18.
Beck, U. (1992). Risk Society: Towards a New Modernity. London: Sage.
Beck, U. (2013). The digital freedom risk: too fragile an acknowledgment. OpenDemocracy.
Retrieved from https://www.opendemocracy.net/can-europe-make-it/ulrich-
beck/digital-freedom-risk-too-fragile-acknowledgment
Beer, D. (2013). Popular Culture and New Media: the Politics of Circulation. Houndmills:
Palgrave Macmillan.
Beer, D. (2014). Punk Sociology. Houndmills: Palgrave Macmillan.
Best, K. (2003). Revisiting the Y2K bug: language wars over networking the global order.
Television & New Media, 4(3), 297-319.
Best, K. (2010). Living in the control society: surveillance, users and digital screen
technologies. International Journal of Cultural Studies, 13(1), 5-24.
Bossewitch, J., & Sinnreich, A. (2013). The end of forgetting: Strategic agency beyond the
panopticon. New Media & Society, 15(2), 224-242.
Cheney-Lippold, J. (2011). A new algorithmic identity: soft biopolitics and the modulation of
control. Theory, Culture & Society, 28(6), 164-181.
Colbert, A. (2012). 7 fake Hurricane Sandy photos you're sharing on social media. Mashable.
Retrieved from http://mashable.com/2012/10/29/fake-hurricane-sandy-photos/
Crawford, K., & Schultz, J. (2014). Big data and due process: toward a framework to redress
predictive privacy harms. Boston College Law Review, 55(1), 93-128.
Haggerty, K., & Ericson, R. (2000). The surveillant assemblage. British Journal of Sociology,
51(4), 605-622.
Halford, S., & Savage, M. (2010). Reconceptualizing digital social inequality. Information,
Communication & Society, 13(7), 937-955.
Hui Kyong Chun, W. (2011). Crisis, crisis, crisis, or sovereignty and networks. Theory,
Culture & Society, 28(6), 91-112.
Janczewski, L., & Colarik, A. M. (Eds.). (2008). Cyber Warfare and Cyber Terrorism. Hershey,
PA: IGI Global.
Jayne, M., Valentine, G., & Holloway, S. (2010). Emotional, embodied and affective
geographies of alcohol, drinking and drunkenness. Transactions of the Institute of
British Geographers, 35(4), 540-554.
Kitchin, R. (2014). The Data Revolution: Big Data, Open Data, Data Infrastructures and Their
Consequences. London: Sage.
Kitzinger, J. (1999). Researching risk and the media. Health, Risk & Society, 1(1), 55-69.
Lash, S. (2007). Power after hegemony: cultural studies in mutation? Theory, Culture &
Society, 24(3), 55-78.
Luckerson, V. (2014). Watch how word of Ebola exploded in America. Time. Retrieved from
http://time.com/3478452/ebola-twitter/
Lupton, D. (1994). Panic computing: the viral metaphor and computer technology. Cultural
Studies, 8(3), 556-568.
Lupton, D. (1995). The embodied computer/user. Body & Society, 1(3-4), 97-112.
Lupton, D. (2013a). Risk and emotion: towards an alternative theoretical perspective. Health,
Risk & Society, 1-14.
Lupton, D. (2013b). Risk (2nd ed.). London: Routledge.
Lupton, D. (2014). Self-tracking modes: reflexive self-monitoring and data practices. Social
Science Research Network. Retrieved from http://ssrn.com/abstract=2483549
Lupton, D. (2015). Digital Sociology. London: Routledge.
Lyon, D. (2002). Everyday surveillance: Personal data and social classifications. Information,
Communication & Society, 5(2), 242-257.
Lyon, D., & Bauman, Z. (2013). Liquid Surveillance: A Conversation. Oxford: Wiley.
Mackenzie, A., & Vurdubakis, T. (2011). Codes and codings in crisis: signification,
performativity and excess. Theory, Culture & Society, 28(6), 3-23.
Manovich, L. (2013). Software Takes Command. London: Bloomsbury Publishing.
Mythen, G. (2010). Reframing risk? Citizen journalism and the transformation of news.
Journal of Risk Research, 13(1), 45-58.
Neisser, F. M. (2014). 'Riskscapes' and risk management - review and synthesis of an actor-
network theory approach. Risk Management, 16(2), 88-120.
O'Connell, M. E. (2012). Cyber security without cyber war. Journal of Conflict and Security
Law, 17(2), 187.
Ritzer, G. (2014). Prosumption: evolution, revolution, or eternal return of the same? Journal of
Consumer Culture, 14(1), 3-24.
Robinson, D., Yu, H., & Rieke, A. (2014). Civil Rights, Big Data, and Our Algorithmic Future.
No place of publication provided: Robinson + Yu.
Rogers, R. (2013). Digital Methods. Cambridge, MA: The MIT Press.
Rosen, J. (2012). The right to be forgotten. Stanford Law Review Online, 64(88). Retrieved from
http://www.stanfordlawreview.org/online/privacy-paradox/right-to-be-forgotten
Rosenblat, A., Kneese, T., & boyd, d. (2014). Networked employment discrimination. Data &
Society Research Institute Working Paper. Retrieved from
http://www.datasociety.net/pubs/fow/EmploymentDiscrimination.pdf
Tulloch, J. C., & Zinn, J. O. (2011). Risk, health and the media. Health, Risk & Society, 13(1), 1-
16.
van Dijck, J. (2013). The Culture of Connectivity: A Critical History of Social Media. Oxford:
Oxford University Press.
van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific
paradigm and ideology. Surveillance & Society, 12(2), 197-208.
van Loon, J. (2002). Risk and Technological Culture: Towards a Sociology of Virulence. London:
Routledge.
van Loon, J. (2014). Remediating risk as matter-energy-information flows of avian influenza
and BSE. Health, Risk & Society, 16(5), 444-458.
Werbin, K. (2011). Spookipedia: intelligence, social media and biopolitics. Media, Culture &
Society, 33(8), 1254-1265.
