The ethics of unbreakable encryption: Rawlsian privacy and the San Bernardino iPhone

by Morten Bay

First Monday, Volume 22, Number 2 - 6 February 2017
Inspired by the 2016 case of the encrypted Apple iPhone used by alleged terrorists in the San Bernardino, Calif. attack, this paper explores the question of whether the use of completely unbreakable encryption online or off-line would be considered ethical by the political philosopher John Rawls. Rawls is widely acknowledged as having played an important role in how we perceive freedom and liberty in Western democracies today, and his work on justice, fairness and liberty appears to be a great source of knowledge for politicians, policy-makers and activists. Several recent events and threats to national security of a technological nature have raised ethical questions about the relationship between state and citizen and how technological power should be divided between these two parties, particularly when it comes to the right to privacy. However, in contrast with a wide-spread perception of Rawls’ work, this article shows that there are cases in which Rawls’ principles actually place a limitation on liberty in these matters. This paper presents a thought experiment in which it becomes clear that Rawls’ advocacy for liberty did not extend to cases in which social cooperation in a well-ordered society would be obstructed. Based on a study of Rawls’ work, the author concludes that whereas Rawls would consider strong encryption both necessary and ethical, completely unbreakable encryption would be considered a violation of social cooperation and thus indefensible for Rawls.
Contents

Introduction
The San Bernardino iPhone
Breakable vs. unbreakable encryption
Encryption as obstruction of justice or civil disobedience
Encryption as facilitator of free speech and privacy
Utilitarian arguments for encryption
The maximin rule
Unbreakable encryption and maximin
Social cooperation vs. privacy
The appropriateness of strong vs. unbreakable encryption
Conclusion

Introduction

John Rawls is “widely considered the most important political philosopher of the twentieth century” [1], not least for his contributions to the theoretical conceptions of liberty. When he received the National Humanities Medal from then-President Bill Clinton in 1999, the president said that Rawls had “placed our rights to liberty and justice upon a strong and brilliant new foundation of reason” [2]. Indeed, Rawls is seen as the most important thinker in centuries when it comes to defending liberty through arguments of reason rather than (only) normative values.

But the perception of Rawls as liberty’s staunchest defender may not be entirely in accordance with what he actually wrote in his substantial body of work. In fact, as I will argue in this paper, there are limits to the freedoms and rights that a citizen can expect, even in the idealist version of society that Rawls constructs through theory.

A less-quoted aspect of Rawls’ theory is that there are cases where social cooperation must take precedence over individual liberties — an aspect that can be at odds with the general perception of Rawls and his political liberalism. In my view, emerging technologies are presenting us with ethical and political challenges that we have very little tradition or theoretical basis for meeting. The application of Rawls’ theories to these new, technologically-born ethical paradoxes may be useful, particularly in questions of justice, fairness and liberty — Rawls’ main areas of concern. Though the results of viewing these challenges through a Rawlsian lens may reveal a path forward for policy-makers and thinkers, this path will also be a somewhat unexpected direction for those who view Rawls as merely a defender of liberty at all costs. For example, Rawls would most likely not side with those who argue that privacy is an irreducible right in a society contingent on technological infrastructures, nor would he argue that liberty must be constant within the same environment.

In the following, I will conduct a thought experiment based on one of those paradoxical challenges mentioned above, the challenge presented by
the notion of completely unbreakable encryption. This challenge was
initially inspired by the case of an Apple iPhone used by one of the
suspects in the 2015 San Bernardino, Calif. attack and the techno-ethical
discourse that followed. During the debate over the encryption used by
Apple on said iPhone, it was suggested that Apple’s encryption is, in fact,
unbreakable, and that the content it protects can only be accessed by the
user with a personal key (Brown, 2016; Villasenor, 2016a; Selyukh and
Domonoske, 2016). This turned out not to be the case. After putting
pressure on Apple’s leadership (who refused to assist in opening the
iPhone) and a drawn-out public debate, the FBI managed to break the
encryption through other means.

But what if the encryption had indeed been unbreakable? What if unbreakable encryption actually becomes available to everyone? This, I
argue, presents a different situation, one in which the application of
Rawlsian principles reveals sides of his views on liberty that are not
necessarily well known.

The San Bernardino iPhone

In the aftermath of the ISIS-inspired terror attack in San Bernardino, California in December of 2015, the U.S. Federal Bureau of Investigation
(FBI) attempted to extract information from a smartphone used by one of
the alleged terrorists. The smartphone, an Apple iPhone, was protected by
security measures put in place by Apple. The smartphone officially
belonged to the City of San Bernardino, and it was therefore not a legal or
constitutional matter whether the FBI had permission to break into the
phone — the City of San Bernardino had already given permission. The
problem was that the FBI was initially unable to break through Apple’s
security barriers, as these consisted of a two-step verification/public-key
encryption system. The iPhone needed to be unlocked with a personal
code only known by the user, and if the FBI attempted a so-called ‘brute force’ attack (using software to try passcode combinations one after another until the right one is found), the on-board security software would delete the contents of the phone after 10 failed attempts (Zetter, 2016).
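To make the mechanics of this stand-off concrete, the following sketch simulates a highly simplified version of the situation described above: a short numeric passcode, a device that wipes itself after ten wrong guesses, and an attacker trying codes in sequence. The class and function names are illustrative assumptions, not Apple's actual software; the point is only that the lockout limit, rather than the size of the passcode space, is what stops the attack.

```python
import itertools
from typing import Optional

class LockedDevice:
    """Toy model of a device that erases its contents after too many failed passcode attempts."""

    MAX_FAILED_ATTEMPTS = 10  # the lockout limit described above

    def __init__(self, passcode: str):
        self._passcode = passcode
        self._failed_attempts = 0
        self.wiped = False

    def try_passcode(self, guess: str) -> bool:
        """Return True on a correct guess; wipe the device after the tenth wrong one."""
        if self.wiped:
            return False
        if guess == self._passcode:
            return True
        self._failed_attempts += 1
        if self._failed_attempts >= self.MAX_FAILED_ATTEMPTS:
            self.wiped = True  # contents are now treated as unrecoverable
        return False

def brute_force(device: LockedDevice, digits: int = 4) -> Optional[str]:
    """Try every possible numeric passcode in order until one works or the device wipes."""
    for candidate in itertools.product("0123456789", repeat=digits):
        guess = "".join(candidate)
        if device.try_passcode(guess):
            return guess
        if device.wiped:
            return None  # lockout reached: the data is gone long before the space is searched
    return None

if __name__ == "__main__":
    device = LockedDevice("7392")
    recovered = brute_force(device)
    # With a 10-attempt limit, only 10 of the 10,000 possible four-digit codes can be tried.
    print(f"Recovered passcode: {recovered}, device wiped: {device.wiped}")
```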

Apple declined to aid the FBI in breaking the security measures, as this, according to Apple, would not only set a dangerous precedent, but could potentially also put the unlocking software code in the hands of evildoers who could use it to break into any iPhone in existence.

Apple, by equating code with expression and speech in a constitutional sense, also claimed that it was unconstitutional for the FBI to compel
Apple to write any sort of code, as the First Amendment protects against
compelled speech. This happened in response to a court order which was
issued to force Apple to comply (Villasenor, 2016b). The question became
particularly pertinent as Apple claimed that they could not themselves
break into iPhones without changing their entire approach to encryption.
Eventually, the FBI found a third-party provider that was able to hack into
the iPhone in question (Warren and Hernandez, 2016), but not until a
previously held discussion resurfaced in the public discourse: Should
citizens be able to use encryption to keep information from the
authorities?

Breakable vs. unbreakable encryption

As the Apple/FBI case of 2016 showed, the discussion is somewhat academic as long as encryption is always breakable. Several scholars and practitioners have claimed that the tug of war between creators/users of encryption and hackers that break encryption is perpetual: that every time a new encryption method is created, someone will always find ways to break it, and that there is no such thing as perfect, total security when it comes to protecting digital assets and personal information (Ellison and Schneier, 2000; Chau, 2006; Assante, 2014).

However, with the emergence of quantum computing, encryption is becoming so advanced that unbreakable, impenetrable encryption may very well become a reality. Or, the time between new encryption technology becoming available and that same technology getting hacked will become longer and longer. The argument for this relates to the
division of power that lies in access and ownership of the technology.
Quantum computing is expensive, and if it requires quantum computing to
break encryption created by quantum computing, those with access to
quantum computing will have the upper hand until the cost of this
technology drops to a point where hackers (or government agencies) can
afford it. Other technologies are also becoming available that are
described as unhackable, created by private companies such as Apple
(Cuthbertson, 2016), and government agencies such as DARPA (Slezak,
2015). In other words, encryption may not be forever unbreakable, but it
is very likely that encryption will be unbreakable for substantial and
significant periods of time, depending on the power dynamics and
distributions of technologies such as quantum computing.

Encryption as obstruction of justice or civil disobedience

At its core, the Apple/FBI case became one where encryption inhibited law
enforcement from conducting what the FBI believed to be necessary
information-gathering operations. In other words, the encryption
represented, in practicality, an obstruction of justice. Apple’s refusal to
comply can also be seen as obstruction of justice or as civil disobedience,
depending on your initial view of the case (Jeong, 2016; Dupont, 2016).
Dichotomies such as this one riddle the entire encryption debate, but as
this paper will attempt to show, Rawls’ work may represent a path
forward.

To begin the application of Rawlsian principles to encryption, I will move forward with three assumptions:

1. Unbreakable/impenetrable encryption is indeed impenetrable, or at least so functional that it would be impossible, in practice, to break the encryption with any means at hand at that specific moment within reasonable time constraints (i.e., it may be possible to break the encryption, but with the processing power at hand it may, e.g., take longer than the lifetimes of those attempting to break the encryption).
2. The encryption in question is available to citizens and is not
exclusive to certain institutions within society.
3. The society examined here can be described as well-ordered in
Rawls’ terminology.

The ‘well-ordered society’ in assumption 3 represents one of Rawls’ central concepts. For a society to be just, it must be ‘well-ordered’ in Rawls’ terms. He begins his definition of a well-ordered society by writing:

... to say a society is well-ordered by a conception of justice means three things: a) that it is a society
in which everyone accepts, and knows that
everyone else accepts and publicly endorses, the
very same principles of justice; b) that its basic
structure — its main political and social institutions
and how they hang together as one system of
cooperation — is publicly known, or with good
reason believed to satisfy those principles; and c)
that citizens have a normally effective sense of the
principle of justice, that is, one that enables them
to understand and to apply the principles of justice,
and for the most part to act from them as their
circumstances require. [3]

Accordingly, a society that is well-ordered cannot conceal the inner workings of its political and social institutions and everyone within the
society must accept its (transparent) principles of justice. This does not
mean that e.g., law enforcement or the intelligence community are
prohibited from conducting covert operations. It does mean, however, that
this must happen on a mandate from the citizens and through regulation
and institutions that are transparent. The citizens decide the level of
transparency, so to speak, just as citizens must agree upon the same
principles of justice in a well-ordered society. The same goes for how
justice is upheld, how laws governing the principles of society are
enforced. Rawls asserts that there must be institutions of power in place in
society for justice to be maintained, but this must be in a way that is
mandated by the citizens: “Our exercise of political power is fully proper
only when it is exercised in accordance with a constitution the essentials
of which all citizens as free and equal may reasonably be expected to
endorse in the light of principles and ideals acceptable to their common
human reason.” [4]

Hence, in a well-ordered society, an obstruction of law enforcement is at the same time an obstruction of principles of justice agreed upon by the
society’s citizens. By extension, Rawls claims that citizens have a
“fundamental natural duty” to “support and to comply with just institutions
that exist and apply to us” [5]. If society is just, citizens must comply with
its institutions in order for it to remain just. In such a situation, concealing
information required to uphold justice from law enforcement would be a
failure of compliance with those institutions. However, Rawls also opens
the door for civil disobedience and conscientious refusal in cases where
societies are only partially just or in which, say, an unjust war is waged by
an otherwise just society. Then, the basic liberties of the individual take
precedence, in the push for restoration of justice (Rawls, 1971). In this
case, however, I have already made the assumption that society is well-
ordered and just, and so civil disobedience and conscientious refusal
would, at least initially, be in violation of Rawls’ principles.

Encryption as facilitator of free speech and privacy

If concealment of information from authorities through means of encryption is to be seen as more than a simple obstruction of justice in a
Rawlsian well-ordered society, encryption must be understood as
something which enables the citizen to engage in social cooperation, and
somehow supports the exercise of the citizen’s basic liberties. Strong (but
not unbreakable) encryption would actually be helpful in achieving this. On
the other hand, there is strong resistance towards giving law enforcement
and the government powers to break any encryption by default, and by
doing so, effectively banning unbreakable encryption. The Apple/FBI case
reignited a debate about whether authorities should have so-called ‘back
door’ entry to encrypted computing devices — in other words, whether a
‘master key’ should be available to government agencies in matters of
national security or local/federal crime investigations. In describing the
above scenario, Froomkin and McLaughlin called it “a new phase of the
crypto wars” [6]. They also point out that critics of government back door
solutions often argue against back doors as a matter of privacy, linking
encryption directly to the privacy discourse. In the Oxford English
Dictionary (2017), encryption is defined as a method that can “prevent
unauthorized access” or which can be used to “conceal.” By this, and
arguably most other definitions, encryption is a tool for concealing things
of a private nature and hence a tool for privacy. It can also be understood
as a tool for secret-keeping, which, to some scholars like Posner
(1978), is the very definition of privacy, but to many others (Solove,
2011; Diffie and Landau, 2007) is only part of a much larger conception of
privacy.

Privacy is intrinsically linked to the most foundational right in a pluralistic liberal democracy: The right to free speech. Although some scholars argue
that privacy and free speech can be at odds with each other, as in the case
of the press exposing wrongdoings of certain individuals (thereby
breaching their rights to privacy) when the public interest is at hand
(Mayes, 2002), there is a strong argument for linking the two at a more
fundamental level. As Froomkin (1995) shows in his analysis and critique
of the NSA Clipper Chip project, encryption is an enabler of the freedoms
of speech and association protected by the constitution. He also argues,
like Apple did in the recent case, that forcing someone who uses
encryption to hand over the decryption key can be seen as
unconstitutionally compelled speech, particularly when the compelled
speech is not publicized for transparency purposes, as in the case of the
financial records of publicly traded companies. In addition to the
compelled speech argument, it is imperative for the free exchange of ideas
through freedom of speech that ideas can be developed without influence,
intrusion or untimely interpretation from and by outside forces. As the
British Lord Steyn wrote, quoted in Mayes [7]: “Freedom of speech is the
lifeblood of democracy. The free flow of information and ideas informs
political debate. It is a safety valve: people are more ready to accept
decisions that go against them if they can in principle seek to influence
them.”

Referencing classics by Kafka and Orwell, Solove (2011) points to how surveillance and the fear of decontextualized interpretation can have an inhibiting effect on the free flow of information. Being able to develop even the most subversive ideas through discussions with others, without the risk of them being taken out of context and used against you, is at the core of freedom of speech, and thus the right to freedom of speech by definition must also
have a privacy dimension. If a right to freedom of speech exists, so must
the right to freedom of speech in private. As Solove (2007) shows,
defining a right to privacy as merely a right to keep secrets (as seen in
Posner, 1978) is flawed and over-reductive. From such a definition it follows
that you don’t need protection of your privacy if you have no secrets to
hide, which is an invalid argument according to Solove, as it rests on the
“underlying assumption that privacy is about hiding bad things. Agreeing
with this assumption concedes far too much ground and leads to an
unproductive discussion of information people would likely want or not
want to hide.” [8]

Running with Solove’s argument, secrets are usually some form of information and hence reducing privacy to mere secret-keeping also
reduces the whole notion of privacy to the sub-category of information
privacy. More importantly, inserting a normative evaluation into what
should or shouldn’t be private based on what is “bad” or “wrong”, turns
the right to privacy into a question of moral judgment. It would validate
the restriction of human rights seen in many theocracies, which is in direct
contradiction to the pluralistic ideal that, among others, Rawls presents us
with. In Rawlsian terms, agreeing upon the basic principles in society from
a just original position would require the ability to freely present and
discuss any such principles. This is not to say that privacy cannot be
contextual, as Nissenbaum (2004) suggests, but, as will be revealed
below, in a Rawlsian well-ordered society, the contexts in which it is
restricted must be a result of a just agreement among citizens.

Utilitarian arguments for encryption

In other words, to facilitate freedom of speech, a right to private conversation and development of ideas without fear of retribution is
essential. One way of facilitating this is by using strong encryption. By
encrypting conversations, they can be kept private from both the
government and your peers. With or without Rawls, it is hard to argue
against the right to encrypt your communications or utterances under the
right to free speech. The question posed in this article, however, is
whether this right to encryption also should include the right to use
encryption in a manner that is impenetrable and unbreakable as
mentioned above, and whether barring law enforcement institutions and
other authorities entirely from getting access can be viewed as being
compliant with Rawls’ principles. To answer this question, it helps to
understand Rawls’ opposition to utilitarianism, his maximin rule and his
approach to social cooperation.

Many arguments against government back doors and in favor of unbreakable encryption have a utilitarian flavor. In my view, however,
utilitarianism can be used to argue both for and against unbreakable
encryption and the result depends on normative stances towards other
issues, which basically renders the utilitarian approach useless in this
discussion. A utilitarian argument can be made for back doors, as these
could be used by law enforcement to protect the vast majority of people
who are not cryptography-savvy or tech-savvy and whose need for
protection against, say, cybercrime or cyberattacks, is greater than their
need for the ability to conduct private communications or conceal
information. But privacy advocates could equally argue that unbreakable
encryption ensures absolute freedom of speech, which is something
everyone in a democratic society benefits from, and those benefits are
perhaps greater than the risk of cyberattacks. In both cases, a majority of
citizens are given protections we usually associate with pluralist liberal
democracies: Equal protection under the law and freedom of speech. The
point is, utilitarian reasoning does not provide us with a solution with
regard to the conflict between encryption, privacy and enforcement of
justice, since it simply becomes a version of the age-old conflict between
security and freedom at a higher level of abstraction.

Rawls is famous for his arguments against utilitarianism, and this is why it
is beneficial to look to his work when discussing encryption and regulation.
I would argue that Rawls’ maximin rule is a preferable approach to this
question, rather than the utilitarianism-tinted approaches put forth by
some participants in the discourse. When one looks at the problem
through the lens of Rawls’ maximin rule, it would seem that while strong
encryption can be a facilitator of the essential ability to present and
discuss ideas without retribution, unbreakable encryption would actually
have an almost opposing effect as it would be in violation of Rawls’
maximin rule and hence, inhibit social cooperation.

The maximin rule

Rawls uses his maximin principle as a “heuristic tool” for choosing the
principles of justice in the basic structure. He stresses that this tool is not
a general tool for deciding on e.g., questions of morality, but only pertains
to principles of justice chosen in the original position [9]. He states the
maximin rule thus: “It tells us to identify the worst outcome of each
available alternative and then to adopt the alternative whose worst
outcome is better than the worst outcomes of all the other alternatives.”
[10] It may seem like stating the obvious that in choosing principles of
justice, you should choose the alternative that is least bad. But the
maximin rule, as mentioned above, is to be viewed as an alternative to
utilitarianism, in which principles are chosen that will maximize the good
for the largest number of people in society, even if this means inequality
for some. Rawls illuminates the difference between principles based on the
maximin rule and utilitarianism by first stating some conditions under
which the maximin rule applies, and then showing how these conditions
are what makes maximin more just and fair than a utilitarian method of
choice.
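Stated procedurally, the maximin rule is a simple decision procedure: for each alternative, identify its worst guaranteeable outcome, then choose the alternative whose worst outcome ranks highest. The sketch below is a toy illustration with hypothetical ordinal scores rather than anything drawn from Rawls, and it contrasts that procedure with a choice by average utility.

```python
def maximin_choice(alternatives: dict[str, list[int]]) -> str:
    """Pick the alternative whose worst outcome is least bad (the maximin rule).

    `alternatives` maps each alternative to ordinal outcome scores (higher is better).
    Probabilities are deliberately ignored, as the rule's conditions require."""
    return max(alternatives, key=lambda name: min(alternatives[name]))

def average_utility_choice(alternatives: dict[str, list[int]]) -> str:
    """For contrast: the principle of average utility picks the best mean outcome."""
    return max(alternatives, key=lambda name: sum(alternatives[name]) / len(alternatives[name]))

# Hypothetical rankings purely for illustration.
options = {
    "principle A": [9, 7, 2],  # excellent for most, but a very bad worst case
    "principle B": [6, 5, 4],  # more modest gains, with a tolerable worst case
}
print(maximin_choice(options))          # -> "principle B" (worst case 4 beats A's 2)
print(average_utility_choice(options))  # -> "principle A" (mean 6.0 beats B's 5.0)
```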

The conditions in question force parties in the original position to choose the alternative with the least bad outcome without regard for probabilities,
and to base the evaluation of what is worst on “guaranteeable” outcomes
and not just what is a possibility [11]. The parties must not ask what
should or could happen if an alternative is chosen, but what, based on a
reasonable amount of common sense reading of available information, will
happen. Utilitarianism does not, in Rawls’ view, set similar conditions.
Rather, any information available can be used in determining the best
outcome for the largest number of people, making the utilitarian principles
much more relativistic in nature than the maximin principle. The devil is in
the detail for Rawls, who unhyperbolically refrains from using examples
such as slavery or religious persecution to illustrate the disadvantages of
utilitarianism. Rather, he writes:

Consider instead a possible balance of social advantages to a sizable majority from limiting the
political liberties and religious freedoms of small
and weak minorities. The principle of average utility
seems to allow possible outcomes that the parties,
as trustees, must regard as altogether
unacceptable and intolerable. [12]

Thus, Rawls would likely not approve of Germany’s choice of banning public displays of swastikas or France’s ban on niqabs and burkas in public
places. It is a utilitarian principle which, in the view of the lawmakers, will
be better for society as a whole, but restricts the rights of a small
minority. The maximin principle does not allow for this kind of inequality,
simply because equality is a basic principle that takes precedence due to
the application of the veil of ignorance [13]. In the two examples
mentioned above, a gaze through the veil of ignorance would force the
restating of these problems as being about banning the display of political
symbols in public or restricting a person’s freedom to practice his or her
religion in public — both of which, according to Rawls, would be worse
outcomes in a free and just society than the consequences stemming from
a minority of people displaying swastikas or wearing burkas.

Unbreakable encryption and maximin

This brings us back to the question of encryption. As previously stated, it is hard to argue against the right to encrypt your communication with
others. It is not only taking advantage of a tool that is both legal and
available, it is also a practical and good way of securing your First
Amendment rights without inflicting any harm or economic damage upon
others. There is nothing in Rawls’ principles that argues against it, and in
some instances, these principles can even be read as being in support of
the right to encrypt. But what if the encryption used is unbreakable? Then,
I would argue, the application of Rawlsian principles would yield a different
result. The rule of law is an important component in Rawls’ well-ordered
society. For a circumvention of the rule of law to be ethical in Rawlsian
terms, it would take circumstances under which the only way to maximize
the freedom of the people is to allow for such a circumvention. An
example could be a revolt against a brutal dictator or the like, in which
case the society in question is immediately no longer a well-ordered one.
Similarly, it cannot be argued that the authorities’ breaking of encryption
used by citizens may be the very thing that makes society unjust (which
would be an argument in favor of unbreakable encryption), as a well-
ordered society is a premise for the application of Rawls’ principles as
discussed in this paper.

The rule of law entails at least two things: That the institutions of government follow it as strictly as is expected of the citizens they govern, and that the rule of law is just. Both things would be in place in
Rawls’ (ideal) well-ordered society. Encrypting your online communication
or digital possessions with unbreakable cryptography, however, would be
an obstruction of justice in such a society, as it is indisputable that
encryption that is unbreakable by law enforcement operatives is a
hindrance to the operatives’ abilities to gather information. Even if there is
no meaningful information or digital objects within the encrypted container
or online communication, ruling that one piece of evidence out is progress
in terms of information-gathering as a tool to enforce the law. This kind of
obstruction would be deemed unethical under the maximin rule, in which
the question of unbreakable encryption boils down to this: What are the
worst outcomes of the alternatives regarding the use and legality of such
impenetrable protection of digital objects and online communication?

If unbreakable encryption becomes available to everyone, it is almost certain that some will use it. The conditions of the maximin rule and the veil of ignorance prohibit any regard for how likely it is that citizens will use unbreakable encryption or how many citizens will use it. But common
sense dictates that at least some will. If unbreakable encryption is used by
private citizens, there will, as stated above, be potential for obstruction of
justice. Again, I do not speculate on how likely it is that the citizens using
the unbreakable encryption will engage in criminal activity that would
result in the issuance of a warrant to examine their encrypted objects or
communications. But that it is possible is indisputable.

The worst outcome of allowing citizens to use unbreakable encryption is, then, that it will obstruct the work of law enforcement and hence stop justice from being applied. Taken to its extreme, it may rescind a society’s status as ‘well-ordered’ in Rawls’ terminology. Once again, the conditions of maximin prohibit regard for the extent to which this is bound to happen, and so whether it renders law enforcement completely powerless or only affects a few cases, unbreakable encryption can, viewed through the veil of ignorance, be seen as a hindrance to justice. The worst outcome of
the opposite, banning unbreakable encryption, or somehow restricting the
proliferation of it, is that only highly functional, but not unbreakable,
encryption is available for citizens to protect their privacy, and that there
will always be a risk of an invasion of privacy by state or non-state actors.
Applying the same maximin method as above, common sense dictates
that it is certain that there will be invasions of privacy under these
conditions.

So is a certainty of invasion of privacy a worse outcome than inhibiting justice from being applied? I would argue that Rawlsian principles lead to a negative answer to this question. Though Rawls is known as a great philosopher of liberty, he also reminds us that his view of the free citizen
is limited to the political concept of justice as fairness. In this regard, he
does not concern himself with individual freedom at higher levels of
abstraction:

In what sense are citizens free? Here again we must keep in mind that justice as fairness is a
political conception of justice for a democratic
society. The relevant meaning of free persons is to
be drawn from the political culture of such a society
and may have little or no connection, for example,
with freedom of the will as discussed in the
philosophy of mind. [14]

Social cooperation vs. privacy

Within Rawls’ basic structure of society, basic liberties and rights are
secured for the citizen in the political sense, not in the more abstract
sense. Rawls’ concept of justice as fairness requires citizens to be engaged
in social cooperation. He sees the social cooperation of citizens as intrinsic
to the construction of a society ruled by fair justice, and defines two
‘moral powers’ by which citizens can express this social cooperation:

i. One such power is the capacity for a sense of justice: It is the capacity to understand, to apply and to act from (and not merely in
accordance with) the principles of political justice that specify the
fair terms of social cooperation.
ii. The other moral power is a capacity for a conception of the good: it
is the capacity to have, to revise and rationally to pursue a
conception of the good. [15]

Rawls even states directly in his definition of a well-ordered society that privacy does not play a dominant part in the basic structure, and that any
claims to privacy rights are secondary to the social cooperation which
constitutes a fair and just society: “A well-ordered society, as thus
specified, is not, then, a private society; for in the well-ordered society of
justice as fairness, citizens do have final ends in common.” [16]

Could the right to privacy be considered a principle of justice that should be part of the basic structure, agreed upon in the original position by the
parties? It could. But then, the question becomes one of degrees. How far
does your right to privacy go? This would have to be decided as a part of
the basic structure. I would argue, based on what I have shown above,
that Rawls’ well-ordered society could never allow for an amount of
privacy that is detrimental to others’ ability to exercise their moral powers.
If one person’s privacy claims inhibit another person’s acting with full
autonomy [17] (and thereby in accordance with the basic principles
agreed upon by all), I would argue that Rawls would let the privacy claims
take a back seat to the other person’s right to full autonomy. In other
words, it is possible to allow for privacy claims within a Rawlsian
framework, but the basic principles of a well-ordered society will always
take preference over those rights to privacy.

The appropriateness of strong vs. unbreakable encryption

Since Rawls only writes about privacy directly in very few places,
Nissenbaum’s (2004) description of similar principles within informational
privacy may provide some clarification. Rawlsian privacy, as described
above, aligns somewhat with what Nissenbaum (2004) writes about as
contextual integrity within informational privacy. According to this concept,
two norms must be upheld in order for invasions or limitation of privacy to
be allowable. One is the appropriateness of sharing information.
Nissenbaum argues that it is appropriate to share health information with
your doctor, financial information with your bank, romantic information
with friends, etc. What is not appropriate to Nissenbaum is to cross those
lines, i.e., to reveal religious affiliations or financial status to your
employer or share romantic information with the bank. Or, more
importantly, that these lines are crossed without the individual’s
permission or voluntary participation. The other norm, or set of norms,
regards distribution and flow of information. Nissenbaum finds that in
some contexts, distribution and flow of private information can be
allowable, such as within medical or financial systems, but in most cases,
it requires the voluntary participation of the individual to whom the
information relates. According to Nissenbaum, for the boundaries of (informational) privacy to be permissibly overstepped, the breach must be appropriate and the flow of information must be acceptable to the person it concerns. If one of these norms is not upheld, the breach of privacy can be viewed
as unethical. Rawls and Nissenbaum share the notion that the basic
principles agreed upon (what in Nissenbaum’s terms would be deemed
appropriate) by the citizens of society and which each of them voluntarily
comply with and understand (Nissenbaum’s voluntary participation)
determine the restrictions of privacy.

If the citizens of a society decide that strong (but not unbreakable) encryption should be available, and unbreakable encryption should be
made illegal, the citizens voluntarily agree that it would be appropriate to
let law enforcement attempt to break the encryption used, assuming that
it is in the public interest and the process is just. Unbreakable encryption
as a violation of social cooperation, on the other hand, could be seen as a
flow of information which is not acceptable to a citizen, e.g., in a case
where a third party (or a government entity) is sharing unbreakably
encrypted information about the citizen, thus shutting the citizen out.

The mere presence of strong encryption makes the Rawlsian argument against unbreakable encryption even more powerful. The fact that encryption exists which requires massive resources to crack minimizes the number of people or institutions that are able to invade a person’s privacy and thereby also minimizes the need for unbreakable encryption
(unless your only need is to keep those with massive resources from
invading your privacy). This argument does not comply with the maximin
conditions, but does add weight to the arguments that do.

Strong (but not unbreakable) encryption, in most practical cases, also makes temporality a part of the argument. There is a fundamental
difference between strong encryption which takes a long time for third
parties to crack and unbreakable encryption, which will either be
impossible to crack or will take so long to crack that it might as well be
impossible. If strong encryption, which takes a long time to crack, is
applied, any breach of privacy would always happen retroactively by
definition. It would not be possible for any third parties, including the
government, to conduct live surveillance of a strongly encrypted
conversation, if the encryption takes a relatively long time to crack.
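Rough arithmetic illustrates how wide the gap between “takes a long time to crack” and “might as well be impossible” can be. The guess rate and key sizes below are illustrative assumptions only; the point is that exhaustive search time grows exponentially with key length, so a few dozen extra bits turn a retroactive breach into one that exceeds any practical horizon.

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def worst_case_years(key_bits: int, guesses_per_second: float) -> float:
    """Years needed to try every key of the given length at the given guess rate."""
    return (2 ** key_bits) / guesses_per_second / SECONDS_PER_YEAR

# Hypothetical attacker capable of one trillion guesses per second.
rate = 1e12
for bits in (56, 80, 128, 256):
    print(f"{bits:3d}-bit key: ~{worst_case_years(bits, rate):.2e} years to exhaust")
```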

Conclusion

Above, I have shown that even though encryption is a valid, useful and
perhaps even necessary tool to protect privacy under a Rawlsian societal
framework, unbreakable encryption is not, as it would be socially
uncooperative and a hindrance to justice to allow it. The question remains,
however, how you would even ensure that no encryption is ever
unbreakable, if you were to attempt the idealistic implementation of
Rawlsian justice as fairness.

One solution might be a ban on unbreakable encryption, but in practice, such a ban would be impossible to enforce, as citizens would likely
manage to acquire the unbreakable encryption mechanisms through other
means if they deem it necessary enough, as examples from pirated music
to blueprints for 3D-printed firearms have shown. Another suggestion
which is a big part of the debate is the idea of a backdoor, i.e., a ‘master
key’ or another technical solution that will always allow law enforcement
officers with the right warrants in place to decrypt a device or a
communication. This is impractical too, as the global marketplace for code
will likely make cryptography available that does not comply with U.S.-
defined backdoor regulation.

A third solution is to allocate more resources to government agencies in order for them to hire talented enough staff and do enough research that
their operators will always be technologically ahead of any methods of
encryption. This could work, but it rests on two major assumptions: 1.
That there actually does not exist such a thing as unbreakable encryption
and 2. That any type of encryption can be (or will be, in time) cracked
with the right amount of technological assets available. It is not within the
scope of this article to further discuss a concrete solution to ensure that
justice can be applied in an age of high-level or maybe even unbreakable
encryption.

One aspect to consider, however, is temporality. If unbreakable encryption is banned in order to prohibit obstruction of justice, and
unbreakable encryption can also be strong encryption that just takes
impractically long to break, the amount of time it takes to break the
encryption may be a very useful variable. It would require more research
to draw any conclusions, but perhaps imposing a mandatory delay or
temporal distance between the government’s ability to break encryption
and citizens’ ability to encrypt could point towards a balance between the
kind of encryption that should be a right for everyone and the kind which
should be banned. This is assuming that technology available to the public
will always be ahead of what is easily breakable by the authorities (this is
actually the current situation, as the San Bernardino iPhone case showed,
but that could change). If regulators agree that it should always take at
least a week (or a day or a month or a year, the period itself can be
determined by the citizens) to crack an encrypted entity, this “privacy
buffer” or “encryption quarantine” may give the authorities the ability to
enforce the law without giving them the power of live surveillance.
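As a purely illustrative sketch of how such a “privacy buffer” might be expressed as a rule, the snippet below sorts an encryption scheme into one of three bands by its estimated time-to-break: too weak (near-live surveillance possible), permitted (breach possible only retroactively), or effectively unbreakable. The thresholds, and the very idea of an agreed estimate, are assumptions made for the example, not a worked-out policy.

```python
from datetime import timedelta

# Hypothetical thresholds; in the paper's terms, both would be set by the citizens.
MINIMUM_DELAY = timedelta(weeks=1)             # below this, near-live surveillance is possible
PRACTICAL_HORIZON = timedelta(days=365 * 50)   # beyond this, treat the scheme as unbreakable

def classify_scheme(estimated_time_to_break: timedelta) -> str:
    """Place an encryption scheme in one of three bands under the 'privacy buffer' idea."""
    if estimated_time_to_break < MINIMUM_DELAY:
        return "too weak: would allow near-live surveillance"
    if estimated_time_to_break > PRACTICAL_HORIZON:
        return "effectively unbreakable: outside the permitted band"
    return "permitted: a breach is possible, but only retroactively"

print(classify_scheme(timedelta(hours=3)))
print(classify_scheme(timedelta(days=90)))
print(classify_scheme(timedelta(days=365 * 1000)))
```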

This is a subject which requires further research, most likely in disciplines different from those this article relates to. It is merely the objective of this
paper to show how a society based on Rawlsian, idealist principles of
justice as fairness allows — and maybe even requires — privacy protection
through encryption, but not at the cost of social cooperation and equality,
and that unbreakable encryption violates the latter.

About the author

Morten Bay is a Ph.D. candidate in information studies at UCLA. His research interests revolve around the Internet and other networks and
how they impact society, ethics, culture and conceptions of reality along
diverse avenues, ranging from national security to the proliferation and
distribution of news, data and information. He has written four books on
these topics which also are at the center of the work he has done in his
20-year career as a journalist.
E-mail: mortenbay [at] ucla [dot] edu

Acknowledgements

The author would like to thank Dr. Leah Lievrouw for her extensive
commentary and advice as this paper was being developed.

Notes

1. Duignan, 2010, p. 305.

2. Schaefer, 2007, fifth paragraph.

3. Rawls, 2005, pp. 201–202.

4. Rawls, 2005, p. 137.

5. Rawls, 1971, p. 115.

6. Froomkin and McLaughlin, 2016, headline.

7. Mayes, 2002, part 2.

8. Solove, 2007, p. 764.

9. Rawls, 2001, p. 97.

10. Ibid.

11. Rawls, 2001, p. 98.

12. Rawls, 2001, p. 100.

13. The ‘veil of ignorance’ and ‘the original position’ are two of the most
central concepts in Rawls’ theory of justice. They represent a situation of
initial equality between citizens while they are creating the basic structure
of society. The ideal (and even to Rawls, utopian) construction of a just
society happens from the original position, in which no citizen has more
power than another. The veil of ignorance is a tool presented by Rawls
which is to be used in the original position when deciding, through
reasoning, on the principles of the basic structure. By not taking prior
knowledge into account, Rawls argues, one is more likely to arrive at a
principle of justice which is fair and acceptable to all.

14. Rawls, 2001, p. 21.

15. Rawls, 2001, p. 19.

16. Rawls, 2001, p. 202, emphasis mine. This quote from Rawls’ (2005)
Political liberalism is also restated in his Justice as fairness (2001).

17. Rawls distinguishes between two types of autonomy, rational and full.
Rational autonomy is “acting solely from our capacity to be rational and
from the determinate conception of the good we have at any time” (Rawls,
2005, p. 306). That is, rational autonomy can be acting in rational
accordance with a doctrine of some sort, be it ethical, religious or other.
According to Rawls, however, this is not enough if one is to be a citizen
engaged in social cooperation: “Full autonomy includes not only this
capacity to be rational but also the capacity to advance our conception of
the good in ways consistent with honoring the fair terms of social
cooperation; that is, the principles of justice” (Rawls, 2005, p. 306).
References

Michael Assante, 2014. “America’s critical infrastructure is vulnerable to cyber attacks,” Forbes (11 November), at
http://www.forbes.com/sites/realspin/2014/11/11/americas-critical-
infrastructure-is-vulnerable-to-cyber-attacks/, accessed 14 January 2017.

Marcel Brown, 2016. “iPhone encryption is unbreakable — As long as you use a password,” (18 February), at
http://marcelbrown.com/2016/02/18/iphone-encryption-is-unbreakable-
as-long-as-you-use-a-password/, accessed 14 January 2017.

Jacqui Chau, 2006. “Application security — It all starts from here,” Computer Fraud & Security, volume 2006, number 6, pp. 7–9.
doi: http://dx.doi.org/10.1016/s1361-3723(06)70366-9, accessed 14
January 2017.

Anthony Cuthbertson, 2016. “Apple is working on an iPhone even it can’t hack,” Newsweek (25 February), at http://www.newsweek.com/apple-
working-iphone-even-it-cant-hack-430259, accessed 14 January 2017.

Whitfield Diffie and Susan Eva Landau, 2007. Privacy on the line: The
politics of wiretapping and encryption. Updated and expanded edition.
Cambridge, Mass.: MIT Press.

Brian Duignan (editor), 2010. The 100 most influential philosophers of all
time. New York: Britannica Educational Pub. in association with Rosen
Educational Services.

Quinn Dupont, 2016. “Opinion: Why Apple isn’t acting in the public’s
interest,” Christian Science Monitor (22 February), at
http://www.csmonitor.com/World/Passcode/Passcode-
Voices/2016/0222/Opinion-Why-Apple-isn-t-acting-in-the-public-s-
interest, accessed 14 January 2017.

Carl Ellison and Bruce Schneier, 2000. “Ten risks of PKI: What you’re not
being told about public key infrastructure,” Computer Security Journal,
volume 16, number 1, pp. 1–7.

Dan Froomkin and Jenna McLaughlin, 2016. “FBI vs. Apple establishes a
new phase of the crypto wars,” The Intercept (26 February), at
https://theintercept.com/2016/02/26/fbi-vs-apple-post-crypto-wars/,
accessed 14 January 2017.

A. Michael Froomkin, 1995. “The metaphor is the key: Cryptography, the Clipper chip, and the Constitution,” University of Pennsylvania Law Review,
volume 143, number 3, pp. 709–897, at
http://scholarship.law.upenn.edu/penn_law_review/vol143/iss3/3,
accessed 14 January 2017.

Sarah Jeong, 2016. “The convoluted logic behind Apple’s ‘obstruction’ of law enforcement,” Motherboard (8 March), at
http://motherboard.vice.com/read/doj-seeks-to-overturn-new-york-
iphone-case, accessed 14 January 2017.

Tessa Mayes, 2002. “Privacy vs free speech: Two competing rights?” Spiked (22 October), at http://www.spiked-
online.com/newsite/article/8569, accessed 14 January 2017.

Helen Nissenbaum, 2004. “Privacy as contextual integrity,” Washington Law Review, volume 79, number 1, pp. 119–158.

Oxford English Dictionary, 2017. “encrypt, verb,” at https://en.oxforddictionaries.com/definition/encrypt, accessed 9 January 2017.

Richard A. Posner, 1978. “The right of privacy,” Georgia Law Review, volume 12, number 3, pp. 393–422, at http://digitalcommons.law.uga.edu/lectures_pre_arch_lectures_sibley/22, accessed 14 January 2017.

John Rawls, 2005. Political liberalism. Expanded edition. New York: Columbia University Press.

John Rawls, 2001. Justice as fairness: A restatement. Cambridge, Mass.: Harvard University Press.

John Rawls, 1971. A theory of justice. Cambridge, Mass.: Belknap Press of Harvard University Press.

David L. Schaefer, 2007. “Justice and inequality,” Wall Street Journal (23
July), at http://www.wsj.com/articles/SB118515408718974595, accessed
14 January 2017.

Alina Selyukh and Camila Domonoske, 2016. “Apple, the FBI And iPhone
encryption: A look at what’s at stake,” NPR (17 February), at
http://www.npr.org/sections/thetwo-way/2016/02/17/467096705/apple-
the-fbi-and-iphone-encryption-a-look-at-whats-at-stake, accessed 14
January 2017.
Michael Slezak, 2015. “Unhackable kernel could keep all computers safe
from cyberattack,” New Scientist (16 September), at
https://www.newscientist.com/article/mg22730392-600-unhackable-
kernel-could-keep-all-computers-safe-from-cyberattack-2/, accessed 14
January 2017.

Daniel Solove, 2011. “Why privacy matters even if you have ‘nothing to
hide’,” Chronicle of Higher Education (15 May), at
http://www.chronicle.com/article/Why-Privacy-Matters-Even-if/127461/,
accessed 14 January 2017.

Daniel Solove, 2007. “‘I’ve got nothing to hide’ and other misunderstandings of privacy,” San Diego Law Review, volume 44, pp.
745–772.

John Villasenor, 2016a. “If Apple can create a backdoor to the iPhone,
could someone else?” Forbes (17 February), at
http://www.forbes.com/sites/johnvillasenor/2016/02/17/if-apple-can-
create-a-backdoor-to-the-iphone-could-someone-else/, accessed 14
January 2017.

John Villasenor, 2016b. “Some key issues in the Apple iPhone decryption
matter,” Forbes (21 February), at
http://www.forbes.com/sites/johnvillasenor/2016/02/21/some-key-issues-
in-the-apple-iphone-decryption-matter/, accessed 14 January 2017.

Christina Warren and Sergio Hernandez, 2016. “FBI finally hacks iPhone,
ending court battle with Apple,” Mashable (28 March), at
http://mashable.com/2016/03/28/fbi-cracks-san-bernardino-iphone/,
accessed 14 January 2017.

Kim Zetter, 2016. “Apple’s FBI battle is complicated. Here’s what’s really
going on,” Wired (10 February), at
https://www.wired.com/2016/02/apples-fbi-battle-is-complicated-heres-
whats-really-going-on/, accessed 14 January 2017.

Editorial history

Received 27 September 2016; revised 11 January 2017; accepted 14 January 2017.

This paper is in the Public Domain.

The ethics of unbreakable encryption: Rawlsian privacy and the San Bernardino iPhone
by Morten Bay.
First Monday, Volume 22, Number 2 - 6 February 2017
https://journals.uic.edu/ojs/index.php/fm/article/view/7006/5860
doi: http://dx.doi.org/10.5210/fm.v22i2.7006
