by Artificial Intelligence
Kelley M. Scott
Emerging Technologies
Fall 2016
A. Abstract
growth of the last 40 years. It will posit a framework for the regulation of
Finally, this paper will seek to use the fundamental concepts of autonomy,
B. Defining Artificial Intelligence
framework surfaced that examined the theory that intellect may be more than
think and act like a human being; (ii) systems that think and act rationally.5
It is clear from these factors that artificial intelligence differs from ordinary
are able to train themselves ([and] store their personal experience), and this
1 Cerka P, et al., Liability for damages caused by artificial intelligence, Computer Law & Security Review (2015) at 1.
2 Vardi, M. Artificial Intelligence: Past and Future. Communications of the ACM, Vol. 55 No. 1, 2012 at 5. The term artificial intelligence was coined by John McCarthy. He complained that "as soon as it works, no one calls it artificial intelligence anymore."
3 It is arguable that this is still unclear.
4 Cerka P, et al., Liability for damages caused by artificial intelligence, Computer Law & Security Review (2015) at 1.
5 Id.
on the actions previously performed.6 Based on this context, an inquiry
into legal liability for injury caused by machine learning began as early as
1987.7 It was soon postulated that legal standards would be developed by the
early 2000s8. Yet, 29 years later, our system of jurisprudence has failed to
settle expectations regarding artificial intelligence and its interaction with the
law.
best choice of liability standards to hold it to. Human beings are often
confused when confronted with the term AI, as they associate the term with
encompassing not only self-driving vehicles, but also calculators, phones, and
countless other embodiments. Given that we use artificial intelligence all the
time in our daily lives, it is difficult to imagine it causing harm. In fact, many
at this point based on its capabilities rather than its purpose. Most scholars
6 Id.
7 Steven J. Frank, Tort Adjudication and the Emergence of Artificial Intelligence Software, 21 Suffolk U. L. Rev. 623 (1987).
8 Cole, G., Tort Liability for Artificial Intelligence and Expert Systems, 10 Computer L.J. 127 (1990).
9 Vinge, Vernor. "The coming technological singularity: How to survive in the post-human era." (1993).
10 Urban, T. The AI Revolution: The Road to Superintelligence. WBW Press, 2015.
(ANI), artificial general intelligence (AGI), and artificial superintelligence
(ASI).
narrow task and is specialized to perform well in a single area. All existing
Examples of ANI include IBM's Deep Blue or Matthew Lai's Giraffe chess
processes. Weak artificial intelligence systems may appear very intelligent, but
their narrow application is clear when they are asked to perform any function
outside the limited parameters for which they were originally programmed.
Weak AI is accurate and useful under these limited parameters, but cannot
our electric grid, damage nuclear power plants, cause a global-scale economic
11 Dvorsky, G. How much longer before our first AI catastrophe? io9.com, published April 1, 2013, retrieved December 21, 2016.
collapse12, misdirect autonomous vehicles and robots, take control of a factory
be difficult to get rid of (whether in the digital realm or the real world).13
These types of threats will be the subject of later discussion regarding liability
and even the medical and legal fields.14 In large part due to its pervasiveness,
of developing AGI, much of the research since the late 1980s has been
12 See e.g. Lauricella, Tom (May 7, 2010). "Market Plunge Baffles Wall Street; Trading Glitch Suspected in 'Mayhem' as Dow Falls Nearly 1,000, Then Bounces". The Wall Street Journal.
13 Dvorsky, G. How much longer before our first AI catastrophe? io9.com, published April 1, 2013, retrieved December 21, 2016.
14 IBM's Watson program is a good example of a foray into these traditional and venerated fields.
15 Gottfredson, Linda S., Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. Intelligence, 24(1), 13-23. 1997.
primarily focused on ANI. However, many leading scholars believe that
Some of the earliest ventures into AGI include Allen Newell's SOAR
project initiated in 1983 and Doug Lenat's CYC program initiated in 1987. The
CYC program attempted to create true AGI by encoding all common sense
to this day, and has been successful in creating a powerful database and
assemblage of heuristics.19 More recent forays into AGI have used a more
synergistic approach, often incorporating the vast database of the internet, but
domains.
This raises the question: How would we recognize the creation of true
AGI? Measuring AGI often turns to the well-known Turing test. The test is
16 Goertzel, B. Artificial General Intelligence. Springer Publishing, 2007 at 1-2.
17 Id at 3.
18 Id.
19 Id at 27.
defined loosely as a test that asks an AI program to simulate a human in a
Turing test is that it is a sufficient but not necessary criterion for artificial
general intelligence.20
useful examples, inspiration, and ideally, results for AGI. Narrow ANI
provide the necessary tools and framework, or at least inspiration for key
models will also help to provide positive interactive informational models for
developing AGI.
intellect that is much smarter than the best human brains in practically every
field, including scientific creativity, general wisdom and social skills.21 ASI
may range from a computer that just barely surpasses human intelligence to
one that is more intelligent than the most intelligent human being by orders of
magnitude. Importantly, ASI will not only entail intelligence that is quantitatively
faster than the human mind, but also intelligence that is more advanced in
20 Id at 28.
21 Bostrom, N. How long before superintelligence? Linguistic and Philosophical Investigations, 2006, Vol. 5, No. 1, pp. 11-30.
Two major developments are necessary for the advent of AGI/ASI: first,
something very near to. Kurzweil has estimated that the required
computations per second (cps). To put this in perspective, the world's fastest
cps. However, with the cost of this type of computing power, it is more
step to its creation. There are three common strategies to modeling AGI/ASI
to model an AGI/ASI system for us, or, even more ambitious, to design an ANI
computer program or system with the sole purpose of doing research on and
coding an AGI/ASI system. Once AGI has been reached, it is inevitable under
the Law of Accelerating Returns that ASI will follow quickly behind.
Both AGI and ASI, as compared to ANI, have vast liability implications.
Artificial general intelligence has the ability to not only be used by a human
has the independent ability to decide to engage in behaviors that could cause
harm. The liability implications for AGI or ASI include all those liability
22 Kurzweil at 118.
problems that have faced courts as a result of human actors. However, they
also include new liability implications, the foremost of which is this: when
accountable? The creator? The manufacturer? The user? These issues among
others will be examined infra. Furthermore, some have posited that ASI may
bring about catastrophic changes to our society, including the potential for the
liability for injury caused by artificial intelligence. The idea that humanity
could at some point develop machines that could autonomously think has
been part of the cultural zeitgeist since the earliest days of our civilization.23
artificial intelligence is already upon us, and the technology already influences
our day-to-day interactions with one another. Futurist Ray Kurzweil describes
23 McCorduck, Pamela. Machines Who Think: A Personal Inquiry Into the History and Prospects of Artificial Intelligence, at 23-24 (2004).
Agricultural Revolution,24 but has expanded much more rapidly in
the wake of the Industrial Revolution. The Law explains that more advanced
societies have the ability to progress at a much quicker rate simply because
This concept may be facially simple, but it explains the current state of
progress in artificial intelligence and the need for a definitive framework for
progress of the entire 20th Century was achieved in only 20 years, between
2000 and 2014, and another century's worth will be achieved by
linear, and instead, as exponential. For this reason, the need for a liability
framework is strikingly clear; ANI technologies are already at hand, while AGI
and ASI technologies will likely arrive by 2045 by some estimates26. Other
estimates are even less conservative: The level of human knowledge doubled
during the period from 1750 to the twentieth century. And since 1965 the level
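The doubling dynamic described above can be made concrete with a small numerical sketch. The doubling period used here is purely illustrative and is not a figure taken from the text:

```python
import math

def cumulative_progress(years: float, doubling_period: float) -> float:
    """Cumulative progress, measured in units of 'one year of progress
    at today's rate', if the rate of progress doubles every
    `doubling_period` years: the integral of 2**(t/d) from 0 to `years`."""
    d = doubling_period
    return d / math.log(2) * (2 ** (years / d) - 1)

# With a hypothetical 14-year doubling period, a century of calendar
# time yields far more than a century's worth of progress at today's
# rate, whereas a linear model would yield exactly 100 units.
exponential = cumulative_progress(100, 14)
linear = 100.0
```

The contrast between `exponential` and `linear` is the point of Kurzweil's argument as summarized above: intuition extrapolates linearly, while the underlying process compounds.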
24 Around 12,000 BC.
25 Kurzweil, Ray. The Singularity is Near, at 39. Viking/MIT Press, 2005.
26 Id at 234.
27 Tolocka TR. Regulated mechanisms. Technologija. 2008 at 111.
Truly, [t]he prevalence of AI in society means that the more people use
feelings and emotions, then the laws would need to be altered to encompass
The goals of traditional tort liability are many: to reduce the possibility
incentives to engage in behavior that avoids it, to the extent that such
behaviors are not more costly than the alternatives. By creating a framework
for loss shifting from injured victims to tortfeasors, tort law deters unsafe
28 Richard C. Sherman, The Surge of Artificial Intelligence: Time To Re-examine Ourselves. Implications: Why Create Artificial Intelligence?, (1998). www.units.muohio.edu/psybersite/ accessed December 13, 2016.
29 Cerka P, et al., Liability for damages caused by artificial intelligence, Computer Law & Security Review (2015) at 2.
30 See, e.g., Priest, G., Satisfying the Multiple Goals of Tort Law, 22 Val. U. L. Rev. 643, 648 (1988); see also generally the classic case of United States v. Carroll Towing Co., 159 F.2d 169 (2d Cir. 1947).
harmful activity to the extent that the cost of accidents exceeds the benefits of
the activity.31
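This cost-benefit balancing has a classic algebraic form in Judge Learned Hand's formula from United States v. Carroll Towing, cited above: an actor is negligent for failing to take a precaution when the burden B of the precaution is less than the probability P of the injury multiplied by the gravity L of the loss. A minimal sketch, using hypothetical dollar figures chosen only for illustration:

```python
def negligent(burden: float, probability: float, loss: float) -> bool:
    """Learned Hand's B < P * L test: omitting a precaution is
    negligent when its burden is less than the expected cost of the
    accident it would have prevented."""
    return burden < probability * loss

# Hypothetical figures: a $1,000 safeguard against a 1% chance of a
# $500,000 injury. The expected loss is $5,000, so omitting the
# $1,000 safeguard is negligent under the formula, while a $10,000
# safeguard exceeding the expected loss would not be required.
cheap_safeguard_required = negligent(1_000, 0.01, 500_000)
costly_safeguard_required = negligent(10_000, 0.01, 500_000)
```

The formula captures in one inequality the deterrence rationale described above: liability attaches only where avoiding the harm costs less than the harm itself.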
examine two specific frameworks for liability: traditional negligence and strict
liability. As there is no existing case law setting the criteria for injury caused
reasonable norms of conduct create liability for said actions. Liability in this
which harm has been caused. A cause of action based on negligence requires
proof of four elements. The defendant owes (1) a duty of care to the plaintiff
based upon a certain standard of conduct, which (2) has been breached, the
31 Abbott, R. Allocating Liability for Computer Generated Torts. Draft working paper, 2016.
32 It has been suggested that the only existing framework for legal liability exists under customary international law: "In the absence of direct legal regulation of AI, we can apply article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts, which states that a person (whether a natural person or a legal entity) on whose behalf a computer was programmed should ultimately be responsible for any message generated by the machine. Such an interpretation complies with a general rule that the principal of a tool is responsible for the results obtained by the use of that tool, since the tool has no independent volition of its own. So the concept of AI-as-Tool arises in the context of AI liability issues, which means that in some cases vicarious and strict liability is applicable for AI actions." Cerka P, et al., Liability for damages caused by artificial intelligence, Computer Law & Security Review (2015) at
breach having (3) proximately caused (4) the injury complained of.33 However,
the concept of duty is a legal fiction; it does not mean that individual inquiry is
always done into the reasonable nature of the actions taken. To the contrary, it
place of the ordinary reasonable actor. Proximate or legal causation is also the
generally will be held not to exist if the defendant could not reasonably foresee
injury being caused by his or her actions. That is to say, [i]n practice, for a
reasonable care, that the defendant breached that duty, that the breach caused
the plaintiff's damages, and that the plaintiff suffered compensable damages.35
predict the behavior of many AI systems, and thus impossible to imagine the
33 W. Prosser & W. Keeton, Handbook on the Law of Torts § 76 n.1 (5th ed. 1984) at 358.
34 Id.
35 Abbott, R. Allocating Liability for Computer Generated Torts. Draft working paper, 2016 at 12.
conditions necessary to prove proximate or legal causation for injury.36
possibility for injury, but also the possible plaintiff.37 This brings us to the second
other words, strict liability is liability without fault.38 Strict liability for
tortious conduct has been famously applied to the ownership of wild animals, or
animals ferae naturae. At common law, wild animals are considered by their very nature
dangerous. However, they are also not automatically considered to be under the
true to this day: [W]ild animals exist throughout nature, they are generally
not predictable or controllable, and therefore, without more, they are neither
the property nor the responsibility of the owner or occupier of land on which
36 "[U]nlike traditional engineering and design, the actual functioning of an autonomous artificial agent is not necessarily predictable in the same way as most engineered systems. Some artificial agents may be unpredictable in principle, and many will be unpredictable in practice. Predictability is critical to current legal approaches to liability." Asaro, P. The Liability Problem for Autonomous Artificial Agents. 2015 at 2. This author would argue that ANI and AGI/ASI have differences in whether actions are foreseeable. See supra.
37 See classically Palsgraf v. Long Island Railroad Co., 248 N.Y. 339, 162 N.E. 99 (1928).
38 Black's Law Dictionary, 8th ed. 2004.
39 Union Pacific Railroad Company v. Nami, 2016 WL 3536842 (Tex. June 24, 2016); see also generally § 3:50. Wild animals: Doctrine of animals ferae naturae, 1 Tex. Prac. Guide Torts § 3:50.
To establish strict liability under common law standards, it must be
shown that the defendant tamed, confined, or otherwise controlled the wild
possession, that person is not liable for their actions.40 However, if one has the
injury to another; [a] landowner can be held strictly liable for the acts of
animals ferae naturae, that is, wild animals, against invitees on the owner's
lands if the landowner has reduced the animal to his or her possession and
Strict liability has also, and most commonly, been applied to either
liability for such activities derives from the English common law case of
Fletcher v. Rylands, 159 Eng. Rep. 737 (1865), reversed, Fletcher v. Rylands, 1
L.R.-Ex. 265 (1866), affirmed, Rylands v. Fletcher, 3 L.R.-E. & I. App. at 330.
40 See Glave v. Michigan Terminix Co., 159 Mich. App. 537, 407 N.W.2d 36 (1987).
41 Nicholson v. Smith, 986 S.W.2d 54, 60 (Tex. App. - San Antonio 1999).
42 Baldwin's Oh. Prac. Tort L. § 28:1 (2d ed.)
the old coal workings and, therefore, were not negligent
themselves. Moreover, the facts did not quite fit into the existing
tort pigeonholes: there was no trespass because the damage from
flooding was indirect and consequential, as opposed to direct and
immediate, and there was no nuisance, absent evidence of
something hurtful or injurious to the senses or of damage of a
continuous or recurring nature.43
(2) This strict liability is limited to the kind of harm, the possibility of which
makes the activity abnormally dangerous.
(a) existence of a high degree of risk of some harm to the person, land or
chattels of others;
(b) likelihood that the harm that results from it will be great;
(e) inappropriateness of the activity to the place where it is carried on; and
43 Gerald W. Boston, Strict Liability for Abnormally Dangerous Activity: The Negligence Barrier, 36 San Diego L. Rev. 597 (1999)
(f) extent to which its value to the community is outweighed by its dangerous
attributes.
When examining these requisite prongs in the context of
harm if harm does occur, lack of accepted safeguards for use of AI, and
44 In one slightly humorous example, the United States District Court for the Eastern District of Missouri ruled that the use of a computer program to simulate human interaction could give rise to liability for fraud. In re Ashley Madison Customer Data Sec. Breach Litig., 148 F. Supp. 3d 1378, 1380 (JPML 2015). Among the claims related to a data breach on the infamous Ashley Madison online dating website in 2015 that resulted in mass dissemination of user information were allegations that defendants were engaging in deceptive and fraudulent conduct by creating fake computer hosts or "bots," which were programmed to generate and send messages to male members under the guise that they were real women, inducing users to make purchases on the website. It is estimated that as many as 80% of initial purchases on the website (millions of individual transactions) were conducted by a user communicating with a bot operating as part of Ashley Madison's automated sales force for the website. Quinn, E. Artificial Intelligence Litigation: Can the Law Keep Pace with The Rise of the Machines? Accessed December 14, 2016.
45 For less humorous examples, see the 2010 Flash Crash discussed supra. See also Payne v. ABB Flexible Automation, Inc., 116 F.3d 480, No. 96-2248, 1997 WL 311586, *1-*2 (8th Cir. 1997) (per curiam) (unpublished table decision); Hills v. Fanuc Robotics Am., Inc., No. 04-2659, 2010 WL 890223, *1, *4 (E.D. La. 2010); Bynum v. ESAB Grp., Inc., 651 N.W.2d 383, 384-85 (Mich. 2002) (per curiam); Owens v. Water Gremlin Co., 605 N.W.2d 733 (Minn. 2000), also cited in the above article.
internet connection exists. Finally, we can also accept without much
further below.
Using the model of ANI, AGI, and/or ASI as a wild animal, one that
between both the person(s) who control our hypothetical artificial intelligence
system, as well as the level of AI that hypothetically has caused harm prior to
46 Other scholars disagree: "Strict liability goes further and designates or stipulates a party, usually the manufacturer or owner, as strictly liable for any damages caused. This model of liability applies to such things as the keeping of wild animals. It is expected that tigers will harm people if they get free, so as the keeper of a tiger you are strictly liable for any and all damages the tiger may cause. We apply normal property liability to domesticated animals, however, which we expect to not harm people under normal circumstances. It has been suggested that we could apply this to robotics and perhaps designate certain advanced AIs as essentially wild animals, and others as domesticated. But then the question arises: How does one determine whether an advanced AI or robot is appropriately domesticated, and stays that way? A designated liable person may not know the extent to which the autonomous system is capable of changing itself once it is activated. More problematic, however, is that a system of strict liability might result in the slow adoption of beneficial AI technologies, as those who are strictly liable would have a large and uncertain risk, and be less likely to produce or use the technology, or soon go out of business." Asaro, P. The Liability Problem for Autonomous Artificial Agents. 2015 at 5 (citations omitted). However, the user-based and level-based criteria developed herein are an attempt to address and mitigate some of these concerns.
E. Liability Standards for ANI
intelligence has three classifications of people who interact with it: the
framework. Given the remoteness of this class of people from eventual injury,
it is important to hold them liable only for those injuries that are intentional
level.
part, be corporations. Thus, they represent the best cost absorber should the
context of narrow AI, most "[problems with] strict liability [disappear] where
the manufacturer has the opportunity and the ability to predict or constrain
the set of possible inputs, i.e., where the task-language is rigidly delimited."47
cost of about $242 billion.48 Direct49 and anecdotal50 evidence suggests that
removal of human error would result in significantly fewer motor vehicle accidents.
technology will likely come from those who seek to use it commercially.
47 Cole, G., Tort Liability for Artificial Intelligence and Expert Systems, 10 Computer L.J. 127 (1990)
48 US Department of Transportation, FEDERAL AUTOMATED VEHICLES POLICY: ACCELERATING THE NEXT REVOLUTION IN ROADWAY SAFETY 5, Sept. 2016, available at https://www.transportation.gov/AV/federal-automated-vehicles-policy-september-2016. General statistics related to motor vehicle accidents are published by the Insurance Institute for Highway Safety Highway Loss Data Institute, available at http://www.iihs.org/iihs/topics/t/general-statistics/fatalityfacts/overview-of-fatality-facts. Cited also in Abbott, R. Allocating Liability for Computer Generated Torts. Draft working paper, 2016 at 5.
49 Michele Bertoncello and Dominik Wee, Ten Ways Autonomous Driving Could Redefine the Automotive World, McKinsey & Co., June, 2015, available at http://www.mckinsey.com/industries/automotive-and-assembly/our-insights/ten-ways-autonomous-driving-could-redefine-the-automotive-world.
50 https://m.youtube.com/watch?v=G997UdhuhvQ
The third category of users encompasses the end-user of ANI technology.
We return in this instance to the individual or small group of people who have no
should expose this class of people to liability, but there is no argument that strict
development.
operators. This makes it difficult to identify the user or operator, who would
resulting in the unpredictability of a system stems from the datasets used for
learning, not a legally responsible agent. In layman's terms, AGI/ASI will not
they will make independent decisions that may result in injury to a person.
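The point that a learning system's unpredictability stems from its training data rather than from any rule the programmer wrote can be illustrated with a toy learner. The datasets and labels below are entirely hypothetical:

```python
def learned_decision(training_data, observation):
    """A trivial 1-nearest-neighbour 'learner': its decision for a new
    observation is determined wholly by the examples it was trained on,
    not by any behaviour the programmer explicitly coded."""
    nearest = min(training_data, key=lambda ex: abs(ex[0] - observation))
    return nearest[1]

# The identical program, trained on two different (hypothetical)
# datasets, reaches opposite decisions for the same input. The
# programmer cannot foresee the behaviour without knowing the data.
dataset_a = [(0.0, "brake"), (10.0, "proceed")]
dataset_b = [(0.0, "proceed"), (10.0, "brake")]
decision_a = learned_decision(dataset_a, 2.0)  # "brake"
decision_b = learned_decision(dataset_b, 2.0)  # "proceed"
```

Even in this minimal sketch, tracing an injurious decision back to a "legally responsible agent" requires examining the training data, not just the code, which is the difficulty the passage above describes.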
well as among policy makers and researchers. Such concerns are not
technological innovation and change. While the impacts of the adoption of any
technology are in some sense uncertain, and result in many and various
that this sense of concern stems from the recognition that autonomous
and general effects on society, but that they may also be out of control, in the
sense that these effects will occur beyond the scope of human responsibility.51
This statement is uniquely true as it applies to AGI/ASI, and thus, any legal
framework for these emerging technologies will differ from traditional legal
actors liable for damages that they cause. However, the creator/manufacturer
Holding the master liable for the independent actions of his or her servant,
despite the fact that said servant may be artificial in nature is a resolution
that has its basis in long-standing and accepted legal precedent, and would
AGI/ASI technologies.
there are concerns with strict liability or basic negligence standards as they
three main cases where strict liability applies: (a) injuries by wild animals; (b)
analysis:
It may also be argued that joint and several liability under a
entity within a set of interrelated [entities] may be held jointly and severally
52 Cerka P, et al., Liability for damages caused by artificial intelligence, Computer Law & Security Review (2015) at 11.
53 See generally, Richard W. Wright, The Logic and Fairness of Joint and Several Liability, 23 Mem. St. U. L. Rev. 45 (1992).
liable for the actions of other entities that are part of the group.54 However,
end produce the most just outcome. A more lax standard will hasten
traditional products liability issues, perhaps once machines become safer than
54 Vladeck, D. Machines without Principals: Liability Rules and Artificial Intelligence. Wash. L. Rev. 89 (2014): 117.
55 "Existing forms of joint and several liability permit those harmed to seek monetary damages from the deepest pockets among those parties sharing some portion of the liability. This works well where there is a large corporation or government that is able to pay damages. While the amount they pay is disproportionate to their contribution to the damages, those harmed are more likely to be adequately compensated for their damages. One result of this is that those manufacturers likely to bear the liability burden would seek to limit the ability of consumers and users to modify, adapt or customize their advanced AI and robotics products in order to retain greater control over how they are used. This would stifle innovation coming from the hacking, open-source and DIY communities. The hacking, open-source and DIY communities, while a powerful source of innovation, will have limited means of compensating those who might be harmed from their products, not just those who chose to use them." Asaro, P. The Liability Problem for Autonomous Artificial Agents. 2015 at 5
56 Abbott, R. Allocating Liability for Computer Generated Torts. Draft working paper, 2016.
57 Abbott, R. Allocating Liability for Computer Generated Torts. Draft working paper, 2016 at 20.
people, automation will result in net safety gains even taking this competing
G. Conclusion
Perhaps the causal agent for the current lack of a unified legal framework
surrounding the problem itself. As with much of legal evolution up to this point,
the most likely scenario is that the solution will assert itself as cases of injury
will force courts to mold solutions to the problems inherent in each individual
artificial intelligence responsible for harm will be an important part of how this
unique legal framework is developed. The criteria posited herein represent one
58 Id.