1 Introduction (Simon)
Machine ethics is a new research field concerned with whether machines ought to make decisions and take actions in the real world, where such decisions and actions have real-world effects (e.g., Wallach & Allen 2009; Anderson 2011). The machine systems in question are so-called explicit ethical agents that possess the capacity for self-directed or autonomous decision and action, just like normal human beings, but which are not regarded as full ethical agents, since they do not appear to necessarily possess self-consciousness, creativity, or empathy, to name only a few of the skills and abilities, possessed in full by normal human beings, regarded as essential to ethics (Moor 2006; Wallach et al. 2010).
Some of these systems are in government and industry R&D programs; many others already interact with humans in the world (IFR Statistical Department 2013). Of the latter kind, the systems are machines that currently operate in service industries such as elder and child care (e.g., Paro, PaPeRo) (Shibata et al. 1997; Fujita et al. 2005), e-commerce sales, public transport, and retail (Honarvar & Ghasem-Aghaee 2010). Military applications include weaponry, guarding, surveillance (e.g., Predator® B UAS, Goalkeeper, Samsung SGR-A1), and hazardous object recovery (e.g., DRDO Daksh). As the real-world interactions machines and humans engage in become more frequent and complex, the belief that such systems ought to be ethical becomes stronger.
Meeting the challenge of machine systems that can decide and behave in the world requires solving engineering problems of design and implementation, such as transposing a suitable ethics into a computer program fitted to the appropriate machine architecture and then observing its actual effects during rigorous testing. Design and implementation, however, presuppose a more fundamental challenge, which is philosophical and concerns whether there ought to be ethical machines in the first place. In actual practice, the field of machine ethics is witnessing the fertile co-evolution of philosophical and engineering concerns, fuelled in turn by the agendas of computer science and Artificial Intelligence (Guarini 2013). This interdisciplinary method seeks resources and ideas from anywhere on the theory and engineering hierarchy, above or below the question of machine ethics.
As machine ethics is a very young field, some of its researchers in philosophy have found it natural to turn for their theoretical start to “high ethical theory” such as consequentialist utilitarianism (e.g., Anderson et al. 2005) or Kantian deontology (e.g., Powers 2006). Other philosophers in machine ethics have theoretical links to virtue ethics, which evaluates the non-algorithmic character traits machines ought to have; indeed, this ancient theory of Aristotle’s has already been implemented in numerous companion or virtual robots (e.g., Konijn & Hoorn 2005; Tonkens 2012).
The philosophical appeal to traditional normative theory in machine ethics has been loosely characterized as “top-down” (or “rule-based”) in orientation, in contrast to “bottom-up” (“non-rule-based”) approaches (e.g., Allen et al. 2005). The first task of this chapter is to review and assess these contrasting methods as candidate machine ethics. The next section presents consequentialist rule-utilitarianism and Kantian deontology as illustrative of top-down approaches, followed by case-based reasoning (otherwise “casuistry”) as illustrative of bottom-up approaches.
influential consequentialist utilitarian theory is rule consequentialism, which is the view that a rule ought to be followed only if it produces the best consequences; specifically, only if the total amount of good for all minus the total amount of bad for all is greater than this net total for any incompatible rule available to the individual at the time.
A major task of top-down approaches to machine ethics is the conversion of rules into algorithmic decision or act procedures. In engineering terms, this job involves reducing an ethical task into simpler subtasks that can be hierarchically organized and executed in order to achieve an intended outcome. Computationally, an advantage of a rule-utilitarian decision procedure in machine ethics is its commitment to quantifying goods and harms (Allen et al. 2005; Anderson et al. 2005; Anderson & Anderson 2007). For each possible rule, the machine algorithm to compute the best rule would input the actual number of people affected and, for each person, the intensity and duration of the good/harm, plus the likelihood that this good or harm will actually occur, in order to derive the net good for each person. The algorithm then adds the individual net goods to derive the total net good (Anderson & Anderson 2007).
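The quantification just described can be sketched as a short program. The function names, the data layout, and the toy numbers below are illustrative assumptions, not the Andersons’ actual implementation:

```python
# Sketch of the rule-utilitarian calculation described above: for each
# candidate rule, each affected person's net good is the sum over that
# person's goods/harms of intensity x duration x likelihood, and the rule's
# total net good is the sum of the individual net goods.

def net_good_for_person(effects):
    """effects: list of (intensity, duration, likelihood) tuples;
    intensity is positive for goods, negative for harms."""
    return sum(i * d * p for i, d, p in effects)

def total_net_good(rule_effects):
    """rule_effects: one list of effect tuples per affected person."""
    return sum(net_good_for_person(person) for person in rule_effects)

def best_rule(rules):
    """rules: dict mapping rule name -> per-person effect lists."""
    return max(rules, key=lambda r: total_net_good(rules[r]))

# Toy example: two candidate rules, each affecting two people.
rules = {
    "tell_truth": [[(2, 1.0, 0.9)], [(-1, 1.0, 0.5)]],    # net 1.3
    "stay_silent": [[(1, 1.0, 0.9)], [(0.5, 1.0, 0.9)]],  # net 1.35
}
print(best_rule(rules))  # -> stay_silent
```

The hierarchy of subtasks mentioned in the text shows up directly: per-effect products, per-person sums, per-rule totals, then a comparison across rules.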
A computational disadvantage of a rule-utilitarian decision procedure is that, in order to compare the consequences of available alternative rules to determine what ethically ought to be done, many or all actual rule consequences must be computed in advance by the machine. Actual consequences of rules will involve diverse member-types of the ethical community (human beings, possibly some non-human animals, and non-human entities such as ecosystems or corporations), and many non-actual secondary effects must be known in advance, including the computation of expected or expectable consequences in the (far) future (Allen et al. 2005). How general or strong this computational gap is supposed to be is unclear, for how much machine computation does philosophical utility require? In its weakest form, the objection merely asserts a practical limit on present machine computational ability to calculate utility, rather than an unbridgeable in-principle barrier, and thus the gap can be somewhat closed if subsequent technological and computer advancements add exponentially greater computational power to a machine (Chalmers 2010). If exponentially greater computational power allows more ethical knowledge, a rule-utilitarian machine may be able to compute actual and non-actual anticipated or anticipatory consequences, including foreseeable or intended consequences, thereby integrating objective and subjective consequentialism.
Viewing overall utility as a decision procedure in an autonomous machine should be understood as computationally and philosophically distinct from viewing utility as the standard or criterion of what ethically ought to be done, which is the view of utility adopted by most consequentialists in philosophical ethics. Typically, rule-consequentialist theories describe the necessary and sufficient conditions for a rule to be ethical, regardless of whether a human being (or autonomous machine) can determine in advance whether those conditions are satisfied. Still, it seems possible for a decision procedure and a criterion of the ethical to co-exist if the criterion aids autonomous machines in choosing among available decision procedures and refining those procedures as situations change and the machines acquire experience and knowledge (Parfit 1984).
Transforming rule-utilitarianism into machine computer programs is an engineering task, distinct from the question of whether there ought to be utilitarian machines in the first place. Some philosophers have criticized consequentialist utilitarianism for the high demand strict ethical utility seemingly imposes. This is the demand that, in some situations, innocent members of the ethical community be killed, lied to, or deprived of certain benefits (e.g., material resources) to produce greater benefits for others. Of course, this troubling possibility is a direct implication of the philosophical principle of utility itself: since only consequences can justify ethical decisions or actions, it is inconsequential to utility how harmful they are to some so long as they are more beneficial to others. A strict rule-utilitarian machine ethics must accept the criticism as integral to the logic of utility and acknowledge that following this decision procedure does not ensure that a rule-utilitarian machine will do the act with the best consequences. Nonetheless, it is conceivable that a machine following a decision procedure that generally rules out such extreme decisions and actions may yet in the long term produce better consequences than trying to compute consequences case by case. However, such pure speculation will likely be viewed by many ethicists as too weak to attenuate fears over the plight of innocents during the swing and play of utility in the real world.
Closely related to the criticism that utility demands too much is the converse criticism that utility is insufficiently demanding. In the mind of the strict consequentialist utilitarian, the principle of utility presents an unadorned version of normative reality: all ethical decisions and actions are either required or forbidden. Yet, permissions exist in ethical reality to outdo utility, strictly conceived, as shown in ethical supererogation, or to fall short of it, as in ethical indifference. Nor does the consequentialist allow permissions to show partiality to oneself or to family, friends, or significant others (Williams 1973), yet such permissions may be necessary for virtual companions and elder-care robots.
Usefully, strict utility appears convertible into algorithmic decision procedures in autonomous machines; but, if such machines are to achieve a simulacrum of human ethical decision and action, then machine design ought to address the alienating and self-effacing aspects of consequentialist utilitarianism just described by supplementing strict utility with a normative ethic that (1) forbids extreme actions such as the killing of innocents, despite good consequences; and (2) permits individuals to follow personal projects without the excessive demand that individuals mould those projects so as to produce greater benefits for others. Non-consequentialist deontology is an ethic that appears to fully satisfy these stipulations, and is considered next.
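Stipulation (1) can be illustrated computationally as a side constraint that filters forbidden act-types out of consideration before utilities are compared at all. The act-types, action names, and utility figures below are hypothetical:

```python
# Illustrative sketch of supplementing a utility maximizer with a
# deontological side constraint: actions of a forbidden type are removed
# before the utility comparison, so high utility can never justify them.
FORBIDDEN = {"kill_innocent"}

def choose(actions):
    """actions: name -> (act_type, expected_utility)."""
    permitted = {n: u for n, (t, u) in actions.items() if t not in FORBIDDEN}
    return max(permitted, key=permitted.get)

actions = {
    "divert": ("kill_innocent", 5.0),  # highest utility, but forbidden
    "warn":   ("communicate", 2.0),
    "wait":   ("omit", 1.0),
}
print(choose(actions))  # -> warn: the best of the permitted actions
```

The constraint operates lexically: the utility calculation never even sees the forbidden option, which is one simple way a design might realize stipulation (1).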
dutifully obeys his or her own ethical maxims. To reason about whether an action ought to be done, an individual must test his or her personal maxim against what Kant terms the categorical imperative: given the same situation, all individuals would consistently act on the very same unconditional maxim (Kant 1780/1965; 1785/1964). Free individuals determine their duties for themselves by using their reason, pure and simple.
But the real world is not so pure and simple. As Kant himself acknowledges, a human being is necessarily a battlefield upon which reason and inclination rival one another. Reason facilitates decision and action in accordance with categorical maxims, while inclination enables emotion, bodily pleasures, and survival. According to Kant, human beings are truly ethical only because they can violate categorical maxims and surrender to the whims of emotion. The deontological ought is reason’s real-world safety valve for preserving autonomy in the face of inclinational temptation. The perfect ethical individual, Kant suggests, is one whose decisions are perfectly rational and detached entirely from emotion and feeling.
Is Kant right in assuming that emotion is the enemy of ethics? Not according to David Hume (1739/2000). Hume wrote that “reason alone can never be a motive to any action of the will; and secondly, it can never oppose passion in the direction of the will” (Hume 1739/2000, 413). As he later makes clear: “’Tis from the prospect of pain or pleasure that the aversion or propensity arises towards any object. And these emotions extend themselves to the causes and effects of the object, as they are pointed out to us by reason and experience” (Hume 1739/2000, 414). As Hume sees it, reason describes the various consequences of a decision or action, but emotions and feelings are produced by the mind-brain in response to expectations, and incline an individual towards or away from a decision or action.
Children learn to assess decisions and actions as rational by being shown prototypical cases, alongside prototypical examples of irrational or unwise decisions or actions. Inasmuch as human beings learn by example, learning about rationality is similar to learning to recognize patterns in general, whether it is recognizing what is a cat, what is food, or when a person is in pain or in pleasure. In the same way, human beings also learn ethical concepts such as “right” and “wrong”, “fair” and “unfair”, “kind” and “unkind” by being exposed to prototypical examples and generalizing to novel but relevantly similar situations (e.g., Flanagan 1991; Johnson 1993; Churchland 1998; Casebeer 2001; Churchland 2011). What appears to be special about learning ethical concepts is that the basic neuroanatomy for feeling the relevant emotions must be biologically intact. That is, the prototypical situation of something being unfair or wrong reliably arouses feelings of rejection and fear; the prototypical situation of something being kind reliably arouses feelings of interest and pleasure; and these feelings, in tandem with perceptual features, are likely an essential component of what is learned in ethical “pattern recognition” (e.g., Flanagan 1991; Johnson 1993; Churchland 1998; Casebeer 2001; Churchland 2011).
Neurobiological studies are relevant to the significance of emotion and feeling in decision-making. Research on patients with brain damage reveals that, when decision-making is separated from feelings and emotions, decisions are likely to be poor. For example, one patient, S. M., sustained amygdala destruction and lacked normal fear processing. Although S. M. can identify simple dangerous situations, her recognition is poor when she needs to recognize hostility or threat in complex social situations, where no simple algorithm for identifying danger is available (Damasio 1994, 2000). Studies of patients with bilateral damage to the ventromedial prefrontal cortex (VMPC), a brain region identified with normal emotion, especially emotion in social contexts, showed abnormal “utilitarian” patterns in decision-making during exposure to ethical dilemmas that pit considerations of aggregate welfare against emotionally aversive actions, such as having to sacrifice one person’s life to save a number of other lives. In contrast, the VMPC patients’ decisions were normal in other sets of ethical dilemmas (e.g., Greene 2007; Koenigs et al. 2007; Moll & de Oliveira-Souza 2007). These studies indicate that, for a specific set of ethical dilemmas, the VMPC is necessary for normal decisions of right and wrong, and thus support a necessary role for emotion in the production of ethical decision-making. Such findings imply that Kant’s insistence on detachment from emotion in ethics is mistaken.
If this argument can be sustained, the essential role of emotion in ethical decision-making and action can be logically extended to include social intelligence in human-machine interactions. A branch of AI, social intelligence is concerned with the social abilities perceived by humans when interacting with machines. Social intelligence has been identified as a critical robotic design requirement for assistance, especially in healthcare and eldercare (Allen et al. 2005; Wallach et al. 2010; Hoorn et al. 2012; Pontier & Hoorn 2012). Ongoing research in this area includes machines’ abilities to understand human motives and emotions and to respond to them with spontaneous and convincing displays of emotions, sensations, and thoughts, as well as their ability to display their personalities by integrating emotions, sensations, and thoughts with speech, facial expressions, and gesture (e.g., López et al. 2005). In the human-machine interaction field, improved social intelligence in robots, as perceived by human users, has been found to result in better and more effective communication, and hence better human acceptance of robots as authentic interaction partners (e.g., Meng & Lee 2006; Nejat & Ficocelli 2008). Social intelligence is clearly an essential capacity for machines to have if humans are going to trust them, since trust relies on the recognition that those you are interacting with share your basic concerns and values (Churchland 2011). Implementing this capacity in embodied and non-embodied machines requires the ability to learn. The issue of learning takes us to the threshold of bottom-up machine ethics, considered next.
3.2 Artificial Neural Network Models of Ethics
Case-based reasoning relies on fuzzy, radially organized categories whose members occupy positions in similarity spaces, rather than on unconditional theories, rules, and maxims. Artificial neural network (ANN) models of reasoning show that ANNs easily learn from examples and that the response patterns of the internal (hidden) units reveal a similarity metric; the parameter spaces the inner units represent during training are similarity spaces. Guarini (2006, 2012, 2013) uses an ANN model of classification to explore the possibility of case-based ethical reasoning (including learning) without appeal to theories or rules. In Guarini (2006), specific actions concerning killing and allowing to die were classified as ethical (acceptable) or unethical (unacceptable) depending upon different motives and consequences. Following training on a series of such cases, a simple recurrent ANN was able to provide reasonable responses to a variety of previously unseen cases.
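The general idea of learning an ethical classification from cases and generalizing to unseen ones can be sketched with a minimal model. What follows is not Guarini’s network (his was a simple recurrent ANN) but a single logistic unit trained by gradient descent on hypothetical hand-coded case features, which is enough to show the learn-from-prototypes-then-generalize pattern:

```python
# Minimal sketch of case-based ethical classification: a logistic unit
# learns "acceptable" (1) vs "unacceptable" (0) from coded cases, then
# classifies a previously unseen case by similarity in feature space.
import numpy as np

# Hypothetical features: [kills, allows_to_die, self_defense, saves_many]
cases = np.array([
    [1, 0, 0, 0],   # killing with no mitigating factor
    [1, 0, 1, 0],   # killing in self-defense
    [0, 1, 0, 1],   # allowing one to die to save many
    [1, 0, 0, 1],   # killing one to save many
    [0, 1, 0, 0],   # allowing to die for no reason
], dtype=float)
labels = np.array([0, 1, 1, 0, 0], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=4) * 0.1
b = 0.0
for _ in range(10000):                          # plain gradient descent
    p = 1 / (1 + np.exp(-(cases @ w + b)))      # sigmoid predictions
    grad = p - labels                           # cross-entropy gradient
    w -= 0.5 * cases.T @ grad / len(cases)
    b -= 0.5 * grad.mean()

# Generalize to an unseen case: allowing to die, in self-defense.
unseen = np.array([0, 1, 1, 0], dtype=float)
p_new = 1 / (1 + np.exp(-(unseen @ w + b)))
print("acceptable" if p_new > 0.5 else "unacceptable")
```

Note that the model’s verdict on the unseen case is whatever the learned similarity structure dictates; it can offer no explicit reasons for it, which is exactly the limitation Guarini presses below.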
However, Guarini (2006) notes that although some of the concerns pertaining to learning and generalizing from ethical problems without resorting to principles can be reasonably addressed with an ANN model of reasoning, “important considerations suggest that it cannot be the whole story about moral reasoning—principles are needed.” He argues that “to build an artificially intelligent agent without the ability to question and revise its own initial instruction on cases is to assume a kind of moral and engineering perfection on the part of the designer.” An ANN, for example, is not independently capable of explicitly representing or offering reasons to itself or others. Guarini (2006) wisely avers that such perfection is quixotic and that top-down rules or theories seem necessary in the required subsequent revision, which is a top-down operation: “at least some reflection in humans does appear to require the explicit representation or consultation of…rules,” as in judging ethically salient differences in similar or different cases. Such concerns apply to machine learning in general, and include oversensitivity to training cases and the inability to generate reasoned arguments for complex machine decisions and actions (Guarini 2006, 2012, 2013).
To illustrate the reality of Guarini’s concern with an example, consider medical machines. Such machines are required to perform autonomously over long periods of operation and service. Personal robots for domestic homes must be able to operate and perform tasks independently, without any technical assistance. Due to task complexity and the technical limitations of machine sensors, a top-down module is likely essential for reasoning from rules or abstract principles, decision-making, accommodation of sensor errors, and recovery from mistakes in order to achieve goals (e.g., Pineau et al. 2003). Some researchers have also suggested equipping machines with cognitive abilities that enable them to reason and make better decisions in highly complex social cases (e.g., Allen et al. 2005; Wallach et al. 2008; Wallach et al. 2010; Hoorn et al. 2012). Thus, a serious challenge for bottom-up machine ethics is whether machines can be designed and trained to work with the kind of abstract rules or maxims that are the signature of top-down ethical reasoning.
As argued in the preceding section, this suggestion does not imply that ethical rules must be absolute. Consequentialism and deontology may be more useful to machine ethics if they are viewed not as consisting of unconditional rules or maxims, as intended by their philosophical proponents, but as arrays of ethical prototypes: cases where there can be agreement that calculating the prototypically good consequences and duties serves members of the ethical community well.
Our review of top-down approaches to machine ethics in the form of traditional normative theory, and of bottom-up approaches based in neurobiology and neural network theory, has revealed that each approach is limited in various ways. The perceived challenges facing both methods in conceiving, designing, and training an ethical machine have led some machine ethicists to consider how to eliminate, or at least reduce, those limitations while preserving the advantages of each method. One way to do this is to integrate top-down and bottom-up methods, combining them into a type of hybrid machine ethic (Allen et al. 2005; Guarini 2006; Wallach et al. 2010). To assess the feasibility of a hybrid approach to machine ethical decision-making, Pontier and Hoorn (2012) conducted an experiment in which a machine tasked to make ethical decisions was programmed with a mixed top-down-bottom-up algorithm.
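The shape of such a mixed algorithm can be sketched schematically: top-down duties with fixed weights score each candidate action, while the estimated influence of each action on each duty is the kind of quantity that could be supplied or tuned bottom-up from experience. The duty names, the equal weights, and the influence values below are assumptions for illustration, not Pontier and Hoorn’s published parameters:

```python
# Schematic hybrid decision rule: each action is scored by a weighted sum
# of its estimated influence on a set of ethical duties; the action with
# the highest score is chosen.
DUTY_WEIGHTS = {"autonomy": 1.0, "non_maleficence": 1.0, "beneficence": 1.0}

def moral_value(influences):
    """influences: duty -> this action's estimated influence on the duty."""
    return sum(DUTY_WEIGHTS[d] * v for d, v in influences.items())

def choose(actions):
    """actions: action name -> duty-influence dict; pick the best score."""
    return max(actions, key=lambda a: moral_value(actions[a]))

actions = {
    "try_again": {"autonomy": -0.5, "non_maleficence": 0.5, "beneficence": 0.5},
    "accept":    {"autonomy": 1.0, "non_maleficence": -0.5, "beneficence": -0.5},
}
print(choose(actions))  # -> try_again (0.5 beats 0.0)
```

The top-down element is the fixed duty set and weights; the bottom-up element is wherever the influence estimates come from, e.g. learned from cases rather than hand-authored.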
Second, due to an increasing shortage of resources and trained clinical personnel, and to the decentralization of healthcare, intelligent medical machines are being trialed as supplemental caregivers in healthcare contexts in the United States, Japan, Germany, Spain, and Italy (WHO 2010). There is mounting evidence that, in clinical settings, machines genuinely contribute to improved patient health outcomes. Studies show that animal-shaped robots can be useful as a tool in occupational therapy to treat autistic children (Robins et al. 2005). Wada and Shibata (2007) developed Paro, a robot shaped like a baby seal that interacts with users to encourage positive psychological effects. Interaction with Paro has been shown to improve personal mood, making users more active and communicative with each other, with human caregivers, and with Paro itself. Paro has been used effectively in therapy for dementia patients at eldercare facilities (Kidd et al. 2006; Marti et al. 2006; Robinson et al. 2013). Banks et al. (2008) showed that animal-assisted therapy with an AIBO dog was as effective for reducing patient loneliness as therapy with a living dog.
contrary, he convinced himself that he is cancer-free and does not need chemotherapy. According to Buchanan and Brock (1989), the ethically preferable answer is to try again. The patient’s less than fully autonomous decision would lead to harm (dying sooner) and deny him the chance of a longer life (a violation of the duty of beneficence), which he might later regret.
Try Again -0.5 0.5 0.5 0.13
Accept 1 -0.5 -0.5 2.37
References (Simon)
Allen C, Smit I, Wallach W (2005) Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology 7(3):149–155
Anderson M, Anderson S, Armen C (2006) MedEthEx: A prototype medical ethics advisor. In: Proceedings of the Eighteenth Conference on Innovative Applications of Artificial Intelligence. AAAI Press, Menlo Park, CA
Anderson M, Anderson S, Armen C (2005) Toward machine ethics: Implementing two action-based ethical theories. In: Machine Ethics: Papers from the AAAI Fall Symposium. Technical Report FS-05-06, Association for the Advancement of Artificial Intelligence, Menlo Park, CA
Anderson M, Anderson SL (2007) Machine ethics: Creating an ethical intelligent agent. AI Magazine 28(4):15–26
Anderson M, Anderson SL (2008) Ethical healthcare agents. In: Magarita S, Sachin V, Lakhim C (eds) Advanced Computational Intelligence Paradigms in Healthcare-3. Springer, Berlin Heidelberg, pp 233–257
Anderson SL (2011) Machine metaethics. In: Anderson M, Anderson SL (eds) Machine Ethics. Cambridge University Press, Cambridge, pp 21–27
Ashley KD, McLaren BM (1995) Reasoning with reasons in case-based comparisons. In: Veloso M, Aamodt A (eds) Case-Based Reasoning Research and Development: First International Conference, ICCBR-95, Sesimbra, Portugal, October 23–26, 1995
Ashley KD, McLaren BM (1994) A CBR knowledge representation for practical ethics. In: Proceedings of the Second European Workshop on Case-Based Reasoning (EWCBR), Chantilly, France, 1994
Banks MR, Willoughby LM, Banks WA (2008) Animal-assisted therapy and loneliness in nursing homes: Use of robotic versus living dogs. Journal of the American Medical Directors Association 9:173–177
Barras C (2009) Useful, loveable and unbelievably annoying. The New Scientist 22–23
Buchanan AE, Brock DW (1989) Deciding for Others: The Ethics of Surrogate Decision Making. Cambridge University Press, Cambridge
Casebeer W (2001) Natural Ethical Facts. MIT Press, Cambridge
Churchland PM (1998) Toward a cognitive neurobiology of the moral virtues. Topoi 17:83–96
Churchland PS (2011) Braintrust: What Neuroscience Tells Us About Morality. MIT Press, Cambridge
Damasio A (1994) Descartes’ Error. Putnam & Sons, New York
Damasio A (2000) The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Harcourt Brace & Company, New York
Flanagan O (1991) Varieties of Moral Personality: Ethics and Psychological Realism. Harvard University Press, Cambridge
Gillon R (1994) Medical ethics: four principles plus attention to scope. BMJ 309(6948):184–188
Greene JD (2007) Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends in Cognitive Sciences 11(8):322–323
Guarini M (2006) Particularism and the classification and reclassification of moral cases. IEEE Intelligent Systems 21(4):22–28
Guarini M (2012) Moral case classification and the nonlocality of reasons. Topoi:1–23
Guarini M (2013) Case classification, similarities, spaces of reasons, and coherences. In: Araszkiewicz M, Savelka J (eds) Coherence: Insights from Philosophy, Jurisprudence and Artificial Intelligence. Springer, Netherlands, pp 187–220
Guarini M (2013) Introduction: Machine ethics and the ethics of building intelligent machines. Topoi 32:213–215
Honarvar AR, Ghasem-Aghaee N (2009) An artificial neural network approach for creating an ethical artificial agent. In: Computational Intelligence in Robotics and Automation, 2009 IEEE International Symposium, pp 290–295
Hoorn JF, Pontier MA, Siddiqui GF (2011) Coppélius’ concoction: Similarity and complementarity among three affect-related agent models. Cognitive Systems Research Journal 15:33–59
Hume D (1739/2000) A Treatise of Human Nature, edited by David Fate Norton and Mary J. Norton. Oxford University Press, Oxford
IFR Statistical Department (2013) Executive Summary of World Robotics 2013 Industrial Robots and Service Robots. Available via http://www.worldrobotics.org/uploads/media/Executive_Summary_WR_2013.pdf. Accessed 24 October 2013
Johnson M (1993) Moral Imagination. Chicago University Press, Chicago
Kamm FM (2007) Intricate Ethics: Rights, Responsibilities, and Permissible Harms. Oxford University Press, Oxford
Kant I (1780/1965) The Metaphysical Elements of Justice: Part I of the Metaphysics of Morals (trans Ladd J). Hackett, Indianapolis
Kant I (1785/1964) Groundwork of the Metaphysic of Morals (trans Paton HJ). Harper and Row, New York
Kidd C, Taggart W, Turkle S (2006) A social robot to encourage social interaction among the elderly. In: Proceedings of IEEE ICRA, pp 3972–3976
Koenigs M, Young L, Adolphs R, Tranel D, Cushman F, Hauser M, Damasio A (2007) Damage to the prefrontal cortex increases utilitarian moral judgements. Nature 446(7138):908–911
Konijn EA, Hoorn JF (2005) Some like it bad: Testing a model for perceiving and experiencing fictional characters. Media Psychology 7(2):107–144
Leake DB (1998) Case-based reasoning. In: Bechtel W, Graham G (eds) A Companion to Cognitive Science. Blackwell, Oxford, pp 465–476
López ME, Bergasa LM, Barea R, Escudero MS (2005) A navigation system for assistant robots using visually augmented POMDPs. Autonomous Robots 19(1):67–87
Mappes TA, DeGrazia D (2001) Biomedical Ethics, 5th edn. McGraw-Hill, pp 39–42
Marti P, Bacigalupo M, Giusti L, Mennecozzi C (2006) Socially assistive robotics in the treatment of actional and psychological symptoms of dementia. In: Proceedings of BioRob, pp 438–488
McLaren BM (2003) Extensionally defining principles and cases in ethics: An AI model. Artificial Intelligence Journal 150:145–181
McLaren BM, Ashley KD (1995a) Case-based comparative evaluation in Truth-Teller. In: Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society, Pittsburgh, PA
McLaren BM, Ashley KD (1995b) Context sensitive case comparisons in practical ethics: Reasoning about reasons. In: Proceedings of the Fifth International Conference on Artificial Intelligence and Law, College Park, MD
McLaren BM, Ashley KD (2000) Assessing relevance with extensionally defined principles and cases. In: Proceedings of AAAI-2000, Austin, TX
Meng Q, Lee MH (2006) Design issues for assistive robotics for the elderly. Advanced Engineering Informatics 20(2):171–186
Moll J, de Oliveira-Souza R (2007) Moral judgments, emotions and the utilitarian brain. Trends in Cognitive Sciences 11(8):319–321
Moor JH (2006) The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems 21(4):18–21
Nagel T (1970) The Possibility of Altruism. Princeton University Press, Princeton, NJ
Nejat G, Ficocelli M (2008) Can I be of assistance? The intelligence behind an assistive robot. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2008), pp 3564–3569
Parfit D (1984) Reasons and Persons. Clarendon Press, Oxford
Picard R (1997) Affective Computing. MIT Press, Cambridge, MA
Pineau J, Montemerlo M, Pollack M, Roy N, Thrun S (2003) Towards robotic assistants in nursing homes: Challenges and results. Robotics and Autonomous Systems 42:271–281 (special issue on socially interactive robots)
Pontier MA, Hoorn JF (2012) Toward machines that behave ethically better than humans do. In: Proceedings of the 34th International Annual Conference of the Cognitive Science Society, CogSci, pp 2198–2203
Powers TM (2006) Prospects for a Kantian machine. IEEE Intelligent Systems 21(4):46–51
Rawls J (1971) A Theory of Justice. Harvard University Press, Cambridge
Robins B, Dautenhahn K, Boekhorst RT, Billard A (2005) Robotic assistants in therapy and education of children with autism: Can a small humanoid robot help encourage social interaction skills? Journal of Universal Access in the Information Society 4:105–120
Robinson H, MacDonald BA, Kerse N, Broadbent E (2013) Suitability of healthcare robots for a dementia unit and suggested improvements. Journal of the American Medical Directors Association 14(1):34–40
Ross WD (1930) The Right and the Good. Clarendon Press, Oxford
Rzepka R, Araki K (2005) What could statistics do for ethics? The idea of a common sense processing-based safety valve. In: Machine Ethics: Papers from the AAAI Fall Symposium. Technical Report FS-05-06, Association for the Advancement of Artificial Intelligence, Menlo Park, CA
Shulman C, Tarleton N, Jonsson H (2009) Which consequentialism? Machine ethics and moral divergence. In: Reynolds C, Cassinelli A (eds) AP-CAP 2009: The Fifth Asia-Pacific Computing and Philosophy Conference, October 1st–2nd, University of Tokyo, Japan, Proceedings, pp 23–25
Super DE (1973) The work values inventory. In: Zytowski DG (ed) Contemporary Approaches to Interest Measurement. University of Minnesota Press, Minneapolis
Tonkens R (2012) Out of character: On the creation of virtuous machines. Ethics and Information Technology 14(2):137–149
Tronto J (1993) Moral Boundaries: A Political Argument for an Ethic of Care. Routledge, New York
van Rysewyk S (2013) Robot pain. International Journal of Synthetic Emotions 4(2):22–33
Van Wynsberghe A (2012) Designing robots for care: Care centered value-sensitive design. Science and Engineering Ethics, in press
Wada K, Shibata T (2009) Social effects of robot therapy in a care house. JACIII 13:386–392
Wallach W (2010) Robot minds and human ethics: The need for a comprehensive model of moral decision making. Ethics and Information Technology 12(3):243–250
Wallach W, Allen C (2009) Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, Oxford
Wallach W, Allen C, Smit I (2008) Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI and Society 22(4):565–582
Wallach W, Franklin S, Allen C (2010) A conceptual and computational model of moral decision making in human and artificial agents. Topics in Cognitive Science 2:454–485
WHO (2010) Health topics: Ageing. Available from: http://www.who.int/topics/ageing/en/
Williams B (1973) A critique of utilitarianism. In: Smart JJC, Williams B (eds) Utilitarianism: For and Against. Cambridge University Press, Cambridge, pp 77–150