
HR Analytics and Ethics

K. Simbeck

Accepted for publication in IBM Journal of Research and Development, https://ieeexplore.ieee.org/document/8708197, DOI: 10.1147/JRD.2019.2915067

The systematic application of analytical methods to human resources (HR) related (big) data is referred to as HR analytics or people analytics. Typical
problems in HR analytics are the estimation of churn rates, the identification
of knowledge and skills in an organization, or the prediction of success on a
job. HR analytics, as opposed to the simple use of key performance
indicators, is a growing field of interest because of the rapid growth of the volume,
velocity, and variety of HR data, driven by the digitalization of work processes.
Personnel files, which used to be kept in steel lockers, are now stored in
company systems, along with data from hiring processes, employee
satisfaction surveys, emails, and process data. With the growing prevalence
of HR analytics, a discussion around its ethics needs to start. The objective of
this article is to discuss the ethical implications of the application of
sophisticated analytical methods to questions in HR management. This article
builds on previous literature in algorithmic fairness that focuses on technical
options to identify, measure, and reduce discrimination in data analysis. The
article applies to HR analytics the ethical frameworks discussed in other fields
including medicine, robotics, learning analytics, and coaching.

Introduction
HR Analytics
HR analytics, as an approach, is expected to be fully integrated into companies’ analytical functions in
the coming years [1]. Typical applications of HR analytics comprise employee satisfaction indicators, workforce forecasts (retention, churn), and employee performance prediction [2]. HR
analytics can be segmented into descriptive, predictive, and prescriptive analytics [3]. Descriptive
analytics refers to the use of relevant key performance indicators with the objective of giving a
comprehensive overview of the workforce [4]. In predictive analytics, causal models are built to explain behaviors and developments [4]. The objective of prescriptive analytics is to recommend actions based on data [4].

Companies that begin applying HR analytics start with descriptive analyses, using reports and dashboards on high performers and indicators on absence, satisfaction, or recruiting pipelines. The next level of HR analytics is predictive HR analytics [5]. These causal models try to explain the variation of a dependent variable (e.g., employee performance, employee satisfaction) based on multiple other variables (age, gender, education, skills, hiring date) [6], using correlation analysis, pattern recognition, or simulation.
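
As a minimal sketch of what such a model could look like in code (hypothetical data and variable names; a real analysis needs far more care with data quality, validity, and bias, as discussed below):

```python
# Minimal sketch of a predictive HR analytics model (hypothetical data and
# column names): a logistic regression relating attrition to a few employee
# attributes. Coefficients indicate associations, not proven causation.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "age":          [25, 47, 33, 52, 29, 41, 38, 24],
    "tenure_years": [1, 15, 4, 20, 2, 9, 7, 1],
    "trainings":    [0, 5, 2, 6, 1, 3, 2, 0],
    "left_company": [1, 0, 0, 0, 1, 0, 1, 1],   # dependent variable
})

X, y = df[["age", "tenure_years", "trainings"]], df["left_company"]
model = LogisticRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_[0].round(2))))
```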

Prescriptive HR analytics is the most advanced approach to HR analytics. In prescriptive HR analytics, machine learning technology may be used to segment employees into groups that will be treated differently. Examples of prescriptive HR analytics include the identification of workers with a high probability of work accidents and the recommendation of appropriate measures such as training [6], the allocation of salespeople to potential customers [7], or a recruiting tool that scans personal emails, as offered by a Finnish company [8].

The rise of HR analytics is caused by an increase in digitally available data; this includes, on the one
hand, data generated through HR management processes:

• In the hiring process, applicants enter their CV into an online system.
• Personnel records are kept in the company system.
• Internal performance evaluations are documented in digital systems.
• Participation in trainings is managed digitally.
• Base pay, benefits, and performance-based compensation are computed automatically.
On the other hand, digital work processes generate data about employees. Work process data
includes data on tickets closed by a helpdesk representative, calls answered by a call center agent,
packages prepared by a fulfillment worker, lines of code written by a programmer, or contracts
reviewed by a lawyer. As soon as a digital system is used to enable, track, review, manage, prioritize,
or document work processes, data on quantitative, sometimes even qualitative output of employees is
generated.

Ethics
Ethical thinking is the autonomous and critical reflection of moral behavior [9]. Ethical theories can be
deontological (based on duty) or teleological (consequential or utilitarian view). In deontological
ethics, an action is considered good or bad, depending on rules and values (e.g., trust, fairness),
irrespective of consequences [9]. In teleological ethics, an action is considered good if the positive consequences outweigh the negative consequences [9].

Privacy
While ethics is about actions, privacy is about knowing: what should be known, which information
should be shared? Information is made of data. A single data point can be harmless to privacy; connecting data points, however, can create new pieces of information that harm privacy. Privacy
includes the notions of control over personal information and involves freedom from judgment by
others [10]. As privacy is characteristic of social relationships [10], one could argue that harm to
privacy arises not from storing information digitally, but when this information is shared with other
people [11].

Privacy is a prerequisite to autonomy and accountability; increasing transparency and reducing privacy is equivalent to increasing control [10]. Translated to the work environment, this means that employees who are observed more tightly (electronically) will try harder to act in line with observers' expectations, i.e., lose autonomy [10]. Therefore, privacy must be understood not only as
informational self-determination but as an enabler of freedom [12].

While privacy requirements differ from country to country, the European General Data Protection
Regulation (GDPR), which came into effect in May 2018, is one of the strictest. It incorporates the
principles of lawfulness, fairness, and transparency; purpose limitation; data minimization; accuracy; storage limitation; and integrity and confidentiality for personal data [13]. It limits not only the
collection of data but also the processing of data. Purpose limitation means that data collected for a
specific purpose may not be used for other purposes. Data minimization means that no more data
should be collected than is necessary for the purpose.

The use of powerful databases and algorithms poses new challenges to privacy; they have significantly reduced the cost of conducting surveillance, and their applications have become part of our most private spaces [14].

Causal Inference
In predictive HR analytics, companies aim to detect causal relationships between variables, e.g.,
between age and attrition, or between training and performance. The application of statistical
predictive risk assessments without causal models is broadly criticized, e.g., for applications in the
criminal justice system [15, 16]. Typical issues in causal inference are internal and external validity,
omitted variable bias, and simultaneity bias [17]. Internal validity refers to evidence about cause and
effect [18] (actually it might be the high performers who seek more training, as opposed to the more
trained performing better). The criterion of external validity is met if the identified relationship can be
generalized, for example to other teams or professions [18]. Furthermore, the correlating variables
might not be cause and effect of each other, but both caused by a third variable, which was not
controlled for (omitted variable bias), or both might cause each other mutually (simultaneity bias) [17].
If the same variable is measured several times over a certain period, the regression-towards-the-mean effect must be considered [19].
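
A small simulation (hypothetical numbers) illustrates the regression-to-the-mean effect: employees selected for an extreme score in one noisy measurement period score markedly closer to the average in the next period, without any intervention at all:

```python
# Regression to the mean on simulated data: a stable underlying skill is
# measured twice with noise; the "top performers" of period 1 look worse in
# period 2 even though nothing about them changed.
import numpy as np

rng = np.random.default_rng(42)
true_skill = rng.normal(100, 10, size=10_000)       # stable underlying skill
score_t1 = true_skill + rng.normal(0, 10, 10_000)   # noisy score, period 1
score_t2 = true_skill + rng.normal(0, 10, 10_000)   # noisy score, period 2

top = score_t1 > np.percentile(score_t1, 90)        # period-1 "high performers"
print(score_t1[top].mean())   # clearly above average
print(score_t2[top].mean())   # noticeably closer to 100, with no treatment
```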

In big data analytics, further issues arise from the sheer size of the datasets: analyses tend to be over-powered, i.e., they will provide statistically significant results even for minuscule effects.
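
The following simulation (again with hypothetical numbers) shows this over-powering effect: with a million observations per group, a practically irrelevant difference of 0.1 points is flagged as highly significant:

```python
# Over-powered analysis on simulated data: a negligible group difference
# yields a tiny p-value simply because the sample is huge.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(50.0, 10, size=1_000_000)   # e.g., satisfaction scores
group_b = rng.normal(50.1, 10, size=1_000_000)   # true effect of 0.1 points

t, p = stats.ttest_ind(group_a, group_b)
print(f"p = {p:.2e}")   # highly "significant" despite a minuscule effect size
```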

Ethical Frameworks in Other Disciplines


In order to propose an ethical framework for HR analytics, ethical frameworks developed in other
disciplines dealing with sensitive data shall be discussed.

Medicine
Medical Confidentiality
For more than 2000 years, since the Hippocratic Oath, physicians have been committing themselves
to medical confidentiality, legally also referred to as physician-patient privilege: “Whatever I see or
hear in the lives of my patients (…) I will keep secret, as considering all such things to be private”
[20].

Medical Research
The Declaration of Helsinki (DoH) on “Ethical Principles for Medical Research Involving Human
Subjects” was first adopted in 1964 and has since been adjusted several times [21]. The Helsinki
declaration acknowledges that experimentation with humans is a prerequisite for medical progress. It
states clearly, however, that “the well-being of the human subject should take precedence over the
interests of science and society.” The DoH explains principles for ethical data generation:

• Research must “conform to generally accepted scientific principles” and be conducted by qualified persons.
• Research must be based on theory (“knowledge of the scientific literature”).
• Human subject experiments are a last resort: laboratory and animal experimentation come first.
• Research must be based on an experimental protocol approved by an ethical review committee, and ethical considerations should be stated.
• Risks (i.e., to the environment, to subjects’ physical and mental integrity, and to privacy) must be assessed and managed. Potential research benefits must outweigh risks to subjects.
• Study design must be made public.
• Informed consent of participants must be obtained.

Informed Consent
The concept of informed consent originates from the relationship between patient and physician and
describes the authorization of medical intervention [22]. The concept of informed consent does not
only mean that human subjects participate voluntarily and that they must be informed about
objectives, risks, and methods. It also means that subjects have the right to withdraw consent at any
time.

The need for informed consent is also underlined in psychological research [23, 24]. However, in psychological research, certain experiments require “deception”, i.e., it is assumed that subjects would act differently if they were aware of the experimental design; this might be considered unethical and should therefore be limited [25]. Institutional review boards (IRBs) are supposed to assess the balance between potential scientific benefit and individual harm. In order to limit the harm of deception or of psychological influence on experiments (e.g., induction of a negative mood), a debriefing after the experiment is required [23, 24].

Evidence Based Research


Physicians strive to base medical interventions on evidence-based research. The evidence should
be effective, appropriate, and feasible [26]. Evidence can come from an interventional study (patient
is treated) or an observational study (no treatment/intervention on patients, example: observe nicotine
consumption and mortality) [26]. In randomized controlled trials (RCTs), the patient is randomly
assigned to an intervention/non-intervention group [26]. Observational studies can be up- or
downgraded, e.g., for significance of effect or risk of bias [27]. The highest standard of evidence is
produced through systematic reviews of randomized controlled trials and meta-analyses, followed by
randomized controlled trials and double-upgraded observational studies [26, 27]. Medical researchers
take multiple measures to reduce bias in their studies. In order to reduce selection bias (from
systematic differences in the selection of study participants), patients need to be randomly assigned
to either a treatment or control group [27]. In order to ensure that both groups have the same
expectations about the outcome, receive the same care, and are reported on in the same way [27], both study participants and the personnel conducting the study should not know whether a patient is in the treatment or control group (“double blind”). Medical researchers call for the pre-registration of
datasets and studies to avoid reporting bias [28, 29].

Biobanks
An abundance of data is used in today’s biomedicine for diagnostics, research, clinical trials,
monitoring, and healthcare management; the data is stored by healthcare providers, laboratories,
insurers, hospitals/doctors, and researchers [29]. Volume drivers in biomedicine are genome
sequencing datasets and epidemiological datasets [29]. The ethical discussion of big data analytics in
biomedicine includes contributions which call for a limited and thoughtful application of data [11, 29–
32].

In 2016, the World Medical Association (WMA) adopted the “Declaration of Taipei on Ethical
Considerations regarding Health Databases and Biobanks” [33]. In the Declaration of Taipei the WMA
tries to balance the societal benefit of using medical data for research and the individual right to
dignity, autonomy, confidentiality, and privacy. The declaration calls for the patient's voluntary and
informed consent, explicitly concerning the “risks and burdens associated with collection, storage and
use of data and material” and “procedures for return of results including incidental findings” [33].
Specifically, the topic of informed consent is broadly discussed as it relates to data-driven biomedical
research [29]; this topic constitutes half of the literature on the ethical use of data in biomedicine [32].
Informed consent is problematic because of the difficult distinction between research and non-
research related purposes [29] and because yet unknown relationships might be studied and found
[32, 34].

Further concerns in the debate about biomedical data are ownership and intellectual property,
weighing epistemology and individual rights, and group-level ethical harms [29, 32, 34]. Data
ownership includes both the notions of control, i.e., the right to access, alter or withdraw data and the
notion of medical or commercial benefit [30, 32, 34]. Group-level ethical harm may arise when information is generated about groups of anonymized data points that allows, for example, profiling for certain diseases; in this case, the right of an individual not to know about a disease might be compromised [32]. Group-level ethical harm also arises when genetic data provides information not only
about the patient but also about his/her family [22].

Data Literacy
Concerns that are technical in other disciplines, such as data quality, noise, reproducibility, interpretability of complex analyses, over-powered analyses, and statistical approaches without theory-founded hypotheses, may cause harm to people in medicine [11, 29, 31]. Statistical predictions of patient survival under a specific treatment that are not founded in theory might lead (and have led) to medication decisions that turn out to be wrong once the underlying disease is better understood [11]. Biomedical researchers argue that large-scale data analytics also requires a high level of analytical and statistical skill, both of researchers and of their readers [31].

Applicability of Medical Ethics to HR Analytics


The most important elements of the ethical considerations in medical research that are applicable to HR analytics as well are the concepts of privacy and informed consent. On the one hand, the HR function potentially has access to a wide range of personal data about employees: number and age of children, marital status, or health information. On the other hand, HR, representing the employer, assumes a special responsibility toward the dependent employees. HR analytics practitioners need to respect employees' privacy and choose carefully which analytical questions are ethical. Thus, HR might choose not to create a model that predicts future pregnancies.
Applying the concept of informed consent to HR means that employees need to be informed about how their data is used and which consequences, e.g., for pay level or career opportunities, arise from the use of the data. Further, employees must have the opportunity to consent, or not, to the use of the
data. The European GDPR [13] legislation already requires informed consent for any processing of
personal data. According to article 7 of GDPR “the request for consent shall be presented (…) in an
intelligible and easily accessible form, using clear and plain language. (…) The data subject shall
have the right to withdraw his or her consent at any time.” A recent report by the consultancy
Accenture [35] underlines that employees are deeply concerned about the use of their workplace data
and that companies can gain their employees’ trust by giving control over data.
Further, HR analytics practitioners need to assume responsibility for adequate levels of data literacy
and statistical skills if the analytical approach may result in life-changing decisions for the analyzed
subjects. In order to establish causal relationships, HR analytics practitioners need to avoid relying on
observational data only and consider bias in their data.
HR analytics practitioners should consider applying the concept of ethical review committees to their
initiatives. Those ethical review committees should include employee representatives. The German
Works Constitution Act [36] implements and limits employee participation in HR analytics in section
87: “The works council shall have a right of co-determination (…) [regarding] the introduction and use
of technical devices designed to monitor the behavior or performance of the employees.”

Artificial Intelligence and Robotics
Artificial intelligence (AI) is concerned with “finding an effective way to understand and apply
intelligent problem solving, planning, and communication skills to a wide range of practical problems”
[37]. A robot is an “engineered machine that senses, thinks, and acts” [38].

The potentially immense power of robots to control and to destroy has fostered, early on, an
intellectual debate about the ethics of robotics, which is also reflected in science-fiction culture
(HAL 9000 in [39], Cyberiada [40]). In 1950, Asimov published his famous three laws of robotics: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey orders given to it by human beings, except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with
the First or Second Law [41]. Due to their physical capabilities (see for example industrial robots),
basic issues in robotic ethics are reliability and safety [38, 42, 43]. With the growing autonomy,
complexity and unpredictability of robotic systems, new issues emerge. The unpredictable behavior of
robots emerges because of complex code used in complex situations [44]. Ethical concerns differ
across the fields that are applying robots, be it home service robots [42], assistive robots [45], or
military robots [44].

From an ethical perspective, AI is required to be transparent, inspectable, predictable, controllable, and robust against manipulation [46]. Furthermore, systems using AI should provide the opportunity for human override [47]. Future research in AI ethics is expected to include verification tools for AI systems, validity models to prevent undesirable behavior, and security through anomaly detection and containment [48].

Algorithmic Fairness
In recent years, a discussion has started around transparency, bias, and fairness in machine learning models and their societal consequences [49–52]. Algorithms have been shown to reflect race and gender biases in facial recognition [53] and in natural language processing [54–56].

Algorithmic fairness is relevant in unexpected applications, such as online marketing: one study
demonstrated that men tend to be shown more advertisements for higher paying jobs online than
women [57]. The more decisions that are taken automatically by algorithms and the more information
that is provided based on complex models, the higher the risk for hidden discrimination. The research
in algorithmic fairness includes topics such as transparency on relationships between input and

output variables [58], identification and measurement of discrimination [59, 60], or mitigating
discrimination in datasets [61–63].
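
One simple measure from this literature is the disparate impact ratio, sketched below on hypothetical data: it compares rates of favorable outcomes between groups, and a ratio below 0.8 is often read as a warning sign (the so-called four-fifths rule).

```python
# Sketch of one common fairness measure (hypothetical data): the disparate
# impact ratio compares the rate of favorable outcomes between two groups.
import numpy as np

# 1 = favorable decision (e.g., invited to interview), one entry per applicant
outcome = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group   = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = outcome[group == "a"].mean()   # selection rate in group a: 0.6
rate_b = outcome[group == "b"].mean()   # selection rate in group b: 0.4
print(rate_b / rate_a)                  # disparate impact ratio: ~0.67
```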

Several commercial providers of AI systems are proactively presenting potential approaches to solving the dilemma between model accuracy and fairness:

• Google provides an online course on “Machine Learning Fairness” with practical recommendations [64].
• IBM's “Trusted AI” research aims to improve robustness, fairness, and explainability in AI systems [65], for example through bias detection tools [66]. In 2019, IBM released the 'Diversity in Faces' dataset of one million human faces to improve the availability of unbiased training data for facial recognition [67].
• Researchers at Microsoft regularly report on conference contributions on the company's machine learning blog [68], e.g., on bias in word embeddings [69] or facial recognition [70].
• Amazon Web Services, on the other hand, takes a more defensive position on the topic [71].

Software engineering professionals have also been discussing ethical standards for decades. The Ethical Guidelines of the German Informatics Society were first compiled in 1994 and have since been reviewed twice [72]; they acknowledge that “members are responsible for the social and societal consequences of their work”. Similar codes of ethics exist for the global engineering organization IEEE [73] and for the Association for Computing Machinery (ACM) [74]. IEEE is also pursuing the establishment of a group of standards for intelligent and autonomous technologies, considering data privacy, algorithmic bias, employer data governance, and facial analysis [75].

Ethical Human-System Interaction


Can and shall an artificially created system be identified as human by a third observer [76]? Artificially intelligent systems are often given human attributes, i.e., they are anthropomorphized. For instance, a system may speak with a distinctly male or female voice and exhibit signs of character through style elements such as humor. These human attributes are believed to make the system easier to interact with, to create confidence in the system, and to build a relationship with the user [45]. However, an anthropomorphized system may seem more intelligent than it really is [76]; even though it behaves partially like a human, it is not an autonomous human but an automated system. Researchers argue that anthropomorphism should be used sparingly, as humans tend to form attachments to humanoid robots [45, 77].

Systems that interact with humans need to respect human dignity, privacy, and physical and
psychological fragility; their behavior should be transparent and predictable; and they should provide status indicators and have an emergency stop [77]. If robots are used as caregivers for humans, e.g.,
children or elderly people, severe psychological damage is suspected [43, 78].

Technical approaches that are widely used in other applications, such as probabilistic models for
face recognition, become ethical challenges in military robotics: incorrect identification can lead to
harming a person [44]. Autonomous military robots will need to take action depending on the
situation, and they need to be programmed with appropriate ethical routines, which take technical
malfunctions into account [44].

Artificial Moral Agents


In robotics, artificial moral agents (AMAs) (Allen et al. 2006) implement ethical decision making by
top-down or bottom-up approaches [44, 79]. The top-down approach decomposes an ethical theory
into algorithmic requirements; in the bottom-up approach the AMA reviews several scenarios and
maximizes ethical results [44]. An example of the top-down approach of implementing ethical norms into algorithms is provided in [80], where ethical norms are formalized as deontic logic expressions. The author of [81] proposes a formal stimulus-response model for ethical robotic behavior. He argues that deontological approaches (i.e., discriminating soldiers from civilians) and utilitarian approaches (i.e., assessing proportionality) need to be used in parallel. Some authors argue that ethics can never
be completely expressed as a set of algorithms, that it needs a heuristic approach [82]. If ethical rules
are explicitly programmed as in the top-down approach, the robot’s ethics will be rigid and inflexible,
which might pose problems in unforeseen situations, and in situations where those rules are gamed
by an opponent [44]. The issue of proportionality arises when an autonomous robot faces huge costs
for achieving a small or improbable ethical benefit or if only one of several ethical outcomes can be
achieved. A typical example comes from driverless cars: should the car protect the passengers or the pedestrian if a crash cannot be avoided? The issue of proportionality is also discussed for military
robots [44]: if a robot is supposed to protect a building, it should consider immobilizing rather than
killing an otherwise unstoppable intruder.

The author of [46] criticizes the attempt to build ethical decision making into artificially intelligent systems and to remove human oversight. She argues that moral values and goals are not fixed, that ethical action must be the product of a reflection process, ideally of a discussion with the involved parties, and that it cannot be outsourced. In order to make ethical decisions, social systems, hierarchies, and communication need to be considered [46].

Applicability of Ethical Considerations in AI and Robotics to HR Analytics
Because of the increasing level of process automation in HR, three of the discussed ethical considerations are also applicable in HR. First, designers should avoid human-system interfaces where it might not be clear to the human that he or she is communicating with a computer (e.g., chatbots in application processes or virtual coaches). Second, when decisions in automated HR processes are taken not on the basis of transparent, simple rules but by machine-learned black-box algorithms, ethical considerations need to be implemented and controlled in the automated process.

Finally, HR analytics practitioners need to be mindful when using machine learning technology and recognize that real-world stereotypes will be reflected in their training data and could thus be reproduced by models trained on that data. Recently it became public that the e-commerce and cloud computing company Amazon failed to build an unbiased automated CV review tool; apparently, the machine learning algorithm, trained on the CVs of its historically predominantly male software engineers, consistently preferred male over female candidates [83].

The discussions around AI ethics are particularly relevant for HR, as services based on AI technology are already offered by numerous providers:

• The company HireVue (hirevue.com) provides a video-based assessment that uses natural language processing (automated analysis of content, intonation, and emotion) to assess candidates' stress tolerance, ability to work in teams, or willingness to learn.
• The start-up pymetrics (pymetrics.com) assesses applicants using “neuroscience games” and machine learning algorithms.
• In performance management, the company Zugata (zugata.com) uses machine learning to analyze feedback texts.
• The German company Precire (precire.com) offers bots that conduct telephone interviews and use speech analysis to identify psychological traits.
• IBM uses sentiment analysis to identify issues and trends among employees [84].
• The Polish company One2Tribe (www.one2tribe.pl) provides a gamification platform to foster change in employee behavior using personalized rewards.
• Several companies, such as the Danish Peakon (peakon.com) or the Australian CultureAmp (cultureamp.com), offer advanced employee engagement analytics and benchmarking.
• The Belgian public employment service of Flanders analyzes its job seeker files and website usage to predict long-term unemployment and allow for early intervention such as recommender systems for jobs [85].
• Professional social networks such as LinkedIn, Xing, or ResearchGate use machine learning technology to curate news feeds or recommend contacts and jobs.

All those applications of AI to HR problems have in common that they are nontransparent, black-box systems. Employee data is usually shared with a third-party service provider and may be used not only to answer the HR question at hand but also to further calibrate the provider's algorithms and benchmarks. Bias is an issue, too: even if, for example, the language processing services correctly classify the character traits of the majority, will they also do so for minorities, such as people with a speech impairment, people with a migration background, or elderly women?
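
Such gaps only become visible if model quality is evaluated per group rather than in aggregate; a minimal sketch on hypothetical data:

```python
# Per-group evaluation (hypothetical data): a model can look acceptable
# overall while failing almost completely for a minority group.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])   # true labels
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # model predictions
group  = np.array(["majority"] * 5 + ["minority"] * 5)

for g in ("majority", "minority"):
    mask = group == g
    print(g, (y_true[mask] == y_pred[mask]).mean())  # 1.0 vs. 0.0 here
```
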
Learning Analytics and Educational Data Mining
Learning analytics and educational data mining both strive to use data to improve learning and/or
teaching. Learning analytics focuses more on empowering instructors and learners, while educational
data mining focuses on automated discovery and adaption of learning paths [86]. The main
methodologies used in learning analytics are descriptive, predictive, and explanatory modeling [87].
The main methodology used in educational data mining is clustering (e.g., by academic achievement)
[88].

In their seminal work, Slade and Prinsloo [89] discuss the ethical issues and dilemmas in learning
analytics, i.e., in the application of analytical approaches in a learning context, mainly in schools or in
higher education. They propose six principles [89]. The first principle is to shift the focus from causal
modeling to a reflection about desirability (“learning analytics as a moral practice”). The second
principle calls to see learners as agents of their learning career, not as recipients of learning
interventions. The third principle recognizes the dynamics and non-stability of learners’ attributes:
learners’ identity must be permitted to evolve and must not be limited by data points collected in the
past. The fourth principle underlines that “student success is a complex and multidimensional
phenomenon”, whereas the data collected might be incomplete, noisy, or biased. The fifth principle
calls for full disclosure about the collection, storage, purpose, and use of data. The sixth principle
states that “higher education cannot afford to not use data” as it is part of a responsible institutional
strategy.

One concern in learning analytics is transparency, i.e., the extent to which learners, instructors, and
the institution are aware of or can even influence extent, objectives, and form of data collection and
analysis [90, 91]. As data is needed to manage learning processes in institutions, there are limited
options for students to completely opt out from data collection. However, informed consent for further
use (such as analytics), options for opting out, and anonymization of data for certain uses must be
considered [89].

Data in learning management systems is incomplete because learners also learn outside the system (e.g., using books). The data is noisy and potentially biased because it contains human
assessment. As learners are in a dynamic and life-defining phase, they are especially vulnerable to
discrimination or labeling; institutions must actively seek to account for the incompleteness or
noisiness of digital information [89].

Applicability of Ethical Considerations in Learning Analytics to HR Analytics


HR analytics, like learning analytics, needs to recognize the dynamics in human subjects' behaviors and motives. Analytical results that, for example, predicted a risk of turnover two years ago must not be relied upon today. HR analytics must also take into consideration the incomplete and biased nature of many data sources, such as employee satisfaction surveys, 360-degree feedback, or individual ratings. As in learning analytics and in medicine, transparency and informed consent are the prerequisites for institutional trust.

Coaching
HR analytics can be used in a prescriptive way, e.g., recommending career paths or training [92].
For this reason, it is necessary to review ethical standards in professional coaching.

Various professional organizations of coaches have published codes of ethics for coaching. The
“Global Code of Ethics” agreed upon by the Association for Coaching (AC) and the European Mentoring and Coaching Council [93] lists, among others, the following ethical guidelines:

• Transparency concerning coaches' qualifications and potential conflicts of interest,
• Confidentiality,
• Ongoing supervision, continuous professional development, and reflection.

Passmore summarizes the main themes in coaching ethics as utility (in the best interest of the
coachee), autonomy (self-determination of coachee), confidentiality, trust, avoiding harm, and respect
[94].

These objectives might, however, contradict each other (e.g., a client considering leaving his job and taking sensitive data with him) [94]. Three approaches are proposed to solve ethical and professional dilemmas: supervision, ethical decision frameworks, and scenario thinking [94, 95].

Coaching ethics, in contrast to the previously discussed applied ethics, does not strive to solve ethical questions ex ante but acknowledges the individuality of the situation and the need to find individual ethical solutions. Coaching ethics underlines the importance of reflection for finding ethical solutions; in difficult situations, the ethical reflection process includes peer discussions, journal entries, and supervision [95].

Applicability of Ethical Considerations in Coaching to HR Analytics


The ethical principles in coaching strive to avoid harm to the client by ensuring the client’s privacy
and the coach’s qualification. The specific ethical consideration to be learned from coaching is the
assumed responsibility for support and advice given. This responsibility is not left to the coach’s
individual conscience but included in an individual and group level review process. A similar
systematic review process should be established for the use of recommender systems in HR
analytics.

Summary: Related Ethical Frameworks


Table 1 gives an overview of the discussed ethical frameworks and their applicability to HR
analytics. The use of powerful analytics has led to similar discussions around the analytical and
technical challenges of analytics in medicine, artificial intelligence, and learning analytics. These
challenges are related to the data used, the analytical approach, and the result. It is broadly accepted
that poor quality data, noisy data, or biased data leads to misleading results. The challenges
identified with regards to the analytical methodology are explorative analyses without a theory, over-
powered analyses, reproducibility of analyses, and robustness against manipulation, gaming, or
external attacks. Interpretation of the analytical results can be complicated by high model complexity,
as well as lack of predictability and controllability.

In addition to those analytical and technical challenges the following ethical challenges were
identified:

a) Privacy and confidentiality,

b) Opportunity to opt-out,

c) Institutional review,

d) Transparency,

e) Data literacy,

f) Respect of dynamic nature of personal development.


Table 1. Ethical frameworks and their applicability to HR analytics.

| Ethical Framework | Considerations | Applicability to HR Analytics | Proposed HR Analytics Ethical Principle |
| --- | --- | --- | --- |
| Medicine: Hippocratic Oath | Privacy | Applicable | Privacy and confidentiality |
| Medicine: Declaration of Helsinki | Informed consent | Applicable, in EU required by GDPR | Opportunity to opt-out, transparency |
| | Assessment of risks and benefits | Applicable | Institutional review |
| | Ethical review committee | Applicable | Institutional review |
| | Human subject experiments as last resort | Not applicable | |
| Medicine: Evidence-based research | Double-blind randomized controlled trials | Applicable for predictive and prescriptive HR analytics | |
| | Pre-registration of studies | Not applicable | |
| Medicine: Declaration of Taipei | Consent for anonymized data cannot be withdrawn | Applicable if data is shared with service providers | Opportunity to opt-out, transparency |
| | Study of yet unknown relationships vs. informed consent | Applicable for predictive and prescriptive HR analytics | Opportunity to opt-out, transparency |
| Biomedicine | Data literacy | Applicable | Data literacy |
| | Theory-based research | Partially applicable | |
| Laws of Robotics | No harm to humans | Applicable | Institutional review |
| | Obey orders | Not applicable | |
| | Self-protection | Not applicable | |
| Artificial Intelligence | Transparency, inspectability | Applicable | Transparency |
| | Robustness | Applicable | Data literacy |
| | Fairness | Applicable | Data literacy |
| | Avoiding anthropomorphization | Applicable for prescriptive HR analytics | |
| | Artificial moral agents | Not applicable | |
| Learning Analytics | Incompleteness of data | Applicable | Data literacy |
| | Transparency, informed consent | Applicable | Transparency |
| Slade & Prinsloo [89] | Moral practice | Applicable | Institutional review |
| | Learners as agents, not passive recipients of interventions | Applicable | Respect of dynamic nature of personal development |
| | Instability of learners' attributes | Applicable | Respect of dynamic nature of personal development |
| | Success as a multi-dimensional phenomenon | Applicable | Respect of dynamic nature of personal development |
| Coaching | Transparency, conflicts of interest | Applicable | Transparency |
| | Confidentiality | Applicable | Privacy and confidentiality |
| | Ongoing supervision and reflection | Applicable | Institutional review |

Ethical Framework for HR Analytics
Specific Issues in HR Analytics
While the discussed challenges all apply to HR analytics, there are some additional ethical issues
which result from the special relationship between employer and employee.

The first peculiarity of HR analytics is the availability of abundant, very personal data, because
employees need to share data with their employer in order to fulfill job, legal, or process obligations.
For example, employees will provide bank details, so that they can receive a salary. Employees will
provide a CV to get hired. Employees might give medical details to get sick leave. Employees will reveal their address to receive correspondence. Employees will use email and company-provided IT systems for their work. All this information is
shared voluntarily by employees. However, the sharing is related to specific purposes and no
informed and voluntary consent can be derived for further usage.

The second circumstance that makes HR analytics special is the potentially long-term relationship
between employer and employee, which can last for decades.

Finally, as HR analytics will always be concerned primarily with the optimal use of human resources, the benefit to the employees themselves will often be rather small. At the same time, the use of HR analytics can substantially harm employees' career options, privacy, autonomy, or compensation.

General Ethical Principles for HR Analytics


In the following, I will propose ethical principles for the application of HR analytics. These principles
are supposed to protect employees’ rights to privacy, dignity, autonomy, and harm avoidance but
have to be balanced against the right of employers to maximize profits and to ensure competitiveness
and survival of the company.

HR analytics should apply the following six ethical principles:

1. Respect privacy and confidentiality: Decide not to do analytics when they unduly interfere with
employees’ privacy. This is the case when especially sensitive information is concerned (e.g., predictions about future illnesses or pregnancies on an individual level).

2. Opportunity to opt-out: Employees should have the opportunity to opt out from HR analytics,
even if their data is used for other purposes in the company. It should be possible to opt out from certain analyses, certain types of analyses, or from HR analytics altogether.
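
In practice, honoring opt-outs means excluding the respective records before any processing takes place; a minimal sketch assuming a hypothetical schema with per-analysis opt-out flags:

```python
# Sketch of honoring opt-out preferences in an HR analytics pipeline
# (hypothetical schema): employees who opted out of a given analysis type
# are removed from the analysis population up front.
import pandas as pd

employees = pd.DataFrame({
    "id":        [1, 2, 3, 4],
    "tenure":    [3, 7, 1, 12],
    "opted_out": [{"attrition"}, set(), {"attrition", "performance"}, set()],
})

def analysis_population(df: pd.DataFrame, analysis: str) -> pd.DataFrame:
    """Return only employees who did not opt out of this analysis type."""
    return df[df["opted_out"].apply(lambda kinds: analysis not in kinds)]

print(analysis_population(employees, "attrition")["id"].tolist())  # [2, 4]
```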

3. Review: Representatives of the company and of the employees, including minority representatives and the works council, should set up a joint review board to discuss and agree on borderlines for HR analytics within the company. HR representatives have the opportunity to inform others about the benefits analytics offer to the company. Employee representatives can recommend opting out if they are not convinced.

4. Transparency: Since employers often already have access to data about employees, HR analytics can be performed without the employees concerned being aware of it. An employee might never know that s/he is considered a low risk-to-leave and therefore might get a smaller raise, or a higher bonus, or an interesting project, or no expensive training. Therefore, employers should transparently communicate which data is used and which insights are generated. If insights are generated on an individual level (e.g., individual probability to leave the company, perform poorly, or get promoted within 3 months), these insights should be shared with the employee. The transparency principle enables employees to act as agents of their career, not just as recipients of HR interventions.

5. Data literacy: HR analytics practitioners need to assume responsibility for adequate levels of data
literacy and statistical skills if the analytical approach may result in life-changing decisions for the
analyzed subjects.

6. Respect the dynamic nature of personal development: Employees have the right to develop and change. Data about old behavior should not be used merely because it is available.

Ethical Principles for Predictive or Causal HR Analytics


If HR analytics are used to predict future developments or to explain causal relationships, additional
ethical concerns arise. Using only observational data to predict future behavior or outcomes can lead
to wrong results. An example is a study of informal awards for Wikipedia contributors, which showed
that highly productive contributors reduced their productivity after receiving an informal award [96].
The authors could have concluded that informal awards “don’t work” to incentivize contribution.
However, they used a control group of equally productive contributors and noted that their productivity decreased as well, but to a significantly lesser extent than that of the awarded contributors. They
attribute the effect of reduced productivity to regression to the mean and to the general turnover
among Wikipedia contributors. The experimental setup of the study and the use of the control group
provided superior insight about the role of informal awards in comparison to the sole use of
observational data.

The same study [96] showed that the contributors who were randomly selected into the group of award receivers later received more awards from other people than the initially equally productive control group; this is the so-called “cumulative advantage” effect. The concept of cumulative advantage, which says that a small lead tends to be reinforced over time and to become a significant advantage, was identified in the social sciences in the 1970s and has since been demonstrated numerous times [97]. Systematically relying only on observational data can reinforce the cumulative advantage effect (self-fulfilling prophecies).
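
The dynamic is easy to reproduce in a small simulation (hypothetical parameters): if each new award is given preferentially to those who already hold more awards, a random initial head start grows into a large lead:

```python
# Simulation of cumulative advantage: awards are granted preferentially to
# contributors who already hold awards, so early random luck compounds.
import numpy as np

rng = np.random.default_rng(7)
awards = np.zeros(100)
awards[rng.choice(100, 10, replace=False)] = 1   # random initial head start

for _ in range(500):                             # subsequent award rounds
    p = (awards + 1) / (awards + 1).sum()        # preferential attachment
    awards[rng.choice(100, p=p)] += 1

print(awards.max(), np.median(awards))           # a few runaway winners
```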

In medicine, the equivalent to HR analytics would be observational studies, which are considered clearly inferior to randomized controlled trials (RCTs). Randomized controlled trials are the gold standard in clinical research, and researchers increasingly call for the use of RCTs in social science research as well (referred to as experimental research) [18, 98]. In RCTs, as in experimental research, one group of participants receives a so-called treatment (e.g., career advice, information about training, performance-based compensation) whereas the control group does not benefit from it. Participants are randomly allocated to the two groups. Natural experiments are considered a valid alternative to field experiments; here, participants are randomly allocated to the two groups by circumstances [99]. A classic example is babies born shortly before or after a legislative change. In online marketing, the equivalent to this experimental approach in the social sciences is the A/B testing methodology [100]. In order to avoid deception (i.e., giving incomplete or false information to participants), hypothetical scenarios could be used [98]. Obviously, experiments have to be properly designed to create valid results [101].
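
A minimal sketch of such a randomized comparison on hypothetical data (a real trial additionally requires a power analysis, a protocol, and ethical review):

```python
# Minimal randomized controlled comparison / A/B test on simulated data:
# employees are randomly assigned to treatment or control, and the outcomes
# of the two groups are compared.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
is_treated = rng.permutation(np.repeat([True, False], n // 2))  # random assignment

# Hypothetical outcome, e.g., performance score after the intervention;
# the simulated treatment adds 3 points on average.
outcome = rng.normal(50, 10, n) + 3 * is_treated

t, p = stats.ttest_ind(outcome[is_treated], outcome[~is_treated])
effect = outcome[is_treated].mean() - outcome[~is_treated].mean()
print(f"effect estimate: {effect:.2f}, p = {p:.3f}")
```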

Therefore, the following process needs to be followed in order to ensure that the predictive or causal HR analytics are both ethical and correct:

1. Conduct analytics based on theory-driven hypotheses, as opposed to exploratory data analysis.

2. Identify mechanisms that explain what has happened using data analytics.

3. Test analytical results in experiments.

4. Consider the durability of the identified effect.

Ethics and Prescriptive HR Analytics


If HR analytics is not only used in a descriptive, predictive, or explanatory way but also to recommend future actions, additional ethical concerns arise.

Classic recommender systems offer choices or information based on the users’ data and prior
choices. There is an emerging use case for recommender systems in the field of Human Resources.
The company IBM offers the IBM Watson Career Coach, a recommender-system-like career
management solution, which gives employees, based on their current role and prior job transitions
within the company, transparency about career path options and recommends online and offline
training [92]. The solution includes a web and voice interface and mimics a human coach. The
application of recommender systems to career coaching poses several ethical challenges. First, even
though the career advice it gives may be substantiated through data, its advice is biased towards
digitally available data, is machine generated, and will not take into account the complex personal
situation of the employee. Second, human coaches and advisors, be it in a professional or private
context, will feel a responsibility for their advice. The coachee, on the other hand, will be aware of the background of the advisor and take the advice as a subjective contribution to a potentially complex
decision. An automated recommender system cannot take responsibility for advice given. The user of
the system, however, might be led to the conclusion that the advice is given responsibly and based on
complete information. This can be true, however, only for past data. Future career paths, for example,
might change significantly given changes in technological trends or organizational changes. Finally,
the anthropomorphized user interface might lead the user to trust the system instead of considering it
as just one of many sources of information.

Therefore, in ethical prescriptive HR analytics, anthropomorphization should be avoided, and the rationale and the data behind a recommendation should be made transparent. Combining automated HR analytics with human input should also be considered, as this can result in better solutions [102, 103].

Conclusion
The amount of data available to employers about their employees is rising steeply. This is driven by the digitalization of both HR processes (hiring, evaluation, training, performance management) and work processes. The availability of data and analytical capabilities will lead to an unprecedented transparency of employees to employers.

This article proposes to transfer key ethical concepts from medical research, artificial intelligence, learning analytics, and coaching to HR analytics. The six key principles for ethical HR analytics are privacy and confidentiality, opportunity to opt-out, institutional review, transparency, data literacy, and respect for the dynamic nature of personal development. Additionally, predictive and explanatory HR analytics requires an analytical process which ensures that observational data is complemented by theory-driven hypotheses and controlled experiments. If HR analytics is used to give recommendations, e.g., about training, career options, or personnel selection, a high level of transparency about the reasons for the recommendations is required.

The rise of data analytics poses new ethical questions to society. This article opens the discussion
for the field of HR analytics. HR analytics has the potential to both benefit employers and harm
employees. If employers, HR departments, society, and possibly employee representatives do not
find common ground on these questions, they will ultimately be solved at a greater cost by lawmakers
and lawyers, as can already be seen in the restrictive European General Data Protection Regulation.

Acknowledgment
This work was supported in part by the Hans-Böckler-Stiftung.

References
1. S. van den Heuvel and T. Bondarouk, “The rise (and fall?) of HR analytics,” Journal of Organizational Effectiveness: People and
Performance, vol. 4, no. 2, pp. 157–178, 2017.

2. T. H. Davenport, J. Harris, and J. Shapiro, “Competing on talent analytics,” Harvard business review, vol. 88, no. 10, 52-8, 150, 2010.

3. J. P. Isson and J. Harriott, People analytics in the era of big data: Changing the way you attract, acquire, develop, and retain talent,
1st ed. Hoboken: Wiley, 2016.

4. J. R. Evans and C. H. Lindner, Business Analytics: The Next Frontier for Decision Sciences. [Online] Available:
http://www.cbpp.uaa.alaska.edu/afef/business_analytics.htm. Accessed on: Feb. 22 2019.

5. J. G. Harris, E. Craig, and D. A. Light, “Talent and analytics: New approaches, higher ROI,” Journal of Business Strategy, vol. 32, no.
6, pp. 4–13, 2011.

6. M. Burdon and P. Harpur, “Re-conceptualising privacy and discrimination in an age of talent analytics,” The University of New South
Wales law journal, vol. 37, 2015.

7. B. Kawas, M. S. Squillante, D. Subramanian, and K. R. Varshney, “Prescriptive Analytics for Allocating Sales Teams to
Opportunities,” in 2013 IEEE 13th International Conference on Data Mining Workshops, TX, USA, 2013, pp. 211–218.

8. Digital Minds. [Online] Available: https://www.digitalminds.fi/. Accessed on: Feb. 23 2019.

9. W. K. Frankena, Ed., Ethik. Wiesbaden: Springer Fachmedien Wiesbaden, 2017.

10. L. D. Introna, “Privacy and the Computer: Why We Need Privacy in the Information Society,” Metaphilosophy, vol. 28, no. 3, pp.
259–275, 1997.

11. T. Fischer, K. B. Brothers, P. Erdmann, and M. Langanke, “Clinical decision-making and secondary findings in systems medicine,”
BMC medical ethics, vol. 17, no. 1, p. 32, 2016.

12. W. Bonner, “An Exploration of the Loss of Context on Questions of Ethics Around Privacy and its Consequences,” in Proceedings of
the Eleventh International Conference: The "backwards, forwards and sideways" changes of ICT : ETHICOMP 2010 : Universitat Rovira
i Virgili, Tarragona, Spain : 14 to 16 April 2010, 2010, pp. 43–50.

13. European Union, General Data Protection Regulation (GDPR), 2016/679.

14. M. R. Calo, “Peeping Hals,” Artificial Intelligence, vol. 175, no. 5-6, pp. 940–941, 2011.

15. J. Angwin, J. Larson, S. Mattu, and L. Kirchner, Machine Bias. [Online] Available: https://www.propublica.org/article/machine-bias-
risk-assessments-in-criminal-sentencing.

16. C. Barabas, K. Dinakar, J. I. M. Virza, and J. Zittrain, Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk
Assessment. Available: http://arxiv.org/pdf/1712.08238.

17. O. James, S. R. Jilke, and G. G. van Ryzin, “Causal Inference and the Design and Analysis of Experiments,” in Experiments in
public management research: Challenges and contributions, G. G. van Ryzin, O. James, and S. R. Jilke, Eds., Cambridge: Cambridge
University Press, 2017, pp. 59–116.

18. W. R. Shadish, T. D. Cook, and D. T. Campbell, Experimental and quasi-experimental designs for generalized causal inference.
Belmont, CA: Wadsworth Cengage Learning, 2002.

19. A. Chiolero, G. Paradis, B. Rich, and J. A. Hanley, “Assessing the Relationship between the Baseline Value of a Continuous
Variable and Subsequent Change Over Time,” Frontiers in public health, vol. 1, p. 29, 2013.

20. Hippocrates, Hippocratic Oath, transl. Michael North, National Library of Medicine, 2002. [Online] Available:
http://www.perseus.tufts.edu/hopper/text?doc=urn:cts:greekLit:tlg0627.tlg013.perseus-grc1:1. Accessed on: Feb. 18 2019.

21. World Medical Association, Declaration of Helsinki: Ethical principles for medical research involving human subjects. DOI: 10.1001/jama.2013.281053. Accessed on: Feb. 23 2019.

22. M. Bottis and H. T. Tavani, “Consent in Medical Research and DNA Databanks: Ethical Implications and Challenges,” in
Proceedings of the Eleventh International Conference: The "backwards, forwards and sideways" changes of ICT : ETHICOMP 2010 :
Universitat Rovira i Virgili, Tarragona, Spain : 14 to 16 April 2010, 2010, pp. 51–57.

23. American Psychological Association (APA), Ethical Principles of Psychologists and Code of Conduct. [Online] Available:
https://www.apa.org/ethics/code. Accessed on: Feb. 23 2019.

24. The British Psychological Society, Code of Human Research Ethics. [Online] Available: https://www.bps.org.uk/news-and-
policy/bps-code-human-research-ethics-2nd-edition-2014. Accessed on: Feb. 23 2019.

25. H. C. Kelman, “Human use of human subjects: The problem of deception in social psychological experiments,” Psychological
Bulletin, vol. 67, no. 1, pp. 1–11, 1967.

26. D. Evans, “Hierarchy of evidence: A framework for ranking evidence evaluating healthcare interventions,” J Clin Nurs, vol. 12, no. 1,
pp. 77–84, 2003.

27. J. P. T. Higgins and S. (e.) Green, Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0: The Cochrane
Collaboration, 2011.

28. L. G. Hemkens, D. G. Contopoulos-Ioannidis, and J. P. A. Ioannidis, “Routinely collected data and comparative effectiveness
evidence: Promises and limitations,” CMAJ: Canadian Medical Association journal = journal de l'Association medicale canadienne, vol.
188, no. 8, E158-64, 2016.

29. W. Lipworth, P. H. Mason, I. Kerridge, and J. P. A. Ioannidis, “Ethics and Epistemology in Big Data Research,” Journal of bioethical
inquiry, vol. 14, no. 4, pp. 489–500, 2017.

30. B. Godard, J. Schmidtke, J.-J. Cassiman, and S. Aymé, “Data storage and DNA banking for biomedical research: Informed consent,
confidentiality, quality issues, ownership, return of benefits. A professional perspective,” European journal of human genetics : EJHG,
vol. 11 Suppl 2, S88-122, 2003.

31. S. Hoffman and A. Podgurski, “The Use and Misuse of Biomedical Data: Is Bigger Really Better?,” American Journal of Law &
Medicine, vol. 39, no. 4, pp. 497–538, 2013.

32. B. D. Mittelstadt and L. Floridi, “The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts,” Science and
engineering ethics, vol. 22, no. 2, pp. 303–341, 2016.

33. World Medical Association, WMA Declaration of Taipei on Ethical Considerations regarding Health Databases. [Online] Available:
https://www.wma.net/policies-post/wma-declaration-of-taipei-on-ethical-considerations-regarding-health-databases-and-biobanks/.
Accessed on: Feb. 18 2018.

34. C. Garattini, J. Raffle, D. N. Aisyah, F. Sartain, and Z. Kozlakidis, “Big Data Analytics, Infectious Diseases and Associated Ethical
Impacts,” Philos. Technol., vol. 282, no. 1818, p. 20150814, 2017.

35. E. Shook, M. Knickrehm, and E. Sage-Gavin, “Putting Trust to Work: Decoding Organizational DNA: Trust, Data and Unlocking Value in the Digital Workplace,” 2019. [Online] Available: https://www.accenture.com/us-en/insights/future-workforce/workforce-data-organizational-dna. Accessed on: Feb. 18 2019.

36. Works Constitution Act (Betriebsverfassungsgesetz): BetrVG. [Online] Available: http://www.gesetze-im-internet.de/englisch_betrvg/englisch_betrvg.html#p0493. Accessed on: Feb. 22 2019.

37. G. F. Luger and W. A. Stubblefield, AI algorithms, data structures and idioms in Prolog, Lisp and Java. Boston, MA: Addison-Wesley,
2009.

38. P. Lin, K. Abney, and G. Bekey, “Robot ethics: Mapping the issues for a mechanized world,” Artificial Intelligence, vol. 175, no. 5-6,
pp. 942–949, 2011.

39. S. Kubrick, 2001: A Space Odyssey, 1968.

40. S. Lem, Kyberiade: Fabeln zum kybernetischen Zeitalter, 1st ed. Frankfurt am Main, Leipzig: Insel-Verl., 1992.

41. I. Asimov, I, Robot. New York: Bantam Spectra, 2008 [1950].

42. G. Cornelius et al., “A Perspective of Security for Mobile Service Robots,” in Robotics - Legal, Ethical and Socioeconomic Impacts,
G. Dekoulis, Ed.: InTech, 2017, pp. 88–100.

43. B. C. Stahl and M. Coeckelbergh, “Ethics of healthcare robotics: Towards responsible research and innovation,” Robotics and
Autonomous Systems, vol. 86, pp. 152–161, 2016.

44. P. Lin, G. Bekey, and K. Abney, Autonomous Military Robotics: Risk, Ethics, and Design. [Online] Available: http://www.dtic.mil/docs/citations/ADA534697. Accessed on: Feb. 19 2018.

45. D. Feil-Seifer and M. Mataric, “Socially Assistive Robotics,” IEEE Robot. Automat. Mag., vol. 18, no. 1, pp. 24–31, 2011.

46. P. Boddington, Towards a Code of Ethics for Artificial Intelligence. Cham: Springer International Publishing, 2017.

47. N. Bostrom and E. Yudkowsky, “The ethics of artificial intelligence,” in The Cambridge handbook of artificial intelligence, K. Frankish, Ed., 1st ed., Cambridge: Cambridge University Press, 2014, pp. 316–334.

48. S. Russell, D. Dewey, and M. Tegmark, “Research Priorities for Robust and Beneficial Artificial Intelligence,” AIMag, vol. 36, no. 4,
p. 105, 2015.

49. B. Lepri, N. Oliver, E. Letouzé, A. Pentland, and P. Vinck, “Fair, Transparent, and Accountable Algorithmic Decision-making
Processes,” Philos. Technol., vol. 31, no. 4, pp. 611–627, 2018.

50. K. Yang and J. Stoyanovich, “Measuring Fairness in Ranked Outputs,” in Proceedings of the 29th International Conference on
Scientific and Statistical Database Management - SSDBM '17, Chicago, IL, USA, 2017, pp. 1–6.

51. S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “On the (im)possibility of fairness,” Sep. 2016. [Online] Available:
http://arxiv.org/pdf/1609.07236v1.

52. N. Diakopoulos, “Accountability in algorithmic decision making,” Commun. ACM, vol. 59, no. 2, pp. 56–62, 2016.

53. J. Buolamwini and T. Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” in
Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 2018, pp. 77–91.

54. L. Dixon, J. Li, J. Sorensen, N. Thain, and L. Vasserman, “Measuring and Mitigating Unintended Bias in Text Classification,” in
Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society - AIES '18, New Orleans, LA, USA, 2018, pp. 67–73.

55. R. Tatman and C. Kasten, “Effects of Talker Dialect, Gender & Race on Accuracy of Bing Speech and YouTube Automatic
Captions,” in Interspeech 2017, Aug. 2017, pp. 934–938.

56. T. Bolukbasi, K.-W. Chang, J. Zou, V. Saligrama, and A. Kalai, “Man is to Computer Programmer as Woman is to Homemaker?
Debiasing Word Embeddings,” Jul. 2016. [Online] Available: http://arxiv.org/pdf/1607.06520v1.

57. A. Datta, A. Datta, J. Makagon, D. K. Mulligan, and C. Tschantz, “Discrimination in Online Advertising - A Multidisciplinary Inquiry,”
in Proceedings of Machine Learning Research, vol. 81, Proceedings of the 1st Conference on Fairness, Accountability and
Transparency, S. A. Friedler and C. Wilson, Eds., 2018, pp. 1–15.

58. A. Datta, S. Sen, and Y. Zick, “Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems,” in 2016 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 2016, pp. 598–617.

59. Y. Alufaisan, M. Kantarcioglu, and Y. Zhou, “Detecting Discrimination in a Black-Box Classifier,” in 2016 IEEE 2nd International
Conference on Collaboration and Internet Computing: IEEE CIC 2016: 1-3 November 2016, Pittsburgh, Pennsylvania, USA :
proceedings, Pittsburgh, PA, USA, 2016, pp. 329–338.

60. I. Žliobaitė, “Measuring discrimination in algorithmic decision making,” Data Min Knowl Disc, vol. 31, no. 4, pp. 1060–1089, 2017.

61. F. Kamiran, A. Karim, S. Verwer, and H. Goudriaan, “Classifying Socially Sensitive Data Without Discrimination: An Analysis of a
Crime Suspect Dataset,” in IEEE 12th International Conference on Data Mining workshops (ICDMW), 2012: 10 Dec. 2012, Brussels,
Belgium; proceedings, Brussels, Belgium, 2012, pp. 370–377.

62. S. Hajian, J. Domingo-Ferrer, and A. Martinez-Balleste, “Discrimination prevention in data mining for intrusion and crime detection,”
in 2011 IEEE Symposium on Computational Intelligence in Cyber Security (CICS): 11 - 15 April 2011, Paris, France; [part of] IEEE
SSCI 2011, Symposium Series on Computational Intelligence, Paris, France, 2011, pp. 47–54.

63. T. Kamishima, S. Akaho, and J. Sakuma, “Fairness-aware Learning through Regularization Approach,” in IEEE 11th International
Conference on Data Mining workshops (ICDMW), 2011: 11 Dec. 2011, Vancouver, Canada; proceedings, Vancouver, BC, Canada,
2011, pp. 643–650.

64. Google, Machine Learning Fairness. [Online] Available: https://developers.google.com/machine-learning/fairness-overview/. Accessed on: Feb. 18 2019.

65. IBM, Trusted AI: IBM Research is building and enabling AI solutions people can trust. [Online] Available:
https://www.research.ibm.com/artificial-intelligence/trusted-ai/. Accessed on: Feb. 18 2019.

66. K. R. Varshney, Introducing AI Fairness 360. [Online] Available: https://www.ibm.com/blogs/research/2018/09/ai-fairness-360/. Accessed on: Feb. 18 2019.

67. J. R. Smith, IBM Research Releases ‘Diversity in Faces’ Dataset to Advance Study of Fairness in Facial Recognition Systems.
[Online] Available: https://www.ibm.com/blogs/research/2019/01/diversity-in-faces/. Accessed on: Feb. 18 2019.

68. Microsoft, Microsoft Research to present latest findings on fairness in socio-technical systems at FAT* 2019. [Online] Available:
https://www.microsoft.com/en-us/research/blog/microsoft-research-to-present-latest-findings-on-fairness-in-socio-technical-systems-at-
fat-2019/. Accessed on: Feb. 18 2019.

69. A. T. Kalai, What are the biases in my data? [Online] Available: https://www.microsoft.com/en-us/research/blog/what-are-the-
biases-in-my-data/. Accessed on: Feb. 18 2019.

70. J. Roach, Microsoft improves facial recognition technology to perform well across all skin tones, genders. [Online] Available:
https://blogs.microsoft.com/ai/gender-skin-tone-facial-recognition-improvement/. Accessed on: Feb. 18 2019.

71. M. Wood, Thoughts on Recent Research Paper and Associated Article on Amazon Rekognition. [Online] Available:
https://aws.amazon.com/de/blogs/machine-learning/thoughts-on-recent-research-paper-and-associated-article-on-amazon-rekognition/.
Accessed on: Feb. 18 2019.

72. Gesellschaft für Informatik (GI), Ethical Guidelines of the German Informatics Society. [Online] Available:
https://gi.de/ethicalguidelines/. Accessed on: Feb. 20 2019.

73. IEEE, IEEE Code of Ethics. [Online] Available: https://www.ieee.org/about/corporate/governance/p7-8.html. Accessed on: Feb. 19
2019.

74. ACM, ACM Code of Ethics and Professional Conduct. [Online] Available: https://www.acm.org/code-of-ethics. Accessed on: Feb. 20
2019.

75. IEEE, Ethics in Action: The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. [Online] Available:
https://ethicsinaction.ieee.org/. Accessed on: Feb. 19 2019.

76. D. Proudfoot, “Anthropomorphism and AI: Turing's much misunderstood imitation game,” Artificial Intelligence, vol. 175, no. 5-6, pp. 950–957, 2011.

77. L. Riek and D. Howard, A Code of Ethics for the Human-Robot Interaction Profession. [Online] Available:
https://ssrn.com/abstract=2757805. Accessed on: Feb. 22 2018.

78. N. Sharkey, “Computer science. The ethical frontiers of robotics,” Science, vol. 322, no. 5909, pp. 1800–1801, 2008.

79. D. Vanderelst and A. Winfield, “An architecture for ethical robots inspired by the simulation theory of cognition,” Cognitive Systems
Research, vol. 48, pp. 56–66, 2018.

80. S. Bringsjord, K. Arkoudas, and P. Bello, “Toward a General Logicist Methodology for Engineering Ethically Correct Robots,” IEEE
Intell. Syst., vol. 21, no. 4, pp. 38–44, 2006.

81. R. C. Arkin, “Governing lethal behavior,” in Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction, Amsterdam, The Netherlands, 2008, p. 121.

82. D. Gotterbarn, “Autonomous Weapon's Ethical Decisions; ‘I am Sorry Dave; I am Afraid I can't Do That.’,” in Proceedings of the Eleventh International Conference: The “backwards, forwards and sideways” changes of ICT: ETHICOMP 2010: Universitat Rovira i Virgili, Tarragona, Spain: 14 to 16 April 2010, 2010, pp. 219–229.

83. J. Dastin, Amazon scraps secret AI recruiting tool that showed bias against women. [Online] Available:
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-
against-women-idUSKCN1MK08G. Accessed on: Feb. 18 2019.

84. L. Burrell, Co-Creating the Employee Experience: Interview with Diane Gherson, IBM’s head of HR. [Online] Available: https://hbr.org/2018/03/the-new-rules-of-talent-management. Accessed on: Feb. 23 2019.

85. M. Spielkamp, Ed., Automating Society: Taking Stock of Automated Decision-Making in the EU. Berlin, 2018.

86. G. Siemens and R. S. J. d. Baker, “Learning analytics and educational data mining,” in Proceedings of the 2nd International
Conference on Learning Analytics and Knowledge, Vancouver, British Columbia, Canada, 2012, p. 252.

87. C. Lang, G. Siemens, A. Wise, and D. Gasevic, Eds., Handbook of Learning Analytics: Society for Learning Analytics Research
(SoLAR), 2017.

88. A. Dutt, M. A. Ismail, and T. Herawan, “A Systematic Review on Educational Data Mining,” IEEE Access, vol. 5, pp. 15991–16005,
2017.

89. S. Slade and P. Prinsloo, “Learning Analytics,” American Behavioral Scientist, vol. 57, no. 10, pp. 1510–1529, 2013.

90. A. Pardo and G. Siemens, “Ethical and privacy principles for learning analytics,” Br J Educ Technol, vol. 45, no. 3, pp. 438–450,
2014.

91. P. Prinsloo and S. Slade, “Ethics and Learning Analytics: Charting the (Un)Charted,” in Handbook of Learning Analytics, C. Lang,
G. Siemens, A. Wise, and D. Gasevic, Eds.: Society for Learning Analytics Research (SoLAR), 2017, pp. 49–57.

92. IBM, IBM Watson Career Coach for career management. [Online] Available: https://www.ibm.com/talent-management/career-
coach. Accessed on: Feb. 16 2018.

93. Global Code of Ethics for Coaches & Mentors, 2016.

94. J. Passmore, “Coaching ethics: Making ethical decisions – novices and experts,” The Coaching Psychologist, vol. 5, no. 1, pp. 6–
10, 2009.

95. M. Duffy and J. Passmore, “Ethics in coaching: An ethical decision making framework for coaching psychologists,” International
Coaching Psychology Review, vol. 5, no. 2, pp. 140–151, 2010.

96. M. Restivo and A. van de Rijt, “Experimental study of informal rewards in peer production,” PLoS ONE, vol. 7, no. 3, e34358, 2012.

97. T. A. DiPrete and G. M. Eirich, “Cumulative Advantage as a Mechanism for Inequality: A Review of Theoretical and Empirical
Developments,” Annu. Rev. Sociol., vol. 32, no. 1, pp. 271–297, 2006.

98. O. James, P. John, and A. Moseley, “Field Experiments in Public Management,” in Experiments in public management research:
Challenges and contributions, G. G. van Ryzin, O. James, and S. R. Jilke, Eds., Cambridge: Cambridge University Press, 2017, pp. 89–
116.

99. T. Dunning, Natural experiments in the social sciences: A design-based approach. Cambridge: Cambridge University Press, 2012.

100. E. Dixon, E. Enos, and S. Brodmerkle, “A/B testing of a webpage,” U.S. Patent US7975000B2, 2011.

101. M. J. Salganik, Bit by bit: Social research in the digital age. Princeton, Oxford: Princeton University Press, 2018.

102. J. A. Fails and D. R. Olsen, Jr., “Interactive Machine Learning,” in Proceedings of the 2003 International Conference on Intelligent User Interfaces (IUI '03), Miami, FL, USA, 2003, pp. 39–45.

103. A. Holzinger, “Interactive machine learning for health informatics: when do we need the human-in-the-loop?,” Brain Informatics, vol. 3, no. 2, pp. 119–131, 2016.
