Huamin Qu
Hong Kong University of Science and Technology
https://xkcd.com/183
Google has introduced an XAI service to its cloud platform.
https://cloud.google.com/explainable-ai/
Facebook provides XAI in News Feed.
https://about.fb.com/news/2019/03/why-am-i-seeing-this/
Microsoft provides XAI toolkits in its Azure service.
https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-machine-learning-interpretability?WT.mc_id=azuremedium-blog-lazzeri#how-to-interpret-your-model
LinkedIn conducted case studies of XAI in practice.
https://www.slideshare.net/KrishnaramKenthapadi/explainable-ai-in-industry-kdd-2019-tutorial
Need for XAI
• Domain Experts
• Model Developers
• General Users
• Government & Law
General Users Need XAI
AI makes decisions that will change your life?
Fairness
Admission · Judgement
https://i2.wp.com/blackchristiannews.com/wp-content/uploads/2018/10/B3-CA905_HARVAR_GR_20181014141414.jpg
Safety
Automated vehicle · Medical diagnosis
https://si.wsj.net/public/resources/images/BN-SS369_UBERCR_P_20170329220834.jpg
Government Requires XAI
The right to explanation
Developers Need XAI
Domain Experts Learn from XAI
"So beautiful. So beautiful."
XAI, A Necessity?
We Don’t Need XAI When...
• Users have high tolerance for errors
• The problem is well studied
• Manipulation should be avoided
(Example: a well-studied problem with almost zero error)
(Example of manipulation: https://www.theguardian.com/technology/2016/dec/05/google-must-review-its-search-rankings-because-of-rightwing-manipulation)
We also don’t need XAI when the model has no significant impact.
Otherwise, we need XAI.
Then, What is XAI
What is XAI
An explainable AI (XAI) system is an intelligent system whose actions or predictions can be understood by humans.
Local Explanation: applies to a single data instance.
Global Explanation: describes the model’s overall behavior (e.g., a neuron salient to tiles of buttons).
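The local vs. global distinction can be sketched in code. Below is a toy perturbation-based sketch (the model, weights, and data are hypothetical, not from the slides): a local explanation scores each feature for one instance, and a global explanation averages those scores over a dataset.

```python
def model(x):
    # A toy "black box": weighted sum of three features.
    w = [2.0, -1.0, 0.5]
    return sum(wi * xi for wi, xi in zip(w, x))

def local_explanation(x):
    # Importance of each feature for THIS instance:
    # change in the prediction when the feature is zeroed out.
    base = model(x)
    return [base - model(x[:i] + [0.0] + x[i + 1:]) for i in range(len(x))]

def global_explanation(dataset):
    # Average absolute local importance over the whole dataset.
    n = len(dataset)
    sums = [0.0] * len(dataset[0])
    for x in dataset:
        for i, imp in enumerate(local_explanation(x)):
            sums[i] += abs(imp)
    return [s / n for s in sums]

data = [[1.0, 2.0, 3.0], [0.0, 1.0, 4.0], [2.0, 0.0, 1.0]]
print(local_explanation([1.0, 2.0, 3.0]))  # importance for one instance
print(global_explanation(data))            # importance across the dataset
```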
Types of Explainability: Interpretability vs. Explainability
White-box Explanation: looking into the inner mechanisms of a model.
XAI: Towards Trustworthy Machine Learning
Machine Learning, Trustworthy?
https://www.kdnuggets.com/2018/11/interpretability-trust-ai-machine-learning.html
Trustworthy Machine Learning
Users have a strong belief in the ability, accuracy, and reliability of the machine learning model.
https://www.scnsoft.com/blog/building-trust-with-computer-vision-ai
How Can XAI Improve Trust?
• Verify whether a model can be trusted
• Verify whether a prediction can be trusted
• Identify when a prediction can (cannot) be trusted
Verify Whether A Model Can Be Trusted
https://christophm.github.io/interpretable-ml-book/other-interpretable.html
A neural network can be distilled into a soft decision tree of depth 4 trained on MNIST; the images at the inner nodes are the learned filters.
Frosst and Hinton 2017. Distilling a Neural Network into a Soft Decision Tree.
Wu et al. 2018, AAAI. Beyond Sparsity: Tree Regularization of Deep Models for Interpretability.
Ribeiro et al. 2016. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier.
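The LIME idea cited above fits a simple, interpretable model around one prediction. The sketch below is a simplified per-feature variant (the black-box model and all numbers are hypothetical), not the full algorithm from the paper:

```python
import random

def black_box(x):
    # Hypothetical nonlinear model we want to explain locally.
    return x[0] ** 2 + 3.0 * x[1]

def local_surrogate(x, n_samples=200, scale=0.1, seed=0):
    """Estimate a local linear surrogate around x by sampling nearby
    points, one feature at a time, and fitting a least-squares slope.
    A simplified sketch of the LIME idea, not the full algorithm."""
    rng = random.Random(seed)
    coefs = []
    for i in range(len(x)):
        num = den = 0.0
        for _ in range(n_samples):
            d = rng.gauss(0.0, scale)   # small perturbation of feature i
            xp = list(x)
            xp[i] += d
            dy = black_box(xp) - black_box(x)
            num += d * dy
            den += d * d
        coefs.append(num / den)         # least-squares slope through origin
    return coefs

# Around x = [1, 2] the true gradient is [2*x0, 3] = [2, 3].
print(local_surrogate([1.0, 2.0]))
```

The surrogate coefficients recover the model’s local gradient, which is exactly the kind of instance-level explanation LIME produces.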
Verify Whether A Prediction Can Be Trusted
The news is fake because…
https://www.finance-watch.org/uf/cartoon-on-consumer-protection-prips/
Identify When A Prediction Can(not) Be Trusted
• Quantify the trustworthiness of a certain prediction
• Reveal the failure modes of a certain model
Input: (X1=3, X2=8, X3=3, X4=0, …)
(a) Y = b (actually, don’t trust me)
(b) I don’t know.
(c) I am not sure.
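One simple way to quantify the trustworthiness of a single prediction is ensemble agreement. A minimal sketch, assuming a hypothetical three-model ensemble voting on one input:

```python
def trust_score(votes):
    # Fraction of ensemble members agreeing with the majority vote
    # serves as a crude per-prediction trust score.
    majority = max(set(votes), key=votes.count)
    return majority, votes.count(majority) / len(votes)

print(trust_score(["cat", "cat", "cat"]))   # ('cat', 1.0): high agreement
print(trust_score(["cat", "dog", "bird"]))  # low agreement: don't trust it
```

Real systems use stronger signals (calibrated probabilities, conformal prediction), but the principle is the same: flag predictions the model itself cannot back up.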
The Three Pillars of Robust Machine Learning: Specification Testing, Robust Training and Formal Verification. DeepMind.
XAI: Towards Robust Machine Learning
Machine Learning, Robust?
Robust Machine Learning
• Robustness deals with system failures caused by dataset shift and adversarial attacks.
Dataset Shift
(Figure: training data vs. testing data, mushroom example)
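A crude way to flag dataset shift is to compare per-feature statistics between training and testing data. A minimal sketch with hypothetical numbers (real detectors use stronger statistical tests such as two-sample tests):

```python
def column_means(rows):
    # Mean of each feature column.
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def shift_score(train, test):
    # Max absolute difference in per-feature means.
    return max(abs(a - b) for a, b in
               zip(column_means(train), column_means(test)))

train   = [[1.0, 0.2], [0.9, 0.3], [1.1, 0.1]]
same    = [[1.0, 0.2], [1.0, 0.2]]
shifted = [[3.0, 0.9], [2.8, 1.1]]

print(shift_score(train, same))     # tiny: distributions look similar
print(shift_score(train, shifted))  # large: dataset shift suspected
```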
Adversarial Attacks
(Figure: an image a human recognizes as a cat is classified as “dog” by the ML model)
https://deepmind.com/blog/robust-and-verified-ai/
Madry, ICML 2019, Robustness Beyond Security
Misalignment between ML and Human
• Grass
• Fur
• Or something that makes no sense to humans
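Adversarial perturbations of this kind can be sketched with an FGSM-style step against a toy linear scorer (the weights, input, and class labels below are hypothetical):

```python
def score(x, w):
    # Toy linear classifier: positive score -> "cat", negative -> "dog".
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm(x, w, eps):
    # For a linear model the gradient of the score w.r.t. x is just w;
    # step each feature by eps in the direction that lowers the score.
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

w = [1.0, -2.0, 0.5]
x = [0.3, 0.1, 0.4]            # positive score: classified "cat"
x_adv = fgsm(x, w, eps=0.2)    # small per-feature change
print(score(x, w), score(x_adv, w))  # the score flips sign: now "dog"
```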
How Can XAI Improve Robustness?
• Find the hole and fix the house
• Build a naturally strong house
XAI: Find the Hole
The Three Pillars of Robust Machine Learning: Specification Testing, Robust Training and Formal Verification. DeepMind.
Build A Naturally Strong Model
Machine Learning, Fair?
COMPAS uses an algorithm to assess potential recidivism risk. It has been used in a variety of places, including Broward County, Florida; the State of New York; the State of Wisconsin; and the State of California.
https://slideslive.com/38917412/safe-machine-learning
Percentage of women in top 100 Google image search results for CEO: 11%
Percentage of U.S. CEOs who are women: 27%
M. Kay, C. Matuszek, S. Munson (2015): Unequal Representation and Gender Stereotypes in Image Search Results for Occupations. CHI’15.
Fair Machine Learning
People who are similar with respect to a specific task should be treated equally.
Can XAI Improve Fairness?
Data → Model → Predictions
• Pre-Process: identify unfair treatment in training data
• In-Process: explain unfair working mechanisms in the model
• Post-Process: modify unfair treatment in predictions
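The post-process stage starts from measuring unfairness in predictions. A minimal sketch of a demographic-parity check over hypothetical predictions (each record is a group label and a 0/1 accept decision):

```python
def acceptance_rate(records, group):
    # Fraction of positive decisions within one group.
    hits = [accept for g, accept in records if g == group]
    return sum(hits) / len(hits)

def parity_gap(records, g1, g2):
    # Demographic parity difference between two groups.
    return abs(acceptance_rate(records, g1) - acceptance_rate(records, g2))

preds = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
         ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(acceptance_rate(preds, "A"))  # 0.75
print(acceptance_rate(preds, "B"))  # 0.25
print(parity_gap(preds, "A", "B"))  # 0.5: a gap worth explaining
```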
Unfair Treatment
In the training data, some groups
are over-represented and others
are under-represented.
Unfair Treatment
The model reflects and amplifies past discrimination.
research.google.com/bigpicture/attacking-discrimination-in-ml/
Unfair Working Mechanism
Taking correlation as causation can lead to unfairness
A College Admission Example
(Figure legend: accepted females, accepted males, rejected)
• Overall acceptance rate: 50% > 42%
• Split by test score: high score 75% > 65%, low score 33.3% > 26.7%
• Split further by department: EE vs. CS
Example instance: Test score = high, Department = CS, Gender = female
Causality-Based Explanation: a causal graph over Gender, Test score, and Department feeding into Accept?, with the unfair path highlighted.
http://ieeevis.org/year/2019/info/papers-sessions
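The reversal in this example is Simpson’s paradox: a within-group comparison can flip once the data are aggregated. A sketch with illustrative counts (hypothetical numbers, not the percentages on the slide):

```python
# counts[dept][gender] = (accepted, applied)
counts = {
    "EE": {"female": (18, 20), "male": (80, 100)},
    "CS": {"female": (20, 100), "male": (3, 20)},
}

def rate(acc_app):
    accepted, applied = acc_app
    return accepted / applied

# Within EACH department, the female acceptance rate is higher...
for dept, by_gender in counts.items():
    print(dept, rate(by_gender["female"]), rate(by_gender["male"]))

def overall(gender):
    # ...but pooling departments reverses the comparison,
    # because genders apply to departments at different rates.
    accepted = sum(by_g[gender][0] for by_g in counts.values())
    applied = sum(by_g[gender][1] for by_g in counts.values())
    return accepted / applied

print(overall("female"), overall("male"))
```

This is why the slide argues for causality-based explanations: correlation at the aggregate level can point the opposite way from the causal, stratified picture.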
XAI: For Human-AI Collaboration
Human-AI Collaboration
Human-AI Collaboration:
An Emerging Research Topic
Google AI’s People + AI Research: https://ai.google/research/teams/brain/pair
Facebook AI’s Human & Machine Intelligence: https://ai.facebook.com/research/human-and-machine-intelligence
How Can XAI Facilitate Human-AI Collaboration?
XAI for Human-AI Collaboration
• AI with Human Support: with human knowledge and inspiration, we can create better AI.
• Human-centered AI: with powerful support from AI, humans can make better decisions more efficiently.
AI with Human Support
Explainable Visual Interface for Understanding and Debugging in
Model Development
CNNVis [Liu et al. 2016]: neuron activations and the learned features of a cluster of neurons.
[Liu et al. 2019, DeepTracker: Visualizing the Training Process of Convolutional Neural Networks]
Human-centered AI
XAI provides additional information to support human decision making.
• During medical decision-making, pathologists retrieve visually similar medical images from past patients for reference (Google Brain & Google Health).
[Cai et al. CHI2019, Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making]
[Ming et al. KDD2019, Interpretable and Steerable Sequence Learning via Prototypes]
XAI: Towards Trustworthy Machine Learning, Towards Fair Machine Learning
Thanks!