
A Practical Beginner's Guide to Differential Privacy
April 30, 2012
Christine Task
PhD Candidate
Computer Science Department
Purdue University
Advisor: Chris Clifton
Presentation Outline
What?
How?
Where?
What?
You're handed a survey…
1) Do you like listening to Justin Bieber?
2) How many Justin Bieber albums do you own?
3) What is your gender?
4) What is your age?
The researcher tells you the data from the surveys will be collected into a data set, then
some analysis will be done and the results released to the public. She says it's perfectly
safe to submit a survey: it's anonymous and the analysis will be privatized.
What do you do?
The Notation
[Diagram: the Population of Interest fills out surveys; the surveys are collected into a data set; a privatized analysis Q is run on the data set; and the results R are released to the public.]
- Pop is the overall group being studied.
- I ⊆ Pop is the subset that actually submit surveys.
- d_i is the data contributed by person i.
- D_I = {d_i | i ∈ I} is the data set collected from all the people in I.
- Q(D_I) = R, where Q is the privatized query run on the data set and R is the result released to the public.
What do we want?
I would feel safe submitting a survey if…
- I knew that my answer had no impact on the released results:
  Q(D_(I−me)) = Q(D_I)
- I knew that any attacker looking at the published results R couldn't learn (with any high probability) any new information about me personally:
  Prob(secret(me) | R) = Prob(secret(me))
Why can't we have it?
If individual answers had no impact on the released results… then the results would have no utility. By induction, if the equality held for every person:
Q(D_(I−me)) = Q(D_I)  ⟹  Q(D_I) = Q(D_∅)
If R shows there's a strong trend in my population (everyone is age 10-15 and likes Justin Bieber), then with high probability the trend is true of me too (even if I don't submit a survey):
Prob(secret(me) | secret(Pop)) > Prob(secret(me))
Even worse, if an attacker knows a function about me that's dependent on general facts about the population:
- I'm twice the average age: age(me) = 2 × mean_age
- I'm in the minority gender: gender(me) ≠ mode_gender
then releasing just those general facts gives the attacker specific information about me (even if I don't submit a survey!):
- mean_age = 14  ⟹  age(me) = 28
- mode_gender = F  ⟹  gender(me) = M
One more try
So we can't promise that my data won't affect the results, and we can't promise that an
attacker won't be able to learn new information about me from looking at the results
(unless we make some strict assumptions about the background knowledge of the attacker).
So what can we do?
I'd feel safe submitting a survey if I knew the chance that the privatized released result
would be R was nearly the same, whether or not I submitted my information.
Differential Privacy
Differential Privacy is a guarantee from the researcher to the individuals in the data set:
"The chance that the noisy released result will be R is nearly the same, whether or not you submit your information."
Prob(Q(D_I) = R) ≤ A · Prob(Q(D_(I−i)) = R)   for every data set D_I, every individual i, and every possible result R (and with the two data sets swapped).
Q is the query algorithm, which includes randomized noise for privatization.
A is a value close to 1 which is chosen by the researcher. When A is much larger
than 1, very little privacy is offered. If A = 1, then individuals have no effect on the
results and there is zero utility. Formally, we define A = e^ε for small ε > 0,
which is mathematically convenient, as we'll demonstrate later.
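To get a feel for the scale of ε (a numeric aside of mine, not taken from the slides), A = e^ε stays very close to 1 for small ε:

```python
import math

# A = e^epsilon bounds how much the probability of any released result R
# may differ between the worlds where you do and don't submit your data.
for epsilon in (0.01, 0.1, 1.0):
    print(f"epsilon = {epsilon}: A = {math.exp(epsilon):.3f}")
# epsilon = 0.01 -> A ~ 1.010, epsilon = 0.1 -> A ~ 1.105, epsilon = 1.0 -> A ~ 2.718
```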
In the possible world where I submit a survey, the chance of releasing result R is Prob(R) = p;
in the possible world where I don't submit a survey, it is Prob(R) = q; and p ≈ q.
Given R, how can anyone guess which possible world it came from?
Don't get carried away
These are some things we did not just say:
"If an attacker can't tell whether or not you submitted a survey, they can't learn anything about you from the results."
False! As we said before, with the right background information, an attacker can learn about you
just from general information about the population, even if you don't submit a survey!
"An attacker can't possibly guess with high probability whether you took the survey."
False! It's not impossible: differential privacy hides the differences
between data sets that differ by one individual, not whole groups. What if you're part of a
group? If it's known that you always do the same thing as your six best friends, the whole set of
seven may have a detectable impact on the results, and an attacker might correctly guess that
if the group was involved, you likely were too.
Differential Privacy
Differential privacy ensures that the released result R gives minimal evidence about whether or not any given individual contributed to the data set.
- If individuals only provide information about themselves, this protects Personally Identifiable Information to the strictest possible degree: it protects all personal information in the data set.
- It does not prevent attackers from drawing conclusions about individuals from aggregate results over the population: researchers still need to be careful that their studies are ethical.
- It does not prevent attackers from learning information about known cohesive groups in the data set. The distribution of the population and the invasiveness of the query should be considered.
How?
How do we do it?
1) Do you like listening to Justin Bieber?
2) How many Justin Bieber albums do you own?
3) What is your gender?
4) What is your age?
Possible world where I submit a survey: 38 people like Bieber. Possible world where I don't submit a survey: 37 people like Bieber. Result R = ?
We want to get nearly the same distribution of answers from both possible worlds. How do we bridge the gap?
Global Sensitivity
Given that D1 and D2 are two data sets that differ in exactly one person, and
F(D) = X is a deterministic, non-privatized function over data set D which returns a
vector X of k real-number results, the Global Sensitivity of F is:
ΔF = max over neighboring D1, D2 of ||F(D1) − F(D2)||₁
Intuitively, it's the sum of the worst-case differences in answers that can be caused
by adding or removing someone from a data set.
Query: How many people in the data set like Justin Bieber?
Between the possible world where person i doesn't submit a survey and the one where they do, the count changes by at most 1 (X vs. X + 1). ΔF = 1.
Query: How many males and females are there in the data set?
One person adds 1 to either the male count or the female count (M, F becomes M + 1, F or M, F + 1). ΔF = 1.
Queries: How many males and females are there in the data set? And how many people in the data set like Justin Bieber?
One person can change the Bieber count by 1 and one of the gender counts by 1 (X, M, F becomes X + 1 together with M + 1 or F + 1). ΔF = 2.
Query: What's the total number of Bieber albums owned by people in the data set?
One person changes the total from Y to Y + ?. What is ΔF?
Query: What's the total number of Bieber albums owned by people in the data set?
In the worst case, one person adds 3 albums to the total (Y vs. Y + 3). ΔF = 3.
Query: What's the gap between the oldest and youngest members of the data set?
One person can change the gap G = Old − Young to a new Old − new Young, shifting it by an amount with no obvious bound. ΔF = ?
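As a minimal sketch (my own illustration, with made-up records, not from the slides), here is a brute-force check of how much the combined "likes Bieber" plus "males/females" query can change when one person is removed. Note that global sensitivity is defined over all possible data sets; this toy check over one data set just happens to hit the worst case of ΔF = 2.

```python
# Toy records: (likes_bieber, gender). All values are made up.
records = [(True, "F"), (True, "F"), (False, "M"), (True, "M"), (False, "F")]

def F(data):
    """Non-privatized query: [# who like Bieber, # males, # females]."""
    return [
        sum(1 for likes, _ in data if likes),
        sum(1 for _, gender in data if gender == "M"),
        sum(1 for _, gender in data if gender == "F"),
    ]

full = F(records)
# L1 distance between the full data set and each neighbor missing one person.
worst = max(
    sum(abs(a - b) for a, b in zip(full, F(records[:i] + records[i + 1:])))
    for i in range(len(records))
)
print(worst)  # 2: removing someone who likes Bieber changes two of the three counts by 1
```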
Laplacian Noise
In order for our two worst-case neighboring data sets to produce a similar distribution of
privatized answers, we need to add noise to span the sensitivity gap.
What noise? Random values taken from a Laplacian distribution with standard deviation large
enough to cover the gap. This isn't the only way to achieve differential privacy, but it's the easiest.
Privatizing by adding noise from the Laplacian distribution:
Q(D) = F(D) + X, where X ~ Lap(ΔF / ε), i.e. Prob(X = x) ∝ e^(−ε|x| / ΔF)
Adding Laplacian noise to the true answer means that the distribution of possible results
from any data set overlaps heavily with the distribution of results from its neighbors.
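Here is a minimal sketch of the mechanism in Python (the function name laplace_mechanism and all numbers are my own, not from the slides):

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release true_answer + Laplace noise with scale ΔF / ε."""
    rng = np.random.default_rng() if rng is None else rng
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A single count has sensitivity 1; at epsilon = 0.5 the noise has scale 2.
print(laplace_mechanism(38, sensitivity=1.0, epsilon=0.5))
```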
[Figure: overlapping Laplace distributions of privatized results from neighboring data sets D0, D1, D2, D3, with one released result R marked.]
Just by looking at the released result R, it's very hard to guess which world it came from
and who exactly was in the data set. We know the general neighborhood of the right answer,
for utility. But the impact of specific individuals on the data set is hidden.
THE PROOF!
This is the incredibly simple proof that adding Laplacian noise calibrated to the
sensitivity of the function is enough to satisfy differential privacy. Enjoy!
What we want, for any released result R and any two neighboring data sets D1 and D2:
Prob(Q(D1) = R) / Prob(Q(D2) = R) ≤ e^ε
Substituting the equations from adding Laplacian noise to the function (the chance of releasing R is proportional to e^(−ε|R − F(D)| / ΔF)):
Prob(Q(D1) = R) / Prob(Q(D2) = R) = e^(−ε|R − F(D1)| / ΔF) / e^(−ε|R − F(D2)| / ΔF)
Remembering how exponents work in fractions:
= e^(ε(|R − F(D2)| − |R − F(D1)|) / ΔF)
Simplifying (the triangle inequality gives |R − F(D2)| − |R − F(D1)| ≤ |F(D1) − F(D2)|):
≤ e^(ε|F(D1) − F(D2)| / ΔF)
Remember! ΔF is the maximum difference between two neighboring data sets, so |F(D1) − F(D2)| ≤ ΔF:
≤ e^(ε·ΔF / ΔF) = e^ε
So we get the following, with A = e^ε ≥ 1:
Prob(Q(D1) = R) ≤ A · Prob(Q(D2) = R)
QED
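As a quick numerical sanity check of the bound (my own sketch, not from the slides): evaluating the Laplace density at the same released value R for two neighboring true answers never gives a ratio above e^ε.

```python
import math

def laplace_density(x, center, scale):
    return math.exp(-abs(x - center) / scale) / (2 * scale)

epsilon, sensitivity = 0.5, 1.0          # a count query, so ΔF = 1
scale = sensitivity / epsilon
f_d1, f_d2 = 38.0, 37.0                  # true answers on two neighboring data sets

for R in (30.0, 37.5, 38.0, 45.0):
    ratio = laplace_density(R, f_d1, scale) / laplace_density(R, f_d2, scale)
    print(R, round(ratio, 4), ratio <= math.exp(epsilon) + 1e-9)   # always True
```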
Give it a try!
You now know all you need to apply differential privacy to a query, although there
are some fine details we'll go over in a second. First, let's try it out:
How do you privatize a histogram with three partitions?
Add Laplacian noise calibrated to ΔF = 1 to each partition.
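A minimal sketch of that answer (using numpy; the counts and ε are made up):

```python
import numpy as np

rng = np.random.default_rng()
epsilon = 0.5
true_histogram = np.array([12, 25, 8])   # three disjoint partitions (made-up counts)

# Adding or removing one person changes exactly one bucket by 1, so ΔF = 1.
noisy_histogram = true_histogram + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=3)
print(noisy_histogram)
```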
How do you privatize a series of five overlapping counts across a data set?
(i.e., "How many people in the data set are female?", "How many like Justin Bieber?",
"How many are between ages 12 and 16?", etc.)
Add Laplacian noise calibrated to ΔF = 5 to each count.
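The same sketch for the overlapping counts; the only change is the wider noise scale, because one person can appear in all five counts (ΔF = 5):

```python
import numpy as np

rng = np.random.default_rng()
epsilon = 0.5
true_counts = np.array([21, 38, 17, 9, 30])   # five overlapping counts (made-up)

noisy_counts = true_counts + rng.laplace(loc=0.0, scale=5.0 / epsilon, size=5)
print(noisy_counts)
```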
How do you privatize an interactive query, where first I ask for one count across the
database, and then (based on the answer) I choose a second count to ask for?
Add Laplacian noise calibrated to ΔF = 2 to each count.
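A small sketch of the interactive case (the counts and the decision rule are made up). Calibrating each count to ΔF = 2 is the same as spending ε/2 of the privacy budget on each of the two rounds:

```python
import numpy as np

rng = np.random.default_rng()
epsilon = 0.5
sensitivity = 2.0   # two counts will be asked in total, each individually of sensitivity 1

def noisy_count(true_count):
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

first = noisy_count(38)                          # round 1: "how many like Bieber?"
second = noisy_count(25 if first > 30 else 12)   # round 2 is chosen based on the noisy answer
print(first, second)
```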
How do you privatize a query whose sensitivity depends on the number of people in the
data set? Like a mean, or "How many friends do you have that also took the survey?"
Quick answer: you don't. Strictly speaking, these queries have unbounded sensitivity
(there's no strict upper bound on how many of your friends took the survey, and in the
worst case of a data set with only one person, a mean can jump from a small number to
infinity when that person is removed).
Slightly Better Answer: Many times we can assume the
number of people who took the survey is public knowledge
(and skip privatizing it), while still protecting all the properties
(the actual data collected) of the people responding.
This is a common variant of differential privacy. Sensitivity is
calculated by looking at D1 and D2 which differ in the data values
supplied by one person.
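A sketch of this variant for the albums question, assuming each answer is bounded between 0 and 3 (as in the earlier sensitivity example) and that the number of respondents n is treated as public:

```python
import numpy as np

rng = np.random.default_rng()
epsilon = 0.5
albums = np.array([0, 1, 3, 0, 2, 1, 0, 0, 2, 1])   # made-up answers, each in [0, 3]

n = len(albums)      # public knowledge in this variant of differential privacy
sensitivity = 3.0    # changing one person's answer moves the sum by at most 3

noisy_mean = (albums.sum() + rng.laplace(loc=0.0, scale=sensitivity / epsilon)) / n
print(noisy_mean)
```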
Breaking and Fixing Things
Of course, things are never quite as simple as they seem. Here's some additional
practical advice and common mistakes to look out for:
How do I privatize data whose range is unknown, like the ages of people
who like Justin Bieber?
You can use a histogram of ranges, with a catch-all bucket at the end:
(6-10), (11-13), (14-16), (17-20), (20-25), (25-35), (35-45), (> 45)
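A small sketch of the bucketed-histogram approach (the ages and the exact bucket edges are made up to approximate the ranges above):

```python
import numpy as np

rng = np.random.default_rng()
epsilon = 0.5
ages = np.array([12, 13, 14, 14, 15, 16, 19, 22, 34, 51])   # made-up ages

# Bucket edges chosen from common sense / public knowledge, NOT from the private data;
# everything above the last edge lands in the catch-all (> 45) bucket.
edges = [6, 11, 14, 17, 21, 26, 36, 46]
bucket_index = np.digitize(ages, edges)
true_histogram = np.bincount(bucket_index, minlength=len(edges) + 1)

# Each person falls in exactly one bucket, so ΔF = 1.
noisy_histogram = true_histogram + rng.laplace(0.0, 1.0 / epsilon, size=true_histogram.size)
print(noisy_histogram)
```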
But, be careful! When using histograms, do not:
- Spread the data across too many buckets (noise overwhelms answers). Use larger bucket ranges to avoid this.
- Include lots of likely empty buckets (the more partitions noise is added to, the greater the chance that a rare, very large noise value will be chosen for a partition, potentially creating spuriously large values).
- Base the bucket ranges on actual statistics about the data set (such as standard deviations). Feel free to use common sense, or information about similar, public data sets, but touching the data itself introduces a privacy leak.
How can I make sure my results still have utility after I've added noise?
First, check the sensitivity. If you're asking a lot of questions about the
data set, the noise will add up fast! Make sure the sensitivity is a small
fraction of the expected numerical values in the results. If it isn't, ask
fewer questions!
Next, think about post-processing. Because we're adding symmetric
noise to the data set, the privatized results will often not line up with
any real world (for instance, -1.6 women over 45 liked Justin Bieber).
Once the result is privatized, you can safely improve the results by
mapping it onto the closest self-consistent possible world.
Or, you can prune off what seem like clearly unrealistic answers
(using common sense, rather than statistics about the true data set).
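A small made-up illustration of that post-processing step:

```python
import numpy as np

noisy_histogram = np.array([4.2, -1.6, 7.9, 0.3])   # e.g. privatized counts per bucket

# Post-processing costs no extra privacy: map the noisy result onto the closest
# self-consistent world -- here, non-negative integer counts.
cleaned = np.clip(np.rint(noisy_histogram), 0, None).astype(int)
print(cleaned)   # [4 0 8 0]
```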
Finally, be careful how you're using those noisy answers! Counts have
low sensitivity, but that's not helpful if you take a noisy count and
insert it into a function which is very sensitive to its parameters.
Specifically, think about what happens to a mean if the denominator
varies by 1 or 2: the end result could be a small fraction of the true
answer.
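A tiny made-up illustration of how much a mean moves when its denominator wobbles by one or two:

```python
true_sum, true_count = 100.0, 4            # true mean = 25.0
for noisy_count in (2, 3, 4, 5, 6):        # the count after noise of +/- 1 or 2
    print(noisy_count, round(true_sum / noisy_count, 1))
# 2 -> 50.0, 3 -> 33.3, 4 -> 25.0, 5 -> 20.0, 6 -> 16.7
```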
Is there anything else I should be careful of?
Be careful about privacy leaks. The only time you should touch the
data is within the differentially privatized query.
You can't use non-privatized statistics about the data to make decisions.
You can't build algorithms and release them to the public where the
running time or behavior is heavily dependent on non-privatized data.
You can't use lists of words, names, places, etc., taken from the data.
In general, just use public data or common sense to get any information
needed to assist the privatized analysis.
Where?
Let's Count!
Histograms and counts are easy!
They have very low sensitivity. What can we do with them?
Random Forests of Binary Decision Trees: counts of randomly
selected parameters are used to effectively build partitions in
random decision trees.
Geetha Jagannathan, Krishnan Pillaipakkamnatt, and Rebecca N. Wright. 2009. A Practical Differentially Private
Random Decision Tree Classifier. In Proceedings of the 2009 IEEE International Conference on Data Mining
Workshops (ICDMW '09). IEEE Computer Society, Washington, DC
Network Trace Analysis: counts of messages sent between
network nodes are privatized and used to privately learn about
network usage patterns.
Frank McSherry and Ratul Mahajan. 2010. Differentially-private network trace analysis. In Proceedings of the
ACM SIGCOMM 2010 conference (SIGCOMM '10). ACM, New York, NY
Click Query Graphs: counts of (search query, result chosen) pairs
are privatized, so search patterns can be analyzed.
Aleksandra Korolova, Krishnaram Kenthapadi, Nina Mishra, and Alexandros Ntoulas. 2009. Releasing search
queries and clicks privately. In Proceedings of the 18th international conference on World wide web (WWW '09).
ACM, New York, NY
Beyond counting.
K-core Clustering: Individuals mapped as points in a parameter
space are clustered into a reduced, robust set of points whose
distribution varies little between neighboring data sets.
Dan Feldman, Amos Fiat, Haim Kaplan, and Kobbi Nissim. 2009. Private coresets. In Proceedings of the 41st
annual ACM symposium on Theory of computing (STOC '09). ACM, New York, NY
Combinatorial Optimization: Differentially private approximation
algorithms for a variety of NP-complete problems.
Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, and Kunal Talwar. 2010. Differentially private
combinatorial optimization. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete
Algorithms (SODA '10). Society for Industrial and Applied Mathematics, Philadelphia, PA
Frequent Item Set Mining: Item sets are sampled along a
probability distribution which reduces the number of necessary
frequency counts.
Raghav Bhaskar, Srivatsan Laxman, Adam Smith, and Abhradeep Thakurta. 2010. Discovering frequent patterns
in sensitive data. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and
data mining (KDD '10). ACM, New York, NY
General Differential Privacy
Where it all started (basically):
Cynthia Dwork. 2008. Differential privacy: a survey of results. In Proceedings of the 5th
international conference on Theory and applications of models of
computation (TAMC'08), Manindra Agrawal, Dingzhu Du, Zhenhua Duan, and Angsheng
Li (Eds.). Springer-Verlag, Berlin, Heidelberg, 1-19
Some recent observations:
Daniel Kifer and Ashwin Machanavajjhala. 2011. No free lunch in data privacy.
In Proceedings of the 2011 international conference on Management of data (SIGMOD
'11). ACM, New York, NY.
Johannes Gehrke, Edward Lui, and Rafael Pass. 2011. Towards privacy for social
networks: a zero-knowledge based definition of privacy. In Proceedings of the 8th
conference on Theory of cryptography (TCC'11), Yuval Ishai (Ed.). Springer-Verlag,
Berlin, Heidelberg, 432-449
Questions?
