
Data

CPE403/CSC403 Advanced Data Management Techniques [ Data Mining ]

Outline
- Definitions
- Attributes: Nominal, Ordinal, Interval, Ratio
- Types of Data Sets
- Characteristics of Structured Data
- Data Preprocessing
- Measures of Similarity & Dissimilarity

What is Data?
- Data is a collection of data objects and their attributes
- An attribute is a property or characteristic of an object
  - Examples: eye color of a person, temperature, etc.
  - Also known as a variable, field, characteristic, or feature
- A collection of attributes describes an object
  - An object is also known as a record, point, case, sample, entity, or instance

Example (rows are objects, columns are attributes):

  Tid  Refund  Marital Status  Taxable Income  Cheat
  1    Yes     Single          125K            No
  2    No      Married         100K            No
  3    No      Single          70K             No
  4    Yes     Married         120K            No
  5    No      Divorced        95K             Yes
  6    No      Married         60K             No
  7    Yes     Divorced        220K            No
  8    No      Single          85K             Yes
  9    No      Married         75K             No
  10   No      Single          90K             Yes

Attribute Values
- Attribute values are numbers or symbols assigned to an attribute
- Distinction between attributes and attribute values:
  - The same attribute can be mapped to different attribute values
    - Example: height can be measured in feet or meters
  - Different attributes can be mapped to the same set of values
    - Example: attribute values for both ID and age are integers, yet the properties of the values differ: age has a minimum and a maximum value, while ID has no such limits


Types of Attributes
There are different types of attributes:

- Nominal: values are mere labels, barely enough to tell one object from another
  - Examples: ID numbers, eye color, zip codes
- Ordinal: order has meaning
  - Examples: rankings (e.g., taste of potato chips on a scale from 1-10), grades, height in {tall, medium, short}
- Interval: differences have meaning
  - Examples: calendar dates, temperatures in Celsius or Fahrenheit
- Ratio: ratios have meaning
  - Examples: length, time, counts, temperature in Kelvin

Properties of Attribute Values


The type of an attribute depends on which of the following properties it possesses:

- Distinctness:    =  and  ≠
- Order:           <  and  >
- Addition:        +  and  -
- Multiplication:  *  and  /

- Nominal attribute:  distinctness
- Ordinal attribute:  distinctness, order
- Interval attribute: distinctness, order, addition
- Ratio attribute:    all four properties



Discrete and Continuous Attributes


Discrete Attribute
- Has only a finite or countably infinite set of values
- Examples: zip codes, counts, or the set of words in a collection of documents
- Often represented as integer variables
- Note: binary attributes are a special case of discrete attributes

Continuous Attribute
- Has real numbers as attribute values
- Examples: temperature, height, or weight
- Practically, real values can only be measured and represented using a finite number of digits
- Typically represented as floating-point variables

Types of data sets


- Record
  - Data Matrix
  - Document Data
  - Transaction Data

- Graph
  - World Wide Web, Social Networks
  - Molecular Structures

- Ordered
  - Spatial Data
  - Temporal Data
  - Sequential Data
  - Genetic Sequence Data

Key Characteristics of Structured Data


- Dimensionality
  - The number of dimensions/attributes: low or high
  - Curse of Dimensionality
- Sparsity
  - The number of non-zero values: sparse or dense
  - Only presence counts; only non-zero values need to be stored and manipulated
- Resolution
  - Data can be collected at different levels of resolution
  - Properties of data differ at different resolutions, and patterns depend on the level of resolution
  - E.g., the surface of the Earth seems very uneven at a resolution of a few meters, but is relatively smooth at a resolution of tens of kilometers

Record Data
Data that consists of a collection of records, each of which consists of a fixed set of attributes
  Tid  Refund  Marital Status  Taxable Income  Cheat
  1    Yes     Single          125K            No
  2    No      Married         100K            No
  3    No      Single          70K             No
  4    Yes     Married         120K            No
  5    No      Divorced        95K             Yes
  6    No      Married         60K             No
  7    Yes     Divorced        220K            No
  8    No      Single          85K             Yes
  9    No      Married         75K             No
  10   No      Single          90K             Yes

Data Matrix
If data objects have the same fixed set of numeric attributes, then they can be viewed as points in a multi-dimensional space, where each dimension represents a distinct attribute. Such a data set can be represented by an m-by-n matrix: m rows, one for each object, and n columns, one for each attribute.
  Projection of x Load  Projection of y Load  Distance  Load  Thickness
  10.23                 5.27                  15.22     2.7   1.2
  12.65                 6.25                  16.22     2.2   1.1

Document Data
- Each document becomes a "term vector"
- Each term is a component (attribute) of the vector
- The value of each component is the number of times the corresponding term occurs in the document

Example:
  document1: it is what it is
  document2: what is it
  document3: it is a banana

             "a"  "banana"  "is"  "it"  "what"
  document1   0      0       2     2      1
  document2   0      0       1     1      1
  document3   1      1       1     1      0

Transaction Data
A special type of record data, where each record (transaction) involves a set of items.
For example, consider a grocery store. The set of products purchased by a customer during one shopping trip constitute a transaction, while the individual products that were purchased are the items.
  TID  Items
  1    Bread, Coke, Milk
  2    Beer, Bread
  3    Beer, Coke, Diaper, Milk
  4    Beer, Bread, Diaper, Milk
  5    Coke, Diaper, Milk

Graph Data - Web Link Data


Examples:
- A generic WWW graph with HTML links
- The WWW graph from Google's view

Graph Data - Social Networks

Twitter networks

Facebook Open Graph of Friend Links



Graph Data - Chemical Data


Benzene Molecule: C6H6


Ordered Data
Sequences of transactions
(Figure: a sequence of transactions over time; each element of the sequence is a set of items/events.)

Ordered Data
Genomic sequence data
GGTTCCGCCTTCAGCCCCGCGCC CGCAGGGCCCGCCCCGCGCCGTC GAGAAGGGCCCGCCTGGCGGGCG GGGGGAGGCGGGGCCGCCCGAGC CCAACCGAGTCCGACCAGGTGCC CCCTCTGCTCGGCCTAGACCTGA GCTCATTAGGCGGCAGCGGACAG GCCAAGTAGAACACGCGAAGCGC TGGGCTGCCTGCTGCGACCAGGG


Ordered Data
Spatio-Temporal Data
Average Monthly Temperature of land and ocean


Data Quality
- What kinds of data quality problems are there?
- How can we detect problems with the data?
- What can we do about these problems?

Examples of data quality problems:
- Noise and outliers
- Missing values
- Duplicate data

Noise
- Noise refers to the modification of original values
- Examples: distortion of a person's voice when talking on a poor phone connection, "snow" on a television screen

What causes noise?
- Faulty data collection instruments
- Data entry problems
- Data transmission problems
- Technology limitations
- Inconsistency in naming conventions

(Figures: two sine waves, and the same two sine waves + noise.)

Outliers
Outliers are data objects with characteristics that are considerably different from those of most other data objects in the data set.


Missing Values
- Data values are not always available
  - E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
- Missing data may be due to:
  - Equipment malfunction
  - Information was not collected (e.g., people decline to give their age and weight)
  - Data not entered due to misunderstanding
  - Certain data may not have been considered important at the time of entry
  - Attributes may not be applicable to all cases (e.g., annual income is not applicable to children)

Duplicate Data
- A data set may include data objects that are duplicates, or almost duplicates, of one another
- This is a major issue when merging data from heterogeneous sources
- Example: the same person with multiple email addresses

Data de-duplication: the process of dealing with duplicate data issues
1) If two objects actually represent a single object, the values of the corresponding attributes may differ; these inconsistent values must be resolved
2) Avoid accidentally combining data objects that are similar but not duplicates, e.g., two distinct people with identical names

Data Preprocessing
- Data Cleaning
- Aggregation
- Sampling
- Dimensionality Reduction
- Feature Subset Selection
- Feature Creation
- Discretization and Binarization
- Attribute Transformation

Data Cleaning
Importance
"Data cleaning is the number one problem in data warehousing" (DCI survey)

Data cleaning tasks


- Handle missing values
- Identify outliers and smooth out noisy data
- Correct inconsistent data
- Resolve redundancy caused by data integration

How to Handle Missing Data?


Eliminate data objects or attributes
- Usually done when the class label is missing
- Effective when a data set has only a few objects with missing values
- When many objects have missing values, a reliable analysis can be difficult or impossible with this approach

Fill in the missing value manually
- Tedious and infeasible for large data sets with many missing values


How to Handle Missing Data? (cont)


Fill it in automatically with:
- A global constant, e.g., "unknown" (this may mislead the mining algorithm into treating these tuples as sharing a common value)
- The attribute mean
- The attribute mean for all samples belonging to the same class
- The most probable value: the most commonly occurring attribute value, or an inference-based method such as a Bayesian formula or a decision tree
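To make these options concrete, here is a minimal pandas sketch; the column names and values are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "income": [125_000, None, 70_000, None, 95_000],
    "class":  ["yes", "no", "no", "yes", "no"],
})

# Global constant
df["income_const"] = df["income"].fillna(-1)

# Attribute mean
df["income_mean"] = df["income"].fillna(df["income"].mean())

# Attribute mean for all samples belonging to the same class
df["income_class_mean"] = df["income"].fillna(
    df.groupby("class")["income"].transform("mean"))
```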


How to Handle Noisy Data?


Binning
- First sort the data and partition it into (equal-frequency) bins
- Then smooth by bin means, bin medians, bin boundaries, etc.

Regression
- Smooth by fitting the data to a regression function

Clustering
- Detect and remove outliers

Combined computer and human inspection
- Detect suspicious values and have a human check them (e.g., deal with possible outliers)

Binning Methods for Data Smoothing


Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34

Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34

Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29

Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
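A small Python sketch of equal-frequency binning with mean smoothing, using the price list above:

```python
prices = sorted([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
n_bins = 3
size = len(prices) // n_bins

# Partition into equal-frequency (equi-depth) bins
bins = [prices[i * size:(i + 1) * size] for i in range(n_bins)]

# Smooth by bin means (rounded, matching the slide's integer values)
smoothed = [[round(sum(b) / len(b))] * len(b) for b in bins]
print(smoothed)  # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
```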

Regression
(Figure: regression replaces the noisy value Y1 at X1 with its value on the fitted line y = x + 1.)

Cluster Analysis

(Figure: points grouped into clusters, with outliers falling outside the clusters.)

Aggregation
Combining two or more attributes (or objects) into a single attribute (or object)

Purposes:
- Data reduction: reduce the number of attributes or objects
- Change of scale: e.g., cities aggregated into regions, states, countries, etc.
- More stable data: aggregated data tends to have less variability


Aggregation
Example: Variation of Precipitation in Australia

(Figures: standard deviation of average monthly precipitation vs. standard deviation of average yearly precipitation across Australia.)



Sampling
- Sampling is the main technique employed for data selection
- It is often used for both the preliminary investigation of the data and the final data analysis
- Statisticians sample because obtaining the entire set of data of interest is too expensive or time consuming
- Sampling is used in data mining because processing the entire set of data of interest is too expensive or time consuming

Sampling
The key principle for effective sampling: using a sample will work almost as well as using the entire data set if the sample is representative. A sample is representative if it has approximately the same property (of interest) as the original set of data.


Types of Sampling
Simple Random Sampling
- There is an equal probability of selecting any particular item

Sampling without replacement
- Once an item is selected, it is removed from the population

Sampling with replacement
- Objects are not removed from the population as they are selected, so the same object can be picked more than once

Stratified sampling
- Split the data into several partitions, then draw random samples from each partition (see the sketch below)
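A minimal NumPy sketch of these sampling schemes; the population and strata are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(100)      # hypothetical population
strata = data % 4          # hypothetical partition labels

# Simple random sampling, without and with replacement
without = rng.choice(data, size=10, replace=False)
with_repl = rng.choice(data, size=10, replace=True)

# Stratified sampling: draw from each partition separately
stratified = np.concatenate([
    rng.choice(data[strata == s], size=3, replace=False)
    for s in np.unique(strata)
])
```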


Sample Size

(Figures: the same data set at sample sizes of 8000, 2000, and 500 points; structure becomes harder to see as the sample shrinks.)


Sample Size
What sample size is necessary to get at least one object from each of 10 groups?


Curse of Dimensionality
When dimensionality increases, data becomes increasingly sparse in the space that it occupies.

Example (see the sketch below):
- Randomly generate 500 points
- MAX_DIST: the distance between the two furthest points
- MIN_DIST: the distance between the two nearest points
- Compute how the difference between MAX_DIST and MIN_DIST changes as dimensionality grows
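A sketch of that experiment, under the assumption that the quantity of interest is the relative contrast (MAX_DIST - MIN_DIST) / MIN_DIST:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for dim in (2, 10, 50, 200):
    points = rng.random((500, dim))
    d = pdist(points)                  # all pairwise Euclidean distances
    contrast = (d.max() - d.min()) / d.min()
    print(dim, round(contrast, 2))     # contrast shrinks as dim grows
```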

Curse of Dimensionality
- Definitions of density and distance between points become less meaningful
- In very high dimensions, almost every point lies at the edge of the space, far away from the center


Curse of Dimensionality
One challenge of mining high-dimensional data is insufficient data samples. Suppose 5 samples/objects are considered enough in 1-D; maintaining the same sample density requires exponentially more points as dimensionality grows:
- 1-D: 5 points
- 2-D: 25 points
- 3-D: 125 points
- 10-D: 9,765,625 points


Dimensionality Reduction
Purposes:
- Avoid the curse of dimensionality
- Reduce the amount of time and memory required by data mining algorithms
- Allow data to be more easily visualized
- May help to eliminate irrelevant features or reduce noise

Techniques:
- Principal Component Analysis (PCA)
- Singular Value Decomposition (SVD)
- Others: supervised and non-linear techniques

Dimensionality Reduction: PCA


The goal is to find a projection that captures the largest amount of variation in the data.

(Figure: data points in the x1-x2 plane, with eigenvector e pointing along the direction of greatest variance.)

Dimensionality Reduction: PCA


Approach (a minimal sketch follows below):
- Find the eigenvectors of the covariance matrix
- The eigenvectors define the new space

(Figure: the same x1-x2 data with eigenvector e as the new axis.)
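A minimal NumPy sketch of this approach; the data matrix is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # hypothetical m x n data matrix

Xc = X - X.mean(axis=0)                  # center each attribute
cov = np.cov(Xc, rowvar=False)           # n x n covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

k = 2                                    # keep the top-k eigenvectors
top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
X_reduced = Xc @ top                     # project onto the new space
```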

Dimensionality Reduction: PCA


(Figure: face images reconstructed using progressively fewer principal components: dimensions = 206, 160, 120, 80, 40, and 10.)


Dimensionality Reduction: ISOMAP


- By Tenenbaum, de Silva, and Langford (Science, 2000)
- ISOMAP is a nonlinear dimensionality reduction technique
- Approach:
  - Construct a neighbourhood graph
  - For each pair of points in the graph, compute the shortest-path distances ("geodesic distances")


Feature Subset Selection


Another way to reduce the dimensionality of the data.

Redundant features
- Duplicate much or all of the information contained in one or more other attributes
- Example: the purchase price of a product and the amount of sales tax paid

Irrelevant features
- Contain no information that is useful for the data mining task at hand
- Example: a student's ID is often irrelevant to the task of predicting the student's GPA


Feature Subset Selection


Techniques:

Brute-force approach
- Try all possible feature subsets as input to the data mining algorithm

Embedded approaches
- Feature selection occurs naturally as part of the data mining algorithm

Filter approaches
- Features are selected before the data mining algorithm is run

Wrapper approaches
- Use the data mining algorithm as a black box to find the best subset of attributes

Feature Creation
Create new attributes that can capture the important information in a data set much more efficiently than the original attributes.

Three general methodologies:
- Feature extraction: domain-specific
- Mapping data to a new space
- Feature construction: combining features

Mapping Data to a New Space


Consider time series data:
- Fourier transform
- Wavelet transform

(Figures: (a) two sine waves, (b) two sine waves + noise, (c) power spectrum in the frequency domain, where the two underlying frequencies stand out despite the noise.)
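A small NumPy sketch of the idea: in the frequency domain, the two sine components dominate the spectrum despite the noise (the frequencies 7 and 17 are hypothetical):

```python
import numpy as np

t = np.linspace(0, 1, 512, endpoint=False)
clean = np.sin(2 * np.pi * 7 * t) + np.sin(2 * np.pi * 17 * t)
noisy = clean + 0.5 * np.random.default_rng(0).normal(size=t.size)

power = np.abs(np.fft.rfft(noisy)) ** 2            # power spectrum
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(np.sort(freqs[np.argsort(power)[-2:]]))      # ~ [7., 17.]
```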

Discretization and Binarization


Some data mining algorithms require the data to be in the form of categorical or binary attributes.

Discretization
- Transform a continuous attribute into a categorical attribute

Binarization
- Transform either a continuous attribute or a categorical attribute into one or more binary attributes

Binarization
A simple method:
- Given m categorical values, assign each original value to an integer in the interval [0, m-1]
- Convert each of these m integers into a binary number
- This requires n = ceil(log2(m)) binary digits
- Example: a categorical variable with 5 values {Awful, Poor, OK, Good, Great} requires 3 binary attributes x1, x2, x3

Example (binary encoding, 3 attributes):

  Categorical Value  Integer  x1  x2  x3
  Awful              0        0   0   0
  Poor               1        0   0   1
  OK                 2        0   1   0
  Good               3        0   1   1
  Great              4        1   0   0

Example (one binary attribute per value, i.e., one-hot encoding with x1-x5):

  Categorical Value  Integer  x1  x2  x3  x4  x5
  Awful              0        1   0   0   0   0
  Poor               1        0   1   0   0   0
  OK                 2        0   0   1   0   0
  Good               3        0   0   0   1   0
  Great              4        0   0   0   0   1
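For reference, the one-hot variant is a one-liner in pandas:

```python
import pandas as pd

s = pd.Series(["Awful", "Poor", "OK", "Good", "Great"])
print(pd.get_dummies(s))  # one binary column per categorical value
```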

Discretization of Continuous Attribute


Transformation of a continuous attribute to a categorical attribute involves two subtasks:

(1) Decide how many categories to have
(2) Determine how to map the values of the continuous attribute to these categories
- After the values are sorted, they are divided into n intervals by specifying n-1 split points
- All the values in one interval are mapped to the same categorical value

Discretization methods
- The key issue is how many split points to choose and where to place them
- Unsupervised vs. supervised discretization

Unsupervised Discretization: Without Using Class Labels

(Figures: (a) original data, (b) equal width discretization, (c) equal frequency discretization, (d) K-means discretization; a code sketch follows below.)
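A small NumPy sketch contrasting the equal width and equal frequency schemes; the data is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
n = 4

# Equal width: split the range [min, max] into n same-sized intervals
width_edges = np.linspace(x.min(), x.max(), n + 1)
width_bins = np.digitize(x, width_edges[1:-1])

# Equal frequency: split at quantiles so each bin gets ~equal counts
freq_edges = np.quantile(x, np.linspace(0, 1, n + 1))
freq_bins = np.digitize(x, freq_edges[1:-1])

print(np.bincount(width_bins))  # uneven counts
print(np.bincount(freq_bins))   # ~250 per bin
```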



Supervised Discretization: Using Class Labels


- Entropy-based approach (using class labels)
- Entropy gives a measure of the purity of an interval

(Figures: discretization of two-dimensional data using 3 intervals for both x and y, and using 5 intervals for both x and y.)



Attribute/Variable Transformation
- A function that maps the entire set of values of a given attribute to a new set of replacement values, such that each old value can be identified with one of the new values
- Simple math functions: x^k, log(x), e^x, |x|, 1/x, sin(x)
- Normalization (or standardization)


Normalization
Min-max normalization maps [min_A, max_A] to [new_min_A, new_max_A]:

$$v' = \frac{v - min_A}{max_A - min_A}\,(new\_max_A - new\_min_A) + new\_min_A$$

Example: an income range of [$12,000, $98,000] is normalized to [0.0, 1.0]. Then $73,000 is mapped to

$$\frac{73{,}000 - 12{,}000}{98{,}000 - 12{,}000}\,(1.0 - 0) + 0 = 0.71$$



Normalization (cont)
Z-score normalization (μ_A: mean, σ_A: standard deviation of attribute A):

$$v' = \frac{v - \mu_A}{\sigma_A}$$

Example: consider a value v = 73,600 with μ_A = 54,000 and σ_A = 16,000. Then

$$v' = \frac{73{,}600 - 54{,}000}{16{,}000} = 1.225$$

Normalization by decimal scaling:

$$v' = \frac{v}{10^j}$$

where j is the smallest integer such that max(|v'|) < 1.
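A NumPy sketch of all three normalizations, reusing the slide's figures:

```python
import numpy as np

v = np.array([12_000.0, 54_000.0, 73_600.0, 98_000.0])

# Min-max normalization to [0.0, 1.0]
minmax = (v - v.min()) / (v.max() - v.min())

# Z-score normalization with the slide's parameters
zscore = (v - 54_000) / 16_000

# Decimal scaling: divide by 10^j so that all |values| < 1
j = int(np.ceil(np.log10(np.abs(v).max())))
scaled = v / 10 ** j

print(minmax, zscore, scaled, sep="\n")
```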

Measure of Similarity & Dissimilarity


- Similarity and dissimilarity measures are important because they are used by many data mining techniques
- In many cases, the initial data set is not needed once these similarities or dissimilarities have been computed
- For convenience, the term "proximity" is used to refer to either similarity or dissimilarity

Similarity and Dissimilarity


Similarity
- Numerical measure of how alike two data objects are
- Higher when objects are more alike
- Often falls in the range [0, 1]

Dissimilarity
- Numerical measure of how different two data objects are
- Lower when objects are more alike
- Minimum dissimilarity is often 0; the upper limit varies

Similarity/Dissimilarity for Simple Attributes


Similarity/dissimilarity between p and q, where p and q are the attribute values for two data objects.


Euclidean Distance
Euclidean Distance between two n-dimensional vectors (objects) p and q

$$dist = \sqrt{\sum_{k=1}^{n} (p_k - q_k)^2}$$

where n is the number of dimensions (attributes) and pk and qk are the kth attributes (components) of data objects p and q, respectively. Normalization is usually necessary if scales are different.

Euclidean Distance
Example:

  point  x  y
  p1     0  2
  p2     2  0
  p3     3  1
  p4     5  1

Euclidean distance matrix:

        p1     p2     p3     p4
  p1    0      2.828  3.162  5.099
  p2    2.828  0      1.414  3.162
  p3    3.162  1.414  0      2
  p4    5.099  3.162  2      0

Minkowski Distance

Minkowski distance is a generalization of Euclidean distance:

$$dist = \left(\sum_{k=1}^{n} |p_k - q_k|^r\right)^{1/r}$$

Where r is a parameter, n is the number of dimensions (attributes) and pk and qk are, respectively, the k-th attributes (components) of data objects p and q.

Minkowski Distance: Special Cases


- r = 1: city block (Manhattan, taxicab, L1 norm) distance
  - A common example is the Hamming distance: the number of bits that differ between two binary vectors
- r = 2: Euclidean distance (L2 norm)
- r → ∞: supremum (Lmax or L∞ norm) distance: the maximum difference between any components of the two vectors
- Do not confuse the parameter r with the dimensionality n; all of these distances are defined over all n dimensions
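A short NumPy sketch of these cases, using points p1 = (0, 2) and p2 = (2, 0) from the example that follows:

```python
import numpy as np

p, q = np.array([0, 2]), np.array([2, 0])

def minkowski(p, q, r):
    return np.sum(np.abs(p - q) ** r) ** (1 / r)

print(minkowski(p, q, 1))   # L1, city block: 4.0
print(minkowski(p, q, 2))   # L2, Euclidean: 2.828...
print(np.abs(p - q).max())  # L-infinity, supremum: 2
```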

Minkowski Distance
Example:

  point  x  y
  p1     0  2
  p2     2  0
  p3     3  1
  p4     5  1

Distance matrices:

  L1    p1  p2  p3  p4
  p1    0   4   4   6
  p2    4   0   2   4
  p3    4   2   0   2
  p4    6   4   2   0

  L2    p1     p2     p3     p4
  p1    0      2.828  3.162  5.099
  p2    2.828  0      1.414  3.162
  p3    3.162  1.414  0      2
  p4    5.099  3.162  2      0

  L∞    p1  p2  p3  p4
  p1    0   2   3   5
  p2    2   0   1   3
  p3    3   1   0   2
  p4    5   3   2   0


Mahalanobis Distance
$$mahalanobis(p, q) = (p - q)\,\Sigma^{-1}\,(p - q)^T$$

where Σ is the covariance matrix of the input data X:

$$\Sigma_{j,k} = \frac{1}{n-1}\sum_{i=1}^{n} (X_{ij} - \bar{X}_j)(X_{ik} - \bar{X}_k)$$

(Figure: for the two red points shown, the Euclidean distance is 14.7 while the Mahalanobis distance is 6.)

Mahalanobis Distance
Covariance matrix:

$$\Sigma = \begin{pmatrix} 0.3 & 0.2 \\ 0.2 & 0.3 \end{pmatrix}$$

Points: A = (0.5, 0.5), B = (0, 1), C = (1.5, 1.5)

Mahal(A, B) = 5
Mahal(A, C) = 4

Analogy: A(ndrew) is closer to C(athy) because thousands of friends are spread out in the direction of AC, so C is just like any other friend. A(ndrew) is relatively far from B(etty) because few friends lie in the direction of AB, so B is considered rare and thus far.
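A NumPy check of these numbers, using the slide's (squared-form) definition:

```python
import numpy as np

cov = np.array([[0.3, 0.2],
                [0.2, 0.3]])
inv = np.linalg.inv(cov)

def mahal(p, q):
    d = np.asarray(p) - np.asarray(q)
    return d @ inv @ d   # squared form, as defined on the earlier slide

A, B, C = (0.5, 0.5), (0, 1), (1.5, 1.5)
print(mahal(A, B))  # 5.0
print(mahal(A, C))  # 4.0
```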

Common Properties of a Distance


Distances, such as the Euclidean distance, have some well-known properties. Let d(p, q) denote the distance (dissimilarity) between points (data objects) p and q.

1. Positive definiteness: d(p, q) ≥ 0 for all p and q, and d(p, q) = 0 if and only if p = q
2. Symmetry: d(p, q) = d(q, p) for all p and q
3. Triangle inequality: d(p, r) ≤ d(p, q) + d(q, r) for all points p, q, and r

A distance satisfying all three properties is a metric.

Common Properties of a Similarity


Similarities also have some well-known properties. Let s(p, q) denote the similarity between two data objects (points) p and q.

1. Self-similarity: s(p, q) = 1 (or maximum similarity) only if p = q
2. Symmetry: s(p, q) = s(q, p) for all p and q


Similarity Between Binary Vectors


Consider two objects, p and q, having only binary attributes. Similarities are computed from the following quantities:
- M01 = the number of attributes where p is 0 and q is 1
- M10 = the number of attributes where p is 1 and q is 0
- M00 = the number of attributes where p is 0 and q is 0
- M11 = the number of attributes where p is 1 and q is 1

Simple Matching Coefficient (SMC):
  SMC = number of matches / number of attributes
      = (M11 + M00) / (M01 + M10 + M11 + M00)

Jaccard Coefficient:
  J = number of 1-1 matches / number of not-both-zero attribute values
    = M11 / (M01 + M10 + M11)

SMC versus Jaccard: Example


p = 1 0 0 0 0 0 0 0 0 0
q = 0 0 0 0 0 0 1 0 0 1

M01 = 2  (attributes where p is 0 and q is 1)
M10 = 1  (attributes where p is 1 and q is 0)
M00 = 7  (attributes where p is 0 and q is 0)
M11 = 0  (attributes where p is 1 and q is 1)

SMC = (M11 + M00) / (M01 + M10 + M11 + M00) = (0 + 7) / (2 + 1 + 0 + 7) = 0.7
J = M11 / (M01 + M10 + M11) = 0 / (2 + 1 + 0) = 0
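A quick NumPy version of the example:

```python
import numpy as np

p = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
q = np.array([0, 0, 0, 0, 0, 0, 1, 0, 0, 1])

m11 = np.sum((p == 1) & (q == 1))
m00 = np.sum((p == 0) & (q == 0))

smc = (m11 + m00) / len(p)        # 0.7
jaccard = m11 / (len(p) - m00)    # 0.0
print(smc, jaccard)
```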

Cosine Similarity
If d1 and d2 are two document vectors, then

  cos(d1, d2) = (d1 • d2) / (||d1|| ||d2||)

where • indicates the vector dot product and ||d|| is the length of vector d. It measures the cosine of the angle between the two vectors.

Example:
  d1 = 3 2 0 5 0 0 0 2 0 0
  d2 = 1 0 0 0 0 0 0 1 0 2

  d1 • d2 = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5
  ||d1|| = (3^2 + 2^2 + 0 + 5^2 + 0 + 0 + 0 + 2^2 + 0 + 0)^0.5 = 42^0.5 = 6.4807
  ||d2|| = (1^2 + 0 + 0 + 0 + 0 + 0 + 0 + 1^2 + 0 + 2^2)^0.5 = 6^0.5 = 2.4495

  cos(d1, d2) = 5 / (6.4807 * 2.4495) = 0.3150
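The same computation in NumPy:

```python
import numpy as np

d1 = np.array([3, 2, 0, 5, 0, 0, 0, 2, 0, 0])
d2 = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 2])

cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(round(cos, 4))  # 0.315
```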



Extended Jaccard Coefficient


- Also known as the Tanimoto coefficient
- A variation of Jaccard for continuous attributes
- Reduces to the Jaccard coefficient for binary attributes


Correlation
Correlation measures the linear relationship between the attributes of the objects. The (Pearson's) correlation coefficient between two data objects p and q is defined as

$$corr(p, q) = \frac{covariance(p, q)}{std(p)\,std(q)} = \frac{s_{pq}}{s_p\,s_q}$$


Example of Correlation
(Perfect Correlation)
- Correlation always lies in the range [-1, 1]
- A correlation of 1 (-1) means that p and q have a perfect positive (negative) linear relationship, i.e., p = a*q + b, where a and b are constants
- The following two sets of x and y values illustrate correlations of -1 and +1, respectively (a quick check follows below):
  x = (-3, 6, 0, 3, -6),  y = (1, -2, 0, -1, 2)   corr(x, y) = -1
  x = (3, 6, 0, 3, 6),    y = (1, 2, 0, 1, 2)     corr(x, y) = +1
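A quick NumPy check of both cases:

```python
import numpy as np

x1, y1 = [-3, 6, 0, 3, -6], [1, -2, 0, -1, 2]
x2, y2 = [3, 6, 0, 3, 6], [1, 2, 0, 1, 2]

print(np.corrcoef(x1, y1)[0, 1])  # -1.0
print(np.corrcoef(x2, y2)[0, 1])  #  1.0
```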

Example: Visually Evaluating Correlation

Scatter plots showing correlations ranging from -1 to 1.


General Approach for Combining Similarities


Sometimes attributes are of many different types, but an overall similarity is still needed. The following approach computes the similarity of heterogeneous objects:

1. For the kth attribute, compute a similarity s_k in the range [0, 1]
2. Define an indicator variable δ_k for the kth attribute: δ_k = 0 if the kth attribute is an asymmetric attribute and both objects have a value of 0, or if one of the objects has a missing value for the kth attribute; otherwise δ_k = 1
3. Compute the overall similarity between the two objects as

$$similarity(p, q) = \frac{\sum_{k=1}^{n} \delta_k\, s_k}{\sum_{k=1}^{n} \delta_k}$$

Using Weights to Combine Similarities


We may not want to treat all attributes the same: use weights w_k that are between 0 and 1 and sum to 1. The weighted overall similarity becomes

$$similarity(p, q) = \frac{\sum_{k=1}^{n} w_k\, \delta_k\, s_k}{\sum_{k=1}^{n} \delta_k}$$


Density
- Density-based clustering requires a notion of density
- Examples:
  - Euclidean density: the number of points per unit volume
  - Probability density
  - Distribution measures such as covariance
  - Graph-based density: the number of internal links vs. external links

Euclidean Density - Cell-based


Simple approach:
- Divide the region into a number of rectangular cells of equal volume
- Define density as the number of points each cell contains


Euclidean Density - Center-based


Euclidean density is the number of points within a specified radius of the point
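A tiny NumPy sketch of center-based density; the data and radius are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((500, 2))
radius = 0.1

# Density of a point = number of other points within `radius` of it
dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
density = (dists <= radius).sum(axis=1) - 1  # exclude the point itself
print(density[:5])
```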


Summary
- Definitions
- Attributes: Nominal, Ordinal, Interval, Ratio
- Types of Data Sets
- Characteristics of Structured Data
- Data Preprocessing
  - Data Cleaning, Aggregation, Sampling, Dimensionality Reduction, Feature Subset Selection, Discretization and Binarization, Attribute Transformation
- Measures of Similarity & Dissimilarity


