
An Introduction to

Support Vector Machines


Jinwei Gu 2008/10/16

Review: What We've Learned So Far

- Bayesian Decision Theory
- Maximum-Likelihood & Bayesian Parameter Estimation
- Nonparametric Density Estimation
  - Parzen-Window, k_n-Nearest-Neighbor
- K-Nearest Neighbor Classifier
- Decision Tree Classifier

Today: Support Vector Machine (SVM)

- A classifier derived from statistical learning theory by Vapnik et al. in 1992
- SVM became famous when, using images as input, it gave accuracy comparable to neural networks with hand-designed features in a handwriting recognition task
- Currently, SVM is widely used in object detection & recognition, content-based image retrieval, text recognition, biometrics, speech recognition, etc.
- Also used for regression (not covered today)
- Chapter 5.1, 5.2, 5.3, 5.11 (5.4*) in the textbook

V. Vapnik

Outline

- Linear Discriminant Function
- Large Margin Linear Classifier
- Nonlinear SVM: The Kernel Trick
- Demo of SVM

Discriminant Function

- Chapter 2.4: the classifier is said to assign a feature vector x to class ω_i if

    g_i(x) > g_j(x)   for all j ≠ i

- For the two-category case,

    g(x) ≡ g_1(x) - g_2(x)

  Decide ω_1 if g(x) > 0; otherwise decide ω_2.

- An example we've learned before: the Minimum-Error-Rate Classifier

    g(x) ≡ P(ω_1 | x) - P(ω_2 | x)

Discriminant Function

- It can be an arbitrary function of x, such as:
  - Nearest Neighbor
  - Decision Tree
  - Linear functions:   g(x) = w^T x + b
  - Nonlinear functions

Linear Discriminant Function

- g(x) is a linear function:

    g(x) = w^T x + b

- A hyperplane in the feature space
- (Unit-length) normal vector of the hyperplane:

    n = w / ||w||

[Figure: the hyperplane in the (x1, x2) plane, with w^T x + b > 0 on one side and w^T x + b < 0 on the other]
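To make the decision rule concrete, here is a minimal NumPy sketch of evaluating a linear discriminant and classifying by the sign of g(x); the weight vector, bias, and test point are made-up values for illustration:

    import numpy as np

    # Hypothetical parameters of the hyperplane w^T x + b = 0
    w = np.array([2.0, -1.0])
    b = -0.5

    def g(x):
        """Linear discriminant function g(x) = w^T x + b."""
        return w @ x + b

    def classify(x):
        """Decide +1 if g(x) > 0, otherwise -1."""
        return 1 if g(x) > 0 else -1

    x = np.array([1.0, 0.5])
    print(g(x), classify(x))   # g(x) = 1.0, so the point falls on the +1 side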

Linear Discriminant Function

- How would you classify these points (labeled +1 and -1) using a linear discriminant function in order to minimize the error rate?
- Infinite number of answers!
- Which one is the best?

[Figure: the two classes of points in the (x1, x2) plane, with several different separating lines that all classify the training data correctly]

Large Margin Linear Classifier

- The linear discriminant function (classifier) with the maximum margin is the best
- The margin is defined as the width that the boundary could be increased by before hitting a data point
- Why is it the best?
  - Robust to outliers and thus strong generalization ability

[Figure: the two classes in the (x1, x2) plane separated by a boundary surrounded by a "safe zone" whose width is the margin]

Large Margin Linear Classifier

- Given a set of data points {(x_i, y_i)}, i = 1, 2, ..., n, where

    for y_i = +1,  w^T x_i + b > 0
    for y_i = -1,  w^T x_i + b < 0

- With a scale transformation on both w and b, the above is equivalent to

    for y_i = +1,  w^T x_i + b ≥ +1
    for y_i = -1,  w^T x_i + b ≤ -1

Large Margin Linear Classifier

- We know that the points x+ and x- on the two margin hyperplanes satisfy

    w^T x+ + b = +1
    w^T x- + b = -1

- The margin width is:

    M = (x+ - x-) · n = (x+ - x-) · w / ||w|| = 2 / ||w||

[Figure: the margin between the two classes; the points lying on the margin hyperplanes are the support vectors, and n is the unit normal of the hyperplane]
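As a quick numerical sanity check of M = 2/||w||, the sketch below uses a made-up hyperplane and two made-up support vectors chosen so that the margin constraints hold with equality:

    import numpy as np

    # Hypothetical separating hyperplane w^T x + b = 0
    w = np.array([3.0, 4.0])            # ||w|| = 5
    b = -2.0

    # Hypothetical support vectors, one on each margin hyperplane
    x_plus  = np.array([0.60,  0.30])   # w^T x+ + b = +1
    x_minus = np.array([0.36, -0.02])   # w^T x- + b = -1

    n = w / np.linalg.norm(w)           # unit normal of the hyperplane
    M = (x_plus - x_minus) @ n          # margin width measured along n
    print(M, 2 / np.linalg.norm(w))     # both print 0.4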

Large Margin Linear Classifier

- Formulation:

    maximize   2 / ||w||

    such that
        for y_i = +1,  w^T x_i + b ≥ +1
        for y_i = -1,  w^T x_i + b ≤ -1

Large Margin Linear Classifier

- Formulation (equivalently, since maximizing 2/||w|| is the same as minimizing ||w||^2 / 2):

    minimize   (1/2) ||w||^2

    such that
        for y_i = +1,  w^T x_i + b ≥ +1
        for y_i = -1,  w^T x_i + b ≤ -1

Large Margin Linear Classifier

- Formulation (the two constraints combine into a single one):

    minimize   (1/2) ||w||^2

    such that
        y_i (w^T x_i + b) ≥ 1

Solving the Optimization Problem

- Quadratic programming with linear constraints:

    minimize   (1/2) ||w||^2
    s.t.       y_i (w^T x_i + b) ≥ 1

- Lagrangian function:

    minimize   L_p(w, b, α_i) = (1/2) ||w||^2 - Σ_{i=1}^{n} α_i [ y_i (w^T x_i + b) - 1 ]
    s.t.       α_i ≥ 0

Solving the Optimization Problem

    minimize   L_p(w, b, α_i) = (1/2) ||w||^2 - Σ_{i=1}^{n} α_i [ y_i (w^T x_i + b) - 1 ]
    s.t.       α_i ≥ 0

- Setting the derivatives to zero:

    ∂L_p/∂w = 0   =>   w = Σ_{i=1}^{n} α_i y_i x_i

    ∂L_p/∂b = 0   =>   Σ_{i=1}^{n} α_i y_i = 0

Solving the Optimization Problem

    minimize   L_p(w, b, α_i) = (1/2) ||w||^2 - Σ_{i=1}^{n} α_i [ y_i (w^T x_i + b) - 1 ]
    s.t.       α_i ≥ 0

- Lagrangian dual problem:

    maximize   Σ_{i=1}^{n} α_i - (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j x_i^T x_j
    s.t.       α_i ≥ 0,  and  Σ_{i=1}^{n} α_i y_i = 0

Solving the Optimization Problem

- From the KKT condition, we know:

    α_i [ y_i (w^T x_i + b) - 1 ] = 0

- Thus, only support vectors have α_i ≠ 0
- The solution has the form:

    w = Σ_{i=1}^{n} α_i y_i x_i = Σ_{i∈SV} α_i y_i x_i

  and b is obtained from y_i (w^T x_i + b) - 1 = 0, where x_i is any support vector.

[Figure: only the points lying on the margin boundaries (the support vectors) have α_i ≠ 0]
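For concreteness, here is a minimal sketch of solving this dual QP numerically and recovering w and b as above. It assumes the cvxopt package (one of many possible QP solvers) and uses a made-up, linearly separable toy dataset:

    import numpy as np
    from cvxopt import matrix, solvers

    # Tiny hypothetical 2D dataset (linearly separable)
    X = np.array([[2.0, 2.0], [2.5, 1.5], [0.0, 0.0], [0.5, -0.5]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    n = len(y)

    # cvxopt form: minimize (1/2) a^T P a + q^T a  s.t.  G a <= h,  A a = b
    P = matrix(np.outer(y, y) * (X @ X.T))   # P_ij = y_i y_j x_i^T x_j
    q = matrix(-np.ones(n))                  # turns the maximization into a minimization
    G = matrix(-np.eye(n))                   # -alpha_i <= 0, i.e. alpha_i >= 0
    h = matrix(np.zeros(n))
    A = matrix(y.reshape(1, -1))             # equality constraint: sum_i alpha_i y_i = 0
    b_eq = matrix(0.0)

    solvers.options['show_progress'] = False
    alpha = np.ravel(solvers.qp(P, q, G, h, A, b_eq)['x'])

    sv = alpha > 1e-6                        # support vectors have alpha_i > 0
    w = ((alpha * y)[:, None] * X).sum(axis=0)
    b = np.mean(y[sv] - X[sv] @ w)           # from y_i (w^T x_i + b) = 1 at the support vectors
    print(w, b, alpha.round(3))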

Solving the Optimization Problem

- The linear discriminant function is:

    g(x) = w^T x + b = Σ_{i∈SV} α_i y_i x_i^T x + b

- Notice that it relies on a dot product between the test point x and the support vectors x_i
- Also keep in mind that solving the optimization problem involved computing the dot products x_i^T x_j between all pairs of training points

Large Margin Linear Classifier

- What if the data is not linearly separable? (noisy data, outliers, etc.)
- Slack variables ξ_i can be added to allow misclassification of difficult or noisy data points

[Figure: two points fall inside the margin or on the wrong side; their slack variables ξ_1 and ξ_2 measure the violations]

Large Margin Linear Classifier

- Formulation (soft margin):

    minimize   (1/2) ||w||^2 + C Σ_{i=1}^{n} ξ_i

    such that
        y_i (w^T x_i + b) ≥ 1 - ξ_i
        ξ_i ≥ 0

- The parameter C can be viewed as a way to control over-fitting: a large C penalizes margin violations heavily, while a small C allows more slack.
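To see the effect of C in practice, a short sketch using scikit-learn's SVC with a linear kernel (the overlapping point clouds are randomly generated for illustration, and scikit-learn is assumed to be available):

    import numpy as np
    from sklearn.svm import SVC

    # Two overlapping hypothetical point clouds, so some slack is unavoidable
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 1.0, size=(50, 2)),
                   rng.normal(+1.0, 1.0, size=(50, 2))])
    y = np.array([-1] * 50 + [+1] * 50)

    for C in (0.01, 1.0, 100.0):
        clf = SVC(kernel='linear', C=C).fit(X, y)
        # Smaller C tolerates more margin violations and keeps more support vectors
        print(C, clf.n_support_.sum(), clf.score(X, y))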

Large Margin Linear Classifier

- Formulation (Lagrangian dual problem):

    maximize   Σ_{i=1}^{n} α_i - (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j x_i^T x_j

    such that
        0 ≤ α_i ≤ C,  and  Σ_{i=1}^{n} α_i y_i = 0

Non-linear SVMs

- Datasets that are linearly separable with noise work out great:

  [Figure: 1D points of the two classes on the x axis, separable at 0]

- But what are we going to do if the dataset is just too hard?

  [Figure: 1D points of the two classes interleaved on the x axis]

- How about mapping the data to a higher-dimensional space:

  [Figure: the same 1D data mapped to (x, x^2), where a line now separates the two classes]

This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt
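A tiny sketch of that idea, assuming the quadratic map x -> (x, x^2) suggested by the figure (the 1D dataset is made up):

    import numpy as np

    # Hypothetical 1D dataset that is NOT linearly separable on the x axis:
    # the -1 points sit between the +1 points
    x = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
    y = np.array([  +1,   +1,   -1,  -1,  -1,  +1,  +1])

    # Map each point to the 2D feature space (x, x^2)
    phi = np.column_stack([x, x ** 2])

    # In the new space the horizontal line x^2 = 2.5 separates the classes perfectly
    pred = np.where(phi[:, 1] > 2.5, +1, -1)
    print(np.all(pred == y))   # True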

Non-linear SVMs: Feature Space

- General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable:

    Φ: x → φ(x)

This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt

Nonlinear SVMs: The Kernel Trick

- With this mapping, our discriminant function is now:

    g(x) = w^T φ(x) + b = Σ_{i∈SV} α_i y_i φ(x_i)^T φ(x) + b

- No need to know this mapping explicitly, because we only use the dot product of feature vectors, both in training and at test time.
- A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space:

    K(x_i, x_j) ≡ φ(x_i)^T φ(x_j)

Nonlinear SVMs: The Kernel Trick

- An example: for 2-dimensional vectors x = [x_1, x_2], let K(x_i, x_j) = (1 + x_i^T x_j)^2.
  We need to show that K(x_i, x_j) = φ(x_i)^T φ(x_j):

    K(x_i, x_j) = (1 + x_i^T x_j)^2
                = 1 + x_i1^2 x_j1^2 + 2 x_i1 x_j1 x_i2 x_j2 + x_i2^2 x_j2^2 + 2 x_i1 x_j1 + 2 x_i2 x_j2
                = [1, x_i1^2, √2 x_i1 x_i2, x_i2^2, √2 x_i1, √2 x_i2]^T [1, x_j1^2, √2 x_j1 x_j2, x_j2^2, √2 x_j1, √2 x_j2]
                = φ(x_i)^T φ(x_j),

    where φ(x) = [1, x_1^2, √2 x_1 x_2, x_2^2, √2 x_1, √2 x_2]

This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt
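The identity is easy to verify numerically; a short check with arbitrary made-up 2D vectors:

    import numpy as np

    def K(a, b):
        """Polynomial kernel (1 + a^T b)^2."""
        return (1.0 + a @ b) ** 2

    def phi(x):
        """Explicit feature map corresponding to K."""
        x1, x2 = x
        s = np.sqrt(2.0)
        return np.array([1.0, x1**2, s * x1 * x2, x2**2, s * x1, s * x2])

    a = np.array([0.3, -1.2])
    b = np.array([2.0,  0.7])
    print(np.isclose(K(a, b), phi(a) @ phi(b)))   # True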

Nonlinear SVMs: The Kernel Trick

- Examples of commonly-used kernel functions:
  - Linear kernel:      K(x_i, x_j) = x_i^T x_j
  - Polynomial kernel:  K(x_i, x_j) = (1 + x_i^T x_j)^p
  - Gaussian (Radial Basis Function, RBF) kernel:  K(x_i, x_j) = exp( - ||x_i - x_j||^2 / (2σ^2) )
  - Sigmoid:            K(x_i, x_j) = tanh( β_0 x_i^T x_j + β_1 )

- In general, functions that satisfy Mercer's condition can be kernel functions.
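Each of these is a one-liner; a small sketch of all four (the parameters p, sigma, beta0, and beta1 are placeholders to be chosen per task):

    import numpy as np

    def linear_kernel(xi, xj):
        return xi @ xj

    def polynomial_kernel(xi, xj, p=2):
        return (1.0 + xi @ xj) ** p

    def rbf_kernel(xi, xj, sigma=1.0):
        return np.exp(-np.sum((xi - xj) ** 2) / (2.0 * sigma ** 2))

    def sigmoid_kernel(xi, xj, beta0=1.0, beta1=-1.0):
        return np.tanh(beta0 * (xi @ xj) + beta1)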

Nonlinear SVM: Optimization

- Formulation (Lagrangian dual problem):

    maximize   Σ_{i=1}^{n} α_i - (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j K(x_i, x_j)

    such that
        0 ≤ α_i ≤ C,  and  Σ_{i=1}^{n} α_i y_i = 0

- The solution of the discriminant function is:

    g(x) = Σ_{i∈SV} α_i y_i K(x_i, x) + b

- The optimization technique is the same.

Support Vector Machine: Algorithm

1. Choose a kernel function
2. Choose a value for C
3. Solve the quadratic programming problem (many software packages are available)
4. Construct the discriminant function from the support vectors
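These four steps map almost one-to-one onto a typical library call; a minimal end-to-end sketch using scikit-learn's SVC, which wraps LibSVM (the bundled iris dataset is used only for illustration):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Steps 1-2: choose a kernel function and a value for C
    clf = SVC(kernel='rbf', C=1.0, gamma='scale')

    # Step 3: the quadratic programming problem is solved inside fit()
    clf.fit(X_train, y_train)

    # Step 4: the fitted model keeps only the support vectors and predicts with
    # g(x) = sum_{i in SV} alpha_i y_i K(x_i, x) + b
    print(clf.n_support_, clf.score(X_test, y_test))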

Some Issues

- Choice of kernel
  - Gaussian or polynomial kernel is the default
  - if ineffective, more elaborate kernels are needed
  - domain experts can give assistance in formulating appropriate similarity measures

- Choice of kernel parameters
  - e.g. σ in the Gaussian kernel
  - σ is the distance between the closest points with different classifications
  - In the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters (a grid-search sketch follows below)

- Optimization criterion - hard margin vs. soft margin
  - a lengthy series of experiments in which various parameters are tested

This slide is courtesy of www.iro.umontreal.ca/~pift6080/documents/papers/svm_tutorial.ppt
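A common way to run that parameter search, sketched with scikit-learn's GridSearchCV (the grid values are arbitrary starting points, not recommendations):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Cross-validated grid search over C and the RBF width gamma (gamma ~ 1 / (2 sigma^2))
    param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [0.001, 0.01, 0.1, 1]}
    search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)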

Summary: Support Vector Machine

1. Large Margin Classifier
   - Better generalization ability & less over-fitting

2. The Kernel Trick
   - Map data points to a higher-dimensional space in order to make them linearly separable.
   - Since only the dot product is used, we do not need to represent the mapping explicitly.

Additional Resources

- http://www.kernel-machines.org/
- Demo of LibSVM: http://www.csie.ntu.edu.tw/~cjlin/libsvm/
