
Frank Dellaert

May 2008

The singular value decomposition (SVD) factorizes a linear operator A : R^n → R^m into three simpler linear operators:

1. Projection z = V^T x into an r-dimensional space, where r is the rank of A

2. Element-wise multiplication with the r singular values σ_i, i.e., z' = S z

3. Transformation y = U z' to the m-dimensional output space

Combining these steps, A can be rewritten as

    A = U S V^T    (1)

with U an m × r orthonormal matrix spanning A's column space im(A), S an r × r diagonal matrix of singular values, and V an n × r orthonormal matrix spanning A's row space im(A^T).
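The three steps can be traced numerically with NumPy's `svd`; the matrix and vector below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))          # m = 5, n = 3, full rank r = 3

# Thin SVD: U is m x r, s holds the r singular values, Vt is V^T (r x n).
U, s, Vt = np.linalg.svd(A, full_matrices=False)

x = rng.standard_normal(3)
z = Vt @ x            # 1. project into the r-dimensional row space
z_prime = s * z       # 2. scale element-wise by the singular values
y = U @ z_prime       # 3. map into the m-dimensional output space

assert np.allclose(y, A @ x)                 # the three steps reproduce A x
assert np.allclose(A, U @ np.diag(s) @ Vt)   # A = U S V^T
```

Note that NumPy returns V^T directly, and the singular values as a vector rather than the diagonal matrix S.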

The eigenvalue decomposition applies to mappings from R^n to itself, i.e., a linear operator A : R^n → R^n described by a square matrix. An eigenvector e of A is a vector that is mapped to a scaled version of itself, i.e., Ae = λe, where λ is the corresponding eigenvalue. For a matrix A of rank r, we can group the r non-zero eigenvalues in an r × r diagonal matrix Λ and their eigenvectors in an n × r matrix E, and we have

    A E = E Λ

Furthermore, if A is full rank (r = n), then A can be factorized as

    A = E Λ E^-1

which is a diagonalization similar to the SVD (1). In fact, if and only if A is symmetric and positive definite (abbreviated SPD), the SVD and the eigen-decomposition coincide:

    A = U S U^T = E Λ E^-1

with U = E and S = Λ.
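This coincidence is easy to verify numerically; a small NumPy sketch, where the test matrix is an arbitrary construction that is SPD by design:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)     # symmetric positive definite by construction

lam, E = np.linalg.eigh(A)      # eigen-decomposition; eigenvalues ascending
U, s, Vt = np.linalg.svd(A)     # SVD; singular values descending

# For an SPD matrix the singular values are exactly the eigenvalues,
# and both factorizations reproduce A.
assert np.allclose(np.sort(s), lam)
assert np.allclose(A, E @ np.diag(lam) @ E.T)
assert np.allclose(A, U @ np.diag(s) @ Vt)
```

`eigh` exploits symmetry and returns orthonormal eigenvectors, so E^-1 = E^T here.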

Given a non-square matrix A = U S V^T, two matrices and their factorizations are of special interest:

    A^T A = V S^2 V^T    (2)

    A A^T = U S^2 U^T    (3)

Thus, for these matrices the SVD of the original matrix A can be used to compute their SVD. And since these matrices are by definition SPD, this is also their eigen-decomposition, with eigenvalues Λ = S^2.
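Both identities follow by substituting A = U S V^T and using V^T V = U^T U = I_r. A quick NumPy verification with an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A^T A = V S^2 V^T and A A^T = U S^2 U^T.
assert np.allclose(A.T @ A, Vt.T @ np.diag(s**2) @ Vt)
assert np.allclose(A @ A.T, U @ np.diag(s**2) @ U.T)

# Since A^T A is SPD here, this is also its eigen-decomposition,
# with eigenvalues s^2.
lam = np.linalg.eigvalsh(A.T @ A)        # ascending eigenvalues
assert np.allclose(np.sort(s**2), lam)
```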


An important application is principal components analysis (PCA). To motivate it, consider a normally distributed, r-dimensional vector x ~ N(0, I_r) with zero mean and unit covariance matrix. In other words, each component of the vector x is drawn independently from a 1-dimensional Gaussian with zero mean and unit variance, i.e., white noise. If we transform the vector x to an m-dimensional output space using the linear transformation

    y = μ + W x

with W of size m × r, then the m-dimensional vectors y will be distributed as y ~ N(μ, W W^T).
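A sketch of this generative view in NumPy, with arbitrary choices of μ and W; with enough samples the empirical covariance approaches W W^T:

```python
import numpy as np

rng = np.random.default_rng(3)
m, r, n = 4, 2, 200_000
mu = rng.standard_normal(m)              # arbitrary mean
W = rng.standard_normal((m, r))          # arbitrary m x r transformation

X = rng.standard_normal((r, n))          # n white latent vectors x ~ N(0, I_r)
Y = mu[:, None] + W @ X                  # y = mu + W x, one sample per column

cov = np.cov(Y)                          # empirical covariance of the samples
assert np.allclose(cov, W @ W.T, atol=0.1)
```

The tolerance is illustrative; the sampling error of the empirical covariance shrinks as 1/sqrt(n).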

Given an m × n data matrix Y of n data samples y, PCA uses eigen-analysis to recover the (scaled) principal components W. After subtracting the sample mean μ from all vectors y, forming the matrix A, the eigen-decomposition of the sample covariance matrix A A^T is obtained from (3):

    A A^T = U S^2 U^T = (U S)(U S)^T = W W^T

Hence, the data can be whitened by

    x = W^+ (y − μ)

where W^+ = S^-1 U^T denotes the pseudoinverse of W. As a sanity check, the resulting covariance of x is indeed unity:

    X X^T = W^+ (A A^T) (W^+)^T = W^+ (W W^T) (W^+)^T = I_r
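The whole recovery-and-whitening pipeline can be sketched in NumPy; the data below is synthetic rank-r data, and the whitening uses `np.linalg.pinv` for the pseudoinverse, which makes X X^T the identity up to round-off:

```python
import numpy as np

rng = np.random.default_rng(4)
m, r, n = 4, 2, 100
Y = rng.standard_normal((m, r)) @ rng.standard_normal((r, n)) + 1.0  # rank-r data

A = Y - Y.mean(axis=1, keepdims=True)    # subtract the sample mean
U, s, Vt = np.linalg.svd(A, full_matrices=False)
W = U[:, :r] * s[:r]                     # scaled principal components W = U S

X = np.linalg.pinv(W) @ A                # whiten: X = W^+ A
assert np.allclose(X @ X.T, np.eye(r))   # X X^T = W^+ (W W^T) (W^+)^T = I_r
```

For a full-column-rank W = U S, `pinv(W)` equals S^-1 U^T, matching the whitening formula above.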

Two more facts regarding PCA are worth noting:

1. If the dimensionality m of the data matrix Y is very large, it is more efficient to use the eigen-decomposition (2) of the smaller n × n matrix A^T A and obtain the principal components as W = A V.

2. The m × r matrix W contains the scaled principal components. The normalized principal components are the columns of U, and their lengths are the singular values σ_i.
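The first fact can be checked numerically: with m ≫ n, eigen-decomposing the small n × n matrix A^T A gives the same scaled principal components (up to per-column signs) as the SVD of A. A NumPy sketch with arbitrary synthetic rank-5 data:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 10_000, 50                        # far more dimensions than samples
A = rng.standard_normal((m, 5)) @ rng.standard_normal((5, n))
A -= A.mean(axis=1, keepdims=True)       # centered data matrix

# Eigen-decompose the small n x n matrix A^T A = V S^2 V^T instead of
# the huge m x m matrix A A^T.
lam, V = np.linalg.eigh(A.T @ A)         # eigh returns ascending eigenvalues
lam, V = lam[::-1], V[:, ::-1]           # reorder to the descending SVD convention

W_small = A @ V                          # scaled principal components W = A V

# Compare against the direct route W = U S (up to per-column sign choices).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(np.sqrt(lam[:5]), s[:5])
assert np.allclose(np.abs(W_small[:, :5]), np.abs((U * s)[:, :5]))
```

Only the first 5 columns are compared because the data has rank 5; the remaining eigenvalues are numerically zero.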

Finally, it is interesting that to sample from the density y ~ N(μ, W W^T) one can proceed in two ways:

1. Sample x ~ N(0, I_r) and form y = μ + W x, i.e., generate a white latent vector x and use the principal components W of the data Y to generate an m-dimensional vector.

2. Sample c ~ N(0, I_n) and form y = μ + A c, i.e., simply form a linear combination of the original n (centered) data points, with coefficients {c_j}_{j=1..n} drawn from zero-mean, unit-variance Gaussians.
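Both routes can be compared empirically: route 1 has covariance W W^T and route 2 has covariance A A^T, and these coincide. A NumPy sketch in which sizes, the mean, and the tolerances are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
m, r, n = 3, 2, 100
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A -= A.mean(axis=1, keepdims=True)       # centered rank-r data matrix
mu = np.array([1.0, -2.0, 0.5])          # an arbitrary mean

U, s, Vt = np.linalg.svd(A, full_matrices=False)
W = U[:, :r] * s[:r]                     # scaled principal components W = U S

k = 50_000                               # samples to draw each way

# Way 1: white latent vectors x ~ N(0, I_r) through the principal components.
Y1 = mu[:, None] + W @ rng.standard_normal((r, k))

# Way 2: random linear combinations of the centered data, c ~ N(0, I_n).
Y2 = mu[:, None] + A @ rng.standard_normal((n, k))

# Both have covariance W W^T = A A^T, so both sample covariances approach it.
assert np.allclose(np.cov(Y1), A @ A.T, rtol=0.1, atol=5.0)
assert np.allclose(np.cov(Y2), A @ A.T, rtol=0.1, atol=5.0)
```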

If the data lives in a small subspace of rank r, then simply drawing s ≥ r samples Y_s from the data and performing PCA will, with high probability, recover a set of scaled principal components W_s = U_s S_s (of size m × r) that span the subspace. Then, forming the r × n matrix X_s of latent variables

    X_s = W_s^+ A

for all centered samples A will not quite whiten the data. However, by doing a second decomposition on the covariance of X_s,

    X_s X_s^T = W_2 W_2^T

the latent variables are whitened. Hence, if we form W = W_s W_2 and X = W^+ A = W_2^+ X_s, we have

    X X^T = W_2^+ (X_s X_s^T) (W_2^+)^T = W_2^+ (W_2 W_2^T) (W_2^+)^T = I_r
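The two-stage scheme is short to write down in NumPy; the sketch below uses pseudoinverses for both projections and synthetic rank-r data, so the final covariance comes out as the identity up to round-off:

```python
import numpy as np

rng = np.random.default_rng(7)
m, r, n = 6, 2, 500
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A -= A.mean(axis=1, keepdims=True)       # centered rank-r data

s_count = 10                             # draw s >= r sample columns
cols = rng.choice(n, size=s_count, replace=False)
Ys = A[:, cols]

# PCA on the subsample gives Ws = Us Ss spanning the same rank-r subspace.
Us, ss, _ = np.linalg.svd(Ys, full_matrices=False)
Ws = Us[:, :r] * ss[:r]

# First projection: latent variables span the subspace but are not white.
Xs = np.linalg.pinv(Ws) @ A              # X_s = W_s^+ A

# Second, small (r x r) decomposition whitens them.
s2, U2 = np.linalg.eigh(Xs @ Xs.T)       # X_s X_s^T = W2 W2^T
W2 = U2 * np.sqrt(s2)

X = np.linalg.pinv(W2) @ Xs              # X = W2^+ X_s
assert np.allclose(X @ X.T, np.eye(r))
```

The second decomposition is cheap: it operates on an r × r matrix regardless of m and n.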

If the data is very high-dimensional, i.e., m ≫ n, a similar algorithm can be devised by sampling rows instead of columns.
