
# Blind source separation

## Mean and Variance

## Covariance Matrix

## Eigenvalues and Eigenvectors

If $Av = \lambda v$ for a matrix $A$ and a non-zero vector $v$, then $v$ is an eigenvector of $A$ and $\lambda$ is the eigenvalue of $A$ corresponding to $v$. Eigenvectors are the non-zero vectors that, after being multiplied by the matrix, remain proportional to the original vector: only the magnitude changes, not the direction.
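As a concrete sketch (with hypothetical matrix values), NumPy's `np.linalg.eig` returns the eigenvalues and eigenvectors, and we can check the defining property $Av = \lambda v$ directly:

```python
import numpy as np

# A toy 2x2 matrix (hypothetical example values).
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# np.linalg.eig returns the eigenvalues and the column eigenvectors of A.
eigvals, eigvecs = np.linalg.eig(A)

# For each pair, A @ v equals lambda * v: the direction is unchanged,
# only the magnitude is scaled by the eigenvalue.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```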

## Covariance

- Covariance is always measured between two dimensions.
- Positive value: both dimensions increase together.
- Negative value: one dimension increases while the other decreases.
- Zero: the dimensions are independent of each other.
- Covariance between one dimension and itself gives the variance.
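A quick sketch of these properties, using synthetic data (the variables and coefficients are made up for illustration) and NumPy's `np.cov`:

```python
import numpy as np

rng = np.random.default_rng(0)

# y increases with x, so cov(x, y) should be positive;
# z is independent noise, so cov(x, z) should be near zero.
x = rng.normal(size=1000)
y = 2.0 * x + 0.1 * rng.normal(size=1000)
z = rng.normal(size=1000)

# np.cov treats each row as one dimension (variable).
C = np.cov(np.vstack([x, y, z]))

# Diagonal entries are the variances: cov(x, x) = var(x).
assert np.isclose(C[0, 0], np.var(x, ddof=1))
```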

## Singular Value Decomposition

Decompose $X = AZ$ into $X = USV^T$:

- $S$ is a diagonal matrix of singular values, with elements arranged in descending order of magnitude (the singular spectrum).
- The columns of $V$ are the eigenvectors of $C = X^TX$.
- $U$ is the matrix of projections of $X$ onto the eigenvectors of $C$.
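These three properties can be verified numerically on a random matrix (the data here is arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))   # hypothetical observation matrix

# Economy SVD: X = U @ diag(S) @ Vt, singular values in descending order.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
assert np.all(np.diff(S) <= 0)                 # the singular spectrum descends
assert np.allclose(U @ np.diag(S) @ Vt, X)     # exact reconstruction

# The columns of V (rows of Vt) are eigenvectors of C = X^T X,
# with eigenvalues equal to the squared singular values.
C = X.T @ X
for s, v in zip(S, Vt):
    assert np.allclose(C @ v, (s ** 2) * v)
```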

## Introduction

- Blind:
  - No information about the sources of the signal.
  - No information about the mixing of the signal.
- Assumptions:
  - Sources are statistically independent.
  - Mixing is linear and stationary.
- Methods:
  - Principal Component Analysis (PCA)
  - Independent Component Analysis (ICA)

## Definition of BSS

Assuming the observed signal is a linear and stationary mixture of more than one unknown, independent source signal, Blind Source Separation separates the source signals using the statistical independence of the sources. It is a method for separating a set of signals from a set of mixed signals without the aid of information about the source signals or the mixing process.

## Formal Statement of Problem

- $N$: number of sources; $M$: number of samples.
- $N$ independent sources: $Z^T$ ($N \times M$).
- Linear square mixing (#sources = #sensors): $X^T = AZ^T$, with mixing matrix $A$ ($N \times N$).

## Formal Statement of Solution

Demix the observations $X^T$ ($N \times M$) into $Y^T = WX^T$, where $Y^T$ ($N \times M$) $\approx Z^T$ and the demixing matrix $W$ ($N \times N$) $\approx A^{-1}$.
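A minimal sketch of this setup, with a made-up $2 \times 2$ mixing matrix. Here $W$ is computed from the known $A$ just to show the algebra; an actual BSS algorithm must estimate $W$ blindly, e.g. via PCA or ICA:

```python
import numpy as np

rng = np.random.default_rng(2)

N, M = 2, 500                        # N sources, M samples
Z = rng.uniform(-1, 1, size=(N, M))  # hypothetical independent sources (Z^T)

A = np.array([[1.0, 0.5],            # square mixing matrix (N x N), assumed values
              [0.3, 1.0]])
X = A @ Z                            # observations: X^T = A Z^T

# With A known, the ideal demixing matrix is W = A^{-1}, so Y = W X recovers Z.
W = np.linalg.inv(A)
Y = W @ X
assert np.allclose(Y, Z)
```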

[Figure: a signal source and a noise source combine to form the observed mixture]

## Blind Source Separation

[Figure: BSS block diagram — sources $Z^T$ are mixed into observations $X^T$, then demixed into estimates $Y^T$]

## BSS is a Transform?

- Like Fourier: we decompose into components by transforming the observations into another vector space, one which maximises the separation between interesting (signal) and unwanted (noise) components.
- Unlike Fourier: separation is not based on frequency but on independence; sources can have the same frequency content.
- No assumptions are made about the signals, other than that they are independent and linearly mixed.

## The Fourier Transform

## Eigenspectrum of the Decomposition

The eigenspectrum is the plot of the eigenvalues against eigenvector number.

[Figure: eigenspectrum — eigenvalue (y-axis) vs. eigenvector number (x-axis)]

## SVD noise/signal separation

To perform SVD filtering of a signal, use a truncated SVD decomposition (keeping only the first $p$ eigenvectors): $Y = US_pV^T$. This reduces the dimensionality of the data by discarding the noise projections (setting $S_{noise} = 0$), then reconstructing the data from the signal subspace alone.

Most of the signal is contained in the first few principal components. Discarding the remaining (noise) components and projecting back into the original observation space effects a noise filtering, i.e. a noise/signal separation.

[Figure: eigenspectrum — eigenvalue vs. eigenvector number, with the signal concentrated in the first few eigenvalues]
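A small sketch of truncated-SVD filtering, on made-up data: a rank-1 sinusoid across 8 channels plus additive noise, keeping $p = 1$ component:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: a rank-1 sinusoidal signal on 8 channels, plus noise.
t = np.linspace(0, 1, 200)
signal = np.outer(np.sin(2 * np.pi * 5 * t), np.ones(8))
X = signal + 0.1 * rng.normal(size=signal.shape)

U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the first p singular values (the signal subspace); zero the rest.
p = 1
Sp = np.zeros_like(S)
Sp[:p] = S[:p]
Xp = U @ np.diag(Sp) @ Vt   # reconstruction from the signal subspace alone

# The truncated reconstruction is closer to the clean signal than X is.
assert np.linalg.norm(Xp - signal) < np.linalg.norm(X - signal)
```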

[Figure: graphs of SVD — original data $X$ and truncated reconstruction $X_p = US_pV^T$ for Signal 1 and Signal 2]
## Original PDF with Principal and Independent Components

## Skewness

Skewness is the odd (third) moment: is most of the data greater than or less than the mean? It can be negative or positive.

## Kurtosis

Kurtosis measures how non-Gaussian the data is. Gaussians are mesokurtic, with kurtosis $= 3$.

- Sub-Gaussian: negative excess kurtosis (platykurtic).
- Super-Gaussian: positive excess kurtosis (leptokurtic).
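Both moments can be computed directly from their definitions; a sketch on synthetic samples (distribution choices are illustrative):

```python
import numpy as np

def skewness(x):
    # Third standardised moment: negative if the tail is to the left
    # of the mean, positive if it is to the right.
    x = x - x.mean()
    return np.mean(x ** 3) / np.mean(x ** 2) ** 1.5

def kurtosis(x):
    # Fourth standardised moment: 3 for a Gaussian (mesokurtic),
    # < 3 sub-Gaussian (platykurtic), > 3 super-Gaussian (leptokurtic).
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

rng = np.random.default_rng(4)
gauss = rng.normal(size=100_000)
uniform = rng.uniform(-1, 1, size=100_000)   # sub-Gaussian example
laplace = rng.laplace(size=100_000)          # super-Gaussian example

assert abs(kurtosis(gauss) - 3.0) < 0.1
assert kurtosis(uniform) < 3.0 < kurtosis(laplace)
```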

## Non-Gaussianity ⇒ Statistical Independence?

Maximally non-Gaussian components correspond to the independent sources.

## Central Limit Theorem

Add enough signals together and the result tends toward a Gaussian PDF, so mixtures are more Gaussian than their sources. We therefore look for maximally non-Gaussian projections to find the sources.
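This tendency toward Gaussianity can be sketched numerically: averaging many strongly non-Gaussian (uniform) sources produces a mixture whose kurtosis is much closer to the Gaussian value of 3 (the source count and distribution are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)

def kurtosis(x):
    # Fourth standardised moment; 3 for a Gaussian.
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

# Each source is uniform (kurtosis 1.8, strongly sub-Gaussian).
sources = rng.uniform(-1, 1, size=(50, 100_000))

one = sources[0]             # a single source
mix = sources.mean(axis=0)   # average of 50 independent sources

# The mixture's kurtosis is far closer to the Gaussian value of 3.
assert abs(kurtosis(mix) - 3.0) < abs(kurtosis(one) - 3.0)
```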