***********************************************************
***********************************************************
********* BASIC SOURCE SEPARATION CODE, 23 Jan 1996 *******
********* Tony Bell, **************************************
********* CNL, Salk Institute, PO Box 85800, San Diego ****
********* tony@salk.edu, **********************************
********* http://www.cnl.salk.edu/~tony/ ******************
***********************************************************
***********************************************************
****************** If you find this useful, ***************
************** I appreciate an acknowledgement! ***********
***********************************************************
***********************************************************
***********************************************************
*************************readsounds.m**********************
***********************************************************
<<<<<cut here>>>>
function sounds=readsounds(files)
% READSOUNDS reads the .au files named in the rows of 'files' into the
% rows of 'sounds', truncating all signals to the length of the shortest.
minlen=1e10;
for fileno=1:size(files,1),
  fprintf('reading %s \n', files(fileno,:));
  temp=auread(['/home/tony/Matlab/sounds/' files(fileno,:)])';
  len=size(temp,2);
  if minlen>len, minlen=len; end;
  sounds(fileno,1:minlen)=temp(1:minlen);
end;
sounds=sounds(:,1:minlen);   % discard samples beyond the shortest signal
<<<<<cut here>>>>
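For readers working outside MATLAB, the read-and-truncate logic of readsounds.m can be sketched in NumPy. This is an illustration only: `read_sounds` and its list-of-arrays input are assumptions standing in for `auread` and the hard-coded sound directory, not part of the original distribution.

```python
import numpy as np

def read_sounds(signals):
    """Stack 1-D signals into a matrix, truncated to the shortest length.

    `signals` is a list of 1-D arrays (one per sound file), playing the
    role of the per-file auread results in readsounds.m.
    """
    minlen = min(len(s) for s in signals)          # shortest signal wins
    return np.vstack([s[:minlen] for s in signals])

# Dummy data in place of real .au files:
a = np.arange(5.0)                                 # length 5
b = np.arange(3.0)                                 # length 3
mixed = read_sounds([a, b])                        # shape (2, 3)
```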
***********************************************************
*************************sep.m*****************************
***********************************************************
<<<<<cut here>>>>
% SEP goes once through the scrambled mixed speech signals, x
% (which is of length P), in batch blocks of size B, adjusting weights,
% w, at the end of each block.
%
% I suggest a learning rate L of 0.01, at least for 2->2 separation.
% This will be unstable for higher-dimensional data, so test it and
% use smaller values there. After convergence at one value of L, lower
% L and it will fine-tune the solution.
%
% NOTE: this rule is the rule in our NC paper, but multiplied by w^T*w,
% as proposed by Amari, Cichocki & Yang at NIPS '95. This `natural
% gradient' method speeds convergence and avoids the matrix inverse in
% the learning rule.
sweep=sweep+1; t=1;
noblocks=fix(P/B);
BI=B*Id;
for t=t:B:t-1+noblocks*B,
  u=w*x(:,t:t+B-1);                           % outputs for this block
  w=w+L*(BI+(1-2*(1./(1+exp(-u))))*u')*w;     % natural gradient update
end;
sepout
<<<<<cut here>>>>
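The update in sep.m is the natural-gradient infomax rule: per block of size B, w <- w + L*(B*I + (1 - 2*g(u))*u')*w, with g the logistic sigmoid and u = w*x_block. A NumPy sketch follows; the variable names mirror sep.m, but the Laplace-distributed toy sources and the chosen parameters are assumptions for demonstration, not part of the original scripts.

```python
import numpy as np

def sep_sweep(w, x, L=0.01, B=30):
    """One sweep of the natural-gradient infomax rule from sep.m.

    w : (N, N) unmixing matrix; x : (N, P) mixed signals.
    Per block: w += L * (B*I + (1 - 2*g(u)) @ u.T) @ w,
    where g is the logistic sigmoid and u = w @ x_block.
    """
    N, P = x.shape
    BI = B * np.eye(N)
    for t in range(0, (P // B) * B, B):
        u = w @ x[:, t:t + B]
        g = 1.0 / (1.0 + np.exp(-u))           # logistic nonlinearity
        w = w + L * (BI + (1.0 - 2.0 * g) @ u.T) @ w
    return w

# Toy 2->2 demo: mix two independent super-Gaussian sources, then unmix.
rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 10000))               # independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # mixing matrix
x = A @ s
w = np.eye(2)
for sweep in range(50):                        # re-scramble each sweep
    w = sep_sweep(w, x[:, rng.permutation(x.shape[1])])
```

After convergence, w @ A should approach a scaled permutation matrix, i.e. each output picks out one source.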
*********************************************************************
**********sepout.m: for various textual output during learning*******
*********************************************************************
<<<<<cut here>>>>
<<<<<cut here>>>>
*********************************************************************
********wchange.m: tracks size and direction of weight changes ******
*********************************************************************
<<<<<cut here>>>>
function [change,delta,angle]=wchange(w,oldw,olddelta)
[M,N]=size(w); delta=reshape(oldw-w,1,M*N);
change=delta*delta';
angle=acos((delta*olddelta')/sqrt((delta*delta')*(olddelta*olddelta')));
<<<<<cut here>>>>
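wchange.m reports the squared norm of the latest weight change and the angle between successive change vectors (an angle near 0 means learning is still heading in the same direction; near pi it is oscillating). An equivalent NumPy sketch, with a `clip` added for numerical safety that the MATLAB original does not need to spell out:

```python
import numpy as np

def wchange(w, oldw, olddelta):
    """Size and direction of a weight change, as in wchange.m."""
    delta = (oldw - w).ravel()                 # flattened change vector
    change = float(delta @ delta)              # squared norm of the change
    cosang = (delta @ olddelta) / np.sqrt(
        (delta @ delta) * (olddelta @ olddelta))
    angle = float(np.arccos(np.clip(cosang, -1.0, 1.0)))  # guard rounding
    return change, delta, angle

# Two identical steps of +0.1 per weight -> same direction, angle 0.
w0 = np.eye(2)
w1 = w0 + 0.1
w2 = w1 + 0.1
c1, d1, _ = wchange(w1, w0, np.ones(4))
c2, d2, a2 = wchange(w2, w1, d1)
```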
*********************************************************************
*************sep.run: an example script for 2->2 separation *********
*********************************************************************
<<<<<cut here>>>>
%****
%w=[1 1; 1 2];            % init. unmixing matrix, or
w=eye(N);                  % init. unmixing matrix, or w=rand(N,N);
M=size(w,2);               % M=N usually
sweep=0; oldw=w; olddelta=ones(1,N*N);
Id=eye(M);