n = -5:0.1:5;
a = hardlims(n);
plot(n,a)

n = -5:0.1:5;
a = hardlim(n);
plot(n,a)

[Figure: plots of hardlims (output from -1 to 1) and hardlim (output from 0 to 1) over n = -5..5]
$$ f(net) = \operatorname{sgn}(net) = \begin{cases} +1, & net \ge 0 \\ -1, & net < 0 \end{cases} \quad \text{(bipolar binary)} $$

$$ f(net) = \operatorname{sgn}(net) = \begin{cases} 1, & net \ge 0 \\ 0, & net < 0 \end{cases} \quad \text{(unipolar binary)} $$
Activation functions of a neuron

[Figure: graphs of the step, sign, sigmoid, and linear functions, each plotted as Y against X with levels at +1 and -1]

$$ Y^{step} = \begin{cases} 1, & \text{if } X \ge 0 \\ 0, & \text{if } X < 0 \end{cases} \qquad Y^{sign} = \begin{cases} +1, & \text{if } X \ge 0 \\ -1, & \text{if } X < 0 \end{cases} \qquad Y^{sigmoid} = \frac{1}{1 + e^{-X}} \qquad Y^{linear} = X $$
$$ f(net) = \frac{2}{1 + e^{-\lambda\,net}} - 1 \quad \text{(bipolar continuous)} $$

$$ f(net) = \operatorname{sgn}(net) = \begin{cases} +1, & net \ge 0 \\ -1, & net < 0 \end{cases} \quad \text{(bipolar binary)} $$

$$ f(net) = \frac{1}{1 + e^{-\lambda\,net}} \quad \text{(unipolar continuous)} $$

$$ f(net) = \operatorname{sgn}(net) = \begin{cases} 1, & net \ge 0 \\ 0, & net < 0 \end{cases} \quad \text{(unipolar binary)} $$
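The four activation functions above can be sketched in Python with NumPy (a translation of the MATLAB setting, assuming the usual convention sgn(0) = +1 for the binary cases and a steepness parameter lambda for the continuous ones):

```python
import numpy as np

def bipolar_binary(net):
    # f(net) = sgn(net): +1 for net >= 0, -1 otherwise
    return np.where(net >= 0, 1.0, -1.0)

def unipolar_binary(net):
    # f(net) = 1 for net >= 0, 0 otherwise
    return np.where(net >= 0, 1.0, 0.0)

def bipolar_continuous(net, lam=1.0):
    # f(net) = 2 / (1 + e^(-lambda*net)) - 1, ranges over (-1, 1)
    return 2.0 / (1.0 + np.exp(-lam * net)) - 1.0

def unipolar_continuous(net, lam=1.0):
    # f(net) = 1 / (1 + e^(-lambda*net)), ranges over (0, 1)
    return 1.0 / (1.0 + np.exp(-lam * net))
```

The continuous functions are smooth approximations of the binary ones: as lambda grows, bipolar_continuous approaches bipolar_binary and unipolar_continuous approaches unipolar_binary.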
a = logsig(n) = 1 / (1 + exp(-n))

n = -5:0.1:5;
a = logsig(n);
plot(n,a)

[Figure: plot of logsig(n), rising from 0 to 1 over n = -5..5]
n = -5:0.1:5;
a = tansig(n);
plot(n,a)

[Figure: plot of tansig(n) over n = -5..5]
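The same two curves in a Python/NumPy sketch for readers without MATLAB; `logsig` and `tansig` are not Python built-ins, so they are written out from their definitions (tansig is tanh, which is a rescaled logsig):

```python
import numpy as np

n = np.arange(-5, 5.1, 0.1)          # same grid as the MATLAB examples

logsig = 1.0 / (1.0 + np.exp(-n))    # MATLAB's logsig: 1 / (1 + e^-n)
tansig = np.tanh(n)                  # MATLAB's tansig is tanh

# tansig is a shifted, scaled logsig: tanh(n) = 2*logsig(2n) - 1
rescaled = 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0
```

This identity is why the two plots have the same S shape: logsig saturates at 0 and 1, tansig at -1 and +1.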
ERROR SURFACE
X = [2 3; 12 7; -3 5];
y = [5 19 2];
w1 = 0:0.1:2;
w2 = 0:0.1:2;
err = zeros(length(w1), length(w2));  % preallocate the error grid
for p1 = 1:length(w1)
    for p2 = 1:length(w2)
        for n = 1:3
            % compute network output for example n
            ynet = w1(p1)*X(n,1) + w2(p2)*X(n,2);
            % update total error
            err(p1,p2) = err(p1,p2) + (y(n) - ynet)^2;
        end
    end
end
% plot error function
surf(w1,w2,err);
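The same error surface in a Python/NumPy sketch (a translation of the MATLAB loops above). On this grid the total squared error reaches zero at w1 = w2 = 1, since those weights fit all three training pairs exactly (2+3 = 5, 12+7 = 19, -3+5 = 2):

```python
import numpy as np

X = np.array([[2.0, 3.0], [12.0, 7.0], [-3.0, 5.0]])
y = np.array([5.0, 19.0, 2.0])
w1 = np.arange(0.0, 2.05, 0.1)   # same grid as MATLAB's 0:0.1:2
w2 = np.arange(0.0, 2.05, 0.1)

err = np.zeros((len(w1), len(w2)))
for p1 in range(len(w1)):
    for p2 in range(len(w2)):
        # network outputs for all three examples at this weight pair
        ynet = w1[p1] * X[:, 0] + w2[p2] * X[:, 1]
        # total squared error over the training set
        err[p1, p2] = np.sum((y - ynet) ** 2)

# grid point with the smallest total squared error
i, j = np.unravel_index(np.argmin(err), err.shape)
```

Plotting `err` over the (w1, w2) grid (e.g. with matplotlib's `plot_surface`) reproduces the bowl-shaped surface that `surf` draws in MATLAB.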
Learning by Error Minimization

We want to minimize the squared error (a function of the weights) for each training pair/pattern:

$$ E = \frac{1}{2}(d_i - o_i)^2 = \frac{1}{2}\big(d_i - f(w_i^t x)\big)^2 $$

Its gradient with respect to the weights is

$$ \nabla E = -(d_i - o_i)\, f'(w_i^t x)\, x, $$

and the negative gradient $-\nabla E$ is the direction of steepest descent.
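Assuming f is the unipolar continuous sigmoid, this gradient gives the delta-rule update w ← w - η∇E. A minimal Python sketch (the learning rate eta and the sample values below are illustrative, not from the source):

```python
import numpy as np

def sigmoid(net):
    # unipolar continuous activation: 1 / (1 + e^-net)
    return 1.0 / (1.0 + np.exp(-net))

def delta_rule_step(w, x, d, eta=0.5):
    # o = f(w^t x); E = 1/2 (d - o)^2; grad E = -(d - o) f'(w^t x) x
    o = sigmoid(w @ x)
    grad = -(d - o) * o * (1.0 - o) * x   # f'(net) = o(1 - o) for the sigmoid
    return w - eta * grad                 # step along the steepest descent

# a few steps on a single training pair shrink the squared error
w = np.zeros(2)
x = np.array([1.0, 2.0])
d = 1.0
for _ in range(20):
    w = delta_rule_step(w, x, d)
```

Each step moves the weights against the gradient, so the output o drifts toward the desired value d and the error E decreases.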