
EXPERIMENT NO.

AIM:
To plot different activation functions used in Artificial Neural Networks.
SOFTWARE REQUIRED:
Jupyter Notebook. Libraries: Matplotlib (the standard-library math module is also used).
THEORY:
Activation Function-
 Activation functions are mathematical equations that determine the output of
a neural network. The function is attached to each neuron in the network,
and determines whether it should be activated (“fired”) or not, based on
whether each neuron’s input is relevant for the model’s prediction.
 Activation functions also help normalize the output of each neuron to a range between 0 and 1 or between -1 and 1.
 An additional aspect of activation functions is that they must be
computationally efficient because they are calculated across thousands or
even millions of neurons for each data sample.
 Modern neural networks use a technique called backpropagation to train the model, which places an increased computational strain on the activation function and its derivative.
The different activation functions to be plotted are:
1. Hardlimit
2. Symmetrical Hardlimit
3. Linear
4. Saturating Linear
5. Symmetric Saturating Linear
6. Log Sigmoid
7. Hyperbolic Tangent Sigmoid
8. ReLU
1. Hardlimit function:
 The hard limit transfer function forces a neuron to output a 1 if its net input reaches a threshold; otherwise it outputs 0. This allows a neuron to make a decision or classification: it can say yes or no. This kind of neuron is often trained with the perceptron learning rule (a short sketch follows the equation at the end of this section).
 Transfer functions calculate a layer's output from its net input.
 hardlim(N) takes one input,

N - S x Q matrix of net input (column) vectors,

and returns 1 where N is greater than or equal to 0, and 0 elsewhere.


 hardlim(code) returns useful information for each code string:

'deriv' - Name of derivative function.

'name' - Full name.

'output' - Output range.

'active' - Active input range.

 Equation:

a = 0 for n<0

a = 1 for n>=0
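
The perceptron learning rule mentioned above is not part of this experiment, but a minimal sketch shows where the hard limit function is used in practice. The AND-gate data, learning rate, and epoch count below are illustrative assumptions, not taken from the original text.

# Minimal perceptron-learning-rule sketch using the hard limit function
# (the AND-gate data, learning rate and epoch count are illustrative assumptions)
def hardlim(n):
    return 1 if n >= 0 else 0              # a = 1 for n >= 0, else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]       # inputs (AND gate, for illustration)
T = [0, 0, 0, 1]                           # target outputs
w = [0.0, 0.0]                             # weights
b = 0.0                                    # bias
lr = 0.1                                   # learning rate

for epoch in range(20):
    for (x1, x2), t in zip(X, T):
        a = hardlim(w[0]*x1 + w[1]*x2 + b)  # neuron output
        e = t - a                           # error
        w[0] += lr * e * x1                 # perceptron weight update
        w[1] += lr * e * x2
        b += lr * e                         # bias update

print(w, b)   # weights and bias of the learned AND decision boundary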
2. Symmetrical Hardlimit function:
 The symmetric hard limit transfer function forces a neuron to output a 1 if its
net input reaches a threshold. Otherwise it outputs -1. Like the regular hard
limit function, this allows a neuron to make a decision or classification. It
can say yes or no.
 hardlims is a transfer function. Transfer functions calculate a layer's output
from its net input.
 hardlims(N) takes one input,

N - S x Q matrix of net input (column) vectors,

and returns 1 where N is greater than or equal to 0, and -1 elsewhere.


 hardlims(code) returns useful information for each code string:

'deriv' - Name of derivative function.

'name' - Full name.

'output' - Output range.

'active' - Active input range.

 Equation:

a = -1 for n<0

a = 1 for n>=0
3. Linear function:
 A linear activation function takes the form:

A = cx

 It takes the inputs, multiplies them by the weights for each neuron, and creates an output signal proportional to the input. In one sense, a linear function is better than a step function because it allows multiple output values, not just yes and no.
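 It is worth noting (a standard fact, not stated in the original text) that the derivative of a = cx is simply the constant c, so the gradient used in backpropagation carries no information about the input, and a network built only from linear activations collapses into a single linear layer.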

4. Saturating linear function:


 It takes input as n and output as a.
 Equation:

a = 0 for n<0

a = n for 0<=n<=1

a = 1 for n>1
5. Symmetric saturating linear function:
 It takes input as n and output as a.
 Equation:

a = -1 for n<-1

a = n for -1<=n<=1

a = 1 for n>1

6. Log sigmoid function:

Advantages:

 Smooth gradient, preventing “jumps” in output values.
 Output values bound between 0 and 1, normalizing the output of each neuron.
 Clear predictions: for X above 2 or below -2, the function tends to bring the Y value (the prediction) to the edge of the curve, very close to 1 or 0. This enables clear predictions.

Disadvantages:

 Vanishing gradient: for very high or very low values of X, there is almost no change to the prediction, causing a vanishing gradient problem. This can result in the network refusing to learn further, or being too slow to reach an accurate prediction.
 Outputs are not zero-centered.
 Computationally expensive.
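
 Equation (the formula used in the program section below):

a = 1/(1 + e^-n)

Its derivative is da/dn = a(1 - a), which approaches 0 for large positive or negative n; this is the vanishing-gradient behavior described above.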

7. Hyperbolic Tangent sigmoid:


Advantages:

 Zero-centered: making it easier to model inputs that have strongly negative, neutral, and strongly positive values.
 Otherwise like the Sigmoid function.

Disadvantages:

 Similar to those of the Sigmoid function.
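
 Equation (the formula used in the program section below):

a = (e^n - e^-n)/(e^n + e^-n)

Its derivative is da/dn = 1 - a^2, which also approaches 0 for large positive or negative n.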


8. ReLU (Rectified Linear Unit):

Advantages:

 Computationally efficient: allows the network to converge very quickly.
 Non-linear: although it looks like a linear function, ReLU has a derivative function and allows for backpropagation.

Disadvantages:

 The dying ReLU problem: when inputs approach zero or are negative, the gradient of the function becomes zero, so the network cannot perform backpropagation and cannot learn.
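
 Equation (matching the program section below):

a = 0 for n<0

a = n for n>=0

i.e. a = max(0, n). Its derivative is 0 for n<0 and 1 for n>0, which is why the gradient vanishes for negative inputs (the dying ReLU problem described above).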
PROGRAM AND OUTPUT:
1. Simple Hard Limit:
import matplotlib.pyplot as plt

n = range(-10, 10)
# Simple Hard Limit: a = 0 for n < 0, a = 1 for n >= 0
y = []
for i in n:
    if i < 0:
        y.append(0)
    else:
        y.append(1)
plt.step(n, y, label="Simple Hard Limit")
plt.legend(loc="upper left")
plt.axhline(0, linewidth=2, color='#000000')   # x-axis
plt.axvline(0, linewidth=2, color='#000000')   # y-axis
plt.show()

2. Symmetrical Hard Limit:


# Symmetrical Hard Limit: a = -1 for n < 0, a = 1 for n >= 0
y = []
for i in n:
    if i < 0:
        y.append(-1)
    else:
        y.append(1)
plt.step(n, y, label="Symmetrical Hard Limit")
plt.legend(loc="upper left")
plt.axhline(0, linewidth=2, color='#000000')
plt.axvline(0, linewidth=2, color='#000000')
plt.show()
3. Linear:
# Linear: a = n (identity; the slope c in a = c*n is taken as 1)
y = [i for i in n]
plt.plot(n, y, label="Linear function")
plt.legend(loc="upper left")
plt.axhline(0, linewidth=2, color='#000000')
plt.axvline(0, linewidth=2, color='#000000')
plt.show()
4. Saturating Linear:
# Saturating Linear: a = 0 for n < 0, a = n for 0 <= n <= 1, a = 1 for n > 1
y = []
for i in n:
    if i < 0:
        y.append(0)
    elif i <= 1:
        y.append(i)
    else:
        y.append(1)
plt.plot(n, y, label="Saturating Linear")
plt.legend(loc="upper left")
plt.axhline(0, linewidth=2, color='#000000')
plt.axvline(0, linewidth=2, color='#000000')
plt.show()

5. Symmetric Saturating Linear:


# Symmetric Saturating Linear: a = -1 for n < -1, a = n for -1 <= n <= 1, a = 1 for n > 1
y = []
for i in n:
    if i < -1:
        y.append(-1)
    elif i <= 1:
        y.append(i)
    else:
        y.append(1)
plt.plot(n, y, label="Symmetric Saturating Linear")
plt.legend(loc="upper left")
plt.axhline(0, linewidth=2, color='#000000')
plt.axvline(0, linewidth=2, color='#000000')
plt.show()
6. Log-Sigmoid:
# Log-Sigmoid: a = 1 / (1 + e^(-n))
import math
y = [1 / (1 + math.exp(-i)) for i in n]
plt.plot(n, y, label="Log-Sigmoid")
plt.legend(loc="upper left")
plt.axhline(0, linewidth=2, color='#000000')
plt.axvline(0, linewidth=2, color='#000000')
plt.show()
7. Hyperbolic Tangent Sigmoid:
# Hyperbolic Tangent Sigmoid: a = (e^n - e^-n) / (e^n + e^-n)
y = [(math.exp(i) - math.exp(-i)) / (math.exp(i) + math.exp(-i)) for i in n]
plt.plot(n, y, label="Hyperbolic Tangent Sigmoid")
plt.legend(loc="upper left")
plt.axhline(0, linewidth=2, color='#000000')
plt.axvline(0, linewidth=2, color='#000000')
plt.show()

8. ReLU:
# ReLU: a = 0 for n < 0, a = n for n >= 0
y = []
for i in n:
    if i < 0:
        y.append(0)
    else:
        y.append(i)
plt.plot(n, y, label="ReLU")
plt.legend(loc="upper left")
plt.axhline(0, linewidth=2, color='#000000')
plt.axvline(0, linewidth=2, color='#000000')
plt.show()
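
As an optional, more compact alternative to the per-function loops above, all eight curves can be produced from a single array. This is only a sketch: it assumes NumPy is available in addition to Matplotlib (NumPy is not listed in the software requirements above), and it uses a denser input range than range(-10, 10) so the curved functions plot smoothly.

# Optional vectorized sketch of the same eight activation functions (assumes NumPy)
import numpy as np
import matplotlib.pyplot as plt

n = np.linspace(-10, 10, 400)              # dense input range for smooth curves

activations = {
    "Simple Hard Limit":           np.where(n >= 0, 1, 0),
    "Symmetrical Hard Limit":      np.where(n >= 0, 1, -1),
    "Linear":                      n,
    "Saturating Linear":           np.clip(n, 0, 1),
    "Symmetric Saturating Linear": np.clip(n, -1, 1),
    "Log-Sigmoid":                 1 / (1 + np.exp(-n)),
    "Hyperbolic Tangent Sigmoid":  np.tanh(n),
    "ReLU":                        np.maximum(0, n),
}

fig, axes = plt.subplots(2, 4, figsize=(16, 6))
for ax, (name, a) in zip(axes.flat, activations.items()):
    ax.plot(n, a)
    ax.set_title(name)
    ax.axhline(0, linewidth=1, color='#000000')   # x-axis
    ax.axvline(0, linewidth=1, color='#000000')   # y-axis
plt.tight_layout()
plt.show()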
CONCLUSION:
