Nagar Yuwak Shikshan Sanstha's
Yeshwantrao Chavan College of Engineering
(An Autonomous Institution affiliated to Rashtrasant Tukadoji Maharaj Nagpur University)
Hingna Road, Wanadongri, Nagpur - 441 110

Course Teacher: Mrs. A. D. Belsare
Academic Year: 2014-15

Electronics & Telecommunication Engineering


VIII: ET-422: PE5: Fuzzy Logic & Neural Network
VII BE: ET411: Free Elective: Soft Computing

Question Bank
Neural Network
Supervised Neural Network
1. Using the linear separability concept, obtain the response for the OR function. Use bipolar inputs
and targets.
2. Implement the AND function using the McCulloch-Pitts (MP) neuron.
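A minimal MATLAB sketch of an MP neuron realizing AND is given below, in the style of the programming exercises at the end of this question bank; the unit weights and the threshold T = 2 are one common choice and are assumptions of this sketch, not values prescribed by the question.
% McCulloch-Pitts neuron for the AND function (binary inputs)
x = [0 0; 0 1; 1 0; 1 1];      % all binary input pairs
w = [1 1];                     % excitatory weights (assumed)
T = 2;                         % firing threshold (assumed)
for i = 1:size(x,1)
    net = x(i,:) * w';         % net input
    y = double(net >= T);      % the neuron fires only if net >= T
    fprintf('x1=%d x2=%d -> y=%d\n', x(i,1), x(i,2), y);
end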
3. Implement the XOR function using MP neurons.
4. Implement the ANDNOT function using a) the MP neuron model
b) a Perceptron neural network
c) an ADALINE NN
d) a Hebb NN (bipolar inputs & targets)
5. Implement the OR function using a Hebb NN (bipolar inputs & targets).
6. Implement the XOR function using a Hebb NN (bipolar inputs & targets).
7. What are the different learning laws of a neural network? Determine the weights of the
network shown in the figure below after one iteration using Hebb's law, the Perceptron law and
the LMS law for the following set of input vectors:
Input: [1 1 0 0]^T, [0 0 1 1]^T, [1 0 0 1]^T and [0 1 1 0]^T. Use the bipolar binary output function.

8. What are the different activation functions used in neural networks?


9. Using the Hebb rule, find the weights required to perform the following classification of the given
input patterns:
[Pattern figure: the letters "I" and "O" drawn on a small grid of '+' marks and empty squares]
'+' => +1 and empty square => -1.
For pattern I the target = +1, and for pattern O the target = -1.

10. For the logic networks shown in the figure below, using the McCulloch-Pitts model neuron, find the
truth tables and the logic functions that are implemented by networks (a), (b), (c) and (d).
[Figure: four McCulloch-Pitts networks (a)-(d) with inputs x1, x2, x3, thresholds T = 0, T = 1 or T = 2,
weights of +1 and -1, and outputs O(x1, x2, x3) or O(x1, x2)]

11. Using the Hebb rule, find the weights required to perform the following classification: the
vectors (-1 -1 1 -1) and (1 1 1 -1) belong to class 1 (target = +1); the vectors (-1 -1 1 1) and
(1 1 -1 -1) do not belong to class 1 (target = -1). Also, using each of the training vectors x as
input, test the response of the net.
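A minimal MATLAB sketch of Hebbian learning for this kind of bipolar classification task follows; the four vectors and targets are taken from the question, while the zero initial weights and the inclusion of a bias term are common conventions assumed here.
% Hebb rule: w = w + t*x, b = b + t, applied once per training pair
X = [-1 -1  1 -1;
      1  1  1 -1;
     -1 -1  1  1;
      1  1 -1 -1];          % training vectors (rows)
t = [ 1;  1; -1; -1];        % bipolar targets
w = zeros(1,4); b = 0;       % assumed zero initial weights and bias
for i = 1:size(X,1)
    w = w + t(i) * X(i,:);   % Hebbian weight update
    b = b + t(i);            % bias update
end
y = sign(X * w' + b);        % recall: response of the net to each training vector
disp([y t])                  % compare net response with targets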
12. Implement the OR function with binary inputs & bipolar targets using the perceptron training
algorithm, up to 3 epochs.
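A hedged MATLAB sketch of the perceptron training loop for this problem is given below; the learning rate alpha = 1, threshold 0 and zero initial weights are assumptions of the sketch, not values fixed by the question.
% Perceptron training for OR: binary inputs, bipolar targets
X = [0 0; 0 1; 1 0; 1 1];            % binary inputs
t = [-1; 1; 1; 1];                    % bipolar targets
w = [0 0]; b = 0; alpha = 1;          % assumed initial weights, bias and learning rate
for epoch = 1:3
    for i = 1:size(X,1)
        yin = X(i,:) * w' + b;        % net input
        y = sign(yin);                % bipolar output (0 at the threshold)
        if y ~= t(i)                  % update only on error
            w = w + alpha * t(i) * X(i,:);
            b = b + alpha * t(i);
        end
    end
    fprintf('epoch %d: w = [%g %g], b = %g\n', epoch, w(1), w(2), b);
end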
13. Explain the logic functions performed by the following networks built from binary neurons.
14. What is meant by the topology of artificial neural networks? Give a few basic topological
structures of artificial neural networks.
15. Explain the different performance evaluation criteria of a neural network.
16. Train the perceptron for the following sequence with learning rate = 1, beginning from w0 = [1 1 1]:
a. Class 1: (3,1), (4,2), (5,3), (6,4)
b. Class 2: (2,2), (1,3), (2,6)


17. The network shown in the figure uses neurons with a continuous activation function. The outputs
are measured as o1 = 0.28 and o2 = -0.73. Find the input vector X = [x1 x2]^T that has been applied to
the network.

18. The network shown in the figure, using neurons with activation f(net), has been designed to assign
the input vectors x1, x2, x3 to cluster 1 or 2. The cluster number is identical to the number of the
neuron yielding the largest response. Determine the most likely cluster membership for each of the
following three vectors. Assume λ = 2 and activation function o = 1 / (1 + exp(-λ·net)).
X1 = [0.866  0.15]^T, X2 = [0.985  0.174]^T, X3 = [0.342  0.94]^T
19. Explain the neural network which uses nonlinear output function.
20. Calculate the output of neuron Y for the net shown in Fig. 1. Use the binary & bipolar sigmoid
activation functions.

21. Classify the two-dimensional patterns shown in Fig. 2 using a perceptron network.
[Figure: pattern "C" with target value +1 and pattern "A" with target value -1]
22. Implement the AND function using a McCulloch-Pitts neuron with i) bipolar inputs & targets,
ii) binary inputs & targets.
23. Assume the initial weight vector W^T = [1 -1 0 0.5]. The set of three input vectors is
X1 = [1 -2 1.0 0]^T, X2 = [1 -0.5 -2 -1.5]^T, X3 = [0 1 -1 1.5]^T. Obtain the output of the neuron
using the binary sigmoidal and the bipolar sigmoidal activation functions.
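As an illustration of how such outputs are computed, here is a minimal MATLAB sketch that evaluates a single neuron with binary and bipolar sigmoidal activations; the weight and input values below are placeholders to be replaced by the ones given in the question, and the steepness lambda = 1 is an assumption.
% Single neuron output with binary and bipolar sigmoidal activations
w = [1 -0.5 2 -1.5];                 % placeholder weight vector
x = [1  0.5 -2  1];                  % placeholder input vector
lambda = 1;                           % assumed steepness factor
net = w * x';                         % net input
o_binary  = 1 / (1 + exp(-lambda*net));          % binary sigmoid, range (0,1)
o_bipolar = 2 / (1 + exp(-lambda*net)) - 1;       % bipolar sigmoid, range (-1,1)
fprintf('net = %.3f, binary = %.4f, bipolar = %.4f\n', net, o_binary, o_bipolar);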


24. Classify Class 1: (0.03, 0.01), (0.04, 0.02), (0.05, 0.03), (0.06, 0.04) and Class 2: (0.02, 0.02),
(0.01, 0.03) using a Perceptron neural net with bipolar targets, bias = 1 and initial weights (1, 1, 1).
25. Assume the initial weight vector W^T = [1 -1 0 0.5]. The set of three input vectors is
X1 = [1 -2 1.0 0]^T, X2 = [1 -0.5 -2 -1.5]^T, X3 = [0 1 -1 1.5]^T. Obtain the output of the neuron
using the binary sigmoidal activation function.
26. Classify Class 1: (0.03, 0.01), (0.04, 0.02) and Class 2: (0.02, 0.02), (0.01, 0.03) using a Perceptron
neural net with bipolar targets, bias = 1 and initial weights (1, 1, 1).
27. Implement the OR function using an ADALINE network with bipolar inputs and targets.
28. Implement an Adaline network to describe the function X1 X2. The learning rate is 0.1
and the initial weights are 0.2, 0.2 with bias 0.2, for up to 2 epochs. Use bipolar inputs and
targets.
29. Implement an Adaline network to describe the function X1 X2. The learning rate is 0.1
and the initial weights are 0.2, 0.2 with bias 0.2, for up to 2 epochs. Use bipolar inputs and
targets.
30. Derive the learning rule of the Adaline network and explain the algorithm.
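As a reference point for the derivation, a brief MATLAB sketch of the Adaline (LMS/delta-rule) weight update is given below; the OR data, learning rate 0.1 and initial weights/bias of 0.2 mirror the values used in the neighbouring questions and are otherwise assumptions.
% Adaline trained with the LMS (delta) rule on bipolar OR data
X = [-1 -1; -1 1; 1 -1; 1 1];        % bipolar inputs
t = [-1; 1; 1; 1];                    % bipolar targets
w = [0.2 0.2]; b = 0.2; alpha = 0.1;  % assumed initial weights, bias, learning rate
for epoch = 1:2
    E = 0;
    for i = 1:size(X,1)
        yin = X(i,:) * w' + b;        % linear (identity) output of the Adaline
        err = t(i) - yin;             % error before thresholding
        w = w + alpha * err * X(i,:); % delta-rule weight update
        b = b + alpha * err;
        E = E + err^2;                % accumulate squared error
    end
    fprintf('epoch %d: w = [%.3f %.3f], b = %.3f, SSE = %.4f\n', epoch, w(1), w(2), b, E);
end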
31. Implement an Adaline network to describe the function X1 X2. The learning rate is 0.1
and the initial weights are 0.2, 0.2 with bias 0.2, for 2 epochs. Use bipolar inputs and targets.
32. Illustrate the development of weights of an ADALINE for the following data:
class 1: (0,0,0), (1,1,1), (4,4,4); class 2: (2,2,2), (3,3,3).
33. Discuss the parameters affecting the weight update in the case of a back-propagation NN,
with a suitable example.
34. Derive the weight update rule for RBF neural network & explain RBF nets.
35. Derive the Back-propagation neural network learning algorithm for a nonlinearly
separable task and determine the appropriate values of the weight changes (Δw) for the
bipolar sigmoidal function.
36. Setting the parameter values in the Back-propagation algorithm affects the performance of the
network. Explain.
37. Derive Back-propagation training algorithm for the case where the activation function is
an arctan function.
38. Discuss the method which is used for accelerating the learning process of backprop
algorithm.
39. Derive the Back Propagation Neural Network learning rule and determine the appropriate
equations of weight changes in two layer neural network.
40. Derive the Back-propagation neural network learning algorithm for a nonlinearly separable
task and determine the appropriate values of the weight changes (Δw) for the bipolar
sigmoidal function in a two-layer network.


41. Calculate the new weights for the network shown in the figure below using the backpropagation
algorithm for the input pattern [0, 1], target output = 1, learning rate = 0.25 and the binary
sigmoidal activation function.
[Figure: two-layer network with inputs X1, X2, hidden units Z1, Z2 and network output Y1; the
labelled weights/biases in the diagram are -0.2, 0.1, 0.4, 0.3, 1, 0.5, -0.1, -0.3, 0.6 and 0.4]
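For reference, a minimal MATLAB sketch of one backpropagation step for a 2-2-1 network with binary sigmoid units is given below; the weight matrices are placeholders (the actual values must be read off the figure), and the bias handling follows the usual textbook convention.
% One step of backpropagation for a 2-2-1 network, binary sigmoid everywhere
x = [0 1];  t = 1;  alpha = 0.25;          % input pattern, target, learning rate
V = [0.6 -0.3; -0.1 0.4];  bv = [0.3 0.5]; % placeholder input-to-hidden weights and biases
W = [0.4; 0.1];            bw = -0.2;      % placeholder hidden-to-output weights and bias
f  = @(a) 1 ./ (1 + exp(-a));              % binary sigmoid
zin = x * V + bv;      z = f(zin);         % hidden layer forward pass
yin = z * W + bw;      y = f(yin);         % output layer forward pass
dk  = (t - y) .* y .* (1 - y);             % output error term (delta_k)
dj  = (dk * W') .* z .* (1 - z);           % hidden error terms (delta_j)
W  = W  + alpha * dk * z';                 % hidden-to-output weight update
bw = bw + alpha * dk;
V  = V  + alpha * x' * dj;                 % input-to-hidden weight update
bv = bv + alpha * dj;
fprintf('output before update: %.4f\n', y);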

Unsupervised Neural Network


1. Derive the weight update rule for a recurrent neural network with the Williams & Zipser
training procedure.
2. Construct & test a Hamming network to cluster three vectors. The exemplar vectors are e(1) = [1 -1 -1 -1]
and e(2) = [-1 -1 -1 1]; the bipolar input vectors are x1 = [-1 -1 1 -1], x2 = [-1 -1 1 1], x3 = [-1 -1 -1 -1].
3. Cluster the following input vectors into three nodes. Use the competitive learning algorithm.
T = {i1 = (1.1, 1.7, 1.8), i2 = (0, 0, 0), i3 = (0, 0.5, 1.5), i4 = (1, 0, 0), i5 = (0.5, 0.5, 0.5), i6 = (1, 1, 1)}
4. Explain the learning vector quantization (LVQ) algorithm.

5. Apply the adaptive resonance theory (ART) algorithm to the following set of vectors:
i1 = {1,1,0,0,0,0,1}, i2 = {0,0,1,1,1,1,0}, i3 = {1,0,1,1,1,0}, i4 = {0,0,0,1,1,1,0}, i5 = {1,1,0,1,1,1,0}
Use vigilance parameter ρ = 0.7.
6. Apply a self-organizing map to the following set of vectors to cluster them into three clusters
with the following network topology:
T = {i1 = (1.1, 1.7, 1.8), i2 = (0, 0, 0), i3 = (0, 0.5, 1.5), i4 = (1, 0, 0), i5 = (0.5, 0.5, 0.5), i6 = (1, 1, 1)}
7. Develop the training algorithm for a supervised recurrent network. The network output uses a
sigmoid function.


8. Cluster the following input vectors into three nodes. Use the competitive learning algorithm.
T = {i1 = (1.1, 1.7, 1.8), i2 = (0, 0, 0), i3 = (0, 0.5, 1.5), i4 = (1, 0, 0), i5 = (0.5, 0.5, 0.5), i6 = (1, 1, 1)}
9. Apply the adaptive resonance theory (ART) algorithm to the following set of vectors:
i1 = {1,1,0,0,0,0,1}, i2 = {0,0,1,1,1,1,0}, i3 = {1,0,1,1,1,0}, i4 = {0,0,0,1,1,1,0}, i5 = {1,1,0,1,1,1,0}
Use vigilance parameter ρ = 0.
10. Explain the learning vector quantization (LVQ) algorithm with an example.
11. Illustrate, with a neat figure, the ART1 architecture & discuss its training algorithm.
12. Design a MaxNet with ε = 0.15 to cluster the input pattern x = [x1, x2, x3, x4] = [0.7, 0.6,
0.1, 0.8]. Show the step-by-step execution of the clustering algorithm.
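A minimal MATLAB sketch of the MaxNet competition loop is shown below with the values from this question; the convention that the winner is declared when only one activation remains positive is the usual textbook one and is assumed here.
% MaxNet: each node is inhibited by epsilon times the sum of the other activations
a = [0.7 0.6 0.1 0.8];                     % initial activations (from the question)
epsilon = 0.15;                            % mutual inhibition weight
while nnz(a > 0) > 1                       % stop when a single positive node remains
    a = max(a - epsilon*(sum(a) - a), 0);  % inhibit and apply the ramp activation
    disp(a)
end
[~, winner] = max(a);
fprintf('winning node: %d\n', winner);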
13. Illustrate with neat figure, the two basic units of an ART1 network.
14. Consider an ART1 net with 5 input units & 3 cluster units. After some training the net
attains the bottom-up & top-down weight matrices shown below:
B = [0.2 0 0.2; 0.5 0.8 0.2; 0.5 0.5 0.2; 0.5 0.8 0.2; 0.1 0 0.2] (5 x 3)
T = [1 1 1 1 1; 0 1 1 1 0; 1 1 1 1 1] (3 x 5)
Show the behavior of the net when it is presented with the training pattern s = [1, 1, 1, 1, 0].
Assume L = 2 and vigilance ρ = 0.8.
15. Explain the Hamming network used for unsupervised learning. Classify the input vector
i = (1, 1, 1, -1, -1) for a Hamming network that is trained with the following patterns: i1 = (1, 1, -1, 1, 1), i2 = (-1, 1, -1, 1, -1), i3 = (1, -1, 1, -1, 1).
16. Check the auto-associative network for the input vector [1 1 -1]. Form the weight matrix with
no self-connection. Test whether the net is able to recognize the vector with one missing entry.
17. Develop the training algorithm for a supervised recurrent network. The network output uses a
sigmoid function.
18. Explain the Hamming network used for unsupervised learning. Classify the input vector
i = (1, 1, 1, -1, -1) for a Hamming network that is trained with the following patterns:
i1 = (1, 1, -1, 1, 1), i2 = (-1, 1, -1, 1, -1), i3 = (1, -1, 1, -1, 1).

19. What are the results obtained using a five-node MaxNet with initial node output vectors
(0.5, 0.9, 1, 1, 0.9)? What would be a more desirable value?
20. Explain associative memory.
21. Use the non-iterative procedure of association (matrix associative memory) for the following
input and output patterns:
(1,1) -> (-1,1) and (1,-1) -> (-1,-1)
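A small MATLAB sketch of the non-iterative (outer-product / correlation matrix) association for these two pairs is given below; bipolar Hebbian encoding with W as the sum of the outer products and sign recall is the standard convention assumed here.
% Correlation-matrix (outer product) hetero-associative memory
X = [1  1;  1 -1];          % input patterns (rows)
Y = [-1 1; -1 -1];          % associated output patterns (rows)
W = X' * Y;                 % W = sum over pairs of x'*y  (2 x 2 weight matrix)
recall = sign(X * W);       % recall outputs for the stored inputs
disp(W); disp(recall);      % compare the recall with Y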


22. Use the least-squares procedure to solve the hetero-association problem for the following input
and output patterns: (1,1,1 : -1,1), (1,-1,1 : -1,-1), (-1,1,1 : 1,-1).
23. Explain Bidirectional Associative Memory (BAM). Perform hetero-association using
BAM for the set of pairs:
A1 = (1 0 0 0 0 1), B1 = (1 1 0 0 0)
A2 = (0 1 1 0 0 0), B2 = (1 0 1 0 0)
A3 = (0 0 1 0 1 1), B3 = (0 1 1 1 0)
24. Consider the following patterns:
A1 = (-1, 1, -1, 1)
A2 = (1, 1, 1, -1)
A3 = (-1, -1, -1, 1)
Use a Hopfield associative memory to recognize the stored patterns as well as the noisy
pattern A' = (1, 1, 1, 1).
25. Establish the association between the following input and output pairs using BAM:
A1 = (1, 1, -1, -1), B1 = (1, 1)
A2 = (1, 1, 1, 1),   B2 = (1, -1)
A3 = (-1, -1, 1, 1), B3 = (-1, 1)
26. Explain, with an example, the concept of the energy function for BAM.
27. Recall the noisy pattern A' = (1, 1, 1, -1, -1, 1) by exponential bidirectional association, given
A1 = (-1, -1, -1, -1, -1, 1), B1 = (1, 1, -1, -1, -1)
A2 = (-1, 1, 1, -1, -1, -1),  B2 = (1, -1, 1, -1, -1)
A3 = (-1, -1, 1, -1, 1, 1),   B3 = (1, 1, 1, 1, -1)


Fuzzy Logic
1. For the two fuzzy sets
A = 1/1.0 + 0.75/1.5 + 0.3/2.0 + 0.15/2.5 + 0/3.0 and
B = 1/1.0 + 0.6/1.5 + 0.2/2.0 + 0.1/2.5 + 0/3.0,
find A ∪ B, A ∩ B and A ∪ Bᶜ.
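A short MATLAB sketch of the standard (max/min) fuzzy set operations on these membership grades follows; representing each set simply as a vector of membership values over the common support {1.0, 1.5, 2.0, 2.5, 3.0} is an assumption of this sketch.
% Standard fuzzy union, intersection and complement on a discrete support
x = [1.0 1.5 2.0 2.5 3.0];            % common universe of discourse
A = [1 0.75 0.3 0.15 0];              % membership grades of A
B = [1 0.6  0.2 0.1  0];              % membership grades of B
U  = max(A, B);                        % A union B (standard max)
I  = min(A, B);                        % A intersection B (standard min)
Bc = 1 - B;                            % standard complement of B
AUBc = max(A, Bc);                     % A union (complement of B)
disp([x; U; I; AUBc])                  % rows: support, union, intersection, A u Bc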


2. For the following fuzzy sets defined on the interval X = [0, 10] of real numbers with membership
functions A(x) = 2^(-x) and B(x) = 1/(1 + 10(x - 2)^2), find the algebraic sum and the bounded sum.
3. Let the membership grade functions of fuzzy sets A & B be A(x) = x/(x + 2) and B(x) = 1/(1 + 10x)
on the universal set X = {0, 1, 2, ..., 10}. Let f(x) = x^2 for all x in X. Using the extension principle,
find f(A) and f(B).
4. Let P = Q = {0, 1, 2} & R = {0, 0.5, 1.5, 2} be crisp sets & f : P X Q → R be a function signifying
the mean of two numbers, f(x, y) = (x + y)/2, for all x in P, y in Q. Construct the function table
for f. Now consider the fuzzy set A representing the closeness of a number x to 1 & B
representing the distance from 1. A & B are defined on the reference sets P & Q respectively. The
sets A & B are A = 0.5/0 + 1/1 + 0.5/2; B = 1/0 + 0/1 + 1/2. Extend the function
f : P X Q → R to f : A X B → C using the extension principle.

5. Let A & B be fuzzy sets defined on the universal set X = Z whose membership functions are
given by A(x) = 0.5/(-1) + 1/0 + 0.5/1 + 0.3/2 and B(x) = 0.5/2 + 1/3 + 0.5/4 + 0.3/5. Let a function
f : X X X → X be defined for all x1, x2 in X by f(x1, x2) = x1 + x2. Calculate f(A, B).
6. State and prove the third decomposition theorem for fuzzy set A. Consider
A = 0.5/x1 + 0.4/x2 + 0.7/x3 + 0.8/x4 + 1/x5
7. Consider the fuzzy sets defined on the interval X = [0, 5] of real numbers by the membership grade
functions A(x) = x/(x + 2) and B(x) = 2^(-x). Determine the mathematical formulae & graphs of the
membership grade functions of each of the following sets:
i. A ∪ B; ii. A ∩ B; iii. (A ∪ B)ᶜ
8. If 4 = 0.5/2 + 1/3 + 0.5/4 & 3 = 0.5/5 + 1/6 + 0.5/7, then find 12 & 7 using the extension principle.
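The sup-min extension principle for the sum of two discrete fuzzy numbers can be sketched in a few lines of MATLAB, as below; the two example fuzzy numbers are placeholders and the sup-min (max-min) convention is assumed.
% Fuzzy addition of two discrete fuzzy numbers via the extension principle (sup-min)
xa = [2 3 4];  mua = [0.5 1 0.5];      % placeholder fuzzy number A: support and memberships
xb = [5 6 7];  mub = [0.5 1 0.5];      % placeholder fuzzy number B
zs = [];  muz = [];                     % support and membership of the sum A + B
for i = 1:numel(xa)
    for j = 1:numel(xb)
        z  = xa(i) + xb(j);             % image of the pair under f(x,y) = x + y
        mu = min(mua(i), mub(j));       % min of the two memberships
        k = find(zs == z, 1);
        if isempty(k)
            zs(end+1) = z;  muz(end+1) = mu;
        else
            muz(k) = max(muz(k), mu);   % sup over all pairs mapping to the same z
        end
    end
end
[zs, order] = sort(zs);  muz = muz(order);
disp([zs; muz])                         % first row: values of the sum, second row: memberships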
9. Given a t-norm i and an involutive fuzzy complement c, the binary operation u on [0, 1] defined by
u(a, b) = c(i(c(a), c(b))) for all a, b in [0, 1] is a t-conorm such that <i, u, c> is a dual triple.
Prove only the boundary condition, monotonicity & associativity axioms for the t-conorm.


10. Give the axiomatic skeleton for t-conorms. Define & calculate the standard union, algebraic sum,
bounded sum & drastic union for the fuzzy sets A & B:
A = 0.65/x + 0.1/y + 0.5/z; B = 0.65/x + 0/y + 1/z
11. For a fuzzy set, a normal, convex membership function is defined as 1 = 0.2/0 + 1/1 + 0.2/2.
Perform 1 + 1 using the extension principle.

12. The fuzzy sets A & B are given as A = 0.2/1 + 1/2 + 0.7/4 & B = 0.5/1 + 1/2.
Find f(A, B) = A X B using the extension principle.

13. Consider the crisp domains P = {3, 4, 5}, Q = {6, 7, 8} & R = {0, 1, 2}. The following table shows
the function f : P X Q → R, where f is defined as addition modulo 3:
        Q:  6  7  8
    P:  3   0  1  2
        4   1  2  0
        5   2  0  1
Now consider the fuzzy sets A & B on P & Q respectively:
A = 0.1/3 + 0.8/4 + 0.5/5; B = 0.6/6 + 0.2/7 + 0.7/8
Using the extension principle, extend the function f (addition modulo 3) to fuzzy addition modulo 3
from A X B to C, where C is a fuzzy set defined on R.

14. Consider a LAN of interconnected workstations that communicate using Ethernet protocols
at a maximum rate of 12 Mbit/s. The two fuzzy sets given below represent the loading of the
LAN, where s stands for silent & c stands for congestion:
s(x) = 1/0 + 1/1 + 0.8/2 + 0.2/5 + 0.1/7 + 0/9 + 0/10
c(x) = 0/0 + 0/1 + 0/2 + 0.5/5 + 0.7/7 + 0.8/9 + 1/10
Perform the algebraic sum, algebraic product, bounded sum & bounded difference
over these two fuzzy sets.

15. Let A, B be two fuzzy numbers with
A(x) = 0 for x ≤ 3 and x > 5;  A(x) = x - 3 for 3 < x ≤ 4;  A(x) = 5 - x for 4 < x ≤ 5;
B(x) = 0 for x ≤ 12 and x > 32;  B(x) = (x - 12)/8 for 12 < x ≤ 20;  B(x) = (32 - x)/12 for 20 < x ≤ 32.
Solve the following equations for X: A·X = B & A + X = B.


16. Let P(x) = 0 for x ≤ -1 and x > 3;  P(x) = (x + 1)/2 for -1 < x ≤ 1;  P(x) = (3 - x)/2 for 1 < x ≤ 3;
Q(x) = 0 for x ≤ 1 and x > 5;  Q(x) = (x - 1)/2 for 1 < x ≤ 3;  Q(x) = (5 - x)/2 for 3 < x ≤ 5.
Perform P - Q & P + Q.

17. Determine the equivalent resistance of the circuit shown in the figure, where R1 and R2 are fuzzy sets
describing the resistance of resistors R1 and R2, respectively, expressed in ohms. Since the
resistors are in series, they can be added arithmetically. Using the extension principle, find the
equivalent resistance Req = R1 + R2.
The membership functions for the two resistors are
R1 = 0.5/3 + 1.0/4 + 0.6/5;
R2 = 0.4/8 + 1.0/9 + 0.3/10

18. Let y = 0.5/20 + 1/25 + 0.5/30 and t = 0.5/175 + 1/180 + 1/185 + 1/190, where y & t are fuzzy sets of
young men & tall men respectively. Find the relations "young and tall men" & "young but not
tall men".
19. Consider a universe of discourse of TA marks out of 10. If
good score = 0.1/5 + 0.4/6 + 0.6/7 + 0.9/8 + 1/9 + 1/10 and
bad score = 1/0 + 0.9/1 + 0.8/2 + 0.7/3 + 0.4/4 + 0.1/5,
then construct the following compound fuzzy sets:
a. Very Good Score
b. Not Bad Score
c. Not Bad Score But Not Very Good Score

20. Let U = V = {0, 1, 2, 3, 4} be the universe of discourse on which the fuzzy set
small = 1.0/0 + 0.5/1 + 0.2/2 + 0.1/3 + 0.0/4 is defined. Again, let R be the fuzzy relation "more or
less the same", defined by the relation matrix (rows and columns indexed 0-4):
R = [1 0.5 0.1 0 0; 0.5 1 0.5 0.1 0; 0.1 0.5 1 0.5 0.1; 0 0.1 0.5 1 0.5; 0 0 0.1 0.5 1]


If the premise & the rule are stated as
Premise: x is small
Rule: x is more or less the same as y
then apply a suitable fuzzy rule of inference to obtain the conclusion & express it suitably as
a relation.
21. Consider a set P = {P1, P2, P3, P4} of four varieties of paddy plants, a set D = {D1, D2, D3, D4} of
the various diseases affecting the plants, and S = {S1, S2, S3, S4} the common symptoms of
the various diseases.
Let R be the relation on P X D and S be the relation on D X S:
R (rows P1-P4, columns D1-D4) = [0.6 0.6 0.9 0.8; 0.1 0.2 0.9 0.8; 0.9 0.3 0.4 0.8; 0.9 0.8 0.1 0.2]
S (rows D1-D4, columns S1-S4) = [0.1 0.2 0.7 0.9; 1 1 0.4 0.6; 0 0 0.5 0.9; 0.9 1 0.8 0.2]
Obtain the association of the plants with the different symptoms of the diseases using max-min composition.
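A compact MATLAB sketch of max-min composition, using the R and S matrices from this question, is shown below; it simply takes, for each plant-symptom pair, the maximum over diseases of the minimum of the two memberships.
% Max-min composition T = R o S for the plant/disease/symptom relations
R = [0.6 0.6 0.9 0.8; 0.1 0.2 0.9 0.8; 0.9 0.3 0.4 0.8; 0.9 0.8 0.1 0.2];  % P x D
S = [0.1 0.2 0.7 0.9; 1 1 0.4 0.6; 0 0 0.5 0.9; 0.9 1 0.8 0.2];            % D x S
[np, nd] = size(R);  ns = size(S, 2);
T = zeros(np, ns);                             % P x S association
for i = 1:np
    for k = 1:ns
        T(i,k) = max(min(R(i,:), S(:,k)'));    % max over diseases of the min memberships
    end
end
disp(T)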
22. State and prove the third decomposition theorem for fuzzy set A. Consider
A = 0.5/x1 + 0.4/x2 + 0.7/x3 + 0.8/x4 + 1/x5
23. A fuzzy tolerance relation R is reflexive and symmetric. Find the equivalence relation Re and
then classify it according to the λ-cut levels λ = {0.9, 0.7, 0.5, 0.4}.
R = [1 0.7 0 0.2 0.1; 0.7 1 0.9 0 0.4; 0 0.9 1 0 0.3; 0.2 0 0 1 0.5; 0.1 0.4 0.3 0.5 1]
24. In a pattern recognition test, four unknown patterns need to be classified according to three
known patterns (primitives) a, b and c. The relationship between primitives and unknown
patterns is given in the following table:
        X1   X2   X3   X4
   a    0.6  0.2  0.0  0.8
   b    0.3  0.3  0.8  0.1
   c    0.1  0.5  0.2  0.1
If a λ-cut level of 0.5 is taken, then into how many classes can these patterns be divided?
Hint: Use the max-min method to first generate a fuzzy similarity relation R.
25. In the field of computer networking there is an imprecise relationship between the level of use
of a network communication bandwidth and the latency experienced in peer-to-peer
communication. Let X be a fuzzy set of use levels (in terms of the percentage of full bandwidth
used) and Y be a fuzzy set of latencies (in milliseconds) with the following membership
functions:
X = 0.2/10 + 0.5/20 + 0.8/40 + 1.0/60 + 0.6/80 + 0.1/100
Y = 0.3/0.5 + 0.6/1 + 0.9/1.5 + 1.0/4 + 0.6/8 + 0.3/20


a. Find the Cartesian product represented by the relation R = X X Y. Now suppose
we have a second fuzzy set of bandwidth usage given by
Z = 0.3/10 + 0.6/20 + 0.7/40 + 0.9/60 + 1/80 + 0.5/100
26. Find S = Z(1x6) o R(6x6) using (1) max-min composition and (2) max-product composition.
27. The three variables of interest in a MOSFET are the amount of current that can be switched,
the voltage that can be switched and the cost. The following membership functions for the
transistor were developed.
28. The power is given by P = V·I, with
current: I = 0.4/0.8 + 0.7/0.9 + 1/1 + 0.8/1.1 + 0.6/1.2
voltage: V = 0.2/30 + 0.8/45 + 1/60 + 0.9/75 + 0.7/90
cost:    C = 0.4/0.5 + 1/0.6 + 0.5/0.7
a. Find the fuzzy Cartesian product P = V X I
b. Find the fuzzy Cartesian product T = I X C
c. Using max-min composition find E = P o T
d. Using max-product composition find E = P o T
29. Relating earthquake intensity to ground acceleration is an imprecise science. Suppose we have a
universe of earthquake intensities I = {5, 6, 7, 8, 9} and a universe of accelerations A = {0.2,
0.4, 0.6, 0.8, 1.0, 1.2} in g's. The following fuzzy relation R exists on the Cartesian space I X A:
R = [0.75 1 0.65 0.4 0.2 0.1;
     0.5 0.9 1 0.65 0.3 0;
     0.1 0.4 0.7 1 0.6 0;
     0.1 0.2 0.4 0.9 1 0.6;
     0 0.1 0.3 0.45 0.8 1]
A fuzzy set "intensity about 7" is defined as:
I7 = 0.1/5 + 0.6/6 + 1/7 + 0.8/8 + 0.4/9
Determine the fuzzy membership of I7 on the universe of accelerations, A.

30. Two companies bid for a contract. The fuzzy sets of the two companies, B1 and B2, are shown in the
following figure. Find the defuzzified value z* using different methods.


31. Amplifier capacity on a normalized universe, say [0, 100], can be linguistically defined by fuzzy
variables as here:
powerful = 0/1 + 0.2/10 + 0.6/50 + 0.9/100;  weak = 0.9/1 + 0.8/10 + 0.2/50 + 0.1/100
Find the membership functions for the following linguistic phrases used to describe the capacity
of various amplifiers:
a. Powerful and not weak
b. Very powerful or very weak
c. Very, very powerful and not weak
32. In a computer system, performance depends to a large extent on the relative speed of the
components making up the system. The speeds of the CPU and memory are important
factors in determining the limits of operating speed in terms of instructions executed per unit
time. Let
fast = 0/1 + 0/4 + 0.1/6 + 0.3/8 + 0.5/20 + 0.7/45 + 1/100
slow = 1/0 + 0.9/1 + 0.8/4 + 0.5/8 + 0.2/20 + 0.1/45 + 0/100
33. Calculate the membership functions for the phrases:
- Not very fast and slightly slow
- Very, very fast and not slow
- Very slow or not fast
34. Give artificial neural network architectures:
- Write the types of architectures
- Draw the architecture diagrams
- Write the formulas.
35. Explain the single-layer feed-forward network:
- Draw the network diagram
- Write the step-by-step procedure.
36. Explain the multilayer feed-forward network:
- Draw the network diagram
- Write the step-by-step procedure.


Genetic Algorithm
1. Explain roulette-wheel selection method in Genetic Algorithm.
2. Explain the mutation operation in Genetic Algorithm with suitable example.
3. Maximize the function f(x) = x^2 with the initial population x = [13, 24, 8, 19]. Use binary encoding
and roulette-wheel selection.
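A minimal MATLAB sketch of one round of fitness evaluation and roulette-wheel selection for this problem is given below; fitness-proportionate sampling via cumulative probabilities is the usual textbook convention and is assumed here, and the 5-bit binary encoding is only mentioned in a comment.
% One round of roulette-wheel selection for maximizing f(x) = x^2
x   = [13 24 8 19];                  % initial population (decoded values)
fit = x.^2;                          % fitness of each chromosome
p   = fit / sum(fit);                % selection probabilities
cp  = cumsum(p);                     % cumulative probabilities (the "wheel")
n   = numel(x);
selected = zeros(1, n);
for i = 1:n
    r = rand;                        % spin the wheel
    selected(i) = x(find(cp >= r, 1));
end
disp('expected copies (fit/avg):'); disp(fit / mean(fit));
disp('mating pool:');               disp(selected);
% The 5-bit strings for crossover/mutation could be obtained with dec2bin(selected, 5).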
4. Explain the methods of crossover with suitable examples.
5. What are the different types of encoding? Explain value encoding and binary encoding in a
genetic algorithm, given known lower and upper bound values, for a 4-bit string.
6. Maximize the function f(x) = x^2 with the initial population x = [13, 24, 8, 19]. Use binary encoding
and rank selection.
7. Explain the methods of crossover with suitable examples.
8. What are the different types of encoding? Explain value encoding and binary encoding in a
genetic algorithm, given known lower and upper bound values, for a 4-bit string.


One Mark Problems:


Neural network
1. Use the non-iterative procedure of association (matrix associative memory) for the following
input and output patterns: (1,1) -> (-1,1) and (1,-1) -> (-1,-1).
2. What are the results obtained using a five-node MaxNet with the initial node output vector
(0.5, 0.9, 1, 1, 0.9)?
3. Find the winner among the three nodes with weight matrix
W = [0.2 0.7 0.3; 0.1 0.1 0.9; 1 1 1]
if the pattern presented is (1.1, 1.7, 1.8). Also update the corresponding weights using
competitive learning.
4. What is the difference between LVQ and clustering?
5. Draw the ADALINE neural net model and write the Delta Rule for adjustment of weight
in the ADALINE network.

6. What is winner-takes-all or competitive learning?


7. Discuss the important features of Kohonen self organizing maps.
8. Implement OR function using MP neuron model.
9. Draw the network architecture for MADALINE I and MADALINE II.
10. Implement AND logic function using MP neuron model.
11. Discuss linear separability using an example. Is the XOR gate linearly separable?
12. What are the different learning methods in neural network? Give one example of each.
13. Discuss the stability-plasticity dilemma & explain how the ART network solves this problem.
14. Give the difference between auto-associative & hetero-associative neural models. Also
give the learning laws for them.
15. Implement XOR logic function using MP neuron model.
16. Realize a Hebb net for the OR function with bipolar inputs & targets.
17. Write weight updation rule for recurrent network & draw fully connected recurrent NN.
18. Give the features for KSOM.
19. Describe BAM .
20. Implement OR function using MP neuron model.
21. Draw the network architecture for MADALINE I and MADALINE II and state the
difference between two.
22. How does the initialization of weights affect the performance of the BPN training algorithm?
23. What is an RBF net? How can it be applied to the corner isolation problem with one RBF
node and with four RBF nodes? Draw the labeled NN architecture.


Fuzzy Logic
1. Verify the fuzzy De Morgan's laws with the following fuzzy sets A & B:
A = 1/0 + 0.5/1 + 0/2 & B = 0/0 + 0.5/1 + 1/2
2. Find & draw the alpha-cuts & strong alpha-cuts for the following fuzzy sets:
A = 1/0 + 0.5/1 + 0/2 & B = 0/0 + 0.5/1 + 1/2
3. Give different fuzzy complement definitions & prove involutive property for fuzzy
complement.
4. Prove the second decomposition theorem for the fuzzy set A using a graph; also give its
statement.
5. Find all alpha-cuts and strong alpha-cuts for the fuzzy set A = 0.5/x1 + 0.4/x2 + 0.7/x3 + 0.8/x4.
6. List axioms required for fuzzy t-norms and define algebraic product for t-norms.
7. Does the function c(a) = (1 - a)^w qualify, for each w > 0, as a fuzzy complement? Plot the
function for w > 1, w in [0, 5].
8. List the properties of alpha-cuts for fuzzy sets. Give and plot all alpha-cuts for the following
fuzzy set A:
A = 0.6/a + 0.1/b + 0.5/c + 0.4/d + 0.7/e
9. Find the defuzzified value by weighted average method shown in figure

10. Find the defuzzified values using (a) center of sums methods and (b) center of largest
area for the figure shown.

11. State and prove the excluded middle laws and De Morgan's laws for classical sets.
12. What are the basic elements of a fuzzy logic control system?


Genetic Algorithm
1. What is value encoding and binary encoding in GA?
2. Perform the crossover operation on the parents given below: P1: 1 0 1 1 0 1 1 and P2: 0 1 1 0 0 1 0,
with mask crossover.
3. What are the three basic operators of a simple genetic algorithm?
4. Perform crossover and inversion on P1: 0 0 1 1 0 0 1 and P2: 1 1 1 0 0 1 1 with crossover
sites 3 and 6.


Programming Exercise
1. Write a MATLAB program to generate the following activation functions that are used in neural
networks, using their basic equations. Also plot them showing grid lines, a title and an xlabel;
use axis square. The activation functions are defined as:
2. Linear activation: f(x) = x, for all x
3. Binary sigmoid function: f(x) = 1/(1 + exp(-x))
4. Bipolar sigmoid function: f(x) = (1 - exp(-x))/(1 + exp(-x))
5. Radial basis function: f(x) = exp(-I), where I = sum over i = 1..N of (W_i(t) - X_i(t))^2 / (2*sigma^2)
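A brief MATLAB sketch of such a plotting script is given below; the input range [-5, 5] and the 2x2 subplot layout are choices of this sketch rather than requirements of the exercise, and the RBF is shown in its one-dimensional form exp(-x.^2/(2*sigma^2)) with an assumed sigma = 1.
% Plot four common activation functions over an assumed input range
x = -5:0.1:5;
sigma = 1;                                    % assumed RBF width
f = { x, ...                                  % linear
      1 ./ (1 + exp(-x)), ...                 % binary sigmoid
      (1 - exp(-x)) ./ (1 + exp(-x)), ...     % bipolar sigmoid
      exp(-x.^2 / (2*sigma^2)) };             % 1-D radial basis function
names = {'Linear', 'Binary sigmoid', 'Bipolar sigmoid', 'Radial basis'};
for k = 1:4
    subplot(2, 2, k);
    plot(x, f{k});
    grid on; axis square;
    title(names{k}); xlabel('x');
end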

6. Write a MATLAB function for a McCulloch-Pitts neural net and generate the OR function
using this created function.
7. Using Hebb rule, find the weights required to perform following classification. The
vectors (1 -1 1 -1) and (1 1 1 -1) belong to class with target +1; vectors (-1 -1 1 1) and (1
1 -1 -1) do not belong to class with target -1.
8. Write a MATLAB program for a Hebb net to classify two-dimensional input patterns in
bipolar form with their targets given below. '*' indicates +1 and '.' indicates -1.
[Pattern figure: a grid of '*' and '.' for the letter "E" (target +1) and another grid for the
letter "F" (target -1)]

9. Using the unit step activation function, determine a set of weights for a McCulloch-Pitts
neural net by writing an M-file for the following data:
    X1    X2    O/P
   -0.2   0.5    0
    0.2  -0.5    0
    0.8  -0.5    1
    0.8   0.8    1


10. Consider the surface described by z = sin(x)·cos(y) defined on the square
-3 ≤ x ≤ 3, -3 ≤ y ≤ 3. Plot the surface z as a function of x & y and design a neural
network which will fit the data. Hint: use 25, 25 neurons in the first layers, show = 50,
learning rate = 0.05, goal = 1e-3 and 300 epochs before training the network.
11. Design a Hebb net as a gate traffic controller across the road to control the accidents
occurring on the roads regularly. The gate across the road will come down when
the traffic light is red & the pedestrian light is green. Also plot the classification line and the
input/output target vectors. The state table for the traffic controller is as shown below:
    Traffic Signal   Pedestrian Signal   Gate
    Green (2)        Red (0)             Up (0)
    Yellow (1)       Yellow (1)          Up (0)
    Red (0)          Green (2)           Down (1)
Hint: First encode the light colour as given in brackets to translate the state table into
numbers; the learning function for input weights and layer weights is to be set to the Hebb
learning rule 'learnh'; use 'trainr' as the train function for the net, 'trains' as the adapt
function, and 150 epochs.
12. Implement XOR logic function using perceptron learning algorithm.
13. Using the Perceptron Learning Law design a classifier for the following problem:
Class C1 : [-2 2]', [-2 1.5]', [-2 0]', [1 0]' and [3 0]'
Class C2: [ 1 3]', [3 3]', [1 2]', [3 2]', and [10 0]'
14. For the following 2-class problem determine the decision boundaries obtained by
perceptron learning laws.
Class C1: [-2 2]', [-2 3]', [-1 1]', [-1 4]', [0 0]', [0 1]', [0 2]', [0 3]' and [1 1]'
Class C2: [ 1 0]', [2 1]', [3 -1]', [3 1]', [3 2]', [4 -2]', [4 1]', [5 -1]' and [5 0]'

15. Determine the weights of a network with 4 input and 2 output units using Perceptron
Learning Law for the following input-output pairs:
Input: [1100]' [1001]' [0011]' [0110]'
Output: [11]' [10]' [01]' [00]'
Discuss your results for different choices of the learning rate parameters. Use
suitable values for the initial weights.
16. Design a backpropagation NN for the diagnosis of diabetes. Assume data as per the table below:
[Table: training examples with binary inputs for the attributes Family History, Obese, Thirst,
Increased Urination, Increased Urination (night), Adult, and a Diagnosis column coded as
0: Not Diabetic, 1: Diabetic, 2: Probably Diabetic - values as given in the original table]
Use NN commands to train & test the BPN.
Keep hidden neurons = 25; learning rate = 0.1; show parameter = 50; momentum constant = 0.9;
epochs = 15000 and performance goal = 1e-15.
Hint: You can use newpr, train, sim.
Test the network for the data given below:
[Table: two test examples with the same attributes and unknown (?) Diagnosis]

17. Write a MATLAB program to test an auto-associative network using the outer product rule for
storing the input vectors x1 = [1 -1 1 -1] and x2 = [1 1 -1 -1].
18. Write a program to implement Kohonen self-organizing feature maps for the given input patterns
x = [1 1 0 0; 0 0 0 1; 1 0 0 0; 0 0 1 1] using a learning rate of 0.6.
% Program
clear all;
clc;
disp('Kohonen self organizing feature maps');
disp('The input patterns are');
x = [1 1 0 0; 0 0 0 1; 1 0 0 0; 0 0 1 1]
t = 1;
alpha(t) = 0.6;
e = 1;
disp('Since we have 4 input patterns and 2 cluster units are to be formed, the weight matrix is');
w = [0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3]
disp('The learning rate of this epoch is');
alpha
[w, alpha] = ksomfea(x, w, e, alpha);
disp('Weight updation');
w


disp('Learning rate updated for epoch')
alpha
Write a function ksomfea for Kohonen self-organizing feature maps as given below:
function [w,alpha] = ksomfea(x,w,e,alpha)
% x     : input patterns, one pattern per row
% w     : weight matrix, one column of weights per cluster unit
% e     : number of epochs
% alpha : learning rate for the self-organising map
% One possible implementation of the standard winner-take-all update:
for ep = 1:e
    for i = 1:size(x,1)
        for j = 1:size(w,2)
            d(j) = sum((w(:,j) - x(i,:)').^2);       % squared distance to unit j
        end
        [~, J] = min(d);                              % winning (closest) unit
        w(:,J) = w(:,J) + alpha*(x(i,:)' - w(:,J));   % move the winner towards the input
    end
    alpha = 0.5*alpha;                                % decay the learning rate after each epoch
end
19. Write a MATLAB program for an ART1 neural net with four F1 units & three F2 units.
The weights after some training are given as, bij=[0.57 0 0.3;0 0 0.3;0 0.57 0.3;0 0.47
0.3], & tij=[1 1 0 0;1 0 0 1;1 1 1 1]. Find the new weight after the vector [1 0 1 1] is
presented if vigilance parameter is 0.4.
20. Using MATLAB commands draw the triangular & Gaussian membership function for x
= 0 to 10 with increment of 0.1. Triangular membership function is defined between [5 6
7] & Gaussian function is defined between 2 & 4.
21. Using the following equation, find the membership function for a fuzzy set:
μ(x) = 0,                         if x ≤ a
μ(x) = 2*((x - a)/(b - a))^2,     if a ≤ x ≤ m
μ(x) = 1 - 2*((x - b)/(b - a))^2, if m ≤ x ≤ b
μ(x) = 1,                         if x ≥ b

22. Consider the three fuzzy sets (and one null set)
A = 0/2 + 1/4 + 0.5/6 + 0.4/8 + 0.6/10
B = 0/2 + 0.5/4 + 0.7/6 + 0.8/8 + 0.4/10
C = 0.3/2 + 0.9/4 + 0.2/6 + 0/8 + 1/10
Write a program to implement fuzzy set operations and properties.
23. Consider the following fuzzy sets:
A = 0.8/10 + 0.3/15 + 0.6/20 + 0.2/25
B = 0.4/10 + 0.2/15 + 0.9/20 + 0.1/25
24. Calculate De Morgan's laws, (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ and (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ, for these sets using a
MATLAB program.


25. An engineer is testing the properties, strength and weight, of steel. Suppose he has two
fuzzy sets: A, defined on a universe of three discrete strengths {s1, s2, s3}, and B,
defined on a universe of three discrete weights {w1, w2, w3}. Suppose A & B represent a
high-strength steel & a near-optimum weight, respectively, as shown:
A = 1/s1 + 0.5/s2 + 0.2/s3 & B = 1/w1 + 0.5/w2 + 0.3/w3
a. Find the fuzzy relation for the Cartesian product of A & B. Here the Cartesian
product would represent the strength-weight characteristics of a near-maximum
steel quality.
b. Suppose we introduce another fuzzy set, C, which represents a set of moderately
good steel strengths, as given below:
C = 0.1/s1 + 0.6/s2 + 1/s3
Find the relation between C & B using the Cartesian product.
c. Find C o R using max-min composition & C . R using max-product composition.
26. To find whether the given matrix is transitive or not:
R = [1 1 0 0 0; 1 1 0 0 0; 0 0 1 0 0; 0 0 0 1 1; 0 0 0 1 1]
28. To find whether the following relation is an equivalence relation or not, using a MATLAB program:
R = [1 0.8 0.4 0.5 0.8; 0.8 1 0.4 0.5 0.9; 0.4 0.4 1 0.4 0.4; 0.5 0.5 0.4 1 0.5; 0.8 0.9 0.4 0.5 1]
29. Find whether the following relation is a tolerance relation or not by writing a MATLAB
program:
R = [1 1 0 0 0; 1 1 0 0 0; 0 0 1 0 0; 0 0 0 1 1; 0 0 0 1 1]

30. Using a MATLAB program, find the crisp lambda-cut set relations for lambda = 0.6. The
fuzzy matrix is given by
R = [0.1 0.7 0.4 0.2; 1 0.6 1 0.5; 0 0.5 1 0.9; 0.1 1 0.6 0.8]

31. Design a fuzzy controller for the following systems using the Fuzzy Logic Toolbox of MATLAB:
a. Train braking system
b. Water heater controller


Mrs. A. D. Belsare
Asst. Prof.
ET, YCCE
Nagpur
