
Code No: RR410212 Set No. 1
IV B.Tech I Semester Regular Examinations, November 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆

1. (a) In the sigmoidal function s(x) = 1/(1 + e^(−cx)), explain the role of the constant
c, and sketch s(x) for various values of c.
(b) Determine the final weights for the logic functions AND and OR using the perceptron
learning rule. [8+8]
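As a worked sketch for part (b), the perceptron learning rule can be run on the four binary input patterns for each gate; the final weights depend on the (assumed) choices below: binary {0, 1} inputs, a unit-step activation, learning rate 1, and zero initial weights.

```python
# Hedged sketch: perceptron learning rule applied to AND and OR.
# Assumes binary {0,1} inputs, a unit-step activation, learning rate 1,
# and zero initial weights; different choices give different final weights.
def train_perceptron(patterns, targets, lr=1.0, epochs=50):
    w = [0.0, 0.0]          # input weights
    b = 0.0                 # bias (negative threshold)
    for _ in range(epochs):
        converged = True
        for x, t in zip(patterns, targets):
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            if y != t:      # update only on a misclassification
                w[0] += lr * (t - y) * x[0]
                w[1] += lr * (t - y) * x[1]
                b += lr * (t - y)
                converged = False
        if converged:
            break
    return w, b

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
w_and, b_and = train_perceptron(X, [0, 0, 0, 1])   # AND targets
w_or, b_or = train_perceptron(X, [0, 1, 1, 1])     # OR targets
```

Both problems are linearly separable, so the rule converges in a few epochs.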

2. Explain in detail the differences between competitive learning and differential com-
petitive learning. [16]
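The contrast the question asks for can be sketched in code: competitive learning moves only the winning neuron's weight vector toward the input, while differential competitive learning (in Kosko's formulation) additionally scales that move by the change in the winner's activation, so a neuron whose activation is not changing learns nothing. The learning rate and vectors below are illustrative assumptions.

```python
# Hedged sketch contrasting the two update rules on one input presentation.
def winner(ws, x):
    # winner = weight vector closest (squared distance) to the input
    return min(range(len(ws)),
               key=lambda j: sum((wi - xi) ** 2 for wi, xi in zip(ws[j], x)))

def competitive_step(ws, x, lr=0.1):
    # competitive learning: move only the winner toward x
    j = winner(ws, x)
    ws[j] = [wi + lr * (xi - wi) for wi, xi in zip(ws[j], x)]
    return j

def differential_competitive_step(ws, x, dy, lr=0.1):
    # differential competitive learning: scale the move by dy, the change
    # in the winner's activation between time steps
    j = winner(ws, x)
    ws[j] = [wi + lr * dy * (xi - wi) for wi, xi in zip(ws[j], x)]
    return j
```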

3. Write brief notes on:

(a) Pattern
(b) Classes
(c) Pattern space
(d) Decision regions. [16]

4. Prove that for n = 2, the number of hidden-layer neurons J needed for a hyperplane
partition into M regions is J = (1/2)(√(8M − 7) − 1). [16]
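The bound can be sanity-checked numerically: in the plane (n = 2), J lines create at most R(J) = 1 + J + J(J − 1)/2 regions, and solving R(J) = M for J gives exactly the stated expression.

```python
import math

# Numerical check of the bound: J lines partition the plane into at most
# R(J) = 1 + J + J*(J-1)/2 regions; inverting R(J) = M recovers
# J = (sqrt(8M - 7) - 1) / 2.
def regions(J):
    return 1 + J + J * (J - 1) // 2

def neurons_needed(M):
    return (math.sqrt(8 * M - 7) - 1) / 2
```

When M equals R(J) exactly, `neurons_needed(M)` returns J.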

5. What are the properties of the continuous-time dynamical system model? Explain
them using a single-layer neural network. [16]

6. Explain the architecture of ART-1 neural networks with emphasis on the function
of each part. What is the importance of the vigilance parameter in its working? [16]

7. Consider the simple neural net shown in Figure 7. Assume the hidden unit
has the activation function f(ξ) = tanh(ξ) and that the output unit has a linear
activation with unit slope. Show that there exists a set of real-valued weights
{w1, w2, v1, v2} that approximates the discontinuous function g(x) = a·sgn(x − b) + c,
for all x, a, b, c ∈ R, to any degree of accuracy. [16]
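The construction behind the proof can be checked numerically: taking the hidden weights as w1 = k and w2 = −kb and the output weights as v1 = a, v2 = c, the net computes a·tanh(k(x − b)) + c, which approaches a·sgn(x − b) + c as the slope k grows. The particular a, b, c, k values below are illustrative.

```python
import math

# Hedged numerical check: a*tanh(k*(x - b)) + c approximates
# a*sgn(x - b) + c away from x = b, to any accuracy as k grows.
def net(x, a, b, c, k):
    return a * math.tanh(k * (x - b)) + c

def g(x, a, b, c):
    return a * (1 if x > b else -1 if x < b else 0) + c
```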

Figure 7

8. What do you understand by finite resolution and conversion error? Explain the
circuit producing a single digitally programmable weight employing a multiplying
D/A converter (MDAC). [16]
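Finite resolution can be illustrated numerically (this is not the MDAC circuit itself, only the quantization effect it implies): an n-bit converter realizes only 2^n discrete weight levels, so storing a real-valued weight introduces a conversion error bounded by half an LSB. The range [−1, 1) and n = 8 are illustrative assumptions.

```python
# Hedged numerical illustration of finite resolution and conversion error
# for an n-bit digitally programmable weight (not the MDAC circuit itself).
def quantize(w, n_bits=8, w_min=-1.0, w_max=1.0):
    levels = 2 ** n_bits
    lsb = (w_max - w_min) / levels          # least significant bit size
    code = round((w - w_min) / lsb)         # nearest representable level
    code = max(0, min(levels - 1, code))
    return w_min + code * lsb, lsb

wq, lsb = quantize(0.333)
error = abs(wq - 0.333)                      # conversion error <= lsb / 2
```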

⋆⋆⋆⋆⋆

Code No: RR410212 Set No. 2
IV B.Tech I Semester Regular Examinations, November 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆

1. (a) How do you justify that the brain is a parallel distributed processing system?
(b) Explain the structure of the brain. [8+8]

2. (a) Distinguish between local minima and global minima in neural networks.
What are their effects on neural networks?
(b) Explain the distinction between stability and convergence. [8+8]

3. Prototype vectors are:

X1 = [1], X2 = [4]: Class 1
X3 = [3], X4 = [5], X5 = [−1]: Class 2

(a) Draw patterns in augmented pattern space.


(b) Find the set of weights for the linear dichotomizer, or conclude that this is not
a linearly separable classification problem.
(c) Draw the separating lines in augmented weight space for each pattern. [16]
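Part (b) can be settled exhaustively in code: in one dimension a linear dichotomizer is just a threshold (with a sign), so separability can be checked over all candidate thresholds. Class 2's point x = 3 lies between the Class 1 points x = 1 and x = 4, so no single threshold works.

```python
# Hedged check for part (b): a 1-D linear dichotomizer is a signed
# threshold, so separability is decidable by trying a threshold between
# every adjacent pair of sorted points, with both sign conventions.
c1 = [1, 4]        # Class 1
c2 = [3, 5, -1]    # Class 2

def separable(c1, c2):
    pts = sorted(c1 + c2)
    candidates = [(pts[i] + pts[i + 1]) / 2 for i in range(len(pts) - 1)]
    for t in candidates:
        for sign in (1, -1):
            if (all(sign * (x - t) > 0 for x in c1)
                    and all(sign * (x - t) < 0 for x in c2)):
                return True
    return False
```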

4. With a neat block diagram and flow chart, explain the error back-propagation
algorithm. [16]
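The algorithm's core loop (forward pass, delta computation, weight update) can be sketched compactly; the 2-2-1 architecture, sigmoid units, XOR task, learning rate, and epoch count below are illustrative assumptions, not part of the question.

```python
import math
import random

# Hedged sketch of error back-propagation for a 2-2-1 sigmoid network,
# trained on XOR purely as an illustrative task.
random.seed(0)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def forward(W1, W2, x):
    h = [sigmoid(sum(w * v for w, v in zip(row, x + [1]))) for row in W1]
    y = sigmoid(sum(w * v for w, v in zip(W2, h + [1])))
    return h, y

W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def total_error():
    return sum((t - forward(W1, W2, x)[1]) ** 2 for x, t in data)

err_before = total_error()
for _ in range(5000):
    for x, t in data:
        h, y = forward(W1, W2, x)
        dy = (t - y) * y * (1 - y)                        # output delta
        dh = [dy * W2[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i, v in enumerate(h + [1]):                   # output-layer update
            W2[i] += 0.5 * dy * v
        for i in range(2):                                # hidden-layer update
            for j, v in enumerate(x + [1]):
                W1[i][j] += 0.5 * dh[i] * v
err_after = total_error()
```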

5. Design a simple continuous-time network using the concept of a computational
energy function, and also evaluate the stationary solutions of the network. [16]

6. What is a minimum spanning tree? Write the algorithm of the self-organizing
feature map. [16]
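The self-organizing feature map algorithm can be sketched for the simplest case of a 1-D chain of neurons on scalar inputs; the grid size, neighbourhood schedule, and learning-rate decay below are illustrative choices.

```python
import random

# Hedged sketch of the SOFM algorithm: find the best-matching unit, then
# move it and its neighbours toward the input, with a shrinking
# neighbourhood and a decaying learning rate.
random.seed(1)
w = [random.random() for _ in range(10)]        # 10 neurons on a line

def som_step(w, x, lr, radius):
    j = min(range(len(w)), key=lambda i: abs(w[i] - x))   # best-matching unit
    for i in range(len(w)):
        if abs(i - j) <= radius:                # neighbourhood update
            w[i] += lr * (x - w[i])

for t in range(2000):
    lr = 0.5 * (1 - t / 2000)                   # decaying learning rate
    radius = max(0, 3 - t // 500)               # shrinking neighbourhood
    som_step(w, random.random(), lr, radius)
```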

7. What do you understand by a neural controller? Explain the capability of a simple
neural network to learn to balance an inverted pendulum. How is this method
different from traditional control approaches? [16]

8. What do you understand by finite resolution and conversion error? Explain the
circuit producing a single digitally programmable weight employing a multiplying
D/A converter (MDAC). [16]

⋆⋆⋆⋆⋆

Code No: RR410212 Set No. 3
IV B.Tech I Semester Regular Examinations, November 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆

1. (a) For a particular network the data is given below. Draw the architecture and
verify the result.

Input vector = [3, 4, 0]

Weight matrix for the hidden layer =
[ 2 1 0 ]
[ 1 2 2 ]
[ 0 3 1 ]

Weight matrix for the output layer = [ −1 1 2 ]T

Threshold vector for the hidden layer = [ 0 0 1 ]T
Threshold for the output layer = [ 1 ]. Find the output.
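The forward pass asked for in part (a) can be computed directly; since the question does not state the activation function, a linear (identity) activation with net = W·x − threshold at each layer is assumed here, and under a different activation the verified output will differ.

```python
# Hedged computation for part (a), assuming linear (identity) activations.
x = [3, 4, 0]
W_hidden = [[2, 1, 0], [1, 2, 2], [0, 3, 1]]   # rows = hidden neurons
w_out = [-1, 1, 2]
th_hidden = [0, 0, 1]
th_out = 1

# hidden layer: net input minus threshold, identity activation
h = [sum(wij * xj for wij, xj in zip(row, x)) - t
     for row, t in zip(W_hidden, th_hidden)]
# output layer
y = sum(wi * hi for wi, hi in zip(w_out, h)) - th_out
```

Under these assumptions the hidden outputs are [10, 11, 11] and the network output is 22.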
(b) Explain clearly the instar learning rule. [8+8]

2. (a) Distinguish between local minima and global minima in neural networks.
What are their effects on neural networks?
(b) Explain the distinction between stability and convergence. [8+8]

3. Write and discuss the single-layer discrete perceptron training algorithm. [16]

4. Show by geometrical arguments that with 3 layers of non-linear units, any hard
classification problem can be solved. [16]

5. Describe discrete-time Hopfield networks with the necessary illustrations. [16]
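The defining recall dynamics can be sketched briefly: bipolar neurons are updated asynchronously with v_i = sgn(Σ_j w_ij v_j), and with a symmetric zero-diagonal weight matrix the network settles into a stable state. The small weight matrix and the zero-threshold assumption below are illustrative.

```python
# Hedged sketch of discrete-time Hopfield recall with asynchronous
# updates and zero thresholds; the 3-neuron weight matrix is illustrative.
def sgn(x, old):
    return 1 if x > 0 else -1 if x < 0 else old   # keep state at net = 0

def recall(W, v, sweeps=10):
    v = list(v)
    n = len(v)
    for _ in range(sweeps):
        changed = False
        for i in range(n):                 # asynchronous: one neuron at a time
            net = sum(W[i][j] * v[j] for j in range(n))
            new = sgn(net, v[i])
            if new != v[i]:
                v[i] = new
                changed = True
        if not changed:                    # reached a stable state
            break
    return v

W = [[0, 1, -1],
     [1, 0, -1],
     [-1, -1, 0]]                          # symmetric, zero diagonal
```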

6. What is a counter-propagation network? What is the training procedure followed
here? [16]

7. Using back-propagation learning, find the new weights for the network shown in
Figure 7 when presented with the input (0, 1) and the target output 0.8. Use a
learning rate of α = 0.25 and the bipolar sigmoid activation function. [16]
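A single back-propagation step in the spirit of this question can be sketched as follows. Figure 7 is not reproduced here, so a 2-2-1 network with assumed initial weights and biases is used purely to illustrate the update; with the actual weights from the figure the numbers will differ. The bipolar sigmoid is f(x) = 2/(1 + e^(−x)) − 1, with derivative f'(x) = 0.5(1 + f)(1 − f).

```python
import math

# Hedged single back-propagation step with the bipolar sigmoid.
# W1 and W2 are ASSUMED initial weights (last entry of each row = bias),
# since the figure's actual values are not available here.
def f(x):
    return 2 / (1 + math.exp(-x)) - 1

def df(y):
    return 0.5 * (1 + y) * (1 - y)         # derivative in terms of the output

alpha, x, target = 0.25, [0, 1], 0.8
W1 = [[0.6, -0.1, 0.3], [-0.3, 0.4, 0.5]]  # assumed hidden weights
W2 = [0.4, 0.1, -0.2]                      # assumed output weights

xi = x + [1]
h = [f(sum(w * v for w, v in zip(row, xi))) for row in W1]
y = f(sum(w * v for w, v in zip(W2, h + [1])))
err_before = abs(target - y)

dy = (target - y) * df(y)                          # output delta
dh = [dy * W2[i] * df(h[i]) for i in range(2)]     # hidden deltas
W2 = [w + alpha * dy * v for w, v in zip(W2, h + [1])]
W1 = [[w + alpha * dh[i] * v for w, v in zip(W1[i], xi)] for i in range(2)]

# forward pass with the updated weights
h_new = [f(sum(w * v for w, v in zip(row, xi))) for row in W1]
y_new = f(sum(w * v for w, v in zip(W2, h_new + [1])))
```

One step with this learning rate moves the output toward the 0.8 target.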


Figure 7
8. Analyze the neuron circuit given in Figure 8 and compute its weight value. Compute
the neuron's response for the following inputs, knowing that fsat+ = −fsat− = 13 V.

(a) X = [−1 3 0.5]T
(b) X = [0 0.5 1.5]T. [16]

Figure 8

⋆⋆⋆⋆⋆

Code No: RR410212 Set No. 4
IV B.Tech I Semester Regular Examinations, November 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆

1. (a) How do you justify that the brain is a parallel distributed processing system?
(b) Explain the structure of the brain. [8+8]
2. (a) What are the requirements of learning laws?
(b) Distinguish between activation and synaptic dynamics models. [16]
3. Discuss in detail the minimum-distance classification system for a linear
discriminant function. [16]
4. (a) Why is convergence not guaranteed for the back-propagation learning
algorithm?
(b) Discuss a few tasks that can be performed by a back-propagation network, and
the significance of semi-linear functions in back propagation. [6+10]
5. (a) What are the advantages of the vector field method over other methods?
(b) The weight matrix W for a network with bipolar discrete neurons is given as:

W =
[  0  1 −1 −1 −3 ]
[  1  0  1  1 −1 ]
[ −1  1  0  3  1 ]
[ −1  1  3  0  1 ]
[ −3 −1  1  1  0 ]

Knowing that the thresholds and external inputs of the neurons are zero, compute
the values of energy for v = [−1 1 1 1 1]T and v = [−1 −1 1 −1 −1]T. [4+12]
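Part (b) reduces to evaluating the standard Hopfield energy: with zero thresholds and zero external inputs, E = −(1/2) vᵀWv, which can be computed directly for the two given state vectors.

```python
# Computation for part (b): energy of a discrete Hopfield network with
# zero thresholds and zero external inputs, E = -0.5 * v^T W v.
W = [[0, 1, -1, -1, -3],
     [1, 0, 1, 1, -1],
     [-1, 1, 0, 3, 1],
     [-1, 1, 3, 0, 1],
     [-3, -1, 1, 1, 0]]

def energy(W, v):
    n = len(v)
    return -0.5 * sum(v[i] * W[i][j] * v[j]
                      for i in range(n) for j in range(n))

E1 = energy(W, [-1, 1, 1, 1, 1])     # -> -10.0
E2 = energy(W, [-1, -1, 1, -1, -1])  # -> 6.0
```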
6. Write short notes on the Grossberg layer and its training. Explain with an example.
[16]
7. Consider the simple neural net shown in Figure 7. Assume the hidden unit
has the activation function f(ξ) = tanh(ξ) and that the output unit has a linear
activation with unit slope. Show that there exists a set of real-valued weights
{w1, w2, v1, v2} that approximates the discontinuous function g(x) = a·sgn(x − b) + c,
for all x, a, b, c ∈ R, to any degree of accuracy. [16]

Figure 7

1 of 2
Code No: RR410212 Set No. 4
8. Analyze the neuron circuit given in Figure 8 and compute its weight value. Compute
the neuron's response for the following inputs, knowing that fsat+ = −fsat− = 13 V.

(a) X = [−1 3 0.5]T
(b) X = [0 0.5 1.5]T. [16]

Figure 8

⋆⋆⋆⋆⋆

