
R05

Code No: R05421204

Set No. 2

IV B.Tech II Semester Examinations, AUGUST 2011


PATTERN RECOGNITION
Common to Information Technology, Computer Science And Systems
Engineering
Time: 3 hours
Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
* * * * *
1. Let θ1 and θ2 be unknown parameters for the component densities p(x | ω1, θ1) and
p(x | ω2, θ2), respectively. Assume that θ1 and θ2 are initially statistically independent, so that p(θ1, θ2) = p1(θ1) p2(θ2).


(a) Show that after one sample x1 from the mixture density is observed, p(θ1, θ2 | x1)
can no longer be factored as p1(θ1 | x1) p2(θ2 | x1) if
∂p(x | ωi, θi)/∂θi ≠ 0, i = 1, 2.


(b) What does this imply in general about the statistical dependence of parameters
in unsupervised learning?
[8+8]
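For reference, an outline of the reasoning behind part (a) (a sketch only, with class priors P(ω1), P(ω2) assumed known; not an official solution): by Bayes' rule applied to the mixture density,

```latex
p(\theta_1, \theta_2 \mid x_1)
  \;\propto\; p(x_1 \mid \theta_1, \theta_2)\, p_1(\theta_1)\, p_2(\theta_2),
\qquad
p(x_1 \mid \theta_1, \theta_2)
  = P(\omega_1)\, p(x_1 \mid \omega_1, \theta_1)
  + P(\omega_2)\, p(x_1 \mid \omega_2, \theta_2).
```

Because the mixture likelihood is a sum of two products, it couples θ1 and θ2, so the posterior no longer factors, unless one component density is insensitive to its parameter, i.e. its derivative with respect to θi vanishes.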
2. Write short notes on the following:


(a) Two-category classification problem


(b) Bayes decision rule

(c) Zero-one-loss function.


[16]
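For reference, the three topics above connect in a few lines of code: under the zero-one loss function, the Bayes decision rule for a two-category problem reduces to choosing the class with the larger posterior. A minimal sketch (the Gaussian class-conditionals and the numbers are illustrative assumptions, not part of the question):

```python
import math

def gaussian(x, mu, sigma):
    """Univariate normal density N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_decide(x, priors, params):
    """Zero-one loss: minimizing risk = maximizing the posterior.
    The evidence p(x) cancels, so compare prior * likelihood."""
    scores = [p * gaussian(x, mu, sigma) for p, (mu, sigma) in zip(priors, params)]
    return scores.index(max(scores))

# Illustrative two-category problem: class 0 ~ N(0,1), class 1 ~ N(3,1), equal priors.
print(bayes_decide(0.5, [0.5, 0.5], [(0.0, 1.0), (3.0, 1.0)]))  # -> 0
print(bayes_decide(2.5, [0.5, 0.5], [(0.0, 1.0), (3.0, 1.0)]))  # -> 1
```

With equal priors and unit-variance Gaussians centred at 0 and 3, the decision boundary falls midway at x = 1.5, so 0.5 is assigned to class 0 and 2.5 to class 1.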

3. (a) How is pattern classification different from classical statistical hypothesis testing?

(b) Name related fields of pattern recognition and explain how pattern recognition
is useful in those fields.
[8+8]
4. (a) Explain non-linear component analysis with a neat diagram.
(b) Show briefly that a three-layer network cannot be used for non-linear principal
component analysis, even if the middle layer consists of nonlinear units. [8+8]
5. (a) In which case will a Hidden Markov model parameter that is set to zero initially
remain at zero throughout the re-estimation procedure?
(b) The constraints of the left-right model have no effect on the re-estimation procedure.
Justify.
[8+8]
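For reference, part (a) can be sketched from the standard Baum-Welch re-estimation formulas (notation as in Rabiner's HMM tutorial; an outline under that notation, not an official answer):

```latex
\bar{a}_{ij}
  = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)},
\qquad
\xi_t(i,j)
  = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{P(O \mid \lambda)}.
```

Every ξt(i, j) carries a factor of aij, so aij = 0 forces ξt(i, j) = 0 for all t, and hence the re-estimate is again zero: a transition probability (and, by the analogous argument, an emission probability) initialized to zero remains zero under re-estimation. The same observation explains why the zero entries that define a left-right model are preserved automatically, which is the substance of part (b).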
6. (a) Construct the Bayesian decision boundary for three-dimensional binary data by
considering a two-class problem having three independent binary features with
known feature probabilities.
(b) Feature x is normally distributed with μ = 3 and σ = 2. Find P(−3 < x < 2).
[8+8]
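For reference, part (b) can be checked numerically. The printed symbols were lost in reproduction; assuming they were μ = 3, σ = 2 and the interval (−3, 2), the probability follows from the normal CDF, which Python exposes via math.erf:

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2), written in terms of the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# P(-3 < x < 2) for x ~ N(mu=3, sigma=2):
p = norm_cdf(2.0, 3.0, 2.0) - norm_cdf(-3.0, 3.0, 2.0)
print(round(p, 4))  # -> 0.3072
```

Equivalently, standardizing gives Φ((2 − 3)/2) − Φ((−3 − 3)/2) = Φ(−0.5) − Φ(−3).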
7. (a) Explain the general principle of maximum likelihood estimation.

(b) Find the maximum likelihood estimate for μ in a normal distribution.

[8+8]
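For reference on part (b): setting the derivative of the log-likelihood Σk ln p(xk | μ) to zero for a normal density gives the sample mean as the estimate. A quick numerical sanity check (the sample values are illustrative):

```python
def mle_mean(samples):
    """MLE of mu for a normal distribution with known variance: the sample
    average, obtained by solving d/dmu sum_k log p(x_k | mu) = 0."""
    return sum(samples) / len(samples)

print(mle_mean([2.0, 4.0, 6.0]))  # -> 4.0
```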

8. (a) Given the observation sequence O = (o1, o2, ..., oT) and the model λ =
(A, B, π), how do we choose a corresponding state sequence q = (q1, q2, ..., qT)
that is optimal in some sense (i.e. best explains the observations)?
(b) Explain the N-state urn-and-ball model.

[8+8]
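For reference on part (a): under the criterion "single best state sequence", the standard answer is the Viterbi algorithm. A compact sketch (the toy model at the bottom is an illustrative assumption, not part of the question):

```python
def viterbi(obs, A, B, pi):
    """Most likely state sequence for a discrete HMM (Viterbi algorithm).
    A[i][j]: transition prob i -> j, B[i][k]: prob of emitting symbol k
    in state i, pi[i]: initial state prob."""
    n = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(n)]  # best partial-path probs
    back = []                                         # back-pointers per step
    for o in obs[1:]:
        prev, delta, ptr = delta, [], []
        for j in range(n):
            best = max(range(n), key=lambda i: prev[i] * A[i][j])
            ptr.append(best)
            delta.append(prev[best] * A[best][j] * B[j][o])
        back.append(ptr)
    state = max(range(n), key=lambda j: delta[j])     # best final state
    path = [state]
    for ptr in reversed(back):                        # follow pointers back
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Illustrative 2-state model: states tend to persist and to emit "their" symbol.
A = [[0.9, 0.1], [0.1, 0.9]]
B = [[0.9, 0.1], [0.1, 0.9]]
pi = [0.5, 0.5]
print(viterbi([0, 0, 1, 1], A, B, pi))  # -> [0, 0, 1, 1]
```

Here delta holds the probability of the best partial path ending in each state, and the back-pointers recover the arg-max sequence.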

* * * * *


R05

Code No: R05421204

Set No. 4

IV B.Tech II Semester Examinations, AUGUST 2011


PATTERN RECOGNITION
Common to Information Technology, Computer Science And Systems
Engineering
Time: 3 hours
Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
* * * * *
1. (a) How is pattern classification different from classical statistical hypothesis testing?
(b) Name related fields of pattern recognition and explain how pattern recognition
is useful in those fields.
[8+8]


2. Write short notes on the following:


(a) Two-category classification problem


(b) Bayes decision rule
(c) Zero-one-loss function.

[16]


3. (a) In which case will a Hidden Markov model parameter that is set to zero initially
remain at zero throughout the re-estimation procedure?
(b) The constraints of the left-right model have no effect on the re-estimation procedure.
Justify.
[8+8]


4. (a) Explain the general principle of maximum likelihood estimation.


(b) Find the maximum likelihood estimate for μ in a normal distribution.

[8+8]

5. Let θ1 and θ2 be unknown parameters for the component densities p(x | ω1, θ1) and
p(x | ω2, θ2), respectively. Assume that θ1 and θ2 are initially statistically independent, so that p(θ1, θ2) = p1(θ1) p2(θ2).
(a) Show that after one sample x1 from the mixture density is observed, p(θ1, θ2 | x1)
can no longer be factored as p1(θ1 | x1) p2(θ2 | x1) if
∂p(x | ωi, θi)/∂θi ≠ 0, i = 1, 2.
(b) What does this imply in general about the statistical dependence of parameters
in unsupervised learning?
[8+8]
6. (a) Given the observation sequence O = (o1, o2, ..., oT) and the model λ =
(A, B, π), how do we choose a corresponding state sequence q = (q1, q2, ..., qT)
that is optimal in some sense (i.e. best explains the observations)?
(b) Explain the N-state urn-and-ball model.

[8+8]

7. (a) Explain non-linear component analysis with a neat diagram.


(b) Show briefly that a three-layer network cannot be used for non-linear principal
component analysis, even if the middle layer consists of nonlinear units. [8+8]

8. (a) Construct the Bayesian decision boundary for three-dimensional binary data by
considering a two-class problem having three independent binary features with
known feature probabilities.
(b) Feature x is normally distributed with μ = 3 and σ = 2. Find P(−3 < x < 2).
[8+8]
* * * * *


R05

Code No: R05421204

Set No. 1

IV B.Tech II Semester Examinations, AUGUST 2011


PATTERN RECOGNITION
Common to Information Technology, Computer Science And Systems
Engineering
Time: 3 hours
Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
* * * * *
1. (a) How is pattern classification different from classical statistical hypothesis testing?
(b) Name related fields of pattern recognition and explain how pattern recognition
is useful in those fields.
[8+8]


2. (a) Given the observation sequence O = (o1, o2, ..., oT) and the model λ =
(A, B, π), how do we choose a corresponding state sequence q = (q1, q2, ..., qT)
that is optimal in some sense (i.e. best explains the observations)?


(b) Explain the N-state urn-and-ball model.

[8+8]

3. Let θ1 and θ2 be unknown parameters for the component densities p(x | ω1, θ1) and
p(x | ω2, θ2), respectively. Assume that θ1 and θ2 are initially statistically independent, so that p(θ1, θ2) = p1(θ1) p2(θ2).


(a) Show that after one sample x1 from the mixture density is observed, p(θ1, θ2 | x1)
can no longer be factored as p1(θ1 | x1) p2(θ2 | x1) if
∂p(x | ωi, θi)/∂θi ≠ 0, i = 1, 2.


(b) What does this imply in general about the statistical dependence of parameters
in unsupervised learning?
[8+8]

4. (a) In which case will a Hidden Markov model parameter that is set to zero initially
remain at zero throughout the re-estimation procedure?
(b) The constraints of the left-right model have no effect on the re-estimation procedure.
Justify.
[8+8]
5. (a) Explain the general principle of maximum likelihood estimation.
(b) Find the maximum likelihood estimate for μ in a normal distribution.

[8+8]

6. (a) Construct the Bayesian decision boundary for three-dimensional binary data by
considering a two-class problem having three independent binary features with
known feature probabilities.
(b) Feature x is normally distributed with μ = 3 and σ = 2. Find P(−3 < x < 2).
[8+8]
7. Write short notes on the following:
(a) Two-category classification problem

(b) Bayes decision rule


(c) Zero-one-loss function.

[16]

8. (a) Explain non-linear component analysis with a neat diagram.


(b) Show briefly that a three-layer network cannot be used for non-linear principal
component analysis, even if the middle layer consists of nonlinear units. [8+8]
* * * * *


R05

Code No: R05421204

Set No. 3

IV B.Tech II Semester Examinations, AUGUST 2011


PATTERN RECOGNITION
Common to Information Technology, Computer Science And Systems
Engineering
Time: 3 hours
Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
* * * * *
1. (a) Construct the Bayesian decision boundary for three-dimensional binary data by
considering a two-class problem having three independent binary features with
known feature probabilities.


(b) Feature x is normally distributed with μ = 3 and σ = 2. Find P(−3 < x < 2).
[8+8]
2. Let θ1 and θ2 be unknown parameters for the component densities p(x | ω1, θ1) and
p(x | ω2, θ2), respectively. Assume that θ1 and θ2 are initially statistically independent, so that p(θ1, θ2) = p1(θ1) p2(θ2).


(a) Show that after one sample x1 from the mixture density is observed, p(θ1, θ2 | x1)
can no longer be factored as p1(θ1 | x1) p2(θ2 | x1) if
∂p(x | ωi, θi)/∂θi ≠ 0, i = 1, 2.


(b) What does this imply in general about the statistical dependence of parameters
in unsupervised learning?
[8+8]


3. (a) Given the observation sequence O = (o1, o2, ..., oT) and the model λ =
(A, B, π), how do we choose a corresponding state sequence q = (q1, q2, ..., qT)
that is optimal in some sense (i.e. best explains the observations)?

(b) Explain the N-state urn-and-ball model.

[8+8]

4. Write short notes on the following:


(a) Two-category classification problem
(b) Bayes decision rule
(c) Zero-one-loss function.

[16]

5. (a) Explain the general principle of maximum likelihood estimation.


(b) Find the maximum likelihood estimate for μ in a normal distribution.

[8+8]

6. (a) How is pattern classification different from classical statistical hypothesis testing?
(b) Name related fields of pattern recognition and explain how pattern recognition
is useful in those fields.
[8+8]
7. (a) In which case will a Hidden Markov model parameter that is set to zero initially
remain at zero throughout the re-estimation procedure?

(b) The constraints of the left-right model have no effect on the re-estimation procedure.
Justify.
[8+8]
8. (a) Explain non-linear component analysis with a neat diagram.
(b) Show briefly that a three-layer network cannot be used for non-linear principal
component analysis, even if the middle layer consists of nonlinear units. [8+8]
* * * * *
