MCQ
1. What is the relation between the distance between clusters and the corresponding class
discriminability?
a. proportional
b. inversely-proportional
c. no-relation
Ans: (a)
Ans: (d)
Ans: (b)
Ans: (c)
Ans: (b)
6. Unsupervised classification can be termed
a. distance measurement
b. dimensionality reduction
c. clustering
d. none of the above
Ans: (d)
Ans: (c)
Linear Algebra
MCQ
1. Which of the properties are true for matrix multiplication
a. Distributive
b. Commutative
c. both a and b
d. neither a nor b
Ans: (a)
2. Which of the operations can be valid with two matrices of different sizes?
a. addition
b. subtraction
c. multiplication
d. Division
Ans: (c)
Ans: (c)
Ans: (a)
Ans: (d)
Ans: (d)
Ans: (d)
Eigenvalues and Eigenvectors
MCQ
1. The Eigenvalues of the matrix $\begin{pmatrix} 2 & 7 \\ -1 & -6 \end{pmatrix}$ are
a. 3 and 0
b. -2 and 7
c. -5 and 1
d. 3 and -5
Ans: (c)
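The characteristic polynomial is $\lambda^2 + 4\lambda - 5 = (\lambda + 5)(\lambda - 1)$, so the eigenvalues are $-5$ and $1$. A quick NumPy check (an illustrative sketch, not part of the original key):

```python
import numpy as np

A = np.array([[2, 7],
              [-1, -6]])

# Roots of det(A - lambda*I) = lambda^2 + 4*lambda - 5 = (lambda + 5)(lambda - 1)
print(np.linalg.eigvals(A))  # -5 and 1 (in some order)
```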
2. The Eigenvalues of $\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$ are
a. -1, 1 and 2
b. 1, 1 and -2
c. -1, -1 and 2
d. 1, 1 and 2
Ans: (c)
3. The Eigenvectors of $\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$ are
a. (1 1 1), (1 0 1) and (1 1 0)
b. (1 1 -1), (1 0 -1) and (1 1 0)
c. (-1 1 -1), (1 0 1) and (1 1 0)
d. (1 1 1), (-1 0 1) and (-1 1 0)
Ans: (d)
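A NumPy sketch verifying questions 2 and 3 (illustrative only); `eigh` applies since the matrix is symmetric:

```python
import numpy as np

M = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

vals, vecs = np.linalg.eigh(M)  # symmetric matrix, so eigh is appropriate
print(vals)  # [-1. -1.  2.]

# The eigenspace for -1 is the plane x + y + z = 0, so (-1 0 1) and (-1 1 0)
# form one valid basis; (1 1 1) spans the eigenspace for 2.
print(M @ np.array([1, 1, 1]))   # [2 2 2]   =  2 * (1 1 1)
print(M @ np.array([-1, 0, 1]))  # [1 0 -1]  = -1 * (-1 0 1)
```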
Ans: (c)
Ans: (c)
Ans: (a)
Vector Spaces
MCQ
1. Which of these is a vector space?
a. $\{(x\ y\ z\ w)' \in \mathbb{R}^4 \mid x + y - z + w = 0\}$
b. $\{(x\ y\ z)' \in \mathbb{R}^3 \mid x + y + z = 0\}$
c. $\{(x\ y\ z)' \in \mathbb{R}^3 \mid x^2 + y^2 + z^2 = 1\}$
d. $\left\{\begin{pmatrix} a & 1 \\ b & c \end{pmatrix} \mid a, b, c \in \mathbb{R}\right\}$
Ans: (a)
Ans: (d)
Ans: (d)
4. What is the dimension of the subspace $H = \left\{\begin{pmatrix} a - 3b + 6c \\ 5a + 4d \\ b - 2c - d \\ 5d \end{pmatrix} : a, b, c, d \in \mathbb{R}\right\}$?
a. 1
b. 2
c. 3
d. 4
Ans: (c)
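H is spanned by one generator vector per parameter; the c-generator is $-2$ times the b-generator, so only three are independent. A NumPy check (illustrative sketch):

```python
import numpy as np

# Columns are the generator vectors of H for the parameters a, b, c, d
G = np.column_stack([[1, 5, 0, 0],    # a
                     [-3, 0, 1, 0],   # b
                     [6, 0, -2, 0],   # c  (= -2 times the b column)
                     [0, 4, -1, 5]])  # d
print(np.linalg.matrix_rank(G))  # 3
```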
5. What is the rank of the matrix $\begin{pmatrix} 2 & -1 & 1 & -6 & 8 \\ 1 & -2 & -4 & 3 & -2 \\ -7 & 8 & 10 & 3 & -10 \\ 4 & -5 & -7 & 0 & 4 \end{pmatrix}$?
a. 2
b. 3
c. 4
d. 5
Ans: (a)
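Here row 3 equals $-2R_1 - 3R_2$ and row 4 equals $R_1 + 2R_2$, leaving two independent rows. A NumPy check (illustrative sketch):

```python
import numpy as np

A = np.array([[ 2, -1,  1, -6,   8],
              [ 1, -2, -4,  3,  -2],
              [-7,  8, 10,  3, -10],
              [ 4, -5, -7,  0,   4]])
print(np.linalg.matrix_rank(A))  # 2
```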
6. If $v_1, v_2, v_3, v_4$ are in $\mathbb{R}^4$ and $v_3$ is not a linear combination of $v_1, v_2, v_4$, then $\{v_1, v_2, v_3, v_4\}$ must be linearly independent.
a. True
b. False
Ans: (b). The remaining vectors could still be dependent among themselves, e.g. $v_1 = v_2$.
7. The vectors $x_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$, $x_2 = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}$, $x_3 = \begin{pmatrix} 3 \\ 1 \\ 4 \end{pmatrix}$ are:
a. Linearly dependent
b. Linearly independent
Ans: (a). Since $x_3 = 2x_1 + x_2$.
8. The vectors $x_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$, $x_2 = \begin{pmatrix} -5 \\ 3 \end{pmatrix}$ are:
a. Linearly dependent
b. Linearly independent
Ans: (b).
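Both dependence checks reduce to a rank computation; a NumPy sketch for questions 7 and 8 (illustrative only):

```python
import numpy as np

# Q7: rank 2 < 3 vectors, so they are dependent (x3 = 2*x1 + x2)
X7 = np.column_stack([[1, 1, 1], [1, -1, 2], [3, 1, 4]])
print(np.linalg.matrix_rank(X7))  # 2 -> linearly dependent

# Q8: rank 2 = 2 vectors, so they are independent
X8 = np.column_stack([[1, 2], [-5, 3]])
print(np.linalg.matrix_rank(X8))  # 2 -> linearly independent
```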
Rank and SVD
MCQ
1. What is the number of non-zero rows in an echelon form called?
Ans: (b)
2. Let A and B be arbitrary m × n matrices. Which one of the following statements is true?
a. $\mathrm{rank}(A + B) \le \mathrm{rank}(A) + \mathrm{rank}(B)$
b. $\mathrm{rank}(A + B) < \mathrm{rank}(A) + \mathrm{rank}(B)$
c. $\mathrm{rank}(A + B) \ge \mathrm{rank}(A) + \mathrm{rank}(B)$
d. $\mathrm{rank}(A + B) > \mathrm{rank}(A) + \mathrm{rank}(B)$
Ans: (a)
3. The rank of the matrix $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ is
a. 0
b. 2
c. 1
d. 3
Ans: (a)
4. The rank of $\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}$ is
a. 3
b. 2
c. 1
d. 0
Ans: (c)
5. Consider the following two statements:
I. The maximum number of linearly independent column vectors of a matrix A is called the rank of A.
II. If A is an n x n square matrix, it will be nonsingular if rank A = n.
With reference to the above statements, which of the following applies?
Ans: (b)
6. The rank of a 3 x 3 matrix C (= AB), found by multiplying a non-zero column matrix A of size
3 x 1 and a non-zero row matrix B of size 1 x 3, is
a. 0
b. 1
c. 2
d. 3
Ans: (b)
7. Find the singular values of the matrix $B = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$
a. 2 and 4
b. 3 and 4
c. 2 and 3
d. 3 and 1
Ans: (d)
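Since $B^TB = \begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix}$ has eigenvalues 9 and 1, the singular values are their square roots, 3 and 1. A NumPy check (illustrative sketch):

```python
import numpy as np

B = np.array([[1, 2],
              [2, 1]])
# Singular values = square roots of the eigenvalues of B^T B
print(np.linalg.svd(B, compute_uv=False))  # [3. 1.]
```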
a. $AA^T$
b. $A^TA$
c. $AA^{-1}$
d. $A^*A$
Ans: (a)
Ans: (a)
Normal Distribution and Decision Boundary I
MCQ
1. Three components of Bayes decision rule are class prior, likelihood and …
a. Evidence
b. Instance
c. Confidence
d. Salience
Ans: (a)
Ans: (a)
Ans: (d)
4. When the value of the data equals the mean of the distribution to which it belongs, the Gaussian function attains its … value
a. Minimum
b. Maximum
c. Zero
d. None of the above
Ans: (b)
Ans: (a)
6. A property of the correlation coefficient is
a. $-1 \le \rho_{xy} \le 1$
b. $-0.5 \le \rho_{xy} \le 1$
c. $-1 \le \rho_{xy} \le 1.5$
d. $-0.5 \le \rho_{xy} \le 0.5$
Ans: (a)
Ans: (b)
Ans: (a)
Ans: (a)
Normal Distribution and Decision Boundary II
MCQ
1. If the covariance matrix is strictly diagonal with equal variances, then the iso-contour lines (data scatter) of the data resemble
a. Concentric circle
b. Ellipse
c. Oriented Ellipse
d. None of the above
Ans: (a)
Ans: (c)
Ans: (a)
Ans: (b)
Ans: (b)
6. For spiral data the decision boundary will be
a. Linear
b. Non-linear
c. Does not exist
Ans: (b)
7. In a 2-class problem, if the discriminant function satisfies $g_1(x) = g_2(x)$, then the data point lies
a. On the decision boundary
b. Class 1’s side
c. Class 2’s side
d. None of the above
Ans: (a)
Bayes Theorem
MCQ
1. $P(\vec{X})\,P(w_i \mid \vec{X}) =$
a. $P(1 - \vec{X})\,P(w_i \mid \vec{X})$
b. $P(\vec{X})\,P(1 - w_i \mid \vec{X})$
c. $P(\vec{X} \mid w_i)\,P(w_i)$
d. $P(\vec{X} - w_i)\,P(w_i \mid \vec{X})$
Ans: (c)
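Both sides equal the joint probability $P(\vec{X}, w_i)$. A small numeric check on a made-up discrete joint distribution (an illustrative sketch; the table values are arbitrary):

```python
import numpy as np

# Made-up joint distribution: rows = classes w_i, columns = values of X
joint = np.array([[0.10, 0.20],
                  [0.30, 0.40]])
p_x = joint.sum(axis=0)         # evidence P(X)
p_w = joint.sum(axis=1)         # priors P(w_i)
post = joint / p_x              # posteriors P(w_i | X)
lik = joint / p_w[:, None]      # likelihoods P(X | w_i)

# P(X) P(w_i | X) = P(X | w_i) P(w_i): both recover the joint
print(np.allclose(p_x * post, lik * p_w[:, None]))  # True
```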
Ans: (a)
Ans: (b)
4. When the covariance matrix in the Mahalanobis distance becomes the identity, the distance reduces to
a. Euclidean distance
b. Manhattan distance
c. City block distance
d. Geodesic distance
Ans: (a)
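With $\Sigma = I$, the quadratic form $\sqrt{(x-\mu)^T \Sigma^{-1} (x-\mu)}$ collapses to the ordinary Euclidean norm. A one-line check (illustrative sketch with made-up points):

```python
import numpy as np

x, mu = np.array([1.0, 2.0]), np.array([4.0, 6.0])
sigma = np.eye(2)  # identity covariance

d_maha = np.sqrt((x - mu) @ np.linalg.inv(sigma) @ (x - mu))
print(d_maha, np.linalg.norm(x - mu))  # both 5.0
```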
Ans: (d)
6. Bayes error is the ….. bound on the probability of classification error.
a. Lower
b. Upper
Ans: (a)
7. Bayes decision rule is the theoretically …….. classifier that minimizes the probability of classification error.
a. Best
b. Worst
c. Average
Ans: (a)
Linear Discriminant Function and Perceptron Learning
MCQ
1. A perceptron is:
a. a single McCulloch-Pitts neuron
b. an autoassociative neural network
c. a double layer autoassociative neural network
d. All the above
Ans: (a)
2. Perceptron is used as a classifier for
a. Linearly separable data
b. Non-linearly separable data
c. Linearly non-separable data
d. Any data
Ans: (a)
3. A 4-input neuron has weights 1, 2, 3 and 4. The transfer function is linear with the
constant of proportionality being equal to 2. The inputs are 4, 10, 5 and 20 respectively.
The output will be:
a. 238
b. 76
c. 119
d. 178
Ans: (a)
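The output is $2 \times (1 \cdot 4 + 2 \cdot 10 + 3 \cdot 5 + 4 \cdot 20) = 2 \times 119 = 238$. In code (illustrative sketch):

```python
import numpy as np

w = np.array([1, 2, 3, 4])
x = np.array([4, 10, 5, 20])
k = 2  # constant of proportionality of the linear transfer function
print(k * np.dot(w, x))  # 2 * (4 + 20 + 15 + 80) = 238
```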
[Figure: a perceptron with inputs $u_1$, $u_2$ and activation $f(a)$]
$$f(a) = \begin{cases} 1 & \text{for } a > 0 \\ 0 & \text{for } a = 0 \\ -1 & \text{for } a < 0 \end{cases}$$
Ans: (a)
Ans: (a)
7. Consider a perceptron for which the training samples $u \in \mathbb{R}^2$ and the actual output $x \in \{0,1\}$. Let the desired output be 0 when an element of class A = {(2,4), (3,2), (3,4)} is applied as input, and let it be 1 for class B = {(1,0), (1,2), (2,1)}. Let the learning rate $\eta$ be 0.5 and the initial connection weights be $w_0 = 0$, $w_1 = 1$, $w_2 = 1$. Answer the following questions:
A. Will the perceptron convergence procedure terminate if the input patterns from classes A and B are repeatedly applied, choosing a very small learning rate?
a. Yes
b. No
c. Can’t say
Ans: (a). Since the classes are linearly separable.
B. Now add the sample (5,2) to class B. What is your answer now, i.e. will it converge or not?
a. Yes
b. No
c. Can’t say
Ans: (b). After adding this sample, the classes are no longer linearly separable.
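A minimal training sketch for both parts (assuming the classic update $w \leftarrow w + \eta(d - y)x$ on bias-augmented inputs; the epoch cap stands in for "repeatedly applied"):

```python
import numpy as np

def train_perceptron(X, d, w, eta=0.5, max_epochs=1000):
    # Perceptron rule w += eta*(d - y)*x; returns (weights, converged?)
    Xa = np.hstack([np.ones((len(X), 1)), X])  # prepend bias input 1
    for _ in range(max_epochs):
        errors = 0
        for xi, di in zip(Xa, d):
            y = 1 if xi @ w > 0 else 0
            if y != di:
                w = w + eta * (di - y) * xi
                errors += 1
        if errors == 0:
            return w, True   # a full pass with no mistakes
    return w, False          # never settled within the epoch cap

A = [(2, 4), (3, 2), (3, 4)]   # desired output 0
B = [(1, 0), (1, 2), (2, 1)]   # desired output 1
X = np.array(A + B, dtype=float)
d = np.array([0] * len(A) + [1] * len(B))
w0 = np.array([0.0, 1.0, 1.0])  # (w_0, w_1, w_2)

print(train_perceptron(X, d, w0)[1])  # True: separable, so it terminates
# Part B: the A point (3,2) now lies between the B points (1,2) and (5,2)
Xb = np.vstack([X, [5.0, 2.0]])
print(train_perceptron(Xb, np.append(d, 1), w0)[1])  # False: not separable
```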
Linear and Non-Linear Decision Boundaries
MCQ
1. The Decision Boundary in the case of the same covariance matrix, with identical diagonal elements, is:
a. Linear
b. Non-Linear
c. None of the above
Ans: (a)
2. The Decision Boundary in the case of a diagonal covariance matrix with identical diagonal elements is given by $W^T(X - X_0) = 0$, where $W$ is given by:
a. $(\mu_k - \mu_l)/\sigma^2$
b. $(\mu_k + \mu_l)/\sigma^2$
c. $(\mu_k^2 + \mu_l^2)/\sigma^2$
d. $(\mu_k + \mu_l)/\sigma$
Ans: (a)
3. The Decision Boundary in the case of an arbitrary covariance matrix that is identical for all classes is:
a. Linear
b. Non-Linear
c. None of the above
Ans: (a)
4. The Decision Boundary in the case of an arbitrary covariance matrix that is identical for all classes is given by $W^T(X - X_0) = 0$, where $W$ is given by:
a. $(\mu_k - \mu_l)/\sigma^2$
b. $\Sigma^{-1}(\mu_k - \mu_l)$
c. $(\mu_k^2 + \mu_l^2)/\sigma^2$
d. $\Sigma^{-1}(\mu_k^2 - \mu_l^2)$
Ans: (b)
Ans: (b)
6. The Discriminant function in the case of an arbitrary covariance matrix, with all parameters class-dependent, is given by $X^T W_i X + w_i^T X + w_{i0} = 0$, where $W_i$ is given by:
a. $-\frac{1}{2}\Sigma_i^{-1}$
b. $\Sigma_i^{-1}\mu_i$
c. $-\frac{1}{2}\Sigma_i^{-1}\mu_i$
d. $-\frac{1}{4}\Sigma_i^{-1}$
Ans: (a)
PCA
MCQ
1. The tool used to obtain a PCA is
a. LU Decomposition
b. QR Decomposition
c. SVD
d. Cholesky Decomposition
Ans: (c)
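A minimal sketch of the SVD-PCA connection (assuming rows are samples; the right singular vectors of the centered data matrix are the principal directions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # made-up data: 100 samples, 3 features
Xc = X - X.mean(axis=0)         # center the data first

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt                       # rows = principal axes
explained_var = s**2 / (len(X) - 1)   # variance along each axis
scores = Xc @ Vt.T                    # data expressed in the PCA basis
print(explained_var)
```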
Ans: (b)
b. $\sum_{k=1}^{N}(x_k - \mu)^T(x_k - \mu)$
c. $\sum_{k=1}^{N}(\mu - x_k)(\mu - x_k)^T$
d. $\sum_{k=1}^{N}(\mu - x_k)^T(\mu - x_k)$
Ans: (a)
Ans: (b)
5. The vectors which correspond to the vanishing singular values of a matrix that span the null
space of the matrix are:
a. Right singular vectors
b. Left singular vectors
c. All the singular vectors
d. None
Ans: (a)
6. If $S$ is the scatter of the data in the original domain, then the scatter of the transformed feature vectors is given by
a. $S^T$
b. $S$
c. $WSW^T$
d. $W^TSW$
Ans: (d)
Ans: (a)
8. The following linear transform does not have a fixed set of basis vectors:
a. DCT
b. DFT
c. DWT
d. PCA
Ans: (d)
b. $\sum_{i=1}^{c}\sum_{k=1}^{N}(x_k - \mu_i)^T(x_k - \mu_i)$
c. $\sum_{i=1}^{c}\sum_{k=1}^{N}(x_i - \mu_k)(x_i - \mu_k)^T$
d. $\sum_{i=1}^{c}\sum_{k=1}^{N}(x_i - \mu_k)^T(x_i - \mu_k)$
Ans: (a)
10. The Between-Class scatter matrix is given by:
a. $\sum_{i=1}^{c} N_i(\mu_i - \mu)(\mu_i - \mu)^T$
b. $\sum_{i=1}^{c} N_i(\mu_i - \mu)^T(\mu_i - \mu)$
c. $\sum_{i=1}^{c} N_i(\mu - \mu_i)(\mu - \mu_i)^T$
d. $\sum_{i=1}^{c} N_i(\mu - \mu_i)^T(\mu - \mu_i)$
Ans: (a)
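A NumPy sketch computing both scatter matrices for made-up two-class data, using the outer-product forms selected above (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
classes = [rng.normal(m, 1.0, size=(20, 2)) for m in ([0, 0], [3, 3])]
mu = np.vstack(classes).mean(axis=0)   # overall mean

# Within-class scatter: sum of (x - mu_i)(x - mu_i)^T over each class
S_W = sum((Xi - Xi.mean(axis=0)).T @ (Xi - Xi.mean(axis=0)) for Xi in classes)
# Between-class scatter: sum of N_i (mu_i - mu)(mu_i - mu)^T, as in 10(a)
S_B = sum(len(Xi) * np.outer(Xi.mean(axis=0) - mu, Xi.mean(axis=0) - mu)
          for Xi in classes)
print(S_W.shape, S_B.shape)  # (2, 2) (2, 2)
```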
Ans: (a)
Linear Discriminant Analysis
MCQ
1. Linear Discriminant Analysis is
a. Unsupervised Learning
b. Supervised Learning
c. Semi-supervised Learning
d. None of the above
Ans: (b)
Ans: (b)
Ans: (a)
4. The upper bound on the number of non-zero Eigenvalues of $S_w^{-1}S_B$ is (C = number of classes)
a. C - 1
b. C + 1
c. C
d. None of the above
Ans: (a)
5. If $S_w$ is singular and $N < D$, its rank is at most (N is the total number of samples, D the dimension of the data, C the number of classes)
a. N + C
b. N
c. C
d. N - C
Ans: (d)
6. If $S_w$ is singular and $N < D$, the alternative solution is to use (N is the total number of samples, D the dimension of the data)
a. EM
b. PCA
c. ML
d. Any one of the above
Ans: (b)
GMM
MCQ
1. A method to estimate the parameters of a distribution is
a. Maximum Likelihood
b. Linear Programming
c. Dynamic Programming
d. Convex Optimization
Ans: (a)
Ans: (c)
Ans: (a)
Ans: (b)
Ans: (d)
6. For Gaussian mixture models, parameters are estimated using a closed form solution by
a. Expectation Minimization
b. Expectation Maximization
c. Maximum Likelihood
d. None of the above
Ans: (b)
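EM iterates: the E-step computes soft assignments, and the M-step re-estimates the weights, means, and covariances with closed-form expressions. A minimal fitting sketch using scikit-learn's GaussianMixture on made-up two-blob data (illustrative only):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.normal(5, 1, size=(200, 2))])

# fit() runs EM until the log-likelihood stops improving
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.weights_.round(2), gmm.means_.round(2))
```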
Ans: (b,c)
Ans: (c)