
Brief answers to 2006 exam (Introductory Mathematics and Statistics, EC961)

Q1.
(1) For α = 1 and β = 1, substitute these values into the linear system Ax = b. A simple inversion of the matrix shows

(x1, x2, x3)' = (1, 0, 0)'.

(2) To have a unique solution, matrix A must be invertible. This requires det(A) ≠ 0, which yields α ≠ 1/2. Note that det(A) ≠ 0 implies that the ranks of A and of the augmented matrix are both 3, so β can be any real number.

(3) For α = 1/2, the third column vector of A is linearly dependent on the second one, so for the equation to have a solution, b must lie in the space spanned by the first and second column vectors of A. This requires the determinant of the matrix formed by those two columns together with b to be zero, which yields β = 2.

(4) For α = 1/2 and β = 2, the equation becomes Ax = 0. The third line yields x1 = 0; substituting this into the first two lines we have two identical equations:

2x2 + x3 = 0.

So all solutions can be represented by x2 = a, x3 = -2a for a ∈ ℝ. Writing the solution in vector form,

(x1, x2, x3)' = a (0, 1, -2)'.

For a ∈ ℝ, it is straightforward to check that the solutions form a subspace with basis (0, 1, -2)' and dim = 1.
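Since the exact entries of A and b are not reproduced here, the following is only an illustration of the same det / column-span / null-space reasoning, applied to a hypothetical parametric system in SymPy (the matrix and vector below are invented for the example, not taken from the exam):

import sympy as sp

# Hypothetical A(alpha) and b(beta), used only to mirror the logic of parts (1)-(4).
alpha, beta = sp.symbols('alpha beta')
A = sp.Matrix([[1, 2, 1],
               [1, 1, alpha],
               [1, 0, 0]])
b = sp.Matrix([beta, 1, 1])

# Part (2) logic: a unique solution exists iff det(A) != 0.
print(A.det())                            # 2*alpha - 1, so here alpha != 1/2 is needed

# Part (3) logic: at the critical alpha, b must lie in the span of the
# independent columns, i.e. the bordered determinant must vanish.
A0 = A.subs(alpha, sp.Rational(1, 2))
consistency = sp.Matrix.hstack(A0[:, 0], A0[:, 1], b).det()
print(sp.solve(consistency, beta))        # the beta that makes the system solvable

# Part (4) logic: the homogeneous solutions form the null space of A0.
print(A0.nullspace())                     # one basis vector, so dim = 1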

Q2.

(1) Let the Lagrangean be

L = xy + λ(1 - x² - y²) + μ[k(1 - x) - y].

The Kuhn-Tucker conditions become

x(y - 2λx - μk) = 0,   x ≥ 0,   y - 2λx - μk ≤ 0,
y(x - 2λy - μ) = 0,    y ≥ 0,   x - 2λy - μ ≤ 0,
λ(1 - x² - y²) = 0,    λ ≥ 0,   1 - x² - y² ≥ 0,
μ[k(1 - x) - y] = 0,   μ ≥ 0,   k(1 - x) - y ≥ 0.

(2) For x > 0, y > 0, λ > 0, μ = 0 the relevant K-T conditions become

y - 2λx = 0,
x - 2λy = 0,
1 - x² - y² = 0.

Solving them yields

x = √2/2,   y = √2/2,   λ = 1/2.

Checking constraint (2) provides k ≥ √2 + 1.
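As a cross-check of this case, a minimal SymPy sketch using the same stationarity conditions:

import sympy as sp

x, y, lam = sp.symbols('x y lam', positive=True)

# Case mu = 0: the two stationarity conditions plus the binding circle constraint.
sol = sp.solve([y - 2*lam*x, x - 2*lam*y, 1 - x**2 - y**2], [x, y, lam], dict=True)
print(sol)                                   # x = y = sqrt(2)/2, lam = 1/2

# The slack constraint k*(1 - x) - y >= 0 then requires k >= y/(1 - x):
xs = ys = sp.sqrt(2)/2
print(sp.simplify(ys / (1 - xs)))            # sqrt(2) + 1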


(3) For x > 0, y > 0, λ = 0, μ > 0 the relevant K-T conditions become

y - μk = 0,
x - μ = 0,
k(1 - x) - y = 0.

Solving them yields

x = 1/2,   y = k/2,   μ = 1/2.

Checking constraint (1) provides k ≤ √3.
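A similar symbolic check for this case (same assumptions as the previous sketch):

import sympy as sp

x, y, mu, k = sp.symbols('x y mu k', positive=True)

# Case lam = 0: stationarity plus the binding constraint k*(1 - x) = y.
sol = sp.solve([y - mu*k, x - mu, k*(1 - x) - y], [x, y, mu], dict=True)
print(sol)                                   # x = 1/2, y = k/2, mu = 1/2

# The slack circle constraint x^2 + y^2 <= 1 then requires k <= sqrt(3).
print(sp.solve(sp.Rational(1, 4) + k**2 / 4 <= 1, k))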

(4) For x > 0, y > 0, λ > 0, μ > 0 the relevant K-T conditions become

y - 2λx - μk = 0,
x - 2λy - μ = 0,
1 - x² - y² = 0,
k(1 - x) - y = 0.

Solving them yields

x = (k² - 1)/(k² + 1),
y = 2k/(k² + 1),
λ = k(k² - 3)/[2(k² + 1)],   μ = (6k² - k⁴ - 1)/(k² + 1)².

Note that x > 0 requires k > 1, λ > 0 requires k > √3, and μ > 0 requires k < √2 + 1, so the restriction on k is √3 < k < √2 + 1.
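These closed forms can be verified symbolically; a minimal sketch:

import sympy as sp

k = sp.symbols('k', positive=True)
x = (k**2 - 1)/(k**2 + 1)
y = 2*k/(k**2 + 1)
lam = k*(k**2 - 3)/(2*(k**2 + 1))
mu = (6*k**2 - k**4 - 1)/(k**2 + 1)**2

# All four binding first-order conditions should simplify to zero.
print(sp.simplify(y - 2*lam*x - mu*k))    # 0
print(sp.simplify(x - 2*lam*y - mu))      # 0
print(sp.simplify(1 - x**2 - y**2))       # 0
print(sp.simplify(k*(1 - x) - y))         # 0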

(5) For k ≥ √2 + 1, the value function is given by V = x*y* = 1/2, so dV/dk = 0.
For k ≤ √3, V = k/4, so dV/dk = 1/4.
For √3 < k < √2 + 1, V = 2k(k² - 1)/(k² + 1)², so dV/dk = 2(6k² - k⁴ - 1)/(k² + 1)³ > 0.
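A quick check of the derivative on the middle branch (sketch):

import sympy as sp

k = sp.symbols('k', positive=True)
V = 2*k*(k**2 - 1)/(k**2 + 1)**2
dV = sp.diff(V, k)

# The derivative matches the expression above ...
print(sp.simplify(dV - 2*(6*k**2 - k**4 - 1)/(k**2 + 1)**3))   # 0
# ... and is positive inside the relevant range, e.g. at k = 1.9:
print(dV.subs(k, sp.Rational(19, 10)) > 0)                     # True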

Q3.
(1) First, determine the particular solution. Assume x and y are constants; then

(1 - δ)x_e + 2y_e = 1,
2x_e + (1 - δ)y_e = 1,

which give x_e = y_e = 1/(3 - δ).

Second, find the general solution to the homogeneous equations. One only needs to find the eigenvalues and eigenvectors of

A = ( 1 - δ    2   )
    (   2    1 - δ ).

The two eigenvalues are -1 - δ and 3 - δ, with corresponding eigenvectors (1, -1)' and (1, 1)'.

Third, the general solution to the non-homogeneous equations is therefore given by

(x(t), y(t))' = K1 e^((-1-δ)t) (1, -1)' + K2 e^((3-δ)t) (1, 1)' + (1/(3 - δ)) (1, 1)',

where K1 and K2 are two arbitrary constants.


(2) For these solutions to be stable (i.e., for any given K1 and K2), we need -1 - δ < 0 and 3 - δ < 0, so δ > 3.

(3) Note that the method in (1) can't be used to obtain the particular solution when δ = 3, but one can check that for δ = 3 there is a particular solution x_p(t) = y_p(t) = -t.
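A small sketch checking the eigenvalues, eigenvectors and steady state used above, assuming the system is dx/dt = (1 - δ)x + 2y - 1 and dy/dt = 2x + (1 - δ)y - 1, as the steady-state equations in (1) indicate:

import sympy as sp

delta = sp.symbols('delta')
A = sp.Matrix([[1 - delta, 2],
               [2, 1 - delta]])

# Eigenvalues -1 - delta and 3 - delta, with eigenvectors (1, -1)' and (1, 1)'.
print(A.eigenvects())

# Steady state when delta != 3: solve A*x = (1, 1)'.
print(sp.simplify(A.LUsolve(sp.Matrix([1, 1]))))   # both components equal 1/(3 - delta)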

Q4.
Suppose that we throw a die. Let X record a "success" if the number of dots on the die is one and a "failure" otherwise. Let Y record a "success" if the number of dots on the die is odd and a "failure" otherwise. Suppose that we throw the die three times.
a. Write out the sample space, that is, the possible values of X and Y. (1 mark)
b. Make a table of the joint probability density function for X and Y. (2 marks)
c. Find the marginal distributions g(x) and h(y). (6 marks)
d. Find E(X), E(X²), Var(X), E(Y), E(Y²), and Var(Y). (12 marks)
e. Find E(XY) and cov(X,Y). Are X and Y independent? (6 marks)

a. Since X is the number of successes, as is Y, X and Y can each take on the values {0, 1, 2, 3}. However, there is a further restriction: whenever X records a success, Y records a success as well, so X ≤ Y. Thus the possible values for (X, Y) are {(0,0), (0,1), (0,2), (0,3), (1,1), (1,2), (1,3), (2,2), (2,3), (3,3)}.
b. The table for the joint pdf is most easily constructed from the tree given below. Note that I use the letter E to denote an even number of dots.
[Probability tree for the three throws: at each throw the branches are 1 with probability 1/6, 3 or 5 with probability 1/3, and E with probability 1/2.]
Then we find the probabilities from the tree above as:

P(0,0) = (1/2)(1/2)(1/2) = 1/8 = 27/216
P(0,1) = 3 × (1/3)(1/2)(1/2) = 1/4 = 54/216
P(0,2) = 3 × (1/3)(1/3)(1/2) = 1/6 = 36/216
P(0,3) = (1/3)(1/3)(1/3) = 1/27 = 8/216
P(1,1) = 3 × (1/6)(1/2)(1/2) = 1/8 = 27/216
P(1,2) = 6 × (1/6)(1/3)(1/2) = 1/6 = 36/216
P(1,3) = 3 × (1/6)(1/3)(1/3) = 1/18 = 12/216
P(2,2) = 3 × (1/6)(1/6)(1/2) = 1/24 = 9/216
P(2,3) = 3 × (1/6)(1/6)(1/3) = 1/36 = 6/216
P(3,3) = (1/6)(1/6)(1/6) = 1/216

Combining these into a table (with the marginals in the last row and column) yields:

f(x,y)      y = 0     y = 1     y = 2     y = 3     g(x)
x = 0      27/216    54/216    36/216     8/216   125/216
x = 1         0      27/216    36/216    12/216    75/216
x = 2         0         0       9/216     6/216    15/216
x = 3         0         0         0       1/216     1/216
h(y)       27/216    81/216    81/216    27/216       1
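These joint probabilities can be confirmed by brute-force enumeration of the 6³ = 216 equally likely outcomes; a small Python sketch:

from itertools import product
from collections import Counter

# Enumerate all 216 equally likely outcomes of three throws of a fair die.
counts = Counter()
for throws in product(range(1, 7), repeat=3):
    x = sum(d == 1 for d in throws)          # number of ones      -> X
    y = sum(d % 2 == 1 for d in throws)      # number of odd faces -> Y
    counts[(x, y)] += 1

print({xy: f"{n}/216" for xy, n in sorted(counts.items())})
# {(0, 0): '27/216', (0, 1): '54/216', ..., (3, 3): '1/216'}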

c. We can find the marginal distributions either from adding across rows and
columns as above, or we can apply the formula for the binomial distribution,
noting that X~Bin(3,1/6) and Y~Bin(3,1/2) as below:

f(x) = C(n, x) p^x (1 - p)^(n - x), where C(n, x) is the binomial coefficient.

P(X = 0) = C(3, 0)(1/6)^0(5/6)^3 = 125/216
P(X = 1) = C(3, 1)(1/6)^1(5/6)^2 = 75/216
P(X = 2) = C(3, 2)(1/6)^2(5/6)^1 = 15/216
P(X = 3) = C(3, 3)(1/6)^3(5/6)^0 = 1/216

P(Y = 0) = C(3, 0)(1/2)^0(1/2)^3 = 1/8
P(Y = 1) = C(3, 1)(1/2)^1(1/2)^2 = 3/8
P(Y = 2) = C(3, 2)(1/2)^2(1/2)^1 = 3/8
P(Y = 3) = C(3, 3)(1/2)^3(1/2)^0 = 1/8
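A quick sketch of the same binomial calculation with exact fractions:

from fractions import Fraction
from math import comb

def binom_pmf(n, x, p):
    # P(X = x) for X ~ Bin(n, p), kept as an exact fraction.
    return comb(n, x) * p**x * (1 - p)**(n - x)

for x in range(4):
    print(x, binom_pmf(3, x, Fraction(1, 6)))   # 125/216, 25/72 (= 75/216), 5/72 (= 15/216), 1/216
for y in range(4):
    print(y, binom_pmf(3, y, Fraction(1, 2)))   # 1/8, 3/8, 3/8, 1/8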

d. Here we just use the formulas for expectation and variance and the marginal
distributions from above.
E(X) = Σ_x x g(x) = 0(125/216) + 1(75/216) + 2(15/216) + 3(1/216) = 108/216 = 1/2

E(X²) = Σ_x x² g(x) = 0²(125/216) + 1²(75/216) + 2²(15/216) + 3²(1/216) = 144/216 = 2/3

Var(X) = E(X²) - [E(X)]² = 2/3 - 1/4 = 5/12

E(Y) = Σ_y y h(y) = 0(1/8) + 1(3/8) + 2(3/8) + 3(1/8) = 12/8 = 3/2

E(Y²) = Σ_y y² h(y) = 0²(1/8) + 1²(3/8) + 2²(3/8) + 3²(1/8) = 24/8 = 3

Var(Y) = E(Y²) - [E(Y)]² = 3 - 9/4 = 3/4

Note also from the formula for binomial distributions that the mean is np and the variance is np(1 - p), so E(X) = 3(1/6) = 1/2 and Var(X) = 3(1/6)(5/6) = 5/12, and E(Y) = 3(1/2) = 3/2 and Var(Y) = 3(1/2)(1/2) = 3/4.
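A short check of these moments from the marginal distributions (sketch):

from fractions import Fraction as F

g = {0: F(125, 216), 1: F(75, 216), 2: F(15, 216), 3: F(1, 216)}   # marginal of X
h = {0: F(1, 8), 1: F(3, 8), 2: F(3, 8), 3: F(1, 8)}               # marginal of Y

def moments(pmf):
    m1 = sum(v * p for v, p in pmf.items())          # mean
    m2 = sum(v**2 * p for v, p in pmf.items())       # second moment
    return m1, m2, m2 - m1**2                        # (mean, second moment, variance)

print(moments(g))   # E(X) = 1/2, E(X^2) = 2/3, Var(X) = 5/12
print(moments(h))   # E(Y) = 3/2, E(Y^2) = 3,   Var(Y) = 3/4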
e. To find E(XY) we need to use the joint distribution from above:

E(XY) = Σ_(x,y) xy f(x,y)
      = (0)(0)(27/216) + (0)(1)(54/216) + (0)(2)(36/216) + (0)(3)(8/216)
        + (1)(1)(27/216) + (1)(2)(36/216) + (1)(3)(12/216)
        + (2)(2)(9/216) + (2)(3)(6/216) + (3)(3)(1/216)
      = 216/216 = 1.

To find the covariance, we use the formula cov(X,Y) = E(XY) - E(X)E(Y) = 1 - (1/2)(3/2) = 1 - 3/4 = 1/4.

To show that X and Y are not independent we can use a variety of arguments. One simple one is to note that cov(X,Y) ≠ 0, so X and Y cannot be independent. Another is that f(x,y) ≠ g(x)h(y), which can be seen for several of the values in the joint distribution table.
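Finally, a sketch confirming E(XY), the covariance, and the failure of independence:

from fractions import Fraction as F

# Joint pmf of (X, Y) from the table above (denominator 216 throughout).
f = {(0, 0): 27, (0, 1): 54, (0, 2): 36, (0, 3): 8,
     (1, 1): 27, (1, 2): 36, (1, 3): 12,
     (2, 2): 9, (2, 3): 6, (3, 3): 1}
f = {xy: F(n, 216) for xy, n in f.items()}

EXY = sum(x * y * p for (x, y), p in f.items())
EX = sum(x * p for (x, y), p in f.items())
EY = sum(y * p for (x, y), p in f.items())
print(EXY, EXY - EX * EY)                # 1 and 1/4, so cov(X, Y) != 0

# Independence would require f(x, y) = g(x)h(y) for every cell, e.g. at (3, 3):
print(f[(3, 3)], F(1, 216) * F(1, 8))    # 1/216 vs 1/1728, not equal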
