
Shruti Sharma

Ganesh Oka

References:
Probability and Random Processes for Electrical Engineering, Leon-Garcia
Applied Stochastic Processes, Lefebvre, M.
Random Vectors

An n-dimensional random vector is a function X = (X_1, ..., X_n) that associates a vector of real numbers with each element ζ of the sample space S of a random experiment E.

X = (X_1, ..., X_n) is a vector of random variables; each component X_k is a random variable.

S_X is the set of all possible values of X.
Examples of Random Vectors

Discrete r.v.: A semiconductor chip is divided into M regions. For the random experiment of finding the number of defects and their locations, let N_i(ζ) denote the number of defects in the i-th region. Then N(ζ) = (N_1(ζ), ..., N_M(ζ)) is a discrete random vector.

Continuous r.v.: In a random experiment of selecting a student's name ζ, let
H(ζ) = height of student ζ, in inches,
W(ζ) = weight of student ζ, in pounds,
A(ζ) = age of student ζ, in years.
Then (H(ζ), W(ζ), A(ζ)) is a continuous random vector.

Product form events

For the n-dimensional r.v. X = (X_1, ..., X_n), a product-form event A can be expressed as follows:

A = {X_1 ∈ A_1} ∩ {X_2 ∈ A_2} ∩ ... ∩ {X_n ∈ A_n}, where A_k is a one-dimensional event involving only X_k.

This form is helpful even when the random variables are dependent.
Not all events can be expressed in product form, e.g.
A = {X + Y ≤ 10},  B = {min(X, Y) ≤ 5},  C = {X² + Y² ≤ 100}.
Examples
Example.
Let X be the input to a communication channel and let Y be the output. The input is +1 or −1 volt with equal probability. The output is the input plus a noise voltage that is uniformly distributed in the interval from −2 to +2 volts. Find the probability of a positive input but a non-positive output.

We want
P[X = +1, Y ≤ 0] = P[{X = +1} ∩ {Y ≤ 0}] = P[{Y ≤ 0} ∩ {X = +1}]
                 = P[Y ≤ 0 | X = +1] P[X = +1]
P[X = +1] = 1/2.

When X = +1, Y is uniformly distributed in the interval [−2+1, 2+1] = [−1, 3].
Now, P[Y ≤ y | Y ∈ [−1, 3]] = (y − (−1)) / (3 − (−1)).
Thus,
P[Y ≤ 0 | X = +1] = P[Y ≤ 0 | Y ∈ [−1, 3]] = 1/4.
P[X = +1, Y ≤ 0] = 1/2 · 1/4 = 1/8.
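
A quick Monte Carlo check of this result (a sketch added here, not part of the original slides; the trial count is arbitrary):

    import random

    def simulate(trials=1_000_000):
        """Estimate P[X = +1, Y <= 0] for the channel Y = X + N, N ~ Uniform(-2, 2)."""
        hits = 0
        for _ in range(trials):
            x = random.choice((+1, -1))          # input: +/-1 volt with equal probability
            y = x + random.uniform(-2.0, 2.0)    # output: input plus uniform noise
            if x == +1 and y <= 0:
                hits += 1
        return hits / trials

    print(simulate())  # should be close to 1/8 = 0.125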
Two Dimensional Random Vector

Random vector (r.v.) Z = (X, Y), where
X takes values x_j, j = 1, 2, ... and Y takes values y_k, k = 1, 2, ...,
or X and Y are continuous.

Joint distribution function:
F_X,Y(x, y) = P[{X ≤ x} ∩ {Y ≤ y}] = P[X ≤ x, Y ≤ y]

Marginal distribution functions:
F_X(x) = P[X ≤ x, Y < ∞] = F_X,Y(x, ∞)
F_Y(y) = P[X < ∞, Y ≤ y] = F_X,Y(∞, y)
And Σ_j p_X(x_j) = Σ_k p_Y(y_k) = 1 --- discrete case
or ∫ f_X(x) dx = ∫ f_Y(y) dy = 1 --- continuous case


Properties

F_X,Y(−∞, y) = F_X,Y(x, −∞) = 0
F_X,Y(∞, ∞) = 1 ----- normalization condition
F_X,Y(x_1, y_1) ≤ F_X,Y(x_2, y_2) if x_1 ≤ x_2 and y_1 ≤ y_2

P[a < X ≤ b, c < Y ≤ d] = F_X,Y(b, d) − F_X,Y(b, c) − F_X,Y(a, d) + F_X,Y(a, c)
a, b, c and d are constants.

Discrete Type Random Vectors

A two-dimensional discrete-type r.v. has a sample space S_Z that is finite or countably infinite.
S_Z = S_X × S_Y = {(x_j, y_k): j = 1, 2, ...; k = 1, 2, ...}

Joint probability mass function:
p_X,Y(x_j, y_k) = P[X = x_j, Y = y_k]

Marginal probability mass functions:
p_X(x_j) = Σ_{all y_k} p_X,Y(x_j, y_k)
and
p_Y(y_k) = Σ_{all x_j} p_X,Y(x_j, y_k)
Continuous Random Vector

A two-dimensional r.v. Z = (X, Y) is continuous if S_Z is an uncountably infinite subset of R².

Joint probability density function:
f_X,Y(x, y) = ∂²F_X,Y(x, y) / ∂x∂y

Marginal probability density function:
f_X(x) = ∫_{−∞}^{∞} f_X,Y(x, y) dy

Probability of the event that Z belongs to A:
P[Z ∈ A] = ∫∫_A f_X,Y(x, y) dx dy
Distribution Functions

For the discrete case:
F_X,Y(x, y) = Σ_{x_j ≤ x} Σ_{y_k ≤ y} p_X,Y(x_j, y_k)

For the continuous case:
F_X,Y(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f_X,Y(u, v) dv du
Example --- marginal pdf

Let the joint pdf be
f_X,Y(x, y) = (ln x) / x   if 1 < x < e, 0 < y < x
            = 0            otherwise

Marginal pdf of X:
f_X(x) = ∫_{−∞}^{∞} f_X,Y(x, y) dy = ∫_0^x (ln x / x) dy = ln x    ----- if 1 ≤ x ≤ e

Marginal pdf of Y:
f_Y(y) = ∫_1^e f_X,Y(x, y) dx = ∫_1^e (ln x / x) dx = 1/2          ----- if 0 < y < 1 (≤ x)
f_Y(y) = ∫_y^e f_X,Y(x, y) dx = (1 − (ln y)²) / 2                  ----- if 1 ≤ y (< x) < e
And f_Y(y) = 0 otherwise.
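
As a sanity check (a sketch added here, not in the original slides, assuming SciPy is available; the test points are arbitrary), numerical integration reproduces these marginals and confirms the joint pdf integrates to 1:

    import math
    from scipy.integrate import quad, dblquad

    f = lambda y, x: math.log(x) / x if (1 < x < math.e and 0 < y < x) else 0.0

    # joint pdf integrates to 1 over the region 1 < x < e, 0 < y < x
    total, _ = dblquad(f, 1, math.e, lambda x: 0, lambda x: x)
    print(total)                                   # ~1.0

    # marginal of X at a test point vs the closed form ln(x)
    x0 = 2.0
    fx, _ = quad(lambda y: f(y, x0), 0, x0)
    print(fx, math.log(x0))                        # both ~0.693

    # marginal of Y at test points vs 1/2 and (1 - (ln y)^2)/2
    for y0 in (0.5, 1.5):
        fy, _ = quad(lambda x: f(y0, x), max(1.0, y0), math.e)
        exact = 0.5 if y0 < 1 else (1 - math.log(y0) ** 2) / 2
        print(fy, exact)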
Independent Random Variables

If (X, Y) is a random vector, X and Y are independent random variables if:
p_X,Y(x_j, y_k) = p_X(x_j) p_Y(y_k)   for discrete X, Y
f_X,Y(x, y) = f_X(x) f_Y(y)           for continuous X, Y
F_X,Y(x, y) = P[X ≤ x, Y ≤ y] = F_X(x) F_Y(y)

If X and Y are independent, so are g(X) and h(Y).

Example on marginal pmf (discrete r.v.)

A random experiment consists of tossing 2 loaded dice and noting the pair of numbers (X, Y) facing up. The joint pmf p_X,Y(j, k) is given below (rows j, columns k):

 j\k    1      2      3      4      5      6
  1    2/42   1/42   1/42   1/42   1/42   1/42
  2    1/42   2/42   1/42   1/42   1/42   1/42
  3    1/42   1/42   2/42   1/42   1/42   1/42
  4    1/42   1/42   1/42   2/42   1/42   1/42
  5    1/42   1/42   1/42   1/42   2/42   1/42
  6    1/42   1/42   1/42   1/42   1/42   2/42

p_X(j) = Σ_{k=1}^{6} p_X,Y(j, k) = 7/42 = 1/6 for all j, and Σ_{j=1}^{6} p_X(j) = 1.
p_Y(k) = Σ_{j=1}^{6} p_X,Y(j, k) = 7/42 = 1/6 for all k, and Σ_{k=1}^{6} p_Y(k) = 1.
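
A small sketch (not from the slides) that builds this pmf as a matrix and computes the marginals by summing rows and columns:

    from fractions import Fraction

    # joint pmf: p[j][k] = 2/42 on the diagonal, 1/42 off the diagonal (j, k = 1..6)
    p = [[Fraction(2 if j == k else 1, 42) for k in range(6)] for j in range(6)]

    p_X = [sum(row) for row in p]               # marginal over k
    p_Y = [sum(col) for col in zip(*p)]         # marginal over j

    print(p_X)                  # [1/6, 1/6, 1/6, 1/6, 1/6, 1/6]
    print(p_Y)                  # [1/6, 1/6, 1/6, 1/6, 1/6, 1/6]
    print(sum(p_X), sum(p_Y))   # 1, 1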
Example of dependent r.v. (discrete)

In the earlier example,
p_X(j) · p_Y(k) = 1/36 for all pairs (j, k).

But,
p_X,Y(j, k) = 2/42 for j = k and
p_X,Y(j, k) = 1/42 for j ≠ k.

So p_X,Y(j, k) ≠ p_X(j) · p_Y(k) for any pair (j, k).
X and Y are NOT independent.
Example on marginal pdf (continuous r.v.)

Find the normalization constant c and the marginal pdfs for the joint pdf given below.

f_X,Y(x, y) = c e^{−x} e^{−y}   for 0 ≤ y ≤ x < ∞
            = 0                 elsewhere

Normalization condition:
1 = ∫_0^∞ ∫_0^x c e^{−x} e^{−y} dy dx = ∫_0^∞ c e^{−x} (1 − e^{−x}) dx = c/2

So c = 2.
Example continued

The marginal pdfs are given as
f_X(x) = ∫_0^x 2 e^{−x} e^{−y} dy = 2 e^{−x} (1 − e^{−x}),   0 ≤ x < ∞
f_Y(y) = ∫_y^∞ 2 e^{−x} e^{−y} dx = 2 e^{−2y},               0 ≤ y < ∞

It can be verified that
∫_0^∞ f_X(x) dx = ∫_0^∞ f_Y(y) dy = 1.
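
A symbolic check of these marginals (a sketch, not in the original slides, assuming SymPy is available):

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    f = 2 * sp.exp(-x) * sp.exp(-y)           # joint pdf on 0 <= y <= x < oo

    f_X = sp.integrate(f, (y, 0, x))          # -> 2*exp(-x)*(1 - exp(-x))
    f_Y = sp.integrate(f, (x, y, sp.oo))      # -> 2*exp(-2*y)

    print(sp.simplify(f_X), sp.simplify(f_Y))
    print(sp.integrate(f_X, (x, 0, sp.oo)), sp.integrate(f_Y, (y, 0, sp.oo)))  # 1, 1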
Example of dependent r.v. (continuous)

In the previous example,
f_X(x) · f_Y(y) = 4 e^{−x} e^{−2y} (1 − e^{−x}) ≠ 2 e^{−x} e^{−y} = f_X,Y(x, y)

Thus the r.v.s are NOT independent.
Conditional Distribution and Density Functions for Discrete r.v.

With discrete X, Y, given that Y = y_k:

Distribution function:
F_X|Y(x | y_k) = P[X ≤ x, Y = y_k] / P[Y = y_k]

Probability mass function:
p_X|Y(x_j | y_k) = p_X,Y(x_j, y_k) / p_Y(y_k) = P[X = x_j, Y = y_k] / P[Y = y_k]
Conditional Distribution and Density Functions for Continuous r.v.

With continuous X and Y, given Y = y (with f_Y(y) > 0):

Distribution function:
F_X|Y(x | y) = ∫_{−∞}^{x} f_X,Y(u, y) du / f_Y(y)

Density function:
f_X|Y(x | y) = f_X,Y(x, y) / f_Y(y)
Example

Use of the marginal pdf and joint pdf to get the conditional pdf.

Let X and Y be random variables with the following joint pdf:
f_X,Y(x, y) = 2 e^{−x} e^{−y}   (0 ≤ y ≤ x < ∞, and 0 otherwise).
In an earlier example the marginal pdfs were found to be
f_X(x) = 2 e^{−x} (1 − e^{−x}),  0 ≤ x < ∞, and
f_Y(y) = 2 e^{−2y},              0 ≤ y < ∞.
Find the conditional pdfs.

Solution:
f_X|Y(x | y) = f_X,Y(x, y) / f_Y(y) = 2 e^{−x} e^{−y} / (2 e^{−2y}) = e^{−(x − y)}     ----- for x > y
f_Y|X(y | x) = f_X,Y(x, y) / f_X(x) = 2 e^{−x} e^{−y} / (2 e^{−x} (1 − e^{−x})) = e^{−y} / (1 − e^{−x})     ----- for 0 < y < x
Conditional Distributions & Independence

X and Y are independent if and only if the conditional distribution function, the conditional probability mass function, or the conditional density function of X, given Y = y, is identical to the corresponding marginal function.
Conditional Expectation

Given that Y = y, the expectation of X is:

Discrete case:
E[X | Y = y] = Σ_j x_j p_X|Y(x_j | y)

Continuous case:
E[X | Y = y] = ∫_{−∞}^{∞} x f_X|Y(x | y) dx

The conditional expectation can be viewed as defining a function of y: g(y) = E[X | Y = y].
Hence, g(Y) = E[X | Y] is a random variable.
Properties of conditional expectation.

E[ E[X|Y] ] = E[X]

For the case of continuous r.v.s, let g(Y) = E[X|Y]. Then
E[g(Y)] = ∫ g(y) f_Y(y) dy = ∫ ( ∫ x f_X|Y(x|y) dx ) f_Y(y) dy
        = ∫ x ( ∫ f_X,Y(x, y) dy ) dx = ∫ x f_X(x) dx = E[X]

This is also true for any function of X, i.e.
E[ E[h(X)|Y] ] = E[ h(X) ]

V[X] = E[X²] − (E[X])² = E[ E[X²|Y] ] − (E[ E[X|Y] ])²
Example.

The total number of defects X on a chip is a Poisson random variable with mean α. Suppose that each defect has a probability p of falling in a specific region R, and the location of each defect is independent of the location of any other defect. Find the pmf of the number of defects Y that fall in the region R.

This is a case of discrete r.v.s. If k is the total number of defects on the chip and j of them fall in the region R, the pmf of Y is given by

P[Y = j] = Σ_{k=0}^{∞} P[Y = j | X = k] P[X = k]     ---- eq. (I)
Continued
Example continued

Now,
P[Y = j | X = k] = probability that j defects fall in region R, given that there were k defects in total on the chip.
This is a case of the binomial distribution with parameters k and p:

P[Y = j | X = k] = C(k, j) p^j (1 − p)^{k − j},   0 ≤ j ≤ k
                 = 0,                             j > k

Substituting in eq. (I) and noting that X has a Poisson distribution,

P[Y = j] = Σ_{k=j}^{∞} P[Y = j | X = k] P[X = k]
         = Σ_{k=j}^{∞} C(k, j) p^j (1 − p)^{k − j} (α^k / k!) e^{−α}
         = ((αp)^j / j!) e^{−αp}

Thus, the number of defects falling in region R has a Poisson distribution with parameter αp.
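
A short simulation sketch (not in the slides; α = 5 and p = 0.3 are arbitrary, NumPy assumed) showing that the thinned count behaves like a Poisson(αp) random variable:

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, p, trials = 5.0, 0.3, 200_000

    X = rng.poisson(alpha, size=trials)          # total defects per chip
    Y = rng.binomial(X, p)                       # defects that land in region R

    print(Y.mean(), Y.var())                     # both ~ alpha * p = 1.5 (Poisson: mean = variance)
    print(np.mean(Y == 0), np.exp(-alpha * p))   # P[Y = 0] ~ e^{-alpha p}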
Example. (Conditional expectation)

In the last example, the number of defects Y falling in a specific region R was found to have a Poisson distribution with parameter αp. Hence, the mean of Y is αp.

We can get the same result by using conditional expectation:
E[Y] = E[ E[Y|X] ] = Σ_{k=0}^{∞} E[Y | X = k] P[X = k]
     = Σ_k k p P[X = k] = p Σ_k k P[X = k] = p E[X] = pα
Conditional Variance

Definition:
V[X|Y] = E[ (X − E[X|Y])² | Y ]

Another form:
V[X|Y] = E[X²|Y] − (E[X|Y])²

Using the above form,
E[ V[X|Y] ] = E[ E[X²|Y] ] − E[ (E[X|Y])² ]        ---- (i)
And from the definition of variance,
V[ E[X|Y] ] = E[ (E[X|Y])² ] − (E[ E[X|Y] ])²      ---- (ii)

Adding (i) and (ii) we get a useful result:
E[ V[X|Y] ] + V[ E[X|Y] ] = E[ E[X²|Y] ] − (E[ E[X|Y] ])² = V[X]
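
A numerical sketch (not from the slides) of this law of total variance, reusing the defect example above, where E[Y|X] = pX and V[Y|X] = Xp(1 − p); the parameter values are arbitrary:

    import numpy as np

    rng = np.random.default_rng(1)
    alpha, p, trials = 5.0, 0.3, 200_000

    X = rng.poisson(alpha, size=trials)
    Y = rng.binomial(X, p)

    # E[V[Y|X]] + V[E[Y|X]]  with  V[Y|X] = X p (1-p)  and  E[Y|X] = p X
    total_var = np.mean(X * p * (1 - p)) + np.var(p * X)
    print(total_var, Y.var(), alpha * p)     # all three ~ 1.5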

Functions of random variables.

Let Z = X/Y. Find the pdf of Z if X and Y are independent and both exponentially distributed with mean one.

We can use conditional probability.
F_Z(z | y) = P[Z ≤ z | y] = P[X/y ≤ z | y]
           = P[X ≤ yz | y]       ----- if y > 0
           = P[X ≥ yz | y]       ----- if y < 0
           = F_X(yz | y)         ----- if y > 0
           = 1 − F_X(yz | y)     ----- if y < 0

Using the chain rule, dF/dz = (dF/du) · (du/dz) with u = yz,
f_Z(z | y) = y f_X(yz | y)       ----- if y > 0
           = −y f_X(yz | y)      ----- if y < 0
           = |y| f_X(yz | y)
Continued
Example continued

The pdf of Z is then given by
f_Z(z) = ∫_{−∞}^{∞} |y| f_X(yz | y) f_Y(y) dy = ∫_{−∞}^{∞} |y| f_X,Y(yz, y) dy

Using the fact that X and Y are independent and exponentially distributed with mean one,
f_Z(z) = ∫_0^∞ y e^{−yz} e^{−y} dy = 1 / (1 + z)²,    z > 0
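
A Monte Carlo sketch (not in the slides; NumPy assumed, test points arbitrary) comparing the empirical cdf of Z = X/Y with the closed form F_Z(z) = 1 − 1/(1 + z) implied by the pdf above:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500_000
    X = rng.exponential(1.0, n)
    Y = rng.exponential(1.0, n)
    Z = X / Y

    for z in (0.5, 1.0, 3.0):
        empirical = np.mean(Z <= z)
        exact = 1 - 1 / (1 + z)          # cdf obtained by integrating f_Z(z) = 1/(1+z)^2
        print(z, empirical, exact)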
Another example.

A system with standby redundancy has a single key component in operation and a duplicate of that component in standby mode. When the first component fails, the second is put into operation. Find the pdf of the lifetime of the standby system if the components have independent exponentially distributed lifetimes with the same mean.

Let X and Y be the lifetimes of the two components. Then the system lifetime T is given by
T = X + Y
The cdf of T is found by integrating the joint pdf of X and Y over the region of the plane corresponding to the event {T ≤ t}:
F_T(t) = ∫_0^t ∫_0^{t − x} f_X,Y(x, y) dy dx
Continued
Example continued

The pdf of T is obtained by differentiating the cdf. Further, X and Y are independent. This gives
f_T(t) = d/dt F_T(t) = ∫ f_X,Y(x, t − x) dx = ∫ f_X(x) f_Y(t − x) dx

The two pdfs in the integrand (exponentially distributed) are given as
f_X(x) = e^{−x},  x ≥ 0;   0,  x < 0
f_Y(t − x) = e^{−(t − x)},  x ≤ t;   0,  x > t

These substitutions give
f_T(t) = ∫_0^t e^{−x} e^{−(t − x)} dx = t e^{−t},   t ≥ 0
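
A quick sketch (not in the slides; NumPy assumed): simulating T = X + Y and comparing P[T ≤ t] against the cdf implied by f_T(t) = t e^{−t}, which is 1 − e^{−t}(1 + t):

    import math
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500_000
    T = rng.exponential(1.0, n) + rng.exponential(1.0, n)   # standby system lifetime

    for t in (0.5, 1.0, 2.0):
        empirical = np.mean(T <= t)
        exact = 1 - math.exp(-t) * (1 + t)   # integral of s*exp(-s) from 0 to t
        print(t, empirical, exact)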
Expected value of functions of r.v.s

E[X_1 + X_2 + ... + X_n] = E[X_1] + E[X_2] + ... + E[X_n]

Let X_1, X_2, ..., X_n represent repeated measurements of the same random quantity. Then these variables can be considered iid (independent, identically distributed). This means, for i = 1, ..., n:
all X_i are independent of each other,
E[X_i] = E[X], and
V[X_i] = V[X].

E[X_1 + X_2 + ... + X_n] = nE[X], for iid r.v.s.
V[X_1 + X_2 + ... + X_n] = nV[X], for iid r.v.s.
Joint moment and Covariance

E[X^j Y^k] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^j y^k f_X,Y(x, y) dx dy    --- X, Y jointly continuous
E[X^j Y^k] = Σ_i Σ_n x_i^j y_n^k p_X,Y(x_i, y_n)                 --- X and Y discrete

Above is the definition of the jk-th joint moment of X and Y.
If j = k = 1, E[XY] is known as the correlation.
If E[XY] = 0, X and Y are said to be orthogonal.

The jk-th central moment of X and Y is
E[(X − E[X])^j (Y − E[Y])^k]
In the definition of the jk-th central moment, j = 2 and k = 0 gives V[X], while j = 0 and k = 2 gives V[Y].
In the definition of the jk-th central moment, j = 1 and k = 1 gives the covariance of X and Y.
Properties of covariance and corr. coeff.

Covariance of independent variables is 0:
COV(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y] = 0.
If either of the random variables has mean 0, then COV(X, Y) = E[XY].
Covariance generalizes variance, but it can be negative.
E.g. if Y = −X, then COV(X, Y) = E[XY] − E[X]E[Y] = E[−X²] + (E[X])² = −V[X] ≤ 0.

The correlation coefficient of X and Y is defined as
ρ_X,Y = COV(X, Y) / (σ_X σ_Y),
where σ_X and σ_Y are the standard deviations of X and Y respectively.
We have −1 ≤ ρ_X,Y ≤ 1.
The correlation coefficient of independent variables is 0, but the converse is not true.
Example.

Let θ be uniformly distributed in the interval (0, 2π). Define X and Y as
X = cos θ and Y = sin θ.
Show that the correlation coefficient between X and Y is 0.

We have
E[X] = (1/2π) ∫_0^{2π} cos θ dθ = 0
Similarly it can be proved that E[Y] = 0. Now,
E[XY] = E[sin θ cos θ] = (1/2π) ∫_0^{2π} sin θ cos θ dθ = 0

E[XY] − E[X]E[Y] = 0, so COV(X, Y) = 0 and ρ_X,Y = 0.

But X and Y are not independent, since X² + Y² = 1.
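
A short sketch (not in the slides; NumPy assumed) illustrating zero correlation alongside obvious dependence:

    import numpy as np

    rng = np.random.default_rng(4)
    theta = rng.uniform(0, 2 * np.pi, 200_000)
    X, Y = np.cos(theta), np.sin(theta)

    print(np.corrcoef(X, Y)[0, 1])        # ~0: uncorrelated
    print(np.ptp(X**2 + Y**2))            # ~0: X^2 + Y^2 is always 1, so X and Y are dependent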
Sum of random number of r.v.s

S_N = Σ_{k=1}^{N} X_k

Above is the sum S_N of iid r.v.s X_i (i = 1, 2, ...), where N is chosen randomly and independently of each X_i. For each i, E[X_i] = E[X] and V[X_i] = V[X].

Then, E[S_N] = E[N]E[X].

From the properties of conditional expectation,
E[S_N] = E[ E[S_N | N] ]        ------- slide 23
       = E[ N E[X] ]            ------- the X_i are iid
       = E[N]E[X]               ------- N is independent of each X_i

This result is valid even if the X_i are not independent; they only need to have the same mean.
Continued
Continued

V[S_N] = E[N]V[X] + V[N](E[X])²

V[S_N] = E[ V[S_N | N] ] + V[ E[S_N | N] ]     ----- slide 27

Here, V[ E[S_N | N] ] = V[ N E[X] ]
                      = E[ N² (E[X])² ] − (E[ N E[X] ])²
                      = (E[X])² ( E[N²] − (E[N])² )
                      = V[N] (E[X])²
And E[ V[S_N | N] ] = E[ N V[X] ]              ----- slide 32
                    = E[N] V[X]
Substituting in the first step above gives the expected result.
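
A simulation sketch (not in the slides) checking E[S_N] = E[N]E[X] and V[S_N] = E[N]V[X] + V[N](E[X])², with N ~ Poisson(4) and X exponential with mean 2 chosen arbitrarily (NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(5)
    trials = 100_000
    N = rng.poisson(4, size=trials)                      # random number of summands
    S = np.array([rng.exponential(2.0, n).sum() for n in N])

    print(S.mean(), 4 * 2.0)                             # E[S_N] = E[N]E[X] = 8
    print(S.var(), 4 * 4.0 + 4 * 2.0**2)                 # V[S_N] = E[N]V[X] + V[N](E[X])^2 = 32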
Mean Square Error (MSE)

Used when a r.v. X is estimated using another r.v. Y.
MSE = E[(X − g(Y))²]

When g(Y) = a (a constant), a = E[X] gives the minimum error.
When g(Y) = aY + b (a linear estimator),
a = (E[XY] − E[X]E[Y]) / V(Y) = COV(X, Y) / V(Y) and b = E[X] − a E[Y] give the minimum error.

Another way of expressing the linear estimator is
g(Y) = E[X] + ρ_X,Y σ_X (Y − E[Y]) / σ_Y

When g(Y) may be non-linear, g(Y) = E[X|Y] gives the minimum error.
The best estimator is g(Y) = E[X|Y]. If X and Y are jointly Gaussian, the best estimator is equal to the linear estimator.
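
A brief sketch (not from the slides; the synthetic model X = 2Y + noise is an arbitrary illustration, NumPy assumed) computing the linear MMSE coefficients from samples and comparing its MSE with that of the constant estimator a = E[X]:

    import numpy as np

    rng = np.random.default_rng(6)
    n = 100_000
    Y = rng.normal(0.0, 1.0, n)
    X = 2.0 * Y + rng.normal(0.0, 0.5, n)      # X depends linearly on Y plus noise

    a = np.cov(X, Y)[0, 1] / Y.var()           # a = COV(X, Y) / V(Y)
    b = X.mean() - a * Y.mean()                # b = E[X] - a E[Y]

    mse_linear = np.mean((X - (a * Y + b)) ** 2)
    mse_const = np.mean((X - X.mean()) ** 2)
    print(a, b)                                # ~2.0, ~0.0
    print(mse_linear, mse_const)               # ~0.25 (noise variance) vs ~4.25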
Example

The amount of yearly rainfall in city 1 and city 2 is modeled by a pair of jointly Gaussian r.v.s X and Y with the joint pdf given below. Find the most likely value of X given that we know Y = y (i.e. E[X | Y = y]).

f_X,Y(x, y) = 1 / (2π σ_1 σ_2 √(1 − ρ_X,Y²)) ·
              exp{ −1 / (2(1 − ρ_X,Y²)) [ ((x − m_1)/σ_1)² − 2ρ_X,Y ((x − m_1)/σ_1)((y − m_2)/σ_2) + ((y − m_2)/σ_2)² ] }

Solution: The marginal pdf of Y is found by integrating f_X,Y over the entire range of X. It is given by

f_Y(y) = e^{ −(y − m_2)² / (2σ_2²) } / √(2π σ_2²)

The marginal pdf of Y shows that it is a Gaussian random variable with mean m_2 and variance σ_2².
Continued
Figures for the example: the joint Gaussian pdf, and the conditional pdf of X for a fixed value of y.
Solution continued

Now, f_X(x | y) = f_X,Y(x, y) / f_Y(y). This can be shown to be

f_X(x | y) = 1 / (√(2π(1 − ρ_X,Y²)) σ_1) · exp{ −( x − m_1 − ρ_X,Y (σ_1/σ_2)(y − m_2) )² / (2σ_1²(1 − ρ_X,Y²)) }

Hence, the conditional pdf of X is also Gaussian. It has a conditional mean and conditional variance given by

E[X | Y = y] = m_1 + ρ_X,Y (σ_1/σ_2)(y − m_2)     ------ (the answer to the question)
And
V[X | Y = y] = σ_1² (1 − ρ_X,Y²)

The conditional expectation found above has an additional interpretation, which is given on the next slide.
Interpretation.

Note that the conditional expectation found in the previous solution, namely
E[X | Y = y] = m_1 + ρ_X,Y (σ_1/σ_2)(y − m_2),
is a function of y.
Replacing y by Y we generate a random variable, namely E[X | Y]. Also replacing m_1 by E[X], m_2 by E[Y], σ_1 by σ_X, and σ_2 by σ_Y, we get the following result:

E[X | Y] = E[X] + ρ_X,Y (σ_X/σ_Y)(Y − E[Y])

We have thus shown with this example that the best estimator (LHS) is equal to the linear estimator (RHS) for jointly Gaussian r.v.s (slide 38).
Sample mean

Let X be a random variable for which the mean, E[X] = μ, is unknown. Let X_1, ..., X_n denote n independent repeated measurements of X. Then X_1, ..., X_n are iid.
The sample mean, defined as follows, is used to estimate E[X]:

M_n = (1/n) Σ_{j=1}^{n} X_j

M_n is itself a random variable and E[M_n] = μ, since the X_i are iid.
If S_n = X_1 + X_2 + ... + X_n, then M_n = S_n / n.
V[M_n] = (1/n²) V[S_n] = V[X]/n
V[M_n] → 0 as n → ∞
M_n becomes a good estimator as n → ∞
Continued
Sample mean continued

Using Chebyshev's inequality,
P[ |M_n − E[M_n]| ≥ ε ] ≤ V[M_n] / ε²

Substituting for E[M_n] and V[M_n] and taking the complement probability we get

P[ |M_n − μ| < ε ] ≥ 1 − σ² / (nε²)     ----- (A)

This means that for any choice of error ε and probability 1 − δ, we can select the number of samples n so that M_n is within ε of the true mean with probability 1 − δ or greater.
The quantity on the RHS of (A) gives a lower bound on the probability.
Example

A voltage of constant but unknown value is to be measured. Each measurement X_j is the sum of the desired voltage v and a noise voltage N_j of 0 mean and standard deviation 1 microvolt. How many measurements are required so that the probability that M_n is within 1 microvolt of the true mean is at least 0.99?

We have X_j = v + N_j.
With the assumption that the X_j are iid, we have
E[X_j] = v and V[X_j] = 1.
We require ε = 1.
Substituting in (A) and replacing the inequality with equality for the lower bound on probability, we get
0.99 = 1 − V[X_j] / (nε²)
Solving this we get n = 100.
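
A small sketch (not in the slides) solving the bound for n and then verifying by simulation that with n = 100 the sample mean lands within 1 microvolt of v far more than 99% of the time; Gaussian noise is an extra assumption here, since Chebyshev needs only the variance (NumPy assumed):

    import math
    import numpy as np

    sigma2, eps, delta = 1.0, 1.0, 0.01
    n = math.ceil(sigma2 / (delta * eps**2))        # from 1 - sigma^2/(n eps^2) = 0.99
    print(n)                                        # 100

    rng = np.random.default_rng(7)
    v, trials = 5.0, 50_000
    M = v + rng.normal(0.0, 1.0, size=(trials, n)).mean(axis=1)
    print(np.mean(np.abs(M - v) < eps))             # well above 0.99 (Chebyshev is a loose bound)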
Weak law of large numbers.

Let X_1, X_2, ... be a sequence of iid r.v.s with finite mean E[X] = μ. Then for ε > 0,

lim_{n→∞} P[ |M_n − μ| < ε ] = 1

The weak law of large numbers states that for a large enough fixed value of n, the sample mean using n samples will be close to the true mean with high probability.
Strong law of large numbers.

Let X_1, X_2, ... be a sequence of iid r.v.s with finite mean and finite variance. Then

P[ lim_{n→∞} M_n = μ ] = 1

The strong law of large numbers states that with probability 1, every sequence of sample mean calculations will eventually approach and stay close to E[X] = μ.
The strong law as stated here requires the variance to be finite, but the weak law of large numbers does not.
Central Limit Theorem.

Let S_n be the sum of n iid r.v.s with finite mean E[X] = μ and finite variance σ². Let Z_n be the zero-mean, unit-variance r.v. defined by
Z_n = (S_n − nμ) / (σ√n). Then

lim_{n→∞} P[Z_n ≤ z] = ∫_{−∞}^{z} (1/√(2π)) e^{−x²/2} dx

The summands X_j only need to have finite mean and variance; they can have any distribution.
The resulting cdf of Z_n approaches the cdf of a zero-mean, unit-variance Gaussian r.v.
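
A short demonstration sketch (not in the slides; n = 50 uniform summands chosen arbitrarily, NumPy and SciPy assumed):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(8)
    n, trials = 50, 100_000
    mu, sigma = 0.5, np.sqrt(1 / 12)                 # mean and std of Uniform(0, 1)

    S = rng.uniform(0, 1, size=(trials, n)).sum(axis=1)
    Z = (S - n * mu) / (sigma * np.sqrt(n))

    for z in (-1.0, 0.0, 1.5):
        print(z, np.mean(Z <= z), norm.cdf(z))       # empirical cdf vs standard normal cdf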
Example

Suppose that orders at a restaurant are iid r.v.s with mean μ = $8 and standard deviation σ = $2. After how many orders can we be 90% sure that the total spent by all customers is more than $1000?

Let X_k denote the expenditure of the k-th customer. Then the total spent by n customers is
S_n = X_1 + X_2 + ... + X_n.
We have
E[S_n] = 8n and V[S_n] = 4n.

The problem is to find the minimum value of n for which
P[S_n > 1000] = 0.90

With Z_n as defined on the previous slide,
P[S_n > 1000] = P[Z_n > (1000 − 8n)/(2√n)] = 0.90
Continued
Solution continued

Since Z_n is (approximately) a Gaussian r.v. with mean 0 and variance 1, its pdf is given by
f_Zn(z) = (1/√(2π)) e^{−z²/2}

The given probability is then expressed as the following integral:
0.90 = P[Z_n > z] = ∫_z^∞ (1/√(2π)) e^{−z²/2} dz,   where z = (1000 − 8n)/(2√n)

The value z = −1.2815 is found from Table 3.4 in Leon-Garcia, and the minimum value of n is found by solving the quadratic equation in √n, namely
8n − 1.2815(2)√n − 1000 = 0.
The positive root of this quadratic equation gives n = 128.6.

Thus after a minimum of 129 orders we can be 90% sure that the total spent by customers is more than $1000.
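
A sketch (not in the slides; SciPy assumed) reproducing this computation with the normal quantile in place of a table lookup:

    import math
    from scipy.stats import norm

    mu, sigma, target, prob = 8.0, 2.0, 1000.0, 0.90

    z = norm.ppf(1 - prob)                      # ~ -1.2816
    # solve 8*n + z*2*sqrt(n) - 1000 = 0 for u = sqrt(n)
    a, b, c = mu, z * sigma, -target
    root = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    n = root ** 2
    print(z, n, math.ceil(n))                   # -1.2816, ~128.6, 129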
Questions???
