
RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS

September 27, 2018

Outline
1 Random experiments and random variables: basic definitions
  Random experiment
  Random variables
2 Discrete Random Variables
  Probability distribution of a discrete r.v.
  The Binomial Distribution
3 Continuous Random Variables
  Probability distribution of a continuous r.v.
  The Gaussian or Normal Distribution
4 Function of a random variable
  Relevant transformations of a normally distributed r.v.
5 Sets of random variables

Random experiment and sample space

Definition
RANDOM EXPERIMENT (E):
a phenomenon whose outcome cannot be predicted with certainty.
SAMPLE SPACE of E (Ω):
the set of all possible outcomes (ω) of the random experiment.

Random variables
Definition
A RANDOM VARIABLE (r.v.) X is a quantity measured on the
outcome of a random experiment.
More formally, a r.v. is a function from the sample space to the real
numbers: X: Ω → ℝ.
x denotes a REALIZATION of X, that is, the observed value of X
corresponding to a specific outcome ω of the experiment.
Formally, x = X(ω).

Definition
A RANDOM VARIABLE (r.v.) X is said to be
DISCRETE if it has (at most) a countable number of possible values.
CONTINUOUS if it can take any real value in a (set of) possibly
infinite interval(s).

DISCRETE RANDOM VARIABLES

Example
E: flip a coin until you observe HEAD (H) for the first time
Ω = {H, TH, TTH, TTTH, . . .}
r.v.: X = # of TAILs before the first HEAD
realizations of X: 0, 1, 2, 3, . . . ; more specifically

0 = X(H) (HEAD on the first flip, no TAIL before it!)
1 = X(TH)
2 = X(TTH)
3 = X(TTTH)
...

The probability mass function

Definition
The probability distribution of a discrete random variable X is described by
its probability (mass) function (pmf), p(x), defined as

p(x) := P(X = x) for every real x

with the following properties[1]:

p(x) ≥ 0
Σ_x p(x) = 1

N.B. if p(x) = 0, then x is a value that cannot be observed.

[1] When dealing with two or more r.v.'s at the same time, it might be useful to include
a subscript referring to the specific random variable: e.g. pX(x) and pY(x), because
P(X = 2) = pX(2) is different from P(Y = 2) = pY(2).
The cumulative distribution function

Alternatively but equivalently:


Definition
The probability distribution of X can be described by its cumulative
distribution function (cdf), F(x), defined as

F(x) := P(X ≤ x) = Σ_{u ≤ x} p(u)

with the following properties:

0 ≤ F(x) ≤ 1 for every x
F(x) is non-decreasing
F(x) is a step function

Example (continued)
Suppose that the coin has probability of HEAD = p (so the probability of TAIL is
1 − p); then the probability mass function is

p(x) = P(X = x) = p(1 − p)^x    x = 0, 1, 2, 3, . . .

Specifically

p(0) = p    p(1) = p(1 − p)    p(2) = p(1 − p)²    . . .
Graphically represented in the following plot (for p = 0.4):

[Plot: pmf p(x) against x = 0, . . . , 15 (# of TAILs), for p = 0.4]
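The pmf above is easy to check numerically. The sketch below is ours, not part of the slides (the helper name `geom_pmf` is an assumption); it tabulates p(1 − p)^x with p = 0.4, as in the plot:

```python
# pmf of X = number of TAILs before the first HEAD, with P(HEAD) = p
def geom_pmf(x, p):
    return p * (1 - p) ** x

p = 0.4
assert geom_pmf(0, p) == p            # HEAD on the first flip: no TAILs
assert geom_pmf(1, p) == p * (1 - p)

# the probabilities over x = 0, 1, 2, ... sum to 1 (the truncated tail is negligible)
total = sum(geom_pmf(x, p) for x in range(200))
assert abs(total - 1.0) < 1e-9
```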
Example (Uniform discrete distribution)
If X has the following pmf:
p(x) = 1/n  if x = 1, 2, . . . , n;  p(x) = 0 otherwise

then X has a discrete uniform distribution.
With n = 10, graphically we have

[Plot: pmf p(x) = 1/10 at each x = 1, . . . , 10]
Example (Uniform discrete distribution: continued)
The cdf of the uniform discrete distribution (n = 10) is:
F(x) = 0       if x < 1
F(x) = i/10    if i ≤ x < i + 1, for i = 1, 2, . . . , 9
F(x) = 1       if x ≥ 10

Graphically we have

[Plot: step-function cdf F(x), rising from 0 to 1 in jumps of 1/10 at x = 1, . . . , 10]
Expectation and variance
Two crucial summary measures of a random variable X are
the EXPECTATION:
E[X] = μ = Σ_x x · p(x)

the VARIANCE:

Var[X] = σ² = E[(X − μ)²] = E[X²] − E[X]²
       = Σ_x (x − μ)² · p(x) = Σ_x x² · p(x) − μ²

Example
Assume that X has a discrete uniform distribution on {1, 2, . . . , n}, then

E[X] = Σ_{x=1}^{n} x · (1/n) = (1/n) Σ_{x=1}^{n} x = (1/n) · n(n + 1)/2 = (n + 1)/2
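A quick Python sketch (ours, not from the slides) confirms the closed form E[X] = (n + 1)/2 directly from the pmf:

```python
# discrete uniform on {1, ..., n}: compare E[X] from the pmf with (n + 1)/2
n = 10
pmf = {x: 1 / n for x in range(1, n + 1)}

assert abs(sum(pmf.values()) - 1.0) < 1e-9        # valid pmf
mean = sum(x * px for x, px in pmf.items())
assert abs(mean - (n + 1) / 2) < 1e-9             # E[X] = 5.5 for n = 10
```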
The Binomial Distribution
We now present a very important discrete distribution frequently used in
applications:
Definition
A r.v. X has a Binomial distribution with parameters n and p (denoted
as X ∼ Binomial(n, p)) if its probability function is provided by

pX(x) = P(X = x) = C(n, x) p^x (1 − p)^(n−x)   when x = 0, 1, 2, . . . , n
pX(x) = 0                                      otherwise

where n is a positive integer (n = 1, 2, . . .) and 0 ≤ p ≤ 1, and

C(n, x) = n! / (x!(n − x)!)

is the binomial coefficient ("n choose x"), and x! = x(x − 1)(x − 2) · · · 2 · 1 is the
factorial of x.
Properties of the Binomial distribution

Result
If X ∼ Binomial(n, p), then
Expectation (Mean):

μ = E[X] = np

Variance:

σ² = Var[X] = np(1 − p)

Cumulative distribution function:

FX(x) = P(X ≤ x) = Σ_{u=0}^{x} pX(u)
      = pX(0) + pX(1) + . . . + pX(x) = Σ_{u=0}^{x} P(X = u)
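These results can be verified numerically; the sketch below is ours (not from the slides) and uses the standard-library `math.comb` for the binomial coefficient:

```python
from math import comb

def binom_pmf(x, n, p):
    # C(n, x) p^x (1 - p)^(n - x)
    return comb(n, x) * p**x * (1 - p) ** (n - x)

n, p = 7, 0.15
probs = [binom_pmf(x, n, p) for x in range(n + 1)]

assert abs(sum(probs) - 1.0) < 1e-9                          # pmf sums to 1
mean = sum(x * px for x, px in enumerate(probs))
var = sum(x**2 * px for x, px in enumerate(probs)) - mean**2
assert abs(mean - n * p) < 1e-9                              # E[X] = np
assert abs(var - n * p * (1 - p)) < 1e-9                     # Var[X] = np(1 - p)
```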
Binomial plots
Below we show plots of binomial distributions with different parameters:

[Four pmf plots: Binomial(n = 7) with p = 0.15, 0.50, 0.75, 0.95; p(x) plotted for x = 0, . . . , 7]
Binomial plots: continued
[Four pmf plots: Binomial(n = 20) with p = 0.15, 0.25, 0.50, 0.70; p(x) plotted for x = 0, . . . , 20]
Binomial plots: continued
[Four pmf plots: Binomial(n = 60) with p = 0.25, 0.50, 0.75 (p = 0.50 shown twice); p(x) plotted for x = 0, . . . , 60]
The Bernoulli distribution
Definition
A r.v. X such that X ∼ Binomial(1, p) (i.e. with n = 1) is said to have a
Bernoulli distribution with parameter p (denoted as X ∼ Bernoulli(p)),
that is

pX(x) = P(X = x) = p^x (1 − p)^(1−x)  if x = 0, 1;  pX(x) = 0 otherwise

equivalently,

pX(x) = 1 − p  when x = 0
pX(x) = p      when x = 1
pX(x) = 0      otherwise

For a Bernoulli r.v. X, we have

E[X] = p    Var[X] = p(1 − p)
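As a minimal check (ours, with an arbitrary p = 0.3), the Bernoulli mean and variance follow directly from the two-point pmf:

```python
# Bernoulli(p): E[X] = p and Var[X] = p(1 - p), from the two-point pmf
p = 0.3
pmf = {0: 1 - p, 1: p}

mean = sum(x * px for x, px in pmf.items())
var = sum(x**2 * px for x, px in pmf.items()) - mean**2
assert abs(mean - p) < 1e-12
assert abs(var - p * (1 - p)) < 1e-12
```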
Random experiment behind the Binomial distribution

The Binomial distribution arises from the following situation:


consider a random trial with only two outcomes (usually called Bernoulli
trial), labelled as

success = 1 and failure = 0

such that
p = probability of success
Assume that
1 we repeat the aforementioned trial n times
2 p is constant across repetitions
3 trials are independent from one another
then the r.v. X = # of successes out of n trials has a Binomial(n, p)
distribution.

Random experiment behind the Binomial distribution

Example
Assume that we run n = 3 Bernoulli trials with p ∈ (0, 1).

Trial 1  Trial 2  Trial 3 | prob. for a given x | # succ x | # outcomes C(3, x)
   0        0        0    | (1 − p)^3           |    0     |        1
   1        0        0    | p(1 − p)^2          |    1     |
   0        1        0    | p(1 − p)^2          |    1     |        3
   0        0        1    | p(1 − p)^2          |    1     |
   1        1        0    | p^2(1 − p)          |    2     |
   1        0        1    | p^2(1 − p)          |    2     |        3
   0        1        1    | p^2(1 − p)          |    2     |
   1        1        1    | p^3                 |    3     |        1
Random experiment behind the Binomial distribution
Example (continued)
So, for example,

pX(0) = P(X = 0) = P(0 successes)
      = P((0, 0, 0))
      = (1 − p)(1 − p)(1 − p)   (by independent trials and constant p)
      = (1 − p)^3 = C(3, 0) p^0 (1 − p)^3

pX(1) = P(X = 1) = P(1 success)
      = P((1, 0, 0)) + P((0, 1, 0)) + P((0, 0, 1))
      = p(1 − p)(1 − p) + (1 − p)p(1 − p) + (1 − p)(1 − p)p
      = 3 · p(1 − p)^2 = C(3, 1) p (1 − p)^2
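The table and the computations above can be reproduced by brute force. The sketch below is ours (with an arbitrary p = 0.4); it enumerates all 2³ outcomes and groups them by the number of successes:

```python
from itertools import product
from math import comb

p = 0.4                          # an arbitrary success probability for illustration
probs_by_x = {}
for outcome in product([0, 1], repeat=3):
    prob = 1.0
    for o in outcome:            # independent trials: probabilities multiply
        prob *= p if o == 1 else 1 - p
    x = sum(outcome)             # number of successes in this outcome
    probs_by_x[x] = probs_by_x.get(x, 0.0) + prob

# grouping the outcomes recovers the Binomial(3, p) pmf
for x in range(4):
    assert abs(probs_by_x[x] - comb(3, x) * p**x * (1 - p) ** (3 - x)) < 1e-12
```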
CONTINUOUS RANDOM VARIABLES

Example
E: randomly select a client (ω) who has just used the front-desk
service in a bank branch
Ω = {ω1, ω2, ω3, ω4, . . .}
(the population of all clients who have used the front-desk service in the period of
interest, for example, in the last year)
r.v.: X = waiting time of a client in the queue (minutes)
realizations of X: (0, +∞)

X(client ωi) = time spent in the queue by client ωi

For example, if we drew ω3 at random, ω3 = "Mr. Smith", and Mr. Smith waited
15 minutes and 30 seconds in the queue, we would have

X(Mr. Smith) = 15.5
The probability density function
Definition
The probability distribution of a continuous random variable X is described
by its probability density function (pdf), defined as any f(x) with the
following properties:

f(x) ≥ 0 for any real value x
∫_{−∞}^{+∞} f(x) dx = 1

Note that, for a continuous r.v. with density function f(x):

for any pair of real values a and b (a < b)

P(a < X < b) = ∫_a^b f(x) dx

P(X = x) = 0 for any real value x; this implies that

P(a < X < b) = P(a ≤ X < b) = P(a < X ≤ b) = P(a ≤ X ≤ b)
The cumulative distribution function

Alternatively but equivalently:


Definition
The probability distribution of X can be described by its cumulative
distribution function (cdf), F(x), defined as

F(x) := P(X ≤ x) = ∫_{−∞}^{x} f(t) dt

with the following properties:

0 ≤ F(x) ≤ 1 for every x
F(x) is non-decreasing
P(a < X ≤ b) = F(b) − F(a)
Graphical interpretation

[Figure: P(a < X < b) is given by the red area under the density between a and b]

[Figure: F(x₀) = P(X ≤ x₀) is given by the red area under the density to the left of x₀]
Example (Uniform continuous distribution)
If X has the following pdf:
f(x) = 1/(b − a)  if a < x < b;  f(x) = 0 otherwise

then X has a continuous uniform distribution on the interval (a, b)
(a < b). With a = 5 and b = 15, graphically we have

[Plot: f(x) = 0.1 on the interval (5, 15), zero elsewhere]
Expectation and variance

Analogously to what we did for discrete r.v.'s, we can define

the EXPECTATION:

E[X] = μ = ∫_{−∞}^{+∞} x f(x) dx

the VARIANCE:

Var[X] = σ² = E[(X − μ)²] = E[X²] − E[X]²
       = ∫_{−∞}^{+∞} (x − μ)² f(x) dx = ∫_{−∞}^{+∞} x² f(x) dx − μ²
Example (Continuous uniform distribution: continued)
Assume that X has a continuous uniform distribution on (a, b), then
E[X] = ∫_a^b x · 1/(b − a) dx = (1/(b − a)) ∫_a^b x dx
     = (1/(b − a)) · [x²/2]_a^b = (1/(b − a)) · (b² − a²)/2 = (1/(b − a)) · (b + a)(b − a)/2
     = (a + b)/2

Var[X] = ∫_a^b x² · 1/(b − a) dx − ((a + b)/2)² = (1/(b − a)) ∫_a^b x² dx − (a + b)²/4
       = (1/(b − a)) · [x³/3]_a^b − (a + b)²/4
       = . . . = (b − a)²/12
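The two closed forms can be checked with a crude midpoint (Riemann-sum) integration; the sketch below is ours and uses a = 5, b = 15 as in the earlier plot:

```python
# numeric check of E[X] = (a + b)/2 and Var[X] = (b - a)^2 / 12
a, b = 5.0, 15.0
f = 1.0 / (b - a)                 # uniform density on (a, b)
N = 20_000                        # number of midpoint subintervals
dx = (b - a) / N
xs = [a + (i + 0.5) * dx for i in range(N)]

mean = sum(x * f * dx for x in xs)
var = sum(x**2 * f * dx for x in xs) - mean**2
assert abs(mean - (a + b) / 2) < 1e-6
assert abs(var - (b - a) ** 2 / 12) < 1e-3
```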
The Gaussian or Normal Distribution
Definition
A r.v. X has a Normal distribution with parameters μ and σ² (denoted
as X ∼ N(μ, σ²)), if its density function is provided by

f(x) = (1/√(2πσ²)) e^{−(x − μ)²/(2σ²)}   for −∞ < x < +∞

where −∞ < μ < +∞ and σ² > 0.

If μ = 0 and σ² = 1, that is, if X ∼ N(0, 1), X is said to have a standard
normal distribution.
The cumulative distribution function of X is provided by

F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt
     = ∫_{−∞}^{x} (1/√(2πσ²)) e^{−(t − μ)²/(2σ²)} dt = no explicit expression!
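Although F(x) has no closed form, the density itself is easy to evaluate; the sketch below (ours, not from the slides) checks numerically that it integrates to 1:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma2):
    return exp(-((x - mu) ** 2) / (2 * sigma2)) / sqrt(2 * pi * sigma2)

# midpoint integration over (mu - 10, mu + 10); the tails beyond are negligible
mu, sigma2 = 0.0, 1.0
N = 20_000
lo, hi = mu - 10.0, mu + 10.0
dx = (hi - lo) / N
total = sum(normal_pdf(lo + (i + 0.5) * dx, mu, sigma2) * dx for i in range(N))
assert abs(total - 1.0) < 1e-6
```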
Properties of the Normal distribution

Result
Mean:
E[X] = μ
Variance[a]:
V[X] = σ²
Symmetric (μ is also the median)
Bell-shaped (μ is also the mode)
Light tails (as compared to other distributions to be seen later):
large deviations from μ rarely occur.

[a] This might seem a trivial remark, but it states that the parameters of a
normal distribution directly represent the values of two crucial summary
measures of the distribution.
In this slide and in the following one we exhibit some examples of Normal
distributions, with different parameter values:

[Plot: three Normal densities with μ = −10, 0, 15 and σ = 3, over x ∈ (−20, 20)]
[Plot: Normal densities N(0, 1) [standard Normal] (blue), N(0, 9) (black) and N(0, 0.25) (red), over x ∈ (−5, 5)]
TRANSFORMATION OF A R.V.: Distribution of a function of a r.v.

Frequently we are interested in a new r.v. Y, obtained by applying a
certain function g to X: Y = g(X).
What is the distribution of the new r.v. Y?
What are its characteristics?
if X is a discrete r.v., then Y = g(X) is a discrete r.v.
if X is a continuous r.v. and g(·) is a continuous function, then Y is
a continuous r.v.
Transformation of a r.v.
Example
Assume that X has the following discrete distribution and consider
Y = g(X) = X².

x        −1    0    1
y = x²    1    0    1
pX(x)   1/5  2/5  2/5

Then the distribution of Y is

y        0    1
pY(y)  2/5  3/5

More formally, Y can take on only the values 0 and 1, with probabilities

pY(0) = P(Y = 0) = P(X² = 0) = P(X = 0) = pX(0) = 2/5
pY(1) = P(Y = 1) = P(X² = 1) = pX(−1) + pX(1) = 3/5
Mean and variance of a function of a r.v.
The main characteristics of Y can be obtained using its distribution,
according to the definitions, or sometimes directly and more simply
through the distribution of X .

Definition (EXPECTATION and VARIANCE OF A FUNCTION OF A R.V.)

If X is a r.v., then the expectation of Y = g(X), where g is a real-valued
function, is defined as

E[Y] = E[g(X)] := Σ_x g(x) · pX(x)        (X discrete)
E[Y] = E[g(X)] := ∫ g(x) · fX(x) dx       (X continuous)

while the variance of Y can be computed as

Var[Y] = E[Y²] − E[Y]² = Σ_x g(x)² · pX(x) − E[Y]²        (X discrete)
Var[Y] = E[Y²] − E[Y]² = ∫ g(x)² · fX(x) dx − E[Y]²       (X continuous)
Transformation of a r.v.

Example (continued)
We might compute the summary measures using pY(y):

E[Y] = Σ_{y=0}^{1} y · pY(y) = 0 · (2/5) + 1 · (3/5) = 3/5

Var[Y] = Σ_{y=0}^{1} y² · pY(y) − (3/5)² = 0² · (2/5) + 1² · (3/5) − 9/25 = 6/25

but also directly through pX(x), without obtaining the whole distribution
of Y; indeed

E[Y] = E[X²] = Σ_{x=−1}^{1} x² · pX(x) = (−1)² · (1/5) + 0² · (2/5) + 1² · (2/5) = 3/5
Transformation of a r.v.

Example (continued)

Var[Y] = Σ_{x=−1}^{1} x⁴ · pX(x) − (3/5)²
       = (−1)⁴ · (1/5) + 0⁴ · (2/5) + 1⁴ · (2/5) − 9/25 = 6/25
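Both routes (via pY and directly via pX) can be mirrored in code; the sketch below is ours, using the pmf of the example:

```python
# pmf of X from the example; Y = g(X) = X^2
pX = {-1: 1 / 5, 0: 2 / 5, 1: 2 / 5}

# distribution of Y: sum pX over all x mapping to the same y = x^2
pY = {}
for x, px in pX.items():
    pY[x**2] = pY.get(x**2, 0.0) + px
assert abs(pY[0] - 2 / 5) < 1e-12
assert abs(pY[1] - 3 / 5) < 1e-12

EY_via_pY = sum(y * py for y, py in pY.items())
EY_via_pX = sum(x**2 * px for x, px in pX.items())   # no need for pY here
assert abs(EY_via_pY - 3 / 5) < 1e-12
assert abs(EY_via_pX - 3 / 5) < 1e-12

VarY = sum(x**4 * px for x, px in pX.items()) - EY_via_pX**2
assert abs(VarY - 6 / 25) < 1e-9
```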
Linear transformations of a r.v.

A very important and, at the same time, simple transformation of a r.v. X is
the linear transformation:

Y = g(X) = a + b · X

where a and b are arbitrary real values.

For such a transformation, the following results hold
Result

E[Y] = E[a + b · X] = a + b · E[X]

(E[·] is a linear operator) and

Var[Y] = Var[a + b · X] = b² · Var[X]

(a is a constant, so it contributes nothing to the variance: Var(a) = 0).
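The two identities can be verified on any discrete distribution; the pmf and the constants a, b below are arbitrary choices of ours:

```python
# check E[a + bX] = a + b E[X] and Var[a + bX] = b^2 Var[X]
pX = {1: 0.2, 2: 0.5, 3: 0.3}
a, b = 4.0, -2.0

EX = sum(x * p for x, p in pX.items())
VarX = sum(x**2 * p for x, p in pX.items()) - EX**2

# build the distribution of Y = a + bX and compute its moments from scratch
pY = {a + b * x: p for x, p in pX.items()}
EY = sum(y * p for y, p in pY.items())
VarY = sum(y**2 * p for y, p in pY.items()) - EY**2

assert abs(EY - (a + b * EX)) < 1e-9
assert abs(VarY - b**2 * VarX) < 1e-9
```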
Standardization of a r.v.
Frequently we will use the following results:
Definition
If X is a r.v. such that μ = E[X] and σ² = Var[X], and Z is a new r.v.
defined as

Z = (X − μ)/σ

then
Z is said to be the standardized version of X
the transformation g(x) = (x − μ)/σ is said to be the standardization.
Note that

Z = (X − μ)/σ = (1/σ) X − μ/σ = a + b · X

that is, the standardization of X is simply a specific linear transformation
of X with

a = −μ/σ    b = 1/σ
Standardization of a r.v.
Exploiting the properties of expectation and variance for linear
transformations, and the last remark in the previous slide, we have
Result
If Z is a standardized version of X

E[Z] = 0    Var[Z] = 1

indeed

E[Z] = E[(X − μ)/σ] = (1/σ) E[X − μ] = (1/σ) {E[X] − μ} = 0

Var[Z] = Var[a + b · X] = b² · Var[X] = (1/σ²) · σ² = 1

Thus standardizing a r.v. always leads to a new r.v. with zero mean
and unit variance.
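A sketch (ours, on an arbitrary discrete pmf) that standardizes a r.v. and confirms mean 0 and variance 1:

```python
from math import sqrt

pX = {1: 0.2, 2: 0.5, 3: 0.3}        # an arbitrary pmf for illustration
mu = sum(x * p for x, p in pX.items())
sigma = sqrt(sum(x**2 * p for x, p in pX.items()) - mu**2)

# Z = (X - mu) / sigma inherits the probabilities of X
pZ = {(x - mu) / sigma: p for x, p in pX.items()}
EZ = sum(z * p for z, p in pZ.items())
VarZ = sum(z**2 * p for z, p in pZ.items()) - EZ**2

assert abs(EZ) < 1e-9
assert abs(VarZ - 1.0) < 1e-9
```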
Relevant transformations of a normally distributed r.v.
Normally distributed r.v.'s and linear transformations interact nicely:

Result (Linear transformation of a normally distributed r.v.)

Assume that X ∼ N(μ, σ²) and define a new r.v.

Y = a + b · X

then Y ∼ N(a + bμ, b²σ²).

Remark:
The crucial part of the previous theorem is that the distribution of the
new r.v. Y is still normal.[2]

We can compactly state that a linear transformation of a normally
distributed r.v. still leads to a normally distributed r.v.[3]

[2] We already know that standardizing any X leads to a 0-mean unit-variance r.v.
[3] We also say that linear transformations preserve normality.
Relevant transformations of a normally distributed r.v.
A very useful application of the theorem in the previous slide is the
following

Result (Standardization of a normally distributed r.v.)

Assume that X ∼ N(μ, σ²) and define a new r.v.

Z = (X − μ)/σ

then Z ∼ N(0, 1), i.e. Z has a standard normal distribution.

Remark:
The previous result states that any normally distributed r.v. can be
turned into a r.v. with a standard normal distribution.

Thus computation of probabilities for normal distributions, whatever
the values of μ and σ², can always be performed in terms of a
standard normal distribution.
Computation of normal probabilities
We can use the standardization result to state that

P(a < X ≤ b) = FZ((b − μ)/σ) − FZ((a − μ)/σ)

where FZ is the cdf of the standard normal distribution.[4]

Indeed,

P(a < X ≤ b) = P(a − μ < X − μ ≤ b − μ)
             = P((a − μ)/σ < (X − μ)/σ ≤ (b − μ)/σ)
             = P((a − μ)/σ < Z ≤ (b − μ)/σ)
             = FZ((b − μ)/σ) − FZ((a − μ)/σ)

[4] FZ is frequently denoted by Φ.
Computation of normal probabilities
Example
Assume that X ∼ N(3.6, 2.25), thus μ = 3.6, σ² = 2.25 and σ = 1.5:

P(X ≤ 4.6) = P((X − 3.6)/1.5 ≤ (4.6 − 3.6)/1.5) = P(Z ≤ 0.67) = FZ(0.67)
           = 0.7486

The statistical table of the standard normal distribution (or the use of R)
provides the final value.

P(X ≤ 2.1) = P((X − 3.6)/1.5 ≤ (2.1 − 3.6)/1.5) = P(Z ≤ −1) = P(Z > 1)
           = 1 − P(Z ≤ 1) = 1 − FZ(1) = 1 − 0.8413 = 0.1587

Note that the value −1 is not shown on the table: the symmetry of the
distribution allows us to express the required probability in terms of a
positive value only (3rd equality).
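Instead of the printed table, the standard normal cdf can be evaluated with the error function, Φ(z) = (1 + erf(z/√2))/2. The sketch below is ours; it reproduces the example's values up to the table's rounding of z:

```python
from math import erf, sqrt

def Phi(z):
    # cdf of the standard normal, via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 3.6, 1.5
# P(X <= 4.6): the slides round z = (4.6 - 3.6)/1.5 to 0.67 and read 0.7486
assert abs(Phi((4.6 - mu) / sigma) - 0.7486) < 0.01
# P(X <= 2.1) = Phi(-1) = 1 - Phi(1), by symmetry
assert abs(Phi(-1.0) - (1 - Phi(1.0))) < 1e-12
assert abs(Phi(-1.0) - 0.1587) < 1e-3
```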
Computation of normal probabilities

Example (continued)

P(1.8 < X ≤ 4) = P((1.8 − 3.6)/1.5 < (X − μ)/σ ≤ (4 − 3.6)/1.5)
              = P(−1.2 < Z ≤ 0.27)
              = FZ(0.27) − FZ(−1.2)
              = 0.6064 − 0.1151 = 0.4913

where, as previously shown by symmetry,

FZ(−1.2) = 1 − FZ(1.2) = 1 − 0.8849 = 0.1151
Computation of normal probabilities

Empirical Rule

[Figure: for a Normal distribution, approximately 68% of the probability lies
within μ ± σ, 95% within μ ± 2σ, and 99.7% within μ ± 3σ]
SETS OF RANDOM VARIABLES

Definition (Linear combinations of random variables)

Let X1, X2, . . . , Xn be r.v.'s; then the random variable

Y = a1·X1 + a2·X2 + . . . + an·Xn = Σ_{i=1}^{n} ai·Xi

where a1, a2, . . . , an are known real constants, is called a linear
combination of the Xi's.
Sets of random variables
Result (Expectation and variance of a linear combination of r.v.'s)
Let X1, X2, . . . , Xn be r.v.'s such that E[Xi] = μi and Var[Xi] = σi²,
i = 1, . . . , n, and let Y be the linear combination (ai's known real
constants)

Y = Σ_{i=1}^{n} ai·Xi

then

E[Y] = E[Σ_{i=1}^{n} ai·Xi] = Σ_{i=1}^{n} ai·E[Xi] = Σ_{i=1}^{n} ai·μi

Var[Y] = Var[Σ_{i=1}^{n} ai·Xi] = Σ_{i=1}^{n} ai²·Var[Xi] + 2 Σ_{i<j} ai·aj·Cov(Xi, Xj)

where σij = Cov(Xi, Xj). If the r.v.'s are uncorrelated, then

Var[Y] = Σ_{i=1}^{n} ai²·Var[Xi] = Σ_{i=1}^{n} ai²·σi²
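For two r.v.'s the formulas can be verified by full enumeration. The sketch below is ours and uses two independent fair dice, for which Cov(X1, X2) = 0:

```python
from itertools import product

faces = range(1, 7)
p = 1 / 6
EX = sum(x * p for x in faces)                      # 3.5
VarX = sum(x**2 * p for x in faces) - EX**2         # 35/12

# enumerate the joint distribution of Y = X1 + X2 (independence: P = p * p)
EY = sum((x1 + x2) * p * p for x1, x2 in product(faces, repeat=2))
EY2 = sum((x1 + x2) ** 2 * p * p for x1, x2 in product(faces, repeat=2))
VarY = EY2 - EY**2

assert abs(EY - 2 * EX) < 1e-9                      # E[X1 + X2] = E[X1] + E[X2]
assert abs(VarY - 2 * VarX) < 1e-9                  # uncorrelated: variances add
```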
Sets of random variables
Result (Linear combinations of normally distributed r.v.'s)
Let X1, X2, . . . , Xn be independent[a] normally distributed r.v.'s, i.e.

Xi ∼ N(μi, σi²)

then Y = Σ_{i=1}^{n} ai·Xi is normally distributed, that is

Y ∼ N(Σ_i ai·μi, Σ_i ai²·σi²)

Furthermore, if X1, X2, . . . , Xn are not only independent but also
identically distributed (μi = μ and σi² = σ² for every i), that is
Xi ∼ N(μ, σ²) for every i, then

Y ∼ N(μ Σ_i ai, σ² Σ_i ai²)

[a] Recall that independence implies zero covariance, that is Cov(Xi, Xj) = 0.
Sets of random variables

Two important cases of linear combinations of random variables are:

a1 = a2 = . . . = an = 1:

Y = Σ_{i=1}^{n} Xi ∼ N(nμ, nσ²)

a1 = a2 = . . . = an = 1/n:

Y = (1/n) Σ_{i=1}^{n} Xi ∼ N(μ, σ²/n)