
Random process (or stochastic process)

In many real-life situations, observations are made over a period of time and they
are influenced by random effects, not just at a single instant but throughout
the entire interval of time or sequence of times.
In a rough sense, a random process is a phenomenon that varies to some
degree unpredictably as time goes on. If we observed an entire time-sequence
of the process on several different occasions, under presumably identical
conditions, the resulting observation sequences would, in general, be different.
A random variable (RV) is a rule (or function) that assigns a real number to
every outcome of a random experiment, while a random process is a rule (or
function) that assigns a time function to every outcome of a random experiment.
Example
Observe the demand per week for a certain service over time.
Example
The closing price of HSBC stock observed from June 15 to Dec. 22, 2003.
Example
Outcomes from Mark Six in the years 2000 and 2001. We have two sequences of
observations made twice per week. The actual sequence of observations is called
a realization of the random process associated with the random experiment
of Mark Six. The realizations in different years should differ, though the nature
of the random experiment remains the same (assuming no change to the rules
of Mark Six).
A random experiment may lead not only to a single random variable, but to an
entire sequence
{X_i : i = 1, 2, 3, ...} = {X_1, X_2, X_3, ...}
of random variables (indexed family of random variables).
Consider the random experiment of tossing a die at t = 0 and observing the
number on the top face. The sample space of this experiment consists of the
outcomes {1, 2, ..., 6}. For each outcome of the experiment, let us arbitrarily
assign a function of time t (0 ≤ t < ∞) in the following manner.
Outcome:            1             2             3            4            5              6
Function of time:   x_1(t) = -4   x_2(t) = -2   x_3(t) = 2   x_4(t) = 4   x_5(t) = -t/2  x_6(t) = t/2
The set of functions {x_1(t), x_2(t), ..., x_6(t)} represents a random process.
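To make the mapping concrete, here is a minimal sketch (assuming NumPy is available; the dictionary and variable names are illustrative, not part of the notes) that assigns each die outcome its time function from the table above and evaluates one randomly selected sample function on a time grid.

```python
import numpy as np

# Each outcome of the die is assigned a function of time x_k(t), as in the table above.
time_functions = {
    1: lambda t: -4.0 * np.ones_like(t),
    2: lambda t: -2.0 * np.ones_like(t),
    3: lambda t:  2.0 * np.ones_like(t),
    4: lambda t:  4.0 * np.ones_like(t),
    5: lambda t: -t / 2.0,
    6: lambda t:  t / 2.0,
}

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 6)          # a few sampling instants

# One performance of the random experiment picks one outcome, hence one sample function.
outcome = int(rng.integers(1, 7))
realization = time_functions[outcome](t)
print(outcome, realization)
```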
Definition: A random process is a collection (or ensemble) of RVs {X(s, t)}
that are functions of a real variable, namely time t, where s ∈ S (sample space)
and t ∈ T (parameter set or index set).
The set of possible values of any individual member of the random process is
called the state space. Any individual member itself is called a sample function or
a realisation of the process.
Classification of Random Processes
Depending on the continuous or discrete nature of the state space S and the
parameter set T, a random process can be classified into four types:
1. If both T and S are discrete, the random process is called a discrete random
sequence. For example, if X_n represents the outcome of the nth toss of
a fair die, then {X_n, n ≥ 1} is a discrete random sequence, since T =
{1, 2, 3, ...} and S = {1, 2, 3, 4, 5, 6}.
2. If T is discrete and S is continuous, the random process is called a continuous
random sequence.
For example, if X_n represents the temperature at the end of the nth hour
of a day, then {X_n, 1 ≤ n ≤ 24} is a continuous random sequence, since
temperature can take any value in an interval and hence is continuous.
3. If T is continuous and S is discrete, the random process is called a discrete
random process.
For example, if X(t) represents the number of telephone calls received in
the interval (0, t), then {X(t)} is a discrete random process, since S =
{0, 1, 2, 3, ...}.
4. If both T and S are continuous, the random process is called a continuous
random process. For example, if X(t) represents the maximum temperature
at a place in the interval (0, t), {X(t)} is a continuous random process.
In the names given above, the word discrete or continuous refers to
the nature of S, and the word sequence or process refers to the nature of T.
Specifying a random process
Let X_1, X_2, ..., X_k be the k random variables obtained by sampling the random
process X(t, s) at times t_1, t_2, ..., t_k:
X_1 = X(t_1, s), X_2 = X(t_2, s), ..., X_k = X(t_k, s).
The joint behavior of the random process at these k time instants is specified by
the joint cumulative distribution of the vector random variable (X_1, X_2, ..., X_k).
A stochastic process is specified by the collection of kth-order joint cumulative
distribution functions:
F_{X_1, ..., X_k}(x_1, x_2, ..., x_k) = P[X_1 ≤ x_1, X_2 ≤ x_2, ..., X_k ≤ x_k],
for any k and any choice of sampling instants t_1, ..., t_k.
If the stochastic process is discrete-valued, then a collection of probability mass
functions can be used to specify the stochastic process:
P_{X_1, ..., X_k}(x_1, x_2, ..., x_k) = P[X_1 = x_1, X_2 = x_2, ..., X_k = x_k].
The mean m_X(t) of a random process X(t) is
m_X(t) = E[X(t)] = ∫_{-∞}^{∞} x f_{X(t)}(x) dx.
In general, m_X(t) is a function of time. Suppose we write
X(t) = m_X(t) + Y(t);
then Y(t) has zero mean. Trends in the behavior of X(t) are reflected in the
variation of m_X(t) with time.
The autocorrelation R_X(t_1, t_2) of a random process X(t) is the joint moment of
X(t_1) and X(t_2):
R_X(t_1, t_2) = E[X(t_1)X(t_2)] = ∫∫ xy f_{X(t_1), X(t_2)}(x, y) dx dy.
Note that f_{X(t_1), X(t_2)} is the second-order pdf of X(t) and R_X(t_1, t_2) is a function
of t_1 and t_2.
The autocovariance C_X(t_1, t_2) of a random process X(t) is defined as the covariance
of X(t_1) and X(t_2):
C_X(t_1, t_2) = E[{X(t_1) - m_X(t_1)}{X(t_2) - m_X(t_2)}]
             = R_X(t_1, t_2) - m_X(t_1) m_X(t_2).
In particular, when t_1 = t_2 = t, we have
VAR[X(t)] = E[(X(t) - m_X(t))^2] = C_X(t, t).
The correlation coefficient of X(t) is defined as
ρ_X(t_1, t_2) = C_X(t_1, t_2) / [√(C_X(t_1, t_1)) √(C_X(t_2, t_2))],   with |ρ_X(t_1, t_2)| ≤ 1.
The mean, autocorrelation and autocovariance functions provide only a partial
description of a random process.
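Because m_X(t) and C_X(t_1, t_2) are ensemble averages, they can be approximated by averaging over many independent realizations. The sketch below is a minimal illustration (assuming NumPy; the helper names are mine), using an iid zero-mean Gaussian sequence as a test process, for which the autocovariance should vanish except when t_1 = t_2.

```python
import numpy as np

def estimate_mean(realizations):
    """Sample mean m_X(t): average across realizations (rows) at each time (column)."""
    return realizations.mean(axis=0)

def estimate_autocovariance(realizations, i, j):
    """Sample autocovariance C_X(t_i, t_j) estimated across the ensemble."""
    xi = realizations[:, i] - realizations[:, i].mean()
    xj = realizations[:, j] - realizations[:, j].mean()
    return np.mean(xi * xj)

rng = np.random.default_rng(1)
sigma = 2.0
# 100000 realizations of an iid Gaussian process observed at 5 time instants.
X = rng.normal(0.0, sigma, size=(100_000, 5))

print(estimate_mean(X))                    # close to [0, 0, 0, 0, 0]
print(estimate_autocovariance(X, 2, 2))    # close to sigma**2 = 4
print(estimate_autocovariance(X, 2, 3))    # close to 0
```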
Example  Sinusoid with random amplitude
Let X(t) = A cos 2πt, where A is some random variable;
m_X(t) = E[A cos 2πt] = E[A] cos 2πt.
Remark  The process is always zero for those values of t where cos 2πt = 0.
The autocorrelation is
R_X(t_1, t_2) = E[A cos 2πt_1 · A cos 2πt_2] = E[A^2] cos 2πt_1 cos 2πt_2,
and the autocovariance is
C_X(t_1, t_2) = R_X(t_1, t_2) - m_X(t_1) m_X(t_2)
             = {E[A^2] - E[A]^2} cos 2πt_1 cos 2πt_2
             = VAR[A] cos 2πt_1 cos 2πt_2.
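A quick Monte Carlo check of this formula (a sketch, not part of the notes; NumPy is assumed and A is taken to be uniform on (0, 1), so VAR[A] = 1/12):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=500_000)   # random amplitude; VAR[A] = 1/12

t1, t2 = 0.1, 0.35
X1 = A * np.cos(2 * np.pi * t1)
X2 = A * np.cos(2 * np.pi * t2)

empirical = np.mean((X1 - X1.mean()) * (X2 - X2.mean()))
theoretical = (1.0 / 12.0) * np.cos(2 * np.pi * t1) * np.cos(2 * np.pi * t2)
print(empirical, theoretical)   # the two values should nearly agree
```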
Example  Sinusoid with random phase
X(t) = cos(ωt + Θ), where Θ is uniformly distributed in (-π, π).
Mean:
m_X(t) = E[cos(ωt + Θ)] = (1/2π) ∫_{-π}^{π} cos(ωt + θ) dθ = 0.
The autocorrelation and autocovariance are
C_X(t_1, t_2) = R_X(t_1, t_2) = E[cos(ωt_1 + Θ) cos(ωt_2 + Θ)]
= (1/2π) ∫_{-π}^{π} (1/2){cos[ω(t_1 - t_2)] + cos[ω(t_1 + t_2) + 2θ]} dθ
= (1/2) cos ω(t_1 - t_2).
m_X(t) is a constant and C_X(t_1, t_2) depends only on |t_1 - t_2|.
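That C_X(t_1, t_2) depends only on the time difference t_1 - t_2 can also be seen numerically. A minimal sketch (assuming NumPy; ω = 2π is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
omega = 2 * np.pi
theta = rng.uniform(-np.pi, np.pi, size=500_000)   # random phase, uniform on (-pi, pi)

def empirical_autocov(t1, t2):
    x1 = np.cos(omega * t1 + theta)
    x2 = np.cos(omega * t2 + theta)
    return np.mean((x1 - x1.mean()) * (x2 - x2.mean()))

# Pairs with the same time difference t1 - t2 = 0.2 give (nearly) the same value,
# which matches 0.5 * cos(omega * 0.2).
print(empirical_autocov(0.3, 0.1))
print(empirical_autocov(1.4, 1.2))
print(0.5 * np.cos(omega * 0.2))
```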
Example  Discrete-time random process
A discrete-time random process is defined by X_n = s^n, for n ≥ 0, where s is
selected at random from the interval (0, 1).
(a) Find the cdf of X_n.
(b) Find the joint cdf of X_n and X_{n+1}.
(c) Find the mean and autocovariance functions of X_n.
Solution
(a) For 0 < y < 1, P[X_n ≤ y] = P[s^n ≤ y] = P[s ≤ y^{1/n}] = y^{1/n}, since s is
selected at random (uniformly) in (0, 1).
(b) P[X_n ≤ r, X_{n+1} ≤ t] = P[s^n ≤ r, s^{n+1} ≤ t] = P[s ≤ r^{1/n}, s ≤ t^{1/(n+1)}]
    = P[s ≤ min(r^{1/n}, t^{1/(n+1)})] = min(r^{1/n}, t^{1/(n+1)}).
(c) E[X_n] = ∫_0^1 s^n ds = [s^{n+1}/(n+1)]_0^1 = 1/(n+1).
    R_X(n, n+k) = E[X_n X_{n+k}] = ∫_0^1 s^{2n+k} ds = [s^{2n+k+1}/(2n+k+1)]_0^1 = 1/(2n+k+1).
    C_X(n, n+k) = R_X(n, n+k) - E[X_n] E[X_{n+k}]
                = 1/(2n+k+1) - (1/(n+1))(1/(n+k+1)).
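The results in (c) are easy to confirm by simulation. A minimal sketch (assuming NumPy, and reading "selected at random from (0, 1)" as uniform on (0, 1)):

```python
import numpy as np

rng = np.random.default_rng(4)
s = rng.uniform(0.0, 1.0, size=1_000_000)   # s drawn uniformly from (0, 1)

n, k = 3, 2
Xn, Xnk = s**n, s**(n + k)

print(Xn.mean(), 1.0 / (n + 1))                               # E[X_n] = 1/(n+1)
print(np.mean(Xn * Xnk), 1.0 / (2 * n + k + 1))               # R_X(n, n+k)
cov = np.mean(Xn * Xnk) - Xn.mean() * Xnk.mean()
print(cov, 1.0/(2*n + k + 1) - 1.0/((n + 1)*(n + k + 1)))     # C_X(n, n+k)
```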
Example  Sum of independent random variables
Let Z(t) = At + B, where A and B are independent random variables.
(a) Find the pdf of Z(t), t > 0.
(b) Find m_Z(t) and C_Z(t_1, t_2), for t > 0 and t_1 > 0, t_2 > 0.
Solution
(a) Since A and B are independent, so are At and B. Let f_A(x) and f_B(y)
denote the pdfs of A and B, respectively; then
f_{At}(x') = (1/t) f_A(x'/t).
f_{Z(t)}(z) is given by the convolution of f_{At} and f_B:
f_{Z(t)}(z) = ∫_{-∞}^{∞} (1/t) f_A(x'/t) f_B(z - x') dx'.
(b) m_Z(t) = E[At + B] = t E[A] + E[B].
C_Z(t_1, t_2) = COV(t_1 A + B, t_2 A + B)
= t_1 t_2 COV(A, A) + (t_1 + t_2) COV(A, B) + COV(B, B)
= t_1 t_2 VAR(A) + VAR(B) + (t_1 + t_2) COV(A, B).
Since A and B are independent, COV(A, B) = 0, so C_Z(t_1, t_2) = t_1 t_2 VAR(A) + VAR(B).
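The convolution formula in (a) can be evaluated numerically on a grid. In the sketch below (assuming NumPy; taking A and B to be standard Gaussian is purely an illustrative choice), the numerical convolution is compared with the exact pdf of At + B, which in that case is Gaussian with variance t^2 + 1.

```python
import numpy as np

def gauss_pdf(x, var):
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

t = 2.0
x = np.linspace(-10.0, 10.0, 2001)       # symmetric grid with spacing dx
dx = x[1] - x[0]

f_At = gauss_pdf(x, var=t**2)            # pdf of At, i.e. (1/t) f_A(x/t) for A ~ N(0,1)
f_B = gauss_pdf(x, var=1.0)              # pdf of B ~ N(0,1)

# Numerical convolution approximates f_Z(z) = integral of f_At(x') f_B(z - x') dx'.
f_Z = np.convolve(f_At, f_B, mode="same") * dx

# At + B is N(0, t^2 + 1), so the two values at z = 0 should nearly coincide.
print(f_Z[len(x) // 2], gauss_pdf(0.0, var=t**2 + 1.0))
```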
Two random processes
The processes X(t) and Y(t) are said to be independent if the vector random
variables (X(t_1), ..., X(t_k)) and (Y(t'_1), ..., Y(t'_j)) are independent for all k, j,
and all choices of t_1, ..., t_k and t'_1, ..., t'_j.
Cross-covariance
C_{X,Y}(t_1, t_2) = E[{X(t_1) - m_X(t_1)}{Y(t_2) - m_Y(t_2)}]
               = R_{X,Y}(t_1, t_2) - m_X(t_1) m_Y(t_2).
X(t) and Y(t) are uncorrelated if C_{X,Y}(t_1, t_2) = 0 for all t_1 and t_2.
Example  Sum of independent and identically distributed Gaussian random variables
Let X(t) = A cos ωt + B sin ωt, where A and B are iid Gaussian random variables
with zero mean and variance σ^2. Find the mean and autocovariance of X(t).
E[X(t)] = m_X(t) = E[A] cos ωt + E[B] sin ωt = 0
since A and B both have zero mean.
C_X(t_1, t_2) = R_X(t_1, t_2)
= E[(A cos ωt_1 + B sin ωt_1)(A cos ωt_2 + B sin ωt_2)]
= E[A^2 cos ωt_1 cos ωt_2 + AB(sin ωt_1 cos ωt_2 + cos ωt_1 sin ωt_2) + B^2 sin ωt_1 sin ωt_2]
= σ^2 [cos ωt_1 cos ωt_2 + sin ωt_1 sin ωt_2]
= σ^2 cos ω(t_1 - t_2).
Note that E[A^2] = E[B^2] = σ^2 and E[AB] = E[A]E[B] = 0.
Independent and identically distributed discrete-time random processes
Let X_n be a discrete-time random process consisting of a sequence of independent,
identically distributed (iid) random variables with common cdf F_X(x),
mean m and variance σ^2. The joint cdf for any time instants n_1, ..., n_k is given by
F_{X_1, ..., X_k}(x_1, x_2, ..., x_k) = P[X_1 ≤ x_1, ..., X_k ≤ x_k]
                                     = F_X(x_1) F_X(x_2) ··· F_X(x_k).
m_X(n) = E[X_n] = m, for all n;
C_X(n_1, n_2) = E[(X_{n_1} - m)(X_{n_2} - m)] = E[X_{n_1} - m] E[X_{n_2} - m] = 0, for n_1 ≠ n_2.
If n_1 = n_2, then C_X(n_1, n_2) = σ^2.
We write C_X(n_1, n_2) = σ^2 δ_{n_1, n_2}, where δ_{n_1, n_2} = 1 if n_1 = n_2 and 0 otherwise.
Independent and stationary increments
1. Random changes of the form X_{t+h} - X_t, for fixed h > 0, are called increments
of the process.
2. If each set of increments, corresponding to a non-overlapping collection of
time intervals, is mutually independent, then X_t is said to be a process with
independent increments.
That is, for any k and any choice of sampling instants t_1 < t_2 < ··· < t_k, the
random variables
X(t_2) - X(t_1), ..., X(t_k) - X(t_{k-1})
are independent random variables.
3. If X_{t+h} - X_t has a distribution that depends only on h, not on t, then X_t is
said to have stationary increments.
Markov process
A random process X(t) is said to be Markov if the future of the process given
the present is independent of the past. For a discrete-valued Markov process,
P[X(t_k) = x_k | X(t_{k-1}) = x_{k-1}, ..., X(t_1) = x_1] = P[X(t_k) = x_k | X(t_{k-1}) = x_{k-1}].
Example  One-dimensional random walk (frog hopping)
The initial position of the frog is taken to be position 0. Let p and q = 1 - p
be the probabilities that the frog will choose to move to the right and to the
left, respectively. Let X_n be the position of the frog after n moves.
P[X_1 = 1] = p,  P[X_1 = -1] = q
P[X_2 = 2] = p^2,  P[X_2 = 0] = 2pq,  P[X_2 = -2] = q^2
P[X_3 = 3] = p^3,  P[X_3 = 1] = 3p^2 q,  P[X_3 = -1] = 3pq^2,  P[X_3 = -3] = q^3
{X_1, ..., X_n} is an indexed family of random variables. The random walk process
is a discrete-valued, discrete-time random process.
Questions
1. Is this random process Markovian? Check whether only the information of
the present position is relevant for predicting the future movements.
Suppose the frog is at position 4 after 10 moves; does the probability that
it will be at position 8 after 16 moves (6 more moves) depend on how it
moved to position 4 within the first 10 moves?
Answer  The 11th to 16th moves are independent of the earlier 10 moves.
2. Are X_10 - X_4 and X_16 - X_12 independent? How about X_10 - X_4 and X_12 - X_8?
Hint  We are considering increments over non-overlapping and overlapping
intervals, respectively.
Each move is an independent Bernoulli trial. The probability mass function of
X_3 is
P_{X_3}(3) = p^3,  P_{X_3}(1) = C(3,1) p^2 q,  P_{X_3}(-1) = C(3,2) p q^2,  P_{X_3}(-3) = q^3.
In general,
P_{X_k}(j) = C(k, (k+j)/2) p^{(k+j)/2} q^{(k-j)/2},   -k ≤ j ≤ k,
where C(k, r) denotes the binomial coefficient "k choose r".
Why? Let R and L be the numbers of right moves and left moves, respectively.
We have
R + L = k and R - L = j,
so that R = (k + j)/2. Note that when k is odd (even), j must be odd (even).
How to find the joint pmfs?
For example, P[X_2 = 0, X_3 = 3] = P[X_3 = 3 | X_2 = 0] P[X_2 = 0] = 0;
P[X_2 = 2, X_3 = 1] = P[X_3 = 1 | X_2 = 2] P[X_2 = 2] = q p^2.
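The general pmf formula can be checked against a simulation of many k-step walks. A minimal sketch (assuming NumPy and Python's math.comb; p = 0.6 and k = 5 are arbitrary choices):

```python
import numpy as np
from math import comb

p, k = 0.6, 5
rng = np.random.default_rng(5)

# Simulate 200000 walks of k steps: each step is +1 with prob p, -1 with prob 1-p.
steps = np.where(rng.random((200_000, k)) < p, 1, -1)
X_k = steps.sum(axis=1)

for j in range(-k, k + 1, 2):                     # k odd, so j must be odd
    r = (k + j) // 2                              # number of right moves
    theoretical = comb(k, r) * p**r * (1 - p)**(k - r)
    empirical = np.mean(X_k == j)
    print(j, round(theoretical, 4), round(empirical, 4))
```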
Sum processes
S_n = Σ_{i=1}^{n} X_i, where the X_i's are iid random variables. With S_n = S_{n-1} + X_n, n = 2, 3, ...,
and S_1 = X_1, we take S_0 = 0 for notational convenience.
It is called a binomial counting process if the X_i's are iid Bernoulli random
variables.
1. S_n is Markovian:
P[S_n = α_n | S_{n-1} = α_{n-1}]
= P[S_n = α_n | S_{n-1} = α_{n-1}, S_{n-2} = α_{n-2}, ..., S_1 = α_1].
This is because S_n = S_{n-1} + X_n and the value taken by X_n is independent
of the values taken by X_1, ..., X_{n-1}.
2. S_n has independent increments in non-overlapping time intervals (no X_n's
are in common).
3. S_n has stationary increments:
P[S_{n+k} - S_k = α] = P[S_n - S_0 = α] = P[S_n = α], independent of k.
This is because S_{n+k} - S_k is the sum of n iid random variables.
Remark
S_n and S_m are not independent since
S_n = X_1 + ··· + X_n and S_m = X_1 + ··· + X_m.
Say, for n > m, both contain X_1, ..., X_m.
Example  Find P[S_n = α, S_m = β], n > m.
Solution
P[S_n = α, S_m = β] = P[S_n - S_m = α - β, S_m = β]
                    = P[S_n - S_m = α - β] P[S_m = β],
due to independent increments over non-overlapping intervals. Further, from the
stationary increments property, S_n - S_m has the same distribution as S_{n-m} - S_0 = S_{n-m}, so that
P[S_n = α, S_m = β] = P[S_{n-m} = α - β] P[S_m = β].
This verifies that S_n and S_m are not independent, since
P[S_n = α, S_m = β] ≠ P[S_n = α] P[S_m = β].
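For a binomial counting process this factorization, and the lack of independence, can be checked numerically. A short sketch (assuming NumPy; p, n, m and the values α and β are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
p, n, m = 0.3, 8, 5                  # n > m
alpha, beta = 4, 2                   # evaluate P[S_n = alpha, S_m = beta]

# Bernoulli(p) increments X_1, ..., X_n for many independent runs.
X = (rng.random((300_000, n)) < p).astype(int)
S_m = X[:, :m].sum(axis=1)
S_n = X.sum(axis=1)

joint = np.mean((S_n == alpha) & (S_m == beta))
factored = np.mean(S_n - S_m == alpha - beta) * np.mean(S_m == beta)
marginals = np.mean(S_n == alpha) * np.mean(S_m == beta)

print(joint, factored)     # nearly equal: P[S_{n-m} = alpha - beta] P[S_m = beta]
print(joint, marginals)    # generally different: S_n and S_m are not independent
```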
Mean, variance and autocovariance of sum processes
Let m and σ^2 denote the mean and variance of X_i, for any i. Since the sum
process S_n is the sum of n iid random variables, its mean and variance are
m_S(n) = E[S_n] = n E[X_i] = nm,
VAR[S_n] = n VAR[X_i] = n σ^2.
Note that both m_S(n) and VAR[S_n] grow linearly with n. The autocovariance
of S_n is
C_S(n, k) = E[(S_n - E[S_n])(S_k - E[S_k])]
= E[(Σ_{i=1}^{n} (X_i - m))(Σ_{j=1}^{k} (X_j - m))]
= Σ_{i=1}^{n} Σ_{j=1}^{k} E[(X_i - m)(X_j - m)]
= Σ_{i=1}^{min(n,k)} C_X(i, i) = min(n, k) σ^2,
since E[(X_i - m)(X_j - m)] = σ^2 δ_{i,j} and only the terms with i = j survive.
Alternative method
Without loss of generality, we let n ≤ k so that n = min(n, k).
C_S(n, k) = E[(S_n - nm)(S_k - km)]
= E[(S_n - nm){S_n - nm + (S_k - km) - (S_n - nm)}]
= E[(S_n - nm)^2] + E[(S_n - nm)(S_k - S_n - (k - n)m)],
and since S_n and S_k - S_n are independent,
C_S(n, k) = E[(S_n - nm)^2] + E[S_n - nm] E[S_k - S_n - (k - n)m]
= E[(S_n - nm)^2] = VAR[S_n] = n σ^2 = min(n, k) σ^2.
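The min(n, k)σ^2 result can be verified by simulation. A minimal sketch (assuming NumPy, with the X_i taken to be uniform on (0, 1), so that σ^2 = 1/12):

```python
import numpy as np

rng = np.random.default_rng(7)
N, steps = 200_000, 12
X = rng.uniform(0.0, 1.0, size=(N, steps))   # iid increments with variance 1/12
S = X.cumsum(axis=1)                          # S[:, i-1] holds S_i for each run

n, k = 4, 9
Sn, Sk = S[:, n - 1], S[:, k - 1]
empirical = np.mean((Sn - Sn.mean()) * (Sk - Sk.mean()))
print(empirical, min(n, k) / 12.0)            # C_S(n, k) = min(n, k) * sigma^2
```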
Binomial counting process
Let S_n be the sum of n independent Bernoulli random variables; that is, S_n is a
binomial random variable with parameters n and p. When there are j successes
out of n trials, S_n = j.
P[S_n = j] = C(n, j) p^j (1 - p)^{n-j} for 0 ≤ j ≤ n, and 0 otherwise.
Note that S_n has mean np and variance np(1 - p).
To prove the claim on the variance of S_n, we consider
VAR[S_n] = E[S_n^2] - E[S_n]^2, where E[S_n^2] = Σ_{j=0}^{n} j^2 C(n, j) p^j q^{n-j}.
Note that
Σ_{j=0}^{n} j^2 C(n, j) p^j q^{n-j} = Σ_{j=2}^{n} j(j - 1) C(n, j) p^j q^{n-j} + Σ_{j=0}^{n} j C(n, j) p^j q^{n-j}
= n(n - 1) p^2 Σ_{j'=0}^{n-2} C(n-2, j') p^{j'} q^{n-2-j'} + np,   where j' = j - 2.
Lastly, VAR[S_n] = n(n - 1)p^2 + np - n^2 p^2 = np - np^2 = npq.
One-dimensional random walk revisited
Each move D_i takes the value +1 with probability p and -1 with probability q = 1 - p, so
mean of each move = E[D_i] = p - q,
variance of each move = VAR(D_i) = 1 - (p - q)^2 = 4p(1 - p).
Viewing the random walk X_n as the sum process of these iid moves, m_X(n) = n(p - q)
(which is 0 in the symmetric case p = 1/2), and the autocovariance is
C_X(n, k) = min(n, k) · 4p(1 - p).
Example  Let X_n consist of an iid sequence of Poisson random variables with
mean α.
(a) Find the pmf of the sum process S_n.
(b) Find the joint pmf of S_n and S_{n+k}.
(c) Find COV(S_{n+k}, S_n).
Solution
(a) Recall that the sum of independent Poisson variables is again Poisson
distributed. Let α be the common mean of these independent Poisson
variables. The sum of these n variables is a Poisson variable with mean
E[X_1] + ··· + E[X_n] = nα.
Hence S_n is a Poisson variable with mean nα:
P[S_n = m] = (nα)^m e^{-nα} / m!.
(b) P[S_n = m, S_{n+k} = ℓ] = P[S_{n+k} = ℓ | S_n = m] P[S_n = m],   ℓ ≥ m
= P[S_k = ℓ - m] P[S_n = m]   (stationary increments)
= [(kα)^{ℓ-m} e^{-kα} / (ℓ - m)!] · [(nα)^m e^{-nα} / m!].
(c) VAR(S_k) = VAR(S_{n+k} - S_n) = VAR(S_{n+k}) + VAR(S_n) - 2 COV(S_n, S_{n+k}),
so that COV(S_{n+k}, S_n) = (1/2)[VAR(S_{n+k}) + VAR(S_n) - VAR(S_k)]
= (1/2)[(n + k)α + nα - kα] = nα.
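A numerical check of this covariance (a sketch assuming NumPy; α = 1.5, n = 6, k = 4 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)
alpha, n, k = 1.5, 6, 4
X = rng.poisson(alpha, size=(300_000, n + k))   # iid Poisson increments
S = X.cumsum(axis=1)                             # S[:, i-1] holds S_i

Sn, Snk = S[:, n - 1], S[:, n + k - 1]
empirical = np.mean((Sn - Sn.mean()) * (Snk - Snk.mean()))
print(empirical, n * alpha)                      # COV(S_n, S_{n+k}) = n * alpha
```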