
Modeling & Simulation

Lecture 10 Part 1

Generating Random
Variates

Instructor:
Eng. Ghada Al-Mashaqbeh
The Hashemite University
Computer Engineering Department
Outline
Introduction.
Characteristics of random variate generation algorithms.
General approaches for random variate generation.
Inverse transform method.
Composition.
Convolution.
Acceptance-rejection method.
Generating Random Variates
Say we have fitted an exponential
distribution to interarrival times of
customers
Every time we anticipate a new customer
arrival (place an arrival event on the
event list), we need to generate a
realization of the interarrival time.
We know how to generate U(0,1) random numbers.
Can we use them to generate exponential variates
(and variates from other distributions)?
Generating Random Variates
Algorithms
Many algorithms exist in the literature to transform the
generated random number into a variate from a specified
distribution.
The choice of algorithm or approach depends on the
distribution used:
Whether it is discrete or continuous.
Whether it can meet the algorithm's requirements.
But if more than one algorithm can be used, which one
should you choose? Consider:
Accuracy: whether the algorithm produces exact
variates from the distribution (i.e. with high accuracy) or
only an approximation.
Efficiency: in terms of needed storage and execution time.
Complexity: you must find a trade-off between complexity
and performance (i.e. accuracy).
Some specific technical issues: some algorithms need
random numbers from generators other than U(0,1), so pay attention.
Two Types of Approaches
First we will explore the general approaches
for random variate generation, which can
be applied to continuous, discrete, and
mixed distributions.
Two types:
Direct
Obtain an analytical expression
Inverse transform
Requires inverse of the distribution function
Composition & Convolution
For special forms of distribution functions
Indirect
Acceptance-rejection method

Inverse-Transform Method
Will be examined for three cases:
Continuous distributions.
Discrete distributions.
And mixed distributions (combination of both
continuous and discrete ones).
The main requirement is:
Continuous case: the cdf F(x) of the underlying
distribution must have an inverse that is either:
Available in closed form.
Or computable numerically with a good
accuracy level.
Discrete case: the method always applies, with no
restrictions.
Inverse Transform Continuous Case I
Conditions:
F(x) must be continuous and strictly increasing
(i.e. F(x_1) < F(x_2) if x_1 < x_2).
F(x) must have an inverse F^-1(x).
Algorithm:
1. Generate U ~ U(0,1).
2. Return X = F^-1(U).
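As a concrete illustration of these two steps (not part of the original slides), here is a minimal Python sketch applying the inverse transform to the exponential distribution used earlier for interarrival times; the rate parameter lam and the use of Python's random module are assumptions made for illustration only.

```python
import math
import random

def exponential_variate(lam: float) -> float:
    """Inverse-transform method for Exp(lam).

    F(x) = 1 - exp(-lam * x), so F^-1(u) = -ln(1 - u) / lam.
    """
    u = random.random()                # Step 1: U ~ U(0,1)
    return -math.log(1.0 - u) / lam    # Step 2: X = F^-1(U)

# Example: three interarrival times with mean 2.0 (lam = 0.5).
print([exponential_variate(0.5) for _ in range(3)])
```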
Inverse Transform Continuous Case II
Proof that the returned X has the desired
distribution F(x):
P(X ≤ x) = P(F^-1(U) ≤ x) = P(U ≤ F(x)) = F(x), since U ~ U(0,1) and F is strictly increasing.
Example: Weibull Distribution I
Example: Weibull Distribution II
-- Density function
of Weibull
distribution.
-- F(x) is shown on
the next slide.
Example: Weibull Distribution II
Example: Weibull Distribution III
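The worked Weibull slides above are figures in the original deck. As a hedged sketch, assuming the Weibull cdf F(x) = 1 - exp(-(x/beta)^alpha) with shape alpha and scale beta, the inverse-transform step looks like this:

```python
import math
import random

def weibull_variate(alpha: float, beta: float) -> float:
    """Inverse transform for Weibull(shape=alpha, scale=beta).

    F(x) = 1 - exp(-(x/beta)**alpha), so F^-1(u) = beta * (-ln(1 - u))**(1/alpha).
    """
    u = random.random()
    return beta * (-math.log(1.0 - u)) ** (1.0 / alpha)

print(weibull_variate(2.0, 1.5))  # one variate with assumed parameters
```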
Inverse Transform Discrete Case I
Here you have discrete values x_i, a pmf p(x_i), and a cdf F(x).
Algorithm:
1. Generate U ~ U(0,1).
2. Return the smallest x_i such that U ≤ F(x_i).
So, the above algorithm returns x_i if and only if F(x_(i-1)) < U ≤ F(x_i).
Proof: P(X = x_i) = P(F(x_(i-1)) < U ≤ F(x_i)) = F(x_i) - F(x_(i-1)) = p(x_i), since U ~ U(0,1).
Step 2 requires a search technique to find the right x_i; many orderings can be suggested, e.g. start with the largest p(x_i).
Inverse Transform Discrete Case II
-- Unlike the continuous case, the discrete inverse
transform can be applied to any discrete distribution, but
it may not be the most efficient method.
Inverse Transform Discrete
Case -- Example
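The worked example itself is a figure in the original deck. Below is a minimal Python sketch of the discrete inverse-transform search; the values and probabilities are illustrative assumptions, not the textbook's example.

```python
import random

def discrete_inverse_transform(values, probs):
    """Return the smallest value x_i whose cumulative probability F(x_i) >= U."""
    u = random.random()                 # Step 1: U ~ U(0,1)
    cumulative = 0.0
    for x, p in zip(values, probs):     # Step 2: linear search through the cdf
        cumulative += p
        if u <= cumulative:
            return x
    return values[-1]                   # guard against floating-point round-off

# Hypothetical demand distribution, for illustration only.
print(discrete_inverse_transform([0, 1, 2, 3], [0.10, 0.30, 0.40, 0.20]))
```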
Inverse Transform: Generalization
An algorithm that can be used for both continuous
and discrete distributions is:
1. Generate U ~ U(0,1).
2. Return X = min{x : F(x) ≥ U}.
The above algorithm can also be used with mixed
distributions that have discontinuities in their F(x).
Have a look at the example shown in Figure 8.5
in the textbook (p. 430).
Inverse Transform More!
Disadvantages:
Must evaluate the inverse of the distribution
function
May not exist in closed form
Could still use numerical methods
May not be the fastest way
Advantages:
Needs only one random number to generate a
random variate value.
Ease of generating truncated distributions
(redefine F(x) on a smaller finite range of x); see the sketch below.
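A hedged sketch of the truncation advantage, assuming we want Exp(lam) restricted to an interval [a, b]: map U onto [F(a), F(b)] and then invert F as usual. The distribution and parameters below are illustrative only.

```python
import math
import random

def truncated_exponential(lam: float, a: float, b: float) -> float:
    """Sample Exp(lam) truncated to [a, b] via the inverse-transform method."""
    F = lambda x: 1.0 - math.exp(-lam * x)          # exponential cdf
    u = F(a) + random.random() * (F(b) - F(a))      # uniform on [F(a), F(b)]
    return -math.log(1.0 - u) / lam                 # F^-1(u), lands in [a, b]

print(truncated_exponential(0.5, 1.0, 4.0))
```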
Composition
It is based on having a weighted sum of distributions
(called a convex combination).
That is, a distribution that has the form (or can be
decomposed as follows):
F(x) = Σ_j p_j F_j(x), where the weights satisfy Σ_j p_j = 1 (sum over j = 1, 2, ...).
Most of the time in such cases it is very difficult to find the
inverse of the distribution (i.e. you cannot use the inverse
transform method directly).
Each F_j in the above formulation is a distribution function
found in F(x), and the F_j's can be different from each other.
You can also have a composed pdf f(x) decomposed similarly to
F(x), but remember we are working on F(x).
So, the trick is to find F_j's that are easy to use in variate
generation, where you can apply geometry to decompose
F(x).
Composition Algorithm
The composition method is performed in two steps:
First: select one of the composing cdfs F_j to work with.
Second: use the inverse transform to generate the variate from this
cdf.
The first step is choosing F_j with probability p_j.
Such a selection can be done using the discrete inverse
transform method, where P(select F_j) = p_j.
Step 1 algorithm:
1. Generate a positive random integer J such that P(J = j) = p_j.
2. Return X with distribution F_J.
Also, for the second step you can use the inverse
transform method (or any other method you want).
Step 2 algorithm:
U_1 is used to select F_j.
U_2 is used to obtain X from the selected F_j.
As you see, you must generate at least two random
numbers, U_1 and U_2, to obtain X using the composition
method.
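A minimal Python sketch of the two-step composition algorithm; the two-component mixture (exponential plus uniform) and its weights are hypothetical, chosen only to show how U_1 selects F_j and U_2 is then inverted.

```python
import math
import random

def composition_variate(probs, inverses):
    """Step 1: pick component j with probability probs[j] (uses U1).
    Step 2: apply that component's inverse cdf to a second uniform (U2)."""
    u1 = random.random()                  # U1 selects the component F_j
    cumulative = 0.0
    for p, inverse_cdf in zip(probs, inverses):
        cumulative += p
        if u1 <= cumulative:
            u2 = random.random()          # U2 is transformed by F_j^-1
            return inverse_cdf(u2)
    return inverses[-1](random.random())  # guard against round-off

# Hypothetical mixture F(x) = 0.6*F_Exp(1)(x) + 0.4*F_U(2,5)(x), illustration only.
probs = [0.6, 0.4]
inverses = [
    lambda u: -math.log(1.0 - u),   # Exp(1): F^-1(u) = -ln(1 - u)
    lambda u: 2.0 + 3.0 * u,        # U(2, 5): F^-1(u) = 2 + 3u
]
print(composition_variate(probs, inverses))
```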
How to decompose a complex F(x)?
Using geometry to decompose F(x) includes two
approaches:
Either divide F(x) vertically, so you split the interval of
x over which the whole F(x) is defined.
Here you do not alter the F_j's at all over each interval.
See example 1 on the next slides.
Or divide F(x) horizontally, so all F_j's share the same x
interval, which is the same as that of the original F(x).
See example 2 on the next slides.
Your next step is to define the p_j associated with
each F_j.
Hint: always treat p_j as the portion of the total area
under F(x) that falls under F_j, but you must
compensate for it.
Composition Example 1
We will solve it using two methods:
-- inverse transform.
-- composition.
Composition Example 1 cont.
Composition Example 1 cont.
Composition Example 2
Try to solve it using the inverse transform method and see
how much easier it is to solve using composition.
Composition Example 2 cont.
Convolution
Here X itself is expressed as a sum X = Y_1 + Y_2 + ... + Y_m of random variables Y_i with known distributions.
So, you need one random number U for each Y_i, transform it into that Y_i, and return the sum.
Do not get confused between convolution and composition:
In composition: you express the cdf of X as a weighted sum
of other distribution functions.
In convolution: you express X itself as a sum of other
random variables.
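As a hedged illustration of convolution (a standard case, not necessarily the slide's own example): an m-Erlang variate is the sum of m independent exponential variates, each obtained here by the inverse transform.

```python
import math
import random

def erlang_variate(m: int, lam: float) -> float:
    """Convolution method: X = Y_1 + ... + Y_m with Y_i ~ Exp(lam) independent."""
    total = 0.0
    for _ in range(m):
        u = random.random()                  # one U per component Y_i
        total += -math.log(1.0 - u) / lam    # Y_i by inverse transform
    return total

print(erlang_variate(3, 0.5))  # sum of 3 exponentials with rate 0.5
```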
Convolution -- Example
Acceptance-Rejection Method I
Specify a function t(x) that majorizes the density f(x), i.e. t(x) ≥ f(x) for all x.
Now work with the new density function r(x) = t(x)/c, where c = ∫ t(x) dx (integrated over the whole real line).
Algorithm:
1. Generate Y with density r.
2. Generate U ~ U(0,1), independent of Y.
3. If U ≤ f(Y)/t(Y), return X = Y. Otherwise, go back to Step 1.
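A minimal Python sketch of the algorithm above; the density f(x) = 3x^2 on [0, 1] and the constant majorizing function t(x) = 3 (so c = 3 and r is simply U(0,1)) are assumptions for illustration, not the textbook's example.

```python
import random

def acceptance_rejection() -> float:
    """Acceptance-rejection for the assumed density f(x) = 3x^2 on [0, 1]."""
    f = lambda x: 3.0 * x * x      # target density
    t = lambda x: 3.0              # majorizing function, t(x) >= f(x) on [0, 1]
    while True:
        y = random.random()        # Step 1: Y ~ r, which here is U(0, 1)
        u = random.random()        # Step 2: U ~ U(0, 1), independent of Y
        if u <= f(y) / t(y):       # Step 3: accept with probability f(Y)/t(Y)
            return y               # accepted: X = Y

print(acceptance_rejection())
```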
Acceptance-Rejection Method II
Note the following:
f(x) and r(x) are density functions since they integrate
to 1 over their total interval.
t(x) is not a density function.
The acceptance-rejection method is an indirect
method since it works on f(x) indirectly through
both r(x) and t(x).
Step 1 in the previous algorithm uses one of the
direct methods for random variate generation
that we have learned before.
Acceptance-Rejection Method III
The probability of acceptance of Y in step 3 =
1/c.
So, as c decreases (i.e. the area under t(x) gets
smaller), the expected number of iterations of
this algorithm is reduced.
A smaller c means that t(x) is very close to f(x)
(has a very similar shape).
So, your task is to find a t(x) that resembles f(x)
closely, but at the same time is not too complex
to generate Y from (via r(x)).
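To make the efficiency argument concrete (a standard fact, not stated explicitly on the slide): each trial is accepted independently with probability 1/c, so the number of trials N until acceptance is geometric.

```latex
P(N = n) = \left(1 - \tfrac{1}{c}\right)^{n-1} \tfrac{1}{c}, \quad n = 1, 2, \dots,
\qquad \text{hence} \qquad E[N] = c .
```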
Acceptance-Rejection -- Example
Acceptance-Rejection Example cont.
Exercise: Generate 3 random variates for this example using
any LCG you want.
Acceptance-Rejection Example
Better t(x)
-- See the complete solution of the example in your textbook.
-- Now, how do we generate Y from r(x)? We need composition and then the inverse transform.
-- So, three methods are involved here.
Additional Notes
The lecture covers the following
sections from the textbook:
Chapter 8
Sections:
8.1,
8.2 (8.2.1–8.2.4)
