
10 Risk and Uncertainty

Definition of risk, probability of occurrence, Gambler's Ruin, Expected Value, Decision Theory, Monte Carlo Simulation

10.1 Definitions
Probability is a numerical measure of uncertainty. The probability of an event is the likelihood of that event occurring, measured as a number between zero and one. A probability of zero means the event cannot occur, and a probability of one means the event is certain to occur. All other values lie between these limits and represent the likelihood of the event occurring.

For any probability p, 0 &lt;= p &lt;= 1

If the two outcomes of an event are equally likely, then the probability of either outcome is one divided by two, or one-half. For example, the birth of a child is an event with two possible outcomes – a boy or a girl. The two outcomes are equally likely, so each has a probability of 1/2 or 0.5.

Experiments lead to outcomes, which we call events only when the outcome is not certain. The collection of all possible outcomes is defined as the Sample Space. Some sample spaces are finite, such as {boy, girl}; others may be mapped to the positive integers as a discrete sample space. When an experiment is repeated a large number of times, the relative frequency of an event approaches the probability of that event; for example, flipping a coin many times and plotting the number of heads divided by the number of trials.
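To illustrate, a minimal simulation (a Python sketch, not part of the original notes) flips a fair coin a growing number of times; the relative frequency of heads settles toward the true probability of 0.5.

```python
import random

# Relative frequency of heads approaches the true probability (0.5)
# as the number of trials grows (illustrative sketch).
random.seed(42)

for trials in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(trials))
    print(f"{trials:>7} flips: relative frequency of heads = {heads / trials:.4f}")
```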

The proportion of times an event can be expected to occur in a long run of repeated trials under identical conditions is referred to as objective probability. A probability value set solely on the basis of personal judgement is called subjective probability.

Uncertain outcomes are called events. A complete list of events is called the sample space.
Mutually Exclusive Events
The notion of mutual exclusivity rests on the important idea that if one event occurs then the others cannot occur. We can add the probabilities of several events ONLY if they are mutually exclusive, i.e. no two of them can occur together.

As an example, consider the roll of a die with six faces numbered 1 to 6. The probability of any particular number occurring is 1/6. Because the outcomes are mutually exclusive, the probability of a 1 OR a 2 occurring is additive: 1/6 plus 1/6, which equals 1/3.

Mutually exclusive events mean that the joint occurrence of the events is impossible. This allows us to use the additive law to represent the probability of A or B by adding the respective probabilities, i.e. P[A or B] = P[A] + P[B].

Independent Events
Events are independent only when they do not depend upon prior events or on other events in the sample space. The probability of A and B occurring together lies in the intersection of the two events; in general P[A and B] = P[A] x P[B|A], and for independent events this reduces to P[A and B] = P[A] x P[B].

Conditional probability refers to the probability of an event given that another event has occurred. For example, what is the probability of a card being a king given that the card is a face card? There are 12 face cards, 4 of which are kings, so the conditional probability is 4/12 or 1/3. The conditional probability of A given B is therefore the proportion of times that A occurs out of all the times that B occurs.

Events are independent only when their conditional probabilities are equal to their respective
unconditional probabilities.
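A quick check of the face-card example by enumeration (a Python sketch; the rank and suit labels are only for illustration):

```python
# Enumerate a 52-card deck and compute P(king | face card) by counting.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]

face_cards = [c for c in deck if c[0] in ("J", "Q", "K")]
kings_among_faces = [c for c in face_cards if c[0] == "K"]

p_king = 4 / 52                                          # unconditional probability
p_king_given_face = len(kings_among_faces) / len(face_cards)

print(f"P(king)        = {p_king:.4f}")              # 0.0769
print(f"P(king | face) = {p_king_given_face:.4f}")   # 0.3333
# The conditional and unconditional probabilities differ,
# so 'king' and 'face card' are NOT independent events.
```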

Bayes Theorem was proposed in the eighteenth century by Reverend Thomas Bayes, who
essentially stated that probabilities should be revised in accordance with new or additional
empirical information. New results from further experiments provide additional data. For
example, let us consider acquiring additional seismic information about a prospect slated for
exploration drilling. The new information may increase the certainty of the prospect by giving it
better definition; however, there is also the possibility that the new information may show that
the prospect really does not exist. Therefore we can revise the probabilities upward or
downward. Seismic information is relatively cheap when compared to an exploration well.

Bayes' rule is used in the analysis of decisions using decision trees. Bayes' formula is

P(A|B) = P(B|A) x P(A) / P(B)
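As a sketch of how the revision works, suppose a prospect has a prior probability of success of 0.20 and assume (hypothetically) that seismic gives a positive indication on 80% of real prospects and on 30% of non-prospects; these numbers are not from the text. Bayes' formula then revises the probability upward after a positive survey:

```python
# Bayesian update of a prospect's probability of success after new seismic data.
# All numbers are hypothetical and only illustrate P(A|B) = P(B|A) * P(A) / P(B).

p_success = 0.20                 # prior probability the prospect exists
p_pos_given_success = 0.80       # P(positive seismic | prospect exists)
p_pos_given_failure = 0.30       # P(positive seismic | no prospect)

# Total probability of a positive seismic indication
p_positive = (p_pos_given_success * p_success
              + p_pos_given_failure * (1 - p_success))

# Revised (posterior) probability of success given the positive indication
p_success_given_pos = p_pos_given_success * p_success / p_positive

print(f"Prior P(success)            = {p_success:.2f}")
print(f"Posterior P(success | data) = {p_success_given_pos:.2f}")   # about 0.40
```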

10.2 Gambler’s Ruin


Gambler's Ruin is defined as a string of dry holes drilled consecutively that essentially bankrupts the player, even though each well carries a nominal probability of success.

Let us assume that the probability of success has been calculated as 10%. Then the probability of
a loss is 90%. Let us now consider an exploration drilling campaign consisting of four wells.
What is the probability of four dry holes?

The probability of 1 dry hole is 0.90 or 90%, so the probability of at least one success in one well is 0.10.

The probability of 2 consecutive dry holes is (0.90)(0.90) = 0.81,
so P(at least one success in 2 wells) = 0.19 or 19%.
The probability of 3 consecutive dry holes is (0.90)(0.90)(0.90) = 0.729,
so P(at least one success in 3 wells) = 0.271 or 27.1%.
The probability of 4 consecutive dry holes is (0.90)(0.90)(0.90)(0.90) = 0.6561,
so P(at least one success in 4 wells) = 0.3439 or 34.39%.

We can see that the probability of at least one success increases with each additional well drilled, even though the per-well probability of success remains 10%.
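The arithmetic above can be reproduced in a few lines (a sketch assuming independent wells with a constant 10% chance of success per well):

```python
# Probability of n consecutive dry holes and of at least one success,
# assuming independent wells with a constant per-well chance of success.
p_success_per_well = 0.10
p_dry = 1 - p_success_per_well

for n in range(1, 5):
    p_all_dry = p_dry ** n             # all n wells are dry
    p_at_least_one = 1 - p_all_dry     # at least one discovery in n wells
    print(f"{n} wells: P(all dry) = {p_all_dry:.4f}, "
          f"P(at least one success) = {p_at_least_one:.4f}")
```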

Small E&P companies do not generally have enough risk capital to continue after a certain number of dry holes and often give up and go out of business. Larger companies, however, can continue drilling since their budgets are much greater, thereby increasing their chance of at least one success.
The probability of at least one success out of n trials requires the use of the Binomial Theorem and Distribution. The assumptions used in developing the binomial function are
• The experiment has only two outcomes, success or failure
• The experiment is repeated many times, for example n times
• The probability of success is constant
• The n trials are independent of each other

Note that these assumptions ignore real-world considerations: the probability of success may be revised as additional information is acquired, and the next well may not be independent of the wells drilled previously. The assumptions may therefore change during the course of an exploration campaign.

The Binomial Probability Distribution Function

P(x) = [ n! / ( x! (n-x)! ) ] p^x q^(n-x)

where P(x) is the probability of obtaining x successes
n is the number of trials
p is the probability of success and
q = (1 - p) is the probability of a failure.

A probability distribution function must satisfy two essential criteria to be a probability density function, viz. f(x) &gt;= 0, i.e. it is never negative, and the integral of f(x) over all x is equal to 1.0. Distributions that satisfy these conditions include the Uniform Distribution, the Triangular Distribution and the Normal Distribution.

The Cumulative Distribution Function is used to express the probability of an outcome being less than or equal to (or, equivalently, greater than) a specified value.

A 12-well development drilling program has been approved. Assume that each well has a
probability of success of 0.70. What is the probability of a total loss?
P(loss per well) = 0.30
P(12 consecutive losses) = (0.30)^12 = 0.000 000 53 or 0.000 053 %
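The binomial formula can be coded directly; the sketch below also reproduces the 12-well total-loss figure:

```python
from math import comb

def binomial_pmf(x: int, n: int, p: float) -> float:
    """Probability of exactly x successes in n independent trials."""
    q = 1.0 - p
    return comb(n, x) * p**x * q**(n - x)

# 12-well development programme, per-well probability of success 0.70.
n, p = 12, 0.70
p_total_loss = binomial_pmf(0, n, p)           # zero successes = 0.3**12
print(f"P(total loss) = {p_total_loss:.2e}")   # about 5.3e-07

# Probability of at least one success in the programme
print(f"P(at least one success) = {1 - p_total_loss:.8f}")
```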
10.3 Risk Mitigation Measures
A company will drill exploration or outstep wells based upon the quality of the available geological and reservoir engineering data. If the data show good closures in well-defined, mappable structures containing known reservoir rocks in an oil/gas-rich basin, then the management decision may be to drill these wells on a 100% equity basis, the reasoning being that the probability of success is high because of the confidence provided by good geological, geophysical and reservoir data sets.

If, however, the company wishes to reduce its financial exposure to risk, it may form an unincorporated joint venture in which both parties participate at fixed equity interests, thereby reducing the risk for both companies. In the event of a success, however, the company's share of the project NPV is also reduced. The main question to be answered is how much equity should be divested – 10% or 30%?

Another risk-reduction strategy is to identify as many exploration plays as the basin, sub-basin or block contains and specifically target the drilling of each one. Committing all wells to the same play type really means testing only one possibility, so both structural closures and stratigraphic plays should be tested. If there are major trends in the basin, then wells should be placed to test each trend and play type. In this way risk can be mitigated, although it requires a larger measure of risk capital.

10.4 Expected Value and Expected Monetary Value, EMV


Expected value is the algebraic sum of the probability of each event multiplied by its respective outcome. It is the long-run average of many repeated trials of a particular game or experiment. These algebraic sums can be positive, zero or negative; in terms of decision criteria the average expectation is a loss if the EV is negative and a gain if the EV is positive. Since each option in a decision has a probable outcome, we can use expected value to help choose among decision alternatives. For a drilling decision the expected monetary value is

EMV = p(s) x NPV10 - (1 - p(s)) x Loss

The EMV can be a positive number, zero or a negative number. The decision rule is to accept an alternative when its EMV is positive.
Example
Probability of a deep water exploration success is 12%. NPV@10% is $4447.5 million. The
estimated loss is the dry hole cost of the exploration well which amounts to $235 million.
Calculate the EMV.

P(s) = 0.12
P(loss) = 0.88
NPV = $4,447.5 million
Loss = $235 million

EMV = 0.12 x 4,447.5 - 0.88 x 235 = $326.9 million
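A minimal sketch of the EMV calculation using the figures from this example:

```python
def emv(p_success: float, npv_success: float, loss: float) -> float:
    """Expected monetary value: weight the success NPV against the dry-hole loss."""
    return p_success * npv_success - (1 - p_success) * loss

# Deep-water exploration example: P(s) = 12%, NPV10 = $4447.5 million,
# dry-hole cost = $235 million.
result = emv(0.12, 4447.5, 235.0)
print(f"EMV = ${result:.1f} million")   # 326.9, positive, so the decision is to drill
```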
Expected value is a measure of the central tendency of a distribution. Quite often a measure of the distribution's dispersion about the mean value, called the variance, is necessary to describe the distribution fully. The mean and standard deviation together therefore assist in describing expectation.

10.5 Decision Theory


Decisions under uncertainty involve three elements: acts or choices, events, and outcomes or consequences. In that order, the decision-maker selects a choice, which leads to probable events, which in turn result in outcomes that can be measured.

Decision Trees

A common way of representing decisions is a tree-like structure called a decision tree. Decision trees are made up of nodes and branches, which represent the sequence of decisions and their related events and outcomes. A decision node is usually depicted as a square and an event node as a circle. The lines joining nodes, events and outcomes are the branches.

For example, if the decision is to drill a well that has probable outcomes of being a dry hole or a
producer then the choice taken will result in an outcome. We can represent this by a Decision
tree.

[Decision tree: acts or choices lead to events, which lead to consequences or outcomes.
Decision point (node):
  Drill    -> Producer (NPV positive) or Dry Hole (loss of money)
  Farmout  -> Producer (NPV positive) or Dry Hole (loss of money)]
Therefore a Decision Tree is a graphical representation of decisions, outcomes and the value of
the respective outcome. Each outcome has a subjective probability value and the sum of the
probabilities at a node is always equal to 1.0.

Decision Trees are usually evaluated by using the Expected Value approach, starting from the outcomes and working backwards to the decision node. The branch with the largest EV is chosen as the best decision based on the information available at that point in time.

The Decision Node is the point where alternative choices exist. The decision choice is determined from the subjective probabilities assigned to the events; the sum of the probabilities at each event node must equal 1.0. The payoff or outcome is expressed as a monetary value, such as the NPV from the cash flows of a producing well, or the expected cost incurred for a dry hole.

The expected value then becomes the algebraic sum. The decision maker usually chooses the option with the highest EMV. Bayes' Decision Rule is to select the choice having the maximum expected payoff (NPV) or the minimum expected opportunity loss.

Basic Steps in Evaluating Decision Trees

Step 1 Draw the tree starting from left to right with the decision node at the left.

Step 2 Put in all possible secondary decision points and outcomes

Step 3 Check the tree to ensure that the logic of what is being decided upon is
being evaluated.

Step 4 At each event (chance) node put in the subjective probability of each branch

Step 5 At each outcome put in the subjective probability of that outcome

Note that in steps 4 and 5 the probabilities at any one node must add up to 1.0

Step 6 Put in the value of each outcome e.g. NPVs

Step 7 For each event node calculate the EV by using the expected value formula, i.e.
EV = p1 x Value1 + p2 x Value2 + . . .

Step 8 Work backwards (roll back) until the decision node branches have EVs
Step 9 Choose the decision which has the highest EV (a short sketch of this roll-back follows)
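A compact sketch of the roll-back procedure, using the drill-versus-farmout structure shown earlier; all probabilities and payoffs here are hypothetical and chosen only to demonstrate the steps:

```python
# Roll back a simple decision tree by expected value.
# Probabilities and payoffs are hypothetical, used only to show the procedure.

def expected_value(branches):
    """EV of an event node: sum of probability x value over its branches."""
    return sum(p * v for p, v in branches)

# Event nodes: (probability, payoff in $ million) - probabilities sum to 1.0
drill_node = [(0.20, 400.0),    # producer: positive NPV
              (0.80, -40.0)]    # dry hole: loss of the well cost
farmout_node = [(0.20, 120.0),  # producer: reduced NPV (partner carries the well)
                (0.80, 0.0)]    # dry hole: no cost to us

alternatives = {"Drill": expected_value(drill_node),
                "Farmout": expected_value(farmout_node)}

for choice, ev in alternatives.items():
    print(f"{choice:8s} EV = ${ev:6.1f} million")

best = max(alternatives, key=alternatives.get)
print(f"Best decision (highest EV): {best}")
```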

Expected Payoff of Perfect Information


Decisions are best made with all relevant information in hand. In real life this may not be the case, and acquiring additional information becomes a choice in itself: what is the value of the information, and what effect does it have on the decision alternatives?

The expected value of perfect information is the difference between the expected value with perfect information and the expected value without it.
Example
An exploration prospect is being evaluated at 3 ranges of closure. Three cases are proposed
for consideration as Minimum, Most Likely and Maximum with the following properties:

                        Min    Most Likely    Max
Acreage, acres         5,000     10,000     25,000
Porosity, decimal       0.18       0.24       0.27
Oil Saturation          0.55       0.65       0.70
Sand Thickness, feet      50        200        300
Oil FVF, rb/STB          1.8        1.7        1.6

The OOIP is calculated using the formula OOIP = 7758 A h (porosity) So / Bo in stock tank barrels, expressed here in mmbo (million barrels), where A is the area in acres, h the sand thickness in feet, So the oil saturation and Bo the oil formation volume factor.

Min Most Likely Max


OOIP, mmbo 107 1424 6873

Recovery Factors for each case are

Min Most Likely Max


RF 0.1 0.2 0.3

The recoverable reserves for each case can now be calculated

Min Most Likely Max


Rec. res., mmbo 11 285 2062

The unit NPV @ 10% is $3/bbl, so we can multiply the recoverable reserves by this unit NPV10 to obtain the NPV for each case, i.e.

                        Min    Most Likely    Max
NPV @ 10%, $ million     32        854       6186
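These volumetric and value calculations can be reproduced as follows (a sketch; small differences from the hand-rounded figures above are possible):

```python
# Volumetrics and value for the Min / Most Likely / Max closure cases.
# OOIP (stock tank barrels) = 7758 * A * h * porosity * So / Bo
cases = {
    #              A (acres), h (ft), porosity, So,  Bo,  RF
    "Min":         (5_000,     50,    0.18,  0.55, 1.8, 0.10),
    "Most Likely": (10_000,   200,    0.24,  0.65, 1.7, 0.20),
    "Max":         (25_000,   300,    0.27,  0.70, 1.6, 0.30),
}
unit_npv = 3.0   # NPV @ 10% per recoverable barrel, $/bbl

for name, (area, h, phi, so, bo, rf) in cases.items():
    ooip_mmbo = 7758 * area * h * phi * so / bo / 1e6    # million barrels
    reserves_mmbo = ooip_mmbo * rf
    npv_musd = reserves_mmbo * unit_npv
    print(f"{name:12s} OOIP = {ooip_mmbo:7.0f} mmbo, "
          f"reserves = {reserves_mmbo:6.0f} mmbo, NPV = ${npv_musd:6.0f} million")
```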


The Expected Value of the prospect can now be calculated if we know the subjective probabilities of the three cases, for example:

P = 0.65   Min Case          NPV $32 million
P = 0.30   Most Likely Case  NPV $854 million
P = 0.05   Max Case          NPV $6,186 million

The Expected Value for the prospect at the node above can now be calculated as follows

Expected Value = 0.65 x $32 + 0.30 x $854 + 0.05 x $6,186 = $586 million

At this node there is no loss being considered.

If we consider a decision whether to drill or not to drill, then we need an estimate of the Dry Hole Cost, which in this case is $40 million, and the probability of a loss, which is 90%. We can now determine the expectation at the preceding decision node.

Chance node (given a discovery), EV = $586 million:
P = 0.65   Min Case          NPV $32 million
P = 0.30   Most Likely Case  NPV $854 million
P = 0.05   Max Case          NPV $6,186 million

Drill decision:
P(s) = 0.10, P(loss) = 0.90, Loss = $40 million
EMV = 0.10 x 586 - 0.90 x 40 = + $22.6 million
Since the EMV is positive the decision will be to drill the well.
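The two-stage calculation can be checked numerically (a sketch using the figures quoted above):

```python
# Chance node: expected NPV over the three closure cases (no loss at this node).
size_cases = [(0.65, 32.0), (0.30, 854.0), (0.05, 6186.0)]   # (probability, NPV $m)
ev_success = sum(p * npv for p, npv in size_cases)           # about $586 million

# Drill decision: 10% chance of success, 90% chance of a $40 million dry hole.
p_success, dry_hole_cost = 0.10, 40.0
emv = p_success * ev_success - (1 - p_success) * dry_hole_cost

print(f"EV given success = ${ev_success:.0f} million")
print(f"EMV of drilling  = ${emv:.1f} million")   # about +22.6, so drill
```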
Minimum Probability of Success
If we set EMV = 0 we can solve for p(s)min, the break-even probability of success. Above this p(s)min the EMV is positive and therefore the decision will be to drill.

In terms of certainty, this means there is a minimum value of p(s) that makes the prospect drillable. In other words, p(s)min is the lowest certainty required for drilling to be justified. If the p(s) for the prospect is lower than p(s)min, then the prospect is too risky to drill.

0 = p(s)min x NPV - (1 - p(s)min) x Loss

p(s)min x NPV = (1 - p(s)min) x Loss

p(s)min x NPV + p(s)min x Loss = Loss

p(s)min x (NPV + Loss) = Loss

p(s)min = Loss / (NPV + Loss)

Example
The NPV for a prospect is $25 million and the expected loss is $8 million; the estimated probability of success is 40%. What is the minimum probability of success?

p(s)min = 8 / (25 + 8) = 0.24

i.e. the prospect must have a greater than 24% chance of success before being drilled. Since the estimated p(s) of 40% exceeds this, the decision would be to drill.

By changing the Loss and NPV values we can understand how these two variables affect p(s)min.

NPV, $m   Loss, $m   p(s)min     %
   0.1        8        0.99     99%
   1          8        0.89     89%
   5          8        0.62     62%
  10          8        0.44     44%
  20          8        0.29     29%
  50          8        0.14     14%
 100          8        0.07      7%
1000          8        0.01      1%
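The sensitivity table can be generated with a short loop (a sketch of the break-even formula):

```python
def p_success_min(npv: float, loss: float) -> float:
    """Minimum probability of success for a break-even (EMV = 0) decision."""
    return loss / (npv + loss)

loss = 8.0   # dry-hole cost, $ million, held constant
for npv in (0.1, 1, 5, 10, 20, 50, 100, 1000):
    p_min = p_success_min(npv, loss)
    print(f"NPV = {npv:7.1f} $m  ->  p(s) min = {p_min:.2f} ({p_min:.0%})")
```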

In this example the loss, which is the cost of drilling a dry hole, is kept constant while the NPV is increased. The p(s)min decreases from 99% at an NPV of $0.1 million to 1% at an NPV of $1,000 million (and equals 50% when the NPV equals the $8 million loss). The prospect's probability of success must be greater than p(s)min for the decision to drill to be approved.

In other words, the larger the prospect's NPV relative to the potential loss, the lower the certainty of success required to justify drilling.

[Figure: p(s)min variation as NPV increases, with the loss held constant at $8 million. The curve of p(s)min falls from about 1.0 towards 0 as NPV @ 10% increases from $0.1 million to $1,000 million (log scale). If the probability of the event lies above the curve, the decision is acceptable.]
