
GARP: Energy Risk Professional (ERP)

Physical Commodity Markets


Study Notes Based on 2013 ERP Course Pack

1.11 Origins of Oil and Gas


For hydrocarbons to accumulate, three conditions must be met:

A sedimentary basin must be created - basins result from movement of the Earth's crust, which creates large depressions into which sediments from elevated areas are transported over time.
Sediments in such basins must contain a high level of organic matter - this matter becomes part of the sedimentary material to create source rock.
Elevated temperature and pressure must be present - these conditions must be sufficient to convert the material in the source rock into oil and gas. Maturity describes the degree to which petroleum generation has occurred. Heavy, thick oil is considered immature, having been generated at relatively low temperatures. Mature oil, which is lighter and less viscous, forms at higher temperatures.

An important process, called migration, sees the hydrocarbons moving out of the source rock through cracks, faults and fissures and into porous and permeable reservoir rock (movement is typically upwards). This reservoir rock must be configured (from prior geologic activity) in a way that retains the hydrocarbons within structures called traps. This allows oil and gas to accumulate in sufficient volumes.
Initial microbial action (in the presence of oxygen dissolved in seawater) returns part of the sediment's carbon to the atmosphere as carbon dioxide. As the sediment layer gets thicker, subsequent bacterial processes at work in the deep seabed mud (with little or no oxygen present in the mud itself or the water immediately above it) convert the remaining organic matter into a waxy material called kerogen. Kerogen is a complex mixture of large organic molecules whose appearance and characteristics depend on the nature and concentration of the materials of which it is composed. Kerogen concentrations as low as 1-3% are generally sufficient to produce source rock suitable for commercial exploitation.
Figure - Sediments rich in kerogen form source rock; hydrocarbons migrate from the source rock to reservoir rock, where oil and gas accumulate.

Black shale is the most common kind of source rock. Oil source rocks can contain up to 40% organic matter, and the level approaches 100% for some types of coal.

Temperature plays a key role in the generation of oil and gas from kerogen. As the organic-rich source rock undergoes progressive burial (as additional sediments are laid down above it), the rock becomes progressively hotter. This phenomenon reflects what is called the geothermal gradient of the Earth. The gradient is variable in approximately the first 60-122 meters below the Earth's surface, owing to atmospheric influences and circulating groundwater. Below 122 meters, temperature rises steadily with depth, at approximately 1.5-2°C per 30.5 meters. The term oil window is used to describe the range of temperatures or depths within which most of the oil's complex constituents are produced. Peak conversion
occurs at around 100°C; however, if the temperature rises above 130°C, the crude oil itself begins to break into smaller molecules, and gas begins to be produced. Wet gas contains high levels of relatively heavy hydrocarbons, whilst dry gas contains lighter hydrocarbon gases. Other factors that can affect the rate of oil generation are pressure (imposed by overlying rock and sediment), the presence of heat-tolerant bacteria that act on the oil, and the presence of hydrogen and oxygen (from water).
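As a rough illustration of the geothermal gradient and the oil window, the sketch below (Python) converts the depth and temperature figures quoted above into an approximate burial depth; the 15°C surface temperature and the mid-range gradient of 1.75°C per 30.5 meters are assumptions for illustration only.

# Minimal sketch based on the gradient figures quoted above (illustrative only;
# surface_temp_c and the gradient value are assumptions, not data from the notes).

def temperature_at_depth(depth_m, surface_temp_c=15.0, gradient_c_per_30m=1.75):
    """Estimate formation temperature below the ~122 m zone of surface influence."""
    if depth_m <= 122:
        return surface_temp_c  # near-surface zone dominated by atmosphere/groundwater
    return surface_temp_c + (depth_m - 122) / 30.5 * gradient_c_per_30m

def depth_for_temperature(target_c, surface_temp_c=15.0, gradient_c_per_30m=1.75):
    """Invert the gradient to ask how deep the rock must be buried to reach target_c."""
    return 122 + (target_c - surface_temp_c) * 30.5 / gradient_c_per_30m

# Peak oil generation is quoted at ~100 C; gas and cracking above ~130 C.
print(round(depth_for_temperature(100)))   # ~1603 m
print(round(depth_for_temperature(130)))   # ~2126 m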
Oil and gas typically move around within (and are eventually expelled from) source rock after generation, typically driven by the pressure of overlying rock layers and aided by the presence of faults and cracks in the source rock and nearby rock layers. Oil very rarely collects in large underground pools of liquid, accumulating instead in the pores of highly permeable reservoir rock. Permeable rock has extensive and well-connected pores that enable substantial hydrocarbon flow to a drilled wellbore. Water is virtually always found in the pore spaces of reservoir rock, intermingled with oil and gas. For this reason, most wells pump not only oil and gas, but also mineral-laden water called brine. The saline water is heavier than most forms of crude and generally sinks beneath it.
Reservoir rock with good porosity and permeability is generally classified as
either a clastic or a carbonate system. Clastic sediments are formed from
fragments of various rocks that were transported and redeposited to create new
formations. Carbonate rock is typically formed by a chemical reaction between
calcium and carbonate ions in shallow seas, or by a process called
biomineralization.
There are several types of traps, all of which have been created by prior deformation of the Earth's crust. Highly impermeable rocks above and around the trap seal it in a way that prevents significant movement of hydrocarbons.

Structural trap - formed by tectonic processes, the movement of the rock plates that comprise the top of the Earth's crust.
Stratigraphic trap - created when a seal or barrier is formed above and around an oil or gas bearing formation by sedimentary deposition of impermeable rock.
Combination trap - formed by a combination of processes that occurred in the sediments during the time of deposition of the reservoir bed.

1.11 Oil Overview


Petroleum in its most common liquid form is referred to as crude oil. Crude oil is a mixture of a very large number of hydrocarbons, with names that depend on the number of carbon atoms they contain (in addition to functional groups). The type, variety, and structure of the hydrocarbon molecules in crude oil determine its physical and chemical properties.
The petroleum industry uses three major parameters to classify crude oil (a calculation sketch for API gravity follows this list):

Geographic location - affects the cost of transporting the crude to a refinery.
API gravity - an oil industry measure of density. Light crude oil has low density (high API gravity). Oil with an API gravity of less than 10 is classified as extra-heavy.
Sulphur content - crude is considered sweet if it contains little sulphur, or sour if it contains substantial amounts.
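As referenced above, the following minimal sketch (Python) shows the standard API gravity calculation from specific gravity at 60°F; the extra-heavy threshold (API < 10) comes from the text, while the other classification cut-offs are commonly cited values and should be treated as assumptions.

# API gravity sketch. The 141.5/131.5 formula is the standard industry definition;
# thresholds other than extra-heavy (<10, from the text) are assumed values.

def api_gravity(specific_gravity_60f):
    """API gravity from specific gravity relative to water at 60 degrees F."""
    return 141.5 / specific_gravity_60f - 131.5

def classify(api):
    if api < 10:
        return "extra-heavy"      # per the text: API below 10
    elif api < 22.3:
        return "heavy"            # assumed threshold
    elif api < 31.1:
        return "medium"           # assumed threshold
    return "light"

print(round(api_gravity(0.827), 1), classify(api_gravity(0.827)))  # ~39.6 -> light
print(round(api_gravity(1.014), 1), classify(api_gravity(1.014)))  # ~8.0  -> extra-heavy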


Light crude oil is more desirable than heavy oil because it produces a higher
yield of gasoline, a highly valued petroleum product for transportation use.
Sweet oil commands a higher price than sour oil because it has fewer
environmental problems and requires less refining.
Oil from an area in which its molecular characteristics have been determined is used as a pricing reference, or benchmark, in global oil markets. Some common reference crudes are:

West Texas Intermediate (WTI) - a very high quality, sweet and light oil delivered at Cushing, Oklahoma. It is the most widely traded oil futures contract in the world.
Brent Blend - made up of 15 oils from fields in the Brent and Ninian systems in the East Shetland Basin in the North Sea. Oil production from Europe, Africa and the Middle East tends to be priced using this benchmark.
Dubai-Oman - used as a benchmark for Middle Eastern sour crude flowing to the Asia-Pacific region.
OPEC Reference Basket - a weighted average of oils and blends from the 12 nations that make up the Organization of the Petroleum Exporting Countries.
Midway-Sunset Heavy - the benchmark by which heavy oil in California is priced. Midway-Sunset is a large oil field in Kern County, California.

Declining amounts of the above benchmark oils are being produced each year,
so other oils are more commonly what is actually delivered in the futures
contract.
Methane is the simplest hydrocarbon and is the primary component of natural
gas.
Several types of oil resources are called unconventional, to distinguish them from
oil that can be extracted using traditional oil field methods. These include tar
sands and shale oil.

Tar/oil sands - crude oil is sometimes found in semisolid form, mixed with sand and water. Tar sands contain bitumen, a kind of heavy crude oil. The sticky, black, tar-like material is so thick that it must be heated or chemically diluted before it will flow. Oil-eating bacteria have destroyed some of the lighter fractions of crude oil in such oil sands, leaving behind the heavier bitumen fractions.
Shale oil - shale oil is found in shale source rock that has not been exposed to heat or pressure long enough to convert trapped hydrocarbons into crude oil. These are usually relatively hard rocks called marls, composed primarily of clay and calcium carbonate and containing the waxy substance kerogen. The trapped kerogen can be converted into crude oil using heat and pressure to simulate natural processes.

If an oil and gas reservoir has low permeability, procedures exist to increase the
flow of oil or gas through the formation to the wellbore. One method used to
increase the flow from a tight formation is fracturing. This method usually
involves introducing sand mixed with water or oil into the formation under high
pressure to open or clean channels between the pores. Another common method
used to increase the permeability of the formation is acidizing. Acidizing usually involves introducing hydrochloric acid into the formation to enlarge or reopen the channels between the pores.
As the world's reserves of conventional light and medium oil are depleted, oil refineries are investing in the more complex and expensive systems needed to process increased volumes of heavy oil and bitumen. Heavier crude oils have too much carbon and not enough hydrogen, so these systems generally involve removing carbon or adding hydrogen to convert the longer, more complex molecules in the oil into the shorter, simpler ones that characterise end-product fuels.
The total estimated amount of oil in a reservoir, including both producible and
non-producible oil, is called oil in place (OIP). However, because of reservoir
characteristics and the limits of extraction technologies, only a fraction of this oil
can be brought to the surface. This producible fraction comprises the reserves.
The ratio of producible oil reserves to total OIP for a given field is often referred
to as the recovery factor. Reserves are further categorised by the level of certainty associated with the estimates of their magnitude: 1P (90% probability of recovery), 2P (50% probability), and 3P (10% probability). History shows that initial estimates of the size of newly discovered oil fields are usually too low. The term reserve growth refers to the increases in estimated ultimate recovery that occur as oil fields are developed and produced. Energy companies sometimes use the parameter barrels of oil equivalent (BOE) as a way to report oil plus gas reserves or production as a single figure.
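A minimal sketch of the reserves arithmetic described above; the field size, recovery factor and the 6 Mcf-per-BOE gas conversion are hypothetical or assumed values.

# Sketch of OIP, recovery factor and BOE reporting. The recovery factor and the
# 6 Mcf-per-BOE conversion are illustrative assumptions; companies use whatever
# conversion their reporting regime requires.

oil_in_place_bbl = 500_000_000   # total OIP estimate (hypothetical field)
recovery_factor = 0.35           # fraction judged producible with current technology

reserves_bbl = oil_in_place_bbl * recovery_factor
print(f"Producible reserves: {reserves_bbl:,.0f} bbl")    # 175,000,000 bbl

# Reporting oil plus gas as a single BOE figure
gas_reserves_mcf = 300_000_000
boe = reserves_bbl + gas_reserves_mcf / 6.0               # ~6 Mcf of gas per BOE (assumed)
print(f"Total reserves: {boe:,.0f} BOE")                   # 225,000,000 BOE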

1.12 Developing Oil and Gas Projects


Upstream development opportunities originate in many ways. The classic vertical
integration opportunity comes from a firm's exploration activities, in which the
exploration arm of the firm identifies a potential resource and the development
arm performs an investment appraisal that may or may not lead to project
development. With this sequence, the firm controls all aspects of exploration,
development appraisal, project development, and production management, and
in turn, bears all costs and risks of drilling. Given the increasing technological
complexity of projects, firms typically bring in partners to assist in execution.
An upstream development project begins with a prospect or development
opportunity. The next step is to perform an appraisal and analysis of the
opportunity by evaluating risk, economic returns, project feasibility, and
competitive challenges. If the project is deemed viable as it moves through the
various review phases, a decision must be made about financing sources for the
project. If the decision is to develop, the development design is done and
reviewed, followed by the actual execution of the project and handover to the
operators.


Figure 1 - Project Development Stages

In the initial days of the oil industry, the first principle of oil production was "fast is better". The industry soon realised that an oil reservoir or field should be developed with its geophysical properties in mind, particularly the maintenance of its natural drives, which force the oil and gas to the surface. Studies yielded the maximum efficient rate (MER) of field production, which soon became the standard used by both industry and regulators.
The arguments against competitive production are clear: excessive wells drilled,
greater surface disturbance and excessive surface storage requirements. The
longer term result is higher extraction costs because subsurface pressures are
inefficiently depleted, which reduces overall oil and gas recovery. The
development of compulsory field unitisation required private parties to
coordinate reservoir production to minimise surface and production costs while
managing reservoir pressure to maximise recovery.
Present Value = Future Value / (1 + i)^t

The discount rate is defined as the risk-adjusted cost of capital for the
specific project at hand. In practice, most companies will expect that any
investment will yield some minimum amount above the weighted average
cost of capital (WACC) - this is called the corporate hurdle rate. The WACC
value includes weighted costs of both debt and equity. The company's WACC
represents the average cost of raising capital for general company purposes at
current market rates.


The internal rate of return (IRR) is the discount rate that results in the NPV of the expected cash flow stream having a value of exactly zero. A project with an IRR greater than the sponsor's minimum rate of return (the hurdle rate) is an acceptable investment.
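The sketch below ties together the present value formula, NPV and IRR described above; the cash flows and 12% hurdle rate are hypothetical, and the IRR is found with a simple bisection search rather than any particular library routine.

# Present value, NPV and IRR sketch (hypothetical cash flows in $ millions).

def present_value(future_value, i, t):
    return future_value / (1 + i) ** t

def npv(rate, cash_flows):
    """cash_flows[0] is the time-0 outlay (negative); later entries are yearly."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Bisection search for the rate at which NPV is exactly zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

project = [-100.0, 30.0, 40.0, 50.0, 40.0]       # initial outlay, then annual cash flows
hurdle_rate = 0.12
print(round(npv(hurdle_rate, project), 2))        # ~19.7, positive at the hurdle rate
print(round(irr(project), 4))                     # roughly 0.20-0.21; accept if above the hurdle rate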

Flexibility in the development plan is necessary. The original development plan should nearly always be reworked, which may result in short delays but will result in a stronger final outcome.
Technical challenges should be expected in upstream projects.
Deep coordination and integration between contractors and project developers is critical for success.
Project developers and firms must be willing to take measured risks and think creatively.

1.13 Upstream Oil and Gas Operations


Upstream activities include exploration, acquisition, drilling, and the development and production of oil and gas; these are collectively known as exploration and production (E&P) activities. Downstream activities generally include refining, processing, marketing and distribution. Some activities that have characteristics of both upstream and downstream activities are referred to as midstream.
For accounting, the classification of oil and gas activities as upstream, midstream, or downstream is of special significance. This is because a specialised set of accounting rules and standards applies to the financial accounting for, and reporting of, upstream oil and gas operations.
A reconnaissance survey is a geological and geophysical (G&G) study covering a large or broad area. A detailed survey is a G&G study covering a smaller area, called an area of interest. A successful well is a well that finds reserves in economically producible quantities.
The right to explore, develop, and produce any minerals that may exist beneath a property is referred to as a mineral interest or an economic interest. The specific type of mineral interest that is owned largely determines how costs and revenues are shared. In the US, most mineral rights are owned by individuals. Therefore, oil and gas companies wishing to obtain a mineral interest in the US must typically do so by executing lease agreements with individuals. In most locations outside the US, ownership of mineral rights resides with a governmental entity.


US law assumes that, for ownership purposes, the surface of a piece of property can be separated from the minerals existing underneath the surface. When a piece of land is purchased, one may acquire ownership of the surface rights only, the mineral rights only, or both. Ownership of both the surface and mineral rights is called a fee interest. If the mineral rights are owned by one party and the surface is owned by another, the surface owner must allow the mineral rights owner, or his lessee, access to the surface area that is required to conduct exploration and production operations.
A mineral interest (MI) is an economic interest or ownership of minerals-in-place, giving the owner the right to a share of the minerals produced, either in-kind or in the proceeds from the sale of the minerals. Sharing in-kind means the company or individual has elected to receive the oil or gas itself rather than the proceeds from the sale of the minerals. When the owner of the mineral rights enters into a lease agreement or contract, two types of mineral interests are created: a working interest and a royalty interest. The working interest is stated in terms of how the costs are to be shared (e.g., 100%), while the royalty interest is stated in terms of its share of gross revenue (e.g., 1/8).

Working interest or operating interest (WI) - this interest is created via leasing and is responsible for the exploration, development, and operation of a property. The working interest owner bears all of the costs related to drilling, completion, testing, and other similar costs. However, most leases provide that the royalty owner bears a proportionate share of post-production costs. Post-production costs typically include costs related to the transportation of the saleable product as well as costs necessary to get the product into marketable condition. A working interest can be either an undivided interest (an interest in the total minerals extracted from the ground) or a divided interest (an interest in all minerals extracted from the ground in a select portion of an estate).
Royalty interest (RI) - this type of mineral interest is created by leasing. The royalty interest is retained by the owner of the mineral rights when that owner enters into a lease agreement with another party. The royalty interest typically receives a specified portion of the minerals produced or a specified portion of the gross revenue from selling the production, free and clear of any costs associated with exploring, developing, or operating the property. A 1/8 royalty is common in the US. This interest is referred to as a non-working (non-operating) interest.

Joint working interest - an undivided working interest owned by two or more parties. Sharing the working interest is common in the oil and gas industry, as it provides a means for companies to share the costs and risks of operations. Here, one of the parties is designated as the operator of the property, and all the other working interest owners are called non-operators. In a typical E&P joint venture operation, each working interest owner accounts for its own share of costs. This practice is referred to as proportionate consolidation.

Overriding royalty interest (ORI) - a non-working interest created from the working interest. The ORI's share of revenue is a stated percentage of the share of revenue belonging to the working interest from which it was created. Like a royalty interest, the owner of an ORI does not pay any of the exploration, development, or operating costs, but is responsible for its share of any severance or production taxes. A carved-out ORI is created when the working interest owner sells or transfers the ORI and retains the working interest.
Production payment interest (PPI) - a non-working interest created out of a working interest; it is similar to an ORI, except that the production payment interest is limited to a specified amount of oil or gas, money, or time, after which it reverts back to the interest from which it was created and ceases to exist.
Net profits interest (NPI) - a non-working interest that, on onshore property, is typically created from the working interest. Offshore, a net profits interest is the type of interest that the government, as the mineral rights owner, often retains when leasing an offshore block to a petroleum company. This type of interest is similar to a royalty interest or an ORI except that the amount to be received is a specified percentage of net profit from the property rather than a percentage of the gross revenues from the property. The NPI owner is not responsible for any portion of losses incurred in property development and operations. These losses, however, may be recovered by the working interest owner from future profits.
Pooled or unitized working interest - this type of interest is created when the working interests as well as the non-working interests in two or more properties are combined. Each interest owner now owns the same type of interest (but a smaller percentage) in the total combined property as they held previously in the separate property. The properties are operated as one unit, resulting in a more efficient, economical operation.
o Pooling is commonly used to describe the amalgamation of undrilled acreage to form a drilling unit.
o Unitization is used to refer to a larger combination involving an entire producing field or reservoir for purposes of enhanced oil and gas recovery.

Overriding royalty interests and production payments are created out of the
working interest. These interests are often created by the working interest owner
in order to obtain financing or assistance in exploring and developing a property
and to spread the risk involved.
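As a worked illustration of how the interests above share revenue and costs, the sketch below assumes a 1/8 royalty, a 2% carved-out ORI and a single working interest owner that bears all operating costs; all figures are hypothetical.

# Revenue and cost split among royalty, overriding royalty and working interests.
# Hypothetical figures; real splits depend on the specific lease and assignments.

gross_revenue = 1_000_000.0
operating_costs = 300_000.0

royalty = gross_revenue * 1 / 8            # free and clear of costs
orri = gross_revenue * 0.02                # carved-out overriding royalty, also cost-free
working_interest_revenue = gross_revenue - royalty - orri
working_interest_net = working_interest_revenue - operating_costs  # WI bears all costs

print(f"Royalty owner:          {royalty:,.0f}")            # 125,000
print(f"ORI owner:              {orri:,.0f}")                # 20,000
print(f"Working interest (net): {working_interest_net:,.0f}")  # 555,000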
In the US, oil and gas leases are typically obtained through the use of a
landman, an individual who specialises in searching for and obtaining leases. A
landman acts as an agent for an undisclosed principal in trying to obtain a lease
at the lowest possible price. The lessor is the mineral rights owner who leases
the property to another party and retains a royalty interest. The lessee, the
party leasing the property, receives a working interest.

Lease bonus - the initial amount paid to the mineral rights owner in return for the rights to explore, drill and produce. Lease bonus payments are usually a dollar amount per acre.
Royalty provision - the specified fraction of the oil and gas produced, free and clear of any costs (except severance taxes and certain costs to market the product), to which the royalty interest owner is entitled.
Primary term - the initial term of the lease. The primary term is the maximum time that the lessee has to begin drilling or commence production from the property. In the absence of drilling or production, the lessee can keep the lease in effect during this term by making an annual payment called a delay rental payment. Some short-term leases (two or three year primary terms), called paid up leases, require the lessee to pay the delay rentals at the inception of the lease. After the primary term, a delay rental payment can no longer keep the lease from terminating.

Shut-in payments - if the well is capable of producing oil or gas in paying quantities but is shut in (not producing), the lessee may hold the lease by making shut-in payments. Shut-in payments are usually made in natural gas situations where a lack of access to a pipeline or an oversupply of gas exists.
Right to assign interest - the rights of each party may be assigned in whole or in part without the approval of the other party. For example, the working interest owner may carve out a production payment interest or ORI from the working interest without notifying the royalty interest owner.
Rights to free use of resources for lease operations - the operator usually has the right to use, without cost, any oil or gas produced on the lease to carry out operations on that lease.
Option payment - a payment made to obtain a preleasing agreement that gives the oil company (the lessee) a specified period of time to obtain a lease from the entity receiving the payment. In addition to specifying the period of time within which the lessee may lease the property, the option contract will also typically specify the lease form, royalty interest, bonus to be paid, etc.
Offset clause - if a producing well is drilled on Lease B close to the property line of Lease A, within a distance specified in the lease contract, the offset clause requires the lessee of Lease A to drill an offset well on Lease A within a specified period of time, in order to prevent the well on Lease B from draining the reservoir. If, however, the leases are within a state with forced pooling or unitization, the leases can be forced to be pooled and operated as one, and the offset clause becomes irrelevant. If the state in question does not have forced pooling or unitization, then the only recourse for the interest owners of Lease A is for the lessee to assume the burden of offset drilling.
Minimum royalty - a minimum royalty clause provides for the payment of a stipulated amount to the lessor regardless of production. A minimum royalty is similar to a shut-in royalty except that it is commonly recoverable from future royalty payments.
Pooling provisions - modern lease forms provide that if the working interest owner forms a pool or unit with other leases, the royalty interest and other non-working interest owners may also be forced to combine their interests with the non-working interest owners of the other leases forming the unit.

A drilling contract is an agreement between the lessee (working interest owners) and a drilling contractor for the drilling of a well. Drilling contracts generally provide for payment on a day rate (payment based on the number of days drilled), a footage rate (payment based on the number of feet drilled), or a turnkey basis (payment of a fixed sum of money based on drilling to a certain depth or stage of completion).
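A rough comparison of the three payment bases for a hypothetical 10,000-foot well is sketched below; the rates are invented for illustration. Broadly speaking, the choice of basis also shifts risk: under a day rate the operator bears most of the schedule risk, while under a turnkey contract the drilling contractor does.

# Comparison of the three drilling contract payment bases (hypothetical rates).

depth_ft = 10_000
days_drilled = 25

day_rate_cost = 30_000 * days_drilled        # day rate: $/day x days actually drilled
footage_cost = 80 * depth_ft                 # footage rate: $/ft x feet drilled
turnkey_cost = 900_000                       # turnkey: fixed sum to reach target depth

print(day_rate_cost, footage_cost, turnkey_cost)   # 750000 800000 900000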
The first step in drilling an oil and gas well is selecting the actual drill site.
Seismic studies, particularly in 3-D are usually performed, and the results of the
studies are examined to determine the optimal site for the well. The well site is
normally surveyed and staked, then access roads are built, and the site is graded
and levelled. Reserve and waste pits are also prepared, and a water supply is
obtained. After the site is prepared, often the initial 20 to 100 feet of the well will
be drilled with a small truck-mounted rig. The drilling rig and related equipment
are next moved in and set up, a process called rigging up. The well is then
ready to be spudded in. The spud date is the date the rotary drilling bit
touches the ground.
Routine rotary drilling consists of rotating a drill bit downwards through the formations towards a target depth, cutting away pieces of the formations called cuttings. During the drilling process, drilling fluid (mud) is constantly circulated down the wellbore. Drilling mud serves several purposes: it raises the cuttings to the surface, lubricates the drilling bit, and keeps formation fluids from entering the wellbore. Approximately every 30 feet as the hole is deepened, a joint of drill pipe is added, a process called making a mousehole connection. Periodically, when the drill bit becomes worn or damaged, the entire drill pipe has to be removed from the hole in a process called tripping out. Tripping out is also necessary when casing must be set. Casing is steel pipe that is set (cemented) into the wellbore; it prevents caving in of the hole, protects freshwater sands, excludes water from the producing formations, confines production to the wellbore, and controls formation pressure. After a new drill bit is attached, the pipe is lowered back into the hole, a process called tripping in. Normally the drill pipe is removed and lowered three joints at a time, depending upon the height of the derrick or mast. A derrick or mast is a four-legged, load-bearing structure that is part of the drilling rig. The height of the derrick correlates with the depth of the well, since the derrick must support the weight of the drill string (i.e., the drill pipe, etc.) when suspended downhole.
Although most onshore wellbores are drilled vertically, some wells are drilled at an angle. Directional wells are wells that are drilled straight to a predetermined depth and then curved or angled so that the bottom of the wellbore is at the desired location. Horizontal wells are also initially drilled straight down, but are then gradually curved until the hole runs parallel to the Earth's surface, with drilling actually achieving a horizontal direction through the formation.

Directional drilling is used in situations where the drilling objective cannot be


achieved with a vertical wellbore. For example, directional drilling may be
necessary in urban locations where limitations exist regarding well location or to
side-track around an obstruction. Offshore, directional drilling is normally
necessary so that multiple wells can be drilled from a single offshore platform.
Horizontal drilling is a subset of directional drilling. Unlike a directional well that
is drilled to position a reservoir entry point, a horizontal well is commonly defined
as any well in which the lower part of the wellbore parallels the pay zone. The
angle of the wellbore does not have to reach 90 degrees for the well to be
considered a horizontal well. The objective of horizontal drilling is to expose more
reservoir rock to the wellbore than would have been possible with a conventional
vertical well. Most oil and gas reservoirs have greater horizontal dimensions than
vertical thickness. By drilling a portion of a well parallel to the reservoir, the well
is capable of accessing oil and gas that would otherwise not be accessible.
Horizontal wells have become a preferred method of recovering hydrocarbons
from reservoirs in which the oil and gas bearing zones are more or less
horizontal. The cost of drilling horizontally directed wells may be two or three
times that of drilling conventional vertical wells. However, the production factor
can be enhanced by as much as 15 or 20 times.
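Putting the cost and production multiples quoted above into rough numbers (the vertical-well baseline is hypothetical) shows why horizontal wells can still be the more economic choice:

# Quick arithmetic on the 2-3x cost / 15-20x production figures above,
# using a made-up vertical-well baseline.

vertical_cost, vertical_production = 2_000_000, 100        # $ and bbl/day (assumed)
horizontal_cost = vertical_cost * 3                        # upper end of the 2-3x range
horizontal_production = vertical_production * 15           # lower end of the 15-20x range

print(vertical_cost / vertical_production)        # ~20,000 $ per daily barrel
print(horizontal_cost / horizontal_production)    # ~4,000 $ per daily barrel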
During drilling operations, the petroleum engineer or geologist examines data from a number of sources in order to determine whether there is sufficient oil or gas to justify the cost of completing and producing the well. As the well is drilled, mud is circulated in the hole. The mud and cuttings are analysed to identify evidence of hydrocarbons and to gain insight into possible fluid content and rock structure. Core samples are analysed to determine formation rock characteristics, the sequence of rock layers in the earth, and the fluid content of the formation. Seismic data is also often used to aid this analysis.
After total depth has been reached, the well is logged by lowering a device to the bottom of the well and then pulling it back up to the surface. As the device passes up the hole, it measures and records properties of the formations and the fluids residing in them. Based on the results of the analyses and the test methods discussed above, as well as other evaluation methods, a decision is made as to whether to complete the well. If the well is judged incapable of producing oil or gas in commercial quantities, it is plugged and abandoned. Activities incident to completing a well and placing it on production include the following:

Obtaining and installing production casing
Installing tubing (steel pipe suspended in the well through which the oil and gas are produced)
Perforating (setting off charges to create holes in the casing and cement so formation fluids can flow from the formation into the wellbore)
Installing the Christmas tree (valves and fittings controlling production at the wellhead)
Constructing production facilities and installing flow lines

Activities incident to plugging and abandoning a well would include removal of any equipment possible and cementing the wellbore to seal the hole.
Some wells penetrate more than one zone containing oil or gas in commercial quantities. In these cases, the wells may be completed to produce either from only one zone or from multiple zones. In a multiple completion, the well is capable of simultaneous production from multiple zones containing oil or gas.
The activities involved in offshore drilling are somewhat different than in onshore
operations. In territorial waters offshore the U.S, operators may acquire mineral
leases from state governments or from the federal government. In contrast to
most onshore federal leases, offshore federal leases are obtained through a
system of closed competitive bidding on available offshore tracts. Normally, the
federal government keeps a 1/6 royalty on the tracts. Drilling operations are
much more expensive offshore. Some offshore drilling contracts today are as
high as $800,000 per day. As a result, most offshore drilling is done in the form of
a joint venture, or joint interest operation. In some offshore areas, very little may
be known about the types and depths of the subsurface formations. In those
areas, a stratigraphic test well, a well drilled for information only, may be
drilled. Often, such a well will be drilled prior to the bidding process and paid for
by multiple companies who agree to share the information. Exploratory drilling
offshore is almost always done from mobile rigs

Drilling barges and ships - towed to location.
Jack-up drilling platforms - these platforms are towed to location, then the legs are lowered to the ocean floor, and the structure body is jacked up.
Submersible and semi-submersible drilling platforms - platforms are towed to location, and then the pontoon-like legs are flooded with water for extra stability in the open ocean.

Development wells, which produce from a reservoir that has been discovered by exploratory drilling, are often drilled from fixed platforms containing production and well maintenance facilities. The drilling operations of an offshore rig are similar to those of onshore rigs, with the exception of specialised technical adaptations that have been made to deal with the hostile marine environment. Directional drilling is commonly used offshore, since that technique can reach thousands of feet away from the platform. This allows the drilling of multiple wells (as many as 40 or more) from the same development platform. Offshore production may also be achieved via subsea completions. Subsea completions are subsea satellite wells that are situated on the ocean floor. The production from these wells is moved directly to platforms, floating production/storage/offloading vessels (FPSOs), or to the shore, where it is processed and stored pending sale. Many industry experts predict that the future of the offshore industry is in ultra-deepwater areas. Ultra-deepwater is defined as outer continental areas where the water depths are 1,500 meters or greater. The cost of drilling in these areas is extremely high.
Several types of production processes may be employed in order to move the oil or gas from the reservoir to the well. These production processes are commonly divided into three types of recovery methods:

Initial or primary recovery of oil and gas is achieved either by natural reservoir drive or by pumping. Natural drive occurs when sufficient water or gas exists in the reservoir under high pressure to provide the natural energy required to drive the oil to the wellbore. If insufficient natural drive exists, the oil may be pumped to the surface using a beam pumping unit.
When the maximum amount of oil and gas has been recovered by primary methods, and the reservoir pressure has been largely depleted, secondary recovery methods may be used. These consist of inducing an artificial drive into the formation to replace the natural drive. The most common method is waterflooding, which involves injecting water under pressure into the formation to drive the oil to the wellbore.
The distinction between secondary and tertiary recovery methods may be obscure. Tertiary recovery includes injection of chemicals, gas, or heat into the well to modify the fluid properties and thereby enhance the movement of the oil through the formation. A newer form of tertiary recovery uses microwave technology, which introduces microwaves into reservoirs in northern climates to warm the oil.
Even with the best recovery methods, a large amount of oil remains locked in
the formation. Some experts have estimated that 50% or more of the oil
cannot be recovered with current technology.
Fluids produced from a well normally will contain a combination of crude oil
and natural gas, as well as basic sediment and water (BS&W). Before the oil
and gas are sold, the well fluid must be separated, treated, and measured.
The amount of oil transferred from the storage tanks is recorded on a
document called a run ticket. The amount of payment for the oil is based
upon information contained in the run ticket. The gas settlement
statement is used to record similar information for the production and sale
of gas.
In the 1920s and 1930s, overdrilling was commonplace. Since then, many
states have established agencies to oversee oil and gas drilling, with the
passing of regulations to eliminate waste and uneconomical methods of
producing oil and gas. One common regulation is related to well spacing. The
fact that mineral rights in the U.S can be owned by individual parties results
in numerous leases that often cover a relatively small acreage. Unless the
lease is large, it is unlikely that an oil and gas reservoir would be within a
single lease. Without regulations, many more wells than necessary (and wells
that are too closely spaced) would be drilled on the typically multiple leases
associated with given reservoirs. Today the various states regulate the
number of wells drilled into a reservoir through the use of spacing and density
regulations in order to prevent economic waste and to maximise reservoir
recovery. Economic waste occurs when too many wells are drilled, since an increased number of wells drilled does not necessarily increase oil and gas recovery. The reservoir's natural drive may be depleted via overdrilling through premature water or gas encroachment. Another common regulation relates to drilling permits. Prior to starting the drilling process, whether on public or private land, a drilling permit is generally required from the state, or if on federal lands or water, from the federal government. Generally, the drilling permit will not be granted unless the well spacing requirements are met or an exception is granted. Another important state regulation deals with restrictions on production. If demand is adequate, states will typically allow all wells and leases to produce at the maximum efficient rate (MER). The MER is the maximum rate at which oil or gas can be produced without damaging the reservoir's natural energy. In periods where there is low demand for oil and gas, states frequently restrict production through a proration process. An agency decides the amount to be produced within the state for a given period of time, typically a month, and then prorates this amount among the state's producing fields.

1.14 Accounting for International Petroleum Operations


This chapter provides an overview of some of the issues and difficulties
encountered in international oil and gas operations. Of special interest are
contracts between oil and gas companies and governments that dictate how
costs, revenues, and reserves are to be shared.
The agreement between an oil and gas company and the government of the foreign country that owns a given mineral right must state what collective
payments are to be received by the government in return for allowing the company to operate. Collectively, these payments are referred to as the fiscal system of the country. Examples of such payments include:

Upfront bonuses paid to the host country
Royalties paid to the host country
Federal and provincial income taxes
Infrastructure development for the host country

The exact nature of payments that the government receives is determined by the legal system in the country. Countries that collect payment for oil and gas produced primarily in the form of royalties and taxes are referred to as having concessionary systems. In a concessionary system, the contractor conducts exploration, drilling, and possibly development and production activities at its sole risk and cost. Some countries allow foreign oil companies to own minerals in place, while others do not. Countries where the government does not rely entirely on taxes and royalties are referred to as having contractual systems.
In a contractual system, the oil and gas company must contract with the local
government for the right to share in revenue from oil and gas production. A wide
variety of contracts are found in contractual systems, including production
sharing contracts (PSCs) and service contracts, with PSCs being the most
popular. In a PSC, the foreign oil and gas company, referred to as the contractor,
is allowed to recover certain costs and receives a share of the profits. The
contractor typically receives payment in-kind (in the form of oil or gas). In a
typical service contract, the contractor receives money representing a fee for
conducting exploration, development, and production activities. In practice,
contracts have numerous different terms and conditions that often make it
difficult to classify them as being either PSCs or service contracts. Contracts and
agreements generated by a given country may be contractual in nature, but may
have some aspects that resemble a concessionary contract and vice versa.

One variation of a concessionary agreement involves the host government participating in the oil and gas operations as a working interest owner. This type of arrangement is generally referred to as government participation. The government typically sets up a state-owned oil company to participate. This arrangement may also be referred to as a joint venture arrangement. Here, the contractor may agree to pay 100% of the exploration-type expenditures. If commercial reserves are found, the government retains the right to participate or back in to the development and production operations as a working interest owner at an interest of up to 51%. If the state-owned oil company elects to participate, then it becomes liable for its proportionate share of all future drilling,
development, and production costs. The agreement may allow the contractor to recover all or a portion of its up-front exploration-related expenditures. If this is the case, there are two methods of recovery. One is direct payment by the government to the contractor. The other, more frequently used method is to allow the contractor to recover some or all of its costs by keeping the state oil company's share of production until the contractor has recouped the allowed costs. The oil (or gas) that goes to the parties to allow them to recover their costs is referred to as cost oil or cost gas. Typically, there is a ceiling or maximum amount of production available for cost recovery. This maximum amount of production that can be used for cost recovery is referred to as a cost oil cap. In most contracts, recoverable costs that are not recovered in any given year can be carried forward.
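A simplified sketch of the production split under a PSC with a royalty, a cost oil cap and a profit oil split is shown below; every number is hypothetical, and real contracts vary widely in structure.

# Simplified PSC split: royalty, cost oil (capped), profit oil. Hypothetical terms.

gross_revenue = 10_000_000.0
royalty_rate = 0.10
cost_oil_cap = 0.50            # at most 50% of post-royalty revenue for cost recovery
recoverable_costs = 6_000_000.0
contractor_profit_share = 0.40

after_royalty = gross_revenue * (1 - royalty_rate)
cost_oil = min(recoverable_costs, after_royalty * cost_oil_cap)
carry_forward = recoverable_costs - cost_oil          # unrecovered costs carried forward
profit_oil = after_royalty - cost_oil

contractor_take = cost_oil + profit_oil * contractor_profit_share
government_take = gross_revenue - contractor_take

print(f"Cost oil: {cost_oil:,.0f}, carried forward: {carry_forward:,.0f}")       # 4,500,000 / 1,500,000
print(f"Contractor take: {contractor_take:,.0f}, government take: {government_take:,.0f}")  # 6,300,000 / 3,700,000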
In most contractual systems, any equipment or facilities brought into the country
by the contractor become the property of the local government. This does not
apply to equipment and facilities that are owned by service companies,
equipment brought into the country temporarily, or to leased equipment.
Under a concessionary agreement (where the government is not a working
interest owner), the government has little or no involvement in the management
and decision making related to day-to-day drilling and operations. Governments
historically looked to expand their role in the management of petroleum
operations. These factors, as well as other issues, including legal constraints
related to the ownership of minerals, largely led to the trend toward production
sharing contracts.
A common feature of both concessionary agreements and PSCs is that the contractor agrees to pay the government an up-front bonus for signing the agreement. This bonus is often referred to as a signing bonus or signature bonus. In some instances, a lump sum of money is paid at the signing of the contract, and subsequent payments are made to the government when production reaches an agreed-upon level. These later payments are referred to as production bonuses.
In order to prevent royalty payments from discouraging capital investment, some contracts contain sliding scale royalties. A sliding scale royalty provides for a lower royalty amount when production is lower, and the rate increases as production increases. Thus, in marginal situations where production is lower, the lower royalty may allow production that would otherwise not have been profitable.
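A sliding scale royalty of the kind described above can be sketched as a simple tiered schedule; the tier boundaries and rates below are hypothetical.

# Sliding scale royalty sketch (hypothetical tiers and rates).

def sliding_scale_royalty_rate(daily_production_bbl):
    if daily_production_bbl < 10_000:
        return 0.05     # low production, low royalty to keep marginal fields economic
    elif daily_production_bbl < 50_000:
        return 0.10
    return 0.15         # higher production, higher royalty

for production in (5_000, 25_000, 80_000):
    rate = sliding_scale_royalty_rate(production)
    print(f"{production:>6} bbl/day -> royalty rate {rate:.0%}")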
Governments sometimes provide incentives to companies in an effort to maximise the amount of money the companies will invest in exploration, drilling and development. These incentives may appear in PSCs or result from other negotiations:

Capital uplifts - sometimes referred to as an investment credit, a capital uplift is an additional amount of cost recovery on capital expenditures over and above actual amounts spent. For example, if a company spends $1,000,000 in recoverable capital expenditures, and there is a 10% capital uplift in the contract, the company will be allowed to recover 110% of actual spending, or $1,100,000.
Ringfencing - generally, each contract area stands alone when computing cost recovery. There is an imaginary boundary around the contract area; neither costs nor production can be transferred outside the boundary. If production in one area is insufficient to allow for full recovery, costs cannot be transferred to another contract area where production is higher and recovered from that production. One incentive that governments may provide is to un-ringfence or allow cross-fence recovery. This incentive is most effective when the government is seeking to increase exploration in a particular area by allowing a company to immediately recover certain exploration expenditures in the new, frontier area against production from a different, currently producing area. Ringfencing and cross-fence allowance may also be used as tax-related incentives in computing certain petroleum taxes.
Domestic market obligation - some contracts specify that a certain percentage of the contractor's share of the profit oil be sold to the local government, perhaps at a price that is less than the current market price.
Royalty holidays and tax holidays - incentives governments may use to encourage contractors to maximise investment early in the life of production. Here, the government specifies a period of time during which the royalty provision is waived, resulting in the contractor paying no royalty on production during that period of time. It also leaves the contractor with more money to reinvest in additional drilling and development.

The second type of agreement prevalent in a contractual system is a service agreement. Service agreements can be classified as being either risk service contracts or non-risk service contracts. Risk service contracts are much more common. Here, the contractor bears all of the costs and risks related to exploration, development and production activities. In return, if production is achieved, the contractor is allowed to recover its costs as production is sold. In addition, the government pays the contractor a fee for its services. The fee is typically based on production. In non-risk service agreements, the contractor provides services in the form of such activities as exploration, development and production and is paid a fee by the government that covers all costs. In practice, non-risk service agreements are rare.
In joint interest accounting, one of the key tasks is to determine the proper amount of costs and revenues to be shared by each of the parties. As a general rule, costs that are defined as direct costs in domestic joint operating agreements are likely to be recoverable under the PSC or risk service contract. In a domestic joint operating agreement, recoupment of indirect costs by the operator is specified via the application of overhead rates. Overhead rates (often known as sliding scale rates) are also frequently used in PSCs and risk service contracts to determine the amounts of recoverable indirect costs.
Reserve estimation under a PSC or risk service contract is much more complex than reserve estimation under a concessionary contract. Basically, if estimating working interest reserves under a concessionary contract, gross recoverable reserves would first be estimated. Then, reserves attributable to royalty interests or other non-operating interests would be subtracted. The remainder would be allocated to the working interest owners based on their relative working interests. When estimating working interest reserves under a PSC or risk service contract, the net proved reserves remaining after deducting reserves related to royalties and other non-operating interests would be allocated between the parties. The allocation is based on the amounts to which they are entitled as per the contract terms.
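The concessionary case described above reduces to simple arithmetic; the sketch below uses hypothetical figures (an 80 million barrel field, a 1/8 royalty and two working interest owners).

# Working interest reserve estimation under a concessionary contract (hypothetical data).

gross_recoverable_reserves = 80_000_000.0   # bbl
royalty_rate = 1 / 8
working_interests = {"Company A": 0.60, "Company B": 0.40}

net_reserves = gross_recoverable_reserves * (1 - royalty_rate)   # remove royalty share first
allocation = {owner: share * net_reserves for owner, share in working_interests.items()}
print({owner: round(bbl) for owner, bbl in allocation.items()})
# {'Company A': 42000000, 'Company B': 28000000}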

1.15 Transportation

The transportation of oil and gas from wellhead or field to refining and then to
customer markets is a critical component of the global oil industry value chain. It
has become a highly capital-intensive link in the global oil and gas market place,
which is often driven as much by geopolitics as it is by traditional business
concerns.

Distance - the distance that crude needs to be moved often dictates the mode of movement. Shorter distances may be dominated by trucking, medium distances by barge or rail, and longer distances by tanker or pipeline.
Oil versus gas - oil has always been transportable as a heavy liquid. Gas, however, has always been distinguished by its lack of portability. If gas is to find its way to global markets, it must be liquefied and then moved by the traditional transportation means available to oil.
Ownership and geopolitics - thousands of different owners, organisations, governments and interests are involved in the complex web of transport.
Environmental safety - truck rollovers, railroad derailments, pipeline leakages and accidents have all led to an intense focus on safety, security, and environmental protection to a degree not seen in many other industries.
Impact on prices paid to producers - transportation costs have an enormous impact on the price paid to the producer at the wellhead. If market prices are relatively competitive for similar crudes delivered to any specific refining facility, the price paid to the producer is backed out of the refining purchase price for that specific grade of crude, including transportation cost.

Moving crude oil (black oil) and natural gas from the field to the refinery today is about pipelines across land and supertankers across water. With increasing distances, there is no real competitive choice other than the modern supertanker. Although every case is different, the sheer scale economies, flexibility and speed to market that the world's crude oil tanker fleet provides are insurmountable. Pipelines have a critical role to play in the global oil industry, specifically in those land-locked areas of the globe in which new (or old) oil and gas reserves are now being developed.

Pipelines

A production field with multiple wells must first combine or gather the crude oil before initial processing and shipping. This initial network of pipes, the flowline network, moves the oil to a central processing plant and/or shipping point.

Within gathering systems, there are two categories: radial systems and trunkline systems. A radial system integrates multiple flow lines to a central header where the oil fluids are collected. The trunkline system is an advanced radial system which integrates multiple remote headers into a single collection and processing facility, used in the largest fields. The various lines running from individual pipeline webs are then integrated at a field-level gathering point. From here, they move to large-volume crude pipelines or tankers. Some of the most complex and critical forms of gathering systems today are those for deepwater collection.
Because most hubs provide a number of alternative oil or gas supplies to a single point, they often serve a very important role in the industry: price setting. The biggest obstacle in constructing pipelines has been aligning interests, both business and political.
Different products, such as regular gasoline, premium gasoline, or jet fuel, are moved through the same pipeline in batches. Once the different batches arrive at their respective destinations, they are pulled out of the line (batch cutting) into other lines or tankage through a complex set of valves. Some mixing of product occurs at the interface of adjacent product grade batches. This volume can sometimes be mixed into one or both of the adjacent batches and still meet product specification. When products are of significantly different grade and value, for example jet fuel and regular gasoline, the product interface, called transmix, must be diverted to tanks and either reprocessed or returned to a refinery. The transmix then has to be reprocessed via a costly procedure into marketable qualities of the two products. Many companies today are working to reduce overwash, the tendency to cut too much of the premium batch interface into the lower grade batch just to be safe.
A crude oil tanker is a ship constructed specifically for moving crude oil, usually long distances from production areas to refineries. Product tankers, typically much smaller in size, carry the petrochemical products of refineries closer to final consumer markets (the downstream market). There are three basic types of tanker charter: spot charters, contracts of affreightment, and period charters. Spot charters are single voyage charters to move a single cargo load for a one-time journey. In contract of affreightment charters, a specific crude quantity is named and the times and places for loading and delivery are specified, but the specific vessel is not. Period charters are roughly equivalent to a longer term contract and may specify a series of charters of a specific vessel for the same or multiple routes. There are a number of different types of contracts commonly in use in shipping today:

Free on board (FOB) - obligates the buyer to be responsible for shipping the LNG from the liquefaction facility to the receiving terminal.
Cost, insurance and freight (CIF) - makes shipping costs the responsibility of the seller. An important feature is that ownership of the crude is transferred in the midst of the voyage, typically in international waters (and not at the point of delivery).
Delivered ex-ship (DES) - the seller is responsible for all shipping and delivery costs, with the transfer of crude ownership not occurring until it is transferred to the buyer at the receiving terminal.


1.16 Refining
A petroleum refiner, like most manufacturers, is caught between two markets:
the raw materials he needs to purchase and the finished products he offers for
sale. As such, refiners and non-integrated marketers can be at enormous risk
when the prices of crude oil rise while the prices of the finished products remain
static, or even decline. Because refiners are on both sides of the market at once,
their exposure to market risk can be greater than that incurred by companies
who simply sell crude oil at the wellhead, or sell products to the wholesale and
retail markets.
The refining process is critical to the petroleum industry value chain because
crude oil has no value until it is transformed into products such as gasoline,
diesel, heating oil, and petrochemical inputs. Thus, to a refiner, the value of
crude is nothing other than the value of its derivative products. Global refining
capacity has been steadily growing over the past decade, largely driven by
demand from emerging markets.
It is often said that IOCs, unlike independent refiners, have a natural hedge against adverse movements in the components of the refining spread because they control their entire supply chain. In contrast, an independent refiner exposed to the risk of increasing crude oil costs and falling refined product prices runs the risk that refining margins will be less than anticipated. While it is true that IOCs are involved in multiple activities, they buy and sell crude with other refiners in order to manage their complex network of assets efficiently. ExxonMobil, for example, refines more than twice as much crude as it produces. The primary advantage the IOCs have relative to stand-alone refiners is therefore most likely that the integration between refining and upstream activities allows them to minimise short-term cyclical effects in either branch of the business, rather than any theoretical ability to hedge crude prices. A merchant refiner can buy crude from any supplier and sell refined product to any customer.
Crude oil often contains water, inorganic salts, suspended solids, and water-soluble trace metals. As a first step in the refining process, these contaminants must be removed by desalting (dehydrating) to reduce corrosion, plugging, and fouling of equipment and to prevent poisoning of the catalysts in processing units. The next step is distillation. Cracking breaks the heavier, higher boiling-point petroleum fractions into more valuable products such as gasoline and diesel. The two basic types of cracking are thermal cracking, using heat and pressure, and catalytic cracking.
Hydrotreating is a hydrogenation process used to remove contaminants such as nitrogen, sulphur, oxygen, and metals from liquid petroleum fractions. If not removed as the fractions travel through the refinery processing units, these contaminants can have detrimental effects on the equipment, the catalysts, and the quality of the finished product. Hydrotreating is usually done prior to processes such as catalytic reforming so that the catalyst is not contaminated by untreated feedstock.
The value of crude oil is a function of the value of the products that can be refined from it. Refiners evaluate crude oils by what they can earn from the products the different crude oils produce. No two refineries will achieve the same margin for a given crude oil or the same mix of refined products.
The topping refinery separates the crude into its constituent petroleum products by distillation, known as atmospheric distillation. Hydroskimming refineries are equipped with atmospheric distillation, naphtha reforming, and the necessary treating processes. The cracking refinery is, in addition to the above, equipped with vacuum distillation and catalytic cracking. It adds one more level of complexity to the hydroskimming refinery by converting fuel oil into light and middle distillates. The coking refinery can process the vacuum residue into high-value products using the delayed coking process. It adds further complexity to the cracking refinery through high conversion of fuel oil into distillates and petroleum coke. Catalytic cracking, coking and other conversion units are referred to as secondary processing units. The Nelson Complexity Index captures the proportion of the secondary conversion unit capacities relative to the primary distillation (topping) capacity, assigning a complexity factor to each major refining unit based on its complexity and cost in comparison to crude distillation, which is assigned a complexity factor of 1.0. The higher the index number, the greater the cost of the refinery and the higher the value of its refined products.
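As a rough illustration of how the index is built up, the Python sketch below weights each unit's capacity by its complexity factor relative to the crude distillation capacity. The unit capacities and complexity factors used here are hypothetical assumptions, not figures from the course pack.

# Minimal sketch of a Nelson Complexity Index calculation (hypothetical figures).
units = {
    # unit name: (capacity in barrels per day, complexity factor)
    "atmospheric_distillation": (200_000, 1.0),  # baseline unit, factor 1.0 by definition
    "vacuum_distillation":      (90_000,  2.0),
    "catalytic_reforming":      (45_000,  5.0),
    "catalytic_cracking":       (70_000,  6.0),
    "delayed_coking":           (30_000,  6.0),
}

crude_capacity = units["atmospheric_distillation"][0]

# Index = sum over units of (unit capacity / crude distillation capacity) x complexity factor
nci = sum(capacity / crude_capacity * factor for capacity, factor in units.values())
print(f"Nelson Complexity Index = {nci:.1f}")

With these assumed capacities the index comes out at about 6, i.e. a cracking-type configuration; a topping refinery would score close to 1.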
The refining value of crude oil, sometimes referred to as technical value, is the value a refiner expects to realise for the refined products less the operating costs for processing the crude. The refining value less the cost of the crude equals the refining margin. To facilitate greater transparency in financial markets with regard to refining margins, the concept of the crack spread was created. Refiners' profits are tied directly to the spread between the price of crude oil and the prices of refined products. Crack spread is the term used by the refining industry for the difference between the price of crude oil and the prices of refined petroleum products. The crack spread represents the theoretical refining margin and is quoted in dollars per barrel.
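A short sketch of the arithmetic follows. It prices a 3-2-1 ratio (three barrels of crude against two of gasoline and one of heating oil), a commonly used market convention; the ratio and all prices below are assumptions for illustration, not figures from these notes. Product futures are quoted in dollars per gallon, so they are converted at 42 gallons per barrel.

# Hypothetical 3-2-1 crack spread, expressed in $/bbl of crude.
GAL_PER_BBL = 42

crude_usd_bbl    = 80.00   # hypothetical crude futures price, $/bbl
gasoline_usd_gal = 2.45    # hypothetical gasoline futures price, $/gal
heating_usd_gal  = 2.60    # hypothetical heating oil futures price, $/gal

gasoline_usd_bbl = gasoline_usd_gal * GAL_PER_BBL
heating_usd_bbl  = heating_usd_gal * GAL_PER_BBL

# value of 2 bbl gasoline + 1 bbl heating oil, less 3 bbl crude, per bbl of crude
crack_321 = (2 * gasoline_usd_bbl + 1 * heating_usd_bbl - 3 * crude_usd_bbl) / 3
print(f"3-2-1 crack spread = {crack_321:.2f} $/bbl")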
Adverse price movements in crude and finished products can present a significant economic risk. Given a target optimal product mix, an independent oil refiner can attempt to hedge itself against adverse price movements by buying oil futures and selling futures for its primary refined products according to the proportions of its optimal mix. In 1995, NYMEX created the crack spread contract to help refiners lock in a crude oil price and heating oil and unleaded gasoline prices simultaneously in order to establish a fixed refining margin. For example, if a refiner expects crude prices to hold steady, or rise somewhat, while product prices fall (a declining crack spread), the refiner would sell the crack: that is, the refiner would buy crude futures and sell gasoline and heating oil futures. For protection from increasing product prices and decreasing crude oil prices, the refinery can use a short hedge against crude and a long hedge against products.
The outcome is the same as purchasing the crack spread, i.e. sell crude futures
and buy gasoline and heating oil futures.
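The sketch below works through the "sell the crack" case described above: long crude futures, short product futures, gaining when the spread narrows. The prices are hypothetical and, for simplicity, all quoted in dollars per barrel; crack_321 is just an illustrative helper, not an exchange formula.

# Hypothetical P&L of a short (sold) 3-2-1 crack position.
def crack_321(crude, gasoline, heating):
    """3-2-1 crack spread per barrel of crude; all inputs in $/bbl."""
    return (2 * gasoline + 1 * heating - 3 * crude) / 3

# Entry: the refiner sells the crack (buys crude futures, sells product futures)
entry_crack = crack_321(crude=80.0, gasoline=103.0, heating=109.0)

# Later: product prices fall while crude holds steady, so the crack narrows
exit_crack = crack_321(crude=80.0, gasoline=96.0, heating=104.0)

# A short crack position gains when the spread narrows
pnl_per_bbl = entry_crack - exit_crack
print(f"entry {entry_crack:.2f}, exit {exit_crack:.2f}, hedge gain {pnl_per_bbl:.2f} $/bbl")

The futures gain offsets the weaker physical refining margin, which is exactly the fixed-margin outcome the crack spread contract was designed to deliver.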
A refinery's capacity utilisation can theoretically range from 0% to 100%. In reality, 100% utilisation for a refinery, or for any plant, is extremely difficult to achieve. The optimum rate of capacity utilisation in the US is considered to be 90%-95%, with a 95% utilisation rate considered full capacity. Rates below 90% suggest that many units are down for maintenance or that refining margins are so depressed that capacity has been taken offline.
Given the price differentials for lower quality crudes, refiners in the past had a strong economic incentive to increase imports of sour and high-acid crudes. The discounts on lower quality crude oils can be substantial. For a specific refinery, the most valuable or profitable choice is not necessarily the lowest cost crude oil, nor the lightest or sweetest crude. Refiners will, in the future, also have to deal with the quality gap: the growing demand for higher quality products from consumers and regulators, coupled with the declining quality of crude oil.

1.17 Simple and Complex Refineries


Refineries are about making money, and some refineries make more than others simply because of the assembly of processing units within them. The prices for crude oil, gasoline, distillates, and residual fuels are connected to each other in a profound way. The kinds of hardware in refineries around the world not only affect the profitability of individual refineries; they also set, in large part, the margins between crude and products and the price differences between the various crudes and between the various products themselves.
- Simple refinery: crude distillation, cat reforming and hydrotreating of distillates. Complexity factor between two and five.
- Complex refinery: a simple refinery plus a vacuum flasher, cat cracker, alkylation plant, and gas processing. Complexity factor between eight and 12.
- Very complex refinery: a complex refinery plus a coker, which eliminates residual fuel production. Complexity factor above 15.

As you move from simple to complex to very complex refineries, the gasoline (lighter product) yield goes up and the residual yield goes down. You can note the following things about processing the same type of crude in two different refineries:
1) The light oil yield is lower in the simple refinery.
2) This lower yield gives lower revenue.
3) The simple refinery's operating costs are lower.
4) The profitability of the simple refinery is also lower.
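The toy comparison below puts rough numbers on the four observations. All yields, product prices, and operating costs are hypothetical assumptions chosen only to mirror the qualitative points, not data from the course pack.

# Hypothetical margin comparison: same crude, two refinery configurations.
prices = {"light_products": 110.0, "residual_fuel": 70.0}   # $/bbl
crude_cost = 85.0                                           # $/bbl

refineries = {
    # configuration: light-product yield, residual yield, operating cost ($/bbl)
    "simple":  {"light": 0.55, "resid": 0.40, "opex": 3.0},
    "complex": {"light": 0.80, "resid": 0.15, "opex": 6.0},
}

for name, r in refineries.items():
    revenue = r["light"] * prices["light_products"] + r["resid"] * prices["residual_fuel"]
    margin = revenue - crude_cost - r["opex"]
    print(f"{name:8s} revenue {revenue:6.2f}  margin {margin:5.2f} $/bbl")

With these assumed figures the simple refinery earns a much thinner margin despite its lower operating costs, which is the pattern the list above describes.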

The extra margin is what the owner of the complex refinery hopes for and expects as compensation for spending additional funds on extra processing units. If the two refineries now process a (cheaper) heavier crude than in the example above, the following can be observed:
1) Compared to running the medium crude, the light oil yield in the simple refinery is significantly lower.
2) The complex refinery has sufficient conversion capacity to turn the heavy part of the heavy crude into light products, and it will want to run heavy crude oil.
3) The heavy crude price is lower than the medium crude price, but not low
enough to keep the simple refinery interested. The simple refinery does
not have enough conversion capacity to take advantage of the price
difference between the light oils and residual fuel (the light/heavy
differential). It will want to run light crude oil.
Consider a hypothetical world with only these two refineries, running at capacity and running only these two crudes, 50% each. Market forces make this world unstable. One or more of the following must happen:
1) The positive margins of both refineries indicate that this world needs both the simple and the complex refinery to stay in business.
2) The complex refinery has an incentive to switch to heavier crude oil, while the simple refinery has an incentive to switch to lighter crude oil.
3) As they do, the amount of light products increases on the margin, pushing the light oil price down and the residual fuel price up. That hurts the complex refinery more than the simple refinery and reduces the incentive for the complex refinery to run as much heavy crude oil.
4) That reduces the demand (and price) for heavy crude oil and increases the price of light crude oil, which reduces the incentive for the simple refinery to run as much light crude oil.
Any or all of the above changes put continuous but volatile pressure on the crude and products markets, causing crude prices to change relative to each other and product prices to do the same. Refiners typically model the simple, complex, and very complex refineries and calculate the results of running a selection of light, medium, and heavy crudes.
There are not many simple refineries in the world, and hardly any in the developed countries where gasoline is a big part of the market. Matching product yields to crude composition requires conversion capacity in refinery configurations.
As a refiner runs the distilling unit at ever-higher rates, typically the coker fills up first. It is the most profitable unit in the refinery because it converts low-valued heavy oil to high-valued light oil. When that happens, the next increment of crude does not return the same profit as the previous volume. There is no room in the coker, but there may well be some in the cat cracker. So the refinery, in running the last few barrels per day, behaves on the margin like a complex (cat cracker) refinery: it slips from a very complex mode to a complex mode. As the distilling unit processes more crude and the cat cracker fills up, the returns slip again. The last increment of crude run through the distilling unit gets simple yields and lower margins. So a very complex refinery can operate on the margin in any of the three modes.

1.18 Natural Gas


The natural gas market is one of the largest, most established energy markets.
Natural gas is a mixture of hydrocarbon gases, the most abundant of which is
methane. Natural gas is considered dry when it is almost pure methane, and
wet when it contains substantial quantities of the other hydrocarbons. Because
the composition of natural gas varies so widely, it is commonly traded in units of
heat energy such as British Thermal Units (BTUs).
There are approximately 1,000 BTUs per cubic foot of dry natural gas.

Pipelines are used to transport natural gas because its primary component, methane, contains a relatively low amount of energy per unit volume. Longer-chain hydrocarbons like propane or butane contain enough heat energy to make it practical to transport them in pressurised metal containers. Larger hydrocarbons, like propane and butane, turn into liquids at a warmer temperature than smaller hydrocarbons, like methane. As a result, they need to be removed from the natural gas to prevent liquid from building up in a pipeline.
The terminology used in natural gas trading differs from that of other financial markets. When natural gas prices are quoted by a trader, the quote is usually in relation to the Henry Hub price; this is the basis price. The basis price quoted by traders is similar to a transportation price to get gas from the Henry Hub to another area. The index price of natural gas is the price at the Henry Hub. The actual price of natural gas, the all-in price, is the combination of the index price and the basis price.
An actual position means being exposed to the outright price of gas in some location. For example, a trader who owns gas physically located somewhere is exposed to the actual price. Sometimes this is called an all-in position. A basis position means an exposure to a basis price rather than an outright price. Since the basis price is a spread, a basis position is an exposure to two locations: the basis location and the index location. A long basis position at the Waha Hub is similar to being long gas physically located at the Waha Hub and short gas physically located at the Henry Hub.
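The sketch below makes the index/basis/all-in arithmetic concrete and shows why a basis position is a spread exposure. All prices and the Waha basis value are hypothetical.

# Hypothetical natural gas price quoting: all-in = index + basis ($/MMBtu).
henry_hub_index = 3.50     # index price at the Henry Hub
waha_basis = -0.45         # hypothetical basis quote for the Waha Hub

waha_all_in = henry_hub_index + waha_basis
print(f"Waha all-in price = {waha_all_in:.2f} $/MMBtu")

# A long basis position at Waha behaves like long Waha gas / short Henry Hub gas:
# it gains when the basis strengthens, regardless of the outright price level.
entry_basis, exit_basis = -0.45, -0.25
basis_pnl_per_mmbtu = exit_basis - entry_basis
print(f"long-basis P&L = {basis_pnl_per_mmbtu:+.2f} $/MMBtu")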
To trade natural gas, traders usually enter into two trades: a futures trade at the
Henry Hub and a basis swap that exchanges the Henry Hub exposure for an
exposure at some other location. The market is structured this way to help
traders find trading partners.
There are several common types of natural gas trades. The simplest possible
type is a directional bet on the entire natural gas market. However, since natural
gas prices are cyclical, it is more common for traders to try to speculate on one
aspect of the natural gas market by entering into spread trades. Spread trades
are trades where a trader benefits from the price difference between two
securities by buying one security and selling another. These trades are popular
because they eliminate a trader's exposure to the entire market moving up and
down.
- Location spreads: these speculate on the price difference between two locations. Depending on demand, natural gas prices can be substantially different between two locations. Natural gas transportation is not instantaneous, and storage is often limited.
- Heat rates: these trades speculate on the relationship between natural gas prices and electricity prices (see the sketch after this list). As two different mechanisms are used to determine the prices of power and natural gas, they do not move together all of the time, and it is possible for a trader to benefit from that volatility.
- Time spreads: these speculate on the price difference between periods of high and low demand. For example, it might be possible to speculate on a colder-than-normal winter by betting that gas prices will be high. The trader might buy winter gas and sell spring gas to eliminate directional exposure to the entire market.
- Swing trades: these spread trades rely on the physical ability of the trader to store natural gas over short periods. It is possible to pick up inexpensive natural gas when demand is low and resell it when demand is high. This requires the ability to store the natural gas.
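To illustrate the heat-rate relationship mentioned in the list above, the sketch below computes an implied market heat rate (power price divided by gas price) and a generator's spark spread. The term "spark spread" and the 7.0 MMBtu/MWh plant heat rate are standard market shorthand added here as assumptions, and all prices are hypothetical.

# Hypothetical heat-rate and spark-spread calculation.
power_price = 42.0     # $/MWh
gas_price = 3.50       # $/MMBtu

implied_heat_rate = power_price / gas_price      # MMBtu of gas per MWh of power
print(f"implied market heat rate = {implied_heat_rate:.1f} MMBtu/MWh")

unit_heat_rate = 7.0                             # hypothetical plant efficiency, MMBtu/MWh
spark_spread = power_price - unit_heat_rate * gas_price
print(f"spark spread = {spark_spread:.2f} $/MWh")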
In the forward markets, prices are determined by macroeconomic issues: the expected average relationship of supply and demand in the future. In the spot market, prices are based on the supply that is on hand right now and consumer demand right now. As a result, the spot markets are substantially more volatile than the forward markets. In markets where spot and forward commodity markets are linked, there is usually a roughly constant amount of the commodity, and it is not consumed when it is used. The spot and forward markets for those products are linked by the ability to buy the commodity in the spot market and store it for future delivery. Since the commodity is never used up (like gold, or a stock certificate), those assets are relatively unaffected by short-term fluctuations in supply.
Natural gas does not work the same way. Long-term buy-and-hold strategies are of limited use in natural gas trading. Gas has no intrinsic value by itself; it is the products created by burning natural gas that are valuable. Moreover, gas in storage is no easier to deliver than gas newly extracted from the ground, unless the storage is located near a consuming area.
Looking at the forward market, prices are determined by seasonal expectations of supply and demand rather than by the current spot price and storage costs. Seasonal expectations of supply and demand are generally the same every year. Consequently, prices in the forward market tend to mirror consumer demand: both are high during winter and fall dramatically in the spring of every year. Unlike in most markets, the price of natural gas becomes less certain close to delivery rather than further away. This is because short-term disruptions to supplies have a big effect on prices. However, unless prices change for a lasting fundamental reason, large price movements in the spot market do not have a great effect on future prices. Forward prices tend to revert to prices based on typical consumer demand and expected supplies.
Movements in spot prices are very different from the highly predictable prices in the forward market. Part of the reason for this difference is the difficulty of storing or transporting natural gas on short notice. Natural gas in one location cannot simply be exchanged for natural gas in a different location; it has to be physically moved there. Natural gas requires specialised facilities to store it. If someone does not have access to a storage facility, and most market participants do not, they cannot buy the commodity early for delivery at a later date.
When very long time frames are considered, the assumption that natural gas
prices are a function of supply and demand is misleading. For exploration and
development purposes, the supply of natural gas depends on its price. Higher
natural gas prices make the extraction of natural gas from difficult-to-reach
reserves more economical. In other words, in the long run, supply depends on
prices and demand rather than the other way around. As a result, predictions
about future prices are mostly a function of expected demand for natural gas.

1.19 The Basics


Natural gas is the most flexible of all primary fossil fuels: it can be burned directly to generate power and heat, converted to diesel for use as a transportation fuel, and chemically altered to produce a plethora of useful products. Such products include liquid vehicle fuels, fertilizer, chemicals and plastics. It can do all of this at competitive cost and from a plentiful supply, while emitting significantly fewer harmful pollutants than other fuels.
Pressure and volume relationships are particularly important in the production of
natural gas. Gas in the reservoirs may exist in dense phase, with liquid and
vapour phases mixed together in equilibrium. Pressures and temperatures above
which the dense phase occurs are called cricondenbar (CB) and
cricondentherm (CT) levels, respectively. As the natural gas comes to the
surface, decreased temperatures and pressures result in a drop below the
cricondenbar and cricondentherm levels. This leads to the separation of liquid
and vapour components.
- Methane is the main component of natural gas, usually accounting for 70-90% of the total volume produced. If gas contains more than 95% methane, it is sometimes termed dry gas or lean gas, and it will produce few, if any, liquids when brought to the surface.
- Gas containing less than 95% methane and more than 5% heavier hydrocarbon molecules is sometimes called rich gas or wet gas. This gas usually produces hydrocarbon liquids during production.
Methane is the most common component transported by pipelines and converted to liquefied natural gas (LNG). LNG is the liquid product produced by cooling methane to -161.5°C. This allows for efficient transport to markets, usually by special ships, where it is heated back to standard temperature and pressure (STP) and converted back to gaseous methane. Liquefied petroleum gas (LPG) refers specifically to propane and butane when they are stored, transported, and marketed in pressurised containers. Natural gas liquids (NGL) include components that remain gaseous at both reservoir and surface conditions, such as ethane, propane and butane, as well as components that exist with the gas in the reservoir but become liquid at the surface, such as condensates and natural gasoline. Condensates are low-density liquid mixtures of pentanes and other heavier hydrocarbons.
The heavier the hydrocarbon component, the more carbon atoms are present, and the more heat is generated when it is burned. If significant quantities of non-methane components are present, these components are separated and sold separately, often at large premiums to the price of pure methane. A large gas development project can often earn as much revenue from selling non-methane components as from methane sales, even though methane may comprise 90% or more of the total volume produced.
Natural gas can also contain non-hydrocarbon components such as carbon
dioxide, hydrogen sulphide, hydrogen, nitrogen, helium and argon. All of these
impurities, especially the initial two, must be removed from the natural gas
stream prior to sale. Carbon dioxide and hydrogen sulphide can corrode pipelines
and are significant components of air pollution. Hydrogen sulphide, if left in the
gas stream, results in emission of sulphur oxides, a component of acid rain and
other air pollution effects. Gases with high levels of hydrogen sulphide are called
sour gases, referring to the sour smell of sulphur. Conversely, gases with low
levels are termed sweet gases and can be directly sold to consumers. Carbon
dioxide is a greenhouse gas, which has been blamed for contributing to global
warming.
Contrary to popular belief, gas is not generally sold per unit of volume, but rather per unit of energy that can be produced by burning the gas. End-use customers of gas are interested in the heat energy that combusting the gas will generate. The heat energy of a particular gas stream is measured in units of calorific value, defined as the number of heat units released when a unit volume of the gas burns. Typical units of calorific value are British thermal units (Btu), joules (J) and kilocalories (kcal). A British thermal unit is the energy required to raise the temperature of 1 pound of water by 1°F. Most industrial and residential customers receive gas via a pipeline connection with a gas meter that measures the volume of gas delivered. This volume measurement is subsequently converted, using the average calorific value per unit volume, into the number of energy units consumed by the end user.
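A quick sketch of that metering-to-billing conversion follows. The calorific value is the 1,000 Btu/ft3 factor quoted in these notes; the delivered volume is a hypothetical example.

# Converting a metered gas volume into billed energy units.
BTU_PER_MMBTU = 1_000_000

delivered_volume_ft3 = 250_000     # hypothetical metered volume, cubic feet
calorific_value_btu_ft3 = 1_000    # average calorific value per unit volume

energy_btu = delivered_volume_ft3 * calorific_value_btu_ft3
energy_mmbtu = energy_btu / BTU_PER_MMBTU
print(f"energy delivered = {energy_mmbtu:.0f} MMBtu")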
Most fields of the world produce raw gas with calorific values ranging up to 1,800 Btu/ft3, which processing reduces to between 960 Btu/ft3 and 1,050 Btu/ft3 for sale to market. A factor of 1,000 Btu/ft3 is commonly used.

Gas Formation
The earth's outermost layer, called the crust, ranges from 10 to 50 km in thickness. The crust contains three types of rocks:
- Igneous rocks: cooled from volcanic magma or lava. Common examples are granite and basalt.
- Sedimentary rocks. These are:
  o Fragments of other rocks deposited on land or under the sea, mainly by water and wind. Examples include sandstone and shale.
  o Chemically precipitated from evaporating waters. Common examples are halite and gypsum.
  o Formed by organic activity, including from coral reefs. A common example is limestone.
- Metamorphic rocks: rocks of igneous, sedimentary or metamorphic origin whose structure has been changed by pressure and heat. Examples include slate and marble.

Sedimentary rocks are the most important type of rock for producing and storing gas and other hydrocarbons. Though there are differing theories on the origin of hydrocarbons, the organic theory is the more widely held and studied hypothesis. The chemical composition of hydrocarbons, consisting of carbon, hydrogen, and oxygen, comprises the same materials found in life forms today. The fact that oil and gas reserves are found within sedimentary rocks commonly associated with marine fossils has added support to the organic theory. Sedimentary rocks are much more likely to have properties that allow hydrocarbons to generate, migrate, and be stored between their grains. Sedimentary rocks that accumulate in water-rich environments, such as lakes and oceans in particular, tend to preserve and generate hydrocarbons more efficiently. Once the hydrocarbons are formed, they are lighter than water and can migrate over vast distances under the influence of gravity. Thus, hydrocarbons are often discovered in non-marine environments today, but there is strong evidence indicating they were originally formed in marine environments before migrating to their present locations. Marine life, from the simplest plankton and single-celled life forms to the more complex crustaceans and fish species, contains carbon molecules. As these organisms die and decay over millions of years, their carbon molecules, through processes of heat and pressure, degrade into hydrocarbon compounds. Sufficient volumes of accumulations may form oil and gas reservoirs over time. Generally, the lower the temperature and shallower the depth, the heavier the hydrocarbon component formed. Though temperature is the critical factor, the amount of time that the organic material is exposed to heat and pressure is also an important factor in the production of hydrocarbons. These factors determine the relative amounts of natural gas versus oil found in a particular reservoir. In a simple sense, gas, oil and solid hydrocarbons such as coal are merely different stages in the creation of hydrocarbons from organic matter.
Any basin in which commercial quantities of hydrocarbons are found must contain at least one rock layer that hosted the conversion of organic matter to hydrocarbons. This layer is the source rock. Source rocks are usually clay-rich sedimentary rocks commonly called shale. Shales are predominantly found in deeper parts of marine environments, originally having high porosity. Reservoir rocks must have a pathway from the source rocks to allow hydrocarbons to migrate. A sediment capable of becoming a source rock for oil may also produce gas. In this case, the gas produced will be associated gas, occurring in the same reservoir and coexisting with crude oil. However, not all sediment capable of producing gas will also produce oil, leading to the huge reserves of non-associated gas, or gas without oil. As a result of these factors, gas reserves are more widely distributed than oil reserves. Both associated and non-associated gas reservoirs are broadly termed conventional gas resources. In contrast, unconventional gas resources are gas molecules that occur with coal, in ice crystals, or in otherwise difficult rock conditions. If there is no market for associated gas, it may be flared, but this is considered wasteful and environmentally damaging. Associated gas may be used to power field generation, heating, and compression equipment. It can also be reinjected into the reservoir to maintain pressure, mixed with oil to reduce viscosity, or used to increase oil production via gas lift.
Understanding porosity and permeability is critical to visualising how these
fluids reside in reservoir rocks. Igneous and metamorphic rocks, which generally
do not contain sufficient space between the grains to store hydrocarbons and
water, lack sufficient porosity and permeability to become commercial reservoir
rocks. If the oil and gas pressure is higher than the pressure in the well, the
hydrocarbon is forced to come out of the well. If, on the other hand, the reservoir
has been producing for some time, the fluid (hydrocarbon) pressure may not be
sufficient to push the fluid to the surface. In this case, mechanical pumps or
other methods may be required to extract the hydrocarbon from the reservoir.
Sedimentary rocks are the most important rocks for both hydrocarbon formation
(source rocks) and hydrocarbon reserves (reservoir rocks). Reservoir rocks must
have the ability to store liquids between the sediment grains (measured by
porosity). They must also have the ability for the liquids to move through the
rock via connecting channels (measured by permeability).

Porosity is the percentage of the total volume of a rock that is void space, which may be filled with recoverable oil and gas, or by water, or left void. By definition:

porosity = (total volume - grain volume) / total volume

Porosities of more than 10-15% are considered fair, and porosities of greater than 20% are considered very good. The rock's porosity is largely a function of the relative sorting of the grains and the grain size: the more uniform the grain size, the higher the porosity found in the rock. Permeability is the ease with which fluid flows through the connected pore spaces of a reservoir rock, and it is measured in units of darcies or millidarcies. A reservoir may have a high porosity that is filled with hydrocarbons, but if the permeability is low and the hydrocarbons are unable to flow through the reservoir into the well bore, the reservoir may be called tight, and it could be uncommercial to produce. In some cases, pumping viscous fluids under high pressure into the reservoir can improve flow rates. The force of the liquid fractures the rocks, allowing reservoir fluids to flow along the cracks into the well bore, potentially increasing the conduits or pathways that ease gas production and increase recovery rates. This process, called fracturing, may have to be repeated as the forced cracks collapse, and the results are rarely as good as natural permeability.
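A small sketch of the porosity definition and the quality thresholds quoted above follows. The sample volumes are hypothetical, and the "poor" label for anything below the fair threshold is my own shorthand rather than a term from the text.

# Porosity = (total volume - grain volume) / total volume, with rough quality bands.
def porosity(total_volume, grain_volume):
    """Fraction of the rock's total volume that is void space."""
    return (total_volume - grain_volume) / total_volume

phi = porosity(total_volume=1.00, grain_volume=0.82)   # e.g. cubic metres of rock
if phi > 0.20:
    quality = "very good"
elif phi > 0.10:
    quality = "fair"
else:
    quality = "poor"
print(f"porosity = {phi:.0%} ({quality})")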
Because of density differences, oil will accumulate above the water layer and gas, if present, will accumulate above the oil layer and collect in the highest part of the trap, forming a gas cap above the liquid layers. Density also helps to explain why oil and gas migrate to the highest point in a formation, if sufficient porosity and permeability conditions exist. Natural gas components may also exist dissolved within the oil layers, separating at the surface when the pressure is reduced.
Two significant factors that have to be estimated from known data are the gas expansion factor and the recovery factor. As a reservoir is produced, gas and oil are brought from high-pressure reservoir conditions to lower-pressure and lower-temperature surface conditions. During this process, the volume of oil, if present, will decrease as natural gas dissolved in the crude oil comes out of solution, and the volume of gas will increase as the gas expands in the lower-pressure environment. The recovery factor is a measure of the proportion of gas that will ultimately be recovered from the total volume of gas present in the reservoir. The total volume of gas in the reservoir is known as gas initially in place (GIIP). A certain proportion of the GIIP volume will not be recovered, either because it is stuck in pores that are not connected to other pores, or because the surface tension between the gas molecules and the pores of the rock (or water in the reservoir) prevents the gas from moving towards the producing well. Modern production techniques can increase recovery factors to as high as 75-90% of GIIP. However, without sufficient production and reservoir pressure data in place, accurately determining this factor ahead of time is a challenge.
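The recovery-factor relationship above reduces to a single multiplication, sketched below with hypothetical figures.

# Recoverable gas = GIIP x recovery factor (all figures hypothetical).
giip_bcf = 500.0          # gas initially in place, billion cubic feet
recovery_factor = 0.75    # proportion expected to be ultimately recovered

recoverable_bcf = giip_bcf * recovery_factor
unrecovered_bcf = giip_bcf - recoverable_bcf
print(f"recoverable = {recoverable_bcf:.0f} Bcf, left in place = {unrecovered_bcf:.0f} Bcf")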
Unfortunately, oil and gas company management can use reserve definitions to boost the value of the company and promote the stock price. At present, there are no standard industry-wide definitions of reserves.

Generally, proved reserves have known reservoir characteristics supported by actual or specific production tests and are commercial in the current economic climate. In some instances, proved reserves are assigned on the basis of specific data, such as well logs and core analysis, and are analogous to reservoirs in the same area that are producing. Proved reserves, also called 1P or P reserves, can be further classified as developed and undeveloped. Proved developed reserves are expected to be produced from existing wells and infrastructure. Proved undeveloped reserves (PUDs) are located near existing infrastructure, and it is reasonably certain that they will be developed in the future, requiring additional investment to bring them into production. Probable reserves are unproved reserves that analysis of geological and engineering data suggests are likely to be recoverable. In this case, there should be at least a 50% probability that the quantities actually recovered will equal or exceed the sum of proved plus probable reserves (P + Probable = 2P). Possible reserves are unproved reserves that are less likely to be recoverable than probable reserves, with at least a 10% probability that the quantities actually recovered will equal or exceed the sum of proved plus probable plus possible reserves, also known as P + Probable + Possible = 3P reserves.
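As a minimal sketch of how the cumulative categories stack up, the snippet below adds the classes together; the volumes are hypothetical, and the probability thresholds in the comments are the 50% and 10% figures quoted above.

# Cumulative reserve categories (hypothetical volumes, e.g. in Bcf).
proved = 100.0      # 1P
probable = 40.0     # unproved, likely recoverable
possible = 25.0     # unproved, less likely than probable

reserves_2p = proved + probable        # >= 50% chance recovered volumes meet or exceed 2P
reserves_3p = reserves_2p + possible   # >= 10% chance recovered volumes meet or exceed 3P
print(f"1P = {proved}, 2P = {reserves_2p}, 3P = {reserves_3p}")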
The exploration processes for oil and gas are the same. Both oil and gas reservoirs are buried deep underground, at depths of a few hundred meters to many thousands of meters. Other than the obvious use of boats versus land surface vehicles and the presence of surface features, there is relatively little difference between exploration on land and on water. As recently as 50 years ago, the primary method of discovering hydrocarbon deposits was based on clues and indicators visible on the surface. The presence of oil seeps (and to a limited extent, gas seeps) often led to the discovery of the first hydrocarbons. The study of geophysics uses physical properties measured either on the surface or inside wells to determine the properties and structure of rocks below the surface. The first geophysical methods were simple gravity and magnetic surveys on the surface, progressing to subsurface measurements of seismic energy waves, radioactivity, and sonic properties.

- Gravity surveys measure slight variations in gravity readings to locate subsurface rocks of different densities.
- Magnetic surveys measure changes in the magnetic field over an area to locate sedimentary rocks, which have a lower magnetic field than igneous and metamorphic rocks.
Mapping the variations in gravity and magnetic readings over a large area produces subsurface maps showing the lateral extent of potential reservoir rocks. By drilling at the high point of the sedimentary rock formations, exploration professionals hoped to locate the peak of an anticlinal trap and find a hydrocarbon reservoir. This simple methodology was not reliable, since drilling at the top of a dome-shaped feature did not necessarily mean hydrocarbons were there to be found. Also, these methods did not provide the resolution required to find the faults and other structural changes that would impact the presence and trapping of hydrocarbons. Because of the methodology's inaccuracy, companies found commercial quantities of hydrocarbons less than one-half of the time.
The development of seismic technology may be the most profound development in the hydrocarbon industry since the discovery of the first oil wells. Explosives, air guns, or vibrating pads directly on the surface generate low-frequency energy waves, which reflect and refract as they pass through the different rock layers. The speed at which the waves travel is related to the density of the individual rock layers. Each rock layer has its own density, which determines the time it takes for the waves to pass through the layer (to be refracted into the next deeper layer) or to be reflected back to the surface. Sensitive microphones (known as geophones on land and hydrophones at sea) record the time taken for the waves to return to the surface after they have been refracted and reflected in the earth.

Combining the hundreds of measurements from each source location, and repeating the measurements after moving the source hundreds of times, can produce a fairly accurate representation of the subsurface geometry. A significant advancement over the past decade or so has been the ability to visualise direct hydrocarbon indicators (DHIs) through sophisticated processing of seismic data. Under ideal conditions, gas reserves, due to their density contrasts, can be shown directly on a processed seismic image. DHIs for oil reserves have proved more difficult to show.
The process of drilling an exploration well is deceptively simple. Drilling a producing well is similar to drilling an exploration well; the main difference between the two types of wells is the amount of information known before drilling. Exploration drilling is riskier because the well and reservoir conditions are not well known. Penetration of zones with unknown pressures and hydrocarbon content can result in the uncontrolled release of hydrocarbons which, if ignited or released to the surface, could result in human disaster and financial loss. Wireline logging surveys follow the drilling of exploration wells. A spool and data-transmission cable lower sophisticated sensors into the well bore. The sensors measure various physical and chemical properties of the rock layers and the fluids present in the pores of the rock. Common measurements include resistivity, sonic porosity, and nuclear radiation and density.
Once a reservoir has been identified and confirmed by exploration wells, petroleum professionals formulate field development plans specific to the reservoir. In many countries, development plans have to be approved, as per conditions specified in legal contracts, by the national government via the national oil company or designated ministry prior to beginning any activities. Gas production methodology is a function of the type of gas reservoir and the production stage in the life of the particular reservoir. A simple oil and gas reservoir may initially produce high volumes of oil relative to gas, but as oil production and reservoir pressure decline, an increasing amount of gas may be produced. This increases the gas/oil ratio (GOR) of the produced hydrocarbons. Reservoir management may dictate producing oil first, using the natural gas pressure to increase oil recovery rates. As the GOR increases, reservoir pressure may become too low for natural production, and without secondary production methods, production from the reservoir will eventually cease. In this case, the total recovery factor could be as low as 20-40%.
An offshore field is developed differently from an onshore field. Offshore development wells are usually tied to a fixed platform with a set number of slots or well bores. A platform may have a large number of slots, each leading to a well deviating away from the platform and draining a specific portion of the reservoir. A more recent innovation for offshore development uses subsea completions. Wells are drilled from mobile rigs; once the wells are completed and all the piping and valves have been installed, the wells are connected to a central hub, also on the seafloor. This allows wells to be further apart than if they were all drilled from a single fixed platform.
Wells are usually drilled by rotating bits connected to drill pipe. As the drill bit penetrates the ground, drill pipes are continuously added at the surface joint, extending the reach of the drill string and keeping the drill bit at the end of the assembly. Wells may be as deep as 15,000 to 20,000 ft (4,500 to 7,000 m). To prevent sections of the well from collapsing after drilling, and to prevent fluid movement between different reservoirs, heavier pipes, called casings, are lowered into the well bore and cemented in place. Drill bits, which start at the surface at diameters of up to 30 inches (75 cm), have to be successively smaller to pass through multiple levels of casing. Accordingly, the diameter of the well decreases as the depth increases. Well diameters at the reservoir depth are typically 7 inches (18 cm) or smaller, with producing pipes, called tubing, inserted inside the casing.

After a well is drilled, data is usually collected to determine the type and extent of the reservoir. Cores are actual samples of the rock and the fluids in the rock. They are obtained during the drilling process using hollow drill bits. As in the exploration process, wireline logging surveys can be used to measure various geophysical and chemical properties that indicate reservoir characteristics. After a well is cored or logged, or both, casing cemented into place prevents hydrocarbons and water from entering the well bore. Explosive guns with shaped charges at precise depths then perforate the casing to allow hydrocarbons to flow into the well bore and to the surface. This allows selective production from the hydrocarbon zones and prevents undesired water from entering the well bore and mixing with produced hydrocarbons.
Oil wells in early production stages may be produced using natural reservoir energy. Pressure differentials between the lower-pressure well bore and the higher-pressure virgin reservoir allow oil and gas to flow naturally into the well bore. As the reservoir matures, declining reservoir pressure allows dissolved gases to vaporise into bubbles in the oil column. As oil flows up the tubing, the pressure continues to decrease, and the gas bubbles expand further. The expanding bubbles help to reduce the density and weight of the fluid in the well, assisting its natural flow. However, continued reduction in reservoir pressure increases the GOR to the point that gas bubbles begin to form within the reservoir itself. These reservoir gas bubbles eventually form continuous channels of gas, leaving some of the oil behind in the well bore, unable to rise to the surface. At this stage, energy from external sources, such as in-well pumps, may be needed to produce the remaining oil. Gas may be injected into the oil well to help reduce the weight of the oil in the well bore and assist oil production. Gas injected into an overlying gas reservoir can also be used to maintain oil reservoir pressure.
Similar to the process of converting organic matter to natural gas, the natural conversion of organic materials to coal also generates large amounts of methane. Methane is stored within the coal bed in much larger quantities per volume of rock than in conventional rock reservoirs. Much coal, and thus much of the methane contained within the seams, occurs close to the surface. This allows cheaper exploration and production from less-expensive, but less-productive (because of lower reservoir pressure), shallow wells. Methane produced from coal seams is called coal bed methane, coal seam methane, or coal seam natural gas. Other than usually having a lower heat value because of the lack of heavier gas compounds, it is similar to gas produced from conventional gas reservoirs. Water permeates coal seams, and water pressure traps any coal bed methane present. Producing coal bed methane requires first removing water to decrease pressure on the coal matrix, allowing free gas to flow into the well bore. The water is usually saline, and disposing of it can add significant costs to coal bed methane production. Conventional gas is produced from relatively homogeneous reservoirs with predictable drainage and flow rates. By contrast, coal seams are variable in terms of their thickness, gas saturation, and depositional environment. Consequently, the production profile of CBM is very different from conventional gas production. CBM fields require ongoing well drilling and water disposal investments; however, this results in a longer plateau production profile and a longer field life. CBM, along with tight gas sands, shale gas, and gas hydrates, is collectively termed unconventional gas resources. Shale gas is methane trapped in fractures and within the pore spaces of impermeable shale layers. Gas hydrates are methane trapped within ice crystals, and tight gas sands are gas deposits in very low permeability sedimentary rocks.
The type and extent of natural gas processing depends on the original gas
composition and the specifications of the customer. These plants range from
relatively simple plants, where oil, impurities and water are removed from the
produced gas, to complex plants. In the latter, various hydrocarbon compounds
are separated from the gas stream and large quantities of gas, liquids and water
are handled. The largest operating cost component in gas processing is
compressor fuel cost required to move gas between the various processing units.
Gas processing is necessary for the following reasons:
- Sales gas specifications: customers demand that gas delivered to them meet certain compositional and pressure specifications.
- Pipeline transport: pipelines, especially those that aggregate gas from multiple gas fields, often limit the composition of the feed gas to maintain pipeline flow and reduce corrosion. Produced field gas may be processed to remove solids (such as sand), water, carbon dioxide and hydrogen sulphide.
- Liquids recovery: natural gas liquids (NGL) such as ethane, propane, butane and condensates, which can be recovered as liquids on the surface, are often removed from the natural gas stream and sold separately. Petrochemical and other consumers often buy NGLs directly from the gas producer.
- LNG and feedstock specifications: LNG plants have rigid and tight specifications for their feed gas. Any impurities in the gas, especially water, carbon dioxide and heavier hydrocarbons, will seriously impact their LNG production.

Water, which is often produced with natural gas, must be removed to prevent
corrosion. Freezing water in pipelines may form hydrates (ice-like compounds of
water and hydrocarbon) that can block pipes and damage processing units.
Dehydrating gas is usually achieved by:
- Physical separation, involving cooling the gas to below the initial dew point, forcing the water to separate by gravity;
- Contacting the gas with solid drying agents such as silica or alumina, which attract water molecules;
- Contacting the gas with liquid absorbers such as glycol.

Produced gas may also contain sand and other solid particles such as scale or
corrosion products. Sand is a particular problem due to its destructive impact
inside pipelines. Transporting gas containing high levels of sand can blast the
inside walls of the pipeline and is a leading cause of pipeline aging. Large filters
are often installed at the inlet of the pipeline systems to prevent these particles
from entering the pipe.
Carbon dioxide is another compound that must be kept below specified limits as dictated by the gas market. Even where removing carbon dioxide is viable, disposal is often a problem, as most places in the world prohibit direct venting due to greenhouse gas emission limits. Carbon dioxide can be reinjected into porous rock layers in the ground; however, the cost of wells and compression may make this option uneconomic, thereby limiting the exploitation of the gas resource.
During the gas combustion process, hydrogen sulphide is released as sulphur oxides, which are air pollutants, and its emissions are controlled in most markets. Gas sales contracts nearly always state an upper acceptable maximum threshold, often at values of 10 ppm or less.
The reduction in the volume of input natural gas due to removal of NGLs, water,
impurities, and plant fuel is called shrinkage. This volume can be significant
when input gas is rich or contains a high proportion of impurities such as carbon
dioxide.
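A short sketch of the shrinkage calculation follows; the raw gas volume and the amounts removed are hypothetical.

# Shrinkage: reduction in sales-gas volume relative to the raw plant input.
raw_gas_mmcfd = 100.0        # raw gas into the plant, MMcf/d (hypothetical)
ngl_and_impurities = 12.0    # volume removed as NGLs, water, CO2, etc.
plant_fuel = 3.0             # gas consumed as fuel within the plant

sales_gas = raw_gas_mmcfd - ngl_and_impurities - plant_fuel
shrinkage_pct = (raw_gas_mmcfd - sales_gas) / raw_gas_mmcfd
print(f"sales gas = {sales_gas:.0f} MMcf/d, shrinkage = {shrinkage_pct:.0%}")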
Extracting NGLs from natural gas is usually achieved by first separating methane from the liquids, followed by separating the remaining NGLs into ethane, propane, butane, and condensates. Refrigeration is the most common method of removing methane: the gas stream is dropped to the temperature at which the heavier NGLs liquefy and separate from the methane, which liquefies at much lower temperatures than the NGLs. Another method of separating methane is the absorption method, which relies on an oil substance with an affinity to attract only NGL molecules, thus separating out the methane. Once methane has been removed, the remaining NGL stream is usually sent to fractionation units, where the temperatures are increased, allowing the different hydrocarbons to reach their boiling points in separate stages: de-ethaniser, then de-propaniser, then de-butaniser.

1.20 Transport and Storage


The biggest challenge after initial gas field discovery is transporting the gas from the field to the consumer. Depending on seasonal variations, methane may be added to and withdrawn from gas storage facilities before sale. At normal surface conditions, produced gas has a relatively low energy density. This density must be increased via compression to allow gas, like water or other fluids, to move from high-pressure environments to low-pressure environments. The cost of transporting one energy unit of gas via an onshore pipeline is three to five times higher than transporting the equivalent energy unit of oil. As a result of the difficulties in transporting natural gas, many stranded gas reserves have been discovered and mapped, but their size, composition, and distance to market make them uneconomic to develop.
The first step in transporting gas occurs once gas reaches the surface via the
tubing pipe. Produced gas flows through surface valves and flanges collectively
known as wellhead or Christmas tree assembly. From the wellhead, the fluids
may be separated into phases using simple gravity separators and sent to a
central processing plant.
Pipelines are the most common, and usually the most economic, delivery system to transport gas from the field to the consumer. Pipelines are a fixed, long-term investment that can be uneconomic for smaller and more remote gas fields. The volume of gas that can be transported in a pipeline depends on two main factors: the pipeline operating pressure and the pipe diameter. In order to handle increasing demand, it is likely that operating pressures will increase rather than the size of the pipe. In some parts of the world, such as the Middle East, reservoir pressure alone may be sufficient to power the local pipeline network. If the gas to be transmitted is rich, containing heavier compounds such as ethane and propane, these compounds may form liquid slugs inside the pipe as the pressure decreases, resulting in two-phase flow inside the pipe. If the pipe operates at a higher pressure, the gas maintains a dense phase that is neither liquid nor vapour; rather, it is a tight mixture of the two phases, avoiding the difficulties of two-phase flow.
Deciding which type of compressor to use is a function of:
- Cost of gas versus electricity: the amount of fuel gas used by a compressor can be significant, and may materially impact the quantity of gas available for sale.
- Operating expenses: gas compressors are more complicated, with a higher number of moving parts.
- Regulatory constraints: gas compressors are louder and have more emissions than electrical compressors. Obtaining permits to operate gas compressors near urban areas may be difficult.
- Availability of electricity: many long-distance gas transmission lines cross remote areas where reliable electrical power may be a problem. Gas compressors allow the pipeline to be independent of the local electrical utility.
- Maintenance and emergency plans: compressors require periodic preventive maintenance to maintain efficiency.

Steel, the main component of pipelines, is susceptible to oxidation, cracking and corrosion. Electrons in exposed steel surfaces, especially in humid or wet soil environments, flow away from the pipe, which becomes oxidised and brittle. Modern pipelines are coated, either at the factory or prior to installation, with a variety of materials to mitigate the effect of oxidation and reduce the exposed surface area. Cathodic protection, used when the pipeline is buried or submerged in water, applies a direct current to the surface of the pipeline. This offsets the corrosion current by causing electron flow in the opposite direction to the corrosion flow, mitigating the flow of electrons and drastically reducing the level of corrosion in the pipeline. Long-distance pipelines are often segmented with a number of sectionalizing valves. If an incident causes the pipeline pressure to drop suddenly, the nearest sectionalizing valve can be shut off remotely or manually to reduce the amount of escaping gas and minimise the safety hazard.
The gas industry uses an interesting unit to measure pipeline costs, dollars per inch-kilometre, i.e. the cost per inch of diameter per kilometre of length. In North America and Europe, most long-distance pipelines are operated by pipeline companies that do not own the gas they transport. These pipelines operate as open access carriers, and the owners of the gas contract with the pipeline companies to transport the gas for a fee, or tariff. If requests for space on the pipeline exceed the line's capacity, the space is allocated among shippers in a non-discriminatory manner, usually on a first-come, first-served basis. A local or federal government agency regulates the tariff that the pipeline may charge, monitoring the tariff rate based on an allowed rate of return and valuations of the assets.
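The dollars-per-inch-kilometre convention above scales directly with diameter and length, as the sketch below shows. The unit cost, diameter, and route length are all hypothetical assumptions.

# Pipeline cost estimate from a $/inch-km unit cost (hypothetical figures).
cost_per_inch_km = 50_000.0    # dollars per inch of diameter per km of length
diameter_in = 36               # pipe diameter, inches
length_km = 800                # pipeline length, km

estimated_cost = cost_per_inch_km * diameter_in * length_km
print(f"estimated pipeline cost = ${estimated_cost / 1e9:.2f} billion")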
LNG currently represents the most exciting aspect of the international gas landscape. LNG is simply an alternative method of transporting methane from the producer to the consumer. Methane is cooled to -161.5°C, converting its gaseous phase into an easily transportable liquid whose volume is approximately 1/600th of the equivalent volume of gaseous methane. At the receiving location, liquid methane is offloaded from the ship and heated, allowing its physical phase to return from liquid to gas; this gas is then transported to gas consumers by pipeline in the same manner as gas produced from a local gas field. The LNG process is more complex than pipeline transportation. The LNG chain consists of discrete sections: upstream, midstream, liquefaction plant, shipping, regasification, and distribution. Ownership of each component is usually not consistent, necessitating complex agreements between all parties.
LNG technology is not new. The first commercial LNG facility was built in the United States in 1941, in Cleveland, as a peak load shaving facility. Gas was liquefied during hours or seasons of low demand and heated back to the gaseous phase to be pumped into the pipeline grid during periods of high demand. The decision to commercialise a gas field by either LNG or direct pipeline is related to the distance to market from the gas reservoir. Rules of thumb typically followed include:
- The gas market is more than 2,000 km from the field
- The gas field contains at least 3-5 tcf (trillion cubic feet) of recoverable gas
- Gas production costs are less than $1/MMBtu, delivered to the liquefaction plant
- The gas contains minimal impurities, such as carbon dioxide or sulphur
- Presence of a marine port
Units used in the LNG trade can be confusing. Produced gas is measured in volume (cubic metres or cubic feet), but once it is converted into LNG, it is measured in mass units, usually tonnes or millions of tonnes.
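The sketch below moves between the mass and energy units used in the trade. The two conversion factors are common industry rules of thumb rather than figures from these notes, and they vary with gas composition; the cargo size is hypothetical.

# Approximate LNG unit conversions (rule-of-thumb factors, composition-dependent).
MMBTU_PER_TONNE_LNG = 52.0          # approximate energy content per tonne of LNG
CUBIC_FEET_GAS_PER_TONNE = 48_700   # approximate gas volume regasified from one tonne

cargo_tonnes = 65_000               # hypothetical cargo size

cargo_mmbtu = cargo_tonnes * MMBTU_PER_TONNE_LNG
cargo_bcf = cargo_tonnes * CUBIC_FEET_GAS_PER_TONNE / 1e9
print(f"cargo = {cargo_mmbtu / 1e6:.1f} million MMBtu = {cargo_bcf:.1f} Bcf of gas")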

The upstream and midstream sections of the LNG chain are identical to traditional gas systems, with identical gas wells, wellheads, and field processing facilities. Because LNG requires gas to be cooled to very low temperatures, care must be taken to remove all impurities, especially water, from the methane stream prior to processing by the liquefaction plant. Gas from a number of fields may be commingled prior to liquefaction. An important consideration in the evaluation of an LNG project is the cost of feed gas. Since each component of the LNG chain adds cost to the process, yet the final product must be competitive with other energy sources in the consuming market, the initial cost of the feed gas must be as low as possible. Typically, this gas must be delivered to the LNG plant at a cost of less than $1/MMBtu for the LNG project to be economic. Any heavier hydrocarbons removed from the methane stream will be sold, either by the LNG plant or by the upstream resource holders, and could have a significant impact on overall plant economics. The importance of non-methane sales in overall project economics cannot be discounted.
An LNG plant is divided into independent trains operating in parallel. The figure below shows the components of a typical train.

The LNG market has adopted two main liquefaction processes the pure
refrigerant cascade process (also known as the Phillips process), and the
precooled propane mixed refrigerant MCR process. Competing LNG
processes are evaluated by their relative thermal efficiency, measuring the
output energy of the LNG versus input energy of the feed gas. The difference
between these values relates to the energy consumed in the liquefaction process
and depends on the efficiency of the process. It also reflects the efficiency of the
refrigeration compressors, the quality of the feed gas, and the ambient
temperature of the region.
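Thermal efficiency can be expressed as the energy leaving the plant as LNG divided by the energy entering as feed gas. A minimal sketch, using an assumed fuel-and-losses figure consistent with the 8-10% plant fuel consumption noted below:

# Thermal efficiency of a liquefaction train (illustrative figures only).
feed_gas_energy = 100.0   # MMBtu of feed gas delivered to the plant
fuel_and_losses = 9.0     # MMBtu consumed by refrigeration compressors and losses

lng_energy_out     = feed_gas_energy - fuel_and_losses
thermal_efficiency = lng_energy_out / feed_gas_energy
print(f"Thermal efficiency ~= {thermal_efficiency:.0%}")   # ~91%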
The cascade process uses three refrigeration circuits propane, ethylene and
methane to cool the purified gas to the required temperature. The figure below
shows a simplified cascade process flow diagram.

Cascade Process. In the first circuit, compressed propane cools the feed gas to the liquid temperature of propane (-30°C) through a series of evaporators. The second circuit repeats the process with ethylene, cooling the gas to the liquid temperature of ethylene (-100°C). The third circuit uses methane to further drop the temperature to -160°C. Boil-off (when liquid LNG heats back to gaseous phase) from the final tanks is further compressed and injected back into the circuit to increase efficiency.
Precooled mixed refrigerant process. In contrast to the cascade process,
where separate circuits step down the temperature of feed gas to the desired
liquefaction temperature, the mixed refrigerant process uses a combined mixture
of methane, ethane, propane, nitrogen, butane, and pentane gases as the
refrigerant.

Air Products improved the mixed refrigerant process, marketing it as the Multi-Component Refrigerant process. To increase process efficiency, the system first precools the gas to -30°C with propane in a process similar to the first circuit of the cascade process. The next cycle uses a combination of nitrogen, propane, ethane, and methane refrigerants to bring the temperature down to -161.5°C in a series of stages. Liquefaction is achieved when methane gas is allowed to come into contact with cold metal spiral-tube heat exchangers containing the liquefied refrigerant mixture that cools the temperature of the gas. Liquefaction plants are typically the most expensive element in an LNG project. Because 8-10% of gas delivered to the plant is used to fuel the refrigeration process, overall
operating costs are high. As economies of scale can be significant, newer LNG
plants have larger, more efficient trains, and may have shared facilities,
minimising unit costs.
All LNG projects initially market their production based on nameplate or design
estimates of plant capacities. History has shown that this estimate is often too
conservative, and the actual LNG available for sale exceeds the nameplate
capacities. Once a plant has been operational, additional production is often
available when engineering modifications and equipment upgrades (known as
debottlenecking) increase efficiencies.
LNG is usually transported to the gas consumer by specially designed
refrigerated ships. The ships operate at low atmospheric pressure (unlike LPG
carriers, which operate at much higher pressures), transporting the LNG in
individual insulated tanks to dramatically reduce the chance of a catastrophic
explosion. Insulation around the tanks maintains the temperature of the liquid
cargo, keeping the boil-off (conversion back to gas) to a minimum. Because most
older ships do not have active refrigeration systems onboard, ships use the
produced boil-off gas as engine fuel. Newer ships have the capacity to convert
the boil-off back to LNG. The thickness and effectiveness of the insulation
system, the surface area of the tanks, ambient temperature conditions, and
distance to market all determine the quantity of boil-off produced. On a typical
voyage, an estimated 0.1-0.25% of the cargo converts to gaseous phase daily. In
a typical 20-day return voyage from the LNG plant to the customer, the total
loss, net of voyage and loading/unloading boil-off, is 2-6% of the total volume.
Because regular steel becomes brittle at low temperatures, ship tanks use
special alloys of steel with nickel and aluminium. LNG ship size is expressed in
cubic meters of maximum LNG volume capacity. Shipping costs are usually
expressed as daily charter rates, which can vary between $27,000/day for
smaller ships to $150,000/day for the larger, more efficient ships. Most LNG
plants have their own dedicated fleet of LNG ships, operating a virtual pipeline.
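The 2-6% voyage loss quoted above can be checked by compounding the daily boil-off rate over the length of the voyage. A minimal sketch, with the ship size assumed for illustration:

# Compound a constant daily boil-off fraction over a voyage (illustrative only).
def remaining_cargo(initial_m3: float, daily_boiloff: float, days: int) -> float:
    """Cargo left after 'days' of a constant daily boil-off fraction."""
    return initial_m3 * (1.0 - daily_boiloff) ** days

cargo_m3 = 145_000.0          # assumed ship capacity in m3
for rate in (0.001, 0.0025):  # 0.1% and 0.25% of cargo per day
    left = remaining_cargo(cargo_m3, rate, days=20)
    print(f"daily boil-off {rate:.2%}: ~{(cargo_m3 - left) / cargo_m3:.1%} lost over a 20-day round trip")
# prints roughly 2.0% and 4.9%, consistent with the 2-6% range above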
LNG receiving terminals, also called regasification facilities, receive LNG ships, store the LNG until required, and send out gaseous methane into the local pipeline grid. Some terminals, especially in the United States, also load LNG into trucks, which deliver to smaller satellite markets not connected to the main pipeline grid. Historically, terminals were built by LNG buyers who restricted third-party access by other potential importers. This is, however, likely to change, especially in Europe, where legislation requires terminal owners to open their facilities to users willing to pay the specified service fee. The main components of a regas facility are the offloading berths and port facilities, LNG storage tanks, vaporisers to convert the LNG into gaseous phase, and the pipeline link to the local gas grid. Storage tanks and vaporiser process units typically account for 25% and 35%, respectively, of the facility capital costs. Safety features, unloading berths, and general construction absorb the remaining 40% of costs.

Berths allow the LNG ships to connect to the terminal via unloading arms.
Traditionally, berths are located in existing port facilities. LNG tankers may also
be offloaded offshore, away from congested and shallow ports. This is
accomplished using a floating mooring system via undersea insulated LNG
pipelines to a land-based regas facility. This plan requires the LNG tanker to be
stationed at the offshore regas facility for the duration of the regas process,
around three to four days versus one to two days for conventional unloading.
The increased shipping costs and additional risks may make this venture economically questionable. However, the cost savings from eliminating the need for storage tanks and large onshore facilities, in locations where regulatory permits make siting facilities difficult, may prove this to be a viable alternative, especially for seasonal gas deliveries. The tank size is a function of the volume
and frequency of deliveries expected from a particular LNG plant, as well as the
gas demand from the market.
The largest component of receiving terminal capital cost is the vaporiser process
equipment. Vaporisers warm LNG from -161.5°C, converting methane from liquid phase back into gas. Conceptually, vaporisers are relatively simple units in which LNG is pumped through tubular or panelled heat exchangers, allowing the temperature to rise.
The storage of natural gas is an important component of the gas transportation
system. This is true especially in North America and Europe, where gas
production areas are located far from gas consuming areas, and demand for gas
depends on seasonal weather. Undoubtedly, as demand increases and seasonal
demand swings increase, storage will enter the transportation equations of
markets such as the Middle East and Asia.
There are basically two uses for natural gas storage facilities: meeting longer-term base load (seasonal swing) requirements and meeting shorter-term peak load (daily or intra-day swing) requirements. In regions where natural gas is
predominantly used for power generation, these swings will match electrical
power demand cycles. Base load storage capacity meets seasonal demand
increases. Typically, the turnover rate for natural gas in these facilities is
seasonal; natural gas is generally injected during low-demand months and
withdrawn during the high-demand season. These reservoirs are larger, but their
delivery rates are relatively low, meaning the amount of natural gas that can be
extracted on any particular day is limited. Instead, these facilities provide a
prolonged, steady supply of natural gas. Peak load storage facilities, on the other
hand, are designed to have high deliverability for short periods of time,
allowing natural gas to be withdrawn from storage quickly should the demand
increase beyond immediate supply. The simplest form of peak load storage is to
use the pipeline itself. Depending on the pipeline length, available free volume,
and safe operating pressure limits of a particular pipeline system, the pipeline
may be line packed by injecting additional gas during periods when demand is
less than produced gas supply. This results in an increase in gas pressure inside
the pipe. During peak load periods, when demand exceeds production capacity,
the additional gas will be used by consumers, allowing gas pressure to fall back
to lower levels.
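The amount of gas that line packing can add is roughly proportional to the pressure increase. A minimal sketch, treating the gas as ideal and isothermal and using assumed pipeline dimensions and pressures:

import math

# Rough line-pack estimate for one pipeline segment (ideal gas, isothermal).
# Real calculations would include compressibility factors and temperature.
diameter_m   = 0.9       # assumed internal diameter
length_km    = 100.0     # assumed segment length
p_normal_bar = 60.0      # assumed normal operating pressure
p_packed_bar = 70.0      # assumed packed pressure within safe limits
p_std_bar    = 1.013     # standard (atmospheric) pressure

pipe_volume_m3   = math.pi * (diameter_m / 2) ** 2 * length_km * 1_000
extra_std_volume = pipe_volume_m3 * (p_packed_bar - p_normal_bar) / p_std_bar

print(f"Pipe volume: {pipe_volume_m3:,.0f} m3")
print(f"Additional gas stored by line packing: ~{extra_std_volume / 1e6:.2f} million standard m3")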
The more complex forms of peak load storage are located underground. These
facilities cannot hold as much volume as underground base load facilities;
however, they can deliver smaller amounts of gas more quickly and can also be
replenished in a shorter amount of time than base load facilities. Underground
peak load facilities can have turnover rates as short as a few days or weeks. In
North America and Europe, salt caverns are the most common type of
underground peak load storage facility, although aquifers may meet these
demands as well. For all types of underground gas storage, the process is quite
simple: lean natural gas is injected into the formation at a pressure exceeding
the natural pressure of gas and other fluids in the storage facility. When gas is
required for consumption, it is produced using standard gas wells and facilities.
Thus, similar to standard gas wells, the higher the pressure in the storage
facility, the more readily the gas may be extracted. Once the pressure drops to
below that of the wellhead, there is no pressure differential left to push the
natural gas out of the storage facility. This means that, in any underground
storage facility, a certain amount of gas must always remain in place. This
physically unrecoverable volume of gas, known as base gas or cushion gas,
has been permanently embedded in the formation. It must remain as permanent
inventory in the storage facility to provide the required pressurisation to extract
the storage gas at minimum deliverability rates. The actual volume of natural
gas in the storage reservoir that can be extracted during the normal operation of
the storage facility is known as working gas. This is gas that is stored and
withdrawn on each cycle, minus base gas quantity. At the beginning of a
withdrawal cycle, the pressure inside the storage facility is at its highest,
meaning working gas can be withdrawn at a high rate.
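Working gas is simply total capacity less the cushion gas that must stay in place. A minimal sketch using the cushion-gas fractions quoted later in these notes for salt caverns (about 33%) and aquifers (up to about 80%); the depleted-reservoir fraction and the total capacity are assumptions:

# Working gas = total capacity minus cushion (base) gas (illustrative figures).
def working_gas(total_capacity_bcf: float, cushion_fraction: float) -> float:
    """Gas that can actually be cycled during normal operation."""
    return total_capacity_bcf * (1.0 - cushion_fraction)

total_bcf = 100.0   # assumed total storage capacity
for facility, cushion in [("salt cavern", 0.33), ("depleted reservoir", 0.50), ("aquifer", 0.80)]:
    print(f"{facility:<18}: working gas ~ {working_gas(total_bcf, cushion):.0f} Bcf")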
Natural gas is usually stored underground in large storage reservoirs. There are
three main types of underground storage: depleted gas reservoirs, aquifers, and salt caverns. LNG, on the other hand, is always stored in special thermally insulated steel tanks that are either buried or at the surface.

Depleted gas reservoirs - the most prominent and common form of underground storage. Depleted reservoirs have been tapped of all their
recoverable natural gas. This leaves an underground formation that is
geologically capable of holding natural gas. In addition, there are also
facilities, both above the ground and wells into the reservoir, left over from
when the field was productive. Depleted reservoirs are also attractive
because their geological characteristics are already well-known, thus
minimising leaks, and are the cheapest and easiest to develop, operate
and maintain.
Aquifers - aquifers are porous, permeable rock formations underground
that may act as natural water reservoirs. In certain situations, these water
containing formations may be reconditioned and used as natural gas
storage facilities. As they are more expensive to develop than depleted

reservoirs, these types of storage facilities are usually used only in areas
where there are no nearby depleted reservoirs. Aquifers are the least
desirable and most expensive type of natural gas storage facility. The
geological characteristics of aquifer formations are not as thoroughly
known as depleted reservoirs. Significant time and capital go into defining
the geological characteristics of an aquifer and determining its suitability
as a natural gas storage facility. Since no prior equipment exists,
associated infrastructure must also be developed. Since aquifers are
naturally full of water, in some instances powerful injection equipment
must be used to allow sufficient injection pressure to push down the
resident water and replace it with natural gas. In addition to these
considerations, aquifer formations typically require a great deal more
cushion gas than do depleted reservoirs. In some cases, cushion gas
requirements can be as high as 80% of the total gas volume.
Salt caverns - underground salt formations offer another option for
natural gas storage. Typically, these thick formations were created from
natural salt deposits that, over time, push up through overlying
sedimentary layers to form large dome-type structures. Salt formations
are strong and homogeneous, thus minimising the escape of injected gas.
Once a suitable salt dome or salt bed deposit is discovered, it is necessary
to develop a cavern within the formation to actually store the gas. Injected
water is used to dissolve and extract a calculated amount of salt from the
deposit, leaving a large empty space surrounded by non-dissolved salt
that will act as a trap for injected gas. Though the leaching process is
quite expensive, once treated, the cavern offers an underground natural
gas storage vessel with high deliverability and minimal leakage. In
addition, cushion gas requirements are the lowest of all three storage
types, with salt caverns requiring only about 33% of total gas capacity to
be used as cushion gas. They cannot hold the volume of gas necessary to
meet base load storage requirements. Deliverability from salt caverns is
typically much higher than from either aquifers or depleted reservoirs.
Natural gas stored in a salt cavern, therefore, may be more readily and
quickly withdrawn, and caverns may be replenished with natural gas more
quickly than either of the other types of storage facilities. Sophisticated
salt cavern operators may be able to cycle their storage four or five times
a year, often charging fees higher than other storage operators.

1.21 Contracts and Project Development


Before any exploration or production activity takes place, a legal framework must
be agreed upon by all parties to ensure adequate value sharing of the resource.
These agreements involve the companies wishing to explore and produce the
potential resource and the host government or the national oil company
representing the host government. Ideally, revenue sharing regulations should
be clear, specific, transparent, and auditable. Unfortunately, the opposite is often
the case in developing countries. The resulting contracts are often kept secret,
revenues are often siphoned off by corrupt officials, and the company wishing to
explore does not always receive the security and stability that it requires to
conduct its operations.
Concession contract. The tax/royalty concession contract is the conventional
type of contract system in North America, Argentina, Australia, and occasionally in the Middle East. Under this contract, the oil or gas company owns
the assets and installations and receives all the production from the assets. In
return, it bears all the operating risks, costs, and investments and agrees to pay
the host government a royalty calculated on the amount of production in
addition to income tax and any other tax provided for under local legislation. The
royalty is a percentage of gross production (either in cash or in gas volumes)
paid to the host government before any cost deductions. Income tax, on the
other hand, is a percentage of net income or profits paid to the government after
deducting costs and royalties. Government take is comprised of total
government revenues, including royalty and taxes. Net company revenues are
called company take.
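A minimal numerical sketch of the concession split described above; the royalty rate, tax rate, and cost figures are illustrative assumptions, not terms of any actual contract:

# Simplified tax/royalty concession split (illustrative figures only).
gross_revenue = 100.0    # value of production
costs         = 40.0     # deductible operating and capital costs
royalty_rate  = 0.125    # royalty on gross production, before cost deductions
tax_rate      = 0.30     # income tax on profit after costs and royalty

royalty        = royalty_rate * gross_revenue
taxable_income = gross_revenue - costs - royalty
income_tax     = tax_rate * max(taxable_income, 0.0)

government_take = royalty + income_tax
company_take    = gross_revenue - costs - government_take

print(f"Government take: {government_take:.2f}, company take: {company_take:.2f}")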
Production sharing contract. Also known as production sharing agreement
(PSA), this contract is more legally complex than a concession contract. It has
become the de facto standard in Asia, Africa, and parts of South America and the
Middle East. Conceptually, under a PSC, the gas company is a contractor, without
ownership of the minerals in the ground. The oil and gas company supplies the
risk capital and is compensated from a share of potential future earnings
according to a predetermined sharing arrangement. If financing is required,
reserves may be used as collateral for the loan. Any assets installed by the company to produce the resource eventually become the property of the host government, which repays the cost of such assets to the company out of a share of the production, known as cost recovery. The host government, via its
national oil company, may also participate in the operational decision-making
process. Under a PSC regime, the company agrees to perform and finance
exploration operations at its sole risk. The host government contract specifies
the amount of exploration work to be performed in number of wells or seismic
survey kilometres. If commercial quantities of hydrocarbons are found, both the
company and the host government collaboratively declare commerciality. This
allows the company to begin a field development program. The company
receives a portion of production known as cost gas which it can sell to cover its
investments and operating costs. There is typically an upper cost recovery limit
on the amount of cost gas that may be charged in a given year, calculated as a
percentage of total production value. The company and the host government
share the remaining production volumes, known as profit gas.
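A minimal sketch of the cost gas / profit gas mechanics described above; the cost-recovery ceiling, profit split, and values are illustrative assumptions:

# Simplified production sharing split for one period (illustrative figures only).
production_value  = 100.0
recoverable_costs = 55.0    # costs the contractor is entitled to recover
cost_recovery_cap = 0.50    # ceiling on cost gas as a fraction of production value
gov_profit_share  = 0.60    # government share of profit gas

cost_gas   = min(recoverable_costs, cost_recovery_cap * production_value)
profit_gas = production_value - cost_gas

government_share = gov_profit_share * profit_gas
contractor_share = cost_gas + (1.0 - gov_profit_share) * profit_gas
carry_forward    = recoverable_costs - cost_gas   # unrecovered costs carried to later periods

print(f"Cost gas: {cost_gas:.1f}, profit gas: {profit_gas:.1f}")
print(f"Government: {government_share:.1f}, contractor: {contractor_share:.1f}, carry-forward: {carry_forward:.1f}")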
Other host government agreements are pure service contracts, where the
company is paid a set fee (possibly a fee per unit of gas produced) for its
technical services. In buyback contracts, adopted by Iran, the company pays for
all investments and is reimbursed its expenses plus a set rate of return from
future revenues. Under these contracts, the company usually has no claim to the
reserves or the production.
The volume of gas available for sale by the oil and gas company is a function of
the volume of gas produced and the fiscal terms in place. Cost of production,
taxes, government controls, or market forces set by local or regional supply and
demand often determine the price of gas sold. In the case of LNG or international
pipelines, sales price is often determined by market forces in the importing
country. The sales price may be netted back (by deducting transportation, terminal, and re-gas costs) to the producing company to determine income for the producing company or government.
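A minimal net-back sketch; all prices and costs are illustrative assumptions in $/MMBtu:

# Net-back calculation from the importing market to the producer (illustrative figures).
market_price  = 9.00   # delivered price in the consuming market
regas_cost    = 0.50
shipping_cost = 1.20
liquefaction  = 2.50   # omit this deduction for a pipeline-only netback

netback_fob      = market_price - regas_cost - shipping_cost   # value at the export plant outlet
netback_feed_gas = netback_fob - liquefaction                  # value of gas delivered to the plant

print(f"Netback at plant outlet (FOB): ${netback_fob:.2f}/MMBtu")
print(f"Netback on the feed gas:       ${netback_feed_gas:.2f}/MMBtu")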
A gas buyer receiving gas from an LDC (local distribution company) will pay a price that may include the long-distance transmission tariff and a much higher LDC tariff. LDC tariffs tend to be
two or three times higher than long-distance tariffs due to smaller volumes,
smaller-diameter pipelines, and higher costs of laying and maintaining urban
pipeline networks. Transmission tariffs may be based on distance transmitted or on a postage-stamp basis, where all consumers pay the same tariffs regardless
of distance transmitted. Tariffs may also be a function of the volume reserved for
a particular buyer (a set capacity charge) and a variable based on the pipeline
volume actually consumed by the buyer (a commodity charge).
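A minimal sketch of a two-part (capacity plus commodity) tariff bill; the rates and volumes are illustrative assumptions:

# Two-part transmission tariff: capacity charge on reserved volume, commodity
# charge on volume actually shipped (illustrative figures only).
reserved_capacity = 50_000      # MMBtu/day reserved by the shipper
shipped_volume    = 1_200_000   # MMBtu actually transported in the month
capacity_rate     = 0.08        # $ per MMBtu/day of reserved capacity, per day
commodity_rate    = 0.03        # $ per MMBtu shipped
days_in_month     = 30

monthly_bill = (reserved_capacity * capacity_rate * days_in_month
                + shipped_volume * commodity_rate)
print(f"Monthly transportation bill: ${monthly_bill:,.0f}")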
The pipeline gas sales agreement (GSA) is also known as a gas purchase
agreement (GPA) or a gas sales and purchase agreement (GSPA). These
agreements between a producing company or sales agent (seller) and a
consuming company (buyer) usually cover a number of provisions.

The term of a GSA can be as short as one day or as long as the economic
life of the field from which the gas is produced. Spot markets generally
have terms under one month. Internationally, especially where a gas
development project will have a limited number of potential customers,
the terms could reach 20 or 30 years. Financial institutions may require
these long terms to ensure that the producing company has adequate
cash flows to cover the debt that may be required to develop the project.
Generally speaking, there are two distinct types of volume commitment
contracts: depletion contracts and the more common supply
contracts. Under depletion contracts, also called output contracts, the
producing company dedicates the entire production from a particular field,
or reserve to a buyer. The annual delivery quantities are calculated on
estimated physical performance of the field. In this case, buyers usually
require independent engineering companies to analyse data provided by
the producing company in order to certify the reserve volumes. In
contrast, supply contracts commit the seller to supply a fixed volume of
gas to the buyer for a fixed term, typically 20 to 25 years. The seller is
responsible for sourcing the gas, either from its own reserves or from third
parties, if its own reserves are inadequate.
Gas must be priced at a level competitive with alternate fuels in the
marketplace and provide an adequate return for all parties in the chain.
Pricing may be fixed, fixed with escalators, or floating. A fixed price with
an escalator is a fixed price that changes by a certain percentage every
year or other specified time frame to reflect an inflator or an index of a
known variable. The index may be linked to inflation, a published price, or a combination of substitute fuel prices. This ensures gas price competitiveness with
alternate fuels and helps to integrate changes in the marketplace without
renegotiating long-term contracts. Alternatively, a floating price varies
according to prices reported by unbiased sources, such as newspapers and
NYMEX quotations. In this case, the contracts are revalued every month or
week according to the reported prices. All price philosophies may be
limited to a maximum ceiling price or a minimum floor price for the term
of the contracts. Contracts may also have combinations of fixed and
floating prices.
The terms of delivery may be firm or flexible. Firm delivery implies an
obligation by the producing company or seller to deliver the specified
quantities over the term of the contract. Flexible delivery obligates the
producing company to attempt to fulfil the delivery obligations but
does not require fulfilment of all the delivery obligations. Some contracts
specify a set number of days, usually the highest demand days, when the
supplier may be subjected to liquidated damages for failure to fulfil the
obligations. Flexible delivery contracts may have cheaper prices because

gas supply is interruptible by the seller and may be acceptable to buyers willing to occasionally substitute alternate energy sources for gas.
The basic premise of take-or-pay (TOP) is that the buyer is obliged to pay for a percentage of the contracted quantity, usually 60-95% of the annual contract quantity (ACQ). This is true even if the buyer is unable to or fails to take the gas supplied by the seller. Some contracts allow the buyer to make up gas volumes paid for but not taken in a period. TOP issues increase during periods when gas prices are higher than alternative fuel prices, encouraging buyers to forego their gas contracts and switch to cheaper fuels (a settlement sketch follows this list).
The delivery point is the physical location where gas is delivered to the buyer.
This is often, but not always, the same geographic point where custody
transfer (the transfer of ownership and responsibility) of the gas takes
place.
The GSA clearly states the quality of gas, including its maximum and minimum heating values and the maximum level of impurities such as oxygen, carbon dioxide, and sulphide-containing compounds. It also specifies the delivery pressure and water vapour content.
There are certain conditions that must be satisfied before any of the obligations in the GSA become legally binding, known as conditions precedent.
These may include government approvals for the development of the field
or for permitting a pipeline.
During the nomination procedure, the buyer communicates its weekly gas
volume requirements to the seller. This is particularly important when
multiple buyers are supplied by a single supplier who must manage all the
delivery requirements in an efficient and fair manner.
Force majeure refers to acts of God, such as floods, fires, earthquakes, and other events outside a party's reasonable control that may interrupt gas delivery or gas consumption. Liabilities and obligations of all parties, including those resulting from negligence, must be clearly stated in the GSA. A lengthy force majeure event may result in annulment of the agreement.
Stabilisation clauses specify remedies in the case of changes in law or taxation rates, keeping the seller or buyer economically whole.
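The take-or-pay settlement referred to above can be sketched as follows; the ACQ, TOP fraction, volumes, and price are illustrative assumptions:

# Take-or-pay settlement for one contract year (illustrative figures only).
acq_mmbtu      = 10_000_000   # annual contract quantity
top_fraction   = 0.85         # within the 60-95% range quoted above
taken_mmbtu    = 7_500_000    # volume the buyer actually took
contract_price = 6.00         # $/MMBtu

top_quantity    = top_fraction * acq_mmbtu
billed_quantity = max(taken_mmbtu, top_quantity)
payment         = billed_quantity * contract_price
makeup_mmbtu    = max(top_quantity - taken_mmbtu, 0.0)   # paid for but not yet taken

print(f"Payment: ${payment:,.0f}; make-up gas banked: {makeup_mmbtu:,.0f} MMBtu")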

As a result of the large capital expenditures, the international nature of the business, and the number of discrete elements in the value chain, the LNG business
requires numerous legal agreements. The LNG equivalent of the GSA, often
known as the LNG sales and purchase agreement (SPA), is the most
complex of all agreements. Because of the long-term nature of the contracts,
flexible and trusting relationships between all parties are critical for the success
of each component of the LNG chain. A failure in one link of the chain will have
adverse impacts on all other links, potentially destroying the economics of the
entire venture. The LNG SPA shares many of the features of the pipeline-delivered GSA described previously, but includes additional unique features.
The LNG SPA exists between the LNG exporting company, joint venture, plant
operator, or sales agent, and the importing facility or buyer. Most LNG exporting
entities are LNG project companies that own the liquefaction plant. By contrast,
LNG plant operators in Indonesia, Egypt, and certain trains in Trinidad operate their plants on a tolling basis, charging a tariff to the gas owners in
return for converting their gas into LNG. In this case, the gas owners, the plant
owner, and LNG buying entity all sign the SPA. The Japanese model has been a
benchmark in the industry, especially in the Pacific region. The main features of

the agreement remain relatively unchanged since the first contract was signed in
1969. There are a number of features that are important to LNG SPAs

Typically, Pacific region LNG buyers are large, government-supported, creditworthy LDC gas or power utilities. These utilities may have a monopoly over their local region and therefore are able to commit to long-term purchase guarantees in return for less-volatile and more predictable supplies.
Historically, utilities, especially the Japanese gas and power companies,
needed long-term stable supply. Ideally, they wanted to be the sole buyers
of all the output from a particular LNG export facility, which eliminated the
prospect of direct competition and gave the utility more control over its
supplier.
When companies negotiated the first generation of SPAs, many of the power plants operated by Japanese utilities were able to use either oil or natural gas to generate electricity, so the price of LNG was linked to the price of oil. Linking the prices allowed both commodities to remain competitive and guaranteed a market for imported LNG. The linking of the prices is achieved through mathematical formulas comprising a fixed component plus a variable component indexed, or linked, to a JCC price, typically called the Japan Crude Cocktail or Japan Customs Clearing price (a stylised formula is sketched below). The JCC price is based on the delivered price of a basket of typical crude oils imported into Japan over a defined period plus an inflation factor. The added advantage of this formula was that LNG prices were much less volatile than crude prices because the indexing was calculated on a monthly or longer basis. Almost all the contracts required payment in U.S. dollars, thereby transferring currency exchange rate risk to the utility.
Depending on the terms, SPAs may allow the buyer a degree of flexibility
in terms of scheduling or final destination of LNG. Many of the legacy
Japanese contracts did not allow such flexibility, requiring buyers to invest
in large LNG storage and receiving facilities to minimise off-loading
disruptions and TOP liabilities. Pipeline GSAs more frequently allow weekly
or monthly nomination changes, allowing both the buyer and the seller to
modify their obligations over the short-term.
Deliveries may be on a:
o Free-on-board (FOB) basis, where the buyer takes ownership of LNG as it is loaded on ships at the export LNG facility. The buyer is responsible for LNG delivery, either on its own ships or ships chartered by the buyer. The contracted sale price does not include transportation costs.
o Cost-insurance-freight (CIF) basis, where the buyer takes legal ownership of the LNG at some point during the voyage from the loading port to the receiving port. The contracted sales price includes insurance and transportation costs.
o Delivered ex-ship (DES) basis, where the buyer takes ownership of the LNG at the receiving port. The seller is responsible for LNG delivery, and the contracted sales price includes insurance and transportation costs.
Many buyers prefer FOB contracts, which give them more control over shipping costs and may allow them to trade surplus LNG cargoes with other importers if explicitly allowed in the LNG sales agreement.

Under the CIF contract, transfer of title or ownership of the LNG cargo and
associated risks can legally occur at the re-gas facility, the international
marine boundary, or any other mutually agreeable point on the ship
voyage. DES contracts usually involve transfer of title at the unloading berth of the receiving terminal. Taxation, legal, and strategic reasons influence the choice of contract type, and these issues will increase in complexity in the future as governments increase their scrutiny of the LNG trade.
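The stylised oil-indexed formula referred to above has the form P = A + B x JCC, a fixed component plus a slope applied to the JCC oil price. The constant and slope below are illustrative assumptions, not terms of any actual SPA:

# Stylised JCC-linked LNG price formula (illustrative coefficients only).
def lng_price(jcc_usd_bbl: float, constant: float = 0.50, slope: float = 0.1485) -> float:
    """Delivered LNG price in $/MMBtu as a linear function of the JCC price in $/bbl."""
    return constant + slope * jcc_usd_bbl

for jcc in (60.0, 80.0, 100.0):
    print(f"JCC = ${jcc:.0f}/bbl -> LNG price ~= ${lng_price(jcc):.2f}/MMBtu")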

Buyers are no longer exclusively large monopoly utilities. Deregulation has created a host of smaller energy suppliers, many of whom are willing to
sign LNG contracts and have access to receiving and storage facilities.
As compared to the traditional 20 to 30 year contracts, today's buyers are
negotiating terms as short as 5 to 10 years. At the end of the term, the
contract may be renewed or renegotiated, or the buyer may decide to
source from a different facility. Buyers are reluctant to lock themselves to
one supplier, especially in a market where cheap gas supplies from Qatar
and elsewhere are forecast to increase.
Recent changes have seen the basic formula evolve from fixed escalation pricing to a shorter-term price basis. The JCC index common in Japanese markets kept LNG prices relatively stable, especially when compared to more volatile crude prices.
Buyers, especially those negotiating with existing plants, resist agreeing to
high TOP levels demanded by a seller operating an existing LNG facility.
The original justification for the high TOP conditions was a requirement to
obtain financing for the initial project investments.
As LNG competes with pipeline gas, buyers are demanding volume flexibility. Buyers may be reluctant to purchase relatively higher priced LNG when local domestic gas is available during periods of lower demand.
Buyers may demand relaxation of the traditional destination clauses that limit the ability of LNG buyers to resell their cargoes to other potential buyers. Flexible destination clauses allow buyers to collaborate, taking advantage of varying seasonal demand, shipping capacity, or price differentials between markets.

Taking a gas project from concept to operation is a complicated and lengthy process involving numerous parties with convergent and divergent motivations
relative to the project sponsor. Ideally, the project sponsor, or the party leading
the project development effort, will have sufficient control and decision-making
authority to ensure that other participants and investors are in agreement with
the project sponsors decisions and schedules. In reality, project partners, host
governments, lenders and consumers all play an influential role during the
process.
A typical development process is shown below

The first stage, concept identification, asks and answers the fundamental question: is the project realistic and achievable? Steps in this stage include identifying the project objectives, determining alignment with company strategy, developing a list of specific success factors, and reviewing project fundamentals. The
second stage is feasibility and option selection. During this stage, financial and
commercial models are created, engineers are engaged, risks are identified and
preferred technical options are highlighted. Also at this stage, the
memorandum of understanding (MOU) or heads of agreement (HOA) letters are solicited from the resource holder and the potential consumers.
Financial models are a crucial component at this stage. CAPEX and OPEX estimates at this stage have an accuracy range of ±30% and a contingency range of 15-20%. These estimates are developed using similar projects in the
area and general estimates of materials and construction costs. Stage three, the
project definition stage, is the critical go/no go stage where key agreements have
to be secured and cost and revenue ranges must be finalised to secure
financing. The gas sales agreements, transportation agreements, environmental
impact studies, and permits must be secured by the end of this phase. Partners
should finalise a joint operating agreement (JOA) before moving to the next
phase. Project costs, scope, and timing must also be finalised before financing
can be secured and procurement may begin. At this stage, the financial and
commercial models are refined with CAPEX and OPEX estimates within 10-15% accuracy and less than 10% contingency. To achieve this level of accuracy, an
engineering company contracted by the project sponsor completes a front-end
engineering design (FEED). This phase of the project, which may take six
months to a year depending on the size and complexity of the project, can cost
as much as 1-2% of the total CAPEX. At the end of this phase, financing for the
project should be secure.
In a project-financed project, project debt is taken on by a special purpose entity
that owns the project assets, and financial lenders are repaid their loan directly
by project cash flow, often with limited recourse or liability to the sponsors. In
this case, the project sponsors liability is limited to their equity investments in
the project (typically 25-30% of the total project investment). Project financing
may be preferred when the revenues are guaranteed, leveraged economic
returns are preferred, and where the sponsors may not have the financial
strength to take on the full risks of the project. The fourth stage of project
development is project execution.
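Before turning to the execution-stage contracting models, the financing split described above can be sketched numerically; the CAPEX figure is an assumption, while the equity and FEED ratios follow the ranges quoted in these notes:

# Rough project-financing structure (illustrative CAPEX; ratios per the notes above).
total_capex     = 8_000_000_000   # $, assumed total project cost
equity_fraction = 0.30            # sponsor equity, typically 25-30%
feed_cost_ratio = 0.015           # FEED study at 1-2% of CAPEX

equity    = equity_fraction * total_capex
debt      = total_capex - equity          # borrowed by the special purpose entity
feed_cost = feed_cost_ratio * total_capex

print(f"Sponsor equity: ${equity / 1e9:.1f}bn, project debt: ${debt / 1e9:.1f}bn, FEED cost: ~${feed_cost / 1e6:.0f}m")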
An EPC contract is usually on a turnkey, lump-sum basis. The EPC contractor
responsible for the detailed engineering, design, procurement of the equipment,
and construction of the project bears some or most of the risks from cost
overruns or construction delays. The EPC company executes all subcontracts
under its own name and delivers the complete facility to the sponsors. An EPCM
contract is similar to an EPC contract, except that the engineering company acts
as an extension of the sponsor company. The engineering company executes
contracts and procurement on behalf of the sponsor company and is
compensated a management fee on either a lump-sum or reimbursable basis.
The EPCM is often a preferred system as it tends to achieve the best quality
project at lower cost, but at a higher risk of cost overrun and increased
responsibility to the project sponsor for overall plant performance. The better
defined the project is prior to execution stage, and the better the quality of the
FEED study, the more likely it is the execution stage will be successful. As EPC
and EPCM processes are detailed and manpower intensive, they can cost up to
5% of the total CAPEX of the project.

1.22 Liquefied Natural Gas


Natural gas has long possessed a collection of qualities seen as highly desirable
in major applications such as commercial power production and residential
heating. Lack of transportability, a primary technical drawback for many years,
was resolved more than 50 years ago with liquefaction and regasification
development. The commercialisation of that technology, however, has only
recently reached the cost efficiencies and industry scale that allow true global
competitiveness.

The capital intensity and cost of LNG have resulted in integrated contractual
arrangements that often commit gas, the LNG plant, the LNG vessels, and the
LNG terminus facilities to a specific customer for a 20 to 30 year period. The
contracts are frequently strict in form and inflexible on price and volume and
include automatic escalators for fuel, input, and labour cost changes over time. The contracts also use a take-or-pay framework: if the
supplier meets all contractual commitments, the customer receiving LNG is
required to pay for the gas regardless of whether they need it or take it.
Intermediate steps are covered by a sale and purchase agreement (SPA)
between the supplier and receiving terminal, and a gas sale agreement between
the receiving terminal and end-users.
The initial stage of LNG development involves the upstream activities of
exploration, development and production. The gas reserves identified for LNG
development must meet three fundamental criteria: composition, size, and sustainability. The gas must be of a composition that is economic to
produce. Natural gas is often found with a combination of associated
components. The size of the gas field for development (the proven gas reserve
base) must be large enough to support at least 1 million tons of LNG production
per annum (mtpa) for 20 years. Most LNG liquefaction facilities are devoted to
specific gas reserves and are dependent on the reserves being sufficient to
supply production needs over time. The field must also be sustainable over time
and must possess a sufficiently large reserve base left in the field at the end of
the project life span in order to maintain the fields production over time (plateau
level).
Getting gas from the field to processing and liquefaction facilities is a critical
activity. If quantity and quality are not a limitation, the development and transportation of gas to liquefaction then rests on the development and production agreements associated with reserve development. There are three
business structures typically used in LNG production and transportation

Integrated projects - here, the ownership structure is the same for both upstream development and liquefaction. This structure allows a high degree of alignment, and the business agreements may be simpler than for the other structures. For example, unless required by the government, there is often no need to establish a gas transfer price between the gas producer and liquefaction facility.
Transfer pricing agreements - often used when ownership of gas reserves is separate from the LNG sponsors. This is frequently the case in countries using production sharing agreements (PSAs) or production sharing contracts (PSCs), where the government retains ownership of the gas. The
different ownership structures will require a transfer pricing agreement to
set the price of the natural gas feedstock for transportation and
liquefaction. The actual transfer policy will have a significant impact on the
returns to the two parties.
Throughput agreements - a contractual structure in which the owner of the upstream reserves pays a contractual toll for transportation and liquefaction, retaining ownership of the gas for post-liquefaction marketing and sale. This structure is growing in interest as upstream owners, such as sovereign governments, attempt to retain more ownership and control over the sale and profitability of their gas. Recently,
a number of IOCs have purchased or leased their own ships and are now
pursuing throughput agreements in order to do their own marketing and
sales.

Once gas feedstock reaches a liquefaction facility, it enters a series of processing and storage steps called the LNG train. Past LNG developments often consisted
of at least two trains to gain economies of scale and still retain the operational
efficiencies, flexibility and reliability of individual trains. Recent technological
developments have resulted in much larger single trains, making single-train
LNG facility developments economic.
Liquefaction is expensive. Capital costs of LNG trains are by far the most
significant cost, making up on average more than 80% of liquefaction cost. The
industry has seen real capital cost reduction over time, falling from more than
$500 per ton per year in the 1970s to under $240 per ton per year in 2004. Much
of this capital cost reduction has been achieved by expanding the scale of trains.
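Those unit costs translate directly into train capital cost. A minimal sketch, with the train size assumed for illustration:

# Liquefaction capital cost from unit cost per ton of annual capacity (illustrative train size).
train_capacity_mtpa = 4.0
unit_cost_2004      = 240.0   # $ per ton of annual capacity (per the notes above)
unit_cost_1970s     = 500.0

capex_2004  = train_capacity_mtpa * 1e6 * unit_cost_2004
capex_1970s = train_capacity_mtpa * 1e6 * unit_cost_1970s
print(f"4 mtpa train: ~${capex_2004 / 1e9:.2f}bn at 2004 unit costs vs ~${capex_1970s / 1e9:.2f}bn at 1970s unit costs")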
Once liquefied, LNG must be transported to market. The ownership of LNG ships
is usually separate from LNG ownership. Gas shippers may own, lease, or charter
the ships used for LNG transport. There are two basic ship designs used for LNG
shipping. The Kvaerner-Moss design stores the LNG in large spherical tanks welded into the ship's hull. The tanks extend vertically far above the ship's deck, giving the ships their unmistakable look of a series of balls. The second structure,
the membrane design, has LNG tanks built into the insulated hull of the ship.
This design also has the ability to reliquefy the boil-off gas to reduce volume loss.
This has proved important in cost reduction, particularly on longer deliveries,
making distant markets competitively viable. Transportation costs of LNG to
market can prove to be the critical element in project competitiveness. Many
LNG developments result in very similar FOB costs of LNG; as a result, whichever supplier can reach a specific market (a receiving and regasification terminal) at the lowest delivered cost can win the long-term supply agreement.
The cost of receiving terminals and regasification facilities, like liquefaction
facilities, are very site-specific. An industry rule-of-thumb is that a complete
receiving terminal and regasification plant cost will run $1 billion per billion cubic feet of gas per day of capacity. Regasification of LNG is a much simpler and less
costly process than liquefaction. Storage tanks make up the single largest
component of capital costs (about one-third of the costs), as significant storage
capacity is needed to assure a dependable supply of gas to retail customers.
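Applying the rule of thumb above to an assumed terminal size gives a quick order-of-magnitude cost estimate:

# Receiving terminal cost from the $1bn per Bcf/day rule of thumb (terminal size assumed).
sendout_bcf_day  = 1.5
cost_per_bcf_day = 1.0e9        # $ per Bcf/day of send-out capacity
storage_share    = 1.0 / 3.0    # storage tanks roughly one-third of cost, per the notes

total_cost   = sendout_bcf_day * cost_per_bcf_day
storage_cost = storage_share * total_cost
print(f"Estimated terminal cost: ${total_cost / 1e9:.1f}bn, of which storage tanks ~${storage_cost / 1e9:.1f}bn")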
There have been several significant innovations in recent years. One is the
development of the floating regas vessel, which is an LNG carrier with onboard
LNG vaporizers. A second innovation is offshore regasification terminals that take
advantage of limited onshore space and "not in my backyard" concerns.
To date, LNG projects have been demand/buyer driven. As a result, once a true
buyer is identified, particularly for a long-term purchase agreement, the lowest
delivered cost of a sufficiently large and reliable supplier will prevail. Therefore,
many LNG projects are driven by the buyers willingness to sign a long-term
purchase agreement with the most competitive long-term supplier.
The LNG market was created from the older natural gas market. Natural gas has
long been sold under long-term agreements between producer and buyer,
shipped via pipeline. But, as the LNG market grew, new forms of contractual
arrangements were needed as a result of the following

After securing long-term concessions or PSAs for the development of upstream gas, LNG developers put massive amounts of capital at risk by
building pipelines and liquefaction facilities. The developers need to
secure long-term sale agreements that assure them of a return on their
sizeable capital investment.
LNG buyers without access to pipeline gas must secure long-term supplier
assurances of regular shipments in order to supply electrical power plants,
and residential and commercial gas needs.

Buyers and sellers use a series of sale and purchase agreements that tend to
escalate towards a full commitment of resources, specifically the capital
associated with the construction of pipelines, liquefaction facilities, shipping
arrangement, and regasification terminals for completion of the LNG chain. The
marketability and competitiveness of LNG is based on its delivered cost to the
customer.
The LNG industry evolved around two separate regional markets, the Atlantic
Basin and the Asia Pacific. The Asia Pacific market is the larger of the two
regional markets and is driven by Japan (the worlds largest LNG buyer), Korea,
and Taiwan. Pricing in this market was based primarily on the Japan crude
cocktail (JCC) in Japan and South Asia, as imported crude oil was deemed the benchmark competitive alternative. The Japan customs-cleared crude is the average price of customs-cleared crude oil imports into Japan (formerly the
average of the top 20 crude oils by volume).
The Atlantic Basin LNG market developed later than the Asia Pacific market.
Buyers were primarily European-based and attempted to supplement pipeline
gas from the former Soviet Union. The market was supplied nearly exclusively
out of North Africa. Pricing in the Atlantic Basin was significantly more complex
than in the Asia Pacific. As LNG agreements generally follow gas pricing
structures used in natural gas markets, this meant that prices reflected
traditional pricing by region: 1) the mix of NBP pricing in the UK and the long-term pipeline pricing agreements in continental Europe, and 2) Henry Hub in the
U.S., where pipeline gas was the widely available competitive alternative.
Over the last 10 to 15 years, the LNG market has shifted from its traditional
regionalisation to a market that is increasingly global. LNG production has
expanded far beyond the original North African, small Persian Gulf, and large
Southeast Asian liquefaction centres. These original production areas have been
supplemented by new LNG liquefaction facilities in West Africa and the
Caribbean. At the same time, LNG regasification and receiving facilities have
expanded rapidly in Europe, the U.S., Mexico and the Caribbean, China, Taiwan
and India. Volumes move across the traditional regions on both a long-term
contract basis and increasingly a short-term market basis. The growth in the
short-term spot market for LNG is an added sign that the LNG market is moving
towards a more global structure. The drivers for change have arisen from both the sellers and the buyers:

Buyers - unexpected supply disruptions, like that of Hurricanes Katrina and Rita on the US Gulf Coast in 2005, disrupted production significantly.
Supply disruptions have led to serious price spikes, pushing buyers to seek
more short-term supply options. LNG buyers have also experienced large
fluctuations in demand for gas. Extremely cold weather in January 2008
forced Russia to reduce offered volumes to Eastern and Western European
markets, forcing buyers to look for short-term alternatives.
Suppliers - many long-term production areas have experienced declining production. As a result, buyers have sought short-term alternatives to fill growing supply gaps. As more production capability comes online, there are an increasing number of delivered cost-competitive alternatives for buyers. For example, the recent development of shale gas in North America has pushed suppliers to search out short-term sale alternatives in Europe or the Asia-Pacific.

A complex combination of events in Asia between 2007 and 2009 was particularly instrumental in expanding the short-term flexibility in the LNG
market. Market demand in Asia unexpectedly soared because of various factors,
including a large nuclear power outage in Japan. Gas supplies in the Asia Pacific
basin suddenly and unexpectedly declined as several producers encountered
dwindling production and others suffered new construction delays. The market
shortfall was rapidly filled by volume from the Atlantic Basin, roughly 15% of its
continuing volume. The ability of these producers to respond quickly to a short-term supply problem is evidence of new market flexibility and, in particular, of producers' willingness to create divertible volumes. The influx of short-term
volumes into a market traditionally dominated by long-term supplier agreements
and long-term contract prices upset much of the regional pricing structures. Most
of the Atlantic Basin volumes diverted to the Asian market have been at spot
market prices based purely on supply and demand on the open market,
something nearly unknown in the prior 40 years of LNG trade.
The cost and complexity of many LNG projects have led to a significant spread of
break-even prices on the actual gas produced by many LNG field projects. The
variance in break-even costs and prices may contribute an additional impetus to
the growth and development of global LNG trade in the near future.
Demand for LNG in the Asia Pacific region is expected to more than double
between 2005 and 2015. The major Asia Pacific markets, and the only countries
importing LNG in 2000 (Japan, South Korea and Taiwan), will continue to make up
the majority of Asia Pacific LNG purchases. Substantial new demand is expected
from China, India, Mexico and Singapore.
Another liquefaction process gaining interest and investment in recent years is
the process known as gas to liquids (GTL). GTL turns natural gas into a clean-burning synthetic diesel fuel. GTL fuels ignite more easily than conventional fuels, improving the performance of car engines. Although GTL fuel is clean burning, the process generates significant carbon dioxide emissions that, if regulated, could prove extremely costly. Shell's Pearl GTL in the Middle East is the
largest GTL project to date.

1.23 Coal Analysis


Coal is an organic sedimentary rock that contains varying amounts of carbon, hydrogen, nitrogen, oxygen, and sulphur, as well as trace amounts of other elements, including mineral matter. Generally, coal was not mined to any large extent during the early Middle Ages (prior to A.D. 1000). However, the use of coal expanded rapidly throughout the nineteenth and early twentieth centuries.
Coal is a solid, brittle, combustible, carbonaceous rock formed by the
decomposition and alteration of vegetation by compaction, temperature and
pressure. It varies in colour from brown to black and is usually stratified (divided
according to subgroups). The source of the vegetation is often moss and other
low plant forms, but some coals contain significant amounts of materials that
originated from woody precursors. The plant precursors that eventually formed
coal were compacted, hardened, chemically altered, and metamorphosed by
heat and pressure over geologic time. It is suspected that coal was formed from
prehistoric plants that grew in swamp ecosystems. When such plants died, their
biomass was deposited in anaerobic, aquatic environments where low oxygen levels prevented its decomposition (rotting and release of carbon dioxide).
Successive generations of this type of plant growth and death formed deep
deposits of unoxidised organic matter that were subsequently covered by
sediments and compacted into carboniferous deposits such as peat or
bituminous or anthracite coal. Coal deposits, usually called beds or seams, can
range from fractions of an inch to hundreds of feet in thickness. Coal consists of
more than 50% by weight and more than 70% by volume of carbonaceous
material (including inherent moisture). It is used primarily as a solid fuel to
produce heat by burning, which produces carbon dioxide, a greenhouse gas,
along with sulphur dioxide. The sulphur dioxide oxidises to form sulphuric acid, which is responsible for the formation of sulphate aerosols and acid rain. Coal contains many trace
elements, including arsenic and mercury, which are dangerous if released into
the environment. Coal also contains low levels of uranium, thorium, and other
naturally occurring radioactive isotopes, whose release into the environment may
lead to radioactive contamination. Although these substances are trace
impurities, a great deal of coal is burned, releasing significant amounts of these substances. When coal is used in electricity generation, the heat is used to
create steam, which is then used to power turbine generators. Modern coal
power plants utilise a variety of techniques to limit the harmfulness of their
waste products and to improve the efficiency of burning, although these
techniques are not widely implemented in some countries, as they add to the
capital cost of the power plant.
Coal exists, or is classified, as various types, and each type has distinctly different properties from other types. Anthracite, the highest rank of coal, is used primarily for residential and commercial space heating. It is a hard, brittle, and black lustrous coal, often referred to as hard coal, containing a high percentage of fixed carbon and a low percentage of volatile matter. The moisture content of fresh-mined anthracite is generally less than 15%. The heat content of anthracite ranges from 22-28 million Btu/ton on a moist, mineral-matter-free basis. Bituminous coal is a dense coal, usually black, sometimes dark brown, often with well-defined bands of bright and dull material, used primarily as fuel in steam-electric power generation, with substantial quantities also used for heat and power applications in manufacturing and to make coke. The moisture content of bituminous coal is usually less than 20% by weight. The heat content of bituminous coal ranges from 21-30 million Btu/ton on a moist, mineral-matter-free basis. Subbituminous coal is coal whose properties range from those of lignite to those of bituminous coal, used primarily as fuel for steam-electric power generation. It may be dull, dark brown to black, and soft and crumbly at the lower end of the range, to bright, black, hard, and relatively strong at the upper end. Subbituminous coal contains 20-30% inherent moisture by weight. The heat content of subbituminous coal ranges from 17-24 million Btu/ton on a moist, mineral-matter-free basis. Lignite is the lowest rank of coal, often referred to as brown coal, used almost exclusively as fuel for steam-electric power generation. It is brownish black and has a high inherent moisture content, sometimes as high as 45%. The heat content of lignite ranges from 9-17 million Btu/ton on a moist, mineral-matter-free basis.
The data obtained from coal analyses are used to establish the price of the coal by allocation of production costs, as well as to control mining and cleaning operations and to determine plant efficiency. However, the limitations of the analytical methods
must be recognised. In commercial operations, the price of coal not only reflects
the quantity of coal but also invariably reflects the relationship of a desirable
property or even a combination of properties to performance of coal under
service conditions.

There are many problems associated with the analysis of coal, not the least of
which is its heterogeneous nature. Other problems include the tendency of coal
to gain or lose moisture and to undergo oxidation when exposed to the
atmosphere. In addition, the large number of tests and analyses required to characterise coal adequately also raises issues. Many of the test methods
applied to coal analysis are empirical in nature, and strict adherence to the
procedural guidelines is necessary to obtain repeatable and reproducible results.
The type of analysis normally requested by the coal industry may be a proximate
analysis (moisture, ash, volatile matter, and fixed carbon) or an ultimate analysis
(carbon, hydrogen, sulphur, nitrogen, oxygen and ash).
The formation of various national standards associations has led to the
development of methods for coal evaluation. For example, the American Society
for Testing and Materials (ASTM) has carried out uninterrupted work in this field
for many decades, and investigations into the standardisation of methods for coal evaluation have been carried out in all major coal-producing countries.
In any form of analysis, accuracy and precision are required; otherwise, the
analytical data are suspect and cannot be used with any degree of certainty. This
is especially true of analytical data used for commercial operations where the
material is sold on the basis of purity. At present, multi-seam blended coal samples can range from 10% to as much as 30% mineral matter by weight; variation of this order can produce a corresponding difference as large as 4-5% in the analytical data, with corresponding differences in the amount of ash that remains after combustion. The response to such concerns is the design of a
sampling program that will take into consideration the potential for differences in
the analytical data. Such a program should involve acquiring samples from
several planned and designated points within the coal pile so that allowance is
made for changes in the character of the coal as well as for the segregation of
the mineral matter during and up to that point in the coal's history. That is, the sampling characteristics of the coal play an extremely important role in the application of test methods to produce data for sales.
Analyses may be reported on different bases with regard to moisture and ash
content. Indeed, results that are as-determined refer to the moisture condition
of the sample during analyses in the laboratory. A frequent practice is to air-dry
the sample, thereby bringing the moisture content to approximate equilibrium
with the laboratory atmosphere in order to minimise gain or loss during sampling
operations. Loss of weight during air drying is determined to enable calculation
on an as-received basis (the moisture condition when the sample arrived in the
laboratory). This is equivalent to the as-sampled basis if no gain or loss of
moisture occurs during transportation to the laboratory from the sampling site.
Attempts to retain the moisture at the as-sampled level include shipping in
sealed containers with sealed plastic liners. Analysis reported on a dry basis is
calculated on the basis that there is no moisture associated with the sample. The
moisture value is used for converting as-determined data to the dry basis.
Analytical data that are reported on a dry, ash-free basis are calculated on the
assumption that there is no moisture or mineral matter associated with the
sample. The values obtained for moisture determination and ash determination
are used for the conversion. Finally, data calculated on an equilibrium
moisture basis are calculated to the moisture level determined as the
equilibrium (capacity) moisture. Hydrogen and oxygen reported on the moist
basis may or may not contain the hydrogen and oxygen of the associated
moisture, and the analytical report should stipulate which is the case because of
the variation in conversion factors. These factors apply to calorific values as well
as to proximate analysis and to ultimate analysis.
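As an illustration of these basis conversions, the sketch below applies the standard factors implied above (all values in percent by weight; the sample figures are hypothetical):

def as_determined_to_dry(value_ad, moisture_ad):
    # Dry basis: remove the analysis-sample moisture from the denominator.
    return value_ad * 100.0 / (100.0 - moisture_ad)

def as_determined_to_daf(value_ad, moisture_ad, ash_ad):
    # Dry, ash-free basis: remove both moisture and ash.
    return value_ad * 100.0 / (100.0 - moisture_ad - ash_ad)

def air_dried_to_as_received(value_ad, moisture_ad, moisture_ar):
    # As-received basis: rescale from the air-dried moisture level to the total moisture.
    return value_ad * (100.0 - moisture_ar) / (100.0 - moisture_ad)

# Hypothetical sample: 55% fixed carbon as determined, 3% residual moisture and
# 9% ash in the analysis sample, 12% total moisture as received.
print(round(as_determined_to_dry(55.0, 3.0), 1))            # 56.7 (dry basis)
print(round(as_determined_to_daf(55.0, 3.0, 9.0), 1))       # 62.5 (dry, ash-free basis)
print(round(air_dried_to_as_received(55.0, 3.0, 12.0), 1))  # 49.9 (as-received basis)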
Just as relationships exist between the various properties of petroleum and parameters such as the depth of burial of the reservoir, similar relationships exist for the properties of coal. Variations in hydrogen content with carbon content, or oxygen content with carbon content, and with each other have also been noted. Other relationships also exist, such as variations of natural bed moisture with depth of burial, as well as variations in the volatile matter content of vitrinite macerals obtained from different depths. The latter observation (i.e. the decrease in volatile matter with the depth of burial of the seam) is a striking contrast to parallel observations for petroleum, where an increase in the depth of the reservoir is accompanied by an increase in the proportion of lower-molecular-weight (more volatile) materials.
Coal classification is the grouping of different coals according to certain qualities
or properties, such as coal type, rank, carbon-hydrogen ratio, and volatile matter.
Thus, due to the worldwide occurrence of coal deposits, the numerous varieties
of coal that are available, and its many uses, many national coal classification
systems have been developed. These systems often are based on characteristics
of domestic coals without reference to coals of other countries. However, it is
unfortunate that the terms used to describe similar or identical coals are not
used uniformly in the various systems. In the U.S., coal is classified according to
the degree of metamorphism, or progressive alteration, in the series from
lignite (low rank) to anthracite (high rank). The basis for the classification is
according to yield of fixed carbon and calorific value, both calculated on a
mineral matter-free basis. Higher-rank coals are classified according to fixed
carbon on a dry, mineral matter-free basis. Lower-rank coals are classed
according to their calorific values on a moist, mineral matter-free basis. The
agglomerating character is also used to differentiate certain classes of coals.
Thus, to classify coal, the calorific value and a proximate analysis (moisture, ash,
volatile matter, and fixed carbon by difference) are needed. For lower-rank coals,
the equilibrium moisture must also be determined.
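A highly simplified sketch of the idea, using only the calorific-value ranges quoted above (the full ASTM scheme also uses fixed carbon and agglomerating character, so the thresholds here are illustrative only):

def approximate_rank(heat_content_mmbtu_per_ton):
    # Bucket a sample by heat content on a moist, mineral matter-free basis.
    # The quoted ranges overlap, so higher-rank coals must be separated on
    # fixed carbon (dry, mineral matter-free) rather than calorific value.
    q = heat_content_mmbtu_per_ton
    if q >= 22.0:
        return "anthracite or bituminous (distinguish on fixed carbon)"
    if q >= 17.0:
        return "subbituminous (or low-rank bituminous)"
    if q >= 9.0:
        return "lignite (or low-rank subbituminous)"
    return "below the typical lignite range"

print(approximate_rank(26.0))   # anthracite/bituminous range
print(approximate_rank(19.5))   # subbituminous range
print(approximate_rank(11.0))   # lignite range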
1.24 Electricity
For electricity markets, supply and demand must constantly be matched,
resulting in highly volatile prices. In the U.S., each regional market is coordinated
by its own Transmission Service Operator (TSO). Some TSOs are
government-sponsored monopolies, whilst others are Independent System Operators (ISOs) or Regional Transmission Organisations (RTOs). ISOs are limited to operating within a single state and are exempt from federal jurisdiction.
RTOs do business across several states and fall under federal jurisdiction. Many
RTOs began in a single state as ISOs and became RTOs when they expanded
across state boundaries.
A deregulated market is a service area where an RTO/ISO, rather than a
government-sponsored monopoly, coordinates generation and transmission. In
these areas, anyone can own a power plant and connect it to the transmission
grid to sell power. All participants in a deregulated market are guaranteed equal
access to transmission lines, and economic innovation is encouraged. As a
general rule, deregulated markets use economic incentives to effect changes,
while regulated markets use legislative mandates. Most energy trading occurs in
deregulated markets. The most important characteristic of a deregulated market is the daily power auction, a non-discriminatory auction, which sets the price of power for a transmission grid. Power producers submit the price at which they are willing to supply power, and are activated in order from the lowest to the highest bid. These are called non-discriminatory auctions because all winning bidders get paid the same price regardless of their bids. The price of power for every producer and every wholesale consumer is set to a single price called the clearing price, or the wholesale price. Smaller customers pay a slightly higher price for their power, the retail price. The cost of bringing the last unit of electricity into the market is called the marginal price of power, and the most recently activated plant is the marginal producer.
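A minimal sketch of this non-discriminatory (uniform-price) auction, with hypothetical offers in $/MWh and MW: units are activated in merit order until forecast demand is met, and every activated unit receives the marginal producer's bid.

def clear_auction(offers, demand_mw):
    # offers: list of (price, quantity, name); returns (clearing_price, dispatch schedule).
    dispatched = []
    remaining = demand_mw
    clearing_price = None
    for price, quantity, name in sorted(offers):   # cheapest offers first (merit order)
        if remaining <= 0:
            break
        take = min(quantity, remaining)
        dispatched.append((name, take))
        clearing_price = price                     # the last activated unit sets the price
        remaining -= take
    return clearing_price, dispatched

offers = [
    (20.0, 500, "nuclear"),     # illustrative bids only
    (35.0, 400, "coal"),
    (55.0, 300, "gas_ccgt"),
    (120.0, 200, "gas_peaker"),
]
price, schedule = clear_auction(offers, demand_mw=1000)
print(price)      # 55.0 -- paid to every winning bidder regardless of its own bid
print(schedule)   # [('nuclear', 500), ('coal', 400), ('gas_ccgt', 100)]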
Usually there are two types of auctions coordinated by RTO/ISOs. A day-ahead
auction sets the price of power for the following day in one-hour increments.
This auction is commonly completed in the early afternoon on the day before
delivery. This allows power producers time to arrange fuel and operating
schedules for the delivery day. The actual demand for power isn't known when the day-ahead auction occurs; instead, this auction is based on a prediction of the next day's required load. The second auction is a real-time auction, which is run continuously throughout the actual delivery day. This auction balances the actual demand against the predictions made the previous day. It is typically bid in five-minute increments. If a power plant is not chosen to operate in the day-ahead auction, it can still participate in the real-time auction. However, the real-time auction requires power plants to turn on and off quickly, and not every plant has this capability.
The forward market depends on the prices set by the daily auctions, but they are
very different markets. The daily auctions are open only to power providers with
the ability to generate power and place it on the transmission grid. In contrast,
the forward market is much more accessible. The forward market doesnt require
any ability to generate power at all it is possible to trade both physical
contracts (requiring delivery of power) and financial contracts (which settle in
cash). The forward markets trade large blocks of power at a limited number of
locations around the country. This is a critical difference between the daily
auctions and the forward markets. It is possible to buy spot power in arbitrarily
small sizes for immediate use anywhere in the country. However, it is only
possible to trade in monthly blocks at about 20 locations. The forward markets
limit the number of trading locations so a sufficient number of buyers and sellers
are forced to be active in each contract. Trading power in large units at a limited
number of locations makes it easier to standardise contracts and find trading
partners. Getting an exact price at a specific location commonly requires additional trades that may or may not be possible. As a result, power is often traded in two pieces: standardised forward trades made at major hubs to get the desired regional exposure approximately correct, and then smaller fine-tuning trades to lock in an exact price.
In the daily auction market, the assumption that all power providers are equally able to deliver power isn't always true. In periods of heavy demand, power lines can become overloaded and may require electricity to be routed around the congestion. The primary way of rerouting power is to activate power plants closer to the areas of high demand. Because low-cost generators are normally activated first, turning on a generator closer to the demand means that a high-cost plant is being activated out of merit order. If that price were allowed to set the clearing price of power for the whole grid, there would be a jump in everyone's costs for the sake of a small minority of customers. In most deregulated power grids, rather than having the entire grid pay the higher price, it is paid only by the affected parties. Because this price applies only to a single location, it is known as a Locational Marginal Price (LMP). Aligning power
prices with the actual cost of delivering power was one of the major reasons
energy markets deregulated. The alternative to deregulated markets (having prices legislated by local governments) actually interferes with matching prices to costs; if prices are always going to match costs, there is no need for legislation and the market is effectively deregulated. Congestion costs are not just paid by consumers; power producers pay them too. There is a charge for routing power into a high-load area over congested power lines, and a credit for producing power that bypasses the congestion. Another part of the standard market design is a penalty for remote generation. If a generator is a long distance from the demand, line losses on the intervening transmission could substantially decrease the amount of power actually delivered to customers. Under an LMP methodology, power producers only get paid for deliverable power, not for the gross power placed onto the grid. These two changes have had a huge effect on the business of selling power.
In addition to fuel costs and efficiency, location has become a major factor
determining the profitability of a power plant. Implementing the FERC's standard market design (SMD) requires assigning different prices to different locations on a power grid. In most regions, this price (the locational marginal price) consists of three parts: a clearing price, a congestion charge, and a line loss charge. The clearing price for power is the same everywhere on a power grid, but the congestion and line loss charges are specific to each location (see the sketch after the list below). Under the SMD, there are several types of locations for which prices are calculated:
- Node prices correspond directly to the price of power at a specific piece of physical hardware. Commonly this is an interface, called an electrical bus, where power enters or leaves the transmission grid. Generators get paid the nodal price of the electrical bus where they deliver power into the transmission grid.
- Zone prices are the average of all nodal prices within a limited geographical area. Usually, electrical buyers pay the zone price for the power they receive. Zone prices are used for customers, since they require less detailed metering equipment.
- Hub prices are the average of selected nodal prices across several zones. The hub price serves as the benchmark price for a power grid. Hub prices are used extensively in the forward market for trading. In most ISO/RTO regions, the clearing price for power and the hub price are synonymous.
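A hedged sketch of how these locational prices fit together, using hypothetical node names and charges: the LMP at each node is taken as the system clearing price plus node-specific congestion and line-loss components, and a zone price is an average of its nodal prices.

clearing_price = 40.0   # $/MWh, identical everywhere on the grid

nodes = {
    # node: (congestion component, line-loss component), both in $/MWh
    "node_A": (0.0, 0.5),
    "node_B": (12.0, 1.5),   # sits behind a constrained line, so congestion is positive
    "node_C": (-3.0, 0.8),   # its output relieves congestion, so the component is negative
}

lmp = {node: clearing_price + congestion + loss
       for node, (congestion, loss) in nodes.items()}
zone_price = sum(lmp.values()) / len(lmp)   # simple average as a stand-in for load weighting

print(lmp)                     # {'node_A': 40.5, 'node_B': 53.5, 'node_C': 37.8}
print(round(zone_price, 2))    # 43.93 -- what zonal buyers would pay in this toy example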
Closely linked to the concept of Locational Marginal Prices is a financial instrument called a Financial Transmission Right (FTR). These instruments help customers manage the price risk of having purchased or sold power at a major hub and then being forced to pay a different price when they deliver or receive power at a specific node. FTRs are tradable contracts made between two parties, who take opposite sides of an obligation to pay or receive the difference in price between two nodes. If there is no congestion, the price at the two nodes will be the same; if there is congestion, one party will need to pay the other. FTR options allow one party to pay an upfront fee (premium) to avoid paying congestion charges. Essentially, buying an FTR option is like buying insurance against higher prices due to congestion.
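As a minimal sketch of how such a contract settles (node names, prices, and the MW size are hypothetical, and real FTRs settle hour by hour over the contract term):

def ftr_obligation_payoff(sink_congestion, source_congestion, mw):
    # The holder receives (sink - source) congestion on the contracted MW; it can be negative.
    return (sink_congestion - source_congestion) * mw

def ftr_option_payoff(sink_congestion, source_congestion, mw):
    # Option-style FTR: the holder paid a premium up front and never pays out.
    return max(ftr_obligation_payoff(sink_congestion, source_congestion, mw), 0.0)

# One hour of congestion: sink node at $12/MWh congestion, source node at $2/MWh, 50 MW.
print(ftr_obligation_payoff(12.0, 2.0, 50))   # 500.0 received by the holder
# The same hour with congestion reversed:
print(ftr_obligation_payoff(2.0, 12.0, 50))   # -500.0 -- the obligation holder pays
print(ftr_option_payoff(2.0, 12.0, 50))       # 0.0 -- the option holder is protected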
Because of the way power auctions work (power plants are activated until the expected demand is fully met), predicting customer demand for power before it occurs is a critical part of power trading. The actual demand for power changes constantly. As electricity can't be stored, this changing demand must constantly be matched against supply. Making accurate estimates of the future load on the power grid is a key factor in ensuring affordable power.
Fossil fuel plants can burn oil, natural gas, or coal to produce steam. All of these fossil fuels produce greenhouse gases, like carbon dioxide, when they are burned. The amount of pollution depends on the efficiency of the power plant. Nuclear generators use nuclear fission to turn water into steam. Nuclear fuel provides a lot of electricity per unit of weight: a pound of highly enriched uranium is approximately equal to a million gallons of gasoline. But like fossil fuels, enriched uranium is subject to severe shortages and presents environmental problems. Hydroelectricity is a relatively common form of power generation. Hydroelectric dams use flowing water to drive a turbine directly. The most common description of a fossil fuel power plant relates to its efficiency in converting fuel into electricity. This efficiency is called a heat rate, and it is typically expressed as a ratio of heat input to work output (Btu/kWh). A lower heat rate indicates a more efficient power plant.
Electricity is usually generated by manipulating the relationship between
magnetic fields and electricity, which are two parts of the same force
(electromagnetism). Spinning a wire in a magnetic field creates a current in the
wire. The easiest way to generate electricity is to rotate a coiled wire inside a
pair of magnets. As the wire spins, it will start to build up a magnetic charge that
can be removed in the form of electricity. The speed at which the current flows
through the wire depends on the voltage and load. The more work the electrons
need to do on their way through the wire, the slower they travel. This is a very
important relationship, because if the current moves too fast, the wire will heat
up and potentially melt. The only way to prevent transmission wires from melting
is to match the level of production (the voltage) and the demand for electrical
power (the load). Generating too little power will cause brownouts, and
generating too much power will melt the transmission lines and cause a
blackout.
About 80% of the worlds power is generated through the use of steam turbines.
Regardless of complexity, all steam turbines operate in a similar manner.
Superheated steam is created by heating water in a boiler. When the water
turns into steam, it expands, moving past a turbine, causing the turbine to spin.
The turbine is attached to a generator, causing it to spin as well. Spinning the
generator causes electrical power to be generated. After moving past the
turbine, the steam enters a cool metal chamber (the condenser). As the steam
touches the cool sides of the condenser, it turns back into water and is then sent
back into the boiler to begin the process again. The most fuel-consuming part of
operating a steam turbine is heating the system when it starts up. After the
water is heated up, keeping the steam turbine continually operating is a very
efficient way to produce more electricity. As a result, it is often worthwhile for
power plant operators to keep their plant operational, and take a loss in low
demand periods, in order to avoid the costs associated with a cold start.
When cogeneration is a result of using waste heat produced from a condenser, it
is called topping cycle cogeneration. If heat from an industrial process
provides heat to the boiler, the fuel necessary to run the plant can be reduced or
eliminated. This type of cogeneration is called bottoming cycle cogeneration.
Gas turbines skip the step of producing steam by creating superheated gas
directly through combustion. A mixture of natural gas and air is ignited in an
explosive reaction that sends superheated gas past a turbine. Since the waste
gas cant be reused in a gas turbine, it is less efficient than a steam turbine.
However, gas turbines are much simpler to build and maintain. Also, there is no lengthy process required to heat up water into steam, so gas turbines can start producing power at peak efficiency as soon as they are turned on.
In the same way that a bottoming cycle cogeneration plant uses heat from an
industrial process in its boiler, a steam turbine can use exhaust heat of a gas
turbine. The primary waste product of a gas turbine is superheated gas. This gas
is extremely hot, but since it isn't expanding anymore, it can't be used to power
a second gas turbine. However, it is perfect for heating water to produce steam.
A combined cycle power plant is a gas turbine whose exhaust gases power a
steam turbine.
The efficiency at which a plant converts fuel into electricity is called its heat rate. This quantity is usually expressed in terms of British thermal units per kilowatt hour (Btu/kWh). Lower heat rates imply a more efficient power plant, since less fuel is required to produce the same amount of electricity.
Heat Rate = Quantity of Fuel Used / Quantity of Power Produced
The heat rate of a plant is an easy way of determining when a power plant can operate profitably. For example, if a natural gas-fired power plant has a heat rate of 8.5 MMBtu/MWh, it can sell power profitably when the price of power is more than 8.5 times the price of natural gas. Since this comparison is so common, the ratio of power to fuel prices has its own terminology. That ratio is called the market implied heat rate and is also quoted in units of Btu/kWh.
Market Implied Heat Rate = Power Price / Fuel Price
As a rule of thumb, natural gas power plants commonly have heat rates between
7-10 MMBtu/MWh. Power plants closer to the 7 MMBtu/MWh level are extremely
efficient.
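The two ratios can be compared directly, as in the sketch below (units follow the text: heat rates in MMBtu per MWh, power in $/MWh, gas in $/MMBtu; the numbers are illustrative only):

def heat_rate(fuel_mmbtu, power_mwh):
    # Plant heat rate: fuel burned per unit of electricity produced.
    return fuel_mmbtu / power_mwh

def market_implied_heat_rate(power_price, fuel_price):
    # Ratio of the power price to the fuel price, in the same MMBtu/MWh units.
    return power_price / fuel_price

plant_hr = heat_rate(fuel_mmbtu=8500.0, power_mwh=1000.0)                 # 8.5
implied_hr = market_implied_heat_rate(power_price=45.0, fuel_price=4.0)   # 11.25

# The plant can run profitably (before other costs) whenever the market implied
# heat rate exceeds its own heat rate.
print(plant_hr, implied_hr, implied_hr > plant_hr)   # 8.5 11.25 True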
The heat rate of a power plant provides a way to estimate profitability. This estimate, called a spark spread, is the theoretical profit that a natural gas generator can make from buying fuel and selling power at current market prices. This profit estimate does not include any charges for operating costs.
Spark Spread = Price of Electricity - (Price of Gas × Heat Rate)
When multiple spark spreads are discussed, it is necessary to specify the heat
rate and pricing location. If products other than natural gas are examined,
different terms are used to describe the profitability spread. Dark spread refers
to coal-based generation plants. When emission credits are included in the
profitability estimates, the name of the spread typically has the word clean or
green in the front.
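A short sketch of these spread calculations (the prices, heat rate, and emission figures are hypothetical; a dark spread would substitute coal prices and a coal heat rate):

def spark_spread(power_price, gas_price, heat_rate):
    # Per-MWh margin of a gas unit: power price minus fuel cost at the stated heat rate.
    return power_price - gas_price * heat_rate

def clean_spark_spread(power_price, gas_price, heat_rate, co2_price, emission_rate):
    # Spark spread net of emission costs (tonnes of CO2 per MWh times the allowance price).
    return spark_spread(power_price, gas_price, heat_rate) - co2_price * emission_rate

print(spark_spread(60.0, 4.0, 8.5))                    # 26.0 $/MWh
print(spark_spread(30.0, 4.0, 8.5))                    # -4.0 $/MWh -- the plant would likely sit idle
print(clean_spark_spread(60.0, 4.0, 8.5, 20.0, 0.4))   # 18.0 $/MWh after carbon costs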
Once power is generated, it needs to be brought to the customer. A higher
voltage makes it easier to transfer power over long distances, but it is also more
dangerous. As part of the generation process, power plants use several different
types of power lines high voltage lines are used for long distance transmission
and lower voltage lines for residential distribution. Transformers and
substations step the voltage up or down between different types of power lines.
Transmission refers to the bulk transfer of power from the power plant to a
substation via high voltage lines. Distribution refers to the transfer of power
from a substation to various consumers using much lower voltage lines.
In the U.S., there are three major integrated power grids: the Eastern Interconnection, the Western Interconnection, and ERCOT (the Electric Reliability Council of Texas). Inside each
interconnection, all of the transmission lines are synchronised. This allows power
to be transported across long distances within those interconnections. However,
differences in population, industry, and weather still make electrical prices a
regional phenomenon. Because of this, long distance transmission of power
(called wheeling) between fundamentally different markets is the source of a
large number of trading opportunities.
1.25 The Generation Stack
In deregulated areas, the coordinating ISO/RTO for the area holds daily auctions
to determine the price of power. The physical capabilities of each generator are a
major factor in their bidding strategies. As a result, the primary way to estimate
power prices is to examine power providers ordered by their cost of production.
This ordering is called a generation stack. Dispatch capacity, often just
called capacity, indicates how much power each power plant can produce. The
break-even cost indicates the minimum price the power plant can accept for its
power and still make a profit. In most cases, power plants will place bids above
their own cost of generation, and below the costs of the next type of unit in the
stack.
In both real-time and day-ahead auctions, generators participate by submitting offer curves (their generation levels and the prices they are willing to accept) and their technical constraints (start-up costs, minimum up time). After collecting offers from generators, the ISO selects the winning generators that minimise the cost to the market. In deregulated markets, generating units are placed into economic dispatch order based on the results of the day-ahead and real-time auctions held by each TSO. The lowest bidders are activated first, followed by higher bidders. The order in which power plants are turned on is called the dispatch stack. This is a last on, first off ordering, similar to a stack of plates.
The basic decision on whether to actively participate in the bidding process, and risk being inactive, often comes down to how much money a power plant stands to lose if it isn't activated. A power plant's profit is the spread between the price of power and the power plant's cost of production. If a power plant has a very low cost of production, it will give up substantially more profit by being inactive than a plant with a higher cost of production. For example, it might be possible for a power plant to increase the price of power by $1 if it is willing to bid aggressively into the day-ahead market. However, if aggressive bidders stand a 10% chance of being inactive, the decision is complicated for providers that are already highly profitable. If the price of power is around $100, it wouldn't be worthwhile for a plant with a cost basis of zero to give up $10 (a 10% chance of being inactive and giving up $100) for a chance to make an extra $0.90 (making an extra dollar 90% of the time).
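The arithmetic in this example can be made explicit; the sketch below reproduces the $100 price, $1 improvement, and 10% inactivity risk from the text and contrasts the zero-cost plant with a hypothetical high-cost plant:

def expected_gain_from_aggressive_bid(power_price, cost_basis, price_improvement, prob_inactive):
    # Compare the expected margin from bidding aggressively against the certain
    # margin from bidding passively (and always being activated).
    margin = power_price - cost_basis
    expected_if_aggressive = (margin + price_improvement) * (1 - prob_inactive)
    certain_if_passive = margin
    return expected_if_aggressive - certain_if_passive

# Zero-cost plant: roughly -$9.10 per MWh expected, so aggression is not worthwhile.
print(round(expected_gain_from_aggressive_bid(100.0, 0.0, 1.0, 0.10), 2))
# High-cost plant (cost basis of $95): roughly +$0.40 per MWh expected.
print(round(expected_gain_from_aggressive_bid(100.0, 95.0, 1.0, 0.10), 2))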
Most generators split their bid into several smaller bids. It is not uncommon for some of these bids to be offered at or below the generator's cost of production.
Since most generators are steam turbines that cost money to bring online,
unaggressive bids ensure that the generator can avoid unnecessary stoppages. A
power plant will often submit bids at higher prices in an attempt to maximise
profits. Generally, the placement of the higher bids will depend on who else is
bidding at that point. For example, if a generator can profitably sell power at $50 and the next plant in the generation stack becomes profitable at $60, the first generator is likely to split its bid into two pieces: a piece below $50 to minimise the risk of being completely inactive, and the rest of its bid just under the $60 mark. Generally, this second bid will be as high as possible
without tempting the next generator in the stack to undercut that price. Bidding
in this manner requires an in-depth knowledge of all the power plants around a
certain point in the generation stack.
In many cases, variable power providers can't turn off their power supplies or guarantee delivery. For example, once power is scheduled to be imported from another power grid, it can't be easily cancelled. Solar, wind, and hydro generators also can't shut down on short notice, nor can they guarantee that they will be able to deliver their power. As a result, these power supplies will be price takers: they will bid zero cost for their power and get paid the clearing price set by other market participants. Even small changes to the generation stack can have a major effect on prices, because the top of the generation stack is being displaced.
To predict power prices, it is necessary to look at historical prices and estimate
the generation stack from other pieces of information. Of this other information,
fuel prices are the most common way to estimate changing power prices. For
fuel-dependent power providers, the cost of fuel and the conversion efficiency of their power plant combine to determine where they become cost-effective.
1.26 Tolling Agreements
A tolling agreement is a contract to rent a power plant from its owners. These
agreements give the renter the ability to convert one physical commodity (fuel)
into a different commodity (electricity). Owning (or renting) a power plant gives a
trader the option of converting fuel into power. If power prices are sufficiently
high, a power plant can burn fuel to produce electricity at a profit. Otherwise, the
trader will usually leave the power plant inactive. This is very similar to the
behaviour of financial option contracts. As a result, the value of a power plant is
often approximated as a portfolio of those contracts.
The general mechanism for outsourcing trading responsibility is to rent the
power plant to a power marketer, a company specialising in power trading,
through a tolling agreement. These agreements can run for any length of time
(often 20 to 30 years) and divide the job of running a power plant between the
two parties. The owner gets paid a fee to maintain the power plant, while the
power marketer makes all of the economic decisions. The marketer is responsible
for supplying fuel to the plant and selling the resulting electricity into a
competitive market. The power marketer takes on all of the economic risks and earns all the profits above the fixed maintenance fee.
Calculating the profit from the conversion of a low cost commodity into a high
cost commodity is a standard net profit calculation; a power plants profit is the
sale price of its product minus its cost of materials and operating costs. In most
cases, since fuel costs are much larger than other operating costs, the operating
costs are ignored and the net profit of a power plant is approximated only by the
conversion efficiency of the plant. For a single unit of output, this estimate of net
profit is called a spark spread, a concept that was also introduced earlier. Since
spark spreads can be negative, the ability of the power plant to turn off means
that its profit needs to be approximated by a spark spread option rather than a
spark spread. A spark spread option is a spark spread whose owner has the
option of taking a zero profit, which is similar to a power plant shutting down.
When spark spreads are positive, the power plants total profit is its per-unit
profit (the spark spread) multiplied by the total units of electricity the power plant
can produce. When the spark spread is negative, the power plant has zero profit.
As a general rule, whenever spark spreads are positive, the owner will take the
profit. Any time they are unprofitable, the owner will try to opt for zero payment
by shutting down the power plant. Because the power marketer isn't going to
make just one decision on the power plant, a large number of options are
required to approximate the physical behaviour of a power plant. Most
commonly, a power plant will make operating decisions on an hourly, daily, or
monthly basis. Typically, a model of a tolling agreement will create an option for
each operating decision. Each leg of the trade will represent a set of decisions
occurring around the same point in time. The primary factor in choosing an
appropriate number of legs is the availability of market data and the physical capabilities of the power plant. If the only available prices come from the forward market, which trades monthly contracts, there isn't much benefit in choosing a
daily model or hourly model.
Each leg of a tolling agreement requires electricity and fuel prices at the right time and location. Unless storage is easy, energy products are not the same commodity at different points in the year. This has a big effect on risk management: any measure that tries to aggregate risk between multiple legs has to account for fundamentally different underlying exposures. Both the efficiency of the plant and its total output determine its profitability. The total profitability of a power plant depends on its per-unit profit (the spark spread) and the quantity of electricity that it can produce (the dispatch rate). These two factors need to be balanced against one another. Higher levels of production are less efficient: they require more fuel per megawatt of power. As a result, the
per-unit profit decreases as more power is produced. To a large extent, the value
of a tolling agreement depends on the expected correlation between power and
fuel prices. Highly correlated power and fuel prices mean less volatility and lower
profits. Small changes to the correlation between these prices can have a major
impact on the value of a tolling agreement.
Some of the value of a tolling agreement is known immediately. At a minimum, it
is worth its intrinsic value: its value if all the operating decisions were made
immediately. This can be done by arranging firm agreements to buy fuel and sell
electricity through the forward market. However, there is a second component to
an option's value. Uncertainty benefits the owner of an option. The downside risk of owning an option is capped; a 50/50 chance of making extra money is a great investment when losing doesn't involve spending more money. The payoff of a spark spread option is based on the spread between power and gas prices. Today's prediction of those prices is the forward spread. The spread at the time of expiration probably won't be identical to the spread predicted today. However, it is likely to be distributed around the forward spread. Mathematically, the likely
range of spreads is described by a statistical distribution centred on the forward
spread. Some of the possible spreads will be at points where it is profitable to
produce electricity. Other spreads will be at points where it is unprofitable to
operate the power plant. The efficiency of the power plant (its heat rate)
determines which spreads are profitable and which spreads are unprofitable.
There are dangers to using options to approximate physical behaviour: a spread option model can ignore important physical aspects of generation, like the time it takes to turn on (ramp up or cycle) and variable operating costs. For example, a
generator might take longer and use more fuel to start operating in the winter
than during the summer. Options also assume power plant decisions can be
made instantaneously. No matter how quickly a power plant can be cycled, it is
going to be slower than instantaneous decisions implied by a spark spread
model. As spark spread option models are less constrained than actual
generators, they run the risk of overestimating profitability. This overestimation
can be as high as 20-30%.
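A hedged sketch of valuing one monthly leg as a spark spread option by Monte Carlo, under strong simplifying assumptions (lognormal power and gas prices centred on today's forwards, constant volatilities and correlation, no discounting, and instantaneous on/off decisions, which is exactly the source of the overestimation noted above); all inputs are hypothetical:

import math
import random

def spark_spread_option_mc(fwd_power, fwd_gas, heat_rate, vol_power, vol_gas,
                           corr, t_years, capacity_mwh, n_paths=100000, seed=42):
    # Simulate correlated lognormal power and gas prices around their forwards and
    # average the positive part of the spread (the plant only runs when it is profitable).
    rng = random.Random(seed)
    sqrt_t = math.sqrt(t_years)
    total = 0.0
    for _ in range(n_paths):
        z1 = rng.gauss(0.0, 1.0)
        z2 = corr * z1 + math.sqrt(1.0 - corr ** 2) * rng.gauss(0.0, 1.0)
        power = fwd_power * math.exp(-0.5 * vol_power ** 2 * t_years + vol_power * sqrt_t * z1)
        gas = fwd_gas * math.exp(-0.5 * vol_gas ** 2 * t_years + vol_gas * sqrt_t * z2)
        total += max(power - heat_rate * gas, 0.0)
    return capacity_mwh * total / n_paths   # expected payoff in dollars for this leg

# Illustrative month: $50/MWh forward power, $5/MMBtu gas, 8.5 heat rate,
# 50%/35% annualised volatilities, 0.7 correlation, one month to expiry, 30,000 MWh.
value = spark_spread_option_mc(50.0, 5.0, 8.5, 0.50, 0.35, 0.70, 1.0 / 12.0, 30000)
print(round(value))   # exceeds the intrinsic value of 30,000 MWh x the $7.50/MWh forward spread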
1.27 Location
Network theory was well developed in the twentieth century and was concerned
principally with static optimisation under deterministic conditions. The enhanced responsiveness of participants that comes with liberalisation drove extensive theoretical development around the turn of the century on the time element of power, that is, the stochastic evolution of prices. Whilst location has always been a part of system design and operation, it has not been a key element of electricity market design until very recently. The increasing importance and profile of location is due to four key factors:
- Increasing commercial complexity of networks, due to the interconnection of markets and the wheeling of power across long distances.
- The increasing importance of barriers and constraints, electrical and otherwise.
- Increasing geographical widening between fossil fuel sourcing, large-scale production, consumption and environmental impact, and the associated impact on security of supply.
- Increasing extent of small-scale renewable generation, embedded in the distribution networks.
Substantial maintenance is required for lines, towers, insulators, transformers,
breakers, relays, busbars, static var condensers, capacitors and a variety of other
equipment. The true cost incurred by the transportation company is equal to the
shallow (nearby) connection cost, plus the long term infrastructure
requirements caused by the new load in relation to both current and anticipated
loads. This is the deep entry cost. The problem is that, with such high cost and long life, allocation of the full cost to a new load that may only be around for a few years is excessive. Hence, the transmission owner must extract the rent
from the lines over a longer period of time. Over and above the build and
maintain costs for the transportation owner is the research and development
necessary for the application of new equipment for more active management of
the system. This is true for distribution as well as transmission companies, since management of distribution networks is becoming more active due to the increase in intermittent embedded generation.
The treatment of losses is different in the transmission and distribution networks.
In transmission, the system operator retains a close identity to losses and has
the ability to measure them with real time import and export meters. In
distribution, the distribution company commonly has no direct relationship to
losses, has a much wider and less certain infrastructure, and exit metering that
is commonly late, of low temporal resolution and with varying degrees of
accuracy. Hence a distribution loss factor is assigned rather than measured and reconciled.
The provision of reactive power by generators is essential to maintain voltage
and stability. This provision is costly for the generator, and unless mandatory and
uncompensated as part of the statutory grid code (in which case generators load
the costs in their prices for real power, which is economically inefficient),
requires compensation to the generators.
The general assumption in consumer contracts is that the energy (and capacity) is firm. In an immature market, voluntary demand management and interruptibility are hard to execute, and in this situation system security is a public rather than a private good. This is reinforced by the fact that, regardless of private willingness to accept interruption in return for lower prices, lost load attracts
high media and government attention. In the absence then of voluntary payment
for security of supply, the network operator (and by implication the generators)
must levy an involuntary charge for security of supply.
Constraint incurs a cost in real time due to the need to re-dispatch plant relative
to the unconstrained optimum, and the continuation of this causes a long term
cost of constraint. This cost is felt by consumers. Constraint cost can be reduced
by network build, and hence the system operator must be correctly incentivised
to do so.
Commercial losses is a commonly used term for the electricity that is consumed
but which is not paid for. Electricity can be dishonestly consumed directly from
the distribution network (by wire or transformer), or by a variety of tampering
activities with and around the meter.
Electricity wheeled the whole way across the country or control area may
increase losses or cost of constraint, and in fact would usually do so, since
wheeling against the flow is unlikely. This causes an increase in the cost of
infrastructure. The effect on system security depends on the interruptibility of
the wheeling. If it is interruptible, then security of supply is increased and if it is
not then security of supply is usually decreased.
A possible ESI design might be to cost transportation on a point-to-point basis.
So, generators sell to the transmission or distribution networks at the respective
entry nodes, and consumers pay the networks at the exit nodes, and distribution
companies pay transmission companies for electricity at transmission
exit/distribution entry and sell to consumers at the distribution exit. This would
be enormously complicated for the consumer, and there are numerous
complications in relation to reconciliation to the consumer meter. The common
model instead is for the consumer to have a single interface to the supply
company, and for the supply company to pay generators for energy and network
operators for transportation.
- The connection charge is the charge applied to either generators or consumers when connecting to the network for the first time. It is quite possible to have no connection charge, and simply amortise the connection cost and smear it across the whole system by collecting just use of system charges. Similarly, it is possible to charge only generators, only consumers, only generators connected to the transmission infrastructure, or only consumers connected to the transmission infrastructure. The use of system charge is closely associated with the connection charge, and represents that part of system build and maintain costs (plus various other costs) that is not captured by connection charges, plus designated other costs that may include the various forms of frequency response and reserve, reactive power and black start.
- The incremental costs for the system operator incurred by one participant are greatest if the period of maximum demand for the participant coincides with the period of maximum system demand.
Loss costs are applied separately to the transmission and distribution sectors, and can have different splits of generator and demand costs, just as for use of system costs. Electrical losses on the high-voltage grid are relatively low compared to distribution losses. Typical losses in the developed economies, which are more densely populated than developing economies, are of the order of 2-4% (a worked example follows the list below). Losses can be reduced by:
- Reducing net flow and distance.
- Flow routing, for example along the path of least resistance. Electricity will do this by itself, but the configuration of breakers in the network for loss minimisation is different to that for least cost of re-dispatch to resolve constraints.
- Improving physical performance (for example reducing resistance by increasing line redundancy, or reducing line losses by raising voltage or using direct current) and technological performance.
- Physical changes such as raising voltages, using direct current for long lines, and adding lines.
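As a small illustration of how losses net off generation (the percentages are purely illustrative; the text quotes overall losses of the order of 2-4% in developed economies):

generated_mwh = 1000.0
transmission_loss = 0.02           # measured and reconciled by the system operator
distribution_loss_factor = 0.05    # assigned rather than measured, as described above

delivered_to_distribution = generated_mwh * (1 - transmission_loss)
delivered_to_consumers = delivered_to_distribution * (1 - distribution_loss_factor)

print(delivered_to_distribution)           # 980.0 MWh leaving the transmission grid
print(round(delivered_to_consumers, 1))    # 931.0 MWh reconciled at consumer meters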
In the short term, losses act as a netting off of generation submitted to the grid
and this can act as an economic signal.
1.28 Details of the Integrated Trading Model
The existence of a visible spot price is central to efficient operations. Some
reasons include:
- It permits incentive-compatible handling of imbalances and congestion.
- It induces least-cost operation by being the de facto penalty for unavailable plants.
- It is used in the pricing for final customers, permitting real-time price response and increasing reliability.
- It permits small plants to enter the market because they have automatic back-up if the plant is unavailable. This reduces market concentration.
- The spot price feeds back into the contract market; the expected value of future spot prices determines the contract price. Therefore, the spot price is an important determinant of the willingness of investors to construct new generating capacity.
If efforts are made to move spot trading out of the system operator and into
private markets, not only does the spot price become much less transparent and
much less efficient at doing the above tasks, but also the inevitable rules necessary to suppress the system operator's imbalance market will drive traders to consolidate, increasing concentration. As such, the preferred trading model is
therefore the integrated model, in which short-run market and system operations
are integrated.
Prices need to rise at peak times to provide enough revenue to cover the investment of the last generator on the system, and in fact it can be shown that all the generators need this amount to recover their investment. It is the prospect of these returns that induces generators to invest.
There are three workable methods of ensuring that prices rise in times of tight
supply to the level set by demand:
- Demand bidding.
- Capacity payment: charging the marginal cost at all hours will, in a well-planned system, give generators just enough revenue if they also charge the marginal cost of capacity (the investment costs not recovered through marginal cost pricing) at peak times.
- System of capacity obligations: requires all load serving entities that serve final customers to acquire capacity tickets, which must cover the expected peak load for their customers multiplied by a multiple (1+X), where X is calculated as a reserve margin sufficient to meet some pre-planned level of reliability to cope with random generator outages (see the sketch after this list). Sellers of the tickets must be generators.
  o Workable, but there are problems. The approach relies on an administrative forecast of demand, which is problematic, and capacity obligations don't deal with retail competition very well because retailers' loads and reserve requirements change too quickly.
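The capacity-ticket requirement mentioned above reduces to simple arithmetic, sketched below with hypothetical LSE peak loads and a 15% reserve margin:

reserve_margin_x = 0.15
expected_peak_load_mw = {"lse_a": 2000.0, "lse_b": 750.0, "lse_c": 1250.0}

required_tickets = {lse: load * (1 + reserve_margin_x)
                    for lse, load in expected_peak_load_mw.items()}

print(required_tickets)                 # capacity tickets each LSE must buy, in MW
print(sum(required_tickets.values()))   # 4600.0 MW of generator-backed tickets in total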
1.29 Nuclear Power Plants
The already high cost estimates for constructing new nuclear power plants, versus other forms of power generation, suddenly rose to take into account the
many anticipated extra costs of earthquake and tsunami protection that would
be required worldwide, and also the new government safety precautions and
wide range of new tests that would be required for final authorization of new
nuclear power plants. There was also a spike in projected costs because of the
widely anticipated cancellation of many nuclear power plants in various nations.
The cost of mining the uranium or other radioactive ore such as plutonium and
thorium (a less radioactive ore), as well as the cost of then transforming ore into
a safe form of yellowcake or other transportable substance, were really only the
first basic costs, compared to the total all-in cost of obtaining all government
regulatory permissions, designing the plant, and constructing the plant, and final
completion tests and numerous safety evaluation procedures, which were often
subject to very long delays and changes. The costs of fuel once the plant was
fully built and approved were relatively small, since the nuclear fuel lasted so
long and could generate electricity for decades.
One recent innovation of new nuclear power plant planning in a number of
nations is the requirement of a geological disposal facility, which is a
guaranteed financial commitment to pay for the safe disposal of the nuclear
waste created by each nuclear power plant. This prepayment program is especially reasonable because the cost of new advanced methods of creating nuclear power will have to factor in, from the outset, the cost of reprocessing spent or waste nuclear material. As a result of planning and budgeting in advance for the extended life span of the original nuclear fuel that will be reprocessed, and for the safe number of times it can be reprocessed, the nuclear plant will have a prepaid right to reprocess this nuclear fuel, even though it will cost more initially than it would have originally. This lowers the average all-in disposal cost of spent nuclear fuel by reducing the ultimate amount of totally spent nuclear fuel.
The future of global nuclear power plants today is a far more complex forecast
than it would have been before the 2011 Japanese earthquake, tsunami, and
three nuclear power plant meltdowns. The entire nuclear power industry is under
intense nationally and internationally mandated investigations to test whether
any of their nuclear power plants are at risk from a level 9 earthquake.
1.30 Hydropower Plants
As it is for wind and solar power projects, it is tough in the current market to
make the economics work on the development of a new hydropower plant. Older,
operating hydropower plants with power purchase agreements that are indexed
to avoided cost are good restructuring candidates. Since avoided cost is typically
determined by natural gas in most markets, this tends to be a natural gas-fired
power plant. Unlike wind and solar projects, most hydro projects have a relatively
high yearly availability.
Hydro projects are optimised at the time of development and may later have to be re-optimised to take maximum advantage of current average annual river flows.
This optimisation can include new hydraulic power units, new governors, remote
pond control, and movable forebay wall gate. Pumped storage facilities pump
water to a higher elevation in order to turn a turbine to produce electricity. The
water is moved using cheaper power during off-peak periods. During on-peak
periods, the water is released to drive a hydro turbine. The concept is that since
wind turbines produce non-firm power, a pumped storage facility would act like a battery to store power.
1.31 Geothermal Power Plants
Under the Earth's surface there is a layer of hot molten rock called magma, which is the source of geothermal energy. Today, this magma drives 8,900 MW of large-scale industrial power plants in 24 nations. The places in the world with the highest temperatures underground often have active young volcanoes and are sometimes called hot spots. These occur where the giant continental tectonic plates meet each other and the Earth's crust is thin, enabling heat, fire or geysers to break through the Earth's surface.
The largest collection of hot spots in the world is in northern California, where dry steam spouts continuously from many cracks in the earth's rocky crust. The dry steam from many of these cracks rises directly into the turbines of geothermal power plants placed on top of these geologic cracks. The dry steam drives the turbines, which directly drive electric generators that capture the geothermal energy and transfer that power directly to the electric power station. The Geysers plants use an evaporating water-cooling process to create a vacuum that pulls steam through the turbines more efficiently. However, this water-cooled process loses 60-80% of the steam into the air; it does not inject it back into the ground. Therefore, when the steam pressure declines, the rocks underneath remain very hot. The result is that 11 million gallons of water every day must be treated separately and transported to the geothermal plants from a wide radius.
Dry steam required the power plant to be built so that it actually sat on top of the crack in the earth where the dry steam escaped. This limited the amount of geothermal power that could be obtained from the earth, and so a second technology was created. Boreholes were drilled into the earth in many nearby locations where geothermal liquid could be obtained. That geothermal liquid could then be sprayed into a giant container holding a different liquid kept at much lower pressure than the geothermal fluid, causing the new fluid to instantly flash into steam. That flash steam could then drive a turbine, which could drive an electric generator. However, after a number of years, it was discovered that most geothermal regions contain moderate-temperature water that is cooler than the hottest geothermal fluid. Scientists found that energy could be extracted from these lower-temperature fluids and thereby developed what became known as the binary-cycle power plant, due to the use of two different fluids. Both the geothermal fluid and a secondary fluid that has a much lower boiling point than water pass through a heat exchanger. Heat from the geothermal liquid causes the secondary fluid to flash into vapour, which then drives the turbine and then the electric generator. This binary-cycle power plant is a closed-loop system, so that almost nothing escapes into the atmosphere.
Geothermal power plant projects are very expensive compared to natural gas-fired power plants. These projects involve drilling risk from project inception and during operations. Existing wells continue to need maintenance and may stop producing after a certain operating period.
A geothermal heat pump or ground-source heat pump extracts underground heat
in the winter for heating houses, and then transfers the heat back into the
ground in the summer for cooling. Open-loop heat pumps use natural ground
water.
Geothermal energy is widely predicted to be heavily funded by many governments because it is among the lowest-cost heating and cooling systems, and because it has been found to be available all around the world. Spent fluids from geothermal electric plants can subsequently be used for direct-use applications called cascaded operations. Savings can be as much as 80% relative to the cost of fossil fuels.
Hot rock enhanced geothermal reservoirs require drilling wells down into hot
rocks and fracturing the rock sufficiently to enable water to flow between the
wells. The water flows along what are called permeable pathways, picking up heat, and finally exits the reservoir through production wells to complete the circulation loop.