
1. SEISMIC DATA ACQUISITION

It is the first step in seismic prospecting. It involves acquiring raw data in
the field using field instruments and operations. The field instruments used for
acquisition are: a source (dynamite, explosives, shaped charges, vibroseis, air gun,
or thumper), a receiver (geophone, hydrophone, or marsh phone), and the dog house
(recording unit, magnetic tapes, on-screen display).
For acquisition a survey has to be planned. This is done using survey
design techniques. Seismic surveys may be 2D or 3D.
• In a 2D seismic survey, the sources and receivers are placed along a line. The
spread can be split (source in the middle with receivers on either side) or
end-on (source at one end and receivers at the other).
• In a 3D seismic survey, a series of receiver and source arrays covers an
area, with the source and receiver lines orthogonal. For petroleum
exploration, 3D surveys are mainly used.
Figure 1.1.1: Geometry of acquisition (in-line shot and geophones along the
in-line and X-line directions; seismic energy reflected from Layers 1-3).
2D SEISMIC DATA ACQUISITION

In 2D data acquisition the sources and geophones are in a line. Acquisition
of data is the first and most important part of the seismic method. No data
processing technique can add frequencies that were not recorded or enhance
information outside the bandwidth of the seismic data acquired in the field.
Hence we must be careful during acquisition. Before acquisition, the field
parameters are designed. Inappropriate or poorly designed parameters can
severely limit the quality and utility of the seismic data, whereas properly
designed parameters, based on knowledge of the area and the exploration target,
normally lead to a greatly enhanced and more interpretable seismic section.

1.2 Design of field parameters:

A well-designed seismic survey begins with a clear knowledge of
the survey objectives in general terms. Several factors merit consideration in
the design of the final field parameters, including economics, the timing of the
survey, the type of energy source, and the type of geophones and their patterns.
Some parameters of a seismic acquisition program are:
• Maximum offset: the distance from the source to the most remote receiver.
• Minimum offset: the distance from the source to the nearest receiver.
• Group interval: the distance between geophone arrays; constant for a survey.
• Shot interval: the distance between two shots.
• Fold of coverage: the number of times a subsurface point is sampled by
different source-detector pairs.
• Sample interval: the time interval between digital samples of the signal,
varying from less than 1 ms to 4 ms. The sample rate is chosen so as not to
limit the vertical resolution and to record the desired maximum frequencies.
• Choice of source and geophone arrays.
• Number of recording channels.
• Direction of shooting.
Before conducting the survey we clearly mark the area of the survey with
reference to fixed structures like pillars, bridges, etc., while offshore we
define coordinates by satellite navigation.
Layout
In reflection seismic surveys two types of layout are most commonly used:
1. End-on
2. Split-spread

1. End-on:
In a survey we use a number of channels or geophone groups; 96 or 120
channels are most commonly used. In end-on shooting, the source is placed
at one end of the geophone array.

Fig. 1.1.2 End-on shooting
2. Split-spread
In this layout the source is placed within the geophone array. There
are two types of split-spread layout.

(a) Symmetrical split-spread: in this layout equal numbers of geophones are
placed on both sides of the source. This layout is used when the depth of
interest is shallow.

Fig.1.1.3 Symmetrical split-spread


(b) Asymmetrical split-spread: in this spread different numbers of geophones
are placed on either side of the source. It is used when the depths of
interest are both shallow and deep.

Fig. 1.1.4 Asymmetrical split-spread
3D SEISMIC DATA ACQUISITION

Seismic data are usually collected along lines of traverse that form some
sort of grid, and a 3-D picture of the structure is deduced by interpolating
between the lines. However, features seen on such lines may be located off to
the side of the lines rather than underneath them, and small but important
features (like faults) can occur between the lines. This produces errors in
interpretation. 3-D surveys are done to obtain data uniformly distributed over
an area rather than along lines, in order to correctly locate the geological
features; 3-D techniques also reduce the spatial noise.

1.3 Acquisition requirements


In 3-D, we wish to have uniform acquisition conditions and a uniform
surface distribution of CMPs, that is: (1) data distributed on a uniform grid,
(2) the same CMP multiplicity, (3) the same mix of offset distances, and (4)
the same mix of azimuths. 3-D seismic data can be acquired in a number of
ways. The usual method is to run a series of closely spaced lines, with
geophones laid out in two or more parallel lines. In other land work,
geophones are laid out on lines at right angles to source lines.

Different Field Parameters and Their Selection


Box
In an orthogonal design the box is the area enclosed by two
consecutive receiver lines (spaced Ry) and two consecutive source lines
(spaced Sx). The box area is then:
Sb = Ry × Sx
Directions
Two types of directions have to be considered:
In-line direction: the direction parallel to the receiver lines. Sampling in this
direction is generally satisfactory.
Cross-line direction: the direction orthogonal to the receiver lines. Sampling
in this direction is generally weak and has to be investigated carefully.

Fig.1.1.5 Source and Receiver line direction

Fold of coverage
The 3-D fold is the number of midpoints that fall into the same bin and that
will be stacked. The nominal fold (or full fold) of a 3D survey is the fold for
the maximum offset. The majority of the bins are filled at the nominal fold.
Run-in: the distance necessary to bring the fold from its minimum to its
nominal value in the shooting direction.
Run-out: the distance necessary to bring the fold from its nominal value
back to its minimum in the shooting direction.
Foldage (F) = inline fold (FIN) × cross-line fold (FXL)

Inline fold (FIN) = (no. of receiver stations × receiver station interval in the
inline direction) / (2 × source station interval in the inline direction)

Cross-line fold (FXL) = (no. of receiver lines × receiver station interval in the
cross-line direction) / (2 × source station interval in the cross-line direction)

Fig 1.1.6 Inline and cross line fold
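As a quick check on these relations, the nominal fold can be computed directly
from the geometry. The following minimal Python sketch mirrors the formulae
above; the parameter values are hypothetical, chosen only for illustration.

# Nominal fold of an orthogonal 3-D geometry (illustrative values only).
def inline_fold(n_rcv_stations, rcv_interval_inline, src_interval_inline):
    return n_rcv_stations * rcv_interval_inline / (2.0 * src_interval_inline)

def crossline_fold(n_rcv_lines, rcv_interval_xline, src_interval_xline):
    return n_rcv_lines * rcv_interval_xline / (2.0 * src_interval_xline)

fin = inline_fold(96, 50.0, 400.0)      # -> 6.0
fxl = crossline_fold(10, 300.0, 250.0)  # -> 6.0
fold = fin * fxl                        # F = FIN x FXL -> 36.0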

Midpoint
A midpoint is the point located exactly in the middle of the source-receiver
distance. It is not necessarily located along a receiver line as in 2-D. Instead,
midpoints are usually scattered within the survey area. In practice, they
rarely form a regular grid.
Common mid point (CMP):
In a horizontally layered medium with constant velocity, the common mid
point (CMP) is the point located in the middle of the different source-receiver
pairs whose reflections correspond to the same subsurface point. It is
desirable that the source-receiver pairs differ in direction and in offset.
CMP bin:
A CMP bin is a square or rectangular area which contains all midpoints that
correspond to the same CMP. Traces that fall in the same bin are stacked.
Their number corresponds to the fold of the bin.
Bin size:
The bin size corresponds to the length and width of the bin. The smallest
bin dimensions are equal to half the source point interval and half the receiver
interval (Sy/2 × Rx/2).

Fig 1.1.7 Bin size and midpoint


Move-Ups
Two types of move-ups can be considered for 3-D surveys:
In-line move-up: Occurs when the template moves up along the survey
from its initial position after completion of a salvo of shots.
Cross-line move-up: Occurs when the template reaches the edge of the
survey area and moves up across the survey to start a new in-line move-up.

Fig 1.1.8 Inline and cross line move-up


Offsets
Taking into account the configuration of the 3-D template, different offsets
can be defined:
In-line offset: is the distance representing half-length of the template in the
in-line direction.
Cross-line offset: is the distance representing half-length of the template in
the cross-line direction.
Maximum offset (Xmax): is the distance of half-diagonal of the template.
Maximum Minimum offset (Xmin): is the length of the diagonal of the box
formed by two consecutive receiver lines and two consecutive source lines.

Patch
A patch is an acquisition technique where source lines are not parallel to
receiver lines. If source and receiver lines are orthogonal the spread is called
orthogonal (cross spread). If receiver and source lines are not orthogonal the
spread is called slant spread. The survey area will be covered by the
juxtaposition of patches. Each one represents a unit area obtained by several
template moves. Shot points can be inside the template or outside.
Receiver Line
Receiver line is a line where receivers are located at a regular distance. In
land 3-D surveys receiver lines are kept as straight as possible. In marine 3-
D surveys receiver lines correspond to the towed streamers.
Receiver line interval (Ry): Receiver line interval is the distance between
two consecutive receiver lines. It is also called receiver line spacing.
Receiver interval (Rx): Receiver interval is the distance between two
consecutive receivers located on the same receiver line. It is also called
receiver spacing.

Fig 1.1.9 Source Line

Receiver density (Rd): Receiver density is the number of receivers per unit
area, generally per square kilometer (sq. km). The number of receiver lines
per kilometer and the number of receivers per kilometer determine the receiver
density (Rd).
Roll-Along
Roll-along is the move between two consecutive positions of the template,
expressed as a number of station or line intervals.
In-line roll-along: Corresponds to the in-line move-up of the template and
represents the distance between two consecutive positions of the template.
The number of columns of receivers left behind the template is equal to the
in-line-roll-along.
Cross-line roll-along: Corresponds to the cross-line move up of the
template and represents the distance between two consecutive positions of
the template .The number of receiver rows left behind the template is equal
to the cross-line-roll-along.

Fig 1.1.10 Inline and cross line roll-along

Source Line
Source line is a line where source points are located at a regular distance. In
land 3-D surveys source lines can be orthogonal or parallel to receiver lines
or have any other direction (slant). In marine 3-D surveys source lines
correspond to the lines followed by airgun arrays.
Source line interval (Sx): Source line interval is the distance between two
consecutive source lines. It is also called source line spacing.
Source interval (Sy): Source interval is the distance between two
consecutive source points located on the same source line. It is also called
source spacing.
Shot density (Sd): Shot density is the number of shots per unit area,
generally per square kilometer (sq. km). The number of source lines per
kilometer and the number of sources per kilometer determine the shot
density (Sd).

Fig 1.1.11 Source line and source line interval

Salvo
A salvo is the number of shots fired before the template moves up along the
survey.

Fig1.1.12 Salvo and Template


Swath
When the template moves in one direction and reaches the edge of the
survey area, it will generate a swath. Usually the first move occurs in the in-
line direction.
Swath-shooting mode: The swath-shooting mode is an acquisition
technique where source lines are parallel to receiver lines.

Fig 1.1.13 Swath shooting

Template
All active receivers corresponding to one given shot point constitute a
template. These receivers are located on several parallel lines.
3-D Data Volume
The 3-D data volume is the result of data processing. It is a migrated volume
obtained after sorting the data into CMP bins (binning) and stacking the data.
Data are gathered in (X, Y, Z) coordinates with:
– OX in the in-line direction;
– OY in the cross-line direction;
– OZ in two way time (or depth).
In some surveys different volumes can be generated and separately
interpreted with:
– Near offsets;
– Mid offsets;
– Far offsets.

Geophysical Parameters
The geophysical parameters of 3-D can be grouped into imaging, edge,
geometrical and recording parameters. All of them have an impact on 3D
data quality. However, some of them have a great impact on the cost of the
survey and have to be adjusted carefully. These are mainly the imaging
parameters, related to fold of coverage, bin size and migration aperture;
they are thus tied to sampling and aliasing criteria, to resolution and signal
enhancement, and to migration efficiency. Edge parameters include in-line
and cross-line tapers. Geometrical parameters correspond to offsets and to
source and receiver layouts. Recording parameters are related to recording
length and sampling rate.
Imaging Parameters
Fold of Coverage
The fold of coverage of a 3-D seismic survey represents the number of
traces that are located within a bin and that will be summed. Minimum bin
dimensions correspond to half the source interval and half the receiver
interval. Each trace is generated at the midpoint of a source-receiver pair.
Source-receiver pairs have different directions. Traces within the bin thus
have a range of azimuths and offsets, but they correspond to the same
subsurface location (Fig. 1.16).
When summed, all traces carry the same signal, which is enhanced because it
is in phase, whereas each trace carries different random noise, which is out of
phase. The summation process therefore decreases the level of noise, and the
fold contributes greatly to the enhancement of the signal-to-noise (S/N) ratio.
After stacking, each bin contains one single trace, whose S/N ratio is
multiplied by √F (F being the fold).
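The √F improvement can be verified numerically: stacking F traces that share
the same signal but carry independent random noise reduces the noise level by
√F. A small numpy sketch with synthetic data (purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
fold, n_samples = 36, 500
t = np.linspace(0.0, 0.5, n_samples)
signal = np.sin(2.0 * np.pi * 30.0 * t)                    # 30 Hz "reflection" signal
traces = signal + rng.normal(0.0, 1.0, (fold, n_samples))  # same signal, different noise
stack = traces.mean(axis=0)                                # the single trace left in the bin
noise_in = np.std(traces[0] - signal)                      # ~1.0
noise_out = np.std(stack - signal)                         # ~1.0 / sqrt(36) ~ 0.17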

Bin Size:
The bin size affects the lateral resolution of the survey and its frequency
content.
Resolution and bin size:
Resolution is defined as the ability of the seismic method to distinguish two
events of the subsurface that are close to each other. Lateral resolution (also
called horizontal resolution) corresponds to the direction parallel to the
seismic measurement plane. It is related to the Fresnel zone. The Fresnel
zone is defined as the subsurface area which reflects energy that arrives at
the earth's surface within a time delay equal to half the dominant period
(T/2). In this case ray paths of reflected waves differ by less than half a
wavelength; the commonly accepted value is one-fourth of the signal
wavelength (λ/4). A reflection recorded at the surface therefore does not come
from a subsurface point but from a disk-shaped area whose dimension is equal
to the Fresnel zone. The radius of the Fresnel zone is given by:

Rf = (V/2) √(t0/fdom) …………………………………… (1)
This shows that high frequencies give better resolution than low frequencies,
and that resolution deteriorates with depth and with increasing velocities.
Migration drastically improves resolution
3-D migration is a major factor that drastically improves 3-D imaging
compared with 2-D data, as the energy is far better focused. In 3-D
processing, out-of-plane events are restored to their correct subsurface
locations and contribute useful energy. In fact, migration can be considered
as a downward continuation of the receivers from the surface to the
reflector, making the Fresnel zone smaller and smaller. 3-D migration
shortens the radius of the Fresnel zone in all directions, drastically
improving the resolution. The bin size should be equal to the lateral
resolution after migration. This value is equal to half the dominant
wavelength λdom associated with the dominant frequency fdom:

Bin size = λdom / 2 …………………………………… (2)

Spatial sampling and bin size:

Spatial sampling is a central concern in seismic acquisition. The recorded
samples must allow the reconstruction of the original signal without
ambiguity. Proper sampling is given by the Nyquist condition (or Shannon
theorem), which states that a minimum of two samples per period is needed to
reconstruct a discrete signal. The sampling interval is then:

∆t ≤T/2 or ∆t ≤1/2 fmax

17
According to Gijs Vermeer, in the (f, k) plane there is a maximum wave
number kmax such that the energy is nil for frequencies above fmax, and
there is a minimum velocity Vmin. The spatial sampling for shots and
receivers is thus:
∆x(r,s) ≤Vmin / 2fmax ……………………………………(3)
Whereas the spatial sampling in the midpoint domain is:
∆xm ≤Vmin / 4fmax ……………………………………...(4)

For dipping events (with dip θ), the above formulae become:

∆x(r,s) ≤Vmin / 2fmax sinθ…………………………………..(5)

∆xm ≤Vmin / 4fmax sinθ……………………………………(6)

Under these formulae the maximum recorded frequencies and wave numbers
are properly sampled and no aliasing occurs.
However, if Vmin is very small or fmax is very high, the above formulae lead
to a very small ∆x, which is very difficult to implement. It is then common in
acquisition to accept that some kinds of signal are aliased, such as ground roll
with low velocity or noise with high frequency.
Diffractions and bin size
Diffractions are useful for migration and should be sampled correctly. The
sampling formula is (Liner and Underwood, 1999):
∆x ≤Vrms / 4fmax sinϕ………………………………………..(7)

18
Where ϕ is the take-off angle from the diffraction point. It is considered that
if the take-off angle is equal to 30° the corresponding wave front carries
95% of the diffracted energy. Then the above formula gives an antialias
sampling value equal to:
∆X ≤ Vrms / 2fmax ………………………………………...(8)
Practical rules
In summary, the bin size must be selected as the minimum value given by the
following three formulae (see the sketch after this list):
Bin size = λdom / 2
∆x(r,s) ≤ Vmin / (2 fmax sin θ)
∆x ≤ Vrms / (2 fmax)

In addition, the sampling paradox must be considered, either by square
sampling in shots and receivers, by implementing additional shots, or by a
two-dimensional interpolation procedure.
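The three practical rules reduce to taking the minimum of three candidate
values. Below is a minimal sketch; the velocities, frequencies and dip are
hypothetical inputs that would in practice come from existing data in the area.

import numpy as np

def bin_size(v_dom, f_dom, v_min, f_max, dip_deg, v_rms):
    """Smallest of the three practical bin-size rules (lengths in metres)."""
    b1 = (v_dom / f_dom) / 2.0                                # half the dominant wavelength
    b2 = v_min / (2.0 * f_max * np.sin(np.radians(dip_deg)))  # dip anti-alias rule
    b3 = v_rms / (2.0 * f_max)                                # diffraction rule (30 deg take-off)
    return min(b1, b2, b3)

print(bin_size(2500.0, 40.0, 2000.0, 60.0, 30.0, 2800.0))     # -> ~23.3 m here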

Migration Aperture
Migration aperture is defined as a fringe that must be added around the
subsurface target area in order to correctly migrate dipping events and
correctly focus diffracted energy located at the edge of the target area.
Migration aperture is then related to the two aspects of migration techniques:
moving dipping events to their true subsurface locations and collapsing
diffractions. The external limit of the migration aperture corresponds to the
full fold area.
Migration aperture and migration displacements: Migration restores a
dipping reflector to its true position with three effects: shortening the
reflector, increasing the reflector dip, and moving the reflector in the up-dip
direction with horizontal and vertical displacements. The horizontal and
vertical displacements are given by the following formulae (Chun and
Jacewitz, 1981):
Dh = (V² t tan θs) / 4 …………………………………. (9)

θs being the dip angle on the time section, and:

Dv = t {1 − [1 − (V² tan² θs)/4]^(1/2)} …………………………… (10)

tan θs = Dv / Dh
The migrated angle θm is given by:

tan θm = tan θs / [1 − (V² tan² θs)/4]^(1/2) ………………………… (11)
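Equations (9)-(11) are easy to evaluate numerically. The sketch below assumes,
consistently with Chun and Jacewitz, that tan θs is the time dip dt/dx measured
on the stacked section; all values are hypothetical.

import numpy as np

v = 2500.0        # velocity (m/s)
t = 2.0           # event time on the section (s)
time_dip = 3e-4   # tan(theta_s): time dip dt/dx (s/m), an assumed pick

a = (v * time_dip) ** 2 / 4.0              # the (V^2 tan^2 theta_s)/4 term
dh = v**2 * t * time_dip / 4.0             # horizontal displacement, eq. (9): ~937 m
dv = t * (1.0 - np.sqrt(1.0 - a))          # vertical displacement in time, eq. (10): ~0.15 s
tan_theta_m = time_dip / np.sqrt(1.0 - a)  # migrated dip, eq. (11), steeper than tan(theta_s)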

Migration aperture and diffractions

The considerations discussed in 2D also apply in 3D.
Diffractions are generated by subsurface features whose dimensions are
smaller than the incident seismic signal, such as pinch-outs, erosional
surfaces, abrupt lithology changes, reefs, flanks of salt domes, faults, etc.
In the (x, z) plane each discontinuity generates a circular diffracted wavefront
which is recorded at the surface at different offsets x1, x2, … xn at times
t1, t2, … tn. In the (x, t) plane, the couples (x1, t1), (x2, t2), etc. give a
diffraction hyperbola in the stacked data. The apex of this hyperbola indicates
the diffraction point and its equation is:

t = 2 √(z² + x²) / Vrms

In theory the hyperbola extends to infinite time and distance. In
practice, however, for the migration, the diffraction hyperbola is truncated to
a spatial extent within which the migration process will collapse the energy to
the apex of the hyperbola. This extent is called the migration aperture and its
width determines the accuracy of the migration. It is accepted practice to
limit the extension of the hyperbola to 95% of the diffracted energy. This
corresponds to a take-off angle from the apex of 30°, as shown in
Figure 1.21a. Figure 1.21b gives the value of the migration aperture as:
Ma = z tan θ
With θ at the minimum value of 30°, this gives:
Ma = z tan 30° = 0.577 z
Ma ≈ 0.6 z = 0.6 (V t0 / 2) ……………………………. (14)
where V is the average velocity and t0 is the zero-offset time. In the case of a
dipping event the migration aperture is:
Ma = z tan α
It then follows that:
Ma = (V t0 / 2) tan α …………………………………. (15)
where α is the maximum geological dip.
Migration aperture and migration algorithms: The migration algorithms
impose another limitation on the migration aperture. These algorithms, in
general, handle dips of 45 to 60 degrees, and steeper dips are not
well imaged after migration. Dips can then be limited to these values.

Migration aperture and velocity: Yilmaz (1987) shows that the migration
aperture increases with velocity, as indicated in the above formulae, and that
the deeper the geological target, the larger the migration aperture.

Practical rules:
The migration aperture will be Ma ≈ 0.6 z = 0.6 (V t0 / 2)
if the maximum geological dip is less than 30°. If this angle is higher than
30° the migration aperture will be Ma = z tan α = (V t0 / 2) tan α.
In addition, the maximum dip can be limited by the dip limit of the migration
algorithms.
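These practical rules amount to a one-line computation. A minimal sketch,
assuming a (hypothetical) average velocity and zero-offset time:

import numpy as np

def migration_aperture(v_avg, t0, max_dip_deg):
    """Aperture fringe (m) from the practical rules above."""
    z = v_avg * t0 / 2.0                 # depth from average velocity and zero-offset time
    dip = max(max_dip_deg, 30.0)         # never less than the 30-degree diffraction limit
    return z * np.tan(np.radians(dip))

print(migration_aperture(3000.0, 2.0, 20.0))  # -> ~0.577 * 3000 = 1732 m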

ADVANTAGES OF 3-D SURVEY:

1. Azimuth information is available.
2. Detailed delineation of subsurface formations, and thus the ability to
visualize them in three dimensions.

2. Survey Planning

Pre-Survey Planning
At the time of prospect identification, seismic data may be the only data
available to the geoscientist in the life cycle of an exploratory initiative in an
area. Seismic data acquisition endeavors require the formalization of a
systematic work plan.
1. Physical planning
The physical planning aspects of the seismic survey operation are initiated
as soon as the plan is received from the geological department regarding the
area to be covered with 3D seismic surveys, generally three to four months
ahead of the actual recording operations.
2. Technical planning
a. Reconnaissance Survey
For efficient planning and execution of the seismic survey operation, a
detailed reconnaissance survey was carried out in the area. The primary
objective was to identify the physical and topographical features in the area
of operation. Physical aspects such as road networks, access constraints,
and other features such as oil and gas pipelines, factories, etc. falling in the
block were identified. Topographical aspects such as rivers, left-over
remnant river channels, oxbow lakes, lowlands, forest, cultivation, tea
gardens, etc. were identified and documented. The information so gathered
provides immense aid in the planning and design of the seismic survey.
b. Establishing a Grid of GPS Points

A satellite-based positioning system is used for fixing the reference
benchmarks in the area of operation. These reference benchmarks, fixed with
the GPS system, enable better control and meet the precise spatial positioning
requirements of the area of operation during the course of control traversing
and the final implantation work of staking out the positions of sources
and receivers on the surface as per the predetermined theoretical grid.
c. Acquisition Geometry and Parameters
Standard procedures were followed to decide the various acquisition
parameters. We approached the selection of acquisition parameters for 3D
seismic surveys as a two-step process:
• Pre-survey design (preliminary inputs)
• Detailed design (finalizing the acquisition geometry and
parameters)
Pre-survey design primarily involves gathering the various available
information, viz. exploration objectives, review of existing geo-scientific
data, and socio-cultural data from the operational area, including logistics
and so on. This information was analyzed to broadly estimate the geophysical
requirements of the survey consistent with the exploration objectives, as well
as the financial implications of implementing the survey. The pre-survey
design sets the constraints in terms of fold, offset and azimuth distribution,
bin size, etc., i.e. it sets the preliminary inputs to the survey design.
A large number of geometries are simulated based on the technical
requirements observed in the pre-survey design, which serve as the input
to the second step, i.e. detailed design. The detailed design process
incorporates the choice of critical acquisition parameters such as the shooting
pattern, the spacing of shots, shot lines, receivers and receiver lines, as well
as the inline and cross-line spread lengths, consistent with the broad
technical requirements observed during the pre-survey design process.

APPROACH TO SELECT THE ACQUISITION PARAMETERS

GEOPHYSICAL / GEOLOGICAL OBJECTIVE
→ REVIEW OF AVAILABLE GEOSCIENTIFIC DATA
→ DECIDING THE PRELIMINARY INPUTS
→ SURVEY DESIGNING
→ DOES IT MEET THE REQUIREMENTS?
   NO → back to SURVEY DESIGNING
   YES → FINALIZE THE GEOMETRY

Field Operation
1. Survey
Surveying is the science and art of making the measurements necessary
to determine the relative positions of points above, on or beneath the surface
of the earth, or to establish such points. Surveying work consists primarily
of making such measurements and can be divided into three parts:
field work, computing and mapping.
In seismic surveys the main objective of the survey work is to
mark/fix and provide the precise positions of the sources and sensors to be
laid out on the ground prior to the commencement of recording operations.
In order to achieve the desired level of precision in geophysical positioning,
the surveying work was divided into three steps:
• DGPS Survey
• Control Point Survey
• Implantation Survey.

a. DGPS Survey
DGPS is a differential mode of operation of GPS and involves the
measurement of the relative position of an unknown point (rover) with
respect to a known point (reference or master). The technique is extensively
used in the exploration industry for precise positioning of the sources and
receivers in almost all seismic survey endeavors.
The study area was adjacent to the Moran, Thowara, Borbil-Diroi and
Dipling 3D blocks, which had already been covered by 3D seismic surveys.
The satellite points fixed in the adjacent block, viz. Borbil-Diroi, were used
as master control points.
Practices Adopted for Fixing DGPS Points
There are certain standard practices that were followed during the
course of GPS observations to avoid operational errors and ensure precision
for further computations.
• DGPS-surveyed points were distributed uniformly in the block to allow
better quality control of the surveyed grid during the course of the control
and implantation surveys.
• An observation site was selected where 15° clearance from the
horizon to the sky was available.

• The observation site was chosen to avoid high-tension lines and radio
signal transmitter stations.
• Battery status and memory card status were checked before going to
the field.
• Proper centering and leveling of the receivers at the master and rover
points was ensured.
• Since the reference station is set up at a point of known coordinates, the
coordinates of this reference point were entered carefully before starting
the observations.
• Receiver antenna heights were measured accurately and documented on
the DGPS sketch sheet provided to the surveyor.
• The time of observation was fixed based on the baseline length and
was always kept longer than the required specification to avoid
repetition of observations at the same point.
• A person was engaged at each receiver station site to make sure the
receivers were not disturbed during the course of the field observations.
• The field data were processed to resolve the ambiguity with a GDOP value
of less than 5, as against the 8 specified in the SKI-PRO manual. Lower
GDOP values help in providing more accurate solutions.

b. Control Point Survey

Control point surveys were carried out along village roads and approaches
in tea gardens, etc. using total stations. Utmost care was taken to avoid
operational errors during the course of traversing, to minimize the error in
the computed coordinates of the control points. The traverses of the total
station were tied to the benchmark points obtained from the DGPS survey.
Triangulation methods were applied to verify the coordinates of the control
points with respect to the benchmark points.

Practices Adopted in Control Point (CP) Surveying

• The survey was always started from two known DGPS-positioned points
and tied to two other (or the same) DGPS-positioned points.
• After a CP was confirmed, its position was marked by fixing a wooden
peg driven deep into the ground, with the identification number of the
control point written on it.
• The visibility of two consecutive CPs was required for their subsequent
use during line opening and line tying. Site selection against this
backdrop was always done carefully so as to avoid any problem during
the course of the implantation survey work.
• The control traverses were chosen and run carefully so that no trace
line or shot point line was more than two and a half kilometers away,
for better control.
• Detailed line sketches of the topographic and other features were
prepared during the course of the control survey.

c. Implantation Survey
The purpose of the implantation survey was to mark and implant the survey
lines on the ground as per the planned grid using electronic total stations.
Pickets painted red, with white on top, were fixed in the ground to identify
trace points and shot points respectively. The corresponding trace number
and shot point number were written on them with a black marker. Shot points
were marked with a circle on the ground, whereas trace points were marked
with a cross.

Practices Adopted in Implantation Survey

• The implantation survey was always started from a pair of control points
or established DGPS points available nearest to the line to be surveyed.
• The staked-out points were tied to the nearest available control points or
DGPS points.
• Detailed line sketches of the topographic and other features were
prepared during the course of the implantation survey along each line.
• The source and receiver points were staked out on the ground and the
line was closed with respect to the known points. The data were
subsequently processed to determine the quality of the survey; if the
quality was within tolerances the line was accepted, otherwise it was
resurveyed.
• The differences between theoretical coordinates and field coordinates
were checked on a daily basis.
• All possible precautions were taken to avoid operational errors due to
wrong input of coordinates, incorrect centering and leveling of the
equipment, measurement of height, etc.

2. Recording Operation
Successful implementation of the survey design during the recording phase
is the key to acquiring seismic data with the good geophysical attributes
envisaged during the design process. For efficient implementation of the
survey design, the jobs, roles and responsibilities of the recording crew were
divided among the following three groups:
• Layout of the acquisition spread and monitoring
• Drilling and monitoring of shot holes
• Loading and blasting of shot holes

a. Layout of the Acquisition Spread & Monitoring


The primary role of the line supervisor was to ensure proper physical
connection between the station units (S.U.) and proper planting of
geophones at their staked-out positions. For this purpose each acquisition
line, divided into three parts viz. low, middle and high, was manned by three
company supervisors and three contractor supervisors. Besides ensuring the
correct positions and proper physical connections, their job included
replacement of any faulty cables, geophones, SUs or PSUs during the
course of line checking, as communicated by the observer.
Besides the physical checking of the layout of the acquisition spread, the
quality of the acquisition spread was controlled through online monitoring of
the spread on the instrument. The main role of the observer was to check the
line status through troubleshooting, to record and maintain the quality of the
data, and to communicate properly with the crew members.

b. Shot hole Drilling


Since the area of the acquisition spread in a 3D seismic survey on any
day is much larger than a conventional 2D spread, efficient control
and management need to be exercised for the proper location and drilling of
shot holes, including their depths.
Three company drilling supervisors were entrusted with the responsibility
of monitoring the shot hole depths and positions.
Manual drilling was used to drill shot holes to a depth of 60 feet. For
shot locations falling in low-lying areas and near river channels,
precautionary measures were taken to load the holes as soon as they were
drilled, to avoid hole collapse. Such areas were identified one day in advance
of the recording operations so that the movement of blasters to these points
could be planned systematically without any loss of operational time.

c. Loading and Blasting of Shot Holes


As dynamite is used as the subsurface source of energy to generate the
elastic waves in the seismic survey operation, it requires awareness in
handling. The loader crews again verified the shot hole depth and used
10-foot steel rods to load and properly couple the explosive charges, varying
from 2.5 to 5.0 kg. On many occasions shot holes were either washed properly
or redrilled up to the desired depth to generate a good seismic signal.

Quality Control
1. The accuracy of the surveyed grid was maintained by adopting the
best practices in DGPS, control point and implantation surveying
(Field Operation: Survey).
2. Before going to the field, we ensured the proper functioning of the
ground electronics in order to avoid loss of operational time and any
interference in the data due to faulty electronic equipment.
3. A small geophysical laboratory, equipped with a Test and Maintenance
System (TMS), Line Tester (LT 388) and Geophone Analyzer, was
established for the repair and calibration of faulty ground electronics.
4. A bit-to-bit verification test of the instrument was carried out to ensure
error-free seismic data acquisition.


5. Company line supervisors, contractor line supervisors and executives
ensured correct positioning of receiver lines and shot hole positions
through the following:
• Shot point and trace point intervals were checked and verified by
step traversing wherever there were any doubts.
• It was ensured through physical checking that every geophone
was planted vertically and that its coupling with the ground was
firm.
• No compromises were made regarding the shot hole depth. Every
shot hole was drilled to the desired depth to generate a good seismic
signal. On many occasions shot holes were washed properly so the
explosives could be placed inside with proper coupling at the desired
depth. Shots were always tamped with water and mud for proper
coupling and energy penetration.
6. Crosstalk, impulse and leakage tests were run every day in the field,
prior to and during seismic data acquisition. The results were analyzed
so that faulty cables, SUs, CSUs, geophones, etc. could be replaced
with new ones.
7. Raw monitor records were used as a QC tool during the field operation
as follows:
• The first break on the monitor records depicts the shot position.
• The slope of the first arrivals, the direct wave energy and the
refracted energy were used to verify the trace line alignment.
• Noise analysis and the mappability of deeper or target horizons
on the monitor records were used to get a rough idea of the shot
hole depth and charge size used.
• The geophone response in terms of polarity and planting was
verified from the monitor records.
• Utmost care was taken not to let noise enter the data. During
the course of production shooting all the lines in the swath were
monitored on a continuous basis to avoid any form of undesired
noise in the recorded seismic signatures. Shots were only taken
when the noise level was at its minimum. This assumed
significant importance as several trace lines crossed highways
with heavy traffic plying on them.
8. Planning of recovery shots had to be done on a continuous basis due to
obstacles, local problems and other access constraints in the area of
operation. Effective recovery shots with suitable spread patterns were
planned through simulation in such a way that the missing geophysical
attributes were recovered to the best possible extent.

Up hole Survey and Data Interpretation

The earth is assumed to be made of different homogeneous layers with
different physical properties, but the near-surface layer, known as the
weathering layer, shows great variation in its physical properties and is of
great importance in hydrocarbon exploration. To delineate the deeper
horizons, it is necessary to know the physical properties of the near surface,
because heterogeneity in the physical properties of the near surface affects
the data recorded for deeper horizons. The main objective of the uphole
survey is to determine the velocity and thickness of the weathering layer as
well as the sub-weathering layer. The results of the uphole survey are used
to calculate the source and receiver static corrections.
3. Seismic Energy source

Seismic energy sources are used to generate energy which propagates
through the subsurface and undergoes various phenomena like refraction,
reflection and diffraction before returning to the earth's surface, where it is
recorded by detectors placed near the surface. There are a number of energy
sources, both explosive and non-explosive. We use these energy sources for
land and marine seismic data acquisition.
The main difference between land and marine seismic is the water
layer, which has to be penetrated by the signal. The physical processes by
which seismic energy is generated in water are different from those in
solid earth materials.
3.1 Land energy sources & Marine energy sources

• Dynamite
Dynamite is an ideal seismic source for land data acquisition because of the
impulsive nature of the seismic signal it creates and the convenient storage
and mobility it provides for energy that can be converted into ground motion.
There are several disadvantages associated with the use of dynamite.

1) In seismic operations, the dynamite is planted in sticks or cans in
boreholes that may range from 30 ft to several hundred feet in depth. This
requires drilling the holes, which is a difficult and expensive procedure. It is
difficult to use in hilly terrain, where heavy drilling equipment is not easy
to move, and in deserts, where water is not readily available.
2) Dynamite is dangerous to use. Any mishandling can be costly and
harmful.
Dynamite is also used in marine seismic data acquisition. In shooting
explosives at sea, it has been customary to detonate the charge at such a
shallow depth that the bubble breaks through the surface of the water and
does not oscillate. The maximum depth d (in feet) at which the bubble will
break through is related to the charge weight W (in pounds) by the formula
d = 3.8 W^(1/3).
There are several disadvantages associated with the use of dynamite at sea:
1) It causes loss of marine life and property.
2) Low efficiency, since a large portion of the energy goes into the air.
3) Danger in handling the explosives.

• Buried Primacord

Primacord is an explosive extruded into a rope-like form; it is buried,
detonated at one end or at its center, and the explosive disturbance propagates
along it at a speed of 22,000 ft/s. The Geoflex source system operates on this
principle.

• Vibroseis:

Vibroseis is a method used to propagate energy signals into the earth over an
extended period of time as opposed to the near instantaneous energy
provided by impulsive sources described above. The signal was originally

generated by a servo-controlled hydraulic vibrator or shaker unit mounted on
a mobile base unit but electro-mechanical versions have also been
developed. Vibroseis was developed by the Continental Oil Company
(Conoco) during the 1950s and was a trademark until the company's patent
elapsed.
• Air Gun
A source of seismic energy used in acquisition of marine seismic data. This
gun releases highly compressed air into water. Air guns are also used in
water-filled pits on land as an energy source during acquisition of vertical
seismic profiles.

Fig 3.1.1 Single air gun schematic (solenoid valve, firing piston, ports;
explosive release of high-pressure air) in the armed and fired positions, and
an array air gun arrangement.

• Weight dropping
Early experiments were carried out in 1924, but it was the use of magnetic
recording, which made it possible to merge impulses from multiple drops in
close propinquity, that produced adequate energy from such sources. The
first commercial system of this type was invented by McCollum and put to
use for oil exploration in 1953. The source he developed is called a thumper.
A high level of noise is generated by weight dropping, mostly in the form of
surface waves. The procedure involves dropping a 3-ton iron slab, attached
to a special crane on a truck, hoisted 9 ft up in the air. As soon as the first
drop is made, the truck is moved 10 ft to another spot within a few seconds.
Waves from each drop are picked up by the detector spread and recorded on
magnetic tape for later analysis.
The main disadvantage of this mechanism is that dropping in no way
produces synchronized impacts from multiple units.

• Flexotir
This is a small pellet of dynamite weighing about two ounces. It is
detonated at the center of a perforated cast-iron spherical shell about 2 ft in
diameter, which is towed behind the ship at a depth of about 40 ft. The
perforations break up the bubble so that the undesired effects of bubble
pulsing on the signal are suppressed. Here the shooting depth is about 1/4 of
the wavelength of a typical seismic reflection wave; therefore, the efficiency
of the explosion for generating seismic energy is much greater than for
detonation just below the surface.

• Maxipulse
Another source containing dynamite, the Maxipulse is also designed for
detonation under conditions that combine safety, efficiency and elimination
of bubble-pulse effects. Here the charge is about half a pound, packed in a
can which is injected into the water at a depth of about 20 to 40 ft. The
detonation takes place about 1 second after injection. After detonation, a
bubble is formed which expands and collapses with a period of about
100 milliseconds. In order to reduce the bubble oscillation a filter is used.

4. SEISMIC DATA PROCESSING

Alteration of seismic data to suppress noise, enhance signal and migrate
seismic events to their appropriate locations in space is termed seismic
processing. It facilitates better interpretation, because subsurface structures
and reflection geometries become more apparent.

4.1 OBJECTIVES
• To obtain a representative image of the subsurface.
• Improve the signal-to-noise ratio, e.g. by recording several channels
and stacking the data (white noise is suppressed).
• Present the reflections on the record sections with the greatest possible
resolution and clarity and in the proper geometrical relationship to each
other by adapting the waveform of the signals.
• Isolate the wanted signals (separate reflections from multiples and
surface waves).
• Obtain information about the subsurface (velocities, reflectivity, etc.).
• Obtain a realistic image by geometrical correction.
• Convert from travel time into depth and correct for dips and
diffractions.

4.2 PREPROCESSING
Preprocessing is the first and an important step in the processing sequence,
and it commences with the reception of the field tapes and the observer's log.
The field tape contains the seismic data and the observer's log contains the
geographical data (shot/receiver number, picket number, latitude and
longitude, etc.).

• DEMULTIPLEXING

Field tapes customarily arrive at the processing center written in
multiplexed format (time-sequential), because that is the way the sampling
is generally done in the field. The early stages of processing require
channel-ordered, or trace-ordered, data. Demultiplexing is therefore done to
convert the time-sequential data into trace-sequential data.
Mathematically, demultiplexing is seen as transposing a big matrix so that
the columns of the resulting matrix can be read as seismic traces recorded at
different offsets with a common shot point. At this stage, the data are
converted to a convenient format that is used throughout the processing.
This format is determined by the type of processing system and the
individual company. A common format used in the seismic industry for data
exchange is SEG-Y, established by the Society of Exploration Geophysicists.
Nowadays demultiplexing is done in the field.
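Viewed as a matrix operation, demultiplexing is simply a transpose: rows
holding one time sample across all channels become columns that read as
individual traces. A toy numpy illustration (the channel counts are arbitrary):

import numpy as np

n_samples, n_channels = 6, 4
# Multiplexed (time-sequential): one row per sample instant, one column per channel.
multiplexed = np.arange(n_samples * n_channels).reshape(n_samples, n_channels)
# Demultiplexed (trace-sequential): one row per channel, i.e. one seismic trace.
traces = multiplexed.T
assert traces.shape == (n_channels, n_samples)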

• REFORMATTING
The formats generally used for data recording are SEG-D (demultiplexed
data) and SEG-B (multiplexed data); hence they are called field formats.
Data recorded in SEG-D format are already demultiplexed. In this stage the
data are converted to a convenient format, which is used throughout
processing. There are many standards available for data storage. The format
differs with the manufacturer, the type of recording instrument and also with
the version of the operating system.

• Field Geometry Set Up

The field geometry is created with the help of information provided by the
field party, as follows:
1. Survey information
(I) X and Y coordinates of shot/vibrator points
(II) Elevations of geophone/shot points
2. Recording instrument
(I) Record file numbers
(II) Shot interval, group interval, near offset and far offset
(III) Layout, number of channels, foldage
3. Processing information
(I) Datum statics
(II) Near-surface model
(III) Datum plane elevation

• EDITING

Trace editing consists of killing extremely noisy traces and muting the
first arrivals on all traces. Traces from poorly planted geophones may show
sluggishness, introduce low frequencies, and sometimes cause spiky
amplitudes, and therefore degrade a CMP stack. These traces are identified
during the manual inspection/editing phase of all the shot records and
flagged in the header so that they will not be included (they are "killed") in
processing steps and in displays.
Traces so noisy that they do not visually correlate with strong arrivals
on adjacent traces should be killed. We have to be conservative in trace
killing, because the fold of this data is low and eliminating even a few traces
may have a noticeable effect on the stacked traces.
Editing involves leaving out the auxiliary channels and NTBC traces
and detecting and replacing dead or exceptionally noisy traces. Bad data may
be replaced with interpolated values. Noisy traces, those with static glitches
or mono-frequency high-amplitude signal levels, are deleted. Polarity
reversals are corrected. Output after editing usually includes a plot of each
file so that one can see which data need further editing and what type of
noise attenuation is required.

Fig.4.1.2 (a) before editing (b) after editing

• SPHERICAL DIVERGENCE CORRECTION

A single shot can be thought of as a point source which gives rise to a
spherical wave field. There are many factors which affect the amplitude of
this wave field as it propagates through the earth.
Two important factors which have a major effect on a propagating wave
field are spherical divergence and absorption. Spherical divergence causes
the wave amplitude to decay as 1/r, where r is the radius of the wavefront.
Absorption results in a change of the frequency content of the initial source
signal in a time-variant manner as it propagates; since the earth behaves as
a low-pass filter, high frequencies are rapidly absorbed.
Some programs used for gain are AGC, PGC and the geometric spreading
correction.
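For a constant velocity the 1/r decay can be undone by a gain proportional to
time, since r = v t / 2 and the velocity cancels once the gain is normalized to
a reference time. A minimal sketch of such a geometric spreading correction
(the reference time is an arbitrary choice):

import numpy as np

def spreading_gain(trace, dt, t_ref=0.1):
    """Scale each sample by t/t_ref, undoing a 1/r decay for constant velocity."""
    t = np.arange(1, trace.size + 1) * dt  # sample times (s); start at dt to avoid t = 0
    return trace * (t / t_ref)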

• STATIC CORRECTION

When seismic observations are made on non-flat topography, the
observed arrival times do not depict the subsurface structures. The
reflection times must be corrected for elevation and for the changes in the
thickness of the weathering layer with respect to a flat datum. The former
correction removes differences in travel time due to variations in the surface
elevations of the shot and receiver locations. The weathering corrections
remove differences in travel time through the near-surface zone of
unconsolidated low-velocity material, which may vary in thickness from
place to place. These are called static corrections, as they do not change
with time. The static corrections are computed taking into account the
elevations of the source and receiver locations with respect to a seismic
reference datum (such as mean sea level) and velocity information in the
weathering and sub-weathering layers. Often, special surveys (uphole
surveys, shallow refraction studies) precede the conventional acquisition to
obtain the characteristics of the low-velocity layer.
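For the elevation part of the correction, each source and receiver is shifted to
the datum using a replacement (sub-weathering) velocity. A minimal sketch
with hypothetical elevations; sign conventions vary between processing
systems.

def elevation_static(src_elev, rcv_elev, datum, v_repl):
    """Two-way shift (s) to move source and receiver to the datum; elevations in m."""
    t_src = (src_elev - datum) / v_repl    # one-way time at the source
    t_rcv = (rcv_elev - datum) / v_repl    # one-way time at the receiver
    return -(t_src + t_rcv)                # remove the extra travel time above datum

print(elevation_static(120.0, 95.0, 100.0, 1800.0))  # -> about -0.0083 s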

MAIN PROCESSING

Main processing then begins. It includes three major steps, as follows:
1. DECONVOLUTION
2. STACKING
3. MIGRATION

Fig 4.1.3 Seismic data volume represented in processing coordinates:
midpoint-offset-time
• Deconvolution acts on the data along the time axis and increases temporal
resolution.
• Stacking compresses the data volume in the offset direction and yields
the plane of the stacked section (the frontal face of the prism).
• Migration then moves dipping events to their true subsurface positions
and collapses diffractions, thus increasing lateral resolution.


• DECONVOLUTION

Deconvolution is a process that improves the temporal resolution of seismic
data by compressing the basic seismic wavelet.

The need for deconvolution:

In exploration seismology the seismic wavelet generated by the source
travels through different geologic strata to reach the receiver. Because of
the many distorting effects encountered, the wavelet reaching the receiver is
by no means similar to the wavelet propagated by the source.
Objectives of deconvolution:
• Shorten reflection wavelets.
• Attenuate ghosts, instrument effects, reverberations and multiple
reflections.

The convolutional model for deconvolution assumes:

(I) The earth is made up of horizontal layers of constant velocity.
(II) The source generates a compressional plane wave that impinges on
layer boundaries at normal incidence.
(III) The source waveform does not change as it travels in the subsurface.
(IV) The noise component n(t) is zero.
(V) The source waveform is known.
(VI) Reflectivity is a random series.
(VII) The seismic wavelet is minimum phase.

There are two types of deconvolution:

1) Deterministic deconvolution:

Deconvolution where the particulars of the filter whose effects are to
be removed are known is called deterministic deconvolution. The
source wave shape is sometimes recorded and used in a deterministic
source-signature correction; no random assumptions are involved. For
example, where the source wavelet is accurately known we can do
source-signature deconvolution.

2) Statistical deconvolution:
Statistical deconvolution derives information about the wavelet from
the data itself, where no information is available about any of the
components of the model. Statistical deconvolution is applied without
prior application of deterministic deconvolution in the case of land
data acquired with an explosive source. In addition we make certain
assumptions about the data that justify the statistical approach.
There are two types of statistical deconvolution:
(I) Spiking deconvolution: the process by which the seismic wavelet
is compressed into a zero-lag spike.
(II) Predictive deconvolution: the process uses a prediction distance
greater than unity and yields a wavelet of finite duration instead of a
spike. This is helpful in suppressing multiples.
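Spiking deconvolution is commonly implemented as a Wiener least-squares
inverse filter derived from the trace autocorrelation, under the minimum-phase
and white-reflectivity assumptions listed above. A compact sketch using scipy;
the filter length and prewhitening level are typical but arbitrary choices:

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def spiking_decon(trace, n_filter=40, prewhitening=0.001):
    """Design and apply a least-squares spiking filter from the trace autocorrelation."""
    ac = np.correlate(trace, trace, mode="full")[trace.size - 1:][:n_filter]
    ac[0] *= 1.0 + prewhitening       # stabilize the Toeplitz normal equations
    desired = np.zeros(n_filter)
    desired[0] = 1.0                  # desired output: a zero-lag spike
    f = solve_toeplitz(ac, desired)
    return lfilter(f, 1.0, trace)     # convolve the filter with the trace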

• CMP Sorting:

Seismic data acquisition with multifold coverage is done in shot-receiver
(s, g) coordinates. Seismic data processing, on the other hand, is
conventionally done in midpoint-offset (y, h) coordinates. The required
coordinate transformation is achieved by sorting the data into CMP gathers:
based on the field geometry information, each individual trace is assigned
to the midpoint between the shot and receiver locations associated with
that trace. Those traces with the same midpoint are grouped together,
making up a CMP gather.

Fig 4.1.4 Seismic data in shot-receiver coordinates

Fig 4.1.5 Seismic in common midpoint gather
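The (s, g) to (y, h) transformation is a relabelling, midpoint y = (s + g)/2 and
offset h = g − s, followed by grouping on the midpoint key. A minimal sketch
with hypothetical trace headers:

from collections import defaultdict

def sort_to_cmp(headers):
    """Group trace indices by midpoint; headers is a list of (shot_x, rcv_x) pairs."""
    gathers = defaultdict(list)
    for i, (s, g) in enumerate(headers):
        midpoint = 0.5 * (s + g)
        gathers[midpoint].append((g - s, i))   # store (offset, trace index)
    return gathers

cmp_gathers = sort_to_cmp([(0, 100), (25, 75), (0, 200), (50, 150)])
# midpoints 50.0 and 100.0 each collect two traces with different offsets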

• Velocity Analysis

Velocity analysis is the most important and sensitive part of processing.
Without velocities we cannot convert the seismic section into the depth
domain, which is very necessary, and for applying the NMO correction we
need the NMO velocity. Ideally we would perform velocity analysis on
each CDP gather, but that is not feasible. Hence we perform velocity
analysis on one CDP gather from each group of CDP points (generally the
group size is 50 CDP points). There are several methods of velocity
analysis, such as constant velocity scans, constant velocity stacks (CVS),
the velocity spectrum method and horizontal velocity analysis. Of these
methods, the velocity spectrum method is nowadays mostly used because it
distinguishes signal along hyperbolic paths even with a high level of
random noise. This is because of the power of cross-correlation in
measuring coherency. The accuracy of the velocity estimate is nevertheless
limited.

(I) Constant Velocity Stacks (CVS):

The figure illustrates this method. In this example, a portion of a line
consisting of 24 common-depth-point gathers has been NMO-corrected and
stacked with velocities ranging from 1000 m/s to 3000 m/s. The resulting
24 stacked traces, displayed as one panel, represent one constant velocity.
These panels are displayed side by side with the velocity values indicated,
where velocity values increase from left to right. Stacking velocities are
picked directly from these panels by selecting the velocity that yields the
best coherency and the strongest amplitude at a given time.
Care must be taken in using this kind of velocity analysis to estimate
the best stacking velocities. One should know the velocity range of the
area, especially if there are structural changes.

(II) Velocity spectrum method:

The velocity spectrum approach is unlike the CVS method. It is based
on the correlation of the traces in a CMP gather, not on the lateral continuity
of stacked events. Compared with the CVS method, this method is more
suitable for data with multiple-reflection problems and less suitable for
highly complex structural problems. Suppose we repeatedly correct the
gather using constant velocity values from 2000 to 4300 m/s, then stack the
gather and display the stacked traces side by side. The result is a display of
velocity versus two-way time, called a "velocity spectrum".
There are two commonly used ways to display the velocity spectrum:
the power plot and the contour plot.

Fig 4.1.5 Two ways of displaying the velocity spectrum derived from the
CMP gather in (a): (b) power plot, (c) contour plot.

(III) Horizontal Velocity Analysis:

One method to estimate velocities with enough accuracy for structural
and stratigraphic applications is to analyze the velocities of a certain horizon
of interest continuously. Such a detailed velocity analysis is called horizontal
velocity analysis. The velocity is estimated at every CMP along the selected
key horizon of interest on the stacked section. The principle of estimating
the velocities by this method is the same as that of the velocity spectrum.
The output coherency values derived over hyperbolic time gates are
displayed as a function of velocity and CMP position.
One of the applications of horizontal velocity analysis is to resolve the
lateral velocity variation along marker horizons, especially if these
velocities are used in post-stack depth migration.

Fig 4.1.6 The CMP gather and its velocity spectrum; the curve to the right
of the semblance peaks is the interval velocity function derived from the
picked rms velocity function.
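The coherency measure behind the velocity spectrum is usually semblance,
evaluated along hyperbolic trajectories for a range of trial velocities. A
condensed numpy sketch (nearest-sample lookup for brevity; it assumes the
time gate stays inside the traces):

import numpy as np

def semblance(gather, offsets, dt, t0, v, gate=5):
    """Semblance of a CMP gather along t(x) = sqrt(t0^2 + (x/v)^2)."""
    t = np.sqrt(t0**2 + (offsets / v) ** 2)     # hyperbolic travel time per trace
    idx = np.round(t / dt).astype(int)
    win = np.arange(-gate, gate + 1)
    s = np.stack([tr[i + win] for tr, i in zip(gather, idx)])
    return np.sum(s.sum(axis=0) ** 2) / (s.shape[0] * np.sum(s**2))

# Scanning v over, say, 1500-4500 m/s and t0 over the record gives the spectrum panel.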

 NORMAL MOVEOUIT CORRECTION

Nonzero-offset data are characterized by an increase in traveltime with
increasing offset between the source and the receiver. Nonzero-offset to
zero-offset conversion is achieved through a correction called the NMO
(normal moveout) correction.
For a single, constant-velocity horizontal layer, the traveltime curve
as a function of offset is a hyperbola. The time difference between the travel
time at a given offset and at zero offset is called the normal moveout (NMO).
The velocity required to correct for normal moveout is called the normal
moveout velocity. For a single horizontal reflector, the NMO velocity is
equal to the velocity of the medium above the reflector.
In the simple case of a single horizontal layer, using the Pythagorean
theorem, the traveltime equation as a function of offset is

t²(x) = t²(0) + x²/v²

where x is the offset between the source and receiver positions, v is the
velocity of the medium above the reflecting interface, and t(0) is twice the
traveltime along the vertical path. The NMO correction is given by the
difference between t(x) and t(0):
∆tNMO = t(x) – t(0)
= t(0) {[1 + (x / (vNMO · t(0)))²]^(1/2) – 1}
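A minimal sketch of applying this correction to a single trace: for each output time t(0), the sample at the moveout time t(x) is fetched with linear interpolation. The function and variable names are illustrative, not from any particular processing system.

import numpy as np

def nmo_correct(trace, offset, dt, v_nmo):
    """Map the amplitude recorded at t(x) = sqrt(t0^2 + (x/v)^2)
    back to its zero-offset time t0, for one trace of a CMP gather.
    v_nmo may be a scalar or an array of length nsamples."""
    nt = len(trace)
    t0 = np.arange(nt) * dt                      # zero-offset times
    tx = np.sqrt(t0**2 + (offset / v_nmo)**2)    # hyperbolic moveout times
    # sample the input trace at the moveout times (zero past the end)
    return np.interp(tx, t0, trace, right=0.0)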

Fig 4.1.7 The simple geometry for NMO correction in single layer

NMO in a horizontal stratified earth

Now consider a medium composed of horizontal isovelocity layers (Fig
4.1.8). Each layer has a certain thickness that can be defined in terms of its
two-way zero-offset time. The layers have interval velocities (v1, v2, …, vN),
where N is the number of layers. The traveltime equation for the path SDR is
t²(x) = c0 + c1x² + c2x⁴ + c3x⁶ + …
where c0 = t²(0), c1 = 1/v²rms and c2, c3, … are complicated functions of
the layer velocities and thicknesses. The rms velocity vrms down to the
reflector on which depth point D is situated is defined as
v²rms = (1/t(0)) Σ v²i ∆ti(0)
where ∆ti(0) is the vertical two-way time through the i-th layer and
t(0) = Σ ∆tk(0). By making the small-spread approximation, the series above
can be truncated as follows:
t²(x) = t²(0) + x²/v²rms
Here we see that the velocity required for NMO correction of a
horizontally stratified medium is equal to the rms velocity.
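The rms and interval velocities can be computed from one another; the sketch below implements the definition above and, in the reverse direction, the Dix equation for the interval velocity between two successive reflectors. The names and the three-layer example are illustrative.

import numpy as np

def v_rms(v_int, dt0):
    """Rms velocity down to each reflector, from interval velocities
    v_int and vertical two-way interval times dt0. Returns (vrms, t0)."""
    v_int, dt0 = np.asarray(v_int, float), np.asarray(dt0, float)
    t0 = np.cumsum(dt0)                       # two-way time to each base
    return np.sqrt(np.cumsum(v_int**2 * dt0) / t0), t0

def dix_interval(vrms, t0):
    """Dix equation: interval velocity between successive reflectors."""
    num = vrms[1:]**2 * t0[1:] - vrms[:-1]**2 * t0[:-1]
    return np.sqrt(num / (t0[1:] - t0[:-1]))

# Example: three layers, 2000/2500/3000 m/s with 0.4/0.6/0.5 s two-way times
vr, t0 = v_rms([2000.0, 2500.0, 3000.0], [0.4, 0.6, 0.5])
v_back = dix_interval(vr, t0)                 # recovers 2500 and 3000 m/s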

Fig 4.1.8 NMO for horizontal layer

Fig 4.1.9 Before and after NMO correction

NMO Stretching:
In NMO correction, a frequency distortion occurs, particularly for
shallow events and at large offsets. This is called NMO stretching. The
waveform with a dominant period T is stretched so that its period becomes
T’. Stretching is a frequency distortion in which events are shifted toward
lower frequencies; it is quantified as
∆f / f = ∆tNMO / t(0)
where f is the dominant frequency and ∆f is the change in frequency.

Because of the stretched waveform at large offsets, stacking the NMO-
corrected CMP gather would severely damage the shallow events. This
problem is solved by muting the stretched zone in the gather.
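A sketch of such a stretch mute: samples whose stretch ∆tNMO/t(0) exceeds a chosen threshold are zeroed before stacking. The 50% threshold below is a common processing choice, not a fixed rule, and the names are illustrative.

import numpy as np

def stretch_mute(trace_nmo, offset, dt, v_nmo, max_stretch=0.5):
    """Zero NMO-corrected samples whose stretch dtNMO/t0 exceeds
    max_stretch (e.g. 0.5 for a 50% mute)."""
    nt = len(trace_nmo)
    t0 = np.arange(1, nt + 1) * dt            # start at dt to avoid t0 = 0
    tx = np.sqrt(t0**2 + (offset / v_nmo)**2)
    stretch = (tx - t0) / t0                  # = dtNMO / t0 ~ df / f
    out = trace_nmo.copy()
    out[stretch > max_stretch] = 0.0
    return out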

Fig 4.1.8 NMO Stretch

 RESIDUAL STATIC CORRECTION

Residual statics are static deviations from perfectly hyperbolic travel
times that remain after applying NMO and elevation statics corrections to the
traces within a CMP gather. These statics cause misalignment of the seismic
events across the CMP gather and generate a poor stack trace. We need to
estimate the time shifts required for perfect alignment, then correct for them
using an automatic procedure.
A model is needed for the moveout-corrected traveltime from a source
location to a point on the reflecting horizon, and back to a receiver location.
The key assumption is that the residual statics are surface consistent,
meaning that the static shifts are time delays that depend only on the source
and receiver locations on the surface. Since the near-surface weathered layer
has a low velocity and refraction at its base tends to make the travel path
vertical, the surface-consistent assumption usually is valid. However, this
assumption may not be valid for a high-velocity permafrost layer, in which
rays tend to bend away from the vertical.
Residual static corrections involve three stages:
1. Picking the values.
2. Decomposition into its components: source and receiver statics,
structural and normal-moveout terms (a least-squares sketch of this
stage follows the list).
3. Application of the derived source and receiver terms to the travel
times on the pre-NMO-corrected gather, after finding the best solution
of the residual static correction. These statics are applied to the
deconvolved and sorted data, and the velocity analysis is re-run.
A refined velocity analysis can then be obtained to produce the most
coherent stack section.
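The decomposition in stage 2 is commonly posed as a least-squares problem. In the minimal sketch below, each picked shift for source i and receiver j is modeled as si + rj; the structural and residual-moveout terms of the full surface-consistent model are omitted for brevity, and all names are illustrative.

import numpy as np

def decompose_statics(picks):
    """Least-squares split of picked time shifts t_ij ~ s_i + r_j.

    picks: iterable of (source_index, receiver_index, time_shift).
    Returns (source statics, receiver statics). The split is defined
    only up to a constant; lstsq returns the minimum-norm solution."""
    src, rcv, t = (np.array(c) for c in zip(*picks))
    ns, nr = src.max() + 1, rcv.max() + 1
    A = np.zeros((len(t), ns + nr))
    A[np.arange(len(t)), src] = 1.0           # source terms
    A[np.arange(len(t)), ns + rcv] = 1.0      # receiver terms
    x, *_ = np.linalg.lstsq(A, t, rcond=None)
    return x[:ns], x[ns:]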

 DMO (dip move out) CORRECTION

The DMO correction addresses the fact that post-stack migration is
acceptable only when the stacked data approximate a zero-offset section. If
there are conflicting dips with varying velocities, or a large lateral velocity
gradient, a prestack partial migration is used to attenuate these conflicting
dips. Applying this technique before stack provides a better stack section
that can then be migrated after stack. Its key points are:

(I) Post-stack migration is acceptable when the stacked data are zero-
offset. This is not the case for conflicting dips with varying velocities or
large lateral velocity variations.
(II) Prestack partial migration, or dip moveout, provides a better stack,
which can be migrated after stack.
(III) PSPM solves only the problem of conflicting dips with different
stacking velocities.

 STACKING

(I) Each common-midpoint gather, after normal moveout correction, is
summed to yield one stacked trace.
(II) Stacking enhances the in-phase components and reduces the random
noise.
(III) Stacking yields a zero-offset section (in the absence of dipping layers
in the subsurface).

Stacking is combining two or more traces into one. This combination


takes place in several ways. In digital data processing, the amplitudes of the
traces are expressed as numbers, so stacking is accomplished by adding
these numbers together.
Peaks appearing at the same time on two traces combine to make a peak
as high as the two added together. The same is true of two troughs. A peak
and a trough of the same amplitude at the same time cancel each other, and
the stacked trace shows no energy arrival at that time. If the two peaks are at
different times, the combined trace will have two separate peaks of the same
sizes as the original ones. After stacking, the traces are “normalized” to
reduce the amplitude so that the largest peaks can be plotted. The figure
below illustrates the principle of stacking.
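In code, stacking an NMO-corrected gather is a sample-by-sample sum, here normalized by the number of live (non-zero) traces at each time so that amplitudes remain comparable where the mute has removed samples; a minimal sketch:

import numpy as np

def stack(gather):
    """Sum an NMO-corrected gather (ntraces x nsamples) into one
    stacked trace, normalizing by the live fold at each sample."""
    live = (gather != 0).sum(axis=0)          # fold at each time sample
    return gather.sum(axis=0) / np.maximum(live, 1)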

Fig 4.1.9 Stacking process

 MIGRATION

Migration is the process that repositions reflected energy from its
common-midpoint position to its true subsurface location. Dipping reflectors
on a CMP stack are plotted downdip of, and with less dip than, their true
positions. A seismic section is assumed to represent a cross-section of the
earth. The assumption works best when layers are flat, and fairly well when
they have gentle dips. With steeper dips the assumption breaks down; the
reflections are in the wrong places and have the wrong dips.
In estimating the hydrocarbons in place, one of the variables is the
areal extent of the trap. Whether the trap is structural or stratigraphic, the
seismic section should represent the earth model.
Dip migration, or simply migration, is the process of moving the
reflections to their proper places with their correct amounts of dip. This
results in a section that more accurately represents a cross-section of the
earth, delineating subsurface details such as fault planes. Migration also
collapses diffractions.
Migration is mainly divided into six types: 1) 2D migration, 2) 3D
migration, 3) time migration, 4) depth migration, 5) prestack migration and
6) post-stack migration.

Bow-tie effect
A concave-upward event in seismic data produced by a buried focus and
corrected by proper migration of seismic data. The focusing of the seismic
wave produces three reflection points on the event per surface location. The
name was coined for the appearance of the event in unmigrated seismic data.
Synclines, or sags, commonly generate bow ties.

Fig 4.1.10 A syncline might appear as a bow tie on a stacked section and can
be corrected by proper migration of seismic data.

Migration algorithms

Migration algorithms can be classified under three main categories:


(I) those that are based on the integral solution to the scalar wave equation,
(II) those that are based on the finite-difference solution, and
(III) those that are based on frequency-wavenumber implementations.

Migration parameters

After deciding on the migration strategy and an appropriate algorithm,
the analyst then decides on the migration parameters:
(I) Migration aperture width is the critical parameter in Kirchhoff migration.

(II) Depth step size in downward continuation is the critical parameter in
finite-difference methods.
(III) The stretch factor is the critical parameter for Stolt migration.

Migration principles

The migration principles are:

(I) The dip angle of the reflector in the geologic section is greater than in the
time section; thus, migration steepens reflectors.
(II) The length of the reflector, as seen in the geologic section, is shorter
than in the time section; thus, migration shortens reflectors.
(III) Migration moves reflectors in the updip direction.


Fig 4.1.11 Migration principle

Methods of Migration

There are different types of migrations:

Kirchhoff migration: This is a statistical approach. It is based on the
observation that, on a zero-offset section, a point scatterer produces a
diffraction hyperbola that migrates to a single point. Migration involves
summation of amplitudes along the hyperbolic path. The advantage of this
method is its good performance in the case of steep-dip structures. The
method performs poorly under low signal-to-noise ratio conditions.
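A heavily simplified constant-velocity, zero-offset Kirchhoff time migration is sketched below: every output point collects the input amplitudes along its diffraction hyperbola. Real implementations add aperture limits, obliquity and amplitude weights and anti-alias filtering, all omitted here, and the names are illustrative.

import numpy as np

def kirchhoff_zero_offset(section, dx, dt, v):
    """section: (ntraces, nsamples) zero-offset data, trace spacing dx,
    sample interval dt, constant velocity v. For each image point (x0, t0),
    sum input amplitudes along t(x) = sqrt(t0^2 + (2*(x - x0)/v)^2)."""
    ntr, nt = section.shape
    image = np.zeros_like(section, dtype=float)
    x = np.arange(ntr) * dx
    for i0 in range(ntr):                     # output trace position x0
        for k0 in range(nt):                  # output time sample t0
            t0 = k0 * dt
            t = np.sqrt(t0**2 + (2.0 * (x - x[i0]) / v)**2)
            k = np.round(t / dt).astype(int)
            ok = k < nt                       # stay inside the data
            image[i0, k0] = section[ok, k[ok]].sum()
    return image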

Finite difference migration: It is a deterministic approach. It is


modeled by an approximation of the wave equation that is suitable for use
with computers. One advantage of the finite-difference method is its ability
to perform well under low signal-to-noise ratio conditions. Its disadvantages
include long computing times and difficulty in handling steep dips.

Frequency Domain or F-K Migration: It is a deterministic approach


via the wave equation, instead of using a finite-difference approximation.
The 2-D Fourier transform is the main technique used here. Its advantages
are fast computing time, good performance under low signal-to-noise ratio
and excellent handling of steep dips, but it has difficulty with widely
varying velocities.
Depth Migration:
Time migration is appropriate as long as lateral velocity variations are
moderate. When these variations are substantial, depth migration is needed
to obtain a true picture of the subsurface.
Time migration generally describes a simpler migration method than
depth migration, though depth migration is more accurate and can handle
more complex situations. Usually, the output from time migration is a time
section, and the output from depth migration is a depth section. Depth
migration also handles lateral velocity variations, but at the cost of much
larger computation times.

Migration Effects

Using the perpendicular-reflection principle, we can consider some
subsurface features and how they will look when converted to sections with
vertical traces. Some rules can then be formed for how the features of the
section must change to be migrated back to their correct configurations. For
simplicity at this stage, it is assumed that the velocity of sound is constant
throughout the geologic section, and that the lines are shot in the dip
direction, so there are no reflections from either side of the line.

1. Reflections move up-dip.
2. Anticlines become narrower.
3. Anticlines may have less or the same vertical closure.
4. The crest of an anticline does not move.
5. Synclines become broader.
6. The low point of a syncline does not move.
7. Synclines may have more or the same closure.
8. Crossing reflections may become a sharp syncline (bow-tie effect).
9. An umbrella shape (a diffraction) becomes a point.
10. The crest of a diffraction does not move, and is the diffraction point.

5. INTERPRETATION

The interpretation of seismic data in geologic terms is the objective
and end product of seismic work. Seismic data are usually interpreted by
geophysicists or geologists. Since drilling wells is very costly, it is
preferable to extract from the seismic data as much information as possible
about the geologic history of the area and about the nature of the rocks, in an
effort to form an opinion about the probability of encountering petroleum in
the structures we map.

There are some basic steps that should be followed while interpreting
a seismic section. Interpreting seismic data is straightforward provided that a
simple procedure is adopted. The technique is to divide the seismic
section into areas of common dip families and then mark the boundaries
where these families end. The nature of these boundaries can then be
interpreted geologically and a geological history for the seismic section
produced. When data are first loaded onto a seismic workstation it is very
important not to begin detailed interpretation immediately, but to stand back
and make sure that a few preliminaries have been observed.

Some basic steps of seismic interpretation are:

(1) Look at the horizontal and vertical scales and make sure you have a
feeling for the dimensions of the data that you are looking at.
(2) Familiarize yourself with the orientation of the data and how it relates

to the base map. In particular ensure that, for any geological structure,
you know which is the dip and which is the strike direction.

(3) Look at the data on a scale that enables you to see an entire inline or

cross line and look at the near surface to see if there are any features of
note in the upper section. In particular, things to look out for are:
a. near surface channels filled with fast or slow velocity material
which have a time effect on all horizons below.
b. near surface amplitude anomalies. These could be shallow
hydrocarbon deposits and constitute drilling hazards.
c. if land data, have a look at the statics that were applied and see
if there is any time structure related to these static corrections.
This structure may be spurious if the statics were applied with
the wrong velocity.
d. look for velocity anomalies creating time structure. For
example pull up under salt or fault shadow effects. These time
artefacts are much easier to see on sections displayed at a large
scale, i.e. 1:50,000 or 1:100,000.
e. and finally have a look at the migration that was applied to the
data and see if you think that the data are correctly positioned.
The interpretation procedure:
(1) The section is divided into dip families.
(2) The boundaries are drawn around each dip family.
(3) If the horizontal boundaries are not well defined, the dips of the
overlying family are extended downwards as far as possible, and the dips
of the underlying family are extended up as far as possible. A best
estimate of the position of the boundary is made and marked.
(4) The nature of the boundaries is decided upon:
(a) vertical or inclined
 faults
 rock or facies boundaries, e.g. salt edge
(b) horizontal or inclined
 unconformities
 structural growth stages
 rock or facies boundaries, e.g. reefs
 fans, deltas, channels, etc.
(5) Seismic features are separated from geology, for example multiples,
velocity anomalies and sideswipe.
(6) The structural and depositional history is developed.
(7) A geological model is decided on, tested and revised.
(8) The geological model is built and restored in depth at a 1:1 scale.
(9) Mapping horizons are selected along with the dip families controlling
them.

Figure 5.1.1: Characteristics of different types of bed inclination found in
the seismic trace.

A synthetic example of the relationship of dip families and how they can be
interpreted geologically is shown in the diagram below. The geological
history derived from such a seismic interpretation would be along the
following lines, in time order.

Figure 5.1.2 Pattern of Dip and Boundaries.


(i) Horizons 6, 7 and 8 were laid down flat.
(ii) Uplift occurred, creating the anticline that now contains
these horizons.
(iii) Erosion created the unconformity that separates horizon 5
from those below.
(iv) Horizons 1 to 5 were laid down flat over the unconformity.
(v) Compression created the thrust which moved horizons 1 to 4
over the top of horizon 5.
An example of an interpreted seismic section is shown in Figure 5.1.3.

Figure 5.1.3 An interpreted section

6. PETROLEUM SYSTEM

6.1 INTRODUCTION
The geologic components and processes necessary to generate and store
hydrocarbons, including a mature source rock, migration pathway, reservoir
rock, trap and seal are collectively called the petroleum system. Appropriate
relative timing of formation of these elements and the processes of
generation, migration and accumulation are necessary for hydrocarbons to
accumulate and be preserved. Exploration plays and prospects are typically
developed in basins or regions in which a complete petroleum system has
some likelihood of existing.
6.2 PETROLEUM
Petroleum is a complex mixture of naturally occurring hydrocarbon
compounds found in rock. It can exist as a solid, liquid or gas, depending on
pressure, temperature and composition, with or without impurities such as
sulphur, oxygen and nitrogen. There is considerable variation in its
physicochemical properties, such as colour, gravity, odour, sulphur content
and viscosity, between petroleums from different areas.

Fig 6.1.1 Petroleum System

In addition to these basic elements, a petroleum system by
definition includes all the geologic processes required to create them.
Crucial factors of proven (i.e., economic) petroleum systems
include:
• Organic richness/type and volume of generative source rock
• Adequate burial history to ensure proper time-temperature conditions
for source rock maturation
• Timing of maturation and expulsion in relation to timing of trap
formation
• Presence of migration pathway linking source and reservoir rocks
• Preservation of trapping conditions from time of entrapment to
present day
• Relative efficiency of sealing layers
Petroleum systems may be identified according to three levels of
certainty: known, hypothetical, and speculative (Magoon, 1988). In a
known system, a good geochemical match exists between the source rock
and accumulations; in the hypothetical case, a geochemical match is lacking
but geochemical evidence is sufficient to identify the source rock. In the
case of a speculative petroleum system, the presence of economic
accumulations is lacking, but the existence of source rocks and oil/gas
accumulations is postulated on the basis of geologic or geophysical
evidence.

6.3 ELEMENTS OF PETROLEUM SYSTEM


The essential elements of a petroleum system include the following:
 Source rock
 Reservoir rock
 Cap rock
 Trap
 Migration

Source Rock:
1. Production, accumulation and preservation of organic matter are

prerequisites for the existence of petroleum source rocks.


2. Photosynthesis is the basis for the mass production of organic matter.
About 2 billion years ago, in the Precambrian, photosynthesis emerged
as a worldwide phenomenon.
3. Favourable conditions for the deposition of sediments rich in
organic matter are found on continental shelves in areas of
restricted circulation. Continental slopes are also favourable for the
accumulation of organic matter.
4. There are three major phases in the evolution of organic matter from
the time of deposition to the beginning of metamorphism.

a) Diagenesis:

This phase occurs in the shallow subsurface at near-normal
temperatures and pressures. It includes both biogenic decay, aided
by bacteria, and abiogenic reactions. Methane, carbon dioxide and
water are given off by the organic matter, leaving a complex
organic residue termed kerogen.

b) Catagenesis:

This phase occurs in the deeper subsurface. Thermal degradation
of the kerogen is responsible for the generation of most
hydrocarbons, i.e., oil and gas.

c) Metagenesis:
This third phase occurs at high temperatures and pressures verging
on metamorphism. The last hydrocarbons, generally only methane,
are expelled.
5. The types of kerogen present in a rock largely control the type of
hydrocarbons generated in that rock. Different types of kerogen contain
different amounts of hydrogen relative to carbon and oxygen. The
hydrogen content of kerogen is the controlling factor for oil vs. gas yields
from the primary hydrocarbon-generating reactions. On the basis of
chemical composition and the nature of the organic matter, kerogen is
classified into four basic types:

Kerogen Type | Predominant Hydrocarbon Potential | Amount of Hydrogen | Typical Depositional Environment
I | Oil prone | Abundant | Lacustrine
II | Oil and gas prone | Moderate | Marine
III | Gas prone | Small | Terrestrial
IV | Neither (primarily composed of vitrinite or inert material) | None | Terrestrial(?)

Table 6.1.2 Types of Kerogen

a) Type-I kerogen, or sapropelic
This is essentially of algal origin. It has a high hydrogen:carbon ratio
(H:C is about 1.2-1.7).
b) Type-II kerogen, or liptinic
The organic matter of this type of kerogen consists of algal
detritus, but also contains material derived from zooplankton and
phytoplankton. It has an H:C ratio greater than 1.
c) Type-III kerogen, or humic
This kerogen has a much lower H:C ratio (<0.84). Humic kerogen is
produced from the lignin of the higher woody plants which grow on
land. Type-III source material is a good source for gas.

A source rock is a rock that is capable of generating or that has generated


movable quantities of hydrocarbons. Typical source rocks, usually shales or
limestone, contain about 1% organic matter and at least 0.5% total organic
carbon (TOC), although a rich source rock might have as much as 10%
organic matter. Rocks of marine origin tend to be oil-prone, whereas
terrestrial source rocks (such as coal) tend to be gas-prone.
Source rocks can be grouped into four basic categories, which are described
in Table 6.1.3. To be a source rock, a rock must have three features:
1. A sufficient quantity of organic matter.
2. A quality capable of yielding moveable hydrocarbons.
3. Thermal maturity.

Table 6.1.3: Types of Source Rocks

Table 6.1.4: The most common methods used to determine the potential of a
source rock.

6.4 Generation
The most important factor in the generation of crude oil from the organic
matter in sedimentary rocks is temperature. A minimum temperature
of 120°F (50°C) is necessary for the generation of oil under average
sedimentary basin conditions. Generation ends at 350°F (175°C). Time is
also an important factor: the older the sediments, the lower the temperature
of generation. Younger sediments need higher temperatures than average to
generate oil. Heavy oils are generated at lower temperatures, whereas the
light oils are generated at higher temperatures. It takes millions of years to
generate oil from organic matter. The youngest known source rock that has
generated oil is Pliocene. At temperatures higher than 350°F, crude oil is
irreversibly transformed into graphite and natural gas. Because oil
generation has a ceiling (120°F) and a floor (350°F), the depth range in the
earth where oil is generated is called the Oil Window. The type of organic
matter in the source rock controls the type of petroleum generated.

Reservoir
A subsurface body of rock having sufficient porosity and permeability to
store and transmit fluids is called a reservoir rock. Sedimentary rocks are
the most common reservoir rocks because they have more porosity than
most igneous and metamorphic rocks and form under temperature conditions
at which hydrocarbons can be preserved. The most significant property of a
reservoir rock is its effective permeability. Since sandstones have the best
permeability with respect to other rocks, they act as good reservoir rocks.

Cap rock
A cap rock is an impermeable rock that prevents further migration of
hydrocarbons by buoyancy and seals petroleum within the reservoir. Cap
rocks are commonly shales or chemically precipitated evaporite deposits
such as salt or gypsum, or biochemical alteration products of petroleum such
as tar.

Traps
Trap is a configuration of rocks suitable for containing hydrocarbons and
sealed by a relatively impermeable formation through which hydrocarbons
cannot migrate. Traps are described as structural traps (in deformed strata
such as folds and faults) or stratigraphic traps (in areas where rock types
change, such as unconformities, pinch-outs and reefs) or their combinations.
A trap is an essential component of a petroleum system. Petroleum migrates
upwards and laterally from source to reservoir by buoyancy.
Being lighter than water, petroleum will displace groundwater and flow
upwards, as well as laterally and will seep to the surface via faults and
porous overburden unless confined under special circumstances to become
trapped and to form economic petroleum deposits. Migration of petroleum is
aided by its low surface tension, so that molecular attraction creates a film of
water around grains, whereas the petroleum occupies the central pore spaces
and is separated from the water.

Structural Traps
Structural traps are created by the juxtaposition of porous reservoir and
impermeable cap rock due to folding or faulting, so some tectonic or
deformational mechanism (either brittle or ductile) is always involved
(Fig 6.1.5). Approximately 80-90% of the world's proven oil reserves are
located in anticlinal traps. Anticlinal traps are commonly tens of kilometres
long or even greater, and may be thousands of metres in amplitude (e.g.
Bombay High), or they may be combinations of several small anticlines.
Traps may be stacked vertically on top of each other where alternating
reservoir and cap rocks have been folded in the same anticline. Fault traps
are numerous, but only small. Faults can also be detrimental by breaching
the seal of the cap rock and allowing the flow of petroleum through the fault
to the surface, where it may form an oil seep.

Fig 6.1.5 Schematic diagrams of structural traps

Stratigraphic traps
Stratigraphic traps are formed by the juxtaposition of porous reservoir and
impermeable cap rock due to depositional variations in the grain size of
different kinds of sediments. This may be due to the thinning of lenses of
sand and gravel (wedge-end traps), the morphology of carbonate reefs in
sub-circular mounds (reef traps) or the juxtaposition of rock types at
unconformity surfaces (unconformity traps). Although unconformities are
numerous, unconformity traps account for only 4% of world reserves,
possibly because petroleum may already have escaped at the ancient surface
prior to the formation of the unconformable beds. In the Indian offshore
region, especially on the East Coast, most of the deep-water traps are
stratigraphic traps such as pinch-outs and unconformities. The Rudrasagar
Oil Field of Assam is an example of a stratigraphic trap, where petroleum
exists in shoestring fluvial sandstone.

Fig 6.1.6 Schematic diagrams of stratigraphic traps

Combination situations
There are several combinations of situations. A rising salt-dome has
stratigraphic traps draped against its edge, with a normal-fault trap caused by
tensional stress over the top. Some oil also accumulates in the porous cap of
the salt-dome. In the Assam Oil-field there exists the Naga Thrust, upon
which the Tipam Sandstone terminates, forming a thrust-propagation fold.
This arrangement is a typical example of a combination trap in India.
Unfortunately, no salt-dome trap is yet known in India.


Fig 6.1.7 Schematic diagrams of Combination Traps

Migration
Migration implies movement of hydrocarbon through rocks. There are two
types of migration in a petroleum system as described below.

Fig 6.1.8 Migration of petroleum

Types of petroleum migration

1. Primary Migration
Primary migration is the process by which hydrocarbons are expelled
from the source rock into an adjacent permeable carrier bed. It is a
paradoxical situation, because most source rocks are black shales, which
have very low permeabilities.

2. Secondary Migration
Secondary migration is the movement of hydrocarbons along a "carrier
bed" from the source area to the trap. Migration mostly takes place as one
or more separate hydrocarbon phases (gas or liquid, depending on
pressure and temperature conditions). The main driving force for migration
is buoyancy. This force acts vertically and is proportional to the density
difference between water and the hydrocarbon, so it is stronger for gas
than for the heavier oil.

Examples of Different Kinds of Non-sandstone Reservoir Rocks in
India
 Limestone with secondary porosity: Bombay High.
 Fractured shale: Indrora and Wadu Oil Field of Cambay Basin.
 Igneous rock: Fractured syenite of Borholla Oil Field of Assam, India.

Reserve Estimation

Recoverable oil = (A × H × Φ × S0 × RF) / B0 (with a constant to reconcile the units),
where A is the area in km², H the thickness in m, Φ the porosity, S0 the oil
saturation, RF the recovery factor (the fraction of hydrocarbons which can
be or has been produced from a well, reservoir or field) and B0 the reservoir
formation volume factor.
Formation volume factors may be of two types, defined as follows:
Gas FVF
It is gas volume at reservoir conditions divided by gas volume at surface
conditions. This factor is used to convert surface measured volumes to
reservoir conditions, just as oil formation volume factors are used to convert
surface measured oil volumes to reservoir volumes.
Oil FVF
It is oil and dissolved gas volume at reservoir conditions divided by oil
volume at standard conditions. Since most measurements of oil and gas
production are made at the surface, and since the fluid flow takes place in

the formation, volume factors are needed to convert measured surface
volumes to reservoir conditions. Oil formation volume factors are almost
always greater than 1.0 because the oil in the formation usually contains
dissolved gas that comes out of solution in the wellbore with dropping
pressure.
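A sketch of the volumetric estimate in code, assuming metric inputs (the factor 10⁶ converts km² to m²; a further factor of about 6.29 would convert stock-tank m³ to barrels). The function name and example values are illustrative.

def recoverable_oil_m3(A_km2, H_m, phi, S_o, RF, B_o):
    """Volumetric reserve estimate in stock-tank cubic metres.

    A_km2 * 1e6 converts the area to m^2; dividing by the oil FVF B_o
    converts reservoir volume to surface (stock-tank) volume."""
    return A_km2 * 1e6 * H_m * phi * S_o * RF / B_o

# e.g. 10 km^2 area, 20 m pay, 22% porosity, 70% oil saturation,
# 30% recovery factor, B_o = 1.2 reservoir volumes per surface volume
n = recoverable_oil_m3(10.0, 20.0, 0.22, 0.70, 0.30, 1.2)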
Accumulation
Once oil and gas migrate into the trap, they separate according to density.
The gas, being lightest, goes to the top of the trap to form the free gas cap.
The oil goes to the middle, and the water, which is always present, lies at the
bottom. The oil portion of the trap is saturated with a certain percentage of
oil and water. The gas-oil and oil-water contacts are controlled by buoyancy
and are usually level. In some traps, only gas and water, or only oil and
water, are found.

7. WELLSITE OPERATIONS

MEASUREMENT WHILE DRILLING (MWD)

Measurement While Drilling (MWD) technology has become an important


tool for reservoir evaluation in the past 10 years. It provides downhole
evaluation of formation gamma ray, resistivity, and porosity at the time of
drilling. This tool also measures and records some mechanical parameters
such as:

• Well deviation and azimuth,


• Rate of Penetration (ROP),
• Downhole Weight on Bit (WOB) and downhole Torque.
• Annular pressure
• Annular temperature

• ECD (Equivalent Circulating Density)

Fig 7.1.1 Location of MWD hardware (not drawn to
scale). (From Anadrill, 1988.)

LOGGING WHILE DRILLING

INTRODUCTION

Logging While Drilling tools measure in-situ formation properties with
instruments that are located in the drill collars immediately above the drill
bit, hence forming part of the bottom-hole assembly (BHA). LWD data are
transmitted to the surface by mud-pulse telemetry, in which pressure
variations generated by the tool are sensed at the surface via a computer; the
data are also stored in downhole memory for retrieval at the surface.

OBJECTIVES OF LWD TECHNOLOGY

1) To get real-time drilling data, thus enabling quick decision-making on the rig.

2) To get quick and correct formation evaluation.

MEASURED PARAMETERS

A suite of LWD tools attached to the bottom hole assembly record different
parameters of the drilled rocks:

 Natural Gamma Ray(GR)


o Average Gamma Ray
o Gamma Ray Spectrometry (Potassium, Thorium, Uranium)
 Electric
o Spontaneous Potential (old)
o Resistivity (Phase Shift & Attenuation)

 Laterolog
 Induction logs
 Density & Porosity
o Bulk density logs
o Neutron Porosity
o Ultra Sonic Caliper

 Nuclear Magnetic Resonance (NMR)


o Porosity
o Permeability
o Free and Bound Fluids

 Acoustic (Sonic) response


o Compressional Slowness (Δtc)
o Shear Slowness (Δts)
o Estimated Porosity

ADVANTAGES OF LWD TECHNOLOGY

LWD, while sometimes risky and expensive, has the advantage of


measuring properties of a formation before drilling fluids invade deeply.
Further, many well bores prove to be difficult or even impossible to measure
with conventional wireline tools, especially highly deviated wells. In these
situations, the LWD measurement ensures that some measurement of the
subsurface is captured in the event that wireline operations are not possible.

Fig 7.1.2 Components of a BHA, showing position of LWD tools

INTRODUCTION TO LOGGING
RESISTIVITY LOGS

INTRODUCTION
Resistivity is the physical property of a material that describes how strongly it
resists or opposes the movement of electric charge through the material. Well logs
that depend on electrical resistivity, which in rocks is a function of porosity, pore-
fluid content, mineralogy and temperature, are resistivity logs. Two types of log
are generally used: the laterolog and the induction log.

LATEROLOG
Laterolog tools send an electric current from an electrode on the sonde directly
into the formation. The return electrodes are located either on surface or on the
sonde itself. Complex arrays of electrodes on the sonde (guard electrodes) focus
the current into the formation and prevent current lines from fanning out or flowing
directly to the return electrode through the borehole fluid. Most tools vary the
voltage at the main electrode in order to maintain a constant current intensity. This
voltage is therefore proportional to the resistivity of the formation. Because current
must flow from the sonde to the formation, these tools only work with conductive
borehole fluid. Actually, since the resistivity of the mud is measured in series with
the resistivity of the formation, laterolog tools give best results when mud
resistivity is low with respect to formation resistivity, i.e., in salty mud.

INDUCTION LOG
Induction logs use an electric coil in the sonde to generate an alternating current
loop in the formation by induction. This is the same physical principle as is used in
electric transformers. The alternating current loop, in turn, induces a current in a
receiving coil located elsewhere on the sonde. The amount of current in the
receiving coil is proportional to the intensity of current loop, hence to the
conductivity (reciprocal of resistivity) of the formation. Multiple transmitting and
receiving coils are used to focus formation current loops both radially (depth of
investigation) and axially (vertical resolution). Since the 1990s all major logging
companies have used so-called array induction tools. These comprise a single
transmitting coil and a large number of receiving coils. Radial and axial focusing is
performed by software rather than by the physical layout of coils. Since the
formation current flows in circular loops around the logging tool, mud resistivity is
measured in parallel with formation resistivity. Induction tools therefore give best
results when mud resistivity is high with respect to formation resistivity, i.e., fresh
mud or non-conductive fluid. In oil-base mud, which is non conductive, induction
logging is the only option available.

OBJECTIVE
The resistivity log is generally shown on a logarithmic scale. The resistivity of
hydrocarbons is higher than the resistivity of formation water. The resistivity of
fresh water is also high, and it decreases with increasing salinity. The formation
resistivity depends on the formation fluid and the porosity: if the rock has low
porosity, or the rock is compact, the formation resistivity is high. Water saturation
can be calculated using the Archie equation:

Sw = [(a / φ^m) × (Rw / Rt)]^(1/n)

where Sw is the water saturation, Rw the formation water resistivity, Rt the
observed bulk resistivity, φ the porosity, ‘a’ a constant (often taken to be 1), m the
cementation factor (which varies around 2) and n the saturation exponent
(commonly also taken to be about 2).
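A one-function sketch of the equation; the example numbers are illustrative, not from this text.

def archie_sw(phi, Rw, Rt, a=1.0, m=2.0, n=2.0):
    """Archie water saturation: Sw = [(a / phi**m) * (Rw / Rt)]**(1/n)."""
    return ((a / phi**m) * (Rw / Rt)) ** (1.0 / n)

# e.g. 25% porosity, Rw = 0.05 ohm-m, Rt = 10 ohm-m
sw = archie_sw(0.25, 0.05, 10.0)    # ~0.28, i.e. hydrocarbon saturation ~0.72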

GAMMA LOG

INTRODUCTION

Gamma-ray measurements detect variations in the natural radioactivity


originating from changes in concentrations of the trace elements uranium (U) and
thorium (Th) as well as changes in concentration of the major rock forming
element potassium (K). Since the concentrations of these naturally occurring
radioelements vary between different rock types, natural gamma-ray logging
provides an important tool for lithologic mapping and stratigraphic correlation.
Gamma-ray logs are important for detecting alteration zones, and for providing
information on rock types. For example, in sedimentary rocks, sandstones can be
easily distinguished from shales due to the low potassium content of the sandstones
compared to the shales.
In sedimentary rocks, potassium is the principal source of natural gamma radiation,
primarily originating from clay minerals such as illite and montmorillonite. In
igneous and metamorphic geologic environments, the three sources of natural
radiation may contribute equally to the total gamma radiation detected by the
gamma probe. Often in base metal and gold exploration areas, the principal source
of the natural gamma radiation is potassium, because alteration, characterized by
the development of sericite, is prevalent in some of the lithologic units and results
in an increase in the element potassium in these units. The presence of feldspar
porphyry sills, which contain increased concentrations of K-feldspar minerals,
would also show higher than normal radioactivity on the gamma-ray logs. During

96
metamorphism and hydrothermal alteration processes, uranium and thorium may
be preferentially concentrated in certain lithologic units.
Older gamma-ray logs are recorded in "counts" whose numbers vary according to
the tool design. Almost all modern gamma-ray logs are recorded in API (American
Petroleum Institute) units, which make a common standard for log comparison.
The scale was chosen so that a value of zero would mean no radioactivity and a
value of 100 would match a typical Mid-continent shale. In practice, shales can be
somewhat variable in their radioactivity according to their silt content, types of
clay mineral, and the occurrence of small amounts of uranium.

APPLICATION
 Open-hole as well as cased-hole correlation (as γ-rays have good
penetrating power).
 Computation of shale volume (a sketch follows this list).
 Different types of clay can be identified and discriminated.
 The environment of deposition can be inferred from the Th:U ratio (Th is
present in the terrestrial realm and U is mainly present in the marine
realm).
 To locate radioactive ores, uranium in particular.
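The shale-volume computation usually starts from the gamma-ray index IGR = (GR − GRclean)/(GRshale − GRclean), used either directly as Vsh (a linear response) or passed through an empirical correction. The cutoffs and the choice of correction are interpretation decisions; the sketch below assumes the Larionov (1969) form for Tertiary rocks, which is one common choice.

import numpy as np

def vsh_from_gr(gr, gr_clean, gr_shale, larionov_tertiary=True):
    """Shale volume from the gamma-ray log: linear index, optionally
    with the Larionov (1969) correction for Tertiary rocks."""
    igr = np.clip((gr - gr_clean) / (gr_shale - gr_clean), 0.0, 1.0)
    if larionov_tertiary:
        return 0.083 * (2.0 ** (3.7 * igr) - 1.0)
    return igr                                  # linear response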

SP LOG (SPONTANEOUS POTENTIAL LOG)

INTRODUCTION
The spontaneous potential tool measures natural electrical potentials that occur in
boreholes and generally distinguishes porous, permeable sandstones from
intervening shales. The "natural battery" arises when drilling mud with a salinity
different from that of the formation waters brings into contact two solutions that
have different ion concentrations. Ions diffuse from the more concentrated
solution (typically the formation water) to the more dilute one. The ion flow
constitutes an electrical current, which generates a small natural potential,
measured by the SP tool in millivolts.
When the salinities of mud filtrate and formation water are the same, the potential
is zero and the SP log should be a featureless line. With a mud filtrate fresher
than the formation water, a sandstone will show a deflection in the negative
potential direction (to the left) from a "shale base line".
deflection is controlled by the salinity contrast between the mud filtrate and the
formation water. Clean (shale-free) sandstone units with the same water salinity
should show a common value, the "sand line". In practice, there will be drift with
depth because of the changing salinity of formation waters. The displacement on
the log between the shale and sand lines is the "static self-potential" SSP.
The SP log is used to: (1) detect permeable beds, (2) detect boundaries of
permeable beds, (3) determine formation water resistivity (Rw), and (4) determine
the volume of shale in permeable beds. An auxiliary use of the SP curve is in the
detection of hydrocarbons by the suppression of the SP response. The concept of
static spontaneous potential (SSP) is important because SSP represents the
maximum SP that a thick, shale free, porous and permeable formation can have for
a given ratio between Rmf and Rw. The SP value measured in the borehole is
influenced by bed thickness, bed resistivity, invasion, borehole diameter, shale
content and, most important, the ratio Rmf / Rw. The measurement of SP is
controlled by various factors:
1. Bed thickness.
2. Bed resistivity.
3. Invasion profile.
4. Shale content in the permeable bed.

APPLICATION
 Bed boundaries delineation.
 Shale volume determination: Vsh = 1 − (SP/SSP).
 Formation water resistivity (Rw) determination.

POROSITY LOGS

The three porosity logs are

 Density Log
 Neutron Log
 Sonic Log

DENSITY LOG

Density logging tools contain a Cesium-137 gamma-ray source which irradiates
the formation with 662-keV gamma rays. These gamma rays interact with
electrons in the formation through Compton scattering and lose energy. Once the
energy of a gamma ray has fallen below 100 keV, photoelectric absorption
dominates and the gamma rays are eventually absorbed by the formation. The
amount of energy lost by Compton scattering is related to the number of electrons
per unit volume of the formation. Since for most elements of interest (below
Z = 20) the ratio of atomic weight, A, to atomic number, Z, is close to 2, the
gamma-ray energy loss is related to the amount of matter per unit volume, i.e.,
the formation density.

Fig 7.1.3 Variations in spectrum for formation with constant density but different z

APPLICATIONS OF DENSITY LOG

(1) Measurement of the density of the formation
(2) Calculation of porosity
(3) When combined with sonic travel times, calibration of seismic data
(4) Detection of gas in a reservoir when used in combination with the neutron log
(5) The PEF (photoelectric factor) is a good indicator of lithology

Fig 7.1.4 Schematic drawing of the dual spacing formation density-log

The distance between the face of the skid and the extremity of the eccentering arm
is recorded as a caliper log, which helps to assess the quality of contact between
the skid and the formation.
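The porosity calculation in application (2) follows from treating bulk density as a volume-weighted average of matrix and fluid densities, ρb = φρf + (1 − φ)ρma, so that φ = (ρma − ρb)/(ρma − ρf). The default values below (a 2.65 g/cc quartz matrix and a 1.0 g/cc fluid) are common assumptions, not values from this text.

def density_porosity(rho_b, rho_ma=2.65, rho_f=1.0):
    """Porosity from bulk density: phi = (rho_ma - rho_b)/(rho_ma - rho_f).
    Defaults assume a quartz matrix and fresh-water mud filtrate (g/cc)."""
    return (rho_ma - rho_b) / (rho_ma - rho_f)

phi = density_porosity(2.40)    # ~0.15 for a 2.40 g/cc sandstone reading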

NEUTRON LOG

Neutron porosity logging tools contain an Americium-Beryllium (Am241 – Be9)


neutron source, which irradiates the formation with neutrons having a mean energy
of 4.2 MeV, or a Californium (Cf252) source. These neutrons lose energy through
elastic collisions with nuclei in the formation. Once their energy has decreased to
thermal level, they diffuse randomly away from the source and are ultimately
absorbed by a nucleus. Hydrogen atoms have essentially the same mass as the
neutron; therefore hydrogen is the main contributor to the slowing down of
neutrons. A detector at some distance from the source records the number of
neutrons reaching this point. Neutrons that have been slowed down to thermal level
have a high probability of being absorbed by the formation before reaching the
detector. The neutron counting rate is therefore inversely related to the amount of
hydrogen in the formation.

APPLICATIONS OF NEUTRON LOG

(1) Porosity Determination


(2) Locate gas when combined with Density or acoustic Log

SONIC LOG
The acoustic velocity log or sonic log is a porosity log that measures the interval
transit time (Δt) of a compressional sound wave traveling through the formation.
The transit time is the time required for the wave to travel one foot of formation.
Transit time is longer through liquid than through rock matrix, so a short transit
time indicates low porosity; that is, transit time increases with porosity. The sonic
log device consists of one or more sound transmitters and two or more receivers.
The interval transit time (Δt), in microseconds per foot, is the reciprocal of the
velocity of a compressional sound wave in feet per second. The interval transit
time depends upon both lithology and porosity. Sonic porosity can be used to
determine porosity in consolidated sandstones and carbonates with intergranular
porosity or intracrystalline porosity (sucrosic dolomites). In sonic logging only the
first arrivals are used. The Wyllie time-average porosity is
Porosity (Φ) = (Δt − Δtma) / (Δtf − Δtma)
where Δtma is the matrix transit time and Δtf the fluid transit time.
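In code, with typical matrix and fluid transit times (55.5 µs/ft for sandstone and 189 µs/ft for fresh-water mud filtrate; these defaults are common assumptions, not values from this text):

def sonic_porosity(dt, dt_ma=55.5, dt_f=189.0):
    """Wyllie time-average: phi = (dt - dt_ma)/(dt_f - dt_ma), dt in us/ft."""
    return (dt - dt_ma) / (dt_f - dt_ma)

phi = sonic_porosity(80.0)      # ~0.18 for an 80 us/ft sandstone reading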

Fig 7.1.5 Schematic diagram of principles of acoustic velocity logging tools.

8. CONCLUSION

During the training period, we got an opportunity to learn the seismic methods
of prospecting for hydrocarbons. In the first stage, we learned seismic acquisition
techniques, with special emphasis on 3D seismic design and the logistic problems
in the field. We visited the camp area and came to know about the functioning of
the instruments during field acquisition. Some software packages were also
introduced for survey design, such as MESSA. For data processing we used the
PROMAX software, through which we learned the different stages of data
processing. This session also improved our mathematical background for the
processing.
In the second stage, interpretation techniques were taught, with correlation of
other data such as well logs and VSP. Different steps of interpretation were
introduced and, using the geology of the area, some of the sections were
interpreted. Log correlations and synthetic seismograms were also used in this
analysis, which give a truer subsurface image of the earth.

9. References

 Yilmaz, Ö., 2001. Seismic Data Analysis, Second Edition, Society of
Exploration Geophysicists.
 Dobrin, M.B., 1949. Introduction to Geophysical Prospecting, McGraw-Hill.
 Telford, W.M., Geldart, L.P., Sheriff, R.E., 1990. Applied Geophysics, Second
Edition, Cambridge University Press.
 Serra, O., 1984. Fundamentals of Well Log Interpretation, Elsevier.
 Schlumberger. The Essentials of Log Interpretation Practice, Schlumberger.

 Website References:
http://www.glossary.oilfield.slb.com/
www.expogroup.com
www.petropep.de/w
www.strata.geol.sc.edu
www.wikipedia.com
