Seismic Processing
and Analysis
Training Manual
Volume 2
This publication has been provided pursuant to an agreement containing restrictions on its use. The publication is also protected by
Federal copyright law. No part of this publication may be copied or distributed, transmitted, transcribed, stored in a retrieval system,
or translated into any human or computer language, in any form or by any means, electronic, magnetic, manual, or otherwise, or
disclosed to third parties without the express written permission of:
Trademark Notice
3D Drill View, 3D Drill View KM, 3D Surveillance, 3DFS, 3DView, Active Field Surveillance, Active Reservoir Surveillance, Adaptive Mesh
Refining, ADC, Advanced Data Transfer, Analysis Model Layering, ARIES, ARIES DecisionSuite, Asset Data Mining, Asset Decision Solutions,
Asset Development Center, Asset Development Centre, Asset Journal, Asset Performance, AssetConnect, AssetConnect Enterprise, AssetConnect
Enterprise Express, AssetConnect Expert, AssetDirector, AssetJournal, AssetLink, AssetLink Advisor, AssetLink Director, AssetLink Observer,
AssetObserver, AssetObserver Advisor, AssetOptimizer, AssetPlanner, AssetPredictor, AssetSolver, AssetSolver Online, AssetView, AssetView
2D, AssetView 3D, BLITZPAK, CasingLife, CasingSeat, CDS Connect, Channel Trim, COMPASS, Contract Generation, Corporate Data Archiver,
Corporate Data Store, Crimson, Data Analyzer, DataManager, DataStar, DBPlot, Decision Management System, DecisionSpace, DecisionSpace 3D
Drill View, DecisionSpace 3D Drill View KM, DecisionSpace AssetLink, DecisionSpace AssetPlanner, DecisionSpace AssetSolver, DecisionSpace
Atomic Meshing, DecisionSpace Nexus, DecisionSpace Reservoir, DecisionSuite, Deeper Knowledge. Broader Understanding., Depth Team, Depth
Team Explorer, Depth Team Express, Depth Team Extreme, Depth Team Interpreter, DepthTeam, DepthTeam Explorer, DepthTeam Express,
DepthTeam Extreme, DepthTeam Interpreter, Design, Desktop Navigator, DESKTOP-PVT, DESKTOP-VIP, DEX, DIMS, Discovery, Discovery
3D, Discovery Asset, Discovery Framebuilder, Discovery PowerStation, DMS, Drillability Suite, Drilling Desktop, DrillModel, Drill-to-the-Earth-
Model, Drillworks, Drillworks ConnectML, DSS, Dynamic Reservoir Management, Dynamic Surveillance System, EarthCube, EDM, EDM
AutoSync, EDT, eLandmark, Engineer's Data Model, Engineer's Desktop, Engineer's Link, ESP, Event Similarity Prediction, ezFault, ezModel,
ezSurface, ezTracker, ezTracker2D, FastTrack, Field Scenario Planner, FieldPlan, For Production, FrameBuilder, FZAP!, GeoAtlas, GeoDataLoad,
GeoGraphix, GeoGraphix Exploration System, GeoLink, Geometric Kernel, GeoProbe, GeoProbe GF DataServer, GeoSmith, GES, GES97,
GESXplorer, GMAplus, GMI Imager, Grid3D, GRIDGENR, H. Clean, Handheld Field Operator, HHFO, High Science Simplified, Horizon
Generation, I2 Enterprise, iDIMS, Infrastructure, Iso Core, IsoMap, iWellFile, KnowledgeSource, Landmark (as a service), Landmark (as software),
Landmark Decision Center, Landmark Logo and Design, Landscape, Large Model, Lattix, LeaseMap, LogEdit, LogM, LogPrep, Magic Earth, Make
Great Decisions, MathPack, MDS Connect, MicroTopology, MIMIC, MIMIC+, Model Builder, NETool, Nexus (as a service), Nexus (as software),
Nexus View, Object MP, OpenBooks, OpenJournal, OpenSGM, OpenVision, OpenWells, OpenWire, OpenWire Client, OpenWire Server,
OpenWorks, OpenWorks Development Kit, OpenWorks Production, OpenWorks Well File, PAL, Parallel-VIP, Parametric Modeling, PetroBank,
PetroBank Explorer, PetroBank Master Data Store, PetroStor, PetroWorks, PetroWorks Asset, PetroWorks Pro, PetroWorks ULTRA, PlotView,
Point Gridding Plus, Pointing Dispatcher, PostStack, PostStack ESP, PostStack Family, Power Interpretation, PowerCalculator, PowerExplorer,
PowerExplorer Connect, PowerGrid, PowerHub, PowerModel, PowerView, PrecisionTarget, Presgraf, PressWorks, PRIZM, Production, Production
Asset Manager, PROFILE, Project Administrator, ProMAGIC, ProMAGIC Connect, ProMAGIC Server, ProMAX, ProMAX 2D, ProMax 3D,
ProMAX 3DPSDM, ProMAX 4D, ProMAX Family, ProMAX MVA, ProMAX VSP, pSTAx, Query Builder, Quick, Quick+, QUICKDIF,
Quickwell, Quickwell+, Quiklog, QUIKRAY, QUIKSHOT, QUIKVSP, RAVE, RAYMAP, RAYMAP+, Real Freedom, Real Time Asset
Management Center, Real Time Decision Center, Real Time Operations Center, Real Time Production Surveillance, Real Time Surveillance, Real-
time View, Reference Data Manager, Reservoir, Reservoir Framework Builder, RESev, ResMap, RTOC, SCAN, SeisCube, SeisMap, SeisModel,
SeisSpace, SeisVision, SeisWell, SeisWorks, SeisWorks 2D, SeisWorks 3D, SeisWorks PowerCalculator, SeisWorks PowerJournal, SeisWorks
PowerSection, SeisWorks PowerView, SeisXchange, Semblance Computation and Analysis, Sierra Family, SigmaView, SimConnect, SimConvert,
SimDataStudio, SimResults, SimResults+, SimResults+3D, SIVA+, SLAM, SmartFlow, smartSECTION, Spatializer, SpecDecomp, StrataAmp,
StrataMap, StrataModel, StrataSim, StratWorks, StratWorks 3D, StreamCalc, StressCheck, STRUCT, Structure Cube, Surf & Connect, SynTool,
System Start for Servers, SystemStart, SystemStart for Clients, SystemStart for Servers, SystemStart for Storage, Tanks & Tubes, TDQ, Team
Workspace, TERAS, T-Grid, The Engineer's DeskTop, Total Drilling Performance, TOW/cs, TOW/cs Revenue Interface, TracPlanner, TracPlanner
Xpress, Trend Form Gridding, Trimmed Grid, Turbo Synthetics, VESPA, VESPA+, VIP, VIP-COMP, VIP-CORE, VIPDataStudio, VIP-DUAL,
VIP-ENCORE, VIP-EXECUTIVE, VIP-Local Grid Refinement, VIP-THERM, WavX, Web Editor, Well Cost, Well H. Clean, Well Seismic Fusion,
Wellbase, Wellbore Planner, Wellbore Planner Connect, WELLCAT, WELLPLAN, WellSolver, WellXchange, WOW, Xsection, You're in Control.
Experience the difference, ZAP!, and Z-MAP Plus are trademarks, registered trademarks, or service marks of Halliburton.
All other trademarks, service marks and product or service names are the trademarks or names of their respective owners.
Note
The information contained in this document is subject to change without notice and should not be construed as a commitment by
Halliburton. Halliburton assumes no responsibility for any error that may appear in this manual. Some states or jurisdictions do not
allow disclaimer of expressed or implied warranties in certain transactions; therefore, this statement may not apply to you.
Contents
ProMAX User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Topics covered in this chapter: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
Sort data by source number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
Sort data by source and channel number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16
Sort data by CDP number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
Display near offset section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-19
QC Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-28
Produce QC plots from the database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-29
CDP Contribution and Null QC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-29
Plotting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
Getting Started
After choosing a line from the Line menu or adding a new line, the Flow
window will appear. Name your flows according to the processing
taking place, such as “brute stack”. For this course, we will also use a
number, for example "01: Display shots".
Look at the Menu Map figure on the previous page. This figure refers to
the menus we have just discussed, as well as other menus you will use
to access your datasets, database, and parameter tables.
Building a Workspace
In this exercise, you will build a workspace and look at some of the
functionality available within the user interface.
1. Type ./promax
[Figure: the main ProMAX window, with callouts for Available Areas, the Active Command, Global Options, Exit ProMAX, Mouse Button Help, Processing Queues, Job Notification, and Window Configuration Options and Control.]
Area Menu
The black horizontal band below the menu displays mouse button
help. Mouse button help describes the possible actions at the current
location of the cursor, and gives brief parameter information during
the flow building process.
Below the mouse button help line are options to Exit ProMAX,
configure the queues and user interface, as well as check on the
status of jobs.
The options running across the top of this menu (Select, Add,
Delete, Rename, and Permission) are called global options. To use
one of these, first select the command, then select the Area name
that you want the command to apply to. The Copy command works
differently: it provides popup menus to choose an Area to copy
from.
At this point you are building your work space. Adding an Area
creates a UNIX directory.
Use your name for the area name. For example, “Mary’s area”.
You can control whether moving the mouse registers the selection,
or if you need to press return in the Config popup. Set the Popups
remain after mouse leaves option to yes or no.
The Line Menu appears with the same global options to choose from
as the Area Menu.
5. Add a Line using the same steps as you did for adding an Area.
Name the line “Intro Line”
[Figure: Line Menu window, with callouts for the Area Name, Global Options, and Available Seismic Lines.]
The Flow window appears with the following new global options:
[Figure: Flow Menu window, with a callout for Available Flows.]
Now it is time to build a flow, and process data. In order to perform this
you will need to tell ProMAX which processes you want to invoke, as
well as provide specific details for each of these steps. Finally, there are
different options available for executing a flow.
Build a Flow
Upon completion of the previous exercise, you are in the ProMAX flow
building menu (see below). From here, you will construct flows by
choosing processes and selecting the necessary parameter information.
Once the flow is ready, you will execute it and view the results.
[Figure: flow building window, with callouts for the Editable Flow, Parameter Specification, and Available Processes.]
The screen is split into two sides: a list of processes on the right and
a blank tablet below the global options on the left. To build a flow,
you will select from the processes on the right and add them to the
blank tablet on the left.
2. Move your cursor into different areas of the display, such as into the
processes list, the blank tablet and the global options. Notice that
the mouse button help is sensitive to the current cursor location.
MB1 and MB2 will execute the flow interactively. The mouse button
help explaining the difference between MB1 and 2 does not apply to
the Trace Display process. Either button will allow the display to
immediately take over the monitor for display.
MB3 indicates Execute via Queue. This option enables the use of the
two types of batch queues. When using MB3, a new menu pops up
allowing the use of either the general batch queues or the small job
batch queues. In order for this option to work, your system
administrator must have enabled the queues when ProMAX 2D was
installed.
• Exit: Leaves the edit flow menu, and returns you to the flow
listing menu.
6. Move the “SS Phoenix Output” process to the secondary list, and
make sure the procedure worked correctly by viewing the
secondary list again.
7. Move your cursor back into the processes list (but not on a category
heading), type “gain”, and press Return. The following appears:
This acts as a text search, and displays all processes that contain the
word "gain." Add the process Automatic Gain Control by selecting
the process name with MB1.
10. Select Yes for the “Read data from other lines/surveys?” parameter.
For the introductory lessons we will read data from the tutorial line.
Follow the instructor’s directions for the exact path to the dataset.
After you select the dataset you will be returned to the flow editing
menu.
You can now modify parameters for AGC. Select Apply for the
Application mode.
To change this value simply place your cursor on the old value, and
type in the number 1500.
For now, do not change any of the values. We will discuss many of
these options in the next chapter. At that point, you will have the
opportunity to test and explore the various options.
15. Run the flow by clicking on the global command Execute with
MB1 or MB2.
[Figure: Trace Display, with a callout for the Next Screen icon.]
This interrupts the job and brings you back to the flow editing menu.
Sorting
Your first look at the data was the first shot with all channels. After
clicking the Next Ensemble icon, you saw the next shot. What if you
wanted to look at every other shot? What if you only wanted to look at
channels 1 through 60? What if you wanted to sort the data to CDP and
then display. All these options and more are available in Disk Data
Input.
2. Open the Disk Data Input Menu and click where the menu reads
Get All for Trace Read Option.
This toggles the read option to Sort, and the menu will automatically
add several new options:
3. Select SOURCE for the primary sort order; this will read in shot-ordered
ensembles.
4. Leave the secondary sort set to NONE; this means that no sorting
of traces within ensembles will be performed.
6. In the Widget Window delete the default values, and type 1, 3/.
This specifies that only SOURCE numbers 1 and 3 will be read into
the flow.
8. Select Execute.
When the last source is displayed, the Next Screen icon becomes
inactive. To exit this display, select File Exit/Stop Flow.
2. Select CHAN for the secondary trace header entry. This will allow
you to sort each SOURCE ensemble by channel number, and also
limit the number of channels to be processed.
• : separates the primary sort order from the secondary sort order.
Note
If you only select a primary sort key, then only one range of values is allowed in the
sort order for dataset. If you select both a primary and a secondary sort key, then
two ranges of values, separated by a colon, are necessary in the sort order. This is a
common area for new ProMAX users to make mistakes.
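The sort-list syntax described in this note can be illustrated with a short Python sketch. This is a hypothetical helper written for this manual, not the actual ProMAX parser: it reads a sort list such as “1,3/” or “*:1-60/”, where a colon separates the primary ranges from the secondary ranges and the list ends with “/”.

```python
def parse_sort_order(spec):
    """Parse a ProMAX-style sort list such as '1,3/' or '*:1-60/'.

    Returns one list of (start, end, increment) ranges per sort key.
    '*' means all values; 'a-b(n)' means a through b stepping by n.
    (Illustrative only -- not the actual ProMAX parser.)
    """
    spec = spec.rstrip('/')          # the trailing '/' ends the sort list
    keys = spec.split(':')           # ':' separates primary from secondary
    parsed = []
    for key in keys:
        ranges = []
        for item in key.split(','):
            item = item.strip()
            if item == '*':
                ranges.append(('all', 'all', 1))
            elif '-' in item:
                body, _, inc = item.partition('(')
                inc = int(inc.rstrip(')')) if inc else 1
                lo, hi = (int(v) for v in body.split('-'))
                ranges.append((lo, hi, inc))
            else:
                ranges.append((int(item), int(item), 1))
        parsed.append(ranges)
    return parsed
```

For example, `parse_sort_order('*:1-60/')` yields one “all values” range for the primary key and the range 1 through 60 for the secondary key, matching the rule that a primary plus secondary sort needs two colon-separated ranges.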
You will see the first shot and all subsequent shots display with only
the first 60 channels.
6. Move your cursor into the trace display area. Notice that the mouse
button help gives a listing of the current CHAN and SOURCE.
Trace Display will always give you a listing of the values for the
current Secondary and Primary sort keys.
Recall that the primary trace header entry specifies the type of ensemble
to build, and also the range of that ensemble to read. The secondary sort
key allows you to sort and select the traces within each ensemble.
2. Select CDP for the Primary trace header entry. This tells the
program to build CDP gathers from the input dataset.
3. Select OFFSET for the secondary trace header entry. This tells the
program to order the traces within each CDP gather by the OFFSET
header.
• 500-600(25): This selects every 25th CDP between 500 and 600.
6. Notice that we have now displayed a CDP gather, even though the
input dataset is stored on disk as shot gathers.
7. Move your cursor into the trace display area, and confirm that the
displayed gather has Primary and Secondary sorts of CDP and
OFFSET.
4. Set the sort order for dataset to *:*/. This will select all channels for
all shots starting with channel number 1.
[Flowchart: choosing an Extract Database Files path. Answering the questions “Does Shot and Receiver X, Y, and station information exist in the headers and do you want to use it?”, “Do you want to minimize the number of times that you have to read the data?”, and “Do I have valid trace numbers?” with yes or no leads to one of the Pre-Initialization, Full Extraction, From Field Notes and Survey, or Partial Extraction paths.]
Inline Geom Header Load is the main program used to assign geometry
values to individual trace headers from the OPF database files. One of
the main issues related to this geometry assignment procedure is to
define how a trace in a data file will be identified in the Trace Ordered
Parameter file. One of the options is to use a specific trace header word
called the "valid trace number". In order to utilize the "valid trace
number", we will have to spend some time discussing its origin and how
it can be used.
• This means that every trace in the output data file exists in the
database and there is a one to one correspondence in all values in
the trace header to those in the database.
• After a successful run, each trace will also be assigned the "valid
trace number" if it was not pre-assigned using Extract Database
Files.
1. to read the "valid trace number" from the input trace header, or
Once a trace in a data file has been identified in the Trace OPF, the
information in all of the OPF’s for that trace is copied to the trace
header.
values as well as the other order values to the trace header. The last
thing that happens is that the traces are "stamped" as matching the
database.
• The Extract Database Files program writes this trace header word
after it reads and counts a trace that it is entering into the TRC
database. In this case the "valid trace number" is pre-assigned.
The "valid trace number" is a unique number for every trace and is
stored in the trace header as TRACE_NO.
This trace header word continues to exist ONLY if you write a new trace
file after the extraction procedure.
and then
• writes the trace count number and SIN to the trace header
Full Extraction is used when you want to extract the shot and receiver
location and coordinate information from the incoming headers.
• writes the trace count number and SIN to the trace header
IF you have run the extraction in either mode, AND written a new trace
data file, AND have not altered the number of traces in the database, you
now have “valid trace numbers” in the headers of the output data set
which you can use to map a trace in a data file to a trace in the database.
This mapping will be performed by Inline Geom Header Load after the
database is completed.
The extraction only partially populates the database. More work will
generally need to be done in the Spreadsheets to input the remaining
information.
After the Spreadsheets are complete, the next step would be to complete
the CDP binning procedures and then finalize the database.
With the database complete, you can continue with the next step of
loading the geometry information from the databases to the trace
headers. You may elect to address a trace by its "valid trace number"
assigned during the extraction or you may read a combination of trace
headers to identify the trace.
1) it identifies the TRACE_NO of the incoming trace and finds that trace
in the TRC database.
2) it copies the appropriate TRC order values to the trace header and
then
3) finds the shot, receiver, cdp, inline, crossline, and offset bin for that
trace. The appropriate values from those orders are then copied to the
trace headers as well.
In the second option, Inline Geom Header Load does not know exactly
which TRACE_NO it is looking for. It does know which channel and
shot to look for based on the header word(s) that you selected. Given
that this mapping is unique, the program now knows which SIN and
CHAN to look for in the TRC database. Once the entry is found, the
TRACE_NO is copied to the headers and the steps outlined in the first
option are performed.
Again, the key to the second option is that you need to identify which
shot a trace came from by a "unique" combination of header words for
that shot.
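The second matching option above can be sketched in a few lines of Python. The dictionaries here are stand-ins for the real TRC database and trace headers, and the header names (FFID, CHAN, TRACE_NO, CDP, OFFSET) follow the manual's terminology; the lookup logic itself is our illustration, not ProMAX code.

```python
# Sketch of the second matching option in Inline Geom Header Load:
# identify a trace by a unique (FFID, CHAN) pair, find its TRACE_NO in
# the TRC database, then copy the geometry values into the trace header.
# The dictionaries below are stand-ins, not real ProMAX structures.

trc_database = {
    # (ffid, chan): geometry values stored for that trace
    (1, 1): {'TRACE_NO': 1, 'CDP': 500, 'OFFSET': 207.0},
    (1, 2): {'TRACE_NO': 2, 'CDP': 501, 'OFFSET': 232.0},
}

def load_geometry(trace_header):
    """Copy database geometry into a trace header keyed by (FFID, CHAN)."""
    key = (trace_header['FFID'], trace_header['CHAN'])
    entry = trc_database.get(key)
    if entry is None:
        # the real process fails when header and database disagree
        raise KeyError(f'trace {key} not found in TRC database')
    trace_header.update(entry)      # TRACE_NO plus the geometry values
    return trace_header

hdr = load_geometry({'FFID': 1, 'CHAN': 2})
```

The key point the sketch shows is that the (FFID, CHAN) combination must be unique per trace for this mapping to work, which is exactly the requirement stated above.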
This option may be appropriate for relatively small datasets which only
have FFID and CHAN in the input trace headers. This option should be
used when reading the field data and writing the data to disk for the first
time. In so doing, information, such as FFID, number of shots, number
of channels are written to the database, and are then available when the
geometry is completed. Selecting this option will also stamp the output
dataset with “valid trace numbers”, which allows you to process with
trace headers only and overwrite the dataset with updated geometry
from the database files. This is an important concept for the Inline Geom
Header Load process.
In the following example, you will assume that only the FFID and
recording channel number exist in the incoming trace headers. This
information will be extracted, using the perform pre-geometry database
initialization option in Extract Database Files.
SEGY Input
Type of storage to use: Disk
Select disk file type: Disk Image
Enter DISK file path name: /misc_files/2d/segy_0_value_headers
MAXIMUM traces per ensemble: 120
Remap SEGY header values: NO
Extract Database Files
Is this a 3D survey: No
Data Type: LAND
Source index method: FFID
Receiver index method: STATIONS
Mode of operation: OVERWRITE
Pre-geometry extraction?: Yes
Disk Data Output
Output Dataset Filename: “Shots-raw data”
New, or Existing, File?: New
Record length to output: 0.
Trace sample format: 16 bit
Skip primary disk Storage?: No
Enter the full path name to the SEGY input dataset as described by
the instructor.
This initializes the SIN and TRC domains of the Ordered Parameter
Files, stamps the dataset with valid trace numbers, and allows for the
use of overwrite mode when performing the Inline Geom Header
Load step later.
6. In Disk Data Output, enter the name for a new output file, such as
“Shots-raw data”.
9. Check the OPFs, verifying the number of records in the dataset, the
number of channels/record, and the FFID range.
The only OPF files that should exist are LIN, SIN, and TRC. If SRF
exists, this means that you identified traces for receivers by
coordinates. You will also find that the SRF OPF has 1 value in it.
In this sequence, we ran the Extract Database Files process in the pre-
initialization mode. Here, we will read the output data from the pre-
initialization step and identify a trace relative to its “valid trace number”
with respect to the database.
3. In Inline Geom Header Load, match the traces by their “valid trace
numbers”.
Since the traces were read and counted with Extract Database Files,
you have a “valid trace number” to identify a trace. You have binned
all traces; therefore, do not drop any traces. Unless you have a
problem, there is no need for verbose diagnostics.
In the Extract Database Files path, the Inline Geom Header Load
process operates on a sequential trace basis, and includes a check to
verify that the current FFID and channel information described in the
OPFs matches the FFID and channel information found on each trace of
each ensemble. The Inline Geom Header Load process will fail if these
numbers do not correspond. You must then correct the situation by
changing the geometry found in the OPFs, or possibly by changing the
input dataset attributes.
In this exercise, you will import a UKOOA file which is the standard
output from navigation processing. The file contains:
You will load this single file to provide the SIN and TRC spreadsheets
with data. You will then continue with binning, using the Calc-Dim
option.
This marine 3D survey was collected using a single source / single cable
geometry.
4. From the Format pulldown menu, open a list of saved formats and
choose STANDARD UKOOA 90 Marine 3D.
Also note that, if desired, the coordinates can be altered using the
Math Op and Op Value columns.
While the import is running, you will see a variety of Status windows.
Eventually you will see a “Successfully Completed” window.
7. Quit from each of the column definition windows and select
File Exit from the main import window.
8. From the main menu click Setup and input the following
information:
• Set the azimuths to 0° for the shots and receivers (the correct
azimuth will be determined later).
9. Click on OK.
Survey parameters: 25 m group interval, 25 m shot interval, 50 m line spacing, 32° azimuth.
2. Click Setup and enter 32 degrees for the Nominal Sail Line
Azimuth.
Cable Feather QC
Using the same Basemap you can generate a quick QC display showing
the cable feather for individual shots or for entire shot lines.
3. Press MB2 near a shot and all shots on the same shot line will
highlight.
5. Repeat as desired.
CDP Binning
1. In the main menu, click Bin
Note:
3. Select Bin midpoints for the Binning Type and click Ok to
open the bin definition window.
Grid Constants
4. For the Bin space name, enter a grid name and set the following
grid constants:
Calculated Values
3. Click on Grid Open and select the grid name that you saved
from the Calc Dim operation.
You may choose to have one subsurface line for each surface sail
line. In this case you may elect to turn off the midpoint and shot plots
and redisplay the shots only in black.
Grid Display
3. You may find that you want to delete one “inline” from the
calculated grid, and you may need to adjust the inline extents by
redisplaying the midpoint control points (Display Midpoint
Control Points Black).
4. You may end up with a final CDP bin Grid similar to that shown in
the following diagram:
5. When satisfied with the CDP grid, make sure that you save it before
exiting from the XYgraph. Select Grid Save to, enter a
new grid name in the dialog box, and press the OK button.
2. Select the bin space name that was saved in the XYgraph session
The CDP Binning parameters, even after Flex Binning, still control how
many lines and cross lines exist for the project. The often overlooked
The goal of the Offset Binning is to achieve one trace per CDP per
offset bin, the same requirement as for DMO processing. For a typical
marine case you would specify the offset bin increment as twice the shot
interval. In this case the shot interval and the group interval are the same,
at 25 meters.
[Figure: offset binning geometry. Group interval 25 m; near offset = 207 m; next offset = 232 m; maximum offset = 3182 m. Binning values: first bin center = 219.5 m, offset bin increment = 50 m, minimum offset to bin = 194.5 m, maximum offset to bin > 3219.5 m (use 3300).]
In this case we will use offset bins that have bin centers at 50 meter
increments with a near offset bin center at 219.5 meters and a far offset
of 3300 meters.
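The binning numbers above follow directly from the acquisition geometry, and the arithmetic can be sketched in Python. The variable names are ours, not ProMAX parameters; note that the manual pads the far edge of the binning range out to 3300 m, a wider margin than the minimum computed here.

```python
import math

# Offset-binning arithmetic for the marine example above (a sketch).
group_interval = 25.0      # m, equal to the shot interval here
near_offset   = 207.0      # m
max_offset    = 3182.0     # m
bin_increment = 2 * group_interval            # 50 m: twice the shot interval

next_offset       = near_offset + group_interval           # 232.0
first_bin_center  = (near_offset + next_offset) / 2.0      # 219.5: midway
min_offset_to_bin = first_bin_center - bin_increment / 2   # 194.5

# Minimum number of 50 m bins needed to cover out to the maximum offset:
n_bins = math.ceil((max_offset - min_offset_to_bin) / bin_increment)
max_offset_to_bin = min_offset_to_bin + n_bins * bin_increment
```

Running this gives a first bin center of 219.5 m and a minimum offset to bin of 194.5 m, matching the values in the figure.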
You can use a display from the database to QC these parameters after
the final binning step. If you plot a 3D: XYgraph, from the TRC order
and plot OFFSET in X, CDP in Y and color code by OFB, you can see
the offset distributions on the CDP gathers. After some selective
zooming you can overlay the proposed offset binning grid for QC. You
may also find that using the “contrast.rgb” color table in the
$PROMAX_HOME/port/misc directory will be useful.
You will also notice on this plot that in areas there are duplicate offsets
at given CDPs, thus making it impossible to reach the goal of 1 trace per
CDP per bin.
4. Make sure that the Inlines are specified to be parallel to the Y axis.
5. Click Apply and when the Binning is complete, click Cancel from
the Binning window.
Receiver Binning
1. For the Binning Type, select Bin receivers and click OK.
This step is completely optional. Run this step if you intend to run
any surface consistent processing like surface consistent
deconvolution or residual statics.
3. Exit from the Spreadsheet menu using the File Exit pull down
menu.
Inline Direction
This means that if, for some reason, the only trace that is available
for a particular offset bin in a CDP has a weight of zero, use it
anyway.
[Figure: distance weighting relative to the bin center, with weights falling from 1 to 0 along the inline and crossline directions.]
XLine, by offset (distance-weight pairs): offset 0: 0-1, 25-0; offset 1500: 0-1, 50-0; offset 3175: 0-1, 75-0.
Inline, by offset (distance-weight pairs): offset 0: 0-1, 6.25-0.
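The distance-weight pairs above describe a linear taper: full weight at the bin center, zero weight at a cutoff distance, with the cutoff growing as offset increases. A small Python sketch of that idea, using the crossline values (25' at offset 0, 50' at 1500, 75' at 3175), with function names of our own invention:

```python
def cutoff_distance(offset, table=((0, 25.0), (1500, 50.0), (3175, 75.0))):
    """Linearly interpolate the zero-weight distance for a given offset.

    The (offset, distance) table holds the crossline example values from
    the figure above; pass a different table for the inline direction.
    """
    pts = sorted(table)
    if offset <= pts[0][0]:
        return pts[0][1]
    for (o0, d0), (o1, d1) in zip(pts, pts[1:]):
        if offset <= o1:
            t = (offset - o0) / (o1 - o0)
            return d0 + t * (d1 - d0)
    return pts[-1][1]          # beyond the table: hold the last value

def distance_weight(distance, offset):
    """Weight 1 at the bin center, tapering linearly to 0 at the cutoff."""
    dmax = cutoff_distance(offset)
    return max(0.0, 1.0 - distance / dmax)
```

A trace exactly on the bin center always gets weight 1, while a far-offset trace can sit further from the center before its weight reaches zero, which is why the taper is relaxed toward the far offsets.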
Azimuth Weighting
7. The following is an example of how you could set the parameters
for the Azimuth weighting entry. We won’t set them for this
exercise, but the description is included for reference.
[Figure: Azimuth weighting parameters: Pass Azimuth 1, Pass Azimuth 2, Taper Length.]
For our example you may use three different weighting schemes for
different offset and azimuth pairs:
This will give us +/- 2 degrees at the near offsets, +/- 4 degrees at the
mid offsets and +/- 10 degrees at the far offsets. Note also that we are
increasing the taper length from 1 to 3 degrees as the offset
increases.
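The scheme just described can be sketched as a pass band plus taper: full weight within the pass azimuths, falling linearly to zero over the taper length, with the band widening at longer offsets. The offset breakpoints and the middle taper value below are illustrative assumptions, not values from the exercise.

```python
# Sketch of the azimuth-weighting scheme above: +/-2 deg at near offsets,
# +/-4 deg at mid offsets, +/-10 deg at far offsets, with the taper length
# growing from 1 to 3 degrees. Offset breakpoints are our assumptions.

SCHEMES = [          # (max offset m, half pass band deg, taper length deg)
    (1000.0,  2.0, 1.0),
    (2000.0,  4.0, 2.0),
    (3300.0, 10.0, 3.0),
]

def azimuth_weight(azimuth_error, offset):
    """Weight by deviation (degrees) from the nominal azimuth."""
    for max_off, half_band, taper in SCHEMES:
        if offset <= max_off:
            break
    excess = abs(azimuth_error) - half_band
    if excess <= 0.0:
        return 1.0                         # inside the pass band
    return max(0.0, 1.0 - excess / taper)  # linear taper to zero
```

So a near-offset trace 3° off the sail-line azimuth is rejected outright, while a far-offset trace with the same error still gets full weight.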
In the marine case we may elect to weight traces by a single sail line.
In this case all of the traces that contribute to the CDP line are
examined by their S_line trace header word. The sail line with the
highest number of contributors is the prime sail line and traces that
come from this sail line have the highest weight. You may elect to
put an offset variant weight function based on the dominant sail line
represented in the traces. In this example we will use an offset
variant weighting function that weights the sail line highly for the
near offsets and relaxes the weighting function toward the far
offsets:
you will have to cycle from ONE through FIVE back to NONE
QC Plots
The Assign CDP Flex Binning process writes a number of values to the
CDP Order Parameter (database) Files. You may elect to generate QC
plots using the 3D database QC capabilities.
The number of traces contributing to the CDP after flex binning. One of
the goals of flex binning is to provide uniform fold. The uniformity of
this value indicates how well the flex binning worked.
If the short or long offsets are missing from a flex binned CDP, these
values can be too high or too low, respectively.
These values can show if long or short offsets are missing. At CDPs
where the values are high, short offsets are missing. If the values are
low, long offsets are missing. The expected mean offset is half the sum
of the first offset bin center and the last offset bin center. Ideally,
RMEANOFF, the ratio of the mean to the expected mean, should be 1.
In our case we specified 50 as our near offset and 3225 as our far offset.
This yields a predicted mean of 1725 meters.
The mean offset cannot tell where missing offsets are if the missing
offsets happen to occur such that the mean offset is unchanged. The ratio
of standard deviation to expected standard deviation will be 1 for an even
offset distribution. If the ratio is less than 1, short and/or long offsets are
missing. If it is greater than 1, middle offsets are missing.
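These two QC ratios can be sketched numerically. The expected mean is the midpoint of the first and last bin centers, as stated above; taking the expected standard deviation as that of a uniform distribution (span divided by the square root of 12) is our assumption for illustration, since the manual does not give the database's exact formula.

```python
import statistics

def offset_qc(offsets, first_center, last_center):
    """Ratios of observed to expected mean and standard deviation of the
    offsets in one CDP. Both ratios are near 1 for an even distribution.
    (A sketch; the ProMAX database computes these values internally.)
    """
    expected_mean = (first_center + last_center) / 2.0
    expected_std = (last_center - first_center) / 12 ** 0.5  # uniform dist.
    rmean = statistics.mean(offsets) / expected_mean          # cf. RMEANOFF
    rstd = statistics.pstdev(offsets) / expected_std
    return rmean, rstd
```

Feeding it an evenly spread set of offsets returns ratios close to 1; removing the middle offsets pushes the standard-deviation ratio above 1, as described above.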
and also:
Notice that using the recommended parameters we have done a good job
of stabilizing the fold and offset distributions of all of the CDPs.
Any trace that is NULL for FLEXCDP#1 did not contribute to any
CDP. Traces with non-NULL values in FLEXCDP#2 and #3 contributed
to more than one CDP.
This tool accepts traces in any sort order and makes a copy of each input
trace for each CDP bin to which it contributes. The tool finds the list of
CDPs the trace contributes to by querying the TRC database parameter
FLEXCDPS. If the trace contributes to no bins, it will be deleted. The
tool simply reproduces one trace at a time, therefore, if the input data are
CDP sorted, they would no longer be sorted on output from this tool.
If a trace contributes to a CDP and its midpoint does not lie within the
boundaries of the CDP bin, the trace’s source and receiver coordinates
will be adjusted so that the trace’s adjusted midpoint lies at the same
relative position in the new CDP as it did in its original bin. The tool
does this so that 3D DMO will apply the trace to its new bin rather than
just increasing its effect in the old bin.
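The coordinate adjustment can be sketched as follows (a simplified 2-D illustration with made-up coordinates; the function and values are hypothetical, not the tool's actual code):

```python
def adjust_trace(sx, sy, rx, ry, old_center, new_center):
    """Shift source and receiver coordinates so the trace midpoint sits at
    the same relative position in the new CDP bin as in its original bin.
    (Hypothetical 2-D helper, not the tool's actual code.)"""
    dx = new_center[0] - old_center[0]
    dy = new_center[1] - old_center[1]
    # Shifting both endpoints by the bin-to-bin vector moves the midpoint
    # by exactly that vector, preserving its position relative to the bin center.
    return (sx + dx, sy + dy), (rx + dx, ry + dy)

# Made-up coordinates: the midpoint (50, 0) sits at (+10, -10) from the old
# bin center (40, 10); after adjustment it sits at (+10, -10) from (95, 10).
src, rcv = adjust_trace(0.0, 0.0, 100.0, 0.0, (40.0, 10.0), (95.0, 10.0))
mid = ((src[0] + rcv[0]) / 2, (src[1] + rcv[1]) / 2)  # (105.0, 0.0)
```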
We do not actually have any trace data for this example, so we cannot
run this exercise.
This chapter serves as an example of how to input the geometry for a swath geometry. The main
items of interest here are the ways to handle the spreads rolling on and off the ends of the swaths
and how to handle the cable roll between the swaths. The Geometry Assignment Overview
section in the Online Help provides further details of the geometry assignment process.
[Survey layout diagram: receiver lines 1-8 (stations 1001-1154 through 8001-8154, 154 stations per line) and shot lines (shots 10001-10153 through 50001-50153, 153 shots per line).]
We will divide receiver stations 1001-1154 into line 1, stations 1-154.
The remaining receiver lines will be handled similarly. We will also
divide the shot stations 10001-10153 into line 10, stations 1-153. The
remaining shot lines will be handled similarly as well.
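This line/station split reduces to simple arithmetic (a hypothetical helper assuming this survey's numbering, where the digits above the last three encode the line):

```python
def split_station(station):
    """Split a station number like 1154 or 50023 into (line, station),
    assuming the digits above the last three encode the line number."""
    line, stn = divmod(station, 1000)
    return line, stn

# Receivers 1001-1154 -> line 1, stations 1-154;
# shots 10001-10153 -> line 10, stations 1-153.
```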
[Spacing annotations from the layout diagram: 440', 110', 110'.]
4. Click the Setup pull down and enter the project constants:
5. Click OK.
Receivers Spreadsheet
1. Open the Receivers Spreadsheet, then use the File Import pull
down to open the ASCII file import window.
Note:
We are splitting the station number into two numbers, one for the line
and the remaining for the station along the line.
5. Select the remaining column definitions for Line, and the X and Y
coordinates.
9. Select to Apply the format and Overwrite the values with the new
import values.
Sources Spreadsheet
1. Open the Sources Spreadsheet by clicking on the word Sources in
the main spreadsheet menu window.
Note:
We are splitting the station number into two numbers, one for the line
and the remaining for the station along the line.
5. Click on the word “Line” in the parameter column and then paint
the first two columns as the line number. (include a blank before the
first column).
8. Apply the format and Overwrite the values in the database. The
Sources spreadsheet should now be populated with the selected
information.
10. Use the Cross Domain Contribution (Double Fold) icon MB3
function to measure the Azimuth of the cable lines. You should
measure approximately 25 degrees.
Reopen the Setup window by clicking the Setup button from the main
menu and input 25 degrees for the nominal azimuth.
Note:
This is correct since we did not import patterns and did not run an
extraction. In this case we will have to specify patterns in the Patterns
Spreadsheet.
Patterns Spreadsheet
As an example we will define a pattern that is typical for a swath
shooting geometry. We will define a basic bi-symmetric split geometry
where, for any given shot, there are 4 live cables with 60 traces on each
cable. The shots will be between the center two cables and between
traces 30 and 31 on each cable. There will be no gap in the split spread.
4. We can now specify the first pattern. Since we are using a line/
station relationship we will need a separate pattern for each swath.
For the First pattern mark a block of 4 cards and then fill the
columns as shown in the next diagram.
7. Exit from the Patterns Spreadsheet using the File Exit pull
down
2. You may elect to reorder the columns of the spreadsheet so that the
pattern and pattern shift cards appear near the Line and Station
columns for convenience. Use the Setup Order pull down and
then click on the column headers in the order you want them to
appear. Use MB2 on the last column heading of interest.
3. For all of the shots in the first swath (on line 10) we will use pattern
number 1 and then we will shift the pattern by -29 for the first shot
and increment the pattern shift by 1 for each shot. The last shot in
the first swath should be 123.
4. For all the shots on the second swath (on line 20) use pattern
number 2 and shift the first shot by -29 and increment the pattern
shift by 1. Similarly, complete the pattern number and pattern shift
entries for all shots in all 5 swaths using multiple Find and Fill
operations.
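The pattern-shift bookkeeping in the steps above reduces to one line of arithmetic (a hedged sketch; the first shift of -29 and the +1 increment per shot follow the text):

```python
def pattern_shift(shot_station, first_shift=-29):
    """Pattern shift for a shot within a swath: the first shot (station 1)
    gets first_shift, and the shift increments by 1 for each later shot."""
    return first_shift + (shot_station - 1)

# Station 1 -> -29; station 153 (the last shot in the swath) -> 123.
```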
Trace Assignment
This exercise illustrates CDP binning procedures. For this example we
will automatically compute a CDP grid based on some initial known
values and then apply the grid using the batch CDP Binning* process.
• Computes the SIN and SRF for each trace and populates the
TRC OPF
2. Use the Cross Domain Contribution (Double Fold) icon MB1 and
MB2 functions to view which receivers have been defined to be live
for each shot and also to see which shots contribute to each receiver.
You should observe a symmetric split spread of four cables
centered on the nearest shot that rolls on and off the spread at the
ends of the swath.
2. Set the Azimuth=25, Grid Size in X = 55, Grid size in Y=55, Bin
Space Name=Calculated grid 25 degrees 55 by 55, Offset Bin
Increment=110 and select to set the Inlines to be parallel to grid Y.
The Calc Dim operation computes the origin of the grid and the
Maximum X and Y dimensions.
3. Click on Grid Open and select the grid name that you saved
from the Calc Dim operation.
Because of the density of the display, zooming in will help you view
and QC the results.
You may elect to alter the grid by using any of the interactive grid
editing icons if desired. (There should be no need to alter the grid.)
6. Select File Exit from the main spreadsheet menu to exit the
Geometry Spreadsheet.
CDP Binning*
Binned Space Name ------- “your grid”
This process will perform the CDP binning and Finalization steps in
a batch job instead of interactively using the spreadsheet.
2. Once the Binning is complete you can generate the QC plots using
the Database/Get and 3D options within XDB.
o SEG-Y Output
o Tape Data Output
o Archive Wizard
SEG-Y Output
In this exercise, you will write a SEG-Y formatted tape, mapping some
of the non-standard SEG-Y headers. We will check to make sure the
headers were mapped correctly by using SEG-Y Input and Trace
Display.
Select to read just the first two shots from your "03 Shots - with
geometry" dataset.
We are limiting the dataset size for efficiency and to save disk space.
Select Disk for type of storage--we will be using a Disk file instead
of Tape. Enter the DISK file path name according to your
instructor’s directions. Select Yes to Remap SEG-Y headers. Use
the default remapping values to map the header values for sou_sloc
and srf_sloc.
The SEG-Y format reserves bytes 181-240 for optional use. The
*_sloc trace headers are important to ProMAX so we typically write
them to the extended headers. These header values must be present
in order to automatically rebuild the database files with the Extract
Database Files process.
5. Once the job is complete, edit your flow to QC the traces and
headers written to the disk image file.
Select Disk for type of storage and the same path and filename you
used for SEG-Y Output. Choose to remap headers, and use the
defaults for sou_sloc and srf_sloc.
Tape Data Output writes seismic traces to tape in ProMAX format. This
process is ideal for archiving a dataset to be used later in ProMAX since
it automatically preserves all trace headers and the CIND and CMAP files.
Like SEG-Y Output, Tape Data Output will archive datasets spanning
multiple disks.
While we will not complete an exercise using Tape Data Output, you
should be aware of its benefits for archiving data.
Archive Wizard
The SEG output formats and ProMAX Tape Data Output only operate on
seismic trace files. Starting with the 5000.0 release, SeisSpace offers
a process called the Archive Wizard, which writes entire Areas or
Lines to tape. The Archive Wizard offers the ability to span tape
volumes and utilizes the SeisSpace and JavaSeis secondary storage
systems. The Archive Wizard also checks for available disk space
before writing files. You have the option to skip trace data files when
you archive or restore data.
Once we have worked through the help file, we will step through an
archive and restore using our 2D Marine line.
Processing Requirements
• Number of modeling parabolas used in the Radon transform should
be calculated using 2 parabolas/Hz, at the maximum frequency of
the data, over the total delta-t range at a reference offset.
1. Create a flow called Radon 01: Analyze input in the
"Your Name" Project - 2D Marine Line SubProject:
2. In Synthetic Trc Generation, the zero offset times and the velocities
for the events to model are specified in the EMACS widget window
under "Define ORMSBY synthetic seismic events", in the section
below the parameters. You can type the information into this section.
Radon Analysis models the input gather using the Radon transform.
Since multiples after NMO correction with primary velocities most
closely approximate parabolas rather than hyperbolas, the parabolic
transform is used for multiple attenuation. In Interactive Radon/Tau-P
Analysis, the modeled data is displayed in time-moveout space, and
primaries and multiples appear as points rather than lines. Multiples can
therefore be easily separated from NMO corrected primary energy. You
will pick a top mute in the time-moveout display that "mutes" the
multiples.
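The parabolic model can be sketched as follows (a minimal illustration with made-up times; dt_ref is the residual moveout at the reference offset x_ref, and the function is hypothetical, not ProMAX code):

```python
def parabolic_time(t0, dt_ref, x, x_ref):
    """Arrival time (ms) of a parabolic event: zero-offset time t0 plus a
    residual moveout that grows with offset squared, equal to dt_ref at x_ref."""
    return t0 + dt_ref * (x / x_ref) ** 2

# After NMO with primary velocities, a primary is flat (dt_ref ~ 0) while a
# multiple retains positive residual moveout, so it maps to a point away
# from MOVEOUT=0 in the time-moveout display.
t_primary = parabolic_time(800.0, 0.0, 600.0, 1200.0)       # 800.0 ms at any offset
t_multiple = parabolic_time(1000.0, 200.0, 1200.0, 1200.0)  # 1200.0 ms at the far offset
```

Because each event collapses to a single (t0, moveout) point, a top mute in that space separates multiples from NMO-corrected primaries.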
You will use the maximum residual moveout and the offset noted in the
previous exercise to set parameters in Radon Analysis.
1. Copy Radon 01: Analyze input to Radon 02: Radon analysis flow
and edit as follows:
2. We have to add the Trace Header Math to add the Aoffset header
and Inline Sort to CDP/Aoffset since Interactive Radon/Tau-P
requires the data be sorted by CDP/Aoffset.
The reference offset for delta moveout is the maximum offset, 1200
meters.
Note that the primaries (at 800 ms and 1500 ms zero-offset times)
appear along MOVEOUT=0. The multiples (at 1000 ms and 1500
ms zero-offset times) appear at their respective residual moveout
values.
7. In the Radon Analysis display, pick a top mute that passes between
the primaries and the multiples.
Once you have saved the mute, we will use it in Radon Filter to apply
the filter in a batch mode and output a dataset with Radon Filter
applied.
However, the closer the mute lies to the primary, the more the
primary amplitudes will be affected. Move the mute so that it is not
as close to the primaries. Use the Paint brush to reapply the mute.
There should be a subtle change in the amplitude of the primaries as
the mute is moved either closer or farther from the primaries.
1. Copy the previous flow to make the following flow, Radon 03:
Radon Filter:
2. In Radon Filter, use exactly the same P-value and offset parameters
that you used in the Interactive Radon/tau-P Analysis.
In the first Radon Filter, choose to "Pass the modeled data" and do
not mute the data in the radon domain. This will output the fully
modeled data, including both primaries and multiples.
In the second Radon Filter, choose "Modeled" and Top mute the
data in the radon domain. This will remove the multiples from the
input data. In the third Radon Filter choose "Subtract". This will
remove the primaries from the input data.
In this exercise, you will display NMO corrected CDP gathers and
estimate the amount of differential moveout at the maximum offset.
2. Read the Shots with decon by CDP and Aoffset, CDPs 100-
600(100). Set Trace Display to plot 6 ensembles.
• If you use -50 and +350 as the delta-t range, then: 120 parabolas/
1000 ms * 400 ms = 48 parabolas.
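This worked example can be expressed directly (a hedged sketch; the maximum frequency of 60 Hz is our assumption, inferred from the "120 parabolas/1000 ms" figure, and the helper is hypothetical):

```python
def n_parabolas(f_max_hz, dt_min_ms, dt_max_ms):
    """Rule of thumb from the text: 2 parabolas per Hz at the maximum
    frequency, spread over the total delta-t range (converted ms -> s)."""
    return 2.0 * f_max_hz * (dt_max_ms - dt_min_ms) / 1000.0

# With f_max = 60 Hz (assumed) and a -50 to +350 ms delta-t range:
n = n_parabolas(60.0, -50.0, 350.0)  # 48 parabolas
```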
4. Pick a Top Mute that will remove the multiples from this display.
Click on the PaintBrush icon to remove the multiples in the display.
Move to the next CDP.
1. Copy flow 10: Final Stack to Radon 05: Stack with Radon:
2. In the Radon Filter, make sure the parameters exactly match the
parameters you used in Interactive Radon/Tau-P Analysis.
2. Input the Final Stack and Final Stack with Radon in Disk Data
Input and Insert, using After. Put 2 Panels in Trace Display and
Zoom up to a similar time frame as the display below.
1. Copy the flow Radon 03: Radon Filter to Radon 07: Radon Vel
Filter:
2. Replace the first Radon Filter with Radon Velocity Filter. Change
the Trace Display label.
4. You can still see some multiple energy using the Radon Velocity
Filter. You can rerun the flow with a tighter velocity percentage to
see the change in the amplitudes.
Note also that NMO can be performed within Radon Velocity Filter.
Therefore, your input gathers do not need to have NMO already
applied.
FK requires that the input traces be equally spaced. Also, since events
that arrive at the same time with the same slope will overlap, FK Filter
is not as effective in removing multiples as Radon Filter on the near
traces.
1. Copy the flow called Radon 01: Analyze input to FK 01: Analyze
input:
2. The Synthetic Trc Generation is changed so that the traces are now
40 meters apart and the first offset is -1200 meters making a split
spread CDP.
4. The CDP has NMO applied to make it more obvious where the
primaries will be (along the k=0 axis).
2. Set the FK Analysis panel width to 60, set the trace spacing to 40.0
meters and set the Starting Display configuration to TX-FK.
Define a new mute polygon to store the mute we will design. This
mute will remove the ground roll from the CDP record.
7. Once you have the polygon you want, Exit -> Stop the flow and
save your polygon.
The first step in creating a final plot is to input your data into the Create
CGM+ Plotfile process. The menu for this process is split into three
sections (see figure). In the top section you will change some of the
global parameters for the plot. These include CDP range for a stack
section, units and font sizes. You can also choose your plot direction,
depending on how you input the leftmost and rightmost CDPs.
[Figure: the Create CGM+ Plotfile menu, split into top, middle, and bottom sections.]
Near the end of the top section, you select a specific submenu to view.
This option controls a set of dynamic submenus which are selected one
at a time. Once you are familiar with the process, the normal sequence
would be to start by selecting the first option, Traces/Plots/Posts/
Graphs, and work your way down the list of submenus. When you select
a different submenu to view, the lower two sections of the menu change
allowing you fill in the appropriate selections.
Note that this step may not be necessary if you have already copied
the tutorial line during a previous exercise.
1. Create a new Area for 2D lines, using your name in the description,
i.e. Bob’s 2d area. In the new area, copy the “tutor2d - watson
rise” line from the 2D tutorials area. Name the new line anything you
want.
3. Build the following flow to create a CGM+ file of the initial stack.
This option allows you to set the parameters for the trace data, as
well as display plots of database entries, and trace headers above and
below the trace data.
Type in 24 for trace spacing, 2.5 for time scale, and default the
remaining parameters.
Select the Add After option in the middle section of the menu. The
bottom section disappears, the Add After and Primary Trace data are
highlighted, and a menu to choose Component Type appears.
10. Select Post for Component type, and CDP from the list of Attribute
types that appears.
11. Post the stacking velocities above (before) the trace data.
Click on Primary Trace Data in the middle section, then select Insert
Before.
12. Select Post for Component type, and Vel for attribute type.
13. From Submenu to view in the top section, select Title Box Text.
This controls the side label’s title text and font sizes.
Select an item in the Titlebox Item List. You can change this item on
the Editable Title Item Text line.
14. From Submenu to view in the top section, select Field Parameter
Text.
15. Select Date Recorded on the Editable Field Text line to edit the
field data.
You can input your own processing sequence, or create one out of
the ProMAX processing history information.
By default, the side label is nearly built. You can use MB2 or MB3
to select a flow box in the right hand section in order to toggle it
into or out of your side label. You can also toggle parameters black
to include them in the side label and gray to exclude them. Use the
mouse button helps to guide you.
20. Once you have the information you want included in the processing
history, click on Label Save (side label is stored in the
label_sequence.txt file in the Area/Line/Flow directory) and then
Exit.
Select the line labeled Processing Sequence Text to modify the side
label. Edit the side label that you saved in the history viewer. You
may want to remove the flow names, blank lines, etc.
4. Execute flow
View your CGM+ file using the scroll bars, and Zoom option.
1. Build the following flow. You may want to copy the previous flow to
save time.
3. Click Add After and then your line name in the Titlebox Item List.
Two new lines appear. On the Editable Title Item Text line, enter My
Test Plot. The new plot title is appended to the Titlebox Item Text.
1. Build the following flow to create a CGM+ file of your stack and
interactively view the results before plotting.
3. Enter your plot file name and select the plotter and host.
4. Execute flow
For First-break picking and trace editing, ProMAX uses a Cascade-Correlation Learning
Architecture. Advantages of this algorithm include decreased network learning time and the
ability to incrementally add to an existing network. The neural network compares various
attributes of the correct pick to other possible picks within a window. The network recognizes the
ability of an attribute to predict the correct pick and accordingly weights the network connection
to that attribute.
This tutorial flow is divided into 3 parts. The first part is for training the neural network; the
second part is for batch picking all shots. The third part uses the other picking module in
SeisSpace, FB Picking.
The first break picker in Trace Display gives you the opportunity to
interactively create and train a neural network to pick first breaks. You
will manually pick some first breaks and use these picks to train a neural
network. The neural network will then try to pick first breaks on selected
shots, and you can QC these picks using Trace Display.
NOTE:
The NN First Break Picker menu in Trace Display only appears if geometry is
defined, and your dataset matches the database. You can check if geometry matches
the database via MB3-> Properties on the dataset in the Dataset listing in the
Navigator.
Interactive Training
1. Go to “Your Name” Project and Salt 3D - extraction Subproject.
Create a new flow 08c - FB picking.
Select the pick polarity and the signal/noise gate length. The neural
network works well with peaks and a gate length of 100 ms. Select
OK to accept these parameters. The neural network itself, however,
may key off of instantaneous phase/frequency, amplitude before or
after the first break, or any other pattern it can recognize.
The Picking tool icon appears on the left side of the display. There
will be two entries in the Pick Layers box: “FB Training Data” and
the “nn gate.”
7. Select the “nn gate” table from the Pick Layers window, and
edit the top of the gate. Click MB3 -> New Layer to pick the
bottom of the gate. Move through the rest of the shots to make sure
the gate surrounds the first breaks. Edit the top and bottom of the
gate as necessary. If you have IDA set to Yes, you can move back
and forth to check/edit nn-gate.
The One time Recall option applies the neural network to the
currently displayed gather. A First Break NN Recall window
appears.
11. If the picks are bad, modify your FB Training Data and retrain the
network. It works better if you pick on both slopes of the first
breaks. The picks are usually off at the near offsets, but we will not
use the near offsets for refraction statics in this case.
To modify training picks, click on the Picking tool icon. Your new
table of picks appears in the Pick Layers window. Remove the table
from the list and activate the FB Training Data. Modify or add to
these training picks, select First Break NN Training, and use the
same weight table. Iterate through steps 6, 7, and 8 until you are
satisfied with the results. If you still cannot get satisfactory results,
try purging the Neural Network (FirstBreakPicker
PurgeNeuralNet) and starting over.
12. Set Neural Net Recall to Continuous and click the Next ensemble
icon to go to the next shot.
You can retrain if necessary, or if you think the picks are close
enough, select File Exit/Stop Flow, and choose to save edits
before exiting.
The weight table, and time gates are saved and can be used in the
batch NN First Break Picker process to pick the entire dataset.
This step uses the neural network weight matrix to pick first breaks on
all shots. In the case of first-break picking, neural network picks are
stored in the ordered database and can be accessed for various uses
including refraction static analysis.
You must specify a starting offset for the picker. Specify an offset
with good S/N and no shingling of refractors. For this data, an offset
value of about 3000 ft. is good.
Edit the same flow, and toggle “NN First Break Picker” inactive, and
“Trace Display” active, and execute the flow. From the menu bar in
the Trace Display window, select Picking Edit Database
Values (first breaks)... Select NN_PICK as the Infotype, and
PICK0001 (the 12345678 picks are from the interactive picker)
from the OPF File Selector, and use the same name to save edits.
Don’t spend too much time editing picks here. The easiest way to
view and edit your picks is to use the first break editing capabilities
of the Refraction Statics process in the next chapter. Also do not
worry about zero picks on the dead traces.
For this dataset, the NN Picker does not work as well as the First
Break Picking module. We will use that picker next.
3. The First Break Picker is a batch module. We will use our nn-gate
for the gate and default all the other parameters. It turns out that,
even though this data was acquired with a surface source, the
DYNAMITE option works better.
These picks are better than the NN picks and will be used in the
Refraction Statics Chapter.
The refraction statics processes expect R_STATIC and S_STATIC to be present in the database.
Once these attributes are in the database, the refraction statics processes can fill them in with more
accurate static values than simple elevation static calculations. The recommended method to
create the R_STATIC and S_STATIC database entries is to run the process Datum Statics
Calculation* before running the refraction statics processes.
The main disadvantage is that there is no graphical interface for
editing. The source and receiver static solutions are applied to the data
in a later step, Apply Refraction Statics.
NOTE:
First break times must be picked and written to the database prior
to this exercise. Please refer to the Neural Network First Break Picking
exercise earlier in this manual.
As a part of this exercise you will see that there are two ways to enter
the refractor offset ranges. These are:
• Manually.
In this exercise you will use first-break pick times to calculate a near-
surface model and travel-time corrections. This process calculates shot
and receiver refraction statics to shift to the final datum and updates the
database. Results of this exercise will be used by Datum Statics Apply
in the next exercise.
Select the first break time to use for the statics decomposition. These
time picks will be in the TRC OPF and will normally be of the type
F_B_PICK. Select the PICK0000 file created by First Break
Picking. Enter the number of layers to model, in this case use one
layer. The identification number will be 1 for the first run through the
process. The shooting geometry is 3D.
6. Set the refractor offsets to 1900-8000 feet. The picks on traces with
this offset range will be used to calculate the refraction statics.
Set the Edit First Break times (median Velocity)? to Yes and the
value to 15%.
You can view these database entries using DBTools -> View -> 2D
MATRIX
Once CDP velocity is available, delay times for shots and receivers
may be computed. This is done by iteration, starting with source
delay time estimates, followed by receiver delay time estimates, and
(optionally) finalized by CDP velocity updating. Values are not
computed for any SIN, SRF or CDP that does not meet the minimum
fold (menu parameter) criterion. Once the decomposition is
complete for each refractor, these missing values are interpolated
based on X and Y.
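The iterative decomposition described above can be sketched as follows (a heavily simplified single-refractor, constant-velocity illustration, not the ProMAX algorithm; the picks, velocity v, and the alternating-averages scheme are our own assumptions):

```python
def decompose_delay_times(picks, v, n_iter=10):
    """Alternately estimate source and receiver delay times from first-break
    picks modeled as t = tau_src + tau_rcv + offset / v.  A heavily simplified
    sketch of the iterative decomposition, not the ProMAX algorithm."""
    tau_s, tau_r = {}, {}
    for _ in range(n_iter):
        # Update each source delay holding the receiver delays fixed.
        for s in {p[0] for p in picks}:
            res = [t - x / v - tau_r.get(r, 0.0)
                   for (si, r, x, t) in picks if si == s]
            tau_s[s] = sum(res) / len(res)
        # Update each receiver delay holding the source delays fixed.
        for r in {p[1] for p in picks}:
            res = [t - x / v - tau_s.get(si, 0.0)
                   for (si, ri, x, t) in picks if ri == r]
            tau_r[r] = sum(res) / len(res)
    return tau_s, tau_r

# Synthetic picks built from known delays (times in s, offsets in ft).
true_s, true_r, v = {1: 0.010, 2: 0.020}, {1: 0.005, 2: 0.015}, 2000.0
picks = [(s, r, 100.0 * (s + r), true_s[s] + true_r[r] + 100.0 * (s + r) / v)
         for s in true_s for r in true_r]
tau_s, tau_r = decompose_delay_times(picks, v)
# Individual delays carry a constant source/receiver trade-off, but every
# sum tau_s[s] + tau_r[r] matches the corresponding true_s[s] + true_r[r].
```

As the closing comment notes, this decomposition determines delay-time sums but not the individual source/receiver split, which is one reason the real process iterates with velocity updating and fold criteria.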
The depth model stage inputs delay times and refractor velocities in
CDP, interpolates refractor velocity into SIN and SRF, computes a
depth model for sources and another for receivers. Optionally, the
first refractor depth in SRF may be projected into CDP, smoothed,
projected back into SRF, V0 recomputed in SRF based on the
smoothed depths, new V0 projected from SRF to SIN, and finally
SIN and SRF depth models computed.
It is important to note that the Datum Statics Apply process first checks
to see if other statics have been applied to the traces by an earlier
processing step. If statics are applied, Datum Statics Apply first removes
these statics returning the traces to their original recorded time
reference. Also, if previous statics contained any hand statics or shot
delay corrections, these statics are also removed and should be
reapplied.
NOTE:
We do not have to recalculate the datum statics (...C_STATIC...) unless you want to
change the smoother of N_DATUM, the datum elevation, or the replacement
velocity. Datum Statics Apply will back out the elevation statics before it applies
the refraction statics.