
Case study: a sample seismic processing flow

Aims

Students will be able to:

circumstances under which each of these elements will be necessary

Introduction

This example flow is one which I have applied to a dataset from the northern California continental margin. The data were acquired in 1994 from the R/V Maurice Ewing (a US multi-purpose research vessel similar to the UK vessels we see outside the SOC). The source was a 139-litre airgun array, fired at a 50m shotpoint interval, with a 160-channel, 25m group interval recording system. The survey was aimed at determining whole-crustal structure; however, there are several points at which the processing can also be tuned for optimum imaging of the sediments. Particular challenges in the data processing are:

1. a source dominated by low frequencies (the energy was being recorded at 200km offset as well as by the streamer);
2. a water depth varying from 100m to 4km along the profile, and varying rapidly in places;
3. a high degree of scattered (in and out of the plane) energy, and conflicting dips from both real structure and superimposed diffraction and reflection events.

These can be dealt with using several of the methods referred to so far in the course. Some thought is required as to their effect on the data and why specialist processing might be required. At every step, parameters are tested for each of the elements in the processing flow. Some of the processing was self-contained (i.e. based entirely on the seismic reflection data); other steps incorporated information from a coincident seismic refraction profile.

Data input

There are few options here - we have to get the data into the system and carry out some preliminary cleanup.

Read field tapes

This can be a slightly tricky operation as although standards exist they are often ignored in detail (and sometimes the person who recorded the data is not quite sure!). (320,000 traces, 9Gb)
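For illustration only (this is not the software actually used on this line), the binary and trace headers of a transcribed SEG-Y file can be spot-checked with the open-source segyio library before any of the recorded geometry is trusted; the file name below is an assumption.

```python
# Hypothetical sketch: inspect SEG-Y headers before committing to a geometry.
# Assumes the field data have been transcribed to a file called "line1_field.segy".
import segyio

with segyio.open("line1_field.segy", ignore_geometry=True) as f:
    print("traces:", f.tracecount)
    print("sample interval (us):", f.bin[segyio.BinField.Interval])
    print("samples per trace:", f.bin[segyio.BinField.Samples])
    # Spot-check the first few trace headers; recorded values often deviate
    # from the standard in detail, so never trust them blindly.
    for i in range(3):
        h = f.header[i]
        print(i, h[segyio.TraceField.FieldRecord], h[segyio.TraceField.TraceNumber])
```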

Resample data

We often record field data with a smaller sample interval than is required given the target of the survey, mostly in the hope that there might be something in the extra bandwidth. Quick tests on receiving the data determine their true bandwidth and sampling requirements. In this case the data were recorded with a 2ms sample interval, but resampled to 8ms before further processing was carried out. We also stored the data on disk with mild compression. (320,000 traces, 1.1Gb)
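A minimal sketch of the resampling step, assuming the traces are held in a NumPy array with one trace per row; the key point is the anti-alias filter applied before decimating from 2ms to 8ms sampling.

```python
# Minimal sketch (not the original software): resample traces from 2 ms to 8 ms.
# Assumes `traces` is a NumPy array of shape (n_traces, n_samples) at dt = 2 ms.
import numpy as np
from scipy.signal import decimate

def resample_traces(traces: np.ndarray, factor: int = 4) -> np.ndarray:
    """Decimate each trace by `factor` (2 ms -> 8 ms for factor 4).

    scipy.signal.decimate applies an anti-alias low-pass filter before
    downsampling, which is essential to avoid folding energy above the new
    Nyquist frequency (62.5 Hz for 8 ms sampling) back into the band.
    """
    return decimate(traces, factor, axis=-1, zero_phase=True)

# Example: 1000 synthetic traces, 6 s record length at 2 ms.
traces = np.random.randn(1000, 3001).astype(np.float32)
resampled = resample_traces(traces, factor=4)
print(traces.shape, "->", resampled.shape)   # (1000, 3001) -> (1000, 751)
```

With 8ms sampling the usable bandwidth is limited to 62.5 Hz, which is ample for the low-frequency source described above.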

Geometry assignment

Although it is common to have source and receiver locations stored directly in each trace header, it is unusual to have all of the parameters for a full geometry specification. We can potentially:

revise locations with post-processed navigation
decide CMP spacing and binning strategy
choose a strategy for offset bins (used later)

although conventionally we would use a CMP spacing of half the receiver interval and typically assume a regular geometry for marine work.
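As an illustration of the binning arithmetic for a regular marine geometry (a sketch only; the near offset in the example is an assumed value, and the coordinates are one-dimensional along-line positions):

```python
# Hypothetical sketch of CMP binning for a regular 2-D marine geometry.
# Spacings follow the text: 50 m shots, 25 m groups, CMP spacing = 12.5 m
# (half the group interval).
import numpy as np

def cmp_bin(shot_x: np.ndarray, rcv_x: np.ndarray, cmp_spacing: float = 12.5,
            origin: float = 0.0) -> np.ndarray:
    """Return an integer CMP bin number for each source-receiver pair."""
    midpoint = 0.5 * (shot_x + rcv_x)                  # common midpoint position
    return np.round((midpoint - origin) / cmp_spacing).astype(int)

# Example: one shot at x = 1000 m recorded on a 160-channel streamer,
# 25 m group interval, 200 m near offset (an assumed value).
shot_x = np.full(160, 1000.0)
rcv_x = shot_x - (200.0 + 25.0 * np.arange(160))
print(cmp_bin(shot_x, rcv_x)[:5])   # CMP numbers for the nearest five channels
```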

(Make near trace section)

The near trace section shows some of the problems we must deal with in processing these data:

Multiples which cross the section due to changes in water depth

A combination of dipping events, rough basement (giving diffractions) and flatter sedimentary features

A deep seafloor canyon causing focusing of multiple reflections

Kill traces

At this point we also evaluate overall data quality, determine channels which are consistently bad or traces which are particularly noisy, and then zero them.

Mute

We generally mute out the direct wave where it is distinct from the reflected wavefield. This can be difficult when the water depth is less than the near-trace gap.

Pre-stack wavelet shaping

This is the first critical set of processing steps. The order is arguable; we can construct arguments for this ordering (deconvolution requires stationarity, hence fix the amplitudes first, and is statistical, hence remove out-of-band noise first), but we can also argue for deconvolution before bandpass filtering (we are trying to balance frequencies, so why remove energy first?). In each case we would test on a number of gathers at different locations on the line.

Amplitude recovery

Apply amplitude corrections for spreading, absorption, and transmission losses. For whole-crustal work, a physically based correction (spherical divergence and absorption coefficient) is preferable, with parameters chosen to balance amplitude as a function of time. For sediments an AGC is often acceptable if the main aims are structural.

Bandpass filter

A minimal filter to remove noise outside the seismic source bandwidth (e.g. remove low-frequency wave noise which would originally have been removed by filtering during acquisition).

Deconvolution

Some care is required here since we need to use methods which are robust in the presence of noise. Predictive deconvolution may be better than spiking deconvolution; we can also average the operators used over some number of traces. For sedimentary imaging we can use a shorter prediction lag (16ms) than if we want to see the lower crust (96ms). Some deconvolution before stack can be very important since the wavelets will be distorted during NMO. Remember that all the information required for deconvolution is present in the autocorrelation of the trace. We use a combination of the autocorrelation, the deconvolved data, and the spectrum to determine the effectiveness of the procedure.
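As a hedged sketch of the amplitude recovery and bandpass steps above (the gain power and filter corners are illustrative choices, not the parameters used on this line):

```python
# Hypothetical sketch of simple pre-stack amplitude recovery and band-pass filtering.
# `traces` is assumed to be an array of shape (n_traces, n_samples) at dt seconds.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spherical_divergence_gain(traces, dt, power=2.0):
    """Multiply each sample by t**power to compensate for geometrical spreading."""
    n_samples = traces.shape[-1]
    t = np.arange(1, n_samples + 1) * dt   # start at dt so the first sample is not zeroed
    return traces * t**power

def bandpass(traces, dt, low=3.0, high=40.0, order=4):
    """Zero-phase Butterworth band-pass filter (corner frequencies in Hz)."""
    sos = butter(order, [low, high], btype="bandpass", fs=1.0 / dt, output="sos")
    return sosfiltfilt(sos, traces, axis=-1)
```

And a minimal single-trace sketch of predictive deconvolution designed from the trace autocorrelation, as described above; the operator length and pre-whitening are assumed values.

```python
# Hypothetical sketch: Wiener predictive deconvolution of one trace, with the
# prediction filter designed from the trace autocorrelation.
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, dt, oplen=0.16, gap=0.016, whitening=0.1):
    """Remove predictable (e.g. bubble/reverberation) energy from a trace.

    oplen     operator length in seconds (assumed value)
    gap       prediction lag in seconds (16 ms for sediments, 96 ms for deep crust)
    whitening pre-whitening, per cent of the zero-lag autocorrelation
    """
    n_op  = int(round(oplen / dt))
    n_gap = int(round(gap / dt))
    # One-sided autocorrelation: holds all the information the design needs.
    ac = np.correlate(trace, trace, mode="full")[len(trace) - 1:]
    r = ac[:n_op].copy()
    r[0] *= 1.0 + whitening / 100.0          # stabilise the normal equations
    rhs = ac[n_gap:n_gap + n_op]             # desired output: the trace `gap` later
    pred = solve_toeplitz((r, r), rhs)       # prediction filter coefficients
    # Prediction-error filter: unit spike, (gap - 1) zeros, minus the prediction filter.
    pef = np.zeros(n_gap + n_op)
    pef[0] = 1.0
    pef[n_gap:] = -pred
    return np.convolve(trace, pef)[:len(trace)]
```

In practice the operator would be designed on a window of the trace and, as noted above, averaged over a number of neighbouring traces for robustness.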

Moveout corrections

These are more critical steps which result in a stack - care is required here in the selection of velocities.

f-k multiple attenuation

Reduce multiple amplitudes to make velocity analysis and DMO easier.

NMO/DMO velocity analysis

An iterative process of applying NMO, DMO and inverse NMO, then redoing standard velocity analysis. Including DMO improves the quality of the stack and the usefulness of the stacking velocity field. A variety of methods are available (constant velocity stacks, constant velocity gathers, semblance) which work to different extents with different data types.

NMO, DMO

Apply NMO and DMO using the final velocity field after convergence.

Mutes

A top mute may be applied automatically (based on the amount of NMO stretch), but it can be more effective to apply a hand-picked mute following DMO. It can also be useful to apply an inner-trace tail mute to remove near-zero-offset multiple energy which is not well suppressed by the f-k demultiple or the stack.

Stack

Stack the traces. A standard sum is fairly robust, but in noisy data other methods may be more effective (e.g. use the median sample value rather than the mean). (8,000 traces, 30Mb)

(Plot stack section)
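To make the moveout-and-stack arithmetic above concrete, here is a minimal sketch using a single constant velocity; real processing uses the picked, spatially varying velocity field, DMO and stretch muting.

```python
# Hypothetical sketch: constant-velocity NMO correction of one CMP gather,
# followed by a mean or median stack.
import numpy as np

def nmo_correct(gather, offsets, dt, v):
    """gather: (n_traces, n_samples); offsets in metres; v in m/s."""
    n_tr, n_s = gather.shape
    t0 = np.arange(n_s) * dt                       # zero-offset two-way times
    out = np.zeros_like(gather)
    for i, x in enumerate(offsets):
        t = np.sqrt(t0**2 + (x / v)**2)            # hyperbolic moveout t(x)
        out[i] = np.interp(t, t0, gather[i], left=0.0, right=0.0)
    return out

def stack(gather, method="mean"):
    """Stack an NMO-corrected gather; the median is more robust to noisy traces."""
    return np.median(gather, axis=0) if method == "median" else np.mean(gather, axis=0)
```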

Post-stack processing

After stack we have to apply migration; we may also apply more robust bandpass filtering and deconvolution, having increased the signal-to-noise ratio through stacking. These are all quite critical steps with a strong influence on the quality of the outcome. Data volumes are relatively small, so parameter testing may be carried out on the majority of the section; however, the final quality migrations may be computationally intensive.

f-k filter

Remove scattered energy which has survived through the stack.

Bandpass filter

Typically we now apply a time- and space-variant filter biased toward high frequencies in the sedimentary section and low frequencies in the basement. Parameters are chosen to minimise any remnant multiple energy in the deep section, and to minimise the effect of any bubble pulse in the shallow section.

Deconvolution

A more aggressive deconvolution may be possible with the better signal strength after stack. Often we use a time-variant method similar to that discussed for bandpass filtering.
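A minimal sketch of the f-k filter step above, assuming the stacked section is an array of traces by time samples; the dip limit is an illustrative value, and in practice the sharp reject boundary would be tapered to avoid ringing.

```python
# Hypothetical sketch of a post-stack f-k dip filter: transform to
# frequency-wavenumber space, zero energy outside a fan of dips, transform back.
import numpy as np

def fk_dip_filter(section, dt, dx, max_dip_s_per_m=0.0005):
    """section: (n_traces, n_samples). Keep |k/f| <= max dip (s/m); reject the rest."""
    n_x, n_t = section.shape
    spec = np.fft.fft2(section)                       # 2-D FFT over (x, t)
    k = np.fft.fftfreq(n_x, d=dx)[:, None]            # wavenumber, cycles per metre
    f = np.fft.fftfreq(n_t, d=dt)[None, :]            # frequency, Hz
    keep = np.abs(k) <= max_dip_s_per_m * np.abs(f)   # fan of acceptable dips
    keep[:, 0] = True                                 # keep the f = 0 column
    return np.real(np.fft.ifft2(spec * keep))
```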

Amplitude corrections

Amplitudes need to be analysed carefully at this point - lateral variations in amplitude will adversely affect the migration process.

Migration

Migration will lead to the final product, in either depth or time. We can apply migration using velocities based on our velocity analysis if they are good enough, by testing a range of different velocities to determine which collapse diffractions correctly, or by using other information. Care is required to produce a generally smooth velocity field. (8,000 traces, 30-60Mb)

(Plot final sections)
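As a sketch of the velocity-field conditioning mentioned above (the smoothing lengths are assumptions, not the values used for this line):

```python
# Hypothetical sketch: smooth a gridded velocity field before migration.
# Over-smoothing blurs strong lateral contrasts; under-smoothing produces
# migration artefacts from abrupt velocity changes.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_velocity(v, sigma_cmp=25, sigma_time=10):
    """v: velocity grid of shape (n_cmps, n_time_samples); sigmas in grid cells."""
    return gaussian_filter(v, sigma=(sigma_cmp, sigma_time))
```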

We may be able to convert our stacking velocities to a sensible interval velocity field, particularly if we have used an iterative NMO/DMO velocity analysis procedure. For example, we can compare the fields from the velocity analysis with those from a refraction profile along this line.
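A sketch of the standard Dix conversion from stacking (approximately RMS) velocities to interval velocities at a single CMP; the picks in the example are illustrative, not taken from this line.

```python
# Hypothetical sketch of Dix conversion at one CMP location.
import numpy as np

def dix_interval_velocity(t0, v_stack):
    """Dix equation: v_int,n^2 = (v_n^2 t_n - v_{n-1}^2 t_{n-1}) / (t_n - t_{n-1}).

    t0       zero-offset two-way times of the picks (s), increasing
    v_stack  stacking velocities at those times (m/s)
    """
    t0, v = np.asarray(t0, float), np.asarray(v_stack, float)
    num = v[1:]**2 * t0[1:] - v[:-1]**2 * t0[:-1]
    v_int = np.sqrt(num / (t0[1:] - t0[:-1]))
    return np.concatenate(([v[0]], v_int))   # first layer: interval = stacking velocity

# Example picks (illustrative only):
print(dix_interval_velocity([0.5, 1.2, 2.5], [1500.0, 1800.0, 2300.0]))
```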

Many tools are available to, for example, compare the seismic data and the migration velocity fields.

Things to consider

Which of the processes are benign and which will have severe effects on the data?

How might we adapt a processing flow to specialist requirements with the same dataset?

Why does so much of the processor's time get taken up in picking the velocities to use?
