Case Studies
Contributing authors:
Catherine Bleinès
Matthieu Bourges
Jacques Deraisme
François Geffroy
Nicolas Jeannée
Ophélie Lemarchand
Sébastien Perseval
Jérôme Poisson
Frédéric Rambert
Didier Renard
Yves Touffait
Laurent Wagner
Table of Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
2. About This Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
Mining. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
4. In Situ 3D Resource Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
4.1 Workflow Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
4.2 Presentation of the Dataset & Pre-processing . . . . . . . . . . . . . . . . . . . .16
4.3 Variographic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35
4.4 Kriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .68
4.5 Global Estimation With Change of Support. . . . . . . . . . . . . . . . . . . . . .78
4.6 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89
4.7 Displaying the Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
5. Non Linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .149
5.1 Introduction and overview of the case study . . . . . . . . . . . . . . . . . . . . .150
5.2 Preparation of the case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .152
5.3 Global estimation of the recoverable resources . . . . . . . . . . . . . . . . . . .171
5.4 Local Estimation of the Recoverable Resources . . . . . . . . . . . . . . . . . .183
5.5 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .223
5.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .240
6. 2D Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .249
6.7 Workflow Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .250
6.8 From 3D to 2D Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .251
6.9 2D Estimations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .260
6.10 3D Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .278
6.11 2D-3D Comparison. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .286
Oil & Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .287
8. Property Mapping & Risk Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . .289
8.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .290
8.2 Estimation of the Porosity From Wells Alone . . . . . . . . . . . . . . . . . . . .293
8.3 Fitting a Variogram Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .297
8.4 Cross-Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .299
8.5 Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .302
8.6 Estimation with External Drift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .306
8.7 Cokriging With Isotopic Neighborhood . . . . . . . . . . . . . . . . . . . . . . . . .309
8.8 Collocated Cokriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .317
10. Plurigaussian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .401
10.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .402
10.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .410
10.3 Creating the Structural Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .411
10.4 Creating the Working Grid for the Upper Unit . . . . . . . . . . . . . . . . .412
10.5 Computing the Proportions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .421
10.6 Lithotype Rule and Gaussian Functions . . . . . . . . . . . . . . . . . . . . . .437
10.7 Conditional Plurigaussian Simulation . . . . . . . . . . . . . . . . . . . . . . . .450
10.8 Simulating the Lithofacies in the Lower Unit . . . . . . . . . . . . . . . . . .453
10.9 Merging the Upper and Lower Units . . . . . . . . . . . . . . . . . . . . . . . . .465
19. Soil Pollution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .811
19.1 Presentation of the data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .812
19.2 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .816
19.3 Visualization of THC grades using the 3D viewer . . . . . . . . . . . . . .820
19.4 Exploratory Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .822
19.5 Fitting a variogram model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .829
19.6 Selection of the duplicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .832
19.7 Kriging of THC grades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .833
19.8 Intersection of interpolation results with the topography . . . . . . . . .838
19.9 3D display of the estimated THC grades . . . . . . . . . . . . . . . . . . . . . .852
19.10 THC simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .854
19.11 Simulation post-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .865
19.12 Displaying graphical results of risk analysis with the 3D Viewer . .871
20. Bathymetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .873
20.1 Presentation of the Data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .874
20.2 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .880
20.3 Interpolation by kriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .894
20.4 Superposition of models and smoothing of frontiers . . . . . . . . . . . . .916
20.5 Local GeoStatistics (LGS) application to bathymetry mapping . . . . .922
Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
22. Image Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .939
22.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .940
22.2 Exploratory Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .942
22.3 Filtering by Kriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .949
22.4 Other Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .955
22.5 Comparing the Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .959
23. Boolean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .965
23.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .966
23.2 Boolean Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .969
23.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .975
Introduction
This manual is designed:
- for new users, to get familiar with the software; it gives some leading lines to carry a study through;
- for all users, to improve their geostatistical knowledge by following detailed geostatistical workflows.
Basically, each case study describes how to carry out specific calculations in Isatis as precisely as possible. The data sets are located on your disk in a sub-directory of the Isatis installation directory called Datasets.
You may follow the workflow proposed in the manual (all the main parameters are described) and then compare the results and figures given in the manual with the ones you get from your own run.
Most case studies are dedicated to a given field (Mining, Oil & Gas, Environment, Methodology) and are therefore grouped together in the appropriate sections. However, new users are advised to run as many case studies as possible, whatever their field of application. Indeed, each case study describes different functions of the package which are not necessarily exclusive to one application field but may be useful for others.
Several case studies, namely In Situ 3D Resources Estimation (Mining), Property Mapping (Oil
& Gas) and Pollution (Environment) almost cover entire classic geostatistical workflows: exploratory data analysis, data selections and variography, monovariate or multivariate estimation, simulations.
The other Case Studies are more specific and mainly deal with particular Isatis facilities, as
described below:
- Non Linear: anamorphosis (with and without information effect), indicator kriging, disjunctive kriging, uniform conditioning, service variables and simulations.
- Non Stationary & Volumetrics: non-stationary modeling, external drift kriging and simulations, volumetric calculations, spill point calculation, variable editor.
- Young Fish Survey, Acoustic Fish Survey: polygons editor, global estimation.
Note - Not all case studies are necessarily updated for each Isatis release. Therefore, the last update and the corresponding Isatis version are systematically given in the introduction.
Mining
4. In Situ 3D Resource Estimation
This case study is based on a real 3D data set kindly provided by Vale
(Carajás mine, Brazil).
- the first link opens the application description of the User's Guide: this allows the user to have a complete description of the application as it is implemented in the software;
- the second link sends the user to the corresponding practical application example in the case study.
Applications in bold are the most important for achieving kriging and simulation:
- manual: the user chooses the basic structures himself (with their types, anisotropy, ranges and sills), entering the parameters at the keyboard or, for ranges and sills, interactively in the Fitting Window. This is used for modeling the variogram of the indicator of rich ore;
- automatic: the model is entirely defined (ranges, anisotropy and sills) from the definition of the types and number of nested structures the user wants to fit. This is used for modeling the Fe grade of rich ore.
a simple 3D geological model resulting from previous geological work (block size: 75 m horizontally and 15 m vertically) is provided in a 3D grid file called block model_75x75x15m.asc.
Firstly, a new study has to be created using the File / Data File Manager facility; then, it is advised
to verify the consistency of the units defined in the Preferences / Study Environment / Units window. In particular, it is suggested to use:
(excerpt of the raw drillhole data records)
The samples are organized along lines and the file contains two types of records:
- The header record (for collars), which starts with an asterisk in the first column and introduces a new line (i.e. borehole).
The file contains two delimiter lines which define the offsets for both records.
The dataset is read using the File / Import / ASCII procedure and stored in two new files of a new directory called Mining Case Study:
- The file Drillholes Header, which contains the header of each borehole, stored as isolated points.
- The file Drillholes, which contains the cores measured along the boreholes.
(snap. 4.2-1)
You can check in File / Data File Manager (by pressing s for statistics on the Drillholes file) that
the data set contains 188 boreholes, representing a total of 5766 samples. There are five numeric
variables (heterotopic dataset), whose statistics are given in the next table (using Statistics/Quick
Statistics...):
Variable   Number   Minimum   Maximum    Mean   St. Dev.
Al2O3        3591      0.07     44.70    1.77       4.14
Fe           5069      4.80     69.40   60.51      14.19
Mn           5008      0.00     30.70    0.58       1.75
P            5069      0.00      1.00    0.06       0.08
SiO2         3594      0.05     75.50    1.54       4.32
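As a sanity check outside the software, the same per-variable statistics can be recomputed with pandas. This is a sketch on hypothetical random values (the real ones come from the imported Drillholes file); NaN marks non-assayed samples, which is why the counts differ between variables in a heterotopic dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the sample table: Fe is assayed everywhere,
# Al2O3 only on ~70% of the samples (heterotopic data -> NaN elsewhere).
rng = np.random.default_rng(0)
fe = rng.uniform(4.8, 69.4, size=100)
al = np.where(rng.random(100) < 0.7, rng.uniform(0.07, 44.7, 100), np.nan)
df = pd.DataFrame({"Fe": fe, "Al2O3": al})

# Equivalent of Statistics / Quick Statistics: count, min, max, mean and
# standard deviation per variable, silently ignoring the missing samples.
stats = df.agg(["count", "min", "max", "mean", "std"]).T
print(stats)
```

The differing counts per row reproduce the behaviour seen in the table above, where each variable has its own number of informed samples.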
We will focus mainly on the Fe variable. Also note the presence of an alphanumeric variable called Lithological Code Alpha.
In the Data area, select the file Mining Case Study/Drillholes, without selecting any variable, as we are looking for a display of the borehole geometry.
Click on Display, and OK. The lines appear in the graphic window.
To change the View Point, click on the Camera tab and choose for instance:
- Longitude = -46
- Latitude = 20
Using the Display Box tab, deselect the toggle Automatic Scales and stretch the vertical dimension Z by a factor of 3.
Click on Display.
You should obtain the following display. You can save this template to automatically reproduce
it later: just click on Application / Store Page as in the graphic window.
(fig. 4.2-1)
     Minimum     Maximum
X    0.009 km    3.97 km
Y    -0.35 km    3.77 km
Z    -54.9 m     +811.8 m
Most of the boreholes are vertical and horizontally spaced approximately every 150 m. The vertical dimension is oriented upwards.
Two domains are defined from the lithological codes:
- the first one, called rich ore, corresponds to the lithological codes 1, 3 and 6;
- the second one, called poor ore, corresponds to the lithological codes 10 and above.
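The grouping rule above can be sketched in a few lines with NumPy; the litho array is hypothetical, only the mapping of codes to domains comes from the text:

```python
import numpy as np

# Hypothetical lithological codes along the composites; only the grouping
# rule comes from the text: rich ore = codes 1, 3, 6; poor ore = codes >= 10.
litho = np.array([1, 3, 6, 10, 12, 2, 25, 6])

rich = np.isin(litho, [1, 3, 6])
poor = litho >= 10
undefined = ~(rich | poor)            # everything else, e.g. code 2 here

# One macro-selection style code per sample: 1 = rich, 2 = poor, 0 = undefined.
domain = np.where(rich, 1, np.where(poor, 2, 0))
print(domain)    # [1 1 1 2 2 0 2 1]
```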
(snap. 4.2-1)
For creating the Rich ore, Poor ore and Undefined indices, you should give the name you want (this has to be repeated three times). Then, in the bottom part of the window, you will define the rules to apply. For each rule, you then have to choose the variable it depends on, here Lithological Code Integer, and the criterion to apply among the list you get by clicking on the button proposing Equals as default:
- in the case of Poor ore, you choose to match 2 rules (see the snapshot on the previous page);
- in the case of Undefined, you choose to match any of two rules (see the next snapshot).
(snap. 4.2-2)
(snap. 4.2-1)
When pressing the "Display as Points" button, the following graphic window opens, representing by a green + symbol (according to the menu Preferences / Miscellaneous) the headers of all the boreholes in a 2D XOY projection.
(snap. 4.2-2)
By picking the 4 boreholes with the left mouse button, their symbols blink; they can then be masked by using the menu button of the mouse and clicking on Mask. The 4 masked boreholes are then represented with a red square (according to the menu Preferences / Miscellaneous).
In the Geographic Selection window the number of selected samples (i.e. boreholes) appears (184 out of 188). To store the selection you must click on Run.
(snap. 4.2-3)
This selection is defined on the drillhole collars. In order to apply this selection to all samples of the
drillholes, a possible solution is to use the menu Tools / Copy Variable / Header Point -> Line.
(snap. 4.2-4)
example) as it does not make sense to combine data that do not represent the same amount of material.
Therefore, if data are measured on different support sizes, a first, essential task is to convert the information into composites of the same dimension. This dimension is usually a multiple of the size of the smallest sample, and is related to the height of the benches, which is in this case 15 m.
Two compositing methods are available:
- By length: the boreholes are cut into intervals of the same length from the borehole collar, or into intervals intersecting the boreholes and a regular system of horizontal benches. This is performed with the Tools / Regularization by Benches or by Length facility, and consists in creating a replica of the initial data set where all the variables of interest in the input file are converted into composites.
- By domain: the boreholes are cut into intervals of the same length, determined on the basis of the domain definition. Each time the domain assigned to the assay changes, a new composite is created. The advantage of this method is to obtain more homogeneous composites. This is performed with the Tools / Regularization by Domains facility.
We will work on the 5 numerical variables Al2O3, Fe, Mn, P and SiO2.
The regularization by length is performed on the 5 numerical variables Al2O3, Fe, Mn, P and SiO2 and on the lithological code, in order to keep for each composite the information on the most abundant lithology and the corresponding proportion. The new files are called:
- Composites 15m by length header for the header information (collars);
- Composites 15m by length for the composite information.
Regularization mode: By Length measured along the borehole. This is the selected option, as some boreholes are inclined, with a constant length of 15 m.
Minimum Length: 7.5 m. It may happen that the first composite, or the last composite (or both), does not have the requested dimension. Keeping too many of those incomplete samples would lead us back to the initial problem of samples of different dimensions being considered with the same importance: this is why the minimum length is set to 7.5 m (i.e. half of the composite size).
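The length rule above (fixed 15 m composites, length-weighted averaging, incomplete pieces under 7.5 m discarded) can be sketched as follows. This is not the Isatis implementation, just a single-borehole illustration with made-up grades:

```python
import numpy as np

def composite_by_length(frm, to, grade, length=15.0, min_length=7.5):
    """Length-weighted compositing of contiguous intervals of one borehole,
    measured from the collar; composites shorter than min_length (incomplete
    first/last pieces) are discarded, as with the 7.5 m rule above."""
    end = to[-1]
    edges = np.arange(0.0, end + length, length)
    out = []
    for a, b in zip(edges[:-1], edges[1:]):
        b = min(b, end)
        # length of overlap between each raw interval and the composite [a, b)
        w = np.clip(np.minimum(to, b) - np.maximum(frm, a), 0.0, None)
        if b - a >= min_length and w.sum() > 0:
            out.append((float(a), float(b), float((w * grade).sum() / w.sum())))
    return out

# 5 m samples over 0-20 m: one full 15 m composite is kept; the residual
# piece (15-20 m, only 5 m long) is below 7.5 m and is dropped.
frm = np.array([0.0, 5.0, 10.0, 15.0])
to = np.array([5.0, 10.0, 15.0, 20.0])
fe = np.array([60.0, 63.0, 66.0, 50.0])
print(composite_by_length(frm, to, fe))    # [(0.0, 15.0, 63.0)]
```

Dropping the 5 m residual is exactly the situation described for boreholes whose first or last composite does not reach half the composite size.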
(snap. 4.2-1)
Three boreholes are not reproduced in the composite file as their total length is too small
(less than 7.5m): boreholes 93, 163 and 171. There are 1282 composites in the new output
file.
The regularization by domain will calculate composites for the two domains rich ore and poor ore. The macro selection defining the domains in the input file is created with the same indices in the output composites file. The selection mask drillholes outside is activated to regularize only the boreholes within the orebody envelope. Only Fe, P and SiO2 are regularized. The new files are called:
The Undefined Domain is assigned to the Undefined index. It means that when a sample is in the Undefined Domain, the compositing procedure keeps on going (see the on-line Help for more information).
The option Merge Residual is chosen, which means that the last composite is merged with the previous one if its length is less than 50% of the composite length.
(snap. 4.2-2)
There are 1485 composites on the 184 boreholes in the new output file. From now on, all geostatistical processes will be applied to that file of composites regularized by domains.
Using Statistics / Quick Statistics we can obtain different types of statistics, for example:
- The statistics on the Fe grades by domain. Note that after compositing there are no more Undefined composites.
(snap. 4.2-3)
(snap. 4.2-4)
- Graphic representations with boxplots, by slicing according to the main axes of space.
(snap. 4.2-5)
(fig. 4.2-1)
(snap. 4.2-6)
(snap. 4.2-7)
The swathplot along OY shows, for Fe in rich ore, a trend of decreasing grades from South to North.
The file contains only one numeric variable, named domain code, which equals 0, 1 or 2:
- 0 means the grid node lies outside the orebody;
- 1 means the grid node lies in the southern part of the orebody;
- 2 means the grid node lies in the northern part of the orebody.
Launch File/Import/ASCII... to import the grid in the Mining Case Study directory and call it 3D
Grid 75x75x15 m.
(snap. 4.2-1)
You have now to create a selection variable, called orebody, for all blocks where the domain code
is either 1 or 2, by using the menu File / Selection / Intervals.
(snap. 4.2-2)
In the Contents list, double-click on the Raster item. A new Item contents for: Raster window appears, letting you specify which variable you want to display and with which color scale:
- Grid File...: select the orebody variable from the 3D Grid 75x75x15 m file;
- In the Grid Contents area, enter 16 for the rank of the XOY section to display;
- In the Graphic Parameters area below, the default color scale is Rainbow;
- Click on OK.
Your final graphic window should be similar to the one displayed hereafter.
(fig. 4.2-1)
The orebody lies approximately North-South, with a curve towards the southwestern part. The northern part thins out along the northern direction and has a dipping plane striking North with a western dip of approximately 15°. This particular geometry will be taken into account during the variographic analysis.
and on the rich ore Fe grade, which is defined on rich ore composites.
The Exploratory Data Analysis (EDA) will be used to perform quality control, check statistical characteristics and establish the experimental variograms. Then variogram models will be fitted:
- Calculation of directional variograms in the horizontal plane. For simplification we keep 2 orthogonal directions, East-West (N90) and North-South (N0).
- Check that the main directions of anisotropy are swapped when looking at the northern or southern boreholes.
- Save the indicator variogram in the northern part (where most of the data are), with the idea that the variogram in the southern part is the same as in the North with the N0 and N90 directions of the anisotropy inverted. In practice this will be realized at the kriging/simulation stage through the use of Local Parameters for the variogram structures.
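The directional experimental variograms the EDA computes can be sketched as follows. This is a simplified 2D version (azimuth measured from North, fixed angular tolerance); the function name and signature are illustrative, not the Isatis API:

```python
import numpy as np

def directional_variogram(xy, z, azimuth_deg, lag, nlags, tol_deg=22.5):
    """Experimental variogram: mean of 0.5 * (z_i - z_j)^2 over the pairs
    whose separation vector falls in each lag class and within an angular
    tolerance of the chosen azimuth (measured clockwise from North)."""
    az = np.deg2rad(azimuth_deg)
    u = np.array([np.sin(az), np.cos(az)])    # unit vector of the azimuth
    gam = np.zeros(nlags)
    npairs = np.zeros(nlags, dtype=int)
    for i in range(len(z) - 1):
        d = xy[i + 1:] - xy[i]
        h = np.hypot(d[:, 0], d[:, 1])
        ang_ok = np.abs(d @ u) >= h * np.cos(np.deg2rad(tol_deg))
        k = (h / lag).astype(int)             # lag class of each pair
        ok = ang_ok & (k < nlags) & (h > 0)
        sq = 0.5 * (z[i + 1:] - z[i]) ** 2
        np.add.at(gam, k[ok], sq[ok])
        np.add.at(npairs, k[ok], 1)
    mean = np.where(npairs > 0, gam / np.maximum(npairs, 1), np.nan)
    return mean, npairs

# Tiny check: three samples along an East-West line, N90 direction.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
z = np.array([0.0, 1.0, 2.0])
g, npairs = directional_variogram(xy, z, azimuth_deg=90.0, lag=1.0, nlags=3)
print(npairs)    # [0 2 1]
```

Swapping azimuth_deg between 0 and 90 is the "inverted N0/N90" idea mentioned above for reusing the northern variogram in the southern zone.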
(snap. 4.3-1)
(snap. 4.3-1)
Highlight the Indicator rich ore variable in the main EDA window and open the Base Map and Histogram:
(fig. 4.3-1)
(snap. 4.3-2)
(snap. 4.3-3)
After pressing OK you get the representation of the Variogram Map. In the Application menu, ask for Invert View Order to have the variogram map and the extracted experimental variograms in a landscape view.
In the Application menu, ask for Graphic Specific Parameters and change the Color Scale to Rainbow Reversed.
In the variogram map representation, drag with the mouse a zone containing all directions. With the menu button, ask for Activate Direction. You will then visualize the experimental variograms in the 18 directions of the horizontal plane. The map clearly exhibits anisotropic behaviour.
(snap. 4.3-4)
We will now calculate the experimental variograms directly from the main EDA window by clicking on the Variogram bitmap at the bottom of the window. In the next figure we can see the parameters used for the calculation of the 4 directional variograms in the horizontal plane and the vertical variogram.
(snap. 4.3-5)
(snap. 4.3-6)
(snap. 4.3-7)
For the sake of simplicity we decide to keep only 2 directions: N0, showing more continuity, and the perpendicular direction N90.
The procedure to follow is:
In Regular Direction, choose 2 for Number of Regular Directions and switch on Activate Direction Normal to the Reference Plane. Click OK and go back to the Variogram Calculation Parameters window.
(snap. 4.3-8)
You then have to define the parameters for each direction. Click the parameter table to edit. For applying the same parameters to the 2 horizontal directions, you must highlight these directions in the Directions list of the Directions Definition window.
For the two regular directions, choose the following parameters:
- Number of lags: 15 (so that the variogram will be calculated over a 1350 m distance);
- Lag Subdivision: 45 m (so that we can have the variogram at short distances from the closely spaced drillholes).
For the direction normal to the reference plane:
- Lag value: 15 m;
- Number of lags: 10.
In the Application Menu ask for Graphic Specific Parameters and click on the toggle button
for the display of the Histogram of Pairs.
(snap. 4.3-9)
Because the general shape of the orebody is anisotropic, we will calculate the variogram restricted to the northern part and to the southern part of the orebody.
To do so, you will use the capabilities of the linked windows of the EDA, by masking samples in the Base Map. The variograms are automatically recalculated with only the selected samples.
For instance, in the Base Map, drag a box around the data in the southern part (as shown on the figure) and, with the menu button of the mouse, ask for Mask. You will then get the variogram calculated from the northern data.
(snap. 4.3-10)
In the next figure we compare the variograms calculated from the northern and the southern data.
The main directions of anisotropy are swapped between North and South.
(snap. 4.3-11)
(snap. 4.3-12)
We now decide to fit a variogram model on the northern variogram, which is calculated with the most abundant data. Then we will apply the same variogram to the southern data with the main axes of anisotropy swapped. This will be realized by means of local parameters attached to the variogram model and to the neighborhood.
In the graphic window containing the experimental variogram in the northern zone, click on Application / Save in Parameter File and save the variogram under the name Indicator rich ore North.
- the Parameter File containing the set of experimental variograms: Indicator rich ore North.
Set the toggles Fitting Window and Global Window ON; the program automatically displays one default spherical model. The Fitting window displays one direction at a time (you may choose the direction to display through Application / Variable & Direction Selection...), and the Global window displays every variable (if several) and direction in one graphic.
To display each direction in separate views, click in the Global Window on Application / Graphic Specific Parameters and choose the Manual mode. Choose 3 for Nb of Columns, then Add, picking in turn for each Current Column, in the View Contents area, the First Variable, the Second Variable and the Direction.
(snap. 4.3-1)
(snap. 4.3-2)
The model is automatically defined with the same rotation definition as the experimental variogram. Three different structures have been defined (in the Model Definition window, use the Add button to add a structure, and define its characteristics below, for each structure):
(snap. 4.3-3)
- Nugget effect;
- Anisotropic Exponential model with the following respective ranges along U, V and W: 700 m, 550 m and 70 m;
- Anisotropic Exponential model with the following respective ranges along U, V and W: 500 m, 5000 m and nothing (which means that it is a zonal component with no contribution in the vertical direction).
Do not specify the sill for each structure at this stage, instead:
click Nugget Effect in the main Variogram Fitting window, set the toggle button Lock the Nugget Effect Components During Automatic Sill Fitting ON and enter the value 0.065.
(snap. 4.3-4)
- set the toggle Automatic Sill Fitting ON. The program automatically computes the sills and displays the results in the graphic windows.
A final adjustment is necessary, particularly to get a total sill of 0.25, which is the maximum admissible for a stationary indicator variogram (the variance of an indicator with proportion p is p(1-p), at most 0.25). Set the toggle Automatic Sill Fitting OFF in the main Variogram Fitting window, then in the Model Definition window set the sill of the first exponential to 0.14 and the sill of the second exponential to 0.045.
Enter the name of the Parameter File in which you wish to save the resulting model: Indicator rich ore.
The final model is saved in the parameter file by clicking Run in the Variogram Fitting window.
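The resulting nested model can be written out as a sum of structures. The sketch below assumes the quoted ranges are practical ranges (so the exponential uses a scale of range/3), which may differ from the software's internal convention:

```python
import numpy as np

# Sketch of the fitted nested model for the rich ore indicator:
#   nugget 0.065
# + exponential, ranges (U, V, W) = (700, 550, 70) m, sill 0.14
# + zonal exponential, ranges (500, 5000, none) m, sill 0.045
# Assumption: ranges are practical ranges, so gamma = sill * (1 - exp(-3 h))
# with each distance component scaled by its range.

def gamma_exp(hu, hv, hw, ranges, sill):
    ru, rv, rw = [np.inf if r is None else r for r in ranges]
    hn = np.sqrt((hu / ru) ** 2 + (hv / rv) ** 2 + (hw / rw) ** 2)
    return sill * (1.0 - np.exp(-3.0 * hn))

def gamma_model(hu, hv, hw):
    g = 0.0 if hu == hv == hw == 0 else 0.065                 # nugget effect
    g += gamma_exp(hu, hv, hw, (700.0, 550.0, 70.0), 0.14)
    g += gamma_exp(hu, hv, hw, (500.0, 5000.0, None), 0.045)  # zonal: no W term
    return g

# Far away in the horizontal plane the model reaches the total sill 0.25,
# the maximum admissible for a stationary indicator (p * (1 - p) <= 1/4).
print(round(gamma_model(1e6, 1e6, 0.0), 3))    # 0.25
```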
(snap. 4.3-5)
(snap. 4.3-1)
You will calculate the variograms in 2 directions of the dipping plane striking North with a western dip of 15°. In the Calculation Parameters, ask in the List of Options for Directional. Then click Regular Directions; a new Directions window pops up, where you will define the Reference Direction and switch on Activate Direction Normal to the Reference Plane.
(snap. 4.3-2)
Click Reference Direction; in the 3D Direction Definition window, set the convention to User Defined and define the rotation parameters as shown in the next figure.
(snap. 4.3-3)
The reference direction U (in red) corresponds to the N121 main direction of anisotropy.
The calculation parameters are then chosen as shown in the next figure.
(snap. 4.3-4)
- the anisotropy is not really marked, so we will recalculate an isotropic variogram in the horizontal plane;
- the second point of the variogram for the direction N121, calculated with 42 pairs, shows a peak that we can explain by using the Exploratory Data Analysis linked windows.
(snap. 4.3-5)
To use the linked windows, the following actions have to be taken:
- in the Graphic Specific Parameters of the graphic page containing the experimental variogram, set the toggle button Variogram Cloud (if calculated) OFF, and click on the radio button Pick from Experimental Variogram;
- in the Calculation Parameters of the graphic page containing the experimental variogram, set the toggle button Calculate the Variogram Cloud ON;
- in the graphic page, click on the experimental point with 33 pairs and ask in the mouse menu for Highlight. The variogram point is then represented as a blue square, and all the data making up the pairs are represented by the part painted in blue in the histogram.
(snap. 4.3-6)
The high variability due to pairs made of the samples with low values is responsible for the peak in the variogram. This can be proved by clicking in the histogram on the bar of the minimum values and clicking with the mouse menu on Mask: the variograms are automatically recalculated and no longer show the anomalous point, as shown in the next figure.
(snap. 4.3-7)
We now recalculate the variograms with 2 directions, omni-directional in the horizontal plane and vertical, with the parameters shown hereafter, entered by clicking Regular Directions....
(snap. 4.3-8)
(snap. 4.3-9)
In the graphic containing this last variogram, ask for Application -> Save in Parameter File to save the variogram under the name Fe rich ore.
- the Parameter File containing the set of experimental variograms: Fe rich ore;
- the Parameter File in which you wish to save the resulting model: Fe rich ore.
In the Model Initialization section, choose Spherical (Short + Long Range) and click on Add Nugget.
(snap. 4.3-1)
(fig. 4.3-1)
Statistics / Domaining / Border Effect calculates bi-point statistics from pairs of samples belonging to different domains. The pairs are chosen in the same way as for experimental variogram calculations.
Statistics / Domaining / Contact Analysis calculates the mean values of samples of 2 domains as a function of the distance to the contact between these domains along the drillholes.
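The contact-analysis statistic (mean grade as a function of the distance to the domain contact) can be illustrated on a single hypothetical drillhole; the numbers below are invented, only the principle comes from the text:

```python
import numpy as np

# One hypothetical drillhole: sample depths, the domain each sample belongs
# to, and its Fe grade (invented values, for illustration only).
depth = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
domain = np.array(["rich", "rich", "rich", "poor", "poor", "poor", "poor"])
fe = np.array([66.0, 64.0, 60.0, 45.0, 38.0, 35.0, 34.0])

# Distance of each sample to the nearest domain change along the hole.
contacts = depth[1:][domain[1:] != domain[:-1]]
dist = np.min(np.abs(depth[:, None] - contacts[None, :]), axis=1)

# Mean grade next to the contact vs. further away, per domain: a border
# effect shows up as a near-contact mean pulled toward the other domain.
for dom in ("rich", "poor"):
    sel = domain == dom
    near = fe[sel & (dist <= 15.0)].mean()
    far = fe[sel & (dist > 15.0)].mean()
    print(dom, near, far)
```

In this toy example the poor-ore mean near the contact is higher than far from it, which is the kind of border effect discussed below.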
(fig. 4.3-1)
Switch on the three toggle buttons for the Graphic Parameters and click on Run.
(snap. 4.3-1)
Three graphic pages corresponding to the three statistics are then displayed:
- Transition Probability, which, in the case of only 2 domains, is not very informative.
(snap. 4.3-2)
- Mean [Z(x+h)|Z(x)], which shows that when going from Rich ore to Poor ore there is a border effect: the grade of the new domain, i.e. Poor ore, is higher than the mean Poor ore grade, which means it is influenced at short distances by the proximity of Rich ore samples. Conversely, when going from Poor ore to Rich ore there is no border effect.
(snap. 4.3-3)
- Mean Diff [Z(x+h)-Z(x)], which shows that when going from Rich ore to Poor ore, as well as when going from Poor ore to Rich ore, the grade difference is influenced by the proximity of both domains.
(snap. 4.3-4)
(snap. 4.3-1)
In the Application menu of the graphic pages, ask for the Graphical Parameters, as shown below, to display the Number of Points and the Mean per Domain.
(snap. 4.3-2)
(snap. 4.3-3)
Contact Analysis (Non-Oriented) displays the average of the two previous ones.
(snap. 4.3-4)
From these graphs it appears that the poor grades are influenced by the proximity of rich grades. We therefore decide, for the kriging and simulation steps, to apply a hard boundary when dealing with rich ore.
4.4 Kriging
We are now going to estimate the tonnage and Fe grade of Rich ore on 75 m x 75 m x 15 m blocks. This involves two steps:
- Kriging of the Indicator of Rich ore, to get the estimated proportion of rich ore, from which the tonnage can be deduced.
- Kriging of the Fe grade of rich ore using only the rich ore samples. Each block is then estimated as if it were entirely in rich ore; by applying the estimated tonnage, we can then obtain an estimate of the Fe metal content.
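As a quick sketch of this two-step logic (a hypothetical example: the proportion and grade values are illustrative, while the density of 4 t/m3 is the constant used later in this case study):

```python
def block_tonnage_and_metal(proportion, fe_grade_pct, block_volume_m3, density):
    """Rich-ore tonnage of a block and its contained Fe metal, in tons."""
    tonnage = proportion * block_volume_m3 * density
    metal = tonnage * fe_grade_pct / 100.0
    return tonnage, metal

# 60% of a 75 x 75 x 15 m block in rich ore at 65% Fe, density 4 t/m3.
t, m = block_tonnage_and_metal(0.60, 65.0, 75 * 75 * 15, 4.0)
```

The kriged indicator plays the role of `proportion`; the kriged Fe grade of rich ore plays the role of `fe_grade_pct`.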
(snap. 4.4-1)
(snap. 4.4-2)
Specify the type of calculation as Block and the number of variables as 1, then:
- Input File: Indicator rich ore (Composites on 15m with the selection None).
- The names of the variables in the output file (3D Grid 75 x 75 x 15 m), with the orebody selection active:
  - Kriging indicator rich ore for the estimation of Indicator rich ore;
  - Kriging indicator rich ore std dev for the kriging standard deviation.
- The variogram model contained in the Parameter File called Indicator rich ore.
- The neighborhood: open the Neighborhood... definition window and specify the name (Indicator rich ore, for instance) of the new parameter file which will contain the following parameters, to be defined from the Edit... button nearby. The neighborhood type is set by default to moving:
(snap. 4.4-3)
- The moving neighborhood is an ellipsoid with No rotation, which means that the U, V, W axes are the original X, Y, Z axes;
- Set the dimensions of the ellipsoid to 800 m, 600 m and 60 m along the vertical direction;
- Block discretization: as we chose to perform Block kriging, the block discretization has to be defined. The default setting is 5 x 5 x 1, meaning each block is subdivided by 5 in the X and Y directions but is not divided in the Z direction. The Block Discretization sub-window may be used to change these settings and to check how different discretizations influence the block covariance Cvv. In this case study, the default parameters 5 x 5 x 1 will be kept.
- The Local Parameters: open the Local Parameters Loading... window and specify the name of the Local Parameters File (3D Grid 75x75x15m). For the Model All Structures and Neighborhood tabs, switch ON Use Local Rotation (Mathematician convention), then 2D, and define the variable Rot Z as Rotation/Z.
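The block covariance Cvv affected by the discretization can be sketched as the average covariance between the discretization points of one block. This is a minimal illustration assuming a single spherical structure with an arbitrary sill and range, not the fitted model of this case study:

```python
import itertools
import math

def spherical_cov(h, sill, rng):
    """Spherical covariance: sill * (1 - 1.5 h/a + 0.5 (h/a)^3), zero beyond the range."""
    if h >= rng:
        return 0.0
    r = h / rng
    return sill * (1.0 - 1.5 * r + 0.5 * r ** 3)

def block_cvv(dx, dy, dz, nx, ny, nz, sill, rng):
    """Average covariance between the nx * ny * nz discretization points of a block."""
    pts = [((i + 0.5) * dx / nx, (j + 0.5) * dy / ny, (k + 0.5) * dz / nz)
           for i in range(nx) for j in range(ny) for k in range(nz)]
    total = sum(spherical_cov(math.dist(p, q), sill, rng)
                for p, q in itertools.product(pts, repeat=2))
    return total / len(pts) ** 2

# Default 5 x 5 x 1 discretization of a 75 x 75 x 15 m block.
cvv = block_cvv(75, 75, 15, 5, 5, 1, sill=1.0, rng=500.0)
```

Refining the discretization (e.g. 10 x 10 x 1) changes Cvv only slightly when it is already fine enough, which is exactly the sensitivity check the sub-window allows.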
(snap. 4.4-4)
It is possible to check both the model and the neighborhood performance when processing a grid node, and to display the results graphically: this is the purpose of the Test option at the bottom of the (Co-)Kriging main window. When pressing it, a graphic page opens where:
- By pressing once on the left mouse button, the target grid is shown (in fact an XOY section of it; you may select different sections through Application/Selection For Display...). You can then move the cursor to a target grid node: click once more to initiate kriging. The samples selected in the neighborhood are highlighted and the weights are displayed. We can see here that the nearest samples get the highest weights. It is also important to check that the negative weights due to the screen effect are not too large. The neighborhood can sometimes be changed to avoid this kind of problem (more sectors and fewer points per sector, for instance).
- You can also select the target grid node by giving the indices along X, Y and Z with the Application menu Target Selection (for instance 6, 11, 16). You can then see how the local parameters used for the neighborhood are applied.
(snap. 4.4-5)
(snap. 4.4-6)
Note - From Application/Link to 3D viewer, you may ask for a 3D representation of the search
ellipsoid if the 3D viewer application is already running (see the end of this case study).
Close the Test Window and press RUN.
7814 grid nodes have been estimated. Basic statistics of the variables are displayed below.
(fig. 4.4-1)
The kriging standard deviation is an indicator of the estimation error and depends only on the geometrical configuration of the data around the target grid node and on the variogram model. Basically, the standard deviation decreases as the estimated grid node gets closer to the data.
Some blocks have a kriged indicator above 1. These values will be set to 1 by means of File / Calculator.
(snap. 4.4-7)
Note - In the main Kriging window, the optional toggle Full set of Output Variables allows storing other kriging parameters in the Output File: slope of regression, weight of the mean, estimated dispersion variance of the estimates, etc.
- Input File: Fe (Composites on 15m with the selection final lithology{rich ore}).
- The names of the variables in the output file (3D Grid 75 x 75 x 15 m), with the orebody selection active:
  - Kriging Fe rich ore for the estimation of the Fe grade;
  - Kriging Fe rich ore std dev for the kriging standard deviation.
- The variogram model contained in the Parameter File called Fe rich ore.
- The neighborhood: open the Neighborhood... definition window and specify the name (Fe rich ore, for instance) of the new parameter file which will contain the following parameters, to be defined from the Edit... button nearby. The neighborhood type is set by default to moving:
  - The moving neighborhood is an ellipsoid with No rotation, which means that the U, V, W axes are the original X, Y, Z axes;
  - Set the dimensions of the ellipsoid to 800 m, 300 m and 50 m along the vertical direction;
- Block discretization: as we chose to perform Block kriging, the block discretization is kept at the default 5 x 5 x 1.
- Apply Local Parameters, but only for the Neighborhood, where you use the Rot Z variable for the 2D Rotation/Z.
(snap. 4.4-8)
After Run, you can calculate the statistics of the kriged estimate by asking, in Statistics / Quick Statistics, to apply the variable Kriging indicator rich ore as a Weight. 7561 blocks out of 7814 have been kriged. By using a weight variable you obtain the statistics weighted by the proportion of each block in rich ore.
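The weighted statistics can be sketched as follows (toy values; the weight is the kriged indicator, i.e. the proportion of each block in rich ore):

```python
def weighted_mean(values, weights):
    """Mean weighted by the kriged proportion of each block in rich ore."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

grades = [66.0, 64.0, 65.5]       # kriged Fe per block (illustrative)
proportions = [1.0, 0.2, 0.8]     # kriged indicator per block
mean_fe = weighted_mean(grades, proportions)
```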
(snap. 4.4-9)
(fig. 4.4-2)
The mean grade is close to the average grade of the composites (65.84). Therefore, in the next steps, when carrying out non-linear methods which require modeling the distribution, we will not apply any declustering weights.
Note - When kriging blocks that are too small, with a high error level, applying a cut-off to the kriged grades will induce biased tonnage estimates due to the strong smoothing effect. It is then recommended to use non-linear estimation techniques or simulations (see the Non Linear case study). For global estimation, another alternative is to use Gaussian anamorphosis modeling, as described below.
Note - From a support size point of view, composites will be considered as points compared to
blocks.
The technique will not be mathematically detailed here: the reader is referred to the Isatis on-line help and technical references. Basically, the anamorphosis transforms an experimental dataset into a gaussian dataset (i.e. one having a gaussian histogram). The anamorphosis is bijective, so it is possible to back-transform gaussian values to raw values. A gaussian histogram is often a pre-requisite for using non-linear and simulation techniques. The anamorphosis function may be modelled in two ways:
- by a discretization with n points between a negative gaussian value of -5 and a positive gaussian value of +5;
- by a decomposition into Hermite polynomials up to a degree N. This was the only possibility until Isatis release V10.0. It is still compulsory for some applications, as will be explained later on.
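The principle can be illustrated with a simple empirical normal-score transform; this is a sketch of the idea only, not the discretized or Hermite-polynomial model actually used by Isatis:

```python
from statistics import NormalDist

def normal_scores(values):
    """Rank each datum and map its cumulative frequency to the gaussian quantile."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        # (rank + 0.5) / n is the cumulative frequency assigned to the datum.
        scores[i] = NormalDist().inv_cdf((rank + 0.5) / n)
    return scores

raw = [61.2, 65.0, 63.1, 67.4, 64.2]   # toy Fe composites
gauss = normal_scores(raw)             # gaussian (normal-score) values
```

Keeping the sorted (raw, gaussian) pairs makes the transform invertible by interpolation, which is the back-transform used later.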
(snap. 4.5-1)
- In Input..., choose the Composites 15 m file with the selection final lithology{Rich ore}; choose Fe for the raw variable.
- In Interactive Fitting..., choose the Type Standard and switch ON the toggle button Dispersion, with the Dispersion Law set to Log-Normal Distribution. In this mode the histogram will be modelled by assigning a dispersion to each datum, accounting for some uncertainty that is globally reflected by an error on the mean value. The variability of the dispersion is controlled by the Variance Increase parameter, related to the estimation variance of the mean. By default this variance is set to the statistical variance of the data divided by the number of data.
(snap. 4.5-2)
Click on the Anamorphosis and Histogram bitmaps. You will see the anamorphosis function and how the experimental histogram is modelled (black bars for the experimental histogram, blue bars for the modelled histogram).
(snap. 4.5-3)
Press RUN in the Gaussian Anamorphosis window: because you have not asked for Hermite Polynomials, the following message window is displayed to advise you of the applications requiring these polynomials.
(snap. 4.5-4)
(snap. 4.5-5)
The Selective Mining Unit (SMU) size has been fixed to 25 x 25 x 15 m, so the correction will be calculated for a block support of 25 x 25 x 15 m. Each block is discretized by default in 3 x 3 in the X and Y directions (NX = 3 and NY = 3); no discretization is needed in the vertical direction (NZ = 1), as the composites are regularized according to the bench height (15 m). Changing the discretization along X and Y allows studying the sensitivity of the change of support coefficients.
Switch ON the toggle button Normalize Variogram Sill. As the variogram sill is higher than the variance, the consequence is a slightly reduced support correction (the r coefficient is a bit higher than without normalization).
Press Calculate at the bottom of the window. The block support correction calculations are displayed in the message window:
(snap. 4.5-6)
The block variogram value Gamma(v,v) is calculated; it is the basis for calculating the real block variance and the real block support correction coefficient r. We can see that the support correction is not very strong (r is not very far from 1), because the ranges of the variogram model are rather large compared to the smu size. The calculation uses random points, so different runs will give similar, but not identical, results. If the differences in the real block variance are too large, the block discretization should be refined by increasing NX and NY. By pressing Calculate... several times, we statistically check whether the discretization is fine enough to represent the variability inside the blocks. Press OK.
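The random character of the Gamma(v,v) calculation can be sketched as follows, assuming a single spherical structure with a normalized sill of 1 and taking r as the square root of the block variance (a simplification of the discrete gaussian model; Isatis uses the full anamorphosis):

```python
import math
import random

def gamma_vv(dx, dy, dz, rng, sill=1.0, npairs=20000, seed=0):
    """Monte Carlo estimate of the mean variogram gamma(v,v) inside a block,
    for a spherical variogram of given sill and range."""
    rnd = random.Random(seed)
    total = 0.0
    for _ in range(npairs):
        p = (rnd.random() * dx, rnd.random() * dy, rnd.random() * dz)
        q = (rnd.random() * dx, rnd.random() * dy, rnd.random() * dz)
        u = math.dist(p, q) / rng
        total += sill if u >= 1.0 else sill * (1.5 * u - 0.5 * u ** 3)
    return total / npairs

g = gamma_vv(25, 25, 15, rng=500.0)   # smu of 25 x 25 x 15 m, long range
block_variance = 1.0 - g              # sill normalized to 1
r = math.sqrt(block_variance)         # support coefficient (sketch)
```

Because the range (500 m, illustrative) is large compared to the smu, gamma(v,v) is small and r stays close to 1, as observed in the text; changing the seed mimics the run-to-run fluctuation of the Calculate button.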
Save the Block Anamorphosis under the name Fe rich ore block 25x25x15 and press RUN.
Two curves will be defined:
- Kriged Fe rich ore on the 75 m x 75 m x 15 m panels;
- the Histogram modelled after support correction on 25 m x 25 m x 15 m blocks.
For each curve you have to click Edit and fill in the parameters. For the first curve, on kriged panels:
(snap. 4.5-7)
(snap. 4.5-8)
(snap. 4.5-9)
After clicking the bitmaps at the bottom of the Grade Tonnage Curves window (M vs. z, T vs. z, Q vs. z, Q vs. T, B vs. z), you get graphics such as T(z) and M(z):
[Figure: T(z) curve — Total Tonnage vs Cutoff]
(snap. 4.5-10)
[Figure: M(z) curve — Mean Grade vs Cutoff]
(snap. 4.5-11)
These curves show, as expected, that the selectivity is better for the true 25x25x15 blocks than for the kriged 75x75x15 panels, which have a lower dispersion variance.
The legend is displayed in a Separate Window, as requested in the Grade Tonnage Curves window. By clicking Define Axes, you switch OFF Automatic Bounds to change the Axis Minimum and Axis Maximum for Mean Grade to 60 and 70 respectively.
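The quantities behind these curves can be sketched as follows (block grades and tonnages are illustrative):

```python
def grade_tonnage(grades, tonnes, cutoff):
    """T(z): tonnage above cutoff z; M(z): mean grade above z; Q(z): metal."""
    sel = [(g, t) for g, t in zip(grades, tonnes) if g >= cutoff]
    T = sum(t for _, t in sel)
    M = sum(g * t for g, t in sel) / T if T else float("nan")
    Q = T * M / 100.0                  # grades in %, metal in tons
    return T, M, Q

grades = [61.0, 63.5, 66.0, 68.2]      # block grades (illustrative)
tonnes = [100.0] * 4
T, M, Q = grade_tonnage(grades, tonnes, cutoff=65.0)
```

Evaluating these quantities over a range of cutoffs z, for the kriged panels and for the corrected block histogram, gives exactly the T(z) and M(z) curves displayed above.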
(snap. 4.5-12)
(snap. 4.5-13)
4.6 Simulations
This chapter gives a quick example of conditional block simulations in a multivariate case. Simulations make it possible to reproduce the real variability of the variable.
We will focus on the Fe-P-SiO2 grades of rich ore on 25 m x 25 m x 15 m blocks. Two steps will be carried out:
- simulation of the rich ore indicator. The Sequential Indicator method will be applied to generate simulated models where each block has a simulated code: 1 for rich ore blocks and 2 for poor ore blocks. A finer grid would be required to be more realistic; for the sake of simplicity we will run the indicator simulation on the same 25 m x 25 m x 15 m blocks.
- simulation of the rich ore Fe grade, as if each block were entirely in rich ore. By intersecting with the indicator simulation, we will get the final picture.
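The intersection described above amounts to the following sketch (toy values):

```python
def combine(codes, fe_grades):
    """Keep the grade where the indicator simulation gives code 1 (rich ore)."""
    return [g if c == 1 else None for c, g in zip(codes, fe_grades)]

codes = [1, 2, 1, 2]               # simulated facies codes (illustrative)
fe = [65.1, 64.0, 66.7, 63.2]      # simulated rich-ore Fe grades
final = combine(codes, fe)         # poor-ore blocks become None (N/A)
```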
(snap. 4.6-1)
To create the orebody selection in the grid file, we use the migration capability (Tools/Migrate/Grid to Point...) from the 3D Grid 75x75x15 m file to the 3D Grid 25x25x15, with a maximum migration distance of 55 m.
(snap. 4.6-2)
Open the menu Interpolate / Conditional Simulations / Sequential Indicator / Standard Neighborhood.
(snap. 4.6-3)
To define the two facies (1 for rich ore and 2 for the complementary domain), click on Facies Definition and enter the parameters shown below.
(snap. 4.6-4)
You may use the same variogram model, neighborhood and local parameters as used for the kriging. The only additional parameter is the Optimum Number of Already Simulated Nodes, which you can fix to 30 (the total number being 5 points for each of 12 sectors, i.e. 60). Save the simulation in SIS indicator rich ore.
Ask for 100 simulations, then press Run.
For the grade simulations we will:
- transform the raw data to gaussian values by anamorphosis. In the case of the P grade, the anamorphosis will take into account the fact that many samples are at the detection limit, which produces a histogram with a significant zero effect;
- do a multivariate variographic analysis on the gaussian data in order to have a gaussian variogram model;
- perform the simulations using the discrete gaussian model framework, which allows conditioning block simulated values on gaussian point data.
(snap. 4.6-1)
By clicking on Interactive Fitting, the Fitting Parameters window pops up. You will have to choose parameters for the three variables in turn, by clicking on the arrow beside the area displaying Parameters for Fe/P/SiO2. For Fe and SiO2, choose the Standard Type with a Dispersion using a Log-Normal Distribution and the default Variance Increase (as was done before for Fe alone).
For P, many samples have values equal to the detection limit of 0.01. The histogram shows a spike at the origin, which will be modelled by a zero effect. You must choose the type Zero-effect and click on Advanced Parameters to enter the parameters defining the zero effect. In particular, we will put in the atom all values equal to 0.01 with a precision of 0.01, i.e. all samples between 0 and 0.02.
(snap. 4.6-2)
After Run, the transformed values of Fe and SiO2 have a gaussian distribution, while for P the gaussian transform has a truncated gaussian distribution: the gaussian values assigned to the samples concerned by the zero effect are all equal to the same value (the gaussian value corresponding to the frequency of the zero effect). We will later use the Gibbs Sampler to generate a gaussian transform with a true gaussian distribution that honours the spatial correlation.
Using the EDA, we calculate the histogram and the experimental variogram of the variable Gaussian P rich ore (activating the selection final lithology{Rich ore}). In the Application menu of the histogram, ask for the Calculation Parameters and switch off the Automatic mode, entering the values shown below:
(snap. 4.6-1)
For the variogram, choose the same parameters as used for Fe (omnidirectional in the horizontal plane, plus vertical): in the Application Menu / Calculation Parameters of the Variogram Calculation Parameters window, click Load Parameters from Standard Parameter File and select the experimental variogram Fe rich ore.
On the graphic display you see the truncated distribution, with about 35% of the samples concerned by the zero effect; the truncated gaussian value is -0.393. The variance, displayed as the dotted line on the variograms, is about 0.5. In the Application / Save in Parameter File menu of the graphic containing the variogram, save it under the name Gaussian P rich ore zero effect.
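Both figures can be checked with a quick sketch: the common gaussian value is the gaussian quantile of the zero-effect frequency, and collapsing the lowest part of a standard gaussian onto that value gives a variance close to 0.5 (the 0.347 frequency used here is an assumption consistent with the reported -0.393):

```python
import random
from statistics import NormalDist

nd = NormalDist()
freq = 0.347                 # approximate zero-effect frequency
atom = nd.inv_cdf(freq)      # common gaussian value of the atom, near -0.393

# Monte Carlo: a standard gaussian whose lowest 34.7% is collapsed onto the atom.
rnd = random.Random(1)
n = 100_000
vals = [max(rnd.gauss(0.0, 1.0), atom) for _ in range(n)]
mean = sum(vals) / n
var = sum((v - mean) ** 2 for v in vals) / n   # close to the 0.5 quoted above
```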
(snap. 4.6-2)
(snap. 4.6-3)
In the Variogram Fitting window, choose the Experimental Variograms Gaussian P rich ore zero effect and create a New Variogram Model called Gaussian P rich ore. Note that the variogram model refers to the gaussian transform (with the true gaussian distribution); it is transformed by means of the truncation to match the experimental variogram of the truncated gaussian variable.
(snap. 4.6-4)
Click Edit, in the Model Definition window you must first click Truncation.
(snap. 4.6-5)
In the Other Options section, click on Advanced options, then on Truncation. Click Anamorphosis V1 to select the anamorphosis Fe-SiO2-P rich ore[P].
(snap. 4.6-6)
(snap. 4.6-7)
Coming back to the Model Definition window, enter the parameters of the variogram model as shown below. It is important to choose sill coefficients summing to 1 (the dispersion variance of the true gaussian), and not to 0.5, the dispersion variance of the truncated gaussian.
(snap. 4.6-8)
You will now generate gaussian values for the zero effect on P rich ore by using Statistics / Statistics / Gibbs Sampler. Note that the gaussian values not concerned by the zero effect are kept unchanged.
- The Input Data are the variogram model you just fitted, Gaussian P rich ore, and the Gaussian P rich ore variable stored after the Gaussian Anamorphosis Modelling.
- The Output Data are a new variogram model, Gaussian P rich ore no truncation (in fact the same as the input one without the truncation option), and a new variable in the Composites 15m file, Gaussian P rich ore (Gibbs).
(snap. 4.6-9)
You can check how the Gibbs Sampler has reproduced the gaussian distribution and the input variogram: just recalculate the histogram and the variograms on the variable Gaussian P rich ore (Gibbs). After saving that experimental variogram in the Parameter File, you can superimpose on it the variogram model with no truncation, using the Variogram Fitting menu. For the first distances the fit is acceptable.
(snap. 4.6-10)
(snap. 4.6-11)
(snap. 4.6-1)
In Statistics/Variogram Fitting..., choose the experimental variogram you just saved. Create a new variogram model with the same name, Gaussian Fe-SiO2-P rich ore. Set the Global Window toggle and ask to display the number of pairs in the graphic window (Application/Graphic Parameters...).
(snap. 4.6-2)
- Enter the name of the new variogram model, Gaussian Fe-SiO2-P rich ore, and Edit it.
- In the Manual Fitting tab, click on Load Model and choose the model made for Gaussian P rich ore no truncation. The following window pops up:
(snap. 4.6-3)
- Click on the Clear button, then move the mouse to the second line, Gaussian P rich ore, click on Link and then on OK in the Selector window, to put the variogram made on Gaussian P alone for the same variable in the three-variate variogram. Then click on OK in the Model Loading window.
- In the Manual Fitting tab, click on Automatic Sill Fitting. The Global Window shows the model that has been fitted. Press Run to save it in the parameter file.
(snap. 4.6-4)
- First launch Statistics / Modeling / Variogram Regularization. You will store, in a new experimental variogram Gaussian Fe-SiO2-P rich ore block 25x25x15, 3 directional variograms using a discretization of 5x5x1. Also ask to Normalize the Input Point Variogram.
(snap. 4.6-1)
- Then model the regularized variogram using Variogram Fitting in Automatic Sill Fitting mode, after having loaded the model made on the point samples, Gaussian Fe-SiO2-P rich ore. Note that the Nugget effect is set to zero; when you save the variogram model, the Nugget effect is not stored in the Parameter file.
(snap. 4.6-2)
(snap. 4.6-3)
(snap. 4.6-1)
- The simulated variables are created with names like Simu block Gaussian Fe rich ore ... in the 3D Grid 25x25x15. We store the gaussian values before the back transform, to allow checking the experimental variograms of the gaussian simulated values against the input variogram model, which is defined on the gaussian variables.
- The Block Anamorphosis and the Block Gaussian Model are those obtained from the Gaussian Support Correction.
- The Neighborhood used for kriging Fe rich ore is modified into a new one, called Fe rich ore simulation, by changing the radius along V to 800 m. The reason is that the Local Parameters for the neighborhood are not implemented in the Direct Block Simulation application.
- We ask not to Perform a Gaussian Back Transformation, for the reason explained above; the back transform will be done afterwards.
(snap. 4.6-1)
You can compare the experimental variograms calculated from the 100 simulations, in up to 3 directions, with the input variogram model. The directions are entered by giving the increments (number of grid meshes) of the unit directional lag along X, Y, Z. For instance, for direction 1 the increments are 1, 0, 0, which makes the unit lag 25 m East-West.
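The principle of this check can be sketched on a toy line of grid values, the lag being a whole number of grid meshes:

```python
def grid_variogram(values, mesh, nlags):
    """Experimental variogram of a regular line of grid values; the lag is a
    whole number of grid meshes, as in the direction increments above."""
    out = []
    for lag in range(1, nlags + 1):
        pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
        gamma = sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))
        out.append((lag * mesh, gamma))
    return out

vals = [0.1, 0.4, 0.3, 0.8, 0.6, 0.9, 0.5, 0.2]   # toy simulated values
vario = grid_variogram(vals, mesh=25, nlags=3)     # 25 m East-West lags
```

Averaging such variograms over the 100 simulations gives the curves that are compared with the model.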
(snap. 4.6-2)
Three graphic pages (one per direction) are then displayed. The average experimental variograms are displayed with a single line, the variogram model with a double line. On the next figure, the variograms in direction 3 show a good match up to 100 m. For the cross-variogram P-SiO2, where the correlation is very low, some simulations look anomalous; further analysis could be made to exclude these simulations from the next post-processing steps.
(snap. 4.6-3)
It is then necessary to transform the simulated gaussian values into raw values, using Statistics / Data Transformation / Raw Gaussian Transformation. To transform the three grades, you will have to run that menu three times. Choose Gaussian to Raw Transformation as the Transformation. The New Raw Variable will be created with the same number of indices, with names like Simu block Fe rich ore...
The transform is achieved by means of the block anamorphosis Fe-SiO2-P rich ore block 25x25x15; do not forget to choose the right variable on the right side of the Anamorphosis window.
(snap. 4.6-4)
We can now combine the simulations of the rich ore indicator and the grade simulations, by setting the grades to undefined (N/A) when the block is simulated as poor ore (simulated code 2). These transformations have to be applied on the 100 simulations using File / Calculator. It is compulsory to create beforehand, with Tools / Create Special Variable, new macro variables with 100 indices, called Simu block Fe ...
(snap. 4.6-5)
(snap. 4.6-6)
If you complete this Case Study by also simulating the grades of poor ore, you will get valuated grades for all blocks in the orebody. The displays are presented in the last chapter.
One run will calculate a macro-variable Tonnage rich ore, storing the number of smus of rich ore (i.e. where the simulated Fe grade is defined) within each panel. With File / Calculator, that number is divided by 9 (the number of smus in the panel) to get a proportion. By multiplying by the panel volume and the density (constant, equal to 4), we get the real tonnage in tons.
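The computation can be sketched as follows (the smu count is illustrative; the 9 smus per panel and the density of 4 t/m3 come from the text):

```python
def panel_tonnage(n_rich_smus, panel_volume_m3=75 * 75 * 15, density=4.0,
                  smus_per_panel=9):
    """Rich-ore tonnage of a panel from the count of rich-ore smus inside it."""
    return (n_rich_smus / smus_per_panel) * panel_volume_m3 * density

t = panel_tonnage(6)   # 6 of the 9 smus simulated as rich ore
```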
(snap. 4.6-1)
(snap. 4.6-2)
Three runs will be necessary to calculate the quantities of metal for the three elements. With Tools / Copy Grid Statistics to Grid, we store the mean grade of the rich ore smus within the panel; the variable is then called Metal Fe ... rich ore. With File / Calculator, multiplying those mean values by the tonnage macro-variable gives the metal quantity in tons.
(snap. 4.6-3)
(snap. 4.6-4)
(snap. 4.6-1)
(snap. 4.6-2)
The mean tonnage may be compared to the kriged indicator (after multiplication by the panel tonnage).
- Iso-Frequency Maps, to calculate the quantiles at the frequencies 25%-50%-75% of the Tonnage of rich ore. In the previous Simulation Post-Processing window, click the Iso-Frequency Maps toggle button; the following window pops up, where you define a New Macro Variable, Quantile Tonnage rich ore[xxxxx].
(snap. 4.6-3)
Then click Quantiles and choose 25% for the Step Between Frequencies. You get a macro-variable with 3 indices, one per frequency: for each panel, the tonnage such that 25%, 50% or 75% of the simulations are lower than the corresponding quantile value.
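For one panel, the quantile computation across the simulations can be sketched as follows (toy tonnages; a simple order-statistic rule is assumed, not necessarily Isatis' exact interpolation):

```python
def quantiles(values, probs=(0.25, 0.50, 0.75)):
    """Empirical quantiles across simulations (simple order-statistic rule)."""
    s = sorted(values)
    n = len(s)
    return [s[min(int(p * n), n - 1)] for p in probs]

tonnages = list(range(100))        # stand-in for 100 simulated tonnages
q25, q50, q75 = quantiles(tonnages)
```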
(snap. 4.6-4)
- Iso-Cutoff Maps, to calculate the probability for the Metal P rich ore to be above 0, 50, 100, 150 and 200.
(snap. 4.6-5)
In the previous Simulation Post-Processing window, click the Iso-Cutoff Maps toggle button; the following window pops up, where you define a New Macro Variable for the Probability to be Above Cutoff (T), i.e. Proba P rich ore above[xxxxx].
(snap. 4.6-6)
Then click Cutoff, click Regular Cutoff Definition and choose the parameters as shown below. You get a macro-variable with 4 indices, one per cutoff: for each panel, the probability to be above 0.02, 0.03, ...
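For one panel, the probability computation can be sketched as follows (toy values):

```python
def prob_above(sim_values, cutoffs):
    """Probability to be above each cutoff = fraction of simulations above it."""
    n = len(sim_values)
    return [sum(v > c for v in sim_values) / n for c in cutoffs]

sims = [0, 40, 80, 120, 160, 200, 240, 260, 90, 10]   # toy metal values
probs = prob_above(sims, [0, 50, 100, 150, 200])
```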
(snap. 4.6-7)
- Risk Curves, to calculate the distribution of the 100 simulated Fe metal quantities of rich ore over the orebody.
(snap. 4.6-8)
Click Risk Curves, then Edit, and fill in the parameters in the Risk Curves & Printing Format window as shown. Only the Accumulations are of interest here. For a given simulation, the accumulation is obtained by multiplying the simulated block value (here the Fe metal in tons) by the volume of the block. This means the average grade of the block is multiplied twice by the block volume; that is why, in order to get the metal in MTons, we have to apply a scaling factor of 75x75x15 (84375) and multiply it by 10^6. That scaling is entered in the box just to the left of m3*V_unit in the Accumulations sub-window. By asking Print Statistics, the 100 accumulations will be output in the Isatis message window. The order of the printout depends on the Sort Results by option; here we ask for Accumulations.
(snap. 4.6-9)
Come back to the Simulation Post-processing window and press Run. The following graphic is then displayed.
(snap. 4.6-10)
With the Application / Graphic Parameters you may Highlight Quantiles with the Simulation Value
on Graphic.
(snap. 4.6-11)
(snap. 4.6-12)
In the message window we get the 100 simulated metal quantities in increasing order. The Macro column gives the index of the simulation for each outcome: for instance, the minimum metal is obtained for simulation #72, the next one for simulation #97, and so on.
Macro   Rank    Accumulation    Volume
 72      1.00   1140.90 MT      3442162500.00 m3
 97      2.00   1156.65 MT      3442162500.00 m3
 38      3.00   1171.82 MT      3442162500.00 m3
 15      4.00   1179.91 MT      3442162500.00 m3
 91      5.00   1181.25 MT      3442162500.00 m3
 41      6.00   1185.01 MT      3442162500.00 m3
 30      7.00   1191.53 MT      3442162500.00 m3
 45      8.00   1191.71 MT      3442162500.00 m3
 57      9.00   1194.86 MT      3442162500.00 m3
 59     10.00   1195.80 MT      3442162500.00 m3
 35     11.00   1196.15 MT      3442162500.00 m3
  6     12.00   1196.37 MT      3442162500.00 m3
 48     13.00   1197.58 MT      3442162500.00 m3
 62     14.00   1199.70 MT      3442162500.00 m3
 40     15.00   1201.25 MT      3442162500.00 m3
  1     16.00   1201.90 MT      3442162500.00 m3
 86     17.00   1204.47 MT      3442162500.00 m3
 33     18.00   1206.65 MT      3442162500.00 m3
 93     19.00   1206.83 MT      3442162500.00 m3
 11     20.00   1210.44 MT      3442162500.00 m3
...
We will calculate for each panel the mean grade, tonnage and metal quantity of rich ore for Fe-P-SiO2 by using Statistics / Processing / Grade Reblocking, which applies directly on the macro-variables. Grade Reblocking is designed to calculate local grade-tonnage curves on the panel grid (Q, T, M variables) from simulated grade variables on the block grid. The grade variables can be simulated using Turning Bands, Sequential Gaussian Simulation or any kind of simulation that generates continuous variables.
The Block Grid usually corresponds to the S.M.U. (Selective Mining Unit). It has to be consistent with the Panels; in other words, the Block Grid must make a partition of the Panel Grid. The application handles multivariable cases with a cutoff on the main variable.
Make sure to give a different name to each output variable: Simu Fe, Simu P and Simu SiO2.
(snap. 4.6-13)
(snap. 4.6-14)
First, give a name to the template you are creating: Kriging Fe rich ore. This will allow you to easily display this template again later.
In the Contents list, double click the Raster item. A new window appears, in order to let you
specify which variable you want to display and the color scale:
- Select the Grid file, 3D Grid 75x75x15m with the selection orebody active, and select the variable Kriging Fe rich ore.
- Specify the title for the Raster part of the legend, for instance Kriging Fe rich ore.
- In the Grid Contents area, enter 16 for the rank of the XOY section to display.
- In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the Fe variable. To create a new color scale: click the Color Scale button, double-click on New Color Scale, enter a name (Fe) and press OK. Click the Edit button. In the Color Scale Definition window:
  - In the Bounds Definition, choose User Defined Classes.
  - Choose a Number of Classes of 22.
  - Click on the Bounds... button, enter 60 and 71 as the Minimum and Maximum values. Press OK.
  - Switch on the Invert Color Order toggle in order to assign the red colors to the large Fe values.
  - Click the Undefined Values button and select Transparent.
  - In the Legend area, switch off the Display all tick marks button, enter 60 as the reference tickmark and 2 as the step between tickmarks. Then specify that you do not want your final color scale to exceed 7 cm. Switch off the Automatic Format button and specify that you want to use integer values of Length 7. Ask to display the Extreme Classes. Click OK.
(snap. 4.7-1)
In the Item contents for: Raster window, click Display current item to display the result.
Click OK.
Double-click on the Isolines item. A new Item contents window appears. In the Data area, select the Kriging Fe rich ore variable from the 3D Grid file with the same selection. In the Grid Contents area, select rank 16 for the XOY section. In the Data Related Parameters area, switch on the C1 line, enter 60 and 71 as lower and upper bounds and choose a step equal to 2. Switch off the Visibility button. Click on Display Current Item to check your parameters, then on Display to see all the previously defined components of your graphic. Click OK to close the Item contents window.
l
In the Item list, you can select any item and decide whether or not you want to display its legend, by setting the toggle Legend ON. Use the Move Front and Move Back buttons to modify the
order of the items in the final Display.
Close the Contents window. Your final graphic window should be similar to the one displayed
hereafter.
[Figure: XOY section of Kriging Fe rich ore — X (m) vs Y (m), Fe color scale 60-70, N/A transparent]
(fig. 4.7-1)
You can also visualize your 3D grid in perspective. Open again the Contents window of the previous graphic display (Application/Contents...). Switch the Representation Type from Projection to
Perspective:
Just click on Display: the previous section is represented within the 3D volume. Because of the extension of the grid, set the vertical axis factor to 3 in the Display Box tab (switch the Automatic Scales toggle OFF). In the Camera tab, modify the Perspective Parameters: longitude = 60, latitude = 40.
(fig. 4.7-2: perspective view of the XOY section within the 3D volume, color scale from 60 to 70)
- Representing the whole grid as a solid: this is obtained by setting the 3D Grid contents to 3D Box, in both the Raster and Isolines item contents windows.
- Representing the 3D grid as a solid and penetrating into the solid by digging out a portion of the grid: for each item contents window (raster and isolines), set the 3D Grid contents to Excavated Box, then define the indices of the excavation corner (for instance: cell=17, 21, 15).
(fig. 4.7-3: excavated-box representation of the 3D grid, color scale from 60 to 70)
In the Contents window, the Animate tab allows you to animate the graphic in several ways:
- by animating one item property at a time, for instance the grid raster section. To interrupt the animation, press the STOP button in the main Isatis window.
- Create a raster image of the Fe simulated macro variable: choose the first simulation (index 1). Display rank 16 of the 25x25x15 m 3D grid file (so you can compare the simulations with the kriging) and choose the grade Fe color scale. Ask to display the legend.
- Create a base map of the composite data from the Composites 15 m file, with the selection final lithology{Rich ore} active and no variable, in order to use the same Default Symbol: a full circle of 0.15 cm.
(snap. 4.7-1)
In the Display Box tab from the contents window, set the mode to Containing a set of items and
click the Raster item: set the toggle Box Defined as Slice around Section ON and set the Slice
Thickness to 45 m.
(snap. 4.7-2)
Press Display:
(fig. 4.7-1: Fe rich ore raster section with the composite base map; X and Y axes in m, color scale from 60 to 70)
From the Animate tab, select the raster item and choose to animate on the macro index. Set the Delay to 1 s and press Animate. The different simulations appear consecutively: the animation lets you sense the differences between the simulations. Check that the simulations tend to be similar around the boreholes.
- Display the probability for the metal P of rich ore in panels to be above the cut-off = 50 T:
- Create a new page and display the macro variable Proba P rich ore above from the 3D Grid 75x75x15m file: choose macro index n°2 (i.e. cutoff = 50).
- Make a New Color Scale named Proportion, as explained before for Fe, but with 20 classes between 0 and 1.
- Press OK.
(fig. 4.7-2: probability map of Proba P rich ore above{50.000000}; X and Y axes in m, color scale from 0.00 to 1.00)
Drag the Fe variable from the Composites 15 m file in the Study Contents and drop it in the
display window;
Magnify by a factor of 2 the scale along Z by clicking the Z Scale button at the top of the
graphic page.
Click Toggle the Axes in the menu bar on the left of the graphic area.
From the Page Contents, right-click the 3D Lines object to open the 3D Lines Properties window. In the 3D Lines tab:
(snap. 4.7-1)
- In the File menu, click Save Page as... and give a name (composites rich ore) in order to be able to recover it later if you wish.
(snap. 4.7-2)
Click Compass in the menu bar on the left of the graphic area.
Drag the Kriging indicator rich ore variable from the 3D Grid 75 x 75 x 15 m file in the Study
Contents and drop it in the display window;
Right-click the 3D Grid 75x75x15m file in the Page Contents to open the 3D Grid Properties:
In the 3D Grid tab, tick the selection toggle, choose the orebody selection;
(fig. 4.7-1)
- Open the clipping plane facility from Toggle the Clipping Plane in the menu bar on the left of the graphic area: the clipping plane appears across the block model.
Click the clipping plane rectangle and drag it next to the block model for better visibility;
Click one of the clipping plane's axes to change its orientation (be careful to target precisely the axis itself, in dark grey, not its squared extremity nor the white center tube).
Add the drill holes (Fe rich ore) as you did for the previous graphic page
Open the Line Properties window of the Composites 15 m file: set the Allow Clipping toggle ON;
Click the clipping plane's white center tube and drag it to translate the clipping plane along the axis: choose a convenient cross section, approximately in the middle of the block model. You may also use the clipping control parameters available on the right of the graphic window to clip a slice of fixed width along the main grid axes.
Click on one block of particular interest: its information is displayed in the top right corner:
(snap. 4.7-1)
Edit the 3D Grid 75x75x15m attributes, go to the Slicing tab and set the properties as follows:
(snap. 4.7-2)
Set the toggle Automatic Apply ON, and move the slices to visualize the slicing interactively.
- Save the graphic as a New Page with the name Composites and kriged indicator rich ore.
(fig. 4.7-1)
5. Non Linear
This case study, dedicated to advanced users, is based on the Walker Lake data set, first introduced and analyzed by Edward H. Isaaks and Mohan Srivastava in their book Applied Geostatistics (1989, Oxford University Press).
This case study describes geostatistical methods for the global and local estimation of recoverable resources in a mining industry context:
- Non linear methods, including four methods used to estimate local recoverable resources: indicator kriging, disjunctive kriging, uniform conditioning and service variables.
- Conditional simulations of grades, using the two main applicable methods: turning bands and sequential gaussian.
The efficiency of these methods will be evaluated by comparison with reality, which can be considered known in this case because of the origin of the data set.
Reminder: while using Isatis, the on-line help is accessible anytime by
pressing F1 and provides full description of the active application.
Important Note:
Before starting this study, it is strongly advised to read the Beginner's Guide, especially the following paragraphs: Handling Isatis, Tutorial Familiarizing with Isatis Basics, and Batch Processing & Journal Files. All the data sets are available in the Isatis installation directory (usually C:\program file\Geovariances\Isatis\DataSets\). This directory also contains a journal file including all the steps of the case study. In case you get stuck during the case study, use the journal file to perform all the actions according to the book.
- The support effect, which makes the recovered ore depend on the volume on which the ore/waste decision is made. In this case the size of the selective mining unit (SMU, or block) has been fixed to 5m x 5m. When performing the local estimations we will calculate the ore tonnage and grade after cut-off in panels of 20m x 20m. It is important to keep these terms: block for the selective unit and panel for the estimated unit (e.g. the tonnage within the panel of the ore consisting of blocks with a grade above the cut-off). These terms are used systematically in the Isatis interface.
- The information effect, which makes the mis-classification between selected ore and waste depend on the amount of information used to estimate the blocks. Two notions are important at this stage. Firstly, the recovered ore is made of the true grades contained in blocks whose estimated grade is above the cut-off. Secondly, the decision between ore and waste will be made with additional information (blast-holes, etc.) in the future of the production. The question is then what we can expect to recover tomorrow, assuming for instance a future blast-hole pattern.
- The constraint effect, which leads, for any technical or economical reason, to ore dilution or to ore left in place. The two previously mentioned effects assume a free selection of blocks within the panels, where only the distribution of block grades matters. When their spatial distribution has to be considered (the recovered ore will differ depending on whether rich blocks are contiguous or spread throughout the panel), only geostatistical simulations provide an answer.
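The support effect can be illustrated numerically outside Isatis: averaging a grade field over larger and larger supports reduces the dispersion variance. A minimal Python sketch on synthetic grades (not the Walker Lake data):

```python
import random

random.seed(0)

# Synthetic point grades on an 80 x 80 m area, one sample per metre,
# with some short-scale spatial structure (3x3 moving average of noise).
n = 80
noise = [[random.gauss(300, 250) for _ in range(n + 2)] for _ in range(n + 2)]
points = [[sum(noise[i + di][j + dj] for di in range(3) for dj in range(3)) / 9
           for j in range(n)] for i in range(n)]

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def block_means(grid, size):
    """Average the point grid over non-overlapping size x size supports."""
    k = len(grid) // size
    return [sum(grid[i * size + di][j * size + dj]
                for di in range(size) for dj in range(size)) / size ** 2
            for i in range(k) for j in range(k)]

pts = [v for row in points for v in row]
blocks_5 = block_means(points, 5)    # 5 m x 5 m SMU blocks
panels_20 = block_means(points, 20)  # 20 m x 20 m panels

# The dispersion variance decreases with the support size: support effect.
assert variance(pts) > variance(blocks_5) > variance(panels_20)
```

The exact variances depend on the simulated field; only the ordering matters here.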
- By non linear kriging techniques (developed in 3.4): the main advantage of these methods is their swiftness, but they give no information on the location of the ore blocks within the panels. Four methods will be described: Indicator Kriging, Disjunctive Kriging, Service Variables and Uniform Conditioning.
- By simulation techniques (developed in 3.5): the main advantage of simulations is the possibility to derive simulated histograms and to estimate the constraint effect, but the method is quite heavy and time-consuming for big block models. Two methods will be described: Turning Bands (TB) and Sequential Gaussian Simulations (SGS).
Comparison with reality, through a specific analysis of the 600 ppm cut-off, will be done through graphic displays and cross plots of the ore tonnage and mean grade above cut-off.
Note - If you wish to compare the local estimates with reality, you will first need to calculate the real tonnage variables from the real grades for the specific cut-off 600 (this is done in 3.4.1 Calculation of the true QTM variables based on the panels).
(snap. 5.2-1)
By visualizing the Sample set data (using Display / Basemap/ Proportional), we immediately see
the preferential sampling pattern of high grade zones:
(fig. 5.2-1: proportional base map of the V variable; X and Y axes in m)
In order to correct the bias due to the preferential sampling of high grade zones, it is necessary to decluster the data. To do so you can use Tools / Declustering: it performs a cell declustering with a moving window centered on each sample. We store the resulting weights in a variable Weight of the sample data set: this variable will be used later to weight statistics for the variographic analysis in the EDA and for the gaussian anamorphosis modeling. The moving window size for declustering has been fixed here to 20m x 20m, according to the approximate loose sampling mesh outside the clusters.
Note - A possible guide for choosing the moving window dimensions is to compare the value of the
resulting declustered mean to the mean of kriged estimates (kriging has natural declustering
capabilities).
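The principle of cell declustering can be sketched in a few lines: each sample is weighted by the inverse of the number of samples falling in a moving window centered on it. This is an illustrative sketch on made-up coordinates, not the exact Isatis algorithm:

```python
# Cell declustering sketch: weight each sample by the inverse of the number
# of neighbours inside a window centred on it (here 20 m x 20 m, i.e.
# +/- 10 m in X and Y), then normalise the weights to sum to 1.
samples = [(5.0, 5.0, 100.0), (6.0, 5.5, 900.0), (5.5, 6.0, 950.0),
           (40.0, 40.0, 200.0)]  # (x, y, grade): a tight cluster + one lone sample

def decluster_weights(samples, half_window=10.0):
    weights = []
    for x, y, _ in samples:
        n_in_window = sum(1 for xo, yo, _ in samples
                          if abs(xo - x) <= half_window and abs(yo - y) <= half_window)
        weights.append(1.0 / n_in_window)
    total = sum(weights)
    return [w / total for w in weights]

w = decluster_weights(samples)
naive_mean = sum(g for _, _, g in samples) / len(samples)
declustered_mean = sum(wi * g for wi, (_, _, g) in zip(w, samples))

# The clustered high-grade samples share their influence, so the
# declustered mean is pulled towards the lone low-grade sample.
assert declustered_mean < naive_mean
```

Here the three clustered samples each get weight 1/3 before normalisation, so the declustered mean drops from 537.5 to 425.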
The statistics before and after declustering are the following:
(snap. 5.2-2)
(snap. 5.2-3)
(fig. 5.2-2)
From these three histograms we clearly see that the declustering process allows a better representation of the statistical behavior of the phenomenon.
Note - This short scale anisotropy is not clearly visible on the variogram map below: to better
visualize it, you may re-calculate the variogram map on 5 lags only and create a customized color
scale through Application / Graphic Specific Parameters...
In the variogram map area you can activate a direction using the mouse buttons: the left one to select a direction, and the right one to select Activate Direction in the menu. Activating both principal axes (perpendicular directions N160 and N70) displays the corresponding experimental variograms below. When selecting the variogram, right-click and ask for Modify Label... to change N250 to N70:
(snap. 5.2-4)
The short scale anisotropy is visible on the experimental variogram; it is then saved in a parameter
file Raw V from the graphic window (Application / Save in Parameter File...).
We now have to fit a model based on these experimental variograms using the Statistics / Variogram Fitting facility. We fit the model from the Manual Fitting tab.
(snap. 5.2-5)
(snap. 5.2-6)
(snap. 5.2-7)
Press Print to check the output variogram, then save the variogram model in the parameter file under the name Raw V. It should be noted that the total sill of the variogram is slightly above the dispersion variance, and that a low nugget value has been chosen.
(snap. 5.2-1)
Using this configuration we have exactly 25 samples from the exhaustive data set for each block of
the new grid. Edit the graphic parameters to display the auxiliary file.
(snap. 5.2-2)
(fig. 5.2-1)
Now we need to average the real values on this Grid 5*5 file, using Tools / Copy Statistics / Points
-> Grid. We will call this new variable True V.
Note - Using a moving window equal to zero for all the axes, we constrain the new Mean variable
to a calculation area of 5m x 5m (1 block).
(snap. 5.2-3)
(fig. 5.2-2: raster map of the true block grade True V with isolines; X and Y axes in m, color scale from 0 to 1000 ppm)
The above figure is the result of two basic actions of the Display menu: a raster display of the true block grade is performed, then isolines are overlaid. Isolines range from 0 to 1500 by steps of 250 ppm; the 1000 ppm isoline is represented with a bold line type. The color scale has been customized to cover grades between 0 and 1000 ppm, even though there are values greater than this upper bound. Each class has a width of 62.5 ppm, and the extreme values are represented using the extreme colors.
Note - Keep in mind that the V variable has primarily been derived from elevation data: we clearly see on the above map a NW-SE valley, responsible for the anisotropy detected during variography. The Walker Lake itself (hence with zero values...) is in this valley. One could raise stationarity issues, as the statistical behavior of elevation data differs between valleys (with a lake) and nearby ranges. This is not the subject of this case study.
X and Y mesh: 20 m,
(snap. 5.2-1)
The graphic check with the Grid 5*5 shows that the 5m x 5m blocks describe a perfect partition of the 20m x 20m panels. This allows us to use the specific Tools / Copy Statistics / Grid to grid... facility for calculating the true panel values True V for the Mean Name:
(snap. 5.2-2)
Model: Raw V
Neighborhood: create a moving neighborhood named octants without any rotation and a constant radius of 70 m, made of 8 sectors with a minimum of 5 samples and the optimum number of samples per sector set to 2. This neighborhood will be used extensively throughout the case study.
(snap. 5.2-3)
(snap. 5.2-4)
For comparison purposes, it is interesting to also perform the same kriging on the small blocks (Grid 5*5) to quantify the smoothing effect of linear kriging.
Comparing the true V values for the three different supports (punctual, 5x5 block and 20x20 panel):
- the variance decreases with the support size: this is the support effect.
Comparing estimated values vs. true values for one same support:
- punctual: the estimation by declustering is satisfactory because the mean and the variance are comparable. The bias (279.7 compared to 278.0) is negligible.
- 5x5 block: ID2 shows an overestimation. For kriging, the bias is negligible and, as expected, the variance of the kriged blocks (44013) is smaller than the real block variance (52287); this is the smoothing effect caused by linear interpolation. Besides, there are some negative estimates; the 5m x 5m blocks are too small for a robust in situ estimation.
- 20x20 panel: the bias of ID2 is less pronounced, but the variance is not realistic, because of strong local overestimation of the high grade zones. The variance of the kriged panels is smaller than the real panel variance, but the difference is less pronounced. Moreover, there is only one negative panel estimate.
Note - 72 SMU blocks have negative estimates indicating that the 5 m x 5 m block size is too small
in this case.
(snap. 5.3-1)
(snap. 5.3-2)
The Interactive Fitting... gives access to specific parameters for the anamorphosis (intervals on the raw values to be transformed, intervals on the gaussian values, number of polynomials, etc.): the default parameters will be kept. The distribution function is modeled by specific polynomials called Hermite polynomials; the more polynomials, the more precise the fit. There are also QC graphic windows allowing you to check the fit between the experimental (raw) and model histograms:
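The idea behind the anamorphosis — matching raw quantiles to gaussian quantiles — can be sketched with a simple empirical normal-score transform. Isatis actually fits the function with Hermite polynomials; the sketch below only shows the underlying quantile matching, on made-up values:

```python
from statistics import NormalDist

def normal_scores(values):
    """Empirical normal-score transform: map each raw value to the gaussian
    quantile of its rank, using centred plotting positions (rank + 0.5)/n."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    scores = [0.0] * n
    for rank, i in enumerate(order):
        scores[i] = nd.inv_cdf((rank + 0.5) / n)  # avoids probabilities 0 and 1
    return scores

raw = [12.0, 340.0, 55.0, 980.0, 210.0, 130.0]
gauss = normal_scores(raw)

# The transform preserves ranks, and the symmetric plotting
# positions give gaussian scores centred on zero.
assert sorted(range(6), key=lambda i: raw[i]) == sorted(range(6), key=lambda i: gauss[i])
assert abs(sum(gauss)) < 1e-9
```

The fitted anamorphosis in Isatis is the smooth (Hermite-polynomial) model of exactly this raw-to-gaussian mapping.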
(fig. 5.3-1: anamorphosis model, raw values vs. gaussian values)
the block variance, which can be calculated using Krige's relationship, giving the dispersion variance as a function of the variogram.
The discrete gaussian model then provides a consistent change of support model.
Use the Statistics / Support Correction... panel with the Point anamorphosis and the Raw V variogram model as input. The 5m x 5m block will be discretized in 4x4. At this stage no information effect is considered, so the corresponding toggle is not activated.
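Krige's relationship states that the block dispersion variance equals the punctual variance minus the mean variogram value within the block, Gamma(v,v), computed here with a 4x4 block discretization as in the manual. The variogram below is an illustrative exponential model, not the actual Raw V model:

```python
import math

def gamma_exp(h, sill=63167.25, scale=30.0):
    """Illustrative exponential variogram (sill set to the punctual
    variance for simplicity; scale is an assumed value)."""
    return sill * (1.0 - math.exp(-h / scale))

def gamma_bar(block=5.0, n=4, gamma=gamma_exp):
    """Mean variogram value Gamma(v,v) within a block x block square,
    discretized into an n x n regular set of points."""
    pts = [((i + 0.5) * block / n, (j + 0.5) * block / n)
           for i in range(n) for j in range(n)]
    total = 0.0
    for xa, ya in pts:
        for xb, yb in pts:
            total += gamma(math.hypot(xa - xb, ya - yb))
    return total / len(pts) ** 2

point_variance = 63167.25           # punctual variance of the anamorphosis
gvv = gamma_bar()                   # Gamma(v,v) for the 5 m block
block_variance = point_variance - gvv  # Krige's relationship

assert 0 < gvv < point_variance
assert block_variance < point_variance
```

With the manual's values, the same relationship reads 63167.25 - 9431.85 = 53735.40, the Real Block Variance reported below.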
(snap. 5.3-3)
Press Calculate to compute Gamma(v,v); the corresponding Real Block Variance and Correction are displayed in the message window:
 ______________________________________ __________
|                                      |    V     |
|--------------------------------------|----------|
| Punctual Variance (Anamorphosis)     | 63167.25 |
| Variogram Sill                       | 66500.00 |
| Gamma(v,v)                           |  9431.85 |
| Real Block Variance                  | 53735.40 |
| Real Block Support Correction (r)    |   0.9293 |
| Kriged Block Support Correction (s)  |   0.9293 |
| Kriged-Real Block Support Correction |   1.0000 |
| Zmin Block                           |     0.00 |
| Zmax Block                           |  1528.10 |
|______________________________________|__________|
Note - Gamma(v,v) is calculated using random procedures; hence, slightly different results are generated each time the Calculate button is pressed. Gamma(v,v) and the resulting Real Block Variance should not vary much between calculations.
By clicking on the anamorphosis and histogram bitmaps we can check that, after the support effect correction, the histogram of blocks is smoother (smaller variance) than the punctual histogram model:
(fig. 5.3-2)
Histograms (punctual in blue and block in red): the block histogram model is smoother
Save the anamorphosis function under the name Block 5m * 5m by pressing RUN.
(i.e. the selection between ore and waste is made on the future estimated grades, and not on the real
grades), we should calculate 2 coefficients:
- a coefficient that transforms the point anamorphosis into the kriged block one;
- a coefficient that allows the calculation of the covariance between true and kriged blocks.
Therefore, the variance of the kriged block and the covariance between real and kriged blocks are needed: they can be automatically calculated in the same Support Correction panel through the Information Effect optional calculation sub-panel (the ... selector next to the toggle):
(snap. 5.3-4)
The final sampling mesh corresponds to the final sampling pattern to be considered: 5x5 m. Press OK and create a new anamorphosis function Block 5m*5m with information effect. Click the Run button; two extra support correction coefficients are calculated and displayed when pressing RUN from the main panel:
Block Support Correction Calculation:
 ______________________________________ __________
|                                      |    V     |
|--------------------------------------|----------|
| Punctual Variance (Anamorphosis)     | 63167.25 |
| Variogram Sill                       | 66500.00 |
| Gamma(v,v)                           |  9431.85 |
| Real Block Variance                  | 53735.40 |
| Real Block Support Correction (r)    |   0.9293 |
| Kriged Block Support Correction (s)  |   0.9117 |
| Kriged-Real Block Support Correction |   0.9859 |
| Zmin Block                           |     0.00 |
| Zmax Block                           |  1528.10 |
|______________________________________|__________|
(snap. 5.3-5)
(snap. 5.3-6)
(snap. 5.3-7)
Press OK, then repeat the procedure for the other curves with the same cut-off definition, specifying different curve parameters to distinguish them:
- curve 3: choose histogram model and the Block 5m * 5m with information effect anamorphosis;
- curve 4: choose grade variable and select the True V variable from the Grid 5*5 file;
- curve 5: choose grade variable and select the Kriging V variable from the Grid 5*5 file.
Once the 5 curves have been edited, click on the graphic bitmaps to display the Total tonnage vs.
cut-off and the Mean grade vs. cut-off curves:
(fig. 5.3-3)
Total tonnage vs. cut-off: the block histograms are close to the true tonnages. The ordinary kriging curve under-estimates the total tonnage for high cut-offs, showing the danger of applying cut-offs on linear estimates for recoverable resources.
(fig. 5.3-4)
|       | True block 5x5 | Point model | Block 5*5 no info | Kriging |
| Q     |     77.954     |   87.738    |      76.103       | 61.082  |
| T (%) |     10.385     |   11.351    |      10.084       |  8.077  |
| M     |    750.67      |  772.934    |     754.699       | 756.258 |
In 3.2.5 we have seen that linear kriging is well adapted to in situ resource estimation on panels. But when mining constraints are involved (i.e. applying the 600 cut-off on small blocks), kriging predicts a tonnage of 8.08% instead of 10.38%: the mine will have to deal with a 29% over-production compared to the prediction.
On the other hand, the global estimation using the point model over-estimates the reality. The global estimation with change of support (block 5*5 no info) gives a prediction of good quality.
Because we know the reality from the exhaustive dataset, it is possible to calculate the true block grades taking the true information effect into account and to compare them to the Block 5x5 with information effect anamorphosis. The detailed workflow to calculate the true information effect is not given here; only the general idea is presented below:
- Sample one true value at the center of each block from the exhaustive set (representing the blasthole sampling pattern with real sampled grades V).
- Krige the blocks with these samples: these are the ultimate estimated block grades on which the ultimate selection will be based.
- Select the blocks where the ultimate estimates are above 600 and derive the tonnage.
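The consequence of selecting on estimates rather than on true grades can be sketched with a toy model: the conventional benefit (sum of grade minus cut-off over the selected blocks) is maximized by a perfect selection on the true grades, and any mis-classification degrades it. All values below are simulated, not the Walker Lake grades:

```python
import random

random.seed(1)
cutoff = 600.0

# Toy block model: true block grades, plus 'ultimate' estimates built from
# limited sampling (simulated here as true grade + estimation error).
true_grades = [max(0.0, random.gauss(300, 250)) for _ in range(4000)]
estimates = [g + random.gauss(0, 150) for g in true_grades]

def benefit(selector):
    """Conventional benefit of a selection: sum of (true grade - cutoff)
    over the blocks whose selector value exceeds the cutoff."""
    return sum(g - cutoff for g, s in zip(true_grades, selector) if s > cutoff)

b_perfect = benefit(true_grades)  # selection on the (unknowable) true grades
b_info = benefit(estimates)       # selection on the future estimates

# Ore/waste mis-classification makes the informed selection sub-optimal:
# some waste is taken and some ore is left behind.
assert b_info < b_perfect
```

This is the loss that the information-effect anamorphosis quantifies without needing the exhaustive data.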
We can now compare the Block 5x5 with info to the real QTM variables calculated with the true
information effect (info):
|                       |   Q   |   T   |   M    |
| True block 5x5        | 77.95 | 10.38 | 750.67 |
| True block 5x5 (info) | 67.92 |  9.01 | 754.11 |
| Block 5*5 with info   | 71.83 |  9.66 | 743.40 |
As expected, the information effect on the true grades deteriorates the real recovered tonnage and metal quantity, because the ore/waste mis-classification is taken into account: the real tonnage decreases from 10.38% to 9.01%. The estimate from the Block 5x5 with info anamorphosis (9.66%) is closer to this reality.
- The total tonnage T: the total tonnage is expressed as the percentage or the proportion of SMU blocks that have a grade above the given cut-off in the panel. Each panel is partitioned into 16 SMU blocks, i.e. when T is expressed as a proportion, T = 1 means that all 16 SMU blocks of the panel have an estimated grade above the cut-off.
- The metal quantity Q (also sometimes referred to as the metal tonnage): the quantity of metal relative to the tonnage proportion T for a given cut-off (according to the grade unit).
- The mean grade M: the mean grade above the given cut-off.
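For a single panel of 16 SMU blocks, T, Q and M at a given cut-off follow directly from the block grades. A sketch on illustrative grades:

```python
cutoff = 600.0

# True grades of the 16 SMU blocks of one 20 m x 20 m panel (illustrative).
blocks = [120.0, 640.0, 700.0, 580.0, 900.0, 300.0, 610.0, 450.0,
          50.0, 720.0, 660.0, 480.0, 810.0, 200.0, 350.0, 590.0]

ore = [g for g in blocks if g > cutoff]
T = len(ore) / len(blocks)                # tonnage as a proportion of the panel
M = sum(ore) / len(ore) if ore else None  # mean grade above cut-off (undefined if no ore)
Q = T * M if ore else 0.0                 # metal quantity Q = T x M

assert T == 7 / 16   # 7 of the 16 blocks are above 600 ppm
```

Here M = 720 ppm and Q = 0.4375 x 720 = 315 (in ppm x proportion units).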
In Isatis, QTM variables for local estimations are calculated and stored in macro-variables (1 index
for each cut-off) with a fixed terminology:
In Grid 5*5, create a constant 600 ppm variable named Cut-off 600 ppm: this is done through the File / Calculator window:
(snap. 5.4-1)
Tools / Copy Statistics / Grid -> Grid: in the input area we select the true block grades True V from the Grid 5*5 file and the Cut-off 600 ppm as the Minimum Bound Name, i.e. only cells whose grade is above 600 will be considered. In the output area we store the true tonnage above 600 under Number Name and the true grade above 600 under Mean Name in the Grid 20*20 file. If inside a given panel no SMU block has a grade greater than 600, then the true tonnage of this panel will be 0 and its true grade will be undefined:
(snap. 5.4-2)
In order to get the true total tonnage T relevant for the future comparisons (i.e. the ore proportion above the cut-off 600), we have to normalize the number of blocks counted in each panel by the total number of blocks in one panel (16):
(snap. 5.4-3)
The metal quantity Q is calculated as Q = T x M. When the true grade above 600 is defined, the metal quantity is equal to T x M; otherwise it is null. A specific ifelse syntax is needed to reflect this:
(snap. 5.4-4)
If this specific ifelse syntax were not used, the metal quantity in the waste would be undefined instead of being null.
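The ifelse logic can be mirrored in a few lines (illustrative code, not the Isatis calculator syntax): where the grade above cut-off is undefined because the panel contains no ore, Q is set to 0 rather than left undefined:

```python
# Per-panel (T, M) pairs; M is None where no SMU block exceeds the cut-off.
panels = [(0.5, 700.0), (0.0, None), (0.25, 650.0), (0.0, None)]

# ifelse: Q = T * M where M is defined, else 0 (not undefined), so that
# waste panels contribute zero metal instead of missing values.
Q = [t * m if m is not None else 0.0 for t, m in panels]

assert Q == [350.0, 0.0, 162.5, 0.0]
```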
Now, we have the true tonnage, the true mean and the true metal quantity above 600 ppm to base
our comparisons in the Grid 20*20 file.
Note - Beware that the true grade above 600 is not additive, as it refers to different tonnages. It is therefore necessary to use the true tonnage above 600 as weights when computing the global mean of the grade over the whole deposit. Another way to compute the global mean of the grade above 600 is to divide the global metal quantity by the global tonnage after averaging over the whole deposit.
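The two ways of computing the global mean can be checked against each other on illustrative panel values: the tonnage-weighted mean of the panel grades equals the global metal divided by the global tonnage, while a naive unweighted average of the grades is biased:

```python
# Per-panel true tonnage T (proportion) and mean grade above cut-off M.
panels = [(0.50, 700.0), (0.25, 650.0), (1.00, 800.0), (0.125, 900.0)]

# Way 1: tonnage-weighted mean of the panel grades.
mean_weighted = sum(t * m for t, m in panels) / sum(t for t, _ in panels)

# Way 2: global metal quantity divided by global tonnage.
Q_global = sum(t * m for t, m in panels)
T_global = sum(t for t, _ in panels)
mean_from_qt = Q_global / T_global

# A plain (unweighted) average of the panel grades ignores the tonnages.
mean_naive = sum(m for _, m in panels) / len(panels)

assert mean_weighted == mean_from_qt   # the two ways are identical
assert mean_weighted != mean_naive     # the naive average is biased
```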
- Multiple indicator (co-)kriging: performs the kriging of the indicator variables with their own variograms, independently or not, for the different cut-offs.
- Median indicator kriging: assumes that all the indicator variables have the same variogram, namely the variogram of the indicator based on the median value of the grade.
Multiple indicator kriging is preferable because of the de-structuring of the spatial correlation with increasing cut-offs (the assumption of a unique variogram for all cut-offs does not hold over the whole grade spectrum), but problems of consistency must be corrected afterwards. Besides, it has the disadvantage of being quite tedious, because it requires a specific variographic analysis for each cut-off; incidentally, this is the reason why median indicator kriging has been proposed as an alternative. Another possibility is to calculate the variograms under the intrinsic correlation hypothesis, which simplifies the variogram fitting by assuming the proportionality of all variograms and cross-variograms.
In this case study we will use the median indicator kriging of the 20m x 20m panels. Using Statistics / Quick Statistics... with the declustering weights, the median of the declustered histogram is found to be 223.9.
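A weighted median, as used for the declustered histogram, is the value at which the cumulative normalized weight reaches one half. The data below are illustrative (not the Walker Lake samples, whose declustered median is 223.9):

```python
def weighted_median(values, weights):
    """Value at which the cumulative normalised weight first reaches 0.5."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    total = sum(weights)
    cum = 0.0
    for i in order:
        cum += weights[i] / total
        if cum >= 0.5:
            return values[i]
    return values[order[-1]]

grades = [100.0, 900.0, 950.0, 980.0, 200.0, 250.0, 300.0]
# Declustering down-weights the clustered high grades.
weights = [1.0, 0.2, 0.2, 0.2, 1.0, 1.0, 1.0]

# The unweighted median would be 300; declustering pulls it down to 250.
assert weighted_median(grades, weights) == 250.0
```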
(snap. 5.4-1)
We then calculate the experimental variogram of this macro indicator variable Indicator V [xxxxx] with the EDA (make sure that the Weight variable is activated). When selecting the Indicator V [xxxxx] macro variable from the EDA, you will be asked to specify the index corresponding to the median indicator: we have chosen index 5, corresponding to the cut-off 250, which is close enough to 223.9. If the same calculation parameters as for the Raw V variogram are used, the anisotropy is no longer visible; hence, the experimental variogram will be omnidirectional, calculated with 33 lags of 5 m. It is stored in a parameter file Model Indicator and used through Statistics / Variogram Fitting... to fit a variogram model with the parameters detailed below the graphic:
(omnidirectional experimental variogram of Indicator V{250.000000} with pair counts; distance in m)
Isatis
Sample set/Data
- Variable #1 : Indicator V{250.000000}
Experimental Variogram : in 1 direction(s)
D1 :
Angular tolerance = 90.00
Lag = 5.00m, Count = 33 lags, Tolerance = 50.00%
Model : 2 basic structure(s)
Global rotation = (Az=-70.00, Ay= 0.00, Ax= 0.00)
S1 - Nugget effect, Sill = 0.035
S2 - Exponential - Scale = 45.00m, Sill = 0.21
(fig. 5.4-1)
It should be noted that the total sill is close to 0.25, which is the maximum authorized value for an indicator variogram (the variance of an indicator with proportion p is p(1-p), at most 0.25). The model is fitted using the Manual Fitting tab. The variogram is saved in the parameter file under the name Model Indicator.
(snap. 5.4-2)
(snap. 5.4-3)
(snap. 5.4-1)
- We ask to calculate a Block estimate: we are estimating the proportion of points above the cut-offs within the panel.
- As Indicator Definition we define the same cut-offs as previously. In the Cut-off Definition window, clicking Calculate proportions gives the experimental probabilities of the grade being above the different cut-offs. These values correspond to the means of the indicators and are used if we perform a simple kriging. In this case, because strict stationarity is not likely, we prefer to run an ordinary kriging, which is the default option.
- Rebuild the cumulative distribution function (cdf) of tonnage, metal and grade above cut-off for each panel;
- Apply a volume correction (support effect) to take into account the fact that the recoverable resources will be based on 5m * 5m blocks.
These two actions are done through Statistics / Processing / Indicator Post-processing... with the Indicator V[xxxxx] variable from the panels as input:
(snap. 5.4-1)
- Basename for Q.T.M variables: IK. As the cut-offs used for kriging the indicators and the cut-offs used here for representing the final grade-tonnage relationships may differ (an interpolation is needed), three different macro-variables will be created.
- Cut-off Definition... for the QTM variables: 50 cut-offs from 0 by a step of 25.
- Volume correction: a preliminary calculation of the dispersion variance of the blocks within the deposit is required. A simple way to achieve this is to use the real block variance calculated by Statistics / Support Correction..., choosing the block size as 5 m x 5 m (cf. 3.3.2). The Volume Variance Reduction Factor of the affine correction is calculated by dividing the Real Block Variance (53842) by the Punctual Variance (63167). But the real block variance is calculated from the variogram sill (66500), which exceeds the punctual variance, the difference being 3333; the real block variance needs to be corrected by this value:
Corrected Real Block Variance = Real Block Variance - 3333 = 53842 - 3333 = 50509
Thus, the Volume Variance Reduction Factor is:
Volume Variance Reduction Factor = 50509 / 63167 = 0.80
Therefore, enter 0.80 for the Volume Variance Reduction Factor.
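The arithmetic above can be written out explicitly, using the values reported by the Support Correction panel in this run (Gamma(v,v), and hence the block variance, varies slightly between runs):

```python
punctual_variance = 63167.0    # from the point anamorphosis
variogram_sill = 66500.0
real_block_variance = 53842.0  # from Statistics / Support Correction

# The block variance is derived from the variogram sill, which here exceeds
# the punctual variance; correct the block variance by the difference.
excess = variogram_sill - punctual_variance              # 3333
corrected_block_variance = real_block_variance - excess  # 50509

# Volume Variance Reduction Factor for the affine correction.
f = corrected_block_variance / punctual_variance

assert round(f, 2) == 0.80
```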
- Two volume corrections may be applied: affine or indirect lognormal correction. As the original distribution is clearly not lognormal, we prefer to apply the affine correction, which only requires the variance ratio between the 5m * 5m blocks and the points.
- Parameters for Local Histogram Interpolation: we keep the default parameters for interpolating the different parts of the histogram (linear interpolation), including for the upper tail, which is generally the most problematic. A few tests made with other parameters (hyperbolic model with exponent varying from 1 to 3) showed a great impact on the resources. We now need to define the maximum and minimum block values of the local block histograms: the Minimum Value Allowed is 0; the Maximum Value Allowed may simply be approximated by applying the affine correction by hand to the maximum value of the weighted point histogram, transposing it to the block histogram with the Volume Variance Reduction Factor (0.8) calculated above: the value obtained is 1391.
(maps of IK_T{600.000000}, color scale from 0.000 to 1.000, and IK_M{600.000000}, color scale from 600 to 1000 ppm; X and Y axes in m)
(fig. 5.4-3: scatter plots of IK_T vs. true T, rho=0.906, and IK_M vs. true M, rho=0.683)
Scatter diagram of the IK estimates vs. the true panel values above 600 ppm
(the black line is the first bisector)
At this stage of the case study we can consider that, globally, the indicator kriging gives satisfactory results. At the local scale, noticeable differences exist, with a tendency to overestimate the grade, especially in the upper tail of the histogram.
Isatis then offers the possibility of using this intrinsic assumption in the variogram fitting window, through the Constraints of the Automatic Fitting function.
(snap. 5.4-2)
Create a macro variable on the grid 20*20 that will contain the 11 resulting indicators, using Tools / Create Special Variable as follows:
(snap. 5.4-3)
We now perform the kriging of the indicators in the classical (Co)-Kriging window using the intrinsic model. The resulting indicator macro variable can be processed using the Indicator Post-Processing as for the Bundled Indicator Kriging.
(snap. 5.4-4)
the variogram model of the block gaussian variable. To determine this model we first need to calculate an experimental block gaussian variogram using the Raw V variogram model and the block anamorphosis. For mathematical reasons, the sill of Raw V should not exceed the punctual variance of the anamorphosis; unfortunately, it does here. Therefore, we first need to compute another block anamorphosis including a sill normalization (cf. 3.3.2 With support effect correction) using Statistics / Support Correction... and ask for Normalize Variogram Sill. Store the anamorphosis in a new parameter file Block 5m * 5m (normalized) to avoid overwriting the existing block anamorphosis Block 5m * 5m.
Open Statistics / Modeling / Block Gaussian Variogram... to calculate the experimental block
gaussian variogram:
(snap. 5.4-1)
- Number of directions: 2. It is convenient to make these directions coincide with the main directions of anisotropy of the raw variogram (N160E and N70E) by setting a rotation of 70 around the positive Z axis.
We fit this variogram in Statistics / Variogram Fitting...; as expected the nugget effect has disappeared. Two anisotropic structures (cubic + spherical, details below the graphic) combine to a total
sill of 1, and we store the resulting model in a parameter file Block Gaussian V:
[Experimental and fitted variograms of the block gaussian variable in the N70 and N160 directions.]
Model: 2 basic structure(s)
Global rotation = (Az=-70.00, Ay=0.00, Ax=0.00)
S1 - Cubic - Range = 42.00 m, Sill = 0.4, Directional Scales = (42.00 m, 60.00 m)
S2 - Spherical - Range = 40.00 m, Sill = 0.6, Directional Scales = (100.00 m, 40.00 m)
(fig. 5.4-1)
We are now ready to perform the Disjunctive Kriging with Interpolate / Estimation / Disjunctive
Kriging...:
(snap. 5.4-2)
- Input: Gaussian V.
- Number of Kriged Polynomials: we use the same number as during the modeling of the anamorphosis function, i.e. 30.
- Cut-off definition...: we choose 21 cut-offs from 0 by steps of 50. It is compulsory to include the zero cut-off, which gives the in situ grade estimate.
- The Auxiliary Polynomial File will contain the experimental values of the different Hermite polynomials at the data points, which are also assigned to the center of the closest 5m x 5m block. They are calculated before the RUN, as soon as the output grid is defined (this may take a little time).
- Output Grid File...: in the panels Grid 20*20, store the error DK variable. In the panel grid file we also store the Q.T.M. values for each cut-off with the Basename DK.
- For the Block Gaussian Variogram Model we choose the variogram model previously fitted, Block Gaussian V.
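Disjunctive Kriging expands the variable on Hermite polynomials. As background, here is a minimal sketch of the probabilists' Hermite polynomials evaluated by their three-term recurrence; the exact normalization Isatis applies internally may differ.

```python
def hermite_values(y, n_max):
    """Probabilists' Hermite polynomials He_0..He_n_max at y, via the
    recurrence He_{n+1}(y) = y*He_n(y) - n*He_{n-1}(y),
    with He_0(y) = 1 and He_1(y) = y."""
    vals = [1.0, y]
    for n in range(1, n_max):
        vals.append(y * vals[n] - n * vals[n - 1])
    return vals[: n_max + 1]

# He_2(y) = y^2 - 1 and He_3(y) = y^3 - 3y:
print(hermite_values(2.0, 3))  # [1.0, 2.0, 3.0, 2.0]
```

With 30 kriged polynomials, such a recurrence would be evaluated up to n = 30 at each gaussian-transformed data value.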
Graphic displays of the panels for comparison with reality (proportion of SMU above 600 ppm):
[Maps of the true tonnage above 600 ppm and of DK_T{600.000000}.]
Graphic displays of the panels for comparison with reality (grade above 600 ppm):
[Maps of the true grade above 600 ppm and of DK_M{600.000000} (ppm).]
(fig. 5.4-3)
[Scatter plots: ore tonnage rho=0.925, mean grade rho=0.753.]
Scatter diagram of the DK estimates vs. the true panel values above 600 ppm
(the black line is the first bisector)
The results on tonnage look very comparable to those obtained with indicator kriging; but the
grades show a better correlation between Disjunctive kriging estimates and true values.
Uniform Conditioning requires two anamorphosis functions, one for the panel and one for the block support (Block 5m * 5m). The calculation of the panel anamorphosis requires the value of the kriged panel dispersion variance. The two anamorphosis models must be consistent, that is, created from the same samples.
(snap. 5.4-1)
- Set to Block mode and activate the Full set of Output Variables option.
- Output: in Grids / Grid 20*20. Because we have asked for the Full set of Output Variables, we are able to store the local estimated dispersion variance Variance of Z* for V under a new variable Local dispersion Var Z*.
- Neighborhood: octants.
[Map of the kriged panel grades (ppm).]
(fig. 5.4-1)
(snap. 5.4-1)
[Maps of UC_no info_T{600.000000} compared to the true tonnage above 600 ppm.]
(fig. 5.4-1)
[Maps of UC_no info_M{600.000000} compared to the true grade above 600 ppm (ppm).]
(fig. 5.4-2)
[Scatter plots: ore tonnage rho=0.928, mean grade rho=0.785.]
(fig. 5.4-3)
Scatter diagram of the UC estimates vs. the true panel values above 600 ppm
(the black line is the first bisector)
The quality of the local estimation is satisfactory.
Moreover, UC allows the information effect to be taken into account, by using the block anamorphosis Block 5*5 with information effect instead of Block 5*5.
Note - Some grade inconsistencies may appear when taking the information effect into account, because the cut-offs have to be applied to a histogram of kriged values. These grade inconsistencies affect low grades for small tonnages; they may therefore be corrected by suppressing the lowest tonnage values (as done here with a minimum tonnage fixed at 0.5%).
Do not forget to change the Basename for Output Variables to UC_with info and press RUN:
(snap. 5.4-2)
The different correction types and the associated corrections are detailed in the help menu.
(snap. 5.4-1)
Note: the same method can be used in the multivariate case; the metal of the other elements is assigned according to the ranking of the kriged SMUs of the main variable.
After Run we get the following Error message:
(snap. 5.4-2)
It is due to the fact that, for the highest cut-off, the tonnage must represent less than the tonnage of one SMU.
The solution consists in re-running Uniform Conditioning with 41 cut-offs from 0 with a step of 50.
Running Localized Uniform Conditioning then no longer produces an error message.
The statistics and the displays show that, after Localized Uniform Conditioning, the variability of the true SMU grades is much better reproduced.
With Tools / Grade Tonnage Curve we can also check that the QTM values obtained from Uniform Conditioning (with the Tonnage Variables option) are the same as those obtained from the grades estimated with the Localized Uniform Conditioning method.
Variable        Count   Minimum   Maximum   Mean     Std. Dev.   Variance
True V 5x5      3120              1378.12   277.98   228.66      52287.30
Kriging V 5x5   3120    -50.92    1361.13   275.36   209.79      44013.25
LUC V 5x5       3120              1435.18   275.79   229.66      52745.83
[Maps of Kriging V and LUC V (ppm).]
(fig. 5.4-1)
(snap. 5.4-3)
The scatter diagram between the Ore and the Metal above 600 ppm shows a very strong (non linear)
correlation.
[Scatter plot of the Metal vs. the Ore above 600 ppm: rho=0.987.]
(fig. 5.4-2)
[Experimental omnidirectional variogram of the metal quantity Q (pair counts labelled).]
Consequently, we will krige the two variables independently. The experimental variograms are omnidirectional, calculated with 16 lags of 10 m (with the declustering weights active), and have been fitted as shown below:
[Experimental omnidirectional variogram of the ore tonnage T (pair counts labelled).]
Metal model: 2 basic structure(s), global rotation = (Az=-70.00, Ay=0.00, Ax=0.00): S1 - Nugget effect, Sill = 8100; S2 - Spherical - Range = 53.00 m, Sill = 2.876e+004.
Ore model: 2 basic structure(s), global rotation = (Az=-70.00, Ay=0.00, Ax=0.00): S1 - Nugget effect, Sill = 0.01; S2 - Spherical - Range = 53.00 m, Sill = 0.0462.
(fig. 5.4-3)
The declustering weights have a great impact on the short-scale structure; the variograms at short scale are not satisfactory.
Then, the kriging of Ore and Metal is performed, with the usual octants neighborhood; the variables
Service Var Ore Tonnage T > 600 and Service var Metal Q > 600 are created.
(snap. 5.4-4)
(snap. 5.4-5)
Because a linear kriging is performed, some panels have negative or unacceptably low tonnage T values: for all panels having a tonnage T < 0.02 (i.e. 2%), T and Q are set to 0 (this is done using File / Calculator...).
(snap. 5.4-6)
Using the Calculator once more, we derive from the kriged variables Service var Metal Q > 600 and Service Var Ore Tonnage T > 600 the variable Service var grade M > 600, using the relation M = Q / T.
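The two Calculator steps above (zeroing panels with T < 2%, then deriving M = Q / T) can be sketched as follows; the function name and the example values are hypothetical, this is not the Isatis Calculator syntax.

```python
def service_grade(t, q, t_min=0.02):
    """Zero out panels whose kriged tonnage is below t_min (2 %),
    then derive the mean grade M = Q / T; zeroed panels get no grade."""
    out = []
    for ti, qi in zip(t, q):
        if ti < t_min:
            out.append((0.0, 0.0, None))   # (T, Q, M)
        else:
            out.append((ti, qi, qi / ti))
    return out

# One panel below the 2 % floor, one regular panel:
print(service_grade([0.01, 0.5], [5.0, 350.0]))
```

Guarding the division this way avoids meaningless (or infinite) grades on the panels whose kriged tonnage was negative or negligible.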
(snap. 5.4-7)
[Scatter plots: ore tonnage rho=0.924, mean grade rho=0.644.]
(fig. 5.4-4)
The scatter diagrams show some overestimation of the grades, and a slight underestimation of the high tonnage values.
5.5 Simulations
After having reviewed the non linear estimation techniques, we can also perform simulations to answer the same questions on the recoverable resources. Because we are in a 2D framework, we can perform 100 simulations within a reasonable computation time.
Two techniques, both working under the multigaussian hypothesis, will be described: Turning Bands (TB) and Sequential Gaussian (SGS). The multigaussian hypothesis requires the input variable to be gaussian: the Gaussian V variable, calculated previously (3.3.1 Punctual Histogram Modeling), will be used.
Simulations will be performed on the SMU blocks of 5 m x 5 m (Grid 5*5): this allows us to compare the results with the non linear estimation techniques. Block simulations require a gaussian back transformation and a change of support from point to block: this implies specific remarks discussed hereafter.
Note - In Isatis the default block discretization is 5 x 5 and may be optimized, as explained later (
3.5.4.1).
- from the input gaussian data, simulate gaussian point grades according to the block discretization parameters as discussed above;
- apply the gaussian back transformation (gaussian -> raw) to the point grades using a point anamorphosis;
- the averaging into blocks is done automatically at the end of the simulation run.
Hence the required anamorphosis function to perform the gaussian back transformation is the Point anamorphosis based on the sample (point) support, which has already been calculated during 3.3.1 Punctual Histogram Modeling. The block anamorphosis Block 5m*5m (which includes a change of support correction) should not be used here.
- Variographic analysis of the gaussian sample grades (the variogram model will be named Point Gaussian V).
- Simulation of the SMU grades (5 m x 5 m blocks) with the Turning Bands (TB) or Sequential Gaussian (SGS) method, with the following parameters:
  - Block mode.
  - Starting index: 1.
  - Seed for Random Number Generator: leave the default number 423141. The seed is supposed to be a large prime number; reusing the same seed makes the realizations reproducible.
The neighborhood and other parameters specific to each method will be detailed in the relevant
paragraph.
- Calculation of the QTM variables for both techniques (described for TB): ore Tonnage T (i.e. the SMU proportion within each panel), metal Quantity Q, and mean grade M of the blocks above 600 ppm within each 20 m x 20 m panel (M = Q / T). The panel mean grades cannot be averaged directly over the 100 simulations: the mean grade is not additive, because it refers to different tonnages (the tonnage may differ between simulations). It therefore has to be weighted by the ore proportion T. One way to do this is to use an accumulation variable for each panel:
  - calculate the ore proportion T and the metal quantity Q (the accumulation variable: Q = T x M) for each simulation;
  - calculate average(T) and average(Q) over the 100 simulations;
  - calculate the average mean grade: average(M) = average(Q) / average(T).
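The accumulation trick above can be sketched for a single panel; the two simulations below are hypothetical placeholder values, used only to show why the naive average of M is wrong.

```python
def average_grade(t_sims, q_sims):
    """Average recoverable grade over simulations for one panel:
    M is not additive, so average the additive quantities T and
    Q = T * M first, then take the ratio."""
    mean_t = sum(t_sims) / len(t_sims)
    mean_q = sum(q_sims) / len(q_sims)
    return mean_q / mean_t if mean_t > 0 else None

# Two hypothetical simulations of one panel: (T, M) = (0.2, 900) and (0.8, 700)
t = [0.2, 0.8]
q = [0.2 * 900, 0.8 * 700]    # accumulations Q = T * M
print(average_grade(t, q))    # 740.0, vs the naive (900 + 700) / 2 = 800
```

The tonnage-weighted result (740 ppm) differs markedly from the naive average of the grades (800 ppm), which is exactly why the accumulation variable is needed.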
Raw V. A variogram model using 3 structures has been fitted and saved under the name Point
Gaussian V:
Variogram: Gaussian V
[Experimental and fitted variograms of Gaussian V in the N160 and N250 directions.]
Model: 3 basic structure(s)
Global rotation = (Az=-70.00, Ay=0.00, Ax=0.00)
S1 - Nugget effect, Sill = 0.13
S2 - Spherical - Range = 20.00 m, Sill = 0.3, Directional Scales = (20.00 m, 40.00 m)
S3 - Spherical - Range = 40.00 m, Sill = 0.6, Directional Scales = (86.00 m, 40.00 m)
(fig. 5.5-1)
(snap. 5.5-1)
Neighborhood...: create a new neighborhood parameter file named octants for TB. Press Edit...
and from the Load... button reload the parameters from the octants neighborhood. We are now
going to optimize the block discretization: press the ... button next to Block Discretization: the
Discretization Parameters window pops up where the number of discretization points along the
x,y,z directions may be defined. These numbers are set to their default value (5 x 5 x 1). Press
Calculate Cvv, the following appears in the message window (values differ at each run due to the
randomization process):
Regular discretization: 5 x 5 x 1
In order to account for the randomization, 11 trials are performed
(the first value will be kept for the Kriging step)
Variable: Gaussian V
Cvv = 0.811792, 0.809978, 0.812136, 0.811752, 0.810842, 0.812900, 0.808768, 0.811977, 0.810781, 0.810921, 0.812400
11 mean block covariances have been calculated with 11 different randomizations. The minimum value is 0.808768 and the maximum is 0.812900; the maximum relative variability is approximately 0.5%, which is more than acceptable: the 5 x 5 discretization is a very good approximation of the punctual support and may be optimized.
Note - For reproducibility purposes, the first value of Cvv will be kept for the simulation calculations.
For optimization, we decrease the number of discretization points to 3x3:
(snap. 5.5-2)
Regular discretization: 3 x 3 x 1
In order to account for the randomization, 11 trials are performed
(the first value will be kept for the Kriging step)
Variable: Gaussian V
Cvv = 0.809870, 0.814197, 0.808329, 0.812451, 0.819093, 0.809922, 0.814171, 0.811332, 0.805993, 0.806053, 0.807459
The minimum value is 0.805993 and the maximum value is 0.819093: the maximum relative variability is approximately 1.6%. As expected, it has increased, but it remains acceptable: the 3 x 3 discretization is therefore a good compromise and will be kept for the simulations (i.e. each simulated block value will be the average of 3 x 3 = 9 simulated points). Press Close, then OK for the neighborhood definition window.
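The variability check can be reproduced from the listed Cvv trials; this sketch assumes "maximum relative variability" means the range divided by the mean, which matches the figures quoted in the text.

```python
# Cvv trials for the 3 x 3 x 1 discretization (values from the run above)
cvv = [0.809870, 0.814197, 0.808329, 0.812451, 0.819093, 0.809922,
       0.814171, 0.811332, 0.805993, 0.806053, 0.807459]

# Maximum relative variability: (max - min) / mean of the 11 trials.
rel_var = (max(cvv) - min(cvv)) / (sum(cvv) / len(cvv))
print(f"{rel_var:.1%}")  # about 1.6 %
```

Running the same computation on the 5 x 5 x 1 trials listed earlier gives roughly 0.5%, confirming the coarser discretization roughly triples the spread while staying acceptable.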
Number of Turning Bands: 300. The more turning bands, the more precise the realizations, but the CPU time increases. Too few turning bands would create visible 1D-line artefacts.
[Maps of True V and of the simulations Simu V TB[00002], [00020], [00030], [00040], [00050] (ppm).]
(snap. 5.5-1)
[Maps of the tonnage T calculated by TB compared to the true tonnage above 600 ppm.]
Tonnage T calculated by TB (SMU proportion) compared to the true tonnage. The color scale is a regular 16-class grey palette between 0 and 1: panels containing strictly less than 1 block (i.e. 0 <= proportion < 0.0625) are white.
[Maps of the TB mean grade above 600 ppm compared to the true grade (ppm).]
(fig. 5.5-1)
[Scatter plots: ore tonnage rho=0.936, mean grade rho=0.869.]
(fig. 5.5-2)
Scatter diagrams of ore tonnage and mean grade above 600 ppm between
the mean of 100 TB simulations and the true values of panels.
Interpolate / Conditional Simulation / Sequential Gaussian / Standard Neighborhood...: a standard elliptical neighborhood is used, taking into account the point data and the previously simulated grid nodes.
We use the standard neighborhood option because it is more accurate from a theoretical point of view; moreover, Block simulation is then possible (automatic averaging of point values).
5.5.5.4 Simulations
Open Interpolate / Conditional Simulations / Sequential Gaussian / Standard neighborhood.... and
enter the same parameters described in the workflow summary ( 3.5.2):
(snap. 5.5-1)
- The Gaussian Back Transformation is enabled with the Point anamorphosis function.
- Special Model Options...: by default, a Simple Kriging (SK) is performed using a constant mean equal to zero.
- Neighborhood...: create a new neighborhood named octants for SGS with the following parameters (you may load the parameters from the octants for TB parameter file):
(snap. 5.5-2)
- Optimum Number of Samples per Sector: 4, which adds up to a maximum of 32 samples. Theoretically, the SGS technique would require a unique neighborhood using all the previously simulated grid nodes to reproduce the variogram exactly; in practice this is impossible, so it is recommended to set the Optimum Number in accordance with the Optimum Number of Already Simulated Nodes (defined below in the main SGS window) and the capacity of the computer.
- In the Advanced tab, set the Minimum Distance Between Two Samples to 2 m. As two different sets of data are used to condition the simulations (the actual data points combined with the previously simulated grid nodes), this minimum distance criterion avoids fictitious duplicates between original data points and simulated grid nodes. It also spreads the conditioning data, for a better reproduction of the variogram.
- Optimum Number of Already Simulated Nodes: 16. The software will load all the real samples and the 16 closest already simulated nodes in memory for the neighborhood search algorithm. The maximum number of samples being 32, 16 real samples will be used for each node simulation, as with the Turning Bands method. The TEST window allows you to evaluate the impact of these parameters on the neighborhood.
- Leave the other parameters at their default values and press RUN.
Note - Isatis offers the possibility of performing the different simulations with independent paths (an optional toggle in the main SGS window). By default, this toggle is set OFF, meaning that the same random path is used for all simulations: the independence of the realizations is no longer guaranteed, but the algorithm is much quicker. If the toggle is set ON, the CPU time is approximately multiplied by the number of simulations. Here, it has been checked that both options show negligible differences in the final results.
The resulting outcomes are very similar to the TB method.
(snap. 5.5-1)
[Maps of the SGS tonnage above 600 ppm compared to the true tonnage.]
(fig. 5.5-1)
[Maps of the SGS mean grade above 600 ppm compared to the true grade (ppm).]
(fig. 5.5-2)
[Scatter plots: ore tonnage rho=0.938, mean grade rho=0.870.]
(fig. 5.5-3)
Scatter diagrams of ore tonnage and mean grade above 600 ppm between
the mean of 100 SGS and the true values of panels
We observe that SGS simulations give very similar results to TB and are also well correlated to the
reality.
5.6 Conclusions
The objective of the case study was to illustrate several non linear methods (global and local) for estimating recoverable resources, and to compare them to linear kriging. All methods take into account the same support effect for 5 m x 5 m blocks, but only a few take the information effect into account. Therefore, we will first focus on the results without information effect.
(snap. 5.6-1)
Repeat the same for DK and UC, and change the curve parameters and labels for optimal visibility.
By clicking on the graphic windows below, ask for the following Grade Tonnage curves: Mean
grade vs. cut-off, Total tonnage vs. cut-off, Metal tonnage vs. cut-off and Metal tonnage vs. Total
tonnage. The graphics are presented here below:
[Mean grade vs. cut-off curves for True, OK Block 5*5, IK, DK and UC.]
(fig. 5.6-1)
[Total tonnage vs. cut-off curves (same legend).]
(fig. 5.6-2)
[Metal tonnage vs. cut-off curves (same legend).]
(fig. 5.6-3)
[Metal tonnage vs. total tonnage curves (same legend).]
(fig. 5.6-4)
The True curve is black and drawn with a bold line type. We clearly see that the OK tonnage curves are shifted compared to the others: linear kriging induces a significant smoothing effect despite a refined sampling and a good coverage of the domain.
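The grade-tonnage quantities compared throughout this chapter can be recomputed from any set of block values (true or simulated). This is a minimal sketch with a hypothetical helper, not an Isatis function, and it ignores the information effect:

```python
def grade_tonnage(values, cutoffs):
    """Grade-tonnage curve from block values: for each cut-off z,
    T(z) = proportion of blocks above z,
    Q(z) = mean contribution of blocks above z,
    M(z) = Q(z) / T(z)."""
    n = len(values)
    curve = []
    for z in cutoffs:
        above = [v for v in values if v >= z]
        t = len(above) / n          # ore tonnage (proportion)
        q = sum(above) / n          # metal quantity per block
        curve.append((z, t, q, q / t if t > 0 else None))
    return curve

# Four hypothetical block grades, cut-offs 0 and 600 ppm:
print(grade_tonnage([100, 500, 700, 900], [0, 600]))
```

The zero cut-off row recovers the in situ tonnage (T = 1) and mean grade, which is why including it is compulsory in the estimation runs above.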
All non linear methods provide similar and suitable results; a zoom centered on V = 600 allows a more precise comparison around this particular cut-off:
[Zoomed mean grade, total tonnage and metal tonnage curves around the 600 ppm cut-off.]
(fig. 5.6-5)
Grade-Tonnage curves with a zoom on the 600 ppm cutoff of interest (same legend)
Small differences are noticeable: IK overestimates the grades, whereas DK overestimates the tonnages.
As we had to choose a particular cut-off for comparing these methods with SV and simulations, we have chosen V = 600, and the global results for this cut-off are presented hereafter.
Use Statistics / Quick Statistics 8 times, on each grade variable of each method, with the relevant tonnage as the Weight variable:
Q    77.95   67.92   72.03   69.20
T    10.38    9.01    9.69    9.17
M   750.67  754.11  743.05  754.60
For the cut-off V = 600 ppm, UC has correctly quantified the information effect.
The table below summarizes the main results for the error on tonnages:
The table below summarizes the main results for metal quantity:
6. 2D Estimation
Tools / Accumulation: Compulsory step to derive additive variables from the grade: Accumulation and Thickness.
File / Data File Manager/Modify 2D-3D: Transform the 3D data to 2D. It amounts to a flattening process.
Statistics / Exploratory Data Analysis: QA/QC tool. Display the experimental distributions.
File / Create Grid file: Builds the 2D grid on which the estimation will be performed.
File / Selection/From Polygons: Selection definition menu. Define the area of interest (AOI)
based on a polygon file.
Statistics / Variogram Fitting: Variogram modelling tool. Compute the Thickness and Accumulation variograms, independently.
Statistics / Variogram Fitting: Define the variogram model, this time for co-kriging.
Statistics / Principal Component Analysis: Check the consistency of the different methods using the built-in PCA tool on the results.
(snap. 6.8-1)
(snap. 6.8-2)
Note - Isatis computes two thickness variables, Analysed length and Total length, the former being the length of the samples analysed for Fe, and the latter the length of the entire drillhole. At this stage a decision has to be made: the thickness is unique and does not depend on the presence of a grade analysis; consequently it is compulsory to refer the accumulation to this total thickness, and not only to the analysed length. Failing to do so would underestimate the grade, by dividing the accumulation by a thickness larger than the analysed one.
We have then normalized the accumulation by the ratio of Total length to Analysed length. This operation is equivalent to setting the value of the non-analysed samples to the average value of the drillhole. The operation is performed using the calculator: File / Calculator.
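The normalization can be sketched as follows; the helper name and the drillhole figures are hypothetical, this is not the Isatis Calculator expression.

```python
def corrected_accumulation(accu_fe, analysed_len, total_len):
    """Normalize the accumulation by Total length / Analysed length:
    equivalent to giving the non-analysed samples the average grade
    of the drillhole."""
    return accu_fe * total_len / analysed_len

# Hypothetical drillhole: 8 m analysed out of 10 m, accumulation 480 %.m
accu = corrected_accumulation(480.0, 8.0, 10.0)
# The grade referred to the total length matches the analysed grade:
print(accu, accu / 10.0, 480.0 / 8.0)
```

Dividing the corrected accumulation by the total length returns the grade measured on the analysed interval, which is exactly the property the note above requires.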
(snap. 6.8-3)
(snap. 6.8-4)
(snap. 6.8-5)
The histograms of the resulting variables can be displayed using the EDA: Statistics / Exploratory Data Analysis. The accumulation and thickness histograms can be computed directly. If one is also interested in the mean Fe grade along each line, it can be reconstructed as the ratio between accumulation and length (use File / Calculator).
(snap. 6.8-6)
From top to bottom and left to right: Fe Accumulation histogram; Total length histogram; Fe grade weighted by Total Thickness histogram; Accumulation vs. Total Thickness cross-plot. Note that the correlation coefficient is close to 1.
To create the grid use the menu File / Create Grid File.
(snap. 6.8-7)
(snap. 6.8-8)
The 2D grid file is built so that each data point is at the centre of a block.
To restrict the study to the area of interest (AOI), a polygonal selection based on the outline of the
orebody is applied on the grid. The coordinates of the polygon vertices are stored in an ASCII file
polygon_AOI.asc.
To use it, first create a new polygon file: File / Polygons Editor / Application menu / New Polygon File. Then import the file: Application menu / ASCII Import. Finally: Application menu / SAVE and RUN.
To select the blocks on the grid file use: File / Selection / From Polygons.
(snap. 6.8-9)
6.9 2D Estimations
Four methods will be run and compared.
6.9.1 Kriging
Let us start with the independent kriging of thickness and accumulation.
(snap. 6.9-1)
Experimental and model variograms of the thickness (Total length). Parameters are given in the following table.

Structure          Range U   Range V   Sill - Thickness
1. Nugget Effect                       1
2. Spherical       650 m     400 m     1.1
3. Spherical       700 m     1150 m    2.7
(snap. 6.9-2)
Experimental and model variograms of the accumulation (Accu Fe corrected). Parameters are given in the following table. This tutorial will not deal with the non-stationary structure along the EW direction, which is ignored during the fitting.

Structure          Range U   Range V   Sill - Accu Fe
1. Nugget Effect                       1870
2. Spherical       650 m     350 m     3176
3. Spherical       720 m     1230 m    11000
6.9.1.2 Kriging
Thickness and accumulation are kriged in turn (Interpolate / Estimation / (Co-)Kriging).
(snap. 6.9-1)
(snap. 6.9-2)
6.9.2 Co-Kriging
The most classical method to estimate the accumulation and the thickness is co-kriging, which takes into account the statistical link between accumulation and thickness through the cross-variogram.
(snap. 6.9-1)
The resemblance of the simple and cross-variograms allows us to reasonably assume a linear model of co-regionalization, consisting of a nugget effect and two spherical structures, detailed in the table below. The directions of anisotropy of the model are the directions of calculation of the experimental variograms, i.e. N90 and N0.
(snap. 6.9-2)
Experimental and modelled variograms in the NS and EW directions for thickness and accumulation. The models are described in the following table.

Structure          Range U   Range V   Sill - Accu Fe   Sill - Thickness   Sill - Thickness/Accu Fe
1. Nugget Effect                       2150 (15.2 %)    0.95 (20.9 %)      41 (16.7 %)
2. Spherical       480 m     600 m     7000 (49.5 %)    1.8 (39.6 %)       110 (44.9 %)
3. Spherical       1150 m    600 m     5000 (35.3 %)    1.8 (39.6 %)       94 (38.4 %)
Note that the simpler intrinsic correlation model cannot be used, because the relative sills of the different variogram structures are not equal, and the variogram sills are thus not proportional.
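The proportionality check behind that remark can be sketched with a hypothetical helper (not an Isatis function), using the sills fitted above.

```python
def is_intrinsic(sill_rows, tol=1e-6):
    """Intrinsic correlation requires every structure's sills to be
    proportional to one coregionalization matrix, i.e. the relative
    sills (sill / total sill) must be equal across variables.
    One row per structure: (sill_accu, sill_thick, sill_cross)."""
    totals = [sum(col) for col in zip(*sill_rows)]
    rel = [[s / t for s, t in zip(row, totals)] for row in sill_rows]
    return all(max(r) - min(r) < tol for r in rel)

# Sills fitted above (nugget, first spherical, second spherical):
sills = [(2150, 0.95, 41), (7000, 1.8, 110), (5000, 1.8, 94)]
print(is_intrinsic(sills))  # False: 15.2 % vs 20.9 % vs 16.7 % on the nugget
```

Because the relative sills differ between the three variograms (e.g. 15.2%, 20.9% and 16.7% for the nugget), the full linear model of co-regionalization is required here.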
6.9.2.4 Co-kriging
Thickness and accumulation can now be co-kriged: Interpolate / Estimation / (Co-)Kriging (figure
15).
(snap. 6.9-1)
(snap. 6.9-1)
The strong correlation between Accumulation and Thickness would allow the use of the residual
model on this dataset.
The relationship can be expressed as follows:
(eq. 6.9-1)
Where Thickness and Residual are uncorrelated variables. In this model, the co-kriging process
amounts to the separate kriging of the Thickness and the Residual.
(snap. 6.9-1)
A linear regression is applied to the accumulation using the thickness as the explanatory variable.
The residual is the part that is not explained by the linear regression. It is orthogonal (not correlated) to the thickness.
In our case, the results are:
It can be checked that the residual is indeed not correlated with the Thickness. The variograms can be modelled independently (figure 18). The Thickness variogram has already been computed, and the parameters for the residual variogram are detailed in the following table.
(snap. 6.9-2)
Experimental and model variograms of the residual and the thickness. The parameters are described in the following table:

Structure          Range U   Range V   Sill - Residual
1. Nugget Effect                       508
2. Spherical       2500 m    666 m     213
Krige the residual (Interpolate / Estimation / (Co)-Kriging) using this variogram. Iron grades can be
recovered from the Thickness variable and the residual:
(eq. 6.9-1)
(eq. 6.9-2)
(snap. 6.9-3)
Once the additive variables (thickness and accumulation, or thickness and residual) are estimated, the Fe grade can be calculated by applying the inverse transformation.
(snap. 6.9-1)
From left to right and top to bottom: comparison of the Fe estimation using kriging, co-kriging and the residual method. All results are weighted by the thickness. Because the Fe grades are defined on different supports (varying thickness), the histograms have to be weighted by the thickness variable (Statistics / Exploratory Data Analysis / Compute Using the Weight Variable option). Global statistics (figure 20) show that each estimation method yields a mean value consistent with the data. The kriging method gives the highest standard deviation, and the residual method the lowest.
(snap. 6.9-2)
Kriging and co-kriging give locally very similar results, while the residual model wanders a bit
more, especially for low values.
(eq. 6.9-1)
As usual, when computing the Fe grade standard deviation histogram, don't forget to weight it with the thickness variable (Statistics / Exploratory Data Analysis / Compute Using the Weight Variable option).
(snap. 6.9-1)
(snap. 6.9-2)
(snap. 6.9-3)
(snap. 6.9-4)
Comparison of the Fe kriging error using kriging, co-kriging (both weighted by the thickness), and the residual method.
The kriging errors are fairly close to one another. As expected, the error is lower for the co-kriging than for the kriging. It also appears that the error of the residual method is higher than the co-kriging one. The calculation of its value is, however, more complicated and, for simplicity's sake, will not be detailed here.
6.10 3D Estimation
Grades in thin deposits that can stem from weathering process for example (Ni, Mn) can be efficiently estimated with a 2D kriging: flattening of surfaces is implicit, and there is no need to
model the footwall and hanging wall surfaces. On the other hand, this method requires that the
grade is decomposed into two additive variables. For comparison purposes, the 3D estimation
process is briefly presented hereafter.
(snap. 6.10-1)
Make sure this new variable is of type Length; this will be compulsory later on to create the 3D selection. If necessary, use File / Data File Manager / Variable / Format (or right-click on the variable) and change it to Length, unit meters.
As usual, use Statistics / EDA to compute the experimental variogram, and Statistics / Variogram Fitting to fit the model. Then use Interpolate / Estimation / (Co-)Kriging to perform the estimation.
(snap. 6.10-2)
Experimental and model variogram of the elevation of the hanging wall. The following table
describes the parameters of the model.
Structure          Range U    Range V    Sill Z hanging wall
1. Nugget Effect   -          -          0
2. Spherical       300 m      400 m      13
3. Spherical       1360 m     1200 m     55
(snap. 6.10-3)
Compute a footwall estimate from the hanging wall and the thickness
Then use the tool in File / Selection / From Surfaces (figure 26) to compute the 3D selection. Use
the same 2D polygon file as before. Figure 27 shows the result in the 3D viewer.
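The footwall estimate is plain surface arithmetic, and the 3D selection then flags the blocks lying between the two surfaces. A small sketch with hypothetical elevations:

```python
import numpy as np

# Hypothetical 2D grid estimates (elevations in m): the footwall elevation is
# the hanging-wall elevation minus the estimated thickness.
z_hanging = np.array([[120.0, 118.5], [121.2, 119.0]])
thickness = np.array([[3.0, 2.5], [4.2, 3.1]])

z_footwall = z_hanging - thickness

# The 3D selection keeps the blocks whose centre lies between the two surfaces.
z_block = 117.5  # elevation of a candidate 3D block centre
inside = (z_block <= z_hanging) & (z_block >= z_footwall)
```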
(snap. 6.10-4)
(snap. 6.10-5)
(snap. 6.10-6)
Experimental and model variograms of Fe along two horizontal directions (red and green,
range 62.5 m) and along the drillholes (purple, range 1 m).
Structure          Range U    Range V    Range W    Sill Fe
1. Nugget Effect   -          -          -          24
2. Spherical       377 m      236 m      0.50 m     6.2
3. Spherical       473 m      174 m      7.0 m      17
For the estimation, use a moving neighbourhood with the following parameters:
- a search ellipsoid with maximum distances (600 m, 400 m, 30 m) in the (U, V, W) directions;
- anisotropic distances;
- 5 samples minimum;
- 4 angular sectors;
- an optimum of 15 samples per sector;
- selection of all the samples in the target block.
The Fe grade estimated in 3D can be averaged on the 2D grid in order to compare it with the 2D
estimation: Tools / Copy Statistics / Grid -> Grid.
(snap. 6.10-7)
The estimated values of the 3D blocks of a same column are averaged on a single 2D block. This operation is similar to the accumulation calculation.
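The averaging performed by Tools / Copy Statistics / Grid -> Grid can be pictured as a vertical mean per column, skipping masked blocks. A sketch with a hypothetical 2x2x2 block model:

```python
import numpy as np

# Hypothetical 3D kriged block model, shape (nz, ny, nx); NaN marks a block
# outside the orebody selection.
fe3d = np.array([
    [[54.0, 50.0], [47.0, 48.0]],
    [[52.0, np.nan], [45.0, 46.0]],
])

# Average the estimated blocks of each vertical column onto a single 2D block,
# ignoring the masked blocks -- analogous to an accumulation calculation.
fe2d = np.nanmean(fe3d, axis=0)
```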
(snap. 6.11-1)
Graphic of Factor 2 vs. Factor 1 and Factor 3 vs. Factor 2 (F1 representing 93% of the variance, F2 5% and F3 1.7%) obtained from the PCA of the four different estimates.
A PCA is performed to compare the different estimates, using the menu Statistics / Statistics / Principal Component Analysis. All estimates show a good correlation: Fe Kriging and Fe Co-Kriging are very close, while Fe Residual and Fe est 3D seem globally less consistent.
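PCA on co-located estimates amounts to an eigendecomposition of their correlation matrix; the factor coordinates then feed the F2 vs. F1 and F3 vs. F2 plots. A sketch with synthetic correlated columns standing in for the four Fe estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four estimates at the grid nodes (Kriging,
# Co-Kriging, Residual, 3D averaged): one common signal plus method noise.
base = rng.normal(8.0, 1.5, size=1000)
X = np.column_stack([base + rng.normal(0, s, 1000) for s in (0.1, 0.1, 0.4, 0.5)])

# PCA = eigendecomposition of the correlation matrix of the estimates.
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
corr = np.corrcoef(Xc, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)          # ascending eigenvalues
explained = eigval[::-1] / eigval.sum()        # variance share of F1, F2, ...

# Factor coordinates of each grid node, for the F2 vs F1 and F3 vs F2 plots.
factors = Xc @ eigvec[:, ::-1]
```

With strongly correlated estimates, the first factor dominates, as in the case study where F1 carries 93% of the variance.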
The study covers the use of estimation and simulations, from Kriging to
Cokriging, External Drift and Collocated Cokriging.
Important Note:
Before starting this study, it is strongly advised to read the Beginner's Guide, especially the following paragraphs: Handling Isatis, the tutorial Familiarizing with Isatis Basics, and Batch Processing & Journal Files.
All the data sets are available in the Isatis installation directory (usually C:\Program Files\Geovariances\Isatis\DataSets\). This directory also contains a journal file including all the steps of the case study. In case you get stuck during the case study, use the journal file to perform all the actions according to the book.
(snap. 8.1-1)
The datasets are located in two separate ASCII files (in the Isatis installation directory, under the Datasets/Petroleum sub-directory):
- The file petroleum_wells.hd contains the data collected at 55 wells. In addition to the coordinates, the file contains the target variable (Porosity) and the selection (Sampling), which concerns the 12 initial appraisal wells.
- The file petroleum_seismic.hd contains a regular grid where one seismic attribute has been measured: the normalized acoustic impedance (Norm AI). The grid is composed of 260 by 130 nodes with a 40ft x 80ft mesh.
Both files are loaded using the File / Import / ASCII facility in the same directory (Risk_Analysis),
in files respectively called Wells and Seismic.
(snap. 8.1-2)
(snap. 8.1-3)
Using the File / Data File Manager, you can check that both files cover the same area of 10400ft by
10400ft. You can also check the basic statistics about the two variables of interest.
Variable            Porosity   Norm AI
Number of samples   55         33800
Minimum             6.1        -1
Maximum             11.8       0.
Mean                8.2        -0.551
Std Deviation       1.4        0.155
At this stage, no correlation coefficient between the two variables can be derived, as they are not
defined at the same locations.
In this case study, the structural analysis will be performed using the whole set of 55 wells, whereas
any estimation or simulation procedure will be based on only the 12 appraisal wells, in order to
produce stronger differences in the results of various techniques.
(fig. 8.2-1) Base map of the 55 wells, histogram of Porosity (Nb Samples: 55, Minimum: 6.1, Maximum: 11.8, Mean: 8.2, Std. Dev.: 1.4) and experimental variogram of Porosity (the labels give the number of pairs per lag).
The area of interest is homogeneously covered by the wells. The Report Global Statistics item from the Menu bar of the variogram graphic window produces the following printout, where the variogram details can be checked. The number of pairs is reasonably stable (above 70) up to 9000ft: this is consistent with the regular sampling of the area by the wells.
Variable : Porosity
Mean of variable     = 8.2
Variance of variable = 1.862460

Rank   Number of pairs   Average distance   Value
 1      73                1301.15           1.143562
 2      94                1911.80           1.460053
 3     199                2906.72           1.863894
 4     217                4054.00           2.068571
 5     194                5092.86           1.987912
 6     160                5882.27           1.817500
 7     203                6895.25           1.909532
 8     142                8014.89           2.118310
 9      99                8937.23           2.070556
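Each line of this printout is half the mean squared difference of the sample pairs falling in a distance class. A sketch of the computation with synthetic well coordinates standing in for the real data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the 55 well coordinates (ft) and porosity values.
xy = rng.uniform(0, 10400, size=(55, 2))
poro = rng.normal(8.2, 1.4, size=55)

def experimental_variogram(xy, z, lag, nlags):
    """Omnidirectional variogram: gamma(h) = mean of (z_i - z_j)^2 / 2 per lag class."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    i, j = np.triu_indices(len(z), k=1)          # each pair counted once
    dist, dz2 = d[i, j], (z[i] - z[j]) ** 2
    gamma, npairs, avg_dist = [], [], []
    for k in range(nlags):
        sel = (dist >= k * lag) & (dist < (k + 1) * lag)
        npairs.append(int(sel.sum()))
        gamma.append(0.5 * dz2[sel].mean() if sel.any() else np.nan)
        avg_dist.append(dist[sel].mean() if sel.any() else np.nan)
    return np.array(avg_dist), np.array(gamma), np.array(npairs)

h, gam, n = experimental_variogram(xy, poro, lag=1000.0, nlags=9)
```

The three returned arrays correspond to the Average distance, Value and Number of pairs columns of the printout.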
Coming back to the variogram Application / Calculation Parameters, ask to calculate the variogram cloud. Highlight pairs corresponding to small distances (around 1000ft) and a high variability on the variogram cloud: these pairs are represented by asterisks on the variogram cloud, and the corresponding data are highlighted on the base map and joined by a segment. No point in particular can be designated as responsible for these pairs (outlier): as usual, they simply involve the samples corresponding to high porosity values.
(fig. 8.2-2) Highlighted pairs of the variogram cloud located on the base map of the Porosity data.
(fig. 8.2-3) Variogram cloud of Porosity as a function of distance.
To save this experimental variogram in a Parameter File in order to fit a variogram model on it,
click on Application / Save in Parameter File and call it Porosity.
(snap. 8.3-1)
Pressing the Print button in this panel produces the following printout where we can check that the
model is the nesting of a short range spherical and a nugget effect.
(snap. 8.3-2)
(fig. 8.3-1) Experimental variogram of Porosity together with the fitted model.
8.4 Cross-Validation
The cross-validation technique (Statistics / Modeling / Cross-validation) enables you to evaluate the consistency between your data and the chosen variogram model. It consists in removing each data point in turn and re-estimating it (by kriging) from its neighbors, using the previously fitted model.
An essential parameter of this phase is the neighborhood, which tells the system which data points, located close enough to the target, will be used during the estimation. In this case study, because of the small number of points, a Unique neighborhood is used; this choice means that all the information will systematically be used for the estimation of any target point in the field. Therefore, for the cross-validation, each data point is estimated from all the other data.
This neighborhood also has to be saved in a Parameter File that will be called Porosity.
(snap. 8.4-1)
When a point is considered, the kriging technique provides the estimated value Z*, which can be compared to the initial known value Z, and the standard deviation of the estimation σ*, which depends on the model and the location of the neighboring information. The experimental error between the estimated and the true values (Z - Z*) can be scaled by the predicted standard deviation of the estimation (σ*) to produce the standardized error. This quantity, which should be a normal variable, characterizes the ability of the variogram model to re-estimate correctly the data values from their neighboring information only. If the value lies outside a given interval, the point requires some attention: defining for instance the interval as [-2.5 ; 2.5] (that is to say, setting the threshold to 2.5) enables you to focus on the 1% extreme values of a normal distribution. Such a point may arbitrarily be called an "outlier".
The procedure provides the statistics (mean and variance) of the estimation raw and standardized
errors, based on the 55 data points. The same statistics are also calculated when the outliers have
been removed: the remaining data are called the robust data.
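Given the cross-validation outputs, these statistics and the robust-data split reduce to a few array operations. A sketch with synthetic values standing in for Z, Z* and σ* (the real ones come from the procedure):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cross-validation output for 55 wells: true value Z, re-estimated
# value Z*, and kriging standard deviation sigma*.
z_true = rng.normal(8.2, 1.4, size=55)
z_star = z_true + rng.normal(0.0, 1.0, size=55)
sigma_star = np.full(55, 1.0)

err = z_true - z_star
std_err = err / sigma_star

# A datum is "robust" when its standardized error lies in [-2.5, 2.5].
robust = np.abs(std_err) <= 2.5

stats = {
    "mean error": err.mean(),
    "variance of error": err.var(),
    "mean std. error": std_err.mean(),
    "variance of std. error": std_err.var(),
    "n robust": int(robust.sum()),
}
```

A well-calibrated model gives a mean error close to 0 and a variance of the standardized error close to 1, which is what the printout below checks.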
Statistics based on 55 test data

             Mean       Variance
Error        -0.00533   1.18778
Std. Error   -0.00257   1.02851
A data is robust when its Standardized Error lies between -2.500000 and 2.500000
Note - The key values of this printout are the mean error, which should be close to zero, and the variance of the standardized error, which should be close to 1. It is not recommended to pay too much attention to the statistics obtained on the robust data alone, as the model has been fitted taking the outliers into account.
The procedure also provides standard displays which reflect the consistency between the data, the neighborhood and the model: each sample is represented with a + sign whose dimension is proportional to the variable, whereas the outliers are displayed with a distinct symbol. They include:
- the scatter plot of the true value versus the estimated value,
- the scatter plot of the standardized error of estimation versus the estimated value.
(fig. 8.4-1)
A last feature of this cross-validation is the possibility of using this variance of standardized error (the score) to rescale the model. As a matter of fact, the kriging estimate, and therefore the estimation error, does not depend on the sill of the model, whereas the variance of estimation is directly proportional to this sill. Multiplying the sill by the score ensures that the cross-validation performed with this new model, all other parameters remaining unchanged, provides a variance of standardized error of estimation exactly equal to 1.
This last possibility must be used with caution, especially if the score is far from 1, as one can hardly imagine that the only imperfection in the model could be its sill. Instead, it is recommended to check the outliers first and possibly re-run the whole procedure (structural analysis and cross-validation).
In the following, the Porosity variogram model is considered to be the best possible one.
8.5 Estimation
The task is to estimate by kriging the value of the porosity based on the 12 appraisal wells at the
nodes of the imported seismic grid, using the fitted model and the unique neighborhood.
The kriging operation is performed using the Interpolate / Estimation / (Co-)Kriging procedure. It is compulsory to define:
- the variable of interest (Porosity) in the Input File (Wells). As discussed earlier, the estimation operations will be performed using the 12 appraisal wells only; this is the reason why the Sampling selection is specified,
- the names of the output variables for the estimation and the corresponding standard deviation.
(snap. 8.5-1)
The Test button can be used to visualize the weight attached to each data point for the estimation of one target grid node. It can also be used to check the impact of a change in the Model or the Neighborhood parameters on the kriging weights.
The 33800 grid nodes are estimated with values ranging from 6.6 to 11.3. These statistics can usefully be compared with the ones of the original porosity variable, which lies between 6.1 and 11.8. The difference reflects the smoothing effect of kriging.
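The unique-neighborhood kriging described above can be sketched as a single linear system shared by every target node. The spherical model parameters and well values below are illustrative, not the fitted case-study ones:

```python
import numpy as np

def spherical_cov(h, sill, a):
    """Covariance of a spherical variogram model with sill `sill` and range `a`."""
    h = np.asarray(h, dtype=float)
    return np.where(h < a, sill * (1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3), 0.0)

def ordinary_kriging(xy, z, targets, sill, a):
    """Ordinary kriging in a unique neighborhood: one (n+1)x(n+1) system
    built from all the data, solved for every target node."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = spherical_cov(d, sill, a)
    A[n, :n] = A[:n, n] = 1.0  # unbiasedness: the weights sum to 1
    est, std = [], []
    for t in targets:
        b = np.append(spherical_cov(np.linalg.norm(xy - t, axis=1), sill, a), 1.0)
        lam = np.linalg.solve(A, b)       # weights + Lagrange multiplier
        est.append(lam[:n] @ z)
        std.append(np.sqrt(max(sill - lam @ b, 0.0)))  # kriging variance = C(0) - lam.b
    return np.array(est), np.array(std)

# Four hypothetical wells and one central target node.
xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
poro = np.array([8.0, 9.0, 7.5, 8.5])
est, std = ordinary_kriging(xy, poro, np.array([[50.0, 50.0]]), sill=2.0, a=150.0)
# By symmetry the four weights are equal, so est[0] is the plain mean 8.25.
```

The estimator is exact at the data points and pulls interpolated values toward the local mean elsewhere, which is the smoothing effect mentioned above.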
The kriging results are now visualized using several combinations of the display capabilities. You
are going to create a new Display template, that consists in an overlay of a grid raster and porosity
data locations. All the Display facilities are explained in detail in the "Displaying & Editing Graphics" chapter of the Isatis Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page is popped up,
together with a Contents window. You have to specify in this window the contents of your graphic.
To achieve that:
- Firstly, give a name to the template you are creating: Phi. This will allow you to easily display this template again later.
- In the Contents list, double-click on the Raster item. A new window appears, in order to let you specify which variable you want to display and with which color scale:
  - In the Data area, in the Petroleum / Seismic file, select the variable Kriging (Porosity).
  - Specify the title that will be given to the Raster part of the legend, for instance Phi.
  - In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the Porosity variable. To create a new color scale: click on the Color Scale button, double-click on New Color Scale, enter a name: Porosity, and press OK. Click on the Edit button. In the Color Scale Definition window:
    - In the Bounds Definition, choose User Defined Classes.
    - Click on the Bounds button, enter 14 as the New Number of Classes, 6 and 13 as the Minimum and Maximum values. Press OK.
    - In the Colors area, click on Color Sampling to choose 25 regularly spaced colors in the 32-color palette. This will improve the contrast in the resulting display.
    - Switch on the Invert Color Order toggle in order to assign the red colors to the large Phi values.
    - Click on the Undefined Values button and select for instance Transparent.
    - In the Legend area, switch off the Automatic Spacing between Tick Marks button, enter 10 as the reference tickmark and 1 as the step between the tickmarks. Then specify that you do not want your final color scale to exceed 6 cm. Switch off the Automatic Format toggle and enter 0 as the number of digits. Switch off the Display Undefined Values toggle.
    - Click on OK.
  - In the Item contents for: Raster window, click on Display Current Item to display the result.
  - Click on OK.
(snap. 8.5-2)
- Back in the Contents list, double-click on the Basemap item to represent the Porosity variable with symbols proportional to the variable value. A new Item contents window appears. In the Data area, select the Wells / Porosity variable as the Proportional Variable and activate the Sampling selection. Leave the other parameters unchanged; by default, black crosses will be displayed with a size proportional to the Porosity value. Click on Display Current Item to check your parameters, then on Display to see all the previously defined components of your graphic. Click on OK to close the Item contents panel.
In the Item list, you can select any item and decide whether or not you want to display its legend. Use the Up and Down arrows to modify the order of the items in the final Display.
Close the contents window. Your final graphic window should be similar to the one displayed
hereafter.
(fig. 8.5-1) Kriging (Porosity) map displayed with the Phi color scale, overlaid with the proportional symbols of the Porosity data.
The label position may be modified using the Management / View Label / Move unconstrained
The * and [Not saved] symbols in the name of the graphic page indicate that some recent modifications have not been stored in the Phi graphic template, and that this template has never been saved.
Click on Application / Store Page to save them. You can now close your window.
- one scarce data set containing few samples of good quality (this usually corresponds to the well information),
- one data set containing a large amount of samples covering the whole field but with poor accuracy (this usually corresponds to the seismic information).
In this case, one well-known method consists in integrating these two sources of information using
the Kriging with External Drift technique. It consists in performing the standard kriging algorithm,
based on the variable measured at the wells, considering that the drift (overall shape) is locally represented by the seismic information. This requires such information (or background) to be known
everywhere in the field or at least to be informed densely enough so that the value at any point (well
location, for instance) can be obtained using a quick local interpolation.
As in any kriging procedure a model is required about the spatial correlation. In the External Drift
case, this model has to be inferred knowing that the seismic information serves as a local drift: this
refers to the Non-stationary Structural Analysis.
The application Interpolate / Estimation / Bundled External Drift Kriging provides all these steps in a single procedure, which assumes that:
- the seismic background is defined on a regular grid. It is interpolated at the well locations from the target nodes using a quick bilinear interpolator;
- the model of the target variable (measured at the wells), taking the seismic information into account as a drift, can be either provided by the user interactively or automatically calculated in the scope of the Intrinsic Random Functions of order k theory, using polynomial isotropic generalized covariances. For more information about the structural analysis in IRF-k, the user should refer to the "Non stationary modeling" technical reference (available from the On-Line documentation). The only choice when using the automatic calculation is whether or not to allow a nugget effect as a possible component of the final model. To force the estimation to honor the well information and avoid misties, a quite common practice is to forbid this nugget effect component.
Still using the Sampling selection and the unique neighborhood (Porosity), the procedure first
determines the optimal structure forbidding any nugget effect component and then performs the
estimation.
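The core of the external-drift system can be sketched as an ordinary kriging system augmented with one extra condition forcing the weights to reproduce the seismic background at the target. This leaves aside the IRF-k automatic structure identification that the bundled procedure performs; the covariance and data values below are illustrative:

```python
import numpy as np

def spherical_cov(h, sill, a):
    h = np.asarray(h, dtype=float)
    return np.where(h < a, sill * (1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3), 0.0)

def kriging_external_drift(xy, z, s_data, targets, s_targets, sill, a):
    """Kriging with external drift: the kriging system carries one condition
    per drift function (constant and the seismic background s), so that
    sum(lam_i) = 1 and sum(lam_i * s_i) = s(target)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = spherical_cov(d, sill, a)
    A[n, :n] = A[:n, n] = 1.0             # constant drift condition
    A[n + 1, :n] = A[:n, n + 1] = s_data  # external drift condition
    est = []
    for t, st in zip(targets, s_targets):
        b = np.concatenate(
            [spherical_cov(np.linalg.norm(xy - t, axis=1), sill, a), [1.0, st]]
        )
        lam = np.linalg.solve(A, b)
        est.append(lam[:n] @ z)
    return np.array(est)

# Hypothetical wells whose porosity is exactly linear in the seismic drift.
xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
s_data = np.array([1.0, 2.0, 3.0, 4.0])
poro = 2.0 + 0.5 * s_data
est = kriging_external_drift(
    xy, poro, s_data, np.array([[50.0, 50.0]]), np.array([2.5]), sill=2.0, a=150.0
)
```

Because the wells here are exactly linear in the drift, the estimate reproduces the regression value at the target (2 + 0.5 x 2.5 = 3.25); with non-zero residuals, the kriged residual part is added on top of the locally fitted drift.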
The results are stored in the output grid file (called seismic) with the following names:
(snap. 8.6-1)
The printout generated by this procedure details the contents of the optimal model that has been
used for the estimation:
======================================================================
|                     Structure Identification                       |
======================================================================
.../...
Drift Identification
====================
The drift trials are sorted by increasing Mean Rank
The one with the smallest Mean Rank is preferred
Please also pay attention to the Mean Squared Error criterion
T1 : 1 f1
T2 : 1 x y f1

Trial   Mean Error   Mean Sq. Error   Mean Rank
T2      9.194e-03    5.547e-01        1.417
T1      1.370e-02    6.223e-01        1.583

Covariance Identification
=========================
The models are sorted according to the scores (closest to 1. first)
When the Score is not calculated (N/A), the model is not valid
as the coefficient (sill) of one basic structure, at least, is negative

Score   S1          S2          S3
0.869   1.099e-01   2.141e-02   0.000e+00
1.192   0.000e+00   6.281e-02   0.000e+00
0.771   1.871e-01   0.000e+00   0.000e+00
1.869   0.000e+00   0.000e+00   3.409e-02

Successfully processed = 12
CPU Time               = 0:00:00 (0 sec.)
Elapsed Time           = 0:00:00 (0 sec.)
The 33800 grid nodes are estimated with values ranging from 5.1 to 13.3, to be compared with the data information, where the porosity varies from 6.1 to 11.8.
To display the ED Kriging result, you can easily use the previously saved display called Phi. Click
on Display / Phi in the main Isatis window. You just need to modify the variable defined in the Grid
Raster contents: replace the previous Kriging (Porosity) by ED Kriging (Porosity) and click on
Display.
(fig. 8.6-1) ED Kriging (Porosity) map displayed with the Phi color scale.
The impact of the seismic information used as the external drift is clear, although both estimations
have been carried out using the same amount of data (hard) information, namely the 12 appraisal
wells.
The External Drift method can be seen as a linear regression of the variable on the drift information. In other words, the result is a combination of the drift (scaled and shifted) and the residuals. The usual drawbacks of this method are that:
- the final map resembles the drift map as soon as the two variables are highly correlated (at the well locations), and tends to ignore the drift map in the opposite case;
- the drift information is used as a deterministic function, not as a random function, and the estimation error does not take into account the variability of this drift.
(snap. 8.7-1)
The Statistics / Exploratory Data Analysis application is used to check the correlation between the
two variables: on the basis of the 55 wells, the correlation coefficient is 0.826 and is visualized in
the following scatter diagram where the linear regression line of the impedance versus the porosity
has been plotted. The two simple variograms and the cross-variogram are also calculated for 10 lags
of 1000ft each, regardless of the direction (omnidirectional).
(fig. 8.7-1) Scatter diagram of Impedance at wells versus Porosity (rho = 0.826) with the linear regression line, together with the simple variograms of the two variables and their cross-variogram (the labels give the number of pairs per lag).
Note - The variance of the acoustic impedance variable sampled at the 55 well locations (0.027) is close to the variance of the variable calculated on the entire data set (0.024).
The calculation parameters being similar to the previous (monovariate) structural analysis, the simple variogram of Porosity has obviously not changed. This set of experimental variograms is saved in a Parameter File called Porosity & Impedance.
The Statistics / Variogram Fitting procedure is used to derive a model which should match the three experimental variograms simultaneously. To fit a model in a multivariate case, in the framework of the Linear Model of Coregionalization, the principle is to define a set of basic structures by clicking the Edit button. Any simple or cross-variogram will then be expressed as a linear combination of these structures. The two basic structures that will compose the final model are:
- a nugget effect,
Once you have entered the two structures, the use of the Automatic Sill Fitting option ensures that the cokriging matrix is positive definite.
(snap. 8.7-2)
(fig. 8.7-2)
Pressing the Print button in Model Definition panel produces the following printout. This model is
finally saved in a new Parameter File called Porosity & Impedance.
Model : Covariance part
=======================
Number of variables = 2
- Variable 1 : Porosity
- Variable 2 : Impedance at wells
.../...
Number of basic structures = 2

S1 : Nugget effect
Variance-Covariance matrix :
             Variable 1   Variable 2
Variable 1   0.0039       0.0111
Variable 2   0.0111       0.3162
.../...
Variance-Covariance matrix :
             Variable 1   Variable 2
Variable 1   0.0258       0.1915
Variable 2   0.1915       1.6755
.../...
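The guarantee provided by the Automatic Sill Fitting can be verified on this printout: each matrix of sills must be positive semi-definite, which for two variables reduces to non-negative diagonal sills and a Cauchy-Schwarz bound on the cross sill.

```python
import numpy as np

# Coregionalization matrices taken from the printout above.
B1 = np.array([[0.0039, 0.0111],
               [0.0111, 0.3162]])   # S1 : nugget effect
B2 = np.array([[0.0258, 0.1915],
               [0.1915, 1.6755]])   # second basic structure

# The linear model of coregionalization is valid iff every matrix of sills is
# positive semi-definite, i.e. has no negative eigenvalue.
for B in (B1, B2):
    assert np.linalg.eigvalsh(B).min() >= -1e-12

# Equivalent 2x2 check: the cross sill is bounded by the simple sills.
assert B1[0, 0] * B1[1, 1] >= B1[0, 1] ** 2
assert B2[0, 0] * B2[1, 1] >= B2[0, 1] ** 2
```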
8.7.2 Cross-Validation
The Statistics / Cross-Validation procedure checks the consistency of the model with respect to the data. When performing the cross-validation in the multivariate case, it is possible, for each target point, to choose among several Special Kriging Options.
Note - The latter possibility is automatically selected in the Unique Neighborhood case. In order to
try the first solution, the user should use the Moving Neighborhood instead, which can be extended
by increasing the radius (20000ft) and the optimum count of points (54) for the neighborhood
search.
(snap. 8.7-3)
The cross-validation results are slightly better than in the monovariate case. This is due to the fact
that the seismic information (correlated to the porosity) is used even at the target point where the
porosity value is removed.
======================================================================
|                          Cross-validation                          |
======================================================================
             Mean       Variance
Std. Error   -0.00293   1.01547

A data is robust when its Standardized Error lies between -2.500000 and 2.500000
8.7.3 Estimation
The estimation is performed using the Cokriging technique where, at each target grid node, the
porosity result is obtained as a linear combination of the porosity and the acoustic impedance measured at the 12 appraisal wells only (isotopic neighborhood). The Interpolate / Estimation / (Co)Kriging panel requires the definition of the two variables of interest in the input file (Wells), the
model (Porosity & Impedance) and the neighborhood (Porosity). It also requires the definition of
the variables in the output grid file (Seismic) which will receive the result of the estimation:
Cokriging (Porosity) for the estimation of the porosity and Cokriging St. Dev. (Porosity) for its
standard deviation.
(snap. 8.7-4)
It is obviously useless to compute the estimation of the acoustic impedance obtained by cokriging
based on the 12 appraisal wells only.
The 33800 grid nodes are estimated with values ranging from 6.8 to 11.2. The cokriging estimate is
displayed using the same parameters as before.
(fig. 8.7-3) Cokriging (Porosity) map displayed with the Phi color scale.
This map is very similar to the one obtained with the porosity variable alone: the few differences
are only linked to the auxiliary variable (seismic information) and to the choice of the multivariate
model.
Obviously, a large amount of information is lost when reducing the seismic information to its value
at the well locations only.
The next part of the study deals with the Collocated Cokriging technique, which aims at integrating
through a cokriging approach the whole auxiliary information provided by the Norm AI variable,
exhaustively known on the seismic grid.
Click Number of Variables and check the Collocated Variable(s) option in the following subwindow:
(snap. 8.8-1)
- a new name for the variable to be created, for instance Collocated Cokriging (Porosity),
- the collocated variable in the Output File variable list: this refers to the seismic information called Norm AI,
- the collocated cokriging as a Special Kriging Option in the main window; the collocated variable in the Input File should be indicated: this refers to the variable carrying the seismic information called Impedance at Wells (which is defined as target variable #2 in the Input File).
(snap. 8.8-2)
The 33800 grid nodes are estimated with the values ranging from 5.6 to 12.5.
Note - The kriging matrix systematically involves one extra point whose location varies with the target grid node. Therefore, the Unique Neighborhood computational trick, which consists in inverting the kriging matrix only once, cannot be exploited anymore. A partial inversion is used instead, but the computing time is significantly longer than for the traditional cokriging.
(fig. 8.8-1) Collocated Cokriging (Porosity) map displayed with the Phi color scale.
Compared to the External Drift technique, the link between the two variables is introduced through the structural model rather than via a global correlation: this allows more flexibility, as this correlation may vary with the distance. This is why it is essential to be cautious when performing the structural analysis.
Collocated Cokriging with Markov-Bayes assumption:
The idea in this paragraph is to take full advantage of the seismic information, especially during the structural analysis, by choosing a simplified multivariate model based on the seismic information. This may be useful when the number of wells is not large enough to allow a proper variogram calculation.
The next graphic shows a variogram map obtained from the Exploratory Data Analysis window (last statistical representation at the right) for the Norm AI variable defined on the grid, using 50 lags of 120ft for the calculation parameters. This tool allows you to easily investigate potential anisotropies. In this case, directions of better continuity, N10E and N120, can be quite clearly identified: click with the right mouse button on one of the small tickmarks corresponding to these directions, and then on Activate Direction.
Note - This calculation can be quite time demanding when applied to large grids. In such cases, a Sampling selection can be performed beforehand to subsample the grid information; the variogram map calculation is then performed only on this selection.
(snap. 8.8-3)
It is advised to analyze this apparent anisotropy cautiously. Actually, in the present case, this anisotropy is not intrinsic to the impedance behavior over the area of interest; it is more likely due to the presence of a North-South low impedance band around X equal to 2000 to 4000ft. It is therefore ignored and a standard experimental variogram is computed.
By default, the grid organization is used, as it allows a more efficient computation of the variogram, for instance along the main grid axes. Switch off the Use the Grid Organization toggle on the Exploratory Data Analysis main window and click on the variogram icon to compute an omnidirectional variogram of the Norm AI variable on the grid. Compute 50 lags of 120ft and save (Application / Save in Parameter File menu of the graphic page) the experimental variogram under a new Parameter File called Norm AI.
The Statistics / Variogram Fitting procedure is used to fit a model to the acoustic impedance experimental variogram. A possible model is obtained by nesting, in the Manual Fitting tab:
- a Generalized Cauchy structure with a range of 1750ft (third parameter equal to 1),
(snap. 8.8-4)
(fig. 8.8-2) Experimental variogram of Norm AI together with the fitted model.
To run a Bundled Collocated Cokriging procedure, it is still compulsory to define a completely consistent multivariate model for porosity and acoustic impedance.
The idea of the Markov-Bayes assumption is simply to derive the cross-variogram and the variogram of the porosity by rescaling the acoustic impedance variogram. The scaling factors are obtained from the ratio of the experimental variances of the two data sets, Var Norm AI (0.0313) / Var Porosity (3.24) = 0.00966, and from the correlation coefficient at the wells (0.915) between the two variables Porosity and Impedance at wells, which can be obtained from the scatter diagram of the Exploratory Data Analysis.
Note - This correlation coefficient corresponds to the porosity values within the Sampling selection and the Norm AI background variable after migration from the grid to the well locations (Grid to point option).
The cokriging process, by construction, operates within the scope of the model of intrinsic correlation. In this case, kriging and cokriging lead to the same result for isotopic data sets (all variables
informed at all data points). In the collocated cokriging case, an additional acoustic impedance sample, located at the target grid node, is introduced in the estimation process.
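The Markov-Bayes rescaling itself is a one-line operation once the acoustic impedance variogram is fitted. The sketch below uses the variances and correlation quoted above, with an illustrative spherical shape standing in for the fitted Norm AI model:

```python
import numpy as np

# Values quoted in the text: experimental variances and the correlation at wells.
var_ai, var_poro, rho = 0.0313, 3.24, 0.915

def gamma_ai(h, sill=var_ai, a=1750.0):
    """Stand-in for the fitted Norm AI variogram (spherical shape used here
    purely for illustration, not the actual fitted structure)."""
    h = np.asarray(h, dtype=float)
    return np.where(h < a, sill * (1.5 * h / a - 0.5 * (h / a) ** 3), sill)

# Markov-Bayes rescaling: the porosity variogram and the cross-variogram are
# both proportional to the acoustic impedance variogram.
def gamma_poro(h):
    return gamma_ai(h) * (var_poro / var_ai)

def gamma_cross(h):
    return gamma_ai(h) * rho * np.sqrt(var_poro / var_ai)

# The derived sills satisfy the intrinsic-correlation relation at every lag:
h = np.array([500.0, 1750.0, 5000.0])
assert np.allclose(gamma_cross(h), rho * np.sqrt(gamma_poro(h) * gamma_ai(h)))
```

Since both derived variograms are proportional to the Norm AI variogram, the resulting bivariate model is intrinsically correlated, which is precisely what makes the collocated shortcut legitimate.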
To perform Collocated Cokriging with Markov hypothesis, select the window Interpolate / Estimation / Bundled Collocated Cokriging. The results of this bundled Collocated Cokriging process are
stored in variables called:
(snap. 8.8-5)
(fig. 8.8-3) CB Kriging (Porosity) map displayed with the Phi color scale.
8.9 Simulations
As a matter of fact, linear estimation techniques, such as kriging or cokriging, do not provide a correct answer if the user is interested in estimating the probability that the porosity exceeds a given threshold. Applying a cutoff operator (selecting every grid node above the threshold) on any of the previous maps would lead to a two-color map (each value is either above or below the threshold); this cannot be used as a probability map, and it can be demonstrated that this result is biased. A simple argument consists in noticing that the standard deviation of the estimation (which proves that this estimated value is not the truth) is not used in the cutoff operation. Drawing a value of the error at random within an interval calibrated on a multiple of this standard deviation, centered on the estimation, would correct this fact on a one-grid-node basis. But drawing this correction at random for two consecutive nodes does not take into consideration that the estimation (and therefore its related standard deviation) should be consistent with the spatial correlation model.
A correct solution is to draw several simulations at random, each reflecting the variability of the model, and to transform each of them into a two-color map by applying the cutoff operator. Then, on a grid-node basis, it is possible to count the number of times the simulated value passes the threshold and normalize by the total number of simulations: this provides an unbiased probability estimate. The accuracy of this probability improves as more simulations are drawn, assuming they are all uncorrelated (apart from sharing the same model and the same conditioning data points).
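The counting procedure just described can be sketched as follows (toy values, not the case-study simulations; the seed plays the role described below, a different seed yielding an independent batch):

```python
# Unbiased exceedance probability from simulations: count, node by node,
# how often the simulated value passes the threshold, then normalize by
# the total number of simulations.
import random

def probability_map(simulations, threshold):
    """simulations: list of realizations, each a list of grid-node values."""
    n_sim = len(simulations)
    n_nodes = len(simulations[0])
    return [sum(sim[node] > threshold for sim in simulations) / n_sim
            for node in range(n_nodes)]

# Toy example: 1000 "simulations" of a 3-node grid. The series of random
# draws is controlled by a user-defined seed.
rng = random.Random(42)
sims = [[rng.gauss(9.0, 1.0), rng.gauss(8.0, 1.0), rng.gauss(10.0, 1.0)]
        for _ in range(1000)]
probs = probability_map(sims, 9.0)
print([round(p, 2) for p in probs])  # close to [0.5, 0.16, 0.84]
```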
As implemented in Isatis, the simulation technique is based on a random number generator which
ensures this independence. Any series of random numbers is related to the value of a seed which is
defined by the user. Therefore, in order to draw several series of independent simulations, it suffices
to change this seed.
Several simulation techniques are available in Isatis. The one which offers a reasonable trade-off between quality and computing time is the Turning Bands Method, which will be used for all the techniques described in this section. The principle of this technique is to produce a non-conditional simulation first (a map which reflects the variogram but does not honor the data) and then to correct this map by adding the map obtained by interpolating the experimental error between the data and the non-conditional simulated values at the data points: this is called conditioning. This last interpolation is performed by kriging (in the broad sense) using the input model. The final map is called a conditional simulation. The only parameter of this method is the number of bands, which will be fixed to 200 in the rest of this section. For more information on the simulation techniques, the user should refer to the On-Line documentation.
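The conditioning step can be sketched in one dimension. Everything below is an assumption for illustration (exponential covariance, 500 m range, two data points); Isatis performs the equivalent operation internally with the input model:

```python
# Conditioning sketch: the conditional simulation equals the
# non-conditional one plus the kriged interpolation of the mismatch
# (data value minus non-conditional value) observed at the data points.
import math

def cov(h, sill=1.0, rng=500.0):
    """Assumed exponential covariance, for illustration only."""
    return sill * math.exp(-abs(h) / rng)

def sk_weights(x_data, x0):
    """Simple-kriging weights for two data points, via Cramer's rule."""
    a, b = x_data
    c11, c22, c12 = cov(0.0), cov(0.0), cov(a - b)
    r1, r2 = cov(x0 - a), cov(x0 - b)
    det = c11 * c22 - c12 * c12
    return ((r1 * c22 - r2 * c12) / det, (r2 * c11 - r1 * c12) / det)

def condition(x0, x_data, z_data, z_nc_at_data, z_nc_at_x0):
    w = sk_weights(x_data, x0)
    correction = sum(wi * (z - znc)
                     for wi, z, znc in zip(w, z_data, z_nc_at_data))
    return z_nc_at_x0 + correction

# At a data point the correction forces the simulation to honor the datum:
z = condition(100.0, (100.0, 400.0), (2.0, 3.0), (1.5, 2.8), 1.5)
print(round(z, 6))  # 2.0 (exact at the conditioning point)
```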
Each conditional simulation is supposed to be similar to the unknown reality. It honors the few
wells and reproduces the input variogram (calculated from these few data).
An additional constraint is to reproduce the histogram. Most simulation techniques assume (multi)gaussian distributions; it is therefore usually recommended to transform the original data prior to using them in a simulation process, unless:
This can be checked on both variables: the porosity from the well data file and the acoustic impedance from the seismic grid file. In Statistics / Exploratory Data Analysis, the Quantile-Quantile plot
graphically compares any experimental histogram to a set of theoretical distributions, for instance
gaussian in the present case.
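The principle of this Quantile-Quantile comparison can be sketched as follows. The sample is synthetic; only the Gauss(m=8.2, s=1.4) reference mirrors the porosity figure below.

```python
# Q-Q sketch: sort the data, pair each experimental quantile with the
# corresponding theoretical gaussian quantile; if the pairs line up on
# the first bisector, the gaussian hypothesis is supported.
import random
from statistics import NormalDist

def qq_pairs(data, dist):
    """Return (theoretical, experimental) quantile pairs."""
    xs = sorted(data)
    n = len(xs)
    return [(dist.inv_cdf((i + 0.5) / n), x) for i, x in enumerate(xs)]

rng = random.Random(0)
porosity = [rng.gauss(8.2, 1.4) for _ in range(200)]  # stand-in sample
pairs = qq_pairs(porosity, NormalDist(8.2, 1.4))

# For this truly gaussian sample the pairs hug the 45-degree line:
worst_gap = max(abs(t - x) for t, x in pairs)
print(round(worst_gap, 2))
```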
[Quantile-Quantile plot: Porosity (5 to 12) against Gauss(m=8.2; s=1.4)]
(fig. 8.9-1)
[Quantile-Quantile plot: Norm AI (-0.9 to -0.2) against Gauss(m=-0.551; s=0.155)]
(fig. 8.9-2)
Visual comparison shows that the hypothesis of a normal distribution does not really hold. Nevertheless, for simplicity, it is decided to perform the simulations directly on the raw variables, bypassing the gaussian anamorphosis operation. Hence, each spatial correlation model used in the estimation section can be reused directly.
Note - An example of gaussian transformation (called anamorphosis) can be found in the Non
Stationary & Volumetrics case study, for the thickness variable.
These simulations are illustrated in the next paragraphs in the univariate case and for the external drift technique. Similarly, cosimulations and collocated cosimulations (bundled or real) could be performed using the same model as for the estimation step.
For instance, perform ten simulations using 200 turning bands, storing their results in one Macro
Variable called Simu Porosity.
Should you wish to generate several batches of simulations (say 10 at a time), you have to modify the seed for each run, as discussed earlier. You also have to increase the index given to the first simulation by 10 if you want the indices in the Macro Variable to be consecutive.
Finally, specify the model (Porosity) and the neighborhood (Porosity) to be used during the conditioning kriging step, based on the 12 appraisal wells only. Two simulation results are displayed
below.
(snap. 8.9-1)
(fig. 8.9-3)
The Tools / Simulation Post Processing facility is used to compute the probability that the porosity is greater than a given threshold (9 in this case). Among the various possibilities, define in the Iso-Cutoff Maps one new macro-variable that will contain the probability that the variable remains above the threshold 9; the resulting map is stored under the name Proba Porosity (kriging) {9.000000}. It is displayed with a new color scale for the probability map (in raster mode), derived from the Red Yellow palette. The porosity at the 12 appraisal wells is overlaid using the Symbols type of representation, with + symbols for porosity values above 9 and o symbols for porosity values below.
[Map of P[Porosity>9%]: probability color scale from 0.00 to 1.00, X and Y in ft from 0 to 10000]
The noisy aspect of the result is due to the small number of simulations.
(fig. 8.9-4)
- determination of the optimal model, inferred taking into account the seismic information at the data points as an external drift; as for kriging, we forbid any nugget effect component,
(snap. 8.9-2)
The next graphic shows two output realizations. Due to the control brought by the seismic information, the variability between the simulations is much smaller than for the univariate simulations.
(fig. 8.9-5)
[Map of P[Porosity(ED)>9%]: probability color scale from 0.00 to 1.00, X and Y in ft from 0 to 10000]
(fig. 8.9-6)
This probability map reflects the ambiguity of the status of the auxiliary seismic variable used as an external drift: this quantity is assumed to be a known function. Hence this drift component does not introduce any randomness into the simulation process. Moreover, the scaling and shifting factors automatically derived by the kriging system remain constant from one simulation to the next and, furthermore, are the same all over the field because of the Unique Neighborhood. Therefore, because of the high correlation between the acoustic impedance and the porosity, the seismic variable almost completely controls the estimation of the probability of exceeding the threshold.
- A few wells in the ASCII file gdf_wells.hd, containing depth measurements in meters corresponding to the top of a reservoir, together with the respective thickness values.
- A 2D seismic survey in the ASCII file gdf_seismic.hd, containing depth measurements in meters corresponding to the same top structure (after velocity analysis).

A new study has first to be created. Then both data sets are imported using the File / Import / ASCII procedure into a new Directory Non Stationary; the Files are called Wells and Seismic. The files are located in the Isatis installation directory/Datasets/Non_Stationary_and_Volumetrics.
(snap. 9.1-1)
The Data File Manager can be used to derive the following statistics:

Directory Name  : Non Stationary
File Name       : Wells
Variable Name   : depth at wells
Printing Format : Decimal, Length = 10, Digits = 2
MINI= 2197.00   Q.25= 2208.00   Q.50= 2214.50   Q.75= 2284.00   MAXI= 2343.00
MEAN= 2241.17   ST.D= 42.09     ST.D/MEAN= 0.0187787
Defined Samples = 87 / 87

Directory Name  : Non Stationary
File Name       : Seismic
Variable Name   : seismic depth
Printing Format : Decimal, Length = 10, Digits = 2
MINI= 2147.00   Q.25= 2190.00   Q.50= 2215.00   Q.75= 2235.00   MAXI= 2345.00
MEAN= 2215.60   ST.D= 36.43     ST.D/MEAN= 0.0164406
Defined Samples = 1351 / 1351
The next figure illustrates a basemap of both data sets: seismic data (black crosses) and the well
data (red squares), using two basemap representations in a new Display page. The area covered by
the seismic data is much larger than the area drilled by wells.
[Basemap: seismic data (black crosses) and well data (red squares), X from 320 to 345 km, Y from 5 to 25 km]
(fig. 9.1-1)
(snap. 9.2-1)
For comparison purposes, a quick estimation of depth at wells is performed with standard kriging, using Interpolate / Quick Interpolation in Linear Model Kriging mode with all samples used for each grid estimation (Unique neighborhood).
(snap. 9.2-2)
The estimated depth, Depth from wells (quick stat), is displayed below with several types of representation:

- a Raster display, using a new color scale ranging from 2200 to 2520 by steps of 10 m,
- an Isolines display of the estimated depth, with isolines defined from 2200 to 2500 m with a 100 m step (thin black lines) and between 2219.5 and 2220.5 with a 1 m step to highlight in bold the 2220 m value,
(snap. 9.2-3)
[Experimental variogram of depth at wells: variogram values up to about 4000 over distances of a few km; the labels on the points are pair counts]
(fig. 9.3-1)
The variogram reaches the dispersion variance (1772) around 3km and keeps rising with a parabolic
behavior: this could lead to modeling issues, as the smoothest theoretical variogram model precisely has a parabolic behavior. This is actually a strong indication that the variable is not stationary
at the scale of a few kilometers.
- Determination of the optimal polynomial drift (among the possible drift trials specified by the user). The default drift trials are selected by pressing the button Automatic (no ext. drift). Once determined, this polynomial drift is subtracted from the raw variable to derive residuals.
- Determination of the best combination of generalized covariances, from a list of basic structures that can be modified by the user.
(snap. 9.3-1)
The Parameter File where the Model will ultimately be stored is called Wells. Edit it in order to open the Model Definition panel and ask for the Default Model; it is composed of:

- A nugget effect,
The scale factor of all the basic structures is automatically calculated, being equal to 10% of the field diagonal. The value of these parameters has no consequence on the model and is just kept for consistency with the variogram definition. The procedure also requires the definition of the Neighborhood used during this structural inference. Because of the small amount of data, we keep the Unique neighborhood previously defined. The structural inference in Unique Neighborhood produces the following results:
- The optimal drift is quadratic: this makes sense when trying to capture the dome shape of the global reservoir.
- The corresponding optimal generalized covariance is composed only of a nugget effect (structure 1), with a sill coefficient of 336.74.
======================================================================
|                     Structure Identification                       |
======================================================================
Data File Information:
  Directory = Non Stationary
  File      = Wells
Target File Information:
  Directory = Non Stationary
  File      = Wells
Seed File Information:
  Directory = Non Stationary
  File      = Wells
Variable(s) = depth at wells
Type        = POINT (87 points)
Model Name  = Wells
Neighborhood Name = unique - UNIQUE
.../...
Drift Identification
====================
The drift trials are sorted by increasing Mean Rank
The one with the smallest Mean Rank is preferred
Please also pay attention to the Mean Squared Error criterion
T1 : No Drift
T2 : 1 x y
T3 : 1 x y x2 xy y2

Trial   Mean Error   Mean Sq. Error   Mean Rank
T3      -3.833e-02   3.996e+02        1.276
T2      -7.317e-01   1.156e+03        2.103
T1      -3.136e-14   1.813e+03        2.621
Covariance Identification
=========================
The models are sorted according to the scores (closest to 1. first)
When the Score is not calculated (N/A), the model is not valid
as the coefficient (sill) of one basic structure, at least, is negative
S1 : Nugget effect
S2 : Order-1 G.C. - Scale = 1400.000m
S3 : Spline G.C. - Scale = 1400.000m
S4 : Order-3 G.C. - Scale = 1400.000m

Score   S1   S2   S3   S4

Successfully processed = 87
CPU Time     = 0:00:01 (1 sec.)
Elapsed Time = 0:00:02 (2 sec.)
The Model Parameter File (Wells) has been updated
It is frequently observed that, after the drift identification process, the resulting residuals present an erratic (non-structured) behavior; consequently, a covariance structure composed only of a nugget effect should not be a surprise. In some cases it is advisable to force the model to be structured by removing the nugget effect from the list of basic structures. To achieve this, click on Default Model in the Model Definition window and remove the Nugget Effect from the list. Click on OK and then on Run. This new structural inference using the same Unique Neighborhood produces the following results for the Covariance Identification step (the drift identification results obviously remain the same):
.../...
Covariance Identification
=========================
The models are sorted according to the scores (closest to 1. first)
When the Score is not calculated (N/A), the model is not valid
as the coefficient (sill) of one basic structure, at least, is negative

Successfully processed = 87
CPU Time     = 0:00:00 (0 sec.)
Elapsed Time = 0:00:01 (1 sec.)
The optimal generalized covariance is composed only of a Spline (structure 2), with a sill coefficient of 285.7.
9.3.3 Estimation
The estimation by kriging can now be performed using the standard procedure Interpolate / Estimation / (Co-)Kriging. The target variable depth at wells has to be defined in the Input File Wells, and the names of the resulting variables in the Output File Grid:

- Depth from Wells (St. Dev.) for the corresponding standard deviation.

The estimation is performed with the non-stationary model Wells and the unique neighborhood.
(snap. 9.3-2)
The results are visualized in the following figure, where the estimated depth is represented in the same way as for the previous quick estimation. The only difference is that the color scale is modified so as not to stretch beyond 2520; the values greater than 2520 are set to blank. Additionally, a red isoline is displayed for a standard deviation value equal to 15 m. This value, which indicates a rather poor precision, is exceeded almost everywhere on the field, except close to the wells.
(snap. 9.3-3)
Note - Pay attention to the fact that the angular tolerance on each directional variogram is equal to approximately 15° (180° divided into 36 angles, with a tolerance of 1 sector on each side of the direction of interest). Computing standard experimental variograms with a reference direction of N15°E and the default angular tolerance (45° divided by the number of directions) could lead to slightly different results.
(snap. 9.4-1)
The Statistics / Variogram Fitting procedure is now used to fit an anisotropic model on these experimental variograms. In the Model Initialization frame, select Spherical. Then click Constraint to allow the Anisotropy and lock the spherical sill to 1350. Click Fit to apply the automatic fitting.
By default, a Global Anisotropy is set to an angle consistent with the experimental calculation (equal to 75° in trigonometric convention in this case).
(snap. 9.4-2)
(snap. 9.4-3)
The model is stored in the Standard Parameter File Seismic by pressing the Run (Save) button.
(fig. 9.4-1)
(snap. 9.4-4)
(snap. 9.4-5)
(fig. 9.4-2)
- The map using the previous color scale almost covers the whole area.
- The top of the structure (where the wells are located) has a seismic depth around 2150 m, while the well information produces a value around 2200 m.
Before using Depth from seismic (Background) as an external drift function, it is recommended to verify the correlation between this variable and the depth information at wells. To achieve that, a kriging estimation of the seismic depth variable is performed into the Wells point file, using the same variogram and neighborhood configuration as previously; a new variable Depth from seismic (Background) is created.
The scatter diagram between the two variables is displayed hereafter. The regression line (bold line) and the first bisector (thin line) are indicated. The two variables are highly correlated, and this correlation is linear. Furthermore, the global shift of approximately 50 m between seismic depth and depth at wells is obvious.
[Scatter diagram: depth at wells against seismic depth, both from 2150 to 2350 m, rho = 0.978]
(fig. 9.4-3)
- To provide a surface which closely fits the depth values given at the wells, avoiding misties.
- To produce a depth map which resembles the seismic map (at least far from the control wells).
This technique may be used with several background variables (external drifts). However, the bundled version Interpolate / Estimation / External Drift (bundled) described here allows only one background variable. The method requires the background variable to be known at the well locations: this is automatically provided by a quick bilinear interpolation run on the background variable. The Unique Neighborhood is used. The final question concerns the Model, which must be inferred knowing that the seismic information is used as an External Drift. The procedure offers the possibility of calculating it internally, using the polynomial basic structures for the determination of the optimal generalized covariance. In the presence of outliers, the procedure often finds a nugget effect as the optimal generalized covariance; it is therefore useful to ask the procedure to exclude the nugget effect component from the trial set of generalized covariances.
(snap. 9.4-6)
The resulting non-stationary model is printed during the process, before the kriging with External Drift actually takes place.
======================================================================
|                     Structure Identification                       |
======================================================================
Data File Information:
  Directory   = Non stationary
  File        = Wells
  Variable(s) = depth at wells
Target File Information:
  Directory   = Non stationary
  File        = Wells
  Variable(s) = depth at wells
Seed File Information:
  Directory   = Non stationary
  File        = Wells
  Variable(s) = depth at wells
Variable(s) = KRIG_DATA
Type        = POINT (87 points)
Neighborhood Name = Unique - UNIQUE
.../...
Drift Identification
====================
The drift trials are sorted by increasing Mean Rank
The one with the smallest Mean Rank is preferred
Please also pay attention to the Mean Squared Error criterion
T1 : 1 f1
T2 : 1 x y f1

Trial   Mean Error   Mean Sq. Error   Mean Rank
T2      2.348e-02    7.485e+01        1.460
T1      3.488e-02    7.165e+01        1.540

Score   S1          S2
1.019   1.100e+02   0.000e+00
1.915   0.000e+00   2.834e+03
N/A     1.131e+02   -3.394e+00
The following graphic representation is performed using the same items as previously:
(fig. 9.4-4)
9.4.4 Conclusions
Although both maps have been derived with the same set of constraining data (the 87 wells):

- the results are similar in the area close to the conditioning wells: in both maps, the top of the structure is reached at 2200 m,
- the external drift map is more realistic in the extrapolated area, as it resembles the seismic background variable,
- the reliability of the map is estimated to be better for the external drift map: the area where the standard deviation is smaller than 15 m is larger.
The next graphic shows the horizontal position of a section line and its respective cross section.
Depth is measured in meters, and the horizontal axis measures the distance along the trace AA'.
[Location map of the trace AA' (X from 320 to 345 km, Y from 5 to 25 km) and cross-section of Depth from wells (Non Stat), depth from 2150 to 2400 m along the trace]
(fig. 9.4-5)
This graphic is obtained using a Section in 2D Grid representation of the Display facility, applied to
the 4 variables simultaneously. The parameters of the display are shown below.
(snap. 9.4-7)
Then, to define the trace you plan to display, you can either:

- enter its coordinates, using the Trace... button in the Contents tab illustrated above, or
- digitize the trace in a second display corresponding to a geographic view of the area (basemap or grid); once this graphic is displayed, select Digitize Trace with the right button of your mouse, then select the vertices of the trace with the left button. When you have finished, click the right button to terminate and then ask to Update Trace on Graphics, again with the right button.
Coming back to the Contents window of the trace display, you can modify in the Display Box tab
the definition mode for the graphic bounds as well as the scaling factors.
Finally, using Application / Store Page, save this template (call it trace for instance) in order to easily reproduce this kind of cross-section later.
(snap. 9.5-1)
(snap. 9.5-2)
(snap. 9.5-3)
Statistics on new depth at wells:

Number of samples: 5
Minimum : 2197.00
Maximum : 2302.00
Mean    : 2233.90
Variance: 1385.14
Three points are located on the top of the structure and two are around it, with higher depth values. Depth from wells and Depth from seismic are strongly correlated for this set, with a coefficient of 0.990.
The equation of the linear regression and the exact correlation coefficient can be found in Application / Report Global Statistics:
Linear regression : Y = (2.027176) * X + (-2171.221816)
Correlation coefficient : 0.990410
Compared with the same statistics obtained from the entire set of 87 data considered in the previous chapter, the linear regression equation differs significantly:
Linear regression : Y = (1.389533) * X + (-788.985650)
Correlation coefficient : 0.982144
(snap. 9.5-4)
The linear regression of the set of five wells marked in red does not coincide with the linear regression of the 87 wells. In Bayesian Kriging the choice of prior values affects the outcome. Because the user can change the prior values for the trend coefficients, it is possible to obtain good-quality results even with limited data.
(snap. 9.5-5)
The target variable depth at wells has to be defined in the Input File Wells, as well as the values of the external drift at wells, depth from seismic (background). The names of the resulting variables in the Output File Grid:

- Depth from wells St.Dev. (bayesian) for the corresponding standard deviation.

We also need to specify the map used as a drift. To create a new model, click Model in the Kriging parameters; the Variogram model window appears. Enter the name Residuals, confirm it, and back in the Kriging parameters click Edit to specify a spherical model with a range of 3000 m and a sill equal to 56. Do not forget to activate a drift part of type 1 f1 by clicking on Basic Drift Functions.
(snap. 9.5-6)
(snap. 9.5-7)
Before entering priors, Unique should be chosen as the neighborhood. Prior parameters for the trend coefficients must be defined in Kriging Prior Values. The Automatic button calculates the linear regression coefficients of the current data set and puts these values as the Mean; the user is free to change them.
(snap. 9.5-8)
As standard deviations, we express some uncertainty on the coefficients, thus allowing the linear regression to deviate. Small standard deviation values for the coefficients give a smaller standard deviation associated with the kriging. As the first factor (1) represents the shift of the regression line and the second (F1) its slope, the values for the second should be relatively small (of the order of tenths), while the first can be tens or even hundreds of units and still yield a good-quality result.
The coefficients for the Standard Deviation can also be determined automatically, with the method called BootStrap. The method recomputes the regression coefficients after removing one point from the set, for each point in turn. We therefore get two sets of n coefficients, from which the standard deviations are automatically calculated.
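The leave-one-out computation behind this option can be sketched as follows. The depth values below are hypothetical, not the five selected wells:

```python
# "BootStrap" spread of the regression coefficients: refit the line n
# times, dropping one well each time, then take the standard deviation
# of the n intercepts and the n slopes.
from statistics import mean, stdev

def linreg(xs, ys):
    """Least-squares line, returned as (intercept, slope)."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def loo_coefficients(xs, ys):
    """Regression coefficients with each point left out in turn."""
    return [linreg(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
            for i in range(len(xs))]

seismic = [2150.0, 2180.0, 2200.0, 2250.0, 2300.0]  # hypothetical depths
wells = [2197.0, 2208.0, 2240.0, 2302.0, 2350.0]
coefs = loo_coefficients(seismic, wells)
sd_intercept = stdev(c[0] for c in coefs)
sd_slope = stdev(c[1] for c in coefs)
print(round(sd_slope, 3), round(sd_intercept, 1))
```

Because the intercept and the slope of each refit are tied through the subset means, the n pairs of coefficients are almost perfectly (anti-)correlated, which is consistent with the correlations of plus or minus one reported below.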
The correlation coefficients between the elements are also calculated automatically, using the same principle as for the Standard Deviations: after obtaining two sets of n coefficients, one can consider them as two variables and calculate the correlation coefficient between them. Interestingly, this coefficient is always either one or minus one, irrespective of whether the initial data (depth at wells, depth from seismic (background)) are strongly correlated or not. The user is free to change any of these values according to geological knowledge or regionalized data. In the case of limited data, this is a great advantage because it is possible to obtain a good-quality model with a small standard deviation associated with the kriging.
For the test, we set -1000 as the mean for the shift and 1.5 as the mean for F1. The corresponding standard deviations are 100 and 0.1. The correlation matrix stays the same.
(snap. 9.5-9)
Finally press RUN in the Kriging with Bayesian Drift panel. As a result, we obtain the following map, together with the map of the uncertainty associated with the kriging.
(snap. 9.5-10)
Use the Interpolate / Conditional Simulations / External Drift (bundled) menu to perform 100
simulations of the top reservoir depth. Ask to calculate the model without nugget effect, use a
Unique neighborhood and set the number of turning bands to 500. The process will create a
macro-variable called Simu Top with seismic for this case.
(snap. 9.6-1)
- Display a few simulations using the Display menu, with the previous color scale (simu #001).
(fig. 9.6-1)
the local distribution of simulated values and to compare them with values of interest such as OWC
or neighbouring measured values.
Once the panel is open, the first thing to do is to click on Application / Load Files. Here, you can
enter the macro variable to be analyzed, a grid reference variable and also an auxiliary Points file,
containing for instance the well intercepts with the top of the reservoir.
(snap. 9.6-2)
(snap. 9.6-3)
You can change the Basemap graphic preferences, for instance the color scale, by clicking on Application / Graphic Parameters for... / Basemap.
The local distribution of simulated values can now be obtained simply by clicking on a particular
grid node. The selected node is automatically outlined (by default with a bold red line) and the histogram containing all the simulated depth values for this particular node is displayed.
Usual histogram calculation parameters may be modified from the Application / Calculation Parameters window. The histogram hereafter is obtained with 21 classes ranging from 2280 to 2301 m. Several particular values are then superimposed on the histogram, such as:

- the index of the simulation outcome currently displayed on the basemap (CIV),
- quantiles of interest.
(snap. 9.6-4)
Coming back to the Basemap, you will notice that a right-click produces the following menu.
(snap. 9.6-5)
You can then clear the current selection or append additional nodes. Note that instead of selecting
an individual node, you can select blocks of nodes by modifying the Selection Neighborhood
parameter below the Basemap.
Finally, this menu allows you to select an auxiliary point, for instance a well close to your current selection. Once you have clicked on an auxiliary point, the corresponding symbol changes from the default + to a circle. The depth value, read from the auxiliary Points file, is then automatically superimposed on the histogram, the value being displayed in the legend (PRV).
(snap. 9.6-6)
(snap. 9.7-1)
The next graphic shows the histogram with the modified negative value. The default omnidirectional experimental variogram shows a stationary behavior.
[Histogram of thickness at wells: 87 samples, minimum 0.00, maximum 29.15, mean 15.49, std. dev. 5.06; omnidirectional experimental variogram with pair counts, distances in km]
(fig. 9.7-1)
Furthermore, computing a scatter diagram between the depth and thickness variables would show that these variables are not correlated.
At this stage, 100 stochastic simulations of the thickness have to be performed, without an external drift variable and under a stationarity assumption. One possibility is to fit a variogram model to thickness at wells and directly run a Conditional Turning Bands procedure, but there is a risk of obtaining negative thickness values. To tackle this point, a gaussian anamorphosis modeling of the thickness at wells variable is performed; this approach makes it possible to constrain the lower and upper thickness values.
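The principle can be sketched with a simple rank-based normal-score transform and a piecewise-linear back-transform. This is a stand-in for illustration, not the anamorphosis model that Isatis actually fits:

```python
# Anamorphosis sketch: raw thicknesses are mapped to gaussian scores;
# after simulating in the gaussian space, values are back-transformed,
# which confines results to the raw interval of definition (no negatives).
from statistics import NormalDist

def to_gaussian(raw):
    """Rank-based normal scores for the raw sample."""
    n = len(raw)
    order = sorted(range(n), key=lambda i: raw[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        scores[i] = NormalDist().inv_cdf((rank + 0.5) / n)
    return scores

def back_transform(g, raw):
    """Piecewise-linear inverse: gaussian value -> raw scale."""
    xs = sorted(raw)
    n = len(xs)
    gs = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
    if g <= gs[0]:
        return xs[0]
    if g >= gs[-1]:
        return xs[-1]
    for i in range(n - 1):
        if gs[i] <= g <= gs[i + 1]:
            t = (g - gs[i]) / (gs[i + 1] - gs[i])
            return xs[i] + t * (xs[i + 1] - xs[i])

thickness = [0.0, 4.2, 9.8, 15.5, 21.0, 29.15]  # toy well values
gauss = to_gaussian(thickness)
# Any simulated gaussian value back-transforms into [0, 29.15]:
print(back_transform(-5.0, thickness), back_transform(5.0, thickness))  # 0.0 29.15
```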
Open the Statistics / Gaussian Anamorphosis Modeling window and enter the input raw variable thickness at wells in the Data area by pressing the Input... button. Then switch on the toggle Gaussian Transform and enter the new output variable name thickness at wells (gaussian). Then click on the Interactive Fitting... button; the Fitting Parameters window pops up. In the Windows area, clicking on the first icon, called Anamorphosis, pops up the experimental anamorphosis and the default point model. Click on Application / Graphic Bounds in the menu bar and enter the next values:

Horizontal Axis Min: -3.5
Horizontal Axis Max: 3.5
Vertical Axis Min  : -5
Vertical Axis Max  : 37
These values only adjust the display of the anamorphosis window. Now click on Interactive Fitting
Parameters... in the Anamorphosis Fitting area and enter the following values:
(snap. 9.7-2)
This last window makes it possible to constrain the raw values between 0 and 35.
Click on the toggle Fitting Stats; the following statistics are displayed:

=== Fitting Statistics for thickness at wells ===
Experimental mean            = 15.49
Theoretical mean (Discr)     = 15.55
Experimental variance        = 25.58
Theoretical variance (Discr) = 27.36
Interval of Definition:
  On gaussian variable: [-2.53, 2.53]
  On raw variable:      [0.00, 29.15]
(snap. 9.7-3)
Finally give a name to the new anamorphosis function, Thickness at wells, and press Run.
As the thickness simulations will be performed on this gaussian transform (before back-transformation to the raw scale using the anamorphosis function), it is now necessary to evaluate the spatial structure of the thickness at wells (gaussian) variable.
An experimental variogram of this variable is first calculated with 12 lags of 300 m and saved in a Parameter File (using the Application menu of the variogram graphic window) called Thickness at wells (gaussian). Using the Statistics / Variogram Fitting window, a new variogram model is created, with the same name as the experimental variogram. This new model is edited (using manual edit / edit) and modified in order to improve the quality of the fit. A spherical basic structure with a range of 950 m and a sill equal to 1.04 is chosen. This model is saved by pressing the Run (Save) button.
[Experimental variogram and fitted model for thickness at wells (gaussian): sill around 1.0, distances in km; the labels on the points are pair counts]
(fig. 9.7-2)
Using the Interpolate / Conditional Simulations / Turning Bands facility, perform 100 simulations
of the thickness at wells (gaussian) in Unique Neighborhood using the previous model and activating the Gaussian Back Transformation... option.
(snap. 9.7-4)
(snap. 9.7-5)
You can use the Exploratory Data Analysis to check the outputs and verify that the simulated thicknesses are greater than or equal to zero.
The next graphic shows two realizations of the Simu Thickness output.
(fig. 9.7-3)
(fig. 9.7-4)
To visualize the simulated reservoirs, you can create a new macro-variable corresponding to the base of the reservoir and represent the top and the base in a cross-section. To achieve that:

- with Tools / Create Special Variable, create a macro variable of length type to store the 100 simulations of the depth of the reservoir base, called Simu Base reservoir,
(snap. 9.7-6)
- using the File / Calculator, calculate the sum of the simulated top Simu Top with seismic and the simulated thickness Simu Thickness.
(snap. 9.7-7)
To display a cross-section, you can use the previous template Trace and replace the Cross-section in 2D contents by a realization of the top and the base of the reservoir (hereafter, the top and the base of simulation number 42).
(fig. 9.7-5)
Create a new polygon file, called Polygon for Volumetrics, in the Application / New Polygon File menu. You may display the Wells data with Application / Auxiliary Data.
The polygon, called P1, is displayed on top of your wells data. Finally, Save and Run your polygon file.
(snap. 9.7-8)
Using the Tools / Volumetrics panel, the GRV will be calculated for the reservoir limited by the simulated surfaces of the Top and Bottom and by the Gas Water Contact (GWC) at the constant depth of 2288m. Enter the macro variable names for the reservoir top and bottom.
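The GRV computation can be sketched as follows, assuming depths increase downward so that the gas leg in each cell is the part of the [top, base] interval shallower than the GWC; the function, the 50m cell size and the example surfaces are illustrative, not Isatis internals.

```python
import numpy as np

def gross_rock_volume(top, base, gwc=2288.0, dx=50.0, dy=50.0):
    """GRV of the gas leg: per cell, the part of the reservoir interval
    [top, base] lying above (i.e. shallower than) the gas-water contact.
    Depths increase downward, so 'above the GWC' means depth < gwc."""
    column = np.clip(np.minimum(base, gwc) - top, 0.0, None)
    return float(column.sum() * dx * dy)

# Tiny 2x2 illustration (depths in metres); in practice one pair of
# simulated surfaces is evaluated per realization
top = np.array([[2250.0, 2260.0], [2300.0, 2400.0]])
base = np.array([[2320.0, 2330.0], [2350.0, 2450.0]])
grv_m3 = gross_rock_volume(top, base)
```

Cells whose top is already below the GWC contribute nothing, which is what restricts the volume to the gas-bearing part of the structure.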
Make sure the indices of these macro variables match: switch ON the toggle Link Macro Index to: Top Surface when you choose the second macro variable.
Note - To use the previously created macro-variables in the Volumetrics panel, you must make sure that they are of length type; otherwise an error is produced. In that case, go into the Data File Manager, click on the macro-variable and, with the right mouse button, ask to modify the Format: there you can specify that the macro-variable is of length type and set the unit, meters in the present case.
(snap. 9.7-9)
In Risk Curves / Edit, specify that you are interested in the distribution of the volumes and choose an appropriate format. Also switch on the Print Statistics toggle. Click on Close and then on Run. You obtain the following statistics and quantiles for the P1 polygon:
Statistics on Volume Risk Curves
================================
Polygon: P1
Smallest =  367.44Mm3
Largest  =  500.49Mm3
Mean     =  422.32Mm3
St. dev. =   26.50Mm3
P90.00   =  476.23Mm3
P50.00   =  705.81Mm3
P10.00   = 1003.03Mm3
(snap. 9.7-10)
(fig. 9.7-6)
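The risk-curve statistics amount to simple order statistics on the set of simulated volumes; a sketch, assuming the usual oil-industry exceedance convention in which P90 is the volume exceeded by 90% of the realizations (i.e. the 10th percentile). The synthetic volumes are illustrative only.

```python
import numpy as np

def risk_curve_stats(volumes):
    """Summary statistics and exceedance quantiles for simulated volumes.
    Convention assumed here: P90 = 10th percentile (exceeded by 90% of
    outcomes), P10 = 90th percentile."""
    v = np.asarray(volumes, dtype=float)
    return {
        "smallest": v.min(),
        "largest": v.max(),
        "mean": v.mean(),
        "st_dev": v.std(ddof=1),
        "P90": np.percentile(v, 10),
        "P50": np.percentile(v, 50),
        "P10": np.percentile(v, 90),
    }

rng = np.random.default_rng(1)
stats = risk_curve_stats(rng.normal(420.0, 26.0, 100))  # 100 synthetic GRVs in Mm3
```

With this convention the quantiles are ordered P90 <= P50 <= P10, the pessimistic case being the one most likely to be exceeded.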
You can also derive specific thickness maps from this procedure, such as iso-frequency maps
(for instance P10 and P90 maps), iso-cutoff maps (to derive for instance the probability for the
thickness to exceed 10 or 20m) or statistical maps (thickness mean or standard deviation).
(snap. 9.7-11)
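All of these derived maps are per-cell statistics computed across the stack of realizations; a sketch on a hypothetical simulation stack (sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical stack of simulated thickness maps: (n_simu, ny, nx)
simu_thickness = np.abs(rng.normal(15.0, 8.0, (100, 20, 20)))

# Iso-frequency maps: per-cell quantiles across the realizations
p10_map = np.percentile(simu_thickness, 10, axis=0)
p90_map = np.percentile(simu_thickness, 90, axis=0)

# Iso-cutoff map: per-cell probability that the thickness exceeds 10 m
prob_exceed_10m = (simu_thickness > 10.0).mean(axis=0)

# Statistical maps: per-cell mean and standard deviation of the thickness
mean_map = simu_thickness.mean(axis=0)
std_map = simu_thickness.std(axis=0)
```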
The inside / outside reservoir constraints have to be digitized on the depth map before the Run.
(snap. 9.7-12)
To do this, click with the right mouse button on the graphic and ask to:
- Digitize as: Inside for the first constraint, inside the reservoir structure (green circle);
- Digitize as: Outside for the second constraint, located outside the reservoir (blue circle).
In the Application menu of the depth map graphic, you may ask to Print Information on Constraints. They should be approximately located as follows:

Constraints characteristics (2 points)
Rank           X           Y
   1  334834.08m   15526.15m  Inside
   2  339925.24m   18263.27m  Outside
Switch on the Map of the Mean Height above Spill, Map of the Reservoir Probability, Distribution of Spill Elevations and Distribution of Reservoir Volumes buttons. Set the units to Mm3 by clicking on the Print Parameters... button.
Click on Run. Isatis will pop up the requested results; for the grid displays, it is advisable to enter the Application / Map Graphic Parameters... menu and customize the Color Scale.... A grey color scale is used to represent the Reservoir Probability Map and a rainbow color scale to represent the mean height above the spill point.
(snap. 9.7-13)
(snap. 9.7-14)
(snap. 9.7-15)
(snap. 9.7-16)
Spill Point calculation results
===============================
Num      : the relative rank of the outcome
Macro    : the absolute rank of the outcome in the MACRO variable
IX0,IY0  : the coordinates of the Spill point (in grid nodes)
ACC      : the acceptation criterion:
           YES if the outcome is valid,
           or the rank of the (first) violated constraint
Spill    : Elevation of the Spill point
Thick    : Maximum thickness of the Reservoir
Res. Vol.: Volume of the Reservoir (Unknown is not included)
Unk. Vol.: Unknown volume

Num Macro IX0 IY0 ACC      Spill     Thick  Res. Vol. Unk. Vol.
 10    10  17  15 Yes  2265.379m   69.424m  547.13m3    0.35m3
 82    82  15  17 Yes  2266.296m   70.477m  597.62m3    1.22m3
 78    78  38   7 Yes  2266.994m   72.872m  584.76m3    6.22m3
 97    97  14  16 Yes  2267.007m   71.591m  561.78m3   23.84m3
  4     4  16  15 Yes  2267.394m   72.691m  558.00m3    4.45m3
 63    63  14  16 Yes  2267.524m   72.423m  606.63m3   32.50m3
 93    93  13  16 Yes  2272.503m   75.501m  746.57m3   44.72m3
  7     7   2  11 Yes  2272.744m   77.715m  794.38m3   29.30m3
 69    69  16  13 Yes  2274.536m   79.723m  732.06m3    0.17m3
 88    88  14  18 Yes  2275.137m   81.492m  756.87m3    2.34m3
 58    58  15  15 Yes  2275.549m   77.512m  779.35m3    1.57m3
 37    37  12  16 Yes  2275.675m   82.212m  818.17m3    2.37m3
.../...

                     Spill      Thick   Res. Vol.  Unk. Vol.
Mean    (All)    2285.568m   89.958m  1082.29m3    21.54m3
Mean    (Valid)  2285.568m   89.958m  1082.29m3    21.54m3
St. dev (All)       8.896m    9.146m   277.33m3    25.93m3
St. dev (Valid)     8.896m    9.146m   277.33m3    25.93m3
Minimum (All)    2265.379m   69.424m   547.13m3     0.17m3
Minimum (Valid)  2265.379m   69.424m   547.13m3     0.17m3
Maximum (All)    2304.631m  109.831m  1826.83m3   125.52m3
Maximum (Valid)  2304.631m  109.831m  1826.83m3   125.52m3
In this case the output print is sorted by spill elevations. Six spill elevations are below 2270m, the
minimum acceptable spill elevation value for this reservoir. To identify the rank of these simulations:
- In Print Parameters, ask to sort by Spill Elevations in increasing order (as was done before) and click on Print Results: you can easily identify the simulations that do not satisfy our criterion. The corresponding indices, 10, 82, 78, 97, 4 and 63, have to be masked in the Macro variable.
- In the Data File Manager, click on the Macro variable Simu Top with seismic and, with the right mouse button, ask to mask the indices 10, 82, 78, 97, 4 and 63 in the menu Variable / Edit Macro Indices. You can check that these indices no longer belong to the list of valid indices by asking for Information on the Macro variable.
(snap. 9.7-17)
Rerunning the spill point application gives the following distribution of volumes and spill point
depths:
(fig. 9.7-7)
(snap. 9.7-18)
Statistics on Volume Risk Curves
================================
Polygon: P1
Smallest = 228.11Mm3
Largest  = 504.29Mm3
Mean     = 383.13Mm3
St. dev. =  57.36Mm3

===============================
Global Field Integration
P90.00 = 423.05Mm3
P50.00 = 534.32Mm3
P10.00 = 841.89Mm3
The distribution of volumes calculated from the 94 remaining simulations can be compared with the distribution obtained initially with the constant contact, which was close to the average of the spill point depths.
(fig. 9.7-8)
10. Plurigaussian
This case study shows how to apply the plurigaussian approach to simulate geological facies within two oil reservoir units. The aim of this study is to introduce the geologist to the different techniques and concepts needed to better control the lateral and vertical variability of the facies distribution when dealing with complex geology.
The file wells.hd contains the facies information. This file is organized in a line type format, meaning that it is composed of a header (name and coordinates of each collar) and the core samples (coordinates of the core ends, and an integer value which corresponds to the lithofacies code).
The file surface.hd contains three boundary surfaces called surf1, surf2 and surf3. They are
defined on a rotated grid (Azimuth 70 degrees equivalent to 20 in mathematician convention).
(snap. 10.1-1)
The data represents a total of 10 wells and 413 samples. A basic statistics run in the File / Data File
Manager utility on the Wells file shows that the dataset lies within the following geographical
area:
XMIN =   95.05m    XMAX = 2905.49m
YMIN =  -63.58m    YMAX = 3779.84m
ZMIN =  -35.20m    ZMAX =   20.50m
Quantitative information about the variable lithofacies is provided by the Statistics / Quick Statistics application. The statistics tell us that this integer variable lies between 0 and 28. The average of 7.61 is not very informative; it is more relevant to consider the distribution of this discrete data: the variable lies between 0 and 13 or takes the values 22 or 28. For each integer value, the utility provides the number of samples and the corresponding percentage:
lithofacies   Count of samples   Percentage
     0                8             2.06%
     1                5             1.29%
     2               26             6.68%
     3               14             3.60%
     4               35             9.00%
     5               24             6.17%
     6               31             7.97%
     7               40            10.28%
     8               39            10.03%
     9               50            12.85%
    10               19             4.88%
    11               33             8.48%
    12               47            12.08%
    13               16             4.11%
    22                1             0.26%
    28                1             0.26%
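Such a table is a plain frequency tabulation; a sketch on a hypothetical subset of codes (not the full dataset):

```python
from collections import Counter

# Hypothetical lithofacies codes, for illustration only
lithofacies = [0] * 8 + [1] * 5 + [2] * 26 + [9] * 50 + [22] * 1 + [28] * 1

counts = Counter(lithofacies)
total = sum(counts.values())
# (code, count, percentage) rows, sorted by code
table = [(code, n, 100.0 * n / total) for code, n in sorted(counts.items())]
for code, n, pct in table:
    print(f"{code:>3d} {n:>4d} {pct:6.2f}%")
```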
NX = 90   X0 =   25.00m   DX = 50.00m
NY = 90   Y0 = -775.00m   DY = 50.00m
Rotation: Angle = 20.00 (Mathematician)
XMIN = -1496.99m   XMAX = 4206.63m
YMIN =  -775.00m   YMAX = 4928.62m
This grid clearly covers a larger area than the wells. Finally we calculate statistics on the three surfaces (using Statistics / Quick Statistics application) and check that these surfaces are not defined on
the whole grid (of 8100 cells), as shown in the following results:
Statistics:
------------------------------------------------------------------------
| VARIABLE | Count | Minimum | Maximum |   Mean | Std. Dev | Variance |
------------------------------------------------------------------------
| surf1    |  7248 |  -10.20 |   -6.40 |  -8.68 |     1.34 |     1.80 |
| surf2    |  7248 |  -18.60 |  -15.40 | -16.86 |     0.70 |     0.49 |
| surf3    |  7248 |  -26.00 |  -18.80 | -22.25 |     1.92 |     3.68 |
------------------------------------------------------------------------
To work around this problem, we will transform the 2D grid surfaces into 3D point surfaces. Open Tools / Copy Variables / Extract Samples...
(snap. 10.1-2)
This process copies the 2D grid variable surf1 from the Surfaces file into a new file called Surf1 3Dpoints, with an output variable called surf1 2D points. Despite its name, this Surf1 3Dpoints file is still a 2D file; in order to transform it into a 3D file, the variable surf1 2D points has to be turned into a z coordinate. To achieve that, this variable has to be defined over the whole extent of the grid, which is not the case for all the surfaces. In the calculator, assign a constant value to the undefined values of the surf1 2D points variable and call the output variable surf1 z.
(snap. 10.1-3)
The last calculator command can be read as follows: If v1 is not defined (~ffff) then store a value of
-50 into the new variable v2, else store the value of v1 into v2.
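In array terms, this calculator command is a conditional fill of undefined values; a sketch using NaN to play the role of the Isatis undefined value (~ffff), with illustrative data:

```python
import numpy as np

# v1 stands in for the surf1 2D points variable; NaN marks undefined nodes
v1 = np.array([-8.7, np.nan, -9.2, np.nan])

# "if v1 is not defined then store -50 into v2, else store v1 into v2"
v2 = np.where(np.isnan(v1), -50.0, v1)
```

The -50 filler only needs to lie below every true surface value so that the filled nodes fall outside the zone of interest.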
This new variable v2 (surf1 z) has been created with a float type; before turning it into a coordinate, we have to change this type to a length variable:
- Enter the Data File Manager editor, select the variable of interest and ask for the Format option.
- Click on the Unit button and switch on the Length Variable option; finally select the Length Unit, meters in the present case.
Now the 2D file may be changed into a 3D file by selecting the surf1 z variable as the new z coordinate, using the option Modify 2D-3D.
Choose Perspective for the Representation Type, then in the Display Box tab switch off the
Automatic Scales toggle and set the z scaling factor to 25. Change also the definition mode of
the display box, in order to be able to specify the min and max values along the three axes; the
Calculate button may help you to initialize the values.
(snap. 10.1-4)
Double-click on Lines in the Available Representations list. Select the variable lithofacies to be
displayed with a default Rainbow color scale. Select the Well Names variable as the variable to
be displayed from the linked file. In the Lithology tab, change the Representation Size to 0.1cm.
(snap. 10.1-5)
For each 3D point surface, ask for a Basemap representation and select the corresponding 3D
point file. Each surface is customized with a different pattern color. Click on Display.
The order of the representations in the display may be modified using the Move Back and Move
Front buttons in the Contents window.
(fig. 10.1-1)
- the 3D lines representation of the Wells file is performed with a radius proportional to the lithofacies and the same Rainbow color scale as in the standard display;
- the three surfaces from the Surfaces file are successively copied into the Surfaces representation type. The resulting iso-surfaces are displayed with appropriate color scales that can be parametrized by the user.
Note - Detailed information about the 3D Viewer parameters may be found in the On-Line
documentation.
(snap. 10.1-6)
10.2 Methodology
The wells information containing the lithofacies has been imported into Isatis, as well as the three boundary surfaces. Not all the identified lithofacies necessarily need to be treated: Isatis offers the possibility of grouping the lithofacies variable into lithotypes; in this case study the variable lithotypes corresponds to groups of lithofacies related to the same depositional environment.
The next step consists in creating the 3D Structural Grid that will cover the whole field. Boundary surfaces can be used to split the structural grid into different units. All the nodes of each unit will form a new grid called a Working Grid. These working grids will be created using the Tools / Discretization & Flattening facility. They may be treated differently: for example, their vertical discretization may differ from the mesh of the 3D structural grid (0.2m), or they can be flattened according to a reference surface. The next graphic represents a working grid built with a reference surface: the grid is automatically flattened, the reference surface corresponds to the vertical origin, and Isatis assigns to this new grid the X, Y and Z mesh values of the Structural Grid. Facies simulation will be performed in these working grids. At the end, the different working grid simulations will be merged and transformed back into the 3D structural grid. This is illustrated in the next graphic, which shows a Y-Z 2D cross-section.
(fig. 10.2-1)
The well discretization of facies will then be achieved using a constant vertical lag. Even if it is not compulsory for the vertical lag to be equal to the Z mesh of the working grid, it is advised to use the same value in order to have one conditioning facies value per node.
The plurigaussian simulation needs the proportions of the lithotypes to be defined for each cell of
the working grid, using the Statistics / Proportion Curves panel. Transitions between lithotypes will
then be specified within the Statistics / Plurigaussian Variogram panel, that will ultimately be used
to perform the conditional Plurigaussian simulation itself.
(snap. 10.3-1)
The 3D structural grid is created in a file simu of a new directory reservoir. This 3D structural grid
will be used to split the field into two adjacent units, each of them being defined by the nodes
between two boundary surfaces called `top' and `bottom'. These grid nodes will be stored into two
new grids, the working grids for each unit. The units are called upper and lower.
(snap. 10.4-1)
If data are available, we must identify the information used as control data, i.e. the variable lithofacies contained in the file data / Wells. Note that the facies variable can be converted into a new
integer variable called a lithotype (group of facies) and this new variable could also be stored in the
input data file. To help identify the wells in future displays, the variable Well Name (contained in the linked header file WellHeads), which holds the names of the wells, is finally defined.
If no data is available, the plurigaussian simulation is non-conditional.
Note - When processing several units of the same 3D structural grid, we must pay attention to use
different pointer variables for the different units.
(snap. 10.4-2)
The unit is characterized by its top and bottom surfaces: for this unit, we use the surface surf1 for
the top surface and surf2 for the bottom surface, both surfaces contained in the file Surfaces of the
directory data.
We must also define the way the 3D structural grid is transformed into the working system: this is the horizontalisation (flattening) step. This transformation is meant to enhance the horizontal correlation and consists in transforming the information back to the sedimentation stage. The parameters of this transformation are usually defined by the geologist, who will choose between the two following scenarios:
- Horizontalisation parallel to a chosen reference surface. In this scenario (which is the one selected for this unit) we must define the surface which serves as the reference: here surf2.
- A vertical stretch and squeeze of the volume between the top and bottom surfaces: this is called proportional horizontalisation. In this scenario, there is no need to define any reference surface.
We could also store the top and bottom surfaces after horizontalisation in the 2D surface grid in order to check our horizontalisation choice. This operation is only meaningful in the case of parallel horizontalisation.
Finally we must define the new 3D working grid where the plurigaussian simulation will take place
(new file WorkingGrid in the new directory UnitTop). The characteristics of this grid will be
derived automatically from the geometry of the 3D structural grid and the horizontalisation parameters. In the case of proportional horizontalisation, we must provide the vertical characteristics of
the grid which cannot be derived from the input grid. In the case of parallel horizontalisation, the
vertical mesh is equal to the one of the structural grid and the number of meshes is calculated so as
to adjust the simulated unit.
Some cells of the 3D working grid may be located outside the simulated unit: they will be masked
off in order to save time during the simulation process (new selection variable UnitSelection).
This grid will finally contain the new macro variable defining the proportion of each lithotype for
each cell, which will be used during the plurigaussian simulation (macro variable Proportions). At
this stage, these proportions are initialized using constant values for all the cells; they are calculated
as the global proportions of discretized lithotypes. This macro variable will be updated during the
proportion edition step.
(snap. 10.4-3)
(snap. 10.4-4)
Note - Pay attention to the fact that the discretization process creates two Proportions macro
variables; the first one, defined in the DiscretizedWells, will be used to calculate and edit the
second one, defined in the WorkingGrid and that will serve as the input model for the plurigaussian
simulation.
Several subsequent procedures cannot handle proportions and instead require a single lithotype value to be assigned to each sample: this is why we also compute the "representative" lithotype for each sample (variable Lithotype). Several algorithms are available for selecting this representative lithotype:
- Central: the representative lithotype is the one of the sample located in the middle of the sliced core.
- Most representative: the representative lithotype is the one which has the largest proportion over the sliced core.
- Random: the representative lithotype is drawn at random according to the proportions of the different lithotypes present within the sliced core.
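The three selection rules above can be sketched as follows; this is a toy implementation, not the Isatis algorithm, and the lithotype codes are illustrative:

```python
import random
from collections import Counter

def representative_lithotype(slice_samples, method="central", rng=None):
    """Pick one lithotype for a sliced core:
    'central' - the sample in the middle of the slice,
    'most'    - the lithotype with the largest proportion over the slice,
    'random'  - drawn at random according to the slice proportions."""
    if method == "central":
        return slice_samples[len(slice_samples) // 2]
    counts = Counter(slice_samples)
    if method == "most":
        return counts.most_common(1)[0][0]
    if method == "random":
        rng = rng or random.Random()
        codes, weights = zip(*counts.items())
        return rng.choices(codes, weights=weights, k=1)[0]
    raise ValueError(f"unknown method: {method}")

slice_samples = ["shale", "sand", "sand", "sand", "shale"]
```

On this slice, both the central and the most-representative rules pick "sand"; the random rule would pick "sand" with probability 3/5.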
We can also store the actual length of the sliced cores. A new linked header file is also created
which contains the name of the wells (Variable WellName). This variable is compulsory. If no variable has been provided in the Input panel, a default value is automatically generated.
(snap. 10.4-5)
10.4.5 Run
Click on Run. All the information previously defined is stored in the New Proportions Parameter File... and used to perform the operation. As complementary information, this facility provides the following printouts:
Line #1 : 9 initial samples intersected by the unit
First Intersection Point : x = 2719.30m  y = 2146.71m  z =  9.80m
Last Intersection Point  : x = 2647.38m  y = 2181.37m  z = 16.82m
.../...
Line #10 : 11 initial samples intersected by the unit
First Intersection Point : x = 2010.75m  y = 347.77m  z =  7.00m
Last Intersection Point  : x = 2010.75m  y = 347.77m  z = 16.40m

File Name : UnitTop/WorkingGrid
Mask Name : UnitSelection
NX= 90   X0=   25.00m   DX= 50.00m
NY= 90   Y0= -775.00m   DY= 50.00m
NZ= 51   Z0=    0.10m   DZ=  0.20m

File Name : reservoir/simu
Pointer to Working Grid : ptr_UnitTop
NX= 90   X0=   25.00m   DX= 50.00m
NY= 90   Y0= -775.00m   DY= 50.00m
NZ= 99   Z0=  -26.00m   DZ=  0.20m

Input Data:
-----------
File Name     : data/Wells
Variable Name : lithofacies
Limestone = [12,12]
Discretization Options
----------------------
Discretization Length       = 0.20m
Minimum Length              = 0.02m
Distortion Ratio (Hor/Vert) = 250
Lithotype Selection Method  = Central

Discretization Results:
-----------------------
File Name        : UnitTop/DiscretizedWells
Lithotype Name   : Lithotype
Proportions Name : Proportions[xxxxx]
To visualize the selection unit where the simulation will take place and superimpose the discretized wells:
- In a new Display page, switch the Representation type to the Perspective mode.
- Select, with a Symbols representation, the grid file UnitTop / WorkingGrid. In the Grid Contents area, switch to the Excavated Box mode and center IX and IY on index number 45. In the Data Related Parameters area, customize two Flags:
  - for the first flag, a lower bound of 0 and an upper bound of 0.5, with a red point pattern of size 0.1;
  - for the second flag, bounds from 1 to 1.5 in order to catch the selection values (1), with gray circles of size 0.1 to represent the upper unit selection.
- Select a new Lines representation and select the line file UnitTop / DiscretizedWells. Select Lithotype for Lithology #1. In the Lithology tab, it is advised to use the LithotypesUnitTop color scale and to set the representation size to 0.1cm.
Click on Display.
(fig. 10.4-1)
Note - You can also display the Lithotype variable in a literal way by using another item, for
example Graphic Left #1: Lithotype.
(snap. 10.5-1)
When entering the name of the Proportion Parameter File, all the other parameters are defined automatically and usually do not have to be modified. We can now review the parameters of interest for this application:
- the discretized wells (UnitTop / DiscretizedWells) where the macro variable Proportions is specified,
- the linked header file WellHeads containing the well names (variable WellName),
- the 3D working grid (UnitTop / WorkingGrid) where the macro variable Proportions will be modified, within the selected area (selection variable UnitSelection).
Once these parameters have been defined, the main graphic window represents the 10 wells in the
rotated coordinate system of the working grid. If we look carefully, we can see that some wells are
deviated (W1, W5 and W9): their traces are projected on the horizontal plane.
(snap. 10.5-2)
In the lower right corner of the graphic window, a vertical proportion curve (or VPC for short) represents the variation of the global proportions along the vertical axis. It is displayed with a cross symbol, which represents the VPC's anchor, useful for editing purposes.
This application offers two modes, indicated at the bottom, depending on whether we operate on the polygons or on the VPC. The Graphic Menu of this window depends on the selected option. In the case of Polygon Edition, the following menu option is available:
- Create Polygon(s)
In the case of Vertical Proportion Curves Edition, the following menu options are available:
- Deselect Editing
- Apply 2D constraint...
- Completion...
- Smoothing...
- Delete VPC(s)
- Print VPC(s)
We can define the graphic window characteristics in the panel Graphic Options of the Application Menu. These parameters will be illustrated in the subsequent paragraphs:
- the miscellaneous graphic parameters, such as the polygon display parameters, the graphic bounds (for projections) and the order of the lithotypes.
Here, this panel is used to define the options for the VPC display windows: we switch ON the flag for Displaying Raw VPC and OFF the one asking for normalization.
(snap. 10.5-3)
- The raw mode: for each (vertical) level, the (horizontal) bar is proportional to the number of samples used to calculate the statistics. You can display the numbers by switching on the Display Numbers option in the Graphic Options panel. Each bar is subdivided according to the proportions of each lithotype, each represented with its own color. The order of the lithotypes is defined in the Graphic Options panel.
- The normalized mode: the proportions are normalized to sum to 1 in each level (except the levels where no sample is available). Note that the first and last levels of this global proportion curve are left blank as they do not contain any sample.
(fig. 10.5-1)
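The raw and normalized VPC are simple per-level tabulations of the discretized samples; a sketch with hypothetical level and lithotype data (level 4 is deliberately left empty):

```python
import numpy as np

# Hypothetical discretized samples: level index and lithotype code per sample
levels = np.array([0, 0, 1, 1, 1, 2, 2, 3])
lithotypes = np.array([1, 2, 1, 1, 2, 2, 2, 1])
n_levels, codes = 5, [1, 2]

# Raw VPC: per level, the count of samples of each lithotype
raw = np.zeros((n_levels, len(codes)))
for lev, lt in zip(levels, lithotypes):
    raw[lev, codes.index(lt)] += 1

# Normalized VPC: proportions summing to 1 per level; levels without
# samples (here level 4) stay undefined (NaN)
totals = raw.sum(axis=1, keepdims=True)
normalized = np.where(totals > 0, raw / np.maximum(totals, 1.0), np.nan)
```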
A second feature is obtained using the option Display Pie Proportions in the Application Menu. It
creates a separate window where each well is represented by a pie located at the well header location. The pie is subdivided into parts whose size represents the proportion of each lithotype calculated over the whole well (Calculated From Lines option). We can normalize the proportions by
discarding any information which does not correspond to any lithotype. Here instead, we have chosen to take them into account: they are represented as a white fictitious complementary lithotype.
(fig. 10.5-2)
For particular usage, some lithotypes can be regrouped: this new set is then displayed as a fraction
of the pie.
The 10 wells are displayed using the pie proportion chart, where each lithotype is represented with a proportional size calculated over the whole well. This application is used to check that the first lithotype (called Conglomerate), represented in red, is present only in the 6 wells located in the northern part of the field (W1, W2, W6, W7, W8 and W9) and absent in the south; hence the non-stationarity.
Creating polygons
In this step, we turn the option of the main graphic window into the Polygon Edition mode. Using
the Create Polygon(s) option of the Graphic Menu, we digitize two polygons. When a polygon is
created, a vertical proportion curve is automatically calculated and displayed (in its normalized
mode) at the polygon anchor position (located by default in the center of gravity of the polygon).
We can now select one VPC (or a group of them) and modify it (or them) using the features demonstrated hereafter.
(snap. 10.5-4)
Edition of a VPC
In our case, we select the VPC corresponding to the northern polygon and use the Display & Edit
option of the Graphic Menu in order to represent it on a separate window. As this has already been
discussed when visualizing the global VPC, we have chosen (in the Graphic Options panel) to represent the VPC in the raw version on the right and in the normalized version on the left.
(fig. 10.5-3)
We can easily check that, here again, the top and bottom levels of this VPC are not informed. Before using this VPC in a calculation step, we need to complete the empty levels.
To complete the empty levels, we use the Application / Completion option. This algorithm first locates the first and last informed layers (by default, Number of Levels = 1). If an empty layer is found between the first and last informed layers, the proportions are linearly interpolated. In extrapolation, the proportions of the last informed layer are duplicated. An option offers to replace the proportions of the last informed layer by the ones calculated over a set of informed layers, whose number is defined in the interface. The result is immediately visible in the normalized version of the VPC in the left part of the specific graphic window and in the main graphic window. Note that this completion operation could have been carried out on a set of VPC (without displaying them in separate graphic windows).
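The default completion rule (linear interpolation between informed layers, duplication of the end layers beyond them) can be sketched with np.interp; the VPC values below are illustrative:

```python
import numpy as np

def complete_vpc(vpc):
    """Fill empty levels of a VPC (rows of NaN): linear interpolation
    between informed levels, duplication of the closest informed level
    above the last / below the first informed layer."""
    vpc = np.array(vpc, dtype=float)
    informed = ~np.isnan(vpc).any(axis=1)
    idx = np.flatnonzero(informed)
    levels = np.arange(vpc.shape[0])
    for j in range(vpc.shape[1]):
        # np.interp interpolates inside [idx[0], idx[-1]] and duplicates
        # the end values outside that interval
        vpc[:, j] = np.interp(levels, idx, vpc[idx, j])
    return vpc

vpc = np.array([[np.nan, np.nan],
                [0.2, 0.8],
                [np.nan, np.nan],
                [0.6, 0.4],
                [np.nan, np.nan]])
completed = complete_vpc(vpc)
```

Because each empty level is a convex combination of informed levels, the completed proportions still sum to 1 on every level.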
(fig. 10.5-4)
Smoothing the vertical transitions is advised and may be achieved with the option Application / Smoothing, which runs a low-pass filtering algorithm on the normalized version. This procedure requires the VPC to be completed beforehand and can be applied several times on each selected VPC: here 3 passes are performed. Once more, the results are visible in the normalized version of the VPC displayed in the left part of the specific graphic window. The corresponding VPC is also updated in the main graphic window.
(fig. 10.5-5)
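A smoothing pass of this kind can be sketched as a three-point vertical moving average followed by re-normalization of each level; the filter shape is an assumption for illustration, not the exact Isatis kernel:

```python
import numpy as np

def smooth_vpc(vpc, passes=3):
    """Low-pass filter a completed, normalized VPC: each pass replaces
    every level by the average of itself and its two vertical neighbours
    (edges reuse their own value), then re-normalizes each level."""
    out = np.array(vpc, dtype=float)
    for _ in range(passes):
        padded = np.pad(out, ((1, 1), (0, 0)), mode="edge")
        out = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
        out /= out.sum(axis=1, keepdims=True)
    return out

# Sharp transition between two lithotypes, smoothed over 3 passes
vpc = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
smoothed = smooth_vpc(vpc, passes=3)
```

Repeated passes progressively spread the transition over more levels, which is exactly the effect sought when smoothing the vertical transitions.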
The same procedure applied on the southern VPC leads to the following graphic representation.
(fig. 10.5-6)
Note that the main difference between the two VPC, even after the completion and smoothing steps, is the absence of the first lithotype in the VPC corresponding to the southern polygon. As these VPC will serve as conditioning data for the subsequent interpolation phase, their contents as well as their locations are essential.
We recall that the proportions of the different lithotypes over the cells of the working grid have been initially set to a constant value corresponding to the global proportions calculated using all the discretized samples. The aim of this application is to calculate these proportions more accurately, reproducing for example the absence of the first lithotype in the south.
For that purpose, we use the Compute 3D Proportions of the Application Menu. This procedure
requires all the VPC used for calculations to be completed beforehand.
This application offers three possibilities for the calculation:
- Copying the global VPC (displayed in the lower right corner of the main graphic window): the proportions are set to those calculated globally over each layer. This crude operation is slightly cleverer than the initial global proportions, as the calculations are performed layer by layer: the vertical non-stationarity is therefore taken care of.
- Inverse squared distance interpolation. This well-known technique is applied using the VPC as constraining data. For each level and each lithotype, the resulting proportion in a given cell is obtained as a linear combination of the proportions in all the VPC, for the same lithotype and the same layer. The weights of this combination are proportional to the inverse squared distance between the VPC and the target cell.
- Kriging. This technique is used independently for each layer and each lithotype, using the VPC as constraining information. A single 2D model (ModelProportions) is created and used for all the lithotypes: we therefore assume the intrinsic hypothesis of the multivariate linear model of coregionalization. The model can be defined interactively using the standard model definition panel. Here it has been set to an isotropic spherical variogram with a range of 5000m. There is no limitation on the number and types of basic structures that can be combined to define the model used for estimating the proportions. The sill is meaningless unless several basic structures are combined.
(snap. 10.5-5)
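The inverse squared distance option can be sketched for one layer and one target cell; the coordinates and proportions below are illustrative:

```python
import numpy as np

def idw_proportions(vpc_xy, vpc_props, cell_xy, eps=1e-10):
    """Inverse squared distance interpolation of lithotype proportions at
    one cell: a linear combination of the VPC proportions (same layer,
    same lithotype) with weights proportional to 1 / distance^2."""
    d2 = np.sum((vpc_xy - cell_xy) ** 2, axis=1) + eps  # eps avoids 0 division
    w = 1.0 / d2
    w /= w.sum()
    return w @ vpc_props  # (n_vpc,) @ (n_vpc, n_lithotypes)

vpc_xy = np.array([[0.0, 0.0], [1000.0, 0.0]])
vpc_props = np.array([[0.7, 0.3], [0.1, 0.9]])  # one layer, two lithotypes
est = idw_proportions(vpc_xy, vpc_props, np.array([500.0, 0.0]))
```

Because the weights sum to 1 and each VPC row sums to 1, the interpolated proportions automatically sum to 1 as well, and a cell on top of a VPC reproduces that VPC exactly.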
The proportions have been calculated over all the cells of the 3D working grid; it is now time to
visualize them using the Display 3-D Proportions option of the Application Menu.
This feature is specific to the display of proportions. The figure consists of a horizontal projection: each cell of the horizontal plane is displayed as a VPC obtained by considering the proportions of all the lithotypes for all the levels of the grid column.
The following operations can be performed:
- Sampling. This option is relevant when the number of grid cells is large. We simply specify the characteristics of a (horizontally) coarser grid. When the step of the coarser grid is set to 1, no sampling is performed and the entire grid is visualized.
- Averaging. Before visualization, the VPC are averaged layer by layer in moving windows. The extension of the moving window is specified by the user.
- Finally, we can choose to select a vertical window by specifying the top and bottom levels to be visualized.
For this first display, the 3D working grid is sampled with a step of 10 and the origin set at rank 5: only 9 cells are presented out of the 90 cells of the working grid:
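The arithmetic behind the "9 cells out of 90" can be checked with a small sketch (hypothetical helper, not part of the application):

```python
def sampled_ranks(n_cells, step, origin):
    """Ranks of the cells kept along one grid axis when sampling with
    the given step, starting at the given origin rank."""
    return list(range(origin, n_cells, step))
```

With a step of 10 and the origin at rank 5, the retained ranks are 5, 15, ..., 85: nine cells per axis, matching the tick marks of the figure.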
(snap. 10.5-6)
Plurigaussian
3D Proportion Map
(fig. 10.5-7)
Note - Pay attention to the fact that the represented grid is rotated (by 20 degrees).
The resulting graphic shows that the two conditioning VPC are reproduced at their locations. However, the first lithotype still shows up in the southern part of the display of the estimated proportions, because of the weak conditioning of the kriging step, which is based on two VPC only. It is therefore advisable to enhance the conditioning information:
This can be achieved by increasing the number of VPC which serve as constraining data for the kriging step. A first solution is to digitize more polygons, as one VPC is attached to each polygon, but this may lead to poorly defined VPC.
The other solution, considered here, is simply to duplicate each VPC several times within its calculation polygon: in VPC Edition mode, right-click on the basemap and select the Duplicate One VPC option. You then pick one VPC and move its duplicate to the desired location.
(fig. 10.5-8)
These VPC are used through the same computing process (Compute 3D Proportions in the Application Menu), using the kriging option with the same model as before; the printout gives the proportions:
Computing the proportions on the 3D Grid
========================================
Number of levels     = 51
Number of lithotypes = 4
Experimental Proportions
- Global VPC
  Number of active samples   = 48
  Proportion of lithotype #1 = 0.047
  Proportion of lithotype #2 = 0.367
  Proportion of lithotype #3 = 0.236
  Proportion of lithotype #4 = 0.350
- Regionalized VPC(s)
  Number of VPC used         = 6
  Number of active samples   = 306
  Proportion of lithotype #1 = 0.091
  Proportion of lithotype #2 = 0.320
  Proportion of lithotype #3 = 0.219
  Proportion of lithotype #4 = 0.369
The results are displayed using the Display 3D Proportions option of the Application Menu. As
expected, the first lithotype does not show up in the southern area anymore.
(fig. 10.5-9)
Some of the resulting VPC seem to be incomplete. This is because, for these cells, the whole vertical column of the grid does not lie within the unit: it is truncated by the unit limiting surfaces, and some cells are therefore masked by the unit selection.
The final step consists in using the Save & Run option of the Application Menu which updates the
Proportions Parameter File. Remember that the last edited proportion model will serve as input for
the plurigaussian simulation.
(fig. 10.6-1)
Apart from the conditioning data and the 3D grid proportion curves, the plurigaussian simulation
will honor the lithotype rule and the variographic properties of the two gaussian functions.
In geological terms, this means that we can force lithotypes to follow transitional or erratic variations, intrusions, or erosions of lithotypes into the whole unit or into a group of lithotypes. Furthermore, we can control the anisotropies, the horizontal and vertical extensions (ranges) and the behaviors (variogram types) along the two axes of the lithotype rule (for two groups of lithotypes).
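The truncation mechanism itself can be sketched as follows (a minimal illustration of the usual plurigaussian convention; the rectangle encoding of the rule is an assumption, not the Isatis data structure):

```python
def lithotype_from_rule(g1, g2, t1, t2, rule):
    """Assign a lithotype from the values of the two gaussian functions:
    t1 and t2 are sorted threshold lists cutting the g1 and g2 axes of
    the lithotype rule, and rule[i][j] is the lithotype of cell (i, j)."""
    i = sum(g1 > t for t in t1)   # cell index along the g1 axis
    j = sum(g2 > t for t in t2)   # cell index along the g2 axis
    return rule[i][j]
```

A 2x2 rule with thresholds at zero splits the (g1, g2) plane into four quadrants, one lithotype per quadrant; splitting a rectangle in the rule refines this partition.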
We must first define the name of the Proportion Parameter File (UnitTop) which contains all the relevant information. In particular, it contains:
- The well data information (UnitTop / DiscretizedWells): in this application, we use the assigned lithotype value (Variable Lithotype) at each sample rather than the proportions.
- The 3D working grid (UnitTop / WorkingGrid), which contains the macro variable of the last edited proportions of the different lithotypes in each cell (Proportions).
(snap. 10.6-1)
Lithotype rule
Click on the Define button. The initial lithotype rule has to be split into sub-rectangles, each lithotype corresponding to one rectangle. The choice of the lithotype rule should be based on all the geological information about the unit. The geological model of the unit is important for assigning the lithotype transitions. The application also produces a set of "histograms" (on the right part of the window) showing the vertical frequency of transitions between lithotypes along the wells.
Click on Cancel.
(snap. 10.6-2)
Downward matrix, row L4: 0.000  0.009  0.088  1.000
Upward matrix,   row L4: 0.000  0.000  0.000  0.908
For example, from the printout of the Downward probability matrix (from top to bottom) we see that L1 (in red) only has vertical contact with L2 (in orange), with a frequency of 37.5%. The same calculation from bottom to top (Upward probability matrix) shows that L1 is still only in contact with L2, but now with a frequency of only 9.1%.
The facies transitions can also be read from the histograms in the Lithotype Rule Definition graphic. The left column shows the whole set of lithotypes, and the right column plots the lithotypes that have a non-zero transition frequency with the respective lithotype of the left column.
In our case the lithotype in the left column corresponds to L1 (the red one) and, as said before, it has only one contact transition, with L2 (whatever the direction); only one histogram bar is therefore plotted, with a frequency equal to one.
Note - A lithotype bar is plotted in the histogram if it has a non-zero transition frequency, regardless of the type of calculation (Upward or Downward). The transition frequencies themselves are not preserved in the histogram; instead they are averaged and normalized.
From the set of histograms we conclude that:
- L1 only has contact with L2,
- L2 has contact with L1, L3 and L4,
- L3 has contact with L2 and L4,
- L4 has contact with L3 and L2.
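The transition counts behind these matrices and histograms can be sketched as follows (illustrative only; the application's exact conventions, e.g. the handling of self-transitions, are not documented here):

```python
import numpy as np

def downward_transitions(column, n_litho):
    """Row-normalized frequencies of top-to-bottom transitions between
    distinct lithotypes along one discretized well (lithotypes coded
    1..n_litho, listed from top to bottom)."""
    counts = np.zeros((n_litho, n_litho))
    for a, b in zip(column[:-1], column[1:]):
        if a != b:                 # count only actual facies changes
            counts[a - 1, b - 1] += 1
    row = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)
```

The upward matrix is obtained by applying the same function to the reversed column.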
From the VPC analysis we have found that L1 only occurs at the top of this unit and L4 at the bottom. We have also observed that L1 is only present in the northern area of the field. For now, we will not use external information to confirm our assumptions or to customize the lithotype rule, and we will work without any geological model delineation.
In the Lithotype Rule Definition panel, switch on Add Vertically and split the default lithotype rule (L1). Repeat this action to split the L2 area vertically. Now switch on Add Horizontally and split the L3 area in two. The next graphic shows the resulting lithotype rule:
(fig. 10.6-2)
This lithotype rule is consistent with all the properties mentioned above, but remember that these diagrams are only informative, as they are calculated on a few samples and only along the wells. The lithotype rule as defined previously is echoed on the main panel.
Each model can be edited using the standard model definition panel. For the time being, enter the following structures for the variograms of the two gaussian functions:
- g1Top: cubic variogram with a range of 2000m along X and Y, and 2.5m along Z,
- g2Top: exponential variogram with a (practical) range of 2000m along X and Y, and 3m along Z.
Note that, by default, the basic structures are anisotropic with a rotation equal to the rotation
angle of the grid (20 degrees). In our case, the basic structures are isotropic (in the XOY plane)
and this rotation angle is ignored. The quality of the fitting will be evaluated below.
Control displays
This application offers the possibility of displaying the thresholds calculated for both gaussian random functions. They are represented in a form similar to the lithotype rule but, this time, each axis is scaled in terms of the cumulative gaussian density. In our case, as the two underlying gaussian random functions are not correlated, the area directly represents the proportion of a lithotype. Here the working grid is sampled considering only one cell out of 10, which brings the number of cells down to 9*9. Level 25 (out of 51) is visualized.
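With uncorrelated gaussian functions, the thresholds along one axis follow directly from the cumulative proportions through the inverse standard normal CDF; a sketch (illustrative, not Isatis code):

```python
from statistics import NormalDist

def truncation_thresholds(props):
    """Gaussian thresholds bounding the lithotypes along one axis of the
    rule: the k-th threshold is the standard normal quantile of the
    cumulative proportion of the first k lithotypes."""
    nd = NormalDist()
    cum, thresholds = 0.0, []
    for p in props[:-1]:          # the last bound is +infinity
        cum += p
        thresholds.append(nd.inv_cdf(cum))
    return thresholds
```

Two lithotypes with equal proportions are separated by a threshold at zero; unequal proportions shift the cut accordingly.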
(fig. 10.6-3)
It is interesting to see that lithotype 1 (red) does not show up and that lithotype 4 (blue) progressively disappears towards the north-east. External information will be used later to establish that lithotype 4 (blue) belongs to the Deep Platform environment and that lithotype 1 belongs to a coastal environment (upper part of the field). This highlights the N-S lithotype progradation.
We can visualize a non-conditional simulation performed in the planes of the 3D working grid. To do so, we must enter the seed used for the random number generator.
(snap. 10.6-3)
For better legibility, it is possible to enlarge the extension of the grid along the vertical axis by specifying a Z scaling factor; here the distortion factor is set to 150. The next figure represents a YOZ section (X Index=60). Recall that this simulation is performed in the 3D working grid.
We can also visualize the horizontal section at Z Index=25.
(fig. 10.6-4)
Fitting variograms
The last tool corresponds to the traditional graphic variogram fitting facility. However, in the case of plurigaussian model fitting, it is rather difficult to use, as:
- The variograms are calculated experimentally on the lithotype indicators. When choosing the models for both underlying gaussian random functions, we must fit all the simple and cross-variograms simultaneously: for 4 lithotypes, there are 4 simple variograms and 6 cross-variograms.
- The equation relating the variogram model to the variogram of the lithotype indicator uses the lithotype proportion: the impact of a strong non-stationarity on the model rendition is difficult to evaluate.
In addition, in our case, we can calculate the variograms along the wells (almost vertically) but only with great difficulty horizontally, because of the small number of wells. The application allows the on-the-fly calculation of experimental simple and cross-variograms in a set of directions (up to 2 horizontal and 1 vertical). We must first define the characteristics of these computations:
- Computation Parameters: the lag values and the number of lags in each calculation direction. By default the lags are set equal to the grid mesh in each direction.
(snap. 10.6-4)
- Horizontal tab. The horizontal experimental variogram is calculated as the average of the variograms calculated layer by layer along a reference direction within an angular tolerance. Here the reference direction is set to 70 degrees and the angular calculation tolerance to 45 degrees. There is no restriction on the vertical layers to be scanned.
(snap. 10.6-5)
- Vertical tab. The vertical experimental variogram refers to calculations performed along the vertical axis within a vertical tolerance. In addition, we can restrict the calculation to consider only pairs within the same well.
(snap. 10.6-6)
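The quantity being computed — the experimental indicator variogram — can be sketched for the simple case of a single regularly discretized well (illustrative sketch; the application additionally handles tolerances and horizontal directions):

```python
import numpy as np

def indicator_variogram_1d(litho, code, n_lags):
    """Experimental semi-variogram of the indicator of one lithotype
    along a regularly discretized well: gamma(k) is the mean of
    0.5 * (i(z) - i(z + k))^2 over all pairs k samples apart."""
    ind = (np.asarray(litho) == code).astype(float)
    gam = []
    for k in range(1, n_lags + 1):
        if k >= len(ind):
            gam.append(np.nan)   # no pairs available at this lag
            continue
        d = ind[k:] - ind[:-k]
        gam.append(0.5 * np.mean(d ** 2))
    return np.array(gam)
```

A strictly alternating facies sequence gives the maximum value 0.5 at lag 1 and 0 at lag 2, which illustrates how the indicator variogram encodes facies alternation.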
Now, we can define a set of graphic pages to be displayed. Each page is identified by its name and its contents, composed of a set of lithotype indicator simple or cross-variograms. For each variogram, we can specify the line style and color. In the next figure, we define the page Horizontal, which contains the four simple lithotype indicator variograms for two directions (Horizontal 1 and Horizontal 2). The Horizontal 1 direction corresponds to the E-W axis of the working grid. We first define a New Page and enter its name in the popup window, then select Horizontal 1 and 2 from the Directions list, then select lithotypes L1, L2, L3 and L4 from the Lithotypes list and press the arrow button: the list of variograms to be calculated is displayed in the Curves List (Hor1: simple[1], ...). We do the same for a new Vertical page.
The next figure shows the simple indicator variograms of the four lithotypes in the horizontal and vertical planes (with the lithotype color scale): experimental quantities are displayed as dashes for the Hor1 direction and dot-dashes for Hor2, whereas the model expressions are displayed as solid lines.
Horizontal page: simple indicator variograms of the four lithotypes (Conglomerate, Sandstone, Shale, Limestone) for the Hor1 and Hor2 directions.
(fig. 10.6-5)
Vertical page: simple indicator variograms of the four lithotypes (Conglomerate, Sandstone, Shale, Limestone) for the Vert direction.
(fig. 10.6-6)
So far we have all the parameters needed to perform a plurigaussian simulation, apart from the neighborhood definition. However, before arriving at the final step, we present how to integrate external information or geological assumptions into the lithotype rule and the two underlying gaussian functions, in order to simulate lithofacies within conceptual or geological model features.
As stated before, lithotype L1 (conglomerate) is associated with a coastal environment. Geological information has shown that lithotype L1 has a regional trend measured at Azimuth=80 degrees. Lithotype L1 would be related to coastal facies (northern part of the deposit).
L2 and L3 (sandstones and shales) belong to a shallow marine environment and show an oblique sigmoidal stratification. The dipping angle of these lithotypes varies between 0.4 and 0.5 degrees. They prograde from the shore line (roughly north to south); consequently the layers are oriented along the same Azimuth=80 degrees.
L4 is related to the deep platform environment and has a good horizontal correlation (regional presence).
(fig. 10.6-7)
Note - The previous graphic is represented in working grid coordinates. The structural grid has a rotation, so you must pay attention when dealing with spatial correlations in the working grid.
In order to take into account this new information we will change the lithotype rule to fit the
next graphic.
(snap. 10.6-7)
Note that this lithotype rule allows L1 to have a contact with L3; this feature is not consistent with the vertical transitions of lithotypes from the wells but, as said before, the transition histograms are only informative. On the other hand, it is now possible to control the regional trend of the coastal lithotype L1 with one of the gaussian random functions, G1 (horizontal) in this case. We will call this function G1 Coastal Top. The characteristics of this model are:
- Global horizontal rotation = Azimuth=80 (equivalent to Az=10)
- Type = Cubic
- Ranges (Along rotated U = 2000, Along rotated V = 700, Along Z = 0.5m)
Note - As this gaussian will principally affect lithotype L1, we have used a Z range equivalent to its average thickness from the wells. In order to simulate the horizontal trend of the coastal environment, we have used a range along the U axis greater than along the V axis.
For the second gaussian, which will only rule L2 and L3, we take into account the dipping angle of progradation and an anisotropy direction equivalent to that of the G1 gaussian function, but with a range along the V axis greater than along the U axis, because progradation occurs orthogonally to the coastal trend. We will call this function G2 (L2-L3) Top. The characteristics of this model are:
- Local rotation for anisotropy = Azimuth=80 (equivalent to Az=10), vertical rotation = 0.4
- Type = Cubic
- Ranges (Along rotated U = 1000, Along rotated V = 1500, Along Z = 1.5m)
(snap. 10.6-8)
Since L1 has a geological correlation with L3 (facies transition from coastal to ramp), we have used a correlation value of 0.6 between the two gaussian functions. You can compare the indicator variograms and the display of non-conditional simulations to the previous model, as well as the impact of the correlation factor on the non-conditional simulations.
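The correlation between the two gaussian functions can be reproduced with the classical mixing construction (an illustrative sketch on plain samples; in the application the correlation applies to the two simulated gaussian fields):

```python
import numpy as np

def correlated_gaussians(n, rho, seed=0):
    """Two standard normal samples with correlation rho, obtained by
    mixing independent normals: g2 = rho*g1 + sqrt(1 - rho^2)*resid."""
    rng = np.random.default_rng(seed)
    g1 = rng.standard_normal(n)
    resid = rng.standard_normal(n)
    g2 = rho * g1 + np.sqrt(1.0 - rho ** 2) * resid
    return g1, g2
```

The construction preserves the unit variance of g2 while imposing the requested correlation, which is why increasing rho visibly aligns the truncated facies patterns of the two functions.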
(snap. 10.6-9)
The next graphic shows non-conditional simulations using different correlation values (from left to right: 0, 0.3, 0.9).
(fig. 10.6-8)
(snap. 10.7-1)
This application corresponds to the Interpolate / Conditional Simulations / Plurigaussian menu and performs plurigaussian simulations (only one in this case study). First, define the name of the Proportion Standard Parameter File (UnitTop) which defines all the environment parameters, such as:
- The input discretized line structure (UnitTop / DiscretizedWells) and the assigned lithotype variable Lithotype.
- The 3D working grid (UnitTop / WorkingGrid) with the input macro variable Proportions and the output macro variable containing the simulated lithotypes, Simupluri litho (2nd version).
In particular, the parameter file indicates if the plurigaussian simulation should be conditioned to
some data or not. In addition, we must define the specific parameters, such as:
- The neighborhood (Moving), using the standard neighborhood definition panel: the neighborhood search ellipsoid extensions are 10km by 10km in the horizontal plane and 20m along the vertical; it is divided into 8 angular sectors with an optimum of 4 points per sector.
- The parameters for reconstructing the underlying gaussian random functions at the constraining data points.
- The parameters for simulating the underlying gaussian random functions on the grid.
Simulated lithotypes (Conglomerate, Sandstone, Shale, Limestone): map view and vertical cross-section.
(fig. 10.7-1)
(snap. 10.8-1)
The new Proportion Parameter File (UnitBottom) is used to store all the information required by the plurigaussian simulation for the current unit, such as:
- The 3D structural grid (reservoir / simu) where the final results will be stored; it is the same as for the upper unit. However, we must take care to use a different name for the back-transformation pointer than the one used for the upper unit (Variable ptr_UnitBottom).
- For this unit, the horizontalisation is performed using a proportional distortion between the Top surface (surf2) and the Bottom surface (surf3), contained in the 2D surface grid file (data, File Surfaces). In this case, there is no need to specify any reference surface.
- The new 3D working grid (UnitBottom / WorkingGrid) is used to store the macro variable containing the proportions (Variable Proportions). Note that, in this proportional flattening case, the grid mesh along the vertical axis (0.2m) is defined arbitrarily. The number of meshes (27) is defaulted according to the mesh extension of the structural grid and the unit thickness.
- A new file to store the discretized wells (UnitBottom / DiscretizedWells), with the macro variable for the proportions (Variable proportions) and the assigned lithotype (Variable lithotype). The linked header file (UnitBottom / WellHeads) contains the names of the wells (Variable WellName).
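One plausible rule for the defaulted number of vertical meshes, consistent with the figures quoted above (a 0.2m mesh and 27 meshes imply a unit thickness of about 5.4m; the exact Isatis rule is not documented here):

```python
def default_mesh_count(unit_thickness, dz):
    """Hypothetical default for the number of vertical meshes in the
    flattened working grid: the unit thickness divided by the chosen
    vertical mesh, with at least one mesh."""
    return max(1, round(unit_thickness / dz))
```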
Four lithotypes are defined, as illustrated in the next window. We also define the corresponding name and color for each lithotype and create a new Color Scale and Palette (LithotypesUnitBottom) that will be used to represent the simulated lithotype grid using the traditional Display / Grid / Raster facility.
(snap. 10.8-2)
(snap. 10.8-3)
(snap. 10.8-4)
Once the Run button is pressed, the printout shows the following results:
File Name : UnitBottom/WorkingGrid
Mask Name : None
NX=  90   X0=   25.00m   DX= 50.00m
NY=  90   Y0= -775.00m   DY= 50.00m
NZ=  27   Z0=    0.10m   DZ=  0.20m
Type of system : Proportional between Top and Bottom Surfaces
Surface File   : data/Surfaces
Top Surface    : surf2
Bottom Surface : surf3
NX=  90   X0=   25.00m   DX= 50.00m
NY=  90   Y0= -775.00m   DY= 50.00m
NZ=  99   Z0=  -26.00m   DZ=  0.20m
Input Data:
-----------
File Name     : data/Wells
Variable Name : lithofacies
Discretization Options
----------------------
Discretization Length       = 0.20m
Minimum Length              = 0.02m
Distortion Ratio (Hor/Vert) = 250
Lithotype Selection Method  = Central
Discretization Results:
-----------------------
File Name        : UnitBottom/DiscretizedWells
Lithotype Name   : lithotype
Proportions Name : proportions[xxxxx]
Total Number of Lines      = 10
Total Number of Samples    = 270
Number of Informed Samples = 241
Discretized Lithotype Proportions:
Continental sandstone   = 0.351
Continental shales      = 0.166
Very shallow packstones = 0.448
Shallow wackstones      = 0.035
Assigned Lithotype Proportions:
Continental sandstone   = 0.365
Continental shales      = 0.166
Very shallow packstones = 0.436
Shallow wackstones      = 0.033
At the end of this discretization procedure, the proportions in the 3D working grid are defined as
constant over all the cells, equal to the discretized lithotype proportions (as defined above).
2D Point Proportion basemap showing the ten wells (W1 to W10).
(fig. 10.8-1)
This figure does not show any particular feature linked to the geographical position of these proportions. Lithofacies L3 (very shallow packstone, in orange) is associated with a shallow platform environment. Lithofacies L1 and L2 are associated with a continental environment, but we have no further external information. For this reason we consider the proportions of the lithotypes as stationary over the horizontal extension of the field.
We now focus on the global VPC, calculated using the samples from all the wells and displayed in the bottom right corner of the main graphic window. The next figure (obtained using the Display & Edit option of the Graphic Menu) shows the VPC in its raw version (on the right) and in its modified version (on the left).
(fig. 10.8-2)
Note that in our case, where the horizontalisation has been performed in a proportional manner, there is no need to complete the VPC. The initial global VPC has been smoothed (3 iterations), as in the upper unit. The Compute 3D Proportions option is used to simply duplicate the global VPC in each column of cells of the 3D working grid.
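The smoothing step can be illustrated with a simple three-level moving average, renormalized at each level (an assumed scheme for illustration; the actual smoothing algorithm used by the application is not documented here):

```python
import numpy as np

def smooth_vpc(props, iterations=3):
    """Smooth a vertical proportion curve of shape (levels, lithotypes)
    by iterating a 3-level moving average with edge padding, then
    renormalizing each level so the proportions still sum to 1."""
    p = np.asarray(props, float).copy()
    for _ in range(iterations):
        padded = np.pad(p, ((1, 1), (0, 0)), mode="edge")
        p = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
        p /= p.sum(axis=1, keepdims=True)
    return p
```

Smoothing slightly redistributes proportions between adjacent levels, which explains the small differences between smoothed and raw proportions noted further below.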
(snap. 10.8-5)
Experimental Proportions
- Global VPC
  Number of active samples   = 27
  Proportion of lithotype #1 = 0.359
  Proportion of lithotype #2 = 0.171
  Proportion of lithotype #3 = 0.438
  Proportion of lithotype #4 = 0.031
Note that, as expected, the proportions are the same on the global VPC as over the whole grid. They are slightly different from those calculated on the discretized samples in the previous application, due to the smoothing step. Do not forget to Save and Run this parameter file.
The proportions can be visualized using the Display 3D Proportions utility, which produces a figure where all the VPC are identical.
(fig. 10.8-3)
(snap. 10.8-6)
- g1Bot: anisotropic exponential variogram with a (practical) range of 1500m along X and Y, and 2m along Z.
- g2Bot: anisotropic cubic variogram with a (practical) range of 1500m along X and Y, and 2m along Z.
The next graphic shows a non-conditional simulation using the previous parameters.
(snap. 10.8-7)
We clearly see the influence of the variogram types and of the lithotype rule, which produce the following transitions:
- spotted between lithotype #1 (yellow) and lithotype #3 (orange),
- spotted between lithotype #2 (green) and lithotype #3 (orange),
- spotted between lithotype #3 (orange) and lithotype #4 (blue),
- smooth between lithotype #1 (yellow) and lithotype #2 (green).
(snap. 10.8-8)
The next figure represents layer 26 projected on the horizontal plane for the output model, together with a cross-section.
(fig. 10.8-4)
(snap. 10.9-1)
We first define the different 3D units to be merged: they are characterized by the corresponding
Proportions Standard Parameter Files (UnitTop and UnitBottom) which contain all the relevant
information such as the name of the pointer variable (not shown in the interface) or the name of the
macro variable containing the lithotype simulations.
We also define the 3D structural grid (reservoir / simu) where the merged results will be stored in a
new macro variable (lithotype).
The number of simulated outcomes in the different 3D working grids may differ. The principle is to match the outcomes by their indices and to create a merged outcome in the structural file using the same index.
In the bottom part of the window, the procedure concatenates the list of all the lithotypes present in all the units. We must now define a set of new lithotype numbers which enable us to regroup lithotypes across units: this is the case here for the lithotype Shale, present in both the upper and the lower units, which is assigned the same new lithotype number (3).
We can then use the Colors option in order to define the name and color attributes for the new lithotypes and define the corresponding new palette and Color Scale (LithotypesReservoir).
(snap. 10.9-2)
Merged simulated lithotypes (Conglomerate, Sandstone, Shale, Limestone (upper), Continental sandstone (lower), Very shallow packstones (lower)): map view and vertical cross-section.
(fig. 10.9-1)
We can see the two units merged and back-transformed into their structural position, with the limiting surface between them. The final step consists of running the Statistics / Quick Statistics application on the lithotype variable of the resulting 3D structural grid (reservoir / simu) in order to get the global statistics on the different lithotypes:
Integer Statistics: Variable lithotype[00001]
Integer value   Count of samples   Percentage
      1                 6545          1.31%
      2                81184         16.25%
      3               106415         21.29%
      4               136680         27.35%
      5                73203         14.65%
      7                89028         17.82%
      8                 6678          1.34%
11. Oil Shale
This case study illustrates the use of faults on a 2D data set containing
two variables: the elevation of the bottom of a layer and its thickness.
Important Note:
Before starting this study, it is strongly advised to read the Beginner's Guide, especially the following paragraphs: Handling Isatis, the tutorial for familiarizing yourself with Isatis basics, and Batch Processing & Journal Files.
All the data sets are available in the Isatis installation directory (usually C:\program file\Geovariances\Isatis\DataSets\). This directory also contains a journal file including all the steps of the case study. In case you get stuck during the case study, use the journal file to perform all the actions according to the book.
- the first one (called oil_shale.hd) contains the sample information, i.e.:
  - its coordinates,
  - the depth of the bottom of the layer (counted positively downwards), called elevation;
- the second one (called oil_fault.hd) contains the coordinates of the 4 segments which constitute the main fault system, as digitized by the geologist.
(snap. 11.1-1)
We can check the contents of the file by asking for some basic statistics on the variables of interest
(all expressed in meters):
Variable name   Number of valid samples    Minimum     Maximum
X                        191                637.04    55018.00
Y                        191                 27.84    68039.04
elevation                190               1299.36     2510.03
thickness                168                 27.40      119.48
- at the stage of calculation of the experimental variograms, a pair of points is not considered as soon as the segment joining them intersects a fault;
- at the estimation phase, a sample is not used as neighboring data if the segment joining it to the target intersects a fault.
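The geometric test behind both rules is a 2D segment intersection; a standard orientation-based sketch (illustrative only; endpoint touching and numerical tolerances are left out):

```python
def _orient(p, q, r):
    """Twice the signed area of triangle (p, q, r)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def pair_blocked(a, b, f1, f2):
    """True if the segment joining data points a and b strictly crosses
    the fault segment f1-f2, in which case the pair is rejected."""
    d1, d2 = _orient(f1, f2, a), _orient(f1, f2, b)
    d3, d4 = _orient(a, b, f1), _orient(a, b, f2)
    return d1 * d2 < 0 and d3 * d4 < 0
```

Two points on opposite sides of the fault are rejected as a pair; two points on the same side are kept.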
In 3D, the faults are defined as a set of triangular planes. In 2D, the faults represent the projection on the XoY plane of possible 3D faults. Therefore, we can distinguish two categories of faults:
- the set of broken lines which corresponds to the trace of vertical 3D faults,
- the closed polygons which are the projection of a set of non-vertical 3D faults: special options are dedicated to this case.
In this case study, the geologist has digitized one major fault, which corresponds to a single broken line composed of four vertices. It is given in the ASCII file called oil_fault.hd.
# FAULTS SAVING: Directory: Oil Shale File: Data
#
#
# max_priority=127
#
# field=1 , type=name
# field=2 , type=x1 , unit=m
# field=3 , type=y1 , unit=m
# field=4 , type=x2 , unit=m
# field=5 , type=y2 , unit=m
# field=6 , type=polygon
# field=7 , type=priority
#
#
#
#+++++++----------++++++++++----------++++++++++----++++
1    5000.00 69000.00   13000.00 63000.00   0   1
1   13000.00 63000.00   13000.00 54000.00   0   1
1   13000.00 54000.00   19700.00 45700.00   0   1
1   19700.00 45700.00   37000.00 69000.00   0   1
The faults are loaded using the File / Faults Editor utility. This procedure is built around a main graphic window. In its Application menu, we use, in order, Load Attached File to define the file into which we want to load the faults (Directory Oil Shale, File Data, already created in the previous paragraph), and the ASCII Import option to load the faults.
Pressing the Import button reads the fault, which is then represented on the graphic window together with the data information. The single fault is given the "name" 1.
(fig. 11.1-1)
When working in the 2D space, this application offers various possibilities such as:
- digitizing new faults,
- modifying already existing faults,
- updating the attributes attached to the faults (such as their names).
An interesting attribute is the priority, a value attached to each fault segment: this number indicates whether the corresponding segment should be taken into account (active) or not, with respect to a threshold priority defined in this application.
In order to check the priority attached to each segment of the fault, we select Edit Fault in the
graphic menu, select the fault (which is now blinking) and ask for the Information option.
Polyline Fault:
Priority: [1,1]
Nber of Segments: 4
This statement tells us that the designated fault, called 1, is composed of four segments whose priorities are all equal to 1. This means that, if the threshold is left at 127 (the value read from the ASCII file containing the fault information), all the segments are active.
If we decrease the threshold down to 0 in the main graphic window, the fault is represented with dashed lines, signifying that no segment is active; the data set would then be treated as if no fault had been defined. By giving different priorities to different segments, we can differentiate the severity of each segment and set it for the next set of actions.
As we want to use the faults in this case study, we set the threshold to 1 (in fact, any positive value) and use SAVE and RUN in the Application menu to store the fault and the threshold value together with the data information.
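The activation rule implied by this behaviour (priority-1 segments are active under a threshold of 127 but inactive under a threshold of 0) can be sketched as follows (an inferred rule, for illustration only):

```python
def segment_active(priority, threshold):
    """A fault segment is taken into account when its priority is
    positive and does not exceed the current threshold."""
    return 0 < priority <= threshold
```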
This procedure allows a graphic control where the final grid is overlaid on the initial data set.
(fig. 11.1-2)
(fig. 11.2-1)
The next task consists of checking the relationship between the two target variables, thickness and elevation: this is done using a scatterplot on which the regression line is represented.
The two variables are negatively correlated, with a correlation coefficient of -0.72. An interesting feature consists in highlighting, from the base map, the samples located on the upper side of the fault: they are represented by asterisks and correspond to the smallest values of the elevation and also, almost always, to the largest values of the thickness.
Scatterplot of thickness versus elevation (rho = -0.724).
(fig. 11.2-2)
Because of this rather complex correlation between the two variables (which depends on the location of the samples with regard to the fault compartment), we decide to analyze the structures of the two target variables independently.
Given the high sampling density, a preliminary quick interpolation may help to understand the main features of the phenomenon. In Interpolate / Interpolation / Quick Interpolation, a Linear Model Kriging is chosen to estimate each variable using a Unique neighborhood.
(snap. 11.2-1)
(snap. 11.2-2)
(fig. 11.2-3)
Note - These displays are obtained by superimposing the grid in raster representation, the
isolines and the fault. Details are given in the last section of this case study.
From the last graphic it is clear that the thickness is anisotropic with the elongated direction of the
anisotropy ellipse close to the NW-SE direction.
Note - A variogram map calculation applied to thickness and elevation datasets would lead to
similar conclusions about the main directions of anisotropy.
Two directional variograms are then calculated with 15 lags of 2km each and an azimuth rotation of
45 degrees (N45).
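The directional variogram calculation (here 15 lags of 2 km along N45 and N135) can be sketched as follows; this is a generic implementation with an angular tolerance, not the exact Isatis algorithm:

```python
import numpy as np

def directional_variogram(x, y, z, azimuth_deg, lag, nlags, tol_deg=22.5):
    """Experimental variogram of z along one azimuth (degrees from north)."""
    gamma = np.zeros(nlags)
    npairs = np.zeros(nlags, dtype=int)
    az = np.radians(azimuth_deg)
    u = np.array([np.sin(az), np.cos(az)])       # unit vector of the direction
    for i in range(len(z) - 1):
        dx, dy = x[i+1:] - x[i], y[i+1:] - y[i]
        h = np.hypot(dx, dy)
        with np.errstate(invalid="ignore", divide="ignore"):
            cosang = (dx * u[0] + dy * u[1]) / np.where(h > 0, h, np.nan)
        ok = (h > 0) & (np.abs(cosang) >= np.cos(np.radians(tol_deg)))
        k = (h[ok] / lag).astype(int)            # lag index of each pair
        inside = k < nlags
        np.add.at(gamma, k[inside], 0.5 * (z[i+1:][ok][inside] - z[i]) ** 2)
        np.add.at(npairs, k[inside], 1)
    return np.where(npairs > 0, gamma / np.maximum(npairs, 1), np.nan), npairs

# samples every 2 km along an E-W profile, variogram along N90 (east)
xs = np.arange(0.0, 30.0, 2.0)
gam, cnt = directional_variogram(xs, np.zeros_like(xs), np.sin(xs / 5.0),
                                 90.0, 2.0, 15)
```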
(fig. 11.2-4: experimental directional variograms of thickness along N45 and N135; distance in km)
We save this set of two directional variograms in the Parameter File called Oil Shale Thickness.
Note - By displaying the variogram cloud and highlighting several variogram pairs, the user may
note that none of these pairs crosses the faults.
We reproduce similar calculations on the elevation variable, for 15 lags of 2km each, but this time
the rotation is slightly different: Azimuth 30 degrees (N30).
(fig. 11.2-5: experimental directional variograms of elevation along N30 and N120; distance in km)
We note that variograms of the two variables have similar behaviour even if the directions are
slightly different. The set of directional variograms is saved in the Parameter File called Oil Shale
Elevation.
(snap. 11.3-1)
(snap. 11.3-2)
(snap. 11.3-3)
In order to check the automatic fitting on the two directions simultaneously, we use the Global Window. The model produced is satisfactory. Press Run(Save) to save the parameter file.
Repeat the process for Thickness.
(fig. 11.3-1: fitted model on the thickness directional variograms, N45 and N135; distance in km)
(fig. 11.3-2: fitted model on the elevation directional variograms, N30 and N120; distance in km)
11.4 Estimation
11.4.1 Estimation of Thickness
The estimation will be performed with the procedure: Interpolate / Estimation / (Co-)Kriging.
Note - Being a property of the Input File, the fault system will be automatically taken into account
in the estimation process.
(snap. 11.4-1)
After having selected the variogram model, we must define the neighborhood which will be used
for the estimation of both variables. It will be saved in the Parameter File called Oil Shale. In order
to decluster the information, we will use a large number of data per neighborhood (3x8) taken
within a large neighborhood circle (30km).
Finally, in order to avoid too much extrapolation, no target node will be estimated unless there are at
least 4 neighbors within a circle of radius 8km.
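The neighborhood logic described above can be sketched as a sector search; the function below is a simplified stand-in (8 sectors of 3 points each, with hypothetical parameter names), not the actual Isatis neighborhood:

```python
import numpy as np

def select_neighbors(xt, yt, xd, yd, radius=30.0, nsect=8, per_sect=3,
                     min_count=4, min_radius=8.0):
    """Moving-neighborhood sketch: keep up to per_sect closest data per
    angular sector within radius (km); reject the target node if fewer
    than min_count data fall within min_radius (km)."""
    dx, dy = xd - xt, yd - yt
    dist = np.hypot(dx, dy)
    if np.count_nonzero(dist <= min_radius) < min_count:
        return None                          # target left unestimated
    inside = dist <= radius
    sector = ((np.arctan2(dy, dx) + np.pi) /
              (2 * np.pi) * nsect).astype(int) % nsect
    keep = []
    for s in range(nsect):
        idx = np.flatnonzero(inside & (sector == s))
        keep.extend(idx[np.argsort(dist[idx])][:per_sect])
    return np.sort(np.array(keep, dtype=int))
```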
(snap. 11.4-2)
(snap. 11.4-3)
The first task is to check the consistency of these neighborhood parameters graphically using the
Test button in the main window: a secondary graphic window appears representing the data, the
fault and the neighborhood parameters. Pressing the Left Button of the mouse once displays the target grid. Pick a grid node with the mouse again to start the estimation: each active datum selected in
the neighborhood is then highlighted and displayed with the corresponding weight (as a percentage). Using the Domain to be estimated item in the Application menu cross-hatches all the grid
nodes where no estimation will be performed (next picture).
(fig. 11.4-1: neighborhood test window with the domain to be estimated cross-hatched; X and Y in km)
Note - Although the sample locations are the same, the graphics obtained for the two variables will
not necessarily be similar as the number of active data is not the same (190 values for elevation
compared with only 168 for the thickness): the samples for which the current variable is undefined
are represented as small dots instead of crosses.
Before visualizing the results, we run the same process with the elevation variable, modifying the
name of the Parameter File containing the model to Oil Shale Elevation, and we store the results in
the corresponding output variables.
Firstly, give a name to the template you are creating: Thickness. This will allow you to
display this template again later.
In the Contents list, double-click on the Raster item. A new window appears, letting you
specify which variable you want to display and with which color scale:
In the Data area, in the Grid file select the variable Thickness (Estimate),
Specify the title that will be given to the Raster part of the legend, for instance Thickness,
In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the thickness variable. To create a new color scale: click on the Color Scale button,
double-click on New Color Scale and enter a name: Thickness, and press OK. Click on the
Edit button. In the Color Scale Definition window:
- In the Bounds Definition, choose User Defined Classes.
- Click on the Bounds button and choose 18 classes between 30 and 120, then click on OK.
- In the Colors area, click on Color Sampling to pick the 18 colors at regular intervals from
the 32-color palette. This will improve the contrast of the resulting display.
- Switch on the Invert Color Order toggle in order to assign the red colors to the large
Thickness values.
- Click on the Undefined Values button and select Transparent or Blank.
- In the Legend area, switch off the Automatic Spacing between Tick Marks button, enter
10 as the reference tickmark and 10 as the step between the tickmarks. Then, specify that
you do not want your final color scale to exceed 6 cm. Switch off the Automatic Format
button and set the number of digits to 0.
- Click on OK.
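The class bounds and color sampling described above reduce to simple arithmetic; a sketch (the palette names are hypothetical):

```python
import numpy as np

# 18 classes between 30 and 120 -> 19 class bounds, each class 5 units wide
bounds = np.linspace(30.0, 120.0, 19)

# pick 18 colors at regular intervals from a 32-color palette, inverted so
# that red ends up on the large thickness values (palette names invented)
palette = [f"color_{i}" for i in range(32)]
idx = np.linspace(0, 31, 18).round().astype(int)
picked = [palette[i] for i in idx][::-1]
```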
(snap. 11.5-1)
In the Item contents for: Raster window, click on Display current item to display the
result.
Click on OK.
Back in the Contents list, double-click on the Isolines item to represent the thickness estimate in
isolines:
In the Data area, in the Grid file select the variable Thickness (Estimate),
Click on Display current item to display the result and then OK.
Double-click on the Basemap item to represent the thickness values. In the Data area, select
Data / thickness as the proportional variable. In the Graphic Parameters area, choose a size of
0.1 and 0.2 for the lower and the upper bounds. The samples where the thickness variable is not
defined will be represented with blue circles. Click on Display Current Item to check your
parameters, then on Display to see all the previously defined components of your graphic. Click
on OK to close the Item contents panel.
Double-click on the Faults. In the Data area, select file Data, the fault being a property of this
file. Change the Faults Style to a double thickness red line. Click on Display to see all the
defined components of your graphic. Click on OK to close the Item contents panel.
In the Item list, you can select any item and decide whether or not you want to display its legend. Use the Move Back and Move Front buttons to modify the order of the items in the final
Display.
The Display Box tab allows you to decide whether you want to display all the contents or just the
area containing specific items. Select the mode Containing a set of items, then click on the Raster item and then on Display.
Close the Contents window. Your final graphic window should be similar to the one displayed
hereafter.
(fig. 11.5-1)
Before closing the graphic window, click on Application / Store Page to save its contents, allowing you to reproduce this graphic easily later.
12. Multi-layer Depth Conversion With Isatoil
This case study illustrates the workflow of the Isatoil module, on a data
set that belongs to a real field in the North Sea.
For confidentiality reasons, the coordinates and the characteristics of
the information have been modified. For similar reasons, the case
study is only focusing on a subset of the potential reservoir layers.
12.1 Introduction
The main goal of Isatoil is to build a complete geological model in a layer-cake framework. This is
done when the surfaces corresponding to the tops of the different units are established. The layer-cake hypothesis assumes that each unit extends between two consecutive surfaces. The order of the
units remains unchanged over the whole field under study. One or several units may disappear over
areas of the field: this corresponds to a pinch-out.
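The layer-cake hypothesis can be sketched numerically: working from the top down, each deeper surface is clipped so it never rises above the one just above it, and a zero-thickness interval is a pinch-out. This is an illustrative sketch, not Isatoil's algorithm:

```python
import numpy as np

def enforce_layer_cake(depths):
    """depths: array (nsurf, nx) of depths (increasing downward), ordered
    from top to bottom. Clip each surface below the previous one so units
    never overlap; equal depths mean a pinch-out."""
    out = depths.copy()
    for k in range(1, len(out)):
        out[k] = np.maximum(out[k], out[k - 1])
    return out

surfs = np.array([[2400., 2450., 2500.],
                  [2440., 2440., 2560.],    # rises above the top at node 1
                  [2480., 2430., 2600.]])
fixed = enforce_layer_cake(surfs)
thickness = np.diff(fixed, axis=0)          # zero thickness = pinch-out
```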
A secondary process produces the values of the petrophysical variables as a two-dimensional grid
within each unit. Some units may be considered as outside the set of reservoirs and therefore they
do not carry any valuable petrophysical information.
Finally the program may be carried over, with the estimation tool (Kriging) being replaced by Simulations in order to reproduce the variability of each one of the parameters involved. This procedure
allows a non-biased quantification of the volumes located above contacts and within polygons
which delineate the integration areas.
Before reading this case study, the user should carefully read the technical reference dedicated to
Isatoil, which describes the general terminology. This technical reference is available in the OnLine Help.
Note - The aim of Isatoil is to derive a consistent layer-cake 2D block model: in other words, it will
be used in order to determine the elevations of a set of surfaces. Therefore elevation will be
regarded as the variable rather than a coordinate, while all the information will be considered as
2D.
Isatis will also be used during this case study, whenever Exploratory Data Analysis and particular
types of graphics are required. The user should already be acquainted with the various Isatis
applications, therefore we shall only mention in this documentation the names of the Isatis panels
that will be used - e.g. Isatis / Statistics / Exploratory Data Analysis.
Because of its high quality, the seismic marker corresponding to the top of the Upper Brent formation has been selected as the Top Layering surface, from which the whole layer-cake sequence will
be derived.
The second unit (Lower Brent) has been subdivided into 4 production Zones. These zones do not
correspond to visible seismic markers and thus have no time map associated.
Note - Other units were originally subdivided into zones but this option has not been retained in
this Case Study for the sake of simplicity.
The entire layer-cake sequence is truncated at the top by the Base Cretaceous Unconformity (BCU)
as well as two other erosional surfaces (named ERODE1 and ERODE2). These surfaces correspond
to Upper Limits. This field does not have any Lower Limit.
The geological sequence is summarized in the following table where the name, the type and the
designation codes of the different surfaces are listed; the area is skipped as it is always equal to 1.
Surface Name          Surface Type
BCU                   Upper Limit
ERODE 1               Upper Limit
ERODE 2               Upper Limit
Upper Brent - B1      Top Layering
Lower Brent - B4      Layer
                      Zone
                      Zone
Lower Brent - B6      Zone
Dunlin - D1           Layer
Statfjord - S1        Layer
Base Statfjord - BS   Layer
the well geometry file which contains the intercepts of the wells with the different surfaces
the well petrophysics file which contains the values of the petrophysical variables sampled
within the zones
The base file is two-dimensional. Each sample contains the following variables:
the 2D coordinates (in meters) of the intercept of each well with the surfaces constituting the
geological model
the depth (in meters) of the intercept of each well with the surfaces constituting the geological
model
the indices for the area, the layer and the zone designation codes
the 2D coordinates (in meters) of the point where the petrophysical parameters are sampled.
There may be more than one sample along the same well within a given unit, for instance if the
well is highly deviated or even horizontal.
the indices for the Area, the Layer and the Zone designation
the Top Layering surface - Upper Brent B1 - from which the entire layer-cake sequence will be
derived
the trend surfaces used as external drift for the petrophysical parameters
The input variables read from the ASCII grid file must follow the terminology as defined in the
Reference Guide.
$GTX_HOME/Datasets/Isatoil (UNIX)
C:\Program Files\Geovariances\Isatis\Datasets\Isatoil (Windows)
In this case study the three data files will be loaded using the standard ASCII import facility (File /
Import / ASCII). For convenience all the files will be stored in the same Directory named Reservoir.
The user should refer to the Isatis manual for a detailed description of the standard import applications that are shared by Isatis & Isatoil.
a header file (Wells header) with 50 samples and the following variables:
well name contains the name which distinguishes one well from another - this is a 3-character alphanumerical variable -
the base file (Wells Lines (Geometry)) with 972 samples and the following variables:
SN+ Sample Number (READONLY) contains the rank of the sample in the file - from 1 to
972 -
LN+ Line Number (READONLY) which contains the rank of the line to which the sample
belongs - from 1 to 50 -
RN+ Relative Number (READONLY) which contains the rank of the sample in the line to
which it belongs - from 1 to 50, 50 being the count of samples in the longest well -
Area Code, Layer Code and Zone Code are the designation codes for the geological
sequence
Some basic statistics show that the data set is constituted of 50 wells (or 972 intercepts) and that the
depth of the intercept (variable Z-Coor) varies between 2337m and 3131m.
Note that the field extension (in the XOY plane) is different for the two files:

Variable     Header File   Base File
X Minimum    37.           -317.
X Maximum    3675.         4141.
Y Minimum    48.           -777.
Y Maximum    4919.         4919.
This indicates that the wells are not strictly vertical, which one can check out on the following XOY
projection, performed with Display/Lines in Isatis.
(fig. 12.3-1)
Horizontal projection of the geological well information (with the well names)
In the same application, the wells may also be represented in a perspective volume which gives a
better understanding of the well trajectories in 3D.
Note - This operation is not straightforward since the well information has been loaded as 2D
data. The well file must be temporarily modified into 3D lines: the elevation variable is transformed
into a coordinate - in the Data File Manager - for the sake of the 3D representation.
One can check that most of the wells are highly deviated - squares indicate the tops of the wells and
triangles the bottoms.
(fig. 12.3-2)
SN+ Sample Number (READONLY) contains the rank of the sample in the file - from 1 to 408 -
Area Code, Layer Code and Zone Code are the designation codes for the geological sequence
Porosity and Net to Gross Ratio are the measurements of the petrophysical variables
Note - There is no variable corresponding to the third coordinate. As a matter of fact the
petrophysical parameters are assumed to be vertically homogeneous. Therefore it suffices to know
the unit to which the measurements belong (as well as the X & Y coordinates) in order to perform
the corresponding 2D estimation or simulations.
The data set consists of 408 samples. The following basic statistics are reported for the two petrophysical variables - using Statistics / Quick Statistics in Isatis. Note that the two variables are not
necessarily defined for the same samples: Count indicates the number of samples at which each
variable is defined.
Variable       Count   Minimum   Maximum   Mean     St. Dev.
Porosity       290     0.0010    0.3400    0.2404   0.0451
Net to Gross   307     0.        1.000     0.5777   0.3436
The following picture shows the distribution of Porosity (crosses) and Net to Gross ratio (circles)
on a horizontal projection - using Display / Points / Proportional in Isatis. The grey spots correspond
to samples where one of the variables has not been measured.
(fig. 12.3-1)
(fig. 12.3-2)
             Along X   Along Y
Origin (m)   0.        0.
Mesh (m)     50.       50.
Count        73        79
Maximum (m)  3600.     3900.
The following table gives the list of input variables defined on the grid. Note that the variable
names comply with the Isatoil naming convention.
Variable Name          Variable Type   Surface Name
depth_1_0_0            Depth           BCU
depth_1_0_1            Depth           ERODE 1
depth_1_0_2            Depth           ERODE 2
depth_1_1_0            Depth           Upper Brent - B1
time_1_1_0             Time            Upper Brent - B1
time_1_2_0             Time            Lower Brent - B4
time_1_3_0             Time            Dunlin - D1
time_1_4_0             Time            Statfjord - S1
time_1_5_0             Time            Base Statfjord - BS
trendporgauss_1_2_2    Trend
trendporosity_1_2_2    Trend
(snap. 12.4-1)
the message each time a data point is discarded, for one of the following reasons:
Error when calculating the thickness by comparison to the reference surface: the reference
surface is not defined
Error when calculating the thickness by comparison with an extrapolated (time) surface: the
surface is not defined
Error when calculating velocities by scaling by a time thickness: the thickness is null or not
defined. This may happen essentially in the vicinity of a pinchout.
Finding duplicates. If two intercepts (with the same layer) are located too close (less than
one tenth of the grid mesh away), the points are considered as duplicates: their coordinates
are printed and only the first point is kept, the second one is discarded.
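The duplicate rule described above (two intercepts of the same layer closer than one tenth of the grid mesh; first point kept) can be sketched as:

```python
import numpy as np

def find_duplicates(x, y, mesh=50.0):
    """Sketch of the duplicate rule: discard any intercept closer than
    mesh/10 to an already kept one; the first point encountered is kept."""
    tol = mesh / 10.0
    keep, kept_xy = [], []
    for i, (xi, yi) in enumerate(zip(x, y)):
        if any(np.hypot(xi - xk, yi - yk) < tol for xk, yk in kept_xy):
            print(f"duplicate discarded at X={xi:.2f}m Y={yi:.2f}m")
            continue
        keep.append(i)
        kept_xy.append((xi, yi))
    return keep

# with a 50 m mesh the tolerance is 5 m: the second point is a duplicate
idx = find_duplicates([0.0, 2.0, 100.0], [0.0, 1.0, 0.0])
```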
When a point is discarded, the following message is produced with the references of the discarded information, followed by the final count of active data:
Discarding point in the following step :
    Calculating Layer Proportions (Degenerated Well)
    Well 113 (Zone id.=140)
    Coordinate along X = 1599.87m
    Coordinate along Y = 4134.25m
    Depth              = 2772.030
For this calculation phase (a layering phase which processes 4 variables simultaneously) the different columns represent:
Initial: the initial value, as found in the data base. In this case of layering, the data consist of
the depth of the intercept.
Data: the data after it has been pre-processed for usage in the next calculation step. In this
case of layering, data are converted into velocities.
Pr1,...,4: percentage spent in each layer. The percentage is set to 0 if the layer is not reached.
In this case of layering (in velocity), the value represents the time percentage spent in each layer
located above the intercept.
Trend1,...,4: trend used as an external drift for each layer. In this case of layering, the time
of each layer is used as its external drift. The trend value is not printed for a layer which is
not reached.
Surface Name          Surface Type
BCU                   Upper Limit
ERODE 1               Upper Limit
ERODE 2               Upper Limit
Upper Brent - B1      Top Layering
Lower Brent - B4      Layer
                      Zone
                      Zone
Lower Brent - B6      Zone
Dunlin - D1           Layer
Statfjord - S1        Layer
Base Statfjord - BS   Layer
When the list is completely initialized you will need to Edit the different surfaces separately in
order to give them their parameters and constraints for computation.
Surface Name          Calculated   T2D        EDL    EDZ
BCU                   No
ERODE 1               No
ERODE 2               No
Upper Brent - B1      No
Lower Brent - B4      Yes          Velocity   Time   No
                      Yes                            No
                      Yes                            No
Lower Brent - B6      Yes                            No
Dunlin - D1           Yes          Velocity   Time   No
Statfjord - S1        Yes          Velocity   Time   No
Base Statfjord - BS   Yes          Velocity   Time   No
The first 4 surfaces - Top Layering and Limit surfaces - cannot be calculated by Isatoil, they are
already stored on the Grid file and will be used as data. All the other surfaces will be calculated.
T2D indicates whether the Time to Depth conversion will be performed using intermediate
Velocity or directly in terms of Thickness.
EDL indicates the type of external drift information possibly used during the Layering stage
EDZ tells the type of external drift information possibly used during the Zonation stage.
$GTX_HOME/Datasets/Isatoil (UNIX)
C:\Program Files\Geovariances\Isatis\Datasets\Isatoil (Windows)
The following table summarizes the faulting parameters that must be defined for the surfaces in this
Case-Study. The Unit must be set to Meter whenever polygons are used.
Surface Name          Surface Type   Faulting   File      Count
BCU                   Upper Limit    No
ERODE 1               Upper Limit    No
ERODE 2               Upper Limit    No
Upper Brent - B1      Top Layering   Yes        b1.pol    22
Lower Brent - B4      Layer          Yes        b4.pol    22
                      Zone           No
                      Zone           No
Lower Brent - B6      Zone           No
Dunlin - D1           Layer          Yes        d1.pol    22
Statfjord - S1        Layer          Yes        s1.pol    23
Base Statfjord - BS   Layer          Yes        bst.pol   22
Count designates the number of fault polygons in the file. Some polygons which do not lie
within the rectangular area of interest will be automatically discarded.
Surface Name          Surface Type   Calculated   GOC (m)   OWC (m)
BCU                   Upper Limit    No           No        No
ERODE 1               Upper Limit    No           No        No
ERODE 2               Upper Limit    No           No        No
Upper Brent - B1      Top Layering   No           2570      2600
Lower Brent - B4      Layer          Yes          No        2600
                      Zone           Yes          No        No
                      Zone           Yes          2570      2600
Lower Brent - B6      Zone           Yes          No        No
Dunlin - D1           Layer          Yes          No        No
Statfjord - S1        Layer          Yes          No        No
Base Statfjord - BS   Layer          Yes          No        No
Note - In the particular case of Lower Brent - B4, no GOC is provided. The only fluids that
can be encountered in this zone are Oil and Water.
                                        Porosity             Net to Gross
Surface Name          Surface Type   Calc   Norm   ED     Calc   Norm   ED
BCU                   Upper Limit    No     No     No     No     No     No
ERODE 1               Upper Limit    No     No     No     No     No     No
ERODE 2               Upper Limit    No     No     No     No     No     No
Upper Brent - B1      Top Layering   Yes    Yes    No     Yes    Yes    No
Lower Brent - B4      Layer          Yes    No     No     Yes    No     No
                      Zone           No     No     No     No     No     No
                      Zone           Yes    Yes    Yes    Yes    Yes    No
Lower Brent - B6      Zone           No     No     No     No     No     No
Dunlin - D1           Layer          No     No     No     No     No     No
Statfjord - S1        Layer          No     No     No     No     No     No
Base Statfjord - BS   Layer          No     No     No     No     No     No
Norm indicates if the variable must be Normal Score transformed before the simulation process,
ED indicates if the estimation (or simulation) should take an external drift into account.
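The Norm column refers to a normal score transform; a minimal sketch of such a transform (empirical version, ignoring declustering weights and ties):

```python
import numpy as np
from statistics import NormalDist

def normal_score(values):
    """Empirical normal score transform: each datum is mapped to the
    standard Gaussian quantile of its empirical frequency."""
    values = np.asarray(values, float)
    rank = np.argsort(np.argsort(values))        # 0 .. n-1
    freq = (rank + 0.5) / len(values)            # plotting position in (0, 1)
    return np.array([NormalDist().inv_cdf(p) for p in freq])

z = np.array([0.10, 0.25, 0.18, 0.31, 0.22])    # hypothetical porosities
g = normal_score(z)                              # symmetric scores around 0
```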
Oil

Surface Name          Surface Type
BCU                   Upper Limit
ERODE 1               Upper Limit
ERODE 2               Upper Limit
Upper Brent - B1      Top Layering   0.329   1.145   0.949   0.663   0.918   0.106
Lower Brent - B4      Layer          0.604   1.135   0.357   0.107   1.661   0.106
                      Zone
                      Zone           0.332   1.763   0.587   0.714   0.756   0.856
Lower Brent - B6      Zone
Dunlin - D1           Layer
Statfjord - S1        Layer
Base Statfjord - BS   Layer
Surface Name          Surface Type   Gas    Oil    Color
BCU                   Upper Limit                  Black
ERODE 1               Upper Limit                  Black
ERODE 2               Upper Limit                  Black
Upper Brent - B1      Top Layering   110    1.31   Yellow
Lower Brent - B4      Layer          105    1.44   Red
                      Zone                         Pink
                      Zone           110    1.42   Purple
Lower Brent - B6      Zone                         Orange
Dunlin - D1           Layer                        Green
Statfjord - S1        Layer                        Blue
Base Statfjord - BS   Layer                        White
Stats-1 will produce the basic statistics of all the information regarding the selected surface.
The following example will be obtained for the Upper Brent B1 surface - obviously after the
calculations have been performed:

General Statistics
==================
Layer : Upper Brent - B1 (Identification : Layer - Faulted)
Grid  - Time value  : Nb = 5767
Grid  - Depth value : Nb = 5767
Wells - Depth value : Nb =   13
Wells - Porosity    : Nb =    8
Wells - Net/Gross   : Nb =    8
At this stage of the Case-Study no surface has been calculated yet. However the reference depth
surface - Top Layering - as well as the different time surfaces have been loaded, therefore we can
already perform various types of graphical representations of this data. Obviously these representations will also apply to the results that will be obtained later in the project.
Time
Depth
Velocity
Porosity
Let us first use Display / Map to visualize maps of some of the surfaces that are already available
on the final grid.
(snap. 12.4-1)
We choose to represent:
The time variable as a colored image - using the automatic Color Scale named Rainbow -
The time variable as a series of contour lines - click on the Edit button to access the contour
lines definition window -
(snap. 12.4-2)
In this example, the variable is smoothed prior to isoline representation - using 3 passes in the
filtering algorithm - and two sets of contour lines are represented:
the multiples of 50 ms using a black solid line and representing the label on a pink background
The well information: the corresponding Edit button is then used to define the characteristics of
the point display.
(snap. 12.4-3)
In this example the intercepts with the target surface - Upper Brent - B1 - are represented with a
"+" sign and the well name is displayed in a rectangular white box.
(fig. 12.4-1)
(fig. 12.4-1)
This display clearly shows the shift of the non-vertical fault planes through their intersections with
two time surfaces located around 250 ms apart. It also shows the impact of the faulting on the isochrone map.
Note that in the upper-right corner the three faults intersect the Base Statfjord - BS level although
they are not visible on the Upper Brent - B1 level - at least within the grid extension -
(fig. 12.4-1)
In terms of statistics we can check that 2135 grid nodes are defined out of 5767.
draw the section which corresponds to the first bisector of the field - click on the Automatic
button to initialize the segment's coordinates -
switch OFF the Automatic Vertical Scale and instead use the following parameters:
a Vertical Scale Factor of 300 to exaggerate the thickness for better legibility.
The figure below clearly shows the impact of (at least) one non-vertical fault.
(fig. 12.4-2)
We can add the traces of the fault polygons corresponding to each layer on top of the previous section. The intersections between the vertical section and the fault polygons attached to a given layer
are represented as vertical lines - with the same color coding as the layer. This helps to check that
the fault polygons indeed match the time maps.
(fig. 12.4-3)
There is an interactive link between the map and section representations, so that you can:
Display the location of the current section on time maps, depth maps, etc.
Any map can be used in order to digitize a new segment while the sections are being refreshed
simultaneously.
then a Zonation which subdivides each seismic unit into several zones
12.5.1 Layering
12.5.1.1 Correlation for Layering
The Geometry / Seismic Layering / Correlations application allows us to check the hypothesis concerning the correlation between layer thickness and the trend surfaces used as external drift - if
applicable -
Note - In this Case Study we have specified that Layering should be performed through velocity rather than directly in depth - using the time maps as external drift surfaces.
The application represents - in separate graphics - the behavior of the interval velocity against time,
for each of the four layers constituting the sequence.
(snap. 12.5-1)
The system first derives the interval velocities from the apparent velocity information at wells
(deduced from the Top Layering reference surface). For layer #N the interval velocity is obtained
by:
subtracting the thickness of all the layers located above layer #N - the thicknesses are simply
estimated by their trend -
Obviously the deeper the surface the less accurate - and often the less numerous - the represented
data.
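The interval-velocity derivation described above can be illustrated with hypothetical numbers (the depths, thicknesses and time below are invented for illustration, and the exact Isatoil formula may differ):

```python
# Interval velocity sketch for layer N: the thickness drilled within
# layer N is the intercept depth minus the top-layering depth and the
# trend-estimated thicknesses of the layers above; dividing by the time
# spent in layer N gives the interval velocity.
top_layering_depth = 2350.0          # m, reference surface at the well
intercept_depth = 2772.0             # m, base of layer N at the well
thickness_above = [180.0, 120.0]     # m, layers 1..N-1, estimated by trend
time_in_layer = 0.040                # s, time spent within layer N

thickness_n = intercept_depth - top_layering_depth - sum(thickness_above)
v_interval = thickness_n / time_in_layer   # m/s
```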
Select the following parameters for representing the well names:
switch ON the flag which indicates that names will be posted on the graphics
select the symbol (style and size) to be posted - e.g. a 0.2 cm star -
select the background color for the label's box - e.g. white -
(snap. 12.5-2)
The following graphics show a good organization of the well data around the regression line for B4
and D1, and a more dispersed cloud for Statfjord S1 and Base Statfjord, as expected.
(fig. 12.5-1)
From the 52 active data remaining, the equation of the trend for each layer is produced in the message area:
Compression stage:
- Initial count of data      = 59
- Final count of active data = 52
+ Trend * ( 0.00708)
+ Trend * ( 0.00090)
+ Trend * ( 0.00109)
+ Trend * ( 0.00110)
(snap. 12.5-1)
The experimental simple and cross-covariances are calculated in an isotropic manner and for a
given number of lags - e.g. 10 lags of 1000 m. Let us click on Edit and define the following set of Basic Structures:
Let us switch ON the flag Automatic Sill Fitting so that Isatoil will compute the set of optimal sills
- for all simple and cross-structures - by minimizing the distance between the experimental covariances and the model.
Note - The matrix of the sills must fulfill conditions for definite positiveness.
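The positive-definiteness condition on the sill matrix can be checked through its eigenvalues; a small sketch:

```python
import numpy as np

def is_valid_sill_matrix(s, tol=1e-10):
    """A matrix of sills defines an admissible coregionalization model
    only if it is symmetric and positive semi-definite."""
    s = np.asarray(s, float)
    return bool(np.allclose(s, s.T) and np.linalg.eigvalsh(s).min() >= -tol)

good = [[2.0, 1.0], [1.0, 1.0]]      # both eigenvalues positive
bad  = [[1.0, 2.0], [2.0, 1.0]]      # one eigenvalue is negative (-1)
```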
By switching ON the flag Printout we will obtain the following report in the Message Window:
for each pair of variables, the array of experimental covariances and the corresponding values in
the model:
.../...
the parameters defining the model - i.e. for each basic structure, the coregionalization matrix,
the coefficients of the linear model and the eigenvectors and eigenvalues:
Number of basic structures = 2

Variance-Covariance matrix :
             Variable 1   Variable 2   Variable 3   Variable 4
Variable 1    0.0351      -0.0112      -0.0168       0.0032
Variable 2   -0.0112       0.0112       0.0055       0.0026
Variable 3   -0.0168       0.0055       0.0080      -0.0014
Variable 4    0.0032       0.0026      -0.0014       0.0020

Variance-Covariance matrix :
             Variable 1   Variable 2   Variable 3   Variable 4
Variable 1    0.0007      -0.0001       0.0004       0.0007
Variable 2   -0.0001       0.0000      -0.0000      -0.0001
Variable 3    0.0004      -0.0000       0.0003       0.0004
Variable 4    0.0007      -0.0001       0.0004       0.0007
(fig. 12.5-1)
Each view corresponds to one pair of variables - e.g. D1 vs B4. Only wells that intercept both layers are retained, and the experimental quantity is then averaged at distances which are multiples of
the lag - up to the maximum number of lags. The experimental curves are represented in black
while the model appears in red. The values posted on the experimental curves correspond to the
numbers of pairs averaged at the given distance.
Note - For better legibility only 6 of the actual 10 views are represented here.
The geostatistical model is stored in a Standard Parameter File named Model_area which will be
automatically recognized when running the Base Case or the simulations later on.
(snap. 12.5-1)
Basic statistics on the estimated surfaces are reported at the end of calculation:
Statistics on the base case results
===================================
By switching ON the flag named Replace estimation with one simulation Isatoil will perform a geostatistical Simulation instead of a Kriging, using the Turning Bands method. The results are stored
in the same variables as for the Base Case and they can be visualized to get a feeling for the amount
of variability of simulation outcomes.
The following parameters are required by the simulation process:
the number of turning bands used in the non-conditional simulation algorithm - Turning
Bands method.
Note - Since the simulated results are stored in the same variables as the Base Case, always make
sure to run the Base Case one more time before moving to the Zonation phase.
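The Turning Bands method reduces a 2D (or 3D) simulation to many 1D simulations along random band directions. The sketch below is a minimal spectral-flavored illustration of the idea (random cosine processes summed over bands), not Isatoil's implementation; the exponential frequency draw and the scale are arbitrary choices:

```python
import numpy as np

def turning_bands_2d(x, y, n_bands=100, scale=1000.0, seed=0):
    """Non-conditional Gaussian field sketch: sum of 1D random cosine
    processes along n_bands random directions."""
    rng = np.random.default_rng(seed)
    z = np.zeros(np.broadcast(x, y).shape)
    for _ in range(n_bands):
        theta = rng.uniform(0.0, np.pi)            # band direction
        u = x * np.cos(theta) + y * np.sin(theta)  # project points on band
        freq = rng.exponential(1.0 / scale)        # random frequency
        phase = rng.uniform(0.0, 2.0 * np.pi)
        z += np.sqrt(2.0) * np.cos(2.0 * np.pi * freq * u + phase)
    return z / np.sqrt(n_bands)                    # unit variance per node

xx, yy = np.meshgrid(np.arange(0, 3600, 50.0), np.arange(0, 3900, 50.0))
field = turning_bands_2d(xx, yy)
```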
(fig. 12.5-1)
We can also visualize the estimated surfaces along a vertical section, by using the Display / Cross
Section application. Similar to the Display / Time Section representation described before, this type
of section is here performed in depth - with a wide range of available options -
(snap. 12.5-1)
Let us draw a section along the segment defined by the two points (X=605, Y=1119) and
(X=3060, Y=2932). We shall activate the truncation of the estimated surfaces by the Limit Surfaces
and also ask to represent the well information - names and intercepts - on top of the section. By setting the Maximum distance to the fence equal to 40 m, this section only shows three wells: 145,
152 & 191.
The following figure shows the cross-section as well as two maps corresponding to the surfaces
Statfjord - S1 and Dunlin - D1.
(fig. 12.5-2)
The influence of the major fault - which is clear on this section - is inherited from the time maps
that have been used as external drifts.
12.5.2 Zonation
For the sake of simplicity in this Case Study, the zonation has been restricted to the Lower Brent
unit only. Moreover external drift will not be used during the zonation.
Lower Brent - B6
Dunlin - D1 which is the bottom surface - i.e. the top of the next layer -
The top and bottom surfaces that were estimated during the layering stage are now considered as known input data. By adding the bottom surface as an extra constraint - through an original Collocated Cokriging method - the Zonation ensures that the sum of the thicknesses of the four zones will match the total thickness of the unit.
The Geometry / Geological Zonation / Model application will be used to compute experimental simple and cross-covariances for a given number of lags (e.g. 10 lags of 500 m). Let us click on Edit and define the set of Basic Structures for the model.
Let us switch ON the flag Automatic Sill Fitting so that Isatoil will compute the set of optimal sills
- for all simple and cross-covariances - by minimizing the distance between the experimental
covariances and the model.
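The principle of this automatic sill fitting can be sketched as an ordinary least-squares problem. This is a simplified stand-in: the actual Isatoil fit handles all simple and cross-covariances jointly and must keep the coregionalization matrices positive definite. The basic structures and sill values below are made up for the illustration.

```python
import numpy as np

# Hypothetical unit-sill basic structures evaluated at the lag distances
def nugget(h):
    return np.where(h == 0.0, 1.0, 0.0)

def spherical(h, a=2000.0):
    r = np.minimum(h / a, 1.0)
    return 1.0 - 1.5 * r + 0.5 * r ** 3

lags = np.arange(0.0, 5001.0, 500.0)             # e.g. 10 lags of 500 m
true_sills = np.array([0.3, 0.7])                # made-up "experimental" sills
exp_cov = true_sills[0] * nugget(lags) + true_sills[1] * spherical(lags)

# Least-squares fit: minimize the distance between the experimental
# covariance and the linear combination of the basic structures
G = np.column_stack([nugget(lags), spherical(lags)])
sills, *_ = np.linalg.lstsq(G, exp_cov, rcond=None)
print(sills)   # recovers [0.3, 0.7] on this noise-free example
```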
The following statistics - truncated here - are reported when the model is established (covariance coefficients):

             Variable 2   Variable 3   Variable 4
Variable 1     -10.6434      -2.2063      13.4752
Variable 2      75.1475     -75.7311       3.6356
Variable 3     -75.7311      89.6306     -18.2533
Variable 4       3.6356     -18.2533      45.1714
The model is automatically saved in a Standard Parameter File named Model_1_2 which will be
automatically recognized when running the Base Case or the simulations later on.
The following basic statistics are reported for the three estimated zones - based on 64 active data out of 77 intercepts:

Name               Minimum   Maximum
...                   2365      2727
...                   2380      2779
Lower Brent - B6      2380      2794
(fig. 12.5-1)
This section does not represent the fault surfaces (as interpolated within the package for chopping the zones), due to the small extension of the polygon fault in the vicinity of the cross-section segment.
- the number of turning bands, which is the essential parameter of the simulation technique used: 100.
The following figure shows the map of the thickness between the top surfaces of Lower Brent - B5A and Lower Brent - B6 (isopach), for the simulated version (on the left) and the estimated version (on the right). Although the spread of values is different (up to 84.3 m for the simulation and 71.5 m for the estimation, using the check button in the display window), the same color scale is used (lying between 50 m and 85 m). Any thickness smaller than 50 m is left blank: this is the case for the fault traces, for example.
(fig. 12.5-2)
- Upper Brent - B1
- Lower Brent - B4
Since the two petrophysical variables are assumed to be independent from each other - and also from one unit to another - we must study 6 different variables separately. Therefore, the same process must be performed for each of them, although it is only described once here, for the Net to Gross ratio of Lower Brent - B5B.
- defining the authorized interval for the variable: the Net to Gross variable will be defined between 0 and 1, and the porosity between 0 and 0.4. This definition is essential to prevent the back-transformed results from reaching unexpected values.
- defining additional lower and upper control points, which modify the experimental cumulative density function for extreme values: this option is necessary when working with a reduced number of active data; however it will not be used in this case study.
- choosing the count of Hermite polynomials for the fitted anamorphosis function (set to 30)
- displaying the experimental and theoretical probability density function and/or bar histogram (the count of classes is set to 20)
(snap. 12.6-1)
The next paragraph informs us of the quality of the normal score transform, as it produces:
- statistics on the gaussian transformed data (optimally, the mean should be 0 and the variance 1)
- statistics on the difference between the initial data values and their back-and-forth transformed values
Statistics on Z-Zth:
Mean
= -0.005603
Variance = 0.001493
Std. Dev. = 0.038639
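These quality checks can be reproduced with a minimal rank-based normal score transform. Isatoil fits a Hermite-polynomial anamorphosis; the sketch below simply interpolates the empirical distribution instead, and the porosity-like data are synthetic.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(1)
z = rng.beta(2.0, 5.0, size=200) * 0.4           # synthetic values in [0, 0.4]

# Normal score transform: map empirical ranks to Gaussian quantiles
order = np.argsort(z)
ranks = np.empty_like(order)
ranks[order] = np.arange(z.size)
p = (ranks + 0.5) / z.size
y = np.array([nd.inv_cdf(pi) for pi in p])       # gaussian-transformed data

print(y.mean(), y.var())                         # close to 0 and 1

# Back transform through the empirical anamorphosis Z = phi(Y)
z_back = np.interp(y, np.sort(y), np.sort(z))
diff = z - z_back
print(diff.mean(), diff.std())                   # quality of the back-and-forth transform
```

With a rank-based transform the back-and-forth error is negligible by construction; with a truncated Hermite expansion (as in Isatoil) the residual statistics on Z - Zth, shown above, measure the quality of the fit.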
Finally, the next figure shows the comparison between the experimental (in blue) and theoretical (in black) probability density functions (on the left) and bar histograms (on the right).
(fig. 12.6-1)
The anamorphosis model (for each petrophysical variable and for each unit) is automatically saved
in a Standard Parameter File whose name follows the naming convention (Psi_Poro_1_2_2 or
Psi_Net_1_2_2 for example).
If the printout option is switched on, the (normalized) coefficients of the different Hermite polynomials are printed out:
(fig. 12.6-2)
Whenever a variable has been normal score transformed, two individual models must be fitted: one for the raw variable and one for the gaussian transformed variable.
The only difference with the other geostatistical modelling panels is that there is no restriction to strict stationarity: variograms are used instead of covariances, and non-bounded theoretical models - e.g. a linear model - are authorized.
The following table summarizes the structures that have been fitted automatically, based on experimental quantities usually calculated with 10 lags of 300 m each:

Unit   Variable              Type          Sill     Range
B1     N/G (Raw)             Nugget        0.0022   No
B1     N/G (Gaussian)        Nugget        0.8048   No
B1     Porosity (Raw)        Exponential   0.0004   2000 m
B1     Porosity (Gaussian)   Nugget        0.6754   No
                             Exponential   0.2827   2000 m
B4     N/G (Raw)             Nugget        0.0008   No
                             Spherical     0.0003   1000 m
B4     Porosity (Raw)        Nugget        0.0002   No
                             Linear        0.0006   10000 m
B5B    N/G (Raw)             Spherical     0.0764   2000 m
B5B    N/G (Gaussian)        Nugget        0.1426   No
                             Spherical     1.1238   2000 m
B5B    Porosity (Raw)        Spherical     0.0006   2000 m
B5B    Porosity (Gaussian)   Spherical     1.1927   2000 m
Unit   Variable   Minimum   Maximum
B1     Porosity     0.251     0.302
B4     Porosity     0.290     0.298
B5B    Porosity     0.193     0.266
B1     N/G          0.854     0.854
B4     N/G          0.952     0.983
B5B    N/G          0.037     0.931
(fig. 12.6-3)
12.7 Volumetrics
This section introduces the calculation of accurate volumes based on the results of the geostatistical estimation and/or simulations. There are several levels of detail in the reported volumes, since the volumetrics algorithm takes the following parameters into account:
- volumes are calculated separately for Oil and Gas, above the relevant contacts,
- volumes are computed either as Gross Rock or Oil in Place - if petrophysics is used -,
(table of the X and Y coordinates, in meters, of the vertices of the three areal polygons)
The polygon coordinates are expressed in meters in this example. A polygon does not need to be closed, since Isatoil will automatically close it if necessary.
The following illustration has been obtained with Isatis. The polygons have been loaded from the ASCII file named polzone.hd - which contains the proper header organization - and have been displayed with Display / Polygons on top of a time map of Dunlin - D1.
(fig. 12.7-1)
Note - The formats of polygon files for Isatis and Isatoil are different. It is not necessary to load
the polygons inside the Isatis database unless you wish to perform a graphic representation such as
above.
(snap. 12.7-1)
- for gas contents: the lower contact is the GOC and there is no upper contact
- for oil contents: the upper contact is the GOC and the lower contact is the OWC
- of the product of the thickness by the petrophysical parameters, for the in-place volume
All these operations correspond to non-linear operations (as soon as contacts are involved). A skilled geostatistician knows that applying a non-linear operation to the result of a linear process (such as kriging) leads to biased estimates. It is recommended to run simulations instead.
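A toy numeric check of this bias (synthetic numbers, not from the case study): with a column height max(contact - depth, 0), truncating the kriged (mean) surface gives zero volume, while truncating each simulated outcome and averaging does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_sim = 1000, 2000
mean_depth, sigma = 2600.0, 10.0
contact = 2600.0                                  # e.g. an OWC at 2600 m

# Simulated depth outcomes per cell (kriging mean = 2600, st. dev. = 10)
sims = rng.normal(mean_depth, sigma, size=(n_sim, n_cells))

def column_height(depth):
    """Reservoir column above the contact (non-linear in depth)."""
    return np.maximum(contact - depth, 0.0)

vol_kriging = column_height(np.full(n_cells, mean_depth)).sum()   # exactly 0
vol_sims = column_height(sims).sum(axis=1).mean()                 # clearly positive
print(vol_kriging, vol_sims)
```

Truncation and averaging do not commute: the volume derived from the kriged surface is not the average of the volumes derived from the outcomes, which is the bias this manual warns about.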
Each simulation produces a realistic outcome and therefore a plausible volume result. Drawing several simulations will then lead to the distribution of possible volumes, from which any type of statistics can be derived.
The general principle consists of calculating one or several block models and deriving the different volumes (per polygon, per layer). A block model is a set of layers and petrophysical variables, all these surfaces (either geometrical or petrophysical) being calculated consistently. Each block model is the result of six nested elementary operations.
Each operation has two possible statuses, according to the flag Already calculated:
- ON: it must not be performed, and the resulting surface(s) should already exist in the grid file with a name which follows the naming convention.
- OFF: it must be performed during the Volumetrics procedure. The resulting surface(s) are usually not stored in the grid file (see the Simulation Parameters panel for the exception).
The surface(s) (either calculated or read from the grid file) can be the result of one of the two following procedures:
- a base case estimation (kriging)
- a conditional simulation
Note - In particular, this allows the user to derive volumes from kriged estimates, despite the bias of the result.
- the petrophysical phase, for both the Porosity and the Net to Gross ratio.
Therefore we can switch ON the Already calculated flags for all the phases (including the Petrophysical steps), together with the Base Case option.
This estimation will serve as a reference; therefore the values of the GOC and OWC for each unit are set to constant values in the Master File.
Isatoil returns the following figures - expressed in 10^6 m^3 - per polygon and per zone:
- GRV is the gross rock volume, which only depends on the reservoir geometry
- IP is the volume in place, obtained as the product of the geometry, the petrophysical variables and the volume correction factor
Layer   Polygon   Gas GRV   Gas IP    Oil GRV   Oil IP
B1      1          110.78   1987.26      7.65     2.28
B1      2          155.31   2716.52      0.12     0.04
B1      3           67.32   1159.69      0.40     0.12
B4      1            0.        0.       27.05     6.52
B4      2            0.        0.       50.04    12.09
B4      3            0.        0.       29.27     7.03
B5B     1            0.16      1.92      1.78     0.35
B5B     2            7.08     90.73      6.41     1.59
B5B     3            3.43     47.93      2.60     0.53
- per polygon and per layer: regrouping all the zones of a layer
- per area: regrouping all the zones and layers of the same area
Note - When the results of several polygons are regrouped, the program simply adds the results of each individual polygon, without checking that the polygons do not overlap.
Gas GRV   Gas IP    Oil GRV   Oil IP
 344.07   6004.04    125.32    30.54
Layer             Type           GOC (m)   OWC (m)
Upper Brent B1    Top Layering   2570      T(2600,-5,+2)
Lower Brent B4    Layer          No        T(2600,-3,+2)
Lower Brent B5B   Zone           2570      U(2598,2602)
Where T(2600,-5,+2) means a triangular law with a minimum of 2595, a maximum of 2602 and a
mode of 2600.
For each volume calculation, the value of the contacts is drawn at random according to the law as
defined in the Master File panel (for each layer, each fluid and each index). These random numbers
use a random number generator which depends on the seed number that can be defined in the Simulation Parameters panel (the other parameters of the panel will be discussed later): changing the
seed number will alter the following Volumetrics results, even when based on the base case process.
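The random drawing of the contacts can be sketched with the Python standard library. The variable names and seed are illustrative; note that `random.triangular` takes (low, high, mode), and the seed plays the role of the seed number in the Simulation Parameters panel.

```python
import random

rng = random.Random(431)                         # illustrative seed number

# T(2600,-5,+2): triangular law, min 2595, max 2602, mode 2600
owc_b1 = rng.triangular(2595.0, 2602.0, 2600.0)
# U(2598,2602): uniform law between 2598 and 2602
owc_b5b = rng.uniform(2598.0, 2602.0)
goc_b1 = 2570.0                                  # constant contact

print(owc_b1, owc_b5b)
# Changing the seed changes these draws, and hence the Volumetrics results.
```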
(snap. 12.7-2)
When selecting the Verbose Output option in the Master File panel, the volumetrics procedure
produces the values of the contacts:
Random generation of contacts for layer Upper Brent - B1
GOC : Index-1 = 2570.000000  Index-2 = 0.000000  Index-3 = 0.000000
OWC : Index-1 = 2599.918702  Index-2 = 0.000000  Index-3 = 0.000000
Random generation of contacts for layer Lower Brent - B4
OWC : Index-1 = 2601.204858  Index-2 = 0.000000  Index-3 = 0.000000
Random generation of contacts for layer Lower Brent - B5B
GOC : Index-1 = 2570.000000  Index-2 = 0.000000  Index-3 = 0.000000
OWC : Index-1 = 2600.848190  Index-2 = 0.000000  Index-3 = 0.000000
The global results of the base case are compared with the reference values obtained with the constant contacts of the previous paragraph:

                       Gas GRV   Gas IP    Oil GRV   Oil IP
Constant contacts       344.07   6004.04    125.32    30.54
Randomized contacts     344.07   6004.04    126.05    30.71
- Limit surfaces
- Layering
- Zonation
- Porosity
When the flag Already calculated is switched ON, Isatoil reads the results from the grid file using the relevant naming convention. For example, the depth corresponding to the zone (3) of the layer (2) inside area (1) must be stored under the corresponding conventional name.
When the flag Already calculated is switched OFF, the base-case or the simulation outcomes are
computed at RUN time.
When simulations have been selected for a given step, the user can specify the number of outcomes
that will be calculated or read from the grid file.
- the number of turning bands that must be used in order to generate an outcome which correctly reproduces the variability defined in the geostatistical model. On one hand, this number should be large for a good quality; on the other hand, it should not be too large, as the time consumption of each simulation is directly proportional to the number of bands. In this case study, this value is set to 500.
- should we match or combine the simulations? When two nested phases have to be simulated with 3 outcomes each, this flag tells the system whether the final count of scenarios should be 3 (match option) or 9 (combine option). When match is required, the number of outcomes obtained is the smallest number of outcomes defined for the various simulation steps. When combine is selected, the final number of outcomes is the product of the individual numbers of outcomes.
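The difference between the two options can be sketched as follows (the outcome labels are hypothetical):

```python
from itertools import product

layering = ["L1", "L2", "L3"]     # 3 layering outcomes
zonation = ["Z1", "Z2", "Z3"]     # 3 zonation outcomes

# match: outcome i of one step is paired with outcome i of the other
matched = list(zip(layering, zonation))
print(len(matched))               # 3

# combine: every outcome of one step is paired with every outcome of the other
combined = list(product(layering, zonation))
print(len(combined))              # 9
```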
            Gas GRV   Gas IP    Oil GRV   Oil IP
Base Case    344.07   6004.04    126.05    30.71
Mean         341.11   6260.19    124.85    30.63
St. dev.       7.79    247.35      5.94     2.22
P90          332.30   5981.47    117.71    28.72
P50          342.30   6418.19    126.61    30.94
P10          356.20   6584.76    133.03    34.26
            Gas GRV   Gas IP    Oil GRV   Oil IP
Base Case    344.07   6004.04    126.05    30.71
Mean         340.47   6352.31    126.35    31.78
St. dev.       6.33    158.55      7.40     1.86
P90          334.33   6152.96    116.72    29.38
P50          338.16   6345.31    127.17    31.94
P10          351.66   6561.83    135.32    34.09
- the volume obtained using the base case is not necessarily close to the one (say the median, or P50) obtained with simulations. This is due to the bias mentioned before. In the case of the Gas IP in particular, the difference between the P50 (6345) and the base case (6004) is almost twice as large as the standard deviation (158).
- the gain in accuracy has one severe drawback: CPU time consumption. As a matter of fact, the volumes obtained on 625 simulation outcomes cost much more than the single volume obtained using the base case.
In order to avoid running several times the simulations for a given configuration of parameters, the
results of the RUN can be stored in some Histogram file (e.g. histo). The contents of this file can
be used in the Volumetrics / Histogram application.
Note - Although this file is in ASCII format, it can only be interpreted properly by Isatoil itself. It is useless, and not recommended, to try reading these figures with other software.
- the polygon number: 1, 2 or 3, since 3 areal polygons have been used
We can select the type of the volume to be displayed among the following options:
- Gas in Place
- Oil in Place
Finally, in our case, we get 625 possible consistent block systems: for each block system, the program has calculated the volumes of 21 different items, for 4 different materials.
The Histogram utility enables the user to select one or several item(s) of interest and to extract the
values of the 625 realizations. When several items have been selected (say Polygon 1 for Upper
Brent - B1 and Polygon 2 for Lower Brent B5B), the value for each realization is the sum of the two
individual volumes.
(snap. 12.7-1)
This first illustration shows the volumes obtained on Polygon 1 in the unit Upper Brent - B1.
(fig. 12.7-1)
The Gas GRV figure clearly shows a step function with 5 values. This emphasizes that the outcomes result from the combination of the 5 simulated outcomes of the geometry with the constant GOC contact.
Similarly, the Oil GRV figure shows several step functions, with edges not as sharp as in the Gas GRV figure. This is due to the fact that the OWC contact of this layer is randomized.
In the Gas IP figure, the outcomes result from the combination of the geometrical outcomes with the petrophysical ones.
For the sake of the demonstration, we also show the Gas GRV figure for the Polygon 1 in the layer
Lower Brent - B5B. The figure clearly shows 25 different volumes this time, obtained from the
combination of 5 outcomes from the Layering stage and 5 outcomes from the Zonation stage.
(fig. 12.7-2)
The last illustration consists of cumulating all the volumes over all the units and all the polygons, so
as to provide one value for each type of material. This compares to the statistics given in the previous paragraph.
(fig. 12.7-3)
(fig. 12.7-4)
This is particularly interesting as it shows the bias of the volume established on the base case: in the case of the Gas in Place (lower left), this volume (6004) is far from the mean simulated volume.
(snap. 12.7-1)
This procedure offers the possibility of defining several calculations that will systematically be performed on all the units of the block system, regardless of their contents in Gas and Oil.
The first set of maps concerns mean and dispersion standard deviation maps, calculated for:
- the Depth of the Top Reservoir: the Reservoir is only defined where either gas or oil is present
- the Gas Reservoir Thickness: for each grid cell, this represents the height of the column within the Gas Reservoir
- the Gas Pore Volume: for each cell, this represents the height within the Gas Reservoir scaled by the petrophysical variables
The user can also ask for Probability Maps of the Reservoir Thickness. Here again, the Reservoir is only defined where either Gas or Oil is present. When the flag is switched on, you must use the Definition button to specify the characteristics of the probability maps.
The probability map gives, for each grid cell, the probability that the reservoir thickness is larger than a given threshold. For example, the threshold 0 m gives the probability that the reservoir exists. You may define up to 5 thresholds.
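With the outcomes stacked along a realization axis, such a probability map reduces to a mean of indicator values. A sketch with synthetic thicknesses (the grid size and statistics are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n_out, nx, ny = 100, 40, 40
# Synthetic reservoir thickness per outcome and per cell (clipped at 0)
thickness = np.maximum(rng.normal(5.0, 8.0, size=(n_out, nx, ny)), 0.0)

def probability_map(outcomes, threshold):
    """P(reservoir thickness > threshold) per grid cell."""
    return (outcomes > threshold).mean(axis=0)

p_exists = probability_map(thickness, 0.0)   # probability that the reservoir exists
p_10m = probability_map(thickness, 10.0)     # probability of more than 10 m
print(p_exists.shape)
```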
(snap. 12.7-2)
The user can also ask for Quantile Maps of the Depth of the Top Reservoir. Here again, the Reservoir is only defined where either Gas or Oil is present. When the flag is switched on, you must use the Definition button to specify the characteristics of the quantile maps.
For a grid cell located within the reservoir, the quantile map gives the depth of the top which corresponds to a given quantile threshold (defined in percent). For example, the threshold 0% gives the smallest depth for the top reservoir. You may define up to 5 thresholds.
(snap. 12.7-3)
Note - None of these maps can be considered as a simulation outcome - they do not honor the
geostatistical structure of the variable - therefore any volume calculation based on them would be
biased.
These special maps obey the following naming convention. Their generic name is of the form:
Code-code_number : variable_type
where:
- code_number stands for the designation code of a unit, as defined in the Master File - e.g. 122 for the Lower Brent - B5B -
- variable_type indicates the type of calculation that has been performed, chosen among the following list:
  - Mean Depth
  - Proba of thickness larger than threshold: probability that the thickness of the Reservoir (Gas + Oil) is larger than the given threshold value
(fig. 12.7-1)
(fig. 12.7-2)
The next figures compare the quantile maps (for quantiles 10%, 50% and 90%) and the mean map. The calculations are slightly different for quantile and for mean maps. If we consider N outcomes and concentrate on a given grid node:
- quantile: the N values of the depth are considered (when there is no reservoir, the value is set to a non-value). These values are then sorted, and the p-quantile corresponds to the value ranked p*N/100. If the result corresponds to a non-value, then the reservoir does not exist in the quantile map. Therefore, when the quantile increases, the depth of the reservoir top increases and, as the contact remains unchanged, the reservoir extension shrinks.
(fig. 12.7-3)
(fig. 12.7-4)
(fig. 12.7-5)
- mean: among the N values, only those where the reservoir exists are kept and averaged.
(fig. 12.7-6)
(fig. 12.7-7)
(fig. 12.7-8)
(fig. 12.7-9)
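The ranking rule described above can be sketched per grid node; NaN plays the role of the non-value, and the sample depths are made up.

```python
import numpy as np

def quantile_depth(depths, p):
    """Depth of the top at the p-quantile (in percent) for one grid node.
    `depths` holds one value per outcome, NaN where the reservoir
    does not exist in that outcome."""
    n = depths.size
    ranked = np.sort(depths)               # NaN sorts to the end
    value = ranked[min(int(p * n / 100), n - 1)]
    return value                           # NaN => no reservoir in the quantile map

node = np.array([2601.0, 2598.5, np.nan, 2599.2, 2600.1])
print(quantile_depth(node, 0))             # smallest depth for the top reservoir
print(quantile_depth(node, 90))            # falls on the NaN: reservoir absent
```

Because the non-values sort last, increasing the quantile eventually hits a NaN, which is exactly why the reservoir extension shrinks on high-quantile maps.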
- at the closest node to each one of the intercepts with layers and zones
This is the reason why the resulting simulation outcome of the depth of the top of the Lower Brent - B5B unit is difficult to interpret:
(fig. 12.7-1)
12.8 Tools
Isatoil offers several procedures for checking the results and understanding the calculations. A
quick review of these tools will be given in this section.
Most of these tools require the definition of a particular point that will serve as a target: this point
can be picked from a graphic representation. We will arbitrarily select the following target point:
X=780m Y=2349m
(snap. 12.8-1)
Name              Time       Depth      Porosity   N/G
BCU                          2384.084
ERODE 1                      2307.114
ERODE 2                      1187.447
Upper Brent B1    2291.331   2399.005    0.295     0.854
Lower Brent B4    2370.886   2510.750    0.293     0.964
Lower Brent B5A              2545.005
Lower Brent B5B              2599.685    0.250     0.703
Lower Brent B6               2614.688
Dunlin            2460.001   2619.640
Statfjord         2629.971   2899.720
Base Statfjord    2769.980   3137.597
Trend(s) = 2370.886
- the time values are only defined for layers (not for zones)
- the depth variables are defined everywhere (in m); they do not take into account the order relationships between the layers: this is only performed at the output stage
- the porosity and Net to Gross ratio are only calculated for the units where at least one contact is defined
- the trends (for the porosity and for the normal transform of the porosity) are defined for the units where the porosity variable requires an external drift
An additional flag allows you to display the Simulated Results. When using this option after the last Volumetrics procedure (running simulations and storing the outcomes in macro variables), the 5 simulated outcomes are listed for the calculated variables (Depth (layers and zones), Porosity, Net/Gross).
(snap. 12.8-2)
Estimate #1 = 1.404 (Lower Brent - B4)
Estimate #2 = 1.222 (Dunlin)
Estimate #3 = 1.648 (Statfjord - S1)
Estimate #4 = 1.699 (Base Statfjord)
As requested in the Master File, the calculations for the layering stage are performed in terms of interval velocity; hence the values of the estimations for the four intervals of the layering:
Estimate #1 = 32.899 (Lower Brent - B5A)
Estimate #2 = 55.049 (Lower Brent - B5B)
Estimate #3 = 15.252
Estimate #4 =  1.781
Sum of estimates = 104.980
Here the results correspond to the thicknesses of the zones. The calculations are performed in two steps:
- cokriging of the thickness of each zone
- correction in order to account for the total thickness of the layer (collocation correction)
X = 780.00m   Y = 2349.00m
Estimate #1 = 0.250 (Lower Brent - B5B)
Estimate #1 = 0.703 (Lower Brent - B5B)
.../...
Rank  Name     X          Y         Initial    Data    Pr1   Pr2   Pr3   Pr4   Trend1  Trend2  Trend3  Trend4
1       3   1965.27m    649.64m   2435.310   1.057   1.00  0.00  0.00  0.00   2313       0       0       0
2       3   1965.27m    649.64m   2544.110   1.178   0.44  0.56  0.00  0.00   2313    2398       0       0
3       3   1965.27m    649.64m   2813.410   1.373   0.20  0.26  0.53  0.00   2313    2398    2573       0
4       4   2408.25m    422.11m   2927.000   1.362   0.13  0.17  0.33  0.37   2279    2353    2491    2649
5     113   1668.08m    070.14m   2772.050   1.341   0.17  0.27  0.56  0.00   2304    2389    2566       0
6     119    827.59m    060.44m   2498.780   1.412   1.00  0.00  0.00  0.00   2373       0       0       0
7     120   1162.99m    212.98m   2456.170   1.183   1.00  0.00  0.00  0.00   2352       0       0       0
8     120   1827.41m    868.45m   2483.780   1.033   0.40  0.60  0.00  0.00   2299    2379       0       0
Estimate #1 . . . Estimate #4
(the full listing - for each intercept: the four estimated values, the Pr* weights and the Trend* values - is truncated here)
We recall that the Layering is performed by a cokriging procedure using 4 variables (layers) simultaneously. The listing contains the following information:
- Name is the name of the well which provided this intercept information. This information is not available in the case of the Petrophysical variables.
- Initial is the depth value of the intercept, as read from the Well File.
- Data is the value which is actually entered in the cokriging system: in the case of the Layering, this corresponds to an apparent velocity value calculated from the Top Layering surface down to the surface which contains the intercept.
- Pr* give the weighting coefficients which denote the percentage of time spent in each layer. Note that a layer located below the intercept surface corresponds to a zero weight.
- Trend* are the values that are used as external drift for each variable.
The Pr* weight indicates whether a layer (or a zone) lies between the intercept and the surface that serves as a reference; otherwise it is set to 0. If the procedure works in depth, this weight is simply an indicator (0 or 1); if it works in velocity, the weight corresponds to the percentage (in time) that the layer thickness represents in the total distance from the intercept to the reference surface: the weights add up to 1. This weight is not available in the case of petrophysical variables.
The Trend* values are only displayed if the variable(s) to be processed require external drift(s).
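When the procedure works in velocity, the Pr* weights can be sketched as travel-time fractions; the times below are made-up numbers, not from the listing.

```python
# Hypothetical travel times (ms) spent in each layer from the Top
# Layering surface down to an intercept located in the third layer
time_in_layer = [40.0, 52.0, 106.0, 0.0]   # zero weight below the intercept surface

total = sum(time_in_layer)
pr = [t / total for t in time_in_layer]    # percentage of time per layer
print([round(w, 2) for w in pr])           # [0.2, 0.26, 0.54, 0.0], sums to 1
```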
(snap. 12.8-1)
The following printout is obtained when checking the base case results on the Lower Brent - B5B layer (Zone = 2); the back-interpolated value is listed for each intercept:

2525.179   2469.974   2515.955   2496.956   2546.401
2608.123   2542.129   2587.588   2516.301   2604.150
N/A        2508.816   2508.670   2515.207   2546.405
2520.635   2594.410   2451.716   2458.152   N/A
where:
- Data refers to the depth information read from the Well File
The back-interpolated value is not defined (N/A) when at least one of the grid nodes surrounding the intercept location is not defined: this is the case for the intercept located at (X=464.31m; Y=117.23m), which lies outside the grid.
(snap. 12.8-2)
The following printout is obtained when cross-validating the porosity information of the Lower Brent - B5B layer:
.../...
Lower brent B5B (13 lines, one per porosity datum)
where:
- True Value is the value read from the data file (here the Petrophysical Well File), possibly converted into velocity in the Layering case
- Layer Name gives the identification of the information: this is mainly relevant in the multivariate case (Layering or Zonation)
The cleaning procedure can delete:
- the variables corresponding to the base case results - stored in the Grid File -
- the simulation outcomes that might have been stored in the Grid File by the Volumetrics procedure
- the Standard Parameter Files containing the models for the covariances and the distributions (anamorphosis)
The use of this procedure ensures that only variables resulting from calculations are deleted. In particular, it does not delete the depth variables corresponding to the Top Layer or the Limit surfaces, or any surface which is not calculated by Isatoil, as specified in the Master File.
The procedure offers the possibility either to clean all the results (for each of the items mentioned above) or to restrict the deletion to the ones relative to a given unit.
Use the Check button to check the number of files which will be deleted before actually cleaning them!
(snap. 12.8-3)
The following printout is obtained when cleaning all the files relative to the Lower Brent - B5B surface (all three items selected):
This case study is based on a public data set used by Amoco during the
80s. The dataset has been kindly provided by Richard Chambers and
Jeffrey Yarus.
It demonstrates the capabilities of Isatis in Reservoir Characterization
using lithofacies and porosity simulations. Volumetrics calculations
are performed on 3D models.
Last update: 2014
13.1 Introduction
3D earth modeling is a key issue for reservoir characterization. The uncertainties on the reservoir structure, the contacts and the rock properties may be assessed through simulations that preserve the geological features of the reservoir. In this case study, one purpose is the optimal use of the available data: the wells, with information on key horizon markers, lithofacies and porosity, and the facies proportions.
The reservoir is located in the North Cowden area (Texas). There are three main facies (siltstone, anhydrite and dolomite). The carbonates were deposited during high sea-level stands and the siltstone during low stands, when the carbonate platform was exposed to sub-aerial conditions. The silt is actually an eolian sediment sourced from the northwest; it was reworked into sheet-like deposits during the subsequent sea-level rise.
In this case study, several geostatistical methods are performed, from Universal Kriging to facies simulations (Plurigaussian Simulation) and continuous simulations such as Turning Bands.
The main steps of the workflow are:
- Simulations of the surfaces delimiting the top and bottom of the reservoir, using the information from the wells.
- Facies simulations (TPGS, for Truncated Plurigaussian Simulation). This requires the building of a stratigraphic grid (flattening), within which variogram calculations and simulations are performed. The 3D matrix of vertical proportions (VPC) is computed. A 2D proportion map computed from a seismic attribute is used to constrain the 3D matrix of proportions.
- 3D simulations of porosity, achieved independently for each facies; a cookie cutting procedure constrained by the facies simulations then provides the final porosity simulations.
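The truncation at the heart of the plurigaussian facies simulation can be sketched as follows. For brevity the two Gaussian functions are white noise here (a real TPGS simulates spatially correlated fields and derives the thresholds from the proportions); the thresholds and the lithotype rule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
g1 = rng.standard_normal((50, 50))          # first Gaussian random function
g2 = rng.standard_normal((50, 50))          # second Gaussian random function

# Illustrative lithotype rule: g1 separates siltstone from the carbonates,
# g2 splits the carbonates into dolomite and anhydrite
facies = np.where(g1 < -0.5, "siltstone",
                  np.where(g2 < 0.3, "dolomite", "anhydrite"))
print({f: int((facies == f).sum()) for f in np.unique(facies)})
```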
Several types of simulations are used (surface simulations, TPGS, 3D porosity simulations). Therefore, different models are available. To evaluate these models, volumetric calculations based on the simulations of the different parameters provide stochastic distributions of volumes.
In conclusion, this case study explores some of the possibilities that Isatis offers to improve the reservoir characterization.
1. Surfaces Simulations: Simulation of the top and bottom surfaces of the reservoir.
The top and bottom of the reservoir are stored for each well. The purpose is to interpolate or simulate the top and bottom surfaces of the reservoir from the well data. Finally, the distribution of the GRV is derived using these surfaces and a constant contact.
2. Discretization and Flattening: Transformation from the real space to the stratigraphic space.
This step is crucial, as it determines the lateral continuity of the facies, as expected from a sedimentary deposition. A flat working grid is created with a resolution of 50m x 50m x 1m.
3. Computing Proportion Curves: Computing the proportion curves from the well data over the working grid.
The vertical proportion curves are calculated from the wells discretized in the stratigraphic space. Then a 3D matrix of proportions is created for further use in the SIS and Plurigaussian Simulations. Finally, the computation of the proportions is performed using a 2D proportions constraint: a kriging of the mean proportion (siltstone). This proportion constraint was estimated by external-drift kriging, the drift being a map of acoustic impedance (AI) extracted from the filtered seismic cube. This proportion constraint will be used for the PGS.
4. Lithotype Simulations: simulations of the lithotypes with PGS.
This step aims at deriving the variogram models of the two Gaussian random functions that are simulated and truncated to obtain the simulated lithotypes. The thresholds applied at the different levels follow the so-called lithotype rule. Plurigaussian simulations are then performed and transferred to the structural grid.
5. 3D Porosity Simulation: simulation of porosity with Turning Bands and cookie cutting.
The porosity is simulated with Turning Bands for each lithotype, then conditioned by the lithotype simulations (cookie cutting). The cookie-cutting method combines the facies and porosity simulations: the porosity is simulated at every node of the grid located between the top and the bottom of the reservoir layer as if all these nodes belonged to the facies in question, and in the final model only the porosity of the facies actually simulated at each node is kept. Finally the HPCV is computed using Volumetrics.
6. 3D Volumetrics: volumetrics of the 3D simulations.
The HPCV is computed using Volumetrics. The results from the previous steps (top, bottom and porosity simulations) are used together to compute the volumetrics. In addition, the OWC depth is assumed to be known.
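The volumetric combination of the three ingredients can be sketched as follows. This is a simplified illustration under stated assumptions (one simulation per grid column, a 50 m x 50 m cell, a sharp oil-water contact); the function and variable names are ours, not the Isatis Volumetrics implementation.

```python
import numpy as np

# Sketch of the volumetric step: for each grid column, only the part of the
# unit above the OWC contributes, and its thickness is weighted by porosity.
def hpcv(top, bottom, porosity, owc, cell_area=50.0 * 50.0):
    top = np.asarray(top, float)
    bottom = np.asarray(bottom, float)
    # The column holds hydrocarbons only between the top and max(bottom, OWC).
    effective_base = np.maximum(bottom, owc)
    thickness = np.clip(top - effective_base, 0.0, None)
    return float(np.sum(thickness * np.asarray(porosity, float) * cell_area))

# Two columns: the second lies entirely below the OWC and contributes nothing.
vol = hpcv(top=[-1330.0, -1360.0], bottom=[-1350.0, -1380.0],
           porosity=[0.2, 0.25], owc=-1355.0)
```

Running this for every simulation index yields the stochastic distribution of volumes mentioned above.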
The first file, wells.hd, contains the data available at the wells: depth, porosity and the reservoir selection (Sel Unit S2).
The second, surfaces.hd, contains the surfaces delimiting the reservoir on a grid with a resolution of 50 m x 50 m.
The third, 3D grid.hd, contains a seismic acoustic impedance cube in a grid with a resolution of 50 m x 50 m x 1 m.
These files are available in the Isatis installation directory under the Datasets/Reservoir_characterization sub-directory. Import them into Isatis using the ASCII file import (File/Import/ASCII); each ASCII file already contains a header. Enter a directory name and a file name for each imported file:
For the wells, Directory: 3D wells; File: 3D wells; Header: 3D wells header (snap. 13.3-1).
For the structural grid, Directory: 3D Grid; File: Structural Grid (snap. 13.3-3).
(snap. 13.3-1)
(snap. 13.3-2)
(snap. 13.3-3)
(snap. 13.4-1)
(snap. 13.4-2)
(fig. 13.4-1: crossplot of Minimum Z versus X-UTM at the wells; rho = -0.907)
(snap. 13.4-3)
(snap. 13.4-4: crossplot of Maximum Z versus X-UTM at the wells; rho = -0.880)
The crossplots of the top and bottom surfaces at the wells against X show the existence of a trend in X (East-West).
Compute the omnidirectional variogram of Minimum Z and then the variogram of Maximum Z.
The experimental variograms are both computed with 12 lags of 125 m.
(snap. 13.4-5)
(fig. 13.4-2: experimental variograms of Maximum Z and Minimum Z, with the number of pairs per lag)
The variograms of Minimum Z and Maximum Z also show a strong non-stationarity (fig. 13.4-2). A non-stationary model therefore seems the most appropriate, and a Universal Kriging approach (UK for short) will be applied. It amounts to decomposing the variable of interest explicitly into its trend and a stationary residual. A variogram is fitted to the residuals; the residuals are then kriged and the estimates added to the trend model.
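The trend-plus-residual idea can be illustrated with a minimal sketch, under simplifying assumptions: a linear drift in x fitted by least squares, simple kriging of the residuals with a known spherical covariance, and the two parts added back. This illustrates the decomposition only; it is not the exact Isatis UK algorithm.

```python
import numpy as np

def cov(h, range_=1800.0, sill=50.0):
    # Spherical covariance: sill minus the spherical variogram.
    h = np.minimum(np.abs(h), range_)
    gamma = sill * (1.5 * h / range_ - 0.5 * (h / range_) ** 3)
    return sill - gamma

def uk_estimate(x, z, x0):
    x, z = np.asarray(x, float), np.asarray(z, float)
    # 1. Fit the drift m(x) = a + b*x by least squares and take residuals.
    A = np.vstack([np.ones_like(x), x]).T
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ coef
    # 2. Simple kriging of the residuals at x0.
    K = cov(x[:, None] - x[None, :])
    k0 = cov(x - x0)
    w = np.linalg.solve(K, k0)
    # 3. Add the kriged residual back to the trend.
    return coef[0] + coef[1] * x0 + w @ resid

z_hat = uk_estimate(x=[0.0, 500.0, 1000.0, 1500.0],
                    z=[-1330.0, -1340.0, -1355.0, -1362.0], x0=750.0)
```

At a data location the kriged residual reproduces the observed residual exactly, so the estimate honors the data, as kriging should.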
(snap. 13.4-6)
For each variable, store the global trend in a new variogram model using Statistics/Modeling/
Global Trend Modeling. A variable corresponding to the residuals is also created.
(snap. 13.4-7)
Store the global trend in a variogram model and then fit the variogram of the residuals. By adding the variogram model of the residuals to the model initialized at the trend modeling stage, the required non-stationary model is obtained (for example: Maximum Z no-stationary and Minimum Z no-stationary). For that purpose, run Statistics/Variogram Fitting on the model of the residuals.
Below is the example for the residuals of Maximum Z. The variogram model is the same for Maximum Z and Minimum Z.
You can automatically initialize your model (using Model Initialization) or edit the model yourself with the following parameters:
- a Cubic structure with Range = 1800 m and Sill = 50.
Save the model under Maximum Z no Stationary.
(snap. 13.4-8)
(fig. 13.4-3: experimental variogram of Residuals Maximum Z (1 direction, angular tolerance 90.00, Lag = 110 m, 10 lags, tolerance 50%) with the fitted model: S1 - Cubic - Range = 1800 m, Sill = 50)
(snap. 13.4-9)
(snap. 13.4-10)
The results of the kriging are called respectively Maximum Z kriging and Minimum Z kriging. These base cases are very close to the surfaces already stored in the 2D grid file (see hereafter the correlation cross plot between SURF 3: S2 and Maximum Z kriging).
(fig. 13.4-4)
Note - An alternative approach would be to model the top surface and the thickness of the unit,
avoiding the risk of getting surfaces crossing each other.
The non-stationarity, which is somehow contradictory with the existence of a unique histogram;
The even density of wells in the gridded area, which controls the distribution through the conditioning of the simulations;
(fig. 13.4-5)
(snap. 13.4-11)
(snap. 13.4-12)
Using Tools/Simulation Post-processing, calculate the average of the 100 simulations in order to compare it to the kriged values. The match is almost perfect (fig. 13.4-6), which was expected, as the mean of numerous simulations (here 100) tends towards the kriging.
In order to define the geometrical envelope of the S2 Unit, where the facies and porosity simulations are performed, store the maximum of the simulated top (Maximum Z Top) and the minimum of the simulated bottom (Minimum Z Bottom). The use of the envelope ensures that all grid nodes will be filled with a porosity value.
(fig. 13.4-6)
(snap. 13.4-13)
Using Tools/Create Special Variable, create a new macro variable with 100 indices and name it Thickness. Using File/Calculator, compute the thickness from the simulations of Maximum Z and the simulations of Minimum Z. Check with Statistics/Quick Statistics that the surfaces do not cross each other (there are no negative values).
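The consistency check suggested here amounts to a single minimum over all simulations and nodes. Below is a sketch with synthetic arrays (the array shapes and values are invented for illustration): each pair of top/bottom simulations must yield a non-negative thickness everywhere, i.e. the simulated surfaces never cross.

```python
import numpy as np

# Synthetic stand-ins for the 100 top and bottom surface simulations over a
# small grid of 25 nodes (illustrative values only).
rng = np.random.default_rng(0)
top_sims = -1330.0 + rng.normal(0.0, 2.0, size=(100, 25))
bottom_sims = top_sims - 20.0 + rng.normal(0.0, 2.0, size=(100, 25))

# Thickness per simulation and per node, as done with File/Calculator.
thickness = top_sims - bottom_sims

# Quick Statistics equivalent: the minimum thickness must be >= 0.
assert thickness.min() >= 0.0, "simulated surfaces cross somewhere"
```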
(snap. 13.4-14)
Before going to the Discretization & Flattening window you need to convert the 3D wells into core lines. Use the panel Tools/Convert Gravity Lines to Core Lines: the old gravity files are saved, and the new core lines are named 3D Wells and 3D Wells Header.
In the Data File Manager, set the variable Well Name as Line Name in the 3D Wells Header: right-click on the Well Name variable and choose Modify into Line Name.
In the File Manager, change the format of the variable Maximum Z Kriging, Maximum Z Top and
Minimum Z Bottom.
(snap. 13.5-1)
Go to Discretization and Flattening. Create a new proportion file S2 Unit and fill in the 5 tabs.
(snap. 13.5-2)
(snap. 13.5-3)
Take the Maximum Z Top and Minimum Z Bottom (the envelope) as Top Unit Variable and Bottom Unit Variable. The reference variable is Maximum Z kriging (the base case corresponding to Surf 3: S2). The kriging of the top surface is used as the reference variable because it is geologically consistent, which is not the case for the envelope.
Note - The S2 top surface has been chosen as the reference surface because the base of the S2 unit
shows downlapping layers as the platform built eastward into the Midland basin.
(c) Lithotype Definition
In the S2 Unit, consider the lithofacies 1 (siltstone), 2 (anhydrite) and 3 (dolomite) and assign them to lithotypes 1 to 3. In this case, the data already contain the lithotype information.
(snap. 13.5-4)
For further display, create a dedicated colour scale by using Lithotype Attributes.
(snap. 13.5-5)
The wells are discretized with a vertical lag of 1 m, which corresponds to the vertical mesh of the
stratigraphic grid. There is a distortion ratio of 50 (50/1: ratio of mesh (x,y) and mesh (z)).
(snap. 13.5-6)
(e) Output
In the output tab, enter the discretized wells file and the header file. Define the output variables.
(snap. 13.5-7)
After running the bulletin, read carefully the information printed in the message window to check the options and the discretization results.
It is possible to visualize the discretized wells in the new stratigraphic framework with the display
menu using Lines representation.
Note - The envelope (Maximum Z Top and Minimum Z Bottom) is used to make sure that, inside the reservoir unit (S2 Unit), all the grid nodes will be filled with a porosity value when performing the porosity simulations.
(snap. 13.5-1)
2D Proportion Constraints are specified: the proportion variable is the kriged mean siltstone proportion calculated earlier.
The graphic window displays the wells projected on the horizontal plane and the global proportion
curve in the lower right corner.
Change the Graphic Options by using the corresponding Application menu.
(snap. 13.5-2)
(snap. 13.5-3)
(snap. 13.5-4)
Using the Application menu and the option Display Pie Proportions, each well is represented by a pie chart subdivided into parts whose size is proportional to the lithotype proportions.
(snap. 13.5-5)
(snap. 13.5-6)
Coming back to the Vertical Proportion Curves Edition mode, perform the following actions: Display & Edit, completion by 3 levels and Smoothing with 3 passes. Another method, using the Editing tool, is described in section (e) Edition Mode.
Note - You can see that the raw VPC presents gaps at the top and the bottom. These gaps are explained by the fact that, in the display, the top corresponds to the maximum of the top unit variable (here Maximum Z Top) and the bottom to the minimum of the bottom unit variable (here Minimum Z Bottom). The well information does not cover the whole interval between the defined top and bottom. These gaps may be an issue, as an extrapolation is performed to fill them (especially at the top). Another method would be to use the simulations of both surfaces pair by pair to create the VPC; it would require creating as many VPCs as there are pairs of simulations, which would be rather inconvenient.
(snap. 13.5-7)
(snap. 13.5-8)
To visualize the interpolated proportions, use Application/Display 3D Proportions with the sampling mode (a step of 5, for instance, along X and Y).
(fig. 13.5-1: 3D proportion map of the lithotypes Siltstone, Anhydrite and Dolomite)
In order to update the parameter file use the menu Application/SAVE and RUN.
13.5.1.3 Determination of the Gaussian Random Functions and their variograms for plurigaussian simulations
This phase is specific to the simulation using the plurigaussian technique and is achieved by means
of the menu Statistics/Modeling/Plurigaussian Variograms.
The aim is to assign the lithotypes to sets of values of a pair of Gaussian Random Functions (GRF), i.e. by means of thresholds applied to the GRFs. The transform from the GRFs to the categorical lithotypes is called the lithotype rule. It is necessary to define it first, in order to represent the possible transitions between the facies, as they can express the deposition process in geological terms.
L1 = Siltstone, L2 = Anhydrite, L3 = Dolomite.
(snap. 13.5-1)
The first GRF (G1), horizontal, will rule L2, L1 and L3. It is represented by a spherical scheme with ranges of 300 m for U, 300 m for V and 5 m for Z, and a sill of 0.5.
The second GRF (G2) will rule L1, L2 and L3. It is represented by a spherical scheme with ranges of 1200 m for U, 2700 m for V and 5 m for Z, and a sill of 0.5.
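The truncation mechanism behind the lithotype rule can be sketched in a few lines. The rule and thresholds below are illustrative (in Isatis they follow from the 3D proportion matrix): two standard Gaussian values are turned into one of the three lithotypes by nested thresholding.

```python
import numpy as np

def lithotype(g1, g2, t1=0.0, t2=0.0):
    """Apply an illustrative lithotype rule to two Gaussian values."""
    g1, g2 = np.asarray(g1), np.asarray(g2)
    # In this toy rule, G1 first separates anhydrite (L2) from the rest;
    # G2 then splits the remainder into siltstone (L1) and dolomite (L3).
    return np.where(g1 < t1, 2, np.where(g2 < t2, 1, 3))

# Non-conditional example on independent standard Gaussian fields.
rng = np.random.default_rng(1)
g1 = rng.standard_normal(10_000)
g2 = rng.standard_normal(10_000)
facies = lithotype(g1, g2)
# With both thresholds at 0, the expected proportions are 50% anhydrite,
# 25% siltstone and 25% dolomite.
```

Making the thresholds vary with the grid cell is what lets the simulation honor the 3D proportion matrix.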
(snap. 13.5-2)
Run non-conditional simulations along the 3 main sections of the stratigraphic space by using Display Simulations. By changing the correlation coefficient, visualize the effect on the spatial organization of the facies.
Visualize the thresholds applied to the 2 GRFs by using Display Threshold.
Using the variogram fitting button, calculate the variograms of the lithotype indicators in two horizontal directions and along the vertical.
(snap. 13.5-3)
The figure below shows the variograms for the horizontal directions and the vertical one. The dotted lines correspond to the experimental variograms and the solid lines to the model.
(snap. 13.5-4)
(snap. 13.5-5)
For the conditioning of the simulation to the data, use a standard moving neighborhood (Moving Facies). It is defined by a search ellipsoid with radii of 1.2 km x 3 km x 20 m and 8 sectors with an optimum of 4 points per sector. Display the simulation in the flat space using Display New Page with a raster representation, or a section in a 3D grid representation.
(snap. 13.5-1)
(snap. 13.5-2)
Finally, transfer the plurigaussian simulations from the working grid to the structural grid by using Tools/Merge Stratigraphic Units (Facies S2 Unit PGS).
The 3D viewer may be used to visualize the simulations.
(snap. 13.5-1)
The statistics below compare the two ways of obtaining the most probable facies, without and with the Soares correction. The figure displays an example of a horizontal section.
Most Probable Percentage   Before Soares   After Soares
Siltstone                  38.83%          37.52%
Anhydrite                  41.62%          44.74%
Dolomite                   19.56%          17.74%
(snap. 13.5-2: horizontal sections of the most probable facies (Siltstone, Anhydrite, Dolomite) before and after the Soares correction)
The figures below show the risk curves using either Risk Curve or Histogram display type.
(snap. 13.5-3)
(snap. 13.5-4: risk curves (frequencies versus volume in Mm3) for the facies Siltstone, Anhydrite and Dolomite)
(snap. 13.5-5: histograms of frequencies versus volume in Mm3 for the facies Siltstone, Anhydrite and Dolomite)
(snap. 13.5-6: horizontal sections, X (m) versus Y (m))
(snap. 13.5-1)
(snap. 13.5-2)
(snap. 13.5-3)
(snap. 13.5-4)
(snap. 13.5-5)
Gaussian Phi Siltstone: 2 basic structures: an anisotropic spherical model with ranges of 680 m along U, 800 m along V and 7 m along W (sill: 0.94); an anisotropic cubic model with ranges of 700 m along U, 1600 m along V and 2 m along W (sill: 0.06).
Gaussian Phi Anhydrite: an anisotropic spherical model with horizontal ranges of 450 m along U and 850 m along V, and a vertical range of 4.4 m (sill: 1).
Gaussian Phi Dolomite: an anisotropic spherical model with ranges of 476 m along U, 1072 m along V and 4.7 m along W (sill: 0.5); an anisotropic cubic model with ranges of 323 m along U, 497 m along V and 5.4 m along W (sill: 0.5).
(snap. 13.5-6)
(snap. 13.5-7)
(d) Simulations
For each lithotype, run the Turning Bands simulations (Interpolate/Conditional Simulations/
Turning Bands).
(snap. 13.5-8)
(snap. 13.5-9)
(snap. 13.5-10)
Do not forget to use the Gaussian back transform option in the simulation parameters.
(snap. 13.5-11)
Below is the neighborhood used for Turning Bands. The same standard neighborhood is used for
the porosity of the different lithotypes (Phi Siltstone, Phi Anhydrite, Phi Dolomite).
(snap. 13.5-12)
(snap. 13.5-1)
(b) Calculator
The transformation created in the Calculator fills the macro variable Porosity [xxxxx] from the porosity simulations conditioned by lithotype (Phi Siltstone [xxxxx], Phi Anhydrite [xxxxx], Phi Dolomite [xxxxx]), according to the facies simulations (PGS [xxxxx]).
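The cookie-cutting combination performed by the Calculator can be sketched as a node-by-node selection. The arrays and distribution parameters below are invented for illustration; only the selection logic mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000  # number of grid nodes (illustrative)

# One porosity simulation per lithotype over the whole grid, as if every
# node belonged to that facies (values are invented).
phi = {1: rng.normal(0.08, 0.02, n),   # stands in for Phi Siltstone
       2: rng.normal(0.03, 0.01, n),   # stands in for Phi Anhydrite
       3: rng.normal(0.15, 0.03, n)}   # stands in for Phi Dolomite

# Facies simulation (stands in for PGS): lithotype 1, 2 or 3 at each node.
pgs = rng.integers(1, 4, n)

# Cookie cutting: keep, at each node, the porosity of the facies actually
# simulated there.
porosity = np.choose(pgs - 1, [phi[1], phi[2], phi[3]])
```

Repeating this for each simulation index [xxxxx] yields the final porosity macro variable.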
(snap. 13.5-2)
The macro variable Porosity [xxxxx] is then transferred from the flat working grid (3D Working Grid) to the 3D real space (3D Structural Grid) using Tools/Merge Stratigraphic Units.
(snap. 13.5-3)
(snap. 13.5-4)
(fig. 13.5-1: histogram of the simulated volumes (Mm3) with the P10, P50 and P90 quantiles)
The volumetrics computed using the 3D porosity simulations are generally higher than those computed using the 2D mean porosity simulations.
13.6 Conclusion
This case study illustrates different techniques available in Isatis (surface simulations, facies simulations and volumetrics). The volumetric outcomes are interesting to study, in particular regarding the use of the porosity (3D porosity simulations).
The use of the envelope in the discretization and flattening has an influence on the computation of the 3D proportion matrix: it is necessary to extrapolate the proportions at the top and bottom of the VPC. This extrapolation of course influences the facies simulations, therefore the porosity simulations and finally the resulting volumes.
The volumes calculated with the 3D porosity simulations are generally higher than those calculated using the 2D mean porosity simulations. This can be explained by the extrapolation of siltstone at the top during the computation of the 3D proportion curves.
To conclude, this case study presents a possible workflow for reservoir characterization. To this end, several methods are applied (Turning Bands simulations, Plurigaussian Simulation, Universal Kriging), covering structural, facies and property modeling. An interesting topic is the use of a 2D proportion constraint. The study shows how to account for the uncertainty on surfaces and properties (e.g. porosity) together.
Environment
Pollution
15. Pollution
This case study is based on a data set kindly provided by Dr. R. Clardin of the Laboratoire Cantonal d'Agronomie. The data set has been collected for the GEOS project (Observation des sols de Genève) and processed with the University of Lausanne.
The case study covers rather exhaustively a large panel of Isatis
features, such as:
how to perform a univariate and bivariate structural analysis,
how to interpolate these variables on a regular grid, using kriging
or cokriging,
how to perform conditional simulations using the Turning Bands
method, in order to obtain the probability map for the variable to
exceed a given pollution threshold.
Important Note:
Before starting this study, it is strongly advised to read the Beginner's Guide, especially the following paragraphs: Handling Isatis, Tutorial: Familiarizing with Isatis Basics, and Batch Processing & Journal Files.
All the data sets are available in the Isatis installation directory (usually C:\Program Files\Geovariances\Isatis\DataSets\). This directory also contains a journal file including all the steps of the case study. In case you get stuck during the case study, use the journal file to perform all the actions according to the book.
(snap. 15.1-1)
It is then advised to verify the consistency of the units defined in the Preferences / Study Environment / Units panel:
Input-Output Length Options window: unit in kilometers (Length), with its Format set to Decimal with Length = 10 and Digits = 2.
Graphical Axis Units window: X and Y units in kilometers, Z unit in centimeters (the latter being of no importance in this 2D case).
The ASCII file contains a header where the structure of the data information is described. Each record successively contains:
The rank of the sample (which will not be loaded, as it is not described by a corresponding field keyword).
Note - In the definition of the two pollution variables, the lack of information is coded as a blank string. If, for a sample, the characters within the offset dedicated to the variable are left blank, the value of the variable for this sample is set to a conventional internal value called the undefined value. This is the case for the third sample of the file, where the Zn value is missing.
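A sketch of how such a blank field becomes an undefined value when reading a fixed-width record; `float('nan')` plays the role of the internal undefined value, and the record layout (four 10-character fields) is invented for illustration.

```python
# Hypothetical fixed-width record: X, Y, Pb, Zn in four 10-character fields.
def parse_record(line):
    fields = (line[0:10], line[10:20], line[20:30], line[30:40])
    # A blank field maps to NaN, the stand-in for the undefined value.
    to_val = lambda s: float(s) if s.strip() else float("nan")
    return tuple(to_val(s) for s in fields)

# Example record where the Zn field (last 10 characters) is left blank.
rec = parse_record("     110.0     490.0       5.2          ")
```

Downstream statistics then simply skip the NaN samples, which is why the Zn count below is 101 instead of 102.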
The procedure File / Import / ASCII is used to load the data. First you have to specify the path of your data file using the ASCII Data File button. As no specific structure is provided, the samples are considered as Points (as opposed to grid or line structures). By default the 'Header is Contained in the ASCII Data File' option is on, which is correct for this data file.
The ASCII files are located in the Isatis installation directory under Datasets/Pollution.
(snap. 15.1-2)
Consequently, in this case you do not need to pay attention to the ASCII Header part of the window. By default this window prompts the Create a New File option, which is also correct for this case study. In order to create a new directory and a new file in the current study, the NEW Points File button is used to enter the names of these two items; click on the New Directory button and give a name, then do the same for the New File button, for instance:
- New Directory = Pollution
- New File = Data
Click on OK and you will be back to File / Import / ASCII; finally, press Import.
In order to see the status of the last action, click on the Message Window icon.
#    ffff = " " , unit = "%" , bitlength = 32
#    f_type = Decimal , f_length = 10 , f_digits = 2
#    description = ""
# field = 5 , type = numeric , name = Zn
#    ffff = " " , unit = "%" , bitlength = 32
#    f_type = Decimal , f_length = 10 , f_digits = 2
#    description = ""
#+++++++++---------+++++++++----------++++++++++
The File / Data File Manager facility offers the possibility of listing the contents of all the directories and files of the current study, and of providing information on any item of the database just by using the graphical menu (click the left mouse button on a variable of interest, then click the right button and select an item). This allows the following basic statistics to be derived: the file contains 102 samples, but the Zn variable is defined on 101 samples only.
Name           Count of Samples   Minimum      Maximum
X Coordinate   102                109.847 km   143.012 km
Y Coordinate   102                483.656 km   513.039 km
Pb             102                1.09         33.20
Zn             101                3.00         31.60
Note - Skewed data sets (with a few very high values) sometimes mask structures, complicating the task of calculating a representative experimental variogram. There are several ways to tackle this problem; a common practice is to apply a normal score or logarithmic transformation of the cumulative distribution function (cdf) to try to stabilize the fluctuations between high and low values. Another possibility is to mask (set aside from the calculation) some or all relatively high values to try to obtain more structured variograms (reduction or elimination of a nugget effect). The latter method is recommended when the anomalous values correspond to outliers; otherwise you risk smoothing or hiding real structures.
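The stabilizing effect of the logarithmic transformation mentioned in the note can be demonstrated on a synthetic, Zn-like skewed sample (the distribution parameters are invented for illustration):

```python
import numpy as np

# A lognormal-like sample: mostly moderate values with a few very high ones.
rng = np.random.default_rng(3)
zn = np.exp(rng.normal(1.7, 0.4, 500))
log_zn = np.log(zn)

# Sample skewness: positive for a long right tail, near 0 when symmetric.
def skew(v):
    return float(np.mean(((v - v.mean()) / v.std()) ** 3))

# The raw variable is clearly right-skewed; its log is roughly symmetric,
# which damps the influence of the high values on the variogram.
```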
(snap. 15.3-1)
For example, to calculate the histogram with 32 classes between 0 and 32% (1 unit interval), first click on the histogram icon (third from the left); a histogram calculated with default values will be displayed. Then enter the proper values in the Application / Calculation Parameters menu of the Histogram page. If you switch on the Define Parameters Before Initial Calculations option, you can skip the default histogram display.
On the base map (first icon from the left), each active sample is represented by a cross proportional to the Zn value. A sample is active if its value for a given variable is defined and not masked.
For the sake of simplicity, we limit the analysis to omnidirectional variogram calculations, therefore ignoring potential anisotropies. The experimental variogram is obtained by clicking on the seventh icon. The number of pairs, or the histogram of pairs, may be added to the graphic by switching on the appropriate buttons in Application / Graphic Specific Parameters. The following variogram has been calculated with default parameters. In the Variogram Calculation Parameters panel you can also compute the variogram cloud.
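The quantity computed here can be sketched directly from its definition: the experimental variogram at lag h is half the mean squared difference over all pairs whose separation falls in the lag class. This is a plain illustration of the formula (regular lags with a 50% tolerance), not the Isatis implementation.

```python
import numpy as np

def experimental_variogram(xy, z, lag, nlags):
    """Omnidirectional experimental variogram with regular lag classes."""
    xy, z = np.asarray(xy, float), np.asarray(z, float)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gamma, counts = np.full(nlags, np.nan), np.zeros(nlags, int)
    for k in range(nlags):
        # Upper triangle only: each pair counted once, within lag +/- 50%.
        mask = np.triu(np.abs(d - (k + 1) * lag) <= 0.5 * lag, 1)
        counts[k] = mask.sum()
        if counts[k]:
            gamma[k] = sq[mask].mean()
    return gamma, counts

# Synthetic example: 80 random points with uncorrelated values, so the
# variogram should fluctuate around the variance (here about 1).
rng = np.random.default_rng(4)
xy = rng.uniform(0, 10, (80, 2))
gamma, counts = experimental_variogram(xy, rng.standard_normal(80), 1.0, 10)
```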
(fig. 15.3-1: base map of Zn, histogram (101 samples, minimum 3.00, maximum 31.60, mean 6.10, standard deviation 3.59), experimental variogram and variogram cloud)
From the base map, the data set shows some areas without information. The average distance between samples is about 0.7 km, but two samples are 6.5 km and 9.0 km away from their nearest neighbor. In this case you might question whether these samples belong to the area of interest, or to the same population; we will nevertheless take these values into account for our calculations.
The base map also clearly shows two samples with anomalous values. It is important to find out the nature of these two values before considering them as outliers. You might also question the relation between these values and their geographical location, to try to infer whether they are likely to occur in non-sampled areas.
The histogram shows a clear skewness. Another feature that you can observe from the histogram is that there are no samples with values less than 3% Zn.
The variogram cloud (calculated by ticking Calculate the Variogram Cloud in the Variogram Calculation Parameters) clearly shows two populations. In order to identify the geographical location of the high variogram values at short distances, you can select several points from the variogram cloud and highlight them with the right button. All the windows are automatically regenerated and the values are drawn in blue as asterisks. In the base map they look like two spiders centered on the two anomalous Zn values.
(fig. 15.3-2: the same base map, histogram, variogram and variogram cloud, with the highlighted pairs forming two spiders centered on the anomalous Zn values)
To find out more about the two central points of these spiders, right-click on them in the base map and ask for Display Data Information (Short):
Display of the following variables (2 samples)
X coordinate   Y coordinate   Zn
113.433 km     498.943 km     24.80
113.313 km     501.368 km     31.60
At this stage a key question arises: shall we consider these two anomalous values as erroneous or real? It is likely that these two Zn values are not erroneous, and we will consider them as real. However, these high values may be due to a local behavior. Therefore, we mask them for the analysis but we will take them into account for the estimation. Using the mouse, first click with the left button on the anomalous values on the Basemap page, then click the right button and select the Mask option.
The effect on the variogram cloud is spectacular. All the pairs with high variogram values are now suppressed: they are still drawn, but represented by red squares instead of green crosses. Redrawing the variogram cloud while hiding the masked information (in the Application menu) rescales the picture.
The cloud now presents a much more conventional shape: as the variability is expected to increase with the distance, the variogram cloud looks like a cone lying over the horizontal axis with a large density of pairs at the bottom. As a consequence, the experimental variogram becomes more structured, with a lower variability than the previous variogram.
The procedure of highlighting pairs with large variogram values at small distances no longer produces spiders.
At this stage, we can save the current selection (without the two high values) as the selection variable of the database called Variographic selection, in the Application / Save in Selection panel (which can be reached in the Menu Bar of the Base Map page).
(snap. 15.3-2)
(fig. 15.3-3: base map, histogram (99 samples, minimum 3.00, maximum 12.70, mean 5.66, standard deviation 1.70), experimental variogram and variogram cloud of Zn after masking)
Variographic selection: 102 samples in total, 2 masked, 100 selected.
The final task is to choose a better lag value for the experimental variogram calculation. First switch OFF the display of the variogram cloud in Application / Graphic Specific Parameters, then use Application / Calculation Parameters to ask for 10 lags of 1 km, preview the histogram of the number of pairs (Display Pairs) in the Direction Definition panel, and display the number of pairs for each lag in Application / Graphic Specific Parameters.
(snap. 15.3-3)
(snap. 15.3-4)
The experimental variogram is reproduced in the next figure and can be compared to its initial shape. The variance drops from 12.91 to 2.88, and the shape of the variogram is much more suitable for the model fitting performed next. Moreover, the number of pairs is quite stable for all lags (except the first one), which reflects the quality of the chosen parameters.
(fig. 15.3-4: experimental variogram of Zn with the number of pairs per lag)
In order to perform the fitting step, it is now time to store the final experimental variogram with the
item Save in Parameter File of the Application menu of the Variogram Page. We will call it
Pollution Zn.
The global window, where all experimental variograms, in all directions and for all variables, are displayed.
The fitting window, where we focus on one given experimental variogram, for one variable and in one given direction, and where an interactive fitting is possible.
In our case, as the Parameter File refers to only one experimental variogram for the single variable Zn, both windows will look the same.
(snap. 15.4-1)
The principle consists in editing the model parameters and checking the impact graphically. You can also initialize the variogram by clicking on Model Initialization; this enables you to initialize the model with different combinations of structures, with or without a nugget effect. This procedure automatically fits the range and the sill of the variogram (see the Variogram Fitting section of the User's Guide).
The next figure presents the result of the Model Initialization with a Spherical component only.
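The kind of fit that Model Initialization performs can be sketched as a least-squares search over the range and sill of a single spherical structure. This is our own coarse grid-search illustration under simplifying assumptions, not the Isatis fitting algorithm.

```python
import numpy as np

def spherical(h, range_, sill):
    """Spherical variogram model: reaches the sill at h = range_."""
    h = np.minimum(np.asarray(h, float), range_)
    return sill * (1.5 * h / range_ - 0.5 * (h / range_) ** 3)

def fit_spherical(h, gamma):
    """Least-squares fit of (range, sill) by a coarse grid search."""
    h, gamma = np.asarray(h, float), np.asarray(gamma, float)
    best = None
    for r in np.linspace(0.5, 10.0, 96):
        for s in np.linspace(0.5, 5.0, 91):
            err = np.sum((spherical(h, r, s) - gamma) ** 2)
            if best is None or err < best[0]:
                best = (err, r, s)
    return best[1], best[2]

# Sanity check on a synthetic experimental variogram generated from a
# spherical model with range 3 and sill 2: the fit should recover both.
h = np.arange(1.0, 8.0)
r_hat, s_hat = fit_spherical(h, spherical(h, 3.0, 2.0))
```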
(fig. 15.4-1: experimental variogram of Zn fitted with a single Spherical structure)
Apart from the Model Initialization, a more complete Automatic Fitting is provided in the corresponding tab, where you can choose the combination of structures you want to use and also put constraints on anisotropy, sills and ranges. From the Automatic Fitting tab, choose an Exponential structure.
(snap. 15.4-2)
Then press Fit in the Automatic Fitting tab. Use the global window and the Print button to check the output model.
(fig. 15.4-2: experimental variogram of Zn fitted with an Exponential structure)
The Manual Fitting is available in the corresponding tab where you may change the parameters by
clicking on Edit.
(snap. 15.4-3)
Save the model under the name Pollution Zn and finally click on Run.
15.5 Cross-Validation
The Statistics / Modeling / Cross-Validation procedure consists in considering each data point in
turn, removing it temporarily from the data set and using its neighboring information to predict (by
a kriging procedure) the value of the variable at its location. The estimation is compared to the true
value to produce the estimation error, possibly standardized by the standard deviation of the
estimation.
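The statistics this procedure reports can be sketched in a few lines: given the true values, the leave-one-out estimates and the kriging standard deviations at each point, compute the raw errors and the standardized errors (a minimal sketch with hypothetical variable names, not the Isatis implementation):

```python
def cross_validation_stats(true_vals, estimates, krig_std):
    """Summarize leave-one-out cross-validation results:
    raw errors Z - Z*, and errors standardized by the kriging
    standard deviation (hypothetical helper, not an Isatis API)."""
    errors = [z - zs for z, zs in zip(true_vals, estimates)]
    std_errors = [e / s for e, s in zip(errors, krig_std)]
    n = len(errors)
    mean_err = sum(errors) / n          # should be close to 0 (unbiasedness)
    # variance of the standardized error: close to 1 when the model
    # (the sill in particular) is consistent with the data
    var_std = sum(e * e for e in std_errors) / n
    return mean_err, var_std

# toy illustration
true_vals = [5.0, 7.2, 4.1, 6.3]
estimates = [5.4, 6.8, 4.5, 6.1]
krig_std  = [0.8, 0.9, 0.7, 0.6]
mean_err, var_std = cross_validation_stats(true_vals, estimates, krig_std)
```

The same two summary numbers (mean error, variance of the standardized error) are the ones printed in the Message window further below.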
Click on the Data File button and select the Zn variable without any selection; in that way we will
be able to test the parameters at the two high-value locations. The Target Variable button is set
to the only variable selected in the previous step. Switch on the Graphic Representations option.
Select, with the Model button, the variogram model called Pollution Zn.
The new feature of this procedure is the definition of the Neighborhood parameters. Click on the
Neighborhood button and you will be asked to select or create a new set of parameters; in the New
File Name area enter the name Pollution, then click on Add and you will be able to set the
neighborhood parameters by clicking on the respective Edit button.
A trial and error procedure allows the user to select a convenient set of neighborhood parameters.
These parameters will be discussed in the estimation chapter; we keep here the default parameters.
(snap. 15.5-1)
(snap. 15.5-2)
By clicking on Run, the procedure finally produces a graphic page containing the four following
windows:

- the scatter diagram of the true data versus the estimated values,
- the histogram of the standardized estimation errors,
- the base map, with symbols whose size and color respectively represent the real and estimated
Zn,
- the scatter diagram of the standardized estimation errors versus the estimated values.
A sample is arbitrarily considered as not robust as soon as its standardized estimation error is larger
than a given threshold in absolute value (2.5 for example which approximately corresponds to the
1% extreme values of a normal distribution).
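The "approximately 1%" claim for the 2.5 threshold can be checked with the standard normal tail probability, computable from the complementary error function:

```python
from math import erfc, sqrt

# Two-sided tail probability of a standard normal beyond +/- 2.5:
# P(|Z| > 2.5) = erfc(2.5 / sqrt(2)), roughly 1.2 %, i.e. close to
# the "1% extreme values" quoted in the text.
threshold = 2.5
p_extreme = erfc(threshold / sqrt(2.0))
```

With a looser threshold of 2.0 the same formula gives about 4.6%, which is why 2.5 is the usual compromise for flagging non-robust samples.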
(fig. 15.5-1)
The histogram shows a long tail and the scatter diagram of estimated values versus true data is far
from being close to the first bisector. At the same time, the statistics on the estimation error and
standardized error (mean and variance) are printed out in the Message window.
======================================================================
|                          Cross-validation                          |
======================================================================
Data File Information:
   Directory   = Pollution
   File        = Data
   Variable(s) = Zn
Target File Information:
   Directory   = Pollution
   File        = Data
   Variable(s) = Zn
Seed File Information:
   Directory   = Pollution
   File        = Data
   Variable(s) = Zn
   Type        = POINT (102 points)
Model Name        = Pollution Zn
Neighborhood Name = Pollution - MOVING

A data is robust when its Standardized Error lies between -2.500000 and 2.500000
Successfully processed = 101
The cross-validation has been carried out only on the 101 defined samples of Zn. The mean error
shows that the unbiasedness condition of the kriging algorithm worked properly. The variance of
the standardized estimation error measures the ratio between the (squared) experimental
estimation error and the kriging variance: this ratio should be close to 1. The deviation from this
optimum (13.33 in this test) probably reflects the impact of the two high values that were not taken
into account in the variogram model, and also the impact of reducing the real variability from
12.9 to a sill of 2.7.
In the second part of this printout, the same statistics are calculated based only on the points where
the standardized estimation error is smaller (in absolute value) than the 2.5 threshold: these points
are arbitrarily considered as robust data (87). We do not recommend paying much attention to these
statistics, which are based on an arbitrary definition of robust data.
More consistently, we should use the Variographic Selection to mask the two large values (high
local values) from the data information. The procedure produces the following figure:
(fig. 15.5-2)
The various figures as well as the statistics present a much better consistency between the
remaining data and the model: in particular, the variance of the estimation standardized error is now
equal to 1.82.
======================================================================
|                          Cross-validation                          |
======================================================================
Data File Information:
   Directory   = Pollution
   File        = Data
   Selection   = Variographic selection
   Variable(s) = Zn
Target File Information:
   Directory   = Pollution
   File        = Data
   Selection   = Variographic selection
   Variable(s) = Zn
Seed File Information:
   Directory   = Pollution
   File        = Data
   Selection   = Variographic selection
   Variable(s) = Zn
   Type        = POINT (102 points)
Model Name        = Pollution Zn
Neighborhood Name = Pollution - MOVING

A data is robust when its Standardized Error lies between -2.500000 and 2.500000
Successfully processed = 99
The last feature allows you to rescale the model according to the cross-validation scores. As a
matter of fact, we know that the re-estimation error does not depend on the sill of the variogram,
whereas the kriging variance is directly proportional to the sill. Hence, if the variance of the
standardized estimation error is equal to 1.96, multiplying the sill by this value corrects it to 1.
However, this type of operation is not recommended because of the weakness of this cross-validation
methodology, which is based on kriging and therefore itself relies on the model.
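The rescaling argument can be checked with two lines of arithmetic: the squared errors are unchanged when the whole variogram is multiplied by a constant (the kriging weights do not depend on a global scaling), while every kriging variance is multiplied by that constant. A sketch with toy numbers, not the case-study scores:

```python
# squared cross-validation errors (Z - Z*)^2 and kriging variances;
# toy values, not the case-study scores
sq_errors = [3.2, 1.5, 4.8]
krig_var  = [1.1, 0.9, 1.3]      # proportional to the variogram sill

var_std = sum(e / v for e, v in zip(sq_errors, krig_var)) / len(sq_errors)

# rescale the sill (hence every kriging variance) by var_std:
# the errors themselves are unchanged, since the kriging weights do not
# depend on a global multiplication of the variogram
krig_var_rescaled = [v * var_std for v in krig_var]
var_std_after = (sum(e / v for e, v in zip(sq_errors, krig_var_rescaled))
                 / len(sq_errors))
```

By construction `var_std_after` equals 1 exactly, which is why the correction is mechanical rather than a genuine validation of the model.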
(snap. 15.6-1)
Using the Graphic Check option, the procedure offers the graphical capability of checking that the
new grid reasonably overlays the data points.
(fig. 15.6-1)
15.7 Kriging
The kriging procedure Interpolate / Estimation / (Co-)Kriging requires the definition of:
- the Input information: variable Zn in the Data File (without any selection),
- the following variables in the Output Grid File, where the results will be stored.

As already mentioned, the two high Zn values are kept for the kriging estimation, as we do not
consider them as erroneous data.
(snap. 15.7-1)
A special feature allows you to test the choice of parameters, through a kriging procedure, on a
graphical basis (Test button). A first click within the graphic area displays the target file (the grid).
A second click allows the selection of one grid node in particular. The target grid node may also be
entered in the Test Window / Application / Selection of Target option (see the status line at the
bottom of the graphic page), for instance the node [11,21].
The figure shows the data set, the samples chosen in the neighborhood and their corresponding
weights. The bottom of the screen recalls the estimation value, its standard deviation and the sum of
the weights.
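What the Test button computes can be sketched for a tiny configuration: an ordinary kriging system built from a covariance model, solved under the unbiasedness constraint. The covariance model and coordinates below are invented for illustration, not those of the case study:

```python
from math import exp

def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c and m[r][c]:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def cov(p, q, sill=2.0, scale=4.0):
    # illustrative exponential covariance between two 2D points
    h = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return sill * exp(-h / scale)

data = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]   # sample locations
target = (1.0, 1.0)

# ordinary kriging system: data covariances plus the unbiasedness row
n = len(data)
A = [[cov(p, q) for q in data] + [1.0] for p in data] + [[1.0] * n + [0.0]]
b = [cov(p, target) for p in data] + [1.0]
sol = solve(A, b)
weights, lagrange = sol[:n], sol[n]      # weights sum to 1 by construction
```

The sum of the weights equals 1 and the last unknown is the Lagrange parameter, the two quantities echoed in the printout below.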
(fig. 15.7-1)
Number of Neighbors             = 10
Mean Distance to the target     = 5.89km
Total sum of the weights        = 1.000000
Sum of positive weights         = 1.000000
Weight attached to the mean     = 0.832604
Lagrange parameters #1          = -0.324596
Estimated value                 = 9.315950
Estimation variance             = 2.896565
Estimation standard deviation   = 1.701930
Variance of Z* (Estimated Z)    = 0.393935
Covariance between Z and Z*     = 0.069339
Correlation between Z and Z*    = 0.067976
Slope of the regression Z | Z*  = 0.176017
Signal to Noise ratio (final)   = 0.911876
You can now try to modify the neighborhood parameters (Edit button): 8 angular sectors with an
optimum count of 2 samples per sector and a minimum number of 2 points in the neighborhood
circle, centered on the target point, with a radius of 10 km. When these modifications are applied,
the calculations and the graphic are updated.
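The sector logic can be sketched as follows: candidate samples inside the search radius are binned into angular sectors around the target, and only the closest ones in each sector are retained. A simplified sketch, ignoring the further rules Isatis applies (optimum counts, minimum totals):

```python
from math import atan2, hypot, pi

def sector_neighbors(target, points, radius=10.0, n_sectors=8, per_sector=2):
    """Keep the `per_sector` closest points per angular sector
    within `radius` of the target (simplified neighborhood search)."""
    tx, ty = target
    sectors = [[] for _ in range(n_sectors)]
    for p in points:
        dx, dy = p[0] - tx, p[1] - ty
        d = hypot(dx, dy)
        if 0.0 < d <= radius:
            # map the angle (-pi, pi] onto a sector index 0..n_sectors-1
            s = int((atan2(dy, dx) + pi) / (2 * pi) * n_sectors) % n_sectors
            sectors[s].append((d, p))
    chosen = []
    for s in sectors:
        s.sort()                         # closest first within the sector
        chosen.extend(p for _, p in s[:per_sector])
    return chosen

pts = [(1, 0), (2, 0), (3, 0), (-1, 1), (0, -2), (15, 0)]
near = sector_neighbors((0.0, 0.0), pts)
# (15, 0) is outside the radius; (3, 0) loses to two closer points
# sharing its sector
```

Sectorization is what produces the "more regular spread of the neighboring data" mentioned below: a dense cluster on one side can no longer monopolize the neighborhood.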
(snap. 15.7-2)
(fig. 15.7-2)
- For variable V1
Number of Neighbors             = 16
Mean Distance to the target     = 6.33km
Total sum of the weights        = 1.000000
Sum of positive weights         = 1.000000
Weight attached to the mean     = 0.827340
Lagrange parameters #1          = -0.250708
Estimated value                 = 8.874644
Estimation variance             = 2.833627
Estimation standard deviation   = 1.683338
Variance of Z* (Estimated Z)    = 0.309097
Covariance between Z and Z*     = 0.058389
Correlation between Z and Z*    = 0.064621
Slope of the regression Z | Z*  = 0.188902
Signal to Noise ratio (final)   = 0.932130
You can check the reasonable stability of the estimate and the improvement of the standard
deviation, which reflects the more regular spread of the neighboring data.
The Application Menu of the Test Graphic window (Application / Domain to be estimated) offers a
final possibility (restricted to the case of output grid files): to cross hatch all the grid nodes where
the neighborhood constraints cannot be fulfilled.
[Figure: base map (X: 110-140 km, Y: 480-510 km) with cross-hatching of the grid nodes where the neighborhood constraints cannot be fulfilled]
(fig. 15.7-3)
Firstly, give a name to the template you are creating: Zn. This will allow you to easily display
again this template later.
In the Contents list, double-click on the Raster item. A new window appears, in order to let you
specify which variable you want to display and with which color scale:

- In the Data area, in the Grid file select the variable Estimation for Zn (Kriging),
- Specify the title that will be given to the Raster part of the legend, for instance Zn kriging,
- In the Graphic Parameters area, specify the Color Scale you want to use for the raster
display. You may use an automatic default color scale, or create a new one specifically
dedicated to the Zn variable. To create a new color scale: click on the Color Scale button,
double-click on New Color Scale, enter a name: Zn, and press OK. Click on the Edit
button. In the Color Scale Definition window:
  - In the Bounds Definition, choose User Defined Classes.
  - To modify the bounds, click on Calculate from File to retrieve the min and max bounds
    from the selected variable.
  - Change the Number of Classes to 25. This may also be achieved by clicking on the
    Bounds button and entering 25 as the New Number of Classes, then OK.
  - In the Colors area, click on Color Sampling to pick the 25 colors regularly from the
    32-color palette. This will improve the contrast of the resulting display.
  - Switch on the Invert Color Order toggle in order to assign the red colors to the large Zn
    values.
  - Click on the Undefined Values button and select Transparent.
  - In the Legend area, switch off the Automatic Spacing between Tick Marks button, enter 0
    as the reference for tick marks and 5 as the step between tick marks. Then specify that
    you do not want your final color scale to exceed 6 cm. Switch off the Use Default Format
    button and set the number of digits to 0.
  - Click on OK.
- In the Item contents for: Raster window, click on Display current item to display the
result.
- Click on OK.
(snap. 15.8-1)
Back in the Contents list, double-click on the Basemap item to represent the Zn variable with
symbols proportional to the variable value. A new Item contents window appears. In the Data
area, select Data / Zn variable as the Proportional Variable. Enter Zn data as the Legend Title.
Leave the other parameters unchanged; by default, black crosses will be displayed with a size
proportional to the Zn value. Click on Display Current Item to check your parameters, then on
Display to see all the previously defined components of your graphic. Click on OK to close the
Item contents panel.
In the Item list, you can select any item and decide whether or not you want to display its
legend. Use the Up and Down arrows to modify the order of the items in the final display.

In the Display Box tab, choose the Containing a set of items mode and select the Raster item to
define the display box and remove the blank margins.

Close the Contents window. Your final graphic window should be similar to the one displayed
hereafter.
(snap. 15.8-2)
The * and [Not saved] symbols in the name of the page indicate that some recent modifications
have not been stored in the Zn graphic template, and that this template has never been saved. Click
on Application / Store Page to save them. You can now close your window.
Create a second template Zn Stdev to display the kriging standard deviation using an isoline grid
representation (between 0 and 2.5 with a step equal to 0.5) and an overlay of the Zn data locations.
The result should be similar to the one displayed hereafter.
(snap. 15.8-3)
(fig. 15.8-1)
(snap. 15.9-1)
We will now produce several graphics, as we did before with the Zn variable alone:

- A scatter diagram of Zn versus Pb, where we observe that the two large Zn values also
correspond to large Pb values. The linear regression line may be added by switching ON the
corresponding button in the Application / Graphic Specific Parameters... window.
[Figure: scatter diagram of Zn versus Pb with the linear regression line (rho = 0.885)]
(fig. 15.9-1)
- Two boxplots using the Statistics / Quick Statistics panel. Select the two variables of interest Pb
and Zn in the file Pollution / Data. Then choose the boxplot representation and switch ON the
Draw outliers button. On the boxplot you can easily detect the outliers (see the Quick Statistics
section of the User's Guide for more information).
(snap. 15.9-2)
(fig. 15.9-2)
- An omnidirectional multivariate variogram with the variogram cloud: for the sake of clarity, we
will define the same calculation parameters as before (10 lags of 1 km).

(snap. 15.9-3)

- Finally, from the Zn base map, we mask the two large values. We refresh the variogram picture
by hiding the masked information and check the following points:
  - The Zn variogram cloud is the same as the one we obtained previously in this study.
  - The cross-variogram cloud Pb/Zn presents an almost one-sided picture: there are only a few
    negative values.
  - The Pb variogram cloud still shows the same strip as the Zn variogram cloud did before
    masking the two outliers.
[Figure: variogram clouds for Zn, Pb and the cross-variogram Zn & Pb versus distance (km)]
(fig. 15.9-3)
The correct procedure, once again, is to select some pairs with high variability at small distances on
the Pb variogram and to highlight their origin. On the Zn base map, a cluster of samples is now
painted blue, but no pairs of points are represented; on the Pb base map, one obvious spider is
drawn.
[Figure: Pb base map (X: 110-140 km, Y: 485-515 km) with the selected pairs drawn as a spider]
(fig. 15.9-4)
Pb Basemap
This high Pb value will be masked to better interpret the underlying experimental variogram.
Moreover the center of the spider precisely corresponds to the only point where the Pb variable is
defined and the Zn is not.
This is confirmed by selecting this sample and asking for the Display Information (Long) option of
the menu which gives us the following information:
- X  = 120.602 km
- Y  = 511.482 km
- Pb = 33.20
- Zn = N/A
If we pick this sample from the Pb base map and mask it, then the Pb variogram cloud looks more
reasonable. The variogram picture is redrawn, suppressing the display of the variogram cloud and
producing the count of pairs for each lag instead.
[Figure: experimental variograms of Zn, Pb and the cross-variogram Zn & Pb versus distance (km), with the pair counts per lag]
(fig. 15.9-5)
Obviously, we recognize the same Zn variogram as before. The Pb variogram, as well as the
cross-variogram, shows the same number of pairs, as they are all built on the same 99 samples where
both variables are defined. We will save this new bivariate experimental variogram in a Parameter
File called Pollution Zn-Pb for the fitting step.
The Statistics / Variogram Fitting procedure is started with Pollution Zn-Pb as the experimental
variogram, and by defining a new file, also called Pollution Zn-Pb, for storing the bivariate model.
The Global window is used for fitting all the variables simultaneously. The use of the Model
Initialization does not give satisfactory results this time, especially for the Pb variogram. The
reason is the continuous increase of variability in the Pb variogram at large distances, which is not
captured by the single default spherical basic structure. In our case, choosing an Exponential
and a Linear structure in the Model Initialization greatly improves the fit. In the Manual Fitting,
some further improvements are made while ticking the Automatic Sill Fitting button:
(snap. 15.9-4)
(snap. 15.9-5)
The dotted lines on the cross-variogram show the envelope of maximal correlation allowed from
the simple variograms. Click on Run (Save).
Printing the model in the File / Parameter Files window allows a better understanding of the way
these two basic structures (only) have been used in order to fit simultaneously the three views, in
the framework of the linear coregionalization model, with their sills as the only degrees of freedom.
Model : Covariance part
=======================
Number of variables = 2
- Variable 1 : Pb
- Variable 2 : Zn

|----|-------|-------|
| Pb | 1.000 | 0.473 |
| Zn | 0.473 | 1.000 |
|____|_______|_______|

Variance-Covariance matrix :
             Variable 1  Variable 2
Variable 1     1.1347      0.5334
Variable 2     0.5334      1.8167

Variance-Covariance matrix :
             Variable 1  Variable 2
Variable 1     0.2562      0.0927
Variable 2     0.0927      0.1224

Regionalized correlation coefficient :
             Variable 1  Variable 2
Variable 1     1.0000      0.5234
Variable 2     0.5234      1.0000
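The regionalized correlation printed for a basic structure can be recomputed from its sill matrix as b_PbZn / sqrt(b_Pb b_Zn). A quick check using the matrices of the printout (the helper name is ours, not an Isatis function):

```python
from math import sqrt

def regionalized_corr(b11, b12, b22):
    """Correlation carried by one basic structure of a linear model
    of coregionalization, from the entries of its sill matrix."""
    return b12 / sqrt(b11 * b22)

# sill matrices printed above for the two basic structures (Pb, Zn)
r1 = regionalized_corr(1.1347, 0.5334, 1.8167)   # first sill matrix
r2 = regionalized_corr(0.2562, 0.0927, 0.1224)   # second sill matrix
# r2 matches the 0.5234 of the printed regionalized correlation matrix
```

Each sill matrix must also be positive semi-definite (non-negative diagonal, correlation within [-1, 1]) for the coregionalization model to be valid, which the fitting procedure enforces.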
The second basic structure (linear) is used with a coefficient (slope) of:
Advanced explanations about these coefficients are available in the Isatis Technical References, that
can be accessed in PDF format from the On-Line documentation: chapter "Structure Identification
in the Intrinsic Case", paragraph "Printout of the Linear Model of Coregionalization". The Drift
part of the Model (composed only of the Universality Condition) recalls that the interpolation step
will be performed in Ordinary Cokriging by default.
The Interpolate / Estimation / (Co-)Kriging procedure is used again to perform the cokriging step
in order to estimate both variables. The difference is that we must now:

- Choose the two variables Zn and Pb among the variables of the Input Data File (without any
selection).

The neighborhood is unchanged, bearing in mind that the kriging system for each target grid node
will therefore have twice as many lines and columns (hence be four times bigger) than in the
monovariate kriging case.
The number of grid nodes that fulfill the neighborhood constraints is still 1021 out of 1225.
Use the Zn and Zn Stdev display templates to easily display the cokriging results: for each
template, you just need to specify in the Edit window of your grid items (Raster and Isoline) that
you want to display the Cokriging variables, instead of the previous Kriging results.
(fig. 15.9-6)
(fig. 15.9-7)
You can compare this Zn estimate with the one obtained using the univariate kriging approach. To
analyze the difference between the Kriging and Cokriging estimates, we use the File / Calculator
facility to create a variable called Difference, equal to the absolute value of the difference between
the estimates.
(snap. 15.9-6)
This difference variable is now displayed using a raster representation with a color scale from 0 to 5
by steps of 0.5.
(fig. 15.9-8)
- the first area of high difference is the zone where the third high value (in Pb) is located. The
influence of this Pb value is amplified through the correlation in the model, as no corresponding
Zn data is available here;
- the second area of high difference is the zone with the first two high values, which denotes that
the link between Zn and Pb is not simply arithmetic. This leads to the following remark, which
will be illustrated in the next paragraph: even when both variables are informed at all the
samples (isotopy), cokriging carries more information than kriging. Of course, this is even more
visible when the estimated variable is scarcely sampled (heterotopy).
Rank Sample #     X        Y          Vi         Lambda V1    Lambda V2
Kriging variable V1
  1     67     123.430  499.081   7.1000e+00   1.0991e-01  -7.2616e-03
  2     68     125.590  497.970   6.9000e+00   5.3149e-02   3.6183e-03
  3     74     125.175  500.287   4.5000e+00   4.6418e-02  -2.6317e-04
  4     75     125.696  500.365   9.0000e+00   3.6423e-02   2.8239e-03
  5     18     122.615  505.190   6.2000e+00   8.6738e-02  -8.4710e-04
  6     10     119.201  506.372   6.3000e+00   5.8343e-02  -1.4425e-03
  7     26     118.513  506.580   8.3000e+00   5.3459e-02  -7.0186e-04
  8     12     118.997  507.992   6.0000e+00   4.8487e-02   4.0663e-03
  9     91     113.621  500.780   4.5000e+00   6.8475e-02  -2.6696e-03
 10     92     113.313  501.368   3.1600e+01   6.5210e-02   2.7918e-04
 11     29     113.433  498.943   2.4800e+01   7.5750e-02  -1.3067e-03
 12     30     112.929  497.597   6.0000e+00   6.8333e-02   1.8800e-03
 13     53     118.533  494.173   4.7000e+00   6.8974e-02  -3.3752e-03
 14     52     117.750  492.960   5.3000e+00   4.6189e-02   2.5384e-03
 15     66     121.842  494.336   6.6000e+00   7.4202e-02   2.5598e-04
 16     54     119.095  493.144   4.1000e+00   3.9940e-02   2.4057e-03
Sum of weights for Kriging V1                  1.0000e+00   3.0531e-16
Kriging variable V2
  1     67     123.430  499.081   2.9400e+00   2.3762e-02   1.8519e-01
  2     68     125.590  497.970   2.1900e+00  -1.1840e-02   1.5637e-02
  3     74     125.175  500.287   1.2100e+01   8.6119e-04   4.9146e-02
  4     75     125.696  500.365   2.8800e+00  -9.2407e-03   7.1469e-03
  5     18     122.615  505.190   3.7100e+00   2.7720e-03   9.5520e-02
  6     10     119.201  506.372   4.3000e+00   4.7203e-03   7.3297e-02
  7     26     118.513  506.580   4.6000e+00   2.2967e-03   6.0735e-02
  8     12     118.997  507.992   2.2400e+00  -1.3306e-02   6.3311e-03
  9     91     113.621  500.780   2.7900e+00   8.7359e-03   9.6151e-02
 10     92     113.313  501.368   2.7600e+01  -9.1356e-04   6.2315e-02
 11     29     113.433  498.943   2.5500e+01   4.2759e-03   8.9297e-02
 12     30     112.929  497.597   4.6100e+00  -6.1522e-03   4.8842e-02
 13     53     118.533  494.173   2.1800e+00   1.1045e-02   1.0397e-01
 14     52     117.750  492.960   2.3000e+00  -8.3066e-03   1.9873e-02
 15     66     121.842  494.336   1.9000e+00  -8.3765e-04   7.1548e-02
 16     54     119.095  493.144   1.7900e+00  -7.8722e-03   1.5000e-02
Sum of weights for Kriging V2                 -1.3878e-17   1.0000e+00

Variable V1
   Estimate  = 9.3118e+00
   Variance  = 2.5244e+00
   Std. Dev  = 1.5888e+00
Variable V2
   Estimate  = 7.0693e+00
   Variance  = 2.3700e+00
   Std. Dev  = 1.5395e+00
In this printout, we can read the weights for the estimation of Zn (column Lambda V1) and for the
estimation of Pb (column Lambda V2), applied to the Zn information (first set of rows) and to the
Pb information (second set of rows). We can check the impact of the universality condition, which
implies that, when estimating a main variable, the weights attached to the main information must
add up to 1 while the weights attached to the secondary variable must add up to zero. Be careful:
in general, the amplitude of the weights on the secondary variable may be misleading, since it
depends on the ratio of the standard deviations of the main and secondary variables, and in
particular on their respective units.
Using the model composed of two nested basic structures described previously, we can check that
the weights of the secondary variable are not null: hence the cokriging result differs from the
kriging one.
This property vanishes, for all variables, in the particular model of intrinsic correlation, where all
the simple and cross variograms are proportional. This is obviously not the case here, as can be
seen from the ratios between the coefficients of each basic structure:
We now wish to create a model where both variograms and the cross-variogram are proportional:
this is obviously the case when the model is reduced to one basic structure. This is why we now
return to the variogram fitting stage using the exponential basic structure alone with range 5.3 km,
switch on the Automatic Sill Fitting button and save the Model in the File Pollution Zn-Pb (one
structure).
When estimating grid node [11, 21] by cokriging, using this new model Pollution Zn-Pb (one
structure) and the same neighborhood Pollution as before, we can ask for a printout of the weights
and obtain the following result:
Display of the (Co-) Kriging weights
====================================

Rank Sample #     X        Y          Vi         Lambda V1    Lambda V2
Kriging variable V1
  1     67     123.430  499.081   7.1000e+00   1.2538e-01   1.0180e-17
  2     68     125.590  497.970   6.9000e+00   5.4894e-02   1.0879e-17
  3     74     125.175  500.287   4.5000e+00   3.1820e-02  -5.8418e-18
  4     75     125.696  500.365   9.0000e+00   4.2451e-02   3.1484e-17
  5     18     122.615  505.190   6.2000e+00   9.8050e-02   1.0205e-17
  6     10     119.201  506.372   6.3000e+00   5.0194e-02  -5.1775e-18
  7     26     118.513  506.580   8.3000e+00   4.8354e-02   2.2877e-17
  8     12     118.997  507.992   6.0000e+00   5.4225e-02   9.6364e-18
  9     91     113.621  500.780   4.5000e+00   6.1411e-02   1.5274e-17
 10     92     113.313  501.368   3.1600e+01   6.0364e-02   2.4111e-18
 11     29     113.433  498.943   2.4800e+01   6.1955e-02   1.7437e-17
 12     30     112.929  497.597   6.0000e+00   6.9428e-02  -3.1694e-18
 13     53     118.533  494.173   4.7000e+00   6.8763e-02   5.0959e-18
 14     52     117.750  492.960   5.3000e+00   5.2616e-02   6.3284e-18
 15     66     121.842  494.336   6.6000e+00   8.7005e-02   2.0962e-17
 16     54     119.095  493.144   4.1000e+00   3.3090e-02   9.7171e-18
Sum of weights for Kriging V1                  1.0000e+00   1.5830e-16
Kriging variable V2
  1     67     123.430  499.081   2.9400e+00   2.5873e-17   1.2538e-01
  2     68     125.590  497.970   2.1900e+00   1.3824e-17   5.4894e-02
  3     74     125.175  500.287   1.2100e+01   0.0000e+00   3.1820e-02
  4     75     125.696  500.365   2.8800e+00   3.6371e-18   4.2451e-02
  5     18     122.615  505.190   3.7100e+00   0.0000e+00   9.8050e-02
  6     10     119.201  506.372   4.3000e+00  -1.9738e-17   5.0194e-02
  7     26     118.513  506.580   4.6000e+00   4.6513e-17   4.8354e-02
  8     12     118.997  507.992   2.2400e+00  -8.1636e-18   5.4225e-02
  9     91     113.621  500.780   2.7900e+00  -2.5879e-17   6.1411e-02
 10     92     113.313  501.368   2.7600e+01   5.2087e-17   6.0364e-02
 11     29     113.433  498.943   2.5500e+01   3.6930e-17   6.1955e-02
 12     30     112.929  497.597   4.6100e+00  -2.0138e-17   6.9428e-02
 13     53     118.533  494.173   2.1800e+00   3.2378e-17   6.8763e-02
 14     52     117.750  492.960   2.3000e+00  -4.0209e-18   5.2616e-02
 15     66     121.842  494.336   1.9000e+00  -1.3319e-17   8.7005e-02
 16     54     119.095  493.144   1.7900e+00  -1.4818e-17   3.3090e-02
Sum of weights for Kriging V2                  1.0517e-16   1.0000e+00

Variable V1
   Estimate  = 8.8939e+00
   Variance  = 2.8868e+00
   Std. Dev  = 1.6991e+00
Variable V2
   Estimate  = 6.1524e+00
   Variance  = 2.7307e+00
   Std. Dev  = 1.6525e+00
This time, we can easily check that the weights attached to the secondary variable are
systematically zero (up to numerical precision) and that, therefore, the cokriging result coincides
with that of kriging. However, this property fails as soon as one sample is not informed for both
variables: this can be checked for target grid node [10, 31], where the sample of rank 2 carries the
Pb information but not the Zn, as can be seen in the next printout.
Display of the (Co-) Kriging weights
====================================
Weights for option : Punctual

Rank Sample #     X        Y       Lambda V1    Lambda V2
Kriging variable V1
  1     97     118.522  509.148
  2     13     118.123  508.007
  3      1     119.504  509.335
  4     16     120.518  508.675
  5      2     120.447  510.002
  6      3     120.602  511.482
  7     98     117.882  513.039
  8     99     111.185  503.398
  9    100     113.336  505.146
 10     92     113.313  501.368
Sum of weights for Kriging V1
Kriging variable V2
  1     97     118.522  509.148  -1.6788e-03   4.9029e-01
  2     13     118.123  508.007  -1.2601e-03   6.9908e-02
  3      1     119.504  509.335  -1.0935e-03   8.5249e-02
  4     16     120.518  508.675  -9.0639e-04  -1.6123e-02
  5      2     120.447  510.002  -1.4007e-02   4.9935e-02
  6      3     120.602  511.482   3.1473e-02   7.4666e-02
  7     98     117.882  513.039  -6.2934e-03   1.4131e-01
  8     99     111.185  503.398  -1.9791e-03   3.1221e-02
  9    100     113.336  505.146  -2.0493e-03   3.8902e-02
 10     92     113.313  501.368  -2.2059e-03   3.4649e-02
Sum of weights for Kriging V2    -2.1684e-18   1.0000e+00

Variable V1
   Estimate  = 8.8555e+00
   Variance  = 1.8112e+00
   Std. Dev  = 1.3458e+00
Variable V2
   Estimate  = 5.5249e+00
   Variance  = 1.7033e+00
   Std. Dev  = 1.3051e+00
15.11 Simulations
Kriging provides the best estimate of the variable at each grid node, but in doing so it does not
produce an image of the true variability of the phenomenon. Risk analysis usually requires
computing quantities from a model that reproduces the actual variability; in this case, advanced
geostatistical techniques such as simulations have to be used.

This is for instance the case here if we want to estimate the probability that Zn exceeds a given
threshold. Because thresholding is not a linear operation applied to the concentration, applying the
threshold to the kriged result (which is a linear operator) can lead to an important bias. Simulation
techniques generally require a multigaussian framework: each variable therefore has to be
transformed into a normal distribution beforehand, and the simulation results must be
back-transformed to the raw distribution afterwards.
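The bias can be seen on a two-line example: the probability of exceeding a threshold is not the indicator of the mean exceeding it, because expectation and thresholding do not commute (toy numbers, not case-study data):

```python
values = [1.0, 2.0, 3.0, 30.0]          # skewed toy distribution
threshold = 20.0

# probability of exceedance, computed on the distribution itself
p_exceed = sum(v > threshold for v in values) / len(values)

# thresholding the mean (a linear summary) instead
mean_val = sum(values) / len(values)
indicator_of_mean = 1.0 if mean_val > threshold else 0.0
# p_exceed is 25 % while the thresholded mean says 0 %: applying a
# nonlinear cutoff to a linear estimate loses the tail behaviour
```

Simulations avoid this by thresholding each realization first and averaging afterwards, which is exactly what the post-processing step below does.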
In this paragraph, we focus on the Zn variable alone. The first task consists in transforming the raw
distribution into a normal one: this requires fitting the transformation function called the
Gaussian Anamorphosis. Using the Statistics / Gaussian Anamorphosis Modeling procedure, we
can fit and display this function and transform the raw variable Zn into a new gaussian variable Zn
(Gauss).
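Outside Isatis, the empirical part of such a transform (before any Hermite-polynomial modeling) is a normal-score mapping: sort the data and send each rank to the matching gaussian quantile. A minimal sketch with the standard library, ignoring ties and declustering weights:

```python
from statistics import NormalDist

def normal_scores(values):
    """Map each value to the gaussian quantile of its rank
    (empirical anamorphosis; ties and declustering ignored)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        # (rank + 0.5)/n avoids the infinite quantiles at 0 and 1
        scores[i] = NormalDist().inv_cdf((rank + 0.5) / n)
    return scores

zn = [4.1, 5.3, 6.6, 9.0, 31.6]      # toy raw values
g = normal_scores(zn)                # symmetric gaussian scores
```

Note that the mapping is monotonic, so the ordering of the samples is preserved; the Hermite expansion fitted by Isatis is what makes the transform invertible for the back-transformation step.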
The first left icon in the Interactive Fitting window overlays the experimental anamorphosis with
its model expanded in terms of Hermite polynomials: this step function gives the correspondence
between each one of the sorted data (vertical axis) and the corresponding frequency quantile in the
gaussian scale (horizontal axis). A good correspondence between the experimental values and the
model is obtained by choosing an appropriate number of Hermite polynomials; by default Isatis
suggests the use of 30 polynomials, but you can modify this number in Nb of Polynomials.
Close the Fitting Parameters window and click on the Point Anamorphosis button to save the
parameters of this anamorphosis in a new set name called Pollution Zn. The number of
polynomials, the absolute interval of definition and the practical interval of definition are saved in
the Parameter File and you may check their values in the printout.
Switch on the Gaussian Transform to save the new gaussian variable as Zn (Gauss) in the Output
area. Three transformation options are available; we recommend the Frequency Inversion method
in this case. Finally click on Run.
(snap. 15.11-1)
[Figure: gaussian anamorphosis of Zn (raw values versus gaussian values)]
(fig. 15.11-1)
Using the Statistics / Exploratory Data Analysis on this new variable, we can first ask for its basic
statistics and check the correctness of the transformation: the mean is 0.00 and the variance is
0.99. We then display the histogram of this variable between -3 and 3 using 30 classes and check
that the distribution is symmetric, with a minimum of -2.42 and a maximum of 2.42. The two high
Zn values are no longer anomalous on the gaussian transform. As a consequence, the
experimental variogram is more structured. The following one is computed using the same
calculation parameters as in the univariate case: 10 lags of 1 km.
(fig. 15.11-2)
(fig. 15.11-3)
(snap. 15.11-2)
We are now able to perform the conditional simulation step using the Turning Bands method
(Interpolate / Conditional Simulations / Turning Bands). A conditional simulation corresponds to a
grid of values having a normal distribution and obeying the model. Moreover, it honors the data
points as it uses a conditioning step based on kriging which requires the definition of a
neighborhood. We use the same Pollution neighborhood parameters as in the kriging step. The
additional parameters consist in:
- the name of the Macro Variable: each simulation is stored in this Macro Variable with an index
attached,
- the Gaussian back-transformation, performed using the anamorphosis function Pollution Zn,
- the seed used for the random number generator (423141 by default). This seed allows you to
perform simulations in several steps: each step will differ from the previous one if the seed is
modified.
The final parameters are specific to the simulation technique. When using the Turning Bands
method, we simply need to specify the number of bands: a rule of thumb is to enter a number much
larger than the count of rows or columns in the grid, and smaller than the total number of grid
nodes; 100 bands are chosen in our exercise.
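A close cousin of the turning bands idea can be sketched with the random cosine (spectral) method: a stationary gaussian field is approximated by a sum of random cosines, and more components give a smoother reproduction of the target covariance. This is an illustrative sketch, not the Isatis turning bands implementation; the gaussian covariance and the parameters are invented:

```python
import random
from math import cos, pi, sqrt

def spectral_field(n_components, scale, rng):
    """Return a function (x, y) -> simulated value for a gaussian
    random field with covariance exp(-h^2 / scale^2), built from
    random cosines (spectral method): frequencies drawn from the
    spectral measure, phases uniform on [0, 2*pi)."""
    sigma = sqrt(2.0) / scale
    comps = [(rng.gauss(0, sigma), rng.gauss(0, sigma),
              rng.uniform(0, 2 * pi)) for _ in range(n_components)]
    amp = sqrt(2.0 / n_components)
    def z(x, y):
        return amp * sum(cos(wx * x + wy * y + phi) for wx, wy, phi in comps)
    return z

rng = random.Random(423141)            # the manual's default seed, reused here
# 400 independent realizations evaluated at one point: the marginal
# distribution should be close to standard normal (unit variance)
values = [spectral_field(400, 4.0, rng)(0.0, 0.0) for _ in range(400)]
var = sum(v * v for v in values) / len(values)
```

The conditioning step that makes such a field honor the data is a kriging correction, which is why the simulation panel asks for the same neighborhood as the kriging step.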
(snap. 15.11-3)
The results consist of 20 realizations stored in one Macro Variable in the Grid Output File. The
clear differences between several realizations are illustrated on the next graphic.
(fig. 15.11-4)
The Tools / Simulation Post Processing panel provides a procedure for the post processing of a
Macro Variable. Considering the 20 conditional simulations, we ask the procedure to perform
sequentially the following tasks:
determination of the cutoff maps giving the probability that Zn exceeds different thresholds
(20%, 25%, 30% and 35%).
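The probability maps are simple frequency counts over the realizations, and the mean map is the node-wise average; both can be sketched directly (the `sims` array below holds synthetic values standing in for the 20 back-transformed Zn realizations):

```python
import numpy as np

# one row per realization, one column per grid node (synthetic data,
# for illustration only)
rng = np.random.default_rng(0)
sims = rng.lognormal(mean=3.0, sigma=0.5, size=(20, 8))

cutoffs = [20, 25, 30, 35]
# P[Zn > cutoff] at each node = fraction of realizations above the cutoff
proba = {c: (sims > c).mean(axis=0) for c in cutoffs}
# analogue of the "Simulation Zn Mean" map
mean_map = sims.mean(axis=0)
```

Note that, by construction, the probability at any node can only decrease as the cutoff increases.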
(snap. 15.11-4)
(snap. 15.11-5)
(snap. 15.11-6)
(snap. 15.11-7)
The map corresponding to the mean of the 20 simulations in the raw scale is displayed with the
same color scale as for each of the estimated maps. The mean of a large number of simulations converges towards kriging.
(fig. 15.11-5: map "Simulation Zn Mean" of the mean of the 20 simulations, Zn in the raw scale)
The following graphics contain the probability maps corresponding to the cutoffs 20% and 30%.
As expected, the probability decreases as the cutoff increases.
(fig. 15.11-6: probability maps Iso-Proba Zn{20} and Iso-Proba Zn{30}, probability scale from 0.0 to 1.0)
The case study illustrates the use of Polygons, which serve either to
delineate the subpart of a regular grid where the local estimation must
take place, or to limit the area on which a global estimation has to be
performed. It is recommended to read the Dealing With Polygons
chapter of the Beginner's Guide prior to running this case study, in
order to become familiar with this facility.
16.1 Introduction
As stated in the reference book, several research laboratories from the countries surrounding the
North Sea (ICES 1997) joined their efforts in order to evaluate the fish stocks. The procedure
consists of surveys carried out at the same period of each year (February), during which the indices
of abundance at age for different species of fish are measured. In this case study, we will concentrate
on the 1991 survey covering the North Sea to the east of Scotland, and on the haddock of the first
category of age (less than 21 cm).
The survey was carried out using a "Grande Ouverture Verticale" (GOV) trawl: a single 60-minute
tow was conducted within each ICES statistical rectangle of the survey area. The dimensions of
these rectangles are a degree of latitude by half a degree of longitude. Therefore the exact
dimensions depend on the latitude: a general conversion rule is applied for transforming longitude,
based on the cosine of a reference latitude (55N).
The initial information provided by the survey consists of fish numbers (by species, by length and age)
and certain fishing gear parameters that enable a standard fish density unit to be obtained. These
parameters include the distance towed and the wingend spread.
The fish catch (in numbers) is converted to areal fish density (numbers per nmil2) by dividing it by
the product of the distance towed and the wingend distance (swept-area method, Gunderson, 1993).
Note - Some numerical results can differ from the reference book.
(snap. 16.1-1)
The data are provided in the ASCII file fish_survey.hd, which includes the classical Isatis header. It
contains the following information:
- X and Y refer to the midpoint of the haul start and end positions, converted to an absolute measure in nmil;
- Dist towed and Wingend refer to the fishing gear parameters described above. Note that the distance towed is given in nmil whereas the wingend is provided in meters.
The information is loaded in the file Survey within a new directory North Sea.
(snap. 16.1-2)
The next operation consists in calculating the areal fish density (using the File / Calculator facility)
that will be stored in a new variable called Fish areal density. This variable is simply obtained by
dividing the initial fish catch in numbers (Haddock 1) by the gear parameters (Dist towed and
Wingend), once the last parameter (Wingend) is converted from meter to nautical mile (divided by
1852).
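The calculator operation amounts to the following one-line formula (the function name and the haul values in the example are hypothetical; the 1852 m/nmil conversion and the swept-area logic are the ones described above):

```python
# Swept-area density (Gunderson, 1993): catch divided by the swept area,
# i.e. distance towed times the wingend spread, with the wingend
# converted from meters to nautical miles (1 nmil = 1852 m).
def areal_density(catch, dist_towed_nmil, wingend_m):
    return catch / (dist_towed_nmil * (wingend_m / 1852.0))

# hypothetical haul: 500 fish, 1.8 nmil towed, 20 m wingend spread
d = areal_density(500, 1.8, 20.0)   # fish per nmil^2
```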
(snap. 16.1-3)
The file contains 59 samples, with a mean fish areal density of 13772. Using the Statistics /
Exploratory Data Analysis facilities, the next figure shows the spread of the data using a
representation where the symbols are proportional to the fish density.
(fig. 16.1-1: proportional-symbol base map of the fish areal density, X and Y in nmil)
The histogram performed with 10 classes between 0 and 90000 shows a positive skewness with a
large number of zero (or small) values (to be compared to the histogram in Fig 4.2.3 of the
reference book):
Nb Samples: 59, Minimum: 0.00, Maximum: 82327.34, Mean: 13772.24, Std. Dev.: 19645.56
(fig. 16.1-2: histogram of the fish areal density)
The next task consists in calculating the experimental variogram. The variogram is computed for 15
lags of 15 nmil with a 7.5 nmil tolerance, assuming isotropy. The next figure shows the
experimental variogram together with the count of pairs obtained for each lag. The variogram is
saved in a Standard Parameter File called Fish density.
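The experimental variogram computation can be sketched as follows: half the mean squared increment of all pairs whose separation falls in each lag class. This is a generic illustration of the calculation (lag 15 nmil, 15 lags, 7.5 nmil tolerance, isotropic), not Isatis' code; the demo data are synthetic:

```python
import numpy as np

def experimental_variogram(coords, values, lag=15.0, n_lags=15, tol=7.5):
    """Isotropic experimental variogram: bin all pairs by separation
    distance (lag center +/- tolerance) and average half the squared
    increments in each bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    i, j = np.triu_indices(len(values), k=1)           # each pair once
    h = np.linalg.norm(coords[i] - coords[j], axis=1)  # pair distances
    gamma = np.full(n_lags, np.nan)
    pairs = np.zeros(n_lags, dtype=int)
    for k in range(n_lags):
        center = (k + 1) * lag
        sel = (h >= center - tol) & (h < center + tol)
        pairs[k] = int(sel.sum())
        if pairs[k] > 0:
            dz = values[i[sel]] - values[j[sel]]
            gamma[k] = 0.5 * float(np.mean(dz * dz))
    return gamma, pairs

# demo on spatially uncorrelated data: every lag should sit near the variance
rng = np.random.default_rng(1)
demo_xy = rng.uniform(0.0, 200.0, size=(300, 2))
demo_z = rng.normal(0.0, 1.0, size=300)
gamma, pairs = experimental_variogram(demo_xy, demo_z)
```

For pure noise the variogram fluctuates around the sample variance at every lag, which is the "pure nugget" signature.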
106
6.0e+008
Variogram : Fish areal density
124
5.0e+008
68
128
125
132
4.0e+008
121110
137
47
90
3.0e+008
32
107
92
2.0e+008
1.0e+008
0.0e+000
1
0
50
100
150
Distance (nmil)
200
(fig. 16.1-3)
We use the Statistics / Variogram Fitting procedure to fit an isotropic model, which will finally be
stored in the Standard Parameter File also called Fish density.
To remain compatible with the reference book, we define a model composed of a nugget effect and
a spherical basic structure (with a range of 55 nmil) and use the Automatic Sill Fitting option to get
the optimal values for the sills by minimizing the distance between the model and the values of the
experimental variogram, cumulated over all the calculated lags. The same weighting function is
applied for each lag of the experimental variogram. The fitted model accounts for:
- a nugget effect, whose sill is fitted automatically;
- a spherical basic structure with a range of 55 nmil and a sill of 3.98e+08.
Both the experimental variogram and the model are presented in the following figure.
(fig. 16.1-4: experimental variogram of the fish areal density and fitted model, distance in nmil)
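The Automatic Sill Fitting step can be illustrated simply: with the range fixed, the model gamma(h) = c0 + c1·Sph(h; range) is linear in the two sills, so a least-squares solve over the experimental lags returns them directly. This is a sketch only (Isatis additionally weights the lags and constrains the sills to be non-negative); the nugget value 1.0e8 in the synthetic check is a hypothetical placeholder, only the 55 nmil range and the 3.98e+08 spherical sill come from the case study:

```python
import numpy as np

def fit_sills(h, gamma, range_=55.0):
    """Least-squares fit of the nugget sill c0 and spherical sill c1,
    the spherical range being fixed."""
    h = np.asarray(h, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    r = np.minimum(h / range_, 1.0)
    sph = 1.5 * r - 0.5 * r ** 3               # spherical scheme, unit sill
    A = np.column_stack([np.ones_like(h), sph])
    (c0, c1), *_ = np.linalg.lstsq(A, gamma, rcond=None)
    return float(c0), float(c1)

# sanity check on synthetic values built from known sills
h = np.arange(15.0, 240.0, 15.0)
true_c0, true_c1 = 1.0e8, 3.98e8               # hypothetical nugget, case-study sill
r = np.minimum(h / 55.0, 1.0)
gamma_model = true_c0 + true_c1 * (1.5 * r - 0.5 * r ** 3)
c0, c1 = fit_sills(h, gamma_model)
```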
An ASCII Polygon File delineates the area covered by the trawl survey data. The next paragraph
illustrates the contents of this file. One can first notice the double nested hierarchy:
- the polygon level, which corresponds to the lines starting with the ** symbol;
- the contour level, which corresponds to the lines starting with the * symbol. This level contains an additional flag indicating whether the contour stands for a hole or not.
#
# Polygons Dimension = 2D
#
# polygon_field = 1 , type = name
# polygon_field = 2 , type = color_R
# polygon_field = 3 , type = color_G
# polygon_field = 4 , type = color_B
# polygon_field = 5 , type = pattern
#
# ++++++++++ ----------------
#
# contour_field = 1 , type = hole
# contour_field = 2 , type = name
#
# vertex_field = 1 , type = x
# vertex_field = 2 , type = y
#
# ++++++++++ ----------
#
** North Sea 125 190 255
* 0 East of Scotland
-55.17 3334.26
-65.59 3342.84
.../...
172.07 3330.00
-54.55 3330.00
* 1
-57.30 3627.90
-51.11 3628.50
.../...
-45.94 3630.12
-45.63 3639.18
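A minimal reader for this two-level format can be written in a few lines (a hypothetical illustration of the structure, not Isatis' importer: '**' opens a polygon, '*' opens a contour carrying its hole flag, and the remaining non-comment lines are x y vertex pairs):

```python
def parse_polygons(lines):
    """Parse the two-level ASCII polygon format into a list of
    {header, contours} dictionaries."""
    polygons = []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue                              # header comment
        if line.startswith("**"):                 # new polygon record
            polygons.append({"header": line[2:].strip(), "contours": []})
        elif line.startswith("*"):                # new contour (hole flag, name)
            polygons[-1]["contours"].append(
                {"flags": line[1:].strip(), "vertices": []})
        else:                                     # vertex line: "x y"
            x, y = (float(t) for t in line.split()[:2])
            polygons[-1]["contours"][-1]["vertices"].append((x, y))
    return polygons

sample = [
    "# Polygons Dimension = 2D",
    "** North Sea 125 190 255",
    "* 0 East of Scotland",
    "-55.17 3334.26",
    "-65.59 3342.84",
    "* 1",
    "-57.30 3627.90",
]
polys = parse_polygons(sample)
```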
This polygon is read using the File / Polygons Editor facility. This application opens as a graphic
window with a large Application Menu. We must first choose the New Polygon File option of the
Application menu to create a file where the 2D polygon attributes (vertices, name and color) will be
stored: the file is called Polygon in the directory North Sea.
(snap. 16.1-4)
The next task consists in loading the contents of the ASCII Polygon File using the ASCII Import
facility in the Application Menu.
(snap. 16.1-5)
The polygon (with its two contours) now appears in the graphic window. We can easily distinguish
the eastern coast of Scotland as well as the two sets of islands.
(snap. 16.1-6)
The final action consists in performing the SAVE and RUN task in order to store the polygon file in
the general data architecture of Isatis. To check this file, we can simply use the Data File Manager
utility which provides basic information:
- the file belongs to a new type, called 2D-Polygons (which is very similar to the Points 2D structure). The Information button used on this file simply recalls that it contains a single polygon (constituted of 123 vertices);
- the file contains only one sample in our case and several variables (created automatically):
  - the traditional variable Sample Number gives the rank of the sample in the file;
  - the coordinates X and Y give the location of the anchor where the label of the polygon is attached. By default, the label is located at the gravity center of the polygon;
  - the NAME corresponds to the label given to each polygon and printed at the anchor location (North Sea in our case);
  - the SURFACE measures the actual surface of the polygon. Note that this surface is calculated exactly, taking the hole into account, and therefore it does not require any grid for discretization. In our case, the polygon surface amounts to 63949 nmil2.
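The exact, discretization-free surface calculation mentioned above is the classical shoelace formula applied to the outer contour minus the holes. A sketch on a toy polygon (the vertices are hypothetical, not the North Sea contour):

```python
def shoelace(vertices):
    """Signed area of a closed polygon via the shoelace formula."""
    area = 0.0
    n = len(vertices)
    for k in range(n):
        x0, y0 = vertices[k]
        x1, y1 = vertices[(k + 1) % n]
        area += x0 * y1 - x1 * y0
    return 0.5 * area

def polygon_surface(outer, holes=()):
    # exact surface: outer contour minus every hole, no grid needed
    return abs(shoelace(outer)) - sum(abs(shoelace(h)) for h in holes)

# toy example: a 10 x 10 square with a 2 x 2 hole -> surface 96
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
hole = [(2, 2), (4, 2), (4, 4), (2, 4)]
s = polygon_surface(square, [hole])
```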
16.2 Mapping
This part corresponds to the traditional estimation step, carried out on the nodes of a regular grid
which will cover the whole area, and will allow graphic representation of the fish density with its
local variations: hence the name of local estimation.
(snap. 16.2-1)
Note - The resolution here is twice finer (in each direction) than in the reference book.
(snap. 16.2-2)
The selection, stored as a new variable of the Grid file, will be called North Sea. The procedure
also tells us that, out of the 4875 grid nodes, only 2610 belong to the polygon. This number gives us
a second coarse estimation of the surface of the polygon by multiplying it by the elementary cell
surface (5 x 5 nmil2): i.e. 65250 nmil2. The difference between this number and the exact surface,
whose value is 63949 nmil2, comes from the discretization.
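The coarse check described above is a one-line computation, using the numbers quoted in the text:

```python
# coarse surface estimate from the grid selection versus the exact value
nodes_inside = 2610            # grid nodes flagged by the North Sea selection
cell_area = 5 * 5              # elementary cell surface, nmil^2
approx_surface = nodes_inside * cell_area     # 65250 nmil^2
exact_surface = 63949                          # from the polygon SURFACE variable
rel_error = (approx_surface - exact_surface) / exact_surface   # ~2%
```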
(snap. 16.2-3)
The procedure creates two variables defined on the active grid nodes: the Estimation, which
contains the estimation map, and the St. deviation, which gives the square root of the estimation
variance. The two following maps are produced by overlaying various Display facilities: a raster, a
basemap using proportional symbols, and the polygon.
(fig. 16.2-1: raster map of the Estimation, overlaid with the proportional-symbol basemap and the polygon)
(fig. 16.2-2: raster map of the St. deviation)
- the unweighted estimation, where the mean fish areal density (calculated from the 59 samples) is considered as representative of the variable over the whole polygon. The global estimation of the abundance is then obtained by raising the arithmetic mean fish density (13772 fish per nmil2) to the area of the polygon (63949 nmil2), for a result of 881 million;
- the unweighted estimation variance, expressed through the coefficient of variation CViid = s / (z sqrt(N)), with s the standard deviation of the N = 59 sample values and z their mean;
- the weighted estimation, through kriging, where the samples are weighted optimally with respect to the appropriate variogram model.
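The unweighted branch is a two-line computation; the sketch below plugs in the statistics quoted earlier in this chapter (mean 13772.24, standard deviation 19645.56, 59 samples, surface 63949 nmil2):

```python
import math

mean_density = 13772.24   # arithmetic mean of the 59 fish densities (per nmil^2)
std_density = 19645.56    # their standard deviation
n_samples = 59
surface = 63949.0         # polygon surface in nmil^2

# unweighted global abundance: mean density raised to the surface (~881 million)
abundance = mean_density * surface
# CViid = s / (z sqrt(N)), ignoring any spatial structure
cv_iid = std_density / (mean_density * math.sqrt(n_samples))
```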
16.3.1 Discretization
The global estimation requires each polygon to be associated with an internal discretization grid.
The parameters of this discretization grid can be chosen in the File / Polygons Editor facility (see
the Polygons section from the Beginner's Guide).
Once the polygon file of interest has been defined (Polygon), you click with the right-button of the
mouse in the graphic area and ask for Edit Polygons option. Then you have to select the North Sea
polygon. The menu of the graphic area is now turned into the Polygons Edit Menu which offers new
options, including the Edit Discretization one.
A panel appears where you can define the discretization grid interactively. It is strongly
recommended to use the graphic control, in particular to show the contributing nodes. The
discretization grid always covers the whole polygon; the union of the contributing cells also covers
some area which does not belong to the polygon, and this additional area should be as small as
possible. At the bottom of the panel, this added surface is calculated interactively (expressed as a
percentage of the actual polygon surface).
Note - It is possible to define a discretization grid more easily using the Application / Discretize
facility, but we choose the manual option in this case to illustrate how to define each grid
parameter exactly.
You may now choose the parameters of the grid, by selecting:
- the rotation angle that you wish to apply to the grid: this option is particularly suited to elongated grids whose elongation direction does not match one of the main axes of the system.
In this case study, we will test the impact of the discretization on the global estimation results. In
this first step, we choose a discretization grid with the following characteristics:
- Nodes number: 35 x 50
- Mesh size: 10 x 10
(snap. 16.3-1)
which leads to an added surface of around 13.8% of the exact polygonal surface.
In order to store the characteristics of this discretization grid, you simply need to run the SAVE and
RUN option of the Application Menu.
The results are stored in the Polygon file using two variables: Estimation, for the estimated fish
density over the polygon, and St. deviation, for the square root of the variance of this estimate.
This procedure requires the definition of the Neighborhood which will be taken into account for
selecting the data points involved in the estimation of each polygon. These parameters are saved in
the Standard Parameter File called Polygon.
(snap. 16.3-2)
The characteristics of this neighborhood are specific to the global estimation performed on polygons (hence the Neighborhood Type). In particular, it gives the possibility of selecting all the data
points which lie within the polygon, possibly extended by a rotated ellipsoid, and up to a maximum
count of points. Here the ellipsoid dimensions are set to zero and all the data strictly included within
the polygon are used. No limitation is imposed on the count of data.
(snap. 16.3-3)
To check these results, we must use the File / Print facility which produces the contents of the
selected variables for the whole set of samples. Used on the Polygon file and for the two variables
described previously, this feature will produce the following results:
These results lead to an estimation of the total abundance of 922 million fish and a coefficient of
variation (kriging standard deviation divided by the estimate) of 14.3%.
Note - The computing time used for the estimation is proportional to the count of nodes of the
discretization which belong to the polygon. As far as the standard deviation is concerned, the time
is proportional to the square of the count of discretization nodes which belong to the polygon.
The following table summarizes the results for the 10, 5 and 1 nmil side cells.

Discretization   Estimation   St. dev.   CV
10 nmil          14417.64     2068.35    14.3%
5 nmil           14405.62     2049.55    14.2%
1 nmil           14402.56     2045.64    14.2%
The gain in accuracy for both the abundance and the coefficient of variation is too small to justify
the increase in computing time. A reasonable balance corresponds to the first trial, with the 10 nmil
discretization grid mesh.
17.Acoustic Survey
This case study is based on an acoustic survey carried out in the northern
North Sea (western half of ICES division IVa) in July 1993 in order to
evaluate the total biomass, total numbers and numbers at age of the
North Sea herring stock. It has been kindly provided by the Herring
Assessment Group of the International Council for the Exploration of
the Sea (ICES). It has also been used as a case study in the book
Geostatistics for Estimating Fish Abundance by J. Rivoirard, K.G.
Foote, P. Fernandes and N. Bez. This book will serve as a reference for
comparison in this case study.
The case study illustrates the use of Polygons to limit the area on
which a global estimation has to be performed. The aim of this study is
to carry out a global estimation with a large number of data, which
requires the domain to be subdivided in strata (polygons). The main
issue arises in the way the results per strata have to be combined, both
for estimation and variance estimation.
17.1 Introduction
As stated in the reference book, this data set has been taken from the six-year acoustic survey of the
Scottish North Sea. The 1993 data constitute 938 values of an absolute abundance index, at regular
points along the survey cruise track. This cruise track is oriented along systematic parallel transects
spaced 15 nautical miles (nmil) apart, running east-west and vice versa, progressing in a northerly
direction on the east of the Orkney and Shetland Islands and southward down the west side. The
acoustic index is proportional to the average fish density.
The position of an acoustic index was taken every 2.5 nmil, initially recorded in a longitude and
latitude global positioning system and later converted into nmil using a simple transformation of
longitude based on the cosine of the latitude.
The file contains the following variables:
- Year, Month, Day, Hour, Minute and Second give the exact date at which the measurement has been performed. They will not be used in this case study;
- Fish is the variable containing the fish abundance and will be the target variable throughout this study;
- East and West are two selections which separate the sub-part of the data belonging to the eastern part of the North Sea from the western part: the boundary corresponds to a broken line going through the Orkney and Shetland Islands.
The data are provided in the Isatis installation directory/Datasets/Acoustic_survey and in the File
called Data.
(snap. 17.1-1)
17.1.2 Statistics
Getting info on the file Data tells us that the data set contains 938 points, extending in a square area
with a 200 nmil edge. The next figure represents the acoustic survey, where the points located in the
East part (1993 - East selection) are displayed using dark circles whereas the points in the West
part (1993 - West selection) are represented with plus signs.
(fig. 17.1-1: base map of the acoustic survey, East samples as dark circles, West samples as plus signs)
The differences between East and West areas show up in the basic statistics of the fish abundance:

                   All data   East      West
Count of samples   938        606       332
Minimum            0.00       0.00      0.00
Maximum            533.36     533.36    306.48
Mean               8.27       8.16      8.47
Variance           1078.49    1189.48   875.84
Skewness           9.07       9.93      6.33
CV (sample)        3.97       4.23      3.49
The following figure shows the histogram of the Fish variable. The data are highly positively
skewed with 50% of zero values.
Nb Samples: 938, Minimum: 0.00, Maximum: 533.36, Mean: 8.27, Std. Dev.: 32.84
(fig. 17.1-2: histogram of the Fish variable)
The next figure represents the log of the acoustic index + 1 in proportional display, zero values
being displayed with plus signs whereas non-zero values are displayed using circles. It is similar to
the 1993 display in figure (4.3.1) on page 84 of the reference manual.
(fig. 17.1-3: proportional display of log(acoustic index + 1))
17.1.3 Variography
Two omnidirectional variograms were calculated separately on data coming from the east and the
west areas. For the sake of simplicity, the variograms of this case study are calculated on the raw
variables, with a lag value of 2.5 nmil, 30 lags and a tolerance on distance of 50%. Each
experimental variogram has then been fitted using the same combination of a nugget effect and an
exponential basic structure: the sill of each component has been fitted automatically. The next
figure shows the two experimental variograms and the corresponding models (West and East).
(fig. 17.1-4)
(fig. 17.1-5)
Note that, as we already knew, the variances of the two subsets are quite different (875 for West and
1189 for East). The fitted models have the following parameters:
Dataset   Nugget   Exp - Range   Exp - Sill   Total Sill   Nugget/Total
West      396      27            787          1183         33%
East      842      20            469          1311         64%
There is enough evidence of differences between the east and west regions, particularly regarding
the proportion of nugget; it is therefore advisable to stratify the whole data set into east and west
regions.
(fig. 17.2-1: map of the 26 small strata S1 to S26 over the survey area)
This first set of polygons corresponding to small strata is read from the separate ASCII Polygon
File called small_strata.hd. The procedure File / Polygons Editor is used to import these polygons
into a new Polygon File Small Strata: some parameters (label contents and position, filling...) are
already stored in the ASCII File. The procedure allows a visualization of these polygons, together
with the survey data used as control information (see the paragraph on Auxiliary Data in the
Polygons section of the Beginner's Guide).
The polygons are named from S1 to S26. Using the File / Selection / Intervals menu, we create two
selections on the Sample Number to distinguish the first 13 polygons (from S1 to S13) which are
located in the East region (selection East) from the last 13 polygons (from S14 to S26) which are
located in the West region (selection West).
The polygons constitute a partition of the domain of integration (no polygon overlap) and the total
surface is then obtained as the sum of the surface of each polygon: 39192 nmil2.
(snap. 17.2-1)
(fig. 17.2-2: map of the Fish estimation per stratum, color scale from 0 to 30)
(snap. 17.2-2)
Note - For comparison purposes, the dilation radius of the neighborhood is brought back to 0 for
this example.
When the global estimation has been processed, it suffices to use the traditional Print feature to
dump out the value of the Arithmetic Mean variable for each polygon. We can check the
exactness of the comparison for the first polygon: 6.36.
17.2.4 Comparison
It is now time to review the results obtained for all polygons by using the Print feature for dumping
the variables:
- Rap is the ratio of the surface of the current polygon with respect to the total surface;
- Ziid is the arithmetic mean of the Fish values falling in the polygon;
- Zgeo is the kriged estimate of the mean Fish density over the polygon;
- St. dev. is the square root of the estimation variance of the kriged mean.
We then derive the global results:
- the arithmetic mean fish density raised to the area of the polygon: 284923,
- the arithmetic mean fish density raised to each polygon surface, cumulated over the 26 polygons: 305670,
- the kriged mean fish density raised to each polygon surface, cumulated over the 26 polygons: 295182,
- the global coefficient of variation CViid = s / (z sqrt(N)), which ignores the spatial structure (expressed in %): 12.97%, with s the standard deviation of the sample values and z the sample mean,
- the global coefficient of variation CVgeo, obtained by combining the kriging variances of the different strata.
Surf
Rap
Ziid
Zgeo
Aiid
Ageo
68
4275
10.91
6.36
6.04
5.11
27179
25817
64
2650
6.76
19.34
19.05
4.75
51266
50483
57
2584
6.59
13.83
12.84
4.95
35745
33188
59
2423
6.18
4.97
4.88
4.98
12045
11814
60
2206
5.63
5.02
3.64
4.96
11081
8039
42
1938
4.95
11.16
10.12
5.63
21624
19623
40
1719
4.39
9.62
9.41
5.93
16548
16182
34
1742
4.44
5.98
5.74
6.61
10425
10005
28
1523
3.89
4.77
4.38
7.07
7261
6676
10
36
1853
4.73
6.40
5.92
6.53
11867
10965
11
33
1562
3.99
3.44
5.83
6.28
5370
9115
12
33
1585
4.05
1.11
3.67
6.35
1764
5824
13
52
2575
6.57
0.86
0.86
5.80
2215
2219
14
23
781
1.99
0.05
0.05
6.82
39
37
15
19
488
1.25
11.89
5.24
8.44
5809
2557
16
30
713
1.82
12.44
13.33
6.15
8876
9506
17
30
969
2.47
4.65
4.69
5.87
4504
4546
18
47
1280
3.27
6.24
6.30
4.48
7992
8059
19
53
1199
3.06
9.30
7.48
3.93
11147
8970
20
36
1175
3.00
30.10
29.96
5.12
35361
35196
21
35
1293
3.30
2.70
2.11
5.37
3490
2728
22
24
856
2.19
15.06
13.94
5.95
12902
14941
736
23
11
676
1.73
0.00
0.00
12.39
24
10
487
1.24
0.00
0.00
11.52
25
405
1.03
1.67
2.59
12.92
678
1051
26
234
0.60
2.07
2.73
13.24
482
638
As stated in the reference manual, the index of abundance does not differ greatly according to the
method: because of the systematic design, the kriged and unweighted estimates are similar. Polygon
15 constitutes a noticeable exception: the difference between the averages on this polygon comes
from the fact that the kriging result is obtained using the West selection, which excludes some data
falling in the eastern part of the polygon.
The variance is, however, quite different: the CVgeo is higher than the CViid. This is due to the
autocorrelation in the data, which is ignored in the latter estimate. As the survey is not designed in
a manner that makes the unweighted estimate valid, the CViid can be considered as incorrect.
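The 12.97% CViid figure quoted earlier can be checked directly from the global statistics of the Fish variable tabulated in section 17.1.2 (a back-of-the-envelope sketch of the formula, not an Isatis computation):

```python
import math

n = 938                 # acoustic samples
mean_fish = 8.27        # sample mean of the Fish variable
var_fish = 1078.49      # sample variance

# CViid = s / (z sqrt(N)): spatial structure ignored
cv_iid = math.sqrt(var_fish) / (mean_fish * math.sqrt(n))   # ~0.1297
```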
It is easy to check the correspondence between the large and the small strata:
(fig. 17.2-3: map of the eight large strata L1 to L8)
L1   S1, S2, S3
L2   S4, S5, S6
L3   S7, S8, S9
L4
L5
L6
L7
L8
The aim of this paragraph is to perform the estimation based on the large strata, and to assess the
accuracy of the approximation which consists in combining the estimations and variances of
several polygons or strata partitioning a domain.
The results are given in the next table:

Rk   Surf   Z(V)    St. dev.   A(V)
1    9509   12.36   2.95       117554
2    6567   6.07    3.06       39869
3    4978   6.83    3.86       34013
4    7575   3.37    3.20       25528
5    1980   5.71    4.18       11313
6    3447   6.37    2.67       21952
7    3324   15.66   3.17       52041
8    1802   0.90    6.17       1614
The global results calculated by combining the large strata are comparable to those obtained by
combining the small strata:

Statistics   Small Strata   Large Strata
Surface      39192          39184
Abundance    295182         303885
CVgeo        18.37%         17.92%
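The surface-weighted combination behind these global figures is straightforward; the sketch below uses the large-strata values from the table above (rounded, so the result differs slightly from the 303885 computed by Isatis on unrounded values):

```python
import numpy as np

# per-stratum kriged means Z(V) and surfaces for the large strata L1..L8
surf = np.array([9509, 6567, 4978, 7575, 1980, 3447, 3324, 1802], dtype=float)
z_v = np.array([12.36, 6.07, 6.83, 3.37, 5.71, 6.37, 15.66, 0.90])

# global abundance: each stratum mean raised to its own surface, then summed
abundance = float((surf * z_v).sum())
# equivalent global mean density over the whole domain
global_mean = abundance / float(surf.sum())
```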
18.Air quality
This case study is based on a data set kindly provided by the French
association for Air Quality Monitoring ATMO Alsace (Source
d'information ASPA 05020802-ID - www.atmo-alsace.net).
The case study covers a large panel of Isatis features rather
exhaustively. Its main objectives are to:
- estimate the annual mean of nitrogen dioxide (NO2) over Alsace in 2004 using classical geostatistical algorithms,
- perform risk analysis by:
  - estimating the local risk of exceeding a sanitary threshold of 40 µg/m3 using conditional expectation (multi-gaussian kriging),
  - quantifying the statistical distribution of the population potentially exposed to NO2 concentrations higher than 40 µg/m3.
Last update: Isatis version 2014
(snap. 18.1-1)
It is then advised to check the consistency of the units defined in the Preferences / Study
Environment / Units panel:
- Input-Output Length Options window: unit in meters (Length), with its Format set to Decimal with Length = 10 and Digits = 2.
You have to tick the box First Available Row Contains Field Names and click on the Automatic
button to load the variables contained in the file.
Set Easting (X) for X_COORD_UTM and Northing (Y) for Y_COORD_UTM.
(snap. 18.1-1)
- the polygon level, which corresponds to the lines starting with the ** symbol;
- the contour level, which corresponds to the lines starting with the * symbol.
#
# [ISATIS POLYGONS] Study: ASPA - Campagne Regionale 2004 Directory: Donnees File: Contours
#
# Polygons Dimension = 2D
#
# polygon_field = 1 , type = name
# polygon_field = 2 , type = x_label , unit = "km"
# polygon_field = 3 , type = y_label , unit = "km"
#
# ++++++++++++ ---------- ++++++++++
#
# vertex_field = 1 , type = x , unit = "km" , f_type=Decimal , f_length=10 , f_digits=2
# vertex_field = 2 , type = y , unit = "km" , f_type=Decimal , f_length=10 , f_digits=2
#
# ++++++++++ ----------
#
** Bas-Rhin 393.42 5391.85
*
434.27 5407.18
433.06 5405.97
431.02 5404.44
.../...
434.29 5408.68
** Haut-Rhin 371.18 5302.14
*
353.01 5287.31
352.84 5287.73
.../...
352.65 5284.33
352.05 5285.49
These polygons are read using the File / Polygons Editor functionality. This application opens as a
graphic window with a large Application Menu. You must first choose the New Polygon File option
to create a file where the 2D polygon attributes will be stored: the file is called Alsace in the
directory Data.
(snap. 18.1-1)
The next task consists in loading the contents of the ASCII Polygon File using the ASCII Import
functionality in the Application Menu.
(snap. 18.1-2)
The polygons now appear in the graphic window. You can easily distinguish the Bas-Rhin and the
Haut-Rhin.
(snap. 18.1-3)
The final action consists in performing the Save and Run task in order to store the polygon file in
the general data file system of Isatis.
18.2 Pre-processing
18.2.1 Creation of a target grid
All the estimation and simulation results will be stored as different variables of a new grid file
located in the directory Data. This grid, called Grid, is created using the File / Create Grid File
functionality. It is adjusted on the Auxiliary data.
(snap. 18.2-1)
Using the Graphic Check option, the procedure offers the graphical capability of checking that the
new grid reasonably overlays the data points and is consistent with the 1 km x 1 km resolution of
the auxiliary variables.
(snap. 18.2-2)
(snap. 18.2-3)
(snap. 18.2-4)
(snap. 18.2-5)
(snap. 18.3-1)
For example, to calculate the histogram with 32 classes between 4 and 68 µg/m3 (an interval of
2 units per class), first click on the histogram icon (third from the left); a histogram calculated with
default values is displayed. Then enter the previous values in the Application / Calculation
Parameters menu of the Histogram page. If you switch on the Define Parameters Before Initial
Calculations option, you can skip the default histogram display.
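The class definition above is simple arithmetic, sketched here for clarity (32 equal classes spanning 4 to 68 give a 2-unit interval):

```python
# histogram class edges for 32 classes between 4 and 68 ug/m3
lo, hi, n_classes = 4.0, 68.0, 32
width = (hi - lo) / n_classes                   # 2 ug/m3 per class
edges = [lo + k * width for k in range(n_classes + 1)]
```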
Clicking on the base map (first icon from the left), the dispersion of diffusive samples on Alsace
appears. Each active sample is represented by a cross proportional to the NO2 value. A sample is
active if its value for a given variable is defined and not masked.
Nb Samples: 60, Minimum: 4.00, Maximum: 67.00, Mean: 26.10, Std. Dev.: 12.66
(fig. 18.3-1: base map of the diffusive samples and histogram of NO2)
(fig. 18.3-1)
You can identify the typology of each sample on the base map by entering the variable Typology as
a Literal Code Variable in Application / Graphic Specific Parameters.
The different graphic windows are dynamically linked. If you want to locate the particularly high
NO2 concentrations, select on the histogram the higher values, right click and choose the Highlight
option. The highlighted values are now represented by a blue star on the base map; with a zoom you
can see that values are attached to traffic or urban sites.
(fig. 18.3-2: base map labelled by sample typology (urban, periurban, rural, industry, traffic), with the high NO2 values highlighted, and the corresponding histogram)
Then, an experimental variogram can be calculated by clicking on the 7th statistical representation,
with 20 lags of 5 km and a lag tolerance of 0.5 (proportion of the lag). The number of pairs may be
added to the graphic by switching on the appropriate button in the Application / Graphic Specific
Parameters.
The variogram cloud is obtained by ticking the box Calculate the Variogram Cloud in the
Variogram Calculation Parameters.
(snap. 18.3-2)
(snap. 18.3-3)
By highlighting the high values on the variogram cloud, you can see that these values are due to the
same traffic sample. If you mask this point on the base map (right click), the number of pairs taken
into account in the computation of the experimental variogram decreases, as does the variability of
the variogram.
(fig. 18.3-3: base maps, variogram clouds and experimental variograms of NO2 before and after masking the high traffic sample)
Nearby road and industrial measurements, which are often linked to high NO2 values, have a
strong impact on the variability, but they are not representative of the background pollution at the
chosen grid resolution. From now on, only the background samples are retained, in particular for
the calculation of the variogram, by activating the Input Data selection Background (choose the
NO2 variable and click on the Background selection in the left part of the File and Variable
Selector).
(snap. 18.3-4)
(fig. 18.3-4) Base map and histogram of the background NO2 samples (Nb Samples: 49; Minimum: 4.00; Maximum: 44.00; Mean: 22.92; Std. Dev.: 10.36), together with the experimental variograms computed on the background selection (variogram versus Distance (km)).
The number of diffusive samples falls from 60 to 49, the maximum concentration decreases
from 67 to 44 µg/m3 and the variance drops from 160.28 to 107.33.
In order to perform the fitting step, it is now time to store the final experimental variogram with the
item Save in Parameter File of the Application menu of the Variogram page. You will call it NO2.
The variogram fitting application offers two windows:
- The global window, where all experimental variograms, in all directions and for all variables, are displayed.
- The fitting window, where you focus on one given experimental variogram, for one variable and in one direction.
In our case, as the Parameter File refers to only one experimental variogram for the single variable
NO2, there is obviously no difference between the two windows.
(snap. 18.4-1)
The principle consists in editing the Model parameters and checking the impact graphically. You
can also use the variogram initialization by selecting a single structure, or a combination of
structures, in Model Initialization, with or without a nugget effect. Here, we choose an
exponential model without nugget. Pressing the Fit button in the Automatic Fitting tab, the
procedure automatically fits the range and the sill of the variogram (see the Variogram Fitting
section of the User's Guide).
Then go to the Manual Fitting tab and press the Edit button to access the panel used for the
Model definition and modify the displayed model. Each modification of the Model parameters can
be validated using the Test button in order to update the graphic. Here, we enter a (practical) range
of 48 km and a sill of 120 for a better fit of the model to the experimental variogram. This model
is saved in the Parameter File for future use by clicking on the Run (Save) button.
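The exponential model fitted above (practical range 48 km, sill 120) can be written explicitly. A minimal sketch, using the common convention that the practical range of an exponential variogram is three times its scale parameter:

```python
import numpy as np

def exponential_variogram(h, sill=120.0, practical_range=48.0, nugget=0.0):
    """gamma(h) = nugget + sill * (1 - exp(-3h / practical_range)).
    At h = practical_range the model reaches about 95% of the sill."""
    h = np.asarray(h, dtype=float)
    return nugget + sill * (1.0 - np.exp(-3.0 * h / practical_range))

# evaluate the model at the same distances as the 20 lags of 5 km
h = np.arange(0.0, 101.0, 5.0)
g = exponential_variogram(h)
```

Overlaying `g` on the experimental points is the graphical check performed by the Test button.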
(snap. 18.4-2)
(fig. 18.4-1) Experimental variogram of NO2 with the fitted exponential model (variogram versus Distance (km), number of pairs displayed per lag).
You then define:
- the Input information: the variable NO2 in the Data File (with the selection Background),
- the following variables in the Output Grid File, where the results will be stored (with the selection Alsace): the estimate in Estimation for NO2 (Kriging) and the standard deviation of the estimation error in Std for NO2 (Kriging).
To define the neighborhood, click on the Neighborhood button; you will be asked to
select or create a new set of parameters. In the New File Name area, enter the name Unique, then
click on OK or press Enter; you will then be able to set the neighborhood parameters by clicking on
the corresponding Edit button.
By default, a moving neighborhood is proposed. Due to the small number of diffusive samples
(less than 100), a unique neighborhood is preferred: the entire data set will therefore be used
during the interpolation process at every grid node. An advantage of the unique neighborhood is that
the kriging matrix inversion is performed once and for all during the computation.
The only thing to do is to select Unique as the Neighborhood Type and click on OK.
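The computational advantage of the unique neighborhood can be sketched as follows. This is an illustrative ordinary-kriging implementation, not the Isatis one; the covariance is derived from the fitted variogram as C(h) = sill - gamma(h), and the sample data are hypothetical:

```python
import numpy as np

def ok_unique_neighborhood(xy, z, targets, cov):
    """Ordinary kriging with a unique neighborhood: the left-hand
    kriging matrix is built and inverted once for all target nodes."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = cov(d)            # sample-to-sample covariances
    A[n, :n] = A[:n, n] = 1.0     # unbiasedness constraint
    A[n, n] = 0.0
    A_inv = np.linalg.inv(A)      # done once (unique neighborhood)
    est = np.empty(len(targets))
    for k, t in enumerate(targets):
        b = np.append(cov(np.linalg.norm(xy - t, axis=1)), 1.0)
        w = A_inv @ b             # kriging weights + Lagrange parameter
        est[k] = w[:n] @ z
    return est

sill, prange = 120.0, 48.0
cov = lambda h: sill * np.exp(-3.0 * h / prange)   # C(h) = sill - gamma(h)

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(49, 2))             # hypothetical samples
z = rng.normal(22.9, 10.4, size=49)
grid = np.array([[50.0, 50.0], [10.0, 90.0]])      # two example grid nodes
est = ok_unique_neighborhood(xy, z, grid, cov)
```

With a moving neighborhood, by contrast, a different (smaller) matrix would have to be solved at every node.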
(snap. 18.5-1)
(snap. 18.5-2)
In the Standard (Co-)Kriging panel, a special feature allows you to test the choice of parameters,
through a kriging procedure, on a graphical basis (Test button). A first click within the graphic area
displays the target file (the grid). A second click allows the selection of one particular grid node.
The target grid node may also be entered through the Test Window / Application / Selection of Target
option (see the status line at the bottom of the graphic page), for instance the node [62,128].
The figure shows the data set, the samples chosen in the neighborhood (all the data in our case, with
a unique neighborhood) and their corresponding weights. The bottom of the screen displays the
estimated value, its standard deviation and the sum of the weights.
(snap. 18.5-3)
In the Application menu of the Test Graphic Window, click on Print Weights & Results. This
produces a printout of:
- For variable V1
Number of Neighbors              = 49
Mean Distance to the target      = 51732.22m
Total sum of the weights         = 1.000000
Sum of positive weights          = 1.179724
Weight attached to the mean      = 0.000000
Lagrange parameters #1           = -0.104556
Estimated value                  = 29.200666
Estimation variance              = 42.828570
Estimation standard deviation    = 6.544354
Variance of Z* (Estimated Z)     = 77.380543
Covariance between Z and Z*      = 77.275987
Correlation between Z and Z*     = 0.801933
Slope of the regression Z | Z*   = 0.998649
Signal to Noise ratio (final)    = 2.801868
Firstly, give a name to the template you are creating: Estimation for NO2 (Kriging). This will
allow you to easily display this template again later.
In the Contents list, double click on the Raster item. A new window appears, in order to let you
specify which variable you want to display and with which color scale:
- In the Data area, in the Grid file, select the variable Estimation for NO2 (Kriging),
- Specify the title that will be given to the Raster part of the legend, for instance NO2 (µg/m3),
- In the Graphic Parameters area, specify the Color Scale you want to use for the raster
display. You may use an automatic default color scale, or create a new one specifically
dedicated to the NO2 variable. To create a new color scale, click on the Color Scale button,
double-click on New Color Scale, enter a name: NO2, and press OK. Click on the Edit
button. In the Color Scale Definition window:
- In the Bounds Definition, choose User Defined Classes.
- Click on the Bounds button and enter the min and the max bounds (respectively 0 and 50).
- Do not change the Number of Classes (32).
- Switch on the Invert Color Order toggle in order to assign the red colors to the large NO2
values.
- Click on the Undefined Values button and select Transparent.
- In the Legend area, switch off the Automatic spacing between Tick Marks button, enter 0
as the reference tick mark and 5 as the step between the tick marks. Then, specify that you
do not want your final color scale to exceed 6cm.
- Deselect Display Undefined Values so as not to specify a specific label for the undefined
classes.
- Click on OK.
In the Item contents for: Raster window, click on Display to display the result.
(snap. 18.6-1)
Back in the Contents list, double-click on the Basemap item to represent the NO2 variable with
symbols proportional to the variable value. A new Item contents window appears. In the Data
area, select Data / NO2 / NO2 variable (with the Background selection) as the proportional
variable. Enter NO2 data as the Legend Title. Leave the other parameters unchanged; by
default, black crosses will be displayed with a size proportional to the NO2 values. Click on
Display Current Item to check your parameters, then on Display to see all the previously
defined components of your graphic. Click on OK to close the Item contents panel.
In the Items list, you can select any item and decide whether or not you want to display its legend.
Use the Up and Down arrows to modify the order of the items in the final display.
To remove the white margin, click on the Display Box tab and select the Containing a set of items
mode. Choose the raster to define the display box correctly.
Close the Contents window. Your final graphic window should be similar to the one displayed
hereafter.
(snap. 18.6-2)
The * and [Not saved] symbols indicate, respectively, that some recent modifications have not been
stored in the Estimation for NO2 (Kriging) graphic template, and that this template has never
been saved. Click on Application / Store Page to save it. You can now close your window.
Create a second template Std for NO2 (Kriging) to display the kriging standard deviation. The
result should be similar to the one displayed hereafter.
(fig. 18.6-1)
(snap. 18.7-1)
(snap. 18.7-2)
(fig. 18.7-1)
(snap. 18.7-3)
The coefficients of the multi-linear regression are reported in the Message Window.
Regression Parameters:
======================
Explanatory Variable 1 = Altitude
Explanatory Variable 2 = ln(Emi_NOx+1)
Regressed Variable     = None
Residual Variable      = None
Constant Term          = ON

Multi-linear regression
-----------------------
Equation for the target variable : NO2
(NB: the coefficients apply to lengths expressed in their own unit)

|             | Estimated Coeff. | Signification | Std. Error | t-value |  Pr(>|t|) |
|-------------|------------------|---------------|------------|---------|-----------|
|Constant     |        5.468     |       X       |    4.956   |  1.103  |   0.276   |
|Altitude     |    -2.584e-02    |      ***      |  4.928e-03 | -5.244  | 3.864e-06 |
|ln(Emi_NOx+1)|        2.883     |      ***      |    0.474   |  6.080  | 2.195e-07 |

Signification codes based upon a Student test probability of rejection:
'***' Pr(>|t|) < 0.001
'**'  Pr(>|t|) < 0.01
'*'   Pr(>|t|) < 0.05
'.'   Pr(>|t|) < 0.1
'X'   Pr(>|t|) < 1

Multiple R-squared = 0.733
Adjusted R-squared = 0.721
F-statistic        = 63.156
p-value            = 6.428e-14
AIC                = 430.727
AIC Corrected      = 431.261
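The multi-linear regression printed above can be reproduced with ordinary least squares. A sketch on hypothetical stand-in data (the variable names mirror the printout, but the values below are simulated, not the real Alsace data):

```python
import numpy as np

def multilinear_regression(X, y):
    """Ordinary least squares with a constant term: returns the
    coefficients [intercept, b1, b2, ...] and the R-squared."""
    A = np.column_stack([np.ones(len(y)), X])      # constant term ON
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

# hypothetical stand-ins for Altitude and ln(Emi_NOx+1) at 49 samples
rng = np.random.default_rng(2)
altitude = rng.uniform(100, 1200, 49)
ln_emi = rng.uniform(0, 6, 49)
no2 = 5.5 - 0.026 * altitude + 2.9 * ln_emi + rng.normal(0, 3, 49)
beta, r2 = multilinear_regression(np.column_stack([altitude, ln_emi]), no2)
```

The negative Altitude coefficient and positive emission coefficient match the physical expectation: concentrations decrease with altitude and increase with NOx emissions.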
(fig. 18.7-2)
For consistency, you define the same calculation parameters for the experimental variograms
as before (20 lags of 5 km).
(fig. 18.7-3)
Obviously, you recognize the same NO2 variogram as before. The NO2 regression variogram, as
well as the cross-variogram, shows the same number of pairs, as they are built on the same 49 samples
where both variables are defined. You will save this new bivariate experimental variogram in a
Parameter File called NO2-Altitude+ln(Emi_NOx+1) for the fitting step.
The Statistics / Variogram Fitting procedure is started with NO2-Altitude+ln(Emi_NOx+1) as the
experimental variogram and by defining a new file, also called NO2-Altitude+ln(Emi_NOx+1),
for storing the bivariate model. The Global window is used for fitting all the variables
simultaneously. The same variogram model as before is used for the NO2 experimental
variogram. You choose the following parameters:
The dotted lines on the cross-variogram show the envelope of maximal correlation allowed from
the simple variograms. Click on Run (Save).
(fig. 18.7-4)
Note - To access the displayed variogram parameters of your choice, click on the Sill to be
displayed button.
Printing the model in the File / Parameter Files window allows a better understanding of the way
the basic structure has been used to fit the three variograms simultaneously, in the framework of
the linear model of coregionalization, with their sills as the only degrees of freedom.
Variance-Covariance matrix :
             Variable 1   Variable 2
Variable 1     120.0000      95.0000
Variable 2      95.0000      90.0000
Advanced explanations about these coefficients are available in the Isatis Technical References,
that can be accessed from the On-Line documentation: chapter Structure Identification of the
Intrinsic Case, paragraph Printout of the Linear Model of Coregionalization.
The Interpolation / Estimation / (Co-)Kriging procedure is used again to perform the cokriging step,
in order to estimate NO2 from the auxiliary variable NO2 regression.
You have calculated the NO2 regression variable on the diffusive samples, but not on the Grid,
where the two variables Altitude and ln(Emi_NOx+1) are also defined. So the first task consists
in calculating this variable on the Grid through the Statistics / Data Transformation /
Raw<->Multi-linear Transformation panel. Select the Regression Parameter File NO2, which has been
created by the Multi-linear Transformation application, and associate the two explanatory variables
Altitude and ln(Emi_NOx+1) located in the Grid file. Then, create a new variable NO2
regression for the regressed variable. When clicking on Run, the coefficients of the regression are
applied to the corresponding variables and the same transformation is computed.
(snap. 18.7-4)
Now, you can execute the cokriging operation. Select the two variables NO2 and NO2 regression
among the variables of the Input File (with the Background selection), name the two output variables
Estimation for NO2 (Cokriging) and Std for NO2 (Cokriging) to store the cokriging results, and
do not forget to mention the NO2 regression of the Grid as the Collocated Variable. Name the file
containing the bivariate model NO2-Altitude+ln(Emi_NOx+1). The neighborhood is unchanged.
Click on the Special Kriging Options button and select the option Collocated Cokriging. Make sure
that the Collocated Variable in the Input and Output File is the same: NO2 regression. Click on Apply
and Run.
(snap. 18.7-5)
(snap. 18.7-6)
(fig. 18.7-5)
The differences between kriging and cokriging are clearly visible on the display templates. On
the cokriging map, the integration of the auxiliary variables highlights the roads; this representation
is more realistic. The contribution of the auxiliary variables also improves the standard deviation map,
decreasing it on the grid cells where no information was taken into account before.
18.8 Cross-validation
The Statistics / Modeling / Cross-Validation procedure consists in considering each data point in
turn, removing it temporarily from the data set and using its neighboring information to predict (by
a kriging procedure) the value of the variable at its location. The estimation is compared to the true
value to produce the estimation error, possibly standardized by the standard deviation of the
estimation.
Click on the Data File button and select the NO2 variable with the Background selection as Target
Variable. Set on the Graphic Representations option. Select the Model button, the variogram model
called NO2 and Unique for the Neighborhood. This panel is very similar to the (Co-)Kriging panel.
(snap. 18.8-1)
By clicking on Run, the procedure finally produces a graphic page containing four windows,
among them:
- the scatter diagram of the true data versus the estimated values,
- the scatter diagram of the standardized estimation errors versus the estimated values.
A sample is arbitrarily considered as not robust as soon as its standardized estimation error is larger
than a given threshold in absolute value (2.5 for example, which approximately corresponds to the
1% extreme values of a normal distribution).
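The leave-one-out logic described above can be sketched in Python. This is illustrative only (hypothetical sample locations and an assumed exponential covariance), not the Isatis implementation:

```python
import numpy as np

def loo_cross_validation(xy, z, cov):
    """Leave-one-out cross-validation: remove each sample in turn and
    re-estimate it by ordinary kriging from the remaining samples.
    Returns the estimation errors and the standardized errors."""
    n = len(z)
    sill = float(cov(0.0))
    err, std_err = np.empty(n), np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        xk, zk = xy[keep], z[keep]
        m = len(zk)
        d = np.linalg.norm(xk[:, None, :] - xk[None, :, :], axis=2)
        A = np.empty((m + 1, m + 1))
        A[:m, :m] = cov(d)
        A[m, :m] = A[:m, m] = 1.0      # unbiasedness constraint
        A[m, m] = 0.0
        b = np.append(cov(np.linalg.norm(xk - xy[i], axis=1)), 1.0)
        w = np.linalg.solve(A, b)
        est = w[:m] @ zk
        krig_var = sill - w @ b        # ordinary kriging variance
        err[i] = est - z[i]
        std_err[i] = err[i] / np.sqrt(max(krig_var, 1e-12))
    return err, std_err

cov = lambda h: 120.0 * np.exp(-3.0 * np.asarray(h, float) / 48.0)
rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(49, 2))   # hypothetical stations
z = rng.normal(22.9, 10.4, size=49)
err, std_err = loo_cross_validation(xy, z, cov)
robust = np.abs(std_err) <= 2.5          # robustness criterion from the text
```

The diagnostics quoted later (mean error near 0, variance of the standardized error near 1) are computed from exactly these two arrays.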
(fig. 18.8-1)
At the same time, the statistics on the estimation error and standardized error (mean and variance)
are printed out in the Message window.
======================================================================
|                          Cross-validation                          |
======================================================================
Data File Information:
  Directory   = Data
  File        = NO2
  Selection   = BACKGROUND
  Variable(s) = NO2
Target File Information:
  Directory   = Data
  File        = NO2
  Selection   = BACKGROUND
  Variable(s) = NO2
Seed File Information:
  Directory   = Data
  File        = NO2
  Selection   = BACKGROUND
  Variable(s) = NO2
  Type        = POINT (60 points)
Model Name        = NO2
Neighborhood Name = Unique - UNIQUE
A data is robust when its Standardized Error lies between -2.500000 and 2.500000
Successfully processed = 49
CPU Time     = 0:00:00 (0 sec.)
Elapsed Time = 0:00:00 (0 sec.)
The cross-validation has been carried out on the 49 background samples of NO2. The mean error,
close to 0, shows that the unbiasedness condition of the kriging algorithm worked properly. The
variance of the standardized estimation error measures the ratio between the (squared) experimental
estimation error and the kriging variance: this ratio should be close to 1, which is the case here with
a value of 0.82.
In the second part, the same statistics are calculated on the robust points only (in our case all
the samples are robust, so you obtain the same results).
If you compare these results to the ones obtained with the cokriging, the correlation between the
true values and the estimated values is better, but three samples are not considered as robust.
The mean and the variance of the standardized error remain close to 0 and 1 respectively.
It is difficult to choose between kriging and cokriging from the cross-validation results alone,
but the comparison of the two maps is clearly in favor of the cokriging.
(snap. 18.9-1)
Now, using the Statistics / Gaussian Anamorphosis Modeling procedure, you can fit and display
this anamorphosis function and transform the raw variable into a new Gaussian variable NO2
Gauss.
Select the NO2 variable with the Background selection as Input data, and the declustering
weights as Weights.
The Interactive Fitting button overlays the experimental anamorphosis with its model expanded in
terms of Hermite polynomials: this step function gives the correspondence between each of the
sorted data (vertical axis) and the corresponding frequency quantile on the Gaussian scale
(horizontal axis). A good correspondence between the experimental values and the model is
obtained by choosing an appropriate number of Hermite polynomials; by default Isatis suggests
30 polynomials, but you can modify this number and choose 50 polynomials.
Select the option Gaussian Transform and create a new variable NO2 Gauss in the Output data.
Three interpolation options are available; we recommend the Empirical Inversion method for
this case. Save the anamorphosis by clicking on the Point Anamorphosis button and name it NO2.
Finally, click on Run.
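At its core, the Gaussian transform performed here is a normal-score transform: each sorted datum is matched to the standard Gaussian quantile of its cumulative frequency. A minimal empirical sketch (Isatis additionally models the function with Hermite polynomials and supports declustering weights, neither of which is reproduced here; the input data are hypothetical):

```python
import numpy as np
from statistics import NormalDist

def gaussian_anamorphosis(z):
    """Empirical normal-score transform: map each value to the standard
    Gaussian quantile of its mid-point cumulative frequency (i+0.5)/n."""
    n = len(z)
    ranks = np.argsort(np.argsort(z))          # rank of each value, 0..n-1
    freq = (ranks + 0.5) / n                   # avoids frequencies 0 and 1
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in freq])

rng = np.random.default_rng(4)
no2 = rng.lognormal(3.0, 0.5, size=49)         # hypothetical skewed data
no2_gauss = gaussian_anamorphosis(no2)
# the transformed variable has mean ~0 and variance ~1
```

The transform is monotonic, so the order of the samples is preserved; this is what makes the raw cutoff of 40 µg/m3 translatable into a Gaussian cutoff later on.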
(snap. 18.9-2)
(fig. 18.9-1) Gaussian anamorphosis of NO2: raw NO2 values (vertical axis, 10 to 50) versus Gaussian values (horizontal axis).
Using the Statistics / Exploratory Data Analysis on this new variable and switching on Compute Using
the Weight Variable (click on the ... button on the right and enter declustering weights as the
Weight Variable), you can first compute its basic statistics: the mean is 0.00 and the variance is
1.00. You display the histogram of this variable between -3 and 3 using 18 classes and check that
the distribution is not exactly symmetric, with a minimum of -2.24 and a maximum of 2.92. The
experimental variogram is well structured; it is computed using the same
calculation parameters as in the univariate case: 20 lags of 5 km.
(fig. 18.9-2)
You can check the bi-Gaussian assumption on the transformed data by computing the square root of
the ratio between the variogram and the madogram. Click on the Application / Calculation Parameters
menu of the Variogram window and select Sqrt of Variogram / Madogram as the variographic calculation.
[Plot of the Sqrt of Variogram / Madogram ratio versus Distance (km), values ranging from 0.0 to 2.0]
(fig. 18.9-3)
(fig. 18.9-4)
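This check can be sketched numerically. For a bivariate Gaussian distribution the madogram (first-order variogram) satisfies gamma1(h) = sqrt(gamma(h)/pi), so the plotted ratio sqrt(variogram)/madogram should be roughly constant at sqrt(pi), about 1.77. An illustrative sketch on a synthetic Gaussian field (uncorrelated here for simplicity; the sample data are hypothetical):

```python
import numpy as np

def variogram_madogram_ratio(xy, z, lag=5.0, nlags=20):
    """For each lag class, compute sqrt(variogram)/madogram.
    Under a bi-Gaussian assumption the ratio is about sqrt(pi) ~ 1.77."""
    n = len(z)
    i, j = np.triu_indices(n, k=1)
    d = np.hypot(*(xy[i] - xy[j]).T)
    dz = z[i] - z[j]
    ratio = np.full(nlags, np.nan)
    for k in range(nlags):
        m = (d >= k * lag) & (d < (k + 1) * lag)
        if m.any():
            gamma = 0.5 * np.mean(dz[m] ** 2)      # variogram estimate
            mado = 0.5 * np.mean(np.abs(dz[m]))    # madogram estimate
            ratio[k] = np.sqrt(gamma) / mado
    return ratio

rng = np.random.default_rng(5)
xy = rng.uniform(0, 100, size=(500, 2))
z = rng.standard_normal(500)        # synthetic Gaussian values
ratio = variogram_madogram_ratio(xy, z)
# the values scatter around sqrt(pi) ~ 1.77
```

A systematic departure of the experimental ratio from this constant would cast doubt on the bi-Gaussian assumption behind the later conditional-expectation step.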
After that, you can proceed with the calculation of the probability. Select the Statistics / Statistics /
Probability from Conditional Expectation menu and click on the Data File button to open a File
Selector. Choose Estimation for NO2 Gauss (Kriging) as the Gaussian Kriged Variable and Std for
NO2 Gauss (Kriging) as the second variable, and create a new variable Probability 40µg/m3
(CE) for the last variable. This Probability macro variable will store the different probabilities of
exceeding given cutoffs; each alphanumerical index of the macro variable will correspond to a
different cutoff. In our case, there will be only one cutoff.
Press the Indicator Definition button to define the cutoff in the raw space; we have chosen a cutoff
of 40 µg/m3. Click on Apply, then Close.
Check Perform a Gaussian Back Transformation and click on Anamorphosis to define the
transformation (NO2) which was used to transform the raw data into the Gaussian space before
kriging. To finish, click on Run.
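The probability computed by conditional expectation follows from a standard argument: assuming the conditional distribution of the Gaussian variable at each node is normal, with mean the kriged value Y* and standard deviation sigma, then P[Z > z_c] = P[Y > y_c] = 1 - Phi((y_c - Y*)/sigma), where y_c is the Gaussian transform of the raw cutoff (40 µg/m3 here). A hedged sketch with hypothetical node values:

```python
import numpy as np
from statistics import NormalDist

def exceedance_probability(y_kriged, y_std, y_cutoff):
    """P[Z > z_c] per node, assuming the conditional distribution of the
    Gaussian variable is N(y*, sigma^2): 1 - Phi((y_c - y*)/sigma)."""
    nd = NormalDist()
    out = []
    for m, s in zip(y_kriged, y_std):
        if s > 0:
            out.append(1.0 - nd.cdf((y_cutoff - m) / s))
        else:                         # zero variance: deterministic node
            out.append(float(m >= y_cutoff))
    return np.array(out)

# hypothetical nodes; y_cutoff stands for the Gaussian transform of 40 µg/m3
y_cutoff = 1.4
p = exceedance_probability([0.0, 1.4, 2.5], [0.5, 0.5, 0.5], y_cutoff)
# p[1] is exactly 0.5: the cutoff coincides with the kriged value there
```

This is why the procedure needs both the kriged Gaussian variable and its standard deviation as inputs.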
(snap. 18.10-1)
(snap. 18.10-2)
The map corresponding to the probability of exceeding the sanitary threshold of 40 µg/m3 is displayed
hereafter. A new color scale called Probability is created with irregular bounds in order to highlight
the points where the probability is low.
(fig. 18.10-1)
- the name of the macro variable: each simulation is stored in this macro variable with an index attached,
- the Gaussian back transformation, performed using the anamorphosis function NO2 (in a first run, this anamorphosis will be disabled in order to study the Gaussian simulations),
- the seed used for the random number generator (423141 by default): this seed allows you to perform several batches of simulations, each batch being different from the previous one if the seed is modified.
The final parameters are specific to the simulation technique. When using the Turning Band
method, you simply need to specify the number of bands: a rule of thumb is to enter a number much
larger than the count of rows or columns in the grid, and smaller than the total number of grid
nodes; 1000 bands are chosen in our exercise.
You can verify on some simulations in the Gaussian space that the histogram is really Gaussian and
that the experimental variogram respects the structure of the model NO2 Gauss, particularly at small
scale. After this quality control, you can enable the Gaussian back transformation NO2.
(fig. 18.11-1) Histograms of two Gaussian simulations on the 8302 grid nodes (Nb Samples: 8302; means -0.03 and -0.06; standard deviations 0.97 and 1.03; values between about -3.3 and 3.8) and their experimental variograms versus Distance (km).
(snap. 18.11-1)
The results consist of 200 realizations stored in one Simulations NO2 macro variable in the Grid.
The clear differences between several realizations are illustrated on the next graphic.
(fig. 18.11-2)
Regression Parameters:
======================
Explanatory Variable 1 = Altitude
Explanatory Variable 2 = ln(Emi_NOx+1)
Regressed Variable     = NO2 Gauss regression
Residual Variable      = None
Constant Term          = ON

Multi-linear regression
-----------------------
Equation for the target variable : NO2 Gauss
(NB: the coefficients apply to lengths expressed in their own unit)

|             | Estimated Coeff. | Signification | Std. Error | t-value |  Pr(>|t|) |
|-------------|------------------|---------------|------------|---------|-----------|
|Constant     |       -1.338     |       *       |    0.531   | -2.521  | 1.525e-02 |
|Altitude     |    -2.918e-03    |      ***      |  5.278e-04 | -5.529  | 1.462e-06 |
|ln(Emi_NOx+1)|        0.288     |      ***      |  5.079e-02 |  5.669  | 9.064e-07 |

Signification codes based upon a Student test probability of rejection:
'***' Pr(>|t|) < 0.001
'**'  Pr(>|t|) < 0.01
'*'   Pr(>|t|) < 0.05
'.'   Pr(>|t|) < 0.1
'X'   Pr(>|t|) < 1

Multiple R-squared = 0.728
Adjusted R-squared = 0.716
F-statistic        = 61.644
p-value            = 9.659e-14
AIC                = -6.313e+00
AIC Corrected      = -5.780e+00
Calculate the NO2 Gauss regression variable on the Grid in the Statistics / Data Transformation /
Raw<->Multi-linear Transformation panel.
(snap. 18.12-1)
After that, you can compute the three experimental variograms (using the declustering weights
variable). Save them as NO2 Gauss-Altitude+ln(Emi_NOx+1) and fit a model. You choose the
following parameters:
(fig. 18.12-1)
You are now able to perform the collocated co-simulations using the turning bands technique.
Compared to the univariate simulations, the multivariate case requires the two variables NO2 Gauss
and NO2 Gauss regression (with the Background selection) in the Input File.
Click on the Output File button, create two new variables on the Grid (Alsace selection activated):
Simulations NO2 (multivariate case) and Simulations NO2 Gauss regression (irrelevant but
required by the algorithm), and select NO2 Gauss regression as the Collocated Variable.
Enter NO2 Gauss-Altitude+ln(Emi_NOx+1) as the variogram model and Unique as the neighborhood.
Click on the Special Option button and switch on the Collocated Cokriging option (verify that the
collocated variable is the same, NO2 Gauss regression, in the Input and Output File). Enable the
Gaussian Back Transformation and define the NO2 anamorphosis for each variable. Do not change
the other parameters, such as the number of simulations and the number of turning bands. Finally,
click on Run.
(snap. 18.12-2)
(snap. 18.12-3)
(snap. 18.12-4)
determination of the cutoff map giving the probability that NO2 exceeds 40 µg/m3.
(snap. 18.13-1)
(snap. 18.13-2)
(snap. 18.13-3)
The map corresponding to the mean of the 200 simulations is displayed with the same color scale as
for the estimated maps, together with the associated standard deviation. The mean of a large number
of simulations converges toward the kriging estimate.
(fig. 18.13-1)
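The post-processing maps are node-wise statistics over the realizations; a sketch with a hypothetical array of 200 realizations on the 8302 grid nodes (not the real simulation output):

```python
import numpy as np

def postprocess(simulations, cutoff):
    """simulations: array (n_real, n_nodes). Returns the node-wise mean,
    standard deviation and frequency of exceeding the cutoff."""
    mean = simulations.mean(axis=0)
    std = simulations.std(axis=0)
    prob = (simulations > cutoff).mean(axis=0)   # exceedance frequency
    return mean, std, prob

rng = np.random.default_rng(6)
sims = rng.lognormal(3.0, 0.4, size=(200, 8302))  # hypothetical realizations
mean, std, prob = postprocess(sims, cutoff=40.0)
```

As the number of realizations grows, `mean` stabilizes toward the conditional expectation at each node, which is the convergence-toward-kriging behaviour noted above.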
(fig. 18.13-2)
The following graphic represents the probability that the NO2 concentrations exceed a sanitary
threshold of 40 µg/m3, calculated by simulations. This map is very similar to the probability map
obtained by conditional expectation. With an infinite number of simulations, the two maps would be
exactly the same.
(fig. 18.13-3)
The following graphics represent the mean of the simulations and the probability of exceeding 40 µg/m3
calculated in the multivariate case, i.e. using the Simulations NO2 (multivariate case) macro
variable in the Tools / Simulations Post Processing panel with the same parameters as before.
The simulation mean has many similarities with the cokriging map. The probability map presents
some differences with the one obtained by univariate simulations, especially in the South, where the
probability is lower (quasi null) than on the first graphic, and in the East center, where the main area
exposed to a risk of exceeding 40 µg/m3 is more limited and follows a road axis. The integration of
auxiliary variables in the simulations leads to a more realistic probability map.
(fig. 18.13-4)
(fig. 18.13-5)
(snap. 18.14-1)
In the File / Calculator panel, for each simulation you are going to calculate the population
potentially exposed to NO2 concentrations higher than 40 µg/m3. Click on the Data
File button to select Pop99 as v1, and Simulations NO2 (multivariate case) and Population
exposure as m1 and m2 (macro variables).
Enter in the Transformation window the operation that will be applied to the variables. For each
simulation and each cell, the simulated NO2 concentration is compared to the threshold of
40 µg/m3. If this value is exceeded, the number of inhabitants recorded in the Pop99 variable is
stored; otherwise the number of inhabitants exposed is set to zero. As a consequence, the transformation
is: m2=ifelse(m1>40,v1,0).
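The Calculator transformation m2=ifelse(m1>40,v1,0) is a simple element-wise conditional. In Python terms (array names and values hypothetical):

```python
import numpy as np

def population_exposure(sim_no2, pop99, threshold=40.0):
    """One realization: keep the cell population where the simulated NO2
    exceeds the threshold, zero elsewhere (m2 = ifelse(m1>40, v1, 0))."""
    return np.where(sim_no2 > threshold, pop99, 0.0)

sim_no2 = np.array([35.0, 42.0, 55.0, 40.0])   # hypothetical cell values
pop99 = np.array([120.0, 80.0, 300.0, 50.0])
exposed = population_exposure(sim_no2, pop99)
# -> [0., 80., 300., 0.]  (strict inequality: the cell at exactly 40 is not counted)
```

In Isatis the same expression is evaluated independently for each index of the macro variable, i.e. once per realization.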
(snap. 18.14-2)
The Tools / Simulation Post-processing panel is finally used to estimate the population exposed to NO2
concentrations higher than 40 µg/m3 from the Population exposure macro variable. To run
this operation, switch on Risk Curves and click on the Edit button. You are only interested in the
Accumulations: for each realization (each index of the macro variable), the program calculates the
sum of all the values of the variable which are greater than or equal to the cutoff, i.e. in our case the
total number of inhabitants (so choose a cutoff of 0; the selection of the inhabitants living in an area
exposed to more than 40 µg/m3 was made in the preceding step).
This sum is then multiplied by the unit surface of a cell, equal to 1000 m x 1000 m = 1,000,000 m²;
as you are interested in the number of inhabitants (inhab), you need to divide by this same figure.
Switch on Draw Risk Curve on Accumulations to draw the risk curves on
accumulations in a separate graphic, and on Print Statistics to print the accumulations of the target
variable for each simulation.
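The accumulation computed for each realization is just the sum of the exposed populations, and the risk quantiles are read off the sorted realizations. A sketch, assuming (as the printout below suggests) that Q(p) denotes the accumulation exceeded by p% of the realizations; the exposure array is hypothetical:

```python
import numpy as np

def risk_curve(exposure, cutoff=0.0):
    """exposure: array (n_real, n_nodes) of exposed inhabitants per cell.
    For each realization, the accumulation is the sum of the cell values
    >= cutoff; Q(p) is the accumulation exceeded by p% of realizations."""
    acc = np.where(exposure >= cutoff, exposure, 0.0).sum(axis=1)
    q = {p: float(np.percentile(acc, 100 - p)) for p in (5, 50, 95)}
    return acc, q

rng = np.random.default_rng(7)
exposure = rng.exponential(10.0, size=(200, 500))   # hypothetical input
acc, q = risk_curve(exposure)
# q[5] >= q[50] >= q[95]: only 5% of realizations exceed q[5]
```

This exceedance convention explains why, in the printout below, Q5.00 is the largest quantile and Q95.00 the smallest.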
(snap. 18.14-3)
(snap. 18.14-4)
(fig. 18.14-1)
Statistics for Simulation Post Processing
=========================================
Target Variable : Macro variable = Population exposure[xxxxx] [count=200]
Cutoff             = 0.00
Number of outcomes = 200
The 19716 values are processed using 1 buffers of 19716 data each
Cell dimension along X = 1000.00m
Cell dimension along Y = 1000.00m

Rank   Macro   Frequency   Accumulation    Surface
   1       1        0.50   105606 inhab    8302.00 km2
   2       2        1.00    98998 inhab    8302.00 km2
   3       3        1.50    84982 inhab    8302.00 km2
.../...
 198     198       99.00    91751 inhab    8302.00 km2
 199     199       99.50    91416 inhab    8302.00 km2
 200     200      100.00   120454 inhab    8302.00 km2
Inputs/Outputs Summary
======================
Input Macro :
  - Directory Name : Data
  - File Name      : Grid
  - Selection Name : Alsace
  - Variable Name  : Population exposure[xxxxx]

Quantiles on Accumulation Risk Curves
=====================================
Q5.00  = 133941 inhab
Q50.00 =  87857 inhab
Q95.00 =  61018 inhab

Quantiles on Accumulation Risk Curves (nearest simulation values)
=================================================================
P5.00  = 135165 inhab
P50.00 =  88181 inhab
P95.00 =  61037 inhab
The number of inhabitants exposed to NO2 concentrations higher than 40 µg/m3 lies between 47911
and 166422, with a mean of 91171.
14 Soil pollution
This case study is based on a data set kindly provided by TOTAL
Dépôts Passifs. Coordinates and pollutant grades have been transformed for
confidentiality reasons.
The case study covers rather exhaustively a large panel of Isatis
features. Its main objectives are to:
- estimate the 3D total hydrocarbons (THC) content on a contaminated site
using classical geostatistical algorithms,
- interpolate the site topography in order to exclude from the calculations
the 3D grid cells above the soil surface,
- use simulations to perform risk analysis through:
  - the estimation of the local risk of exceeding a threshold of 200 mg/kg,
  - the quantification of the statistical distribution of the contaminated
volume of soil.
Last update: Isatis version 2014
(snap. 19.1-1)
It is then advised to verify the consistency of the units defined in the Preferences / Study Environment / Units panel:
- Input-Output Length Options window: unit in meters (Length), with its Format set to Decimal with Length = 10 and Digits = 2.
You have to tick the box First Available Row Contains Field Names and click on the Automatic
button to load the variables contained in the file.
- The coordinates easting (X), northing (Y) and elevation (Z) for X, Y and Cote (mNGF),
- The numeric 32 bits variables ZTN (mNGF), Prof (m) and Measure,
(snap. 19.1-1)
(snap. 19.1-1)
Note - Be careful to define this file as a 2D file. In this step, the ZTN (mNGF) variable will be
defined as a numeric 32 bits variable, not as the Z coordinate.
- the polygon level, which corresponds to the lines starting with the ** symbol,
- the contour level, which corresponds to the lines starting with the * symbol.
This polygon is read using the File / Polygons Editor functionality. This application opens as a graphic window with a large Application Menu. You first have to choose the New Polygon File option to create a file where the 3D polygon attributes will be stored: the file is called Site contour in the directory Data.
(snap. 19.1-1)
The next task consists in loading the contents of the ASCII Polygon File using the ASCII Import
functionality in the Application Menu.
(snap. 19.1-2)
(snap. 19.1-3)
The final action consists in performing the Save and Run task in order to store the polygon file in
the general data file system of Isatis.
Note - This polygon could also have been digitized inside Isatis, using a background map of the site.
19.2 Pre-processing
19.2.1 Creation of a target grid
All the estimation and simulation results will be stored as different variables of a new grid file
located in the directory Grid. This grid, called 3D grid, is created using the File / Create Grid File
functionality. It is adjusted on the Site contour polygon.
(snap. 19.2-1)
Using the Graphic Check option, the procedure offers the graphical capability of checking that the
new grid reasonably overlays the data points.
(snap. 19.2-2)
(snap. 19.2-3)
SELECTION/INTERVAL STATISTICS:
------------------------------
New Selection Name      = Site contour
Total Number of Samples = 182160
Masked Samples          = 32384
Selected Samples        = 149776
- Drag the Measure variable from the THC file in the Study Contents and drop it in the display window;
- From the Page Contents, right-click on the Points object (THC) to open the Points Properties window. In the Points tab, select the 3D Shape mode (sphere) and choose the Rainbow Reversed Isatis Color Scale in the Color tab.
(snap. 19.3-1)
Tick the Automatic Apply option to automatically assign the defined properties to the graphic
object. If this option is not selected, modifications are applied only when clicking Display.
Display the site contour:
- Drag the Site contour file in the Study Contents and drop it in the display window.
- From the Page Contents, right-click on the Polygons object (Site contour) to open the Polygons Properties window. In the Color tab, select Constant and click the colored button next to it to assign the color of your choice to the polygon. In the Transparency tab, tick the Active Transparency option to define a level of transparency for the display, in order to see the samples inside.
Tick Legend to display the color scale in the display window. The legend is attached to the current representation. Specific graphic objects may be added from the Display menu, such as the graphic axes with their corresponding valuations, the bounding box and the compass.
The Z Scale, in the tool bar, may also be modified to enhance the vertical scale.
Click on File / Save Page As to save the current graphic.
(fig. 19.3-1)
(snap. 19.4-1)
For example, to calculate the histogram with 26 classes between 0 and 520 mg/kg (a 20-unit interval), first click on the histogram icon (third from the left); a histogram calculated with default values is displayed. Then enter the previous values in the Application / Calculation Parameters menu of the Histogram page. If you select the option Define Parameters Before Initial Calculations, you can skip the default histogram display.
Clicking on the base map (first icon from the left), the dispersion of THC grades appears. Each
active sample is represented by a cross proportional to the THC value. A sample is active if its
value for a given variable is defined and not masked.
(fig. 19.4-1)
All graphic windows are dynamically linked together. To locate the particularly high values, select the highest values on the histogram, right-click and choose the Highlight option. The highlighted values are then represented by a blue star on the base map.
(fig. 19.4-2)
Selecting another section (YOZ or XOZ) in the Application / Graphical Parameters panel of the base map window allows you to visualize the dispersion of THC grades in depth.
(snap. 19.4-2)
Then, an experimental variogram can be calculated by clicking on the 7th statistical representation, with 10 lags of 15 m (consistent with the sampling mesh) and a proportion of the lag of 0.5. A histogram displaying the number of pairs can be previewed by clicking on the Display Pairs button.
(snap. 19.4-3)
(snap. 19.4-4)
The number of pairs may be added to the graphic by switching on the appropriate button in the Application / Graphic Specific Parameters. The variogram cloud is obtained by ticking the box Calculate the Variogram Cloud in the Variogram Calculation Parameters.
(fig. 19.4-3)
The experimental variogram shows a strong nugget effect. This variability is due to the fact that we compare pairs of points located in the XOY plane with pairs of points in depth. The variability of the THC grades seems to be higher vertically than horizontally. You have to account for this phenomenon by calculating two experimental variograms, one for each direction. To do so, choose the Directional option. A Slicing Height of 0.5 m prevents mixing the two directions.
Set Regular Directions to 1, choose Activate Direction Normal to the Reference Plane and choose the following parameters in Direction Definition:
- Number of lags: 10 (so that the variogram will be calculated over a 150 m distance)
(snap. 19.4-5)
Then choose the following parameters for the direction normal to the reference plane:
- Tolerance on angle: 45
- Lag value: 1 m
- Number of lags: 4
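As a rough illustration of what the Directional option separates, here is a simplified experimental-variogram sketch in Python. It is not the Isatis implementation: the 45-degree angle tolerance and 0.5 m slicing height mirror the parameters above, but the pair-grouping rule (a tolerance of half a lag) is an assumption:

```python
import numpy as np

def directional_variogram(xyz, v, lag, nlags, vertical=False, slice_h=0.5):
    # Horizontal pairs: |dz| below the slicing height.
    # Vertical pairs: within 45 degrees of the vertical (dz > dh).
    gamma = np.zeros(nlags)
    npairs = np.zeros(nlags, dtype=int)
    n = len(v)
    for i in range(n):
        for j in range(i + 1, n):
            d = xyz[j] - xyz[i]
            dh = np.hypot(d[0], d[1])        # horizontal separation
            dz = abs(d[2])                   # vertical separation
            if vertical:
                ok, dist = dz > dh, dz       # tolerance on angle: 45 degrees
            else:
                ok, dist = dz < slice_h, dh  # slicing height: 0.5 m
            if not ok:
                continue
            k = int(dist / lag + 0.5)        # lag class, tolerance of 0.5 lag
            if 1 <= k <= nlags:
                gamma[k - 1] += 0.5 * (v[i] - v[j]) ** 2
                npairs[k - 1] += 1
    mask = npairs > 0
    gamma[mask] /= npairs[mask]
    return gamma, npairs
```

Calling it twice, once with `vertical=False` and once with `vertical=True`, reproduces the two-direction split that the Slicing Height enforces.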
(fig. 19.4-4) Experimental variograms of Measure in the horizontal reference plane (N0) and in the vertical direction (D-90), labeled with the number of pairs per lag.
In order to perform the fitting step, it is now time to store the final experimental variogram with the
item Save in Parameter File of the Application menu of the Variogram Page. You will call it THC.
The Parameter File in which you wish to save the resulting model is THC. As the experimental variogram and the variogram model are stored in different types of parameter files, you may define the same name for both.
(snap. 19.5-1)
Check the toggles Fitting Window and Global Window. The Fitting window displays one direction
at a time (you may choose the direction to display through Application / Variable & Direction
Selection...), and the Global window displays every variable (if several) and direction in one
graphic.
You can first use the variogram initialization by selecting a single structure or a combination of structures in Model Initialization, and by adding a nugget effect or not. Pressing the Fit button in the Automatic Fitting tab, the procedure automatically fits the ranges and sills of the variogram (see the Variogram Fitting section of the User's Guide).
Then, go to the Manual Fitting tab and press the Edit button to access the panel used for the Model Definition, and modify the model displayed. Each modification of the model parameters can be validated using the Test button in order to update the graphic. The model must reflect the behavior of the experimental variogram in both directions.
Here, two different structures have been defined (in the Model Definition window, use the Add button to add a structure, and define its characteristics below, for each structure):
- an anisotropic Linear model with a sill of 1000 and the following respective ranges along U, V and W: 115 m, 115 m and 0.85 m.
(snap. 19.5-2)
(fig. 19.5-1) Fitted variogram model overlaid on the experimental variograms in the N0 and D-90 directions.
This model is saved in the Parameter File for future use by clicking on the Run (Save) button.
(snap. 19.6-1)
When pressing Run, an Isatis message is printed out, informing you that two duplicates have been found and masked in the Without duplicates selection variable.
Note - The presence of duplicates is generally visible on the variogram cloud by the existence of
pairs of points at zero distance.
- the Input information: variable Measure in the THC file (with the selection Without duplicates),
- the following variables in the Output Grid File, where the results will be stored (with the selection Site contour):
To define the neighborhood, click on the Neighborhood button; you will be asked to select or create a new set of parameters. In the New File Name area, enter the name moving 3D, then click OK or press Enter; you will then be able to set the neighborhood parameters by clicking on the corresponding Edit button.
- Set the dimensions of the ellipsoid to 100 m, 100 m and 2 m along the vertical direction;
(snap. 19.7-1)
(snap. 19.7-2)
In the Standard (Co-)Kriging panel, a special feature allows you to test the choice of parameters, through a kriging procedure, on a graphical basis (Test button). By pressing the left mouse button once, the target grid is shown (in fact an XOY section of it; you may select different sections through Application / Selection For Display...). You can then move the cursor to a target grid node: click once more to initiate kriging. The samples selected in the neighborhood are highlighted and the weights are displayed. The bottom of the screen recalls the estimated value, its standard deviation and the sum of the weights. The target grid node may also be entered in the Test Window / Application / Selection of Target option, for instance the node [37,55,10].
(snap. 19.7-3)
In the Application menu of the Test Graphic Window, click on Print Weights & Results. This produces a printout of the test results at the target node: among others, the number of selected samples (20), the distance to the furthest sample (23.55 m), the sum of the weights (1.000000), the estimated value and its standard deviation.
You may also ask for a 3D representation of the search ellipsoid: if the 3D Viewer application is already running, ask, from the Application menu, to Link to 3D Viewer. A 3D representation of the search ellipsoid neighborhood is then displayed, and the samples used for the estimation of the node are highlighted. A new graphic object, neighborhood, appears in the Page Contents, from which you may change the graphic properties (color, size of the samples coding the weights or the THC values, etc.).
(fig. 19.7-1)
(snap. 19.8-1)
A selection from the polygon Site contour is also applied to the new 2D grid, so that the topography is not interpolated outside of the site area.
(snap. 19.8-2)
SELECTION/INTERVAL STATISTICS:
------------------------------
New Selection Name      = Site contour
Total Number of Samples = 7920
Masked Samples          = 1408
Selected Samples        = 6512
(snap. 19.8-3)
A first experimental variogram is calculated with 10 lags of 15m and a proportion of the lag of 0.5.
(fig. 19.8-1) Base map of the ZTN (mNGF) data and omnidirectional experimental variogram of ZTN (mNGF).
This variogram shows a strong nugget effect, which does not seem to be due to a single sample. A variogram map can be computed by clicking on the last statistical representation of the panel. This specific tool allows you to analyze the spatial continuity of the variable of interest in all directions of space, and especially to detect possible anisotropies.
- 14 directions,
- a tolerance of 0 lag, so that the same pair of points is not counted in two consecutive classes,
- a tolerance on directions of 3 sectors, to smooth the map and highlight the principal directions of anisotropy.
(snap. 19.8-4)
Studying the map, you can see that the variability seems to be higher along Y than along X up to a distance of 80 m. The variograms along these two directions can be calculated directly from the variogram map: pick the N90 direction label, right-click and choose Active Direction (ditto for the N0 direction).
(fig. 19.8-2) Variogram map of ZTN (mNGF) and experimental variograms along the N0 and N90 directions.
The anisotropic variogram is saved in a parameter file under the name Topography anisotropic.
- an anisotropic Spherical model with a sill of 0.14 and respective ranges along U and V of 135 m and 75 m
(fig. 19.8-3)
- two new variables, Topography anisotropic kriging and Topography anisotropic std kriging, as Output File in the 2D grid file, to store respectively the estimation result and the standard deviation of the estimation error,
(snap. 19.8-5)
(snap. 19.8-6)
Firstly, give a name to the template you are creating: Topography. This will allow you to easily display this template again later.
In the Contents list, double-click on the Raster item. A new window appears, letting you specify which variable you want to display and with which color scale:
- In the Data area, select the variable Topography anisotropic kriging in the 2D grid file, with the Site contour selection,
- Specify the title that will be given to the Raster part of the legend, for instance Topo (mNGF),
- In the Graphic Parameters area, specify the Color Scale you want to use for the raster displayed. You may use an automatic default color scale, or create a new one specifically dedicated to the variable of interest. To create a new color scale, click on the Color Scale button, double-click on New Color Scale, enter a name: Topo, and press OK. Click on the Edit button. In the Color Scale Definition window:
  - In the Bounds Definition, choose User Defined Classes.
  - Click on the Bounds button and enter the min and max bounds (respectively 27 and 30).
  - Change the Number of Classes (30).
  - Switch on the Invert Color Order toggle in order to assign the red colors to the large values of topography.
  - Click on the Undefined Values button and select Transparent.
  - In the Legend area, switch off the Display all Tick Marks button and enter 0.5 as the step between the tick marks. Then, specify that you do not want your final color scale to exceed 6 cm. Switch off the Display Undefined Values button.
  - Click on OK.
In the Item contents for: Raster window, click on Display to display the result.
(snap. 19.8-7)
- Back in the Contents list, double-click on the Isolines item. Click Grid File to open a File Selector and select the 2D grid file, then the variable to be represented, Topography anisotropic kriging.
  - The Legend Title is not active, as no legend is attached to this type of representation.
  - The isolines representation requires the definition of classes. A class is an interval of values separated by a given step. In the Data Related Parameters area, switch on the C1 line, enter 27 and 30 as lower and upper bounds, and choose a step equal to 0.2.
  - To avoid overloading the graphic, the Label Flag attached to the class is left inactive.
In the Items list, you can select any item and decide whether or not you want to display its legend.
Use the Up and Down arrows to modify the order of the items in the final display.
Close the Contents window. Your final graphic window should be similar to the one displayed
hereafter.
(snap. 19.8-8)
The * and [Not saved] symbols respectively indicate that some recent modifications have not been
stored in the Topography graphic template, and that this template has never been saved. Click on
Application / Store Page to save them. You can now close your window.
Create a second template, Topography std kriging, to display the kriging standard deviation, using the Raster item in the Contents list and a new Color Scale. To overlay the ZTN (mNGF) data locations on the grid raster representing the estimation error:
- Back in the Contents list, double-click on the Basemap item to represent the ZTN (mNGF) variable with symbols proportional to the variable value. A new Item Contents window appears. In the Data File area, select the Data / Topography / ZTN (mNGF) variable as the proportional variable. Enter Topo data as the Legend Title. Leave the other parameters unchanged; by default, black crosses will be displayed with a size proportional to the values of topography. Click on Display Current Item to check your parameters, then on Display to see all the previously defined components of your graphic. Click on OK to close the Item Contents panel.
- To remove the white edge, click on the Display Box tab and select the Containing a set of items mode. Choose the raster to define the display box correctly.
Finally, click on Display. The result should be similar to the one displayed hereafter.
(fig. 19.8-4)
19.8.5 Selection of the grid cells under the surface of the soil
The first task consists in copying the estimation of the topography from the 2D grid to the 3D grid
using Tools / Migrate / Grid to Point.
(snap. 19.8-9)
A new selection variable, Under Topo, is created using the File / Calculator to store the result of the comparison between the estimated topography and the Z-coordinate. The 3D grid cells whose Z-coordinate values are higher than the corresponding Topography values are masked (the cells outside of the site contour are also masked, because the Site contour selection variable is activated on the input file). You have to apply the following transformation in File / Calculator:
s1=ifelse(v1<v2,1,0)
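The effect of this calculator expression can be mimicked in Python, taking v1 as the cell Z-coordinate and v2 as the migrated topography (the values below are hypothetical):

```python
import numpy as np

# Hypothetical values on four 3D grid cells
z_cell = np.array([26.0, 28.0, 29.5, 31.0])   # cell Z-coordinate (mNGF), v1
topo   = np.array([29.0, 29.0, 29.0, 29.0])   # kriged topography (mNGF), v2

# Isatis calculator expression  s1 = ifelse(v1 < v2, 1, 0):
# the selection keeps (1) only the cells under the soil surface
under_topo = np.where(z_cell < topo, 1, 0)

print(under_topo)   # [1 1 0 0]
```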
(snap. 19.8-10)
This Under topo selection will be used throughout the rest of the study (it will be activated on the output file and in the graphic representations).
- in the 3D Grid tab, tick the selection toggle, choose the Under topo selection and activate the Automatic Apply function;
- in the Color tab, make sure that the selected variable is THC kriging. Apply a THC Isatis Color Scale created in the File / Color Scale functionality (25 classes from 0 to 500 mg/kg);
- in the Cell Filter tab, tick the Activate Cell Filter toggle and choose the V is Defined option, so as not to display the cells with undefined values (which are colored in grey by default);
(fig. 19.9-1)
- open the clipping plane functionality from Display / Clipping Plane: the clipping plane appears across the block model;
- click on the clipping plane rectangle and drag it next to the block model for better visibility;
- click on one of the clipping plane's axes to change its orientation (be careful to target precisely the axis itself, in dark grey, not its squared extremity nor the center tube in white);
- open the Points Properties window of the THC file and set the Allow Clipping toggle OFF (ditto for the polygon);
- click on the clipping plane's white center tube and drag it in order to translate the clipping plane along the axis. You may also benefit from the clipping control parameters available on the right of the graphic window, in order to clip a slice with a fixed width along the main grid axes;
- you can click on one cell of particular interest, or on a sample: its information is displayed in the top right corner (take care to deactivate the polygon so as not to select it).
(snap. 19.9-1)
(snap. 19.10-1)
(fig. 19.10-1)
856
Using the Statistics / Exploratory Data Analysis on this new variable, you can first compute its basic statistics: the mean is 0.00 and the variance is 0.96. The distribution of the gaussian variable is not symmetric, with a minimum of -1.2, a maximum of 3.3 and a large proportion of equal low values. This phenomenon is due to the large proportion of values equal to the limit of detection, and to the anamorphosis method used: the gaussian value is calculated from the empirical cumulative distribution, so two points with the same raw value will get the same gaussian value. This method is preferred to the frequency inversion method, which gives different gaussian values to two points with the same raw value. In the context of the study, the asymmetry of the gaussian variable is not very important, because the threshold of 200 mg/kg that we consider is higher than the limit of detection.
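The tie-handling behavior described above (empirical cdf rather than frequency inversion) can be sketched as follows. This `normal_score` helper and its input values are illustrative, not the Isatis anamorphosis itself; note how the ties keep the transformed distribution from being exactly standard normal, consistent with the statistics reported above:

```python
import numpy as np
from statistics import NormalDist

def normal_score(values):
    # Gaussian transform through the empirical cdf: tied raw values
    # receive the same gaussian value (no frequency inversion)
    values = np.asarray(values, dtype=float)
    n = len(values)
    uniq, inverse, counts = np.unique(values, return_inverse=True,
                                      return_counts=True)
    # cumulative frequency taken at the middle of each class of equal values
    freq = (np.cumsum(counts) - counts / 2.0) / n
    g = np.array([NormalDist().inv_cdf(f) for f in freq])
    return g[inverse]

# Hypothetical THC values, three of them at the limit of detection
raw = [10.0, 10.0, 10.0, 50.0, 200.0]
gauss = normal_score(raw)
# the three tied raw values share a single gaussian value
```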
(fig. 19.10-2) Histogram of Measure gauss: 782 samples, minimum -1.20, maximum 3.30, mean 0.00, standard deviation 0.96.
The experimental variogram is now much more structured. The following one is computed using the same calculation parameters as in the non-gaussian case. To load the parameters of an existing variogram, click on Load Parameters from Standard Parameter File... and select the experimental variogram THC.
(fig. 19.10-3) Experimental variograms of Measure gauss in the D-90 and N0 directions.
- an anisotropic Exponential model with a sill of 0.58 and the following respective ranges along U, V and W: 43 m, 43 m and 6 m,
- an anisotropic Linear model with a sill of 0.25 and the following respective ranges along U, V and W: 115 m, 115 m and 2.4 m.
(fig. 19.10-4) Fitted variogram model overlaid on the experimental variograms of Measure gauss in the D-90 and N0 directions.
(snap. 19.10-2)
*** Create Grid File ***
Grid Create Mode        : Coarsen Mesh
Existing Directory Name : Grid
Existing Grid Name      : 3D grid
Input Selection Name    : None
X Nodes Number          : 6
Y Nodes Number          : 6
Z Nodes Number          : 1
Grid Directory Name     : Grid
Grid Name               : 3D grid remediation
NX= 10   X0= -46.25m   DX= 15.00m
NY= 22   Y0=  35.75m   DY= 15.00m
NZ= 23   Z0=  20.00m   DZ=  0.50m
Rotation: No rotation
As previously, create the selection variable Under topo on the 3D grid remediation, so that the cells above the surface are not taken into account in the computation of the contaminated soil.
*** Variable Statistics ***
Directory Name    : Grid
File Name         : 3D grid remediation
Variable Name     : Under topo
Variable Type     : Float (Selection)
Bit Length        : 1
Unit              :
Last Modification : Jan 30 2013 17:35:15
Size              : 737 bytes
Physical Path     : \\CRUNCHER\etudes\DOC_CASE_STUDIES\Isatis\CS_Isatis_130\Soil pollution\GTX\DIRE.2\FILE.3\VARI.10
Printing Format   : Integer, Length = 3
Variable Description :
Creation Date: Jan 30 2013 17:35:06
- the name of the Macro Variable: each simulation is stored in this Macro Variable with an index attached,
- the Gaussian back transformation is performed using the anamorphosis function THC. In a first run, this anamorphosis will be disabled in order to study the gaussian simulations,
- the seed used for the random number generator: 423141 by default. This seed allows you to perform lots of simulations in several steps: each step will be different from the previous one if the seed is modified.
The final parameters are specific to the simulation technique. When using the Turning Band
method, you simply need to specify the number of bands: a rule of thumb is to enter a number much
larger than the count of rows or columns in the grid, and smaller than the total number of grid
nodes; 500 bands are chosen in our exercise.
You can verify on some simulations in the gaussian space that the histogram is really gaussian and
the experimental variogram respects the structure of the model THC Gauss particularly at small
scale. After this Quality Control, you can enable the Gaussian back transformation THC and you
can perform block simulations on the 3D grid remediation.
(fig. 19.10-5)
The Type of calculation is set as Block. Block simulations are obtained by averaging simulated
points. Each block is discretized in sub-blocks according to the block discretization parameters and
each sub-block is simulated as a point.
The block discretization is defined in the Neighborhood window: it will be set to 3x3x2 for quicker
calculations.
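The block-averaging principle can be summarized in two lines of Python. The simulated values below are random placeholders; a real run would simulate the 3 x 3 x 2 sub-block points with the fitted model:

```python
import numpy as np

rng = np.random.default_rng(423141)     # same seed as in the Isatis panel

# Hypothetical point simulations on a 3 x 3 x 2 discretization of each block:
# one row per block, one column per sub-block point
nblocks, ndisc = 4, 3 * 3 * 2
point_sims = rng.normal(0.0, 1.0, size=(nblocks, ndisc))

# A block value is simply the average of its simulated discretization points
block_sims = point_sims.mean(axis=1)
```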
(snap. 19.10-3)
(snap. 19.10-4)
(snap. 19.10-5)
Clicking on the Calculate Cvv button, the average covariance of each block is calculated using its discretization. This covariance should be practically constant for all the blocks.
Calculation of the Mean Block Covariance :
------------------------------------------
Regular discretization : 3 x 3 x 2
In order to account for the randomization, 11 trials are performed
(the first value will be kept for the Kriging step)
Variables Measure gauss
Cvv = 0.323526
Cvv = 0.316433
Cvv = 0.323884
Cvv = 0.326204
Cvv = 0.328536
Cvv = 0.326872
Cvv = 0.330179
Cvv = 0.330183
Cvv = 0.326187
Cvv = 0.323799
Cvv = 0.326540
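What Calculate Cvv does can be approximated as follows: average the covariance over all pairs of discretization points, with each point randomly shifted within its mesh. The isotropic exponential covariance used here is only a stand-in for the full fitted model, so the values differ from the 0.3235 printed above:

```python
import numpy as np

def cvv(cov, block, ndisc, rng):
    # Mean block covariance: average of C(pi, pj) over a randomized
    # regular discretization of the block (a sketch of Calculate Cvv)
    nx, ny, nz = ndisc
    dx, dy, dz = block
    ix, iy, iz = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    # each discretization point is placed randomly within its mesh cell
    pts = np.stack([(ix.ravel() + rng.random(ix.size)) * dx / nx,
                    (iy.ravel() + rng.random(iy.size)) * dy / ny,
                    (iz.ravel() + rng.random(iz.size)) * dz / nz], axis=1)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return cov(d).mean()

# Hypothetical isotropic exponential covariance: sill 0.58, scale 43 m
expo = lambda h: 0.58 * np.exp(-h / 43.0)
rng = np.random.default_rng(0)
trials = [cvv(expo, block=(15.0, 15.0, 0.5), ndisc=(3, 3, 2), rng=rng)
          for _ in range(11)]
# the 11 randomized trials should be practically constant
```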
Note - Performing the simulations on the 2.5 x 2.5 x 0.5 m grid allows you to test different sizes of remediation grid. A Copy Statistics / Grid -> Grid computes, for each block of the remediation grid, the mean of a given simulation on the 2.5 x 2.5 x 0.5 m grid. This calculation is repeated for each simulation (i.e. for each index of simulation) through a journal file.
%LOOP i = 1 TO 200
#
******* Bulletin Name *******   =B=
***** Bulletin Version ******   =N=
Input Directory Name            =A=
Input File Name                 =A=
Input Selection Name            =A=
Variable Name                   =A=
Minimum Bound Name              =A=
Maximum Bound Name              =A=
Output Directory Name           =A=
Output File Name                =A=
Output Selection Name           =A=
Number Name                     =A=
Minimum Name                    =A=
Maximum Name                    =A=
Mean Name                       =A=
Std dev Name                    =A=
#
%ENDLOOP
- determination of the cutoff giving the probability of exceeding a threshold of 200 mg/kg.
(snap. 19.11-1)
Check the toggle Statistical Maps and press Edit in order to define the output file variables
Simulations THC mean and Simulations THC std.
(snap. 19.11-2)
Check the toggle Iso Cutoff Maps and press Edit in order to define the cutoff of 200 mg/kg.
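The probability behind the Iso Cutoff Map is simply the proportion of simulations exceeding the threshold in each cell, which can be sketched as follows (the simulated THC values are randomly generated placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical macro variable: 200 back-transformed simulations per cell
nsim, ncells = 200, 5
sims = rng.lognormal(mean=4.0, sigma=1.0, size=(nsim, ncells))   # mg/kg

# Per-cell probability of exceeding the 200 mg/kg threshold, estimated
# as the proportion of simulations above the cutoff
proba_200 = (sims > 200.0).mean(axis=0)
```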
(snap. 19.11-3)
(snap. 19.11-4)
- the Accumulations. For each realization (each index of the Macro Variable), the program calculates the sum of all the values of the variable which are greater than or equal to the Cutoff (if the value is smaller than the cutoff, the cell is not taken into account). This sum is then multiplied by the unit surface of the cell (or the unit volume of the block in 3D).
- the Surfaces/Volumes. Instead of calculating the sum of the values for each realization, the program only counts the number of nodes where the Accumulation has been calculated. This number is then multiplied by the unit surface of the cell (or the unit volume of the block in 3D). This curve provides, for each realization of the variable, the surface (in 2D) or the volume (in 3D) of the cells (or blocks) where the variable is greater than or equal to the cutoff.
(snap. 19.11-5)
The cutoff of 200 mg/kg is entered in the main panel. Tick the Risk Curves option and press Edit to define:
- the Unit Name used to display the results in the printout. By default, the values of volume are expressed in m3, but in our case the values can be expressed in 10^3 m3 (i.e. 1000 m3) to keep the printout compact.
- Draw Risk Curve on Volumes. The volume values of all the realizations are sorted in decreasing order and displayed as an inverse cumulative histogram: for each volume cutoff on the abscissa, the graph gives the probability of getting a result greater than this value. The greater the volume cutoff, the smaller the probability.
- Print Statistics. The accumulation of the target variable and the volume of soil contaminated by THC values higher than 200 mg/kg are printed in the Isatis Message Window for each realization. The order in which these results are printed depends on the Sorting Order specified.
(snap. 19.11-6)
Click Apply to compute and display the risk curves and leave the dialog box open.
(fig. 19.11-1)
The graphic figure containing the risk curves offers an Application Menu with a single item, Graphic Parameters, where you can define quantiles. Tick the Highlight Quantiles option to compute the quantiles of your choice, and click on Show the Simulation Value on Graphic to display the simulation values for each previously selected quantile on the graphic.
(snap. 19.11-7)
Accumulation          Volume
1962.79 10^3 m3        6.86 10^3 m3
2088.62 10^3 m3        6.98 10^3 m3
4593.49 10^3 m3       14.63 10^3 m3
2546.62 10^3 m3        8.44 10^3 m3
2677.48 10^3 m3        8.44 10^3 m3
4049.82 10^3 m3       12.94 10^3 m3
The volume of soil contaminated by a concentration of THC higher than 200 mg/kg is between 5.511 x 10^3 m3 and 21.601 x 10^3 m3, with a mean of 10.251 x 10^3 m3.
- in the 3D Grid tab, tick the selection toggle, choose the Under topo selection and activate the Automatic Apply function;
- in the Color tab, make sure that the selected variable is Probability 200 mg/kg. Apply a Proba Isatis Color Scale created in the File / Color Scale functionality (25 classes from 0 to 1);
- in the Cell Filter tab, tick the Activate Cell Filter toggle and choose the V > option to display only the cells with a probability value higher than 0.2, for example.
You can add, as previously, the polygon Site contour to delineate the area, and the THC data to compare the measured values with the probability of exceeding a threshold of 200 mg/kg in a remediation cell.
(snap. 19.12-1)
(fig. 19.12-1)
20. Bathymetry
This case study is based on a data set kindly provided by IFREMER, the French Research Institute for Exploitation of the Sea, from La Rochelle (www.ifremer.fr).
The case study illustrates how to set up, from several campaigns, a unified bathymetric model which ensures the consistency of both:
- the data processing, merging and modeling procedures,
- the bathymetry product delivered for a whole region.
The last paragraph focuses on an innovative methodology using local parameters to achieve a better fit between the geostatistical model and the data.
Last update: Isatis version 2014
(snap. 20.1-1)
It is then advised to check the consistency of the units defined in the Preferences / Study Environment / Units panel:
- Input-Output Length Options window: unit in meters (Length), with its Format set to Decimal with Length = 10 and Digits = 2.
(snap. 20.1-1)
As the header file is not contained in the data file, click Build New Header and a new dialog box pops up. The different tabs have to be filled in as follows:
- Data Organization: this first tab is used to define the file type, dimension and specific parameters. Select Points for Type of File and 2D for Dimension. The bathymetry will be considered as a numeric variable and not as a third coordinate.
- Options: this second tab defines how the data are arranged in the file.
  - In our case, the columns are separated by commas, so tick the CSV Input (Comma Separated Value) option, choose ',' as Values Separator and '.' as Decimal Symbol. Specify that you want to skip the first line by entering 1 in Skip File Lines at the Beginning.
(snap. 20.1-2)
As the data coordinates are defined in a geographic system, select the Coordinates are in latitude/longitude format option. Choose the -45.6533 / 22.578 format to specify that the Coordinates Input Format is decimal degrees. You then need to define the projection system: click Build/Edit Projection File to create a new projection file. The Projection Parameters dialog box pops up.
- Click New Projection File to enter a name for the new projection file: lambert2e.proj.
- Select clarke-1880 as reference in the Ellipsoid list.
- Select Lambert as Projection Type. First, choose France / Center (II) as Lambert
System. Then switch it for User Defined in order to modify the Y Origin from 200000 to
2200000.
- Click Save to store the parameters and close the Projection Parameters dialog box.
(snap. 20.1-3)
- Base Fields: this tab is used to specify how the input data fields will be read and stored as new variables in Isatis. Click Automatic Fields to automatically create as many fields as appear in the data file. The names of the variables will be those given in the first line (the skipped first line is considered as containing the variable names). Lastly, you have to define the type of each variable:
  - the coordinates 'Easting Degrees' and 'Northing Degrees' for long and lat,
(snap. 20.1-4)
Click Save As to save the edited header in a file. Enter a name for this file, header.txt, and Close. This header can be reused for other files which have the same structure. The header created should have the following structure:
# structure=free
# csv_file=Y, csv_sep=",", csv_dec="."
# nskip=1
# proj_file="C:\Program Files\Geovariances\Isatis\Datasets\Bathymetry\lambert2e.proj"
# proj_coord_rep=0
# field=1 , type=ewd , name="long" ;
#   f_type=Decimal , f_length=10 , f_digits=2, unit="" ;
#   factor=1
# field=2 , type=nsd , name="lat" ;
#   f_type=Decimal , f_length=10 , f_digits=2, unit="" ;
#   factor=1
# field=3 , type=numeric , name="Z" , bitlength=32 , unit="" ;
#   f_type=Decimal , f_length=10 , f_digits=2
Once your header is ready, you have to choose where and how your data will be stored in the Isatis
database. Select the mode Create a New File to import the complete data set. Then, create a new
directory and a new file in the current study. The button NEW Points File is used to enter the names
of these two items; click on the New Directory button and give a name, do the same for the New
File button, for instance:
(snap. 20.1-5)
Do the same thing for the two other files (without building a new header but using the same as
previously) to import these data sets in two new Isatis files:
- Directory = Data
As for the ASCII import, tick the Coordinates are in latitude/longitude format option to specify that your data are defined in a geographic system. Click on Projection File Name and select the projection file lambert2e.proj created previously.
(snap. 20.1-1)
20.2 Pre-processing
20.2.1 Visualization
The data sets are visualized using the display capabilities.
You are going to create a new Display template, which consists of an overlay of several base maps and
polygons. All the display facilities are explained in detail in the "Displaying & Editing Graphics"
chapter of the Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page pops up, together with a Contents window. You have to specify the contents of your graphic in this window. To achieve that:
- First, give a name to the template you are creating: Data. This will allow you to easily display the same map later on.
- In the Contents list, double-click on the Basemap item. A new window appears, letting you specify which file and which variable you want to display.
  - In the Data area, click on the Data File button and select the file Data / DDE Boyard 2000. Three types of representation may be defined (proportional, color or literal variable), but if these three variables are left undefined, a simple basemap is drawn using only the Default Symbol. Clicking on this button, you can modify the pattern, the color and the size of the points.
  - Click on Display to display the result and on OK to close the Item Contents panel.
- Back in the Contents list, double-click again on the Basemap item to represent the other points files DDE Marennes Oleron 2003 and DDE Maumusson 2001. Choose a different color for each file in order to distinguish them.
- Back in the Contents list again, double-click on the Polygons item to represent the coast line, and select Data / Coast by clicking on Data File. The lowest part of the window is designed to define the graphic parameters:
  - Label Position: select No Symbol so as not to materialize the label position of each polygon.
  - Filling: check Use a Specific Filling and click on the ... button to open the Color Selector and choose Transparent.
- Click on Display Current Item to check your parameters, then on Display to see all the previously defined components of your graphic.
(fig. 20.2-1)
- Choose the Center Point option to specify that you want to keep in the selection the sample nearest to the cell gravity center.
- In order to take into account the whole set of samples, select the Infinite Grid option. The grid system will be extended so that each sample is classified in a grid cell.
- Finally, you have to specify the grid parameters. As the Infinite Grid option is activated, you just have to fill in the dimensions of the cells. Type 10 m for DX and DY.
Press Run.
The variable created by the procedure is set to 1 when a sample is kept (just one sample per grid cell), and 0 otherwise.
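The decimation logic can be sketched in a few lines (a pure-Python illustration with hypothetical coordinates, not the Isatis implementation):

```python
import math

def decimate(samples, dx=10.0, dy=10.0):
    """Keep, in each dx x dy cell of an infinite grid, only the sample
    closest to the cell's gravity center; return a 0/1 flag per sample."""
    best = {}  # cell index -> (distance to center, sample index)
    for i, (x, y) in enumerate(samples):
        ix, iy = math.floor(x / dx), math.floor(y / dy)
        cx, cy = (ix + 0.5) * dx, (iy + 0.5) * dy   # cell gravity center
        d = math.hypot(x - cx, y - cy)
        if (ix, iy) not in best or d < best[(ix, iy)][0]:
            best[(ix, iy)] = (d, i)
    kept = {i for _, i in best.values()}
    return [1 if i in kept else 0 for i in range(len(samples))]

# Two samples fall in the same 10 m cell: only the one nearest the center is kept.
flags = decimate([(1.0, 1.0), (5.0, 5.0), (14.0, 2.0)])
print(flags)  # -> [0, 1, 1]
```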
(snap. 20.2-1)
Note - This procedure can also be achieved in Tools / Look for Duplicates.
(snap. 20.2-2)
Note - In Isatis, only regular grids can be created, but it is possible to import irregular grids. For example, if you create a regular grid in latitude/longitude outside Isatis, this file has to be projected in Isatis (with a projection system consistent with your data set). Once projected, the grid is no longer regular, so it is imported as a points file via File / Import / ASCII. This new file is finally used as the target file of the interpolation. During import, you just need to select the Keep Geographical Coordinates option to keep the original fields used to compute the latitude/longitude coordinates as float variables in the output Isatis file, in order to export the result of the interpolation on these coordinates.
(snap. 20.2-3)
(snap. 20.2-1)
Note - Be careful that the input variables are defined with the same format (in our case, Float and
not Length), in order to avoid Isatis making a conversion.
Then, the consistency of the borders of the two data sets is studied in Statistics / Exploratory Data Analysis. In our case, we just want to compare the two profiles linking the two campaigns. The comparison is made via an H-scatter plot. This application allows you to analyze the spatial continuity of the selected variable.
It is first advised to create a selection containing only the two profiles. Clicking on the base map (first icon from the left), the localization of the bathymetry measures appears. Each active measure is represented by a cross proportional to the bathymetry value. A sample is active if its value for a given variable is defined and not masked.
To create the selection variable, right-click and Mask all Information on the Basemap window. Then, zooming in, select the two profiles (with the left button of your mouse), right-click and Unmask.
(snap. 20.2-2)
To avoid high computation time, you should save this selection and work only on an extraction of
the bathymetric file:
- To save the selection variable, click on Application / Save in Selection in the Basemap window and create a new selection variable Two profiles. Save.
- In Tools / Copy Variable / Extract Samples, click Input File and select the Z variable of the MO and Maumusson file with the selection Two profiles activated (select it on the left part of the File Selector). Click New Output Points File and create a new output points file Two profiles MO and Maumusson and a new variable Z. Run.
(snap. 20.2-3)
Launch again the Statistics / Exploratory Data Analysis on the Z variable of this new file. Tick the Define Parameters before Initial Calculations option and click on the sixth icon from the left to display the H-scatter plot. The default parameters are modified:
- the Reference Direction: an angle of 55° from North is taken to compare the pairs of points located in the principal direction of the trench. This direction can be identified by clicking on Management / Measure / Angle between two Segments in the graphic window.
- the Minimum and Maximum Distance: respectively equal to 400 and 800 m to include the pairs of points resulting from the comparison of the two profiles.
(snap. 20.2-4)
It is possible to add the First Bisector Line on the H-scatter plot via Application / Graphic Specific Parameters.
(fig. 20.2-1)
Selecting a pair of points on the H-scatter plot (i.e. one point), then right-clicking and choosing Highlight, allows you to show their localization on the Basemap. No particular bias is visible; consequently, the two campaigns can be merged without any correction.
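The pair selection behind an H-scatter plot can be sketched as follows (hypothetical depth values; the angular tolerance used here is an assumption, Isatis exposes its own tolerance parameters):

```python
import math

def h_scatter_pairs(points, z, angle_deg, dmin, dmax, tol_deg=10.0):
    """Collect the pairs (z[i], z[j]) whose separation distance lies in
    [dmin, dmax] along a direction measured in degrees clockwise from
    North, within an angular tolerance (tol_deg is an assumption here)."""
    pairs = []
    for i in range(len(points)):
        for j in range(len(points)):
            if i == j:
                continue
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            d = math.hypot(dx, dy)
            if not (dmin <= d <= dmax):
                continue
            az = math.degrees(math.atan2(dx, dy)) % 180.0  # azimuth from North
            if min(abs(az - angle_deg), 180.0 - abs(az - angle_deg)) <= tol_deg:
                pairs.append((z[i], z[j]))
    return pairs

# Two points about 500 m apart along azimuth N55 (hypothetical depths):
pts = [(0.0, 0.0), (410.0, 287.0)]
pairs = h_scatter_pairs(pts, [12.0, 12.4], 55.0, 400.0, 800.0)
print(pairs)  # each pair appears in both orders, as on the plot
```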
(snap. 20.2-1)
For clarity, in the DDE Boyard 2000 file, the bathymetric variable is renamed Z Boyard.
The difference in bathymetry between the two variables Z Boyard and Z MO is calculated via the File / Calculator panel.
(snap. 20.2-2)
Both Z variables and the difference between them are then selected in Statistics / Exploratory Data Analysis. On the Scatter Diagram of Z Boyard versus Z MO (considered as the reference bathymetry because it is more recent), you can observe an excellent correlation of 0.999. However, the error Z Boyard-MO seems to increase with the depth (the distance from the first bisector line becomes larger and larger). The mean of these errors is equal to 0.45 m.
(fig. 20.2-1) Scatter diagram of Z MO versus Z Boyard (rho = 0.999) and histogram of the error Z Boyard-MO (115 samples; minimum -0.35, maximum 1.00, mean 0.45, standard deviation 0.25).
After removing some points (with a right click and Mask), you can observe a link between the
errors and the bathymetry. This phenomenon could be due to an evolution of the sediments between
the two campaigns made in 2000 and 2003. Save the result of the unmasked points in a selection
variable Selection regression (Application / Save in Selection).
(fig. 20.2-2)
The bias between the two bathymetric models resulting from the Boyard and the Marennes Oleron data sets could be corrected by applying to the Boyard data the following correction (corresponding to the equation of the regression line below):
Z Boyard - MO = 0.02122 * Z Boyard + 0.306
(eq. 20.2-1)
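Applying this correction to a Boyard depth value then amounts to subtracting the regression-estimated bias; a minimal sketch of the transformation later performed on the grid via Raw<->Multi-linear Transformation:

```python
def correct_boyard(z_boyard):
    """Remove the regression-estimated bias Z Boyard-MO (eq. 20.2-1)
    from a Boyard depth value (in meters)."""
    bias = 0.02122 * z_boyard + 0.306
    return z_boyard - bias

# A 10 m Boyard depth loses about 0.52 m of bias:
print(round(correct_boyard(10.0), 4))  # -> 9.4818
```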
These parameters can be calculated with the Statistics / Data Transformation / Multi-linear Regression tool. Select Z Boyard-MO as the Target Variable and Z Boyard as the Explanatory Variable (activate the Selection regression selection). Switch on Use a Constant Term in the Regression and, clicking on Regression Parameter File, create a New File Name Z Boyard-MO to store the result of the multi-linear regression. This parameter file will be used to apply the same transformation to the grid variables using Statistics / Data Transformation / Raw<->Multi-linear Transformation. Finally click on Run.
(snap. 20.2-3)
Regression Parameters:
======================
Explanatory Variable 1 = Z Boyard
Regressed Variable     = None
Residual Variable      = None
Constant Term          = ON

Multi-linear regression
-----------------------
Equation for the target variable : Z Boyard-MO
(NB. coefficients applied to lengths are in their own unit)
------------------------------------------------------------------------
|          |Estimated Coeff.|Signification|Std. Error|t-value| Pr(>|t|) |
------------------------------------------------------------------------
| Constant |    0.306       |     ***     |2.893e-02 |10.578 |0.000e+00 |
| Z Boyard |    2.122e-02   |     ***     |3.261e-03 | 6.509 |3.242e-09 |
------------------------------------------------------------------------
Signification codes based upon a Student test probability of rejection:
'***' Pr(>|t|) < 0.001
'**'  Pr(>|t|) < 0.01
'*'   Pr(>|t|) < 0.05
'.'   Pr(>|t|) < 0.1
'X'   Pr(>|t|) < 1

Multiple R-squared = 0.302
Adjusted R-squared = 0.295
F-statistic        = 42.361
p-value            = 3.242e-09
AIC                = -8.978e+02
AIC Corrected      = -8.977e+02
This relation is observed on the overlapping area of the two campaigns. Its validity should be
confirmed on the remaining area.
(snap. 20.3-1)
For example, to calculate the histogram with 25 classes between -6 and 19 m (1 meter interval),
first you have to click on the histogram icon (third from the left); a histogram calculated with
default parameters is displayed, then enter the previous values in the Application / Calculation
Parameters menu bar of the Histogram page. If you switch on the Define Parameters Before Initial
Calculations option, you can skip the default histogram display.
The different graphic windows are dynamically linked. If you want to locate the negative measures
of bathymetry, select on the histogram the classes corresponding to negative values, right click and
choose the Highlight option. The highlighted values are now represented by a blue star on the base
map previously displayed.
(fig. 20.3-1)
(fig. 20.3-2)
Then, an experimental variogram can be calculated by clicking on the 7th statistical representation,
with 20 lags of 10 m and a proportion of lag of 0.5. The variance of data may be removed from the
graphic by switching off the appropriate button in the Application / Graphic Specific Parameters.
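The experimental variogram computation can be sketched as follows (pure Python, omnidirectional, with the lag settings stated above; the tolerance handling is a simplification of what Isatis does):

```python
import math

def experimental_variogram(points, z, lag, nlags, tol=0.5):
    """Omnidirectional experimental variogram: for each lag k, average
    0.5 * (z_i - z_j)^2 over pairs whose distance falls within
    tol * lag of k * lag (tol = 0.5 is the proportion of lag used above)."""
    sums = [0.0] * nlags
    counts = [0] * nlags
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            k = round(d / lag)  # nearest lag index
            if 1 <= k <= nlags and abs(d - k * lag) <= tol * lag:
                sums[k - 1] += 0.5 * (z[i] - z[j]) ** 2
                counts[k - 1] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Hypothetical profile sampled every 10 m with alternating depths:
pts = [(10.0 * i, 0.0) for i in range(6)]
zs = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(experimental_variogram(pts, zs, lag=10.0, nlags=3))  # -> [0.5, 0.0, 0.5]
```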
(snap. 20.3-2)
(fig. 20.3-3)
In order to perform the fitting step, it is now time to store the experimental variogram with the item
Save in Parameter File of the Application menu of the Variogram Page. You will call it Z bathy.
- The global window, where all experimental variograms, in all directions and for all variables, are displayed.
- The fitting window, where you focus on one given experimental variogram, for one variable and in one direction.
In our case, as the parameter file refers to only one experimental variogram for the single variable Z, there is obviously no difference between the two windows.
(snap. 20.3-3)
The principle consists in editing the Model parameters and checking the impact graphically. You
can also use the variogram initialization by selecting a single structure or a combination of
structures in Model initialization and by adding or not a nugget effect. Here, we choose an
exponential model without nugget. Pressing the Fit button in the Automatic Fitting tab, the procedure automatically fits the range and the sill of the variogram (see the Variogram Fitting section of the User's Guide).
Then go to the Manual Fitting tab and press the Edit button to access the panel used for the Model definition and modify the displayed model. Each modification of the Model parameters can be validated using the Test button in order to update the graphic.
Here, two different structures have been defined (in the Model Definition window, use the Add
button to add a structure, and define its characteristics below, for each structure):
- a nugget effect of 0.0025,
- a stable model with a third parameter equal to 1.45, a range of 600 m and a sill of 3.35.
These parameters lead to a better fit of the model to the experimental variogram.
(snap. 20.3-4)
This model is saved in the Parameter File for future use by clicking on the Run (Save) button.
(fig. 20.3-4) Fitted variogram model for Z (distance in km).
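The fitted model can be evaluated at any distance with the usual stable (powered-exponential) formula. This is only a sketch: it assumes the stated range is the practical range, where the structure reaches about 95 % of its sill, and includes the small 0.0025 nugget kept in the final model.

```python
import math

def stable_variogram(h, sill=3.35, prange=600.0, alpha=1.45, nugget=0.0025):
    """Stable (powered-exponential) variogram plus nugget.
    Assumption: 'prange' is the practical range, where the structure
    reaches about 95% of its sill, i.e.
    gamma(h) = nugget + sill * (1 - exp(-3 * (h / prange)**alpha))."""
    if h == 0.0:
        return 0.0  # a variogram is zero at the origin by definition
    return nugget + sill * (1.0 - math.exp(-3.0 * (h / prange) ** alpha))

# At the 600 m range, the structure sits near 95% of the total sill:
print(round(stable_variogram(600.0), 3))
```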
the following variables in the Output Grid File, where the results will be stored:
To define the neighborhood, you have to click on the Neighborhood button and you will be asked to
select or create a new set of parameters; in the New File Name area enter the name Moving 300m,
then click on OK or press Enter and you will be able to set the neighborhood parameters by clicking
on the respective Edit button.
The neighborhood type is a moving neighborhood. It is an ellipsoid with No Rotation;
- Set the dimensions of the ellipsoid to 300 m and 300 m. Because of the sampling, the neighborhood size does not need to be very large;
- Number of Angular Sectors: 4, in order to avoid data all coming from the same profile;
- Optimum Number of Samples per Sector: 4. A total of 4x4 = 16 samples seems to be a good compromise between reliability of the interpolation and calculation time.
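The angular-sector search can be sketched as follows (a simplified illustration with hypothetical samples; the actual Isatis search also handles rotations and the empty-sector interruption):

```python
import math

def select_neighbors(target, samples, radius=300.0, nsectors=4, per_sector=4):
    """Moving-neighborhood search: split the circle around the target into
    angular sectors and keep the closest samples in each sector, so that
    the retained data cannot all come from a single profile."""
    sectors = [[] for _ in range(nsectors)]
    width = 2.0 * math.pi / nsectors
    for i, (x, y) in enumerate(samples):
        dx, dy = x - target[0], y - target[1]
        d = math.hypot(dx, dy)
        if 0.0 < d <= radius:
            s = int((math.atan2(dy, dx) % (2.0 * math.pi)) / width) % nsectors
            sectors[s].append((d, i))
    kept = []
    for sec in sectors:
        kept += [i for _, i in sorted(sec)[:per_sector]]
    return kept

# Hypothetical: 10 samples on one profile east of the target, one to the west.
samples = [(20.0 * k, 1.0) for k in range(1, 11)] + [(-50.0, -1.0)]
print(select_neighbors((0.0, 0.0), samples))  # 4 from the profile + the lone sample
```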
(snap. 20.3-5)
In order to avoid extrapolation outside the domain, in the Advanced tab, it is possible to interrupt the neighborhood search when there are too many consecutive empty sectors. Tick the Maximum Number of Consecutive Empty Sectors option to activate it and enter a value of 2.
(snap. 20.3-6)
Note - When kriging huge data sets, it is advised to modify the parameters in the Sorting tab in
order to optimize the computations. With a moving neighborhood, the samples are first sorted into a
coarse grid of cells (the maximum number of cells is limited to 500000). This sorting will improve
the performance of the search algorithm.
The sorting parameters DX and DY should be set such that the product of the domain extension along X by the domain extension along Y, divided by the product of DX by DY, is smaller than 500000.
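This constraint is easy to check numerically (the domain size below is hypothetical):

```python
def sorting_cells_ok(extent_x, extent_y, dx, dy, max_cells=500_000):
    """Check that the coarse sorting grid stays below the Isatis limit:
    (extent_x * extent_y) / (dx * dy) must be smaller than 500000 cells."""
    return (extent_x * extent_y) / (dx * dy) < max_cells

# Hypothetical 18 km x 27 km domain sorted into 60 m cells -> 135000 cells:
print(sorting_cells_ok(18_000.0, 27_000.0, 60.0, 60.0))  # -> True
```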
(snap. 20.3-7)
In the Standard (Co-)Kriging panel, a special feature allows you to test the choice of parameters,
through a kriging procedure, on a graphical basis (Test button). A first click within the graphic area
displays the target file (the grid). A second click allows the selection of one grid node in particular.
The target grid node may also be entered in the Test Window / Application / Selection of target
option (see the status line at the bottom of the graphic page), for instance [207,262].
The figure shows the data set, the samples chosen in the neighborhood (the 16 closest points inside a 300 m radius circle) and their corresponding weights. The bottom of the screen recalls the estimation value, its standard deviation and the sum of the weights.
(snap. 20.3-8)
In the Application menu of the Test Graphic Window, click on Print Weights & Results. This produces a printout of the neighborhood information and of the kriging results:
(printout listing the 16 samples selected in the neighborhood with their weights, which sum to 1.000000, and the resulting estimation values)
(snap. 20.3-9)
- In the Contents list, double-click on the Raster item. A new window appears, letting you specify which variable and which color scale you want to display:
  - In the Data area, in the Grid File, select the variable Kriging of bathymetry MO and Maumusson.
  - Specify the title that will be given to the Raster part of the legend, for instance Bathy (m).
  - In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the bathymetry. To create a new color scale, click on the Color Scale button, double-click on New Color Scale, enter a name, Bathy, and press OK. Click on the Edit button. In the Color Scale Definition window:
- In the Bounds Definition, choose User Defined Classes.
- Click on the Bounds button and enter the min and the max bounds (respectively -5 and
15).
- Do not change the number of classes (32).
- Click on the Undefined Values button and select Transparent.
- In the Legend area, switch off the Automatic Spacing between Tick Marks button, enter 5 as the reference tick mark and 2 as the step between the tick marks. Then, specify that you do not want your final color scale to exceed 6 cm. Switch off the Display Undefined Classes as button.
- Click on OK.
In the Item contents for: Raster window, click on Display to display the result.
(snap. 20.3-1)
- It is possible to add other items, such as Isolines defined on the nodes of a grid. For example, you can display, on the bathymetry variable, isolines with 1 m classes.
- You can also display the coast line by adding a Polygons item, as done for the data visualization.
- In the Items list, you can select any item and decide whether or not you want to display its legend. Use the Move Back and Move Front buttons to modify the order of the items in the final display.
- Click on the Display Box tab. Choose Containing a set of items and select the Raster item to define the size of the graphic by reference to the contents of the grid.
- Finally, click on Display to display the result and on OK to close the Item Contents panel. Your final graphic window should be similar to the one displayed hereafter:
(snap. 20.3-2)
The * and [Not saved] symbols indicate, respectively, that some recent modifications have not been stored in the Bathy kriging graphic template, and that this template has never been saved. Click on Application / Store Page to save them. You can now close your window.
20.3.4.2 3D Viewer
Launch the 3D Viewer (Display / 3D Viewer).
To display the bathymetry estimation, drag and drop the Kriging of bathymetry MO and Maumusson variable from the Grid 60x60m file into the display window. In the Page Contents, right-click on the Surface object to edit its properties:
- In the Color tab, make sure the selected variable is Kriging of bathymetry MO and Maumusson. Apply the Bathy color scale created previously.
- In the Elevation tab, you need to select Variable and to choose Kriging of bathymetry MO and Maumusson to define, for each grid cell, the bathymetry as the level Z. Tick Convert into Z Coordinate to calculate the elevation Z from the bathymetry (in depth) as Z = -1 × V + 0.
(snap. 20.3-1)
Tick the Automatic Apply option to automatically assign the defined properties to the graphic
object. If this option is not selected, modifications are applied only when clicking Display.
Tick Legend to display the color scale in the display window. The legend is attached to the current representation. Specific graphic objects may be added from the Display menu, such as the graphic axes and their valuations, the bounding box and the compass.
The Z Scale, in the tool bar, may also be modified to enhance the vertical scale.
Click on File / Save Page As to save the current graphic.
(fig. 20.3-1)
(snap. 20.3-2)
(snap. 20.3-3)
- a new variable Z standardized error filtering, set as v4, which will be equal to the difference between the "true" value and the value estimated by filtering, standardized by the standard deviation.
(snap. 20.3-4)
The Exploratory Data Analysis allows you to locate on the base map the highest errors by highlighting them on the histogram. Adding the bathymetry values to the base map (by setting the Literal Code Variable in Application / Graphical Parameters) makes it possible to study these points in detail.
It is first advised to modify the symbol of the selected points from crosses to points in order to
improve the legibility of the display. To achieve that, you have to access the study parameters in
Preferences / Study Environment, Miscellaneous tab and change the Selected Point symbol in the
Interactive Picking Windows Convention part.
After masking the outliers (with a right click and Mask), you can save the result of this work in a selection variable (Application / Save in Selection). Then, you can perform the kriging again (without filtering), with this selection variable activated in input and, this time, the interpolation grid in output. Of course, the classification of the points as outliers should be done carefully.
(fig. 20.3-2)
(snap. 20.4-1)
(snap. 20.4-2)
In Interpolate / Interpolation / Grid operator, you should create a new selection variable Sel Z bathy MO and Maumusson dilated which takes into account all the grid cells where the Kriging of bathymetry MO and Maumusson variable is defined, plus a band 120 m wide (i.e. 2 cells) around them.
(snap. 20.4-3)
- Sel Z bathy MO and Maumusson buffer: this selection variable defines the buffer zone. It is equal to Sel Z bathy MO and Maumusson dilated minus the area on which the Kriging of bathymetry MO and Maumusson variable is defined.
- Z bathy final Boyard MO and Maumusson: this variable contains the concatenation of the two models previously created by kriging, with priority given to the MO and Maumusson model, as well as undefined values inside the buffer zone.
- DTM area: this selection variable is created so that the interpolation done at the next step is not extrapolated.
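The dilation step behind these selections can be sketched with a small boolean-grid routine (hypothetical grid, pure Python; Isatis's Grid operator works directly on the selection variables):

```python
def dilate(defined, cells=2):
    """Dilate a boolean grid selection by 'cells' cells in each direction
    (a 120 m band for 60 m cells), as a sketch of the Grid operator step."""
    ny, nx = len(defined), len(defined[0])
    out = [[False] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            if defined[j][i]:
                for dj in range(-cells, cells + 1):
                    for di in range(-cells, cells + 1):
                        if 0 <= j + dj < ny and 0 <= i + di < nx:
                            out[j + dj][i + di] = True
    return out

# The buffer zone is the dilated selection minus the original one:
grid = [[False] * 7 for _ in range(7)]
grid[3][3] = True
dilated = dilate(grid)
buffer_zone = [[d and not g for d, g in zip(dr, gr)]
               for dr, gr in zip(dilated, grid)]
print(sum(map(sum, dilated)), sum(map(sum, buffer_zone)))  # -> 25 24
```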
(snap. 20.4-4)
(fig. 20.4-1) Map of the concatenated bathymetric model (X and Y in km; Bathy color scale from -5 to 15 m).
The buffer area is then filled in with a simple moving average in Interpolate / Interpolation / Grid Filling. The result of this last interpolation is stored in the same Z bathy final Boyard MO and Maumusson variable as in input: the variable is overwritten to contain the final bathymetric model. The DTM area selection variable is activated in order not to extrapolate.
The choice of the interpolation algorithm is of little importance because of the limited size of the buffer area.
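The buffer filling can be sketched as an iterative moving average (pure Python, with None marking undefined cells; a simplification of the Grid Filling algorithm):

```python
def fill_buffer(grid, buffer_mask, passes=10):
    """Fill undefined buffer cells with the moving average of their
    defined 8-neighbours, iterating so the fill propagates inward
    (None marks an undefined cell)."""
    ny, nx = len(grid), len(grid[0])
    for _ in range(passes):
        nxt = [row[:] for row in grid]
        for j in range(ny):
            for i in range(nx):
                if buffer_mask[j][i] and grid[j][i] is None:
                    vals = [grid[j + dj][i + di]
                            for dj in (-1, 0, 1) for di in (-1, 0, 1)
                            if (dj or di) and 0 <= j + dj < ny and 0 <= i + di < nx
                            and grid[j + dj][i + di] is not None]
                    if vals:
                        nxt[j][i] = sum(vals) / len(vals)
        grid = nxt
    return grid

# A one-cell buffer gap between a model at 4.0 m and one at 6.0 m:
grid = [[4.0, None, 6.0]]
mask = [[False, True, False]]
print(fill_buffer(grid, mask))  # -> [[4.0, 5.0, 6.0]]
```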
(snap. 20.4-5)
- Tolerance on Direction: 5°
- Lag Value: 10 m
- Number of Lags: 20
Click OK twice to calculate the variogram and get it displayed in a graphic window.
(snap. 20.5-1)
(snap. 20.5-2)
(fig. 20.5-1) Experimental variograms of Z in the N0° and N90° directions (distance in km).
Finally store this experimental variogram with the item Save in Parameter File of the Application
menu of the Variogram Page. You will call it Z bathy anisotropic.
To fit a variogram model, in the Statistics / Variogram Fitting application, define:
- the Parameter File containing the set of experimental variograms: Z bathy anisotropic;
- the Parameter File in which you wish to save the resulting model: Z bathy anisotropic. You may define the same name for both.
Check the toggles Fitting Window and Global Window. The Fitting Window displays one direction
at a time (you may choose the direction to display through Application / Variable & Direction
Selection...), and the Global Window displays all directions in one graphic.
Click on the Edit button in the Manual Fitting tab to open the Model Definition sub-window. You can first initialize the variogram by pressing the Load Model button and selecting the Z bathy model, to begin your modelization with the same parameters. But the model must reflect the anisotropy observed on the experimental variograms:
- You should tick the Anisotropy option for the Stable structure, with a third parameter equal to 1.45, a sill of 3.35 and the following respective ranges along U and V: 800 m and 300 m. The nugget effect stays equal to 0.0025.
This model is saved in the Parameter File by clicking on the Run (Save) button.
(snap. 20.5-3)
(fig. 20.5-2) Fitted anisotropic variogram model for Z in the N0° and N90° directions (distance in km).
20.5.2 Pre-processing
In order to avoid heavy computation time, the method is only illustrated on a specific part of the
area of interest. After validating the analysis of the LGS parameters on this restricted area, you
could perform the estimation on the entire domain.
In the File / Selection / Geographical Box menu, a new selection variable Restricted area is
created in the MO and Maumusson file by selecting only the samples for which the coordinates
are included between:
(snap. 20.5-4)
The dataset is also reduced to select one point every 25 m with the File / Selection / Sampling menu.
(snap. 20.5-5)
Finally, the selection Sampling 25 m containing 18512 samples is extracted into a new points file
MO and Maumusson LGS thanks to the Tools / Copy Variable / Extract Samples application. You
should press the Default button to keep the name of the input variable Z as the name of the
corresponding output variable in the output file MO and Maumusson LGS. Click Run.
(snap. 20.5-6)
- Set the dimensions of the ellipsoid to 1200 m and 1200 m along the U and V directions;
(snap. 20.5-7)
In the Local Grid tab, you should click on the Local Grid button to define the grid on which the
local parameters will be calculated. Create a new file Grid LGS in the existing Targets directory.
The grid is automatically computed in order to geographically overlay the input samples. You
should tick the Graphic Check option to check the superimposition of the grid on the samples.
The Cross-validation tab allows you to define a block size inside of which samples are considered.
Enter a value of 100 m for X and Y and choose to Perform Cross-validation on 50 % of the data
(to reduce the amount of data and the computation time).
In the last Local Parameters tab, you should select the parameters that you wish to estimate locally. In this example, we first choose to test only the rotation, i.e. the directions of anisotropy. The Output Local Base Name area is designed to define a base name for the local parameters. The complete name of each parameter is automatically created by concatenating this chain of characters, the name of the structure (for the variogram model) and the parameter you are testing (Rot, Range, Sill, Third). It appears in the Parameter area. You should call it Z_bathy.
The different basic structures constituting the variogram model defined earlier, as well as the neighborhood item, are listed in the Structure area. Click the Stable structure and select the Parameter: Rot Z to indicate the local parameter you want to test. In the Min and Max boxes, enter the values between which the selected parameter should fluctuate: respectively -90 and 90. Choose a Step of 10 degrees between two consecutive values to be tested.
Finally click Run to launch the calculations.
(snap. 20.5-8)
You can visualize the result of calculations in the Statistics / Exploratory Data Analysis. Tick the
Legend option in the Application / Graphical Parameters menu of the basemap to display the
legend.
(fig. 20.5-3)
After computing the rotation of the variogram model, a second run is achieved to test the ranges,
taking into account the previous calculations. The Input Data, the Model of variogram and the
Neighborhood remain the same.
In the Local Grid tab, tick the Use an Existing Grid option to save the range parameters in the grid
file Targets / Grid LGS previously created.
Do not change anything in the Cross-validation tab.
In the Local Parameters tab, tick the Parameter Already Exists option so as not to erase the variable containing the rotation calculations. Then, click Add Parameter to add a second parameter.
Select the Stable structure and the Range U for parameter. Choose a Min of 600, a Max of 1000
and a Step of 100. Add a third parameter for the Range V with a Min of 100, a Max of 500 and a
Step of 100.
Make sure that the Simultaneous estimation mode is ticked, in order to test all possible combinations of the different values of the ranges.
Click Run.
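The set of combinations tested by the simultaneous mode is simply the Cartesian product of the scanned values, which can be enumerated as follows:

```python
from itertools import product

# Values scanned by the LGS run, as set in the Local Parameters tab:
range_u = range(600, 1001, 100)   # Range U: Min 600, Max 1000, Step 100
range_v = range(100, 501, 100)    # Range V: Min 100, Max 500, Step 100

# Simultaneous estimation mode tests every (U, V) combination:
combos = list(product(range_u, range_v))
print(len(combos))  # 5 values along U x 5 along V -> 25 combinations
```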
(snap. 20.5-9)
- the Input information: variable Z in the Data / MO and Maumusson file with the Restricted area selection,
- the following variables in the Targets / Grid 60x60m Output Grid File, where the results will be stored:
(snap. 20.5-10)
You should click on the Local Parameters button to pop up the Local Parameter Loading box and
define the local models. Click on Local Grid and select the grid Targets / Grid LGS where the
local parameters are stored.
In the Model Per Structure tab, tick the Use Local Rotation (Mathematician Convention) option to make the rotations vary locally. Click Rotation / Z and select the Z_bathy_2_Stable_Rot_Z variable. In the same way, select Use Local Range and choose Z_bathy_2_Stable_Range_U for Range / X and Z_bathy_2_Stable_Range_V for Range / Y.
(snap. 20.5-11)
The map displaying the differences between kriging and kriging using LGS points out the areas with high differences between the maps. The two main conclusions are that the use of LGS reduces the wavelet artefact visible at the border of the main channel, and that LGS also yields more continuous secondary channels, which is closer to reality.
(fig. 20.5-4)
(fig. 20.5-5)
Methodology
22. Image Filtering
This case study demonstrates the use of kriging to filter out the component of a variable which corresponds to the noise. Applied to regular
grids such as images, this method gives convincing results in an efficient manner.
The result is compared with classical filters, which do not claim to suppress the noise but merely to reduce it by dilution.
(snap. 22.1-1)
We set in Preferences / Study Environment the X and Y units for graphics to mm.
Using the File Manager utility, we can check the basic statistics of the P variable that we have just
loaded: it varies from 11 to 71, with a mean of 35 and a standard deviation of 7.
Use the Display facility to visualize the raster contents of the P variable located on the grid. The
large amount of noise, responsible for the fuzziness of the picture, is clearly visible.
(fig. 22.1-1)
Initial Image
(fig. 22.2-1)
(snap. 22.2-1)
(fig. 22.2-2)
In the Report Global Statistics item of the Application Menu, you obtain an exhaustive comparison between the experimental and the theoretical quantiles, as well as the score of the Chi-square test, equal to 9049. This score is much greater than the reference value (for 16 degrees of freedom) obtained in tables: this indicates that the experimental distribution cannot be considered as normal with a high degree of confidence.
(snap. 22.2-2)
Note - We could try to calculate the variogram cloud on this image: nevertheless, for any direction, the smallest distance (one grid mesh) already corresponds to 256 x 255 pairs, the second lag to 256 x 254 pairs, and so on. Needless to say, this procedure takes an enormous amount of time to draw, and selectively picking some "abnormal" pairs is almost impossible. Therefore this option is not recommended.
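As a quick order-of-magnitude check, the pair counts quoted above follow directly from the grid size (the helper below is a hypothetical illustration, not an Isatis feature):

```python
def pairs_at_lag(n, k):
    """Number of point pairs separated by exactly k grid meshes
    along one axis of an n x n regular grid."""
    # each of the n rows contributes (n - k) pairs in that direction
    return n * (n - k)

# For the 256 x 256 image: first lag -> 256*255 pairs, second -> 256*254
print(pairs_at_lag(256, 1))  # 65280
print(pairs_at_lag(256, 2))  # 65024
```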
(snap. 22.2-3)
This figure represents the two directional variograms, which overlay almost perfectly: this tells us that the variable behaves similarly with respect to distance along the two main axes. This is almost enough to claim that the variable is isotropic. Strictly speaking, two orthogonal directional variograms are not sufficient, as an anisotropy could occur along the first diagonal and would not be visible from the two main axes. The study can be completed by calculating the experimental variograms along the main axes and along the two main diagonals: this test confirms, in the present case, the isotropy of the variable. The two experimental directional variograms are stored in a new Parameter File called P.
To fit a model to these experimental curves, we use the Statistics / Variogram Fitting procedure, naming the Parameter File containing the experimental quantity (P) and the one that will ultimately contain the model. You can name it P for convenience, keeping in mind that, although they have the same name, there is no ambiguity between these two files as their contents belong to two different types.
(snap. 22.2-4)
(snap. 22.2-5)
By pressing the Edit button of the main window, you can define the model interactively and check the quality of the fit using any of the graphic windows available (Fitting or Global). Each modification must be validated using the Test button in order for the graphic to be updated. The Automatic Sill Fitting and the Model Initialization of the main window can be used to help you determine the optimal sill and range values for each basic structure constituting the model. A correct fit is obtained by adding a large nugget effect to a very regular behavior corresponding to a Cubic variogram with a range equal to 0.17 mm.
(fig. 22.2-3)
The parameters can also be printed using the Print button in the Model Editing panel.
Model : Covariance part
=======================
Number of variables        = 1
- Variable 1 : P
Number of basic structures = 2
S1 : Nugget effect - Sill = 40.2576
S2 : Cubic - Range = 0.17mm - Sill = 14.7493
Click on Run (Save) to save your latest choice in the model parameter file.
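The fitted model can also be written down explicitly. The sketch below is a hypothetical Python helper (not part of Isatis) that evaluates the nugget + cubic model with the printed parameters, using the standard cubic variogram expression:

```python
def cubic_variogram(h, sill, a):
    """Cubic variogram: very regular (parabolic) behaviour at the origin,
    reaching `sill` exactly at the range `a`."""
    if h >= a:
        return sill
    u = h / a
    return sill * (7 * u**2 - 8.75 * u**3 + 3.5 * u**5 - 0.75 * u**7)

def model(h, nugget=40.2576, sill=14.7493, a=0.17):
    """Fitted model for P: nugget effect + cubic structure (h and a in mm).
    By convention the variogram is 0 at distance 0."""
    if h == 0:
        return 0.0
    return nugget + cubic_variogram(h, sill, a)

# Beyond the range, the model reaches its total sill (nugget + cubic sill):
print(model(0.5))  # 55.0069
```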
(snap. 22.3-1)
When pressing the Neighborhood Edit button, you can set the parameters defining this Image neighborhood. Taking the target node as the reference, the image neighborhood is characterized by the extensions of the rectangle centered on the target node: each extension is specified by its radius. Hence in 2D, a 0x0 neighborhood corresponds to the target node alone, whereas a 1x1 neighborhood includes the eight nodes adjacent to the target node.
target cell of a 1x1 image neighborhood
For some applications, it may be convenient to reach large distances in the neighboring information. However, the number of nodes belonging to the neighborhood also increases rapidly which
may lead to an unreasonable dimension for the kriging system. A solution consists in sampling the
neighborhood rectangle by defining the skipping ratio: a value of 1 takes all information available,
whereas a value of 2 takes one point out of 2 on average. The skipping algorithm manages to keep a
larger density of samples close to the target node and sparser information as the distance increases.
Actually, the sampling density function is inspired from the shape of the variogram function which
means that this technique also takes anisotropy into account.
(snap. 22.3-2)
Prior to running the process on the whole grid, it may be worth checking its performance on one grid node in particular. This can be realized by pressing the Test button, which produces a graphic page where the data information is displayed. Because of the amount of data available (256 x 256), the page shows a solid black square. Using the zooming (or clipping) facility on the graphic area, we can magnify the picture until a limited set of cells is visible (around 20 by 20).
By clicking on the graphic area, we can select the target node (select the one in the center of the
zoomed area). Then the graphic shows the points selected in the neighborhood, displaying their
kriging weight (as a percentage). The bottom of the graphic page recalls the value of the estimate,
the corresponding standard deviation (square root of the variance) and the value for the sum of
weights. The first trial simply reminds us that kriging is an exact interpolator: as a data point is
located exactly on top of the target node, it receives all the weight (100%) and no other information
carries weight.
In order to perform filtering, we must press the Special Model Options button and ask for the Filtering option. The covariance and drift components are now displayed, where you have to select the item that you wish to filter. The principle is to consider that the measured variable (denoted Z) is the direct sum of two uncorrelated quantities, the underlying true variable (denoted Y) and the noise (denoted ε): Z = Y + ε. Due to the absence of correlation, the experimental variogram may be interpreted as the sum of a continuous component (the Cubic variogram) attributed to Y and the nugget effect corresponding to the noise ε: filtering the nugget effect therefore amounts to suppressing the noise from the input image.
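The decomposition Z = Y + ε can be sketched numerically. The following minimal 1D simple-kriging example is an illustration only, not the Isatis implementation: the Gaussian covariance shape and all parameter values are assumptions. It shows how keeping the nugget in the data covariance but removing it from the right-hand side filters the noise out of the estimate:

```python
import numpy as np

def cov_y(h, sill=1.0, a=0.25):
    """Covariance attributed to the continuous signal Y
    (a Gaussian shape is assumed purely for this sketch)."""
    return sill * np.exp(-(h / a) ** 2)

def filter_noise(x, z, x0, nugget=1.0):
    """Estimate the noise-free component Y at x0 by simple kriging.

    The data-to-data covariance is C_Z = C_Y + nugget * I (the noise is
    uncorrelated), while the right-hand side only involves C_Y: the
    nugget component is therefore filtered out of the estimate.
    """
    h = np.abs(x[:, None] - x[None, :])
    c_z = cov_y(h) + nugget * np.eye(len(x))   # data-to-data covariance
    c_0 = cov_y(np.abs(x - x0))                # covariance with Y only
    weights = np.linalg.solve(c_z, c_0)
    return weights @ z

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 21)
z = np.sin(2 * np.pi * x) + rng.normal(0.0, 1.0, x.size)
y_hat = filter_noise(x, z, 0.5)
# The weights no longer pile up on the datum located exactly at the
# target: the estimate is a smoothed value, not an exact interpolation.
```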
(snap. 22.3-3)
When pressing the Apply button, the filtering procedure is automatically resumed on the graphic
page, using the same target grid node as in the previous test: you can check that the weights are now
shared on all the neighboring information, although they still add up to 100%.
Before starting the filtering on the whole grid, the neighborhood has to be tuned. An efficient quality index frequently used in image analysis, called the Signal to Noise Ratio, is provided when displaying the Results (in the Application Menu of the graphic page). Roughly speaking, the larger this quantity, the more accurate the result.
The following table summarizes some trials that you can perform. The average number of data in
the neighborhood is recalled, as it directly conditions the computing time.
The Ratio increases quickly and then seems to converge for a radius equal to 8-9. Trying a neighborhood radius of 10 with a skipping ratio of 2 does not lead to satisfactory results. It is then decided to use a radius of 8 for the kriging step.
Radius   Skipping Ratio   Number of nodes   Signal to Noise Ratio
1        1                9                 3.3
2        1                25                9.1
3        1                49                17.8
4        1                81                29.9
5        1                121               41.5
6        1                169               53.7
7        1                225               63.4
8        1                289               69.5
9        1                361               72.4
10       1                441               73.5
10       2                222               49.37
An interesting exercise is to estimate a target grid node located in the corner of the grid. In order to keep the data pattern unchanged for all the target nodes, including those located on the edge of the field, the field is virtually extended by mirror symmetry. In the following display, the weights attached to virtual points are cumulated with the ones attached to the actual source data.
(snap. 22.3-4)
The final task consists in performing the filtering on the whole grid.
Note - The efficiency of this kriging application is that it takes full advantage of the regular pattern of the information, as it must solve a kriging system with 121 neighborhood data for each of the 65536 grid nodes.
The resulting variable varies from 29 to 45, to be compared with the initial statistics. It can be displayed in the same way as the initial image, with an adapted color scale.
(fig. 22.3-1)
Kriging Filter
This image shows more regular patterns, with larger extensions for the patches of low and high P values. Compared to the initial image, it shows that the noise has clearly been removed.
Z = Y + ε    (eq. 22.4-1)
It is always assumed that the noise is a zero mean quantity, uncorrelated with Y, and whose variance
is responsible for the nugget effect component of the variogram. In order to eliminate the noise, a
good solution is to perform the convolution of several consecutive pixels on the grid: this technique
corresponds to one of the actions offered by the Tools / Grid or Line Smoothing operation.
On a regular grid, the low pass filtering algorithm performs the following very simple operation on
three consecutive grid nodes in one direction:
Z_i <- (1/4)·Z_(i-1) + (1/2)·Z_i + (1/4)·Z_(i+1)    (eq. 22.4-2)
A second pass is also available which enhances the variable and avoids flattening it too much. It
operates as follows:
Z_i <- -(1/4)·Z_(i-1) + (3/2)·Z_i - (1/4)·Z_(i+1)    (eq. 22.4-3)
When performed on a 2D grid and using the two filtering passes, the following sequence is performed on the whole grid:
- filter the initial image along X with the first filtering mode,
If several iterations are requested, the whole sequence is resumed, the result of the previous iteration replacing the initial image at the start of each subsequent iteration. This mechanism can be constrained so that the impact of the filtering on each grid node is not stronger than a cutoff variable (the estimation standard deviation map, for instance): this feature is not used here.
We empirically decide to perform 20 iterations of the two-pass filtering on the initial image (P) and to store the result in a new variable called P smoothed.
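The two filtering passes above can be sketched as follows. This is a minimal illustration, not the Isatis implementation: the edge handling (borders kept unchanged) and the exact ordering of the passes within one iteration are assumptions.

```python
import numpy as np

def low_pass_1d(z):
    """First pass: z_i <- z_{i-1}/4 + z_i/2 + z_{i+1}/4 (edges kept)."""
    out = z.copy()
    out[1:-1] = 0.25 * z[:-2] + 0.5 * z[1:-1] + 0.25 * z[2:]
    return out

def enhance_1d(z):
    """Second pass: z_i <- -z_{i-1}/4 + 3*z_i/2 - z_{i+1}/4 (edges kept)."""
    out = z.copy()
    out[1:-1] = -0.25 * z[:-2] + 1.5 * z[1:-1] - 0.25 * z[2:]
    return out

def smooth_2d(img, iterations=20):
    """One iteration = both passes along one axis, then along the other
    (assumed sequencing; the exact Isatis ordering may differ)."""
    out = img.astype(float)
    for _ in range(iterations):
        out = np.apply_along_axis(low_pass_1d, 0, out)
        out = np.apply_along_axis(enhance_1d, 0, out)
        out = np.apply_along_axis(low_pass_1d, 1, out)
        out = np.apply_along_axis(enhance_1d, 1, out)
    return out

# Both stencils sum to 1, so a constant image is left unchanged:
flat = np.full((8, 8), 5.0)
assert np.allclose(smooth_2d(flat, iterations=3), 5.0)
```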
(snap. 22.4-1)
The result is displayed using the same type of representation as before. Nevertheless, please pay attention to the difference in color coding. The image also shows much more structured patterns, although this time the initial high frequency has only been diluted (and not suppressed), which causes the spotted aspect.
(fig. 22.4-1)
Using the same window Tools / Grid or Line Smoothing, we can try another operator such as the
Median Filtering. This algorithm considers a 1D neighborhood of a target grid node and replaces its
value by the median of the neighboring values. In 2D, the whole grid is first processed along X, and
the result is then processed along Y. If several iterations are required, the whole sequence is
resumed. Here, two iterations are performed with a neighborhood radius of 10 pixels (excluding the
target grid node) so that each median is calculated on 21 pixels. The result is stored in the new variable called P median.
(snap. 22.4-2)
The result is displayed with the same type of representation as before: it is even smoother than the
kriging result, which is not surprising given the length of the neighborhood selected for the median
filter algorithm.
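The median filtering scheme described above can be sketched as follows (an illustration only, not the Isatis implementation; the truncation of the window at the grid edges is an assumption):

```python
import numpy as np

def median_1d(z, radius=10):
    """Replace each value by the median over a centred 1D window of
    2*radius + 1 pixels (window truncated at the edges)."""
    n = len(z)
    return np.array([np.median(z[max(0, i - radius):i + radius + 1])
                     for i in range(n)])

def median_2d(img, radius=10, iterations=2):
    """Process the whole grid along X, then along Y; repeat per iteration."""
    out = img.astype(float)
    for _ in range(iterations):
        out = np.apply_along_axis(median_1d, 1, out, radius)
        out = np.apply_along_axis(median_1d, 0, out, radius)
    return out

# An isolated spike is completely removed by the median:
img = np.zeros((5, 5))
img[2, 2] = 100.0
assert median_2d(img, radius=1, iterations=1).max() == 0.0
```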
(fig. 22.4-2)
Median Filter
The real drawback of these two methods is the lack of control in the choice of the parameters (number of iterations, width of the neighborhood), whereas in the case of kriging, the quantity to be filtered is derived from the model, which relies on statistics calculated on the actual data, and the neighborhood is simply a trade-off between accuracy and computing time.
- Variables (which are defined in the upper part of the window) through their aliases: v* for 1-bit variables and w* for real variables.
- Structural elements, which define a neighborhood between adjacent cells and are called s*. In addition to their extension (defined by its radius in the three directions), the user can choose between the block or the cross element, as described in the next figure:
- Using the input variable P denoised, first apply a threshold, considering as grain any pixel whose value is larger than or equal to 40; otherwise the pixel corresponds to pore. The result is stored in a 1-bit variable called grain (P denoised): in fact this variable is a standard selection variable that can be used in any other Isatis application, where the pores correspond to the masked samples.
- Calculate the connected components and sort them by decreasing size. A connected component is composed of the set of grain pixels which are connected by the structural element. The result, which is the rank of the connected component, is stored in the real variable called cc (P denoised).
(snap. 22.5-1)
The procedure also produces a printout, listing the different connected components by decreasing
size, recalling the cumulative percentage of grain.
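The two steps above can be sketched as follows. This is a hypothetical re-implementation for illustration (cross vs. block structural element, BFS labelling), not the Isatis code:

```python
import numpy as np
from collections import deque

def threshold(img, cutoff=40.0):
    """Grain = pixel value >= cutoff, pore otherwise (1-bit variable)."""
    return img >= cutoff

def connected_components(grain, cross=True):
    """Label grain pixels connected through a 'cross' (4-neighbour) or
    'block' (8-neighbour) structural element; return sizes in
    decreasing order."""
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if not cross:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    labels = np.zeros(grain.shape, dtype=int)
    sizes = []
    for i, j in zip(*np.nonzero(grain)):
        if labels[i, j]:
            continue                      # already visited
        lab = len(sizes) + 1
        queue, size = deque([(i, j)]), 0
        labels[i, j] = lab
        while queue:                      # breadth-first flood fill
            a, b = queue.popleft()
            size += 1
            for da, db in steps:
                x, y = a + da, b + db
                if (0 <= x < grain.shape[0] and 0 <= y < grain.shape[1]
                        and grain[x, y] and not labels[x, y]):
                    labels[x, y] = lab
                    queue.append((x, y))
        sizes.append(size)
    return labels, sorted(sizes, reverse=True)

img = np.array([[41, 41, 0], [0, 0, 0], [0, 50, 45]])
_, sizes = connected_components(threshold(img))
print(sizes)  # [2, 2]
```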
The same procedure is also applied on the three resulting images. The following table recalls some general statistics for the 3 variables:

Resulting Image   P denoised   P median   P smoothed
                  5308         5887       7246
                  11           122        1718
                  1882         1733       1008
                  1864         1650       978
                  1167         1257       862
                  974          842        543
                  174
The different results are produced as images where the pore is painted in black.
(fig. 22.5-1)
(fig. 22.5-2)
(fig. 22.5-3)
22.5.2 Cross-sections
The second way to compare the three resulting images consists in representing each variable as the
elevation along one cross-section drawn through the grid.
This is performed using a Section in 2D Grid representation of the Display facility, applied to the 3
variables simultaneously. The parameters of the display are shown below.
(snap. 22.5-2)
Clicking on the Trace... button allows you to specify the trace that will be represented. For instance,
to represent the first diagonal of the image, enter the following vertices:
(snap. 22.5-3)
In the Display Box tab of the Contents window, modify the Z Scaling Factor to 0.0005.
The three profiles are shown in the next figure and confirm the previous impressions (P denoised in
red, P median in green and P smoothed in blue).
(fig. 22.5-4)
23. Boolean
This case study demonstrates some of the large variety of possibilities
offered by the implementation of the Boolean Conditional Simulations.
This simulation technique belongs to the category of Object Based simulations. It consists in dropping objects with different shapes (defined
by the user) in a 3D volume, fulfilling the conditioning information
defined in terms of pores and grains.
(snap. 23.1-1)
Note - The dataset has been drastically reduced to allow a quick and clear understanding of the conditioning and to reduce the computing time.
The file refers to a Line Structure, which corresponds to the format used for defining several samples gathered along several lines (i.e. boreholes or wells) in the same file. The original file contains five columns, which correspond to:
- the sample number: it is not described in the header and will not be loaded (the software generates it automatically in any case),
- the variable of interest, called facies, which only contains 0 and 1 values. This information is considered as the geometrical input used for conditioning the boolean simulations. One can think of 0 for shale and 1 for sandstone to illustrate this concept. In this case study, the word grain is used for 1 values and the word pore for 0 values.
You need to go to Tools / Convert Gravity Lines to Core Lines, since the boolean simulation tool works only with Core Lines. Convert the Lines using the From Isatis <v9 Lines File option.
(snap. 23.1-2)
The boolean conditional simulation is run on a regular grid which has to be created beforehand
using the File / Create Grid File facility. It consists of a regular 3-D grid containing 201 x 201 x 51
nodes, with a mesh of 50 m x 50 m x 1 m and whose origin is located at point (X=0; Y=0;
Z=100m). The user may check in the Data File Manager that the grid extends from 0m to 10000m
both in X and Y, and vertically from 100m to 150m.
(snap. 23.1-3)
(snap. 23.2-1)
An important point to remember is that, during the simulation process, the conditioning data are assigned to the closest node of the grid. This discretization step implies that the user should be careful: if two samples are assigned to the same grid node although they carry two different indicator values, an error message is issued and the procedure is interrupted.
The procedure also uses the value "-1" to designate a grid node which coincides with a conditioning grain value. This is the reason why the output variable is not created as a 1-bit variable: the software uses the default 32-bit format.
(snap. 23.2-2)
23.2.4 Parameters
The Boolean Conditional Simulation parameters are briefly described hereafter. For more information, the user should refer to the On-Line documentation.
This Object Based simulation technique consists in dropping objects in a 3D space, so that they
intersect the field (3D grid) to be simulated. Obviously, to have an even spread of objects over the
space, we must take into consideration not only the objects whose center lies within the field to be
simulated, but also those located in its immediate periphery. This periphery is called the Guard
Zone and is defined by its dimensions along the three main axes. Here they have been set to 1800m
along X, 1000m along Y and 2m along Z. This implies that the radius of the objects that we consider should not be larger than these values. Note that no test is performed to ensure this compatibility.
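The relation between the Poisson intensity and the average number of objects dropped over the dilated (field + guard zone) domain can be sketched as follows. The dimensions are taken from this case study; the helper names are hypothetical:

```python
import numpy as np

def dilated_volume(field, guard):
    """Volume of the simulation field extended by the guard zone
    on both sides along each axis."""
    return float(np.prod([f + 2.0 * g for f, g in zip(field, guard)]))

# Field: 10000m x 10000m x 50m; guard zone: 1800m, 1000m, 2m.
vol = dilated_volume((10000.0, 10000.0, 50.0), (1800.0, 1000.0, 2.0))

# For a target average of 1000 objects, the equivalent Poisson
# intensity is the average count divided by the dilated volume:
intensity = 1000.0 / vol

# Number of objects actually drawn for one outcome (seeded generator):
rng = np.random.default_rng(1)
n_objects = rng.poisson(intensity * vol)
```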
The objects are dropped according to a random process which requires the following parameters:
- the number of objects to be generated before the simulation stops. Actually, the user has to define either a Poisson intensity or the related average number of objects (dropped in the dilated domain) that the simulation aims to reach (1000 here).
- the seed used to generate the random values: to generate different outcomes of this boolean conditional simulation technique, it is compulsory to change this seed value before each run. Note that, if the seed is set to 0, Isatis automatically generates a different seed at each run.
The boolean simulation algorithm relies on a death and birth process which may either create or
delete objects. Therefore, the average number of objects must be considered as a target number
that will be reached only if the simulation is run for a long time. It is common practice, however,
to provide a Maximum Time that will be used to stop the process prematurely (100).
Moreover, this iterative process is performed in two steps: a preliminary step consists in dropping some initial objects at preferential locations simply to fulfill the conditioning data. These
initial objects must disappear during the process.
A Graphic Output makes it possible, after the run, to monitor the evolution of the total number of objects as well as the proportion of initial objects (not visible without zooming in on the lower left corner).
(fig. 23.2-1)
(snap. 23.2-3)
The density of the objects (regarding their centers) does not have to be even over the whole dilated domain. The Theta function is defined as log P(h) (up to its sign), where P(h) is the probability that some pores extend from z to z+h without encountering any grain in the meantime. The value h corresponds to the Minimum Pore Length defined by the user in terms of layers. Finally, the Theta function can be smoothed by averaging its value over several consecutive layers. For more information, the user should refer to the On-Line documentation.
This Theta function might also be derived from the conditioning information (Calculate from Data
button) and displayed graphically. The picture corresponds to a minimum pore length of 1 and no
smoothing (Number of layers averaged set to 1).
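An estimator of this Theta function from conditioning wells can be sketched as follows. This is an illustrative estimator under simplifying assumptions (wells sampled on the same regular layers, P(h) estimated as a simple proportion across wells), not the Isatis computation:

```python
import numpy as np

def theta_from_wells(wells, h=1):
    """Estimate, for each horizontal level z, the probability P(h) that
    the pore extends over h consecutive layers starting at z without
    meeting a grain, and return Theta = -log P(h)."""
    wells = np.asarray(wells)          # shape (n_wells, n_layers); 1 = grain
    n_wells, n_layers = wells.shape
    theta = np.full(n_layers - h + 1, np.inf)   # inf where P(h) = 0
    for z in range(n_layers - h + 1):
        window = wells[:, z:z + h]
        p = np.mean(np.all(window == 0, axis=1))
        if p > 0:
            theta[z] = -np.log(p)
    return theta

# Two wells, four layers; level 0 is pore in both wells -> Theta = 0 there.
wells = [[0, 1, 0, 0],
         [0, 0, 1, 0]]
theta = theta_from_wells(wells, h=1)
```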
(snap. 23.2-4)
Simultaneously, Isatis calculates and represents three statistical quantities that may help analyzing the quality of the conditioning information and understanding the simulation process:
- the grain proportion, which simply tells us, for a given horizontal grid level, what proportion of the conditioning information corresponds to grain,
- the pore survival function, which gives the average residual length for the pores whose length is larger than a given value, as a function of this value.
Only the Theta function varies when the values for the Minimum Pore Length and the Number of layers averaged are modified. Set the Minimum Pore Length to 3 and check how the graphic is modified.
You can then smooth out this function by setting the number of layers on which the function is calculated to 4.
Finally, the values of the Theta variable are displayed in a scrolled editable area, where the user can
modify them by hand. Any value lying between 0 and 1 is admissible. Nevertheless one must
remember that a value of 0 at a given horizontal grid level implies that no object may be generated
at this level. This constraint must at least be compatible with the conditioning information.
For the sake of simplicity, the rest of this chapter is processed with the Minimum Pore Length set to 1 and no smoothing.
23.3 Simulations
This paragraph is focused on the description of the Object Law. Each example describes an Object
Family Definition and illustrates the result through the display of a simulation outcome.
23.3.1 Exercise 1
The first trial uses the already described parameters for a single type of parallelepipedic object. All
the parallelepipedic objects have the same geometrical characteristics:
- extension along Z = 2m
The next figure represents a display of the Z level number 10 of the grid, using the Display facility. A grid node which does not intersect any object (value 0) is painted in black; if at least one object is intersected (value 1), the color is white. If a conditioning grain coincides with the grid node (value -1), the node is painted in grey. Due to the very fine definition of the grid (the picture corresponds to 200 x 200 grid nodes), the conditioning sample at this level (located at coordinates X=5000m, Y=5000m) is hardly visible.
Simulation 1 (axes: X and Y in km)
(fig. 23.3-1)
23.3.2 Exercise 2
In this exercise, while keeping the parallelepipedic objects, set their vertical thickness equal to 3m. This run does not work and returns errors specifying that some conditioning grains have not been covered successfully by objects. In fact, this simply reveals the incompatibility between the object description and the conditioning data: as a matter of fact, when reading the conditioning data along the lines, you can find (several times) the occurrence of the sequence 0,1,1,0, which implies the presence of an object between two conditioning pores which are precisely 3m apart; this is not compatible with the thickness of the objects, which is constantly equal to 3m.
23.3.3 Exercise 3
In this exercise, the parallelepipeds are replaced by lower half ellipsoids. The object extension is kept unchanged, except for the vertical extension, which is set to 4m.
Simulation 3 (axes: X and Y in km)
(fig. 23.3-2)
The next picture presents a vertical section (XOZ) which intersects the 3D grid at coordinate
Y=5000m (IY = 101). Do not forget to change the projection definition to XOZ in the Camera tab.
The third dimension may be extended for better legibility. This view is convenient to check that the
conditioning is also fulfilled when the sample density is large.
(fig. 23.3-3)
Note that the vertical extension of the ellipsoids in this exercise (4m), though larger than in the previous exercise (3m high parallelepipeds), does not cause any problem, as the thickness of the ellipsoids is not constant over the whole object.
23.3.4 Exercise 4
This exercise simulates lower half sinusoidal objects. This type of object requires 6 parameters:
- the extension of the object along the sine function in the horizontal plane: 4000m,
Simulation 4 (axes: X and Y in km)
(fig. 23.3-4)
23.3.5 Exercise 5
Several types of objects may be mixed in the same simulation outcome. For instance, combine the three types of objects already presented and set the desired proportions for each family of objects.
Simulation 5 (axes: X and Y in km)
(fig. 23.3-5)
23.3.6 Exercise 6
In this exercise, set the object type back to the lower half ellipsoidal objects, in order to demonstrate
the non constant geometrical parameters. Simply modify the definition of the extension along X of
the ellipsoids: instead of being constantly equal to m=1800m, a tolerance s=1000m is defined in
order to allow them to vary uniformly between m-s and m+s.
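Drawing such randomized object parameters can be sketched as follows (hypothetical helpers; the uniform law over [m-s, m+s] is the one described in the text, and the rotation case anticipates the next exercise):

```python
import numpy as np

def draw_extension(rng, mean=1800.0, tol=1000.0):
    """Extension along X drawn uniformly in [mean - tol, mean + tol] (m)."""
    return rng.uniform(mean - tol, mean + tol)

def draw_rotation(rng, mean=45.0, tol=20.0):
    """Rotation angle drawn uniformly in [mean - tol, mean + tol] degrees."""
    return rng.uniform(mean - tol, mean + tol)

rng = np.random.default_rng(0)
extensions = [draw_extension(rng) for _ in range(10000)]
angles = [draw_rotation(rng) for _ in range(10000)]
# Every extension lies in [800, 2800] m and every angle in [25, 65] degrees.
```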
Simulation 6 (axes: X and Y in km)
(fig. 23.3-6)
23.3.7 Exercise 7
This final example consists in playing with the rotation angle. Keeping the initial lower half ellipsoids, allow the rotation angle to vary in an interval centered on 45 degrees with a tolerance equal to 20 degrees (i.e. from 25 to 65 degrees from the E-W direction).
Simulation 7 (axes: X and Y in km)
(fig. 23.3-7)