
ISATIS 2016

Case Studies

Published, sold and distributed by GEOVARIANCES


49 bis Av. Franklin Roosevelt, BP 91, 77212 Avon Cedex, France
Web: http://www.geovariances.com

Isatis Release 2016, March 2016

Contributing authors:
Catherine Bleins
Matthieu Bourges
Jacques Deraisme
François Geffroy
Nicolas Jeanne
Ophélie Lemarchand
Sébastien Perseval
Jérôme Poisson
Frédéric Rambert
Didier Renard
Yves Touffait
Laurent Wagner

All Rights Reserved


© 1993-2016 GEOVARIANCES
No part of the material protected by this copyright notice may be reproduced or utilized in any form
or by any means including photocopying, recording or by any information storage and retrieval system, without written permission from the copyright owner.

"...

There is no probability in itself. There are only probabilistic models. The


only question that really matters, in each particular case, is whether this or
that probabilistic model, in relation to this or that real phenomenon, has or
has not an objective meaning..."
G. Matheron
Estimating and Choosing - An Essay on Probability in Practice
(Springer Berlin, 1989)

Table of Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
2. About This Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
Mining. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
4. In Situ 3D Resource Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
4.1 Workflow Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
4.2 Presentation of the Dataset & Pre-processing . . . . . . . . . . . . . . . . . . . .16
4.3 Variographic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35
4.4 Kriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .68
4.5 Global Estimation With Change of Support. . . . . . . . . . . . . . . . . . . . . .78
4.6 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89
4.7 Displaying the Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
5. Non Linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .149
5.1 Introduction and overview of the case study . . . . . . . . . . . . . . . . . . . . .150
5.2 Preparation of the case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .152
5.3 Global estimation of the recoverable resources . . . . . . . . . . . . . . . . . . .171
5.4 Local Estimation of the Recoverable Resources . . . . . . . . . . . . . . . . . .183
5.5 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .223
5.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .240
6. 2D Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .249
6.7 Workflow Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .250
6.8 From 3D to 2D Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .251
6.9 2D Estimations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .260
6.10 3D Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .278
6.11 2D-3D Comparison. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .286
Oil & Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .287
8. Property Mapping & Risk Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . .289
8.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .290
8.2 Estimation of the Porosity From Wells Alone . . . . . . . . . . . . . . . . . . . .293
8.3 Fitting a Variogram Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .297
8.4 Cross-Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .299
8.5 Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .302
8.6 Estimation with External Drift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .306
8.7 Cokriging With Isotopic Neighborhood . . . . . . . . . . . . . . . . . . . . . . . . .309
8.8 Collocated Cokriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .317

8.9 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326


9. Non Stationary & Volumetrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .335
9.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
9.2 Creating the Output Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .338
9.3 Estimation With Wells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .341
9.4 Estimation With Wells and Seismic . . . . . . . . . . . . . . . . . . . . . . . . . .348
9.5 Estimation Using Kriging With Bayesian Drift . . . . . . . . . . . . . . . . .362
9.6 Assessing the Variability of the Reservoir Top . . . . . . . . . . . . . . . . . .371
9.7 Volumetric Calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .377

10. Plurigaussian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .401
10.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .402
10.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .410
10.3 Creating the Structural Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .411
10.4 Creating the Working Grid for the Upper Unit . . . . . . . . . . . . . . . . .412
10.5 Computing the Proportions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .421
10.6 Lithotype Rule and Gaussian Functions . . . . . . . . . . . . . . . . . . . . . .437
10.7 Conditional Plurigaussian Simulation . . . . . . . . . . . . . . . . . . . . . . . .450
10.8 Simulating the Lithofacies in the Lower Unit . . . . . . . . . . . . . . . . . .453
10.9 Merging the Upper and Lower Units . . . . . . . . . . . . . . . . . . . . . . . . .465

11. Oil Shale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .469
11.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .470
11.2 Exploratory Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .474
11.3 Fitting a Variogram Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .479
11.4 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .484
11.5 Displaying Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .488

12. Multi-layer Depth Conversion With Isatoil . . . . . . . . . . . . . . . . . . . . .491
12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .492
12.2 Field Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .492
12.3 Loading the Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .495
12.4 Master File Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .502
12.5 Building the Reservoir Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . .517
12.6 Filling the Units With Petrophysics . . . . . . . . . . . . . . . . . . . . . . . . . .530
12.7 Volumetrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .536
12.8 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .561

13. Geostatistical Simulations for Reservoir Characterization . . . . . . . . .571
13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .572
13.2 General Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .573
13.3 Data Import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .574

13.4 Structural Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .578


13.5 Modeling 3D Porosity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .595
13.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .629
Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .631
15. Pollution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .633
15.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .634
15.2 Univariate Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .638
15.3 Exploratory Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .639
15.4 Fitting a Variogram Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .647
15.5 Cross-Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .651
15.6 Creating the Target Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .658
15.7 Kriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .660
15.8 Displaying the Graphical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . .665
15.9 Multivariate Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .670
15.10 Case of Self-krigeability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .685
15.11 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .689
16. Young Fish Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .701
16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .702
16.2 Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .713
16.3 Global Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .717
17. Acoustic Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .723
17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .724
17.2 Global Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .730
18. Air quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .739
18.1 Presentation of the data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .740
18.2 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .745
18.3 Exploratory Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .750
18.4 Fitting a variogram model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .757
18.5 Kriging of NO2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .760
18.6 Displaying the graphical results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .764
18.7 Multivariate approach. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .768
18.8 Cross-validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .780
18.9 Gaussian transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .784
18.10 Quantifying a local risk with Conditional Expectation (CE) . . . . . . .788
18.11 NO2 univariate simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .791
18.12 NO2 multivariate simulations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .795
18.13 Simulation post-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .800
18.14 Estimating population exposure . . . . . . . . . . . . . . . . . . . . . . . . . . . . .805

19. Soil pollution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .811
19.1 Presentation of the data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .812
19.2 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .816
19.3 Visualization of THC grades using the 3D viewer . . . . . . . . . . . . . . .820
19.4 Exploratory Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .822
19.5 Fitting a variogram model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .829
19.6 Selection of the duplicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .832
19.7 Kriging of THC grades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .833
19.8 Intersection of interpolation results with the topography . . . . . . . . . .838
19.9 3D display of the estimated THC grades . . . . . . . . . . . . . . . . . . . . . .852
19.10 THC simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .854
19.11 Simulation post-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .865
19.12 Displaying graphical results of risk analysis with the 3D Viewer . .871

20. Bathymetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .873
20.1 Presentation of the Data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .874
20.2 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .880
20.3 Interpolation by kriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .894
20.4 Superposition of models and smoothing of frontiers . . . . . . . . . . . . .916
20.5 Local GeoStatistics (LGS) application to bathymetry mapping . . . . .922

Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
22. Image Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .939
22.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .940
22.2 Exploratory Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .942
22.3 Filtering by Kriging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .949
22.4 Other Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .955
22.5 Comparing the Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .959

23. Boolean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .965
23.1 Presentation of the Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .966
23.2 Boolean Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .969
23.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .975

Introduction


2. About This Manual


A set of case studies is developed in this manual. It is mainly designed:

- for new users, to get familiar with the software and to provide guidelines for carrying a study through,

- for all users, to improve their geostatistical knowledge by following detailed geostatistical workflows.

Basically, each case study describes as precisely as possible how to carry out some specific calculations in Isatis. The data sets are located on your disk in a sub-directory, called Datasets, of the Isatis installation directory.
You may follow the workflow proposed in the manual (all the main parameters are described) and then compare the results and figures given in the manual with the ones you get from your test.
Most case studies are dedicated to a given field (Mining, Oil & Gas, Environment, Methodology) and are therefore grouped together in appropriate sections. However, new users are advised to run as many case studies as possible, whatever their field of application. Indeed, each case study describes different functions of the package which are not necessarily exclusive to one application field but may be useful for others.
Several case studies, namely In Situ 3D Resources Estimation (Mining), Property Mapping (Oil & Gas) and Pollution (Environment), cover almost the entire classic geostatistical workflow: exploratory data analysis, data selections and variography, monovariate or multivariate estimation, simulations.
The other case studies are more specific and mainly deal with particular Isatis facilities, as described below:

- Non Linear: anamorphosis (with and without information effect), indicator kriging, disjunctive kriging, uniform conditioning, service variables and simulations.

- Non Stationary & Volumetrics: non stationary modeling, external drift kriging and simulations, volumetric calculations, spill point calculation, variable editor.

- Plurigaussian: an innovative facies simulation technique.

- Oil Shale: fault editor.

- Isatoil: multi-layer depth conversion with the Isatoil advanced module.

- Young Fish Survey, Acoustic Fish Survey: polygons editor, global estimation.

- Image Filtering: image filtering, grid or line smoothing, grid operator.

- Boolean: boolean conditional simulations.

Note - Not all case studies are necessarily updated for each Isatis release. Therefore, the last update and the corresponding Isatis version are systematically given in the introduction of each case study.

Mining



4. In Situ 3D Resource Estimation

This case study is based on a real 3D data set kindly provided by Vale (Carajás mine, Brazil).

It demonstrates particular features related to the Mining industry: domaining, processing of three-dimensional data, variogram modeling and kriging. A brief description of global estimation with change of support and block simulations is also provided. A simple application of local parameters in kriging and simulations is presented.

Reminder: while using Isatis, the on-line help is accessible at any time by pressing F1 and provides a full description of the active application.

Important Note:
Before starting this study, it is strongly advised to read the Beginner's Guide, especially the following paragraphs: Handling Isatis, Tutorial: Familiarizing With Isatis Basics, and Batch Processing & Journal Files.
All the data sets are available in the Isatis installation directory (usually C:\program file\Geovariances\Isatis\DataSets\). This directory also contains a journal file including all the steps of the case study. In case you get stuck during the case study, use the journal file to perform all the actions according to the book.

Last update: Isatis version 2014


4.1 Workflow Overview

This case study aims to give a detailed description of the kriging workflow, and a brief introduction to the grade simulation workflow, for iron grades in a productive iron mine. This overview lists the Isatis applications in the order in which they are used in the case study. The list is nearly complete but not exhaustive.
Next to each application, two links are provided:

- the first link opens the application description in the Users Guide: this allows the user to have a complete description of the application as it is implemented in the software;

- the second link sends the user to the corresponding practical application example in the case study.

Applications in bold are the most important for achieving kriging and simulation:
File/Import Users Guide Case Study


Import the raw drillhole data.

File/Selection/Macro Users Guide Case Study


Creates a macro-selection variable for each assay of the raw data, based on the lithological code. It is used to define two domains: rich ore and poor ore.

File/Selection/Geographic Users Guide Case Study


Creates a geographic selection to mask 4 drillholes outside of the orebody.

Tools/Copy Variable/Header to Line Users Guide Case Study


Copies the selection masking the drillhole headers to all assays of the drillholes.

Tools/Regularization Users Guide Case Study


Assay compositing tool. A comparison of regularization by length and by domains is made. This step is compulsory to make the data additive for kriging. The composites regularized by domains are kept for the rest of the study.

Statistics / Quick Statistics Users Guide Case Study


Different modes for making statistics are illustrated: numerical statistics by domain, graphic displays with boxplots or swathplots.

Statistics/Exploratory Data Analysis Users Guide Case Study


Isatis fundamental tool for QA/QC, 2D data displays, statistical and variographic analysis.

Statistics/Variogram Fitting Users guide Case Study


Isatis tool for variogram modeling. Different modes are illustrated:

- manual: the user chooses the basic structures himself (with their types, anisotropy, ranges and sills), entering the parameters at the keyboard or, for ranges/sills, interactively in the Fitting Window. This is used for modeling the variogram of the indicator of rich ore;

- automatic: the model is entirely defined (ranges, anisotropy and sills) from the definition of the types and number of nested structures the user wants to fit. This is used for modeling the Fe grade of rich ore.

Statistics/Domaining/Border Effect Users Guide Case Study


Calculates statistical quantities based on domain indicators and grades, to visualize the behaviour of grades when approaching the transition between domains.

Statistics/Domaining/Contact Analysis Users Guide Case Study


Represents graphically the behaviour of the mean grade as a function of the distance of samples
to the contact between two domains.

Interpolate/Estimation/(Co-)Kriging Users Guide Case Study


Isatis kriging application. It is applied here to krige (1) the indicator of rich ore and (2) the Fe grade of rich ore on 75mx75mx15m blocks. In order to take the geomorphology of the deposit into account, kriging with Local Parameters is performed: the main axis of anisotropy and the neighborhood ellipsoid are changed between the northern and southern parts of the deposit.

Statistics/Gaussian Anamorphosis Modeling Users Guide Case Study


Isatis tool for normal score transform and modeling of the histogram on composite support. This step is compulsory for any non-linear application, including simulations. It is applied here to Fe in the rich ore domain.
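To make the normal score transform concrete, here is a minimal sketch in Python (illustrative only, not Isatis code; the sample values are arbitrary): each value is replaced by the standard normal quantile of its empirical cumulative frequency.

    import numpy as np
    from scipy.stats import norm

    def normal_score(values):
        # Empirical normal score (gaussian anamorphosis on samples):
        # rank the data, convert ranks to frequencies in (0, 1),
        # then map the frequencies to standard normal quantiles.
        values = np.asarray(values, dtype=float)
        ranks = values.argsort().argsort()      # 0 .. n-1
        freq = (ranks + 0.5) / len(values)      # strictly inside (0, 1)
        return norm.ppf(freq)

    # A few Fe grades (%) as a toy example
    print(normal_score([65.9, 66.7, 67.7, 60.2, 63.4]))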

Statistics/Support Correction Users Guide Case Study


Isatis tool for modeling grade histograms on block support. Useful for global estimation and for
non linear techniques (see Non Linear case study).
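For reference, the variance reduction behind any change of support can be summarized by the classical geostatistical relation (a general formula, not a description of the Isatis algorithm): if $v$ denotes the block and $\bar\gamma(v,v)$ the mean variogram value between two points sweeping the block, then

$$\mathrm{Var}(Z_v) = \sigma^2 - \bar\gamma(v,v)$$

so the block distribution is a contracted version of the point distribution.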

Tools/Grade Tonnage Curves Users Guide Case Study


Calculates and represents graphically the grade tonnage curves. Among the different possible modes, we compare the kriged panels and the distribution of grades on blocks obtained after support correction (see the definitions below).
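As a reminder, the grade tonnage curves rest on standard definitions (not Isatis-specific): for a cutoff $z$ applied to block grades $Z_v$,

$$T(z) = P[Z_v \ge z], \qquad Q(z) = E[Z_v \, \mathbf{1}_{Z_v \ge z}], \qquad m(z) = \frac{Q(z)}{T(z)},$$

i.e. the tonnage (as a proportion), the metal quantity and the mean grade above cutoff.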

File/Create Grid File Users Guide Case Study


Creates a grid of blocks 25mx25mx15m, on which we will simulate the ore type (1 for rich ore,
2 for poor ore) and the grades of Fe-P-SiO2.

Tools/Migrate Grid to Point Users Guide Case Study


Transfers the selection variable defining the orebody from the panels 75mx75mx15m to the
blocks 25mx25mx15m.


Interpolate/Conditional Simulations/Sequential Indicator/Standard Neighborhood Users Guide


Case Study
Simulations of the indicator of rich ore by the SIS method.

Statistics/Gaussian Anamorphosis Modeling Users Guide Case Study


This application is run again, for the purpose of a multivariate grade simulation, to transform the Fe-P-SiO2 grades of the composites. The P grade distribution is modelled differently from Fe and SiO2, because of the presence of many values at the detection limit. The zero-effect distribution type is then applied. As a result, the gaussian value assigned to P has a truncated gaussian distribution.

Statistics/Exploratory Data Analysis Users Guide Case Study


The Exploratory Data Analysis is used for calculating the experimental variogram on the gaussian transform of P.

Statistics/Variogram Fitting Users guide Case Study


The variogram fitting is used with the Truncation Special Option for modeling the experimental variogram of the gaussian transform of P.

Statistics/Statistics/Gibbs Sampler Users guide Case Study


The Gibbs Sampler algorithm is used to generate the final gaussian transforms of P with a true
Gaussian distribution instead of a truncated one.

Statistics/Exploratory Data Analysis Users Guide Case Study


The Exploratory Data Analysis is used now for calculating the experimental variogram on the
gaussian transform of Fe-P-SiO2.

Statistics/Variogram Fitting Users guide Case Study


The variogram fitting is used for modeling the threevariate gaussian experimental variograms of the gaussian transform of Fe-P-SiO2. The Automatic Sill Fitting mode is used: the sills of all basic structures are automatically calculated using a least squares minimization procedure.

Statistics/Modeling/Variogram Regularization Users guide Case Study


The threevariate variogram model of the gaussian grades is regularized on the block support. A
new experimental variogram is then obtained.

Statistics/Variogram Fitting Users guide Case Study


The variogram fitting is used for modeling the threevariate gaussian experimental variograms of
the gaussian transform of Fe-P-SiO2 on the block support (25mx25mx15m). The Automatic Sill
Fitting mode is used.


Statistics/Modeling/Gaussian Support Correction Users guide Case Study


Transforms the point anamorphosis and the variogram model referring to the gaussian variables
regularized on the block support. The result is a gaussian anamorphosis on a block support and a
variogram model referring to the block gaussian variables (0-mean, variance 1). These steps are
compulsory for carrying out Direct Block Simulations.

Interpolate/Conditional Simulations/Direct Block Simulations Users Guide Case Study


Simulations using the Turning Bands technique in the discrete gaussian model framework
(DGM).
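For background, the discrete gaussian model relations can be summarized as follows (standard DGM formulas, quoted here as a reminder rather than as the exact Isatis implementation): if the point anamorphosis is expanded on Hermite polynomials $H_n$,

$$Z = \Phi(Y) = \sum_n \phi_n H_n(Y),$$

the block anamorphosis keeps the same coefficients, damped by a change-of-support coefficient $r \in (0,1]$:

$$Z_v = \Phi_v(Y_v) = \sum_n \phi_n r^n H_n(Y_v),$$

with $r$ chosen so that $\mathrm{Var}(Z_v) = \sum_{n \ge 1} \phi_n^2 r^{2n}$ matches the block variance deduced from the variogram model.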

Statistics/Variogram on Grid Users Guide Case Study


Calculates, for QC purposes, the experimental variograms on the simulated gaussian block values.

Statistics/Data Transformation/Raw<->Gaussian Transformation Users guide Case Study


Transforms the block gaussian simulations into raw block values.

Tools/Copy Statistics/ Grid-> Grid Users Guide Case Study


Calculates rich ore tonnage and metal quantities in the panels 75mx75mx15m from the simulated blocks 25mx25mx15m.

File/Calculator Users Guide Case Study


Transforms the previous results into real ore tonnages and metals.

Tools/Simulation Post-Processing Users Guide Case Study


Presents examples of Post-Processing of simulations.

3D viewer Users Guide Case Study


A brief description of the 3D viewer module.


4.2 Presentation of the Dataset & Pre-processing

The data set is located in the Isatis installation directory (sub-directory Datasets/In_situ_3D_resource_estimation) and consists of two ASCII files:

- borehole measurements are stored in the ASCII file called boreholes.asc;

- a simple 3D geological model resulting from previous geological work (block size: 75 m horizontally and 15 m vertically) is provided in a 3D grid file called block model_75x75x15m.asc.

Firstly, a new study has to be created using the File / Data File Manager facility; then, it is advised to verify the consistency of the units defined in the Preferences / Study Environment / Units window. In particular, it is suggested to use:

- Input Output Length Options:
  Default Unit... = Length (m)
  Default Format... = Decimal (10,2)

- Graphical Axis Units:
  X Coordinate = Length (km)
  Y Coordinate = Length (km)
  Z Coordinate = Length (m)

4.2.1 Borehole data


4.2.1.1 Data import
The boreholes.asc file begins with a header (commented by #) which describes its contents:
#
# structure=line , x_unit=m , y_unit=m , z_unit=m
#
# header_field=1 , type=alpha , name="drillhole ID"
# header_field=2 , type=xb , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# header_field=3 , type=yb , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# header_field=4 , type=zb , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# header_field=5 , type=numeric , name="depth" , ffff="    " , bitlength=32 ;
#                  f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# header_field=6 , type=numeric , name="inclination" , ffff="    " , bitlength=32 ;
#                  f_type=Decimal , f_length=8 , f_digits=2 , unit="deg"
# header_field=7 , type=numeric , name="azimuth" , ffff="    " , bitlength=32 ;
#                  f_type=Decimal , f_length=8 , f_digits=2 , unit="deg"
#
# field=1 , type=xe , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# field=2 , type=ye , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# field=3 , type=ze , f_type=Decimal , f_length=8 , f_digits=2 , unit="m"
# field=4 , type=numeric , name="Sample length" , ffff="    " , bitlength=32 ;
#                  f_type=Decimal , f_length=6 , f_digits=2 , unit="m"
# field=5 , type=numeric , name="Fe" , ffff="    " , bitlength=32 ;
#                  f_type=Decimal , f_length=6 , f_digits=2 , unit="%"
# field=6 , type=numeric , name="P" , ffff="    " , bitlength=32 ;
#                  f_type=Decimal , f_length=6 , f_digits=2 , unit="%"
# field=7 , type=numeric , name="SiO2" , ffff="    " , bitlength=32 ;
#                  f_type=Decimal , f_length=6 , f_digits=2 , unit="%"
# field=8 , type=numeric , name="Al2O3" , ffff="    " , bitlength=32 ;
#                  f_type=Decimal , f_length=6 , f_digits=2 , unit="%"
# field=9 , type=numeric , name="Mn" , ffff="    " , bitlength=32 ;
#                  f_type=Decimal , f_length=6 , f_digits=2 , unit="%"
# field=10 , type=alpha , name="Lithological code ALPHA" , ffff="    "
# field=11 , type=numeric , name="Lithological code INTEGER" , ffff="    " , bitlength= 8 ;
#                  f_type=Integer , f_length= 4 , unit=" "
#
#++++ --------- +++++++++ --------- +++++++++ --------- +++++++++
#++++++++++ --------- +++++++++ --------- +++++++++ --------- +++++++++ -------- +++++++++ --------- ---------
*---1 026   1400.00   -195.00    804.21    144.46     90.00      0.00
1   1400.00   -195.00   799.71   4.50   65.90   0.13   0.20   0.90   0.07   6   6
2   1400.00   -195.00   795.32   4.39   66.70   0.12   0.10   0.90   0.08   6   6
3   1400.00   -195.00   791.22   4.10   67.70   0.11   0.20   0.50   0.08   3   3

The samples are organized along lines and the file contains two types of records:

- The header record (for collars), which starts with an asterisk in the first column and introduces a new line (i.e. borehole).

- The regular record, which describes one core sample of a borehole.

The file contains two delimiter lines which define the offsets for both records.
The dataset is read using the File / Import / ASCII procedure and stored in two new files of a new directory called Mining Case Study:

- The file Drillholes Header, which contains the header of each borehole, stored as isolated points.

- The file Drillholes, which contains the cores measured along the boreholes.

(snap. 4.2-1)


You can check in File / Data File Manager (by pressing "s" for statistics on the Drillholes file) that the data set contains 188 boreholes, representing a total of 5766 samples. There are five numeric variables (heterotopic dataset), whose statistics are given in the next table (using Statistics / Quick Statistics...):

          Number   Minimum   Maximum    Mean   St. Dev.
Al2O3       3591      0.07     44.70    1.77       4.14
Fe          5069      4.80     69.40   60.51      14.19
Mn          5008      0.       30.70    0.58       1.75
P           5069      0.        1.      0.06       0.08
SiO2        3594      0.05     75.50    1.54       4.32

We will focus mainly on the Fe variable. Also note the presence of an alphanumeric variable called Lithological code Alpha.

4.2.1.2 Borehole data visualization without the 3D viewer

Note - To visualize boreholes with the Isatis 3D viewer module, see the dedicated paragraph at the end of this case study.
All the 2D Display facilities are explained in detail in the Displaying & Editing Graphics chapter of the Beginner's Guide.
To visualize the lines without the 3D viewer, perform the following steps:

- Click on Display / New Page.

- In the Contents, for the Representation Type, choose Perspective.

- Double-click on Lines. An Item Contents for: Lines window appears:
  - In the Data area, select the file Mining Case Study/Drillholes, without selecting any variable, as we are looking for a display of the borehole geometry.
  - Click on Display, and OK. The Lines appear in the graphic window.

- To change the View Point, click on the Camera tab and choose for instance:
  - Longitude = -46
  - Latitude = 20.

- Using the Display Box tab, deselect the toggle Automatic Scales and stretch the vertical dimension Z by a factor of 3.

- Click on Display.

You should obtain the following display. You can save this template to automatically reproduce it later: just click on Application / Store Page As in the graphic window.

(fig. 4.2-1)

The data set is contained in the following portion of space:

     Minimum     Maximum
X    0.009 km    3.97 km
Y    -0.35 km    3.77 km
Z    -54.9 m     +811.8 m

Most of the boreholes are vertical and horizontally spaced approximately every 150m. The vertical
dimension is oriented upwards.

4.2.1.3 Creation of domains

In order to demonstrate Isatis capabilities linked to domaining, a simplified approach is presented here. It consists in splitting the assays into two categories:

- the first one, called rich ore, corresponds to the lithological codes 1, 3 and 6;

- the second one, called poor ore, corresponds to the lithological codes 10 and above.

A macro-selection final lithology[xxxxx] is created using File / Selection / Macro...

After asking to create a New Macro Selection Variable and defining its name, final lithology, in the Data File, you have to click on New.


(snap. 4.2-1)

For creating the Rich ore, Poor ore and Undefined indices, you should give the name you want (this has to be repeated three times). Then, in the bottom part of the window, you define the rules to apply. For each rule, you have to choose which variable it depends on, here Lithological Code Integer, and the criterion to apply among the list you get by clicking on the button proposing Equals as default:

- in the case of Rich ore, you choose Is Lower or Equals to 9;

- in the case of Poor ore, you choose to match 2 rules (see snapshot on the previous page);

- in the case of Undefined, you choose to match any of two rules (see next snapshot).


(snap. 4.2-2)

4.2.1.4 Drillholes selection

From the display of the drillholes, we can see that 4 of them lie outside the area covered by the other drillholes. We will mask these drillholes for the rest of the study by using the File / Selection / Geographic menu.
The File / Selection / Geographic procedure is used to visualize and to perform a masking operation based on complete boreholes or, more selectively, on composites within a borehole.
We create the selection mask drillholes outside in the Drillholes header file.


(snap. 4.2-1)

When pressing the "Display as Points" button, the following graphic window opens, representing by a green + symbol (according to the menu Preferences / Miscellaneous) the headers of all the boreholes in a 2D XOY projection.

(snap. 4.2-2)

By picking the 4 boreholes with the left mouse button, their symbols blink; they can then be masked by using the menu button of the mouse and clicking on Mask. The 4 masked boreholes are then represented with a red square (according to the menu Preferences / Miscellaneous).
In the Geographic Selection window, the number of selected samples (i.e. boreholes) appears (184 out of 188). To store the selection you must click on Run.


(snap. 4.2-3)

This selection is defined on the drillhole collars. In order to apply this selection to all samples of the
drillholes, a possible solution is to use the menu Tools / Copy Variable / Header Point -> Line.

(snap. 4.2-4)

4.2.1.5 Borehole data compositing

Compositing (or regularization) is an essential phase of a study using 3D data, especially in the mining industry, although the principle is much more general. The idea is that geostatistics will consider each datum with the same importance (prior to assigning a weight in the kriging process, for example), so it does not make sense to combine data that do not represent the same amount of material.
Therefore, if data are measured on different support sizes, a first, essential task is to convert the information into composites of the same dimension. This dimension is usually a multiple of the size of the smallest sample, and is related to the height of the benches, which is in this case 15 m.
This operation can be achieved in different ways (a sketch of the underlying length-weighting follows the list):

- The boreholes are cut into intervals of the same length from the borehole collar, or into intervals intersecting the boreholes and a regular system of horizontal benches. This is performed with the Tools / Regularization by Benches or by Length facility, and consists in creating a replica of the initial data set where all the variables of interest in the input file are converted into composites.

- The boreholes are cut into intervals of the same length, determined on the basis of the domain definition: each time the domain assigned to the assay changes, a new composite is created. The advantage of that method is to get more homogeneous composites. This is performed with the Tools / Regularization by Domains facility.
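To make the length-weighting idea concrete, here is a minimal compositing sketch in Python (illustrative only, not the Isatis algorithm; the 15 m composite length comes from the case study, the sample values are made up):

    import numpy as np

    def composite_by_length(tops, bottoms, grades, comp_len=15.0):
        # Cut a vertical hole into fixed-length intervals and return the
        # length-weighted mean grade of the samples falling in each one.
        edges = np.arange(tops.min(), bottoms.max() + comp_len, comp_len)
        out = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            # length of overlap of each sample [top, bottom] with [lo, hi]
            w = np.clip(np.minimum(bottoms, hi) - np.maximum(tops, lo), 0, None)
            if w.sum() > 0:
                out.append((lo, hi, np.average(grades, weights=w)))
        return out

    tops    = np.array([0.0, 4.5, 8.9])     # sample tops (m)
    bottoms = np.array([4.5, 8.9, 13.0])    # sample bottoms (m)
    fe      = np.array([65.9, 66.7, 67.7])  # Fe grades (%)
    print(composite_by_length(tops, bottoms, fe))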

We will work on the 5 numerical variables Al2O3, Fe, Mn, P and SiO2.

The regularization by length is performed on the 5 numerical variables and on the lithological code, in order to keep for each composite the information on the most abundant lithology and the corresponding proportion. The new files are called:
- Composites 15m by length header for the header information (collars),
- Composites 15m by length for the composite information.

Regularization mode: By Length measured along the borehole. This option is selected as some boreholes are inclined; the constant length is 15 m.

Minimum Length: 7.5 m. It may happen that the first composite, or the last composite (or both), do not have the requested dimension. Keeping too many of those incomplete samples would lead us back to the initial problem of samples of different dimensions being considered with the same importance: this is why the minimum length is set to 7.5 m (i.e. half of the composite size).


(snap. 4.2-1)
Three boreholes are not reproduced in the composite file as their total length is too small (less than 7.5 m): boreholes 93, 163 and 171. There are 1282 composites in the new output file.

The regularization by domains calculates composites for the two domains rich ore and poor ore. The macro selection defining the domains in the input file is created with the same indices in the output composites file. The selection mask drillholes outside is activated in order to regularize only the boreholes within the orebody envelope. Only Fe, P and SiO2 are regularized. The new files are called:
- Composites 15m header for the header information (collars),
- Composites 15m for the composite information.

The Undefined Domain is assigned to the Undefined index. It means that when a sample is in the Undefined Domain, the compositing procedure keeps on going (see the on-line Help for more information).

The Analysed Length is kept for each grade element.

The option Merge Residual is chosen, which means that the last composite is merged with the previous one if its length is less than 50% of the composite length.


(snap. 4.2-2)

There are 1485 composites on the 184 boreholes in the new output file. From now on, all geostatistical processes will be applied to that file of composites regularized by domains.
Using Statistics / Quick Statistics we can obtain different types of statistics, for example the statistics on the Fe grades by domain. Note that after compositing there are no more Undefined composites.


(snap. 4.2-3)

(snap. 4.2-4)
Graphic representations with boxplots, by slicing along the main axes of the space.


(snap. 4.2-5)


(fig. 4.2-1)


Swathplots, by slicing along the main axes of the space.

(snap. 4.2-6)

(snap. 4.2-7)

The swathplot along OY shows, for Fe in rich ore, a decreasing trend from South to North.


4.2.2 Block model


4.2.2.6 Grid import
The block model_75x75x15m.asc file begins with a header (Isatis format, commented by #) which
describes its contents:
#
# structure=grid, x_unit="m", y_unit="m", z_unit="m";
# sorting=+Z +Y +X ;
# x0=   150.00 , y0=  -450.00 , z0=   310.00 ;
# dx=    75.00 , dy=    75.00 , dz=    15.00 ;
# nx=       28 , ny=       47 , nz=       31 ;
# theta=      0 , phi=       0 , psi=       0
# field=1, type=numeric, name="domain code", bitlength=32;
#          ffff="N/A", unit="";
#          f_type=Integer, f_length=9, f_digits=0;
#          description="Creation Date: Mar 21 2006 15:13:15"
#
#+++++++++
0
0
0

The file contains only one numeric variable, named domain code, which equals 0, 1 or 2:

- 0 means the grid node lies outside the orebody,

- 1 means the grid node lies in the southern part of the orebody,

- 2 means the grid node lies in the northern part of the orebody.
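As a quick check of the header parameters, the grid geometry follows the usual convention (a reminder, not an Isatis-specific formula): node $(i, j, k)$ sits at

$$x_i = x_0 + i\,dx, \quad y_j = y_0 + j\,dy, \quad z_k = z_0 + k\,dz, \qquad 0 \le i < n_x,\; 0 \le j < n_y,\; 0 \le k < n_z,$$

so the file holds $n_x \times n_y \times n_z = 28 \times 47 \times 31 = 40796$ domain code values.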

Launch File/Import/ASCII... to import the grid in the Mining Case Study directory and call it 3D
Grid 75x75x15 m.

(snap. 4.2-1)

You now have to create a selection variable, called orebody, for all blocks where the domain code is either 1 or 2, by using the menu File / Selection / Intervals.


(snap. 4.2-2)

4.2.2.7 Visualization without the 3D viewer

Note - To visualize with the Isatis 3D viewer module, see the dedicated paragraph at the end of this case study.
Click on Display / New Page in the Isatis main window. In the Contents window:

- In the Contents list, double-click on the Raster item. A new Item contents for: Raster window appears, in order to let you specify which variable you want to display and with which color scale:
  - Grid File...: select the orebody variable from the 3D Grid 75x75x15 m file,
  - In the Grid Contents area, enter 16 for the rank of the XOY section to display,
  - In the Graphic Parameters area below, the default color scale is Rainbow.

- In the Item contents for: Raster window, click on Display.

- Click on OK.

Your final graphic window should be similar to the one displayed hereafter.

(fig. 4.2-1)

The orebody lies approximately North-South, with a curve towards the southwestern part. The northern part thins out along the northern direction and has a dipping plane striking North with a western dip of approximately 15°. This particular geometry will be taken into account during the variographic analysis.

In Situ 3D Resource Estimation

35

4.3 Variographic Analysis

This step describes the structural analysis performed on the 3D data set. In a first stage we consider the Fe grade of the rich ore only (univariate analysis) on the 15 m composites. The estimation requires, for each block, the proportion of rich ore and its grade. The analysis therefore has to be made:

- on the indicator of rich ore, which is defined on all composites,

- and on the rich ore Fe grade, which is defined on rich ore composites only.

The Exploratory Data Analysis (EDA) will be used in order to perform quality control, check statistical characteristics and establish the experimental variograms. Then variogram models will be fitted.

4.3.1 Variographic analysis of the rich ore indicator

The workflow that has been applied illustrates some important capabilities of the Exploratory Data Analysis; the decisions taken here would probably require more detailed analysis in a real study. The main steps of the workflow, detailed in the next pages, are:

- Calculation of the rich ore indicator.

- Variogram map in horizontal slices to confirm the existence of anisotropy.

- Calculation of directional variograms in the horizontal plane. For simplification we keep 2 orthogonal directions, East-West (N90) and North-South (N0).

- Check that the main directions of anisotropy are swapped when looking at northern or southern boreholes.

- Save the indicator variogram in the northern part (where most of the data are), with the idea that the variogram in the southern part is the same as in the North with the N0 and N90 directions of the anisotropy inverted. In practice this will be realized at the kriging/simulation stage by the use of Local Parameters for the variogram structures.

- Variogram Fitting using a combination of Automatic and Manual modes.

4.3.1.1 Calculation of the indicator


Use File / Calculator to assign the macro-selection index corresponding to rich ore to a float variable Indicator rich ore.


(snap. 4.3-1)
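For reference, the variable being computed is the standard indicator:

$$I(x) = \begin{cases} 1 & \text{if the composite at } x \text{ belongs to rich ore} \\ 0 & \text{otherwise} \end{cases}$$

Its mean estimates the proportion $p$ of rich ore and its variance is $p(1-p)$; both properties are used below when reading the histogram and fitting the indicator variogram.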

4.3.1.2 Experimental Variogram of the Indicator


Launch Statistics/Exploratory Data Analysis... to start the analysis on the variable Indicator rich
ore:


(snap. 4.3-1)

Highlight the Indicator rich ore variable in the main EDA window and open the Base Map and Histogram:


(fig. 4.3-1)

The mean value gives the proportion of rich ore samples.


The variogram map allows to check potential anisotropy. After clicking on the variogram map, the
Define Parameters Before Initial Calculations being on, you should choose the parameters as
shown in the next figure. You define parameters for horizontal slices, i.e. Ref.Plane UV with No
rotation.
Switch off the button Define the Calculations in the UW Plane and in the VW Plane, using the corresponding tabs.
With 18 directions each direction makes an angle of 10 with the previoius one. By asking a Tolerance on Directions of 2 sectors, the variograms are calculated from pairs in a given direction +/25.
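As a reminder, the experimental variogram computed here follows the classical estimator (standard formula; the angular and distance tolerances only define which pairs enter $N(h)$):

$$\hat\gamma(h) = \frac{1}{2N(h)} \sum_{x_i - x_j \approx h} \left[ z(x_i) - z(x_j) \right]^2$$

Each pair contributes $[z(x_i) - z(x_j)]^2/2$, which is why a few pairs of extreme values can create a visible peak at one lag; this is exploited later with the variogram cloud.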


(snap. 4.3-2)


(snap. 4.3-3)

After pressing OK you get the representation of the Variogram Map. In the Application menu, ask for Invert View Order to have the variogram map and the extracted experimental variograms in a landscape view.
In the Application menu, ask for Graphic Specific Parameters and change the Color Scale to Rainbow Reversed.
In the variogram map representation, drag with the mouse a zone containing all directions. With the menu button, ask for Activate Direction. You will then visualize the experimental variograms in the 18 directions of the horizontal plane. They exhibit a clearly anisotropic behaviour.


(snap. 4.3-4)

We will now calculate the experimental variograms directly from the main EDA window by clicking on the Variogram bitmap at the bottom of the window. In the next figure we can see the parameters used for the calculation of 4 directional variograms in the horizontal plane and the vertical
variogram.

(snap. 4.3-5)


(snap. 4.3-6)

(snap. 4.3-7)

For the sake of simplicity we decide to keep only 2 directions: N0, showing more continuity, and the perpendicular direction N90.
The procedure to follow is:


- In the List of Options, change from Omnidirectional to Directional.

- In Regular Direction, choose Number of Regular Directions 2 and switch on Activate Direction Normal to the Reference Plane. Click OK and go back to the Variogram Calculation Parameters window.

(snap. 4.3-8)

You then have to define the parameters for each direction. Click the parameter table to edit it. In order to apply the same parameters to the 2 horizontal directions, you must highlight these directions in the Directions list of the Directions Definition window.
For the two regular directions, choose the following parameters:

- Label for direction 1: N90 (default name)
- Label for direction 2: N0
- Tolerance on direction: 45° (in order to consider all samples without overlapping)
- Lag value: 90 m (i.e. approximately the distance between boreholes)
- Number of lags: 15 (so that the variogram will be calculated over a 1350 m distance)
- Tolerance on Distance (proportion of the lag): 0.5
- Slicing Height: 7.55 m (adapted to the height of the composites)
- Number of Lags Refined: 1
- Lag Subdivision: 45 m (so that we can have the variogram at short distances where drillholes are closely spaced)

For the normal direction, choose the following parameters:

- Label for direction 1: Vertical
- Tolerance on angle: 22.5°
- Lag value: 15 m
- Number of lags: 10
- Tolerance on lags (proportion of the lag): 0.5


In the Application menu, ask for Graphic Specific Parameters and click on the toggle button for the display of the Histogram of Pairs.

(snap. 4.3-9)

Because the general shape of the orebody is anisotropic, we will calculate the variogram restricted to the northern part and to the southern part of the orebody.
To do so, you will use the capabilities of the linked windows of the EDA, by masking samples in the Base Map; the variograms are automatically recalculated with only the selected samples.
For instance, in the Base Map drag a box around the data in the southern part (as shown on the figure) and, with the menu button of the mouse, ask for Mask. You will then get the variogram calculated from the northern data.


(snap. 4.3-10)

In the next figure we compare the variograms calculated from the northern and the southern data.
The main directions of anisotropy are swapped between North and South.


(snap. 4.3-11)


(snap. 4.3-12)

We now decide to fit a variogram model on the northern variogram, which is calculated with the most abundant data. We will then apply the same variogram to the southern data with the main axes of anisotropy swapped. This will be realized by means of local parameters attached to the variogram model and to the neighborhood.
In the graphic window containing the experimental variogram in the northern zone, click on Application / Save in Parameter File and save the variogram under the name Indicator rich ore North.

4.3.1.3 Variogram Modeling of the Indicator rich ore


You must now define a model which fits the experimental variogram calculated previously. In the Statistics / Variogram Fitting application, define:

- the Parameter File containing the set of experimental variograms: Indicator rich ore North.

- Set the toggles Fitting Window and Global Window ON; the program automatically displays one default spherical model. The Fitting window displays one direction at a time (you may choose the direction to display through Application / Variable & Direction Selection...), and the Global window displays every variable (if several) and direction in one graphic.

- To display each direction in separate views, click in the Global Window on Application / Graphic Specific Parameters and choose the Manual mode. Choose 3 for Nb of Columns, then Add; in turn for each Current Column, pick in the View Contents area the First Variable, the Second Variable and the Direction.

(snap. 4.3-1)


Go to the Manual Fitting tab and click Edit.

(snap. 4.3-2)

The model must reflect:

- the variability at short distances, with a consistent nugget effect,

- the main directions of anisotropy,

- the general increase of the variogram.

The model is automatically defined with the same rotation definition as the experimental variogram. Three different structures have been defined (in the Model Definition window, use the Add button to add a structure, and define its characteristics below, for each structure):

(snap. 4.3-3)

- Nugget effect,

- Anisotropic Exponential model with the following respective ranges along U, V and W: 700 m, 550 m and 70 m,

- Anisotropic Exponential model with the following respective ranges along U, V and W: 500 m, 5000 m and nothing (which means that it is a zonal component with no contribution in the vertical direction).
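As a reminder of the model being fitted, each anisotropic exponential structure contributes (standard notation, with the practical range convention):

$$\gamma(h) = C \left( 1 - e^{-3h'} \right), \qquad h' = \sqrt{\left(\frac{h_U}{a_U}\right)^2 + \left(\frac{h_V}{a_V}\right)^2 + \left(\frac{h_W}{a_W}\right)^2},$$

where $C$ is the sill and $a_U$, $a_V$, $a_W$ are the practical ranges along the rotated axes. A zonal component corresponds to an infinite range in one direction: that direction then adds nothing to $h'$, hence no variability along it for this structure.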

Do not specify the sill for each structure at this stage; instead:

- click Nugget effect in the main Variogram Fitting window, set the toggle button Lock the Nugget Effect Components During Automatic Sill Fitting ON and enter the value 0.065.

(snap. 4.3-4)

- set the toggle Automatic Sill Fitting ON. The program automatically computes the sills and displays the results in the graphic windows.

- A final adjustment is necessary, particularly to get a total sill of 0.25, which is the maximum admissible for a stationary indicator variogram (see the note after this list). Set the toggle Automatic Sill Fitting OFF in the main Variogram Fitting window, then in the Model Definition window set the sill of the first exponential to 0.14 and the sill of the second exponential to 0.045.

- Enter the name of the Parameter File in which you wish to save the resulting model: Indicator rich ore.

The final model is saved in the parameter file by clicking Run in the Variogram Fitting window.
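The 0.25 bound quoted above comes directly from the indicator variance: for a stationary indicator with proportion $p$,

$$\mathrm{Var}[I] = p(1-p) \le \frac{1}{4},$$

the maximum being reached at $p = 0.5$. As a check, the retained sills do sum to it: $0.065 + 0.14 + 0.045 = 0.25$.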


(snap. 4.3-5)

4.3.2 Variographic Analysis of Fe rich ore


4.3.2.4 Experimental Variogram of Fe rich ore
Launch Statistics/Exploratory Data Analysis... to start the analysis on the variable Fe using the
selection for the rich ore composites.


(snap. 4.3-1)

You will calculate the variograms in 2 directions of the dipping plane striking North with a western dip of 15°. In the Calculation Parameters, ask for Directional in the List of Options. Then click Regular Directions: a new Directions window pops up, where you will define the Reference Direction and switch on Activate Direction Normal to the Reference Plane.

(snap. 4.3-2)

Click Reference Direction; in the 3D Direction Definition window set the convention to User Defined and define the rotation parameters as shown in the next figure.


(snap. 4.3-3)

The reference direction U (in red) corresponds to the N121 main direction of anisotropy.
The calculation parameters are then chosen as shown in the next figure.


(snap. 4.3-4)

The next figure shows the experimental variograms.

Two points may be noted:

- the anisotropy is not really marked, so we will recalculate an isotropic variogram in the horizontal plane;

- the second point of the variogram for the direction N121, calculated with 42 pairs, shows a peak that we can explain by using the Exploratory Data Analysis linked windows.


(snap. 4.3-5)

To use the linked windows, the following actions have to be performed:


- Ask to display the histogram (accept the default parameters).

- In the Graphic Specific Parameters of the graphic page containing the experimental variogram, set the toggle button Variogram Cloud (if calculated) OFF, and click on the radio button Pick from Experimental Variogram.

- In the Calculation Parameters of the graphic page containing the experimental variogram, set the toggle button Calculate the Variogram Cloud ON.

- In the graphic page, click on the experimental point with 33 pairs and ask for Highlight in the mouse menu. The variogram point is then represented as a blue square, and all data making up its pairs are represented by the part painted in blue in the histogram.

(snap. 4.3-6)

The high variability of pairs made of samples with low values is responsible for the peak in the variogram. It can be proved by clicking in the histogram on the bar of the minimum values and clicking on Mask in the mouse menu: the variograms are automatically recalculated and no longer show the anomalous point, as shown on the next figure.

(snap. 4.3-7)


We now re-calculate the variograms with 2 directions, omni-directional in the horizontal plane and vertical, with the parameters shown hereafter, entered by clicking Regular Directions....

(snap. 4.3-8)


(snap. 4.3-9)

In the graphic window containing this last variogram, ask for Application -> Save in Parameter File to save the variogram with the name Fe rich ore.

4.3.2.5 Variogram Modeling of Fe rich ore

In the Statistics / Variogram Fitting application, define:

- the Parameter File containing the set of experimental variograms: Fe rich ore,

- the Parameter File in which you wish to save the resulting model: Fe rich ore.

In the Model Initialization section choose Spherical (Short + Long Range) and click on Add Nugget.
(snap. 4.3-1)


In the Automatic Fitting tab, click on Fit.

In the Global window, the variograms are represented in two columns. The automatic fit looks satisfactory, so click Run in the Variogram Fitting window to save it.

(fig. 4.3-1)

4.3.3 Analysis of border effects


This chapter may be skipped on a first reading, as it does not change anything in the Isatis study. It helps to decide whether kriging/simulation will be made using a hard or a soft boundary.
In order to understand the behaviour of Fe grades when the samples are close to the border between rich and poor ore, we can use two applications:

- Statistics / Domaining / Border Effect calculates bi-point statistics from pairs of samples belonging to different domains (a minimal sketch of such a statistic is given after this list). The pairs are chosen in the same way as for experimental variogram calculations.

- Statistics / Domaining / Contact Analysis calculates the mean values of samples of 2 domains as a function of the distance to the contact between these domains along the drillholes.
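As a hedged illustration of the kind of bi-point statistic computed by Border Effect, here is a minimal Python sketch of Mean[Z(x+h)|Z(x)] along a single hole. The grades, domain codes and down-hole-only pairing are hypothetical and far simpler than the 3-direction, tolerance-based pairing used by Isatis.

```python
import numpy as np

# Hypothetical composites down one hole: Fe grades and domain codes.
z      = np.array([61., 64., 66., 30., 28., 25., 27., 63., 65.])
domain = np.array([1, 1, 1, 2, 2, 2, 2, 1, 1])   # 1 = rich ore, 2 = poor ore

def border_mean(z, domain, dom_x, dom_xh, nlags):
    """Mean of Z(x+h) for x in dom_x and x+h in dom_xh, per down-hole lag."""
    out = []
    for k in range(1, nlags + 1):
        head = z[k:]                                        # Z(x+h)
        pair = (domain[:-k] == dom_x) & (domain[k:] == dom_xh)
        out.append(head[pair].mean() if pair.any() else np.nan)
    return np.array(out)

# Grade of poor-ore samples as the pair origin sits in rich ore:
print(border_mean(z, domain, dom_x=1, dom_xh=2, nlags=3))
```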

4.3.3.6 Statistics on Border effect


Launch Statistics / Domaining / Border Effect and choose, in the file Composites 15m, the Macro Selection Variable final lithology[xxxxx], which contains the definition of all domains, and the variable of interest, Fe.
In the list of Domains you may pick only some of them, in this case Rich ore and Poor ore, while you ask to Mask Samples from Domain, choosing Undefined.
In the Calculation Parameters sub-window, we define the parameters for 3 directions by pressing the corresponding tabs in turn and switching on the toggle Activate Direction. For the 3 directions the parameters are:



(fig. 4.3-1)

Switch on the three toggle buttons for the Graphic Parameters and click on Run.

(snap. 4.3-1)

Three graphic pages corresponding to the three statistics are then displayed:

- Transition Probability, which, in the case of only 2 domains, is not very informative.

(snap. 4.3-2)


- Mean [Z(x+h)|Z(x)], which shows that, when going from Rich ore to Poor ore, there is a border effect: the grade of the new domain, i.e. Poor ore, is higher than the mean Poor ore grade, which means it is influenced at short distance by the proximity of Rich ore samples. Conversely, when going from Poor ore to Rich ore there is no border effect.

[Four panels, one curve per direction: "Fe entering in Rich ore", "Fe x+h in Poor ore | x in Rich ore", "Fe entering in Poor ore", "Fe x+h in Rich ore | x in Poor ore"; mean Fe vs. Distance (m).]

(snap. 4.3-3)

- Mean Diff[Z(x+h)-Z(x)], which shows that, both when going from Rich ore to Poor ore and when going from Poor ore to Rich ore, the grade difference is influenced by the proximity of both domains.

[Four panels, one curve per direction: "Diff Fe, x+h in Poor ore | x in Rich ore", "Diff Fe, x+h in Rich ore | x in Poor ore", "Diff Fe, x+h in Poor ore, x NOT ...", "Diff Fe, x+h in Rich ore, x NOT ..."; mean Fe difference vs. Distance (m).]

(snap. 4.3-4)

4.3.3.7 Contact Analysis


Launch Statistics / Domaining / Contact Analysis and choose, in the file Composites 15m, the Macro Selection Variable final lithology[xxxxx], which contains the definition of all domains, and the variable of interest, Fe. Set the variables Direct Distance Variable and Indirect Distance Variable to None, which means that the contact point is determined where the domain changes down the boreholes.
In the list of Domains you pick Rich ore for Domain 1 and Poor ore for Domain 2, while you leave Use Undefined Domain Variable Off.
The statistics are calculated as a function of the distance to the contact along the drillhole; you have the possibility to select only some of the drillholes according to a specific direction with an angular tolerance. In this case, as most of the drillholes are vertical, we select all drillholes by choosing a tolerance of 90° on the vertical direction defined by the three rotation angles Az=0, Ay=90, Ax=0 (Mathematician Convention). The samples are regrouped by Distance Classes of 15 m.



(snap. 4.3-1)

Two graphic pages are then displayed:

- Contact Analysis (Oriented) contains two views:
  - Direct, for statistics calculated in the Reference Direction,
  - Indirect, for statistics calculated in the direction opposite to the Reference Direction.

In the Application menu of the graphic pages, we ask the Graphical Parameters, as shown below, to display the Number of Points and the Mean per Domain.

(snap. 4.3-2)


(snap. 4.3-3)



- Contact Analysis (Non-Oriented) displays the average of the two previous ones.

(snap. 4.3-4)

From these graphs it appears that the poor grades are influenced by the proximity of rich grades.
In conclusion, we decide, for the kriging and simulation steps, to apply a hard boundary when dealing with rich ore.


4.4 Kriging
We are now going to estimate, on blocks of 75 m x 75 m x 15 m, the tonnage and Fe grade of Rich ore. Therefore, we will perform two steps:

- Kriging of the Indicator of Rich ore, to get the estimated proportion of rich ore, from which the tonnage can be deduced.

- Kriging of the Fe grade of rich ore, using only the rich ore samples. Each block is then estimated as if it were entirely in rich ore; by applying the estimated tonnage, we can then obtain an estimate of the Fe metal content (a worked example is sketched after this list).
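The following minimal sketch shows how the two kriged variables combine for one block. The proportion and grade values are hypothetical; the constant density of 4 t/m3 is the one used later in this chapter, and the Fe grade is assumed to be expressed in percent.

```python
# Hypothetical one-block example (values illustrative, not from the study):
block_volume = 75 * 75 * 15      # m3, one 75 x 75 x 15 m block
density      = 4.0               # t/m3, constant density used in this study
ind_kriged   = 0.62              # kriged proportion of rich ore in the block
fe_kriged    = 65.3              # kriged Fe grade (%) of the rich-ore part

ore_tonnage = ind_kriged * block_volume * density   # tonnes of rich ore
fe_metal    = ore_tonnage * fe_kriged / 100.0       # tonnes of contained Fe
print(f"ore = {ore_tonnage:,.0f} t, Fe metal = {fe_metal:,.0f} t")
```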

4.4.1 Kriging of indicator of rich ore with local parameters


After the variographic analysis, it was found that the variogram model has a horizontal anisotropy whose orientation differs between the northern and southern parts of the orebody. We will then use that orientation as a local parameter recovered from the grid file in a variable called RotZ. As a first attempt, which should be sufficient in this case because of the orebody shape, we will use two values: 90 for blocks in the southern area and 0 for the northern area, both areas being defined by means of the geographic code variable (respectively 1 and 2). These values are stored in the grid file by using File / Calculator (the logic is sketched after the snapshot).

(snap. 4.4-1)
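As an illustration only, the logic of that File / Calculator step amounts to the following (the geographic codes shown are hypothetical; the actual step uses Isatis' own calculator syntax):

```python
import numpy as np

geo_code = np.array([1, 1, 2, 2, 1])           # hypothetical per-block codes
rot_z = np.where(geo_code == 1, 90.0, 0.0)     # 90 in the south (code 1), 0 in the north
```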



Then you launch Interpolate / Estimation / (Co)Kriging.

(snap. 4.4-2)

You need to specify the type of calculation, Block, and the number of variables, 1, then:

- Input File: Indicator rich ore (Composites 15m with the selection None).

- The names of the variables in the output file (3D Grid 75 x 75 x 15 m), with the orebody selection active:
  - Kriging indicator rich ore for the estimation of Indicator rich ore,
  - Kriging indicator rich ore std dev for the kriging standard deviation.


- The variogram model contained in the Parameter File called Indicator rich ore.

- The neighborhood: open the Neighborhood... definition window and specify the name (Indicator rich ore for instance) of the new parameter file which will contain the following parameters, to be defined from the Edit... button nearby. The neighborhood type is set by default to moving:

(snap. 4.4-3)

  - The moving neighborhood is an ellipsoid with No rotation, which means that the U,V,W axes are the original X,Y,Z axes;
  - Set the dimensions of the ellipsoid to 800 m, 600 m and 60 m, the last one along the vertical direction;
  - Switch ON the Use Anisotropic Distances button;
  - Minimum number of samples: 4;
  - Number of angular sectors: 12;
  - Optimum Number of Samples per Sector: 5;
  - Block discretization: as we chose to perform Block kriging, the block discretization has to be defined. The default settings are 5 x 5 x 1, meaning each block is subdivided by 5 in each of the X and Y directions, but is not divided in the Z direction. The Block Discretization sub-window may be used to change these settings, and to check how different discretizations influence the block covariance Cvv. In this case study, the default parameters 5 x 5 x 1 will be kept.
  - Press OK for the Neighborhood Definition.

- The Local Parameters: open the Local Parameters Loading... window and specify the name of the Local Parameters File (3D Grid 75x75x15m). For the Model All Structures and Neighborhood tabs, switch ON Use Local Rotation (Mathematician convention), then 2D, and define the variable Rot Z as Rotation/Z.
(snap. 4.4-4)


It is possible to check both the model and the neighborhood performances when processing a grid node, and to display the results graphically: this is the purpose of the Test option at the bottom of the (Co-)Kriging main window. When pressing it, a graphic page opens where:

- the Indicator rich ore variable is represented with proportional symbols,

- the neighborhood ellipsoid is drawn on a 2D section.

By pressing once on the left button of the mouse, the target grid is shown (in fact an XOY section of it; you may select different sections through Application/Selection For Display...). You can then move the cursor to a target grid node: click once more to initiate the kriging. The samples selected in the neighborhood are highlighted and the weights are displayed. We can see here that the nearest samples get the highest weights. It is also important to check that the negative weights due to the screen effect are not too large. The neighborhood can sometimes be changed to avoid this kind of problem (more sectors and fewer points per sector...).
You can also select the target grid node by giving the indices along X, Y and Z with the Application menu Target Selection (for instance 6, 11, 16). You can figure out how the local parameters used for the neighborhood are applied.

(snap. 4.4-5)



(snap. 4.4-6)

Note - From Application/Link to 3D viewer, you may ask for a 3D representation of the search
ellipsoid if the 3D viewer application is already running (see the end of this case study).
Close the Test Window and press RUN.
7814 grid nodes have been estimated. Basic statistics of the variables are displayed below.

(fig. 4.4-1)

The kriging standard deviation is an indicator of the estimation error, and depends only on the geometrical configuration of the data around the target grid node and on the variogram model. Basically, the standard deviation decreases as an estimated grid node gets closer to the data.
Some blocks have a kriged indicator above 1. These values will be changed into 1 by means of File / Calculator.


(snap. 4.4-7)

Note - In the main Kriging window, the optional toggle Full set of Output Variables allows you to store other kriging parameters in the Output File: slope of regression, weight of the mean, estimated dispersion variance of the estimates, etc.

4.4.2 Kriging of Fe rich ore


In the Standard (Co)Kriging menu, specify the type of calculation, Block, and the number of variables, 1, then enter the following parameters:

- Input File: Fe (Composites 15m with the selection final lithology{rich ore}).

- The names of the variables in the output file (3D Grid 75 x 75 x 15 m), with the orebody selection active:
  - Kriging Fe rich ore for the estimation of Fe;
  - Kriging Fe rich ore std dev for the kriging standard deviation.



- The variogram model contained in the Parameter File called Fe rich ore.

- The neighborhood: open the Neighborhood... definition window and specify the name (Fe rich ore for instance) of the new parameter file which will contain the following parameters, to be defined from the Edit... button nearby. The neighborhood type is set by default to moving:
  - The moving neighborhood is an ellipsoid with No rotation, which means that the U,V,W axes are the original X,Y,Z axes;
  - Set the dimensions of the ellipsoid to 800 m, 300 m and 50 m, the last one along the vertical direction;
  - Switch ON the Use Anisotropic Distances button;
  - Minimum number of samples: 4;
  - Number of angular sectors: 12;
  - Optimum Number of Samples per Sector: 3.

- Block discretization: as we chose to perform Block kriging, the block discretization is kept to the default 5 x 5 x 1.

- Apply Local Parameters, but only for the Neighborhood, where you use the Rot Z variable for the 2D Rotation/Z.

(snap. 4.4-8)


After Run, you can calculate the statistics of the kriged estimate by asking, in Statistics / Quick Statistics, to apply the variable Kriging indicator rich ore as Weight. 7561 blocks out of 7814 have been kriged. By using a weight variable, you obtain statistics weighted by the proportion of each block in rich ore.

(snap. 4.4-9)

(fig. 4.4-2)



The mean grade is close to the average of the composite grades (65.84). Therefore, in the next steps, which carry out non linear methods requiring the modeling of the distribution, we will not apply any declustering weights.


4.5 Global Estimation With Change of Support


The support is the geometrical volume on which the grade is defined.
Assuming the data sampling is representative of the deposit, it is possible to fit a histogram model on the experimental histogram of the composites. But at the mining stage, the cut-off will be applied to blocks, not to composites. Therefore, it is necessary to apply a support correction to the composite histogram model in order to estimate a histogram model on the block support.

Note - When kriging too small blocks with a high error level, applying a cut-off to the kriged grades will induce biased tonnage estimates due to the high smoothing effect. It is then recommended to use non-linear estimation techniques, or simulations (see the Non Linear case study). For global estimation, another alternative is to use the Gaussian anamorphosis modeling, as described below.

4.5.1 Gaussian anamorphosis modeling


Gaussian anamorphosis is a mathematical technique which allows histograms to be modelled, taking into account the change of support from composites to blocks.

Note - From a support size point of view, composites will be considered as points compared to
blocks.
The technique will not be mathematically detailed here: the reader is referred to the Isatis on-line help and technical references. Basically, the anamorphosis transforms an experimental dataset into a gaussian dataset (i.e. one having a gaussian histogram). The anamorphosis is bijective, so it is possible to back transform gaussian values to raw values. A gaussian histogram is often a pre-requisite for using non linear and simulation techniques. The anamorphosis function may be modelled in two ways (a minimal sketch follows this list):

- by a discretization with n points between a negative gaussian value of -5 and a positive gaussian value of +5;

- by using a decomposition into Hermite polynomials up to a degree N. This was the only possibility until Isatis release V10.0. It is still compulsory for some applications, as will be explained later on.
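As a rough illustration of the discretized form of the anamorphosis (not of the Hermite-polynomial machinery used by Isatis), the sketch below matches sorted data to gaussian quantiles and interpolates in both directions; the lognormal data are a hypothetical stand-in for the Fe composites.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z = rng.lognormal(mean=4.0, sigma=0.3, size=500)    # stand-in raw grades

z_sorted = np.sort(z)
p = (np.arange(1, z.size + 1) - 0.5) / z.size       # plotting positions
y_sorted = norm.ppf(p)                              # matching gaussian scores

def to_gaussian(values):   # raw -> gaussian (normal-score transform)
    return np.interp(values, z_sorted, y_sorted)

def to_raw(gaussians):     # gaussian -> raw (back transform, bijective)
    return np.interp(gaussians, y_sorted, z_sorted)

y = to_gaussian(z)
print(round(y.mean(), 2), round(y.std(), 2))        # close to 0 and 1
```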

Open the Statistics/Gaussian Anamorphosis Modeling window.



(snap. 4.5-1)
- In Input..., choose the Composites 15m file with the selection final lithology{Rich ore}; choose Fe for the raw variable.

- Do NOT ask for a Gaussian Transform.

- Name the anamorphosis function Fe rich ore.

- In Interactive Fitting..., choose the Type Standard and switch ON the toggle button Dispersion, with the Dispersion Law set to Log-Normal Distribution. In this mode the histogram will be modelled by assigning a dispersion to each datum, accounting for some uncertainty that is globally reflected by an error on the mean value. The variability of the dispersion is controlled by the Variance Increase parameter, related to the estimation variance of the mean. By default that variance is set to the statistical variance of the data divided by the number of data.

(snap. 4.5-2)



Click on the Anamorphosis and Histogram bitmaps. You will visualize the anamorphosis function and how the experimental histogram is modelled (black bars are for the experimental histogram and the blue bars for the modelled histogram).

(snap. 4.5-3)

Close the Fitting Parameters window.


Press RUN in the Gaussian Anamorphosis window: because you have not asked for Hermite Polynomials, the following error message window is displayed, advising you of the applications requiring these polynomials.

(snap. 4.5-4)


4.5.2 Block anamorphosis on SMU support


Using the composite histogram and variogram models, we are now going to take the change of support into account using Statistics/Support Correction...:

(snap. 4.5-5)

The Selective Mining Unit (SMU) size has been fixed to 25 x 25 x 15 m. Therefore, the correction will be calculated for a block support of 25 x 25 x 15 m. Each block is discretized by default in 3 x 3 for the X and Y directions (NX = 3 and NY = 3); no discretization is needed for the vertical direction (NZ = 1), as the composites are regularized according to the bench height (15 m). Changing the discretization along X and Y allows the sensitivity of the change of support coefficients to be studied.



Switch ON the toggle button Normalize Variogram Sill. As the variogram sill is higher than the variance, the consequence is to reduce the support correction a little (the r coefficient is a bit higher than without normalization).
Press Calculate at the bottom of the window. The block support correction calculations are displayed in the message window:

(snap. 4.5-6)

The block variogram value Gamma(v,v) is calculated and is the basis for calculating the real block variance and the real block support correction coefficient r (a sketch of this computation is given below). We can see that the support correction is not very important (r is not very far from 1); this is because the ranges of the variogram model are rather large compared to the SMU size. The calculation is made at random, so different calculations will give similar, but not identical, results. If the differences in the real block variance are too large, the block discretization should be refined by increasing NX and NY. By pressing Calculate... several times, we statistically check whether the discretization is fine enough to represent the variability inside the blocks. Press OK.
Save the Block Anamorphosis under the name Fe rich ore block 25x25x15 and press RUN.
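As an illustration of what is being computed, here is a minimal sketch of Gamma(v,v) and of the real block variance for a normalized model. It assumes a single isotropic spherical structure with an arbitrary 200 m range and uses a regular (rather than randomized) 3 x 3 x 1 discretization of the 25 x 25 x 15 m SMU:

```python
import numpy as np

def spherical_gamma(h, sill=1.0, rng=200.0):
    hh = np.minimum(h / rng, 1.0)
    return sill * (1.5 * hh - 0.5 * hh**3)

# Regular 3 x 3 x 1 discretization of a 25 x 25 x 15 m SMU (NX=3, NY=3, NZ=1):
c = np.linspace(-25 / 3, 25 / 3, 3)
X, Y = np.meshgrid(c, c)
pts = np.column_stack([X.ravel(), Y.ravel(), np.zeros(9)])

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
gamma_vv = spherical_gamma(d).mean()   # mean point variogram within the block

block_var = 1.0 - gamma_vv             # Var(Zv) = sill - Gamma(v,v)
print(f"Gamma(v,v) = {gamma_vv:.3f}, real block variance = {block_var:.3f}")
# r is then the coefficient for which the variance of the block anamorphosis,
# sum_n phi_n^2 r^(2n), equals block_var.
```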

4.5.3 Grade Tonnage Curves


Launch Tools / Grade Tonnage Curves. You will ask to display two types of curves, calculated from:

- the Kriged Fe rich ore on the panels 75m x 75m x 15m,

- the Histogram modelled after support correction on blocks 25m x 25m x 15m.

For each curve you have to click Edit and fill in the parameters.
For the first curve, on the kriged panels:

(snap. 4.5-7)



(snap. 4.5-8)

For the second curve, on the block histogram:


(snap. 4.5-9)

After clicking the bitmaps at the bottom of the Grade Tonnage Curves window (M vs. z, T vs. z, Q vs. z, Q vs. T, B vs. z), you get graphics such as T(z) and M(z):



[Curve T(z): Total Tonnage vs. Cutoff.]

(snap. 4.5-10)

[Curve M(z): Mean Grade vs. Cutoff.]

(snap. 4.5-11)

These curves show, as expected, that the selectivity is better from true blocks 25x25x15 than from kriged panels 75x75x15, which have a lower dispersion variance (a sketch of the underlying computation follows the snapshots).
The legend is displayed in a Separate Window, as was asked in the Grade Tonnage Curves window. By clicking Define Axes, you switch OFF Automatic Bounds to change the Axis Minimum and Axis Maximum for Mean Grade to 60 and 70 respectively.


(snap. 4.5-12)

(snap. 4.5-13)
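For reference, the grade-tonnage quantities can be sketched from any array of block grades, as below. The gaussian grades are a hypothetical stand-in; Isatis derives the curves either from the kriged panels or from the block histogram model.

```python
import numpy as np

rng = np.random.default_rng(1)
grades = rng.normal(65.8, 2.0, size=5000)   # stand-in block grade distribution (%)

cutoffs = np.arange(50.0, 70.0, 0.5)
T = np.array([(grades >= zc).mean() for zc in cutoffs])          # tonnage fraction T(z)
M = np.array([grades[grades >= zc].mean() if (grades >= zc).any()
              else np.nan for zc in cutoffs])                    # mean grade M(z)
Q = T * M                                                        # metal quantity Q(z)
```

A distribution with a higher dispersion variance (true blocks) keeps more tonnage at high cut-offs than a smoothed one (kriged panels), which is the selectivity effect noted above.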



4.6 Simulations
This chapter aims at giving a quick example of conditional block simulations in a multivariate case. Simulations make it possible to reproduce the real variability of the variable.
We will focus on the Fe-P-SiO2 grades of rich ore on blocks of 25m x 25m x 15m. Two steps will then be achieved:

- simulation of the rich ore indicator. The Sequential Indicator method will be applied to generate a simulated model where each block has a simulated code: 1 for rich ore blocks and 2 for poor ore blocks. A finer grid would be required to be more realistic; for the sake of simplicity we will make the indicator simulation on the same blocks of 25m x 25m x 15m.

- simulation of the rich ore Fe grade, as if each block were entirely in rich ore. By intersecting with the indicator simulation, we will get the final picture.

4.6.1 Simulation of the indicator rich ore


You must first create the grid of blocks 25x25x15 with File / Create Grid File.

(snap. 4.6-1)


To create the orebody selection in the grid file, we use the migration capability (Tools/Migrate/Grid to Point...) from the 3D Grid 75x75x15m file to the 3D Grid 25x25x15, with a maximum migration distance of 55 m.

(snap. 4.6-2)

Open the menu Interpolate / Conditional Simulations / Sequential Indicator / Standard Neighborhood.



(snap. 4.6-3)

To define the two facies, 1 for rich ore and 2 for its complement, you have to click on Facies Definition and enter the parameters as shown below.


(snap. 4.6-4)

You may use the same variogram model, the same neighborhood and the same local parameters as used for the kriging. The only additional parameter is the Optimum Number of Already Simulated Nodes, which you can fix to 30 (the total number being 5 per sector for 12 sectors, i.e. 60). Save the simulation in SIS indicator rich ore.
Ask for 100 simulations, then press Run.

4.6.2 Block simulations of Fe-P-SiO2 rich ore


The direct block simulation method, based on the discrete gaussian model (DGM), will be used. The workflow is the following:

- transform the raw data to gaussian values by anamorphosis. In the case of the P grade, the anamorphosis will take into account the fact that many samples are at the detection limit, which produces a histogram with a significant zero effect;

- perform a multivariate variographic analysis on the gaussian data, in order to have gaussian variograms;

- model these gaussian variograms with a linear model of coregionalisation;

- regularize these variograms on the block support;

- perform a support correction on the gaussian transforms;

- perform the simulations using the discrete gaussian model framework, which allows block simulated values to be conditioned to gaussian point data.

4.6.2.1 Gaussian Anamorphosis


We will perform the gaussian anamorphosis on the three grades of the rich ore domain in one go, and independently. Note that the three anamorphosis functions must be stored together in the same Parameter File, called Fe-SiO2-P rich ore. Note also that in this case we ask to store the Gaussian transforms in the composites file, with the names Gaussian Fe/P/SiO2 rich ore, ...


(snap. 4.6-1)

By clicking on Interactive Fitting, the Fitting Parameters window pops up; you will have to choose parameters for the three variables in turn, by clicking on the arrow on the side of the area displaying Parameters for Fe/P/SiO2. For Fe and SiO2 you choose the Standard Type with a Dispersion using a Log-Normal Distribution and the default Variance Increase (as was done before for Fe alone).
For P, many samples have values equal to the detection limit of 0.01. The histogram shows a spike at the origin, which will be modelled by a zero effect. You must choose the type Zero-effect and click on Advanced Parameters to enter the parameters defining the zero effect. In particular, we will put in the atom all values equal to 0.01 with a precision of 0.01, i.e. all samples between 0 and 0.02.



(snap. 4.6-2)

After Run, the transformed values of Fe and SiO2 have a gaussian distribution, while for P the gaussian transform has a truncated gaussian distribution. The gaussian values assigned to the samples concerned by the zero effect are all equal to the same value (the gaussian value corresponding to the frequency of the zero effect).

4.6.2.2 Gaussian transform of P rich ore


The next steps consist of making the gaussian transform of P a true gaussian distribution. This is achieved by using a Gibbs Sampler algorithm, which will generate, for each sample of the zero effect, a gaussian value consistent with the structure of spatial correlation of all the gaussian values. Practically, 3 steps must be carried out (a minimal sketch of the sampler follows this list):

- calculation of the experimental variogram of the truncated gaussian values;

- variogram modelling of the gaussian transform using the truncation option;

- Gibbs Sampler, to generate the gaussian transform with a true gaussian distribution while honouring the spatial correlation.
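Here is a minimal 1D sketch of the Gibbs sampler idea, under assumptions that are ours, not the study's: a hypothetical exponential covariance with a 150 m scale, down-hole geometry only, and the truncation value -0.393 quoted below. Each atom sample is resampled from its conditional gaussian given the others, truncated to stay below the threshold.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, yc = 40, -0.393                       # truncation value quoted in this study
x = np.arange(n) * 15.0                  # 1D composite positions (m)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 150.0)   # hypothetical covariance
y = rng.standard_normal(n)
atom = y < yc                            # samples in the zero-effect atom
y[atom] = yc                             # they all start at the truncation value

idx = np.arange(n)
for _ in range(200):                     # Gibbs iterations
    for i in np.where(atom)[0]:
        m = idx != i
        w = np.linalg.solve(C[np.ix_(m, m)], C[m, i])   # simple kriging weights
        mu = w @ y[m]
        sd = np.sqrt(max(1.0 - w @ C[m, i], 1e-9))
        u = rng.uniform(1e-12, norm.cdf(yc, mu, sd))    # stay below yc
        y[i] = norm.ppf(u, loc=mu, scale=sd)
```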

Using the EDA, we calculate the histogram and the experimental variogram of the variable Gaussian P rich ore (activating the selection final lithology{Rich ore}). In the Application menu of the histogram, ask for the Calculation Parameters and switch off the Automatic mode, entering the values shown below:

(snap. 4.6-1)


For the variogram, choose the same parameters as used for Fe (omni-directional in the horizontal plane and vertical): in the Application Menu / Calculation Parameters of the Variogram Calculation Parameters window, click Load Parameters from Standard Parameter File and select the experimental variogram Fe rich ore.
On the graphic display you see the truncated distribution, with about 35% of the samples concerned by the zero effect; the gaussian truncation value is -0.393. The variance, displayed as the dotted line on the variograms, is about 0.5. In the Application / Save in Parameter File menu of the graphic containing the variogram, save it under the name Gaussian P rich ore zero effect.

(snap. 4.6-2)

(snap. 4.6-3)



In the Variogram Fitting window, choose the Experimental Variograms Gaussian P rich ore zero effect and create a New Variogram Model, called Gaussian P rich ore. Note that the variogram model refers to the gaussian transform (with the true gaussian distribution); it is transformed by means of the truncation to match the experimental variogram of the truncated gaussian variable.

(snap. 4.6-4)

Click Edit; in the Model Definition window you must first click Truncation.


(snap. 4.6-5)

In the Other Options section, click on Advanced options, then on Truncation. Click Anamorphosis V1 to select the anamorphosis Fe-SiO2-P rich ore[P].

(snap. 4.6-6)



(snap. 4.6-7)

Coming back to the Model Definition window, enter the parameters of the variogram model as shown below. It is important to choose sill coefficients summing up to 1 (the dispersion variance of the true gaussian) and not 0.5, the dispersion variance of the truncated gaussian.

(snap. 4.6-8)


You will now generate gaussian values for the zero effect on P rich ore by using Statistics / Gibbs Sampler. Note that the gaussian values not concerned by the zero effect are kept unchanged.

- The Input Data are the variogram model you just fitted, Gaussian P rich ore, and the Gaussian P rich ore variable stored after the Gaussian Anamorphosis Modeling.

- The Output Data are a new variogram model, Gaussian P rich ore no truncation (which is in fact the same as the input one without the truncation option), and a new variable in the Composites 15m file, Gaussian P rich ore (Gibbs).

- Ask to perform 1000 iterations.

(snap. 4.6-9)

You can check how the Gibbs Sampler has reproduced the gaussian distribution and the input variogram: just recalculate the histogram and the variograms on the variable Gaussian P rich ore (Gibbs). After saving that experimental variogram in the Parameter File, you can superimpose on it the variogram model with no truncation, using the Variogram Fitting menu. For the first distances the fit is acceptable.



[Experimental variogram of Gaussian P rich ore (Gibbs), with the number of pairs per lag, superimposed on the model without truncation; variogram vs. Distance (m).]

(snap. 4.6-10)

(snap. 4.6-11)


4.6.2.3 Multivariate Gaussian variogram modeling


In Statistics / Exploratory Data Analysis, you calculate the variograms with the same parameters as before (one omni-directional horizontal direction and one vertical direction) on the 3 gaussian transforms.
In the graphic window, use Application / Save in Parameter File to save these variograms under the name Gaussian Fe-SiO2-P rich ore.

(snap. 4.6-1)

In Statistics/Variogram Fitting..., choose the experimental variograms you just saved. Create the new variogram model with the same name, Gaussian Fe-SiO2-P rich ore. Set the Global Window toggle and ask to display the number of pairs in the graphic window (Application/Graphic Parameters...).



(snap. 4.6-2)

The model is made using the following method:

- enter the name of the new variogram model, Gaussian Fe-SiO2-P rich ore, and Edit it;

- in the Manual Fitting tab, click on Load Model and choose the model made for Gaussian P rich ore no truncation. The following window pops up:

(snap. 4.6-3)

Click on the Clear button, then move the mouse to the second line, Gaussian P rich ore, click on Link and on OK in the Selector window, in order to put the variogram made on Gaussian P alone on the same variable of the three-variate variogram. Then click on OK in the Model Loading window;

- in the Manual Fitting tab, click on Automatic Sill Fitting. The Global Window shows the model that has been fitted. Press Run to save it in the parameter file.



(snap. 4.6-4)

4.6.2.4 Variogram regularization


In order to perform the direct block simulation, you have to model the three-variate variogram on the support of the blocks 25x25x15 (the regularization formula is sketched after these steps).

You first have to launch Statistics / Modeling / Variogram Regularization. You will store, in a new experimental variogram Gaussian Fe-SiO2-P rich ore block 25x25x15, 3 directional variograms using a discretization of 5x5x1. You will also ask to Normalize the Input Point Variogram.

(snap. 4.6-1)
Then you model the regularized variogram using Variogram Fitting and the Automatic Sill Fitting mode, after having loaded the model made on the point samples, Gaussian Fe-SiO2-P rich ore. Note that the Nugget effect is set to zero; when you save the variogram model, the Nugget effect is not stored in the Parameter File.
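The regularization formula can be sketched as follows: the block variogram at lag h is the mean point variogram between the discretization points of two blocks h apart, minus the within-block term Gamma(v,v). The spherical structure and its 200 m range are assumptions for illustration only:

```python
import numpy as np

def spherical_gamma(h, sill=1.0, rng=200.0):
    hh = np.minimum(h / rng, 1.0)
    return sill * (1.5 * hh - 0.5 * hh**3)

# 5 x 5 x 1 discretization of a 25 x 25 x 15 m block, as in the study:
c = np.linspace(-10.0, 10.0, 5)
X, Y = np.meshgrid(c, c)
pts = np.column_stack([X.ravel(), Y.ravel(), np.zeros(25)])

def mean_gamma(a, b):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return spherical_gamma(d).mean()

gamma_vv = mean_gamma(pts, pts)                 # within-block term
for h in (25.0, 50.0, 100.0):                   # block-to-block lags along X
    g_v = mean_gamma(pts, pts + np.array([h, 0.0, 0.0])) - gamma_vv
    print(f"gamma_v({h:5.1f} m) = {g_v:.3f}")
```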



(snap. 4.6-2)


(snap. 4.6-3)

4.6.2.5 Gaussian Support Correction


The point gaussian anamorphosis and the regularized variogram model have to be transformed into the gaussian anamorphosis and variogram model related to the gaussian block variable Yv (a zero-mean, unit-variance gaussian variable).
This is achieved by running Statistics / Modeling / Gaussian Support Correction.



(snap. 4.6-1)

4.6.2.6 Direct Block Simulation


It is achieved by running the menu Interpolate / Conditional Simulations / Direct Block Simulation. It takes some time to get 100 simulations; depending on the computer, it may take more than an hour.

- The simulated variables are created in the 3D Grid 25x25x15 with names like Simu block Gaussian Fe rich ore... We store the gaussian values before the back transform, to allow the experimental variograms of the gaussian simulated values to be checked against the input variogram model, which is defined on the gaussian variables.

- The Block Anamorphosis and the Block Gaussian Model are those obtained from the Gaussian Support Correction.

- The Neighborhood used for kriging Fe rich ore is modified into a new one called Fe rich ore simulation, changing the radius along V to 800 m. The reason is simply that the Local Parameters for the neighborhood are not implemented in the Direct Block Simulation application.

- Number of simulations: 100 for instance.

- We ask not to Perform a Gaussian Back Transformation, for the reason explained above; the back transform will be achieved afterwards.

- The turning bands algorithm is used with 1000 Turning Bands.



(snap. 4.6-1)

You can compare the experimental variograms calculated from the 100 simulations, in up to 3 directions, with the input variogram model. The directions are entered by giving the increments (in number of grid meshes) of the unit directional lag along X, Y, Z. For instance, for direction 1 the increments are respectively 1, 0, 0, which makes the unit lag 25 m East-West.


(snap. 4.6-2)

Three graphic pages (one per direction) are then displayed. The average experimental variograms are displayed with a single line, the variogram model with a double line. In the next figure, the variograms in direction 3 show a good match up to 100 m. For the cross-variogram P-SiO2, where the correlation is very low, some simulations look anomalous; further analysis could be made to exclude these simulations from the next post-processing steps.

[Panels of experimental variograms of the 100 simulations against the input model, direction 3: Simu block Gaussian Fe rich ore, Simu block Gaussian SiO2 rich ore, Simu block Gaussian P rich ore and their cross-variograms; variogram vs. Distance (m).]

(snap. 4.6-3)

It is then necessary to transform the simulated gaussian values into raw values, using Statistics / Data Transformation / Raw Gaussian Transformation. To transform the three grades, you will have to run that menu three times, choosing Gaussian to Raw Transformation as the Transformation. The New Raw Variable will be created with the same number of indices, with names like Simu block Fe rich ore...
The transform is achieved by means of the block anamorphosis Fe-SiO2-P rich ore block 25x25x15; do not forget to choose the right variable on the right side of the Anamorphosis window.


(snap. 4.6-4)

We can now combine the simulations of the rich ore indicator and the grade simulations, by changing the grades to undefined (N/A) where the block is simulated as poor ore (simulated code 2). These transformations have to be applied on the 100 simulations using File / Calculator (the logic is sketched after the snapshots). It is compulsory to first create new macro variables with 100 indices, called Simu block Fe ..., with Tools / Create Special Variable.



(snap. 4.6-5)


(snap. 4.6-6)
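As an illustration, the masking logic applied by File / Calculator for one simulation amounts to the following (arrays hypothetical):

```python
import numpy as np

sim_code = np.array([1, 2, 1, 1, 2])                   # hypothetical SIS codes
sim_fe   = np.array([66.1, 64.0, 67.3, 65.2, 63.8])    # hypothetical Fe simulation
sim_fe_rich = np.where(sim_code == 1, sim_fe, np.nan)  # N/A where poor ore
```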

If you complete this Case Study by simulating the grades of poor ore as well, you will get grade values for all blocks in the orebody. The displays will be presented in the last chapter.

4.6.3 Simulations post-processing


One main advantage of simulations is the possibility to apply non linear calculations for local reserves estimation (for example, applying different cut-off grades simultaneously, or calculating the probability for a grade to be above a threshold, etc.). The post-processing may be applied on the simulated blocks, but in the present case it is more interesting to first regroup the simulated blocks into the 75x75x15 blocks (called panels) and illustrate some basic post-processing on the tonnage and metals of rich ore within those panels.



4.6.3.7 Regrouping blocks into panels


We will calculate for each panel the tonnage of rich ore and the quantity of rich ore Fe-P-SiO2 by using Tools / Copy Statistics Grid to Grid, which applies directly on the macro-variables:

- One run will calculate a macro-variable Tonnage rich ore, by storing the number of SMUs of rich ore (i.e. where the Fe simulated grade is defined) within each panel. With File / Calculator, that number is divided by 9 (the number of SMUs in the panel) to get a proportion. By multiplying by the panel volume and the density (constant, equal to 4), we get the real tonnage in tons.

(snap. 4.6-1)


(snap. 4.6-2)



- Three runs will be necessary to calculate the quantities of metal for the three elements. We store, with Tools / Copy Grid Statistics to Grid, the mean grade of the SMUs of rich ore within each panel; the variable is then called Metal Fe ... rich ore. With File / Calculator, by multiplying those mean values by the tonnage macro-variable, we get the metal quantity in tons (the arithmetic of both runs is sketched after the snapshots).

(snap. 4.6-3)


(snap. 4.6-4)
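As an illustration of the arithmetic of these two runs for a single panel of 9 SMUs (values hypothetical; the Fe grade is assumed to be in percent, hence the division by 100):

```python
import numpy as np

# One simulation, one panel: Fe of the 9 SMUs, NaN where simulated as poor ore.
smu_fe = np.array([65.2, 66.1, np.nan, 64.8, np.nan, 65.5, 66.0, np.nan, 64.9])
panel_volume, density = 75 * 75 * 15, 4.0

n_rich  = np.count_nonzero(~np.isnan(smu_fe))    # SMUs simulated as rich ore
tonnage = n_rich / 9 * panel_volume * density    # tonnes of rich ore in the panel
metal   = np.nanmean(smu_fe) / 100.0 * tonnage   # tonnes of Fe
print(f"tonnage = {tonnage:,.0f} t, Fe metal = {metal:,.0f} t")
```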

4.6.3.8 Examples of Post Processing


The menu Tools / Simulation Post-processing offers different options, illustrated hereafter on the Tonnage and Metal variables stored in the 3D Grid 75x75x15m file:



- Statistical Maps, to calculate the average of the 100 simulated tonnages.

(snap. 4.6-1)

(snap. 4.6-2)

The mean tonnage may be compared to the kriged indicator (after multiplication by the panel tonnage).


- Iso-Frequency Maps, to calculate the quantiles at the frequencies 25%, 50% and 75% of the Tonnage of rich ore. In the previous Simulation Post-Processing window, click the toggle button Iso-Frequency Maps; the following window pops up, where you define a New Macro Variable, Quantile Tonnage rich ore[xxxxx].

(snap. 4.6-3)

Then click Quantiles and choose 25% for the Step Between Frequencies. You get a macro-variable with 3 indices, one per frequency: for each panel, the tonnage such that 25%, 50% or 75% of the simulations are lower than the corresponding quantile value.

(snap. 4.6-4)



- Iso-Cutoff Maps, to calculate the probability for the Metal P rich ore to be above 0, 50, 100, 150, 200.

(snap. 4.6-5)

In the previous Simulation Post-Processing window, click the toggle button Iso-Cutoff Maps; the following window pops up, where you define a New Macro Variable for the Probability to be Above Cutoff (T), i.e. Proba P rich ore above[xxxxx].

(snap. 4.6-6)

Then click Cutoff, click Regular Cutoff Definition and choose the parameters as shown below. You get a macro-variable with one index per cutoff: for each panel, the probability to be above the corresponding cutoff (a sketch of these post-processing computations follows the snapshot).

(snap. 4.6-7)
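These three post-processing options reduce, per panel, to simple statistics across the 100 simulations; a minimal sketch with stand-in values:

```python
import numpy as np

rng = np.random.default_rng(3)
tonnage = rng.gamma(4.0, 5.0e4, size=(100, 6))   # stand-in: 100 simulations x 6 panels

mean_map  = tonnage.mean(axis=0)                           # Statistical Maps
quantiles = np.percentile(tonnage, [25, 50, 75], axis=0)   # Iso-Frequency Maps
proba     = (tonnage > 2.0e5).mean(axis=0)                 # Iso-Cutoff Maps
```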



- Risk Curves, to calculate the distribution of the 100 simulations of Fe metal quantities of rich ore over the orebody.

(snap. 4.6-8)

Click Risk Curves, then Edit, and fill in the parameters in the Risk Curves & Printing Format window, as shown. Only the Accumulations are of interest here. For a given simulation, the accumulation is obtained by multiplying the simulated block value (here the Fe metal in tons) by the volume of the block. This means that the average grade of the block is multiplied twice by the block volume. That is why, in order to get the metal in MTons, we have to apply a scaling factor of 75x75x15 (84375) multiplied by 10^6. That scaling is entered in the box just on the left of m3*V_unit in the Accumulations sub-window. By asking Print Statistics, the 100 accumulations will be output in the Isatis message window. The order of the printout depends on the option Sorts Results by; here we ask for Accumulations.


(snap. 4.6-9)

Come back to the Simulation Post-processing window and press Run. The following graphic is then displayed.



(snap. 4.6-10)

With the Application / Graphic Parameters you may Highlight Quantiles with the Simulation Value
on Graphic.

(snap. 4.6-11)

The graphic page is refreshed as shown.

128

(snap. 4.6-12)

In the message window we get the 100 simulated metal quantities in increasing order. The column Macro gives the index of the simulation for each outcome: for instance, the minimum metal is obtained for simulation #72, the next one for simulation #97, ...

Rank  Macro  Frequency  Accumulation  Volume
   1     72       1.00  1140.90 MT    3442162500.00 m3
   2     97       2.00  1156.65 MT    3442162500.00 m3
   3     38       3.00  1171.82 MT    3442162500.00 m3
   4     15       4.00  1179.91 MT    3442162500.00 m3
   5     91       5.00  1181.25 MT    3442162500.00 m3
   6     41       6.00  1185.01 MT    3442162500.00 m3
   7     30       7.00  1191.53 MT    3442162500.00 m3
   8     45       8.00  1191.71 MT    3442162500.00 m3
   9     57       9.00  1194.86 MT    3442162500.00 m3
  10     59      10.00  1195.80 MT    3442162500.00 m3
  11     35      11.00  1196.15 MT    3442162500.00 m3
  12      6      12.00  1196.37 MT    3442162500.00 m3
  13     48      13.00  1197.58 MT    3442162500.00 m3
  14     62      14.00  1199.70 MT    3442162500.00 m3
  15     40      15.00  1201.25 MT    3442162500.00 m3
  16      1      16.00  1201.90 MT    3442162500.00 m3
  17     86      17.00  1204.47 MT    3442162500.00 m3
  18     33      18.00  1206.65 MT    3442162500.00 m3
  19     93      19.00  1206.83 MT    3442162500.00 m3
  20     11      20.00  1210.44 MT    3442162500.00 m3

...

We will calculate for each panel the mean grade, tonnage and metal quantity of rich ore and the quantity of rich ore Fe-P-SiO2 by using Statistics / Processing / Grade Reblocking, which applies directly on the macro-variables. Grade Reblocking is designed to calculate local grade-tonnage curves on the panel grid (Q, T, M variables) from simulated grade variables on the block grid. The grade variables can be simulated using Turning Bands, Sequential Gaussian Simulation or any kind of simulation that generates continuous variables.
The Block Grid usually corresponds to the S.M.U. (Selective Mining Unit). It has to be consistent with the Panels; in other words, the Block Grid must make a partition of the Panel Grid. This application handles multivariable cases with a cut-off on the main variable.
Make sure to give a different name to each output variable: Simu Fe, Simu P and Simu SiO2.


(snap. 4.6-13)


(snap. 4.6-14)



4.7 Displaying the Results


This last chapter consists in visualizing the different results in the 3D grids, first through the 2D Display facility, then through the 3D Viewer.

4.7.1 Using the 2D Display


4.7.1.1 Display of the Kriged block model
We are going to create a new Display template (Display/New Page...), consisting of an overlay of a grid raster and isolines. All the Display facilities are explained in detail in the "Displaying & Editing Graphics" chapter of the Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page pops up, together with a Contents window. You have to specify the contents of your graphic in this window. To achieve that:
- First, give a name to the template you are creating: Kriging Fe rich ore. This will allow you to easily display this template again later.

- In the Contents list, double-click the Raster item. A new window appears, in order to let you specify which variable you want to display and the color scale:
  - Select the Grid file, 3D Grid 75x75x15m, with the selection orebody active, and select the variable Kriging Fe rich ore.
  - Specify the title for the Raster part of the legend, for instance Kriging Fe rich ore.
  - In the Grid Contents area, enter 16 for the rank of the XOY section to display.
  - In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the Fe variable. To create a new color scale: click the Color Scale button, double-click on New Color Scale, enter a name, Fe, and press OK. Click the Edit button. In the Color Scale Definition window:
    - In the Bounds Definition, choose User Defined Classes.
    - Choose a Number of Classes of 22.
    - Click on the Bounds... button, enter 60 and 71 as the Minimum and Maximum values. Press OK.
    - Switch on the Invert Color Order toggle in order to assign the red colors to the large Fe values.
    - Click the Undefined Values button and select Transparent.
    - In the Legend area, switch off the Display all tick marks button, enter 60 as the reference tickmark and 2 as the step between the tickmarks. Then specify that you do not want your final color scale to exceed 7 cm. Switch off the Automatic Format button, and specify that you want to use integer values of Length 7. Ask to display the Extreme Classes. Click OK.


(snap. 4.7-1)

In the Item contents for: Raster window, click Display current item to display the result. Click OK.

- Double-click on the Isolines item. A new Item contents window appears. In the Data area, select the Kriging Fe rich ore variable from the 3D Grid file with the same selection. In the Grid Contents area, select the rank 16 for the XOY section. In the Data Related Parameters area, switch on the C1 line, enter 60 and 71 as lower and upper bounds and choose a step equal to 2. Switch off the Visibility button. Click on Display Current Item to check your parameters, then on Display to see all the previously defined components of your graphic. Click OK to close the Item contents window.

- In the Item list, you can select any item and decide whether or not you want to display its legend, by setting the Legend toggle ON. Use the Move Front and Move Back buttons to modify the order of the items in the final display.

- Close the Contents window. Your final graphic window should be similar to the one displayed hereafter.

[Map: Kriging Fe rich ore, XOY section, with the Fe color scale (60 to 70, N/A transparent); X (m) vs. Y (m).]

(fig. 4.7-1)

You can also visualize your 3D grid in perspective. Open again the Contents window of the previous graphic display (Application/Contents...). Switch the Representation Type from Projection to Perspective: just click on Display, and the previous section is represented within the 3D volume. Because of the extension of the grid, set the vertical axis factor to 3 in the Display Box tab (switch the Automatic Scales toggle OFF). In the Camera tab, modify the Perspective Parameters: longitude = 60, latitude = 40.

[Perspective view: XOY section of Kriging Fe rich ore within the 3D grid volume.]

(fig. 4.7-2)



- Representing the whole grid as a solid: this is obtained by setting the 3D Grid contents to 3D Box, both in the Raster and Isolines item contents windows.

- Representing the 3D grid as a solid and penetrating into the solid by digging out a portion of the grid: for each item contents window (raster and isolines), set the 3D Grid contents to Excavated Box, then define the indices of the excavation corner (for instance: cell = 17, 21, 15).

[Excavated-box perspective view of Kriging Fe rich ore.]

(fig. 4.7-3)

In the Contents window, the Camera tab (Animate tab from the main Contents window) allows you to animate the graphic in several ways:

- by animating the entire graphic along the longitude or latitude definition,

- by animating one item property at a time, for instance the grid raster section. To interrupt the animation, press the STOP button in the main Isatis window.

4.7.1.2 Display of the simulated block model


- Fe grade:
  - Create a raster image of the Fe simulated macro variable: choose the first simulation (index 1). Display rank 16 of the 25x25x15m 3D grid file (so you can compare the simulations with the kriging) and choose the Fe grade color scale. Ask to display the legend.
  - Create a Base map of the composite data from the Composites 15m file, with the selection final lithology{Rich ore} active and no variable, in order to use the same Default Symbol, a full circle of 0.15 cm.


(snap. 4.7-1)

In the Display Box tab of the Contents window, set the mode to Containing a set of items and click the Raster item: set the toggle Box Defined as Slice around Section ON and set the Slice Thickness to 45 m.



(snap. 4.7-2)

Press Display:


[Map: Simu block Fe[00001], Fe rich ore color scale, with the rich ore composites overlaid; X (m) vs. Y (m).]

(fig. 4.7-1)

From the Animate tab, select the raster item and choose to animate on the macro index. Set the Delay to 1 s and press Animate. The different simulations appear consecutively: the animation allows you to sense the differences between the simulations. Check that the simulations tend to be similar around the boreholes.

- Display of the probability for the Metal P of rich ore in panels to be above the cut-off of 50 T:
  - Create a new page and display the macro variable Proba P rich ore above from the 3D Grid 75x75x15m file: choose the macro index n°2 (i.e. cutoff = 50).
  - Legend title: Probability.
  - Ask to display rank 16 (horizontal section 16).
  - Make a New Color Scale named Proportion, as explained before for Fe, but with 20 classes between 0 and 1.
  - Press OK.



Ask for the legend and press Display:

[Map: Proba P rich ore above{50.000000}, Probability color scale from 0.00 to 1.00; X (m) vs. Y (m).]

(fig. 4.7-2)

4.7.2 Using the 3D Viewer


Launch the 3D Viewer (Display/3D Viewer...).

4.7.2.3 Borehole visualization


- Display the Fe composites:
  - Drag the Fe variable from the Composites 15m file in the Study Contents and drop it in the display window;
  - Magnify the scale along Z by a factor of 2 by clicking the Z Scale button at the top of the graphic page;
  - Click Toggle the Axes in the menu bar on the left of the graphic area;
  - From the Page Contents, right-click on the 3D Lines object to open the 3D Lines Properties window. In the 3D Lines tab:
    - select the Tube mode;
    - switch on the Selection toggle and choose the final lithology{Rich ore} macro index;
    - switch off the Allow Clipping toggle.

(snap. 4.7-1)

  - In the Color tab, choose the same Fe Isatis color scale;
  - In the Radius tab, set the mode to constant with a radius of 20 m;
  - Press Display and close the 3D Lines Properties window;
  - In the File menu, click Save Page as and give a name (composites rich ore) in order to be able to recover it later if you wish.



(snap. 4.7-2)

4.7.2.4 Display of the kriged 3D Block model


As an example we will display the kriged indicator of rich ore. In order to make a New Page, click Close Page in the File menu.

- Click Compass in the menu bar on the left of the graphic area.

- Drag the Kriging indicator rich ore variable from the 3D Grid 75 x 75 x 15 m file in the Study Contents and drop it in the display window.

- Right-click on the 3D Grid 75x75x15m file in the Page Contents to open the 3D Grid Properties:
  - In the 3D Grid tab, tick the Selection toggle and choose the orebody selection;
  - In the Color tab:
    - set the color mode to variable and change the variable to Kriging indicator rich ore;
    - apply the Rainbow reversed Isatis color scale;
  - Press Display and close the 3D Grid Properties window.

(fig. 4.7-1)
- Investigate inside the kriged block model:
  - open the clipping plane facility from Toggle the Clipping Plane in the menu bar on the left of the graphic area: the clipping plane appears across the block model;
  - go into select mode by pressing the arrow button in the function bar;
  - click the clipping plane rectangle and drag it next to the block model for better visibility;
  - click one of the clipping plane axes to change its orientation (be careful to target precisely the axis itself, in dark grey, not its squared extremity nor the center tube in white);
  - add the drill holes (Fe rich ore) as you did for the previous graphic page;
  - open the Line Properties window of the Composites 15m file and set the Allow Clipping toggle ON;
  - click on the clipping plane's center white tube and drag it in order to translate the clipping plane along the axis: choose a convenient cross section, approximately in the middle of the block model. You may also benefit from the clipping controls parameters available on the right of the graphic window, in order to clip a slice with a fixed width and along the main grid axes;
  - click on one block of particular interest: its information is displayed in the top right corner:

(snap. 4.7-1)

You may also click on boreholes to display composite data.


- Slicing (beforehand, click on Toggle the Clipping Plane):
  - Edit the 3D Grid 75x75x15m attributes, go to the Slicing tab and set the properties as follows:

(snap. 4.7-2)

  - Set the Automatic Apply toggle ON, and move the slices to visualize the slicing interactively.

- Save the graphic as a New Page with the name Composites and kriged indicator rich ore.

4.7.2.5 Display of the search ellipsoid


From the kriging application (the definition parameters of the 3D kriging of Fe should be kept), launch the Test window. From Application/Target Selection, select the grid node (20,19,14) for instance and press Apply. Then make sure that the 3D Viewer is running and, from the same Application menu of the Test window, ask to Link to 3D Viewer: a 3D representation of the search ellipsoid neighborhood is displayed, and the samples used for the estimation of this particular node are highlighted. A new graphic object, neighborhood, appears in the Page Contents, from which you may change the graphic properties (color, size of the samples for coding the weights or the Fe values, etc.).



(fig. 4.7-1)


5. Non Linear
This case study, dedicated to advanced users, is based on the Walker Lake data set, which was first introduced and analyzed by Mohan SRIVASTAVA and Edward H. ISAAKS in their book Applied Geostatistics (1989, Oxford University Press).

Geostatistical methods applicable to the global and local estimation of recoverable resources in a mining industry context are described through this case study:
- Non linear methods, including four methods used to estimate local recoverable resources: indicator kriging, disjunctive kriging, uniform conditioning and service variables.
- Conditional simulations of grades, using the two main applicable methods: turning bands and sequential gaussian.

The efficiency of these methods will be evaluated by comparison to the reality, which can be considered as known in this case because of the origin of the data set.

Reminder: while using Isatis, the on-line help is accessible anytime by pressing F1 and provides a full description of the active application.

Important Note:
Before starting this study, it is strongly advised to read the Beginner's Guide book, especially the following paragraphs: Handling Isatis, Tutorial Familiarizing with Isatis basics, and Batch Processing & Journal Files. All the data sets are available in the Isatis installation directory (usually C:\program file\Geovariances\Isatis\DataSets\). This directory also contains a journal file including all the steps of the case study. In case you get stuck during the case study, use the journal file to perform all the actions according to the book.

Last update: Isatis version 2014



5.1 Introduction and overview of the case study


This case study is dedicated to advanced users who feel comfortable with linear geostatistics and
Isatis.

5.1.1 Why non linear geostatistics?


Non linear geostatistics are used for estimating recoverable resources. Unlike the estimation of in situ resources by conventional kriging (linear geostatistics), the estimation of recoverable resources considers the mining aspects of the question. Three points can effectively be taken into account by non linear geostatistics:

- the support effect, which makes the recovered ore depend on the volume on which the ore/waste decision is made. In this case the size of the selective mining unit (SMU or block) has been fixed to 5 m x 5 m. When performing the local estimations we will calculate the ore tonnage and grade after cut-off in panels of 20 m x 20 m. It is important to keep these terms: block for the selective unit and panel for the estimated unit (e.g. tonnage, within the panel, of the ore consisting of blocks with a grade above the cut-off). These terms are used systematically in the Isatis interface.

- the information effect, which makes the mis-classification between selected ore and waste depend on the amount of information used in estimating the blocks. At this stage two notions are important. Firstly, the recovered ore is made of the true grades contained in blocks whose estimated grade is above the cut-off. Secondly, the decision between ore and waste will be made with additional information (blast-holes...) in the future of the production. The question is then: what can we expect to recover tomorrow, if we assume a future pattern of blast-holes, for instance?

- the constraint effect, which leads, for any technical/economical reason, to ore dilution or ore left in place. The two previously mentioned effects assume a free selection of blocks within the panels, where only the distribution of block grades is of importance. When their spatial distribution has to be considered (the recovered ore will be different if rich blocks are contiguous or spread throughout the panel), only geostatistical simulations provide an answer.

5.1.2 Organization of the case study


This case study is divided into several parts: the first part, 5.2 Preparation of the case study, rehearses geostatistical concepts and Isatis manipulations already described in the In Situ 3D Resource Estimation case study: declustering, grid manipulations, variography, ordinary kriging with neighborhood creation. These topics will not be detailed here and the user is invited to refer to the previous case study for an extensive description. The remainder of the case study describes several different methods for the estimation of recoverable resources; it is also recommended that the user reads 5.3 Global estimation of the recoverable resources before starting any method described in 5.4 Local Estimation of the Recoverable Resources or in 5.5 Simulations. The dataset allows comparing the estimations with real measurements: this is done exhaustively in 5.6 Conclusions.


5.1.2.1 Global Estimation of Recoverable Resources (developed in 5.3)


The global estimation makes use of the raw data histogram (possibly weighted by declustering coefficients): each grade is attached to a frequency, i.e. the global proportion relative to the global tonnage of the deposit, assuming a perfect sampling. This is a direct statistical approach. Geostatistics appears as soon as the variogram is used to correct this histogram, i.e. the proportions, to reflect the support effect and/or the information effect. Thus, a histogram model is needed in order to perform these corrections: the modeling and the corrections are done through the Gaussian Anamorphosis Modeling and Support Effect panels in Isatis, widely used throughout the whole case study. Comparison to reality and to kriging will be done through global grade-tonnage curves.

5.1.2.2 Local estimation of recoverable resources


The local estimation of recoverable resources makes use of non linear estimation or simulation techniques, involving the gaussian anamorphosis. The aim is to estimate the proportion of ore blocks within larger panels (assuming free selection of blocks within each panel), and the corresponding metal tonnage and mean grade above cut-off:

- by non linear kriging techniques (developed in 5.4): the main advantage of these methods is their swiftness, but no information on the location of the ore blocks within the panels is given. Four methods will be described: Indicator kriging, Disjunctive kriging, Service variables and Uniform Conditioning.

- by simulation techniques (developed in 5.5): the main advantage of simulations is the possibility to derive simulated histograms and estimate the constraint effect, but the method is quite heavy and time consuming for big block models. Two methods will be described: Turning Bands (TB) and Sequential Gaussian Simulations (SGS).

Comparison to reality, through a specific analysis of the 600 ppm cut-off, will be done through graphic displays and cross plots of the ore tonnage and mean grade above cut-off.

Note - If you wish to compare the local estimates with reality you will first need to calculate the real tonnage variables from the real grades for the specific cut-off 600 (this is done in 5.4.1 Calculation of the true QTM variables based on the panels).


5.2 Preparation of the case study


The dataset is derived from an elevation model of the western United States, the Walker Lake area in Nevada. It has been transformed to represent measures of concentration in some elements (economic grades in the deposit we are going to evaluate). From the original data set we will use only the variable V, considered as the grade of an ore mineral measured in ppm: the multivariate aspect of this data set will not be considered, as the non linear estimation methods available in Isatis are currently univariate (unlike simulations). The data set is twofold: the exhaustive data set, containing 78,000 measurement points on a 1m x 1m grid, and the sample set resulting from successive sampling campaigns and containing 470 data locations. Several methods for the estimation of recoverable resources are proposed in Isatis: this case study aims to describe them all and compare them to the reality derived from the exhaustive set.

5.2.1 Data import and declustering


The data is stored in the Isatis installation directory (sub-directory Datasets/Non_Linear). Load the
data from ASCII file by using File / Import / ASCII. The ASCII files are Sample_set.hd for the sample set and Exhaustive_set.hd for the exhaustive data set. The files are imported into two separate
directories Sample set and Exhaustive set respectively, and files are called Data.

(snap. 5.2-1)

By visualizing the Sample set data (using Display / Basemap/ Proportional), we immediately see
the preferential sampling pattern of high grade zones:


[proportional basemap of the sample grades V]

(fig. 5.2-1)

In order to correct the bias due to the preferential sampling of high grade zones, it is necessary to decluster the data. To do so you can use Tools / Declustering: it performs a cell declustering with a moving window centered on each sample. We store the resulting weights in a variable Weight of the sample data set: this variable will be used later to weight the statistics for the variographic analysis in the EDA and for the gaussian anamorphosis modeling. The moving window size for declustering has been fixed here to 20m x 20m, according to the approximate loose sampling mesh outside the clusters.

Note - A possible guide for choosing the moving window dimensions is to compare the value of the
resulting declustered mean to the mean of kriged estimates (kriging has natural declustering
capabilities).
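For reference, the weighting scheme can be sketched outside Isatis as follows (a minimal Python illustration of moving-window cell declustering, not the actual Isatis implementation; the 20 m window matches the choice above):

import numpy as np

def declustering_weights(x, y, window=20.0):
    """Each sample is down-weighted by the number of samples falling
    in a window centered on itself (a sample always counts itself)."""
    x, y = np.asarray(x), np.asarray(y)
    counts = np.array([
        np.sum((np.abs(x - xi) <= window / 2) & (np.abs(y - yi) <= window / 2))
        for xi, yi in zip(x, y)
    ])
    w = 1.0 / counts
    return w / w.sum()              # normalize the weights to sum to 1

def weighted_stats(v, w):
    """Declustered mean and standard deviation of the grades."""
    m = np.sum(w * v)
    return m, np.sqrt(np.sum(w * (v - m) ** 2))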
The statistics before and after declustering are the following:

(snap. 5.2-2)

              Raw sample set    Declustered
Mean              436.35           279.68
Std. Dev.         299.92           251.44
The next graphics correspond to the histograms of the Sample set, Exhaustive set and Declustered
sample set; they have been calculated using Statistics / Exploratory Data Analysis (EDA). The histogram of the Declustered sample set has been calculated with the Compute Using the Weight Variable option toggle ON, using the Weight variable.


(snap. 5.2-3)


(fig. 5.2-2)

From these three histograms we clearly see that the declustering process allows a better representation of the statistical behavior of the phenomenon.

5.2.2 Variographic analysis of the sample grades


We first focus on possible anisotropies in the sample set data. From the Statistics / Exploratory Data Analysis panel, activate the option Compute using the Weight Variable: we will calculate a weighted 2D variogram map of the V variable from the sample dataset. By default, the Reference Direction is set to an azimuth equal to the North (Azimuth = N0.00). The parameters related to the directions, lags and tolerances may be tuned for a detailed variographic analysis, but here we will rely directly on common parameters: ask for 18 directions (10 degrees each), and define 11 lags of 15 m. Generally, the variogram is calculated with a tolerance on distance set to 50% of the lag, which corresponds to a Tolerance on Lags equal to 0 lag; besides, calculations are often made with an angular tolerance of 45 degrees (in order to consider each sample once with two perpendicular directions), which corresponds to a Tolerance on Directions equal to 4 sectors (4 sectors of 10 degrees + half a sector of 5 degrees = 45 degrees).
If the focus is on short scale, one may decide to calculate a bi-directional variogram along N70 and N160, considering that N160 is a direction of maximum continuity.

Note - This short scale anisotropy is not clearly visible on the variogram map below: to better
visualize it, you may re-calculate the variogram map on 5 lags only and create a customized color
scale through Application / Graphic Specific Parameters...
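The experimental calculation with these tolerances can be sketched as follows (a minimal Python illustration, assuming azimuths measured in degrees clockwise from North; Isatis handles all of this internally):

import numpy as np

def experimental_variogram(x, y, v, azimuth, n_lags=11, lag=15.0,
                           angle_tol=45.0, lag_tol=0.5, weights=None):
    """Weighted experimental variogram along one azimuth, with the lag
    and angular tolerances described above."""
    x, y, v = map(np.asarray, (x, y, v))
    n = len(v)
    w = np.ones(n) if weights is None else np.asarray(weights)
    ux, uy = np.sin(np.radians(azimuth)), np.cos(np.radians(azimuth))
    num, den = np.zeros(n_lags), np.zeros(n_lags)
    for i in range(n - 1):
        dx, dy = x[i + 1:] - x[i], y[i + 1:] - y[i]
        h = np.hypot(dx, dy)
        # cosine of the angle between each pair vector and the direction
        cosang = np.abs(dx * ux + dy * uy) / np.where(h > 0, h, np.inf)
        k = np.rint(h / lag).astype(int)        # nearest lag index
        ok = ((h > 0) & (k > 0) & (k < n_lags)
              & (cosang >= np.cos(np.radians(angle_tol)))
              & (np.abs(h - k * lag) <= lag_tol * lag))
        pw = w[i] * w[i + 1:]
        np.add.at(num, k[ok], 0.5 * pw[ok] * (v[i + 1:][ok] - v[i]) ** 2)
        np.add.at(den, k[ok], pw[ok])
    return np.where(den > 0, num / den, np.nan)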


In the variogram map area you can activate a direction using the mouse buttons, the left one to
select a direction, and the right one for selecting Activate Direction in the menu. Activating both
principal axes (perpendicular directions N160 and N70) displays the corresponding experimental
variograms below. When selecting the variogram, click right and ask for Modify Label... to change
N250 to N70:

(snap. 5.2-4)

The short scale anisotropy is visible on the experimental variogram; it is then saved in a parameter
file Raw V from the graphic window (Application / Save in Parameter File...).
We now have to fit a model based on these experimental variograms using the Statistics / Variogram Fitting facility. We fit the model from the Manual Fitting tab.


(snap. 5.2-5)


(snap. 5.2-6)


(snap. 5.2-7)

Press Print to check the output variogram, then save the variogram model in the parameter file under the name Raw V. It should be noted that the total sill of the variogram is slightly above the dispersion variance, and that a low nugget value has been chosen.

5.2.3 Calculation of the true block and panel values


In this case study, during the mine exploitation period, a 5m x 5m block will be the selective mining unit (SMU). The recoverable resource estimation will be based on this 5m x 5m block support; but first, the in situ resource estimation will be done on 20m x 20m panels for a more robust estimation.
As we have access to an exhaustive data set of the whole area to be mined, we can assume that we know the true values for any size of support, simply by averaging the real values of the exhaustive set over the desired block or panel support.

5.2.3.1 Calculation of the true grade values for 5 m x 5 m SMU blocks


To store this average value on a 5m x 5m block support, we need to create a new grid (new file called Grid 5*5 in a new directory Grids, using the File / Create Grid File facility) and choose the coordinates of the origin (center of the block at the lowest left corner) in order to match the data exactly. The Graphic Check, in Block mode, will help to achieve this task. Enter the following grid parameters:
- X and Y origin: 3m,
- X and Y mesh: 5m,
- 52 nodes along X, 60 nodes along Y.


(snap. 5.2-1)

Using this configuration we have exactly 25 samples from the exhaustive data set for each block of
the new grid. Edit the graphic parameters to display the auxiliary file.


(snap. 5.2-2)

(fig. 5.2-1)

Now we need to average the real values on this Grid 5*5 file, using Tools / Copy Statistics / Points
-> Grid. We will call this new variable True V.

Note - Using a moving window equal to zero for all the axes, we constrain the new Mean variable
to a calculation area of 5m x 5m (1 block).
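This averaging amounts to one line of array manipulation (a minimal sketch with a hypothetical array name; the exhaustive grid gives exactly 25 points per 5m x 5m block):

import numpy as np

def true_block_grades(exhaustive, block=5):
    """Average the exhaustive 1m x 1m values over each 5m x 5m block.
    `exhaustive` is a 2D array of 300 rows x 260 columns (78,000 points);
    the result is the 60 x 52 grid of true block grades."""
    ny, nx = exhaustive.shape
    return (exhaustive
            .reshape(ny // block, block, nx // block, block)
            .mean(axis=(1, 3)))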


(snap. 5.2-3)

[map: true block grade values True V with isolines]

Display of the true block grade values (5m x 5m blocks)

(fig. 5.2-2)


The above figure is the result of two basic actions of the Display menu: a raster display of the true block grades is performed, then isolines are overlaid. Isolines range from 0 to 1500 by steps of 250 ppm; the 1000 ppm isoline is represented with a bold line type. The color scale has been customized to cover grades between 0 and 1000 ppm, even if there are values greater than this upper bound. Each class has a width of 62.5 ppm, and the extreme values are represented using the extreme colors.

Note - Keep in mind that the V variable was primarily deduced from elevation data: we clearly see on the above map a NW-SE valley responsible for the anisotropy detected during variography. The Walker Lake itself (consequently with zero values...) lies in this valley. One could raise stationarity issues, as the statistical behavior of elevation data differs from valleys (with a lake) to nearby ranges. This is not the subject of this case study.

5.2.3.2 Calculation of the true grade values for 20 m x 20 m panels


Create a new grid file Grid 20*20 in the Grids directory with the following parameters:
- X and Y origin: 10.5 m,
- X and Y mesh: 20 m,
- 13 nodes along X, 15 nodes along Y.


(snap. 5.2-1)

The graphic check with the Grid 5*5 shows that the 5m x 5m blocks form a perfect partition of the 20m x 20m panels. This allows using the specific Tools / Copy Statistics / Grid to grid... facility for calculating the true panel values True V as the Mean Name:


(snap. 5.2-2)


5.2.4 Ordinary Kriging - In situ resource estimation


The in-situ resource estimation will be done on the 20 m x 20 m panels through Interpolate / Estimation / (Co)-Kriging...:


- Type of calculation: Block
- Input file: Sample Set/Data/V
- Output file: Grids/Grid 20*20 /Kriging V
- Model: Raw V
- Neighborhood: create a moving neighborhood named octants, without any rotation and with a constant radius of 70 m, made of 8 sectors with a minimum of 5 samples and the optimum number of samples per sector set to 2. This neighborhood will be used extensively throughout the case study.

(snap. 5.2-3)


(snap. 5.2-4)

For comparison purposes, it is interesting to also perform the same kriging on the small blocks (Grid 5*5) in order to quantify the smoothing effect of linear kriging.

5.2.5 Preliminary conclusions


Basic statistics may be done through different runs of Statistics / Quick Statistics...; the results are summarized below. An interpolation by Inverse Distance ID2, with a power equal to 2 and the same neighborhood, has been done for comparison (through Interpolate / Interpolation / Quick Interpolation...):


Comparing the true V values for the three different supports (punctual, block 5x5 and panel 20x20):
- as expected, the mean remains exactly identical;
- the variance decreases with the support size: this is the support effect.

Comparing estimated values vs. true values for one same support:
- punctual: the estimation by declustering is satisfactory because the mean and the variance are comparable. The bias (279.7 compared to 278.0) is negligible.
- block 5x5: ID2 shows an overestimation. For kriging, the bias is negligible and, as expected, the variance of the kriged blocks (44013) is smaller than the real block variance (52287); this is the smoothing effect caused by linear interpolation. Besides, there are some negative estimates; the 5m x 5m blocks are too small for a robust in situ estimation.
- panel 20x20: the bias of ID2 is less pronounced, but the variance is not realistic; this is because of strong local overestimation of the high grade zones. The variance of the kriged panels is smaller than the real panel variance, but the difference is less pronounced. Moreover, there is only one negative panel estimate.

Note - 72 SMU blocks have negative estimates indicating that the 5 m x 5 m block size is too small
in this case.


5.3 Global estimation of the recoverable resources


5.3.1 Punctual histogram modeling
Using Statistics / Gaussian Anamorphosis Modeling we model the anamorphosis function linking the raw values of V (called Z in Isatis) and their normal score transform (called Y in Isatis), i.e. the associated gaussian values. In order to reproduce the underlying distribution correctly, we have to apply the Weight variable previously calculated by the Declustering tool. The gaussian variable will be stored under Gaussian V:

(snap. 5.3-1)


(snap. 5.3-2)

The Interactive Fitting... window gives access to specific parameters for the anamorphosis (intervals on the raw values to be transformed, intervals on the gaussian values, number of polynomials etc.): the default parameters will be kept. The distribution function is modeled by specific polynomials called Hermite polynomials; the more polynomials, the more precise the fit. There are also QC graphic windows allowing you to check the fit between the experimental (raw) and model histograms:


[plot: raw values vs. gaussian values]

(fig. 5.3-1)

Punctual anamorphosis function.


Experimental data is in black, the anamorphosis is in blue.
Save the anamorphosis in a new parameter file called Point and perform the gaussian transform with the default Frequency inversion method. This writes the Gaussian V variable on disk; it will be used for the Disjunctive Kriging, the Service Variable estimations and the simulations.
The Point anamorphosis is equivalent to a histogram model of the declustered raw values V; it may be used to derive a global estimation as an overall view of the potential of an orebody (Grade-Tonnage curves are available in the Interactive Fitting... parameters), but it takes neither the support effect nor the information effect into account. This is done hereafter.
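Such a global estimation is a direct computation on the weighted histogram; a minimal sketch (assuming v and w hold the declustered sample grades and weights):

import numpy as np

def grade_tonnage(v, w, cutoffs):
    """Global grade-tonnage curves from a declustered point histogram:
    T = tonnage proportion above cut-off, Q = corresponding metal,
    M = Q / T = mean grade above cut-off."""
    v = np.asarray(v)
    w = np.asarray(w) / np.sum(w)
    T = np.array([w[v >= zc].sum() for zc in cutoffs])
    Q = np.array([(w * v)[v >= zc].sum() for zc in cutoffs])
    M = np.where(T > 0, Q / np.where(T > 0, T, 1.0), np.nan)
    return T, Q, M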

5.3.2 Support effect correction


We are now going to quantify the support effect for 5 m x 5 m blocks; that is, how much the 5 m x 5 m block distribution differs from the punctual grade distribution calculated above. The following is required:
- a model of the distribution, defined by means of a gaussian anamorphosis function;
- the block variance, which can be calculated using Krige's relationship, giving the dispersion variance as a function of the variogram.

The discrete gaussian model then provides a consistent change of support model.
Use the Statistics / Support Correction... panel with the Point anamorphosis and the Raw V variogram model as input. The 5m x 5m block will be discretized into 4 x 4 points. At this stage no information effect is considered, so the corresponding toggle is not activated.


(snap. 5.3-3)

Press Calculate to compute Gamma(v,v); the corresponding Real Block Variance and Correction are displayed in the message window:

 ______________________________________ __________
|                                      |    V     |
|--------------------------------------|----------|
| Punctual Variance (Anamorphosis)     | 63167.25 |
| Variogram Sill                       | 66500.00 |
| Gamma(v,v)                           |  9431.85 |
| Real Block Variance                  | 53735.40 |
| Real Block Support Correction (r)    |   0.9293 |
| Kriged Block Support Correction (s)  |   0.9293 |
| Kriged-Real Block Support Correction |   1.0000 |
| Zmin Block                           |     0.00 |
| Zmax Block                           |  1528.10 |
|______________________________________|__________|

Note - Gamma (v,v) is calculated using random procedures; hence, different results are generated
when pressing the Calculate button. Gamma (v,v) and the resulting Real Block Variance should not
vary too much between different calculations.
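The underlying computation follows Krige's relationship, Real Block Variance = Punctual Variance - Gamma(v,v), where Gamma(v,v) is the mean variogram value between the block discretization points (in the printout above: 63167.25 - 9431.85 = 53735.40). A minimal sketch; the variogram parameters below are illustrative placeholders, not the fitted Raw V model:

import numpy as np

def gamma_vv(gamma, size=5.0, ndisc=4):
    """Mean variogram value inside a square block (Gamma(v,v)),
    approximated over an ndisc x ndisc regular discretization."""
    g = (np.arange(ndisc) + 0.5) / ndisc * size
    px, py = [a.ravel() for a in np.meshgrid(g, g)]
    h = np.hypot(px[:, None] - px[None, :], py[:, None] - py[None, :])
    return gamma(h).mean()

def gamma_model(h):
    """Hypothetical nugget + exponential point variogram."""
    nugget, sill, scale = 3000.0, 63500.0, 45.0
    return np.where(h > 0, nugget + sill * (1.0 - np.exp(-h / scale)), 0.0)

# Krige's relationship: punctual variance minus Gamma(v,v)
real_block_variance = 63167.25 - gamma_vv(gamma_model)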
By clicking on the anamorphosis and on the histogram bitmaps we can check that, after the support
effect correction, the histogram of blocks is smoother (smaller variance) than the punctual histogram model:

[histograms of the punctual and block models]

(fig. 5.3-2)

Histograms (punctual in blue and block in red): the block histogram model is smoother.
Save the anamorphosis function under the name Block 5m * 5m and press RUN.

5.3.3 Support & information effects correction


The grade tonnage curves obtained at this stage consider that the mining selection is based upon the true SMU grades. In reality, the SMU grades will be estimated using the ultimate information from the blast-holes. The consequence is that the grade tonnage curve deteriorates, as the selection ignores the uncertainty of the estimation: this is called the information effect. Knowing the future sampling pattern, it is possible to take this information effect into account.
We suppose that, at the mining stage, there will be one blast-hole at the centre of each block. The blocks will then be estimated from blast-holes spread on a regular grid of 5m x 5m: we will use the grid nodes of the Grid 5*5 file to simulate this future blast-hole sampling pattern. In order to calculate the grade tonnage curves taking this information effect into account (i.e. the selection between ore and waste is made on the future estimated grades, and not on the real grades), we should calculate 2 coefficients:
- a coefficient that transforms the point anamorphosis into the kriged block one;
- a coefficient that allows calculating the covariance between true and kriged blocks.

Therefore, the variance of the kriged block and the covariance between real and kriged blocks are
needed: they can be automatically calculated in the same Support Correction panel through the
Information Effect optional calculation sub-panel (... selector next to the toggle):

(snap. 5.3-4)

The final sampling mesh corresponds to the final sampling pattern to be considered: 5 x 5 m. Press OK and create a new anamorphosis function Block 5m*5m with information effect. Two extra support correction coefficients are calculated and displayed when pressing RUN from the main panel:
Block Support Correction Calculation:
 ______________________________________ __________
|                                      |    V     |
|--------------------------------------|----------|
| Punctual Variance (Anamorphosis)     | 63167.25 |
| Variogram Sill                       | 66500.00 |
| Gamma(v,v)                           |  9431.85 |
| Real Block Variance                  | 53735.40 |
| Real Block Support Correction (r)    |   0.9293 |
| Kriged Block Support Correction (s)  |   0.9117 |
| Kriged-Real Block Support Correction |   0.9859 |
| Zmin Block                           |     0.00 |
| Zmax Block                           |  1528.10 |
|______________________________________|__________|


5.3.4 Analysis of the results for the global estimation


Open Tools / Grade Tonnage Curves... and activate 5 data toggles. This tool allows comparing histograms from different kinds of data (histogram models, grade variables, tonnage variables) and deriving grade-tonnage curves for the QTM key variables (metal Quantity Q, total Tonnage T, Mean grade M).
Press Edit... for the first one and ask for a histogram model kind of data. Choose the Point anamorphosis function and specify 21 cut-offs from 0 to 1000:

(snap. 5.3-5)


(snap. 5.3-6)


(snap. 5.3-7)

Press OK, then repeat the procedure for the other 4 data entries with the same cut-off definition, specifying different curve parameters to distinguish them:
- curve 2: choose histogram model and the Block 5m * 5m anamorphosis function
- curve 3: choose histogram model and the Block 5m * 5m with information effect anamorphosis
- curve 4: choose grade variable and select the True V variable from the Grid 5*5 file
- curve 5: choose grade variable and select the Kriging V variable from the Grid 5*5 file

Once the 5 curves have been edited, click on the graphic bitmaps to display the Total tonnage vs. cut-off and the Mean grade vs. cut-off curves:


(fig. 5.3-3)

Total tonnage vs. cut-off - the block histograms are close to the true tonnages.
The ordinary kriging curve under-estimates the total tonnage for high cut-offs, showing the danger of applying cut-offs on linear estimates for recoverable resources.


(fig. 5.3-4)

Mean grade vs. Cut-off


Pressing Print from the main Grade Tonnage Curves window prints the numeric values for each cut-off. The QTM variables for the particular cut-off 600 are the following (the total tonnage T is expressed in %):

                      |    Q    |    T    |    M    |
True block 5x5        |  77.954 |  10.385 | 750.670 |
Point model           |  87.738 |  11.351 | 772.934 |
Block 5*5 (no info)   |  76.103 |  10.084 | 754.699 |
Kriged blocks 5x5     |  61.082 |   8.077 | 756.258 |

In 5.2.5 we saw that linear kriging is well adapted to the in situ resource estimation on panels. But when mining constraints are involved (i.e. applying the 600 ppm cut-off on small blocks), kriging predicts a tonnage of 8.08% instead of 10.38%: the mine would have to deal with a 29% over-production compared to the prediction.
On the other hand, the global estimation using the point model over-estimates the reality. The
global estimation with change of support (block 5*5 no info) gives a prediction of good quality.
Because we know the reality from the exhaustive dataset, it is possible to calculate the true block grades taking the true information effect into account and compare them to the Block 5x5 with information effect anamorphosis. The detailed workflow to calculate the true information effect will not be given here; only the general idea is presented below:
- sample one true value at the center of each block from the exhaustive set (representing the blast-hole sampling pattern with real sampled grades V);
- krige the blocks with these samples: these are the ultimate estimated block grades on which the ultimate selection will be based;
- select the blocks where the ultimate estimates are > 600 and derive the tonnage;
- calculate the associated QTM variables based on the true grades.

We can now compare the Block 5x5 with info to the real QTM variables calculated with the true information effect (info):

                        |   Q   |   T   |   M    |
True block 5x5          | 77.95 | 10.38 | 750.67 |
True block 5x5 (info)   | 67.92 |  9.01 | 754.11 |
Block 5*5 with info     | 71.83 |  9.66 | 743.40 |

As expected, the information effect on the true grades deteriorates the real recovered tonnage and metal quantity, because the ore/waste mis-classification is taken into account: the real tonnage decreases from 10.38% to 9.01%. The estimation from the Block 5x5 with info anamorphosis (9.66%) is closer to this reality.


5.4 Local Estimation of the Recoverable Resources


We now want to perform the local estimation of the recoverable resources, i.e. the ore and metal tonnage contained in the selective 5m x 5m SMU blocks within the 20m x 20m panels.
Four main estimation methods will be reviewed: Indicator kriging, Disjunctive kriging, Uniform conditioning and Service variables. For a set of given cut-offs, these methods will produce the following QTM variables:

- the total Tonnage T: the total tonnage is expressed as the percentage or proportion of SMU blocks that have a grade above the given cut-off in the panel. Each panel is a partition of 16 SMU blocks, i.e. when T is expressed as a proportion, T = 1 means that all 16 SMU blocks of the panel have an estimated grade above the cut-off.

- the metal Quantity Q (sometimes also referred to as the metal tonnage): the quantity of metal relative to the tonnage proportion T for a given cut-off (according to the grade unit);

- the Mean grade M: the mean grade above the given cut-off.

In Isatis, the QTM variables for local estimations are calculated and stored in macro-variables (1 index for each cut-off) with a fixed terminology:

- base name_Q[xxxxx] for the metal Quantity variable
- base name_T[xxxxx] for the Tonnage variable
- base name_M[xxxxx] for the Mean grade above cut-off variable

All three variables are linked by the relation Q = T x M.
In order to be able to compare the different methods with the reality, we first need to calculate the real QTM variables on the 20m x 20m panel support; the cut-off is defined at 600 ppm and each method is locally compared to reality through this particular cut-off. The global grade tonnage curves of all methods will be displayed and commented later in the final conclusion (5.6).


5.4.1 Calculation of the true QTM variables based on the panels


- In Grid 5*5, create a constant 600 ppm variable named Cut-off 600 ppm: this is done through the File / Calculator window:

(snap. 5.4-1)

- Tools / Copy Statistics / Grid -> Grid: in the input area we select the true block grades True V from the Grid 5*5 file and Cut-off 600 ppm as the Minimum Bound Name, i.e. only the cells for which the grade is above 600 will be considered. In the output area we store the true tonnage above 600 under Number Name and the true grade above 600 under Mean Name in the Grid 20*20 file. If no SMU block inside a given panel has a grade greater than 600, then the true tonnage of this panel will be 0 and its true grade will be undefined:


(snap. 5.4-2)

In order to get the true total tonnage T relevant for the future comparisons (i.e. the ore proportion above the cut-off 600), we have to normalize the number of selected blocks in each panel by the total number of blocks in one panel (16):

(snap. 5.4-3)


The metal quantity Q is calculated as Q = T x M: when the true grade above 600 is defined, the metal quantity is equal to M x T; otherwise it is null. A specific ifelse syntax is needed to reflect this:

(snap. 5.4-4)

If this specific ifelse syntax were not used, the metal quantity in the waste would be undefined instead of being null.
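The whole true-QTM construction can be summarized per panel as follows (a minimal sketch; block_grades stands for the 16 true SMU grades of one panel):

import numpy as np

def panel_qtm(block_grades, cutoff=600.0):
    """True QTM of one panel: T = proportion of SMU blocks above cut-off,
    M = mean grade of those blocks (undefined when T = 0),
    Q = T * M (set to 0 in the waste, as with the ifelse above)."""
    g = np.asarray(block_grades)
    above = g >= cutoff
    T = above.mean()
    M = g[above].mean() if above.any() else np.nan
    Q = T * M if above.any() else 0.0
    return Q, T, M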
Now, we have the true tonnage, the true mean and the true metal quantity above 600 ppm to base
our comparisons in the Grid 20*20 file.

Note - Beware that the true grade above 600 is not additive as it refers to different tonnages.
Therefore, it is necessary to use the true tonnage above 600 as weights for computing the global mean of the grade over the whole deposit. Another way to compute the global mean of the grade above 600 is to divide the global metal quantity by the global tonnage after averaging over the whole deposit.

5.4.2 Indicator kriging


Indicator kriging is a distribution-free method. It is based on the kriging of indicators defined on a series of cut-off grades. The different kriged indicators are assumed to provide the possible distribution of block grades (after a block support correction) within each panel, given the neighboring samples. Indicator kriging can be applied in two ways:

- Multiple indicator (co-)kriging: performs the kriging of the indicator variables with their own variograms, independently or not, for the different cut-offs.

- Median indicator kriging: supposes that all the indicator variables have the same variogram, namely the variogram of the indicator based on the median value of the grade.

Multiple indicator kriging is preferable because of the de-structuring of the spatial correlation with increasing cut-offs (the assumption of a unique variogram for all cut-offs does not hold for the whole grade spectrum), but problems of consistency must be corrected afterwards. Besides, it has the disadvantage of being quite tedious, because it requires a specific variographic analysis for each cut-off. Incidentally, this is the reason why median indicator kriging has been proposed as an alternative. Another possibility is to calculate the variograms under the intrinsic correlation hypothesis, which simplifies the variogram fitting by assuming the proportionality of all variograms and cross-variograms.
In this case study we will use the median indicator kriging of the 20m x 20m panels; using Statistics / Quick Statistics... with the declustering weights, the median of the declustered histogram is found to be 223.9.


5.4.2.1 Calculation of the median indicator variogram


We first have to generate a Macro Indicator variable Indicator V[xxxxx] in the Sample set data file and in the output grid, by using the Statistics / Processing / Indicator Pre Processing panel, with 20 cut-offs from 50 by steps of 50. The coding is illustrated by the sketch after the snapshot below.

(snap. 5.4-1)
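A minimal sketch of the indicator coding (one 0/1 variable per cut-off; sample_v is a stand-in for the V grades):

import numpy as np

def indicator_coding(v, cutoffs):
    """Indicator macro-variable: 1 where the grade is above the cut-off."""
    v = np.asarray(v)
    return {zc: (v >= zc).astype(float) for zc in cutoffs}

cutoffs = np.arange(50.0, 1001.0, 50.0)   # 20 cut-offs from 50 by steps of 50
# indicators = indicator_coding(sample_v, cutoffs)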

We then calculate the experimental variogram of this macro indicator variable Indicator V[xxxxx] with the EDA (make sure that the Weight variable is activated). When selecting the Indicator V[xxxxx] macro variable from the EDA, you will be asked to specify the index corresponding to the median indicator: we have chosen index 5, corresponding to the cut-off 250, which is close enough to 223.9. If the same calculation parameters as for the Raw V variogram are used, the anisotropy is no longer visible; hence, the experimental variogram will be omnidirectional and calculated with 33 lags of 5 m. It is stored in a parameter file Model Indicator, and used through Statistics / Variogram Fitting... to fit a variogram model with the parameters detailed below the graphic:


[experimental variogram plot: Indicator V{250.000000}]

Sample set/Data - Variable #1 : Indicator V{250.000000}
Experimental Variogram : in 1 direction(s)
D1 : Angular tolerance = 90.00, Lag = 5.00m, Count = 33 lags, Tolerance = 50.00%
Model : 2 basic structure(s)
Global rotation = (Az=-70.00, Ay= 0.00, Ax= 0.00)
S1 - Nugget effect, Sill = 0.035
S2 - Exponential - Scale = 45.00m, Sill = 0.21

(fig. 5.4-1)

It should be noted that the total sill is close to 0.25, which is the maximum authorized value for an indicator variogram. The model is fitted using the Manual Fitting tab, and the variogram is saved in the parameter file under the name Model Indicator.


(snap. 5.4-2)


(snap. 5.4-3)


5.4.2.2 Kriging of the indicators


We now perform the kriging of the indicators, keeping the same variogram whatever the cut-off, by using Interpolate / Estimation / Bundled Indicator Kriging:

(snap. 5.4-1)

- We ask for a Block estimate: we are estimating the proportion of points above the cut-offs within the panel.
- As Indicator Definition we define the same cut-offs as previously. In the Cut-off Definition window, by clicking on Calculate proportions we get the experimental probabilities of the grade being above the different cut-offs. These values correspond to the means of the indicators and are used if we perform a simple kriging. In this case, because strict stationarity is not likely, we prefer to run an ordinary kriging, which is the default option.
- Output panels: Grid 20*20 / Indicator V[xxxxx]
- Model: Model Indicator
- The same moving neighborhood octants will be used.

5.4.2.3 Calculation of the final grade tonnage curves


At the moment we only have 20m x 20m panel estimates of probabilities for a restricted set of specified cut-offs. We now need to perform two actions:

- rebuild the cumulative distribution function (cdf) of tonnage, metal and grade above cut-off for each panel;
- apply a volume correction (support effect) to take into account the fact that the recoverable resources will be based on 5m x 5m blocks.

These two actions are done through Statistics / Processing / Indicator Post-processing... with the
Indicator V[xxxxx] variable from the panels as input:

(snap. 5.4-1)

- Basename for Q.T.M variables: IK. As the cut-offs used for kriging the indicators and the cut-offs used here for representing the final grade tonnage relationships may differ (an interpolation is needed), three different macro-variables will be created:
  - IK_T{cut-off} for the ore total Tonnage T above cut-off,
  - IK_Q{cut-off} for the metal Quantity Q above cut-off,
  - IK_M{cut-off} for the Mean grade M above cut-off.

- Cut-off Definition... for the QTM variables: 50 cut-offs from 0 by steps of 25.

- Volume correction: a preliminary calculation of the dispersion variance of the blocks within the deposit is required. A simple way to achieve this consists in using the real block variance calculated by Statistics / Support Correction..., choosing the block size as 5 m x 5 m (cf. 5.3.2). The Volume Variance Reduction Factor of the affine correction is obtained by dividing the Real Block Variance (53842) by the Punctual Variance (63167). But the real block variance is calculated from the variogram sill (66500), which exceeds the punctual variance by 3333; the real block variance therefore needs to be corrected by this amount:
  Corrected Real Block Variance = 53842 - 3333 = 50509
  Volume Variance Reduction Factor = 50509 / 63167 = 0.802
  Therefore, enter 0.802 for the Volume Variance Reduction Factor.

- Two volume corrections may be applied: affine or indirect lognormal. As the original distribution is clearly not lognormal, we prefer to apply the affine correction, which only requires the variance ratio between the 5m x 5m blocks and the points (a sketch is given after this list).

- Parameters for Local Histogram Interpolation: we keep the default parameters for interpolating the different parts of the histogram (linear interpolation), including for the upper tail, which is generally the most problematic. A few tests made with other parameters (hyperbolic model with exponent varying from 1 to 3) showed a great impact on the resources. We now need to define the maximum and minimum block values of the local block histograms: the Minimum Value Allowed is 0; the Maximum Value Allowed may simply be approximated by applying the affine correction by hand to the maximum value of the weighted point histogram, using the Volume Variance Reduction Factor (0.802) calculated above: the obtained value is 1391.
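A minimal sketch of this affine correction (the declustered mean 279.7 comes from the statistics of 5.2.1):

import numpy as np

# Variance reduction factor, reproducing the arithmetic above:
point_var, real_block_var, sill = 63167.0, 53842.0, 66500.0
f = (real_block_var - (sill - point_var)) / point_var   # 50509/63167 ~ 0.802

def affine_correction(z, mean=279.7):
    """Affine support correction: shrink the point values around the mean,
    z_v = m + sqrt(f) * (z - m)."""
    return mean + np.sqrt(f) * (np.asarray(z) - mean)

# e.g. the Maximum Value Allowed is the affine correction of the maximum
# value of the weighted point histogram.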

5.4.2.4 Analysis of the results


The Grade-Tonnage curves of the IK estimates will be displayed in 5.6 Conclusions, as for the other methods. Here, we focus on the cut-off V = 600 ppm only, and compare the results with the true values for this specific cut-off.
Below, the calculated tonnage IK_T{600} is displayed next to the true tonnage:


[maps: true tonnage above 600 | IK_T{600.000000}]

(fig. 5.4-1)

Tonnage T calculated by IK (SMU proportion) compared to the true tonnage.


The color scale is a regular 16-class grey palette between 0 and 1: panels for which
there is strictly less than 1 block (i.e 0 <= proportion < 0.0625) are white.
Below, the calculated mean grade is displayed next to the true panel grades:

[maps: true grade above 600 | IK_M{600.000000}]

(fig. 5.4-2)

Mean grade calculated by IK compared to the true grades.


The color scale is a regular 16-class grey palette between 600 and 1000 and
undefined values are black: panels for which the tonnage is strictly 0 are black.
Hereafter we show the scatter diagrams of the real panel values and IK estimates for the 600 ppm
cut-off:

[scatter diagrams: IK_T{600.000000} vs. true tonnage above 600 (rho = 0.906);
 IK_M{600.000000} vs. true grade above 600 (rho = 0.683)]

(fig. 5.4-3)

Scatter diagram of the IK estimates vs. the true panel values above 600 ppm
(the black line is the first bisector)
At this stage of the case study we can consider that, globally, indicator kriging gives satisfactory results. At the local scale, noticeable differences exist, with a tendency to overestimate the grade, especially in the upper tail of the histogram.


5.4.2.5 IK using the intrinsic model


As for Median Indicator Kriging, the method requires initializing a macro-variable containing the range of cut-offs to be estimated, and finally using a post-processing to maintain the consistency of the indicators.
The difference lies in the calculation of the indicator variograms. To avoid using one arbitrary variogram for the estimation, but also to avoid fitting a multivariate model of indicators, we make the assumption that all indicator variables are in intrinsic correlation. This means that all variograms and cross-variograms are proportional.
To apply this intrinsic correlation, the first step is to calculate the experimental variograms through the EDA; the chosen parameters are an omnidirectional variogram with 10 lags of 10 m.

(snap. 5.4-1)


Isatis then offers the possibility of using this intrinsic assumption in the Variogram Fitting window, through the Constraints of the Automatic Fitting function.

(snap. 5.4-2)


Create a macro variable on the Grid 20*20 that will contain the 11 resulting indicators, using Tools / Create Special Variable as follows:

(snap. 5.4-3)


We now perform the kriging of the indicators in the classical (Co)-Kriging window, using the intrinsic model. The resulting indicator macro variable can then be processed using the Indicator Post-Processing, as for the Bundled Indicator Kriging.

(snap. 5.4-4)


5.4.2.6 Disjunctive kriging


An argument against Indicator Kriging is that it ignores the relationships existing between the different cut-offs. This argument would no longer hold if an indicator co-kriging were performed instead of a univariate kriging; in practice, however, it is difficult to establish a model of coregionalization acceptable for a large number of cut-offs. Disjunctive Kriging solves this problem by transforming the cokriging problem into N krigings performed independently. One model offering this possibility is the gaussian anamorphosis model using the Hermite polynomials, where the change of support is simply explained by a coefficient (the r coefficient of change of support).
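The role of this coefficient can be sketched with the Hermite expansion of the anamorphosis (a minimal illustration under one common convention; the phi coefficients are the fitted Hermite coefficients, not reproduced here):

import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def block_anamorphosis(phi, r):
    """Discrete gaussian model sketch: if the point anamorphosis is
    Z = sum_n phi_n H_n(Y), with H_n the orthonormal Hermite polynomials,
    then the block anamorphosis is Zv = sum_n phi_n r^n H_n(Yv)."""
    coeffs = np.array([c * r ** n / np.sqrt(factorial(n))
                       for n, c in enumerate(phi)])
    return lambda y: hermeval(y, coeffs)   # He_n(y)/sqrt(n!) = H_n(y)

def block_variance(phi, r):
    """Var(Zv) = sum_{n>=1} phi_n^2 r^(2n); r is chosen so that this
    matches the real block variance of the support correction."""
    return sum(c * c * r ** (2 * n) for n, c in enumerate(phi) if n >= 1)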
In order to achieve the Disjunctive Kriging we have to provide:

- the gaussian data values Gaussian V;

- the anamorphosis function on the block support, Block 5m * 5m;

- the variogram model of the block gaussian variable. To determine this model we first need to calculate an experimental block gaussian variogram using the Raw V variogram model and the block anamorphosis. For mathematical reasons, the sill of Raw V should not exceed the punctual variance of the anamorphosis, which is unfortunately the case here. Therefore, we first need to compute another block anamorphosis including a sill normalization (cf. 5.3.2 Support effect correction) using Statistics / Support Correction... and asking for Normalize Variogram Sill. Store the anamorphosis in a new parameter file Block 5m * 5m (normalized) to avoid overwriting the existing block anamorphosis Block 5m * 5m.

Open Statistics / Modeling / Block Gaussian Variogram... to calculate the experimental block gaussian variogram:

(snap. 5.4-1)
- Variogram model: Raw V
- Block anamorphosis: Block 5m * 5m (normalized)
- Number of directions: 2. It is convenient to make these directions coincide with the main directions of anisotropy of the raw variogram (N160E and N70E) by setting a rotation of 70 degrees around the positive z axis
- 20 lags of 5 m for each direction
- New experimental variogram: Block Gaussian V

We fit this variogram in Statistics / Variogram Fitting...; as expected, the nugget effect has disappeared. Two anisotropic structures (cubic + spherical, details below the graphic) combine to a total sill of 1, and we store the resulting model in a parameter file Block Gaussian V:

Variogram : V (Block Gaussian)

[experimental and modeled block gaussian variograms along N70 and N160]

Model : 2 basic structure(s)
Global rotation = (Az=-70.00, Ay= 0.00, Ax= 0.00)
S1 - Cubic - Range = 42.00m, Sill = 0.4, Directional Scales = (42.00m, 60.00m)
S2 - Spherical - Range = 40.00m, Sill = 0.6, Directional Scales = (100.00m, 40.00m)

(fig. 5.4-1)

We are now ready to perform the Disjunctive Kriging with Interpolate / Estimation / Disjunctive
Kriging...:


(snap. 5.4-2)


- Input: Gaussian V
- Block anamorphosis...: Block 5m * 5m (normalized)
- Number of Kriged Polynomials: we use the same number as during the modeling of the anamorphosis function, i.e. 30.
- Cut-off definition...: we choose 21 cut-offs from 0 by steps of 50. It is compulsory to include the zero cut-off, which should give the in situ grade estimate.
- We ask to perform Tonnage Corrections with a minimum tonnage of 0.5%.
- The Auxiliary Polynomial File will contain the experimental values of the different Hermite polynomials for the data points, which are also placed at the center of the closest 5m x 5m block. They are calculated before the RUN, as soon as the output grid is defined (it may take a little time).
- Output Grid File...: in the panel grid Grid 20*20, store the error DK variable.
- In the panel grid file we also store the Q.T.M. values for each cut-off with the Basename DK.
- We use the octants neighborhood as before.
- For the Block Gaussian Variogram Model we choose the previously fitted model Block Gaussian V.

Graphic displays of the panels for comparison with reality (proportion of SMU above 600 ppm):
[maps: true tonnage above 600 | DK_T{600.000000}]

(fig. 5.4-2)

Tonnage T calculated by DK (SMU proportion) compared to the true tonnage.


The color scale is a regular 16-class grey palette between 0 and 1: panels for which
there is strictly less than 1 block (i.e 0 <= proportion < 0.0625) are white.


Graphic displays of the panels for comparison with reality (grade above 600 ppm):
[maps: true grade above 600 | DK_M{600.000000}]

(fig. 5.4-3)

Mean grade calculated by DK compared to the true grades.


The color scale is a regular 16-class grey palette between 600 and 1000 and
undefined values are black: panels for which the tonnage is strictly 0 are black.

[scatter diagrams: DK_T{600.000000} vs. true tonnage above 600 (rho = 0.925);
 DK_M{600.000000} vs. true grade above 600 (rho = 0.753)]

(fig. 5.4-4)

Scatter diagram of the DK estimates vs. the true panel values above 600 ppm
(the black line is the first bisector)
The results on tonnage look very comparable to those obtained with indicator kriging, but the grades show a better correlation between the Disjunctive Kriging estimates and the true values.


5.4.3 Uniform conditioning


This method aims to calculate directly the distribution of the 5m x 5m blocks within each panel, by using the panel estimate and the anamorphosis functions to take the change of support into account.
To achieve the Uniform Conditioning we have to provide:

- the kriged 20m x 20m panel grades;

- two anamorphosis functions, one for the panel support and one for the block support (Block 5m * 5m). The calculation of the panel anamorphosis requires the value of the kriged panel dispersion variance. The two anamorphosis models must be consistent, that is, created from the same samples.

5.4.3.1 Kriging of panels


The panels have already been kriged during the in situ resource estimation (cf. 5.2.4), but we also need to calculate the local dispersion variance of these estimates. In Interpolate / Estimation / (Co)-Kriging...:

(snap. 5.4-1)


- Set to Block mode and activate the Full set of Output Variables option
- Input: Sample set / Data / V
- Output: in Grids / Grid 20*20. Because we have asked for the Full set of Output Variables, we are able to store the local estimated dispersion variance Variance of Z* for V under a new variable Local dispersion Var Z*
- Variogram model: Raw V
- Neighborhood: octants

Below are displayed the panel estimates:


[map: Kriging V on the 20m x 20m panels]

(fig. 5.4-1)

Map of the kriged panels 20m x 20m


The Uniform Conditioning recreates a local gaussian histogram of the SMUs in each panel, the mean of this histogram being the gaussian equivalent of the kriged estimate. The panel dispersion variance (Local dispersion Var Z*, estimated at the kriging step above) is also needed to reconstruct these histograms.

5.4.3.2 Uniform Conditioning


We then run Interpolate / Estimation / Uniform Conditioning as shown below. The Block 5m * 5m anamorphosis is chosen as the block anamorphosis and a Tonnage correction of 0.5% is performed. The Basename for Output Variables is UC_no info, as this block anamorphosis carries no information effect. The same set of cut-offs as for the disjunctive kriging (21 cut-offs ranging from 0 to 1000) is defined:


(snap. 5.4-1)

Graphic displays of the panels for comparison with reality:


[maps: true tonnage above 600 | UC_no info_T{600.000000}]

(fig. 5.4-1)

Tonnage T calculated by UC (SMU proportion) compared to the true tonnage.


The color scale is a regular 16-class grey palette between 0 and 1: panels for which
there is strictly less than 1 block (i.e 0 <= proportion < 0.0625) are white.


[maps: true grade above 600 | UC_no info_M{600.000000}]

(fig. 5.4-2)

Mean grade calculated by UC compared to the true grades.


The color scale is a regular 16-class grey palette between 600 and 1000 and
undefined values are black: panels for which the tonnage is strictly 0 are black.

[scatter diagrams: UC_no info_T{600.000000} vs. true tonnage above 600 (rho = 0.928);
 UC_no info_M{600.000000} vs. true grade above 600 (rho = 0.785)]

(fig. 5.4-3)

Scatter diagram of the UC estimates vs. the true panel values above 600 ppm
(the black line is the first bisector)
The quality of the local estimation is satisfactory.
Moreover, UC allows taking the information effect into account, simply by using the Block 5m*5m with information effect anamorphosis instead of Block 5m * 5m.


Note - Some grade inconsistencies may appear when taking the information effect into account, because the cut-offs have to be applied on a histogram of kriged values. These grade inconsistencies affect low grades for small tonnages; they may therefore be corrected by suppressing the lowest tonnage values (as done here with a minimum tonnage fixed at 0.5%).

(snap. 5.4-2)

The statistical results are presented in 5.6.

In conclusion, Disjunctive Kriging and Uniform Conditioning both give good results; in practice, on real datasets, Uniform Conditioning is often preferred because it is less sensitive to the stationarity hypothesis.


5.4.3.3 Q.T.M. Validation


The UC outputs are Q.T.M. variables representing the distribution of SMUs within the panels. These variables are theoretically derived from one block distribution; however, during UC, the ore and metal tonnages (T and Q) are defined independently, which may cause some inconsistencies. The Q.T.M. Validation window (Statistics / Processing / Q.T.M. Validation) allows checking and, if necessary, correcting the Q.T.M. variables.
(snap. 5.4-1)


After Run the following message appears:


(snap. 5.4-2)

The different correction types and the associated corrections are detailed in the help menu.

5.4.3.4 Localized Uniform Conditioning


A criticism addressed to non linear techniques, including Uniform Conditioning, is that the outputs are probabilities of SMU grades within bigger units. We do not obtain a representation of the spatial distribution of the SMU grades, as we would, for instance, with simulations.
One way to get such a representation is to apply the Localized Uniform Conditioning methodology (see Abzalov, M.Z. (2006) Localized Uniform Conditioning (LUC): A New Approach to Direct Modelling of Small Blocks, Mathematical Geology 38(4), pp 393-411).
The principle is the following: the tonnage and metal at the different cut-offs contained in each panel are distributed over the SMUs according to a preference based on the ranking of the SMU kriged grades. The metal for the highest cut-off is first assigned to the SMU whose kriged grade is the highest, and so on. A minimal sketch is given below.
As there are enough data to get a realistic estimate of the kriged SMUs, we can apply this method to the results of the Uniform Conditioning (without information effect, for instance).
As the kriging of the SMUs has already been achieved (see 5.2.4), you just have to run Statistics / Processing / Localized Uniform Conditioning.
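A sketch of the localization step (assuming the panel's UC curves T and Q are indexed by increasing cut-off and that T reaches 1 at the zero cut-off):

import numpy as np

def localize_panel(smu_kriged, T, Q, n_smu=16):
    """LUC sketch (after Abzalov, 2006): cut the panel grade-tonnage curve
    into n_smu equal-tonnage slices and assign the grade of each slice to
    the panel's SMUs, ranked by their kriged grade (richest slice to the
    SMU with the highest kriged grade)."""
    T, Q = np.asarray(T), np.asarray(Q)
    edges = np.linspace(1.0, 0.0, n_smu + 1)             # tonnage proportions
    # metal Q as a function of tonnage T (both decrease with the cut-off)
    q_edges = np.interp(edges, T[::-1], Q[::-1])
    slice_grades = (q_edges[:-1] - q_edges[1:]) * n_smu   # metal / (1/n_smu)
    slice_grades = slice_grades[::-1]                     # richest slice first
    out = np.empty(n_smu)
    out[np.argsort(smu_kriged)[::-1]] = slice_grades
    return out                                            # localized SMU grades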


(snap. 5.4-1)

Note: the same method can be used in the multivariate case; the metal of the other elements is then assigned according to the ranking of the kriged SMUs of the main variable.
After Run we get the following Error message:

(snap. 5.4-2)

It is due to the fact that the tonnage at the highest cut-off must represent less than the tonnage of one SMU.
The solution consists in re-running the Uniform Conditioning with 41 cut-offs from 0 with a step of 50. Running the Localized Uniform Conditioning then no longer produces any error message.
The statistics and the displays show that, after Localized Uniform Conditioning, the variability of the true SMU grades is much better reproduced.
With Tools / Grade Tonnage Curves we can also check that the QTM values obtained from the Uniform Conditioning (with the Tonnage Variables option) are the same as those obtained from the grades estimated by the Localized Uniform Conditioning method.


Variable        Count    Minimum    Maximum    Mean      Std. Dev.    Variance
True V 5x5      3120        -       1378.12    277.98     228.66      52287.30
Kriging V 5x5   3120      -50.92    1361.13    275.36     209.79      44013.25
LUC V 5x5       3120        -       1435.18    275.79     229.66      52745.83

[maps: Kriging V | LUC V on the 5m x 5m blocks]

(fig. 5.4-1)


5.4.4 Service variables


The Service Variables method is based on the transformation of the grades into two variables representing the ore tonnage and metal quantity above a given cut-off for a block centered on each data point. This transformation requires a change of support model. Each variable is then kriged by ordinary kriging. We apply this technique for the cut-off 600 ppm (Tools / Service Variables...):

(snap. 5.4-3)
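Isatis computes this transformation analytically with Hermite polynomials; for intuition only, a Monte Carlo sketch under the discrete gaussian model (the block gaussian value Yv is assumed to have correlation r with the sample gaussian value Y, and phi_block stands for the block anamorphosis):

import numpy as np

def service_variables(y_data, z_cutoff, r, phi_block, n_mc=4000, seed=423141):
    """For each sample, ore T and metal Q of a block centered on it,
    estimated by simulating Yv | Y = y ~ N(r*y, 1 - r^2)."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n_mc)
    T, Q = np.empty(len(y_data)), np.empty(len(y_data))
    for i, y in enumerate(y_data):
        zv = phi_block(r * y + np.sqrt(1.0 - r * r) * u)  # draws of Zv | Y=y
        ore = zv >= z_cutoff
        T[i] = ore.mean()                                  # ore tonnage
        Q[i] = np.where(ore, zv, 0.0).mean()               # metal quantity
    return T, Q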


The scatter diagram between the Ore and the Metal above 600 ppm shows a very strong (non linear)
correlation.

[scatter diagram: Metal Quantity Q vs. Ore Tonnage T above 600 ppm, rho = 0.987]

(fig. 5.4-2)


Consequently, we will perform the kriging of both variables independently. The experimental variograms are omnidirectional and calculated with 16 lags of 10 m (with the declustering weights active). They have been fitted as shown below:

[experimental variogram plots: Metal Quantity Q above 600 and Ore Tonnage T above 600 ppm]

Model for Metal Quantity Q above 600 : 2 basic structure(s)
Global rotation = (Az=-70.00, Ay= 0.00, Ax= 0.00)
S1 - Nugget effect, Sill = 8100
S2 - Spherical - Range = 53.00m, Sill = 2.876e+004

Model for Ore Tonnage T above 600 ppm : 2 basic structure(s)
Global rotation = (Az=-70.00, Ay= 0.00, Ax= 0.00)
S1 - Nugget effect, Sill = 0.01
S2 - Spherical - Range = 53.00m, Sill = 0.0462

(fig. 5.4-3)

The declustering weights have a great impact on the short scale structure; the variograms at short scale are not satisfactory.
The kriging of Ore and Metal is then performed with the usual octants neighborhood; the variables Service Var Ore Tonnage T > 600 and Service var Metal Q > 600 are created.


(snap. 5.4-4)


(snap. 5.4-5)


Because a linear kriging is performed, some panels have negative or unacceptably low Tonnage T values: for all panels having a tonnage T < 0.02 (i.e. 2%), T and Q are set to 0 (this is done using File / Calculator...).

(snap. 5.4-6)


Using the Calculator once more, we derive from the kriged variables Service var Metal Q > 600 and Service Var Ore Tonnage T > 600 the variable Service var grade M > 600, using the relation M = Q / T.

(snap. 5.4-7)


[scatter diagrams: Service Var Ore Tonnage T>600 vs. true tonnage above 600 (rho = 0.924);
 Service Var grade M>600 vs. true grade above 600 (rho = 0.644)]

(fig. 5.4-4)

The scatter diagrams show some grade overestimation and a slight under-estimation of the high tonnage values.


5.5 Simulations
After having reviewed the non linear estimation techniques, we can also perform simulations to answer the same questions on the recoverable resources. Because we are in a 2D framework, we can perform 100 simulations within a reasonable computation time.
Two techniques, both working under the multigaussian hypothesis, will be described: Turning Bands (TB) and Sequential Gaussian (SGS). This multigaussian hypothesis requires the input variable to be gaussian: the Gaussian V variable calculated previously (5.3.1 Punctual histogram modeling) will be used.
Simulations will be performed on the 5 m x 5 m SMU blocks (Grid 5*5): this will allow comparing the results with the non linear estimation techniques. Block simulations therefore require a gaussian back transformation and a change of support from point to block; this implies specific remarks, discussed hereafter.

5.5.1 Before starting... important comments on block simulations

5.5.1.1 Block discretization optimization
In the standard version of Isatis, only points may be simulated, and the change of support from point to block is done by averaging simulated points. In practice, each block is discretized into n sub-cells and each sub-cell is approximated as a point: the number n has to be large enough to validate this approximation. But if n increases, the CPU time increases as well, since each block requires n simulation processes (basically, the CPU time is proportional to n). Thus, the choice of the block discretization results from a compromise between performance and precision.
The block discretization is defined through the neighborhood definition panels, and Isatis gives some guidance toward the best compromise by calculating the mean block covariance Cvv. The block covariance depends only on the variogram model and the block geometry. Theoretically, if n were infinite, the mean block covariance would tend to its true value.

Note - In Isatis the default block discretization is 5 x 5 and may be optimized, as explained later (§ 3.5.4.1).
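To make the compromise concrete, the sketch below approximates such a mean block covariance by brute force: it discretizes a 5 m x 5 m block into n x n points, randomly offsets the discretization grid (mimicking the randomization mentioned below), and averages a spherical covariance over all point pairs. It is only an illustrative sketch with an arbitrary model, not the Isatis algorithm:

    import numpy as np

    def spherical_cov(h, sill=1.0, rng=50.0):
        # Covariance C(h) = sill - gamma(h) for a spherical variogram.
        hn = np.minimum(h / rng, 1.0)
        return sill * (1.0 - 1.5 * hn + 0.5 * hn ** 3)

    def mean_block_covariance(block=5.0, n=5, seed=0):
        # n x n discretization points with a random sub-cell offset.
        rs = np.random.RandomState(seed)
        step = block / n
        off = rs.uniform(0.0, step, size=2)
        pts = np.array([(off[0] + i * step, off[1] + j * step)
                        for i in range(n) for j in range(n)])
        # Average the covariance over all pairs of discretization points.
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        return spherical_cov(d).mean()

    # Larger n stabilizes Cvv, but the simulation cost grows with n:
    for n in (3, 5, 10):
        print(n, round(mean_block_covariance(n=n), 6))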

5.5.1.2 Gaussian back transformation

When simulating in Block mode, Isatis automatically performs the following workflow:
- from the input gaussian data, simulate gaussian point grades according to the block discretization parameters discussed above;
- gaussian back transformation (gaussian -> raw) of the point grades using a point anamorphosis;
- block grade = averaging of the raw point grades.
The averaging is done automatically at the end of the simulation run. Hence the anamorphosis function required to perform the gaussian back transformation is the Point anamorphosis based on the sample (point) support, which has already been calculated during § 3.3.1 Punctual Histogram Modeling. The block anamorphosis Block 5m*5m (which includes a change of support correction) should not be used here.
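These three steps can be condensed into a few lines. The following is only a schematic sketch of the workflow described above; the anamorphosis is a hypothetical placeholder function, not the one fitted in this case study:

    import numpy as np

    def simulated_block_grade(gaussian_points, point_anamorphosis):
        # 1) gaussian_points: simulated gaussian values at the block
        #    discretization points (one block, one realization);
        # 2) back-transform them to raw point grades with the POINT
        #    anamorphosis (gaussian -> raw);
        raw_points = point_anamorphosis(gaussian_points)
        # 3) the block grade is the average of the raw point grades.
        return raw_points.mean()

    # Hypothetical point anamorphosis, for illustration only:
    phi = lambda y: np.exp(5.0 + 0.8 * y)

    rs = np.random.RandomState(423141)
    print(simulated_block_grade(rs.normal(size=9), phi))  # 3 x 3 discretization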

5.5.2 Simulations workflow summary

The aim is to simulate 5 m x 5 m block grades and to calculate the ore Tonnage T, the metal Quantity Q and the mean grade M above 600 ppm for 20 m x 20 m panels. The workflow consists in:
- Variographic analysis of the gaussian sample grades (the name of the variogram model will be Point Gaussian V).
- Simulation of the SMU grades (5 m x 5 m blocks) with the Turning Bands (TB) or Sequential Gaussian (SGS) method with the following parameters:
  - Block mode
  - input data: Sample Set / Data / Gaussian V
  - output macro-variables to be created: Grids / Grid 5*5 / Simu V TB or Simu V SGS
  - Number of simulations: 100
  - Starting index: 1
  - Gaussian back transformation enabled using the Point anamorphosis
  - Model...: Point Gaussian V defined at the previous step
  - Seed for Random Number Generator: leave the default number 423141. This seed is supposed to be a large prime number; the same seed allows reproducibility of realizations.
  The neighborhood and other parameters specific to each method will be detailed in the relevant paragraph.
- Calculation of the QTM variables for both techniques (described for TB): ore Tonnage T (i.e. SMU proportion within each panel), metal Quantity Q, and mean grade M of blocks above 600 ppm among each 20 m x 20 m panel (M = Q / T). The panel mean grades cannot be averaged directly over the 100 simulations: the mean grade is not additive because it refers to different tonnages (the tonnage may differ between simulations). Therefore it has to be weighted by the ore proportion T. One way to do this is to use an accumulation variable for each panel, as shown in the sketch after this list:
  - calculate the ore proportion T and the metal quantity Q (the metal quantity is the accumulation variable: Q = T x M) for each simulation
  - calculate the average (T) and average (Q) over the 100 simulations
  - calculate the average mean grade: average (M) = average (Q) / average (T)
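A minimal sketch of this accumulation logic for one panel, assuming the simulated SMU grades are stored as a (number of simulations) x (number of SMU blocks in the panel) array (names hypothetical):

    import numpy as np

    def panel_qtm(smu_grades, cutoff=600.0):
        # smu_grades: shape (n_sim, n_blocks), simulated SMU grades of one panel
        above = smu_grades >= cutoff
        T = above.mean(axis=1)                             # ore proportion per simulation
        Q = np.where(above, smu_grades, 0.0).mean(axis=1)  # accumulation Q = T x M
        T_mean = T.mean()                                  # averages over simulations
        Q_mean = Q.mean()
        # The mean grade must be derived from the averaged accumulation,
        # not averaged directly, because M is not additive:
        M_mean = Q_mean / T_mean if T_mean > 0 else float("nan")
        return T_mean, Q_mean, M_mean

    rs = np.random.RandomState(0)
    print(panel_qtm(rs.lognormal(6.0, 0.5, size=(100, 16))))  # 16 SMUs per panel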

5.5.3 Variographic analysis of gaussian sample grades

The experimental variograms of gaussian variables often show more visible structures, which makes their interpretation easier; the analysis of anisotropy using the variogram map gives similar information about the main directions of continuity. In Statistics / Exploratory Data Analysis..., the experimental variogram Point Gaussian V is calculated with the same rotation parameters as Raw V. A variogram model using 3 structures has been fitted and saved under the name Point Gaussian V:

Experimental variograms of Gaussian V in the N160 and N250 directions with the fitted model:

Isatis
Model : 3 basic structure(s)
Global rotation = (Az=-70.00, Ay= 0.00, Ax= 0.00)
S1 - Nugget effect, Sill = 0.13
S2 - Spherical - Range = 20.00m, Sill = 0.3, Directional Scales = (20.00m, 40.00m)
S3 - Spherical - Range = 40.00m, Sill = 0.6, Directional Scales = (86.00m, 40.00m)

(fig. 5.5-1)

5.5.4 Simulation with the Turning Bands method


5.5.4.1 Simulations
We run Interpolate / Conditional Simulations / Turning Bands... with the parameters defined in the workflow summary (§ 3.5.2):


(snap. 5.5-1)

Gaussian back transformation... enabled: select the Point anamorphosis.

Neighborhood...: create a new neighborhood parameter file named octants for TB. Press Edit...
and from the Load... button reload the parameters from the octants neighborhood. We are now
going to optimize the block discretization: press the ... button next to Block Discretization: the
Discretization Parameters window pops up where the number of discretization points along the
x,y,z directions may be defined. These numbers are set to their default value (5 x 5 x 1). Press
Calculate Cvv, the following appears in the message window (values differ at each run due to the
randomization process):


Regular discretization: 5 x 5 x 1
In order to account for the randomization, 11 trials are performed
(the first value will be kept for the Kriging step)
Variable: Gaussian V
Cvv = 0.811792
Cvv = 0.809978
Cvv = 0.812136
Cvv = 0.811752
Cvv = 0.810842
Cvv = 0.812900
Cvv = 0.808768
Cvv = 0.811977
Cvv = 0.810781
Cvv = 0.810921
Cvv = 0.812400

11 mean block covariances have been calculated with 11 different randomizations. The minimum value is 0.808768 and the maximum is 0.812900; the maximum relative variability is
approximately 0.5% which is more than acceptable: the 5 x 5 discretization is a very good
approximation of the punctual support and may be optimized.

Note - For reproducibility purposes, the first value of Cvv will be kept for the simulations calculation.
For optimization, we decrease the number of discretization points to 3 x 3:


(snap. 5.5-2)

Press Calculate Cvv:

Regular discretization: 3 x 3 x 1
In order to account for the randomization, 11 trials are performed
(the first value will be kept for the Kriging step)
Variable: Gaussian V
Cvv = 0.809870
Cvv = 0.814197
Cvv = 0.808329
Cvv = 0.812451
Cvv = 0.819093
Cvv = 0.809922
Cvv = 0.814171
Cvv = 0.811332
Cvv = 0.805993
Cvv = 0.806053
Cvv = 0.807459

The minimum value is 0.805993 and the maximum value is 0.819093: the maximum relative variability is approximately 1.6%. As expected, it has increased but remains acceptable: therefore, the 3 x 3 discretization is a good compromise and will be kept for the simulations (i.e. each simulated block value will be the average of 3 x 3 = 9 simulated points). Press Close then OK for the neighborhood definition window.


Number of Turning Bands: 300. The more turning bands, the more precise the realizations, but the CPU time increases. Too few turning bands would create visible 1D-line artefacts.

Press RUN: calculations may take a few minutes.


We represent in the next figure five simulations, compared to the true map:

True V map and five TB realizations (Simu V TB[00002], [00020], [00030], [00040] and [00050]), grades in ppm (fig. 5.5-1)


5.5.4.2 Calculation of the QTM variables


From Statistics / Processing / Grade Reblocking, compute the metal quantity, mean grade and tonnage on the 20*20 grid from the 5*5 grid simulations.

(snap. 5.5-1)

5.5.4.3 Analysis of the results


We can then display the ore Tonnage T and mean grade M above 600 ppm calculated by Turning Bands and compare them to the true values:


Maps of the true tonnage above 600 and of TB_mean ore tonnage above 600 (fig. 5.5-1)

Tonnage T calculated by TB (SMU proportion) compared to the true tonnage. The color scale is a regular 16-class grey palette between 0 and 1: panels for which there is strictly less than 1 block (i.e. 0 <= proportion < 0.0625) are white.
Maps of the true grade above 600 and of TB_mean (mean grade above 600), in ppm (fig. 5.5-1)

Mean grade calculated by TB compared to the true grades. The color scale is a regular 16-class grey palette between 600 and 1000; panels for which the tonnage is strictly 0 (undefined grade) are black.

Scatter diagrams of ore tonnage and mean grade above 600 ppm between the mean of the 100 TB simulations and the true values of panels (rho = 0.936 and rho = 0.869) (fig. 5.5-2)

5.5.5 Simulation with the Sequential Gaussian method

Two different algorithms are available for SGS in Isatis, using two different kinds of neighborhood:
- Interpolate / Conditional Simulation / Sequential Gaussian / Standard Neighborhood...: a standard elliptical neighborhood is used, taking the point data and the previously simulated grid nodes into account.
- Interpolate / Conditional Simulation / Sequential Gaussian / Sequential Neighborhood...: the sequential neighborhood performs first a migration of the point data to the nearest grid node; the neighborhood is then defined by a moving window made of x blocks around the target block.
We will use the standard neighborhood option because it is more accurate from a theoretical point of view, and moreover Block simulation is possible (automatic averaging of point values).

5.5.5.4 Simulations
Open Interpolate / Conditional Simulations / Sequential Gaussian / Standard Neighborhood... and enter the same parameters described in the workflow summary (§ 3.5.2):


(snap. 5.5-1)


The Gaussian Back Transformation is enabled with the Point anamorphosis function.

Special Model Options...: by default, a Simple Kriging (SK) is performed using a constant mean equal to zero.

Neighborhood...: create a new neighborhood named octants for SGS with the following parameters (you may load the parameters from the octants for TB parameter file):

(snap. 5.5-2)

- The search ellipsoid is maintained at 70 m.
- Minimum number of samples: 5.
- Number of angular sectors: 8.
- Optimum Number of Samples per Sector: 4, which adds up to a maximum of 32 samples. Theoretically, the SGS technique would require a unique neighborhood and use all the previously simulated grid nodes to reproduce the variogram exactly; in practice this is impossible, so it is recommended to increase the Optimum Number with respect to the Optimum Number of Already Simulated Nodes (to be defined below in the main SGS window) and the capacity of the computer.


- In the Advanced tab, set the Minimum distance between two samples to 2 m; as two different sets of data are used to condition the simulations (i.e. the actual data points combined with the previously simulated grid nodes), this minimum distance criterion avoids fictitious duplicates between original data points and simulated grid nodes. It also spreads the conditioning data, which improves the reproduction of the variogram.
- The same Block Discretization of 3 x 3 will be used.
- Optimum Number of Already Simulated Nodes: 16. This means that the software will load all the real samples and the 16 closest already simulated nodes in memory for the search neighborhood algorithm. The maximum number of samples being 32, there will be 16 real samples used for each node simulation, as for the Turning Bands method. The TEST window allows you to evaluate the impact of these different parameters on the neighborhood.

Leave the other parameters at their default values and press RUN.

Note - Isatis offers the possibility to perform the different simulations with independent paths (optional toggle in the main SGS window). By default, this toggle is set OFF, meaning that the same random path is used for all simulations: the independence of the realizations is then no longer guaranteed, but the algorithm is much quicker. If the toggle is set ON, the CPU time will approximately be multiplied by the number of simulations. Here, it has been checked that both options show negligible differences in the final results.
The resulting outcomes are very similar to the TB method.


5.5.5.5 Calculation of the QTM variables


From Statistics / Processing / Grade Reblocking, compute the metal quantity, mean grade and tonnage on the 20*20 grid from the 5*5 grid simulations.

(snap. 5.5-1)


5.5.5.6 Analysis of the results


Maps of the true tonnage above 600 and of SGS_mean ore tonnage above 600 (fig. 5.5-1)

Tonnage T calculated by SGS (SMU proportion) compared to the true tonnage. The color scale is a regular 16-class grey palette between 0 and 1: panels for which there is strictly less than 1 block (i.e. 0 <= proportion < 0.0625) are white.
Maps of the true grade above 600 and of SGS_mean (mean grade above 600), in ppm (fig. 5.5-2)

Mean grade calculated by SGS compared to the true grades. The color scale is a regular 16-class grey palette between 600 and 1000; panels for which the tonnage is strictly 0 (undefined grade) are black.

Scatter diagrams of ore tonnage and mean grade above 600 ppm between the mean of the 100 SGS simulations and the true values of panels (rho = 0.938 and rho = 0.870) (fig. 5.5-3)
We observe that the SGS simulations give very similar results to TB and are also well correlated with reality.


5.6 Conclusions
The objective of the case study was to illustrate several non linear methods (global and local) to estimate recoverable resources, and to compare them to linear kriging. All methods take the same support effect for 5 m x 5 m blocks into account, but only a few take the information effect into account. Therefore, we will first focus on results without information effect.

5.6.1 Global estimation


5.6.1.1 Without information effect

Grade Tonnage curves


The following methods will be compared to the true values (True): Ordinary Kriging (OK), block anamorphosis (block 5x5), Indicator Kriging (IK), Disjunctive Kriging (DK) and Uniform Conditioning (UC). The grade-tonnage curves for all these methods will be presented; Service Variables (SV) and simulations (TB and SGS) have been calculated only for one particular cut-off (600 ppm), so we cannot display G-T curves for these methods.
Open Tools / Grade Tonnage Curves... and activate 6 curves. For the IK, DK and UC outcomes, we need to ask for Tonnage Variables. For instance, for the Indicator Kriging (IK): press Edit..., choose the Tonnage Variables option, then IK_Q[xxxxx] for the Metal Quantity and IK_T[xxxxx] for the Total Tonnage:


(snap. 5.6-1)

Repeat the same for DK and UC, and change the curve parameters and labels for optimal visibility. By clicking on the graphic windows below, ask for the following Grade Tonnage curves: Mean grade vs. cut-off, Total tonnage vs. cut-off, Metal tonnage vs. cut-off and Metal tonnage vs. Total tonnage. The graphics are presented below:


Mean Grade vs. Cutoff for True, OK, Block 5*5, IK, DK and UC (fig. 5.6-1)

Total Tonnage vs. Cutoff (same legend) (fig. 5.6-2)


Metal Tonnage vs. Cutoff (same legend) (fig. 5.6-3)

Metal Tonnage vs. Total Tonnage (same legend) (fig. 5.6-4)


The True curve is black and represented with a bold line type. We clearly see that the OK tonnage curves are shifted compared to the others: linear kriging induces a significant smoothing effect despite a refined sampling and a good coverage of the domain.
All non linear methods provide similar and suitable results; a zoom centered on the 600 ppm cut-off allows a more precise comparison:

Grade-Tonnage curves with a zoom on the 600 ppm cutoff of interest (same legend) (fig. 5.6-5)
Small differences are noticeable: IK overestimates the grades whereas DK overestimates the tonnages.


As we had to choose a particular cut-off for comparing these methods with SV and simulations, we have chosen 600 ppm; the global results for this cut-off are presented hereafter.

Global statistics on cut-off V = 600 ppm


The following tables give the statistics on ore tonnage, metal quantity and grade above 600 for the different methods on the 195 panels. The true values are compared to the following methods (using Statistics / Quick Statistics...): Turning Bands (TB), Sequential Gaussian Simulations (SGS), Indicator Kriging (IK), Disjunctive Kriging (DK), Uniform Conditioning (UC), Service Variables (SV), global estimation with support effect (Block 5x5 without information effect, results already shown in § 3.3.4 Analysis of the results for the global estimation p.94) and Ordinary Kriging (OK):

Statistics on Ore Tonnage above 600 (proportion)

Statistics on Metal Quantity above 600


As the Mean grade M defined on the panels refers to different tonnages, it is not additive, so the calculation of the mean and of the standard deviation needs to be weighted by the tonnages. Therefore, use Statistics / Quick Statistics 8 times, once on the grade variable of each method, with the relevant tonnage as the Weight variable:
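In code form, such tonnage-weighted statistics are one call away; a minimal sketch with hypothetical array names:

    import numpy as np

    def weighted_stats(grades, tonnages):
        # Tonnage-weighted mean and standard deviation of the panel grades.
        mean = np.average(grades, weights=tonnages)
        var = np.average((grades - mean) ** 2, weights=tonnages)
        return mean, np.sqrt(var)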

Statistics on Mean Grade above 600


These statistics are attached to the specific cut-off 600: no global conclusion on the performance of the methods can be drawn here. Besides, the dataset cannot be compared to a realistic exploration campaign.

5.6.1.2 With information effect


Comparisons will be made for the anamorphosis Block 5*5 with information effect and the Uniform Conditioning (UC_with info[xxxxx]). Results for the block anamorphosis have already been discussed (cf. § 3.3.4 Analysis of the results for the global estimation p.94). Only global statistics for the cut-off 600 ppm have been computed:
                      |   Q   |   T   |   M
True block 5x5        | 77.95 | 10.38 | 750.67
True block 5x5 (info) | 67.92 |  9.01 | 754.11
Block 5*5 with info   | 72.03 |  9.69 | 743.05
UC_with info          | 69.20 |  9.17 | 754.60

For the cut-off V = 600 ppm, UC has correctly quantified the information effect.

5.6.2 Local estimation


For each local estimation method, a scatter diagram of the panel estimates against the true values (tonnages and grades), with the correlation coefficients, has already been presented (cf. the relevant paragraphs). Here, the error for each panel has been calculated and reported:
error = estimate - true value
Therefore, positive error values reveal overestimation.


The table below summarizes the main results for the error on tonnages:

Local statistics of error on tonnages estimates and correlation


with true tonnage values (for cut-off = 600 ppm)
The true global tonnage is 0.104; the bias for all non linear methods remains acceptable.
The table below summarizes the main results for the error on mean grades above 600:

Local statistics of error on mean grades above 600 and correlation


with true values (for cut-off = 600 ppm)
IK and SV methods show a global overestimation of the grades and a lower correlation with reality.


The table below summarizes the main results for metal quantity:

Local statistics of error on metal quantity and correlation


with true values (for cut-off = 600 ppm)
All non linear methods give consistent results for the metal quantity.

5.6.3 Final conclusions


The conclusions based on these numerical results only concern this particular dataset and should
not be interpreted as a straightforward classification of the methods.
Despite a refined sampling, linear interpolation methods (linear kriging, inverse distance...) induce a smoothing effect that has a significant impact on recoverable resources. Non linear geostatistics provide practical solutions, and this case study shows that all methods are globally consistent, though some small differences appear at the local scale.
Global estimation techniques, based on anamorphosis functions, showed satisfying results and are quick to run.
Simulation techniques (TB and SGS) showed good results, but they are time-consuming and rather heavy to run. Indicator Kriging showed some small differences at the local scale (as did the Service Variables), and requires some specific pre/post-processing. Disjunctive Kriging and Uniform Conditioning both make use of anamorphosis functions, but Uniform Conditioning has the advantage of relying on ordinary kriging estimates instead of the global mean used by Disjunctive Kriging, which requires a stronger stationarity hypothesis. Besides, Uniform Conditioning is directly consistent with the global estimation techniques and allows the information effect to be taken into account.

6. 2D Estimation

In this tutorial, different 2-dimension (2D) estimation methods are reviewed. It is based on a metallic ore deposit dataset kindly provided by an important mining producer. It has been altered to make it unrecognizable, and the grade will be denoted Fe. This paper does not claim nor intend to be a reference case study; it simply illustrates possible workflows and methods specific to 2D estimation.

6.7 Workflow Overview

This case study goes through different methods of 2D estimation, making use of several Isatis tools:
- Tools / Accumulation: compulsory step to derive additive variables from the grade: Accumulation and Thickness.
- File / Data File Manager / Modify 2D-3D: transform the 3D data to 2D. It amounts to a flattening process.
- File / Calculator: generic calculator tool. Normalize the Thickness variable.
- Statistics / Exploratory Data Analysis: QA/QC tool. Display the experimental distributions.
- File / Create Grid File: builds the 2D grid on which the estimation will be performed.
- File / Selection / From Polygons: selection definition menu. Define the area of interest (AOI) based on a polygon file.
- Statistics / Variogram Fitting: variogram modelling tool. Compute the Thickness and Accumulation variograms, independently.
- Interpolate / Estimation / (Co)-kriging: kriging tool. Krige Accumulation and Thickness separately.
- Statistics / Variogram Fitting: define the variogram model, this time for co-kriging.
- Interpolate / Estimation / (Co)-kriging: co-krige the Accumulation and Thickness variables.
- Transformation / Multi-linear Regression: compute the linear regression of a target variable on a set of explanatory variables. Define the residual model.
- Tools / Calculator: reconstruct the original grade variable.
- Statistics / Statistics / Principal Component Analysis: check the consistency of the different methods using the built-in PCA tool on the results.


6.8 From 3D to 2D Data


The data set consists of 1532 samples extracted from 276 drillholes; 1526 of the samples have been analysed for Fe.

6.8.1 Data Import


First, create a new study: File / Data File Manager / Study / Create. Then import the data with File / Import / Boreholes with Deviation Survey Data; the data is stored in two csv files, Collar and Assay. Make sure all the variables are of the right type; in particular, Length should be a length variable (in meters). Also note that no survey is necessary, because the boreholes are not deviated. The files to be imported are located in the installation directory of Isatis (usually C:/programs/Geovariances/Isatis/Datasets/2D_Estimation).

(snap. 6.8-1)


6.8.2 Data Grooming


The Fe grade estimation will be performed in 2D, which means that the grade has to be regularized over the whole thickness. Unfortunately, the grade is not an additive variable, and we have to resort to other variables (known as service variables): Thickness and Accumulation, whose ratio gives the grade. Use the menu Tools / Accumulation.
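In formulas, with sample lengths $t_i$ and grades $z_i$ along a drillhole, the two additive variables and the regularized grade read (a standard definition, recalled here for reference):

$$T = \sum_i t_i, \qquad A = \sum_i t_i\,z_i, \qquad G = \frac{A}{T}.$$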

(snap. 6.8-2)

Calculation of the Accumulation from the drillholes to the header

Note - Isatis computes two thickness variables, Analysed length and Total length, the former being the length of the samples analysed for Fe, and the latter the length of the entire drillhole. At this stage a decision has to be made: the thickness is unique and not subject to the presence of grade analysis, so it is compulsory to refer the accumulation to this same total thickness and not only to the analysed length. Failing to do so would result in underestimating the grade by dividing the accumulation by a thickness larger than the analysed one.


We then normalize the accumulation by the ratio of Total length to Analysed length. This operation is equivalent to setting the value of the non-analysed samples to the average value of the drillhole. It is performed using the calculator: File / Calculator.

(snap. 6.8-3)

Correction of the Accumulation


Because Isatis forbids the use of a 3D data file to estimate blocks in 2D, the point file containing the
accumulation must be a 2D-file. This sets all the data in the same plane, which is equivalent to a
flattening operation. If necessary:
Go to File / Data File Manager, then right-click on the Header file and select Modify 2D 3D. This
is important because the space dimension directly impacts several subsequent computations,
including the duplicates masking.


(snap. 6.8-4)

Modification of the 3D Header data to 2D


At this point, the duplicates should be masked off, as they may cause kriging matrix inversion errors and degrade the global statistics. Use Tools / Look for Duplicates.

(snap. 6.8-5)

Duplicates are discarded to prevent inversion error during Kriging


The histograms of the resulting variables can be displayed using the EDA: Statistics / Exploratory Data Analysis. The accumulation and thickness histograms can be computed directly. If one is also interested in the mean Fe grade along each line, it can be reconstructed as the ratio between accumulation and length (use File / Calculator).

(snap. 6.8-6)

From top to bottom and left to right. Fe Accumulation histogram; Total length histogram; Fe grade
weighted by Total Thickness histogram; Accumulation vs. Total Thickness cross-plot, note that the
correlation coefficient is close to 1.

6.8.3 Creation of the Grid File


The 2D grid file that will hold the results is built according to the regular sampling pattern, with one
data point at the center of each block. Consequently, the block size is fixed and set at 62.5m x
62.5m.


To create the grid use the menu File / Create Grid File.

(snap. 6.8-7)


(snap. 6.8-8)

The 2D grid file is built so that each data point is at the centre of one block.
To restrict the study to the area of interest (AOI), a polygonal selection based on the outline of the
orebody is applied on the grid. The coordinates of the polygon vertices are stored in an ASCII file
polygon_AOI.asc.
To use it, first create a new polygon file: File / Polygons Editor / Application menu / New Polygon File. Then import the file: Application menu / ASCII Import. Finally: Application menu / SAVE and RUN.
To select the blocks on the grid file use: File / Selection / From Polygons.


(snap. 6.8-9)


6.9 2D Estimations
Four methods will be run and compared.

6.9.1 Kriging
Let us start with the independent kriging of thickness and accumulation.

6.9.1.1 Variographic Analysis


The variograms are calculated in two directions, NS and EW (Statistics / Exploratory Data Analysis), and modeled with a geometrical anisotropy as shown below (Statistics / Variogram Fitting).

(snap. 6.9-1)

Experimental and model variograms of the thickness (Total length). Parameters are given in the following table:

                 | Range U | Range V | Sill - Thickness
1. Nugget Effect |         |         | 1
2. Spherical     | 650 m   | 400 m   | 1.1
3. Spherical     | 700 m   | 1150 m  | 2.7

(snap. 6.9-2)

Experimental and model variograms of the accumulation (Accu Fe corrected). Parameters are given in the following table. This tutorial will not deal with the non-stationary structure along the EW direction, which is ignored during the fitting.

                 | Range U | Range V | Sill - Accu Fe
1. Nugget Effect |         |         | 1870
2. Spherical     | 650 m   | 350 m   | 3176
3. Spherical     | 720 m   | 1230 m  | 11000


6.9.1.2 Kriging
Thickness and accumulation are kriged in turn (Interpolate / Estimation / (Co-)Kriging).

(snap. 6.9-1)


(snap. 6.9-2)

6.9.2 Co-Kriging
The most classical method to estimate the accumulation and the thickness is co-kriging. It takes into account the statistical link between accumulation and thickness through the cross-variogram.

6.9.2.3 Variographic Analysis


Once again, the experimental variogram is calculated in two directions following the sampling
pattern (NS and EW).


(snap. 6.9-1)

The resemblance between the simple and cross-variograms allows us to sensibly assume a linear model of co-regionalization, consisting of a nugget effect and two spherical structures, detailed in the following table. The directions of anisotropy of the model are the directions of calculation of the experimental variograms, i.e. N90 and N0.


(snap. 6.9-2)

Experimental and modelled variograms in NS and EW directions for thickness and accumulation. The models are described in the following table:

                 | Range U | Range V | Sill - Accu Fe | Sill - Thickness | Sill - Thickness/Accu Fe
1. Nugget Effect |         |         | 2150 (15.2 %)  | 0.95 (20.9 %)    | 41 (16.7 %)
2. Spherical     | 480 m   | 600 m   | 7000 (49.5 %)  | 1.8 (39.6 %)     | 110 (44.9 %)
3. Spherical     | 1150 m  | 600 m   | 5000 (35.3 %)  | 1.8 (39.6 %)     | 94 (38.4 %)

Note that the simpler intrinsic correlation model cannot be used, because the relative sills of the different variogram structures are not equal, and the variogram sills are thus not proportional.


6.9.2.4 Co-kriging
Thickness and accumulation can now be co-kriged: Interpolate / Estimation / (Co-)Kriging.

(snap. 6.9-1)

6.9.3 Residual Method


6.9.3.5 Method
It is based on the existence of a linear relationship between the Accumulation and Thickness variables.


(snap. 6.9-1)

The strong correlation between Accumulation and Thickness allows the use of the residual model on this dataset.
The relationship can be expressed as follows:
(eq. 6.9-1)

where Thickness and Residual are uncorrelated variables. In this model, the co-kriging process amounts to the separate kriging of the Thickness and the Residual.
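The equation itself is not reproduced in this text; a common way of writing such a residual model (an assumption consistent with the linear regression used below, with $a$ and $b$ the regression coefficients) is:

$$\mathrm{Accumulation} = a\,\mathrm{Thickness} + b + \mathrm{Residual},$$

where the Residual is, by construction, uncorrelated with the Thickness.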

6.9.3.6 Application to the Case Study


The linear regression of the Accumulation on the Thickness can be computed (Statistics / Data
Transformation / Multi-linear Regression). The part of Accumulation which cannot be explained by
the linear regression is the residual.


(snap. 6.9-1)

A linear regression is applied to the accumulation using the thickness as the explanatory variable. The residual is the part that is not explained by the linear regression; it is orthogonal (not correlated) to the thickness.
In our case, the results are:

And it can be checked that the residual is indeed not correlated with the Thickness. The variograms can be modelled independently. The Thickness variogram has already been computed, and the parameters for the residual variogram are detailed in the following table.


(snap. 6.9-2)

Experimental and model variograms of the residual and the thickness. The parameters are described in the following table:

                 | Range U | Range V | Sill - Residual
1. Nugget Effect |         |         | 508
2. Spherical     | 2500 m  | 666 m   | 213

Krige the residual (Interpolate / Estimation / (Co)-Kriging) using this variogram. Iron grades can be
recovered from the Thickness variable and the residual:


(eq. 6.9-1)

(eq. 6.9-2)

(snap. 6.9-3)

Once the additive variables (thickness and accumulation, or thickness and residual) are estimated, the Fe grade can be calculated by applying the inverse transformation.
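The two equations referenced above are not reproduced in this text; under the residual model written earlier they presumably take the form (stated here as an assumption):

$$\mathrm{Fe}^{*} = \frac{A^{*}}{T^{*}} \qquad \text{or} \qquad \mathrm{Fe}^{*} = \frac{a\,T^{*} + b + R^{*}}{T^{*}},$$

where $A^{*}$, $T^{*}$ and $R^{*}$ denote the kriged accumulation, thickness and residual.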


6.9.4 Comparing Results


6.9.4.7 Estimates


(snap. 6.9-1)

From left to right and top to bottom: comparison of the Fe estimation using kriging, co-kriging and the residual method. All results are weighted by the thickness. Because the Fe grades are defined on different supports (varying thickness), the histograms have to be weighted by the thickness variable (Statistics / Exploratory Data Analysis / Compute Using the Weight Variable option). Global statistics show that each estimation method yields a mean value consistent with the data. The kriging method gives the highest standard deviation, and the residual method the lowest.


(snap. 6.9-2)

Scatter diagrams of 3 estimations


Kriging and co-kriging give locally very similar results, while the residual model wanders a bit
more, especially for low values.

6.9.4.8 Kriging Error


To compare different estimation methods, one can compute the kriging error. However, in this case,
because the Fe grade was not kriged directly, this quantity is unknown, and has to be deduced from
the kriging error of Accumulation and Thickness variables. Let us denote the Fe grade by G, the
accumulation by A and the thickness by T. In the multivariate model of intrinsic correlation, the following relation holds:

(eq. 6.9-1)
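The relation is not reproduced in this text. A common first-order (delta method) propagation for $G = A/T$, given here as a hedged sketch rather than as the exact Isatis formula, is:

$$\sigma_G^2 \approx \frac{\sigma_A^2 - 2\,G^{*}\,\mathrm{Cov}\!\left(A-A^{*},\,T-T^{*}\right) + G^{*2}\,\sigma_T^2}{T^{*2}},$$

where the intrinsic correlation hypothesis makes the error cross-covariance proportional to the direct ones; this is what allows the grade error to be deduced from the Accumulation and Thickness kriging errors.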

As usual, when computing the Fe grade standard deviation histogram, don't forget to weight it with the thickness variable (Statistics / Exploratory Data Analysis / Compute Using the Weight Variable option).


(snap. 6.9-1)


(snap. 6.9-2)

(snap. 6.9-3)


(snap. 6.9-4)

Comparison of the Fe kriging error using kriging, co-kriging (both weighted by the thickness) and the residual method.
Kriging errors are fairly close to one another. As expected, the error is lower for the co-kriging than for the kriging. It also appears that the error of the residual method is higher than the co-kriging one. The calculation of its value is, however, more complicated and, for simplicity's sake, will not be detailed here.


6.10 3D Estimation
Grades in thin deposits, which can stem from weathering processes for example (Ni, Mn), can be efficiently estimated with 2D kriging: the flattening of surfaces is implicit, and there is no need to model the footwall and hanging wall surfaces. On the other hand, this method requires that the grade be decomposed into two additive variables. For comparison purposes, the 3D estimation process is briefly presented hereafter.

6.10.1 3D Grid Creation


First, create a 3D grid with File / Create Grid File:
- Make sure you select the 3D Grid File type.
- It is usually a good idea to use an auxiliary file to calibrate the grid parameters. In this case, because it is a 3D grid, use the Lines file.
- The geometry of the grid has already been set in 2D, so check the Match Geometry to an Existing Grid box and select the 2D grid.
- Use the Edit Mesh option and adjust the Z parameters. This can be interactively checked if the Graphic Check box at the bottom of the page is checked. To visualize the Z axis, adjust the graphic view in the Application / Graphic Parameters / Projection box. In this case Z0 = 510, DZ = 1 and NZ = 80 seem fitting.


(snap. 6.10-1)

3D grid file creation

6.10.2 Modeling the Hanging Wall Surface


In a 3D estimation process, both the footwall and the hanging wall of a deposit are unknown. The strategy is to model one surface and deduce the other one by adding or subtracting the thickness, estimated by kriging.
In this case, we choose to estimate the hanging wall, because it is more continuous and its variance is smaller. For each sample line, the elevation is calculated as the maximum of ZB+ Beginning of Sample: go to Tools / Copy Statistics / Line -> Header Point, select the ZB+ variable in input and create a Z Hanging wall variable as the output in the Maximum Name field.


Make sure this new variable is of type length; this will be compulsory later on to create the 3D selection. If necessary, use File / Data File Manager / Variable / Format (or right-click on the variable), and change it to Length, unit meters.
As usual, use Statistics / EDA to compute the experimental variogram, and Statistics / Variogram Fitting to fit the model. Then use Interpolate / Estimation / (Co)-Kriging to perform the estimation.

(snap. 6.10-2)

Experimental and model variogram of the elevation of the hanging wall. The following table describes the parameters of the model:

                 | Range U | Range V | Sill - Z hanging wall
1. Nugget Effect |         |         | 0
2. Spherical     | 300 m   | 400 m   | 13
3. Spherical     | 1360 m  | 1200 m  | 55

6.10.3 3D Selection Creation


With a 3D grid and an estimation of the hanging wall elevation, the 3D selection can now be created. First, compute an estimate of the footwall using File / Calculator.


(snap. 6.10-3)

Compute a footwall estimate from the hanging wall and the thickness
Then use the tool in File / Selection / From Surfaces to compute the 3D selection. Use the same 2D polygon file as before. The next snapshot shows the result in the 3D viewer.


(snap. 6.10-4)

Definition of the 3D selection


(snap. 6.10-5)

Selection of 3D blocks to be estimated. Vertical scaling is 10

6.10.4 Fe Grade Kriging


Now that the estimation is carried out in 3D on regular blocks, it is perfectly acceptable to krige the
grade directly. Follow the standard protocol: Statistics / EDA to compute the experimental variogram (take care to choose a short range for the vertical variogram), and Statistics / Variogram Fitting to fit the model. Then Interpolate / Estimation / (Co)-Kriging to perform the estimation.


(snap. 6.10-6)

Experimental and model variograms of Fe along two horizontal directions (red and green, range 62.5 m) and along the drillholes (purple, range 1 m).

                 | Range U | Range V | Range W | Sill - Fe
1. Nugget Effect |         |         |         | 24
2. Spherical     | 0.50 m  | 377 m   | 236 m   | 6.2
3. Spherical     | 473 m   | 174 m   | 7.0 m   | 17

For the estimation, use a moving neighbourhood with the following parameters:
- A search ellipsoid with maximum distances (600 m, 400 m, 30 m) in the (U, V, W) directions.
- Anisotropic distances.
- 5 samples minimum.
- 4 angular sectors.
- An optimum of 15 samples per sector.
- Selection of all the samples in the target block.
The Fe grade estimated in 3D can be averaged on the 2D grid in order to compare it with the 2D
estimation: Tools / Copy Statistics / Grid -> Grid.


(snap. 6.10-7)

Estimated values of the 3D blocks of a same column are averaged into a single 2D block. This operation is similar to the accumulation calculation.


6.11 2D-3D Comparison

(snap. 6.11-1)

Graphic of Factor 2 vs. Factor 1 and Factor 3 vs. Factor 2 (F1 representing 93% of the variance, F2 5% and F3 1.7%), obtained from the PCA of the four different estimates. All estimates show a good correlation, although Fe Residual and Fe est 3D seem globally less consistent.
A PCA analysis is performed to compare the different estimates using the menu Statistics / Statistics / Principal Component Analysis. Fe Kriging and Fe Co-Kriging are very close, while Fe Residual and Fe est 3D seem globally less consistent.


Oil & Gas



8. Property Mapping & Risk Analysis

This case study is based on a real data set kindly provided by AMOCO for teaching purposes, which has been used in the AAPG publication Stochastic Modeling and Geostatistics, edited by Jeffrey M. Yarus and Richard L. Chambers.

It demonstrates several capabilities offered by Isatis to cope with two variables whose coverages of the field are different: typically a few wells on one hand and a complete 3D seismic on the other hand.
The study covers the use of estimation and simulations, from Kriging to Cokriging, External Drift and Collocated Cokriging.
Important Note:
Before starting this study, it is strongly advised to read the Beginner's Guide book, especially the following paragraphs: "Handling Isatis", "Tutorial: Familiarizing with Isatis" and "Batch Processing & Journal Files".
All the data sets are available in the Isatis installation directory (usually C:\program file\Geovariances\Isatis\DataSets\). This directory also contains a journal file including all the steps of the case study. In case you get stuck during the case study, use the journal file to perform all the actions according to the book.

Last update: Isatis version 2014


8.1 Presentation of the Dataset


First, create a new study using the Study / Create facility of the File / Data File Manager window.

(snap. 8.1-1)

Then, set the Preferences / Study Environment / Units:
- default input-output length unit in foot,
- X, Y and Z graphical axes in foot.

The datasets are located in two separate ASCII files (in the Isatis installation directory, under the Datasets/Petroleum sub-directory):
- The file petroleum_wells.hd contains the data collected at 55 wells. In addition to the coordinates, the file contains the target variable (Porosity) and the selection (Sampling) which concerns the 12 initial appraisal wells.
- The file petroleum_seismic.hd contains a regular grid where one seismic attribute has been measured: the normalized acoustic impedance (Norm AI). The grid is composed of 260 by 130 nodes at 40ft x 80ft.

Both files are loaded using the File / Import / ASCII facility in the same directory (Risk_Analysis), in files respectively called Wells and Seismic.


(snap. 8.1-2)

(snap. 8.1-3)


Using the File / Data File Manager, you can check that both files cover the same area of 10400ft by 10400ft. You can also check the basic statistics of the two variables of interest:

Variable          | Porosity (from Wells) | Norm AI (from Seismic)
Number of samples | 55                    | 33800
Minimum           | 6.1                   | -1
Maximum           | 11.8                  | 0.
Mean              | 8.2                   | -0.551
Std Deviation     | 1.4                   | 0.155

At this stage, no correlation coefficient between the two variables can be derived, as they are not
defined at the same locations.
In this case study, the structural analysis will be performed using the whole set of 55 wells, whereas
any estimation or simulation procedure will be based on only the 12 appraisal wells, in order to
produce stronger differences in the results of various techniques.


8.2 Estimation of the Porosity From Wells Alone


The first part of this case study is dedicated to the mapping of the porosity from the wells alone. In other words, we simply ignore the seismic information. This step is designed to provide a comparison basis, although it would probably be skipped in an industrial study. The spatial correlation of the Porosity variable is studied through the Statistics / Exploratory Data Analysis procedure. The following figures are displayed: a base map where the porosity variable is represented with proportional symbols, a histogram and the omnidirectional variogram calculated for 10 lags of 1000ft. In the Application / Graphic Specific Parameters of the Variogram window, the Number of Pairs option is switched ON.

Base map of the Porosity (proportional symbols), histogram (Nb Samples: 55, Minimum: 6.1, Maximum: 11.8, Mean: 8.2, Std. Dev.: 1.4) and omnidirectional variogram with the number of pairs (fig. 8.2-1)

The area of interest is homogeneously covered by the wells. The Report Global Statistics item from the Menu bar of the variogram graphic window produces the following printout where the variogram details can be checked. The number of pairs is reasonably stable (above 70) up to 9000ft: this is consistent with the regular sampling of the area by the wells.
Variable : Porosity
Mean of variable     = 8.2
Variance of variable = 1.862460

Rank | Number of pairs | Average distance | Value
  1  |        73       |     1301.15      | 1.143562
  2  |        94       |     1911.80      | 1.460053
  3  |       199       |     2906.72      | 1.863894
  4  |       217       |     4054.00      | 2.068571
  5  |       194       |     5092.86      | 1.987912
  6  |       160       |     5882.27      | 1.817500
  7  |       203       |     6895.25      | 1.909532
  8  |       142       |     8014.89      | 2.118310
  9  |        99       |     8937.23      | 2.070556

Coming back to the variogram Application / Calculation Parameters, ask to calculate the variogram cloud. Highlight pairs corresponding to small distances (around 1000ft) and a high variability on the variogram cloud: these pairs are represented by asterisks on the variogram cloud; the corresponding data are highlighted on the base map and joined by a segment. No point in particular can be designated as responsible for these pairs (outlier): as usual, they simply involve the samples corresponding to high porosity values.


Base map of the Porosity with the pairs highlighted from the variogram cloud (fig. 8.2-2)


Variogram cloud of the Porosity (fig. 8.2-3)

To save this experimental variogram in a Parameter File in order to fit a variogram model on it,
click on Application / Save in Parameter File and call it Porosity.


8.3 Fitting a Variogram Model


Within the procedure Statistics / Variogram Fitting, define the Parameter File containing the experimental variogram (Porosity) and the one which will contain the model. The latter may also be
called Porosity; indeed, although these two Parameter Files have the same name, there will be no
confusion as their type is different. Visualize the experimental variogram and the fitted model using
any of the graphic windows; as there is only one variable and one omnidirectional variogram, the
global and the fitting windows are similar. From the Model Initialization frame, select Spherical
and Add Nugget: these are the structures that will be fitted on the experimental variogram.
The model can be fitted using the Automatic Fitting tab by pressing Fit.

(snap. 8.3-1)

Pressing the Print button in this panel produces the following printout where we can check that the
model is the nesting of a short range spherical and a nugget effect.


(snap. 8.3-2)

The corresponding graphic representation is presented in the next figure.

Experimental variogram of the Porosity with the fitted model (fig. 8.3-1)

A final Run (Save) saves this model in the Parameter File Porosity.


8.4 Cross-Validation
The cross-validation technique (Statistics/Modeling/Cross-validation) enables you to evaluate the
consistency between your data and the chosen variogram model. It consists in removing in turn one
data point and re-estimating it (by kriging) from its neighbors using the model previously fitted.
An essential parameter of this phase is the neighborhood, which tells the system which data points,
located close enough to the target, will be used during the estimation. In this case study, because of
the small number of points, a Unique neighborhood is used; this choice means that any information
will systematically be used for the estimation of any target point in the field. Therefore, for the
cross-validation, each data point is estimated from all other data.
This neighborhood also has to be saved in a Parameter File that will be called Porosity.

(snap. 8.4-1)

When a point is considered, the kriging technique provides the estimated value Z* that can be compared to the initial known value Z, and the standard deviation of the estimation σ*, which depends on the model and the location of the neighboring information. The experimental error between the estimated and the true values (Z - Z*) can be scaled by the predicted standard deviation of the estimation (σ*) to produce the standardized error. This quantity, which should be a normal variable, characterizes the ability of the variogram model to re-estimate correctly the data values from their neighboring information only. If the value lies outside a given interval, the point requires some attention: defining for instance the interval as [-2.5 ; 2.5] (that is to say, setting the threshold to 2.5) enables to focus on the 1% extreme values of a normal distribution. Such a point may arbitrarily be called an "outlier".
The procedure provides the statistics (mean and variance) of the raw and standardized estimation errors, based on the 55 data points. The same statistics are also calculated when the outliers have been removed: the remaining data are called the robust data.
Statistics based on 55 test data
             Mean      Variance
Error      -0.00533    1.18778
Std. Error -0.00257    1.02851

Statistics based on 53 robust data
             Mean      Variance
Error       0.10776    0.88043
Std. Error  0.10193    0.76465

A data is robust when its Standardized Error lies between -2.500000 and 2.500000
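A minimal sketch of this standardized-error computation and outlier flagging, assuming arrays of true values, cross-validation estimates and kriging standard deviations (names hypothetical):

    import numpy as np

    def cross_validation_scores(z, z_star, sigma_star, threshold=2.5):
        error = z - z_star                 # raw estimation error
        std_error = error / sigma_star     # standardized error, ~ N(0, 1) if model fits
        robust = np.abs(std_error) <= threshold
        return {
            "mean error": error.mean(),                 # should be close to 0
            "variance of std. error": std_error.var(),  # should be close to 1
            "robust mean error": error[robust].mean(),
            "number of outliers": int((~robust).sum()),
        }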

Note - The key values of this printout are the mean error, which should be close to zero, and the
variance of the standardized error which should be close to 1. It is not recommended to pay too
much attention to the variance of the results obtained on the robust data alone, as the model has
been fitted taking this outlier into account.
The procedure also provides four standard displays which reflect the consistency between the data, the neighborhood and the model: each sample is represented with a + sign whose dimension is proportional to the variable, whereas the outliers are displayed with a distinct symbol. They consist of:
- the base map of the variable,
- the histogram of the standardized errors,
- the scatter plot of the true value versus the estimated value,
- the scatter plot of the standardized error of estimation versus the estimated value.


(fig. 8.4-1)

A last feature of this cross-validation is the possibility of using this variance of standardized error
(score) to rescale the model. As a matter of fact, the kriging estimate and therefore the estimation
error does not depend on the sill of the model, whereas the variance of estimation is directly proportional to this sill. Multiplying the sill by the score ensures that the cross-validation performed with
this new model, all other parameters remaining unchanged, provides a variance of standardized
error of estimation exactly equal to 1.
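In symbols, with $s$ the score, the rescaling just described reads (a brief formalization for reference):

$$\gamma_{\mathrm{new}}(h) = s\,\gamma(h), \qquad s = \mathrm{Var}\!\left(\frac{Z - Z^{*}}{\sigma^{*}}\right),$$

and since the kriging weights are invariant under a multiplication of the model by a constant, the estimates are unchanged while the estimation variances are multiplied by $s$.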
This last possibility must be manipulated with caution, especially if the score is far from 1 as one
can hardly imagine that the only imperfection in the model could be its sill. Instead, it is recommended to check the outliers first and possibly re-run the whole procedure (structural analysis and
cross-validation).
In the following, the Porosity variogram model is considered to be the best possible one.


8.5 Estimation
The task is to estimate by kriging the value of the porosity at the nodes of the imported seismic grid, based on the 12 appraisal wells, using the fitted model and the unique neighborhood.
The kriging operation is performed using the Interpolate / Estimation / (Co-)Kriging procedure. It is compulsory to define:
- the variable of interest (Porosity) in the Input File (Wells). As discussed earlier, the estimation operations will be performed using the 12 appraisal wells only; this is the reason why the Sampling selection is specified,
- the names of the output variables for the estimation and the corresponding standard deviation,
- the Parameter File containing the Model: Porosity,
- the Parameter File containing the Neighborhood: Porosity.

(snap. 8.5-1)


The Test button can be used to visualize the weight attached to each data point for the estimation of one target grid node. It can also be used to check the impact of a change in the Model or Neighborhood parameters on the kriging weights.
The 33800 grid nodes are estimated with values ranging from 6.6 to 11.3. These statistics can usefully be compared with the ones of the original porosity variable, which lies between 6.1 and 11.8. The difference reflects the smoothing effect of kriging.
The kriging results are now visualized using several combinations of the display capabilities. You are going to create a new Display template that consists of an overlay of a grid raster and the porosity data locations. All the Display facilities are explained in detail in the "Displaying & Editing Graphics" chapter of the Isatis Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page is popped up, together with a Contents window. You have to specify in this window the contents of your graphic. To achieve that:
- Firstly, give a name to the template you are creating: Phi. This will allow you to easily display this template again later.
- In the Contents list, double click on the Raster item. A new window appears, in order to let you specify which variable you want to display and with which color scale:
  - In the Data area, in the Petroleum / Seismic file select the variable Kriging (Porosity),
  - Specify the title that will be given to the Raster part of the legend, for instance Phi,
  - In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the Porosity variable. To create a new color scale: click on the Color Scale button, double-click on New Color Scale, enter a name: Porosity, and press OK. Click on the Edit button. In the Color Scale Definition window:
- In the Bounds Definition, choose User Defined Classes.
- Click on the Bounds button, enter 14 as the New Number of Classes, 6 and 13 as the Minimum and Maximum values. Press OK.
- In the Colors area, click on Color Sampling to choose regularly the 25 colors in the 32
colors palette. This will improve the contrast in the resulting display.
- Switch on the Invert Color Order toggle in order to affect the red colors to the large Phi
values.
- Click on the Undefined Values button and select for instance Transparent.
- In the Legend area, switch off the Automatic Spacing between Tick Marks button, enter
10 as the reference tickmark and 1 as the step between the tickmarks. Then, specify that
you do not want your final color scale to exceed 6 cm. Switch off the Automatic Format
toggle, and enter 0 as the number of digits. Switch off the Display Undefined Values toggle.
- Click on OK.


In the Item contents for: Raster window, click on Display current item to display the
result.

Click on OK.

(snap. 8.5-2)
- Back in the Contents list, double-click on the Basemap item to represent the Porosity variable with symbols proportional to the variable value. A new Item contents window appears. In the Data area, select the Wells / Porosity variable as the Proportional Variable and activate the Sampling selection. Leave the other parameters unchanged; by default, black crosses will be displayed with a size proportional to the Porosity value. Click on Display Current Item to check your parameters, then on Display to see all the previously defined components of your graphic. Click on OK to close the Item contents panel.
- In the Item list, you can select any item and decide whether or not you want to display its legend. Use the Up and Down arrows to modify the order of the items in the final display.
- Close the Contents window. Your final graphic window should be similar to the one displayed hereafter.
Kriging (Porosity) map, displayed with the Phi color scale and the proportional well symbols overlaid (fig. 8.5-1)

The label position may be modified using the Management / View Label / Move unconstrained option.
The * and [Not saved] symbols in the name of the graphic page indicate that some recent modifications have not been stored in the Phi graphic template, and that this template has never been saved. Click on Application / Store Page to save them. You can now close your window.


8.6 Estimation with External Drift


Actually, two types of data are available:
- one scarce data set containing a few samples of good quality (this usually corresponds to the well information),
- one data set containing a large amount of samples covering the whole field but with poor accuracy (this usually corresponds to the seismic information).

In this case, one well-known method consists in integrating these two sources of information using the Kriging with External Drift technique. It consists in performing the standard kriging algorithm, based on the variable measured at the wells, while considering that the drift (overall shape) is locally represented by the seismic information. This requires such information (or background) to be known everywhere in the field, or at least to be informed densely enough so that its value at any point (a well location, for instance) can be obtained by a quick local interpolation.
As in any kriging procedure a model is required about the spatial correlation. In the External Drift
case, this model has to be inferred knowing that the seismic information serves as a local drift: this
refers to the Non-stationary Structural Analysis.
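Written out in shorthand (our own notation, not an Isatis formula), the assumption behind Kriging with External Drift can be sketched as:

    Z(x) = a + b * S(x) + R(x)

where S(x) is the seismic background, the coefficients a and b are implicitly determined by the kriging system, and R(x) is a residual whose spatial structure is precisely the object of the non-stationary structural analysis.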
The application Interpolate / Estimation / Bundled External Drift Kriging provides all these steps in a single procedure which assumes that:
- the seismic background is defined on a regular grid. It is interpolated at the well locations from the target nodes using a quick bilinear interpolator,
- the model of the target variable (measured at the wells), taking the seismic information into account as a drift, can be either provided by the user interactively or automatically calculated in the scope of the Intrinsic Random Functions of order k theory, using polynomial isotropic generalized covariances. For more information about the structural analysis in IRF-k, the user should refer to the "Non stationary modeling" technical reference (available from the On-Line documentation). The only choice in the automatic calculation is whether or not to allow a nugget effect as a possible component of the final model. To force the estimation to honor the well information and avoid misties, a quite common practice is to forbid this nugget effect component.

Still using the Sampling selection and the unique neighborhood (Porosity), the procedure first
determines the optimal structure forbidding any nugget effect component and then performs the
estimation.
The results are stored in the output grid file (called Seismic) with the following names:
- ED Kriging (Porosity) for the estimation,
- ED Kriging St. Dev. (Porosity) for its standard deviation.

(snap. 8.6-1)

The printout generated by this procedure details the contents of the optimal model that has been
used for the estimation:
======================================================================
|                      Structure Identification                      |
======================================================================
.../...
Drift Identification
====================
The drift trials are sorted by increasing Mean Rank
The one with the smallest Mean Rank is preferred
Please also pay attention to the Mean Squared Error criterion

T1 : 1 f1
T2 : 1 x y f1

         Mean        Mean Sq.    Mean
Trial    Error       Error       Rank
T2       9.194e-03   5.547e-01   1.417
T1       1.370e-02   6.223e-01   1.583

Results are based on 12 measures

Covariance Identification
=========================
The models are sorted according to the scores (closest to 1. first)
When the Score is not calculated (N/A), the model is not valid
as the coefficient (sill) of one basic structure, at least, is negative

S1 : Order-1 G.C. - Scale = 1462.30ft
S2 : Spline G.C. - Scale = 1462.30ft
S3 : Order-3 G.C. - Scale = 1462.30ft

Score   S1          S2          S3
0.869   1.099e-01   2.141e-02   0.000e+00
1.192   0.000e+00   6.281e-02   0.000e+00
0.771   1.871e-01   0.000e+00   0.000e+00
1.869   0.000e+00   0.000e+00   3.409e-02

Successfully processed = 12
CPU Time     = 0:00:00 (0 sec.)
Elapsed Time = 0:00:00 (0 sec.)

The 33800 grid nodes are estimated with values ranging from 5.1 to 13.3, which should be compared to the range of the data, where the porosity varies from 6.1 to 11.8.
To display the ED Kriging result, you can easily use the previously saved display called Phi. Click
on Display / Phi in the main Isatis window. You just need to modify the variable defined in the Grid
Raster contents: replace the previous Kriging (Porosity) by ED Kriging (Porosity) and click on
Display.
ED Kriging (Porosity) map, displayed with the Phi color scale (fig. 8.6-1)

The impact of the seismic information used as the external drift is clear, although both estimations
have been carried out using the same amount of data (hard) information, namely the 12 appraisal
wells.
The External Drift method can be seen as a linear regression of the variable on the drift information. In other words, the result is a combination of the drift (scaled and shifted) and the residuals. The usual drawbacks of this method are that:
- the final map resembles the drift map as soon as the two variables are highly correlated (at the well locations), and tends to ignore the drift map in the opposite case,
- the drift information is used as a deterministic function, not as a random function, and the estimation error does not take into account the variability of this drift.


8.7 Cokriging With Isotopic Neighborhood


One drawback of the previous method is the lack of control on the quality of the correlation
between the variable measured at the wells and the seismic information. This paragraph will focus
on this aspect.
Cokriging is the traditional technique for integrating several variables in the estimation process: the
estimation of one variable at a target point consists of a linear combination of all the variables available at the neighboring points. This method is obviously more demanding than the kriging algorithm as it requires a consistent multivariate model.
When all variables are not known at the same locations and particularly when an auxiliary variable
(here seismic) is densely sampled, one problem is the choice of the neighborhood. Here seismic will
be used only where the porosity is known (isotopic neighborhood).
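In shorthand (our notation), the cokriging estimate of the porosity at a target node x0 combines both variables at the neighboring wells xi:

    P*(x0) = sum_i lambda_i * P(xi) + sum_i mu_i * S(xi)

where S denotes the impedance migrated to the wells, and the weights lambda_i and mu_i are provided by the cokriging system built on the multivariate model fitted hereafter.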

8.7.1 Structural Analysis


To derive a multivariate model, some of the information on both variables has to be defined at the same points. This is not directly possible as the porosity is defined at 55 wells while the normalized acoustic impedance is measured on the output grid: the two variables are in two different files. Therefore, the preliminary task consists in "getting" the values of the seismic information at the well locations. Due to the high density of the seismic grid, all quick local interpolation techniques will give similar results. The simplest one is offered by the Tools / Migrate / Grid to Point procedure, which assigns to each well the value of the closest grid node. This is how the new variable in the Wells data file, called Impedance at wells, is defined.

(snap. 8.7-1)


The Statistics / Exploratory Data Analysis application is used to check the correlation between the
two variables: on the basis of the 55 wells, the correlation coefficient is 0.826 and is visualized in
the following scatter diagram where the linear regression line of the impedance versus the porosity
has been plotted. The two simple variograms and the cross-variogram are also calculated for 10 lags
of 1000ft each, regardless of the direction (omnidirectional).
Scatter diagram of Impedance at wells versus Porosity (rho = 0.826) with the linear regression line, together with the simple variograms of Porosity and Impedance at wells and their cross-variogram (fig. 8.7-1)

Note - The variance of the acoustic impedance variable sampled at the 55 well locations (0.027) is close to the variance of the variable calculated on the entire data set (0.024).
The calculation parameters being the same as in the previous (monovariate) structural analysis, the simple variogram of Porosity has obviously not changed. This set of experimental variograms is saved in a Parameter File called Porosity & Impedance.
The Statistics / Variogram Fitting procedure is used to derive a model which should match the three experimental variograms simultaneously. To fit a model in a multivariate case, in the framework of the Linear Coregionalization Model, the principle is to define a set of basic structures by clicking the Edit button. Any simple or cross variogram will then be expressed as a linear combination of these structures. The two basic structures that will compose the final model are:
- a nugget effect,
- a spherical variogram with a range of 4000ft.

Once you have entered the two structures, the use of the Automatic Sill Fitting option ensures that the cokriging matrix is positive definite.

(snap. 8.7-2)

312

(fig. 8.7-2)

Pressing the Print button in the Model Definition panel produces the following printout. This model is finally saved in a new Parameter File called Porosity & Impedance.
Model : Covariance part
=======================
Number of variables = 2
- Variable 1 : Porosity
- Variable 2 : Impedance at wells
.../...
Number of basic structures = 2

S1 : Nugget effect
Variance-Covariance matrix :
             Variable 1   Variable 2
Variable 1   0.0039       0.0111
Variable 2   0.0111       0.3162
.../...

S2 : Spherical - Range = 4000.00ft
Variance-Covariance matrix :
             Variable 1   Variable 2
Variable 1   0.0258       0.1915
Variable 2   0.1915       1.6755
.../...

8.7.2 Cross-Validation
The Statistics / Cross-Validation procedure checks the consistency of the model with respect to the data. When performing the cross-validation in the multivariate case, it is possible to choose, for each target point, in the Special Kriging Options:
- to suppress all the variables relative to this point,
- to suppress only the target variable at this point.

Note - The latter possibility is automatically selected in the Unique Neighborhood case. In order to try the first solution, the user should use a Moving Neighborhood instead, which can be extended by increasing the radius (20000ft) and the optimum number of points (54) for the neighborhood search.

(snap. 8.7-3)

The cross-validation results are slightly better than in the monovariate case. This is due to the fact
that the seismic information (correlated to the porosity) is used even at the target point where the
porosity value is removed.
======================================================================
|                          Cross-validation                          |
======================================================================

Statistics based on 55 test data

              Mean       Variance
Error         -0.00427   0.52898
Std. Error    -0.00293   1.01547

Statistics based on 55 robust data

              Mean       Variance
Error         -0.00427   0.52898
Std. Error    -0.00293   1.01547

A data is robust when its Standardized Error lies between -2.500000 and 2.500000

8.7.3 Estimation
The estimation is performed using the Cokriging technique where, at each target grid node, the porosity result is obtained as a linear combination of the porosity and the acoustic impedance measured at the 12 appraisal wells only (isotopic neighborhood). The Interpolate / Estimation / (Co-)Kriging panel requires the definition of the two variables of interest in the Input File (Wells), the model (Porosity & Impedance) and the neighborhood (Porosity). It also requires the definition of the variables in the output grid file (Seismic) which will receive the result of the estimation: Cokriging (Porosity) for the estimation of the porosity and Cokriging St. Dev. (Porosity) for its standard deviation.


(snap. 8.7-4)

It is obviously useless to compute the estimation of the acoustic impedance itself by cokriging based on the 12 appraisal wells only.
The 33800 grid nodes are estimated with values ranging from 6.8 to 11.2. The cokriging estimate is displayed using the same parameters as before.

Cokriging (Porosity) map, displayed with the Phi color scale (fig. 8.7-3)

This map is very similar to the one obtained with the porosity variable alone: the few differences are only linked to the auxiliary variable (the seismic information) and to the choice of the multivariate model.
Obviously, a large amount of information is lost when the seismic information is reduced to its values at the well locations only.
The next part of the study deals with the Collocated Cokriging technique, which aims at integrating
through a cokriging approach the whole auxiliary information provided by the Norm AI variable,
exhaustively known on the seismic grid.


8.8 Collocated Cokriging


The idea of this technique is to enhance the cokriging process by adding, for each target grid node, the value of the acoustic impedance at this location.
The system resembles the traditional cokriging technique where one additional fictitious sample, which coincides with the target grid node and for which only the acoustic impedance value is provided, is added. This is therefore a heterotopic case, as the two variables are no longer informed at the same locations only.
The multivariate model defined for the standard cokriging procedure (Porosity & Impedance) is
still used here. Concerning the neighborhood (Porosity), the term Unique may be misleading: all
the samples of the data file (Wells) are taken into account, but one fictitious sample is added at the
target grid node.
Using the same Standard (Co-)Kriging window, it is only compulsory to edit a few parameters:


Click Number of Variables and check the Collocated Variable(s) option in the following subwindow:

(snap. 8.8-1)

- a new name for the variable to be created, for instance Collocated Cokriging (Porosity),
- the collocated variable in the Output File variable list: this refers to the seismic information called Norm AI,
- the collocated cokriging as a Special Kriging Option in the main window; the collocated variable in the Input File should be indicated: this refers to the variable carrying the seismic information, called Impedance at wells (which is defined as target variable #2 in the Input File).

(snap. 8.8-2)

The 33800 grid nodes are estimated with values ranging from 5.6 to 12.5.


Note - The kriging matrix systematically involves one extra point whose location varies with the target grid node. Therefore, the Unique Neighborhood shortcut, which consists in inverting the kriging matrix only once, cannot be exploited anymore. A partial inversion is used instead, but the computing time is significantly longer than for the traditional cokriging.
Collocated Cokriging (Porosity) map, displayed with the Phi color scale (fig. 8.8-1)

Compared to the External Drift technique, the link between the two variables is introduced through the structural model rather than via a global correlation: this allows more flexibility, as the correlation may vary with the distance. This is why it is essential to be cautious when performing the structural analysis.
Collocated Cokriging with Markov-Bayes assumption:
The idea in this paragraph is to take full advantage of the seismic information, especially during the structural analysis, by choosing a simplified multivariate model based on the seismic information. This may be useful when the number of wells is not large enough to allow a proper variogram calculation.
The next graphic shows a variogram map obtained from the Exploratory Data Analysis window (last statistical representation at the right) for the Norm AI variable defined on the grid, using 50 lags of 120ft for the calculation parameters. This tool allows to easily investigate potential anisotropies. In this case, directions of better continuity N10°E and N120° can be quite clearly identified: click with the right mouse button on one of the small tickmarks corresponding to these directions and select Activate Direction.

Note - This calculation can be quite time-consuming when it is applied to large grids. In such cases, a Sampling selection can be performed beforehand to subsample the grid information; the variogram map calculation is then carried out on this selection only.


(snap. 8.8-3)

It is advised to analyze this apparent anisotropy cautiously. Actually, in the present case, this anisotropy is not intrinsic to the impedance behavior over the area of interest; it is more likely due to the presence of a North-South low impedance band around X equal to 2000 to 4000ft. It is therefore ignored and a standard experimental variogram is computed.
By default, the grid organization is used, as it allows a more efficient computation of the variogram, for instance along the main grid axes. Switch off the Use the Grid Organization toggle on the Exploratory Data Analysis main window and click on the variogram icon to compute an omnidirectional variogram of the Norm AI variable on the grid. Compute 50 lags of 120ft and save (Application / Save in Parameter File menu of the graphic page) the experimental variogram under a new Parameter File called Norm AI.


The Statistics / Variogram Fitting procedure is used to fit a model to the acoustic impedance experimental variogram. A possible model is obtained by nesting, in the Manual Fitting tab:
- a Generalized Cauchy structure with a range of 1750ft (third parameter equal to 1),
- a spherical variogram with a range equal to 6000ft,
- an automatic sill fitting.

(snap. 8.8-4)

The following figure presents the resulting model.


Fitted model on the omnidirectional experimental variogram of Norm AI (fig. 8.8-2)

To run a Bundled Collocated Cokriging procedure, it is still compulsory to define a completely consistent multivariate model for porosity and acoustic impedance.
The idea of the Markov-Bayes assumption is simply to derive the cross-variogram and the variogram of the porosity by rescaling the acoustic impedance variogram. The scaling factors are obtained from the ratio of the experimental variances of the two data sets, Var Norm AI (0.0313) / Var Porosity (3.24) = 0.00966, and from the correlation coefficient at the wells (0.915) between the two variables Porosity and Impedance at wells, which can be read from the scatter diagram of the Exploratory Data Analysis.

Note - This correlation coefficient corresponds to the porosity values within the Sampling
selection and the Norm AI background variable after migration from grid to wells location (Grid to
point option).
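In shorthand (our notation, using the figures quoted above), the Markov-Bayes rescaling reads:

    gamma_P(h)  = (Var_P / Var_S) * gamma_S(h)
    gamma_PS(h) = rho * sqrt(Var_P / Var_S) * gamma_S(h)

where gamma_S is the fitted Norm AI variogram, Var_P / Var_S = 3.24 / 0.0313 (i.e. the inverse of the 0.00966 ratio) and rho = 0.915 is the correlation coefficient at the wells.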
The cokriging process, by construction, operates within the scope of the model of intrinsic correlation. In this case, kriging and cokriging lead to the same result for isotopic data sets (all variables
informed at all data points). In the collocated cokriging case, an additional acoustic impedance sample, located at the target grid node, is introduced in the estimation process.
To perform Collocated Cokriging with the Markov hypothesis, select the window Interpolate / Estimation / Bundled Collocated Cokriging. The results of this bundled Collocated Cokriging process are stored in variables called:
- CB Kriging (Porosity) for the estimation,
- CB Kriging St. Dev. (Porosity) for its standard deviation.

(snap. 8.8-5)


CB Kriging (Porosity) map, displayed with the Phi color scale (fig. 8.8-3)


8.9 Simulations
As a matter of fact, linear estimation techniques, such as kriging or cokriging, do not provide a correct answer if the user is interested in estimating the probability that the porosity exceeds a given threshold. Applying a cutoff operator (selecting every grid node above the threshold) on any of the previous maps would lead to a two-color map (each value is either above or below the threshold); this cannot be used as a probability map and it can be demonstrated that this result is biased. A simple argument consists in noticing that the standard deviation of the estimation (which reminds us that the estimated value is not the truth) is not used in the cutoff operation. Drawing a value of the error at random within an interval calibrated on a multiple of this standard deviation, centered on the estimation, would correct this fact on a node-by-node basis. But drawing this correction at random for two consecutive nodes does not take into consideration that the estimation (and therefore its related standard deviation) should be consistent with the spatial correlation model.
A correct solution is to draw several simulations at random, which reflect the variability of the model, and to transform each one of them into a two-color map by applying the cutoff operator. Then, on a grid node basis, it is possible to count the number of times the simulated value passes the threshold and to normalize it by the total number of simulations: this provides an unbiased probability estimate. The accuracy of this probability improves as more simulations are drawn, assuming that they are all uncorrelated (up to the fact that they share the same model and the same conditioning data points).
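The counting step itself is elementary and can be reproduced outside Isatis; a minimal sketch in Python, assuming the simulated values have been exported to a NumPy array (the export itself is not part of the workflow described here):

import numpy as np

def probability_map(sims, threshold=9.0):
    # sims: array of shape (n_simulations, n_nodes), one row per realization;
    # return, node by node, the fraction of realizations above the threshold
    return (sims > threshold).mean(axis=0)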
As implemented in Isatis, the simulation technique is based on a random number generator which
ensures this independence. Any series of random numbers is related to the value of a seed which is
defined by the user. Therefore, in order to draw several series of independent simulations, it suffices
to change this seed.
Several simulation techniques are available in Isatis. The one which presents a reasonable trade-off between quality and computing time is the Turning Bands method, which will be used for all the techniques described in this paragraph. The principle of this technique is to produce a non-conditional simulation first (a map which reflects the variogram but does not honor the data) and then to correct this map by adding the map obtained by interpolating the experimental error between the data and the non-conditional simulated values at the data points: this is called conditioning. This last interpolation is performed by kriging (in the broad sense) using the input model. The final map is called a conditional simulation. The only parameter of this method is the number of bands, which will be fixed to 200 in the rest of this section. For more information on the simulation techniques, the user should refer to the On-Line documentation.
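In shorthand (our notation), the conditioning step described above reads:

    Z_cs(x) = Z_nc(x) + [Z - Z_nc]*(x)

where Z_nc is the non-conditional simulation and [Z - Z_nc]* denotes the kriging, from the data points, of the mismatch between the measured values and the non-conditionally simulated values at those points. At a data point the kriged mismatch is exact, so the conditional simulation Z_cs honors the data.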
Each conditional simulation is supposed to be similar to the unknown reality. It honors the few
wells and reproduces the input variogram (calculated from these few data).
An additional constraint is to reproduce the histogram. Actually, most simulation techniques
assume (multi)gaussian distributions. It is therefore usually recommended to transform the original
data prior to using them in a simulation process, unless:


- the experimental histogram is not a good representation of a meaningful theoretical histogram model: this is the case when the data variable is not stationary,
- the variable measured at the data points is already normally distributed.

This can be checked on both variables: the porosity from the well data file and the acoustic impedance from the seismic grid file. In Statistics / Exploratory Data Analysis, the Quantile-Quantile plot
graphically compares any experimental histogram to a set of theoretical distributions, for instance
gaussian in the present case.
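A comparable check can be scripted outside Isatis; a small sketch using SciPy (the input array is a hypothetical export of the data values, not an Isatis object):

import numpy as np
from scipy import stats

def qq_normality_score(values):
    # build the normal QQ plot and return the correlation coefficient of its
    # point cloud; a value close to 1 supports the normality hypothesis
    (osm, osr), (slope, intercept, r) = stats.probplot(np.asarray(values), dist="norm")
    return r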
Quantile-Quantile plot of Porosity against Gauss(m=8.2; s=1.4) (fig. 8.9-1)
Quantile-Quantile plot of Norm AI against Gauss(m=-0.551; s=0.155) (fig. 8.9-2)

Visual comparison shows that the hypothesis of a normal distribution does not really hold. Nevertheless, for simplicity, it is decided to perform the simulations directly on the raw variables, bypassing the gaussian anamorphosis operation. Hence, each spatial correlation model used in the estimation section can be used directly.

Note - An example of gaussian transformation (called anamorphosis) can be found in the Non
Stationary & Volumetrics case study, for the thickness variable.
These simulations are illustrated in the next paragraphs in the univariate case and for the external drift technique. Similarly, cosimulations and collocated cosimulations (bundled or real) could be performed using the same models as for the estimation step.

8.9.1 Univariate Simulations


The menu Interpolate / Conditional Simulations / Turning Bands, used with the target variable (Porosity) in the Data file (Wells) and with the Sampling selection, performs sequentially the non-conditional simulation and the conditioning by kriging.


For instance, perform ten simulations using 200 turning bands, storing the results in one Macro Variable called Simu Porosity.
Should you wish to generate several batches of simulations (say 10 at a time), you have to modify the seed for each run, as discussed earlier. You also have to increase the index given to the first simulation by 10 if you want the indices in the Macro Variable to be consecutive.
Finally, specify the model (Porosity) and the neighborhood (Porosity) to be used during the conditioning kriging step, based on the 12 appraisal wells only. Two simulation results are displayed below.


(snap. 8.9-1)


(fig. 8.9-3)

The Tools / Simulation Post Processing facility is used to compute the probability that the porosity is greater than a given threshold (9 in this case).
Among various possibilities, define in the Iso-Cutoff Maps one new macro-variable that will contain the probability that the variable remains above the threshold 9; the resulting map will be stored under the name Proba Porosity (kriging) {9.000000}. The resulting map is displayed with a new color scale for the probability map (in raster mode); this color scale is derived from the Red Yellow palette. The porosity at the 12 appraisal wells is overlaid using the Symbols type of representation, with + symbols for porosity values above 9 and o symbols for porosity values below.
Probability map P[Porosity > 9%], displayed with the Proba color scale and the well porosity symbols overlaid.
The noisy aspect of the result is due to the small number of simulations.

(fig. 8.9-4)


8.9.2 External Drift Simulations


The External Drift Kriging introduces the seismic information as the background shape which conditions the local drift (hence its name). The same formalism can be transposed to the simulation domain. The Interpolate / Conditional Simulation / External Drift (bundled) menu offers this possibility by nesting the following phases:
- local interpolation of the seismic at the data points,
- determination of the optimal model, inferred taking into account the seismic information at the data points as an external drift: as for kriging, we forbid any nugget effect component,
- the conditional simulations.

The resulting Macro variable name is set to Simu Porosity ED.

(snap. 8.9-2)

The next graphic shows two output realizations. Due to the control brought by the seismic information, the variability between the simulations is much smaller than for the univariate simulations.


The corresponding probability map (for the same threshold of 9) is finally displayed.

(fig. 8.9-5)
Probability map P[Porosity(ED) > 9%], displayed with the Proba color scale.

(fig. 8.9-6)

This probability map reflects the ambiguity of the status of the auxiliary seismic variable used as an external drift: this quantity is assumed to be a known function. Hence this drift component does not introduce any randomness in the simulation process. Moreover, the scaling and shifting factors which are automatically derived by the kriging system remain constant from one simulation to the next and, furthermore, they are the same all over the field due to the Unique Neighborhood. Therefore, because of the high level of correlation between the acoustic impedance and the porosity, the seismic variable almost completely controls the estimation of the probability to exceed a threshold.


9. Non Stationary & Volumetrics
This case study is based on a real 2D data set kindly provided by Gaz de France. Its objective is twofold:
- to illustrate the application of geostatistics to non-stationary phenomena in the scope of the theory of Intrinsic Random Functions of order k (IRF-k), the use of kriging with external drift and the use of kriging with Bayesian drift,
- to illustrate how volumetric calculations are derived from conditional simulations within Isatis.
Important Note:
Before starting this study, it is strongly advised to read the Beginner's Guide book, especially the following paragraphs: Handling Isatis, Tutorial Familiarizing with Isatis Basics, and Batch Processing & Journal Files.
All the data sets are available in the Isatis installation directory (usually C:\program file\Geovariances\Isatis\DataSets\). This directory also contains a journal file including all the steps of the case study. In case you get stuck during the case study, use the journal file to perform all the actions according to the book.

Last update: Isatis version 2014


9.1 Presentation of the Dataset


The information consists in the depth of a horizon top. It is composed of:
- a few wells, in the ASCII file gdf_wells.hd, containing depth measurements in meters corresponding to the top of a reservoir, together with the respective thickness values,
- a 2D seismic survey, containing depth measurements in meters corresponding to the same top structure (after velocity analysis), in the ASCII file gdf_seismic.hd.
A new study has first to be created. Then, both data sets are imported using the File / Import / ASCII procedure in a new Directory Non Stationary; the Files are called Wells and Seismic. The files are located in the Isatis installation directory/Datasets/Non_Stationary_and_Volumetrics.

(snap. 9.1-1)

The Data File Manager can be used to derive the following statistics on:

- the depth measured at the wells:

Directory Name  : Non Stationary
File Name       : Wells
Variable Name   : depth at wells
.../...
Printing Format : Decimal, Length = 10, Digits = 2

MINI= 2197.00    MEAN= 2241.17
Q.25= 2208.00    ST.D= 42.09
Q.50= 2214.50    ST.D/MEAN= 0.0187787
Q.75= 2284.00    Defined Samples= 87 / 87
MAXI= 2343.00

- the seismic depth:

Directory Name  : Non Stationary
File Name       : Seismic
Variable Name   : seismic depth
.../...
Printing Format : Decimal, Length = 10, Digits = 2

MINI= 2147.00    MEAN= 2215.60
Q.25= 2190.00    ST.D= 36.43
Q.50= 2215.00    ST.D/MEAN= 0.0164406
Q.75= 2235.00    Defined Samples= 1351 / 1351
MAXI= 2345.00

The next figure illustrates a basemap of both data sets: the seismic data (black crosses) and the well data (red squares), using two basemap representations in a new Display page. The area covered by the seismic data is much larger than the area drilled by the wells.

Basemap of the seismic data and the wells (fig. 9.1-1)


9.2 Creating the Output Grid


All the resulting maps will be produced on the same output grid covering the entire seismic information, even if a lot of extrapolation is then required when working with the well data alone. The
procedure File / Create Grid File is used to create the grid file Grid containing 3420 grid nodes,
with the following parameters:
- X origin: 327500m, Y origin: 10500m,
- X and Y mesh: 250m,
- X nodes number: 60, Y nodes number: 57.

(snap. 9.2-1)

For comparison purposes, a quick estimation of the depth at wells is performed with Interpolate / Quick Interpolation in the Linear Model Kriging mode, using all the samples for each grid node estimation (Unique neighborhood).


(snap. 9.2-2)

The estimated depth, Depth from wells (quick stat), is displayed below with several types of representation:
- a Raster display, using a new color scale ranging from 2200 to 2520 by steps of 10m,
- an Isolines display of the estimated depth, with isolines defined from 2200 to 2500m with a 100m step (thin black lines) and between 2219.5 and 2220.5 with a 1m step to show the 2220m isoline in bold,
- a Basemap of the wells.

(snap. 9.2-3)


9.3 Estimation With Wells


The purpose of this section is to perform an estimation of the depth using the information given by
the 87 wells alone in a non-stationary framework.

9.3.1 Exploratory Data Analysis


The spatial structure of the depth at wells variable is analyzed through the Statistics / Exploratory
Data Analysis procedure. The omnidirectional variogram is computed for 12 lags of 500m, with a
tolerance on distance equal to one half of the lag (all pairs are used) and saved under the name
depth at wells.

Omnidirectional experimental variogram of depth at wells, with the number of pairs per lag (fig. 9.3-1)

The variogram reaches the dispersion variance (1772) around 3km and keeps rising with a parabolic
behavior: this could lead to modeling issues, as the smoothest theoretical variogram model precisely has a parabolic behavior. This is actually a strong indication that the variable is not stationary
at the scale of a few kilometers.

9.3.2 Non-stationary Model Fitting


A first solution consists in fitting a generalized covariance of order k (rather than a variogram)
where k designates the degree of the polynomial drift which suits the representation of the global
behavior of the variable.
The utility Statistics / Non Stationary Modeling performs the structural inference in the scope of the
theory of the Intrinsic Random Functions of order k (IRF-k) in the following two steps:

- Determination of the optimal polynomial drift (among the possible drift trials specified by the user). The default drift trials are selected by pressing the button Automatic (no ext. drift). Once determined, this polynomial drift is subtracted from the raw variable to derive residuals.
- Determination of the best combination of generalized covariances, from a list of basic structures that can be modified by the user. A schematic formulation of this two-step decomposition is sketched below.
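Schematically (our shorthand, not an Isatis formula), the decomposition handled by this procedure is:

    Z(x) = sum_l a_l * f_l(x) + Y(x)

where the f_l(x) are the monomials of degree at most k (the drift trials T1, T2, T3 of the printout below) and the generalized covariance of the residual Y is sought as a linear combination of a nugget effect, |h|, |h|^2 log|h| (spline) and |h|^3 terms; combinations whose coefficients violate the validity conditions are those reported as N/A.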

(snap. 9.3-1)

The Parameter File where the Model will be ultimately stored is called Wells. Edit it in order to
open the Model Definition panel and ask for the Default Model; it is composed of:

- a nugget effect,
- a linear generalized covariance term (similar to a linear variogram),
- a spline generalized covariance,
- a third-order generalized covariance.

The scale factor of all the basic structures is automatically calculated, being equal to 10% of the field diagonal. The value of this parameter has no consequence on the model, and is just kept for consistency with the variogram definition.
The procedure also requires the definition of the Neighborhood used during this structural inference. Because of the small amount of data, we keep the Unique neighborhood previously defined.
The structural inference in Unique Neighborhood produces the following results:
- The optimal drift is quadratic: this makes sense when trying to capture the dome shape of the global reservoir.
- The corresponding optimal generalized covariance is composed only of a nugget effect (structure 1), with a sill coefficient of 336.74.
======================================================================
|                      Structure Identification                      |
======================================================================
Data File Information:
Directory   = Non Stationary
File        = Wells
Target File Information:
Directory   = Non Stationary
File        = Wells
Seed File Information:
Directory   = Non Stationary
File        = Wells
Variable(s) = depth at wells
Type        = POINT (87 points)
Model Name  = Wells
Neighborhood Name = unique - UNIQUE

.../...

Drift Identification
====================
The drift trials are sorted by increasing Mean Rank
The one with the smallest Mean Rank is preferred
Please also pay attention to the Mean Squared Error criterion

T1 : No Drift
T2 : 1 x y
T3 : 1 x y x2 xy y2

         Mean         Mean Sq.    Mean
Trial    Error        Error       Rank
T3       -3.833e-02   3.996e+02   1.276
T2       -7.317e-01   1.156e+03   2.103
T1       -3.136e-14   1.813e+03   2.621

Results are based on 87 measures

Covariance Identification
=========================
The models are sorted according to the scores (closest to 1. first)
When the Score is not calculated (N/A), the model is not valid
as the coefficient (sill) of one basic structure, at least, is negative

S1 : Nugget effect
S2 : Order-1 G.C. - Scale = 1400.000m
S3 : Spline G.C. - Scale = 1400.000m
S4 : Order-3 G.C. - Scale = 1400.000m

Score   S1           S2           S3          S4
1.042   3.367e+02    0.000e+00    0.000e+00   0.000e+00
1.171   0.000e+00    0.000e+00    2.857e+02   0.000e+00
0.806   0.000e+00    2.007e+02    0.000e+00   0.000e+00
1.447   0.000e+00    1.023e+01    0.000e+00   7.715e+02
1.471   4.710e-01    0.000e+00    0.000e+00   8.652e+02
1.644   0.000e+00    0.000e+00    0.000e+00   1.071e+03
N/A     0.000e+00    0.000e+00    3.004e+02   -2.867e+01
N/A     -1.015e+00   3.292e+01    0.000e+00   5.998e+02
N/A     -6.779e-01   0.000e+00    3.486e+02   -7.207e+01
N/A     0.000e+00    -2.837e+01   4.476e+02   -1.221e+02
N/A     0.000e+00    -2.307e+01   3.716e+02   0.000e+00
N/A     -6.011e-01   0.000e+00    3.095e+02   0.000e+00

Successfully processed = 87
CPU Time     = 0:00:01 (1 sec.)
Elapsed Time = 0:00:02 (2 sec.)
The Model Parameter File (Wells) has been updated

It is frequently observed that, after the drift identification process, the resulting residuals present an erratic (non-structured) behavior; consequently, a covariance structure composed only of a nugget effect should not be a surprise. In some cases it is advised to force the model to be structured by removing the nugget effect from the list of basic structures. To achieve this, click on Default Model in the Model Definition window and remove the Nugget Effect from the list. Click on OK and then on Run. This new structural inference, using the same Unique Neighborhood, produces the following results for the Covariance Identification step (the drift identification results obviously remain the same):
.../...
Covariance Identification
=========================
The models are sorted according to the scores (closest to 1. first)
When the Score is not calculated (N/A), the model is not valid
as the coefficient (sill) of one basic structure, at least, is negative

S1 : Order-1 G.C. - Scale = 1400.000m
S2 : Spline G.C. - Scale = 1400.000m
S3 : Order-3 G.C. - Scale = 1400.000m

Score   S1           S2          S3
1.171   0.000e+00    2.857e+02   0.000e+00
0.806   2.007e+02    0.000e+00   0.000e+00
1.447   1.023e+01    0.000e+00   7.715e+02
1.644   0.000e+00    0.000e+00   1.071e+03
N/A     -2.307e+01   3.716e+02   0.000e+00
N/A     0.000e+00    3.004e+02   -2.867e+01
N/A     -2.837e+01   4.476e+02   -1.221e+02

Successfully processed = 87
CPU Time     = 0:00:00 (0 sec.)
Elapsed Time = 0:00:01 (1 sec.)

The Model Parameter File (Wells) has been updated


The optimal generalized covariance is composed only of a Spline (structure 2), with a sill coefficient of 285.7.

9.3.3 Estimation
The estimation by kriging can now be performed using the standard procedure Interpolate / Estimation / (Co-)Kriging. The target variable depth at wells has to be defined in the Input File Wells, and
the names of the resulting variables in the Output File Grid:
- Depth from Wells (Non-Stat) for the estimate,
- Depth from Wells (St. Dev.) for the corresponding standard deviation.

The estimation is performed with the non-stationary model Wells, and the unique neighborhood.


(snap. 9.3-2)

The results are visualized in the following figure, where the estimated depth is represented in the same way as for the previous quick estimation. The only difference is that the color scale is modified so that the values greater than 2520 are no longer stretched; these values are set to blank.
Additionally, a red isoline is displayed for a standard deviation value equal to 15m. This value, which indicates a rather poor precision, is exceeded over almost the entire field, except close to the wells.


(snap. 9.3-3)


9.4 Estimation With Wells and Seismic


The information provided by the seismic data is of prime interest in making the depth estimation map more reliable, especially in the areas far from the wells. The idea is to consider the seismic as providing correct information on the large scale variability of the depth variable, which we usually call the trend: this technique is known as External Drift Kriging.
For applying this technique, it is necessary to know the external drift information both at the well locations and at the nodes of the final grid. Consequently, a preliminary step consists in interpolating the seismic depth at the nodes of the final grid.

9.4.1 Structural Analysis of the Seismic Data


The Statistics / Exploratory Data Analysis feature is used to calculate the Variogram Map on the seismic depth variable. All the parameters are shown in the next graphic. Tolerances equal to 1 on Lags and Direction are used in order to have a better representation of possible anisotropies.
In the variogram map area you can activate a direction using the mouse buttons: the left one to select a direction, the right one to select Activate Direction in the menu. This variogram map clearly shows a short scale anisotropy, with a direction of maximum continuity of about N15°E. The two principal axes have been activated and the experimental variograms confirm this feature. We can also observe a regional trend in the E-W direction.
You have the option to save these experimental variograms using the option Save in Parameter File... in the Application menu. This set of directional variograms is saved in a new Parameter File Seismic.

Note - Pay attention to the fact that the angular tolerance on each directional variogram is equal to approximately 15° (180° divided into 36 angles, with a tolerance of 1 sector on each side of the direction of interest). Computing standard experimental variograms with a reference direction of N15°E and the default angular tolerance (45° divided by the number of directions) could lead to slightly different results.


(snap. 9.4-1)

The Statistics / Variogram Fitting procedure is now used in order to fit an anisotropic model on these experimental variograms. In the Model Initialization frame, select Spherical. Then click Constraint to allow the Anisotropy and lock the spherical sill to 1350. Click Fit to apply the automatic fitting.
By default, a Global Anisotropy is set to an angle consistent with the experimental calculation (equal to 75° in the trigonometric convention in this case).

(snap. 9.4-2)


(snap. 9.4-3)

The model is stored in the Standard Parameter File Seismic by pressing the Run (Save) button.

(fig. 9.4-1)


9.4.2 Estimation of the Seismic Depth


The seismic depth can now be estimated at the nodes of the output grid file Grid using the standard (Co-)Kriging panel. A new variable Depth from Seismic (Background) is created in the Output File; there is no need to calculate the standard deviation of the estimation. The variogram model Seismic is used.
The only specificity of this estimation lies in the choice of the Neighborhood parameters. Due to the large number of data, a Moving Neighborhood is strongly advised; it is called Seismic. As the data are not regularly spread (high density along lines), a large number of angular sectors (12) is recommended to ensure that the neighboring information surrounds the target node as regularly as possible. To reduce statistical instabilities, an optimum number of 2 samples per angular sector (hence 24 points) is selected within a neighborhood circle of 7000m radius, and the minimum number of samples is set to 3. The radius could be extended beyond the longest correlation distance (11km) but, due to the high sample density, this would not improve the estimation. It would simply allow us to go further into the extrapolated areas, which is not our goal.
In order to avoid clustered data, an advanced parameter setting the "Minimum Distance Between two Selected Samples" to 500m is also used.


(snap. 9.4-4)


(snap. 9.4-5)


(fig. 9.4-2)

From the resulting map, it can be observed that:
- the map using the previous color scale covers almost the whole area,
- the top of the structure (where the wells are located) has a seismic depth around 2150m, while the well information produces a value around 2200m.

Before using Depth from seismic (Background) as an external drift function, it is recommended to verify the correlation between this variable and the depth information at the wells. To achieve that, a kriging estimation of the seismic depth variable is performed into the Wells point file, using the same variogram and neighborhood configuration as previously; a new variable Depth from seismic (Background) is created.
The scatter diagram between the two variables is displayed hereafter. The regression line (bold
line) and the first bisector (thin line) are indicated. Both variables are highly correlated, and this
correlation is linear. Furthermore, the global shift of approximately 50m between seismic depth and
depth at wells is obvious.

Scatter diagram of depth at wells versus Depth from seismic (Background), rho = 0.978, with the regression line and the first bisector (fig. 9.4-3)

9.4.3 Estimation with External Drift


The previous map, Depth from seismic (Background), can now be considered as an external drift
function for the construction of the final depth map, estimated from the well information. As kriging honors the information used as data, this method will ensure the following two goals:
- to provide a surface which closely fits the depth values given at the wells, avoiding misties,
- to produce a depth map which resembles the seismic map (at least far from the control wells).

This technique may be used with several background variables (external drifts). However, the bundled version Interpolate / Estimation / External Drift (bundled) described here allows only one background variable. The method requires the background variable to be known at the well locations: this is automatically provided by a quick bilinear interpolation run on the background variable. The Unique Neighborhood is used. The final question concerns the Model, which must be inferred knowing that the seismic information is used as an External Drift. The procedure offers the possibility of calculating it internally, using the polynomial basic structures for the determination of the optimal generalized covariance. In the presence of outliers, the procedure often finds a nugget effect as the optimal generalized covariance; it is therefore useful to ask the procedure to exclude the nugget effect component from the trial set of generalized covariances.

(snap. 9.4-6)

The resulting non stationary model is printed during the process, before the kriging with External
Drift actually takes place.
======================================================================
|                      Structure Identification                      |
======================================================================
Data File Information:
Directory   = Non stationary
File        = Wells
Variable(s) = depth at wells
Target File Information:
Directory   = Non stationary
File        = Wells
Variable(s) = depth at wells
Seed File Information:
Directory   = Non stationary
File        = Wells
Variable(s) = depth at wells
Variable(s) = KRIG_DATA
Type        = POINT (87 points)
Neighborhood Name = Unique - UNIQUE
.../...
Drift Identification
====================
The drift trials are sorted by increasing Mean Rank
The one with the smallest Mean Rank is preferred
Please also pay attention to the Mean Squared Error criterion

T1 : 1 f1
T2 : 1 x y f1

         Mean        Mean Sq.    Mean
Trial    Error       Error       Rank
T2       2.348e-02   7.485e+01   1.460
T1       3.488e-02   7.165e+01   1.540

Results are based on 22 measures

Covariance Identification
=========================
The models are sorted according to the scores (closest to 1. first)
When the Score is not calculated (N/A), the model is not valid
as the coefficient (sill) of one basic structure, at least, is negative

S1 : Order-1 G.C. - Scale = 2033.624m
S2 : Order-3 G.C. - Scale = 2033.624m

Score   S1          S2
1.019   1.100e+02   0.000e+00
1.915   0.000e+00   2.834e+03
N/A     1.131e+02   -3.394e+00

The following graphic representation is performed using the same items as previously:


(fig. 9.4-4)


9.4.4 Conclusions
Although both maps have been derived with the same set of constraining data (the 87 wells):
- the results are similar in the area close to the conditioning wells: in both maps, the top of the structure is reached at 2200m,
- the external drift map is more realistic in the extrapolated area as it resembles the seismic background variable,
- the reliability of the map is estimated to be better for the external drift map: the area where the standard deviation is smaller than 15m is larger.

The next graphic shows the horizontal position of a section line and its respective cross section.
Depth is measured in meters, and the horizontal axis measures the distance along the trace AA'.
Position of the trace AA' and cross-section comparing Depth from seismic (Background), Depth from wells (quick stat), Depth from wells (Non Stat) and Depth from wells (Ext.Drift) (fig. 9.4-5)

This graphic is obtained using a Section in 2D Grid representation of the Display facility, applied to
the 4 variables simultaneously. The parameters of the display are shown below.


(snap. 9.4-7)

Then, to define the trace you plan to display, you can either:
- enter its coordinates, using the Trace... button in the Contents tab illustrated above,
- digitize the trace in a second display corresponding to a geographic view of the area (basemap or grid); once this graphic is displayed, select Digitize Trace with the right mouse button, then select the vertices of the trace with the left button. Once you have finished, click on the right button to terminate and then ask to Update Trace on Graphics with the right button.

Coming back to the Contents window of the trace display, you can modify in the Display Box tab
the definition mode for the graphic bounds as well as the scaling factors.
Finally, using Application / Store Page, save this template (call it trace for instance) in order to easily reproduce this kind of cross-section later.


9.5 Estimation Using Kriging With Bayesian Drift


In this section, Kriging with Bayesian Drift will be used as an alternative to kriging with external drift. This technique is advantageous when the input data - in this case the well information - are too sparse. The estimation with external drift is then not an optimal option, since the drift coefficients are hard to estimate: the fewer input data there are, the more uncertain the drift coefficient determination is, and the uncertainty on the final result should be assessed accordingly.
Kriging with external drift considers the drift coefficients as constant over the domain of estimation and ignores their attached uncertainty. The idea of Bayesian Kriging is to add some prior information about the drift coefficients and to consider them as variables drawn from a Gaussian distribution, with correlations between each other.
Prior information can be directly deduced from standard statistical regressions on the actual dataset, but this is usually not very representative if the data are very limited. The user can define the prior means and standard deviations himself, relying on geological knowledge, previous experience, regional data and sound judgment.
The goal of this section is to apply Bayesian Kriging to the estimation of the depth, using the data from five wells, the information from the seismic as a trend variable (drift), and prior information on the trend coefficients.
The five wells are defined by the selection variable 5 wells.
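Schematically (our shorthand), the model used in this section can be written:

    Z(x) = a0 + a1 * S(x) + R(x),   with (a0, a1) drawn from a Gaussian prior

where S(x) is the seismic drift variable, R(x) is the residual with its own variogram, and the prior means, standard deviations and correlations of (a0, a1) are the values entered in the Kriging Prior Values panel further below.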

9.5.1 Exploration of the Sub-dataset


In this section, the modeling will be performed using only the 5 wells corresponding to the selection 5 wells available in the file Wells. Go to Statistics / Exploratory Data Analysis and display a base map of the wells and a cross plot of the depth at wells versus the depth from seismic. Do not forget to activate the 5 wells selection.


(snap. 9.5-1)


(snap. 9.5-2)

(snap. 9.5-3)
Statistics on new depth at wells:
Number of samples: 5
Minimum: 2197.00
Maximum: 2302.00
Mean: 2233.90
Variance: 1385.14

Three points are located on the top of the surface and the two other points, around it, show higher depth values. Depth at wells and Depth from seismic are strongly correlated for this set, with a coefficient of 0.990.
The equation of the linear regression and the exact correlation coefficient can be found in Application / Report Global Statistics:
Linear regression : Y = (2.027176) * X + (-2171.221816)
Correlation coefficient : 0.990410

Comparing with the similar figures obtained from the entire set of 87 wells considered in the previous chapter, we see that the linear regression equation differs significantly:
Linear regression : Y = (1.389533) * X + (-788.985650)
Correlation coefficient : 0.982144

(snap. 9.5-4)

The linear regression of the set of five wells, marked in red, does not coincide with the linear regression of the 87 wells.
In Bayesian Kriging the choice of the prior values affects the outcome. Since the user can change the prior values for the trend coefficients, it is possible to obtain qualitative results even with limited data.


9.5.2 The choice of variogram model


Bayesian Kriging requires variogram model drawn up from residuals which correspond to the difference between depth at wells and depth from seismic. Residuals can be calculated in File / Calculator subtracting estimated values with depth from seismic (background), using linear regression
coefficients. We name this variable depth residual. Obviously, five data are insufficient to correctly
pick the variogram model. Important question is how the variogram model should be selected in the
case of few data available.
For geologists, the answer must come from regional data, geological knowledge
and sound judgment. In our case we assume a spherical model with a range of 3000 m and a sill equal to
56 (i.e. a standard deviation of the residuals equal to 7.5 m). This choice is similar to the residual variogram
that could be computed using the whole dataset of 87 wells. We will add this model directly when
we start the estimation.
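For reference, a minimal sketch (in Python, outside Isatis) of the spherical variogram used here, with gamma(h) = c * (1.5 h/a - 0.5 (h/a)^3) below the range a and gamma(h) = c beyond:

import numpy as np

def spherical(h, sill=56.0, range_=3000.0):
    # Spherical variogram: near-linear at the origin, flat beyond the range.
    r = np.clip(np.asarray(h, dtype=float) / range_, 0.0, 1.0)
    return sill * (1.5 * r - 0.5 * r**3)

print(spherical([0.0, 1500.0, 3000.0, 5000.0]))   # [0., 38.5, 56., 56.]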

9.5.3 Bayesian Kriging and Choice of Prior Parameters


The estimation by Bayesian Kriging can be performed using the procedure Interpolate / Estimation /
Kriging with Bayesian Drift.
- The number of external drifts has to be defined in the Number of Variables.

(snap. 9.5-5)

The target variable depth at wells has to be defined in the Input File Wells, as well as the values of the
external drift at the wells: depth from seismic (background). The names of the resulting variables in
the Output File Grid are:
- Depth from wells (bayesian) for the estimation,
- Depth from wells St.Dev. (bayesian) for the corresponding standard deviation.
We also need to specify the map used as a drift:
- Depth from seismic (background) for the external drift on Target #1.

To create a new model, click Model in the Kriging parameters; the Variogram Model window appears.
Enter the name Residuals, confirm it and, back in the Kriging parameters, click Edit to specify the
spherical model with a range of 3000 m and a sill equal to 56. Do not forget to activate a drift part of
type 1 f1 by clicking on Basic Drift Functions.

(snap. 9.5-6)

(snap. 9.5-7)


Before entering the priors, Unique should be chosen as the neighborhood. The prior parameters on the
trend coefficients must be defined in Kriging Prior Values. The Automatic button calculates the linear
regression coefficients of the actual set and puts these values as Mean. The user is free to change these
values.

(snap. 9.5-8)

As standard deviations, we express the uncertainty on the coefficients, thus allowing possible deviations
of the linear regression. Small standard deviation values for the coefficients lead to a smaller standard
deviation associated with the kriging. As the first factor (1) represents the shift of the regression line and
the second (F1) its slope, the values for the second should be relatively small (measured in tenths),
whereas for the first one tens or even hundreds of units may be entered to obtain a good quality result.
The coefficients for the Standard Deviation can also be determined automatically, with the method
called BootStrap. This method computes the regression coefficients after removing one point from the
set in turn. We therefore get two sets of n coefficients, from which the standard deviations are
automatically calculated.
Correlation coefficients between the elements are also calculated automatically, using the same principle as for the standard deviations: after obtaining the two sets of n coefficients, one can consider them as
two variables and calculate the correlation coefficient between them. Interestingly, this coefficient is
always either one or minus one, irrespective of whether the initial data (depth at wells, depth from
seismic (background)) are strongly correlated or not.
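A minimal sketch of this leave-one-out principle, with hypothetical well values (not the Isatis implementation):

import numpy as np

x = np.array([2175.0, 2190.0, 2205.0, 2220.0, 2235.0])   # drift at wells
y = np.array([2197.0, 2210.0, 2230.0, 2260.0, 2302.0])   # depth at wells

coefs = []
for i in range(len(x)):
    keep = np.arange(len(x)) != i            # remove one point in turn
    slope, intercept = np.polyfit(x[keep], y[keep], 1)
    coefs.append((intercept, slope))
coefs = np.array(coefs)

sd_shift, sd_slope = coefs.std(axis=0, ddof=1)
corr = np.corrcoef(coefs[:, 0], coefs[:, 1])[0, 1]
# The manual reports that this correlation always comes out as +1 or -1:
# with a single drift variable the two coefficients move almost exactly
# along a line in coefficient space, so corr is at (or very near) -1 here.
print(sd_shift, sd_slope, corr)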
The user is free to change any of these values according to geological knowledge or regional data.
In the case of limited data this is a great advantage, because it is possible to obtain a good quality model
with a small standard deviation associated with the kriging.
For the test we set -1000 as the mean for the shift and 1.5 as the mean for F1. The corresponding
standard deviations are 100 and 0.1. The correlation matrix stays the same.

(snap. 9.5-9)

Finally press RUN in the Kriging with Bayesian Drift panel.
As a result we obtain the following estimated map and the map of the uncertainty associated with the kriging.


(snap. 9.5-10)


9.6 Assessing the Variability of the Reservoir Top


9.6.1 External drift bundled simulations of the top reservoir
Using the same model as for the estimation, we consider the Depth from Seismic (Background) as
a trend for the actual top reservoir depth. By using this background variable as an external drift in
the simulation process, we will get images of the depth with a behavior similar to the one described
by the seismic data, while honoring the depth at wells.
- Use the Interpolate / Conditional Simulations / External Drift (bundled) menu to perform 100
  simulations of the top reservoir depth. Ask to calculate the model without nugget effect, use a
  Unique neighborhood and set the number of turning bands to 500. The process will create a
  macro-variable called Simu Top with seismic for this case.

(snap. 9.6-1)
- Display a few simulations using the Display menu, with the previous color scale (simu #001).


(fig. 9.6-1)

9.6.2 Analyzing the local variability using Macro Variable Statistics


It is common practice to describe the local variability of a reservoir top using the Tools / Simulation
Post-Processing facility. Indeed, the latter allows you to compute standard deviation, probability and
quantile maps. An example of this application is presented in the Property Mapping case study.
The purpose of this paragraph is to illustrate another way to analyze the local variability of a reservoir top: the Statistics / Macro Variable Statistics facility enables you to analyze
the local distribution of simulated values and to compare them with values of interest such as the OWC
or neighbouring measured values.
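As an aside, the kinds of maps produced by Simulation Post-Processing are simple statistics computed node by node across the stack of simulations; a minimal sketch with a hypothetical array of simulated depths:

import numpy as np

rng  = np.random.default_rng(0)
sims = 2290.0 + 10.0 * rng.standard_normal((100, 50, 50))  # (nsim, ny, nx)

std_map  = sims.std(axis=0)                  # local standard deviation map
prob_map = (sims < 2288.0).mean(axis=0)      # probability map: shallower than
                                             # a contact at 2288 m (depth
                                             # counted positively downwards)
q10_map  = np.quantile(sims, 0.10, axis=0)   # local quantile map

print(std_map.shape, float(prob_map.min()), float(prob_map.max()))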
Once the panel is open, the first thing to do is to click on Application / Load Files. Here, you can
enter the macro variable to be analyzed, a grid reference variable and also an auxiliary Points file,
containing for instance the well intercepts with the top of the reservoir.

(snap. 9.6-2)

Press OK. The following Basemap is displayed:


(snap. 9.6-3)

You can change the Basemap graphic preferences, for instance the color scale, by clicking on Application / Graphic Parameters for... / Basemap.
The local distribution of simulated values can now be obtained simply by clicking on a particular
grid node. The selected node is automatically outlined (by default with a bold red line) and the histogram containing all the simulated depth values for this particular node is displayed.
The usual histogram calculation parameters may be modified from the Application / Calculation
Parameters window. The histogram hereafter is obtained with 21 classes ranging from 2280 to
2301 m. Several particular values are superimposed on the histogram, such as:

- the mean of all simulated values,
- the index of the simulation outcome currently displayed on the basemap (CIV),
- a particular value to be entered by the user below the histogram (URV),
- quantiles of interest.

The values to be displayed can be modified in the Application / Graphic Parameters.

(snap. 9.6-4)

Coming back to the Basemap, you will notice that a right-click produces the following menu.

(snap. 9.6-5)


You can then clear the current selection or append additional nodes. Note that instead of selecting
an individual node, you can select blocks of nodes by modifying the Selection Neighborhood
parameter below the Basemap.
Finally, this menu allows you to select an auxiliary point, for instance a well close to your current
selection. Once you have clicked on an auxiliary point, the corresponding symbol changes from the
default + to a circle. The depth value, read from the auxiliary Points file, is then automatically
superimposed on the histogram, the value being displayed in the legend (PRV).

(snap. 9.6-6)


9.7 Volumetric Calculations


Now, geostatistical simulations are going to be used to estimate the distribution of oil/gas volumes
above a water contact. These volumetric calculations will be derived from successive simulations
of the reservoir top and of the reservoir thickness. The results will be compared to the volumes calculated above the spill point.
The depth of the intercepts with the top of the structure is contained in the variable depth at wells;
the reservoir thickness is stored in the variable thickness at wells.

9.7.1 Simulation of the reservoir thickness


In the Exploratory Data Analysis, displaying a histogram of the thickness at wells shows the existence of an aberrant negative value, due to an undesired negative sign, which has to be removed to
avoid unrealistic results. This operation can be performed with the File / Variable Editor facility.
Inside the panel, select the variable where the value has to be modified; then select the sample of
interest, modify its value to zero and Save.

(snap. 9.7-1)


The next graphic shows the histogram with the modified negative value. The default omnidirectional experimental variogram shows a stationary behavior.

(fig. 9.7-1: histogram of thickness at wells (Nb Samples: 87, Minimum: 0.00, Maximum: 29.15, Mean: 15.49, Std. Dev.: 5.06) and omnidirectional experimental variogram of thickness at wells, distance in km)

Furthermore, computing a scatter diagram between the depth and thickness variables would show
that these variables are not correlated.
At this stage, 100 stochastic simulations of the thickness have to be performed, without any external
drift variable and under a stationarity assumption. One possibility would be to fit a variogram model of
thickness at wells and directly perform a Conditional Turning Bands simulation, but there would be a
risk of obtaining negative thickness values. To tackle this point, a gaussian anamorphosis modeling of
the thickness at wells variable is performed; this approach allows constraining the lower and upper
thickness values.
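The principle of the anamorphosis can be sketched as an empirical normal-score transform: each raw value is replaced by the gaussian quantile of its rank. Isatis actually fits a Hermite polynomial expansion, which is what allows controlling the raw bounds, but the following hypothetical sketch shows the underlying idea:

import numpy as np
from scipy.stats import norm

def normal_score(raw):
    # Replace each value by the standard gaussian quantile of its rank.
    raw = np.asarray(raw, dtype=float)
    ranks = raw.argsort().argsort() + 1          # ranks 1..n
    p = (ranks - 0.5) / raw.size                 # plotting positions in (0,1)
    return norm.ppf(p)

thickness = np.array([0.0, 4.2, 9.8, 15.5, 18.1, 22.7, 29.15])  # hypothetical
print(normal_score(thickness))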
Open the Statistics / Gaussian Anamorphosis Modeling window and enter the input raw variable
thickness at wells in the Data area by pressing the Input... button. Then switch on the toggle
Gaussian Transform and enter the new output variable name thickness at wells (gaussian). Then
click on the Interactive Fitting... button; the Fitting Parameters window pops up. In the Windows
area, clicking on the first icon, called Anamorphosis, pops up the experimental anamorphosis and the
default point model. Click on Application / Graphic Bounds in the menu bar and enter the following values:
Horizontal Axis Min: -3.5
Horizontal Axis Max: 3.5
Vertical Axis Min: -5
Vertical Axis Max: 37

These values only adjust the display of the anamorphosis window. Now click on Interactive Fitting
Parameters... in the Anamorphosis Fitting area and enter the following values:


(snap. 9.7-2)

This last window constrains the raw values to lie between 0 and 35.


Click on the toggle Fitting Stats; the following statistics are displayed:
=== Fitting Statistics for thickness at wells ===
Experimental mean            = 15.49
Theoretical mean (Discr)     = 15.55
Experimental variance        = 25.58
Theoretical variance (Discr) = 27.36
Interval of Definition:
On gaussian variable: [-2.53, 2.53]
On raw variable: [0.00, 29.15]


(snap. 9.7-3)

Finally give a name to the new anamorphosis function, Thickness at wells, and press Run.
As the thickness simulations will be performed on this gaussian transform (before back-transformation to the raw scale using the anamorphosis function), we now need to evaluate the spatial structure of the thickness at wells (gaussian) variable.
An experimental variogram of this variable is first calculated with 12 lags of 300 m and saved in a
Parameter File (using the Application menu of the variogram graphic window) called Thickness at
wells (gaussian). Using the Statistics / Variogram Fitting window, a new variogram model is created, with the same name as the experimental variogram. This new model is edited (using Manual
Edit / Edit) and modified in order to improve the quality of the fit. A spherical basic structure with a
range of 950 m and a sill equal to 1.04 is chosen. This model is saved by pressing the Run (Save)
button.

(fig. 9.7-2: experimental variogram of thickness at wells (gaussian), with pair counts, and the fitted spherical model, distance in km)

Using the Interpolate / Conditional Simulations / Turning Bands facility, perform 100 simulations
of the thickness at wells (gaussian) in Unique Neighborhood using the previous model and activating the Gaussian Back Transformation... option.
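Turning Bands builds the gaussian field from many one-dimensional simulations along random lines; purely for brevity, the sketch below replaces it with a Cholesky factorization of the covariance (an assumption of this illustration, not what Isatis does), then back-transforms to raw values through a quantile table, which is what guarantees non-negative thicknesses:

import numpy as np

# 1D grid of nodes and spherical covariance (range 950 m, sill 1.04).
x = np.arange(0.0, 3000.0, 100.0)
h = np.abs(x[:, None] - x[None, :])
r = np.clip(h / 950.0, 0.0, 1.0)
cov = 1.04 * (1.0 - 1.5 * r + 0.5 * r**3)        # sill minus the variogram

L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))
gauss = L @ np.random.default_rng(1).standard_normal(len(x))

# Back-transformation: hypothetical anamorphosis table mapping gaussian
# quantiles to raw quantiles; simulated values stay within the raw bounds.
g_tab   = np.array([-2.53, 0.0, 2.53])
raw_tab = np.array([0.0, 15.5, 29.15])
thickness_sim = np.interp(gauss, g_tab, raw_tab)
print(thickness_sim.min(), thickness_sim.max())  # bounded by [0, 29.15]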

(snap. 9.7-4)

The new output macro variable is called Simu Thickness.


(snap. 9.7-5)

You can use the Exploratory Data Analysis to check the outputs and verify that the simulated thicknesses are greater than or equal to zero.
The next graphic shows two realizations of the Simu Thickness output.


(fig. 9.7-3)

(fig. 9.7-4)

To visualize the simulated reservoirs, you can create a new macrovariable corresponding to the base
of the reservoir and represent the top and the base in a cross-section. To achieve that:


- with Tools / Create Special Variable, create a macro variable of length type to store the 100 simulations of the depth of the reservoir base, called Simu Base reservoir,

(snap. 9.7-6)


- using the File / Calculator, calculate the sum of the simulated top Simu Top with seismic and
  the simulated thickness Simu Thickness.

(snap. 9.7-7)


To display a cross-section, you can use the previous template Trace and replace the Cross-section in 2D contents by a realization of the top and the base of the reservoir (hereafter, the top and
the base of simulation number 42).

(fig. 9.7-5: cross-section from A to B showing Top (Simu #42) and Base (Simu #42), Z (m) from 2200 to 2400 versus X (km))

9.7.2 Calculation of Gross Rock Volumes


In order to restrict the calculations to the main structure (where most of the wells have been drilled)
it is recommended to use the polygon contained in the ASCII file polygon.hd. This polygon is
imported in Isatis using the File / Polygons Editor panel:


- Create a new polygon file, called Polygon for Volumetrics, in the Application / New Polygon
  File menu.
- You may display the Wells data, with Application / Auxiliary Data.
- Ask for an ASCII import of the file polygon.hd.
- The polygon, called P1, is displayed on top of your wells data. Finally, Save and Run your
  polygon file.

(snap. 9.7-8)

Using the Tools / Volumetrics panel, the GRV will be calculated for the reservoir limited by the simulated surfaces of the Top and Bottom and by the gas-water contact (GWC) at the constant depth of
2288 m. Enter the macro variable names for the reservoir top and bottom.
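The underlying calculation can be sketched as follows: for each simulation, the gross rock volume is the cell area times the reservoir column height above the contact, summed over the cells inside the polygon (hypothetical surfaces below, depths counted positively downwards):

import numpy as np

def grv_above_contact(top, base, contact, cell_area, inside):
    # Column height between top and base that lies above the flat contact.
    eff = np.clip(np.minimum(base, contact) - top, 0.0, None)
    return float((eff * cell_area * inside).sum())

rng  = np.random.default_rng(2)
top  = 2250.0 + 20.0 * rng.random((90, 90))      # hypothetical top surface
base = top + 15.0 + 5.0 * rng.random((90, 90))   # hypothetical bottom surface
mask = np.ones((90, 90))                         # stand-in for polygon P1
print(grv_above_contact(top, base, 2288.0, 50.0 * 50.0, mask) / 1e6, "Mm3")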


Pay attention to match the indices between these macrovariables. To achieve that, switch ON the
toggle Link Macro Index to: Top Surface when you choose the second macro variable.

Note - To be able to use the previously created macro-variables in the Volumetrics panel, you need
to ensure that they are of length type. If this is not the case, an error is produced; you then have to go
into the Data File Manager, click on the macro-variable and, with the right button of the mouse, ask
to modify the Format. You can there specify that you want your macro-variable to be of length type
and choose the unit, meters in the present case.

(snap. 9.7-9)
- In Risk Curves / Edit, specify that you are interested in the distribution of the volumes and
  choose an appropriate format. Also switch on the Print Statistics toggle. Click on Close and
  then on Run. You obtain the following statistics and quantiles for the P1 polygon:
Statistics on Volume Risk Curves
================================
Polygon: P1
Smallest =  367.44 Mm3
Largest  =  500.49 Mm3
Mean     =  422.32 Mm3
St. dev. =   26.50 Mm3

Quantiles on Volume Risk curves
===============================
Global Field Integration
P90.00 =  476.23 Mm3
P50.00 =  705.81 Mm3
P10.00 = 1003.03 Mm3

Quantiles on Volume Risk curves
===============================
Polygon: P1
P90.00 = 390.70 Mm3
P50.00 = 422.87 Mm3
P10.00 = 456.19 Mm3
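Note the petroleum exceedance convention used in these risk curves: P90 is the volume exceeded by 90% of the outcomes, hence the smallest of the three values. As a sketch, with hypothetical simulated volumes:

import numpy as np

volumes = np.random.default_rng(3).normal(422.0, 26.5, size=100)  # Mm3
# Exceedance convention: P90 = 10% quantile, P10 = 90% quantile.
p90, p50, p10 = np.quantile(volumes, [0.10, 0.50, 0.90])
print(f"P90={p90:.2f}  P50={p50:.2f}  P10={p10:.2f} Mm3")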

(snap. 9.7-10)
(fig. 9.7-6)


You can also derive specific thickness maps from this procedure, such as iso-frequency maps
(for instance P10 and P90 maps), iso-cutoff maps (to derive for instance the probability for the
thickness to exceed 10 or 20m) or statistical maps (thickness mean or standard deviation).

9.7.3 Determination of the spill point


The Tools / Spill Point panel enables the delineation of a potential reservoir. Considering a topographic map (given on a regular grid) where the depth is counted positively downwards (the top of
a structure corresponds to the lowest depth value), the aim of this procedure is to find the elevation
(Spill Elevation) of the deepest horizontal plane which subdivides the field into areas inside the reservoir and areas located outside the reservoir.
Firstly, enter the macrovariable containing the top reservoir simulations, and create new output
variables for the spill point, the mean height above spill and the probability of having a reservoir
(above the spill). Isatis pops up a Display Grid Raster of the Macro for Depth/Elevation. It is
advised to enter Application / Map Graphic Parameters... and select your depth Color Scale.

(snap. 9.7-11)

The inside / outside reservoir constraints have to be digitized on the depth map before the Run.


(snap. 9.7-12)


To do this, you have to click with the right-hand mouse button on the graphic, and ask to:
- Digitize as: Inside for the first constraint, inside the reservoir structure (green circle),
- Digitize as: Outside for the second constraint, located outside the reservoir (blue circle).

In the Application menu of the depth map graphic, you may ask to Print Information on Constraints. They should be approximately located as follows:
Constraints characteristics (2 points)
Rank          X            Y
   1  334834.08m   15526.15m  Inside
   2  339925.24m   18263.27m  Outside

Switch on the Map of the Mean Height above Spill, Map of the Reservoir Probability, Distribution
of Spill Elevations and Distribution of Reservoir Volumes buttons. Set the units to Mm3 by clicking
on the Print Parameters... button.
Click on Run. Isatis pops up the requested results; for the grid displays, it is advisable to enter the
Application / Map Graphic Parameters... menu and customize the Color Scale.... A grey color scale
is used to represent the Reservoir Probability Map and a rainbow color scale to represent the mean
height above the spill point.
(snap. 9.7-13)
(snap. 9.7-14)


(snap. 9.7-15)


(snap. 9.7-16)
Spill Point calculation results
===============================
Num      : the relative rank of the outcome
Macro    : the absolute rank of the outcome in the MACRO variable
IX0,IY0  : the coordinates of the Spill point (in grid nodes)
ACC      : the acceptation criterion
           YES if the outcome is valid
           or the rank of the (first) violated constraint
Spill    : Elevation of the Spill point
Thick    : Maximum thickness of the Reservoir
Res. Vol.: Volume of the Reservoir (Unknown is not included)
Unk. Vol.: Unknown volume

Num Macro IX0 IY0 ACC      Spill      Thick   Res. Vol.  Unk. Vol.
 10    10  17  15 Yes  2265.379m   69.424m   547.13m3    0.35m3
 82    82  15  17 Yes  2266.296m   70.477m   597.62m3    1.22m3
 78    78  38   7 Yes  2266.994m   72.872m   584.76m3    6.22m3
 97    97  14  16 Yes  2267.007m   71.591m   561.78m3   23.84m3
  4     4  16  15 Yes  2267.394m   72.691m   558.00m3    4.45m3
 63    63  14  16 Yes  2267.524m   72.423m   606.63m3   32.50m3
 93    93  13  16 Yes  2272.503m   75.501m   746.57m3   44.72m3
  7     7   2  11 Yes  2272.744m   77.715m   794.38m3   29.30m3
 69    69  16  13 Yes  2274.536m   79.723m   732.06m3    0.17m3
 88    88  14  18 Yes  2275.137m   81.492m   756.87m3    2.34m3
 58    58  15  15 Yes  2275.549m   77.512m   779.35m3    1.57m3
 37    37  12  16 Yes  2275.675m   82.212m   818.17m3    2.37m3
.../...

Statistics on Reservoir Volumes
===============================
Total count of outcomes          = 100
Count of selected outcomes       = 100
Count of Valid selected outcomes = 100

                     Spill       Thick    Res. Vol.  Unk. Vol.
Mean    (All)    2285.568m    89.958m   1082.29m3    21.54m3
Mean    (Valid)  2285.568m    89.958m   1082.29m3    21.54m3
St. dev (All)        8.896m     9.146m    277.33m3    25.93m3
St. dev (Valid)      8.896m     9.146m    277.33m3    25.93m3
Minimum (All)    2265.379m    69.424m    547.13m3     0.17m3
Minimum (Valid)  2265.379m    69.424m    547.13m3     0.17m3
Maximum (All)    2304.631m   109.831m   1826.83m3   125.52m3
Maximum (Valid)  2304.631m   109.831m   1826.83m3   125.52m3

In this case the output print is sorted by spill elevation. Six spill elevations are below 2270 m, the
minimum acceptable spill elevation value for this reservoir. To identify the ranks of these simulations:


- Ask in Print Parameters to sort by Spill Elevations in increasing order (as was done before)
  and click on Print Results: you can easily identify the simulations that do not satisfy our criteria.
  The corresponding indices, 10, 82, 78, 97, 4 and 63, have to be masked in the macro variable.
- In the Data File Manager, click on the macro variable Simu Top with seismic and, with the
  right button, ask to mask the indices 10, 82, 78, 97, 4 and 63 in the menu Variable / Edit Macro
  Indices. You can check that these indices no longer belong to the list of valid indices by asking
  for Information on the macro variable.

(snap. 9.7-17)

Rerunning the spill point application gives the following distribution of volumes and spill point
depths:


(fig. 9.7-7)

The spill point depth lies between 2270.09 and 2311.33m.


In this Spill Point application, the gross rock volumes are calculated between the top reservoir and
the contact at the spill point depth for each simulation. In order to take into account the bottom surface of the reservoir, we must come back to the Volumetrics application: replace the contact definition by the new Spill Point macro variable and leave the rest of the window unchanged.

(snap. 9.7-18)
Statistics on Volume Risk Curves
================================
Polygon: P1
Smallest = 228.11 Mm3
Largest  = 504.29 Mm3
Mean     = 383.13 Mm3
St. dev. =  57.36 Mm3

Quantiles on Volume Risk curves
===============================
Global Field Integration
P90.00 = 423.05 Mm3
P50.00 = 534.32 Mm3
P10.00 = 841.89 Mm3

The distribution of volumes calculated from the 94 remaining simulations can be compared with the
distribution obtained initially with the constant contact, which was close to the average of the spill
point depths.

(fig. 9.7-8)



10. Plurigaussian
This case study shows how to apply the plurigaussian approach to simulate geological facies within two oil reservoir units. The aim of this
study is to introduce the geologist to the different techniques and
concepts needed to better control the lateral and vertical variability of
the facies distribution when dealing with complex geology.

- The reservoir is composed of a carbonatic/siliciclastic depositional
  environment characterized by a high variability in facies changes.
- The study explains how to integrate geological assumptions in the
  modeling, through the use of proportion curves (geographical trends
  of facies), lithotype rules (facies transitions) and gaussian functions
  (average geological body dimensions).
Important Note:
Before starting this study, it is strongly advised to read the Beginner's
Guide book, especially the following paragraphs: Handling Isatis,
Tutorial: Familiarizing with Isatis Basics, and Batch Processing & Journal Files.
All the data sets are available in the Isatis installation directory (usually
C:\program file\Geovariances\Isatis\DataSets\). This directory also contains
a journal file including all the steps of the case study. In case you get stuck
during the case study, use the journal file to perform all the actions
according to the book.

Last update: Isatis version 2014



10.1 Presentation of the Dataset


The information is composed of two separate ASCII files:
- The file wells.hd contains the facies information. This file is organized in a line type format: it
  is composed of a header (name and coordinates of each collar) and of the core samples (coordinates
  of the core ends, and an integer value which corresponds to the lithofacies code).
- The file surface.hd contains three boundary surfaces called surf1, surf2 and surf3. They are
  defined on a rotated grid (Azimuth 70 degrees, equivalent to 20 degrees in the mathematician convention).
The files are located in the Isatis installation directory/Datasets/Plurigaussian.

10.1.1 Loading the wells


The wells.hd ASCII file is imported using the File / Import / ASCII panel, within a new directory
data; the collar information is stored in the file Well Heads whereas the core information is stored
in the file Wells.

(snap. 10.1-1)

The data represent a total of 10 wells and 413 samples. Basic statistics run with the File / Data File
Manager utility on the Wells file show that the dataset lies within the following geographical
area:
XMIN =  95.05m    XMAX = 2905.49m
YMIN = -63.58m    YMAX = 3779.84m
ZMIN = -35.20m    ZMAX =   20.50m


Quantitative information about the variable lithofacies is provided by the Statistics / Quick Statistics application. The statistics tell us that this integer variable lies between 0 and 28. The average of
7.61 is not very informative; it is more relevant to consider the distribution of this discrete variable:
it either lies between 0 and 13 or takes the values 22 or 28. For each integer value, the utility
provides the number of samples and the corresponding percentage:

Integer Statistics Calculation: lithofacies

Integer Value   Count of samples   Percentage
      0                 8             2.06%
      1                 5             1.29%
      2                26             6.68%
      3                14             3.60%
      4                35             9.00%
      5                24             6.17%
      6                31             7.97%
      7                40            10.28%
      8                39            10.03%
      9                50            12.85%
     10                19             4.88%
     11                33             8.48%
     12                47            12.08%
     13                16             4.11%
     22                 1             0.26%
     28                 1             0.26%

10.1.2 Loading the surfaces


The same Import ASCII facility is used to store the surfaces in a new grid file Surfaces of the directory data. The basic statistics (using the File / Data File Manager utility) give the following grid
characteristics:

NX = 90   X0 =   25.00m   DX = 50.00m
NY = 90   Y0 = -775.00m   DY = 50.00m
Rotation: Angle = 20.00 (Mathematician)
XMIN = -1496.99m   XMAX = 4206.63m
YMIN =  -775.00m   YMAX = 4928.62m

This grid clearly covers a larger area than the wells. Finally we calculate statistics on the three surfaces (using the Statistics / Quick Statistics application) and check that these surfaces are not defined on
the whole grid (of 8100 cells), as shown in the following results:
Statistics:
---------------------------------------------------------------------------
| VARIABLE | Count | Minimum | Maximum |   Mean   | Std. Dev | Variance |
---------------------------------------------------------------------------
| surf1    |  7248 |  -10.20 |   -6.40 |    -8.68 |     1.34 |     1.80 |
| surf2    |  7248 |  -18.60 |  -15.40 |   -16.86 |     0.70 |     0.49 |
| surf3    |  7248 |  -26.00 |  -18.80 |   -22.25 |     1.92 |     3.68 |
---------------------------------------------------------------------------

Note - The z variable corresponds to an elevation type (increasing upwards).

10.1.3 Standard Visualization


Once the datasets are loaded, it is advised to visualize them in 3D. The graphical environment of
Isatis does not allow superimposing displays of data coming from 2D grids and 3D lines. To tackle this
problem, we will transform the 2D grid surfaces into 3D point surfaces. Open Tools / Copy Variables / Extract Samples...

(snap. 10.1-2)

This process transforms the 2D grid variable surf1 from the Surfaces file into a new file called Surf1
3Dpoints with an output variable called surf1 2D points. Despite its name, this Surf1 3Dpoints
file is still in 2D; in order to transform it into a 3D file, the variable surf1 2D points has to be changed
into a z coordinate. To achieve that, this variable has to be informed over the whole
extension of the grid, which is not the case for all the surfaces. In the calculator, assign a constant
value to the undefined values of the surf1 2D points variable and call the output variable surf1 z.


(snap. 10.1-3)

The last calculator command can be read as follows: if v1 is not defined (~ffff) then store a value of
-50 into the new variable v2, else store the value of v1 into v2.
This new variable v2 (surf1 z) has been created with a float type; before transforming it into a coordinate, we have to change this type to a float length extension:
- Enter the Data File Manager editor, select the variable of interest and ask for the Format option.
- Click on the Unit button and switch on the Length Variable option; finally select the Length
  Unit, meters in the present case.
- Now the 2D file may be changed into a 3D file by selecting the surf1 z variable as the new z
  coordinate, using the Modify 2D-3D option.
- Repeat the same operation for the two other surfaces.


Now we can merge displays coming from lines (wells) and points files (surfaces). Create a new
display by clicking on Display / New Page:
- Choose Perspective for the Representation Type, then in the Display Box tab switch off the
  Automatic Scales toggle and set the z scaling factor to 25. Change also the definition mode of
  the display box, in order to be able to specify the min and max values along the three axes; the
  Calculate button may help you initialize the values.

(snap. 10.1-4)


- Double-click on Lines in the Available Representations list. Select the variable lithofacies to be
  displayed with a default Rainbow color scale. Select the Well Names variable as the variable to
  be displayed from the linked file. In the Lithology tab, change the Representation Size to 0.1 cm.

(snap. 10.1-5)


- For each 3D point surface, ask for a Basemap representation and select the corresponding 3D
  point file. Each surface is customized with a different pattern color. Click on Display.

- The order of the representations in the display may be modified using the Move Back and Move
  Front buttons in the Contents window.

(fig. 10.1-1)

10.1.4 Visualization With the 3D Viewer


The Isatis 3D Viewer allows you to easily create advanced 3D displays, efficiently overlaying different
types of information with more flexibility than the standard displays.
For instance, the view below is obtained by a simple drag and drop of different objects from the
Study Contents part of the Viewer (left) to the Representations part (middle). Each drag and drop
operation leads to an immediate display in the main part of the viewer, with default parameters
that may be modified afterwards. In the present case:
- the 3D lines representation of the Wells file is performed with a radius proportional to the lithofacies and the same Rainbow color scale as in the standard display,
- the three surfaces from the Surfaces file are successively copied into the Surfaces representation type; the resulting iso-surfaces are displayed with appropriate color scales that can be parametrized by the user,
- the Zscale is set to 50 and the compass is shown.

Note - Detailed information about the 3D Viewer parameters may be found in the On-Line
documentation.


(snap. 10.1-6)


10.2 Methodology
The wells information containing the lithofacies has been imported into Isatis, as well as the three
boundary surfaces. All the identified lithofacies do not necessarily need to be treated: Isatis offers
the possibility of grouping the lithofacies variable into lithotypes; in this case study the variable
lithotypes corresponds to groups of lithofacies related to the same depositional environment.
The next step consists in creating the 3D structural grid that will cover the whole field. The boundary
surfaces can be used to split the structural grid into different units. All the nodes of each unit will
form a new grid called the working grid. These working grids are created using the Tools / Discretization & Flattening facility. They may be treated differently: for example, their vertical discretization
may differ from the mesh of the 3D structural grid (0.2 m), or they can be flattened
according to a reference surface. In the next graphic a working grid using a reference surface is
represented; it is automatically flattened, the reference surface corresponds to the vertical origin, and
Isatis assigns to this new grid the X, Y and Z mesh values of the structural grid. The facies simulation
will be performed in these working grids. At the end, the different working grid simulations
will be merged and back-transformed into the 3D structural grid. This is illustrated in the next
graphic, which shows a Y-Z 2D cross-section.

(fig. 10.2-1)

The well discretization of facies will then be achieved using a constant vertical lag. Even if it is not
compulsory for the vertical lag to be equal to the Z mesh of the working grid, it is advised to use the
same value in order to have one conditional facies value per node.
The plurigaussian simulation needs the proportions of the lithotypes to be defined for each cell of
the working grid, using the Statistics / Proportion Curves panel. The transitions between lithotypes will
then be specified within the Statistics / Plurigaussian Variogram panel, which will ultimately be used
to perform the conditional plurigaussian simulation itself.
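Before going through the panels, the core of the plurigaussian model can be sketched in a few lines: gaussian fields are thresholded according to the lithotype rule and the target proportions. The sketch below uses smoothed white noise as a crude stand-in for the turning-bands gaussian simulations, constant proportions taken from the run printout further down, and a simple rule where one gaussian isolates Conglomerate and a second one orders the remaining lithotypes:

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

def gaussian_field(shape, smooth, rng):
    # Crude stationary gaussian field: smoothed white noise, re-standardized.
    g = gaussian_filter(rng.standard_normal(shape), smooth)
    return (g - g.mean()) / g.std()

rng = np.random.default_rng(4)
g1 = gaussian_field((90, 90), 4.0, rng)      # rules Conglomerate vs the rest
g2 = gaussian_field((90, 90), 8.0, rng)      # rules Sandstone/Shale/Limestone

p_cong = 0.045                               # global proportions (printout)
p_rest = np.array([0.336, 0.303, 0.315]) / (1.0 - p_cong)

t1 = norm.ppf(p_cong)                        # threshold on g1
t2 = norm.ppf(np.cumsum(p_rest)[:-1])        # two thresholds on g2

litho = np.full((90, 90), 3)                 # 3 = Limestone by default
litho[g2 < t2[1]] = 2                        # 2 = Shale
litho[g2 < t2[0]] = 1                        # 1 = Sandstone
litho[g1 < t1] = 0                           # 0 = Conglomerate overrides
print(np.bincount(litho.ravel()) / litho.size)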


10.3 Creating the Structural Grid


This step consists in creating an empty 3D grid which will be used to split the whole reservoir into
units using the boundary surfaces. This 3D grid must have the same horizontal characteristics as
the 2D grid containing the surfaces data. For the third dimension, we adjust the 3D grid extension to
the minimum of the deepest surface (surf3 = -26) and the maximum of the uppermost surface (surf1),
and set the vertical mesh to 0.2 m so as to cover the whole field. Finally, the 3D grid must be rotated
(around the Z axis) by 20 degrees in the mathematical convention, or Azimuth = 70 (geological
convention), to be consistent with the 2D surfaces grid.

(snap. 10.3-1)

The 3D structural grid is created in a file simu of a new directory reservoir. It will be used to split
the field into two adjacent units, each of them being defined by the nodes located between two
boundary surfaces called 'top' and 'bottom'. These grid nodes will be stored in two new grids, the
working grids of each unit. The units are called upper and lower.


10.4 Creating the Working Grid for the Upper Unit


Now that the wells, the surfaces and the structural grid have been either imported or created, we need to
create a 'working grid' and 'discretize' the wells for a specific unit (upper) in order to perform the
plurigaussian simulation process. This process will then be repeated for the lower unit.
This application corresponds to the menu Tools / Discretization & Flattening. A new parameter file
called UnitTop is created to capture the names of all the variables of interest, together with most of
the working parameters. Click on (NEW) Proportions... to give the name. The parameters, separated into several tabs, are discussed in the next paragraphs.

10.4.1 Input Parameters


In this panel, we specify if the simulation is conditional (matching some control input data) or not.

(snap. 10.4-1)

If data are available, we must identify the information used as control data, i.e. the variable lithofacies contained in the file data / Wells. Note that the facies variable can be converted into a new
integer variable called a lithotype (group of facies) and this new variable can also be stored in the
input data file. To help identify the wells in future displays, the variable Well Name (contained in
the linked header file WellHeads) is finally defined.
If no data is available, the plurigaussian simulation is non conditional.

10.4.2 Grids & Geometry Parameters


In this panel, we define the way the simulated unit (working grid) will be distorted from its structural position into a working position, prior to simulation. We also define all the necessary information to allow the final back-transformation within the initial structural grid.
The 3D structural grid file reservoir / simu will contain the new pointer variable ptr_UnitTop
which will serve for the back-transformation of the 3D working grid into the 3D structural grid.

Note - When processing several units of the same 3D structural grid, we must pay attention to use
different pointer variables for the different units.

(snap. 10.4-2)


The unit is characterized by its top and bottom surfaces: for this unit, we use the surface surf1 for
the top and surf2 for the bottom, both contained in the file Surfaces of the directory data.
We must also define the way the 3D structural grid is transformed in the working system: we refer
to the horizontalisation step (flattening). This transformation is meant to enhance the horizontal
correlation and consists in transforming the information back to the sedimentation stage. The
parameters of this transformation are usually defined by the geologist, who will choose between the
two following scenarios:
- The horizontalisation parallel to a chosen reference surface. In this scenario (the one
  selected for this unit) we must define the surface which serves as the reference: here surf2.
- A vertical stretch and squeeze of the volume between the top and bottom surfaces: this is called
  the proportional horizontalisation. In this scenario, there is no need to define any reference surface.
We could also store the top and bottom surfaces after horizontalisation in the 2D surface grid in
order to check our horizontalisation choice. This operation is only meaningful in the case of parallel
horizontalisation.
Finally we must define the new 3D working grid where the plurigaussian simulation will take place
(new file WorkingGrid in the new directory UnitTop). The characteristics of this grid will be
derived automatically from the geometry of the 3D structural grid and the horizontalisation parameters. In the case of proportional horizontalisation, we must provide the vertical characteristics of
the grid which cannot be derived from the input grid. In the case of parallel horizontalisation, the
vertical mesh is equal to the one of the structural grid and the number of meshes is calculated so as
to adjust the simulated unit.
Some cells of the 3D working grid may be located outside the simulated unit: they will be masked
off in order to save time during the simulation process (new selection variable UnitSelection).
This grid will finally contain the new macro variable defining the proportion of each lithotype for
each cell, which will be used during the plurigaussian simulation (macro variable Proportions). At
this stage, these proportions are initialized using constant values for all the cells; they are calculated
as the global proportions of discretized lithotypes. This macro variable will be updated during the
proportion edition step.

10.4.3 Lithotypes Definition


In this panel, we define the way the lithofacies input variable is transformed into the lithotype variable to be simulated. This operation is meant to reduce the number of lithotypes to be simulated
(compared to the large number of different lithofacies), with the possibility of regrouping them
according to the geologist's knowledge. This transformation may simply be skipped if the lithofacies
already refers to the lithotype data.
In our case, we define 4 lithotypes whose definition and attributes are provided in the two next
panels:


- Lithotype Conversion panel
  For each lithotype, we define the range of values of the lithofacies variable as the union of one
  or several intervals. Here each lithotype corresponds to a single interval of values of the lithofacies variable, specified by its lower and upper inclusive bounds. A lithofacies value which does
  not belong to any interval is simply discarded.

(snap. 10.4-3)
- Lithotype Attributes panel
  A color and a name are attributed to each lithotype. These colors can be selected using the standard color selector widget or they can be downloaded from an already existing palette. Moreover, this procedure enables us to establish a palette and a color scale which can be used
  afterwards to represent the simulated results using the Display Grid Raster facility: for simplicity, both objects have the same name LithotypesUnitTop.


(snap. 10.4-4)

10.4.4 Discretization Parameters & Output


In this panel, we define all the parameters related to the discretization of the well data (lithofacies).
The discretized wells will be used as input data for the plurigaussian simulation and to calculate the
proportions. This discretization is meant to convert the information into data measured on supports
of equivalent size. The discretization is performed independently along each well: a well is sliced
into consecutive cores of equal dimension (here Lag = 0.2m).
This value is set by default to the vertical mesh of the working grid, so as to keep, on average, one
sample per grid cell.
In the case of deviated (or horizontal) wells, it is crucial to compensate for the large distortion ratio
between the horizontal and vertical cell extensions. The operation consists in dividing the horizontal
distances by the distortion ratio before slicing with the constant lag in this "corrected" space. The
value of this distortion ratio should be consistent with the ratio of the horizontal to the vertical cell
extensions (50m / 0.2m = 250).
In our case (lag = 0.2m), a vertical as well as a horizontal well will produce one discretized sample per cell.
The ending core is kept only if its dimension is larger than a minimum length threshold (here 10%
of the lag, i.e. 0.02m).
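A sketch of this slicing, with a hypothetical helper (not the Isatis code): horizontal distances are compressed by the distortion ratio, the well is cut every lag along the corrected curvilinear abscissa, and a too-short ending core is dropped.

import numpy as np

def discretize_well(xyz, lag=0.2, ratio=250.0, min_len=0.02):
    # xyz: polyline of well points; returns core-center coordinates.
    xyz = np.asarray(xyz, dtype=float)
    d = np.diff(xyz, axis=0)
    d[:, :2] /= ratio                          # compress horizontal distances
    s = np.concatenate([[0.0], np.cumsum(np.sqrt((d**2).sum(axis=1)))])
    cuts = np.arange(0.0, s[-1], lag)          # core starts, constant lag
    ends = np.minimum(cuts + lag, s[-1])
    keep = (ends - cuts) >= min_len            # drop a too-short ending core
    centers = (cuts[keep] + ends[keep]) / 2.0
    return np.column_stack([np.interp(centers, s, xyz[:, k]) for k in range(3)])

well = [(0.0, 0.0, 0.0), (10.0, 0.0, -2.0), (60.0, 0.0, -3.0)]  # deviated well
print(discretize_well(well).shape)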
A sliced core may contain pieces of several initial samples, each sample being assigned to a given
lithotype. Therefore, as a result of the slicing, we build in the Output tab a new file DiscretizedWells in the directory UnitTop, where the macro variable Proportions will contain the proportions
of the different lithotypes for each sample, in a line type format.

Note - Pay attention to the fact that the discretization process creates two Proportions macro
variables; the first one, defined in DiscretizedWells, will be used to calculate and edit the second
one, defined in the WorkingGrid, which will serve as the input model for the plurigaussian
simulation.


Several subsequent procedures cannot handle proportions and require instead a single lithotype value to be
assigned to each sample: this is why we also compute the "representative" lithotype of each
sample (variable Lithotype). Several algorithms are available for selecting this representative lithotype:
- Central: the representative lithotype is the one of the sample located in the middle of the sliced
  core.
- Most representative: the representative lithotype is the one which has the largest
  proportion over the sliced core.
- Random: the representative lithotype is taken at random according to the proportions of the different lithotypes present within the sliced core.
We can also store the actual length of the sliced cores. A new linked header file is also created
which contains the name of the wells (variable WellName). This variable is compulsory; if no variable has been provided in the Input panel, a default value is automatically generated.

(snap. 10.4-5)


10.4.5 Run
Click on Run. All the information previously defined is stored in the New Proportions Parameter
File... and used to perform the operation. As complementary information, this facility provides the
following printouts:
Line #1 : 9 initial samples intersected by the unit
First Intersection Point : x = 2719.30m  y = 2146.71m  z =  9.80m
Last Intersection Point  : x = 2647.38m  y = 2181.37m  z = 16.82m
.../...
Line #10 : 11 initial samples intersected by the unit
First Intersection Point : x = 2010.75m  y = 347.77m  z =  7.00m
Last Intersection Point  : x = 2010.75m  y = 347.77m  z = 16.40m

Description of the New Working Grid
===================================
File Name : UnitTop/WorkingGrid
Mask Name : UnitSelection
NX= 90  X0=   25.00m  DX= 50.00m
NY= 90  Y0= -775.00m  DY= 50.00m
NZ= 51  Z0=    0.10m  DZ=  0.20m

Number of valid nodes = 308368/413100

Type of system    : Parallel to a Reference Surface
Surfaces File     : data/Surfaces
Top Surface       : surf1
Bottom Surface    : surf2
Reference Surface : surf2

Description of the 3D Structural Grid
=====================================
File Name               : reservoir/simu
Pointer to Working Grid : ptr_UnitTop

NX= 90  X0=   25.00m  DX= 50.00m
NY= 90  Y0= -775.00m  DY= 50.00m
NZ= 99  Z0=  -26.00m  DZ=  0.20m

Number of valid nodes = 301033/801900

Statistics for the Discretization
=================================

Input Data:
-----------
File Name     : data/Wells
Variable Name : lithofacies

Total Number of Lines   = 10
Total Number of Samples = 413
Analyzed Length         = 246.65m
Initial Lithotypes Proportions:
Conglomerate = 0.040
Sandstone    = 0.348
Shale        = 0.309
Limestone    = 0.304

LithoFacies to Lithotype Conversion:
------------------------------------
Conglomerate = [7,7]
Sandstone    = [9,9]
Shale        = [10,10]
Limestone    = [12,12]

Discretization Options
----------------------
Discretization Length       = 0.20m
Minimum Length              = 0.02m
Distortion Ratio (Hor/Vert) = 250
Lithotype Selection Method  = Central

Discretization Results:
-----------------------
File Name        : UnitTop/DiscretizedWells
Lithotype Name   : Lithotype
Proportions Name : Proportions[xxxxx]

Total Number of Lines      = 10
Total Number of Samples    = 426
Number of Informed Samples = 343
Discretized Lithotype Proportions:
Conglomerate = 0.045
Sandstone    = 0.336
Shale        = 0.303
Limestone    = 0.315
Assigned Lithotype Proportions:
Conglomerate = 0.047
Sandstone    = 0.338
Shale        = 0.297
Limestone    = 0.318

To visualize the selection of the unit where the simulation will take place and superimpose the discretized
wells:
- In a new Display page, switch the Representation type to the Perspective mode.
- Select, with a Symbols representation, the grid file UnitTop / WorkingGrid. In the Grid Contents area, switch to the Excavated Box mode and center IX and IY on index number 45. In
  the Data Related Parameters area customize two Flags:
  - the first flag with a lower bound of 0 and an upper bound of 0.5; a red point pattern with a size
    of 0.1 has been used,
  - the second flag with bounds from 1 to 1.5 in order to catch the selection values (1); gray
    circles of size 0.1 have been used to represent the upper unit selection.
- Select a new Lines representation and select the line file UnitTop / DiscretizedWells. Select
  Lithotype for Lithology #1. In the Lithology tab, it is advised to use the LithotypesUnitTop
  color scale and customize the representation size to 0.1 cm.
- Click on Display.

(fig. 10.4-1)

Note - You can also display the Lithotype variable in a literal way by using another item, for
example Graphic Left #1: Lithotype.


10.5 Computing the Proportions


The principal task at this stage is to estimate the lithotype proportions at each cell of the working grid,
by editing the proportions at wells obtained after the discretization and flattening process previously performed.
This application corresponds to the menu Statistics / Proportion Curves. Proportions must be analyzed and estimated, usually in a different way for each unit, since they are related to a specific depositional environment. These proportions are an essential ingredient of the plurigaussian model.
The application is displayed as a main graphic window which represents the field base map in a
horizontal projection.
10.5.1 Loading Data


Obviously, on its first use, the graphic window is left blank. The first operation is to use the Load
Data option in the Application Menu to select the relevant information.

(snap. 10.5-1)


When entering the name of the Proportion Parameter File, all the other parameters are defined automatically and usually do not have to be modified. We can now review the parameters of interest
for this application:
- the discretized wells (UnitTop / DiscretizedWells) where the macro variable Proportions is
  specified,
- the linked header file WellHeads containing the well names (variable WellName),
- the 3D working grid (UnitTop / WorkingGrid) where the macro variable Proportions will be
  modified, within the selected area (selection variable UnitSelection).
Once these parameters have been defined, the main graphic window represents the 10 wells in the
rotated coordinate system of the working grid. If we look carefully, we can see that some wells are
deviated (W1, W5 and W9): their traces are projected on the horizontal plane.
(snap. 10.5-2)


In the lower right corner of the graphic window, a vertical proportion curve (VPC for short) represents the variation of the global proportions along the vertical axis. It is displayed with a cross
symbol, which represents the VPC's anchor, useful for edition purposes.
This application offers two modes, indicated at the bottom, depending on whether we operate on the
polygons or on the VPC. The Graphic Menu of this window depends on the selected option. In the case
of Polygon Edition, the following menu options are available:
- Select All Polygons
- Create Regular Polygons...
- Create Polygon(s)
In the case of Vertical Proportion Curves Edition, the following menu options are available:
- Deselect
- Select All VPCs
- Select Incomplete VPC(s)
- Display & Edit...
- Editing
- Apply 2D constraint...
- Completion...
- Smoothing...
- Reset From Raw Data
- Delete VPC(s)
- Print VPC(s)

We can define the graphic window characteristics in the Graphic Options panel of the Application
Menu. These parameters will be illustrated in the subsequent paragraphs:
- the representation of the wells and the VPC on horizontal projections,
- the parameters concerning the VPC specific windows,
- the miscellaneous graphic parameters such as the polygon display parameters, the graphic
  bounds (for projections) and the order of the lithotypes.
Here, this panel is used in order to define the options for the VPC display windows: we switch ON
the flag Displaying Raw VPC and OFF the one asking for normalization.


(snap. 10.5-3)

10.5.2 Display global statistics


In this step, we visualize some global statistics on the proportions. First, we select the global VPC
in the lower right corner by picking its anchor (cross symbol) and use the Display & Edit option of
the Graphic Menu in order to display this VPC in a separate graphic window. This graphic
shows the VPC projected from the wells to the working grid in a normalized mode; this VPC can be
edited. Note that in our case we have another VPC to the right; it corresponds to the raw
mode VPC Global Raw Proportions that was specified previously in the Graphic Options panel
(without normalization).


- The raw mode: for each (vertical) level, the (horizontal) bar is proportional to the number
  of samples used to calculate the statistics. You can display the numbers by switching on the
  Display Numbers option in the Graphic Options panel. Each bar is subdivided according to the
  proportions of each lithotype, represented using its own color. The order of the lithotypes is
  defined in the Graphic Options panel.
- The normalized mode: the proportions are normalized to sum up to 1 at each level (except
  the levels where no sample is available). Note that the first and last levels of this global proportion curve are left blank as they do not contain any sample.

(fig. 10.5-1)

A second feature is obtained using the Display Pie Proportions option in the Application Menu. It
creates a separate window where each well is represented by a pie located at the well header location. The pie is subdivided into parts whose sizes represent the proportion of each lithotype calculated over the whole well (Calculated From Lines option). We can normalize the proportions by
discarding any information which does not correspond to any lithotype. Here, instead, we have chosen to take them into account: they are represented as a white fictitious complementary lithotype.


(fig. 10.5-2: 2D Point Proportion basemap, with one pie chart per well for wells W1 to W10)

For particular usage, some lithotypes can be regrouped: this new set is then displayed as a fraction
of the pie.
The 10 wells are displayed using the pie proportional chart, where each lithotype is represented with
a proportional size, calculated over the whole well. This application is used to check that the first
lithotype (called Conglomerate), represented in red, is present only in the 6 wells located in the
northern part of the field (W1, W2, W6, W7, W8 and W9) and absent in the south, hence the non
stationarity.
- Creating polygons
In this step, we turn the option of the main graphic window into the Polygon Edition mode. Using
the Create Polygon(s) option of the Graphic Menu, we digitize two polygons. When a polygon is
created, a vertical proportion curve is automatically calculated and displayed (in its normalized
mode) at the polygon anchor position (located by default at the center of gravity of the polygon).
We can now select one VPC (or a group of them) and modify it (or them) using the features demonstrated hereafter.


(snap. 10.5-4)
- Edition of a VPC
In our case, we select the VPC corresponding to the northern polygon and use the Display & Edit
option of the Graphic Menu in order to represent it in a separate window. As already discussed when
visualizing the global VPC, we have chosen (in the Graphic Options panel) to represent the VPC in
its raw version on the right and in its normalized version on the left.


(fig. 10.5-3)

We can easily check that, here again, the top and bottom levels of this VPC are not informed.
Before using this VPC in a calculation step, we need to complete the empty levels.
To complete the empty levels, we use the Application / Completion option. By default this algorithm
first locates the first and last informed layers (Number of Levels = 1). If an empty layer is found
between the first and last informed layers, the proportions are linearly interpolated. In extrapolation, the proportions of the last informed layer are duplicated. An option offers to replace the proportions of the last informed layer by the ones calculated over a set of informed layers, whose number
is defined in the interface. The result is immediately visible in the normalized version of the VPC in
the left part of the specific graphic window, and in the main graphic window. Note that this completion operation could have been carried out on a set of VPCs (without displaying them in separate
graphic windows).


(fig. 10.5-4)

Smoothing the vertical transitions is advised and may be achieved with the Application /
Smoothing option. It runs a low-pass filtering algorithm on the normalized version. This procedure
requires the VPC to be completed beforehand. It can be applied several times on each selected VPC:
here 3 passes are performed. Once more, the results are visible in the normalized version of the VPC
displayed in the left part of the specific graphic window. The corresponding VPC is also updated in
the main graphic window.

(fig. 10.5-5)
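Both operations are easy to picture on a small array; the sketch below (a hypothetical stand-alone helper, not the Isatis code) completes the empty levels by linear interpolation with duplication at the ends, then applies a three-point low-pass filter and renormalizes each level:

import numpy as np

def complete_vpc(vpc):
    # vpc: rows = levels, columns = lithotype proportions; NaN = empty level.
    vpc = np.array(vpc, dtype=float)
    idx = np.flatnonzero(~np.isnan(vpc[:, 0]))
    lev = np.arange(vpc.shape[0])
    for k in range(vpc.shape[1]):          # interp inside, duplicate at the ends
        vpc[:, k] = np.interp(lev, idx, vpc[idx, k])
    return vpc

def smooth_vpc(vpc, passes=3):
    for _ in range(passes):                # 3-point moving average per pass
        padded = np.vstack([vpc[:1], vpc, vpc[-1:]])
        vpc = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    return vpc / vpc.sum(axis=1, keepdims=True)   # renormalize each level

nan = np.nan
raw = [[nan, nan], [0.2, 0.8], [nan, nan], [0.6, 0.4], [nan, nan]]
print(smooth_vpc(complete_vpc(raw)))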


The same procedure applied on the southern VPC leads to the following graphic representation.

(fig. 10.5-6)

Note that the main difference between the two VPCs, even after the completion and smoothing steps,
is the absence of the first lithotype in the VPC corresponding to the southern polygon. As these
VPCs will serve as conditioning data for the subsequent interpolation phase, their contents as well as
their locations are essential.
- Computing proportions on the 3D grid
We recall that the proportions of the different lithotypes over the cells of the working grid were
initially set to a constant value corresponding to the global proportions calculated using all the
discretized samples. The aim of this application is to calculate these proportions more accurately,
reproducing for example the absence of the first lithotype in the south.
For that purpose, we use the Compute 3D Proportions option of the Application Menu. This procedure
requires all the VPCs used for the calculation to be completed beforehand.
This application offers three possibilities for the calculation:


Copying the global VPC (displayed in the lower right corner of the main graphic window): the
proportions are set to those calculated globally over each layer. This crude operation is slightly
cleverer than the initial global proportions as the calculations are performed layer by layer:
therefore the vertical non stationarity is taken care of.

Inverse squared distance interpolation. This well-known technique is applied using the VPC as constraining data. For each level and each lithotype, the resulting proportion in a given cell is obtained as the linear combination of the proportions in all the VPC, for the same lithotype and the same layer. The weights of this combination are proportional to the inverse squared distance between the VPC and the target cell (a sketch of this weighting is given after this list).

Kriging. This technique is used independently for each layer and each lithotype, using the VPC as constraining information. A single 2D model (ModelProportions) is created and used for all the lithotypes: we therefore assume the intrinsic hypothesis of the multivariate linear model of coregionalization.
The model can be defined interactively using the standard model definition panel. Here it has been set to an isotropic spherical variogram with a range of 5000m. There is no limitation on the number and types of basic structures that can be combined to define the model used for estimating the proportions. The sill is meaningless unless several basic structures are combined. A sketch of the corresponding kriging system is given after the snapshot below.

(snap. 10.5-5)
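For the kriging option, each layer and each lithotype leads to one ordinary kriging system built from the shared 2D model. A minimal sketch, assuming the isotropic spherical variogram with the 5000 m range quoted above (helper names are hypothetical, not the Isatis implementation):

import numpy as np

def spherical(h, range_=5000.0, sill=1.0):
    """Isotropic spherical variogram, as set in the interface above."""
    r = np.minimum(np.asarray(h, float) / range_, 1.0)
    return sill * (1.5 * r - 0.5 * r ** 3)

def ok_weights(vpc_xy, cell_xy, gamma=spherical):
    """Ordinary kriging weights of the VPC proportions at one cell.

    Solves [gamma_ij 1; 1 0][w; mu] = [gamma_i0; 1]: the weights sum
    to 1, so the kriged proportions remain consistent level by level.
    """
    vpc_xy = np.asarray(vpc_xy, float)
    cell_xy = np.asarray(cell_xy, float)
    n = len(vpc_xy)
    d = np.linalg.norm(vpc_xy[:, None, :] - vpc_xy[None, :, :], axis=2)
    lhs = np.ones((n + 1, n + 1))
    lhs[:n, :n] = gamma(d)
    lhs[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = gamma(np.linalg.norm(vpc_xy - cell_xy, axis=1))
    return np.linalg.solve(lhs, rhs)[:n]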

Display the proportions from the 3D working grid

The proportions have been calculated over all the cells of the 3D working grid; it is now time to
visualize them using the Display 3-D Proportions option of the Application Menu.


This feature is specific to the display of proportions. The figure consists of a horizontal projection: each cell of the horizontal plane is displayed as a VPC obtained by considering the proportions of all the lithotypes for all the levels of the grid column.
The following operations can be performed:

Sampling. This option is relevant when the number of grid cells is large. We must simply
specify the characteristics of a coarser grid (horizontally). When the step of the coarser grid
is set to 1, no sampling is performed and the entire grid is visualized.

Averaging. Before visualization, the VPC are averaged layer by layer in moving windows.
The extension of the moving window is specified by the user.

Finally we can choose to select a vertical window specifying the top and bottom levels to be
visualized.

For this first display, the 3D working grid is sampled by steps of 10, with the origin set at rank 5: only 9 cells out of the 90 cells of the working grid are presented along each direction:

(snap. 10.5-6)


3D Proportion Map

(fig. 10.5-7)

Note - Pay attention to the fact that the represented grid is rotated (by 20 degrees).
The resulting graphic shows that the two conditioning VPC are reproduced at their locations. However, the first lithotype still shows up in the southern part of the display of the estimated proportions, because of the weak conditioning of the kriging step, based on two VPC only.
Enhancing the conditioning set of information is therefore advised:


This can be achieved by increasing the number of VPC which serve as constraining data for the
kriging step. A first solution is to digitize more polygons as one VPC is attached to each polygon, but this may lead to poorly defined VPC.

The other solution, considered here, is simply to duplicate each VPC several times in its calculation polygon: in VPC Edition mode, right click on the basemap and select the Duplicate One VPC option. You then have to pick one VPC and move its duplicate to the desired location.

Each VPC is duplicated twice in its polygon, hence 6 VPC in total.

(fig. 10.5-8)

These VPC are used through the same computing process (Compute 3D Proportions in the Application Menu), using the kriging option with the same model as before; the printout gives the proportions:
Computing the proportions on the 3D Grid
========================================
Number of levels     = 51
Number of lithotypes = 4

Experimental Proportions
- Global VPC
Number of active samples   = 48
Proportion of lithotype #1 = 0.047
Proportion of lithotype #2 = 0.367
Proportion of lithotype #3 = 0.236
Proportion of lithotype #4 = 0.350

- Regionalized VPC(s)
Number of VPC used         = 6
Number of active samples   = 306
Proportion of lithotype #1 = 0.091
Proportion of lithotype #2 = 0.320
Proportion of lithotype #3 = 0.219
Proportion of lithotype #4 = 0.369

Proportions calculated on the simulation grid
Number of cell along X     = 90
Number of cell along Y     = 90
Number of cell along Z     = 51
Number of calculated cells = 308370
Proportion of lithotype #1 = 0.027
Proportion of lithotype #2 = 0.264
Proportion of lithotype #3 = 0.259
Proportion of lithotype #4 = 0.450

The results are displayed using the Display 3D Proportions option of the Application Menu. As
expected, the first lithotype does not show up in the southern area anymore.

(fig. 10.5-9)


Some of the resulting VPC seem to be incomplete. This is due to the fact that, for these cells, the
whole vertical column of the grid does not lie within the unit: it is truncated by the unit limiting surfaces and therefore some cells are masked by the unit selection.
The final step consists in using the Save & Run option of the Application Menu which updates the
Proportions Parameter File. Remember that the last edited proportion model will serve as input for
the plurigaussian simulation.


10.6 Lithotype Rule and Gaussian Functions


This phase is specific to the plurigaussian simulations as it is used to define the models of the two
underlying gaussian random functions, as well as the lithotype rule which is used to convert the
results from the bi-gaussian domain into lithotypes. This application corresponds to the menu Statistics / Plurigaussian Variograms.
The principal idea behind the plurigaussian approach is to split an initial global rectangle, called the `lithotype rule', into sub-rectangles and to assign one lithotype to each sub-rectangle. The rectangles can be split and the lithotypes assigned in different ways; in this way you can better control the lithotype transitions.
The lithotype rule is usually represented by a diagram where the horizontal axis stands for the first
gaussian random function G1 and the vertical axis for the second gaussian random function G2. By
adding successively horizontal and vertical limits, we split the diagram into rectangles and assign
one lithotype to each rectangle, for instance:

(fig. 10.6-1)
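To make the truncation mechanism concrete, here is a minimal Python sketch of such a rule, with one vertical limit on G1 and one horizontal limit on G2 (the thresholds and the three-lithotype layout are purely illustrative):

def apply_lithotype_rule(g1, g2, t1=0.0, t2=0.5):
    """Convert a pair of gaussian values into a lithotype number:
    the left rectangle is L1, the right one is split horizontally
    into L2 (below the G2 threshold) and L3 (above it)."""
    if g1 < t1:
        return 1        # L1: whole left-hand rectangle
    elif g2 < t2:
        return 2        # L2: lower right-hand rectangle
    else:
        return 3        # L3: upper right-hand rectangle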

Apart from the conditioning data and the 3D grid proportion curves, the plurigaussian simulation
will honor the lithotype rule and the variographic properties of the two gaussian functions.
In geological terms, this means that we can force lithotypes to follow transitional or erratic variations, intrusions, erosions of lithotypes into the whole unit or into a group of lithotypes. Furthermore we have the possibility to control anisotropies, horizontal and vertical extensions (ranges) and
behaviors (type of variogram) for the two axes of the lithotype rule (for two groups of lithotypes).
We must first define the name of the Proportion Parameter File (UnitTop) which contains all the relevant information. In particular, it contains the information on:

The well data information (UnitTop / DiscretizedWells): in this application, we use the
assigned lithotype value (Variable Lithotype) at each sample rather than the proportions.

The 3D Working grid (UnitTop / WorkingGrid) which contains the macro variable of the
last edited proportions of the different lithotypes in each cell (Proportions).


(snap. 10.6-1)

Lithotype rule
Click on the Define button. The initial lithotype rule has to be split into sub-rectangles, each lithotype corresponding to one rectangle. The choice of the lithotype rule should be based on all the geological information about the unit: the geological model of the unit is important for assigning the lithotype transitions. The application also produces a set of "histograms" (on the right part of the window) showing the vertical frequency of transitions between lithotypes along the wells.
Click on Cancel.


(snap. 10.6-2)

Click on Print Transition Statistics...


Transition matrices
===================
L1 = Conglomerate
L2 = Sandstone
L3 = Shale
L4 = Limestone

Downward probability matrix
---------------------------
      Number     L1     L2     L3     L4
L1        16  0.625  0.375  0.000  0.000
L2       116  0.009  0.853  0.129  0.009
L3       102  0.000  0.059  0.853  0.088
L4        99  0.000  0.000  0.000  1.000

Upward probability matrix
-------------------------
      Number     L1     L2     L3     L4
L1        11  0.909  0.091  0.000  0.000
L2       111  0.054  0.892  0.054  0.000
L3       102  0.000  0.147  0.853  0.000
L4       109  0.000  0.009  0.083  0.908

For example, from the Downward probability matrix printout (from top to bottom) we see that L1 (in red) only has vertical contact with L2 (in orange), with a frequency of 37.5%. The same calculation from bottom to top (Upward probability matrix) shows that L1 still has contact only with L2, but now with a frequency of only 9.1%.
The facies transitions can also be read from the histograms in the Lithotype Rule Definition graphic. The left column shows the whole set of lithotypes, and the right column plots the lithotypes that have a non-zero transition frequency with the corresponding lithotype of the left column.
In our case the lithotype from the left column corresponds to L1 (the red one) and, as said before, it has only one contact transition, with L2 (whatever the direction); therefore only one histogram bar is plotted, with a frequency equal to one.
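Such matrices are easy to recompute from a discretized lithotype column. A minimal sketch, assuming lithotype codes 1..4 ordered from top to bottom (not the Isatis implementation):

import numpy as np

def downward_matrix(litho, n_litho=4):
    """Downward transition probability matrix of one lithotype column.

    litho: sequence of lithotype codes (1..n_litho), top to bottom.
    Entry (i, j) is the probability that code i+1 is immediately
    followed downward by code j+1 (self-transitions included).
    """
    counts = np.zeros((n_litho, n_litho))
    for a, b in zip(litho[:-1], litho[1:]):
        counts[a - 1, b - 1] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0          # avoid dividing an empty row
    return counts / rows

# the upward matrix is the same computation on the reversed column:
# upward = downward_matrix(litho[::-1])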


Note - A lithotype bar is plotted in the histogram if it has a non-zero transition frequency, whatever the type of calculation (Upward or Downward). The transition frequencies are not respected in the histogram: instead, they are averaged and normalized.
From the set of histograms we conclude that:
- L1 only has contact with L2,
- L2 has contact with L1, L3 and L4,
- L3 has contact with L2 and L4,
- L4 has contact with L3 and L2.
From the VPC analysis we have found that L1 only occurs at the top and L4 at the bottom of this unit. We have also observed that L1 is only present in the northern area of the field. In the present case, for now, we will not use external information to confirm our assumptions or to customize the lithotype rule, and we will work without any geological model delineation.
In the Lithotype Rule Definition panel, switch on Add Vertically and split the lithotype rule, set as default (L1). Repeat this action to split the L2 area vertically. Now switch on Add Horizontally and split the L3 area in two. The next graphic shows the obtained lithotype rule:

(fig. 10.6-2)

This lithotype rule is consistent with all the properties mentioned above, but remember that
these diagrams are only informative as they are calculated on few samples and only along the
wells. The lithotype rule as defined previously is echoed on the main panel.

Definition of the models for the underlying gaussian random functions


Analog information, well logging, well correlation, sequential stratigraphy, etc. can give an idea
of global dimensions of lithotypes (extensions) and can be integrated in the variogram model.
The next step is used to define the models for the underlying gaussian random functions G1 and
G2. Note that, if all the limits of the lithotype rule are vertical, the second gaussian random
function does not play any role and therefore there is no need in defining the corresponding
model.
Since we have a horizontal limit in the lithotype rule (L3-L4), we must define two new parameter files containing the models for the two underlying gaussian random functions:
- g1Top (horizontal axis) will rule L1, L2 and the L3-L4 group,
- g2Top (vertical axis) will rule only L3 and L4.


Each model can be edited using the standard model definition panel. For the time being, enter the following structures for the variograms of the two gaussian functions:
- g1Top: cubic variogram with a range of 2000m along X and Y, and 2.5m along Z,
- g2Top: exponential variogram with a (practical) range of 2000m along X and Y, and 3m along Z.
Note that, by default, the basic structures are anisotropic with a rotation equal to the rotation
angle of the grid (20 degrees). In our case, the basic structures are isotropic (in the XOY plane)
and this rotation angle is ignored. The quality of the fitting will be evaluated below.

Control displays
This application offers the possibility of displaying the thresholds calculated for both gaussian
random functions. They are represented in a form similar to the lithotype rule but this time, each
axis is scaled in terms of cumulative gaussian density. In our case and as the two underlying
gaussian random functions are not correlated, the surface directly represents the proportion of a
lithotype. Here the working grid is sampled considering only one cell out of 10, which brings
the number of cells down to 9*9. The level 25 (out of 51) is visualized.
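For uncorrelated gaussian functions, the link between proportions and thresholds is simply the standard gaussian cumulative distribution function. A sketch of the threshold computation for the lithotypes ruled by one gaussian (the proportions are illustrative; scipy is assumed to be available):

import numpy as np
from scipy.stats import norm

def thresholds(props):
    """Thresholds on a standard gaussian delimiting bands whose
    probabilities equal the given proportions (last bound is +inf)."""
    cum = np.cumsum(props)[:-1]
    return norm.ppf(cum)

print(thresholds([0.2, 0.5, 0.3]))   # -> [-0.8416  0.5244]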


(fig. 10.6-3)

It is interesting to see that lithotype 1 (red) does not show up and that lithotype 4 (blue) progressively disappears towards the north-east. External information will be used later to indicate that lithotype 4 (blue) belongs to the deep platform environment and that lithotype 1 belongs to a coastal environment (upper part of the field). This highlights the N-S lithotype progradation.
We can visualize a non conditional simulation performed in the planes of the 3D working grid.
For that sake, we must enter the seed used for the random number generator.


(snap. 10.6-3)

For better legibility, it is possible to enlarge the extension of the grid along the vertical axis by specifying a Z scaling factor; here the distortion factor is set to 150. The next figure represents a YOZ section (X Index=60). We recall that this simulation is performed in the 3D working grid. We can also visualize the horizontal section at Z index=25.

(fig. 10.6-4)
Fitting variograms
The last tool corresponds to the traditional graphic variogram fitting facility. However, in the case of plurigaussian model fitting, it is rather difficult to use as:
- The variograms are calculated experimentally on the lithotype indicators. When choosing the models for both underlying gaussian random functions, we must fit simultaneously all the simple and cross-variograms: for 4 lithotypes, there are 4 simple variograms and 6 cross-variograms.
- The equation relating the variogram model to the variogram of the lithotype indicator uses the lithotype proportion: the impact of a strong non-stationarity is difficult to evaluate on the model rendition.


In addition, in our case, we can calculate the variograms along the wells (almost vertically) but only with difficulty horizontally, because of the small number of wells. The application allows computing on the fly the experimental simple and cross-variograms in a set of directions (up to 2 horizontal and 1 vertical). We must first define the characteristics of these computations:

Computation Parameters: the lag values and the number of lags in each calculation direction.
By default the lags are set equal to the grid mesh in each direction.

(snap. 10.6-4)

Horizontal tab. The horizontal experimental variogram is calculated as the average of variograms calculated layer by layer along a reference direction within angular tolerance. Here
the reference direction is set to 70 degrees and the angular calculation tolerance to 45
degrees. There is no restriction on the vertical layers to be scanned.

(snap. 10.6-5)

Vertical tab. The vertical experimental variogram refers to calculations performed along the
vertical axis within a vertical tolerance. In addition, we can restrict the calculation to consider pairs only within the same well.

(snap. 10.6-6)


Now, we can define a set of graphic pages to be displayed. Each page is identified by its name and by its contents, composed of a set of lithotype indicator simple or cross-variograms. For each variogram, we can specify the line style and color. In the next figure, we define the page Horizontal, which contains the four simple lithotype indicator variograms for two directions (Horizontal 1 and Horizontal 2). The Horizontal 1 direction corresponds to the E-W axis of the working grid. We first define a New Page and enter its name in the popup window, then select Horizontal 1 and 2 from the Directions list, then select the L1, L2, L3 and L4 lithotypes from the Lithotypes list and press the arrow button: the list of variograms to be calculated is displayed in the Curves List (Hor1: simple[1], ...). We do the same for a new Vertical page.
The next figure shows the indicator simple variograms of the four lithotypes for the horizontal
and vertical plane (with lithotype color scale): experimental quantities are displayed in dashes
for Hor1 direction and points-dashes for Hor2 whereas the model expressions are displayed in
solid lines.
Horizontal page: indicator simple variograms of the four lithotypes in the Hor1 and Hor2 directions, with the lithotype color scale.

(fig. 10.6-5)


Vertical page: indicator simple variograms of the four lithotypes in the Vert direction, with the lithotype color scale.

(fig. 10.6-6)

So far we have all the parameters needed to perform a plurigaussian simulation, apart from the neighborhood definition. However, before moving to this final step, we present how to integrate external information or geological assumptions into the lithotype rule and the two underlying gaussian functions, in order to simulate lithofacies within conceptual or geological model features.
As said before, lithotype L1 (conglomerate) is associated with a coastal environment. Geological information shows that lithotype L1 has a regional trend oriented at Azimuth=80 degrees; L1 would be related to coastal facies (northern part of the deposit).
L2 and L3 (sandstones and shales) belong to a shallow marine environment and show an oblique sigmoidal stratification. The dipping angle of these lithotypes varies between 0.4 and 0.5 degrees. They prograde from the shore line (roughly North to South); consequently the layers are oriented along the same Azimuth=80 degrees.
L4 is related to the deep platform environment and has a good horizontal correlation (regional presence).


(fig. 10.6-7)

Note - The previous graphic is represented in working grid coordinates. The structural grid has a rotation, so you must pay attention when dealing with spatial correlations in the working grid.
In order to take into account this new information we will change the lithotype rule to fit the
next graphic.

(snap. 10.6-7)

Note that this lithotype rule allows L1 to have a contact with L3. This feature is not consistent with the vertical transitions of lithotypes from the wells but, as said before, the transition histograms are only informative. On the other hand, it is now possible to control the regional trend of the coastal lithotype L1 with one of the gaussian random functions, G1 (horizontal) in this case. We will call this function G1 Coastal Top. The characteristics of this model are:
- Global horizontal rotation = Azimuth=80 (equivalent to Az=10),
- Type = Cubic,
- Ranges (Along U rotated = 2000, Along V rotated = 700, Along Z = 0.5m).


Note - As this gaussian will principally affect lithotype L1, we have used a Z range value equivalent to its average thickness from the wells. In order to simulate the horizontal trend of the coastal environment, we have used a range value along the U axis greater than along V.
For the second gaussian, which will only rule L2 and L3, we take into account the dipping angle of the progradation and an anisotropy direction equivalent to that of the G1 gaussian function, but with a range value along the V axis greater than along the U axis, due to the fact that progradation occurs orthogonally to the coastal trend. We will call this function G2 (L2-L3) Top. The characteristics of this model are:
- Local rotation for anisotropy = Azimuth=80 (equivalent to Az=10), Vertical rot = 0.4,
- Type = Cubic,
- Ranges (Along U rotated = 1000, Along V rotated = 1500, Along Z = 1.5m).

(snap. 10.6-8)

Since L1 has a geological correlation with L3 (facies transition from coastal to ramp), we have used a correlation value of 0.6 between the two gaussian functions. You can compare the indicator variograms and the display of non-conditional simulations with the previous model, as well as the impact of the correlation factor on the non-conditional simulations.
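Isatis handles the correlated pair internally; the sketch below only shows the classical way a point-wise correlation rho between two standard gaussian fields can be obtained (not the actual co-simulation algorithm):

import numpy as np

def correlated_field(g1, rho, seed=0):
    """Return G2 correlated at level rho with G1:
    G2 = rho * G1 + sqrt(1 - rho**2) * W, with W an independent
    standard gaussian field, so G2 keeps unit variance."""
    w = np.random.default_rng(seed).standard_normal(g1.shape)
    return rho * g1 + np.sqrt(1.0 - rho ** 2) * w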


(snap. 10.6-9)

The next graphic shows non-conditional simulations using different correlation values. (From
left to right: 0, 0.3, 0.9)

(fig. 10.6-8)

Clicking on Run finally saves the plurigaussian variograms.


10.7 Conditional Plurigaussian Simulation

(snap. 10.7-1)

This application corresponds to the Interpolate / Conditional Simulations / Plurigaussian menu and
performs plurigaussian simulations (only one in this case study). First, define the name of the Proportion Standard Parameter File (UnitTop) which defines all the environment parameters such as:

The input discretized line structure (UnitTop / DiscretizedWells) and the assigned lithotype
variable Lithotype.

The 3D working grid (UnitTop / WorkingGrid) with the input macro variable Proportions and
the output macro variable containing the simulated lithotypes Simupluri litho (2nd version).

In particular, the parameter file indicates if the plurigaussian simulation should be conditioned to
some data or not. In addition, we must define the specific parameters, such as:


The neighborhood (Moving), using the standard neighborhood definition panel: the neighborhood search ellipsoid extensions are 10km by 10km in the horizontal plane and 20m along the vertical; it is divided into 8 angular sectors with an optimum of 4 points per sector.

The parameters for reconstructing the underlying gaussian random functions at the constraining
data points.

The parameters for simulating the underlying gaussian random functions on the grid.

Finally click on Run.


A second plurigaussian simulation is performed, with updated parameters of the Statistics / Plurigaussian Variograms utility for the gaussian functions g1Top and g2Top and their respective lithotype rule. These models have different anisotropy angles and the correlation between the two gaussian functions is set to zero. The same neighborhood is used and the new output is called Simupluri litho (1st version).

10.7.1 Displaying results of the conditional simulation


As the simulation has been performed in the 3D working grid, it is recommended to visualize the results in this flattened mode. This is done using a standard Raster representation in a new display page. At this stage it is convenient to use the color scale LithotypesUnitTop, specially designed to represent the lithotypes of interest.
The next figure represents the layer 37 projected on the horizontal plane for the two output models Simupluri litho (1st version) and Simupluri litho (2nd version). We can clearly see the mask corresponding to the limits of the unit.
On this plane, we digitize a section (the diagonal of the 3D working grid) represented as a dashed
line. Using the graphic option (right mouse button) Update Trace on Graphics, this section is
passed to a second Display page which represents Cross-sections in 3D.


Plurigaussian simulation, first version (top) and second version (bottom): plan views and YOZ cross-sections, displayed with the lithotype color scale (Conglomerate, Sandstone, Shale, Limestone).

(fig. 10.7-1)


10.8 Simulating the Lithofacies in the Lower Unit


The same workflow is applied for the lower unit, located just below the previous unit in structural
position. The upper and lower units are processed independently, even if some lithotypes may be
common.
We review the different steps and only highlight the main differences with the previous unit.

10.8.1 Creating the unit and discretizing the wells


We set up the environment for the plurigaussian simulation using the Tools / Discretizing & Flattening application. The Input panel is the same for both units.
(snap. 10.8-1)

The new Proportion Parameter File (UnitBottom) is used to store all the information required by
the plurigaussian simulation for the current unit, such as:


The 3D structural grid (reservoir / simu), where the final results will be stored, is the same as for the upper unit. However, we must pay attention to use a different name for the back-transformation pointer than the one used for the upper unit (Variable ptr_UnitBottom).

For this unit, the horizontalisation is performed using a proportional distortion between the Top surface (surf2) and the Bottom surface (surf3), contained in the 2D surface grid file (data, File Surfaces). In this case, there is no need to specify any reference surface.

The new 3D working grid (UnitBottom / WorkingGrid) is used to store the macro variable
containing the proportions (Variable Proportions). Note that, in this proportional flattening
case, the grid mesh along vertical axis (0.2m) is defined arbitrarily. The number of meshes (27)
is defaulted according to the mesh extension of the structural grid and the unit thickness.

A new file to store the discretized wells (UnitBottom / DiscretizedWells) with the macro variable for the proportions (Variable proportions) and the assigned lithotype (Variable lithotype).
The linked header file (UnitBottom / WellHeads) contains the names of the wells (Variable
WellName).

Four lithotypes are defined, as illustrated in the next window. We also define the corresponding
name and color for each lithotype and create a new Color Scale and Palette that will be used to represent the lithotype simulated grid using the traditional Display/Grid/Raster facility (LithotypesUnitBottom).

(snap. 10.8-2)


(snap. 10.8-3)

(snap. 10.8-4)

Once the Run button has been pressed, the printout shows the following results:


Description of the New Working Grid
===================================
File Name : UnitBottom/WorkingGrid
Mask Name : None
NX=  90   X0=   25.00m   DX= 50.00m
NY=  90   Y0= -775.00m   DY= 50.00m
NZ=  27   Z0=    0.10m   DZ=  0.20m

Type of system : Proportional between Top and Bottom Surfaces
Surface File   : data/Surfaces
Top Surface    : surf2
Bottom Surface : surf3

Description of the 3D Structural Grid
=====================================
File Name : reservoir/simu
Pointer into Working Grid : ptr_UnitBottom
NX=  90   X0=   25.00m   DX= 50.00m
NY=  90   Y0= -775.00m   DY= 50.00m
NZ=  99   Z0=  -26.00m   DZ=  0.20m

Number of valid nodes = 198700/801900

Statistics for the Discretization
=================================

Input Data:
-----------
File Name     : data/Wells
Variable Name : lithofacies
Total Number of Lines   = 10
Total Number of Samples = 413
Analyzed Length         = 153.56m
Initial Lithotype Proportions:
Continental sandstone   = 0.327
Continental shales      = 0.198
Very shallow packstones = 0.439
Shallow wackstones      = 0.036

Lithofacies to Lithotype Conversion:
------------------------------------
Continental sandstone   = [2,2]
Continental shales      = [3,3]
Very shallow packstones = [8,8]
Shallow wackstones      = [11,11]

Discretization Options:
-----------------------
Discretization Length       = 0.20m
Minimum Length              = 0.02m
Distortion Ratio (Hor/Vert) = 250
Lithotype Selection Method  = Central

Discretization Results:
-----------------------
File Name        : UnitBottom/DiscretizedWells
Lithotype Name   : lithotype
Proportions Name : proportions[xxxxx]
Total Number of Lines      = 10
Total Number of Samples    = 270
Number of Informed Samples = 241
Discretized Lithotype Proportions:
Continental sandstone   = 0.351
Continental shales      = 0.166
Very shallow packstones = 0.448
Shallow wackstones      = 0.035
Assigned Lithotype Proportions:
Continental sandstone   = 0.365
Continental shales      = 0.166
Very shallow packstones = 0.436
Shallow wackstones      = 0.033

At the end of this discretization procedure, the proportions in the 3D working grid are defined as
constant over all the cells, equal to the discretized lithotype proportions (as defined above).

10.8.2 Computing the proportions


The second step consists in modifying these constant lithotype proportions over the field using the
Statistics / Proportion Curves application. For this purpose, we first load the UnitBottom Proportions parameter file and represent the global proportions calculated over the 10 wells using the Display Pie Proportions feature.

2D Point Proportions: pie charts of the lithotype proportions at the 10 well locations (W1 to W10).

(fig. 10.8-1)


This figure does not show any particular feature linked to the geographical position of these proportions.
Lithofacies L3 (Very shallow packstones, in orange) is associated with a shallow platform environment. Lithofacies L1 and L2 are associated with a continental environment, but we do not have more external information. For this reason we consider the proportions of the lithotypes as stationary over the horizontal extension of the field.
We now focus on the global VPC (calculated using the samples from all the wells) and displayed in
the bottom right corner of the main graphic window. The next figure (obtained using the Display &
Edit option of the Graphic Menu) shows the VPC in the raw version (on the right) and in the modified version (on the left).

(fig. 10.8-2)

Note that, in our case where the horizontalisation has been performed in a proportional manner,
there is no need for completing the VPC. The initial global VPC has been smoothed (3 iterations) as
in the upper unit. The Compute 3D Proportions option is used to simply duplicate the global VPC
in each column of cells in the 3D working grid.


(snap. 10.8-5)

This utility produces the following results:


Computing the proportions on the 3D Grid
========================================
Number of levels     = 27
Number of lithotypes = 4

Experimental Proportions
- Global VPC
Number of active samples   = 27
Proportion of lithotype #1 = 0.359
Proportion of lithotype #2 = 0.171
Proportion of lithotype #3 = 0.438
Proportion of lithotype #4 = 0.031

Proportions calculated on the simulation grid
Number of cell along X     = 90
Number of cell along Y     = 90
Number of cell along Z     = 27
Number of calculated cells = 218700
Proportion of lithotype #1 = 0.359
Proportion of lithotype #2 = 0.171
Proportion of lithotype #3 = 0.438
Proportion of lithotype #4 = 0.031

Note that, as expected, the proportions are the same on the global VPC as over the whole grid. They are slightly different from those calculated on the discretized samples in the previous application, due to the smoothing step. Do not forget to Save and Run this parameter file.
The proportions can be visualized using the Display 3D Proportions utility, which produces a figure
where all the VPC are exactly similar.


(fig. 10.8-3)

10.8.3 Lithotype rule and gaussian functions


The lithotype rule is defined using the Statistics / Plurigaussian Variogram application.

(snap. 10.8-6)


The models of the two underlying gaussian random functions are:
- g1Bot: anisotropic exponential variogram with a (practical) range of 1500m along X and Y, and 2m along Z,
- g2Bot: anisotropic cubic variogram with a (practical) range of 1500m along X and Y, and 2m along Z.

The next graphic shows a non-conditional simulation using the previous parameters.

(snap. 10.8-7)

We clearly see the influence of the variogram types and lithotype rule which produce the following
transitions:
- spotted between lithotype #1 (yellow) and lithotype #3 (orange),
- spotted between lithotype #2 (green) and lithotype #3 (orange),
- spotted between lithotype #3 (orange) and lithotype #4 (blue),
- smooth between lithotype #1 (yellow) and lithotype #2 (green).

10.8.4 Conditional plurigaussian simulation


We now run the conditional plurigaussian simulation using the application Interpolate / Conditional Simulations / Plurigaussian. Only one simulation is performed, using the same neighborhood and specific parameters as for the upper unit.


(snap. 10.8-8)

The next figure represents the layer 26 projected on the horizontal plane for the output model and a
cross-section.


(fig. 10.8-4)


10.9 Merging the Upper and Lower Units


The different units are now simulated in separate 3D working grid files. It is now time to merge
all these simulation outcomes and to back transform them in the 3D structural grid, using the
Tools / Merge Stratigraphic Units facility.

(snap. 10.9-1)

We first define the different 3D units to be merged: they are characterized by the corresponding
Proportions Standard Parameter Files (UnitTop and UnitBottom) which contain all the relevant
information such as the name of the pointer variable (not shown in the interface) or the name of the
macro variable containing the lithotype simulations.


We also define the 3D structural grid (reservoir / simu) where the merged results will be stored in a
new macro variable (lithotype).
The number of simulated outcomes in the different 3D working grids may be different. The principle is to match the outcomes by their indices and to create a merged outcome in the structural file
using the same index.
In the bottom part of the window, the procedure concatenates the list of all the lithotypes present in
all the units. We must now define a set of new lithotype numbers which enable us to regroup lithotypes across units: this is the case here for the lithotype Shale present in both the upper and the
lower units, which is assigned to the same new lithotype number (3).
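This renumbering amounts to a plain mapping table. The merged codes in the sketch below are inferred from the legend and the final statistics shown later in this section (code 6 is left unused after regrouping Continental shales with Shale); they are illustrative, not read from the interface:

# Hypothetical renumbering table, consistent with the legend and the
# final statistics of this section: Shale keeps number 3 in both units.
RENUMBER = {
    ("UnitTop", 1): 1,      # Conglomerate
    ("UnitTop", 2): 2,      # Sandstone
    ("UnitTop", 3): 3,      # Shale
    ("UnitTop", 4): 4,      # Limestone
    ("UnitBottom", 1): 5,   # Continental sandstone
    ("UnitBottom", 2): 3,   # Continental shales, regrouped with Shale
    ("UnitBottom", 3): 7,   # Very shallow packstones
    ("UnitBottom", 4): 8,   # Shallow wackstones
}

def merged_code(unit, code):
    """Map a simulated lithotype code of one unit to the merged scale."""
    return RENUMBER[(unit, code)]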
We can then use the Colors option in order to define the name and color attributes for the new lithotypes and define the corresponding new palette and Color Scale (LithotypesReservoir).

(snap. 10.9-2)

You can then Run the Merge procedure.


The next graphic shows the merged simulation. The upper part corresponds to a plan view representation of the resulting simu grid in raster mode (Z node or level = 72). A section is digitized in this view and updated in all graphics in order to be taken into account within the application Display / Grid / 3D Fences / Unfolded.


Structural Grid plan view (IZ=72) and cross-section: lithofacies simulation displayed with the merged lithotype color scale.

(fig. 10.9-1)

We can see the two units merged and back transformed in the structural position with the limiting
surface between them. The final statistics consist in running the Statistics/Quick Statistics application on the lithotype variable of the resulting 3D structural grid (reservoir / simu) in order to get
the global statistics on the different lithotypes:
Integer Statistics: Variable lithotype[00001]

Integer value   Count of samples   Percentage
      1                6545           1.31%
      2               81184          16.25%
      3              106415          21.29%
      4              136680          27.35%
      5               73203          14.65%
      7               89028          17.82%
      8                6678           1.34%


11.Oil Shale
This case study illustrates the use of faults on a 2D data set containing
two variables: the elevation of the bottom of a layer and its thickness.
Important Note:
Before starting this study, it is strongly advised to read the Beginner's Guide book, especially the following paragraphs: Handling Isatis, Tutorial: Familiarizing with Isatis basics, and Batch Processing & Journal Files.
All the data sets are available in the Isatis installation directory (usually C:\program file\Geovariances\Isatis\DataSets\). This directory also contains a journal file including all the steps of the case study. In case you get stuck during the case study, use the journal file to perform all the actions according to the book.

Last update: Isatis version 2014


11.1 Presentation of the Dataset


The data set consists of two ASCII files:
- the first one (called oil_shale.hd) contains the sample information, i.e.:
  - the name of the borehole,
  - its coordinates,
  - the depth of the bottom of the layer (counted positively downwards), called elevation,
  - the thickness of the layer.
- the second one (called oil_fault.hd) contains the coordinates of the 4 segments which constitute the main fault system, as digitized by the geologist.

The files are located in Isatis installation directory/Datasets/Oil_Shale.

11.1.1 Loading the Data


The data is loaded using Files / Import / ASCII, from the file oil_shale.hd into a new Directory (Oil Shale) and a new File (Data).

(snap. 11.1-1)


We can check the contents of the file by asking for some basic statistics on the variables of interest
(all expressed in meters):
Variable name   Number of valid samples   Minimum    Maximum
X                        191               637.04   55018.00
Y                        191                27.84   68039.04
elevation                190              1299.36    2510.03
thickness                168                27.40     119.48

11.1.2 Loading Faults


The faults are defined as portions of the space which can interrupt the continuity of a variable.
In other words, when a fault is present:
- at the stage of the calculation of experimental variograms, a pair of two points will not be considered as soon as the segment joining them intersects a fault;
- at the estimation phase, a sample will not be used as neighboring data if the segment joining it to the target intersects a fault.

In 3D, the faults are defined as a set of triangular planes. In 2D, the faults represent the projection on the XoY plane of possible 3D faults. Therefore, we can distinguish two categories of faults:
- the set of broken lines which corresponds to the trace of vertical 3D faults,
- the closed polygon which is the projection of a set of non-vertical 3D faults: special options are dedicated to this case.
In this case study, the geologist has digitized one major fault which corresponds to a single broken line composed of four segments. It is given in the ASCII file called oil_fault.hd.
# FAULTS SAVING: Directory: Oil Shale File: Data
#
#
# max_priority=127
#
# field=1 , type=name
# field=2 , type=x1 , unit=m
# field=3 , type=y1 , unit=m
# field=4 , type=x2 , unit=m
# field=5 , type=y2 , unit=m
# field=6 , type=polygon
# field=7 , type=priority
#
#
#
#+++++++----------++++++++++----------++++++++++----++++
1   5000.00 69000.00 13000.00 63000.00   0   1
1  13000.00 63000.00 13000.00 54000.00   0   1
1  13000.00 54000.00 19700.00 45700.00   0   1
1  19700.00 45700.00 37000.00 69000.00   0   1


The faults are loaded using the File / Faults Editor utility. This procedure is composed of a graphic main window. In its Application menu, we use, in order, the Load Attached File option, to define the file where we want to load the faults (Directory Oil Shale, File Data, already created in the previous paragraph), and the ASCII Import option to load the faults.
Pressing the Import button starts reading the fault, which is now represented on the graphic window together with the data information. The single fault is given the "name" 1.


(fig. 11.1-1)

When working in the 2D space, this application offers various possibilities such as:
- digitizing new faults,
- modifying already existing faults,
- updating the attributes attached to the faults (such as their names).
An interesting attribute is the priority which corresponds to a value attached to each fault segment:
this number indicates whether the corresponding segment should be taken into account (active) or
not, with respect to a threshold priority defined in this application.
In order to check the priority attached to each segment of the fault, we select Edit Fault in the
graphic menu, select the fault (which is now blinking) and ask for the Information option.
Polyline Fault: 1   Priority: [1,1]   Nber of Segments: 4


This statement tells us that the designated fault, called 1, is composed of four segments whose priorities are all equal to 1. This means that, if the threshold is left at 127 (value read from the ASCII file containing the fault information), all the segments are active.
If we decrease the threshold down to 0 in the main graphic window, the fault is now represented by dashed lines, signifying that no segment is active: the data set would then be treated as if no fault had been defined. By giving different priorities to different segments, we can differentiate the severity of each segment and set it for the next set of actions.
As we want to use the faults in this case study, we modify the threshold to 1 (in fact any positive
value) and use SAVE and RUN in the Application menu to store the fault and the threshold value
together with the data information.

11.1.3 Creating the Output Grid


The initial task consists in defining the grid where the results will be stored, using File / Create Grid File. We will minimize the extrapolated area, avoiding extending the field of investigation down to the furthest sample points in the South. The following parameters are used:
- X origin: 0m, Y origin: 15000m,
- X and Y mesh: 1000m,
- X nodes number: 58, Y nodes number: 55.

This procedure allows a graphic control where the final grid is overlaid on the initial data set.

(fig. 11.1-2)


11.2 Exploratory Data Analysis


The structural analysis will be performed in terms of variograms calculated within the Statistics /
Exploratory Data Analysis module, using the target variables elevation and thickness from the
data file.
Check that, when displaying a Base Map on any of the target variables, the fault is active.
Base map of the thickness variable, with the fault.

(fig. 11.2-1)

The next task consists in checking the relationship between the two target variables thickness and
elevation: this is done using a scatterplot where the regression line is represented.
The two variables are negatively correlated with a correlation coefficient of -0.72. An interesting
feature consists in highlighting the samples located on the upper side of the fault from the base
map: they are represented by asterisks and correspond to the smallest values of the elevation and
also almost always to the largest values of the thickness.


Scatterplot of thickness vs. elevation (rho = -0.724), with the regression line.

(fig. 11.2-2)

Because of this rather complex correlation between the two variables (which depends on the location of the samples with regard to the faulting compartment), we decide to analyze the structures of
the two target variables independently.
Due to the high sampling density a preliminary quick interpolation may help to understand the main
features of the phenomenon. Inside Interpolate / Interpolation / Quick Interpolation a Linear Model
Kriging is chosen to estimate each variable using a Unique neighborhood.

(snap. 11.2-1)


(snap. 11.2-2)

(fig. 11.2-3)

Note - These displays are obtained with a superimposition of a grid in raster representation, in
isolines and the fault. Details are given at the last section of this case study.
From the last graphic it is clear that the thickness is anisotropic with the elongated direction of the
anisotropy ellipse close to the NW-SE direction.


Note - A variogram map calculation applied to thickness and elevation datasets would lead to
similar conclusions about the main directions of anisotropy.
Two directional variograms are then calculated with 15 lags of 2km each and an azimuth rotation of
45 degrees (N45).

Directional variograms of thickness (N45 and N135).

(fig. 11.2-4)

We save this set of two directional variograms in the Parameter File called Oil Shale Thickness.

Note - By displaying the variogram cloud and highlighting several variogram pairs, the user may
note that none of these pairs crosses the faults.
We reproduce similar calculations on the elevation variable, for 15 lags of 2km each, but this time
the rotation is slightly different: Azimuth 30 degrees (N30).


Directional variograms of elevation (N30 and N120).

(fig. 11.2-5)

We note that variograms of the two variables have similar behaviour even if the directions are
slightly different. The set of directional variograms is saved in the Parameter File called Oil Shale
Elevation.


11.3 Fitting a Variogram Model


The procedure Statistics / Variogram Fitting will be used twice in order to fit a model to each set of experimental directional variograms previously calculated. Each model is stored in a new Parameter File bearing the same name as the one containing the experimental quantities: they will still be distinguished by the system as their types are different. The thickness and elevation variograms are fitted using a single basic structure: a power variogram.
Click Experimental Variogram and select Oil Shale Elevation. Skip the model initialization frame and, from the Automatic Fitting tab, click Structures to add a Power model.


(snap. 11.3-1)


(snap. 11.3-2)

Click Constraint to allow the anisotropy and then press Fit.


(snap. 11.3-3)

In order to check the automatic fitting on the two directions simultaneously, we use the Global Window. The model produced is satisfactory. Press Run(Save) to save the parameter file.
Repeat the process for Thickness.


Fitted power model over the thickness directional variograms (N45 and N135).

(fig. 11.3-1)

Fitted power model over the elevation directional variograms (N30 and N120).

(fig. 11.3-2)


11.4 Estimation
11.4.1 Estimation of Thickness
The estimation will be performed with the procedure: Interpolate / Estimation / (Co-)Kriging.

Note - Being a property of the Input File, the fault system will be automatically taken into account
in the estimation process.

(snap. 11.4-1)


We will store the results in the variables:
- Thickness (Estimate) for the estimation,
- Thickness (St. dev.) for the corresponding standard deviation.

After having selected the variogram model, we must define the neighborhood which will be used for the estimation of both variables. It will be saved in the Parameter File called Oil Shale. In order to decluster the information, we will use a large amount of data per neighborhood (3x8) taken within a large neighborhood circle (30km).
Finally, in order to avoid too much extrapolation, no target node will be estimated unless there are at least 4 neighbors within a circle of radius 8km.

(snap. 11.4-2)


(snap. 11.4-3)

The first task is to check the consistency of these neighborhood parameters graphically using the
Test button in the main window: a secondary graphic window appears representing the data, the
fault and the neighborhood parameters. Pressing the Left Button of the mouse once displays the target grid. Pick a grid node with the mouse again to start the estimation: each active datum selected in
the neighborhood is then highlighted and displayed with the corresponding weight (as a percentage). Using the Domain to be estimated item in the Application menu cross-hatches all the grid
nodes where no estimation will be performed (next picture).



(fig. 11.4-1)

Note - Although the sample locations are the same, the graphics obtained for the two variables will
not necessarily be similar as the number of active data is not the same (190 values for elevation
compared with only 168 for the thickness): the samples for which the current variable is undefined
are represented as small dots instead of crosses.
Before visualizing the results, we run the same process with the elevation variable, modifying the name of the Parameter File containing the model to Oil Shale Elevation; we store the results in the variables:
- Elevation (Estimate) for the estimation,
- Elevation (St. dev.) for the corresponding standard deviation.


11.5 Displaying Results


The thickness kriging result is visualized using several combinations of the display capabilities. You are going to create a new Display template, which consists of an overlay of a grid raster, grid isolines and thickness data locations. All the Display facilities are explained in detail in the "Displaying & Editing Graphics" chapter of the Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page pops up, together with a Contents window. You have to specify in this window the contents of your graphic. To achieve that:

Firstly, give a name to the template you are creating: Thickness. This will allow you to easily display this template again later.

In the Contents list, double click on the Raster item. A new window appears, in order to let you
specify which variable you want to display and with which color scale:

In the Data area, in the Grid file select the variable Thickness (Estimate),

Specify the title that will be given to the Raster part of the legend, for instance Thickness,

In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the thickness variable. To create a new color scale: click on the Color Scale button,
double-click on New Color Scale and enter a name: Thickness, and press OK. Click on the
Edit button. In the Color Scale Definition window:
- In the Bounds Definition, choose User Defined Classes.
- Click on the Bounds button and choose 18 classes between 30 and 120, then click on OK.
- In the Colors area, click on Color Sampling to choose regularly the 18 colors in the 32
colors palette. This will improve the contrast in the resulting display.
- Switch on the Invert Color Order toggle in order to affect the red colors to the large
Thickness values.
- Click on the Undefined Values button and select Transparent or Blank.
- In the Legend area, switch off the Automatic Spacing between Tick Marks button, enter
10 as the reference tickmark and 10 as the step between the tickmarks. Then, specify that
you do not want your final color scale to exceed 6 cm. Switch off the Automatic Format
button and set the number of digits to 0.
- Click on OK.


(snap. 11.5-1)

In the Item contents for: Raster window, click on Display current item to display the
result.

Click on OK.

Back in the Contents list, double-click on the Isolines item to represent the thickness estimate in
isolines:

In the Data area, in the Grid file select the variable Thickness (Estimate),

In the Data Related Parameters area, choose two classes of isolines:


- from 30 to 120 by steps of 10 with a solid line and labels,
- from 50 to 100 by steps of 50 with a double thickness line and labels,

In the Graphic Parameters area, switch off the Visibility toggle

Click on Display current item to display the result and then OK.


Double-click on the Basemap item to represent the thickness values. In the Data area, select
Data / thickness as the proportional variable. In the Graphic Parameters area, choose a size of
0.1 and 0.2 for the lower and the upper bounds. The samples where the thickness variable is not
defined will be represented with blue circles. Click on Display Current Item to check your
parameters, then on Display to see all the previously defined components of your graphic. Click
on OK to close the Item contents panel.

Double-click on the Faults. In the Data area, select file Data, the fault being a property of this
file. Change the Faults Style to a double thickness red line. Click on Display to see all the
defined components of your graphic. Click on OK to close the Item contents panel.

In the Item list, you can select any item and decide whether or not you want to display its legend. Use the Move Back and Move Front buttons to modify the order of the items in the final Display.

The Display Box tab allows you to decide whether you want to display all the contents or just the area containing specific items. Select the mode Containing a set of items, then click on the Raster item and then on Display.

Close the Contents window. Your final graphic window should be similar to the one displayed
hereafter.

(fig. 11.5-1)
Before closing the graphic window, click on Application / Store Page to save its contents, allowing you to easily reproduce this graphic later.


12.Multi-layer Depth
Conversion With Isatoil
This case study illustrates the workflow of the Isatoil module, on a data
set that belongs to a real field in the North Sea.
For confidentiality reasons, the coordinates and the characteristics of
the information have been modified. For similar reasons, the case
study is only focusing on a subset of the potential reservoir layers.

Last update: Isatoil version 5.0


12.1 Introduction
The main goal of Isatoil is to build a complete geological model in a layer-cake framework. This is
done when the surfaces corresponding to the tops of the different units are established. The layercake hypothesis assumes that each unit extends between two consecutive surfaces. The order of the
units remains unchanged over the whole field under study. One or several units may disappear over
areas of the field: this corresponds to a pinch-out.
A secondary process produces the values of the petrophysical variables as a two-dimensional grid
within each unit. Some units may be considered as outside the set of reservoirs and therefore they
do not carry any valuable petrophysical information.
Finally the program may be carried over, with the estimation tool (Kriging) being replaced by Simulations in order to reproduce the variability of each one of the parameters involved. This procedure
allows a non-biased quantification of the volumes located above contacts and within polygons
which delineate the integration areas.
Before reading this case study, the user should carefully read the technical reference dedicated to
Isatoil, which describes the general terminology. This technical reference is available in the OnLine Help.

Note - The aim of Isatoil is to derive a consistent layer-cake 2D block model: in other words, it will
be used in order to determine the elevations of a set of surfaces. Therefore elevation will be
regarded as the variable rather than a coordinate, while all the information will be considered as
2D.

Isatis will also be used during this case study, whenever Exploratory Data Analysis and particular
types of graphics are required. The user should already be acquainted with the various Isatis
applications, therefore we shall only mention in this documentation the names of the Isatis panels
that will be used - e.g. Isatis / Statistics / Exploratory Data Analysis.

12.2 Field Description


The field of interest is reduced to a single set of layers contained in a rectangular area of about 14
km2 located in the North Sea.
It contains several surfaces which delineate the reservoir units. Each surface is given a set of designation codes (area, layer and zone codes) which indicates its order in the layer-cake.
The geological model contains 4 units that are vertically contiguous: Upper Brent, Lower Brent,
Dunlin and Statfjord.
The top and bottom surfaces of these units correspond to seismic markers that are clearly identified
and measured in time. The time maps will be converted into depth in the layering phase: the corresponding depth surfaces are referred to as Layers.


Because of its high quality, the seismic marker corresponding to the top of the Upper Brent formation has been selected as the Top Layering surface, from which the whole layer-cake sequence will
be derived.
The second unit (Lower Brent) has been subdivided into 4 production Zones. These zones do not
correspond to visible seismic markers and thus have no time map associated.

Note - Other units were originally subdivided into zones but this option has not been retained in
this Case Study for the sake of simplicity.
The entire layer-cake sequence is truncated at the top by the Base Cretaceous Unconformity (BCU)
as well as two other erosional surfaces (named ERODE1 and ERODE2). These surfaces correspond
to Upper Limits. This field does not have any Lower Limit.
The geological sequence is summarized in the following table where the name, the type and the
designation codes of the different surfaces are listed; the area is skipped as it is always equal to 1.
Surface Name          Surface Type   Layer   Zone
BCU                   Upper Limit    0       0
ERODE 1               Upper Limit    0       1
ERODE 2               Upper Limit    0       2
Upper Brent - B1      Top Layering   1       0
Lower Brent - B4      Layer          2       0
Lower Brent - B5A     Zone           2       1
Lower Brent - B5B     Zone           2       2
Lower Brent - B6      Zone           2       3
Dunlin - D1           Layer          3       0
Statfjord - S1        Layer          4       0
Base Statfjord - BS   Layer          5       0

12.2.1 Introducing the data


The data consists of:
- the well geometry file, which contains the intercepts of the wells with the different surfaces
- the well petrophysics file, which contains the values of the petrophysical variables sampled within the zones
- the grid file, where all the results will be stored

12.2.1.1 The well geometry file


This file contains data organized as lines, with two sets of information.
The header file contains the following variables:
- the name of the well
- the coordinates (in meters) of the well collar
The base file is two-dimensional. Each sample contains the following variables:
- the 2D coordinates (in meters) of the intercept of each well with the surfaces constituting the geological model
- the depth (in meters) of the intercept of each well with the surfaces constituting the geological model
- the indices for the area, the layer and the zone designation codes

12.2.1.2 The well petrophysics file


This file contains the following variables:
- the 2D coordinates (in meters) of the point where the petrophysical parameters are sampled. There may be more than one sample along the same well within a given unit, for instance if the well is highly deviated or even horizontal.
- the depth (in meters) of the petrophysical measurements
- the values of the petrophysical measurements
- the indices for the Area, the Layer and the Zone designation codes

12.2.1.3 The grid file


This file - organized as a regular 2D grid - will contain all the results of the study. More precisely, Isatoil will use its definition to create another grid file (if necessary) in order to store auxiliary results during the volumetrics calculation.
This file also contains several input variables:
- the Top Layering surface - Upper Brent B1 - from which the entire layer-cake sequence will be derived
- the Upper Limit surfaces - ERODE1 & ERODE2 -
- the time variables for the different seismic markers
- the trend surfaces used as external drift for the petrophysical parameters
The input variables read from the ASCII grid file must follow the terminology defined in the Reference Guide.


12.3 Loading the Data


12.3.1 Initializing a new study
The following steps should be carried out first, to make sure that you will be able to compare the results that you obtain in Isatoil with the statistics and illustrations reported in this Case Study.
First of all, make sure to create a fresh new Study in the Data File Manager.
Then go to Preferences / Study Environment and set up the following parameters:
- Default Distance Unit: meters
- Units for the X & Y axes on the graphics: kilometers

12.3.2 Loading the data


All the data used in this Case Study is provided in ASCII files compatible with the formats required by the software. These files can be loaded directly by Isatoil; they are located at the following place on your disk:
- $GTX_HOME/Datasets/Isatoil (UNIX)
- C:\Program Files\Geovariances\Isatis\Datasets\Isatoil (Windows)
In this case study the three data files will be loaded using the standard ASCII import facility (File / Import / ASCII). For convenience all the files will be stored in the same Directory named Reservoir.
The user should refer to the Isatis manual for a detailed description of the standard import applications that are shared by Isatis & Isatoil.

12.3.2.1 Loading the well geometry file


The contents of the ASCII file (wells.hd) will be stored in two linked files because of the Line organization of the samples. The reading mode to be used when loading this datafile is Points+Lines.
Use Wells header as the name of the output Points file, and Wells Lines (Geometry) for the output
Lines file.
You can check with the Data File Manager that the directory named Reservoir now contains two files:
- a header file (Wells header) with 50 samples and the following variables:
  - SN+ Sample Number (READONLY) contains the rank of the well
  - X-Top and Y-Top are the coordinates of the well tops
  - well name contains the name which distinguishes one well from another - this is a 3-character alphanumerical variable -


- the base file (Wells Lines (Geometry)) with 972 samples and the following variables:
  - SN+ Sample Number (READONLY) contains the rank of the sample in the file - from 1 to 972 -
  - LN+ Line Number (READONLY) contains the rank of the line to which the sample belongs - from 1 to 50 -
  - RN+ Relative Number (READONLY) contains the rank of the sample in the line to which it belongs - from 1 to 50, 50 being the count of samples in the longest well -
  - X-Coor and Y-Coor are the coordinates of the intercepts
  - Z-Coor is the depth of the intercepts
  - Area Code, Layer Code and Zone Code are the designation codes for the geological sequence

Some basic statistics show that the data set is constituted of 50 wells (or 972 intercepts) and that the
depth of the intercept (variable Z-Coor) varies between 2337m and 3131m.
Note that the field extension (in the XOY plane) is different for the two files:

Variable               Header File   Base File
Minimum along X (m)        37.         -317.
Maximum along X (m)      3675.         4141.
Minimum along Y (m)        48.         -777.
Maximum along Y (m)      4919.         4919.

This indicates that the wells are not strictly vertical, as can be checked on the following XOY projection, performed with Display / Lines in Isatis.


(fig. 12.3-1)

Horizontal projection of the geological well information (with the well names)
In the same application, the wells may also be represented in a perspective volume which gives a
better understanding of the well trajectories in 3D.

Note - This operation is not straightforward since the well information has been loaded as 2D
data. The well file must be temporarily modified into 3D lines: the elevation variable is transformed
into a coordinate - in the Data File Manager - for the sake of the 3D representation.
One can check that most of the wells are highly deviated - squares indicate the tops of the wells and
triangles the bottoms.


(fig. 12.3-2)

Perspective view of the wells

12.3.2.2 Loading the well petrophysics file


The data is available in the ASCII file named wells_petro.hd.
The reading mode to be used when loading this datafile is Points.
Use Wells (Petrophysics) as the name of the output Points file.
The file named Wells (Petrophysics) in the directory named Reservoir will contain the following variables:
- SN+ Sample Number (READONLY) contains the rank of the sample in the file - from 1 to 408 -
- X-Coor and Y-Coor are the coordinates of the samples
- Area Code, Layer Code and Zone Code are the designation codes for the geological sequence
- Porosity and Net to Gross Ratio are the measurements of the petrophysical variables

Note - There is no variable corresponding to the third coordinate. As a matter of fact the petrophysical parameters are assumed to be vertically homogeneous. Therefore it suffices to know the unit to which the measurements belong (as well as the X & Y coordinates) in order to perform the corresponding 2D estimation or simulations.

The data set consists of 408 samples. The following basic statistics are reported for the two petrophysical variables - using Statistics / Quick Statistics in Isatis. Note that the two variables are not necessarily defined at the same samples: Count indicates the number of samples at which each variable is defined.


Variable              Count   Minimum   Maximum    Mean     St. Dev.
Porosity               290    0.0010    0.3400    0.2404    0.0451
Net to Gross ratio     307    0.        1.000     0.5777    0.3436

The following picture shows the distribution of Porosity (crosses) and Net to Gross ratio (circles) on a horizontal projection - using Display / Points / Proportional in Isatis. The grey spots correspond to samples where one of the variables has not been measured.

(fig. 12.3-1)

Distribution of petrophysical variables


At this stage it is interesting to notice the lack of dependency between the two petrophysical variables. Let's recall that these variables will be processed independently in Isatoil.
The validation is performed on the whole data set - regardless of the geological unit - and illustrated
on the following Scatter Plot - using Statistics / Exploratory Data Analysis in Isatis.
The 287 samples at which both variables are sampled lead to a correlation coefficient of 0.59 with a
dispersed cloud which enforces the validity of the hypothesis. The regression line is also represented.
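This check can be reproduced outside Isatis with a few lines of code. The sketch below is illustrative only - the array names are hypothetical stand-ins for the two Isatis variables - and simply restricts the computation to the samples where both variables are defined, exactly as the Scatter Plot does:

import numpy as np

def joint_correlation(porosity, ntg):
    """Pearson correlation restricted to the jointly defined samples.

    porosity, ntg: 1D arrays of equal length, with NaN where the variable
    has not been measured (hypothetical stand-ins for the Isatis variables).
    """
    both = ~np.isnan(porosity) & ~np.isnan(ntg)   # 287 samples in this data set
    r = np.corrcoef(porosity[both], ntg[both])[0, 1]
    return both.sum(), r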


(fig. 12.3-2)

Scatter plot between Porosity and Net to Gross Ratio

12.3.2.3 Loading the grid file


The data is available in the ASCII file named grid_data.hd.
The reading mode to be used when loading this datafile is Grid.
Use Wells (Grid) as the name of the output 2D Grid file.
The parameters of the grid are defined in the header of the ASCII file:

                 Along X   Along Y
Origin (m)          0.        0.
Mesh (m)           50.       50.
Count              73        79
Maximum (m)      3600.     3900.

(The maximum follows from the other parameters: e.g. along X, 0 + (73 - 1) x 50 = 3600 m.)

The following table gives the list of input variables defined on the grid. Note that the variable names comply with the Isatoil naming convention.

Variable Name         Variable Type                         Surface Name
depth_1_0_0           Depth of Upper limit                  BCU
depth_1_0_1           Depth of Upper limit                  ERODE 1
depth_1_0_2           Depth of Upper limit                  ERODE 2
depth_1_1_0           Depth of Top Layering                 Upper Brent - B1
time_1_1_0            Time for Top Layering                 Upper Brent - B1
time_1_2_0            Time for layer                        Lower Brent - B4
time_1_3_0            Time for layer                        Dunlin - D1
time_1_4_0            Time for layer                        Statfjord - S1
time_1_5_0            Time for layer                        Base Statfjord - BS
trendporgauss_1_2_2   Trend for Porosity (normal transf.)   Lower Brent - B5B
trendporosity_1_2_2   Trend for Porosity                    Lower Brent - B5B


12.4 Master File Definition


The Master File Definition panel drives all the remaining applications. The access to the other Isatoil panels is frozen as long as the Master File has not been properly defined.

(snap. 12.4-1)

12.4.1 Definition of the data files
You must first define the input data that will be used by Isatoil and that has already been loaded in the database:
- The well geometry file:
  In the Header File select the following variable:
  - well name as the Zonation Well Name.
  In the Line File select the following variables:
  - Z-Coor as the Zonation Intercept Depth.
  - Area Code as the Zonation Area Code.
  - Layer Code as the Zonation Layer Code.
  - Zone Code as the Zonation Zone Code.
- The well petrophysics file:
  In the Point File select the following variables:
  - Porosity as the Porosity Variable.
  - Net to Gross Ratio as the Net/Gross Variable.
  - Area Code as the Petrophysical Area Code.
  - Layer Code as the Petrophysical Layer Code.
  - Zone Code as the Petrophysical Zone Code.
- The grid file:
  This file defines the geometry of the grid where the results will be stored, and also contains the time maps, Top Layering surface, Limit surfaces, etc. You must simply select the file at this stage.

12.4.2 Verbose output
This additional parameter offers to print out all the relevant information for each calculation step. This option is used when a detailed analysis of the results is required - e.g. for auditing.
For convenience, in the rest of this documentation, we will assume that the Verbose option flag is switched off.
When this option is activated it produces two types of printout:
- a message each time a data point is discarded, for one of the following reasons:
  - Error when calculating the thickness by comparison to the reference surface: the reference surface is not defined.
  - Error when calculating the thickness by comparison with an extrapolated (time) surface: the surface is not defined.
  - Error when calculating velocities by scaling by a time thickness: the thickness is null or not defined. This may happen essentially in the vicinity of a pinch-out.
  - Finding duplicates. If two intercepts (with the same layer) are located too close (less than one tenth of the grid mesh away), the points are considered as duplicates: their coordinates are printed and only the first point is kept, the second one being discarded (a minimal sketch of this rule is given after the example message below).

When a point is discarded, the following message is produced with the references of the discarded information, followed by the final count of active data:

Discarding point in the following step :
Calculating Layer Proportions (Degenerated Well)
Well 113 (Zone id.=140)
Coordinate along X = 1599.87m
Coordinate along Y = 4134.25m
Depth              = 2772.030

- Initial count of data        = 59
- Final count of active data   = 52
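As announced above, the duplicate rule can be sketched in a few lines of Python. This is only an illustration of the stated rule - not the Isatoil code - assuming the intercepts of one layer are given as (x, y) pairs and the grid mesh is 50 m:

import numpy as np

def drop_duplicates(points, mesh=50.0):
    """Keep only the first of any two intercepts closer than one tenth of the mesh."""
    tol = mesh / 10.0
    kept = []
    for x, y in points:
        if all(np.hypot(x - xk, y - yk) >= tol for xk, yk in kept):
            kept.append((x, y))
        # otherwise the point is discarded, as reported in the Verbose message
    return kept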

- the dump of the active information available, in the following format:

List of active information used for Correlations of Depth
using the following variable(s):
- Variable 1: Lower Brent - B4
- Variable 2: Dunlin - D1
- Variable 3: Statfjord - S1
- Variable 4: Base Statfjord - BS

Rank Name      X         Y        Initial   Data  Pr1  Pr2  Pr3  Pr4  Trend1 Trend2 Trend3 Trend4
  1    3   1965.27m   649.64m   2435.310   1.057 1.00 0.00 0.00 0.00  2313.     0      0      0
  2    3   1965.27m   649.64m   2544.110   1.178 0.44 0.56 0.00 0.00  2313.  2398      0      0
  3    3   1965.27m   649.64m   2813.410   1.373 0.20 0.26 0.53 0.00  2313.  2398   2573      0
  4    4   2408.25m  3422.11m   2927.000   1.362 0.13 0.17 0.33 0.37  2279.  2353   2491   2649
  5  113   1668.08m  3070.14m   2772.050   1.341 0.17 0.27 0.56 0.00  2304.  2389   2566      0
  6  119    827.59m  3060.44m   2498.780   1.412 1.00 0.00 0.00 0.00  2373.     0      0      0
  7  120   1162.99m  3212.98m   2456.170   1.183 1.00 0.00 0.00 0.00  2352.     0      0      0
  8  120   1827.41m  2868.45m   2483.780   1.033 0.40 0.60 0.00 0.00  2299.  2379      0      0
  9  120   1915.18m  2825.42m   2478.170   1.015 0.42 0.58 0.00 0.00  2297.  2374      0      0

For this calculation phase (a layering phase which processes 4 variables simultaneously) the different columns represent:
- Rank: the rank of the active data.
- Name: the name of the well to which it belongs.
- X and Y: the 2D coordinates of the information.
- Initial: the initial value, as found in the data base. In this case of layering, the data consist of the depth of the intercept.
- Data: the data after it has been pre-processed for usage in the next calculation step. In this case of layering, data are converted into velocities.
- Pr1,...,Pr4: percentage spent in each layer. The percentage is set to 0 if the layer is not reached. In this case of layering (in velocity), the value represents the time percentage spent in each layer located above the intercept.
- Trend1,...,Trend4: trend used as an external drift for each layer. In this case of layering, the time of each layer is used as its external drift. The trend value is not printed for a layer which is not reached.


12.4.3 Setup of the geological sequence


Note - In this Case Study most of the surfaces constituting the original North Sea environment have
been kept with their original names in order to provide realistic results, however the layer-cake
sequence has been reduced - both in terms of covered area and in the number of units - and the
coordinate reference system has been transformed for confidentiality.
To achieve the Master File definition you must build up the list of surfaces that will be used and
you will also define the sets of parameters - geometry, petrophysics, faulting, etc. - possibly
attached to these surfaces.
Please refer to the Reference Guide for detailed explanations on the meaning of the various parameters that can be accessed while editing a surface.
We recommend to start defining the geological sequence from the top (BCU) to the bottom (Base
Statfjord). This is the order in which they will be presented in the list. For a start just define the different surface names and make sure to give them the proper types and codes.
The following table recalls the correspondence between the different surfaces and the designation
codes employed in Isatoil. There is only one Area which covers the entire area of interest - hence
the Area code = 1.
Area   Layer   Zone   Surface Name          Surface Type
1      0       0      BCU                   Upper Limit
1      0       1      ERODE 1               Upper Limit
1      0       2      ERODE 2               Upper Limit
1      1       0      Upper Brent - B1      Top Layering
1      2       0      Lower Brent - B4      Layer
1      2       1      Lower Brent - B5A     Zone
1      2       2      Lower Brent - B5B     Zone
1      2       3      Lower Brent - B6      Zone
1      3       0      Dunlin - D1           Layer
1      4       0      Statfjord - S1        Layer
1      5       0      Base Statfjord - BS   Layer

When the list is completely initialized you will need to Edit the different surfaces separately in
order to give them their parameters and constraints for computation.

12.4.3.1 Geometry definition


The following table summarizes the geometrical parameters that must be defined for the surfaces in this Case Study:

Surface Name          Calculated   T2D        EDL    EDZ
BCU                   No
ERODE 1               No
ERODE 2               No
Upper Brent - B1      No
Lower Brent - B4      Yes          Velocity   Time   No
Lower Brent - B5A     Yes                            No
Lower Brent - B5B     Yes                            No
Lower Brent - B6      Yes                            No
Dunlin - D1           Yes          Velocity   Time   No
Statfjord - S1        Yes          Velocity   Time   No
Base Statfjord - BS   Yes          Velocity   Time   No

The first 4 surfaces - Top Layering and Limit surfaces - cannot be calculated by Isatoil; they are already stored in the Grid file and will be used as data. All the other surfaces will be calculated.
- T2D indicates whether the Time to Depth conversion will be performed using an intermediate Velocity or directly in terms of Thickness.
- EDL indicates the type of external drift information possibly used during the Layering stage.
- EDZ indicates the type of external drift information possibly used during the Zonation stage.

12.4.3.2 Faulting definition


Fault Polygon files are located at the following place on your disk:
- $GTX_HOME/Datasets/Isatoil (UNIX)
- C:\Program Files\Geovariances\Isatis\Datasets\Isatoil (Windows)

The following table summarizes the faulting parameters that must be defined for the surfaces in this Case Study. The Unit must be set to Meter whenever polygons are used.

Surface Name          Surface Type   Faulting   Fault Polygon file   Count
BCU                   Upper Limit    No
ERODE 1               Upper Limit    No
ERODE 2               Upper Limit    No
Upper Brent - B1      Top Layering   Yes        b1.pol               22
Lower Brent - B4      Layer          Yes        b4.pol               22
Lower Brent - B5A     Zone           No
Lower Brent - B5B     Zone           No
Lower Brent - B6      Zone           No
Dunlin - D1           Layer          Yes        d1.pol               22
Statfjord - S1        Layer          Yes        s1.pol               23
Base Statfjord - BS   Layer          Yes        bst.pol              22

Count designates the number of fault polygons in the file. Polygons which do not lie within the rectangular area of interest are automatically discarded.

12.4.3.3 Contact definition


The following table summarizes the contact parameters that must be defined for the surfaces in this Case Study. The OWC and GOC values are assumed to be constant and are defined for the first index only.

Surface Name          Surface Type   Calculated   GOC (m)   OWC (m)
BCU                   Upper Limit    No           No        No
ERODE 1               Upper Limit    No           No        No
ERODE 2               Upper Limit    No           No        No
Upper Brent - B1      Top Layering   No           2570      2600
Lower Brent - B4      Layer          Yes          No        2600
Lower Brent - B5A     Zone           Yes          No        No
Lower Brent - B5B     Zone           Yes          2570      2600
Lower Brent - B6      Zone           Yes          No        No
Dunlin - D1           Layer          Yes          No        No
Statfjord - S1        Layer          Yes          No        No
Base Statfjord - BS   Layer          Yes          No        No

Note - Observe the particular case of Lower Brent - B4, where no GOC is provided: the only fluids that can be encountered in this zone are Oil and Water.


12.4.3.4 Petrophysics definition


The following table summarizes the petrophysical parameters - Porosity and Net to Gross Ratio - that must be defined for the surfaces in this Case Study:

                                     Porosity            Net to Gross Ratio
Surface Name          Surface Type   Calc   Norm   ED    Calc   Norm   ED
BCU                   Upper Limit    No     No     No    No     No     No
ERODE 1               Upper Limit    No     No     No    No     No     No
ERODE 2               Upper Limit    No     No     No    No     No     No
Upper Brent - B1      Top Layering   Yes    Yes    No    Yes    Yes    No
Lower Brent - B4      Layer          Yes    No     No    Yes    No     No
Lower Brent - B5A     Zone           No     No     No    No     No     No
Lower Brent - B5B     Zone           Yes    Yes    Yes   Yes    Yes    No
Lower Brent - B6      Zone           No     No     No    No     No     No
Dunlin - D1           Layer          No     No     No    No     No     No
Statfjord - S1        Layer          No     No     No    No     No     No
Base Statfjord - BS   Layer          No     No     No    No     No     No

- Calc indicates whether the petrophysical variable must be calculated or not.
- Norm indicates if the variable must be Normal Score transformed before the simulation process.
- ED indicates if the estimation (or simulation) should take an external drift into account.

12.4.3.5 Saturation definition


The following table summarizes the saturation parameters (three coefficients per fluid) that must be defined for the various surfaces in this Case Study; surfaces with no entry have no saturation defined:

Surface Name          Surface Type          Gas                      Oil
BCU                   Upper Limit
ERODE 1               Upper Limit
ERODE 2               Upper Limit
Upper Brent - B1      Top Layering   0.329  1.145  0.949    0.663  0.918  0.106
Lower Brent - B4      Layer          0.604  1.135  0.357    0.107  1.661  0.106
Lower Brent - B5A     Zone
Lower Brent - B5B     Zone           0.332  1.763  0.587    0.714  0.756  0.856
Lower Brent - B6      Zone
Dunlin - D1           Layer
Statfjord - S1        Layer
Base Statfjord - BS   Layer

12.4.3.6 Constant definition


The following table summarizes the constant parameters that must be defined for the various surfaces in this Case Study:

                                     Volume correction factors
Surface Name          Surface Type   Gas     Oil      Color
BCU                   Upper Limit                     Black
ERODE 1               Upper Limit                     Black
ERODE 2               Upper Limit                     Black
Upper Brent - B1      Top Layering   110     1.31     Yellow
Lower Brent - B4      Layer          105     1.44     Red
Lower Brent - B5A     Zone                            Pink
Lower Brent - B5B     Zone           110     1.42     Purple
Lower Brent - B6      Zone                            Orange
Dunlin - D1           Layer                           Green
Statfjord - S1        Layer                           Blue
Base Statfjord - BS   Layer                           White

12.4.3.7 Surface Statistics and verification
The Stats-1 and Env-1 buttons can be used in order to report individual statistics about the surface selected in the list.
- Stats-1 produces the basic statistics of all the information regarding the selected surface. The following example is obtained for the Upper Brent - B1 surface - obviously after the calculations have been performed:

General Statistics
==================
Layer : Upper Brent - B1 (Identification : Area = 1 - Layer = 1 - Zone = 0 - Top Layer - Faulted)
Grid  - Time value  : Nb = 5767 - Min = 2155.364 - Max = 2399.886
Grid  - Depth value : Nb = 5767 - Min = 2304.683 - Max = 2561.189
Wells - Depth value : Nb =   13 - Min = 2340.490 - Max = 2505.970
Wells - Porosity    : Nb =    8 - Min =    0.247 - Max =    0.302
Wells - Net/Gross   : Nb =    8 - Min =    0.769 - Max =    0.905


- Env-1 lists the parameters of the selected surface. The following example is obtained for the Upper Brent - B1 surface:

General Environment
===================
Layer : Upper Brent - B1
(Identification : Area = 1 - Layer = 1 - Zone = 0 - Top Layer - Faulted)
Geometry     : Must not be calculated (always read from the file)
Porosity     : Calculation : No special option
               Will be normal score transformed (before simulations)
Net to Gross : Calculation : No special option
               Will be normal score transformed (before simulations)
Saturation   : Not defined
Contacts     : Segment Variable : None
Vol. fact.   : Gas correction factor : 0.000000
               Oil correction factor : 1.528000

Displaying the Data
At this stage of the Case Study no surface has been calculated yet. However the reference depth surface - Top Layering - as well as the different time surfaces have been loaded, therefore we can already perform various types of graphical representations of this data. Obviously these representations will also apply to the results that will be obtained later in the project.

12.4.4 Map representations
The Display / Map application lets you visualize one of the following types of variables:
- Time
- Depth
- Isochrone - time interval -
- Isopack - depth interval -
- Velocity
- Porosity
- Net to Gross ratio
Let us first use Display / Map to visualize maps of some of the surfaces that are already available on the final grid.

12.4.4.8 Representing a time surface


We will first display the time map of the layer Upper Brent - B1. By clicking on the Check button we can see that time varies from 2155 ms to 2400 ms on this layer, and that time is defined on the entire grid - 5767 nodes.


(snap. 12.4-1)

We choose to represent:
- The time variable as a colored image - using the automatic Color Scale named Rainbow -
- The corresponding legend
- The time variable as a series of contour lines - click on the Edit button to access the contour lines definition window -

(snap. 12.4-2)

In this example, the variable is smoothed prior to the isoline representation - using 3 passes of the filtering algorithm - and two sets of contour lines are represented:
- the multiples of 10 ms, using a red solid line
- the multiples of 50 ms, using a black solid line, with the label represented on a pink background


- The well information: the corresponding Edit button is then used to define the characteristics of the point display.

(snap. 12.4-3)

In this example the intercepts with the target surface - Upper Brent - B1 - are represented with a "+" sign and the well name is displayed in a rectangular white box.
- The fault polygons


Click on RUN to obtain the following map:

(fig. 12.4-1)

12.4.4.9 Representing an isochrone map
An Isochrone surface - i.e. the interval between two time maps - is not stored in the database. It is calculated at run time whenever necessary, for instance for display. Let us display the isochrone surface between the Upper Brent - B1 (the First Variable) and the Base Statfjord - BS (the Second Variable) time maps. We can Check that this isochrone varies between 343 ms and 507 ms.
The isochrone is represented as a colored image - using the same Rainbow Color Scale as before - but without contour lines.
The Fault polygons flag displays the polygons corresponding to the upper surface - in black - while the Auxiliary Fault polygons flag activates those of the lower surface - in red. The well display only represents the intercepts with the upper surface.


(fig. 12.4-1)

This display clearly shows the shift of the non-vertical fault planes through their intersections with
two time surfaces located around 250 ms apart. It also shows the impact of the faulting on the isochrone map.
Note that in the upper-right corner the three faults intersect the Base Statfjord - BS level although
they are not visible on the Upper Brent - B1 level - at least within the grid extension -

12.4.4.10 Representing a depth map
The Top Layering reference surface - Upper Brent - B1 - is a depth variable that we can also display at this stage. We can take into account the Upper Limit surfaces - BCU, ERODE1 & ERODE2 - which truncate the reference surface, hence the following map where the truncated area appears in black:

(fig. 12.4-1)

In terms of statistics we can check that 2135 grid nodes are defined out of 5767.

12.4.5 Time Section representation
Any Time Section through the geological sequence can be displayed, given the 2 points that define the extremities of the section's segment. All the time surfaces are interpolated along this section while the layers are represented with the colors that have been defined in the Master File.
Several representation options are still locked at this stage; however we can:
- represent the legend
- draw the section which corresponds to the first bisector of the field - click on the Automatic button to initialize the segment's coordinates -
- switch OFF the Automatic Vertical Scale and instead use the following parameters:
  - default Minimum and Maximum elevations - click on Automatic -
  - a Vertical Scale Factor of 300 to exaggerate the thickness for better legibility.
The figure below clearly shows the impact of (at least) one non-vertical fault.


(fig. 12.4-2)

We can add the traces of the fault polygons corresponding to each layer on top of the previous section. The intersection between the vertical section and the fault polygons attached to a given layer is represented as vertical lines - with the same color coding as the layer. This helps checking that the fault polygons indeed match the time maps.

(fig. 12.4-3)

There is an interactive link between the map and section representations, so that you can:
- display the location of the current section on time maps, depth maps, etc.
- use any map to digitize a new segment while the sections are refreshed simultaneously.


12.5 Building the Reservoir Geometry
Building the reservoir geometry consists in estimating the surfaces which divide the layer-cake into layers and zones that are vertically homogeneous in terms of petrophysical parameters.
This operation makes use of the information regarding the intercept locations (from the well geometry file), the fault polygons and possible variables (read from the grid file) which are used as external drifts.
Building the geometrical frame of the sequence is essentially performed in two nested steps:
- first a Layering, which estimates the seismic units - layers -
- then a Zonation, which subdivides each seismic unit into several zones

12.5.1 Layering
12.5.1.1 Correlation for Layering
The Geometry / Seismic Layering / Correlations application allows us to check the hypothesis concerning the correlation between layer thickness and the trend surfaces used as external drift - if applicable.

Note - In this Case Study we have specified that Layering should be performed through velocity - rather than directly in depth - using the time maps as external drift surfaces.

The application represents - in separate graphics - the behavior of the interval velocity against time, for each of the four layers constituting the sequence.

(snap. 12.5-1)


The system first derives the interval velocities from the apparent velocity information at the wells (deduced from the Top Layering reference surface). For layer #N the interval velocity is obtained by:
- subtracting the thickness of all the layers located above layer #N - these thicknesses are simply estimated by their trend -
- dividing the residual thickness of layer #N by the time interval
Obviously, the deeper the surface, the less accurate - and often the less numerous - the represented data.
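This two-step computation can be sketched as follows for a single well and a single layer #N. All the names are hypothetical; in Isatoil the thicknesses of the overlying layers come from the trend regressions established in this panel:

def interval_velocity(intercept_depth, top_layering_depth,
                      trend_thicknesses_above, time_interval):
    """Interval velocity of layer #N at one well (a sketch).

    intercept_depth         : depth of the intercept at the base of layer #N (m)
    top_layering_depth      : depth of the Top Layering reference surface (m)
    trend_thicknesses_above : trend-estimated thicknesses of the layers above #N (m)
    time_interval           : time interval spent in layer #N (s)
    """
    residual = (intercept_depth - top_layering_depth) - sum(trend_thicknesses_above)
    return residual / time_interval        # interval velocity in m/s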
Select the following parameters for representing the well names:
- switch ON the flag which indicates that names will be posted on the graphics
- select the symbol (style and size) to be posted - e.g. a 0.2 cm star -
- select the background color for the label's box - e.g. white -
- choose an angle for writing the well name - e.g. 0. -

(snap. 12.5-2)

The following graphics show a good organization of the well data around the regression line for B4
and D1, and a more dispersed cloud for Statfjord S1 and Base Statfjord, as expected.


(fig. 12.5-1)

From the 52 active data remaining, the equation of the trend for each layer is produced in the message area:
Compression stage:
- Initial count of data
= 59
- Final count of active data = 52

The following trends are defined from the data


Variable 1:
Lower Brent - B4 =
-15.50940
Variable 2:
Dunlin - D1 =
-0.88130
Variable 3:
Statfjord - S1 =
-1.23549
Variable 4:
Base Statfjord =
-1.34698

+
+
+
+

Trend
Trend
Trend
Trend

*
*
*
*

(
(
(
(

0.00708)
0.00090)
0.00109)
0.00110)


12.5.1.2 Model Fitting for Layering
We must now build, with Geometry / Seismic Layering / Model, the geostatistical model which will be used to estimate - or simulate - all the layers simultaneously.

(snap. 12.5-1)

The experimental simple and cross-covariances are calculated in an isotropic manner and for a given number of lags - e.g. 10 lags of 1000 m.
Let us click on Edit and define the following set of Basic Structures:
- a first Spherical structure with a range of 1000 m
- a second Exponential structure with a range of 3000 m
Let us switch ON the flag Automatic Sill Fitting so that Isatoil will compute the set of optimal sills - for all simple and cross-structures - by minimizing the distance between the experimental covariances and the model.

Note - The matrix of the sills must fulfill the conditions for positive definiteness.
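This admissibility condition can be verified numerically: a symmetric matrix of sills is acceptable if all its eigenvalues are non-negative. A small numpy check, applied here to the sill matrix of the spherical structure reported further below (the tolerance accounts for the rounding of the printed sills):

import numpy as np

# Sill matrix of the spherical structure (taken from the printout below)
B = np.array([[ 0.0351, -0.0112, -0.0168,  0.0032],
              [-0.0112,  0.0112,  0.0055,  0.0026],
              [-0.0168,  0.0055,  0.0080, -0.0014],
              [ 0.0032,  0.0026, -0.0014,  0.0020]])

eig = np.linalg.eigvalsh(B)                  # eigenvalues in ascending order
assert eig.min() > -1.0e-3, "sill matrix is not positive semi-definite"
# Two eigenvalues are (numerically) zero: the 4 depth variables share only
# 2 independent factors for this structure, as the report below confirms.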
By switching ON the flag Printout we will obtain the following report in the Message Window:
- for each pair of variables, the array of experimental covariances and the corresponding values in the model:

Printout of the experimental and theoretical variograms

Covariance for variable Lower Brent - B4
Rank   Pairs   Distance   Experimental   Theoretical
1      42       325.67m     0.015105      0.019366
2      172      937.45m    -0.001934      0.002372
3      136     1958.54m    -0.004176      0.000099
4      64      2919.49m    -0.007718      0.000038
5      6       3793.50m    -0.004702      0.000016

Cross-covariance for variables Dunlin - D1 and Lower Brent - B4
Rank   Pairs   Distance   Experimental   Theoretical
1      86       265.58m     0.004259      0.006290
2      290      975.03m     0.000550      0.000586
3      196     1985.13m    -0.002676      0.000038
4      92      2897.20m     0.000475      0.000015
5      8       3820.12m    -0.005808      0.000006

.../...
- the parameters defining the model - i.e. for each basic structure, the coregionalization matrix, the coefficients of the linear model and the eigen vectors and values:

Number of basic structures = 2

S1 : Spherical - Range = 1000.00m

Variance-Covariance matrix :
             Variable 1   Variable 2   Variable 3   Variable 4
Variable 1     0.0351      -0.0112      -0.0168       0.0032
Variable 2    -0.0112       0.0112       0.0055       0.0026
Variable 3    -0.0168       0.0055       0.0080      -0.0014
Variable 4     0.0032       0.0026      -0.0014       0.0020

Decomposition into factors (normalized eigen vectors) :
             Variable 1   Variable 2   Variable 3   Variable 4
Factor 1       0.1859      -0.0700      -0.0892       0.0119
Factor 2      -0.0228      -0.0792       0.0090      -0.0436
Factor 3      -0.0006       0.0002      -0.0014      -0.0003
Factor 4       0.0000       0.0000       0.0000       0.0000

Decomposition into eigen vectors (whose variance is eigen values) :
             Variable 1   Variable 2   Variable 3   Variable 4   Eigen Val.   Var. Perc.
Factor 1       0.8524      -0.3211      -0.4091       0.0544       0.0476       84.42
Factor 2      -0.2430      -0.8459       0.0957      -0.4651       0.0088       15.57
Factor 3      -0.3821       0.1043      -0.9013      -0.1756       0.0000        0.00
Factor 4      -0.2616      -0.4130      -0.1056       0.8660       0.0000        0.00

S2 : Exponential - Scale = 3000.00m

Variance-Covariance matrix :
             Variable 1   Variable 2   Variable 3   Variable 4
Variable 1     0.0007      -0.0001       0.0004       0.0007
Variable 2    -0.0001       0.0000      -0.0000      -0.0001
Variable 3     0.0004      -0.0000       0.0003       0.0004
Variable 4     0.0007      -0.0001       0.0004       0.0007

Decomposition into factors (normalized eigen vectors) :
             Variable 1   Variable 2   Variable 3   Variable 4
Factor 1       0.0259      -0.0030       0.0163       0.0262
Factor 2       0.0000       0.0000       0.0000       0.0000
Factor 3       0.0000       0.0000       0.0000       0.0000
Factor 4       0.0000       0.0000       0.0000       0.0000

Decomposition into eigen vectors (whose variance is eigen values) :
             Variable 1   Variable 2   Variable 3   Variable 4   Eigen Val.   Var. Perc.
Factor 1       0.6421      -0.0736       0.4033       0.6477       0.0016      100.00
Factor 2       0.7666       0.0617      -0.3379      -0.5426       0.0000        0.00
Factor 3       0.0000      -0.1628      -0.8458       0.5082       0.0000        0.00
Factor 4       0.0000      -0.9820       0.0887      -0.1669       0.0000        0.00


The panel also displays the fitted model against the experimental curves:

(fig. 12.5-1)

Each view corresponds to one pair of variables - e.g. D1 vs B4. Only the wells that intercept both layers are retained, and the experimental quantity is averaged at distances which are multiples of the lag - up to the maximum number of lags. The experimental curves are represented in black while the model appears in red. The values posted on the experimental curves correspond to the numbers of pairs averaged at the given distance.

Note - For better legibility only 6 of the actual 10 views are represented here.

The geostatistical model is stored in a Standard Parameter File named Model_area which will be automatically recognized when running the Base Case or the simulations later on.


12.5.1.3 Base Case for Layering


Let us now perform the estimation - a.k.a Base Case - of all the layers simultaneously.
The Geometry / Seismic Layering / Base Case application automatically loads the geostatistical
model - which was stored in the Standard Parameter File named Model_area - and uses all the relevant information to perform the one-step cokriging of all the seismic layers.
The calculated surfaces are stored in the result Grid using the Isatoil naming convention depth_area_layer_zone.

(snap. 12.5-1)

Basic statistics on the estimated surfaces are reported at the end of the calculation:

Statistics on the base case results
===================================
Layer : Lower Brent - B4 (Id : Area = 1 - Layer = 2 - Zone = 0 - Faulted)
Grid - Depth value : Nb = 5767 - Min = 2362.685 - Max = 2692.186
Layer : Dunlin - D1 (Id : Area = 1 - Layer = 3 - Zone = 0 - Faulted)
Grid - Depth value : Nb = 5767 - Min = 2380.481 - Max = 2799.331
Layer : Statfjord - S1 (Id : Area = 1 - Layer = 4 - Zone = 0 - Faulted)
Grid - Depth value : Nb = 5767 - Min = 2540.824 - Max = 3096.612
Layer : Base Statfjord (Id : Area = 1 - Layer = 5 - Zone = 0 - Faulted)
Grid - Depth value : Nb = 5767 - Min = 2724.232 - Max = 3363.551

By switching ON the flag named Replace estimation with one simulation, Isatoil will perform a geostatistical Simulation instead of a Kriging, using the Turning Bands method. The results are stored in the same variables as for the Base Case and they can be visualized to get a feeling for the amount of variability of the simulation outcomes.
The following parameters are required by the simulation process (an illustrative sketch follows the Note below):
- the seed used for the generation of the random numbers
- the number of turning bands used in the non-conditional simulation algorithm

Note - Since the simulated results are stored in the same variables as the Base Case, always make sure to run the Base Case one more time before moving to the Zonation phase.
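The Turning Bands implementation is internal to Isatoil, but the role of the seed and of the number of elementary components can be illustrated with a related spectral (random-cosine) method, which also builds a non-conditional Gaussian field as the sum of many elementary fields. This is a hedged stand-in - not the Isatoil algorithm - written here for a Gaussian covariance C(h) = exp(-|h|^2/a^2):

import numpy as np

def gaussian_field(xy, scale=1000.0, n_components=100, seed=324321):
    """Non-conditional standard Gaussian field at 2D points (random-cosine method).

    xy    : (n, 2) array of coordinates
    scale : the parameter a of the covariance C(h) = exp(-|h|^2 / a^2)
    More components improve the Gaussian behaviour, much like using more
    turning bands; the seed makes the outcome reproducible.
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, np.sqrt(2.0) / scale, size=(n_components, 2))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(xy @ omega.T + phase).sum(axis=1)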


12.5.1.4 Representing the results of Layering
Let us now look at the results of the Layering phase by displaying the surface maps with Display / Map. The following maps of the Base Case results - as well as of the Top Layering surface Upper Brent - B1 - share the same Color Scale:

(fig. 12.5-1)

We can also visualize the estimated surfaces along a vertical section, using the Display / Cross Section application. Similar to the Display / Time Section representation described before, this type of section is here performed in depth - with a wide range of available options.

(snap. 12.5-1)

Let us draw a section along the segment defined by the two points (X=605, Y=1119) and (X=3060, Y=2932). We shall activate the truncation of the estimated surfaces by the Limit Surfaces and also ask to represent the well information - names and intercepts - on top of the section. By setting the Maximum distance to the fence equal to 40 m, this section only shows three wells: 145, 152 & 191.
The following figure shows the cross-section as well as two maps corresponding to the surfaces Statfjord - S1 and Dunlin - D1.


(fig. 12.5-2)

The influence of the major fault - which is clear on this section - is inherited from the time maps
that have been used as external drifts.

12.5.2 Zonation
For the sake of simplicity in this Case Study, the zonation has been restricted to the Lower Brent unit only. Moreover, no external drift will be used during the zonation.

12.5.2.5 Model Fitting for Zonation
The modeling of the Lower Brent unit involves the following surfaces:
- Lower Brent - B4, which is the top surface of the layer
- Lower Brent - B5A
- Lower Brent - B5B
- Lower Brent - B6
- Dunlin - D1, which is the bottom surface - i.e. the top of the next layer -

The top and bottom surfaces that were estimated during the layering stage are now considered as known input data. By adding the bottom surface as an extra constraint - through an original Collocated Cokriging method - the Zonation ensures that the sum of the thicknesses of the four zones will match the total thickness of the unit.
The Geometry / Geological Zonation / Model application will be used to compute the experimental simple and cross-covariances for a given number of lags - e.g. 10 lags of 500 m.
Let us click on Edit and define the following set of Basic Structures for the model:
- a first Spherical structure with a range of 2000 m
- a second Nugget Effect structure
Let us switch ON the flag Automatic Sill Fitting so that Isatoil will compute the set of optimal sills - for all simple and cross-covariances - by minimizing the distance between the experimental covariances and the model.
The following statistics - truncated here - are reported when the model is established:
- for the spherical component:

.../...
Coregionalization matrix (covariance coefficients) :
             Variable 1   Variable 2   Variable 3   Variable 4
Variable 1    27.3009       1.6692      -2.2599       5.3611
Variable 2     1.6692      10.1627      -0.7517      -5.4312
Variable 3    -2.2599      -0.7517       2.0555       2.8640
Variable 4     5.3611      -5.4312       2.8640       9.1236

- for the nugget effect:

.../...
Coregionalization matrix (covariance coefficients) :
             Variable 1   Variable 2   Variable 3   Variable 4
Variable 1    14.0727     -10.6434      -2.2063      13.4752
Variable 2   -10.6434      75.1475     -75.7311       3.6356
Variable 3    -2.2063     -75.7311      89.6306     -18.2533
Variable 4    13.4752       3.6356     -18.2533      45.1714

The model is automatically saved in a Standard Parameter File named Model_1_2 which will be
automatically recognized when running the Base Case or the simulations later on.

12.5.2.6 Base Case for Zonation


Let us now perform the estimation - a.k.a Base Case - of all the zones in the Lower Brent layer.
The Geometry / Geological Zonation / Base Case application automatically loads the geostatistical
model - which was stored in the Standard Parameter File named Model_1_2 - and uses all the relevant information to perform the one-step cokriging of all the zones.
The calculated surfaces are stored in the result Grid using the Isatoil naming convention depth_area_layer_zone -

528

The following basic statistics are reported for the three estimated zones - based on 64 active data out of 77 intercepts:

Name                Minimum   Maximum
Lower Brent - B5A    2365      2727
Lower Brent - B5B    2380      2779
Lower Brent - B6     2380      2794

12.5.2.7 Representing the results of Zonation
The same vertical section that was visualized after the Layering may now be represented with the extra surfaces corresponding to the zones of the Lower Brent unit:

(fig. 12.5-1)

This section does not show the fault surfaces (as interpolated within the package for chopping the zones), because of the small extension of the fault polygon in the vicinity of the cross-section segment.

12.5.3 Running a simulation (instead of an estimation)
The program offers a way to check that the estimation process (using the linear cokriging method) produces a smoothed picture of the surfaces. To get accurate volume estimates above contacts (which is by definition a non-linear operation), we must use the simulation technique instead, which reproduces the actual variability of the surfaces.
For volumetrics, the usual procedure consists of drawing a large number of simulations, calculating the volume for each one of them and presenting a risk curve.
As a preliminary task, we will simply run the base case procedures for the layering and the zonation, but replacing the estimation procedure by the simulation one: this facility produces a single simulation outcome. It requires the definition of:
- the seed used for the random number generator: 324321
- the number of turning bands, which is the essential parameter of the simulation technique used: 100.
The following figure shows the map of the thickness between the top surfaces of Lower Brent - B5A and Lower Brent - B6 (isopack), either for the simulated version (on the left) or the estimated version (on the right). Although the spread of values is different (up to 84.3 m for the simulation and 71.5 m for the estimation - using the Check button in the display window), the same color scale is used (lying between 50 m and 85 m). Any thickness smaller than 50 m is left blank: this is the case for the fault traces for example.

(fig. 12.5-2)


12.6 Filling the Units With Petrophysics
This step is dedicated to filling each unit of interest with petrophysical information (porosity or net to gross ratio). Petrophysical variables have been defined in the Master File for the following units:
- Upper Brent - B1
- Lower Brent - B4
- Lower Brent - B5B
Since the two petrophysical variables are assumed to be independent one from the other - and also from one unit to another - we must study 6 different variables separately.

12.6.1 Normal Score Transform
Petrophysics / Normal Score Transformation
In some cases the distribution of a petrophysical variable is far from the normal distribution. Therefore, in order to be compatible with the simulation technique (Turning Bands), which requires a multinormal framework, the data must be normal score transformed beforehand. The geostatistical models are then derived and the simulation process is performed on the transformed data. The results are finally back-transformed into the raw scale.
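The principle of the transform can be sketched in a few lines (frequency inversion on the empirical distribution). This is an illustration only: Isatoil actually fits a Hermite anamorphosis model, as described below.

import numpy as np
from scipy.stats import norm

def normal_score(z):
    """Empirical normal score transform of a 1D sample (frequency inversion)."""
    ranks = np.argsort(np.argsort(z))        # ranks 0 .. n-1 (ties ignored here)
    p = (ranks + 0.5) / len(z)               # plotting positions in (0, 1)
    return norm.ppf(p)                       # gaussian equivalents of the data

# With few data the variance of the transformed sample falls below 1 (0.816
# in the report below). The back-transform interpolates the inverse mapping
# and truncates the result to the authorized interval, e.g. [0, 1].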
The need for a normal score transform is defined for each petrophysical variable and for each unit in the Master File. This is the case for:
- Porosity and Net to Gross ratio for Upper Brent - B1
- Porosity and Net to Gross ratio for Lower Brent - B5B
Therefore, the same process must be performed 4 times, although it is only described once here - for the Net to Gross ratio of Lower Brent - B5B.

This procedure offers several possibilities, such as:
- defining the authorized interval for the variable: the Net to Gross variable will be defined between 0 and 1, while the porosity lies between 0 and 0.4. This definition is essential in order to avoid the back-transformed results reaching unexpected values.
- defining additional lower and upper control points which modify the experimental cumulative density function for extreme values: this option is necessary when working with a reduced number of active data; however it will not be used in this case study.
- choosing the count of Hermite polynomials for the fitted anamorphosis function (set to 30)
- displaying the experimental and theoretical probability density functions and/or bar histograms (the count of classes is set to 20)

(snap. 12.6-1)

The next paragraph informs us of the quality of the normal score transform as it produces:
- statistics on the gaussian transformed data (optimally, the mean should be 0 and the variance 1):

Absolute Interval of Definition:
On gaussian variable: [ -10.0000 , 10.0000 ]
On raw variable:      [   0.0000 ,  1.0000 ]

Practical Interval of Definition:
On gaussian variable: [ -1.7962 , 1.8000 ]
On raw variable:      [  0.0000 , 0.9364 ]

Statistics on Gaussian Variable (Frequency Inversion):
Minimum   = -1.567957
Maximum   =  1.567957
Mean      = -0.000000
Variance  =  0.816242
Std. Dev. =  0.903461

- and statistics on the difference between the initial data values and their back-and-forth transformed values:

Statistics on Z-Zth:
Mean      = -0.005603
Variance  =  0.001493
Std. Dev. =  0.038639

Finally, the next figure shows the comparison between the experimental (in blue) and theoretical (in black) probability density functions (on the left) and bar histograms (on the right).

(fig. 12.6-1)

The anamorphosis model (for each petrophysical variable and for each unit) is automatically saved in a Standard Parameter File whose name follows the naming convention (Psi_Poro_1_2_2 or Psi_Net_1_2_2 for example).
If the printout option is switched on, the (normalized) coefficients of the different Hermite polynomials are printed out:

Normalized coefficients for the Hermite polynomials

          0           1           2           3           4
 0+   0.723538   -0.195402   -0.090866   -0.006407    0.031800
 5+   0.029486   -0.002152   -0.026471   -0.012341    0.017568
10+   0.018136   -0.008480   -0.018879    0.001060    0.016816
15+   0.004225   -0.013377   -0.007465    0.009474    0.008972
20+  -0.005671   -0.009125    0.002297    0.008298    0.000479
25+  -0.006826   -0.002595    0.004991    0.004064   -0.003021
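These coefficients define the anamorphosis Z = sum_n psi_n * eta_n(Y), where the eta_n are normalized Hermite polynomials. The sketch below evaluates such an expansion with the usual geostatistical recurrence; the sign convention is an assumption and may differ from the Isatis internals:

def hermite_expansion(y, psi):
    """Evaluate z = sum_n psi[n] * eta_n(y) (a sketch).

    Assumed convention: eta_0(y) = 1, eta_1(y) = -y and
    eta_{n+1}(y) = (-y * eta_n(y) - sqrt(n) * eta_{n-1}(y)) / sqrt(n + 1).
    """
    h_prev, h_curr = 1.0, -y
    z = psi[0] * h_prev + psi[1] * h_curr
    for n in range(1, len(psi) - 1):
        h_prev, h_curr = h_curr, (-y * h_curr - n ** 0.5 * h_prev) / (n + 1) ** 0.5
        z += psi[n + 1] * h_curr
    return z

# psi[0] is the mean of the raw variable, while the higher-order terms
# refine the shape of the back-transformed distribution.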

12.6.2 Correlations for the petrophysical variables
The Petrophysics / Correlations application enables checking the compatibility of a petrophysical variable with the external drift which will be used during the estimation (or simulation) stage. In our case, only the porosity variable of the Lower Brent - B5B makes use of an external drift.
In this case study the results, which rely on only 13 active data, are rather poor. The linear regression (the trend is defined from the data) is:

Variable 1: Lower Brent - B5B = 0.29590 + Trend * ( -0.00003)

(fig. 12.6-2)

12.6.3 Model fitting for the petrophysical variables
Each petrophysical variable of each unit must be modelled separately, using the same application Petrophysics / Model.
Whenever a variable has been normal score transformed, two individual models must be fitted:
- one model on the raw variable, used for the estimation
- one model on the gaussian variable, used for the simulation
The only differences with the other geostatistical modelling panels are that:
- the current process is monovariate
- there is no restriction to strict stationarity: variograms are used instead of covariances, and non-bounded theoretical models - e.g. a linear model - are authorized.
The following table summarizes the structures that have been fitted automatically, based on experimental quantities generally calculated with 10 lags of 300 m each:

Unit   Variable   Data       Type          Sill     Range
B1     N/G        Raw        Nugget        0.0022   No
B1     N/G        Gaussian   Nugget        0.8048   No
B1     Porosity   Raw        Exponential   0.0004   2000 m
B1     Porosity   Gaussian   Nugget        0.6754   No
                             Exponential   0.2827   2000 m
B4     N/G        Raw        Nugget        0.0008   No
                             Spherical     0.0003   1000 m
B4     Porosity   Raw        Nugget        0.0002   No
                             Linear        0.0006   10000 m
B5B    N/G        Raw        Spherical     0.0764   2000 m
B5B    N/G        Gaussian   Nugget        0.1426   No
                             Spherical     1.1238   2000 m
B5B    Porosity   Raw        Spherical     0.0006   2000 m
B5B    Porosity   Gaussian   Spherical     1.1927   2000 m

12.6.4 Base case for the petrophysical variables
The estimation of each petrophysical variable defined in the Master File is achieved with the Petrophysics / Base Case application.
During the estimation process, a post-processing test is performed in order to truncate the resulting values between 0 and 1. The same operation is also performed in the simulations.
The following statistics are obtained on 5767 samples:

Unit   Variable   Minimum   Maximum
B1     Porosity    0.251     0.302
B4     Porosity    0.290     0.298
B5B    Porosity    0.193     0.266
B1     N/G         0.854     0.854
B4     N/G         0.952     0.983
B5B    N/G         0.037     0.931

12.6.5 Map representation for the petrophysical variables
We can use Display / Map to display the petrophysical base case results.
In comparison with the maps drawn before, note that the well information now comes from the Well Petrophysics file, therefore the samples no longer carry the well name.
The next figure shows the estimation of the Porosity in the Lower Brent - B5B unit:

(fig. 12.6-3)


12.7 Volumetrics
This section introduces the calculation of accurate volumes based on the results of geostatistical estimation and/or simulations. There are several levels of detail in the reported volumes, since the volumetrics algorithm takes into account the following parameters:
- volumes are computed separately for each Unit of the sequence,
- volumes are calculated separately for Oil and Gas, above the relevant contacts,
- volumes are computed either as Gross Rock or Oil in Place - if petrophysics is used -,
- volumes are reported separately per areal Polygon of interest.

12.7.1 Polygon definition
In this Case Study we have considered three polygons, the vertices of which are stored in the ASCII file named polzone.dat. Each polygon starts with a star character (*) typed in the first column, followed by a number of lines which contain the coordinates of the polygon vertices:

*
  126.43   1803.66
  490.45   2167.29
  916.61   1936.70
 1227.35   1803.66
 1404.91    996.58
  339.51    996.58
  144.96   1733.48
*
  259.61   2859.07
  490.45   2167.29
 1227.35   1803.66
 1582.48   2663.96
  943.24   3621.81
  259.61   3222.70
  253.23   2923.09
*
  339.51    996.58
  969.88    349.14
 1236.23    358.01
 1502.58    233.85
 1955.37    393.49
 1689.02   1014.32
 1404.91    996.58

The polygon coordinates are expressed in meters in this example. A polygon does not need to be closed, since Isatoil will automatically close it if necessary.
The following illustration has been obtained with Isatis. The polygons have been loaded from the ASCII file named polzone.hd - which contains the proper header organization - and have been displayed with Display / Polygons on top of a time map of Dunlin - D1.


(fig. 12.7-1)

Note - The formats of polygon files for Isatis and Isatoil are different. It is not necessary to load the polygons inside the Isatis database unless you wish to produce a graphic representation such as the one above.


12.7.2 The volumetrics algorithm

(snap. 12.7-1)

Each volume results from the integration, within a unit:
- between a top and a bottom
- between a lower and an upper contact:
  - for gas contents: the lower contact is the GOC and there is no upper contact
  - for oil contents: the upper contact is the GOC and the lower contact is the OWC
- of the thickness, for the gross rock volume
- of the product of the thickness by the petrophysical parameters, for the in-place volume

All these operations are non-linear (as soon as contacts are involved). A skilled geostatistician knows that applying a non-linear operation to the result of a linear process (such as kriging) leads to biased estimations. It is recommended to run simulations instead.
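For one grid cell of one unit, the integration above reduces to a clipped thickness. A per-cell sketch for the oil column, assuming constant contacts and a 50 m x 50 m mesh (all names are hypothetical):

def oil_cell_volumes(top, bottom, goc, owc, cell_area=50.0 * 50.0,
                     poro=None, ntg=None):
    """Oil gross rock volume (and optionally in-place volume) for one cell.

    top, bottom : depths of the unit at this cell (positive downwards, m)
    goc, owc    : contact depths; the oil column lies between GOC and OWC
    """
    z_up = max(top, goc)                      # upper bound of the oil column
    z_dn = min(bottom, owc)                   # lower bound of the oil column
    grv = max(z_dn - z_up, 0.0) * cell_area   # gross rock volume (m3)
    if poro is None or ntg is None:
        return grv
    return grv, grv * poro * ntg              # in place, before correction factor

The gas column is handled in the same way, from the top of the unit down to the GOC, and summing the cells falling inside a polygon yields the figures reported per polygon below.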


Each simulation produces a realistic outcome and therefore a plausible volume result. Drawing several simulations then leads to the distribution of possible volumes, from which various statistics can be derived (a sketch follows this list):
- on the volumes: mean, P5, P50 (median) or P95
- on a pixel basis, in order to produce probability and quantile maps
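Once a volume has been computed for each simulation, these risk statistics are plain empirical quantiles of the set of outcomes (a sketch; beware that quantile naming conventions vary from one company to another):

import numpy as np

def risk_statistics(volumes):
    """Mean and empirical quantiles of the volumes drawn from many simulations."""
    v = np.asarray(volumes, dtype=float)
    return {"mean": v.mean(),
            "P5":   np.percentile(v, 5.0),
            "P50":  np.percentile(v, 50.0),   # median of the risk curve
            "P95":  np.percentile(v, 95.0)}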

The general principle consists of calculating one or several block models and deriving from them the different volumes (per polygon, per layer). A block model is a set of layers and petrophysical variables, all these surfaces (either geometrical or petrophysical) being calculated consistently. Each block model is the result of the following six nested elementary operations:
- defining the Limit Surfaces (always provided as external variables)
- defining the Top Layering Surface (always provided as an external variable)
- performing the layering phase
- performing the zonation phase
- painting each unit with porosity
- painting each unit with net to gross ratio

Each operation has two possible statuses, according to the flag Already calculated:
- ON: the operation is not performed; the resulting surface(s) must already exist in the grid file with a name which follows the naming convention.
- OFF: the operation is performed during the Volumetrics procedure. The resulting surface(s) are usually not stored in the grid file (see the Simulation Parameters panel for the exception).

The surface(s) (either calculated or read from the grid file) can be the result of one of the two following procedures:
- the base case, which involves the kriging technique
- a conditional simulation
Note - In particular, this allows the user to derive volume from kriged estimates, regardless of the
bias of the result.

12.7.3 Volumetrics using the Base Case


Note - Although the following operation is not recommended because of the bias on the calculated
volumes, we will first evaluate the volumes of oil and gas for each layer and for each polygon,
based on the base case results.
The base case has already been performed for:


- the layering phase
- the zonation phase
- the petrophysical phase, for both Porosity and the Net to Gross ratio

Therefore we can switch ON the Already calculated flags for all the phases (including the Petrophysical steps), together with the Base Case option.
This estimation will serve as a reference; therefore the values of the GOC and OWC for each unit are set to the following constant values in the Master File:

- the GOC is fixed to 2570m
- the OWC is fixed to 2600m

Isatoil returns the following figures - expressed in 10⁶ m³ - per polygon and per zone:

- GRV is the gross rock volume, which only depends on the reservoir geometry
- IP is the volume in place, obtained as the product of the geometry, the petrophysical variables and the volume correction factor

Layer   Polygon   Gas GRV   Gas IP    Oil GRV   Oil IP
B1      1         110.78    1987.26   7.65      2.28
B1      2         155.31    2716.52   0.12      0.04
B1      3         67.32     1159.69   0.40      0.12
B4      1         0.        0.        27.05     6.52
B4      2         0.        0.        50.04     12.09
B4      3         0.        0.        29.27     7.03
B5B     1         0.16      1.92      1.78      0.35
B5B     2         7.08      90.73     6.41      1.59
B5B     3         3.43      47.93     2.60      0.53

It also produces the same results:

- per polygon and per layer: regrouping all the zones of a layer
- per area: regrouping all the zones and layers of the same area
- per polygon: regrouping all the zones, layers and areas
- globally: regrouping all zones, layers, areas and polygons

Note - When the results of several polygons are regrouped, the program simply adds the results of each individual polygon, without checking that the polygons do not overlap.


The last result will serve as a global reference:

Gas GRV   344.07
Gas IP    6004.04
Oil GRV   125.32
Oil IP    30.54

12.7.4 Randomization of the contacts


The next operation is to return to the Master File menu in order to modify the initial contact values:

Surface           Type           GOC(m)   OWC(m)
Upper Brent B1    Top Layering   2570     T(2600,-5,+2)
Lower Brent B4    Layer          No       T(2600,-3,+2)
Lower Brent B5B   Zone           2570     U(2598,2602)

Where T(2600,-5,+2) means a triangular law with a minimum of 2595, a maximum of 2602 and a mode of 2600, and U(2598,2602) a uniform law between 2598 and 2602.
For each volume calculation, the value of the contacts is drawn at random according to the law as
defined in the Master File panel (for each layer, each fluid and each index). These random numbers
use a random number generator which depends on the seed number that can be defined in the Simulation Parameters panel (the other parameters of the panel will be discussed later): changing the
seed number will alter the following Volumetrics results, even when based on the base case process.
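As a hedged sketch of this randomization (the actual Isatoil generator is internal, and the seed value below is arbitrary), the laws of the table translate directly into standard draws:

    import numpy as np

    rng = np.random.default_rng(seed=423141)            # plays the role of the seed number
    goc = 2570.0                                        # kept constant for B1 and B5B
    owc_b1  = rng.triangular(2595.0, 2600.0, 2602.0)    # T(2600,-5,+2)
    owc_b4  = rng.triangular(2597.0, 2600.0, 2602.0)    # T(2600,-3,+2)
    owc_b5b = rng.uniform(2598.0, 2602.0)               # U(2598,2602)

    print(goc, owc_b1, owc_b4, owc_b5b)                 # one draw per volume calculation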

(snap. 12.7-2)

When selecting the Verbose Output option in the Master File panel, the volumetrics procedure
produces the values of the contacts:


Random generation of contacts for layer Upper Brent - B1
GOC : Index-1 = 2570.000000  Index-2 = 0.000000  Index-3 = 0.000000
OWC : Index-1 = 2599.918702  Index-2 = 0.000000  Index-3 = 0.000000
Random generation of contacts for layer Lower Brent - B4
OWC : Index-1 = 2601.204858  Index-2 = 0.000000  Index-3 = 0.000000
Random generation of contacts for layer Lower Brent - B5B
GOC : Index-1 = 2570.000000  Index-2 = 0.000000  Index-3 = 0.000000
OWC : Index-1 = 2600.848190  Index-2 = 0.000000  Index-3 = 0.000000

The global results of the base case are compared with the reference values obtained with the constant contacts of the previous paragraph:

          Constant contacts   Randomized contacts
Gas GRV   344.07              344.07
Gas IP    6004.04             6004.04
Oil GRV   125.32              126.05
Oil IP    30.54               30.71

12.7.5 Volumetrics using simulations


The next task consists in replacing the base case process by geostatistical simulations, so that the Volumetrics results do not suffer from the bias that we already discussed. Choosing between base case and simulations can be done for each single step involved in the Volumetrics calculation:

- Limit surfaces
- Top Layering reference surface
- Layering
- Zonation
- Porosity
- Net to Gross ratio

When the flag Already calculated is switched ON, Isatoil reads the results from the grid file using the relevant naming convention. For example, the depth corresponding to the zone (3) of the layer (2) inside area (1) must be stored under:

- depth_1_2_3 for the Base Case
- depth_1_2_3[xxxxx] for the Simulations

When the flag Already calculated is switched OFF, the base-case or the simulation outcomes are
computed at RUN time.
When simulations have been selected for a given step, the user can specify the number of outcomes
that will be calculated or read from the grid file.


The Simulation Parameters panel is used to indicate:

- the number of turning bands that must be used in order to generate an outcome which correctly reproduces the variability defined in the geostatistical model. On one hand, this number should be large for a good quality; on the other hand, it should not be too large, as the time consumption of each simulation is directly proportional to the number of bands. In this case study, this value is set to 500.
- whether the simulations should be matched or combined. When two nested phases have to be simulated with 3 outcomes each, this flag tells the system if the final count of scenarios should be 3 (match option) or 9 (combine option). When match is required, the number of outcomes obtained is the smallest number of outcomes defined for the various simulation steps. When combine is selected, the final number of outcomes is obtained as the product of the individual numbers of outcomes (see the sketch below).
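The match/combine logic can be sketched on outcome indices (illustrative Python, not Isatoil code): matching pairs the outcomes of the simulated phases, while combining takes their Cartesian product.

    from itertools import product

    layering = [0, 1, 2]     # outcome indices of a first simulated phase
    zonation = [0, 1, 2]     # outcome indices of a second simulated phase

    matched  = list(zip(layering, zonation))       # match: 3 scenarios
    combined = list(product(layering, zonation))   # combine: 3 x 3 = 9 scenarios
    print(len(matched), len(combined))             # 3 9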

12.7.5.1 Matching simulations


In this first run, we use Already calculated surfaces (base case) for the Top Layering and the Limits. All the other steps are simulated using 10 outcomes each.
The simulations are matched so that the final count of scenarios is 10.
The following statistics can be compared with those obtained using the base case and the randomized contacts.
           Gas GRV   Gas IP    Oil GRV   Oil IP
Base Case  344.07    6004.04   126.05    30.71
Mean       341.11    6260.19   124.85    30.63
St. dev.   7.79      247.35    5.94      2.22
P90        332.30    5981.47   117.71    28.72
P50        342.30    6418.19   126.61    30.94
P10        356.20    6584.76   133.03    34.26

12.7.5.2 Combining simulations


In this second run, we choose to reduce the count of simulations to 5 - for each one of the 4 simulated steps - but to combine the simulations in the end. Therefore the final count of outcomes is 5^4 = 625.


The following results are obtained:

           Gas GRV   Gas IP    Oil GRV   Oil IP
Base Case  344.07    6004.04   126.05    30.71
Mean       340.47    6352.31   126.35    31.78
St. dev.   6.33      158.55    7.40      1.86
P90        334.33    6152.96   116.72    29.38
P50        338.16    6345.31   127.17    31.94
P10        351.66    6561.83   135.32    34.09

These results bring two comments:

- the volume obtained using the base case is not necessarily close to the one (say the median or P50) obtained with simulations. This is due to the bias that we have mentioned before. In the case of the Gas IP in particular, the difference between the P50 (6345) and the base case (6004) is more than twice the standard deviation (158).
- the gain in accuracy has one severe drawback: CPU time consumption. As a matter of fact, the volumes obtained on 625 simulation outcomes cost much more than one single volume obtained using the base case.

In order to avoid running the simulations several times for a given configuration of parameters, the results of the RUN can be stored in a Histogram file (e.g. histo). The contents of this file can be used in the Volumetrics / Histogram application.

Note - Although this file is in ASCII format, it can only be interpreted properly by Isatoil itself. It is useless and not recommended to try reading these figures with other software.

12.7.5.3 Volume distributions


The detailed volumetrics results - which have been stored in the Histogram file - can be used in the Volumetrics / Histograms application in order to display the volumes in the form of distribution curves.
Once the Histogram file has been read, Isatoil shows the list of all available items, each of them being named after two identifiers:

- the polygon number - 1, 2 or 3, since 3 areal polygons have been used -
- the unit number - e.g. Upper Brent - B1 -


We can select the type of the volume to be displayed among the following options:

- Gas Pore Volume
- Gas in Place
- Oil Pore Volume
- Oil in Place

Finally, in our case, we get 625 possible consistent block systems: for each block system, the program has calculated the volumes of 21 different items, for 4 different materials.
The Histogram utility enables the user to select one or several item(s) of interest and to extract the
values of the 625 realizations. When several items have been selected (say Polygon 1 for Upper
Brent - B1 and Polygon 2 for Lower Brent B5B), the value for each realization is the sum of the two
individual volumes.

(snap. 12.7-1)

This first illustration shows the volumes obtained on Polygon 1 in the unit Upper Brent - B1.


In the next figure, we represent:


- the Gas GRV in the upper left corner
- the Oil GRV in the upper right corner
- the Gas IP in the lower left corner
- the Oil IP in the lower right corner

(fig. 12.7-1)

The previous figure requires the following comments:

- The Gas GRV figure clearly shows a step function with 5 values. This emphasizes that the outcomes result from the combination of:
  - 5 outcomes of the Layering stage
  - 1 (this layer does not include any zonation)
  - 1 (GRV does not involve any petrophysical variable)
  hence the 5 different volume values.


- Similarly, the Oil GRV figure shows several step functions, with edges not as sharp as in the Gas GRV. This is due to the fact that the OWC contact of this layer is randomized.
- In the Gas IP figure, the outcomes result from the combination of:
  - 5 outcomes of the Layering stage
  - 1 (this layer does not include any zonation)
  - 5 outcomes for the Porosity variable
  - 5 outcomes for the Net to Gross variable
  hence the 125 different volume values.
- The same type of results holds for the Oil IP figure.

For the sake of the demonstration, we also show the Gas GRV figure for the Polygon 1 in the layer
Lower Brent - B5B. The figure clearly shows 25 different volumes this time, obtained from the
combination of 5 outcomes from the Layering stage and 5 outcomes from the Zonation stage.

(fig. 12.7-2)

The last illustration consists of cumulating all the volumes over all the units and all the polygons, so
as to provide one value for each type of material. This compares to the statistics given in the previous paragraph.


(fig. 12.7-3)

(fig. 12.7-4)

This is particularly interesting as it shows the bias of the volume established on the base case: in the case of Gas in Place (lower left), this volume (6004) is far from the mean simulated volume.

12.7.5.4 Production of Maps during the Volumetrics process


When running the Volumetrics process (essentially when performing the simulations) it may be interesting to check the spread of these outcomes at each grid node, in order to produce maps.
The principle is to specify a new grid file name where the procedure will write these maps, switching on the option "Saving Results for Map Production". If the file already exists, its contents are emptied before the new variables are created: no warning is issued.
When the name of the new grid file has been entered, you must use the Definition button in order to specify the set of maps to be stored (snap. 12.7-1).


(snap. 12.7-1)

This procedure offers the possibility of defining several calculations that will systematically be performed on all the units of the block system, regardless of their contents in Gas and Oil.
The first set of maps concerns mean and dispersion standard deviation maps, calculated for:

- the Depth of the Top Reservoir: the Reservoir is only defined where either gas or oil is present
- the Gas Reservoir Thickness: for each grid cell, this represents the height of the column within the Gas Reservoir
- the Gas Pore Volume: for each cell, this represents the height within the Gas Reservoir scaled by the petrophysical variables
- the Oil Reservoir Thickness
- the Oil Pore Volume

The user can also ask for Probability Maps of the Reservoir Thickness. Here again, the Reservoir is only defined where either Gas or Oil is present. When the flag is switched on, you must use the Definition button to specify the characteristics of the probability maps.
The probability map gives, for each grid cell, the probability that the reservoir thickness is larger than a given threshold. For example, the threshold 0m gives the probability that the reservoir exists. You may define up to 5 thresholds.
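The underlying computation can be sketched as follows (illustrative NumPy, with made-up thickness outcomes): for each threshold, the probability map is simply the proportion of outcomes where the simulated thickness exceeds the threshold.

    import numpy as np

    rng = np.random.default_rng(1)
    # stand-in for 100 simulated reservoir thickness maps on a 50 x 50 grid
    thickness = np.maximum(rng.normal(8.0, 6.0, size=(100, 50, 50)), 0.0)

    for threshold in (0.0, 5.0, 10.0):                  # up to 5 thresholds
        proba_map = (thickness > threshold).mean(axis=0)
        print(threshold, float(proba_map.mean()))       # 0m: probability that the reservoir exists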


(snap. 12.7-2)

The user can also ask for Quantile Maps of the Depth of the Top Reservoir. Here again, the Reservoir is only defined where either Gas or Oil is present. When the flag is switched on, you must use the Definition button to specify the characteristics of the quantile maps.
For a grid cell located within the reservoir, the quantile map gives the depth of the top which corresponds to a given quantile threshold (defined in percent). For example, the threshold 0% gives the smallest depth for the top reservoir. You may define up to 5 thresholds.

(snap. 12.7-3)

Note - None of these maps can be considered as a simulation outcome - they do not honor the
geostatistical structure of the variable - therefore any volume calculation based on them would be
biased.
These special maps obey the following naming convention. Their generic name is of the form:
Code-code_number : variable_type


where:

- code_number stands for the designation code of a unit, as defined in the Master File - e.g. 122 for the Lower Brent - B5B -
- variable_type indicates the type of calculation that has been performed, chosen among the following list:
  - Mean Depth - average of the Depth of the Top reservoir -
  - Mean Gas Thickness
  - Mean Gas Pore Volume - product of thickness * petrophysical variables * volume correction factor -
  - Mean Oil Thickness
  - Mean Oil Pore Volume
  - St. dev. Depth - standard deviation of the Depth of the Top reservoir -
  - St. dev. Gas Thickness
  - St. dev. Gas Pore Volume
  - St. dev. Oil Thickness
  - St. dev. Oil Pore Volume
  - Proba of thickness larger than threshold: probability that the thickness of the Reservoir (Gas + Oil) is larger than the given threshold value
  - Depth quantile quantile: value of the depth corresponding to the quantile


12.7.5.5 Visualizing the special maps


The following figures show some results obtained for the Layer Lower Brent - B5B using Display /
Auxiliary Grid.

(fig. 12.7-1)

Code-122: mean of the depth of Lower Brent - B5B


(fig. 12.7-2)

Code-122 : Standard deviation of the depth of Lower Brent - B5B


The next figures compare the quantile maps (for quantiles 10%, 50% and 90%) and the mean map. The calculations are slightly different for the quantile and the mean maps. If we consider N outcomes and concentrate on a given grid node:

- quantile: the N values of the depth are considered (when there is no reservoir, the value is set to a non-value). These values are then sorted - non-values ranking last - and the p-quantile corresponds to the value ranked p*N/100. If the result corresponds to a non-value, then the reservoir does not exist in the quantile map. Therefore, when the quantile increases, the depth of the reservoir top increases and, as the contact remains unchanged, the reservoir extension shrinks.

(fig. 12.7-3)

Code-122: Depth Quantile 10.000000%


(fig. 12.7-4)

Code-122: Depth Quantile 50.000000%

(fig. 12.7-5)

Code-122: Depth Quantile 90.000000%


- mean: among the N values, only those where the reservoir exists are stored and averaged.

(fig. 12.7-6)

Code-122: Mean Depth
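The two rules can be sketched on a single grid node (illustrative NumPy; NaN stands for the non-value):

    import numpy as np

    depth = np.array([2512.0, np.nan, 2498.0, 2505.0, np.nan])   # N = 5 outcomes at one node

    p = 50
    ranked = np.sort(depth)                     # NaN values are sorted to the end
    q = ranked[int(p * len(depth) / 100)]       # value ranked p*N/100
    q = None if np.isnan(q) else q              # NaN -> no reservoir in the quantile map

    mean = np.nanmean(depth)                    # mean over the outcomes where it exists
    print(q, mean)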


The next figures compare the probability maps for the Reservoir Thickness, in the layer Lower Brent - B5B, to be larger than 0m, 5m and 10m.


(fig. 12.7-7)

Code-122: Proba of Thickness Larger than 0.00m


(fig. 12.7-8)

Code-122: Proba of Thickness Larger than 5.00m


(fig. 12.7-9)

Code-122: Proba of Thickness Larger than 10.00m

12.7.5.6 Saving the simulations


The Volumetrics procedure offers the additional possibility of storing the simulation outcomes, for each one of the variables processed, in the main Grid File.
This option can be used in an "expert way" in order to run the volumetrics process a second time with the options Already Calculated switched ON...
For any other use, this option is not recommended, for the following reason. In order to save time, the simulations (and the kriging) are only performed in the places which can serve during the whole process of nested phases, i.e.:

- at each grid node, as long as it belongs to at least one polygon
- at the closest node to each one of the intercepts with layers and zones

Therefore each simulation outcome is calculated on a limited number of cells.

Moreover, the calculated surface is not intersected by the Limiting Surfaces before storage.


This is the reason why the resulting simulation outcome of the depth of the top of the Lower Brent - B5B unit is difficult to interpret:

(fig. 12.7-1)


12.8 Tools
Isatoil offers several procedures for checking the results and understanding the calculations. A
quick review of these tools will be given in this section.
Most of these tools require the definition of a particular point that will serve as a target: this point
can be picked from a graphic representation. We will arbitrarily select the following target point:
X=780m Y=2349m

12.8.1 Inform the Target Point


This procedure, in the "Tools" menu, allows you to check the contents of the variables stored in the
main Grid File. The target point is first located within the grid and the value of each variable is calculated by bilinear interpolation.
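The back-interpolation can be sketched as follows (plain Python; the cell geometry and node values are illustrative stand-ins):

    def bilinear(x, y, x0, y0, dx, dy, z00, z10, z01, z11):
        # interpolate at (x, y) from the four nodes of the enclosing cell,
        # whose lower-left node is (x0, y0) and mesh is (dx, dy)
        u = (x - x0) / dx
        v = (y - y0) / dy
        return (z00 * (1 - u) * (1 - v) + z10 * u * (1 - v)
                + z01 * (1 - u) * v + z11 * u * v)

    # example: the target point inside a hypothetical 50m x 50m cell
    print(bilinear(780.0, 2349.0, 750.0, 2300.0, 50.0, 50.0,
                   2398.2, 2399.5, 2398.9, 2400.1))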

(snap. 12.8-1)

In the case of our target point, we obtain the following results:

Unit              Time       Depth      Porosity   N/G
BCU                          2384.084
ERODE 1                      2307.114
ERODE 2                      1187.447
Upper Brent B1    2291.331   2399.005   0.295      0.854
Lower Brent B4    2370.886   2510.750   0.293      0.964
Lower Brent B5A              2545.005
Lower Brent B5B              2599.685   0.250      0.703
Lower Brent B6               2614.688
Dunlin            2460.001   2619.640
Statfjord         2629.971   2899.720
Base Statfjord    2769.980   3137.597
Trend(s)          2370.886


Note the following remarks:

- the time values are only defined for layers (not for zones)
- the depth variables are defined everywhere (in m); they do not take into account the order relationships between the layers: this is only performed at the output stage
- the porosity and Net to Gross ratio are only calculated for the units where at least one contact is defined
- the trends (for porosity and the normal transform of porosity) are defined for the units where the porosity variable requires:
  - a normal score transform before simulation
  - an external drift (provided by the user) for processing

An additional flag allows you to display the Simulated Results. When using this option after the last Volumetrics procedure (running simulations and storing the outcomes in macro variables), the 5 simulated outcomes are listed for the calculated variables (Depth (layers and zones), Porosity, Net/Gross).

12.8.2 Estimate the Target Point


The Tools / Estimate the Target Point application enables you to perform the estimation, at the Target Point, of one of the following items:
- Layering
- Zonation
- Petrophysics
This procedure is even more interesting when the Verbose option flag is switched ON in the Master File: the data information taken into account at each step is then listed exhaustively.
At this point, it is important to distinguish the results obtained with this procedure from the ones obtained in the previous section. Here the estimation is actually performed at the target point location whereas, in the previous paragraph, the value was derived from the values estimated at the four surrounding grid nodes (by a bilinear interpolation).


(snap. 12.8-2)

12.8.2.1 Estimating a Layer value


The layering estimate gives the following results:

Compression stage:
- Initial count of data       = 59
- Final count of active data  = 52

Estimation at the Target Point
==============================
X-coordinate of the target point =  780.00m
Y-coordinate of the target point = 2349.00m
Depth for Top                    = 2399.005

Estimate #1 = 1.404 (Lower Brent - B4)
Estimate #2 = 1.222 (Dunlin)
Estimate #3 = 1.648 (Statfjord - S1)
Estimate #4 = 1.699 (Base Statfjord)

As requested in the Master File, the calculations for the layering stage are performed in terms of interval velocity, hence the values of the estimations for the four intervals of the layering.

12.8.2.2 Estimating a Zonation value


The Zonation of the Lower Brent - B4 layer (this designates all the zones located below that top layer) gives the following results:

Compression stage:
- Initial count of data       = 77
- Final count of active data  = 64

Estimation at the Target Point
==============================
X-coordinate of the target point =  780.00m
Y-coordinate of the target point = 2349.00m
Depth for Top                    = 2510.750
Depth for Bottom                 = 2619.640
Value for Pre-Faulted Thickness  =  108.890

Estimate #1      =  32.899 (Lower Brent - B5A)
Estimate #2      =  55.049 (Lower Brent - B5B)
Estimate #3      =  15.252 (Lower Brent - B6)
Estimate #4      =   1.781 (Dunlin)
Sum of estimates = 104.980

Results after the Collocation correction

Estimate #1      =  34.252 (Lower Brent - B5A)
Estimate #2      =  54.681 (Lower Brent - B5B)
Estimate #3      =  15.003 (Lower Brent - B6)
Estimate #4      =   4.954 (Dunlin)
Sum of estimates = 108.890

Here the results correspond to the thicknesses of the zones. The calculations are performed in two steps:

- direct estimation of the thicknesses
- correction in order to account for the total thickness of the layer (collocation correction), as sketched below
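A hedged sketch of a collocation-type correction (the simplest variant rescales the estimates proportionally so that they sum to the known thickness; the figures above show that Isatoil weights the adjustment differently, so this illustrates the principle, not the exact rule):

    import numpy as np

    estimates = np.array([32.899, 55.049, 15.252, 1.781])   # direct zone thickness estimates (m)
    total = 108.890                                          # known pre-faulted layer thickness (m)

    corrected = estimates * total / estimates.sum()          # naive proportional rescaling
    print(corrected, corrected.sum())                        # the sum now honours 108.890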

12.8.2.3 Estimating petrophysical variables


The porosity estimation of the Lower Brent - B5B unit gives the following results:

Estimation at the Target Point
==============================
X-coordinate of the target point =  780.00m
Y-coordinate of the target point = 2349.00m

Estimate #1 = 0.250 (Lower Brent - B5B)

The Net to Gross ratio estimation on the same unit gives:

Estimation at the Target Point
==============================
X-coordinate of the target point =  780.00m
Y-coordinate of the target point = 2349.00m

Estimate #1 = 0.703 (Lower Brent - B5B)

12.8.2.4 Verbose output


The Layering calculation is performed again, but switching ON the Verbose Output flag in the
Master File. This case has been selected as it covers all the interesting features of the output:

.../...

List of active information used for Estimation of Depth


using the following variable(s):
- Variable 1: Lower Brent - B4
- Variable 2: Dunlin - D1
- Variable 3: Statfjord - S1
- Variable 4: Base Statfjord

Rank - Name - X - Y - Initial - Data - Pr1 - Pr2 - Pr3 - Pr4 - Trend1 - Trend2 - Trend3 - Trend4

 1    3  1965.27m   649.64m  2435.310  1.057  1.00 0.00 0.00 0.00  2313    0    0    0
 2    3  1965.27m   649.64m  2544.110  1.178  0.44 0.56 0.00 0.00  2313 2398    0    0
 3    3  1965.27m   649.64m  2813.410  1.373  0.20 0.26 0.53 0.00  2313 2398 2573    0
 4    4  2408.25m   422.11m  2927.000  1.362  0.13 0.17 0.33 0.37  2279 2353 2491 2649
 5  113  1668.08m   070.14m  2772.050  1.341  0.17 0.27 0.56 0.00  2304 2389 2566    0
 6  119   827.59m   060.44m  2498.780  1.412  1.00 0.00 0.00 0.00  2373    0    0    0
 7  120  1162.99m   212.98m  2456.170  1.183  1.00 0.00 0.00 0.00  2352    0    0    0
 8  120  1827.41m   868.45m  2483.780  1.033  0.40 0.60 0.00 0.00  2299 2379    0    0
 9  120  1915.18m   825.42m  2478.170  1.015  0.42 0.58 0.00 0.00  2297 2374    0    0
10  121  1163.91m   301.18m  2452.170  1.086  1.00 0.00 0.00 0.00  2353    0    0    0
11  129  1085.61m   416.12m  2467.400  1.213  1.00 0.00 0.00 0.00  2358    0    0    0
12  132  1251.11m   919.74m  2458.860  1.078  1.00 0.00 0.00 0.00  2338    0    0    0
13  132  1285.38m   811.34m  2563.580  1.128  0.44 0.56 0.00 0.00  2339 2429    0    0
14  145  1110.81m   556.81m  2514.440  1.344  1.00 0.00 0.00 0.00  2379    0    0    0
15  145  1064.02m   590.65m  2628.170  1.381  0.51 0.49 0.00 0.00  2380 2459    0    0
16  147  1226.54m   393.18m  2456.330  1.378  1.00 0.00 0.00 0.00  2340    0    0    0
17  147  1198.95m   484.77m  2561.870  1.226  0.45 0.55 0.00 0.00  2342 2435    0    0
18  148  1827.40m   702.87m  2388.930  0.853  1.00 0.00 0.00 0.00  2302    0    0    0
19  148  2057.15m   336.04m  2493.740  1.169  0.43 0.57 0.00 0.00  2290 2361    0    0
20  149  2957.57m   356.44m  2840.000  1.291  0.23 0.08 0.29 0.39  2283 2316 2433 2589
21  152  1195.30m   949.32m  2500.520  1.139  1.00 0.00 0.00 0.00  2376    0    0    0
22  152  1249.02m   077.51m  2605.040  1.219  0.53 0.47 0.00 0.00  2370 2450    0    0
23  152  1509.20m   787.16m  2789.920  1.403  0.20 0.25 0.55 0.00  2322 2400 2572    0
24  155  3536.50m   315.45m  2813.000  1.152  0.27 0.05 0.36 0.33  2273 2292 2444 2582
25  156  2186.83m   501.30m  2698.350  1.319  0.23 0.18 0.59 0.00  2295 2345 2506    0
26  157  3032.15m   054.42m  2841.570  1.289  0.17 0.15 0.29 0.39  2264 2323 2436 2588
27  163  1888.61m   078.96m  2420.540  0.717  1.00 0.00 0.00 0.00  2318    0    0    0
28  163  1861.60m   117.71m  2536.310  1.053  0.44 0.56 0.00 0.00  2319 2405    0    0
29  165  1155.13m   673.93m  2515.010  1.466  1.00 0.00 0.00 0.00  2371    0    0    0
30  165  1101.41m   650.76m  2624.150  1.336  0.48 0.52 0.00 0.00  2374 2461    0    0
31  166  1508.61m   169.48m  2456.530  0.810  1.00 0.00 0.00 0.00  2350    0    0    0
32  170  2527.97m   546.14m  2974.090  1.351  0.18 0.12 0.38 0.32  2299 2356 2532 2683
33  171   556.79m   -47.33m  2601.910  1.486  1.00 0.00 0.00 0.00  2430    0    0    0
34  173  2128.88m   426.30m  2779.210  1.304  0.23 0.23 0.55 0.00  2311 2384 2561    0
35  174  1908.26m   122.07m  2417.160  0.716  1.00 0.00 0.00 0.00  2315    0    0    0
36  174  1935.48m   037.06m  2523.020  1.008  0.44 0.56 0.00 0.00  2316 2400    0    0
37  178  3037.82m   440.93m  2830.480  1.253  0.15 0.17 0.35 0.33  2257 2322 2459 2588
38  180  1500.90m   868.04m  2424.580  1.292  1.00 0.00 0.00 0.00  2317    0    0    0
39  180  1467.48m   978.30m  2533.400  1.180  0.41 0.59 0.00 0.00  2323 2413    0    0
40  182  1796.31m   109.01m  2452.170  1.017  1.00 0.00 0.00 0.00  2334    0    0    0
41  182  1783.65m    41.02m  2565.320  1.156  0.46 0.54 0.00 0.00  2337 2425    0    0
42  183  1840.94m   638.18m  2426.250  0.815  1.00 0.00 0.00 0.00  2324    0    0    0
43  183  1839.25m   632.94m  2537.920  1.087  0.45 0.55 0.00 0.00  2324 2409    0    0
44  191  2094.11m   836.57m  2455.290  0.895  0.49 0.51 0.00 0.00  2299 2366    0    0
45  191  2129.59m   211.76m  2722.490  1.283  0.22 0.21 0.57 0.00  2300 2364 2535    0
46  193  3141.96m   354.71m  2822.910  1.268  0.22 0.11 0.28 0.39  2271 2314 2422 2575
47  201   945.72m   639.46m  2504.690  1.488  1.00 0.00 0.00 0.00  2366    0    0    0
48  201   881.27m   761.02m  2614.100  1.322  0.46 0.54 0.00 0.00  2368 2460    0    0
49  202  1920.85m   808.40m  2365.780  0.558  1.00 0.00 0.00 0.00  2303    0    0    0
50  203  1930.07m   850.41m  2367.500  0.570  1.00 0.00 0.00 0.00  2303    0    0    0
51  204  2357.63m   533.98m  2944.710  1.384  0.17 0.12 0.34 0.36  2299 2352 2503 2664
52  205  2292.26m   475.18m  2952.830  1.411  0.12 0.19 0.32 0.37  2282 2363 2498 2656

Estimation at the Target Point
==============================
X-coordinate of the target point =  780.00m
Y-coordinate of the target point = 2349.00m
Depth for Top                    = 2399.005

Estimate #1 = 1.404 (Lower Brent - B4)
Estimate #2 = 1.222 (Dunlin)
Estimate #3 = 1.648 (Statfjord - S1)
Estimate #4 = 1.699 (Base Statfjord)


We recall that Layering is performed by a cokriging procedure using the 4 variables (layers) simultaneously. The list contains the following information:

- Rank designates the rank of the sample in the list
- Name is the name of the well which provided this intercept information. This information is not available in the case of Petrophysical variables.
- X - Y gives the coordinates of the intercept
- Initial is the depth value of the intercept, as read from the Well File
- Data is the value which is actually entered in the cokriging system: in the case of the Layering, this corresponds to an apparent velocity value calculated from the Top Layering surface down to the surface which contains the intercept
- Pr* give the weighting coefficients which denote the percentage of time spent in each layer. Note that a layer located below the intercept surface corresponds to a zero weight.
- Trend* are the values that are used as external drift for each variable

The Pr* weight indicates whether a layer (or a zone) lies between the intercept and the surface that serves as a reference (otherwise it is set to 0). If the procedure works in depth, this weight is simply an indicator (0 or 1); if it works in velocity, the weight corresponds to the percentage (in time) that the layer thickness represents in the total distance from the intercept to the reference surface: the weights add up to 1. This weight is not available in the case of petrophysical variables.
The Trend* values are only displayed if the variable(s) to be processed require external drift(s).

12.8.3 Inform Depth at Well Locations


The Tools / Inform Depths at well locations application enables you to compare the depth of the intercepts contained in the Well File with the value that can be back-interpolated from the base case results (or simulation outcomes) stored in the Grid File. The back-interpolation uses a bilinear interpolation from the four grid nodes surrounding the intercept location.
You may choose either to concentrate on the intercepts with one layer (or zone) in particular, or to review all the intercepts contained in the Well File.


(snap. 12.8-1)

The following printout is obtained when checking the base case results on the Lower Brent - B5B
layer:

Information on Well Location(s) (Bilinear interpolation)
========================================================

Layer : Lower brent B5B (Identification : Area = 1 - Layer = 2 - Zone = 2)

Point: X= 1965.27m; Y=  649.64m; Data= 2525.110; Depth value = 2525.179
Point: X= 1778.15m; Y= 2892.79m; Data= 2481.390; Depth value = 2469.974
Point: X= 1435.80m; Y= 3427.97m; Data= 2503.980; Depth value = 2515.955
Point: X= 1617.06m; Y= 3512.49m; Data= 2472.870; Depth value = 2496.956
Point: X= 1277.78m; Y= 3831.64m; Data= 2543.590; Depth value = 2546.401
Point: X= 1073.01m; Y= 1584.21m; Data= 2606.220; Depth value = 2608.123
Point: X= 1203.30m; Y= 2470.50m; Data= 2544.980; Depth value = 2542.129
Point: X= 1238.08m; Y= 1051.87m; Data= 2588.180; Depth value = 2587.588
Point: X= 1865.64m; Y= 1111.99m; Data= 2520.070; Depth value = 2516.301
Point: X= 1109.77m; Y=  654.31m; Data= 2607.140; Depth value = 2604.150
Point: X=  464.31m; Y= -117.23m; Data= 2704.740; Depth value = N/A
Point: X= 1930.98m; Y= 1050.76m; Data= 2507.830; Depth value = 2508.816
Point: X= 1930.17m; Y= 1053.23m; Data= 2505.030; Depth value = 2508.670
Point: X= 1471.79m; Y= 2963.77m; Data= 2518.720; Depth value = 2515.207
Point: X= 1785.84m; Y=   52.65m; Data= 2546.240; Depth value = 2546.405
Point: X= 1839.51m; Y=  633.56m; Data= 2521.030; Depth value = 2520.635
Point: X=  891.47m; Y= 2741.75m; Data= 2597.020; Depth value = 2594.410
Point: X= 1918.16m; Y= 1854.49m; Data= 2463.280; Depth value = 2451.716
Point: X= 1898.10m; Y= 1994.28m; Data= 2456.820; Depth value = 2458.152
Point: X= -275.56m; Y= 1591.47m; Data= 2747.290; Depth value = N/A

where:

- X - Y designate the intercept location coordinates
- Data refers to the depth information read from the Well File
- Depth value is the back-interpolated value

The back-interpolated value is not defined (N/A) when at least one of the grid nodes surrounding the intercept location is not defined: this is the case for the intercept located at (X=464.31m; Y=-117.23m), which lies outside the grid.


12.8.4 Cross-Validate the Well Information


This tool enables you to perform the cross-validation (blind test) of the data. The procedure consists in masking a data value, re-estimating it from the remaining information and comparing the initial information to the estimation. This is performed, in turn, on all the information samples, as sketched below.
You can choose to perform the cross-validation test of:
- Layering
- Zonation
- Petrophysics
You can perform the cross-validation either on all the possible surfaces or restrict the test to a given subset of surfaces. In the petrophysical case, you must finally specify the target variable.
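The masking loop can be sketched as follows (illustrative Python; inverse-distance weighting stands in here for the kriging actually used by Isatoil, and the data are random stand-ins for the porosity samples):

    import numpy as np

    rng = np.random.default_rng(2)
    x, y = rng.uniform(0, 3000, 13), rng.uniform(0, 4000, 13)
    z = rng.uniform(0.19, 0.27, 13)                   # stand-in porosity values

    for i in range(len(z)):
        mask = np.arange(len(z)) != i                 # mask the i-th datum
        d = np.hypot(x[mask] - x[i], y[mask] - y[i])
        w = 1.0 / d ** 2                              # IDW weights (kriging in Isatoil)
        est = np.sum(w * z[mask]) / np.sum(w)         # re-estimation from the others
        print(f"{x[i]:8.2f} {y[i]:8.2f} {z[i]:.3f} {est:.3f}")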

(snap. 12.8-2)

The following printout is obtained when cross-validating the porosity information of the Lower
Brent - B5B layer:
.../...

Estimation at the Data Points
=============================
X - Y - True Value - Estimation - Layer Name
1069.52m   1586.71m   0.231   0.237   Lower brent B5B
1201.76m   2475.57m   0.267   0.236   Lower brent B5B
1242.07m   1061.08m   0.220   0.207   Lower brent B5B
1864.16m   1114.09m   0.212   0.237   Lower brent B5B
1106.90m    653.09m   0.193   0.214   Lower brent B5B
1932.38m   1046.48m   0.237   0.213   Lower brent B5B
1470.03m   2969.66m   0.233   0.243   Lower brent B5B
1785.02m     48.33m   0.216   0.214   Lower brent B5B
1839.40m    633.30m   0.213   0.211   Lower brent B5B
 887.55m   2749.13m   0.245   0.258   Lower brent B5B
1768.50m    547.24m   0.205   0.211   Lower brent B5B
1787.09m   2888.33m   0.228   0.239   Lower brent B5B
1280.84m   3823.25m   0.260   0.223   Lower brent B5B


where:

- X - Y designate the coordinates of the data point
- True Value is the value read from the data file (here the Petrophysical Well File), possibly converted into velocity in the Layering case
- Estimation is the result of the estimation
- Layer Name gives the identification of the information: this is mainly relevant in the multivariate case (Layering or Zonation)

12.8.5 Cleaning procedure


This tool (located in the File menu) allows you to delete some results produced by Isatoil. In particular, it allows you to get rid of the following items:

- variables corresponding to the base case results - stored in the Grid File -
- simulation outcomes that might have been stored in the Grid File by the Volumetrics procedure
- Standard Parameter Files containing the models for the covariance and the distributions (anamorphosis)

The use of this procedure ensures that only variables resulting from calculations are deleted. In particular, it does not delete depth variables corresponding to the Top Layer or the Limit surfaces, or any surface which is not calculated by Isatoil, as specified in the Master File.
The procedure offers the possibility either to clean all the results (for one of the items mentioned above) or to restrict the deletion to the ones relative to a given unit.
Use the Check button to check the number of files which will be deleted before actually cleaning them!


(snap. 12.8-3)

The following printout is obtained when cleaning all the files relative to the Lower Brent - B5B surface (all three items selected):

List of the Base Case result files to be deleted


- depth_1_2_2
- porosity_1_2_2
- netgross_1_2_2
List of the Simulation result files to be deleted
- depth_1_2_2[xxxxx]
- porosity_1_2_2[xxxxx]
- netgross_1_2_2[xxxxx]
List of the Model Standard Parameter Files to be deleted
- Model_Poro_Raw_1_2_2
- Model_Poro_Gauss_1_2_2
- Model_Net_Raw_1_2_2
- Model_Net_Gauss_1_2_2
List of the Anamorphosis Standard Parameter Files to be deleted
- Psi_Poro_1_2_2
- Psi_Net_1_2_2


13. Geostatistical Simulations for Reservoir Characterization

This case study is based on a public data set used by Amoco during the 1980s. The dataset has been kindly provided by Richard Chambers and Jeffrey Yarus.
It demonstrates the capabilities of Isatis in Reservoir Characterization using lithofacies and porosity simulations. Volumetrics calculations are performed on 3D models.
Last update: 2014


13.1 Introduction
3D earth modeling is a key issue for reservoir characterization. Moreover, the uncertainties on the reservoir structure, the contacts and the rock properties may be assessed through simulations that preserve the geological features of the reservoir. In this case study, one purpose is the optimal use of the available data: the wells with information on key horizon markers, lithofacies, porosity and the facies proportions.
The reservoir is located in the North Cowden area (Texas). There are three main facies (siltstone, anhydrite and dolomite). The carbonates were deposited during high sea-level stands and the siltstone during low stands, when the carbonate platform was exposed to sub-aerial conditions. The silt is actually eolian sediment whose source is from the northwest; it was reworked into sheet-like deposits during the subsequent sea-level rise.
In this case study, several geostatistical methods are performed, from Universal Kriging to facies simulations (Plurigaussian Simulation) and continuous simulations such as Turning Bands.
The main steps of the workflow are:

- Simulations of the surfaces delimiting the top and bottom of the reservoir, using the information from the wells.
- Facies simulations (TPGS, for Truncated Plurigaussian Simulation). This requires the building of a stratigraphic grid (flattening), within which variogram calculations and simulations are performed. The 3D vertical proportions matrix (VPC) is computed. A 2D proportion map computed from a seismic attribute is used to constrain the 3D proportions matrix.
- Simulations of the average porosity in the reservoir.
- 3D simulations of porosity, achieved independently for each facies; a cookie cutting procedure constrained by the facies simulations then provides the final porosity simulations.

Several types of simulations are used (surfaces simulations, TPGS, 3D porosity simulations). Therefore, different models are available. To evaluate these models, volumetric calculations based on the simulations of the different parameters provide stochastic distributions of volumes.
In conclusion, this case study explores some of the possibilities that Isatis offers to improve reservoir characterization.


13.2 General Workflow


1. Structural Modeling: creation and simulations of the top and bottom surfaces of the reservoir from the well data.
The top and bottom of the reservoir are stored for each well. The purpose is to interpolate or simulate the top and bottom surfaces of the reservoir from the well data. Finally, the distribution of the GRV is derived using these surfaces and a constant contact.
2. Discretization and Flattening: transformation from the real space to the stratigraphic space.
This step is crucial as it determines the lateral continuity of facies, as expected from a sedimentary deposition of the facies. A flat working grid is created with a resolution of 50mx50mx1m.
3. Computing Proportion Curves: computing curves from the well data over the working grid.
The vertical proportion curves are calculated from the wells discretized in the stratigraphic space. Then a 3D matrix of proportions is created for further use in SIS and Plurigaussian Simulations. Finally, the computation of the proportions is performed using a 2D proportions constraint: a kriging of the mean proportion (siltstone). This proportion constraint was estimated by external-drift kriging, the drift being a map of acoustic impedance (AI) extracted from the filtered seismic cube. This proportion constraint will be used for the PGS.
4. Lithotypes Simulations: simulations of the lithotypes with PGS.
This step aims at deriving the variogram models of two Gaussian random functions that are simulated and truncated to get the simulated lithotypes. The thresholds applied on the different levels follow the so-called lithotype rule. Then Plurigaussian simulations are performed and transferred to the structural grid.
5. 3D Porosity Simulation: simulation of porosity with Turning Bands and Cookie Cutting.
The porosity is simulated using Turning Bands for each lithotype, then the porosity is conditioned by the lithotype simulations (Cookie Cutting). The cookie cutting method is the combination of the facies and porosity simulations. The porosity is simulated at each node of the grid located between the top and the bottom of the reservoir layer as if these nodes were in the facies of interest. In the final model, only the porosity of the facies actually simulated at each node is kept.
6. 3D Volumetrics: volumetrics of the 3D simulations.
The HCPV is computed using Volumetrics. The results from the previous steps (top, bottom and porosity simulations) are all used in conjunction in order to compute the volumetrics. In addition, we assume that the OWC depth is known.


13.3 Data Import


Firstly, a new study has to be created using the File / Data File Manager facility; then, it is advised to verify the consistency of the units defined in the Preferences / Study Environment / Units window. In particular, it is suggested to use:

- Input Output Length Options:
  - Default Unit = Length (m)
  - Default Format = Decimal (10,2)
- Graphical Axis Units:
  - X Coordinate = Length (km)
  - Y Coordinate = Length (km)
  - Z Coordinate = Length (m)
The data are stored in three ASCII files:

- The first, named wells.hd, contains the data available at the wells: depth, porosity, reservoir selection (Sel Unit S2).
- The second, named surfaces.hd, contains the surfaces delimiting the reservoir on a grid with a resolution of 50mx50m.
- The third, named 3D grid.hd, contains a seismic acoustic impedance cube in a grid with a resolution of 50mx50mx1m.

Import these files into Isatis using the ASCII file import (File/Import/ASCII). These files are available in the Isatis installation directory under the Datasets/Reservoir_characterization sub-directory. Each ASCII file already contains a header.
Enter a directory name and a file name for each imported file:


For the wells, Directory: 3D wells; File: 3D wells; Header: 3D wells header (snap. 13.3-1).

For the surfaces, Directory: 2D Surfaces; File: Surfaces (snap. 13.3-2).

For the structural grid, Directory: 3D Grid; File: Structural Grid (snap. 13.3-3).

(snap. 13.3-1)


(snap. 13.3-2)


(snap. 13.3-3)


13.4 Structural Modeling


The reservoir is limited by the horizons named S2 (top) and S3 (bottom) (2D Surfaces/Surfaces/sel Unit S2).
Considering that the surfaces are known through their intercepts with the wells, simulations of these surfaces are achieved. A kriging, called the base case, is performed to compare the estimations with the surfaces already imported.
Copy the top elevation (Maximum Z) and the bottom elevation (Minimum Z) of the wells into the file Wells/3D Wells Header using Tools/Copy Statistics/Line->Header Point. Apply the selection sel Unit S2.

(snap. 13.4-1)

13.4.1 Kriging of the Surfaces


- Exploratory Data Analysis

This step describes the structural analysis performed on the top and bottom reservoir markers.
Using Statistics/Exploratory Data Analysis, display the cross-plots Minimum Z/X-UTM and Maximum Z/X-UTM at the wells. To do so, select the variables Minimum Z and X-UTM as input, highlight the variables and then click on the cross-plot representation (second icon from the left). Then do the same for Maximum Z and X-UTM.


(snap. 13.4-2)

(fig. 13.4-1) Cross-plot of Minimum Z versus X-UTM at the wells (rho = -0.907).


(snap. 13.4-3)

(snap. 13.4-4) Cross-plot of Maximum Z versus X-UTM at the wells (rho = -0.880).

The cross-plots of the top and bottom surfaces at the wells against X show the existence of a trend along X (East-West).


Compute the omnidirectional variogram of Minimum Z and then the variogram of Maximum Z.
The experimental variograms are both computed with 12 lags of 125 m.

(snap. 13.4-5)

(fig. 13.4-2) Omnidirectional experimental variograms of Minimum Z and Maximum Z (12 lags of 125 m).

The variograms of Minimum Z and Maximum Z also show a strong non-stationarity (fig. 13.4-2). Therefore a non-stationary model seems the most appropriate; for that purpose, a Universal Kriging approach (UK for short) will be applied. It amounts to decomposing explicitly the variable of interest into its trend and a stationary residual. A variogram is fitted to the residuals; kriging then amounts to kriging the residuals and adding the estimates to the trend model.
amounts to krige the residuals and add the estimates to the trend model.
- Non Stationary Modeling

To build the non-stationary model, the first step consists in modelling the trend by means of a least-squares polynomial fit based on the model a + b*X.
Before modeling the trend, you have to create a 2D copy of the 3D Wells Header using Data File Manager/File/Copy. Call the new file 2D Wells Header and modify it into 2D (Data File Manager/Modify 2D-3D).
(snap. 13.4-6)

For each variable, store the global trend in a new variogram model using Statistics/Modeling/
Global Trend Modeling. A variable corresponding to the residuals is also created.
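The trend step can be sketched as follows (illustrative NumPy, with synthetic well data): a least-squares fit of a + b*X, whose residuals are then passed to variography.

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.uniform(1500, 3000, 59)                    # X-UTM at the wells (illustrative)
    z = -1300.0 - 0.03 * x + rng.normal(0, 5, 59)      # stand-in for Maximum Z

    A = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(A, z, rcond=None)     # least-squares fit of a + b*X
    residuals = z - (a + b * x)                        # stationary part, to be fitted

    print(a, b, residuals.std())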


(snap. 13.4-7)

Store the global trend in a variogram model and then fit the variogram of the residuals. By adding the variogram model of the residuals to the model initialized at the trend modeling stage, the required non-stationary model is obtained (for example: Maximum Z no stationary and Minimum Z no stationary). For that purpose, run Statistics/Variogram Fitting on the model of the residuals.
Below is the example for the residuals of Maximum Z. The variogram model is the same for Maximum Z and Minimum Z.
You can automatically initialize your model (using Model Initialization) or edit the model yourself with the following parameters:
- a Cubic structure with Range = 1800m, Sill = 50.
Save the model under Maximum Z no Stationary.

(snap. 13.4-8)

(fig. 13.4-3) Variogram of the Residuals maximum Z. Experimental variogram in 1 direction (angular tolerance = 90.00, lag = 110.000m, 10 lags, tolerance = 50.00%); model: 1 basic structure, S1 - Cubic - Range = 1800.000m, Sill = 50.

Run Interpolate/Estimation/(Co)-Kriging using the non-stationary model and a unique neighborhood.


(snap. 13.4-9)


(snap. 13.4-10)

The results of the kriging are called respectively Maximum Z kriging and Minimum Z kriging. These base cases are very close to the surfaces already stored in the 2D grid file (see hereafter the correlation cross-plot between SURF 3: S2 and Maximum Z kriging).


(fig. 13.4-4)

Note - An alternative approach would be to model the top surface and the thickness of the unit,
avoiding the risk of getting surfaces crossing each other.

13.4.2 Simulations of the Surfaces


Three reasons lead to simulating the surfaces directly with the same non-stationary model, without any normal score transformation:

- the non-stationarity, which is somehow contradictory with the existence of a unique histogram,
- the even density of wells in the gridded area, which controls the distribution through the conditioning of the simulations,
- the distribution is close to symmetrical.

(fig. 13.4-5)

Using Interpolate/Conditional Simulations/Turning Bands, perform the simulations for Minimum Z (Simu Minimum Z[xxxxx]) and for Maximum Z (Simu Maximum Z[xxxxx]).


(snap. 13.4-11)


(snap. 13.4-12)


Using Tools/Simulation Post-processing, calculate the average of 100 simulations in order to compare it to the kriged values. The match is almost perfect (fig. 13.4-6), which was expected as the
mean of numerous simulations (over 100) tends towards the kriging.
In order to define the geometrical envelope of the S2 Unit where facies and porosity simulations are
achieved, store the maximum of the simulated top (Maximum Z Top) and the minimum of the simulated bottom (Minimum Z Bottom). The use of the envelope ensures that all grid nodes will be
filled with a porosity value.

(fig. 13.4-6)


(snap. 13.4-13)

Using Tools/Create Special Variable, create a new macro variable with 100 indices and name it Thickness. Using File/Calculator, compute the thickness from the simulations of Maximum Z and the simulations of Minimum Z. Check with Statistics/Quick Statistics that the surfaces do not cross each other (there are no negative values), as sketched below.
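The check can be sketched as follows (illustrative NumPy; the arrays are random stand-ins for the 100 paired simulations of the two surfaces):

    import numpy as np

    rng = np.random.default_rng(4)
    top    = -1310.0 + 3.0 * rng.standard_normal((100, 60, 80))      # Simu Maximum Z stand-in
    bottom = top - 30.0 - 2.0 * rng.standard_normal((100, 60, 80))   # Simu Minimum Z stand-in

    thickness = top - bottom              # one thickness map per simulation index
    print(float(thickness.min()))         # must be >= 0: the surfaces do not cross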


(snap. 13.4-14)


13.5 Modeling 3D Porosity


In this part, the goal is to compute the simulations of the 3D porosity constrained by the facies simulations. Porosity simulations are computed for each facies.
The steps are the following ones:

- 3D facies modeling (Plurigaussian Simulations)
- 3D porosity simulations conditioned by the facies simulations (cookie cutting, sketched below)
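The cookie cutting principle can be sketched as follows (illustrative NumPy; the facies grid and per-facies porosities are random stand-ins): porosity is simulated for each lithotype over the whole unit, and the facies simulation then selects, at each node, the porosity of the facies actually present.

    import numpy as np

    rng = np.random.default_rng(6)
    shape = (40, 50, 50)
    facies = rng.integers(1, 4, size=shape)          # a facies simulation (lithotypes 1..3)
    poro = {1: rng.normal(0.23, 0.02, shape),        # porosity simulated per lithotype
            2: rng.normal(0.05, 0.01, shape),
            3: rng.normal(0.15, 0.03, shape)}

    porosity = np.select([facies == k for k in (1, 2, 3)],
                         [poro[k] for k in (1, 2, 3)])   # keep the matching facies only
    print(float(porosity.mean()))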

13.5.1 Simulations of lithofacies


13.5.1.1 Flattening
First, create the working grid (stratigraphic grid) where the simulations are performed, and a new parameter file essential for the simulations. This parameter file contains:

- the information relative to the lithotypes (regrouping of the original lithofacies);
- the names of the files and variables used in the construction process;
- the parameters used in the calculation of the vertical proportion curves.

Before going to the Discretization & Flattening window, you need to convert the 3D wells into core lines. Use the panel Tools/Convert Gravity Lines to Core Lines: the old gravity files are saved, and the new core lines are named 3D Wells and 3D Wells Header.
In the Data File Manager, set the variable Well Name as Line Name in the 3D Wells Header: right click on the Well Name variable and choose Modify into Line Name.
In the File Manager, change the format of the variables Maximum Z Kriging, Maximum Z Top and Minimum Z Bottom.


(snap. 13.5-1)
- Go to Discretization and Flattening. Create a new proportion file S2 Unit and fill in the 5 tabs.

(a) Input Parameters

To populate the facies simulation with porosity, define an auxiliary data file specifying the porosity variable.


(snap. 13.5-2)

(b) Grid and Geometry

(snap. 13.5-3)


Take the Maximum top and Minimum bottom (the envelope) as Top Unit Variable and Bottom Unit Variable. The reference variable is Maximum Z kriging (the Base Case corresponding to Surf 3: S2). The kriging of the top surface is used as the reference variable because it is geologically consistent. This is not the case for the envelope.

Note - The S2 top surface has been chosen as the reference surface because the base of the S2 unit shows downlapping layers, as the platform built eastward into the Midland basin.
(c) Lithotype Definition
In the S2 Unit, consider the lithofacies 1 (siltstone), 2 (anhydrite) and 3 (dolomite) and assign them to lithotypes 1 to 3. In this case, the data already contain the lithotype information.
(snap. 13.5-4)

For further display, create a dedicated colour scale by using Lithotype Attributes.

(snap. 13.5-5)

(d) Discretization Parameters

The wells are discretized with a vertical lag of 1 m, which corresponds to the vertical mesh of the stratigraphic grid. There is a distortion ratio of 50 (50/1: ratio of the (x,y) mesh to the z mesh).

(snap. 13.5-6)

(e) Output
In the output tab, enter the discretized wells file and the header file. Define the output variables.

(snap. 13.5-7)

After running the bulletin, carefully read the information printed in the message window, in order to check the options and the discretization results.
It is possible to visualize the discretized wells in the new stratigraphic framework with the display menu, using the Lines representation.


Note - The envelope (Maximum Z Top and Minimum Z Bottom) is used to make sure that, inside the reservoir unit (S2 Unit), all the grid nodes will be filled with a porosity value when performing the porosity simulations.

13.5.1.2 Computing Proportions Curves


The task is to estimate the proportions of each lithotype at each cell of the working grid, from the
experimental proportions curves at the wells in the stratigraphic reference system.
The different operations are achieved by several applications of the menu Statistics/Statistics/Proportions Curves.
(a) Loading the Data
Click Load Data in the Application menu of the graphic window, as shown below:

(snap. 13.5-1)

2D Proportion Constraints are specified: the proportion variable is the kriging mean proportion siltstone calculated previously.
The graphic window displays the wells projected on the horizontal plane and the global proportion
curve in the lower right corner.
Change the Graphic Options by using the corresponding Application menu.


(snap. 13.5-2)


(snap. 13.5-3)

(b) Display the global statistics

Among different possibilities, visualize the global proportion curve by picking its anchor and using the Display & Edit menu.


(snap. 13.5-4)

Using the Application menu and the option Display Pie Proportions, each well is represented by a pie chart subdivided into parts with sizes proportional to the lithotype proportions.

(snap. 13.5-5)

(c) Create Polygons and calculate the corresponding VPC

Digitize four polygons, after having activated the Polygon Edition mode, in order to split the field into four parts.


(snap. 13.5-6)

Coming back to the Vertical Proportion Curves Edition mode, perform the following actions: Display & Edit, completion by 3 levels and Smoothing with 3 passes. Another method, using the Editing tool, is described in section (e) Edition Mode.

Note - You can see that the Raw VPCs present gaps at the top and the bottom. These gaps are explained by the fact that, in the display, the top corresponds to the maximum of the Top unit variable (here Maximum Z Top) and the bottom corresponds to the minimum of the Bottom unit variable (here Minimum Z Bottom). The well information does not fill the total length between the defined top and bottom. These gaps may be an issue, as an extrapolation is performed to fill them (especially at the top). Another method would be to use the simulations of both surfaces, two by two, to create the VPC. It would require creating as many VPCs as there are couples of simulations, which would be rather inconvenient.


(snap. 13.5-7)

(d) Compute the proportions on the 3D grid

The task is to assign to each cell proportions of lithotypes that take into account the gradual change from south to north. In the menu Application/Compute 3D Proportions, choose the kriging option and an arbitrary variogram, like a spherical scheme with an isotropic range of 2 km and a sill of 0.207. Do not forget to select the option Use the 2D Proportions constraints.


(snap. 13.5-8)

To visualize the interpolated proportions, use Application/Display 3D proportions with the sampling mode (step 5 for instance along X and Y).

(fig. 13.5-1) 3D Proportion Map of the lithotypes (Siltstone, Anhydrite, Dolomite).

In order to update the parameter file use the menu Application/SAVE and RUN.

13.5.1.3 Determination of the Gaussian Random Functions and their variograms for plurigaussian simulations
This phase is specific to the simulation using the plurigaussian technique and is achieved by means
of the menu Statistics/Modeling/Plurigaussian Variograms.


The aim is to assign the lithotypes to sets of values of a pair of Gaussian Random Functions (GRFs), i.e. by means of thresholds applied to the GRFs. The transform from the GRFs to the categorical lithotypes is called the lithotype rule. It is necessary to define it first, in order to represent the possible transitions between the facies, as they can express the deposition process in geological terms.
L1 = Siltstone, L2 = Anhydrite, L3 = Dolomite.

(snap. 13.5-1)

The first GRF (G1) will rule L2, L1 and L3. It is represented by a spherical scheme with ranges of 300 m for U, 300 m for V and 5 m for Z, and a sill of 0.5.
The second GRF (G2) will rule L1, L2 and L3. It is represented by a spherical scheme with ranges of 1200 m for U, 2700 m for V and 5 m for Z, and a sill of 0.5.

(snap. 13.5-2)
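The principle of the lithotype rule can be sketched as follows. This is a toy rule with fixed thresholds and an arbitrary layout, chosen only to illustrate the thresholding idea; in Isatis the thresholds vary in space so as to honor the 3D proportion matrix, and the two GRFs carry the variogram models defined above:

import numpy as np

def lithotype_rule(g1, g2, t1, t2):
    # toy rule: G2 first separates Dolomite from the rest, then G1
    # splits Anhydrite from Siltstone (layout chosen for illustration)
    facies = np.full(g1.shape, "Dolomite", dtype=object)
    below = g2 < t2
    facies[below & (g1 < t1)] = "Anhydrite"
    facies[below & (g1 >= t1)] = "Siltstone"
    return facies

# two standard Gaussian fields (plain white noise here, standing in
# for the structured GRFs G1 and G2)
rng = np.random.default_rng(0)
g1, g2 = rng.standard_normal((2, 100, 100))
facies = lithotype_rule(g1, g2, t1=0.0, t2=0.8)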


Run non-conditional simulations along the 3 main sections of the stratigraphic space by using Display Simulations. By changing the coefficient of correlation, visualize the effect on the spatial organization of the facies.
Visualize the thresholds applied to the 2 GRFs by using Display Threshold.
By using the variogram fitting button, calculate variograms on the lithotype indicators in two horizontal directions and along the vertical.

(snap. 13.5-3)

The figure below shows the variograms for the horizontal directions and the vertical one. The dotted lines correspond to the experimental variograms and the solid lines to the model.


(snap. 13.5-4)

(snap. 13.5-5)

13.5.1.4 Conditional Plurigaussian Simulation


Run 100 simulations using Interpolate/Conditional Simulations/Plurigaussian.


For the conditioning of the simulation to the data, use a standard moving neighborhood (moving Facies). It is defined by a search ellipsoid with radii of 1.2 km x 3 km x 20 m, and 8 sectors with an optimum of 4 points per sector. Display the simulation in the flat space using Display New Page, with a raster representation or a section in a 3D grid representation.

(snap. 13.5-1)


(snap. 13.5-2)

Finally, transfer the plurigaussian simulations from the working grid to the structural grid by using Tools/Merge Stratigraphic Units (Facies S2 Unit PGS).
The 3D viewer may be used to visualize the simulations.

13.5.1.5 Facies Probability Correction


From the simulations we can derive one map that keeps, for each cell, the most probable facies. Doing so, however, we cannot guarantee that the facies proportions match those assessed by the statistics on the data; in particular, poorly represented facies may disappear. In order to overcome that difficulty, a correction of the most probable facies has been proposed by Dr. A. Soares: it amounts to taking from the simulations a facies which is not necessarily the most probable one, while trying to honor, at the end of the assignment process, target global proportions fixed by the user.
The menu Tools/Facies Simulation Post-processing can be used to achieve the task, with the option to activate or not the Soares correction. In addition, risk curves can be produced to generate the distribution of the simulated volumes of each facies on the entire grid.
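The proportion-matching idea can be illustrated with a minimal sketch. This greedy quota-filling is a simplification for illustration, not Dr. Soares' published algorithm; prob would hold, for each cell, the facies frequencies computed over the 100 simulations:

import numpy as np

def assign_with_target_proportions(prob, target):
    # prob: (n_cells, n_facies) facies probabilities from the simulations
    # target: (n_facies,) global proportions to honor
    n_cells, n_facies = prob.shape
    quota = np.round(target * n_cells).astype(int)
    assigned = np.full(n_cells, -1)
    # visit the most confidently classified cells first
    for cell in np.argsort(-prob.max(axis=1)):
        for fac in np.argsort(-prob[cell]):     # preferred facies first
            if quota[fac] > 0:
                assigned[cell] = fac
                quota[fac] -= 1
                break
        else:                                   # quotas exhausted (rounding)
            assigned[cell] = int(np.argmax(prob[cell]))
    return assigned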


(snap. 13.5-1)

The statistics below compare both ways of getting the most probable facies, without and with the Soares correction. The figure displays an example of a horizontal section.

Most Probable Percentage    Before Soares    After Soares
Siltstone                   38.83%           37.52%
Anhydrite                   41.62%           44.74%
Dolomite                    19.56%           17.74%

[Most probable facies (Siltstone, Anhydrite, Dolomite) on a horizontal section, before and after the Soares correction]

(snap. 13.5-2)

The figures below show the risk curves using either Risk Curve or Histogram display type.


(snap. 13.5-3)


[Risk curves of the simulated volumes (Mm3) for the facies Siltstone, Anhydrite and Dolomite]

(snap. 13.5-4)


[Histograms of the simulated volumes (Mm3) for the facies Siltstone, Anhydrite and Dolomite]

(snap. 13.5-5)

[Horizontal sections of the most probable facies, before and after the Soares correction]

(snap. 13.5-6)

13.5.2 Porosity 3D Simulations


13.5.2.6 Porosity Simulations for Each Facies
(a) Macro selection lithotype
Go to File/Selection/Macro and create a macro selection with three alpha indices (Siltstone, Anhydrite and Dolomite) conditioned by the lithotype in Auxiliary/variable/S2 UNIT. The purpose is to select the porosity of each facies in order to perform the simulations. First create the three macro indices by using NEW. Then define the rule for each facies by selecting the variable Lithotype(phi); for Siltstone, the condition is 'equals'.


(snap. 13.5-1)

(b) Transformation in the Gaussian domain

In order to perform the simulations, the transform from the real space to the Gaussian space is necessary. This is done using the Gaussian Anamorphosis Modeling (Statistics/Gaussian Anamorphosis Modeling). An anamorphosis (with 30 Hermite polynomials) is stored for the porosity of each lithotype (e.g. Phi Siltstone, snap. 13.5-2).
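In essence, the anamorphosis maps the empirical distribution of the porosity onto a standard Gaussian one. Below is a simplified, rank-based sketch of that mapping; the Hermite-polynomial modeling used by Isatis is a smoother parametric version of the same idea, handling ties and distribution tails explicitly:

import numpy as np
from scipy.stats import norm

def normal_score(z):
    # map each value to the Gaussian quantile of its rank
    ranks = np.argsort(np.argsort(z))           # 0 .. n-1
    return norm.ppf((ranks + 0.5) / len(z))

def back_transform(y, z):
    # map Gaussian values back by interpolating matched quantiles
    zs = np.sort(z)
    ys = norm.ppf((np.arange(len(z)) + 0.5) / len(z))
    return np.interp(y, ys, zs)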


(snap. 13.5-2)

(snap. 13.5-3)


(c) Variogram fitting


Go to EDA/Variogram and compute an experimental variogram of the Gaussian porosity for each lithotype (Gaussian Phi Siltstone, Gaussian Phi Anhydrite, Gaussian Phi Dolomite).
The experimental variograms are computed with 10 lags of 110 m horizontally and 20 lags of 1 m vertically. Check for possible anisotropies along N0/N90.

(snap. 13.5-4)

(snap. 13.5-5)

Hereafter are the characteristics of the variogram models:
- Gaussian Phi Siltstone: 2 basic structures: an anisotropic spherical model with ranges of 680 m along U, 800 m along V and 7 m along W (sill: 0.94), and an anisotropic cubic model with ranges of 700 m along U, 1600 m along V and 2 m along W (sill: 0.06).
- Gaussian Phi Anhydrite: an anisotropic spherical model with horizontal ranges of 450 m along U and 850 m along V, and a vertical range of 4.4 m (sill: 1).
- Gaussian Phi Dolomite: an anisotropic spherical model with ranges of 476 m along U, 1072 m along V and 4.7 m along W (sill: 0.5), and an anisotropic cubic model with ranges of 323 m along U, 497 m along V and 5.4 m along W (sill: 0.5).

An example is given below for Gaussian Phi Siltstone:

(snap. 13.5-6)

(snap. 13.5-7)
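For reference, such a nested anisotropic model can be evaluated at any separation vector with the standard textbook formulas for the spherical and cubic structures, handling anisotropy by rescaling each lag component by its range. A sketch for Gaussian Phi Siltstone, using the parameters listed above:

import numpy as np

def spherical(r):
    # normalized spherical variogram, r = anisotropic distance
    r = np.minimum(r, 1.0)
    return 1.5 * r - 0.5 * r**3

def cubic(r):
    # normalized cubic variogram
    r = np.minimum(r, 1.0)
    return 7*r**2 - 8.75*r**3 + 3.5*r**5 - 0.75*r**7

def gamma_siltstone(hu, hv, hw):
    # spherical (680, 800, 7) m, sill 0.94 + cubic (700, 1600, 2) m, sill 0.06
    r_sph = np.sqrt((hu/680.0)**2 + (hv/800.0)**2 + (hw/7.0)**2)
    r_cub = np.sqrt((hu/700.0)**2 + (hv/1600.0)**2 + (hw/2.0)**2)
    return 0.94 * spherical(r_sph) + 0.06 * cubic(r_cub)

# beyond all ranges the total sill (0.94 + 0.06 = 1) is reached
assert np.isclose(gamma_siltstone(1000.0, 2000.0, 10.0), 1.0)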

(d) Simulations


For each lithotype, run the Turning Bands simulations (Interpolate/Conditional Simulations/
Turning Bands).

(snap. 13.5-8)


(snap. 13.5-9)


(snap. 13.5-10)

Do not forget to use the Gaussian back transform option in the simulation parameters.
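As an aside, the spirit of such simulation methods, building a field with a prescribed covariance from cheap one-dimensional ingredients, can be illustrated with a toy spectral sketch. This is not the Isatis Turning Bands implementation (which simulates 1D processes along lines and is far more general); a Gaussian covariance is chosen here only because its spectrum is simple:

import numpy as np

def toy_gaussian_field(coords, scale, n_comp=500, seed=0):
    # unconditional standard Gaussian field with covariance
    # exp(-||h||^2 / (2 scale^2)), built as a sum of random cosines
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0 / scale, size=(n_comp, coords.shape[1]))
    phase = rng.uniform(0.0, 2 * np.pi, size=n_comp)
    return np.sqrt(2.0 / n_comp) * np.cos(coords @ omega.T + phase).sum(axis=1)

# values at 1000 random points of a 1 km cube, 200 m correlation scale
pts = np.random.default_rng(1).uniform(0.0, 1000.0, size=(1000, 3))
z = toy_gaussian_field(pts, scale=200.0)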


(snap. 13.5-11)

Below is the neighborhood used for Turning Bands. The same standard neighborhood is used for
the porosity of the different lithotypes (Phi Siltstone, Phi Anhydrite, Phi Dolomite).

(snap. 13.5-12)

13.5.2.7 Conditioning Porosity Simulations to Facies Simulations


(a) Create special variable
Create the macro variable Porosity [xxxxx] in order to store the results of the independent per-facies porosity simulations conditioned by the facies simulations (Tools/Create Special Variable).


(snap. 13.5-1)

(b) Calculator
The transformation created in the calculator fills the macro variable Porosity [xxxxx] from the porosity simulations conditioned by lithotype (Phi Siltstone [xxxxx], Phi Anhydrite [xxxxx], Phi Dolomite [xxxxx]), combined with the facies simulations (PGS [xxxxx]).

(snap. 13.5-2)
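Outside Isatis, this cookie-cutter combination is a simple per-cell selection. A sketch, in which the facies codes and array names are assumptions:

import numpy as np

def cookie_cut(pgs, phi_by_facies):
    # each cell takes the porosity simulated for its own facies
    out = np.zeros_like(next(iter(phi_by_facies.values())))
    for code, phi in phi_by_facies.items():
        mask = pgs == code
        out[mask] = phi[mask]
    return out

# toy inputs: facies codes 1, 2, 3 for Siltstone, Anhydrite, Dolomite
rng = np.random.default_rng(2)
pgs = rng.integers(1, 4, size=(10, 10, 5))
phis = {c: rng.normal(0.10 + 0.02 * c, 0.01, size=pgs.shape) for c in (1, 2, 3)}
porosity = cookie_cut(pgs, phis)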


Then the macro variable Porosity [xxxxx] is transferred from the working flat grid (3D working Grid) to the 3D real space (3D Structural grid) using Tools/Merge Stratigraphic Units.

(snap. 13.5-3)

Post-processing with simulations can be performed.

13.5.3 Compute Volumetrics using Porosity 3D


Use Tools/Volumetrics to perform an estimation of the HCPV (hydrocarbon pore volume). The input parameters for the HCPV are identical to the previous ones; the only variable that differs is the 3D porosity (Porosity[xxxxx]).


(snap. 13.5-4)
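In its simplest form, the computation behind such a risk curve sums, for each simulation, the pore volume over the grid and then reads quantiles across the simulations. A sketch with a hypothetical cell size and water saturation; the Isatis Volumetrics tool handles more inputs than this:

import numpy as np

rng = np.random.default_rng(3)
cell_volume = 50.0 * 50.0 * 1.0      # m3 per cell (hypothetical)
sw = 0.30                            # hypothetical water saturation

def hcpv(porosity):
    # hydrocarbon pore volume of one simulation, in m3
    return np.sum(cell_volume * porosity * (1.0 - sw))

# toy porosity simulations standing in for Porosity[xxxxx]
volumes = np.array([hcpv(rng.normal(0.12, 0.02, size=(60, 60, 20)))
                    for _ in range(100)])
# P90 is the volume exceeded by 90% of the simulations, hence the
# 10th percentile, as read off the risk curve in fig. 13.5-1
p90, p50, p10 = np.percentile(volumes, [10, 50, 90])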


[Risk curve of the HCPV volumes (Mm3), with the P90, P50 and P10 quantiles]

(fig. 13.5-1)

The volumetrics computed using the 3D porosity simulations are generally higher than those computed using the 2D mean porosity simulations.


13.6 Conclusion
This case study handles different techniques available in Isatis (surface simulations, facies simulations and volumetrics). The volumetric outcomes are interesting to study, as they depend directly on the way the porosity is handled (3D porosity simulations).
The use of the envelope in the discretization and flattening has an influence on the computation of the 3D proportion matrix: it is necessary to extrapolate the proportions at the top and bottom of the VPC. This extrapolation has of course an influence on the facies simulations, therefore on the porosity simulations and finally on the resulting volumes.
The volumes calculated with the 3D porosity simulations are generally higher than those calculated using the 2D mean porosity simulations. This can be explained by the extrapolation of siltstone at the top during the computation of the 3D proportion curves.
To conclude, this case study presents a possible workflow for reservoir characterization. For this purpose, several methods are applied (Turning Bands simulations, Plurigaussian simulations, Universal Kriging). It deals with structural, facies and property modeling. An interesting topic is the use of a 2D proportion constraint. It shows how to account for the uncertainty on surfaces and properties (e.g. porosity) together.

Environment

Pollution

15. Pollution
This case study is based on a data set kindly provided by Dr. R. Clardin of the Laboratoire Cantonal d'Agronomie. The data set has been collected for the GEOS project (Observation des sols de Genève) and processed with the University of Lausanne.
The case study covers rather exhaustively a large panel of Isatis features, such as:
- how to perform a univariate and bivariate structural analysis,
- how to interpolate these variables on a regular grid, using kriging or cokriging,
- how to perform conditional simulations using the Turning Bands method, in order to obtain the probability map for the variable to exceed a given pollution threshold.
Important Note:
Before starting this study, it is strongly advised to read the Beginner's Guide, especially the following paragraphs: Handling Isatis, Tutorial: Familiarizing with Isatis Basics, and Batch Processing & Journal Files.
All the data sets are available in the Isatis installation directory (usually C:\program file\Geovariances\Isatis\DataSets\). This directory also contains a journal file including all the steps of the case study. In case you get stuck during the case study, use the journal file to perform all the actions according to the book.

Last update: Isatis version 2014


15.1 Presentation of the Dataset


The data is provided in the ASCII file pollution.hd (located in the Isatis installation directory). It consists of point samples collected in a polluted area where several variables have been measured. We will pay particular attention to two elements: lead (Pb) and zinc (Zn).
First, a new study has to be created using the File / Data File Manager facility.

(snap. 15.1-1)

It is then advised to verify the consistency of the units defined in the Preferences / Study Environment / Units panel:
- Input-Output Length Options window: unit in kilometers (Length), with its Format set to Decimal with Length = 10 and Digits = 2.
- Graphical Axis Units window: X and Y units in kilometers, Z unit in centimeters (the latter being of no importance in this 2D case).
The ASCII file contains a header where the structure of the data information is described. Each record contains successively:
- The rank of the sample (which will not be loaded, as it is not described by a corresponding field keyword).
- The coordinates of each sample (X and Y).
- The two variables of interest (Pb and Zn).


#
# GTX FILE SAVING: GTX Directory: 2D irregular GTX File: Data
#
# structure=free , x_unit=km , y_unit=km
#
# field=2 , type=xg , name="_X Gravity Center" , bitlength=32 ;
#      f_type=Decimal , f_length=9 , f_digits=3
#
# field=3 , type=yg , name="_Y Gravity Center" , bitlength=32 ;
#      f_type=Decimal , f_length=9 , f_digits=3
#
# field=4 , type=numeric , name="Pb" ;
#      bitlength=32 , unit="%" , ffff=" " ;
#      f_type=Decimal , f_length=10 , f_digits=2
#
# field=5 , type=numeric , name="Zn" ;
#      bitlength=32 , unit="%" , ffff=" " ;
#      f_type=Decimal , f_length=10 , f_digits=2
#
#+++++++++---------+++++++++----------++++++++++
     1.00   119.504   509.335      2.15      4.60
     2.00   120.447   510.002      2.48      4.50
     3.00   120.602   511.482     33.20
     4.00   121.604   510.940      2.21      4.70
     5.00   121.848   511.929      2.08      5.40

Note - In the definition of the two pollution variables, the lack of information is coded as a blank string. If, for a sample, the characters within the offset dedicated to the variable are left blank, the value of the variable for this sample is set to a conventional internal value called the undefined value. This is the case for the third sample of the file, where the Zn value is missing.
The procedure File / Import / ASCII is used to load the data. First you have to specify the path of your data file using the button ASCII Data File. As no specific structure is provided, the samples are considered as Points (as opposed to grid or line structures). By default the 'Header is Contained in the ASCII Data File' option is on, which is right for this data file.
The ASCII files are located in the Isatis installation directory, under Datasets/Pollution.


(snap. 15.1-2)

Consequently, in this case you do not need to pay attention to the ASCII Header part of the window. By default this window prompts the option Create a New File, which is also right for this case study. In order to create a new directory and a new file in the current study, the button NEW Points File is used to enter the names of these two items; click on the New Directory button and give a name, then do the same for the New File button, for instance:
- New Directory = Pollution
- New File = Data
Click on OK and you will be back to the File / Import / ASCII window; finally you have to press Import.
In order to see the status of the last action, click on the Message Window icon.

# ASCII FILE HEADER INTERPRETATION:
#
#   structure = free
#   x_unit = km
#   y_unit = km
#   field = 2 , type = xg , name = _X Gravity Center
#      ffff = " " , unit = "" , bitlength = 32
#      f_type = Decimal , f_length = 9 , f_digits = 3
#      description = ""
#   field = 3 , type = yg , name = _Y Gravity Center
#      ffff = " " , unit = "" , bitlength = 32
#      f_type = Decimal , f_length = 9 , f_digits = 3
#      description = ""
#   field = 4 , type = numeric , name = Pb
#      ffff = " " , unit = "%" , bitlength = 32
#      f_type = Decimal , f_length = 10 , f_digits = 2
#      description = ""
#   field = 5 , type = numeric , name = Zn
#      ffff = " " , unit = "%" , bitlength = 32
#      f_type = Decimal , f_length = 10 , f_digits = 2
#      description = ""
#+++++++++---------+++++++++----------++++++++++

Number of Header Samples (*)      = 0
Number of Samples read            = 102
Number of Samples written to disk = 102

The File / Data File Manager facility offers the possibility of listing the contents of all the directories and files of the current study, and of providing some information on any item of the data base, just by using the graphical menu (left button of the mouse on a variable of interest, then click on the right button and select an item). This allows the following basic statistics to be derived: the file contains 102 samples, but the Zn variable is defined on 101 samples only.

Name           Count of Samples   Minimum      Maximum
X Coordinate   102                109.847 km   143.012 km
Y Coordinate   102                483.656 km   513.039 km
Pb             102                1.09         33.20
Zn             101                3.00         31.60


15.2 Univariate Approach


Ordinary kriging is the most popular way to estimate the values of one variable at unknown locations. To perform this geostatistical approach, we will need a variogram model that describes the phenomenon and a neighborhood configuration.
The principal task consists in detecting and interpreting the spatial structure. It is therefore recommended that you spend most of your time trying to bring out associations and relations between structures and features of the phenomenon.
The Statistics / Exploratory Data Analysis procedure will help you get a better knowledge of your data. This procedure relies on the use of several linked windows: when you highlight or mask one or several samples in a graphical window, they are automatically highlighted or masked in the other windows.
With all the graphic options (basemap, histogram, variogram...) and using link operations, you will be able to detect anomalous data and better understand your data.
You will also be helped by your experience on similar data and by scientific facts about the variable under study, especially at short distances where no information is available.

Note - Skewed data sets (presence of genuinely high values) sometimes mask structures, therefore complicating the task of calculating a representative experimental variogram. There are several ways to tackle this problem; a common practice is the application of a Gaussian (normal score) or logarithmic transformation of the cumulative distribution function (cdf), to try to stabilize the fluctuations between high and low values. Another possibility is to mask (put aside from the calculation) some or all of the relatively high values, to try to obtain more structured variograms (reduction or elimination of a nugget effect structure). The latter method is recommended when the anomalous values correspond to outliers; otherwise you risk smoothing or hiding real structures.


15.3 Exploratory Data Analysis


In the Statistics / Exploratory Data Analysis panel, the first task consists in defining the file and the variables of interest, namely Pb and Zn. To achieve that, click on the Data File button and select the two variables. We will concentrate on the Zn variable alone, so tick only Zn (see graphic). By pressing the corresponding icons (eight in total), we can successively perform several statistical representations, using default parameters or choosing appropriate ones.

(snap. 15.3-1)

For example, to calculate the histogram with 32 classes between 0 and 32% (1 unit interval), first click on the histogram icon (third from the left); a histogram calculated with default values is displayed; then enter the proper values in the Application / Calculation Parameters menu of the Histogram page. If you switch on the Define Parameters Before Initial Calculations option, you can skip the default histogram display.
On the base map (first icon from the left), any active sample is represented by a cross proportional to the Zn value. A sample is active if its value for the given variable is defined and not masked.
For the sake of simplicity, we limit the analysis to omni-directional variogram calculations, therefore ignoring potential anisotropies. The experimental variogram is obtained by clicking on the seventh icon. The number of pairs or the histogram of pairs may be added to the graphic by switching on the appropriate buttons in Application / Graphic Specific Parameters. The following variogram has been calculated with default parameters. In the Variogram Calculation Parameters panel you can also compute the variogram cloud.
[Base map of Zn, histogram of Zn (101 samples; minimum 3.00, maximum 31.60, mean 6.10, std. dev. 3.59), variogram cloud and omni-directional experimental variogram with the number of pairs per lag]

(fig. 15.3-1)

A quick analysis of these graphics leads to several comments:
- From the base map, the data set shows some areas without information. The average distance between samples is about 0.7 km. Two samples are 6.5 km and 9.0 km away from their nearest sample. In this case you might question whether these samples belong to the area of interest or to the same population. We will take these values into account in our calculations.
- From the base map, we can clearly see that there are two samples with anomalous values. It is important to find the nature of these two values before considering them as outliers. You might also question the relation between these values and their geographical location, to try to infer whether such values are likely to occur in non-sampled areas.


- The histogram shows a clear skewness. Another feature that you can observe from the histogram is that there are no samples with values less than 3% Zn.
- The variogram cloud (calculated by ticking Calculate the Variogram Cloud in the Variogram Calculation Parameters) clearly shows two populations. In order to identify the geographical location of the high variogram values at short distances, you can select several points from the variogram cloud and highlight them with the right button. All the windows are automatically regenerated and the selected values are painted in blue, as asterisks. In the base map graphic they look like two spiders centered on the two anomalous Zn values.
[The same graphics after highlighting the pairs with high variogram values at short distances: the highlighted samples form two spiders on the base map, centered on the two anomalous Zn values]

(fig. 15.3-2)

To find out more about the two central points of these spiders, right click on them in the basemap and ask for Display Data Information (Short):

Display of the following variables (2 samples)
X coordinate   Y coordinate   Variable 1: Zn
113.433km      498.943km      24.80
113.313km      501.368km      31.60

At this stage a crucial question arises: shall we consider these two anomalous values as erroneous or as real ones? It is likely that these two Zn values are not erroneous, and we will consider them as real. However, these high values may be due to a local behavior. Therefore, we mask them for the analysis but we will take them into account for the estimation. Using the mouse, first click with the left button over the anomalous values on the Basemap page and then click with the right button to select the Mask option.
The effect on the variogram cloud is spectacular. All the pairs with high variogram values are now suppressed: they are still drawn, but represented by red squares instead of green crosses. Redrawing the variogram cloud while hiding the masked information (in the Application menu) operates a rescaling of the picture.
The cloud now presents a much more conventional shape: as the variability is expected to increase with the distance, the variogram cloud looks like a cone lying over the horizontal axis, with a large density of pairs at the bottom. As a consequence, the experimental variogram becomes more structured and shows a lower variability compared to the previous variogram.
The procedure of highlighting pairs with large variogram values at small distances does not produce spiders anymore.
At this stage, we can save the current selection (without the two high values) as a selection variable of the data base, called Variographic selection, in the Application / Save in Selection panel (which can be reached in the Menu Bar of the Base Map page).

(snap. 15.3-2)


[Base map, histogram (99 samples; minimum 3.00, maximum 12.70, mean 5.66, std. dev. 1.70), variogram cloud and experimental variogram of Zn after masking the two anomalous values]

(fig. 15.3-3)

The following printout provides the statistics on the new selection:

SELECTION STATISTICS:
--------------------
New Selection Name      = variographic selection
Total Number of Samples = 102
Masked Samples          = 2
Selected Samples        = 100

The final task is to choose a better lag value for the experimental variogram calculation: first we switch OFF the display of the Variogram Cloud in Application / Graphic Specific Parameters, then we use Application / Calculation Parameters to ask for 10 lags of 1 km, preview the histogram of the number of pairs (Display Pairs) in the Direction Definition panel, and display the number of pairs for each lag in Application / Graphic Specific Parameters.


(snap. 15.3-3)


(snap. 15.3-4)

The experimental variogram is reproduced in the next figure and can be compared to its initial shape. The variance drops from 12.91 to 2.88, and the shape of the variogram is much more convenient for the model fitting, which will be performed next. Moreover, the number of pairs is quite stable for all lags (except the first one), which reflects the quality of the parameter choice.


[Experimental variogram of Zn computed with 10 lags of 1 km, with the number of pairs per lag]

(fig. 15.3-4)

In order to perform the fitting step, it is now time to store the final experimental variogram with the
item Save in Parameter File of the Application menu of the Variogram Page. We will call it
Pollution Zn.


15.4 Fitting a Variogram Model


The procedure Statistics / Variogram Fitting allows you to fit an authorized model on an
experimental variogram.
We must first specify the file name of the Parameter File which contains the experimental
variogram: this is the file Pollution Zn that was created in the previous paragraph.
At the bottom of the panel we need to define another Parameter File which will ultimately contain
the model: we will also call it Pollution Zn. Although they carry the same name, there will be no
ambiguity between these two files as their contents belong to different types.
Common practice is to find, by trial and error, the set of parameters defining the model which fits
the experimental variogram as closely as possible. The quality of the fit is checked graphically on
one of the two windows:
- The global window, where all experimental variograms, in all directions and for all variables, are displayed.
- The fitting window, where we focus on one given experimental variogram, for one variable and in one given direction, and where it is possible to get an interactive fitting.

In our case, as the Parameter File refers to only one experimental variogram for the single variable
Zn, both windows will look the same.


(snap. 15.4-1)

The principle consists in editing the Model parameters and checking the impact graphically. You can also use the model initialization by clicking on Model Initialization; this will enable you to initialize the model with different combinations of structures, with or without a nugget effect. This procedure automatically fits the range and the sill of the variogram (see the Variogram Fitting section of the User's Guide).
The next figure presents the result of the Model Initialization with a Spherical component only.


[Experimental variogram of Zn and the model resulting from the Model Initialization with a Spherical component only]

(fig. 15.4-1)
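Since the spherical model has a closed form, the fitting, whether by trial and error or automatic, amounts to adjusting a sill and a range. A sketch with placeholder experimental values (in practice, read the actual lag/variogram pairs off the experimental variogram computed above):

import numpy as np
from scipy.optimize import curve_fit

def spherical(h, sill, a):
    # spherical variogram: sill * (1.5 h/a - 0.5 (h/a)^3), flat beyond a
    r = np.minimum(h / a, 1.0)
    return sill * (1.5 * r - 0.5 * r**3)

# lag centers (km) and experimental values -- placeholders only
h_exp = np.arange(1.0, 11.0)
g_exp = np.array([1.1, 1.7, 2.1, 2.4, 2.6, 2.7, 2.7, 2.8, 2.8, 2.8])

(sill_fit, range_fit), _ = curve_fit(spherical, h_exp, g_exp, p0=[2.8, 5.0])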

Apart from the Model Initialization, a more complete Automatic Fitting is provided in the corresponding tab, where you can choose the combination of structures you want to use and also put constraints on anisotropies, sills and ranges. From the Automatic Fitting tab, click on the structure to select an Exponential one.

(snap. 15.4-2)

Then press Fit from the Automatic Fitting tab. Use the global window and the Print button to check the output model.


[Experimental variogram of Zn and the model obtained by Automatic Fitting with an Exponential structure]

(fig. 15.4-2)

The Manual Fitting is available in the corresponding tab where you may change the parameters by
clicking on Edit.

(snap. 15.4-3)

Save the model under the name Pollution Zn and finally click on Run.


15.5 Cross-Validation
The Statistics / Modeling / Cross-Validation procedure consists in considering each data point in turn, removing it temporarily from the data set and using its neighboring information to predict (by a kriging procedure) the value of the variable at its location. The estimation is compared to the true value to produce the estimation error, possibly standardized by the standard deviation of the estimation.
Click on the Data File button and select the Zn variable without any selection; in that way we will be able to test the parameters at the two high-value locations. The Target Variable button is set to the only variable selected in the previous step. Switch on the Graphic Representations option. Select, with the Model button, the variogram model called Pollution Zn.
The new feature of this procedure is the definition of the Neighborhood parameters. Click on the Neighborhood button and you will be asked to select or create a new set of parameters; in the New File Name area enter the name Pollution, then click on Add and you will be able to set the neighborhood parameters by clicking on the respective Edit button.
A trial and error procedure allows the user to select a convenient set of neighborhood parameters. These parameters will be discussed in the estimation chapter; here we keep the default parameters.
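Once the leave-one-out estimates are available, the scores reported further below reduce to simple statistics on the errors. A minimal sketch, where z_est and sigma_k stand for the re-estimation and its kriging standard deviation at each sample:

import numpy as np

def cross_validation_scores(z_true, z_est, sigma_k, threshold=2.5):
    err = z_est - z_true
    std_err = err / sigma_k
    robust = np.abs(std_err) <= threshold     # same convention as Isatis
    return {
        "mean_error": err.mean(),             # should be close to 0
        "var_std_error": std_err.var(),       # should be close to 1
        "n_robust": int(robust.sum()),
    }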


(snap. 15.5-1)


(snap. 15.5-2)

By clicking on Run, the procedure finally produces a graphic page containing the four following windows:
- the scatter diagram of the true data versus the estimated values,
- the base map, with symbols whose size and color respectively represent the real and estimated Zn,
- the histogram of the standardized estimation errors,
- the scatter diagram of the standardized estimation errors versus the estimated values.
A sample is arbitrarily considered as not robust as soon as its standardized estimation error is larger than a given threshold in absolute value (2.5 for example, which approximately corresponds to the 1% extreme values of a normal distribution).


(fig. 15.5-1)

The histogram shows a long tail and the scatter diagram of estimated values versus true data is far
from being close to the first bisector. At the same time, the statistics on the estimation error and
standardized error (mean and variance) are printed out in the Message window.
======================================================================
|                         Cross-validation                           |
======================================================================
Data File Information:
  Directory   = Pollution
  File        = Data
  Variable(s) = Zn
Target File Information:
  Directory   = Pollution
  File        = Data
  Variable(s) = Zn
Seed File Information:
  Directory   = Pollution
  File        = Data
  Variable(s) = Zn
  Type        = POINT (102 points)
Model Name        = Pollution Zn
Neighborhood Name = Pollution - MOVING

Statistics based on 101 test data
                 Mean       Variance
Error           -0.08051    18.39521
Std. Error      -0.05368    13.32939

Statistics based on 87 robust data
                 Mean       Variance
Error            0.08276     1.48930
Std. Error       0.10161     0.95731

A data is robust when its Standardized Error lies between -2.500000 and 2.500000

Successfully processed = 101

The cross-validation has been carried out only on the 101 defined samples of Zn. The mean error proves that the unbiasedness condition of the kriging algorithm worked properly. The variance of the standardized estimation error measures the ratio between the (square of the) experimental estimation error and the kriging variance: this ratio should be close to 1. The deviation from this optimum (13.33 in this test) probably reflects the impact of the two high values that were not taken into account in the variogram model, and also the impact of reducing the real variability from 12.9 to a sill of 2.7.
In the second part of this array, the same statistics are calculated based only on the points where the standardized estimation error is smaller (in absolute value) than the 2.5 threshold: these points are arbitrarily considered as robust data (87 of them). We do not recommend paying attention to these statistics based on arbitrarily defined robust data.


More consistently, we should use the Variographic Selection to mask the two large values (high
local values) from the data information. The procedure produces the following figure:

(fig. 15.5-2)

The various figures as well as the statistics present a much better consistency between the
remaining data and the model: in particular, the variance of the estimation standardized error is now
equal to 1.82.
======================================================================
|                         Cross-validation                           |
======================================================================
Data File Information:
  Directory   = Pollution
  File        = Data
  Selection   = Variographic selection
  Variable(s) = Zn
Target File Information:
  Directory   = Pollution
  File        = Data
  Selection   = Variographic selection
  Variable(s) = Zn
Seed File Information:
  Directory   = Pollution
  File        = Data
  Selection   = Variographic selection
  Variable(s) = Zn
  Type        = POINT (102 points)
Model Name        = Pollution Zn
Neighborhood Name = Pollution - MOVING

Statistics based on 99 test data
                 Mean       Variance
Error           -0.12851     2.97388
Std. Error      -0.07151     1.81638

Statistics based on 91 robust data
                 Mean       Variance
Error            0.14076     1.68893
Std. Error       0.13428     1.06655

A data is robust when its Standardized Error lies between -2.500000 and 2.500000

Successfully processed = 99

Still, 8 points are considered as inconsistent by the procedure: several of them are located on the edge of the data set and are therefore estimated using a one-sided neighborhood.

The last feature allows you to rescale the model according to the cross-validation scores. As a matter of fact, we know that the re-estimation error does not depend on the sill of the variogram. On the contrary, the kriging variance is directly proportional to the sill. Hence, if the variance of the standardized estimation error is equal to 1.96, multiplying the sill by this value corrects it to 1. However, this type of operation is not recommended because of the weakness of this cross-validation methodology, based on a kriging which itself relies on the model.


15.6 Creating the Target Grid


All the estimation or simulation results will be stored as different variables of a new 2D grid file
located in the directory Pollution. This grid, called Grid, is created using the File / Create Grid
File facility.

(snap. 15.6-1)

Using the Graphic Check option, the procedure offers the graphical capability of checking that the
new grid reasonably overlays the data points.


(fig. 15.6-1)

To create the grid, you finally need to click on Run.


15.7 Kriging
The kriging procedure Interpolate / Estimation / (Co-)Kriging requires the definition of:
- the input information: variable Zn in the Data File (without any selection),
- the following variables in the Output Grid File, where the results will be stored:
  - the estimation result in Estimation for Zn (Kriging),
  - the standard deviation of the estimation error in St Dev for Zn (Kriging),
- the Model: Pollution Zn,
- the Neighborhood: Pollution.
As already mentioned, the two high Zn values are kept for the kriging estimation, as we do not consider them as erroneous data.

(snap. 15.7-1)


A special feature allows you to test the choice of parameters, through a kriging procedure, on a graphical basis (Test button). A first click within the graphic area displays the target file (the grid). A second click allows the selection of one particular grid node. The target grid node may also be entered in the Test Window / Application / Selection of Target option (see the status line at the bottom of the graphic page), for instance the node [11,21].
The figure shows the data set, the samples chosen in the neighborhood and their corresponding weights. The bottom of the screen recalls the estimated value, its standard deviation and the sum of the weights.
(fig. 15.7-1)

Test Graphic Window


In the Application Menu of the Test graphic window, click on Print Weights & Results. This produces a printout of:
- the calculation environment: target location, model and neighborhood,
- the kriging system,
- the list of the neighboring data and the corresponding weights,
- the summary of this kriging test:


Results for : Punctual
- For variable V1
Number of Neighbors             = 10
Mean Distance to the target     = 5.89km
Total sum of the weights        = 1.000000
Sum of positive weights         = 1.000000
Weight attached to the mean     = 0.832604
Lagrange parameters #1          = -0.324596
Estimated value                 = 9.315950
Estimation variance             = 2.896565
Estimation standard deviation   = 1.701930
Variance of Z* (Estimated Z)    = 0.393935
Covariance between Z and Z*     = 0.069339
Correlation between Z and Z*    = 0.067976
Slope of the regression Z | Z*  = 0.176017
Signal to Noise ratio (final)   = 0.911876
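The quantities printed above come from solving the ordinary kriging system at the target node. A compact sketch of that system, where gamma is the fitted variogram model (e.g. lambda h: spherical(h, sill_fit, range_fit) with the function and parameters from the fitting section):

import numpy as np

def ordinary_kriging(coords, z, target, gamma):
    # left-hand side: variogram matrix bordered by the unbiasedness row
    n = len(z)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    lhs = np.ones((n + 1, n + 1))
    lhs[:n, :n] = gamma(d)
    lhs[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = gamma(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(lhs, rhs)
    w, mu = sol[:n], sol[n]          # weights and Lagrange parameter
    estimate = w @ z
    variance = w @ rhs[:n] + mu      # kriging variance
    return estimate, variance, w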

You can now try to modify the neighborhood parameters (Edit button): 8 angular sectors with an optimum count of 2 samples per sector and a minimum number of 2 points, in a neighborhood circle centered on the target point with a radius of 10 km. When these modifications are applied, the calculations and the graphic are updated.

(snap. 15.7-2)


(fig. 15.7-2)

The summary of this kriging test follows:

Results for : Punctual
- For variable V1
Number of Neighbors             = 16
Mean Distance to the target     = 6.33km
Total sum of the weights        = 1.000000
Sum of positive weights         = 1.000000
Weight attached to the mean     = 0.827340
Lagrange parameters #1          = -0.250708
Estimated value                 = 8.874644
Estimation variance             = 2.833627
Estimation standard deviation   = 1.683338
Variance of Z* (Estimated Z)    = 0.309097
Covariance between Z and Z*     = 0.058389
Correlation between Z and Z*    = 0.064621
Slope of the regression Z | Z*  = 0.188902
Signal to Noise ratio (final)   = 0.932130

You can check the reasonable stability of the estimation and an improvement of the standard
deviation which reflects the more regular spread of the neighboring data.
The Application Menu of the Test Graphic window (Application / Domain to be estimated) offers a
final possibility (restricted to the case of output grid files): to cross hatch all the grid nodes where
the neighborhood constraints cannot be fulfilled.


[Domain to be estimated: the grid nodes where the neighborhood constraints cannot be fulfilled are cross-hatched]

(fig. 15.7-3)

Test Graphic Window


Pressing the Run button performs the estimation on 1021 out of the 1225 grid nodes and stores the
resulting variables (the estimation and the corresponding standard deviation) in the output file.


15.8 Displaying the Graphical Results


The kriging results are now visualized using several combinations of the display capabilities.
You are going to create a new Display template, which consists in an overlay of a grid raster and of the Zn data locations. All the Display facilities are explained in detail in the "Displaying & Editing Graphics" chapter of the Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page pops up, together with a Contents window. You have to specify in this window the contents of your graphic. To achieve that:
- Firstly, give a name to the template you are creating: Zn. This will allow you to easily display this template again later.
- In the Contents list, double click on the Raster item. A new window appears, in order to let you specify which variable you want to display and with which color scale:
  - In the Data area, in the Grid file, select the variable Estimation for Zn (Kriging).
  - Specify the title that will be given to the Raster part of the legend, for instance Zn kriging.
  - In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the Zn variable. To create a new color scale: click on the Color Scale button, double-click on New Color Scale, enter a name (Zn) and press OK. Click on the Edit button. In the Color Scale Definition window:
    - In the Bounds Definition, choose User Defined Classes.
    - To modify the bounds, click on Calculate from File to retrieve the min and max bounds from the selected variable.
    - Change the Number of Classes to 25. This may also be achieved by clicking on the Bounds button and entering 25 as the New Number of Classes, then OK.
    - In the Colors area, click on Color Sampling to choose the 25 colors regularly from the 32 colors palette. This will improve the contrast in the resulting display.
    - Switch on the Invert Color Order toggle in order to assign the red colors to the large Zn values.
    - Click on the Undefined Values button and select Transparent.
    - In the Legend area, switch off the Automatic Spacing between Tick Marks button, enter 0 as the reference for tick marks and 5 as the step between tick marks. Then, specify that you do not want your final color scale to exceed 6 cm. Switch off the Use Default Format button and set the number of digits to 0.
    - Click on OK.
  - In the Item contents for: Raster window, click on Display Current Item to display the result.
  - Click on OK.

(snap. 15.8-1)
- Back in the Contents list, double-click on the Basemap item to represent the Zn variable with symbols proportional to the variable value. A new Item contents window appears. In the Data area, select the Data / Zn variable as the Proportional Variable. Enter Zn data as the Legend Title. Leave the other parameters unchanged; by default, black crosses will be displayed with a size proportional to the Zn value. Click on Display Current Item to check your parameters, then on Display to see all the previously defined components of your graphic. Click on OK to close the Item contents panel.
- In the Item list, you can select any item and decide whether or not you want to display its legend. Use the Up and Down arrows to modify the order of the items in the final display.
- In the Display Box tab, choose the Containing a set of items mode and select the Raster item to define the display box and remove the blanks.
Close the Contents window. Your final graphic window should be similar to the one displayed hereafter.

(snap. 15.8-2)

The * and [Not saved] symbols in the name of the page indicate that some recent modifications
have not been stored in the Zn graphic template, and that this template has never been saved. Click
on Application / Store Page to save them. You can now close your window.


Create a second template Zn Stdev to display the kriging standard deviation using an isoline grid
representation (between 0 and 2.5 with a step equal to 0.5) and an overlay of the Zn data locations.
The result should be similar to the one displayed hereafter.

(snap. 15.8-3)


(fig. 15.8-1)


15.9 Multivariate Approach


The data set contains more information than the only variable of interest. Thus, instead of looking at the variable Zn alone, we can try to take advantage of the knowledge of the Pb variable and of the correlation between these two variables.
The first task consists in characterizing the relationship between these two variables through Statistics / Exploratory Data Analysis, selecting the two variables of interest. Selecting the two variables and pushing the Statistics button produces the basic statistics on the selected variables; the correlation coefficient between the two variables (defined only on the 101 samples where both variables are defined) is 0.885.

(snap. 15.9-1)

We will now produce several graphics, as we did before with the Zn variable alone:
- A base map of both variables.
- A scatter diagram of Zn versus Pb, where we observe that the two large Zn values also correspond to large Pb values. The linear regression line may be added by switching ON the corresponding button in the Application / Graphic Specific Parameters... window.
[Scatter diagram of Zn versus Pb, with the linear regression line (rho = 0.885)]

(fig. 15.9-1)


- Two boxplots using the Statistics / Quick Statistics panel. Select the two variables of interest, Pb and Zn, in the file Pollution / Data. Then choose the boxplot representation and switch ON the Draw Outliers button. On the boxplot you can easily detect the outliers (see the Quick Statistics section of the User's Guide for more information).

(snap. 15.9-2)


(fig. 15.9-2)
- An omnidirectional multivariate variogram with the variogram cloud: for the sake of clarity, we will define the same calculation parameters as before (10 lags of 1 km).

(snap. 15.9-3)
- Finally, from the Zn base map, we mask the two large values. We refresh the Variogram picture by hiding the masked information and check the following points:
  - The Zn variogram cloud is the same as the one we obtained previously in this study.
  - The cross-variogram cloud Pb/Zn presents an almost one-sided picture: there are only a few negative values.
  - The Pb variogram cloud still shows the same strip as the Zn variogram cloud before masking the two outliers.


[Variogram clouds and experimental variograms of Zn, Pb and the Zn & Pb cross-variogram]

(fig. 15.9-3)

The correct procedure, once again, is to select some pairs with high variability at small distances on the Pb variogram cloud and to highlight their origin. On the Zn base map, a cluster of samples is now painted blue, but no pairs of points are represented; on the Pb base map, one obvious spider is drawn.


[Base map of Pb: the spider is centered on the high Pb value]

(fig. 15.9-4)

Pb Basemap
This high Pb value will be masked to better interpret the underlying experimental variogram. Moreover, the center of the spider precisely corresponds to the only point where the Pb variable is defined and the Zn variable is not.
This is confirmed by selecting this sample and asking for the Display Information (Long) option of the menu, which gives us the following information:
- X   120.602km
- Y   511.482km
- Pb  33.20
- Zn  N/A
If we pick this sample from the Pb base map and mask it, then the Pb variogram cloud looks more reasonable. The variogram picture is redrawn, suppressing the display of the variogram cloud and producing the count of pairs for each lag instead.


[Simple variograms of Zn and Pb and their cross-variogram, with the number of pairs per lag, after masking the outliers]

(fig. 15.9-5)

Obviously, we recognize the same Zn variogram as before. The Pb variogram, as well as the cross-variogram, shows the same number of pairs, as they are all built on the same 99 samples where both variables are defined. We will save this new bivariate experimental variogram in a Parameter File called Pollution Zn-Pb for the fitting step.
The Statistics / Variogram Fitting procedure is started with Pollution Zn-Pb as the experimental variogram and by defining a new file, also called Pollution Zn-Pb, for storing the bivariate model.
The Global window is used for fitting all the variables simultaneously. The use of the Model Initialization does not give satisfactory results this time, especially for the Pb variogram. The reason is the continuous increase of variability in the Pb variogram at large distances, which is not captured by the unique default spherical basic structure. In our case, the choice of an Exponential and a Linear structure in the Model Initialization improves the fitting a lot. In the Manual Fitting, some improvements are made while ticking the Automatic Sill Fitting button:
- an exponential structure with a range of 2.5 km,
- a first order G.C. (linear) structure with a scale factor of 1 km.


(snap. 15.9-4)


(snap. 15.9-5)

The dotted lines on the cross-variogram show the envelope of maximal correlation allowed from
the simple variograms. Click on Run (Save).
Printing the model in the File / Parameter Files window allows a better understanding of the way
these two basic structures (only) have been used in order to fit simultaneously the three views, in
the framework of the linear coregionalization model, with their sills as the only degrees of freedom.
Model : Covariance part
=======================
Number of variables = 2
- Variable 1 : Pb
- Variable 2 : Zn

Experimental Covariance Matrix:

     |  Pb   |  Zn   |
|----|-------|-------|
| Pb | 2.779 | 1.337 |
| Zn | 1.337 | 2.881 |

Experimental Correlation Matrix:

     |  Pb   |  Zn   |
|----|-------|-------|
| Pb | 1.000 | 0.473 |
| Zn | 0.473 | 1.000 |

Number of basic structures = 2

S1 : Exponential - Scale = 2.50km

Variance-Covariance matrix :
             Variable 1   Variable 2
Variable 1       1.1347       0.5334
Variable 2       0.5334       1.8167

Regionalized correlation coefficient :
             Variable 1   Variable 2
Variable 1       1.0000       0.3715
Variable 2       0.3715       1.0000

Decomposition into factors (normalized eigen vectors) :
             Variable 1   Variable 2
Factor 1         0.6975       1.2737
Factor 2         0.8051      -0.4409

Decomposition into eigen vectors (whose variance is eigen values) :
             Variable 1   Variable 2   Eigen Val.   Var. Perc.
Factor 1         0.4803       0.8771       2.1087        71.45
Factor 2         0.8771      -0.4803       0.8426        28.55

S2 : Order-1 G.C. - Scale = 1.00km

Variance-Covariance matrix :
             Variable 1   Variable 2
Variable 1       0.2562       0.0927
Variable 2       0.0927       0.1224

Regionalized correlation coefficient :
             Variable 1   Variable 2
Variable 1       1.0000       0.5234
Variable 2       0.5234       1.0000

Decomposition into factors (normalized eigen vectors) :
             Variable 1   Variable 2
Factor 1         0.4906       0.2508
Factor 2        -0.1246       0.2438

Decomposition into eigen vectors (whose variance is eigen values) :
             Variable 1   Variable 2   Eigen Val.   Var. Perc.
Factor 1         0.8904       0.4552       0.3036        80.20
Factor 2        -0.4552       0.8904       0.0750        19.80

Model : Drift part
==================
Number of drift functions = 1
- Universality condition

The first basic structure (exponential) is used with a sill of:
- 1.1347 in the Pb variogram,
- 1.8167 in the Zn variogram,
- 0.5334 in the cross-variogram.
The second basic structure (linear) is used with a coefficient (slope) of:
- 0.2562 in the Pb variogram,
- 0.1224 in the Zn variogram,
- 0.0927 in the cross-variogram.

Advanced explanations about these coefficients are available in the Isatis Technical References, that
can be accessed in PDF format from the On-Line documentation: chapter "Structure Identification
in the Intrinsic Case", paragraph "Printout of the Linear Model of Coregionalization". The Drift
part of the Model (composed only of the Universality Condition) recalls that the interpolation step
will be performed in Ordinary Cokriging by default.
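In equations, the linear model of coregionalization writes every simple and cross variogram as a combination of the same basic structures, each weighted by a matrix of coregionalization sills that must be positive semi-definite. With the values fitted above (indices 1 = Pb, 2 = Zn):

\gamma_{ij}(h) \;=\; \sum_{k} b^{k}_{ij}\,\gamma_{k}(h),
\qquad
B^{1} = \begin{pmatrix} 1.1347 & 0.5334 \\ 0.5334 & 1.8167 \end{pmatrix}
\ \text{(exponential)},
\qquad
B^{2} = \begin{pmatrix} 0.2562 & 0.0927 \\ 0.0927 & 0.1224 \end{pmatrix}
\ \text{(linear)}.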
The Interpolate / Estimation / (Co-)Kriging procedure is used again to perform the cokriging step in order to estimate both variables. The difference is that we must now:
- Choose the two variables Zn and Pb among the variables of the Input Data File (without any selection).
- Name four variables to store the cokriging results:
  - Estimation for Zn (Cokriging)
  - St Dev for Zn (Cokriging)
  - Estimation for Pb (Cokriging)
  - St Dev for Pb (Cokriging)
- Choose the file containing the bivariate model Pollution Zn-Pb.
The neighborhood is unchanged, bearing in mind that the kriging system for each target grid node will therefore have twice as many rows and columns (hence be four times bigger) as in the monovariate kriging case.
The number of grid nodes that fulfill the neighborhood constraints is still 1021 out of 1225.
Use the Zn and Zn Stdev display templates to easily display the cokriging results: for each template, you just need to specify, in the Edit window of your grid items (Raster and Isoline), that you want to display the Cokriging variables instead of the previous Kriging results.


(fig. 15.9-6)


(fig. 15.9-7)

You can compare this Zn estimate with the one obtained using the univariate kriging approach. To
analyze the difference between the Kriging and Cokriging estimates, we use the File / Calculator
facility to create a variable called Difference, equal to the absolute value of the difference between
the estimates.


(snap. 15.9-6)

This difference variable is now displayed using a raster representation with a color scale from 0 to 5
by steps of 0.5.


(fig. 15.9-8)

The main differences appear in the northern part of the field:
- where the third high value (in Pb) is located: the influence of this Pb value is amplified through the correlation in the model, as no corresponding Zn data is available here;
- the second area of high difference is the zone containing the first two high values, which denotes that the link between Zn and Pb is not simply arithmetic.
This leads us to the following remark, which will be illustrated in the next paragraph: even when the information of both variables is present for all the samples (isotopy), cokriging carries more information than kriging. Of course, this is even more visible in the case where the estimated variable is scarcely sampled (heterotopy).


15.10 Case of Self-krigeability


This paragraph is dedicated to advanced users. It aims at illustrating the case where kriging and
cokriging give similar results. For this purpose, let us return to the cokriging panel and use the Test
facility on the grid node [11, 21]. By requesting the print of weights in the graphic menu bar, we
obtain the following information:
Display of the (Co-) Kriging weights
====================================

Weights for option : Punctual

Kriging variable V1
Rank  Sample #     X         Y          Vi          Lambda V1     Lambda V2
  1      67     123.430   499.081   7.1000e+00    1.0991e-01   -7.2616e-03
  2      68     125.590   497.970   6.9000e+00    5.3149e-02    3.6183e-03
  3      74     125.175   500.287   4.5000e+00    4.6418e-02   -2.6317e-04
  4      75     125.696   500.365   9.0000e+00    3.6423e-02    2.8239e-03
  5      18     122.615   505.190   6.2000e+00    8.6738e-02   -8.4710e-04
  6      10     119.201   506.372   6.3000e+00    5.8343e-02   -1.4425e-03
  7      26     118.513   506.580   8.3000e+00    5.3459e-02   -7.0186e-04
  8      12     118.997   507.992   6.0000e+00    4.8487e-02    4.0663e-03
  9      91     113.621   500.780   4.5000e+00    6.8475e-02   -2.6696e-03
 10      92     113.313   501.368   3.1600e+01    6.5210e-02    2.7918e-04
 11      29     113.433   498.943   2.4800e+01    7.5750e-02   -1.3067e-03
 12      30     112.929   497.597   6.0000e+00    6.8333e-02    1.8800e-03
 13      53     118.533   494.173   4.7000e+00    6.8974e-02   -3.3752e-03
 14      52     117.750   492.960   5.3000e+00    4.6189e-02    2.5384e-03
 15      66     121.842   494.336   6.6000e+00    7.4202e-02    2.5598e-04
 16      54     119.095   493.144   4.1000e+00    3.9940e-02    2.4057e-03
Sum of weights for Kriging V1                     1.0000e+00    3.0531e-16

Kriging variable V2
Rank  Sample #     X         Y          Vi          Lambda V1     Lambda V2
  1      67     123.430   499.081   2.9400e+00    2.3762e-02    1.8519e-01
  2      68     125.590   497.970   2.1900e+00   -1.1840e-02    1.5637e-02
  3      74     125.175   500.287   1.2100e+01    8.6119e-04    4.9146e-02
  4      75     125.696   500.365   2.8800e+00   -9.2407e-03    7.1469e-03
  5      18     122.615   505.190   3.7100e+00    2.7720e-03    9.5520e-02
  6      10     119.201   506.372   4.3000e+00    4.7203e-03    7.3297e-02
  7      26     118.513   506.580   4.6000e+00    2.2967e-03    6.0735e-02
  8      12     118.997   507.992   2.2400e+00   -1.3306e-02    6.3311e-03
  9      91     113.621   500.780   2.7900e+00    8.7359e-03    9.6151e-02
 10      92     113.313   501.368   2.7600e+01   -9.1356e-04    6.2315e-02
 11      29     113.433   498.943   2.5500e+01    4.2759e-03    8.9297e-02
 12      30     112.929   497.597   4.6100e+00   -6.1522e-03    4.8842e-02
 13      53     118.533   494.173   2.1800e+00    1.1045e-02    1.0397e-01
 14      52     117.750   492.960   2.3000e+00   -8.3066e-03    1.9873e-02
 15      66     121.842   494.336   1.9000e+00   -8.3765e-04    7.1548e-02
 16      54     119.095   493.144   1.7900e+00   -7.8722e-03    1.5000e-02
Sum of weights for Kriging V2                    -1.3878e-17    1.0000e+00

Variable V1
  Estimate = 9.3118e+00
  Variance = 2.5244e+00
  Std. Dev = 1.5888e+00
Variable V2
  Estimate = 7.0693e+00
  Variance = 2.3700e+00
  Std. Dev = 1.5395e+00

In this printout, we can read the weights for the estimation of Zn (column Lambda V1) and for the estimation of Pb (column Lambda V2), applied to the Zn information (first set of rows) and to the Pb information (second set of rows). We can check the impact of the universality condition, which implies that, when estimating a main variable, the weights attached to the main information must add up to 1, while the weights attached to the secondary variable must add up to zero. Be careful: the amplitude of the weights on the secondary variable may be misleading in general, since it depends on the ratio of the standard deviations of the main and secondary variables, and in particular on their respective units.
Using the model composed of two nested basic structures described previously, we can check that
the weights of the secondary variable are not null: hence the cokriging result differs from the
kriging one.
This property vanishes for all variables in the particular model of intrinsic correlation, where all
the simple and cross variograms are proportional. This is obviously not the case here, as the ratios
between the coefficients of each basic structure are:

- for the Pb variogram: 1.1347/0.2562 ~ 4.4
- for the Zn variogram: 1.8167/0.1224 ~ 15
- for the Pb/Zn cross-variogram: 0.5334/0.0927 ~ 5.8

A small numerical check of this proportionality condition is sketched below.
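The check can be run outside Isatis, for instance in Python with numpy; the sill values are those
quoted above, and intrinsic correlation requires the three ratios to coincide:

import numpy as np

# sills of the two nested basic structures:
# rows = [Pb, Zn, Pb/Zn cross], columns = [structure 1, structure 2]
sills = np.array([
    [0.2562, 1.1347],   # Pb simple variogram
    [0.1224, 1.8167],   # Zn simple variogram
    [0.0927, 0.5334],   # Pb/Zn cross-variogram
])

# intrinsic correlation requires all rows to be proportional,
# i.e. the ratio structure2/structure1 must be the same for every row
ratios = sills[:, 1] / sills[:, 0]
print(ratios)                                      # ~ [4.4, 14.8, 5.8]
print(np.allclose(ratios, ratios[0], rtol=0.05))   # False: no intrinsic correlation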

We now wish to create a model where both variograms and the cross-variogram are proportional:
this is obviously the case when the model is reduced to one single basic structure. We therefore
return to the variogram fitting stage, use the exponential basic structure alone with a range of
5.3 km, switch on the Automatic Sill Fitting button and save the Model in the file Pollution Zn-Pb
(one structure).
When estimating the grid node [11, 21] by cokriging with this new model Pollution Zn-Pb (one
structure) and the same neighborhood Pollution as before, we can ask for the print of the weights
and obtain the following result:
Display of the (Co-) Kriging weights
====================================

Weights for option : Punctual

Kriging variable V1
Rank Sample #        X        Y          Vi   Lambda V1    Lambda V2
   1       67  123.430  499.081  7.1000e+00  1.2538e-01   1.0180e-17
   2       68  125.590  497.970  6.9000e+00  5.4894e-02   1.0879e-17
   3       74  125.175  500.287  4.5000e+00  3.1820e-02  -5.8418e-18
   4       75  125.696  500.365  9.0000e+00  4.2451e-02   3.1484e-17
   5       18  122.615  505.190  6.2000e+00  9.8050e-02   1.0205e-17
   6       10  119.201  506.372  6.3000e+00  5.0194e-02  -5.1775e-18
   7       26  118.513  506.580  8.3000e+00  4.8354e-02   2.2877e-17
   8       12  118.997  507.992  6.0000e+00  5.4225e-02   9.6364e-18
   9       91  113.621  500.780  4.5000e+00  6.1411e-02   1.5274e-17
  10       92  113.313  501.368  3.1600e+01  6.0364e-02   2.4111e-18
  11       29  113.433  498.943  2.4800e+01  6.1955e-02   1.7437e-17
  12       30  112.929  497.597  6.0000e+00  6.9428e-02  -3.1694e-18
  13       53  118.533  494.173  4.7000e+00  6.8763e-02   5.0959e-18
  14       52  117.750  492.960  5.3000e+00  5.2616e-02   6.3284e-18
  15       66  121.842  494.336  6.6000e+00  8.7005e-02   2.0962e-17
  16       54  119.095  493.144  4.1000e+00  3.3090e-02   9.7171e-18
Sum of weights for Kriging V1             1.0000e+00   1.5830e-16

Kriging variable V2
Rank Sample #        X        Y          Vi   Lambda V1    Lambda V2
   1       67  123.430  499.081  2.9400e+00  2.5873e-17   1.2538e-01
   2       68  125.590  497.970  2.1900e+00  1.3824e-17   5.4894e-02
   3       74  125.175  500.287  1.2100e+01  0.0000e+00   3.1820e-02
   4       75  125.696  500.365  2.8800e+00  3.6371e-18   4.2451e-02
   5       18  122.615  505.190  3.7100e+00  0.0000e+00   9.8050e-02
   6       10  119.201  506.372  4.3000e+00 -1.9738e-17   5.0194e-02
   7       26  118.513  506.580  4.6000e+00  4.6513e-17   4.8354e-02
   8       12  118.997  507.992  2.2400e+00 -8.1636e-18   5.4225e-02
   9       91  113.621  500.780  2.7900e+00 -2.5879e-17   6.1411e-02
  10       92  113.313  501.368  2.7600e+01  5.2087e-17   6.0364e-02
  11       29  113.433  498.943  2.5500e+01  3.6930e-17   6.1955e-02
  12       30  112.929  497.597  4.6100e+00 -2.0138e-17   6.9428e-02
  13       53  118.533  494.173  2.1800e+00  3.2378e-17   6.8763e-02
  14       52  117.750  492.960  2.3000e+00 -4.0209e-18   5.2616e-02
  15       66  121.842  494.336  1.9000e+00 -1.3319e-17   8.7005e-02
  16       54  119.095  493.144  1.7900e+00 -1.4818e-17   3.3090e-02
Sum of weights for Kriging V2             1.0517e-16   1.0000e+00

Variable V1   Estimate = 8.8939e+00   Variance = 2.8868e+00   Std. Dev = 1.6991e+00
Variable V2   Estimate = 6.1524e+00   Variance = 2.7307e+00   Std. Dev = 1.6525e+00

This time, we can easily check that the weights attached to the secondary variable are systematically
null: the cokriging result therefore coincides with the kriging one. However, this property fails as
soon as one sample is not informed for both variables. This can be checked for the target grid node
[10, 31], where one sample (rank 6, sample #3) carries the Pb information but not the Zn, as can be
seen in the next printout.
Display of the (Co-) Kriging weights
====================================

Weights for option : Punctual

Kriging variable V1
Rank Sample #        X        Y          Vi   Lambda V1    Lambda V2
   1       97  118.522  509.148  8.3000e+00  4.9427e-01  -4.0721e-17
   2       13  118.123  508.007  4.9000e+00  7.2897e-02  -3.5871e-17
   3        1  119.504  509.335  4.6000e+00  8.7843e-02   3.7794e-17
   4       16  120.518  508.675  4.9000e+00 -1.3972e-02   6.8874e-18
   5        2  120.447  510.002  4.5000e+00  8.3165e-02  -2.0899e-17
   6        3  120.602  511.482         N/A         N/A          N/A
   7       98  117.882  513.039  5.5000e+00  1.5624e-01   6.4792e-17
   8       99  111.185  503.398  1.1500e+01  3.5916e-02   7.6359e-18
   9      100  113.336  505.146  5.5000e+00  4.3763e-02   5.3315e-18
  10       92  113.313  501.368  3.1600e+01  3.9882e-02   7.9739e-18
Sum of weights for Kriging V1             1.0000e+00   3.2922e-17

Kriging variable V2
Rank Sample #        X        Y          Vi   Lambda V1    Lambda V2
   1       97  118.522  509.148  2.4100e+00 -1.6788e-03   4.9029e-01
   2       13  118.123  508.007  2.0200e+00 -1.2601e-03   6.9908e-02
   3        1  119.504  509.335  2.1500e+00 -1.0935e-03   8.5249e-02
   4       16  120.518  508.675  2.5600e+00 -9.0639e-04  -1.6123e-02
   5        2  120.447  510.002  2.4800e+00 -1.4007e-02   4.9935e-02
   6        3  120.602  511.482  3.3200e+01  3.1473e-02   7.4666e-02
   7       98  117.882  513.039  2.0900e+00 -6.2934e-03   1.4131e-01
   8       99  111.185  503.398  3.8600e+00 -1.9791e-03   3.1221e-02
   9      100  113.336  505.146  2.1900e+00 -2.0493e-03   3.8902e-02
  10       92  113.313  501.368  2.7600e+01 -2.2059e-03   3.4649e-02
Sum of weights for Kriging V2            -2.1684e-18   1.0000e+00

Variable V1   Estimate = 8.8555e+00   Variance = 1.8112e+00   Std. Dev = 1.3458e+00
Variable V2   Estimate = 5.5249e+00   Variance = 1.7033e+00   Std. Dev = 1.3051e+00


15.11 Simulations
Kriging provides the best estimate of the variable at each grid node, but in doing so it does not
produce an image of the true variability of the phenomenon. Risk analysis usually requires
computing quantities from a model that reproduces the actual variability; in this case, advanced
geostatistical techniques such as simulations have to be used.
This is for instance the case here if we want to estimate the probability that Zn exceeds a given
threshold. Since thresholding is not a linear operator applied to the concentration, applying the
threshold to the kriged result (which is a linear estimator) can lead to an important bias; the sketch
below illustrates this point. Simulation techniques generally require a multigaussian framework:
each variable therefore has to be transformed into a normal distribution beforehand, and the
simulation results must be back-transformed to the raw distribution afterwards.
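A minimal sketch of this bias, in Python, with a purely hypothetical conditional distribution at one
grid node (the lognormal parameters are illustrative, not taken from the data):

import numpy as np

rng = np.random.default_rng(423141)      # seed value borrowed from the text
threshold = 20.0

# toy conditional distribution of Zn at one node: simulations sample the
# uncertainty, whereas kriging only returns (roughly) its mean
simulated = rng.lognormal(mean=2.5, sigma=0.8, size=1000)
kriged_mean = simulated.mean()

naive = float(kriged_mean > threshold)   # thresholding the estimate: 0 or 1
proba = (simulated > threshold).mean()   # proportion of realizations above

print(naive, proba)                      # e.g. 0.0 versus ~0.27: the risk is hidden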
In this paragraph, we focus on the Zn variable alone. The first task consists in transforming the raw
distribution into a normal one: this requires the fitting of the transformation function called the
Gaussian Anamorphosis. Using the Statistics / Gaussian Anamorphosis Modeling procedure, we
can fit and display this function and transform the raw variable Zn into a new gaussian variable Zn
(Gauss).
The first left icon in the Interactive Fitting window overlays the experimental anamorphosis with
its model expanded in terms of Hermite polynomials: this step function gives the correspondence
between each one of the sorted data (vertical axis) and the corresponding frequency quantile in the
gaussian scale (horizontal axis). A good correspondence between the experimental values and the
model is obtained by choosing an appropriate number of Hermite polynomials; by default Isatis
suggests 30 polynomials, but you can modify this number in Nb of Polynomials.
Close the Fitting Parameters window and click on the Point Anamorphosis button to save the
parameters of this anamorphosis in a new Parameter File called Pollution Zn. The number of
polynomials, the absolute interval of definition and the practical interval of definition are saved in
the Parameter File, and you may check their values in the printout.
Switch on the Gaussian Transform to save the new gaussian variable on Output as Zn (Gauss).
Three anamorphosis options are available; we recommend the Frequency Inversion method in this
case. Finally click on Run.
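The transformation itself can be sketched as follows: a minimal frequency-inversion normal-score
transform in Python (requires scipy; Isatis additionally fits the Hermite polynomial expansion,
which this sketch omits, and the Zn values below are just a few taken from the printouts above):

import numpy as np
from scipy.stats import norm

def normal_score(z):
    # rank the data and map each rank to the matching gaussian quantile
    z = np.asarray(z, dtype=float)
    n = len(z)
    ranks = z.argsort().argsort()        # 0 .. n-1
    freq = (ranks + 0.5) / n             # avoids frequencies 0 and 1
    return norm.ppf(freq)

zn = np.array([7.1, 6.9, 4.5, 9.0, 6.2, 6.3, 8.3, 6.0, 4.5, 31.6])
zn_gauss = normal_score(zn)
print(zn_gauss.mean().round(2), zn_gauss.std().round(2))   # ~0 mean, ~unit std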


(snap. 15.11-1)

(fig. 15.11-1: the Zn anamorphosis, raw Zn values on the vertical axis against gaussian values on the horizontal axis)


Using the Statistics / Exploratory Data Analysis on this new variable, we can first ask for its basic
statistics and check the correctness of the transformation: the mean is 0.00 and the variance is
0.99. We then display the histogram of this variable between -3 and 3 using 30 classes and check
that the distribution is symmetric, with a minimum of -2.42 and a maximum of 2.42. The two high
Zn values are no longer anomalous on the gaussian transform. As a consequence, the
experimental variogram is more structured. The following one is computed using the same
calculation parameters as in the univariate case: 10 lags of 1 km.

(fig. 15.11-2)

(fig. 15.11-3)


This experimental variogram is saved in a file called Pollution Zn (Gauss).

In Statistics / Variogram Fitting, we fit a model composed of two spherical basic structures, fitted
automatically. Save the model file called Pollution Zn (Gauss).

(snap. 15.11-2)

We are now able to perform the conditional simulation step using the Turning Bands method
(Interpolate / Conditional Simulations / Turning Bands). A conditional simulation produces a
grid of values having a normal distribution and obeying the model. Moreover, it honors the data
points, as it uses a conditioning step based on kriging which requires the definition of a
neighborhood; a sketch of this conditioning step is given after the parameter list below. We use the
same Pollution neighborhood parameters as in the kriging step. The additional parameters consist in:

- the name of the Macro Variable: each simulation is stored in this Macro Variable with an index
  attached;
- the number of simulations: 20 in this exercise;
- the starting index for numbering the simulations: 1 in this exercise;
- the Gaussian back transformation, performed using the anamorphosis function Pollution Zn;
- the seed used for the random number generator: 423141 by default. This seed allows you to
  perform series of simulations in several steps: each step will differ from the previous one if
  the seed is modified.

The final parameters are specific to the simulation technique. When using the Turning Bands
method, we simply need to specify the number of bands: a rule of thumb is to enter a number much
larger than the count of rows or columns in the grid, and smaller than the total number of grid
nodes; 100 bands are chosen in our exercise.
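The conditioning step can be sketched independently of the band generation. Assuming gaussian
data and simple kriging, the conditional simulation is the non-conditional one corrected by the
kriging of its own error at the data points, z_cs(x) = z_s(x) + [z*(x) - z_s*(x)]. The 1D example
below uses hypothetical locations, values and covariance, and a direct multivariate-normal draw
stands in for the turning-bands generator; it only checks that the result honors the data:

import numpy as np

rng = np.random.default_rng(0)
cov = lambda h: np.exp(-np.abs(h) / 5.3)      # hypothetical exponential covariance

x_grid = np.linspace(0.0, 12.0, 121)          # grid with a 0.1 step
x_data = np.array([1.0, 4.0, 9.0])            # data locations = nodes 10, 40, 90
z_data = np.array([0.6, -0.2, 1.1])           # hypothetical gaussian data values

# simple kriging weights from every data point to every grid node
C = cov(x_data[:, None] - x_data[None, :])
c0 = cov(x_grid[:, None] - x_data[None, :])
W = np.linalg.solve(C, c0.T).T                # shape (n_grid, n_data)

# non-conditional gaussian simulation over the grid
C_g = cov(x_grid[:, None] - x_grid[None, :])
z_s = rng.multivariate_normal(np.zeros(len(x_grid)), C_g)

# conditioning by kriging the data mismatch
z_cs = z_s + W @ (z_data - z_s[[10, 40, 90]])
print(np.allclose(z_cs[[10, 40, 90]], z_data))   # True: the data are honored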


(snap. 15.11-3)

The results consist of 20 realizations stored in one Macro Variable in the Grid Output File. The
clear differences between several realizations are illustrated in the next graphic.

(fig. 15.11-4)

The Tools / Simulation Post Processing panel provides a procedure for the post processing of a
Macro Variable. Considering the 20 conditional simulations, we ask the procedure to perform
sequentially the following tasks (a sketch of both computations follows the list):

- calculation of the mean of the 20 simulations,
- determination of the cutoff maps giving the probability that Zn exceeds different thresholds
  (20%, 25%, 30% and 35%).
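Both tasks amount to element-wise statistics on the stack of realizations; a minimal sketch in
Python, with a random stand-in for the 20 back-transformed realizations:

import numpy as np

# sims: stack of back-transformed realizations, shape (n_sim, ny, nx);
# lognormal noise stands in for the 20 Isatis realizations here
rng = np.random.default_rng(1)
sims = rng.lognormal(2.0, 0.7, size=(20, 50, 60))

mean_map = sims.mean(axis=0)                         # mean of the 20 simulations

thresholds = [20.0, 25.0, 30.0, 35.0]
proba_maps = {t: (sims > t).mean(axis=0) for t in thresholds}

# the probability can only decrease as the cutoff increases
assert np.all(proba_maps[35.0] <= proba_maps[20.0])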

(snap. 15.11-4)

(snap. 15.11-5)


(snap. 15.11-6)

(snap. 15.11-7)

The map corresponding to the mean of the 20 simulations in the raw scale is displayed with the
same color scale as the estimated maps. The mean of a large number of simulations converges
towards the kriging estimate.

(fig. 15.11-5: Simulation Zn Mean, map of the mean of the 20 simulations in the Zn raw scale)

The following graphics contain the probability maps corresponding to the cutoffs 20% and 30%.
As expected, the probability decreases when the cutoff increases.


(fig. 15.11-6: Iso-Proba Zn maps for the 20% and 30% cutoffs)


16.Young Fish Survey


This case study is based on a trawl survey carried out in the North Sea by
research laboratories in order to evaluate fish stocks. It has been
kindly provided by the FRS Marine Laboratory in Aberdeen, UK. It has
also been used as a case study in the book Geostatistics for Estimating
Fish Abundance by J. Rivoirard, K.G. Foote, P. Fernandes and N. Bez.
This book will serve as a reference for comparison in this case study.

The case study illustrates the use of Polygons, which serve either to
delineate the subpart of a regular grid where the local estimation must
take place, or to limit the area on which a global estimation has to be
performed. It is recommended to read the Dealing With Polygons
chapter of the Beginner's Guide prior to running this case study, in
order to be familiar with this facility.

Last update: Isatis version 2014


16.1 Introduction
As stated in the reference book, several research laboratories from the countries surrounding the
North Sea (ICES 1997) joined their efforts in order to evaluate the fish stocks. The procedure
consists in surveys carried out at the same period of each year (February), where the indices of
abundance at age for different species of fish are measured. In this case study, we will concentrate
on the 1991 survey covering the North Sea to the east of Scotland, and on the haddock of the first
category of age (less than 21 cm).
The survey was carried out using a "Grande Ouverture Verticale" (GOV) trawl: a single 60-minute
tow was conducted within each ICES statistical rectangle of the survey area. The dimensions of
these rectangles are a degree of latitude by half a degree of longitude; their exact dimensions
therefore depend on the latitude, and a general conversion rule is applied for transforming
longitude, based on the cosine of the latitude (i.e. 55N).
The initial information provided by the survey consists of fish numbers (by species, by length and
age) and certain fishing gear parameters that enable a standard fish density unit to be obtained.
These parameters include the distance towed and the wingend spread.
The fish catch (in numbers) is converted to areal fish density (numbers per nmil2) by dividing it by
the swept area, i.e. the product of the distance towed and the wingend distance (swept area method,
Gunderson, 1993).

Note - Some numerical results can differ from the reference book.

16.1.1 Loading information

First, we need to create a new study using the Data File Manager; then we have to set the
Preferences / Study Environment: all units but the Z unit in Nautical Miles (Input-Output Length
Options and Graphical Axis Units), and the Z unit in meters.


(snap. 16.1-1)

The data are provided in the ASCII file fish_survey.hd, which includes the classical Isatis header. It
contains the following information:

- X and Y refer to the midpoint of the haul start and end positions, converted to an absolute
  measure in nmil;
- Haddock 1 gives the haddock juveniles catch number;
- Dist towed and Wingend refer to the fishing gear parameters described above. Note that the
  distance towed is given in nmil whereas the wingend is provided in meters.

The ASCII files are located in the Isatis installation directory/Datasets/Young_Fish.

The information is loaded in the file Survey within a new directory North Sea.


(snap. 16.1-2)

The next operation consists in calculating the areal fish density (using the File / Calculator facility),
stored in a new variable called Fish areal density. This variable is simply obtained by dividing the
initial fish catch in numbers (Haddock 1) by the product of the gear parameters (Dist towed and
Wingend), once the last parameter (Wingend) has been converted from meters to nautical miles
(divided by 1852). A sketch of this computation is given below.
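The same computation in Python, with three hypothetical sample records:

import numpy as np

NM_PER_METER = 1.0 / 1852.0                    # conversion used in the text

haddock_1  = np.array([120.0, 35.0, 0.0])      # hypothetical catches (numbers)
dist_towed = np.array([1.9, 2.1, 2.0])         # nmil
wingend_m  = np.array([18.5, 19.2, 18.0])      # meters

# areal density (numbers / nmil^2) = catch / (distance towed * wingend in nmil)
fish_areal_density = haddock_1 / (dist_towed * wingend_m * NM_PER_METER)
print(fish_areal_density.round(0))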


(snap. 16.1-3)


16.1.2 Statistics and displays

We first check the basic statistics on the variable Fish areal density (to be compared with the
values reported in Table 4.2.1 of the reference book):

- Count of samples: 59
- Maximum value: 82327
- Mean: 13772
- Standard Deviation: 19646
- Coefficient of Variation: 1.43

Using the Statistics / Exploratory Data Analysis facilities, the next figure shows the spread of the
data using a representation where the symbols are proportional to the fish density.

(fig. 16.1-1: base map of Fish areal density with proportional symbols)

The histogram, computed with 10 classes between 0 and 90000, shows a positive skewness with a
large number of zero (or small) values (to be compared to the histogram in Fig 4.2.3 of the
reference book):

(fig. 16.1-2: histogram of Fish areal density; Nb Samples: 59, Minimum: 0.00, Maximum: 82327.34, Mean: 13772.24, Std. Dev.: 19645.56)

The next task consists in calculating the experimental variogram. The variogram is computed for 15
lags of 15 nmil with a 7.5 nmil tolerance, assuming isotropy. The next figure shows the
experimental variogram together with the count of pairs obtained for each lag. The variogram is
saved in a Standard Parameter File called Fish density.

(fig. 16.1-3: experimental variogram of Fish areal density, with the number of pairs per lag)

We use the Statistics / Variogram Fitting procedure to fit an isotropic model, which will finally be
stored in the Standard Parameter File also called Fish density.


To remain compatible with the reference book, we define a model composed of a nugget effect and
a spherical basic structure (with a range of 55 nmil) and use the Automatic Sill Fitting option to get
the optimal values for the sills, by minimizing the distance between the model and the values of the
experimental variogram, cumulated over all the calculated lags. The same weighting function is
applied for each lag of the experimental variogram, accounting, as sketched below, for:

- the number of pairs,
- the inverse of the average distance of the lag.
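Such a weighted sill fit can be sketched in Python as an ordinary weighted least-squares problem;
the experimental values below are hypothetical, only the range (55 nmil) comes from the text, and a
real fit would also constrain the sills to be non-negative:

import numpy as np

def spherical(h, a):
    # unit-sill spherical variogram
    r = np.minimum(h / a, 1.0)
    return 1.5 * r - 0.5 * r**3

# hypothetical experimental variogram: lag distances, values, pair counts
h     = np.array([15., 30., 45., 60., 75., 90.])
gexp  = np.array([2.1e8, 3.4e8, 4.2e8, 4.4e8, 4.5e8, 4.4e8])
pairs = np.array([32., 90., 125., 130., 120., 110.])

# basis: nugget (=1 everywhere) and a spherical structure of range 55 nmil
G = np.column_stack([np.ones_like(h), spherical(h, 55.0)])

# weights favor well-informed short lags: number of pairs / mean distance
w = pairs / h
A = G * np.sqrt(w)[:, None]
b = gexp * np.sqrt(w)
sills, *_ = np.linalg.lstsq(A, b, rcond=None)
print(sills)    # [nugget sill, spherical sill]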

The resulting model has the following characteristics:

- Nugget effect: Sill = 0.48e+08,
- Spherical basic structure with a range of 55 nmil and a sill of 3.98e+08.

Both the experimental variogram and the model are presented in the following figure.

(fig. 16.1-4: experimental variogram of Fish areal density and the fitted model)

16.1.3 Loading the polygons

The next essential task for this study is to define the area of interest, both for mapping purposes and
for integration. This contour is loaded as a 2D polygon. The interesting feature here is that this
polygon must include the domain of the North Sea survey, extending to the east of Scotland; it must
also take care to exclude the Shetland and the Orkney islands.
In the terminology used by Isatis, the whole North Sea will be considered as a single polygon,
composed of one contour defining the periphery of the field, and a hole that corresponds to the
Shetland islands (the Orkney islands are partially visible at the south-west side of the polygon).
The Polygon is contained in an ASCII file, called north_sea.hd, whose header describes the
contents. All the vertices of the polygon are already converted to absolute measures in nmil, as for
the trawl survey data. The next paragraph illustrates the contents of this file. One can first notice the
double nested hierarchy:

- the polygon level, which corresponds to the lines starting with the ** symbol;
- the contour level, which corresponds to the lines starting with the * symbol. This level
  contains an additional flag indicating whether the contour stands for a hole or not.

#
# Polygons Dimension = 2D
#
# polygon_field = 1 , type = name
# polygon_field = 2 , type = color_R
# polygon_field = 3 , type = color_G
# polygon_field = 4 , type = color_B
# polygon_field = 5 , type = pattern
#
# contour_field = 1 , type = hole
# contour_field = 2 , type = name
#
# vertex_field = 1 , type = x , unit = nmil
# vertex_field = 2 , type = y , unit = nmil
#
** North Sea 125 190 255
* 0 East of Scotland
      -55.17    3334.26
      -65.59    3342.84
      .../...
      172.07    3330.00
      -54.55    3330.00
* 1
      -57.30    3627.90
      -51.11    3628.50
      .../...
      -45.94    3630.12
      -45.63    3639.18

This polygon is read using the File / Polygons Editor facility. This application stands as a graphic
window with a large Application Menu. We must first choose the New Polygon File option of the
Application menu to create a file where the 2D polygon attributes (vertices, name and color) will be
stored: the file is called Polygon in the directory North Sea.

(snap. 16.1-4)

The next task consists in loading the contents of the ASCII Polygon File using the ASCII Import
facility in the Application Menu.


(snap. 16.1-5)

The polygon (with its two contours) now appears in the graphic window. We can easily distinguish
the eastern coast of Scotland as well as the two sets of islands.


(snap. 16.1-6)

The final action consists in performing the SAVE and RUN task in order to store the polygon file in
the general data architecture of Isatis. To check this file, we can simply use the Data File Manager
utility, which provides basic information:

- the file belongs to a new type, called 2D-Polygons (which is very similar to the Points 2D
  structure). The information button used on this file simply recalls that it contains a single
  polygon (constituted of 123 vertices);
- the file contains only one sample in our case and several variables (created automatically):
  - the traditional variable Sample Number gives the rank of the sample in the file;
  - the coordinates X and Y give the location of the anchor where the label of the polygon is
    attached. By default, the label is located at the gravity center of the polygon;
  - the NAME corresponds to the label given to each polygon and printed at the anchor location
    (North Sea in our case);
  - the SURFACE measures the actual surface of the polygon. Note that this surface is
    calculated exactly, taking the hole into account, and therefore it does not require any
    discretization grid (see the sketch below). In our case, the polygon surface is evaluated at
    63949 nmil2.
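The exact computation is a signed-area (shoelace) calculation, the hole surface being subtracted
from the peripheral one. A minimal sketch in Python, with strongly simplified hypothetical contours
(the real polygon has 123 vertices):

def shoelace_area(vertices):
    # polygon area by the shoelace formula (absolute value returned)
    s = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

# hypothetical contours in nmil: outer periphery minus the hole
outer = [(-55.17, 3334.26), (172.07, 3330.00), (150.0, 3640.0), (-65.59, 3500.0)]
hole  = [(-57.30, 3627.90), (-51.11, 3628.50), (-45.94, 3630.12), (-45.63, 3639.18)]

surface = shoelace_area(outer) - shoelace_area(hole)
print(round(surface, 1))   # exact, no discretization grid needed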


16.2 Mapping
This part corresponds to the traditional estimation step, carried out on the nodes of a regular grid
covering the whole area; it allows a graphic representation of the fish density and its local
variations, hence the name of local estimation.

16.2.1 Grid definition

The first task is obviously to define the grid that will serve as the support of the local estimation.
This is realized using the File / Create Grid File facility, using the trawl survey as control
information. The new 2D grid is stored in the file called Grid in the directory North Sea.

(snap. 16.2-1)

Note - The resolution here is twice finer (in each direction) than in the reference book.


16.2.2 Selection on the grid

The local estimation must obviously be performed only in the subpart of the grid which
corresponds to the North Sea, i.e. only at the grid nodes which belong to the polygon. We need to
perform a selection based on the polygon stored in the Polygon File.
This is realized using the File / Selection / From Polygons facility. It offers the possibility of
applying, on the samples of any type of file, a selection based on the fact that they are located either
inside one polygon or outside all polygons.
We will apply this procedure on the nodes of the Grid file in order to retain only the nodes lying
within the polygon (called North Sea) of the Polygon File (called Polygon). Note that the
procedure will not select the nodes located within the hole.

(snap. 16.2-2)

The selection, stored as a new variable of the Grid file, will be called North Sea. The procedure
also tells us that, out of the 4875 grid nodes, only 2610 belong to the polygon. This number gives a
second, coarse estimation of the surface of the polygon, obtained by multiplying it by the
elementary cell surface (5 x 5 nmil2): i.e. 65250 nmil2. The difference between this number and the
exact surface, whose value is 63949 nmil2, comes from the discretization.


16.2.3 Local estimation

The local estimation is performed using the Interpolate / Estimation / (Co-)Kriging facility. For
each node of the output grid, the kriging procedure uses all the trawl data (unique neighborhood):
this neighborhood information is stored in a Standard Parameter File called Unique. Pay attention
to the fact that a new neighborhood is by default of moving type; it is therefore compulsory to Edit
the neighborhood and switch its type from Moving to Unique. The output grid is reduced by
considering only the nodes included in the North Sea selection.

(snap. 16.2-3)

The procedure creates two variables defined on the active grid nodes: the Estimation, which
contains the estimation map, and the St. deviation, which gives the square root of the estimation
variance. The two following maps are produced by overlaying various Display facilities: a raster, a
basemap using proportional symbols and the polygon.


(fig. 16.2-1: Estimation map of the fish areal density over the North Sea selection)

(fig. 16.2-2: St. deviation map of the fish areal density over the North Sea selection)

16.3 Global Estimation

The polygon is now used to calculate the total abundance of fish and the corresponding variance;
this estimation is known as "global", in opposition to the previous "local" estimation.
These calculations can be performed in different ways (a numerical check follows the list):

- the unweighted estimation, where the mean fish areal density (calculated from the 59 samples)
  is considered as representative of the variable over the whole polygon. The global estimation
  of the abundance is therefore obtained by raising the arithmetic mean fish density (13772 fish
  per nmil2) to the area of the polygon (63949 nmil2), for a result of 881 millions;
- the unweighted estimation variance, expressed through the coefficient of variation
  CViid = s / (m √N), with s the standard deviation of the sample values, m their mean and N the
  number of samples; this coefficient ignores the spatial structure: 18.6%;
- the weighted estimation, through kriging, where the samples are weighted optimally with
  respect to the appropriate variogram model.
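A minimal numerical check of the two unweighted results, in Python, using the statistics quoted
above:

import math

n, mean, std = 59, 13772.0, 19646.0
surface = 63949.0                        # nmil^2, exact polygon surface

abundance = mean * surface               # arithmetic mean raised to the area
cv_iid = std / (mean * math.sqrt(n))     # ignores the spatial structure

print(f"{abundance:.3e}")                # ~ 8.81e+08, i.e. 881 millions
print(f"{cv_iid:.1%}")                   # ~ 18.6%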

16.3.1 Discretization
The global estimation requires each polygon to be associated with an internal discretization grid.
The parameters of this discretization grid can be chosen in the File / Polygons Editor facility (see
the Polygons section of the Beginner's Guide).
Once the polygon file of interest has been defined (Polygon), click with the right button of the
mouse in the graphic area and ask for the Edit Polygons option. Then you have to select the North
Sea polygon. The menu of the graphic area is now turned into the Polygons Edit Menu, which
offers new options, including Edit Discretization.
A panel appears where you can define the discretization grid interactively. It is strongly
recommended to use the graphic control, in particular to show the contributing nodes. The
discretization grid always covers the whole polygon; the union of the contributing cells also covers
some area which does not belong to the polygon, and this additional area should be as small as
possible. At the bottom of the panel, a statement calculates this added surface interactively
(expressed as a percentage of the actual polygon surface).

Note - It is possible to define a discretization grid more easily using the Application / Discretize
facility, but we choose this option here to illustrate the way to define each grid parameter exactly.

You may now choose the parameters of the grid, by selecting:

- the location of its origin,
- the count of nodes along each axis,
- the mesh size,
- the rotation angle that you wish to apply to the grid: this option is particularly adapted to
  elongated grids when the elongation direction does not match one of the main axes of the
  system.

In this case study, we will test the impact of the discretization on the global estimation results. In
this first step, we choose a discretization grid with the following characteristics:

- Grid origin: (-148, 3321)
- Nodes number: 35 x 50
- Mesh size: 10 x 10

(snap. 16.3-1)

which leads to an added surface of around 13.8% of the exact polygonal surface.
In order to store the characteristics of this discretization grid, you simply need to run the SAVE and
RUN option of the Application Menu.

16.3.2 Global Estimation

This weighted estimation is precisely the aim of the Interpolate / Estimation / Polygon Kriging
facility, which performs the global estimation over a set of polygons.
This feature is used here to calculate the global estimation of the fish areal density integrated
over the single polygon (North Sea) contained in the Polygon file. The Fish density model is used.


The results are stored in the Polygon file using the variables Estimation, for the estimated fish
density over the polygon, and St. deviation, for the square root of the variance of this estimate.
This procedure requires the definition of the Neighborhood which will be taken into account for
selecting the data points involved in the estimation of each polygon. These parameters are saved in
the Standard Parameter File called Polygon.

(snap. 16.3-2)

The characteristics of this neighborhood are specific to the global estimation performed on polygons (hence the Neighborhood Type). In particular, it gives the possibility of selecting all the data
points which lie within the polygon, possibly extended by a rotated ellipsoid, and up to a maximum
count of points. Here the ellipsoid dimensions are set to zero and all the data strictly included within
the polygon are used. No limitation is imposed on the count of data.


(snap. 16.3-3)

To check these results, we must use the File / Print facility, which produces the contents of the
selected variables for the whole set of samples. Used on the Polygon file and for the two variables
described previously, this feature produces the following results:

Estimated value (z*) = 14417.64
Standard deviation (σE) = 2068.35
Surface = 63949.07 nmil2

These results lead to an estimation of the total abundance of 922 millions and a coefficient of
variation CVgeo = σE / z* of 14.3%.

As stated in the reference manual, the difference in abundance between the unweighted and kriged
estimates is small (4.7%). The difference in CVs is more marked (23.1%), the kriged version being
lower than the unweighted one.

16.3.3 Finer discretization

This case study gives us the opportunity of testing the influence of the discretization grid on the
results of the global estimation.
We return to the File / Polygons Editor facility to modify the mesh of the discretization grid to
5 x 5 nmil. The count of nodes becomes 69 along X and 80 along Y, for an error on the surface of
7%. We can continue even further, down to a 1 x 1 nmil grid.

Note - The computing time used for the estimation is proportional to the count of discretization
nodes which belong to the polygon. As far as the standard deviation is concerned, the time is
proportional to the square of this count.

The following table summarizes the results for the 10, 5 and 1 nmil side cells.
Discretization    10 nmil     5 nmil     1 nmil
Estimation       14417.64   14405.62   14402.56
St. dev.          2068.35    2049.55    2045.64
CV                  14.3%      14.2%      14.2%

The gain in accuracy for both the abundance and the coefficient of variation is not important
enough with regard to the increase in computing time. A reasonable balance corresponds to the first
trial with 10 nmil discretization grid mesh.


17.Acoustic Survey
This case study is based on an acoustic survey carried out in the northern
North Sea (western half of ICES division IVa) in July 1993 in order to
evaluate the total biomass, total numbers and numbers at age of the
North Sea herring stock. It has been kindly provided by the Herring
Assessment Group of the International Council for the Exploration of
the Sea (ICES). It has also been used as a case study in the book
Geostatistics for Estimating Fish Abundance by J. Rivoirard, K.G.
Foote, P. Fernandes and N. Bez. This book will serve as a reference for
comparison in this case study.

The case study illustrates the use of Polygons to limit the area on
which a global estimation has to be performed. The aim of this study is
to carry out a global estimation with a large number of data, which
requires the domain to be subdivided into strata (polygons). The main
issue lies in the way the results per stratum have to be combined, both
for the estimation and for its variance.

Last update: Isatis version 2014


17.1 Introduction
As stated in the reference book, this data set has been taken from the 6-year acoustic survey of the
Scottish North Sea. The 1993 data constitute 938 values of an absolute abundance index, taken at
regular points along the survey cruise track. This cruise track is oriented along systematic parallel
transects spaced 15 nautical miles (nmil) apart, running east-west and vice versa, progressing in a
northerly direction on the east of the Orkney and Shetland Islands and southward down the west
side. The acoustic index is proportional to the average fish density.
The position of an acoustic index was taken every 2.5 nmil, initially recorded in a longitude and
latitude global positioning system and later converted into nmil using a simple transformation of
longitude based on the cosine of the latitude.

17.1.1 Loading the data

First, we have to set the Study Environment: all units but the Z unit in Nautical Miles (Input-Output
Length Options and Graphical Axis Units), and the Z unit in meters.
The acoustic survey information is contained in the ASCII file called acoustic_survey.hd. It is
provided with a header which describes the set of variables:

- Longitude and Latitude, expressed in nmil, will serve as coordinates;
- Year, Month, Day, Hour, Minute and Second give the exact date at which the
  measurement has been performed. They will not be used in this case study;
- Fish is the variable containing the fish abundance and will be the target variable throughout
  this study;
- East and West are two selections which separate the sub-part of the data belonging to the
  eastern part of the North Sea from the western part: the boundary corresponds to a broken
  line going through the Orkney and Shetland Islands.

The data are provided in the Isatis installation directory/Datasets/Acoustic_survey and loaded in
the File called Data.


(snap. 17.1-1)

17.1.2 Statistics
Getting info on the file Data tells us that the data set contains 938 points, extending over a square
area with a 200 nmil edge. The next figure represents the acoustic survey, where the points located
in the East part (1993 - East selection) are displayed using a dark circle whereas the points in the
West part (1993 - West selection) are represented with a plus sign.


(fig. 17.1-1: base map of the acoustic survey; East samples as dark circles, West samples as plus signs)

The differences between the East and West areas show up in the basic statistics of the fish
abundance:

                   All data      East      West
Count of samples        938       606       332
Minimum                0.00      0.00      0.00
Maximum              533.36    533.36    306.48
Mean                   8.27      8.16      8.47
Variance            1078.49   1189.48    875.84
Skewness               9.07      9.93      6.33
CV (sample)            3.97      4.23      3.49

The following figure shows the histogram of the Fish variable. The data are highly positively
skewed with 50% of zero values.


(fig. 17.1-2: histogram of the Fish variable; Nb Samples: 938, Minimum: 0.00, Maximum: 533.36, Mean: 8.27, Std. Dev.: 32.84)

The next figure represents the log of the acoustic index + 1 in a proportional display, zero values
being displayed with a plus sign whereas non-zero values are displayed using circles. It is similar to
the 1993 display in figure 4.3.1, page 84, of the reference manual.

(fig. 17.1-3: proportional display of log(acoustic index + 1))


17.1.3 Variography
Two omnidirectional variograms were calculated separately on the data coming from the East and
the West areas. For the sake of simplicity, the variograms are calculated on the raw variables, with
a lag value of 2.5 nmil, 30 lags and a tolerance on distance of 50%. Each experimental variogram
has then been fitted using the same combination of a nugget effect and an exponential basic
structure: the sill of each component has been fitted automatically. The next figures show the two
experimental variograms and the corresponding models (West and East).

(fig. 17.1-4)


(fig. 17.1-5)

Note that, as we already knew, the variances of the two subsets are quite different (875 for West and
1189 for East). The fitted models have the following parameters:

Dataset                     West    East
Nugget                       396     842
Exp - Range                   27      20
Exp - Sill                   787     469
Total Sill                  1183    1311
Ratio Nugget / Total Sill    33%     64%

There is enough evidence to indicate that there are differences between the east and west regions,
particularly regarding the proportion of nugget; it is therefore advisable to stratify the whole data
set into east and west regions.


17.2 Global Estimation

We decided to divide the information between the East and the West regions. For the purpose of the
global estimation, the whole field is subdivided into geographical sub-strata of consistent sampling
density. These sub-strata correspond to Polygons.

17.2.1 Small strata

At first, the sub-strata are designed so as to follow the survey track along a single transect in the
East part; in the West part, a sub-stratum may contain a two-way transect.
These polygons also take into account the shape of the coast of Scotland as well as the Orkney and
Shetland Islands, in order to avoid integrating the target variable (Fish density) over the land.

(fig. 17.2-1: map of the 26 small strata S1 to S26 over the survey area)

This first set of polygons, corresponding to small strata, is read from the separate ASCII Polygon
File called small_strata.hd. The procedure File / Polygons Editor is used to import these polygons
into a new Polygon File Small Strata: some parameters (label contents and position, filling...) are
already stored in the ASCII File. The procedure allows a visualization of these polygons, together
with the survey data used as control information (see the paragraph on Auxiliary Data in the
Polygons section of the Beginner's Guide).


The polygons are named from S1 to S26. Using the File / Selection / Intervals menu, we create two
selections on the Sample Number in order to distinguish the first 13 polygons (S1 to S13), located
in the East region (selection East), from the last 13 polygons (S14 to S26), located in the West
region (selection West).
The polygons constitute a partition of the domain of integration (no polygons overlap) and the total
surface is then obtained as the sum of the surfaces of the polygons: 39192 nmil2.

17.2.2 Global Weighted Estimation Using Kriging

The next step consists in performing the global estimation for each polygon, using the Interpolate /
Estimation / Polygon Kriging window. Nevertheless, we pay attention to processing the two
regions separately: we interpolate the data over the polygons of the East selection using the model
corresponding to this subset of information, and then perform the same operation for the West
region.
Some polygons overlap the East and West data selections; this is for instance the case for S15.
Therefore, to avoid losing some information within the polygon, the entire dataset is used as input
for the interpolation.
The global estimation by kriging requires each polygon to be discretized. The definition of the
discretization grid is a feature of the Polygons Editor, using the Application / Discretize facility.
We simply obtain a discretization by choosing to fit grids with a fixed mesh of 2.5 x 2.5 nmil with
no rotation.
The global estimation requires the definition of the Neighborhood criterion (stored in the Standard
Parameter File Polygon): we perform the global estimation of the fish density in each polygon
using only the data points located within this polygon. Nevertheless, we allow the system to grab
information located on the edge of this polygon and possibly falling outside due to round-off
errors: for this reason, the neighborhood is increased to its dilation by a small ball with a 1 nmil
radius.
The global estimation is performed and the following results are stored in the polygon file:
Estimation, which contains the (weighted) estimate of the mean (using Kriging), and St. dev.,
which contains the square root of the corresponding estimation variance.
We can visualize the result using the Display facility with the Estimation variable defined on the
polygons, using a new color scale.


(snap. 17.2-1)

(fig. 17.2-2: map of the kriged Estimation of the Fish density per small stratum)


17.2.3 Global Unweighted Estimation

Another method consists in calculating the mean estimate over each polygon simply as the mean of
the data located within the polygon. This can be achieved in two different ways:

- Average of the data within a polygon

We must first create a selection which retains the information located within the polygon: this is
realized using the File / Selection / From Polygons feature. When considering the first polygon,
for example, we run this procedure, selecting the samples located inside the polygon S1 and
storing the result in a new selection variable called S1. Out of the 938 initial data, only 68 are
retained in this selection.
Then it suffices to run the standard Quick Statistics procedure, selecting only the data points
within this S1 selection, in order to obtain the mean: 6.358.

(snap. 17.2-2)


- Global estimation with a pure nugget effect

The second solution is to run the Global Estimation procedure again, but using a model where
any spatial dependency between samples is discarded, such as a pure nugget effect (called
Nugget). In that case the arithmetic average is the optimal estimate of the mean of the polygon,
as checked in the sketch below. Another interesting feature is that all the polygons can be
estimated in a single RUN of the same procedure. The estimation is stored in the variable
Arithmetic Mean in the Polygon File.

Note - For comparison purposes, the dilation radius of the neighborhood is brought back to 0 for
this example.

When the global estimation has been processed, it suffices to use the traditional Print feature to
dump out the value of the Arithmetic Mean variable for each polygon. We can check the
exactness of the comparison for the first polygon: 6.36.
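The equivalence can be checked on the ordinary kriging system itself: with a pure nugget model the
data covariance matrix is (up to the sill) the identity, the covariances to the target vanish, and the
solution assigns the same weight 1/n to every sample. A minimal sketch in Python:

import numpy as np

n = 6                                    # any number of samples inside the polygon
C = np.eye(n)                            # pure nugget: samples are uncorrelated

# ordinary kriging system with the unbiasedness (Lagrange) row and column;
# the right-hand side covariances to the target are zero under a pure nugget
A = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
b = np.concatenate([np.zeros(n), [1.0]])

w = np.linalg.solve(A, b)[:n]
print(w)                                 # every weight equals 1/n -> arithmetic mean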

17.2.4 Comparison
It is now time to review the results obtained for all polygons, by using the Print feature for dumping
the variables:

- Estimation: (weighted) estimate of the mean (using Kriging)
- St. dev.: square root of the corresponding estimation variance
- Arithmetic Mean: unweighted estimate of the mean

The results are presented in the following table, where:

- Rk is the rank of the polygon
- N designates the count of points in a polygon
- Surf is the surface of the polygon in nmil2
- Rap is the ratio of the surface of the current polygon with respect to the total surface
- Ziid is the unweighted average Fish density over the polygon
- Zgeo is the kriged estimate of the mean Fish density over the polygon
- S is the corresponding standard deviation
- Aiid is the unweighted abundance over the polygon
- Ageo is the kriged abundance

Regarding the abundance estimation, we can compare:

- the arithmetic mean fish density raised to the area of the polygon: 284923,
- the arithmetic mean fish density raised to each polygon surface, and cumulated over the 26
  polygons: 305670,
- the kriged mean fish density raised to each polygon surface, and cumulated over the 26
  polygons: 295182.

For the estimation variance, we can compare:

- the global CViid = s / (ziid √N), which ignores the spatial structure (expressed in %): 12.97%,
  with s the standard deviation of the sample values and ziid the sample mean;
- the CVgeo = σE / zgeo, where σE is obtained by combining the estimation variances of the
  different strata: σE² = Σj σj² Vj² / V², with σj the estimation standard deviation of stratum j,
  Vj the surface of polygon j and V the global surface of the domain. This term is equal to
  18.37%.

A sketch of this combination is given below.
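A minimal sketch of this combination in Python, using the first three strata of the table below
purely for illustration:

import numpy as np

# per-stratum results: surface Vj, kriged mean Zj, estimation std dev Sj
V = np.array([4275., 2650., 2584.])
Z = np.array([6.04, 19.05, 12.84])
S = np.array([5.11, 4.75, 4.95])

abundance = np.sum(Z * V)               # sum of per-stratum abundances
z_geo = abundance / V.sum()             # global mean density

# strata are estimated independently, so the variances combine as
# sigma_E^2 = sum_j Sj^2 * Vj^2 / V^2
sigma_E = np.sqrt(np.sum(S**2 * V**2) / V.sum()**2)
cv_geo = sigma_E / z_geo
print(f"{abundance:.0f}  {cv_geo:.1%}")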


Rk    N   Surf    Rap   Ziid   Zgeo      S   Aiid   Ageo
 1   68   4275  10.91   6.36   6.04   5.11  27179  25817
 2   64   2650   6.76  19.34  19.05   4.75  51266  50483
 3   57   2584   6.59  13.83  12.84   4.95  35745  33188
 4   59   2423   6.18   4.97   4.88   4.98  12045  11814
 5   60   2206   5.63   5.02   3.64   4.96  11081   8039
 6   42   1938   4.95  11.16  10.12   5.63  21624  19623
 7   40   1719   4.39   9.62   9.41   5.93  16548  16182
 8   34   1742   4.44   5.98   5.74   6.61  10425  10005
 9   28   1523   3.89   4.77   4.38   7.07   7261   6676
10   36   1853   4.73   6.40   5.92   6.53  11867  10965
11   33   1562   3.99   3.44   5.83   6.28   5370   9115
12   33   1585   4.05   1.11   3.67   6.35   1764   5824
13   52   2575   6.57   0.86   0.86   5.80   2215   2219
14   23    781   1.99   0.05   0.05   6.82     39     37
15   19    488   1.25  11.89   5.24   8.44   5809   2557
16   30    713   1.82  12.44  13.33   6.15   8876   9506
17   30    969   2.47   4.65   4.69   5.87   4504   4546
18   47   1280   3.27   6.24   6.30   4.48   7992   8059
19   53   1199   3.06   9.30   7.48   3.93  11147   8970
20   36   1175   3.00  30.10  29.96   5.12  35361  35196
21   35   1293   3.30   2.70   2.11   5.37   3490   2728
22   24    856   2.19  15.06  13.94   5.95  12902  14941
23   11    676   1.73   0.00   0.00  12.39      0      0
24   10    487   1.24   0.00   0.00  11.52      0      0
25         405   1.03   1.67   2.59  12.92    678   1051
26         234   0.60   2.07   2.73  13.24    482    638
As stated in the reference manual, the index of abundance does not differ greatly according to the
method: because of the systematic design, the kriged and unweighted estimates are similar.
Polygon 15 constitutes a noticeable exception: the difference between the averages on this polygon
comes from the fact that the kriging result is obtained using the West selection, which excludes
some data falling in the eastern part of the polygon.
The variance is, however, quite different: the CVgeo is higher than the CViid. This is due to the
autocorrelation in the data, which is ignored in the latter estimate. As the survey is not designed in a
manner that makes the unweighted estimate valid, the CViid can be considered as incorrect.

17.2.5 Large Strata

This part consists in performing the global estimation again, using a partition of the field into 8
large strata (rather than the 26 small strata used beforehand). These 8 large strata are imported from
the file large_strata.hd. They are split into 4 in the East and 4 in the West. All the steps remain
unchanged and only the results are presented hereafter.


(fig. 17.2-3: map of the 8 large strata L1 to L8)

The correspondence between the large and the small strata is easy to check:

L1   S1, S2, S3
L2   S4, S5, S6
L3   S7, S8, S9
L4   S10, S11, S12, S13
L5   S14, S15, S16
L6   S17, S18, S19
L7   S20, S21, S22
L8   S23, S24, S25, S26

The aim of this paragraph is to perform the estimation based on the large strata and to assess the
accuracy of the approximation which consists in combining the estimations and variances of
several polygons or strata partitioning a domain.
The results are given in the next table:
Rk   Surf   Z(V)      S    A(V)
 1   9509  12.36   2.95  117554
 2   6567   6.07   3.06   39869
 3   4978   6.83   3.86   34013
 4   7575   3.37   3.20   25528
 5   1980   5.71   4.18   11313
 6   3447   6.37   2.67   21952
 7   3324  15.66   3.17   52041
 8   1802   0.90   6.17    1614

The global results calculated by combining the large strata are comparable to those obtained by
combining the small strata:

Statistics   Small Strata   Large Strata
Surface             39192          39184
Abundance          295182         303885
CVgeo              18.37%         17.92%


18.Air quality
This case study is based on a data set kindly provided by the French
association for Air Quality Monitoring ATMO Alsace (Source
d'information ASPA 05020802-ID - www.atmo-alsace.net).
The case study covers rather exhaustively a large panel of Isatis
features. Its main objectives are to:

- estimate the annual mean of nitrogen dioxide (NO2) over Alsace in
  2004 using classical geostatistical algorithms,
- perform risk analysis by:
  - the estimation of the local risk to exceed a sanitary threshold of
    40 µg/m3 using conditional expectation (multi-gaussian kriging),
  - the quantification of the statistical distribution of the population
    potentially exposed to NO2 concentrations higher than 40 µg/m3.

Last update: Isatis version 2014


18.1 Presentation of the data set

18.1.1 Creation of a new study
First, before loading the data, create a new study using the File / Data File Manager functionality.

(snap. 18.1-1)

It is then advised to check the consistency of the units defined in the Preferences / Study
Environment / Units panel:

- Input-Output Length Options window: unit in meters (Length), with its Format set to Decimal
  with Length = 10 and Digits = 2;
- Graphical Axis Units window: X and Y units in kilometers.

18.1.2 Import of the data

18.1.2.1 Import of NO2 diffusive samples
The first data set is provided in the Excel file NO2 samples.xls (located in the Isatis installation
directory/Datasets/Air_Quality). It contains the values of NO2 measured with diffusive samples in
Alsace.
The procedure File / Import / Excel is used to load the data. First you have to specify the path of
your data using the button Excel File. In order to create a new directory and a new file in the current
study, the button Points File is used to enter the names of these two items; click on the New
Directory button and give a name, do the same for the New File button, for instance:

- New directory = Data
- New file = NO2

You have to tick the box First Available Row Contains Field Names and click on the Automatic
button to load the variables contained in the file.
At last, you have to define the type of each variable:

- the coordinates Easting(X) and Northing(Y) for X_COORD_UTM and Y_COORD_UTM,
- the alphanumeric variables Location and Typology,
- the numeric 32 bits variable NO2,
- the numeric 1 bit variable Background, to automatically consider it as a selection variable.
  This background vector refers to the type of monitoring point (1 indicates a background
  pollution site - rural, peri-urban or urban - and 0 a proximity pollution site - industrial or
  traffic).

Finally, you have to press Import.

(snap. 18.1-1)


18.1.2.2 Import of auxiliary data


The second data set is provided in the Excel spreadsheet Auxiliary data.xls. It contains the auxiliary
variables Altitude and Emissions of NOx that could increase the quality of NO2 mapping in case
of relevant correlation with this pollutant. The variable Pop99 represents the density of inhabitants
living in the region in 1999; it will be used in the estimation of population exposure.
To import this file, you have to do a File / Import / Excel in the target directory Data and the new
file Auxiliary data.

18.1.2.3 Import of polygons

The next essential task for this study is to define the area of interest. This contour is loaded as a 2D
polygon.
The polygon delineating the two departments of the Alsace region (Bas-Rhin and Haut-Rhin) is
contained in an ASCII file, called Alsace_contours.pol, whose header describes the contents:

- the polygon level corresponds to the lines starting with the ** symbol,
- the contour level corresponds to the lines starting with the * symbol.
#
# [ISATIS POLYGONS] Study: ASPA - Campagne Regionale 2004 Directory: Donnees
File: Contours
#
# Polygons Dimension = 2D
#
# polygon_field = 1 , type = name
# polygon_field = 2 , type = x_label , unit = "km"
# polygon_field = 3 , type = y_label , unit = "km"
#
# ++++++++++++ ---------- ++++++++++
#
# vertex_field = 1 , type = x , unit = "km" , f_type=Decimal , f_length=10 ,
f_digits=2
# vertex_field = 2 , type = y , unit = "km" , f_type=Decimal , f_length=10 ,
f_digits=2
#
# ++++++++++ ----------
#
** Bas-Rhin
393.42
5391.85
*
434.27
5407.18
433.06
5405.97
431.02
5404.44
.../...
434.29
5408.68
** Haut-Rhin
371.18
5302.14
*
353.01
5287.31
352.84
5287.73
.../...
352.65
5284.33
352.05
5285.49

This polygon is read using the File / Polygons Editor functionality. This application stands as a
graphic window with a large Application Menu. You must first choose the New Polygon File option
to create a file where the 2D polygon attributes will be stored: the file is called Alsace, in the
directory Data.

(snap. 18.1-1)

The next task consists in loading the contents of the ASCII Polygon File using the ASCII Import
functionality in the Application Menu.

(snap. 18.1-2)

The polygons now appear in the graphic window. You can easily distinguish the Bas-Rhin and the
Haut-Rhin.


(snap. 18.1-3)

The final action consists in performing the Save and Run task in order to store the polygon file in
the general data file system of Isatis.


18.2 Pre-processing
18.2.1 Creation of a target grid
All the estimation and simulation results will be stored as different variables of a new grid file
located in the directory Data. This grid, called Grid, is created using the File / Create Grid File
functionality. It is adjusted on the Auxiliary data.

(snap. 18.2-1)


Using the Graphic Check option, the procedure offers the graphical capability of checking that the
new grid reasonably overlays the data points and is consistent with the 1 km x 1 km resolution of
the auxiliary variables.

(snap. 18.2-2)


18.2.2 Migration of the Auxiliary data on the Grid


After the creation of the grid, the second task consists in migrating the auxiliary variables on the
grid. This action is performed using the Tools / Migrate / Point -> Grid functionality.

(snap. 18.2-3)


18.2.3 Delineation of the interpolation area

You have to create a polygon selection on the grid to delineate the interpolation area, using the
File / Selection / From Polygons functionality. The distinction between Bas-Rhin and Haut-Rhin is
not important in this study; in order to consider the interpolation over the entire domain, the new
selection variable, called Alsace, takes into account the points inside the two regions.
(snap. 18.2-4)


18.2.4 Creation of data selection

Another way to create the Background selection is to use the File / Selection / Alphanumeric
functionality. Only the samples for which the value taken by the alphanumeric variable Typology is
equal to "rural", "periurban" or "urban" will be kept in the new selection Background
(alphanumeric selection). A sketch of this kind of selection is given after the snapshot below.

(snap. 18.2-5)
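The same selection logic can be sketched outside Isatis, for instance with pandas on a hypothetical
extract of the samples file:

import pandas as pd

# hypothetical extract of the NO2 samples file
df = pd.DataFrame({
    "Typology": ["rural", "traffic", "urban", "industry", "periurban"],
    "NO2":      [12.0, 55.0, 31.0, 40.0, 22.0],
})

# keep only the background sites, as the alphanumeric selection does
background = df["Typology"].isin(["rural", "periurban", "urban"])
print(df[background])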


18.3 Exploratory Data Analysis

In the Statistics / Exploratory Data Analysis panel, the first task consists in defining the file and
variable of interest, namely NO2. To achieve that, click on the Data File button and select the
variable. By pressing the corresponding icons (eight in total), you can successively perform several
statistical representations, using default parameters or choosing appropriate ones.

(snap. 18.3-1)

For example, to calculate the histogram with 32 classes between 4 and 68 µg/m3 (2 units per
interval), first click on the histogram icon (third from the left); a histogram calculated with
default values is displayed, then enter the previous values in the Application / Calculation
Parameters menu bar of the Histogram page. If you switch on the Define Parameters Before Initial
Calculations option, you can skip the default histogram display.
Clicking on the base map icon (first from the left), the dispersion of the diffusive samples over
Alsace appears. Each active sample is represented by a cross proportional to the NO2 value. A
sample is active if its value for a given variable is defined and not masked.


(fig. 18.3-1: base map of the NO2 diffusive samples (X, Y in km) and histogram of NO2: Nb Samples 60, Minimum 4.00, Maximum 67.00, Mean 26.10, Std. Dev. 12.66)

You can identify the typology of each sample on the base map by entering the variable Typology as a Literal Code Variable in the Application / Graphic Specific Parameters panel.
The different graphic windows are dynamically linked. If you want to locate the particularly high NO2 concentrations, select the highest values on the histogram, right-click and choose the Highlight option. The highlighted values are now represented by blue stars on the base map; zooming in shows that these values are attached to traffic or urban sites.

(fig. 18.3-2: base map labelled by Typology with the highlighted high values, and the corresponding NO2 histogram: same statistics as above)


Then, an experimental variogram can be calculated by clicking on the variogram icon (7th statistical representation), with 20 lags of 5 km and a tolerance on distance of 0.5 lag (a sketch of the computation follows the snapshot below). The number of pairs may be added to the graphic by switching on the appropriate button in the Application / Graphic Specific Parameters. The variogram cloud is obtained by ticking the box Calculate the Variogram Cloud in the Variogram Calculation Parameters.

(snap. 18.3-2)
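For reference, the experimental variogram computed here is half the mean squared difference between pairs of samples, grouped into distance classes. A minimal numpy sketch of the computation (an illustration, not the Isatis implementation):

import numpy as np

def experimental_variogram(xy, z, lag=5.0, n_lags=20, tol=0.5):
    # Omnidirectional variogram: classes are centred on k*lag with a
    # tolerance of tol*lag on each side.
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    i, j = np.triu_indices(len(z), k=1)
    dist, sqdiff = d[i, j], (z[i] - z[j]) ** 2
    gamma, npairs = [], []
    for k in range(1, n_lags + 1):
        sel = (dist >= (k - tol) * lag) & (dist < (k + tol) * lag)
        npairs.append(int(sel.sum()))
        gamma.append(0.5 * sqdiff[sel].mean() if sel.any() else np.nan)
    return np.arange(1, n_lags + 1) * lag, np.array(gamma), np.array(npairs)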


(snap. 18.3-3)

By highlighting the high values on the variogram cloud, you can see that they are all due to the same traffic sample. If you mask this point on the base map (right click), the number of pairs taken into account in the computation of the experimental variogram decreases, and so does the overall variability of the variogram.


(fig. 18.3-3: variogram clouds and experimental variograms of NO2 (distances in km), before and after masking the offending traffic sample, with the base maps labelled by Typology)

Measurements near roads and industrial sites, which often show high NO2 values, have a strong impact on the variability, but they are not representative of the background pollution at the resolution of the chosen grid mesh. From now on, only the background samples are retained, in particular for the calculation of the variogram: activate the Input Data selection Background (choose the NO2 variable and click on the Background selection in the left part of the File and Variable Selector).


(snap. 18.3-4)


(fig. 18.3-4: base map, histogram and experimental variogram of NO2 restricted to the Background selection: Nb Samples 49, Minimum 4.00, Maximum 44.00, Mean 22.92, Std. Dev. 10.36)

The number of diffusive samples falls from 60 to 49, the maximum concentration decreases from 67 to 44 µg/m³ and the variance drops from 160.28 to 107.33.
In order to perform the fitting step, it is now time to store the final experimental variogram with the
item Save in Parameter File of the Application menu of the Variogram Page. You will call it NO2.


18.4 Fitting a variogram model


The procedure Statistics / Variogram Fitting allows you to fit an authorized model on an
experimental variogram.
You must first specify the file name of the Parameter File which contains the experimental
variogram: this is the file NO2 created in the previous paragraph.
Then you need to define another Parameter File which will ultimately contain the model: you will
also call it NO2. Although they carry the same name, there will be no ambiguity between these two
files as their contents belong to two different types.
Common practice is to find, by trial and error, the set of parameters defining the model which fits the experimental variogram as closely as possible. The quality of the fit is checked graphically in one of two windows:
- The global window, where all experimental variograms, in all directions and for all variables, are displayed.
- The fitting window, where you focus on one given experimental variogram, for one variable and in one direction.

In our case, as the Parameter File refers to only one experimental variogram for the single variable NO2, there is obviously no difference between the two windows.


(snap. 18.4-1)

The principle consists in editing the Model parameters and checking the impact graphically. You can also use the variogram initialization by selecting a single structure or a combination of structures in Model Initialization, and by adding or not a nugget effect. Here, we choose an exponential model without nugget. Pressing the Fit button in the Automatic Fitting tab, the procedure automatically fits the range and the sill of the variogram (see the Variogram Fitting section of the User's Guide).
Then go to the Manual Fitting tab and press the Edit button to access the panel used for the Model Definition and modify the model displayed. Each modification of the Model parameters can be validated using the Test button in order to update the graphic. Here, we enter a (practical) range of 48 km and a sill of 120 for a better fit of the model to the experimental variogram. This model is saved in the Parameter File for future use by clicking on the Run (Save) button.


(snap. 18.4-2)
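The exponential model used here can be written γ(h) = C (1 − exp(−3h/a)), where C is the sill (120) and a the practical range (48 km), i.e. the distance at which the model reaches about 95% of its sill. A small sketch to evaluate it:

import numpy as np

def exp_variogram(h, sill=120.0, practical_range=48.0):
    # gamma(practical_range) = sill * (1 - exp(-3)) ~ 0.95 * sill
    return sill * (1.0 - np.exp(-3.0 * h / practical_range))

print(exp_variogram(np.array([0.0, 24.0, 48.0, 96.0])))
# [0., ~93.2, ~114.0, ~119.7]: the sill of 120 is nearly reached at 48 km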

(fig. 18.4-1: experimental variogram of NO2 (Background selection) with the fitted exponential model, practical range 48 km and sill 120)


18.5 Kriging of NO2

The kriging procedure Interpolate / Estimation / (Co-)Kriging requires the definition of:
- the Input information: the variable NO2 in the Data File (with the selection Background),
- the following variables in the Output Grid File, where the results will be stored (with the selection Alsace):
  - the estimation result in Estimation for NO2 (Kriging),
  - the standard deviation of the estimation error in Std for NO2 (Kriging),
- the Model: NO2,
- the Neighborhood: Unique.

To define the neighborhood, click on the Neighborhood button; you will be asked to select or create a new set of parameters. In the New File Name area enter the name Unique, then click on OK or press Enter; you will then be able to set the neighborhood parameters by clicking on the corresponding Edit button.
By default, a moving neighborhood is proposed. Given the small number of diffusive samples (less than 100), a unique neighborhood is preferred: the entire set of data will therefore be used during the interpolation process at every grid node. An advantage of the unique neighborhood is that the kriging matrix inversion is performed once and for all (see the sketch after the snapshot below).
The only thing left to do is to select Unique as the Neighborhood Type and click on OK.

(snap. 18.5-1)
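The following sketch illustrates what unique-neighborhood ordinary kriging amounts to: the kriging matrix is built and inverted once, then reused for every target node (an illustration under the exponential model above, not the Isatis code):

import numpy as np

def covariance(h, sill=120.0, practical_range=48.0):
    return sill * np.exp(-3.0 * h / practical_range)

def ordinary_kriging_unique(xy_data, z, xy_targets):
    n = len(z)
    d = np.linalg.norm(xy_data[:, None, :] - xy_data[None, :, :], axis=-1)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = covariance(d)
    K[n, :n] = K[:n, n] = 1.0          # unbiasedness constraint
    K[n, n] = 0.0
    K_inv = np.linalg.inv(K)           # inverted once for all nodes
    est, std = [], []
    for t in xy_targets:
        d0 = np.linalg.norm(xy_data - t, axis=1)
        k0 = np.append(covariance(d0), 1.0)
        w = K_inv @ k0                 # weights + Lagrange parameter
        est.append(w[:n] @ z)
        var = covariance(0.0) - w @ k0
        std.append(np.sqrt(max(var, 0.0)))
    return np.array(est), np.array(std)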


(snap. 18.5-2)

In the Standard (Co-)Kriging panel, a special feature allows you to test the choice of parameters, through a kriging procedure, on a graphical basis (Test button). A first click within the graphic area displays the target file (the grid). A second click allows the selection of one particular grid node. The target grid node may also be entered through the Test Window / Application / Selection of Target option (see the status line at the bottom of the graphic page), for instance the node [62,128].
The figure shows the data set, the samples chosen in the neighborhood (all the data in our case, with a unique neighborhood) and their corresponding weights. The bottom of the screen recalls the estimated value, its standard deviation and the sum of the weights.


(snap. 18.5-3)


In the Application menu of the Test Graphic Window, click on Print Weights & Results. This produces a printout of:
- the calculation environment: target location, model and neighborhood,
- the kriging system,
- the list of the neighboring data and the corresponding weights,
- the summary of this kriging test.


Results for: Punctual

- For variable V1
Number of Neighbors            =        49
Mean Distance to the target    = 51732.22m
Total sum of the weights       =  1.000000
Sum of positive weights        =  1.179724
Weight attached to the mean    =  0.000000
Lagrange parameters #1         = -0.104556
Estimated value                = 29.200666
Estimation variance            = 42.828570
Estimation standard deviation  =  6.544354
Variance of Z* (Estimated Z)   = 77.380543
Covariance between Z and Z*    = 77.275987
Correlation between Z and Z*   =  0.801933
Slope of the regression Z | Z* =  0.998649
Signal to Noise ratio (final)  =  2.801868

Click on Run to interpolate the data on the entire grid.


18.6 Displaying the graphical results


The kriging results can now be visualized using several combinations of the display capabilities. You are going to create a new Display template, consisting of an overlay of a grid raster and the NO2 data locations. All the display facilities are explained in detail in the Displaying & Editing Graphics chapter of the Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page pops up, together with a Contents window, in which you have to specify the contents of your graphic. To achieve that:
- First, give a name to the template you are creating: Estimation for NO2 (Kriging). This will allow you to easily display this template again later.
- In the Contents list, double-click on the Raster item. A new window appears, letting you specify which variable you want to display and with which color scale:
  - In the Data area, select the variable Estimation for NO2 (Kriging) in the Grid file,
  - Specify the title that will be given to the Raster part of the legend, for instance NO2 (µg/m³),
  - In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the NO2 variable. To create a new color scale, click on the Color Scale button, double-click on New Color Scale, enter a name, NO2, and press OK. Click on the Edit button. In the Color Scale Definition window:
    - In the Bounds Definition, choose User Defined Classes.
    - Click on the Bounds button and enter the min and max bounds (respectively 0 and 50).
    - Do not change the Number of Classes (32).
    - Switch on the Invert Color Order toggle in order to assign the red colors to the large NO2 values.
    - Click on the Undefined Values button and select Transparent.
    - In the Legend area, switch off the Automatic Spacing Between Tick Marks button, enter 0 as the reference tick mark and 5 as the step between the tick marks. Then, specify that you do not want your final color scale to exceed 6 cm.
    - Deselect Display Undefined Values so as not to specify a specific label for the undefined classes.
    - Click on OK.
- In the Item contents for: Raster window, click on Display to display the result.


(snap. 18.6-1)


- Back in the Contents list, double-click on the Basemap item to represent the NO2 variable with symbols proportional to the variable value. A new Item contents window appears. In the Data area, select Data / NO2 / NO2 (with the Background selection) as the proportional variable. Enter NO2 data as the Legend Title. Leave the other parameters unchanged; by default, black crosses will be displayed with a size proportional to the NO2 values. Click on Display Current Item to check your parameters, then on Display to see all the previously defined components of your graphic. Click on OK to close the Item contents panel.
- In the Items list, you can select any item and decide whether or not you want to display its legend. Use the Up and Down arrows to modify the order of the items in the final display.
- To remove the white margin, click on the Display Box tab and select the Containing a set of items mode. Choose the raster to define the display box correctly.
- Close the Contents window. Your final graphic window should be similar to the one displayed hereafter.

(snap. 18.6-2)


The * and [Not saved] symbols respectively indicate that some recent modifications have not been stored in the Estimation for NO2 (Kriging) graphic template, and that this template has never been saved. Click on Application / Store Page to save it. You can now close the window.
Create a second template, Std for NO2 (Kriging), to display the kriging standard deviation. The result should be similar to the one displayed hereafter.

(fig. 18.6-1)


18.7 Multivariate approach


A logarithmic transform is applied to the emissions in order to reduce the strong skewness of the variable; this usually improves the correlation with NO2. You can now take advantage of the knowledge of the Altitude and Emi_NOx variables and of the correlation between these different variables.
The transformation is performed in the File / Calculator panel. In order to avoid problems with emissions equal to zero, you add 1 to the emissions before taking the logarithm.

(snap. 18.7-1)


18.7.1 Correlation between NO2 and the auxiliary variables


The first task consists in interpolating the auxiliary variables at the diffusive sample locations, in order to characterize the relationship between NO2 and the auxiliary variables. This interpolation is carried out in the Interpolate / Interpolation / Quick Interpolation panel. Click on the Input File button and select the Altitude variable on the Grid. Click on the Output File button to select the target variable in which the results of the interpolation will be stored: you have to create a new variable called Altitude in the NO2 file and activate the Background selection. Select the interpolation method to be used, Bilinear Grid Interpolation, and click on Run.

(snap. 18.7-2)

Do the same thing with the ln(Emi_NOx+1) and Pop99 variables.
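Bilinear grid interpolation estimates the value at a point from the four surrounding grid nodes, weighted by the fractional position of the point inside the cell. A minimal sketch (hypothetical names):

import numpy as np

def bilinear(grid, x0, y0, dx, dy, px, py):
    # grid[j, i] holds the node located at (x0 + i*dx, y0 + j*dy).
    fx, fy = (px - x0) / dx, (py - y0) / dy
    i, j = int(np.floor(fx)), int(np.floor(fy))
    tx, ty = fx - i, fy - j
    return ((1 - tx) * (1 - ty) * grid[j, i]
            + tx * (1 - ty) * grid[j, i + 1]
            + (1 - tx) * ty * grid[j + 1, i]
            + tx * ty * grid[j + 1, i + 1])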


Now that the auxiliary variables are available at the diffusive sample locations, it is possible to calculate the correlation between NO2 and the auxiliary variables. Selecting the three variables in the Statistics / Exploratory Data Analysis and pressing the Statistics button produces the basic statistics on the selected variables.
On the Scatter Diagram of Altitude versus NO2, you can observe a correlation coefficient of -0.72. The correlation between NO2 and ln(Emi_NOx+1) is 0.757 (the linear regression line may be added by switching on the corresponding button in the Application / Graphic Specific Parameters window).


(fig. 18.7-1)

18.7.2 Multi-linear regression

In order to synthesize the information brought by these two auxiliary variables, a multi-linear regression is considered. Open the Statistics / Data Transformation / Multi-linear Regression panel. Click on the Data File button to open the File Selector and specify the file, then the variables to be read or created. Select the NO2 variable as the target variable, Altitude and ln(Emi_NOx+1) as explanatory variables, and create the new variables NO2 regression and NO2 residus as regressed and residual variables (activate the Background selection). Switch on Use a Constant Term in the Regression and create a new Parameter File named NO2 by clicking on the Regression Parameter File button, to store the results of the multi-linear regression. This parameter file will later be used to apply the same transformation to the grid variables through Statistics / Data Transformation / Raw<->Multi-linear Transformation. Finally click on Run.

(snap. 18.7-3)
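Under the hood, a multi-linear regression with a constant term is an ordinary least-squares problem. A compact numpy equivalent of what is computed here (a sketch; Isatis additionally reports t-values and significance codes):

import numpy as np

def multilinear_regression(y, X):
    # y ~ c0 + X @ b, solved by ordinary least squares.
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    regressed = A @ coef
    return coef, regressed, y - regressed

With Altitude and ln(Emi_NOx+1) as the columns of X and NO2 as y, the residual variance obtained this way should match the 28.655 reported in the printout below.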


The coefficients of the multi-linear regression are printed in the Message Window.

Regression Parameters:
======================
Explanatory Variable 1 = Altitude
Explanatory Variable 2 = ln(Emi_NOx+1)
Regressed Variable     = None
Residual Variable      = None
Constant Term          = ON

Multi-linear regression
-----------------------
Equation for the target variable: NO2
(NB. coefficients for lengths apply in their own unit)

|               | Estimated Coeff. | Signification | Std. Error | t-value | Pr(>|t|)  |
|---------------|------------------|---------------|------------|---------|-----------|
| Constant      |        5.468     |       X       |    4.956   |  1.103  | 0.276     |
| Altitude      |    -2.584e-02    |      ***      |  4.928e-03 | -5.244  | 3.864e-06 |
| ln(Emi_NOx+1) |        2.883     |      ***      |    0.474   |  6.080  | 2.195e-07 |

Signification codes based upon a Student test probability of rejection:
'***' Pr(>|t|) < 0.001   '**' Pr(>|t|) < 0.01   '*' Pr(>|t|) < 0.05
'.'   Pr(>|t|) < 0.1     'X'  Pr(>|t|) < 1

Multiple R-squared = 0.733
Adjusted R-squared = 0.721
F-statistic        = 63.156
p-value            = 6.428e-14
AIC                = 430.727
AIC Corrected      = 431.261

Statistics calculated on 49 active samples:
Raw data   Mean = 22.918      Variance = 107.340
Regressed  Mean = 22.918      Variance = 78.685
Residuals  Mean = 4.350e-16   Variance = 28.655


18.7.3 Collocated cokriging


The NO2 regression variable will now be considered as the auxiliary variable. The correlation
between the NO2 and NO2 regression variables is 0.906.

(fig. 18.7-2)

For the sake of clarity, you define for the experimental variograms the same calculation parameters
as before (20 lags of 5km).


(fig. 18.7-3)

Obviously, you recognize the same NO2 variogram as before. The NO2 regression variogram, as well as the cross-variogram, show the same number of pairs, as they are built on the same 49 samples where both variables are defined. You will save this new bivariate experimental variogram in a Parameter File called NO2-Altitude+ln(Emi_NOx+1) for the fitting step.
The Statistics / Variogram Fitting procedure is started with NO2-Altitude+ln(Emi_NOx+1) as experimental variogram and by defining a new file, also called NO2-Altitude+ln(Emi_NOx+1), to store the bivariate model. The Global window is used for fitting all the variables simultaneously. The same variogram model as before is used for the NO2 experimental variogram. You choose the following parameters:
- Exponential with a range of 48 km and:
  - a sill of 120 for the NO2 variogram,
  - a sill of 95 for the cross-variogram,
  - a sill of 90 for the NO2 regression variogram.

The dotted lines on the cross-variogram show the envelope of maximal correlation allowed from
the simple variograms. Click on Run (Save).

(fig. 18.7-4)
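This envelope expresses that, in the linear model of coregionalization, the sill matrix of each basic structure must be positive semi-definite; here the absolute sill of the cross-variogram may not exceed √(120 × 90) ≈ 103.9. A quick numerical check:

import numpy as np

B = np.array([[120.0, 95.0],     # sills of the single exponential structure
              [95.0, 90.0]])
eig = np.linalg.eigvalsh(B)
assert (eig >= 0.0).all()        # holds since 95 <= sqrt(120 * 90) ~ 103.9
print(eig)                       # ~ [8.82, 201.18], matching the eigen values
                                 # of the parameter file printout below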

Note - To access the displayed variogram parameters of your choice, click on the Sill to be displayed button.
Printing the model in the File / Parameter Files window allows a better understanding of the way the basic structure has been used to fit the three views simultaneously, in the framework of the linear model of coregionalization, with the sills as the only degrees of freedom.


========== Parameter File Print ==========

---> Set name : NO2-Altitude+ln(NOx+1)
Directory name ........ Data
File name ............. NO2
Selection name ........ BACKGROUND
Number of variables ... 2
    NO2
    NO2 regression
Total number of samples in File   60
Number of samples in Selection    49

Model : Covariance part
=======================
Number of variables = 2
- Variable 1 : NO2
- Variable 2 : NO2 regression

Experimental Covariance Matrix:
                 |   NO2  | NO2 regression
  NO2            | 107.34 |  78.69
  NO2 regression |  78.69 |  78.69

Experimental Correlation Matrix:
                 |  NO2 | NO2 regression
  NO2            | 1.00 |  0.86
  NO2 regression | 0.86 |  1.00

Number of basic structures = 1
S1 : Exponential - Scale = 48000.00m

Variance-Covariance matrix :
             Variable 1  Variable 2
  Variable 1   120.0000     95.0000
  Variable 2    95.0000     90.0000

Decomposition into factors (normalized eigen vectors) :
             Variable 1  Variable 2
  Factor 1      10.7832      9.2141
  Factor 2      -1.9296      2.2582

Decomposition into eigen vectors (whose variance is eigen values) :
             Variable 1  Variable 2  Eigen Val.  Var. Perc.
  Factor 1       0.7603      0.6496    201.1769       95.80
  Factor 2      -0.6496      0.7603      8.8231        4.20

Model : Drift part
==================
Number of drift functions = 1
- Universality condition

========== End of Parameter File Print ==========

Advanced explanations about these coefficients are available in the Isatis Technical References,
that can be accessed from the On-Line documentation: chapter Structure Identification of the
Intrinsic Case, paragraph Printout of the Linear Model of Coregionalization.


The Interpolation / Estimation / (Co-)Kriging procedure is used again to perform the cokriging step
in order to estimate the NO2 from the auxiliary variable NO2 regression.
You have calculated the NO2 regression variable on the diffusive samples but not on the Grid
where the two variables Altitude and ln(Emi_NOx+1) are also informed. So, the first task consists
in calculating this variable on the Grid by the Statistics / Data Transformation / Raw<->Multilinear Transformation panel. Select the Regression Parameter File NO2, that has been created
from the Multi-linear Transformation application and associate the two explanatory variables
Altitude and ln(Emi_NOx+1) located in the Grid file. Then, create a new variable NO2
regression for the regressed variable. Clicking on Run, the coefficients of the regression are
applicated to the corresponding variables and the same transformation is computed.

(snap. 18.7-4)

Now you can execute the cokriging operation. Select the two variables NO2 and NO2 regression among the variables of the Input File (with the Background selection), name the two output variables Estimation for NO2 (Cokriging) and Std for NO2 (Cokriging) to store the cokriging results, and do not forget to specify the NO2 regression variable of the Grid as the Collocated Variable. Name the file containing the bivariate model NO2-Altitude+ln(Emi_NOx+1). The neighborhood is unchanged. Click on the Special Kriging Options button and select the option Collocated Cokriging. Make sure that the Collocated Variable in the Input and Output Files is the same: NO2 regression. Click on Apply and Run.


(snap. 18.7-5)


(snap. 18.7-6)

18.7.4 Displaying the graphical results


Use the Estimation for NO2 (Kriging) and Std for NO2 (Kriging) display templates to easily display the cokriging results: for each template, you just need to specify in the Edit window of your grid item (Raster) that you want to display the Cokriging variables instead of the previous Kriging results.

(fig. 18.7-5)


The differences between Kriging and Cokriging are clearly visible on the display templates. On the Cokriging map, the integration of the auxiliary variables brings out the roads; this representation is more realistic. The auxiliary variables also improve the standard deviation map, decreasing it on the grid cells where no information was previously taken into account.


18.8 Cross-validation
The Statistics / Modeling / Cross-Validation procedure consists in considering each data point in turn, removing it temporarily from the data set and using the neighboring information to predict (by kriging) the value of the variable at its location. The estimation is compared to the true value to produce the estimation error, possibly standardized by the standard deviation of the estimation (a sketch is given after the snapshot below).
Click on the Data File button and select the NO2 variable with the Background selection as the Target Variable. Switch on the Graphic Representations option. Click on the Model button and select the variogram model called NO2, then Unique for the Neighborhood. This panel is very similar to the (Co-)Kriging panel.

(snap. 18.8-1)
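Conceptually, cross-validation is a leave-one-out loop around the kriging of the previous sections. A sketch reusing the ordinary_kriging_unique function given earlier (illustration only):

import numpy as np

def cross_validate(xy, z):
    # Re-estimate each sample from all the others; return the raw and
    # standardized estimation errors.
    n = len(z)
    errors, std_errors = np.empty(n), np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        est, std = ordinary_kriging_unique(xy[mask], z[mask], xy[i:i + 1])
        errors[i] = z[i] - est[0]
        std_errors[i] = errors[i] / std[0]
    return errors, std_errors

If the model is consistent with the data, the mean of the errors should be close to 0 and the variance of the standardized errors close to 1.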


By clicking on Run, the procedure finally produces a graphic page containing the four following windows:
- the base map,
- the histogram of the standardized estimation errors,
- the scatter diagram of the true data versus the estimated values,
- the scatter diagram of the standardized estimation errors versus the estimated values.

A sample is arbitrarily considered as not robust as soon as its standardized estimation error is larger than a given threshold in absolute value (2.5 for example, which approximately corresponds to the 1% extreme values of a normal distribution).


(fig. 18.8-1)


At the same time, the statistics on the estimation error and standardized error (mean and variance)
are printed out in the Message window.

======================================================================
|                          Cross-validation                          |
======================================================================
Data File Information:
  Directory   = Data
  File        = NO2
  Selection   = BACKGROUND
  Variable(s) = NO2
Target File Information:
  Directory   = Data
  File        = NO2
  Selection   = BACKGROUND
  Variable(s) = NO2
Seed File Information:
  Directory   = Data
  File        = NO2
  Selection   = BACKGROUND
  Variable(s) = NO2
  Type        = POINT (60 points)
Model Name        = NO2
Neighborhood Name = Unique - UNIQUE

Statistics based on 49 test data
                Mean      Variance
  Error         0.53765   42.90661
  Std. Error    0.05044    0.67488

Statistics based on 49 robust data
                Mean      Variance
  Error         0.53765   42.90661
  Std. Error    0.05044    0.67488

A data is robust when its Standardized Error lies between -2.500000 and 2.500000

Successfully processed = 49
CPU Time               = 0:00:00 (0 sec.)
Elapsed Time           = 0:00:00 (0 sec.)

The cross-validation has been carried out on the 49 NO2 samples. The mean error, close to 0, shows that the unbiasedness condition of the kriging algorithm worked properly. The variance of the standardized estimation error measures the ratio between the (squared) experimental estimation error and the kriging variance: this ratio should be close to 1, which is roughly the case here (variance of 0.67, i.e. a standard deviation of about 0.82).
In the second part, the same statistics are calculated based only on the robust points (in our case all the samples are robust, so you obtain the same results).
If you compare these results to the ones obtained with the cokriging, the correlation between the true values and the estimated values is better, but three samples are not considered as robust points. The mean and the variance of the standardized error are respectively close to 0 and 1.
It is difficult to choose between kriging and cokriging from the cross-validation results alone, but the comparison of the two maps is clearly in favor of the cokriging.


18.9 Gaussian transformation


Kriging provides the best estimate of the variable at each grid node. By doing so, it does not produce an image of the true variability of the phenomenon. Performing risk analysis usually requires computing quantities that have to be derived from a model representing the actual variability. In this case, advanced geostatistical techniques such as simulations have to be used.
It is for instance the case here if you want to estimate the probability of NO2 exceeding a given threshold. Since thresholding is not a linear operator applied to the concentration, applying the threshold to the kriged result (which comes from a linear operator) can lead to an important bias. Simulation techniques generally require a multi-gaussian framework: each variable has to be transformed into a normal distribution beforehand, and the simulation results must be back-transformed to the raw distribution afterwards.
The aim of this paragraph is to transform the raw distribution of the NO2 variable into a normal one.
Before that, you are going to compute declustering weights. The principle of the declustering application is to assign a weight to each sample where a given variable is defined, taking possible clusters into account. The weight variable created here may be used later in the gaussian transformation.
Click on the Tools / Declustering menu. Select in the Data File the NO2 variable with the Background selection and create a new variable declustering weights. Specify the Moving Window Dimensions, i.e. the dimensions in the X and Y directions of the moving window inside which the number of samples will be counted. Generally, the average distance between the samples is taken: in our case 15 km for X and Y. Click on Run.

(snap. 18.9-1)
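The moving-window declustering can be sketched as follows: each sample receives a weight inversely proportional to the number of samples found in a window centred on it (a simplified version of the procedure, with hypothetical names):

import numpy as np

def declustering_weights(xy, window_x=15.0, window_y=15.0):
    # Count the samples falling in the window centred on each sample,
    # then normalize the inverse counts so that the weights average 1.
    half = np.array([window_x, window_y]) / 2.0
    counts = np.array([np.sum(np.all(np.abs(xy - p) <= half, axis=1))
                       for p in xy])
    w = 1.0 / counts
    return w * len(xy) / w.sum()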


Now, using the Statistics / Gaussian Anamorphosis Modeling procedure, you can fit and display the anamorphosis function and transform the raw variable into a new gaussian variable NO2 Gauss.
Select the NO2 variable with the Background selection as Input data and the declustering weights as Weights.
The Interactive Fitting button overlays the experimental anamorphosis with its model expanded in terms of Hermite polynomials: this step function gives the correspondence between each of the sorted data (vertical axis) and the corresponding frequency quantile in the gaussian scale (horizontal axis). A good correspondence between the experimental values and the model is obtained by choosing an appropriate number of Hermite polynomials; by default Isatis suggests 30 polynomials, but you can modify this number and choose 50.
Select the option Gaussian Transform and create a new variable NO2 Gauss in the Output data. Three interpolation options are available; we recommend the Empirical Inversion method in this case. Save the anamorphosis by clicking on the Point Anamorphosis button and name it NO2. Finally click on Run.

(snap. 18.9-2)
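Isatis models the anamorphosis with Hermite polynomials, but the underlying idea is the classical weighted normal-score transform: each datum is mapped to the gaussian quantile of its cumulative frequency. A minimal sketch of that idea:

import numpy as np
from scipy.stats import norm

def normal_score(z, weights=None):
    z = np.asarray(z, float)
    w = np.ones_like(z) if weights is None else np.asarray(weights, float)
    order = np.argsort(z)
    cum = np.cumsum(w[order]) / w.sum()
    cum -= 0.5 * w[order] / w.sum()     # centre each frequency class
    y = np.empty_like(z)
    y[order] = norm.ppf(cum)            # gaussian quantiles
    return y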


(fig. 18.9-1: gaussian anamorphosis of NO2, raw values (vertical axis) versus gaussian values (horizontal axis))

Using the Statistics / Exploratory Data Analysis on this new variable and switching on Compute Using the Weight Variable (click on the ... button on the right and enter declustering weights as the Weight Variable), you can first compute its basic statistics: the mean is 0.00 and the variance is 1.00. You can display the histogram of this variable between -3 and 3 using 18 classes and check that the distribution is not exactly symmetric, with a minimum of -2.24 and a maximum of 2.92. The experimental variogram is clearly structured. The following one is computed using the same calculation parameters as in the univariate case: 20 lags of 5 km.

(fig. 18.9-2)


You can check the bi-gaussian assumption on the transformed data by computing the square root of the ratio between variogram and madogram. Click on the Application / Calculation Parameters menu of the Variogram window and select Sqrt of Variogram / Madogram as the Variographic Option. This ratio has to be constant and close to √π (represented by the dotted line).

(fig. 18.9-3: sqrt of variogram / madogram of NO2 Gauss versus distance (km), fluctuating around the dotted line at √π ≈ 1.77)
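The value √π comes from the moments of a gaussian increment: for ΔY ~ N(0, σ²) the variogram is σ²/2 and the madogram σ√(2/π)/2, hence √γ/ν = √π ≈ 1.77 at every lag. A quick check on synthetic gaussian increments:

import numpy as np

def sqrt_vario_over_madogram(dy):
    # dy: increments of the gaussian variable at a given lag.
    gamma = 0.5 * np.mean(dy ** 2)      # variogram
    mado = 0.5 * np.mean(np.abs(dy))    # madogram
    return np.sqrt(gamma) / mado

rng = np.random.default_rng(0)
print(sqrt_vario_over_madogram(rng.normal(size=100000)))   # ~ 1.77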

This experimental variogram is saved in a file called NO2 Gauss.


In the Statistics / Variogram Fitting panel, you fit a model made of a single exponential structure (range 48 km, sill 1.12) and save it in the model file called NO2 Gauss. You keep the same range as before to remain consistent.

(fig. 18.9-4)


18.10 Quantifying a local risk with Conditional Expectation (CE)

The aim of this part is to calculate the probability for NO2 to exceed a given cutoff at a given point. The method considered here is the Conditional Expectation; it uses a normal score transformation of the variable and its kriging.
You need to krige the gaussian variable NO2 Gauss using the NO2 Gauss variogram model and a Unique neighborhood, and to create two new variables, Estimation for NO2 Gauss (Kriging) and Std for NO2 Gauss (Kriging), in the Output File.

After that, you can proceed with the calculation of the probability. Select the Statistics / Statistics / Probability from Conditional Expectation menu and click on the Data File button to open a File Selector. Choose Estimation for NO2 Gauss (Kriging) as the Gaussian Kriged Variable, Std for NO2 Gauss (Kriging) as the second variable, and create a new variable Probability 40µg/m3 (CE) for the last one. This Probability macro variable will store the probabilities of exceeding the given cutoffs; each alphanumerical index of the macro variable corresponds to a different cutoff. In our case, there will be only one cutoff.
Press the Indicator Definition button to define the cutoff in the raw space; we have chosen a cutoff of 40 µg/m³. Click on Apply, then Close.
Check Perform a Gaussian Back Transformation and click on Anamorphosis to define the transformation (NO2) which was used to transform the raw data into the gaussian space before kriging. To finish, click on Run.

(snap. 18.10-1)

(snap. 18.10-2)
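In the multi-gaussian model, the conditional distribution at a node is gaussian with mean Y* (the kriged gaussian value) and standard deviation equal to the kriging standard deviation, so the exceedance probability has a closed form. A sketch, where y_cutoff stands for the gaussian equivalent of 40 µg/m³ through the anamorphosis:

from scipy.stats import norm

def prob_exceed(y_kriged, y_std, y_cutoff):
    # P[Z > z_c | data] = 1 - Phi((y_c - Y*) / sigma_K)
    return 1.0 - norm.cdf((y_cutoff - y_kriged) / y_std)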


The map corresponding to the probability of exceeding the sanitary threshold of 40 µg/m³ is displayed hereafter. A new color scale called Probability is created with irregular bounds in order to highlight the points where the probability is low.

(fig. 18.10-1)


18.11 NO2 univariate simulations

Another way to calculate the probability of exceeding a threshold is based on simulations, in particular conditional simulations. Simulations are also required to compute global statistics, such as the average exposed population.
A conditional simulation corresponds to a grid of values having a normal distribution and honoring the model. Moreover, it honors the data points, as it uses a conditioning step based on kriging, which requires the definition of a neighborhood. The simulations therefore also need the gaussian transformation and a variogram model based on this normal variable.
To compute these simulations, you are going to use the turning bands method (Interpolate / Conditional Simulations / Turning Bands). You use the same Unique neighborhood as in the kriging step. The additional parameters consist of:
- the name of the macro variable: each simulation is stored in this macro variable with an attached index,
- the number of simulations: 200 in this exercise,
- the starting index for numbering the simulations: 1 in this exercise,
- the Gaussian back transformation, performed using the anamorphosis function NO2. In a first run, this anamorphosis will be disabled in order to study the gaussian simulations,
- the seed used for the random number generator: 423141 by default. This seed allows you to perform lots of simulations in several steps: each step will be different from the previous one if the seed is modified.

The final parameters are specific to the simulation technique. When using the Turning Bands method, you simply need to specify the number of bands: a rule of thumb is to enter a number much larger than the number of rows or columns of the grid, and smaller than the total number of grid nodes; 1000 bands are chosen in our exercise.
You can verify on some simulations in the gaussian space that the histogram is really gaussian and that the experimental variogram respects the structure of the model NO2 Gauss, particularly at small scale (the conditioning principle is sketched below). After this quality control, you can enable the Gaussian back transformation NO2.
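The conditioning step mentioned above combines one non-conditional realization with two krigings: z_cs(x) = z*(x) + [s(x) − s*(x)]. A sketch reusing ordinary_kriging_unique from the kriging section, where simulate stands for any non-conditional gaussian simulator, such as turning bands:

import numpy as np

def conditional_simulation(xy_data, z, xy_grid, simulate):
    # Draw one realization at the grid nodes AND the data points, then
    # correct it by kriging so that the result honors the data.
    all_xy = np.vstack([xy_grid, xy_data])
    s = simulate(all_xy)                        # one realization everywhere
    s_grid, s_data = s[:len(xy_grid)], s[len(xy_grid):]
    z_star, _ = ordinary_kriging_unique(xy_data, z, xy_grid)
    s_star, _ = ordinary_kriging_unique(xy_data, s_data, xy_grid)
    return z_star + (s_grid - s_star)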


(fig. 18.11-1: histograms and experimental variograms of two gaussian simulations, indices 00050 and 00150: 8302 samples each, with min/max/mean/std. dev. of about (-3.19, 3.78, -0.03, 0.97) and (-3.32, 3.67, -0.06, 1.03))


(snap. 18.11-1)

The results consist of 200 realizations stored in the Simulations NO2 macro variable of the Grid. The clear differences between realizations are illustrated in the next graphic.


(fig. 18.11-2)


18.12 NO2 multivariate simulations

As in the kriging step, you can integrate auxiliary variables in the simulations. The gaussian hypothesis requires a new multi-linear regression of the auxiliary variables Altitude and ln(Emi_NOx+1) on the NO2 Gauss variable. The new auxiliary variable is stored in NO2 Gauss regression; the coefficients of this new regression, stored in a new Regression Parameter File NO2 Gauss, are also printed in the Message window:
Regression Parameters:
======================
Explanatory Variable 1 = Altitude
Explanatory Variable 2 = ln(Emi_NOx+1)
Regressed Variable     = NO2 Gauss regression
Residual Variable      = None
Constant Term          = ON

Multi-linear regression
-----------------------
Equation for the target variable: NO2 Gauss
(NB. coefficients for lengths apply in their own unit)

|               | Estimated Coeff. | Signification | Std. Error | t-value | Pr(>|t|)  |
|---------------|------------------|---------------|------------|---------|-----------|
| Constant      |       -1.338     |       *       |    0.531   | -2.521  | 1.525e-02 |
| Altitude      |    -2.918e-03    |      ***      |  5.278e-04 | -5.529  | 1.462e-06 |
| ln(Emi_NOx+1) |        0.288     |      ***      |  5.079e-02 |  5.669  | 9.064e-07 |

Signification codes based upon a Student test probability of rejection:
'***' Pr(>|t|) < 0.001   '**' Pr(>|t|) < 0.01   '*' Pr(>|t|) < 0.05
'.'   Pr(>|t|) < 0.1     'X'  Pr(>|t|) < 1

Multiple R-squared = 0.728
Adjusted R-squared = 0.716
F-statistic        = 61.644
p-value            = 9.659e-14
AIC                = -6.313e+00
AIC Corrected      = -5.780e+00

Statistics calculated on 49 active samples:
Raw data   Mean = 0.310       Variance = 1.210
Regressed  Mean = 0.310       Variance = 0.881
Residuals  Mean = -9.969e-17  Variance = 0.329

Calculate the NO2 Gauss regression variable on the Grid in the Statistics / Data Transformation /
Raw<->Multi-linear Transformation panel.


(snap. 18.12-1)

After that, you can compute the three experimental variograms (using the declustering weights variable). Save them as NO2 Gauss-Altitude+ln(Emi_NOx+1) and fit a model. You choose the following parameters:
- exponential with a range of 48 km and:
  - a sill of 1.12 for the NO2 Gauss variogram,
  - a sill of 1.00 for the cross-variogram,
  - a sill of 1.10 for the NO2 Gauss regression variogram.


(fig. 18.12-1)

You are now able to perform the collocated co-simulations using the turning bands technique. The differences with respect to the univariate simulations are that the multivariate case requires two variables, NO2 Gauss and NO2 Gauss regression (with the Background selection), in the Input File. Click on the Output File button, create two new variables on the Grid (Alsace selection activated), Simulations NO2 (multivariate case) and Simulations NO2 Gauss regression (irrelevant but required by the algorithm), and select NO2 Gauss regression as the Collocated Variable.
Enter NO2 Gauss-Altitude+ln(Emi_NOx+1) as the variogram model and Unique as neighborhood. Click on the Special Option button and switch on the Collocated Cokriging option (verify that the collocated variable is the same, NO2 Gauss regression, in the Input and Output Files). Enable the Gaussian Back Transformation and define the NO2 Anamorphosis for each variable. Do not change the other parameters, such as the number of simulations and the number of turning bands. Finally click on Run.


(snap. 18.12-2)

(snap. 18.12-3)


(snap. 18.12-4)


18.13 Simulation post-processing

The Tools / Simulation Post Processing panel provides a procedure for the post-processing of a macro variable. Considering the 200 univariate simulations, you ask the procedure to perform sequentially the following tasks (a sketch of both computations follows the list):
- calculation of the mean of the 200 simulations,
- determination of the cutoff map giving the probability that NO2 exceeds 40 µg/m³.
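Once the realizations are back-transformed, both outputs are plain averages over the realizations. A sketch, with sims an array of shape (n_realizations, n_cells):

import numpy as np

def post_process(sims, cutoff=40.0):
    mean_map = sims.mean(axis=0)               # converges towards kriging
    prob_map = (sims > cutoff).mean(axis=0)    # frequency of exceedance
    return mean_map, prob_map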

(snap. 18.13-1)

(snap. 18.13-2)


(snap. 18.13-3)

The map corresponding to the mean of the 200 simulations is displayed with the same color scale as for the estimated maps, together with the associated standard deviation. The mean of a large number of simulations converges towards the kriging.

(fig. 18.13-1)


(fig. 18.13-2)

The following graphic represents the probability that the NO2 concentrations exceed the sanitary threshold of 40 µg/m³, calculated from the simulations. This map is very similar to the probability map obtained by conditional expectation; with an infinity of simulations, the two maps would be identical.


(fig. 18.13-3)

The following graphics represent the mean of the simulations and the probability of exceeding 40 µg/m³ calculated in the multivariate case, i.e. using the Simulations NO2 (multivariate case) macro variable in the Tools / Simulation Post Processing panel with the same parameters as before.
The simulation mean has many similarities with the cokriging map. The probability map presents some differences with the one obtained from the univariate simulations, especially in the south, where the probability is lower (almost null) than in the first graphic, and in the east-centre, where the main area exposed to a risk of exceeding 40 µg/m³ is more limited and brings out a road axis. The integration of auxiliary variables in the simulations leads to a more realistic probability map.


(fig. 18.13-4)

(fig. 18.13-5)


18.14 Estimating population exposure

The first task consists in initializing a new population exposure macro variable. For that, use the Tools / Create Special Variable panel. Select the Variable Type to be created in the list, Macro Variable 32 bits, and click on the Data File button to define the name of this new variable: Population exposure. Click on Variable Unit to select the unit (Float) and on Editing Format to define the format: Integer(10,0). Finally, specify the Number of Macro Variable Indices; it will be the same as the number of simulations, i.e. 200. Click on Run.

(snap. 18.14-1)

In the File / Calculator panel, for each simulation you are going to calculate the population potentially exposed to NO2 concentrations higher than 40 µg/m³. Click on the Data File button to select Pop99 as v1, and Simulations NO2 (multivariate case) and Population exposure as m1 and m2 (macro variables).
Enter in the Transformation window the operation that will be applied to the variables. For each simulation and each cell, the simulated NO2 concentration is compared to the threshold of 40 µg/m³: if this value is exceeded, the number of inhabitants given by the Pop99 variable is stored, otherwise the number of exposed inhabitants is zero. As a consequence, the transformation is: m2=ifelse(m1>40,v1,0).
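The whole chain, from the thresholded realizations to the risk curve, can be sketched in a few lines (hypothetical names; the population and simulation arrays are aligned on the same active grid cells):

import numpy as np

def exposed_population(sims_no2, pop, cutoff=40.0):
    # One total per realization: the population of the cells whose
    # simulated NO2 concentration exceeds the cutoff.
    exposed = np.where(sims_no2 > cutoff, pop[None, :], 0.0)
    return exposed.sum(axis=1)

# totals = exposed_population(sims, pop)
# np.percentile(totals, [5, 50, 95]) then gives the risk-curve quantiles.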


(snap. 18.14-2)

The Tools / Simulation Post-Processing panel is finally used to estimate the population exposed to NO2 concentrations higher than 40 µg/m³ from the Population exposure macro variable. In order to run this operation, switch on Risk Curves and click on the Edit button. You are only interested in the Accumulations. For each realization (each index of the macro variable), the program calculates the sum of all the values of the variable which are greater than or equal to the cutoff; in our case, it calculates the total number of inhabitants (so choose a cutoff of 0, since the selection of the inhabitants living in an area exposed to more than 40 µg/m³ was handled in the preceding step). This sum is then multiplied by the unit surface of a cell, equal to 1000 m x 1000 m = 1,000,000 m²; as you are interested in the number of inhabitants (inhab), you need to divide by this figure. Switch on Draw Risk Curve on Accumulations to draw the risk curves on accumulations in a separate graphic and Print Statistics to print the accumulations of the target variable for each simulation.


(snap. 18.14-3)


(snap. 18.14-4)


(fig. 18.14-1)
Statistics for Simulation Post Processing
=========================================
Target Variable : Macro variable = Population exposure[xxxxx] [count=200]
Cutoff             = 0.00
Number of outcomes = 200
The 19716 values are processed using 1 buffer of 19716 data each
Cell dimension along X = 1000.00m
Cell dimension along Y = 1000.00m

Rank  Macro  Frequency  Accumulation  Surface
   1      1       0.50   105606inhab  8302.00km2
   2      2       1.00    98998inhab  8302.00km2
   3      3       1.50    84982inhab  8302.00km2
.../...
 198    198      99.00    91751inhab  8302.00km2
 199    199      99.50    91416inhab  8302.00km2
 200    200     100.00   120454inhab  8302.00km2

Statistics on Accumulation Risk Curve
=====================================
Smallest =  47911inhab
Largest  = 166422inhab
Mean     =  91171inhab
St. dev. =  21714inhab

Statistics on Surface Risk Curve
================================
Smallest = 8302.00km2
Largest  = 8302.00km2
Mean     = 8302.00km2
St. dev. =    0.00km2


Inputs/Outputs Summary
======================
Input Macro :
- Directory Name : Data
- File Name      : Grid
- Selection Name : Alsace
- Variable Name  : Population exposure[xxxxx]

Quantiles on Accumulation Risk Curves
=====================================
Q5.00  = 133941inhab
Q50.00 =  87857inhab
Q95.00 =  61018inhab

Quantiles on Accumulation Risk Curves (nearest simulation values)
=================================================================
P5.00  = 135165inhab
P50.00 =  88181inhab
P95.00 =  61037inhab

The number of inhabitants exposed to NO2 concentrations higher than 40 µg/m³ lies between 47911 and 166422, with a mean of 91171.

19 Soil pollution

This case study is based on a data set kindly provided by TOTAL Dépôts Passifs. Coordinates and pollutant grades have been transformed for confidentiality reasons.
The case study covers rather exhaustively a large panel of Isatis features. Its main objectives are to:
- estimate the 3D total hydrocarbons (THC) grades on a contaminated site using classical geostatistical algorithms,
- interpolate the site topography to exclude from the calculations the 3D grid cells above the soil surface,
- use simulations to perform risk analysis through:
  - the estimation of the local risk of exceeding a threshold of 200 mg/kg,
  - the quantification of the statistical distribution of the contaminated volume of soil.

Last update: Isatis version 2014


19.1 Presentation of the data set

19.1.1 Creation of a new study
First, a new study has to be created using the File / Data File Manager functionality.

(snap. 19.1-1)

It is then advised to verify the consistency of the units defined in the Preferences / Study Environment / Units panel:
- Input-Output Length Options window: unit in meters (Length), with its Format set to Decimal with Length = 10 and Digits = 2.
- Graphical Axis Units window: X, Y and Z units in meters.

19.1.2 Import of the data

19.1.2.1 Import of THC grades
The first data set is provided in the Excel file THC.xls (located in the Isatis installation directory/Datasets/Soil_Pollution). It contains the values of THC measured on the site.
The procedure File / Import / Excel is used to load the data. First you have to specify the path of your data using the Excel File button. In order to create a new directory and a new file in the current study, the Points File button is used to enter the names of these two items; click on the New Directory button and give a name, and do the same for the New File button, for instance:
- New directory = Data
- New file = THC

You have to tick the box First Available Row Contains Field Names and click on the Automatic button to load the variables contained in the file.


At last, you have to define the type of each variable:
- The coordinates easting (X), northing (Y) and elevation (Z) for X, Y and Cote (mNGF),
- The numeric 32-bit variables ZTN (mNGF), Prof (m) and Measure,
- The alphanumeric variable Mesh.

Finally, press Run.

(snap. 19.1-1)

19.1.2.2 Import of the topography

The second data set is provided in the Excel spreadsheet Topo.xls. It contains the topography values measured on the site, which will make it possible to limit the interpolation of THC grades to below the soil surface.
To import this file, use File / Import / Excel with the target directory Data and a new file named Topography.


(snap. 19.1-1)

Note - Be careful to define this file as a 2D file. In this step, the ZTN (mNGF) variable must be defined as a numeric 32-bit variable, not as the Z coordinate.

19.1.2.3 Import of the polygon

The next essential task for this study is to define the area of interest. This contour is loaded as a 3D polygon.
The polygon delineating the contour of the site is contained in an ASCII file, called Site_contour.pol, whose header describes the contents:
- the polygon level, which corresponds to the lines starting with the ** symbol,
- the contour level, which corresponds to the lines starting with the * symbol.

This polygon is read using the File / Polygons Editor functionality. This application stands as a graphic window with a large Application Menu. You first have to choose the New Polygon File option to create a file where the 3D polygon attributes will be stored: the file is called Site contour in the directory Data.


(snap. 19.1-1)

The next task consists in loading the contents of the ASCII Polygon File using the ASCII Import
functionality in the Application Menu.

(snap. 19.1-2)

The polygon now appears in the graphic window.


(snap. 19.1-3)

The final action consists in performing the Save and Run task in order to store the polygon file in
the general data file system of Isatis.

Note - This polygon could have been digitized inside Isatis, using a background map of the site.

19.2 Pre-processing
19.2.1 Creation of a target grid
All the estimation and simulation results will be stored as different variables of a new grid file
located in the directory Grid. This grid, called 3D grid, is created using the File / Create Grid File
functionality. It is adjusted on the Site contour polygon.


(snap. 19.2-1)


Using the Graphic Check option, the procedure offers the graphical capability of checking that the
new grid reasonably overlays the data points.

(snap. 19.2-2)


19.2.2 Delineation of the interpolation area


You have to create a polygon selection on the grid to delineate the interpolation area by the File /
Selection / From polygons functionality.

(snap. 19.2-3)
SELECTION/INTERVAL STATISTICS:
------------------------------
New Selection Name      = Site contour
Total Number of Samples = 182160
Masked Samples          = 32384
Selected Samples        = 149776


19.3 Visualization of THC grades using the 3D Viewer

Launch the 3D Viewer (Display / 3D Viewer).
Display the THC grades:
- Drag the Measure variable from the THC file in the Study Contents and drop it in the display window;
- From the Page Contents, right-click on the Points object (THC) to open the Points Properties window. In the Points tab, select the 3D Shape mode (sphere) and choose the Rainbow Reversed Isatis Color Scale in the Color tab.

(snap. 19.3-1)

Tick the Automatic Apply option to automatically assign the defined properties to the graphic object. If this option is not selected, modifications are applied only when clicking Display.
Display the site contour:
- Drag the Site contour file from the Study Contents and drop it in the display window.
- From the Page Contents, right-click on the Polygons object (Site contour) to open the Polygons Properties window. In the Color tab, select Constant and click the colored button next to it to assign the color of your choice to the polygon. In the Transparency tab, tick the Active Transparency option to define a level of transparency for the display, in order to see the samples inside.

Tick Legend to display the color scale in the display window. The legend is attached to the current representation. Specific graphic objects may be added from the Display menu, such as the graphic axes and corresponding valuations, the bounding box and the compass.
The Z Scale, in the tool bar, may also be modified to enhance the vertical scale.
Click on File / Save Page As to save the current graphic.


(fig. 19.3-1)


19.4 Exploratory Data Analysis


In the Statistics / Exploratory Data Analysis panel, the first task consists in defining the file and variable of interest, namely Measure. To achieve that, click on the Data File button and select the variable. By pressing the corresponding icons (eight in total), you can successively display several statistical representations, using default parameters or choosing appropriate ones.

(snap. 19.4-1)

For example, to calculate the histogram with 26 classes between 0 and 520 mg/kg (20-unit intervals), first click on the histogram icon (third from the left); a histogram calculated with default values is displayed, then enter the previous values in the Application / Calculation Parameters menu of the Histogram page. If you select the option Define Parameters Before Initial Calculations, you can skip the default histogram display.
Clicking on the base map icon (first from the left) displays the locations of the THC grades. Each active sample is represented by a cross proportional to the THC value. A sample is active if its value for the given variable is defined and not masked.


(fig. 19.4-1)

All graphic windows are dynamically linked together. If you want to locate the particularly high values, select the highest values on the histogram, right-click and choose the Highlight option. The highlighted values are now represented by blue stars on the base map.

(fig. 19.4-2)


Selecting another section (YOZ or XOZ) in the Application / Graphic Parameters panel of the base map window allows you to visualize the dispersion of THC grades in depth.

(snap. 19.4-2)


Then, an experimental variogram can be calculated by clicking on the variogram icon (7th statistical representation), with 10 lags of 15 m (consistent with the sampling mesh) and a tolerance on distance of 0.5 lag. A histogram displaying the number of pairs can be previewed by clicking on the Display Pairs button.

(snap. 19.4-3)


(snap. 19.4-4)

The number of pairs may be added to the graphic by switching on the appropriate button in the Application / Graphic Specific Parameters. The variogram cloud is obtained by ticking the box Calculate the Variogram Cloud in the Variogram Calculation Parameters.

(fig. 19.4-3)


The experimental variogram shows an important nugget effect. This variability is due to the fact that we compare pairs of points located in the XOY plane together with pairs of points in depth. The variability of the THC grades seems to be higher vertically than horizontally. You have to account for this phenomenon by calculating two experimental variograms, one for each direction. To do so, choose the Directional option. A Slicing Height of 0.5 m ensures that pairs from the two directions are not mixed.
Set Regular Directions to 1, choose Activate Direction Normal to the Reference Plane and choose the following parameters in Direction Definition:
- Label for regular direction: N0 (default name)
- Tolerance on angle: 90 (in order to consider all samples without overlapping)
- Lag value: 15 m (i.e. approximately the distance between boreholes)
- Number of lags: 10 (so that the variogram will be calculated over a 150 m distance)
- Tolerance on distance (proportion of the lag): 0.5

(snap. 19.4-5)


Then choose the following parameters for the direction normal to the reference plane:
- Label for orthogonal direction: D-90
- Tolerance on angle: 45
- Lag value: 1 m
- Number of lags: 4
- Tolerance on distance (proportion of the lag): 0.5

[Figure: experimental variograms of Measure in the D-90 (left) and N0 (right) directions, variogram value vs. distance (m), with the number of pairs labeled on each point.]

(fig. 19.4-4)
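
Note - For readers who want to cross-check such a calculation outside Isatis, the Python sketch below reproduces the essence of a directional experimental variogram with an angular tolerance and a lag tolerance. It is a minimal sketch only: the arrays xyz (sample coordinates) and thc (grades) are hypothetical exports of the data, and Isatis refinements such as the slicing height are not reproduced.

    import numpy as np

    def directional_variogram(coords, values, direction, lag, nlags,
                              tol_dist=0.5, tol_angle=90.0):
        # gamma(k) = mean of 0.5*(z_i - z_j)^2 over the pairs whose separation
        # falls in lag class k and within tol_angle degrees of 'direction'
        d = np.asarray(direction, float)
        d /= np.linalg.norm(d)
        gamma = np.zeros(nlags)
        npairs = np.zeros(nlags, dtype=int)
        for i in range(len(values) - 1):
            h = coords[i + 1:] - coords[i]            # vectors to later samples
            dist = np.linalg.norm(h, axis=1)
            pos = dist > 0
            cosang = np.zeros_like(dist)
            cosang[pos] = np.abs(h[pos] @ d) / dist[pos]
            angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            k = np.rint(dist / lag).astype(int)       # nearest lag class
            ok = (pos & (angle <= tol_angle) & (k >= 1) & (k <= nlags)
                  & (np.abs(dist - k * lag) <= tol_dist * lag))
            sq = 0.5 * (values[i + 1:] - values[i]) ** 2
            np.add.at(gamma, k[ok] - 1, sq[ok])
            np.add.at(npairs, k[ok] - 1, 1)
        with np.errstate(divide="ignore", invalid="ignore"):
            return gamma / npairs, npairs             # NaN where no pairs

    # directional_variogram(xyz, thc, (1, 0, 0), lag=15.0, nlags=10) mimics N0;
    # directional_variogram(xyz, thc, (0, 0, 1), lag=1.0, nlags=4, tol_angle=45.0)
    # mimics the vertical direction D-90.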

In order to perform the fitting step, it is now time to store the final experimental variogram with the
item Save in Parameter File of the Application menu of the Variogram Page. You will call it THC.


19.5 Fitting a variogram model


You must now define a Model which fits the experimental variogram calculated previously. In the Statistics / Variogram Fitting application, define:
- the Parameter File containing the set of experimental variograms: THC.
- the Parameter File in which you wish to save the resulting model: THC. As the experimental variogram and the variogram model are stored in different types of parameter files, you may define the same name for both.

(snap. 19.5-1)

Check the toggles Fitting Window and Global Window. The Fitting window displays one direction
at a time (you may choose the direction to display through Application / Variable & Direction
Selection...), and the Global window displays every variable (if several) and direction in one
graphic.


You can first use the variogram initialization by selecting a single structure or a combination of structures in Model Initialization, with or without a nugget effect. Pressing the Fit button in the Automatic Fitting tab, the procedure automatically fits the ranges and sills of the variogram (see the Variogram Fitting section of the User's Guide).
Then, go to the Manual Fitting tab and press the Edit button to access the Model Definition panel and modify the displayed model. Each modification of the model parameters can be validated using the Test button in order to update the graphic. The model must reflect:
- the specific variability along each direction (anisotropy),
- the general increase of the variogram.

Here, two different structures have been defined (in the Model Definition window, use the Add button to add a structure, and define its characteristics below, for each structure):
- an exponential model with a (practical) range of 50 m and a sill of 3360,
- an anisotropic linear model with a sill of 1000 and the following respective ranges along U, V and W: 115 m, 115 m and 0.85 m.

(snap. 19.5-2)
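
Note - The following Python lines show, under one common parameterization (exponential with a practical range, linear component whose stated sill is reached at the stated ranges, geometric anisotropy obtained by rescaling each axis by its range), how such a nested model can be evaluated. This is a sketch for intuition only; Isatis's internal parameterization of the linear model may differ.

    import numpy as np

    def gamma_model(h_u, h_v, h_w):
        # exponential structure: practical range 50 m, sill 3360 (isotropic)
        h_iso = np.sqrt(h_u**2 + h_v**2 + h_w**2)
        g_exp = 3360.0 * (1.0 - np.exp(-3.0 * h_iso / 50.0))
        # linear structure: reaches its sill of 1000 at 115/115/0.85 m and
        # keeps growing beyond (no upper bound)
        h_lin = np.sqrt((h_u / 115.0)**2 + (h_v / 115.0)**2 + (h_w / 0.85)**2)
        return g_exp + 1000.0 * h_lin

    # model curve along the vertical over the first 4 m:
    print(gamma_model(0.0, 0.0, np.linspace(0.0, 4.0, 5)))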


[Figure: the same two directional variograms of Measure (D-90 left, N0 right) overlaid with the fitted two-structure model.]

(fig. 19.5-1)

This model is saved in the Parameter File for future use by clicking on the Run (Save) button.


19.6 Selection of the duplicates


In order to avoid matrix inversion problems during the kriging, a new Selection variable is created. The Tools / Look for Duplicates panel is designed to detect data points that are too close to each other and allows you to mask them. Samples separated by a distance smaller than 0.1 m will be declared duplicates. The Mask all Duplicates but First option keeps the first duplicate of each group unmasked (i.e. the duplicate with the smallest X-coordinate).

(snap. 19.6-1)
Pressing Run, an Isatis message is printed out, informing you that two duplicates have been found and masked through the Without duplicates selection variable:

Duplicates below a distance of : 0.10m
--------------------------------
Total number of discarded samples = 2
Number of groups                  = 2
Number of duplicates              = 2
Minimum grouping distance         = 0.00m

SELECTION/DUPLICATES STATISTICS:
--------------------------------
New Selection Name      = Without duplicates
Total Number of Samples = 784
Masked Samples          = 2
Selected Samples        = 782

Note - The presence of duplicates is generally visible on the variogram cloud by the existence of
pairs of points at zero distance.
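
Note - The detection itself can be emulated with a few lines of Python, for instance with scipy's k-d tree. This is a minimal sketch with a hypothetical coords array; it assumes the samples are ordered by increasing X, so that keeping the first point of each pair matches the Mask all Duplicates but First behavior.

    import numpy as np
    from scipy.spatial import cKDTree

    def mask_duplicates(coords, min_dist=0.1):
        # selection = 1 for kept samples, 0 for masked duplicates
        keep = np.ones(len(coords), dtype=int)
        for i, j in sorted(cKDTree(coords).query_pairs(r=min_dist)):
            if keep[i]:          # first sample of the pair stays active
                keep[j] = 0
        return keep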


19.7 Kriging of THC grades


The kriging procedure Interpolate / Estimation / (Co-)Kriging requires the definition of:
- the Input information: the variable Measure in the THC file (with the selection Without duplicates),
- the following variables in the Output Grid File, where the results will be stored (with the selection Site contour):
  - the estimation result in THC kriging,
  - the standard deviation of the estimation error in THC std kriging,
- the Model: THC,
- the Neighborhood: moving 3D.

To define the neighborhood, you have to click on the Neighborhood button and you will be asked to select or create a new set of parameters; in the New File Name area enter the name moving 3D, then click on OK or press Enter and you will be able to set the neighborhood parameters by clicking on the respective Edit button:
- The neighborhood type is a moving neighborhood. It is an ellipsoid with No Rotation;
- Set the dimensions of the ellipsoid to 100 m, 100 m and 2 m along the vertical direction;
- Minimum number of samples: 1;
- Number of angular sectors: 1;
- Optimum Number of Samples per Sector: 20.
Press OK to validate the Neighborhood Definition.


(snap. 19.7-1)


(snap. 19.7-2)

In the Standard (Co-)Kriging panel, a special feature allows you to test the choice of parameters,
through a kriging procedure, on a graphical basis (Test button). By pressing once on the left button
of the mouse, the target grid is shown (in fact a XOY section of it, you may select different sections
through Application / Selection For Display...). You can then move the cursor to a target grid node:
click once more to initiate kriging. The samples selected in the neighborhood are highlighted and
the weights are displayed. The bottom of the screen recalls the estimation value, its standard
deviation and the sum of the weights. The target grid node may also be entered in the Test Window /
Application / Selection of Target option, for instance the node [37,55,10].


(snap. 19.7-3)


In the Application menu of the Test Graphic Window, click on Print Weights & Results. This produces a printout of:
- the calculation environment: target location, model and neighborhood,
- the kriging system,
- the list of the neighboring data and the corresponding weights,
- the summary of this kriging test.


Results for : Punctual
- For variable V1
Number of Neighbors           = 20
Mean Distance to the target   = 23.55m
Total sum of the weights      = 1.000000
Sum of positive weights       = 1.108689
Lagrange parameters #1        = 8.146024
Estimated value               = 23.309070
Estimation variance           = 1676.879631
Estimation standard deviation = 40.949721
Signal to Noise ratio (final) = 2.600067
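
Note - The numbers above come from solving an ordinary kriging system. A compact Python illustration of that system is given below; c_matrix and c_target are hypothetical covariance values (total sill minus variogram) that would be derived from the THC model for the selected neighbors.

    import numpy as np

    def ordinary_kriging(c_matrix, c_target, values):
        # Solve [C 1; 1' 0] [w; mu] = [c0; 1]: kriging weights w plus the
        # Lagrange parameter mu enforcing sum(w) = 1 (unbiasedness)
        n = len(values)
        lhs = np.zeros((n + 1, n + 1))
        lhs[:n, :n] = c_matrix
        lhs[:n, n] = lhs[n, :n] = 1.0
        sol = np.linalg.solve(lhs, np.append(c_target, 1.0))
        w, mu = sol[:n], sol[n]
        estimate = w @ values
        # estimation variance: C(0) - sum(w * c0) - mu, with C(0) taken from
        # the diagonal of the data-to-data covariance matrix
        variance = c_matrix[0, 0] - w @ c_target - mu
        return estimate, variance, w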

You may also ask for a 3D representation of the search ellipsoid if the 3D Viewer application is already running: from the Application menu, ask to Link to 3D Viewer. A 3D representation of the search ellipsoid neighborhood is displayed, and the samples used for the estimation of the node are highlighted. A new graphic object, neighborhood, appears in the Page Contents, from which you may change the graphic properties (color, size of the samples coding the weights or the THC values, etc.).

(fig. 19.7-1)

Click on Run to interpolate the data on the entire grid.


19.8 Intersection of interpolation results with the topography


The aim of this part is to interpolate the site topography, so as not to take into account, in the simulation results, the 3D grid cells located above the soil surface.
The idea is to copy the topography interpolated on a 2D grid to the 3D grid and to select the cells under the surface by comparing the topography values with the Z-coordinate.
Following this section is not relevant if the topography of your site can be considered as flat.

19.8.1 Creation of a 2D grid


The estimation of the topography is calculated on a 2D grid whose parameters along X and Y are the same as those of the previous 3D grid (origin and resolution unchanged). This new grid is saved in a new grid file 2D grid.

(snap. 19.8-1)


A selection based on the Site contour polygon is also created on the new 2D grid, so that the topography is not interpolated outside the site area.

(snap. 19.8-2)
SELECTION/INTERVAL STATISTICS:
------------------------------
New Selection Name      = Site contour
Total Number of Samples = 7920
Masked Samples          = 1408
Selected Samples        = 6512


19.8.2 Exploratory data analysis


The experimental variogram of the topography is computed in the Statistics / Exploratory Data
Analysis panel.

(snap. 19.8-3)

A first experimental variogram is calculated with 10 lags of 15m and a proportion of the lag of 0.5.


[Figure: base map of the ZTN (mNGF) samples (X vs. Y, in m) and experimental variogram of ZTN (mNGF) vs. distance (m), with the number of pairs labeled on each point.]

(fig. 19.8-1)

This variogram shows an important nugget effect, which does not seem to be due to only one sample. A variogram map can be computed by clicking on the last statistical representation of the panel. This specific tool allows you to analyze the spatial continuity of the variable of interest in all the directions of space, and especially to detect possible anisotropies.


The following parameters are defined:
- 14 directions,
- 10 lags of 15 m as previously,
- a tolerance of 0 lag, so that a given pair of points is not counted in two consecutive classes,
- a tolerance on directions of 3 sectors, to smooth the map and highlight the principal directions of anisotropy.

(snap. 19.8-4)


Studying the map, you can see that the variability seems to be higher along Y than along X up to a distance of 80 m. The variograms along these two directions are directly calculated from the variogram map: pick the N90 direction label, right-click and choose Active Direction (ditto for the N0 direction).

[Figure: variogram map of ZTN (mNGF) showing the directional sectors (N0 to N334), and the directional experimental variograms along N0 and N90 extracted from it, variogram value vs. distance (m).]

(fig. 19.8-2)

The anisotropic variogram is saved in a parameter file under the name Topography anisotropic.


It is fitted by a variogram model composed of:
- an anisotropic spherical model with a sill of 0.14 and respective ranges along U and V of 135 m and 75 m,
- an exponential model with a range of 20 m and a sill of 0.13.

(fig. 19.8-3)

19.8.3 Kriging of topography


The kriging of the topography requires the definition of:
- the ZTN (mNGF) variable as Input File,
- two new variables, Topography anisotropic kriging and Topography anisotropic std kriging, as Output File in the 2D grid file, to store respectively the estimation result and the standard deviation of the estimation error,
- the Model of variogram Topography anisotropic,
- the new unique Neighborhood.

(snap. 19.8-5)


(snap. 19.8-6)

19.8.4 Displaying the results of the estimation of topography


The kriging results of topography are now visualized using several combinations of the display
capabilities.
You are going to create a new Display template, that consists in an overlay of a grid raster and a
representation of the topography by isolines. All the display facilities are explained in detail in the
Displaying & Editing Graphics chapter of the Beginners Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page pops up, together with a Contents window. You have to specify the contents of your graphic in this window. To achieve that:


- Firstly, give a name to the template you are creating: Topography. This will allow you to easily display this template again later.
- In the Contents list, double click on the Raster item. A new window appears, in order to let you specify which variable you want to display and with which color scale:
  - In the Data area, in the 2D grid file select the variable Topography anisotropic kriging with the Site contour selection,
  - Specify the title that will be given to the Raster part of the legend, for instance Topo (mNGF),
  - In the Graphic Parameters area, specify the Color Scale you want to use for the displayed raster. You may use an automatic default color scale, or create a new one specifically dedicated to the variable of interest. To create a new color scale, click on the Color Scale button, double-click on New Color Scale, enter a name: Topo, and press OK. Click on the Edit button. In the Color Scale Definition window:
    - In the Bounds Definition, choose User Defined Classes.
    - Click on the Bounds button and enter the min and max bounds (respectively 27 and 30).
    - Change the Number of Classes (30).
    - Switch on the Invert Color Order toggle in order to assign the red colors to the large values of topography.
    - Click on the Undefined Values button and select Transparent.
    - In the Legend area, switch off the Display all Tick Marks button and enter 0.5 as the step between the tick marks. Then, specify that you do not want your final color scale to exceed 6 cm. Switch off the Display Undefined Values button.
    - Click on OK.
- In the Item contents for: Raster window, click on Display to display the result.


(snap. 19.8-7)
l

Back in the Contents list, double-click on the Isolines item. Click Grid File to open a File
Selector to select the 2D grid file then the variable to be represented, Topography anisotropic
kriging.
m

The Legend Title is not active as no legend is attached to this type of representation.

The isolines representation requires the definition of classes. A class is an interval of values
separated by a given step. In the Data Related Parameters area, switch on the C1 line, enter
27 and 30 as lower and upper bounds and choose a step equal to 0.2.

Not to overload the graphic, the Label Flag attached to the class is left inactive.

Close the current Item Contents and click on Display.


In the Items list, you can select any item and decide whether or not you want to display its legend. Use the Up and Down arrows to modify the order of the items in the final display.
Close the Contents window. Your final graphic window should be similar to the one displayed hereafter.

(snap. 19.8-8)

The * and [Not saved] symbols respectively indicate that some recent modifications have not been
stored in the Topography graphic template, and that this template has never been saved. Click on
Application / Store Page to save them. You can now close your window.


Create a second template, Topography std kriging, to display the kriging standard deviation, using the Raster item in the Contents list and a new Color Scale. To overlay the ZTN (mNGF) data locations on the grid raster representing the estimation error:
- Back in the Contents list, double-click on the Basemap item to represent the ZTN (mNGF) variable with symbols proportional to the variable value. A new Item Contents window appears. In the Data File area, select the Data / Topography / ZTN (mNGF) variable as the proportional variable. Enter Topo data as the Legend Title. Leave the other parameters unchanged; by default, black crosses will be displayed with a size proportional to the values of topography. Click on Display Current Item to check your parameters, then on Display to see all the previously defined components of your graphic. Click on OK to close the Item Contents panel.
- To remove the white edge, click on the Display Box tab and select the Containing a set of items mode. Choose the raster to define the display box correctly.
- Finally, click on Display. The result should be similar to the one displayed hereafter.

(fig. 19.8-4)


19.8.5 Selection of the grid cells under the surface of the soil
The first task consists in copying the estimation of the topography from the 2D grid to the 3D grid
using Tools / Migrate / Grid to Point.

(snap. 19.8-9)

A new Selection variable Under Topo is created using the File / Calculator to store the result of the comparison between the estimated topography and the Z-coordinate. The 3D grid cells whose Z-coordinate is higher than the corresponding topography value are masked (the cells outside the site contour are also masked, because the Site contour selection variable is activated on the input file). You have to apply the following transformation in File / Calculator:
s1=ifelse(v1<v2,1,0)


(snap. 19.8-10)

This Under topo selection will be used throughout the rest of the study (it will be activated on the output files and in the graphic representations).


19.9 3D display of the estimated THC grades


You can start from the THC grades page created previously, so that the display of the data does not have to be redone. Drag and drop the THC kriging variable from the 3D grid file into the display window. In the Page Contents, right-click on the 3D grid object to edit its properties:
- in the 3D Grid tab, tick the Selection toggle, choose the Under topo selection and activate the Automatic Apply function;
- in the Color tab, make sure that the selected variable is THC kriging. Apply a THC Isatis Color Scale created in the File / Color Scale functionality (25 classes from 0 to 500 mg/kg);
- in the Cell Filter tab, tick the Activate Cell Filter toggle and choose the V is Defined option, so that the cells with undefined values (which are colored in grey by default) are not displayed;

(fig. 19.9-1)
- investigate inside the kriged model:
  - open the clipping plane functionality from Display / Clipping Plane: the clipping plane appears across the block model;
  - go into Selecting mode by pressing the arrow button in the function bar;
  - click on the clipping plane rectangle and drag it next to the block model for better visibility;
  - click on one of the clipping plane axes to change its orientation (be careful to target precisely the axis itself, in dark grey, not its squared extremity nor the central white tube);
  - open the Points Properties window of the THC file: set the Allow Clipping toggle OFF (ditto for the polygon);
  - click on the clipping plane's central white tube and drag it in order to translate the clipping plane along the axis. You may also benefit from the clipping control parameters available on the right of the graphic window, in order to clip a slice with a fixed width and along the main grid axes;
  - you can click on one cell of particular interest or on a sample: its information is displayed in the top right corner (take care to deactivate the polygon, so as not to select it).

(snap. 19.9-1)


19.10 THC simulations


Kriging provides the best estimate of the variable at each grid node. By doing so, it does not produce an image of the true variability of the phenomenon. Performing a risk analysis usually requires computing quantities that have to be derived from a model representing the actual variability. In this case, advanced geostatistical techniques such as simulations have to be used.
This is for instance the case here if you want to estimate the probability of THC exceeding a given threshold. Since thresholding is not a linear operator applied to the concentration, applying the threshold to the kriged result (which is obtained by a linear operator) can lead to an important bias. The same problem arises when estimating the statistical distribution of a contaminated volume of soil.
Simulation techniques generally require a multi-gaussian framework: each variable thus has to be transformed into a normal distribution beforehand, and the simulation result must be back-transformed to the raw distribution afterwards.

19.10.1 Gaussian transformation


A conditional simulation corresponds to a grid of values having a normal distribution and honoring the model. Moreover, it honors the data points, as it uses a conditioning step based on kriging, which requires the definition of a neighborhood. The simulations therefore also need the gaussian transformation and a variogram model based on this normal variable.
Using the Statistics / Gaussian Anamorphosis Modeling procedure, you can fit and display this
anamorphosis function and transform the raw variable into a new gaussian variable Measure
Gauss.
Select the Measure variable with the Without duplicates selection on Input data.
The Interactive Fitting button overlays the experimental anamorphosis with its model expanded in
terms of Hermite polynomials: this step function gives the correspondence between each one of the
sorted data (vertical axis) and the corresponding frequency quantile in the gaussian scale
(horizontal axis). A good correspondence between the experimental values and the model is
obtained by choosing an appropriate number of Hermite polynomials; by default Isatis suggests the
use of 30 polynomials, but you can modify this number and choose 50 polynomials.
Switch on the Gaussian Transform and create a new variable Measure Gauss on the Output data. Three interpolation options are available; we recommend the Empirical Inversion method in this case. Save the anamorphosis by clicking on the Point Anamorphosis button and name it THC. Finally, click on Run.


(snap. 19.10-1)

(fig. 19.10-1)


Using the Statistics / Exploratory Data Analysis on this new variable, you can first compute its basic statistics: the mean is 0.00 and the variance is 0.96. The distribution of the gaussian variable is not symmetric, with a minimum of -1.2, a maximum of 3.3 and a large proportion of identical low values. This phenomenon is due to the large proportion of values equal to the detection limit and to the anamorphosis method used. The gaussian value is calculated from the empirical cumulative distribution: two points with the same raw value will get the same gaussian value. This method is preferred to the frequency inversion method, which gives different gaussian values to two points with the same raw value. In the context of this study, the asymmetry of the gaussian variable is not very important, because the threshold of 200 mg/kg that we consider is higher than the detection limit.
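
Note - The empirical-inversion idea can be illustrated in a few lines of Python: each raw value is mapped to the gaussian quantile of its mid-rank cumulative frequency, ties receiving one common gaussian value. This sketch ignores the Hermite-polynomial modeling that Isatis performs on top of it; raw is a hypothetical array of the Measure values.

    import numpy as np
    from scipy.stats import norm

    def normal_score(raw):
        raw = np.asarray(raw, float)
        order = np.argsort(raw, kind="mergesort")
        ranks = np.empty(len(raw))
        ranks[order] = np.arange(1, len(raw) + 1)
        for v in np.unique(raw):      # average the ranks of ties so that equal
            sel = raw == v            # raw values share one gaussian value
            ranks[sel] = ranks[sel].mean()
        return norm.ppf((ranks - 0.5) / len(raw))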

[Figure: histogram of Measure gauss. Nb Samples: 782, Minimum: -1.20, Maximum: 3.30, Mean: 0.00, Std. Dev.: 0.96.]

(fig. 19.10-2)

The experimental variogram is well structured. The following one is computed using the same calculation parameters as in the non-gaussian case. To load the parameters of an existing variogram, click on Load Parameters from Standard Parameter File... and select the experimental variogram THC.


[Figure: experimental variograms of Measure gauss in the D-90 (left) and N0 (right) directions, with the number of pairs labeled on each point.]

(fig. 19.10-3)

This variogram is saved in a file called THC gauss.


In Statistics / Variogram Fitting, you fit a model composed of:
- an anisotropic exponential model with a sill of 0.58 and the following respective ranges along U, V and W: 43 m, 43 m and 6 m,
- an anisotropic linear model with a sill of 0.25 and the following respective ranges along U, V and W: 115 m, 115 m and 2.4 m.

[Figure: experimental variograms of Measure gauss (D-90 left, N0 right) overlaid with the fitted two-structure model.]

(fig. 19.10-4)

19.10.2 Creation of a remediation grid


According to the remediation strategy, a 3D grid of 15 x 15 x 0.5 m is created in order to calculate the volume of soil to be excavated. Select the option Match Geometry to an Existing Grid to create a new grid 3D grid remediation that fits the existing 3D grid. The new grid will have the same extension as the existing grid, but the number of blocks, the size of each block and the grid origin will be changed. Select Coarsen Mesh and specify a number of 6 blocks to be merged along the X and Y axes (corresponding to a size of 6 x 2.5 m = 15 m) and 1 along Z (we keep the same size of 0.5 m). Click on Run.


(snap. 19.10-2)
*** Create Grid File ***
Grid Create Mode       : Coarsen Mesh
Existing Directory Name: Grid
Existing Grid Name     : 3D grid
Input Selection Name   : None
X Nodes Number         : 6
Y Nodes Number         : 6
Z Nodes Number         : 1
Grid Directory Name    : Grid
Grid Name              : 3D grid remediation
NX=  10   X0= -46.25m   DX= 15.00m
NY=  22   Y0=  35.75m   DY= 15.00m
NZ=  23   Z0=  20.00m   DZ=  0.50m
Rotation: No rotation

As previously, create the selection variable Under topo on the 3D grid remediation, so that the cells above the surface are not taken into account in the computation of contaminated soil.

*** Variable Statistics ***
Directory Name : Grid
File Name      : 3D grid remediation
Variable Name  : Under topo

Variable Type     : Float (Selection)
Bit Length        : 1
Unit              :
Last Modification : Jan 30 2013 17:35:15
Size              : 737 bytes
Physical Path     : \\CRUNCHER\etudes\DOC_CASE_STUDIES\Isatis\CS_Isatis_130\Soil pollution\GTX\DIRE.2\FILE.3\VARI.10
Printing Format   : Integer, Length = 3
Variable Description :
Creation Date: Jan 30 2013 17:35:06

Number of Selected Samples : 3210 / 5060


19.10.3 Turning bands simulations


To perform these simulations, you are going to use the turning bands method (Interpolate / Conditional Simulations / Turning Bands). You use the same moving 3D neighborhood as in the kriging step. The additional parameters consist in:
- the name of the Macro Variable: each simulation is stored in this macro variable with an attached index,
- the number of simulations: 200 in this exercise,
- the starting index for numbering the simulations: 1 in this exercise,
- the gaussian back-transformation, performed using the anamorphosis function THC. In a first run, this anamorphosis will be disabled in order to study the gaussian simulations,
- the seed used for the random number generator: 423141 by default. This seed allows you to perform large sets of simulations in several steps: each step will differ from the previous one if the seed is modified.
The final parameters are specific to the simulation technique. When using the turning bands method, you simply need to specify the number of bands: a rule of thumb is to enter a number much larger than the number of rows or columns of the grid, and smaller than the total number of grid nodes; 500 bands are chosen in our exercise.
You can verify on some simulations in the gaussian space that the histogram is indeed gaussian and that the experimental variogram reproduces the structure of the THC gauss model, particularly at small distances. After this quality control, you can enable the gaussian back-transformation THC and perform block simulations on the 3D grid remediation.

(fig. 19.10-5)


The Type of calculation is set as Block. Block simulations are obtained by averaging simulated
points. Each block is discretized in sub-blocks according to the block discretization parameters and
each sub-block is simulated as a point.
The block discretization is defined in the Neighborhood window: it will be set to 3x3x2 for quicker
calculations.

(snap. 19.10-3)


(snap. 19.10-4)

(snap. 19.10-5)


By clicking on the Calculate Cvv button, the average covariance within each block is calculated using its discretization. This covariance should be practically constant for all the blocks.

Calculation of the Mean Block Covariance :
------------------------------------------
Regular discretization : 3 x 3 x 2
In order to account for the randomization, 11 trials are performed
(the first value will be kept for the Kriging step)
Variables Measure gauss
Cvv = 0.323526
Cvv = 0.316433
Cvv = 0.323884
Cvv = 0.326204
Cvv = 0.328536
Cvv = 0.326872
Cvv = 0.330179
Cvv = 0.330183
Cvv = 0.326187
Cvv = 0.323799
Cvv = 0.326540
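
Note - The order of magnitude of Cvv can be checked by hand: discretize one block, evaluate the covariance between every pair of discretization points and average. The Python sketch below does this for the exponential structure of the gaussian model only (the unbounded linear structure has no stationary covariance and is ignored here), so the value is indicative rather than exact.

    import numpy as np

    def mean_block_covariance(dx=15.0, dy=15.0, dz=0.5, ndisc=(3, 3, 2), seed=0):
        rng = np.random.default_rng(seed)
        nx, ny, nz = ndisc
        ox, oy, oz = rng.uniform(0.0, 1.0, 3)   # one random offset per axis
        xs = (np.arange(nx) + ox) * dx / nx
        ys = (np.arange(ny) + oy) * dy / ny
        zs = (np.arange(nz) + oz) * dz / nz
        pts = np.array([(x, y, z) for x in xs for y in ys for z in zs])
        d = pts[:, None, :] - pts[None, :, :]
        # exponential covariance, sill 0.58, practical ranges 43/43/6 m
        h = np.sqrt((d[..., 0] / 43.0) ** 2 + (d[..., 1] / 43.0) ** 2
                    + (d[..., 2] / 6.0) ** 2)
        return (0.58 * np.exp(-3.0 * h)).mean()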

Note - Performing the simulations on the 2.5 x 2.5 x 0.5 m grid allows you to test different sizes of remediation grid. A Copy Statistics / Grid -> Grid computes, for each block of the remediation grid, the mean of a given simulation on the 2.5 x 2.5 x 0.5 m grid. This calculation is achieved for each simulation (i.e. for each simulation index) through a journal file.

%LOOP i = 1 TO 200
#
******* Bulletin Name *******  =B=  Copy Grid Statistics to Grid
***** Bulletin Version ******  =N=  600
Input Directory Name           =A=  Grid
Input File Name                =A=  3D grid
Input Selection Name           =A=  Under topo
Variable Name                  =A=  Simulations THC[$0i]
Minimum Bound Name             =A=  None
Maximum Bound Name             =A=  None
Output Directory Name          =A=  Grid
Output File Name               =A=  3D grid remediation
Output Selection Name          =A=  Under topo
Number Name                    =A=  None
Minimum Name                   =A=  None
Maximum Name                   =A=  None
Mean Name                      =A=  Simulations THC block[$0i]
Std dev Name                   =A=  None
#
%ENDLOOP
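
Note - In Python, the grid-to-grid averaging performed by this loop amounts to a reshape followed by a mean, as sketched below for one macro variable exported as a hypothetical array of shape (number of simulations, nz*ny*nx) on the fine grid; the merging factors 6, 6 and 1 are those used above.

    import numpy as np

    def block_average(point_sims, nx, ny, nz, fx=6, fy=6, fz=1):
        # average fine cells (2.5 x 2.5 x 0.5 m) into blocks of fx*fy*fz cells
        sims = point_sims.reshape(-1, nz, ny, nx)
        sims = sims.reshape(-1, nz // fz, fz, ny // fy, fy, nx // fx, fx)
        return sims.mean(axis=(2, 4, 6))   # shape: (nsim, nz/fz, ny/fy, nx/fx)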


19.11 Simulation post-processing


One main advantage of simulations is the possibility to apply non-linear calculations (for example applying different cut-off grades simultaneously, calculating the probability for a grade to be above a threshold, or the volume of contaminated soil).

19.11.1 Statistical and probability maps


The Tools / Simulation Post Processing panel provides a procedure for the post-processing of a macro variable. Considering the 200 simulations, you ask the procedure to perform sequentially the following tasks:
- calculation of the mean of the 200 simulations,
- computation, for the cutoff of 200 mg/kg, of the probability to exceed this threshold.

(snap. 19.11-1)

Check the toggle Statistical Maps and press Edit in order to define the output file variables
Simulations THC mean and Simulations THC std.


(snap. 19.11-2)

Check the toggle Iso Cutoff Maps and press Edit in order to define the cutoff of 200 mg/kg.

(snap. 19.11-3)

(snap. 19.11-4)

Close and press Run.
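
Note - Conceptually, these two maps are simple statistics across the realizations; the Python sketch below shows them for a hypothetical array sims of shape (number of realizations, number of cells).

    import numpy as np

    def post_process(sims, cutoff=200.0):
        return {
            "mean": sims.mean(axis=0),                   # statistical map
            "std": sims.std(axis=0),
            "prob_above": (sims >= cutoff).mean(axis=0), # iso-cutoff map
        }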


19.11.2 Risk curves on volume


Conversely to the previous statistics, which are calculated over the whole set of realizations but for each node of the grid, the program here works realization by realization and computes global statistics. These statistics are expressed as Risk Curves. Each realization produces two quantities, illustrated by the sketch after this list:
- the Accumulations. For each realization (each index of the Macro Variable), the program calculates the sum of all the values of the variable which are greater than or equal to the Cutoff (if the value is smaller than the cutoff, the cell is not taken into account). This sum is then multiplied by the unit surface of the cell (or the unit volume of the block in 3D).
- the Surfaces/Volumes. Instead of calculating the sum of the values for each realization, the program only counts the nodes where the Accumulation has been calculated. This number is then multiplied by the unit surface of the cell (or the unit volume of the block in 3D). This curve provides, for each realization of the variable, the surface (in 2D) or the volume (in 3D) of the cells (or blocks) where the variable is greater than or equal to the cutoff.
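
Note - Both quantities are straightforward to express in Python; this minimal sketch uses a hypothetical array sims of block simulations with the 15 x 15 x 0.5 m block volume, and also returns the points of the inverse cumulative (risk) curve on volumes.

    import numpy as np

    def risk_curves(sims, cutoff=200.0, block_volume=15.0 * 15.0 * 0.5):
        above = sims >= cutoff
        acc = np.where(above, sims, 0.0).sum(axis=1) * block_volume  # accumulations
        vol = above.sum(axis=1) * block_volume                       # volumes
        vol_sorted = np.sort(vol)[::-1]               # decreasing volume cutoffs
        prob = np.arange(1, len(vol) + 1) / len(vol)  # P(volume > cutoff)
        return acc, vol, vol_sorted, prob

    # quantiles such as Q5/Q50/Q95 on the volumes follow directly, e.g.:
    # np.quantile(vol, [0.95, 0.50, 0.05])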

(snap. 19.11-5)

The cutoff of 200 mg/kg is entered in the main panel. Tick the Risk Curves option and press Edit to define:
- the Unit Name used to display the results in the printout. By default, the volume values are expressed in m3; in our case, they can be expressed in 10^3 m3 (equal to 1000 m3) so as not to clutter the results.
- the Global Statistics (on Polygons):
  - Draw Risk Curve on Volumes. The volume values of all the realizations are sorted in decreasing order and displayed as an inverse cumulative histogram. The abscissa of this graph (cutoff on the volumes) is plotted against the probability to get a result greater than this value. The greater the volume cutoff, the smaller the probability.
  - Print Statistics. The accumulation of the target variable and the volume of soil contaminated by THC values higher than 200 mg/kg are printed in the Isatis Message Window for each realization. The order in which these results are printed depends on the specified Sorting Order.

(snap. 19.11-6)

Click Apply to compute and display the risk curves and leave the dialog box open.


(fig. 19.11-1)

The graphic figure containing the risk curves offers an Application Menu with a single item, Graphic Parameters, where you can define quantiles. Tick the Highlight Quantiles option to compute the quantiles of your choice and click on Show the Simulation Value on Graphic to display the simulation value corresponding to each previously selected quantile on the graphic.

(snap. 19.11-7)


Statistics for Simulation Post Processing
=========================================
Target Variable : Macro variable = Simulations THC block[xxxxx] [count=200]
Cutoff             = 200.00
Number of outcomes = 200
The 5060 values are processed using 1 buffers of 5060 data each
Cell dimension along X = 15.00m
Cell dimension along Y = 15.00m
Cell dimension along Z =  0.50m

Rank  Macro  Frequency    Accumulation        Volume
   1      1       0.50    1962.79 10^3 m3     6.86 10^3 m3
   2      2       1.00    2088.62 10^3 m3     6.98 10^3 m3
   3      3       1.50    4593.49 10^3 m3    14.63 10^3 m3
.../...
 198    198      99.00    2546.62 10^3 m3     8.44 10^3 m3
 199    199      99.50    2677.48 10^3 m3     8.44 10^3 m3
 200    200     100.00    4049.82 10^3 m3    12.94 10^3 m3

Statistics on Accumulation Risk Curve
=====================================
Smallest = 1577.43 10^3 m3
Largest  = 7427.48 10^3 m3
Mean     = 3222.41 10^3 m3
St. dev. =  992.60 10^3 m3

Statistics on Volume Risk Curve
===============================
Smallest =  5.51 10^3 m3
Largest  = 21.60 10^3 m3
Mean     = 10.25 10^3 m3
St. dev. =  2.83 10^3 m3

Inputs/Outputs Summary
======================
Input Macro :
- Directory Name : Grid
- File Name      : 3D grid remediation
- Selection Name : Under topo
- Variable Name  : Simulations THC block[xxxxx]

Quantiles on Volume Risk Curves
===============================
Q5.00  = 15.24 10^3 m3
Q50.00 =  9.90 10^3 m3
Q95.00 =  6.41 10^3 m3

Quantiles on Volume Risk Curves (nearest simulation values)
===========================================================
P5.00  = 15.30 10^3 m3
P50.00 =  9.90 10^3 m3
P95.00 =  6.41 10^3 m3

The volume of soil contaminated by a concentration of THC higher than 200 mg/kg is between 5.51 x 10^3 and 21.60 x 10^3 m3, with a mean of 10.25 x 10^3 m3.


19.12 Displaying graphical results of risk analysis with the 3D Viewer


Drag and drop the Probability 200 mg/kg variable from the 3D grid remediation file into the display window. In the Page Contents, right-click on the 3D grid object to edit its properties:
- in the 3D Grid tab, tick the Selection toggle, choose the Under topo selection and activate the Automatic Apply function;
- in the Color tab, make sure that the selected variable is Probability 200 mg/kg. Apply a Proba Isatis Color Scale created in the File / Color Scale functionality (25 classes from 0 to 1);
- in the Cell Filter tab, tick the Activate Cell Filter toggle and choose the V > option to display only the cells with a probability value higher than 0.2, for example.
You can add, as previously, the Site contour polygon to delineate the area, and the THC data to compare the measured values with the probability of exceeding a threshold of 200 mg/kg in a remediation cell.

(snap. 19.12-1)


(fig. 19.12-1)

20. Bathymetry

This case study is based on a data set kindly provided by IFREMER, the French Research Institute for Exploitation of the Sea, from La Rochelle (www.ifremer.fr).
The case study illustrates how to set up, from several campaigns, a unified bathymetric model which ensures the consistency of both:
- the data processing, merging and modeling procedures,
- the bathymetry product delivered for a whole region.
The last paragraph focuses on an innovative methodology using local parameters to get a better adequacy between the geostatistical model and the data.
Last update: Isatis version 2014


20.1 Presentation of the Data set


20.1.1 Creation of a new study
First, before loading the data, create a new study using the File / Data File Manager functionality.

(snap. 20.1-1)

It is then advised to check the consistency of the units defined in the Preferences / Study Environment / Units panel:
- Input-Output Length Options window: unit in meters (Length), with its Format set to Decimal with Length = 10 and Digits = 2.
- Graphical Axis Units window: X and Y units in kilometers.

20.1.2 Import of the data


20.1.2.1 Import of bathymetry data sets
The first data set is provided in the Ascii file DDE_Boyard_2000.csv (located in the Isatis installation
directory). It contains the values of bathymetry measured on the Fort Boyard area. The coordinates
are defined in a geographic system in latitude/longitude. As Isatis is designed to work with sample
locations defined in a Cartesian coordinate system, it is necessary to compute the Cartesian
coordinates X and Y from the geographic coordinate system using a projection system. We choose
to work in a Lambert zone II (extended) projection.
The procedure File / Import / ASCII is used to load the data. First you have to specify the path of
your data using the ASCII Data File button (Isatis installation directory/Datasets/Bathymetry).
The second step consists in creating an external file referred to as the Header File. This header file
will contain a full description of the contents of the data file to be read (type of organization, details
on the variables, description of the input fields). It can be included at the beginning of the data file
or, as in our case, separated and created from scratch.
You can click on the Preview button to bring up the Data and Header Preview window. It is designed to help you build the header.


(snap. 20.1-1)

As the header file is not contained in the data file, click Build New Header and a new dialog box pops up. The different tabs have to be filled in as follows:
- Data Organization: this first tab is used to define the file type, dimension and specific parameters. Select Points for Type of File and 2D for Dimension. The bathymetry will be considered as a numeric variable and not as a third coordinate.
- Options: this second tab defines how the data are arranged in the file.
  - In our case, the values are comma-separated. Tick the CSV Input (Comma Separated Value) option, choose ',' as Values Separator and '.' as Decimal Symbol. Specify that you want to skip the first line via Skip 1 File Lines at the Beginning.

(snap. 20.1-2)


As the data coordinates are defined in a geographic system, select the Coordinates are in
latitude/longitude format option. Choose -45.6533 / 22.578 to specify that the Coordinates
Input Format are in decimal degrees. You need then to define the projection system. Click
Build/Edit Projection File to create a new projection file. The Projection Parameters dialog
box pops up.
- Click New Projection File to enter a name for the new projection file: lambert2e.proj.
- Select clarke-1880 as reference in the Ellipsoid list.
- Select Lambert as Projection Type. First, choose France / Center (II) as Lambert
System. Then switch it for User Defined in order to modify the Y Origin from 200000 to
2200000.
- Click Save to store the parameters and close the Projection Parameters dialog box.

(snap. 20.1-3)
- Base Fields: this tab is used to specify how the input data fields will be read and stored as new variables in Isatis. Click Automatic Fields to automatically create as many fields as appear in the data file. The names of the variables will be those given in the first line (the skipped first line is considered as containing the variable names). At last, you have to define the type of each variable:
  - the coordinates 'Easting Degrees' and 'Northing Degrees' for long and lat,
  - the bathymetry Z is considered as 'Numeric 32 bits'.


(snap. 20.1-4)

Click Save As to save the edited header in a file. Enter a name for this file, header.txt, and Close. This header can be reused for the other files, which have the same structure. The created header should have the following structure:
# structure=free
# csv_file=Y, csv_sep=",", csv_dec="."
# nskip=1
# proj_file="C:\Program Files\Geovariances\Isatis\Datasets\Bathymetry\lambert2e.proj"
# proj_coord_rep=0
# field=1 , type=ewd , name="long" ; f_type=Decimal , f_length=10 , f_digits=2 , unit="" ; factor=1
# field=2 , type=nsd , name="lat" ; f_type=Decimal , f_length=10 , f_digits=2 , unit="" ; factor=1
# field=3 , type=numeric , name="Z" , bitlength=32 , unit="" , ffff="" ; f_type=Decimal , f_length=10 , f_digits=2
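
Note - If you need to check the projection outside Isatis, a library such as pyproj can be used, assuming that the Lambert II etendu definition above matches the standard NTF / Lambert zone II CRS, EPSG:27572 (an assumption you should verify against the clarke-1880 ellipsoid and the Y origin of 2200000 entered above):

    from pyproj import Transformer

    to_l2e = Transformer.from_crs("EPSG:4326", "EPSG:27572", always_xy=True)
    x, y = to_l2e.transform(-1.21, 45.99)   # hypothetical lon/lat near Fort Boyard
    print(x, y)                             # Cartesian coordinates in meters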

Once your header is ready, you have to choose where and how your data will be stored in the Isatis database. Select the mode Create a New File to import the complete data set. Then, create a new directory and a new file in the current study. The button NEW Points File is used to enter the names of these two items; click on the New Directory button and give a name, and do the same for the New File button, for instance:
- New directory = Data
- New file = DDE Boyard 2000
Finally, press OK and then Import.


(snap. 20.1-5)

Do the same thing for the two other files (without building a new header, but reusing the previous one) to import these data sets into two new Isatis files:
- DDE_Maumusson_2001.csv in Data / DDE Maumusson 2001,
- DDE_Marennes_Oleron_2003.csv in Data / DDE Marennes Oleron 2003.

20.1.2.2 Import of the coast line


This other data set is provided in ArcView format. It contains the geometry of the coast line and the different islands. These contours are loaded as polygons, which allow you to define the area of interest in the grid file in order not to interpolate outside the sea.
To import this file, go to File / Import / ArcView.
In the Shapefile tab, click File Name to open a file selector and select the Shapefile to be read, Coast.shp.
Choose the option Import as Polygons and click Data File to define the output file in your Isatis study:
- Directory = Data
- New file = Coast
As for the ASCII import, tick the Coordinates are in latitude/longitude format option to specify that your data are defined in a geographic system. Click on Projection File Name and select the projection file lambert2e.proj created previously.


Finally, press Import.

(snap. 20.1-1)


20.2 Pre-processing
20.2.1 Visualization
The data sets are visualized using the display capabilities. You are going to create a new Display template, which consists in an overlay of several base maps and polygons. All the display facilities are explained in detail in the "Displaying & Editing Graphics" chapter of the Beginner's Guide.
Click on Display / New Page in the Isatis main window. A blank graphic page pops up, together with a Contents window. You have to specify in this window the contents of your graphic. To achieve that:
- Firstly, give a name to the template you are creating: Data. This will allow you to easily display the same map later on.
- In the Contents list, double click on the Basemap item. A new window appears, in order to let you specify which file and which variable you want to display.
  - In the Data area, click on the Data File button and select the file Data / DDE Boyard 2000. Three types of representation may be defined (proportional, color or literal variable), but if these three variables are left undefined, a simple basemap is drawn using only the Default Symbol. Clicking on this button, you can modify the pattern, the color and the size of the points.
  - Click on Display to display the result and on OK to close the Item Contents panel.
- Back in the Contents list, double-click again on the Basemap item to represent the other points files, DDE Marennes Oleron 2003 and DDE Maumusson 2001. Choose a different color for each file in order to distinguish them.
- Back in the Contents list again, double-click on the Polygons item to represent the coast line and select Data / Coast by clicking on Data File. The lowest part of the window is designed to define the graphic parameters:
  - Label Position: select no symbol, so that the label position of each polygon is not materialized.
  - Filling: check Use a Specific Filling and click on the ... button to open the Color Selector and choose Transparent.
Click on Display Current Item to check your parameters, then on Display to see all the previously defined components of your graphic.

(fig. 20.2-1)

20.2.2 Sampling selection


The final resolution of the bathymetric model will be 60 m. However, the resolution of the data sets is finer, metric in some places. Consequently, it is advised to create a selection variable, in order to avoid matrix inversion problems during the interpolation due to very close points (which would be considered as "duplicates") and to reduce the calculation time.
The File / Selection / Sampling panel allows you to create a selection variable by sampling the data points on a regular grid basis.
Click on the Data File button to select the file Data / DDE Boyard 2000 you want to re-sample. You then have to define a New Selection Variable where the result of the sampling will be stored. Call it Sampling 10 m.
Choose the Center Point option to specify that you want to keep in the selection the sample nearest to the cell gravity center.
In order to take into account the whole set of samples, select the option Infinite Grid. The grid system will be extended so that each sample is classified in a grid cell.
Finally, you have to specify the grid parameters. As the Infinite Grid option is activated, you just have to fill in the dimensions of the cells. Type 10 m for DX and DY.
Press Run.
The variable created by the procedure is set to 1 when a sample is kept (just one sample per grid cell), and to 0 otherwise.
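
Note - The Center Point rule is easy to emulate; the following Python sketch keeps, for a hypothetical (n, 2) array coords, the sample closest to the centre of its (infinite) 10 x 10 m cell.

    import numpy as np

    def center_point_selection(coords, dx=10.0, dy=10.0):
        ix = np.floor(coords[:, 0] / dx).astype(int)
        iy = np.floor(coords[:, 1] / dy).astype(int)
        centers = np.stack(((ix + 0.5) * dx, (iy + 0.5) * dy), axis=1)
        d2 = ((coords - centers) ** 2).sum(axis=1)  # squared distance to centre
        best = {}                                   # best sample per cell
        for i, cell in enumerate(zip(ix, iy)):
            if cell not in best or d2[i] < d2[best[cell]]:
                best[cell] = i
        keep = np.zeros(len(coords), dtype=int)
        keep[list(best.values())] = 1
        return keep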

(snap. 20.2-1)

Do the same thing for the two other data sets.

Note - This procedure can also be achieved in Tools / Look for Duplicates.

20.2.3 Creation of a target grid


All the estimation results will be stored as different variables inside a new grid file located in the directory Targets. This grid, called Grid 60x60m, is created using the File / Create Grid File functionality.
Using the Graphic Check option, the procedure offers the graphical capability of checking that the new grid reasonably overlays the different data files, selected by clicking on the Display File (Optional) button.


(snap. 20.2-2)

Note - In Isatis, only regular grids can be created, but it is possible to import irregular grids. For example, if you create a regular grid in latitude/longitude outside Isatis, this file has to be projected in Isatis (with a projection system consistent with your data set). Once the projection is done, the grid is no longer regular, so it has to be imported as a points file via File / Import / ASCII. This new file will finally be considered as the target file of the interpolation. During the import, you just need to select the Keep Geographical Coordinates option to keep and store the original fields used to compute the latitude/longitude coordinates as float variables in the output Isatis file, in order to export the result of the interpolation on these coordinates.

20.2.4 Delineation of the interpolation area


You have to create a polygon selection on the grid to delineate the interpolation area, using the File / Selection / From Polygons functionality. In order to restrict the interpolation to the sea, the new selection variable, called Bathy area, will keep the grid cells located outside all the polygons.


(snap. 20.2-3)

20.2.5 Consistency and concatenation of data sets


20.2.5.1 Marennes Oleron and Maumusson
When zooming in on the Marennes Oleron and Maumusson area, it appears that the two campaigns do not overlap. Consequently, the two data sets can be concatenated in order to interpolate them simultaneously. However, it is important to check the consistency of the bathymetry between the files before merging them.
The merging of the two files is done in Tools / Copy Variable / Merge Samples. Click Input File 1 to open a File Selector and select the Z variable of the DDE Marennes Oleron 2003 file, using the Sampling 10 m selection. Select the same variable for the DDE Maumusson 2001 file by clicking Input File 2. You then have to define the output variable corresponding to the input variable of both input files. Click New Output Points File to create the new output Points File MO and Maumusson, where the variable Z will be copied. If you press the Default button, the name(s) of the input variable(s) will be kept as the name(s) of the corresponding output variable(s) in the output file. Finally, click Run.


(snap. 20.2-1)

Note - Be careful that the input variables are defined with the same format (in our case, Float and not Length), in order to avoid Isatis making a conversion.
Then, the consistency of the two data sets along their common border is studied in Statistics / Exploratory Data Analysis. In our case, we just want to compare the two profiles linking the two campaigns. The comparison is made via an H-scatter plot. This application allows you to analyze the spatial continuity of the selected variable.
It is first advised to create a selection containing only the two profiles. Clicking on the base map icon (first from the left) displays the localization of the bathymetry measures. Each active measure is represented by a cross proportional to the bathymetry value. A sample is active if its value for a given variable is defined and not masked.
To create the selection variable, right-click and Mask all Information on the Basemap window. Then, zooming in, select the two profiles (with the left button of your mouse), right-click and Unmask.


(snap. 20.2-2)

To avoid high computation time, you should save this selection and work only on an extraction of
the bathymetric file:


To save the selection variable, click on Application / Save in Selection in the Basemap window
and create a new selection variable Two profiles. Save.

In Tools / Copy Variable / Extract Samples, click Input File and select the Z variable of the MO
and Maumusson file with the selection Two profiles activated (select it on the left part of the
File Selector). Click New Output Points file and create a new output Points File Two profiles
MO and Maumusson and a new variable Z. Run.

(snap. 20.2-3)

Launch again the Statistics / Exploratory Data Analysis on the Z variable of this new file. Tick the Define Parameters before Initial Calculations option and click on the sixth icon from the left to display the H-scatter plot. The default parameters are modified:


- the Reference Direction: an angle of 55° from North is taken to compare the pairs of points located in the principal direction of the trench. This direction can be identified by clicking on Management / Measure / Angle between two Segments in the graphic window.
- the Minimum and Maximum Distance: respectively equal to 400 and 800 m, to include the pairs of points resulting from the comparison of the two profiles.
- the Tolerance on Angle: 5°, so as not to be too strict on the Reference Direction.

(snap. 20.2-4)

It is possible to add the First Bisector Line on the H-scatter plot via Application / Graphic Specific Parameters.

[Figure: H-scatter plot of Z with the first bisector line.]

(fig. 20.2-1)

Selecting a pair of points on the H-scatter plot (i.e. one point), then right-clicking and choosing Highlight, shows their localization on the Basemap. No particular bias is visible; consequently, the two campaigns can be merged without any correction.


20.2.5.2 Marennes Oleron and Boyard


The consistency between Boyard and Marennes Oleron can be studied more finely, because the two campaigns overlap.
The first step consists in migrating the bathymetry values of Marennes Oleron to the points of Boyard with Tools / Migrate / Point to Point. The Maximum Migration Distance is set to 2 m, so as not to compare values that are too far apart.

(snap. 20.2-1)

For clarity reasons, in the DDE Boyard 2000 file, the bathymetric variable is renamed Z Boyard.
The difference of bathymetry between the two variables Z Boyard and Z MO is calculated via the File / Calculator panel.


(snap. 20.2-2)

Both Z variables and the difference between them are then selected in Statistics / Exploratory Data Analysis. On the Scatter Diagram of Z Boyard versus Z MO, which is considered as the reference bathymetry because it is more recent, you can observe an excellent correlation of 0.999. However, the error Z Boyard-MO seems to increase with the depth (the distance from the first bisector line becomes more and more important). The mean of these errors is equal to 0.45 m.

[Figure: scatter diagram of Z MO vs. Z Boyard (rho = 0.999) and histogram of Z Boyard-MO. Nb Samples: 115, Minimum: -0.35, Maximum: 1.00, Mean: 0.45, Std. Dev.: 0.25.]

(fig. 20.2-1)

After removing some points (with a right-click and Mask), you can observe a link between the errors and the bathymetry. This phenomenon could be due to an evolution of the sediments between the two campaigns, carried out in 2000 and 2003. Save the result of the unmasked points in a selection variable Selection regression (Application / Save in Selection).

(fig. 20.2-2)

The bias between the two bathymetric models resulting from the Boyard and the Marennes Oleron data sets could be corrected by applying to the Boyard data the following correction (corresponding to the equation of the regression line below):
Z Boyard - MO = 0.02122 * Z Boyard + 0.306

(eq. 20.2-1)


These parameters can be calculated with the Statistics / Data Transformation / Multi-linear Regression tool. Select Z Boyard-MO as the Target Variable and Z Boyard as the Explanatory Variable (activate the Selection regression selection). Switch on Use a Constant Term in the Regression and create a New File Name Z Boyard-MO by clicking on Regression Parameter File to store the result of the multi-linear regression. This parameter file will be used to apply the same transformation to the grid variables using Statistics / Data Transformation / Raw<->Multi-linear Transformation. Finally, click on Run.

(snap. 20.2-3)


Regression Parameters:
======================
Explanatory Variable 1 = Z Boyard
Regressed Variable     = None
Residual Variable      = None
Constant Term          = ON

Multi-linear regression
-----------------------
Equation for the target variable : Z Boyard-MO
(NB. coefficients applying to lengths are in their own unit)
--------------------------------------------------------------------------
|          |Estimated Coeff.|Signification|Std. Error|t-value| Pr(>|t|)  |
--------------------------------------------------------------------------
| Constant |      0.306     |     ***     |2.893e-02 |10.578 | 0.000e+00 |
--------------------------------------------------------------------------
| Z Boyard |    2.122e-02   |     ***     |3.261e-03 | 6.509 | 3.242e-09 |
--------------------------------------------------------------------------
Signification codes based upon a Student test probability of rejection:
'***' Pr(>|t|) < 0.001
'**'  Pr(>|t|) < 0.01
'*'   Pr(>|t|) < 0.05
'.'   Pr(>|t|) < 0.1
'X'   Pr(>|t|) < 1

Multiple R-squared = 0.302
Adjusted R-squared = 0.295
F-statistic        = 42.361
p-value            = 3.242e-09
AIC                = -8.978e+02
AIC Corrected      = -8.977e+02

Statistics calculated on 100 active samples

Raw data   : Mean = 0.463       Variance = 3.531e-02
Regressed  : Mean = 0.463       Variance = 1.066e-02
Residuals  : Mean = 2.831e-17   Variance = 2.465e-02
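
Note - The two regression coefficients can be recovered with an ordinary least-squares fit; below is a minimal Python check with hypothetical arrays z_boyard and diff holding the 100 selected samples of Z Boyard and Z Boyard-MO.

    import numpy as np

    def fit_bias(z_boyard, diff):
        a_mat = np.column_stack((z_boyard, np.ones_like(z_boyard)))
        (slope, intercept), *_ = np.linalg.lstsq(a_mat, diff, rcond=None)
        return slope, intercept      # expected close to 0.02122 and 0.306

    # the correction would then subtract (slope * z_boyard + intercept)
    # from the Boyard bathymetry before merging.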

This relation is observed on the overlapping area of the two campaigns. Its validity should be
confirmed on the remaining area.


20.3 Interpolation by kriging


20.3.1 Exploratory Data Analysis
In the Statistics / Exploratory Data Analysis panel, the first task consists in defining the file and
variable of interest. To achieve that, click on the Data File button and select the variable Z in the
Data / MO and Maumusson file. By pressing the corresponding icons (eight in total), you can
successively display several statistical representations, using default parameters or choosing
appropriate ones.

(snap. 20.3-1)

For example, to calculate the histogram with 25 classes between -6 and 19 m (1 meter interval),
first you have to click on the histogram icon (third from the left); a histogram calculated with
default parameters is displayed, then enter the previous values in the Application / Calculation
Parameters menu bar of the Histogram page. If you switch on the Define Parameters Before Initial
Calculations option, you can skip the default histogram display.
The different graphic windows are dynamically linked. If you want to locate the negative measures
of bathymetry, select on the histogram the classes corresponding to negative values, right click and
choose the Highlight option. The highlighted values are now represented by a blue star on the base
map previously displayed.
(fig. 20.3-1)


(fig. 20.3-2)

Then, an experimental variogram can be calculated by clicking on the 7th statistical representation,
with 20 lags of 10 m and a proportion of lag of 0.5. The variance of data may be removed from the
graphic by switching off the appropriate button in the Application / Graphic Specific Parameters.


(snap. 20.3-2)

(fig. 20.3-3)

In order to perform the fitting step, it is now time to store the experimental variogram with the item
Save in Parameter File of the Application menu of the Variogram Page. You will call it Z bathy.


20.3.2 Fitting a variogram model


The procedure Statistics / Variogram Fitting allows you to fit an authorized model on an
experimental variogram.
You must first specify the name of the parameter file which contains the Experimental Variogram Z
bathy created in the previous paragraph.
Then, you need to define another parameter file which will ultimately contain the model: you will
also call it Z bathy. Although they carry the same name, there will be no ambiguity between these
two files as they are of different types.
Common practice is to find, by trial and error, the set of parameters defining the model which fits
the experimental variogram as closely as possible. The quality of the fit is checked graphically on
each of the two windows:
- The global window, where all experimental variograms, in all directions and for all variables, are displayed.
- The fitting window, where you focus on one given experimental variogram, for one variable and in one direction.
In our case, as the parameter file refers to only one experimental variogram for the single variable
Z, there is obviously no difference between the two windows.


(snap. 20.3-3)

The principle consists in editing the Model parameters and checking the impact graphically. You
can also use the variogram initialization by selecting a single structure or a combination of
structures in Model initialization and by adding or not a nugget effect. Here, we choose an
exponential model without nugget. Pressing the Fit button in the Automatic Fitting tab, the
procedure automatically fits the range and the sill of the variogram (see the Variogram Fitting
section of the User's Guide).
Then go to the Manual Fitting tab and press the Edit button to access the panel used for the
Model definition and modify the model displayed. Each modification of the Model parameters can
be validated using the Test button in order to update the graphic.
Here, two different structures have been defined (in the Model Definition window, use the Add
button to add a structure, and define its characteristics below, for each structure):
- a stable model with a third parameter equal to 1.45, a range of 600 m and a sill of 3.35,
- a nugget effect of 0.0025.
These parameters lead to a better fit of the model to the experimental variogram.
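For reference, the fitted model can be evaluated outside Isatis. A minimal sketch, assuming the
plain scale-parameter convention for the stable (powered exponential) structure; Isatis may rescale
the range to a practical range, so treat the scale below as indicative:

import numpy as np

def stable_variogram(h, sill=3.35, scale=600.0, alpha=1.45, nugget=0.0025):
    # gamma(h) = nugget + sill * (1 - exp(-(h/scale)**alpha)), h in metres
    h = np.asarray(h, dtype=float)
    gamma = nugget + sill * (1.0 - np.exp(-(h / scale) ** alpha))
    return np.where(h > 0, gamma, 0.0)  # gamma(0) = 0 by definition

print(stable_variogram([0.0, 100.0, 600.0, 2000.0]))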


(snap. 20.3-4)

This model is saved in the Parameter File for future use by clicking on the Run (Save) button.

[Figure: experimental variogram of Z with the fitted model; distance in km, up to 0.20 km]

(fig. 20.3-4)


20.3.3 Kriging of bathymetry


The kriging procedure Interpolate / Estimation / (Co-)Kriging requires the definition of:
- the Input information: variable Z in the Data File,
- the following variables in the Output Grid File, where the results will be stored:
  - the estimation result in Kriging of bathymetry MO and Maumusson,
  - the standard deviation of estimation in Std of bathymetry MO and Maumusson (Kriging),
- the Model of variogram: Z bathy,
- the neighborhood: Moving 300m.

To define the neighborhood, you have to click on the Neighborhood button and you will be asked to
select or create a new set of parameters; in the New File Name area enter the name Moving 300m,
then click on OK or press Enter and you will be able to set the neighborhood parameters by clicking
on the respective Edit button.
The neighborhood type is a moving neighborhood. It is an ellipsoid with No Rotation;


- Set the dimensions of the ellipsoid to 300 m and 300 m. Because of the sampling, the neighborhood size does not need to be very large;
- Minimum number of samples: 1;
- Number of Angular Sectors: 4, in order to avoid data all coming from the same profile;
- Optimum Number of Samples per Sector: 4. A number of 4x4=16 samples seems to be a good compromise between reliability of the interpolation and calculation time.

(snap. 20.3-5)

In order to avoid extrapolation outside the domain, in the Advanced tab, it is possible to interrupt
the neighborhood search when there are too many consecutive empty sectors. Tick the Maximum
Number of Consecutive Empty Sectors option to activate it and enter a value of 2.


(snap. 20.3-6)

Press OK for the Neighborhood Definition.

Note - When kriging huge data sets, it is advised to modify the parameters in the Sorting tab in
order to optimize the computations. With a moving neighborhood, the samples are first sorted into a
coarse grid of cells (the maximum number of cells is limited to 500000). This sorting improves
the performance of the search algorithm.
The sorting parameters DX and DY should be set so that the number of sorting cells, i.e. the product
of the domain extensions along X and Y divided by the product of DX by DY, is smaller than 500000.
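As a quick sanity check of candidate sorting cell sizes (the domain extensions below are
hypothetical, not taken from the case study):

# Number of sorting cells must stay below the 500000 limit
Lx, Ly = 18000.0, 25000.0        # hypothetical domain extensions (m)
DX = DY = 50.0                   # candidate sorting cell sizes (m)
n_cells = (Lx / DX) * (Ly / DY)  # = 180000 here
assert n_cells < 500_000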


(snap. 20.3-7)

In the Standard (Co-)Kriging panel, a special feature allows you to test the choice of parameters,
through a kriging procedure, on a graphical basis (Test button). A first click within the graphic area
displays the target file (the grid). A second click allows the selection of one grid node in particular.
The target grid node may also be entered in the Test Window / Application / Selection of target
option (see the status line at the bottom of the graphic page), for instance [207,262].
The figure shows the data set, the samples chosen in the neighborhood (the 16 closest points inside a
300 m radius circle) and their corresponding weights. The bottom of the screen recalls the
estimated value, its standard deviation and the sum of the weights.


(snap. 20.3-8)

In the Application menu of the Test Graphic Window, click on Print Weights & Results. This
produces a printout of:
- the calculation environment: target location, model and neighborhood,
- the kriging system,
- the list of neighboring data and the corresponding weights,
- the summary of this kriging test.


Results for : Punctual
- For variable V1
Number of Neighbors             = 16
Mean Distance to the target     = 56.73m
Total sum of the weights        = 1.000000
Sum of positive weights         = 1.063017
Weight attached to the mean     = -0.036648
Lagrange parameters #1          = 0.102890
Estimated value                 = -0.993992
Estimation variance             = 0.172020
Estimation standard deviation   = 0.414753
Variance of Z* (Estimated Z)    = 2.974700
Covariance between Z and Z*     = 3.077590
Correlation between Z and Z*    = 0.974551
Slope of the regression Z | Z*  = 1.034588
Signal to Noise ratio (final)   = 19.474474

Click on Run to interpolate the data on the entire grid.
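For readers curious about what happens at each target node, here is a minimal sketch of an
ordinary kriging system solved with NumPy. The covariance function is a simplified stand-in (no
nugget) and the data are synthetic; it illustrates the mechanics, not the Isatis implementation:

import numpy as np

def ordinary_kriging(coords, values, target, cov):
    # Solve [[C, 1], [1^T, 0]] [w; mu] = [c0; 1] for the weights w
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    lhs = np.ones((n + 1, n + 1))
    lhs[:n, :n] = cov(d)
    lhs[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(lhs, rhs)
    w, mu = sol[:n], sol[n]
    estimate = w @ values
    variance = cov(0.0) - w @ rhs[:n] - mu  # ordinary kriging variance
    return estimate, variance, w

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 300.0, (16, 2))   # 16 neighbours (m)
values = rng.normal(0.0, 1.8, 16)           # synthetic bathymetry values
cov = lambda h: 3.35 * np.exp(-(np.asarray(h) / 600.0) ** 1.45)
est, var, w = ordinary_kriging(coords, values, np.array([150.0, 150.0]), cov)
print(est, var, w.sum())                    # the weights sum to 1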


The same interpolation can be achieved with the Boyard data set (with the selection Sampling 10 m
activated), taking care that the names of the output variables are different. Create two new variables:
- Kriging of bathymetry Boyard to store the estimation result,
- Std kriging of bathymetry Boyard for the standard deviation of estimation.

(snap. 20.3-9)

20.3.4 Displaying the graphical results


20.3.4.1 Display 2D
Click on Display / New Page in the Isatis main window. A new blank graphic page is popped up.


Give a name to the template you are creating: Bathy kriging.

In the Contents list, double click on the Raster item. A new window appears, in order to let you
specify which variable you want to display and with which color scale:
- In the Data area, select in the Grid file the variable Kriging of bathymetry MO and Maumusson,
- Specify the title that will be given to the Raster part of the legend, for instance Bathy (m),
- In the Graphic Parameters area, specify the Color Scale you want to use for the raster display. You may use an automatic default color scale, or create a new one specifically dedicated to the bathymetry. To create a new color scale, click on the Color Scale button, double-click on New Color Scale, enter a name: Bathy, and press OK. Click on the Edit button. In the Color Scale Definition window:
  - In the Bounds Definition, choose User Defined Classes.
  - Click on the Bounds button and enter the min and the max bounds (respectively -5 and 15).
  - Do not change the number of classes (32).
  - Click on the Undefined Values button and select Transparent.
  - In the Legend area, switch off the Automatic Spacing between Tick Marks button, enter 5 as the reference tick mark and 2 as the step between the tick marks. Then, specify that you do not want your final color scale to exceed 6 cm. Switch off the Display Undefined Classes as button.
  - Click on OK.
In the Item contents for: Raster window, click on Display to display the result.


(snap. 20.3-1)


- It is possible to add other items such as Isolines defined on the nodes of a grid. For example, you can display, on the bathymetry variable, isolines by 1 m classes.
- You can also display the coast line by adding a Polygons item, as done for the data visualization.
- In the Items list, you can select any item and decide whether or not you want to display its legend. Use the Move Back and Move Front buttons to modify the order of the items in the final display.
- Click on the Display Box tab. Choose Containing a set of items and select the Raster item to define the size of the graphic by reference to the contents of the grid.

Finally, click on Display to display the result and on OK to close the Item Contents panel. Your
final graphic window should be similar to the one displayed hereafter:

(snap. 20.3-2)

The * and [Not saved] symbols respectively indicate that some recent modifications have not been
stored in the Bathy kriging graphic template, and that this template has never been saved. Click on
Application / Store Page to save them. You can now close your window.


20.3.4.2 3D Viewer
Launch the 3D Viewer (Display / 3D Viewer).
To display the bathymetry estimation, drag and drop the Kriging of bathymetry MO and
Maumusson variable from the Grid 60x60m file into the display window. In the Page Contents,
right click on the Surface object to edit its properties:
- in the Color tab, make sure that the selected variable is Kriging of bathymetry MO and Maumusson. Apply the Bathy color scale created previously.
- in the Elevation tab, select Variable and choose Kriging of bathymetry MO and Maumusson to define for each grid cell the bathymetry as the level Z. Tick Convert into Z Coordinate to calculate the elevation Z from the bathymetry (given as a depth) as Z = -1 x V + 0.

(snap. 20.3-1)

Tick the Automatic Apply option to automatically assign the defined properties to the graphic
object. If this option is not selected, modifications are applied only when clicking Display.
Tick Legend to display the color scale in the display window. The legend is attached to the current
representation. Specific graphic objects may be added from the Display menu, such as the graphic
axes with their annotations, the bounding box and the compass.
The Z Scale, in the tool bar, may also be modified to enhance the vertical scale.
Click on File / Save Page As to save the current graphic.


(fig. 20.3-1)

20.3.5 Detection of outliers - Filtering


By construction, kriging smoothes the real variability. As a consequence, with a variogram model
including a nugget effect, if the interpolation is done very close to a data point, the estimated value
will differ from the one measured. Indeed, if this nugget effect is due to a measurement error, it
makes sense not to give all the weight to the closest point but to also give some weight to farther
data. Note that kriging remains an exact interpolator: if the estimation is performed exactly on a
measurement point, the estimated value and the measured value are exactly the same.
Filtering allows you to produce an estimation of the variable, filtering out the effect of the
measurement error. This error, considered as independent from the variable, is characterized by its
own scale component in the variogram model: the nugget effect. The objective is to estimate at the
data points the most probable bathymetric value with no measurement error. Then, an analysis will
be done to compare these values with the measured ones. The validity of the points for which the
correction is the most important will be called into question.
Filtering is achieved just like kriging. You just need to specify the same Input File and Output File,
with two new variables: Z filtering for the estimation and Z std filtering for the standard deviation.
The Model of variogram and the Neighborhood are the same as for the kriging. Click on the Special
Model Options button and tick Filtering Model Components. Then highlight the nugget effect in the
list of Covariance basic structures so that it will be filtered from the model. Click Apply and
Run.


(snap. 20.3-2)


(snap. 20.3-3)

In File / Calculator, click Data File and select:
- the measures Z as v1,
- the filtered bathymetry Z filtering as v2,
- the associated standard deviation Z std filtering as v3,
- a new variable Z standardized error filtering as v4, which will be equal to the difference between the "true" value and the value estimated by filtering, standardized by the standard deviation.
Then write the following transformation:
v4=(v1-v2)/v3
Click on Run.
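The same quantity, together with a simple outlier flag, can be sketched in NumPy (the arrays are
hypothetical stand-ins and the 2.5-sigma cutoff is an arbitrary illustration, not a value prescribed
by the case study):

import numpy as np

z = np.array([10.2, 11.0, 9.8, 14.5])        # measured bathymetry (v1)
z_filt = np.array([10.1, 10.9, 9.9, 12.0])   # filtered estimate (v2)
z_std = np.array([0.3, 0.3, 0.3, 0.3])       # filtering std. dev. (v3)

std_err = (z - z_filt) / z_std               # standardized error (v4)
outliers = np.abs(std_err) > 2.5             # flag large corrections
print(std_err, outliers)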


(snap. 20.3-4)


The Exploratory Data Analysis allows you to locate the highest errors on the base map by
highlighting them on the histogram. Adding the bathymetry values to the base map (by informing
the Literal Code Variable in Application / Graphical Parameters) makes it possible to study these
points in detail.
It is first advised to modify the symbol of the selected points from crosses to points in order to
improve the legibility of the display. To achieve that, you have to access the study parameters in
Preferences / Study Environment, Miscellaneous tab, and change the Selected Point symbol in the
Interactive Picking Windows Convention part.
After masking the outliers (with a right click and Mask), you can save the result of this work in a
selection variable (Application / Save in Selection). Then, you can perform a kriging again (without
filtering), this time with this selection variable activated in input and the grid of interpolation in
output. Of course, the classification of the points as outliers should be done carefully.

(fig. 20.3-2)


20.4 Superposition of models and smoothing of frontiers
20.4.1 Merge of several Digital Terrain Models (DTM)
The two data sets, Boyard on the one hand and Marennes Oleron/Maumusson on the other hand,
have been interpolated separately but they partly overlap. At this stage, it is necessary to decide
which of these two models you want to give priority to. In this case, it is decided to favour
Marennes Oleron/Maumusson because of its more recent campaign and its larger coverage of the
study area.
A new bathymetric model Z bathy Boyard MO and Maumusson is built thanks to the File /
Calculator application.
The mathematical transformation simply consists in taking as final model the priority one (Kriging
of bathymetry MO and Maumusson) when it is defined, and otherwise the Kriging of bathymetry
Boyard variable adjusted thanks to the regression equation calculated earlier (eq. 20.2-1).
This regression is first applied in the Statistics / Data Transformation / Raw<->Multi-linear
Transformation tool and the result is stored in a new variable Z Boyard-MO regression.
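The merge logic itself is a one-liner. A sketch in NumPy, with hypothetical toy grids where NaN
stands for undefined nodes:

import numpy as np

mo = np.array([[1.2, np.nan], [3.4, np.nan]])   # MO and Maumusson kriging
boyard = np.array([[1.5, 2.0], [3.0, 4.1]])     # Boyard kriging

# Apply the regression correction of eq. 20.2-1 to the Boyard model
boyard_corr = boyard - (0.02122 * boyard + 0.306)

# Priority to MO and Maumusson where defined, corrected Boyard elsewhere
merged = np.where(np.isnan(mo), boyard_corr, mo)
print(merged)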

(snap. 20.4-1)


(snap. 20.4-2)

20.4.2 Smoothing of frontiers


Zooming in on a display of the Z bathy Boyard MO and Maumusson variable, you can see that the
frontier between the two concatenated DTMs (Boyard and Marennes Oleron) is still visible, so it
seems necessary to smooth it.
The idea consists in defining a band around Marennes Oleron (which is privileged), then re-interpolating the values of bathymetry in this band from the interpolated values nearby.
The buffer zone is created in two steps:

- In Interpolate / Interpolation / Grid Operator, you should create a new selection variable Sel Z bathy MO and Maumusson dilated which contains all the grid cells where the Kriging of bathymetry MO and Maumusson variable is defined, plus a band 120 m wide (i.e. 2 cells) around them.
(snap. 20.4-3)
- In File / Calculator, three variables are created:
  - Sel Z bathy MO and Maumusson buffer: this selection variable defines the buffer zone. It is equal to Sel Z bathy MO and Maumusson dilated minus the area on which the Kriging of bathymetry MO and Maumusson variable is defined.
  - Z bathy final Boyard MO and Maumusson: this variable contains the concatenation of the two models previously created by kriging, with priority to the MO and Maumusson model, as well as undefined values inside the buffer zone.
  - DTM area: this selection variable is created in order not to extrapolate the interpolation done at the next step.


(snap. 20.4-4)


[Figure: map of Z bathy final Boyard MO and Maumusson; X and Y in km, color scale Bathy (m) from -5 to 15]

(fig. 20.4-1)

The buffer area is then filled in with a simple moving average in Interpolate / Interpolation / Grid
Filling. The result of this last interpolation is stored in the same Z bathy final Boyard MO and
Maumusson variable as in input. The variable is overwritten to contain the final bathymetric
model. The DTM area selection variable is activated in order not to extrapolate.
The choice of the algorithm of interpolation has no real importance because of the limited size of
the buffer area.


(snap. 20.4-5)


20.5 Local GeoStatistics (LGS) application to bathymetry mapping
20.5.1 Variogram analysis
The estimation previously obtained is based on a global variogram model. This analysis assumes
the stationarity of the data and of its spatial structure over the area of interest.
The LGS methodology described hereafter proposes to calculate local variograms, taking into
account potential local particularities such as local anisotropies, spatially varying small-scale
structures or heterogeneity. The expected outcome is an improved prediction, together with a more
consistent assessment of uncertainties.
An anisotropic variogram model is required to test different rotations. Consequently, the first step
consists in computing an anisotropic variogram, which will be used in the LGS methodology to find
the right local angle of anisotropy. Using this local angle, the local ranges along U and V will then
be computed in the next step.
In the Statistics / Exploratory Data Analysis menu, you should select the Z variable in the Data /
MO and Maumusson file and click on the variogram icon. The Variogram Calculation Parameters
panel pops up. In the List of Options, change the type of variogram from Omnidirectional to
Directional. Then click on the Regular Directions button, choose a Number of Regular Directions
of 2 (N0 and N90) and click on OK. In the Variogram Calculation Parameters panel, when clicking
any cell of the table, the Directions Definition box pops up.
Select the direction to be defined in the Directions List on the left side of the interface. You may
select the two directions at the same time to set the same parameter values. Then activate the
parameters you need to modify by checking the corresponding box and choose:
- Tolerance on Direction: 5
- Lag Value: 10 m
- Number of Lags: 20

Click OK twice to calculate the variogram and get it displayed in a graphic window.


(snap. 20.5-1)


(snap. 20.5-2)


[Figure: directional experimental variograms of Z for N0 and N90; distance in km]

(fig. 20.5-1)

Finally store this experimental variogram with the item Save in Parameter File of the Application
menu of the Variogram Page. You will call it Z bathy anisotropic.
To fit a variogram model, in the Statistics / Variogram Fitting application, define:
- The Parameter File containing the set of experimental variograms: Z bathy anisotropic.
- The Parameter File in which you wish to save the resulting model: Z bathy anisotropic. You may define the same name for both.
Check the toggles Fitting Window and Global Window. The Fitting Window displays one direction
at a time (you may choose the direction to display through Application / Variable & Direction
Selection...), and the Global Window displays all directions in one graphic.
Click on the Edit button in the Manual Fitting tab to open the Model Definition sub-window. You
can first initialize the variogram by pressing the Load Model button and selecting the Z bathy model,
to begin your modelization from the same parameters. But the model must reflect:
- The specific variability along each direction (anisotropy),
- The general increase of the variogram.
You should tick the Anisotropy option for the Stable structure, with a third parameter equal to 1.45,
a sill of 3.35 and respective ranges along U and V of 800 m and 300 m. The nugget
effect stays equal to 0.0025.
This model is saved in the Parameter File by clicking on the Run (Save) button.


(snap. 20.5-3)

[Figure: directional variograms N0 and N90 with the fitted anisotropic model; distance in km]

(fig. 20.5-2)


20.5.2 Pre-processing
In order to avoid heavy computation time, the method is only illustrated on a specific part of the
area of interest. After validating the analysis of the LGS parameters on this restricted area, you
could perform the estimation on the entire domain.
In the File / Selection / Geographical Box menu, a new selection variable Restricted area is
created in the MO and Maumusson file by selecting only the samples for which the coordinates
are included between:
- 325800 and 333200 m for X,
- 2102200 and 2111200 m for Y.

The same selection is applied to the grid Targets / Grid 60x60m.

(snap. 20.5-4)

The dataset is also reduced to select one point every 25 m with the File / Selection / Sampling menu.


(snap. 20.5-5)

Finally, the selection Sampling 25 m containing 18512 samples is extracted into a new points file
MO and Maumusson LGS thanks to the Tools / Copy Variable / Extract Samples application. You
should press the Default button to keep the name of the input variable Z as the name of the
corresponding output variable in the output file MO and Maumusson LGS. Click Run.

(snap. 20.5-6)


20.5.3 LGS Parameters Modeling


The computation of the Local Parameters is achieved in the Statistics / LGS Parameters Modeling /
Local Cross-validation Score Fitting application.
Click Input Data and select the Z variable in the Data / MO and Maumusson LGS file. Then
choose the Z bathy anisotropic variogram model.
In order to perform the cross-validation used to compute the parameters, it is necessary to specify a
search neighborhood. Click Neighborhood to open the Neighborhood selector. We choose not to
use the previous neighborhood but to create a new one, Moving LGS, which authorizes a minimum
distance of 100 m between two samples to counterbalance the organization of the samples along
lines. Then, click Edit to modify the parameters, in the Sectors tab:
- The neighborhood type is a Moving neighborhood. It is an ellipsoid with No Rotation;
- Set the dimensions of the ellipsoid to 1200 m and 1200 m along the U and V directions;
- Minimum number of samples: 1;
- Number of angular sectors: 8;
- Optimum Number of Samples per Sector: 4.
In the Advanced tab:
- Minimum Distance Between two Selected Samples: 100 m;
- Maximum Number of Consecutive Empty Sectors: 2.
Press OK for the Neighborhood Definition.


(snap. 20.5-7)

In the Local Grid tab, you should click on the Local Grid button to define the grid on which the
local parameters will be calculated. Create a new file Grid LGS in the existing Targets directory.
The grid is automatically computed in order to geographically overlay the input samples. You
should tick the Graphic Check option to check the superimposition of the grid on the samples.
The Cross-validation tab allows you to define a block size inside which samples are considered.
Enter a value of 100 m for X and Y and choose to Perform Cross-validation on 50 % of the data
(to reduce the amount of data and the computation time).
In the last Local Parameters tab, you should select the parameters that you wish to estimate
locally. In this example, we first choose to test only the rotation, i.e. the directions of anisotropy.
The Output Local Base Name area is designed to define a base name for the local parameters. The
complete name of each parameter is automatically created by concatenating this chain of characters,
the name of the structure (for the variogram model) and the parameter you are testing (Rot, Range,
Sill, Third). It appears in the Parameter area. You should call it Z_bathy.
The different basic structures constituting the variogram model defined earlier, as well as the
neighborhood item, are listed in the Structure area. Click the Stable structure and select the
Parameter: Rot Z to indicate the local parameter you want to test. In the Min and Max boxes, enter
the values between which the selected parameter should fluctuate: respectively -90 and 90. Choose
a Step of 10 degrees between two consecutive values to be tested.
Finally click Run to launch the calculations.


(snap. 20.5-8)

You can visualize the result of calculations in the Statistics / Exploratory Data Analysis. Tick the
Legend option in the Application / Graphical Parameters menu of the basemap to display the
legend.


(fig. 20.5-3)

After computing the rotation of the variogram model, a second run is performed to test the ranges,
taking into account the previous calculations. The Input Data, the Model of variogram and the
Neighborhood remain the same.
In the Local Grid tab, tick the Use an Existing Grid option to save the range parameters in the grid
file Targets / Grid LGS previously created.
Do not change anything in the Cross-validation tab.
In the Local Parameters tab, tick the Parameter Already Exists option so as not to erase the variable
containing the rotation calculations. Then, click Add Parameter to add a second parameter.
Select the Stable structure and the Range U parameter. Choose a Min of 600, a Max of 1000
and a Step of 100. Add a third parameter for the Range V, with a Min of 100, a Max of 500 and a
Step of 100.
Make sure that the Simultaneous estimation mode is ticked in order to test all possible combinations
of the different values for the ranges.
Click Run.


(snap. 20.5-9)

20.5.4 LGS kriging


The last step of this analysis consists in performing a kriging taking into account the local
parameters previously calculated.
The kriging procedure Interpolate / Estimation / (Co-)Kriging requires the definition of:
- the Input information: variable Z in the Data / MO and Maumusson file with the Restricted area selection,
- the following variable in the Targets / Grid 60x60m Output Grid File, where the results will be stored:
  - the estimation result in Kriging LGS MO and Maumusson restricted area,
- the Model of variogram: Z bathy anisotropic,
- the neighborhood: Moving 300m.


(snap. 20.5-10)

You should click on the Local Parameters button to pop up the Local Parameter Loading box and
define the local models. Click on Local Grid and select the grid Targets / Grid LGS where the
local parameters are stored.
In the Model Per Structure tab, tick the Use Local Rotation (Mathematician Convention) option
to make the rotations vary locally. Click Rotation / Z and select the Z_bathy_2_Stable_Rot_Z
variable. In the same way, select Use Local Range and choose Z_bathy_2_Stable_Range_U for
Range / X and Z_bathy_2_Stable_Range_V for Range / Y.


Click OK and Run.

(snap. 20.5-11)


The map displaying the differences between the standard kriging and the LGS kriging points out the
areas where the two maps differ most. The two main conclusions are that the use of LGS
reduces the wavelet artefact visible at the border of the main channel, and that LGS also yields more
continuous secondary channels, which is closer to reality.

(fig. 20.5-4)

(fig. 20.5-5)


Methodology



22. Image Filtering
This case study demonstrates the use of kriging to filter out the component of a variable which corresponds to the noise. Applied to regular
grids such as images, this method gives convincing results in an efficient manner.

The result is compared to classical filters which do not pretend to suppress the noise but to reduce it by dilution instead.

Last update: Isatis version 2014


22.1 Presentation of the Dataset


The dataset is contained in the ASCII file called images.hd. It corresponds to a grid of 256 x 256
nodes with a square mesh of 2 microns, containing a single variable: the phosphorus element (P),
measured using an electronic microprobe on a steel sample. Due to the very low quantities of
material (traces), the realization of this picture may take up to several hours of exposure: hence the
large amount of noise induced by the process. The file is read using the Files / Import / ASCII
facility, asking for the data to be loaded in the new Directory called Images and the new grid file
called Grid. The files to be imported are located in the Isatis installation directory/Datasets/Image_Filtering.

(snap. 22.1-1)

We set in Preferences / Study Environment the X and Y units for graphics to mm.
Using the File Manager utility, we can check the basic statistics of the P variable that we have just
loaded: it varies from 11 to 71, with a mean of 35 and a standard deviation of 7.
Use the Display facility to visualize the raster contents of the P variable located on the grid. The
large amount of noise, responsible for the fuzziness of the picture, is clearly visible.


(fig. 22.1-1)

Initial Image


22.2 Exploratory Data Analysis


The next step consists in analyzing the variability of this trace element with the Statistics / Exploratory Data Analysis.
Once the names of the Directory (Images), File (Grid) and variable (P) have been defined, ask for
a histogram. Using the Application Menu of the graphic page, modify the Calculation Parameters
as follows: 62 classes lying from 10 to 72. The Automatic button resets the minimum and maximum
by performing the statistics on the active data in the file. The resulting histogram is very close to a
normal distribution with a mode located around 35.

(fig. 22.2-1)

22.2.1 Quantile-quantile plot and χ2-test


Although this will not be used afterwards in the case study, it is possible to check how close this
experimental distribution is to normality, using the Quantile-quantile facility.
It allows the comparison of the experimental quantiles to those calculated on any theoretical distribution (normal in our case). This comparison may be improved by suppressing several points taken
from the head or the tail of the experimental distribution.


(snap. 22.2-1)

(fig. 22.2-2)

In the Report Global Statistics item of the Application Menu, you obtain an exhaustive comparison
between the experimental and the theoretical quantiles, as well as the score of the χ2-test, equal to
9049. This score is much greater than the reference value (for 16 degrees of freedom) obtained in
tables: this indicates that the experimental distribution cannot be considered as normal with a high
degree of confidence.
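A comparable normality check can be sketched with SciPy; this mirrors the idea of the
quantile-quantile comparison rather than reproducing the exact Isatis χ2 score (the data here are
synthetic):

import numpy as np
from scipy import stats

p = np.random.default_rng(0).normal(35.0, 7.0, 256 * 256)  # synthetic "P"
# Quantile-quantile comparison against a normal distribution
(osm, osr), (slope, intercept, r) = stats.probplot(p, dist="norm")
print(r)  # correlation of the QQ plot; values near 1 suggest normality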

22.2.2 Variographic Analysis


We now wish to estimate the spatial variability of P, by computing its experimental variogram. The
data being organized on a regular grid, the program takes this information into account to calculate
two variograms by default in a more efficient way: the one established by comparing nodes belonging to the same row (X direction) and the one obtained by comparing nodes belonging to the same
column (Y direction). The number of lags is set to 90; be sure to modify the parameter twice (once
for each direction of calculation).

(snap. 22.2-2)
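On a regular grid, these row/column variograms are cheap to compute directly. A minimal sketch
(function name hypothetical):

import numpy as np

def grid_variogram(z, max_lag, axis=1):
    # gamma(h) = mean((z_i - z_{i+h})^2) / 2 over all pairs h nodes apart
    # along one grid axis (axis=1: rows / X direction, axis=0: columns / Y)
    gam = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        d = z[:, h:] - z[:, :-h] if axis == 1 else z[h:, :] - z[:-h, :]
        gam[h - 1] = 0.5 * np.mean(d ** 2)
    return gam

img = np.random.default_rng(0).normal(35.0, 7.0, (256, 256))
print(grid_variogram(img, max_lag=90)[:5])  # 90 lags, as in the case study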

Note - We could try to calculate the variogram cloud on this image: nevertheless, for one (any)
direction, the smallest distance (once the grid mesh) already corresponds to 256 x 255 pairs, the
second lag to 256 x 254 pairs, and so on. Needless to say, this procedure takes an enormous amount
of time to draw, and selectively picking some "abnormal" pairs is almost impossible. Therefore this
option is not recommended.


(snap. 22.2-3)

This figure represents the two directional variograms, which overlay almost perfectly: this informs
us that the variable behaves similarly with respect to distance along the two main axes. This is
almost enough to claim that the variable is isotropic. Strictly speaking, two orthogonal directional
variograms are not sufficient, as an anisotropy could occur along the first diagonal and would not
be visible from the two main axes. The study can be completed by calculating the experimental
variograms along the main axes and along the two main diagonals: this test confirms, in the present
case, the isotropy of the variable. The two experimental directional variograms are stored in a new
Parameter File called P.
To fit a model to these experimental curves, we use the Statistics / Variogram Fitting procedure,
naming the Parameter File containing the experimental quantity (P) and the one that will ultimately
contain the model. You can name it P for better convenience, keeping in mind that, although they
have the same name, there is no ambiguity between these two files as their contents belong to two
different types.


(snap. 22.2-4)


(snap. 22.2-5)

By pressing the Edit button of the main window, you can define the model interactively and check
the quality of the fitting using any of the graphic windows available (Fitting or Global). Each
modification must be validated using the Test button in order for the graphic to be updated. The
Automatic Sill Fitting and the Model Initialization of the main window can be used to help you
determine the optimal sill and range values for each basic structure constituting the model. A
correct fit is obtained by cumulating a large nugget effect with a very regular behavior
corresponding to a Cubic variogram with a range equal to 0.17 mm.


(fig. 22.2-3)

The parameters can also be printed using the Print button in the Model Editing panel.
Model : Covariance part
=======================
Number of variables
= 1
- Variable 1 : P
Number of basic structures = 2
S1 : Nugget effect
Sill =
40.2576
S2 : Cubic - Range = 0.17mm
Sill =
14.7493

Model : Drift part


==================
Number of drift functions = 1
- Universality condition

Click on Run (Save) to save your latest choice in the model parameter file.


22.3 Filtering by Kriging


This task corresponds to the Interpolate / Estimation / Image Filtering & Deconvoluting procedure.
First, define the names of directory (Images), file (Grid) and variable (P) of interest which contain
the information. There is no possibility of selecting the output file as it corresponds to the input file,
by construction in this procedure. The only choice is to define the name of the variable which will
receive the result of kriging process: P denoised. The parameter file containing the model is called
P and a new file called Images P is created for the definition and the storage of the neighborhood
parameters.


(snap. 22.3-1)

When pressing the Neighborhood Edit button, you can set the parameters defining this Image
neighborhood. Referring to the target node as the reference, the image neighborhood is
characterized by the extensions of the rectangle centered on the target node, each extension being
specified by its radius. Hence in 2D, a 0x0 neighborhood corresponds to the target node alone,
whereas a 1x1 neighborhood includes the eight nodes adjacent to the target node.

[Figure: target cell of a 1x1 image neighborhood]


For some applications, it may be convenient to reach large distances in the neighboring information. However, the number of nodes belonging to the neighborhood also increases rapidly which
may lead to an unreasonable dimension for the kriging system. A solution consists in sampling the
neighborhood rectangle by defining the skipping ratio: a value of 1 takes all information available,
whereas a value of 2 takes one point out of 2 on average. The skipping algorithm manages to keep a
larger density of samples close to the target node and sparser information as the distance increases.
Actually, the sampling density function is inspired from the shape of the variogram function which
means that this technique also takes anisotropy into account.

(snap. 22.3-2)

Prior to running the process on the whole grid, it may be worth checking its performance on one
grid node in particular. This can be realized by pressing the Test button, which produces a graphic
page where the data information is displayed. Because of the amount of data available (256 x 256),
the page shows a solid black square. Using the zooming (or clipping) facility on the graphic area,
we can magnify the picture until a limited set of cells is visible (around 20 by 20).
By clicking on the graphic area, we can select the target node (select the one in the center of the
zoomed area). Then the graphic shows the points selected in the neighborhood, displaying their
kriging weight (as a percentage). The bottom of the graphic page recalls the value of the estimate,
the corresponding standard deviation (square root of the variance) and the value for the sum of
weights. The first trial simply reminds us that kriging is an exact interpolator: as a data point is
located exactly on top of the target node, it receives all the weight (100%) and no other information
carries weight.
In order to perform filtering, we must press the Special Model Options button and ask for the
Filtering option. The covariance and drift components are now displayed, where you have to select
the item that you wish to filter. The principle is to consider that the measured variable (denoted Z)
is the direct sum of two uncorrelated quantities, the underlying true variable (denoted Y) and the
noise (denoted ε): Z = Y + ε. Due to the absence of correlation, the experimental variogram may be
interpreted as the sum of a continuous component (the Cubic variogram) attributed to Y and the
nugget effect corresponding to the noise ε. Hence filtering the nugget effect is equivalent to
suppressing the noise from the input image.


(snap. 22.3-3)

When pressing the Apply button, the filtering procedure is automatically resumed on the graphic
page, using the same target grid node as in the previous test: you can check that the weights are now
shared over all the neighboring information, although they still add up to 100%.
Before starting the filtering on the whole grid, the neighborhood has to be tuned. An efficient
quality index frequently used in image analysis, called the Signal to Noise Ratio, is provided when
displaying the Results (in the Application Menu of the graphic page). Roughly speaking, the larger
this quantity, the more accurate the result.
The following table summarizes some trials that you can perform. The average number of data in
the neighborhood is recalled, as it directly conditions the computing time.
The Ratio increases quickly and then seems to converge for a radius equal to 8-9. Trying a
neighborhood of 10 and a skipping ratio of 2 does not lead to satisfactory results. It is then decided
to use a radius of 8 for the kriging step.
Radius | Number of nodes | Skipping Ratio | Signal to Noise Ratio
1      | 9               | 1              | 3.3
2      | 25              | 1              | 9.1
3      | 49              | 1              | 17.8
4      | 81              | 1              | 29.9
5      | 121             | 1              | 41.5
6      | 169             | 1              | 53.7
7      | 225             | 1              | 63.4
8      | 289             | 1              | 69.5
9      | 361             | 1              | 72.4
10     | 441             | 1              | 73.5
10     | 222             | 2              | 49.37

An interesting concern is to estimate a target grid node located in the corner of the grid. In order to
keep the data pattern unchanged for all the target nodes, including those located on the edge of the
field, the field is virtually extended by mirror symmetry. In the following display, the weights
attached to virtual points are added to those attached to the actual source data.

(snap. 22.3-4)

The final task consists in performing the filtering on the whole grid.


Note - The efficiency of this kriging application comes from the fact that it takes full advantage of
the regular pattern of the information when solving a kriging system with 121 neighborhood data
for each of the 65536 grid nodes.
The resulting variable varies from 29 to 45, to be compared with the initial statistics. It can be
displayed like the initial image, with an adapted color scale.

(fig. 22.3-1)

Kriging Filter
This image shows more regular patterns with larger extension for the patches of low and high P values. Compared to the initial image, it shows that the noise has clearly been removed.


22.4 Other Techniques


We return to the basic assumption that the measured variable Z is the combination of the underlying
true variable Y and the noise ε:

Z = Y + ε

(eq. 22.4-1)

It is always assumed that the noise is a zero-mean quantity, uncorrelated with Y, whose variance
is responsible for the nugget effect component of the variogram. In order to reduce the noise, a
simple solution is to perform a convolution over several consecutive pixels of the grid: this
technique corresponds to one of the actions offered by the Tools / Grid or Line Smoothing operation.
On a regular grid, the low pass filtering algorithm performs the following very simple operation on
three consecutive grid nodes in one direction:

Z_i <- (1/4) Z_{i-1} + (1/2) Z_i + (1/4) Z_{i+1}

(eq. 22.4-2)

A second pass is also available which enhances the variable and avoids flattening it too much. It
operates as follows:

Z_i <- -(1/4) Z_{i-1} + (3/2) Z_i - (1/4) Z_{i+1}

(eq. 22.4-3)

When performed on a 2D grid and using the two filtering passes, the following sequence is
performed on the whole grid:
- filter the initial image along X with the first filtering mode,
- filter the result along X with the second filtering mode,
- filter the result along Y with the first filtering mode,
- filter the result along Y with the second filtering mode.

If several iterations are requested, the whole sequence is resumed, replacing the initial image by the
result of the previous iteration when starting a subsequent iteration. This mechanism can be constrained so that the impact of the filtering on each grid node is not stronger than a cutoff variable
(the estimation standard deviation map for instance): this feature is not used here.
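One iteration of this two-pass scheme can be sketched directly in NumPy. The sign pattern of the
second pass is reconstructed from the equation above (the coefficients -1/4, 3/2, -1/4 are the usual
sharpening complement of the smoothing pass), and the border is handled by wrap-around for
brevity, which differs from what Isatis does at the edges:

import numpy as np

def smooth_pass(z, axis):
    # First pass: Z_i <- Z_{i-1}/4 + Z_i/2 + Z_{i+1}/4
    return 0.25 * np.roll(z, 1, axis) + 0.5 * z + 0.25 * np.roll(z, -1, axis)

def enhance_pass(z, axis):
    # Second pass: Z_i <- -Z_{i-1}/4 + 3*Z_i/2 - Z_{i+1}/4
    return -0.25 * np.roll(z, 1, axis) + 1.5 * z - 0.25 * np.roll(z, -1, axis)

def low_pass_iteration(z):
    # Both passes along X, then both passes along Y
    z = enhance_pass(smooth_pass(z, axis=1), axis=1)
    z = enhance_pass(smooth_pass(z, axis=0), axis=0)
    return z

img = np.random.default_rng(0).normal(35.0, 7.0, (256, 256))
for _ in range(20):  # the case study uses 20 iterations
    img = low_pass_iteration(img)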
We decide empirically to perform 20 iterations of the two-pass filtering on the initial image (P)
and to store the result in a new variable called P smoothed.


(snap. 22.4-1)

The result is displayed using the same type of representation as before. Nevertheless, please pay
attention to the difference in color coding. The image also shows much more structured patterns
although this time the initial high frequency has only been diluted (and not suppressed) which
causes the spotted aspect.

(fig. 22.4-1)

Low Pass Filter


Using the same window Tools / Grid or Line Smoothing, we can try another operator such as the
Median Filtering. This algorithm considers a 1D neighborhood of a target grid node and replaces its
value by the median of the neighboring values. In 2D, the whole grid is first processed along X, and
the result is then processed along Y. If several iterations are required, the whole sequence is
resumed. Here, two iterations are performed with a neighborhood radius of 10 pixels (excluding the
target grid node) so that each median is calculated on 21 pixels. The result is stored in the new variable called P median.

(snap. 22.4-2)
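The separable median filtering just described is easy to reproduce with SciPy (synthetic input; the
window of 21 pixels matches the radius of 10 used here):

import numpy as np
from scipy.ndimage import median_filter

img = np.random.default_rng(0).normal(35.0, 7.0, (256, 256))
for _ in range(2):                          # two iterations
    img = median_filter(img, size=(1, 21))  # 1D medians along X (rows)
    img = median_filter(img, size=(21, 1))  # then along Y (columns)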

The result is displayed with the same type of representation as before: it is even smoother than the
kriging result, which is not surprising given the length of the neighborhood selected for the median
filter algorithm.


(fig. 22.4-2)

Median Filter
The real drawback of these two methods is the lack of control over the choice of the parameters
(number of iterations, width of the neighborhood). In the case of kriging, on the contrary, the
quantity to be filtered is derived from the model, which relies on statistics calculated on the actual
data, and the neighborhood is simply a trade-off between accuracy and computing time.


22.5 Comparing the Results


22.5.1 Connected Components
The idea is to use the Interpolate / Interpolation / Grid Operator which offers several functions
linked to the Mathematical Morphology to operate on the image.
The window provides an interpreter which sequentially performs all the transformations listed in
the calculation area. The formula involves:
- Variables (which are defined in the upper part of the window) through their aliases: v* for 1-bit variables and w* for real variables.
- Thresholds, which correspond to intervals and are called t*.
- Structural elements, which define a neighborhood between adjacent cells and are called s*. In addition to their extension (defined by its radius in the three directions), the user can choose between the block or the cross element, as described in the next figure:

[Figure: structural elements - Cross (X=2; Y=1) and Block (X=2; Y=1)]

Here the procedure is used to perform two successive tasks:


- Using the input variable P denoised, first apply a threshold considering as grain any pixel whose value is larger than 40 (inclusive); otherwise the pixel corresponds to pore. The result is stored in a 1-bit variable called grain (P denoised): in fact this variable is a standard selection variable that can be used in any other Isatis application, where the pores correspond to the masked samples.
- Calculate the connected components and sort them by decreasing size. A connected component is composed of the set of grain pixels which are connected through the structural element. The result, which is the rank of the connected component, is stored in the real variable called cc (P denoised).

(snap. 22.5-1)
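The same two steps can be sketched with SciPy's connected-component labelling (a plain
illustration of the idea, not the Isatis Grid Operator itself; the structuring element below is the
cross with radius 1):

import numpy as np
from scipy.ndimage import label

img = np.random.default_rng(0).normal(35.0, 7.0, (256, 256))
grain = img >= 40                        # grain where P >= 40, pore otherwise

cross = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]])            # cross element (4-connectivity)
labels, n = label(grain, structure=cross)

sizes = np.bincount(labels.ravel())[1:]  # label 0 is the pore background
print(n, np.sort(sizes)[::-1][:5])       # component count and 5 largest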

The procedure also produces a printout, listing the different connected components by decreasing
size, recalling the cumulative percentage of grain.


The same procedure is also applied on the three resulting images. The following table recalls some
general statistics for the 3 variables:

Resulting Image                  | P denoised | P median | P smoothed
Total amount of grain            | 5308       | 5887     | 7246
Number of connected components   | 11         | 4        | 122
5 largest components (in pixels) | 1718       | 1882     | 1733
                                 | 1008       | 1864     | 1650
                                 | 978        | 1167     | 1257
                                 | 862        | 974      | 842
                                 | 543        |          | 174

The different results are produced as images where the pore is painted in black.

(fig. 22.5-1)

Grains for Kriging Filter


(fig. 22.5-2)

Grains for Low Pass Filter

(fig. 22.5-3)

Grains for Median Filter


22.5.2 Cross-sections
The second way to compare the three resulting images consists in representing each variable as the
elevation along one cross-section drawn through the grid.
This is performed using a Section in 2D Grid representation of the Display facility, applied to the 3
variables simultaneously. The parameters of the display are shown below.

(snap. 22.5-2)

Clicking on the Trace... button allows you to specify the trace that will be represented. For instance,
to represent the first diagonal of the image, enter the following vertices:


(snap. 22.5-3)

In the Display Box tab of the Contents window, modify the Z Scaling Factor to 0.0005.
The three profiles are shown in the next figure and confirm the previous impressions (P denoised in
red, P median in green and P smoothed in blue).

(fig. 22.5-4)


23. Boolean
This case study demonstrates some of the large variety of possibilities
offered by the implementation of the Boolean Conditional Simulations.
This simulation technique belongs to the category of Object Based simulations. It consists in dropping objects with different shapes (defined
by the user) in a 3D volume, fulfilling the conditioning information
defined in terms of pores and grains.

Last update: Isatis version 2014


23.1 Presentation of the Dataset


This simulation type requires data to be defined on lines in a 3-D space. The file bool_line.hd
contains a single vertical line (located at coordinates X=5000m and Y=5000m), constituted of 50
samples defined at a regular spacing of one meter, from 100m to 149m. You must load it using File /
Import / ASCII, creating a new Directory Boolean and a new File Lines. The ASCII file is located
in the Isatis installation directory/Datasets/Boolean.
Set the input-output length units to meters, the X and Y graphical axis units to kilometers and the Z
axis to meters in Preferences / Study Environment / Units.

(snap. 23.1-1)

Note - The dataset has been drastically reduced to allow a quick and good understanding of the
conditioning and reduce the computing time.
The file refers to a Line Structure which corresponds to the format used for defining several samples gathered along several lines (i.e. boreholes or wells) in the same file. The original file contains
five columns which correspond to:


- the sample number: it is not described in the header and will not be loaded (the software generates it automatically in any case),
- the coordinate of the sample gravity center along X,
- the coordinate of the sample gravity center along Y,
- the coordinate of the sample gravity center along Z,
- the variable of interest, called facies, which only contains 0 and 1 values. This information is considered as the geometrical input used for conditioning the boolean simulations. One can think of 0 for shale and 1 for sandstone to illustrate this concept. In this case study, the word grain is used for 1 values and the word pore for 0 values.

You need to go to Tools / Convert Gravity Lines to Core Lines, since the boolean simulation tool
works only with Core Lines. Convert the Lines using the From Isatis <v9 Lines File option.

(snap. 23.1-2)

The boolean conditional simulation is run on a regular grid which has to be created beforehand
using the File / Create Grid File facility. It consists of a regular 3-D grid containing 201 x 201 x 51
nodes, with a mesh of 50 m x 50 m x 1 m and whose origin is located at point (X=0; Y=0;
Z=100m). The user may check in the Data File Manager that the grid extends from 0m to 10000m
both in X and Y, and vertically from 100m to 150m.

(snap. 23.1-3)


23.2 Boolean Environment


The boolean simulation facility is located in the Interpolate / Conditional Simulations / Boolean
menu. This window requires the definition of several items, presented hereafter.

(snap. 23.2-1)

23.2.1 Conditioning Information


This item refers to the Line file that has been imported. The target variable is obviously the facies
variable. As this variable may contain any numerical value, it is compulsory to specify how this
numerical variable has to be converted into a boolean variable (only 0 and 1 values). This is done
by defining the threshold rule in the sub-window that pops up when pressing the button called Set
Definition.... Here the interval is simply set to [1,1].


23.2.2 Output Grid


The variable that will be used to store the result of the Boolean Conditional Simulation has to be
defined: it is called Simulation 1. For each grid node, this variable will contain the resulting
indicator value, i.e. a value equal to:
- 1 if the grid node belongs to at least one object,
- 0 if the node does not belong to any object.
An important point to remember is that, during the simulation process, the conditioning data are
assigned to the closest node of the grid. This discretization step implies that if two samples carrying
two different indicator values are assigned to the same grid node, an error message is sent and the
procedure is interrupted.
The procedure also uses the value "-1" to designate a grid node which coincides with a
conditioning grain value. This is the reason why the output variable is not created as a 1-bit
variable: the software uses the default 32-bit format.

23.2.3 Object Family Definition


This is where the user defines the shapes and dimensions of the objects to be simulated. Different
examples will be given in this case study; for the moment, just press the Add button and enter the
following parameters:

(snap. 23.2-2)

23.2.4 Parameters
The Boolean Conditional Simulation parameters are briefly described hereafter. For more information, the user should refer to the On-Line documentation.


This Object Based simulation technique consists in dropping objects in a 3D space, so that they
intersect the field (3D grid) to be simulated. Obviously, to have an even spread of objects over the
space, we must take into consideration not only the objects whose center lies within the field to be
simulated, but also those located in its immediate periphery. This periphery is called the Guard
Zone and is defined by its dimensions along the three main axes. Here they have been set to 1800m
along X, 1000m along Y and 2m along Z. This implies that the radius of the objects that we
consider should not be larger than these values. Note that no test is performed to ensure this
compatibility.
The objects are dropped according to a random process which requires the following parameters:
- the number of objects to be generated before the simulation stops. Actually, the user has to define either a Poisson intensity or the related average number of objects (dropped in the dilated domain) that the simulation aims to reach (1000 here).
- the seed used to generate the random values: to generate different outcomes of this boolean conditional simulation technique, it is compulsory to change this seed value before each run. Note that, if the seed is set to 0, Isatis automatically generates a different seed at each run.
The boolean simulation algorithm relies on a death and birth process which may either create or
delete objects. Therefore, the average number of objects must be considered as a target number
that will be reached only if the simulation is run for a long time. It is common practice, however,
to provide a Maximum Time that will be used to stop the process prematurely (100 here).
Moreover, this iterative process is performed in two steps: a preliminary step consists in dropping
some initial objects at preferential locations simply to fulfill the conditioning data. These initial
objects must disappear during the process.
A Graphic Output enables, after the run, to control the evolution of the total number of objects as
well as the proportion of initial objects (not visible without zooming in the lower left corner).

(fig. 23.2-1)


23.2.5 Theta Function


The definition of the Theta Function is available pressing the Theta Intensity button.

(snap. 23.2-3)

The density of the objects (regarding their centers) does not have to be even over the whole dilated domain. The Theta function $\theta_h(z)$ describes the object density along the vertical axis. It is defined as $-\log P_h(z)$ (i.e. $\log P_h(z)$ up to its sign), where $P_h(z)$ is the probability that some pores extend from $z$ to $z+h$ without encountering any grain in the meantime. The value $h$ corresponds to the Minimum Pore Length defined by the user in terms of layers. Finally, the Theta function can be smoothed by averaging its value over several consecutive layers. For more information, the user should refer to the On-Line documentation.
This Theta function might also be derived from the conditioning information (Calculate from Data
button) and displayed graphically. The picture corresponds to a minimum pore length of 1 and no
smoothing (Number of layers averaged set to 1).
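
A minimal sketch of how such a Theta profile could be estimated from conditioning wells is given below. This is an illustration of the definition above, not the Calculate from Data implementation; in particular, the final rescaling to [0, 1] is an assumption made to match the admissible range of the editable Theta values mentioned further down:

    import numpy as np

    def theta_from_data(wells, h=1):
        """wells: 2D array (n_wells, n_levels) of indicators, 1 = grain, 0 = pore.
        Returns an estimate of theta(z) = -log P_h(z), where P_h(z) is the
        fraction of wells that stay in pore from level z to level z + h."""
        wells = np.asarray(wells)
        n_wells, n_levels = wells.shape
        theta = np.zeros(n_levels - h)
        for z in range(n_levels - h):
            in_pore = np.all(wells[:, z:z + h + 1] == 0, axis=1)
            p = max(in_pore.mean(), 1e-6)      # avoid log(0) at all-grain levels
            theta[z] = -np.log(p)
        if theta.max() > 0:
            theta = theta / theta.max()        # assumed rescaling to [0, 1]
        return theta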


(snap. 23.2-4)

Simultaneously, Isatis calculates and displays three statistical quantities that may help analyzing the quality of the conditioning information and understanding the simulation process (a sketch of the last one is given after this list):

- the grain proportion, which simply tells us, for a given horizontal grid level, the proportion of the conditioning information that corresponds to grain;

- the histogram of the pore lengths;

- the pore survival function, which gives the average residual length of the pores whose length is larger than a given value, as a function of this value.
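
The pore survival function can be sketched as follows (a hedged illustration: "residual length" is interpreted here as the pore length minus the cutoff, which may differ in detail from the Isatis convention):

    import numpy as np

    def pore_survival(pore_lengths, cutoffs):
        """Average residual length of pores longer than each cutoff."""
        lengths = np.asarray(pore_lengths, dtype=float)
        out = []
        for c in cutoffs:
            longer = lengths[lengths > c]
            out.append((longer - c).mean() if longer.size else 0.0)
        return np.array(out)

    print(pore_survival([1, 2, 2, 4, 7], cutoffs=[0, 1, 2, 3]))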

Only the Theta function varies when the values for the Minimum Pore Length and the Number of layers averaged are modified. Set the Minimum Pore Length to 3 and check how the graphic is modified. You can then smooth out this function by setting the number of layers over which the function is averaged to 4.


Finally, the values of the Theta variable are displayed in a scrolled editable area, where the user can modify them by hand. Any value lying between 0 and 1 is admissible. Nevertheless, one must remember that a value of 0 at a given horizontal grid level implies that no object may be generated at this level; this constraint must at least be compatible with the conditioning information.
For the sake of simplicity, the rest of this chapter will be processed with the Minimum Pore Length set to 1 and no smoothing.


23.3 Simulations
This section focuses on the description of the Object Law. Each example describes an Object Family Definition and illustrates the result through the display of a simulation outcome.

23.3.1 Exercise 1
The first trial uses the already described parameters for a single type of parallelepipedic object. All the parallelepipedic objects have the same geometrical characteristics:

- extension along X = 1800m
- extension along Y = 1000m
- extension along Z = 2m

The next figure represents a display of the Z level number 10 of the grid using the Display facility. A grid node which does not intersect any object (value 0) is painted in black; if at least one object is intersected (value 1), the color is white. If a conditioning grain coincides with the grid node (value -1), the node is painted in grey. Due to the very fine definition of the grid (the picture corresponds to 200 x 200 grid nodes), the conditioning sample at this level (located at coordinates X=5000m, Y=5000m) is hardly visible.
[Figure: "Simulation 1" - map of grid level 10; axes X (km) vs. Y (km)]
(fig. 23.3-1)

23.3.2 Exercise 2
In this exercise, while keeping the parallelepipedic objects, set their vertical thickness equal to 3m. This run will fail and return errors specifying that some conditioning grains have not been covered successfully by objects. This simply reveals an incompatibility between the object description and the conditioning data: reading the conditioning data along a well, you can find (several times) the vertical sequence 0,1,1,0, which implies the presence of an object between two conditioning pores that are precisely 3m apart. Such a configuration is not compatible with objects whose thickness is constantly equal to 3m: any object covering the two grains would also cover one of the pores.
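
This incompatibility can be checked mechanically. The sketch below (illustrative only, assuming a 1m vertical mesh so that the bounding pores of a 0,1,1,0 sequence are 3m apart) flags grain runs squeezed between pores that are too close together for a constant-thickness object:

    def has_infeasible_run(column, thickness, mesh=1.0):
        """column: vertical indicators along a well (1 = grain, 0 = pore), one
        per cell of size `mesh`. True if some grain run sits between two pores
        whose separation cannot host an object of constant `thickness`."""
        runs, start = [], None
        for i, v in enumerate(column):
            if v == 1 and start is None:
                start = i
            elif v != 1 and start is not None:
                runs.append((start, i))
                start = None
        if start is not None:
            runs.append((start, len(column)))
        for s, e in runs:
            if s > 0 and e < len(column):          # pore on both sides of the run
                gap = (e - s + 1) * mesh           # center-to-center pore distance
                if thickness >= gap:
                    return True
        return False

    print(has_infeasible_run([0, 1, 1, 0], thickness=3.0))   # True: the failure above
    print(has_infeasible_run([0, 1, 1, 0], thickness=2.0))   # False: a 2m object fits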

23.3.3 Exercise 3
In this trial, the parallelepipeds are replaced by lower half ellipsoids. The object extensions are kept unchanged, except for the vertical extension which is set to 4m.
[Figure: "Simulation 3" - map view; axes X (km) vs. Y (km)]
(fig. 23.3-2)

The next picture presents a vertical section (XOZ) which intersects the 3D grid at coordinate Y=5000m (IY = 101). Do not forget to change the projection definition to XOZ in the Camera tab. The third dimension may be stretched for better legibility. This view is convenient to check that the conditioning is also fulfilled where the sample density is large.

(fig. 23.3-3)

Note that the vertical extension of the ellipsoids of this exercise (4m), though larger than in the previous exercise (3m high parallelepipeds), does not cause any problem, as the thickness of an ellipsoid is not constant over the whole object.
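
The following sketch illustrates why (an illustration of the geometry, not Isatis code): the vertical thickness of a lower half ellipsoid tapers to zero towards its rim, so its thin edges can slip between close conditioning pores even when the maximum thickness exceeds 3m.

    import math

    def half_ellipsoid_thickness(dx, dy, a=900.0, b=500.0, c=4.0):
        """Vertical thickness at horizontal offset (dx, dy) from the center,
        with semi-axes a, b (half the 1800m and 1000m extensions), height c."""
        r2 = (dx / a) ** 2 + (dy / b) ** 2
        return c * math.sqrt(1.0 - r2) if r2 < 1.0 else 0.0

    print(half_ellipsoid_thickness(0, 0))     # 4.0m at the center
    print(half_ellipsoid_thickness(850, 0))   # about 1.3m near the rim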


23.3.4 Exercise 4
This exercise simulates lower half sinusoidal objects. This type of object requires 6 parameters (a plan-view sketch is given after this list):

- the value for half of the period (extension): 1300m,
- the amplitude of the sine function: 500m,
- the thickness of the sine function: 400m,
- the extension of the object along the sine function in the horizontal plane: 4000m,
- the extension along Z: 4m,
- the rotation angle: 0.
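
In plan view, such an object can be pictured as a channel-like band following a sine centerline. The sketch below is a hedged interpretation of the parameters listed above (parameter names and the membership rule are illustrative, not the Isatis internals):

    import math

    def in_sinusoid_plan(dx, dy, half_period=1300.0, amplitude=500.0,
                         thickness=400.0, extension=4000.0):
        """True if the horizontal offset (dx, dy) from the object origin falls
        within the sinusoidal band (rotation angle 0 assumed)."""
        if not 0.0 <= dx <= extension:
            return False
        center_y = amplitude * math.sin(math.pi * dx / half_period)
        return abs(dy - center_y) <= thickness / 2.0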


[Figure: "Simulation 4" - map view; axes X (km) vs. Y (km)]
(fig. 23.3-4)

23.3.5 Exercise 5
Several types of objects may be mixed in the same simulation outcome. For instance, combine the three types of objects already presented and set the following proportions for each family of objects (see the selection sketch after this list):

- 10% of parallelepipedic objects (1800m, 1000m, 2m),
- 60% of lower half ellipsoids (1800m, 1000m, 4m),
- 30% of lower half sinusoids (1300m, 500m, 400m, 4000m, 4m).
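
Conceptually, each new object then picks its family according to these proportions before being dropped, as in this minimal sketch (illustrative only):

    import random

    FAMILIES = [("parallelepiped", 0.10),
                ("half_ellipsoid", 0.60),
                ("half_sinusoid", 0.30)]

    def pick_family(rng=random):
        names = [name for name, _ in FAMILIES]
        weights = [p for _, p in FAMILIES]
        return rng.choices(names, weights=weights)[0]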


[Figure: "Simulation 5" - map view; axes X (km) vs. Y (km)]
(fig. 23.3-5)

23.3.6 Exercise 6
In this exercise, set the object type back to the lower half ellipsoidal objects, in order to demonstrate non-constant geometrical parameters. Simply modify the definition of the extension along X of the ellipsoids: instead of being constantly equal to m = 1800m, a tolerance s = 1000m is defined, so that the extension varies uniformly between m-s and m+s (i.e. between 800m and 2800m).
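
The tolerance mechanism amounts to a uniform draw per object, as in this one-line sketch (illustrative names):

    import random

    def draw_extension(m=1800.0, s=1000.0, rng=random):
        return rng.uniform(m - s, m + s)   # here: uniform in [800m, 2800m]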
[Figure: "Simulation 6" - map view; axes X (km) vs. Y (km)]
(fig. 23.3-6)


23.3.7 Exercise 7
This final example plays with the rotation angle. Keeping the initial lower half ellipsoids, allow the rotation angle to vary in an interval centered around 45 degrees with a tolerance equal to 20 degrees (i.e. from 25 to 65 degrees from the E-W direction).
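
Geometrically, each object draws its own azimuth and the membership test is performed in the rotated object frame. A minimal sketch of this idea (illustrative, in plan view, not Isatis code):

    import math
    import random

    def in_rotated_ellipse(dx, dy, a=900.0, b=500.0, angle_deg=45.0):
        """True if the horizontal offset (dx, dy) from the object center falls
        inside the ellipse rotated by angle_deg from the E-W direction."""
        t = math.radians(angle_deg)
        u = dx * math.cos(t) + dy * math.sin(t)    # offset in the object frame
        v = -dx * math.sin(t) + dy * math.cos(t)
        return (u / a) ** 2 + (v / b) ** 2 <= 1.0

    angle = random.uniform(25.0, 65.0)             # one azimuth per object
    print(in_rotated_ellipse(300.0, 300.0, angle_deg=angle))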
[Figure: "Simulation 7" - map view; axes X (km) vs. Y (km)]
(fig. 23.3-7)

