
Special section: Formation evaluation using petrophysics and borehole geophysics

Automated gamma-ray log pattern alignment and depth matching by machine learning
Shirui Wang1, Qiuyang Shen1, Xuqing Wu2, and Jiefu Chen1

Abstract
Depth matching of multiple logging curves is essential to any well evaluation or reservoir characterization.
Depth matching can be applied to various measurements of a single well or multiple log curves from multiple
wells within the same field. Because many drilling advisory projects have been launched to digitalize the well-log
analysis, accurate depth matching becomes an important factor in improving well evaluation, production, and
recovery. It is a challenge, though, to align the log curves from multiple wells due to the unpredictable structure
of the geologic formations. We have conducted a study on the alignment of multiple gamma-ray well logs
using state-of-the-art machine-learning techniques. Our objective is to automate the depth-matching task
with minimum human intervention. We have developed a novel multitask learning approach by using a deep
neural network to optimize the depth-matching strategy that correlates multiple gamma-ray logs in the same
field. Our approach can be extended to other applications as well, such as automatic formation top labeling for
an ongoing well given a reference well.

Introduction

Digitalization becomes a key factor in improving drilling productivity. Under the new digital framework, the drilling data and parameters are transferred, integrated, and processed by the digitalized advisory system. The visualization of real-time drilling parameters and the intelligent analysis process are key components in the system. Monitoring drilling data from multiple wells and performing evaluations across wells require synchronizing their depth scales, which is important for accurate formation interpretation and precise drilling control. As functions of depth, different types of measurements support formation evaluation by probing the physical properties around the borehole, such as rock conductivity, radioactivity, porosity, and permeability. Gamma-ray logs are widely used for depth matching or correlation between wells because they are a good indicator of natural radioactivity and are standardized as a basic measurement. A distinct pattern in the gamma-ray log is oftentimes presented when there is a disruptive change of formation properties or rock types. Therefore, the task of depth matching two wells reduces to aligning gamma-ray measurements by identifying similar patterns among gamma-ray log curves.

The traditional practice of pattern alignment was accomplished manually by picking similar sequential patterns among different gamma-ray log curves. This labor-intensive job is prone to human error and is tarnished with imprecise labeling due to the lack of a consistent evaluation criterion. Reliability becomes a big issue even for an experienced geologist when facing complex curve patterns. Various methods have been proposed to assist the curve-matching task (Zangwill, 1982; Kerzner, 1984; Lineman et al., 1987; Zoraster et al., 2004). Measuring the crosscorrelation between two sequences is the most straightforward approach, in which a high crosscorrelation indicates high similarity, hence a potential match. However, crosscorrelation is sensitive to measurement noise and other perturbation factors that could distort the log curves and introduce false-positive matches. Some common distortion factors include shifting, pattern stretching and compacting, and missing measurements. Moreover, the choice of the distance metric also plays a significant role in quantifying the similarity. Dynamic time warping (DTW) has been applied to compare curves in many works across various fields (Aach and Church, 2001; Kholmatov and Yanikoglu, 2005; Petitjean et al., 2011), and it is a distortion-tolerant approach.

¹University of Houston, Cullen College of Engineering, Department of Electrical and Computer Engineering, Houston, Texas 77004, USA. E-mail: sruiwang1182@gmail.com; sqygg@hotmail.com; chenjiefu@gmail.com.
²University of Houston, College of Technology, Department of Information and Logistics Technology, Houston, Texas 77004, USA. E-mail: xwu8@central.uh.edu.
Manuscript received by the Editor 11 September 2019; revised manuscript received 14 November 2019; published ahead of production 18 December 2019; published online 23 March 2020. This paper appears in Interpretation, Vol. 8, No. 3 (August 2020); p. SL25–SL34, 10 FIGS., 3 TABLES. http://dx.doi.org/10.1190/INT-2019-0193.1. © 2020 Society of Exploration Geophysicists and American Association of Petroleum Geologists. All rights reserved.



DTW (Sakoe, 1971; Sakoe et al., 1990) was proposed for measuring the similarity between two temporal sequences, and it has been implemented in many signal-processing applications such as speech recognition. Unlike correlation, DTW takes the shape transformation between sequences into account and searches for the optimal match even when distortion exists.

However, the direct use of DTW to align two sequences may not be ideal due to its high computational cost. The complexity of the algorithm is O(n²), where n denotes the length of the matching sequence, so the length of the subsequence cannot be too large or the computational time increases dramatically. More importantly, DTW presumes that the two ends of the candidate curves are aligned, so a manual alignment of both ends is needed before the matching process. By using a sliding window, DTW compares all of the subsequences of the target one by one and takes the one with the minimum distance as the matched pattern. The DTW-based matching workflow becomes complicated when considering the length of the logging curve, different depth resolutions, the potential shifting range in the current work, etc. It is a delicate task to deploy the DTW method directly to solve the depth-matching problem.

Due to the aforementioned limitations, the industry still relies on the empirical interpretation of geologists. Although there exist algorithms to measure the correlation between different logging curves, they suffer from either imprecision or a dramatic increase of computing time as the number of wells increases. As one of the most promising solutions, machine-learning-based algorithms have been gradually adopted and applied to many areas. A recent review (Bergen et al., 2019) summarized several applications using machine-learning and data-driven methods. In particular, cumulative historical data can be used to train a predictive model to analyze new data. In other words, we could build an intelligent depth-matching algorithm by observing a large number of well logs with named well tops and mining implicit curve patterns. Zimmermann et al. (2018) and Liang et al. (2019) present a neural-network-based solution to the log synchronization problem. By comparing a candidate pattern segment and a subsequence of the same length on the target log, the model can locate the position of the matching pattern on the target log. Their result demonstrates that a neural network can be used to quantify the similarity of two fixed-length curves with high confidence. However, because the search is conducted locally, there is a limitation on the maximum shifting distance, and the method is not suitable for two curves with a significant shift over a great depth range.

In this paper, we take advantage of the multitask learning technique and propose a dynamic matching algorithm using a deep neural network (DNN). DNNs have been proven to deliver satisfactory performance (Krizhevsky et al., 2012) for many pattern-recognition and prediction-related tasks. Leveraging the unsupervised feature extraction performed hierarchically through a process of coarse graining, a DNN is used in this paper to learn the sequential patterns of gamma-ray log curves. To deliver a fully automated depth-matching framework, our DNN also learns a nonlinear matching strategy to overcome the shift between curves. The objective of the network is to automatically locate the corresponding depth of the proposed patterns for a new logging sequence by comparing it to a reference well log.

In this paper, a dynamic matching system is proposed by using a DNN built upon 1D convolution layers. The system can automatically suggest a tracking movement to find the best-matching pattern between a query and a target gamma-ray log. A multitask learning approach is adopted to take global and local information into consideration during the pattern matching. The system is trained and tested with synthetic data augmented from real logging data, and its performance is reported in the "Experiment" section. Future extensions and improvements are discussed at the end.

Conventional methods

Crosscorrelation and DTW are two classic methods for measuring the similarity or distance between two time series. However, there are inevitable drawbacks when directly using these algorithms. Crosscorrelation is defined in equation 1, which is the integral of two time series at each corresponding time-delay point. It is efficient when the data are clean and the shape of the pattern is fixed. But in the gamma-ray log depth-matching problem, in which distortion and noise take place, crosscorrelation becomes inapplicable. As shown in Figure 1, the matched pattern is far from the ground truth:

(f ⋆ g)[n] = Σ_{i=−∞}^{∞} f[i] g[n + i].   (1)

Instead of measuring the difference between curves point-wise, DTW takes a dynamic-programming approach to locate the optimal matching pattern globally. It captures the global trajectory information and mitigates the influence of distortion, shifting, and noise. The rules followed by DTW are shown in equation 2, where A = {a_i}_{i=1}^{m} and B = {b_i}_{i=1}^{n} denote two 1D series of lengths m and n. The term δ(a_i, b_j) is the distance between a_i and b_j (usually the Euclidean distance). The distance is computed recursively from the first points of A and B to the end:

D(A_i, B_j) = δ(a_i, b_j) + min{D(A_{i−1}, B_j), D(A_{i−1}, B_{j−1}), D(A_i, B_{j−1})}.   (2)

The disadvantage of this approach is obvious: It is a nonlinear algorithm, and the computational complexity is high. Thus, for a well log covering a depth of thousands of feet, depth matching with a DTW-based approach is very time consuming. The execution times for inputs of different lengths can be seen in Table 1.
Machine-learning-based algorithm

Given a segment of a gamma-ray series (query) that indicates a proposed pattern on a reference log (reference), the aim is to find the best-matched subsequence on another gamma-ray series (target). We propose a DNN-based algorithm to solve this problem. A DNN with 1D convolution layers is built to extract features from a given gamma-ray log. The concept of a tracker is used here to describe the matching process. To be specific, the tracker is a sliding window with a fixed length, and the central point of the window indicates the current position on the target. Given a query subsequence and a tracker initialized at some depth, the tracker can move back and forth on the target well and compare the current subsequence within the tracker to the query until it finds the best-matched pattern and stops. The action of the window is controlled by its moving directions (e.g., left versus right), which are automatically predicted by the DNN. This process can be seen in Figure 2.
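As a concrete illustration of the tracker abstraction, the small class below models it as a fixed-length window identified by its center index; the class and method names are our own and are only meant to mirror the description above.

```python
import numpy as np

class Tracker:
    """Fixed-length sliding window on the target log, identified by its center index."""

    def __init__(self, target, length, center):
        self.target = np.asarray(target, dtype=float)
        self.length = length
        self.center = center

    def window(self):
        """Subsequence of the target currently covered by the tracker."""
        start = self.center - self.length // 2
        return self.target[start:start + self.length]

    def move(self, step):
        """Shift the center by `step` samples (positive = forward/deeper),
        clamped so that the window stays inside the target log."""
        half = self.length // 2
        lo, hi = half, len(self.target) - (self.length - half)
        self.center = int(np.clip(self.center + step, lo, hi))
```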

Figure 1. Pattern matching by crosscorrelation. The upper row shows the target log on the left and the candidate subsequence to match on the right. The lower row demonstrates the crosscorrelation at each depth for the candidate. The green line indicates the ground truth of the matching, and the red line marks the matching depth by the crosscorrelation method.

Figure 2. An illustration of the curve-matching process. The series in the blue box is the query, and the purple box represents the initialization of the tracker on the target. The tracker will move forward or backward on the target series until the DNN finds the best matching (red box) and stops the tracker.

Table 1. Time consumption of DTW: execution time on inputs of different lengths.

Length (points)   100    200    400    800     1000    2000    4000     8000
Time (s)          0.16   0.68   2.71   10.93   17.12   68.14   276.56   1123.21



Deep neural network

The DNN has been extensively used in the geoscience domain recently due to its outstanding performance in many classification and regression applications. The convolutional neural network (CNN) is one of the most successful network structures and is widely used in 2D image processing for image classification, semantic segmentation, and object detection. A CNN consists of multiple convolutional layers with sets of learnable filters. During the forward pass, it extracts semantic information and features from the given images or time series. These high-level features are very robust with regard to shifting or distortion caused by noise. Because weights are shared among different nodes within a layer, the forward pass is computationally efficient. In this paper, we adopt the CNN structure and use 1D convolutional layers to convolve the gamma-ray logs; the extracted features are then classified by the following fully connected layers to predict the action for the tracker. The network structure is shown in Figure 3.

Given a target log of length L and a query log of length l, the input of the CNN consists of two channels. The first channel is the log of the target well, a data series represented by S = {s_i}_{i=0}^{L−1}. The second channel is a replicate of the first channel except for a subsequence of length l replaced by the query log Q = {q_i}_{i=0}^{l−1}. Let us specify the index of the central point of the current tracking window on the target as C_T, where ⌊l/2⌋ ≤ C_T ≤ L − ⌊l/2⌋. The data series of the second channel S′ can be formed by

s′_i = q_{i − (C_T − ⌊l/2⌋)} if i ∈ [C_T − ⌊l/2⌋, C_T + ⌊l/2⌋ − 1], and s′_i = s_i otherwise,   (3)

where i ∈ {0, ..., L − 1}. Thus, the dual-channel input is I = [S, S′]. Figure 4 shows the construction of the input given a target log, a query log, and the current position of the tracker. The network deploys a multitask learning scheme and has two outputs, as shown in Figure 3: (1) an action instruction A_k, a vector of length three indicating the three alternative actions [forward, backward, stop] suggested by the network for moving the tracker at time τ_k, and (2) a matching probability M_k, a scalar indicating whether the query has been matched by the tracker at the current position. By exploiting commonalities and differences across action prediction and matching justification, multitask learning can help improve the performance and robustness of the neural network.

Dynamic matching system

The workflow of the matching process is shown in Figure 3. At the beginning, a query Q of length l is given with its central point at location C_Q = index(q_{⌊l/2⌋}) on the reference log. A tracker T_0 = {t_{0,i}}_{i=0}^{l−1} of the same length l is initialized on the target log series S = {s_i}_{i=0}^{L−1} at the same depth (i.e., C_{T_0} = index(t_{0,⌊l/2⌋}) = C_Q). Thus, we get the initial input of the network, I_0 = [S, S′_0]. Assume that the center of the best-matched tracking window for the query is located at C*, with ⌊l/2⌋ ≤ C* ≤ L − ⌊l/2⌋, on the series S. At each step τ_k ∈ {τ_0, ..., τ_n}, the network iteratively takes an input I_k and predicts a matching probability scalar M_k, which indicates whether the current tracker has matched the query, and an action probability vector A_k, which indicates a "moving forward" (if C_{T_k} < C*), "moving backward" (if C_{T_k} > C*), or "stopping" (if C_{T_k} = C*) action. Based on the action prediction vector, a stochastic action scheme is performed on the tracker to make it move forward (C_{T_{k+1}} = C_{T_k} + stepsize) or backward (C_{T_{k+1}} = C_{T_k} − stepsize) along the target S and to stop when the pattern is matched, i.e., when M_k > threshold.

Figure 3. An overview of the workflow and the structure of the neural network. The input is a dual-channel vector (9000 × 2).
Each box represents a multichannel feature map.
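To make the two-head, 1D-convolutional design concrete, here is a minimal PyTorch sketch of such a network. The layer counts, channel widths, and kernel sizes are placeholders chosen for illustration (the paper does not specify them); only the overall shape (dual-channel input, shared 1D convolutional trunk, a three-way action head, and a scalar matching head) follows the description above.

```python
import torch
import torch.nn as nn

class DepthMatchNet(nn.Module):
    """Two-head 1D CNN: shared convolutional trunk, then an action head
    (forward/backward/stop logits) and a matching head (match probability)."""

    def __init__(self, input_length=9000):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),          # fixed-size feature map
            nn.Flatten(),
        )
        feat = 64 * 32
        self.action_head = nn.Linear(feat, 3)  # forward / backward / stop
        self.match_head = nn.Linear(feat, 1)   # matched or not

    def forward(self, x):                      # x: (batch, 2, input_length)
        z = self.trunk(x)
        action_logits = self.action_head(z)
        match_prob = torch.sigmoid(self.match_head(z))
        return action_logits, match_prob
```

During training, the action head would be paired with a softmax cross-entropy loss and the matching head with a binary cross-entropy loss, which are then combined by the multitask loss discussed below.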

Figure 4. The process to construct the input for the neural network.
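A minimal sketch of the input construction in equation 3 and Figure 4 is shown below; the function name and its arguments are our own, and the boundary handling is simplified.

```python
import numpy as np

def build_input(target, query, center):
    """Assemble the dual-channel input I = [S, S'] of equation 3.

    target : 1D array of length L (target log S)
    query  : 1D array of length l (proposed pattern Q)
    center : tracker center index C_T on the target
    """
    L, l = len(target), len(query)
    start = center - l // 2                   # C_T - floor(l/2)
    assert 0 <= start <= L - l, "tracker window must lie inside the target log"

    s_prime = target.astype(float).copy()
    s_prime[start:start + l] = query          # window replaced by the query
    return np.stack([target, s_prime])        # shape (2, L): channels S and S'
```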



Stochastically weighted action

The model produces an action prediction at each matching iteration. Occasionally, mispredictions happen during this dynamic process, and the tracker's movement can be stalled before it reaches the optimal location. To improve the overall robustness of the system, we take a stochastic approach to jump out of local minima. Instead of depending on the deterministic prediction at a single location, the action decision scheme consists of two steps: (1) For a tracker T_k located at C_{T_k} at the kth step, we take its adjacent locations into account. Sliding the tracking window from C_{T_k} − ⌊m/2⌋ to C_{T_k} + ⌊m/2⌋, we generate m inputs I_k^m = [I_{k,0}, ..., I_{k,m−1}] and feed them into the network. Thus, for each tracker T_k, the network outputs m prediction vectors for the action. Stacked together, they form an m × 3 probability matrix A_k^m = [A_{k,0}, ..., A_{k,m−1}]. The average of each column forms a probability vector:

Ā_k = (1/m) Σ_{i=0}^{m−1} A_{k,i} = [P_f, P_b, P_s].   (4)

The averaging process acts as a smoother, eliminates small mispredictions during the process, and effectively increases the accuracy. The same averaging process is also applied to the matching probability M_k. (2) Based on Ā_k, a historically averaged action probability vector is computed as

Â_k = (1/k) Σ_{i=0}^{k−1} Ā_i if k > 0, and Â_k = Ā_k otherwise.   (5)

After the two steps, a randomization scheme is implemented to choose the action. The next moving direction is determined by sampling an action according to Â_k. The moving step size is the product of a default step size and the probability of the chosen action, i.e.,

direction ∼ Â_k,  stepsize = P_direction × default stepsize,   (6)

where P_direction is the component of Â_k corresponding to the sampled direction. This stochastic action selection can effectively avoid local-minimum traps. The complete dynamic depth-matching algorithm is shown in Algorithm 1.

Algorithm 1. Dynamic matching system.
Require: Queries Q. Target log S. Trained model M. Averaging number m.
Ensure: Use a tracker T to find the alignment of Q on S, where C_T = C*.
1: Initialize T_0 (C_{T_0} = C_Q)
2: Generate I_k^m (k = 0) from T_0
3: repeat
4:   A_k^m, M_k^m ⇐ M(I_k^m)
5:   Ā_k, M̄_k = (1/m) Σ_{i=0}^{m−1} A_{k,i}, (1/m) Σ_{i=0}^{m−1} M_{k,i}
6:   if M̄_k > threshold then
7:     break
8:   end if
9:   Â_k = (1/k) Σ_{i=0}^{k−1} Ā_i
10:  action_k = stochastic_action(Â_k)
11:  T_{k+1} ⇐ T_k(action_k)
12:  I_{k+1}^m ⇐ T_{k+1}
13:  k = k + 1
14: until k == k_max
15: if M̄_k > threshold then
16:   preserve T_k
17: else
18:   discard T_k
19: end if
20: Output: C_{T_k}

Experiment

Data processing

To augment the training set, we generate 177 synthetic gamma-ray log pairs based on 59 real logs by adding elastic distortion (Simard et al., 2003) and random shifting to them. The logs are split into training and testing data proportionally (83% and 17%). In total, 6000 input samples are generated for each log pair, and we collected more than 1 million training samples in total, splitting off 20% of them for validation. Error-tolerant matching criteria are defined to improve the smoothness of the model. For a tracker centered at C_T and a query that matches the target log at C*, the ground-truth action prediction A* and matching probability M* are defined as

A* = [1, 0, 0] if C_T < C* − 25;  [0, 1, 0] if C_T > C* + 25;  [0, 0, 1] otherwise,   (7)

M* = 1 if C_T ∈ [C* − 25, C* + 25], and M* = 0 otherwise.   (8)
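As a sketch of one iteration of Algorithm 1 with the stochastically weighted action of equations 4-6, the function below averages the predictions of the m adjacent windows, keeps a running history of the smoothed action vectors, and samples the next move. It reuses the hypothetical build_input helper from the earlier sketch; the model interface, step clamping, and default values are assumptions of ours.

```python
import numpy as np

def matching_step(model, target, query, center, history, m=5, default_step=50):
    """One iteration of the dynamic matching loop (equations 4-6).

    model   : callable mapping a (2, L) input to ([P_f, P_b, P_s], match_prob)
    history : list of previously smoothed action vectors (for equation 5)
    Returns (averaged match probability, new tracker center, updated history)."""
    half = len(query) // 2
    lo, hi = half, len(target) - (len(query) - half)

    # Equation 4: average predictions over the m adjacent tracker positions.
    actions, matches = [], []
    for off in range(-(m // 2), m // 2 + 1):
        c = int(np.clip(center + off, lo, hi))
        a, p = model(build_input(target, query, c))   # build_input: earlier sketch
        actions.append(a)
        matches.append(p)
    a_bar, m_bar = np.mean(actions, axis=0), float(np.mean(matches))

    # Equation 5: historical average of the smoothed action vectors.
    history = history + [a_bar]
    a_hat = np.mean(history, axis=0)

    # Equation 6: sample a direction and scale the step by its probability.
    direction = int(np.random.choice(3, p=a_hat / a_hat.sum()))  # 0=fwd, 1=bwd, 2=stop
    step = int(round(a_hat[direction] * default_step))
    if direction == 0:
        center = int(np.clip(center + step, lo, hi))
    elif direction == 1:
        center = int(np.clip(center - step, lo, hi))
    return m_bar, center, history
```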



Multitask loss

In multitask training, each loss for a task is assigned a weight based on its relative importance. The weights are often chosen and tuned manually, which is time consuming and becomes unsustainable when the problem scales up. Kendall et al. (2018) propose a method to accomplish this task in an efficient way: instead of tuning the weights manually after each training and validation experiment, we make them learnable parameters and let the network learn them by itself. In this work, the weighted loss is formulated as equation 9, where σ_1 and σ_2 are the learnable weights of the action prediction task and the matching classification task, and L_A and L_M are the corresponding cross-entropy losses:

L(σ_1, σ_2) = (1/σ_1²) L_A + (1/σ_2²) L_M + log(σ_1 σ_2).   (9)
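As a minimal sketch of equation 9, the module below follows the uncertainty-based weighting of Kendall et al. (2018); learning log σ rather than σ directly is a common reparameterization for numerical stability and is our choice, not something stated in the paper.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Equation 9: L = L_A / sigma_1^2 + L_M / sigma_2^2 + log(sigma_1 * sigma_2),
    with log(sigma_i) kept as learnable parameters for stability."""

    def __init__(self):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(2))  # [log sigma_1, log sigma_2]

    def forward(self, loss_action, loss_match):
        sigma_sq = torch.exp(2.0 * self.log_sigma)     # sigma_1^2, sigma_2^2
        return (loss_action / sigma_sq[0]
                + loss_match / sigma_sq[1]
                + self.log_sigma.sum())                # = log(sigma_1 * sigma_2)
```

Here, loss_action and loss_match would be the cross-entropy of the action head and the binary cross-entropy of the matching head, respectively.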
The only two circumstances that the tracker will stop
Training and testing

The training and validation performance is shown in Figure 5. The first row shows the cross-entropy losses of the action prediction and the matching classification during training, and the second row shows the accuracy. The variation of the two learnable weights is shown in Figure 6. The overall training and validation metrics are presented in Table 2. The validation accuracy is greater than 98% for both action prediction and matching classification, which indicates that the network effectively recognizes the pattern as well as captures spatial correlations.

Table 2. Loss and accuracy for action prediction and matching indication.

           Action prediction      Matching classification
           (train/validation)     (train/validation)
Loss       0.02/0.09              0.01/0.02
Accuracy   99.3%/98.0%            99.5%/99.4%

Figure 5. The variation of loss and accuracy during training.

Figure 6. The variation of weights for the two outputs during training.

Figure 7 demonstrates the effectiveness of using the stochastically weighted approach to improve the matching process. In each figure, the query and the target log are presented in the first two rows, respectively, with the red dot marking the center of the ground truth. In the bottom three rows, the green lines represent the action decided under different decision schemes. A higher value on the green line indicates a "backward" prediction, which will move the tracker to the left of its current location. A lower value indicates a "forward" prediction. A middle value indicates a "stop" prediction. The only two circumstances under which the tracker will stop are (1) the action alternates between forward and backward in two consecutive predictions and (2) a stop prediction. As shown in the top green line of Figure 7a, which is obtained from the direct output of the model at every single position, there are multiple local minima in the action predictions, and they could lead to a wrong stop decision.


By averaging the m adjacent action predictions, the local-minima problem is suppressed, as shown in the middle green line of Figure 7a. However, this is not good enough when the action prediction is less accurate, as shown in Figure 7b: mispredictions are not fully eliminated, as shown by the middle green line, and the matching process is stalled by local minima. By taking the average of the historical predictions, an optimal action is selected, as shown in the bottom row of Figure 7. When the tracker moves to the matched pattern, it is stopped by the matching classifier M_k.

We test our model on the testing data. The accuracies for the action prediction and matching classification are 91.2% and 96.0%, respectively. Figure 8 shows the confusion matrix of the matching classification M. Due to the low false-positive rate, only 2% of negative matches will be classified as positive, and the majority (98%) of the mismatches will be filtered out by the matching classifier. Meanwhile, the true-positive rate is comparably high, which means that most positive matches (94%) will be retained.

Four examples of the matching results are shown in Figure 9. All four plots show the tracker's state at the end of the matching process. In each plot, the reference log is on the left side. A rectangle and a horizontal black line indicate the query pattern and its location. The target log is plotted on the right, and the rectangle on it represents the tracker at the current iteration. A number in the range [0, 1] is annotated on each tracker, which is the matching probability M given by the CNN. During the dynamic matching process presented in Algorithm 1, if the matching probability is higher than the threshold, which is 0.5 in our experiment, the matching process is stopped and the current location of the tracker, C_{T_k}, is preserved as the alignment of the query. At the end of the iterations, if no matched pattern is found (i.e., M_k < 0.5 for all k ∈ {0, 1, ..., k_max}), we abandon this tracker. Figures 9a and 9b represent two successful matches, and Figures 9c and 9d present two abandoned matches.

The nonlinear matching strategy driven by the action prediction has a significant impact on the computational efficiency. A single matching process of the proposed method takes an average of approximately 0.53 s for 9000 logging points. Without action prediction, an exhaustive search guided by the DNN alone takes approximately 25 s. Meanwhile, a DTW exhaustive searching process could take more than 30 s.

Figure 10 shows examples of the complete depth-matching process. First, patterns are randomly proposed on the reference log. By using the proposed dynamic matching system, successful matchings are retained. Further alignment between two matched patterns can be done by using either a linear mapping or DTW, as shown in Figure 10d.

Figure 8. The confusion matrix of the matching indication.


Figure 7. Demonstrations of the matching process. The first row shows the query proposed from the reference well, and the blue dot indicates its center. The second row shows a segment of the target well. The red dot on the target well marks the ground truth of the matching. The three green lines represent action decisions using three different approaches: (1) deterministic action predicted via A_{k,⌊m/2⌋} without considering the m adjacent locations of the current tracker; (2) action predicted after averaging the m adjacent action probability vectors A_k^m; and (3) action predicted via Â_k, which is the average of the historical action probabilities. The tracker is initialized on the left end and right end, respectively, in (a and b). The green arrows indicate its forward or backward movements. Compared to the deterministic approach, a stochastic method has better chances to overcome local minima. The best performance is obtained when taking historical predictions into consideration.




Figure 9. Matching examples of the dynamic matching process. (a and b) Two successful matches. (c and d) Two discarded
mismatches due to the low matching probability.



Figure 10. Examples of the point-to-point depth matching. (a-c) Three pattern alignment results by the dynamic matching system. The blue line represents a reference log, and the green one is the target log. The centers of the proposed patterns and their matching pairs are marked on the reference log and target log, respectively. (d) Linear mapping and DTW are used to align points between pattern-matching pairs.

Conclusion

In this work, we proposed an automatic gamma-ray log pattern alignment and depth-matching algorithm using a DNN and multitask learning. Based on the action and matching probability predicted by the DNN, a dynamic matching strategy is stochastically drafted to shift the tracking window until it finds the best-matching pattern in the target log. Experimental results demonstrate the effectiveness of the proposed approach. Future work will focus on improving the adaptiveness of the framework to include logs with different depth resolutions.

Data and materials availability

Data associated with this research are confidential and cannot be released.

References

Aach, J., and G. M. Church, 2001, Aligning gene expression time series with time warping algorithms: Bioinformatics, 17, 495–508, doi: 10.1093/bioinformatics/17.6.495.
Bergen, K. J., P. A. Johnson, V. Maarten, and G. C. Beroza, 2019, Machine learning for data-driven discovery in solid earth geoscience: Science, 363, eaau0323, doi: 10.1126/science.aau0323.
Kendall, A., Y. Gal, and R. Cipolla, 2018, Multi-task learning using uncertainty to weigh losses for scene geometry and semantics: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7482–7491.
Kerzner, M. G., 1984, A solution to the problem of automatic depth matching: Presented at the 25th Annual Logging Symposium, SPWLA.
Kholmatov, A., and B. Yanikoglu, 2005, Identity authentication using improved online signature verification method: Pattern Recognition Letters, 26, 2400–2408, doi: 10.1016/j.patrec.2005.04.017.
Krizhevsky, A., I. Sutskever, and G. E. Hinton, 2012, ImageNet classification with deep convolutional neural networks, in F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, eds., Advances in neural information processing systems 25: NIPS Foundation, 1097–1105.
Liang, L., T. Le, T. Zimmermann, S. Zeroug, and D. Heliot, 2019, A machine learning framework for automating well log depth matching: Presented at the 60th Annual Logging Symposium, SPWLA.
Lineman, D., J. Mendelson, and M. N. Toksoz, 1987, Well to well log correlation using knowledge-based systems and dynamic depth warping: Presented at the 28th Annual Logging Symposium, SPWLA.
Petitjean, F., A. Ketterlin, and P. Gançarski, 2011, A global averaging method for dynamic time warping, with applications to clustering: Pattern Recognition, 44, 678–693, doi: 10.1016/j.patcog.2010.09.013.
Sakoe, H., 1971, Dynamic-programming approach to continuous speech recognition: Proceedings of the International Congress of Acoustics.
Sakoe, H., S. Chiba, A. Waibel, and K. Lee, 1990, Dynamic programming algorithm optimization for spoken word recognition: Readings in Speech Recognition, 159, 224, doi: 10.1109/TASSP.1978.1163055.
Simard, P. Y., D. Steinkraus, and J. C. Platt, 2003, Best practices for convolutional neural networks applied to visual document analysis: Presented at the 7th International Conference on Document Analysis and Recognition.
Zangwill, J., 1982, Depth matching — A computerized approach: Presented at the 23rd Annual Logging Symposium, SPWLA.
Zimmermann, T., L. Liang, and S. Zeroug, 2018, Machine-learning-based automatic well-log depth matching: Petrophysics, 59, 863–872, doi: 10.30632/PJV59N6-2018a10.
Zoraster, S., R. Paruchuri, and S. Darby, 2004, Curve alignment for well-to-well log correlation: Annual Technical Conference and Exhibition, SPE, Extended Abstracts, doi: 10.2118/90471-MS.

Biographies and photographs of the authors are not available.

