Drill-Bit Diagnosis With Neural Networks


R.A. Arehart, Exxon Production Research Co.

Copyright 1990 Society of Petroleum Engineers, Inc.

This paper (SPE 19558) was prepared for presentation at the 1989 SPE Annual Technical Conference and Exhibition, San Antonio, Oct. 8-11.

Summary
A neural network was constructed to determine the grade (state of wear) of a drill bit while it is drilling. With a three-layer neural network and back-propagation as the learning algorithm, the system was trained with laboratory data collected using bits of known grades drilling through known lithologies. The inputs to the neural network were rate of penetration (ROP), weight on bit (WOB), torque (T), revolutions per minute (RPM), and hydraulic horsepower per square inch (HSI). The network was tested on synthetic formations of various bed thicknesses constructed from the test data.
Introduction to Drill-Bit Diagnosis

During drilling, it is important to have an estimate of the drill bit's condition or state of wear. Drill bits are graded primarily on the length of their teeth. A new bit is said to have a grade of 0, and as the bit wears during drilling, the grade goes up linearly until the teeth have completely worn away, at which time the bit is said to have a grade of 8. If drill bits are continually pulled before they are dull, then the cost of the drilling operation rises significantly owing to the cost of the rig time required to pull the bit and to install a new one. If the bit is used too long, then it will drill inefficiently, and the cost will rise because of the time required to drill at a low ROP. In extreme cases, the condition of the bit may be ignored until catastrophic failure occurs, resulting in the loss of one or more of the rolling-cone cutters. When this occurs, the economic loss can be very significant because of the time consumed in retrieving the broken pieces from the bottom of the hole. Consequently, a more accurate estimate of the drill-bit grade translates directly into a more efficient, less costly drilling operation.

Analytic methods for using drilling parameters and an estimate of the lithology to estimate the bit grade during drilling have been only marginally acceptable because of the lack of a good algorithm and the marginal quality of lithology estimates during drilling. As a result, drillers often still use their experience in the particular field and formation and their knowledge of the bit they are using to estimate the bit grade. The result is that the bit often is pulled too quickly, and the cost of drilling is higher than necessary.

Introduction to Neural Networks

Computers can perform many tasks, such as numerical processing and database management, much more quickly and accurately than humans. Some tasks, however, such as pattern recognition and fuzzy reasoning, humans perform better and faster than computers, even though the CPU operates much faster than a neuron, which is the processing unit of the brain. The brain is capable of performing some tasks better than computers because the architecture of the brain is much different from that of the computer. The use of neural networks represents an attempt to emulate the architecture of the brain with hardware or software simulations. Neural networks do not attempt to model the brain, because not enough is known about the way the brain functions to make a serious attempt to model it. Neural networks merely use what is known about the brain to build, or simulate with software, machines with similar architectures.

Computers are usually built with a single powerful processing unit. There are some parallel machines, but they usually have at most a few thousand processing units that are sparsely interconnected. Fig. 1 illustrates a 3D cube with eight processing units and the standard interconnection scheme of each processing unit connected to its three nearest neighbors. Fig. 2 illustrates a simple three-layer feed-forward neural network with each node connected to all nodes on the next level. Neural networks differ from "ordinary" parallel processing machines in that neural networks have many more processing units that are much more densely interconnected and much less powerful. Software simulations of the use of a very large number of simple processing units that are massively interconnected have resulted in advancements toward solving the pattern-recognition problem and other types of problems at which computers have been only partially successful. In addition, these simulations have been used in a supervised learning mode to perform n-dimensional, nonparametric classification.

Fig. 1-Three-cube parallel computer.

Fig. 2-A simple neural network.
Using Neural Networks as n-Dimensional, Nonparametric Classifiers

A neural network that is to be trained in a supervised learning mode to perform nonparametric classification is characterized by its architecture, node characteristics, and learning algorithm. The very simple network in Fig. 3 consists of a single node with two variable inputs, X and Y, and a constant input of 1, which are weighted by the constants A, B, and C. The node sums these weighted inputs, and it produces a 0 or 1, depending on whether the weighted sum is <0 or >0. The case where the weighted sum is equal to 0 is discussed later.

The output will be 0 when AX + BY + C < 0, and it will be 1 when AX + BY + C > 0. Fig. 4 is a graphical representation of the output of the node as a function of the input (X,Y). If (X,Y) lies below the line, then AX + BY + C < 0 and the output is 0. If (X,Y) lies above the line, then AX + BY + C > 0 and the output is 1. Thus, a single node can be used as a classifier if the two classes are half-planes. If (X,Y) lies on the line, then AX + BY + C = 0, and the node output could be defined as either 0 or 1, depending on whether the application required the line to be included with the upper or lower half-plane. Note that the node could be redefined to give an output of 0 for the points above the line and an output of 1 for the points below the line simply by reversing the signs of A, B, and C.

Fig. 3-Single node.

Fig. 4-Graphical representation of a single node.
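For concreteness, the single node just described can be sketched in a few lines of Python. The paper itself presents no code, so the function and variable names below are illustrative only; the boundary case of a weighted sum equal to 0 is arbitrarily assigned to the lower half-plane, one of the two conventions the text allows.

```python
# Minimal sketch of the single-node classifier of Figs. 3 and 4:
# output 1 when A*x + B*y + C > 0, else 0.
def node_output(x, y, a, b, c):
    """Return 1 if (x, y) lies above the line a*x + b*y + c = 0, else 0."""
    return 1 if a * x + b * y + c > 0 else 0

# Reversing the signs of A, B, and C swaps the two half-planes:
assert node_output(0.0, 2.0, 0.0, 1.0, -1.0) == 1   # (0, 2) is above y = 1
assert node_output(0.0, 2.0, 0.0, -1.0, 1.0) == 0   # same point, signs flipped
```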
If the network is composed of two nodes, as in Fig. 5, then there will be two outputs, which are graphically represented in Fig. 6. One output will indicate whether the input (X,Y) is above or below the line AX + BY + C = 0; the other output will indicate whether the input (X,Y) is above or below the line DX + EY + F = 0. It has been assumed that each node will produce 1 if the input (X,Y) is above its line and will produce 0 if the input is below its line. In either case, the node output could be reversed by simply reversing the signs of all its input weights.

Fig. 5-Two-node neural network.

Fig. 6-Graphical representation of two-node network.


A single node can be used to implement a logical "or" or a logical "and." Fig. 7 illustrates an implementation of a logical "or." The two variable inputs are binary, and the output will be 1 if either input is 1. Fig. 8 illustrates an implementation of a logical "and." The two variable inputs are binary, and the output will be 1 only if both inputs are 1. If the inputs to the node implementing the logical "or" are the outputs from the two-node network, as in Fig. 9, then the resulting network output will represent the union of the points above the two lines, as in Fig. 10. If the inputs to the node implementing the logical "and" are the outputs from the two-node network, as in Fig. 11, then the resulting network output will represent the intersection of the points above the two lines, as in Fig. 12. If a third input node is added with coefficients that make the network output 1 if and only if the point (X,Y) is below its line, and the second-level node is reconfigured to give a network output of 1 if and only if all three variable inputs are 1 (as in Fig. 13), then the output from the network is graphically characterized by Fig. 14.

Fig. 7-Logical "or" implementation.

Fig. 8-Logical "and" implementation.

Fig. 9-Union implementation.

Fig. 10-Graphical representation of union.

Fig. 11-Intersection implementation.

Fig. 12-Graphical representation of intersection.

Fig. 13-Intersection implementation with three input nodes.

Fig. 14-Graphical representation of intersection with three input nodes.
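The logical nodes and their composition into union and intersection classifiers can be sketched the same way. The -0.5 bias for the "or" node follows the value legible in Fig. 7; the -1.5 bias for the "and" node is an assumed value chosen to satisfy the same threshold convention, since the figure residue does not preserve it.

```python
# Sketch of the "or"/"and" nodes (Figs. 7 and 8) and their composition
# with first-layer line nodes into union (Figs. 9-10) and intersection
# (Figs. 11-12) classifiers. All outputs use the threshold "sum > 0".
def step(s):
    return 1 if s > 0 else 0

def or_node(p, q):
    return step(1.0 * p + 1.0 * q - 0.5)   # 1 if either binary input is 1

def and_node(p, q):
    return step(1.0 * p + 1.0 * q - 1.5)   # 1 only if both binary inputs are 1

def line_node(x, y, a, b, c):
    return step(a * x + b * y + c)          # first-layer node from Fig. 3

def union(x, y, line1, line2):
    """1 if (x, y) is above either line; line1/line2 are (a, b, c) triples."""
    return or_node(line_node(x, y, *line1), line_node(x, y, *line2))

def intersection(x, y, line1, line2):
    """1 if (x, y) is above both lines."""
    return and_node(line_node(x, y, *line1), line_node(x, y, *line2))

# Example with the lines y = 1 and x = 1:
print(intersection(2.0, 2.0, (0.0, 1.0, -1.0), (1.0, 0.0, -1.0)))  # 1
print(intersection(0.0, 2.0, (0.0, 1.0, -1.0), (1.0, 0.0, -1.0)))  # 0
```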
A neural network could be used to classify the two sets of points indicated in Fig. 15 by closed squares and open circles. In fact, the network illustrated by Fig. 13 would suffice. In two dimensions, one could plot the data, draw the lines (Fig. 16), and determine the interconnection weights analytically from the equations of the lines. But as the number of dimensions increases, the manual process becomes impossible. Fortunately, supervised learning with neural networks offers an alternative. Supervised learning requires a learning algorithm and a set of training data, which include cases with values for all network inputs and a known result. A discussion of learning algorithms is beyond the scope of this paper; see Refs. 1 through 4 for detail.

Fig. 15-Data set with two classes of points.

Fig. 16-Classification lines.

In supervised learning, the input values for one of the training cases are presented to the network. These values are propagated through the network, and an output results. This output is compared with the known result, and if the difference is not 0, then the interconnection weights are adjusted according to the learning algorithm. This process continues for all the cases in the training set. The entire set of training cases is then run through the network again, with the weights readjusted when necessary. This process continues until the network gets the correct result for all the cases in the training set. The system can then be used to estimate the classification of an unknown set of inputs. When the network makes an error, that case can be added to the training set and the system retrained, similar to the way that a person learns from mistakes.
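A minimal sketch of this training loop is shown below, using the classic perceptron update for a single threshold node. The paper's own system uses back-propagation (Refs. 1 through 4); the simpler rule here only illustrates the present-compare-adjust cycle described above.

```python
# Sketch of the supervised-learning loop: present each case, compare the
# output with the known result, and adjust weights on disagreement,
# repeating until every training case is classified correctly.
def train(cases, weights, rate=0.1, max_epochs=1000):
    """cases: list of (inputs, target) pairs with target 0 or 1.
    weights: one weight per input plus a trailing bias weight."""
    for _ in range(max_epochs):
        errors = 0
        for inputs, target in cases:
            s = sum(w * x for w, x in zip(weights, inputs)) + weights[-1]
            output = 1 if s > 0 else 0
            error = target - output
            if error != 0:                      # adjust only on disagreement
                errors += 1
                for i, x in enumerate(inputs):
                    weights[i] += rate * error * x
                weights[-1] += rate * error     # bias update
        if errors == 0:                         # all training cases correct
            return weights
    return weights                              # may not converge if classes overlap
```

With linearly separable data, this loop terminates with weights that classify every training case correctly, mirroring the stopping criterion described above.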
A two-layer neural network can be trained to classify any convex set that is separable by straight lines in 2D space or by hyperplanes in n-dimensional space. The convexity requirement can be removed if a third layer is added to the network.
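To make the convexity point concrete, the sketch below builds a non-convex region as the union of two convex intersections; the third-layer "or" node is what permits the non-convexity. The particular region (two disjoint boxes) is an assumption chosen only for illustration.

```python
# Two layers of threshold nodes carve out a convex region (a box, as the
# intersection of four half-planes); a third-layer "or" over two such
# boxes yields a non-convex region.
def step(s):
    return 1 if s > 0 else 0

def and_node(p, q):
    return step(p + q - 1.5)   # 1 only if both binary inputs are 1

def or_node(p, q):
    return step(p + q - 0.5)   # 1 if either binary input is 1

def in_box(x, y, x0, x1, y0, y1):
    """Convex region: intersection of four half-plane nodes."""
    return and_node(and_node(step(x - x0), step(x1 - x)),
                    and_node(step(y - y0), step(y1 - y)))

def in_two_boxes(x, y):
    """Union of two disjoint boxes: non-convex, hence the third layer."""
    return or_node(in_box(x, y, 0.0, 1.0, 0.0, 1.0),
                   in_box(x, y, 2.0, 3.0, 0.0, 1.0))
```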
Using Neural Networks for Drill-Bit Diagnosis

Because conventional methods had not produced estimates of the quality desired, a neural network was trained with laboratory data to estimate drill-bit grade. The neural network was a three-layer feed-forward system with a sigmoid as the transfer function for all the nodes, and the learning algorithm was back-propagation (Refs. 1 through 4). The data consisted of 173 cases, each of which included values for the bit grade, lithology (LITH), HSI, T, RPM, ROP, and WOB. The network was trained by presenting the five drilling parameters and LITH to the system as inputs, propagating these inputs through the system, and comparing the network output with the known bit grade. In those cases where there was disagreement, the interconnection weights were adjusted according to the back-propagation learning algorithm. After many iterations through the training set, the network was able to adapt its interconnection weights to give the correct output for each case in the training set.
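A sketch of the forward pass of such a three-layer sigmoid network follows. The hidden-layer size, the random weight initialization, and the scaling of the sigmoid output to the 0-to-8 grade range are illustrative assumptions; the paper does not report these details, and training would proceed by back-propagation.

```python
# Sketch of the forward pass of a three-layer feed-forward network with
# sigmoid transfer functions on all nodes. Inputs are assumed scaled to
# a modest range.
import math, random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weights):
    """weights: one row per node; each row holds one weight per input plus a bias."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + row[-1])
            for row in weights]

def make_layer(n_nodes, n_inputs):
    return [[random.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]
            for _ in range(n_nodes)]

# Six inputs (the five drilling parameters plus LITH), one hidden layer,
# and a single output interpreted as a scaled bit grade.
hidden = make_layer(8, 6)
output = make_layer(1, 8)

def estimate_grade(rop, wob, t, rpm, hsi, lith):
    h = layer([rop, wob, t, rpm, hsi, lith], hidden)
    return layer(h, output)[0] * 8.0   # map (0, 1) output to the 0-8 grade scale
```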
Even though successful, this network solved only part of the problem because, in the real world, the LITH estimate was only marginally acceptable, and it was still being used as an input to the system. A new network was trained to estimate the lithologic hardness using inputs of bit grade and the five drilling parameters. The term "hardness" was substituted for "LITH" because the usage was now different. Previously, LITH was used because it could be estimated directly from various known data, and hardness was then inferred from the lithology. The bit grade and drilling parameters can be used to estimate hardness, but not LITH, because several lithologies may have the same hardness. Hardness was now being estimated directly, and the actual LITH was of no importance. The new network was trained to identify the hardness of all the test data correctly.

Because bit grade changes very slowly, its value at one level of measurements can be used as a very good estimate of its value at the next level; i.e., if measurements are taken every foot, then the bit grade at one foot is a very good estimate of the bit grade at the next foot. The two neural networks were then linked together, as in Fig. 17, to estimate both hardness and bit grade at each level. The bit grade estimated for the previous level and the five drilling parameters for the current level were used to estimate hardness, and that value of hardness was used with the five drilling parameters for the current level to estimate the bit grade for the current level. The bit grade for the current level was then fed back to the hardness neural network, along with the five drilling parameters for the next level, to estimate the hardness for the next level, and the process continued. When a new bit enters the hole, it has a known grade of 0, and this value was used with the first set of five drilling parameters to start the process.

Fig. 17-Network for drill-bit diagnosis.
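The feedback loop of Fig. 17 can be sketched as follows. The names hardness_net and grade_net stand for the two trained networks, and their call signatures are assumptions for illustration.

```python
# Sketch of the linked-network loop of Fig. 17. params_by_level is a
# list of (ROP, WOB, T, RPM, HSI) tuples, one per measurement level.
def diagnose(params_by_level, hardness_net, grade_net):
    grade = 0.0                      # a new bit enters the hole at grade 0
    estimates = []
    for params in params_by_level:
        # previous grade + current drilling parameters -> hardness
        hardness = hardness_net(grade, *params)
        # hardness + current drilling parameters -> current grade
        grade = grade_net(hardness, *params)
        estimates.append((hardness, grade))
    return estimates
```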
Results

The data used are proprietary, so the results can be reported only very generally. The data set included data representing three bit grades: 0, 3, and 6. Two lithologies, Mancos shale and Carthage marble, were included in the data set. The networks were trained with the entire data set and tested with synthetic formations, similar to that in Fig. 18, constructed from the data. The drilling process was simulated by requiring the bit grade to be nondecreasing with depth, and the lithologies were intermingled to simulate bed thicknesses of varying lengths. The estimates of hardness and bit grade were perfect in every test where the entire data set was used for training. The technique was also tested by partitioning the data set into a training set, used to train the system, and its complement, used to test the system. These results were mixed and reflected the degree to which the training set spanned the test set. If the training set spanned the test set, then the results were very good; if it did not, then the points in the test set that were not spanned were often misclassified.

Fig. 18-Example of synthetic formation.

LITH   GRADE
1      0
1      0
1      0
2      0
2      0
1      0
1      0
2      0
2      0
2      0
2      0
2      3
2      3
1      3
1      3
1      3
2      3
2      3
1      3
1      3
2      6
2      6
2      6
2      6
1      6
1      6
1      6
2      6

Conclusions

Neural networks can be used successfully for n-dimensional, nonparametric classification if the classes are separable by hyperplanes in the n-dimensional space and if the training set spans the entire domain of interest. In this test, neural networks were very successful with the training data available. However, the data available were not sufficient to train a system for actual implementation in a drilling environment because they did not span the entire domain of interest. For example, the training data covered only two lithologies, only three of the eight drill-bit grades, and only one drill-bit model. Many more training data must be collected before the system will be ready for operational testing.

References

1. McClelland, J.L. and Rumelhart, D.E.: Parallel Distributed Processing, MIT Press, Cambridge, MA (1986) Vols. 1 and 2.
2. Minsky, M.L. and Papert, S.A.: Perceptrons, Expanded Edition, MIT Press, Cambridge, MA (1988).
3. DARPA Neural Network Study, AFCEA Press, Fairfax, VA (1988).
4. Miller, R.K.: Neural Networks: Implementing Associative Memory Models in Neurocomputers, SEAI Technical Publishing, Madison, GA (1987).
