
Neural Comput & Applic (2012) 21:763–770

DOI 10.1007/s00521-010-0457-6

ORIGINAL ARTICLE

An intelligent approach to evaluate drilling performance


A. Bhatnagar • Manoj Khandelwal

Received: 21 April 2010 / Accepted: 2 October 2010 / Published online: 20 October 2010
© Springer-Verlag London Limited 2010

Abstract In this paper, an attempt has been made to predict the rate of penetration (ROP) of rocks by incorporating thrust, revolutions per minute (rpm), flushing media and compressive strength of rocks using the artificial neural network (ANN) technique. A three-layer feed-forward back-propagation neural network with 4-7-1 architecture was trained using 472 experimental data sets of sandstone, limestone, rock phosphate, dolomite, marble and quartz-chlorite-schist rocks. A total of 146 new data sets were used for testing and comparison of the ROP by ANN. Multivariate regression analysis (MVRA) has also been done with the same data sets as the ANN. The ANN and MVRA results were compared based on the coefficient of determination (CoD) and mean absolute error (MAE) between experimental and predicted values of ROP. The coefficient of determination by ANN was 0.985, whereas that by MVRA was 0.629. The mean absolute error of the ROP by ANN was 0.3547, whereas the MAE by MVRA was 1.7499.

Keywords Thrust · RPM · Flushing media · Compressive strength · Multivariate regression analysis (MVRA) · Artificial neural network (ANN) · Mean absolute error

A. Bhatnagar · M. Khandelwal (corresponding author)
Department of Mining Engineering, College of Technology and Engineering, Maharana Pratap University of Agriculture and Technology, Udaipur 313001, India
e-mail: mkhandelwal1@gmail.com

1 Introduction

Drilling is one of the most common operations in the excavation industry, starting from the exploration stage and continuing at every phase of production till completion of mining activity. Of the various purposes for which rock is drilled, diamond drilling takes a unique place in the sense that its application, until recent times, was limited to exploratory drilling. Of late, the importance of intact core recovery to find out the physico-mechanical behavior of rock strata as well as the classification of rock mass has highlighted the importance of diamond core drilling.

Rotary drilling is a three-dimensional cutting operation involving a combination of one or several cutting processes commonly known as indentation [16, 17, 24], cutting and crushing [19], and plowing, grinding and shearing [4]. However, rock fragmentation at the bit–rock interface is a combined action of the vertical thrust force as well as the horizontal torque component, which is due to the rotation of the drill steel.

Diamond drilling with core bits in exploratory investigations and diamond cutting in the dimensional stone industry are unproductive and costly work. Improvement in the performance of diamond drilling and cutting can improve the overall economics of mining. This includes improvement in penetration rate and bit life or a decrease in the bit wear rate. In order to reduce the drilling cost, investigations in the past have been carried out to arrive at a suitable working level of machine parameters for a particular rock type. Since most of these investigations are empirical in nature, their application has not become universal.

In order to know the greater applicability of these characteristics in drilling, a lot of work has been carried out to find and suggest a definite model that can be rock friendly. On the basis of detailed investigation, a viable


approach for the prediction is necessary, and artificial intelligence (AI) comes in handy to fulfill this approach.

The artificial neural network (ANN) model has been one of the attractive tools used in geo-engineering applications due to its high performance in the modeling of non-linear multivariate problems [27]. The ANN is a new branch of intelligence science and has developed rapidly since the 1980s. Nowadays, ANN is considered to be one of the intelligent tools to understand complex problems. A neural network has the ability to learn from patterns acquainted before. Once the network has been trained with a sufficient number of sample data sets, it can make predictions, on the basis of its previous learning, about the output related to a new input data set of similar pattern [10]. Due to its multidisciplinary nature, ANN is becoming popular among researchers, planners, designers, etc. as an effective tool for the accomplishment of their work. Therefore, ANN is being successfully used in many industrial areas as well as in research. The ANN model has superiority in solving problems in which many complex parameters influence the process and results, when the process and results are not fully understood, and where historical or experimental data are available. The prediction of ROP is also of this type.

In the present investigation, different drilling and rock parameters have been used to predict the rate of penetration (ROP) using artificial neural network and multivariate regression analysis, and the predicted results are compared with actual field data. The basic idea is to find the scope and suitability of the ANN for prediction of ROP.

2 Factors influencing drilling performance

Drilling is an operation in which rock is fragmented under the influence of drilling forces like thrust and torque while the broken chips are flushed out of the hole through circulating water. The drill performance depends upon the following:

1. the physico-mechanical properties of rock,
2. the shape of the cutting tool,
3. the magnitude of drilling forces acting at the bit–rock interface and
4. the flushing rate.

The detailed study of the relationship between the rate of penetration and various rock as well as machine parameters has been carried out by several researchers [7, 16, 17, 19, 20]. Studies of the effect of polymer mixed in flushing water on the performance of diamond drilling show a substantial increase in performance. However, the work in this regard has not considered all machine parameters and has been confined to laboratory tests only.

To study the influence of various parameters on the performance of diamond drilling, the work done by various researchers has been reviewed [7, 21, 22, 24]. The rate of penetration increases linearly with the increase in thrust on bit for each rotational speed. However, for each rpm, there exists an optimum thrust on bit beyond which there is no appreciable increase in the rate of penetration [3]. The magnitude of torque developed at the bit–rock interface increases linearly with the increase in thrust on bit at each rotational speed.

3 The philosophy of artificial neural network

The artificial neural network (ANN) is a branch of 'artificial intelligence', which also includes case-based reasoning, expert systems and genetic algorithms. Classical statistics, fuzzy logic and chaos theory are also considered related fields. ANN is an information-processing system simulating the structure and functions of the human brain. It is a highly interconnected structure that consists of many simple processing elements (called neurons) capable of performing massively parallel computation for data processing and knowledge representation. The neural network is first trained by processing a large number of input patterns and the corresponding outputs. After proper training, the neural network is able to recognize similarities and predict the output pattern when presented with a new input pattern.

Neural networks are able to detect similarities in inputs even though a particular input may never have been seen previously. This property gives them excellent interpolation capabilities, especially when the input data are noisy (not exact). Neural networks may be used as an alternative to autocorrelation, multivariable regression, linear regression, trigonometric and other statistical analysis techniques.

A particular network can be defined using three fundamental components: transfer function, network architecture and learning law [11, 26]. One has to define these components depending upon the problem to be solved.

3.1 Network training

A network first needs to be trained before interpreting new information. A number of algorithms are available for training of neural networks, but the back-propagation algorithm is the most versatile and robust technique. It provides the most efficient learning procedure for multilayer neural networks. Also, the fact that back-propagation algorithms are especially capable of solving predictive problems makes them popular [15]. The feed-forward back-propagation neural network (BPNN) always consists of at least three layers: input layer, hidden layer and output layer. Each layer consists of a number of elementary


processing units, called neurons, and each neuron is connected to the next layer through weights, i.e., neurons in the input layer will send their output as input to neurons in the hidden layer, and similar is the connection between the hidden and output layers. The number of hidden layers and of neurons in the hidden layer changes according to the problem to be solved. The number of input and output neurons is the same as the number of input and output variables.

To differentiate between the various processing units, values called biases are introduced into the transfer functions. Except for the input layer, all neurons in the back-propagation network are associated with a bias neuron and a transfer function. The bias is much like a weight, except that it has a constant input of 1, while the transfer function filters the summed signals received from this neuron. These transfer functions are designed to map the net output of a neuron or layer to its actual output. The application of these transfer functions depends on the purpose of the neural network. The output layer produces the computed output vectors corresponding to the solution [9].

During training of the network, data are processed through the input layer to the hidden layer until they reach the output layer (forward pass). In this layer, the output is compared to the measured values (i.e., the "true" output). The difference, or error, between both is propagated back through the network (backward pass), updating the individual weights of the connections and the biases of the individual neurons. The input and output data are mostly represented as vectors called training pairs. The process mentioned above is repeated for all the training pairs in the data set until the network error converges to a threshold defined by a corresponding function: usually the root mean squared error (RMS) or summed squared error (SSE).

In Fig. 1, the jth neuron in the hidden layer is connected to a number of inputs

x_i = (x_1, x_2, x_3, …, x_n).  (1)

The net input values in the hidden layer will be as follows:

Net_j = Σ_{i=1}^{n} x_i w_{ij} + θ_j  (2)

where x_i = input units, w_{ij} = weight on the connection of the ith input and jth neuron, θ_j = bias neuron (optional) and n = number of input units.

So, the net output from the hidden layer is calculated using a logarithmic sigmoid function:

O_j = f(Net_j) = 1 / (1 + e^{−(Net_j + θ_j)}).  (3)

The total input to the kth unit is as follows:

Net_k = Σ_{j=1}^{n} w_{jk} O_j + θ_k  (4)

where θ_k = bias neuron and w_{jk} = weight between the jth neuron and the kth output.

So, the total output from the kth unit will be

O_k = f(Net_k).  (5)

In the learning process, the network is presented with a pair of patterns: an input pattern and a corresponding output pattern. The network computes its own output pattern using its (mostly incorrect) weights and thresholds. Now, the actual output is compared with the desired output. Hence, the error at any output in layer k is

e_k = t_k − O_k  (6)

where t_k = desired output and O_k = actual output.
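The forward pass of Eqs. (1)–(6) can be sketched in a few lines of Python. The 2-3-1 layer sizes, weights, biases and target below are made-up illustrative values, not the trained 4-7-1 network of this study, and the standard sigmoid f(Net) = 1/(1 + e^(−Net)) is used as the transfer function.

```python
import math

def sigmoid(net):
    # Logarithmic sigmoid transfer function, cf. Eq. (3)
    return 1.0 / (1.0 + math.exp(-net))

def forward(x, w_hidden, theta_hidden, w_out, theta_out):
    # Hidden layer: Net_j = sum_i x_i * w_ij + theta_j, Eq. (2), then O_j = f(Net_j)
    hidden = []
    for j in range(len(theta_hidden)):
        net_j = sum(x[i] * w_hidden[i][j] for i in range(len(x))) + theta_hidden[j]
        hidden.append(sigmoid(net_j))
    # Output layer: Net_k = sum_j w_jk * O_j + theta_k, Eqs. (4)-(5)
    out = []
    for k in range(len(theta_out)):
        net_k = sum(hidden[j] * w_out[j][k] for j in range(len(hidden))) + theta_out[k]
        out.append(sigmoid(net_k))
    return hidden, out

# Toy 2-3-1 network with made-up weights (illustration only)
x = [0.5, 0.2]
w_h = [[0.1, -0.4, 0.3], [0.2, 0.6, -0.1]]   # w_ij, one row per input unit
th_h = [0.0, 0.1, -0.2]                      # hidden biases theta_j
w_o = [[0.5], [-0.3], [0.8]]                 # w_jk towards the single output
th_o = [0.05]                                # output bias theta_k
hidden, out = forward(x, w_h, th_h, w_o, th_o)
err = 1.0 - out[0]   # e_k = t_k - O_k, Eq. (6), with target t_k = 1.0
```

Because the sigmoid squashes every net input into (0, 1), the target values of a training pattern must be scaled into the same range, which is why the study normalizes all parameters between 0 and 1.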

[Figure: feed-forward network with an input layer (i = 1…n), hidden layers I and II (j, k = 1…q) and an output O, showing feed-forward signal flow, comparison with the target t via E = 0.5(t − O)², and error back-propagation.]
Fig. 1 Back-propagation neural network [18]
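The output error of Eq. (6) drives the gradient-descent weight update derived next. A minimal Python sketch of that update, assuming a single linear output unit so the derivative stays simple; the learning rate, hidden outputs and target are illustrative values, not the study's settings.

```python
# Gradient-descent update for the weights feeding one linear output unit:
#   E = 0.5 * sum_k (t_k - O_k)^2 and delta_W_jk = -eta * dE/dW_jk,
#   with W_jk(n + 1) = W_jk(n) + delta_W_jk(n).
eta = 0.1                        # learning rate parameter (illustrative)
O_hidden = [0.2, 0.7, 0.5]       # hidden-layer outputs O_j (made-up values)
w = [0.1, -0.2, 0.4]             # weights W_jk towards the single output
t = 1.0                          # desired output t_k

for epoch in range(200):         # one pass over the pattern = one epoch
    O_k = sum(oj * wj for oj, wj in zip(O_hidden, w))  # linear output unit
    E = 0.5 * (t - O_k) ** 2                           # error function
    # For a linear unit, dE/dW_jk = -(t_k - O_k) * O_j, so the update is
    # delta_W_jk = eta * (t_k - O_k) * O_j, added to the current W_jk.
    w = [wj + eta * (t - O_k) * oj for wj, oj in zip(w, O_hidden)]

O_final = sum(oj * wj for oj, wj in zip(O_hidden, w))  # approaches t
```

With a small enough learning rate the error shrinks geometrically each epoch, which is the behavior the termination criteria discussed below are meant to cut off before over-learning sets in.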


The total error function is given by the following equation:

E = 0.5 Σ_{k=1}^{n} (t_k − O_k)².  (7)

Training of the network is basically a process of arriving at an optimum weight space for the network. Descent along the steepest error surface is made using the following rule:

ΔW_{jk} = −η (∂E/∂W_{jk})  (8)

where η = learning rate parameter and E = error function. The update of the weights for the (n + 1)th pattern is given as follows:

W_{jk}(n + 1) = W_{jk}(n) + ΔW_{jk}(n).  (9)

Similar logic applies to the connections between the hidden and output layers [9]. This procedure is repeated with each pair of training cases. Each pass through all the training patterns is called a cycle or epoch. The process is then repeated for as many epochs as needed until the error is within the user-specified goal. A schematic representation of the whole process is shown in Fig. 2 [18].

[Figure: flowchart: Start → Data gathering → Incomplete data filtering → Normalization of data → Data selection for training and testing → Training → Testing → if the testing error has declined, training repeats; otherwise, finish.]
Fig. 2 ANN process flowchart [18]

4 Data set

One of the most important stages in the ANN technique is data collection. The data were divided into training and testing data sets using a sorting method to maintain statistical consistency. Data sets for testing were extracted at regular intervals from the sorted database, and the remaining data sets were used for training. In the present study, 618 drilling data were measured at different thrust, RPM and flushing media on six different types of rock. Among them, 472 data sets were used for the training of the ANN network, whereas 146 data sets were chosen for testing of the network. The thrust was kept at seven different levels, viz. 325, 410, 490, 560, 641, 728 and 820 N, whereas RPM was kept at four different levels, viz. 285, 471.1, 687.1 and 1122.2, for each rock type. Plain tap water with poly-ethylene-oxide (PEO) mixed at different ppm was used for the measurement of ROP. PEO mixed with water was kept at five different levels, viz. 0, 10, 15, 20 and 30 ppm; plain tap water is considered as zero ppm. A flushing rate of 285 l/min was kept constant for the whole study.

Six different rock types were used for this analysis, namely sandstone, limestone, rock phosphate, dolomite, marble and quartz-chlorite-schist. In this study, 120 drilling data sets were measured on sandstone, 124 on limestone, 128 on rock phosphate, 139 on dolomite, 56 on marble and 51 on quartz-chlorite-schist. Table 1 shows the range of input and output parameters. A list of sample data for training and testing of the ANN and MVRA model is given in Tables 2 and 3, respectively.

Table 1 Input and output parameters with their range, mean and standard deviation

S. no.  Parameter                    Range         Mean     Standard deviation
1.      Thrust (N)                   325–820       546.914  151.858
2.      RPM                          285–1122.2    631.404  308.09
3.      Flushing media (ppm)         0–30          13.605   10.092
4.      Compressive strength (MPa)   24.5–77.8     45.354   11.022
5.      ROP                          0.244–18.111  6.136    4.09

5 Network architecture

Baheer [1] and Hecht-Nielsen [5] indicated that one hidden layer may be sufficient for most problems. Two hidden layers may be necessary for a learning function with discontinuities [14]. Lippmann [12] and Rumelhart et al. [25] indicated that there is rarely an advantage in using more than one hidden layer. Therefore, one hidden layer was preferred


in this study. However, choosing the number of neurons is the most critical task in the ANN structure. The heuristics proposed for this purpose are summarized in Table 4. As can be seen from Table 4, the number of neurons that may be used in the hidden layer varies between 2 and 12, depending on the heuristic proposed in the literature. ANN structures were trained using the numbers of hidden neurons defined above. By considering the findings obtained from the different trials, the ANN structure consisting of one hidden layer with seven neurons (Fig. 3) was selected for the given problem. The data sets were normalized between zero and one considering the maximum values of the input parameters.

The feed-forward back-propagation neural network architecture (4-7-1) was adopted due to its appropriateness for the identification problem. Pattern matching is basically an input/output mapping problem: the closer the mapping, the better the performance of the network.

Table 2 Sample data set used for the training of ANN and MVRA model

S. no.  Thrust (N)  RPM     Flushing media  Compressive strength (MPa)  ROP
1.      728         471.1   0               45.9                        7.8947
2.      820         285     10              48.7                        10
3.      325         687.1   30              30.1                        1.7543
4.      325         1122.2  20              50.9                        1.4285
5.      560         687.1   15              46.5                        5.8823

Table 3 Sample data set used for the testing of ANN and MVRA model

S. no.  Thrust (N)  RPM     Flushing media  Compressive strength (MPa)  ROP
1.      820         1122.2  0               47.8                        9.0909
2.      490         471.1   15              49.1                        3.5714
3.      820         285     10              50.2                        11.1111
4.      560         687.1   30              45.5                        8.3333
5.      325         687.1   20              33.5                        1.8518

[Figure: network with the inputs thrust, RPM, flushing media and compressive strength feeding a hidden layer whose output is ROP.]
Fig. 3 Suggested architecture for the case study

Table 4 The heuristics proposed for the number of neurons to be used in hidden layer(s)

Heuristic      Calculated number of neurons for this study  Reference
≤ 2Ni + 1      9                                            Hecht-Nielsen [5]
3Ni            12                                           Hush [6]
(Ni + N0)/2    3                                            Ripley [23]
2Ni/3          3                                            Wang [28]
√(Ni × N0)     2                                            Masters [14]
2Ni            8                                            Kanellopoulas and Wilkinson [8]

Ni = number of input neurons, N0 = number of output neurons

Fig. 4 Criteria for the termination of training and selection of optimum network architecture [2]

A three-layer feed-forward back-propagation neural network was developed to predict the ROP. The input layer has four input neurons and the output layer has one neuron, while the hidden layer comprises seven hidden neurons (Fig. 3). Training of the network was carried out using 472 cases, whereas testing of the network was performed using 146 different cases.

The number of training cycles is important to obtain proper generalization of the ANN structure. Theoretically, excessive training, also known as over-learning, can result in near-zero error on predicting the training data. However, this over-learning may result in a loss of the ability of the ANN to generalize from the test data, as shown in Fig. 4 [2]. The increasing point in the error of the test data or the closest


point to the training curve is considered to represent the optimal number of cycles for the ANN architecture.

All the input and output parameters were normalized between 0 and 1. Equation (10) was used for scaling the input and output parameters:

Normalized value = (max. value − unnormalized value) / (max. value − min. value).  (10)

The architecture of the network is tabulated below:

1. Number of input neurons: 4
2. Number of output neurons: 1
3. Number of hidden layers: 1
4. Number of hidden neurons: 7
5. Number of training data sets: 472
6. Number of testing data sets: 146
7. Error goal: 0.0

6 Testing of ANN technique

To test and validate the ANN model, a data set that was not used while training the network was employed. The results are presented in this section to demonstrate the performance of the networks. The coefficient of determination (CoD) is taken as the measure of performance. As Bayesian interpolation [13] has been used, there was no danger of over-fitting or under-fitting problems. Figure 5 illustrates the measured and predicted ROP on the 1:1 slope line. All predicted data points were well within the 1:1 slope line. This clearly indicates the ability of ANN to predict ROP. Here, the CoD is as high as 0.985, whereas the MAE is 0.3547.

Fig. 5 Measured versus ANN-predicted ROP

7 Prediction of ROP by multivariate regression analysis

The purpose of multiple regression is to learn more about the relationship between several independent or predictor variables and a dependent or criterion variable. The goal of regression analysis is to determine the values of the parameters of a function that cause the function to best fit a given set of data observations. In linear regression, the function is a linear (straight-line) equation. When there is more than one independent variable, multivariate regression analysis is used to obtain the best-fit equation. Multiple regression analysis solves the data sets by performing a least-squares fit. It constructs and solves the simultaneous equations by forming the regression matrix and solving for the coefficients using the backslash operator. The MVRA has been done with the same data sets and the same input parameters as used in the ANN.

The equation for prediction of ROP by MVRA is

ROP = −8.1649 + 0.02 × Thrust + 0.0045 × RPM + 0.0617 × Flushing media − 0.0093 × Comp. St.

Figure 6 shows the measured and predicted ROP on the 1:1 slope line with CoD. Here, the CoD between measured and predicted ROP is 0.629, whereas the MAE is 1.7499. From the figure, it can be said that the ROP predicted by MVRA shows high error; most of the predicted ROP points are widely scattered.

Fig. 6 Measured versus MVRA-predicted ROP

8 Results and discussion

Figure 7 shows a bar chart of measured and predicted ROP by ANN and MVRA, whereas Fig. 8 shows the comparison of ROP by ANN and MVRA on the 1:1 slope line. Here, the prediction by ANN is closer to the measured ROP, whereas the prediction by MVRA has wide variation. It can be seen that ANN demonstrates superiority over MVRA. Table 5 shows the CoD and MAE of ROP predicted by ANN and MVRA. It can be said that the prediction capability of ANN is quite remarkable and compares well to the measured values. This shows the superiority of ANN over MVRA.
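The least-squares fit behind the MVRA model can be reproduced with NumPy's lstsq, which plays the role of the MATLAB "backslash operator" mentioned above. For illustration only the five training rows from Table 2 are used below, so the fitted coefficients will differ from the published equation, which was fitted on all 472 training sets; the function rop_mvra restates the published equation itself.

```python
import numpy as np

# Five training rows from Table 2: thrust (N), RPM, flushing media (ppm),
# compressive strength (MPa), and the measured ROP.
X = np.array([
    [728.0, 471.1,   0.0, 45.9],
    [820.0, 285.0,  10.0, 48.7],
    [325.0, 687.1,  30.0, 30.1],
    [325.0, 1122.2, 20.0, 50.9],
    [560.0, 687.1,  15.0, 46.5],
])
y = np.array([7.8947, 10.0, 1.7543, 1.4285, 5.8823])

A = np.column_stack([np.ones(len(X)), X])        # prepend an intercept column
coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit

def rop_mvra(thrust, rpm, flushing, comp_st):
    # The published MVRA equation from Sect. 7
    return (-8.1649 + 0.02 * thrust + 0.0045 * rpm
            + 0.0617 * flushing - 0.0093 * comp_st)

pred = rop_mvra(560, 687.1, 15, 46.5)   # training row 5, measured ROP 5.8823
abs_err = abs(pred - 5.8823)            # per-sample absolute error; MAE averages these
```

Averaging such absolute errors over the 146 test sets gives the MAE figures compared in Table 5.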


Fig. 7 Comparison of measured and predicted ROP

Fig. 8 Comparison of measured and predicted ROP on 1:1 slope line

Table 5 CoD and MAE of ROP by ANN and MVRA

Model  CoD    MAE
ANN    0.984  0.3254
MVRA   0.769  1.2993

9 Conclusions

Based on the study, it is established that the feed-forward back-propagation neural network approach seems to be the better option for close and appropriate prediction of the rate of penetration. The ANN results indicate very close agreement of the ROP with the field data sets, whereas MVRA shows high error and was not able to predict the ROP up to the mark. By adopting the ANN technique, ROP can be predicted prior to drilling. The drilling system can then be modified accordingly so that drilling losses are minimized and higher utilization of energy is achieved. Considering the complexity of the relationship between the inputs and outputs, the results obtained by ANN are highly encouraging and satisfactory. ANN can learn new patterns that were not previously available in the training data set. ANN can also update its knowledge over time as more training data sets are presented, and it can process information in a parallel way. Therefore, the technique results in a greater degree of accuracy than other analysis techniques. Hence, the technique proves to be economical and easier in comparison with tedious, expensive experimental work.

References

1. Baheer I (2000) Selection of methodology for modeling hysteresis behavior of soils using neural networks. J Comput Aided Civil Infrastruct Eng 5(6):445–463
2. Basheer IA, Hajmeer M (2000) Artificial neural networks: fundamentals, computing, design, and application. J Microbiol Meth 43:3–31
3. Bhatnagar A, Khandelwal M, Rao KUM (2010) Performance enhancement by addition of non-ionic polymer in flushing media for diamond drilling in rock phosphate. Min Sci Technol 20(3):400–405
4. Chugh CP (1992) High technology in drilling and exploration. Oxford and IBH, India
5. Hecht-Nielsen R (1987) Kolmogorov's mapping neural network existence theorem. In: Proceedings of the first IEEE international conference on neural networks, San Diego, CA, USA, pp 11–14
6. Hush DR (1989) Classification with neural networks: a performance analysis. In: Proceedings of the IEEE international conference on systems engineering, Dayton, OH, USA, pp 277–280
7. John LP (1994) Influence of RPM and flushing media on the performance of diamond drilling. B. Tech. thesis, Department of Mining Engineering, IIT Kharagpur, India
8. Kanellopoulas I, Wilkinson GG (1997) Strategies and best practice for neural network image classification. Int J Remote Sens 18:711–725
9. Khandelwal M, Kumar DL, Mohan Y (2009) Application of soft computing to predict blast-induced ground vibration. Engineering with Computers (online published)
10. Khandelwal M, Singh TN (2009) Prediction of blast-induced ground vibrations using artificial neural network. Int J Rock Mech Min Sci 46:1214–1222
11. Kosko B (1994) Neural networks and fuzzy systems: a dynamical systems approach to machine intelligence. Prentice Hall, New Delhi
12. Lippmann RP (1987) An introduction to computing with neural nets. IEEE ASSP Mag 4:4–22
13. MacKay DJC (1992) Bayesian interpolation. Neural Comput 4:415–447
14. Masters T (1994) Practical neural network recipes in C++. Academic Press, Boston, MA
15. Maulenkamp F, Grima MA (1999) Application of neural networks for the prediction of the unconfined compressive strength (UCS) from Equotip hardness. Int J Rock Mech Min Sci 36:29–39
16. Miller D, Ball A (1990) Rock drilling with impregnated diamond micro bits: an experimental study. Int J Rock Mech Min Sci 27:363–371
17. Miller D, Ball A (1991) The wear of diamonds in impregnated diamond bit drilling. Wear 141:311–320


18. Monjezi M, Dehghani H (2008) Evaluation of effect of blasting pattern parameters on back break using neural networks. Int J Rock Mech Min Sci 45(8):1446–1453
19. Paone J, Bruce WE (1963) Drillability studies: diamond drilling. RI-USBM 6324, US Bureau of Mines
20. Paone J, Madson D (1966) Drillability studies: impregnated diamond bits. RI-USBM 6776, US Bureau of Mines
21. Rao KUM, Misra B (1994) Design of spoked wheel dynamometer for simultaneous monitoring of thrust and torque developed at bit rock interface during drilling. Int J Surf Min Reclam Environ 8:146–147
22. Rao KUM (1993) Experimental and theoretical investigations of drilling of rocks by impregnated diamond core bits. Ph.D. thesis, Department of Mining Engineering, IIT Kharagpur
23. Ripley BD (1993) Statistical aspects of neural networks. In: Barndoff-Neilsen OE, Jensen JL, Kendall WS (eds) Networks and chaos: statistical and probabilistic aspects. Chapman & Hall, London, pp 40–123
24. Rowlands D (1975) Rock fracture by diamond drilling. Ph.D. thesis, University of Melbourne, Australia
25. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning internal representation by error propagation. In: Rumelhart DE, McClelland JL (eds) Parallel distributed processing, vol 1, pp 318–362
26. Simpson PK (1990) Artificial neural systems: foundation, paradigm, application and implementations. Pergamon Press, New York
27. Sonmez H, Gokceoglu C, Nefeslioglu HA, Kayabasi A (2006) Estimation of rock modulus: for intact rocks with an artificial neural network and for rock masses with a new empirical equation. Int J Rock Mech Min Sci 43:224–235
28. Wang C (1994) A theory of generalization in learning machines with neural application. Ph.D. thesis, The University of Pennsylvania, USA

