
Rock Mech. Rock Engng.

(1997) 30 (4), 207-222


Rock Mechanics
and Rock Engineering
© Springer-Verlag 1997
Printed in Austria

A Hierarchical Analysis for Rock Engineering Using


Artificial Neural Networks
By
Y. Yang and Q. Zhang

Northern Jiaotong University, Beijing, China

Summary
Rock behavior, such as the stability of underground openings, is controlled by many different
factors which have varying levels of influence. It is very difficult to identify the relative effect of
each factor with traditional methods, such as structural analysis and statistical approaches. This
paper introduces a hierarchical analytical method based on the application of neural networks
which reveals the different degrees of importance of these factors so as to recognize the key
factors. This makes it possible to focus on the key factors and do rock engineering more
efficiently. An example is given applying this approach to an underground opening.

1. Introduction

It is well known that rock behavior, such as the stability of underground openings, is
controlled by many factors. For example, bedding planes, faults, joints, in-situ stress
field, as well as folds and water can all influence rock behavior. Artificial effects, such
as excavation methods, can also affect rock behavior. Most of these factors take effect
simultaneously and have complicated interactions with each other in practical engi-
neering (Hudson, 1991). To date, there are still many unknown mechanisms which can
have great influence on rock behavior. Therefore, rock engineering projects often cause
engineers to abandon the traditional design techniques. For example, the results of
structural analysis using artificial materials are not always applicable to rock engineer-
ing. Relevant technical information for rock engineering exists in textbooks and in
experts' minds, but it is often not easy for those engaged in construction to tap into this
expertise in any coherent fashion. Recently, some scholars developed approaches
taking into account all components. Hudson (1991) shed some light on this issue with a
new system (Hudson, 1992), and Jiao (Jiao and Hudson, 1995) developed this system
with graph theory. Jiao's work is applicable to linear problems only. Hence, a
nonlinear approach is taken in this paper. The application of Artificial Neural Networks
(ANN) to rock mechanics or rock engineering (Zhang, 1991, 1994; Lee and Sterling,
1992; Ghaboussi, 1992) has been proven to be successful. The purpose of this paper is

to present an approach applying ANN to study all rock mass and engineering variables
simultaneously and to show how a parameter hierarchy can be developed with ANN.

2. Hierarchical Analysis with ANN

2.1 Function Approximation


Some of the complicated mechanisms of rock engineering are ill defined, and we can
consider them as a black box with inputs and outputs. The operation of the black box
can be regarded as a mapping process. What we want to do first is to identify the
response of the black box to various inputs, or in other words, to establish the mapping
implementation from observed examples of rock engineering.
As in the simulation of the human brain, an ANN (Hecht-Nielsen, 1990) is capable of
solving this mapping implementation problem. Kolmogorov's Mapping Neural Net-
work Existence Theorem shows that an ANN is able to implement any function of
practical interest to any desired degree of accuracy (Hecht-Nielsen, 1989).

[Figure: input layer i, hidden layer j and output layer k, fully connected by weights W_{ji} and W_{kj}, with the BP weight-change rules annotated at each layer.]

Fig. 1. BP algorithm: hidden unit layer j weight changes

Currently, the most popular mapping neural network is the back-propagation


network (Rumelhart and McClelland, 1986). The backpropagation (BP) neural network
architecture (Fig. 1) is a hierarchical design consisting of fully interconnected layers or
rows of processing units. The information processing operation that backpropagation
networks are intended to carry out is the approximation of a bounded mapping function
f : A \subset R^n \to R^m, from a compact subset A of n-dimensional Euclidean space to a
bounded subset f[A] of m-dimensional Euclidean space, by means of training on
examples (x_1, y_1), (x_2, y_2), ..., (x_k, y_k), ... of the mapping, where y_k \approx f(x_k). As always,
it will be assumed that such examples of a mapping function f are generated by
selecting x_k vectors randomly from A in accordance with a fixed probability density
function \rho(x). The operational use to which the network is to be put after training is also
assumed to involve random selections of input vectors x in accordance with \rho(x).
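As a concrete illustration, the following is a minimal sketch of such a backpropagation network in Python/NumPy: one sigmoid hidden layer trained by gradient descent on examples (x_k, y_k) drawn from A = [0,1]^2. The target function (the quadratic of Section 3.2, scaled into (0,1)), the layer sizes, learning rate and iteration count are all illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(e):
    return 1.0 / (1.0 + np.exp(-e))

# Network: n inputs -> h sigmoid hidden units -> m sigmoid outputs.
n, h, m = 2, 6, 1
W1 = rng.normal(0.0, 0.5, (h, n)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.5, (m, h)); b2 = np.zeros(m)

# Training examples (x_k, y_k) sampled from A = [0,1]^2; the target is a
# smooth function scaled into (0,1) so a sigmoid output can represent it.
X = rng.random((30, n))
Y = (0.5 * X[:, :1]**2 + 0.1 * X[:, :1] * X[:, 1:] - 0.9 * X[:, 1:]**2 + 1.0) / 2.0

eta = 2.0  # learning rate
for _ in range(20000):
    Oj = sigmoid(X @ W1.T + b1)      # hidden activations O_j
    Ok = sigmoid(Oj @ W2.T + b2)     # network outputs O_k
    dk = (Ok - Y) * Ok * (1 - Ok)    # output-layer deltas
    dj = (dk @ W2) * Oj * (1 - Oj)   # hidden-layer deltas (back-propagated)
    W2 -= eta * dk.T @ Oj / len(X); b2 -= eta * dk.mean(axis=0)
    W1 -= eta * dj.T @ X / len(X);  b1 -= eta * dj.mean(axis=0)

mse = float(((sigmoid(sigmoid(X @ W1.T + b1) @ W2.T + b2) - Y)**2).mean())
```

After training, `mse` is the mean squared error of the network on the training examples; with these (arbitrary) settings it falls well below the initial error.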

2.2 Hierarchical Analysis


The trained neural network is capable of giving a certain output whenever input factors
are offered. Our interest concentrates on searching for a method of identifying how
significant each factor is in the operation of ANN. In this paper, the way to find the
dominant factor is called a hierarchical analysis.
After a BP neural network has been trained successfully, the neural network is no
longer allowed to adapt. The output Ok can be written as
O_k = 1/(1 + \exp(-e_k)),   (1)

where

e_k = \sum_j O_j W_{jk} + \theta_k,   O_j = 1/(1 + \exp(-e_j)),   e_j = \sum_i O_i W_{ij} + \theta_j,   (2)

where W is a connection weight, \theta is a threshold, and O_i is the value of input unit i. Thus we
have

O_k = 1/(1 + \exp(-(\sum_j W_{jk} (1/(1 + \exp(-(\sum_i W_{ij} O_i + \theta_j)))) + \theta_k))).   (3)
Obviously, since the activation function is a Sigmoid function as shown in Eq. (1),
which is differentiable, the variance of Ok with the change of Oi can be calculated by
the differentiation of the equation:

\partial O_k / \partial O_i = \sum_{j_n} \sum_{j_{n-1}} \cdots \sum_{j_1} W_{j_n k} G(e_k) W_{j_{n-1} j_n} G(e_{j_n}) W_{j_{n-2} j_{n-1}} G(e_{j_{n-1}}) \cdots W_{i j_1} G(e_{j_1}),   (4)

where G(e_k) = \exp(-e_k)/(1 + \exp(-e_k))^2, and O_{j_n}, O_{j_{n-1}}, O_{j_{n-2}}, ..., O_{j_1} denote the hidden
units in the n, n-1, n-2, ..., 1 hidden layers.
Obviously, no matter what the neural network approximates, all terms on the right-
hand side of Eq. (4) always exist (Hecht-Nielsen, 1990). According to Eq. (4), a new
parameter RSE_ki can be defined as the Relative Strength of Effect for input unit i on
output unit k.
Definition of RSE: For a given sample set S = {s_1, s_2, s_3, ..., s_j, ..., s_t}, where
s_j = {X, Y}, X = {x_1, x_2, x_3, ..., x_p}, Y = {y_1, y_2, y_3, ..., y_q}, if there is a neural net-
work trained by the BP algorithm with this set of samples, the RSE_ki exists as

RSE_{ki} = C \sum_{j_n} \sum_{j_{n-1}} \cdots \sum_{j_1} W_{j_n k} G(e_k) W_{j_{n-1} j_n} G(e_{j_n}) W_{j_{n-2} j_{n-1}} G(e_{j_{n-1}}) \cdots W_{i j_1} G(e_{j_1}),   (5)

where C is a normalization constant which scales the maximum absolute value of RSE_ki
to unity, and the function G denotes the derivative of the activation function. G, W
and e are all the same as in Eq. (4).
It should be noted that the scaling of RSE is done with respect to the corresponding
output unit, which means all RSE values for every input unit on the corresponding
output unit are scaled with the same coefficient. Hence, it is clear that RSE ranges
from -1 to 1.
Compared with Eq. (4), RSEki is similar to the derivative except for its scaling
value. But it is a different concept from the differentiation of the original mapping
function. RSEki is a kind of parameter which could be used to measure the relative
importance of input factors to output units, and it shows only the relative dominance
rather than the differentiation of one to one input and output. The larger the absolute
value of RSEki is, the greater the effect the corresponding input unit has on the output
unit. The sign of RSEki indicates the direction of influence, which means positive action
applies to the output when RSEki > 0, and negative action when RSEki < 0. Here,
positive action denotes that the output increases with the increment of the correspond-
ing input, and decreases with reduction of the corresponding input. On the contrary,
negative action indicates that the output decreases when the corresponding input
increases, and increases when the corresponding input decreases. The output has no
relation with the input if RSEki = 0. RSEki is a dynamic parameter which changes with
the variance of input factors. Hence, we can classify the input factors dynamically
based on their RSE values, and do the hierarchical analysis with ANN. This is very
applicable to rock engineering because the real interaction is also dynamic and is
difficult to recognize by other methods. With RSE analysis, engineers can find out and
pay attention to the factors which significantly influence rock behavior.
According to Eq. (5), one can calculate the values of RSE_ki by the following steps:
1. Enter all the values of the input units; calculate all values of e_j in the hidden units
and e_k in the output units by the standard BP method, where e_j represents e_{j_n}, e_{j_{n-1}},
e_{j_{n-2}}, ..., e_{j_1}.
2. Calculate the values of the G function in the output units and hidden units as

G(e_k) = \exp(-e_k)/(1 + \exp(-e_k))^2,
G(e_j) = \exp(-e_j)/(1 + \exp(-e_j))^2.   (6)

3. Assume RS as a temporary variable in every unit for calculating RSE. For the output
units, we have

RS(e_k) = G(e_k).   (7)

4. Calculate the RS values of the units in the last hidden layer, for the chosen output
unit k, as follows:

RS(e_{j_n}) = G(e_{j_n}) W_{j_n k} RS(e_k).   (8)

5. Calculate the RS values of the units in the other hidden layers:

RS(e_{j_{n-1}}) = G(e_{j_{n-1}}) \sum_{j_n} W_{j_{n-1} j_n} RS(e_{j_n}).   (9)

Repeat this calculation up to the first hidden layer.



6. Calculate the RS_ki value as

RS_{ki} = \sum_{j_1} W_{i j_1} RS(e_{j_1}).   (10)

7. Assume the number of input units is p and RS_{kx} = \max\{|RS_{k1}|, |RS_{k2}|, ..., |RS_{kp}|\};
then scale the value of RS_ki such that

RSE_{ki} = RS_{ki}/RS_{kx}.   (11)

In this manner, the RSE_ki value can be calculated. Now that the value of RSE_ki
shows the relative influence, a comparison may be carried out to find the key input units
from all input units by their RSE_ki. Therefore, RSE_ki may be considered a very
important index to evaluate the relative significance of all factors influencing the rock
behavior.
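For a network with a single hidden layer, steps 1-7 above reduce to a few lines. This Python/NumPy sketch (array shapes are assumptions) computes RSE_ki for every output unit k and input unit i; with one hidden layer, the chain in Eq. (5) has a single stage.

```python
import numpy as np

def sigmoid(e):
    return 1.0 / (1.0 + np.exp(-e))

def G(e):
    # Derivative of the sigmoid activation, as in Eq. (6).
    s = np.exp(-e)
    return s / (1.0 + s)**2

def rse(x, W1, b1, W2, b2):
    """RSE_ki for a one-hidden-layer network.

    W1 is (hidden, inputs), W2 is (outputs, hidden).  Returns an
    (outputs, inputs) array, each row scaled so max |RSE_ki| = 1.
    """
    ej = W1 @ x + b1                  # step 1: hidden activations e_j
    ek = W2 @ sigmoid(ej) + b2        #         output activations e_k
    out = np.empty((len(ek), len(x)))
    for k in range(len(ek)):
        RSk = G(ek[k])                # step 3: RS at output unit k
        RSj = G(ej) * W2[k] * RSk     # step 4: RS back-propagated to hidden units
        RSki = W1.T @ RSj             # step 6: raw sensitivity of O_k to each input
        out[k] = RSki / np.abs(RSki).max()  # step 7: scale into [-1, 1]
    return out
```

Because the scaling in step 7 divides by the largest |RS_ki|, each row of the result is exactly the gradient of Eq. (4) rescaled so that its dominant entry has magnitude 1.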

3. Application

3.1 Learning Strategy


One of the limitations of the BP algorithm is the existence of local minima in its error
surface. This means that it might not yield the right solution to the mapping procedure,
and its RSE cannot reveal the true mechanisms either. A hybrid algorithm for finding
the global minimum of the error function is therefore used: a combination of a
modified BP method and the random optimization method (Baba, 1994).
Learning begins with the modified BP method and changes to the random optimiza-
tion approach when the learning process gets stuck in a local minimum. In addition,
dynamic adaptation of structure and parameters is applied to the whole learning
process, where both the structure of the neural network and its learning parameters are
modified dynamically according to the changes in error.
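The learning strategy can be sketched as follows. This is a schematic Python version of the idea only (a gradient phase until the error stalls, then random perturbations): the stall test, step sizes and all other parameters are illustrative, not those of Baba (1994) or of the modified BP used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_minimize(error_and_grad, w, steps=5000, lr=0.1, sigma=0.05,
                    patience=50, tol=1e-12):
    """Gradient descent until progress stalls (a presumed local minimum),
    then random search until a strictly better point is found."""
    best = error_and_grad(w)[0]
    stalled = 0
    for _ in range(steps):
        e, g = error_and_grad(w)
        if best - e > tol:
            best, stalled = e, 0
        else:
            stalled += 1
        if stalled < patience:
            w = w - lr * g                               # gradient (BP-like) phase
        else:
            trial = w + rng.normal(0.0, sigma, w.shape)  # random-optimization phase
            te = error_and_grad(trial)[0]
            if te < best:                                # accept only improvements
                w, best, stalled = trial, te, 0
    return w, best
```

The gradient phase does the fast local work; the random phase only has to supply an occasional escape move, so its per-trial cost stays low.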

3.2 Nonlinear Equation


As mentioned above, RSE is different from the derivative of the output with respect to
the input. However, if enough learning has been done, the RSE is able to approximate
the scaled derivative when the original mapping function is differentiable. Hence,
the RSE method can be tested with a differentiable function for a demonstration.
As an example, we consider a simple nonlinear equation as follows
z = 0.5x^2 + 0.1xy - 0.9y^2,   (12)

and the derivatives can be obtained as

\partial z / \partial x = x + 0.1y,   \partial z / \partial y = 0.1x - 1.8y.   (13)

We will now show that the feasibility of RSE for Eq. (12) can be verified by Eq. (13).
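This check can be scripted directly. The helper below (an assumed name, for illustration) evaluates Eq. (13) and applies the same per-sample scaling used for RSE, reproducing the "control" columns of Table 3:

```python
def scaled_derivatives(x, y):
    """Partial derivatives of Eq. (12), from Eq. (13), divided by the larger
    absolute value so the dominant input gets magnitude 1 -- the same
    scaling that is applied to RSE (the 'control' of Table 3)."""
    dzdx = x + 0.1 * y
    dzdy = 0.1 * x - 1.8 * y
    m = max(abs(dzdx), abs(dzdy))
    return dzdx / m, dzdy / m
```

For x = y = 0.2 this gives (0.65, -1) after rounding, matching the first control row of Table 3.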

We choose the values of x and y within [0,1] randomly, and get the corresponding z
values from Eq. (12). This gives a training sample set containing 30 samples, as shown
in Table 1.

Table 1. Training sample set of nonlinear equation

No.  x     y     z      No.  x     y     z
1    0.37  0.81  -0.5   16   0.94  0.98  -0.33
2    0.81  0.4    0.21  17   0.06  0.49  -0.21
3    0.62  0.7   -0.2   18   0.25  0.75  -0.46
4    0.88  0.05   0.39  19   0.15  0.59  -0.3
5    0.88  0.53   0.19  20   0.59  0.79  -0.34
6    0.64  0.6   -0.08  21   0.31  0.93  -0.71
7    0.06  0.62  -0.34  22   0.41  0.29   0.02
8    0.85  0.3    0.31  23   0.81  0.13   0.32
9    0.43  0.69  -0.31  24   0.77  0.09   0.3
10   0.32  0.42  -0.1   25   0.73  0.28   0.22
11   0.85  0.76  -0.09  26   0.34  0.54  -0.19
12   0.64  0.52  -0.01  27   0.83  0.83  -0.22
13   0.56  0.97  -0.65  28   0.9   0.84  -0.16
14   0.38  0.2    0.04  29   0.42  0.46  -0.08
15   0.27  0.05   0.04  30   0.39  0.55  -0.18

The structure of a neural network is initialized as shown in Fig. 2a. After 1044
iterations of learning, the error has been reduced to 10^-4, and the number of hidden
units has been increased from 3 to 6 (Fig. 2b). As a result of dynamic adaptation, the
structure of the neural network has changed.

[Figure: the network structure (a) at initialization and (b) after training.]

Fig. 2. The change of network structure for a nonlinear equation

Another group of samples which does not belong to the training set is used to test
the trained neural network. The testing results are listed in Table 2, which shows that the
absolute errors between the outputs of the neural network and the ideal values are not
more than 0.04. At the same time, most of their relative errors are also lower than 10%,
except for the 7th sample in Table 2. The z value of the 7th sample is too small to
compare its relative error, and its absolute error is also very small.

Table 2. The capacity of the neural network for a nonlinear equation

No.  x     y     sample z  network z  error
1    0.81  0.15   0.32      0.30     -0.02
2    0.88  0.43   0.26      0.24     -0.02
3    0.63  0.37   0.1       0.09     -0.01
4    1     0.69   0.14      0.13     -0.01
5    0.56  0.32   0.08      0.08      0
6    0.23  0.97  -0.8      -0.76      0.04
7    0.28  0.18   0.01      0        -0.01
8    0.91  0.61   0.13      0.12     -0.01
9    0.63  0.36   0.1       0.1       0
10   0.49  0.74  -0.33     -0.34     -0.01

With the trained neural network, the RSE can be obtained from Eqs. (5)-(11), and the
corresponding derivatives can be worked out with Eq. (13). We compared these results
to test the capability of RSE, as shown in Table 3. The first two columns are the x, y
input data, and the second two columns are their derivatives. The third two columns are
the derivatives divided by their maximum absolute value, which is the scaling that is
applied to the derivatives of the same sample. The two columns to the far right are the
RSE values.

Table 3. The derivative and RSE of a nonlinear equation

No.  x    y    dz/dx  dz/dy  control       RSE
1    0.2  0.2  0.22   -0.34  0.65  -1      0.58  -1
2    0.2  0.8  0.28   -1.36  0.21  -1      0.24  -1
3    0.8  0.2  0.82   -0.28  1     -0.34   1     -0.41
4    0.8  0.8  0.88   -1.28  0.61  -1      0.63  -1
5    0.5  0.5  0.55   -0.85  0.65  -1      0.69  -1
6    0.5  0.2  0.52   -0.31  1     -0.6    1     -0.7
7    0.2  0.5  0.25   -0.88  0.28  -1      0.32  -1
8    0.5  0.8  0.58   -1.31  0.44  -1      0.39  -1
9    0.8  0.5  0.85   -0.82  1     -0.96   1     -0.83

Because of the existence of error, the values of RSE are not exactly the same as the
scaled derivatives. However, it is clear enough that the values of RSE display a
similar relative dominance as the controls shown in Table 3. We can make the RSE
approximate the derivative to any desired degree by increasing the number of
iterations. Here, we intended to demonstrate the capabilities of RSE, and we paid
attention only to the relative dominance of inputs rather than the exact derivative. The
comparison of RSE with the scaled derivative in Table 3 is shown in Fig. 3. It is
obvious that the results in Fig. 3 show the agreement between RSE and the derivative.

[Figure: curves of the scaled derivatives z'(x), z'(y) and of RSEzx, RSEzy over samples 1-9.]

Fig. 3. The comparison of RSE and the scaled derivative for the nonlinear equation

This nonlinear equation is simple, but the result has demonstrated the efficiency of
RSE. Clearly, the RSE reflects the dynamic variance of the effect for inputs acting on
output. With the same operation, one can analyze more complicated problems, such as
rock behavior.

3.3 The Stability of an Underground Opening

The stability of an underground opening is influenced by many factors, such as intact


rock quality, discontinuity pattern, discontinuity aperture, in-situ stress, hydraulic
conditions, etc. The interactions among these factors are very complex: they act on
rock behavior simultaneously, and it is very difficult to analyze these factors simulta-
neously with a traditional approach (Jiao and Hudson, 1995). If there are enough
samples for learning, the RSE method will be an ideal tool for this kind of problem.
With a group of data for coal mine roadways (Sheorey, 1991), we apply RSE to the
hierarchical analysis of factors which influence the stability of the underground
opening. There are 44 samples as shown in Table 4.
The meanings of the Items in Table 4 are as follows:
Rock type:
s - sandstone;
sh - shale;
c - coal;
shc - shaly coal;
shs - shaly sandstone;
ssh - sandy shale;
si - siltstone;
m - mudstone.
Joint orientation:
U - unfavourable joint orientation;
F - favourable joint orientation.

Table 4. Samples for coal mine roadways

No. B(m) H(m) σc(MPa) RQD(%) Jn  Jr  Ja   Jw   SRF γ(t/m³) rock type  joint orient.  state
1   3.6  60   12      15     12  1   2    1    2.5 1.9     shc        U              U
2   3.6  70   49      4      9   1.5 1    1    1   2.3     sh         U              U
3   4.2  220  31      48     9   1.5 1    1    2   2       c,sh,s     U              U
4   3.2  30   24      18     12  0.5 1    1    1   2.2     sh,s       F              U
5   3.5  105  30      93     9   1.5 1    1    1   2.5     sh,s       F              S
6   4.2  170  16      10     9   0.5 1    1    7   1.8     shc        U              U
7   4.2  390  31      35     12  0.5 1    1    9   1.5     c          U              U
8   3    200  58      85     4   1.5 0.75 0.66 2.5 2.2     s          F              S
9   4.8  85   25      15     4   1.5 1    1    1.5         c          F              S
10  4.8  85   19      62     9   1.5 1    2    2.3         s          F              S
11  3    310  21      17     9   0.5 1    9    1.7         c,sh       U              U
12  4    160  50      55     4   1   1    1    2.3         s          U
13  4.5  135  43      36     4   1.5 1    1    1.8         m,c        F              S
14  4    75   20      70     9   1.5 1    1    2.1         c,sh       F              S
15  3.5  140  25      46     9   1.5 1    2    2           c,sh       U              S
16  4    110  31      58     9   1.5 0.75 1    1.6         s          F              S
17  4.2  190  30      13     9   0.5 1    2    2.3         sh         U
18  5.2  400  53      47     9   1.5 1    5    1.8         c          U              U
19  4.2  180  20      16     12  0.5 1    6    2.2         sh         U              U
20  3.6  300  48      67     12  1   1    2    2.3         s          U              S
21  4.2  145  34      41     12  1.5 0.75 2    1.6         c          F              S
22  3.2  80   26      31     6   0.5 0.75 1    1.9         sh,c       F              S
23  3.5  100  25      62     4   1.5 0.75 1    2.4         sh         F              S
24  4.2  560  27      77     12  1.5 1    12   2.3         sh,s       U
25  3    40   27      77     12  1.5 1    1    2.3         si         F              U
26  3.8  90   28      47     9   1   1    2.5  2           c,sh       F              S
27  3.6  80   35      25     9   1   1    1    2           shc        U              U
28  3.6  55   24      76     4   1.5 0.75 1    2.5         s          F              S
29  4.5  105  29      34     4   1.5 0.75 1    2           c,s        F              U
30  4.5  150  38      80     9   1.5 1    1    2.3         s          F              S
31  3.5  330  24      41     12  1.5 1    9    1.9         sh,c       U              U
32  4.5  60   33      65     12  1   1    2.5  2           sh,ssh     U              U
33  3.2  30   20      46     15  1   1    1    2.2         shc        U              U
34  4.5  225  19      19     6   1.5 1    8    1.9         c,sh       U
35  4.2  20   30      24     12  0.5 1    1    2.2         c,si,s     U              U
36  3.6  225  14      14     9   0.5 1    10   2.1         shc        U              U
37  3.6  200  21      4      3   1.5 1    7    2.3         s          U              U
38  3.2  300  21      29     9   1.5 0.75 9    1.9         si,c       U              U
39  4.2  330  37      35     12  1.5 1    6    2.4         s          U              U
40  4.2  410  37      35     12  1.5 1    8    2.4         s          U              U
41  4.2  510  37      35     12  1.5 1    9    2.4         s          U              U
42  3    335  33      26     9   1.5 0.75 7    1.9         sh         U              U
43  4.2  800  34      56     3   0.5 1    14   1.7         sh,c       F              U
44  4.2  45   45      10     3   1.5 1    5    2           s          F              U

Engineering state:
U - unstable;
S - stable.
Other:
B - roadway span;
H - depth of roadway;
σc - uniaxial strength;
RQD - rock quality designation (Deere, 1963);
Jn - joint set number (Barton, 1974);
Jr - joint roughness number (Barton, 1974);
Ja - joint alteration number (Barton, 1974);
Jw - joint water reduction factor (Barton, 1974);
SRF - stress reduction factor (Barton, 1974);
γ - dry density.
There are a total of 44 samples in Table 4, and the number of parameters in every
row is 13. It is obvious that there are not enough samples for training and testing in this
problem. We have to take most of the 44 samples to train the neural network so as to
keep enough information for the data set. By using the "withholding" method (Hecht-
Nielson, 1990), we train the neural network 44 times using 43 of the samples, each time
withholding a single different sample as a singleton test set. During this process, we
adopt the dynamic BP (Yang, 1996) model only in the first round of learning because
most of the information contained in the data set has been obtained and the neural
network fits the first withheld sample (No. 1 in Table 4) well. The other 43 rounds of
learning are all based on the standard BP method (Rumelhart and McClelland, 1986),
and most of them converge rapidly. There are only 4 samples which do not agree with
the results of the neural network among the 44 samples, so the error is no more than
10%.
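The "withholding" loop itself is simple to state. In this sketch, `train` and `predict` are placeholders for the BP training and forward pass of Section 2; the function returns the fraction of held-out samples the network misclassifies.

```python
import numpy as np

def withholding_error(X, Y, train, predict):
    """Leave-one-out ('withholding') evaluation: train on all samples but
    one, test on the withheld sample, and repeat for every sample."""
    misses = 0
    for i in range(len(X)):
        keep = np.arange(len(X)) != i        # withhold sample i
        model = train(X[keep], Y[keep])      # fit on the remaining samples
        if predict(model, X[i]) != Y[i]:     # singleton test set
            misses += 1
    return misses / len(X)
```

With 4 misclassified samples out of 44 this returns 4/44, about 9%, the error rate quoted above.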
The inputs and outputs are as follows:
- inputs: B, H, σc, RQD, Jn, Jr, Ja, Jw, SRF, γ, rock type and joint orientation;
- outputs: state of rock engineering (stable or unstable).
It should be noted that the state of rock engineering in Table 4 can be determined
from the remarks of P. R. Sheorey's data table and figures (Sheorey, 1991): all items are
unstable except for those marked as stable.
According to the values in Table 4, we initialize the neural network with 4 layers:
one for an input layer which contains 12 units, one for an output layer which has only 1
unit, the other two layers are hidden layers. The first hidden layer (nearest the input
layer) starts at 16 and the second at 4 units. The limit for the number of random
optimization iterations is 1000, and the process of the first withheld learning is shown in
Table 5. After the first withheld learning, the structure of the neural network is fixed as
shown in Fig. 4.
Having completed the learning operation, we get the values of RSE for a stability
evaluation of this kind of roadway as shown in Table 6. We select two groups of values
randomly, one for the stable state, which includes samples 5, 8, 10, 22, and 30; the other
one for the unstable state, which consists of samples 1, 18, 33, 40, and 44. Both groups
of values are illustrated in Fig. 5 and Fig. 6.

Table 5. Process of the first "withheld" training

stage  method               iterations  error
1      random optimization  1000        6.1 × 10^-3
2      dynamic BP           312         3.5 × 10^-3
3      random optimization  1000        3.5 × 10^-3
4*     dynamic BP           406         4.7 × 10^-4
5      random optimization  1000        4.7 × 10^-4
6*     dynamic BP           207         1 × 10^-4

* One unit added to each hidden layer

[Figure: a four-layer network whose input units are B, H, σc, RQD, Jn, Jr, Ja, Jw, SRF, γ, rock type and joint orientation, with two hidden layers and a single output unit, the engineering state.]

Fig. 4. Structure of neural network for the stability of underground opening

Compared with other parameters, the variances of Ja and Jw are too small to
measure their effects on rock behavior. Hence, Ja and Jw have been omitted from Fig. 5
and Fig. 6.
With the results in Fig. 5 and Fig. 6 it is clear that RQD is the dominant parameter
for the coal mine roadway. The greater the value of RQD, the more stable the coal mine
roadway is. The stress reduction factor, density and rock type are also very important in
most cases. High-strength rock increases the stability of the roadway, but a greater
stress reduction factor and rock density reduce it. The shape of the "parameter curve"
in Fig. 5 is similar to that in Fig. 6, but the scatter in Fig. 6 is greater than in Fig. 5.
This means that the dynamic characteristic is more prominent when the underground
opening is unstable. The total mechanism, which is determined by the

Table 6. RSE for the samples in Table 4

No.  B      H      σc     RQD   Jn     Jr     Ja    Jw     SRF    γ      rock type  joint orient.
1    -0.71   0.03   0.22  1     -0.25  -0.2   0.5   -0.15  -0.99  -0.79  0.63        0.21
2    -0.44   0.03   0.47  1      0.1   -0.18  0.77   0.05  -0.87  -0.77  0.94       -0.19
3    -0.59  -0.09   0.11  1      0.26   0.07  0.6    0.22  -0.69  -0.65  0.37        0.33
4    -0.33  -0.01   0.42  0.88  -0.42   0.03  0.33   0.19  -0.91  -0.61  1           0.09
5    -0.18   0      0.36  0.91  -0.1   -0.02  0.74  -0.16  -0.7   -0.9   1          -0.16
6    -0.41   0.07   0.6   0.82  -0.24  -0.29  0.55   0.01  -0.86  -0.58  1          -0.12
7    -0.52  -0.06   0.25  1      0.23   0.03  0.62   0.26  -0.7   -0.64  0.59        0.17
8    -0.57  -0.09  -0.04  1      0.58   0.09  0.75   0.14  -0.48  -0.79  0.2         0.19
9    -0.47  -0.05   0.06  1      0.48   0.17  0.74   0.22  -0.54  -0.75  0.26        0.35
10   -0.46  -0.14  -0.07  1      0.11   0.25  0.60   0.03  -0.8   -0.88  0.37        0.3
11   -0.47  -0.04   0.36  1      0.07   0.01  0.63   0.21  -0.84  -0.69  0.73        0.17
12   -0.58  -0.06   0.01  1      0.6   -0.03  0.79   0.09  -0.55  -0.77  0.25        0.06
13   -0.45  -0.08   0.04  1      0.34   0.2   0.68   0.18  -0.64  -0.8   0.33        0.34
14    0.1    0.07   0.57  0.51  -0.38   0     0.37   0.05  -0.56  -0.45  1          -0.07
15   -0.5   -0.07   0.22  1     -0.06   0.11  0.51   0.18  -0.83  -0.69  0.67        0.29
16   -0.13   0.33   0.87  0.81  -0.15  -0.41  0.86  -0.17  -0.7   -0.83  1           0.09
17   -0.27   0.03   0.58  0.64  -0.45  -0.16  0.25   0.17  -0.84  -0.38  1          -0.07
18   -0.51   0      0.43  1     -0.01  -0.07  0.56   0.19  -0.96  -0.71  0.69        0.2
19   -0.26   0.02   0.58  0.64  -0.45  -0.16  0.25   0.17  -0.84  -0.38  1          -0.07
20   -0.58  -0.03   0.23  1      0.3   -0.05  0.65   0.21  -0.71  -0.66  0.48        0.17
21   -0.03   0.01   0.52  0.59  -0.59   0.08  0.26   0.09  -0.84  -0.5   1           0.1
22   -0.51  -0.02   0.19  1      0.19   0.04  0.59   0.21  -0.64  -0.69  0.51        0.25
23   -0.46  -0.01   0.11  1      0.48   0.09  0.77   0.18  -0.46  -0.7   0.34        0.29
24   -0.36  -0.07   0.44  0.97  -0.18   0.11  0.5    0.27  -0.9   -0.65  1           0.11
25    0.05   0.12   0.71  0.45  -0.38  -0.17  0.41  -0.02  -0.53  -0.42  1          -0.12
26   -0.27  -0.07   0.33  0.97  -0.38   0.16  0.48   0.06  -1     -0.86  0.99        0.11
27   -0.41  -0.04   0.39  1     -0.06   0.06  0.56   0.26  -0.85  -0.66  0.87        0.15
28   -0.43  -0.03   0.07  1      0.48   0.09  0.8    0.1   -0.46  -0.76  0.36        0.2
29   -0.52  -0.11  -0.12  1      0.45   0.22  0.69   0.11  -0.59  -0.81  0.13        0.36
30   -0.53  -0.16  -0.23  0.97   0.09   0.18  0.57  -0.18  -0.83  -1     0.2         0.24
31   -0.49  -0.03   0.3   1      0.04   0.05  0.59   0.21  -0.8   -0.67  0.73        0.2
32   -0.43  -0.03   0.47  1     -0.2    0.03  0.49   0.29  -0.94  -0.62  0.95        0.21
33   -0.46  -0.02   0.27  1     -0.16  -0.02  0.61  -0.09  -0.93  -0.87  0.8         0.07
34   -0.23  -0.08   0.37  0.88  -0.45   0.21  0.36   0.17  -1     -0.7   0.99        0.19
35   -0.26  -0.02   0.51  0.78  -0.29  -0.02  0.42   0.18  -0.83  -0.54  1          -0.01
36   -0.79   0.24   0.89  0.9   -0.16  -0.82  0.7   -0.05  -0.92  -0.48  1          -0.18
37   -0.28   0.06   0.54  0.59  -0.35  -0.25  0.35   0     -0.7   -0.37  1          -0.23
38   -0.62  -0.05   0.07  1      0.22   0.07  0.62   0.13  -0.71  -0.7   0.37        0.34
39   -0.2   -0.02   0.5   0.7   -0.31  -0.01  0.33   0.18  -0.77  -0.46  1          -0.07
40   -0.2   -0.02   0.5   0.69  -0.32  -0.01  0.32   0.18  -0.77  -0.46  1          -0.07
41   -0.16  -0.01   0.51  0.63  -0.36  -0.02  0.28   0.17  -0.74  -0.42  1          -0.09
42   -0.44  -0.04   0.34  1     -0.03   0.09  0.56   0.24  -0.83  -0.67  0.82        0.19
43   -0.47  -0.04   0.42  1     -0.17   0.07  0.45   0.33  -0.93  -0.63  0.89        0.23
44   -0.53  -0.1   -0.12  1      0.39   0.16  0.72   0.02  -0.7   -0.9   0.19        0.23

dominant factors in these samples, is similar in both figures, but the variance between
them may turn out rather large. For example, RSE for the No. 10 sample in the "stable
group" and the No. 40 sample in the "unstable group" are shown in Fig. 7 and Fig. 8.
Comparing these two figures, we can see that the main dominant parameter changes

[Figure: RSE curves for the stable samples 5, 8, 10, 22 and 30, plotted over the parameters below.]

B: span, H: depth, Sc: uniaxial strength, RQD: rock quality designation, Jn: joint set number, Jr: joint roughness number, SRF: stress reduction factor, Ds: density, Ts: rock type, Dr: joint orientation

Fig. 5. RSE of the stable group

[Figure: RSE curves for the unstable samples 1, 18, 33, 40 and 44; parameter key as in Fig. 5.]

Fig. 6. RSE of the unstable group

from RQD in sample 10 to rock type in sample 40. It is possible that a factor which
influences rock behavior little under one condition becomes a dominant one under
another condition (e.g., the rock type is not important in Fig. 7, but it becomes
dominant in Fig. 8). Obviously, this shows that dynamic analysis cannot be excluded
for these relevant factors, i.e., one cannot trace the dynamic changes of the
significance of parameters without it.

[Figure: RSE bars for sample 10 over the parameters B, H, Sc, RQD, Jn, Jr, SRF, Ds, Ts and Dr; parameter key as in Fig. 5.]

Fig. 7. The RSE of sample 10

[Figure: RSE bars for sample 40; parameter key as in Fig. 5.]

Fig. 8. The RSE of sample 40

The RSE values of RQD in Fig. 5 and Fig. 6 are relatively high, which once again
implies that RQD is a sensitive index for evaluating the stability of underground
openings. On the basis of RSE, we can also determine that some factors have very little
influence on rock behavior. As shown in Fig. 8, the depth (H) and joint roughness (Jr)
are not important.
It is clear that the density (Ds) and stress reduction factor (SRF) are also important
parameters besides RQD. Another significant parameter is the rock type (Ts), but its

influence varies very much between different samples. Hence, a hierarchy of these
parameters is obtained as:

RQD,
SRF and density,
rock type,
span and uniaxial strength,
joint set number and joint orientation,
depth and joint roughness.
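One way to arrive at such a hierarchy mechanically is to rank the inputs by their mean absolute RSE over the samples of interest. The sketch below does this for the RSE values of samples 1 and 2 from Table 6 (Ja and Jw omitted, as in Figs. 5 and 6); over only two samples the ordering can of course reproduce only the top of the full hierarchy.

```python
import numpy as np

# Columns follow the key of Figs. 5-8: B, H, Sc, RQD, Jn, Jr, SRF, Ds, Ts, Dr.
names = ["B", "H", "Sc", "RQD", "Jn", "Jr", "SRF", "Ds", "Ts", "Dr"]
rse = np.array([  # RSE values of samples 1 and 2 from Table 6
    [-0.71, 0.03, 0.22, 1.00, -0.25, -0.20, -0.99, -0.79, 0.63, 0.21],
    [-0.44, 0.03, 0.47, 1.00,  0.10, -0.18, -0.87, -0.77, 0.94, -0.19],
])
order = np.argsort(-np.abs(rse).mean(axis=0))  # most influential first
hierarchy = [names[i] for i in order]          # starts with RQD, then SRF, ...
```

For these two samples the ranking begins RQD, SRF, consistent with the hierarchy above; a full reproduction would average over all 44 rows of Table 6.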

4. Conclusion

In this paper, we have elaborated on the hierarchical analysis by RSE which may be
used as a reliable, predictive tool in the analysis of rock behavior. It has been shown
that such an approach may be applied to identify the essential factors affecting the
stability of underground openings, if enough data are available. The proposed approach
seems to be a practical tool to provide more insight into rock behavior.

References

Baba, N. (1994): A hybrid algorithm for finding the global minimum of error function of neural
networks and its applications. Neural Networks 7/8, 1253-1265.
Ghaboussi, J. (1992): Potential applications of neuro-biological computational models in
geotechnical engineering. Proc., Fourth International Symposium on Numerical Models in
Geomechanics, Swansea, U.K., August.
Hecht-Nielsen, R. (1990): Neurocomputing. Addison-Wesley, Reading.
Hecht-Nielsen, R. (1989): Theory of the backpropagation neural network. Proc. Int. Joint
Conf. on Neural Networks I, IEEE Press, New York, 593-611.
Hudson, J. A. (1991): Atlas of rock engineering mechanisms: underground excavation. Int. J.
Rock Mech. Min. Sci. 28 (6), 523.
Hudson, J. A. (1992): Rock engineering systems, Ellis Horwood, Chichester.
Jiao, Y., Hudson, J. A. (1995): The fully-coupled model for rock engineering systems. Int. J.
Rock Mech. Min. Sci. Geomech. Abstr. 32 (5), 491-512.
Lee, C., Sterling, R. (1992): Identifying probable failure modes for underground openings using a
neural network. Int. J. Rock Mech. Min. Sci. Geomech. Abstr. 29 (1), 49-67.
Nie, X., Zhang, Q. (1994): Prediction of rock mechanical behavior by artificial neural network:
a comparison with traditional method. IV CSMR, Integral Approach to Applied Rock
Mechanics, Santiago, Chile, 279-287.
Rumelhart, D. E., McClelland, J. L. (1986): Parallel distributed processing: Explorations in the
microstructure of cognition, Vol 1, MIT Press, Cambridge, MA.
Sheorey, P. R. (1991): Experiences with application of the NGI classification to coal measures.
Int. J. Rock Mech. Min. Sci. Geomech. Abstr. 28 (1), 27-33.
Yang, Y. J., Zhang, Q. (1996): Application of artificial neural networks on analysis of factors for
rock engineering. Report RC-96-1, Northern Jiaotong University, 32-36 (in Chinese).

Zhang, Q. et al. (1991): The application of neural network to rock mechanics and rock
engineering. Int. J. Rock Mech. Min. Sci. Geomech. Abstr. 28 (6), 535-540.

Authors' address: Yingjie Yang, Department of Civil Engineering, Northern Jiaotong


University, Beijing 100044, P. R. China.
