
IJCSIS Vol. 5, No. 1, September 2009
ISSN 1947-5500

International Journal of Computer Science & Information Security

© IJCSIS PUBLICATION 2009


IJCSIS Editorial
Message from Managing Editor

The editorial policy of the International Journal of Computer Science and Information Security (IJCSIS) is to publish top-level research in all fields of computer science and related areas such as mobile and wireless networks, multimedia communication and systems, and network security. With its open-access policy, IJCSIS is now an established journal and will continue to grow as a venue for the dissemination of state-of-the-art knowledge. Its growing popularity shows that the journal stays at the cutting edge of current research.

The ultimate success of this journal depends on the quality of the articles we include in our issues, as we endeavor to live up to our mandate of providing a publication that bridges applied computer science from industry practitioners and the basic research/academic community.

I also want to thank the reviewers for their valuable service. We have selected some excellent papers, with an acceptance rate of 35%, and I hope that you enjoy the IJCSIS Volume 5, No. 1, September 2009 issue.

Available at http://sites.google.com/site/ijcsis/
IJCSIS Vol. 5, No. 1, September 2009 Edition
ISSN 1947-5500
© IJCSIS 2009, USA.
IJCSIS EDITORIAL BOARD
Dr. Gregorio Martinez Perez
Associate Professor - Professor Titular de Universidad
University of Murcia (UMU), Spain

Dr. M. Emre Celebi,


Assistant Professor
Department of Computer Science
Louisiana State University in Shreveport, USA

Dr. Yong Li
School of Electronic and Information Engineering,
Beijing Jiaotong University
P.R. China

Dr. Sanjay Jasola


Professor and Dean
School of Information and Communication Technology,
Gautam Buddha University, India

Dr Riktesh Srivastava
Assistant Professor, Information Systems
Skyline University College, University City of Sharjah,
Sharjah, PO 1797, UAE

Dr. Siddhivinayak Kulkarni


University of Ballarat, Ballarat, Victoria
Australia

Professor (Dr) Mokhtar Beldjehem


Sainte-Anne University
Halifax, NS, Canada
TABLE OF CONTENTS

1. A Method for Extraction and Recognition of Isolated License Plate Characters (pp. 001-010)
YON-PING CHEN, Dept. of Electrical and Control Engineering, National Chiao-Tung University Hsinchu,
Taiwan
TIEN-DER YEH, Dept. of Electrical and Control Engineering, National Chiao-Tung University, Hsinchu,
Taiwan

2. Personal Information Databases (pp. 011-020)


Sabah S. Al-Fedaghi, Computer Engineering Department, Kuwait University
Bernhard Thalheim, Computer Science Institute, Kiel University, Germany

3. Improving Effectiveness Of E-Learning In Maintenance Using Interactive-3D (pp. 021-024)


Lt. Dr. S Santhosh Baboo, Reader, P.G. & Research Dept of Computer Science, D.G.Vaishnav College,
Chennai 106
Nikhil Lobo, Research Scholar, Bharathiar University

4. An Empirical Comparative Study of Checklist-based and Ad Hoc Code Reading Techniques in a


Distributed Groupware Environment (pp. 025-035)
Olalekan S. Akinola and Adenike O. Osofisan
Department of Computer Science, University of Ibadan, Nigeria

5. Robustness of the Digital Image Watermarking Techniques against Brightness and Rotation
Attack (pp. 036-040)
Harsh K Verma, Department of Computer Science and Engineering, Dr B R Ambedkar National Institute of
Technology, Jalandhar, India
Abhishek Narain Singh, Department of Computer Science and Engineering, Dr B R Ambedkar National
Institute of Technology, Jalandhar, India
Raman Kumar Singh, Department of Computer Science and Engineering, Dr B R Ambedkar National
Institute of Technology, Jalandhar, India

6. ODMRP with Quality of Service and local recovery with security Support (pp. 041-045)
Farzane Kabudvand, Computer Engineering Department, Azad University, Zanjan, Iran

7. A Secure and Fault-tolerant framework for Mobile IPv6 based networks (pp. 046-055)
Rathi S, Sr. Lecturer, Dept. of Computer Science and Engineering, Government College of Technology,
Coimbatore, Tamilnadu, India
Thanuskodi K, Principal, Akshaya College of Engineering, Coimbatore, Tamilnadu, India

8. A New Generic Taxonomy on Hybrid Malware Detection Technique (pp. 056-061)


Robiah Y, Siti Rahayu S., Mohd Zaki M, Shahrin S., Faizal M. A., Marliza R.
Faculty of Information Technology and Communication, Universiti Teknikal Malaysia Melaka, Durian
Tunggal, Melaka, Malaysia

9. Hybrid Intrusion Detection and Prediction multiAgent System, HIDPAS (pp. 062-071)
Farah Jemili, Mohamed Ben Ahmed, Montaceur Zaghdoud
RIADI Laboratory, Manouba University, Manouba 2010, Tunisia

10. An Algorithm for Mining Multidimensional Fuzzy Association Rules (pp. 072-076)
Neelu Khare, Department of Computer Applications, MANIT, Bhopal (M.P.)
Neeru Adlakha, Department of Applied Mathematics, SVNIT, Surat (Gujrat)
K. R. Pardasani, Department of Computer Applications, MANIT, Bhopal (M.P.)

11. Analysis, Design and Simulation of a New System for Internet Multimedia Transmission
Guarantee (pp. 077-086)
O. Said, S. Bahgat, M. Ghoniemy, Y. Elawdy
Computer Science Department, Faculty of Computers and Information Systems, Taif University, Taif, KSA.

12. Hierarchical Approach for Key Management in Mobile Ad hoc Networks (pp. 087-095)
Renuka A., Dept. of Computer Science and Engg., Manipal Institute of Technology, Manipal-576104-India
Dr. K. C. Shet, Dept. of Computer Engg., National Institute of Technology Karnataka, Surathkal,
P.O.Srinivasanagar-575025

13. An Analysis of Energy Consumption on ACK+Rate Packet in Rate Based Transport Protocol (pp.
096-102)
P. Ganeshkumar, Department of IT, PSNA College of Engineering & Technology, Dindigul, TN, India,
624622
K. Thyagarajah, Principal, PSNA College of Engineering & Technology, Dindigul, TN, India, 624622

14. Prediction of Zoonosis Incidence in Human using Seasonal Auto Regressive Integrated Moving
Average (SARIMA) (pp. 103-110)
Adhistya Erna Permanasari, Computer and Information Science Dept., Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia
Dayang Rohaya Awang Rambli, Computer and Information Science Dept., Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia
Dhanapal Durai Dominic, Computer and Information Science Dept., Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia

15. Performance Evaluation of Wimax Physical Layer under Adaptive Modulation Techniques and
Communication Channels (pp. 111-114)
Md. Ashraful Islam, Dept. of Information & Communication Engineering, University of Rajshahi, Rajshahi,
Bangladesh
Riaz Uddin Mondal (corresponding author), Assistant Professor, Dept. of Information & Communication
Engineering,University of Rajshahi, Rajshahi, Bangladesh
Md. Zahid Hasan, Dept. of Information & Communication Engineering, University of Rajshahi, Rajshahi,
Bangladesh

16. A Survey of Biometric keystroke Dynamics: Approaches, Security and Challenges (pp. 115-119)
Mrs. D. Shanmugapriya, Dept. of Information Technology, Avinashilingam University for Women,
Coimbatore, Tamilnadu, India
Dr. G. Padmavathi , Dept. of Computer Science, Avinashilingam University for Women, Coimbatore,
Tamilnadu, India

17. Agent’s Multiple Architectural Capabilities: A Critical Review (pp. 120-127)


Ritu Sindhu, Department of CSE, World Institute of Technology, Gurgaon, India
Abdul Wahid, Department of CS, Ajay Kumar Garg Engineering College, Ghaziabad, India
Prof. G.N.Purohit, Dean, Banasthali University, Rajasthan, India

18. Prefetching of VoD Programs Based On ART1 Requesting Clustering (pp. 128-134)
P Jayarekha, Research Scholar, Dr. MGR University Dept. of ISE, BMSCE, Bangalore
& Member, Multimedia Research Group, Research Centre, DSI, Bangalore
Dr. T R GopalaKrishnan Nair
Director, Research and Industry Incubation Centre, DSI, Bangalore

19. Prefix based Chaining Scheme for Streaming Popular Videos using Proxy servers in VoD (pp.
135-143)
M Dakshayini, Research Scholar, Dr. MGR University, working with Dept. of ISE, BMSCE,
& Member, Multimedia Research Group, Research Centre, DSI, Bangalore, India
Dr T R GopalaKrishnan Nair, Director, Research and Industry Incubation Centre, DSI, Bangalore, India

20. Convergence Time Evaluation of Algorithms in MANETs (pp.144-149)


Narmada Sambaturu, Department of Computer Science and Engineering, M.S. Ramaiah Institute of
Technology,Bangalore-54,India.
Krittaya Chunhaviriyakul, Department of Computer Science and Engineering, M.S. Ramaiah Institute of
Technology,Bangalore-54,India.
Annapurna P.Patil, Department of Computer Science and Engineering, M.S. Ramaiah Institute of
Technology, Bangalore-54,India.

21. RASDP: A Resource-Aware Service Discovery Protocol for Mobile Ad Hoc Networks (pp. 150-
159 )
Abbas Asosheh, Faculty of Engineering, Tarbiat Modares University, Tehran, Iran
Gholam Abbas Angouti, Faculty of Engineering, Tarbiat Modares University, Tehran, Iran

22. Tool Identification for Learning Object Creation (pp. 160-167)


Sonal Chawla, Dept. of Computer Science and Applications, Panjab University, Chandigarh, India
Dr.R.K.Singla, Dept. of Computer Science and Applications, Panjab University, Chandigarh, India

--------------------

A Method for Extraction and Recognition of Isolated License Plate Characters

YON-PING CHEN
Dept. of Electrical and Control Engineering, National Chiao-Tung University, Hsinchu, Taiwan
Email: ypchen@cc.nctu.edu.tw

TIEN-DER YEH
Dept. of Electrical and Control Engineering, National Chiao-Tung University, Hsinchu, Taiwan
Email: tainder.ece91g@nctu.edu.tw

Abstract—A method to extract and recognize isolated characters in license plates is proposed. In the extraction stage, the proposed method detects isolated characters by using the Difference-of-Gaussian (DOG) function. The DOG function, similar to the Laplacian-of-Gaussian function, has been proven to produce the most stable image features compared with a range of other possible image functions. The candidate characters are extracted by connected component analysis on DOG images of different scales. In the recognition stage, a novel feature vector named the accumulated gradient projection vector (AGPV) is used to compare each candidate character with the standard ones. The AGPV is calculated by first projecting pixels of similar gradient orientations onto specific axes, and then accumulating the projected gradient magnitudes on each axis. In the experiments, the AGPVs are shown to be invariant to image scaling and rotation, and robust to noise and illumination change.

Keywords-accumulated gradient; gradient projection; isolated character; character extraction; character recognition

(This research was supported by a grant provided by the National Science Council, Taiwan, R.O.C., NSC 98-2221-E-009-128.)

I. INTRODUCTION

License plate recognition, or LPR in short, has been a popular research topic for several decades [1][2][3]. An LPR system is able to recognize vehicles automatically and is therefore useful for many applications such as portal control, traffic monitoring, and stolen car detection. Up to now, LPR systems still face problems concerning varying light conditions, image deformation, and processing time consumption [3].

Traditional methods for recognition of license plate characters often include several stages. Stage one is detection of the possible areas where license plates may exist. Detecting license plates quickly and robustly is a big challenge, since images may contain far more content than just the expected one. Stage two is segmentation, which divides the detected areas into several regions, each containing one character candidate. Stage three is normalization: some attributes of the character candidates, e.g., size or orientation, are converted to pre-defined values for later stages. Stage four is the recognition stage, in which the segmented characters can be recognized by technologies such as vector quantization [4] or neural networks [5][6]. Most works propose to recognize characters in binary form, so they find thresholds [7] to depict the regions of interest in the detected areas. Normalization is not always necessary for all recognition methods; methods that need normalized characters require extra computation to normalize the character candidates before recognition. In this paper the detection stage and segmentation stage are merged into an extraction stage, and normalization is not necessary because the characters are recognized in an orientation- and size-invariant manner.

The motivations of this work originate from three limitations of traditional LPR systems. First, traditional methods use simple features such as gradient energy to detect the possible locations of license plates. However, this may miss some plate candidates because the gradient energy can be suppressed by camera saturation or underexposure, which often occurs under extreme light conditions such as direct sunlight or shadow. Second, traditional detection methods often assume that the license plate images are captured in the correct orientation, so that the gradients can be accumulated along a pre-defined direction and the license plates detected correctly. In real cases, license plates do not always keep the same orientation in the captured images; they can be rotated or slanted due to irregular roads, unfixed camera positions, or abnormal conditions of the cars. Third, it often happens that some characters in a license plate are blurred or corrupted, which may make the LPR process fail in the detection or segmentation stage. This is dangerous in practice, because one single character may cause the loss of the whole license plate. People, by contrast, know the position of an unclear character because they see other characters located beside it; we try different strategies, e.g., changing head position or walking closer, to read the unclear character, and even guess it if it is still not distinguishable. This behavior is not achievable in a traditional LPR system because of its coarse-to-fine architecture. To retain a high detection rate of license plates under these limitations, this paper proposes a fine-to-coarse method which first finds isolated characters in the captured image. Once some characters of a license plate are found, the entire license plate can be detected around these characters. The method may consume more computation than the traditional coarse-to-fine approach; however, it minimizes the probability of missing license plate candidates in the detection stage.

A challenge for the fine-to-coarse method is the recognition of isolated characters, which involves several difficulties.

First, it is difficult to extract the orientation of an isolated character. In traditional LPR systems [3], the orientations of characters can be determined by the baseline [3][8] of multiple characters; however, this approach is not suitable for isolated characters. Second, the unfixed camera view angle often introduces large deformations of the character shapes or stroke directions, which makes the detection and normalization process difficult to apply. Third, the unknown orientations and shapes, exposed under unknown light conditions and environments, become a bottleneck for the characters to be correctly extracted and recognized.

Figure 1. Process flow of the proposed method. Extraction stage: calculate scale-space differences in the input image; group pixels of positive (or negative) differences; calculate the gradient histogram of each group; find 1~6 nature axes; find the nature AGPVs on each axis. Recognition stage: match with the nature AGPVs in the database to find possible matching characters; calculate the augmented AGPVs of each possible matching character; calculate the total matching cost of each possible matching character; recognize by the lowest matching cost.

The proposed scheme to extract and recognize license plate characters proceeds as follows. First, the candidates of characters are detected by scale-space differences; scale-space extrema have been proven stable against noise, illumination change and 3D viewpoint change [9]-[14], and in this paper the scale-space differences are approximated by difference-of-Gaussian functions as in [9]. Second, the pixels of positive (or negative) differences are gathered into groups by connected component analysis and form the candidates of characters. Third, on each group the proposed accumulated gradient projection method is applied to find the nature axes and the associated accumulated gradient projection vectors (AGPVs). Finally, the AGPVs of each candidate are matched with those of the standard characters to find the most similar standard character as the recognition result. The experimental results show the feasibility of the proposed method and its robustness to several image parameters such as noise and illumination change.

II. EXTRACTION OF ISOLATED CHARACTERS

Before extracting the isolated characters in an image, four assumptions are made for the proposed method:
1. The color (or intensity, for gray-scale images) of a character is monotonic, i.e., the character is composed of a single color without texture on it.
2. As in 1, the color of the background around the character is monotonic, too.
3. The color of the character is always different from that of the background.
4. Characters must be isolated, with no overlap, in the input image.

In this section, scale-space theory is used as the theoretical basis of the method to robustly extract the characters of interest in the captured image. Based on the theory, Difference-of-Gaussian images at all the different scales are iteratively calculated and grouped to produce the candidates of license plate characters.

A. Produce the Difference-of-Gaussian Images

Taking advantage of the scale-space theories [9]-[11], the extraction of characters becomes systematic and effective. In the first step, the detection of the characters is done by searching scale-space images at all the possible scales where the characters may appear in the input image. As suggested by the authors in [12] and [13], the scale-space images in this work are generated by convolving the input image with Gaussian functions of different scales. The first task in obtaining the scale-space images is defining the Gaussian function. Two parameters are required for choosing the Gaussian filters, i.e., the filter width λ and the smoothing factor σ; the two parameters are not fully independent, and the relationship between them is discussed below.

The range of the smoothing factor σ is determined from experiments: a good choice is from 1 to 16 for input images of up to 3M pixels. There are two factors relevant to the sampling frequency of σ: the resolution of the target characters and the computational resources (including the allowed processing time). These two factors trade off against each other and are often determined case by case. In this paper, we choose to set the σ of a scale to double that of the previous scale for convenient computation, i.e., σ2 = 2σ1, σ3 = 2σ2, ..., where σ1, σ2, σ3, ... are the corresponding smoothing factors of the scales numbered 1, 2, 3, ... As a result, the choice of smoothing factors in our case is σ1 = 1, σ2 = 2, σ3 = 4, σ4 = 8, and σ5 = 16.

Considering the factors of noise and sampling frequency in the spatial domain, a larger σ is more stable for detecting characters of larger sizes.

Ideally the width λ of a Gaussian filter is infinite, while in practice it is reasonable to use an integer to match the design of digital filters. In addition, the integer cannot be large, because of limited computational resources, and only odd integers are chosen so that each output of the convolution can be aligned to the center pixel of the filter. The width λ is changed with the smoothing factor σ, which is in other words the standard deviation of the Gaussian distribution. A smaller σ has a better response on edges but is more sensitive to noise. When σ is small, there is no need to define a large λ, because the filter decays to a very small value when it reaches the boundary. In this paper we propose to choose the two parameters satisfying the following inequality:

\lambda \ge 7\sigma \quad\text{and}\quad \lambda = 2n + 1,\ \forall n \in \mathbb{N}.   (1)

Figure 2. The procedure to produce Difference-of-Gaussian images.

An efficient way to generate the smoothed images is to use sub-sampling. As explained above, the filter width is better chosen as λ ≥ 7σ, which makes the filter width grow to a large value as the smoothing factor grows. This leads to a considerable amount of computation in practice if the filters are implemented at such a length. To avoid expanding the filter width directly, we make use of sub-sampling on images with smoothing factors σ > 1, based on the fact that the information in images decreases as the smoothing factor increases.

Because the image sizes vary with the level of sub-sampling, we store the smoothed images in a series of octaves according to their sizes. The images in an octave have one half the length and width of those in the previous octave. In each octave there are two images subtracted from each other to produce the desired Difference-of-Gaussian (DOG) image for later processing. The procedure for producing the Difference-of-Gaussian images is illustrated in Fig. 2. Let the length and width of the input image I(x,y) be L and W, respectively. In the beginning, I(x,y) is convolved with the Gaussian filter G(x,y,σa) to generate the first smoothed image I1(x,y) for the first octave; σa is the smoothing factor of the initial scale and is selected as 1 (σa = σ1) in our experiments. The smoothed image I1(x,y) is then convolved with the Gaussian filter G(x,y,σb) to generate the second smoothed image I2(x,y), which is subtracted from I1(x,y) to generate the first DOG image D1(x,y) of the octave. The image I2(x,y) is also sub-sampled by every two pixels in each row and column to produce the image I2'(x,y) for the next octave. It is worth noting that an image sub-sampled from a source image has a smoothing factor equal to one half that of the source image. The length and width of image I2'(x,y) are L/2 and W/2, and the equivalent smoothing factor is (σa+σb)/2 from the initial scale. As σb is selected to be the same as the smoothing factor σa of the initial scale, the image I2'(x,y) therefore has the equivalent smoothing factor σ = σa, and serves as the initial scale of the second octave. The image I2'(x,y) is convolved with G(x,y,σb) again to generate the third smoothed image I3(x,y), which is subtracted from I2'(x,y) to produce the second DOG image D2(x,y). The same procedure is applied to the remaining octaves to generate the required smoothed images I4 and I5 and the Difference-of-Gaussian images D3 and D4.

B. Grouping of the Difference-of-Gaussian Images

To find the characters of interest in the DOG image, the next step is to apply connected component analysis to connect pixels of positive (or negative) responses into groups. After the connected component analysis, all the groups are filtered by their sizes. There are expected character sizes for each octave, and a group is discarded if its size does not fall into the expected range. The most stable sizes for extracting general characters in each octave range from 32×32 to 64×64. Characters smaller than 32×32 are easily disturbed by noise and lead to undesirable outcomes, while characters larger than 64×64 can be extracted in octaves of larger scales.
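As an illustration of Sections II-A and II-B, the following Python sketch builds the octave/DOG structure by repeated Gaussian smoothing, subtraction, and 2x sub-sampling, then groups positive and negative DOG responses with connected component analysis and filters the groups by size. It is not the authors' code: the use of scipy/numpy, the function names, and the way the size filter is applied are assumptions based on the text above.

```python
# Minimal sketch of the extraction stage described in Sections II-A and II-B.
# Assumptions (not from the paper): scipy/numpy are used, and a simple sign
# test on the DOG separates positive from negative responses.
import numpy as np
from scipy.ndimage import gaussian_filter, label, find_objects

def build_dog_octaves(image, sigma_b=1.0, num_octaves=4):
    """Return a list of DOG images D1..Dn, one per octave (cf. Fig. 2)."""
    smoothed = gaussian_filter(image.astype(np.float32), sigma=1.0)  # I1, sigma_a = 1
    dogs = []
    for _ in range(num_octaves):
        blurred = gaussian_filter(smoothed, sigma=sigma_b)   # next smoothed image
        dogs.append(smoothed - blurred)                      # D_k = I_k - I_{k+1}
        smoothed = blurred[::2, ::2]                         # sub-sample for next octave
    return dogs

def extract_candidates(dog, min_size=32, max_size=64):
    """Connected-component grouping of positive and negative DOG responses,
    filtered by the expected character size (32x32 to 64x64 per octave)."""
    candidates = []
    for mask in (dog > 0, dog < 0):
        labels, _ = label(mask)
        for sl in find_objects(labels):
            h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
            if min_size <= h <= max_size and min_size <= w <= max_size:
                candidates.append(sl)
    return candidates
```

Because every octave halves the image dimensions, a character that is larger than 64x64 at full resolution falls into the 32x32 to 64x64 window of a later octave, which is consistent with the size filter described above.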

III. THE ACCUMULATED GRADIENT PROJECTION VECTOR (AGPV) METHOD

After the extraction of the candidate characters, a novel method named the accumulated gradient projection vector method, or AGPV method in short, is proposed and applied to recognize the extracted candidate characters. There are four stages in recognizing a character using the AGPV method. First, determine the axes, including the nature axes and the augmented axes. Second, calculate the AGPVs based on these axes. Third, normalize the AGPVs for comparison with the standard ones. Fourth, match with the standard AGPVs to validate the recognition result. The procedure is explained in detail in the following sections.

A. Determine Axes

It is important to introduce the axes before discussing the AGPV method. An axis of a character is a specific orientation on which the gradients of the grouped pixels are projected and accumulated to form the desired feature vector. An axis is represented by a line that has a specific orientation and passes through the center-of-gravity point of the pixel group. The axes of a character can be separated into two classes, named nature axes and augmented axes, which differ in characteristics and usage and are described below.

1) Build up the Orientation Histogram
Once an input image is clustered into one or more groups of pixels, the next step is to build up the corresponding orientation histograms. The orientation histograms are formed from the gradient orientations of the grouped pixels. Let γ(x,y) be the intensity value of a sample pixel (x,y) of an image group I; the gradients on the x-axis and y-axis are, respectively,

\nabla X(x,y) = \gamma(x+1,y-1) - \gamma(x-1,y-1) + 2\,(\gamma(x+1,y) - \gamma(x-1,y)) + \gamma(x+1,y+1) - \gamma(x-1,y+1),
\nabla Y(x,y) = \gamma(x-1,y+1) - \gamma(x-1,y-1) + 2\,(\gamma(x,y+1) - \gamma(x,y-1)) + \gamma(x+1,y+1) - \gamma(x+1,y-1).   (2)

The gradient magnitude m(x,y) and orientation θ(x,y) of this pixel are computed by

m(x,y) = \sqrt{(\nabla X(x,y))^2 + (\nabla Y(x,y))^2},\qquad \theta(x,y) = \tan^{-1}\bigl(\nabla Y(x,y)/\nabla X(x,y)\bigr).   (3)

By assigning a resolution BINhis to the orientation histogram, the gradients are accumulated into BINhis bins and the angle resolution is REShis = 360/BINhis. BINhis is chosen as 64 in the experiments, so the angle resolution REShis is 5.625 degrees. Each sample added to the histogram is weighted by its gradient magnitude and accumulated into the two nearest bins by linear interpolation. Besides the histogram accumulation, the gradient of each sample is accumulated into a variable GEhis, which stands for the total gradient energy of the histogram.

2) Determine the Nature Axes
The next step toward recognition is to find the corresponding nature axes based on the built orientation histogram. The word "nature" is used because the axes always exist "naturally" regardless of most environment and camera factors that degrade the recognition rate. The nature axes have several good properties helpful for the recognition. First, they have high gradient energy in a specific orientation and are therefore easily detectable in the input image. Second, the angle differences among the nature axes are invariant to image scaling and rotation; this means they can be used as references to correct the unknown rotation and scaling factors of the input image. Third, the directions of the nature axes are robust within a range of focus and illumination differences. Fourth, although some factors, such as a different camera view angle, may cause character deformation and change the angle relationships among the nature axes, the detected nature axes are still useful to filter out dissimilar candidates and narrow down the range of recognition.

Let the function H(a) denote the histogram magnitude at angle a. Find the center of the k-th peak of the histogram, pk, defined by H(pk) > H(pk−1) and H(pk) > H(pk+1). A peak represents a specific orientation in the character image. Besides the center, find the boundaries of the peak, the start angle sk and the end angle ek, within a threshold angle distance ath, i.e.,

s_k = a:\ H(a) \le H(b),\ \forall b \in (p_k - a_{th},\, p_k),   (4)

e_k = a:\ H(a) \le H(b),\ \forall b \in (p_k,\, p_k + a_{th}).   (5)

The threshold ath is used to guarantee that the boundaries of a peak stay near its center and is defined to be 22.5 degrees in the experiment. The reason for choosing a ±22.5-degree threshold is that it segments a 360-degree circle into 8 orientations, similar to the way human eyes often see a circle in 8 octants.

Once the start angle and end angle of a peak are determined, define the energy function of the k-th peak as E(k) = \sum_{a=s_k}^{e_k} H(a), which stands for the gradient energy of the peak. In addition, an outstanding energy function D(k) is also defined for each peak:

D(k) = E(k) - \frac{(H(s_k) + H(e_k)) \times (e_k - s_k)}{2}.   (6)

The outstanding energy neglects the energy contributed by neighboring peaks and is more meaningful than E(k) for representing the distinctiveness of a peak. Peaks with small outstanding energy are not considered as nature axes, because they do not stand out from the neighboring peaks and may not be detectable in new images.

In the experiments, there are different strategies for thresholding the outstanding energy when calculating standard AGPVs and test AGPVs. When calculating standard AGPVs, we select one grouped image as the standard character image for each character and assign it to be the standard for recognition. The mission of this task is to find stable peaks in the standard character image; therefore, a higher threshold GEhis/32 is applied, and a peak whose outstanding energy is higher than the threshold is considered a nature axis of the standard character image. When calculating test AGPVs, the histogram may contain many unexpected factors such as noise, focus error, or bad lighting conditions, so the task changes to finding one or more matched candidates for further recognition. Therefore, a lower threshold GEhis/64 is used to filter out the dissimilar ones by the outstanding energy. After the threshold checking, the peaks whose outstanding energy is higher than the threshold are called the nature peaks of the character image, and the corresponding angles are called the nature axes. Typical license plate characters are found to have two to six nature axes by the procedures above.
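To make the histogram and peak-selection steps concrete, here is a small sketch that accumulates a 64-bin orientation histogram with linear interpolation of the gradient magnitudes, in the spirit of (2)-(3), and keeps as nature axes the peaks whose outstanding energy (6) exceeds a fraction of the total gradient energy GEhis. It is an illustration under assumptions, not the authors' code: np.gradient is used as an approximation of the masks in (2), the peak window is handled in bins rather than degrees, and all function and variable names are invented here.

```python
# Sketch of the orientation histogram and nature-axis selection (Sec. III-A).
# Thresholds follow the text: GE_his/32 for standard characters, GE_his/64 for
# test characters; ath = 22.5 degrees = 4 bins of 5.625 degrees.
import numpy as np

BIN_HIS = 64                                        # 360/64 = 5.625 deg per bin

def orientation_histogram(gray, mask):
    gy, gx = np.gradient(gray.astype(np.float32))   # approximation of (2)
    mag = np.hypot(gx, gy)                          # m(x,y), eq. (3)
    theta = np.degrees(np.arctan2(gy, gx)) % 360.0  # theta(x,y), eq. (3)
    hist = np.zeros(BIN_HIS)
    pos = theta[mask] / (360.0 / BIN_HIS)
    lo = np.floor(pos).astype(int) % BIN_HIS
    frac = pos - np.floor(pos)
    np.add.at(hist, lo, mag[mask] * (1 - frac))           # two nearest bins,
    np.add.at(hist, (lo + 1) % BIN_HIS, mag[mask] * frac)  # linear interpolation
    return hist, float(mag[mask].sum())                    # histogram and GE_his

def nature_axes(hist, ge_his, divisor=32, ath_bins=4):
    deg_per_bin = 360.0 / BIN_HIS
    axes = []
    for p in range(BIN_HIS):
        if not (hist[p] > hist[p - 1] and hist[p] > hist[(p + 1) % BIN_HIS]):
            continue                                 # p is not a peak centre
        s_off = min(range(-ath_bins, 0), key=lambda o: hist[(p + o) % BIN_HIS])
        e_off = min(range(1, ath_bins + 1), key=lambda o: hist[(p + o) % BIN_HIS])
        window = [hist[(p + o) % BIN_HIS] for o in range(s_off, e_off + 1)]
        energy = sum(window)                         # E(k) over [s_k, e_k]
        outstanding = energy - (window[0] + window[-1]) * (e_off - s_off) / 2.0  # D(k), eq. (6)
        if outstanding > ge_his / divisor:
            axes.append(p * deg_per_bin)             # angle of a nature axis
    return axes
```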

Fig. 3 shows an example of the nature axes. Fig. 3(a) is the source image, where intensity ranges from 0 (black) to 255 (white). Fig. 3(b) overlays the source image with the detected nature axes shown as red arrows. Fig. 3(c) is the corresponding orientation histogram, accumulated from the pixel gradient magnitudes in Fig. 3(a). We can see six peaks in the histogram, marked as A, B, C, D, E and F respectively, which correspond to the six red arrows in Fig. 3(b).

Figure 3. (a) Input image. (b) The nature axes. (c) Orientation histogram.

3) Determine the Augmented Axes
Augmented axes are defined, as augmentations to the nature axes, to be the directions on which the generated feature vectors (AGPVs) are unique or special in representing the source character. Unlike the nature axes, which possess strong gradient energy in a specific orientation, augmented axes do not have this property, so they may not be observable in the orientation histogram.

Some characters, such as the one in Fig. 4, have only a few (one or two) apparent nature axes. Therefore, it is necessary to generate enough AGPVs on augmented axes for the recognition process. Experiments tell us that a good choice for the number of AGPVs is four to six in order to recognize a character with a high success rate. The AGPVs can be any combination of nature-axis AGPVs and augmented-axis AGPVs.

The augmented axes can be defined by character shapes or by fixed directions. In our experiments, only four fixed directions, shown as the four arrows in Fig. 4(b), are defined as augmented axes for all 36 characters. If any one of the four directions already exists in the nature axes, it is not declared again in the augmented axes.

Figure 4. (a) A character that has only one nature axis. (b) The nature axis (red arrow) and three augmented axes (blue arrows). (c) Orientation histogram.

B. Calculate AGPVs
Once the axes of a character are found, the next step is to calculate the accumulated gradient projection vectors (AGPVs) based on these axes. On the axis of the corresponding peak pk, the gradient magnitudes of the pixels whose gradient orientations fall inside the range sk < θ(x,y) < ek are projected and accumulated. The axis can be any one of the nature axes or augmented axes.

1) Projection principles
The projection axis ηφ is chosen from either the nature axes or the augmented axes, with positive direction φ. Let (xcog, ycog) be the COG (center of gravity) point of the input image, i.e.,

\begin{bmatrix} x_{cog} \\ y_{cog} \end{bmatrix} = \frac{1}{N} \begin{bmatrix} \sum_{i=1}^{N} x_i \\ \sum_{i=1}^{N} y_i \end{bmatrix},   (7)

where (xi, yi) is the i-th sample pixel and N is the total number of sample pixels of the character. Let the function A(x,y) denote the angle between sample pixel (x,y) and the x-axis, i.e.,

A(x,y) = \tan^{-1}\!\left(\frac{y}{x}\right).   (8)

The process of projecting a character onto axis ηφ can be decomposed into three operations. First, rotate the character by the angle Δθ = A(xcog, ycog) − φ. Second, scale the rotated pixels by a projection factor cos(Δθ). Third, translate the axis origin to the desired coordinate. Applying the process to the COG point, the coordinate of the COG point after rotation is

\begin{bmatrix} x_{rcog} \\ y_{rcog} \end{bmatrix} = \begin{bmatrix} \cos\Delta\theta & -\sin\Delta\theta \\ \sin\Delta\theta & \cos\Delta\theta \end{bmatrix} \cdot \begin{bmatrix} x_{cog} \\ y_{cog} \end{bmatrix}.   (9)

Scaling by the projection factor cos(Δθ), it becomes

\begin{bmatrix} x_{pcog} \\ y_{pcog} \end{bmatrix} = \begin{bmatrix} \cos\Delta\theta & 0 \\ 0 & \cos\Delta\theta \end{bmatrix} \cdot \begin{bmatrix} x_{rcog} \\ y_{rcog} \end{bmatrix}.   (10)
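The three projection operations just described, a rotation by Δθ followed by a scaling by cos(Δθ), can be sketched as follows for the COG point. This is an illustrative reading of (7)-(10) only; the array layout (an (N, 2) array of pixel coordinates), the use of arctan2 in place of (8), and the function names are assumptions, and the translation step appears in (11) below.

```python
# Illustrative sketch of (7)-(10): centre of gravity, rotation of the COG by
# delta_theta = A(x_cog, y_cog) - phi, and scaling by cos(delta_theta).
import numpy as np

def cog(points):
    """Centre of gravity (x_cog, y_cog) of the grouped pixels, eq. (7)."""
    return points.mean(axis=0)

def project_cog(points, phi_deg):
    """Apply the rotation (9) and the cos(delta_theta) scaling (10) to the COG."""
    x_cog, y_cog = cog(points)
    delta = np.arctan2(y_cog, x_cog) - np.radians(phi_deg)  # A(x,y) of (8), minus phi
    c, s = np.cos(delta), np.sin(delta)
    rotated = np.array([[c, -s], [s, c]]) @ np.array([x_cog, y_cog])  # eq. (9)
    return c * rotated                                                # eq. (10)
```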

Finally, combining (9) and (10) and further translating the origin of axis ηφ to (xηori, yηori), the final coordinate (xproj, yproj) of projecting any sample pixel (x,y) onto axis ηφ is computed by

\begin{bmatrix} x_{proj} \\ y_{proj} \end{bmatrix} = \begin{bmatrix} \cos^2\Delta\theta & -\sin\Delta\theta\cos\Delta\theta \\ \sin\Delta\theta\cos\Delta\theta & \cos^2\Delta\theta \end{bmatrix} \cdot \begin{bmatrix} x \\ y \end{bmatrix} - \begin{bmatrix} x_{pcog} \\ y_{pcog} \end{bmatrix} + \begin{bmatrix} x_{\eta ori} \\ y_{\eta ori} \end{bmatrix}.   (11)

Note that the origin of axis ηφ, (xηori, yηori), is chosen to be the COG point of the candidate character in the experiments, i.e., (xηori, yηori) = (xcog, ycog), because this concentrates the projected pixels around the origin (xcog, ycog) and saves some of the memory used to accumulate the projected samples on the new axis.

2) Gradient projection accumulation
In this step, the pre-computed gradient orientations and magnitudes are projected onto specific axes and then summed up. Only sample pixels of similar gradient orientations are projected onto the same axis. See Fig. 5 for an example: an object O is projected onto an axis η of angle 0 degrees. In this case, only the sample pixels with gradient orientations θ(x,y) near 0 degrees are projected onto η and then accumulated.

According to the axis type, there are two cases for selecting sample pixels of similar orientations. For a nature axis corresponding to the k-th peak pk, the sample pixels with orientation θ(x,y) inside the boundaries of pk, i.e., sk < θ(x,y) < ek, are projected and accumulated. For an augmented axis with angle φ, the sample pixels with gradient orientations θ(x,y) ≥ φ − 22.5 and θ(x,y) ≤ φ + 22.5 are projected and accumulated.

Figure 5. Accumulation of gradient projection.

From (3) and (11), the projected gradient magnitude m̂(x,y) and the projected distance l̂(x,y) of sample pixel (x,y) onto axis ηφ are, respectively,

\hat{m}(x,y) = m(x,y) \times \cos\bigl(\theta(x,y) - \varphi\bigr),   (12)

\hat{l}(x,y) = \sqrt{(x_{proj} - x_{cog})^2 + (y_{proj} - y_{cog})^2}.   (13)

To accumulate the gradient projections, an empty array R(x) is created with length equal to the diagonal of the input image. Since the indexes of an array must be integers, linear interpolation is used to accumulate the gradient projections into the two nearest indexes of the array. In mathematical representation, let b = floor(l̂(x,y)) and u = b + 1, where floor(z) rounds z to the nearest integer towards minus infinity. For each sample pixel (x,y) in the input image I, do the following accumulations:

R(b) = R(b) + \hat{m}(x,y) \times \bigl(\hat{l}(x,y) - b\bigr);\qquad R(u) = R(u) + \hat{m}(x,y) \times \bigl(u - \hat{l}(x,y)\bigr).   (14)

Besides R(x), an additional gradient accumulation array T(x) is also created to collect extra information required for normalization. There are two differences between R(x) and T(x). First, unlike R(x), which targets only the sample pixels of similar orientation, T(x) targets all the sample pixels of a character and accumulates their gradient magnitudes. Second, R(x) accumulates the projected gradient magnitude m̂(x,y), while T(x) accumulates the original gradient magnitude m(x,y). Referring to (14),

T(b) = T(b) + m(x,y) \times \bigl(\hat{l}(x,y) - b\bigr);\qquad T(u) = T(u) + m(x,y) \times \bigl(u - \hat{l}(x,y)\bigr).   (15)

The purpose of T(x) is to collect the overall gradient information of the character of interest, which is then applied to normalize the array R(x) into the desired AGPV.
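A compact sketch of this accumulation step might look like the following. It is an illustration under assumptions: the helper project_pixel() stands in for the full projection of (11), the argument names are invented, and the interpolation weights are written exactly as printed in (14)-(15).

```python
# Sketch of the accumulation arrays R(x) and T(x) of (12)-(15).
import numpy as np

def accumulate(all_samples, selected, cog_xy, phi_deg, diag_len, project_pixel):
    """all_samples: list of (x, y, m, theta_deg) for every pixel of the character;
    selected: booleans marking pixels whose orientation lies in the axis range;
    project_pixel: assumed helper implementing eq. (11) for this axis."""
    R = np.zeros(int(np.ceil(diag_len)) + 2)   # R collects only the selected pixels
    T = np.zeros_like(R)                       # T collects all pixels of the character
    phi = np.radians(phi_deg)
    for (x, y, m, theta_deg), keep in zip(all_samples, selected):
        x_p, y_p = project_pixel(x, y)                              # eq. (11)
        l_hat = np.hypot(x_p - cog_xy[0], y_p - cog_xy[1])          # eq. (13)
        b = int(np.floor(l_hat)); u = b + 1                         # two nearest indexes
        T[b] += m * (l_hat - b); T[u] += m * (u - l_hat)            # eq. (15)
        if keep:                                                    # similar orientations only
            m_hat = m * np.cos(np.radians(theta_deg) - phi)         # eq. (12)
            R[b] += m_hat * (l_hat - b); R[u] += m_hat * (u - l_hat)  # eq. (14)
    return R, T
```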

3) Normalization
The last step in finding the AGPV on an axis is to normalize the gradient projection accumulation array R(x) into a fixed-length vector. With a fixed length, the AGPVs have uniform dimensionality and can be compared with the standard AGPVs easily. Before the normalization, the length of the AGPV, LAGPV, has to be determined. Depending on the complexity of the recognition targets, different AGPV lengths may be selected to describe the distribution of projected gradients. In our experiments, LAGPV is chosen as 32. A smaller LAGPV lowers the resolution and degrades the recognition rate, while a larger LAGPV slows down system performance and makes no significant difference to the recognition rate. It is worth noting that an AGPV formed on one axis is independent of the AGPVs formed on other axes. This is important, as it makes the AGPVs independent from one another regardless of the character and axes from which they are formed.

In order to avoid the impact of isolated sample pixels, which are mostly caused by noise, the array R(x) is filtered by a Gaussian filter G(x):

\tilde{R}(x) = R(x) * G(x),   (16)

where the operator * stands for the convolution operation. The variance of G(x) is chosen as σ = LenR/128 in the experiments, where LenR is the length of R(x); this choice is found to benefit both resolution and noise rejection. Similarly, the array T(x) is also filtered by the same Gaussian filter to eliminate the effect of noise. After the Gaussian filtering, the array T(x) is analyzed to find the effective range, i.e., the range in which the data is effective for representing a character. The effective range starts at index Xs and ends at index Xe, defined as

X_s = \{x_s:\ T(x_s) \ge th_T;\ T(x) < th_T,\ \forall x < x_s\},   (17)

and

X_e = \{x_e:\ T(x_e) \ge th_T;\ T(x) < th_T,\ \forall x > x_e\},   (18)

where the threshold thT is used to discard noise and is chosen as thT = Max(T(x))/32 in the experiment. The effective range of R(x) is the same as the effective range of T(x), from Xs to Xe.

As mentioned previously, the gradient projection accumulation results in a large sum along a straight edge. This is a good property if the character of interest is composed of straight edges. However, some characters consist of not only straight edges but also curves and corners, which contribute only small energy to the array R(x). In order to balance the contributions of the different types of edges and avoid the disturbance from noise, a threshold is used to adjust the content of the array R(x) before normalization:

\hat{R}(x) = \begin{cases} 0, & \text{if } \tilde{R}(x) < th_R \\ 255, & \text{if } \tilde{R}(x) \ge th_R \end{cases}.   (19)

After finding the effective range and adjusting the content of the array R(x), the accumulated gradient projection vector (AGPV) is defined by resampling from R̂(x):

AGPV(i) = \hat{R}\!\left(\operatorname{round}\!\left(\frac{i}{32} \times (X_e - X_s) + X_s\right)\right).   (20)

Fig. 6 gives an example of the gradient accumulation array T(x), the gradient projection accumulation array R(x), and the normalized AGPV. The example uses the same test image as Fig. 3, and only one of the nature axes, axis E, is selected and printed. Similar to the method of finding the peaks of the orientation histogram, the k-th peak pk on R(x) is defined by R(pk) > R(pk−1) and R(pk) > R(pk+1). It can be observed that four peaks exist in Fig. 6(c) and each of them represents an edge projected onto the axis.

Figure 6. (a) Gradient projection on axis D (pink: COG point; red: axis D; cyan: selected sample pixels; blue: locations after projection). (b) The gradient accumulation array T(x) with distance to the COG point. (c) The gradient projection array R(x). (d) Normalized AGPV.

C. Matching and Recognition
1) Properties used for matching
Unlike general vector matching problems, which refer directly to the RMS error of two vectors, the matching of AGPVs refers to special properties derived from their physical meanings. There are three properties useful for measuring the similarity between two AGPVs.

Each peak in an AGPV represents an edge of the source character. The number of peaks, or edge count, is useful for representing the difference between two AGPVs. For example, there are four peaks in the extracted AGPV in Fig. 6(d) and each of them represents an edge on the axis. The edge count is invariant no matter how the character appears in the input image. In this paper, a function EC(V) is defined to calculate the edge count of an AGPV V; EC(V) increases whenever V(i) = 0 and V(i+1) > 0.

Although the edge count of an AGPV is invariant for the same character, the positions of the edges can vary if the character is deformed by a slanted camera angle. This is the major reason why the RMS error is not suitable for measuring the similarity between two AGPVs. In order to compare AGPVs under character deformation, a matching cost function C(U,V) is calculated to measure the similarity between AGPV U and AGPV V, expressed as

C(U,V) = \bigl|EC(U) - EC(V)\bigr| + \bigl|EC(UV) - EC(V)\bigr| + \bigl|EC(IV) - EC(V)\bigr|,   (21)

where UV = U ∪ V is the union vector of AGPV U and AGPV V, with UV(i) = 1 if V(i) > 0 or U(i) > 0, and IV = U ∩ V is the intersection vector, with IV(i) = 1 if V(i) > 0 and U(i) > 0.
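The edge count and the matching cost of (21) can be sketched as below. The AGPVs are assumed to be length-32 arrays with entries 0 or 255, following (19)-(20), and the absolute differences follow the reading of (21) given above; function names are invented for this sketch.

```python
# Sketch of the edge count EC(V) and the matching cost C(U, V) of (21).
import numpy as np

def edge_count(v):
    """EC(V): number of 0 -> positive transitions along the vector."""
    v = np.asarray(v)
    return int(np.sum((v[:-1] == 0) & (v[1:] > 0)))

def matching_cost(u, v):
    """C(U, V) of eq. (21), using the union and intersection vectors."""
    u, v = np.asarray(u), np.asarray(v)
    uv = ((u > 0) | (v > 0)).astype(int)      # union vector UV
    iv = ((u > 0) & (v > 0)).astype(int)      # intersection vector IV
    return (abs(edge_count(u) - edge_count(v))
            + abs(edge_count(uv) - edge_count(v))
            + abs(edge_count(iv) - edge_count(v)))
```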

An inter-axis property used to match the test character with the standard characters is that the angular relationships of the nature axes of the test character are similar to those of the corresponding standard character. In the experiment, a threshold thA = π/32 is used to check whether the AGPVs of the test character match the angular relationships of the nature axes of a standard character. Let AAT(k) be the k-th axis angle of the test character, with 0 ≤ AAT(k) < 2π. The function AA(i,j) denotes the angle of the j-th axis of the i-th standard character, similarly with 0 ≤ AA(i,j) < 2π. If the m-th and n-th axes of the test character correspond respectively to the g-th and h-th axes of the i-th standard character, then

\bigl|(AA_T(m) - AA_T(n)) - (AA(i,g) - AA(i,h))\bigr| \le th_A.   (22)

2) Create the standard AGPV database
A standard database is created by collecting all the AGPVs extracted from characters with standard parameters: standard size, standard aspect ratio, no noise, no blur, and neither rotation nor deformation. The extracted AGPVs are called standard AGPVs and are stored in two categories: those calculated on nature axes are called standard nature AGPVs, and those calculated on augmented axes are called standard augmented AGPVs. Let the number of standard characters be N; N = 36 (0~9 and A~Z) for the license plate characters in this paper. Denote the number of standard nature AGPVs for the i-th standard character as NN(i), the number of standard augmented AGPVs as NA(i), and the total number of AGPVs as NV(i), where NV(i) = NN(i) + NA(i). The j-th standard AGPV of the i-th character is denoted as VS(i,j), where j = 1 to NV(i). Note that VS(i,j) are standard nature AGPVs for j ≤ NN(i), while VS(i,j) are standard augmented AGPVs otherwise.

3) Matching of characters
In order to recognize the test character, its AGPVs are compared stage by stage with the standard AGPVs in the database. Moreover, a candidate list is created that includes all the standard characters at the beginning and removes those having a high matching cost to the test character at each stage. At the end of the last stage, the candidate in the list with the lowest total matching cost is taken as the recognition result.

Stage 1: Find the fundamental matching pair. Calculate the cost function between the test character and the j-th AGPV of the i-th standard character,

C_1(k,j) = MC\bigl(V_T(k), V_S(i,j)\bigr),   (23)

and find the pair of axes whose matching cost is minimum:

(k_T, j_S) = \arg\min_{k,\,j} C_1(k,j).   (24)

If min(C1(kT, jS)) is less than a threshold thF, the i-th standard character is kept in the candidate list and the pair (kT, jS) serves as the fundamental pair of the candidate.

Stage 2: Find the other matching pairs between the standard AGPVs and the test character. Based on the fundamental pair, the axis angles of the test character are compared with those of the standard character. Let the number of nature AGPVs detected on the test character be NNT. For the i-th standard character, create an empty array mp(j) = 0, 1 ≤ j ≤ NV(i), to denote the matching pairs with the test character. Making use of (22), calculate

\bigl|(AA_T(k) - AA_T(k_T)) - (AA(i,j) - AA(i,j_S))\bigr| \le th_A,\quad \forall k \in [1, NN_T],\ k \ne k_T;\ \forall j \in [1, NN(i)],\ j \ne j_S.   (25)

The k-th test AGPV satisfying (25) is called the j-th matching pair of the standard character, denoted as mp(j) = k. Note that there might be more than one test AGPV satisfying (25); in this case only the one with the lowest matching cost is taken as the j-th matching axis and the others are ignored.

Stage 3: Calculate the total matching cost of the standard nature AGPVs. Define a character matching cost function CMC(i) to measure the similarity between the test character and the i-th standard character by summing up the matching costs of all the matching pairs:

CMC(i) = \sum_{j=1,\, mp(j)>0}^{NN(i)} MC\bigl(V_T(mp(j)), V_S(i,j)\bigr).   (26)

Stage 4: Calculate the matching costs of the augmented AGPVs. In the first step, find the axis angle AX on the test character corresponding to the j-th standard augmented axis as

AX = \bigl(AA(i,j) - AA(i,j_S)\bigr) + AA_T(k_T).   (27)

If there is one AGPV of the test character, say the k-th nature AGPV, that satisfies (25), i.e., |AAT(k) − AX| ≤ thA, then the k-th nature AGPV is mapped to the j-th augmented axis and mp(j) = k. Otherwise, the AGPV corresponding to the j-th standard augmented axis must be calculated based on the axis angle AX. After that, the matching costs of the augmented AGPVs are accumulated into the character matching cost function as

CMC(i) = CMC(i) + \sum_{j=NN(i)+1}^{NV(i)} MC\bigl(V_T(mp(j)), V_S(i,j)\bigr).   (28)

Stage 5: Recognition. Due to the different numbers of AGPVs for different standard characters, the character matching cost function is normalized by the total number of standard AGPVs, i.e.,

CMC(i) = CMC(i) / NV(i).   (29)

Finally, the test character is recognized as the h-th standard character with the lowest matching cost if the character matching cost CMC(h) < thR.
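Putting the five stages together, the recognition loop over the candidate list might be organized as follows. This is a schematic rendering only, not the authors' implementation: matching_cost() is the C(U,V) sketch given earlier, agpv_on_axis is an assumed callback that computes an AGPV of the test character for an arbitrary angle (Section III-B), the Stage 4 shortcut of reusing an already-matched nature AGPV is omitted for brevity, and the thresholds thF and thR are placeholders since their numerical values are not given in the paper.

```python
# Schematic of the five matching stages (Sec. III-C.3).
import numpy as np

TH_A = np.pi / 32          # angular tolerance th_A
TH_F = TH_R = 1e9          # placeholder thresholds (values not given in the paper)

def recognize(test_axes, test_agpvs, database, agpv_on_axis):
    """test_axes[k] is the angle of the k-th nature axis of the test character,
    test_agpvs[k] its AGPV; database entries are dicts
    {char, AA: [angles], VS: [vectors], NN: int, NV: int} (assumed layout)."""
    best_char, best_cost = None, None
    for std in database:
        # Stage 1: fundamental pair (k_T, j_S) of minimum cost, eqs. (23)-(24)
        costs = [(matching_cost(test_agpvs[k], std["VS"][j]), k, j)
                 for k in range(len(test_agpvs)) for j in range(std["NN"])]
        c1, k_T, j_S = min(costs)
        if c1 >= TH_F:
            continue                                   # drop this candidate
        # Stage 2: remaining nature matching pairs, eq. (25)
        mp = {j_S: k_T}
        for j in range(std["NN"]):
            for k in range(len(test_agpvs)):
                if j == j_S or k == k_T:
                    continue
                diff = (test_axes[k] - test_axes[k_T]) - (std["AA"][j] - std["AA"][j_S])
                if abs(diff) <= TH_A and (j not in mp or
                        matching_cost(test_agpvs[k], std["VS"][j])
                        < matching_cost(test_agpvs[mp[j]], std["VS"][j])):
                    mp[j] = k                          # keep the lowest-cost pair
        # Stage 3: total cost over the matched nature AGPVs, eq. (26)
        cmc = sum(matching_cost(test_agpvs[mp[j]], std["VS"][j]) for j in mp)
        # Stage 4: augmented axes, eqs. (27)-(28), AGPVs computed on demand
        for j in range(std["NN"], std["NV"]):
            ax = (std["AA"][j] - std["AA"][j_S]) + test_axes[k_T]   # eq. (27)
            cmc += matching_cost(agpv_on_axis(ax), std["VS"][j])
        # Stage 5: normalize and keep the best candidate, eqs. (29) and the final test
        cmc /= std["NV"]
        if best_cost is None or cmc < best_cost:
            best_char, best_cost = std["char"], cmc
    return best_char if (best_cost is not None and best_cost < TH_R) else None
```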

IV. EXPERIMENTAL RESULTS

The test images are captured from license plates under general conditions and include several factors that often degrade recognition rates, such as dirty plates, deformed license plates, and plates under special light conditions. All the images are sized 2048×1536 and converted into 8-bit gray-scale images. In total, 60 test images are collected, and each of them contains one license plate consisting of 4 to 6 characters. The minimum character size in the images is 32×32 pixels. We choose some characters from the test images to be the standard characters and calculate the standard AGPVs.

The results are measured by the true positive rate (TPR), i.e., the rate at which the true characters are extracted and recognized successfully, and the false positive rate (FPR), i.e., the rate at which false characters are extracted and recognized as a character in the test image. The process is divided into two stages, the extraction stage and the recognition stage, and the result of each stage is recorded and listed in Table I.

The discussion focuses on the stability of the proposed method with respect to three imaging parameters: noise, character deformation, and illumination change. These parameters are considered to be the most serious factors impacting the recognition rate. Denote the original test images as set A. Three extra image sets, set B, set C and set D, are generated to test the impact of these parameters, respectively.

In Table I, the column "Extraction TPR" stands for the rate at which the characters in the test images are extracted correctly. A character is considered successfully extracted if, first, it is isolated from external objects and, second, the grouped pixels can be recognized by human eyes. The column "Recognition 1 TPR" represents the rate at which the extracted true characters are recognized correctly under the condition that the character orientation is unknown. The column "Recognition 2 TPR" is the recognition rate under the condition that the character orientation is known, so that the fundamental pair of a test character is fixed. This is reasonable for most cases, since license plates are usually oriented horizontally; under such conditions, the characters remain in a vertical orientation if the camera capture angle is upright.

TABLE I. SIMULATION RESULTS OF THE FOUR IMAGE SETS

        Extraction   Recognition 1 (unknown orientation)   Recognition 2 (known orientation)
        TPR          TPR       FPR                         TPR       FPR
Set A   93.3         88.3      8.3                         93.3      3.3
Set B   83.0         85.4      8.1                         88.3      5.0
Set C   91.6         67.3      13.3                        75.8      10.6
Set D   89.7         78.4      8.6                         89.3      3.3

A. Stability to Noise
The image set B is generated in two steps: first, copy set A to a new image set; second, add 4% salt-and-pepper noise to the new image set to form set B. Note that 4% salt-and-pepper noise means one salt or pepper pixel appearing in every 25 pixels.

It can be seen from Table I that the character extraction rate is degraded when noise is added. A further experiment shows that enlarging the size of the characters is very useful for improving the extraction rate under the effect of noise. This is reasonable, since the noise energy is lowered if the range of accumulation is enlarged. Similarly, the same method is able to improve the TPR of recognition, since it accumulates the gradient energy before feature extraction.

B. Stability to Character Deformation
The image set C is generated by transforming pixels of set A via an affine transformation matrix M, expressed as

\begin{bmatrix} x_C \\ y_C \end{bmatrix} = M \cdot \begin{bmatrix} x_A \\ y_A \end{bmatrix}.   (30)

Table II gives two examples of different matrices M and the corresponding affine-transformed characters.

TABLE II. TWO OF THE AFFINE-TRANSFORMED CHARACTERS
M = [[1, 0], [-1/6, 1]],  [[1, 0], [-1/4, 1]],  [[1, 0], [1/6, 1]],  [[1, 0], [1/4, 1]]  (the transformed character images are not reproduced here)

In Table I, the extraction TPR for set C is very close to the original rate for set A. This is because the extraction method based on scale-space differences is independent of character deformation. However, the character deformation has a serious impact on the recognition TPR, because not only the angle differences of the axes but also the peaks in the AGPV are changed seriously by the deformation.

A method to increase the recognition TPR for deformed input characters is to expand the database by including the deformed characters as standard characters. In other words, false recognition can be reduced by considering seriously deformed characters as new characters and then recognizing them based on the new standard AGPVs. This method helps resolve the problem of character deformation but takes more time in recognition, as the AGPVs in the standard database grow compared with the original.

C. Stability to Illumination Change
It is found that the success rate is robust, with almost no change in the extraction stage, when the illumination is changed by constant factors, i.e.,

I'(x,y) = k \cdot I(x,y).   (31)
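The construction of the degraded test sets described above might be reproduced as in the following sketch, under the stated parameters: 4% salt-and-pepper noise for set B, the Table II shear matrices for set C, and the constant factor k of (31). The library choices, the value k = 0.7, and the coordinate-ordering convention are assumptions; the uneven-illumination sources of set D are given in (32) below and are not reproduced here.

```python
# Sketch of how the degraded test sets B, C and the constant-illumination case
# of (31) might be generated; parameter values follow the text and Table II.
import numpy as np
from scipy.ndimage import affine_transform

def add_salt_pepper(img, rate=0.04, rng=np.random.default_rng(0)):
    """Set B: roughly one salt or pepper pixel in every 25 pixels."""
    out = img.copy()
    noise = rng.random(img.shape)
    out[noise < rate / 2] = 0                 # pepper
    out[noise > 1 - rate / 2] = 255           # salt
    return out

def shear(img, m21=1/6):
    """Set C: affine map [[1, 0], [m21, 1]] of eq. (30) / Table II.
    (x, y) versus (row, col) ordering is glossed over in this sketch."""
    M = np.array([[1.0, 0.0], [m21, 1.0]])
    # affine_transform expects the inverse (output -> input) mapping matrix.
    return affine_transform(img, np.linalg.inv(M))

def scale_illumination(img, k=0.7):
    """Eq. (31): constant illumination change I'(x, y) = k * I(x, y)."""
    return np.clip(img * k, 0, 255).astype(img.dtype)
```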

Consider to uneven illumination, four directional light plate characters, a lot of isolated characters are wide spreading
sources, L1 to L4, are added into the test images and form Set D around our daily lives. For example, there are many isolated
to imitate the response of illumination change. Expressed as characters exist in elevators, clocks, telephones …, etc.
Traditional coarse-to-fine methods to recognize license plate
characters are not suitable for these cases because each of them
L1 ( x , y ) = ( x + y + 10 ) / (L + W + 10) have different background and irregular characters placement.
L2 ( x , y ) = ((W − x ) + y + 10 ) / (L + W + 10 ) The proposed AGPV method shall be useful to recognize these
L3 (x , y ) = (x + (L − y ) + 10 ) / (L + W + 10 ) isolated characters if it can be adapted to different font types.
L4 ( x , y ) = ((W − x ) + (L − y ) + 10 ) / (L + W + 10 ) , (32) REFERENCES

where the W and L are respectively the width and length of the [1] Takashi Naito, Toshihiko Tsukada, Keiichi Yamada, Kazuhiro Kozuka,
test image. We can see from Table I that the uneven and Shin Yamamoto, “Robust License-Plate Recognition Method for
illumination change makes little difference to character Passing Vehicles under Outside Environment,” IEEE TRANSACTIONS
extraction. A detail analysis indicates that insufficient color ON VEHICULAR TECHNOLOGY, VOL. 49, NO. 6, NOVEMBER
(intensity) depth makes some edges disappeared under 2000
illumination change and forms the major reason for the drop of [2] S. Kim, D. Kim, Y. Ryu, and G. Kim, “A Robust License Plate
Extraction Method Under Complex Image Conditions,” in Proc. 16th
extraction TPR. Similarly, the same reason also degrades the International Conference on Pattern Recognition (ICPR’02), Quebec
TPR in recognition stage. City, Canada, vol. 3, pp. 216-219, Aug. 2002.
A good approach to minimize the sensitivity to illumination [3] Wang, S.-Z.; Lee, H.-J., “A Cascade Framework for a Real-Time
Statistical Plate Recognition System,” Transactions on Information
change is to increase the color (intensity) depth of the input Forensics and Security, IEEE, vol. 2, no.2, pp. 267 - 282, DOI:
image. 12-bit or 16-bit gray-level test images will have better 10.1109/TIFS.2007.897251, June 2007
recognition rates than 8-bit ones. [4] Zunino, R.; Rovetta, S., “Vector quantization for license-plate location
and image coding.” Transactions on Industrial Electronics, IEEE, vol. 47,
no.1, pp159 - 167, Feb 2000, DOI: 10.1109/41.824138
V. CONCLUSION

A method to extract and recognize license plate characters has been presented. It comprises two stages: first, extract the isolated characters in a license plate; second, recognize them by the novel AGPVs.

The extraction stage incrementally convolves the input image with Gaussian functions of different scales and minimizes the computations in high-scale images by means of sub-sampling. The isolated characters are found by connected-components analysis on the Difference-of-Gaussian image and filtered by their expected sizes. The recognition stage adopts the AGPV as the feature vector. The AGPVs calculated from Gaussian-filtered images are independent of rotation and scaling, and robust to noise and illumination change.
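A minimal OpenCV sketch of the extraction stage as summarized above is given below; the Gaussian scales, binarization step, and size thresholds are illustrative placeholders, not the parameters used by the authors.

```python
import cv2

def extract_candidate_characters(gray, sigma1=1.0, sigma2=2.0,
                                 min_size=(8, 16), max_size=(60, 120)):
    """Find isolated character candidates via a Difference-of-Gaussian image
    followed by connected-components analysis and size filtering."""
    g1 = cv2.GaussianBlur(gray, (0, 0), sigma1)
    g2 = cv2.GaussianBlur(gray, (0, 0), sigma2)
    dog = cv2.subtract(g1, g2)                      # Difference of Gaussians
    _, binary = cv2.threshold(dog, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):                           # label 0 is the background
        x, y, w, h, _ = stats[i]
        if min_size[0] <= w <= max_size[0] and min_size[1] <= h <= max_size[1]:
            boxes.append((x, y, w, h))
    return boxes
```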
The proposed method has two distinctions from traditional methods:

1. Unlike traditional methods, which detect the whole license plate in the first step, the method proposed here extracts the characters alone and does not need to detect the whole plate in advance.

2. The recognition method is suitable for single characters. Unlike traditional methods, which require the information of the baseline before recognition, the proposed method requires no prior information about character sizes and orientations.

A direction for future research is to extend the method to recognize different font types. Not only on license plates, isolated characters exist all around our daily lives; for example, in elevators, clocks, and telephones. Traditional coarse-to-fine methods for recognizing license plate characters are not suitable for these cases because each of them has a different background and irregular character placement. The proposed AGPV method should be useful for recognizing these isolated characters if it can be adapted to different font types.


Personal Information Databases

Sabah S. Al-Fedaghi
Computer Engineering Department, Kuwait University, Kuwait
sabah@eng.kuniv.edu.kw

Bernhard Thalheim
Computer Science Institute, Kiel University, Kiel, Germany
thalheim@is.informatik.uni-kiel.de

Abstract—One of the most important aspects of security organization is to establish a framework to identify security-significant points where policies and procedures are declared. The (information) security infrastructure comprises entities, processes, and technology. All are participants in handling information, which is the item that needs to be protected. Privacy and security information technology is a critical and unmet need in the management of personal information. This paper proposes concepts and technologies for the management of personal information. Two different types of information can be distinguished: personal information and non-personal information. Personal information can be either personal identifiable information (PII) or non-identifiable information (NII). Security, policy, and technical requirements can be based on this distinction. At the conceptual level, PII is defined and formalized by propositions over infons (discrete pieces of information) that specify transformations in PII and NII. PII is categorized into simple infons that reflect the proprietor's aspects, relationships with objects, and relationships with other proprietors. The proprietor is the identified person about whom the information is communicated. The paper proposes a database organization that focuses on the PII spheres of proprietors. At the design level, the paper describes databases of personal identifiable information built exclusively for this type of information, with their own conceptual scheme, system management, and physical structure.

Keywords: database; personal information; personal identifiable information

I. INTRODUCTION

Rapid advances in information technology and the emergence of privacy-invasive technologies have made information privacy a critical issue. According to Bennett and Raab [11], technically, the concept of information privacy is treated as information security. "Information privacy is the interest an individual has in controlling, or at least significantly influencing, the handling of data about themselves" [10]; however, the information privacy domain goes beyond security concerns.

Information security aims to ensure the security of all information, regardless of whether it is privacy related or non-privacy related. Here we use the term information in its ordinary sense of "facts" stored in a database. This paper explores the privacy-related differences between types of information to argue that security, policy, and technical requirements set personal identifiable information apart from other types of information, leading to the need for a PII database with its own conceptual scheme, system management, and physical structure.

The different types of information of interest in this paper are shown in Fig. 1. We will use the term infon to refer to "a piece of information" [9]. The parameters of an infon are objects, and so-called anchors assign these objects, such as agents, to parameters. Infons can have sub-infons that are also infons.

Let INF = the set of infons in the system. Four types of infons are identified:

1. So-called "private or personal" information is a subset of INF. "Private or personal" information is partitioned into two types of information: PII and PNI.

2. PII is the set of pieces of personal identifiable information. We use the term pinfon to refer to this special type of infon. The relationship between PII and the notion of identifiability will be discussed later.

3. PNI is the set of pieces of non-identifiable information.

[Figure 1. Types of information: nested sets showing Information (INF) containing private/personal information, which in turn contains personal non-identifiable information (PNI) and personal identifiable information (PII).]

4. NII = (INF – PII). We use the term ninfon to refer to this special type of infon. NII is the set of pieces of non-identifiable information and includes all pieces of information except personal identifiable information (shaded area in Fig. 1). PNI in Fig. 1 is a subset of NII; it is the set of non-identifiable information, but it is called "personal" or "private" because its owner (a natural person) has an interest in keeping it private. In contrast, PII embeds a unique identity of a natural person.

From the security point of view, PII is more sensitive than an "equal amount" (to be discussed later) of NII. With regard to policy, PII has more policy-oriented significance (e.g., the 1996 EU directive) than NII. With regard to technology, there are unique PII-related technologies (e.g., P3P) and techniques (e.g., k-anonymization) that revolve around PII. Additionally, PII possesses an objective definition that allows separating it from other types of information, which facilitates organizing it in a manner not available to NII information.

The distinction of infons into PII, NII, and PNI requires a supporting technology. We thus need a framework that allows us to handle, implement, and manage PII, NII, and PNI. Management of PII, NII, and PNI ought, ideally, to be optimal in the sense that derivable infons are not stored. This paper introduces a formalism to specify privacy-related infons based on a theoretical foundation. Current privacy research lacks such formalism. The new formalism can benefit two areas. First, a precise definition of the informational privacy notion is introduced. It can also be used as a base to develop a formal and informal specification language. An informal specification language can be used as a vehicle to specify various privacy constraints and rules. Further work can develop a full formal language to be used in privacy enhancing systems.

In this paper, we concentrate on the conceptual organization of PII databases based on a theory of infons. To achieve such a task, we need to identify which subset of infons will be considered personal identifiable information. Since the combination of personal identifiable information is also personally identifiable, we must find a way to minimize the information to be stored. We introduce an algebra that supports such minimization. Infons may belong to different users in a system. We distinguish between proprietors (persons to whom PII refers through embedded identities) and owners (entities that possess PII of others, such as agencies or other non-proprietor persons).

II. RELATED WORKS

Current database management systems (DBMS) do not distinguish between PII and NII. An enterprise typically has one or several databases. Some data is "private," other data is public, and it is typical that these data are combined in queries. "Private" typically means exclusive ownership of and rights (e.g., access) to the involved data, but there is a difference between "private" data and personal identifiable data. "Private" data may include NII exclusively controlled by its owner; in contrast, PII databases contain only personal identifiable information and related data, as will be described later. For example, in the Oracle database, the Virtual Private Database (VPD) is the aggregation of fine-grained access control in a secure application context. It provides a mechanism for building applications that enforce the security policies customers want enforced, but only where such control is necessary. By dynamically appending a predicate to SQL statements, VPD limits access to data at the table's row level and ties the security policy to the table (or view) itself. "Private" in VPD means data owned and controlled by its owner. Such a mechanism supports the "privacy" of any owned data, not necessarily personal identifiable information. In contrast, we propose to develop a general PII database management system where PII and NII are explicitly separated in planning, design, and implementation.

Separating "private" data from "public" data has already been adopted in privacy preserving systems; however, these systems do not distinguish explicitly personal identifiable information. The Platform for Privacy Preferences (P3P) is one such system that provides a means for privacy policy specification and exchange but "does not provide any mechanism to ensure that these promises are consistent with the internal data processing" [7]. It is our judgment that "internal data processing" requires recognizing explicitly that "private data" is of two types, personal identifiable information and personal non-identifiable information, and this difficulty is caused by the heterogeneity of data. Hippocratic databases have been introduced as systems that integrate privacy protection into relational database systems [1][4]. A Hippocratic database includes privacy policies and authorizations associated with each attribute and each user for usage purpose(s). Access is granted if the access purpose (stated by the user) is entailed by the allowed purposes and not entailed by the prohibited purposes [7]. Users' role hierarchies, similar to ones used in security policies (e.g., RBAC), are used to simplify management of the mapping between users and purposes. A request to access data is accompanied by an access purpose, and access permission is determined after comparing such purpose with the intended purposes of that data in privacy policies. Each user has authorization for a set of access purposes. Nevertheless, in principle, a Hippocratic database is a general DBMS with a purpose mechanism. Purposes can be declared for any data item that is not necessarily personal identifiable information.
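To make the purpose mechanism concrete, the following small Python sketch shows the kind of check described above: access is granted only if the stated access purpose is entailed by an allowed purpose and not by a prohibited one. The purpose names, the hierarchy, and the policy values are invented for illustration and are not taken from the Hippocratic database literature.

```python
# Toy purpose-based access check in the spirit of a Hippocratic database.
# Purpose names, hierarchy, and policies below are illustrative only.
PURPOSE_HIERARCHY = {          # child -> parent ("admin" generalizes the others)
    "billing": "admin",
    "shipping": "admin",
}

def entails(purpose, target):
    """A purpose is entailed by a target if it equals the target or descends from it."""
    while purpose is not None:
        if purpose == target:
            return True
        purpose = PURPOSE_HIERARCHY.get(purpose)
    return False

def access_allowed(access_purpose, allowed, prohibited):
    """Grant access iff the stated purpose is entailed by an allowed purpose
    and not entailed by any prohibited purpose."""
    ok = any(entails(access_purpose, p) for p in allowed)
    banned = any(entails(access_purpose, p) for p in prohibited)
    return ok and not banned

# Example: a 'billing' request against an attribute whose policy allows 'admin'
# but prohibits 'marketing'.
print(access_allowed("billing", allowed={"admin"}, prohibited={"marketing"}))  # True
```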
III. INFONS

This section reviews the theory of infons. The theory of infons provides a rich algebra of construction operations that can be applied to PII. Infons in an application domain such as personal identifiable information are typically interrelated; they partially depend on each other, partially exclude each other, and may be (hierarchically) ordered. Thus we need a theory that allows constructing a "lattice" of infons (and PII infons) that includes basic and complex infons while taking into consideration their structures and relationships. In such a theory, we identify basic infons that cannot be decomposed into more basic infons. This construction mechanism of infons from infons should be supported by an algebra of construction operations. We generally may assume that each infon consists of a number of components. The construction is applied in performing combination, replacement, or removal of some of these components; some may be essential (not removable) or auxiliary (optional).

An infon is a discrete item of information and may be parametric and anchored. The parameters represent objects or properties of objects. Anchors assign objects to parameters. Parameter-value pairs are used to represent a property of an infon. The property may be valid, invalid, undetermined, etc. The validity of properties is typically important information. Infons are thus representable by a tuple structure

<<ID, {(param, value, validity)}>>

or by an anchored tuple structure

<<ID, {((param, value, validity), anchor(object))}>>.

We may order properties and anchors. A linear order allows representing an infon as a simple predicate value. Following Devlin's formalism [9], an infon has the form <<R, a1, ..., an, 1>> or <<R, a1, ..., an, 0>>, where R is an n-place relation and a1, ..., an are objects appropriate for R; 1 and 0 indicate that these objects do, or do not, respectively, stand in the relation R. For simplicity's sake, we may write an infon <<R, a1, ..., an, 1/0>> as <<a1, ..., an>> when R is known or immaterial.
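As a concrete and purely illustrative rendering of this formalism, the Python sketch below models an infon as a relation name, a tuple of arguments, and a polarity bit, with person-typed arguments playing the role of anchored proprietors. The class and field names are our own and are not part of the paper's notation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Person:
    """An anchored object of type 'singly identifiable person' (a proprietor)."""
    identifier: str            # e.g., a civil ID or another unique identifier

@dataclass(frozen=True)
class Infon:
    """A Devlin-style infon <<R, a1, ..., an, polarity>>."""
    relation: str              # the n-place relation R
    args: tuple                # the objects a1, ..., an (Person or plain values)
    polarity: int = 1          # 1: the objects stand in R; 0: they do not

    def proprietors(self):
        """The set of uniquely identified persons anchored in this infon."""
        return {a for a in self.args if isinstance(a, Person)}

# <<loves, John, Mary, 1>> -- a compound pinfon with two proprietors
john, mary = Person("ID-001"), Person("ID-002")
loves = Infon("loves", (john, mary), 1)
print(len(loves.proprietors()))   # 2
```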
We may use multisets instead of sets for infons, or a more complex structure. We choose the set notation because of its representability within the XML technology. Sets allow us to introduce a simple algebra and a simple set of predicates.

"PII infons" are distinguished by the mandatory presence of at least one proprietor, an object of type uniquely identifiable person.

The world of infons currently of interest can be specified as the triple (A; O; P) as follows:

- Atomic infons A.

- Algebraic operations O for computing complex infons, such as combination ⊕ of infons, abstraction ⊗ of infons by projections, quotient ÷ of infons, renaming ρ of infons, union ∪ of infons, intersection ∩ of infons, full negation ¬ of infons, and minimal negation ┐ of infons within a given context.

- Predicates P stating associations among infons, such as the sub-infon relation, a statement whether infons can potentially be associated with each other, a statement whether infons cannot potentially be associated with each other, a statement whether infons are potentially compatible with each other, and a statement whether infons are incompatible with each other.

The combination of two infons results in an infon with all components of the two infons. The abstraction is used for a reduction of components of an infon. The quotient allows concentrating on those components that do not appear in the second infon. The union takes all components of two infons and does not combine common components into one component. The full negation allows generating all those components that do not appear in the infon. The minimal negation restricts this negation to some given context.

We require that the sub-infon relation is not transitively reflexive. The compatibility and incompatibility predicates are not contradictory. The potential association and its negation must not conflict. The predicates should not span all possible associations among the infons but only those that are meaningful in a given application area. We may assume that two infons are either potentially associated or cannot be associated with each other. The same restriction can be made for compatibility.

This infon world is very general and allows deriving more advanced operations and predicates. If we assume the completeness of the compatibility and association predicates, we may use expressions defined by the operations and derived predicates. The extraction of application-relevant infons from infons is supported by five operations (a small sketch follows this list):

1. Infon projection narrows the infon to those parts (objects or concepts, axioms or invariants relating entities, functions, events, and behaviors) of concern for the application-relevant infons. For example, a projection operation may produce the set of proprietors from a given infon, e.g., {Mary, John} from John loves Mary.

2. Infon instantiation lifts the general infons to those of interest within the solution and instantiates variables by values that are fixed for the given system. For example, a PII infon may be instantiated from its anonymized version, e.g., John is sick from Someone is sick.

3. Infon determination is used to select those traces or solutions to the problem under inspection that are the most suitable or the best fit for the system envisioned. The determination typically results in a small number of scenarios for the infons to be supported, for example, infon determination to decide whether an infon belongs to a certain piiSphere (the PII of a certain proprietor, to be discussed later).

4. Infon extension is used to add those facets not given by the infon but by the environment or the platforms that might be chosen, or that might be used for simplification or support of the infon (e.g., additional data, auxiliary functionality), for example, infon extension to related non-identifiable information (to be discussed later).

5. Infons are often associated, adjacent, interacting, or fit with each other. Infon join is used to combine infons into more complex and combined infons that describe a complex solution, for example, joining atomic PIIs to form compound PII (these types of PII will be defined later) and a collection of related PII information.
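The sketch below illustrates how three of these operations (projection to proprietors, instantiation of an anonymized infon, and join) might look on the hypothetical Infon and Person classes introduced in the earlier sketch. It is our own illustration, not the authors' algebra, and the join shown simply collects infons into a compound structure.

```python
# Illustrative operations on the hypothetical Infon/Person classes defined above.

def project_proprietors(infon):
    """Infon projection: narrow an infon to its set of proprietors."""
    return infon.proprietors()               # e.g., {John, Mary} from "John loves Mary"

def instantiate(anonymous_infon, bindings):
    """Infon instantiation: replace placeholder arguments (e.g., 'someone')
    by fixed values such as an identified person."""
    new_args = tuple(bindings.get(a, a) for a in anonymous_infon.args)
    return Infon(anonymous_infon.relation, new_args, anonymous_infon.polarity)

def join(*infons):
    """Infon join: combine infons into a more complex, compound structure."""
    return tuple(infons)                     # kept deliberately simple

someone_is_sick = Infon("is_sick", ("someone",), 1)
john_is_sick = instantiate(someone_is_sick, {"someone": Person("ID-001")})
print(project_proprietors(john_is_sick))     # {Person(identifier='ID-001')}
```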
The application of these operations allows extraction of which sub-infons, which functionality, which events, and which behavior (e.g., the action/verb in a PII) are shared among all information spheres (e.g., of proprietors). These shared "facilities" encompass all information spheres of relevant infons. They also hint at possible architectures of information and database systems and at separation into candidate components. For instance, entity sharing (say, of a non-person entity) describes which information flow and development can be observed in the information spheres.

We will not be strictly formal in applying infon theory to PII; such a venture needs far more space. Additionally, we squeeze the approach into the area of database design in order to illustrate a sample application. The theory of PII infons can be applied in several areas, including the technical and legal aspects of information privacy and security.

IV. PERSONAL IDENTIFIABLE INFORMATION

It is typically claimed that what makes data "private" or "personal" is either specific legislation, e.g., a company must not disclose information about its employees, or individual agreements, e.g., a customer has agreed to an electronic retailer's privacy policy. However, this line of thought blurs the difference between personal identifiable information and other "private" or "personal" information. Personal identifiable information has an "objective" definition in the sense that it is independent of such authorities as legislation or agreement.

PII infons involve a special relationship called proprietorship with their proprietors, but not with persons who are their non-proprietors, nor with non-persons such as institutions, agencies, or companies. For example, a person may possess PII of another person, or a company may have the PII of someone in its database; however, proprietorship of PII is reserved only to its proprietor, regardless of who possesses it.

To base personal identifiable information on firmer ground, we turn to stating some principles related to such information. For us, personal identifiable information (a pinfon) is any information that has referent(s) to uniquely identifiable persons [2]. In logic, this type of reference is the relation of a word (logical name) to a thing.

A pinfon is an infon such that at least one of the "objects" is a singly identifiable person. Any singly identifiable person in the pinfon is called a proprietor of that pinfon. The proprietor is the person about whom the pinfon communicates information. If there is exactly one object of this type, the pinfon is an atomic pinfon; if there is more than one singly identifiable person, it is a compound pinfon. An atomic pinfon is a discrete piece of information about a singly identifiable person. A compound pinfon is a discrete piece of information about several singly identifiable persons. If the infon does not include a singly identifiable person, it is called a ninfon.

We now introduce a series of axioms that establish the foundation of the theory of personal identifiable information. These axioms can be considered negotiable assumptions. The symbol "→" denotes implication. INF is the set of infons described in Fig. 1. (A small sketch of how these axioms can be operationalized follows the list.)

1. Inclusivity of INF
σ ∈ INF ↔ σ ∈ PII ∨ σ ∈ NII
That is, infons are the union of pinfons and ninfons. PII is the set of pinfons (pieces of personal identifiable information), and NII is the set of ninfons (pieces of non-identifiable information).

2. Exclusivity of PII and NII
σ ∈ INF ∧ σ ∉ PII → σ ∈ NII
σ ∈ INF ∧ σ ∉ NII → σ ∈ PII
That is, every infon is exclusively either a pinfon or a ninfon.

3. Identifiability
Let ID denote the set of (basic) pinfons of type << is, Þ, 1>>, and let Þ be a parameter for a singly identifiable person.
Then << is, Þ, 1>> → << is, Þ, 1>> ∈ INF.

4. Inclusivity of PII
Let nσ denote the number of uniquely identified persons in the infon σ. Then σ ∈ INF ∧ nσ > 0 ↔ σ ∈ PII.

5. Proprietary
For σ ∈ PII, let PROP(σ) be the set of proprietors of σ. Let PERSONS denote the set of (natural) persons. Then,
σ ∈ PII → PROP(σ) ⊆ PERSONS
That is, pinfons are pieces of information about persons.

6. Inclusivity of NII
σ ∈ INF ∧ (nσ = 0) ↔ σ ∈ NII
That is, non-identifiable information (a ninfon) does not embed any unique identifiers of persons.

7. Combination of non-identifiability with identity
Let ID denote the set of (basic) pinfons of type << is, Þ, 1>>. Then,
σ1 ∈ PII ↔ σ1 = (σ2 ∈ NII) ⊕ (σ3 ∈ ID)
assuming σ1 ∉ ID. "⊕" here denotes the "merging" of two sub-infons.

8. Closability of PII
σ1 ∈ PII ⊕ σ2 ∈ PII → (σ1 ⊕ σ2) ∈ PII

9. Combination with non-identifiability
σ1 ∈ NII ⊕ σ2 ∈ PII → (σ1 ⊕ σ2) ∈ PII
That is, non-identifying information plus personal identifiable information is personal identifiable information.

10. Reducibility to non-identifiability
σ1 ∈ PII ÷ (σ2 ∈ ID) ↔ σ3 ∈ NII
where σ2 is a sub-infon of σ1. "÷" denotes removing σ2.

11. Atomicity
Let APII = the set of atomic personal identifiable information. Then, σ ∈ PII ∧ (nσ = 1) ↔ σ ∈ APII.

12. Non-atomicity
Let CPII = the set of compound personal identifiable information. Then, σ ∈ PII ∧ (nσ > 1) ↔ σ ∈ CPII.

13. Reducibility to atomicity
σ ∈ CPII ↔ <<σ1, σ2, …, σm>>, σi ∈ APII, m = nσ, 1 ≤ i ≤ m, and {PROP(σ1), PROP(σ2), …, PROP(σm)} = PROP(σ).

These axioms support the separation of infons into PII, NII, and PNI and their transformation. Let us now discuss the impact of some of these axioms. We concentrate the discussion on the more difficult axioms.
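The following sketch shows how the classification axioms (4, 6, 11, 12) and the reduction axioms (10, 13) could be operationalized on the hypothetical Infon and Person classes of the earlier sketches (including the example infon "loves" defined there): an infon is classified by counting the distinct persons it anchors, anonymization removes them, and a compound pinfon is reduced to one atomic pinfon per proprietor. This is our illustration of the axioms, not code from the paper.

```python
def classify(infon):
    """Classify an infon per axioms 4, 6, 11 and 12 by its number of proprietors."""
    n = len(infon.proprietors())
    if n == 0:
        return "NII"                          # axiom 6: no person identifier embedded
    return "APII" if n == 1 else "CPII"       # axioms 11 and 12

def anonymize(infon, placeholder="someone"):
    """Axiom 10: removing all identifiers yields non-identifiable information."""
    args = tuple(placeholder if isinstance(a, Person) else a for a in infon.args)
    return Infon(infon.relation, args, infon.polarity)

def reduce_to_atomic(infon):
    """Axiom 13: reduce a compound pinfon to a set of atomic pinfons,
    one per proprietor, with the other referents anonymized."""
    atoms = []
    for keep in infon.proprietors():
        args = tuple(a if a == keep or not isinstance(a, Person) else "someone"
                     for a in infon.args)
        atoms.append(Infon(infon.relation, args, infon.polarity))
    return atoms

print(classify(loves))                        # 'CPII'
print(classify(anonymize(loves)))             # 'NII'
print(len(reduce_to_atomic(loves)))           # 2
```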
Identifiability

Let þ be a parameter for a singly identifiable person, i.e., a specific person, defined as

Þ = IND1 | << singly identifiable, IND1, 1>>
where IND indicates the basic type: an individual [9]. That is, Þ is a (restricted) parameter with an anchor for an object of type singly identifiable individual. The individual IND1 is of type person, defined as
<< person, IND1, 1>>
Put simply, Þ is a reference to a singly identifiable person. We now elaborate on the meaning of "identifiable."

Consider the set of unique identifiers of persons. Ontologically, the Aristotelian entity/object is a single, specific existence (a particularity) in the world. For us, the identity of an entity is its natural descriptors (e.g., tall, black eyes, male, blood type A, etc.). These descriptors exist in the entity/object. Tallness, whiteness, location, etc. exist as aspects of the existence of the entity. We recognize the human entity from its natural descriptors. Some descriptors form identifiers. A natural identifier is a set of natural descriptors that facilitates recognizing a person uniquely. Examples of identifiers include fingerprints, faces, and DNA. No two persons have identical natural identifiers. An artificial descriptor is a descriptor that is mapped to a natural identifier. Attaching the number 123456 to a particular person is an example of an artificial descriptor in the sense that it is not recognizable in the (natural) person. An artificial identifier is a set of descriptors mapped to a natural identifier of a person. Date of birth (an artificial descriptor), gender (a natural descriptor), and a 5-digit ZIP code (an artificial descriptor) are three descriptors that form an artificial identifier for 87% of the US population [12]. By implication, no two persons have identical artificial identifiers. If two persons somehow have the same Social Security number, then this Social Security number is not an artificial identifier because it is not mapped uniquely to a natural identifier.

We define identifiers of proprietors as infons. Such a definition is reasonable since the mere act of identifying a proprietor is a reference to a unique entity in the information sphere. Hence,
<< is, Þ, 1>> → << is, Þ, 1>> ∈ INF
That is, every unique identifier of a person is an infon. These infons cannot be decomposed into more basic infons.

Inclusivity of PII

Next we position identifiers as the basic infons in the sphere of PII. The symbol nσ denotes the number of uniquely identified persons in infon σ. Then we can define PII and NII accordingly:
σ ∈ INF ∧ nσ > 0 ↔ σ ∈ PII
That is, an infon that includes unique identifiers of (natural) persons is personal identifiable information. From (3) and (4), any unique personal identifier or piece of information that embeds identifiers is personal identifiable information. Thus, identifiers are the basic PII infons (pinfons) that cannot be decomposed into more basic infons. Furthermore, every complex pinfon includes in its structure at least one basic infon, i.e., an identifier. The structure of a complex pinfon is constructed from several components:

- Basic pinfons and ninfons: e.g., the pinfon John S. Smith and the ninfon Someone is sick form the atomic PII (i.e., PII with one proprietor) John S. Smith is sick. This pinfon is produced by an instantiation operation that lifts the general infons to pinfons and instantiates the variable (Someone) by a value (John S. Smith).

- Complex pinfons form more complex infons, e.g., John S. Smith and Mary F. Fox are sick.

We notice that the operation of projection is not PII-closed, since we can define projecting of a ninfon from a pinfon (removing all identifiers). This operation is typically called anonymization.

Every pinfon refers to its proprietor(s) in the sense that it "leads" to him/her/them as distinguishable entities in the world. This reference is based on his/her/their unique identifier(s). As stated previously, the relationship between persons and their own pinfon is called proprietorship [1]. A pinfon is proprietary PII of its proprietor(s).

Defining a pinfon as "information identifiable to the individual" does not mean that the information is "especially sensitive, private, or embarrassing. Rather, it describes a relationship between the information and a person, namely that the information—whether sensitive or trivial—is somehow identifiable to an individual" [10]. However, personal identifiable information (a pinfon) is more "valuable" than personal non-identifiable information (a ninfon) because it has an intrinsic value as "a human matter," just as privacy is a human trait. Does this mean that scientific information about how to make a nuclear bomb has less intrinsic moral value than the pinfon John is left handed? No, it means John is left handed has a higher moral value than the ninfon There exists someone who is left handed. It is important to compare equal amounts of information when we decide the status of each type of information [5].

Proprietary

To exclude such notions as confidentiality being applicable to the informational privacy of non-natural persons (e.g., companies), the next axiom formalizes that a pinfon applies only to (natural) persons. For σ ∈ PII, we define PROP(σ) to be the set of proprietors of σ. Notice that |PROP(σ ∈ PII)| = nσ. Multiple occurrences of identifiers of the same proprietor are counted as a single reference to the proprietor. In our ontology, we categorize things (in the world) as objects (denoted by the set OBJECTS) and non-objects. Objects are divided into (natural) persons (denoted by the set PERSONS) and non-persons. A fundamental proposition in our system is that proprietors are (natural) persons.

Combination of non-identifiability with identity

Next we can specify several transformation rules that convert from one type of information to another. These (privacy) rules are important for deciding what type of information applies to what operations (e.g., information disclosure rules).

Let ID denote the set of (basic) pinfons of type << is, Þ, 1>>. That is, ID is the set of identifiers of persons (in the

world). We now define the construction of complex infons from basic pinfons and non-identifying information. The definition also applies to projecting pinfons from more complex pinfons by removing all or some non-identifying information:
σ1 ∈ PII ↔ σ1 = (σ2 ∈ NII) ⊕ (σ3 ∈ ID)
assuming σ1 ∉ ID.
That is, non-identifiable information plus a unique personal identifier is personal identifiable information, and vice versa (i.e., minus). Thus the set of pinfons is closed under operations that remove or add non-identifying information. We assume the empty information ∅ is in NII. "⊕" here denotes "merging" two sub-infons. We also assume that only a single σ3 ∈ ID is added to σ2 ∈ NII; however, the axiom can be generalized to apply to multiple identifiers. An example of axiom 7 is
σ1 = <<John loves apples>> ↔ <<σ2 = Someone loves apples ⊕ σ3 = John>>
Or, in a simpler description: σ1 = John loves apples ↔ {σ2 = Someone loves apples ⊕ σ3 = John}.
The axiom can also be applied to the union ∪ of pinfons.

Closability of PII

PII is a closed set under different operations (e.g., merge, concatenate, submerge, etc.) that construct complex pinfons from more basic pinfons. Hence,
σ1 ∈ PII ⊕ σ2 ∈ PII → (σ1 ⊕ σ2) ∈ PII
That is, merging personal identifiable information with personal identifiable information produces personal identifiable information. In addition, PII is a closed set under different operations (e.g., merge, concatenate, submerge, etc.) that construct complex pinfons by mixing pinfons with non-identifying information.

Reducibility to non-identifiability

Identifiers are the basic pinfons. Removing all identifiers from a pinfon converts it to non-identifying information. Adding identifiers to any piece of non-identifying information converts it to a pinfon:
σ1 ∈ PII ÷ (σ2 ∈ ID) ↔ σ3 ∈ NII
where σ2 is a sub-infon of σ1. Axiom 10 states that personal identifiable information minus a unique personal identifier is non-identifying information, and vice versa. "÷" here denotes removing σ2. We assume that a single σ2 ∈ ID is embedded in σ1; however, the proposition can be generalized to apply to multiple identifiers, such that removing all identifiers produces σ3 ∈ NII.

Atomicity

Furthermore, we define atomic and non-atomic (compound) types of pinfons. Let APII = the set of atomic personal identifiable information. Each piece of atomic personal identifiable information is a special type of pinfon called an apinfon. As we will see later, cpinfons can be reduced to apinfons, thus simplifying the analysis of PII. Formally, the set APII is defined as follows:
σ ∈ PII ∧ nσ = 1 ↔ σ ∈ APII
That is, an apinfon is a pinfon with a single human referent. Notice that σ may embed several identifiers of the same person, yet the referent is still one. Notice also that apinfons can be basic (a single identifier) or complex (a single identifier plus non-identifiable information).

Non-atomicity

Let CPII = the set of compound personal identifiable information. Each piece of compound personal identifiable information is a special type of pinfon called a cpinfon. Formally, the set CPII is defined as follows:
σ ∈ PII ∧ nσ > 1 ↔ σ ∈ CPII
That is, a cpinfon is a pinfon with more than one human referent. Notice that cpinfons are always complex since they must have at least two apinfons (two identifiers).

The apinfon (atomic personal identifiable information) is the "unit" of personal identifiable information. It includes one identifier and non-identifiable information. We assume that at least some of the non-identifiable information is about the proprietor. In theory this is not necessary. Suppose that an identifier is amended to a random piece of non-identifiable information (noise). In the PII theory the result is (complex) atomic PII. In general, mixing noise with information preserves information.

Reducibility to atomicity

Any cpinfon is privacy-reducible to a set of apinfons (atomic personal identifiable information). For example, John and Mary are in love can be privacy-reduced to the apinfons John and someone are in love and Someone and Mary are in love. Notice that our PII theory is a syntax (structural) based theory. It is obvious that the privacy-reducibility of compound personal identifiable information causes a loss of "semantic equivalence," since the identities of the referents in the original information are separated. Semantic equivalency here means preserving the totality of information, the pieces of atomic information, and their link.

Privacy reducibility is expressed by the following axiom:
σ ∈ CPII ↔ <<σ1, σ2, …, σm>>, σi ∈ APII, m = nσ, (1 ≤ i ≤ m), and {PROP(σ1), PROP(σ2), …, PROP(σm)} = PROP(σ).
The reduction process produces m pieces of atomic personal identifiable information with m different proprietors. Notice that the set of resultant apinfons produces a compound pinfon. This preserves the totality of the original cpinfon through linking its apinfons together as members of the same set.

V. CATEGORIZATION OF ATOMIC PERSONAL IDENTIFIABLE INFORMATION

In this section, we identify categories of apinfons. Atomic personal identifiable information provides a foundation for structuring pinfons since compound personal identifiable information can be reduced to a set of apinfons. We concentrate on reducing all given personal identifiable

information to sets of apinfons. Justification for this will be discussed later.

A. Eliminating ninfons embedded in an apinfon

Organizing a database of personal identifiable information requires filtering and simplifying apinfons to more basic apinfons in order to make the structuring of pinfons easier. Axiom (9) tells us that pinfons may carry non-identifiable information, ninfons. This non-identifiable information may be random noise or information not directly about the proprietor. Removing random noise is certainly an advantage in designing a database. Identifying information that is not about the proprietor clarifies the boundary between PII and NII.

A first concern when analyzing an apinfon is projecting (isolating, factoring) information about any other entities besides the proprietor. Consider the apinfon John's car is fast. This is information about John and about a car of his. This apinfon can be projected as:
⊗ (John's car is fast) ⇒ {The car is fast, John has a car},
where ⇒ is a production operator.

The information John's car is fast embeds the "pure" apinfon John has a car and the ninfon The car is fast. John has a car is information about a relationship that John has with another object in the world. This last information is an example of what we call self information. Self information (sapinfon = self atomic pinfon) is information about a proprietor, his/her aspects (e.g., tall, short), or his/her relationship with non-human objects in the world; it is thus useful to further reduce apinfons (atomic) to sapinfons (self).

A sapinfon is related to the concept of "what the piece of apinfon is about." In the theory of aboutness, this question is answered by studying the text structure and the assumptions of the source about the receiver (e.g., the reader). We formalize aboutness in terms of the procedure ABOUT(σ), which produces the set of entities/objects that σ is "talking" about. In our case, we aim to reduce any self infon σ to σ´ such that ABOUT(σ) is PROP(σ´).

Self atomic information represents information about the following:
• Aspects of the proprietor (identification, character, acts, etc.)
• His or her association with non-person "things" (e.g., house, dog, organization, etc.)
• His or her relationships with other persons (e.g., Smith saw a blond woman).

With regard to non-objects, of special importance for privacy analysis are aspects of persons that are expressed by sapinfons. Aspects of a person include his/her (physical) parts, character, acts, condition, name, health, race, handwriting, blood type, manner, and intelligence. The existence of these aspects depends on the person, in contrast to (physical or social) objects associated with him/her such as his/her house, dog, spouse, job, professional associations, etc.

Let SAPII denote the set of sapinfons (self personal identifiable information).

14. Aboutness proposition
σ ∈ SAPII ↔ ABOUT(σ) = PROP(σ)
That is, atomic personal identifiable information σ is said to be self personal identifiable information (a sapinfon) if its subject is its proprietor. The term "subject" here means what the entity is about when the information is communicated. The mechanism (e.g., manual) that converts APII to SAPII has yet to be investigated.

B. Sapinfons involving aspects of the proprietor or a relationship with a non-person

We further simplify sapinfons. Let OPJ(σ ∈ SAPII) be the set of objects in σ. SAPII is of two types, depending on the number of objects embedded in it: singleton (ssapinfon) and multitude (msapinfon). The set of ssapinfons, SSAPII, is defined as:

15. Singleton proposition
σ ∈ SSAPII → σ ∈ SAPII ∧ (PROP(σ) = OPJ(σ))
That is, the proprietor of σ is its only object.

The set of msapinfons, MSAPII, is defined as follows.

16. Multitude proposition
σ ∈ MSAPII → σ ∈ SAPII ∧ (|OPJ(σ)| > 1)
That is, σ embeds other objects beside its proprietor.

We also assume a logical simplification that eliminates conjunctions and disjunctions of SAPII [5].

Now we can declare that the sphere of personal identifiable information (piiSphere) for a given proprietor is the database that contains:
1. All ssapinfons and msapinfons of the proprietor, including their arrangement in super-infons (e.g., to preserve compound personal identifiable information).
2. Non-identifiable information related to the piiSphere of the proprietor, as discussed next.

C. What is related non-identifiable information?

Consider the msapinfon Alice visited clinic Y. It is an msapinfon because it represents a relationship (not an aspect of) the proprietor Alice had with an object, the clinic. Information about the clinic may or may not be privacy-related information. For example, the year of opening, the number of beds, and other information about the clinic are not privacy-related information; thus, such information ought not be included in Alice's piiSphere. However, when the information is that the clinic is an abortion clinic, then Alice's piiSphere ought to include this non-identifiable information about the clinic.

As another example in terms of database tables, consider the following three tables representing the database of a company:
Customer (Civil ID, Name, Address, Product ID)

Product (ID, Price, Factory)
Factory (Product ID, Product location, Inventory)

The customer's piiSphere includes:
- Ssapinfons (aspects of the customer): Civil ID, Name
- Msapinfons (relationships with non-person objects): Address, Product ID
However, information about Factory is not information related to the customer's piiSphere.

Now suppose that we have the following database:
Person (Name, Address, Place of work)
Place of work (Name, Owner)
If it is known that the owner of the place of work is the Mafia, then the information related to the person's piiSphere extends beyond the name of the place of work.

The decision about the boundary between a certain piiSphere and its related non-identifiable information is difficult to formalize. Fig. 2 shows a conceptualization of the piiSpheres of two proprietors that have compound PII. Dark circles A–G represent possible non-person objects. For example, object A participates in an msapinfon (e.g., Factory, Address, and Place of work in the previous examples). Object A has its own aspects (white circle around A) and relationships (e.g., with F) where some information may be privacy-significant to the piiSphere of the proprietor. Even the relationship between the two proprietors may have its own sphere of information (white circle around E). E signifies a connection among a set of apinfons (atomic PII), since we assume that all compound PII have been reduced to atomic PII. For example, the infon {Alice is the mother of a child in the orphanage, John is the child of a woman who gave him up} is a cpinfon with two apinfons. If Alice and John are the proprietors, then E in the figure preserves the connection between these two apinfons in the two piiSpheres of Alice and John.

[Figure 2. Conceptualization of the piiSpheres of two proprietors: each proprietor's piiSphere contains aspects (ssapinfons), relationships with non-person objects (msapinfons), and cpinfon links; related data (objects A–G) and the connection E lie between and around the two spheres.]

VI. JUSTIFICATIONS FOR PII DATABASES

We concentrate on what we call a PII database, PIIDB, that contains personal identifiable information and information related to it.

A. Security requirement

We can distinguish two types of information security: (1) personal identifiable information security, and (2) non-identifiable information security. While the security requirements of NII are concerned with the traditional system characteristics of confidentiality, integrity, and availability, PII security lends itself to unique techniques pertaining only to PII.

The process of protecting PII involves (1) protection of the identities of the proprietor and (2) protection of the non-identity portion of the PII.

Of course, all information security tools, such as encryption, can be applied in this context, yet other methods (e.g., anonymization) utilizing the unique structure of PII as a combination of identities and other information can also be used. Data-mining attacks on PII aim to determine the identity of the proprietor(s) from non-identifiable information; for example, determining the identity of a patient from anonymized information that gives age, sex, and zip code in health records (k-anonymization). Thus, PII lends itself to unique techniques that can be applied in the protection of this information.

Another important issue that motivates organizing PII separately is that any intrusion on PII involves information in addition to the owner's information (e.g., a company, proprietors, and other third parties, e.g., a privacy commissioner). For example, a PII security system may require immediately alerting the proprietor that an intrusion on his/her PII has occurred.

An additional point is that the sensitivity of PII is in general valued more highly than the sensitivity of other types of information. PII is more "valuable" than non-PII because of its privacy aspect, as discussed previously. Such considerations imply a special security status for PII. The source of this valuation is instigated by moral considerations [7].

B. Policy requirement

Some policies applied to PII are not applicable to NII (e.g., consent, opt-in/out, proprietor's identity management, trust, privacy mining). While NII security requirements are concerned with the traditional system characteristics of confidentiality, integrity, and availability, PII privacy requirements are also concerned with such issues as purpose, privacy compliance, transborder flow of data, third-party disclosure, etc. Separating PII from NII can reduce the complex policies required to safeguard sensitive information where multiple rules are applied, depending on who is accessing the data and what the function is.

In general, PIIDB goes beyond mere protection of data:
1. PIIDB identifies the proprietor's piiSphere and provides security, policy, and tools for the piiSphere.
2. PIIDB provides security, policy, and tools only to the proprietor's piiSphere, thus conserving privacy efforts.
3. PIIDB identifies inter-piiSphere relationships (proprietors' relationships with each other) and provides security, policy, and tools to protect the privacy of these relationships.

VII. PERSONAL IDENTIFIABLE INFORMATION DATABASE (PIIDB)

The central mechanism in PIIDB is an explicit declaration of proprietors in a table called PROPRIETORS that includes unique identifiers of all proprietors in the PIIDB. PROPRIETOR_TABLE contains a unique entry with an internal key (#proprietor) for each proprietor, in addition to other information such as pointer(s) to his/her piiSphere.

The principle of uniqueness of proprietors' identifiers requires that the internal key (#proprietor) is mapped one-to-one to the individual's legal identity or physical location. This is an important feature in PIIDB to guarantee consistency of information about persons. This "identity" uniquely identifies the piiSphere and distinguishes one piiSphere from another. Thus, if we have a PIIDB of three individuals, then we have three entries such that each leads (denoted as ⇒) to three piiSpheres:
PROPRIETOR_TABLE: {(#proprietor1, …) ⇒ piiSphere of proprietor 1, (#proprietor2, …) ⇒ piiSphere of proprietor 2, (#proprietor3, …) ⇒ piiSphere of proprietor 3}.
The "…" denotes the possibility of other information in the table. What is the content of each piiSphere? The answer is: set(s) of atomic PIIs and related information.

Usually, database design begins by identifying data items, including objects and attributes (Employee No., Name, Salary, Birth, Date of Employment, etc.). Relationships among data items are then specified (e.g., data dependencies). Semantically oriented graphs (e.g., ER graphs [13]) are sometimes used at this level. Finally, a set of tables is declared, such as the following:
T1 = Father (ID, Name, Details),
T2 = Mother (ID, Name, Details),
T3 = Child (ID, Name, Details),
T4 = Case (No., Father ID, Mother ID, Child ID).
T1, T2, and T3 represent atomic PIIs of fathers, mothers, and children, respectively. T4 embeds compound PIIs. In PIIDB, if R is a compound PII, then it is represented by the set of atomic PIIs:
{R′ = Case (No., Father ID), R′′ = Case (No., Mother ID), R′′′ = Case (No., Child ID)}
where R′ is in the piiSphere of the father, R′′ is in the piiSphere of the mother, and R′′′ is in the piiSphere of the child. Such a schema permits complete isolation of atomic PIIs from each other. This privacy requirement is essential in many personal identifiable databases. For example, in orphanages it is possible not to allow access to the information that a record exists in the database for a mother. In the example above, the access policy for the three piiSpheres is independent from each other. At the conceptual level, reconstructing the relations among proprietors (cpinfons) is a database design problem (e.g., internal pointers among tables across piiSpheres).

PIIDB obeys all propositions defined previously. Some of these propositions can be utilized as privacy rules. As an illustration of the application of these propositions, consider the case of a privacy constraint that prohibits disclosing σ ∈ PII. By proposition (9) above, mixing (e.g., amending, inserting, etc.) σ with any other piece of information makes the disclosure constraint apply to the combined piece of information. In this case a general policy is: Applying a

protection rule to σ1 ∈ PII implies applying the same protection to (σ1 ⊕ σ2), where σ2 ∉ PII.
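The following sqlite3 sketch illustrates the decomposition just described: a PROPRIETOR_TABLE with one entry per proprietor, and the compound Case relation stored only as per-proprietor atomic fragments, so that an access policy can be attached to each piiSphere independently. Table and column names follow the example above but are otherwise our own illustration, not a prescribed PIIDB schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One entry per proprietor; proprietor_id plays the role of the internal key #proprietor.
cur.execute("CREATE TABLE proprietor_table (proprietor_id INTEGER PRIMARY KEY, legal_identity TEXT UNIQUE)")

# The compound PII  Case(No., Father ID, Mother ID, Child ID)  is stored only as
# atomic fragments, one per piiSphere; the shared case_no preserves the link.
for fragment in ("case_father", "case_mother", "case_child"):
    cur.execute(f"CREATE TABLE {fragment} (case_no INTEGER, "
                f"proprietor_id INTEGER REFERENCES proprietor_table(proprietor_id))")

def insert_case(case_no, father_id, mother_id, child_id):
    """Reduce one compound Case record to three atomic PIIs (R', R'', R''')."""
    cur.execute("INSERT INTO case_father VALUES (?, ?)", (case_no, father_id))
    cur.execute("INSERT INTO case_mother VALUES (?, ?)", (case_no, mother_id))
    cur.execute("INSERT INTO case_child  VALUES (?, ?)", (case_no, child_id))

# Example: proprietors 1, 2, 3 and one case linking them.
cur.executemany("INSERT INTO proprietor_table VALUES (?, ?)",
                [(1, "father-ID"), (2, "mother-ID"), (3, "child-ID")])
insert_case(100, 1, 2, 3)
print(cur.execute("SELECT case_no FROM case_mother WHERE proprietor_id = 2").fetchall())  # [(100,)]
```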
VIII. CONCLUSION

The theory of PII infons can provide a theoretical foundation for technical solutions to problems of protection of personal identifiable information. In such an approach, privacy rules form an integral part of the design of the system. PII can be identified (hence becomes an object of privacy rules) during processing of information that may mix it with other types of information. Different types of basic PII infons provide an opportunity for tuning the design of an information system. We propose analyzing and processing PII as a database with clear boundary lines separating it from non-identifiable information, which facilitates meeting the unique requirements of PII. A great deal of work is needed at the theoretical and design levels. An expanded version of this paper includes complete formalization of the theory. Additionally, we are currently applying the approach to the analysis of an actual database of a government agency that handles social problems, where a great deal of PII is collected.

REFERENCES
[1] R. Agrawal, J. Kiernan, R. Srikant, and Y. Xu, "Hippocratic databases," 28th International Conference on Very Large Databases (VLDB), Hong Kong, China, August 2002.
[2] S. Al-Fedaghi, "How to calculate the information privacy," Proceedings of the Third Annual Conference on Privacy, Security and Trust, St. Andrews, New Brunswick, Canada, October 2005, pp. 12–14.
[3] S. Al-Fedaghi, G. Fiedler, and B. Thalheim, "Privacy enhanced information systems," Information Modelling and Knowledge Bases XVII, vol. 136, Frontiers in Artificial Intelligence and Applications, Y. Kiyoki, J. Henno, H. Jaakkola, and H. Kangassalo, Eds. IOS Press, February 2006.
[4] S. Al-Fedaghi, "Beyond purpose-based privacy access control," 18th Australasian Database Conference, Ballarat, Australia, January 29–February 2, 2007.
[5] S. Al-Fedaghi, "How sensitive is your personal information?" 22nd ACM Symposium on Applied Computing (ACM SAC 2007), Seoul, Korea, March 11–15, 2007.
[6] S. Al-Fedaghi, "Crossing privacy, information, and ethics," 17th International Conference of the Information Resources Management Association (IRMA 2006), Washington, DC, USA, May 21–24, 2006. [Also published in Emerging Trends and Challenges in Information Technology Management, Mehdi Khosrow-Pour (Ed.), IGI Publishing, Hershey, PA, USA.]
[7] J. Byun, E. Bertino, and N. Li, "Purpose based access control of complex data for privacy protection," Proceedings of the Tenth ACM Symposium on Access Control Models and Technologies (SACMAT'05), Stockholm, Sweden, June 1–3, 2005.
[8] R. Clarke, "Introduction to dataveillance and informational privacy, and definitions of terms," 2006. http://www.anu.edu.au/people/Roger.Clarke/DV/Intro.html#InfoPriv
[9] K. Devlin, Logic and Information. New York: Cambridge University Press, 1991.
[10] J. Kang, "Information privacy in cyberspace transactions," Stanford Law Review, 1193, pp. 1212–1220, 1998.
[11] C. J. Bennett and C. D. Raab, The Governance of Privacy: Policy Instruments in Global Perspective. United Kingdom: Ashgate, 2003.
[12] L. Sweeney, "Weaving technology and policy together to maintain confidentiality," Journal of Law, Medicine and Ethics, vol. 25, pp. 98–110, 1997.
[13] B. Thalheim, Entity-Relationship Modeling: Foundations of Database Technology. Berlin: Springer, 2000.

AUTHOR PROFILES

Sabah Al-Fedaghi holds an MS and a PhD in computer science from Northwestern University, Evanston, Illinois, and a BS in computer science from Arizona State University, Tempe. He has published papers in journals and contributed to conferences on topics in database systems, natural language processing, information systems, information privacy, information security, and information ethics. He is an associate professor in the Computer Engineering Department, Kuwait University. He previously worked as a programmer at the Kuwait Oil Company and headed the Electrical and Computer Engineering Department (1991–1994) and the Computer Engineering Department (2000–2007).

Bernhard Thalheim holds an MSc in mathematics from Dresden University of Technology, a PhD in mathematics from Lomonosov University Moscow, and a DSc in computer science from Dresden University of Technology. His major research interests are database theory, logic in databases, discrete mathematics, knowledge systems, and systems development methodologies, in particular for Web information systems. He has been program committee chair and general chair for several international events, including ADBIS, ASM, EJC, ER, FoIKS, MFDBS, NLDB, and WISE. He is currently full professor at Christian-Albrechts-University at Kiel in Germany and was previously with Dresden University of Technology (1979–1988; associate professor beginning in 1986), Kuwait University (1988–1990; visiting professor), Rostock University (1990–1993; full professor), and Cottbus University of Technology (1993–2003; full professor).


Improving Effectiveness of E-Learning in Maintenance Using Interactive-3D

Lt. Dr. S. Santhosh Baboo
Reader, P.G. & Research Dept of Computer Science, D.G. Vaishnav College, Chennai 106

Nikhil Lobo
Research Scholar, Bharathiar University
nikhillobo@baehal.com

Abstract—In aerospace and defense, training is being carried out on the web by viewing PowerPoint presentations, manuals and videos that are limited in their ability to convey information to the technician. Interactive training in the form of 3D is a more cost-effective approach compared to the creation of physical simulations and mockups. This paper demonstrates how training using interactive 3D simulations in e-learning achieves a reduction in the time spent in training and improves the efficiency of a trainee performing the installation or removal.

Keywords: Interactive 3D; E-Learning; Training; Simulation

I. INTRODUCTION

Some procedures are found to be hazardous and need to be demonstrated to maintenance personnel without damaging equipment or injuring personnel. These procedures require continuous practice and, when necessary, retraining. The technician also has to be trained in problem-solving and decision-making skills. The training should consider technicians widely distributed with various skills and experience levels.

In the past, the aerospace and defense industry has imparted training using traditional blackboard outlines, physical demonstrations and video, which are limited in their ability to convey information about tasks, procedures and internal components. Studies have demonstrated that 90% of what they do is remembered by students, in contrast to 10% of what they read. A need has now arisen for interactive three-dimensional content for visualization of objects, concepts and processes. Interactive training in the form of 3D is a more cost-effective approach compared to the creation of physical simulations and mockups. Online training generally takes only 60 percent of the time required for classroom training on the same material [1].

For any removal or installation procedure, it is important that the e-learning content should demonstrate the following aspects:
• What steps are to be carried out before starting a procedure?
• What special tools, consumables and spares are required for carrying out the procedure?
• What safety conditions are to be adhered to when carrying out the procedure?
• What steps a technician needs to perform as part of the procedure?
• What steps are to be carried out after completing a procedure?

II. TRAINING EFFECTIVENESS

Statistics on the effectiveness of training convey that trainees generally remember more of what they see than of what they read or hear, and more of what they hear, see and do than of what they only hear and see.

[Figure 1. Statistics of training effectiveness.]

Three-dimensional models are widely used in the design and development of products because they efficiently represent complex shape information. These 3D models can be used in e-learning to impart training. The training process can be greatly enhanced by allowing trainees to interact with these 3D models. Moreover, by using WWW-based simulation, a company can make a single copy of the models available over the WWW instead of mailing demonstration software to potential customers. This reduces costs and avoids customer frustration with installation and potential hardware compatibility problems [2]. This simulation-based e-learning is designed to simplify and control reality by removing complex systems that exist in real life, so the learner can focus on the knowledge to be learnt effectively and efficiently [3].

The objectives of using interactive 3D for training are as follows:
• Reducing the time spent in training by 30% or more
• Reducing the time spent in performing the installation by a trainee by 25% or more

III. COURSEWARE STRUCTURE
Since the web has been providing unprecedented flexibility and multimedia capability to deliver course materials, more and more courses are being delivered through the web. The existing course materials, such as PowerPoint presentations, manuals and videos, had a limited ability to convey information to the technician, and most current internet-based educational applications do not present 3D objects even though 3D visualization is essential in teaching most engineering ideas. Interactive learning is essential both for acquiring knowledge and for developing the physical skills needed to carry out maintenance-related activities. Interaction and interactivity are fundamental to all dynamic systems, particularly those involving people [4]. Although there are images of three-dimensional graphics on the web, their two-dimensional format does not imitate the actual environment because real objects are three-dimensional. Hence there is a need to integrate three-dimensional components and interactivity, creating three-dimensional visualizations that give technicians an opportunity to learn through experimentation and research. The e-learning courseware is linearly structured with three modules for each removal or installation procedure; a technician is required to complete each module before proceeding to the next. These modules are Part Familiarization, Procedure and Practice, and they provide a new and creative method for presenting removal or installation procedures effectively to technicians.

Figure 2. Part Familiarization, Procedure and Practice modules

IV. PART FAMILIARIZATION
This module familiarizes the technician with the parts that constitute the assembly to be installed or removed. It provides technicians with information on what each part looks like and where it is located in the assembly. An assembly is displayed together with a part list comprising the parts that make up the assembly. The assembly is displayed as a 3D model in the Main Window, allowing the technician to rotate and move it. Individual parts can be identified by selecting them from the parts list and by viewing the model in "context view". Context View displays only the part selected in the part list with 100% opacity while reducing the opacity of the other parts in the assembly. This enables easy identification of the selected part in the assembly. Individual parts that are selected are also displayed in a Secondary Window, allowing the technician to move or rotate the individual part for a better understanding of that part. Each part is labeled with a unique part number and a nomenclature.

Figure 3. Part Familiarisation module and Context View

V. PROCEDURE
Once the technician has a clear understanding of the assembly and its parts, the technician advances to the module that teaches how to accurately remove or install the assembly. In Procedure, the removal or installation is demonstrated in animated form one step at a time. This allows the technician to study the step-by-step removal or installation process using animation that can be replayed at any time. The use of three-dimensional models in the animation imitates the real removal or installation process, helping technicians to understand the concepts very clearly. Removal or installation of each part of the assembly is animated along with callouts indicating the part number, nomenclature, tool required and torque. The animations are presented one step at a time to ensure technicians are able to perform the removal or installation process in the right order. Safety conditions such as warnings and cautions are also displayed along with the animation. A warning alerts the reader to possible hazards that may cause loss of life, physical injury or ill health; a caution denotes a possibility of damage to material but not to personnel. A voice-over accompanies the animation to help the technician understand the procedure better.

Figure 4. Procedure module

VI. PRACTICE
Before performing any real procedure, technicians are first evaluated using a removal or installation procedural simulation. Practice allows a technician to perform an installation or removal procedure on standard desktops, laptops and Tablet PCs one step at a time, to ensure that the technician clearly understands the procedure and is ready to perform it on an actual assembly. Three-dimensional models

are used in the simulation to enable the technician to feel as though they were performing the removal or installation procedure on the actual assembly. Using three-dimensional simulations, technicians can perform, view, and understand the procedure in a three-dimensional view. In installation, parts are dragged from a bin to create an assembly; in removal, parts are dragged from the assembly into a bin. In either case, if a wrong part is installed or removed, an alert box is displayed on the screen, preventing the technician from proceeding until the correct part is installed or removed.

Figure 5. Practice module

VII. SUCCESSFUL DEPLOYMENT

A. Turbojet Engines
Ineffective training of geographically distributed technicians had resulted in improper troubleshooting procedures being carried out on turbojet engines, compromising the reliability of the engine. Technicians are now trained using interactive 3D simulations of the engine explaining its description, operation, maintenance and troubleshooting procedures, resulting in an estimated saving of $1.5 million and an improved maintenance turn-around time. Technicians are now able to practice these procedures on standard desktops, laptops, and Tablet PCs, eliminating geographic barriers and imparting a high standard of training.

Figure 6. Turbojet Engine

B. Anti-tank and Anti-personnel Mines
Training soldiers in the Army on the handling and disposal of both anti-tank and anti-personnel mines is difficult. Anti-tank mines are large and heavy (usually weighing more than 5 kilos) and are triggered by heavy vehicles such as tanks; they contain enough explosive to destroy the vehicle that runs over them. Anti-personnel mines are smaller and lighter (weighing as little as 50 grams), are triggered much more easily, and are designed to wound people. It is critical that soldiers have access to technical information about the landmine and details regarding its safe handling and disposal. Creating 3D simulations of landmines that allow soldiers to view details of their parts, watch safety procedural animations and interact with them resulted in soldiers having a greater understanding and knowledge of the landmines they encounter.

Figure 7. Anti-tank and Anti-personnel mines

C. M79 Grenade Launcher
The Army had identified that a lack of access to M79 Grenade Launchers during familiarization training had resulted in the deployment of soldiers with limited knowledge and experience. To overcome this hurdle, 3D-enabled M79 Grenade Launcher Virtual Task Trainers were provided, simulating the single-shot, shoulder-fired, break-open grenade launcher, which fires a 40x46 mm grenade. Soldiers are now able to receive familiarization training regardless of geographic barriers or lack of access to weapons.

Figure 8. M79 Grenade Launcher

D. Black Shark Torpedo
The Black Shark torpedo is designed for launching from submarines or surface vessels and is meant to counter the threat posed by any type of surface or underwater target. Due to the fast pace of operations, Navy technicians received little to no training on the Black Shark torpedo, which resulted in improper operating procedures and preventative maintenance checks. Web-enabled 3D simulations have been developed that allow technicians to have hands-on practice anytime and anywhere, along with familiarization with parts, maintenance procedures and repair tasks. Technicians have shown a greater level of interest in the 3D simulations compared to existing course materials such as PowerPoint presentations, manuals and videos.

Figure 9. Black Shark Torpedo

E. Phoenix Missile
Technicians were constantly facing operational difficulties concerning the Phoenix Missile due to the inability to demonstrate the operation of its internal components. The Phoenix Missile is a long-range air-to-air missile. An interactive 3D simulation demonstrating its internal components and their functioning was developed, allowing technicians to view part information and to rotate and cross-section the Phoenix Missile.

Figure 10. Phoenix Missile


VIII. CASE STUDIES
A case study to determine the effort saved using interactive 3D was carried out on the following installations:
• Hydraulic Pump
• Hydraulic Reservoir
• High Pressure Filter
• Anti-Skid Control Valve
• Quick Disconnect Coupling Suction

TABLE I. USING TRADITIONAL TOOLS LIKE VIDEO FOR TRAINING

Installation                         Time spent training a trainee (video clips)    Actual time spent to complete the installation by a trainee
Hydraulic Pump                       3 hours                                         30 minutes
Hydraulic Reservoir                  4 hours 30 minutes                              1 hour
High Pressure Filter                 3 hours                                         45 minutes
Anti-Skid Control Valve              2 hours 30 minutes                              45 minutes
Quick Disconnect Coupling Suction    2 hours 15 minutes                              30 minutes

TABLE II. USING INTERACTIVE 3D FOR TRAINING

Installation                         Time spent training a trainee (Interactive 3D)  Actual time spent to complete the installation by a trainee
Hydraulic Pump                       2 hours                                          20 minutes
Hydraulic Reservoir                  3 hours                                          45 minutes
High Pressure Filter                 1 hour 30 minutes                                30 minutes
Anti-Skid Control Valve              1 hour 30 minutes                                30 minutes
Quick Disconnect Coupling Suction    1 hour 30 minutes                                20 minutes

TABLE III. PERCENTAGE OF EFFORT SAVED IN USING INTERACTIVE 3D FOR TRAINING

Installation                         Saving in time spent training a trainee          Saving in actual installation time by a trainee
Hydraulic Pump                       33.3%                                             34%
Hydraulic Reservoir                  33.3%                                             25%
High Pressure Filter                 50%                                               33.3%
Anti-Skid Control Valve              40%                                               33.3%
Quick Disconnect Coupling Suction    50%                                               34%
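The percentages in Table III follow from comparing the corresponding rows of Tables I and II. A minimal sketch of the calculation, using the Hydraulic Pump row, is shown below; the rounding of 33.3% to "34%" for the installation column mirrors the published table.

    # Sketch: deriving a Table III percentage from Tables I and II (Hydraulic Pump row).
    def saving(before_minutes, after_minutes):
        """Percentage of effort saved when moving from video-based to interactive 3D training."""
        return (before_minutes - after_minutes) / before_minutes * 100

    print(round(saving(180, 120), 1))  # 33.3  -> saving in training time (3 h vs 2 h)
    print(round(saving(30, 20), 1))    # 33.3  -> saving in installation time (30 min vs 20 min)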


IX. CONCLUSION
The metrics collected and analyzed during the implementation of interactive 3D demonstrate the following benefits in maintenance:
• Reducing the time spent in training by 30% or more
• Reducing the time spent in performing the installation by a trainee by 25% or more
In conclusion, interactive 3D reduces the amount of time spent in hands-on training on real equipment and protects trainees from injury and equipment from damage when performing procedures that are hazardous. It provides an opportunity for technicians to study the internal components of equipment, and this training can be provided to technicians who are geographically distributed.

REFERENCES
[1] Wright, Elizabeth E., "Making the Multimedia Decision: Strategies for Success," Journal of Instructional Delivery Systems, Winter 1993, pp. 15-22.
[2] Tamie L. Veith, "World Wide Web-based Simulation," Int. J. Engng Ed., 1998, Vol. 14, No. 5, pp. 316-321.
[3] Alessi and Trollip, "Computer Based Instruction: Methods and Development," New Jersey: Prentice Hall, 1991.
[4] Jong, Ton, Sarti, Luigi, "Design and Production of Multimedia and Simulation-Based Learning Material," Kluwer Academic Publishers, Dordrecht, 1993.


An Empirical Comparative Study of Checklist-based and Ad Hoc Code Reading Techniques in a Distributed Groupware Environment

Olalekan S. Akinola, Department of Computer Science, University of Ibadan, Nigeria, Solom202@yahoo.co.uk
Adenike O. Osofisan, Department of Computer Science, University of Ibadan, Nigeria, mamoshof@yahoo.co.uk

Abstract
Software inspection is a necessary and important tool for software quality assurance. Since it was introduced by Fagan at IBM in 1976, arguments have existed as to which method should be adopted to carry out the exercise, whether it should be paper-based or tool-based, and what reading technique should be used on the inspection document. Extensive work has been done to determine the effectiveness of reviewers in a paper-based environment when using ad hoc and checklist reading techniques. In this work, we take software inspection research further by examining whether there is any significant difference in the defect detection effectiveness of reviewers when they use either ad hoc or checklist reading techniques in a distributed groupware environment. Twenty final-year undergraduate students of computer science, divided into ad hoc and checklist reviewer groups of ten members each, were employed to inspect a medium-sized Java code synchronously on groupware deployed on the Internet. The data obtained were subjected to tests of hypotheses using independent t-tests and correlation coefficients. Results from the study indicate that there are no significant differences in the defect detection effectiveness, the effort in terms of time taken in minutes, and the false positives reported by the reviewers using either ad hoc or checklist-based reading techniques in the distributed groupware environment studied.

Key words: Software Inspection; Ad hoc; Checklist; Groupware.

I. INTRODUCTION
A software product could be judged to be of high or low quality depending on who is analyzing it. Thus, quality software can be said to be "software that satisfies the needs of the users and the programmers involved in it" [28]. Pfleeger highlighted four major criteria for judging the quality of software:
(i) It does what the user expects it to do;
(ii) Its interaction with the computer resources is satisfactory;
(iii) The user finds it easy to learn and to use; and
(iv) The developers find it convenient in terms of design, coding, testing and maintenance.

In order to achieve the above criteria, software inspection was introduced. Software inspection has become widely used [36] since it was first introduced by Fagan [25] at IBM. This is due to its potential benefits for software development, the increased demand for quality certification in software (for example, ISO 9000 compliance requirements), and the adoption of the Capability Maturity Model as a development methodology [27].

Software inspection is a necessary and important tool for software quality assurance. It involves strict and close examinations carried out on development products to detect defects, violations of development standards and other problems [18]. The development products could be specifications, source code, contracts, test plans and test cases [33, 4, 8].

Traditionally, the software inspection artifact (requirements, design, or code) is presented on paper for the inspectors / reviewers. The advent of Collaborative Software Development (CSD) provides opportunities for software developers in geographically dispersed locations to communicate, and further build and share common knowledge repositories [13]. Through CSD, distributed collaborative software inspection methodologies emerge, in which a group of reviewers in different geographical locations may log on synchronously or asynchronously online to inspect an inspection artifact.

It has been hypothesized that, in order to gain credibility and validity, software inspection experiments have to be conducted in different environments, using different people, languages, cultures, documents, and so on [10, 12]. That is, they must be redone in other environments. The motivation for this work stems from this hypothesis.

Specifically, the target goal of this research work is to determine if there is any significant difference in the effectiveness of reviewers using the ad hoc code reading technique and those using the checklist reading technique in a distributed tool-based environment. Twenty final-year students of Computer Science were employed to carry out an inspection task on a medium-sized code in a distributed, collaborative environment. The students were divided into two groups: one group used the ad hoc code reading technique, while the second group used the checklist-based code reading technique (CBR). Briefly, the results obtained show that there is no significant difference


in the effectiveness of reviewers using ad hoc and checklist reading techniques in the distributed environment.

The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 states the experimental planning and instruments used, as well as the subjects and hypotheses set up for the experiment; threats to internal and external validity are also treated in this section. Section 4 presents the results and the statistical tests carried out on the experimental data. Section 5 discusses our results, while Section 6 states the conclusion and recommendations.

II. Related Works: Ad Hoc versus Checklist-based Reading Techniques
Software inspection is as old as programming itself. It was introduced in the 1970s at IBM, which pioneered its early adoption and later evolution [25]. It is a way of detecting faults in software documents – requirements, designs or code. Recent empirical studies demonstrate that defect detection is more an individual than a group activity, as assumed by many inspection methods and refinements [20, 29, 22]. Inspection results depend on the inspection participants themselves and their strategies for understanding the inspected artifacts [19].

A defect detection or reading (as it is popularly called) technique is defined as the series of steps or procedures whose purpose is to guide an inspector in acquiring a deep understanding of the inspected software product [19]. The comprehension of inspected software products is a prerequisite for detecting subtle and/or complex defects, those often causing the most problems if detected in later life-cycle phases.

According to Porter et al. [1], defect detection techniques range in prescription from intuitive, nonsystematic procedures, such as ad hoc or checklist techniques, to explicit and highly systematic procedures, such as scenarios or correctness proofs. A reviewer's individual responsibility may be general, to identify as many defects as possible, or specific, to focus on a limited set of issues such as ensuring appropriate use of hardware interfaces, identifying untestable requirements, or checking conformity to coding standards.

Individual responsibilities may or may not be coordinated among the review team members. When they are not coordinated, all reviewers have identical responsibilities. In contrast, each reviewer in a coordinated team has different responsibilities.

The most frequently used detection methods are ad hoc and checklist. Ad hoc reading, by nature, offers very little reading support, since a software product is simply given to inspectors without any direction or guidelines on how to proceed through it and what to look for. However, ad hoc does not mean that inspection participants do not scrutinize the inspected product systematically. The word 'ad hoc' only refers to the fact that no technical support is given to them for the problem of how to detect defects in a software artifact. In this case, defect detection fully depends on the skill, the knowledge, and the experience of the inspector. Training sessions in program comprehension before the start of an inspection may help subjects develop some of these capabilities and alleviate the lack of reading support [19].

Checklists offer stronger, boilerplate support in the form of questions inspectors are to answer while reading the document. These questions concern quality aspects of the document. Checklists are advocated in many inspection works, for example Fagan [24, 25], Dunsmore [2], Sabaliauskaite [12], Humphrey [35] and Gilb and Graham's manuscript [32], to mention a few.

Although reading support in the form of a list of questions is better than none (such as ad hoc), checklist-based reading has several weaknesses [19]. First, the questions are often general and not sufficiently tailored to a particular development environment. A prominent example is the following question: "Is the inspected artifact correct?" Although this checklist question provides a general framework for an inspector on what to check, it does not tell him or her in a precise manner how to ensure this quality attribute. In this way, the checklist provides little support for an inspector to understand the inspected artifact, yet this can be vital for detecting major application logic defects. Second, instructions on how to use a checklist are often missing; that is, it is often unclear when, and based on what information, an inspector is to answer a particular checklist question. In fact, several strategies are feasible for addressing all the questions in a checklist. The following approach characterizes one end of the spectrum: the inspector takes a single question, goes through the whole artifact, answers the question, and takes the next question. The other end is defined by the following procedure: the inspector reads the document, and afterwards he or she answers the questions of the checklist. It is quite unclear which approach inspectors follow when using a checklist and how they achieve their results in terms of defects detected. The final weakness of a checklist is the fact that checklist questions are often limited to the detection of defects that belong to particular defect types. Since the defect types are based on past defect information, inspectors may not focus on defect types not previously detected and, therefore, may miss whole classes of defects.

To address some of these difficulties, a checklist can be developed according to the following principles [19]:
• The length of a checklist should not exceed one page.
• The checklist questions should be phrased as precisely as possible.
• The checklist should be structured so that the quality attribute is clear to the inspector and the questions give hints on how to assure the quality attribute. And additionally,
• The checklist should not be longer than a page of approximately 25 items [12, 32].


In practice, reviewers often use ad hoc or checklist detection techniques to discharge identical, general responsibilities. Some authors, especially Parnas and Weiss [9], have argued that inspections would be more effective if each reviewer used a different set of systematic detection techniques to discharge different specific responsibilities.

Computer and/or Internet support for software inspections has been suggested as a way of removing the bottlenecks in the traditional software inspection process. The web approach makes software inspection much more elastic, in the form of asynchronicity and geographical dispersal [14].

The effectiveness of manual inspections depends on satisfying many conditions, such as adequate preparation, readiness of the work product for review, high-quality moderation, and cooperative interpersonal relationships. The effectiveness of tool-based inspection is less dependent upon these human factors [29, 26]. Stein et al. [31] are of the view that distributed, asynchronous software inspections can be a practicable method. Johnson [17], however, opined that thoughtless computerization of the manual inspection process may in fact increase the cost of inspections.

To the best of our knowledge, many of the works in this line of research in the literature either report experiences in terms of lessons learned with using the tools, for instance Harjumaa [14] and Mashayekhi [34], or compare the effectiveness of tools with paper-based inspections, for instance Macdonald and Miller [23]. In the case of ICICLE [7], the only published evaluation comes in the form of lessons learned. In the case of Scrutiny, in addition to lessons learned [16], the authors also claim that tool-based inspection is as effective as paper-based, but there is no quantifiable evidence to support this claim [15].

In this paper, we examine the feasibility of tool support for software code inspection as well as determining whether there is any significant difference in the effectiveness of reviewers using ad hoc and checklist reading techniques in a distributed environment.

III. Experimental Planning and Design

A. Subjects
Twenty (20) final-year students of Computer Science were employed in the study. Ten (10) of the student reviewers used the ad hoc reading technique, without any aid being provided to them for the inspection. The other ten used the checklist-based reading technique; the tool provided them with a checklist as an aid for the inspection.

B. Experimental Instrumentation and Set-up
Inspro, a web-based, distributed, collaborative code inspection tool, was designed and developed as the code inspection groupware used in the experiment. The experiment was run as a synchronous, distributed, collaborative inspection with the computers on the Internet. One computer was configured as a server with a WAMP server installed on it, while the other computers served as clients to the server.

The tool was developed by the authors using the Hypertext Preprocessor (PHP) web programming language and deployed on an Apache WAMP server. The student reviewers were orientated on the use of the Inspro web-based tool as well as the code artifact before the real experiment was conducted on the second day. The tool has the following features:

(i) The user interface is divided into three sections. One section displays the code artifact to be worked on by the reviewers along with its line numbers, while another section displays the text box in which the reviewers key in the bugs found in the artifact. The third section below is optionally displayed to give the checklist to be used by the reviewers if they are in the checklist group, and displays nothing for the ad hoc inspection group.
(ii) As soon as a reviewer logs on to the server, the tool starts counting the time used for the inspection and finally records the stop time when the submit button is clicked.
(iii) The date the inspection is done, as well as the prompt for the name of the reviewer, is automatically displayed when the tool is activated.
(iv) The tool writes the output of the inspection exercise to a file, which is automatically opened when the submit button is clicked. This file is then finally opened and printed for further analysis by the chief coordinator of the inspection.

Fig. 1 displays the user interface of the Inspro tool.
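For illustration only, the sketch below mimics the kind of per-reviewer session record the tool is described as keeping (start time, stop time on submit, reviewer name, date and reported defects). The actual Inspro tool is a PHP web application; the class, field names and JSON output here are assumptions made for this sketch, not the authors' implementation.

    # Illustrative sketch of an inspection session log, not the Inspro PHP tool itself.
    import json, time
    from datetime import date

    class InspectionSession:
        def __init__(self, reviewer):
            self.reviewer = reviewer
            self.date = date.today().isoformat()
            self.start = time.time()          # timing starts when the reviewer logs on
            self.defects = []

        def report(self, line_no, description):
            self.defects.append({"line": line_no, "defect": description})

        def submit(self, path):
            record = {
                "reviewer": self.reviewer,
                "date": self.date,
                "minutes": round((time.time() - self.start) / 60, 1),  # stop time on submit
                "defects": self.defects,
            }
            with open(path, "w") as f:        # written to a file for the chief coordinator
                json.dump(record, f, indent=2)

    session = InspectionSession("Reviewer 1")
    session.report(42, "determinant computed for a non-square matrix")
    session.submit("inspection_reviewer1.json")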


C. Experimental Artifact
The artifact used for this experiment was a 156-line Java code which accepts data into two 2-dimensional arrays. This small-sized code was used because the students involved were having their first experience of code inspection in this experiment, even though they were given some formal training on code inspection prior to the exercise. The experiment was conducted as a practical class in a 400-level Software Engineering course (CSC 433) in the Department of Computer Science, University of Ibadan, Nigeria. The arrays were used as matrices. Major operations on matrices were implemented in the program, such as sum, difference, product, determinant and transpose. All conditions for these operations were tested in the program. The code was developed and tested by the researcher before it was finally seeded with 18 errors: 12 logical and 6 syntax/semantic. The program accepts data into the two arrays, then performs all the operations on them and reports the output results of the computation if there are no errors. If an operational condition is not fulfilled for any of the operations, the program reports an appropriate error log for that operation.

D. Experimental Variables and Hypotheses
The experiment manipulated two independent variables: the number of reviewers per team (1, 2, 3, or 4 reviewers, excluding the code author) and the review method (ad hoc and checklist). Three dependent variables were measured: the average number of defects detected by the reviewers, that is, the defect detection effectiveness (DE); the average time spent on the inspection (T) in minutes; and the average number of false positives reported by the reviewers (FP). The defect detection effectiveness (DE) is the number of true defects detected by reviewers out of the total number of seeded defects in a code inspection artifact. The time measures the total time (in minutes) spent by a reviewer on inspecting an inspection artifact; inspection time is also a measure of the effort (E) used by the reviewers in a software inspection exercise. False positives (FP) are the perceived defects a reviewer reported that are actually not true defects.

Fig.1 Inspro User Interface
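For clarity, a minimal sketch of how DE and FP could be computed for a single reviewer is given below. The set-based matching of reported versus seeded defect identifiers is an illustrative assumption made for this sketch, not part of the Inspro tool.

    # Illustrative sketch: defect detection effectiveness (DE) and false positives (FP).
    SEEDED_DEFECTS = set(range(1, 19))            # the 18 seeded defects (assumed IDs 1..18)

    def reviewer_metrics(reported_ids):
        """Return (DE in %, FP count) for the defect IDs one reviewer reported."""
        reported = set(reported_ids)
        true_defects = reported & SEEDED_DEFECTS  # correctly identified seeded defects
        false_positives = reported - SEEDED_DEFECTS
        de = len(true_defects) / len(SEEDED_DEFECTS) * 100
        return de, len(false_positives)

    # Example: a reviewer who finds 5 seeded defects and reports 2 spurious ones.
    de, fp = reviewer_metrics([1, 3, 7, 12, 15, 101, 102])
    print(f"DE = {de:.2f}%, FP = {fp}")           # DE = 27.78%, FP = 2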


Three hypotheses were stated for this experiment, as follows:

Ho1: There is no significant difference between the effectiveness of reviewers using Ad hoc and Checklist reading techniques in distributed code inspection.
Ho2: There is no significant difference between the effort taken by reviewers using Ad hoc and Checklist techniques in distributed code inspection.
Ho3: There is no significant difference between the false positives reported by reviewers using Ad hoc and Checklist techniques in distributed code inspection.

E. Threats to Validity
The question of validity draws attention to how far a measure really measures the concept that it purports to measure (Alan and Duncan, 1997). Therefore, in this experiment, we considered two important kinds of threats that may affect the validity of the research in the domain of software inspection.

F. Threats to Internal Validity
Threats to internal validity are influences that can affect the dependent variables without the researcher's knowledge. We considered three such influences: (1) selection effects, (2) maturation effects, and (3) instrumentation effects.

Selection effects are due to natural variation in human performance [1]. For example, if one-person inspections are done only by highly experienced people, then their greater-than-average skill can be mistaken for a difference in the effectiveness of the treatments. We limited this effect by randomly assigning team members for each inspection; this way, individual differences were spread across all treatments.

Maturation effects result from the participants' skills improving with experience. Randomly assigning the reviewers and conducting the review within the same period of time checked these effects.

Instrumentation effects are caused by the artifacts to be inspected, by differences in the data collection forms, or by other experimental materials. In this study, this was negligible or did not take place at all, since all the groups inspected the artifacts within the same period of time on the same web-based tool.

G. Threats to External Validity
Threats to external validity are conditions that can limit our ability to generalize the results of experiments to industrial practice [1]. We considered three sources of such threats: (1) experimental scale, (2) subject generalizability, and (3) subject and artifact representativeness.

Experimental scale is a threat when the experimental setting or the materials are not representative of industrial practice. This has a great impact on the experiment, as the material used (the matrix code) was not truly representative of what obtains in an industrial setting; the code document used was invented by the researchers. A threat to subject generalizability may exist when the subject population is not drawn from the industrial population. We tried to minimize this threat by incorporating 20 final-year students of Computer Science who had just concluded a 6-month industrial training in the second semester of their 300 level. The students selected were those who actually did their industrial training in software development houses.

Threats regarding subject and artifact representativeness arise when the subject and artifact population is not representative of the industrial population. The explanations given earlier also account for this threat.

IV. RESULTS
Table 1 shows the raw results obtained from the experiment. The "Defect (%)" column gives the percentage of true defects reported by the reviewers in each group (Ad hoc and Checklist) out of the 18 errors seeded in the code artifact, the "Effort" column gives the time taken in minutes for reviewers to inspect the online inspection documents, and the "No of FP" column is the total number of false positives reported by the reviewers in the experiment.


Table 1: Raw Results Obtained from Collaborative Code Inspection

               Ad hoc Inspection                     Checklist Inspection
s/n    Defect (%)   Effort (Mins)   No of FP    Defect (%)   Effort (Mins)   No of FP
1      27.78        31              1           44.44        80              5
2      44.44        72              5           33.33        60              5
3      50.00        57              4           27.78        35              4
4      50.00        50              2           22.22        43              3
5      38.89        82              0           22.22        21              0
6      61.11        98              6           27.78        30              2
7      50.00        52              5           33.33        25              4
8      38.89        50              3           38.89        45              1
9      44.44        48              1           38.89        42              2
10     50.00        51              3           44.44        38              3

The traditional method of conducting an inspection on a software artifact is to do it in teams of different sizes. However, it was not possible to gather the reviewers into teams online, since they did not meet face-to-face. Therefore, nominal team selection, as is usually done in this area of research, was used.

Nominal teams consist of individual reviewers or inspectors who do not communicate with each other during the inspection work. Nominal teams can help to minimize inspection overhead and to lower inspection cost and duration [30]. The approach of creating virtual or nominal teams has been used in other studies as well [30, 6]. An advantage of nominal inspections is that it is possible to generate and investigate the effect of different team sizes; a disadvantage is that no effects of the meeting and possible team synergy are present in the data [3]. The rationale for the investigation of nominal teams is to compare nominal inspections with the real-world situation, where teams would be formed without any re-sampling. There are many ways by which nominal teams can be created. Aybüke et al. [3] suggest creating all combinations of teams where each individual reviewer is included in multiple teams, but this introduces dependencies among the nominal teams. They also suggest randomly creating teams out of the reviewers without repetition, and using bootstrapping as suggested by Efron and Tibshirani [11], in which samples are statistically drawn from a sample space and immediately returned so that they can possibly be drawn again. However, bootstrapping at the individual level will increase the overlap if the same reviewer is chosen more than once in a team.

In this study, four different nominal teams were created for each of the Ad hoc and Checklist reviewer groups. The first reviewer forms the team of size 1, the next two form the team of size two, and so on, giving teams of 1, 2, 3 and 4 persons. Table 2 gives the mean aggregate values of the results obtained with the nominal teams (an illustration of the aggregation is sketched after the table).

Table 2: Mean Aggregate Results for Nominal Team Sizes

                           Ad hoc Inspection                Checklist Inspection
Nominal Team size    DE (%)    T (mins)    FP          DE (%)    T (mins)    FP
1                    27.78     31.0        1.0         44.44     80.00       5.00
2                    47.22     64.5        4.5         30.56     47.50       4.50
3                    50.00     76.7        2.7         24.06     31.33       1.67
4                    46.11     50.3        3.0         38.89     37.50       2.50
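As an illustration of how the nominal-team aggregates in Table 2 can be derived from the raw values in Table 1, the sketch below groups consecutive reviewers into teams of size 1 to 4 and averages their scores. The grouping of consecutive reviewers and the use of the arithmetic mean are assumptions inferred from the text and tables, not code from the study.

    # Illustrative sketch: nominal-team aggregation of the Table 1 ad hoc results.
    # Each tuple is (defect %, effort in minutes, false positives) for one reviewer.
    ad_hoc = [
        (27.78, 31, 1), (44.44, 72, 5), (50.00, 57, 4), (50.00, 50, 2), (38.89, 82, 0),
        (61.11, 98, 6), (50.00, 52, 5), (38.89, 50, 3), (44.44, 48, 1), (50.00, 51, 3),
    ]

    def mean_aggregate(rows):
        """Arithmetic mean of DE, T and FP over the reviewers in one nominal team."""
        n = len(rows)
        return tuple(round(sum(r[i] for r in rows) / n, 2) for i in range(3))

    # Teams of size 1..4 built from consecutive reviewers (1 | 2-3 | 4-6 | 7-10).
    teams = [ad_hoc[:1], ad_hoc[1:3], ad_hoc[3:6], ad_hoc[6:10]]
    for size, team in zip((1, 2, 3, 4), teams):
        print(size, mean_aggregate(team))   # e.g. size 2 -> (47.22, 64.5, 4.5), as in Table 2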


Table 2 shows that 42.8% and 34.5% of the defects were detected by the ad hoc and checklist reviewers, respectively, in the inspection experiment.

The aggregate mean effort taken by the ad hoc reviewers was 55.62 ± 9.82 SEM minutes, while the checklist reviewers took 49.08 ± 10.83 SEM minutes (SEM means Standard Error of the Mean). The aggregate mean false positives reported by the reviewers were 2.80 ± 0.72 SEM and 3.42 ± 0.79 SEM, respectively, for the ad hoc and checklist reviewers.

Fig. 2 shows the chart of the defect detection effectiveness of the different teams in each of the defect detection method groups.

Fig. 2: Chart of Defect Detection Effectiveness of Reviewers Against The Nominal Team Sizes

Fig. 2 shows that the effectiveness of the ad hoc reviewers rises steadily with team size, with the peak value recorded at team size 3. The checklist reviewers, however, show a negative trend, in that their effectiveness decreases with team size up to team size 3 before rising again at team size 4. Effort, in terms of the time taken by reviewers in minutes, follows the same shape, as shown in Fig. 3.



Fig. 3: Chart of Effort (Mins) against the Nominal Team Sizes

Fig. 4 shows the mean aggregate false positives reported by the reviewers in the experiment.

Fig. 4: Average False Positives against the Nominal Team Sizes


Examining the false positive curves in Fig. 4 critically, we can see that their shapes follow more or less the same trend as those obtained for defect detection effectiveness and effort.

Further Statistical Analyses
Table 3 shows the results of the major statistical tests performed on the data obtained in this experiment. The independent t-test was used for the analyses, since different subjects were involved in the experiments and the experiments were carried out independently.

Table 3: Major Statistical Test Results

Hypothesis Tested                                                                                                        p-value   Correlation   Decision
Ho: There is no significant difference between the effectiveness of reviewers using Ad hoc and Checklist
    techniques in distributed code inspection.                                                                           0.267     -0.83         Ho accepted
Ho: There is no significant difference between the effort taken by reviewers using Ad hoc and Checklist
    techniques in distributed code inspection.                                                                           0.670     -0.85         Ho accepted
Ho: There is no significant difference between the false positives reported by reviewers using Ad hoc and
    Checklist techniques in distributed code inspection.                                                                 0.585     -0.15         Ho accepted
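The sketch below illustrates the style of analysis reported in Table 3: an independent-samples t-test and a Pearson correlation, here applied to the nominal-team DE aggregates of Table 2. Exactly which values the authors fed into their tests is not spelled out in the paper, so the choice of input data is an assumption made for illustration only.

    # Illustrative sketch of an independent t-test and correlation on Table 2 values.
    from scipy import stats

    ad_hoc_de    = [27.78, 47.22, 50.00, 46.11]   # Table 2, Ad hoc DE (%)
    checklist_de = [44.44, 30.56, 24.06, 38.89]   # Table 2, Checklist DE (%)

    t_stat, p_value = stats.ttest_ind(ad_hoc_de, checklist_de)   # independent-samples t-test
    r, _ = stats.pearsonr(ad_hoc_de, checklist_de)               # correlation coefficient

    print(f"p-value = {p_value:.3f}, correlation = {r:.2f}")
    # A p-value above 0.05 leads to accepting the null hypothesis, as in Table 3.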

Even though the shapes of the curves obtained in the experiment indicate differences in the defect detection effectiveness, the effort taken and the false positives reported by the reviewers, the statistical tests conducted show that there are no significant differences in the defect detection effectiveness (p = 0.267), the effort taken (p = 0.670) or the false positives reported by the reviewers (p = 0.585) for the two defect detection techniques studied. The null hypotheses are thus accepted for all the tests.

The correlation coefficients are strong for defect detection effectiveness and effort/time taken, albeit in negative directions, as depicted in the charts in Figs. 2 and 3. There is a very weak negative correlation coefficient for the false positives reported by the reviewers.

V. DISCUSSION OF RESULTS
Results from the groupware experiment show that there is no significant difference between the ad hoc and checklist-based reviewers in terms of the parameters measured – defect detection effectiveness, effort and false positives reported. Aggregate mean values of defect detection effectiveness and effort are slightly higher for the ad hoc reviewers, while the aggregate mean false positive count is slightly higher for the checklist-based reviewers. About 43% and 35% of the defects were detected by the reviewers using the ad hoc and checklist reading techniques, respectively.

Our results are in consonance with some related works in the literature. To mention a few, Porter and Votta [1], in their experiment comparing defect detection methods for software requirements inspections, show that checklist reviewers were no more effective than ad hoc reviewers, and that another method, the scenario method, had a higher fault detection rate than either the ad hoc or checklist methods. However, their results were obtained from a manual (paper-based) inspection environment.

Lanubile and Visaggio [21], in their work on evaluating defect detection techniques for software requirements inspections, also show that no difference was found between inspection teams applying ad hoc or checklist reading with respect to the percentage of discovered defects. Again, they conducted their experiment in a paper-based environment.

Nagappan et al. [26], in their work on preliminary results on using static analysis tools for software inspection, made reference to the fact that inspections can detect as little as 20% to as much as 93% of the total number of defects in an artifact. Briand et al. [5] report that, on average, software inspections find 57% of the defects in code and design documents.

In terms of the percentage of defects detected, low results were obtained from our experiment compared to what obtains in some related works. For instance, Giedre et al. [12], in their experiment comparing checklist-based reading and perspective-based reading for UML design document inspection, show that checklist-based reading (CBR) uncovers 70% of defects while perspective-based reading (PBR) uncovers 69%, and that the checklist takes more time (effort) than PBR.

The implication of these results is that either of the defect detection reading techniques, ad hoc or checklist, could be conveniently employed in software inspection, depending on choice, in either a manual (paper-based) or


tool-based environment, since they have roughly the same level of performance.

VI. CONCLUSION AND RECOMMENDATIONS
In this work we demonstrate the equality of the ad hoc and checklist-based reading techniques that are traditionally and primarily used as defect reading techniques in software code inspection, in terms of their defect detection effectiveness, effort taken and false positives. Our results show that neither of the two reading techniques outperforms the other in the tool-based environment studied.

However, the results of this study need further experimental clarification, especially in an industrial setting with professionals and large real-life codes.

REFERENCES
[1] Adam A. Porter, Lawrence G. Votta, and Victor R. Basili (1995): Comparing detection methods for software requirements inspections: A replicated experiment. IEEE Trans. on Software Engineering, 21(6): 563-575.
[2] Alastair Dunsmore, Marc Roper and Murray Wood (2003): Practical Code Inspection for Object Oriented Systems, IEEE Software 20(4), 21-29.
[3] Aybüke Aurum, Claes Wohlin and Hakan Peterson (2005): Increasing the understanding of effectiveness in software inspections using published data sets, Journal of Research and Practice in Information Technology, vol. 37, No. 3.
[4] Brett Kyle (1995): Successful Industrial Experimentation, chapter 5. VCH Publishers, Inc.
[5] Briand, L. C., El Emam, K., Laitenberger, O., Fussbroich, T. (1998): Using Simulation to Build Inspection Efficiency Benchmarks for Development Projects, International Conference on Software Engineering, 1998, pp. 340-449.
[6] Briand, L.C., El Emam, K., Freimut, B.G. and Laitenberger, O. (1997): Quantitative evaluation of capture-recapture models to control software inspections. Proceedings of the 8th International Symposium on Software Reliability Engineering, 234-244.
[7] L. R. Brothers, V. Sembugamoorthy, and A. E. Irgon (1992): Knowledge-based code inspection with ICICLE. In Innovative Applications of Artificial Intelligence 4: Proceedings of IAAI-92, 1992.
[8] David A. Ladd and J. Christopher Ramming (1992): Software research and switch software. In International Conference on Communications Technology, Beijing, China, 1992.
[9] David L. Parnas and David M. Weiss (1985): Active design reviews: Principles and practices. In Proceedings of the 8th International Conference on Software Engineering, pages 215-222, Aug. 1985.
[10] Dewayne E. Perry, Adam A. Porter and Lawrence G. Votta (2000): Empirical studies of software engineering: A Roadmap, Proc. of the 22nd Conference on Software Engineering, Limerick, Ireland, June 2000.
[11] Efron, B. and Tibshirani, R.J. (1993): An Introduction to the Bootstrap, Monographs on Statistics and Applied Probability, Vol. 57, Chapman & Hall.
[12] Giedre Sabaliauskaite, Fumikazu Matsukawa, Shinji Kusumoto, Katsuro Inoue (2002): "An Experimental Comparison of Checklist-Based Reading and Perspective-Based Reading for UML Design Document Inspection," ISESE, p. 148, International Symposium on Empirical Software Engineering (ISESE'02), 2002.
[13] Haoyang Che and Dongdong Zhao (2005): Managing Trust in Collaborative Software Development. http://l3d.cs.colorado.edu/~yunwen/KCSD2005/papers/che.trust.pdf, downloaded in October 2008.
[14] Harjumaa, L. and Tervonen, I. (2000): Virtual Software Inspections over the Internet, Proceedings of the Third Workshop on Software Engineering over the Internet, 2000, pp. 30-40.
[15] J. W. Gintell, J. Arnold, M. Houde, J. Kruszelnicki, R. McKenney, and G. Memmi (1993): Scrutiny: A collaborative inspection and review system. In Proceedings of the Fourth European Software Engineering Conference, September 1993.
[16] J. W. Gintell, M. B. Houde, and R. F. McKenney (1995): Lessons learned by building and using Scrutiny, a collaborative software inspection system. In Proceedings of the Seventh International Workshop on Computer Aided Software Engineering, July 1995.
[17] Johnson, P.M. and Tjahjono, D. (1998): Does Every Inspection Really Need a Meeting? Journal of Empirical Software Engineering, Vol. 4, No. 1.
[18] Kelly, J. (1993): Inspection and review glossary, part 1, SIRO Newsletter, vol. 2.
[19] Laitenberger, Oliver (2002): A Survey of Software Inspection Technologies, Handbook on Software Engineering and Knowledge Engineering, vol. II, 2002.
[20] Laitenberger, O. and DeBaud, J.M. (2000): An Encompassing Life-cycle Centric Survey of Software Inspection. Journal of Systems and Software, 50, 5-31.
[21] Lanubile and Giuseppe Visaggio (2000): Evaluating defect detection techniques for software requirements inspections, http://citeseer.ist.psu.edu/Lanubile00evaluating.html, downloaded Feb. 2008.
[22] Lawrence G. Votta (1993): Does every inspection need a meeting? ACM SIGSoft Software Engineering Notes, 18(5):107-114.


[23] Macdonald, F. and Miller, J. (1997): A comparison of tool-based and paper-based software inspection, Empirical Foundations of Computer Science, University of Strathclyde.
[24] Michael E. Fagan (1976): Design and code inspections to reduce errors in program development. IBM Systems Journal, 15(3):182-211.
[25] Michael E. Fagan (1986): Advances in software inspections. IEEE Trans. on Software Engineering, SE-12(7):744-751.
[26] Nachiappan Nagappan, Laurie Williams, John Hudepohl, Will Snipes and Mladen Vouk (2004): Preliminary Results On Using Static Analysis Tools for Software Inspections, 15th International Symposium on Software Reliability Engineering (ISSRE'04), pp. 429-439.
[27] Paulk, M., Curtis, B., Chrissis, M.B. and Weber, C.V. (1993): "Capability Maturity Model for Software", Technical Report CMU/SEI-93-TR-024, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania.
[28] Pfleeger, S. Lawrence (1991): Software Engineering: The Production of Quality Software, Macmillan Publishing Company, NY, USA.
[29] Porter, A. Adam and Johnson, P. M. (1997): Assessing Software Review Meetings: Results of a Comparative Analysis of Two Experimental Studies, IEEE Transactions on Software Engineering, vol. 23, No. 3, pp. 129-145.
[30] Stefan Biffl and Michael Halling (2003): Investigating the Defect Detection Effectiveness and Cost Benefit of Nominal Inspection Teams, IEEE Transactions on Software Engineering, vol. 29, No. 5, pp. 385-397.
[31] Stein, M., Riedl, J., Sören, J.H., Mashayekhi, V. (1997): A case study of distributed, asynchronous software inspection, in Proc. of the 19th International Conference on Software Engineering, pp. 107-117, 1997.
[32] Tom Gilb and Dorothy Graham (1993): Software Inspection. Addison-Wesley Publishing Co.
[33] Tyran, Craig K. (2006): "A Software Inspection Exercise for the Systems Analysis and Design Course". Journal of Information Systems Education, vol. 17(3).
[34] Vahid Mashayekhi, Janet M. Drake, Wei-tek Tsai, John Riedl (1993): Distributed, Collaborative Software Inspection, IEEE Software, 10: 66-75, Sept. 1993.
[35] Watts S. Humphrey (1989): Managing the Software Process, chapter 10. Addison-Wesley Publishing Company.
[36] Wheeler, D. A., B. Brykczynski, et al. (1996): Software Inspection: An Industry Best Practice, IEEE CS Press.

Adenike OSOFISAN is currently a Reader in the Department of Computer Science, University of Ibadan, Nigeria. She had her Masters degree in Computer Science from Georgia Tech, USA and her PhD from Obafemi Awolowo University, Ile-Ife, Nigeria. Her areas of specialty include data mining and communication.

Solomon Olalekan AKINOLA is currently a PhD student of Computer Science in the University of Ibadan, Nigeria. He had his Bachelor of Science in Computer Science and Masters of Information Science in the same University. He specializes in Software Engineering with special focus on software inspection.


Robustness of the Digital Image Watermarking Techniques against Brightness and Rotation Attack

Harsh K Verma1, Abhishek Narain Singh2, Raman Kumar3
1,2,3 Department of Computer Science and Engineering
Dr B R Ambedkar National Institute of Technology
Jalandhar, Punjab, India.
E-mail: 1vermah@nitj.ac.in, 2singhabhi444@gmail.com, 3er.ramankumar@aol.in

Abstract- The recent advances in the field of multimedia have provided many facilities in the transport, transmission and manipulation of data. Along with this advancement of facilities come larger threats to the authentication of data, its licensed use and its protection against illegal use. Many digital image watermarking techniques have been designed and implemented to stop the illegal use of digital multimedia images. This paper compares the robustness of three different watermarking schemes against brightness and rotation attacks. The robustness of the watermarked images has been verified on the parameters of PSNR (Peak Signal to Noise Ratio), RMSE (Root Mean Square Error) and MAE (Mean Absolute Error).

Keywords- Watermarking, Spread Spectrum, Fingerprinting, Copyright Protection.

I. INTRODUCTION
Advancements in the field of computers and technology have given many facilities and advantages to humans. It has now become much easier to search for and develop any digital content on the internet. Digital distribution of multimedia information allows the introduction of flexible, cost-effective business models that are advantageous for commerce transactions. On the other hand, its digital nature also allows individuals to manipulate, duplicate or access media information beyond the terms and conditions agreed upon. Multimedia data such as photos, video or audio clips and printed documents can carry hidden information or may have been manipulated, so that one is not sure of the exact data. To deal with the problem of trustworthiness of data, authentication techniques are being developed to verify the information integrity, the alleged source of the data, and the reality of the data [1]. Cryptography and steganography have been used throughout history as means to add secrecy to communication during times of war and peace [2].

A. Digital Watermarking
Digital watermarking involves embedding a structure in a host signal to "mark" its ownership [3]. We call these structures digital watermarks. Digital watermarks may be comprised of copyright or authentication codes, or a legend essential for signal interpretation. The existence of these watermarks within a multimedia signal goes unnoticed except when passed through an appropriate detector. Common types of signals to watermark are still images, audio, and digital video. To be effective, a watermark must be [4]:
• Unobtrusive; that is, it should be unperceivable when embedded in the host signal.
• Discreet; unauthorized watermark extraction or detection must be arduous, as the mark's exact location and amplitude are unknown to unauthorized individuals.
• Easily extracted; authorized watermark extraction from the watermarked signal must be reliable and convenient.
• Robust/fragile to incidental and unintentional distortions; depending on the intended application, the watermark must either remain intact or be easily modified in the face of signal distortions such as filtering, compression, cropping and re-sampling performed on the watermarked data.

In order to protect the ownership or copyright of digital media data, such as images, video and audio, encryption and watermarking techniques are generally used. Encryption techniques can be used to protect digital data during transmission from the sender to the receiver. Watermarking is one of the solutions for copyright protection, and it can also be used for fingerprinting, copy protection, broadcast monitoring, data authentication, indexing, medical safety and data hiding [5].

II. WATERMARK EMBEDDING AND EXTRACTION
A watermark, which often consists of a binary data sequence, is inserted into a host signal with the use of a key [6]. The information embedding routine imposes small signal changes, determined by the key and the watermark, to generate the watermarked signal. This embedding procedure (Fig. 1) involves imperceptibly modifying a host signal to reflect the information content in

the watermark so that the changes can be later observed with the use of the key to ascertain the embedded bit sequence. The process is called watermark extraction. The principal design challenge is in embedding the watermark so that it reliably fulfills its intended task. For copy protection applications, the watermark must be recoverable (Fig. 2) even when the signal undergoes a reasonable level of distortion, and for tamper assessment applications, the watermark must effectively characterize the signal distortions. The security of the system comes from the uncertainty of the key. Without access to this information, the watermark cannot be extracted or be effectively removed or forged.

Fig. 1 Watermark embedding process

Fig. 2 Watermark extraction process

III. WATERMARKING TECHNIQUES

Three different watermarking techniques, one from each domain, i.e. Spatial Domain, Frequency Domain and Wavelet Domain [7], have been chosen for the experiment. The techniques used for the comparative analysis of the watermarking process are CDMA Spread Spectrum watermarking in the spatial domain, Comparison of mid-band DCT coefficients in the frequency domain, and CDMA Spread Spectrum watermarking in the wavelet domain [8].

A. CDMA Spread Spectrum Watermarking in Spatial Domain
The algorithm of the above method is given below:

1. To embed the watermark
   a. Convert the original image into vectors
   b. Set the gain factor k for embedding
   c. Read in the watermark message and reshape it into a vector
   d. For each value of the watermark, a PN sequence is generated using an independent seed
   e. Scatter each of the bits randomly throughout the cover image
   f. When the watermark contains a '0', add the PN sequence with gain k to the cover image
      i. if watermark(bit) = 0
         watermarked_image = watermarked_image + k*pn_sequence
      ii. else if watermark(bit) = 1
         watermarked_image = watermarked_image + pn_sequence
   g. Repeat the same steps for the complete watermark vector

2. To recover the watermark
   a. Convert the watermarked image back into vectors
   b. Each seed is used to generate its PN sequence
   c. Each sequence is then correlated with the entire image
      i. If the correlation is high, that bit in the watermark is set to "1"
      ii. Else, that bit in the watermark is set to "0"
   d. Repeat the same steps for the complete watermarked vector
   e. Reshape the watermark vector and display the recovered watermark
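To make the spatial-domain procedure above concrete, the following minimal sketch implements steps 1 and 2 in Python with NumPy. The paper gives only MATLAB-style pseudocode, so the language, the gain value k and the key-derived seeds here are illustrative assumptions. The detection rule is kept consistent with the embedding rule, i.e. a high correlation is read as a '0' bit (the convention also used by the wavelet variant in Section III-C); step 2.c as printed states the opposite mapping.
__________________________________________________
import numpy as np

def embed_spatial_cdma(cover, watermark_bits, k=10.0, key=12345):
    # Steps 1.c-1.g: one key-derived PN sequence per watermark bit; bits
    # equal to 0 are embedded with gain k, bits equal to 1 with unit gain.
    rng = np.random.default_rng(key)
    marked = cover.astype(np.float64).copy()
    pn_sequences = []
    for bit in watermark_bits:
        pn = rng.choice([-1.0, 1.0], size=cover.shape)
        pn_sequences.append(pn)
        marked += (k if bit == 0 else 1.0) * pn
    return marked, pn_sequences

def recover_spatial_cdma(marked, pn_sequences):
    # Step 2: correlate the watermarked image with every regenerated PN
    # sequence; correlations above the mean are read as '0' (strong gain),
    # the rest as '1'.
    corrs = [np.corrcoef(marked.ravel(), pn.ravel())[0, 1]
             for pn in pn_sequences]
    threshold = float(np.mean(corrs))
    return [0 if c > threshold else 1 for c in corrs]

if __name__ == "__main__":
    cover = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
    bits = [0, 1, 0, 0, 1, 1, 0, 1]
    marked, pns = embed_spatial_cdma(cover, bits)
    # should reproduce `bits` for a sufficiently large gain k
    print(recover_spatial_cdma(marked, pns))
__________________________________________________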
B. Comparison of Mid-Band DCT Coefficients in Frequency Domain
The algorithm of the above method is given below:

3. To embed the watermark
   a. Process the image in blocks.
   b. For each block:
      Transform the block using the DCT.
      If message_bit is 0:
         If dct_block(5,2) < dct_block(4,3), swap them.
      Else:
         If dct_block(5,2) > dct_block(4,3), swap them.
      If dct_block(5,2) - dct_block(4,3) < k:
         dct_block(5,2) = dct_block(5,2) + k/2;
         dct_block(4,3) = dct_block(4,3) - k/2;
      Else:
         dct_block(5,2) = dct_block(5,2) - k/2;
         dct_block(4,3) = dct_block(4,3) + k/2;
   c. Move to the next block.

4. To recover the watermark
   a. Process the image in blocks.
   b. For each block:
      Transform the block using the DCT.
      If dct_block(5,2) > dct_block(4,3):
         Message = 1;
      Else:
         Message = 0;
   c. Process the next block.
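The block-DCT scheme above can be sketched as follows; this is an illustrative Python/SciPy rendering, not the authors' code, and the block size of 8, the strength k and the use of scipy.fft.dctn are assumptions. To keep embedding and recovery mutually consistent, the sketch enforces the ordering used by the recovery rule in step 4 (coefficient (5,2) larger than (4,3) encodes a 1) and only widens the coefficient gap, whereas the printed step 3 also contains a gap-reducing branch.
__________________________________________________
import numpy as np
from scipy.fft import dctn, idctn

def embed_dct_midband(cover, bits, k=25.0, block=8):
    # Step 3: one bit per block, encoded in the relative order of the
    # mid-band DCT coefficients (5,2) and (4,3); the gap is widened to at
    # least k so the ordering survives moderate distortion.
    img = cover.astype(np.float64).copy()
    h, w = img.shape
    idx = 0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if idx >= len(bits):
                return img
            B = dctn(img[r:r + block, c:c + block], norm='ortho')
            hi, lo = ((5, 2), (4, 3)) if bits[idx] == 1 else ((4, 3), (5, 2))
            if B[hi] < B[lo]:
                B[hi], B[lo] = B[lo], B[hi]      # enforce the bit's ordering
            if B[hi] - B[lo] < k:                # strengthen the embedded bit
                B[hi] += k / 2.0
                B[lo] -= k / 2.0
            img[r:r + block, c:c + block] = idctn(B, norm='ortho')
            idx += 1
    return img

def recover_dct_midband(marked, n_bits, block=8):
    # Step 4: re-read the ordering of the same two coefficients per block.
    h, w = marked.shape
    bits = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if len(bits) >= n_bits:
                return bits
            B = dctn(marked[r:r + block, c:c + block].astype(float),
                     norm='ortho')
            bits.append(1 if B[5, 2] > B[4, 3] else 0)
    return bits
__________________________________________________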
C. CDMA Spread Spectrum Watermarking in Wavelet Domain
The algorithm of the above method is given below:

5. To embed the watermark
   a. Convert the original image into vectors
   b. Set the gain factor k for embedding
   c. Read in the watermark message and reshape it into a vector
   d. Perform the Discrete Wavelet Transformation of the cover image
      i. [cA,cH,cV,cD] = dwt2(X,'wname') computes the approximation coefficients matrix cA and the detail coefficients matrices cH, cV and cD (horizontal, vertical and diagonal, respectively), obtained by wavelet decomposition of the input matrix X. The 'wname' string contains the wavelet name.
   e. Add the PN sequence to the H and V components
      i. If (watermark == 0)
            cH1 = cH1 + k*pn_sequence_h;
            cV1 = cV1 + k*pn_sequence_v;
   f. Perform the Inverse Discrete Wavelet Transformation
      i. watermarked_image = idwt2(cA1,cH1,cV1,cD1,'wname',[Mc,Nc])

6. To recover the watermark
   a. Convert the watermarked image back into vectors
   b. Convert the watermark to the corresponding vectors
   c. Initialize the watermark vector to all ones
      i. Watermark_vector = ones(1, MW*NW), where MW = height of the watermark and NW = width of the watermark.
   d. Find the correlation in the H and V components of the watermarked image
      i. correlation_h() = corr2(cH1, pn_sequence_h);
      ii. correlation_v() = corr2(cV1, pn_sequence_v);
      iii. correlation(wtrmkd_img) = (correlation_h() + correlation_v())/2;
   e. Compare the correlation with the mean correlation
      i. if (correlation(bit) > mean(correlation))
            watermark_vector(bit) = 0;
   f. Reshape the watermark_vector back into the watermark_image
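A minimal Python sketch of the wavelet-domain variant is given below, using the PyWavelets dwt2/idwt2 pair in place of the MATLAB calls quoted above; the wavelet name ('haar'), gain k and key are illustrative assumptions. As in steps 5.e and 6.e, PN sequences are added to the cH and cV sub-bands only for watermark bits equal to 0, and bits whose correlation exceeds the mean are recovered as 0.
__________________________________________________
import numpy as np
import pywt

def embed_dwt_cdma(cover, bits, k=5.0, wavelet='haar', key=2009):
    # One-level DWT; for every watermark bit equal to 0, key-derived PN
    # sequences scaled by k are added to the horizontal and vertical bands.
    rng = np.random.default_rng(key)
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), wavelet)
    pn = []
    for bit in bits:
        pn_h = rng.choice([-1.0, 1.0], size=cH.shape)
        pn_v = rng.choice([-1.0, 1.0], size=cV.shape)
        pn.append((pn_h, pn_v))
        if bit == 0:
            cH = cH + k * pn_h
            cV = cV + k * pn_v
    marked = pywt.idwt2((cA, (cH, cV, cD)), wavelet)
    return marked, pn

def recover_dwt_cdma(marked, pn, wavelet='haar'):
    # Recovery: correlate cH/cV with each PN pair; bits whose average
    # correlation exceeds the mean over all bits are read as 0.
    _, (cH, cV, _) = pywt.dwt2(marked.astype(float), wavelet)
    corr = [(np.corrcoef(cH.ravel(), ph.ravel())[0, 1] +
             np.corrcoef(cV.ravel(), pv.ravel())[0, 1]) / 2.0
            for ph, pv in pn]
    mean_corr = float(np.mean(corr))
    return [0 if c > mean_corr else 1 for c in corr]
__________________________________________________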
Fig. 3 Brightness Attack: (a) Original watermarked image, (b) Watermarked image after -25% brightness, (c) Watermarked image after 25% brightness, (d) Watermarked image after 50% brightness

IV. BRIGHTNESS ATTACK

The brightness attack is one of the most common types of attack on digital multimedia images. Three different levels of brightness attack have been applied. First the brightness is increased by -25%, i.e. decreased by 25%; next the brightness is increased by 25%; and finally the brightness is increased by 50%. The brightness attack is shown in Fig. 3. Table 1 shows the results of the brightness attack on the different watermarking techniques for the parameters Peak Signal to Noise Ratio [9], average Root Mean Square Error and average Mean Absolute Error.

V. ROTATION ATTACK

The rotation attack is among the most popular kinds of geometrical attack on digital multimedia images [10]. Three levels of rotation have been implemented. First the original watermarked image is rotated by 90 degrees, then by 180 degrees, and finally by 270 degrees in the clockwise direction. The rotation attack is shown in Fig. 4. The results of the rotation attack are shown in Table 2 for all three watermarking schemes.

VI. EXPERIMENTAL RESULTS

The comparative analysis of the three watermarking schemes has been done on the basis of the brightness and rotation attacks. The results of the individual watermarking techniques have been compared on the basis of PSNR (Peak Signal to Noise Ratio), RMSE (Root Mean Square Error) and MAE (Mean Absolute Error).
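The attacks and evaluation measures of Sections IV-VI can be reproduced along the following lines. This is an illustrative sketch only (the exact measurement procedure is not specified in the paper), assuming square grayscale images so that a rotated image can be compared pixel-wise with the unrotated reference.
__________________________________________________
import numpy as np

def brightness_attack(img, percent):
    # e.g. percent = -25, 25 or 50, as in Fig. 3
    return np.clip(img.astype(float) * (1.0 + percent / 100.0), 0, 255)

def rotation_attack(img, degrees):
    # 90/180/270 degree clockwise rotation, as in Fig. 4
    # (np.rot90 rotates counter-clockwise, hence the negative count)
    return np.rot90(img, k=-(degrees // 90))

def psnr_rmse_mae(reference, attacked, peak=255.0):
    # PSNR = 20*log10(peak/RMSE); RMSE and MAE over all pixels
    diff = reference.astype(float) - attacked.astype(float)
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    mae = float(np.mean(np.abs(diff)))
    psnr = float('inf') if rmse == 0 else 20.0 * np.log10(peak / rmse)
    return psnr, rmse, mae
__________________________________________________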
Table 1 Performance analysis of watermarking techniques against the Brightness Attack

Brightness | Technique             | PSNR (dB) | Avg. RMSE | Avg. MAE
-25%       | CDMA SS in Spatial D. | 19.982    | 25.55     | 2.563
-25%       | Comp. of Mid Band DCT | 22.427    | 19.283    | 1.459
-25%       | CDMA SS in Wavelet D. | 26.215    | 12.466    | 0.61
25%        | CDMA SS in Spatial D. | 17.203    | 35.184    | 4.86
25%        | Comp. of Mid Band DCT | 21.339    | 21.855    | 1.874
25%        | CDMA SS in Wavelet D. | 24.305    | 15.533    | 0.946
50%        | CDMA SS in Spatial D. | 16.921    | 36.345    | 5.185
50%        | Comp. of Mid Band DCT | 18.435    | 30.534    | 3.659
50%        | CDMA SS in Wavelet D. | 22.605    | 18.892    | 1.40

Fig. 4 Rotation Attack: (a) Original watermarked image, (b) Watermarked image after 90 degree rotation, (c) Watermarked image after 180 degree rotation, (d) Watermarked image after 270 degree rotation

Table 2 Performance analysis of watermarking techniques against the Rotation Attack

Rotation   | Technique             | PSNR (dB) | Avg. RMSE | Avg. MAE
90 degree  | CDMA SS in Spatial D. | 12.286    | 61.973    | 15.087
90 degree  | Comp. of Mid Band DCT | 15.602    | 42.305    | 7.035
90 degree  | CDMA SS in Wavelet D. | 19.50     | 27.008    | 2.866
180 degree | CDMA SS in Spatial D. | 11.193    | 70.283    | 19.398
180 degree | Comp. of Mid Band DCT | 11.999    | 64.059    | 16.122
180 degree | CDMA SS in Wavelet D. | 13.517    | 53.784    | 11.368
270 degree | CDMA SS in Spatial D. | 11.818    | 65.404    | 16.803
270 degree | Comp. of Mid Band DCT | 13.533    | 53.685    | 11.323
270 degree | CDMA SS in Wavelet D. | 16.251    | 39.263    | 6.057

Fig. 5 Graph showing PSNR values for the brightness attack on the different watermarking schemes

Fig. 6 Graph showing average RMSE values for the brightness attack on the different watermarking schemes

Fig. 7 Graph showing average MAE values for the brightness attack on the different watermarking schemes

A. Results of Brightness Attack

Fig. 5, 6 and 7 show the results of the brightness attack on all three watermarking techniques; a comparative analysis is done thereafter. A greater PSNR value implies that the watermarking technique is more robust against the attack. Looking at Fig. 5, the DWT (CDMA SS watermarking in the wavelet domain) technique proves to be the best candidate for digital image watermarking, since it has a greater PSNR value than the other two techniques. Similarly, from Fig. 6 and 7, the values of Root Mean Square Error and Mean Absolute Error are also minimum for the Discrete Wavelet Transform
domain technique, so it proves to be the best against the brightness attack.

B. Results of Rotation Attack

A comparison of the results of the rotation attack is done by showing the results in Fig. 8, 9 and 10. Graphs are drawn for all three evaluation parameters, and a comparative analysis of the results is done thereafter. The PSNR values in Fig. 8 show that the CDMA SS watermarking in wavelet domain technique has the greatest PSNR value. This shows that wavelet domain watermarking is the best practice for digital image watermarking.

Fig. 8 Graph showing PSNR values for the rotation attack on the different watermarking schemes

Fig. 9 Graph showing average RMSE values for the rotation attack on the different watermarking schemes

Fig. 10 Graph showing average MAE values for the rotation attack on the different watermarking schemes

The experimental values of the brightness and rotation attacks show that the CDMA Spread Spectrum watermarking technique in the wavelet domain is the best choice for the watermarking of digital multimedia images. The Discrete Cosine Transformation domain shows somewhat greater robustness against the rotation attack. Spatial domain watermarking techniques are not good candidates for large watermarks; they show poor results with larger watermark sizes.

VII. CONCLUSIONS

This paper focuses on the robustness of watermarking techniques chosen from all three domains of watermarking against brightness and rotation attacks. The key conclusion of the paper is that the wavelet domain watermarking technique is the best and most robust scheme for the watermarking of digital multimedia images. This work could further be extended to the watermarking of other digital content such as audio and video.

REFERENCES
[1] Austin Russ, "Digital Rights Management Overview", SANS Institute Information Security Reading Room, October 2001.
[2] W. Stallings, "Cryptography and Network Security: Principles and Practice", Prentice-Hall, New Jersey, 2003.
[3] D. Kundur, D. Hatzinakos, "A Robust Digital Image Watermarking Scheme Using the Wavelet-Based Fusion", Proc. International Conference on Image Processing (ICIP'97), Vol. 1, pp. 544, 1997.
[4] J. Liu and X. He, "A Review Study on Digital Watermarking", First International Conference on Information and Communication Technologies (ICICT 2005), pp. 337-341, August 2005.
[5] F. A. P. Petitcolas, R. J. Anderson, M. G. Kuhn, "Information Hiding - A Survey", Proceedings of the IEEE, Vol. 87, No. 7, pp. 1062-1078, 1999.
[6] Nedeljko Cvejic, Tapio Seppanen, "Digital Audio Watermarking Techniques and Technologies: Applications and Benchmarks", pages x-xi, IGI Global, Illustrated edition, August 7, 2007.
[7] Corina Nafornita, "A Wavelet-Based Watermarking for Still Images", Scientific Bulletin of Politehnica University of Timisoara, Trans. on Electronics and Telecommunications, 49(63), special number dedicated to the Proc. of Symposium of Electronics and Telecommunications ETc, Timisoara, pp. 126-131, 22-23 October 2004.
[8] Chris Shoemaker, "Hidden Bits: A Survey of Techniques for Digital Watermarking", Independent Study EER-290, Prof. Rudko, Spring 2002.
[9] M. Kutter and F. Hartung, "Introduction to Watermarking Techniques", Chapter 5 of "Information Hiding: Techniques for Steganography and Digital Watermarking", S. Katzenbeisser and F. A. P. Petitcolas (eds.), Norwood, MA: Artech House, pp. 97-120, 2000.
[10] Ping Dong, Jovan G. Brankov, Nikolas P. Galatsanos, Yongyi Yang, Franck Davoine, "Digital Watermarking Robust to Geometric Distortions", IEEE Transactions on Image Processing, Vol. 14, No. 12, December 2005.
ODMRP with Quality of Service and Local Recovery with Security Support

Farzane Kabudvand
Computer Engineering Department, Zanjan Azad University
Zanjan, Iran
E-mail: fakabudvand@yahoo.com

Abstract
In this paper we focus on one critical issue in mobile ad hoc networks, namely multicast routing, and propose a mesh-based "on demand" multicast routing protocol for ad hoc networks with QoS (quality of service) support. A model is then presented which is used to create a local recovery mechanism that joins nodes to multi-sectional groups in the minimum time, and a method for providing security in this protocol is presented.

Keywords: multicast protocol, ad hoc, security, request packet

1. Introduction

Multicasting is the transmission of a packet to a group of hosts identified by a destination address. A multicast datagram is typically delivered to all members of its destination host group with the same reliability as regular unicast datagrams [4]. In the case of IP, for example, the datagram is not guaranteed to arrive intact at all members of the destination group, or in the same order relative to other datagrams.

Multicasting is intended for group-oriented computing. There are more and more applications in which one-to-many dissemination is necessary. The multicast service is critical in applications characterized by the close collaboration of teams (e.g., rescue patrols, military battalions, scientists, etc.) with requirements for audio and video conferencing and sharing of text and images [3].

A MANET consists of a dynamic collection of nodes without the aid of an infrastructure or centralized administration. The network topology can change randomly, rapidly and at unpredictable times. The goal of MANETs is to extend mobility into the realm of autonomous, mobile, wireless domains, where a set of nodes form the network routing infrastructure in an ad hoc fashion. The majority of applications for the MANET technology are in areas where rapid deployment and dynamic reconfiguration are necessary and the wire-line network is not available [4]. These include military battlefields, emergency search and rescue sites, classrooms, and conventions where participants share information dynamically using their mobile devices.

QoS (Quality of Service) routing is another critical issue in MANETs. QoS defines nonfunctional characteristics of a system that affect the perceived quality of the result. In multimedia, this might include picture quality, image quality, delay, and speed of response. From a technological point of view, QoS characteristics may include timeliness (e.g., delay or response time), bandwidth (e.g., bandwidth required or available), and reliability (e.g., normal operation time between failures or down time from failure to restarting normal operation) [8].

In this paper, we propose a new technique for supporting QoS routing in this protocol. A model is then presented which is used to create a local recovery mechanism that joins nodes to multi-sectional groups in the minimum time, which increases the reliability of the network and prevents data wastage while data is distributed in the network.

2. Proposed Protocol Mechanism

A. Motivation
ODMRP (the On-demand Multicast Routing Protocol) provides a high packet delivery ratio even at high mobility, but at the expense of heavy control overhead. It does not scale well as the number of senders and the traffic load increase. Since every source periodically floods advertising RREQ (route request) packets through the network, congestion is likely to occur when the number of sources is high. Control overhead is therefore one of the main weaknesses of ODMRP in the presence of multiple sources. CQMP solved this problem, but both of these protocols have a common weakness, which is the lack of any admission control policy and resource reservation mechanism. Hence, to reduce the overhead generated by the control packets during route discovery and to apply admission control to network traffic, the proposed protocol adopts two efficient optimization mechanisms. One is
applied to nodes that cannot support the QoS requirements, which therefore ignore the RREQ packet. The other is applied at every intermediate node and is based on the comparison of the available bandwidth of each node versus the required bandwidth, according to the node position and the neighboring node's role (sender, intermediate, receiver, ...). To address the control packet problem, we use the CQMP protocol's idea of RREQ packet consolidation; moreover, we apply an admission control policy along with bandwidth reservation in our new protocol.

B. Neighborhood maintenance

Neighborhood information is important in the proposed protocol. To maintain the neighborhood information, each node is required to periodically disseminate a "Hello" packet to announce its existence and traffic information to its neighbor set. This packet contains the available bandwidth (Bavailable) of the originator and is sent at a default rate of one packet per three seconds with the time to live (TTL) set to 1. Every node in the network receives the Hello packets from its neighbors and maintains a neighbor list that contains all its neighbors with their corresponding traffic and co-neighbor number.
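A minimal sketch of this Hello-based bookkeeping is given below (Python). The packet fields and the 3-second interval follow the description above, while the data structures, the expiry policy and all names are assumptions, since the paper does not define concrete formats.
__________________________________________________
import time
from dataclasses import dataclass, field

HELLO_INTERVAL = 3.0   # seconds, as stated above
HELLO_TTL = 1          # Hello packets are never forwarded

@dataclass
class Hello:
    origin: str
    b_available: float   # available bandwidth advertised by the originator
    co_neighbors: int    # number of neighbors the originator itself has
    ttl: int = HELLO_TTL

@dataclass
class NeighborEntry:
    b_available: float
    co_neighbors: int
    last_heard: float = field(default_factory=time.time)

class NeighborTable:
    # Per-node neighbor list built from received Hello packets.
    def __init__(self, expiry=3 * HELLO_INTERVAL):
        self.entries = {}
        self.expiry = expiry

    def on_hello(self, hello: Hello):
        # TTL = 1, so a received Hello always comes from a direct neighbor.
        self.entries[hello.origin] = NeighborEntry(hello.b_available,
                                                   hello.co_neighbors)

    def purge(self):
        # Drop neighbors not heard from for a few intervals (assumed policy).
        now = time.time()
        self.entries = {n: e for n, e in self.entries.items()
                        if now - e.last_heard <= self.expiry}
__________________________________________________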
C. Route discovery and resource reservation

The proposed protocol conforms to a pure on-demand routing protocol. It neither maintains any routing table nor exchanges routing information periodically. When a source node needs to establish a route to another node with respect to a specific QoS requirement, it disseminates a RREQ that mainly includes the requested bandwidth, the delay and the node's neighbor list. Hence, each intermediate node, upon receiving the RREQ, performs the following tasks:
• Updates its neighbors' co-neighbor numbers;
• Determines whether it can consolidate into this RREQ packet information about other sources from which it is expecting to hear a RREQ. When a source receives a RREQ from another source, it processes the packet just as a non-source intermediate node does; in addition it checks its INT to determine whether it would expire within a certain period of time, in other words the source checks if it is about to create and transmit its own RREQ between now and TIME-INTERVAL. If so, it adds one more row to the RREQ.
• Tries to respond to the QoS requirements by applying a bandwidth decision in reserving the requested bandwidth B, as described in the following, and before transmitting the packet appends its one-hop neighbor list along with the corresponding co-neighbor numbers to the packet.

As the RREQ may contain more than one Source Row, the processing node goes through each and every Source Row entry in the RREQ and makes an admission decision for the non-duplicated rows. The admission decision is made at the processing node and its neighbors listed in the neighbor table, as described in Section 3. If the request is accepted and there was enough bandwidth, the node will add a route entry in its routing table with status "explored". The node will remain in the explored status for a short period Texplored. If no reply arrives at the explored node in time, the route entry will be discarded at the node and late-coming reply packets will be ignored. Thus, we reduce the control overhead as well as exclude invalid information from the node's routing table.

Upon receiving each request packet, as the RREQ may contain more than one Source Row, the receiver goes through each entry in the packet, then builds and transmits a REPLY packet based upon the matched entries along the reverse route. The available bandwidth of intermediate and neighboring nodes may have changed due to the activities of other sessions. Therefore, similar to the admission control for RREQs, upon receiving a RREP, nodes double-check the available bandwidth to prevent possible changes during the route discovery process. If the packet is accepted, the node will update the route status to "registered". After registration, the nodes are ready to accept the real data packets of the flow. The node will only stay in the registered status for a short period Tregistered. If no data packet arrives at the registered node in time, it means that the route was not chosen by the source, and the route entry will then be deleted at the node.

When any node receives a REPLY packet, it checks whether the next node Id in any of the entries in the REPLY matches its own. If so, it realizes that it is on the way to a source. It checks its own available bandwidth and compares it with the required bandwidth of this flow, then checks its one-hop neighbors' available bandwidth as recorded in the neighbor table. If there is enough bandwidth, it sets a flag indicating that it is part of the FORWARDING GROUP for that multicast group, and then builds and broadcasts its own REPLY packet. When a REPLY reaches a source, a route is established from the source to the receiver. The source can now transmit data packets towards the receiver. A Forwarding Group node will forward any data packets received from a member of that group.
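The admission decision described above can be sketched as follows: a row of a RREQ (and, later, the corresponding RREP) is accepted only if the node itself and its one-hop neighbours advertise at least the requested bandwidth, and accepted routes move through the "explored" and "registered" states guarded by short timers. The timer values and all structure names are illustrative assumptions.
__________________________________________________
import time

T_EXPLORED = 2.0     # seconds a route entry may stay "explored" (assumed)
T_REGISTERED = 2.0   # seconds a route entry may stay "registered" (assumed)

class RouteTable:
    def __init__(self, own_bandwidth, neighbor_bw):
        self.own_bandwidth = own_bandwidth  # this node's available bandwidth
        self.neighbor_bw = neighbor_bw      # dict: neighbor id -> bandwidth
        self.entries = {}                   # flow_id -> (status, timestamp)

    def _bandwidth_ok(self, required):
        # Admission test: the node itself and every listed one-hop neighbor
        # must have at least the requested bandwidth available.
        if self.own_bandwidth < required:
            return False
        return all(bw >= required for bw in self.neighbor_bw.values())

    def on_rreq_row(self, flow_id, required_bw):
        # Called once per non-duplicated Source Row of a received RREQ.
        if not self._bandwidth_ok(required_bw):
            return False                    # cannot support the QoS request
        self.entries[flow_id] = ("explored", time.time())
        return True

    def on_rrep(self, flow_id, required_bw):
        # Double-check the bandwidth again when the reply comes back.
        status = self.entries.get(flow_id)
        if not status or status[0] != "explored":
            return False
        stale = time.time() - status[1] > T_EXPLORED
        if stale or not self._bandwidth_ok(required_bw):
            self.entries.pop(flow_id, None)
            return False
        self.entries[flow_id] = ("registered", time.time())
        return True
__________________________________________________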
D. Data Forwarding

After constructing the routes, the source can send packets to the multicast group via the selected routes and forwarding nodes. Upon receiving a data packet, a node forwards it only when:
• it is not a duplicate packet;
• the forwarding flag for this session has not expired;
• there is an entry with registered or reserved status corresponding to this session.

It then changes its 'registered' status to 'reserved'. The node will only stay in the reserved status for a short period Treserved. This procedure minimizes the traffic overhead and prevents sending packets through stale routes.
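A Forwarding Group node's per-packet decision, directly mirroring the three conditions above, might look as follows. This is a sketch only; duplicate detection by (source, sequence number) and the timer value are assumptions.
__________________________________________________
import time

T_RESERVED = 2.0   # seconds a 'reserved' entry stays valid (assumed value)

class ForwardingGroupNode:
    def __init__(self):
        self.seen = set()          # (source, seq) pairs already forwarded
        self.fg_flag_expiry = {}   # session id -> time the FG flag expires
        self.route_status = {}     # session id -> (status, timestamp)

    def should_forward(self, session, source, seq):
        now = time.time()
        # 1) not a duplicate packet
        if (source, seq) in self.seen:
            return False
        # 2) forwarding flag for this session has not expired
        if self.fg_flag_expiry.get(session, 0.0) < now:
            return False
        # 3) an entry with registered or reserved status exists
        status = self.route_status.get(session)
        if status is None or status[0] not in ("registered", "reserved"):
            return False
        if status[0] == "reserved" and now - status[1] > T_RESERVED:
            return False               # reserved entry has gone stale
        # the first data packet promotes 'registered' to 'reserved'
        self.route_status[session] = ("reserved", now)
        self.seen.add((source, seq))
        return True
__________________________________________________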
3. Local Recovery Mechanism Based on the Proposed Protocol with Reliability

In this section the mechanism of local recovery is discussed on the basis of the proposed protocol. The suggested method leads to fast recovery of the network, so that the destination can be connected to the source through a new route. Discovered routes between destination and source may be broken for many reasons, most of which occur because of the removal of nodes.

Fig. 3 Local recovery with the proposed method

Considering Fig. 3, if the direct link A-B breaks, an indirect route from A to B can be formed through C, which stands next to them. In this condition, if a packet with a larger hop count is sent to find the next node, repairing the present route becomes possible and there is no need to regenerate the route end to end. The algorithm works as follows: when a middle (Forwarding Group, FG) node recognizes a route break between itself and the next hop, it places the data in its buffer and starts a timer. It then sends a packet with more hops (i.e., two hops) and puts in it the set of nodes which are placed farther along the path between source and destination. On receiving this packet, every node checks whether its name is in it. If the address of the node corresponds to one of the listed addresses, the answer packet is sent back, and as a result the data can be sent through a new route. But if the answer is not received by the end of the time set on the timer, the packet is discarded and another route has to be discovered again. Every node which receives a local recovery packet and its answer will function as an FG node for that destination. Thus every node should be aware of the FG nodes between itself and the destination. In this way we can recognize the alteration in the structure of the protocol, that is, every FG node adds only its own name to the received answer packet before sending it up to a higher node. In other words, the existing addresses in the answer packet are not omitted; rather, the address of the FG node is added to the answer packet. In this way every FG node can become aware of the other FG nodes between itself and the destination, and can start to use them. Here the number of hops is set to 2.

As can be seen in Fig. 4, while sending the membership answer packet, the destination in this method puts the address of the preceding group in the packet and sends it. The FG nodes then do the same. Therefore every node can recognize the members of the preceding groups of all preceding nodes between itself and the destination, and can begin to send a local recovery packet in case of route breakage.

Fig. 4 Sending the local recovery packet and updating addresses

The timer is taken as 1 second; that is, if an FG node which sends data fails to receive the same data from the following FG node within at most one second, it detects a route break and sets another timer to 0.1 s in order to receive the answer packet so that a new route can be obtained. During this time the packet is kept in a temporary buffer. If a new route cannot be found, the packet is discarded.
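The timer behaviour just described can be sketched as follows (Python). The 1 s and 0.1 s values come from the text, while the callables, buffer handling and class layout are placeholders for whatever the real protocol engine provides.
__________________________________________________
import time

DATA_TIMEOUT = 1.0     # no echo of own data from the next FG node for 1 s
ANSWER_TIMEOUT = 0.1   # wait at most 0.1 s for an answer to the recovery packet

class LocalRecovery:
    def __init__(self, downstream_fg_nodes):
        # FG/destination addresses learned from the membership answer packets
        self.downstream_fg_nodes = list(downstream_fg_nodes)
        self.buffer = []
        self.last_echo = time.time()

    def on_data_echo(self):
        # Called whenever the following FG node is heard forwarding our data.
        self.last_echo = time.time()

    def link_broken(self):
        return time.time() - self.last_echo > DATA_TIMEOUT

    def recover(self, send_recovery_packet, wait_for_answer):
        # Two-hop local recovery: advertise the known downstream FG nodes
        # and wait briefly for any of them to answer; drop the buffer otherwise.
        send_recovery_packet(hops=2, candidates=self.downstream_fg_nodes)
        deadline = time.time() + ANSWER_TIMEOUT
        while time.time() < deadline:
            answer = wait_for_answer(timeout=deadline - time.time())
            if answer in self.downstream_fg_nodes:
                return answer        # new next hop; buffered data can be sent
        self.buffer.clear()          # no answer in time: discard the packets
        return None
__________________________________________________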
4. Security in Mobile Ad Hoc Networks
In mobile ad hoc networks, due to unreliable links and the lack of infrastructure, providing secure communications is a big challenge. In wired and wireless networks, cryptographic techniques are used for secure communications. The key is a piece of input information for cryptography; if the key is discovered, the encrypted information can be revealed. Several trust models exist, because the authentication of key ownership is important. One of the important models is the centralized model, in which a hierarchical trust structure can be used. For security it is necessary to distribute the trusted control to multiple entities, that is, the system public key is distributed to the whole network, because a single certification node would be a security bottleneck, while multiple replicas of the certification node are fault tolerant.

In the proposed technique for security we consider a number of nodes that hold a share of the system private key and are able to produce certificates. These nodes are named s-nodes. The s-nodes and the forwarding nodes (a subset of non s-nodes) form a group. When an s-node enters the network it broadcasts a request packet. This packet has extra attributes and contains a TTL field, which is decreased by 1 as the packet leaves each node. When a node receives the request packet, it first checks the validity of the packet before taking any further action, and discards non-authenticated packets. Nodes neighboring the s-node receive the request and rebroadcast it, and this process continues at the other nodes. When another s-node receives the packet from a neighbor (for example node B), it sends a server reply message back to that neighbor (node B). When B receives the join reply packet, it learns that its neighbor is an s-node and that it is on the selected path between two servers, and it sets its forwarding attribute to 1. After all s-nodes finish the join procedure, the group mesh structure is formed. This procedure provides security in the whole network.
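A rough sketch of the s-node join handling described above is given below. The packet fields, the signature check and the transmission callables are placeholders, since the paper does not specify concrete message formats.
__________________________________________________
from dataclasses import dataclass

@dataclass
class JoinRequest:
    origin: str        # id of the s-node that started the join
    ttl: int           # decremented at every hop
    signature: bytes   # placeholder for the authentication data

class Node:
    def __init__(self, node_id, is_s_node, verify, send):
        self.node_id = node_id
        self.is_s_node = is_s_node   # holds a share of the system private key?
        self.verify = verify         # callable that authenticates a request
        self.send = send             # callable(packet, to=...) used to transmit
        self.forwarding = 0          # 1 when on a path between two s-nodes

    def on_join_request(self, req: JoinRequest, from_neighbor: str):
        if not self.verify(req):     # discard non-authenticated packets
            return
        if self.is_s_node:           # another s-node answers the join
            self.send(("JOIN_REPLY", self.node_id), to=from_neighbor)
            return
        if req.ttl <= 1:             # TTL exhausted, stop rebroadcasting
            return
        self.send(JoinRequest(req.origin, req.ttl - 1, req.signature),
                  to="broadcast")

    def on_join_reply(self, reply, from_neighbor: str):
        # The neighbor is an s-node and we sit on the path between two
        # servers, so mark ourselves as a forwarding node of the group mesh.
        self.forwarding = 1
        self.send(reply, to="upstream")  # pass the reply back along the path
__________________________________________________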
5. Performance Evaluation

We implemented the proposed protocol in GloMoSim. The performance of the proposed scheme is evaluated in terms of the average number of RREQs sent by every node, the end-to-end delay, and the packet delivery ratio. In the simulation, we modeled a network of 50 mobile hosts placed randomly within a 1000 x 1000 m² area. The radio propagation range for each node was 250 meters and the channel capacity was 2 Mbit/sec. Each simulation runs for 300 seconds of simulation time. The MAC protocol used in our simulations is IEEE 802.11 DCF [22]. We used Constant Bit Rate traffic, and the size of the data payload was 512 bytes. The nodes are placed randomly within this region. The multicast sources are selected randomly from all 50 nodes, and most of them act as receivers at the same time. The mobility model used is random waypoint, in which each node independently picks a random destination and speed from an interval (min, max) and moves toward the chosen destination at this speed. Once it reaches the destination, it pauses for a given number of seconds and repeats the process. Our minimum speed is 1 m/s, the maximum speed is 20 m/s and the pause interval is 0 seconds. The RREQ interval is set at 3 seconds. The HELLO refresh interval is the same as the RREQ interval. We have varied the following items: mobility speed, number of multicast senders and network traffic load.

Performance metrics used:
• RREQ Control Packet Load: the average number of RREQ packet transmissions by a node in the network.
• Packet Delivery Ratio: the ratio of data packets sent by all the sources that are received by a receiver.
• End-to-end delay: the time taken for a packet to be transmitted across the network from source to destination.

6. Results

In Fig. 5, we calculated the delivery ratio of data packets received by destination nodes over data packets sent by source nodes. Without admission control, more packets are injected into the network even though they cannot reach their destinations. These packets waste a lot of channel bandwidth. On the other hand, if the admission control scheme is enabled, the inefficient usage of channel resources can be limited and the saturation condition can be alleviated. Since the proposed protocol has fewer RREQ packet transmissions than ODMRP and CQMP, there is less chance of data packet loss by collision or congestion. Owing to the additional Hello overhead, the proposed protocol performs a little worse when there are few sources. The data delivery ratio of the evaluated protocols decreases as the number of sources increases under high mobility conditions, but the proposed protocol constantly maintains about 4 to 5 percent higher packet delivery ratio than the others because of the reduction of join query overhead.
Fig. 5 Packet Delivery Ratio as a function of the Number of Sources

7. Conclusion

In this paper, we have proposed a mesh-based, on-demand multicast routing protocol with an admission control decision; the proposed protocol, similar to CQMP, uses consolidation of multicast group membership advertising packets plus an admission control policy. A model was then presented which is used to create a local recovery mechanism that joins nodes to multi-sectional groups in the minimum time, which increases the reliability of the network and prevents data wastage while data is distributed in the network. In this mechanism a new packet, known as the local recovery packet, was created by using a membership answer packet and placing in it the addresses of the nodes between a preceding group and the destination. Here we considered the number of hops restricted, but it can be changed. We implemented the proposed protocol using GloMoSim and show by simulations that the proposed protocol gives up to a 30 percent reduction in control packet load. In addition, our results show that as the number of mobile sources increases and under large traffic loads, the proposed protocol performs better than ODMRP and CQMP in terms of data packet delivery ratio, end-to-end delay and number of RREQ packets. With the proposed scheme, network saturation under overloaded traffic can be alleviated and, thereby, the quality of service can be improved.

References
[1] Yu-Chee Tseng, Wen-Hua Liao, Shih-Lin Wu, "Mobile Ad Hoc Networks and Routing Protocols", pp. 371-392, 2002.
[2] S. Deering, "Host extensions for IP multicasting", RFC 1112, August 1989, available at http://www.ietf.org/rfc/rfc1112.txt.
[3] Thomas Kunz, "Multicasting: from fixed networks to ad hoc networks", pp. 495-507, 2002.
[4] S. Corson, J. Macker, "Mobile ad hoc networking (MANET): Routing protocol performance issues and evaluation considerations", RFC 2501, January 1999, available at http://www.ietf.org/rfc/rfc2501.txt.
[5] H. Moustafa, H. Labiod, "Multicast Routing in Mobile Ad Hoc Networks", Telecommunication Systems 25:1,2, pp. 65-88, 2004.
[6] H. Dhillon, H.Q. Ngo, "CQMP: a mesh-based multicast routing protocol with consolidated query packets", IEEE Wireless Communications and Networking Conference, WCNC 2005, pp. 2168-2174.
[7] Y. Yi, S. Lee, W. Su, and M. Gerla, "On-Demand Multicast Routing Protocol (ODMRP) for Ad-hoc Networks", draft-yi-manet-odmrp-00.txt, 2003.
[8] D. Chalmers, M. Sloman, "A survey of quality of service in mobile computing environments", IEEE Communications Surveys, Second Quarter, pp. 2-10, 1999.
[9] M. Effatparvar, A. Darehshoorzadeh, M. Dehghan, M.R. Effatparvar, "Quality of Service Support and Local Recovery for ODMRP Multicast Routing in Ad hoc Networks", 4th International Conference on Innovations in Information Technology (IEEE IIT 2007), Dubai, United Arab Emirates, pp. 695-699, 18-20 Nov. 2007.
[10] Y. Chen, Y. Ko, "A Lantern-Tree Based QoS on Demand Multicast Protocol for a Wireless Ad hoc Network", IEICE Transactions on Communications, Vol. E87-B, pp. 717-726, 2004.
[11] K. Xu, K. Tang, R. Bagrodia, M. Gerla, M. Bereschinsky, "Adaptive Bandwidth Management and QoS Provisioning in Large Scale Ad hoc Networks", Proceedings of MILCOM, Boston, MA, Vol. 2, pp. 1018-1023, 2003.
[12] M. Saghir, T.C. Wan, R. Budiarto, "QoS Multicast Routing Based on Bandwidth Estimation in Mobile Ad Hoc Networks", in Proc. Int. Conf. on Computer and Communication Engineering (ICCCE'06), Vol. I, Kuala Lumpur, Malaysia, pp. 384-389, 9-11 May 2006.
[13] G.S. Ahn, A.T. Campbell, A. Veres and L.H. Sun, "SWAN: Service Differentiation in Stateless Wireless Ad hoc Networks", in Proc. IEEE INFOCOM, Vol. 2, pp. 457-466, 2002.
[14] J. Garcia-Luna-Aceves, E. Madruga, "The Core Assisted Mesh Protocol", IEEE Journal on Selected Areas in Communications, Vol. 17, No. 8, 1999.
[15] H. Zhu, I. Chlamtac, "Admission control and bandwidth reservation in multi-hop ad hoc networks", Computer Networks 50 (2006), pp. 1653-1674.
[16] Q. Xue, A. Ganz, "QoS routing for mesh-based wireless LANs", International Journal of Wireless Information Networks 9 (3) (2002), pp. 179-190.
[17] A. Darehshoorzadeh, M. Dehghan, M.R. Jahed Motlagh, "Quality of Service Support for ODMRP Multicast Routing in Ad hoc Networks", ADHOC-NOW 2007, LNCS 4686, pp. 237-247, 2007.
[18] IEEE Computer Society LAN MAN Standards Committee, Wireless LAN Medium Access Protocol (MAC) and Physical Layer (PHY) Specification, IEEE Std 802.11-1997, IEEE, New York, NY, 1997.
[19] Q. Xue, A. Ganz, "Ad hoc QoS on-demand routing (AQOR) in mobile ad hoc networks", Journal of Parallel and Distributed Computing 63 (2003), pp. 154-165.
[20] A. Herzberg, S. Jarecki, H. Krawczyk, M. Yung, "Proactive secret sharing or: how to cope with perpetual leakage", Proceedings of Crypto '95, Vol. 5, pp. 339-352, 1995.
[21] H. Luo, S. Lu, "URSA: ubiquitous and robust access control for mobile ad hoc networks", IEEE/ACM Trans. Networking 12(6), pp. 1049-1063, 2004.
[22] A. Shamir, "How to share a secret", Commun. ACM 22(11), pp. 612-613, 1979.
A Secure and Fault-tolerant Framework for Mobile IPv6 based Networks

Rathi S
Sr. Lecturer, Dept. of Computer Science and Engineering
Government College of Technology
Coimbatore, Tamilnadu, INDIA

Thanuskodi K
Principal, Akshaya College of Engineering
Coimbatore, Tamilnadu, INDIA

Abstract— Mobile IPv6 will be an integral part of the next generation Internet protocol. The importance of mobility in the Internet keeps on increasing. The current specification of Mobile IPv6 does not provide proper support for reliability in the mobile network, and there are other problems associated with it. In this paper, we propose the "Virtual Private Network (VPN) based Home Agent Reliability Protocol (VHAHA)" as a complete system architecture and extension to Mobile IPv6 that supports reliability and offers solutions to the security problems that are found in the Mobile IP registration part. The key features of this protocol over other protocols are: better survivability, transparent failure detection and recovery, reduced complexity of the system and workload, secure data transfer and improved overall performance.

Keywords- Mobility Agents; VPN; VHAHA; Fault-tolerance; Reliability; Self-certified keys; Confidentiality; Authentication; Attack prevention

I. INTRODUCTION

As mobile computing has become a reality, new technologies and protocols have been developed to provide mobile users the services that already exist for non-mobile users. Mobile Internet Protocol (Mobile IP) [1, 2] is one of those technologies that enables a node to change its point of attachment to the Internet in a manner that is transparent to the applications on top of the protocol stack. A Mobile IP based system extends the IP based mobility of nodes by providing Mobile Nodes (MNs) with continuous network connections while they change their locations. In other words, it transparently provides mobility for nodes while remaining backward compatible with current IP routing schemes, by using two types of Mobility Agents (MA): the Home Agent (HA) and the Foreign Agent (FA).

While the HA is responsible for providing a permanent location to each mobile user, the FA is responsible for providing a Care-Of-Address (COA) to each mobile user who visits the Foreign Network. Each HA maintains a Home Location Register (HLR), which contains the MN's Home Address, current COA, secrets and other related information. Similarly, the FA maintains a Visitors Location Register (VLR), which maintains information about the MNs for which the FA provides services. When the MN is within the coverage area of the HA, it gets the service from the HA. If the MN roams away from the coverage of the HA, it has to register with any one of the FAs around to obtain the COA. This process is known as "Registration" and the association between MN and FA is known as the "Mobility Binding".

In the Mobile IP scenario described above, the HAs are a single point of failure, because all the communication to the MN passes through the HA, since the Correspondent Node (CN) knows only the Home Address. Hence, when a particular HA fails, all the MNs getting service from the faulty HA will be affected. According to the current specification of Mobile IP, when a MN detects that its HA has failed, it has to search for some other HA and recreate the bindings and other details. This lacks transparency, since everything is done by the MN. It is also a time-consuming process, which leads to service interruption. Another important issue is the security problem in Mobile IP registration. Since the MN is allowed to change its point of attachment, it is highly mandatory to ensure and authenticate the current point of attachment. As a form of remote redirection that involves all the mobility entities, the registration part is very crucial and must be guarded against any malicious attacks that might try to take illegitimate advantage of any participating principals.

Hence, the major requirements of the Mobile IPv6 environment are providing fault-tolerant services and communication security. Apart from the above basic requirements, the Mobile IP framework should have the following characteristics: 1) The current communication architecture must not be changed. 2) The mobile node hardware should be simple and must not require complicated calculations. 3) The system must not increase the number of times that communication data must be exchanged. 4) All communication entities are to be highly authenticated. 5) Communication confidentiality and location privacy are to be ensured. 6) Communication data must be protected from active and passive attacks.

Based on the above requirements and goals, this paper proposes "A secure and fault-tolerant framework for Mobile IPv6 based networks" as a complete system architecture and an extension to Mobile IPv6 that supports reliability and offers solutions to the registration security problems. The key features of the proposed approach over other approaches are:

46 http://sites.google.com/site/ijcsis/
ISSN 1947-5500
(IJCSIS) International Journal of Computer Science and Information Security,
Vol. 5, No. 1, 2009
better survivability, transparent failure detection and recovery, categories: (i) Certificate Authority – Public key Infrastructure
reduced complexity of the system and workload, secure data (CA-PKI) based protocol [15] (ii) Minimal public key based
transfer and improved overall performance. Despite its protocol [16] (iii) Hybrid technique of Secret and CA-PKI
practicality, the proposed framework provides a scalable based protocol [17] and (iv) Self-certified public key based
solution for authentication, while sets minimal computational protocols [18].
overhead on the Mobile Node and the Mobility agents. (i) CA-PKI based mechanisms define a new Certificate
Extension message format with the intention to carry
II. EARLIER RESEARCH AND STUDIES information about Certificates, which now must always be
Several solutions have been proposed for the reliability appended in all the control messages. Due to high
problem. The proposals that are found in [3-8] are for Mobile computational complexity, this approach is not suitable for
IPv4 and [9-15] are for Mobile IPv6 based networks. The wireless environment.
architecture and functionality of Mobile IPv4 and Mobile IPv6 (ii) The Minimal Public key based method aims to provide
are entirely different. Hence, any solutions that are applicable public key based authentication and a scalable solution for
for Mobile IPv4 can not be applicable for Mobile IPv6 for the authentication while setting only minimal computing on the
reason cited here: In mobile IPv4, the single HA at the Home mobile host. Even if this approach uses only the minimal
Link serves the MN which makes the Mobile IPv4 prone to public key based framework to prevent the replay attack, the
single point of failure problems. To overcome this problem, framework must be executed by using complex computations
the Mobile IPv4 solutions propose HA redundancy. But in due to the creation of digital signatures at the MN. This
Mobile IPv6, instead of having single HA, the entire Home increases the computational complexity at the MN.
Link would serve the MNs. The methods proposed in [9, 10, (iii) Hybrid technique of Secret and CA-PKI based
11, 12] are providing solutions for Mobile IPv6 based protocol proposes the secure key combine minimal public key
networks. besides producing the communication session key in mobile
In Inter Home Agent Redundancy Protocol (HAHA) [9], node registration protocol. The drawback of this approach is
one primary HA will provide service to the MNs and Multiple found to be the registration delay. When compared to other
HAs from different Home Links are configured as Secondary protocols, this approach is considerably increasing the delay in
HAs. When the primary HA failed, the secondary HA will be registration. In addition to that, the solution to the location
acting as Primary HA. But, the registration delay is high and anonymity is only partial.
the approach is not transparent to MNs. The Home Agent (iv) Providing strong security at the same time reducing the
Redundancy Protocol (HARP) proposed in [10] is similar to Registration delay and Computational complexity is an
[9], but here all redundant HAs are considered from the same important issue in Mobile IP. Hence, for the first time self-
domain. The advantages of this approach are registration delay certified public keys are used in [18, 19] which considerably
and computational overhead are less when compared to the reduce the time complexity of the system. But, this proposal
other methods. But, the drawback of this approach is that the does not address the authentication issues of CN, Binding
Home link is the single point of failure. Update (BU) messages which lead to Denial-Of-Service attack
The Virtual Home Agent Redundancy Protocol (VHARP) and impersonation problems.
[11, 12, 13] is similar to [10], but it deals with load balancing Based on the above discussions, it is observed that a secure
issues also. In [14], the reliability is provided by using two and fault-tolerant framework is mandatory which will tolerate
HAs in the same Home link. The primary and the secondary inter home link failures and ensure secure registration that
HAs are synchronized by using transport layer connections. should not increase registration overhead and computational
This approach provides transparency and load balancing. complexity of the system.
Also, registration delay and service interruptions are less. But,
if the Home Link or both HAs are failed, then the entire III. PROPOSED APPROACH
network will be collapsed. This paper proposes a fault-tolerant framework and
Moreover, none of the above said approaches deals with registration protocol for Mobile IPv6 based networks to
the registration security even if it plays a crucial role. provide reliability and security. The solution is based on
Registration in mobile IP must be made secure so that interlink HA redundancy and self certified keys. The proposal
fraudulent registration can be detected and rejected. is divided into two major parts: (i) Virtual Home Agent
Otherwise, any malicious user in the internet could disrupt Redundancy (VHAHA) architecture design and (ii) VHAHA
communications between the home agent and the mobile node Registration protocol.
by the simple expedient of supplying a registration request The first part proposes the design of fault-tolerant
containing a bogus care-of-address. The secret key based framework while the second part ensures the secure
authentication in Base Mobile IP is not scalable. Besides, it registration with the Mobility Agents. This proposed approach
also can’t provide non-repudiation that seems likely to be provides reliability and security by introducing extension to
demanded by various parties, especially in commercial overall functionality and operation of current Mobile IPv6.
settings. The advantages of this approach are: reliable Mobile IPv6
Many proposals are available to overcome the above said operations, better survivability, transparent failure detection
problems which can be broadly classified under the following and recovery, reduced complexity of the system and workload,

secure data transfer and improved overall performance. The results are also verified by performing simulation. The simulation results show that with minimal registration delay and computational overhead the proposed approach achieves the desired outcome.

Figure 1. VHAHA Architecture

IV. FRAMEWORK OF VHAHA

The design of the VHAHA framework is divided into three major modules: (i) architecture design, (ii) VHAHA scenario and data transmission, and (iii) failure detection and recovery algorithm.

A. Architecture design

The architecture of the proposed protocol is given in Fig. 1. As part of Mobile IPv6, multiple Home Links are available in the network and each Home Link consists of multiple HAs. In this approach, one HA is configured as the Active HA, some of the HAs are configured as Backup HAs and a few other HAs from the Home Link are configured as Inactive HAs. The Active HA provides all Mobile IPv6 services, the Inactive HA provides a minimal set of services and the Backup HA provides a mid range of services. VHAHA requires that for each MN there should be at least two HAs (one Active HA and the other could be any one of the Backup HAs) holding its binding at any instance of time. The functionalities of these HAs are given below:

Active HA: There must be a HA on the Home Link serving as the Active HA. Only one HA can act as the Active HA at any instance of time. The Active HA maintains the Binding Cache, which stores the mobility bindings of all MNs that are registered under it; it will hold [0-N] mobility bindings. It is responsible for data delivery and the exclusive services. The exclusive services mainly include Home Registration, De-registration, Registration, Registration-refresh, IKE and DHAD. Besides these, it provides regular HA services such as Tunneling, Reverse Tunneling, Return Routability and IPv6 Neighbor Discovery.

Backup HA: For each MN, there will be at least two HAs acting as Backup HAs (there is no limit on the maximum number of HAs). The purpose of the Backup HA is to provide continuous HA services in case of HA failures or overloading. The Backup HA can hold [1-N] bindings in its binding cache. It provides all the services of the Active HA except the exclusive services.

Inactive HA: Inactive HAs do not hold any mobility bindings and provide only a limited subset of the Backup HA services, since any HA in the Home Link can act as an Inactive HA.

The VHAHA is configured with a static IP address that is referred to as the Global HA Address. The Global HA address is defined by the Virtual Identifier and a set of IP addresses. The VHAHA may associate an Active HA's real IP address on an interface with the Global HA address. There is no restriction against mapping the Global HA address to a different Active HA. In case of the failure of an Active HA, the Global address can be mapped to some other Backup HA that is going to act as the Active HA. If the Active HA becomes unavailable, the highest priority Backup HA will become the Active HA after a short delay, providing a controlled transition of the responsibility with minimal service interruption. Besides minimizing service interruption by providing rapid transition from Active to Backup HA, the VHAHA design incorporates optimizations that reduce protocol complexity while guaranteeing controlled HA transition for typical operational scenarios. The significant feature of this architecture is that the entire process is completely transparent to the MN. The MN knows only the Global HA address and is unaware of the actual Active HA. It also does not know about the transition between Backup and Active HAs.

48 http://sites.google.com/site/ijcsis/
ISSN 1947-5500
(IJCSIS) International Journal of Computer Science and Information Security,
Vol. 5, No. 1, 2009
B. VHAHA Scenario

Two or more HAs (one Active HA and a minimum of one Backup HA) from each Home Link are selected. Then a Virtual Private Network (VPN) [20, 21, 22, 23] is constructed among the selected HAs through the existing internetworking. This VPN is assigned the Global HA address and acts as the Global HA. The HAs of the VPN announce their presence by periodically multicasting Heart Beat messages inside the VPN, so each HA knows the status of all other HAs in the private network.

Figure 2. VHAHA Scenario

The scenario of the VHAHA protocol is given in Fig. 2. The protocol works at layer 3. In this approach, the HAs are located in different Home Links but still share the same subnet address. The shared subnet address is known as the Global HA address, and the HAs in the inter home link are identified by using Local HA addresses. The data destined to the MN will be addressed to the Global HA address of the MN, which will be decapsulated by the Active HA and forwarded to the MN appropriately using base Mobile IP. The various steps in forwarding the data packets are illustrated in Fig. 2.

Figure 3. Packet Formats

The packet formats are shown in Fig. 3. As in Mobile IPv6, the CNs and MNs only know about the Global HA address. The packet (Fig. 3a) addressed to the MN from the CN (Fig. 2, Step 1) will be directed to the Home Network using the Global HA address (Fig. 2, Step 2) of the MN. Here, the Home Network refers to the VPN that is constructed by using the above procedure. Once the packet reaches the Global HA address, all HAs that belong to the Global HA address will hear the packet, and the one which is closer to the MN and has less workload will pick up (Fig. 2, Step 3) the packet (Fig. 3b) using the internal routing mechanism. Then the packet will be routed to the Active HA, and this Active HA will do the required processing and tunnel the packet (Fig. 3c) to the COA of the MN (Fig. 2, Step 4). Finally, the COA decapsulates and sends the packet (Fig. 3d) to the MN using base Mobile IPv6 (Fig. 2, Step 5).

C. Failure detection and Recovery

In contrast to Mobile IPv6 and other approaches, failure detection and tolerance is transparent to the MN. Since the MN is unaware of this process, over-the-air (OTA) messages are reduced, the complexity of the system is reduced and the performance is improved. The failure detection and recovery algorithm is illustrated in Procedure 1.

__________________________________________________
Begin
  Calculate the priority for the HAs that are part of the Virtual Private Network:
    workload(HAi) ← (Current mobility bindings of HAi x Current Throughput) /
                    (Maximum no. of mobility bindings of HAi x Maximum Throughput)
    Priority(HAi) ← 1 / workload(HAi)

  If (HAs failed to receive heartbeats from HAx)
    HAx ← Faulty

  If (HAx == Faulty) Then
    Delete entries of HAx from the tables of all HAi, where 1 ≤ i ≤ n, i ≠ x
    If (HAx == Active HA) Then
      Activate the exclusive services of the Backup HA
      Active HA ← Backup HA with highest priority
      Backup HA ← Select_Backup_HA(Inactive HA with highest priority),
        activate the required services and acquire the binding details
        from the primary HA to synchronize with it
    If (HAx == Backup HA) Then
      Backup HA ← Select_Backup_HA(Inactive HA with highest priority),
        activate the required services and acquire the binding details
        from the primary HA to synchronize with it
    If (HAx == Inactive HA) Then
      Do nothing until it recovers; if it permanently goes off,
      select an Inactive HA from the Home Link of HAx
End
__________________________________________________
Procedure 1: Failure detection and Recovery
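Procedure 1 can be rendered as the following Python sketch: the workload and priority follow the formulas above, an HA that misses its heartbeat deadline is marked faulty, an Active-HA failure promotes the highest-priority Backup HA, and the Backup set is refilled from the Inactive pool. The heartbeat timeout, class layout and names are illustrative assumptions.
__________________________________________________
import time

class HomeAgent:
    def __init__(self, ha_id, role, max_bindings, max_throughput):
        self.ha_id = ha_id
        self.role = role                 # "active" | "backup" | "inactive"
        self.bindings = 0                # current number of mobility bindings
        self.throughput = 0.0            # current throughput
        self.max_bindings = max_bindings
        self.max_throughput = max_throughput
        self.last_heartbeat = time.time()

    def workload(self):
        return (self.bindings * self.throughput) / \
               (self.max_bindings * self.max_throughput)

    def priority(self):
        w = self.workload()
        return float('inf') if w == 0 else 1.0 / w

def detect_and_recover(agents, heartbeat_timeout=3.0):
    # One pass of Procedure 1 over the HAs of the VPN.
    now = time.time()
    for ha in list(agents.values()):
        if now - ha.last_heartbeat <= heartbeat_timeout:
            continue                               # HA is alive
        faulty = agents.pop(ha.ha_id)              # delete its entries everywhere
        if faulty.role == "active":
            new_active = max((a for a in agents.values() if a.role == "backup"),
                             key=HomeAgent.priority, default=None)
            if new_active:
                new_active.role = "active"         # activate exclusive services
            _promote_inactive(agents)              # refill the backup set
        elif faulty.role == "backup":
            _promote_inactive(agents)
        # an Inactive HA failure needs no immediate action

def _promote_inactive(agents):
    candidate = max((a for a in agents.values() if a.role == "inactive"),
                    key=HomeAgent.priority, default=None)
    if candidate:
        candidate.role = "backup"  # activate required services, sync bindings
__________________________________________________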
The workload of each HA in the VPN is calculated based on the number of mobility bindings associated with each HA. This workload is used for setting the priority of the HAs. The priority is dynamically updated based on the changes in the number of mobility bindings. The heartbeat messages are exchanged among the HAs at a constant rate. These heartbeat messages are used for detecting failures. When any HA fails, it will not broadcast the heartbeat message and all other HAs will not receive the heartbeats from the faulty one. Hence, the failure of the faulty HA can be detected by all other HAs that are part of the VPN.

Once the failure is detected, the entry of that faulty HA will be deleted from all other HAs that are part of the Global HA
TABLE I. COMPARISON OF VHAHA WITH OTHER APPROACHES


Metrics MIPv6 HAHA HARP VHARP TCP VHAHA
Recovery overhead High No No No No No
Fault tolerance
No MN initiated MN initiated HA initiated HA initiated HA initiated
mechanism
Fault tolerant Covers entire Limited to Limited to Limited to Covers entire
No
Range range Home Link Home Link Home Link range
Transparency No No No Yes Yes Yes
OTA messages
exchanged for More More Less Nil Nil Nil
recovery

Once the failure is detected, the entry of that faulty HA will be deleted from all other HAs that are part of the Global HA subnet. Then, if the faulty HA is the Active HA, the backup HA with the highest priority will be mapped to Active HA by activating its exclusive services. Now the new Active HA will be the owner of the Global HA address. If the faulty HA is a backup HA, then one of the Inactive HAs will be set as the corresponding backup HA by activating the required services and acquiring the binding cache entries from the Primary HA. If an Inactive HA fails, nothing needs to be done; but if it permanently goes off, then any other HA from that link will be set as an Inactive HA.

The significant feature of this approach is that the Global HA address never changes. Based on the availability of the Active HA, the Global HA address will be mapped to the appropriate backup HA. The CN and the MN know only the Global HA address and do not know anything about this mapping of addresses. All other internal mappings are handled by VHAHA's internal routing mechanism.

D. Performance Evaluations

The proposed protocol introduces a certain amount of overhead in the network to construct the Virtual Network and to provide reliability. Hence, the performance of the proposed approach depends on two overheads: (a) time and (b) control message overhead. In the proposed approach, these two overheads depend on the following four factors: (1) VHAHA configuration, (2) Home Registration, (3) failure detection and recovery, and (4) over-the-air communication between MNs and Mobility Agents.

1) VHAHA configuration: The VHAHA is configured only during the initialization of the network, and it will be updated only when an Inactive HA fails. This happens to be a rare case, since most implementations will not take any action if an Inactive HA fails and will let the Inactive HA heal automatically, because it will not affect the overall performance. Hence, this can be considered a one-time cost and it is negligible. The time complexity and message complexity introduced to the overall system are negligible.

2) Home Registration: This factor depends on the total number and locations of Active, Backup and Inactive HAs that are part of the VHAHA network. The registration messages include the messages required for the MN to get registered with the Active HA and the control messages required by the Active HA to update this information in all other backup and Inactive HAs of the MN. In the proposed approach, the initial registration of the MN takes place with the Global HA address instead of with a single HA. Hence, this delay will be high when compared to the normal Mobile IP registration. The initial registration delay includes the time taken by the MN to get registered with the Active HA and the time taken by the Active HA to update this information in all other backup HAs.

The Time Complexity is O(D log3 k) and the Message Complexity is O(|L| + k log3 k), where 'D' is the diameter of VHAHA and 'k' is the number of active, backup and Inactive HAs of the MN.

3) Failure detection and Recovery overhead: The failure is detected when heartbeats are not received from a particular HA for a particular period of time (T). The heartbeat is actually multicast using the multicast address. The number of heartbeats exchanged depends on 'T', and the time taken to detect the failure depends on the speed of the underlying wired network. After the failure is detected, it requires just a single message to switch over to the backup HA, and the time taken is negligible.

The Time Complexity is O(D log3 n) and the Message Complexity is O(|L| + n log3 n), where 'D' is the diameter of VHAHA, 'n' is the number of HAs that are part of VHAHA and 'L' represents the number of links that constitute VHAHA.

4) Over-the-air messages: This is a very important factor because it deals with the air interface, which has limited bandwidth. When the number of OTA messages increases, the performance of the system degrades. But in the proposed approach, the MN is not involved in the failure detection and recovery process, so no OTA messages are exchanged during this process. The time and message complexity introduced by this factor is nil.

From the above description, it is observed that the performance of VHAHA is directly proportional to the speed of the wired network, because the proposal only involves wired backbone operations. This is not an unreasonable constraint, because the bandwidth of such networks is very high thanks to high-speed and advanced networks.

E. Simulation Results and Analysis

The proposed approach is compared with simple Mobile IPv6, HAHA, HARP, VHARP, and TCP. The comparison results are given in Table I. From the comparisons, it is found that VHAHA is persistent and has less overhead when compared to the other approaches.

Simulation experiments are performed to verify the performance of the proposed protocol. It is done by extending
the Mobile IP model given in ns-2 [24]. MIPv6 does not use any reliability mechanism; hence the time taken to detect and recover from the failure will be high. TCP, VHARP and VHAHA take almost the same time to recover from the failure. This is the case when HAs from the same link fail; but when the entire network fails, only VHAHA survives and all other methods collapse. The following parameters are used to evaluate the performance: (i) Failure detection and Recovery time when a HA fails in the Home Link, (ii) Failure detection and Recovery time when the entire Home Link fails, (iii) Home Registration delay, (iv) Packet loss, (v) Number of messages exchanged during registration, and (vi) Failure detection and Recovery messages. The simulation results are shown in Figures 4, 5, 6, 7, 8 and 9. The results are also compared with Mobile IPv6, TCP and VHARP to analyze the performance of the proposed approach.

1) Failure detection and Recovery time when a HA fails in the Home Link: When a particular HA fails, all other HAs will not hear its heartbeat messages. When the heartbeat message from a particular HA is missed continuously for three times, it is decided that the particular HA is a faulty HA. Once the failure is detected, the corresponding backup HA will be activated by the Recovery procedure. The failure detection and recovery time (TFD-R) is calculated by using equation (1):

TFD-R = 3TH + prop.delayOfVHAHA    (1)

where TH represents the time required to hear the heartbeat messages by the HAs.
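As an illustration of the rule behind equation (1), the sketch below declares a HA faulty after three consecutive missed heartbeats and computes the corresponding detection-and-recovery time. The heartbeat interval and the propagation delay used in the example are placeholder values, not measurements from the paper.

# Illustrative sketch of the detection rule behind equation (1):
# a HA is declared faulty after three consecutive missed heartbeats,
# and TFD-R = 3*TH + propagation delay of VHAHA.
MISSED_HEARTBEAT_LIMIT = 3

class HeartbeatMonitor:
    def __init__(self, heartbeat_interval_s, vhaha_prop_delay_s):
        self.t_h = heartbeat_interval_s        # TH: time between heartbeats
        self.prop_delay = vhaha_prop_delay_s   # propagation delay of VHAHA
        self.missed = {}                       # HA id -> consecutive misses

    def heartbeat_received(self, ha_id):
        self.missed[ha_id] = 0                 # reset on every heartbeat

    def heartbeat_missed(self, ha_id):
        self.missed[ha_id] = self.missed.get(ha_id, 0) + 1
        return self.missed[ha_id] >= MISSED_HEARTBEAT_LIMIT   # True -> faulty

    def detection_and_recovery_time(self):
        # Equation (1): TFD-R = 3*TH + prop.delayOfVHAHA
        return MISSED_HEARTBEAT_LIMIT * self.t_h + self.prop_delay

# Example with placeholder values: 2 s heartbeat interval, 50 ms propagation delay.
monitor = HeartbeatMonitor(heartbeat_interval_s=2.0, vhaha_prop_delay_s=0.05)
print(monitor.detection_and_recovery_time())   # 6.05 seconds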
Figure 4. Comparison of Failure detection and recovery time, when a HA fails in the Home Link

Fig. 4 shows the TFD-R of VHAHA and the other protocols. Base Mobile IPv6 does not take any action for failure detection and Recovery of HAs; this needs to be handled by the MN itself. Because of that, the time taken for failure detection and Recovery is very high. This causes service interruption to the MNs that are affected by the faulty HA. Other schemes like TCP, VHARP and VHAHA handle the problem and take almost the same amount of time for failure detection and Recovery.

2) Failure detection and Recovery time when the entire Home Link fails: The proposed protocol constructs the VPN by considering HAs from different Home Links. Hence, even when one Home Link fails completely, the proposed approach handles the problem in the normal manner as described in the previous section. But TCP and VHARP will collapse, and Failure detection and Recovery will be left to the MNs.

Figure 5. Comparison of Failure detection and recovery time, when the entire Home Link fails

This situation is represented in Fig. 5, where VHAHA's Recovery time is almost equal to that of the previous scenario. But the TCP and VHARP approaches fail to handle the situation, and their Recovery time is very high, equal to that of base MIPv6.

3) Registration delay: The registration delay is calculated by using equation (2). The Active HA Registration delay is equal to that of base MIPv6. Nowadays, the bandwidth of the core network is very high and hence the propagation delay of VHAHA is very small. The values are given in Fig. 6 and compared with the other protocols.

Reg.delay = Reg.delayActive-HA + prop.delayOfVHAHA    (2)

Figure 6. Comparison of Home Registration delay

4) Packet loss: The packet losses of the compared protocols are represented in Fig. 7. From the figure, it is inferred that the packet loss in the proposed approach is much lower than with MIPv6, TCP and VHARP, because it is able to handle both intra-link and inter-link failures.

Figure 7. Comparison of Packet Loss
5) Number of messages exchanged during registration: This includes the number of messages required to register with the Active HA and the Binding Update messages to the backup HAs during the Initial Registration, FA Registration and deregistration. Again, the bandwidth of the core network is very high and hence the delay experienced by the MN will be negligible. This is illustrated in Fig. 8. From the figure, it is found that the number of messages exchanged in VHAHA is somewhat high when compared to the base protocol, but it is comparable with the VHARP protocol.

Figure 8. Comparison of no. of msgs exchanged during Home Registration

6) Failure detection and Recovery messages: This is represented in Fig. 9. Here also, the complexity of VHAHA is approximately equal to that of VHARP, while the TCP based mechanism has less complexity and the base protocol has the maximum complexity.

Figure 9. Comparison of no. of failure detection and Recovery messages

F. Observation

From the results and analysis, it is observed that the proposed approach outperforms all other reliability mechanisms because it survives even when the entire Home Link fails. The overhead and complexity introduced by the proposed approach is almost negligible when compared to other existing recovery mechanisms. The failure detection and recovery overhead imposed by the proposed approach is increased by 2% when compared to VHARP. The home registration delay is also increased by 2% when compared to VHARP. The packet loss in the proposed approach is reduced by 25% when compared to all other approaches.

V. VHAHA SECURE REGISTRATION

The VHAHA secure registration protocol is based on self-certified keys. Self-certified public keys were introduced by Girault. In contrast to the traditional public key infrastructure (PKI), self-certified public keys do not require the use of certificates to guarantee the authenticity of public keys. The authenticity of a self-certified public key is verified implicitly by the proper use of the private key in any cryptographic application, in a logically single step. Thus, there is no chain of certificate authorities in self-certified public keys. This property of self-certified keys optimizes the registration delay of the proposed protocol while at the same time ensuring registration security.

A. VHAHA Secure Registration Protocol

The proposed protocol is divided into three different parts: (i) Mobile node's initial registration with the home network, (ii) Registration protocol of the MN (from a Foreign Network) with authentication, and (iii) Authentication between MN and CN. The MN's initial registration part deals with how the MN is initially registered with its Home Network. First, the identity of the MN is verified, and other details like the nonce, Temporary Identity and the secret key between MN and HA are assigned to the MN.

__________________________________________________
a. Mobile node initial registration with home network (VHAHA)

(IR1) Verify the identity of the MN
(IR2) Allocate nonce, Temporary ID (H(IDMN || NHA)) and shared secret KMN-HA
(IR3) Transfer these details to the MN through a secret channel and also store them in the HA's database.

b. Registration protocol of MN (from Foreign Network) with authentication

Agent Advertisement:
(AA1) FA → MN: M1, where M1 = Advertisement, FAid, MNCoA, NFA, wF

Registration:
(R1) MN → FA: M2, <M2>KMN-HA
     where M2 = Request, Key-Request, FAid, HAid, MNCoA, NHA, NMN, NFA, H(IDMN || NHA), wH
(R2) FA: (upon receipt of R1)
     • Validate NFA and compute the key KFA-HA
       KFA-HA = H1[(wH^h(IH) + IH)^xF mod n] = H1[(wH^h(HAid) + HAid)^xF mod n]
     • Compute MAC
(R3) FA → HA: M3, <M3>KFA-HA, where M3 = M2, <M2>KMN-HA, FAid, wF
(R4) HA: (upon receipt of R3)
     • Check whether FAid in M3 equals FAid in R1.
     • Compute the key
       KFA-HA = H1[(wF^h(IF) + IF)^xH mod n] = H1[(wF^h(FAid) + FAid)^xH mod n]
     • Compute MAC* and compare it with the MAC value received. This is the authentication of FA by HA.
     • Check the identity of the MN in the HA's database.
     • Produce a new nonce NHA', a new Temporary ID (H(IDMN || NHA')) and a new session key K'MN-FA, and overlay the details in the database.
(R5) HA → FA: M4, <M4>KFA-HA
     If IDMN is found in the HA's dynamic parameter database,
        M4 = M5, <M5>KMN-HA, NFA, {KMN-FA}KFA-HA
     Else,
        M4 = M5, <M5>K0MN-HA, NFA, {KMN-FA}KFA-HA
     M5 = Reply, Result, Key-Reply, MNHM, HAid, N'HA, NMN
(R6) FA: (upon receipt of R5)
     • Validate NFA
     • Validate <M4>KFA-HA with KFA-HA. This is the authentication of FA to HA.
     • Decrypt {KMN-FA}KFA-HA with KFA-HA and get the session key KMN-FA
(R7) FA → MN: M5, <M5>KMN-HA
(R8) MN: (upon receipt of R7)
     • Validate NMN
     • Validate <M5>KMN-HA with the secret key KMN-HA used in R1. This is the authentication of MN to HA.

c. Authentication between MN and CN

(A1) MN → HA2: M1, <M1>KMN-HA2, where M1 = Auth-Request, MNCOA, CNCOA, NMN, wMN
(A2) HA2: (upon receipt of A1)
     • Validate NMN and compute MAC
     HA2 → CN: M2, <M2>KHA2-CN, where M2 = M1, <M1>KMN-HA2
(A3) CN → MN: M3, <M3>KCN-MN, wCN, where M3 = Auth-Response, MNCOA, CNCOA, h(NMN)
     • Validate MAC and nonce. This is the authentication of HA2 and MN by CN.
     • Compute KCN-MN
(A4) MN: (upon receipt of A3)
     • Validate MAC and nonce. This is the authentication of HA2 and CN by MN.
     • Compute KCN-MN
__________________________________________________
Procedure 2: VHAHA Secure Registration Protocol
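The registration steps above repeatedly derive the shared key KFA-HA from a self-certified public key and protect messages with a MAC. A minimal sketch of these two operations is given below; since the paper does not specify them, H1 is modelled here with SHA-256 and <M>K with HMAC-SHA256. Producing matching keys on both sides additionally requires witnesses generated by Girault's self-certified key setup [19], which is assumed, and the numeric parameters in the usage example are toy values, not secure ones.

import hashlib
import hmac

def h(identity):
    # Hash of an identity to an integer exponent (illustrative stand-in for h(.)).
    return int.from_bytes(hashlib.sha256(str(identity).encode()).digest(), "big")

def derive_shared_key(peer_witness, peer_identity, own_private_x, n):
    # K = H1[(w_peer^h(peer_identity) + peer_identity)^x mod n], as in steps (R2) and (R4).
    reconstructed_public = (pow(peer_witness, h(peer_identity), n) + peer_identity) % n
    shared_secret = pow(reconstructed_public, own_private_x, n)
    width = (shared_secret.bit_length() + 7) // 8 or 1
    return hashlib.sha256(shared_secret.to_bytes(width, "big")).digest()  # H1 as SHA-256

def protect(message, key):
    # <M>K in Procedure 2: a MAC over M under the shared key (HMAC-SHA256 as a stand-in).
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message, tag, key):
    # MAC check of the kind performed by the HA in (R4) and by the FA/MN in (R6)/(R8).
    return hmac.compare_digest(protect(message, key), tag)

# Toy usage (parameters are NOT secure): the FA derives KFA-HA and protects M3.
k_fa_ha = derive_shared_key(peer_witness=123456789, peer_identity=4242,
                            own_private_x=987654321, n=2**127 - 1)
tag = protect(b"M3: registration request forwarded to HA", k_fa_ha)
print(verify(b"M3: registration request forwarded to HA", tag, k_fa_ha))   # True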
The second part deals with how the MN is registered with the Foreign Network when it roams away from the Home Network. There is no change in the Agent Advertisement part, except that the MN authenticates the FA using its witness value. In the Registration part, instead of passing the MN's actual identity, the identity is combined with a nonce and then hashed. This provides location anonymity. Also, the witness value is passed, which enables the calculation of the shared secrets.

The third part deals with the authentication between MN and CN. This authentication enables the MN to communicate with the CN directly, which resolves the triangle routing problem. First, the MN sends the authentication request to the Home Agent (HA2) of the MN. There, HA2 verifies and authenticates the MN and forwards the message to the CN. The CN validates the MN, calculates the shared secret and sends a response to the MN. Finally, the MN calculates the shared secret and validates the CN. Then the MN and CN can communicate with each other directly. The details of the proposed protocol are summarized in Procedure 2.

B. Performance Evaluations

In this section, the security aspects of the proposed protocol are analyzed. The following attributes have been considered for the analyses: (i) Data confidentiality, (ii) Authentication, (iii) Location anonymity and synchronization, and (iv) Attack prevention.

1) Confidentiality: Data delivered through the Internet can be easily intercepted and falsified. Therefore, ensuring the confidentiality of communication data is very important in the Mobile IP environment. The data confidentiality of the various protocols and the proposed one is listed in Table 2.

TABLE II. DATA CONFIDENTIALITY ANALYSIS

Methods                   | MN-FA | FA-HA | MN-HA | CN-MN
Secret key                | No    | No    | Yes   | Yes
CA-PKI                    | No    | No    | Yes   | Yes
Minimal Public key        | Yes   | No    | Yes   | Yes
Hybrid                    | Yes   | Yes   | Yes   | Yes
Self certified            | Yes   | Yes   | Yes   | Yes
VHAHA secure Registration | Yes   | Yes   | Yes   | Yes

The proposed approach achieves data confidentiality between all pairs of network components. From the table, it is found that the Hybrid and Self certified approaches also provide the same result, but the computational complexities of these protocols are high when compared to the proposed one, due to the usage of public keys and dynamically changing secret keys.

2) Authentication: Prior to data delivery, both parties must be able to authenticate one another's identity. It is necessary to prevent any bogus parties from sending unwanted messages to the entities. The Mobile IP user authentication protocol is different from the general user service authentication protocol. Table 3 shows the authentication analysis of the various protocols together with the proposed one. From the analysis, it is found that the VHAHA secure registration excels all approaches because it provides authentication between all pairs of the networking nodes.
TABLE III. AUTHENTICATION ANALYSIS

Methods                   | MN-FA                    | FA-HA             | MN-HA             | MN-CN
Secret key                | None                     | None              | MAC               | None
CA-PKI                    | Digital Signature        | Digital Signature | Digital Signature | None
Minimal Public key        | None                     | None              | Digital Signature | None
Hybrid                    | None                     | None              | Digital Signature | Symmetric Encryption
Self certified            | MAC (static/dynamic key) | None              | MAC (dynamic key) | None
VHAHA secure Registration | MAC (static/dynamic key) | TTP               | MAC (dynamic key) | MAC (dynamic key)

3) Location Anonymity and Synchronization: The proposed approach uses a temporary identity instead of the actual identity of the MNs. Hence, the actual location of the MN is not revealed to the outside environment (i.e. CNs and Foreign links). Similarly, the proposed approach maintains two databases: (i) the Initial parameter base and (ii) the Dynamic parameter base. These are used for maintaining synchronization between MNs and HAs. The results are given in Table 4.

TABLE IV. LOCATION ANONYMITY AND SYNCHRONIZATION

Methods                   | Location anonymity | Synchronization
Secret key                | No  | No
CA-PKI                    | No  | No
Minimal Public key        | No  | No
Hybrid                    | No  | No
Self certified            | No  | No
VHAHA secure Registration | Yes | Yes

4) Attack Prevention: The following attacks are considered for the analysis: (i) Replay attack, (ii) TCP Splicing attack, (iii) Guess attack, (iv) Denial-of-Service attack, (v) Man-in-the-middle attack and (vi) Active attacks. Table 5 shows the attack prevention analyses of the various approaches.

TABLE V. ATTACK PREVENTION ANALYSIS

Methods                   | Replay | Splicing | Guess | DOS | Man in middle | Active
Secret key                | No  | No  | No  | No  | Yes | Yes
CA-PKI                    | No  | Yes | No  | Yes | Yes | Yes
Minimal Public key        | Yes | No  | No  | Yes | Yes | Yes
Hybrid                    | Yes | Yes | Yes | Yes | Yes | Yes
Self certified            | Yes | Yes | Yes | Yes | Yes | Yes
VHAHA secure Registration | Yes | Yes | Yes | Yes | Yes | Yes

C. Simulation Results and Analysis

The system parameters are shown in Table 6. The cryptography operation times on the FA, HA and MN are obtained from [25]. The following parameters are used for the evaluation: (i) Registration delay and (ii) Registration signaling traffic.

1) Registration delay: The registration delay plays an important role in deciding the performance of the Mobile IP protocol. To strengthen the security of the Mobile IP registration part, the data transmission speed cannot be compromised, because it has a direct impact on the end user. If the delay is high, then the interruption and packet loss will be higher. Due to the properties of public keys, the registration delay of these protocols is naturally very high and the packet loss is also high. But certificate based protocols are not based on public keys and, thanks to the properties of the certificates, the delay is less. The registration time is calculated by using equation (3) and the results are given in Table 7.

Reg.Time = RREQMN-FA + RREQFA-HA + RREPHA-FA + RREPFA-MN    (3)

where RREQMN-FA is the time taken to send the registration request from the MN to the FA, RREQFA-HA is the time taken to forward the registration request from the FA to the HA, RREPHA-FA is the time taken to send the registration reply from the HA to the FA, and RREPFA-MN is the time taken to forward the registration reply from the FA to the MN.

TABLE VI. COMPARISON OF REGISTRATION DELAY

Methods                   | RREQ MN-FA (1) | RREQ FA-HA (2) | RREP HA-FA (3) | RREP FA-MN (4) | Delay (ms) (1)+(2)+(3)+(4)
Secret key                | 2.7191 | 1.004   | 1.0144  | 2.7031 | 7.4406
CA-PKI                    | 7.6417 | 5.9266  | 6.3170  | 7.6457 | 27.5312
Minimal Public key        | 2.8119 | 0.9966  | 10.8770 | 7.7466 | 22.4322
Hybrid                    | 2.7934 | 16.0565 | 15.011  | 2.8007 | 36.6625
Self certified            | 3.4804 | 14.2649 | 1.0176  | 2.8402 | 21.6023
VHAHA secure Registration | 3.3813 | 7.64708 | 1.0156  | 2.7615 | 14.8056
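As a small worked illustration of equation (3), the sketch below sums the four per-hop components into the total registration delay, using the VHAHA secure registration row of Table VI as input.

# Worked illustration of equation (3):
# Reg.Time = RREQ(MN-FA) + RREQ(FA-HA) + RREP(HA-FA) + RREP(FA-MN)
def registration_time(rreq_mn_fa, rreq_fa_ha, rrep_ha_fa, rrep_fa_mn):
    return rreq_mn_fa + rreq_fa_ha + rrep_ha_fa + rrep_fa_mn

# VHAHA secure registration components from Table VI (milliseconds).
vhaha_delay_ms = registration_time(3.3813, 7.64708, 1.0156, 2.7615)
print(round(vhaha_delay_ms, 4))   # ~14.8055 ms; Table VI lists 14.8056 due to rounding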
2) Registration Signaling Traffic: The computation overhead depends on the amount of traffic (i.e. the packet size) that has to be transmitted to successfully complete the registration. If the amount of signaling traffic is high, the computational complexity at the MN and the mobility agents will be high. The signaling traffic of the various protocols considered for comparison is computed and given in Table 8. From the table, it is observed that the VHAHA secure registration has the lowest traffic; hence the complexity is less both at the MNs and at the Mobility Agents. Because of the lowest traffic, the bandwidth consumption is also comparatively less.
TABLE VII. COMPARISON OF REGISTRATION TRAFFIC

Methods                   | MN-FA | FA-HA | HA-FA | FA-MN | Size (bytes)
Secret key                | 50  | 50  | 46  | 46  | 192
CA-PKI                    | 224 | 288 | 64  | 128 | 704
Minimal Public key        | 178 | 178 | 174 | 174 | 704
Hybrid                    | 66  | 578 | 582 | 66  | 1292
Self certified            | 226 | 404 | 124 | 70  | 824
VHAHA Registration        | 206 | 364 | 108 | 54  | 732

D. Observation

The proposed approach does not affect the complexity of the initial registration. However, the foreign network registration delay is significantly increased due to the application of the security algorithms. From Procedure 2, it is understood that the proposed scheme does not change the number of messages exchanged for the registration process, but the size of the messages will be increased due to the security attributes that are passed along with the registration messages.

From the results and analysis, it is observed that the VHAHA secure registration reduces the registration delay overhead by 40% and the signaling traffic overhead by 20% when compared to the other approaches.

VI. CONCLUSION AND FUTURE WORK

This paper proposes a fault-tolerant and secure framework for Mobile IPv6 based networks that is based on an inter home link HA redundancy scheme and self-certified keys. The performance analysis and the comparison results show that the proposed approach has less overhead and offers advantages such as better survivability, transparent failure detection and recovery, reduced system complexity and workload, secure data transfer and improved overall performance. Moreover, the proposed approach is compatible with the existing Mobile IP standard and does not require any architectural changes. It is also useful in future applications like VoIP and 4G. The formal load balancing of the workload among the HAs of the VPN is left as future work.

REFERENCES

[1] C. Perkins, D. Johnson, and J. Arkko, "Mobility Support in IPv6," IETF Draft, draft-ietf-mobileip-ipv6-24, August 2003.
[2] C. Perkins, RFC 3344: "IP Mobility Support for IPv4", August 2002.
[3] B. Chambless, and J. Binkley, "Home Agent Redundancy Protocol," IETF Draft, draft-chambless-mobileip-harp-00.txt, October 1997.
[4] R. Ghosh, and G. Varghese, "Fault Tolerant Mobile IP," Washington University, Technical Report (WUCS-98-11), 1998.
[5] J. Ahn, and C. S. Hwang, "Efficient Fault-Tolerant Protocol for Mobility Agents in Mobile IP," in Proc. 15th Int. Parallel and Distributed Processing Symp., California, 2001.
[6] K. Leung, and M. Subbarao, "Home Agent Redundancy in Mobile IP," IETF Draft, draft-subbarao-mobileipredundancy-00.txt, June 2001.
[7] M. Khalil, "Virtual Distributed Home Agent Protocol (VDHAP)," U.S. Patent 6 430 698, August 6, 2002.
[8] J. Lin, and J. Arul, "An Efficient Fault-Tolerant Approach for Mobile IP in Wireless Systems," IEEE Trans. Mobile Computing, vol. 2, no. 3, pp. 207-220, July-Sept. 2003.
[9] R. Wakikawa, V. Devarapalli, and P. Thubert, "Inter Home Agents Protocol (HAHA)," IETF Draft, draft-wakikawa-mip6-nemo-haha-00.txt, October 2003.
[10] F. Heissenhuber, W. Fritsche, and A. Riedl, "Home Agent Redundancy and Load Balancing in Mobile IPv6," in Proc. 5th International Conf. Broadband Communications, Hong Kong, 1999.
[11] H. Deng, R. Zhang, X. Huang, and K. Zhang, "Load Balance for Distributed HAs in Mobile IPv6", IETF Draft, draft-wakikawa-mip6-nemo-haha-00.txt, October 2003.
[12] J. Faizan, H. El-Rewini, and M. Khalil, "Problem Statement: Home Agent Reliability," IETF Draft, draft-jfaizan-mipv6-ha-reliability-00.txt, November 2003.
[13] J. Faizan, H. El-Rewini, and M. Khalil, "Towards Reliable Mobile IPv6," Southern Methodist University, Technical Report (04-CSE-02), November 2004.
[14] Adisak Busaranun, Panita Pongpaibool, and Pichaya Supanakoon, "Simple Implement of Home Agent Reliability for Mobile IPv6 Network", TENCON, November 2006.
[15] S. Jacobs, "Mobile IP Public Key based Authentication," http://search.ietf.org/internet-drafts/draft-jacobs-mobileip-pkiauth-01.txt, 1999.
[16] Sufatrio and K. Y. Lam, "Mobile-IP Registration Protocol: a Security Attack and New Secure Minimal Public-key based Authentication," Proc. 1999 Intnl. Symp. Parallel Architectures, Sep. 1999.
[17] C. Y. Yang and C. Y. Shiu, "A Secure Mobile IP Registration Protocol," Int. J. Network Security, vol. 1, no. 1, pp. 38-45, Jul. 2005.
[18] L. Dang, W. Kou, J. Zhang, X. Cao, and J. Liu, "Improvement of Mobile IP Registration Using Self-Certified Public Keys," IEEE Transactions on Mobile Computing, June 2007.
[19] M. Girault, "Self-certified Public Keys," Advances in Cryptology (Proc. EuroCrypt 91), LNCS, vol. 547, pp. 490-497, Springer-Verlag, 1991.
[20] T. C. Wu, Y. S. Chang, and T. Y. Lin, "Improvement of Saeednia's Self-certified Key Exchange Protocols," Electronics Letters, vol. 34, no. 11, pp. 1094-1095, 1998.
[21] D. McPherson and B. Dykes, RFC 3069: VLAN Aggregation for Efficient IP Address Allocation, February 2001. ftp://ftp.isi.edu/in-notes/rfc3069.txt.
[22] Ruixi Yuan and W. Timothy Strayer, "Virtual Private Networks: Technologies and Solutions," Addison-Wesley, April 2001.
[23] Dave Kosiur, "Building & Managing Virtual Private Networks," Wiley, October 1998.
[24] NS-2, http://www.isi.edu/nsnam.
[25] Wei Dai, "Crypto++ 5.2.1 Benchmarks," http://www.eskimo.com/~weidai/benchmarks.html, 2004.

A New Generic Taxonomy on Hybrid Malware Detection


Technique
Robiah Y, Siti Rahayu S., Mohd Zaki M, Shahrin S., Faizal M. A., Marliza R.
Faculty of Information Technology and Communication
Universiti Teknikal Malaysia Melaka,
Durian Tunggal, Melaka,
Malaysia
robiah@utem.edu.my, sitirahayu@utem.edu.my, zaki.masud@utem.edu.my, shahrinsahib@utem.edu.my,
faizalabdollah@utem.edu.my, marliza@utem.edu.my

Abstract-Malware is a type of malicious program that replicate process. Therefore certain detection mechanisms or
from host machine and propagate through network. It has been technique need to be integrated with IDS correlation process
considered as one type of computer attack and intrusion that can in order to guarantee the malware is detected in the alert log.
do a variety of malicious activity on a computer. This paper Hence, the proposed research is to generate a new generic
addresses the current trend of malware detection techniques and
taxonomy of malware detection technique that will be the
identifies the significant criteria in each technique to improve
malware detection in Intrusion Detection System (IDS). Several basis of developing new rule set for IDS in detecting
existing techniques are analyzing from 48 various researches and malware to reduce the number of false alarm.
the capability criteria of malware detection technique have been
reviewed. From the analysis, a new generic taxonomy of malware The rest of the paper is structured as follows. Section
detection technique have been proposed named Hybrid-Malware II discuses the related work on malware and the current
Detection Technique (Hybrid-MDT) which consists of Hybrid- taxonomy of malware detection technique. Sections III
Signature and Anomaly detection technique and Hybrid- present the classification and the capability criteria of
Specification based and Anomaly detection technique to malware detection techniques. Section IV discusses the new
complement the weaknesses of the existing malware detection
propose taxonomy of malware detection technique and.
technique in detecting known and unknown attack as well as
reducing false alert before and during the intrusion occur. Finally, section V conclude and summarize future directions
of this work.

Keywords: Malware, taxonomy, Intrusion Detection II RELATED WORK


System.
A. What is Malware?
I INTRODUCTION
According to [3], malware is a program that has
malicious intention. Whereas [4] has defined it as a generic
Malware is considered as worldwide epidemic due to term that encompasses viruses, Trojans, spywares and other
the malware author’s activity to have a finance gain through intrusive codes. Malware is not a “bug” or a defect in a
theft of personal information such as gaining access to legitimate software program, even if it has destructive
financial accounts. This statement has been proved by the consequences. The malware implies malice of forethought
increasing number of computer security incidents related to by malware inventor and its intention is to disrupt or
vulnerabilities from 171 in 1995 to 7,236 in 2007 as damage a system.
reported by Computer Emergency Response Team [1]. One
of the issues related to this vulnerability report is malware [5] has done research on malware taxonomy according
attack which has generated significant worldwide epidemic to their malware properties such as mutually exclusive
to network security environment and bad impact involving categories, exhaustive categories and unambiguous
financial loss. categories. In his research he has stated that generally
malware is consists of three types of malware of the same
Hence, the wide deployment of IDSs to capture this level as depicted in Figure 1 which are virus, worm and
kind of activity can process large amount of traffic which Trojan horse although he has commented that in several
can generate a huge amount of data. This huge amount of cases these three types of malware are defined as not being
data can exhaust the network administrator’s time and mutually exclusive
implicate cost to find the intruder if new attack outbreak
happen especially involving malware attack. An important Malware
problem in the field of intrusion detection is the
management of alerts as IDS tends to produce high number
Worm Trojan
of false positive alerts [2]. In order to increase the detection Virus
horse
rate, the use of multiple IDSs can be used and correlate the
alert but in return it increases the number of alerts to Figure 1. General Malware Taxonomy by Karresand

B. What is Malware Intrusion Detector? According to [8] and [9], intrusion detection technique
can be divided into three types as in Figure 2 which are
Malware intrusion detector is a system or tool that signature-based or misuse detection, anomaly-based
attempts to identify malware [3] and contains malware detection and specification-based detection which shall be a
before it can reach a system or network. Diverse research major reference in these research. Based on previous
has been done to detect this malware from spreading on host worked [8][9][10][11], the characteristics of each techniques
and network. These detectors will use various combinations are as follows.
of technique, approach and method to enable them to detect
the malware effectively and efficiently during program B. Signature-based detection
execution or static. Malware intrusion detector is
considered as one of the component of IDS, therefore Signature-based or sometime called as misuse detection
malware intrusion detector is a complement of IDS. as described by [10] will maintain database of known
intrusion technique (attack signature) and detects intrusion
C. What is Taxonomy of Malware Detection by comparing behavior against the database. It shall require
Technique? less amount of system resource to detect intrusion. [8] also
claimed that this technique can detect known attack
To clearly identify the malware detection technique accurately. However the disadvantage of this technique is
terms in depth, a research on a structured categorization ineffective against previously unseen attacks and hence it
which is call as taxonomy is required in order to develop a cannot detect new and unknown intrusion methods as no
good detection tools. Taxonomy is defined in [6] as “a signatures are available for such attacks.
system for naming and organizing things, especially plants
and animals, into groups which share similar qualities”. C. Anomaly-based detection
[7] has done a massive survey on malware detection Anomaly-based detection stated by [10] analyses user
techniques done by various researchers and they have come behavior and the statistics of a process in normal situation,
out with taxonomy on classification of malware detection and it checks whether the system is being used in a different
techniques which have only two main detection technique manner. In addition [8] has described that this technique
which are signature-based detection and anomaly-based can overcome misuse detection problem by focusing on
detection. They have considered the specification-based normal system behavior rather than attack behavior.
detection as sub-family of anomaly-based detection. The However [9] assume that attacks will result in behavior
researcher has done further literature review on 48 various different from that normally observed in a system and an
researches on malware detection technique to verify the attack can be detected by comparing the current behavior
relevancies of the detection technique especially the hybrid with pre-established normal behavior.
malware detection technique so that it can be mapped into
the proposed new generic taxonomy of malware detection This detection approach is characterized by two phases
technique. Refer to Table IV for the mappings of the which is the training phase and detection phase. In training
literature review with the malware detection technique. phase, the behavior of the system is observed in the absence
of attack, and machine learning technique is used to create a
III CLASSIFICATION OF MALWARE profile of such normal behavior. In detection phase, this
DETECTION TECHNIQUES profile is compared against the current behavior, and
deviations are flagged as potential attacks. The
Malware detection technique is the technique used to effectiveness of this technique is affected by what aspect or
detect or identify the malware intrusion. Generally, a feature of system behavior is learnt and the hardest
malware detection technique can be categorized into challenge is to be able to select the appropriate set of
Signature-based detection, Anomaly-based detection and features.
Specification-based detection.
The advantage of this detection technique is that it can
A. Overview of Detection Technique detect new intrusion method and capable to detect novel
attacks. However, the disadvantage is that it needs to update
the data (profiles) describing the user’s behavior and the
statistics in normal usage and therefore it tend to be large
and therefore need more resources, like CPU time, memory
and disk space. Moreover, the malware detector system
often exhibit legitimate but previously unseen behavior,
which leads to high rate of false alarm

D. Specification-based detection

Specification-based detection according to [9] will rely


on program specifications that describe the intended
behavior of security-critical programs. The goal of the
Figure 2. Existing taxonomy of malware detection technique policy specification language according to [11] is to provide

a simple way on specifying the policies of privileged generic taxonomy on malware detection technique. It can
programs. be done by analyzing the current malware detection
technique and identify the significant criteria within each
It monitors executions program involve and detecting technique that can improve the IDS problem. As mentioned
deviation of their behavior from the specification, rather by [12], IDS has developed issues on alert flooding,
than detecting the occurrence of specific attack patterns. contextual problem, false alert and scalability. The
This technique is similar to anomaly detection where they characteristic that shall be analyzed in each detection
detect the attacks as deviate from normal. technique is according to the issue listed in Table II.

The difference is that instead of relying on machine TABLE II


learning techniques, it will be based on manually developed Issue analyzed in IDS
specifications that capture legitimate system behavior. It
can be used to monitor network components or network
services that are relevant to security, Domain Name Service,
Network File Sharing and routers.

The advantage of this technique according to [8] is that


the attacks can be detected even though they may not
previously encounter and it produce low rate of false alarm.
They avoid high rate of false alarm caused by legitimate-
but-unseen-behavior in anomaly detection technique.
However, the disadvantage is that it is not as effective as
anomaly detection in detecting novel attacks, especially
involving network probing and denial-of-service attacks due
to the development of detail specification is time-consuming
and hence increase false negative due to attacks may be
missed. Table I summarized the advantages and
disadvantages of each technique.

TABLE I
Comparison of Malware detection techniques

[13] has proposed the criterion of malware detection


technique that shall be analyzed against the issue listed in
Table II , which are :-

1. Capability to do alert reduction


2. Capability to identify multi-step attack.
3. Capability to reduce false negative alert.
4. Capability to reduce false positive alert
5. Capability to detect known attack
6. Capability to detect unknown attack

Alert reduction is required in order to overcome the


problem of alert flooding or large amount of alert data
generated by the IDS. This capability criterion is important
in order to reduce the network security officer’s tension in
performing troubleshooting when analyzing the exact
attacker in their environment.

For second criteria, most of the malware detection


technique is incapable to detect multi-step attack. Therefore
this capability is required as attacker behavior is becoming
E. Proposed criteria for Malware Detection more sophisticated and it shall involve one to many, many
Technique to one and many to many attacks.
Three major detection techniques have been reviewed The third and fourth criteria, most of the IDS have the
and the objective of this research is to develop a new tendency to produce false positive and false negative alarm.

This false alarm reduction criterion is important as it closely
related to alert flooding issue. For fifth and sixth criterion,
the capability to detect both known and unknown attack is
required to ensure that the alert generated will overcome the
issue of alert flooding and false alert.

IV DISCUSSION AND ANALYSIS OF MALWARE


DETECTION TECHNIQUES

In the current trend, few researches such as [14], [15],


[16], [17] and [8] have been found to manipulate this
detection technique by combining either Signature-based
with Anomaly-based detection technique(Hybrid-SA) or
Anomaly-based with Specification-based detection
technique (Hybrid-SPA) in order to develop an effective
malware detector’s tool. Figure 3. Proposed generic taxonomy of malware detection technique

In this paper, a new proposes taxonomy of malware To further verify the relevancies of the above proposed
detection technique is proven to be effective by matching generic taxonomy of malware detection technique, the
the current malware detection technique: Signature-based researchers have review on 48 researches of various
detection, Anomaly-based detection and Specification-based malware detection techniques which can be mapped to the
detection with capability criteria propose by [13] as propose taxonomy in Figure 3. Table IV shows the related
discussed in section III. This analysis is summarized in literature review in malware detection techniques.
Table III.
TABLE IV
TABLE III Related literature review in malware detection techniques
Malware detection technique versus proposed capability criteria
(Capable=√, incapable=×)

Referring to Table III, all of the detection techniques


have the same capability to detect known attack. However,
anomaly-based and specification-based have the additional
capabilities to detect unknown attack. Anomaly-based has
the extra capabilities compare to other detection techniques
in terms of reducing false negative alert and detecting multi-
step attack. Nevertheless, it cannot reduce the false positive
alert which can only be reduced by using signature-based
and specification-based technique. V CONCLUSION AND FUTURE WORKS

Due to the incapability to reduce either false negative or In this study, the researchers have reviewed and
false positive alert, all of these techniques are incapable to analyzed the existing malware detection techniques and
reduce false alert. This has given an implication that there match it with the capability criteria propose by [13] to
are still some rooms for improvement in reducing false improve the IDS’s problem. From the analysis researcher
alarm. Based on the analyses, the researcher has propose an has proposed a new generic taxonomy of malware detection
improved solution for malware detection technique which techniques which is called Hybrid-Malware Detection
can either use combination of signature-based with Technique (Hybrid-MDT) which consists of Hybrid-SA
anomaly-based detection technique (Hybrid-SA) or detection and Hybrid-SPA detection technique. Both
specification-based with anomaly-based detection technique techniques in Hybrid-MDT shall complement the
(Hybrid-SPA) to complement each other weaknesses. weaknesses found in Signature-based, Anomaly-based and
Specification based technique. This research is a preliminary
These new technique is later on named by the researcher worked for malware detection. This will contribute ideas in
as Hybrid-Malware Detection Technique (Hybrid-MDT) malware detection technique field by generating an optimize
which shall consists of Hybrid-SA detection and Hybrid- rule set in IDS. Hence, the false alarm in the existing IDS
SPA detection technique as depicted in Figure 3. will be reduced.
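As an illustration of the Hybrid-SA idea discussed above, the sketch below combines a signature check for known attacks with a simple anomaly score over behavioural features for unknown ones. The signatures, features, normal profile and threshold are placeholders invented for this sketch, not values taken from any of the surveyed works.

# Illustrative sketch of Hybrid-SA detection: signatures catch known attacks,
# an anomaly score over simple features flags possible unknown attacks.
KNOWN_SIGNATURES = {"cmd.exe /c del", "/etc/passwd"}   # toy payload patterns

def signature_match(event_payload):
    return any(sig in event_payload for sig in KNOWN_SIGNATURES)

def anomaly_score(features, profile):
    # Distance of the observed feature values from the learnt "normal" profile.
    return sum(abs(features.get(k, 0.0) - v) for k, v in profile.items())

def hybrid_sa_detect(event_payload, features, profile, threshold=3.0):
    if signature_match(event_payload):
        return "known attack"                 # signature-based path
    if anomaly_score(features, profile) > threshold:
        return "possible unknown attack"      # anomaly-based path
    return "normal"

# Example usage with made-up feature values.
normal_profile = {"syscalls_per_s": 40.0, "failed_logins": 0.0}
print(hybrid_sa_detect("GET /index.html",
                       {"syscalls_per_s": 42.0, "failed_logins": 1.0},
                       normal_profile))       # -> "normal"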

REFERENCES [24] Castaneda, F., Can Sezer, E., & Xu, J. (2004). Worm Vs. Worm:
Preliminary Study of an Active Counter-Attack Mechanism. Paper
presented at the 2004 ACM Workshop on Rapid Malcode.
[1] “CERT/CC Statistics 2008", CERT Coordination Center, Software
[25] Christodorescu, M., & Somesh, J. (2003). Static Analysis of
Engineering Institute, Carnegie Mellon University Pittsburg, PA,
Executables to Detect Malicious Pattern. Paper presented at the 12th
2003. Retrieved August 2008, from: http://www.cert.org/stats/
USENIX Security Symposium.
[2] Autrel, F & Cuppens, F (2005), “Using an Intrusion Detection Alert
[26] Debbabi, M., Giasson, E., Ktari, B., Michaud, F., & Tawbi, N.
Similarity Operator to Aggregate and Fuse Alerts”, The 4th
(2000). Secure Self-Certified COTS. Paper presented at the IEEE
Conference on Security and Network Architectures, France, 2005.
International Workshops on Enabling Technologies:Infrastructure
[3] Mihai Christodorescu , Somesh Jha , Sanjit A. Seshia , Dawn Song ,
for Collaborative Enterprises.
Randal E. Bryant, “Semantics-Aware Malware Detection”,
[27] E. Schechter, S., Jung, J., & W. Berger, A. (2004). Fast detection of
Proceedings of the 2005 IEEE Symposium on Security and Privacy,
scanning worms infections. Paper presented at the 7th International
p.32-46, May 08-11, 2005
Symposium on Recent Advances in Intrusion Detection (RAID)
[4] Vasudevan, A., & Yerraballi, R., “SPiKE: Engineering Malware
2004.
Analysis Tools using Unobtrusive Binary-Instrumentation”.
[28] Filiol, E. (2006). Malware pattern scanning schemes secure against
Australasian Computer Science Conference (ACSC 2006),2006
black-box analysis. Journal of Computer Virol 2, 35-50.
[5] Karresand, M., “A proposed taxonomy of software weapons” (No.
[29] Forrest, S., S. Perelson, A., Allen, L., & Cherukuri, R. (1994). Self-
FOI-R-0840-SE): FOI-Swedish Defence Research Agency, 2003.
Nonself Discrimination in a Computer. Paper presented at the IEEE
[6] Cambridge, U. P., “Cambridge Advanced Learner’s Dictionary”
Symposium on Research in Security and Privacy.
Online. Retrieved 29 January 2008, from
[30] Ilgun, K., A. Kemmerer, R., & A. Porras, P. (1995, March 1995).
http://dictionary.cambridge.org
State Transition Analysis: A Rule- Based Intrusion Detection
[7] Idika, N., & Mathur, A. P. (2007). A Survey of Malware Detection
Approach. Paper presented at the IEEE Transactions on Software
Techniques. Paper presented at the Software Engineering Research
Engineering.
Center Conference, West Lafayette, IN 47907.
[31] Kirda, E., Kruegel, C., Vigna, G., & Jovanovic, N. (2006). Noxes: A
[8] Sekar, R., Gupta, A., Frullo, J., Shanbhag, T., Tiware, A., & Yang,
client-side solution for mitigating cross-site scripting attacks. Paper
H., “Specification-based Anomaly Detection: A New Approach for
presented at the 21st ACM Symposium on Applied computing.
DetectingNetwork Intrusions”, ACM Computer and Communication
[32] Kreibich, C., & Crowcroft, J. (2003). Honeycomb - Creating
Security Conference, 2002
Intrusion Detection Signatures Using Honeypots. Paper presented at
[9] Ko, C., Ruschitzka, M., & Levitt, K., “Execution monitoring of
the 2nd Workshop on Hot Topics in Network.
security critical programs in distributed systems: A Specification-
[33] Kumar, S., & H. Spafford, E. (1992). A generic virus scanner in
based Approach”, IEEE Symposium on Security and Privacy,1997.
C++. Paper presented at the 8th IEEE Computer Security
[10] Okazaki, Y., Sato, I., & Goto, S., “A New Intrusion Detection
Applications Conference.
Method based on Process Profilin”, Symposium on Applications and
[34] Lee, W., & J. Stolfo, S. (1998). Data Mining approaches for
the Internet (SAINT '02) IEEE, 2002.
intrusion detection. Paper presented at the 7th USENIX Security
[11] Ko, C., Fink, G., & Levitt, K., ‘Automated detection of
Symposium.
Vulnerabilities in priviliged programs by execution monitoring”,
[35] Li, W.-J., Wang, K., J. Stolfo, S., & Herzog, B. (2005).
10th Annual Computer Security Applications Conference, 1994.
Fileprints:Identifying file types by n-gram analysis. Paper presented
[12] Debar, H., & Wespi, A., “Aggregation and Correlation of Intrusion
at the 2005 IEEE Workshop on Information Assurance and Security,
Detection Alert”, International Symposium on Recent Advances in
United States Military Academy, West Point, NY.
Intrusion Detection, Davis, CA, 2001.
[36] Linn, C. M., Rajagopalan, M., Baker, S., Collberg, C., K. Debray, S.,
[13] Robiah Yusof, Siti Rahayu Selamat, Shahrin Sahib, “Intrusion Alert
& H. Hartman, J. (2005). Protecting against unexpected system calls.
Correlation Technique Analysis for Heterogeneous Log”,
Paper presented at the Usenix Security Symposium.
IJCNS,2008
[37] Masri, W., & Podgurski, A. (2005, 30 May 2005). Using dynamic
[14] Cowan, C., Pu, C., Maier, D., Walpole, J., Bakke, P., Beattie, S., et
information flow analysis to detect attacks against applications.
al, “Stackguard: Automatic adaptive detection and prevention of
Paper presented at the 2005 Workshop on Software Engineering for
buffer-overflow attacks”, 7th USENIX security Conference, 1998.
secure systems-Building Trustworthy Applications.
[15] G.J. Halfond, W., & Orso, A., “AMNESIA: Analysis and
[38] Milenkovic, M., Milenkovic, A., & Jovanov, E. (2005). Using
Monitoring for NEutralizing Sql-Injection Attacks”, 20th
Instruction Block Signatures to Counter Code Injection Attacks.
IEEE/ACM International Conference on Automated Software
ACM SIGARCH Computer Architecture News(33), 108-117.
Engineering, 2005.
[39] Mori, A., Izumida, T., Sawada, T., & Inoue , T. (2006). A Tool
[16] Bashah, N., Shanmugam, I. B., & Ahmed, A. M., ”Hybrid Intelligent
for analyzing and detecting malicious mobile code. Paper presented
Intrusion Detection System”, 2005 World Academy of Science,
at the 28th International Conference on Software Engineering.
Engineering and Technology, 2005.
[40] Peng, J., Feng, C., & W.Rozenblit, J. (2006). A Hybrid Intrusion
[17] Garcia-Teodoro, P., E.Diaz-Verdejo, J., Marcia-Fernandez, G., &
Detection and Visualization System. Paper presented at the 13th
Sanchez-Casad, L., “Network-based Hybrid Intrusion Detection
Annual IEEE International Symposium and Workshop on
Honeysystems as Active Reaction Schemes”, IJCSNS International
Engineering of Computer Based Systems (ECBS '06).
Journal of Computer Science and Network Security, 7, 2007
[41] R. Ellis, D., G.Aiken, J., S.Attwood, K., & D. Tenaglia, S. (2004). A
[18] A. Hofmeyr, S., Forrest, S., & Somayaji, A. (1998). Intrusion
Behavioral Approach to Worm Detection. Paper presented at the
detection using sequences of system calls. Journal of Computer
2004 ACM Workshop on Rapid Malcode.
Security, 151-180.
[42] Rabek, J. C., Khazan, R. l., Lewandowski, S. M., & Cunningham, R.
[19] Adelstein, F., Stillerman, M., & Kozen, D. (2002). Malicious Code
K. (2003). Detection of Injected, Dynamically Generated, and
Detection For Open Firmware. Paper presented at the 18th Annual
Obfuscated Malicious Code. Paper presented at the 2003 ACM
Computer Security Applications Conference (ACSAC '02), IEEE.
Workshop on Rapid Malcode.
[20] B. Lee, R., K. Karig, D., P. McGregor, J., & Shi, Z. (2003). Enlisting
[43] Sekar, R., Bendre, M., Dhurjati, D., & Bollineni, P. (2001). A Fast
hardware architecture to thwart malicious code injection. Paper
Automaton-Based Method for Detecting Anomalous Program
presented at the International Conference on Security in Pervasive
Behaviours. Paper presented at the IEEE Symposium on security
Computing (SPC)
and Privacy.
[21] Bergeron, J., Debbabi, M., Desharnais, J., M., E., M., Lavoie, Y., &
[44] Sekar, R., Bowen, T., & Segal, M. (1999). On preventing intrusions
Tawbi, N. (2001). Static Detection of Malicious Code in executables
by process behavior monitoring. Paper presented at the USENIX
programs. International Journal of Req Engineering
Intrusion Detection Workshop, 1999.
[22] Bergeron, J., Debbabi, M., M. Erhioui, M., & Ktari, B. (1999). Static
[45] Suh, G. E., Lee, J., & Devadas, S. (2004). Secure program execution
Analysis of Binary code to Isolate Malicious Behaviours. Paper
via dynamic information flow tracking. Paper presented at the
presented at the 8th Worksop on Enabling Technologies:
International Conference Architectural Support for Programming
Infrastructure for Collaborative Enterprises.
Languages and Operating Systems,2004.
[23] Boldt, M., & Carlsson, B. (2006). Analysing Privacy-Invasive
[46] Sulaiman, A., Ramamoorthy, K., Mukkamala, S., & Sung, A. (2005).
SoftwareUsing Computer Forensic Methods. from http://www.e-
Malware Examiner using disassembled code (MEDiC). Paper
evidence.info/b.html

presented at the 20th Annual Computer Security Application
Conference (ACSAC'04).
[47] Sung, A., Xu, J., Chavez, P., & Mukkamala, S. (2004). Static
Analyzer of Vicious Executables. Paper presented at the 20th Annual
Computer Security Applications Conferece (ACSAC '04), IEEE
[48] T. Giffin, J., Jha, S., & P. Miller, B. (2002). Detecting manipulated
remote call streams. Paper presented at the 11th USENIX Security
Symposium.
[49] Taylor, C., & Alves-Foss, J. (2002). NATE: Network Analysis of
anomalous traffic events, a low cost approach. Paper presented at the
New Security Paradigm Workshop, New Mexico, USA.
[50] W. Lo, R., N. Levitt, K., & A. Olsson, R. (1994). MCF: A Malicious
Code Filter. Computers and Society, 541-566.
[51] Wagner, D., & Dean, D. (2001). Intrusion Detection via static
analysis. Paper presented at the IEEE Symposium on Security and
Privacy 2001.
[52] Wang, K., & J. Stolfo, S. (2004). Anomalous payload-based network
intrusion detection. Paper presented at the 7th International
Symposium on (RAID).
[53] Wang, Y.-M., Beck, D., Vo, B., Roussev, R., & Verbowski, C.
(2005). Detecting Stealth Software with Strider Ghostbuster. Paper
presented at the International Conference on Dependable Systems
and Networks (DSN '05).
[54] Wespi, A., Dacier, M., & Debar, H. (2000). Intrusion detection using
variable-length audit trail patterns. Paper presented at the Recent
Advances in Intrusion Detection (RAID).
[55] Xiong, J. (2004). Act: Attachment chain tracing scheme for email
virus detection and control. Paper presented at the ACM Workshop
on Rapid Malcode (WORM).


Hybrid Intrusion Detection and Prediction multiAgent


System, HIDPAS
Farah Jemili Montaceur Zaghdoud Mohamed Ben Ahmed
RIADI Laboratory RIADI Laboratory RIADI Laboratory
Manouba University Manouba University Manouba University
Manouba 2010, Tunisia Manouba 2010, Tunisia Manouba 2010, Tunisia

Abstract— This paper proposes an intrusion detection and prediction system based on uncertain and imprecise inference networks, and describes its implementation. Given a historic of sessions, we propose a supervised learning method, coupled with a classifier, that extracts the knowledge needed to decide whether a session contains an intrusion and, if it does, to recognize its type and to predict the possible intrusions that will follow it. The proposed system takes into account the uncertainty and imprecision that can affect the statistical data of the historic. The systematic use of a single probability distribution to represent this type of knowledge presupposes overly rich subjective information and risks being partly arbitrary. One of the first objectives of this work was therefore to ensure consistency between the way information is represented and the information actually available. In addition, our system integrates host intrusion detection and network intrusion prediction within a global anti-intrusion system that operates as a HIDS (Host-based Intrusion Detection System) before operating as a NIPS (Network-based Intrusion Prediction System). The proposed anti-intrusion system thus combines two powerful tools, so that reliable host intrusion detection leads to equally reliable network intrusion prediction. In our contribution, we chose supervised learning based on Bayesian networks. The choice of modeling the historic of data with Bayesian networks is dictated by the nature of the learning data (statistical data) and by the modeling power of Bayesian networks. However, to account for the incompleteness that can affect knowledge of the parameters characterizing the statistical data and of the relations between phenomena, the system proposed in the present work uses, for the inference process, a propagation method based on a Bayesian-possibilistic hybridization. The resulting system is suited to reliability modeling while taking imprecision into account.

Keywords- uncertainty; imprecision; host intrusion detection; network intrusion prediction; Bayesian networks; bayesian possibilistic hybridization.

I. INTRODUCTION

The proliferation of Internet access to every network device, the use of distributed rather than centralized computing resources, and the introduction of network-enabled applications have rendered traditional network-based security infrastructures vulnerable to serious attacks.

Intrusion detection can be defined as the process of identifying malicious behavior that targets a network and its resources [1]. Malicious behavior is defined as any system or individual action that tries to use or access a computer system without authorization, as well as privilege abuse by those who have legitimate access to the system. The term attack can be defined as a combination of actions performed by a malicious adversary to violate the security policy of a target computer system or network domain [2]. Each attack type is characterized by the use of system vulnerabilities based on some feature values. Usually, there are relationships between attack types and the computer system characteristics used by the intruder. If we are able to reveal those hidden relationships, we will be able to predict the attack type.

From another side, an attack generally starts with an intrusion into some corporate network through a vulnerable resource, followed by further actions on the network itself. We can therefore define the attack prediction process as recognizing the sequence of elementary actions that make up the attack strategy. The use of distributed and coordinated techniques in attacks makes their detection more difficult: different events and specific pieces of information must be gathered from all sources and combined in order to identify the attack plan. It is thus necessary to develop an advanced attack strategy prediction system that can detect attack strategies, so that appropriate responses and actions can be taken in advance to minimize damage and avoid potential attacks.

Besides, the proposed anti-intrusion system should take into account the uncertainty that can affect the data. Uncertainty on parameters can have two origins [3]. The first source comes from the uncertain character of information, due to natural variability resulting from stochastic phenomena; this uncertainty is called variability or stochastic uncertainty. The second source is related to the imprecise and incomplete character of information due to a lack of knowledge; this uncertainty is called epistemic uncertainty. The systematic use of a single probability distribution to represent this type of knowledge presupposes overly rich subjective information and risks being partly arbitrary. The system proposed here offers a formal setting adapted to treating uncertainty and imprecision by combining probabilities and possibilities.

In this paper we propose an intrusion detection and prediction system which recognizes an upcoming intrusion and predicts the attacker's attack plan and intentions. In our approach, we apply graph techniques based on Bayesian
reasoning for learning. We further apply inference to recognize the attack type and predict upcoming attacks. The inference process is based on hybrid propagation, which takes into account both the uncertain and the imprecise character of information.

II. RELATED WORK

Several researchers have been interested in using Bayesian networks to develop intrusion detection and prediction systems. Axelsson [5] wrote a well-known paper that uses the Bayesian rule of conditional probability to point out the implications of the base-rate fallacy for intrusion detection; it clearly demonstrates the difficulty and necessity of dealing with false alarms [6]. In [7], a model is presented that simulates an intelligent attacker using Bayesian techniques to create a plan of goal-directed actions.

In [8], a naïve Bayesian network is employed to perform intrusion detection on network events. A naïve Bayesian network is a restricted network that has only two layers and assumes complete independence between the information nodes (i.e., the random variables that can be observed and measured). Kruegel [6] proposed an event classification scheme based on Bayesian networks; such networks improve the aggregation of different model outputs and allow one to seamlessly incorporate additional information.

Johansen [9] suggested a Bayesian system which would provide a solid mathematical foundation for simplifying a seemingly difficult and monstrous problem that today's Network IDS (NIDS) fail to solve. The Bayesian Network IDS (BNIDS) should have the capability to differentiate between attacks and normal network activity by comparing metrics of each network traffic sample. Govindu [10] wrote a paper which discusses the present state of Intrusion Detection Systems and their drawbacks. It highlights the need for developing an Intrusion Prediction System, which is the future of intrusion detection systems, and explores the possibility of bringing intelligence to the Intrusion Prediction System by using mobile agents that move across the network and use prediction techniques to predict user behavior.

III. INTRUSION DETECTION AND PREDICTION SYSTEM

The detection of certain attacks against a networked system of computers requires information from multiple sources. A simple example of such an attack is the so-called doorknob attack. In a doorknob attack the intruder's goal is to discover, and gain access to, insufficiently protected hosts on a system. The intruder generally tries a few common account and password combinations on each of a number of computers. These simple attacks can be remarkably successful [12]. An Intrusion Detection System, as the name suggests, detects possible intrusions [13]. An IDS installed on a network is like a burglar alarm system installed in a house: through various methods, both detect when an intruder/attacker/burglar is present, and both issue some type of warning when such a presence is detected [10].

There are two general methods of detecting intrusions into computer and network systems: anomaly detection and signature recognition [13, 14]. Anomaly detection techniques establish a profile of the subject's normal behavior (norm profile), compare the observed behavior of the subject with its norm profile, and signal intrusions when the subject's observed behavior differs significantly from its norm profile. Signature recognition techniques recognize signatures of known attacks, match the observed behavior with those known signatures, and signal intrusions when there is a match. Systems that use misuse-based techniques contain a number of attack descriptions, or 'signatures', that are matched against a stream of audit data looking for evidence of the modeled attacks. The audit data can be gathered from the network [15], from the operating system [16], or from application log files [6].

IDSs are usually classified as host-based or network-based. Host-based systems use information obtained from a single host (usually audit trails), while network-based systems obtain data by monitoring the trace of information in the network to which the hosts are connected [17].

A simple question that arises is: how can an Intrusion Detection System possibly detect every single unknown attack? Hence the future of intrusion detection lies in developing an Intrusion Prediction System [10]. Intrusion Prediction Systems must be able to predict the probability of intrusions on each host of a distributed computer system. Prediction techniques can protect systems from new security breaches that result from unknown methods of attack. In an attempt to develop such a system, we propose a global anti-intrusion system which detects and predicts intrusions based on hybrid propagation in Bayesian networks.

IV. BAYESIAN NETWORKS

A Bayesian network is a graphical modeling tool used to model decision problems containing uncertainty. It is a directed acyclic graph in which each node represents a discrete random variable of interest. Each node contains the states of the random variable that it represents and a conditional probability table (CPT) which gives the conditional probabilities of this variable given the realization of the other connected variables, based upon Bayes' rule:

Π(B|A) = Π(A|B) Π(B) / Π(A)     (1)

The CPT of a node contains the probabilities of the node being in a specific state given the states of its parents. The parent-child relationship between nodes in a Bayesian network indicates the direction of causality between the corresponding variables: the variable represented by the child node is causally dependent on the ones represented by its parents [18].
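To make equation (1) and the notion of a CPT concrete, the following minimal sketch (not part of the original paper; the node names and probabilities are invented for illustration) encodes a two-node network Attack → Alert and inverts it with Bayes' rule:

# Minimal illustration of a CPT and of Bayes' rule (eq. 1).
# All numbers are hypothetical; they do not come from the paper.
p_attack = {True: 0.01, False: 0.99}              # prior P(Attack)
cpt_alert = {                                     # CPT P(Alert | Attack)
    True:  {True: 0.95, False: 0.05},             # row for Attack = True
    False: {True: 0.02, False: 0.98},             # row for Attack = False
}

def posterior_attack(alert_observed=True):
    # Bayes' rule: P(Attack | Alert) = P(Alert | Attack) P(Attack) / P(Alert)
    p_alert = sum(cpt_alert[a][alert_observed] * p_attack[a] for a in (True, False))
    return {a: cpt_alert[a][alert_observed] * p_attack[a] / p_alert for a in (True, False)}

print(posterior_attack(True))   # e.g. {True: 0.324..., False: 0.675...}

Even with a very reliable alert (0.95 true-positive rate), the low prior of an attack keeps the posterior moderate, which is exactly the base-rate effect discussed in [5].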
Several researchers have been interested in using Bayesian networks to develop intrusion detection systems. Axelsson [5] wrote a well-known paper that uses the Bayesian rule of conditional probability to point out the implications of the base-rate fallacy for intrusion detection; it clearly demonstrates the difficulty and necessity of dealing with false alerts.
Kruegel [1] presented a model that simulates an intelligent attacker using Bayesian techniques to create a plan of goal-directed actions, and proposed an event classification scheme based on Bayesian networks. Bayesian networks improve the aggregation of different model outputs and allow one to seamlessly incorporate additional information.

Johansen [9] suggested that a Bayesian system provides a solid mathematical foundation for simplifying a seemingly difficult and monstrous problem that today's Network IDS fail to solve. He added that a Bayesian Network IDS should differentiate between attacks and normal network activity by comparing metrics of each network traffic sample.

Bayesian network learning has several advantages. First, it can incorporate prior knowledge and expertise by populating the CPTs; it is also convenient to introduce partial evidence and to find the probability of unobserved variables. Second, it is capable of adapting to new evidence and knowledge by belief updates through network propagation.

A. Bayesian Network Learning Algorithm

Methods for learning Bayesian graphical models can be partitioned into at least two general classes: constraint-based search and Bayesian methods. The constraint-based approaches [19] search the data for conditional independence relations from which it is in principle possible to deduce the Markov equivalence class of the underlying causal graph. Two notable constraint-based algorithms are the PC algorithm, which assumes that no hidden variables are present, and the FCI algorithm, which is capable of learning something about the causal relationships even when latent variables are present in the data [19].

Bayesian methods [21] use a search-and-score procedure to explore the space of DAGs, with the posterior density as the scoring function. There are many variations on Bayesian methods; however, most research has focused on the application of greedy heuristics, combined with techniques to avoid local maxima in the posterior density (e.g., greedy search with random restarts or best-first searches). Both constraint-based and Bayesian approaches have advantages and disadvantages. Constraint-based approaches are relatively quick and possess the ability to deal with latent variables; however, they rely on an arbitrary significance level to decide independencies.

Bayesian methods can be applied even with very little data, where conditional independence tests are likely to break down. Both approaches are able to incorporate background knowledge in the form of temporal ordering, or forbidden or forced arcs. Also, Bayesian approaches are capable of dealing with incomplete records in the database. The most serious drawback of the Bayesian approaches is the fact that they are relatively slow.

In this paper, we are dealing with incomplete records in the database, so we opted for the Bayesian approach and particularly for the K2 algorithm, which has shown high performance in many research works. The principle of the K2 algorithm, proposed by Cooper and Herskovits, is to define a database of variables X1,..., Xn and to build a directed acyclic graph (DAG) based on the calculation of a local score [22]. Variables constitute the network nodes; arcs represent "causal" relationships between variables.

The K2 algorithm used in the learning step needs:
• a given order between the variables, and
• the maximum number of parents of each node.

The K2 algorithm starts with a single node (the first variable in the defined order) and then incrementally adds connections with other nodes which increase the whole probability of the network structure, calculated using the score function S. A candidate new parent which does not increase the node probability cannot be added to the node's parent set.

S = ∏ j=1..qi [ (ri − 1)! / (Nij + ri − 1)! ] ∏ k=1..ri Nijk!     (2)

where, for each variable Vi, ri is the number of its possible instantiations, qi is the number of possible instantiations of its parent set C(Vi), Nij denotes the j-th instantiation of C(Vi) in the database, Nijk is the number of cases in D for which Vi takes its k-th value with C(Vi) instantiated to Nij, and Nij is the sum of Nijk over all values of k.
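The local score of equation (2) can be computed directly from counts in the database. The sketch below is illustrative, not the authors' code; the function name, the toy records and the log-space formulation (to avoid factorial overflow) are assumptions made here for readability:

import math
from collections import Counter

def k2_local_score(data, i, parents, r_i):
    # data: list of records (tuples of discrete values); i: index of variable Vi;
    # parents: indexes of the candidate parent set C(Vi); r_i: number of values of Vi.
    values_i = set(row[i] for row in data)
    n_ijk = Counter((tuple(row[p] for p in parents), row[i]) for row in data)
    n_ij = Counter(tuple(row[p] for p in parents) for row in data)
    score = 0.0
    for j, nij in n_ij.items():
        # log[(r_i - 1)! / (N_ij + r_i - 1)!]
        score += math.lgamma(r_i) - math.lgamma(nij + r_i)
        # sum over k of log(N_ijk!)
        score += sum(math.lgamma(n_ijk[(j, k)] + 1) for k in values_i)
    return score                                  # log of the score of eq. (2); higher is better

# Toy usage: score variable 2 with variable 0 as its only (binary) parent
rows = [(0, 1, 1), (0, 0, 1), (1, 1, 0), (1, 0, 0)]
print(k2_local_score(rows, i=2, parents=[0], r_i=2))

K2 would then, at each step, add to the parent set the candidate that most increases this score, as described above.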
V. JUNCTION TREE INFERENCE ALGORITHM

The most common method to perform discrete exact inference is the Junction Tree algorithm developed by Jensen [23]. The idea of this procedure is to construct a data structure called a junction tree, which can be used to answer any query through message passing on the tree.

The first step of the JT algorithm creates an undirected graph from the input DAG through a procedure called moralization. Moralization keeps the same edges, but drops their direction, and then connects the parents of every child. Junction tree construction then follows four steps:

• JT Inference Step 1: Choose a node ordering. Note that the node ordering will make a difference in the topology of the generated tree; an optimal node ordering with respect to the junction tree is NP-hard to find.
• JT Inference Step 2: Loop through the nodes in the ordering. For each node Xi, create a set Si of all its neighbours, then delete the node Xi from the moralized graph.
• JT Inference Step 3: Build a graph by letting each Si be a node. Connect the nodes with weighted undirected edges; the weight of the edge going from Si to Sj is |Si ∩ Sj|.
• JT Inference Step 4: Let the junction tree be the maximal-weight spanning tree of the cluster graph.
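A compact sketch of these four steps follows. It assumes the graph has already been moralized; the dictionary representation, the example graph and the Kruskal-style spanning-tree step are choices made here for illustration, not the authors' implementation:

import itertools

def junction_tree(moral_graph, order):
    # moral_graph: dict node -> set of neighbours (undirected, already moralized)
    # order: an elimination ordering (heuristic; an optimal ordering is NP-hard to find)
    g = {v: set(nbrs) for v, nbrs in moral_graph.items()}
    clusters = []
    # Steps 1-2: eliminate the nodes in order, recording one cluster per node
    for v in order:
        clusters.append(frozenset({v} | g[v]))
        for a, b in itertools.combinations(g[v], 2):   # connect the remaining neighbours
            g[a].add(b); g[b].add(a)
        for n in g[v]:
            g[n].discard(v)
        del g[v]
    # Step 3: cluster graph with separator weights |Si ∩ Sj|
    pairs = [(len(clusters[a] & clusters[b]), a, b)
             for a in range(len(clusters)) for b in range(a + 1, len(clusters))]
    pairs.sort(reverse=True)                           # heaviest separators first
    # Step 4: maximum-weight spanning tree (Kruskal with a union-find)
    parent = list(range(len(clusters)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]; x = parent[x]
        return x
    tree = []
    for w, a, b in pairs:
        if w > 0 and find(a) != find(b):
            parent[find(a)] = find(b)
            tree.append((clusters[a], clusters[b], w))
    return clusters, tree

# Example on the moralized graph A-B, A-C, B-C, C-D
print(junction_tree({"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}},
                    order=["A", "B", "C", "D"]))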
VI. PROBLEM DESCRIPTION

Inference in Bayesian networks is the post-calculation of uncertainty: knowing the states of certain variables (called observation variables), the inference process determines the states of some other variables (called target variables)
conditionally to the observations. The choice of representing knowledge by probabilities only, and therefore of supposing that the uncertainty of the available information has a stochastic origin, has repercussions on the results of uncertainty propagation through the Bayesian model.

The two sources of uncertainty (stochastic and epistemic) must be treated in different manners. In practice, while uncertain information is treated rigorously by classic probability distributions, imprecise information is much better handled by possibility distributions. The two sources of uncertainty do not exclude each other and are often related (for example: the imprecise measurement of an uncertain quantity).

A purely probabilistic propagation approach can generate overly optimistic results. This illusion is reinforced by the fact that information is sometimes imprecise or incomplete, and the classic probabilistic framework does not represent this type of information faithfully.

In the section below, we present a new propagation approach in Bayesian networks, called hybrid propagation, combining probabilities and possibilities. The advantage of this approach over classic probabilistic propagation is that it takes into account both the uncertain and the imprecise character of information.

VII. HYBRID PROPAGATION IN BAYESIAN NETWORKS

The mechanism of propagation is based on the Bayesian model; therefore, the junction tree algorithm is used for inference in the Bayesian network. The hybrid calculation, combining probabilities and possibilities, permits the propagation of both variability (uncertain information) and imprecision (imprecise information).

A. Variable Transformation from Probability to Possibility (TV)

Let us consider a probability distribution p = (p1,..., pi,..., pn) ordered as follows: p1 > p2 > … > pn. The possibility distribution π = (π1,…, πi,…, πn) obtained by the transformation (p → π) proposed in [24] satisfies π1 > π2 > … > πn. Every possibility is defined by:

πi = … , ∀ i = 1, 2, .., n     (3)

where k1 = 1 and ki, ∀ i = 2, 3, …, n.
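Since formula (3) of [24] is not fully reproduced here, the sketch below uses the widely cited Dubois-Prade probability-to-possibility transformation as a stand-in: like the TV transformation it maps an ordered probability distribution p1 ≥ p2 ≥ … ≥ pn to a possibility distribution with π1 = 1 and non-increasing πi. It is shown only to make the idea concrete and is not necessarily the exact transformation of [24]:

def prob_to_poss(p):
    # Dubois-Prade transformation (illustrative stand-in for eq. (3)).
    # p: probabilities sorted in decreasing order; returns pi with pi[i] = sum(p[i:]), pi[0] = 1.
    assert all(p[i] >= p[i + 1] for i in range(len(p) - 1)), "sort p in decreasing order first"
    return [sum(p[i:]) for i in range(len(p))]

# A sharply peaked distribution stays fully possible on its mode,
# while the tail states receive small possibility degrees.
print(prob_to_poss([0.7, 0.2, 0.1]))   # [1.0, 0.300..., 0.1]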
B. Probability Measure and Possibility Distribution

Let us consider a probability space (Ω, A, P). For every measurable set A ⊆ Ω, we can define its upper probability and its lower probability; in other terms, the value of the probability P(A) is imprecise: ∀ A ⊆ Ω, N(A) ≤ P(A) ≤ Π(A), where N(A) = 1 − Π(Ā).

Each couple of necessity/possibility measures (N, Π) can be considered as the lower and upper probability measures induced by a probability measure. The gap between these two measures reflects the imprecise character of the information: it amounts to defining a possibility distribution over a probability measure, and this possibility distribution reflects the imprecise character of the true probability of the event.

A probability measure is more reliable and informative when the gap between its upper and lower bounds is small, i.e. when the imprecision on the value of the variable is reduced; by contrast, a probability known only within a relatively large confidence interval is risky and not very informative.

C. Hybrid Propagation Process

The hybrid propagation proceeds in three steps:

1) Substitute the probability distribution of each variable in the graph by a probability distribution framed by possibility and necessity measures, using the variable transformation from probability to possibility (TV) applied to the probability distribution of each variable in the graph. The gap between the necessity and possibility measures reflects the imprecise character of the true probability associated with the variable.
2) Transform the initial graph into a junction tree.
3) Propagate the uncertain and imprecise uncertainty, which consists of both:
a) the classic probabilistic propagation of stochastic uncertainties in the junction tree through message passing on the tree, and
b) the possibilistic propagation of epistemic uncertainties in the junction tree; possibilistic propagation in the junction tree is a direct adaptation of the classic probabilistic propagation.

Therefore, the proposed propagation method:
1) preserves the modeling power of Bayesian networks (it permits the modeling of relations between variables),
2) is adapted to both stochastic and epistemic uncertainties,
3) yields as a result a probability described by an interval delimited by possibility and necessity measures.

VIII. HYBRID INTRUSION DETECTION AND PREDICTION SYSTEM

Our anti-intrusion system operates at two different levels: it integrates host intrusion detection and network intrusion prediction.

The intrusion detection consists in analyzing the audit data of the host in search of attacks whose signatures are stored in a signatures dataset. The intrusion prediction consists in analyzing the stream of alerts resulting from one or several detected attacks, in order to predict the possible attacks that will follow in the whole network.

Our anti-intrusion approach is based on hybrid propagation in Bayesian networks in order to benefit from the modeling power of Bayesian networks and from the power of possibilistic reasoning to manage imprecision.
A. Hybrid intrusion detection

The main objective of intrusion detection is to detect each security policy violation on an information system. The signature recognition approach, adopted in our contribution, analyzes audit data in search of attacks whose signatures are stored in a signatures dataset. Audit data are data of the computer system that report the operations carried out on it. A signatures dataset contains a set of lines; every line codes a stream of data (between two definite instants) between a source (identified by its IP address) and a destination (also identified by its IP address), under a given protocol (TCP, UDP...). Every line is a connection characterized by a certain number of attributes such as its length, the type of the protocol, etc. According to the values of these attributes, every connection in the signatures dataset is considered as being either a normal connection or an attack.

In our approach, the process of intrusion detection is considered as a classification problem. Given a set of identified connections and a set of connection types, our goal is to classify connections into the most plausible corresponding connection types.

Our approach for intrusion detection consists of four main steps [30]:

1) Important attributes selection: In a signatures dataset, every connection is characterized by a certain number of attributes such as its length, the type of the protocol, etc. These attributes were fixed by Lee et al. [31]. The objective of this step is to extract the most important attributes among the attributes of the signatures dataset. To do so, we proceed by a Multiple Correspondences Factorial Analysis (MCFA) of the attributes of the dataset, then we calculate the Gini index for every attribute of the dataset in order to visualize the distribution of the different attributes and to select the most important ones [32]. The result is a set of the most important attributes characterizing the connections of the signatures dataset. Some of these attributes can be continuous and may require discretization to improve the classification results.

2) Continuous attributes discretization: The selected attributes can be discrete (admitting a finite number of values) or continuous. Several previous works showed that discretization improves the performance of Bayesian networks [4]. To discretize continuous attributes, we opted for discretization by nested averages, which consists in cutting up the variable using successive averages as class limits. This method has the advantage of being strongly bound to the variable's distribution, but if the variable is cut up a large number of times, it risks producing empty or very heterogeneous classes in the case of very dissymmetric distributions. Thus, we use only one iteration, i.e. a binary discretization based on the average; this supposes that the behavior of the observation variables is not too atypical (a short sketch of this mean-based split is given at the end of this subsection).

3) Bayesian network learning: The set of important attributes, once discretized, together with the class of connection types, constitutes the set of entry variables for the Bayesian network learning step. The first step is to browse the set of entry variables to extract their different values and to calculate their probabilities. Then, we use the K2 probabilistic learning algorithm to build the Bayesian network for intrusion detection. The result is a directed acyclic graph whose nodes are the entry variables and whose edges denote the conditional dependences between these variables. To each variable of the graph is associated a conditional probability table that quantifies the effect of its parents.

4) Hybrid propagation in the Bayesian network: this consists of the three steps mentioned previously.

At the end of this step, every connection (normal or intrusion) in a host is classified into the most probable connection type. In case of intrusions detected in a host, one or several alerts are sent towards the intrusion prediction module, which is in charge of predicting the possible intrusions that can propagate in the whole network.
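The binary discretization of step 2 amounts to a single cut at the mean of each continuous attribute. The following sketch is illustrative (function name, labels and values are not from the paper):

def discretize_binary(values, labels=("v1", "v2")):
    # Binary discretization based on the average: one cut at the mean of the attribute.
    mean = sum(values) / len(values)
    return [labels[0] if v < mean else labels[1] for v in values], mean

# Example in the spirit of the thresholds of Table III:
codes, cut = discretize_binary([2, 10, 500, 700])
print(cut, codes)    # 303.0 ['v1', 'v1', 'v2', 'v2']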
B. Hybrid intrusion prediction

The intrusion prediction aims to predict attack plans, given one or several intrusions detected at the level of one or several hosts of a computer network. An intrusion detected at the level of a host results in one or several alerts generated by a HIDS. The intrusion prediction classifies alerts into the most plausible hyper-alerts, where each hyper-alert corresponds to a step in the attack plan; then, based on hyper-alert correlation, it deduces the possible attacks that will follow in the whole computer network [11].

1) Alerts Classification: The main objective of alerts classification is to analyze the stream of alerts generated by intrusion detection systems in order to contribute to attack plan prediction. In our approach, given a set of alerts, the goal of alerts classification is to classify alerts into the most plausible corresponding hyper-alerts.

Our approach for alerts classification consists of four main steps:

a) Important attributes selection: In addition to time information, each alert has a number of other attributes, such as source IP, destination IP, port(s), user name, process name, attack class, and sensor ID, which are defined in a standard document, "Intrusion Detection Message Exchange Format (IDMEF)", drafted by the IETF Intrusion Detection Working Group [20]. For the selection of the most important attributes, we proceed by a Multiple Correspondences Factorial Analysis (MCFA) of the different attributes characterizing the alerts. The attribute selection does not include time stamps; we will use time stamps in the attack plan prediction process in order to detect alert series. The result of this step is a set of the most important attributes characterizing the alerts of the alerts dataset.

b) Alerts aggregation: An alerts dataset generally contains a large number of alerts; most are raw alerts and several may refer to one and the same event. Alerts aggregation consists in exploiting the similarities between alert attributes in order to reduce the redundancy of alerts. Since alerts that are output by the same IDS and have the same attributes except time stamps correspond to the same step in the attack plan [26], we
aggregate alerts sharing the same sensor and the same attributes except time stamps, in order to obtain clusters of alerts where each cluster corresponds to one single step of the attack plan, called a hyper-alert (a short sketch of this grouping is given at the end of this list). Then, based on the results of this first step, we merge the clusters of alerts (hyper-alerts) corresponding to the same step of the attack plan. At the end of this aggregation step, we get one cluster of alerts (hyper-alert) for each step of the attack plan (i.e. hyper-alert = step of the attack plan). We regroup in one class all the observed hyper-alerts.

c) Bayesian network learning: The set of selected alert attributes, together with the class regrouping all the observed hyper-alerts, forms the set of entry variables for the Bayesian network learning step. The first step is to browse the set of entry variables in order to extract their different values and calculate their probabilities. Then, we use the K2 probabilistic learning algorithm to build the Bayesian network for alerts classification.

d) Hybrid propagation in the Bayesian network: this consists of the three steps mentioned previously.

At the end of this step, every generated alert is classified into the most probable corresponding hyper-alert.
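The aggregation rule of step b) groups alerts that share every attribute except the time stamp. The sketch below is illustrative; the field names follow the IDMEF-style attributes listed above, but the records, the function name and the chosen key fields are assumptions made here:

from collections import defaultdict

def aggregate_alerts(alerts, key_fields=("sensor", "src_ip", "dst_ip", "dst_port", "attack_type")):
    # Group raw alerts into hyper-alerts: same sensor and same attributes except time stamps.
    clusters = defaultdict(list)
    for alert in alerts:                       # alert: dict with the fields above plus 'time'
        key = tuple(alert[f] for f in key_fields)
        clusters[key].append(alert)
    return clusters                            # one cluster = one step of the attack plan

alerts = [
    {"sensor": "ids1", "src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
     "dst_port": 23, "attack_type": "Sadmind_Ping", "time": 100},
    {"sensor": "ids1", "src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
     "dst_port": 23, "attack_type": "Sadmind_Ping", "time": 160},
]
print({k: len(v) for k, v in aggregate_alerts(alerts).items()})   # one hyper-alert of size 2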
2) Attack plans prediction: Attack plan prediction consists in detecting complex attack scenarios, that is, scenarios implying a series of actions by the attacker. The idea is to correlate the hyper-alerts resulting from the previous step in order to predict, given one or several detected attacks, the possible attacks that will follow.

Our approach for attack plan prediction consists of three main steps:

a) Transaction data formulation [26]: we formulate transaction data for each hyper-alert in the dataset (a sketch of this construction is given at the end of this list of steps). Specifically, we set up a series of time slots with equal time interval, denoted ∆t, along the time axis. Given a time range T, we have m = T/∆t time slots. Recall that each hyper-alert A includes a set of alert instances with the same attributes except time stamps, i.e., A = [a1, a2, …, an], where ai represents an alert instance in the cluster. We denote NA = {n1, n2, …, nm} as the variable representing the occurrence of hyper-alert A during the time range T, where ni corresponds to the occurrence (ni = 1) or non-occurrence (ni = 0) of the alert A in a specific time slot. Using the above process, we can create a set of transaction data. Table 1 shows an example of the transaction data corresponding to hyper-alerts A, B and C.

TABLE I. AN EXAMPLE OF TRANSACTION DATA SET

Time Slot    A    B    C
1            1    0    0
2            1    0    1
3            1    1    0
…            …    …    …
m            1    0    0
b) Bayesian network learning: The set of the observed hyper-alerts forms the set of entry variables for the Bayesian network learning step. The first step is to browse the set of entry variables to extract their different values and to calculate their probabilities. Then, we use the K2 probabilistic learning algorithm to build the Bayesian network for attack plan prediction. The result is a directed acyclic graph whose nodes are the hyper-alerts and whose edges denote the conditional dependences between these hyper-alerts.

c) Hybrid propagation in the Bayesian network: this consists of the three steps mentioned previously.

At the end of this step, given one or several detected attacks, we can predict the possible attacks that will follow.
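The construction of Table 1 (step a) can be sketched as follows. Hyper-alert names, time stamps and the function signature are illustrative assumptions:

def transaction_data(hyper_alerts, t_start, t_end, dt):
    # hyper_alerts: dict name -> list of alert time stamps.
    # Returns, for each hyper-alert, its binary occurrence vector over the m = T/dt time slots.
    m = int((t_end - t_start) // dt)
    table = {}
    for name, stamps in hyper_alerts.items():
        occ = [0] * m
        for t in stamps:
            slot = int((t - t_start) // dt)
            if 0 <= slot < m:
                occ[slot] = 1                  # the hyper-alert occurred in this slot
        table[name] = occ
    return table

print(transaction_data({"A": [5, 12, 31], "B": [22]}, t_start=0, t_end=40, dt=10))
# {'A': [1, 1, 0, 1], 'B': [0, 0, 1, 0]}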
IX. HIDPAS SYSTEM AGENT ARCHITECTURE

The HIDPAS system architecture is composed of two interconnected layers of intelligent agents. The first layer is concerned with host intrusion detection: on each host of a distributed computer system, an intelligent agent is in charge of detecting a possible intrusion.

Each agent of the intrusion detection layer uses a signature intrusion database (SDB) to build its own Bayesian network. For every new suspect connection, the intrusion detection agent (IDA) of the concerned host uses hybrid propagation in its Bayesian network to infer the conditional evidence of intrusion given the settings of the new suspect connection. Therefore, based on the probability degree and on the gap between the necessity and possibility degrees associated with each connection type, we can perform a quantitative analysis on the connection types.

In the final selection of the possible connection type, we can select the type which has the maximum informative probability value. An informative probability is a probability delimited by two confidence measures where the gap between them is under a threshold.

Figure 1. HIDPAS system architecture (the diagram shows a host intrusion detection layer of IDA agents, each with its SDB, connected to a network intrusion prediction layer containing the IPA agent and its ADB)

In case of intrusion, the IDA agent informs the intrusion prediction agent (IPA), which is placed in the prediction layer,
about the eventuality of an intrusion on the concerned host and about its type.

The second layer is based upon one intelligent agent which is in charge of network intrusion prediction.

When the Intrusion Prediction Agent (IPA) is informed about a new intrusion happening on a host of the distributed computer system and about its type, it tries to compute the conditional probabilities that other attacks may ultimately happen. To accomplish this task, IPA uses another type of database (ADB) which contains historical data about alerts generated by sensors from different computer systems.

Given a stream of alerts, the IPA agent first outputs results as evidence to the inference process of the first graph, for alerts classification; second, it outputs the results of alerts classification to the inference process of the second graph, for attack plan prediction. Each path in the second graph is potentially a subsequence of an attack scenario. Therefore, based on the probability degree and on the gap between the necessity and possibility degrees associated with each edge, IPA can perform a quantitative analysis of the attack strategies.

The advantage of our approach is that we do not require a complete ordered attack sequence for inference. Thanks to Bayesian networks and hybrid propagation, we have the capability of handling partial order and unobserved activity evidence sets. In practice, we cannot always observe all of the attacker's activities, and can often only detect a partial order of attack steps due to the limitation or deployment of security sensors. For example, an IDA can miss detecting intrusions and thus produce an incomplete alert stream.

In the final selection of possible future goals or attack steps, IPA can either select the node(s) having the maximum informative probability value(s) or the one(s) whose informative probability value(s) is (are) above a threshold.

After computing the conditional probabilities of possible attacks, IPA informs the system administrator about the possible attacks.

X. HIDPAS SYSTEM IMPLEMENTATION

HIDPAS was implemented using the JADE multiagent platform. The dataset used for the intrusion detection implementation and experimentation is DARPA KDD'99, which contains signatures of normal connections and signatures of 38 known attacks gathered into four main classes: DOS, R2L, U2R and Probe.

A. DARPA'99 DATA SET

MIT Lincoln Lab's DARPA intrusion detection evaluation datasets have been employed to design and test intrusion detection systems. The KDD 99 intrusion detection datasets are based on the 1998 DARPA initiative, which provides designers of intrusion detection systems (IDS) with a benchmark on which to evaluate different methodologies [25]. To do so, a simulation is made of a factitious military network consisting of three 'target' machines running various operating systems and services. Three additional machines are then used to spoof different IP addresses to generate traffic.

Finally, there is a sniffer that records all network traffic using the TCP dump format. The total simulated period is seven weeks [27]. Packet information in the TCP dump file is summarized into connections. Specifically, "a connection is a sequence of TCP packets starting and ending at some well defined times, between which data flows from a source IP address to a target IP address under some well defined protocol" [27].

The DARPA KDD'99 dataset represents data as rows of TCP/IP dump where each row is a computer connection characterized by 41 features. Features are grouped into four categories:

1) Basic Features: basic features can be derived from packet headers without inspecting the payload.
2) Content Features: domain knowledge is used to assess the payload of the original TCP packets. This includes features such as the number of failed login attempts.
3) Time-based Traffic Features: these features are designed to capture properties that mature over a 2-second temporal window. One example of such a feature is the number of connections to the same host over the 2-second interval.
4) Host-based Traffic Features: these utilize a historical window estimated over the number of connections (in this case 100) instead of time. Host-based features are therefore designed to assess attacks which span intervals longer than 2 seconds.

In this study, we used the KDD'99 dataset, which counts almost 494,019 training connections. Based upon a Multiple Correspondences Factorial Analysis (MCFA) of the attributes of the KDD'99 dataset, we used data about the important features only. Table 2 shows the important features and the corresponding Gini index for each feature:

TABLE II. CONNECTIONS IMPORTANT FEATURES

N°    Feature                         Gini
A23   count                           0.7518
A5    src_bytes                       0.7157
A24   src_count                       0.6978
A3    service                         0.6074
A36   dst_host_same_src_port_rate     0.5696
A2    protocol_type                   0.5207
A33   dst_host_srv_count              0.5151
A35   dst_host_diff_srv_rate          0.4913
A34   dst_host_same_srv_rate          0.4831
A. DARPA’99 DATA SET A34 dst_host_same_srv_rate 0.4831

MIT Lincoln Lab’s DARPA intrusion detection evaluation


datasets have been employed to design and test intrusion To these features, we added the "attack_type". Indeed each
detection systems. The KDD 99 intrusion detection datasets training connection is labelled as either normal, or as an attack
are based on the 1998 DARPA initiative, which provides with specific type. DARPA'99 base counts 38 attacks which
designers of intrusion detection systems (IDS) with a can be gathered in four main categories:
benchmark on which to evaluate different methodologies [25]. 1) Denial of Service (dos): Attacker tries to prevent
To do so, a simulation is made of a factitious military legitimate users from using a service.
network consisting of three ‘target’ machines running various

2) Remote to Local (r2l): the attacker does not have an account on the victim machine, hence tries to gain access.
3) User to Root (u2r): the attacker has local access to the victim machine and tries to gain super user privileges.
4) Probe: the attacker tries to gain information about the target host.

Among the selected features, only service and protocol_type are discrete; the other features need to be discretized. Table 3 shows the result of the discretization of these features.

TABLE III. CONTINUOUS FEATURES DISCRETIZATION

N°    Feature                        Values
A23   count                          cnt_v1: m < 332.67007446
                                     cnt_v2: m ≥ 332.67007446
A5    src_bytes                      sb_v1: m < 30211.16406250
                                     sb_v2: m ≥ 30211.16406250
A24   src_count                      srv_cnt_v1: m < 293.24423218
                                     srv_cnt_v2: m ≥ 293.24423218
A36   dst_host_same_src_port_rate    dh_ssp_rate_v1: m < 0.60189182
                                     dh_ssp_rate_v2: m ≥ 0.60189182
A33   dst_host_srv_count             dh_srv_cnt_v1: m < 189.18026733
                                     dh_srv_cnt_v2: m ≥ 189.18026733
A35   dst_host_diff_srv_rate         dh_dsrv_rate_v1: m < 0.03089163
                                     dh_dsrv_rate_v2: m ≥ 0.03089163
A34   dst_host_same_srv_rate         dh_ssrv_rate_v1: m < 0.75390255
                                     dh_ssrv_rate_v2: m ≥ 0.75390255

The dataset used for the intrusion prediction implementation and experimentation is LLDOS 1.0, provided by DARPA 2000, which is the first attack scenario example dataset created for DARPA. It includes a distributed denial of service attack run by a novice attacker.

B. LLDOS 1.0 – SCENARIO ONE

DARPA 2000 is a well-known IDS evaluation dataset created by the MIT Lincoln Laboratory. It consists of two multistage attack scenarios, namely LLDOS 1.0 and LLDOS 2.0.2. The LLDOS 1.0 scenario can be divided into five phases as follows [29]:

1) Phase 1: the attacker scans the network to determine which hosts are up.
2) Phase 2: the attacker then uses the ping option of the sadmind exploit program to determine which hosts selected in Phase 1 are running the Sadmind service.
3) Phase 3: the attacker attempts the sadmind Remote-to-Root exploit several times in order to compromise the vulnerable machine.
4) Phase 4: the attacker uses telnet and rpc to install a DDoS program on the compromised machines.
5) Phase 5: the attacker telnets to the DDoS master machine and launches the mstream DDoS against the final victim of the attack.

We used an alert log file [28] generated by the RealSecure IDS. As a result of replaying the "Inside-tcpdump" file from DARPA 2000, RealSecure produces 922 alerts. After applying the proposed selection of important alert attributes, we used data about the important features only, as shown in Table 4.

TABLE IV. ALERTS IMPORTANT FEATURES

Feature          Gini
SrcIPAddress     0.6423
SrcPort          0.5982
DestIPAddress    0.5426
DestPort         0.5036
AttackType       0.4925

After applying the proposed alerts aggregation, we obtained 17 different types of alerts, as shown in Table 5.

TABLE V. HYPER-ALERTS REPORTED BY REALSECURE IN LLDOS 1.0

ID   Hyper-alert                     Size
1    Sadmind_Ping                    3
2    TelnetTerminaltype              128
3    Email_Almail_Overflow           38
4    Email_Ehlo                      522
5    FTP_User                        49
6    FTP_Pass                        49
7    FTP_Syst                        44
8    http_Java                       8
9    http_Shells                     15
10   Admind                          17
11   Sadmind_Amslverify_Overflow     14
12   Rsh                             17
13   Mstream_Zombie                  6
14   http_Cisco_Catalyst_Exec        2
15   SSH_Detected                    4
16   Email_Debug                     2
17   Stream_DoS                      1

C. SYSTEM IMPLEMENTATION

The HIDPAS system contains three interfaces:

1) Intrusion Detection Interface: Figure 2 shows the Bayesian network built by AGENT ID1. For every new connection, AGENT ID1 uses its Bayesian network to decide about the intrusion and its type.
2) Alerts Classification Interface: Figure 3 shows the Bayesian network built by the IPA for alerts classification. The IPA receives the alert messages sent by the intrusion detection agents about the detected intrusions and uses its Bayesian network to determine the hyper-alerts corresponding to these alerts.
3) Attack Plans Prediction Interface: Figure 4 shows the Bayesian network built by the IPA for attack plan prediction.
The IPA uses its Bayesian network to determine the eventual attacks that will follow the detected intrusions.

Figure 2. Intrusion Detection Interface
Figure 3. Alerts Classification Interface
Figure 4. Intrusion Prediction Interface

XI. EXPERIMENTATION

The main criteria that we considered in the experimentation of our system are the detection rate, the false alert rate, the alert correlation rate, the false positive correlation rate and the false negative correlation rate.

• Detection rate: defined as the number of examples correctly classified by our system divided by the total number of test examples.

TABLE VI. DETECTION RATE COMPARISON

Detection          Classic propagation    Hybrid propagation
Normal (60593)     99.52%                 100%
DOS (229853)       97.87%                 99.93%
Probing (4166)     89.39%                 98.57%
R2L (16189)        19.03%                 79.63%
U2R (228)          31.06%                 93.54%

Table 6 shows the high performance of our system based on hybrid propagation in intrusion detection.

• False alerts: Bayesian networks can generate two types of false alerts: false negative and false positive alarms. A false negative describes an event that the IDS fails to identify as an intrusion when one has in fact occurred. A false positive describes an event incorrectly identified by the IDS as an intrusion when none has occurred.

TABLE VII. FALSE ALERTS RATE COMPARISON

False alerts       Classic propagation    Hybrid propagation
Normal (60593)     0.48%                  0%
DOS (229853)       1.21%                  0.02%
Probing (4166)     5.35%                  0.46%
R2L (16189)        6.96%                  2.96%
U2R (228)          6.66%                  1.36%

Table 7 shows the gap between the false alert results given by the two approaches: the hybrid propagation approach gives the smallest false alert rates.

• Correlation rate: the rate of attacks correctly correlated by our system.
• False positive correlation rate: the rate of attacks correlated by the system when no relationship exists between them.
• False negative correlation rate: the rate of attacks that do have a relationship but that the system fails to identify as correlated attacks.
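These criteria are plain ratios over the evaluated examples. The short sketch below merely restates the definitions with invented counts, for readers who want to reproduce the tables from raw results:

def rates(correct, total, false_pos, false_neg):
    # Detection/correlation rate and false positive/negative rates as simple ratios.
    return {
        "rate": correct / total,
        "false_positive_rate": false_pos / total,
        "false_negative_rate": false_neg / total,
    }

print(rates(correct=955, total=1000, false_pos=63, false_neg=45))
# {'rate': 0.955, 'false_positive_rate': 0.063, 'false_negative_rate': 0.045}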
Table 8 shows the experimentation results about correlation measured by our system:

TABLE VIII. CORRELATION RATE COMPARISON

                                   Classic propagation    Hybrid propagation
Correlation rate                   95.5%                  100%
False positive correlation rate    6.3%                   1.3%
False negative correlation rate    4.5%                   0%

Table 8 shows the high performance of our system based on hybrid propagation in attack correlation and prediction. The use of hybrid propagation in Bayesian networks was
especially useful, because we had to deal with a lot of missing information.

XII. CONCLUSION

In this paper, we outlined a new approach based on hybrid propagation combining probability and possibility through a Bayesian network. Bayesian networks provide automatic learning from audit data. Hybrid propagation through the Bayesian network provides propagation of both stochastic and epistemic uncertainties, coming respectively from the uncertain and imprecise character of information.

The application of our system in the intrusion detection context helps detect both normal and abnormal connections with very considerable rates.

Besides, we presented an approach to identify attack plans and predict upcoming attacks. We developed a Bayesian network based system to correlate attack scenarios based on their relationships. We conducted inference to evaluate the likelihood of attack goal(s) and to predict potential upcoming attacks based on the hybrid propagation of uncertainties.

Our system demonstrates high performance when detecting intrusions and when correlating and predicting attacks. This is due to the use of Bayesian networks and of hybrid propagation within Bayesian networks, which is especially useful when dealing with missing information.

There are still some challenges in attack plan recognition. First, we will apply our algorithms to alert streams collected from live networks to improve our work. Second, our system can be improved by integrating an expert system able to provide recommendations based on the predicted attack scenarios.

REFERENCES
[1] Christopher Kruegel, Darren Mutz, William Robertson, Fredrik Valeur. Bayesian Event Classification for Intrusion Detection. Reliable Software Group, University of California, Santa Barbara, 2003.
[2] F. Cuppens and R. Ortalo. LAMBDA: A language to model a database for detection of attacks. In Third International Workshop on Recent Advances in Intrusion Detection (RAID'2000), Toulouse, France, 2000.
[3] C. Baudrit and D. Dubois. Représentation et propagation de connaissances imprécises et incertaines : application à l'évaluation des risques liés aux sites et aux sols pollués. Université Toulouse III – Paul Sabatier, Toulouse, France, March 2006.
[4] Dougherty J., Kohavi R., Sahami M., « Supervised and unsupervised discretization of continuous features », Proceedings of ICML'95, p. 194-202, 1995.
[5] S. Axelsson. The Base-Rate Fallacy and its Implications for the Difficulty of Intrusion Detection. In 6th ACM Conference on Computer and Communications Security, 1999.
[6] Christopher Kruegel, Darren Mutz, William Robertson, Fredrik Valeur. Bayesian Event Classification for Intrusion Detection. Reliable Software Group, University of California, Santa Barbara, 2003.
[7] R. Goldman. A Stochastic Model for Intrusions. In Symposium on Recent Advances in Intrusion Detection (RAID), 2002.
[8] A. Valdes and K. Skinner. Adaptive, Model-based Monitoring for Cyber Attack Detection. In Proceedings of RAID 2000, Toulouse, France, October 2000.
[9] Krister Johansen and Stephen Lee. Network Security: Bayesian Network Intrusion Detection (BNIDS), May 3, 2003.
[10] Surya Kumari Govindu. Intrusion Forecasting System, http://www.securitydocs.com/library/3110, 15/03/2005.
[11] Jemili F., Zaghdoud M., Ben Ahmed M., « Attack Prediction based on Hybrid Propagation in Bayesian Networks », In Proc. of the Internet Technology And Secured Transactions Conference, ICITST-2009.
[12] B. Landreth. Out of the Inner Circle: A Hacker's Guide to Computer Security. Microsoft Press, Bellevue, WA, 1985.
[13] Paul Innella and Oba McMillan. An introduction to Intrusion Detection. Tetrad Digital Integrity, LLC, December 6, 2001, URL: http://www.securityfocus.com, 2001.
[14] Brian C. Rudzonis. Intrusion Prevention: Does it Measure up to the Hype? SANS GSEC Practical v1.4b, April 2003.
[15] M. Roesch. Snort – Lightweight Intrusion Detection for Networks. In USENIX LISA 99, 1999.
[16] K. Ilgun. USTAT: A Real-time Intrusion Detection System for UNIX. In Proceedings of the IEEE Symposium on Research on Security and Privacy, Oakland, CA, May 1993.
[17] Biswanath Mukherjee, Todd L. Heberlein and Karl N. Levitt. Network intrusion detection. IEEE Network, 8(3):26-41, May/June 1994.
[18] Peter Spirtes, Thomas Richardson, and Christopher Meek. Learning Bayesian networks with discrete variables from data. In Proceedings of the First International Conference on Knowledge Discovery and Data Mining, pages 294-299, 1995.
[19] Peter Spirtes, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search. Springer Verlag, New York, 1993.
[20] IETF Intrusion Detection Working Group, « Intrusion Detection Message Exchange Format », http://www.ietf.org/internet-drafts/draft-ietf-idwg-idmef-xml-09.txt, 2002.
[21] Gregory F. Cooper and Edward Herskovits. A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 1992.
[22] Sanguesa R., Cortes U. Learning causal networks from data: a survey and a new algorithm for recovering possibilistic causal networks. AI Communications 10, 31-61, 1997.
[23] Frank Jensen, Finn V. Jensen and Soren L. Dittmer. From influence diagrams to junction trees. Proceedings of UAI, 1994.
[24] M. Sayed Mouchaweh, P. Bilaudel and B. Riera. « Variable Probability-Possibility Transformation », 25th European Annual Conference on Human Decision-Making and Manual Control (EAM'06), September 27-29, Valenciennes, France, 2006.
[25] DARPA. Knowledge Discovery in Databases, 1999. DARPA archive. Task Description, http://www.kdd.ics.uci.edu/databases/kddcup99/task.htm
[26] Qin Xinzhou, « A Probabilistic-Based Framework for INFOSEC Alert Correlation », PhD thesis, College of Computing, Georgia Institute of Technology, August 2005.
[27] Kayacik G. H., Zincir-Heywood A. N. Analysis of Three Intrusion Detection System Benchmark Datasets Using Machine Learning Algorithms, Proceedings of the IEEE ISI 2005, Atlanta, USA, May 2005.
[28] North Carolina State University Cyber Defense Laboratory. TIAA: A toolkit for intrusion alert analysis, http://discovery.csc.ncsu.edu/software/correlator/ver0.4/index.html
[29] MIT Lincoln Laboratory. 2000 DARPA intrusion detection scenario specific data sets, 2000.
[30] Jemili F., Zaghdoud M., Ben Ahmed M., « Intrusion Detection based on Hybrid Propagation in Bayesian Networks », In Proc. of the IEEE International Conference on Intelligence and Security Informatics, ISI 2009.
[31] Lee W., Stolfo S. J., Mok K. W., « A data mining framework for building intrusion detection models », Proceedings of the 1999 IEEE Symposium on Security and Privacy, 1999.
[32] Arfaoui N., Jemili F., Zaghdoud M., Ben Ahmed M., « Comparative Study Between Bayesian Network And Possibilistic Network In Intrusion Detection », In Proc. of the International Conference on Security and Cryptography, Secrypt 2006.
An Algorithm for Mining Multidimensional Fuzzy Association Rules

Neelu Khare (1), Neeru Adlakha (2), K. R. Pardasani (3)
(1) Department of Computer Applications, MANIT, Bhopal (M.P.). Email: neelukh_29@yahoo.com
(2) Department of Applied Mathematics, SVNIT, Surat (Gujrat). Email: neeru.adlakha21@gmail.com
(3) Department of Mathematics, MANIT, Bhopal (M.P.). Email: kamalrajp@hotmail.com
Abstract— Multidimensional association rule mining searches for interesting relationships among values from different dimensions/attributes in a relational database. In this method the correlation is among a set of dimensions, i.e., the items forming a rule come from different dimensions; therefore each dimension should be partitioned at the fuzzy set level. This paper proposes a new algorithm for generating multidimensional association rules by utilizing fuzzy sets. Given a database consisting of fuzzy transactions, the Apriori property is employed to prune the useless candidate itemsets.

Keywords- interdimension; multidimensional association rules; fuzzy membership functions; categories.

I. INTRODUCTION

Data Mining is a recently emerging field, connecting the three worlds of Databases, Artificial Intelligence and Statistics. The computer age has enabled people to gather large volumes of data. Every large organization amasses data on its clients or members, and these databases tend to be enormous. The usefulness of this data is negligible if "meaningful information" or "knowledge" cannot be extracted from it. Data Mining answers this need.

Discovering association rules from large databases has been actively pursued since the problem was first presented in 1993; it is a data mining task that discovers associations among items in transaction databases such as sales data [1]. Such an association could be of the kind "if a set of items A occurs in a sales transaction, then another set of items B will likely also occur in the same transaction". One of the best studied models for data mining is that of association rules [2]. This model assumes that the basic object of our interest is an item, and that data appear in the form of sets of items called transactions. Association rules are "implications" that relate the presence of items in transactions [16]. The classical example is the rules extracted from the content of market baskets: items are things we can buy in a market, and transactions are market baskets containing several items [17][18].

Association rules relate the presence of items in the same basket; for example, "every basket that contains bread contains butter", usually noted bread ⇒ butter [3]. The basic format of an association rule is an implication of the form A ⇒ B, where A and B are disjoint itemsets, i.e., A ∩ B = φ. The strength of an association rule can be measured in terms of its support and confidence. Support determines how often a rule is applicable to a given data set, while confidence determines how frequently items in B appear in transactions that contain A [5]. The formal definitions of these metrics are

Support s(A ⇒ B) = σ(A ∪ B) / N

Confidence c(A ⇒ B) = σ(A ∪ B) / σ(A)

In general, association rule mining can be viewed as a two-step process:
1. Find all frequent itemsets: by definition, each of these itemsets will occur at least as frequently as a predetermined minimum support count, min_sup.
2. Generate strong association rules from the frequent itemsets: by definition, these rules must satisfy minimum support and minimum confidence [6].
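The support and confidence definitions above can be computed directly from a transaction list; the following small sketch is illustrative (the baskets are invented):

def support(itemset, transactions):
    # s(A) = sigma(A) / N : fraction of transactions containing every item of A
    a = set(itemset)
    return sum(1 for t in transactions if a <= set(t)) / len(transactions)

def confidence(antecedent, consequent, transactions):
    # c(A => B) = sigma(A u B) / sigma(A)
    a, ab = set(antecedent), set(antecedent) | set(consequent)
    count_a = sum(1 for t in transactions if a <= set(t))
    count_ab = sum(1 for t in transactions if ab <= set(t))
    return count_ab / count_a if count_a else 0.0

baskets = [{"bread", "butter"}, {"bread", "milk"}, {"bread", "butter", "milk"}, {"milk"}]
print(support({"bread", "butter"}, baskets))          # 0.5
print(confidence({"bread"}, {"butter"}, baskets))     # 0.666...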
Mining answers this need. Association rule mining that implies a single predicate is
Discovering association rules from large databases referred as a single dimensional or intra-dimension association
has been actively pursued since it was first presented in 1993, rule since it contains a single distinct predicate with multiple
which is a data mining task that discovers associations among occurrences (the Predicate occurs more than once within the
items in transaction databases such as the sales data [1]. Such rule) [8]. The terminology of single dimensional or intra-
kind of associations could be "if a set of items A occurs in a dimension association rule is used in multidimensional
sale transaction, then another set of items B will likely also database by assuming each distinct predicate in the rule as a
occur in the same transaction". One of the best studied models dimension [11]. Association rules that involve two or more
for data mining is that of association rules [2]. This model dimensions or predicates can be referred as multidimensional
assumes that the basic object of our interest is an item, and that association rules. Rather than searching for frequent itemsets
data appear in the form of sets of items called transactions. (as is done in mining single dimensional association rules), in
Association rules are “implications” that relate the presence of multidimensional association rules, we search for frequent
items in transactions [16]. The classical example is the rules predicate sets (here the items forming a rule come from
extracted from the content of market baskets. Items are things different dimensions) [10]. In general, there are two types of
we can buy in a market, and transactions are market baskets multidimensional association rules, namely inter-dimension
containing several items [17][18]. association rules and hybrid-dimension association rules [15].
Association rules relate the presence of items in the same Inter-dimension association rules are multidimensional
basket, for example, “every basket that contains bread contains association rules with no repeated predicates. This paper
butter”, usually noted bread ⇒ butter [3]. The basic format of introduces a method for generating inter-dimension
an association rule is: An association is an implication of association rules. Here, we introduce the concept of fuzzy
expression of the form A ⇒ B, where A and B is disjoint transaction as a subset of items. In addition we present a
II. APRIORI ALGORITHM AND APRIORI PROPERTY

Apriori is an influential algorithm in market basket analysis for mining frequent itemsets for Boolean association rules [1]. The name Apriori is based on the fact that the algorithm uses prior knowledge of frequent itemset properties. Apriori employs an iterative approach known as a level-wise search, where k-itemsets are used to explore (k+1)-itemsets [2]. First, the set of frequent 1-itemsets is found, denoted by L1. L1 is used to find L2, the set of frequent 2-itemsets, which is used to find L3, and so on, until no more frequent k-itemsets can be found.

Property: All non-empty subsets of a frequent itemset must be frequent [5].

A. Itemsets in multidimensional data sets

Let RD be a relational database with m records and n dimensions [4]. It consists of a set of all attributes/dimensions D = {d1 ∧ d2 … ∧ dn} and a set of tuples T = {t1, t2, t3, …, tm} [10], where ti represents the i-th tuple. If there are n domains of attributes D1, D2, D3, …, Dn, then each tuple ti = (vi1 ∧ vi2 … ∧ vin), where vij ∈ Dj is the atomic value of tuple ti, i.e. the j-th value in the i-th record, 1 ≤ i ≤ m and 1 ≤ j ≤ n [9]. RD can thus be defined as: RD ⊆ D1 × D2 × D3 × … × Dn.

To generate multidimensional association rules, we search for frequent predicate sets. A k-predicate set contains k conjunctive predicates. Dimensions, which are also called predicates or fields, constitute a dimension combination with a formula (d1, d2, …, dn), in which dj represents the j-th dimension [9]. The form (dj, vij) is called an "item" in a relational database or other multidimensional data sets, and is denoted by Iij. That is: Iij = (dj, vij), where 1 ≤ i ≤ m and 1 ≤ j ≤ n. Suppose that A and B are items in the same relational database RD. A equals B if and only if the dimension and the value of item A are equal to the dimension and the value of item B, which is denoted by A = B. If it is not true that A equals B, then this is denoted by A ≠ B. A set constituted by some "items" as defined above is called an "itemset".

III. GENERATION OF FUZZY ASSOCIATION RULES

The Apriori algorithm ignores the number of items when determining the relationship of the items. The algorithm that calculates the support of itemsets just counts the number of occurrences of the itemsets in every record of transaction (shopping cart), without any consideration of the number of items in a record of transaction. However, based on human intuition, it should be considered that a larger number of items purchased in a transaction means that the degree of association among the items in the transaction may be lower. When calculating the support of itemsets in a relational database, the number of categories/values in every dimension/attribute will be considered to find the support of an item [8]. A new algorithm is proposed by considering that every item will have a relation (similarity) to the others if they are purchased together in the same record of transaction. They will have a stronger relationship if they are purchased together in more transactions. On the other hand, an increasing number of categories/values in a dimension will reduce the total degree of relationship among the items involved from the different dimensions [12]. The proposed algorithm is given in the following steps:

Step-1:
Determine λ ∈ {2, 3, …, n} (maximum value threshold). λ is a threshold that determines the maximum number of categories/values in a dimension by which the dimension can or cannot be considered in the process of generating rules. In this case, the process just considers all dimensions with the number of categories/values in the relational database RD less than or equal to λ. Formally, let DA = (D1 × D2 × D3 × … × Dn) be a universal set of attributes or a domain of attributes/dimensions [13]. M ⊆ DA is the subset of qualified attributes/dimensions for generating rules, i.e. those whose number of unique categories/values n(D) is not greater than λ:

M = {D | n(D) ≤ λ}    (1)

where n(D) is the number of categories/values in attribute/dimension D.

Step-2:
Set k = 1, where k is an index variable that determines the number of combined items in itemsets, called k-itemsets, where each item belongs to a different attribute/dimension.

Step-3:
Determine the minimum support for k-itemsets, denoted by βk ∈ (0, |M|), as a minimum threshold for a combination of k items appearing over the whole set of qualified dimensions, where |M| is the number of qualified dimensions. Here, βk may have a different value for every k.

Step-4:
Construct every candidate k-itemset, IK, as a fuzzy set on the set of qualified transactions, M. A fuzzy membership function, µ, is a mapping µ_IK : M → [0,1] defined by:

µ_IK(D) = inf_{iij ∈ Dj} [ ηD(iij) / n(Dj) ],  ∀Dj ∈ M    (2)

where IK is a k-itemset whose items belong to different dimensions. A Boolean membership function, η, is a mapping ηD : D → {0,1} defined by:

ηD(i) = 1 if i ∈ D, and 0 otherwise    (3)

such that if an item, i, is an element of D, then ηD(i) = 1; otherwise ηD(i) = 0.
Step-5:
Calculate the support for every candidate k-itemset using the following equation [7]:

Support(IK) = Σ_{D ∈ M} µ_IK(D)    (4)

M is the set of qualified dimensions as given in (1); it can be proved that (4) satisfies the following property:

Σ_{i ∈ D} Support(i) = |M|

For k = 1, IK can be considered as a single item.

Step-6:
IK will be stored in the set of frequent k-itemsets, Lk, if and only if support(IK) ≥ βk.

Step-7:
Set k = k+1, and if k > λ, then go to Step-9.

Step-8:
Look for possible/candidate k-itemsets from Lk-1 by the following rule: a k-itemset, IK, will be considered as a candidate k-itemset if IK satisfies:

∀F ⊂ IK, |F| = k-1 ⇒ F ∈ Lk-1

For example, IK = {i1, i2, i3, i4} will be considered as a candidate 4-itemset iff {i1, i2, i3}, {i2, i3, i4}, {i1, i3, i4} and {i1, i2, i4} are in L3. If no candidate k-itemset is found, then go to Step-9. Otherwise, the process goes to Step-3.

Step-9:
Similar to the Apriori algorithm, the confidence of an association rule, A ⇒ B, can be calculated by the following equation [14]:

Conf(A ⇒ B) = P(B|A) = Support(A ∪ B) / Support(A)    (5)

where A, B ∈ DA. It follows that (5) can also be represented by:

Conf(A ⇒ B) = [ Σ_{D ∈ M} inf_{i ∈ A ∪ B} µi(D) ] / [ Σ_{D ∈ M} inf_{i ∈ A} µi(D) ]    (6)

where A and B are any k-itemsets in Lk. (Note: µi(T) = µ{i}(T), for simplification) [12]. Therefore, the support of an itemset as given by (4) can also be expressed by:

Support(IK) = Σ_{D ∈ M} inf_{i ∈ IK} µi(D)    (7)

IV. AN ILLUSTRATIVE EXAMPLE

An illustrative example is given to better understand the concept of the proposed algorithm and how the process of generating fuzzy association rules is performed step by step. The process starts from a given relational database as shown in TABLE I.

TABLE I.
TID   A    B    C    D    E    F
T1    A1   B1   C1   D1   E1   F1
T2    A2   B2   C2   D1   E1   F2
T3    A2   B2   C2   D2   E1   F1
T4    A1   B3   C2   D1   E1   F4
T5    A2   B3   C2   D1   E2   F3
T6    A2   B1   C2   D1   E2   F1
T7    A1   B2   C1   D2   E1   F4
T8    A1   B2   C2   D1   E1   F2
T9    A1   B2   C1   D2   E1   F4
T10   A2   B3   C1   D1   E2   F3

Step-1:
Suppose that λ arbitrarily equals 3; that means a qualified attribute/dimension is regarded as an attribute/dimension with no more than 3 values/categories. The result of this step is the set of qualified attributes/dimensions as seen in TABLE II.

TABLE II.
TID   A    B    C    D    E
T1    A1   B1   C1   D1   E1
T2    A2   B2   C2   D1   E1
T3    A2   B2   C2   D2   E1
T4    A1   B3   C2   D1   E1
T5    A2   B3   C2   D1   E2
T6    A2   B1   C2   D1   E2
T7    A1   B2   C1   D2   E1
T8    A1   B2   C2   D1   E1
T9    A1   B2   C1   D2   E1
T10   A2   B3   C1   D1   E2

where M = {A, B, C, D, E}.

Step-2:
The process starts by looking for the support of 1-itemsets, for which k is set equal to 1.

Step-3:
Since λ = 3, the minimum supports are arbitrarily given as β1 = 2, β2 = 2, β3 = 1.5. That means the system just considers the support of k-itemsets that is ≥ 2 for k = 1, 2 and ≥ 1.5 for k = 3.

Step-4:
Every k-itemset is represented as a fuzzy set on the set of transactions, as given by the following results:

1-itemsets:
{A1} = {0.5/T1, 0.5/T4, 0.5/T7, 0.5/T8, 0.5/T9},
{A2} = {0.5/T2, 0.5/T3, 0.5/T5, 0.5/T6, 0.5/T10},
{B1} = {0.33/T1, 0.33/T6},
{B2} = {0.33/T2, 0.33/T3, 0.33/T7, 0.33/T8, 0.33/T9},
{B3} = {0.33/T4, 0.33/T5, 0.33/T10},
{C1} = {0.5/T1, 0.5/T7, 0.5/T9, 0.5/T10},
{C2} = {0.5/T2, 0.5/T3, 0.5/T4, 0.5/T5, 0.5/T6, 0.5/T8},
{D1} = {0.5/T1, 0.5/T2, 0.5/T4, 0.5/T5, 0.5/T6, 0.5/T8, 0.5/T10},
{D2} = {0.5/T3, 0.5/T7, 0.5/T9},
{E1} = {0.5/T1, 0.5/T2, 0.5/T3, 0.5/T4, 0.5/T7, 0.5/T8, 0.5/T9},
{E2} = {0.5/T5, 0.5/T6, 0.5/T10}

From Step-5 and Step-6, {B1}, {B2}, {B3}, {D2}, {E2} cannot be considered for further processing because their support is < β1.
2-itemsets:
{A1,C1} = {0.5/T1, 0.5/T7, 0.5/T9},
{A1,C2} = {0.5/T4, 0.5/T8},
{A2,C1} = {0.5/T10},
{A2,C2} = {0.5/T2, 0.5/T3, 0.5/T5, 0.5/T6},
{A1,D1} = {0.5/T1, 0.5/T4, 0.5/T8},
{A2,D1} = {0.5/T2, 0.5/T5, 0.5/T6, 0.5/T10},
{A1,E1} = {0.5/T1, 0.5/T4, 0.5/T7, 0.5/T8, 0.5/T9},
{A2,E1} = {0.5/T2, 0.5/T3},
{C1,D1} = {0.5/T1, 0.5/T10},
{C2,D1} = {0.5/T2, 0.5/T4, 0.5/T5, 0.5/T6, 0.5/T8},
{C1,E1} = {0.5/T1, 0.5/T7, 0.5/T9},
{C2,E1} = {0.5/T2, 0.5/T3, 0.5/T4, 0.5/T8},
{D1,E1} = {0.5/T1, 0.5/T2, 0.5/T4, 0.5/T8}

From Step-5 and Step-6, {A1,C1}, {A2,C1}, {A1,C2}, {A1,D1}, {A2,E1}, {C1,D1}, {C1,E1} cannot be considered for further processing because their support is < β2.

The supports obtained for the 1-itemsets and 2-itemsets are:

1-itemsets:
support({A1}) = 2.5, support({A2}) = 2.5, support({B1}) = 0.66, support({B2}) = 1.65,
support({B3}) = 0.99, support({C1}) = 2, support({C2}) = 3, support({D1}) = 3.5,
support({D2}) = 1.5, support({E1}) = 3.5, support({E2}) = 1.5

2-itemsets:
support({A1,C1}) = 1.5, support({A1,C2}) = 0.5, support({A2,C1}) = 0.5, support({A2,C2}) = 2.5,
support({A1,D1}) = 1.5, support({A2,D1}) = 2, support({A1,E1}) = 2.5, support({A2,E1}) = 1,
support({C1,D1}) = 1, support({C2,D1}) = 2.5, support({C1,E1}) = 1.5, support({C2,E1}) = 2,
support({D1,E1}) = 2

3-itemsets:
{A2,C2,D1} = {0.5/T2, 0.5/T5, 0.5/T6},
{C2,D1,E1} = {0.5/T2, 0.5/T4, 0.5/T8}

Step-5:
The support of each k-itemset is calculated as given in the following results:

3-itemsets:
support({A2,C2,D1}) = 1.5
support({C2,D1,E1}) = 1.5

TABLE III: L1 (β1 = 2)
{A1}   2.5
{A2}   2.5
{C1}   2
{C2}   3
{D1}   3.5
{E1}   3.5

TABLE IV: L2 (β2 = 2)
{A2,C2}   2.5
{A2,D1}   2
{A1,E1}   2.5
{C2,D1}   2.5
{D1,E1}   2
{C2,E1}   2

TABLE V: L3 (β3 = 1.5)
{A2,C2,D1}   1.5
{C2,D1,E1}   1.5

Step-6:
From the results obtained in Step-4 and Step-5, the sets of frequent 1-itemsets, 2-itemsets and 3-itemsets are given in TABLE III, TABLE IV and TABLE V, respectively.

Step-7:
This step just increments the value of k; if all elements of Lk are < βk, then the process goes to Step-9.

Step-8:
This step looks for possible/candidate k-itemsets from Lk-1. If no candidate k-itemset is found, then go to Step-9. Otherwise, the process goes to Step-3.

Step-9:
This step calculates the confidence of each possible association rule, for example:

Conf(A2 ⇒ C2) = 2.5 / 2.5 = 1
Conf(A2 ⇒ D1) = 2 / 2.5 = 0.8
Conf(A2 ∧ C2 ⇒ D1) = 1.5 / 2.5 = 0.6
Conf(A2 ⇒ C2 ∧ D1) = 1.5 / 2.5 = 0.6
Conf(C2 ∧ D1 ⇒ E1) = 1.5 / 2.5 = 0.6
Conf(C2 ⇒ D1 ∧ E1) = 1.5 / 3 = 0.5
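These confidence values follow directly from (5) using the supports obtained above. The short sketch below is illustrative only; the support values are hard-coded from the running example rather than recomputed, and the helper name `confidence` is ours.

```python
# Sketch: rule confidence per (5), Conf(A => B) = Support(A ∪ B) / Support(A),
# using the support values reported in the running example.
support = {
    frozenset({"A2"}): 2.5,
    frozenset({"C2"}): 3.0,
    frozenset({"A2", "C2"}): 2.5,
    frozenset({"A2", "D1"}): 2.0,
    frozenset({"A2", "C2", "D1"}): 1.5,
    frozenset({"C2", "D1", "E1"}): 1.5,
}

def confidence(antecedent, consequent):
    return support[frozenset(antecedent | consequent)] / support[frozenset(antecedent)]

print(confidence({"A2"}, {"C2"}))          # 2.5 / 2.5 = 1.0
print(confidence({"A2"}, {"D1"}))          # 2.0 / 2.5 = 0.8
print(confidence({"A2", "C2"}, {"D1"}))    # 1.5 / 2.5 = 0.6
print(confidence({"C2"}, {"D1", "E1"}))    # 1.5 / 3.0 = 0.5
```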
V. CONCLUSION
This paper introduced an algorithm for generating fuzzy multidimensional association rules as a generalization of inter-dimension association rules. The algorithm is based on the concept that a larger number of values/categories in a dimension/attribute means a lower degree of association
among the items in the transaction. Moreover, to generalize inter-dimension association rules, the concept of fuzzy itemsets is discussed in order to introduce the concept of fuzzy multidimensional association rules. Two generalized formulas were also proposed in relation to the fuzzy association rules. Finally, an illustrative example is given to clearly demonstrate the steps of the algorithm.

In future work, we will discuss and propose a method to generate conditional hybrid-dimension association rules using fuzzy logic, where a hybrid-dimension association rule is a hybridization between inter-dimension and intra-dimension association rules.

REFERENCES
[1] R. Agrawal, T. Imielinski, and A. N. Swami, "Mining association rules between sets of items in large databases," in Proceedings of the ACM SIGMOD International Conference on Management of Data, pp. 207-216, 1993.
[2] R. Agrawal and R. Srikant, "Fast algorithms for mining association rules," in Proc. 20th Int. Conf. Very Large Data Bases, pp. 487-499, 1994.
[3] L. Klemetinen, H. Mannila, and P. Ronkainen, "Finding interesting rules from large sets of discovered association rules," Third International Conference on Information and Knowledge Management, Gaithersburg, USA, pp. 401-407, 1994.
[4] M. Houtsma and A. Swami, "Set-oriented mining of association rules in relational databases," in Proc. of the 11th International Conference on Data Engineering, Taipei, Taiwan, pp. 25-33, 1995.
[5] R. Agrawal, A. Arning, T. Bollinger, M. Mehta, J. Shafer, and R. Srikant, "The Quest Data Mining System," Proceedings of the 2nd Int'l Conference on Knowledge Discovery in Databases and Data Mining, Portland, Oregon, August 1996.
[6] J. Han and M. Kamber, Data Mining: Concepts and Techniques, The Morgan Kaufmann Series, 2001.
[7] G. J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, New Jersey: Prentice Hall, 1995.
[8] Jurgen M. Jams, "An Enhanced Apriori Algorithm for Mining Multidimensional Association Rules," 25th Int. Conf. Information Technology Interfaces ITI, Cavtat, Croatia, 1994.
[9] Wan-Xin Xu and Ru-Jing Wang, "A Fast Algorithm of Mining Multidimensional Association Rules Frequently," Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, Dalian, 13-16 August 2006, IEEE.
[10] Rolly Intan, "A Proposal of Fuzzy Multidimensional Association Rules," Jurnal Informatika, Vol. 7, pp. 85-90, Nov. 2006.
[11] Reda Alhajj and Mehmet Kaya, "Integrating Fuzziness into OLAP for Multidimensional Fuzzy Association Rules Mining," Third IEEE International Conference on Data Mining (ICDM'03), 2003.
[12] Rolly Intan, "An Algorithm for Generating Single Dimensional Fuzzy Association Rule Mining," Jurnal Informatika, Vol. 7, No. 1, pp. 61-66, May 2006.
[13] Rolly Intan, "Mining Multidimensional Fuzzy Association Rules from a Normalized Database," International Conference on Convergence and Hybrid Information Technology, IEEE, 2008.
[14] Rolly Intan, Oviliani Yenty Yuliana, and Andreas Handojo, "Mining Multidimensional Fuzzy Association Rules from a Database of Medical Record Patients," Jurnal Informatika, Vol. 9, No. 1, pp. 15-22, May 2008.
[15] Anjna Pandey and K. R. Pardasani, "Rough Set Model for Discovering Multidimensional Association Rules," IJCSNS, Vol. 9, pp. 159-164, June 2009.
[16] Miguel Delgado, Nicolás Marín, Daniel Sánchez, and María-Amparo Vila, "Fuzzy Association Rules: General Model and Applications," IEEE Transactions on Fuzzy Systems, Vol. 11, No. 2, April 2003.
[17] J. Han, J. Pei, and Y. Yin, "Mining Frequent Patterns without Candidate Generation," SIGMOD Conference, pp. 1-12, ACM Press, 2000.
[18] J. Han, J. Pei, Y. Yin, and R. Mao, "Mining Frequent Patterns without Candidate Generation: A Frequent-Pattern Tree Approach," Data Mining and Knowledge Discovery, pp. 53-87, 2004.
[19] Hannes Verlinde, Martine De Cock, and Raymond Boute, "Fuzzy Versus Quantitative Association Rules: A Fair Data-Driven Comparison," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 36, No. 3, pp. 679-684, June 2006.
Analysis, Design and Simulation of a New System for Internet Multimedia Transmission Guarantee
O. Said, S. Bahgat, S. Ghoniemy, Y. Elawady
Computer Science Department
Faculty of Computers and Information Systems
Taif University, Taif, KSA.
dr_osaid@yahoo.com

Abstract: QoS is a very important issue for multimedia communication systems. In this paper, a new system that reinstalls the relation between the QoS elements (RSVP, routing protocol, sender, and receiver) during the multimedia transmission is proposed; then an alternative path is created in case of original multimedia path failure. The suggested system considers the resulting problems that may be faced within and after the creation of the rerouting path. Finally, the proposed system is simulated using the OPNET 11.5 simulation package. Simulation results show that our proposed system outperforms the old one in terms of QoS parameters like packet loss and delay jitter.

Key words: Multimedia Protocols, RSVP, QoS, DiffServ, MPLS

1. INTRODUCTION

The path that the multimedia streams are to follow should provide them with all the required Quality of Service (QoS). Suppose that the determined multimedia path gives the multimedia streams all the needed services. In this situation, an urgent question arises: what is the solution if, while the multimedia streams are being transmitted in the path, that path fails? This state may cause a loss of multimedia streams, especially when they are transported under the User Datagram Protocol (UDP). So, the solution is either to create an alternative path and divert the multimedia streams to flow in the new path, or to retransmit the failed multimedia streams. The second solution is very difficult (if not impossible) because the quantity of lost multimedia streams may be too huge to be retransmitted. So, the only available solution is to create another alternative path and complete the transmission process. To determine an alternative path, we face two open questions. The first question is: how is a free path, that will transport the multimedia streams to the same destination, created? The second question, which may be put forward after the path creation, is: can the created path provide the required QoS assigned to the failed one? From these queries and the RSVP analysis, it's obvious that the elements of resource reservation and QoS are RSVP, routing protocol, sender, and receiver. Also, it's notable that the resource reservation process occurs before the multimedia transmission. At the beginning of the multimedia streams transmission, the relations between the QoS elements are disjoint. Hence, if a change occurs in the reserved path during the multimedia streams transmission operation, the previously stated problems may occur [1], [2].

In this paper, a new system for internet multimedia transmission guarantee is proposed that solves this problem. This paper is organized as follows. In section 2, the related work that contains the RSVP analysis and the DiffServ & MPLS evaluation is illustrated; in section 3, the problem definition is introduced; in section 4, our system is demonstrated; in section 5, detailed simulation and evaluation of our system are shown. Finally, the conclusion and the future work are illustrated.

2. RELATED WORK (RSVP, DIFFSERV, AND MPLS)

The three systems that are closely related to our work are RSVP, DiffServ, and MPLS. In this section, a brief analysis of RSVP is introduced. In addition, an evaluation of DiffServ & MPLS is demonstrated.

A. RSVP operational model

The RSVP resource-reservation process begins when an RSVP daemon consults the local routing protocol(s) to obtain routes. A host sends Internet Group Management Protocol (IGMP) messages to join a multicast group and RSVP messages to reserve resources along the delivery path(s) from that group. Each router that is capable of participating in resource reservation passes incoming data packets to a packet classifier and then queues them as necessary in a packet scheduler. The RSVP packet classifier determines the route and QoS class for each packet. The RSVP scheduler allocates resources for transmission on the particular data link layer medium used by each interface. If the data link layer medium has its own QoS management capability, the packet scheduler
is responsible for negotiation with the data-link layer to In a Differentiated Service domain, all the IP packets
obtain the QoS requested by RSVP. The scheduler itself crossing a link and requiring the same DiffServ behavior
allocates packet-transmission capacity on a QoS-passive are said to constitute a behavior aggregate (BA). At the
medium, such as a leased line, and also can allocate other ingress node of the DiffServ domain, the packets are
system resources, such as CPU time or buffers. A QoS classified and marked with a DiffServ Code Point (DSCP),
request, typically originating in a receiver host application, which corresponds to their Behavior Aggregate. At each
is passed to the local RSVP implementation as an RSVP transit node, the DSCP is used to select the Per-Hop
daemon. The RSVP protocol is then used to pass the Behavior (PHP) that determines the queue and scheduling
request to all the nodes (routers and hosts) along the treatment to use and, in some cases, drop probability for
reverse data path(s) to the data source(s). At each node, the each packet [5], [6].
RSVP program applies a local decision procedure called From the preceding discussion, one can see the
admission control to determine whether it can supply the similarities between MPLS and DiffServ: an MPLS LSP
requested QoS. If admission control succeeds, the RSVP or FEC is similar to a DiffServ BA or PHB, and the MPLS
program sets the parameters of the packet classifier and label is similar to the DiffServ Code Point in some ways.
scheduler to obtain the desired QoS. If admission control The difference is that MPLS is about routing (switching)
fails at any node, the RSVP program returns an error while DiffServ is rather about queuing, scheduling and
indication to the application that originated the request. dropping. Because of this, MPLS and DiffServ appear to
However, it was found that unsurprisingly, the default best be orthogonal, which means that they are not dependent on
effort delivery of RSVP messages performs poorly in the each other, they are both different ways of providing
face of network congestion. Also, the RSVP protocol is higher quality to services. Further, it also means that it is
receiver oriented and it's in charge of setting up the possible to have both architectures working at the same
required resource reservation. In some cases, to reallocate time in a single network, but it is also possible to have
the bandwidth in a receiver oriented way could delay the only one of them, or neither of them, depending on the
required sender reservation adjustments [3], [4], see Fig. choice of the network operator. However, they face several
(1). limitations:
1. No Provisioning methods
2. No Signaling as (RSVP).
3. Works per hop (i.e. what to do with non-DS hop
in the middle?)
4. No per-flow guarantee.
5. No end user specification.
6. Large number of short flows works better with
aggregate guarantee.
7. Works only on the IP layer
8. DiffServ is unidirectional – no receiver control.
9. Long multimedia flow and flows with high
bandwidth need per flow guarantee.
10. Designed for static topology.
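To make the DiffServ part of this comparison concrete, the following small Python sketch is entirely illustrative: it is not taken from the paper, and the code-point numbers and PHB parameters are simplified placeholders rather than a normative mapping. It shows the shape of the mechanism described above: mark a packet with a DSCP at the ingress, then let a transit node choose its queue and drop treatment from that code point alone.

```python
# Illustrative DiffServ sketch: classify/mark at the ingress, then select a per-hop
# behavior (PHB) at each transit node from the DSCP only. Simplified placeholder values.

INGRESS_POLICY = {          # classification rule -> DSCP (behavior aggregate)
    "voice": 46,
    "video": 34,
    "best_effort": 0,
}

PHB_TABLE = {               # DSCP -> (queue name, drop probability) at a transit node
    46: ("priority", 0.0),
    34: ("assured", 0.01),
    0:  ("default", 0.1),
}

def mark(packet, traffic_class):
    """Ingress node: attach the DSCP corresponding to the packet's behavior aggregate."""
    packet["dscp"] = INGRESS_POLICY.get(traffic_class, 0)
    return packet

def forward(packet):
    """Transit node: pick queue/drop treatment from the DSCP alone (per-hop behavior)."""
    queue, drop_p = PHB_TABLE.get(packet["dscp"], PHB_TABLE[0])
    return f"queue={queue}, drop_probability={drop_p}"

pkt = mark({"payload": b"frame-0001"}, "video")
print(forward(pkt))         # queue=assured, drop_probability=0.01
```

The sketch also illustrates the limitation listed above: the treatment is purely per hop and per aggregate, with no end-to-end signaling or per-flow guarantee.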
3. PROBLEM FORMULATION
The routing and resource reservation protocols must be
capable to adapt a route change without failure. When new
Fig. 1 The RSVP Operations possible routes pop up between the sender and the receiver,
the routing protocol may tend to move the traffic onto the
B. DiffServ & MPLS new path. Unfortunately, there is a possibility that the new
path can’t provide the same QoS as the previous one. To
MPLS simplifies the routing process used in IP networks, avoid these situations, it has been suggested that the
since in an MPLS domain, when a stream of data traverses resource reservation protocol should be able to use a
a common path, a Label Switched Path can be established technique called the route pinning. This would deny the
using MPLS signaling protocols. A packet will typically routing protocol the right to change such a route as long as
be assigned to a Forwarding Equivalence Class (FEC) only it is viable. Route pinning is not as easy to implement as it
once, when it enters the network at the ingress edge Label sounds. With technologies such as Classless Inter-Domain
Switch Router, where each packet is assigned a label to Routing (CIDR) [7], [8], a pinned route can use as much
identify its FEC and is transmitted downstream. At each memory from a router as a whole continent! Also, this
LSR along the LSP, only the label is used to forward the problem may occur if a path station can’t provide the
packet to the next hop.
multimedia streams with the same required QoS during a • Detector


transmission operation. At this situation, the multimedia
streams should search about an alternative path to The detector and the connector are fired simultaneously.
complete the transmission process. The detector precedes the connector in visiting the
multimedia path’s stations. The detector visits each path
4. THE PROPOSED SYSTEM station to test the required QoS. If the detector notes a
From the problem definition and the RSVP analysis, it is defect in the QoS at any station (i.e. the station can’t
obvious that the elements of the resource reservation and provide the required QoS), then it sends to the connector
QoS are RSVP, routing protocol, sender, and receiver. an alarm message containing the station IP address and the
Also, it is notable that the resource reservation process failed required QoS, see algorithm 3for more detector
occurs before the multimedia transmission. At the discussion.
beginning of the multimedia streams transmission (i.e.
after the resources are reserved for the multimedia), the Algorithm 1
relations between the QoS elements are disjoint. So, if a 1- While the number of multimedia packets < > Null
change occurs in the reserved path during the multimedia 2-1 Begin
streams transmission operation, the above stated problem 2-2 The multimedia starts the transmission operation
may occur. 2-3 The connector agent is fired with the starting of the
If the connections between the QoS elements are transmission operation.
reinstalled during the multimedia streams transmission,
2-4 For I = 1 To N.
then the QoS problems may be solved. The reinstallation
process is accomplished by three additive components that 2-4-1 Begin
are called the proposed system components. 2-4-2 The connector agent tests the stored detector flag
value.
A. The proposed system components
2-4-3 If the flag value is changed to one.
The proposed system comprises three additive components 2-3-3-1 Go to the step number 3
in addition to the old system components. The additive
2-4-4 Else
components are 1- Connector. 2- Analyzer. 3- Detector. In
the following subsections, the definition and the functions 2-4-4-1 Complete the I For Loop
of each additive component are demonstrated. 2-4-5 End I For Loop.
2-5 While ((SW-SC) * TR ) < > Null)
• Connector 2-5-1 Begin
2-5-2 The connector extracts the nearest router address
This component is fired at the transmission starting and
around the failed station.
can be considered as a software class(s). The connector
has more than one task for helping the system to 2-5-3 The connector sends a message to the router
accomplish its target. The main function of the connector asking about alternative path (or station).
is to reinstall the connections between QoS elements in a 2-5-4 The connector receives all available paths in a
problem occurrence case, see algorithm 1 for more reply message sent by the router.
connector discussion. 2-5-5 The connector sends the router reply message to
the analyzer asking about the new QoS request
• Analyzer
for the new path.
This component, located at the receiver, is considered also 2-5-6 For J = PFS To M
as a software class(s). The main function of the analyzer is 2-5-6-1 Begin.
to extract the failed station(s) and its alternative(s). Also, 2-5-6-2 The connector tests the QoS.
the analyzer connects to RSVP at the receiver site to 2-5-6-2-1 If the QoS fails, the router
extract a QoS request or a flow description of the new path. returns to the step 2-5.
Also the analyzer uses some features of DiffServ and 2-5-6-2-2 Else, complete the J For Loop.
MPLS to acquire an alternative simple path with full QoS
2-5-6-3 End J For Loop.
requirements. The DiffServ provides the system with
simplest path and pushes the complexity to the network 2-5-7 (SW-SC) * TR = ((SW-SC) * TR) –1(Unite time)
edges. The MPLS provides our system with next hop for 2-5-8 End Inner while loop
each packet and to perform traffic conditioning on traffic 2-6 End outer while loop
streams flow in different domains (paths), see algorithm 2 2- End of the connector algorithm.
for more analyzer discussion. 2- End of the connector algorithm.
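Read together, Algorithm 1 describes the connector reacting to a detector alarm by querying the nearest router for alternative paths, checking their QoS, and handing an acceptable path to the sender. The following self-contained Python toy is our own reading of that loop; the station names, capacity figures and helper functions are invented for illustration and are not the paper's implementation.

```python
# Hedged toy of the connector's reaction in Algorithm 1: when the detector reports a
# station that cannot keep its QoS, pick the first alternative path whose stations all
# meet the required bandwidth. All names and data are illustrative assumptions.

REQUIRED_BANDWIDTH = 2.0   # Mbps, invented figure for the example

# capacity currently available at each station (router) on candidate paths
station_capacity = {"R1": 2.0, "R2": 1.0, "R3": 2.5, "R4": 2.0, "R5": 2.0}

def analyzer_accepts(path):
    """Stand-in for the analyzer's check: the path is usable if every station meets the QoS."""
    return all(station_capacity[s] >= REQUIRED_BANDWIDTH for s in path)

def connector_handle_alarm(failed_station, candidate_paths):
    """Stand-in for Algorithm 1 steps 2-5: pick the first alternative path that qualifies."""
    for path in candidate_paths:
        if failed_station not in path and analyzer_accepts(path):
            return path
    return None

# the detector (Algorithm 3) reports that R2 can no longer provide the reserved QoS
alarm = "R2"
alternatives = [["R1", "R2", "R4"], ["R1", "R3", "R4"], ["R1", "R5", "R4"]]
print(connector_handle_alarm(alarm, alternatives))   # ['R1', 'R3', 'R4']
```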
Algorithm 2 Table 1: The Data Stored in each System Component


1- If the stored connector flag is changed to one Connector stored Analyzer stored Detector stored data
2-1 The analyzer receives an old and a new paths data data
from the connector. Connector ID Connector ID Detector ID
2-2 The analyzer compares between the two paths Address of each path Analyzer ID Connector ID
and separates the similar stations and the station
different ones. Time of each visiting Connector address Connector address
2-3 The analyzer keeps the similar stations in a table station
(called same) and keeps the different stations in Analyzer ID RSVP connections QoS required from
another two tables (called Diff1 and Diff2). each path station
2-4 The analyzer constructs a mapping in relation to Analyzer Address Similar table Path structure
the QoS in the tables of different stations, see
Stream ID Different tables The connector flag
step 2.
value (default value
2-5 The analyzer cooperates with the RSVP to extract
=0)
the QoS request of a new path.
Detector flag value The connector flag QoS test value (default
2-6 The analyzer capsulate the results in a message
(default value =0) value (default value value =0)
and sends it to the connector.
=0)
2- The analyzer handling and mapping operations RSVP connections
2-1 For I = 1 to old[N].
2-2-1 Begin
2-2-2 If the old[I] = New[I] B. System approach
2-2-2-1 Begin.
2-2-2-2 old[I] = Same[K] After the resource reservation processes have been done,
2-2-2-3 K=K+1 the multimedia streams begin the flood across the
2-2-2-4 End IF. predetermined path. The connector accompanies the
2-2-3 Else multimedia streams at every station. When the connector
2-2-3-1 Begin. receives an error message from the detector, the connector
2-2-3-2 old[I] = Diff1[H]. starts to install the connections between the QoS elements.
2-2-3-3 old[I] = Diff1[H].
2-2-3-4 H = H+1
2-2-3-5 End Else.
2-2-4 If H=K
2-2-4-1 No change in the old QoS request.
2-2-5 For J = 1 to H
2-2-5-1 Begin
2-2-5-1 Diff2 [J] = Construct a QoS
request.
2-2-5-2 End J For Loop.
2-2-6 End I For Loop
3- End of the analyzer Algorithm.
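As described in Algorithm 2, the analyzer compares the two paths position by position, keeping shared stations in the Same table and the differing stations in Diff1 (old) and Diff2 (new). The compact Python rendering below is our own sketch of that comparison; the station addresses are invented and only the table names follow the text.

```python
# Sketch of the analyzer's path comparison in Algorithm 2: shared stations go to Same;
# stations that differ go to Diff1 (old path) and Diff2 (new path). A QoS request then
# only has to be constructed for the Diff2 stations.

def compare_paths(old_path, new_path):
    same, diff1, diff2 = [], [], []
    for old_station, new_station in zip(old_path, new_path):
        if old_station == new_station:
            same.append(old_station)
        else:
            diff1.append(old_station)
            diff2.append(new_station)
    return same, diff1, diff2

old = ["R1", "R2", "R4", "R6"]          # illustrative station addresses
new = ["R1", "R3", "R4", "R7"]
same, diff1, diff2 = compare_paths(old, new)
print(same)    # ['R1', 'R4']  -> existing reservation still valid here
print(diff1)   # ['R2', 'R6']  -> stations dropped from the old path
print(diff2)   # ['R3', 'R7']  -> stations needing a new QoS request
```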

Algorithm 3
1- While the number of multimedia packets < > Null
1-1 Begin
1-2 If the QoS test value = 1
1-2-1 Begin
1-2-2 The detector multicasts an alarm message including the connector ID.
1-2-3 The detector changes the test value to 0.
1-2-4 The detector tests the other succeeding stations.
1-2-5 End IF.
1-3 End of the While Loop
1-4 QoS test value = 0.
2- End of the detector algorithm.

Note: the symbols description is found in Appendix A.

Fig. 2 Functional Diagram of the Proposed System

The connector extracts the address of the failed station and the nearest router. The connector constructs a message that will be sent to the routing protocol asking for an alternative path (or station). The routing protocol provides the connector with the available path(s) that compensates the old one. The connector constructs a message, containing the old and new paths, and sends it to the analyzer. The analyzer extracts the failed station(s) and its corresponding one(s) in the new path. The analyzer connects the RSVP to extract the QoS request. The
analyzer constructs a message to be sent to the connector. message to access the alternative path (or station) that
The connector transforms the analyzer message to the replaces the failed path (or station). There are two types of
sender informing it with the new selected path. Hence; the this message, the request message and the reply message.
sender transmits the new multimedia streams using the The request message comprises the failed path and the
new connector path see figures (2), (3). reply message contains the alternative path. The request
message has the following fields, 1) Message type, 2)
Container ID, and 3) Old path. The reply message has the
following fields, 1) Message type, 2) Connector ID, and 3)
Alternative path(s).

• Between the connector and the analyzer (Request


and Reply).

This message is used to communicate the connector and


the analyzer. This message is fired when the connector
needs a QoS request for the new path. The message has
two types, the request message and the reply message. The
request message contains a new path that is accessed from
the router. The reply message contains the QoS request
that is extracted after the analysis operation. The request
message contains the following fields 1) Message type, 2)
Container ID, and 3) Alternative path. The reply message
contains the following fields 1) Message type, 2)
Connector ID, and 3) QoS request.
Fig. 3 Analyzer Operation
• Between the analyzer and the RSVP at the
C. System messages receiver (Request and Reply).
To complete the connections between proposed system
components, we have to demonstrate the structure of each This message is used to complete the dialog between the
used message. The proposed system contains five new analyzer and the RSVP at the receiver site. The analyzer
messages that can be stated as follows. handles the old path and its alternative(s) to extract the
1. From the connector to the sender. failed station(s) and its corresponding station(s) in the new
2. Between the connector and the routing protocol path. The analyzer needs it to construct a QoS request for
(router) (Request and Reply). the new path (s). This message has two types, the request
3. Between the connector and the analyzer (Request message and the reply message. The request message
and Reply). contains the new path that was sent by the connector. The
4. Between the analyzer and RSVP at the receiver reply message contains the QoS request that is extracted
site (Request and Reply). by the RSVP. The request message contains the following
5. From the detector to the connector fields, 1) Message type, 2) Analyzer ID, and 3) Alternative
path. The reply message contains the following fields, 1)
• From the connector to the sender Message type, 2) Analyzer ID, and 3) Required QoS.

This message joins the connector with the multimedia • From the detector to the connector
sender. This message is sent when the connector receives
the QoS request from the analyzer. This message structure This message can be used to alarm the connector with a
looks like the RSVP reservation request message but with new event occurrence. If the detector finds a failure at a
the connector ID (This field is used in case of more than station in relation to QoS, then it sends this message to the
one connector in the proposed system). connector asking to start its function for solving the
problem. The message contains the following fields, 1)
• Between the connector and the routing protocol Message type, 2) Connector ID, 3) QoS request, and 4)
(Request and Reply). Address of the failed station.

This message joins the connector with the router or the D. Decreasing the number of system messages
routing protocol. This message is fired when the detector It is notable that our system contains a number of
alarms the connector that a QoS failure is occurred at a messages that may cause a network overload. To make our
station in the multimedia path. The connector needs this system suitable for every network state, a new strategy to
decrease a number of sent and received messages should 4. The links between the workstations (video
be demonstrated. This strategy is built on the cumulative transmitters and receivers), are 1 Mbps. The links
message idea. For the detector component, it’s clear that between the routers are 2 Mbps.
its job is to test if each router (station) can provide the 5. For internet simulation, the routers are connected
multimedia with required QoS or not. In case of network via IP cloud.
overload, the detector can capsulate its messages in one
message. The capsulated message contains the addresses
of the QoS failed stations that not visited by the
multimedia streams in the transmission trip. For the
analyzer component, it can use the same idea during the
communication with the DiffServ and MPLS provided that
the multimedia streams keep away from the analyzer
transactions.
5. PERFORMANCE STUDY
In this section, the performance of the suggested multi-
resource reservation system is studied. In our simulation
the network simulator OPNET 11.5 [9] is used. A
distributed reservation-enabled environment, with multiple
distributed services deployed and multiple clients
requesting these services is simulated. In particular, for
runtime computation of end-to-end multi-resource
reservation plans, the performance of the proposed system
with the best effort communication system (old system) is
compared. The key performance metrics in our simulations
are: 1) End-to-end delay, 2) Packet loss, 3) Packet Loss in
Case of Compound Services, 4) Re-Routing State, 5) Fig. 4 Simulation Model Infrastructure
Reservation Success Rate, 6) Utilization, and 7) Delay
jitter. These parameters are evaluated for an increasing 5.2 General Notes and Network Parameters
network load. Also, in our simulations, we compare
between our system and the DiffServ && MPLS. In our 1. The data link rate and queue size for each queue
simulation, Abhay Agnihotri study [10] is used to build the scheme are fixed.
simulation environment. 2. The multimedia traffics are considered MPEG
with all characters.
A. Simulation Setup 3. The small queue size didn’t affect the queue
The infrastructure of the simulation contains the following delay.
items: 4. Inadequate data link rate causes more packet
1. 3 Ethernet routers to send and receive the drops and too much consumed bandwidth.
incoming traffics and police it according to the 5. Data link rate are fixed at 2 Mbps between the
QoS seniors specified in the proposed system, routers.
DiffServ, MPLS, and RSVP. 6. For FIFO queue scheme, the queue size was fixed
2. 15 video transmitters distributed on the router 1 at 400 packets.
and the router 2 as follows: 10 video transmitters 7. The traffic pattern (continuous or discrete), the
are connected to router 1 and 5 are connected to protocol (TCP or UDP), and application
router 2. The video workstations used to transmit (prioritized or not) are considered input
375 MPEG video packets per second, of size parameters.
1000 bytes. Each transmitter can send the 8. The output parameters are calculated as regards
multimedia packets only if it has a full required the RSVP, DiffServ, MPLS, and our proposed
QoS like specified priority interactive, streaming, technique.
full bandwidth, specified delay jitter, and 9. It’s supposed that the number of multimedia
excellent effort. packets is increased with simulation time.
3. 15 video receivers distributed on the router 2 and 10. The simulation of the old system can be found at
the router 3 as follows: 10 video receivers are [11], [12].
connected to the router 2 and 5 are connected to
the router 3.
B. Simulation Results packet loss in our system is decreased compared to the old
In our simulation, the parameters of multimedia and system. This decrease is justified by the following; the
network QoS are scaled. The curves below contain a increasing in the network load means the increasing in the
comparison between the old system (RSVP, DiffServ, and network hosts and this require services with different
MPLS) and the new proposed system. qualities. When the number of services and resources
increases, the old system efficiency decreases hence; the
• End-to-End Delay number of packet loss increases. Unlike the old system,
our system uses the detector, the connector, and the
One of the key requirements of high speed packet analyzer, to handle a failure that occurred in the old system
switching networks is to reduce the end-to-end delay in before the multimedia packets affect and this promotes its
order to satisfy real time delivery constraints and to efficiency. The number of packet loss is approximately
achieve the necessary high nodal throughput for the equal especially before the middle of simulation time. The
transport of voice and video [13]. Figure 5 displays the notable packet loss in our system comes from making the
end-to-end delay that may result from our computations, analyzer component inactive. The system fault tolerance
component messages and a buffer size. It’s clear that our will be discussed in the future work.
system computations didn’t affect the delay time. This is
because the computations are done during the multimedia
transmission even a path failure is detected. Also, the old
one uses the rerouting technique when finds a failure at
any path station. The rerouting operations load the old
system with more computations that will increase the time
delay. In addition, our proposed system uses the
cumulative message technique in case of network overflow.
Figure (6): Packet Loss (packet loss percentage vs. simulation time)
• Packet loss in case of compound services

This metric scales the efficiency of our system as regards


the complete reservation of the resources that are required
Simulation Time quality of the compound service. The compound service is
a service that needs other one(s) to be reserved (dependant
Fig. 5 End-to-End Delay service). The curve in figure 7 shows the relation between
the number of lost bits versus the generic times. It’s
• Packet Loss notable that the efficiency of our system in compound
service reservation is better than the old one. This
This metric demonstrates the number of packet loss that indicates that the old system has a delay in dealing with
occurred in the proposed system and the old system. The the required compound services and this causes a loss of
diagram found in figure 6 demonstrates the packet loss huge number of bits especially at the start of simulation
versus the time unit (it’s supposed that the network load is time.
increased with the time). It’s obvious that the number of
• Reservation Success Rate

This metric scales the efficiency of the proposed system as


regard the resource reservation. The diagram in figure 9
shows the success reservation rates per time unit. It is

observed that the success reservation rate in our system


increases the success reservation rate in the old system.
This increasing is due to efficiency of the detector in fault
detection at any resource before it is used, in addition,
efficiency of the connector in finding and handling the
alternative solution. Also, the difference between the two
systems is notable at the second hour of simulation time.




Fig. 7 Packet Loss in Case of Compound Services

• Re-Routing State

To meet high throughput requirements, paths are selected,


resources reserved, and paths are recovered in case of
failure. This metric should be scaled to make sure that our
new system has an ability to find a new path when a
failure occurred. This metric scales the rerouting state for
our system and old one. The curve in figure 8 shows the
relation between the number of recovered paths versus
simulation time for new system and old one.

Fig. 9 Reservation Success Rate

• Utilization

This metric scales the efficiency of our system additive


components (the connector, the analyzer, and the detector).
The efficiency of the connector is scaled by the number of
successful connections in relation to the number of stations
that cannot provide their QoS. The efficiency of the
analyzer is scaled by the number of successful QoS
requests extraction in relation to the number of its
connections with the connector. The efficiency of the
detector is scaled by the number of failed point’s detection
in relation to the number of failed points in the new system
during the simulation time. For accuracy, all the
components efficiency is scaled under different network
loads. Figure 10 shows the average efficiency of three
system components compared with the old system
efficiency. The old system efficiency is calculated with a
percentage of the services that are correctly reserved with
the same required quality.
Fig. 8 Re-Routing State
6. CONCLUSION
In this paper, a brief analysis for the RSVP, the DiffServ,
and the MPLS is demonstrated. Also, the QoS problems
that may occur during the multimedia transmission
are demonstrated. A new system to solve the QoS problem

is introduced. The proposed system adds new three


additive components, called connector, analyzer, and
detector, over the old RSVP system to accomplish its
target. A simulated environment is constructed and
implemented to study the proposed system performance. A
network simulator called OPNET 11.5 is used in the
environment simulation construction. Finally, detailed
comments are demonstrated to clarify the extracted
simulation results. The test-bed experiments showed that
our proposed system increases the efficiency of the old
system by approximately 40%.

7. FUTURE WORK


Fig. 10 Utilization (System Efficiency)
To complete our system efficiency, the fault tolerance problem should be addressed. What will be done if one of the system
components fails? In our simulation, we faced this
• Delay Jitter problem in packet loss diagram; hence we should find an
alternative component (software or hardware) to replace
This metric is introduced to make sure that the additive
the failed one and solve this problem. The suggested
components didn’t affect the multimedia packets delay
solution is to use a multi agent technology instead of one
jitter. The delay jitter as regards the multimedia streams is
agent. Consequently, we simulate the multi agent QoS
a very important QoS parameter. The plot in figure 11
system and show the results. We will apply the proposed
describes the relation between the delay jitter and the first
system with different types of multimedia data. This will
1500 packets sent by the new system. In the new system’s
make our system goes to the standardization. Hence, we
curve, it is obvious that the delay jitter is less than the old
can transform the proposed system to a new application
system’s curve in the most simulation time. So, the
layer protocol used for solving the multimedia QoS
additive components operate in harmony without affecting
problems.
the delay jitter of the multimedia packets.
ACKNOWLEDGMENT

The authors would like to convey thanks to the Taif


University for providing the financial means and facilities.
This paper is extracted from the research project number
1/430/366 that is funded by the deanship of the scientific

research at Taif University.

REFERENCES
[1] K. RAO, Z. S. Bojkovic, and D. A. Milovanovic, Multimedia
Communication Systems Techniques, Standards, and
Networks, Prentice-Hall Inc. Upper Saddle River, NI, 2003

[2] G. Malkin, R. Minnear, RIPng for IPv6, Request For Comment


(RFC) 2080, January 1997.

[3] R. Guerin, S. Kamat, S. Herzog, QoS Path Management with


RSVP, Internet Engineering Task Force (IETF) Draft, March
20, 1997.

Fig. 11 Delay Jitter
[4] Marcos Postigo- Bois, and Jose L. Melus, "Performance Dr. Omar Said is currently assistant professor
Evaluation of RSVP Extension for a guaranteed Delivery
Scenario", Computer Communication, Volume 30, Issue 9,
in the Computer Science at Dept. of Computer
June, 2007. Science, Taif University, Taif, KSA. He
received Ph.D degree from Menoufia
[5] B. Davie, and Y. Rekhter, MPLS Technology and
University, Egypt. He has published many
Applications, Morgan Kaufmann, San Francisco, CA, 2000.
papers at international journals and
[6] S. Blake et al, An Architecture for Differentiated Services, conferences. His research areas are Computer Networking,
Request For Comment (RFC) 2475, December 1998. Internet Protocols, and Multimedia Communication

[7] Daniel Zappala, Bob Braden, Deborah Estrin, Scott Shenker,


Interdomain Multicast Routing Support for Integrated Services Prof. Sayed F. Bahgat is a Professor in the
Networks, Internet Engineering Task Force (IETF) Internet-
Draft, March 26, 1997.
Department of Scientific computing, Faculty
of Computer and Information Sciences, Ain
[8] Y. Rekhter,C. Topolcic, Exchanging Routing Information Shams University, Cairo, Egypt. He received
Across Provider Boundaries in the CIDR Environment, his Ph.D. from the Illinois Institute of
Request For Comment (RFC) 1520, September 1993 Technology, Chicago, Illinois, U.S.A., in 1989. From
2003 to 2006, he was the head of Scientific Computing
[9] http://www.opnet.com/. Department, Ain Shaams University, Cairo, Egypt. From
2006 to 2009 he was the head of Computer Science
[10] Abhay Agnihotri.” Study and Simulation of QoS for
Department, Taif University, KSA. He is now a professor
Multimedia Traffic”, Master’s project, 2003.
http://www3.uta.edu/faculty/reyes/teaching/software/OPNET_ in the Computer Science Department, Taif University,
Modeler/AgnihotriMasterProject.ppt KSA. Dr. Sayed F. Bahgat has written over 39 research
articles and supervised over 19 M.Sc. and Ph. D. Theses.
[11] M. Pullen, R. Malghan, L. Lavu, G. Duan, J. Ma, J. Ma, A His research focuses in computer architecture and
Simulation Model for IP Multicast with RSVP, Request For organization, computer vision and robotics.
Comment (RFC) 2490, January 1999.

[12] Raymond Law and Srihari Raghavan, “DiffServ and MPLS – Prof. Said Ghoniemy is a Professor in the
Concepts and Simulation” 2003.
Department of Computer Systems, Faculty
http://nmg.upc.es/intranet/qos/8/8.7/8.7.2.pdf
of Computer and Information Sciences, Ain
[13] System for coding voice signals to optimize bandwidth Shams University, Cairo, Egypt. He received
occupation in high speed packet switching networks, 2000. his Ph.D. from the Institute National
http://www.patentstorm.us/patents/6104998/fulltext.html Polytechnique du Toulouse, Toulouse,
France, in 1982. From 1996 to 2005, he was the head of
Computer Systems Department and director of the
Information Technology Research and Consultancy Center
(ITRCC), Ain Shaams University, Cairo, Egypt. From
Appendix A 2005 to 2007 he was the vice-dean for post-graduate
Assumptions: affairs, Faculty of Computer and Information Sciences,
Symbol Description
Ain Shams University. He is now a professor in the
SW Detector visited station address. Computer Engineering Department, Taif University, KSA.
SC Connector visited station address. Dr. Ghoniemy has written over 60 research articles and
TR Time spent to reach any station. supervised over 40 M.Sc. and Ph. D. Theses. His research
TC Connector visiting time. focuses in computer architecture and organization,
I, J Counters.
computer vision and robotics.
H, K Two used variables.
PFS Failed station position.
N Number of stations in the old path.
M Number of stations in the new path. Eng. Yasser Elawady is a Lecturer in the
Old[ ] Array used to keep the old path stations addresses. Department of Computer Engineering,
New[ ] Array used to keep the new path stations addresses. Faculty of Computers and Information
Same[ ] Array used to keep the similar stations found in the Systems,Taif University,Taif, KSA. He
two paths.
Diff1[ ] Array used to keep the different stations found in the
received his M.Sc. from the Department of
old path. Computer Engineering, Faculty of
Diff2[ ] Array used to keep the different stations found in the Engineering, Mansoura university, Mansoura, Egypt, in
new path. 2003. His subject of interest includes Multimedia
Communication, Remote Access and Networking.
Hierarchical Approach for Key Management in Mobile Ad hoc Networks
Renuka A. Dr. K.C.Shet
Dept. of Computer Science and Engg. Dept. of Computer Engg.
Manipal Institute of Technology National Institute of Technology Karnataka
Manipal-576104-India Surathkal, P.O.Srinivasanagar-575025
renuka.prabhu@manipal.edu kcshet@rediffmail.com
Abstract—Mobile Ad-hoc Network (MANET) is a collection of autonomous nodes or terminals which communicate with each other by forming a multi-hop radio network and maintaining connectivity in a decentralized manner. The conventional security solutions that provide key management through trusted authorities or centralized servers are infeasible for this environment, since mobile ad hoc networks are characterized by the absence of infrastructure, frequent mobility, and wireless links. We propose a group key management scheme that is hierarchical and fully distributed, with no central authority, and that uses a simple rekeying procedure suitable for large, high-mobility mobile ad hoc networks. The rekeying procedure requires only one round in our scheme and in Chinese Remainder Theorem Diffie-Hellman; in Group Diffie-Hellman and Burmester-Desmedt it is a constant 3, whereas in other schemes such as Distributed Logical Key Hierarchy and Distributed One-Way Function Trees it depends on the number of members. We reduce the energy consumed in communicating the keying material by reducing the number of bits in the rekeying message. We show through analysis and simulations that our scheme has less computation, communication and energy consumption than the existing schemes.

Keywords- mobile ad hoc network; key management; rekeying.

I. INTRODUCTION

A mobile ad hoc network (MANET) is a collection of autonomous nodes that communicate with each other, most frequently over a multi-hop wireless network. Nodes do not necessarily know each other and come together to form an ad hoc group for some specific purpose. Key distribution systems usually require a trusted third party that acts as a mediator between nodes of the network. Ad hoc networks typically do not have an online trusted authority, but there may be an offline one that is used during system initialization. Group key establishment means that multiple parties want to create a common secret to be used to exchange information securely. Without relying on a central trusted entity, two parties who do not previously share a common secret can create one using the two-party Diffie-Hellman (DH) protocol, which can be extended to a generalized n-party DH. Furthermore, group key management also needs to address the security issues related to membership changes. A change of membership requires refreshing the group key, either by periodic rekeying or by updating the key right after each member change. Changing the group key ensures backward and forward security. With frequently changing group memberships, recent research has paid increasing attention to the efficiency of group key updates. Collaborative and group-oriented applications in MANETs have recently become an active research area, and group key management is a central building block in securing group communications in MANETs. However, group key management for large and dynamic groups in MANETs is a difficult problem because of the requirements of scalability and security under the restrictions of the nodes' available resources and unpredictable mobility.

We propose a distributed group key management approach in which there is no central authority and the users themselves arrive at a group key through simple computations. In large, high-mobility mobile ad hoc networks it is not possible to use a single group key for the entire network because of the enormous computation and communication cost of rekeying. We therefore logically divide the network into a number of groups, each headed by a group leader, and each group is further divided into subgroups called clusters, each headed by a cluster head. Although the terms group leader and cluster head are used, these nodes are no different from the other nodes except for playing the assigned roles during the initialization phase and during inter-group and inter-cluster communication. After the initialization phase, any member within a cluster can initiate the rekeying process, so the burden on the cluster head is reduced. The transmission power and memory of the cluster heads and group leaders are the same as those of the other members. The members within a cluster communicate with the help of a group key. Inter-cluster communication takes place through gateway nodes if the nodes are in adjacent clusters, and through the cluster heads if they are in far-off clusters. Inter-group communication is routed through the group leaders. Each member also carries a public/private key pair used to encrypt the rekeying messages exchanged; this ensures that forward secrecy is preserved.

The rest of the paper is organized as follows. Section II focuses on related work in this field. The proposed scheme is presented in Section III. Performance analysis of the scheme is discussed in Section IV. Experimental results and conclusions are given in Section V and Section VI respectively.

II. RELATED WORK

Key management is a basic part of any secure communication. Most cryptosystems rely on some underlying secure, robust, and efficient key management system. Group key establishment means that multiple parties want to create a common secret to be used to exchange information securely. Secure group communication (SGC) is defined as the process by which members in a group can securely communicate with each other while the information being shared remains inaccessible to anybody outside the group. In such a scenario, a group key is established among all the participating members and this key is used to encrypt all the messages destined to the group. As a result, only the group members can decrypt the messages. Group key management protocols are typically classified into four categories: centralized group key distribution (CGKD), de-centralized group key management (DGKM), distributed/contributory group key agreement (CGKA), and distributed group key distribution (DGKD).

In CGKD, there exists a central entity (a group controller, GC) which is responsible for generating, distributing, and updating the group key. The most famous CGKD scheme is the key tree scheme (also called Logical Key Hierarchy, LKH) proposed in [1], which is based on a tree
structure with each user (group participant) corresponding to a leaf and the group initiator acting as the root node. The tree structure significantly reduces the number of broadcast messages and the storage space for both the group controller and the group members. Each leaf node shares a pairwise key with the root node as well as a set of intermediate keys on the path to the root. One-Way Function Tree (OFT) is another centralized group key management scheme, proposed in [2], similar to LKH. However, all keys in the OFT scheme are functionally related according to a one-way hash function.

The DGKM approach involves splitting a large group into small subgroups. Each subgroup has a subgroup controller which is responsible for the key management of its subgroup. The first DGKM scheme to appear was IOLUS [3]. The CGKA schemes involve the participation of all members of a group in key management. Such schemes are characterized by the absence of the GC; the group key is a function of the secret shares contributed by the members. Typical CGKA schemes include binary-tree-based ones [4] and n-party Diffie-Hellman key agreement [5, 6]. Tree-Based Group Diffie-Hellman (TGDH) is a group key management scheme proposed in [4]; the basic idea is to combine the efficiency of the tree structure with the contributory feature of DH. The DGKD scheme, proposed in [7], eliminates the need for a trusted central authority and introduces the concepts of sponsors and co-distributors. All group members have the same capability and are equally trusted. They also have equal responsibility, i.e. any group member could be a potential sponsor of other members or a co-distributor. Whenever a member joins or leaves the group, the member's sponsor initiates the rekeying process. The sponsor generates the necessary keys and securely distributes them to the co-distributors, which then distribute the corresponding keys to the corresponding members in parallel. In addition to the above four typical classes of key management schemes, there are other forms such as hierarchy- and cluster-based ones [6, 8]. A contributory group key agreement scheme is most appropriate for SGC in this kind of environment.

Several group key management schemes have been proposed for SGC in wireless networks [9, 10]. In the Simple and Efficient Group Key (SEGK) management scheme for MANETs proposed in [11], group members compute the group key in a distributed manner. A new approach called BALADE was developed in [12]; it is based on a sequential multi-source model and takes into account both the localization and the mobility of nodes while optimizing energy and bandwidth consumption. Most of these schemes involve complex operations that are not suitable for large and high-mobility networks. In Group Diffie-Hellman, the group agrees on a pair of primes and starts calculating the intermediate values in a distributed fashion. The setup time is linear, since all members must contribute to generating the group key. Therefore the size of the message increases as the sequence reaches the last members and more intermediate values become necessary. With that, the number of exponential operations also increases. Therefore this method is not suitable for large networks. Moreover, the computational burden is high since it involves a lot of exponentiations.

Another approach using a logical key hierarchy in a distributed fashion, called Distributed One-way Function Tree (D-OWT), was proposed in [13]. This protocol uses the one-way function tree; a member is responsible for generating its own key and sending the blinded version of this key to its sibling. Reference [14] also uses a logical key hierarchy to minimize the number of keys held by group members, called Diffie-Hellman Logical Key Hierarchy. The difference here is that group members generate the keys in the upper levels using the Diffie-Hellman algorithm rather than a one-way function. In Chinese Remainder Theorem Diffie-Hellman (CRTDH) [15], each member computes the group key as the XOR of certain computed values; this requires that the members agree on two large primes. CRTDH is impractical in terms of efficiency and security: it has low efficiency, possibly yields a small key, and the members possess the same Least Common Multiple (LCM). This CRTDH scheme was modified in [16], where the evaluation of the LCM was eliminated and other steps were modified slightly so that a large value for the key is obtained. In both of these methods, whenever membership changes occur, the new group key is derived from the old group key as the XOR of the old group key and a value derived from the Chinese Remainder Theorem values broadcast by one of the members. Since it is possible for the leaving member to obtain this message, and hence deduce the new group key, backward secrecy is not preserved.

In this paper, we propose a distributed approach in which members contribute to the generation of the group key by sending the hash of a random number within the cluster during the initialization phase. They regenerate the group key themselves by obtaining the rekeying message from one of the members during the rekeying phase or whenever membership changes occur. Within a group, the group key used for communication among the cluster heads is generated by the group leader and transmitted securely to the other cluster heads. The same procedure is used to agree on a common key among the group leaders, where the network head generates the key and passes it on to the other group leaders. Symmetric key cryptography is used for communication between the members of a cluster, and asymmetric key cryptography is used for distributing the rekeying messages to the members of the cluster.

III. PROPOSED SCHEME

A. System model
The entire set of nodes is divided into a number of groups, and the nodes within a group are further subdivided into subsets called clusters. Each group is headed by a group leader and each cluster by a cluster head. The layout of the network is shown in Fig. 1. One of the nodes in each cluster acts as its head. A set of eight such clusters forms a group and each group is headed by a group leader. The cluster head is similar to the other nodes in the network. The nodes within a cluster are also one another's
physical neighbors. The nodes within a cluster use contributory key agreement: each node within a cluster contributes its share in arriving at the group key. Whenever membership changes occur, the adjacent node initiates the rekeying operation, thereby reducing the burden on the cluster head. The group leader chooses a random key to be used for encrypting messages exchanged between the cluster heads, and the network head sends the group leaders the key that is used for communication among the group leaders. The hierarchical arrangement of the network is shown in Fig. 2.

Figure 1. Network Layout

The key management system consists of two phases:
(i) Initialization
(ii) Group Key Agreement

Figure 2. Hierarchical layout

B. Initialization
Step 1: After deployment, the nodes broadcast their id values to their neighbors along with the HELLO message.
Step 2: When all the nodes have discovered their neighbors, they exchange information about the number of one-hop neighbors. The node with the maximum number of one-hop neighbors is selected as the cluster head. The other nodes become members of the cluster, or local nodes. The nodes update their status values accordingly.
Step 3: The cluster head broadcasts the message "I am cluster head" so as to know its members.
Step 4: The members reply with the message "I am member", and in this way clusters are formed in the network.
Step 5: If a node receives more than one "I am cluster head" message, it becomes a gateway, which acts as a mediator between two clusters.
In this manner clusters are formed in the network. The cluster heads broadcast the message "Are there any cluster heads" so as to know each other. The cluster head with the smallest id is selected as the leader of the cluster heads; it is the representative of the group, called the group leader. The group leaders establish communication with other group leaders in a similar manner, and one among the group leaders is selected as the leader for the entire network. The entire network is hierarchical in nature and the following hierarchy is observed:
network → group → cluster → cluster members
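A small sketch of the cluster-head selection rule in Step 2 is given below, under two stated assumptions that the text does not spell out: the election is decided locally among a node and its one-hop neighbors, and ties on the neighbor count go to the smaller node id. The topology values are purely illustrative.

```python
def elect_cluster_heads(neighbors: dict) -> set:
    """Return the nodes that announce "I am cluster head": a node wins if,
    among itself and its one-hop neighbors, it has the largest neighbor
    count (ties assumed to be broken by the smaller node id)."""
    heads = set()
    for node, nbrs in neighbors.items():
        candidates = [node] + list(nbrs)
        best = max(candidates, key=lambda n: (len(neighbors[n]), -n))
        if best == node:
            heads.add(node)
    return heads

# Hypothetical one-hop neighbor lists for six nodes.
topology = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3, 5}, 5: {4, 6}, 6: {5}}
print(elect_cluster_heads(topology))  # {2}: nodes 2, 3 and 4 tie on degree, 2 wins the assumed tie-break
```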
C. Group Key Agreement within a cluster
Step 1: Each member broadcasts its public key along with its id to all other members of the cluster, together with the certificate for authentication.
Step 2: The members of the cluster generate the group key in a distributed manner. Each member generates a random number and sends the hash of this number to the other members, encrypted with the public keys of the individual members, so that the remaining members can decrypt the message with their respective private keys.
Step 3: Each member concatenates the hash values received from the members in ascending order of their ids and applies a one-way hash function to the concatenated string. This is the group key used for that cluster.
Let HRi be the hash of the random number generated by node i and let GK denote the group key; then
GK = f(HR1, HR2, HR3, ..., HRn)
where
HRi = hash(random number i),
f is a one-way function, and
hash is a secure hash function such as SHA-1.
All the members now possess a copy of the same key, since the same operations are performed by all the nodes.
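A minimal sketch of this derivation follows, assuming SHA-1 both for hash and for the one-way function f applied to the concatenated string (the text names SHA-1 only as an example of hash); the node ids and random values are purely illustrative.

```python
import hashlib

def hr(random_value: bytes) -> bytes:
    """HRi = hash(random number i); SHA-1 is used here as in the text."""
    return hashlib.sha1(random_value).digest()

def cluster_group_key(contributions: dict) -> bytes:
    """GK = f(HR1, ..., HRn): concatenate the received hashes in ascending
    order of node id and apply a one-way function (assumed: SHA-1 again)."""
    ordered = b"".join(contributions[i] for i in sorted(contributions))
    return hashlib.sha1(ordered).digest()

# Three members with hypothetical ids and random contributions.
contributions = {3: hr(b"r3"), 1: hr(b"r1"), 2: hr(b"r2")}
gk = cluster_group_key(contributions)
print(gk.hex())  # identical at every member, since all perform the same steps
```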
D. Inter cluster group key agreement
The gateway node initiates communication with the neighboring node belonging to another cluster, and the two mutually agree on a key to be used for inter-cluster communication between the two clusters. Any node belonging to one cluster can communicate with any other node in the other cluster through this node as the intermediary. In this way adjacent clusters agree on a group key. A set of eight clusters forms a group. The cluster heads of each of these clusters mutually agree on a group key to be used for communication among the cluster heads
within a group in a similar manner. This key is different from the key used within the cluster. Going one level up in the hierarchy, a number of groups can be combined under a group leader, and the group leaders agree on a group key to be used for communication among the group leaders, which aids inter-group communication.

Even though we have considered the network to be divided into eight groups, each group consisting of eight clusters and each cluster consisting of eight members, this need not be constant. It may vary, and this number does not change the manner in which the group key is derived. It is assumed only so that the network takes on the hierarchical appearance of a fractal tree.

E. Network Dynamics
The mobile ad hoc network is dynamic in nature. Many nodes may join or leave the network. In such cases, a good key management system should ensure that backward and forward secrecy are preserved.
1) Member join:
When a new member joins, it initiates communication with the neighbouring node. After initial authentication, this node initiates the rekeying operations for generating a new key for the cluster. The rekeying operation is as follows.
new node → adjacent node : {authentication}
adjacent node → new node : {acknowledge}
adjacent node → all nodes : {rekeying message}k(old cluster key)

The neighboring node broadcasts two random numbers that are mixed together using a hashing function; the result is inserted at a random position in the old group key, the position being specified by the first random number. The two random numbers are sent in a single message, so that a transmission loss cannot result in a wrong key being generated. Let the two bit strings be
I Random no. = 00100010
II Random no. = 10110111
Suppose the result of the mixing function is 11010110 and the previous group key is
100101000101010100011100001111000001100010000001
The new group key is
10010100010101010001110000111100000110011010110010000001
Since all members know the old group key, they can compute the new group key. This new group key is transmitted to the new member by the adjacent node in a secure manner.
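The following is a sketch of this join rekeying under stated assumptions: the unspecified mixing function is taken to be a SHA-1 hash truncated to the block width, and the insertion position is taken as the value of the first random number modulo the key length. With those assumptions the code illustrates the mechanism rather than reproducing the exact bits of the example above.

```python
import hashlib

def mix(r1: str, r2: str, width: int = 8) -> str:
    """Mix the two random bit strings with a hashing function
    (assumed: SHA-1), keeping the first `width` bits of the digest."""
    digest = hashlib.sha1((r1 + r2).encode()).digest()
    bits = "".join(format(b, "08b") for b in digest)
    return bits[:width]

def join_rekey(old_key: str, r1: str, r2: str) -> str:
    """Insert the mixed bits into the old group key at the position given
    by the first random number (assumed: taken modulo the key length)."""
    pos = int(r1, 2) % len(old_key)
    return old_key[:pos] + mix(r1, r2) + old_key[pos:]

old_key = "100101000101010100011100001111000001100010000001"
print(join_rekey(old_key, "00100010", "10110111"))
# every member holding the old key and the two broadcast numbers
# derives the same new key locally
```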
2) Member Leave
a) When a cluster member leaves
When a member leaves, the group key of the cluster to which it belongs must be changed. This is done in a manner similar to that described above. The leaving member informs the neighboring node, which in turn informs the other nodes about the leaving member. It also generates two random numbers and sends them securely to the other members, which then generate the group key.
leaving node → adjacent node : {leaving message}
adjacent node → leaving node : {acknowledge}
adjacent node → each node i : {rekeying message}pki

b) When a gateway node leaves
When a gateway node leaves the network, it delegates the role of the gateway to the adjacent node. In this case, the group keys of both clusters with which this node is associated need to be changed. When the gateway node merely moves into one of the clusters, only the group key of the other cluster has to be changed.
leaving gateway node → adjacent node : {leaving message + other messages for delegating its role}
adjacent node → leaving gateway node : {acknowledge}
adjacent node → each node i in cluster1 : {rekeying message}pki
adjacent node → each node j in cluster2 : {rekeying message}pkj

c) When the cluster head leaves
When the cluster head leaves, the group key used for communication among the cluster heads needs to be changed; the group key used within the cluster also has to be changed. The cluster head informs the adjacent cluster head about its desire to leave the network, which initiates the rekeying procedure. The adjacent cluster head generates two random numbers and sends them to the other cluster heads in a secure manner.
leaving cluster head → adjacent cluster head : {leaving message}
adjacent node → leaving node : {acknowledge}
adjacent node → each node i : {rekeying message}pki
leaving cluster head → adjacent cluster head : {leaving message + other messages for delegating its role}
adjacent node → leaving cluster head : {acknowledge}
adjacent node → each cluster head i : {rekeying message}pki
The group key of the cluster heads is obtained by taking the product of the two random numbers, inserting it at the position indicated by the first number, and removing from the old group key of the cluster heads a number of initial bits equal to the number of bits in the product.
Suppose
I Random no. = 00101101
II Random no. = 00111111
The product of the two numbers is 0000010101000110.
Suppose the old group key is
10010100010101010001110000111100000110001000000100110
The new group key is
00011100001111000001100010000000010101000110000100110
Thus the cluster heads compute the group key after the rekeying operation. This is the new group key for the cluster heads within a group. The group key used for intra-cluster communication in that particular cluster also needs to be changed; this is done in the manner described above for rekeying within the cluster.

d) When the group leader leaves
Whenever the group leader leaves, all three keys should be changed. These are
(i) the group key among the group leaders,
(ii) the group key among the cluster heads, and
(iii) the group key within the cluster.
leaving group leader → adjacent group leader : {leaving message + other messages for delegating its role}
adjacent group leader → leaving node : {acknowledge}
adjacent group leader → each group leader i : {rekeying message}pki
leaving group leader → adjacent node : {leaving message}
adjacent node → leaving group leader : {acknowledge}
adjacent node → each node i in that cluster : {rekeying message}pki
leaving group leader → adjacent cluster head : {leaving message + other messages for delegating its role}
adjacent cluster head → leaving cluster head : {acknowledge}
adjacent node → each cluster head i : {rekeying message}pki
leaving node → adjacent node : {leaving message}
adjacent node → leaving node : {acknowledge}
adjacent node → each node i : {rekeying message}pki

The first two group keys are changed in the manners described above. To change the group key of the group leaders, the leaving group leader delegates the role of group leader to another cluster head in the same group and informs the other group leaders about this change. The adjacent group leader initiates the rekeying operation. It generates two random numbers and sends them to the other group leaders. The group leaders divide the old group key into blocks whose size equals the number of bits in the random number, perform the exclusive OR of the random number with each block of the old group key, and concatenate the results to arrive at the new group key.

Suppose the random number is 00100010 and the old group key is
10010100010101010001110000111100000110001000000100110101
Dividing the group key into 8-bit blocks (the size of the random number), we get
10010100 01010101 00011100 00111100 00011000 10000001 00110101
Performing the XOR operation and concatenating as shown,
00100010 XOR 10010100 || 00100010 XOR 01010101 || 00100010 XOR 00011100 || 00100010 XOR 00111100 || 00100010 XOR 00011000 || 00100010 XOR 10000001 || 00100010 XOR 00110101
the following group key is obtained:
10110110011101110011111000011110001110101010001100010111
This is the new group key of the group leaders.
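A small sketch of this blockwise-XOR rekeying is given below; run as written, it reproduces the worked example above (8-bit random number, 56-bit old key).

```python
def xor_block_rekey(old_key: str, random_no: str) -> str:
    """Split the old group key into blocks the size of the random number,
    XOR each block with the random number, and concatenate the results."""
    width = len(random_no)
    r = int(random_no, 2)
    blocks = [old_key[i:i + width] for i in range(0, len(old_key), width)]
    return "".join(format(int(block, 2) ^ r, f"0{width}b") for block in blocks)

old = "10010100010101010001110000111100000110001000000100110101"
new = xor_block_rekey(old, "00100010")
assert new == "10110110011101110011111000011110001110101010001100010111"
print(new)  # matches the new group key of the group leaders given above
```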
F. Communication Protocol
The nodes within a cluster communicate using the intra-cluster group key. Communication between intra-group, inter-cluster nodes takes place through the gateway node if they belong to adjacent clusters, and through the cluster heads if they are in far-off clusters.
Source node → gateway node → Destination node --- for adjacent clusters
Source node → cluster head (source) → cluster head (destination) → Destination node --- for far-away clusters

For adjacent clusters:
Source node --GKCL1--> Gateway node --GKCL2--> Destination node
For nodes in far-off clusters:
Source node --GKCL1--> Cluster head1 --GKCH--> Cluster head2 --GKCL2--> Destination node

Inter-group communication goes through the corresponding cluster heads and the group leaders, as shown:
Source node → cluster head (source) → group leader (source) → group leader (destination) → cluster head (destination) → Destination node.

Source node --GKCL1--> Cluster head1 --GKCH1--> Group Leader1 --GKGR--> Group Leader2 --GKCH2--> Cluster head2 --GKCL2--> Destination node
IV. PERFORMANCE ANALYSIS

A. Cost Analysis
We compute the communication cost of our scheme under various situations and for different network organizations, and we compare the communication cost of rekeying with that of various schemes. Some schemes such as GDH use a 1024-bit message for rekeying, whereas our scheme uses a 32-bit message; the energy required for rekeying is therefore much lower, which is very important in energy-constrained mobile ad hoc networks.
Let us denote
N = network size
M = group size
P = cluster size
G = number of groups
CH = cluster head
CL = cluster member
GL = group leader

1) Member joins
When a new member joins, the public key of the new member is broadcast to all old members encrypted with the old group key. Suppose the average number of members in a cluster is P; two 16-bit numbers, i.e. a message of 32 bits, is transmitted to all the existing members encrypted with the old key. This requires one round and one broadcast message. The group keys of other clusters need not be changed.

2) Member leaves
When a node leaves, there are four cases:
(i) the cluster member leaves,
(ii) the cluster head leaves,
(iii) the gateway node leaves,
(iv) the group leader leaves.
a) When the cluster member leaves
The random numbers are encrypted with the respective public keys of the existing members and unicast to them. Therefore this requires one round and P-1 unicast messages.
b) When the cluster head leaves
The rekeying is similar to a member leave within the cluster, i.e. P-1 unicast messages, plus M-1 messages among the cluster heads for changing the cluster head key.
c) When the gateway node leaves
The group keys of both clusters with which it is associated have to be changed. Therefore this requires one round in each cluster and P-1 unicast messages in each cluster, that is, a total of 2(P-1) messages.
d) When the group leader leaves
The group key of the group leaders, the group key of the cluster heads and also the cluster key of the cluster need to be changed. This requires one round, with G-1 unicast messages among the group leaders, M-1 unicast messages among the cluster heads and P-1 messages within the cluster.

Table II gives the communication cost of rekeying for various schemes. In our scheme, the entire network is divided into a number of groups, which in turn are divided into a number of clusters, each cluster consisting of members. When a member leaves, in a non-hierarchical scheme the key of the entire network needs to be changed, but in the hierarchical scheme it is sufficient to change the group key of the cluster to which it belongs. The hierarchical scheme reduces the number of rekeying messages transmitted, as shown in Table I. Communication between far-off nodes (nodes in different groups) has to undergo 5 encryptions and decryptions, whereas in non-hierarchical schemes it is only one; in very large networks this is tolerable compared with the enormous number of rekeying messages that must be transmitted whenever membership changes occur. From this table we observe that the rekeying procedure requires only one round in our scheme and in CRTDH and modified CRTDH; in GDH and BD it is a constant 3, whereas in other schemes such as D-LKH and D-OFT it depends on the number of members. Regarding the number of messages sent, the BD method involves 2N broadcast messages and no unicast messages, whereas in our technique the number of unicast messages is N-1. We also observe that CRTDH has the least communication cost among all the methods, but it does not provide forward secrecy because the rekeying message is broadcast and even the leaving member can derive the new group key. Moreover, in our scheme the rekeying message is only 32 bits wide and thus the communication overhead is greatly reduced.

TABLE I. NO. OF REKEYING MESSAGES FOR DIFFERENT NETWORK SIZES

Network organization | No. of nodes that receive rekeying messages, our scheme (CL joins / CL leaves) | CH leaves | GL leaves | Non-hierarchical scheme
N=256, M=8, P=16, G=2 | 15 / 15 | 32 | 34 | 256 (broadcast)
N=256, M=4, P=16, G=4 | 16 / 15 | 32 | 36 | 256
N=256, M=4, P=8, G=8 | 8 / 7 | 40 | 44 | 256
N=256, M=4, P=4, G=16 | 4 / 3 | 68 | 72 | 256
N=256, M=2, P=4, G=32 | 4 / 3 | 68 | 70 | 256
TABLE II. COMMUNICATION COST OF REKEYING

Scheme | No. of rounds | Broadcast messages | Unicast messages
Burmester and Desmedt (BD) | 3 | 2N | 0
Group Diffie-Hellman (GDH) | N | N | N-1
Distributed Logical Key Hierarchy (D-LKH) | 3 | 1 | N
Distributed One-Way Function Trees (D-OFT) | Log2 N | 0 | 2 Log2 N
CRTDH | 1 | 1 | -
Modified CRTDH | 1 | 1 | -
Our scheme (join) | 1 | 1 | 0
Our scheme, CL leave | 1 | 0 | P-1
Our scheme, gateway leave | 1 | 0 | 2(P-1)
Our scheme, CH leave | 1 | 0 | M+P-2
Our scheme, GL leave | 1 | 0 | G+P+M-3

Let
Exp = exponentiation operation
D = decryption operation
OWF = one-way function
X = exclusive-OR operation
CRT = Chinese Remainder Theorem method for solving the congruence relation
i = node id
M = cluster size

TABLE III. COMPUTATIONAL COMPLEXITY

Scheme | Set-up phase (cluster head) | Set-up phase (members) | During rekey
Burmester and Desmedt | (M+1) Exp | - | (M+1) Exp
Group Diffie-Hellman | (i+1) Exp | - | (i+1) Exp
Distributed Logical Key Hierarchy | Log2(M) Exp | Log2(M) D | Log2(M) D
Distributed One-Way Function Trees | (Log2 M + 1) Exp | - | (Log2 M + 1) Exp
CRTDH | - | LCM + (M-1) X + M Exp + CRT | LCM + X + CRT (leader); CRT + X (members)
Modified CRTDH | - | (M-1) X + M Exp + CRT | X + CRT
Our scheme | Sort + OWF | Sort + OWF | D + OWF; Multiplication (CH leave); XOR (GL leave)

V. EXPERIMENTAL RESULTS

The simulations are performed using the Network Simulator (NS-2.32) [17], which is particularly popular in the ad hoc networking community. The IEEE 802.11 MAC layer protocol is used in all simulations, and the Ad hoc On-demand Distance Vector (AODV) routing protocol is chosen. Every simulation run is 500 seconds long. The simulation is carried out using different numbers of nodes. The simulation parameters are shown in Table IV.

The experiments are conducted with different mobility patterns generated using the setdest tool of ns2: stationary nodes located at random positions, and nodes moving to random destinations with speeds varying between 0 and a maximum of 5 m/s, 10 m/s and 20 m/s. The random waypoint mobility model is used, in which a node moves to a randomly selected position at a speed varying between 0 and the maximum speed, pauses for a specified pause time, and then moves again at the same speed to a new destination. The pause time is set to 200 s. Different message sizes of 16, 32, 48, 64, 128, 152, 180, 200, 256, 512 and 1024 bits are used. We observed that in all four scenarios the energy consumed by a node increases as the message size increases. This is depicted in Fig. 3. Since the nodes in a mobile ad hoc network communicate in a hop-by-hop manner, the energy consumed by the nodes is not the same, even though the same numbers of messages are sent and received by the nodes. This is clearly visible from the graphs. From the graph we observe that the energy consumed is lower for a speed of 10 m/s. This may be due to the fact that the movement brings the nodes closer to each other, which reduces the relaying of the messages. The energy shown includes the energy for forwarding the message by the intermediate nodes.

TABLE IV. SIMULATION PARAMETERS

Parameter | Value
Simulation time | 1000 sec
Topology size | 500 m x 500 m
Initial energy | 100 Joules
Transmitter power | 0.4 W
Receiver power | 0.3 W
Node mobility | max. speed 0 m/s, 5 m/s, 10 m/s, 20 m/s
Routing protocol | AODV
Traffic type | CBR, Message
MAC | IEEE 802.11
Mobility model | Random Waypoint
Max. no. of packets | 10000
Pause time | 200 sec

In the next experiment, we varied the cluster size and observed its effect on the average energy consumed by the nodes for communicating the rekeying messages. In this setup one node sends a message to every other node in the cluster; for P nodes, P-1 messages are exchanged. This is indicated in Fig. 4 for the mobility pattern with max. speed 20 m/s. We observe that the energy consumed by the nodes increases as the network size increases, and this holds for all message sizes.
Figure 3. Average energy consumed by the nodes for various message sizes, for a cluster size of 8 nodes.

Figure 4. Average energy consumed by the nodes vs. message size for different cluster sizes, with mobility pattern of max. speed = 20 m/s.

VI. CONCLUSION

We proposed a hierarchical scheme for group key management that does not rely on a centralized authority for regenerating a new group key. Any node can initiate the rekeying process, so the energy depletion of any one particular node is avoided, unlike in centralized schemes. Our approach satisfies most of the security attributes of a key management system. The communication and computational overhead of our scheme is small compared with other distributed schemes. The energy saving is approximately 41% for 8 nodes and 15% for 200 nodes when the message size is reduced from 1024 to 16 bits. This indicates that a small message size and a small cluster size are most suitable for energy-limited mobile ad hoc networks. A small cluster size increases the overhead of inter-cluster communication, since it needs more encryptions and decryptions, whereas a large cluster size increases the communication cost of rekeying; an optimal value is chosen based on the application. As future work, instead of unicasting the rekeying messages, broadcasting may be used to reduce the number of messages sent through the network. Since the leaving member should not have access to this information, doing this in a secure manner is a challenging task.

REFERENCES
[1] Wallner, D.M., Harder, E.J. and Agee, R.C., "Key management for multicast: issues and architectures", Internet Draft, draft-wallner-key-arch-01.txt, 1998.
[2] Sherman, A.T. and McGrew, D.A., "Key establishment in large dynamic groups using one-way function trees", IEEE Transactions on Software Engineering, Vol. 29, No. 5, pp. 444-458, 2003.
[3] S. Mittra, "Iolus: A framework for scalable secure multicasting", Journal of Computer Communication Reviews, 27(4):277-288, 1997.
[4] Y. Kim, A. Perrig, and G. Tsudik, "Tree-based group key agreement", ACM Transactions on Information and System Security, 7(1):60-96, Feb. 2004.
[5] Y. Amir, Y. Kim, C. Nita-Rotaru, J. L. Schultz, J. Stanton, and G. Tsudik, "Secure group communication using robust contributory key agreement", IEEE Trans. Parallel and Distributed Systems, 15(5):468-480, 2004.
[6] M. Burmester and Y. Desmedt, "A secure and efficient conference key distribution system", in Advances in Cryptology - EUROCRYPT, 1994.
[7] P. Adusumilli, X. Zou, and B. Ramamurthy, "DGKD: Distributed group key distribution with authentication capability", Proceedings of the 2005 IEEE Workshop on Information Assurance and Security, West Point, NY, USA, pp. 476-481, June 2005.
[8] J.-H. Huang and S. Mishra, "Mykil: a highly scalable key distribution protocol for large group multicast", IEEE Global Telecommunications Conference (GLOBECOM), 3:1476-1480, 2003.
[9] B. Wu, J. Wu, E. B. Fernandez, M. Ilyas, and S. Magliveras, "Secure and efficient key management in mobile ad hoc networks", Journal of Network and Computer Applications, 30(3):937-954, 2007.
[10] Z. Yu and Y. Guan, "A key pre-distribution scheme using deployment knowledge for wireless sensor networks", Proceedings of the 4th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), pp. 261-268, 2005.
[11] Bing Wu, Jie Wu and Yuhong Dong, "An efficient group key management scheme for mobile ad hoc networks", Int. J. Security and Networks, 2008.
[12] M. S. Bouassida, I. Chrisment, and O. Festor, "A Group Key Management in MANETs", International Journal of Network Security, Vol. 6, No. 1, pp. 67-79, Jan. 2008.
[13] L. Dondeti, S. Mukherjee, and A. Samal, "A distributed group key management scheme for secure many-to-many communication", Tech. Rep. PINTL-TR-207-99, Department of Computer Science, University of Maryland, 1999.
[14] Y. Kim, A. Perrig, and G. Tsudik, "Simple and fault-tolerant key agreement for dynamic collaborative groups", in Proceedings of the 7th ACM Conference on Computer and Communications Security (Athens, Greece, Nov. 2000), S. Jajodia and P. Samarati, Eds., pp. 235-241.
[15] R. Balachandran, B. Ramamurthy, X. Zou, and N. Vinodchandran, "CRTDH: An efficient key agreement scheme for secure group communications in wireless ad hoc networks", Proceedings of the IEEE International Conference on Communications (ICC), pp. 1123-1127, 2005.
[16] Spyros Magliveras, Wandi Wei and Xukai Zou, "Notes on the CRTDH Group Key Agreement Protocol", The 28th International Conference on Distributed Computing Systems Workshops, 2008.
[17] The network simulator, http://www.isi.nsnam
An Analysis of Energy Consumption on ACK+Rate Packet in Rate Based Transport Protocol

P. Ganeshkumar, Department of IT, PSNA College of Engineering & Technology, Dindigul, TN, India, 624622, p_ganeshkumar@rediffmail.com
K. Thyagarajah, Principal, PSNA College of Engineering & Technology, Dindigul, TN, India, 624622, drkt52@gmail.com
Abstract— A rate based transport protocol determines the rate of data transmission between the sender and receiver and then sends the data according to that rate. To notify the rate to the sender, the receiver sends an ACK+Rate packet on epoch timer expiry. In this paper, through detailed arguments and simulation, it is shown that the transmission of the ACK+Rate packet based on epoch timer expiry consumes more energy in networks with low mobility. To overcome this problem, a new technique called Dynamic Rate Feedback (DRF) is proposed. DRF sends ACK+Rate whenever there is a change in rate of ±25% with respect to the previous rate. Based on ns2 simulation, DRF is compared with a reliable transport protocol for ad hoc networks (ATP).

Keywords- Ad hoc network, Ad hoc transport Protocol, Rate based transport protocols, energy consumption, Intermediate node

I. INTRODUCTION

This paper focuses on the design of a new technique called Dynamic Rate Feedback (DRF), which minimizes the frequency of ACK+Rate packet transmission for rate based transport protocols in ad hoc networks. An ad hoc network is a dynamically reconfigurable wireless network that does not have a fixed infrastructure. The characteristics of an ad hoc network are completely different from those of a wired network; therefore the TCP protocol, which was designed originally for wired networks, cannot be used as such for ad hoc networks. Several studies have focused on transport layer issues in ad hoc networks. Research has been carried out both on studying the impact of using TCP as the transport layer protocol and on improving its performance, either through lower layer mechanisms that hide the characteristics of the ad hoc network from TCP, or through appropriate modifications to the mechanisms used by TCP [1-8]. Existing approaches to improve transport layer performance over ad hoc networks fall under three broad categories [9]: (i) enhancing TCP to adapt to the characteristics of ad hoc networks, (ii) cross layer design, and (iii) new transport protocols tailored specifically for the ad hoc network environment. TCP-ELFN proposed by Holland et al. [10], the Atra framework proposed by Anantharaman et al. [11], and the Ad hoc transport protocol (ATP) proposed by Sundaresan et al. [12] are example protocols of the three categories respectively. Transport layer protocols tailored specifically for ad hoc networks are broadly classified into (i) rate based transport protocols and (ii) window based transport protocols. In a rate based transport protocol the rate is determined first and then the data are transmitted according to that rate. The intermediate nodes calculate the rate of data transmission. This rate is appended to the data packet and transmitted to the receiver. The receiver collates the rates received from the intermediate nodes and sends the result to the sender along with the ACK. This ACK+Rate packet is transmitted upon epoch timer expiry: if the epoch timer is set to 1 second, then every 1 second the receiver transmits an ACK+Rate packet to the sender. In this paper, the epoch timer based transmission of the ACK+Rate packet and its problems are discussed. The frequency of ACK+Rate packet transmission with respect to energy consumption, mobility and rate adaptation is presented. The frequency of rate change with respect to mobility and its result is also presented. Energy consumption is found through simulation for various mobility speeds.

II. RELATED WORK

In this paper, the focus is on proposals that aim to minimize the frequency of the ACK packets transmitted. Jimenez and Altman [13] investigated the impact of delaying more than 2 ACKs on TCP performance in multi-hop wireless networks and showed through simulation that encouraging results can be obtained. Johnson [14] investigated the impact of using an extended delayed acknowledgement interval on TCP performance. Allman [15] conducted an extensive simulation evaluation of delayed acknowledgement (DA) strategies. Most of these approaches target only the ACK packets of window based transmission. In contrast to window based transmission, rate based transmission is another classification of transport layer protocols, as mentioned in Section I. Compared to window based transmission, rate based transport protocols aid in improving performance in the following two ways [12]: (i) they avoid the drawback due to burstiness, and (ii) the transmissions are scheduled by a timer at the sender, so the need for self-clocking through the arrival of ACKs is eliminated. The latter
benefit is used by rate based protocols to decouple the congestion control mechanism from the reliability mechanism. The protocols that fall under rate based transmission are as follows. Sundaresan et al. [12] proposed ATP, a reliable transport protocol for ad hoc networks. Kai Chen et al. [16] proposed an end-to-end rate based flow control scheme called EXACT. Ashish Raniwala et al. [17] designed a link layer aware reliable transport protocol called LRTP. Dan Scofield [18] in his thesis presented a hop-by-hop transport control protocol for multi-hop wireless networks. Neng-Chung Wang et al. [19] proposed an improved transport protocol which uses fuzzy logic control. Ganeshkumar et al. [20, 21] studied ATP and designed a new transport protocol called PATPAN. In all the rate based transport protocols mentioned above, the rate (congestion information) is transmitted by the intermediate nodes to the receiver during the transmission of data packets. The receiver collates the congestion information and notifies it to the sender along with the ACK. The sender adjusts the rate of data packet transmission according to the rate (congestion information) received from the receiver. The ACK+Rate feedback packet is transmitted periodically or based on an epoch timer. The granularity of the epoch timer and the frequency of ACK+Rate feedback packet transmission highly influence the performance of the protocol. For window based transport protocols, a huge amount of work has been done to minimize the frequency of ACK packet transmission. The literature survey on related work on rate based transport protocols clearly shows that, until now, research has not been carried out on minimizing the frequency of ACK+Rate feedback packet transmission. This motivated us to study the behaviour of the well-known rate based transport protocol ATP [12] and to explore further the frequency of ACK+Rate feedback packet transmission.

III. BACKGROUND

A. Ad Hoc Transport Protocol (ATP)
The Ad hoc transport protocol (ATP) is a protocol designed for ad hoc wireless networks. It is not based on TCP. ATP differs from TCP in many ways: ATP uses coordination between different layers, ATP uses rate based transmission and assisted congestion control and, finally, congestion control and reliability are decoupled in ATP. Like many TCP variants, ATP also uses information from lower layers for many purposes, such as estimating the initial transmission rate, congestion detection, avoidance and control, and detection of path breaks. ATP obtains network congestion information from intermediate nodes, while the flow control and reliability information are obtained from the ATP receiver.

ATP uses a timer-based transmission where the rate depends on the congestion in the network. As packets travel through the network, intermediate nodes attach congestion information to each ATP packet. This congestion information is expressed in terms of the weighted average of the queuing delay and contention delay experienced by the packets at the intermediate node. The ATP receiver collates this information and sends it back to the sender in the next ACK packets, and the ATP sender can adjust its transmission rate based on this information. During the establishment of the connection, the ATP sender determines the initial transmission rate by sending a probe packet to the receiver. Each intermediate node attaches network congestion information to the probe packet, and the ATP receiver replies to the sender with an ACK packet containing the relevant congestion information. In order to minimize control overhead, ATP uses connection request and ACK packets as probe packets.

ATP increases the transmission rate only if the new transmission rate (R) received from the network is beyond a threshold (x) greater than the current rate (S), i.e. if R > S(1+x) then the rate is increased. The transmission rate is increased only by a fraction (k) of the difference between the two rates, i.e. S = S + (R-S)/k; this kind of method avoids rapid fluctuations in the transmission rate. If the ATP sender has not received ACK packets for two consecutive feedback periods, it significantly decreases the transmission rate. After a third such period, the connection is assumed to be lost and the ATP sender moves to the connection initiation phase, where it periodically generates probe packets. When a path break occurs, the network layer sends an explicit link failure notification (ELFN) packet to the ATP sender, and the sender moves to the connection initiation phase. The major advantage of ATP is the avoidance of congestion window fluctuations and the separation of congestion control and reliability, which leads to higher performance in ad hoc wireless networks. The biggest disadvantage of ATP is incompatibility with traditional TCP: nodes using ATP cannot communicate directly with the Internet. In addition, the fine-grained per-flow timer used at the ATP sender may become a bottleneck in large ad hoc wireless networks.
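A compact sketch of the rate-increase rule just described follows; the threshold x and smoothing factor k below are placeholder values for illustration, not values prescribed by ATP.

```python
def atp_update_rate(current_rate: float, feedback_rate: float,
                    x: float = 0.2, k: float = 4.0) -> float:
    """Apply the ATP-style increase rule: raise the sending rate only when
    the rate fed back from the network exceeds S(1+x), and then move only
    a fraction 1/k of the way towards it (S = S + (R - S)/k)."""
    if feedback_rate > current_rate * (1 + x):
        return current_rate + (feedback_rate - current_rate) / k
    return current_rate

# e.g. S = 100 pkts/s and feedback R = 160 pkts/s -> new S = 115 pkts/s
print(atp_update_rate(100.0, 160.0))
```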
IV. DYNAMIC RATE FEEDBACK (DRF)

A. Design issues
In rate based transport protocols, the intermediate nodes send the congestion information (rate) to the receiver. The congestion information is usually appended to the data packet. The receiver collates the congestion information and sends it to the sender. The sender determines the rate from the received congestion information and starts to transmit the data. According to the concept of the IEEE 802.11 MAC, at any particular instant only one node makes use of the channel, even though the channel is common to all the nodes lying in the same contention area. The receiver sends a transport layer ACK for the data it has received. Since the MAC uses a shared channel, the data packets and ACK packets contend to occupy the channel, which reduces the number of data packets sent. To address this issue, SACK is used in the transport layer. In TCP SACK, an ACK is sent for every 3 packets; in ATP SACK, an ACK is sent for every 20 packets or fewer. In ATP, an epoch timer is used to trigger the process of sending the ACK. This epoch timer has a fixed value; after each epoch timer expiry, an ACK is sent.
The congestion information which the intermediate nodes send to the receiver is transmitted by the receiver to the sender along with the ACK. The ACK is triggered by epoch timer expiry. Therefore, on each and every epoch timer expiry, the congestion information (delay/rate) is sent to the sender.

Even though the rate is determined on the fly and dynamically throughout the session, it is notified to the sender only once per epoch. Therefore the granularity of the epoch affects the performance of the protocol. If the epoch time is very short, then more ACK packets are transmitted per unit time. This unnecessary traffic in the reverse path creates problems such as high energy consumption, leading to poor network lifetime, and throughput reduction in the forward path. If the epoch time is very large, then fewer ACK+Rate packets are transmitted per unit time, and the rate changes which occur within the epoch are not notified to the sender promptly. This causes the sender to choose rates that are much lower than necessary, resulting in periodic oscillations between high and low transmission speeds.

In an ad hoc network, the topology changes dynamically, which results in frequent rate changes. If the sender is not notified of frequent rate changes, then the sender will choose a rate which does not appropriately match the characteristics of the network at that particular instant. Due to this, the sender may choose a rate that is much lower or higher than necessary, resulting in periodic oscillation between high and low transmission speeds.

The epoch timer plays an important role in terms of throughput in the forward path, energy consumption and congestion in the network. Given the characteristics of an ad hoc network, choosing a constant epoch timer value for the entire period of operation of the protocol does not hold good. Therefore, in our proposal, the ACK+Rate packet is transmitted by the receiver to the sender whenever there is a 25 percent rate change with respect to the previous rate: if the receiver finds a rate which is 25 percent more or less than the previous rate, it transmits the ACK+Rate to the sender. This procedure is termed Dynamic Rate Feedback (DRF). With this technique the rate is notified to the sender whenever there is a 25 percent rate change, eliminating the concept of ACK+Rate packet transmission based on epoch timer expiry.
B. Independence on ACKs
In rate based transport protocols, the rate is notified to the sender along with the ACK. The receiver sends ACK+Rate to the sender, and the sender adapts its data transmission according to the received rate. In window based transmission, the reception of an ACK triggers the data transmission, so if ACKs are late or only a few ACKs are received per unit time, then fewer data packets are transmitted. This decreases the throughput and the performance of the protocol. In rate based transport protocols, the reception of an ACK does not trigger the data transmission; the data is transmitted only based on the received rate. In DRF, when a slow speed mobile topology is used, the transmission of ACK+Rate packets is limited, and this does not affect the number of data packets transmitted in the forward path. In a low mobility network, the frequency of rate changes is minimal. This has no side effect except that the sender should not discard the already transmitted buffered data until an ACK is received for it. DRF does not affect data recovery through retransmission. If any data is lost, it is an indication of congestion or route failure, which causes a change in rate. If a rate change occurs, an ACK+Rate is transmitted. From the ACK, the lost data packet can be identified and recovered through retransmission.

C. Triggering ACK+Rate Packet
The DRF technique is developed to eliminate unnecessary transmissions of the ACK+Rate packet in order to reduce the additional overhead and energy consumption borne by the intermediate nodes. In this paper, through detailed arguments, it is shown that triggering the ACK+Rate packet in response to the expiry of the epoch timer causes serious performance problems. If the epoch time period is set to 1 second, then the rate changes that occur within 1 second cannot be made known to the sender, which causes harmful effects such as reduction in throughput and under-utilization of network resources. To overcome this drawback, DRF sends the ACK+Rate packet whenever there is a rate change, rather than sending an ACK+Rate packet on each and every expiry of the epoch timer. If the ACK+Rate packet were transmitted for every rate change, more traffic would be generated in the network, causing high energy consumption at the intermediate nodes and a reduction in throughput due to the contention of data and ACK+Rate packets in the forward and reverse paths respectively. If the ACK+Rate packet were transmitted only after a ±100% rate change with respect to the previous rate (i.e., if the current rate is 4, the next ACK+Rate packet is transmitted only when the rate becomes 8), the sender would choose an inappropriate rate which does not suit the characteristics of the network at that particular instant. Therefore, in order to find the optimal time to transmit the ACK+Rate packet, an experiment was conducted in the ns2 simulator. The simulation setup is discussed in Section V-A. The source and destination nodes are randomly chosen. Throughput is analyzed from various angles. Transmission of ACK+Rate packets with respect to ±15%, ±25%, ±35%, ±50%, ±65% and ±75% rate changes relative to the previous rate is analyzed. Throughput in a pedestrian mobile topology (1 m/s), in a slow speed mobile topology (20 m/s), and in a high speed mobile topology (30 m/s) is found for 1 flow, 5 flows and 25 flows. The results are shown in Table 1 and are rounded to the nearest integer. The average throughputs over the pedestrian, slow speed and high speed mobile topologies for ±15%, ±25%, ±35%, ±50%, ±65% and ±75% rate changes at a network load of 1 flow are 539, 529, 468, 366, 278 and 187 respectively. From these results it is clear that the throughput for ±15%, ±25% and ±35% is greater than that for ±50%, ±65% and ±75%. Therefore, it can be concluded that the ACK+Rate packet may be triggered for transmission if there is a ±15%, ±25% or ±35% rate change with respect to the previous rate.

Transmission of the ACK+Rate packet for rate changes of ±15%, ±25% and ±35% with respect to the previous rate is analysed as shown in Table 2. The appropriate percentage of rate change at which to trigger the ACK+Rate packet must be chosen. From the results shown in Table 2, it is clear that the number of ACK+Rate packets for a ±25% rate change is lower than for a ±35% rate change and slightly higher than for a ±15% rate change. Therefore, DRF adopts the method of triggering the ACK+Rate packet whenever there is a ±25% rate change with respect to the previous rate.
TABLE 1: THROUGHPUT STATISTICS OF DIFFERENT MOBILITY FOR VARIOUS LOADS

Transmission of ACK+Rate packet w.r.t. % of rate change | Throughput in pedestrian topology (1 / 5 / 25 flows) | Throughput in slow speed topology (1 / 5 / 25 flows) | Throughput in high speed topology (1 / 5 / 25 flows)
±15% | 635 / 214 / 46 | 551 / 157 / 43 | 432 / 141 / 38
±25% | 629 / 198 / 39 | 537 / 146 / 39 | 421 / 133 / 32
±35% | 522 / 153 / 31 | 492 / 114 / 31 | 392 / 111 / 24
±50% | 424 / 132 / 27 | 371 / 91 / 24 | 304 / 92 / 17
±65% | 367 / 123 / 21 | 254 / 68 / 19 | 213 / 63 / 14
±75% | 217 / 105 / 19 | 184 / 54 / 15 | 161 / 43 / 8

TABLE 2: STATISTICS OF ACK+RATE PACKET TRANSMISSION
Number of ACK+Rate packets transmitted for a simulation time of 100 seconds and a network load of 1 flow.

Transmission of ACK+Rate packet w.r.t. % of rate change | Pedestrian topology | Slow speed topology | High speed topology
±15% | 43 | 56 | 69
±25% | 64 | 73 | 84
±35% | 76 | 85 | 94
the throughput for ±15%, ±25% and ±35% rate changes is greater than that for ±50%, ±65% and ±75%. Therefore, it can be concluded that the ACK+Rate packet may be triggered for transmission when there is a ±15%, ±25% or ±35% rate change with respect to the previous rate.

Transmission of the ACK+Rate packet for rate changes of ±15%, ±25% and ±35% with respect to the previous rate is analysed as shown in Table 2. The appropriate percentage of rate change at which to trigger the ACK+Rate packet must be chosen. From the results shown in Table 2, it is clear that the number of ACK+Rate packets for a ±25% rate change is lower than for a ±35% rate change and slightly higher than for a ±15% rate change. Therefore, in DRF the method of triggering the ACK+Rate packet whenever there is a ±25% rate change with respect to the previous rate is adopted (an illustrative sketch of this trigger rule is given at the end of Section V-B below).

V. PERFORMANCE EVALUATION
This section presents the evaluation of DRF and ATP considering aspects such as energy consumption and the change in rate with respect to time for various mobility speeds. The performance of DRF is compared with ATP. The reason for the comparison with ATP is that it is a well known and widely accepted rate based transport protocol. Since the scope of this paper is limited to rate based transport protocols, other versions of TCP are not considered for comparison.

A. Simulation Setup
The simulation study was conducted using the ns2 network simulator. A mobile topology of a 500m X 500m grid consisting of 50 nodes is used. The radio transmission range of each node is kept as 200 meters and the interference range is kept as 500 meters. The channel capacity is chosen as 2 Mbit/sec and the channel delay is fixed as 2.5µs. Dynamic source routing and IEEE 802.11b are used as the routing and MAC protocols. FTP is used as the application layer protocol. The source and destination pairs are randomly chosen. The effect of load on the network is studied with 1, 5 and 25 flows. The performance of DRF is evaluated and compared with ATP.

B. Instantaneous Rate Dynamics
Instantaneous rate dynamics refers to the change in rate (packets/sec) with respect to time. Fig. 1 shows the change in rate for mobility speeds of 1 m/s, 10 m/s, 30 m/s and 50 m/s. When the mobility is 1 m/s, a rate change occurs 3 to 4 times and the average rate of rate changes is 210. When the mobility is 50 m/s, frequent rate changes occur and the average rate of rate changes is 225.4. While the mobility is 50 m/s, the maximum deviation of the rate change with respect to the average value ranges from 50 to 60. From this observation it can be concluded that mobility is directly proportional to the change in rate, i.e., whenever mobility increases the rate change also increases.
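As a minimal sketch of the ±25% trigger rule adopted above, the receiver-side decision could look as follows. The class and method names are hypothetical and are not taken from the DRF implementation; this only illustrates the idea of suppressing feedback until the measured rate deviates by at least 25% from the last advertised rate.

# Illustrative receiver-side trigger: send an ACK+Rate packet only when the
# newly measured rate deviates from the last advertised rate by >= 25%.
class RateFeedbackTrigger:
    def __init__(self, threshold: float = 0.25):
        self.threshold = threshold        # +/-25% rate-change trigger
        self.last_advertised_rate = None  # rate (packets/sec) fed back last time

    def should_send_ack_rate(self, measured_rate: float) -> bool:
        # Always advertise the very first measured rate.
        if self.last_advertised_rate is None:
            self.last_advertised_rate = measured_rate
            return True
        # Relative deviation from the previously advertised rate.
        change = abs(measured_rate - self.last_advertised_rate) / self.last_advertised_rate
        if change >= self.threshold:
            self.last_advertised_rate = measured_rate
            return True
        return False  # suppress ACK+Rate (unlike ATP, which sends every epoch)


# Example: only measurements deviating by >= 25% trigger feedback.
trigger = RateFeedbackTrigger()
for rate in [200, 210, 260, 255, 180]:
    print(rate, trigger.should_send_ack_rate(rate))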


Figure 1. Rate vs. Time.

D. Energy Consumption
In this section, the energy consumption in the intermediate nodes is examined for ATP and DRF. The energy model implemented in the ns2 simulator [22] is used in the simulation. A node starts with an initial energy level that is reduced whenever the node transmits or receives a packet. The total amount of energy E(n) consumed at a given node n is represented as

E(n) = Et(n) + Er(n)

where Et and Er denote the amount of energy spent on transmission and reception respectively. The Energy/Bit ratio, i.e., the energy consumption per bit, is computed as

e = es / (n * p * 8)

where n is the number of packets transmitted, p is the size of the packet in bytes and es is the energy spent by the node in joules.
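A small sketch of this bookkeeping is given below. The variable names are illustrative only and are not taken from the ns2 energy model; the example values are likewise hypothetical.

# E(n) = Et(n) + Er(n): total energy consumed at node n.
def total_energy(e_transmit_joules: float, e_receive_joules: float) -> float:
    return e_transmit_joules + e_receive_joules

# Energy/Bit ratio: joules spent divided by the number of bits transmitted.
def energy_per_bit(e_spent_joules: float, packets_sent: int, packet_size_bytes: int) -> float:
    bits = packets_sent * packet_size_bytes * 8
    return e_spent_joules / bits

# Example with hypothetical numbers: 3.24 J spent on 5000 packets of 512 bytes.
print(energy_per_bit(3.24, 5000, 512))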

E. Performance in pedestrian (1 m/s) mobile topology


In the pedestrian mobile topology, the rate change with respect to time is low compared to the slow speed and high speed mobile topologies. Snapshots of the energy consumption due to the transmission of ACK+Rate packets in DRF and ATP are presented for a single flow, 5 flows and 25 flows in Fig. 2 a, b, c respectively. Here the focus is only to highlight the energy consumption of ATP and DRF; hence other variants of TCP and other rate based transport protocols are not considered. The average energy consumption of ATP is 3.84, 4.65 and 5.05 for 1 flow, 5 flows and 25 flows respectively. The average energy consumption of DRF is 3.24, 4.01 and 4.35 for 1 flow, 5 flows and 25 flows respectively. In ATP, since the ACK+Rate packet is transmitted every epoch period (say 1 second), the number of ACK+Rate packets transmitted is high. Therefore the energy consumption in ATP is higher compared to DRF. Energy consumption in DRF is low because ACK+Rate is transmitted to the sender only when there is a rate change. In the pedestrian topology, since the mobility is low (1 m/s), the rate change is also low, so the number of ACK+Rate packet transmissions is minimal. Therefore the energy consumption for both ATP and DRF is lower than in the slow speed and high speed mobile topologies.

Figure 2. Energy Consumption in Pedestrian Mobile Topology: (a) 1 flow, (b) 5 flows, (c) 25 flows.

F. Performance in slow speed mobile topology
Snapshots of the energy consumption due to the transmission of ACK+Rate packets in DRF and ATP are presented for a single flow, 5 flows and 25 flows in Fig. 3 a, b, c respectively. The average energy consumption of ATP is 3.92, 4.75 and 5.12 for 1 flow, 5 flows and 25 flows respectively. The average energy consumption of DRF is 3.75, 3.94 and 4.24 for 1 flow, 5 flows and 25 flows respectively. In the slow speed mobile topology the speed of mobility is 10 m/s. The average energy consumption in the slow speed mobile topology is greater than the results obtained in the pedestrian mobile topology. According to the results shown in Fig. 1, as the speed increases the change in rate with respect to time also increases. Therefore the energy consumption of both ATP and DRF is higher in the slow speed topology than in the pedestrian mobile topology. But

comparing ATP and DRF, ATP has higher energy consumption than DRF; this is due to the transmission of ACK+Rate packets on epoch timer expiry. This is an indication to use the DRF technique in a topology where the mobility speed varies from 1 m/s to 10 m/s. DRF reduces energy consumption and thereby increases the network lifetime, which is a critical issue.

Figure 3. Energy Consumption in Slow Speed Mobile Topology: (a) 1 flow, (b) 5 flows, (c) 25 flows.

G. Performance in high speed mobile topology
Snapshots of the energy consumption for the high speed mobile topology of ATP and DRF are presented for a single flow, 5 flows and 25 flows in Fig. 4 a, b, c respectively. The average energy consumption of ATP is 4.15, 4.85 and 5.33 for 1 flow, 5 flows and 25 flows respectively. The average energy consumption of DRF is 4.89, 5.6 and 6.23 for 1 flow, 5 flows and 25 flows respectively. It is seen that the energy consumption of DRF is greater than that of ATP. This is because as mobility increases, the rate change also increases. In DRF the ACK+Rate packet is transmitted whenever there is a rate change. Since the rate changes are higher, the number of ACK+Rate packets transmitted is higher. The result shows that within 1 second, 5 to 6 ACK+Rate packets are transmitted. This raises the energy consumption. In ATP the ACK+Rate packet is transmitted for each and every epoch period (say 1 second); in contrast, DRF sends the ACK+Rate packet whenever there is a rate change.

Figure 4. Energy Consumption in High Speed Mobile Topology: (a) 1 flow, (b) 5 flows, (c) 25 flows.

The results clearly reveal that the DRF technique reduces energy consumption and increases the lifetime of a node at very slow speed and slow speed mobility. But in high speed

mobility topology DRF consumes slightly more energy than ATP. Therefore it is clear that the DRF technique can be deployed in an ad hoc network where the mobility speed does not exceed 20 m/s.

VI. CONCLUSION
The Dynamic Rate Feedback (DRF) strategy aims to minimize the contention between data and ACK+Rate packets by transmitting as few ACK+Rate packets as possible. The mechanism is self adaptive, and it is feasible and optimal to use in ad hoc networks whose mobility is restricted to 20 m/s. Through simulation it is found that the frequency of the change in rate is directly proportional to the mobility. In the DRF technique, the ACK+Rate is transmitted only when there is a change in rate of ±25% with respect to the previous rate. The simulation results showed that DRF can outperform ATP, the well known rate based transport protocol for ad hoc networks, in terms of energy consumption in the intermediate nodes. This technique is easy to deploy since the changes are limited to the end node (receiver) only. It is important to emphasize that in DRF the semantics of ATP are retained but the performance is improved.

REFERENCES
[1] B. Bakshi, P. Krishna, N.H. Vaidya, and D.K. Pradhan, "Improving Performance of TCP over Wireless Networks," Proc. 17th International Conference on Distributed Computing Systems (ICDCS), Baltimore, Maryland, USA, 1997, 112-120.
[2] G. Holland and N.H. Vaidya, "Impact of Routing and Link Layers on TCP Performance in Mobile Ad Hoc Networks," Proc. of IEEE Wireless Comm. and Networking Conference, USA, 1999, 505-509.
[3] M. Gerla, K. Tang, and R. Bagrodia, "TCP Performance in Wireless Multi Hop Networks," Proc. IEEE Workshop on Mobile Computing Systems and Applications, USA, 1999, 202-222.
[4] G. Holland and N.H. Vaidya, "Analysis of TCP Performance over Mobile Ad Hoc Networks," Proc. of ACM MOBICOM Conference, Seattle, Washington, 1999, 219-230.
[5] J.P. Monks, P. Sinha, and V. Bharghavan, "Limitations of TCP-ELFN for Ad Hoc Networks," Proc. Workshop on Mobile and Multimedia Communication, USA, 2000, 56-60.
[6] K. Chandran, S. Raghunathan, S. Venkatesan, and R. Prakash, "A Feedback Based Scheme for Improving TCP Performance in Ad Hoc Wireless Networks," Proc. International Conference on Distributed Computing Systems, Amsterdam, The Netherlands, 1998, 472-479.
[7] T.D. Dyer and R. Bopanna, "A Comparison of TCP Performance over Three Routing Protocols for Mobile Ad Hoc Networks," Proc. ACM MOBIHOC 2001 Conference, Long Beach, California, USA, 2001, 156-162.
[8] J. Liu and S. Singh, "ATCP: TCP for Mobile Ad Hoc Networks," IEEE Journal on Selected Areas in Comm., 2001.
[9] Karthikeyan Sundaresan, Seung-Jong Park, and Raghupathy Sivakumar, "Transport Layer Protocols in Ad Hoc Networks," Ad Hoc Networks (Springer US), 123-152.
[10] J.P. Monks, P. Sinha, and V. Bharghavan, "Limitations of TCP-ELFN for Ad Hoc Networks," Proc. Workshop on Mobile and Multimedia Communication, Marina del Rey, CA, Oct. 2000.
[11] V. Anantharaman and R. Sivakumar, "A Microscopic Analysis of TCP Performance over Wireless Ad Hoc Networks," Proc. of ACM SIGMETRICS 2002, Marina del Rey, CA, 2002.
[12] K. Sundaresan, V. Anantharaman, Hung-Yun Hsieh, and R. Sivakumar, "ATP: A Reliable Transport Protocol for Ad Hoc Networks," IEEE Transactions on Mobile Computing, 4(6), 2005, 588-603.
[13] T. Jimenez and E. Altman, "Novel Delayed ACK Techniques for Improving TCP Performance in Multihop Wireless Networks," Proc. Personal Wireless Comm. (PWC '03), Sept. 2003.
[14] S.R. Johnson, "Increasing TCP Throughput by Using an Extended Acknowledgment Interval," master's thesis, Ohio Univ., June 1995.
[15] M. Allman, "On the Generation and Use of TCP Acknowledgements," ACM Computer Comm. Rev., vol. 28, pp. 1114-1118, 1998.
[16] K. Chen, K. Nahrstedt, and N. Vaidya, "The Utility of Explicit Rate-Based Flow Control in Mobile Ad Hoc Networks," Proc. Wireless Communications and Networking Conference, USA, 2004.
[17] Raniwala, S. Sharma, R. Krishnan, and T. Chiueh, "Evaluation of a Stateful Transport Protocol for Multi-channel Wireless Mesh Networks," Proc. International Conf. on Distributed Computing Systems, Lisboa, Portugal, 2006.
[18] Dan Scofield, "Hop-by-Hop Transport Control for Multi-Hop Wireless Networks," M.S. thesis, Department of Computer Science, Brigham Young University, Provo, 2007.
[19] Neng-Chung Wang, Yung-Fa Huang, and Wei-Lun Liu, "A Fuzzy-Based Transport Protocol for Mobile Ad Hoc Networks," Proc. of the IEEE International Conference on Sensor Networks, Ubiquitous and Trustworthy Computing (SUTC), 2008, 320-325.
[20] P. Ganeshkumar and K. Thyagarajah, "PATPAN: Power Aware Transport Protocol for Adhoc Networks," Proc. of the IEEE Conference on Emerging Trends in Engineering and Technology, Nagpur, India, 2008, 182-186.
[21] P. Ganeshkumar and K. Thyagarajah, "Proposals for Performance Enhancement of Intermediate Node in Ad Hoc Transport Protocol," Proc. of the IEEE International Conference on Computing, Communication and Networking, Karur, India, 2008.
[22] Y. Xu, J. Heidemann, and D. Estrin, "Adaptive Energy Conserving Routing for Multihop Ad Hoc Networks," Research Report 527, Information Sciences Inst., Univ. of Southern California, Oct. 2000.

AUTHORS PROFILE

P. Ganesh Kumar received the bachelor's degree in Electrical and Electronics Engineering from Madurai Kamaraj University in 2001, and the Master's degree in Computer Science and Engineering with distinction from Bharathiar University in 2002. He is currently working towards the PhD degree at Anna University Chennai. He has 7 years of teaching experience in Information Technology. He is a member of IEEE, CSI and ISTE. He has published several papers in international journals, IEEE international conferences and national conferences. He authored the book "Component Based Technology". His areas of interest include distributed systems, computer networks and ad hoc networks.

Dr. K. Thyagarajah received the bachelor's degree in Electrical Engineering and the master's degree in Power Systems from Madras University. He received the doctoral degree in Power Electronics and AC Motor Drives from the Indian Institute of Science, Bangalore, in 1993. He has 30 years of experience in teaching and research. He is a senior member of IEEE and of various bodies such as ISTE and the Institution of Engineers, India. He is a syndicate member of Anna University Chennai and Anna University Tiruchirapalli. He is a member of the board of studies in various universities in India. He has published more than 50 papers in national and international refereed journals and conferences. He authored the book "Advanced Microprocessor". His areas of interest include network analysis, power electronics, mobile computing and ad hoc networks.


Prediction of Zoonosis Incidence in Human using Seasonal Auto Regressive Integrated Moving Average (SARIMA)

Adhistya Erna Permanasari, Computer and Information Science Dept., Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia (astya_00@yahoo.com)
Dayang Rohaya Awang Rambli, Computer and Information Science Dept., Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia (roharam@petronas.com.my)
P. Dhanapal Durai Dominic, Computer and Information Science Dept., Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia (Dhanapal_d@petronas.com.my)

Abstract— Zoonosis refers to the transmission of infectious diseases from animals to humans. The increasing number of zoonosis incidences causes great losses of life, both human and animal, and also has a social and economic impact. This motivates the development of a system that can predict the future number of zoonosis occurrences in humans. This paper analyses and presents the use of the Seasonal Autoregressive Integrated Moving Average (SARIMA) method for developing a forecasting model that is able to support and provide a predicted number of human zoonosis incidences. The dataset for model development was collected as a time series of human Salmonellosis occurrences in the United States, comprising fourteen years of monthly data obtained from a study published by the Centers for Disease Control and Prevention (CDC). Several trial SARIMA models were compared to obtain the most appropriate model. Then, diagnostic tests were used to determine model validity. The result showed that SARIMA(9,0,14)(12,1,24)12 is the fittest model. As a measure of accuracy, the selected model achieved a Theil's U value of 0.062, which implies that the model is highly accurate and a close fit. It also indicates the capability of the final model to closely represent and make predictions based on the Salmonellosis historical dataset.

Keywords—zoonosis; forecasting; time series; SARIMA

I. INTRODUCTION
Zoonosis refers to any infectious disease that is transmitted from animals to humans [1, 2]. It is estimated that around 75% of emerging infectious diseases of humans are of animal origin [3-5]. Zoonosis epidemics arise and exhibit a potential threat to public health as well as an economic impact. Large numbers of people have been killed by zoonotic diseases in different countries.

The WHO statistics [6] report several zoonosis outbreaks, including Dengue/dengue haemorrhagic fever in Brazil (647 cases, with 48 deaths); Avian Influenza outbreaks in 15 countries (438 cases, 262 deaths); Rift Valley Fever in Sudan (698 cases, including 222 deaths); Ebola in Uganda (75 patients), Ebola in the Philippines (6 positive cases from 141 suspects) and Ebola in Congo (32 cases, 15 deaths); and the latest was Swine Flu (H1N1) in many countries (over 209438 cases, at least 2185 deaths). Some of these zoonoses have recently had major outbreaks worldwide which resulted in many losses of lives, both human and animal.

Zoonosis evolution from the original form can cause newly emerging zoonotic diseases [4]. Indeed, this is evidenced in a report presented by WHO [4] associating microbiological factors with the agent, the animal hosts/reservoirs and the human victims, which could result in a new variant of a pathogen that is capable of jumping the species barrier. For example, Influenza A viruses have jumped from wild waterfowl species into domestic farms, farm animals, and humans. Another recent example is the swine flu, where the outbreaks in humans originally came from a new influenza virus in swine. The outbreak of disease in people caused by a new influenza virus of swine origin continues to grow in the United States and internationally.

The worldwide frequency of zoonosis outbreaks in the past 30 years [3] and the risk factor of newly emerging diseases have forced many governments to apply stringent measures to prevent zoonosis outbreaks, for example by destroying the livestock in the infected area. These measures mean great losses to farmers. The significant impact on human life, however, still remains the biggest issue in zoonosis. This highlights the need for a modeling approach that can give decision makers an early estimate of the future number of zoonosis incidences, based on historical time series data. Computer software coupled with statistical modeling can be used to forecast the number of zoonosis incidences.

Time series analysis for forecasting models is widely used in various fields. In fact, there are few studies on zoonosis forecasting compared to other areas, such as energy demand prediction, the economic field, traffic prediction, and health support. Indeed, prediction of the risk of zoonosis impact on humans needs to be focused on, due to the need to obtain results on which further decisions can be based.

Many researchers have developed different forecasting methods to predict zoonosis incidence in humans.

A multivariate Markov chain model was selected to project the number of tuberculosis (TB) incidences in the United States from 1980 to 2010 [7]. This work pointed out the study of TB incidence based on demographic groups. The uncertainty in the model parameters was handled by fuzzy numbers. The model determined that the decline rate in the number of cases among Hispanics would be slower than among white non-Hispanics and black non-Hispanics.

The number of human incidences of Schistosoma haematobium at Niono, Mali was forecasted online by using an exponential smoothing method [8]. The method was used as the core of a proposed state-space framework. The data were collected from 1996-2004 from 17 community health centers in that area. The framework was able to assist in managing and assessing S. haematobium transmission and intervention impact.

Three different approaches were applied to forecast the SARS epidemic in China. The existing time series was processed by AR(1), ARIMA(0,1,0), and ARMA(1,1). The results could be used to support the disease reports [9] and to monitor the dynamics of SARS in China based on daily data.

A Bayesian dynamic model could also be used to monitor influenza surveillance as one factor of the SARS epidemic [10]. This model was developed to link pediatric and adult syndromic data to the traditional measures of influenza morbidity and mortality. The findings showed the importance of modeling influenza surveillance data, and recommend a dynamic Bayesian network.

Monthly data of Cutaneous Leishmaniasis (CL) incidence in Costa Rica from 1991 to 2001 were analyzed by using seasonal autoregressive models. This work studied the relationship between the interannual cycles of the disease and climate variables using frequency and time-frequency techniques of time series analysis [11]. This model supported the dynamic link between the disease and climate.

An additive decomposition method was used to predict Salmonellosis incidence in the US [12]. Fourteen years of historical data from 1993 to 2006 were collected to compute the forecast values up to 12 months ahead.

Different forecasting approaches have thus been applied to zoonosis time series. However, the increasing number of zoonosis occurrences in humans creates the need for further study of zoonosis forecasting for different zoonotic diseases [13]. Due to that issue, selection of a well fitted model is necessary to obtain the optimal result.

This paper analyses the empirical results of evaluating and predicting the number of zoonosis incidences by using the Autoregressive Integrated Moving Average (ARIMA) approach. This model is selected because of its capability to correct for the local trend in the data, where the pattern in the previous period can be used to forecast the future. The model also supports modeling one perspective as a function of time (in this case, the number of human cases) [14]. Due to the seasonal trend of the time series used, the Seasonal ARIMA (SARIMA) is selected for the model development.

The remainder of the paper is structured as follows. Section II presents the preparation of the time series. Section III describes the basic theory of the ARIMA and SARIMA models. Section IV introduces the Bayesian Information Criterion (BIC) and Akaike Information Criterion (AIC). Section V reports the model development. Finally, Section VI presents conclusions and directions for future work.

II. DATASET FOR MODEL DEVELOPMENT
This section describes the dataset (time series) that was used for model development. The Salmonellosis disease was selected because its incidences can be found in any country. A study collected time series data of human Salmonellosis occurrences in the United States for the 168 month period from January 1993 to December 2006. The data were obtained from the summary of notifiable diseases in the United States from the Morbidity and Mortality Weekly Report (MMWR) published by the Centers for Disease Control and Prevention (CDC). The seasonal variation of the original data is presented in Fig. 1. The trend in every month is then plotted using a seasonal stacked line in Fig. 2.

Figure 1. Monthly number of US Salmonellosis incidence (1993-2006)
Figure 2. Seasonal stacked line of US Salmonellosis (1993-2006)
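As an illustration of how such a monthly series can be prepared for SARIMA modeling, the sketch below builds a monthly index for January 1993 to December 2006 and applies the s = 12 seasonal difference used later in the paper. The file name and column layout are assumptions, not part of the original study.

import pandas as pd

# Hypothetical CSV with one column of 168 monthly Salmonellosis counts,
# ordered from January 1993 to December 2006 (file layout is assumed).
y = pd.read_csv("salmonellosis_us_1993_2006.csv")["cases"]
y.index = pd.period_range("1993-01", periods=len(y), freq="M")

# Seasonal difference with period s = 12: z_t = y_t - y_{t-12}
z = y.diff(12).dropna()

print(y.describe())   # overall level of the original series
print(z.head())       # the seasonally differenced series used for modeling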

Fig. 2 shows the seasonal stacked line for human Salmonellosis incidence in the US from 1993 until 2006. The plot shows a peak season of incidence in August, while the minimum number of incidences occurs in January. Since the time series plot of the historical data exhibits seasonal variations which present a similar trend every year, SARIMA was chosen as the appropriate approach to develop a prediction model.

III. ARIMA AND SEASONAL ARIMA MODEL
This section introduces the basic theory of the Autoregressive Integrated Moving Average (ARIMA) model. The general class of ARIMA(p,d,q) processes is shown in (1) as

y_t = δ + φ_1 y_{t-1} + φ_2 y_{t-2} + ... + φ_p y_{t-p} + a_t − θ_1 a_{t-1} − θ_2 a_{t-2} − ... − θ_q a_{t-q}    (1)

where d is the level of differencing, p is the autoregressive order, and q is the moving average order [15]. The constant is denoted by δ, while φ is an autoregressive operator and θ is a moving average operator.

Seasonal ARIMA (SARIMA) is used when the time series exhibits a seasonal variation. A seasonal autoregressive notation (P) and a seasonal moving average notation (Q) form the multiplicative process of SARIMA as (p,d,q)(P,D,Q)s. The subscripted letter 's' shows the length of the seasonal period. For example, in daily data s = 7, in quarterly data s = 4, and in monthly data s = 12.

In order to formalize the model, the backshift operator (B) is used. A time series observation shifted backward in time by k periods is symbolized by B^k, such that B^k y_t = y_{t-k}.

The backshift operator is then used to present a general stationarity transformation, where the time series is stationary if its statistical properties (mean and variance) are constant through time. The general stationarity transformation is presented below:

z_t = ∇_s^D ∇^d y_t = (1 − B^s)^D (1 − B)^d y_t    (2)

where z is the differenced time series, d is the degree of nonseasonal differencing used and D is the degree of seasonal differencing used.

Then, the general SARIMA (p,P,q,Q) model is

φ_p(B) φ_P(B^s) z_t = δ + θ_q(B) θ_Q(B^s) a_t    (3)

where:
• φ_p(B) = (1 − φ_1 B − φ_2 B^2 − ... − φ_p B^p) is the nonseasonal autoregressive operator of order p
• φ_P(B^L) = (1 − φ_{1,L} B^L − φ_{2,L} B^{2L} − ... − φ_{P,L} B^{PL}) is the seasonal autoregressive operator of order P
• θ_q(B) = (1 − θ_1 B − θ_2 B^2 − ... − θ_q B^q) is the nonseasonal moving average operator of order q
• θ_Q(B^L) = (1 − θ_{1,L} B^L − θ_{2,L} B^{2L} − ... − θ_{Q,L} B^{QL}) is the seasonal moving average operator of order Q
• δ = μ φ_p(B) φ_P(B^L) is a constant term, where μ is the mean of the stationary time series
• φ, θ, δ are unknown parameters that can be calculated from the sample data
• a_t, a_{t-1}, ... are random shocks that are assumed to be independent of each other.

George Box and Gwilym Jenkins studied the simplified steps for obtaining comprehensive information for understanding and using the univariate ARIMA model [15],[16]. The Box-Jenkins (BJ) methodology consists of four iterative steps:
1) Step 1: Identification. This step focuses on selection of the order of regular differencing (d), seasonal differencing (D), the non-seasonal autoregressive order (p), the seasonal autoregressive order (P), the non-seasonal moving average order (q) and the seasonal moving average order (Q). The orders can be identified by observing the sample autocorrelations (SAC) and sample partial autocorrelations (SPAC).
2) Step 2: Estimation. The historical data is used to estimate the parameters of the model tentatively identified in Step 1.
3) Step 3: Diagnostic checking. Various diagnostic tests are used to check the adequacy of the tentative model.
4) Step 4: Forecasting. The final model from Step 3 is then used to compute the forecast values.

This approach is widely used for building SARIMA models because of its capability to capture the appropriate trend by examining the historical pattern. The BJ methodology has several advantages, including extracting a great deal of information from the time series using a minimum number of parameters and the capability of handling stationary and non-stationary time series in their non-seasonal and seasonal elements [17],[18].

IV. BAYESIAN INFORMATION CRITERION (BIC) AND AKAIKE INFORMATION CRITERION (AIC)
Selection of the ARIMA model was based on the Bayesian Information Criterion (BIC) and Akaike Information Criterion (AIC) values. These criteria use the Maximum Likelihood principle to choose the highest possible dimension. The determinant of the residual covariance is computed as:

|Ω̂| = det( (1/(T − p)) Σ_t ε̂_t ε̂_t' )    (4)

The log likelihood value is computed assuming a multivariate normal (Gaussian) distribution as:

l = −(T/2) { k(1 + log 2π) + log |Ω̂| }    (5)

Then the AIC and BIC are formulated as [19]:

AIC = −2(l/T) + 2(n/T)    (6)

BIC = −2(l/T) + n log(T)/T    (7)

where l is the value of the log likelihood function with the k parameters estimated using T observations, and n = k(d + pk). The various information criteria are all based on −2 times the average log likelihood function, adjusted by a penalty function.

V. MODEL DEVELOPMENT
The following section discusses the results of the BJ iterative steps used to forecast the available dataset.

A. Identification
Starting with the BJ methodology introduced in Section III, the first step in the model development is to identify the dataset. In this step, the sample autocorrelations (SAC) and sample partial autocorrelations (SPAC) of the historical data were plotted to observe the pattern.

Three periodical datasets were selected to illustrate the plots. The result is shown in Fig. 3. Based on Fig. 3, it could be observed that the correlogram of the time series is likely to have seasonal cycles, especially in the SAC, which implies level non-stationarity. Therefore, regular differencing and seasonal differencing were applied to the original time series, as presented in Fig. 4 and Fig. 5.

An Augmented Dickey-Fuller (ADF) test was performed to determine whether a data differencing is needed [19]. The hypotheses of the Augmented Dickey-Fuller t-test are:
• H0 : θ = 0, i.e. the data needs to be differenced to make it stationary, versus the alternative hypothesis
• H1 : θ < 0, i.e. the data is stationary and does not need to be differenced.

The result was compared with the 1%, 5%, and 10% critical values to indicate non-rejection of the null hypothesis. The ADF test produced a t-statistic value of -1.779 with a one-sided p-value of 0.389. The critical values reported at 1%, 5%, and 10% were -3.476, -2.882 and -2.578. This showed that the t value was greater than the critical values, which provides evidence not to reject the null hypothesis of a unit root; therefore the time series needs to be differenced.

The regular differencing and the seasonal differencing were then applied to the original time series, and the ADF test was also applied to both of them. The results showed test statistics of -14.171 for the regular differencing and -12.514 for the seasonal differencing. The one-sided p-value for both differencings was 0.000. A probability value of 0.000 provides evidence to reject the null hypothesis, indicating the stationarity of the differenced time series.

Figure 3. SAC and SPAC correlogram of the original data
Figure 4. SAC and SPAC correlogram of the regular differencing
Figure 5. SAC and SPAC correlogram of the seasonal differencing
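The ADF-based stationarity check described above can be reproduced in outline with statsmodels. This is only a sketch: it assumes the monthly series y from the earlier data-preparation sketch is available, and it is not the EViews output reported in the paper.

from statsmodels.tsa.stattools import adfuller

def adf_report(series, label):
    # adfuller returns (test statistic, p-value, lags used, nobs, critical values, icbest)
    stat, pvalue, _, _, crit, _ = adfuller(series.dropna(), autolag="AIC")
    print(f"{label}: t = {stat:.3f}, p = {pvalue:.3f}, critical values = {crit}")

adf_report(y, "original series")              # expect non-rejection (unit root present)
adf_report(y.diff(), "regular difference")    # expect rejection (stationary)
adf_report(y.diff(12), "seasonal difference") # expect rejection (stationary)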

Selecting whether to use the regular or the seasonal differencing was based on the correlograms. In order to develop an ARIMA model, the time series should be stationary. From the correlograms shown in Figure 3 and Figure 4 it was observed that more spikes were found in the regular differencing than in the seasonal differencing. Therefore, the seasonal differencing was chosen for the model development.

B. Parameter Estimation
Different ARIMA models were applied to find the best fitting model. The most appropriate model was selected by using the BIC and AIC values; the best model is determined by the minimum BIC and AIC. Table I presents the results of estimating the various ARIMA processes for the seasonal differencing of Salmonellosis human incidence using the EViews 5.1 econometric software package.

TABLE I. ESTIMATION OF SELECTED SARIMA MODELS

No  Model Variables                                                  BIC     AIC     Adj. R2
1   C, AR(9), SAR(12), SAR(22), SAR(24), MA(9), SMA(12), SMA(24)     15.614  15.431  0.345
2   AR(9), SAR(12), SAR(22), SAR(24), MA(9), SMA(12), SMA(24)        15.587  15.427  0.342
3   AR(9), SAR(12), SAR(24), MA(9), SMA(12), SMA(22), SMA(24)        15.583  15.423  0.345
4   AR(3), AR(9), SAR(12), MA(3), SMA(24)                            15.394  15.286  0.449
5   AR(3), AR(9), SAR(12), MA(14), SMA(24)                           15.351  15.243  0.472
6   AR(3), AR(9), SAR(12), MA(24)                                    15.368  15.282  0.447
7   AR(9), SAR(12), MA(3), SMA(24)                                   15.359  15.273  0.452
8   AR(9), SAR(12), MA(14), SMA(24)                                  15.331  15.245  0.468
9   AR(9), SAR(12), MA(3), MA(14), SMA(24)                           15.352  15.245  0.471

The AIC and BIC are commonly used in model selection, whereby the smaller value is preferred. From Table I, model 8 has a relatively small value of both BIC and AIC. It also achieves a large adjusted R2.

The model AR(9), SAR(12), MA(14), SMA(24) can also be written as SARIMA(9,0,14)(12,1,24)12.

To produce the model, the separate non-seasonal and seasonal models were computed first, followed by combining these models to describe the final model.

• Step 1: Model for the nonseasonal level
AR(9):  z_t = δ + φ_9 z_{t-9} + a_t    (8)
MA(14): z_t = δ + a_t − θ_14 a_{t-14}    (9)

• Step 2: Model for the seasonal level
AR(12): z_t = δ + φ_{1,12} z_{t-12} + a_t    (10)
MA(24): z_t = δ + a_t − θ_{2,12} a_{t-24}    (11)

Hence, the δ value was 0.44, less than |2| and statistically not different from zero, so δ can be excluded from the model. Where an autoregressive and a moving average model are present at the nonseasonal or seasonal level, multiplicative terms are used as follows:

• φ_9 z_{t-9} and φ_{1,12} z_{t-12} were used to form the multiplicative term φ_9 φ_{1,12} z_{t-21}
• −θ_14 a_{t-14} and −θ_{2,12} a_{t-24} were used to form the multiplicative term θ_14 θ_{2,12} a_{t-38}

The model was derived using the multiplicative form as follows:

z_t = φ_9 z_{t-9} − θ_14 a_{t-14} + φ_{1,12} z_{t-12} − θ_{2,12} a_{t-24} + φ_9 φ_{1,12} z_{t-21} + θ_14 θ_{2,12} a_{t-38} + a_t    (13)

z_t − φ_9 z_{t-9} − φ_{1,12} z_{t-12} − φ_9 φ_{1,12} z_{t-21} = θ_14 a_{t-14} + θ_{2,12} a_{t-24} − θ_14 θ_{2,12} a_{t-38} + a_t

The backshift operator (B) was applied to (13), yielding:

z_t − φ_9 B^9 z_t − φ_{1,12} B^12 z_t − φ_9 φ_{1,12} B^21 z_t = θ_14 B^14 a_t + θ_{2,12} B^24 a_t − θ_14 θ_{2,12} B^38 a_t + a_t    (14)

(1 − φ_9 B^9 − φ_{1,12} B^12 − φ_9 φ_{1,12} B^21) z_t = (1 + θ_14 B^14 + θ_{2,12} B^24 − θ_14 θ_{2,12} B^38) a_t

From the computation, the estimated parameters were AR(9) = 0.154, SAR(12) = -0.513, MA(14) = 0.255 and SMA(24) = -0.860. The estimated parameters were included in (14) to form the final model, expressed as follows:

(1 − 0.154 B^9 + 0.513 B^12 + 0.078 B^21) z_t = (1 + 0.255 B^14 − 0.860 B^24 − 0.219 B^38) a_t    (15)

Since the seasonal differencing was chosen, (2) is applied with d = 0, D = 1 and s = 12 to define z_t as:

z_t = ∇_s^D ∇^d y_t = (1 − B^s)^1 (1 − B)^0 y_t = (1 − B^s) y_t = y_t − B^s y_t = y_t − y_{t-s} = y_t − y_{t-12}    (16)

The SARIMA final model was used to compute the forecast values for three years ahead.

C. Diagnostic Checking
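For readers who want to reproduce the flavour of this estimation, checking and forecasting stage outside EViews, the sketch below fits candidate SARIMA specifications with statsmodels, compares AIC/BIC, inspects the residuals with a Ljung-Box test and produces a 12-month forecast. It reuses the series y from the earlier sketches; the lag-specific EViews specification above (AR(9), SAR(12), MA(14), SMA(24)) is only roughly approximated here, so this illustrates the workflow rather than the paper's exact model.

from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

# Candidate specifications to compare by AIC/BIC (illustrative, not the paper's full Table I).
candidates = [
    ((1, 0, 1), (1, 1, 1, 12)),
    (([9], 0, [14]), ([1], 1, [2], 12)),   # rough analogue of AR(9), MA(14), SAR(12), SMA(24)
]

results = []
for order, seasonal_order in candidates:
    model = SARIMAX(y, order=order, seasonal_order=seasonal_order,
                    enforce_stationarity=False, enforce_invertibility=False)
    res = model.fit(disp=False)
    results.append((res.aic, res.bic, res))
    print(order, seasonal_order, "AIC =", round(res.aic, 3), "BIC =", round(res.bic, 3))

# Keep the specification with the smallest AIC, then check residual whiteness (Ljung-Box).
best = min(results, key=lambda r: r[0])[-1]
print(acorr_ljungbox(best.resid, lags=[12, 24], return_df=True))

# 12-month-ahead forecast from the selected model.
print(best.forecast(steps=12))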